Patents by Inventor Noshir Wadia

Noshir Wadia has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20070240162
    Abstract: Automated or autonomic techniques for managing deployment of one or more resources in a computing environment based on varying workload levels. The automated techniques may comprise predicting a future workload level based on data associated with the computing environment. An estimation is then performed to determine whether the current resource deployment is insufficient, sufficient, or overly sufficient to satisfy the future workload level. One or more actions are then caused to be taken when the current resource deployment is estimated to be insufficient or overly sufficient to satisfy the future workload level. Actions may comprise resource provisioning, resource tuning and/or admission control.
    Type: Application
    Filed: June 15, 2007
    Publication date: October 11, 2007
    Applicant: International Business Machines Corporation
    Inventors: David Coleman, Steven Froehlich, Joseph Hellerstein, Lawrence Hsiung, Edwin Lassettre, Todd Mummert, Mukund Raghavachari, Lance Russell, Maheswaran Surendra, Noshir Wadia, Peng Ye
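
The predict-estimate-act loop described in the abstract above can be sketched as follows. This is an illustrative sketch only, not the patented method: the moving-average forecast, the 20% headroom, and all names are assumptions.

```python
from statistics import mean

def predict_workload(recent_rates):
    # Naive forecast (assumption): the recent average rate continues.
    return mean(recent_rates)

def plan_action(predicted_load, capacity, headroom=0.2):
    # Estimate whether the current deployment can satisfy the future workload.
    if predicted_load > capacity:
        return "provision"       # deployment insufficient for future workload
    if predicted_load < capacity * (1 - headroom):
        return "deprovision"     # deployment overly sufficient; release resources
    return "none"                # deployment sufficient; take no action

# Capacity of 100 req/s against a rising workload trace
print(plan_action(predict_workload([90, 110, 130]), capacity=100))  # provision
```

The "provision"/"deprovision" outcomes stand in for the abstract's resource provisioning, tuning, and admission-control actions.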
  • Publication number: 20070038493
    Abstract: Techniques are provided for automating allocation of resources based on business decisions. An impact of a business decision is quantified in terms of information technology (IT) metrics. The resources that may be needed to address the impact are estimated. The estimated resources are provisioned.
    Type: Application
    Filed: August 12, 2005
    Publication date: February 15, 2007
    Inventors: Jayashree Subrahmonia, Rahul Jain, Noshir Wadia, Peng Ye, Rekha Garapati, Cynthia Craine
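
A minimal sketch of the abstract's business-to-IT translation, assuming the business impact is quantified as extra transactions per second and that per-server capacity is known; both metrics are illustrative, not taken from the patent.

```python
import math

def estimate_servers(extra_tps, tps_per_server):
    # Round up: a fractional server still requires provisioning a whole one.
    return math.ceil(extra_tps / tps_per_server)

# A promotion expected to add 450 transactions/sec, each server handling 100
print(estimate_servers(450, 100))  # 5
```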
  • Publication number: 20060149611
    Abstract: Techniques are provided for allocating resources. Performance metrics for a transaction are received. It is determined whether one or more service level objectives are being violated based on the received performance metrics. In response to determining that the one or more service level objectives are being violated, additional resources are allocated to the transaction. In response to allocating the additional resources, a resource allocation event is published.
    Type: Application
    Filed: December 30, 2004
    Publication date: July 6, 2006
    Inventors: Catherine Diep, Lawrence Hsiung, Luis Ostdiek, Jayashree Subrahmonia, Noshir Wadia, Peng Ye
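
The violation-check, allocate, and publish steps in the abstract above might look like this sketch; the metric name, resource pool, and event shape are all assumptions for illustration.

```python
def check_and_allocate(metrics, slo_ms, pool, publish):
    # Allocate from the pool and publish an event only on an SLO violation.
    if metrics["response_time_ms"] > slo_ms and pool:
        resource = pool.pop()
        publish({"event": "resource_allocated", "resource": resource})
        return resource
    return None

events = []
pool = ["server-2", "server-1"]
check_and_allocate({"response_time_ms": 900}, slo_ms=500, pool=pool, publish=events.append)
print(events)  # one resource_allocated event for server-1
```

Publishing the allocation as an event lets other components react without being coupled to the allocator.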
  • Publication number: 20060136448
    Abstract: An apparatus, system, and method are disclosed for provisioning a database resource within a grid database system. The federation apparatus includes an analysis module and a provision module. The analysis module analyzes a data query stream from an application to a database instance and determines whether the data query stream exhibits a predetermined performance attribute. The provision module provisions a database resource in response to a determination that the data query stream exhibits the predetermined performance attribute. The provisioned database resource may be a database instance or a cache. The provisioning of the new database resource is advantageously substantially transparent to a client on the database system.
    Type: Application
    Filed: December 20, 2004
    Publication date: June 22, 2006
    Inventors: Enzo Cialini, Laura Haas, Balakrishna Iyer, Allen Luniewski, Jayashree Subrahmonia, Noshir Wadia, Hansjorg Zeller
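
The analysis and provision modules could be sketched as below. The read-ratio and query-rate attributes are assumed examples of a "predetermined performance attribute"; the patent does not specify which attributes are used.

```python
def provision_for_stream(queries, read_ratio_threshold=0.9, rate_threshold=1000):
    # queries: list of "read"/"write" markers observed in the stream.
    read_ratio = queries.count("read") / len(queries)
    if read_ratio >= read_ratio_threshold:
        return "cache"               # read-heavy stream: a cache suffices
    if len(queries) >= rate_threshold:
        return "database_instance"   # heavy mixed load: add an instance
    return None                      # attribute not exhibited; no provisioning

print(provision_for_stream(["read"] * 9 + ["write"]))  # cache
```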
  • Publication number: 20060031242
    Abstract: Provided are a method, system, and program for distributing application transactions among work servers. Application transaction rates are determined for a plurality of applications supplying transactions to process. For each application, available partitions in at least one server are assigned to process the application transactions based on partition transaction rates of partitions in the servers. For each application, a determination is made of weights for each server including partitions assigned to the application based on a number of partitions in the server assigned to the application. The determined weights for each application are used to distribute application transactions among the servers including partitions assigned to the application.
    Type: Application
    Filed: August 3, 2004
    Publication date: February 9, 2006
    Inventors: Harold Hall, Lawrence Hsiung, Luis Ostdiek, Noshir Wadia, Peng Ye
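
The weight computation in the abstract above reduces to each server's share of the partitions assigned to an application; a minimal sketch, with server names assumed:

```python
def server_weights(partitions_per_server):
    # Weight each server by its share of the partitions assigned to one
    # application, so transactions are distributed proportionally.
    total = sum(partitions_per_server.values())
    return {server: n / total for server, n in partitions_per_server.items()}

print(server_weights({"s1": 3, "s2": 1}))  # {'s1': 0.75, 's2': 0.25}
```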
  • Publication number: 20050188075
    Abstract: A server allocation controller provides an improved distributed data processing system for facilitating dynamic allocation of computing resources. The server allocation controller supports transaction and parallel services across multiple data centers, enabling dynamic allocation of computing resources based on the current workload and service level agreements. The server allocation controller provides a method for dynamic re-partitioning of the workload to handle workload surges. Computing resources are dynamically assigned among transaction and parallel application classes, based on the current and predicted workload. Based on a service level agreement, the server allocation controller monitors and predicts the load on the system. If the current or predicted load cannot be handled with the current system configuration, the server allocation controller determines additional resources needed to handle the current or predicted workload. The server cluster is reconfigured to meet the service level agreement.
    Type: Application
    Filed: January 22, 2004
    Publication date: August 25, 2005
    Applicant: International Business Machines Corporation
    Inventors: Daniel Dias, Edwin Lassettre, Avraham Leff, Marcos Novaes, James Rayfield, Noshir Wadia, Peng Ye
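
The controller's "determine additional resources" step can be sketched as a capacity gap calculation; the per-server throughput figure is an illustrative assumption, not part of the patent.

```python
import math

def servers_to_add(predicted_tps, current_servers, tps_per_server):
    # Additional servers needed so the predicted load fits cluster capacity.
    needed = math.ceil(predicted_tps / tps_per_server)
    return max(0, needed - current_servers)

# 6 servers at 100 tps each cannot carry a predicted 850 tps; add 3
print(servers_to_add(850, current_servers=6, tps_per_server=100))  # 3
```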
  • Publication number: 20050165925
    Abstract: An on-demand manager provides an improved distributed data processing system for facilitating dynamic allocation of computing resources among multiple domains based on a current workload and service level agreements. Based on a service level agreement, the on-demand manager monitors and predicts the load on the system. If the current or predicted load cannot be handled with the current system configuration, the on-demand manager determines additional resources needed to handle the workload. If the service level agreement violations cannot be handled by reconfiguring resources at a domain, the on-demand manager sends a resource request to other domains. These other domains analyze their own commitments and may accept the resource request, reject the request, or counter-propose with an offer of resources and a corresponding service level agreement.
    Type: Application
    Filed: January 22, 2004
    Publication date: July 28, 2005
    Applicant: International Business Machines Corporation
    Inventors: Asit Dan, Daniel Dias, Richard King, Avraham Leff, James Rayfield, Noshir Wadia
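
The accept/reject/counter-propose protocol between domains might be sketched as follows, with a remote domain's decision driven by its uncommitted capacity (how commitments are computed is not specified in the abstract and is assumed here).

```python
def respond_to_request(requested, uncommitted):
    # A domain replies based on capacity not already committed to its own SLAs.
    if uncommitted >= requested:
        return ("accept", requested)
    if uncommitted > 0:
        return ("counter", uncommitted)  # counter-propose what can be spared
    return ("reject", 0)

print(respond_to_request(4, uncommitted=2))  # ('counter', 2)
```

A real counter-proposal would also carry a corresponding service level agreement, as the abstract notes.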
  • Publication number: 20050122987
    Abstract: An apparatus and method are provided for modeling queuing systems with highly variable traffic arrival rates. The apparatus and method include a means to associate a value with a pattern of highly variable arrival rates that is simple and intuitive, and a means to accurately model queuing delays in systems that are characterized by bursts of arrival activity. The queuing delay is determined by a sum of queuing delays after first applying a weighting factor to the queuing delay based upon a random arrival rate, and a different weighting factor to the queuing delay based upon a bursty variable arrival rate. The weighting factors are variants of the server utilization. The model facilitates specification of server characteristics and configurations to meet response time metrics.
    Type: Application
    Filed: December 9, 2003
    Publication date: June 9, 2005
    Inventors: Michael Ignatowski, Noshir Wadia
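
The weighted-sum delay model can be illustrated with a toy example. The abstract says only that the weighting factors are "variants of the server utilization"; the specific weights (rho and 1 - rho), the M/M/1 delay formula, and the burst inflation factor below are all assumptions for illustration.

```python
def blended_delay(arrival_rate, service_rate, burst_factor=3.0):
    # rho, the server utilization, drives both weighting factors.
    rho = arrival_rate / service_rate
    random_delay = 1.0 / (service_rate - arrival_rate)  # M/M/1 residence time
    bursty_delay = burst_factor * random_delay          # inflated for bursts
    return (1 - rho) * random_delay + rho * bursty_delay

# At 50% utilization the model averages the two delays equally
print(blended_delay(50, 100))
```

As utilization rises, the bursty term dominates, capturing the intuition that bursts hurt most on a busy server.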
  • Publication number: 20050125213
    Abstract: An apparatus, system, and method are provided for modeling and analyzing a plurality of computing workloads. The apparatus, system, and method include a data collection module for gathering performance data associated with operation of the computer system. A modeling module executes a plurality of models in parallel, in series, or according to a hierarchical relationship. A data analysis module presents analysis data compiled from the modeling module to a user, typically in the form of a graph. Finally, a framework manages the data collection module, the modeling module, and the data analysis module according to a predefined data and model flow.
    Type: Application
    Filed: December 4, 2003
    Publication date: June 9, 2005
    Inventors: Yin Chen, Rhonda Childress, Catherine Crawford, Noshir Wadia, Peng Ye
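
The collection, modeling, and analysis flow managed by the framework can be sketched as below; only the series case is shown, and the stand-in data and model names are assumptions.

```python
def run_framework(collect, models, analyze):
    # Framework: run data collection, then each model in series, then analysis.
    data = collect()
    results = {name: model(data) for name, model in models.items()}
    return analyze(results)

report = run_framework(
    collect=lambda: [10, 20, 30],                          # stand-in performance data
    models={"peak": max, "mean": lambda d: sum(d) / len(d)},
    analyze=lambda results: sorted(results.items()),
)
print(report)  # [('mean', 20.0), ('peak', 30)]
```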
  • Publication number: 20050086331
    Abstract: The present invention discloses a method, system and article of manufacture for autonomic identification of an optimum hardware configuration for a Web infrastructure. A plurality of performance objectives and a plurality of best practice rules for the Web infrastructure are established first. Then, a search space and a current configuration performance index within the search space are established. Next, a database of available hardware models is searched for a best-fit configuration based on the established plurality of best practice rules and the established current configuration performance index. The performance data of the found best-fit configuration is calculated using a performance simulator and then compared to the established plurality of performance objectives. If the calculated performance data meet the established plurality of performance objectives, then the best-fit configuration is designated as the optimum hardware configuration.
    Type: Application
    Filed: October 15, 2003
    Publication date: April 21, 2005
    Inventors: Noshir Wadia, Peng Ye
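
The search loop in the abstract above, sketched under assumptions: the toy simulator, the single response-time objective, and the cost-ordered catalog stand in for the performance simulator, the plurality of performance objectives, and the best-practice rules of the patent.

```python
def find_optimum(catalog, simulate, objectives):
    # Return the first configuration whose simulated performance meets every
    # objective; the catalog is assumed ordered so the first fit is preferred.
    for config in catalog:
        perf = simulate(config)
        if all(perf[metric] <= limit for metric, limit in objectives.items()):
            return config
    return None

catalog = [{"name": "small", "cpus": 2}, {"name": "large", "cpus": 8}]
simulate = lambda c: {"response_time_ms": 400 / c["cpus"]}  # toy simulator
print(find_optimum(catalog, simulate, {"response_time_ms": 100}))
# {'name': 'large', 'cpus': 8}
```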