Patents by Inventor HariGovind V. Ramasamy

HariGovind V. Ramasamy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200233688
    Abstract: A method of organizing computer resources includes receiving a specification defining a plurality of quiescence groups of independent component instances for each of at least two services, and performing a first load balancing of the quiescence groups across a plurality of physical servers to define a plurality of supergroups while assigning each of the physical servers across the supergroups.
    Type: Application
    Filed: January 23, 2019
    Publication date: July 23, 2020
    Inventors: RICHARD E. HARPER, HARIGOVIND V. RAMASAMY, VALENTINA SALAPURA, SANDHYA KAPOOR, LONG WANG
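    Illustrative sketch: a minimal Python model of the balancing step described in this entry's abstract, assuming a greedy first-fit-decreasing heuristic; the names QuiescenceGroup and form_supergroups are invented for illustration and do not come from the patent.

      from dataclasses import dataclass

      @dataclass
      class QuiescenceGroup:
          service: str
          instances: list            # independent component instances

      def form_supergroups(groups, num_supergroups):
          # Assumed heuristic (not the patented method): place the largest
          # groups first so instance counts even out across supergroups.
          supergroups = [[] for _ in range(num_supergroups)]
          loads = [0] * num_supergroups
          for g in sorted(groups, key=lambda g: len(g.instances), reverse=True):
              target = loads.index(min(loads))
              supergroups[target].append(g)
              loads[target] += len(g.instances)
          return supergroups

      groups = [QuiescenceGroup("svc-a", ["a1", "a2"]),
                QuiescenceGroup("svc-a", ["a3"]),
                QuiescenceGroup("svc-b", ["b1", "b2", "b3"]),
                QuiescenceGroup("svc-b", ["b4"])]
      for i, sg in enumerate(form_supergroups(groups, 2)):
          print("supergroup", i, [(g.service, len(g.instances)) for g in sg])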
  • Publication number: 20200226158
    Abstract: A base query having a plurality of base query terms is obtained. A plurality of problem log files are accessed. Words, contained in a corpus vocabulary, are extracted from the plurality of problem log files. Based on the words extracted from the plurality of problem log files, a first expanded query is generated from the base query. The corpus is queried, via a query engine and a corpus index, with a second expanded query related to the first expanded query.
    Type: Application
    Filed: March 23, 2020
    Publication date: July 16, 2020
    Inventors: Russell W. Bergs, Yu Deng, Kaoutar El Maghraoui, Matthew R. Koozer, HariGovind V. Ramasamy, Soumitra Sarkar, Rongda Zhu
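    Illustrative sketch: a rough Python model of only the expansion step in this entry's abstract, assuming a simple frequency heuristic that appends in-vocabulary words extracted from problem logs to the base query; function names and the top_k cutoff are assumptions, and the query-engine/corpus-index step is not shown.

      import re
      from collections import Counter

      def extract_words(log_text, vocabulary):
          # Keep only tokens that appear in the corpus vocabulary.
          tokens = re.findall(r"[a-z0-9_]+", log_text.lower())
          return [t for t in tokens if t in vocabulary]

      def expand_query(base_query, log_texts, vocabulary, top_k=3):
          # Append the most frequent in-vocabulary log words to the base query.
          counts = Counter()
          for text in log_texts:
              counts.update(extract_words(text, vocabulary))
          extra = [w for w, _ in counts.most_common() if w not in base_query][:top_k]
          return base_query + extra

      vocab = {"disk", "timeout", "latency", "kernel", "panic"}
      logs = ["ERROR: disk timeout on /dev/sda", "WARN: high latency, disk retry"]
      print(expand_query(["storage", "failure"], logs, vocab))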
  • Patent number: 10701141
    Abstract: Server resources in a data center are disaggregated into shared server resource pools. Servers are constructed dynamically, on-demand and based on a tenant's workload requirements, by allocating from these resource pools. The system also includes a license manager that operates to manage a pool of licenses that are available to be associated with resources drawn from the server resource pools. Upon provisioning of a server entity composed of resources drawn from the server resource pools, the license manager determines a license configuration suitable for the server entity. In response to receipt of information indicating a change in a composition of the server entity (e.g., as a workload is processed), the license manager determines whether an adjustment to the license configuration is required. If so, an adjusted license configuration for the server entity is determined and tracked to the tenant. The data center thus allocates appropriate licenses to server entities as required.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: June 30, 2020
    Assignee: International Business Machines Corporation
    Inventors: Valentina Salapura, John Alan Bivens, Min Li, Ruchi Mahindru, Harigovind V. Ramasamy, Yaoping Ruan, Eugen Schenfeld
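    Illustrative sketch: a hedged Python model of the license-reconciliation idea in this entry's abstract: when a dynamically composed server's resource mix changes, the manager recomputes its license need against the shared license pool. The per-resource license rule and all names are assumptions for illustration.

      LICENSES_PER_RESOURCE = {"cpu": 1, "gpu": 2}   # assumed licensing rule

      class LicenseManager:
          def __init__(self, pool_size):
              self.available = pool_size          # licenses not yet assigned
              self.assigned = {}                  # server_id -> license count

          def required(self, composition):
              return sum(LICENSES_PER_RESOURCE.get(kind, 0) * count
                         for kind, count in composition.items())

          def reconcile(self, server_id, composition):
              # Adjust a server's license allocation after its composition changes.
              need = self.required(composition)
              have = self.assigned.get(server_id, 0)
              delta = need - have
              if delta > self.available:
                  raise RuntimeError("license pool exhausted")
              self.available -= delta
              self.assigned[server_id] = need
              return need

      mgr = LicenseManager(pool_size=10)
      print(mgr.reconcile("tenant1-srv1", {"cpu": 4, "gpu": 1}))  # 6 licenses
      print(mgr.reconcile("tenant1-srv1", {"cpu": 2, "gpu": 1}))  # scaled down to 4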
  • Publication number: 20200174843
    Abstract: Direct inter-processor communication is enabled with respect to data in a memory location without having to switch specific circuits through a switching element (e.g., an optical switch). Rather, in this approach a memory pool is augmented to include a dedicated portion that serves as a disaggregated memory common space for communicating processors. The approach obviates the requirement of switching physical memory modules through the optical switch to enable the processor-to-processor communication. Rather, processors (communicating with one another) have an overlapping ability to access the same memory module in the pool; thus, there is no longer a need to change physical optical switch circuits to facilitate the inter-processor communication. The disaggregated memory common space is shared among the processors, which can access the common space for reads and writes, although the particular locations in the memory common space used for reads and writes are different.
    Type: Application
    Filed: November 29, 2018
    Publication date: June 4, 2020
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Yaoping RUAN, John A. BIVENS, Min LI, Ruchi MAHINDRU, HariGovind V. RAMASAMY, Valentina SALAPURA, Eugen SCHENFELD
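    Illustrative sketch: a toy Python model, not the patented mechanism, of two processors exchanging data through one shared region by writing at their own offsets and reading the other's, with no optical-switch reconfiguration modeled; offsets and sizes are arbitrary.

      class CommonSpace:
          def __init__(self, size):
              self.buf = bytearray(size)   # stands in for the shared memory module

          def write(self, offset, data):
              self.buf[offset:offset + len(data)] = data

          def read(self, offset, length):
              return bytes(self.buf[offset:offset + length])

      space = CommonSpace(256)
      # Processor A writes at offset 0; processor B writes at offset 128.
      space.write(0, b"hello from A")
      space.write(128, b"hello from B")
      # Each side reads the other's slot; no switch circuit is changed.
      print(space.read(128, 12))   # what A reads
      print(space.read(0, 12))     # what B reads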
  • Publication number: 20200175183
    Abstract: A group of processors in a processor pool comprise a secure “enclave” in which user code is executable and user data is readable solely within the enclave. This is facilitated through the key management scheme described, which includes two sets of key-pairs, namely: a processor group key-pair, and a separate user key-pair (typically one per user, although a user may have multiple such key-pairs). The processor group key-pair is associated with all (or some defined subset of) the processors in the group. This key-pair is used to securely communicate a user private key among the processors. The user private key, however, is not transmitted to non-members of the group. Further, preferably the user private key is refreshed periodically or upon any membership change (in the group) to ensure that non-members or ex-members cannot decipher the encrypted user key.
    Type: Application
    Filed: November 29, 2018
    Publication date: June 4, 2020
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: HariGovind V. RAMASAMY, John A. BIVENS, Ruchi MAHINDRU, Valentina SALAPURA, Min LI, Yaoping RUAN, Eugen SCHENFELD
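    Illustrative sketch: a structural Python model of the two key-pair scheme in this entry's abstract, with real asymmetric cryptography replaced by opaque random tokens so only the distribution and refresh-on-membership-change logic is shown; class and method names are invented.

      import secrets

      class ProcessorGroup:
          def __init__(self, members):
              self.members = set(members)
              self.group_key = secrets.token_hex(16)   # stands in for the group key-pair
              self.user_keys = {}                      # user -> private key, held only in-group

          def distribute_user_key(self, user):
              # The user private key is shared only under the group key,
              # so only current members can recover it.
              self.user_keys[user] = secrets.token_hex(16)

          def change_membership(self, add=(), remove=()):
              self.members |= set(add)
              self.members -= set(remove)
              # Refresh on any membership change so ex-members cannot use old keys.
              self.group_key = secrets.token_hex(16)
              for user in self.user_keys:
                  self.user_keys[user] = secrets.token_hex(16)

      grp = ProcessorGroup({"cpu0", "cpu1"})
      grp.distribute_user_key("alice")
      old = grp.user_keys["alice"]
      grp.change_membership(remove={"cpu1"})
      assert grp.user_keys["alice"] != old   # refreshed after membership change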
  • Publication number: 20200174949
    Abstract: Server resources in a data center are disaggregated into shared server resource pools, which include a pool of secure processors. Advantageously, servers are constructed dynamically, on-demand and based on a tenant's workload requirements, by allocating from these resource pools. According to this disclosure, secure processor modules for new servers are allocated to provide security for data-in-use (and data-at-rest) in a dynamic fashion so that virtual and non-virtual capacity can be adjusted in the disaggregate compute system without any downtime, e.g., based on workload security requirements and data sensitivity characteristics. The approach herein optimizes overall utilization of the available secure processor resource pool in the disaggregated environment. The resulting disaggregate compute system, configured according to the approach, cryptographically protects workload data whenever it is outside the CPU chip.
    Type: Application
    Filed: November 29, 2018
    Publication date: June 4, 2020
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: HariGovind V. RAMASAMY, Eugen SCHENFELD, Valentina SALAPURA, John A. BIVENS, Yaoping RUAN, Min LI, Ashish KUNDU, Ruchi MAHINDRU, Richard H. BOIVIE
  • Publication number: 20200174899
    Abstract: A new approach to resiliency management is provided in a data center wherein servers are constructed dynamically, on-demand and based on workload requirements and a tenant's resiliency requirements, by allocating resources from shared resource pools. In this approach, a set of functionally-equivalent “interchangeable compute units” (ICUs) is composed of resources from resource pools that have been extended to include not only different resource types (CPU, memory, accelerators), but also resources of different specifications (specs) and flavors. As a workload is being processed, the health or status of the resources is monitored. Upon a performance issue or failure event, a resiliency manager can swap out a current ICU and replace it with a functionally-equivalent ICU. Preferably, individual ICUs are hosted on one of: resources of a same type each with different specifications, and resources of a same type and specification and different flavors.
    Type: Application
    Filed: November 29, 2018
    Publication date: June 4, 2020
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: HariGovind V. RAMASAMY, Eugen SCHENFELD, Valentina SALAPURA, John A. BIVENS, Min LI, Ruchi MAHINDRU, Yaoping RUAN
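    Illustrative sketch: a minimal Python model of the swap-on-failure behavior in this entry's abstract, where an unhealthy interchangeable compute unit is replaced by a functionally equivalent spare (matched on spec here); all names and the spec string are illustrative assumptions.

      class ICU:
          def __init__(self, name, spec):
              self.name, self.spec, self.healthy = name, spec, True

      class ResiliencyManager:
          def __init__(self, active, spares):
              self.active, self.spares = active, list(spares)

          def monitor_and_swap(self):
              if self.active.healthy:
                  return self.active
              # Find a functionally equivalent spare; a fuller model could also
              # match a different spec or "flavor" of the same resource type.
              for i, spare in enumerate(self.spares):
                  if spare.healthy and spare.spec == self.active.spec:
                      self.active = self.spares.pop(i)
                      break
              return self.active

      mgr = ResiliencyManager(ICU("icu-1", "8cpu-64gb"), [ICU("icu-2", "8cpu-64gb")])
      mgr.active.healthy = False           # simulated failure event
      print(mgr.monitor_and_swap().name)   # icu-2 takes over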
  • Publication number: 20200174847
    Abstract: MapReduce processing is carried out in a disaggregated compute environment comprising a set of resource pools that comprise a processor pool, and a memory pool. Upon receipt of a MapReduce job, a task scheduler allocates resources from the set of resource pools, the resources including one or more processors drawn from the processor pool, and one or more memory modules drawn from the memory pool. The task scheduler then schedules a set of tasks required by the MapReduce job. At least one particular task in the set is scheduled irrespective of a location of data required for the particular task. In association with a shuffle phase of the MapReduce job, and in connection with the particular task, at least one connection between a processor and at least one memory module is dynamically rewired based on the location of the data required for the particular task, thereby obviating network transfer of that data.
    Type: Application
    Filed: November 29, 2018
    Publication date: June 4, 2020
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Min LI, John A. BIVENS, Ruchi MAHINDRU, HariGovind V. RAMASAMY, Yaoping RUAN, Valentina SALAPURA, Eugen SCHENFELD
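    Illustrative sketch: a toy Python model of the scheduling idea in this entry's abstract: tasks are placed without regard to data location, and at shuffle time a processor-to-memory-module wiring table is recomputed so each processor connects to the module holding its partition. All identifiers are invented.

      def schedule_tasks(tasks, processors):
          # Location-agnostic round-robin placement.
          return {t: processors[i % len(processors)] for i, t in enumerate(tasks)}

      def rewire_for_shuffle(assignment, partition_location):
          # Returns processor -> memory module it should be "rewired" to,
          # avoiding a network transfer of the partition data.
          return {proc: partition_location[task] for task, proc in assignment.items()}

      tasks = ["reduce-0", "reduce-1"]
      partition_location = {"reduce-0": "mem-module-7", "reduce-1": "mem-module-2"}
      assignment = schedule_tasks(tasks, ["proc-a", "proc-b"])
      print(rewire_for_shuffle(assignment, partition_location))
      # {'proc-a': 'mem-module-7', 'proc-b': 'mem-module-2'}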
  • Publication number: 20200174838
    Abstract: Server resources in a data center are disaggregated into shared server resource pools, including an accelerator (e.g., FPGA) pool. Servers are constructed dynamically, on-demand and based on workload requirements, by allocating from these resource pools. According to this disclosure, accelerator utilization in the data center is managed proactively by assigning accelerators to workloads in a fine granularity and agile way, and de-provisioning them when no longer needed. In this manner, the approach is especially advantageous to automatically provision accelerators for data analytic workloads. The approach thus provides for a “micro-service” enabling data analytic workloads to automatically and transparently use FPGA resources without providing (e.g., to the data center customer) the underlying provisioning details. Preferably, the approach dynamically determines the number and the type of FPGAs to use, and then during runtime auto-scales the FPGAs based on workload.
    Type: Application
    Filed: November 29, 2018
    Publication date: June 4, 2020
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Min LI, John A. BIVENS, Ruchi MAHINDRU, HariGovind V. RAMASAMY, Yaoping RUAN, Valentina SALAPURA, Eugen SCHENFELD
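    Illustrative sketch: a hedged Python model of the provisioning and runtime auto-scaling decision described in this entry's abstract; the per-accelerator throughput constant and utilization thresholds are invented, not from the disclosure.

      JOBS_PER_FPGA = 100          # assumed throughput of one accelerator

      def initial_allocation(expected_jobs_per_sec):
          # Ceiling division: enough FPGAs to cover the expected rate.
          return max(1, -(-expected_jobs_per_sec // JOBS_PER_FPGA))

      def autoscale(current_fpgas, observed_jobs_per_sec, low=0.3, high=0.8):
          utilization = observed_jobs_per_sec / (current_fpgas * JOBS_PER_FPGA)
          if utilization > high:
              return current_fpgas + 1          # provision one more from the pool
          if utilization < low and current_fpgas > 1:
              return current_fpgas - 1          # de-provision an idle accelerator
          return current_fpgas

      n = initial_allocation(250)                  # 3 FPGAs to start
      print(n, autoscale(n, 290), autoscale(n, 40))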
  • Patent number: 10628467
    Abstract: A base query having a plurality of base query terms is obtained. A plurality of problem log files are accessed. Words, contained in a corpus vocabulary, are extracted from the plurality of problem log files. Based on the words extracted from the plurality of problem log files, a first expanded query is generated from the base query. The corpus is queried, via a query engine and a corpus index, with a second expanded query related to the first expanded query.
    Type: Grant
    Filed: February 10, 2018
    Date of Patent: April 21, 2020
    Assignee: International Business Machines Corporation
    Inventors: Russell W. Bergs, Yu Deng, Kaoutar El Maghraoui, Matthew R. Koozer, HariGovind V. Ramasamy, Soumitra Sarkar, Rongda Zhu
  • Patent number: 10601725
    Abstract: Various embodiments for agile component-level resource provisioning in a disaggregated cloud computing environment, by a processor device, are provided. Respective members of pools of hardware resources within the disaggregated cloud computing environment are allocated to each respective one of a plurality of tenants according to one of a plurality of service level agreement (SLA) classes. Each respective one of the plurality of SLA classes is characterized by a given response time for the allocation of the respective members of the pools of hardware resources corresponding to a requested workload by the tenant.
    Type: Grant
    Filed: May 16, 2016
    Date of Patent: March 24, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Min Li, John A. Bivens, Ruchi Mahindru, HariGovind V. Ramasamy, Yaoping Ruan, Valentina Salapura, Eugen Schenfeld
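    Illustrative sketch: a Python model of class-based allocation in the spirit of this entry's abstract, where each SLA class fixes a target response time for fulfilling a component-level request from the shared pools; the class names, millisecond targets, and pool contents are assumptions.

      import time

      SLA_TARGET_MS = {"platinum": 50, "gold": 200, "silver": 1000}   # assumed targets

      def allocate(pools, request, sla_class):
          # Draw requested component counts from shared pools, timing the allocation.
          start = time.monotonic()
          grant = {}
          for kind, count in request.items():
              if pools.get(kind, 0) < count:
                  raise RuntimeError(f"not enough {kind} in pool")
              pools[kind] -= count
              grant[kind] = count
          elapsed_ms = (time.monotonic() - start) * 1000
          met_sla = elapsed_ms <= SLA_TARGET_MS[sla_class]
          return grant, elapsed_ms, met_sla

      pools = {"cpu": 64, "memory_gb": 512, "gpu": 8}
      print(allocate(pools, {"cpu": 8, "memory_gb": 64}, "gold"))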
  • Publication number: 20200050445
    Abstract: Embodiments for performing rolling software upgrades in a disaggregated computing environment. A rolling upgrade manager is provided for upgrading one or more disaggregated servers. A designated memory area is used for storing an updated software component, and a disaggregated server is switched to the designated memory area from a currently assigned memory area when performing the software upgrade. A process state and program data are maintained in the currently assigned memory area while maintaining the updated software component in the designated memory area, such that the process state and program data are read from the currently assigned memory area and the updated software component is read from the designated memory area during currently executing operations of the disaggregated server.
    Type: Application
    Filed: October 22, 2019
    Publication date: February 13, 2020
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Valentina SALAPURA, John A. BIVENS, Min LI, Ruchi MAHINDRU, HariGovind V. RAMASAMY, Yaoping RUAN, Eugen SCHENFELD
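    Illustrative sketch: a toy Python model of the memory-area switch in this entry's abstract: the server's code pointer is redirected to a designated area holding the updated component, while process state and program data continue to be read from the currently assigned area. Structure names are illustrative.

      class DisaggregatedServer:
          def __init__(self, code_area, state_area):
              self.code_area = code_area      # where component code is read from
              self.state_area = state_area    # where process state/data live

          def switch_code_area(self, new_area):
              # Only the code pointer moves; state stays in the current area.
              self.code_area = new_area

      current = {"code": "component v1", "state": {"requests_served": 1042}}
      designated = {"code": "component v2"}          # updated software staged here

      srv = DisaggregatedServer(code_area=current, state_area=current)
      srv.switch_code_area(designated)               # upgrade without losing state
      print(srv.code_area["code"], srv.state_area["state"])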
  • Publication number: 20200034196
    Abstract: Identify individual machines of a multi-machine computing system. Construct a graph of dependencies among the machines. Obtain estimated total administration times and administration priorities for each of the machines. Identify availability of administration resources to assist in administration of one or more of the machines. Select a first set of machines for administration in response to the graph, administration priorities, estimated total administration times, and availability of the first set of administration resources, and administer the first set of machines in parallel using the first set of administration resources. Update the graph in response to administration of the first set of machines. Select a subsequent set of machines for administration in response to the updated graph, administration priorities, estimated total administration times, and availability of a subsequent set of administration resources.
    Type: Application
    Filed: September 21, 2019
    Publication date: January 30, 2020
    Inventors: Richard E. Harper, Ruchi Mahindru, HariGovind V. Ramasamy, Long Wang
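    Illustrative sketch: a minimal Python model of the graph-driven batching loop in this entry's abstract, using toy data: machines whose prerequisites are already administered are selected up to the available administration resources, ordered by priority and estimated time, and the remaining work is updated after each batch.

      deps = {"web": {"db"}, "db": set(), "cache": set()}   # machine -> prerequisites (toy data)
      priority = {"web": 1, "db": 3, "cache": 2}            # higher = administer sooner
      est_time = {"web": 10, "db": 30, "cache": 5}          # estimated administration minutes

      def next_batch(done, resources_available):
          # Ready machines are those whose dependencies are already administered.
          ready = [m for m in deps if m not in done and deps[m] <= done]
          ready.sort(key=lambda m: (-priority[m], est_time[m]))
          return ready[:resources_available]

      done = set()
      while len(done) < len(deps):
          batch = next_batch(done, resources_available=2)
          print("administer in parallel:", batch)
          done |= set(batch)          # update the graph after the batch completes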
  • Patent number: 10545560
    Abstract: For power management in a computing system, component utilization is dynamically managed within the computing system according to a calculated aggregate energy consumed by each one of a set of processors. Each of a plurality of energy factors is measured individually between each one of the set of processors to accumulate the calculated aggregate energy in real time.
    Type: Grant
    Filed: October 10, 2016
    Date of Patent: January 28, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ruchi Mahindru, John A. Bivens, Koushik K. Das, Min Li, HariGovind V. Ramasamy, Yaoping Ruan, Valentina Salapura, Eugen Schenfeld
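    Illustrative sketch: a rough Python model of accumulating a per-processor aggregate energy figure from several measured factors and using it to steer utilization; the factor values and energy budget are invented for illustration.

      ENERGY_BUDGET_J = 500.0   # assumed per-processor budget

      def aggregate_energy(samples):
          # samples: processor -> list of per-factor energy readings (joules).
          return {proc: sum(factors) for proc, factors in samples.items()}

      def manage_utilization(samples):
          totals = aggregate_energy(samples)
          # Shift new work away from processors that exceeded the energy budget.
          return {proc: ("throttle" if joules > ENERGY_BUDGET_J else "ok")
                  for proc, joules in totals.items()}

      samples = {"proc-0": [220.0, 180.0, 90.0],     # e.g. core, memory-link, accelerator-link
                 "proc-1": [310.0, 240.0, 120.0]}
      print(manage_utilization(samples))             # proc-1 gets throttled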
  • Patent number: 10534598
    Abstract: Embodiments for performing rolling software upgrades in a disaggregated computing environment. A rolling upgrade manager is provided for upgrading one or more disaggregated servers. A designated memory area is used for storing an updated software component, and a disaggregated server is switched to the designated memory area from a currently assigned memory area when performing the software upgrade.
    Type: Grant
    Filed: January 4, 2017
    Date of Patent: January 14, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Valentina Salapura, John A. Bivens, Min Li, Ruchi Mahindru, HariGovind V. Ramasamy, Yaoping Ruan, Eugen Schenfeld
  • Publication number: 20200004593
    Abstract: Respective memory devices are assigned to respective processor devices in a disaggregated computing system, the disaggregated computing system having at least a pool of the memory devices and a pool of the processor devices. An iterative learning algorithm is used to define data boundaries of a dataset for performing an analytic function on the dataset simultaneous to a primary compute task, unrelated to the analytic function, being performed on the dataset in the pool of memory devices using memory bandwidth not currently committed to the primary compute task, thereby efficiently employing the unused memory bandwidth to prevent underutilization of the pool of memory devices.
    Type: Application
    Filed: September 9, 2019
    Publication date: January 2, 2020
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: John A. BIVENS, Min LI, Ruchi MAHINDRU, HariGovind V. RAMASAMY, Yaoping RUAN, Valentina SALAPURA, Eugen SCHENFELD
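    Illustrative sketch: a rough Python model of the boundary-tuning loop suggested by this entry's abstract, under assumed thresholds: the background analytic's chunk of the dataset grows when spare memory bandwidth is plentiful and shrinks when the primary task needs the bandwidth. All numbers are illustrative, and this is not the iterative learning algorithm from the disclosure.

      def tune_boundary(chunk_rows, spare_bandwidth_frac, step=0.25):
          # Grow the chunk when spare bandwidth is plentiful, shrink when it is scarce.
          if spare_bandwidth_frac > 0.5:
              return int(chunk_rows * (1 + step))
          if spare_bandwidth_frac < 0.2:
              return max(1, int(chunk_rows * (1 - step)))
          return chunk_rows

      chunk = 1000
      for spare in [0.7, 0.6, 0.15, 0.4]:     # observed spare bandwidth each interval
          chunk = tune_boundary(chunk, spare)
          print(chunk)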
  • Patent number: 10503401
    Abstract: Various embodiments for optimizing memory bandwidth in a disaggregated computing system, by a processor device, are provided. Respective memory devices are assigned to respective processor devices in the disaggregated computing system, the disaggregated computing system having at least a pool of the memory devices and a pool of the processor devices. An iterative learning algorithm is used to define data boundaries of a dataset for performing an analytic function on the dataset using memory bandwidth not currently committed to a primary compute task.
    Type: Grant
    Filed: August 15, 2016
    Date of Patent: December 10, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: John A. Bivens, Min Li, Ruchi Mahindru, HariGovind V. Ramasamy, Yaoping Ruan, Valentina Salapura, Eugen Schenfeld
  • Patent number: 10445138
    Abstract: Identify individual machines of a multi-machine computing system. Construct a graph of dependencies among the machines. Obtain estimated total administration times and administration priorities for each of the machines. Identify availability of administration resources to assist in administration of one or more of the machines. Select a first set of machines for administration in response to the graph, administration priorities, estimated total administration times, and availability of the first set of administration resources, and administer the first set of machines in parallel using the first set of administration resources. Update the graph in response to administration of the first set of machines. Select a subsequent set of machines for administration in response to the updated graph, administration priorities, estimated total administration times, and availability of a subsequent set of administration resources.
    Type: Grant
    Filed: December 31, 2017
    Date of Patent: October 15, 2019
    Assignee: International Business Machines Corporation
    Inventors: Richard E. Harper, Ruchi Mahindru, HariGovind V. Ramasamy, Long Wang
  • Publication number: 20190310897
    Abstract: For measuring component utilization in a computing system, a server energy utilization reading of a statistically significant number of servers out of a total number of servers located in the datacenter is obtained by measuring, at predetermined intervals, a collective energy consumed by all processing components within each server. The collective energy is measured by virtually probing, thereby monitoring the energy consumption of individual ones of all the processing components to each collect an individual energy utilization reading, where the individual energy utilization reading is aggregated over a predetermined time period to collect an energy consumption pattern associated with the server energy utilization reading.
    Type: Application
    Filed: June 20, 2019
    Publication date: October 10, 2019
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ruchi MAHINDRU, John A. BIVENS, Koushik K. DAS, Min LI, HariGovind V. RAMASAMY, Yaoping RUAN, Valentina SALAPURA, Eugen SCHENFELD
  • Patent number: 10409509
    Abstract: A memory management service occupies a configurable portion of an overall memory system in a disaggregate compute environment. The service provides optimized data organization capabilities over the pool of real memory accessible to the system. The service enables various types of data stores to be implemented in hardware, including at a data structure level. Storage capacity conservation is enabled through the creation and management of high-performance, re-usable data structure implementations across the memory pool, and then using analytics (e.g., multi-tenant similarity and duplicate detection) to determine when data organizations should be used. The service also may re-align memory to different data structures that may be more efficient given data usage and distribution patterns. The service also advantageously manages automated backups efficiently.
    Type: Grant
    Filed: November 3, 2017
    Date of Patent: September 10, 2019
    Assignee: International Business Machines Corporation
    Inventors: John Alan Bivens, Koushik K. Das, Min Li, Ruchi Mahindru, Harigovind V. Ramasamy, Yaoping Ruan, Valentina Salapura, Eugen Schenfeld
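    Illustrative sketch: a simplified Python model of the duplicate-detection idea in this entry's abstract, where identical data blocks across tenants are backed by one physical copy in the memory pool; hashing stands in for the similarity/duplicate analytics, and all names are illustrative.

      import hashlib

      class DedupStore:
          def __init__(self):
              self.blocks = {}            # digest -> data (one physical copy)
              self.refs = {}              # tenant handle -> digest

          def put(self, handle, data):
              digest = hashlib.sha256(data).hexdigest()
              self.blocks.setdefault(digest, data)     # store only the first copy
              self.refs[handle] = digest
              return digest

          def get(self, handle):
              return self.blocks[self.refs[handle]]

      store = DedupStore()
      store.put("tenant-a/list-1", b"shared lookup table")
      store.put("tenant-b/list-9", b"shared lookup table")   # reuses the same block
      print(len(store.blocks))   # 1 physical copy backs both handles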