Patents by Inventor Eugen Schenfeld

Eugen Schenfeld has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180074741
    Abstract: A memory management service occupies a configurable portion of an overall memory system in a disaggregated compute environment. The service provides optimized data organization capabilities over the pool of real memory accessible to the system. The service enables various types of data stores to be implemented in hardware, including at a data structure level. Storage capacity conservation is enabled through the creation and management of high-performance, re-usable data structure implementations across the memory pool, together with analytics (e.g., multi-tenant similarity and duplicate detection) that determine when particular data organizations should be used. The service may also re-align memory to different data structures that may be more efficient given observed data usage and distribution patterns, and it manages automated backups efficiently.
    Type: Application
    Filed: November 3, 2017
    Publication date: March 15, 2018
    Applicant: International Business Machines Corporation
    Inventors: John Alan Bivens, Koushik K. Das, Min Li, Ruchi Mahindru, Harigovind V. Ramasamy, Yaoping Ruan, Valentina Salapura, Eugen Schenfeld
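The memory service described above hinges on pooled, re-usable data structures plus multi-tenant duplicate detection. As a rough illustration of that capacity-conservation idea only (not the patented design), the following Python sketch keeps one copy of identical blocks in a shared pool and hands tenants content-hash references; the class and method names are hypothetical.

```python
# Illustrative sketch only: a toy content-addressed store showing how duplicate
# detection could conserve capacity in a shared memory pool. All names are
# hypothetical; this is not the patented implementation.
import hashlib

class PooledBlockStore:
    """Stores fixed-size blocks once, keyed by content hash (duplicate detection)."""

    def __init__(self):
        self._blocks = {}      # content hash -> bytes
        self._refcount = {}    # content hash -> number of tenant references

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key not in self._blocks:          # new content: consume capacity
            self._blocks[key] = data
        self._refcount[key] = self._refcount.get(key, 0) + 1
        return key                           # tenants keep a reference, not a copy

    def get(self, key: str) -> bytes:
        return self._blocks[key]

    def bytes_saved(self) -> int:
        """Capacity conserved by sharing identical blocks across tenants."""
        return sum((self._refcount[k] - 1) * len(self._blocks[k]) for k in self._blocks)

store = PooledBlockStore()
ref_a = store.put(b"tenant-1 payload")
ref_b = store.put(b"tenant-1 payload")   # duplicate from another tenant: stored once
assert ref_a == ref_b and store.bytes_saved() == len(b"tenant-1 payload")
```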
  • Patent number: 9916636
    Abstract: Server resources in a data center are disaggregated into shared server resource pools, including a graphics processing unit (GPU) pool. Servers are constructed dynamically, on-demand and based on workload requirements, by allocating from these resource pools. According to this disclosure, GPU utilization in the data center is managed proactively by assigning GPUs to workloads at a fine granularity and in an agile way, and by de-provisioning them when they are no longer needed. The approach is especially advantageous for automatically provisioning GPUs for data analytic workloads: it provides a “micro-service” that enables such workloads to use GPU resources automatically and transparently, without exposing the underlying provisioning details (e.g., to the data center customer). Preferably, the approach dynamically determines the number and type of GPUs to use, and then auto-scales the GPUs at runtime based on the workload.
    Type: Grant
    Filed: April 8, 2016
    Date of Patent: March 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Min Li, John Alan Bivens, Koushik K. Das, Ruchi Mahindru, Harigovind V. Ramasamy, Yaoping Ruan, Valentina Salapura, Eugen Schenfeld
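The GPU entry above turns on two ideas: drawing GPUs from a shared pool at fine granularity and auto-scaling them during runtime. The sketch below is a minimal, hypothetical illustration of such a pool plus a utilization-driven autoscaler; the GPU types, thresholds, and class names are invented for the example and are not the patented mechanism.

```python
# Illustrative sketch only: a toy GPU-pool allocator that picks a GPU count for a
# workload and auto-scales it from observed utilization. Names and thresholds are
# hypothetical, not the patented mechanism.
from dataclasses import dataclass, field

@dataclass
class GpuPool:
    free: dict = field(default_factory=lambda: {"v100": 8, "t4": 16})  # type -> count

    def provision(self, gpu_type: str, count: int) -> int:
        granted = min(count, self.free.get(gpu_type, 0))
        self.free[gpu_type] -= granted
        return granted

    def release(self, gpu_type: str, count: int) -> None:
        self.free[gpu_type] += count

def autoscale(pool: GpuPool, gpu_type: str, current: int, utilization: float) -> int:
    """Return the new GPU count: scale up when busy, de-provision when idle."""
    if utilization > 0.8:
        return current + pool.provision(gpu_type, 1)
    if utilization < 0.2 and current > 0:
        pool.release(gpu_type, 1)
        return current - 1
    return current

pool = GpuPool()
gpus = pool.provision("t4", 2)            # initial fine-grained assignment
gpus = autoscale(pool, "t4", gpus, 0.9)   # busy -> scale up to 3
gpus = autoscale(pool, "t4", gpus, 0.1)   # idle -> de-provision back to 2
assert gpus == 2 and pool.free["t4"] == 14
```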
  • Publication number: 20180007127
    Abstract: Server resources in a data center are disaggregated into shared server resource pools. Servers are constructed dynamically, on-demand and based on a tenant's workload requirements, by allocating from these resource pools. The system also includes a license manager that operates to manage a pool of licenses that are available to be associated with resources drawn from the server resource pools. Upon provisioning of a server entity composed of resources drawn from the server resource pools, the license manager determines a license configuration suitable for the server entity. In response to receipt of information indicating a change in a composition of the server entity (e.g., as a workload is processed), the license manager determines whether an adjustment to the license configuration is required. If so, an adjusted license configuration for the server entity is determined and tracked to the tenant. The data center thus allocates appropriate licenses to server entities as required.
    Type: Application
    Filed: June 30, 2016
    Publication date: January 4, 2018
    Inventors: Valentina Salapura, John Alan Bivens, Min Li, Ruchi Mahindru, Harigovind V. Ramasamy, Yaoping Ruan, Eugen Schenfeld
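The license-manager application above ties a license configuration to a server entity's changing composition. A minimal sketch of that bookkeeping follows, assuming a hypothetical one-license-per-CPU rule; the class and field names are illustrative, not IBM's implementation.

```python
# Illustrative sketch only: a toy license manager that derives a license
# configuration from a server entity's composition and adjusts it when the
# composition changes. The per-CPU licensing rule is a hypothetical example.
class LicenseManager:
    def __init__(self, license_pool: int):
        self.available = license_pool
        self.assigned = {}          # server_id -> licenses currently held
        self.tenant_usage = {}      # tenant -> licenses tracked for billing

    def _required(self, composition: dict) -> int:
        return composition.get("cpus", 0)   # assume one license per allocated CPU

    def configure(self, server_id: str, tenant: str, composition: dict) -> int:
        """(Re)compute the license configuration for a server entity."""
        needed = self._required(composition)
        delta = needed - self.assigned.get(server_id, 0)
        if delta > self.available:
            raise RuntimeError("license pool exhausted")
        self.available -= delta
        self.assigned[server_id] = needed
        self.tenant_usage[tenant] = self.tenant_usage.get(tenant, 0) + max(delta, 0)
        return needed

mgr = LicenseManager(license_pool=10)
mgr.configure("srv-1", "tenant-a", {"cpus": 4})        # initial provisioning
mgr.configure("srv-1", "tenant-a", {"cpus": 6})        # composition grew: adjust
assert mgr.assigned["srv-1"] == 6 and mgr.available == 4
```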
  • Publication number: 20170331759
    Abstract: Various embodiments for agile component-level resource provisioning in a disaggregated cloud computing environment, by a processor device, are provided. Respective members of pools of hardware resources within the disaggregated cloud computing environment are allocated to each respective one of a plurality of tenants according to one of a plurality of service level agreement (SLA) classes. Each respective one of the plurality of SLA classes is characterized by a given response time for the allocation of the respective members of the pools of hardware resources corresponding to a requested workload by the tenant.
    Type: Application
    Filed: May 16, 2016
    Publication date: November 16, 2017
    Applicant: International Business Machines Corporation
    Inventors: Min Li, John A. Bivens, Ruchi Mahindru, Harigovind V. Ramasamy, Yaoping Ruan, Valentina Salapura, Eugen Schenfeld
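The SLA-class idea above maps each tenant to a guaranteed response time for component-level allocation. The following toy allocator illustrates one way such classes could be honored, serving "gold" tenants from a pre-reserved pool; the class names, response times, and fallback rule are assumptions for the example.

```python
# Illustrative sketch only: component-level allocation where each tenant's SLA
# class maps to a target allocation response time, served from pre-reserved or
# shared pools. Class names and times are hypothetical.
SLA_RESPONSE_SECONDS = {"gold": 0.1, "silver": 1.0, "bronze": 10.0}

class ComponentAllocator:
    def __init__(self):
        self.reserved = {"gold": {"cpu": 32, "memory_gb": 256}}   # pre-staged for fast response
        self.shared = {"cpu": 128, "memory_gb": 1024}

    def allocate(self, tenant_sla: str, component: str, amount: int) -> float:
        """Allocate a component and return the response time promised by the SLA class."""
        pool = self.reserved.get(tenant_sla, self.shared)
        if pool.get(component, 0) < amount:
            pool = self.shared                      # fall back to the shared pool
        if pool[component] < amount:
            raise RuntimeError("insufficient capacity")
        pool[component] -= amount
        return SLA_RESPONSE_SECONDS[tenant_sla]

alloc = ComponentAllocator()
assert alloc.allocate("gold", "cpu", 8) == 0.1      # gold: fast, pre-reserved path
assert alloc.allocate("bronze", "cpu", 8) == 10.0   # bronze: slower, shared pool
```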
  • Publication number: 20170329519
    Abstract: Various embodiments for optimizing memory bandwidth in a disaggregated computing system, by a processor device, are provided. Respective memory devices are assigned to respective processor devices in the disaggregated computing system, the disaggregated computing system having at least a pool of the memory devices and a pool of the processor devices. An analytic function is performed on data resident in the pool of the memory devices using memory bandwidth not currently committed to a primary compute task.
    Type: Application
    Filed: May 16, 2016
    Publication date: November 16, 2017
    Applicant: International Business Machines Corporation
    Inventors: John A. Bivens, Min Li, Ruchi Mahindru, Harigovind V. Ramasamy, Yaoping Ruan, Valentina Salapura, Eugen Schenfeld
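The bandwidth-optimization entry above runs an analytic over pooled memory using only bandwidth that the primary task has not committed. The sketch below shows that opportunistic pattern with a made-up bandwidth trace and a trivial sum() standing in for the analytic function; none of it is the patented scheduler.

```python
# Illustrative sketch only: running an analytic over pooled memory in chunks,
# consuming only bandwidth not committed to the primary compute task. The
# bandwidth numbers and the sum() analytic are hypothetical placeholders.
def spare_bandwidth(total_gbps: float, primary_usage_gbps: float) -> float:
    return max(total_gbps - primary_usage_gbps, 0.0)

def opportunistic_analytic(memory_pool, chunk_size, primary_usage_trace, total_gbps=100.0):
    """Scan the pool chunk by chunk, skipping steps where the primary task needs the bandwidth."""
    result, offset, step = 0, 0, 0
    while offset < len(memory_pool):
        usage = primary_usage_trace[step % len(primary_usage_trace)]
        step += 1
        if spare_bandwidth(total_gbps, usage) < 10.0:   # primary task is saturating memory
            continue                                     # back off; try again next step
        chunk = memory_pool[offset:offset + chunk_size]
        result += sum(chunk)                             # the "analytic function" on resident data
        offset += chunk_size
    return result

pool_data = list(range(1000))                 # data resident in the disaggregated memory pool
trace = [95.0, 40.0, 30.0]                    # primary task's committed bandwidth over time
assert opportunistic_analytic(pool_data, 100, trace) == sum(pool_data)
```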
  • Publication number: 20170331763
    Abstract: Various embodiments for elastic resource provisioning in a disaggregated cloud computing environment, by a processor device, are provided. Respective members of pools of hardware resources within the disaggregated cloud computing environment are provisioned to a tenant according to an application-level service level agreement (SLA). Upon detecting a potential violation of the application-level SLA, additional respective members of the pools of hardware resources are provisioned on a component level to the tenant to avoid a violation of the SLA by one of a scale-up process and a scale-out process.
    Type: Application
    Filed: May 16, 2016
    Publication date: November 16, 2017
    Applicant: International Business Machines Corporation
    Inventors: Min Li, John A. Bivens, Ruchi Mahindru, Harigovind V. Ramasamy, Yaoping Ruan, Valentina Salapura, Eugen Schenfeld
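The elastic-provisioning entry above reacts to a potential SLA violation with either scale-up or scale-out. The following sketch shows one hypothetical decision rule (scale out when CPU bound, otherwise grow the entity at the component level); the thresholds and the heuristic are illustrative only.

```python
# Illustrative sketch only: reacting to a *potential* application-level SLA
# violation by provisioning more components (scale-up) or more server entities
# (scale-out). Thresholds and the cpu-bound heuristic are hypothetical.
from dataclasses import dataclass

@dataclass
class ServerEntity:
    cpus: int
    memory_gb: int

def enforce_sla(entities, observed_latency_ms, sla_latency_ms, cpu_utilization):
    """Act when latency approaches the SLA target (90%), before it is violated."""
    if observed_latency_ms < 0.9 * sla_latency_ms:
        return "no-op", entities
    if cpu_utilization > 0.85 and len(entities) < 4:
        # workload is CPU bound and can be sharded: scale out with another entity
        entities = entities + [ServerEntity(cpus=entities[0].cpus, memory_gb=entities[0].memory_gb)]
        return "scale-out", entities
    # otherwise grow the existing entity at the component level: scale up
    entities[0].cpus += 2
    return "scale-up", entities

fleet = [ServerEntity(cpus=4, memory_gb=64)]
action, fleet = enforce_sla(fleet, observed_latency_ms=95, sla_latency_ms=100, cpu_utilization=0.9)
assert action == "scale-out" and len(fleet) == 2
```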
  • Publication number: 20170329520
    Abstract: Various embodiments for optimizing memory bandwidth in a disaggregated computing system, by a processor device, are provided. Respective memory devices are assigned to respective processor devices in the disaggregated computing system, the disaggregated computing system having at least a pool of the memory devices and a pool of the processor devices. An iterative learning algorithm is used to define data boundaries of a dataset for performing an analytic function on the dataset using memory bandwidth not currently committed to a primary compute task.
    Type: Application
    Filed: August 15, 2016
    Publication date: November 16, 2017
    Applicant: International Business Machines Corporation
    Inventors: John A. Bivens, Min Li, Ruchi Mahindru, Harigovind V. Ramasamy, Yaoping Ruan, Valentina Salapura, Eugen Schenfeld
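This application adds an iterative learning step that sizes the data boundaries so each analytic pass fits within the spare memory bandwidth. The sketch below uses a simple halve-or-double rule as a stand-in for that learning loop; the cost model and numbers are assumptions, not the patented algorithm.

```python
# Illustrative sketch only: iteratively learning a chunk boundary so that each
# scan of the dataset fits within the memory bandwidth left over by the primary
# task. The cost model and learning rule are hypothetical.
def learn_boundary(dataset_len, bytes_per_item, spare_gbps, slot_seconds,
                   initial=1024, iterations=20):
    """Halve or grow the boundary until one chunk's transfer fits in a spare slot."""
    budget_bytes = spare_gbps * 1e9 / 8 * slot_seconds      # bytes movable per idle slot
    boundary = min(initial, dataset_len)
    for _ in range(iterations):
        chunk_bytes = boundary * bytes_per_item
        if chunk_bytes > budget_bytes:
            boundary = max(boundary // 2, 1)                 # too big: shrink the boundary
        elif chunk_bytes * 2 <= budget_bytes and boundary * 2 <= dataset_len:
            boundary *= 2                                    # headroom: use larger chunks
        else:
            break                                            # converged on a stable boundary
    return boundary

# e.g. 1,000,000 items of 64 B, 8 Gbps spare for 1 ms slots -> ~1 MB budget per slot
b = learn_boundary(dataset_len=1_000_000, bytes_per_item=64, spare_gbps=8, slot_seconds=1e-3)
assert b * 64 <= 8e9 / 8 * 1e-3
```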
  • Patent number: 9811281
    Abstract: A memory management service occupies a configurable portion of an overall memory system in a disaggregated compute environment. The service provides optimized data organization capabilities over the pool of real memory accessible to the system. The service enables various types of data stores to be implemented in hardware, including at a data structure level. Storage capacity conservation is enabled through the creation and management of high-performance, re-usable data structure implementations across the memory pool, together with analytics (e.g., multi-tenant similarity and duplicate detection) that determine when particular data organizations should be used. The service may also re-align memory to different data structures that may be more efficient given observed data usage and distribution patterns, and it manages automated backups efficiently.
    Type: Grant
    Filed: April 7, 2016
    Date of Patent: November 7, 2017
    Assignee: International Business Machines Corporation
    Inventors: John Alan Bivens, Koushik K. Das, Min Li, Ruchi Mahindru, Harigovind V. Ramasamy, Yaoping Ruan, Valentina Salapura, Eugen Schenfeld
  • Publication number: 20170310607
    Abstract: Various embodiments for allocating resources in a disaggregated cloud computing environment, by a processor device, are provided. Respective members of a pool of hardware resources are assigned to each one of a plurality of tenants based upon a classification of the respective members of the pool of hardware resources. The respective members of the pool of hardware resources are assigned to each one of the plurality of tenants independently of a hardware enclosure in which the respective members of the pool of hardware resources are physically located.
    Type: Application
    Filed: April 21, 2016
    Publication date: October 26, 2017
    Applicant: International Business Machines Corporation
    Inventors: Yaoping Ruan, John A. Bivens, Koushik K. Das, Min Li, Ruchi Mahindru, Harigovind V. Ramasamy, Valentina Salapura, Eugen Schenfeld
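The allocation entry above assigns pool members to tenants by their classification, irrespective of the enclosure that physically houses them. A minimal sketch of classification-based assignment follows; the classification labels and enclosure names are hypothetical.

```python
# Illustrative sketch only: assigning pool members to tenants by resource
# classification rather than by the enclosure that physically houses them.
# Classifications and enclosure names are hypothetical.
from dataclasses import dataclass

@dataclass
class PoolMember:
    member_id: str
    classification: str     # e.g. "cpu-high-frequency", "memory-low-latency"
    enclosure: str          # physical drawer/chassis; irrelevant to assignment
    tenant: str = None

def assign(pool, tenant, classification, count):
    """Pick any free members matching the classification, wherever they are housed."""
    chosen = [m for m in pool if m.tenant is None and m.classification == classification][:count]
    if len(chosen) < count:
        raise RuntimeError("not enough members of that classification")
    for m in chosen:
        m.tenant = tenant
    return chosen

pool = [
    PoolMember("cpu-0", "cpu-high-frequency", "enclosure-A"),
    PoolMember("cpu-1", "cpu-high-frequency", "enclosure-B"),
    PoolMember("mem-0", "memory-low-latency", "enclosure-C"),
]
got = assign(pool, "tenant-1", "cpu-high-frequency", 2)
assert {m.enclosure for m in got} == {"enclosure-A", "enclosure-B"}   # spans enclosures
```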
  • Patent number: 9798587
    Abstract: An embodiment of the invention includes applying a first partition to a plurality of LPs, wherein a particular LP is assigned to a first set of LPs. A second partition is applied to the LPs, wherein the particular LP is assigned to an LP set different from the first set. For both the first and second partitions, lookahead values and transit times are determined for each of the LPs and related links. For the first partition, a first system progression rate is computed using a specified function with the lookahead values and transit times determined for the first partition. For the second partition, a second system progression rate is computed using the specified function with the lookahead values and transit times determined for the second partition. The first and second system progression rates are compared to determine which is the lowest.
    Type: Grant
    Filed: May 13, 2011
    Date of Patent: October 24, 2017
    Assignee: International Business Machines Corporation
    Inventors: Cheng-Hong Li, Alfred J. Park, Eugen Schenfeld
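This patent compares alternative partitions of LPs (logical processes in a parallel simulation) by a system progression rate derived from per-link lookahead values and transit times. The abstract does not give the specified function, so the sketch below substitutes a hypothetical min(lookahead/transit) rate purely to illustrate the comparison.

```python
# Illustrative sketch only: comparing two partitions of logical processes (LPs)
# by a system progression rate computed from per-link lookahead and transit
# times. The rate formula min(lookahead/transit) is a hypothetical stand-in for
# the "specified function" in the abstract.
def system_progression_rate(links):
    """links: list of (lookahead in simulated seconds, transit in wall-clock seconds)."""
    return min(lookahead / transit for lookahead, transit in links)

def slower_partition(partition_a_links, partition_b_links):
    """Return which partition has the lowest (bottleneck) system progression rate."""
    rate_a = system_progression_rate(partition_a_links)
    rate_b = system_progression_rate(partition_b_links)
    return ("A", rate_a) if rate_a <= rate_b else ("B", rate_b)

# Partition A keeps chatty LPs together (short transit); B splits them across hosts.
partition_a = [(5.0, 0.001), (2.0, 0.001)]
partition_b = [(5.0, 0.001), (2.0, 0.010)]
assert slower_partition(partition_a, partition_b) == ("B", 200.0)
```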
  • Publication number: 20170293994
    Abstract: Server resources in a data center are disaggregated into shared server resource pools, including a graphics processing unit (GPU) pool. Servers are constructed dynamically, on-demand and based on workload requirements, by allocating from these resource pools. According to this disclosure, GPU utilization in the data center is managed proactively by assigning GPUs to workloads at a fine granularity and in an agile way, and by de-provisioning them when they are no longer needed. The approach is especially advantageous for automatically provisioning GPUs for data analytic workloads: it provides a “micro-service” that enables such workloads to use GPU resources automatically and transparently, without exposing the underlying provisioning details (e.g., to the data center customer). Preferably, the approach dynamically determines the number and type of GPUs to use, and then auto-scales the GPUs at runtime based on the workload.
    Type: Application
    Filed: April 8, 2016
    Publication date: October 12, 2017
    Inventors: Min Li, John Alan Bivens, Koushik K. Das, Ruchi Mahindru, Harigovind V. Ramasamy, Yaoping Ruan, Valentina Salapura, Eugen Schenfeld
  • Publication number: 20170295108
    Abstract: Server resources in a data center are disaggregated into shared server resource pools. Servers are constructed dynamically, on-demand and based on workload requirements and a tenant's resiliency requirements (e.g., as specified in an SLA), by allocating from these resource pools. A disaggregated compute system of this type keeps track of resources that are available in the shared server resource pools, and it manages those resources based on that information and the health of the resources. As a workload is processed by the server entity and component resources fail, the server entity composition is changed, e.g. by allocating other resources to the server entity, or by transitioning to other server entities, to ensure that a resiliency requirement is maintained.
    Type: Application
    Filed: April 7, 2016
    Publication date: October 12, 2017
    Inventors: Ruchi Mahindru, John Alan Bivens, Koushik K. Das, Min Li, Harigovind V. Ramasamy, Yaoping Ruan, Valentina Salapura, Eugen Schenfeld
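The resiliency entry above changes a server entity's composition when component resources fail so that the tenant's resiliency requirement still holds. The sketch below shows that swap-in pattern with a hypothetical health set and a minimum-healthy-CPU requirement; it is illustrative, not the patented system.

```python
# Illustrative sketch only: keeping a server entity's composition at the level a
# resiliency requirement demands by swapping in healthy pool resources when a
# component fails. The health model and field names are hypothetical.
class DisaggregatedManager:
    def __init__(self, free_cpus):
        self.free_cpus = set(free_cpus)         # shared pool, tracked with health info
        self.failed = set()

    def report_failure(self, cpu_id):
        self.failed.add(cpu_id)

    def maintain(self, server_entity, min_healthy_cpus):
        """Drop failed CPUs and re-allocate from the pool to honor the resiliency requirement."""
        server_entity["cpus"] = [c for c in server_entity["cpus"] if c not in self.failed]
        while len(server_entity["cpus"]) < min_healthy_cpus:
            if not self.free_cpus:
                raise RuntimeError("cannot satisfy resiliency requirement: pool exhausted")
            server_entity["cpus"].append(self.free_cpus.pop())
        return server_entity

mgr = DisaggregatedManager(free_cpus={"cpu-8", "cpu-9"})
entity = {"id": "srv-1", "cpus": ["cpu-0", "cpu-1", "cpu-2", "cpu-3"]}
mgr.report_failure("cpu-2")
entity = mgr.maintain(entity, min_healthy_cpus=4)       # resiliency: keep 4 healthy CPUs
assert len(entity["cpus"]) == 4 and "cpu-2" not in entity["cpus"]
```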
  • Publication number: 20170293447
    Abstract: A memory management service occupies a configurable portion of an overall memory system in a disaggregated compute environment. The service provides optimized data organization capabilities over the pool of real memory accessible to the system. The service enables various types of data stores to be implemented in hardware, including at a data structure level. Storage capacity conservation is enabled through the creation and management of high-performance, re-usable data structure implementations across the memory pool, together with analytics (e.g., multi-tenant similarity and duplicate detection) that determine when particular data organizations should be used. The service may also re-align memory to different data structures that may be more efficient given observed data usage and distribution patterns, and it manages automated backups efficiently.
    Type: Application
    Filed: April 7, 2016
    Publication date: October 12, 2017
    Inventors: John Alan Bivens, Koushik K. Das, Min Li, Ruchi Mahindru, Harigovind V. Ramasamy, Yaoping Ruan, Valentina Salapura, Eugen Schenfeld
  • Publication number: 20170295107
    Abstract: Server resources in a data center are disaggregated into shared server resource pools. Servers are constructed dynamically, on-demand and based on workload requirements, by allocating from these resource pools. A disaggregated compute system of this type keeps track of resources that are available in the shared server resource pools, and it manages those resources based on that information. Each server entity that is built is assigned a unique server ID, and each resource that becomes a component of it is tagged with that identifier. As a workload is processed by the server entity, its composition may change, e.g., by allocating more resources to the server entity or by de-allocating resources from it. Workload requests are associated with the unique server ID of the server entity. When a workload request is received at a resource, the resource matches its own server ID to that of the request before servicing the request.
    Type: Application
    Filed: April 7, 2016
    Publication date: October 12, 2017
    Inventors: Valentina Salapura, John Alan Bivens, Koushik K. Das, Min Li, Ruchi Mahindru, Harigovind V. Ramasamy, Yaoping Ruan, Eugen Schenfeld
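This application tags every component resource with the server entity's unique ID and has each resource check that ID before servicing a request. A minimal sketch of that tagging-and-matching check follows; the request shape and class names are assumptions.

```python
# Illustrative sketch only: tagging every component resource with the server
# entity's unique ID and having a resource service only requests that carry a
# matching ID. IDs and the request shape are hypothetical.
import uuid

class TaggedResource:
    def __init__(self, name):
        self.name = name
        self.server_id = None            # set when the resource is composed into a server entity

    def handle(self, request):
        """Service the request only if its server ID matches this resource's tag."""
        if request["server_id"] != self.server_id:
            raise PermissionError(f"{self.name}: request is for a different server entity")
        return f"{self.name} processed {request['op']}"

server_id = str(uuid.uuid4())            # unique ID assigned when the server entity is built
cpu = TaggedResource("cpu-7")
cpu.server_id = server_id                # component tagged with the entity's identifier

print(cpu.handle({"server_id": server_id, "op": "run-task"}))       # matches: serviced
try:
    cpu.handle({"server_id": str(uuid.uuid4()), "op": "run-task"})  # stale/foreign request
except PermissionError as err:
    print(err)
```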
  • Patent number: 9460049
    Abstract: Symmetric multi-processor (SMP) nodes are dynamically configured via SMP sockets: optically-connected SMP switches dynamically connect the optically-connected SMP links attached to the SMP nodes, forming SMP domains that best match the expected workloads of coherent traffic used to exchange SMP coherence information. SMP nodes can be dynamically added to and/or removed from these SMP domains.
    Type: Grant
    Filed: July 18, 2013
    Date of Patent: October 4, 2016
    Assignee: Lenovo Enterprise Solutions (Singapore) Pte. Ltd.
    Inventors: John M. Borkenhagen, James S. Fields, Jr., Eugen Schenfeld
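The SMP patent above forms coherence domains by optically switching node links according to expected coherent-traffic workloads. The sketch below models the optical switch as a simple connection map that adds and removes nodes from domains; the workload labels and grouping rule are hypothetical, not the patented hardware.

```python
# Illustrative sketch only: modeling the optical circuit switch as a connection
# map and forming SMP domains by grouping nodes with matching expected coherent
# traffic. Workload labels and the grouping rule are hypothetical.
from collections import defaultdict

class OpticalSwitch:
    def __init__(self):
        self.domains = defaultdict(set)        # domain name -> connected SMP nodes

    def add_node(self, node, expected_workload):
        """Connect the node's SMP link into the domain matching its expected workload."""
        self.domains[expected_workload].add(node)

    def remove_node(self, node):
        for members in self.domains.values():
            members.discard(node)

switch = OpticalSwitch()
switch.add_node("node-1", expected_workload="in-memory-db")    # coherent-traffic heavy
switch.add_node("node-2", expected_workload="in-memory-db")    # same domain as node-1
switch.add_node("node-3", expected_workload="batch-analytics")
switch.remove_node("node-2")                                   # dynamically removed from its domain
assert switch.domains["in-memory-db"] == {"node-1"}
```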
  • Patent number: 9390047
    Abstract: Data is collected by an active node from passive nodes. A source node extracts the data format, a remote memory blade identification (ID), a remote memory blade address, and ranges of the RMMA space, and then composes and sends metadata to receiving nodes and receiving racks.
    Type: Grant
    Filed: February 3, 2016
    Date of Patent: July 12, 2016
    Assignee: International Business Machines Corporation
    Inventors: Eugen Schenfeld, Abhirup Chakraborty
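The data-collection patent above has the source node compose metadata (data format, remote memory blade ID and address, RMMA ranges) for the receiving nodes. The sketch below merely packages and delivers such a record; the field values and the send function are hypothetical, and RMMA is left unexpanded as in the abstract.

```python
# Illustrative sketch only: the metadata a source node might compose before
# pushing collected data to receiving nodes, mirroring the fields named in the
# abstract. Field values and the send function are hypothetical.
from dataclasses import dataclass, asdict

@dataclass
class CollectionMetadata:
    data_format: str                 # e.g. "csv", "parquet"
    memory_blade_id: int             # remote memory blade identification (ID)
    memory_blade_address: int        # base address on that blade
    rmma_ranges: list                # list of (start, length) ranges in the RMMA space

def send_metadata(meta: CollectionMetadata, receivers: list) -> dict:
    """Deliver the composed metadata to every receiving node (here: an in-memory mailbox)."""
    return {node: asdict(meta) for node in receivers}

meta = CollectionMetadata(
    data_format="csv",
    memory_blade_id=3,
    memory_blade_address=0x4000_0000,
    rmma_ranges=[(0x0, 4096), (0x10000, 8192)],
)
mailboxes = send_metadata(meta, receivers=["rack-2/node-5", "rack-3/node-1"])
assert mailboxes["rack-2/node-5"]["memory_blade_id"] == 3
```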
  • Publication number: 20160154755
    Abstract: Data is collected by an active node from passive nodes. A source node extracts the data format, a remote memory blade identification (ID), a remote memory blade address, and ranges of the RMMA space, and then composes and sends metadata to receiving nodes and receiving racks.
    Type: Application
    Filed: February 3, 2016
    Publication date: June 2, 2016
    Applicant: International Business Machines Corporation
    Inventors: Eugen Schenfeld, Abhirup Chakraborty
  • Patent number: 9338528
    Abstract: Reflecting optical devices are optimally positioned by an all-optical switch in an optically-connected system: optical power readings taken by an optical monitoring module are transmitted to the all-optical switch, which positions a reflecting optical device so as to produce maximum optical output power.
    Type: Grant
    Filed: July 18, 2013
    Date of Patent: May 10, 2016
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: John M. Borkenhagen, Eugen Schenfeld, Mark Z. Solomon
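The optical-positioning patent above closes a feedback loop: power readings from a monitoring module drive the switch to the position that gives maximum output power. The sketch below illustrates that loop with a made-up power curve and a simple sweep; it is not the patented positioning method.

```python
# Illustrative sketch only: sweeping a reflecting device's position and keeping
# the setting with the highest measured output power, as a feedback loop between
# a power monitor and the switch. The power model is a hypothetical placeholder.
def measure_power(position: float) -> float:
    """Stand-in for the optical monitoring module: peak power at position 0.3."""
    return max(0.0, 1.0 - abs(position - 0.3) * 4.0)   # milliwatts, made up

def optimize_position(measure, steps: int = 101) -> float:
    """Sweep the actuator range [0, 1] and return the position with maximum power."""
    best_position, best_power = 0.0, float("-inf")
    for i in range(steps):
        position = i / (steps - 1)
        power = measure(position)                       # reading fed back to the switch
        if power > best_power:
            best_position, best_power = position, power
    return best_position

assert abs(optimize_position(measure_power) - 0.3) < 0.011
```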
  • Publication number: 20160110226
    Abstract: An embodiment of the invention includes applying a first partition to a plurality of LPs, wherein a particular LP is assigned to a first set of LPs. A second partition is applied to the LPs, wherein the particular LP is assigned to an LP set different from the first set. For both the first and second partitions, lookahead values and transit times are determined for each of the LPs and related links. For the first partition, a first system progression rate is computed using a specified function with the lookahead values and transit times determined for the first partition. For the second partition, a second system progression rate is computed using the specified function with the lookahead values and transit times determined for the second partition. The first and second system progression rates are compared to determine which is the lowest.
    Type: Application
    Filed: December 16, 2015
    Publication date: April 21, 2016
    Inventors: Cheng-Hong Li, Alfred J. Park, Eugen Schenfeld
  • Patent number: 9256547
    Abstract: Data is collected by an active node from passive nodes and is arranged and stored on receiving nodes. A source node extracts the data format, a remote memory blade identification (ID), a remote memory blade address, and ranges of the RMMA space, and then composes and sends metadata to the receiving nodes and receiving racks.
    Type: Grant
    Filed: August 10, 2015
    Date of Patent: February 9, 2016
    Assignee: International Business Machines Corporation
    Inventors: Eugen Schenfeld, Abhirup Chakraborty