Patents by Inventor Ekaterina M. Ambroladze

Ekaterina M. Ambroladze has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10380020
    Abstract: Embodiments include methods, systems, and computer program products for maintaining ordered memory access with parallel-access data streams associated with a distributed shared memory system. The computer-implemented method includes performing, by a first cache, a key check, the key check being associated with a first ordered data store. A first memory node signals to an input/output (I/O) controller that it is ready to begin pipelining of a second ordered data store into the first memory node. A second cache returns a key response to the first cache indicating that the pipelining of the second ordered data store can proceed. The first memory node then sends a ready signal to the I/O controller indicating that it is ready to continue pipelining of the second ordered data store into the first memory node, wherein the ready signal is triggered by receipt of the key response. (See the illustrative sketch following this listing.)
    Type: Grant
    Filed: July 17, 2017
    Date of Patent: August 13, 2019
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Timothy C. Bronson, Matthias Klein, Pak-kin Mak, Vesselina K. Papazova, Robert J. Sonnelitter, III, Lahiruka S. Winter
  • Publication number: 20190205251
    Abstract: Topology of clusters of processors of a computer configuration, configured to support any of a plurality of cache coherency protocols, is discovered at initialization time to determine which one of the plurality of cache coherency protocols is to be used to handle coherency requests of the configuration. (See the illustrative sketch following this listing.)
    Type: Application
    Filed: March 11, 2019
    Publication date: July 4, 2019
    Applicant: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael F. Fee, Arthur J. O'Neill, Robert J. Sonnelitter, III
  • Publication number: 20190179765
    Abstract: In an approach for purging an address range from a cache, a processor quiesces a computing system. Cache logic issues a command to purge a section of a cache to higher-level memory, wherein the command comprises a starting storage address and a range of storage addresses to be purged. Responsive to each cache of the computing system activating the command, cache logic ends the quiesce of the computing system. Subsequent to ending the quiesce of the computing system, cache logic purges storage addresses from the cache, based on the command, to the higher-level memory. (See the illustrative sketch following this listing.)
    Type: Application
    Filed: February 21, 2019
    Publication date: June 13, 2019
    Inventors: Ekaterina M. Ambroladze, Deanna P. D. Berger, Michael A. Blake, Pak-kin Mak, Robert J. Sonnelitter, III, Guy G. Tracy, Chad G. Wilson
  • Publication number: 20190018775
    Abstract: Embodiments include methods, systems, and computer program products for maintaining ordered memory access with parallel-access data streams associated with a distributed shared memory system. The computer-implemented method includes performing, by a first cache, a key check, the key check being associated with a first ordered data store. A first memory node signals to an input/output (I/O) controller that it is ready to begin pipelining of a second ordered data store into the first memory node. A second cache returns a key response to the first cache indicating that the pipelining of the second ordered data store can proceed. The first memory node then sends a ready signal to the I/O controller indicating that it is ready to continue pipelining of the second ordered data store into the first memory node, wherein the ready signal is triggered by receipt of the key response.
    Type: Application
    Filed: July 17, 2017
    Publication date: January 17, 2019
    Inventors: Ekaterina M. Ambroladze, Timothy C. Bronson, Matthias Klein, Pak-kin Mak, Vesselina K. Papazova, Robert J. Sonnelitter, III, Lahiruka S. Winter
  • Patent number: 10169272
    Abstract: A data processing apparatus is provided, which includes: a plurality of processor cores; a shared processor cache, the shared processor cache being connected to each of the processor cores and to a main memory; a bus controller, the bus controller being connected to the shared processor cache and performing, in response to receiving a descriptor sent by one of the processor cores, a transfer of requested data indicated by the descriptor from the shared processor cache to an input/output (I/O) device; and a bus unit, the bus unit being connected to the bus controller and transferring data to/from the I/O device; wherein the shared processor cache includes means for prefetching the requested data from the shared processor cache or main memory by performing a direct memory access in response to receiving a descriptor from the one of the processor cores. (See the illustrative sketch following this listing.)
    Type: Grant
    Filed: August 17, 2015
    Date of Patent: January 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Norbert Hagspiel, Sascha Junghans, Matthias Klein, Jeorg Walter
  • Patent number: 10169260
    Abstract: In an approach for managing data transfer across a bus shared by processors, a request for a first set of data is received from a first processor. A request for a second set of data is received from a second processor. First portions of the first set of data and the second set of data are written to a buffer. Additional portions of each set of data are written to the buffer as portions are received. It is determined that a portion of the first set of data has a higher priority to the bus than a portion of the second set of data based on a priority scheme, wherein the priority scheme is based on return progress of each respective set of data having at least a portion of data in the buffer. The portion of the first set of data is granted access to the bus. (See the illustrative sketch following this listing.)
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: January 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael Fee, Arthur J. O'Neill, Jr.
  • Publication number: 20180374522
    Abstract: In a system and method for transferring an ordered partial store of data from a controller to a memory subsystem, the ordered partial store of data is received into a buffer of the controller. The method also includes issuing a preinstall command to the memory subsystem, wherein the preinstall command indicates that data from a number of addresses of memory corresponding with a target memory location be obtained in local memory of the memory subsystem, along with ownership of the data, for subsequent use. A query command is issued to the memory subsystem. The query command requests an indication from the memory subsystem that the memory subsystem is ready to receive and correctly serialize the ordered partial store of data. The ordered partial store of data is then transferred from the controller to the memory subsystem. (See the illustrative sketch following this listing.)
    Type: Application
    Filed: June 22, 2017
    Publication date: December 27, 2018
    Inventors: Ekaterina M. Ambroladze, Sascha Junghans, Matthias Klein, Pak-Kin Mak, Robert J. Sonnelitter, III, Chad G. Wilson
  • Publication number: 20180307612
    Abstract: In an approach for purging an address range from a cache, a processor quiesces a computing system. Cache logic issues a command to purge a section of a cache to higher-level memory, wherein the command comprises a starting storage address and a range of storage addresses to be purged. Responsive to each cache of the computing system activating the command, cache logic ends the quiesce of the computing system. Subsequent to ending the quiesce of the computing system, cache logic purges storage addresses from the cache, based on the command, to the higher-level memory.
    Type: Application
    Filed: April 19, 2017
    Publication date: October 25, 2018
    Inventors: Ekaterina M. Ambroladze, Deanna P. D. Berger, Michael A. Blake, Pak-kin Mak, Robert J. Sonnelitter, III, Guy G. Tracy, Chad G. Wilson
  • Patent number: 10055355
    Abstract: In an approach for purging an address range from a cache, a processor quiesces a computing system. Cache logic issues a command to purge a section of a cache to higher-level memory, wherein the command comprises a starting storage address and a range of storage addresses to be purged. Responsive to each cache of the computing system activating the command, cache logic ends the quiesce of the computing system. Subsequent to ending the quiesce of the computing system, cache logic purges storage addresses from the cache, based on the command, to the higher-level memory.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: August 21, 2018
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. D. Berger, Michael A. Blake, Pak-Kin Mak, Robert J. Sonnelitter, III, Guy G. Tracy, Chad G. Wilson
  • Patent number: 10042554
    Abstract: A method, computer program product, and system are provided for maintaining proper ordering of a data stream that includes two or more sequentially ordered stores, the data stream being moved to a destination memory device, the two or more sequentially ordered stores including at least a first store and a second store, wherein the first store is rejected by the destination memory device. A computer-implemented method includes sending the first store to the destination memory device. A conditional request is sent to the destination memory device for approval to send the second store to the destination memory device, the conditional request dependent upon successful completion of the first store. The second store is cancelled responsive to receiving a reject response corresponding to the first store. (See the illustrative sketch following this listing.)
    Type: Grant
    Filed: November 19, 2015
    Date of Patent: August 7, 2018
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Garrett M. Drapala, Norbert Hagspiel, Sascha Junghans, Matthias Klein, Gary E. Strait
  • Publication number: 20180121359
    Abstract: Topology of clusters of processors of a computer configuration, configured to support any of a plurality of cache coherency protocols, is discovered at initialization time to determine which one of the plurality of cache coherency protocols is to be used to handle coherency requests of the configuration.
    Type: Application
    Filed: January 2, 2018
    Publication date: May 3, 2018
    Applicant: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael F. Fee, Arthur J. O'Neill, Robert J. Sonnelitter, III
  • Publication number: 20180121358
    Abstract: Topology of clusters of processors of a computer configuration, configured to support any of a plurality of cache coherency protocols, is discovered at initialization time to determine which one of the plurality of cache coherency protocols is to be used to handle coherency requests of the configuration.
    Type: Application
    Filed: January 2, 2018
    Publication date: May 3, 2018
    Applicant: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael F. Fee, Arthur J. O'Neill, Robert J. Sonnelitter, III
  • Publication number: 20180107617
    Abstract: In an approach for managing data transfer across a bus shared by processors, a request for a first set of data is received from a first processor. A request for a second set of data is received from a second processor. First portions of the first set of data and the second set of data are written to a buffer. Additional portions of each set of data are written to the buffer as portions are received. It is determined that a portion of the first set of data has a higher priority to the bus than a portion of the second set of data based on a priority scheme, wherein the priority scheme is based on return progress of each respective set of data having at least a portion of data in the buffer. The portion of the first set of data is granted access to the bus.
    Type: Application
    Filed: December 15, 2017
    Publication date: April 19, 2018
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael Fee, Arthur J. O'Neill, Jr.
  • Patent number: 9898407
    Abstract: Topology of clusters of processors of a computer configuration, configured to support any of a plurality of cache coherency protocols, is discovered at initialization time to determine which one of the plurality of cache coherency protocols is to be used to handle coherency requests of the configuration.
    Type: Grant
    Filed: August 3, 2015
    Date of Patent: February 20, 2018
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael F. Fee, Arthur J. O'Neill, Robert J. Sonnelitter, III
  • Patent number: 9892067
    Abstract: In an approach for managing data transfer across a bus shared by processors, a request for a first set of data is received from a first processor. A request for a second set of data is received from a second processor. First portions of the first set of data and the second set of data are written to a buffer. Additional portions of each set of data are written to the buffer as portions are received. It is determined that a portion of the first set of data has a higher priority to the bus than a portion of the second set of data based on a priority scheme, wherein the priority scheme is based on return progress of each respective set of data having at least a portion of data in the buffer. The portion of the first set of data is granted access to the bus.
    Type: Grant
    Filed: January 29, 2015
    Date of Patent: February 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael Fee, Arthur J. O'Neill, Jr.
  • Patent number: 9886382
    Abstract: Topology of clusters of processors of a computer configuration, configured to support any of a plurality of cache coherency protocols, is discovered at initialization time to determine which one of the plurality of cache coherency protocols is to be used to handle coherency requests of the configuration.
    Type: Grant
    Filed: November 20, 2014
    Date of Patent: February 6, 2018
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael F. Fee, Arthur J. O'Neill, Robert J. Sonnelitter, III
  • Patent number: 9858190
    Abstract: Maintaining store order with high throughput in a distributed shared memory system. A request is received for a first ordered data store and a coherency check is initiated. A signal is sent that pipelining of a second ordered data store can be initiated. If a delay condition is encountered during the coherency check for the first ordered data store, rejection of the first ordered data store is signaled. If a delay condition is not encountered during the coherency check for the first ordered data store, a signal is sent indicating a readiness to continue pipelining of the second ordered data store.
    Type: Grant
    Filed: January 27, 2015
    Date of Patent: January 2, 2018
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Timothy C. Bronson, Garrett M. Drapala, Michael Fee, Matthias Klein, Pak-kin Mak, Robert J. Sonnelitter, III, Gary E. Strait
  • Patent number: 9703661
    Abstract: In an approach for taking corrupt portions of cache offline during runtime, a notification of a section of a cache to be taken offline is received, wherein the section includes one or more sets in one or more indexes of the cache. An indication is associated with each set of the one or more sets in a first index of the one or more indexes, wherein the indication marks the respective set as unusable for future operations. Data is purged from the one or more sets in the first index of the cache. Each set of the one or more sets in the first index is marked as invalid. (See the illustrative sketch following this listing.)
    Type: Grant
    Filed: February 5, 2015
    Date of Patent: July 11, 2017
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Michael A. Blake, Michael Fee, Arthur J. O'Neill, Jr.
  • Patent number: 9678848
    Abstract: In an approach for taking corrupt portions of cache offline during runtime, a notification of a section of a cache to be taken offline is received, wherein the section includes one or more sets in one or more indexes of the cache. An indication is associated with each set of the one or more sets in a first index of the one or more indexes, wherein the indication marks the respective set as unusable for future operations. Data is purged from the one or more sets in the first index of the cache. Each set of the one or more sets in the first index is marked as invalid.
    Type: Grant
    Filed: September 7, 2016
    Date of Patent: June 13, 2017
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Michael A. Blake, Michael Fee, Arthur J. O'Neill, Jr.
  • Patent number: 9594689
    Abstract: In an approach for backing up designated data located in a cache, data stored within an index of a cache is identified, wherein the data has an associated designation indicating that the data is applicable to be backed up to a higher level memory. It is determined that the data stored to the cache has been updated. A status associated with the data is adjusted, such that the adjusted status indicates that the data stored to the cache has not been changed. A copy of the data is created. The copy of the data is stored to the higher level memory. (See the illustrative sketch following this listing.)
    Type: Grant
    Filed: February 9, 2015
    Date of Patent: March 14, 2017
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Garrett M. Drapala, Michael Fee, Pak-kin Mak, Arthur J. O'Neill, Jr., Diana L. Orf
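
The sketches below are editor-added illustrations, not part of any patent record. Each is a deliberately simplified Python analogy of a hardware mechanism summarized in an abstract above; every class, function, and field name is hypothetical unless it is quoted from the abstract itself. This first sketch loosely models the ordered-store pipelining handshake of patent 10380020 / publication 20190018775 (and the closely related patent 9858190): a key check runs for the first ordered store, the memory node lets the I/O controller begin pipelining the second store early, and the signal to continue that pipelining is gated on receipt of the key response.

    # Editor's sketch (hypothetical names throughout): the ordered-store
    # pipelining handshake summarized in the abstract of patent 10380020.

    class IOController:
        def begin_pipelining(self, node, store):
            print(f"I/O controller: begin pipelining {store} into {node}")

        def continue_pipelining(self, node, store):
            print(f"I/O controller: continue pipelining {store} into {node}")

    class MemoryNode:
        def __init__(self, name, io_controller):
            self.name, self.io = name, io_controller

        def ready_to_begin(self, store):
            # Let the I/O controller start pipelining the *next* ordered store
            # before the previous one has fully completed.
            self.io.begin_pipelining(self.name, store)

        def ready_to_continue(self, store):
            # Sent only once the key response for the earlier store arrives,
            # so ordering is preserved while the pipeline stays busy.
            self.io.continue_pipelining(self.name, store)

    class Cache:
        def key_check(self, store):
            # Stand-in for a storage-protection key check; assume it passes.
            print(f"first cache: key check for {store}")
            return True

        def key_response(self, store):
            print(f"second cache: key response for {store} -> pipelining may proceed")
            return True

    io = IOController()
    node = MemoryNode("memory node 0", io)
    first_cache, second_cache = Cache(), Cache()

    first_cache.key_check("ordered store 1")       # key check for the first store
    node.ready_to_begin("ordered store 2")         # second store may start early
    if second_cache.key_response("ordered store 1"):
        node.ready_to_continue("ordered store 2")  # gated on the key response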
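
For patents 9898407 and 9886382 (and publications 20190205251, 20180121359, and 20180121358): a minimal sketch of discovering the cluster topology at initialization and choosing one of several supported cache coherency protocols. The two-cluster topology, the protocol names, and the selection rule are all invented for illustration.

    # Editor's sketch: pick a coherency protocol once, at initialization,
    # based on the discovered cluster topology.  All names are hypothetical.

    def discover_topology():
        # Real hardware would probe the fabric; return a made-up configuration
        # mapping each cluster to the number of chips it contains.
        return {"cluster0": 4, "cluster1": 4}

    def select_coherency_protocol(topology):
        # Hypothetical rule: a single cluster can use a simpler snoop-style
        # protocol, while multiple clusters need a directory-style protocol
        # that tracks ownership across the fabric.
        if len(topology) == 1:
            return "intra-cluster snoop protocol"
        return "multi-cluster directory protocol"

    topology = discover_topology()                 # done once, at initialization
    protocol = select_coherency_protocol(topology)
    print(f"topology: {topology} -> using {protocol} for coherency requests")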
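
For patent 10055355 and publications 20190179765 / 20180307612: a toy model of the range purge. The quiesce lasts only until every cache has activated the purge command (starting address plus range); the cast-out to higher-level memory happens after the quiesce ends. The cache contents and addresses are invented.

    # Editor's sketch: quiesce, broadcast the purge command, end the quiesce,
    # then let each cache purge the addressed range in the background.

    class Cache:
        def __init__(self, name, lines):
            self.name = name
            self.lines = dict(lines)      # address -> data
            self.pending_purge = None

        def activate_purge(self, start, length):
            # Accept the command while the system is quiesced; do not purge yet.
            self.pending_purge = (start, length)

        def purge(self, higher_level_memory):
            start, length = self.pending_purge
            for addr in [a for a in self.lines if start <= a < start + length]:
                higher_level_memory[addr] = self.lines.pop(addr)  # cast out the line

    caches = [Cache("L3-0", {0x100: "a", 0x200: "b"}), Cache("L3-1", {0x110: "c"})]
    memory = {}

    print("system quiesced")                          # 1. quiesce the system
    for c in caches:
        c.activate_purge(start=0x100, length=0x100)   # 2. every cache activates the command
    print("quiesce ended")                            # 3. end the quiesce before purging
    for c in caches:
        c.purge(memory)                               # 4. purge proceeds after the quiesce
    print(memory)   # addresses in the purged range now live in higher-level memory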
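
For patent 10169272: a rough sketch of the descriptor-driven prefetch. A core hands a descriptor (address and length) to the bus controller, the shared cache prefetches the requested data from the cache or main memory, and the bus controller then moves it to the I/O device over the bus unit. The addresses, line size, and all names are assumptions.

    # Editor's sketch: prefetch into the shared cache on receipt of a
    # descriptor, then transfer the data to the I/O device.

    MAIN_MEMORY = {addr: f"data@{addr:#x}" for addr in range(0x1000, 0x1040, 8)}

    class SharedCache:
        def __init__(self):
            self.lines = {}

        def prefetch(self, address, length):
            # DMA-style fetch: pull anything not already cached from main
            # memory so it is ready before the I/O transfer starts.
            for addr in range(address, address + length, 8):
                self.lines.setdefault(addr, MAIN_MEMORY[addr])
            return [self.lines[a] for a in range(address, address + length, 8)]

    class BusController:
        def __init__(self, cache):
            self.cache = cache

        def handle_descriptor(self, descriptor, io_device):
            address, length = descriptor
            data = self.cache.prefetch(address, length)   # prefetch on descriptor receipt
            io_device.extend(data)                        # transfer over the bus unit

    io_device = []
    controller = BusController(SharedCache())
    controller.handle_descriptor((0x1000, 32), io_device)  # descriptor sent by a core
    print(io_device)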
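
For patents 10169260 and 9892067 (and publication 20180107617): a sketch of the priority scheme in which the shared bus is granted to whichever buffered data set has made the most return progress. The concrete progress metric and tie handling here are assumptions.

    # Editor's sketch: grant the next bus beat to the transfer that is
    # furthest along and has a portion waiting in the shared buffer.

    class Transfer:
        def __init__(self, name, total_portions):
            self.name = name
            self.total = total_portions
            self.buffered = 0          # portions written to the buffer, not yet sent
            self.returned = 0          # portions already returned over the bus

        def progress(self):
            return self.returned / self.total

    def grant_bus(transfers):
        ready = [t for t in transfers if t.buffered > 0]
        return max(ready, key=Transfer.progress, default=None)

    a = Transfer("processor-0 data", total_portions=4)
    b = Transfer("processor-1 data", total_portions=4)
    a.buffered, a.returned = 1, 2      # set A is half returned
    b.buffered, b.returned = 1, 0      # set B has only just started

    winner = grant_bus([a, b])
    print(f"bus granted to: {winner.name}")   # the further-along transfer wins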
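
For publication 20180374522: a sketch of the preinstall/query handshake before an ordered partial store is transferred. The command names come from the abstract; the data layout and readiness test are invented.

    # Editor's sketch: preinstall the target lines (and ownership), query for
    # readiness, then transfer the buffered ordered partial store.

    class MemorySubsystem:
        def __init__(self):
            self.local = {}

        def preinstall(self, target, count):
            # Pull the addressed lines, and ownership of them, into local memory.
            for addr in range(target, target + count):
                self.local[addr] = {"owned": True, "data": None}

        def query(self, target, count):
            # Ready only if every needed line is already installed and owned.
            return all(self.local.get(a, {}).get("owned") for a in range(target, target + count))

        def receive(self, target, payload):
            for offset, value in enumerate(payload):
                self.local[target + offset]["data"] = value

    class Controller:
        def __init__(self):
            self.buffer = []

        def send_ordered_partial_store(self, subsystem, target, payload):
            self.buffer = list(payload)                   # buffer the ordered partial store
            subsystem.preinstall(target, len(payload))    # preinstall command
            if subsystem.query(target, len(payload)):     # query command
                subsystem.receive(target, self.buffer)    # transfer the data
                self.buffer.clear()

    subsystem, controller = MemorySubsystem(), Controller()
    controller.send_ordered_partial_store(subsystem, target=0x40, payload=["p0", "p1"])
    print(subsystem.local[0x40], subsystem.local[0x41])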
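
For patent 10042554: a sketch of the conditional-request scheme in which the second of two sequentially ordered stores is cancelled if the destination rejects the first. The random reject is purely illustrative.

    # Editor's sketch: send the first store, and only commit the second if the
    # first was not rejected; a reject response cancels the second store.

    import random

    def send_store(destination, store):
        # The destination may reject a store (e.g. because of a resource conflict).
        accepted = random.random() > 0.3
        print(f"{store} -> {destination}: {'accepted' if accepted else 'rejected'}")
        return accepted

    def send_ordered_pair(destination, first, second):
        first_ok = send_store(destination, first)
        # Conditional request: the second store depends on successful
        # completion of the first.
        if first_ok:
            send_store(destination, second)
        else:
            print(f"{second} cancelled (reject response received for {first})")

    random.seed(1)
    send_ordered_pair("destination memory device", "store 1", "store 2")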
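
For patents 9703661 and 9678848: a sketch of taking corrupt cache sets offline at runtime: mark the affected sets in an index as unusable for future operations, purge their data to higher-level memory, then mark them invalid. The cache layout is invented.

    # Editor's sketch: a tiny (index, set)-addressed cache and the three-step
    # offline sequence described in the abstract.

    higher_level_memory = {}

    cache = {
        (0, 0): {"state": "valid", "data": "lineA", "usable": True},
        (0, 1): {"state": "valid", "data": "lineB", "usable": True},
        (1, 0): {"state": "valid", "data": "lineC", "usable": True},
    }

    def take_offline(index, sets):
        for s in sets:
            entry = cache[(index, s)]
            entry["usable"] = False                               # 1. mark unusable
            if entry["data"] is not None:
                higher_level_memory[(index, s)] = entry["data"]   # 2. purge the data
                entry["data"] = None
            entry["state"] = "invalid"                            # 3. mark the set invalid

    take_offline(index=0, sets=[0, 1])    # notification named index 0, sets 0 and 1
    print(cache[(0, 0)], higher_level_memory)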
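
For patent 9594689: a sketch of backing up designated cache data: when a line carrying the backup designation is updated, its changed status is cleared and a copy is written to higher-level memory, so a clean backup always exists. The flag names are assumptions.

    # Editor's sketch: only lines flagged as designated are mirrored to
    # higher-level memory when they are stored to.

    higher_level_memory = {}

    cache_index = {
        0x10: {"data": "v1", "designated": True,  "changed": False},
        0x20: {"data": "v1", "designated": False, "changed": False},
    }

    def store(addr, value):
        line = cache_index[addr]
        line["data"], line["changed"] = value, True
        if line["designated"]:
            line["changed"] = False                     # adjust status: treated as unchanged
            higher_level_memory[addr] = line["data"]    # back up a copy to higher-level memory

    store(0x10, "v2")    # designated line: backed up
    store(0x20, "v2")    # not designated: stays only in the cache
    print(higher_level_memory)   # {16: 'v2'} -- only the designated line was backed up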