Patents by Inventor Pak-kin Mak

Pak-kin Mak has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10884946
    Abstract: Aspects include a computer-implemented method that includes receiving an instruction at a processor to perform an operation on a memory block having an address and accessing a state indicator by the processor without altering a value of the state indicator. The state indicator is stored in a memory location independent of the memory block, and accessing includes sending a request to an operator to return the value of the state indicator to the processor. The method also includes determining based on the value of the state indicator whether the memory block is in a pre-defined state.
    Type: Grant
    Filed: September 15, 2015
    Date of Patent: January 5, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Pak-kin Mak, Timothy J. Slegel, Craig R. Walters, Charles F. Webb
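
    The abstract for patent 10884946 above describes querying a separately stored state indicator without disturbing it. The C sketch below is only an illustration of that control flow under the assumption of a software-visible state table; the names state_table, query_state, and block_in_predefined_state are hypothetical and not from the patent.

```c
/* Minimal sketch of the non-destructive state query described above.
 * All names (state_table, query_state) are hypothetical; the patent
 * targets hardware, this only models the control flow. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_BLOCKS 16

/* State indicators live in a table independent of the memory blocks. */
static unsigned char state_table[NUM_BLOCKS];

/* Read the indicator for a block address without altering it. */
static unsigned char query_state(unsigned block_addr)
{
    return state_table[block_addr % NUM_BLOCKS];
}

/* Decide whether the block is in the pre-defined state (here: 0). */
static bool block_in_predefined_state(unsigned block_addr)
{
    return query_state(block_addr) == 0;
}

int main(void)
{
    state_table[3] = 1;                       /* mark block 3 as not in the pre-defined state */
    printf("block 3 pre-defined? %d\n", block_in_predefined_state(3));
    printf("block 5 pre-defined? %d\n", block_in_predefined_state(5));
    return 0;
}
```
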
  • Patent number: 10884945
    Abstract: Aspects include a computer-implemented method that includes receiving an instruction at a processor to perform an operation on a memory block having an address and accessing a state indicator by the processor without altering a value of the state indicator. The state indicator is stored in a memory location independent of the memory block, and accessing includes sending a request to an operator to return the value of the state indicator to the processor. The method also includes determining based on the value of the state indicator whether the memory block is in a pre-defined state.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: January 5, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Pak-kin Mak, Timothy J. Slegel, Craig R. Walters, Charles F. Webb
  • Patent number: 10649908
    Abstract: In an approach for purging an address range from a cache, a processor quiesces a computing system. Cache logic issues a command to purge a section of a cache to higher level memory, wherein the command comprises a starting storage address and a range of storage addresses to be purged. Responsive to each cache of the computing system activating the command, cache logic ends the quiesce of the computing system. Subsequent to ending the quiesce of the computing system, cache logic purges storage addresses from the cache, based on the command, to the higher level memory.
    Type: Grant
    Filed: February 21, 2019
    Date of Patent: May 12, 2020
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. D. Berger, Michael A. Blake, Pak-kin Mak, Robert J. Sonnelitter, III, Guy G. Tracy, Chad G. Wilson
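
    For patent 10649908 above, a minimal C sketch of the quiesce / broadcast / resume / purge ordering described in the abstract; quiesce_system, cache_activate, purge_to_memory, and purge_cmd_t are invented names used only to show the sequence.

```c
/* Rough model of the purge sequence: quiesce, broadcast a purge command
 * carrying a start address and range, resume once every cache has activated
 * the command, then drain matching lines to higher-level memory. */
#include <stdio.h>

typedef struct {
    unsigned long start;   /* starting storage address          */
    unsigned long range;   /* number of addresses to be purged  */
} purge_cmd_t;

static void quiesce_system(void)      { puts("system quiesced"); }
static void end_quiesce(void)         { puts("quiesce ended");   }

static int cache_activate(int id, purge_cmd_t c)
{
    printf("cache %d activated purge [%lx, +%lx)\n", id, c.start, c.range);
    return 1;                          /* activation acknowledged */
}

static void purge_to_memory(int id, purge_cmd_t c)
{
    printf("cache %d purging lines in [%lx, +%lx) to higher level memory\n",
           id, c.start, c.range);
}

int main(void)
{
    purge_cmd_t cmd = { 0x1000, 0x400 };
    quiesce_system();
    int activated = 0;
    for (int id = 0; id < 4; id++)            /* broadcast to every cache     */
        activated += cache_activate(id, cmd);
    if (activated == 4)
        end_quiesce();                        /* resume before the purge runs */
    for (int id = 0; id < 4; id++)            /* purge proceeds after resume  */
        purge_to_memory(id, cmd);
    return 0;
}
```
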
  • Patent number: 10628314
    Abstract: Embodiments of the present invention are directed to managing a shared high-level cache for dual clusters of fully connected integrated circuit multiprocessors. An example of a computer-implemented method includes: providing a drawer comprising a plurality of clusters, each of the plurality of clusters comprising a plurality of processors; providing a shared cache integrated circuit to manage a shared cache memory among the plurality of clusters; receiving, by the shared cache integrated circuit, an operation of one of a plurality of operation types from one of the plurality of processors; and processing, by the shared cache integrated circuit, the operation based at least in part on the operation type of the operation according to a set of rules for processing the operation type.
    Type: Grant
    Filed: November 1, 2017
    Date of Patent: April 21, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael A. Blake, Timothy C. Bronson, Pak-kin Mak, Vesselina K. Papazova, Robert J. Sonnelitter, III
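
    For patent 10628314 above (the same abstract also appears under 10628313 below), a toy C dispatch loop showing how a shared cache chip might route operations by operation type; the operation types and the rules shown are assumptions, not the patented rule set.

```c
/* Toy dispatch loop for a shared-cache chip: operations arriving from
 * processors in either cluster are handled according to a rule keyed by
 * the operation type. */
#include <stdio.h>

typedef enum { OP_FETCH, OP_STORE, OP_INVALIDATE } op_type_t;

typedef struct {
    op_type_t type;
    int       cluster;   /* which of the two clusters issued it   */
    int       processor; /* which processor within that cluster   */
} operation_t;

static void process(operation_t op)
{
    switch (op.type) {                    /* rule set keyed by operation type */
    case OP_FETCH:
        printf("cluster %d cpu %d: fetch -> look up shared cache\n",
               op.cluster, op.processor);
        break;
    case OP_STORE:
        printf("cluster %d cpu %d: store -> gain exclusivity, then write\n",
               op.cluster, op.processor);
        break;
    case OP_INVALIDATE:
        printf("cluster %d cpu %d: invalidate -> cross-invalidate both clusters\n",
               op.cluster, op.processor);
        break;
    }
}

int main(void)
{
    operation_t ops[] = { {OP_FETCH, 0, 2}, {OP_STORE, 1, 5}, {OP_INVALIDATE, 0, 0} };
    for (unsigned i = 0; i < sizeof ops / sizeof ops[0]; i++)
        process(ops[i]);
    return 0;
}
```
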
  • Patent number: 10628313
    Abstract: Embodiments of the present invention are directed to managing a shared high-level cache for dual clusters of fully connected integrated circuit multiprocessors. An example of a computer-implemented method includes: providing a drawer comprising a plurality of clusters, each of the plurality of clusters comprising a plurality of processors; providing a shared cache integrated circuit to manage a shared cache memory among the plurality of clusters; receiving, by the shared cache integrated circuit, an operation of one of a plurality of operation types from one of the plurality of processors; and processing, by the shared cache integrated circuit, the operation based at least in part on the operation type of the operation according to a set of rules for processing the operation type.
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: April 21, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael A. Blake, Timothy C. Bronson, Pak-kin Mak, Vesselina K. Papazova, Robert J. Sonnelitter, III
  • Patent number: 10572385
    Abstract: A cache coherency management facility to reduce latency in granting exclusive access to a cache in certain situations. A node requests exclusive access to a cache line of the cache. The node is in one region of nodes of a plurality of regions of nodes. The one region of nodes includes the node requesting exclusive access and another node of the computing environment, in which the node and the another node are local to one another as defined by predetermined criteria. The node requesting exclusive access checks a locality cache coherency state of the another node, the locality cache coherency state being specific to the another node and indicating whether the another node has access to the cache line. Based on the checking indicating that the another node has access to the cache line, a determination is made that the node requesting exclusive access is to be granted exclusive access to the cache line.
    Type: Grant
    Filed: December 8, 2017
    Date of Patent: February 25, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Timothy C. Bronson, Garrett M. Drapala, Pak-Kin Mak, Vesselina K. Papazova, Hanno Ulrich
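
    For patent 10572385 above, a simplified C sketch of the locality check: consult a per-neighbor coherency state before the full coherency protocol and grant exclusivity on the fast path when the local peer already has access to the line. The struct and function names are hypothetical.

```c
/* Simplified view of the locality check: before a full coherency broadcast,
 * the requesting node inspects a locality state specific to its local peer. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool neighbor_has_line;   /* locality coherency state for the local peer */
} locality_state_t;

/* Returns true when exclusive access may be granted on the fast local path. */
static bool grant_exclusive_fast(const locality_state_t *st)
{
    return st->neighbor_has_line;   /* the peer's access bounds the search */
}

int main(void)
{
    locality_state_t st = { .neighbor_has_line = true };
    if (grant_exclusive_fast(&st))
        puts("exclusive access granted via local region check");
    else
        puts("fall back to full coherency resolution");
    return 0;
}
```
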
  • Patent number: 10529396
    Abstract: A system and method to transfer an ordered partial store of data from a controller to a memory subsystem receives the ordered partial store of data into a buffer of the controller. The method also includes issuing a preinstall command to the memory subsystem, wherein the preinstall command indicates that data from a number of addresses of memory corresponding with a target memory location be obtained in local memory of the memory subsystem along with ownership of the data for subsequent use. A query command is issued to the memory subsystem. The query command requests an indication from the memory subsystem that the memory subsystem is ready to receive and correctly serialize the ordered partial store of data. The ordered partial store of data is transferred from the controller to the memory subsystem.
    Type: Grant
    Filed: June 22, 2017
    Date of Patent: January 7, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ekaterina M. Ambroladze, Sascha Junghans, Matthias Klein, Pak-Kin Mak, Robert J. Sonnelitter, III, Chad G. Wilson
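
    For patent 10529396 above, a short C sketch of the preinstall / query / transfer handshake in the abstract; preinstall, query_ready, and transfer_store are illustrative stand-ins for hardware commands.

```c
/* Sketch of the three-step handshake: buffer the ordered partial store, issue
 * a preinstall so the memory subsystem pulls the target lines and their
 * ownership into local memory, then query for readiness before transferring. */
#include <stdbool.h>
#include <stdio.h>

static void preinstall(unsigned long target, unsigned nlines)
{
    printf("preinstall: fetch %u lines around %#lx with ownership\n", nlines, target);
}

static bool query_ready(void)
{
    puts("query: subsystem reports it can receive and serialize the store");
    return true;
}

static void transfer_store(const char *data)
{
    printf("transfer: ordered partial store \"%s\" sent to memory subsystem\n", data);
}

int main(void)
{
    const char *buffered = "partial-store-payload";  /* held in controller buffer */
    preinstall(0x8000, 4);
    if (query_ready())
        transfer_store(buffered);
    return 0;
}
```
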
  • Patent number: 10489294
    Abstract: Embodiments of the present invention are directed to hot cache line arbitration. An example of a computer-implemented method for hot cache line arbitration includes receiving a request for exclusive access to a cache line from a requestor of a drawer in a processing system. The method further includes bringing the cache line to a local cache of the drawer. The method further includes invalidating copies of the cache line in the processing system. The method further includes loading a remote fetch address register (RFAR) controller on other drawers in the processing system, wherein the RFAR comprises a local pending flag and a remote pending flag.
    Type: Grant
    Filed: April 5, 2017
    Date of Patent: November 26, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael A. Blake, Rebecca M. Gott, Pak-Kin Mak, Vesselina K. Papazova
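
    For patent 10489294 above, a toy C model of the RFAR bookkeeping: bring the line to the requesting drawer, invalidate other copies, and mark pending flags on the remaining drawers. The rfar_t layout and flag handling are assumptions for illustration.

```c
/* Minimal model of the RFAR bookkeeping: an exclusive fetch pulls the line
 * into the requesting drawer's local cache, invalidates other copies, and
 * loads an RFAR entry with local/remote pending flags on the other drawers. */
#include <stdbool.h>
#include <stdio.h>

#define DRAWERS 4

typedef struct {
    bool local_pending;    /* a requester on this drawer is waiting     */
    bool remote_pending;   /* a requester on another drawer is waiting  */
} rfar_t;

static rfar_t rfar[DRAWERS];

static void request_exclusive(int requesting_drawer, unsigned long line)
{
    printf("drawer %d: line %#lx brought to local cache, other copies invalidated\n",
           requesting_drawer, line);
    for (int d = 0; d < DRAWERS; d++) {
        if (d == requesting_drawer)
            continue;
        rfar[d].remote_pending = true;   /* remember the off-drawer requester */
    }
}

int main(void)
{
    request_exclusive(1, 0x40ul);
    for (int d = 0; d < DRAWERS; d++)
        printf("drawer %d RFAR: local=%d remote=%d\n",
               d, rfar[d].local_pending, rfar[d].remote_pending);
    return 0;
}
```
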
  • Patent number: 10437729
    Abstract: In an approach for purging an address range from a cache, a processor quiesces a computing system. Cache logic issues a command to purge a section of a cache to higher level memory, wherein the command comprises a starting storage address and a range of storage addresses to be purged. Responsive to each cache of the computing system activating the command, cache logic ends the quiesce of the computing system. Subsequent to ending the quiesce of the computing system, cache logic purges storage addresses from the cache, based on the command, to the higher level memory.
    Type: Grant
    Filed: April 19, 2017
    Date of Patent: October 8, 2019
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. D. Berger, Michael A. Blake, Pak-kin Mak, Robert J. Sonnelitter, III, Guy G. Tracy, Chad G. Wilson
  • Publication number: 20190251036
    Abstract: In an approach for purging an address range from a cache, a processor quiesces a computing system. Cache logic issues a command to purge a section of a cache to higher level memory, wherein the command comprises a starting storage address and a range of storage addresses to be purged. Responsive to each cache of the computing system activating the command, cache logic ends the quiesce of the computing system. Subsequent to ending the quiesce of the computing system, cache logic purges storage addresses from the cache, based on the command, to the higher level memory.
    Type: Application
    Filed: May 3, 2019
    Publication date: August 15, 2019
    Inventors: Ekaterina M. Ambroladze, Deanna P. D. Berger, Michael A. Blake, Pak-kin Mak, Robert J. Sonnelitter, III, Guy G. Tracy, Chad G. Wilson
  • Publication number: 20190251037
    Abstract: In an approach for purging an address range from a cache, a processor quiesces a computing system. Cache logic issues a command to purge a section of a cache to higher level memory, wherein the command comprises a starting storage address and a range of storage addresses to be purged. Responsive to each cache of the computing system activating the command, cache logic ends the quiesce of the computing system. Subsequent to ending the quiesce of the computing system, cache logic purges storage addresses from the cache, based on the command, to the higher level memory.
    Type: Application
    Filed: May 3, 2019
    Publication date: August 15, 2019
    Inventors: Ekaterina M. Ambroladze, Deanna P. D. Berger, Michael A. Blake, Pak-kin Mak, Robert J. Sonnelitter, III, Guy G. Tracy, Chad G. Wilson
  • Patent number: 10380020
    Abstract: Embodiments include methods, systems, and computer program products for maintaining ordered memory access with parallel access data streams associated with a distributed shared memory system. The computer-implemented method includes performing, by a first cache, a key check, the key check being associated with a first ordered data store. A first memory node signals that the first memory node is ready to begin pipelining of a second ordered data store into the first memory node to an input/output (I/O) controller. A second cache returns a key response to the first cache indicating that the pipelining of the second ordered data store can proceed. The first memory node sends a ready signal indicating that the first memory node is ready to continue pipelining of the second ordered data store into the first memory node to the I/O controller, wherein the ready signal is triggered by receipt of the key response.
    Type: Grant
    Filed: July 17, 2017
    Date of Patent: August 13, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ekaterina M. Ambroladze, Timothy C. Bronson, Matthias Klein, Pak-kin Mak, Vesselina K. Papazova, Robert J. Sonnelitter, III, Lahiruka S. Winter
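
    For patent 10380020 above, a compact C sketch of the ordering handshake: the key response for the first ordered store is what triggers the memory node's ready signal for pipelining the second store. Function names and messages are invented for illustration.

```c
/* Sketch of the ordering handshake: the key check for the first ordered store
 * gates when the memory node may tell the I/O controller to continue
 * pipelining the second store. */
#include <stdbool.h>
#include <stdio.h>

static bool key_response_received = false;

static void cache1_key_check(void)
{
    puts("cache 1: key check for ordered store 1");
}

static void cache2_key_response(void)
{
    puts("cache 2: key response -> pipelining of store 2 may proceed");
    key_response_received = true;
}

static void memory_node_ready(void)
{
    if (key_response_received)    /* ready signal is triggered by the key response */
        puts("memory node: ready, I/O controller continues pipelining store 2");
}

int main(void)
{
    cache1_key_check();
    cache2_key_response();
    memory_node_ready();
    return 0;
}
```
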
  • Patent number: 10379776
    Abstract: An aspect includes interlocking operations in an address-sliced cache system. A computer-implemented method includes determining whether a dynamic memory relocation operation is in process in the address-sliced cache system. Based on determining that the dynamic memory relocation operation is in process, a key operation is serialized to maintain a sequenced order of completion of the key operation across a plurality of slices and pipes in the address-sliced cache system. Based on determining that the dynamic memory relocation operation is not in process, a plurality of key operation requests is allowed to launch across two or more of the slices and pipes in parallel in the address-sliced cache system while ensuring that only one instance of the key operations is in process across all of the slices and pipes at a same time.
    Type: Grant
    Filed: May 24, 2017
    Date of Patent: August 13, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Deanna P. Berger, Michael A. Blake, Ashraf Elsharif, Kenneth D. Klapproth, Pak-kin Mak, Robert J. Sonnelitter, III, Guy G. Tracy
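
    For patent 10379776 above, a rough C sketch of the interlock: serialize key operations while a dynamic memory relocation is in process, otherwise allow parallel launch with a single-instance guard. The flags shown are hypothetical stand-ins for hardware state.

```c
/* Rough control flow for the interlock: while a dynamic memory relocation is
 * active, key operations run strictly one after another; otherwise they may
 * launch across slices/pipes in parallel, as long as only one key operation
 * is in flight system-wide. */
#include <stdbool.h>
#include <stdio.h>

static bool dmr_in_progress  = false;  /* dynamic memory relocation active? */
static bool key_op_in_flight = false;  /* one-instance interlock            */

static void launch_key_op(int slice, int pipe)
{
    if (dmr_in_progress) {
        printf("slice %d pipe %d: serialized key op (DMR active)\n", slice, pipe);
        return;
    }
    if (key_op_in_flight) {
        printf("slice %d pipe %d: held, another key op is in process\n", slice, pipe);
        return;
    }
    key_op_in_flight = true;
    printf("slice %d pipe %d: key op launched in parallel mode\n", slice, pipe);
    key_op_in_flight = false;           /* completes immediately in this toy model */
}

int main(void)
{
    launch_key_op(0, 0);
    dmr_in_progress = true;
    launch_key_op(1, 1);
    return 0;
}
```
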
  • Patent number: 10339064
    Abstract: Embodiments of the present invention are directed to hot cache line arbitration. An example of a computer-implemented method for hot cache line arbitration includes detecting, by a processing device, a hot cache line scenario. The computer-implemented method further includes tracking, by the processing device, hot cache line requests from requestors to determine subsequent satisfaction of the requests. The computer-implemented method further includes facilitating, by the processing device, servicing of the requests according to a hierarchy of the requestors.
    Type: Grant
    Filed: March 29, 2017
    Date of Patent: July 2, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael A. Blake, Timothy C. Bronson, Jason D. Kohl, Pak-Kin Mak, Vesselina K. Papazova
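
    For patent 10339064 above, a small C sketch of the arbitration flow: detect the hot-line scenario, track requestors, and service them in hierarchy order. Sorting by requestor ID is an invented placeholder for the patented hierarchy.

```c
/* Small sketch of the arbitration flow: detect a hot line, record each
 * requestor, and service requests in an order derived from a requestor
 * hierarchy (here simply the lower ID first). */
#include <stdio.h>
#include <stdlib.h>

static int by_hierarchy(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;   /* lower ID = higher priority */
}

int main(void)
{
    int requestors[] = { 7, 2, 5 };             /* tracked hot-line requestors */
    size_t n = sizeof requestors / sizeof requestors[0];

    puts("hot cache line scenario detected");
    qsort(requestors, n, sizeof requestors[0], by_hierarchy);
    for (size_t i = 0; i < n; i++)
        printf("servicing hot-line request from requestor %d\n", requestors[i]);
    return 0;
}
```
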
  • Patent number: 10331576
    Abstract: A computer-implemented method for avoiding false activation of hang avoidance mechanisms of a system is provided. The computer-implemented method includes receiving, by a nest of the system, rejects from a processor core of the system. The rejects are issued based on a cache line being locked by the processor core. The computer-implemented method includes accumulating the rejects by the nest. The computer-implemented method includes determining, by the nest, when an amount of the rejects accumulated by the nest has met or exceeded a programmable threshold. The computer-implemented method also includes triggering, by the nest, a global reset to counters of the hang avoidance mechanisms of the system in response to the amount meeting or exceeding the programmable threshold.
    Type: Grant
    Filed: April 25, 2017
    Date of Patent: June 25, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael A. Blake, Pak-kin Mak, Robert J. Sonnelitter, III, Timothy W. Steele, Gary E. Strait, Poornima P. Sulibele, Guy G. Tracy
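
    For patent 10331576 above, a toy C version of the nest-side logic: accumulate rejects and, once a programmable threshold is met, globally reset the hang-avoidance counters. The threshold value and all names are illustrative.

```c
/* Toy version of the nest-side logic: rejects caused by a core-locked cache
 * line are accumulated, and once a programmable threshold is met or exceeded
 * the hang-avoidance counters are globally reset so they do not fire falsely. */
#include <stdio.h>

#define REJECT_THRESHOLD 8   /* programmable in the real design */

static unsigned reject_count;
static unsigned hang_counters[4];

static void global_reset(void)
{
    for (unsigned i = 0; i < 4; i++)
        hang_counters[i] = 0;
    puts("nest: global reset of hang avoidance counters");
}

static void on_reject(void)
{
    if (++reject_count >= REJECT_THRESHOLD) {
        global_reset();
        reject_count = 0;
    }
}

int main(void)
{
    for (int i = 0; i < 10; i++)   /* core keeps the line locked, nest sees rejects */
        on_reject();
    return 0;
}
```
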
  • Publication number: 20190179765
    Abstract: In an approach for purging an address range from a cache, a processor quiesces a computing system. Cache logic issues a command to purge a section of a cache to higher level memory, wherein the command comprises a starting storage address and a range of storage addresses to be purged. Responsive to each cache of the computing system activating the command, cache logic ends the quiesce of the computing system. Subsequent to ending the quiesce of the computing system, cache logic purges storage addresses from the cache, based on the command, to the higher level memory.
    Type: Application
    Filed: February 21, 2019
    Publication date: June 13, 2019
    Inventors: Ekaterina M. Ambroladze, Deanna P. D. Berger, Michael A. Blake, Pak-kin Mak, Robert J. Sonnelitter, III, Guy G. Tracy, Chad G. Wilson
  • Patent number: 10310982
    Abstract: A computer-implemented method for managing cache memory in a distributed symmetric multiprocessing computer is described. The method may include receiving, at a first central processor (CP) chip, a fetch request from a first chip. The method may further include determining via address compare mechanisms on the first CP chip whether one or more of a second CP chip and a third CP chip is requesting access to a target line. The first CP chip, the second CP chip, and the third CP chip are within the same chip cluster. The method further includes providing access to the target line if both of the second CP chip and the third CP chip have accessed the target line at least one time since the first CP chip has accessed the target line.
    Type: Grant
    Filed: December 15, 2016
    Date of Patent: June 4, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Deanna Postles Dunn Berger, Johnathon J. Hoste, Pak-kin Mak, Arthur J. O'Neill, Jr., Robert J. Sonnelitter, III
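
    For patent 10310982 above, a minimal C sketch of the fairness test: the first CP chip regains the target line only after both of the other CP chips in the cluster have accessed it since the first chip's last access. Counters and names are hypothetical.

```c
/* Sketch of the fairness test: the first CP chip is allowed to take the
 * target line again only after both other chips in the cluster have accessed
 * it at least once since the first chip's last access. */
#include <stdbool.h>
#include <stdio.h>

static unsigned accesses_since[3];   /* accesses by chip i since chip 0's last access */

static bool grant_to_first_chip(void)
{
    return accesses_since[1] >= 1 && accesses_since[2] >= 1;
}

int main(void)
{
    accesses_since[1] = 1;           /* second CP chip touched the line */
    printf("grant? %d\n", grant_to_first_chip());   /* 0: third chip has not    */
    accesses_since[2] = 2;           /* third CP chip touched the line  */
    printf("grant? %d\n", grant_to_first_chip());   /* 1: both have accessed it */
    return 0;
}
```
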
  • Publication number: 20190018775
    Abstract: Embodiments include methods, systems, and computer program products for maintaining ordered memory access with parallel access data streams associated with a distributed shared memory system. The computer-implemented method includes performing, by a first cache, a key check, the key check being associated with a first ordered data store. A first memory node signals that the first memory node is ready to begin pipelining of a second ordered data store into the first memory node to an input/output (I/O) controller. A second cache returns a key response to the first cache indicating that the pipelining of the second ordered data store can proceed. The first memory node sends a ready signal indicating that the first memory node is ready to continue pipelining of the second ordered data store into the first memory node to the I/O controller, wherein the ready signal is triggered by receipt of the key response.
    Type: Application
    Filed: July 17, 2017
    Publication date: January 17, 2019
    Inventors: Ekaterina M. Ambroladze, Timothy C. Bronson, Matthias Klein, Pak-kin Mak, Vesselina K. Papazova, Robert J. Sonnelitter, III, Lahiruka S. Winter
  • Publication number: 20180374522
    Abstract: A system and method to transfer an ordered partial store of data from a controller to a memory subsystem receives the ordered partial store of data into a buffer of the controller. The method also includes issuing a preinstall command to the memory subsystem, wherein the preinstall command indicates that data from a number of addresses of memory corresponding with a target memory location be obtained in local memory of the memory subsystem along with ownership of the data for subsequent use. A query command is issued to the memory subsystem. The query command requests an indication from the memory subsystem that the memory subsystem is ready to receive and correctly serialize the ordered partial store of data. The ordered partial store of data is transferred from the controller to the memory subsystem.
    Type: Application
    Filed: June 22, 2017
    Publication date: December 27, 2018
    Inventors: Ekaterina M. Ambroladze, Sascha Junghans, Matthias Klein, Pak-Kin Mak, Robert J. Sonnelitter, III, Chad G. Wilson
  • Publication number: 20180365070
    Abstract: Methods, systems, and computer program products for managing broadcasts in a distributed symmetric multiprocessing computer are provided. Aspects include defining a default broadcast rate for a plurality of processors in the distributed symmetric multiprocessing computer, wherein the default broadcast rate is a rate at which a processor broadcasts a request for a resource. The one or more broadcasted requests by a first processor are monitored and related responses are utilized to determine a state of the one or more broadcasted requests. The default broadcast rate is adjusted based at least in part on the state of the one or more broadcasted requests.
    Type: Application
    Filed: June 16, 2017
    Publication date: December 20, 2018
    Inventors: Michael A. Blake, Mike Chow, Kenneth D. Klapproth, Pak-kin Mak, Arthur J. O'Neill, Jr., Robert J. Sonnelitter, III
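
    For publication 20180365070 above, a brief C sketch of broadcast-rate management: start every processor at a default rate, observe the state of its outstanding broadcast requests, and adjust. The halve/double policy is an invented placeholder for the patented adjustment.

```c
/* Sketch of broadcast-rate management: start at a default rate, watch the
 * responses to broadcast resource requests, and raise or lower the rate
 * based on the observed state of those requests. */
#include <stdio.h>

typedef enum { REQ_PROGRESSING, REQ_STALLED } req_state_t;

static unsigned broadcast_rate = 100;   /* default broadcasts per interval */

static void adjust_rate(req_state_t state)
{
    if (state == REQ_STALLED && broadcast_rate > 1)
        broadcast_rate /= 2;            /* back off when requests are not progressing */
    else if (state == REQ_PROGRESSING && broadcast_rate < 100)
        broadcast_rate *= 2;            /* recover toward the default rate */
    printf("broadcast rate now %u\n", broadcast_rate);
}

int main(void)
{
    adjust_rate(REQ_STALLED);
    adjust_rate(REQ_STALLED);
    adjust_rate(REQ_PROGRESSING);
    return 0;
}
```
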