Patents by Inventor Pak-kin Mak

Pak-kin Mak has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180341422
    Abstract: An aspect includes interlocking operations in an address-sliced cache system. A computer-implemented method includes determining whether a dynamic memory relocation operation is in process in the address-sliced cache system. Based on determining that the dynamic memory relocation operation is in process, a key operation is serialized to maintain a sequenced order of completion of the key operation across a plurality of slices and pipes in the address-sliced cache system. Based on determining that the dynamic memory relocation operation is not in process, a plurality of key operation requests is allowed to launch across two or more of the slices and pipes in parallel in the address-sliced cache system while ensuring that only one instance of each key operation is in process across all of the slices and pipes at the same time.
    Type: Application
    Filed: May 24, 2017
    Publication date: November 29, 2018
    Inventors: Deanna P. Berger, Michael A. Blake, Ashraf Elsharif, Kenneth D. Klapproth, Pak-kin Mak, Robert J. Sonnelitter, III, Guy G. Tracy
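
To make the interlock concrete, here is a minimal Python sketch of the launch rule described in publication 20180341422: serialize key operations while a dynamic memory relocation is in process, otherwise allow parallel launches with at most one instance of each key operation in flight. All names (KeyOpArbiter, try_launch, and so on) are illustrative assumptions, not from the patent.

```python
import threading

class KeyOpArbiter:
    """Toy model of the key-operation interlock (names are assumptions)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._relocation_active = False
        self._inflight = set()          # key-op types currently in process

    def set_relocation(self, active: bool) -> None:
        with self._lock:
            self._relocation_active = active

    def try_launch(self, op_type: str) -> bool:
        with self._lock:
            if self._relocation_active:
                # Relocation in process: serialize so key operations
                # complete in sequenced order across all slices and pipes.
                if self._inflight:
                    return False
            elif op_type in self._inflight:
                # Parallel launches are allowed, but only one instance of
                # a given key operation may be in process at the same time.
                return False
            self._inflight.add(op_type)
            return True

    def complete(self, op_type: str) -> None:
        with self._lock:
            self._inflight.discard(op_type)
```
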
  • Publication number: 20180341586
    Abstract: Embodiments of the present invention are directed to managing a shared high-level cache for dual clusters of fully connected integrated circuit multiprocessors. An example of a computer-implemented method includes: providing a drawer comprising a plurality of clusters, each of the plurality of clusters comprising a plurality of processors; providing a shared cache integrated circuit to manage a shared cache memory among the plurality of clusters; receiving, by the shared cache integrated circuit, an operation of one of a plurality of operation types from one of the plurality of processors; and processing, by the shared cache integrated circuit, the operation based at least in part on the operation type of the operation according to a set of rules for processing the operation type.
    Type: Application
    Filed: May 26, 2017
    Publication date: November 29, 2018
    Inventors: Michael A. Blake, Timothy C. Bronson, Pak-kin Mak, Vesselina K. Papazova, Robert J. Sonnelitter, III
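
A minimal sketch of the rule-driven processing described in publication 20180341586: the shared cache controller looks up a rule set keyed by operation type and processes the operation accordingly. The operation types and handlers here are hypothetical stand-ins.

```python
from typing import Callable, Dict

# Hypothetical rule set: operation type -> handler on the shared cache chip.
def handle_fetch(op: dict) -> str:
    return f"fetch line {op['addr']:#x} for cluster {op['cluster']}"

def handle_store(op: dict) -> str:
    return f"store line {op['addr']:#x} from cluster {op['cluster']}"

def handle_invalidate(op: dict) -> str:
    return f"invalidate line {op['addr']:#x} in every cluster"

RULES: Dict[str, Callable[[dict], str]] = {
    "fetch": handle_fetch,
    "store": handle_store,
    "invalidate": handle_invalidate,
}

def process(op: dict) -> str:
    """Process an operation according to the rule set for its type."""
    return RULES[op["type"]](op)

print(process({"type": "fetch", "addr": 0x80, "cluster": 1}))
```
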
  • Publication number: 20180341587
    Abstract: Embodiments of the present invention are directed to managing a shared high-level cache for dual clusters of fully connected integrated circuit multiprocessors. An example of a computer-implemented method includes: providing a drawer comprising a plurality of clusters, each of the plurality of clusters comprising a plurality of processors; providing a shared cache integrated circuit to manage a shared cache memory among the plurality of clusters; receiving, by the shared cache integrated circuit, an operation of one of a plurality of operation types from one of the plurality of processors; and processing, by the shared cache integrated circuit, the operation based at least in part on the operation type of the operation according to a set of rules for processing the operation type.
    Type: Application
    Filed: November 1, 2017
    Publication date: November 29, 2018
    Inventors: Michael A. Blake, Timothy C. Bronson, Pak-kin Mak, Vesselina K. Papazova, Robert J. Sonnelitter, III
  • Publication number: 20180307628
    Abstract: A computer implemented method for avoiding false activation of hang avoidance mechanisms of a system is provided. The computer implemented method includes receiving, by a nest of the system, rejects from a processor core of the system. The rejects are issued based on a cache line being locked by the processor core. The computer implemented method includes accumulating the rejects by the nest. The computer implemented method includes determining, by the nest, when an amount of the rejects accumulated by the nest has met or exceeded a programmable threshold. The computer implemented method also includes triggering, by the nest, a global reset to counters of the hang avoidance mechanisms of the system in response to the amount meeting or exceeding the programmable threshold.
    Type: Application
    Filed: April 25, 2017
    Publication date: October 25, 2018
    Inventors: Michael A. Blake, Pak-kin Mak, Robert J. Sonnelitter, III, Timothy W. Steele, Gary E. Strait, Poornima P. Sulibele, Guy G. Tracy
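
A toy Python model of the reject-accumulation idea in publication 20180307628: the nest counts rejects caused by a legitimately locked cache line and, once a programmable threshold is met or exceeded, resets the hang-avoidance counters rather than declaring a false hang. Class and field names are assumptions.

```python
class RejectMonitor:
    """Nest-side reject accumulator (class and field names are assumed)."""

    def __init__(self, threshold: int, hang_counters: list):
        self.threshold = threshold      # programmable threshold
        self.rejects = 0                # rejects accumulated by the nest
        self.hang_counters = hang_counters

    def on_reject(self) -> None:
        self.rejects += 1
        if self.rejects >= self.threshold:
            # The line is legitimately locked by the core, so reset the
            # hang-avoidance counters instead of declaring a false hang.
            for i in range(len(self.hang_counters)):
                self.hang_counters[i] = 0
            self.rejects = 0

counters = [5, 9, 2]
monitor = RejectMonitor(threshold=3, hang_counters=counters)
for _ in range(3):
    monitor.on_reject()
assert counters == [0, 0, 0]        # global reset was triggered
```
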
  • Publication number: 20180307612
    Abstract: In an approach for purging an address range from a cache, a processor quiesces a computing system. Cache logic issues a command to purge a section of a cache to higher level memory, wherein the command comprises a starting storage address and a range of storage addresses to be purged. Responsive to each cache of the computing system activating the command, cache logic ends the quiesce of the computing system. Subsequent to ending the quiesce of the computing system, cache logic purges storage addresses from the cache, based on the command, to the higher level memory.
    Type: Application
    Filed: April 19, 2017
    Publication date: October 25, 2018
    Inventors: Ekaterina M. Ambroladze, Deanna P. D. Berger, Michael A. Blake, Pak-kin Mak, Robert J. Sonnelitter, III, Guy G. Tracy, Chad G. Wilson
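
A simplified sketch of the purge flow in publication 20180307612: the quiesce window lasts only long enough for every cache to accept the purge command (start address plus range); the actual cast-out to higher-level memory happens after the system resumes. The Cache class and method names are illustrative.

```python
class Cache:
    def __init__(self, lines: dict):
        self.lines = dict(lines)        # addr -> data
        self.pending = None             # (start, end) of an active purge

    def activate_purge(self, start: int, length: int) -> None:
        self.pending = (start, start + length)

    def drain_purge(self, memory: dict) -> None:
        lo, hi = self.pending
        for addr in [a for a in self.lines if lo <= a < hi]:
            memory[addr] = self.lines.pop(addr)   # cast out to memory
        self.pending = None

def purge_range(caches, memory: dict, start: int, length: int) -> None:
    # Quiesce window: every cache merely accepts the command.
    for c in caches:
        c.activate_purge(start, length)
    # Quiesce ends here; purging proceeds while the system runs.
    for c in caches:
        c.drain_purge(memory)

memory: dict = {}
c1 = Cache({0x100: "a", 0x200: "b"})
purge_range([c1], memory, start=0x100, length=0x100)
assert memory == {0x100: "a"} and 0x200 in c1.lines
```
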
  • Publication number: 20180293172
    Abstract: Embodiments of the present invention are directed to hot cache line arbitration. An example of a computer-implemented method for hot cache line arbitration includes receiving a request for exclusive access to a cache line from a requestor of a drawer in a processing system. The method further includes bringing the cache line to a local cache of the drawer. The method further includes invalidating copies of the cache line in the processing system. The method further includes loading a remote fetch address register (RFAR) controller on other drawers in the processing system, wherein the RFAR comprises a local pending flag and a remote pending flag.
    Type: Application
    Filed: April 5, 2017
    Publication date: October 11, 2018
    Inventors: Michael A. Blake, Rebecca M. Gott, Pak-Kin Mak, Vesselina K. Papazova
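
A minimal sketch of the RFAR mechanism in publication 20180293172: an exclusive fetch moves the line to the requesting drawer, invalidates remote copies, and loads an RFAR entry with local and remote pending flags on the other drawers. The data structures are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class RFAREntry:
    addr: int
    local_pending: bool = False     # a waiter on the same drawer
    remote_pending: bool = False    # a waiter on another drawer

@dataclass
class Drawer:
    cache: dict = field(default_factory=dict)   # addr -> state
    rfar: dict = field(default_factory=dict)    # addr -> RFAREntry

def fetch_exclusive(addr: int, requester: Drawer, drawers: list) -> None:
    requester.cache[addr] = "EX"        # line now held by the local cache
    for d in drawers:
        if d is not requester:
            d.cache.pop(addr, None)     # invalidate remote copies
            # Arm the RFAR controller so later requests are tracked.
            d.rfar[addr] = RFAREntry(addr, remote_pending=True)
```
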
  • Publication number: 20180285277
    Abstract: Embodiments of the present invention are directed to hot cache line arbitration. An example of a computer-implemented method for hot cache line arbitration includes detecting, by a processing device, a hot cache line scenario. The computer-implemented method further includes tracking, by the processing device, hot cache line requests from requesters to determine subsequent satisfaction of the requests. The computer-implemented method further includes facilitating, by the processing device, servicing of the requests according to a hierarchy of the requesters.
    Type: Application
    Filed: March 29, 2017
    Publication date: October 4, 2018
    Inventors: Michael A. Blake, Timothy C. Bronson, Jason D. Kohl, Pak-Kin Mak, Vesselina K. Papazova
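
A toy arbiter for the hierarchy-based servicing in publication 20180285277: hot-line requests are tracked as they arrive and granted according to an assumed requester hierarchy (closer requesters first, FIFO within a level). The ranking encoding is hypothetical.

```python
import heapq

# Assumed encoding: smaller rank = higher in the requester hierarchy.
HIERARCHY = {"same_chip": 0, "same_drawer": 1, "remote_drawer": 2}

class HotLineArbiter:
    def __init__(self):
        self.waiters = []   # min-heap of (rank, arrival, requester)
        self.arrival = 0

    def request(self, requester: str, locality: str) -> None:
        heapq.heappush(self.waiters,
                       (HIERARCHY[locality], self.arrival, requester))
        self.arrival += 1   # keeps FIFO order within a hierarchy level

    def grant_next(self):
        return heapq.heappop(self.waiters)[2] if self.waiters else None

arb = HotLineArbiter()
arb.request("coreA", "remote_drawer")
arb.request("coreB", "same_chip")
print(arb.grant_next())     # coreB is served first
```
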
  • Patent number: 10055355
    Abstract: In an approach for purging an address range from a cache, a processor quiesces a computing system. Cache logic issues a command to purge a section of a cache to higher level memory, wherein the command comprises a starting storage address and a range of storage addresses to be purged. Responsive to each cache of the computing system activating the command, cache logic ends the quiesce of the computing system. Subsequent to ending the quiesce of the computing system, cache logic purges storage addresses from the cache, based on the command, to the higher level memory.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: August 21, 2018
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. D. Berger, Michael A. Blake, Pak-Kin Mak, Robert J. Sonnelitter, III, Guy G. Tracy, Chad G. Wilson
  • Publication number: 20180173630
    Abstract: A computer-implemented method for managing cache memory in a distributed symmetric multiprocessing computer is described. The method may include receiving, at a first central processor (CP) chip, a fetch request from a first chip. The method may further include determining, via address compare mechanisms on the first CP chip, whether one or more of a second CP chip and a third CP chip is requesting access to a target line. The first CP chip, the second CP chip, and the third CP chip are within the same chip cluster. The method further includes providing access to the target line if both of the second CP chip and the third CP chip have accessed the target line at least one time since the first CP chip has accessed the target line.
    Type: Application
    Filed: December 15, 2016
    Publication date: June 21, 2018
    Inventors: Deanna Postles Dunn Berger, Johnathon J. Hoste, Pak-kin Mak, Arthur J. O'Neill, Jr., Robert J. Sonnelitter, III
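
A small Python sketch of the fairness rule in publication 20180173630, from the point of view of the first CP chip: it re-acquires the target line only after both peer CP chips have accessed it at least once. The per-chip flags stand in for the abstract's address-compare mechanisms and are assumptions.

```python
# Per-peer flags standing in for the address-compare mechanisms: has each
# peer CP chip touched the line since this chip's last access?
accessed_since_me = {"cp2": False, "cp3": False}

def on_peer_access(chip: str) -> None:
    accessed_since_me[chip] = True

def may_reacquire() -> bool:
    # The first CP chip takes the line again only after both peers
    # have accessed it at least one time.
    return all(accessed_since_me.values())

def on_my_access() -> None:
    for chip in accessed_since_me:
        accessed_since_me[chip] = False

on_peer_access("cp2")
assert not may_reacquire()      # cp3 has not had a turn yet
on_peer_access("cp3")
assert may_reacquire()
```
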
  • Publication number: 20180101474
    Abstract: A cache coherency management facility to reduce latency in granting exclusive access to a cache in certain situations. A node requests exclusive access to a cache line of the cache. The node is in one region of nodes of a plurality of regions of nodes. The one region of nodes includes the node requesting exclusive access and another node of the computing environment, in which the node and the another node are local to one another as defined by predetermined criteria. The node requesting exclusive access checks a locality cache coherency state of the another node, the locality cache coherency state being specific to the another node and indicating whether the another node has access to the cache line. Based on the checking indicating that the another node has access to the cache line, a determination is made that the node requesting exclusive access is to be granted exclusive access to the cache line.
    Type: Application
    Filed: December 8, 2017
    Publication date: April 12, 2018
    Inventors: Timothy C. Bronson, Garrett M. Drapala, Pak-Kin Mak, Vesselina K. Papazova, Hanno Ulrich
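
A minimal sketch of the locality check in publication 20180101474: exclusive access is granted with reduced latency when every other holder of the line is a local peer in the requester's region; otherwise the full coherency protocol is assumed to run. The function signature is illustrative.

```python
def grant_exclusive(requester: str, locality_state: dict,
                    local_peers: set) -> bool:
    """locality_state maps node -> has-access flag; local_peers is the
    requester's region. Both encodings are illustrative assumptions."""
    holders = {n for n, has in locality_state.items()
               if has and n != requester}
    # If only local peers hold the line, grant exclusivity immediately.
    return holders <= local_peers

state = {"n0": True, "n1": True, "n2": False}
print(grant_exclusive("n0", state, local_peers={"n1"}))   # True: fast grant
```
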
  • Patent number: 9892043
    Abstract: A computer system comprising multiple nodes, each node comprising a plurality of processors and a local cache hierarchy, suppresses local cache coherency operations of a node or global cache coherency operations between nodes based on whether the coherency request is a global or local request and on the state of the cache line at the node.
    Type: Grant
    Filed: April 27, 2017
    Date of Patent: February 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Garrett Michael Drapala, William J. Lewis, Pak-kin Mak, Robert J. Sonnelitter, III
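
A toy decision function for the suppression logic in patent 9892043: depending on whether the coherency request is local or global and on the cache line's state at the node, some local or global coherency traffic can be skipped. The state names and rules are simplified assumptions, not the patented protocol.

```python
def coherency_ops(scope: str, line_state: str) -> list:
    """scope is 'local' or 'global'; line_state is a simplified
    MESI-like state at the node."""
    if scope == "local" and line_state in ("exclusive", "modified"):
        # The node already owns the line: no global broadcast needed.
        return ["local_coherency"]
    if scope == "global" and line_state == "invalid":
        # Nothing held locally: the local hierarchy walk can be skipped.
        return ["global_coherency"]
    return ["local_coherency", "global_coherency"]

print(coherency_ops("local", "exclusive"))    # ['local_coherency']
```
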
  • Patent number: 9858190
    Abstract: Maintaining store order with high throughput in a distributed shared memory system. A request is received for a first ordered data store and a coherency check is initiated. A signal is sent that pipelining of a second ordered data store can be initiated. If a delay condition is encountered during the coherency check for the first ordered data store, rejection of the first ordered data store is signaled. If a delay condition is not encountered during the coherency check for the first ordered data store, a signal is sent indicating a readiness to continue pipelining of the second ordered data store.
    Type: Grant
    Filed: January 27, 2015
    Date of Patent: January 2, 2018
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Timothy C. Bronson, Garrett M. Drapala, Michael Fee, Matthias Klein, Pak-kin Mak, Robert J. Sonnelitter, III, Gary E. Strait
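
A simplified Python model of the pipelined ordered-store flow in patent 9858190: each store's coherency check either completes cleanly, letting the next ordered store continue pipelining, or hits a delay condition, in which case the store is rejected and re-driven. coherency_check and the retry bound are assumptions.

```python
def drive_stores(stores: list, coherency_check, max_retries: int = 3) -> list:
    """coherency_check(store) returns True when the check completes
    without a delay condition (a stand-in for the real check)."""
    completed = []
    for store in stores:
        # Pipelining of the next ordered store may begin as soon as
        # this store launches its coherency check.
        for _ in range(max_retries + 1):
            if coherency_check(store):
                completed.append(store)   # ready to continue the pipeline
                break
            # Delay condition: the store is rejected and re-driven,
            # which preserves the architected store order.
    return completed

print(drive_stores(["st1", "st2"], lambda s: True))   # ['st1', 'st2']
```
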
  • Patent number: 9852071
    Abstract: A cache coherency management facility to reduce latency in granting exclusive access to a cache in certain situations. A node requests exclusive access to a cache line of the cache. The node is in one region of nodes of a plurality of regions of nodes. The one region of nodes includes the node requesting exclusive access and another node of the computing environment, in which the node and the another node are local to one another as defined by predetermined criteria. The node requesting exclusive access checks a locality cache coherency state of the another node, the locality cache coherency state being specific to the another node and indicating whether the another node has access to the cache line. Based on the checking indicating that the another node has access to the cache line, a determination is made that the node requesting exclusive access is to be granted exclusive access to the cache line.
    Type: Grant
    Filed: October 20, 2014
    Date of Patent: December 26, 2017
    Assignee: International Business Machines Corporation
    Inventors: Timothy C. Bronson, Garrett M. Drapala, Pak-kin Mak, Vesselina K. Papazova, Hanno Ulrich
  • Patent number: 9798663
    Abstract: A cache coherency management facility to reduce latency in granting exclusive access to a cache in certain situations. A node requests exclusive access to a cache line of the cache. The node is in one region of nodes of a plurality of regions of nodes. The one region of nodes includes the node requesting exclusive access and another node of the computing environment, in which the node and the another node are local to one another as defined by predetermined criteria. The node requesting exclusive access checks a locality cache coherency state of the another node, the locality cache coherency state being specific to the another node and indicating whether the another node has access to the cache line. Based on the checking indicating that the another node has access to the cache line, a determination is made that the node requesting exclusive access is to be granted exclusive access to the cache line.
    Type: Grant
    Filed: September 7, 2015
    Date of Patent: October 24, 2017
    Assignee: International Business Machines Corporation
    Inventors: Timothy C. Bronson, Garrett M. Drapala, Pak-kin Mak, Vesselina K. Papazova, Hanno Ulrich
  • Publication number: 20170228317
    Abstract: A computer system comprising multiple nodes, each node comprising a plurality of processors and a local cache hierarchy, suppresses local cache coherency operations of a node or global cache coherency operations between nodes based on whether the coherency request is a global or local request and on the state of the cache line at the node.
    Type: Application
    Filed: April 27, 2017
    Publication date: August 10, 2017
    Applicant: International Business Machines Corporation
    Inventors: Garrett Michael Drapala, William J. Lewis, Pak-kin Mak, Robert J. Sonnelitter, III
  • Patent number: 9727464
    Abstract: A computer system comprising multiple nodes, each node comprising a plurality of processors and a local cache hierarchy, suppresses local cache coherency operations of a node or global cache coherency operations between nodes based on whether the coherency request is a global or local request and on the state of the cache line at the node.
    Type: Grant
    Filed: November 20, 2014
    Date of Patent: August 8, 2017
    Assignee: International Business Machines Corporation
    Inventors: Garrett Michael Drapala, William J. Lewis, Pak-kin Mak, Robert J. Sonnelitter, III
  • Patent number: 9720833
    Abstract: A computer system comprising multiple nodes, each node comprising a plurality of processors and a local cache hierarchy, suppresses local cache coherency operations of a node or global cache coherency operations between nodes based on whether the coherency request is a global or local request and on the state of the cache line at the node.
    Type: Grant
    Filed: August 3, 2015
    Date of Patent: August 1, 2017
    Assignee: International Business Machines Corporation
    Inventors: Garrett Michael Drapala, William J. Lewis, Pak-kin Mak, Robert J. Sonnelitter, III
  • Patent number: 9600360
    Abstract: An aspect includes receiving a fetch request for a data block at a cache memory system that includes cache memory that is partitioned into a plurality of cache data ways including a cache data way that contains the data block. The data block is fetched and it is determined whether the in-line ECC checking and correcting should be bypassed. The determining is based on a bypass indicator corresponding to the cache data way. Based on determining that in-line ECC checking and correcting should be bypassed, returning the fetched data block to the requestor and performing an ECC process for the fetched data block subsequent to returning the fetched data block to the requestor. Based on determining that in-line ECC checking and correcting should not be bypassed, performing the ECC process for the fetched data block and returning the fetched data block to the requestor subsequent to performing the ECC process.
    Type: Grant
    Filed: November 21, 2014
    Date of Patent: March 21, 2017
    Assignee: International Business Machines Corporation
    Inventors: Michael F. Fee, Pak-kin Mak, Arthur J. O'Neill, Jr., Deanna Postles Dunn Berger
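
A minimal sketch of the per-way ECC bypass in patent 9600360: if the bypass indicator for the cache data way is set, the fetched data is returned to the requestor first and the ECC process runs afterward; otherwise ECC checking and correcting happens in-line before the return. The table layout and ecc_check stub are assumptions.

```python
CACHE_WAYS = {0: {0x40: b"payload"}}    # cache data way -> {addr: data}
BYPASS = {0: True}                      # per-way bypass indicators

def ecc_check(data: bytes) -> bytes:
    return data                         # stand-in for ECC check/correct

def fetch(way: int, addr: int) -> bytes:
    data = CACHE_WAYS[way][addr]
    if BYPASS[way]:
        # Bypass: return the data first, then run ECC off the
        # critical path.
        result = data
        ecc_check(data)
    else:
        # In-line ECC: check and correct before returning.
        result = ecc_check(data)
    return result

print(fetch(0, 0x40))
```
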
  • Patent number: 9600361
    Abstract: An aspect includes receiving a fetch request for a data block at a cache memory system that includes cache memory that is partitioned into a plurality of cache data ways including a cache data way that contains the data block. The data block is fetched and it is determined whether the in-line ECC checking and correcting should be bypassed. The determining is based on a bypass indicator corresponding to the cache data way. Based on determining that in-line ECC checking and correcting should be bypassed, returning the fetched data block to the requestor and performing an ECC process for the fetched data block subsequent to returning the fetched data block to the requestor. Based on determining that in-line ECC checking and correcting should not be bypassed, performing the ECC process for the fetched data block and returning the fetched data block to the requestor subsequent to performing the ECC process.
    Type: Grant
    Filed: August 12, 2015
    Date of Patent: March 21, 2017
    Assignee: International Business Machines Corporation
    Inventors: Michael F. Fee, Pak-kin Mak, Arthur J. O'Neill, Jr., Deanna Postles Dunn Berger
  • Patent number: 9594689
    Abstract: In an approach for backing up designated data located in a cache, data stored within an index of a cache is identified, wherein the data has an associated designation indicating that the data is applicable to be backed up to a higher level memory. It is determined that the data stored to the cache has been updated. A status associated with the data is adjusted, such that the adjusted status indicates that the data stored to the cache has not been changed. A copy of the data is created. The copy of the data is stored to the higher level memory.
    Type: Grant
    Filed: February 9, 2015
    Date of Patent: March 14, 2017
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Garrett M. Drapala, Michael Fee, Pak-kin Mak, Arthur J. O'Neill, Jr., Diana L. Orf
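
A toy Python model of the designated-data backup in patent 9594689: entries marked as designated are copied to higher-level memory once they have been updated, and their status is adjusted to indicate the cached data is unchanged again. Class and field names are illustrative.

```python
class BackedUpCache:
    def __init__(self):
        self.index = {}         # addr -> [data, designated, changed]
        self.higher_level = {}  # backing store for the backups

    def write(self, addr: int, data, designated: bool = False) -> None:
        self.index[addr] = [data, designated, True]   # mark as changed

    def backup_pass(self) -> None:
        for addr, entry in self.index.items():
            data, designated, changed = entry
            if designated and changed:
                entry[2] = False                  # status: unchanged again
                self.higher_level[addr] = data    # copy to higher memory

cache = BackedUpCache()
cache.write(0x100, "payload", designated=True)
cache.backup_pass()
assert cache.higher_level[0x100] == "payload"
```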