Patents by Inventor Pak-kin Mak

Pak-kin Mak has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9558119
    Abstract: Main memory operation in a symmetric multiprocessing computer, the computer comprising one or more processors operatively coupled through a cache controller to at least one cache of main memory, the main memory shared among the processors, the computer further comprising input/output (‘I/O’) resources, including receiving, in the cache controller from an issuing resource, a memory instruction for a memory address, the memory instruction requiring writing data to main memory; locking by the cache controller the memory address against further memory operations for the memory address; advising the issuing resource of completion of the memory instruction before the memory instruction completes in main memory; issuing by the cache controller the memory instruction to main memory; and unlocking the memory address only after completion of the memory instruction in main memory.
    Type: Grant
    Filed: June 23, 2010
    Date of Patent: January 31, 2017
    Assignee: International Business Machines Corporation
    Inventors: Garrett M. Drapala, Pak-Kin Mak, Arthur J. O'Neill, Jr., Craig R. Walters
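
To make the sequence above concrete, here is a minimal single-threaded C sketch of the lock, early ack, write, unlock flow. Every name (lock_table, handle_store, and so on) is hypothetical: the patent describes hardware cache-controller logic, and this model only mirrors the ordering of events, not any real implementation.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LOCK_SLOTS 8

/* Hypothetical lock table: addresses locked against further memory operations. */
static uint64_t lock_table[LOCK_SLOTS];
static bool     lock_used[LOCK_SLOTS];

static bool try_lock(uint64_t addr) {
    for (int i = 0; i < LOCK_SLOTS; i++)   /* reject if the address is locked */
        if (lock_used[i] && lock_table[i] == addr) return false;
    for (int i = 0; i < LOCK_SLOTS; i++)   /* claim a free slot */
        if (!lock_used[i]) { lock_used[i] = true; lock_table[i] = addr; return true; }
    return false;                          /* table full: issuer must retry */
}

static void unlock(uint64_t addr) {        /* only after main memory confirms */
    for (int i = 0; i < LOCK_SLOTS; i++)
        if (lock_used[i] && lock_table[i] == addr) lock_used[i] = false;
}

/* The issuing resource is advised of completion as soon as the address is
 * locked, long before the (slow) main-memory write actually finishes. */
static bool handle_store(uint64_t addr) {
    if (!try_lock(addr)) return false;     /* later ops on a locked address wait */
    printf("ack to issuer: store 0x%llx complete (early)\n",
           (unsigned long long)addr);
    /* ... memory instruction issued to main memory asynchronously here ... */
    return true;
}

static void on_memory_write_done(uint64_t addr) { unlock(addr); }

int main(void) {
    handle_store(0x1000);          /* acked immediately */
    handle_store(0x1000);          /* rejected: the address is still locked */
    on_memory_write_done(0x1000);  /* write reached main memory: unlock */
    handle_store(0x1000);          /* succeeds again */
    return 0;
}
```
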
  • Publication number: 20170003884
    Abstract: Aspects include a computer-implemented method that includes receiving an instruction at a processor to perform an operation on a memory block having an address and accessing a state indicator by the processor without altering a value of the state indicator. The state indicator is stored in a memory location independent of the memory block, and accessing includes sending a request to an operator to return the value of the state indicator to the processor. The method also includes determining based on the value of the state indicator whether the memory block is in a pre-defined state.
    Type: Application
    Filed: September 15, 2015
    Publication date: January 5, 2017
    Inventors: Pak-kin Mak, Timothy J. Slegel, Craig R. Walters, Charles F. Webb
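
A hedged sketch of the state-indicator query described in this publication (and in the related 20170003886 below): a per-block flag kept apart from the data records whether the block is in a pre-defined state, so the processor can test that state without touching the block or altering the indicator. The all-zeros state and every name here are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define BLOCK_SIZE 256
#define NUM_BLOCKS 4

static uint8_t memory[NUM_BLOCKS][BLOCK_SIZE];  /* the memory blocks */
static bool    is_zero[NUM_BLOCKS];             /* state indicators, stored separately */

/* "Operator" request: returns the indicator's value; never alters it. */
static bool query_state(int block) { return is_zero[block]; }

static void write_block(int block, const uint8_t *src) {
    memcpy(memory[block], src, BLOCK_SIZE);
    is_zero[block] = false;                     /* writes clear the indicator */
}

static void clear_block(int block) {
    memset(memory[block], 0, BLOCK_SIZE);
    is_zero[block] = true;                      /* block is in the pre-defined state */
}

int main(void) {
    clear_block(0);
    printf("block 0 zero? %d\n", query_state(0));  /* 1: no access to the block needed */
    uint8_t data[BLOCK_SIZE] = {42};
    write_block(0, data);
    printf("block 0 zero? %d\n", query_state(0));  /* 0 */
    return 0;
}
```
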
  • Publication number: 20170003886
    Abstract: Aspects include a computer-implemented method that includes receiving an instruction at a processor to perform an operation on a memory block having an address and accessing a state indicator by the processor without altering a value of the state indicator. The state indicator is stored in a memory location independent of the memory block, and accessing includes sending a request to an operator to return the value of the state indicator to the processor. The method also includes determining based on the value of the state indicator whether the memory block is in a pre-defined state.
    Type: Application
    Filed: June 30, 2015
    Publication date: January 5, 2017
    Inventors: Pak-kin Mak, Timothy J. Slegel, Craig R. Walters, Charles F. Webb
  • Patent number: 9495107
    Abstract: A computing device is provided and includes a first physical memory device, a second physical memory device and a hypervisor configured to assign resources of the first and second physical memory devices to a logical partition. The hypervisor configures a dynamic memory relocation (DMR) mechanism to move entire storage increments currently processed by the logical partition between the first and second physical memory devices in a manner that is substantially transparent to the logical partition.
    Type: Grant
    Filed: November 19, 2014
    Date of Patent: November 15, 2016
    Assignee: International Business Machines Corporation
    Inventors: Timothy C. Bronson, Garrett M. Drapala, Mark S. Farrell, Hieu T. Huynh, William J. Lewis, Pak-Kin Mak, Craig R. Walters
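
A toy model of the relocation above, assuming a per-increment indirection table: the logical partition addresses logical storage increments, and the hypervisor retargets an increment from one physical device to the other by copying it and swinging the pointer, which keeps the move transparent to the partition. Names are illustrative; the patented mechanism is hardware-level, not a memcpy.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define INCR_SIZE 4096
#define NUM_INCRS 8

static uint8_t device_a[NUM_INCRS][INCR_SIZE];  /* first physical memory device */
static uint8_t device_b[NUM_INCRS][INCR_SIZE];  /* second physical memory device */
static uint8_t *incr_map[NUM_INCRS];            /* logical increment -> physical backing */

/* Partition-visible access goes through the map, so relocation is transparent. */
static uint8_t *partition_addr(int incr, int offset) { return incr_map[incr] + offset; }

/* Hypervisor-side DMR: copy the entire increment, then retarget the map entry. */
static void dmr_move(int incr, uint8_t (*dst)[INCR_SIZE]) {
    memcpy(dst[incr], incr_map[incr], INCR_SIZE);
    incr_map[incr] = dst[incr];                 /* partition never sees the switch */
}

int main(void) {
    for (int i = 0; i < NUM_INCRS; i++) incr_map[i] = device_a[i];
    *partition_addr(3, 0) = 0x5a;               /* partition writes increment 3 */
    dmr_move(3, device_b);                      /* hypervisor relocates it */
    printf("after move: %#x\n", (unsigned)*partition_addr(3, 0));  /* 0x5a, same view */
    return 0;
}
```
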
  • Patent number: 9489255
    Abstract: A method, system, and/or computer program product for dynamic array masking is provided. Dynamic array masking includes, during execution of computer instructions that access a cache memory, detecting an error condition in a portion of the cache memory. The portion of the cache memory contains an array macro. Dynamic array masking, during the execution of the computer instructions that access a cache memory, further includes dynamically setting mask bits to indicate the error condition in the portion of the cache memory and preventing subsequent writes to the portion of the cache memory in accordance with the dynamically set mask bits. Embodiments also include evicting cache entries from the portion of the cache memory. This evicting can include performing a cache purge operation for the cache entries corresponding to the dynamically set mask bits.
    Type: Grant
    Filed: February 12, 2015
    Date of Patent: November 8, 2016
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blake, Hieu T. Huynh, Pak-kin Mak, Arthur J. O'Neill, Jr., Rebecca S. Wisniewski
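
A minimal sketch of the mask-bit mechanism above, under the assumption that the cache is divided into fixed portions (array macros), each guarded by one mask bit. Structure names are hypothetical.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_PORTIONS 16
#define LINES_PER_PORTION 64

static bool mask_bit[NUM_PORTIONS];                 /* set => portion has an error */
static bool line_valid[NUM_PORTIONS][LINES_PER_PORTION];

/* On a detected error, dynamically set the mask bit and purge the portion. */
static void report_array_error(int portion) {
    mask_bit[portion] = true;
    for (int l = 0; l < LINES_PER_PORTION; l++)     /* evict entries in the bad macro */
        line_valid[portion][l] = false;             /* cache purge for masked portion */
}

/* Writes consult the mask bits, so the faulty array is never filled again. */
static bool cache_write(int portion, int line) {
    if (mask_bit[portion]) return false;            /* write blocked by mask */
    line_valid[portion][line] = true;
    return true;
}

int main(void) {
    cache_write(2, 10);
    report_array_error(2);                          /* error detected during execution */
    printf("write allowed? %d\n", cache_write(2, 10));  /* 0: masked off */
    return 0;
}
```
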
  • Patent number: 9459998
    Abstract: A multi-boundary address protection range is provided to prevent key operations from interfering with a data move performed by a dynamic memory relocation (DMR) move operation. Any key operation address that is within the move boundary address range gets rejected back to the hypervisor. Further, logic exists across a set of parallel slices to synchronize the DMR move operation as it crosses a protected boundary address range.
    Type: Grant
    Filed: February 4, 2015
    Date of Patent: October 4, 2016
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blake, Garrett M. Drapala, James F. Driftmyer, Deanna P. Berger, Pak-kin Mak, Timothy J. Slegel, Rebecca S. Wisniewski
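
A hedged sketch of the boundary check above: while a DMR move is in flight, key operations whose addresses fall inside the protected range are rejected back to the hypervisor for retry. The single-range simplification and all names are assumptions; the patent describes synchronized logic across a set of parallel slices.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t move_lo, move_hi;   /* protected boundary address range */
static bool     move_active;

static void dmr_begin(uint64_t lo, uint64_t hi) { move_lo = lo; move_hi = hi; move_active = true; }
static void dmr_end(void) { move_active = false; }

typedef enum { KEY_OP_DONE, KEY_OP_REJECTED } key_op_result;

/* A key operation inside the active move window is bounced to the hypervisor. */
static key_op_result key_op(uint64_t addr) {
    if (move_active && addr >= move_lo && addr < move_hi)
        return KEY_OP_REJECTED;     /* hypervisor retries after the move completes */
    /* ... perform the storage-key update here ... */
    return KEY_OP_DONE;
}

int main(void) {
    dmr_begin(0x100000, 0x200000);
    printf("%d\n", key_op(0x180000));  /* 1: rejected, inside the move range */
    dmr_end();
    printf("%d\n", key_op(0x180000));  /* 0: completes once the move is done */
    return 0;
}
```
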
  • Publication number: 20160239378
    Abstract: A method, system, and/or computer program product for dynamic array masking is provided. Dynamic array masking includes, during execution of computer instructions that access a cache memory, detecting an error condition in a portion of the cache memory. The portion of the cache memory contains an array macro. Dynamic array masking, during the execution of the computer instructions that access a cache memory, further includes dynamically setting mask bits to indicate the error condition in the portion of the cache memory and preventing subsequent writes to the portion of the cache memory in accordance with the dynamically set mask bits. Embodiments also include evicting cache entries from the portion of the cache memory. This evicting can include performing a cache purge operation for the cache entries corresponding to the dynamically set mask bits.
    Type: Application
    Filed: February 12, 2015
    Publication date: August 18, 2016
    Inventors: Michael A. Blake, Hieu T. Huynh, Pak-kin Mak, Arthur J. O'Neill, Jr., Rebecca S. Wisniewski
  • Publication number: 20160232099
    Abstract: In an approach for backing up designated data located in a cache, data stored within an index of a cache is identified, wherein the data has an associated designation indicating that the data is applicable to be backed up to a higher level memory. It is determined that the data stored to the cache has been updated. A status associated with the data is adjusted, such that the adjusted status indicates that the data stored to the cache has not been changed. A copy of the data is created. The copy of the data is stored to the higher level memory.
    Type: Application
    Filed: February 9, 2015
    Publication date: August 11, 2016
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Garrett M. Drapala, Michael Fee, Pak-kin Mak, Arthur J. O'Neill, Jr., Diana L. Orf
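
A rough model of the backup flow above: entries carry a designation bit (applicable for backup) and a changed bit; a backup pass copies designated, changed data to higher-level memory and adjusts the status to unchanged. Field names are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define LINE_SIZE 64
#define NUM_LINES 8

typedef struct {
    uint8_t data[LINE_SIZE];
    bool designated;   /* marked as applicable to be backed up */
    bool changed;      /* set when the cached data is updated */
} cache_line;

static cache_line cache[NUM_LINES];
static uint8_t higher_level[NUM_LINES][LINE_SIZE];  /* higher-level memory */

static void cache_update(int i, const uint8_t *src) {
    memcpy(cache[i].data, src, LINE_SIZE);
    cache[i].changed = true;                        /* data has been updated */
}

/* Back up every designated, changed line; adjust status to "not changed". */
static void backup_pass(void) {
    for (int i = 0; i < NUM_LINES; i++)
        if (cache[i].designated && cache[i].changed) {
            memcpy(higher_level[i], cache[i].data, LINE_SIZE);  /* store the copy */
            cache[i].changed = false;               /* unchanged since backup */
        }
}

int main(void) {
    uint8_t d[LINE_SIZE] = {7};
    cache[3].designated = true;
    cache_update(3, d);
    backup_pass();
    printf("backed up: %d, changed: %d\n", higher_level[3][0], cache[3].changed);
    return 0;
}
```
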
  • Publication number: 20160224463
    Abstract: A multi-boundary address protection range is provided to prevent key operations from interfering with a data move performed by a dynamic memory relocation (DMR) move operation. Any key operation address that is within the move boundary address range gets rejected back to the hypervisor. Further, logic exists across a set of parallel slices to synchronize the DMR move operation as it crosses a protected boundary address range.
    Type: Application
    Filed: February 4, 2015
    Publication date: August 4, 2016
    Inventors: Michael A. Blake, Garrett M. Drapala, James F. Driftmyer, Deanna P. Berger, Pak-kin Mak, Timothy J. Slegel, Rebecca S. Wisniewski
  • Publication number: 20160217077
    Abstract: Maintaining store order with high throughput in a distributed shared memory system. A request is received for a first ordered data store and a coherency check is initiated. A signal is sent that pipelining of a second ordered data store can be initiated. If a delay condition is encountered during the coherency check for the first ordered data store, rejection of the first ordered data store is signaled. If a delay condition is not encountered during the coherency check for the first ordered data store, a signal is sent indicating a readiness to continue pipelining of the second ordered data store.
    Type: Application
    Filed: January 27, 2015
    Publication date: July 28, 2016
    Inventors: Ekaterina M. Ambroladze, Timothy C. Bronson, Garrett M. Drapala, Michael Fee, Matthias Klein, Pak-kin Mak, Robert J. Sonnelitter, III, Gary E. Strait
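
A deliberately thin sketch of the ordered-store handshake above: the second store is signaled to begin pipelining while the first store's coherency check runs, and a delay condition causes a rejection rather than a stall. The coherency check itself is stubbed out as a boolean input.

```c
#include <stdbool.h>
#include <stdio.h>

typedef enum { STORE_OK, STORE_REJECTED } store_result;

/* delay_hit stands in for a delay condition (e.g. a miss that must be
 * resolved elsewhere in the system) surfacing during the coherency check. */
static store_result ordered_store(bool delay_hit, bool *start_next) {
    *start_next = true;        /* signal at once: the next ordered store may
                                  begin pipelining before this one resolves */
    if (delay_hit)
        return STORE_REJECTED; /* reject and retry rather than stall */
    return STORE_OK;           /* signal readiness to continue pipelining */
}

int main(void) {
    bool start_next;
    printf("first store: %d\n", ordered_store(true, &start_next));  /* 1: rejected */
    printf("retried:     %d\n", ordered_store(false, &start_next)); /* 0: completes */
    return 0;
}
```
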
  • Publication number: 20160147662
    Abstract: A computer system comprising multiple nodes, each node comprising a plurality of processors and a local cache hierarchy, suppresses local cache coherency operations within a node or global cache coherency operations between nodes based on whether the coherency request is a global or local request and on the state of the cache line at the node.
    Type: Application
    Filed: August 3, 2015
    Publication date: May 26, 2016
    Applicant: International Business Machines Corporation
    Inventors: Garrett Michael Drapala, William J. Lewis, Pak-kin Mak, Robert J. Sonnelitter, III
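
An illustrative decision table for the suppression described in this publication (and in the related 20160147659 below). The states and rules are a generic MESI-flavored assumption standing in for the patented protocol, which the abstract does not spell out.

```c
#include <stdbool.h>
#include <stdio.h>

typedef enum { LINE_INVALID, LINE_SHARED, LINE_EXCLUSIVE } line_state;
typedef enum { REQ_LOCAL, REQ_GLOBAL } req_scope;

/* Suppress the node-to-node broadcast when a local request can be satisfied
 * entirely on-node, given the line's state at the node. */
static bool suppress_global(req_scope scope, line_state s) {
    return scope == REQ_LOCAL && s != LINE_INVALID;
}

/* Suppress the on-node coherency operations when the node is known not to
 * hold the line. */
static bool suppress_local(line_state s) {
    return s == LINE_INVALID;
}

int main(void) {
    printf("local request, exclusive line -> skip global? %d\n",
           suppress_global(REQ_LOCAL, LINE_EXCLUSIVE));  /* 1 */
    printf("invalid line -> skip local snoop? %d\n",
           suppress_local(LINE_INVALID));                /* 1 */
    return 0;
}
```
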
  • Publication number: 20160147597
    Abstract: An aspect includes receiving a fetch request for a data block at a cache memory system that includes cache memory that is partitioned into a plurality of cache data ways including a cache data way that contains the data block. The data block is fetched and it is determined whether the in-line ECC checking and correcting should be bypassed. The determining is based on a bypass indicator corresponding to the cache data way. Based on determining that in-line ECC checking and correcting should be bypassed, returning the fetched data block to the requestor and performing an ECC process for the fetched data block subsequent to returning the fetched data block to the requestor. Based on determining that in-line ECC checking and correcting should not be bypassed, performing the ECC process for the fetched data block and returning the fetched data block to the requestor subsequent to performing the ECC process.
    Type: Application
    Filed: August 12, 2015
    Publication date: May 26, 2016
    Inventors: Michael F. Fee, Pak-kin Mak, Arthur J. O'Neill, Jr., Deanna Postles Dunn Berger
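
A sketch of the per-way ECC bypass in this publication (and in the related 20160147664 below): a bypass indicator for the cache data way decides whether the fetched block is returned to the requestor before or after the ECC check-and-correct step. ecc_check_correct() and return_to_requestor() are stand-ins for the real hardware paths.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_WAYS 8

static bool bypass_ecc[NUM_WAYS];   /* per-way bypass indicators */

/* Stand-ins for the real ECC logic and the return path to the requestor. */
static void ecc_check_correct(uint8_t *block) { (void)block; }
static void return_to_requestor(const uint8_t *block) { (void)block; }

static void fetch(int way, uint8_t *block) {
    if (bypass_ecc[way]) {
        return_to_requestor(block);  /* data first: lower fetch latency */
        ecc_check_correct(block);    /* ECC runs after the return */
    } else {
        ecc_check_correct(block);    /* in-line ECC before the return */
        return_to_requestor(block);
    }
}

int main(void) {
    uint8_t block[64] = {0};
    bypass_ecc[2] = true;  /* this way's indicator allows the bypass */
    fetch(2, block);       /* returned before checking */
    fetch(3, block);       /* checked before returning */
    puts("done");
    return 0;
}
```
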
  • Publication number: 20160147664
    Abstract: An aspect includes receiving a fetch request for a data block at a cache memory system that includes cache memory that is partitioned into a plurality of cache data ways including a cache data way that contains the data block. The data block is fetched and it is determined whether the in-line ECC checking and correcting should be bypassed. The determining is based on a bypass indicator corresponding to the cache data way. Based on determining that in-line ECC checking and correcting should be bypassed, returning the fetched data block to the requestor and performing an ECC process for the fetched data block subsequent to returning the fetched data block to the requestor. Based on determining that in-line ECC checking and correcting should not be bypassed, performing the ECC process for the fetched data block and returning the fetched data block to the requestor subsequent to performing the ECC process.
    Type: Application
    Filed: November 21, 2014
    Publication date: May 26, 2016
    Inventors: Michael F. Fee, Pak-kin Mak, Arthur J. O'Neill, Jr., Deanna Postles Dunn Berger
  • Publication number: 20160147659
    Abstract: A computer system comprising multiple nodes, each node comprising a plurality of processors and a local cache hierarchy, suppresses local cache coherency operations within a node or global cache coherency operations between nodes based on whether the coherency request is a global or local request and on the state of the cache line at the node.
    Type: Application
    Filed: November 20, 2014
    Publication date: May 26, 2016
    Inventors: Garrett Michael Drapala, William J. Lewis, Pak-kin Mak, Robert J. Sonnelitter, III
  • Patent number: 9348524
    Abstract: A computing device is provided and includes a plurality of nodes. Each node includes multiple chips and a node controller at which the multiple chips are assignable to logical partitions. Each of the multiple chips includes processors and a memory unit configured to handle local memory operations originating from the processors. The node controller includes a dynamic memory relocation (DMR) mechanism configured to move data having a DMR storage increment address relative to a local one of the memory units without interrupting a processing of the data by at least one of the logical partitions. During movement of the data by the DMR mechanism, the memory units are disabled from handling the local memory operations matching the DMR storage increment address and the node controller handles the local memory operations matching the DMR storage increment address.
    Type: Grant
    Filed: November 19, 2014
    Date of Patent: May 24, 2016
    Assignee: International Business Machines Corporation
    Inventors: Timothy C. Bronson, Garrett M. Drapala, Michael F. Fee, Pak-Kin Mak, Arthur J. O'Neill, Robert J. Sonnelitter, III
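
A compact model of the hand-off above: while the DMR move of one storage increment is active, local memory operations matching that increment's address are diverted from the memory unit to the node controller. The 1 MiB increment size and the routing function are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define INCR_SHIFT 20   /* assume 1 MiB storage increments */

static bool     dmr_active;
static uint64_t dmr_increment;   /* increment address currently being moved */

typedef enum { HANDLED_BY_MEMORY_UNIT, HANDLED_BY_NODE_CONTROLLER } handler;

static handler route_local_op(uint64_t addr) {
    if (dmr_active && (addr >> INCR_SHIFT) == dmr_increment)
        return HANDLED_BY_NODE_CONTROLLER;  /* memory unit disabled for this increment */
    return HANDLED_BY_MEMORY_UNIT;
}

int main(void) {
    dmr_active = true;
    dmr_increment = 0x3;                      /* moving increment 3 */
    printf("%d\n", route_local_op(0x300400)); /* 1: node controller handles it */
    printf("%d\n", route_local_op(0x500000)); /* 0: memory unit handles it */
    return 0;
}
```
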
  • Publication number: 20160139831
    Abstract: A computing device is provided and includes a first physical memory device, a second physical memory device and a hypervisor configured to assign resources of the first and second physical memory devices to a logical partition. The hypervisor configures a dynamic memory relocation (DMR) mechanism to move entire storage increments currently processed by the logical partition between the first and second physical memory devices in a manner that is substantially transparent to the logical partition.
    Type: Application
    Filed: November 19, 2014
    Publication date: May 19, 2016
    Inventors: Timothy C. Bronson, Garrett M. Drapala, Mark S. Farrell, Hieu T. Huynh, William J. Lewis, Pak-Kin Mak, Craig R. Walters
  • Publication number: 20160139830
    Abstract: A computing device is provided and includes a plurality of nodes. Each node includes multiple chips and a node controller at which the multiple chips are assignable to logical partitions. Each of the multiple chips includes processors and a memory unit configured to handle local memory operations originating from the processors. The node controller includes a dynamic memory relocation (DMR) mechanism configured to move data having a DMR storage increment address relative to a local one of the memory units without interrupting a processing of the data by at least one of the logical partitions. During movement of the data by the DMR mechanism, the memory units are disabled from handling the local memory operations matching the DMR storage increment address and the node controller handles the local memory operations matching the DMR storage increment address.
    Type: Application
    Filed: November 19, 2014
    Publication date: May 19, 2016
    Inventors: Timothy C. Bronson, Garrett M. Drapala, Michael F. Fee, Pak-Kin Mak, Arthur J. O'Neill, Robert J. Sonnelitter, III
  • Patent number: 9323676
    Abstract: Embodiments relate to a non-data inclusive coherent (NIC) directory for a symmetric multiprocessor (SMP) of a computer. An aspect includes determining a first eviction entry of a highest-level cache in a multilevel caching structure of the first processor node of the SMP. Another aspect includes determining that the NIC directory is not full. Another aspect includes determining that the first eviction entry of the highest-level cache is owned by a lower-level cache in the multilevel caching structure. Another aspect includes, based on the NIC directory not being full and based on the first eviction entry of the highest-level cache being owned by the lower-level cache, installing an address of the first eviction entry of the highest-level cache in a first new entry in the NIC directory. Another aspect includes invalidating the first eviction entry in the highest-level cache.
    Type: Grant
    Filed: March 5, 2013
    Date of Patent: April 26, 2016
    Assignee: International Business Machines Corporation
    Inventors: Timothy C. Bronson, Garrett M. Drapala, Rebecca M. Gott, Pak-Kin Mak, Vijayalakshmi Srinivasan, Craig R. Walters
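
A sketch of the NIC-directory install path above: when the highest-level cache evicts a line that a lower-level cache still owns, and the NIC directory is not full, the address is installed in a new NIC entry and the highest-level copy is invalidated. The structures are simplified placeholders.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NIC_ENTRIES 4

static uint64_t nic_dir[NIC_ENTRIES];
static int      nic_count;

static bool nic_full(void) { return nic_count == NIC_ENTRIES; }

/* Returns true if the eviction's address was captured in the NIC directory. */
static bool on_highest_level_eviction(uint64_t addr, bool owned_by_lower_cache) {
    if (nic_full() || !owned_by_lower_cache)
        return false;              /* other handling applies (not modelled here) */
    nic_dir[nic_count++] = addr;   /* install the address in a new NIC entry */
    /* ... invalidate the entry in the highest-level cache ... */
    return true;
}

int main(void) {
    printf("%d\n", on_highest_level_eviction(0xABC000, true));   /* 1: installed */
    printf("%d\n", on_highest_level_eviction(0xDEF000, false));  /* 0: not owned below */
    return 0;
}
```
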
  • Publication number: 20160110288
    Abstract: A cache coherency management facility to reduce latency in granting exclusive access to a cache in certain situations. A node requests exclusive access to a cache line of the cache. The node is in one region of nodes of a plurality of regions of nodes. The one region of nodes includes the node requesting exclusive access and another node of the computing environment, in which the node and the another node are local to one another as defined by a predetermined criteria. The node requesting exclusive access checks a locality cache coherency state of the another node, the locality cache coherency state being specific to the another node and indicating whether the another node has access to the cache line. Based on the checking indicating that the another node has access to the cache line, a determination is made that the node requesting exclusive access is to be granted exclusive access to the cache line.
    Type: Application
    Filed: September 7, 2015
    Publication date: April 21, 2016
    Inventors: Timothy C. Bronson, Garrett M. Drapala, Pak-kin Mak, Vesselina K. Papazova, Hanno Ulrich
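
A tentative reading of the grant decision in this publication (and in the related 20160110287 below), expressed as code: if the locality coherency state shows the regional peer has access to the line, exclusivity can be granted after handling that peer inside the region, avoiding wider coherency traffic. The abstract compresses the protocol, so this sketch is an interpretation, not the patented logic.

```c
#include <stdbool.h>
#include <stdio.h>

/* Locality cache coherency state: kept by the requester specifically for its
 * regional peer, recording whether that peer has access to the cache line. */
typedef struct { bool peer_has_line; } locality_state;

/* Per the abstract: checking indicating that the other node has access leads
 * to a determination that exclusive access is to be granted. */
static bool grant_exclusive_fast(const locality_state *peer) {
    if (peer->peer_has_line) {
        /* ... invalidate the peer's copy within the region ... */
        return true;   /* exclusive access granted without leaving the region */
    }
    return false;      /* fall back to the full coherency protocol */
}

int main(void) {
    locality_state peer = { .peer_has_line = true };
    puts(grant_exclusive_fast(&peer)
         ? "exclusive access granted via locality state"
         : "full coherency protocol required");
    return 0;
}
```
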
  • Publication number: 20160110287
    Abstract: A cache coherency management facility to reduce latency in granting exclusive access to a cache in certain situations. A node requests exclusive access to a cache line of the cache. The node is in one region of nodes of a plurality of regions of nodes. The one region of nodes includes the node requesting exclusive access and another node of the computing environment, in which the node and the another node are local to one another as defined by a predetermined criteria. The node requesting exclusive access checks a locality cache coherency state of the another node, the locality cache coherency state being specific to the another node and indicating whether the another node has access to the cache line. Based on the checking indicating that the another node has access to the cache line, a determination is made that the node requesting exclusive access is to be granted exclusive access to the cache line.
    Type: Application
    Filed: October 20, 2014
    Publication date: April 21, 2016
    Inventors: Timothy C. Bronson, Garrett M. Drapala, Pak-kin Mak, Vesselina K. Papazova, Hanno Ulrich