Patents by Inventor Hieu T. Huynh

Hieu T. Huynh is a named inventor on the patent filings listed below. The listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11620231
    Abstract: Aspects of the invention include defining one or more processor units having a plurality of caches, each processor unit comprising a processor having at least one cache, wherein the one or more processor units are coupled together by an interconnect fabric; for each of the plurality of caches, arranging a plurality of cache lines into one or more congruence classes, each congruence class comprising a chronology vector; arranging each cache in the plurality of caches into a cluster of caches based on a plurality of scope domains; determining a first cache line to evict based on the chronology vector; and determining a target cache for installing the first cache line based on a scope of the first cache line and a saturation metric associated with the target cache, wherein the scope of the first cache line is determined based on lateral persistence tag bits.
    Type: Grant
    Filed: August 20, 2021
    Date of Patent: April 4, 2023
    Assignee: International Business Machines Corporation
    Inventors: Ram Sai Manoj Bamdhamravuri, Craig R. Walters, Christian Jacobi, Timothy Bronson, Gregory William Alexander, Hieu T. Huynh, Robert J. Sonnelitter, III, Jason D. Kohl, Deanna P. D. Berger, Richard Joseph Branciforte
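
A minimal Python sketch of the eviction-and-placement idea described in the abstract of patent 11620231 above: a chronology vector orders lines within a congruence class, and an evicted line is laterally cast out to a peer cache chosen by scope domain and saturation. All class names, the saturation limit, and the cluster layout are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch of scope- and saturation-aware victim placement.
# Names and thresholds are invented for illustration.

from dataclasses import dataclass, field


@dataclass
class CongruenceClass:
    # Chronology vector: line IDs ordered from most- to least-recently used.
    chronology: list = field(default_factory=list)

    def touch(self, line_id):
        """Move a line to the most-recently-used end of the chronology vector."""
        if line_id in self.chronology:
            self.chronology.remove(line_id)
        self.chronology.insert(0, line_id)

    def pick_victim(self):
        """Evict the least-recently-used line (tail of the chronology vector)."""
        return self.chronology.pop() if self.chronology else None


@dataclass
class Cache:
    name: str
    scope_domain: str          # e.g. "chip", "drawer", "system"
    occupancy: int = 0
    capacity: int = 100

    @property
    def saturation(self):
        return self.occupancy / self.capacity


def choose_target_cache(victim_scope, cluster, saturation_limit=0.9):
    """Pick a lateral-castout target whose scope domain matches the victim's
    scope (per the abstract, derived from lateral persistence tag bits) and
    whose saturation metric is below a limit."""
    candidates = [c for c in cluster
                  if c.scope_domain == victim_scope and c.saturation < saturation_limit]
    # Prefer the least-saturated eligible peer; None means cast out to memory.
    return min(candidates, key=lambda c: c.saturation, default=None)


if __name__ == "__main__":
    cc = CongruenceClass()
    for line in ("A", "B", "C"):
        cc.touch(line)
    victim = cc.pick_victim()               # "A" is least recently used
    cluster = [Cache("L3-0", "chip", 95), Cache("L3-1", "chip", 40),
               Cache("L3-2", "drawer", 10)]
    target = choose_target_cache("chip", cluster)
    print(victim, target.name)              # A L3-1
```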
  • Publication number: 20230054424
    Abstract: Aspects of the invention include defining one or more processor units having a plurality of caches, each processor unit comprising a processor having at least one cache, wherein the one or more processor units are coupled together by an interconnect fabric; for each of the plurality of caches, arranging a plurality of cache lines into one or more congruence classes, each congruence class comprising a chronology vector; arranging each cache in the plurality of caches into a cluster of caches based on a plurality of scope domains; determining a first cache line to evict based on the chronology vector; and determining a target cache for installing the first cache line based on a scope of the first cache line and a saturation metric associated with the target cache, wherein the scope of the first cache line is determined based on lateral persistence tag bits.
    Type: Application
    Filed: August 20, 2021
    Publication date: February 23, 2023
    Inventors: Ram Sai Manoj Bamdhamravuri, Craig R. Walters, Christian Jacobi, Timothy Bronson, Gregory William Alexander, Hieu T. Huynh, Robert J. Sonnelitter, III, Jason D. Kohl, Deanna P. D. Berger, Richard Joseph Branciforte
  • Patent number: 11221794
    Abstract: Methods, systems and computer program products for providing access to a spare memory array element (“MAE”) are provided. Aspects include storing a row number and a column number associated with a defective MAE of a plurality of MAEs. The plurality of MAEs are logically arranged in a plurality of rows and a plurality of columns. Aspects also include receiving a command to access a cache line. The cache line corresponds to a selected row of MAEs of the plurality of MAEs. Responsive to determining that the selected row matches the row number that is associated with the defective MAE, aspects include activating one or more column shifters to prevent access to the defective MAE and provide access to a spare MAE when accessing the cache line. The activation of the one or more column shifters is based on the column number that is associated with the defective MAE.
    Type: Grant
    Filed: February 20, 2019
    Date of Patent: January 11, 2022
    Assignee: International Business Machines Corporation
    Inventors: Tim Bronson, Hieu T. Huynh, Kenneth Klapproth
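
The following sketch, under simplifying assumptions (a row-major grid with one spare column on the right and a single recorded defect), illustrates the column-shifter idea from the abstract of patent 11221794; the data layout and function name are invented for the example.

```python
# Illustrative sketch of column-shifter-based sparing. The grid layout and
# shift direction are assumptions made for the example, not the patented circuit.

def read_row(mae_grid, row, defect=None):
    """Read one logical row of memory array elements (MAEs).

    mae_grid has one extra (spare) column on the right. When the requested row
    matches the recorded defective row, column shifters skip the defective
    column and source subsequent values one column to the right, so the spare
    column fills in for the last shifted position.
    """
    physical = mae_grid[row]
    logical_width = len(physical) - 1          # last column is the spare
    if defect is None or defect[0] != row:
        return physical[:logical_width]        # no shifting needed
    _, bad_col = defect
    # Columns left of the defect pass through; columns at or right of the
    # defect are sourced one position to the right (ending at the spare).
    return physical[:bad_col] + physical[bad_col + 1:]


if __name__ == "__main__":
    grid = [
        [10, 11, 12, 13, 99],   # row 0, spare column holds the repair value 99
        [20, 21, 22, 23, 0],    # row 1, spare unused
    ]
    print(read_row(grid, 0, defect=(0, 2)))   # [10, 11, 13, 99] - column 2 skipped
    print(read_row(grid, 1, defect=(0, 2)))   # [20, 21, 22, 23] - no shift
```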
  • Patent number: 11048427
    Abstract: Methods, systems and computer program products for evacuating memory from a drawer in a live multi-node system are provided. Aspects include placing a first drawer into an evacuation mode. The evacuation mode includes a cessation of non-evacuation operations and provides for a transfer of data stored by memory of the first drawer to a destination drawer using dynamic memory reallocation (DMR). Aspects also include transmitting a store request by the first drawer to the destination drawer. The store request represents a request to transfer the data stored by the memory of the first drawer to the destination drawer for storage by the destination drawer. Aspects also include transmitting the data stored by the memory of the first drawer to the destination drawer. The data is transmitted by the first drawer using a local pool of fetch/store controllers.
    Type: Grant
    Filed: February 20, 2019
    Date of Patent: June 29, 2021
    Assignee: International Business Machines Corporation
    Inventors: Jason D. Kohl, Tim Bronson, Hieu T. Huynh, Michael Andrew Blake
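
A rough Python sketch of the drawer-evacuation flow from the abstract of patent 11048427. The Drawer class, the controller count, and the address map are assumptions for illustration; the sketch only shows the order of operations (enter evacuation mode, then issue store requests until the source memory is empty).

```python
# Rough sketch of the drawer-evacuation flow. Class and method names are
# invented for illustration.

class Drawer:
    def __init__(self, name, memory=None, controllers=4):
        self.name = name
        self.memory = dict(memory or {})
        self.evacuation_mode = False
        self.controller_pool = controllers      # local fetch/store controllers

    def enter_evacuation_mode(self):
        # Non-evacuation operations cease; only evacuation traffic proceeds.
        self.evacuation_mode = True

    def evacuate_to(self, destination):
        """Transfer all stored data to the destination drawer via store
        requests issued from the local controller pool."""
        assert self.evacuation_mode, "drawer must be in evacuation mode first"
        for address, data in list(self.memory.items()):
            # Each store request models one fetch/store controller transfer.
            destination.receive_store(address, data)
            del self.memory[address]

    def receive_store(self, address, data):
        self.memory[address] = data


if __name__ == "__main__":
    src = Drawer("drawer-0", memory={0x1000: "page A", 0x2000: "page B"})
    dst = Drawer("drawer-1")
    src.enter_evacuation_mode()
    src.evacuate_to(dst)
    print(src.memory, dst.memory)   # {} {4096: 'page A', 8192: 'page B'}
```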
  • Patent number: 10915461
    Abstract: Embodiments of the present invention are directed to a computer-implemented method for cache eviction. The method includes detecting a first data in a shared cache and a first cache in response to a request by a first processor. The first data is determined to have a mid-level cache eviction priority. A request is detected from a second processor for the same first data as requested by the first processor. However, in this instance, the second processor has indicated that the same first data has a low-level cache eviction priority. The first data is duplicated and loaded into a second cache; however, the data has a low-level cache eviction priority at the second cache.
    Type: Grant
    Filed: March 5, 2019
    Date of Patent: February 9, 2021
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Robert J. Sonnelitter, III, Matthias Klein, Craig Walters, Kevin Lopes, Michael A. Blake, Tim Bronson, Kenneth Klapproth, Vesselina Papazova, Hieu T. Huynh
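
A minimal sketch of the per-cache eviction-priority behavior summarized in the abstract of patent 10915461: the same line carries a mid-level priority in the shared cache and a low-level priority in the duplicate installed for the second requester. The priority labels and cache model are assumptions.

```python
# Minimal sketch of per-cache eviction priorities; structure is assumed.

MID, LOW = "mid", "low"


class SimpleCache:
    def __init__(self, name):
        self.name = name
        self.lines = {}            # address -> eviction priority

    def install(self, address, priority):
        self.lines[address] = priority


def handle_request(address, requester_priority, shared_cache, private_cache):
    """If the requested line already sits in the shared cache and the new
    requester wants a lower eviction priority, duplicate the line into the
    requester's private cache and track the lower priority there."""
    if address in shared_cache.lines and requester_priority == LOW:
        private_cache.install(address, LOW)
    else:
        shared_cache.install(address, requester_priority)


if __name__ == "__main__":
    shared, l2_cpu1 = SimpleCache("shared L3"), SimpleCache("CPU1 L2")
    handle_request(0xBEEF, MID, shared, l2_cpu1)   # CPU0: mid priority in shared
    handle_request(0xBEEF, LOW, shared, l2_cpu1)   # CPU1: duplicate with low priority
    print(shared.lines, l2_cpu1.lines)             # {48879: 'mid'} {48879: 'low'}
```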
  • Patent number: 10901902
    Abstract: Methods and systems for cache management are provided. Aspects include providing a drawer including a plurality of clusters, each of the plurality of clusters including a plurality of processors, each having one or more cores, wherein each of the one or more cores shares a first cache memory, providing a second cache memory shared among the plurality of clusters, and receiving a cache line request from one of the one or more cores to the first cache memory, wherein the first cache memory sends a request to a memory controller to retrieve the cache line from a memory, store the cache line in the first cache memory, create a directory state associated with the cache line, and provide the directory state to the second cache memory to create a directory entry for the cache line.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Chad G. Wilson, Robert J. Sonnelitter, III, Tim Bronson, Ekaterina M. Ambroladze, Hieu T. Huynh, Jason D. Kohl, Chakrapani Rayadurgam
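
A toy model of the miss-handling flow in the abstract of patent 10901902: on a miss, the first cache fetches the line through a memory controller, installs it, creates a directory state, and hands that state to the shared second cache so it can create a directory entry. Class names and the directory-state fields are illustrative assumptions.

```python
# Sketch of the miss-handling and directory-entry flow; all names illustrative.

class MemoryController:
    def __init__(self, backing):
        self.backing = backing

    def fetch(self, address):
        return self.backing[address]


class SharedCache:                       # "second cache memory" in the abstract
    def __init__(self):
        self.directory = {}              # address -> directory state

    def create_directory_entry(self, address, state):
        self.directory[address] = state


class ClusterCache:                      # "first cache memory" in the abstract
    def __init__(self, memory_controller, shared_cache):
        self.lines = {}
        self.mc = memory_controller
        self.shared = shared_cache

    def request(self, address):
        if address not in self.lines:
            data = self.mc.fetch(address)            # miss: go to memory
            self.lines[address] = data               # install locally
            state = {"owner": "cluster0", "coherence": "exclusive"}
            self.shared.create_directory_entry(address, state)
        return self.lines[address]


if __name__ == "__main__":
    mc = MemoryController({0x40: "cache line data"})
    shared = SharedCache()
    l3 = ClusterCache(mc, shared)
    print(l3.request(0x40), shared.directory)
```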
  • Publication number: 20200301831
    Abstract: Methods and systems for cache management are provided. Aspects include providing a drawer including a plurality of clusters, each of the plurality of clusters including a plurality of processors, each having one or more cores, wherein each of the one or more cores shares a first cache memory, providing a second cache memory shared among the plurality of clusters, and receiving a cache line request from one of the one or more cores to the first cache memory, wherein the first cache memory sends a request to a memory controller to retrieve the cache line from a memory, store the cache line in the first cache memory, create a directory state associated with the cache line, and provide the directory state to the second cache memory to create a directory entry for the cache line.
    Type: Application
    Filed: March 21, 2019
    Publication date: September 24, 2020
    Inventors: Chad G. Wilson, Robert J. Sonnelitter, III, Tim Bronson, Ekaterina M. Ambroladze, Hieu T. Huynh, Jason D. Kohl, Chakrapani Rayadurgam
  • Publication number: 20200285592
    Abstract: Embodiments of the present invention are directed to a computer-implemented method for cache eviction. The method includes detecting a first data in a shared cache and a first cache in response to a request by a first processor. The first data is determined to have a mid-level cache eviction priority. A request is detected from a second processor for the same first data as requested by the first processor. However, in this instance, the second processor has indicated that the same first data has a low-level cache eviction priority. The first data is duplicated and loaded into a second cache; however, the data has a low-level cache eviction priority at the second cache.
    Type: Application
    Filed: March 5, 2019
    Publication date: September 10, 2020
    Inventors: Ekaterina M. Ambroladze, Robert J. Sonnelitter, III, Matthias Klein, Craig Walters, Kevin Lopes, Michael A. Blake, Tim Bronson, Kenneth Klapproth, Vesselina Papazova, Hieu T. Huynh
  • Publication number: 20200264797
    Abstract: Methods, systems and computer program products for evacuating memory from a drawer in a live multi-node system are provided. Aspects include placing a first drawer into an evacuation mode. The evacuation mode includes a cessation of non-evacuation operations and provides for a transfer of data stored by memory of the first drawer to a destination drawer using dynamic memory reallocation (DMR). Aspects also include transmitting a store request by the first drawer to the destination drawer. The store request represents a request to transfer the data stored by the memory of the first drawer to the destination drawer for storage by the destination drawer. Aspects also include transmitting the data stored by the memory of the first drawer to the destination drawer. The data is transmitted by the first drawer using a local pool of fetch/store controllers.
    Type: Application
    Filed: February 20, 2019
    Publication date: August 20, 2020
    Inventors: Jason D. Kohl, Tim Bronson, Hieu T. Huynh, Michael Andrew Blake
  • Publication number: 20200264803
    Abstract: Methods, systems and computer program products for providing access to a spare memory array element (“MAE”) are provided. Aspects include storing a row number and a column number associated with a defective MAE of a plurality of MAEs. The plurality of MAEs are logically arranged in a plurality of rows and a plurality of columns. Aspects also include receiving a command to access a cache line. The cache line corresponds to a selected row of MAEs of the plurality of MAEs. Responsive to determining that the selected row matches the row number that is associated with the defective MAE, aspects include activating one or more column shifters to prevent access to the defective MAE and provide access to a spare MAE when accessing the cache line. The activation of the one or more column shifters is based on the column number that is associated with the defective MAE.
    Type: Application
    Filed: February 20, 2019
    Publication date: August 20, 2020
    Inventors: Tim Bronson, Hieu T. Huynh, Kenneth Klapproth
  • Patent number: 9495107
    Abstract: A computing device is provided and includes a first physical memory device, a second physical memory device and a hypervisor configured to assign resources of the first and second physical memory devices to a logical partition. The hypervisor configures a dynamic memory relocation (DMR) mechanism to move entire storage increments currently processed by the logical partition between the first and second physical memory devices in a manner that is substantially transparent to the logical partition.
    Type: Grant
    Filed: November 19, 2014
    Date of Patent: November 15, 2016
    Assignee: International Business Machines Corporation
    Inventors: Timothy C. Bronson, Garrett M. Drapala, Mark S. Farrell, Hieu T. Huynh, William J. Lewis, Pak-Kin Mak, Craig R. Walters
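
A toy model of hypervisor-driven dynamic memory relocation (DMR) as outlined in the abstract of patent 9495107: the partition addresses storage increments by ID, and the hypervisor can move an entire increment between physical devices without changing that ID. The increment map and device names are assumptions for the example.

```python
# Toy model of dynamic memory relocation (DMR); increment size and address
# mapping are illustrative assumptions.

class Hypervisor:
    def __init__(self, devices):
        self.devices = devices                 # name -> {increment_id: data}
        # The logical partition sees increment IDs; the map to physical devices
        # can change underneath it, which is what keeps DMR transparent.
        self.increment_map = {}

    def assign(self, increment_id, device, data):
        self.devices[device][increment_id] = data
        self.increment_map[increment_id] = device

    def relocate(self, increment_id, target_device):
        """Move an entire storage increment between physical devices while the
        logical partition keeps addressing it by the same increment ID."""
        source = self.increment_map[increment_id]
        data = self.devices[source].pop(increment_id)
        self.devices[target_device][increment_id] = data
        self.increment_map[increment_id] = target_device

    def read(self, increment_id):              # the partition's view
        return self.devices[self.increment_map[increment_id]][increment_id]


if __name__ == "__main__":
    hv = Hypervisor({"dimm0": {}, "dimm1": {}})
    hv.assign(7, "dimm0", "partition data")
    before = hv.read(7)
    hv.relocate(7, "dimm1")                    # transparent to the partition
    print(before == hv.read(7))                # True
```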
  • Patent number: 9489255
    Abstract: A method, system, and/or computer program product for dynamic array masking is provided. Dynamic array masking includes, during execution of computer instructions that access a cache memory, detecting an error condition in a portion of the cache memory. The portion of the cache memory contains an array macro. Dynamic array masking, during the execution of the computer instructions that access a cache memory, further includes dynamically setting mask bits to indicate the error condition in the portion of the cache memory and preventing subsequent writes to the portion of the cache memory in accordance with the dynamically set mask bits. Embodiments also include evicting cache entries from the portion of the cache memory. This evicting can include performing a cache purge operation for the cache entries corresponding to the dynamically set mask bits.
    Type: Grant
    Filed: February 12, 2015
    Date of Patent: November 8, 2016
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blake, Hieu T. Huynh, Pak-kin Mak, Arthur J. O'Neill, Jr., Rebecca S. Wisniewski
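
A minimal sketch of dynamic array masking per the abstract of patent 9489255, assuming the cache is divided into array macros with one mask bit each: reporting an error sets the mask bit, purges that portion of the cache, and suppresses later writes to it. Structure and names are illustrative.

```python
# Minimal sketch of dynamic array masking; the cache is modeled as a list of
# array macros, and the bit layout is assumed.

class MaskedCache:
    def __init__(self, num_macros):
        self.macros = [dict() for _ in range(num_macros)]   # macro -> lines
        self.mask_bits = [False] * num_macros               # True = masked off

    def report_error(self, macro_index):
        """Error detected in an array macro: set its mask bit and purge the
        cache entries held in that portion of the cache."""
        self.mask_bits[macro_index] = True
        self.macros[macro_index].clear()                    # cache purge

    def write(self, macro_index, address, data):
        # Writes to a masked macro are suppressed so no new data lands on the
        # faulty portion of the array.
        if self.mask_bits[macro_index]:
            return False
        self.macros[macro_index][address] = data
        return True


if __name__ == "__main__":
    cache = MaskedCache(num_macros=4)
    cache.write(2, 0x10, "old line")
    cache.report_error(2)                    # mask macro 2 and purge it
    print(cache.write(2, 0x20, "new line"))  # False: write prevented
    print(cache.write(3, 0x20, "new line"))  # True: unmasked macro still usable
```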
  • Publication number: 20160239378
    Abstract: A method, system, and/or computer program product for dynamic array masking is provided. Dynamic array masking includes, during execution of computer instructions that access a cache memory, detecting an error condition in a portion of the cache memory. The portion of the cache memory contains an array macro. Dynamic array masking, during the execution of the computer instructions that access a cache memory, further includes dynamically setting mask bits to indicate the error condition in the portion of the cache memory and preventing subsequent writes to the portion of the cache memory in accordance with the dynamically set mask bits. Embodiments also include evicting cache entries from the portion of the cache memory. This evicting can include performing a cache purge operation for the cache entries corresponding to the dynamically set mask bits.
    Type: Application
    Filed: February 12, 2015
    Publication date: August 18, 2016
    Inventors: Michael A. Blake, Hieu T. Huynh, Pak-kin Mak, Arthur J. O'Neill, Jr., Rebecca S. Wisniewski
  • Publication number: 20160139831
    Abstract: A computing device is provided and includes a first physical memory device, a second physical memory device and a hypervisor configured to assign resources of the first and second physical memory devices to a logical partition. The hypervisor configures a dynamic memory relocation (DMR) mechanism to move entire storage increments currently processed by the logical partition between the first and second physical memory devices in a manner that is substantially transparent to the logical partition.
    Type: Application
    Filed: November 19, 2014
    Publication date: May 19, 2016
    Inventors: Timothy C. Bronson, Garrett M. Drapala, Mark S. Farrell, Hieu T. Huynh, William J. Lewis, Pak-Kin Mak, Craig R. Walters
  • Patent number: 9086990
    Abstract: Embodiments relate to a computer system for bitline deletion, the system including a cache controller and cache. The system is configured to perform a method including detecting a first error when reading a first cache line, recording a first address of the first error, detecting a second error when reading a second cache line, recording a second address of the second error, comparing the first and second bitline addresses, comparing the first and second wordline addresses, activating a bitline delete mode based on matching first and second bitline addresses and not matching first and second wordline addresses, detecting a third error when reading a third cache line, recording a third bitline address of the third error, comparing the second bitline address to the third bitline address, and deleting a location corresponding to the third cache line based on the activated bitline delete mode and matching third and second bitline addresses.
    Type: Grant
    Filed: March 7, 2013
    Date of Patent: July 21, 2015
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Michael A. Blake, Michael Fee, Hieu T. Huynh, Patrick J. Meaney, Arthur J. O'Neill
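
The heuristic shared by the abstracts of patents 9086990 and 8788891 (bitline delete mode) can be sketched as follows; address decomposition into bitline and wordline components is simplified, and the tracker only remembers the previous error, which is an assumption made for brevity.

```python
# Sketch of the bitline-delete heuristic; address handling is simplified.

class BitlineDeleteTracker:
    def __init__(self):
        self.last_error = None            # (bitline, wordline) of previous error
        self.delete_mode = False
        self.deleted_locations = set()

    def record_error(self, bitline, wordline, cache_line):
        """Apply the matching rules from the abstract to successive errors."""
        if self.delete_mode and self.last_error and bitline == self.last_error[0]:
            # Another error on the same bitline while armed: delete the location.
            self.deleted_locations.add(cache_line)
        elif self.last_error and bitline == self.last_error[0] \
                and wordline != self.last_error[1]:
            # Same bitline, different wordline: likely a bad bitline, arm delete mode.
            self.delete_mode = True
        self.last_error = (bitline, wordline)


if __name__ == "__main__":
    t = BitlineDeleteTracker()
    t.record_error(bitline=5, wordline=1, cache_line="lineA")   # first error
    t.record_error(bitline=5, wordline=9, cache_line="lineB")   # arms delete mode
    t.record_error(bitline=5, wordline=3, cache_line="lineC")   # deletes lineC
    print(t.delete_mode, t.deleted_locations)   # True {'lineC'}
```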
  • Patent number: 9003127
    Abstract: Embodiments relate to storing data to a system memory. An aspect includes accessing successive entries of a cache directory having a plurality of directory entries by a stepper engine, where access to the cache directory is given a lower priority than other cache operations. It is determined that a specific directory entry in the cache directory has a change line state that indicates it is modified. A store operation is performed to send a copy of the specific corresponding cache entry to the system memory as part of a cache management function. The specific directory entry is updated to indicate that the change line state is unmodified.
    Type: Grant
    Filed: November 21, 2013
    Date of Patent: April 7, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blake, Timothy C. Bronson, Hieu T. Huynh, Kenneth D. Klapproth, Pak-Kin Mak, Vesselina K. Papazova
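
A sketch of the low-priority directory "stepper" described in the abstracts of patents 9003127 and 8990507: it walks successive directory entries, stores modified lines back to system memory, and marks them unmodified. The data structures and the priority check are simplified assumptions.

```python
# Sketch of a low-priority directory stepper pass; structures are assumed.

def stepper_pass(directory, cache_data, system_memory, other_work_pending):
    """Walk successive directory entries; skip the pass whenever higher
    priority cache operations are pending, mirroring the abstract's note that
    the stepper's directory access has lower priority."""
    if other_work_pending():
        return 0
    stores = 0
    for address, entry in directory.items():
        if entry["change_state"] == "modified":
            system_memory[address] = cache_data[address]   # store copy to memory
            entry["change_state"] = "unmodified"           # update directory
            stores += 1
    return stores


if __name__ == "__main__":
    directory = {0x100: {"change_state": "modified"},
                 0x200: {"change_state": "unmodified"}}
    cache_data = {0x100: "dirty line", 0x200: "clean line"}
    memory = {}
    done = stepper_pass(directory, cache_data, memory, other_work_pending=lambda: False)
    print(done, memory)    # 1 {256: 'dirty line'}
```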
  • Patent number: 8990507
    Abstract: Embodiments relate to storing data to a system memory. An aspect includes accessing successive entries of a cache directory having a plurality of directory entries by a stepper engine, where access to the cache directory is given a lower priority than other cache operations. It is determined that a specific directory entry in the cache directory has a change line state that indicates it is modified. A store operation is performed to send a copy of the specific corresponding cache entry to the system memory as part of a cache management function. The specific directory entry is updated to indicate that the change line state is unmodified.
    Type: Grant
    Filed: June 13, 2012
    Date of Patent: March 24, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blake, Pak-Kin Mak, Timothy C. Bronson, Hieu T. Huynh, Kenneth D. Klapproth, Vesselina K. Papazova
  • Patent number: 8930616
    Abstract: System refresh in a cache memory that includes generating a refresh time period (RTIM) pulse at a centralized refresh controller of the cache memory and activating a refresh request at the centralized refresh controller based on generating the RTIM pulse. The refresh request is associated with a single cache memory bank of the cache memory. A refresh grant is received and transmitted to a bank controller. The bank controller is associated with and localized at the single cache memory bank of the cache memory.
    Type: Grant
    Filed: October 18, 2012
    Date of Patent: January 6, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blake, Timothy C. Bronson, Hieu T. Huynh, Kenneth D. Klapproth
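
A simplified model of the centralized refresh flow in the abstract of patent 8930616: an RTIM pulse raises a refresh request for a single bank, and once a grant is received it is forwarded to that bank's localized controller. Round-robin bank selection and the arbiter callback are assumptions for the example.

```python
# Simplified model of centralized refresh with per-bank local controllers.
# Round-robin selection and the grant arbiter are illustrative assumptions.

class BankController:
    def __init__(self, bank_id):
        self.bank_id = bank_id
        self.refreshes = 0

    def refresh(self):
        self.refreshes += 1


class CentralRefreshController:
    def __init__(self, bank_controllers):
        self.banks = bank_controllers
        self.next_bank = 0

    def rtim_pulse(self, grant_arbiter):
        """On each refresh-time pulse, request a refresh for a single bank and,
        once granted, hand the grant to that bank's localized controller."""
        bank = self.banks[self.next_bank]
        if grant_arbiter(bank.bank_id):                # refresh grant received
            bank.refresh()
            self.next_bank = (self.next_bank + 1) % len(self.banks)


if __name__ == "__main__":
    banks = [BankController(i) for i in range(4)]
    central = CentralRefreshController(banks)
    for _ in range(8):                                 # eight RTIM pulses
        central.rtim_pulse(grant_arbiter=lambda bank_id: True)
    print([b.refreshes for b in banks])                # [2, 2, 2, 2]
```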
  • Patent number: 8874957
    Abstract: A technique is provided for a cache. A cache controller accesses a set in a congruence class and determines that the set contains corrupted data based on an error being found. The cache controller determines that a delete parameter for taking the set offline is met and determines that a number of currently offline sets in the congruence class is higher than an allowable offline number threshold. The cache controller determines not to take the set in which the error was found offline based on determining that the number of currently offline sets in the congruence class is higher than the allowable offline number threshold.
    Type: Grant
    Filed: December 11, 2013
    Date of Patent: October 28, 2014
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Michael A. Blake, Timothy C. Bronson, Hieu T. Huynh
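
The offline-set throttling rule in the abstract of patent 8874957 reduces to a small predicate, sketched below with an assumed threshold value; the actual delete parameters are not specified in the abstract.

```python
# Sketch of the offline-set throttling rule; the threshold is an assumption.

def should_take_set_offline(congruence_class_offline_count,
                            delete_parameter_met,
                            allowable_offline_threshold=1):
    """Return True only when the delete parameter is met and the congruence
    class has not already exceeded the allowable number of offline sets."""
    if not delete_parameter_met:
        return False
    return congruence_class_offline_count <= allowable_offline_threshold


if __name__ == "__main__":
    # One set already offline in this congruence class: another delete is allowed.
    print(should_take_set_offline(1, delete_parameter_met=True))    # True
    # Two sets already offline exceeds the threshold, so keep the bad set online.
    print(should_take_set_offline(2, delete_parameter_met=True))    # False
```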
  • Patent number: 8788891
    Abstract: Embodiments relate to a method including detecting a first error when reading a first cache line, recording a first address of the first error, detecting a second error when reading a second cache line and recording a second address of the second error. Embodiments also include comparing the first and second bitline addresses, comparing the first and second wordline addresses, activating a bitline delete mode based on matching first and second bitline addresses and not matching the first and second wordline addresses, detecting a third error when reading a third cache line, recording a third bitline address of the third error, comparing the second bitline address to a third bitline address, and deleting a location corresponding to the third cache line from available cache locations based on the activated bitline delete mode and the third bitline address matching the second bitline address.
    Type: Grant
    Filed: June 14, 2012
    Date of Patent: July 22, 2014
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Michael A. Blake, Michael Fee, Hieu T. Huynh, Patrick J. Meaney, Arthur J. O'Neill