Patents by Inventor Gabriel Loh

Gabriel Loh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240087667
    Abstract: Error correction for stacked memory is described. In accordance with the described techniques, a system includes a plurality of error correction code engines to detect vulnerabilities in a stacked memory and coordinate at least one vulnerability detected for a portion of the stacked memory to at least one other portion of the stacked memory.
    Type: Application
    Filed: August 29, 2023
    Publication date: March 14, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Divya Madapusi Srinivas Prasad, Michael Ignatowski, Gabriel Loh
  • Patent number: 10437736
    Abstract: A data processing system includes a memory and an input/output memory management unit that is connected to the memory. The input/output memory management unit is adapted to receive batches of address translation requests. The input/output memory management unit has instructions that identify, from among the batches of address translation requests, a later batch having a lower number of memory access requests than an earlier batch, and that selectively schedule access to a page table walker for each address translation request of a batch.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: October 8, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Arkaprava Basu, Eric Van Tassell, Mark Oskin, Guilherme Cox, Gabriel Loh
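The batch-aware scheduling in this entry can be illustrated with a minimal Python sketch. Everything below is an assumption layered on the abstract rather than the patented design: the TranslationBatch class, the per-batch request queues, and the pick-the-smaller-later-batch policy are illustrative choices.

```python
from collections import deque

class TranslationBatch:
    """A batch of address-translation requests (virtual page numbers)."""
    def __init__(self, batch_id, requests):
        self.batch_id = batch_id
        self.requests = deque(requests)

def schedule_walker(batches):
    """Pick the next batch to hand to the page table walker.

    Hypothetical policy: when a later batch holds fewer pending requests
    than an earlier one, service the smaller later batch first so it is
    not stuck behind a long-running earlier batch.
    """
    pending = [b for b in batches if b.requests]
    if not pending:
        return None
    chosen = pending[0]
    for later in pending[1:]:
        if len(later.requests) < len(chosen.requests):
            chosen = later
    return chosen

# Example: the later, smaller batch 1 is walked before the large batch 0.
batches = [TranslationBatch(0, [0x1000, 0x2000, 0x3000, 0x4000]),
           TranslationBatch(1, [0x9000])]
print("walking batch", schedule_walker(batches).batch_id)   # -> walking batch 1
```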
  • Patent number: 10394726
    Abstract: A memory network includes a plurality of memory nodes each identifiable by an ordinal number m, and a set of links divided into N subsets of links, where each subset of links is identifiable by an ordinal number n. For each subset of the plurality of N subsets of links, each link in the subset connects two memory nodes that have ordinal numbers m differing by b^(n-1), where b is a positive number. Each of the memory nodes is communicatively coupled to a processor via at least two non-overlapping pathways through the plurality of links.
    Type: Grant
    Filed: August 5, 2016
    Date of Patent: August 27, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Gabriel Loh
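The link structure in this abstract (subsets of links whose endpoints differ by b^(n-1)) maps directly onto a short construction. The sketch below builds that topology in Python; the wraparound (modulo) closure and the default b = 2 are my assumptions, since the abstract does not say how the network terminates.

```python
def build_links(num_nodes, num_subsets, b=2):
    """Build the link subsets described in the abstract.

    Subset n (1-based) contains links between memory nodes whose ordinal
    numbers differ by b**(n - 1). Indices wrap around modulo num_nodes,
    which is one common way to close such a network (an assumption; the
    abstract does not specify it).
    """
    links = {n: set() for n in range(1, num_subsets + 1)}
    for n in range(1, num_subsets + 1):
        stride = b ** (n - 1)
        for m in range(num_nodes):
            a, c = m, (m + stride) % num_nodes
            links[n].add((min(a, c), max(a, c)))
    return links

# 8 memory nodes, 3 subsets of links with strides 1, 2 and 4.
for n, subset in build_links(8, 3).items():
    print(f"subset {n}: {sorted(subset)}")
```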
  • Publication number: 20190196978
    Abstract: A data processing system includes a memory and an input/output memory management unit that is connected to the memory. The input/output memory management unit is adapted to receive batches of address translation requests. The input/output memory management unit has instructions that identify, from among the batches of address translation requests, a later batch having a lower number of memory access requests than an earlier batch, and that selectively schedule access to a page table walker for each address translation request of a batch.
    Type: Application
    Filed: December 22, 2017
    Publication date: June 27, 2019
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Arkaprava Basu, Eric Van Tassell, Mark Oskin, Guilherme Cox, Gabriel Loh
  • Patent number: 10133678
    Abstract: In some embodiments, a method of managing cache memory includes identifying a group of cache lines in a cache memory, based on a correlation between the cache lines. The method also includes tracking evictions of cache lines in the group from the cache memory and, in response to a determination that a criterion regarding eviction of cache lines in the group from the cache memory is satisfied, selecting one or more (e.g., all) remaining cache lines in the group for eviction.
    Type: Grant
    Filed: August 28, 2013
    Date of Patent: November 20, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Yasuko Eckert, Syed Ali Jafri, Srilatha Manne, Gabriel Loh
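The group-eviction policy in this entry is easy to model. The following Python sketch is illustrative only: the group identifiers, the eviction-count threshold used as the criterion, and the evict-the-rest behavior are assumptions consistent with, but not taken from, the patent.

```python
class GroupAwareCache:
    """Toy model of group-based eviction tracking.

    Lines are grouped by a caller-supplied correlation; once the number of
    evictions observed from a group reaches a threshold (a hypothetical
    criterion), the remaining lines of that group are selected for eviction.
    """
    def __init__(self, eviction_threshold=2):
        self.groups = {}              # group id -> set of resident lines
        self.evicted_count = {}       # group id -> evictions observed so far
        self.threshold = eviction_threshold

    def insert(self, group_id, line):
        self.groups.setdefault(group_id, set()).add(line)
        self.evicted_count.setdefault(group_id, 0)

    def evict(self, group_id, line):
        """Evict one line and return any group-mates selected for eviction."""
        self.groups[group_id].discard(line)
        self.evicted_count[group_id] += 1
        if self.evicted_count[group_id] >= self.threshold:
            victims = set(self.groups[group_id])   # evict the rest of the group
            self.groups[group_id].clear()
            return victims
        return set()

cache = GroupAwareCache()
for line in (0x100, 0x140, 0x180, 0x1c0):
    cache.insert("streamA", line)
cache.evict("streamA", 0x100)
print(cache.evict("streamA", 0x140))   # remaining lines in the group are selected
```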
  • Patent number: 10049044
    Abstract: Proactive flush logic in a computing system is configured to perform a proactive flush operation to flush data from a first memory in a first computing device to a second memory in response to execution of a non-blocking flush instruction. Reactive flush logic in the computing system is configured to, in response to a memory request issued prior to completion of the proactive flush operation, interrupt the proactive flush operation and perform a reactive flush operation to flush requested data from the first memory to the second memory.
    Type: Grant
    Filed: June 14, 2016
    Date of Patent: August 14, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Michael Boyer, Gabriel Loh, Nuwan Jayasena
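The interplay between the proactive sweep and the reactive flush can be sketched in a few lines of Python. The dirty-data map, the request queue, and the interleaving policy below are hypothetical stand-ins for the patented logic, not a description of it.

```python
class FlushController:
    """Toy model of proactive flushing interrupted by reactive flushes.

    `dirty` maps addresses in the first memory to data that still has to be
    written back to the second memory; the structure and names are mine,
    not the patent's.
    """
    def __init__(self, dirty, second_memory):
        self.dirty = dict(dirty)
        self.second_memory = second_memory

    def proactive_flush(self, incoming_requests):
        """Flush dirty data, yielding to demand requests as they arrive."""
        for addr in list(self.dirty):
            if incoming_requests:
                # Reactive path: a request interrupted the proactive sweep,
                # so flush the requested address first.
                req = incoming_requests.pop(0)
                if req in self.dirty:
                    self.second_memory[req] = self.dirty.pop(req)
            if addr in self.dirty:
                self.second_memory[addr] = self.dirty.pop(addr)

second = {}
ctrl = FlushController({0x10: "a", 0x20: "b", 0x30: "c"}, second)
ctrl.proactive_flush(incoming_requests=[0x30])
print(second)   # 0x30 is flushed reactively before the sweep reaches it
```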
  • Publication number: 20180039587
    Abstract: A memory network includes a plurality of memory nodes each identifiable by an ordinal number m, and a set of links divided into N subsets of links, where each subset of links is identifiable by an ordinal number n. For each subset of the plurality of N subsets of links, each link in the subset connects two memory nodes that have ordinal numbers m differing by b(n-1), where b is a positive number. Each of the memory nodes is communicatively coupled to a processor via at least two non-overlapping pathways through the plurality of links.
    Type: Application
    Filed: August 5, 2016
    Publication date: February 8, 2018
    Inventor: Gabriel Loh
  • Publication number: 20170357583
    Abstract: Proactive flush logic in a computing system is configured to perform a proactive flush operation to flush data from a first memory in a first computing device to a second memory in response to execution of a non-blocking flush instruction. Reactive flush logic in the computing system is configured to, in response to a memory request issued prior to completion of the proactive flush operation, interrupt the proactive flush operation and perform a reactive flush operation to flush requested data from the first memory to the second memory.
    Type: Application
    Filed: June 14, 2016
    Publication date: December 14, 2017
    Inventors: Michael Boyer, Gabriel Loh, Nuwan Jayasena
  • Publication number: 20170161194
    Abstract: A method of prefetching data includes issuing to a translation lookaside buffer (TLB) an address translation request for a virtual memory address, detecting a TLB miss generated in response to the address translation request, and in response to the TLB miss, selecting the data for prefetching from memory based on the memory address causing the TLB miss and prefetching the selected data to a cache.
    Type: Application
    Filed: December 2, 2015
    Publication date: June 8, 2017
    Inventor: Gabriel Loh
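A compact way to picture this is a translation routine that prefetches on a TLB miss. The sketch below is a toy, not the claimed method: the dictionary-based TLB and page table, and the choice to prefetch the first line of the missing page, are simplifying assumptions.

```python
def handle_translation(vaddr, tlb, page_table, cache, memory, page_size=4096):
    """Translate vaddr; on a TLB miss, also prefetch nearby data into the cache.

    The miss address itself drives the choice of what to prefetch (here,
    the first line of the missing page, which is my own simplification).
    """
    vpn = vaddr // page_size
    if vpn in tlb:
        return tlb[vpn] * page_size + vaddr % page_size
    # TLB miss: walk the (toy) page table and install the translation.
    pfn = page_table[vpn]
    tlb[vpn] = pfn
    # Prefetch triggered by the miss: pull the start of the page into the cache.
    line_addr = pfn * page_size
    cache[line_addr] = memory.get(line_addr)
    return pfn * page_size + vaddr % page_size

tlb, cache = {}, {}
page_table = {0x5: 0x9}
memory = {0x9 * 4096: "first line of the page"}
paddr = handle_translation(0x5123, tlb, page_table, cache, memory)
print(hex(paddr), cache)
```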
  • Patent number: 9377954
    Abstract: A system for memory allocation in a multiclass memory system includes a processor coupleable to a plurality of memories sharing a unified memory address space, and a library store to store a library of software functions. The processor identifies a type of a data structure in response to a memory allocation function call to the library for allocating memory to the data structure. Using the library, the processor allocates portions of the data structure among multiple memories of the multiclass memory system based on the type of the data structure.
    Type: Grant
    Filed: May 9, 2014
    Date of Patent: June 28, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Gabriel Loh, Mitesh Meswani, Michael Ignatowski, Mark Nutter
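The library-side dispatch described here can be sketched as a function that maps a data-structure type to placements across memory classes. The memory-class names and the split ratios below are invented for illustration; the patent does not prescribe them.

```python
# Hypothetical library-side dispatcher: the allocation call inspects the
# declared type of the data structure and splits it across memory classes.

FAST_MEMORY, BULK_MEMORY = "in-package DRAM", "off-package DRAM"

def multiclass_alloc(structure_type, num_bytes):
    """Return (memory_class, bytes) placements for a requested allocation."""
    if structure_type == "hash_table_index":
        return [(FAST_MEMORY, num_bytes)]            # latency-sensitive portion
    if structure_type == "hash_table_values":
        return [(BULK_MEMORY, num_bytes)]            # capacity-oriented portion
    if structure_type == "graph":
        hot = num_bytes // 4                         # e.g. vertex metadata
        return [(FAST_MEMORY, hot), (BULK_MEMORY, num_bytes - hot)]
    return [(BULK_MEMORY, num_bytes)]                # default placement

print(multiclass_alloc("graph", 1 << 20))
```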
  • Publication number: 20160041914
    Abstract: Embodiments include methods, systems, and computer-readable media directed to cache bypassing based on prefetch streams. A first cache receives a memory access request. The request references data in the memory. The data comprises non-reuse data. After a determination of a miss in the first cache, the first cache forwards the memory access request to a cache control logic. The detection of the non-reuse data instructs the cache control logic to allocate a block only in a second cache and bypass allocating a block in the first cache. The first cache is closer to the memory than the second cache.
    Type: Application
    Filed: August 5, 2014
    Publication date: February 11, 2016
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Yasuko Eckert, Gabriel Loh
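The fill policy in this abstract reduces to a small decision at miss time. In the sketch below, the first_cache/second_cache dictionaries and the is_non_reuse flag are stand-ins; how non-reuse data is detected from prefetch streams is outside the sketch.

```python
def handle_miss(block_addr, is_non_reuse, first_cache, second_cache, memory):
    """Fill-policy sketch for prefetch-stream-based bypassing.

    `first_cache` is the cache closer to memory (e.g. a last-level cache) and
    `second_cache` the one closer to the core; blocks flagged as non-reuse
    are installed only in the second cache.
    """
    data = memory[block_addr]
    second_cache[block_addr] = data          # always fill the near cache
    if not is_non_reuse:
        first_cache[block_addr] = data       # reusable data also fills the far cache
    return data

l3, l1, memory = {}, {}, {0x40: b"streamed"}
handle_miss(0x40, is_non_reuse=True, first_cache=l3, second_cache=l1, memory=memory)
print("L3:", l3, "L1:", l1)                  # L3 stays empty for non-reuse data
```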
  • Publication number: 20150324131
    Abstract: A system for memory allocation in a multiclass memory system includes a processor coupleable to a plurality of memories sharing a unified memory address space, and a library store to store a library of software functions. The processor identifies a type of a data structure in response to a memory allocation function call to the library for allocating memory to the data structure. Using the library, the processor allocates portions of the data structure among multiple memories of the multiclass memory system based on the type of the data structure.
    Type: Application
    Filed: May 9, 2014
    Publication date: November 12, 2015
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Gabriel Loh, Mitesh Meswani, Michael Ignatowski, Mark Nutter
  • Patent number: 9026731
    Abstract: A system, method and computer program product to store tag blocks in a tag buffer in order to provide early row-buffer miss detection, early page closing, and reductions in tag block transfers. A system comprises a tag buffer, a request buffer, and a memory controller. The request buffer stores a memory request having an associated tag. The memory controller compares the associated tag to a plurality of tags stored in the tag buffer and issues the memory request stored in the request buffer to either a memory cache or a main memory based on the comparison.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: May 5, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Gabriel Loh, Jaewoong Sim
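One way to picture the tag buffer is as a small map from set index to on-die tags that the memory controller consults before issuing a request. The set geometry, block size, and routing function below are assumptions for illustration only.

```python
NUM_SETS = 8   # assumed tag-buffer geometry, not taken from the patent

def route_request(request_addr, request_tag, tag_buffer):
    """Route a memory request to the memory cache or to main memory.

    tag_buffer maps a set index to the tags known to be resident in the
    memory cache; a hit means the request can be issued to the memory
    cache, a miss means it can go straight to main memory. The 64-byte
    block size and modulo set indexing are simplifying assumptions.
    """
    set_index = (request_addr // 64) % NUM_SETS
    cached_tags = tag_buffer.get(set_index, set())
    return "memory_cache" if request_tag in cached_tags else "main_memory"

tag_buffer = {3: {0xAB}}
print(route_request(request_addr=3 * 64, request_tag=0xAB, tag_buffer=tag_buffer))  # memory_cache
print(route_request(request_addr=5 * 64, request_tag=0xAB, tag_buffer=tag_buffer))  # main_memory
```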
  • Publication number: 20150067264
    Abstract: In some embodiments, a method of managing cache memory includes identifying a group of cache lines in a cache memory, based on a correlation between the cache lines. The method also includes tracking evictions of cache lines in the group from the cache memory and, in response to a determination that a criterion regarding eviction of cache lines in the group from the cache memory is satisfied, selecting one or more (e.g., all) remaining cache lines in the group for eviction.
    Type: Application
    Filed: August 28, 2013
    Publication date: March 5, 2015
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Yasuko Eckert, Syed Ali Jafri, Srilatha Manne, Gabriel Loh
  • Patent number: 8935472
    Abstract: A data processing device is provided that includes an array of working memory banks and an associated processing engine. The working memory bank array is configured with at least one independently activatable memory bank. A dirty data counter (DDC) is associated with the independently activatable memory bank and is configured to reflect a count of dirty data migrated from the independently activatable memory bank upon selective deactivation of the independently activatable memory bank. The DDC is configured to selectively decrement the count of dirty data upon the reactivation of the independently activatable memory bank in connection with a transient state. In the transient state, each dirty data access by the processing engine to the reactivated memory bank is also conducted with respect to another memory bank of the array. Upon a condition that dirty data is found in the other memory bank, the count of dirty data is decremented.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: January 13, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Mithuna Thottethodi, Gabriel Loh, Mauricio Breternitz, James O'Connor, Yasuko Eckert
  • Patent number: 8880809
    Abstract: Embodiments are described for a method for controlling access to memory in a processor-based system comprising monitoring a number of interference events, such as bank contentions, bus contentions, row-buffer conflicts, and increased write-to-read turnaround time caused by a first core in the processor-based system that causes a delay in access to the memory by a second core in the processor-based system; deriving a control signal based on the number of interference events; and transmitting the control signal to one or more resources of the processor-based system to reduce the number of interference events from an original number of interference events.
    Type: Grant
    Filed: October 29, 2012
    Date of Patent: November 4, 2014
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Gabriel Loh, James O'Connor
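The control-signal derivation can be illustrated with a simple counter-to-throttle mapping. The budget and the proportional throttle level in the sketch below are illustrative choices; the abstract only requires that a control signal be derived from the interference-event counts.

```python
def derive_throttle_signal(event_counts, budget):
    """Derive a per-resource control signal from interference-event counts.

    Each count (bank contention, bus contention, row-buffer conflict, ...)
    caused by the first core is compared with a budget, and resources whose
    counts exceed it are asked to throttle. The proportional 0..1 throttle
    level is my own illustrative choice.
    """
    signals = {}
    for event, count in event_counts.items():
        if count > budget:
            signals[event] = min(1.0, (count - budget) / budget)
        else:
            signals[event] = 0.0
    return signals

counts = {"bank_contention": 120, "bus_contention": 40, "row_buffer_conflict": 75}
print(derive_throttle_signal(counts, budget=50))
```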
  • Patent number: 8839053
    Abstract: Architecture that implements error correcting pointers (ECPs) within a memory row, which point to the address of failed memory cells, each of which is paired with a replacement cell to be substituted for the failed cell. If two error correcting pointers in the array point to the same cell, a precedence rule dictates that the array entry with the higher index (the entry created later) takes precedence. To count the number of error correcting pointers in use, a null pointer address can be employed to indicate that a pointer is inactive, an activation bit can be added, and/or a counter that represents the number of active error correcting pointers can be maintained. Mechanisms are provided for wear-leveling within the error correction structure, or for pairing this scheme with single-error correcting bits for instances where transient failures may occur. The architecture also employs pointers to correct errors in volatile and non-volatile memories.
    Type: Grant
    Filed: May 27, 2010
    Date of Patent: September 16, 2014
    Assignee: Microsoft Corporation
    Inventors: Stuart Schechter, Karin Strauss, Gabriel Loh, Douglas C. Burger
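The precedence rule for error correcting pointers falls out naturally if entries are applied in creation order, since a later entry for the same cell overwrites an earlier one. The sketch below models a row as a list of bits; the entry format is an assumption.

```python
def read_row(raw_bits, ecp_entries):
    """Apply error-correcting-pointer entries to a row of bits.

    `ecp_entries` is a list of (failed_bit_index, replacement_bit) pairs in
    the order they were created; when two entries name the same cell, the
    one with the higher index (created later) wins, so applying them in
    order and letting later writes overwrite earlier ones is enough.
    """
    bits = list(raw_bits)
    for failed_index, replacement_bit in ecp_entries:
        bits[failed_index] = replacement_bit
    return bits

row = [1, 0, 1, 1, 0, 0, 1, 0]
ecps = [(2, 0), (5, 1), (2, 1)]   # the last entry supersedes the first for bit 2
print(read_row(row, ecps))        # -> [1, 0, 1, 1, 0, 1, 1, 0]
```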
  • Publication number: 20140181384
    Abstract: A system, method and computer program product to store tag blocks in a tag buffer in order to provide early row-buffer miss detection, early page closing, and reductions in tag block transfers. A system comprises a tag buffer, a request buffer, and a memory controller. The request buffer stores a memory request having an associated tag. The memory controller compares the associated tag to a plurality of tags stored in the tag buffer and issues the memory request stored in the request buffer to either a memory cache or a main memory based on the comparison.
    Type: Application
    Filed: December 21, 2012
    Publication date: June 26, 2014
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Gabriel Loh, Jaewoong Sim
  • Publication number: 20140181415
    Abstract: Prefetching functionality on a logic die stacked with memory is described herein. A device includes a logic chip stacked with a memory chip. The logic chip includes a control block, an in-stack prefetch request handler and a memory controller. The control block receives memory requests from an external source and determines availability of the requested data in the in-stack prefetch request handler. If the data is available, the control block sends the requested data to the external source. If the data is not available, the control block obtains the requested data via the memory controller. The in-stack prefetch request handler includes a prefetch controller, a prefetcher and a prefetch buffer. The prefetcher monitors the memory requests and based on observed patterns, issues additional prefetch requests to the memory controller.
    Type: Application
    Filed: December 21, 2012
    Publication date: June 26, 2014
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Gabriel Loh, Nuwan Jayasena, James O'Connor, Michael Schulte, Michael Ignatowski
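The control flow of the in-stack prefetch request handler can be sketched as check-the-buffer-first, fall back to the memory controller, and prefetch on the side. The next-line prefetch policy below is an assumption; the publication only says the prefetcher acts on observed patterns.

```python
class InStackPrefetchHandler:
    """Toy version of an in-stack prefetch request handler.

    The control block checks the prefetch buffer first; on a buffer miss it
    goes to the memory controller (modelled here as a plain dict), and the
    prefetcher issues a next-line prefetch based on the observed request.
    """
    def __init__(self, memory, line_size=64):
        self.memory = memory
        self.line_size = line_size
        self.prefetch_buffer = {}

    def handle_request(self, addr):
        if addr in self.prefetch_buffer:                 # served from the stack
            return self.prefetch_buffer.pop(addr)
        data = self.memory[addr]                         # fall back to a DRAM access
        nxt = addr + self.line_size                      # prefetcher guesses the next line
        if nxt in self.memory:
            self.prefetch_buffer[nxt] = self.memory[nxt]
        return data

mem = {0: "A", 64: "B", 128: "C"}
handler = InStackPrefetchHandler(mem)
handler.handle_request(0)                # misses the buffer, prefetches line 64
print(handler.handle_request(64))        # -> "B", served from the prefetch buffer
```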
  • Publication number: 20140177347
    Abstract: A method and apparatus for inter-row data transfer in memory devices is described. Data transfer from one physical location in a memory device to another is achieved without engaging the external input/output pins on the memory device. In an example method, a memory device is responsive to a row transfer (RT) command which includes a source row identifier and a target row identifier. The memory device activates a source row and stores source row data in a row buffer, latches the target row identifier into the memory device, activates a word line of a target row to prepare for a write operation, and stores the source row data from the row buffer into the target row.
    Type: Application
    Filed: December 20, 2012
    Publication date: June 26, 2014
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Niladrish Chatterjee, James O'Connor, Nuwan Jayasena, Gabriel Loh
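The RT command sequence in this abstract maps onto a three-step sketch: activate the source row into the row buffer, latch the target row identifier, and write the buffer into the target row. The dict-of-rows model of a DRAM bank below is only an illustration, not the claimed circuit.

```python
def row_transfer(bank, source_row, target_row):
    """Execute a toy RT (row transfer) command.

    The source row is "activated" into the row buffer, the target row id is
    latched, and the buffer is written into the target row, all without the
    data ever leaving the (modelled) device. `bank` is a dict of row id ->
    list of values, standing in for a DRAM bank.
    """
    row_buffer = list(bank[source_row])    # ACTIVATE: source row into the row buffer
    latched_target = target_row            # latch the target row identifier
    bank[latched_target] = row_buffer      # drive the buffer into the target word line
    return bank

bank = {0: [1, 2, 3, 4], 7: [0, 0, 0, 0]}
print(row_transfer(bank, source_row=0, target_row=7))   # row 7 now mirrors row 0
```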