Patents by Inventor Gabriel Loh
Gabriel Loh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240087667
Abstract: Error correction for stacked memory is described. In accordance with the described techniques, a system includes a plurality of error correction code engines to detect vulnerabilities in a stacked memory and communicate at least one vulnerability detected for a portion of the stacked memory to at least one other portion of the stacked memory.
Type: Application
Filed: August 29, 2023
Publication date: March 14, 2024
Applicant: Advanced Micro Devices, Inc.
Inventors: Divya Madapusi Srinivas Prasad, Michael Ignatowski, Gabriel Loh
-
Patent number: 10437736
Abstract: A data processing system includes a memory and an input output memory management unit that is connected to the memory. The input output memory management unit is adapted to receive batches of address translation requests. The input output memory management unit has instructions that identify, from among the batches of address translation requests, a later batch having a lower number of memory access requests than an earlier batch, and selectively schedules access to a page table walker for each address translation request of a batch.
Type: Grant
Filed: December 22, 2017
Date of Patent: October 8, 2019
Assignee: Advanced Micro Devices, Inc.
Inventors: Arkaprava Basu, Eric Van Tassell, Mark Oskin, Guilherme Cox, Gabriel Loh
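The batch scheduling idea above can be sketched as follows. The shortest-batch-first policy with arrival-order tie-breaking, and all names here, are illustrative assumptions, not the patented implementation:

```python
def schedule_batches(batches):
    """Return batch indices in the order they visit the page table walker.

    `batches` is a list of request lists, in arrival order. A later batch
    overtakes an earlier one only if it holds strictly fewer requests.
    """
    order = []
    pending = list(enumerate(batches))
    while pending:
        # Smallest batch first; ties go to the earliest arrival.
        idx, batch = min(pending, key=lambda p: (len(p[1]), p[0]))
        order.append(idx)
        pending.remove((idx, batch))
    return order

print(schedule_batches([[1, 2, 3], [4], [5, 6]]))  # → [1, 2, 0]
```

The single-request batch arriving second is granted the walker first, matching the abstract's "later batch having a lower number of memory access requests" case.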
-
Patent number: 10394726
Abstract: A memory network includes a plurality of memory nodes each identifiable by an ordinal number m, and a set of links divided into N subsets of links, where each subset of links is identifiable by an ordinal number n. For each subset of the plurality of N subsets of links, each link in the subset connects two memory nodes that have ordinal numbers m differing by b^(n-1), where b is a positive number. Each of the memory nodes is communicatively coupled to a processor via at least two non-overlapping pathways through the plurality of links.
Type: Grant
Filed: August 5, 2016
Date of Patent: August 27, 2019
Assignee: Advanced Micro Devices, Inc.
Inventor: Gabriel Loh
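A minimal sketch of the link construction described above, assuming wrap-around connections at the ends of the node sequence (the abstract does not specify boundary handling, and the processor pathways are not modeled):

```python
def build_links(num_nodes, b, num_subsets):
    """Build the link set: subset n (1-based) connects node pairs whose
    ordinal numbers differ by b**(n-1). Wrap-around is an assumption."""
    links = set()
    for n in range(1, num_subsets + 1):
        step = b ** (n - 1)
        for m in range(num_nodes):
            links.add(frozenset({m, (m + step) % num_nodes}))
    return links
```

With 8 nodes, b = 2, and N = 3 subsets, the subsets connect nodes at distances 1, 2, and 4, giving every node multiple disjoint routes to any other node.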
-
Publication number: 20190196978
Abstract: A data processing system includes a memory and an input output memory management unit that is connected to the memory. The input output memory management unit is adapted to receive batches of address translation requests. The input output memory management unit has instructions that identify, from among the batches of address translation requests, a later batch having a lower number of memory access requests than an earlier batch, and selectively schedules access to a page table walker for each address translation request of a batch.
Type: Application
Filed: December 22, 2017
Publication date: June 27, 2019
Applicant: Advanced Micro Devices, Inc.
Inventors: Arkaprava Basu, Eric Van Tassell, Mark Oskin, Guilherme Cox, Gabriel Loh
-
Patent number: 10133678
Abstract: In some embodiments, a method of managing cache memory includes identifying a group of cache lines in a cache memory, based on a correlation between the cache lines. The method also includes tracking evictions of cache lines in the group from the cache memory and, in response to a determination that a criterion regarding eviction of cache lines in the group from the cache memory is satisfied, selecting one or more (e.g., all) remaining cache lines in the group for eviction.
Type: Grant
Filed: August 28, 2013
Date of Patent: November 20, 2018
Assignee: Advanced Micro Devices, Inc.
Inventors: Yasuko Eckert, Syed Ali Jafri, Srilatha Manne, Gabriel Loh
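The grouped-eviction tracking can be sketched as follows; the 50% eviction threshold is an illustrative assumption standing in for the abstract's unspecified criterion:

```python
class GroupEvictionTracker:
    """Track evictions of a correlated group of cache lines; once the
    eviction criterion is met, select the remaining lines for eviction."""

    def __init__(self, group, threshold=0.5):
        self.group = set(group)   # addresses of the correlated cache lines
        self.evicted = set()
        self.threshold = threshold

    def on_evict(self, line):
        """Record one eviction; return any lines now selected for eviction."""
        if line in self.group:
            self.evicted.add(line)
            if len(self.evicted) / len(self.group) >= self.threshold:
                return self.group - self.evicted
        return set()
```

Once half of a four-line group has been evicted, the tracker nominates the remaining two lines, on the premise that correlated lines stop being useful together.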
-
Patent number: 10049044
Abstract: Proactive flush logic in a computing system is configured to perform a proactive flush operation to flush data from a first memory in a first computing device to a second memory in response to execution of a non-blocking flush instruction. Reactive flush logic in the computing system is configured to, in response to a memory request issued prior to completion of the proactive flush operation, interrupt the proactive flush operation and perform a reactive flush operation to flush requested data from the first memory to the second memory.
Type: Grant
Filed: June 14, 2016
Date of Patent: August 14, 2018
Assignee: Advanced Micro Devices, Inc.
Inventors: Michael Boyer, Gabriel Loh, Nuwan Jayasena
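One way to picture the proactive/reactive split is a background drain that lets an arriving request jump the queue; the dictionary-backed memories and the queue-jump semantics here are assumptions for illustration:

```python
class Flusher:
    """Sketch: a proactive flush drains dirty data from a first memory to
    a second in the background; a memory request arriving mid-flush is
    serviced first by a reactive flush of just the requested address."""

    def __init__(self, first_mem, second_mem):
        self.first = first_mem      # dict: addr -> dirty data
        self.second = second_mem    # dict: addr -> data

    def flush_one(self, addr):
        self.second[addr] = self.first.pop(addr)

    def proactive_flush(self, interrupt_addr=None):
        """Flush all dirty data; an interrupting request is serviced first."""
        if interrupt_addr is not None and interrupt_addr in self.first:
            self.flush_one(interrupt_addr)   # reactive flush
        for addr in list(self.first):
            self.flush_one(addr)             # resume the proactive flush
```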
-
Publication number: 20180039587
Abstract: A memory network includes a plurality of memory nodes each identifiable by an ordinal number m, and a set of links divided into N subsets of links, where each subset of links is identifiable by an ordinal number n. For each subset of the plurality of N subsets of links, each link in the subset connects two memory nodes that have ordinal numbers m differing by b^(n-1), where b is a positive number. Each of the memory nodes is communicatively coupled to a processor via at least two non-overlapping pathways through the plurality of links.
Type: Application
Filed: August 5, 2016
Publication date: February 8, 2018
Inventor: Gabriel Loh
-
Publication number: 20170357583
Abstract: Proactive flush logic in a computing system is configured to perform a proactive flush operation to flush data from a first memory in a first computing device to a second memory in response to execution of a non-blocking flush instruction. Reactive flush logic in the computing system is configured to, in response to a memory request issued prior to completion of the proactive flush operation, interrupt the proactive flush operation and perform a reactive flush operation to flush requested data from the first memory to the second memory.
Type: Application
Filed: June 14, 2016
Publication date: December 14, 2017
Inventors: Michael Boyer, Gabriel Loh, Nuwan Jayasena
-
Publication number: 20170161194
Abstract: A method of prefetching data includes issuing to a translation lookaside buffer (TLB) an address translation request for a virtual memory address, detecting a TLB miss generated in response to the address translation request, and in response to the TLB miss, selecting the data for prefetching from memory based on the memory address causing the TLB miss and prefetching the selected data to a cache.
Type: Application
Filed: December 2, 2015
Publication date: June 8, 2017
Inventor: Gabriel Loh
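The TLB-miss-triggered prefetch can be sketched as below; next-page prefetching is an illustrative policy choice, not necessarily the one claimed:

```python
def handle_translation(vaddr, tlb, page_size=4096):
    """On a TLB miss, pick prefetch targets from the missing address."""
    page = vaddr // page_size
    if page in tlb:
        return []                    # TLB hit: no prefetch is triggered
    tlb.add(page)                    # page walk completes; install entry
    return [(page + 1) * page_size]  # prefetch data from the adjacent page
```

The point of the scheme is that a TLB miss signals the start of activity in a new memory region, which makes the missing address a useful seed for data prefetching.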
-
Patent number: 9377954
Abstract: A system for memory allocation in a multiclass memory system includes a processor coupleable to a plurality of memories sharing a unified memory address space, and a library store to store a library of software functions. The processor identifies a type of a data structure in response to a memory allocation function call to the library for allocating memory to the data structure. Using the library, the processor allocates portions of the data structure among multiple memories of the multiclass memory system based on the type of the data structure.
Type: Grant
Filed: May 9, 2014
Date of Patent: June 28, 2016
Assignee: Advanced Micro Devices, Inc.
Inventors: Gabriel Loh, Mitesh Meswani, Michael Ignatowski, Mark Nutter
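Type-directed placement can be sketched with a lookup table; the memory classes, type names, and mapping below are hypothetical, not AMD's actual policy:

```python
# Hypothetical placement table: which memory class each data-structure
# type lands in within the unified address space.
PLACEMENT = {
    "hot_array": "stacked_dram",   # bandwidth-hungry structure
    "cold_table": "nvm",           # capacity-bound, rarely touched
}

def multiclass_alloc(kind, size, default="ddr"):
    """Return (memory class, bytes) for a structure of the given kind."""
    return (PLACEMENT.get(kind, default), size)
```

An allocation call identifies the structure's type and the library routes it to the matching memory class, falling back to a default class for unknown types.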
-
Publication number: 20160041914
Abstract: Embodiments include methods, systems, and computer readable media directed to cache bypassing based on prefetch streams. A first cache receives a memory access request. The request references data in the memory. The data comprises non-reuse data. After a determination of a miss in the first cache, the first cache forwards the memory access request to a cache control logic. The detection of the non-reuse data instructs the cache control logic to allocate a block only in a second cache and bypass allocating a block in the first cache. The first cache is closer to the memory than the second cache.
Type: Application
Filed: August 5, 2014
Publication date: February 11, 2016
Applicant: Advanced Micro Devices, Inc.
Inventors: Yasuko Eckert, Gabriel Loh
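The bypass decision reduces to one branch on the fill path; modeling the caches as address sets is a simplification for illustration:

```python
def fill_on_miss(addr, is_non_reuse, first_cache, second_cache):
    """first_cache sits closer to memory than second_cache. Non-reuse
    (prefetch-stream) data allocates only in the second cache and
    bypasses the first."""
    second_cache.add(addr)
    if not is_non_reuse:
        first_cache.add(addr)   # reusable data also fills the first cache
```

Skipping allocation in the memory-side cache keeps streaming data from displacing blocks that will actually be reused there.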
-
Publication number: 20150324131
Abstract: A system for memory allocation in a multiclass memory system includes a processor coupleable to a plurality of memories sharing a unified memory address space, and a library store to store a library of software functions. The processor identifies a type of a data structure in response to a memory allocation function call to the library for allocating memory to the data structure. Using the library, the processor allocates portions of the data structure among multiple memories of the multiclass memory system based on the type of the data structure.
Type: Application
Filed: May 9, 2014
Publication date: November 12, 2015
Applicant: Advanced Micro Devices, Inc.
Inventors: Gabriel Loh, Mitesh Meswani, Michael Ignatowski, Mark Nutter
-
Patent number: 9026731
Abstract: A system, method and computer program product to store tag blocks in a tag buffer in order to provide early row-buffer miss detection, early page closing, and reductions in tag block transfers. A system comprises a tag buffer, a request buffer, and a memory controller. The request buffer stores a memory request having an associated tag. The memory controller compares the associated tag to a plurality of tags stored in the tag buffer and issues the memory request stored in the request buffer to either a memory cache or a main memory based on the comparison.
Type: Grant
Filed: December 21, 2012
Date of Patent: May 5, 2015
Assignee: Advanced Micro Devices, Inc.
Inventors: Gabriel Loh, Jaewoong Sim
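The tag comparison amounts to a membership check; routing a match to the memory cache and a miss straight to main memory is a simplified reading of the abstract:

```python
def route_request(tag, tag_buffer):
    """Compare a request's tag against buffered tag blocks and decide
    where the memory controller issues the request."""
    return "memory_cache" if tag in tag_buffer else "main_memory"
```

Because the comparison happens against the on-controller tag buffer, a request destined for main memory never pays for a tag lookup in the cache itself, which is what enables early row-buffer miss detection and early page closing.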
-
Publication number: 20150067264
Abstract: In some embodiments, a method of managing cache memory includes identifying a group of cache lines in a cache memory, based on a correlation between the cache lines. The method also includes tracking evictions of cache lines in the group from the cache memory and, in response to a determination that a criterion regarding eviction of cache lines in the group from the cache memory is satisfied, selecting one or more (e.g., all) remaining cache lines in the group for eviction.
Type: Application
Filed: August 28, 2013
Publication date: March 5, 2015
Applicant: Advanced Micro Devices, Inc.
Inventors: Yasuko Eckert, Syed Ali Jafri, Srilatha Manne, Gabriel Loh
-
Patent number: 8935472
Abstract: A data processing device is provided that includes an array of working memory banks and an associated processing engine. The working memory bank array is configured with at least one independently activatable memory bank. A dirty data counter (DDC) is associated with the independently activatable memory bank and is configured to reflect a count of dirty data migrated from the independently activatable memory bank upon selective deactivation of the independently activatable memory bank. The DDC is configured to selectively decrement the count of dirty data upon the reactivation of the independently activatable memory bank in connection with a transient state. In the transient state, each dirty data access by the processing engine to the reactivated memory bank is also conducted with respect to another memory bank of the array. Upon a condition that dirty data is found in the other memory bank, the count of dirty data is decremented.
Type: Grant
Filed: December 21, 2012
Date of Patent: January 13, 2015
Assignee: Advanced Micro Devices, Inc.
Inventors: Mithuna Thottethodi, Gabriel Loh, Mauricio Breternitz, James O'Connor, Yasuko Eckert
-
Patent number: 8880809
Abstract: Embodiments are described for a method for controlling access to memory in a processor-based system comprising monitoring a number of interference events, such as bank contentions, bus contentions, row-buffer conflicts, and increased write-to-read turnaround time caused by a first core in the processor-based system that causes a delay in access to the memory by a second core in the processor-based system; deriving a control signal based on the number of interference events; and transmitting the control signal to one or more resources of the processor-based system to reduce the number of interference events from an original number of interference events.
Type: Grant
Filed: October 29, 2012
Date of Patent: November 4, 2014
Assignee: Advanced Micro Devices, Inc.
Inventors: Gabriel Loh, James O'Connor
-
Patent number: 8839053
Abstract: Architecture that implements error correcting pointers (ECPs) with a memory row, which point to the address of failed memory cells, each of which is paired with a replacement cell to be substituted for the failed cell. If two error correcting pointers in the array point to the same cell, a precedence rule dictates the array entry with the higher index (the entry created later) takes precedence. To count the number of error correcting pointers in use, a null pointer address can be employed to indicate that a pointer is inactive, an activation bit can be added, and/or a counter, that represents the number of error correcting pointers that are active. Mechanisms are provided for wear-leveling within the error correction structure, or for pairing this scheme with single-error correcting bits for instances where transient failures may occur. The architecture also employs pointers to correct errors in volatile and non-volatile memories.
Type: Grant
Filed: May 27, 2010
Date of Patent: September 16, 2014
Assignee: Microsoft Corporation
Inventors: Stuart Schechter, Karin Strauss, Gabriel Loh, Douglas C. Burger
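The pointer mechanism and its precedence rule can be sketched as below; representing a replacement cell as a single bit per pointer is a simplification of the hardware structure:

```python
class ECPRow:
    """One memory row with error-correcting pointers: each pointer names a
    failed cell and carries a replacement bit. When two pointers target
    the same cell, the higher-index (later) entry takes precedence."""

    def __init__(self, data):
        self.data = list(data)   # raw cell values, some possibly stuck
        self.ecps = []           # (failed_cell_index, replacement_bit)

    def add_pointer(self, cell, replacement_bit):
        self.ecps.append((cell, replacement_bit))

    def read(self):
        out = list(self.data)
        for cell, bit in self.ecps:   # applying in index order means the
            out[cell] = bit           # later entry wins on a conflict
        return out
```

Applying the pointers in index order implements the precedence rule for free: a later pointer to the same cell simply overwrites the earlier correction.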
-
Publication number: 20140181384
Abstract: A system, method and computer program product to store tag blocks in a tag buffer in order to provide early row-buffer miss detection, early page closing, and reductions in tag block transfers. A system comprises a tag buffer, a request buffer, and a memory controller. The request buffer stores a memory request having an associated tag. The memory controller compares the associated tag to a plurality of tags stored in the tag buffer and issues the memory request stored in the request buffer to either a memory cache or a main memory based on the comparison.
Type: Application
Filed: December 21, 2012
Publication date: June 26, 2014
Applicant: Advanced Micro Devices, Inc.
Inventors: Gabriel Loh, Jaewoong Sim
-
Publication number: 20140181415
Abstract: Prefetching functionality on a logic die stacked with memory is described herein. A device includes a logic chip stacked with a memory chip. The logic chip includes a control block, an in-stack prefetch request handler and a memory controller. The control block receives memory requests from an external source and determines availability of the requested data in the in-stack prefetch request handler. If the data is available, the control block sends the requested data to the external source. If the data is not available, the control block obtains the requested data via the memory controller. The in-stack prefetch request handler includes a prefetch controller, a prefetcher and a prefetch buffer. The prefetcher monitors the memory requests and based on observed patterns, issues additional prefetch requests to the memory controller.
Type: Application
Filed: December 21, 2012
Publication date: June 26, 2014
Applicant: Advanced Micro Devices, Inc.
Inventors: Gabriel Loh, Nuwan Jayasena, James O'Connor, Michael Schulte, Michael Ignatowski
-
Publication number: 20140177347
Abstract: A method and apparatus for inter-row data transfer in memory devices is described. Data transfer from one physical location in a memory device to another is achieved without engaging the external input/output pins on the memory device. In an example method, a memory device is responsive to a row transfer (RT) command which includes a source row identifier and a target row identifier. The memory device activates a source row and stores the source row data in a row buffer, latches the target row identifier into the memory device, activates a word line of a target row to prepare for a write operation, and stores the source row data from the row buffer into the target row.
Type: Application
Filed: December 20, 2012
Publication date: June 26, 2014
Applicant: Advanced Micro Devices, Inc.
Inventors: Niladrish Chatterjee, James O'Connor, Nuwan Jayasena, Gabriel Loh
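The RT command sequence above can be sketched as a two-step copy through the row buffer; modeling rows as Python lists is a simplification of the DRAM array:

```python
class SimpleDram:
    """Sketch of the row transfer (RT) command: copy a source row to a
    target row through the row buffer, without touching the I/O pins."""

    def __init__(self, num_rows, num_cols):
        self.rows = [[0] * num_cols for _ in range(num_rows)]
        self.row_buffer = None

    def row_transfer(self, src, dst):
        self.row_buffer = list(self.rows[src])   # activate source row
        self.rows[dst] = list(self.row_buffer)   # write buffer into target
```

Because the data never leaves the device, the copy avoids the bandwidth and energy cost of reading a row out over the pins only to write it back in.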