Patents by Inventor Gabriel H. Loh

Gabriel H. Loh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160170919
    Abstract: A system includes a plurality of memory classes and a set of one or more processing units coupled to the plurality of memory classes. The system further includes a data migration controller to select a traffic rate as a maximum traffic rate for transferring data between the plurality of memory classes based on a net benefit metric associated with the traffic rate, and to enforce the maximum traffic rate for transferring data between the plurality of memory classes.
    Type: Application
    Filed: December 15, 2014
    Publication date: June 16, 2016
    Inventors: Sergey Blagodurov, Gabriel H. Loh, Yasuko Eckert
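    Illustrative sketch: the rate selection and enforcement described above can be pictured as a small search over candidate rates followed by a cap check. A minimal C sketch follows; the benefit/cost models and all identifiers are assumptions for illustration, not taken from the filing.

        #include <stddef.h>

        /* Placeholder models (assumptions, not from the filing): estimated
         * performance gain from migrating data at a given rate, and the
         * energy/bandwidth cost of doing so. A real controller would derive
         * these from hardware counters. */
        static double perf_gain_estimate(double rate)      { return 100.0 * rate / (rate + 1.0); }
        static double migration_cost_estimate(double rate) { return 5.0 * rate; }

        /* Pick the candidate traffic rate whose net benefit (gain minus cost)
         * is largest; the chosen value becomes the enforced migration cap. */
        double select_max_traffic_rate(const double *candidates, size_t n)
        {
            double best_rate = 0.0, best_net = 0.0;
            for (size_t i = 0; i < n; i++) {
                double net = perf_gain_estimate(candidates[i]) -
                             migration_cost_estimate(candidates[i]);
                if (i == 0 || net > best_net) {
                    best_net = net;
                    best_rate = candidates[i];
                }
            }
            return best_rate;
        }

        /* Enforcement: allow a migration only while the traffic observed in
         * the current interval stays under the selected cap. */
        int migration_allowed(double bytes_moved, double interval_s, double cap_bytes_per_s)
        {
            return bytes_moved / interval_s < cap_bytes_per_s;
        }
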
  • Publication number: 20160170887
    Abstract: To efficiently transfer data from a cache to a memory, it is desirable that more data corresponding to the same page in the memory be loaded in a row buffer. Writing data to a memory page that is not currently loaded in a row buffer requires closing an old page and opening a new page. Both operations consume energy and clock cycles and potentially delay more critical memory read requests. Hence it is desirable to have more than one write going to the same DRAM page to amortize the cost of opening and closing DRAM pages. A desirable approach is to batch write backs to the same DRAM page by retaining modified blocks in the cache until a sufficient number of modified blocks belonging to the same memory page are ready for write back.
    Type: Application
    Filed: December 12, 2014
    Publication date: June 16, 2016
    Inventors: Syed Ali R. Jafri, Yasuko Eckert, Srilatha Manne, Mithuna S. Thottethodi, Gabriel H. Loh
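    Illustrative sketch: the batching policy described above amounts to deferring a write back until enough dirty blocks map to the same DRAM page. A minimal C sketch, with the page size, threshold, and identifiers chosen only for illustration:

        #include <stdint.h>
        #include <stddef.h>

        #define DRAM_PAGE_BITS  12   /* assumed 4 KiB DRAM page (row) */
        #define BATCH_THRESHOLD  4   /* assumed: flush once 4 dirty blocks share a page */

        struct cache_block {
            uint64_t addr;
            int      dirty;
        };

        static uint64_t page_of(uint64_t addr) { return addr >> DRAM_PAGE_BITS; }

        /* Write back the dirty block at index `victim` only if enough other
         * dirty blocks map to the same DRAM page, so the cost of opening and
         * closing the page is amortized over the whole batch. */
        int should_flush_batch(const struct cache_block *blocks, size_t n, size_t victim)
        {
            if (!blocks[victim].dirty)
                return 0;
            uint64_t page = page_of(blocks[victim].addr);
            int same_page_dirty = 0;
            for (size_t i = 0; i < n; i++)
                if (blocks[i].dirty && page_of(blocks[i].addr) == page)
                    same_page_dirty++;
            return same_page_dirty >= BATCH_THRESHOLD;
        }
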
  • Patent number: 9367466
    Abstract: A conditional probability fetcher prefetches data from a second memory into a first memory, such as a cache, by maintaining information relating to memory elements in a group of memory elements fetched from the second memory. The information may be an aggregate number of memory elements that have been fetched for different memory segments in the group. The information is maintained responsive to fetching one or more memory elements from a segment of memory elements in the group of memory elements. One or more remaining memory elements in a particular segment are prefetched from the second memory into the first memory when the information relating to the memory elements in the group indicates that a prefetching condition has been satisfied.
    Type: Grant
    Filed: February 13, 2013
    Date of Patent: June 14, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Matthew R. Poremba, Gabriel H. Loh
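    Illustrative sketch: the prefetch trigger described above can be modeled as a per-group count of demand fetches compared against a threshold. The group/segment sizes, threshold, and names below are assumptions, not from the filing:

        #include <stdint.h>
        #include <stdbool.h>

        #define SEGMENTS_PER_GROUP 64  /* assumed: e.g. 64 blocks per 4 KiB page */
        #define PREFETCH_THRESHOLD  8  /* assumed trigger: 8 demand fetches in a group */

        struct group_state {
            uint64_t group_id;      /* e.g. page number of the tracked group */
            uint64_t fetched_mask;  /* which segments were demand-fetched */
            int      fetched_count; /* aggregate number of fetched segments */
        };

        /* Called on each demand fetch of one segment (0..SEGMENTS_PER_GROUP-1)
         * from the second memory. Returns true when the aggregate count says
         * the remaining segments of the group should be prefetched. */
        bool record_fetch_and_check(struct group_state *g, uint64_t group_id, int segment)
        {
            if (g->group_id != group_id) {  /* new group: reset tracking state */
                g->group_id = group_id;
                g->fetched_mask = 0;
                g->fetched_count = 0;
            }
            uint64_t bit = 1ULL << segment;
            if (!(g->fetched_mask & bit)) {
                g->fetched_mask |= bit;
                g->fetched_count++;
            }
            return g->fetched_count >= PREFETCH_THRESHOLD;
        }
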
  • Patent number: 9361956
    Abstract: The described embodiments include a memory with a memory array and logic circuits. In these embodiments, logical operations are performed on data from the memory array by reading the data from the memory array, performing a logical operation on the data in the logic circuits, and writing the data back to the memory array. In these embodiments, the logic circuits are located in the memory so that the data read from the memory array need not be sent to another circuit (e.g., a processor coupled to the memory) to have the logical operation performed.
    Type: Grant
    Filed: January 15, 2014
    Date of Patent: June 7, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Hye Ran Jeon, Gabriel H. Loh
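    Illustrative sketch: a behavioral model of the read-operate-write-back loop described above, with the data staying inside the memory device. The operation set and identifiers are illustrative only:

        #include <stdint.h>
        #include <stddef.h>

        enum logic_op { OP_AND, OP_OR, OP_XOR, OP_NOT };

        /* Behavioral model of the read-operate-write-back path: the data is
         * read from the array, combined in logic next to the array, and
         * written back without ever leaving the memory device. */
        void in_memory_logic(uint8_t *array, size_t dst_row, size_t src_row,
                             size_t row_bytes, enum logic_op op)
        {
            uint8_t *dst = array + dst_row * row_bytes;
            const uint8_t *src = array + src_row * row_bytes;
            for (size_t i = 0; i < row_bytes; i++) {
                switch (op) {
                case OP_AND: dst[i] &= src[i]; break;
                case OP_OR:  dst[i] |= src[i]; break;
                case OP_XOR: dst[i] ^= src[i]; break;
                case OP_NOT: dst[i] = (uint8_t)~dst[i]; break;
                }
            }
        }
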
  • Patent number: 9344091
    Abstract: A die-stacked memory device incorporates a reconfigurable logic device to provide implementation flexibility in performing various data manipulation operations and other memory operations that use data stored in the die-stacked memory device or that result in data that is to be stored in the die-stacked memory device. One or more configuration files representing corresponding logic configurations for the reconfigurable logic device can be stored in a configuration store at the die-stacked memory device, and a configuration controller can program a reconfigurable logic fabric of the reconfigurable logic device using a selected one of the configuration files. Due to the integration of the logic dies and the memory dies, the reconfigurable logic device can perform various data manipulation operations with higher bandwidth and lower latency and power consumption compared to devices external to the die-stacked memory device.
    Type: Grant
    Filed: November 24, 2014
    Date of Patent: May 17, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Nuwan S. Jayasena, Michael J. Schulte, Gabriel H. Loh, Michael Ignatowski
  • Patent number: 9331053
    Abstract: Various stacked semiconductor chip arrangements and methods of manufacturing the same are disclosed. In one aspect, an apparatus is provided that includes a first semiconductor chip, a second semiconductor chip mounted on the first semiconductor chip, and a first portion of a phase change material positioned in a first pocket associated with the first semiconductor chip or the second semiconductor chip to store heat generated by one or both of the first and second semiconductor chips.
    Type: Grant
    Filed: August 31, 2013
    Date of Patent: May 3, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Manish Arora, Nuwan Jayasena, Gabriel H. Loh, Michael J. Schulte
  • Publication number: 20160085677
    Abstract: A processing system having a multilevel cache hierarchy employs techniques for repurposing dead cache blocks so as to use otherwise wasted space in a cache hierarchy employing a write-back scheme. For a cache line containing invalid data with a valid tag, the valid tag is maintained for cache coherence purposes or otherwise, resulting in a valid tag for a dead cache block. A cache controller repurposes the dead cache block by storing any of a variety of new data at the dead cache block, while storing the new tag in a tag entry of a dead block tag way with an identifier indicating the location of the new data.
    Type: Application
    Filed: September 19, 2014
    Publication date: March 24, 2016
    Inventors: Gabriel H. Loh, Derek R. Hower, Shuai Che
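    Illustrative sketch: one plausible way to model the repurposing step described above is to reuse the data storage of a way whose tag is valid but whose data is invalid, and to record the new tag plus a pointer to that way in a spare tag entry. The field and parameter names are assumptions, not from the filing:

        #include <stdint.h>
        #include <stdbool.h>

        #define WAYS 8

        struct tag_entry {
            uint64_t tag;
            bool     valid_tag;     /* tag kept valid, e.g. for coherence */
            bool     valid_data;    /* false while valid_tag is true => "dead" block */
            int      redirect_way;  /* where repurposed data actually lives (-1: none) */
        };

        struct cache_set {
            struct tag_entry tags[WAYS];   /* data array omitted for brevity */
        };

        /* Place new data into a dead block of this set instead of evicting a
         * live line: reuse the dead way's data storage, and record the new tag
         * plus the data's location in the reserved dead-block tag way. Returns
         * true if a dead block was found; the caller then writes the new data
         * into the returned way's storage. */
        bool repurpose_dead_block(struct cache_set *s, uint64_t new_tag, int spare_tag_way)
        {
            for (int w = 0; w < WAYS; w++) {
                if (s->tags[w].valid_tag && !s->tags[w].valid_data) {
                    s->tags[spare_tag_way].tag          = new_tag;
                    s->tags[spare_tag_way].valid_tag    = true;
                    s->tags[spare_tag_way].valid_data   = true;
                    s->tags[spare_tag_way].redirect_way = w;  /* identifier: data location */
                    return true;
                }
            }
            return false;  /* no dead block available in this set */
        }
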
  • Patent number: 9286948
    Abstract: An integrated circuit (IC) package includes a stacked-die memory device. The stacked-die memory device includes a set of one or more stacked memory dies implementing memory cell circuitry. The stacked-die memory device further includes a set of one or more logic dies electrically coupled to the memory cell circuitry. The set of one or more logic dies includes a query controller and a memory controller. The memory controller is coupleable to at least one device external to the stacked-die memory device. The query controller is to perform a query operation on data stored in the memory cell circuitry responsive to a query command received from the external device.
    Type: Grant
    Filed: July 15, 2013
    Date of Patent: March 15, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Gabriel H. Loh, Nuwan S. Jayasena, James M. O'Connor, Yasuko Eckert
  • Publication number: 20160062803
    Abstract: The described embodiments comprise a selection mechanism that selects a resource from a set of resources in a computing device for performing an operation. In some embodiments, the selection mechanism performs a lookup in a table selected from a set of tables to identify a resource from the set of resources. When that resource is not available for performing the operation, the selection mechanism identifies a next resource in the table and selects it for performing the operation if it is available, continuing until a resource is selected.
    Type: Application
    Filed: November 6, 2015
    Publication date: March 3, 2016
    Inventors: Bradford M. Beckmann, Mithuna S. Thottethodi, James M. O'Connor, Mauricio Breternitz, Lisa R. Hsu, Gabriel H. Loh, Yasuko Eckert
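    Illustrative sketch: the selection loop described above, modeled as a walk through one lookup table starting at the table's suggested entry. The table layout and identifiers are assumptions:

        #include <stdbool.h>
        #include <stddef.h>

        /* Select a resource for an operation. `table` is one of several lookup
         * tables and lists resource indices in preferred order; `available`
         * reports per-resource readiness. Starting at the table entry chosen by
         * `key`, the walk moves to the next entry until an available resource
         * is found. Returns the resource index, or -1 if none is available. */
        int select_resource(const int *table, size_t table_len,
                            const bool *available, size_t num_resources, size_t key)
        {
            size_t start = key % table_len;
            for (size_t step = 0; step < table_len; step++) {
                int r = table[(start + step) % table_len];
                if (r >= 0 && (size_t)r < num_resources && available[r])
                    return r;
            }
            return -1;
        }
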
  • Publication number: 20160055100
    Abstract: A processing system having multilevel cache employs techniques for identifying and selecting valid candidate cache lines for eviction from a lower level cache of an inclusive cache hierarchy, so as to reduce invalidations resulting from an eviction of a cache line in a lower level cache that also resides in a higher level cache. In response to an eviction trigger for a lower level cache, a cache controller identifies candidate cache lines for eviction from the cache lines residing in the lower level cache based on the replacement policy. The cache controller uses residency metadata to identify the candidate cache line as a valid candidate if it does not also reside in the higher level cache and as an invalid candidate if it does reside in the higher level cache. The cache controller prevents eviction of invalid candidates, so as to avoid unnecessary invalidations in the higher level cache while maintaining inclusiveness.
    Type: Application
    Filed: August 19, 2014
    Publication date: February 25, 2016
    Inventor: Gabriel H. Loh
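    Illustrative sketch: victim selection as described above, filtering replacement-policy candidates by a per-line residency bit. The age-based policy and names are illustrative assumptions:

        #include <stdbool.h>
        #include <stddef.h>

        struct llc_line {
            bool     valid;
            bool     in_upper_level;  /* residency metadata: also cached above? */
            unsigned lru_age;         /* larger = older under the replacement policy */
        };

        /* Among the lines of a set, pick the oldest line that is NOT resident
         * in a higher level cache; evicting it avoids a back-invalidation while
         * keeping the hierarchy inclusive. Returns -1 if every line is an
         * invalid candidate (i.e. also lives in a higher level cache). */
        int pick_victim(const struct llc_line *set, size_t ways)
        {
            int best = -1;
            unsigned oldest = 0;
            for (size_t w = 0; w < ways; w++) {
                if (set[w].valid && !set[w].in_upper_level && set[w].lru_age >= oldest) {
                    oldest = set[w].lru_age;
                    best = (int)w;
                }
            }
            return best;
        }
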
  • Patent number: 9251081
    Abstract: A system and method for efficiently powering down banks in a cache memory for reducing power consumption. A computing system includes a cache array and a corresponding cache controller. The cache array includes multiple banks, each comprising multiple cache sets. In response to a request to power down a first bank of the multiple banks in the cache array, the cache controller selects a cache line of a given type in the first bank and determines whether a respective locality of reference for the selected cache line exceeds a threshold. If the threshold is exceeded, then the selected cache line is migrated to a second bank in the cache array. If the threshold is not exceeded, then the selected cache line is written back to lower-level memory.
    Type: Grant
    Filed: August 1, 2013
    Date of Patent: February 2, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Kai K. Chang, Yasuko Eckert, Gabriel H. Loh, Lisa R. Hsu
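    Illustrative sketch: the per-line decision made while powering down a bank, using a reuse counter as the locality-of-reference measure. The counter, threshold, and names are assumptions, not from the filing:

        #include <stdbool.h>

        struct cached_line {
            bool     dirty;
            unsigned reuse_count;  /* stand-in for the line's locality of reference */
        };

        enum drain_action { DRAIN_MIGRATE, DRAIN_WRITE_BACK, DRAIN_DROP };

        /* Per-line decision while draining a bank that is being powered down:
         * hot lines are migrated to a live bank, cold dirty lines are written
         * back to lower-level memory, cold clean lines are simply dropped. */
        enum drain_action drain_decision(const struct cached_line *l, unsigned threshold)
        {
            if (l->reuse_count > threshold)
                return DRAIN_MIGRATE;
            return l->dirty ? DRAIN_WRITE_BACK : DRAIN_DROP;
        }
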
  • Patent number: 9251069
    Abstract: A system and method for efficiently limiting storage space for data with particular properties in a cache memory. A computing system includes a cache array and a corresponding cache controller. The cache array includes multiple banks, wherein a first bank is powered down. In response to a write request to a second bank for data indicated to be stored in the powered down first bank, the cache controller determines a respective bypass condition for the data. If the bypass condition exceeds a threshold, then the cache controller invalidates any copy of the data stored in the second bank. If the bypass condition does not exceed the threshold, then the cache controller stores the data with a clean state in the second bank. The cache controller writes the data in a lower-level memory for both cases.
    Type: Grant
    Filed: October 16, 2013
    Date of Patent: February 2, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Yasuko Eckert, Gabriel H. Loh, Mauricio Breternitz, James M. O'Connor, Srilatha Manne, Nuwan S. Jayasena, Mithuna S. Thottethodi
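    Illustrative sketch: the write-handling decision described above, with the bypass condition reduced to a single metric compared against a threshold. The metric and identifiers are assumptions:

        #include <stdbool.h>

        enum write_handling { INVALIDATE_COPY, STORE_CLEAN };

        /* The written data is homed in a powered-down bank, so it cannot be
         * held dirty in the second bank. It is written to lower-level memory
         * in both cases; the bypass condition only decides whether a clean
         * copy is worth keeping in the second bank at all. */
        enum write_handling handle_write_to_powered_down_home(double bypass_metric,
                                                              double threshold,
                                                              bool *write_to_memory)
        {
            *write_to_memory = true;            /* both cases write the data below */
            if (bypass_metric > threshold)
                return INVALIDATE_COPY;         /* don't keep a copy in the live bank */
            return STORE_CLEAN;                 /* cache it, but only in a clean state */
        }
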
  • Patent number: 9244629
    Abstract: Methods, systems, and computer readable storage mediums for more efficient and flexible scheduling of tasks on an asymmetric processing system having at least one host processor and one or more slave processors are disclosed. An example embodiment includes determining a data access requirement of a task, comparing the data access requirement to respective local memories of the one or more slave processors, selecting a slave processor from the one or more slave processors based upon the comparing, and running the task on the selected slave processor.
    Type: Grant
    Filed: June 25, 2013
    Date of Patent: January 26, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Lisa R. Hsu, Gabriel H. Loh, James Michael O'Connor, Nuwan S. Jayasena
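    Illustrative sketch: the compare-and-select step described above, choosing the slave processor whose local memory best matches the task's data access requirement. The fitness criterion and names are assumptions, not from the filing:

        #include <stdint.h>
        #include <stddef.h>

        struct slave {
            uint64_t local_mem_bytes;      /* capacity of this slave's local memory */
            uint64_t resident_task_bytes;  /* portion of the task's data already local */
        };

        /* Compare the task's data access requirement against each slave's local
         * memory: the footprint must fit, and among the slaves where it fits,
         * prefer the one already holding the most of the task's data. Returns
         * -1 if no slave qualifies (e.g. run on the host instead). */
        int pick_slave(const struct slave *slaves, size_t n, uint64_t task_footprint)
        {
            int best = -1;
            uint64_t best_resident = 0;
            for (size_t i = 0; i < n; i++) {
                if (slaves[i].local_mem_bytes < task_footprint)
                    continue;
                if (best < 0 || slaves[i].resident_task_bytes > best_resident) {
                    best = (int)i;
                    best_resident = slaves[i].resident_task_bytes;
                }
            }
            return best;
        }
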
  • Publication number: 20160019937
    Abstract: Various apparatus and methods using phase change materials are disclosed. In one aspect, a method of operating a computing device that has a first semiconductor chip with a first phase change material and a second semiconductor chip with a second phase change material is provided. The method includes determining whether the first semiconductor chip phase change material has available thermal capacity. If the first semiconductor chip phase change material has available thermal capacity, then the first semiconductor chip is instructed to operate in sprint mode. The first semiconductor chip is instructed to perform a first computing task while in sprint mode.
    Type: Application
    Filed: July 21, 2014
    Publication date: January 21, 2016
    Inventors: Manish Arora, Nuwan Jayasena, Gabriel H. Loh, Michael J. Schulte
  • Patent number: 9235514
    Abstract: The described embodiments include a cache controller with a prediction mechanism in a cache. In the described embodiments, the prediction mechanism is configured to perform a lookup in each table in a hierarchy of lookup tables in parallel to determine whether a memory request is predicted to be a hit in the cache, each table in the hierarchy comprising predictions of whether memory requests to corresponding regions of a main memory will hit the cache, with the corresponding regions of the main memory being smaller for tables lower in the hierarchy.
    Type: Grant
    Filed: January 8, 2013
    Date of Patent: January 12, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Gabriel H. Loh, Jaewoong Sim
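    Illustrative sketch: a sequential model of the parallel lookup described above, where the lowest (finest) table holding information for the address supplies the prediction. The table layout, counters, and names are assumptions:

        #include <stdint.h>
        #include <stdbool.h>
        #include <stddef.h>

        #define LEVELS 3   /* tables[0] is the coarsest; tables[LEVELS-1] the lowest/finest */

        struct predictor_table {
            unsigned region_bits;   /* region size = 1 << region_bits; smaller at lower levels */
            size_t   entries;
            int8_t  *hit_counter;   /* >= 0 predicts hit, < 0 predicts miss; -128 = no info */
        };

        /* All levels are probed (conceptually in parallel); here the lowest
         * level with information about the address supplies the prediction,
         * and coarser levels act as fallbacks. */
        bool predict_hit(const struct predictor_table tables[LEVELS], uint64_t addr)
        {
            for (int lvl = LEVELS - 1; lvl >= 0; lvl--) {
                const struct predictor_table *t = &tables[lvl];
                uint64_t region = addr >> t->region_bits;
                int8_t ctr = t->hit_counter[region % t->entries];
                if (ctr != -128)
                    return ctr >= 0;
            }
            return false;  /* no information at any level: predict miss */
        }
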
  • Patent number: 9235528
    Abstract: A system, method, and memory device embodying some aspects of the present invention for remapping external memory addresses to internal memory locations in stacked memory are provided. The stacked memory includes one or more memory layers configured to store data. The stacked memory also includes a logic layer connected to the memory layers. The logic layer has an Input/Output (I/O) port configured to receive read and write commands from external devices, a memory map configured to maintain an association between external memory addresses and internal memory locations, and a controller coupled to the I/O port, memory map, and memory layers, configured to store data received from external devices at internal memory locations.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: January 12, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Lisa R. Hsu, Gabriel H. Loh, Michael Ignatowski, Michael J. Schulte, Nuwan S. Jayasena, James M. O'Connor
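    Illustrative sketch: the memory map maintained by the logic layer, modeled as a table that translates an external address into an internal (layer, bank, row) location. Field names and layout are assumptions, not from the filing:

        #include <stdint.h>
        #include <stddef.h>

        /* One memory-map entry kept in the logic layer: an external address
         * range and the internal location it maps to. */
        struct map_entry {
            uint64_t ext_base;
            uint64_t length;
            uint32_t layer, bank, row;
        };

        struct internal_loc { uint32_t layer, bank, row; uint64_t offset; };

        /* Translate an external address arriving at the I/O port into an
         * internal memory location, as the memory map would. Returns 0 on
         * success, -1 if the address is unmapped. */
        int remap(const struct map_entry *map, size_t n, uint64_t ext_addr,
                  struct internal_loc *out)
        {
            for (size_t i = 0; i < n; i++) {
                if (ext_addr >= map[i].ext_base &&
                    ext_addr <  map[i].ext_base + map[i].length) {
                    out->layer  = map[i].layer;
                    out->bank   = map[i].bank;
                    out->row    = map[i].row;
                    out->offset = ext_addr - map[i].ext_base;
                    return 0;
                }
            }
            return -1;
        }
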
  • Patent number: 9229803
    Abstract: A method of managing memory includes installing a first cacheline at a first location in a cache memory and receiving a write request. In response to the write request, the first cacheline is modified in accordance with the write request and marked as dirty. Also in response to the write request, a second cacheline is installed that duplicates the first cacheline, as modified in accordance with the write request, at a second location in the cache memory.
    Type: Grant
    Filed: December 19, 2012
    Date of Patent: January 5, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Gabriel H. Loh, Vilas K. Sridharan, James M. O'Connor, Jaewoong Sim
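    Illustrative sketch: the write path described above, which modifies the first cacheline, marks it dirty, and installs a duplicate at a second location. The structure fields and names are illustrative:

        #include <stdint.h>
        #include <stdbool.h>
        #include <stddef.h>
        #include <string.h>

        #define LINE_BYTES 64

        struct cacheline {
            bool     valid, dirty;
            uint64_t addr;
            uint8_t  data[LINE_BYTES];
        };

        /* On a write to the first cacheline: apply the write, mark the line
         * dirty, and install a duplicate of the modified line at a second
         * location, so the dirty data exists in two places until it is written
         * back. Caller guarantees offset + len <= LINE_BYTES. */
        void write_with_duplicate(struct cacheline *first, struct cacheline *second,
                                  size_t offset, const uint8_t *bytes, size_t len)
        {
            memcpy(first->data + offset, bytes, len);
            first->dirty = true;
            *second = *first;   /* duplicate installed at the second location */
        }
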
  • Patent number: 9218204
    Abstract: A system includes an atomic processing engine (APE) coupled to an interconnect. The interconnect is to couple to one or more processor cores. The APE receives a plurality of commands from the one or more processor cores through the interconnect. In response to a first command, the APE performs a first plurality of operations associated with the first command. The first plurality of operations references multiple memory locations, at least one of which is shared between two or more threads executed by the one or more processor cores.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: December 22, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: James M. O'Connor, Michael J. Schulte, Nuwan S. Jayasena, Gabriel H. Loh
  • Publication number: 20150356024
    Abstract: The described embodiments include a translation lookaside buffer (“TLB”) that is used for performing virtual address to physical address translations when making memory accesses in a memory in a computing device. In the described embodiments, the TLB includes a hierarchy of tables that are each used for performing virtual address to physical address translations based on the arrangement of pages of memory in corresponding regions of the memory. When performing a virtual address to physical address translation, the described embodiments perform a lookup in each of the tables in parallel for the virtual address to physical address translation and use a physical address that is returned from a lowest table in the hierarchy as the translation.
    Type: Application
    Filed: June 4, 2014
    Publication date: December 10, 2015
    Inventor: Gabriel H. Loh
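    Illustrative sketch: a sequential model of the parallel TLB lookup described above, in which the lowest table holding a valid mapping supplies the translation. The table layout and identifiers are assumptions, not from the filing:

        #include <stdint.h>
        #include <stdbool.h>
        #include <stddef.h>

        #define TLB_LEVELS 3   /* level index grows downward: the last level is the lowest */

        struct tlb_table {
            unsigned  region_bits;  /* translation granularity of this level */
            size_t    entries;
            uint64_t *vtag;         /* virtual region number stored per entry */
            uint64_t *pbase;        /* physical base address of the region */
            bool     *valid;
        };

        /* All tables are probed (conceptually in parallel); the translation
         * returned by the lowest table holding a valid mapping is used. */
        bool tlb_translate(const struct tlb_table tables[TLB_LEVELS],
                           uint64_t vaddr, uint64_t *paddr)
        {
            bool hit = false;
            for (int lvl = 0; lvl < TLB_LEVELS; lvl++) {
                const struct tlb_table *t = &tables[lvl];
                uint64_t vregion = vaddr >> t->region_bits;
                size_t idx = (size_t)(vregion % t->entries);
                if (t->valid[idx] && t->vtag[idx] == vregion) {
                    /* a later (lower) level overwrites an earlier result */
                    *paddr = t->pbase[idx] + (vaddr & ((1ULL << t->region_bits) - 1));
                    hit = true;
                }
            }
            return hit;
        }
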
  • Publication number: 20150357306
    Abstract: An electronic assembly includes horizontally-stacked die disposed at an interposer, and may also include vertically-stacked die. The stacked die are interconnected via a multi-hop communication network that is partitioned into a link partition and a router partition. The link partition is at least partially implemented in the metal layers of the interposer for horizontally-stacked die. The link partition may also be implemented in part by the intra-die interconnects in a single die and by the inter-die interconnects connecting vertically-stacked sets of die. The router partition is implemented at some or all of the die disposed at the interposer and comprises the logic that supports the functions that route packets among the components of the processing system via the interconnects of the link partition. The router partition may implement fixed routing, or alternatively may be configurable using programmable routing tables or configurable logic blocks.
    Type: Application
    Filed: May 18, 2015
    Publication date: December 10, 2015
    Inventors: Mithuna S. Thottethodi, Gabriel H. Loh
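    Illustrative sketch: the router partition's table-based forwarding described above, modeled as a per-node routing table that maps a destination die to an output port. Port numbering and names are assumptions:

        #include <stdint.h>

        #define MAX_NODES 16

        /* One router's programmable routing table: for each destination die
         * (node), the output port, i.e. the link in the interposer's link
         * partition to forward on. */
        struct router {
            int     node_id;
            uint8_t out_port[MAX_NODES];
        };

        /* Route a packet one hop: deliver locally if this die is the
         * destination, otherwise forward on the table's port. Caller
         * guarantees 0 <= dest_node < MAX_NODES. */
        int route_hop(const struct router *r, int dest_node)
        {
            if (dest_node == r->node_id)
                return -1;                    /* -1 = deliver at this die */
            return r->out_port[dest_node];
        }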