Patents by Inventor Lisa R. Hsu
Lisa R. Hsu has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 11301256
  Abstract: Embodiments disclose a system and method for reducing virtual address translation latency in a wide execution engine that implements virtual memory. One example method comprises receiving a wavefront, classifying the wavefront into a subset based on classification criteria selected to reduce virtual address translation latency associated with a memory support structure, and scheduling the wavefront for processing based on the classifying.
  Type: Grant
  Filed: August 22, 2014
  Date of Patent: April 12, 2022
  Assignee: Advanced Micro Devices, Inc.
  Inventors: Lisa R. Hsu, James Michael O'Connor
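The classify-then-schedule flow in this abstract can be pictured with a small behavioral sketch. Everything below (the page-based classification criterion, the Wavefront class, the 4 KiB page size) is an illustrative assumption introduced here, not the patented design:

```python
# Minimal sketch: wavefronts are grouped by the virtual page of their base address
# so that wavefronts likely to reuse the same address translations are scheduled
# back to back. All names and parameters are illustrative assumptions.
from collections import defaultdict

PAGE_SIZE = 4096  # assumed 4 KiB pages

class Wavefront:
    def __init__(self, wf_id, base_address):
        self.wf_id = wf_id
        self.base_address = base_address

def classify(wavefront):
    """Classification criterion: the virtual page number of the wavefront's base address."""
    return wavefront.base_address // PAGE_SIZE

def schedule(wavefronts):
    """Emit wavefronts subset by subset so translations for one page are reused."""
    subsets = defaultdict(list)
    for wf in wavefronts:
        subsets[classify(wf)].append(wf)   # classify each wavefront into a subset
    order = []
    for _page, group in subsets.items():   # schedule each subset contiguously
        order.extend(group)
    return order

if __name__ == "__main__":
    wfs = [Wavefront(i, addr) for i, addr in enumerate([0x1000, 0x9000, 0x1F00, 0x9800])]
    print([(wf.wf_id, hex(wf.base_address)) for wf in schedule(wfs)])
```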
- Patent number: 10522193
  Abstract: A system, method, and computer program product are provided for a memory device system. One or more memory dies and at least one logic die are disposed in a package and communicatively coupled. The logic die comprises a processing device configurable to manage virtual memory and operate in an operating mode. The operating mode is selected from a set of operating modes comprising a slave operating mode and a host operating mode.
  Type: Grant
  Filed: September 12, 2018
  Date of Patent: December 31, 2019
  Assignee: Advanced Micro Devices, Inc.
  Inventors: Nuwan S. Jayasena, Gabriel H. Loh, Bradford M. Beckmann, James M. O'Connor, Lisa R. Hsu
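As a rough illustration of the configurable operating mode described above, the sketch below models only the slave/host choice; the LogicDie class and its method are hypothetical names introduced for the example, not part of the patent:

```python
# Tiny sketch of a logic die whose operating mode is selected from a set of modes.
from enum import Enum

class OperatingMode(Enum):
    SLAVE = "slave"   # logic die defers virtual-memory management to an external host
    HOST = "host"     # logic die manages virtual memory itself

class LogicDie:
    def __init__(self, mode=OperatingMode.SLAVE):
        self.mode = mode

    def handles_virtual_memory(self):
        return self.mode is OperatingMode.HOST

if __name__ == "__main__":
    die = LogicDie(OperatingMode.HOST)
    print(die.mode, die.handles_virtual_memory())
```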
- Publication number: 20190013051
  Abstract: A system, method, and computer program product are provided for a memory device system. One or more memory dies and at least one logic die are disposed in a package and communicatively coupled. The logic die comprises a processing device configurable to manage virtual memory and operate in an operating mode. The operating mode is selected from a set of operating modes comprising a slave operating mode and a host operating mode.
  Type: Application
  Filed: September 12, 2018
  Publication date: January 10, 2019
  Applicant: Advanced Micro Devices, Inc.
  Inventors: Nuwan S. Jayasena, Gabriel H. Loh, Bradford M. Beckmann, James M. O'Connor, Lisa R. Hsu
- Patent number: 10079044
  Abstract: A system, method, and computer program product are provided for a memory device system. One or more memory dies and at least one logic die are disposed in a package and communicatively coupled. The logic die comprises a processing device configurable to manage virtual memory and operate in an operating mode. The operating mode is selected from a set of operating modes comprising a slave operating mode and a host operating mode.
  Type: Grant
  Filed: December 20, 2012
  Date of Patent: September 18, 2018
  Assignee: Advanced Micro Devices, Inc.
  Inventors: Nuwan S. Jayasena, Gabriel H. Loh, Bradford M. Beckmann, James M. O'Connor, Lisa R. Hsu
- Patent number: 9875195
  Abstract: A system and method are disclosed for managing memory interleaving patterns in a system with multiple memory devices. The system includes a processor configured to access multiple memory devices. The method includes receiving a first plurality of data blocks, and then storing the first plurality of data blocks using an interleaving pattern in which successive blocks of the first plurality of data blocks are stored in each of the memory devices. The method also includes receiving a second plurality of data blocks, and then storing successive blocks of the second plurality of data blocks in a first memory device of the multiple memory devices.
  Type: Grant
  Filed: August 14, 2014
  Date of Patent: January 23, 2018
  Assignee: Advanced Micro Devices, Inc.
  Inventors: Nuwan S. Jayasena, Lisa R. Hsu, James M. O'Connor
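A minimal sketch of the two placement policies in this abstract, under assumed parameters (four devices, and the hypothetical store_interleaved / store_single_device helpers); it is not the patented mapping logic:

```python
# Illustrative only: a fine-grained interleave for one stream of blocks and a
# single-device placement for another, over an assumed set of memory devices.

NUM_DEVICES = 4

def store_interleaved(blocks, start_index=0):
    """Successive blocks go to successive devices (round-robin interleave)."""
    return {block: (start_index + i) % NUM_DEVICES for i, block in enumerate(blocks)}

def store_single_device(blocks, device=0):
    """Successive blocks all stay in one device (no interleave)."""
    return {block: device for block in blocks}

if __name__ == "__main__":
    first = [f"A{i}" for i in range(8)]
    second = [f"B{i}" for i in range(4)]
    print(store_interleaved(first))      # spread across devices 0..3
    print(store_single_device(second))   # kept together in device 0
```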
- Patent number: 9766936
  Abstract: The described embodiments comprise a selection mechanism that selects a resource from a set of resources in a computing device for performing an operation. In some embodiments, the selection mechanism performs a lookup in a table selected from a set of tables to identify a resource from the set of resources. When the resource is not available for performing the operation and until another resource is selected for performing the operation, the selection mechanism identifies a next resource in the table and selects the next resource for performing the operation when the next resource is available for performing the operation.
  Type: Grant
  Filed: November 6, 2015
  Date of Patent: September 19, 2017
  Assignee: Advanced Micro Devices, Inc.
  Inventors: Bradford M. Beckmann, Mithuna S. Thottethodi, James M. O'Connor, Mauricio Breternitz, Lisa R. Hsu, Gabriel H. Loh, Yasuko Eckert
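The table-walk fallback in this abstract can be sketched as follows; the hash-based initial lookup, the is_available callback, and the function names are assumptions made for illustration only:

```python
# Minimal sketch: a lookup picks a candidate resource from a chosen table; if that
# resource is busy, the mechanism walks forward through the table until it finds
# an available one. Data structures and the initial-lookup rule are assumed.

def select_resource(tables, table_index, key, is_available):
    """Return an available resource from tables[table_index], starting at the
    entry chosen for `key` and scanning forward until one is available."""
    table = tables[table_index]
    start = hash(key) % len(table)            # initial lookup into the selected table
    for offset in range(len(table)):          # try the next resource until one is free
        resource = table[(start + offset) % len(table)]
        if is_available(resource):
            return resource
    return None                               # no resource currently available

if __name__ == "__main__":
    busy = {"mem0", "mem2"}
    tables = [["mem0", "mem1", "mem2", "mem3"]]
    print(select_resource(tables, 0, key="request-42",
                          is_available=lambda r: r not in busy))
```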
- Publication number: 20160062803
  Abstract: The described embodiments comprise a selection mechanism that selects a resource from a set of resources in a computing device for performing an operation. In some embodiments, the selection mechanism performs a lookup in a table selected from a set of tables to identify a resource from the set of resources. When the resource is not available for performing the operation and until another resource is selected for performing the operation, the selection mechanism identifies a next resource in the table and selects the next resource for performing the operation when the next resource is available for performing the operation.
  Type: Application
  Filed: November 6, 2015
  Publication date: March 3, 2016
  Inventors: Bradford M. Beckmann, Mithuna S. Thottethodi, James M. O'Connor, Mauricio Breternitz, Lisa R. Hsu, Gabriel H. Loh, Yasuko Eckert
- Publication number: 20160055005
  Abstract: Embodiments disclose a system and method for reducing virtual address translation latency in a wide execution engine that implements virtual memory. One example method comprises receiving a wavefront, classifying the wavefront into a subset based on classification criteria selected to reduce virtual address translation latency associated with a memory support structure, and scheduling the wavefront for processing based on the classifying.
  Type: Application
  Filed: August 22, 2014
  Publication date: February 25, 2016
  Applicant: Advanced Micro Devices, Inc.
  Inventors: Lisa R. Hsu, James Michael O'Connor
- Publication number: 20160048327
  Abstract: A system and method are disclosed for managing memory interleaving patterns in a system with multiple memory devices. The system includes a processor configured to access multiple memory devices. The method includes receiving a first plurality of data blocks, and then storing the first plurality of data blocks using an interleaving pattern in which successive blocks of the first plurality of data blocks are stored in each of the memory devices. The method also includes receiving a second plurality of data blocks, and then storing successive blocks of the second plurality of data blocks in a first memory device of the multiple memory devices.
  Type: Application
  Filed: August 14, 2014
  Publication date: February 18, 2016
  Inventors: Nuwan S. Jayasena, Lisa R. Hsu, James M. O'Connor
- Patent number: 9251081
  Abstract: A system and method for efficiently powering down banks in a cache memory for reducing power consumption. A computing system includes a cache array and a corresponding cache controller. The cache array includes multiple banks, each comprising multiple cache sets. In response to a request to power down a first bank of the multiple banks in the cache array, the cache controller selects a cache line of a given type in the first bank and determines whether a respective locality of reference for the selected cache line exceeds a threshold. If the threshold is exceeded, then the selected cache line is migrated to a second bank in the cache array. If the threshold is not exceeded, then the selected cache line is written back to lower-level memory.
  Type: Grant
  Filed: August 1, 2013
  Date of Patent: February 2, 2016
  Assignee: Advanced Micro Devices, Inc.
  Inventors: Kai K. Chang, Yasuko Eckert, Gabriel H. Loh, Lisa R. Hsu
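A behavioral sketch of the power-down decision described above; the per-line locality counter and the threshold value are stand-ins chosen here, not details from the patent:

```python
# On a bank power-down request, each selected cache line is either migrated to a
# second bank (if its locality of reference exceeds a threshold) or written back
# to lower-level memory. The counter and threshold are illustrative assumptions.

LOCALITY_THRESHOLD = 4

def power_down_bank(bank_lines, migrate, write_back):
    """bank_lines: iterable of (line, locality) pairs for the bank being powered down."""
    for line, locality in bank_lines:
        if locality > LOCALITY_THRESHOLD:
            migrate(line)        # hot line: keep it cached in another bank
        else:
            write_back(line)     # cold line: evict to lower-level memory

if __name__ == "__main__":
    lines = [("L0", 7), ("L1", 1), ("L2", 5)]
    power_down_bank(lines,
                    migrate=lambda l: print(f"migrate {l} to bank 1"),
                    write_back=lambda l: print(f"write back {l} to memory"))
```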
- Patent number: 9244629
  Abstract: Methods, systems, and computer readable storage mediums for more efficient and flexible scheduling of tasks on an asymmetric processing system having at least one host processor and one or more slave processors are disclosed. An example embodiment includes determining a data access requirement of a task, comparing the data access requirement to the respective local memories of the one or more slave processors, selecting a slave processor from the one or more slave processors based upon the comparing, and running the task on the selected slave processor.
  Type: Grant
  Filed: June 25, 2013
  Date of Patent: January 26, 2016
  Assignee: Advanced Micro Devices, Inc.
  Inventors: Lisa R. Hsu, Gabriel H. Loh, James Michael O'Connor, Nuwan S. Jayasena
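The data-locality comparison in this abstract can be illustrated with a short sketch; the overlap metric and the pick_slave helper are assumptions introduced for the example, not the claimed method:

```python
# Illustrative sketch: a task's data footprint is compared against each slave
# processor's local memory, and the task runs on the slave whose local memory
# already holds the most of that data. Names and data sets are made up.

def pick_slave(task_data, slaves):
    """slaves: dict name -> set of data blocks resident in that slave's local memory.
    Choose the slave with the largest overlap with the task's data access requirement."""
    def overlap(name):
        return len(task_data & slaves[name])
    return max(slaves, key=overlap)

if __name__ == "__main__":
    slaves = {"slave0": {"a", "b"}, "slave1": {"b", "c", "d"}}
    task_data = {"c", "d", "e"}           # data the task will access
    chosen = pick_slave(task_data, slaves)
    print(f"run task on {chosen}")        # slave1: best data locality
```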
- Patent number: 9235528
  Abstract: A system, method, and memory device embodying some aspects of the present invention for remapping external memory addresses and internal memory locations in stacked memory are provided. The stacked memory includes one or more memory layers configured to store data. The stacked memory also includes a logic layer connected to the memory layer. The logic layer has an Input/Output (I/O) port configured to receive read and write commands from external devices, a memory map configured to maintain an association between external memory addresses and internal memory locations, and a controller coupled to the I/O port, memory map, and memory layers, configured to store data received from external devices to internal memory locations.
  Type: Grant
  Filed: December 21, 2012
  Date of Patent: January 12, 2016
  Assignee: Advanced Micro Devices, Inc.
  Inventors: Lisa R. Hsu, Gabriel H. Loh, Michael Ignatowski, Michael J. Schulte, Nuwan S. Jayasena, James M. O'Connor
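A toy model of the memory-map indirection described above, with an invented fill policy and granularity; it is not the patented controller:

```python
# Sketch of the logic-layer behavior: a memory map translates external addresses
# to internal (layer, row) locations, and the controller services read and write
# commands against that mapping. Layer count and allocation policy are assumed.

class StackedMemory:
    def __init__(self):
        self.memory_map = {}                 # external address -> (layer, row)
        self.layers = [{} for _ in range(4)]
        self._next = 0

    def _allocate(self, ext_addr):
        layer, row = self._next % 4, self._next // 4   # simple fill policy
        self.memory_map[ext_addr] = (layer, row)
        self._next += 1
        return layer, row

    def write(self, ext_addr, data):         # write command arriving at the I/O port
        layer, row = self.memory_map.get(ext_addr) or self._allocate(ext_addr)
        self.layers[layer][row] = data

    def read(self, ext_addr):                # read command arriving at the I/O port
        layer, row = self.memory_map[ext_addr]
        return self.layers[layer][row]

if __name__ == "__main__":
    m = StackedMemory()
    m.write(0xDEAD0000, b"hello")
    print(m.read(0xDEAD0000), m.memory_map[0xDEAD0000])
```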
- Patent number: 9201777
  Abstract: A die-stacked memory device implements an integrated QoS manager to provide centralized QoS functionality in furtherance of one or more specified QoS objectives for the sharing of the memory resources by other components of the processing system. The die-stacked memory device includes a set of one or more stacked memory dies and one or more logic dies. The logic dies implement hardware logic for a memory controller and the QoS manager. The memory controller is coupleable to one or more devices external to the set of one or more stacked memory dies and operates to service memory access requests from the one or more external devices. The QoS manager comprises logic to perform operations in furtherance of one or more QoS objectives, which may be specified by a user, by an operating system, hypervisor, job management software, or other application being executed, or specified via hardcoded logic or firmware.
  Type: Grant
  Filed: December 23, 2012
  Date of Patent: December 1, 2015
  Assignee: Advanced Micro Devices, Inc.
  Inventors: Lisa R. Hsu, Gabriel H. Loh, Bradford M. Beckmann, Michael Ignatowski
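One way to picture centralized QoS arbitration in the logic die is a priority queue over request classes; the class names and the policy below are illustrative assumptions, not the patented QoS manager:

```python
# Sketch: requests from external devices carry a QoS class, and the manager hands
# the memory controller the highest-priority pending request first.
import heapq

PRIORITY = {"realtime": 0, "interactive": 1, "best_effort": 2}   # assumed classes

class QoSManager:
    def __init__(self):
        self._queue = []
        self._seq = 0                       # tie-breaker keeps FIFO order per class

    def submit(self, request, qos_class):
        heapq.heappush(self._queue, (PRIORITY[qos_class], self._seq, request))
        self._seq += 1

    def next_request(self):
        """Pick the pending memory access the memory controller should service next."""
        return heapq.heappop(self._queue)[2] if self._queue else None

if __name__ == "__main__":
    q = QoSManager()
    q.submit("read A", "best_effort")
    q.submit("read B", "realtime")
    print(q.next_request())   # read B is serviced first
```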
- Patent number: 9183055
  Abstract: The described embodiments comprise a selection mechanism that selects a resource from a set of resources in a computing device for performing an operation. In some embodiments, the selection mechanism is configured to perform a lookup in a table selected from a set of tables to identify a resource from the set of resources. When the identified resource is not available for performing the operation and until a resource is selected for performing the operation, the selection mechanism is configured to identify a next resource in the table and select the next resource for performing the operation when the next resource is available for performing the operation.
  Type: Grant
  Filed: February 7, 2013
  Date of Patent: November 10, 2015
  Assignee: Advanced Micro Devices, Inc.
  Inventors: Bradford M. Beckmann, Mithuna S. Thottethodi, James M. O'Connor, Mauricio Breternitz, Lisa R. Hsu, Gabriel H. Loh, Yasuko Eckert
- Patent number: 9170948
  Abstract: A die-stacked memory device implements an integrated coherency manager to offload cache coherency protocol operations for the devices of a processing system. The die-stacked memory device includes a set of one or more stacked memory dies and a set of one or more logic dies. The one or more logic dies implement hardware logic providing a memory interface and the coherency manager. The memory interface operates to perform memory accesses in response to memory access requests from the coherency manager and the one or more external devices. The coherency manager comprises logic to perform coherency operations for shared data stored at the stacked memory dies. Due to the integration of the logic dies and the memory dies, the coherency manager can access shared data stored in the memory dies and perform related coherency operations with higher bandwidth and lower latency and power consumption compared to the external devices.
  Type: Grant
  Filed: December 23, 2012
  Date of Patent: October 27, 2015
  Assignee: Advanced Micro Devices, Inc.
  Inventors: Gabriel H. Loh, Bradford M. Beckmann, Lisa R. Hsu, Michael Ignatowski, Michael J. Schulte
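A very small directory-style sketch of the coherency bookkeeping described above; the abstract does not fix a protocol, so the sharer-tracking scheme and all names here are assumptions:

```python
# Sketch: the coherency manager tracks which external devices share each block
# and reports which copies must be invalidated when one device writes the block.

class CoherencyManager:
    def __init__(self):
        self._sharers = {}                      # block address -> set of device ids

    def on_read(self, device, block):
        self._sharers.setdefault(block, set()).add(device)

    def on_write(self, device, block):
        invalidate = self._sharers.get(block, set()) - {device}
        self._sharers[block] = {device}         # writer becomes the sole owner
        return invalidate                       # devices whose copies must be invalidated

if __name__ == "__main__":
    cm = CoherencyManager()
    cm.on_read("gpu", 0x80)
    cm.on_read("cpu0", 0x80)
    print("invalidate:", cm.on_write("cpu0", 0x80))   # {'gpu'}
```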
- Publication number: 20150293845
  Abstract: Described is a system and method for a multi-level memory hierarchy. Each level is based on different attributes including, for example, power, capacity, bandwidth, reliability, and volatility. In some embodiments, the different levels of the memory hierarchy may use an on-chip stacked dynamic random access memory (providing fast, high-bandwidth, low-energy access to data) and an off-chip non-volatile random access memory (providing low-power, high-capacity storage) in order to provide higher-capacity, lower-power, and higher-bandwidth performance. The multi-level memory may present a unified interface to a processor so that specific memory hardware and software implementation details are hidden. The multi-level memory enables the illusion of a single-level memory that satisfies multiple conflicting constraints. A comparator receives a memory address from the processor, processes the address, and reads from or writes to the appropriate memory level.
  Type: Application
  Filed: April 11, 2014
  Publication date: October 15, 2015
  Applicant: Advanced Micro Devices, Inc.
  Inventors: Lisa R. Hsu, James M. O'Connor, Vilas K. Sridharan, Gabriel H. Loh, Nuwan S. Jayasena, Bradford M. Beckmann
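The comparator-based routing in this abstract can be sketched as a simple address check; the 1 GiB boundary and the two Python dictionaries standing in for the memory levels are assumptions made for the example:

```python
# Sketch of the address-steering idea behind the unified interface: a comparator
# checks the incoming address against a boundary and routes the access to the
# on-chip stacked DRAM or the off-chip NVRAM. Boundary and backing stores are assumed.

STACKED_DRAM_LIMIT = 1 << 30          # assumption: first 1 GiB maps to stacked DRAM

_dram, _nvram = {}, {}                # stand-ins for the two physical levels

def route(address):
    """Comparator: decide which level services this address."""
    return _dram if address < STACKED_DRAM_LIMIT else _nvram

def write(address, data):
    route(address)[address] = data    # the processor just issues a normal store

def read(address):
    return route(address).get(address)

if __name__ == "__main__":
    write(0x1000, "fast")             # lands in stacked DRAM
    write(0x5000_0000, "capacious")   # lands in off-chip NVRAM
    print(read(0x1000), read(0x5000_0000))
```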
- Patent number: 9135185
  Abstract: A die-stacked memory device incorporates a data translation controller at one or more logic dies of the device to provide data translation services for data to be stored at, or retrieved from, the die-stacked memory device. The data translation operations implemented by the data translation controller can include compression/decompression operations, encryption/decryption operations, format translations, wear-leveling translations, data ordering operations, and the like. Due to the tight integration of the logic dies and the memory dies, the data translation controller can perform data translation operations with higher bandwidth and lower latency and power consumption compared to operations performed by devices external to the die-stacked memory device.
  Type: Grant
  Filed: December 23, 2012
  Date of Patent: September 15, 2015
  Assignee: Advanced Micro Devices, Inc.
  Inventors: Gabriel H. Loh, Bradford M. Beckmann, James M. O'Connor, Michael Ignatowski, Michael J. Schulte, Lisa R. Hsu, Nuwan S. Jayasena
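As an illustration of translate-on-write and reverse-translate-on-read, the sketch below uses zlib compression as a stand-in for whichever translation (compression, encryption, format change) the logic die might apply; it is not the patented controller:

```python
# Sketch: a translation step applied inside the memory device on the write path
# and undone on the read path, so external devices see untranslated data.
import zlib

class TranslatingMemory:
    def __init__(self):
        self._rows = {}

    def store(self, address, data: bytes):
        self._rows[address] = zlib.compress(data)      # translate before hitting the memory dies

    def load(self, address) -> bytes:
        return zlib.decompress(self._rows[address])    # reverse the translation on the way out

if __name__ == "__main__":
    mem = TranslatingMemory()
    mem.store(0x40, b"die-stacked " * 8)
    print(mem.load(0x40)[:12], "stored bytes:", len(mem._rows[0x40]))
```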
- Publication number: 20150039833
  Abstract: A system and method for efficiently powering down banks in a cache memory for reducing power consumption. A computing system includes a cache array and a corresponding cache controller. The cache array includes multiple banks, each comprising multiple cache sets. In response to a request to power down a first bank of the multiple banks in the cache array, the cache controller selects a cache line of a given type in the first bank and determines whether a respective locality of reference for the selected cache line exceeds a threshold. If the threshold is exceeded, then the selected cache line is migrated to a second bank in the cache array. If the threshold is not exceeded, then the selected cache line is written back to lower-level memory.
  Type: Application
  Filed: August 1, 2013
  Publication date: February 5, 2015
  Applicant: Advanced Micro Devices, Inc.
  Inventors: Kai K. Chang, Yasuko Eckert, Gabriel H. Loh, Lisa R. Hsu
- Publication number: 20140380003
  Abstract: Methods, systems, and computer readable storage mediums for more efficient and flexible scheduling of tasks on an asymmetric processing system having at least one host processor and one or more slave processors are disclosed. An example embodiment includes determining a data access requirement of a task, comparing the data access requirement to the respective local memories of the one or more slave processors, selecting a slave processor from the one or more slave processors based upon the comparing, and running the task on the selected slave processor.
  Type: Application
  Filed: June 25, 2013
  Publication date: December 25, 2014
  Inventors: Lisa R. Hsu, Gabriel H. Loh, James Michael O'Connor, Nuwan S. Jayasena
- Publication number: 20140223445
  Abstract: The described embodiments comprise a selection mechanism that selects a resource from a set of resources in a computing device for performing an operation. In some embodiments, the selection mechanism is configured to perform a lookup in a table selected from a set of tables to identify a resource from the set of resources. When the identified resource is not available for performing the operation and until a resource is selected for performing the operation, the selection mechanism is configured to identify a next resource in the table and select the next resource for performing the operation when the next resource is available for performing the operation.
  Type: Application
  Filed: February 7, 2013
  Publication date: August 7, 2014
  Applicant: Advanced Micro Devices, Inc.
  Inventors: Bradford M. Beckmann, Mithuna S. Thottethodi, James M. O'Connor, Mauricio Breternitz, Lisa R. Hsu, Gabriel H. Loh, Yasuko Eckert