Patents by Inventor Gabriel H. Loh

Gabriel H. Loh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230102347
    Abstract: Systems and methods are disclosed that map quantum circuits to physical qubits of a quantum computer. Techniques are disclosed to generate a graph that characterizes the physical qubits of the quantum computer and to compute the resource requirements of each circuit of the quantum circuits. For each circuit, the graph is searched for a subgraph that matches the resource requirements of the circuit, based on a density matrix. Physical qubits, defined by the matching subgraph, are then allocated to the logical qubits of the circuit.
    Type: Application
    Filed: September 30, 2021
    Publication date: March 30, 2023
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Anthony T. Gutierrez, Salonik Resch, Yasuko Eckert, Gabriel H. Loh, Mark Henry Oskin, Vedula Venkata Srikant Bharadwaj
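The allocation scheme in the abstract above can be pictured with a small sketch. The Python below is a toy model, not the patented method: it assumes a hypothetical `coupling` map of physical-qubit adjacency, treats a circuit's resource requirement as just its logical-qubit count, and uses a greedy BFS (`find_region`) in place of a real subgraph-matching search.
```python
from collections import deque

# Toy coupling graph: physical qubit -> set of neighboring physical qubits.
coupling = {
    0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4},
}

def find_region(coupling, free, num_qubits):
    """Greedy BFS for a connected set of `num_qubits` free physical qubits."""
    for seed in sorted(free):
        region, frontier = {seed}, deque([seed])
        while frontier and len(region) < num_qubits:
            node = frontier.popleft()
            for nbr in coupling[node]:
                if nbr in free and nbr not in region:
                    region.add(nbr)
                    frontier.append(nbr)
                    if len(region) == num_qubits:
                        break
        if len(region) == num_qubits:
            return region
    return None

def allocate(circuits, coupling):
    """Map each circuit's logical qubits onto a matching region of physical qubits."""
    free, mapping = set(coupling), {}
    for name, logical_qubits in circuits:
        region = find_region(coupling, free, len(logical_qubits))
        if region is None:
            raise RuntimeError(f"no free region large enough for {name}")
        mapping[name] = dict(zip(logical_qubits, sorted(region)))
        free -= region                     # these physical qubits are now allocated
    return mapping

print(allocate([("qft3", ["q0", "q1", "q2"]), ("bell", ["a", "b"])], coupling))
```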
  • Patent number: 11550728
    Abstract: A processing system includes a processor, a memory, and an operating system that are used to allocate a page table caching memory object (PTCM) for a user of the processing system. An allocation of the PTCM is requested from a PTCM allocation system. In order to allocate the PTCM, a plurality of physical memory pages from a memory are allocated to store a PTCM page table that is associated with the PTCM. A lockable region of a cache is designated to hold a copy of the PTCM page table, after which the lockable region of the cache is subsequently locked. The PTCM page table is populated with page table entries associated with the PTCM and copied to the locked region of the cache.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: January 10, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Derrick Allen Aguren, Eric H. Van Tassell, Gabriel H. Loh, Jay Fleischman
  • Publication number: 20220415384
    Abstract: An electronic device includes a memory having a plurality of memory rows and a memory refresh functional block that performs a victim row refresh operation. For the victim row refresh operation, the memory refresh functional block selects one or more victim memory rows that may be victims of data corruption caused by repeated memory accesses in a specified group of memory rows near each of the one or more victim memory rows. The memory refresh functional block then individually refreshes each of the one or more victim memory rows.
    Type: Application
    Filed: June 28, 2021
    Publication date: December 29, 2022
    Inventors: SeyedMohammad SeyedzadehDelcheh, Gabriel H. Loh
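As a rough illustration of the victim-row refresh described above, the Python below models a per-row activation counter; the `REFRESH_THRESHOLD`, the two-row neighborhood, and the `on_row_activate` hook are illustrative assumptions, not values or interfaces from the patent.
```python
REFRESH_THRESHOLD = 4       # activations before nearby rows are treated as victims
NEIGHBORHOOD = 2            # rows on each side considered potential victims

activation_counts = {}      # aggressor row -> number of activations seen
refreshed_rows = []         # log of individually refreshed victim rows

def refresh_row(row):
    """Model an individual refresh of one victim memory row."""
    refreshed_rows.append(row)

def on_row_activate(row, num_rows=64):
    """Count activations; when a row is hammered, refresh nearby victim rows."""
    activation_counts[row] = activation_counts.get(row, 0) + 1
    if activation_counts[row] >= REFRESH_THRESHOLD:
        activation_counts[row] = 0
        for offset in range(-NEIGHBORHOOD, NEIGHBORHOOD + 1):
            victim = row + offset
            if offset != 0 and 0 <= victim < num_rows:
                refresh_row(victim)

for _ in range(8):          # repeatedly access (hammer) row 10
    on_row_activate(10)
print(refreshed_rows)       # rows 8, 9, 11, 12 each refreshed twice
```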
  • Patent number: 11507519
    Abstract: A processing system selectively compresses cache lines at a cache or at a memory or encrypts cache lines at the memory based on evictions of entries mapping virtual-to-physical address translations from a translation lookaside buffer (TLB). Upon eviction of a TLB entry, the processing system identifies cache lines corresponding to the physical addresses of the evicted TLB entry and selectively compresses the cache lines to increase the effective storage capacity of the processing system or encrypts the cache lines to protect against vulnerabilities.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: November 22, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Jagadish B. Kotra, Gabriel H. Loh, Matthew R. Poremba
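A toy Python model of the eviction-triggered transformation described above; the zlib compression, the XOR "encryption" stand-in, and the dictionary-based cache are simplifying assumptions used only to show the control flow, not the patented hardware.
```python
import zlib

PAGE_SIZE, LINE_SIZE = 4096, 64
KEY = 0x5A                                   # toy key; a real design would use a hardware cipher

cache = {                                    # physical line address -> raw line bytes
    0x1000: bytes(64), 0x1040: bytes(range(64)), 0x9000: bytes(64),
}

def on_tlb_eviction(phys_page, mode="compress"):
    """When a translation is evicted, transform the cache lines of its physical page."""
    for addr in list(cache):
        if addr // PAGE_SIZE != phys_page:
            continue                         # line belongs to a different page
        line = cache[addr]
        if mode == "compress":
            packed = zlib.compress(line)
            if len(packed) < LINE_SIZE:      # only keep it if compression actually helps
                cache[addr] = packed
        else:                                # "encrypt": toy XOR stand-in for a real cipher
            cache[addr] = bytes(b ^ KEY for b in line)

on_tlb_eviction(0x1000 // PAGE_SIZE, mode="compress")
print({hex(a): len(v) for a, v in cache.items()})   # lines of the evicted page shrink
```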
  • Publication number: 20220365725
    Abstract: A method includes receiving from a compute element a command for performing a requested operation on data stored in a memory device, and in response to receiving the command, performing the requested operation by generating a plurality of memory access requests based on the command and issuing the plurality of memory access requests to the memory device.
    Type: Application
    Filed: May 10, 2022
    Publication date: November 17, 2022
    Inventors: Michael Ignatowski, Valentina Salapura, Ganesh Dasika, Gabriel H. Loh
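The command expansion described above might be sketched as follows; the `increment` command shape, the element size, and the `MemRequest` type are hypothetical, chosen only to show one offloaded command fanning out into many memory accesses.
```python
from dataclasses import dataclass

@dataclass
class MemRequest:
    op: str          # "read" or "write"
    address: int

ELEM_SIZE = 8        # bytes per element in this toy model

def expand_command(cmd):
    """Turn one offloaded command into the stream of memory requests it implies."""
    kind, base, count = cmd
    requests = []
    for i in range(count):
        addr = base + i * ELEM_SIZE
        requests.append(MemRequest("read", addr))       # fetch the operand
        if kind == "increment":
            requests.append(MemRequest("write", addr))  # write back the result
    return requests

def issue(requests, memory_device):
    """Issue each generated request to the memory device in order."""
    for req in requests:
        memory_device.append((req.op, hex(req.address)))

device_log = []
issue(expand_command(("increment", 0x4000, 3)), device_log)
print(device_log)
```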
  • Publication number: 20220365975
    Abstract: An accelerator device includes a first processing unit to access a structure of a graph dataset, and a second processing unit coupled with the first processing unit to perform computations based on data values in the graph dataset.
    Type: Application
    Filed: December 29, 2021
    Publication date: November 17, 2022
    Inventors: Ganesh Dasika, Michael Ignatowski, Michael J. Schulte, Gabriel H. Loh, Valentina Salapura, Angela Beth Dalton
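A minimal sketch of the split described above, assuming a CSR (compressed sparse row) layout: one routine touches only the graph structure while the other operates only on vertex values. The layout and the sum-of-neighbors computation are illustrative choices, not details from the patent.
```python
# CSR layout for a 4-vertex graph: edges of vertex v are neighbors[offsets[v]:offsets[v+1]].
offsets = [0, 2, 3, 5, 6]
neighbors = [1, 2, 0, 0, 3, 2]
values = [1.0, 2.0, 3.0, 4.0]

def traverse_structure(offsets, neighbors, vertex):
    """First unit: walk only the graph *structure* to find a vertex's neighbors."""
    return neighbors[offsets[vertex]:offsets[vertex + 1]]

def compute_on_values(values, neighbor_ids):
    """Second unit: operate only on the *data values* handed over by the first unit."""
    return sum(values[n] for n in neighbor_ids)

# The two stages are chained per vertex, mirroring the two coupled processing units.
print([compute_on_values(values, traverse_structure(offsets, neighbors, v))
       for v in range(4)])
```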
  • Patent number: 11494300
    Abstract: Methods and apparatus provide virtual to physical address translations and a hardware page table walker with region based page table prefetch operation that produces virtual memory region tracking information that includes at least: data representing a virtual base address of a virtual memory region and a physical address of a first page table entry (PTE) corresponding to a virtual page within the virtual memory region. The hardware page table walker, in response to the TLB miss indication, prefetches a physical address of a second page table entry that provides a final physical address for the missed TLB entry, using the virtual memory region tracking information. In some implementations, the prefetching of the physical PTE address is done in parallel with earlier levels of a page walk operation.
    Type: Grant
    Filed: September 26, 2020
    Date of Patent: November 8, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Gabriel H. Loh
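A simplified Python sketch of the region-tracking idea above; the single-region table, the 512-entry last-level page table, and the `prefetch_pte_address` helper are illustrative assumptions rather than the patented mechanism.
```python
PAGE_SIZE, PTE_SIZE = 4096, 8

# Region tracking info: virtual region base -> physical address of that region's first PTE.
region_table = {0x7F0000000000: 0x0009_A000}

def prefetch_pte_address(vaddr):
    """On a TLB miss, use region tracking info to compute the final PTE's physical
    address directly, so it can be fetched alongside the earlier walk levels."""
    for region_base, first_pte_pa in region_table.items():
        page_index = (vaddr - region_base) // PAGE_SIZE
        if 0 <= page_index < 512:                 # one last-level table in this toy model
            return first_pte_pa + page_index * PTE_SIZE
    return None                                   # region not tracked: fall back to a full walk

missed_vaddr = 0x7F0000000000 + 5 * PAGE_SIZE + 0x123
print(hex(prefetch_pte_address(missed_vaddr)))    # 0x9a000 + 5*8 = 0x9a028
```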
  • Patent number: 11475305
    Abstract: An electronic device has an activation function functional block that implements an activation function. During operation, the activation function functional block receives an input including a plurality of bits representing a numerical value. The activation function functional block then determines a range from among a plurality of ranges into which the input falls, each range including a separate portion of possible numerical values of the input. The activation function functional block next generates a result of a linear function associated with the range. Generating the result includes using a separate linear function that is associated with each range in the plurality of ranges to approximate results of the activation function within that range.
    Type: Grant
    Filed: December 8, 2017
    Date of Patent: October 18, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Gabriel H. Loh
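The range-based approximation above can be illustrated with a short sketch; the breakpoints and (slope, intercept) pairs below roughly approximate a sigmoid and are illustrative values, not taken from the patent.
```python
# Each input range gets its own (lower_bound, slope, intercept) line segment.
SEGMENTS = [
    (float("-inf"), 0.0,   0.0),    # x < -4:  flat at ~0
    (-4.0,          0.084, 0.353),  # -4 <= x < -1
    (-1.0,          0.231, 0.5),    # -1 <= x <  1: near-linear region through (0, 0.5)
    (1.0,           0.084, 0.647),  #  1 <= x <  4
    (4.0,           0.0,   1.0),    #  x >= 4:  flat at ~1
]

def approx_sigmoid(x):
    """Pick the range the input falls into, then evaluate that range's linear function."""
    slope, intercept = 0.0, 0.0
    for lower, seg_slope, seg_intercept in SEGMENTS:
        if x >= lower:
            slope, intercept = seg_slope, seg_intercept
        else:
            break
    return slope * x + intercept

for x in (-6.0, -2.0, 0.0, 2.0, 6.0):
    print(x, round(approx_sigmoid(x), 3))
```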
  • Patent number: 11403221
    Abstract: A system and method for efficiently processing memory requests are described. A computing system includes multiple compute units, multiple caches of a memory hierarchy and a communication fabric. A compute unit generates a memory access request that misses in a higher level cache, which sends a miss request to a lower level shared cache. During servicing of the miss request, the lower level cache merges identification information of multiple memory access requests targeting a same cache line from multiple compute units into a merged memory access response. The lower level shared cache continues to insert information into the merged memory access response until the lower level shared cache is ready to issue the merged memory access response. An intermediate router in the communication fabric broadcasts the merged memory access response into multiple memory access responses to send to corresponding compute units.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: August 2, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Onur Kayiran, Yasuko Eckert, Mark Henry Oskin, Gabriel H. Loh, Steven E. Raasch, Maxim V. Kazakov
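A toy Python model of the merge-and-fan-out flow described above; the dictionaries and the `fan_out` router step are illustrative stand-ins for the hardware structures named in the abstract.
```python
from collections import defaultdict

# Lower-level shared cache: cache-line address -> requesters waiting on that line.
pending = defaultdict(list)

def record_miss_request(line_addr, compute_unit, request_id):
    """Merge identification info of requests that target the same cache line."""
    pending[line_addr].append((compute_unit, request_id))

def issue_merged_response(line_addr, data):
    """Build one merged response carrying every waiting requester's identity."""
    return {"line": line_addr, "data": data, "targets": pending.pop(line_addr)}

def fan_out(merged):
    """Router step: expand the merged response into one response per compute unit."""
    return [{"cu": cu, "req": rid, "line": merged["line"], "data": merged["data"]}
            for cu, rid in merged["targets"]]

record_miss_request(0x80, compute_unit=0, request_id=11)
record_miss_request(0x80, compute_unit=3, request_id=42)
for resp in fan_out(issue_merged_response(0x80, data=b"\xab" * 64)):
    print(resp["cu"], resp["req"])
```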
  • Patent number: 11360891
    Abstract: A method of dynamic cache configuration includes determining, for a first clustering configuration, whether a current cache miss rate exceeds a miss rate threshold. The first clustering configuration includes a plurality of graphics processing unit (GPU) compute units clustered into a first plurality of compute unit clusters. The method further includes clustering, based on the current cache miss rate exceeding the miss rate threshold, the plurality of GPU compute units into a second clustering configuration having a second plurality of compute unit clusters fewer than the first plurality of compute unit clusters.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: June 14, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Mohamed Assem Ibrahim, Onur Kayiran, Yasuko Eckert, Gabriel H. Loh
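A sketch of the reconfiguration policy described above, assuming a hypothetical halve-the-clusters response and round-robin assignment of compute units; both choices are illustrative, not from the patent.
```python
def next_clustering(compute_units, num_clusters, miss_rate, miss_rate_threshold=0.5):
    """If the miss rate is too high, re-cluster the CUs into fewer, larger clusters."""
    if miss_rate > miss_rate_threshold and num_clusters > 1:
        num_clusters //= 2                       # illustrative policy: halve the cluster count
    clusters = [[] for _ in range(num_clusters)]
    for i, cu in enumerate(compute_units):
        clusters[i % num_clusters].append(cu)    # round-robin CUs over the clusters
    return clusters

cus = [f"CU{i}" for i in range(8)]
print(next_clustering(cus, num_clusters=4, miss_rate=0.7))  # 2 clusters of 4 CUs
print(next_clustering(cus, num_clusters=4, miss_rate=0.2))  # unchanged: 4 clusters of 2
```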
  • Patent number: 11342933
    Abstract: Described are systems and methods for lossy compression and restoration of data. The raw data is first truncated. Then the truncated data is compressed. The compressed truncated data can then be efficiently stored and/or transmitted using fewer bits. To restore the data, the compressed data is then decompressed and restoration bits are concatenated. The restoration bits are selected to compensate for statistical biasing introduced by the truncation.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: May 24, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Gabriel H. Loh
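The truncate-compress-restore pipeline above might look like the following sketch; the 16-bit samples, zlib as the lossless stage, and midpoint restoration bits are illustrative assumptions, not the patented scheme.
```python
import zlib

TRUNC_BITS = 4                      # low-order bits dropped from each 16-bit sample

def compress_lossy(samples):
    """Truncate low bits, then losslessly compress the truncated values."""
    truncated = bytes(b for s in samples
                      for b in (s >> TRUNC_BITS).to_bytes(2, "little"))
    return zlib.compress(truncated)

def restore(blob, count):
    """Decompress, then concatenate restoration bits chosen to offset the
    downward bias of truncation (here: the midpoint of the dropped range)."""
    raw = zlib.decompress(blob)
    midpoint = 1 << (TRUNC_BITS - 1)
    return [(int.from_bytes(raw[2 * i:2 * i + 2], "little") << TRUNC_BITS) | midpoint
            for i in range(count)]

data = [1000, 1013, 65535, 4096]
blob = compress_lossy(data)
print(restore(blob, len(data)))     # values reconstructed to within +/-8 of the originals
```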
  • Publication number: 20220138107
    Abstract: Systems, apparatuses, and methods for efficiently performing memory accesses in a computing system are disclosed. A computing system includes one or more clients, a communication fabric and a last-level cache implemented with low latency, high bandwidth memory. The cache controller for the last-level cache determines a range of addresses corresponding to a first region of system memory with a copy of data stored in a second region of the last-level cache. The cache controller sends a selected memory access request to system memory when the cache controller determines a request address of the memory access request is not within the range of addresses. The cache controller services the selected memory request by accessing data from the last-level cache when the cache controller determines the request address is within the range of addresses.
    Type: Application
    Filed: January 14, 2022
    Publication date: May 5, 2022
    Inventor: Gabriel H. Loh
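A minimal Python model of the range check described above; the `LastLevelCacheController` class and its dictionaries are illustrative, showing only the routing decision between the last-level cache and system memory.
```python
class LastLevelCacheController:
    """Routes requests either to the last-level cache or to system memory,
    based on whether the address falls in the tracked region."""

    def __init__(self, region_start, region_size):
        self.region_start = region_start
        self.region_end = region_start + region_size
        self.llc = {}                       # toy last-level cache contents
        self.system_memory = {}             # toy backing memory

    def read(self, addr):
        if self.region_start <= addr < self.region_end:
            return ("llc", self.llc.get(addr, 0))          # serve from the cached copy
        return ("system_memory", self.system_memory.get(addr, 0))

ctrl = LastLevelCacheController(region_start=0x1000_0000, region_size=0x100_0000)
print(ctrl.read(0x1000_4000))   # inside the range  -> served by the last-level cache
print(ctrl.read(0x8000_0000))   # outside the range -> forwarded to system memory
```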
  • Publication number: 20220100653
    Abstract: Methods and apparatus provide virtual to physical address translations and a hardware page table walker with region based page table prefetch operation that produces virtual memory region tracking information that includes at least: data representing a virtual base address of a virtual memory region and a physical address of a first page table entry (PTE) corresponding to a virtual page within the virtual memory region. The hardware page table walker, in response to the TLB miss indication, prefetches a physical address of a second page table entry that provides a final physical address for the missed TLB entry, using the virtual memory region tracking information. In some implementations, the prefetching of the physical PTE address is done in parallel with earlier levels of a page walk operation.
    Type: Application
    Filed: September 26, 2020
    Publication date: March 31, 2022
    Inventor: Gabriel H. Loh
  • Publication number: 20220091980
    Abstract: A system and method for efficiently processing memory requests are described. A computing system includes multiple compute units, multiple caches of a memory hierarchy and a communication fabric. A compute unit generates a memory access request that misses in a higher level cache, which sends a miss request to a lower level shared cache. During servicing of the miss request, the lower level cache merges identification information of multiple memory access requests targeting a same cache line from multiple compute units into a merged memory access response. The lower level shared cache continues to insert information into the merged memory access response until the lower level shared cache is ready to issue the merged memory access response. An intermediate router in the communication fabric broadcasts the merged memory access response into multiple memory access responses to send to corresponding compute units.
    Type: Application
    Filed: September 24, 2020
    Publication date: March 24, 2022
    Inventors: Onur Kayiran, Yasuko Eckert, Mark Henry Oskin, Gabriel H. Loh, Steven E. Raasch, Maxim V. Kazakov
  • Patent number: 11275829
    Abstract: An apparatus includes an external device for causing messages to be transmitted with local traffic between internal blocks of a host system-on-chip (SoC) via a network on chip (NoC) in the host SoC, the transmitted messages including one or more memory requests directed to a memory of the host SoC, violating a traffic policy for a first time interval by transmitting a number of messages that exceeds a maximum threshold for the first time interval, where the SoC monitors an amount of external traffic from an untrusted device transmitted over the NoC over a set of one or more time intervals including the first time interval, and in response to detection of the violation by the host SoC, reducing an amount of traffic transmitted via the NoC. The apparatus also includes an external processor link for transmitting the messages from the external device to the host SoC.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: March 15, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Gabriel H. Loh, Maurice B. Steinman
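A rough sketch of the interval-based monitoring described above, using a sliding window as a stand-in for the patent's fixed time intervals; the `TrafficMonitor` class and its parameters are hypothetical.
```python
from collections import deque

class TrafficMonitor:
    """Counts messages from an external device per time interval and throttles
    once an interval's count exceeds the allowed maximum."""

    def __init__(self, max_per_interval, interval_len):
        self.max_per_interval = max_per_interval
        self.interval_len = interval_len
        self.history = deque()               # timestamps of recent messages
        self.throttled = False

    def on_message(self, now):
        self.history.append(now)
        while self.history and self.history[0] <= now - self.interval_len:
            self.history.popleft()           # drop messages outside the current window
        if len(self.history) > self.max_per_interval:
            self.throttled = True            # violation: reduce traffic from the device
        return self.throttled

mon = TrafficMonitor(max_per_interval=3, interval_len=100)
print([mon.on_message(t) for t in (0, 10, 20, 30, 200)])  # fourth message trips the limit
```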
  • Publication number: 20220051985
    Abstract: A semiconductor package includes a first die, a second die, and an interconnect die coupled to a first plurality of through-die vias in the first die and a second plurality of through-die vias in the second die. The interconnect die provides communications pathways between the first die and the second die.
    Type: Application
    Filed: October 30, 2020
    Publication date: February 17, 2022
    Inventors: Rahul Agarwal, Raja Swaminathan, Michael S. Alfano, Gabriel H. Loh, Alan D. Smith, Gabriel Wong, Michael Mantor
  • Patent number: 11232039
    Abstract: Systems, apparatuses, and methods for efficiently performing memory accesses in a computing system are disclosed. A computing system includes one or more clients, a communication fabric and a last-level cache implemented with low latency, high bandwidth memory. The cache controller for the last-level cache determines a range of addresses corresponding to a first region of system memory with a copy of data stored in a second region of the last-level cache. The cache controller sends a selected memory access request to system memory when the cache controller determines a request address of the memory access request is not within the range of addresses. The cache controller services the selected memory request by accessing data from the last-level cache when the cache controller determines the request address is within the range of addresses.
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: January 25, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Gabriel H. Loh
  • Patent number: 11226900
    Abstract: An approach for tracking data stored in caches uses a Bloom filter to reduce the number of addresses that need to be tracked by a coherence directory. When a requested address is determined to not be currently tracked by either the coherence directory or the Bloom filter, tracking of the address is initiated in the Bloom filter, but not in the coherence directory. Initiating tracking of the address in the Bloom filter includes setting hash bits in the Bloom filter so that subsequent requests for the address will “hit” the Bloom filter. When a requested address is determined to be tracked by the coherence directory, the Bloom filter is not used to track the address.
    Type: Grant
    Filed: January 29, 2020
    Date of Patent: January 18, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Weon Taek Na, Yasuko Eckert, Mark H. Oskin, Gabriel H. Loh, William Louie Walker, Michael Warren Boyer
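A toy Python model of the Bloom-filter-assisted tracking described above; the hash construction, filter size, and `on_request` flow are illustrative simplifications, not the patented design.
```python
import hashlib

class BloomFilter:
    def __init__(self, num_bits=256, num_hashes=3):
        self.bits = [False] * num_bits
        self.num_hashes = num_hashes

    def _positions(self, addr):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{addr}".encode()).digest()
            yield int.from_bytes(digest[:4], "little") % len(self.bits)

    def add(self, addr):
        for pos in self._positions(addr):
            self.bits[pos] = True

    def maybe_contains(self, addr):
        return all(self.bits[pos] for pos in self._positions(addr))

directory = set()        # addresses precisely tracked by the coherence directory
bloom = BloomFilter()

def on_request(addr):
    """Track a newly seen address in the Bloom filter instead of the directory."""
    if addr in directory:
        return "directory hit"             # directory tracks it; Bloom filter not consulted
    if bloom.maybe_contains(addr):
        return "bloom hit"                 # possibly shared; may be a false positive
    bloom.add(addr)                        # start tracking in the Bloom filter only
    return "now tracked in bloom"

print([on_request(a) for a in (0x40, 0x40, 0x80)])
```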
  • Patent number: 11132300
    Abstract: A system includes a device coupleable to a first memory. The device includes a second memory to cache data from the first memory. The second memory is to store a set of compressed pages of the first memory and a set of page descriptors. Each compressed page includes a set of compressed data blocks. Each page descriptor represents a corresponding page and includes a set of location identifiers that identify the locations of the compressed data blocks of the corresponding page in the second memory. The device further includes compression logic to compress data blocks of a page to be stored to the second memory and decompression logic to decompress compressed data blocks of a page accessed from the second memory.
    Type: Grant
    Filed: July 11, 2013
    Date of Patent: September 28, 2021
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Gabriel H. Loh, James M. O'Connor
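The page-descriptor layout described above can be sketched as follows; the `CompressedPageCache` class, the 1 KiB block size, and zlib compression are illustrative choices, not details from the patent.
```python
import zlib

BLOCK_SIZE = 1024                      # uncompressed block size within a page

class CompressedPageCache:
    """Stores compressed blocks of a page plus a page descriptor recording
    where each block landed, so any block can be located and decompressed."""

    def __init__(self):
        self.storage = bytearray()     # second memory holding compressed data
        self.descriptors = {}          # page id -> list of (offset, length) per block

    def store_page(self, page_id, page_bytes):
        locations = []
        for i in range(0, len(page_bytes), BLOCK_SIZE):
            packed = zlib.compress(page_bytes[i:i + BLOCK_SIZE])
            locations.append((len(self.storage), len(packed)))   # location identifier
            self.storage += packed
        self.descriptors[page_id] = locations

    def read_block(self, page_id, block_index):
        offset, length = self.descriptors[page_id][block_index]
        return zlib.decompress(bytes(self.storage[offset:offset + length]))

cache = CompressedPageCache()
cache.store_page(7, bytes(4096))                     # a page of zeros compresses well
print(len(cache.storage), cache.read_block(7, 2) == bytes(BLOCK_SIZE))
```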
  • Patent number: 11106600
    Abstract: A processing system adjusts a cache replacement priority of cache lines at a cache based on evictions of entries mapping virtual-to-physical address translations from a translation lookaside buffer (TLB). Upon eviction of a TLB entry, the processing system identifies cache lines corresponding to the physical addresses of the evicted TLB entry and evicts the cache lines or adjusts the cache replacement priority of the cache lines so that their eviction from the cache will be accelerated.
    Type: Grant
    Filed: January 24, 2019
    Date of Patent: August 31, 2021
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Gabriel H. Loh, Paul Moyer
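A small Python sketch of the eviction-driven priority adjustment described above; the priority dictionary and page-granularity matching are simplifying assumptions used only to show the idea.
```python
PAGE_SIZE = 4096
LOWEST_PRIORITY = 0                 # lines at this priority are evicted first

# Toy cache state: physical line address -> replacement priority (higher = keep longer).
cache_priority = {0x2000: 3, 0x2040: 2, 0x7000: 3}

def on_tlb_eviction(evicted_phys_page, evict_immediately=False):
    """Demote (or evict) cache lines that belong to the evicted translation's page,
    since they are unlikely to be touched again soon without a valid TLB entry."""
    for line_addr in list(cache_priority):
        if line_addr // PAGE_SIZE != evicted_phys_page:
            continue
        if evict_immediately:
            del cache_priority[line_addr]
        else:
            cache_priority[line_addr] = LOWEST_PRIORITY   # accelerate future eviction

on_tlb_eviction(0x2000 // PAGE_SIZE)
print(cache_priority)               # lines in the evicted page demoted; 0x7000 untouched
```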