Patents by Inventor Shinye Shiu

Shinye Shiu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10802968
    Abstract: An apparatus for processing memory requests from a functional unit in a computing system is disclosed. The apparatus may include an interface that may be configured to receive a request from the functional unit. Circuitry may be configured to initiate a speculative read access command to a memory in response to a determination that the received request is a request for data from the memory. The circuitry may be further configured to determine, in parallel with the speculative read access, if the speculative read will result in an ordering or coherence violation.
    Type: Grant
    Filed: May 6, 2015
    Date of Patent: October 13, 2020
    Assignee: Apple Inc.
    Inventors: Sukalpa Biswas, Harshavardhan Kaushikkar, Munetoshi Fukami, Gurjeet S. Saund, Manu Gulati, Shinye Shiu
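    The speculative-read flow in the abstract above can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the memory read is launched immediately, the ordering/coherence check (modeled here sequentially) runs "in parallel", and the speculative result is discarded if the check fails. All names are hypothetical.

```python
# Hypothetical model of a speculative read with a parallel coherence check.
# `dirty_in_cache` stands in for a duplicate-tag/coherence structure that
# knows whether a cache holds a newer copy of the line.

def handle_request(addr, memory, dirty_in_cache):
    """Return data for `addr`, using a speculative memory read."""
    # Start the speculative read access immediately, before any checks.
    speculative_data = memory.get(addr)

    # In parallel (sequential here for clarity), determine whether the
    # speculative read violates ordering or coherence, e.g. because a
    # cache holds a dirty copy of the requested line.
    violation = addr in dirty_in_cache

    if violation:
        # Kill the speculative result; serve the coherent copy instead.
        return dirty_in_cache[addr]
    return speculative_data
```

The win is latency: when no violation occurs (the common case), the memory access has already been in flight while the check ran.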
  • Patent number: 10607977
    Abstract: This document describes apparatuses and techniques for integrated DRAM with low-voltage swing I/O. In some aspects, a dynamic random access memory (DRAM) die and application processor (AP) die are mounted to a system-in-package (SiP) die carrier that includes one or more redistribution layers. The DRAM die and AP die are located adjacent to each other on the die carrier such that the respective memory inputs/outputs of each die are proximate those of the other die.
    Type: Grant
    Filed: October 18, 2017
    Date of Patent: March 31, 2020
    Assignee: Google LLC
    Inventor: Shinye Shiu
  • Patent number: 10310586
    Abstract: In an embodiment, a system includes a memory controller that includes a memory cache and a display controller configured to control a display. The system may be configured to detect that the images being displayed are essentially static, and may be configured to cause the display controller to request allocation in the memory cache for source frame buffer data. In some embodiments, the system may also alter the power management configuration of the memory cache to prevent the memory cache from shutting down or reducing its effective size during the idle-screen case, so that the frame buffer data may remain cached. During times that the display is dynamically changing, the frame buffer data may not be cached in the memory cache, and the power management configuration may permit the shutting down/size reduction of the memory cache.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: June 4, 2019
    Assignee: Apple Inc.
    Inventors: Sukalpa Biswas, Shinye Shiu, Cyril de la Cropte de Chanterac, Manu Gulati, Pulkit Desai, Rong Zhang Hu
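    The idle-screen policy above can be illustrated with a small model: when consecutive frames are identical, the frame buffer is allocated into the memory cache and the cache is kept from powering down; when content changes, the frame buffer bypasses the cache and power savings are re-enabled. The class, threshold, and field names below are invented for illustration.

```python
# Illustrative idle-screen detector driving two settings: whether the
# frame buffer is cached, and whether the memory cache may power down.

class DisplayPath:
    def __init__(self, static_threshold=3):
        self.static_threshold = static_threshold
        self._last_frame = None
        self._static_count = 0
        self.cache_frame_buffer = False   # allocate FB in memory cache?
        self.allow_cache_shutdown = True  # power management setting

    def on_frame(self, frame):
        if frame == self._last_frame:
            self._static_count += 1
        else:
            self._static_count = 0
        self._last_frame = frame

        if self._static_count >= self.static_threshold:
            # Idle screen: cache the frame buffer, keep the cache powered.
            self.cache_frame_buffer = True
            self.allow_cache_shutdown = False
        else:
            # Dynamic content: bypass the cache, allow power savings.
            self.cache_frame_buffer = False
            self.allow_cache_shutdown = True
```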
  • Patent number: 10203901
    Abstract: Provided are methods and systems for memory decompression using a hardware decompressor that minimizes or eliminates the involvement of software. Custom decompression hardware is added to the memory subsystem, where the decompression hardware handles read accesses caused by, for example, cache misses or requests from devices to compressed memory blocks, by reading a compressed block, decompressing it into an internal buffer, and returning the requested portion of the block. The custom hardware is designed to determine if the block is compressed, and determine the parameters of compression, by checking unused high bits of the physical address of the access. This allows compression to be implemented without additional metadata, because the necessary metadata can be stored in unused bits in the existing page table structures.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: February 12, 2019
    Assignee: Google LLC
    Inventors: Vyacheslav Malyugin, Luigi Semenzato, Choon Ping Chng, Santhosh Rao, Shinye Shiu
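    The key trick in the abstract above, storing compression metadata in unused high bits of the physical address, can be sketched as below. The bit layout (bit 47 as a compressed flag, bits 44-46 as an algorithm selector, 40 usable address bits) is invented purely for illustration; the patent does not specify these values.

```python
# Sketch of metadata-in-address: unused high physical-address bits encode
# whether a block is compressed and with which parameters, so no separate
# metadata tables are needed.

COMPRESSED_BIT = 1 << 47         # assumed-unused flag bit
ALGO_SHIFT, ALGO_MASK = 44, 0x7  # 3 bits selecting a compression scheme

def tag_address(real_addr, algo):
    """Store compression parameters in the unused high bits."""
    return real_addr | COMPRESSED_BIT | (algo << ALGO_SHIFT)

def decode_access(phys_addr):
    """Split a tagged physical address into (real_addr, compressed, algo)."""
    compressed = bool(phys_addr & COMPRESSED_BIT)
    algo = (phys_addr >> ALGO_SHIFT) & ALGO_MASK if compressed else None
    real_addr = phys_addr & ((1 << 40) - 1)  # strip the metadata bits
    return real_addr, compressed, algo
```

Because the tagged bits live in the page table entry's physical address field, the hardware decompressor learns everything it needs from the address of the access itself.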
  • Patent number: 10089239
    Abstract: Provided are methods, systems, and apparatus for managing and controlling memory caches, in particular, system level caches outside of those closest to the CPU. The processes and representative hardware structures that implement the processes are designed to allow for detailed control over the behavior of such system level caches. Caching policies are developed based on policy identifiers, where a policy identifier corresponds to a collection of parameters that control the behavior of a set of cache management structures. For a given cache, one policy identifier is stored in each line of the cache.
    Type: Grant
    Filed: May 26, 2016
    Date of Patent: October 2, 2018
    Assignee: Google LLC
    Inventors: Allan D. Knies, Shinye Shiu, Chih-Chung Chang, Vyacheslav Vladimirovich Malyugin, Santhosh Rao
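    The policy-identifier scheme above reduces per-line storage: each cache line carries only a small policy ID, and a shared table maps that ID to the full set of behavior-controlling parameters. The parameters and IDs below are hypothetical, not from the patent.

```python
# Minimal model of per-line policy IDs: one small table of policies,
# one policy ID stored in each cache line.

from dataclasses import dataclass

@dataclass
class Policy:
    allocate_on_miss: bool
    evict_priority: int  # lower means evicted sooner

POLICIES = {
    0: Policy(allocate_on_miss=True, evict_priority=1),   # default
    1: Policy(allocate_on_miss=False, evict_priority=0),  # streaming data
    2: Policy(allocate_on_miss=True, evict_priority=9),   # retained data
}

class CacheLine:
    def __init__(self, tag, policy_id):
        self.tag = tag
        self.policy_id = policy_id  # one policy ID stored per line

    @property
    def policy(self):
        return POLICIES[self.policy_id]
```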
  • Publication number: 20180211946
    Abstract: This document describes apparatuses and techniques for integrated DRAM with low-voltage swing I/O. In some aspects, a dynamic random access memory (DRAM) die and application processor (AP) die are mounted to a system-in-package (SiP) die carrier that includes one or more redistribution layers. The DRAM die and AP die are located adjacent to each other on the die carrier such that the respective memory inputs/outputs of each die are proximate those of the other die.
    Type: Application
    Filed: October 18, 2017
    Publication date: July 26, 2018
    Applicant: Google LLC
    Inventor: Shinye Shiu
  • Publication number: 20180095668
    Abstract: Provided are methods and systems for memory decompression using a hardware decompressor that minimizes or eliminates the involvement of software. Custom decompression hardware is added to the memory subsystem, where the decompression hardware handles read accesses caused by, for example, cache misses or requests from devices to compressed memory blocks, by reading a compressed block, decompressing it into an internal buffer, and returning the requested portion of the block. The custom hardware is designed to determine if the block is compressed, and determine the parameters of compression, by checking unused high bits of the physical address of the access. This allows compression to be implemented without additional metadata, because the necessary metadata can be stored in unused bits in the existing page table structures.
    Type: Application
    Filed: November 21, 2017
    Publication date: April 5, 2018
    Applicant: Google LLC
    Inventors: Vyacheslav Malyugin, Luigi Semenzato, Choon Ping Chng, Santhosh Rao, Shinye Shiu
  • Patent number: 9892054
    Abstract: A method and apparatus are disclosed to monitor system performance and dynamically update memory subsystem settings in software, optimizing system performance and power consumption. In an example embodiment, the apparatus monitors a software application's cache performance and provides the application with the cache performance data. The software application, which has a higher-level/macro view of the overall system and better knowledge of its future requests, analyzes the performance data to determine improved memory subsystem settings. The application then supplies these settings for the memory component to implement, improving memory and overall system performance and efficiency.
    Type: Grant
    Filed: October 7, 2015
    Date of Patent: February 13, 2018
    Assignee: Google LLC
    Inventor: Shinye Shiu
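    The hardware/software feedback loop above can be modeled in a few lines: hardware exposes cache performance counters, and software that knows its own future access pattern picks new settings and writes them back. Every name and threshold below is illustrative.

```python
# Hypothetical feedback loop: hardware counters -> software analysis ->
# updated memory-subsystem settings.

class CacheHW:
    def __init__(self):
        self.hits = 0
        self.misses = 0
        self.settings = {"allocation": "normal"}

    def perf_data(self):
        total = self.hits + self.misses
        return {"hit_rate": self.hits / total if total else 1.0}

    def apply(self, settings):
        self.settings.update(settings)

def tune(cache):
    """Software side: read perf data, choose better settings."""
    if cache.perf_data()["hit_rate"] < 0.5:
        # Poor reuse observed: stop allocating, stream past the cache.
        cache.apply({"allocation": "bypass"})
```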
  • Patent number: 9864541
    Abstract: Provided are methods and systems for memory decompression using a hardware decompressor that minimizes or eliminates the involvement of software. Custom decompression hardware is added to the memory subsystem, where the decompression hardware handles read accesses caused by, for example, cache misses or requests from devices to compressed memory blocks, by reading a compressed block, decompressing it into an internal buffer, and returning the requested portion of the block. The custom hardware is designed to determine if the block is compressed, and determine the parameters of compression, by checking unused high bits of the physical address of the access. This allows compression to be implemented without additional metadata, because the necessary metadata can be stored in unused bits in the existing page table structures.
    Type: Grant
    Filed: February 12, 2016
    Date of Patent: January 9, 2018
    Assignee: Google LLC
    Inventors: Vyacheslav Vladimirovich Malyugin, Luigi Semenzato, Choon Ping Chng, Santhosh Rao, Shinye Shiu
  • Patent number: 9785571
    Abstract: Provided are methods and systems for de-duplicating cache lines in physical memory by detecting cache line data patterns and building a link-list between multiple physical addresses and their common data value. In this manner, the methods and systems are applied to achieve de-duplication of an on-chip cache. A cache line filter includes one table that defines the most commonly duplicated content patterns and a second table that saves pattern numbers from the first table and the physical address for the duplicated cache line. Since a cache line duplicate can be detected during a write operation, each write can involve a table lookup and comparison. If there is a hit in the table, only the address is saved instead of the entire data string.
    Type: Grant
    Filed: October 7, 2015
    Date of Patent: October 10, 2017
    Assignee: Google Inc.
    Inventor: Shinye Shiu
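    The two-table filter above can be sketched with Python dictionaries: one table of common duplicated patterns, and a second mapping physical addresses to pattern numbers. On a write that matches a pattern, only the address-to-pattern pair is recorded. The structure and 64-byte line size are assumptions for illustration.

```python
# Toy cache-line dedup filter: table 1 holds common patterns, table 2
# maps a duplicated line's address to its pattern number.

COMMON_PATTERNS = {0: b"\x00" * 64, 1: b"\xff" * 64}  # table 1

class DedupFilter:
    def __init__(self):
        self.dedup_map = {}  # table 2: phys addr -> pattern number
        self.backing = {}    # full lines that matched no pattern

    def write_line(self, addr, data):
        for num, pattern in COMMON_PATTERNS.items():
            if data == pattern:             # lookup + compare on write
                self.dedup_map[addr] = num  # save only addr + pattern no.
                return True
        self.backing[addr] = data           # miss: store the whole line
        return False

    def read_line(self, addr):
        if addr in self.dedup_map:
            return COMMON_PATTERNS[self.dedup_map[addr]]
        return self.backing[addr]
```

Zero-filled and all-ones lines are plausible defaults for table 1, since they dominate real memory contents; the patent leaves the pattern set configurable.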
  • Patent number: 9740631
    Abstract: Provided are methods and systems for managing memory using a hardware-based page filter designed to distinguish between active and inactive pages (“hot” and “cold” pages, respectively) so that inactive pages can be compressed prior to the occurrence of a page fault. The methods and systems are designed to achieve, among other things, lower cost, longer battery life, and faster user response. Whereas existing approaches for memory management are based on pixel or frame buffer compression, the methods and systems provided focus on the CPU's program (e.g., generic data structures). Focusing on hardware-accelerated memory compression to offload the CPU translates to higher power efficiency (e.g., an ASIC is approximately 100× lower power than a CPU) and higher performance (e.g., an ASIC is approximately 10× faster than a CPU), and also allows for hardware-assisted memory management to offload the OS/kernel, which significantly improves response time.
    Type: Grant
    Filed: October 7, 2015
    Date of Patent: August 22, 2017
    Assignee: Google Inc.
    Inventor: Shinye Shiu
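    The hot/cold page filter above can be modeled as a periodic scan: pages unreferenced for several intervals are classified cold and compressed proactively, before a page fault forces the issue; touching a cold page decompresses it on demand. The scan interval, threshold, and use of `zlib` as the compressor are assumptions for illustration only.

```python
# Rough model of a page filter that compresses cold pages ahead of
# demand. A real design would do this in hardware per the abstract.

import zlib

class PageFilter:
    def __init__(self, cold_after=2):
        self.cold_after = cold_after
        self.idle_scans = {}   # page -> scans since last access
        self.pages = {}        # page -> raw bytes (hot pages)
        self.compressed = {}   # page -> compressed bytes (cold pages)

    def touch(self, page):
        self.idle_scans[page] = 0              # page is hot again
        if page in self.compressed:            # decompress on demand
            self.pages[page] = zlib.decompress(self.compressed.pop(page))

    def scan(self):
        for page in list(self.pages):
            self.idle_scans[page] = self.idle_scans.get(page, 0) + 1
            if self.idle_scans[page] >= self.cold_after:
                # Cold page: compress proactively, free the raw copy.
                self.compressed[page] = zlib.compress(self.pages.pop(page))
```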
  • Publication number: 20160350232
    Abstract: Provided are methods, systems, and apparatus for managing and controlling memory caches, in particular, system level caches outside of those closest to the CPU. The processes and representative hardware structures that implement the processes are designed to allow for detailed control over the behavior of such system level caches. Caching policies are developed based on policy identifiers, where a policy identifier corresponds to a collection of parameters that control the behavior of a set of cache management structures. For a given cache, one policy identifier is stored in each line of the cache.
    Type: Application
    Filed: May 26, 2016
    Publication date: December 1, 2016
    Applicant: Google Inc.
    Inventors: Allan D. Knies, Shinye Shiu, Chih-Chung Chang, Vyacheslav Vladimirovich Malyugin, Santhosh Rao
  • Publication number: 20160328322
    Abstract: An apparatus for processing memory requests from a functional unit in a computing system is disclosed. The apparatus may include an interface that may be configured to receive a request from the functional unit. Circuitry may be configured to initiate a speculative read access command to a memory in response to a determination that the received request is a request for data from the memory. The circuitry may be further configured to determine, in parallel with the speculative read access, if the speculative read will result in an ordering or coherence violation.
    Type: Application
    Filed: May 6, 2015
    Publication date: November 10, 2016
    Inventors: Sukalpa Biswas, Harshavardhan Kaushikkar, Munetoshi Fukami, Gurjeet S. Saund, Manu Gulati, Shinye Shiu
  • Patent number: 9465740
    Abstract: An apparatus for processing coherency transactions in a computing system is disclosed. The apparatus may include a request queue circuit, a duplicate tag circuit, and a memory interface unit. The request queue circuit may be configured to generate a speculative read request dependent upon a received read transaction. The duplicate tag circuit may be configured to store copies of tag from one or more cache memories, and to generate a kill message in response to a determination that data requested in the received read transaction is stored in a cache memory. The memory interface unit may be configured to store the generated speculative read request dependent upon a stall condition. The stored speculative read request may be sent to a memory controller dependent upon the stall condition. The memory interface unit may be further configured to delete the speculative read request in response to the kill message.
    Type: Grant
    Filed: April 11, 2013
    Date of Patent: October 11, 2016
    Assignee: Apple Inc.
    Inventors: Erik P. Machnicki, Harshavardhan Kaushikkar, Shinye Shiu
  • Publication number: 20160239209
    Abstract: Provided are methods and systems for memory decompression using a hardware decompressor that minimizes or eliminates the involvement of software. Custom decompression hardware is added to the memory subsystem, where the decompression hardware handles read accesses caused by, for example, cache misses or requests from devices to compressed memory blocks, by reading a compressed block, decompressing it into an internal buffer, and returning the requested portion of the block. The custom hardware is designed to determine if the block is compressed, and determine the parameters of compression, by checking unused high bits of the physical address of the access. This allows compression to be implemented without additional metadata, because the necessary metadata can be stored in unused bits in the existing page table structures.
    Type: Application
    Filed: February 12, 2016
    Publication date: August 18, 2016
    Applicant: Google Inc.
    Inventors: Vyacheslav Vladimirovich Malyugin, Luigi Semenzato, Choon Ping Chng, Santhosh Rao, Shinye Shiu
  • Patent number: 9400544
    Abstract: Methods and apparatuses for reducing leakage power in a system cache within a memory controller. The system cache is divided into multiple sections, and each section is supplied with power from one of two supply voltages. When a section is not being accessed, the voltage supplied to the section is reduced to a voltage sufficient for retention of data but not for access. The cache utilizes a maximum allowed active section policy to limit the number of sections that are active at any given time to reduce leakage power. Each section includes a corresponding idle timer and break-even timer. The idle timer keeps track of how long the section has been idle and the break-even timer is used to periodically wake the section up from retention mode to check if there is a pending request that targets the section.
    Type: Grant
    Filed: April 2, 2013
    Date of Patent: July 26, 2016
    Assignee: Apple Inc.
    Inventors: Wolfgang H. Klingauf, Rong Zhang Hu, Sukalpa Biswas, Shinye Shiu
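    The per-section timer scheme above can be sketched as a tiny state machine: the idle timer puts a section into retention voltage after a stretch of inactivity, and the break-even timer periodically wakes it to check for pending requests. The cycle counts and names are invented for illustration.

```python
# Simplified model of one cache section with an idle timer and a
# break-even timer, as described in the abstract.

class CacheSection:
    IDLE_LIMIT = 4    # cycles idle before entering retention
    BREAK_EVEN = 16   # cycles in retention before a wake-up check

    def __init__(self):
        self.retention = False
        self.idle_timer = 0
        self.break_even_timer = 0
        self.pending = []   # requests that target this section

    def tick(self):
        if not self.retention:
            self.idle_timer += 1
            if self.idle_timer >= self.IDLE_LIMIT:
                self.retention = True         # drop to retention voltage
                self.break_even_timer = 0
        else:
            self.break_even_timer += 1
            if self.break_even_timer >= self.BREAK_EVEN and self.pending:
                self.retention = False        # wake to serve the request
                self.idle_timer = 0

    def access(self, req):
        if self.retention:
            self.pending.append(req)          # served after next wake-up
        else:
            self.idle_timer = 0               # section stays active
```

Retention voltage preserves the data but does not permit access, which is why a pending request must wait for the break-even wake-up.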
  • Patent number: 9396122
    Abstract: Methods and systems for cache allocation schemes optimized for browsing applications. A memory controller includes a memory cache for reducing the number of requests that access off-chip memory. When an idle screen use case is detected, the frame buffer is allocated to the memory cache using a sequential allocation mode. Pixels are allocated to indexes of a given way in a sequential fashion, and then each way is accessed in a sequential fashion. When a given way is being accessed, the other ways of the memory cache are put into retention mode to reduce the leakage power.
    Type: Grant
    Filed: April 19, 2013
    Date of Patent: July 19, 2016
    Assignee: Apple Inc.
    Inventors: Sukalpa Biswas, Wolfgang H. Klingauf, Rong Zhang Hu, Shinye Shiu
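    The sequential allocation mode above can be shown with a simple mapping: frame buffer blocks fill way 0 index-by-index, then way 1, and so on, so only one way is active at a time and the others can sit in retention. The way/index counts are invented for illustration.

```python
# Toy model of sequential (way-by-way) allocation for an idle-screen
# frame buffer, keeping all non-active ways in retention mode.

NUM_WAYS = 4
INDEXES_PER_WAY = 8

def sequential_slot(pixel_block):
    """Map the Nth pixel block to (way, index) in sequential order."""
    way = (pixel_block // INDEXES_PER_WAY) % NUM_WAYS
    index = pixel_block % INDEXES_PER_WAY
    return way, index

def ways_in_retention(active_way):
    """All ways except the one being accessed sit in retention mode."""
    return [w for w in range(NUM_WAYS) if w != active_way]
```

Because accesses sweep one way at a time, the leakage savings scale with the number of ways that can be parked in retention.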
  • Publication number: 20160116969
    Abstract: In an embodiment, a system includes a memory controller that includes a memory cache and a display controller configured to control a display. The system may be configured to detect that the images being displayed are essentially static, and may be configured to cause the display controller to request allocation in the memory cache for source frame buffer data. In some embodiments, the system may also alter the power management configuration of the memory cache to prevent the memory cache from shutting down or reducing its effective size during the idle-screen case, so that the frame buffer data may remain cached. During times that the display is dynamically changing, the frame buffer data may not be cached in the memory cache, and the power management configuration may permit the shutting down/size reduction of the memory cache.
    Type: Application
    Filed: December 28, 2015
    Publication date: April 28, 2016
    Inventors: Sukalpa Biswas, Shinye Shiu, Cyril de la Cropte de Chanterac, Manu Gulati, Pulkit Desai, Rong Zhang Hu
  • Patent number: 9311251
    Abstract: Methods and apparatuses for implementing a system cache within a memory controller. Multiple requesting agents may allocate cache lines in the system cache, and each line allocated in the system cache may be associated with a specific group ID. Also, each line may have a corresponding sticky state which indicates if the line should be retained in the cache. The sticky state is determined by an allocation hint provided by the requesting agent. When a cache line is allocated with the sticky state, the line will not be replaced by other cache lines fetched by any other group IDs.
    Type: Grant
    Filed: August 27, 2012
    Date of Patent: April 12, 2016
    Assignee: Apple Inc.
    Inventors: Sukalpa Biswas, Shinye Shiu, James Wang
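    The sticky-allocation scheme above can be modeled in a few lines: each line records a group ID and a sticky flag taken from the requester's allocation hint, and a sticky line is never replaced on behalf of a different group. Names and structure are hypothetical.

```python
# Toy system cache with per-line group IDs and sticky state, as in the
# abstract: sticky lines survive allocations from other group IDs.

class SystemCache:
    def __init__(self, num_lines=4):
        self.lines = [None] * num_lines  # each: (tag, group_id, sticky)

    def allocate(self, tag, group_id, sticky_hint=False):
        # Prefer an empty line.
        for i, line in enumerate(self.lines):
            if line is None:
                self.lines[i] = (tag, group_id, sticky_hint)
                return True
        # Cache full: replace a line, but never a sticky line that
        # belongs to a different group ID.
        for i, (_, owner, sticky) in enumerate(self.lines):
            if not sticky or owner == group_id:
                self.lines[i] = (tag, group_id, sticky_hint)
                return True
        return False  # every line is sticky for another group
```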
  • Publication number: 20160098356
    Abstract: Provided are methods and systems for managing memory using a hardware-based page filter designed to distinguish between active and inactive pages (“hot” and “cold” pages, respectively) so that inactive pages can be compressed prior to the occurrence of a page fault. The methods and systems are designed to achieve, among other things, lower cost, longer battery life, and faster user response. Whereas existing approaches for memory management are based on pixel or frame buffer compression, the methods and systems provided focus on the CPU's program (e.g., generic data structures). Focusing on hardware-accelerated memory compression to offload the CPU translates to higher power efficiency (e.g., an ASIC is approximately 100× lower power than a CPU) and higher performance (e.g., an ASIC is approximately 10× faster than a CPU), and also allows for hardware-assisted memory management to offload the OS/kernel, which significantly improves response time.
    Type: Application
    Filed: October 7, 2015
    Publication date: April 7, 2016
    Applicant: Google Inc.
    Inventor: Shinye Shiu