Patents by Inventor Shinye Shiu

Shinye Shiu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160098356
    Abstract: Provided are methods and systems for managing memory using a hardware-based page filter designed to distinguish between active and inactive pages (“hot” and “cold” pages, respectively) so that inactive pages can be compressed prior to the occurrence of a page fault. The methods and systems are designed to achieve, among other things, lower cost, longer battery life, and faster user response. Whereas existing approaches to memory management are based on pixel or frame buffer compression, the methods and systems provided focus on the CPU's program data (e.g., generic data structures). Focusing on hardware-accelerated memory compression to offload the CPU translates to higher power efficiency (e.g., an ASIC is approximately 100× lower power than a CPU) and higher performance (e.g., an ASIC is approximately 10× faster than a CPU), and also allows for hardware-assisted memory management to offload the OS/kernel, which significantly improves response time.
    Type: Application
    Filed: October 7, 2015
    Publication date: April 7, 2016
    Applicant: GOOGLE INC.
    Inventor: Shinye SHIU
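
A rough software analogue can help picture what the 20160098356 abstract describes; the sketch below uses hypothetical names and a plain C data structure, whereas the patent covers a hardware page filter. Pages not touched during a sweep interval are treated as cold and handed to a compression routine before any page fault occurs.

```c
/* Minimal, illustrative software sketch of a hot/cold page filter
 * (the patent describes a hardware implementation; names here are assumed). */
#include <stdbool.h>
#include <stddef.h>

#define NUM_PAGES 1024

struct page_state {
    bool accessed;     /* set when the filter observes a touch      */
    bool compressed;   /* set once the page has been compressed     */
};

static struct page_state pages[NUM_PAGES];

/* Called on every access the filter observes. */
void page_filter_touch(size_t pfn)
{
    pages[pfn].accessed = true;
    pages[pfn].compressed = false;   /* a touched page stays uncompressed */
}

/* Periodic sweep: anything not touched since the last sweep is cold. */
void page_filter_sweep(void (*compress_page)(size_t pfn))
{
    for (size_t pfn = 0; pfn < NUM_PAGES; pfn++) {
        if (!pages[pfn].accessed && !pages[pfn].compressed) {
            compress_page(pfn);          /* compress before a fault happens */
            pages[pfn].compressed = true;
        }
        pages[pfn].accessed = false;     /* start the next observation window */
    }
}
```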
  • Publication number: 20160098193
    Abstract: A method and apparatus are disclosed to monitor system performance and dynamically update memory subsystem settings using software to optimize system performance and power consumption. In an example embodiment, the apparatus monitors a software application's cache performance and provides the cache performance data to the software application. The software application, which has a higher-level (macro) view of the overall system and better knowledge of its future requests, analyzes the performance data to determine more optimal memory subsystem settings. The software application then provides the system with the improved settings to implement in the memory component, improving memory and overall system performance and efficiency.
    Type: Application
    Filed: October 7, 2015
    Publication date: April 7, 2016
    Applicant: GOOGLE INC.
    Inventor: Shinye SHIU
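
One plausible software-side reading of the 20160098193 abstract is a feedback loop like the one sketched below. The counter source, the "prefetch depth" knob, and the tuning policy are hypothetical stand-ins, not settings named by the patent.

```c
/* Illustrative feedback loop: read cache performance counters, pick a better
 * setting, write it back to the memory subsystem.  All hooks are stubs. */
#include <stdint.h>
#include <stdio.h>

struct cache_counters { uint64_t hits, misses; };

/* Stub: in a real system these would come from hardware counters. */
static struct cache_counters read_cache_counters(void)
{
    return (struct cache_counters){ .hits = 900, .misses = 300 };
}

/* Stub: stands in for writing a memory-subsystem configuration register. */
static void set_cache_prefetch_depth(unsigned depth)
{
    printf("memory subsystem: prefetch depth set to %u\n", depth);
}

void tune_memory_subsystem(unsigned *prefetch_depth)
{
    struct cache_counters c = read_cache_counters();
    uint64_t total = c.hits + c.misses;
    if (total == 0)
        return;

    double hit_rate = (double)c.hits / (double)total;

    /* The application knows its future accesses; this simple stand-in policy
     * prefetches more aggressively while the hit rate is low. */
    if (hit_rate < 0.80 && *prefetch_depth < 8)
        (*prefetch_depth)++;
    else if (hit_rate > 0.95 && *prefetch_depth > 1)
        (*prefetch_depth)--;

    set_cache_prefetch_depth(*prefetch_depth);
}
```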
  • Patent number: 9280471
    Abstract: Systems, processors, and methods for sharing an agent's private cache with other agents within a SoC. Many agents in the SoC have a private cache in addition to the shared caches and memory of the SoC. If an agent's processor is shut down or operating at less than full capacity, the agent's private cache can be shared with other agents. When a requesting agent generates a memory request and the memory request misses in the memory cache, the memory cache can allocate the memory request in a separate agent's cache rather than allocating the memory request in the memory cache.
    Type: Grant
    Filed: November 15, 2013
    Date of Patent: March 8, 2016
    Assignee: Apple Inc.
    Inventors: Manu Gulati, Harshavardhan Kaushikkar, Gurjeet S. Saund, Wei-Han Lien, Gerard R. Williams, III, Sukalpa Biswas, Brian P. Lilly, Shinye Shiu
  • Patent number: 9261939
    Abstract: In an embodiment, a system includes a memory controller that includes a memory cache and a display controller configured to control a display. The system may be configured to detect that the images being displayed are essentially static, and may be configured to cause the display controller to request allocation in the memory cache for source frame buffer data. In some embodiments, the system may also alter power management configuration in the memory cache to prevent the memory cache from shutting down or reducing its effective size during the idle screen case, so that the frame buffer data may remain cached. During times that the display is dynamically changing, the frame buffer data may not be cached in the memory cache and the power management configuration may permit the shutting down/size reduction in the memory cache.
    Type: Grant
    Filed: May 9, 2013
    Date of Patent: February 16, 2016
    Assignee: Apple Inc.
    Inventors: Sukalpa Biswas, Shinye Shiu, Cyril de la Cropte de Chanterac, Manu Gulati, Pulkit Desai, Rong Zhang Hu
  • Patent number: 9218286
    Abstract: Methods and apparatuses for processing partial write requests in a system cache within a memory controller. When a write request that updates a portion of a cache line misses in the system cache, the write request writes the data to the system cache without first reading the corresponding cache line from memory. The system cache includes error correction code bits which are redefined as word mask bits when a cache line is in a partial dirty state. When a read request hits on a partial dirty cache line, the partial data is written to memory using a word mask. Then, the corresponding full cache line is retrieved from memory and stored in the system cache.
    Type: Grant
    Filed: September 27, 2012
    Date of Patent: December 22, 2015
    Assignee: Apple Inc.
    Inventors: Sukalpa Biswas, Shinye Shiu
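
The word-mask idea in the 9218286 abstract can be illustrated with an assumed data layout (the patent reuses the line's ECC bits for the mask; here it is an ordinary field): a partial write that misses is stored without first fetching the line, and only the masked words are later written back to memory.

```c
/* Simplified, illustrative model of a partially dirty cache line with a
 * per-word mask; not the patent's implementation. */
#include <stdint.h>
#include <stdbool.h>

#define WORDS_PER_LINE 16

struct cache_line {
    uint32_t words[WORDS_PER_LINE];
    uint16_t word_mask;     /* bit i set => words[i] holds new data          */
    bool     partial_dirty; /* line was filled by a partial write, no fetch  */
};

/* Partial write that missed: store the data, mark only the written words. */
void partial_write(struct cache_line *line, unsigned first_word,
                   const uint32_t *data, unsigned count)
{
    for (unsigned i = 0; i < count; i++) {
        line->words[first_word + i] = data[i];
        line->word_mask |= (uint16_t)(1u << (first_word + i));
    }
    line->partial_dirty = true;
}

/* Write back only the words named by the mask (memory[] stands in for DRAM). */
void write_back_masked(const struct cache_line *line, uint32_t *memory)
{
    for (unsigned i = 0; i < WORDS_PER_LINE; i++)
        if (line->word_mask & (1u << i))
            memory[i] = line->words[i];
}
```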
  • Patent number: 9218040
    Abstract: Methods and apparatuses for reducing power consumption of a system cache within a memory controller. The system cache includes multiple ways, and individual ways are powered down when cache activity is low. A maximum active way configuration register is set by software and determines the maximum number of ways which are permitted to be active. When searching for a cache line replacement candidate, a linear feedback shift register (LFSR) is used to select from the active ways. This ensures that each active way has an equal chance of getting picked for finding a replacement candidate when one or more of the ways are inactive.
    Type: Grant
    Filed: September 27, 2012
    Date of Patent: December 22, 2015
    Assignee: Apple Inc.
    Inventors: Sukalpa Biswas, Shinye Shiu, Rong Zhang Hu
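
The replacement-candidate selection in the 9218040 abstract can be pictured with a short sketch. The LFSR polynomial and the mapping of its output to a way are generic examples rather than the hardware's actual choices; the point is that only active ways are candidates and each has a roughly equal chance of being picked.

```c
/* Illustrative LFSR-based victim selection restricted to the active ways. */
#include <stdint.h>
#include <stdio.h>

static uint16_t lfsr_next(uint16_t s)
{
    /* 16-bit Fibonacci LFSR, taps 16,14,13,11 (maximal-length sequence). */
    uint16_t bit = ((s >> 0) ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1u;
    return (uint16_t)((s >> 1) | (bit << 15));
}

/* Pick one of the first `active_ways` ways (the maximum-active-way setting);
 * inactive ways are never candidates. */
static unsigned pick_replacement_way(uint16_t *lfsr, unsigned active_ways)
{
    *lfsr = lfsr_next(*lfsr);
    return *lfsr % active_ways;   /* near-uniform over the active ways */
}

int main(void)
{
    uint16_t lfsr = 0xACE1u;      /* any non-zero seed */
    unsigned active_ways = 4;     /* e.g. 4 of 16 ways currently powered */
    for (int i = 0; i < 8; i++)
        printf("victim way: %u\n", pick_replacement_way(&lfsr, active_ways));
    return 0;
}
```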
  • Patent number: 9201796
    Abstract: Methods and apparatuses for processing speculative read requests in a system cache within a memory controller. To expedite a speculative read request, the request is sent on parallel paths through the system cache. A first path goes through a speculative read engine to determine if the speculative read request meets the conditions for accessing memory. A second path involves performing a tag lookup to determine if the data referenced by the request is already in the system cache. If the speculative read request meets the conditions for accessing memory, the request is sent to a miss queue where it is held until a confirm or cancel signal is received from the tag lookup mechanism.
    Type: Grant
    Filed: September 27, 2012
    Date of Patent: December 1, 2015
    Assignee: Apple Inc.
    Inventors: Sukalpa Biswas, Shinye Shiu
  • Patent number: 9158685
    Abstract: Methods and apparatuses for utilizing a cache hint mechanism in which a requesting agent can provide hints as to how data corresponding to a request should be cached in a system cache within a memory controller. The way the system cache responds to received requests is determined by the cache hint provided by the originating requesting agent. When a request is accompanied by a de-allocate cache hint, the system cache causes a cache line hit by the request to be de-allocated. For a request with a do-not-allocate cache hint, the system cache does not allocate a cache line if the request misses in the system cache, and the system cache maintains a given cache line in its current state if the request hits the given cache line.
    Type: Grant
    Filed: September 11, 2012
    Date of Patent: October 13, 2015
    Assignee: Apple Inc.
    Inventors: Sukalpa Biswas, Shinye Shiu, James Wang
  • Patent number: 9135177
    Abstract: Techniques for escalating a real time agent's request that has an address conflict with a best effort agent's request. A best effort request can be allocated in a memory controller cache but can progress slowly in the memory system due to its low priority. Therefore, when a real time request has an address conflict with an older best effort request, the best effort request can be escalated if it is still pending when the real time request is received at the memory controller cache. Escalating the best effort request can include setting the push attribute of the best effort request or sending another request with a push attribute to bypass or push the best effort request.
    Type: Grant
    Filed: February 26, 2013
    Date of Patent: September 15, 2015
    Assignee: Apple Inc.
    Inventors: Sukalpa Biswas, Shinye Shiu
  • Patent number: 9043570
    Abstract: Methods and apparatuses for implementing a system cache with quota-based control. Quotas may be assigned on a group ID basis to each group ID that is assigned to use the system cache. A quota does not reserve space in the system cache; rather, it may be consumed in any way of the system cache. The quota may prevent a given group ID from consuming more than a desired amount of the system cache. Once a group ID's quota has been reached, no additional allocation will be permitted for that group ID. The total amount of allocated quota for all group IDs can exceed the size of the system cache, such that the system cache can be oversubscribed. The sticky state can be used to prioritize data retention within the system cache when oversubscription is being used.
    Type: Grant
    Filed: September 11, 2012
    Date of Patent: May 26, 2015
    Assignee: Apple Inc.
    Inventors: Sukalpa Biswas, Shinye Shiu, James Wang
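
A minimal sketch of the quota idea in the 9043570 abstract, using assumed bookkeeping (simple per-group counters rather than whatever structures the hardware uses): allocation for a group ID is refused once its quota is reached, while the quotas themselves reserve no space and may sum to more than the cache size.

```c
/* Illustrative quota-based allocation control per group ID. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_GROUPS 8

struct group_quota {
    unsigned quota;      /* max cache lines this group ID may hold */
    unsigned allocated;  /* cache lines currently held             */
};

static struct group_quota groups[NUM_GROUPS];

bool may_allocate(unsigned group_id)
{
    return groups[group_id].allocated < groups[group_id].quota;
}

void on_allocate(unsigned group_id) { groups[group_id].allocated++; }
void on_evict(unsigned group_id)    { groups[group_id].allocated--; }

int main(void)
{
    groups[3].quota = 2;                       /* tiny quota for the demo */
    for (int i = 0; i < 3; i++) {
        if (may_allocate(3)) {
            on_allocate(3);
            printf("group 3: line allocated (%u used)\n", groups[3].allocated);
        } else {
            printf("group 3: quota reached, allocation refused\n");
        }
    }
    return 0;
}
```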
  • Publication number: 20150143044
    Abstract: Systems, processors, and methods for sharing an agent's private cache with other agents within a SoC. Many agents in the SoC have a private cache in addition to the shared caches and memory of the SoC. If an agent's processor is shut down or operating at less than full capacity, the agent's private cache can be shared with other agents. When a requesting agent generates a memory request and the memory request misses in the memory cache, the memory cache can allocate the memory request in a separate agent's cache rather than allocating the memory request in the memory cache.
    Type: Application
    Filed: November 15, 2013
    Publication date: May 21, 2015
    Applicant: APPLE INC.
    Inventors: Manu Gulati, Harshavardhan Kaushikkar, Gurjeet S. Saund, Wei-Han Lien, Gerard R. Williams, III, Sukalpa Biswas, Brian P. Lilly, Shinye Shiu
  • Patent number: 8984227
    Abstract: Methods and apparatuses for reducing power consumption of a system cache within a memory controller. The system cache includes multiple ways, and each way is powered independently of the other ways. A target active way count is maintained and the system cache attempts to keep the number of currently active ways equal to the target active way count. The bandwidth and allocation intention of the system cache is monitored. Based on these characteristics, the system cache adjusts the target active way count up or down, which then causes the number of currently active ways to rise or fall in response to the adjustment to the target active way count.
    Type: Grant
    Filed: April 2, 2013
    Date of Patent: March 17, 2015
    Assignee: Apple Inc.
    Inventors: Shinye Shiu, Sukalpa Biswas, Wolfgang H. Klingauf, Rong Zhang Hu
  • Patent number: 8977817
    Abstract: Methods and apparatuses for reducing leakage power in a system cache within a memory controller. The system cache is divided into multiple small sections, and each section is supplied with power from a separately controllable power supply. When a section is not being accessed, the voltage supplied to the section is reduced to a voltage sufficient for retention of data but not for access. Incoming requests are grouped together based on which section of the system cache they target. When enough requests that target a given section have accumulated, the voltage supplied to the given section is increased to a voltage sufficient for access. Then, once the given section has had enough time to ramp up and stabilize at the higher voltage, the waiting requests may access the given section in a burst of operations.
    Type: Grant
    Filed: September 28, 2012
    Date of Patent: March 10, 2015
    Assignee: Apple Inc.
    Inventors: Sukalpa Biswas, Shinye Shiu
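
The request-grouping behavior in the 8977817 abstract can be sketched as follows. The threshold, the per-section queues, and the voltage-control calls are hypothetical, and the ramp-up/stabilization delay is omitted for brevity.

```c
/* Illustrative sketch: queue requests per section while it sits at retention
 * voltage, then power the section up and service the queue as a burst. */
#include <stdio.h>

#define NUM_SECTIONS    4
#define BURST_THRESHOLD 4

static unsigned pending[NUM_SECTIONS];   /* queued requests per section */

static void raise_section_voltage(unsigned s) { printf("section %u: access voltage\n", s); }
static void lower_section_voltage(unsigned s) { printf("section %u: retention voltage\n", s); }

void enqueue_request(unsigned section)
{
    pending[section]++;
    if (pending[section] >= BURST_THRESHOLD) {
        raise_section_voltage(section);   /* in hardware: ramp up, then burst */
        printf("section %u: servicing %u requests\n", section, pending[section]);
        pending[section] = 0;
        lower_section_voltage(section);
    }
}

int main(void)
{
    for (int i = 0; i < 9; i++)
        enqueue_request(i % 2);   /* requests alternate between sections 0 and 1 */
    return 0;
}
```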
  • Patent number: 8912853
    Abstract: A dynamic level shifter circuit and a ring oscillator implemented using the same are disclosed. A dynamic level shifter may include a pull-down circuit and a pull-up circuit. The pull-up circuit may include an extra transistor configured to reduce the current through that circuit when the pull-down circuit is activated. A ring oscillator may be implemented using instances of the dynamic level shifter along with instances of a static level shifter. The ring oscillator may also include a pulse generator configured to initiate oscillation. The ring oscillator implemented with dynamic level shifters may be used in conjunction with another ring oscillator implemented using only static level shifters to compare relative performance levels of the static and dynamic level shifters.
    Type: Grant
    Filed: June 14, 2012
    Date of Patent: December 16, 2014
    Assignee: Apple Inc.
    Inventors: James E. Burnette, Greg M. Hess, Shinye Shiu
  • Publication number: 20140337649
    Abstract: In an embodiment, a system includes a memory controller that includes a memory cache and a display controller configured to control a display. The system may be configured to detect that the images being displayed are essentially static, and may be configured to cause the display controller to request allocation in the memory cache for source frame buffer data. In some embodiments, the system may also alter power management configuration in the memory cache to prevent the memory cache from shutting down or reducing its effective size during the idle screen case, so that the frame buffer data may remain cached. During times that the display is dynamically changing, the frame buffer data may not be cached in the memory cache and the power management configuration may permit the shutting down/size reduction in the memory cache.
    Type: Application
    Filed: May 9, 2013
    Publication date: November 13, 2014
    Applicant: Apple Inc.
    Inventors: Sukalpa Biswas, Shinye Shiu, Cyril de la Cropte de Chanterac, Manu Gulati, Pulkit Desai, Rong Zhang Hu
  • Patent number: 8886886
    Abstract: Methods and apparatuses for releasing the sticky state of cache lines for one or more group IDs. A sticky removal engine walks through the tag memory of a system cache looking for matches with a first group ID which is clearing its cache lines from the system cache. The engine clears the sticky state of each cache line belonging to the first group ID. If the engine receives a release request for a second group ID, the engine records the current index to log its progress through the tag memory. Then, the engine continues its walk through the tag memory looking for matches with either the first or second group ID. The engine wraps around to the start of the tag memory and continues its walk until reaching the recorded index for the second group ID.
    Type: Grant
    Filed: September 28, 2012
    Date of Patent: November 11, 2014
    Assignee: Apple Inc.
    Inventors: Sukalpa Biswas, Shinye Shiu, James Wang, Robert Hu
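
The walk performed by the sticky removal engine in the 8886886 abstract can be modeled in a few lines; the tag-memory layout and the way the second release request is delivered below are assumptions made only for illustration.

```c
/* Illustrative model of the sticky-removal walk: clear the sticky bit for one
 * group ID, fold in a second group's release request mid-walk, then wrap
 * around until the recorded index for the second group is reached again. */
#include <stdbool.h>

#define TAG_ENTRIES 16

struct tag_entry {
    unsigned group_id;
    bool     sticky;
};

static struct tag_entry tags[TAG_ENTRIES];

void sticky_release_walk(unsigned first_group,
                         unsigned second_group,       /* arrives mid-walk  */
                         unsigned second_arrival_idx) /* when it arrives   */
{
    unsigned second_start  = 0;
    bool     second_active = false;

    /* First pass: from index 0 to the end of the tag memory. */
    for (unsigned i = 0; i < TAG_ENTRIES; i++) {
        if (i == second_arrival_idx) {       /* release request for group 2 */
            second_active = true;
            second_start  = i;               /* record where it joined      */
        }
        if (tags[i].group_id == first_group ||
            (second_active && tags[i].group_id == second_group))
            tags[i].sticky = false;
    }

    /* Wrap around and finish the second group's portion of the walk. */
    for (unsigned i = 0; second_active && i < second_start; i++)
        if (tags[i].group_id == second_group)
            tags[i].sticky = false;
}
```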
  • Publication number: 20140317355
    Abstract: Methods and systems for cache allocation schemes optimized for browsing applications. A memory controller includes a memory cache for reducing the number of requests that access off-chip memory. When an idle screen use case is detected, the frame buffer is allocated to the memory cache using a sequential allocation mode. Pixels are allocated to indexes of a given way in a sequential fashion, and then each way is accessed in a sequential fashion. When a given way is being accessed, the other ways of the memory cache are put into retention mode to reduce the leakage power.
    Type: Application
    Filed: April 19, 2013
    Publication date: October 23, 2014
    Applicant: Apple Inc.
    Inventors: Sukalpa Biswas, Wolfgang H. Klingauf, Rong Zhang Hu, Shinye Shiu
  • Publication number: 20140310469
    Abstract: An apparatus for processing coherency transactions in a computing system is disclosed. The apparatus may include a request queue circuit, a duplicate tag circuit, and a memory interface unit. The request queue circuit may be configured to generate a speculative read request dependent upon a received read transaction. The duplicate tag circuit may be configured to store copies of tags from one or more cache memories, and to generate a kill message in response to a determination that data requested in the received read transaction is stored in a cache memory. The memory interface unit may be configured to store the generated speculative read request dependent upon a stall condition. The stored speculative read request may be sent to a memory controller dependent upon the stall condition. The memory interface unit may be further configured to delete the speculative read request in response to the kill message.
    Type: Application
    Filed: April 11, 2013
    Publication date: October 16, 2014
    Applicant: Apple Inc.
    Inventors: Erik P. Machnicki, Harshavardhan Kaushikkar, Shinye Shiu
  • Publication number: 20140297959
    Abstract: Methods and apparatuses for reducing power consumption of a system cache within a memory controller. The system cache includes multiple ways, and each way is powered independently of the other ways. A target active way count is maintained and the system cache attempts to keep the number of currently active ways equal to the target active way count. The bandwidth and allocation intention of the system cache is monitored. Based on these characteristics, the system cache adjusts the target active way count up or down, which then causes the number of currently active ways to rise or fall in response to the adjustment to the target active way count.
    Type: Application
    Filed: April 2, 2013
    Publication date: October 2, 2014
    Applicant: Apple Inc.
    Inventors: Shinye Shiu, Sukalpa Biswas, Wolfgang H. Klingauf, Rong Zhang Hu
  • Publication number: 20140298058
    Abstract: Methods and apparatuses for reducing leakage power in a system cache within a memory controller. The system cache is divided into multiple sections, and each section is supplied with power from one of two supply voltages. When a section is not being accessed, the voltage supplied to the section is reduced to a voltage sufficient for retention of data but not for access. The cache utilizes a maximum allowed active section policy to limit the number of sections that are active at any given time to reduce leakage power. Each section includes a corresponding idle timer and break-even timer. The idle timer keeps track of how long the section has been idle and the break-even timer is used to periodically wake the section up from retention mode to check if there is a pending request that targets the section.
    Type: Application
    Filed: April 2, 2013
    Publication date: October 2, 2014
    Applicant: Apple Inc.
    Inventors: Wolfgang H. Klingauf, Rong Zhang Hu, Sukalpa Biswas, Shinye Shiu
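
To close, the idle-timer/break-even-timer scheme in the 20140298058 abstract can be sketched as a simple per-section state machine; the timer limits and field names are assumptions, not values from the patent.

```c
/* Illustrative per-section timers: an idle timer that drops a section to
 * retention voltage, and a break-even timer that periodically wakes a
 * retained section to check whether a request is waiting for it. */
#include <stdbool.h>

#define IDLE_LIMIT       100   /* cycles idle before entering retention   */
#define BREAK_EVEN_LIMIT 500   /* cycles in retention before a wake check */

struct cache_section {
    bool     retained;      /* true = retention voltage, not accessible */
    unsigned idle_timer;
    unsigned break_even_timer;
};

/* Advance one cycle for one section. */
void section_tick(struct cache_section *s, bool accessed_this_cycle,
                  bool request_pending)
{
    if (!s->retained) {
        s->idle_timer = accessed_this_cycle ? 0 : s->idle_timer + 1;
        if (s->idle_timer >= IDLE_LIMIT) {
            s->retained = true;                 /* drop to retention voltage */
            s->break_even_timer = 0;
        }
    } else if (++s->break_even_timer >= BREAK_EVEN_LIMIT) {
        s->break_even_timer = 0;
        if (request_pending) {                  /* wake only if work arrived */
            s->retained = false;
            s->idle_timer = 0;
        }
    }
}
```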