Patents by Inventor Sukalpa Biswas

Sukalpa Biswas has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8856459
    Abstract: A method and apparatus for utilizing a matrix to store numerical comparisons is disclosed. In one embodiment, an apparatus includes an array in which results of comparisons are stored. The comparisons are performed between numbers associated with agents (or functional units) that have access to a shared resource. Each number may be a value indicating a priority for its corresponding agent. The comparison results stored in the array may be generated based on comparisons between two different numbers associated with two different agents, and may indicate the priority of each relative to the other. When two different agents concurrently assert requests for access to the shared resource, a control circuit may access the array to determine which of the two has the higher priority. The agent having the higher priority may then be granted access to the shared resource.
    Type: Grant
    Filed: December 7, 2011
    Date of Patent: October 7, 2014
    Assignee: Apple Inc.
    Inventors: Sukalpa Biswas, Hao Chen
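    A minimal software sketch of the comparison-matrix idea described in the abstract above; the class and method names, the number of agents, and the "larger value wins" rule are illustrative assumptions, not taken from the patent.

      # Hypothetical model: matrix[i][j] is True when agent i out-prioritizes agent j,
      # precomputed so arbitration needs no numeric comparison at request time.
      class PriorityMatrix:
          def __init__(self, priorities):
              n = len(priorities)
              self.matrix = [[priorities[i] > priorities[j] for j in range(n)]
                             for i in range(n)]

          def arbitrate(self, requesters):
              # Grant the requesting agent whose stored results beat every other requester.
              for i in requesters:
                  if all(self.matrix[i][j] for j in requesters if j != i):
                      return i
              return requesters[0]   # fallback for equal priorities (assumption)

      arb = PriorityMatrix([3, 7, 5])     # agents 0..2 with example priority values
      print(arb.arbitrate([0, 1]))        # -> 1
      print(arb.arbitrate([0, 2]))        # -> 2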
  • Publication number: 20140298058
    Abstract: Methods and apparatuses for reducing leakage power in a system cache within a memory controller. The system cache is divided into multiple sections, and each section is supplied with power from one of two supply voltages. When a section is not being accessed, the voltage supplied to the section is reduced to a voltage sufficient for retention of data but not for access. The cache utilizes a maximum allowed active section policy to limit the number of sections that are active at any given time to reduce leakage power. Each section includes a corresponding idle timer and break-even timer. The idle timer keeps track of how long the section has been idle and the break-even timer is used to periodically wake the section up from retention mode to check if there is a pending request that targets the section.
    Type: Application
    Filed: April 2, 2013
    Publication date: October 2, 2014
    Applicant: Apple Inc.
    Inventors: Wolfgang H. Klingauf, Rong Zhang Hu, Sukalpa Biswas, Shinye Shiu
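    A rough behavioral sketch of one section's power-state machine as the abstract describes it; the class name, the tick interface, and the threshold values are invented, and the maximum-allowed-active-sections policy is not modeled.

      # Hypothetical model of one cache section with an idle timer and a break-even timer.
      IDLE_LIMIT = 100      # cycles of inactivity before dropping to retention (assumed value)
      BREAK_EVEN = 1000     # cycles in retention before waking up to check for work (assumed)

      class CacheSection:
          def __init__(self):
              self.state = "active"
              self.idle_timer = 0
              self.break_even_timer = 0

          def tick(self, accessed, pending_request):
              if self.state == "active":
                  self.idle_timer = 0 if accessed else self.idle_timer + 1
                  if self.idle_timer >= IDLE_LIMIT:
                      self.state, self.break_even_timer = "retention", 0
              else:
                  # Retention: data is kept at the lower supply voltage but cannot be accessed.
                  self.break_even_timer += 1
                  if self.break_even_timer >= BREAK_EVEN:
                      self.break_even_timer = 0
                      if pending_request:       # periodic wake-up found work for this section
                          self.state, self.idle_timer = "active", 0
              return self.state

      section = CacheSection()
      for _ in range(IDLE_LIMIT):
          section.tick(accessed=False, pending_request=False)
      print(section.state)    # -> 'retention'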
  • Publication number: 20140297959
    Abstract: Methods and apparatuses for reducing power consumption of a system cache within a memory controller. The system cache includes multiple ways, and each way is powered independently of the other ways. A target active way count is maintained and the system cache attempts to keep the number of currently active ways equal to the target active way count. The bandwidth and allocation intention of the system cache is monitored. Based on these characteristics, the system cache adjusts the target active way count up or down, which then causes the number of currently active ways to rise or fall in response to the adjustment to the target active way count.
    Type: Application
    Filed: April 2, 2013
    Publication date: October 2, 2014
    Applicant: Apple Inc.
    Inventors: Shinye Shiu, Sukalpa Biswas, Wolfgang H. Klingauf, Rong Zhang Hu
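    A hypothetical sketch of the target-active-way adjustment loop; the threshold fractions, the "allocating" flag, and the function name are assumptions used only to illustrate the feedback described in the abstract above.

      # Hypothetical policy: raise or lower the target active way count based on measured
      # bandwidth and whether the cache currently intends to allocate new lines.
      def adjust_target(target, total_ways, bandwidth, allocating):
          HIGH_BW, LOW_BW = 0.75, 0.25       # fractions of peak bandwidth (assumed thresholds)
          if allocating and bandwidth > HIGH_BW:
              return min(total_ways, target + 1)   # demand is high: allow one more active way
          if not allocating or bandwidth < LOW_BW:
              return max(1, target - 1)            # demand is low: let one way power down
          return target

      target = 8
      for bw, alloc in [(0.9, True), (0.9, True), (0.1, False)]:
          target = adjust_target(target, total_ways=16, bandwidth=bw, allocating=alloc)
      print(target)    # -> 9 (two increments while busy, one decrement when idle)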
  • Patent number: 8848577
    Abstract: In some embodiments, a system includes a shared, high bandwidth resource (e.g. a memory system), multiple agents configured to communicate with the shared resource, and a communication fabric coupling the multiple agents to the shared resource. The communication fabric may be equipped with limiters configured to limit bandwidth from the various agents based on one or more performance metrics measured with respect to the shared, high bandwidth resource. For example, the performance metrics may include one or more of latency, number of outstanding transactions, resource utilization, etc. The limiters may dynamically modify their limit configurations based on the performance metrics. In an embodiment, the system may include multiple thresholds for the performance metrics, and exceeding a given threshold may trigger modification of the limiters in the communication fabric. Hysteresis may also be implemented in some embodiments to reduce the frequency of transitions between limiter configurations.
    Type: Grant
    Filed: September 24, 2012
    Date of Patent: September 30, 2014
    Assignee: Apple Inc.
    Inventors: Gurjeet S. Saund, James B. Keller, Manu Gulati, Sukalpa Biswas
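    A small sketch of a latency-driven limiter with hysteresis, in the spirit of the abstract above; the band boundaries, outstanding-transaction limits, and hysteresis margin are invented numbers.

      # Hypothetical limiter: higher measured latency at the shared memory system tightens the
      # allowed number of outstanding requests; hysteresis avoids rapid config transitions.
      BANDS = [(200, 4), (100, 8), (0, 16)]   # (latency floor, allowed outstanding requests)
      HYSTERESIS = 20                         # extra margin required before relaxing a limit

      def select_limit(latency, current_limit):
          new_limit = next(limit for floor, limit in BANDS if latency >= floor)
          if new_limit > current_limit:
              # Relaxing the limit: require latency to clear the tighter band's boundary by
              # HYSTERESIS so the system does not oscillate between configurations.
              old_floor = next(floor for floor, limit in BANDS if limit == current_limit)
              if latency > old_floor - HYSTERESIS:
                  return current_limit
          return new_limit

      print(select_limit(250, 16))   # -> 4  (high latency tightens immediately)
      print(select_limit(190, 4))    # -> 4  (still inside the hysteresis band)
      print(select_limit(150, 4))    # -> 8  (latency clearly lower, relax one step)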
  • Publication number: 20140244920
    Abstract: Techniques for escalating a real time agent's request that has an address conflict with a best effort agent's request. A best effort request can be allocated in a memory controller cache but can progress slowly in the memory system due to its low priority. Therefore, when a real time request has an address conflict with an older best effort request, the best effort request can be escalated if it is still pending when the real time request is received at the memory controller cache. Escalating the best effort request can include setting the push attribute of the best effort request or sending another request with a push attribute to bypass or push the best effort request.
    Type: Application
    Filed: February 26, 2013
    Publication date: August 28, 2014
    Applicant: APPLE INC.
    Inventors: Sukalpa Biswas, Shinye Shiu
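    A toy sketch of the escalation described above; the Request fields, the enqueue helper, and the example addresses are invented, and only the "set the push attribute on an address-conflicting best-effort request" step is modeled.

      class Request:
          def __init__(self, addr, real_time=False):
              self.addr, self.real_time, self.push = addr, real_time, False

      def enqueue(pending, req):
          if req.real_time:
              for older in pending:
                  # A real-time request conflicts with a still-pending best-effort request:
                  # escalate the older request by setting its push attribute.
                  if older.addr == req.addr and not older.real_time:
                      older.push = True
          pending.append(req)

      pending = []
      enqueue(pending, Request(0x1000))                   # older best-effort request
      enqueue(pending, Request(0x1000, real_time=True))   # real-time request, same address
      print(pending[0].push)                              # -> True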
  • Publication number: 20140237276
    Abstract: Various method and apparatus embodiments for selecting tunable operating parameters in an integrated circuit (IC) are disclosed. In one embodiment, an IC includes a number of functional blocks, each having a local management circuit. The IC also includes a global management unit coupled to each of the functional blocks having a local management circuit. The management unit is configured to determine the operational state of the IC based on the respective operating states of the functional blocks. Responsive to determining the operational state of the IC, the management unit may indicate that state to the local management circuit of each of the functional blocks. The local management circuit for each of the functional blocks may select one or more tunable parameters based on the operational state determined by the management unit.
    Type: Application
    Filed: February 15, 2013
    Publication date: August 21, 2014
    Applicant: APPLE INC.
    Inventors: Erik P. Machnicki, Brian P. Lilly, Gurjeet S. Saund, Sukalpa Biswas
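    A hypothetical sketch of the split between a global management unit and per-block parameter selection; the block names, state names, and parameter tables are all invented for illustration.

      # The global manager derives a chip-wide operational state from per-block states;
      # each block's local management circuit then picks its own tunables for that state.
      BLOCK_PARAMS = {
          "cpu": {"low": {"prefetch": 0}, "high": {"prefetch": 4}},
          "gpu": {"low": {"fifo_depth": 8}, "high": {"fifo_depth": 32}},
      }

      def chip_state(block_states):
          return "high" if "busy" in block_states.values() else "low"

      def select_params(block_states):
          state = chip_state(block_states)
          return {block: table[state] for block, table in BLOCK_PARAMS.items()}

      print(select_params({"cpu": "busy", "gpu": "idle"}))
      # -> {'cpu': {'prefetch': 4}, 'gpu': {'fifo_depth': 32}}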
  • Patent number: 8786332
    Abstract: A clock divider may provide a lower speed clock to a logic block portion, but during reset, the clock divider may not operate properly, causing the logic block portion to be reset at a clock frequency greater than the frequency for which that logic was designed. However, an extended reset may be employed in which the clock divider is reset normally first before the logic block portion, allowing that logic to be reset according to the divided clock (e.g., rather than a higher speed clock). An asynchronous reset may also be employed in which one or more clock dividers first emerge from reset before being provided with a (synchronized) high speed clock signal, causing the clock dividers to be in phase with each other. This may enable communication between different areas of an IC that might not otherwise be in proper phase with each other.
    Type: Grant
    Filed: January 17, 2013
    Date of Patent: July 22, 2014
    Assignee: Apple Inc.
    Inventors: Erik P. Machnicki, David S. Warren, Shane J. Keil, Sukalpa Biswas
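    A very rough cycle-level sketch of the extended-reset ordering (divider released from reset first, logic reset released only on a divided-clock edge); the divide ratio and cycle counts are invented, and real hardware timing is only approximated.

      DIV = 4                          # divide-by-4 clock divider (assumed ratio)
      div_count, logic_in_reset = 0, True
      release_cycle = None

      for cycle in range(24):
          if cycle < 4:                # step 1: hold only the divider in reset, then release it
              div_count = 0
              continue
          div_count = (div_count + 1) % DIV
          divided_edge = (div_count == 0)        # one divided-clock edge every DIV fast cycles
          if logic_in_reset and cycle >= 12 and divided_edge:
              logic_in_reset = False   # step 2: release the logic block's reset on a slow edge
              release_cycle = cycle

      print(release_cycle)   # the logic leaves reset aligned to a divided-clock edge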
  • Publication number: 20140197870
    Abstract: A clock divider may provide a lower speed clock to a logic block portion, but during reset, the clock divider may not operate properly, causing the logic block portion to be reset at a clock frequency greater than the frequency for which that logic was designed. However, an extended reset may be employed in which the clock divider is reset normally first before the logic block portion, allowing that logic to be reset according to the divided clock (e.g., rather than a higher speed clock). An asynchronous reset may also be employed in which one or more clock dividers first emerge from reset before being provided with a (synchronized) high speed clock signal, causing the clock dividers to be in phase with each other. This may enable communication between different areas of an IC that might not otherwise be in proper phase with each other.
    Type: Application
    Filed: January 17, 2013
    Publication date: July 17, 2014
    Applicant: APPLE INC.
    Inventors: Erik P. Machnicki, David S. Warren, Shane J. Keil, Sukalpa Biswas
  • Patent number: 8762653
    Abstract: In an embodiment, a memory controller includes multiple ports. Each port may be dedicated to a different type of traffic. In an embodiment, quality of service (QoS) parameters may be defined for the traffic types, and different traffic types may have different QoS parameter definitions. The memory controller may be configured to schedule operations received on the different ports based on the QoS parameters. In an embodiment, the memory controller may support upgrade of the QoS parameters when subsequent operations are received that have higher QoS parameters, via sideband request, and/or via aging of operations. In an embodiment, the memory controller is configured to reduce emphasis on QoS parameters and increase emphasis on memory bandwidth optimization as operations flow through the memory controller pipeline.
    Type: Grant
    Filed: October 24, 2013
    Date of Patent: June 24, 2014
    Assignee: Apple Inc.
    Inventors: Sukalpa Biswas, Hao Chen, Ruchi Wadhawan
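    A hypothetical sketch of QoS-aware scheduling with aging-based upgrade; the QoS level names, the aging threshold, and the tuple representation of operations are assumptions for illustration only.

      QOS_RANK = {"real_time": 0, "low_latency": 1, "best_effort": 2}   # lower rank wins
      AGE_LIMIT = 64     # cycles after which an operation's effective QoS is upgraded (assumed)

      def schedule(ops, now):
          def effective_rank(op):
              qos, arrival = op
              rank = QOS_RANK[qos]
              if now - arrival > AGE_LIMIT:
                  rank = max(0, rank - 1)    # aging upgrades the operation by one level
              return rank
          return min(ops, key=effective_rank)

      ops = [("best_effort", 0), ("best_effort", 90)]
      print(schedule(ops, now=100))   # -> ('best_effort', 0): old enough to be upgraded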
  • Patent number: 8706925
    Abstract: A memory controller, system, and method for accelerating blocking memory operations. A memory controller reorders memory operations so as to maximize efficient use of the memory device bus. When data for a newer memory operation is retrieved from memory and ready to be returned to a source device, the newer memory operation can be held up waiting for an older memory operation to be completed. In response, the memory controller forwards a push request for the older memory operation to a memory channel unit. The memory channel unit then sets a push bit of the older memory operation, which expedites the scheduling of the older memory operation.
    Type: Grant
    Filed: August 30, 2011
    Date of Patent: April 22, 2014
    Assignee: Apple Inc.
    Inventors: Sukalpa Biswas, Hao Chen
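    A small sketch of the push-bit effect on channel scheduling, in the spirit of the abstract above; the Op class, the queue contents, and the scheduling rule are invented, and real DRAM scheduling constraints are ignored.

      from collections import deque

      class Op:
          def __init__(self, tag):
              self.tag, self.push = tag, False

      def pick_next(queue):
          # Operations with the push bit set are scheduled ahead of everything else.
          pushed = [op for op in queue if op.push]
          return (pushed or list(queue))[0]

      channel = deque([Op("newer1"), Op("newer2"), Op("older")])
      # A newer operation's data is ready to return but is blocked behind "older",
      # so the memory controller forwards a push request and the push bit is set.
      channel[2].push = True
      print(pick_next(channel).tag)   # -> 'older'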
  • Publication number: 20140095777
    Abstract: Methods and apparatuses for reducing leakage power in a system cache within a memory controller. The system cache is divided into multiple small sections, and each section is supplied with power from a separately controllable power supply. When a section is not being accessed, the voltage supplied to the section is reduced to a voltage sufficient for retention of data but not for access. Incoming requests are grouped together based on which section of the system cache they target. When enough requests that target a given section have accumulated, the voltage supplied to the given section is increased to a voltage sufficient for access. Then, once the given section has had enough time to ramp up and stabilize at the higher voltage, the waiting requests may access the given section in a burst of operations.
    Type: Application
    Filed: September 28, 2012
    Publication date: April 3, 2014
    Applicant: APPLE INC.
    Inventors: Sukalpa Biswas, Shinye Shiu
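    A hypothetical sketch of per-section request batching with a ramp-up delay; the batch size, ramp time, and the submit interface are invented values used only to illustrate the accumulate-then-burst behavior.

      from collections import defaultdict

      BATCH = 4          # requests required before waking a powered-down section (assumed)
      RAMP_CYCLES = 10   # cycles for the section's supply voltage to stabilize (assumed)

      pending = defaultdict(list)

      def submit(section, req, now):
          pending[section].append(req)
          if len(pending[section]) >= BATCH:
              ready_at = now + RAMP_CYCLES     # raise the section's voltage, wait for ramp-up
              burst = pending.pop(section)     # then service the accumulated requests together
              return ready_at, burst
          return None

      for i in range(4):
          result = submit(section=2, req=f"r{i}", now=100 + i)
      print(result)   # -> (113, ['r0', 'r1', 'r2', 'r3'])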
  • Publication number: 20140095800
    Abstract: Methods and apparatuses for releasing the sticky state of cache lines for one or more group IDs. A sticky removal engine walks through the tag memory of a system cache looking for matches with a first group ID which is clearing its cache lines from the system cache. The engine clears the sticky state of each cache line belonging to the first group ID. If the engine receives a release request for a second group ID, the engine records the current index to log its progress through the tag memory. Then, the engine continues its walk through the tag memory looking for matches with either the first or second group ID. The engine wraps around to the start of the tag memory and continues its walk until reaching the recorded index for the second group ID.
    Type: Application
    Filed: September 28, 2012
    Publication date: April 3, 2014
    Applicant: APPLE INC.
    Inventors: Sukalpa Biswas, Shinye Shiu, James Wang, Robert Hu
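    A hypothetical sketch of the sticky-removal walk with a second release arriving mid-walk; the tag-entry layout and the step-counting interface are invented, but the wrap-around behavior follows the abstract.

      def sticky_walk(tags, first_gid, second_gid=None, second_arrives_at=None):
          n = len(tags)
          i, remaining = 0, n            # entries still to visit for the active release(s)
          clearing = {first_gid}
          step = 0
          while remaining > 0:
              if step == second_arrives_at:
                  clearing.add(second_gid)  # engine records the current index and keeps going
                  remaining = n             # the second group needs one full lap from here
              if tags[i]["group"] in clearing:
                  tags[i]["sticky"] = False
              i = (i + 1) % n               # wrap around at the end of the tag memory
              remaining -= 1
              step += 1
          return tags

      tags = [{"group": g, "sticky": True} for g in [1, 2, 1, 2, 3, 1]]
      sticky_walk(tags, first_gid=1, second_gid=2, second_arrives_at=3)
      print([t["sticky"] for t in tags])   # sticky cleared for groups 1 and 2, kept for group 3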
  • Publication number: 20140089602
    Abstract: Methods and apparatuses for processing partial write requests in a system cache within a memory controller. When a write request that updates a portion of a cache line misses in the system cache, the write request writes the data to the system cache without first reading the corresponding cache line from memory. The system cache includes error correction code bits which are redefined as word mask bits when a cache line is in a partial dirty state. When a read request hits on a partial dirty cache line, the partial data is written to memory using a word mask. Then, the corresponding full cache line is retrieved from memory and stored in the system cache.
    Type: Application
    Filed: September 27, 2012
    Publication date: March 27, 2014
    Applicant: Apple Inc.
    Inventors: Sukalpa Biswas, Shinye Shiu
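    A hypothetical sketch of the partial-dirty flow; the 8-words-per-line layout, the dictionary-based cache, and the function names are assumptions, and ECC itself is not modeled (only the reuse of those bits as a word mask).

      WORDS_PER_LINE = 8

      def partial_write(cache, addr, word_index, value):
          # A partial write that misses allocates the line without reading memory first.
          line = cache.setdefault(addr, {"data": [None] * WORDS_PER_LINE, "mask": 0,
                                         "state": "partial_dirty"})
          line["data"][word_index] = value
          line["mask"] |= 1 << word_index     # word mask kept in the repurposed ECC bits

      def read(cache, addr, memory):
          line = cache.get(addr)
          if line and line["state"] == "partial_dirty":
              # Write back only the masked words, then fetch the full line from memory.
              for w in range(WORDS_PER_LINE):
                  if line["mask"] & (1 << w):
                      memory[addr][w] = line["data"][w]
              line["data"], line["state"] = list(memory[addr]), "clean"
          return line["data"] if line else list(memory[addr])

      memory = {0x40: list(range(8))}
      cache = {}
      partial_write(cache, 0x40, word_index=2, value=99)
      print(read(cache, 0x40, memory))   # -> [0, 1, 99, 3, 4, 5, 6, 7]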
  • Publication number: 20140089590
    Abstract: Methods and apparatuses for reducing power consumption of a system cache within a memory controller. The system cache includes multiple ways, and individual ways are powered down when cache activity is low. A maximum active way configuration register is set by software and determines the maximum number of ways which are permitted to be active. When searching for a cache line replacement candidate, a linear feedback shift register (LFSR) is used to select from the active ways. This ensures that each active way has an equal chance of getting picked for finding a replacement candidate when one or more of the ways are inactive.
    Type: Application
    Filed: September 27, 2012
    Publication date: March 27, 2014
    Applicant: APPLE INC.
    Inventors: Sukalpa Biswas, Shinye Shiu, Rong Zhang Hu
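    A hypothetical sketch of LFSR-based victim selection restricted to active ways; the 16-bit LFSR polynomial, the seed, and the example active-way set are assumptions (the modulo mapping is only approximately uniform).

      def lfsr16(state):
          # 16-bit Fibonacci LFSR with taps at bits 16, 14, 13 and 11 (maximal length).
          bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
          return (state >> 1) | (bit << 15)

      def pick_victim(active_ways, state):
          state = lfsr16(state)
          # Map the pseudo-random value onto the currently active ways only, so powered-down
          # ways are never chosen and each active way has roughly the same chance of selection.
          return active_ways[state % len(active_ways)], state

      state = 0xACE1                      # non-zero seed (assumed)
      active = [0, 1, 2, 3, 8, 9]         # ways 4-7 and 10-15 are powered down in this example
      picks = []
      for _ in range(5):
          way, state = pick_victim(active, state)
          picks.append(way)
      print(picks)                        # five victim candidates drawn from the active ways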
  • Publication number: 20140086070
    Abstract: In some embodiments, a system includes a shared, high bandwidth resource (e.g. a memory system), multiple agents configured to communicate with the shared resource, and a communication fabric coupling the multiple agents to the shared resource. The communication fabric may be equipped with limiters configured to limit bandwidth from the various agents based on one or more performance metrics measured with respect to the shared, high bandwidth resource. For example, the performance metrics may include one or more of latency, number of outstanding transactions, resource utilization, etc. The limiters may dynamically modify their limit configurations based on the performance metrics. In an embodiment, the system may include multiple thresholds for the performance metrics, and exceeding a given threshold may trigger modification of the limiters in the communication fabric. Hysteresis may also be implemented in some embodiments to reduce the frequency of transitions between limiter configurations.
    Type: Application
    Filed: September 24, 2012
    Publication date: March 27, 2014
    Inventors: Gurjeet S. Saund, James B. Keller, Manu Gulati, Sukalpa Biswas
  • Publication number: 20140089592
    Abstract: Methods and apparatuses for processing speculative read requests in a system cache within a memory controller. To expedite a speculative read request, the request is sent on parallel paths through the system cache. A first path goes through a speculative read engine to determine if the speculative read request meets the conditions for accessing memory. A second path involves performing a tag lookup to determine if the data referenced by the request is already in the system cache. If the speculative read request meets the conditions for accessing memory, the request is sent to a miss queue where it is held until a confirm or cancel signal is received from the tag lookup mechanism.
    Type: Application
    Filed: September 27, 2012
    Publication date: March 27, 2014
    Applicant: APPLE INC.
    Inventors: Sukalpa Biswas, Shinye Shiu
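    A hypothetical, sequential sketch of the two parallel paths; in hardware they run concurrently, and the function names, the conditions_ok hook, and the miss-queue entries are invented.

      def speculative_read(req, tag_array, miss_queue, conditions_ok):
          if conditions_ok(req):
              miss_queue.append({"req": req, "status": "waiting"})   # path 1: toward memory

          hit = req["addr"] in tag_array                             # path 2: tag lookup
          for entry in miss_queue:
              if entry["req"] is req:
                  # The tag lookup confirms (on a miss) or cancels (on a hit) the queued request.
                  entry["status"] = "cancelled" if hit else "confirmed"
          return "cache" if hit else "memory"

      miss_queue = []
      req = {"addr": 0x80}
      print(speculative_read(req, tag_array={0x40}, miss_queue=miss_queue,
                             conditions_ok=lambda r: True))          # -> 'memory'
      print(miss_queue[0]["status"])                                 # -> 'confirmed'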
  • Publication number: 20140089600
    Abstract: Methods and apparatuses for utilizing a data pending state for cache misses in a system cache. To reduce the size of a miss queue that is searched by subsequent misses, a cache line storage location is allocated in the system cache for a miss and the state of the cache line storage location is set to data pending. A subsequent request that hits to the cache line storage location will detect the data pending state and as a result, the subsequent request will be sent to a replay buffer. When the fill for the original miss comes back from external memory, the state of the cache line storage location is updated to a clean state. Then, the request stored in the replay buffer is reactivated and allowed to complete its access to the cache line storage location.
    Type: Application
    Filed: September 27, 2012
    Publication date: March 27, 2014
    Applicant: APPLE INC.
    Inventors: Sukalpa Biswas, Shinye Shiu, James B. Keller
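    A small sketch of the data-pending state and replay buffer; the dictionary cache, function names, and return values are invented for illustration.

      def access(cache, replay_buffer, addr):
          line = cache.get(addr)
          if line is None:
              cache[addr] = {"state": "data_pending", "data": None}   # allocate for the miss
              return "sent_to_memory"
          if line["state"] == "data_pending":
              replay_buffer.append(addr)          # hit on a pending line: park the request
              return "replay_later"
          return line["data"]

      def fill(cache, replay_buffer, addr, data):
          cache[addr] = {"state": "clean", "data": data}
          # Reactivate parked requests that were waiting for this line.
          return [access(cache, replay_buffer, a) for a in replay_buffer if a == addr]

      cache, replay = {}, []
      print(access(cache, replay, 0x100))         # -> 'sent_to_memory'
      print(access(cache, replay, 0x100))         # -> 'replay_later'
      print(fill(cache, replay, 0x100, "DATA"))   # -> ['DATA']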
  • Publication number: 20140075118
    Abstract: Methods and apparatuses for implementing a system cache with quota-based control. Quotas may be assigned on a group ID basis to each group ID that is assigned to use the system cache. The quota does not reserve space in the system cache, but rather the quota may be used in any way of the system cache. The quota may prevent a given group ID from consuming more than a desired amount of the system cache. Once a group ID's quota has been reached, no additional allocation will be permitted for that group ID. The total amount of allocated quota for all group IDs can exceed the size of the system cache, such that the system cache can be oversubscribed. The sticky state can be used to prioritize data retention within the system cache when oversubscription is being used.
    Type: Application
    Filed: September 11, 2012
    Publication date: March 13, 2014
    Inventors: Sukalpa Biswas, Shinye Shiu, James Wang
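    A minimal sketch of quota checking at allocation time; the line counts and group names are invented, and only the over-quota check (not way selection or sticky interaction) is modeled.

      CACHE_LINES = 1024                      # total system cache capacity in lines (assumed)
      quota = {"gpu": 768, "video": 512}      # 1280 > 1024: the cache is oversubscribed
      allocated = {"gpu": 0, "video": 0}

      def allocate(group_id):
          # Allocation is allowed only while the group is under its assigned quota;
          # an over-quota request is simply not allocated in the system cache.
          if allocated[group_id] < quota[group_id]:
              allocated[group_id] += 1
              return True
          return False

      for _ in range(800):
          allocate("gpu")
      print(allocated["gpu"], allocate("gpu"))   # -> 768 False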
  • Publication number: 20140075125
    Abstract: Methods and apparatuses for utilizing a cache hint mechanism in which a requesting agent can provide hints as to how data corresponding to a request should be cached in a system cache within a memory controller. The way the system cache responds to received requests is determined by the cache hint provided by the originating requesting agent. When a request is accompanied by a de-allocate cache hint, the system cache causes a cache line hit by the request to be de-allocated. For a request with a do not allocate cache hint, the system cache does not allocate a cache line if the request misses in the system cache, and the system cache maintains a given cache line in its current state if the request hits the given cache line.
    Type: Application
    Filed: September 11, 2012
    Publication date: March 13, 2014
    Inventors: Sukalpa Biswas, Shinye Shiu, James Wang
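    A hypothetical sketch of hint handling; the hint strings mirror the abstract, while the dictionary cache and return values are invented.

      def handle_request(cache, addr, hint):
          hit = addr in cache
          if hint == "deallocate":
              if hit:
                  cache.pop(addr)                 # a hit with this hint releases the line
              return "hit_deallocated" if hit else "miss_not_allocated"
          if hint == "do_not_allocate":
              # Misses do not allocate; hits leave the line in its current state.
              return "hit" if hit else "miss_not_allocated"
          if not hit:
              cache[addr] = "allocated"           # default behaviour for other hints
          return "hit" if hit else "allocated"

      cache = {0x10: "clean"}
      print(handle_request(cache, 0x10, "deallocate"))      # -> 'hit_deallocated'
      print(handle_request(cache, 0x20, "do_not_allocate")) # -> 'miss_not_allocated'
      print(0x10 in cache, 0x20 in cache)                   # -> False False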
  • Publication number: 20140059297
    Abstract: Methods and apparatuses for implementing a system cache within a memory controller. Multiple requesting agents may allocate cache lines in the system cache, and each line allocated in the system cache may be associated with a specific group ID. Also, each line may have a corresponding sticky state which indicates if the line should be retained in the cache. The sticky state is determined by an allocation hint provided by the requesting agent. When a cache line is allocated with the sticky state, the line will not be replaced by other cache lines fetched by any other group IDs.
    Type: Application
    Filed: August 27, 2012
    Publication date: February 27, 2014
    Inventors: Sukalpa Biswas, Shinye Shiu, James Wang
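    A minimal sketch of sticky-aware victim selection; the allocate interface, the random victim choice, and the two-line capacity are assumptions used to show that sticky lines of one group ID are not replaced on behalf of another.

      import random

      def allocate(cache, capacity, addr, group_id, sticky_hint):
          if len(cache) >= capacity:
              # Victims are drawn only from lines that are non-sticky or owned by this group.
              victims = [a for a, line in cache.items()
                         if not line["sticky"] or line["group"] == group_id]
              if not victims:
                  return False                    # nothing replaceable for this group ID
              cache.pop(random.choice(victims))
          cache[addr] = {"group": group_id, "sticky": sticky_hint}
          return True

      cache = {}
      allocate(cache, 2, 0xA, group_id=1, sticky_hint=True)
      allocate(cache, 2, 0xB, group_id=1, sticky_hint=True)
      print(allocate(cache, 2, 0xC, group_id=2, sticky_hint=False))  # -> False (all lines sticky to group 1)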