Patents by Inventor Krishna C. Potnuru
Krishna C. Potnuru has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11893241
Abstract: A variable latency cache memory is disclosed. A cache subsystem includes a pipeline control circuit configured to initiate cache memory accesses for data. The cache subsystem further includes a cache memory circuit having a data array arranged into a plurality of groups, wherein different ones of the plurality of groups have different minimum access latencies due to different distances from the pipeline control circuit. A plurality of latency control circuits is configured to ensure a latency is bounded to a maximum value for a given access to the data array, wherein a given latency control circuit is associated with a corresponding group of the plurality of groups. The latency for a given access may thus vary between a minimum access latency for a group closest to the pipeline control circuit and a maximum latency for an access to the group furthest from the pipeline control circuit.
Type: Grant
Filed: August 31, 2022
Date of Patent: February 6, 2024
Assignee: Apple Inc.
Inventors: Brian P. Lilly, Sandeep Gupta, Chandan Shantharaj, Krishna C. Potnuru, Sahil Kapoor
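The abstract above describes the mechanism at the level of circuits; a minimal behavioral sketch in C++ may make the latency bookkeeping concrete. It assumes a fixed per-group minimum latency and a single clamp value; the class and member names are illustrative and do not come from the patent.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

// Behavioral sketch (not the patented circuit): a data array split into groups
// that sit at increasing distance from the pipeline control circuit, so each
// group has a different minimum access latency, with a bound on the maximum.
struct LatencyGroup {
    unsigned min_latency;  // cycles; grows with distance from pipeline control
};

class VariableLatencyCacheModel {
public:
    VariableLatencyCacheModel(std::vector<LatencyGroup> groups, unsigned max_latency)
        : groups_(std::move(groups)), max_latency_(max_latency) {}

    // Latency charged to an access landing in `group_index`; `extra_delay`
    // stands in for dynamic effects such as bank conflicts.
    unsigned access_latency(std::size_t group_index, unsigned extra_delay) const {
        unsigned raw = groups_.at(group_index).min_latency + extra_delay;
        // Latency control: never exceed the bounded maximum for this array.
        return std::min(raw, max_latency_);
    }

private:
    std::vector<LatencyGroup> groups_;
    unsigned max_latency_;
};

int main() {
    // Four groups, each one cycle "farther" from the pipeline control circuit.
    VariableLatencyCacheModel cache({{3}, {4}, {5}, {6}}, /*max_latency=*/6);
    for (std::size_t g = 0; g < 4; ++g)
        std::cout << "group " << g << ": " << cache.access_latency(g, 0) << " cycles\n";
    return 0;
}
```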
-
Publication number: 20230418724
Abstract: An apparatus includes a plurality of processor circuits, a cache memory circuit, and a trace control circuit. The trace control circuit may be configured, in response to activation of a mode to record information indicative of program execution of at least one processor circuit of the plurality of processor circuits, to monitor memory requests transmitted between ones of the plurality of processor circuits and the cache memory circuit, and then to select a particular memory request of monitored memory requests using an arbitration algorithm. The trace control circuit may be further configured to allocate space in a trace buffer to the particular memory request, and to store, in the trace buffer, information associated with the particular memory request.
Type: Application
Filed: June 29, 2023
Publication date: December 28, 2023
Inventors: Andrew J. Beaumont-Smith, Sandeep Gupta, Krishna C. Potnuru, Matthias Knoth
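As a rough illustration of the trace flow described above, the sketch below models a trace control block that, while tracing is enabled, watches requests seen on the processor-to-cache interface, picks one with a simple round-robin choice (the abstract only specifies "an arbitration algorithm"), reserves a trace-buffer entry, and records the request. All names here are hypothetical.

```cpp
#include <cstdint>
#include <iostream>
#include <optional>
#include <vector>

// Behavioral sketch of a trace control block: monitor, arbitrate, allocate,
// store. The round-robin arbiter is an assumption for illustration.
struct MemoryRequest {
    unsigned source_cpu;
    std::uint64_t address;
};

class TraceControl {
public:
    explicit TraceControl(std::size_t buffer_capacity) : capacity_(buffer_capacity) {}

    void set_trace_mode(bool enabled) { enabled_ = enabled; }

    // Called once per "cycle" with the requests observed between the processor
    // circuits and the cache; returns the request that was traced, if any.
    std::optional<MemoryRequest> observe(const std::vector<MemoryRequest>& seen) {
        if (!enabled_ || seen.empty() || trace_buffer_.size() >= capacity_)
            return std::nullopt;
        // Round-robin arbitration among the monitored requests.
        const MemoryRequest& winner = seen[next_ % seen.size()];
        ++next_;
        trace_buffer_.push_back(winner);  // allocate space and store the info
        return winner;
    }

    const std::vector<MemoryRequest>& trace_buffer() const { return trace_buffer_; }

private:
    bool enabled_ = false;
    std::size_t capacity_;
    std::size_t next_ = 0;
    std::vector<MemoryRequest> trace_buffer_;
};

int main() {
    TraceControl trace(/*buffer_capacity=*/16);
    trace.set_trace_mode(true);
    trace.observe({{0, 0x1000}, {1, 0x2000}});
    trace.observe({{2, 0x3000}});
    std::cout << "traced " << trace.trace_buffer().size() << " requests\n";
    return 0;
}
```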
-
Publication number: 20230359557
Abstract: A cache may include multiple request handling pipes, each of which may further include multiple request buffers, for storing device requests from one or more processors to one or more devices. Some of the device requests may need to be sent to the devices according to an order. For a given one of such device requests, the cache may select a request handling pipe, based on an address indicated by the device request, and select a request buffer, based on the available entries of the request buffers of the selected request handling pipe, to store the device request. The cache may further use first-level and second-level token stores to track and maintain the device requests in order when transmitting the device requests to the devices.
Type: Application
Filed: July 17, 2023
Publication date: November 9, 2023
Applicant: Apple Inc.
Inventors: Sandeep Gupta, Brian P. Lilly, Krishna C. Potnuru
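A simplified model of the ordering scheme described above is sketched below: requests are steered to a pipe by address, placed in the least-occupied request buffer of that pipe, and released to the device side strictly in issue order. The single global release counter stands in for the patent's first-level and second-level token stores, so this approximates the idea rather than reproducing the claimed two-level design; all identifiers are invented for illustration.

```cpp
#include <cstdint>
#include <deque>
#include <iostream>
#include <vector>

// Ordered device requests: pipe chosen by address, buffer chosen by occupancy,
// release gated by a global order token (a simplification of the token stores).
struct DeviceRequest {
    std::uint64_t address;
    std::uint64_t order_token;  // global issue order
};

class OrderedRequestCacheModel {
public:
    OrderedRequestCacheModel(std::size_t num_pipes, std::size_t buffers_per_pipe)
        : pipes_(num_pipes, std::vector<std::deque<DeviceRequest>>(buffers_per_pipe)) {}

    void enqueue(std::uint64_t address) {
        DeviceRequest req{address, next_token_++};
        auto& pipe = pipes_[address % pipes_.size()];  // pipe selected by address
        std::size_t best = 0;                          // buffer with the fewest entries
        for (std::size_t b = 1; b < pipe.size(); ++b)
            if (pipe[b].size() < pipe[best].size()) best = b;
        pipe[best].push_back(req);
    }

    // Transmit at most one request per call, and only the one whose token
    // matches the global release pointer, so device order matches issue order.
    bool transmit_next() {
        for (auto& pipe : pipes_)
            for (auto& buf : pipe)
                if (!buf.empty() && buf.front().order_token == release_token_) {
                    std::cout << "send addr 0x" << std::hex << buf.front().address
                              << std::dec << " (token " << release_token_ << ")\n";
                    buf.pop_front();
                    ++release_token_;
                    return true;
                }
        return false;
    }

private:
    std::vector<std::vector<std::deque<DeviceRequest>>> pipes_;
    std::uint64_t next_token_ = 0;
    std::uint64_t release_token_ = 0;
};

int main() {
    OrderedRequestCacheModel cache(/*num_pipes=*/2, /*buffers_per_pipe=*/2);
    for (std::uint64_t a : {0x10, 0x21, 0x32, 0x43}) cache.enqueue(a);
    while (cache.transmit_next()) {}
    return 0;
}
```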
-
Patent number: 11768690
Abstract: A system may include a plurality of processors and a coprocessor. A plurality of coprocessor context priority registers corresponding to a plurality of contexts supported by the coprocessor may be included. The plurality of processors may use the plurality of contexts, and may program the coprocessor context priority register corresponding to a context with a value specifying a priority of the context relative to other contexts. An arbiter may arbitrate among instructions issued by the plurality of processors based on the priorities in the plurality of coprocessor context priority registers. In one embodiment, real-time threads may be assigned higher priorities than bulk processing tasks, improving bandwidth allocated to the real-time threads as compared to the bulk tasks.
Type: Grant
Filed: November 22, 2021
Date of Patent: September 26, 2023
Assignee: Apple Inc.
Inventors: Aditya Kesiraju, Andrew J. Beaumont-Smith, Brian P. Lilly, James Vash, Jason M. Kassoff, Krishna C. Potnuru, Rajdeep L. Bhuyar, Ran A. Chachick, Tyler J. Huberty, Derek R. Kumar
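The priority mechanism described above can be pictured with the short C++ sketch below: software programs a per-context priority value, and an arbiter picks the pending coprocessor instruction whose context holds the highest value (ties go to the first pending request here, a detail the abstract does not specify). The names and the priority values used are assumptions, not details from the patent.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <optional>
#include <vector>

// Behavioral sketch of priority-based arbitration for a shared coprocessor:
// each context has a software-programmed priority, and the arbiter issues the
// pending instruction whose context has the highest priority.
struct CoprocessorInstruction {
    unsigned context_id;
    std::uint32_t opcode;
};

class ContextPriorityArbiter {
public:
    // Models a processor writing a coprocessor context priority register.
    void set_priority(unsigned context_id, unsigned priority) {
        priority_[context_id] = priority;
    }

    // Selects the highest-priority pending instruction; unprogrammed contexts
    // default to priority 0. Ties keep the earliest pending request.
    std::optional<CoprocessorInstruction>
    arbitrate(const std::vector<CoprocessorInstruction>& pending) const {
        std::optional<CoprocessorInstruction> winner;
        unsigned best = 0;
        for (const auto& instr : pending) {
            auto it = priority_.find(instr.context_id);
            unsigned p = (it == priority_.end()) ? 0 : it->second;
            if (!winner || p > best) { winner = instr; best = p; }
        }
        return winner;
    }

private:
    std::map<unsigned, unsigned> priority_;  // context id -> programmed priority
};

int main() {
    ContextPriorityArbiter arbiter;
    arbiter.set_priority(/*context_id=*/0, /*priority=*/1);  // bulk processing task
    arbiter.set_priority(/*context_id=*/1, /*priority=*/7);  // real-time thread
    auto pick = arbiter.arbitrate({{0, 0xA0}, {1, 0xB0}});
    if (pick) std::cout << "issue from context " << pick->context_id << "\n";
    return 0;
}
```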
-
Patent number: 11741009
Abstract: A cache may include multiple request handling pipes, each of which may further include multiple request buffers, for storing device requests from one or more processors to one or more devices. Some of the device requests may need to be sent to the devices according to an order. For a given one of such device requests, the cache may select a request handling pipe, based on an address indicated by the device request, and select a request buffer, based on the available entries of the request buffers of the selected request handling pipe, to store the device request. The cache may further use first-level and second-level token stores to track and maintain the device requests in order when transmitting the device requests to the devices.
Type: Grant
Filed: November 15, 2021
Date of Patent: August 29, 2023
Assignee: Apple Inc.
Inventors: Sandeep Gupta, Brian P. Lilly, Krishna C. Potnuru
-
Patent number: 11740993
Abstract: An apparatus includes a plurality of processor circuits, a cache memory circuit, and a trace control circuit. The trace control circuit may be configured, in response to activation of a mode to record information indicative of program execution of at least one processor circuit of the plurality of processor circuits, to monitor memory requests transmitted between ones of the plurality of processor circuits and the cache memory circuit, and then to select a particular memory request of monitored memory requests using an arbitration algorithm. The trace control circuit may be further configured to allocate space in a trace buffer to the particular memory request, and to store, in the trace buffer, information associated with the particular memory request.
Type: Grant
Filed: November 30, 2021
Date of Patent: August 29, 2023
Assignee: Apple Inc.
Inventors: Andrew J. Beaumont-Smith, Sandeep Gupta, Krishna C. Potnuru, Matthias Knoth
-
Publication number: 20230061419
Abstract: An apparatus includes a plurality of processor circuits, a cache memory circuit, and a trace control circuit. The trace control circuit may be configured, in response to activation of a mode to record information indicative of program execution of at least one processor circuit of the plurality of processor circuits, to monitor memory requests transmitted between ones of the plurality of processor circuits and the cache memory circuit, and then to select a particular memory request of monitored memory requests using an arbitration algorithm. The trace control circuit may be further configured to allocate space in a trace buffer to the particular memory request, and to store, in the trace buffer, information associated with the particular memory request.
Type: Application
Filed: November 30, 2021
Publication date: March 2, 2023
Inventors: Andrew J. Beaumont-Smith, Sandeep Gupta, Krishna C. Potnuru, Matthias Knoth
-
Publication number: 20220083343
Abstract: A system may include a plurality of processors and a coprocessor. A plurality of coprocessor context priority registers corresponding to a plurality of contexts supported by the coprocessor may be included. The plurality of processors may use the plurality of contexts, and may program the coprocessor context priority register corresponding to a context with a value specifying a priority of the context relative to other contexts. An arbiter may arbitrate among instructions issued by the plurality of processors based on the priorities in the plurality of coprocessor context priority registers. In one embodiment, real-time threads may be assigned higher priorities than bulk processing tasks, improving bandwidth allocated to the real-time threads as compared to the bulk tasks.
Type: Application
Filed: November 22, 2021
Publication date: March 17, 2022
Inventors: Aditya Kesiraju, Andrew J. Beaumont-Smith, Brian P. Lilly, James Vash, Jason M. Kassoff, Krishna C. Potnuru, Rajdeep L. Bhuyar, Ran A. Chachick, Tyler J. Huberty, Derek R. Kumar
-
Patent number: 11210104
Abstract: A system may include a plurality of processors and a coprocessor. A plurality of coprocessor context priority registers corresponding to a plurality of contexts supported by the coprocessor may be included. The plurality of processors may use the plurality of contexts, and may program the coprocessor context priority register corresponding to a context with a value specifying a priority of the context relative to other contexts. An arbiter may arbitrate among instructions issued by the plurality of processors based on the priorities in the plurality of coprocessor context priority registers. In one embodiment, real-time threads may be assigned higher priorities than bulk processing tasks, improving bandwidth allocated to the real-time threads as compared to the bulk tasks.
Type: Grant
Filed: September 11, 2020
Date of Patent: December 28, 2021
Assignee: Apple Inc.
Inventors: Aditya Kesiraju, Andrew J. Beaumont-Smith, Brian P. Lilly, James Vash, Jason M. Kassoff, Krishna C. Potnuru, Rajdeep L. Bhuyar, Ran A. Chachick, Tyler J. Huberty, Derek R. Kumar