Patents by Inventor Murali Chinnakonda
Murali Chinnakonda has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9569361
Abstract: According to one general aspect, an apparatus may include a cache pre-fetcher and a pre-fetch scheduler. The cache pre-fetcher may be configured to predict, based at least in part upon a virtual address, data to be retrieved from a memory system. The pre-fetch scheduler may be configured to convert the virtual address of the data to a physical address of the data, and request the data from one of a plurality of levels of the memory system. The memory system may include a plurality of levels, each level of the memory system configured to store data.
Type: Grant
Filed: July 7, 2014
Date of Patent: February 14, 2017
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Arun Radhakrishnan, Kevin Lepak, Rama Gopal, Murali Chinnakonda, Karthik Sundaram, Brian Grayson
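The claimed flow — predict a line from a virtual address, translate it to a physical address, then request it from one level of the memory system — can be modeled as a short software sketch. Everything concrete here (the 4 KiB page size, the dictionary page table, the ordered level list) is a hypothetical stand-in for illustration, not the patented hardware:

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages; the abstract does not fix a size

def schedule_prefetch(virtual_addr, page_table, levels):
    # Translate the predicted virtual address to a physical address.
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    if vpn not in page_table:
        return None  # no valid translation: drop the prefetch
    phys_addr = page_table[vpn] * PAGE_SIZE + offset
    # Request the data from the first (nearest) level that holds it;
    # 'levels' is an ordered list of (name, set-of-physical-addresses).
    for name, contents in levels:
        if phys_addr in contents:
            return (name, phys_addr)
    return None
```

A usage sketch: with a page table mapping virtual page 0x10 to physical frame 0x2, a predicted address 0x10008 translates to 0x2008 and is requested from whichever level is listed first as holding it.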
-
Patent number: 9418018
Abstract: A Fill Buffer (FB) based data forwarding scheme stores in the FB a combination of the Virtual Address (VA), the TLB (Translation Look-aside Buffer) entry# (or another indication of the location of the Page Table Entry (PTE) in the TLB), and TLB page size information, and uses these values to expedite FB forwarding. Load (Ld) operations send their non-translated VA for an early comparison against the VA entries in the FB, and matches are then further qualified with the TLB entry# to determine a "hit." This hit determination is fast and enables FB forwarding at higher frequencies without waiting for a comparison of Physical Addresses (PA) to conclude in the FB. A safety mechanism may detect a false hit in the FB and generate a late load-cancel indication to cancel the earlier-started FB forwarding by ignoring the data obtained as a result of the Ld execution. The Ld is then re-executed later and tries to complete with the correct data.
Type: Grant
Filed: July 21, 2014
Date of Patent: August 16, 2016
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Karthik Sundaram, Rama Gopal, Murali Chinnakonda
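The two-phase check described above — a fast early match on VA plus TLB entry#, then a late physical-address comparison that can flag a false hit — can be sketched in software. The dictionary fields (`va`, `tlb_idx`, `pa`) are hypothetical names chosen for this model, not the patent's actual structures:

```python
def fb_early_hit(fb_entries, load_va, load_tlb_idx):
    # Early check: compare the load's untranslated VA against each
    # fill-buffer entry, then qualify the match with the TLB entry#,
    # without waiting for the full physical-address compare.
    for entry in fb_entries:
        if entry["va"] == load_va and entry["tlb_idx"] == load_tlb_idx:
            return entry
    return None

def fb_safety_check(entry, load_pa):
    # Late check: a physical-address mismatch flags a false hit, so the
    # forwarded data must be dropped and the load replayed.
    return entry is not None and entry["pa"] == load_pa
```

In this model, a load that passes `fb_early_hit` starts forwarding immediately; if `fb_safety_check` later fails, the forwarded data is ignored and the load re-executes.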
-
Patent number: 9418019
Abstract: An embodiment includes a system, comprising: a cache configured to store a plurality of cache lines, each cache line associated with a priority state from among N priority states; and a controller coupled to the cache and configured to: search the cache lines for a cache line with a lowest priority state of the priority states to use as a victim cache line; if the cache line with the lowest priority state is not found, reduce the priority state of at least one of the cache lines; and select a random cache line of the cache lines as the victim cache line if, after performing each of the searching of the cache lines and the reducing of the priority state of at least one cache line K times, the cache line with the lowest priority state is not found. N is an integer greater than or equal to 3; and K is an integer greater than or equal to 1 and less than or equal to N−2.
Type: Grant
Filed: May 2, 2014
Date of Patent: August 16, 2016
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Kevin Lepak, Tarun Nakra, Khang Nguyen, Murali Chinnakonda, Edwin Silvera
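The selection loop in this claim is algorithmic enough to sketch directly: search for a line at the lowest priority state, age the lines if none is found, repeat K times, and fall back to a random victim. This is a hypothetical software model (reducing every line's priority on a miss is one choice permitted by "at least one of the cache lines"):

```python
import random

def select_victim(priorities, n, k):
    # Software model of the claimed victim-selection loop.
    # priorities[i] is cache line i's priority state, in range [0, n-1];
    # state 0 is the "lowest" state that marks the preferred victim.
    assert n >= 3 and 1 <= k <= n - 2   # bounds stated in the claim
    for _ in range(k):
        # Search for a line already at the lowest priority state.
        for i, p in enumerate(priorities):
            if p == 0:
                return i
        # None found: reduce the priority states (here, all of them).
        for i in range(len(priorities)):
            priorities[i] -= 1
    # Still no lowest-priority line after K rounds: pick one at random.
    return random.randrange(len(priorities))
```

The bound K ≤ N−2 makes sense in this model: each failed round lowers every state by one, so after at most N−1 reductions any line would reach state 0, and capping K below that keeps the random fallback reachable only when the search genuinely keeps missing.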
-
Publication number: 20150331608
Abstract: An electronic system includes: a storage unit configured to store a data array; and a control unit configured to: determine availability of the data array; reorder access to the data array; and provide access to the data array.
Type: Application
Filed: November 14, 2014
Publication date: November 19, 2015
Inventors: Edwin Silvera, Murali Chinnakonda, Tarun Nakra, Kevin Lepak
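The abstract gives no algorithm, so any code can only illustrate the general idea. One heavily hedged reading — "determine availability, then reorder" as moving requests to available portions of the array ahead of requests to busy ones — could look like this; the bank-based model is entirely an assumption of this sketch:

```python
def reorder_accesses(requests, busy_banks):
    # Hypothetical model: each request targets a bank of the data array;
    # requests to available banks are moved ahead of requests to busy
    # banks, preserving order within each group.
    ready = [r for r in requests if r["bank"] not in busy_banks]
    blocked = [r for r in requests if r["bank"] in busy_banks]
    return ready + blocked
```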
-
Publication number: 20150199275
Abstract: According to one general aspect, an apparatus may include a cache pre-fetcher and a pre-fetch scheduler. The cache pre-fetcher may be configured to predict, based at least in part upon a virtual address, data to be retrieved from a memory system. The pre-fetch scheduler may be configured to convert the virtual address of the data to a physical address of the data, and request the data from one of a plurality of levels of the memory system. The memory system may include a plurality of levels, each level of the memory system configured to store data.
Type: Application
Filed: July 7, 2014
Publication date: July 16, 2015
Inventors: Arun RADHAKRISHNAN, Kevin LEPAK, Rama GOPAL, Murali CHINNAKONDA, Karthik SUNDARAM, Brian GRAYSON
-
Publication number: 20150186280
Abstract: An embodiment includes a system, comprising: a cache configured to store a plurality of cache lines, each cache line associated with a priority state from among N priority states; and a controller coupled to the cache and configured to: search the cache lines for a cache line with a lowest priority state of the priority states to use as a victim cache line; if the cache line with the lowest priority state is not found, reduce the priority state of at least one of the cache lines; and select a random cache line of the cache lines as the victim cache line if, after performing each of the searching of the cache lines and the reducing of the priority state of at least one cache line K times, the cache line with the lowest priority state is not found. N is an integer greater than or equal to 3; and K is an integer greater than or equal to 1 and less than or equal to N−2.
Type: Application
Filed: May 2, 2014
Publication date: July 2, 2015
Applicant: Samsung Electronics Co., Ltd.
Inventors: Kevin LEPAK, Tarun NAKRA, Khang NGUYEN, Murali CHINNAKONDA, Edwin SILVERA
-
Publication number: 20150186292
Abstract: A Fill Buffer (FB) based data forwarding scheme stores in the FB a combination of the Virtual Address (VA), the TLB (Translation Look-aside Buffer) entry# (or another indication of the location of the Page Table Entry (PTE) in the TLB), and TLB page size information, and uses these values to expedite FB forwarding. Load (Ld) operations send their non-translated VA for an early comparison against the VA entries in the FB, and matches are then further qualified with the TLB entry# to determine a "hit." This hit determination is fast and enables FB forwarding at higher frequencies without waiting for a comparison of Physical Addresses (PA) to conclude in the FB. A safety mechanism may detect a false hit in the FB and generate a late load-cancel indication to cancel the earlier-started FB forwarding by ignoring the data obtained as a result of the Ld execution. The Ld is then re-executed later and tries to complete with the correct data.
Type: Application
Filed: July 21, 2014
Publication date: July 2, 2015
Inventors: Karthik SUNDARAM, Rama GOPAL, Murali CHINNAKONDA
-
Publication number: 20040158694
Abstract: Methods and apparatus are provided for use in a digital processor having a pipeline for executing instructions. The method includes: monitoring instructions in the pipeline for instructions that write to a resource and instructions that read from the resource; for each instruction that writes to the resource, storing a write instruction type and write instruction tracking data; for each instruction that reads from the resource, determining a read instruction type and generating a latency value based on the write instruction type and the read instruction type; and stalling execution of the instruction that reads from the resource by a number of stall cycles in response to the latency value and the write instruction tracking data.
Type: Application
Filed: February 10, 2003
Publication date: August 12, 2004
Inventors: Thomas J. Tomazin, David Witt, Murali Chinnakonda, William H. Hooper
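The method above reduces to: look up a latency from the (write type, read type) pair, then stall the reader for whatever part of that latency has not already elapsed (the tracking data telling you when the writer issued). A minimal sketch, in which the latency table values and type names are invented for illustration:

```python
# Hypothetical latency table: stall cycles needed between a producing
# write instruction type and a consuming read instruction type.
LATENCY = {
    ("alu_write", "alu_read"): 0,
    ("load_write", "alu_read"): 2,
    ("mac_write", "store_read"): 3,
}

def stall_cycles(write_type, write_issue_cycle, read_type, read_issue_cycle):
    # Generate the latency value from the two instruction types, then
    # stall the reader only for the cycles not already covered by the
    # time elapsed since the writer issued (the tracking data).
    latency = LATENCY.get((write_type, read_type), 0)
    elapsed = read_issue_cycle - write_issue_cycle
    return max(0, latency - elapsed)
```

For example, a read issued one cycle after a two-cycle-latency write would stall for one remaining cycle; a read issued far later would not stall at all.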
-
Patent number: 6298423
Abstract: A load/store functional unit and a corresponding data cache of a superscalar microprocessor are disclosed. The load/store functional unit includes a plurality of reservation station entries which are accessed in parallel and which are coupled to the data cache in parallel. The load/store functional unit also includes a store buffer circuit having a plurality of store buffer entries. The store buffer entries are organized to provide a first-in, first-out buffer in which the outputs from less significant entries of the buffer are provided as inputs to more significant entries of the buffer.
Type: Grant
Filed: August 26, 1996
Date of Patent: October 2, 2001
Assignee: Advanced Micro Devices, Inc.
Inventors: William M. Johnson, David B. Witt, Murali Chinnakonda
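The store buffer organization described — each more significant entry taking its input from the output of the less significant one — is a shift-register FIFO: a new store enters at the least significant entry, everything shifts up, and the oldest store drains from the most significant end. A hypothetical list-based model of one insert step:

```python
def store_buffer_insert(buffer, new_entry):
    # Model of the claimed shift-register FIFO. buffer[0] is the least
    # significant (newest) entry; buffer[-1] is the most significant
    # (oldest). Each entry's output feeds the next more significant
    # entry's input, so one insert shifts every store up by one slot.
    drained = buffer[-1]                  # oldest store leaves toward the cache
    shifted = [new_entry] + buffer[:-1]   # each entry feeds the next one up
    return shifted, drained
```

This structure gives FIFO ordering without head/tail pointers, at the cost of moving every entry on each insert, which is a plausible trade-off for the small, fixed-depth buffers of that era.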
-
Patent number: 5878245
Abstract: A load/store functional unit and a corresponding data cache of a superscalar microprocessor are disclosed. The load/store functional unit includes a plurality of reservation station entries which are accessed in parallel and which are coupled to the data cache in parallel. The load/store functional unit also includes a store buffer circuit having a plurality of store buffer entries. The store buffer entries are organized to provide a first-in, first-out buffer in which the outputs from less significant entries of the buffer are provided as inputs to more significant entries of the buffer.
Type: Grant
Filed: October 29, 1993
Date of Patent: March 2, 1999
Assignee: Advanced Micro Devices, Inc.
Inventors: William M. Johnson, David B. Witt, Murali Chinnakonda