Patents by Inventor John Kalamatianos
John Kalamatianos has filed for patents on the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20170083474
Abstract: A plurality of first controllers operate according to a plurality of access protocols to control a plurality of memory modules. A second controller receives access requests that target the plurality of memory modules and selectively provides the access requests and control information to the plurality of first controllers based on physical addresses in the access requests. The second controller generates the control information for the first controllers based on statistical representations of the access requests to the plurality of memory modules.
Type: Application
Filed: September 22, 2015
Publication date: March 23, 2017
Inventors: Mitesh R. Meswani, David A. Roberts, Yasuko Eckert, Kapil Dev, John Kalamatianos, Indrani Paul
-
Publication number: 20170083444
Abstract: A cache controller configures a portion of a first memory as cache for a second memory responsive to an indicator of locality of memory access requests to the second memory. The indicator of locality determines a probability that a location of a memory access request to the second memory is predictable based upon at least one previous memory access request. The cache controller may determine a size of the cache based on a value of the indicator of locality or modify the size of the cache in response to changes in the value of the indicator of locality.
Type: Application
Filed: September 22, 2015
Publication date: March 23, 2017
Inventors: Kapil Dev, Mitesh R. Meswani, David A. Roberts, Yasuko Eckert, Indrani Paul, John Kalamatianos
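The locality-driven sizing described in this abstract can be sketched in Python. This is a minimal illustration under assumptions of mine: a locality indicator normalized to [0, 1] and a simple linear interpolation between a minimum and maximum cache size. The names, bounds, and the linear policy are hypothetical; the abstract does not specify the actual mapping.

```python
def cache_size_for_locality(locality, min_lines=64, max_lines=4096):
    """Pick a cache size from a locality indicator in [0.0, 1.0].

    Hypothetical policy: interpolate linearly between a minimum and
    maximum number of cache lines. A real controller would also react
    to *changes* in the indicator over time, as the abstract notes.
    """
    locality = max(0.0, min(1.0, locality))  # clamp to the valid range
    return int(min_lines + locality * (max_lines - min_lines))
```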
-
Publication number: 20170031853
Abstract: A communication device includes a data source that generates data for transmission over a bus, and further includes a data encoder coupled to receive and encode the outgoing data. The encoder includes a coupling toggle rate (CTR) calculator configured to calculate a CTR for the outgoing data, a threshold calculator configured to determine an expected value of the CTR as a threshold value, and a comparator configured to compare the calculated CTR to the threshold value; the comparison is used to determine whether to perform an encoding step by an encoding block configured to selectively encode the data. A method according to one embodiment includes determining and comparing a CTR and an expected CTR to determine whether to encode the outgoing data. Any one of a plurality of different coding techniques may be used, including bus inversion.
Type: Application
Filed: July 30, 2015
Publication date: February 2, 2017
Applicant: ADVANCED MICRO DEVICES, INC.
Inventors: Greg Sadowski, John Kalamatianos
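The selective encoding this abstract mentions can be illustrated with classic bus-invert coding, one of the coding techniques it names. A minimal sketch, assuming an 8-bit bus and a plain toggle count as the decision metric (the patent's coupling-aware CTR calculation is more involved):

```python
def bus_invert(words, width=8):
    """Classic bus-invert coding: if sending the next word would toggle
    more than half the bus lines relative to the previously transmitted
    word, send its complement and raise an invert flag instead."""
    mask = (1 << width) - 1
    prev = 0                       # bus assumed to start at all zeros
    out = []
    for w in words:
        toggles = bin((prev ^ w) & mask).count("1")
        if toggles > width // 2:
            w_enc, flag = (~w) & mask, 1   # invert to cut toggles
        else:
            w_enc, flag = w & mask, 0
        out.append((w_enc, flag))
        prev = w_enc               # compare against what was transmitted
    return out
```

A receiver simply re-inverts any word whose flag bit is set, so the extra cost is one wire for the flag in exchange for fewer simultaneous line transitions.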
-
Patent number: 9529720
Abstract: The present application describes embodiments of techniques for picking a data array lookup request for execution in a data array pipeline a variable number of cycles behind a corresponding tag array lookup request that is concurrently executing in a tag array pipeline. Some embodiments of a method for picking the data array lookup request include picking the data array lookup request for execution in a data array pipeline of a cache concurrently with execution of a tag array lookup request in a tag array pipeline of the cache. The data array lookup request is picked for execution in response to resources of the data array pipeline becoming available after picking the tag array lookup request for execution. Some embodiments of the method may be implemented in a cache.
Type: Grant
Filed: June 7, 2013
Date of Patent: December 27, 2016
Assignee: Advanced Micro Devices, Inc.
Inventors: Marius Evers, John Kalamatianos, Carl D. Dietz, Richard E. Klass, Ravindra N. Bhargava
-
Publication number: 20160291678
Abstract: In one form, power consumed in transmitting data over a bus interconnect is reduced. The power is reduced by configuring a buffer that is used to store data to be transmitted over the bus interconnect as a two-dimensional (2D) buffer array having a plurality of rows and columns. The data stored in the 2D buffer array is then analyzed to determine a mode of transmitting the data that uses a least amount of power. The determined mode is used to transmit the data over the bus interconnect.
Type: Application
Filed: March 30, 2015
Publication date: October 6, 2016
Applicant: ADVANCED MICRO DEVICES, INC.
Inventors: Greg Sadowski, John Kalamatianos
-
Patent number: 9424195
Abstract: A method of managing cache memory includes accessing a cache memory at a primary index that corresponds to an address specified in an access request. A determination is made that accessing the cache memory at the primary index does not result in a cache hit on a cache line with an error-free status. In response to this determination, the primary index is mapped to a secondary index and data for the address is written to a cache line at the secondary index.
Type: Grant
Filed: April 15, 2014
Date of Patent: August 23, 2016
Assignee: ADVANCED MICRO DEVICES, INC.
Inventors: John Kalamatianos, Johnsy Kanjirapallil John, Phillip E. Nevius, Robert G. Gelinas
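The primary-to-secondary index remapping can be sketched as a toy direct-mapped cache. The XOR-based secondary mapping, the `faulty` set, and all other names here are illustrative assumptions, not the patent's actual mapping function:

```python
class FaultTolerantCache:
    """Toy direct-mapped cache that steers writes away from indices with
    known hard errors by remapping to a secondary index (primary XOR the
    top index bit -- a hypothetical choice that lands in a disjoint set)."""

    def __init__(self, index_bits=4):
        self.index_bits = index_bits
        self.lines = {}    # index -> (tag, data)
        self.faulty = set()  # indices known to lack error-free lines

    def _indices(self, addr):
        primary = addr & ((1 << self.index_bits) - 1)
        secondary = primary ^ (1 << (self.index_bits - 1))
        return primary, secondary

    def write(self, addr, data):
        """Write data for addr, using the secondary index when the
        primary index cannot provide an error-free line."""
        primary, secondary = self._indices(addr)
        idx = secondary if primary in self.faulty else primary
        self.lines[idx] = (addr >> self.index_bits, data)
        return idx
```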
-
Patent number: 9304919
Abstract: The present application describes some embodiments of a prefetcher that tracks multiple stride sequences for prefetching. Some embodiments of the prefetcher implement a method including generating a sum-of-strides for each of a plurality of stride lengths that are larger than one by summing a number of previous strides that is equal to the stride length. Some embodiments of the method also include prefetching data in response to repetition of one or more of the sum-of-strides for one or more of the plurality of stride lengths.
Type: Grant
Filed: May 31, 2013
Date of Patent: April 5, 2016
Assignee: Advanced Micro Devices, Inc.
Inventors: John Kalamatianos, Paul E. Keltcher
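The sum-of-strides computation described above can be sketched directly: for each stride length L greater than one, sum the last L strides of the observed address stream. A repeating sum for some L reveals a compound pattern (e.g. alternating +4/+2 strides) that a single-stride detector would miss. The function name and window bound are hypothetical.

```python
def sum_of_strides(addresses, max_len=4):
    """Return {L: sum of the last L strides} for L = 2..max_len, given a
    sequence of accessed addresses. If the sum for some L repeats over
    time, a prefetcher can issue last_address + that sum."""
    strides = [b - a for a, b in zip(addresses, addresses[1:])]
    return {L: sum(strides[-L:])
            for L in range(2, max_len + 1) if len(strides) >= L}
```

For the stream 0, 4, 6, 10, 12 the strides alternate +4, +2, so the L = 2 sum is a stable 6 and the next prefetch candidate is 12 + 6 = 18.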
-
Patent number: 9223705
Abstract: A processor employs a prefetch prediction module that predicts, for each prefetch request, whether the prefetch request is likely to be satisfied from ("hit") the cache. The arbitration priority of prefetch requests that are predicted to hit the cache is reduced relative to demand requests or other prefetch requests that are predicted to miss in the cache. Accordingly, an arbiter for the cache is less likely to select prefetch requests that hit the cache, thereby improving processor throughput.
Type: Grant
Filed: April 1, 2013
Date of Patent: December 29, 2015
Assignee: Advanced Micro Devices, Inc.
Inventors: Ramkumar Jayaseelan, John Kalamatianos
-
Patent number: 9189326
Abstract: Hard errors in the memory array can be detected and corrected in real-time using reusable entries in an error status buffer. Data may be rewritten to a portion of a memory array and a register in response to a first error in data read from the portion of the memory array. The rewritten data may then be written from the register to an entry of an error status buffer in response to the rewritten data read from the register differing from the rewritten data read from the portion of the memory array.
Type: Grant
Filed: October 8, 2013
Date of Patent: November 17, 2015
Assignee: Advanced Micro Devices, Inc.
Inventors: John Kalamatianos, Johnsy Kanjirapallil John, Robert Gelinas, Vilas K. Sridharan, Phillip E. Nevius
-
Publication number: 20150293854
Abstract: A method of managing cache memory includes accessing a cache memory at a primary index that corresponds to an address specified in an access request. A determination is made that accessing the cache memory at the primary index does not result in a cache hit on a cache line with an error-free status. In response to this determination, the primary index is mapped to a secondary index and data for the address is written to a cache line at the secondary index.
Type: Application
Filed: April 15, 2014
Publication date: October 15, 2015
Applicant: Advanced Micro Devices, Inc.
Inventors: John Kalamatianos, Johnsy Kanjirapallil John, Phillip E. Nevius, Robert G. Gelinas
-
Patent number: 9058278
Abstract: A method, an apparatus, and a non-transitory computer readable medium for tracking accuracy and coverage of a prefetcher in a processor are presented. A table is maintained and indexed by an address, wherein each entry in the table corresponds to one address. A number of demand requests that hit in the table on a prefetch, a total number of demand requests, and a number of prefetch requests are counted. The accuracy of the prefetcher is calculated by dividing the number of demand requests that hit in the table on a prefetch by the number of prefetch requests. The coverage of the prefetcher is calculated by dividing the number of demand requests that hit in the table on a prefetch by the total number of demand requests. The table and the counters are reset when a reset condition is reached.
Type: Grant
Filed: December 19, 2012
Date of Patent: June 16, 2015
Assignee: Advanced Micro Devices, Inc.
Inventors: John Kalamatianos, Paul Keltcher
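The two ratios defined in this abstract translate directly into code. A minimal sketch of the arithmetic only (the table lookup and reset machinery are omitted; the function and parameter names are mine):

```python
def prefetcher_stats(useful_hits, demand_requests, prefetch_requests):
    """Per the abstract:
    accuracy = demand hits on a prefetch / prefetch requests issued
    coverage = demand hits on a prefetch / total demand requests"""
    accuracy = useful_hits / prefetch_requests if prefetch_requests else 0.0
    coverage = useful_hits / demand_requests if demand_requests else 0.0
    return accuracy, coverage
```

High accuracy with low coverage means the prefetcher is rarely wrong but rarely helps; the two metrics together tell a tuner whether to prefetch more aggressively or more selectively.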
-
Patent number: 9058277
Abstract: Methods and systems for prefetching data for a processor are provided. A system is configured for, and a method includes, selecting one of a first prefetching control logic and a second prefetching control logic of the processor as a candidate feature, capturing a performance metric of the processor over an inactive sample period when the candidate feature is inactive, capturing the performance metric of the processor over an active sample period when the candidate feature is active, comparing the performance metric of the processor for the active and inactive sample periods, and setting a status of the candidate feature as enabled when the performance metric in the active period indicates improvement over the performance metric in the inactive period, and as disabled when the performance metric in the inactive period indicates improvement over the performance metric in the active period.
Type: Grant
Filed: November 8, 2012
Date of Patent: June 16, 2015
Assignee: ADVANCED MICRO DEVICES, INC.
Inventors: Sharad Dilip Bade, Alok Garg, John Kalamatianos, Paul Keltcher, Marius Evers, Chitresh Narasimhaiah
-
Patent number: 9047173
Abstract: A method, an apparatus, and a non-transitory computer readable medium for tracking prefetches generated by a stride prefetcher are presented. Responsive to a prefetcher table entry for an address stream locking on a stride, prefetch suppression logic is updated and prefetches from the prefetcher table entry are suppressed when suppression is enabled for that prefetcher table entry. A stride is a difference between consecutive addresses in the address stream. A prefetch request is issued from the prefetcher table entry when suppression is not enabled for that prefetcher table entry.
Type: Grant
Filed: February 21, 2013
Date of Patent: June 2, 2015
Assignee: Advanced Micro Devices, Inc.
Inventors: Alok Garg, Sharad Bade, John Kalamatianos
-
Patent number: 9021207
Abstract: In response to a processor core exiting a low-power state, a cache is set to a minimum size so that fewer than all of the cache's entries are available to store data, thus reducing the cache's power consumption. Over time, the size of the cache can be increased to account for heightened processor activity, thus ensuring that processing efficiency is not significantly impacted by a reduced cache size. In some embodiments, the cache size is increased based on a measured processor performance metric, such as an eviction rate of the cache. In some embodiments, the cache size is increased at regular intervals until a maximum size is reached.
Type: Grant
Filed: December 20, 2012
Date of Patent: April 28, 2015
Assignee: Advanced Micro Devices, Inc.
Inventors: John Kalamatianos, Edward J. McLellan, Paul Keltcher, Srilatha Manne, Richard E. Klass, James M. O'Connor
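The eviction-rate-driven growth described above can be sketched as a small sizing policy. Doubling the enabled ways and the specific thresholds are my illustrative assumptions; the abstract only says the size grows with a performance metric such as eviction rate, up to a maximum:

```python
def next_cache_ways(current_ways, eviction_rate, max_ways=16,
                    threshold=0.10):
    """Grow the number of enabled cache ways when the eviction rate
    suggests the shrunken cache is hurting performance; otherwise hold.
    Called periodically after the core exits a low-power state."""
    if eviction_rate > threshold and current_ways < max_ways:
        return current_ways * 2   # hypothetical growth step
    return current_ways
```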
-
Publication number: 20150100848
Abstract: Hard errors in the memory array can be detected and corrected in real-time using reusable entries in an error status buffer. Data may be rewritten to a portion of a memory array and a register in response to a first error in data read from the portion of the memory array. The rewritten data may then be written from the register to an entry of an error status buffer in response to the rewritten data read from the register differing from the rewritten data read from the portion of the memory array.
Type: Application
Filed: October 8, 2013
Publication date: April 9, 2015
Applicant: Advanced Micro Devices, Inc.
Inventors: John Kalamatianos, Johnsy Kanjirapallil John, Robert Gelinas, Vilas K. Sridharan, Phillip E. Nevius
-
Publication number: 20150026414
Abstract: A prefetcher maintains the state of stored prefetch information, such as a prefetch confidence level, when a prefetch would cross a memory page boundary. The maintained prefetch information can be used both to identify whether the stride pattern for a particular sequence of demand requests persists after the memory page boundary has been crossed, and to continue to issue prefetch requests according to the identified pattern. The prefetcher therefore does not have to re-identify a stride pattern each time a page boundary is crossed by a sequence of demand requests, thereby improving the efficiency and accuracy of the prefetcher.
Type: Application
Filed: July 17, 2013
Publication date: January 22, 2015
Applicant: Advanced Micro Devices, Inc.
Inventors: John Kalamatianos, Paul Keltcher, Marius Evers, Chitresh Narasimhaiah
-
Publication number: 20140365729
Abstract: The present application describes embodiments of techniques for picking a data array lookup request for execution in a data array pipeline a variable number of cycles behind a corresponding tag array lookup request that is concurrently executing in a tag array pipeline. Some embodiments of a method for picking the data array lookup request include picking the data array lookup request for execution in a data array pipeline of a cache concurrently with execution of a tag array lookup request in a tag array pipeline of the cache. The data array lookup request is picked for execution in response to resources of the data array pipeline becoming available after picking the tag array lookup request for execution. Some embodiments of the method may be implemented in a cache.
Type: Application
Filed: June 7, 2013
Publication date: December 11, 2014
Inventors: Marius Evers, John Kalamatianos, Carl D. Dietz, Richard E. Klass, Ravindra N. Bhargava
-
Patent number: 8909866
Abstract: A processor transfers prefetch requests from their targeted cache to another cache in a memory hierarchy based on a fullness of a miss address buffer (MAB) or based on confidence levels of the prefetch requests. Each cache in the memory hierarchy is assigned a number of slots at the MAB. In response to determining the fullness of the slots assigned to a cache is above a threshold when a prefetch request to the cache is received, the processor transfers the prefetch request to the next lower level cache in the memory hierarchy. In response, the data targeted by the access request is prefetched to the next lower level cache in the memory hierarchy, and is therefore available for subsequent provision to the cache. In addition, the processor can transfer a prefetch request to lower level caches based on a confidence level of a prefetch request.
Type: Grant
Filed: November 6, 2012
Date of Patent: December 9, 2014
Assignee: Advanced Micro Devices, Inc.
Inventors: John Kalamatianos, Ravindra Nath Bhargava, Ramkumar Jayaseelan
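The routing decision described above can be sketched as a single predicate: demote a prefetch to the next lower cache level when the target level's MAB slots are nearly full, or when the prefetch's confidence is low. The specific threshold values and the level-numbering convention (L1 = 1, larger = lower in the hierarchy) are illustrative assumptions:

```python
def route_prefetch(level, mab_used, mab_slots, confidence,
                   fullness_threshold=0.75, confidence_threshold=0.5):
    """Return the cache level a prefetch should target.

    Demote to the next lower level (e.g. L1 -> L2) when the slots
    assigned to the target level at the miss address buffer are too
    full, or when the prefetch confidence is below a cutoff."""
    fullness = mab_used / mab_slots
    if fullness > fullness_threshold or confidence < confidence_threshold:
        return level + 1   # next lower level in the hierarchy
    return level
```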
-
Publication number: 20140359221
Abstract: The present application describes some embodiments of a prefetcher that tracks multiple stride sequences for prefetching. Some embodiments of the prefetcher implement a method including generating a sum-of-strides for each of a plurality of stride lengths that are larger than one by summing a number of previous strides that is equal to the stride length. Some embodiments of the method also include prefetching data in response to repetition of one or more of the sum-of-strides for one or more of the plurality of stride lengths.
Type: Application
Filed: May 31, 2013
Publication date: December 4, 2014
Inventors: John Kalamatianos, Paul E. Keltcher
-
Publication number: 20140297965
Abstract: A processor employs a prefetch prediction module that predicts, for each prefetch request, whether the prefetch request is likely to be satisfied from ("hit") the cache. The arbitration priority of prefetch requests that are predicted to hit the cache is reduced relative to demand requests or other prefetch requests that are predicted to miss in the cache. Accordingly, an arbiter for the cache is less likely to select prefetch requests that hit the cache, thereby improving processor throughput.
Type: Application
Filed: April 1, 2013
Publication date: October 2, 2014
Applicant: Advanced Micro Devices, Inc.
Inventors: Ramkumar Jayaseelan, John Kalamatianos