Patents by Inventor Yasuko ECKERT

Yasuko ECKERT has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170083444
    Abstract: A cache controller configures a portion of a first memory as a cache for a second memory responsive to an indicator of locality of memory access requests to the second memory. The indicator of locality determines a probability that the location of a memory access request to the second memory is predictable based upon at least one previous memory access request. The cache controller may determine a size of the cache based on a value of the indicator of locality, or modify the size of the cache in response to changes in the value of the indicator of locality.
    Type: Application
    Filed: September 22, 2015
    Publication date: March 23, 2017
    Inventors: Kapil Dev, Mitesh R. Meswani, David A. Roberts, Yasuko Eckert, Indrani Paul, John Kalamatianos
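    As a rough illustration of the abstract above, the sketch below (Python; the locality metric, sizes, and names are assumptions, not taken from the filing) shows one way a controller could turn a locality indicator into a cache size.

      # Hypothetical sketch: score how predictable recent accesses to the second
      # memory have been, then size the cache carved out of the first memory.
      def locality_indicator(addresses, stride=1):
          """Fraction of accesses whose address equals the previous address plus
          a fixed stride -- a simple stand-in for 'predictability'."""
          if len(addresses) < 2:
              return 0.0
          hits = sum(1 for prev, cur in zip(addresses, addresses[1:])
                     if cur == prev + stride)
          return hits / (len(addresses) - 1)

      def choose_cache_size(indicator, min_mb=64, max_mb=1024):
          # Higher locality -> dedicating a larger portion of the first memory to
          # caching the second memory pays off (illustrative linear policy).
          return int(min_mb + indicator * (max_mb - min_mb))

      recent = [100, 101, 102, 200, 201, 202, 203]
      ind = locality_indicator(recent)
      print(f"locality={ind:.2f}, cache size={choose_cache_size(ind)} MB")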
  • Publication number: 20170083065
    Abstract: A three-dimensional (3-D) processor stack includes a plurality of processor cores implemented in a plurality of layers. A controller selectively throttles one or more of the plurality of processor cores in response to detecting a thermal event. The controller selects which cores to throttle based on values of thermal couplings between the plurality of layers and based on measures of criticality of threads executing on the plurality of processor cores.
    Type: Application
    Filed: September 22, 2015
    Publication date: March 23, 2017
    Inventors: Wei Huang, Manish Arora, Yasuko Eckert, Indrani Paul
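    A minimal sketch of the selection step described above (the coupling matrix, criticality scores, and core names are invented for illustration): throttle the cores that are most tightly coupled to the hot layer but run the least critical threads.

      # Hypothetical sketch: choose which cores in a 3-D stack to throttle when a
      # thermal event is detected in one layer.
      def select_cores_to_throttle(hot_layer, coupling, criticality, budget=2):
          """coupling[i][j]: thermal coupling between layers i and j.
          criticality[core]: (layer, score); higher score = more critical thread."""
          scores = []
          for core, (layer, crit) in criticality.items():
              # Strong coupling to the hot layer plus low criticality makes a core
              # a cheap throttling target.
              scores.append((coupling[hot_layer][layer] / (crit + 1e-6), core))
          scores.sort(reverse=True)
          return [core for _, core in scores[:budget]]

      coupling = [[1.0, 0.6], [0.6, 1.0]]          # two layers
      threads = {"c0": (0, 0.9), "c1": (0, 0.2), "c2": (1, 0.1), "c3": (1, 0.8)}
      print(select_cores_to_throttle(0, coupling, threads))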
  • Patent number: 9529718
    Abstract: To efficiently transfer data from a cache to a memory, it is desirable that more data corresponding to the same page in the memory be loaded in a row buffer. Writing data to a memory page that is not currently loaded in a row buffer requires closing an old page and opening a new page. Both operations consume energy and clock cycles and potentially delay more critical memory read requests. Hence, it is desirable to have more than one write going to the same DRAM page to amortize the cost of opening and closing DRAM pages. A desirable approach is to batch write backs to the same DRAM page by retaining modified blocks in the cache until a sufficient number of modified blocks belonging to the same memory page are ready for write back.
    Type: Grant
    Filed: December 12, 2014
    Date of Patent: December 27, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Syed Ali R. Jafri, Yasuko Eckert, Srilatha Manne, Mithuna S. Thottethodi, Gabriel H. Loh
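    The batching idea above can be sketched in a few lines (Python; the page size, threshold, and names are assumptions): dirty blocks are held back until enough of them map to the same DRAM page.

      # Hypothetical sketch: release write backs in batches per DRAM page so the
      # page open/close cost is amortized over several writes.
      from collections import defaultdict

      PAGE_SIZE = 4096
      BATCH_THRESHOLD = 4
      pending = defaultdict(list)      # DRAM page -> retained dirty block addresses

      def retire_dirty_block(addr):
          page = addr // PAGE_SIZE
          pending[page].append(addr)
          if len(pending[page]) >= BATCH_THRESHOLD:
              batch = pending.pop(page)
              print(f"write back {len(batch)} blocks to DRAM page {page}: {batch}")

      for a in [0x1000, 0x2040, 0x1040, 0x1080, 0x10C0]:
          retire_dirty_block(a)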
  • Patent number: 9524164
    Abstract: A system and method for efficient prediction and processing of memory access dependencies. A computing system includes control logic that marks a detected load instruction as a first type responsive to predicting that the load instruction has high locality and is a candidate for store-to-load (STL) data forwarding. The control logic marks the detected load instruction as a second type responsive to predicting that the load instruction has low locality and is not a candidate for STL data forwarding. The control logic processes a load instruction marked as the first type as if the load instruction is dependent on an older store operation. The control logic processes a load instruction marked as the second type as if the load instruction is independent of any older store operation.
    Type: Grant
    Filed: August 30, 2013
    Date of Patent: December 20, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Lena E. Olson, Yasuko Eckert, Srilatha Manne
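    A toy version of the classification step (the counter widths, threshold, and program counters below are invented): a per-PC saturating counter decides whether a load is treated as dependent on older stores.

      # Hypothetical sketch: mark loads as "first type" (likely store-to-load
      # forwarding candidates, processed as dependent on older stores) or
      # "second type" (processed as independent), keyed by the load's PC.
      class STLPredictor:
          def __init__(self, threshold=2):
              self.counters = {}                 # load PC -> saturating counter
              self.threshold = threshold

          def classify(self, pc):
              c = self.counters.get(pc, 0)
              if c >= self.threshold:
                  return "first_type_dependent"
              return "second_type_independent"

          def train(self, pc, forwarded):
              c = self.counters.get(pc, 0)
              self.counters[pc] = min(c + 1, 3) if forwarded else max(c - 1, 0)

      pred = STLPredictor()
      for _ in range(3):
          pred.train(0x400F10, forwarded=True)   # this load keeps hitting the store queue
      print(pred.classify(0x400F10), pred.classify(0x400F30))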
  • Patent number: 9507410
    Abstract: Entry/exit power gating logic detects a transition of a component of a processing device into an idle state. In response to detecting the transition, the entry/exit power gating logic selectively implements one or more entry prediction techniques for power gating the component based on estimates of the reliability of the entry prediction techniques. The entry/exit power gating logic also selectively implements one or more exit prediction techniques for exiting the power gated state based on estimates of the reliability of the exit prediction techniques.
    Type: Grant
    Filed: June 20, 2014
    Date of Patent: November 29, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Yasuko Eckert, Manish Arora, Indrani Paul
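    One way to read "selectively implements ... based on estimates of reliability" is sketched below (the techniques, accuracies, and break-even time are all assumptions): pick whichever entry-prediction technique has been most accurate lately, and gate only if it predicts a long enough idle period.

      BREAK_EVEN_US = 50.0             # assumed cost of entering/exiting power gating

      def pick_technique(techniques):
          # techniques: name -> (predict_fn, recent_accuracy in [0, 1])
          name, (predict, _) = max(techniques.items(), key=lambda kv: kv[1][1])
          return name, predict

      def should_power_gate(techniques, idle_history_us):
          name, predict = pick_technique(techniques)
          predicted_idle_us = predict(idle_history_us)
          return name, predicted_idle_us >= BREAK_EVEN_US

      techniques = {
          "last_value": (lambda h: h[-1], 0.62),
          "mean_of_4":  (lambda h: sum(h[-4:]) / len(h[-4:]), 0.78),
      }
      print(should_power_gate(techniques, idle_history_us=[40.0, 90.0, 70.0, 65.0]))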
  • Publication number: 20160338230
    Abstract: A cooling system controller for a set of computing resources of a data center includes a first interface to couple to a first flow controller that controls a rate of thermal energy transfer to a phase change material (PCM) store from the set of computing resources, a second interface to couple to a second flow controller that controls a rate of thermal energy transfer from the PCM store to a cooling system, and a controller to determine a current set of operational parameters for the data center and to manipulate the first and second flow controllers via the first and second interfaces to control a net thermal energy transfer to and from the PCM store based on the current set of operational parameters.
    Type: Application
    Filed: May 12, 2015
    Publication date: November 17, 2016
    Inventors: Fulya Kaplan, Manish Arora, Wayne P. Burleson, Indrani Paul, Yasuko Eckert
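    A simplified control loop in the spirit of the abstract (all quantities and limits are invented): charge the PCM store when the heat load exceeds cooling capacity, and drain it when there is spare capacity.

      # Hypothetical sketch: set the two flow controllers based on the current
      # operational parameters of the data center.
      def set_flow_rates(heat_load_kw, cooling_capacity_kw, pcm_charge_frac):
          surplus = heat_load_kw - cooling_capacity_kw
          if surplus > 0 and pcm_charge_frac < 1.0:
              # Divert the heat the cooling system cannot absorb into the PCM store.
              return {"to_pcm_kw": surplus, "from_pcm_kw": 0.0}
          if surplus <= 0 and pcm_charge_frac > 0.0:
              # Use spare cooling capacity to discharge the PCM store.
              return {"to_pcm_kw": 0.0, "from_pcm_kw": min(-surplus, 20.0)}
          return {"to_pcm_kw": 0.0, "from_pcm_kw": 0.0}

      print(set_flow_rates(heat_load_kw=120, cooling_capacity_kw=100, pcm_charge_frac=0.3))
      print(set_flow_rates(heat_load_kw=80, cooling_capacity_kw=100, pcm_charge_frac=0.6))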
  • Publication number: 20160266629
    Abstract: A method includes adjusting a maximum skin temperature threshold of a device based on a device state, adjusting a power limit for the device based on the adjusted maximum skin temperature threshold, and operating the device based on the adjusted power limit. A processor includes a processing unit and a power management controller to adjust a maximum skin temperature threshold based on a device state and adjust a power limit for the processing unit based on the adjusted maximum skin temperature threshold.
    Type: Application
    Filed: March 9, 2015
    Publication date: September 15, 2016
    Inventors: Ali Akbar Merrikh, Ashish Jain, Benjamin David Bates, Yasuko Eckert, Indrani Paul, Wei Huang, Manish Arora, Alexander Joseph Branover, Sridhar V. Gada, Andrew McNamara, Samuel David Naffziger, Steven Frederick Liepe, Madhu Saravana Sibi Govindan
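    The relationship between the skin-temperature threshold and the power limit can be sketched with a first-order thermal model (the device states, temperatures, and thermal resistance below are made-up examples, not values from the filing).

      # Hypothetical sketch: relax the skin-temperature threshold for device
      # states where the device is unlikely to be touched, then derive the power
      # limit from the adjusted threshold.
      SKIN_LIMITS_C = {"in_hand": 43.0, "on_lap": 45.0, "docked": 48.0}

      def power_limit_w(device_state, ambient_c=25.0, r_theta_c_per_w=2.0):
          skin_limit_c = SKIN_LIMITS_C[device_state]
          # Steady state: skin temperature ~ ambient + R_theta * power.
          return (skin_limit_c - ambient_c) / r_theta_c_per_w

      for state in SKIN_LIMITS_C:
          print(state, f"{power_limit_w(state):.1f} W")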
  • Patent number: 9443561
    Abstract: Embodiments are described for a communications interconnect scheme for 3D stacked memory devices. A ring network design is used for networks of memory chips organized as individual devices with multiple dies or wafers. The design comprises a three-tier ring network where each ring serves a different set of memory blocks. One ring or set of rings interconnects memory within a die (inter-bank), a second ring or set of rings interconnects memory across dies in a stack (inter-die), and the third ring or set of rings interconnects memory across stacks or chip packages (inter-stack).
    Type: Grant
    Filed: May 21, 2015
    Date of Patent: September 13, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: David Roberts, Yasuko Eckert, Mitesh Meswani, Indrani Paul
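    The three-tier addressing can be sketched as follows (the tuple layout and names are assumptions): a request crosses the inter-stack ring, then the inter-die ring, then the inter-bank ring, skipping any tier whose coordinate already matches.

      # Hypothetical sketch: list the rings a request traverses between two
      # (stack, die, bank) locations.
      def route(src, dst):
          hops = []
          if src[0] != dst[0]:
              hops.append(("inter-stack ring", src[0], dst[0]))
          if src[1] != dst[1]:
              hops.append(("inter-die ring", src[1], dst[1]))
          if src[2] != dst[2]:
              hops.append(("inter-bank ring", src[2], dst[2]))
          return hops or ["local bank access"]

      print(route(src=(0, 1, 3), dst=(1, 0, 3)))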
  • Patent number: 9442557
    Abstract: The described embodiments include a computing device with one or more entities (processor cores, processors, etc.). In some embodiments, during operation, a thermal power management unit in the computing device uses a linear prediction to compute a predicted duration of a next idle period for an entity based on the duration of one or more previous idle periods for the entity. Based on the predicted duration of the next idle period, the thermal power management unit configures the entity to operate in a corresponding idle state.
    Type: Grant
    Filed: November 8, 2013
    Date of Patent: September 13, 2016
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Manish Arora, Nuwan S. Jayasena, Yasuko Eckert, Madhu Saravana Sibi Govindan, William L. Bircher, Michael J. Schulte, Srilatha Manne
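    The linear-prediction step can be illustrated as below (the coefficients, idle states, and break-even times are assumptions): a weighted sum of recent idle periods predicts the next one, which then selects an idle state.

      # Hypothetical sketch: choose the deepest idle state whose break-even time
      # is covered by the predicted duration of the next idle period.
      IDLE_STATES = [("C1", 10.0), ("C3", 100.0), ("C6", 500.0)]   # (state, break-even us)

      def predict_next_idle_us(history, coeffs=(0.5, 0.3, 0.2)):
          recent = history[-len(coeffs):][::-1]                    # most recent first
          return sum(c * h for c, h in zip(coeffs, recent))

      def choose_idle_state(history):
          predicted = predict_next_idle_us(history)
          eligible = [s for s, break_even in IDLE_STATES if predicted >= break_even]
          return (eligible[-1] if eligible else "clock gating only"), predicted

      print(choose_idle_state([120.0, 400.0, 650.0]))              # -> ('C3', 469.0)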
  • Publication number: 20160246360
    Abstract: A processor prunes state information based on information provided by software, thereby reducing the amount of state information to be stored prior to the processor entering a low-power state. The software, such as an operating system or application program executing at the processor, indicates one or more registers of the processor as storing data that is no longer useful. When preparing to enter the low-power state, the processor omits the indicated registers from the state information stored to memory.
    Type: Application
    Filed: February 25, 2015
    Publication date: August 25, 2016
    Inventors: Yasuko Eckert, Derek Hower, Marc Orr
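    The pruning step amounts to filtering the register file against a software-provided "dead" set before the save, as in the sketch below (the register names and hint mechanism are invented for illustration).

      # Hypothetical sketch: skip saving registers that software flagged as dead.
      def save_state(registers, dead_hint):
          """registers: name -> value; dead_hint: names software marked as dead."""
          return {name: value for name, value in registers.items()
                  if name not in dead_hint}              # only this subset is stored

      regs = {"r0": 7, "r1": 42, "v0": [0] * 8, "v1": [1] * 8}
      print(save_state(regs, dead_hint={"v0", "v1"}))    # vector registers declared dead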
  • Publication number: 20160232097
    Abstract: An integrated circuit (IC) package includes a stacked-die memory device. The stacked-die memory device includes a set of one or more stacked memory dies implementing memory cell circuitry. The stacked-die memory device further includes a set of one or more logic dies electrically coupled to the memory cell circuitry. The set of one or more logic dies includes a query controller and a memory controller. The memory controller is coupleable to at least one device external to the stacked-die memory device. The query controller performs a query operation on data stored in the memory cell circuitry responsive to a query command received from the external device.
    Type: Application
    Filed: February 5, 2016
    Publication date: August 11, 2016
    Inventors: Gabriel H. Loh, Nuwan S. Jayasena, James M. O'Connor, Yasuko Eckert
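    The division of labor can be sketched as follows (the class and method names are invented): the external device sends a query command, and the logic die scans the memory in place and returns only the matching records.

      # Hypothetical sketch: a query controller on the logic die filters data
      # stored in the memory dies so only results cross the off-package link.
      class StackedMemoryDevice:
          def __init__(self, rows):
              self.rows = rows                     # data held in the memory dies

          def query(self, predicate):
              # Executed by the query controller, next to the memory cell circuitry.
              return [r for r in self.rows if predicate(r)]

      device = StackedMemoryDevice([{"id": 1, "temp": 71}, {"id": 2, "temp": 88}])
      print(device.query(lambda r: r["temp"] > 80))    # host receives one record, not all rows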
  • Patent number: 9378153
    Abstract: A level of cache memory receives modified data from a higher level of cache memory. A set of cache lines with an index associated with the modified data is identified. The modified data is stored in that set, in a cache line whose eviction priority is at least as high as the highest eviction priority among the set's unmodified cache lines as measured before the modified data is stored.
    Type: Grant
    Filed: August 27, 2013
    Date of Patent: June 28, 2016
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Syed Ali Jafri, Yasuko Eckert, Srilatha Manne
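    The placement rule can be sketched as below (field names and the priority scale are assumptions; higher priority means evicted sooner): the incoming modified line may only displace a line whose eviction priority is at least the highest priority found among the set's unmodified lines.

      # Hypothetical sketch: choose a victim way for incoming modified data.
      def place_modified(cache_set, data):
          """cache_set: list of dicts with 'priority', 'dirty', and 'data'."""
          clean = [line["priority"] for line in cache_set if not line["dirty"]]
          floor = max(clean) if clean else 0
          victim = max((i for i, line in enumerate(cache_set) if line["priority"] >= floor),
                       key=lambda i: cache_set[i]["priority"])
          cache_set[victim].update(dirty=True, data=data)
          return victim

      ways = [{"priority": 3, "dirty": False, "data": "a"},
              {"priority": 1, "dirty": True,  "data": "b"},
              {"priority": 2, "dirty": False, "data": "c"}]
      print(place_modified(ways, "incoming"))          # -> way 0 (priority 3, was clean)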
  • Patent number: 9372803
    Abstract: A system and method are presented. Some embodiments include a processing unit, at least one memory coupled to the processing unit, and at least one cache coupled to the processing unit and divided into a series of blocks, wherein at least one of the cache blocks includes data identified as being in a modified state. The modified state data is flushed by writing the data to the at least one memory based on a write back policy, and the aggressiveness of the policy is based on at least one factor including the number of idle cores, how recently the last cache flush occurred, the activity of the thread associated with the data, which cores are idle, and whether an idle core is associated with the data.
    Type: Grant
    Filed: December 20, 2012
    Date of Patent: June 21, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Srilatha Manne, Michael Schulte, Lloyd Bircher, Madhu Saravana Sibi Govindan, Yasuko Eckert
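    One way to combine the listed factors into a single knob is sketched below (the weights, scale, and inputs are invented): the score decides how eagerly modified blocks are flushed in the next interval.

      # Hypothetical sketch: 0.0 means flush lazily, 1.0 means flush everything now.
      def flush_aggressiveness(idle_cores, total_cores, secs_since_last_flush,
                               owner_thread_active, owner_core_idle):
          score = 0.5 * (idle_cores / total_cores)
          score += 0.3 * min(secs_since_last_flush / 10.0, 1.0)
          score += 0.0 if owner_thread_active else 0.1
          score += 0.1 if owner_core_idle else 0.0
          return score

      print(flush_aggressiveness(idle_cores=3, total_cores=4,
                                 secs_since_last_flush=12.0,
                                 owner_thread_active=False, owner_core_idle=True))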
  • Publication number: 20160170887
    Abstract: To efficiently transfer data from a cache to a memory, it is desirable that more data corresponding to the same page in the memory be loaded in a row buffer. Writing data to a memory page that is not currently loaded in a row buffer requires closing an old page and opening a new page. Both operations consume energy and clock cycles and potentially delay more critical memory read requests. Hence, it is desirable to have more than one write going to the same DRAM page to amortize the cost of opening and closing DRAM pages. A desirable approach is to batch write backs to the same DRAM page by retaining modified blocks in the cache until a sufficient number of modified blocks belonging to the same memory page are ready for write back.
    Type: Application
    Filed: December 12, 2014
    Publication date: June 16, 2016
    Inventors: Syed Ali R. Jafri, Yasuko Eckert, Srilatha Manne, Mithuna S. Thottethodi, Gabriel H. Loh
  • Publication number: 20160170919
    Abstract: A system includes a plurality of memory classes and a set of one or more processing units coupled to the plurality of memory classes. The system further includes a data migration controller to select a traffic rate as a maximum traffic rate for transferring data between the plurality of memory classes based on a net benefit metric associated with the traffic rate, and to enforce the maximum traffic rate for transferring data between the plurality of memory classes.
    Type: Application
    Filed: December 15, 2014
    Publication date: June 16, 2016
    Inventors: Sergey Blagodurov, Gabriel H. Loh, Yasuko Eckert
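    The "net benefit" selection can be sketched as below (the benefit and cost models and the candidate rates are toy assumptions): the controller picks the candidate rate whose estimated benefit minus cost is largest and enforces it as the migration cap.

      # Hypothetical sketch: choose a maximum inter-memory migration rate.
      def pick_max_traffic_rate(candidate_rates, benefit_fn, cost_fn):
          return max(candidate_rates, key=lambda r: benefit_fn(r) - cost_fn(r))

      benefit = lambda r: 100.0 * (1.0 - 2.0 ** (-r / 50.0))   # saturating gain
      cost = lambda r: 0.8 * r                                 # bandwidth/energy spent migrating
      rates_mb_s = [0, 25, 50, 100, 200]
      print("migration cap:", pick_max_traffic_rate(rates_mb_s, benefit, cost), "MB/s")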
  • Patent number: 9367455
    Abstract: The described embodiments include a core that uses predictions for store-to-load forwarding. In the described embodiments, the core comprises a load-store unit, a store buffer, and a prediction mechanism. During operation, the prediction mechanism generates a prediction that a load will be satisfied using data forwarded from the store buffer because the load loads data from a memory location in a stack. Based on the prediction, the load-store unit first sends a request for the data to the store buffer in an attempt to satisfy the load using data forwarded from the store buffer. If data is returned from the store buffer, the load is satisfied using the data. However, if the attempt to satisfy the load using data forwarded from the store buffer is unsuccessful, the load-store unit then separately sends a request for the data to a cache to satisfy the load.
    Type: Grant
    Filed: September 5, 2013
    Date of Patent: June 14, 2016
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Yasuko Eckert, Lena E. Olson, Srilatha Manne, James M. O'Connor
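    The lookup ordering described above can be sketched as follows (the address ranges and data structures are invented stand-ins for the store buffer and cache): predicted-forwarded loads try the store buffer first and fall back to the cache only if forwarding fails.

      # Hypothetical sketch: store-buffer-first lookup for predicted stack loads.
      def execute_load(addr, predict_forwarded, store_buffer, cache):
          if predict_forwarded(addr):
              data = store_buffer.get(addr)      # attempt store-to-load forwarding
              if data is not None:
                  return data, "forwarded from store buffer"
              return cache[addr], "fallback to cache"
          return cache[addr], "cache (no forwarding predicted)"

      store_buffer = {0x7FFFF000: "spilled register value"}
      cache = {0x7FFFF000: "stale copy", 0x60000000: "heap value"}
      is_stack = lambda a: a >= 0x70000000       # crude stack-region check
      print(execute_load(0x7FFFF000, is_stack, store_buffer, cache))
      print(execute_load(0x60000000, is_stack, store_buffer, cache))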
  • Patent number: 9298615
    Abstract: A method of partitioning a data cache is provided, the data cache comprising a plurality of sets, each set comprising a plurality of ways. Responsive to a stack data request, the method stores a cache line associated with the stack data in one of a plurality of designated ways of the data cache, wherein the plurality of designated ways is configured to store all requested stack data.
    Type: Grant
    Filed: July 19, 2013
    Date of Patent: March 29, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Lena E. Olson, Yasuko Eckert, Vilas K. Sridharan, James M. O'Connor, Mark D. Hill, Srilatha Manne
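    The way-partitioning rule can be sketched as below (the way counts and replacement policy are assumptions): stack fills may only victimize the designated ways, while other fills use the remaining ways.

      # Hypothetical sketch: per-request way partitioning in an 8-way set.
      STACK_WAYS = {0, 1}                         # designated ways for stack data
      OTHER_WAYS = set(range(2, 8))

      def fill_way(lru_order, is_stack_access):
          """lru_order: way indices from least to most recently used."""
          allowed = STACK_WAYS if is_stack_access else OTHER_WAYS
          return next(w for w in lru_order if w in allowed)

      lru_order = [5, 1, 7, 0, 3, 2, 6, 4]
      print("stack fill ->", fill_way(lru_order, is_stack_access=True))    # way 1
      print("other fill ->", fill_way(lru_order, is_stack_access=False))   # way 5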
  • Publication number: 20160085219
    Abstract: A processing device includes a plurality of components and a system management unit to selectively schedule an application phase to one of the plurality of components based on one or more comparisons of predictions of a plurality of thermal impacts of executing the application phase on each of the plurality of components. The predictions may be generated based on a thermal history associated with the application phase, thermal sensitivities of the plurality of components, or a layout of the plurality of components in the processing device.
    Type: Application
    Filed: September 22, 2014
    Publication date: March 24, 2016
    Inventors: Indrani Paul, Manish Arora, Yasuko Eckert, Srilatha Manne
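    The comparison step can be sketched as below (the phase names, history values, and sensitivity factors are invented): the scheduler predicts a temperature rise per component from the phase's thermal history and each component's sensitivity, then picks the smallest.

      # Hypothetical sketch: schedule an application phase onto the component
      # with the lowest predicted thermal impact.
      def schedule_phase(phase, components, thermal_history, sensitivity):
          def predicted_rise_c(comp):
              baseline = thermal_history.get((phase, comp), 5.0)   # default guess, deg C
              return baseline * sensitivity[comp]
          return min(components, key=predicted_rise_c)

      history = {("fft", "big_core"): 8.0, ("fft", "little_core"): 6.0, ("fft", "gpu"): 3.0}
      sensitivity = {"big_core": 1.2, "little_core": 0.9, "gpu": 1.0}
      print(schedule_phase("fft", ["big_core", "little_core", "gpu"], history, sensitivity))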
  • Publication number: 20160086654
    Abstract: A method of managing thermal levels in a memory system may include determining an expected thermal level associated with each of a plurality of locations in a memory structure, and, for each operation of a plurality of operations addressed to the memory structure, assigning the operation to a target location among the plurality of locations based on a thermal penalty associated with the operation and the expected thermal level associated with the target location.
    Type: Application
    Filed: September 21, 2014
    Publication date: March 24, 2016
    Inventors: Manish Arora, Indrani Paul, Yasuko Eckert, Nuwan Jayasena, Dong Ping Zhang
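    The assignment rule can be sketched as below (the thermal limit, temperatures, and penalty are invented): an operation goes to the coolest location that would stay under the limit after absorbing the operation's thermal penalty.

      # Hypothetical sketch: thermally aware placement of a memory operation.
      def assign_location(op_penalty_c, expected_temp_c, limit_c=85.0):
          safe = {loc: t for loc, t in expected_temp_c.items()
                  if t + op_penalty_c <= limit_c}
          pool = safe or expected_temp_c          # fall back to coolest overall
          return min(pool, key=pool.get)

      expected = {"bank0": 68.0, "bank1": 61.0, "bank2": 84.0}
      print(assign_location(op_penalty_c=3.0, expected_temp_c=expected))   # -> bank1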
  • Publication number: 20160077545
    Abstract: A processing device includes a producing processor unit in a first timing domain and a consuming processor unit in a second timing domain that is asynchronous with the first timing domain. A queue is used to convey data between the producing processor unit and the consuming processor unit. A system management unit modifies the operating frequency, the operating voltage, or both, of the producing processor unit, the consuming processor unit, or both, based on a rate of change of the fullness of the queue.
    Type: Application
    Filed: September 17, 2014
    Publication date: March 17, 2016
    Inventors: Wayne P. Burleson, Manish Arora, Indrani Paul, Yasuko Eckert
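    The control rule can be sketched as below (the step size, limits, and thresholds are invented): a queue that is filling means the consumer is falling behind and should be sped up, while a draining queue lets it slow down and save power.

      # Hypothetical sketch: adjust the consumer's frequency from the rate of
      # change of queue fullness (fullness expressed as a fraction in [0, 1]).
      def adjust_consumer_frequency(freq_mhz, fullness_samples, step_mhz=100,
                                    fmin_mhz=400, fmax_mhz=2000):
          rate = fullness_samples[-1] - fullness_samples[-2]   # change per interval
          if rate > 0.05:                  # filling: consumer is too slow
              return min(freq_mhz + step_mhz, fmax_mhz)
          if rate < -0.05:                 # draining: consumer can slow down
              return max(freq_mhz - step_mhz, fmin_mhz)
          return freq_mhz

      print(adjust_consumer_frequency(1000, fullness_samples=[0.40, 0.55]))   # -> 1100
      print(adjust_consumer_frequency(1000, fullness_samples=[0.55, 0.40]))   # -> 900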