Patents by Inventor Aaron C. Sawdey

Aaron C. Sawdey has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9612974
    Abstract: A method for storing service level agreement (“SLA”) compliance data includes reserving a memory location to store SLA compliance data of a software thread. The method includes directing the software thread to run on a selected hardware device. The method includes enabling SLA compliance data to be stored in the memory location. The SLA compliance data is from a hardware counting device in communication with the selected hardware device. The SLA compliance data corresponds to operation of the software thread on the selected hardware device.
    Type: Grant
    Filed: September 24, 2015
    Date of Patent: April 4, 2017
    Assignee: International Business Machines Corporation
    Inventors: Rajarshi Das, Aaron C. Sawdey, Philip L. Vitale
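
The method in patent 9612974 above reduces to three steps: reserve a location for the compliance data, direct the thread to a chosen hardware device, and let a counting device deposit its data in the reserved location. A minimal user-space sketch in C follows, assuming a Linux-style environment; the struct and function names, the use of pthread_attr_setaffinity_np to "direct" the thread, and the thread CPU-time clock standing in for the hardware counting device are all illustrative assumptions, not details from the patent.

    /* Hypothetical illustration only: sla_record_t, worker, and the use of the
     * thread CPU-time clock as the "hardware counting device" are assumptions. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    typedef struct {            /* reserved memory location for SLA compliance data */
        long   thread_id;
        double cpu_seconds;     /* data attributed to the software thread */
    } sla_record_t;

    static sla_record_t sla_slot;   /* the reserved location */

    static void *worker(void *arg)
    {
        struct timespec t0, t1;
        volatile double x = 0.0;

        clock_gettime(CLOCK_THREAD_CPUTIME_ID, &t0);   /* enable counting */
        for (long i = 0; i < 10 * 1000 * 1000; i++)    /* the monitored work */
            x += (double)i;
        clock_gettime(CLOCK_THREAD_CPUTIME_ID, &t1);

        /* store compliance data for the thread in the reserved location */
        sla_slot.thread_id   = (long)(intptr_t)arg;
        sla_slot.cpu_seconds = (t1.tv_sec - t0.tv_sec)
                             + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_attr_t attr;
        cpu_set_t set;

        /* direct the software thread to run on a selected hardware device (CPU 0) */
        pthread_attr_init(&attr);
        CPU_ZERO(&set);
        CPU_SET(0, &set);
        pthread_attr_setaffinity_np(&attr, sizeof(set), &set);

        pthread_create(&tid, &attr, worker, (void *)(intptr_t)1);
        pthread_join(tid, NULL);

        printf("thread %ld used %.6f s of CPU time\n",
               sla_slot.thread_id, sla_slot.cpu_seconds);
        return 0;
    }
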
  • Patent number: 9600336
    Abstract: An apparatus for storing service level agreement (“SLA”) compliance data is disclosed. A method and a computer program product also perform the functions of the apparatus. The apparatus includes a reservation module that reserves a memory location to store SLA compliance data of a software thread. The apparatus includes a directing module that directs the software thread to run on a selected hardware device. The apparatus includes an enabling module that enables SLA compliance data to be stored in the memory location. The SLA compliance data is from a hardware counting device in communication with the selected hardware device. The SLA compliance data corresponds to operation of the software thread on the selected hardware device. At least a portion of the reservation, directing, and enabling modules includes one or more of hardware and program instructions. The program instructions are stored on one or more computer readable storage media.
    Type: Grant
    Filed: August 28, 2015
    Date of Patent: March 21, 2017
    Assignee: International Business Machines Corporation
    Inventors: Rajarshi Das, Aaron C. Sawdey, Philip L. Vitale
  • Publication number: 20170060766
    Abstract: A method for storing service level agreement (“SLA”) compliance data includes reserving a memory location to store SLA compliance data of a software thread. The method includes directing the software thread to run on a selected hardware device. The method includes enabling SLA compliance data to be stored in the memory location. The SLA compliance data is from a hardware counting device in communication with the selected hardware device. The SLA compliance data corresponds to operation of the software thread on the selected hardware device.
    Type: Application
    Filed: September 24, 2015
    Publication date: March 2, 2017
    Inventors: Rajarshi Das, Aaron C. Sawdey, Philip L. Vitale
  • Publication number: 20170060631
    Abstract: An apparatus for storing service level agreement (“SLA”) compliance data is disclosed. A method and a computer program product also perform the functions of the apparatus. The apparatus includes a reservation module that reserves a memory location to store SLA compliance data of a software thread. The apparatus includes a directing module that directs the software thread to run on a selected hardware device. The apparatus includes an enabling module that enables SLA compliance data to be stored in the memory location. The SLA compliance data is from a hardware counting device in communication with the selected hardware device. The SLA compliance data corresponds to operation of the software thread on the selected hardware device. At least a portion of the reservation, directing, and enabling modules includes one or more of hardware and program instructions. The program instructions are stored on one or more computer readable storage media.
    Type: Application
    Filed: August 28, 2015
    Publication date: March 2, 2017
    Inventors: Rajarshi Das, Aaron C. Sawdey, Philip L. Vitale
  • Patent number: 9298651
    Abstract: In-memory accumulation of hardware counts in a computer system is carried out by continuously sending count values from full-speed hardware counter units to a memory controller. A sending unit periodically samples performance data from the hardware counter units, and transmits count values to a bus interface for an interconnection bus which communicates with the memory controller. The memory controller responsively updates an accumulated count value stored in system memory using the current count value, e.g., incrementing the accumulated count value. A count value can be sent with a pointer to a memory location and an instruction on how the location is to be updated. The instruction may be an atomic read-modify-write operation, and the memory controller can include a dedicated arithmetic logic unit to carry out that operation. A data harvester can then be used to harvest accumulated count values by reading them from a table in system memory.
    Type: Grant
    Filed: June 24, 2013
    Date of Patent: March 29, 2016
    Assignee: International Business Machines Corporation
    Inventors: Peter J. Heyrman, Venkat R. Indukuru, Carl E. Love, Aaron C. Sawdey, Philip L. Vitale
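
The accumulation scheme in patent 9298651 above comes down to posting (location, count) updates that the memory controller applies as atomic read-modify-write operations on a table in system memory, which a data harvester later reads. A small C sketch follows; the C11 atomic fetch-and-add models the controller's dedicated ALU path, and the struct and function names are assumptions made only for illustration.

    /* Hypothetical sketch, not the patented hardware: the memory controller's
     * atomic read-modify-write is modeled with a C11 atomic fetch-and-add on a
     * table of accumulated counts held in ordinary memory. */
    #include <stdatomic.h>
    #include <stdio.h>

    #define NUM_COUNTERS 4

    /* table of accumulated count values in "system memory" */
    static atomic_ulong accum_table[NUM_COUNTERS];

    /* A posted update: pointer to the location plus the sampled count value.
     * In the abstract this also carries an instruction on how to update;
     * here the operation is fixed to "add". */
    struct count_update {
        atomic_ulong *location;
        unsigned long value;
    };

    /* Stand-in for the memory controller applying the update. */
    static void memory_controller_apply(struct count_update u)
    {
        atomic_fetch_add(u.location, u.value);   /* atomic read-modify-write */
    }

    int main(void)
    {
        /* sending unit: periodically sampled counts from hardware counter units */
        unsigned long samples[NUM_COUNTERS] = { 100, 250, 75, 300 };

        for (int i = 0; i < NUM_COUNTERS; i++) {
            struct count_update u = { &accum_table[i], samples[i] };
            memory_controller_apply(u);
        }

        /* data harvester: read accumulated values from the table */
        for (int i = 0; i < NUM_COUNTERS; i++)
            printf("counter %d accumulated %lu\n",
                   i, (unsigned long)atomic_load(&accum_table[i]));
        return 0;
    }
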
  • Patent number: 9218292
    Abstract: A technique for scheduling cache cleaning operations maintains a clean distance between a set of least-recently-used (LRU) clean lines and the LRU dirty (modified) line for each congruence class in the cache. The technique is generally employed at a victim cache at the highest-order level of the cache memory hierarchy, so that write-backs to system memory are scheduled to avoid having to generate a write-back in response to a cache miss in the next lower-order level of the cache memory hierarchy. The clean distance can be determined by counting all of the LRU clean lines in each congruence class that have a reference count that is less than or equal to the reference count of the LRU dirty line.
    Type: Grant
    Filed: June 18, 2013
    Date of Patent: December 22, 2015
    Assignee: International Business Machines Corporation
    Inventors: Benjiman L. Goodman, Jody B. Joyner, Stephen J. Powell, Aaron C. Sawdey, Jeffrey A. Stuecheli
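
The clean-distance computation described in patent 9218292 above can be stated directly in code: find the LRU dirty line of a congruence class, then count the clean lines whose reference counts do not exceed its reference count. The C sketch below assumes reference counts approximate recency (lower means closer to LRU); the struct and function names are illustrative, not taken from the patent.

    /* Hypothetical sketch of computing the "clean distance" for one congruence
     * class, assuming lower reference counts mean closer to LRU. */
    #include <stdbool.h>
    #include <stdio.h>

    struct cache_line {
        bool     dirty;      /* modified relative to system memory */
        unsigned ref_count;  /* per-line reference counter */
    };

    /* Count the LRU clean lines whose reference count is less than or equal
     * to the reference count of the LRU dirty line. */
    static int clean_distance(const struct cache_line *set, int ways)
    {
        int lru_dirty = -1;
        for (int i = 0; i < ways; i++)               /* find the LRU dirty line */
            if (set[i].dirty &&
                (lru_dirty < 0 || set[i].ref_count < set[lru_dirty].ref_count))
                lru_dirty = i;

        if (lru_dirty < 0)
            return ways;                             /* no dirty lines: nothing to clean */

        int distance = 0;
        for (int i = 0; i < ways; i++)               /* clean lines at or below it */
            if (!set[i].dirty && set[i].ref_count <= set[lru_dirty].ref_count)
                distance++;
        return distance;
    }

    int main(void)
    {
        struct cache_line set[8] = {
            { false, 1 }, { true, 4 }, { false, 2 }, { false, 7 },
            { true, 9 }, { false, 3 }, { false, 5 }, { false, 0 },
        };
        /* If the distance falls below a threshold, a write-back of the LRU dirty
         * line would be scheduled before a miss forces it. */
        printf("clean distance = %d\n", clean_distance(set, 8));
        return 0;
    }
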
  • Patent number: 9213647
    Abstract: A technique for scheduling cache cleaning operations maintains a clean distance between a set of least-recently-used (LRU) clean lines and the LRU dirty (modified) line for each congruence class in the cache. The technique is generally employed at a victim cache at the highest-order level of the cache memory hierarchy, so that write-backs to system memory are scheduled to avoid having to generate a write-back in response to a cache miss in the next lower-order level of the cache memory hierarchy. The clean distance can be determined by counting all of the LRU clean lines in each congruence class that have a reference count that is less than or equal to the reference count of the LRU dirty line.
    Type: Grant
    Filed: September 23, 2013
    Date of Patent: December 15, 2015
    Assignee: International Business Machines Corporation
    Inventors: Benjiman L. Goodman, Jody B. Joyner, Stephen J. Powell, Aaron C. Sawdey, Jeffrey A. Stuecheli
  • Patent number: 8930625
    Abstract: A mechanism is provided for weighted history allocation prediction. For each member in a plurality of members in a lower level cache, an associated reference counter is initialized to an initial value based on an operation type that caused data to be allocated to a member location of the member. For each access to the member in the lower level cache, the associated reference counter is incremented. Responsive to a new allocation of data to the lower level cache and responsive to the new allocation of data requiring the victimization of another member in the lower level cache, a member of the lower level cache is identified that has a lowest reference count value in its associated reference counter. The member with the lowest reference count value in its associated reference counter is then evicted.
    Type: Grant
    Filed: September 12, 2012
    Date of Patent: January 6, 2015
    Assignee: International Business Machines Corporation
    Inventors: David M. Daly, Benjiman L. Goodman, Stephen J. Powell, Aaron C. Sawdey, Jeffrey A. Stuecheli
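
The weighted history allocation prediction of patent 8930625 above amounts to: seed each member's reference counter from the type of operation that allocated it, increment the counter on every access, and evict the member with the smallest counter when a victim is needed. The C sketch below follows that outline; the specific initial weights per operation type are assumptions chosen only for illustration.

    /* Hypothetical sketch of weighted-history victim selection. The initial
     * weights per operation type are illustrative, not values from the patent. */
    #include <stdio.h>

    enum op_type { OP_DEMAND_LOAD, OP_STORE, OP_PREFETCH };

    struct member {
        int      valid;
        unsigned ref_count;    /* associated reference counter */
    };

    /* Initial counter value chosen by the operation type that allocated the line. */
    static unsigned initial_weight(enum op_type op)
    {
        switch (op) {
        case OP_DEMAND_LOAD: return 3;   /* assumed weights for illustration */
        case OP_STORE:       return 2;
        case OP_PREFETCH:    return 1;
        }
        return 0;
    }

    static void on_access(struct member *m) { m->ref_count++; }

    static void allocate(struct member *m, enum op_type op)
    {
        m->valid = 1;
        m->ref_count = initial_weight(op);
    }

    /* On a new allocation that needs a victim, evict the member with the
     * lowest reference count. */
    static int pick_victim(const struct member *set, int ways)
    {
        int victim = 0;
        for (int i = 1; i < ways; i++)
            if (set[i].ref_count < set[victim].ref_count)
                victim = i;
        return victim;
    }

    int main(void)
    {
        struct member set[4];
        allocate(&set[0], OP_DEMAND_LOAD);
        allocate(&set[1], OP_PREFETCH);
        allocate(&set[2], OP_STORE);
        allocate(&set[3], OP_DEMAND_LOAD);

        on_access(&set[1]);                 /* the prefetched line gets one hit */
        on_access(&set[2]);
        on_access(&set[2]);

        printf("victim way = %d\n", pick_victim(set, 4));   /* lowest counter wins */
        return 0;
    }
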
  • Publication number: 20140379953
    Abstract: In-memory accumulation of hardware counts in a computer system is carried out by continuously sending count values from full-speed hardware counter units to a memory controller. A sending unit periodically samples performance data from the hardware counter units, and transmits count values to a bus interface for an interconnection bus which communicates with the memory controller. The memory controller responsively updates an accumulated count value stored in system memory using the current count value, e.g., incrementing the accumulated count value. A count value can be sent with a pointer to a memory location and an instruction on how the location is to be updated. The instruction may be an atomic read-modify-write operation, and the memory controller can include a dedicated arithmetic logic unit to carry out that operation. A data harvester can then be used to harvest accumulated count values by reading them from a table in system memory.
    Type: Application
    Filed: June 24, 2013
    Publication date: December 25, 2014
    Inventors: Peter J. Heyrman, Venkat R. Indukuru, Carl E. Love, Aaron C. Sawdey, Philip L. Vitale
  • Publication number: 20140372705
    Abstract: A technique for scheduling cache cleaning operations maintains a clean distance between a set of least-recently-used (LRU) clean lines and the LRU dirty (modified) line for each congruence class in the cache. The technique is generally employed at a victim cache at the highest-order level of the cache memory hierarchy, so that write-backs to system memory are scheduled to avoid having to generate a write-back in response to a cache miss in the next lower-order level of the cache memory hierarchy. The clean distance can be determined by counting all of the LRU clean lines in each congruence class that have a reference count that is less than or equal to the reference count of the LRU dirty line.
    Type: Application
    Filed: September 23, 2013
    Publication date: December 18, 2014
    Applicant: International Business Machines Corporation
    Inventors: Benjiman L. Goodman, Jody B. Joyner, Stephen J. Powell, Aaron C. Sawdey, Jeffrey A. Stuecheli
  • Publication number: 20140372704
    Abstract: A technique for scheduling cache cleaning operations maintains a clean distance between a set of least-recently-used (LRU) clean lines and the LRU dirty (modified) line for each congruence class in the cache. The technique is generally employed at a victim cache at the highest-order level of the cache memory hierarchy, so that write-backs to system memory are scheduled to avoid having to generate a write-back in response to a cache miss in the next lower-order level of the cache memory hierarchy. The clean distance can be determined by counting all of the LRU clean lines in each congruence class that have a reference count that is less than or equal to the reference count of the LRU dirty line.
    Type: Application
    Filed: June 18, 2013
    Publication date: December 18, 2014
    Inventors: Benjiman L. Goodman, Jody B. Joyner, Stephen J. Powell, Aaron C. Sawdey, Jeffrey A. Stuecheli
  • Patent number: 8910125
    Abstract: Systems, methods and computer program products may provide monitoring of software performance on a computer. A method of monitoring software performance in a computer may include marking at least one of a load request and a store request, the marked request including an effective instruction address and an effective data address, recording the effective instruction and data addresses in a processor core and sending the marked request to a memory subsystem. The method may also include receiving a fabric response for the marked request, recording the fabric response in the core and tying the effective instruction and data addresses and the fabric response together in a sample.
    Type: Grant
    Filed: September 27, 2012
    Date of Patent: December 9, 2014
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Randall R. Heisch, Venkat R. Indukuru, Aaron C. Sawdey
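
Patent 8910125 above centers on tying a marked request's effective instruction address, effective data address, and fabric response together in one sample. The C sketch below shows such a sample as a plain data structure with a stand-in for the memory subsystem's response; the field names and response codes are illustrative assumptions, not the patent's encoding.

    /* Hypothetical data-structure sketch: the fabric-response codes and the
     * toy memory-subsystem function are assumptions for illustration. */
    #include <stdint.h>
    #include <stdio.h>

    enum fabric_response { RESP_L2_HIT, RESP_L3_HIT, RESP_MEMORY };   /* assumed codes */

    struct perf_sample {
        uint64_t effective_instr_addr;   /* recorded in the core when the request is marked */
        uint64_t effective_data_addr;
        enum fabric_response resp;       /* recorded when the fabric response returns */
    };

    /* Stand-in for the memory subsystem answering a marked request. */
    static enum fabric_response send_marked_request(uint64_t data_addr)
    {
        return (data_addr & 0xfff) ? RESP_L3_HIT : RESP_MEMORY;   /* toy decision */
    }

    int main(void)
    {
        struct perf_sample s = { 0x10002f40ULL, 0x7fffc0d0ULL, RESP_L2_HIT };

        s.resp = send_marked_request(s.effective_data_addr);   /* tie the response in */

        printf("sample: ia=%#llx da=%#llx resp=%d\n",
               (unsigned long long)s.effective_instr_addr,
               (unsigned long long)s.effective_data_addr, (int)s.resp);
        return 0;
    }
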
  • Patent number: 8843707
    Abstract: A mechanism is provided for dynamic cache allocation using bandwidth. A bandwidth between a higher level cache and a lower level cache is monitored. Responsive to bandwidth usage between the higher level cache and the lower level cache being below a predetermined low bandwidth threshold, the higher level cache and the lower level cache are set to operate in accordance with a first allocation policy. Responsive to bandwidth usage between the higher level cache and the lower level cache being above a predetermined high bandwidth threshold, the higher level cache and the lower level cache are set to operate in accordance with a second allocation policy.
    Type: Grant
    Filed: December 9, 2011
    Date of Patent: September 23, 2014
    Assignee: International Business Machines Corporation
    Inventors: David M. Daly, Benjiman L. Goodman, Stephen J. Powell, Aaron C. Sawdey, Jeffrey A. Stuecheli
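
The bandwidth-driven switch in patent 8843707 above is a two-threshold rule: use the first allocation policy when inter-cache bandwidth drops below a low threshold, switch to the second when it rises above a high threshold, and otherwise keep the current policy. The C sketch below illustrates that hysteresis; the threshold values and the example policy meanings in the comments are assumptions, not figures from the patent.

    /* Hypothetical sketch of the bandwidth-driven policy switch with hysteresis.
     * Threshold values and policy meanings are illustrative assumptions. */
    #include <stdio.h>

    enum alloc_policy { POLICY_FIRST, POLICY_SECOND };

    #define LOW_BW_THRESHOLD   2.0   /* GB/s, assumed for illustration */
    #define HIGH_BW_THRESHOLD  8.0

    static enum alloc_policy update_policy(double bandwidth_gbps,
                                           enum alloc_policy current)
    {
        if (bandwidth_gbps < LOW_BW_THRESHOLD)
            return POLICY_FIRST;     /* e.g. allocate in both cache levels */
        if (bandwidth_gbps > HIGH_BW_THRESHOLD)
            return POLICY_SECOND;    /* e.g. bypass the higher level cache */
        return current;              /* between thresholds: keep the current policy */
    }

    int main(void)
    {
        double samples[] = { 1.5, 4.0, 9.2, 6.5, 1.0 };   /* monitored bandwidth */
        enum alloc_policy p = POLICY_FIRST;

        for (int i = 0; i < 5; i++) {
            p = update_policy(samples[i], p);
            printf("bw=%.1f GB/s -> policy %s\n",
                   samples[i], p == POLICY_FIRST ? "first" : "second");
        }
        return 0;
    }
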
  • Patent number: 8788757
    Abstract: A mechanism is provided for dynamic cache allocation using a cache hit rate. A first cache hit rate is monitored in a first subset utilizing a first allocation policy of N sets of a lower level cache. A second cache hit rate is also monitored in a second subset utilizing a second allocation policy different from the first allocation policy of the N sets of the lower level cache. A periodic comparison of the first cache hit rate to the second cache hit rate is made to identify a third allocation policy for a third subset of the N-sets of the lower level cache. The third allocation policy for the third subset is then periodically adjusted to at least one of the first allocation policy or the second allocation policy based on the comparison of the first cache hit rate to the second cache hit rate.
    Type: Grant
    Filed: December 9, 2011
    Date of Patent: July 22, 2014
    Assignee: International Business Machines Corporation
    Inventors: David M. Daly, Benjiman L. Goodman, Stephen J. Powell, Aaron C. Sawdey, Jeffrey A. Stuecheli
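
Patent 8788757 above describes a hit-rate comparison in the style of set dueling: two sampled subsets of the cache's N sets each run a fixed allocation policy, and a third, follower subset periodically adopts whichever policy is achieving the higher hit rate. The C sketch below shows that periodic comparison; the names and sample numbers are illustrative assumptions.

    /* Hypothetical sketch of the periodic hit-rate comparison between two
     * sampled subsets; counts and names are illustrative assumptions. */
    #include <stdio.h>

    enum alloc_policy { POLICY_A, POLICY_B };

    struct subset_stats {
        unsigned long hits;
        unsigned long accesses;
    };

    static double hit_rate(const struct subset_stats *s)
    {
        return s->accesses ? (double)s->hits / (double)s->accesses : 0.0;
    }

    /* Periodic comparison: choose the policy for the third (follower) subset. */
    static enum alloc_policy choose_follower_policy(const struct subset_stats *a,
                                                    const struct subset_stats *b)
    {
        return hit_rate(a) >= hit_rate(b) ? POLICY_A : POLICY_B;
    }

    int main(void)
    {
        struct subset_stats subset_a = { 820, 1000 };   /* runs the first policy  */
        struct subset_stats subset_b = { 905, 1000 };   /* runs the second policy */

        enum alloc_policy follower = choose_follower_policy(&subset_a, &subset_b);
        printf("follower subset adopts policy %s\n",
               follower == POLICY_A ? "A" : "B");
        return 0;
    }
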
  • Patent number: 8688915
    Abstract: A mechanism is provided for weighted history allocation prediction. For each member in a plurality of members in a lower level cache, an associated reference counter is initialized to an initial value based on an operation type that caused data to be allocated to a member location of the member. For each access to the member in the lower level cache, the associated reference counter is incremented. Responsive to a new allocation of data to the lower level cache and responsive to the new allocation of data requiring the victimization of another member in the lower level cache, a member of the lower level cache is identified that has a lowest reference count value in its associated reference counter. The member with the lowest reference count value in its associated reference counter is then evicted.
    Type: Grant
    Filed: December 9, 2011
    Date of Patent: April 1, 2014
    Assignee: International Business Machines Corporation
    Inventors: David M. Daly, Benjiman L. Goodman, Stephen J. Powell, Aaron C. Sawdey, Jeffrey A. Stuecheli
  • Publication number: 20140089902
    Abstract: Systems, methods and computer program products may provide monitoring of software performance on a computer. A method of monitoring software performance in a computer may include marking at least one of a load request and a store request, the marked request including an effective instruction address and an effective data address, recording the effective instruction and data addresses in a processor core and sending the marked request to a memory subsystem. The method may also include receiving a fabric response for the marked request, recording the fabric response in the core and tying the effective instruction and data addresses and the fabric response together in a sample.
    Type: Application
    Filed: September 27, 2012
    Publication date: March 27, 2014
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Randall R. Heisch, Venkat R. Indukuru, Aaron C. Sawdey
  • Publication number: 20130151777
    Abstract: A mechanism is provided for dynamic cache allocation using a cache hit rate. A first cache hit rate is monitored in a first subset utilizing a first allocation policy of N sets of a lower level cache. A second cache hit rate is also monitored in a second subset utilizing a second allocation policy different from the first allocation policy of the N sets of the lower level cache. A periodic comparison of the first cache hit rate to the second cache hit rate is made to identify a third allocation policy for a third subset of the N-sets of the lower level cache. The third allocation policy for the third subset is then periodically adjusted to at least one of the first allocation policy or the second allocation policy based on the comparison of the first cache hit rate to the second cache hit rate.
    Type: Application
    Filed: December 9, 2011
    Publication date: June 13, 2013
    Applicant: International Business Machines Corporation
    Inventors: David M. Daly, Benjiman L. Goodman, Stephen J. Powell, Aaron C. Sawdey, Jeffrey A. Stuecheli
  • Publication number: 20130151779
    Abstract: A mechanism is provided for weighted history allocation prediction. For each member in a plurality of members in a lower level cache, an associated reference counter is initialized to an initial value based on an operation type that caused data to be allocated to a member location of the member. For each access to the member in the lower level cache, the associated reference counter is incremented. Responsive to a new allocation of data to the lower level cache and responsive to the new allocation of data requiring the victimization of another member in the lower level cache, a member of the lower level cache is identified that has a lowest reference count value in its associated reference counter. The member with the lowest reference count value in its associated reference counter is then evicted.
    Type: Application
    Filed: December 9, 2011
    Publication date: June 13, 2013
    Applicant: International Business Machines Corporation
    Inventors: David M. Daly, Benjiman L. Goodman, Stephen J. Powell, Aaron C. Sawdey, Jeffrey A. Stuecheli
  • Publication number: 20130151780
    Abstract: A mechanism is provided for weighted history allocation prediction. For each member in a plurality of members in a lower level cache, an associated reference counter is initialized to an initial value based on an operation type that caused data to be allocated to a member location of the member. For each access to the member in the lower level cache, the associated reference counter is incremented. Responsive to a new allocation of data to the lower level cache and responsive to the new allocation of data requiring the victimization of another member in the lower level cache, a member of the lower level cache is identified that has a lowest reference count value in its associated reference counter. The member with the lowest reference count value in its associated reference counter is then evicted.
    Type: Application
    Filed: September 12, 2012
    Publication date: June 13, 2013
    Applicant: International Business Machines Corporation
    Inventors: David M. Daly, Benjiman L. Goodman, Stephen J. Powell, Aaron C. Sawdey, Jeffrey A. Stuecheli
  • Publication number: 20130151778
    Abstract: A mechanism is provided for dynamic cache allocation using bandwidth. A bandwidth between a higher level cache and a lower level cache is monitored. Responsive to bandwidth usage between the higher level cache and the lower level cache being below a predetermined low bandwidth threshold, the higher level cache and the lower level cache are set to operate in accordance with a first allocation policy. Responsive to bandwidth usage between the higher level cache and the lower level cache being above a predetermined high bandwidth threshold, the higher level cache and the lower level cache are set to operate in accordance with a second allocation policy.
    Type: Application
    Filed: December 9, 2011
    Publication date: June 13, 2013
    Applicant: International Business Machines Corporation
    Inventors: David M. Daly, Benjiman L. Goodman, Stephen J. Powell, Aaron C. Sawdey, Jeffrey A. Stuecheli