Patents by Inventor Guy L. Guthrie

Guy L. Guthrie has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9251111
    Abstract: In one or more embodiments, one or more systems, devices, methods, and/or processes described can continually increase a command rate of an interconnect if one or more requests to lower the command rate are not received within one or more periods of time. In one example, the command rate can be set to a fastest level. In another example, the command rate can be incrementally increased over periods of time. If a request to lower the command rate is received, the command rate can be set to a reference level or can be decremented to one slower rate level. In one or more embodiments, the one or more requests to lower the command rate can be based on at least one of an issue rate of speculative commands and a number of overcommit failures, among others.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: February 2, 2016
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, David J. Krolak, Charles F. Marino, Praveen S. Reddy, Michael S. Siegel
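    The rate adjustment described in the abstract above behaves like a simple control loop: ramp the command rate up each period unless a slow-down request arrives, then either fall back to a reference level or back off one step. The sketch below is a software illustration of the incremental variant only; the RateController name, integer rate levels, and per-period callback are assumptions for illustration, not the patented hardware.

      #include <algorithm>

      // Illustrative model of the command-rate ramp: the rate steps up each
      // interval unless a slow-down request arrived, in which case it either
      // drops to a reference level or backs off by one slower rate level.
      class RateController {
      public:
          RateController(int fastestLevel, int referenceLevel)
              : fastest_(fastestLevel), reference_(referenceLevel), rate_(referenceLevel) {}

          // Called once per evaluation period.
          void onPeriodElapsed(bool slowDownRequested, bool resetToReference) {
              if (!slowDownRequested) {
                  rate_ = std::min(rate_ + 1, fastest_);   // keep ramping toward the fastest level
              } else if (resetToReference) {
                  rate_ = reference_;                      // set the rate back to the reference level
              } else {
                  rate_ = std::max(rate_ - 1, 0);          // decrement to one slower rate level
              }
          }

          int rate() const { return rate_; }

      private:
          int fastest_;    // highest (fastest) command-rate level
          int reference_;  // level restored on a full reset
          int rate_;       // current command-rate level
      };

    A caller would invoke onPeriodElapsed once per window, passing whether any request to lower the rate (for example, one triggered by overcommit failures or the issue rate of speculative commands) was observed in that window.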
  • Patent number: 9244724
    Abstract: In a data processing system having a processor core and a shared memory system including a cache memory that supports the processor core, a transactional memory access request is issued by the processor core in response to execution of a memory access instruction in a memory transaction undergoing execution by the processor core. In response to receiving the transactional memory access request, dispatch logic of the cache memory evaluates the transactional memory access request for dispatch, where the evaluation includes determining whether the memory transaction has a failing transaction state. In response to determining the memory transaction has a failing transaction state, the dispatch logic refrains from dispatching the memory access request for service by the cache memory and refrains from updating at least replacement order information of the cache memory in response to the transactional memory access request.
    Type: Grant
    Filed: August 15, 2013
    Date of Patent: January 26, 2016
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: Sanjeev Ghai, Guy L. Guthrie, Jonathan R. Jackson, Derek E. Williams
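    The dispatch decision in the abstract above (shared with the related patent 9244725 below) amounts to a guard in front of the cache's dispatch path: a request belonging to an already-failing transaction is neither serviced nor allowed to disturb replacement-order (LRU) state. A hypothetical software rendering follows; the struct layout, field names, and callback parameters are assumptions for illustration only.

      #include <cstdint>

      // Minimal stand-ins for the state the dispatch logic consults.
      struct MemoryAccessRequest {
          std::uint64_t address;
          bool transactional;   // issued from within a memory transaction
      };

      struct TransactionState {
          bool failing;         // the enclosing memory transaction has already failed
      };

      // Returns true if the request was dispatched for service.  A transactional
      // request whose transaction is failing is dropped, and the cache's
      // replacement-order (LRU) information is deliberately left untouched.
      bool tryDispatch(const MemoryAccessRequest& req,
                       const TransactionState& tx,
                       void (*serviceRequest)(const MemoryAccessRequest&),
                       void (*touchReplacementOrder)(std::uint64_t lineAddress)) {
          if (req.transactional && tx.failing) {
              return false;                      // refrain from dispatch and from any LRU update
          }
          serviceRequest(req);                   // normal path: service the access
          touchReplacementOrder(req.address);    // and record the reference for replacement
          return true;
      }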
  • Patent number: 9244725
    Abstract: In a data processing system having a processor core and a shared memory system including a cache memory that supports the processor core, a transactional memory access request is issued by the processor core in response to execution of a memory access instruction in a memory transaction undergoing execution by the processor core. In response to receiving the transactional memory access request, dispatch logic of the cache memory evaluates the transactional memory access request for dispatch, where the evaluation includes determining whether the memory transaction has a failing transaction state. In response to determining the memory transaction has a failing transaction state, the dispatch logic refrains from dispatching the memory access request for service by the cache memory and refrains from updating at least replacement order information of the cache memory in response to the transactional memory access request.
    Type: Grant
    Filed: September 26, 2013
    Date of Patent: January 26, 2016
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: Sanjeev Ghai, Guy L. Guthrie, Jonathan R. Jackson, Derek E. Williams
  • Publication number: 20150378770
    Abstract: A virtual machine backup method includes utilizing a log to indicate updates to memory of a virtual machine when the updates are evicted from a cache of the virtual machine. A guard band is determined that indicates a threshold amount of free space for the log. A determination is made that the guard band will be or has been encroached upon corresponding to indicating an update in the log. A backup image of the virtual machine is updated based, at least in part, on a set of one or more entries of the log, wherein the set of entries is sufficient to comply with the guard band. The set of entries is removed from the log.
    Type: Application
    Filed: June 1, 2015
    Publication date: December 31, 2015
    Inventors: Guy L. Guthrie, Naresh Nayar, Geraint North, William J. Starke, Albert J. Van Norstrand, Jr.
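    The guard band above is in effect a low-watermark on free log space: when recording an evicted update would encroach on it, enough of the oldest entries are applied to the backup image and trimmed from the log to restore headroom. A minimal sketch under those assumptions, with the log modeled as a deque of dirty-page records and applyToBackupImage as a hypothetical callback:

      #include <cstddef>
      #include <deque>
      #include <functional>

      struct LogEntry {
          std::size_t pageNumber;   // guest page whose update was evicted from the cache
      };

      // Dirty-page log with a guard band: a threshold amount of space kept free.
      class EvictionLog {
      public:
          EvictionLog(std::size_t capacity, std::size_t guardBand,
                      std::function<void(const LogEntry&)> applyToBackupImage)
              : capacity_(capacity), guardBand_(guardBand),
                applyToBackupImage_(std::move(applyToBackupImage)) {}

          // Record an update evicted from the virtual machine's cache.
          void record(const LogEntry& entry) {
              log_.push_back(entry);
              // If the guard band has been encroached upon, flush just enough of
              // the oldest entries to the backup image and remove them from the log.
              while (capacity_ - log_.size() < guardBand_) {
                  applyToBackupImage_(log_.front());
                  log_.pop_front();
              }
          }

      private:
          std::size_t capacity_;    // total log capacity, in entries
          std::size_t guardBand_;   // minimum number of free entries to preserve
          std::function<void(const LogEntry&)> applyToBackupImage_;
          std::deque<LogEntry> log_;
      };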
  • Publication number: 20150363316
    Abstract: A technique for operating a memory system for a node includes interrogating, by a cache, an associated cache directory to determine whether a shared cache line to be installed in the cache is associated with an invalid global state in the cache. The invalid global state specifies that a version of the shared cache line has been intervened off-node. In response to the shared cache line being in the invalid global state, the cache spawns a castout invalid global command for the shared cache line. The shared cache line is installed in the cache. A coherence state for the shared cache line is updated in the associated cache directory to indicate the shared cache line is shared.
    Type: Application
    Filed: June 10, 2015
    Publication date: December 17, 2015
    Inventors: Guy L. Guthrie, Hien Minh Le, Jeffrey A. Stuecheli, Phillip G. Williams
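    The "invalid global" state above records that a line was intervened off-node; before installing a new shared copy over that directory entry, the cache casts the record out so the hint is not lost. A hedged model of that install path (the enum values, the directory-as-hash-map, and the stubbed spawnCastoutInvalidGlobal hook are all illustrative, not taken from the patent):

      #include <cstdint>
      #include <unordered_map>

      // Simplified coherence states for the sketch.
      enum class CohState { Invalid, InvalidGlobal, Shared };

      // Directory of a single cache, mapping line address to coherence state.
      using CacheDirectory = std::unordered_map<std::uint64_t, CohState>;

      // Hypothetical hook that issues the castout-invalid-global command so the
      // off-node intervention hint is preserved elsewhere (stubbed for the sketch).
      void spawnCastoutInvalidGlobal(std::uint64_t /*lineAddress*/) {}

      // Install a shared copy of a line: interrogate the directory first, and if
      // the line sits in the invalid-global state, cast that record out before
      // installing the line and marking it shared.
      void installSharedLine(CacheDirectory& directory, std::uint64_t lineAddress) {
          auto it = directory.find(lineAddress);
          if (it != directory.end() && it->second == CohState::InvalidGlobal) {
              spawnCastoutInvalidGlobal(lineAddress);
          }
          directory[lineAddress] = CohState::Shared;   // line installed, state updated to shared
      }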
  • Publication number: 20150363317
    Abstract: A technique for operating a memory system for a node includes interrogating, by a cache, an associated cache directory to determine whether a shared cache line to be installed in the cache is associated with an invalid global state in the cache. The invalid global state specifies that a version of the shared cache line has been intervened off-node. In response to the shared cache line being in the invalid global state, the cache spawns a castout invalid global command for the shared cache line. The shared cache line is installed in the cache. A coherence state for the shared cache line is updated in the associated cache directory to indicate the shared cache line is shared.
    Type: Application
    Filed: June 17, 2014
    Publication date: December 17, 2015
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hien Minh Le, Jeffrey A. Stuecheli, Phillip G. Williams
  • Publication number: 20150331796
    Abstract: In response to execution in a memory transaction of a transactional load instruction that speculatively binds to a value held in a store-through upper level cache, a processor core sets a flag, transmits a transactional load operation to a store-in lower level cache that tracks a target cache line address of a target cache line containing the value, monitors, during a core TM tracking interval, the target cache line address for invalidation messages from the store-in lower level cache until the store-in lower level cache signals that the store-in lower level cache has assumed responsibility for tracking the target cache line address, and responsive to receipt during the core TM tracking interval of an invalidation message indicating presence of a conflicting snooped operation, resets the flag. At termination of the memory transaction, the processor core fails the memory transaction responsive to the flag being reset.
    Type: Application
    Filed: June 23, 2014
    Publication date: November 19, 2015
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, Derek E. Williams
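    The abstract above (and its sibling publication 20150331798 below) describes a hand-off: the core watches the speculatively loaded line for invalidations only until the lower level cache acknowledges that it has taken over tracking, and any invalidation seen in that window clears a flag that later fails the transaction. A rough software model, with structure and method names invented for illustration:

      #include <cstdint>

      // Core-side tracking state for one speculatively bound transactional load.
      struct CoreTmTracker {
          bool flag = false;             // set when the load binds; reset on a conflict
          bool coreTracking = false;     // true during the core TM tracking interval
          std::uint64_t trackedLine = 0;

          // Transactional load binds to a value held in the store-through L1.
          void onTransactionalLoad(std::uint64_t lineAddress) {
              flag = true;
              coreTracking = true;       // core watches the line until the L2 takes over
              trackedLine = lineAddress;
          }

          // The store-in lower level cache signals it now tracks the line.
          void onLowerLevelAssumedTracking() { coreTracking = false; }

          // Invalidation message from the lower level cache.
          void onInvalidation(std::uint64_t lineAddress) {
              if (coreTracking && lineAddress == trackedLine) {
                  flag = false;          // conflicting snooped operation observed
              }
          }

          // At termination of the memory transaction: a reset flag means failure.
          bool transactionMustFail() const { return !flag; }
      };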
  • Publication number: 20150331798
    Abstract: In response to execution in a memory transaction of a transactional load instruction that speculatively binds to a value held in a store-through upper level cache, a processor core sets a flag, transmits a transactional load operation to a store-in lower level cache that tracks a target cache line address of a target cache line containing the value, monitors, during a core TM tracking interval, the target cache line address for invalidation messages from the store-in lower level cache until the store-in lower level cache signals that the store-in lower level cache has assumed responsibility for tracking the target cache line address, and responsive to receipt during the core TM tracking interval of an invalidation message indicating presence of a conflicting snooped operation, resets the flag. At termination of the memory transaction, the processor core fails the memory transaction responsive to the flag being reset.
    Type: Application
    Filed: May 15, 2014
    Publication date: November 19, 2015
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, Derek E. Williams
  • Patent number: 9189403
    Abstract: A data processing system includes first and second processing units and a system memory. The first processing unit has first upper and first lower level caches, and the second processing unit has second upper and lower level caches. In response to a data request, a victim cache line to be castout from the first lower level cache is selected, and the first lower level cache selects between performing a lateral castout (LCO) of the victim cache line to the second lower level cache and a castout of the victim cache line to the system memory based upon a confidence indicator associated with the victim cache line. In response to selecting an LCO, the first processing unit issues an LCO command on the interconnect fabric and removes the victim cache line from the first lower level cache, and the second lower level cache holds the victim cache line.
    Type: Grant
    Filed: December 30, 2009
    Date of Patent: November 17, 2015
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, William J. Starke, Jeffrey Stuecheli, Derek E. Williams, Thomas R. Puzak
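    The eviction choice above is driven by a per-line confidence indicator: a victim judged likely to be re-referenced is cast out laterally to a peer lower level cache, otherwise it is written back to system memory. A small sketch of that decision; the threshold parameter, function names, and stubbed actions are assumptions rather than the patent's logic:

      #include <cstdint>

      struct VictimLine {
          std::uint64_t address;
          int confidence;   // confidence indicator kept with the cache line
      };

      // Hypothetical actions, stubbed out for the sketch.
      void issueLateralCastout(const VictimLine&) { /* LCO command on the interconnect fabric */ }
      void castoutToSystemMemory(const VictimLine&) { /* write the victim back to memory */ }

      // On a data request that forces an eviction, decide where the victim goes:
      // a sufficiently confident line is laterally cast out to another processing
      // unit's lower level cache, otherwise it is cast out to system memory.
      void handleVictim(const VictimLine& victim, int lcoConfidenceThreshold) {
          if (victim.confidence >= lcoConfidenceThreshold) {
              issueLateralCastout(victim);       // the second lower level cache will hold the line
          } else {
              castoutToSystemMemory(victim);
          }
      }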
  • Patent number: 9176876
    Abstract: A data processing system includes first and second processing units and a system memory. The first processing unit has first upper and first lower level caches, and the second processing unit has second upper and lower level caches. In response to a data request, a victim cache line to be castout from the first lower level cache is selected, and the first lower level cache selects between performing a lateral castout (LCO) of the victim cache line to the second lower level cache and a castout of the victim cache line to the system memory based upon a confidence indicator associated with the victim cache line. In response to selecting an LCO, the first processing unit issues an LCO command on the interconnect fabric and removes the victim cache line from the first lower level cache, and the second lower level cache holds the victim cache line.
    Type: Grant
    Filed: April 12, 2012
    Date of Patent: November 3, 2015
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, William J. Starke, Jeffrey A. Stuecheli, Derek E. Williams, Thomas R. Puzak
  • Publication number: 20150286569
    Abstract: A technique for operating a cache memory of a data processing system includes creating respective pollution vectors to track which of multiple concurrent threads executed by an associated processor core are currently polluted by a store operation resident in the cache memory. Dependencies in a dependency data structure of a store queue of the cache memory are set based on the pollution vectors to reduce unnecessary ordering effects. Store operations are dispatched from the store queue in accordance with the dependencies indicated by the dependency data structure.
    Type: Application
    Filed: April 4, 2014
    Publication date: October 8, 2015
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, William J. Starke, Derek E. Williams
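    A pollution vector here is, per store-queue entry, a bitmask of which hardware threads the store pollutes; dependencies are then set only between entries whose vectors overlap, so stores from unrelated threads are not ordered against each other. A speculative sketch using plain bitsets, with the thread count and data layout as assumptions:

      #include <bitset>
      #include <cstddef>
      #include <vector>

      constexpr std::size_t kThreads = 8;   // concurrent threads per core (illustrative)

      struct StoreQueueEntry {
          std::bitset<kThreads> pollution;      // threads currently polluted by this store
          std::vector<std::size_t> dependsOn;   // indices of older entries this one must follow
      };

      // Set dependencies for a newly allocated entry: order it only behind older
      // entries whose pollution vectors overlap with its own, avoiding unnecessary
      // ordering effects between unrelated threads.
      void setDependencies(std::vector<StoreQueueEntry>& queue, std::size_t newIdx) {
          for (std::size_t older = 0; older < newIdx; ++older) {
              if ((queue[older].pollution & queue[newIdx].pollution).any()) {
                  queue[newIdx].dependsOn.push_back(older);
              }
          }
      }

      // An entry may dispatch once every entry it depends on has already dispatched.
      bool canDispatch(const std::vector<bool>& dispatched, const StoreQueueEntry& entry) {
          for (std::size_t dep : entry.dependsOn) {
              if (!dispatched[dep]) return false;
          }
          return true;
      }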
  • Publication number: 20150286570
    Abstract: A technique for operating a cache memory of a data processing system includes creating respective pollution vectors to track which of multiple concurrent threads executed by an associated processor core are currently polluted by a store operation resident in the cache memory. Dependencies in a dependency data structure of a store queue of the cache memory are set based on the pollution vectors to reduce unnecessary ordering effects. Store operations are dispatched from the store queue in accordance with the dependencies indicated by the dependency data structure.
    Type: Application
    Filed: August 28, 2014
    Publication date: October 8, 2015
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, William J. Starke, Derek E. Williams
  • Publication number: 20150269076
    Abstract: Statistical data is used to enable or disable snooping on a bus of a processor. A command is received via a first bus or a second bus communicably coupling processor cores and caches of chiplets on the processor. Cache logic on a chiplet determines whether or not a local cache on the chiplet can satisfy a request for data specified in the command. In response to determining that the local cache can satisfy the request for data, the cache logic updates statistical data maintained on the chiplet. The statistical data indicates a probability that the local cache can satisfy a future request for data. Based at least in part on the statistical data, the cache logic determines whether to enable or disable snooping on the second bus by the local cache.
    Type: Application
    Filed: June 8, 2015
    Publication date: September 24, 2015
    Inventors: Guy L. Guthrie, Hien M. Le, Hugh Shen, Derek E. Williams, Phillip G. Williams
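    The mechanism above keeps a running per-chiplet statistic of how often the local cache could have satisfied snooped commands and gates snooping on the second bus by that probability. A toy model of the decision; the window size, threshold, and class name are illustrative assumptions:

      #include <cstddef>

      // Per-chiplet statistics used to gate snooping on the second bus.
      class SnoopFilterStats {
      public:
          // Record whether the local cache could satisfy the command's data request.
          void recordCommand(bool localCacheHit) {
              ++commands_;
              if (localCacheHit) ++hits_;
          }

          // Keep snooping only while the observed probability that the local cache
          // can satisfy a future request stays above a threshold.  The threshold
          // and minimum sample count are illustrative.
          bool snoopEnabled(double minHitProbability = 0.05,
                            std::size_t minSamples = 1024) const {
              if (commands_ < minSamples) return true;   // not enough data yet: keep snooping
              double hitProbability = static_cast<double>(hits_) / static_cast<double>(commands_);
              return hitProbability >= minHitProbability;
          }

      private:
          std::size_t commands_ = 0;   // commands observed
          std::size_t hits_ = 0;       // commands the local cache could satisfy
      };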
  • Publication number: 20150242250
    Abstract: In at least some embodiments, a cache memory of a data processing system receives a speculative memory access request including a target address of data speculatively requested for a processor core. In response to receipt of the speculative memory access request, transactional memory logic determines whether or not the target address of the speculative memory access request hits a store footprint of a memory transaction. In response to determining that the target address of the speculative memory access request hits a store footprint of a memory transaction, the transactional memory logic causes the cache memory to reject servicing the speculative memory access request.
    Type: Application
    Filed: February 27, 2014
    Publication date: August 27, 2015
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, William J. Starke, Derek E. Williams
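    The check above compares a speculative request's target address against the store footprint of an in-flight memory transaction and rejects service on a hit. A compact sketch, with the footprint held as a set of line-aligned addresses (an assumption; a real design would track this in directory or TM state bits):

      #include <cstdint>
      #include <unordered_set>

      // Line-aligned addresses written by the active memory transaction.
      using StoreFootprint = std::unordered_set<std::uint64_t>;

      constexpr std::uint64_t kLineMask = ~std::uint64_t{127};   // 128-byte lines, illustrative

      // Returns true if the cache should service the speculative request, or false
      // if the transactional-memory logic rejects it because the target address
      // hits the transaction's store footprint.
      bool shouldServiceSpeculativeAccess(const StoreFootprint& footprint,
                                          std::uint64_t targetAddress) {
          return footprint.count(targetAddress & kLineMask) == 0;
      }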
  • Publication number: 20150242320
    Abstract: In some embodiments, in response to execution of a load-reserve instruction that binds to a load target address held in a store-through upper level cache, a processor core sets a core reservation flag, transmits a load-reserve operation to a store-in lower level cache, and tracks, during a core reservation tracking interval, the reservation requested by the load-reserve operation until the store-in lower level cache signals that the store-in lower level cache has assumed responsibility for tracking the reservation. In response to receipt during the core reservation tracking interval of an invalidation signal indicating presence of a conflicting snooped operation, the processor core cancels the reservation by resetting the core reservation flag and fails a subsequent store-conditional operation.
    Type: Application
    Filed: February 27, 2014
    Publication date: August 27, 2015
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, William J. Starke, Derek E. Williams
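    This is the load-reserve/store-conditional analogue of the transactional-load hand-off sketched earlier: the core tracks its reservation only until the store-in lower level cache assumes tracking, and an invalidation seen during that interval resets the core reservation flag so a subsequent store-conditional fails. A brief model under the same naming assumptions:

      #include <cstdint>

      // Core-side reservation established by a load-reserve instruction.
      struct CoreReservation {
          bool flag = false;            // core reservation flag
          bool coreTracking = false;    // true during the core reservation tracking interval
          std::uint64_t address = 0;

          // Load-reserve binds to a target address held in the store-through L1.
          void onLoadReserve(std::uint64_t lineAddress) {
              flag = true;
              coreTracking = true;      // core tracks until the store-in L2 assumes tracking
              address = lineAddress;
          }

          // The store-in lower level cache signals it now tracks the reservation.
          void onLowerLevelAssumedTracking() { coreTracking = false; }

          // Invalidation signal from the lower level cache.
          void onInvalidation(std::uint64_t lineAddress) {
              if (coreTracking && lineAddress == address) {
                  flag = false;         // conflicting snooped operation: cancel the reservation
              }
          }

          // A subsequent store-conditional fails if the core flag was reset.
          bool storeConditionalMaySucceed() const { return flag; }
      };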
  • Publication number: 20150242327
    Abstract: In some embodiments, in response to execution of a load-reserve instruction that binds to a load target address held in a store-through upper level cache, a processor core sets a core reservation flag, transmits a load-reserve operation to a store-in lower level cache, and tracks, during a core reservation tracking interval, the reservation requested by the load-reserve operation until the store-in lower level cache signals that the store-in lower level cache has assumed responsibility for tracking the reservation. In response to receipt during the core reservation tracking interval of an invalidation signal indicating presence of a conflicting snooped operation, the processor core cancels the reservation by resetting the core reservation flag and fails a subsequent store-conditional operation.
    Type: Application
    Filed: September 15, 2014
    Publication date: August 27, 2015
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, Derek E. Williams
  • Publication number: 20150242251
    Abstract: In at least some embodiments, a cache memory of a data processing system receives a speculative memory access request including a target address of data speculatively requested for a processor core. In response to receipt of the speculative memory access request, transactional memory logic determines whether or not the target address of the speculative memory access request hits a store footprint of a memory transaction. In response to determining that the target address of the speculative memory access request hits a store footprint of a memory transaction, the transactional memory logic causes the cache memory to reject servicing the speculative memory access request.
    Type: Application
    Filed: June 23, 2014
    Publication date: August 27, 2015
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, William J. Starke, Derek E. Williams
  • Patent number: 9110808
    Abstract: In response to a memory access request of a processor core that targets a target cache line, the lower level cache of a vertical cache hierarchy associated with the processor core supplies a copy of the target cache line to an upper level cache in the vertical cache hierarchy and retains a copy in a shared coherence state. The upper level cache holds the copy of the target cache line in a private shared ownership coherence state indicating that each cached copy of the target memory block is cached within the vertical cache hierarchy associated with the processor core. In response to the upper level cache signaling replacement of the copy of the target cache line in the private shared ownership coherence state, the lower level cache updates its copy of the target cache line to the exclusive ownership coherence state without coherency messaging with other vertical cache hierarchies.
    Type: Grant
    Filed: December 30, 2009
    Date of Patent: August 18, 2015
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, William J. Starke, Jeffrey Stuecheli, Derek E. Williams, Phillip G. Williams
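    The refinement above lets the lower level cache promote its copy to exclusive ownership without any fabric traffic once the upper level cache gives up a copy held in the "private shared ownership" state, since that state guarantees every cached copy lives in this one vertical hierarchy. A hedged sketch; the state names mirror the abstract, not any published protocol table:

      // Illustrative subset of coherence states from the abstract above.
      enum class LowerLevelState {
          Shared,                    // copy retained while the upper level holds the line
          ExclusiveOwnership         // only this vertical hierarchy's lower level cache holds it
      };

      // When the upper level cache signals replacement of a copy it held in the
      // private shared ownership state, the lower level cache can promote its own
      // shared copy to exclusive ownership without coherency messaging to other
      // vertical cache hierarchies.
      LowerLevelState onUpperLevelReplacement(LowerLevelState current,
                                              bool upperCopyWasPrivateSharedOwnership) {
          if (upperCopyWasPrivateSharedOwnership && current == LowerLevelState::Shared) {
              return LowerLevelState::ExclusiveOwnership;   // silent upgrade, no fabric traffic
          }
          return current;
      }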
  • Publication number: 20150227464
    Abstract: Statistical data is used to enable or disable snooping on a bus of a processor. A command is received via a first bus or a second bus communicably coupling processor cores and caches of chiplets on the processor. Cache logic on a chiplet determines whether or not a local cache on the chiplet can satisfy a request for data specified in the command. In response to determining that the local cache can satisfy the request for data, the cache logic updates statistical data maintained on the chiplet. The statistical data indicates a probability that the local cache can satisfy a future request for data. Based at least in part on the statistical data, the cache logic determines whether to enable or disable snooping on the second bus by the local cache.
    Type: Application
    Filed: February 10, 2014
    Publication date: August 13, 2015
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hien M. Le, Hugh Shen, Derek E. Williams, Phillip G. Williams
  • Publication number: 20150178238
    Abstract: In one or more embodiments, one or more systems, devices, methods, and/or processes described can continually increase a command rate of an interconnect if one or more requests to lower the command rate are not received within one or more periods of time. In one example, the command rate can be set to a fastest level. In another example, the command rate can be incrementally increased over periods of time. If a request to lower the command rate is received, the command rate can be set to a reference level or can be decremented to one slower rate level. In one or more embodiments, the one or more requests to lower the command rate can be based on at least one of an issue rate of speculative commands and a number of overcommit failures, among others.
    Type: Application
    Filed: December 20, 2013
    Publication date: June 25, 2015
    Inventors: Guy L. Guthrie, David J. Krolak, Charles F. Marino, Praveen S. Reddy, Michael S. Siegel