Patents by Inventor Hugh Shen

Hugh Shen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9418007
    Abstract: In response to execution in a memory transaction of a transactional load instruction that speculatively binds to a value held in a store-through upper level cache, a processor core sets a flag, transmits a transactional load operation to a store-in lower level cache that tracks a target cache line address of a target cache line containing the value, monitors, during a core TM tracking interval, the target cache line address for invalidation messages from the store-in lower level cache until the store-in lower level cache signals that the store-in lower level cache has assumed responsibility for tracking the target cache line address, and responsive to receipt during the core TM tracking interval of an invalidation message indicating presence of a conflicting snooped operation, resets the flag. At termination of the memory transaction, the processor core fails the memory transaction responsive to the flag being reset.
    Type: Grant
    Filed: May 15, 2014
    Date of Patent: August 16, 2016
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, Derek E. Williams
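The flag-and-handoff flow in the 9418007 abstract is a hardware mechanism, but its control logic is small enough to illustrate in software. Below is a minimal, purely illustrative C model of that flow; the type, field, and function names are invented for this sketch and are not taken from the patent.

```c
/* Toy software model of the core-side tracking described in the 9418007
 * abstract: a transactional load sets a flag and watches its target cache
 * line address for invalidations until the lower level cache takes over;
 * if a conflicting invalidation arrives first, the flag is reset and the
 * memory transaction fails at commit time. All names are illustrative. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    bool     tm_flag;        /* set by a transactional load, reset on conflict */
    bool     core_tracking;  /* true during the core TM tracking interval      */
    uint64_t tracked_line;   /* cache line address being watched by the core   */
} core_tm_state;

static void transactional_load(core_tm_state *c, uint64_t line_addr) {
    c->tm_flag = true;          /* value speculatively bound from the L1       */
    c->tracked_line = line_addr;
    c->core_tracking = true;    /* watch invalidations until the L2 takes over */
}

/* Invalidation message arriving from the store-in lower level cache. */
static void invalidation_from_l2(core_tm_state *c, uint64_t line_addr) {
    if (c->core_tracking && line_addr == c->tracked_line)
        c->tm_flag = false;     /* conflicting snooped operation observed      */
}

/* The lower level cache signals it has assumed responsibility for tracking. */
static void l2_assumes_tracking(core_tm_state *c) {
    c->core_tracking = false;
}

/* At termination of the memory transaction, fail it if the flag was reset. */
static bool commit_transaction(const core_tm_state *c) {
    return c->tm_flag;          /* false => transaction fails                  */
}

int main(void) {
    core_tm_state core = {0};
    transactional_load(&core, 0x80ULL);
    invalidation_from_l2(&core, 0x80ULL);   /* conflict during tracking interval */
    l2_assumes_tracking(&core);
    printf("transaction %s\n", commit_transaction(&core) ? "commits" : "fails");
    return 0;
}
```

The point the model captures is the handoff: once l2_assumes_tracking() runs, later invalidations no longer reset the core-side flag, because tracking responsibility has moved to the lower level cache.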
  • Publication number: 20160217045
    Abstract: A computer system comprises a processor unit arranged to run a hypervisor running one or more virtual machines; a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line and an image modification flag; and a memory connected to the cache and arranged to store an image of at least one virtual machine. The processor unit is arranged to define a log in the memory and the cache further comprises a cache controller arranged to set the image modification flag for a cache line modified by a virtual machine being backed up, but not for a cache line modified by the hypervisor operating in privilege mode; periodically check the image modification flags; and write only the memory address of the flagged cache rows in the defined log.
    Type: Application
    Filed: February 19, 2016
    Publication date: July 28, 2016
    Inventors: Guy Lynn Guthrie, Naresh Nayar, Geraint North, Hugh Shen, William Starke, Phillip Williams
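The 20160217045 abstract (shared by the related publications below) describes a cache controller that flags guest-modified lines and periodically logs only their addresses. The following is a toy C model of that behavior, assuming a tiny cache and an in-memory log; the sizes and names are invented for the illustration.

```c
/* Minimal software model of the cache-controller behavior described in the
 * 20160217045 abstract: each cache row carries an image modification flag
 * that is set for lines modified by a virtual machine being backed up (but
 * not by the hypervisor in privilege mode); a periodic sweep writes only the
 * addresses of flagged rows to a log in memory. Names are illustrative. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define CACHE_ROWS 4
#define LINE_BYTES 64

typedef struct {
    uint64_t addr;
    uint8_t  line[LINE_BYTES];
    bool     image_modified;     /* the image modification flag */
} cache_row;

typedef struct {
    cache_row rows[CACHE_ROWS];
    uint64_t  log[64];           /* log defined in memory by the processor unit */
    size_t    log_len;
} cache_model;

/* Store into a cache row; only guest (VM) stores mark the row as modified. */
static void store(cache_model *c, size_t row, uint64_t addr, bool hypervisor_priv) {
    c->rows[row].addr = addr;
    if (!hypervisor_priv)
        c->rows[row].image_modified = true;
}

/* Periodic check: record only the addresses of flagged rows, then clear. */
static void sweep_to_log(cache_model *c) {
    for (size_t i = 0; i < CACHE_ROWS; i++) {
        if (c->rows[i].image_modified) {
            c->log[c->log_len++] = c->rows[i].addr;
            c->rows[i].image_modified = false;
        }
    }
}

int main(void) {
    cache_model c = {0};
    store(&c, 0, 0x1000, false);  /* VM store: flagged             */
    store(&c, 1, 0x2000, true);   /* hypervisor store: not flagged */
    sweep_to_log(&c);
    for (size_t i = 0; i < c.log_len; i++)
        printf("logged address 0x%llx\n", (unsigned long long)c.log[i]);
    return 0;
}
```

Only the addresses of flagged rows reach the log, which keeps backup traffic proportional to the modified part of the virtual machine image rather than to all cache activity.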
  • Publication number: 20160210197
    Abstract: A computer system comprises a processor unit arranged to run a hypervisor running one or more virtual machines; a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line and an image modification flag; and a memory connected to the cache and arranged to store an image of at least one virtual machine. The processor unit is arranged to define a log in the memory and the cache further comprises a cache controller arranged to set the image modification flag for a cache line modified by a virtual machine being backed up, but not for a cache line modified by the hypervisor operating in privilege mode; periodically check the image modification flags; and write only the memory address of the flagged cache rows in the defined log.
    Type: Application
    Filed: February 19, 2016
    Publication date: July 21, 2016
    Inventors: Guy Lynn Guthrie, Naresh Nayar, Geraint North, Hugh Shen, William Starke, Phillip Williams
  • Patent number: 9396127
    Abstract: In some embodiments, in response to execution of a load-reserve instruction that binds to a load target address held in a store-through upper level cache, a processor core sets a core reservation flag, transmits a load-reserve operation to a store-in lower level cache, and tracks, during a core reservation tracking interval, the reservation requested by the load-reserve operation until the store-in lower level cache signals that the store-in lower level cache has assumed responsibility for tracking the reservation. In response to receipt during the core reservation tracking interval of an invalidation signal indicating presence of a conflicting snooped operation, the processor core cancels the reservation by resetting the core reservation flag and fails a subsequent store-conditional operation.
    Type: Grant
    Filed: February 27, 2014
    Date of Patent: July 19, 2016
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, Derek E. Williams
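The reservation that patent 9396127 tracks across the cache hierarchy is the one created by a load-reserve instruction and consumed by a store-conditional. As background, the sketch below shows the conventional software-visible use of that instruction pair on PowerPC (an atomic increment built from lwarx/stwcx.); it compiles only for PowerPC targets and illustrates why a lost reservation must fail the subsequent store-conditional, not the patented core/L2 tracking handoff itself.

```c
/* Conventional lwarx/stwcx. retry loop: if anything disturbs the reservation
 * between the load-reserve and the store-conditional, stwcx. fails and the
 * loop retries. PowerPC only; this is background, not the patented logic. */
static inline void atomic_inc(int *p)
{
    int tmp;
    __asm__ __volatile__(
        "1: lwarx   %0,0,%1\n"   /* load-reserve: read *p and set a reservation */
        "   addi    %0,%0,1\n"
        "   stwcx.  %0,0,%1\n"   /* store-conditional: succeeds only if the
                                    reservation is still held                   */
        "   bne-    1b\n"        /* reservation lost: retry                     */
        : "=&r" (tmp)
        : "r" (p)
        : "cr0", "memory");
}

int main(void)
{
    int counter = 0;
    atomic_inc(&counter);        /* counter == 1 after the loop succeeds */
    return counter == 1 ? 0 : 1;
}
```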
  • Patent number: 9390026
    Abstract: In some embodiments, in response to execution of a load-reserve instruction that binds to a load target address held in a store-through upper level cache, a processor core sets a core reservation flag, transmits a load-reserve operation to a store-in lower level cache, and tracks, during a core reservation tracking interval, the reservation requested by the load-reserve operation until the store-in lower level cache signals that the store-in lower level cache has assumed responsibility for tracking the reservation. In response to receipt during the core reservation tracking interval of an invalidation signal indicating presence of a conflicting snooped operation, the processor core cancels the reservation by resetting the core reservation flag and fails a subsequent store-conditional operation.
    Type: Grant
    Filed: September 15, 2014
    Date of Patent: July 12, 2016
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, Derek E. Williams
  • Patent number: 9390024
    Abstract: In response to receipt of a store-conditional (STCX) request of a processor core, the STCX request is buffered in an entry of a store queue for eventual service by a read-claim (RC) machine by reference to a cache array, and the STCX request is concurrently transmitted via a bypass path bypassing the store queue. In response to dispatch logic dispatching the STCX request transmitted via the bypass path to the RC machine for service by reference to the cache array, the entry of the STCX request in the store queue is updated to prohibit selection of the STCX request in the store queue for service. In response to the STCX request transmitted via the bypass path not being dispatched by the dispatch logic, the STCX is thereafter transmitted from the store queue to the dispatch logic and dispatched to the RC machine for service by reference to the cache array.
    Type: Grant
    Filed: June 23, 2014
    Date of Patent: July 12, 2016
    Assignee: International Business Machines Corporation
    Inventors: Sanjeev Ghai, Guy L. Guthrie, Hugh Shen, Derek E. Williams
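Patent 9390024 races a store-conditional down a bypass path against its own store-queue entry. The toy C model below captures just that arbitration under assumed names (stq_entry, rc_dispatch); the real dispatch logic and read-claim machines are hardware, so this is only an illustration of the described control flow.

```c
/* Toy model of the STCX bypass described in the 9390024 abstract: a
 * store-conditional request is buffered in the store queue and concurrently
 * forwarded on a bypass path; if the bypass copy dispatches to a read-claim
 * (RC) machine, the queue entry is marked so it is never selected again,
 * otherwise the request is later dispatched from the queue. Illustrative only. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t addr;
    bool     valid;
    bool     dispatched;   /* set once an RC machine has accepted the request */
} stq_entry;

/* Stand-in for the dispatch logic: returns true if an RC machine is free. */
static bool rc_dispatch(uint64_t addr, bool rc_available) {
    if (rc_available)
        printf("RC machine servicing STCX to 0x%llx\n", (unsigned long long)addr);
    return rc_available;
}

static void issue_stcx(stq_entry *e, uint64_t addr, bool rc_available) {
    /* Buffer in the store queue and concurrently try the bypass path. */
    e->addr = addr;
    e->valid = true;
    e->dispatched = rc_dispatch(addr, rc_available);   /* bypass attempt */
}

/* Later store-queue arbitration: only undispatched entries are eligible. */
static void drain_store_queue(stq_entry *e, bool rc_available) {
    if (e->valid && !e->dispatched && rc_dispatch(e->addr, rc_available))
        e->dispatched = true;
}

int main(void) {
    stq_entry a = {0}, b = {0};
    issue_stcx(&a, 0x100, true);    /* bypass succeeds: queue copy suppressed */
    issue_stcx(&b, 0x200, false);   /* bypass rejected: serviced from queue   */
    drain_store_queue(&a, true);    /* no-op, already dispatched              */
    drain_store_queue(&b, true);    /* dispatched now                         */
    return 0;
}
```

When the bypass copy wins, the dispatched flag on the queue entry is what prevents the same STCX from being serviced twice.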
  • Patent number: 9372797
    Abstract: Statistical data is used to enable or disable snooping on a bus of a processor. A command is received via a first bus or a second bus communicably coupling processor cores and caches of chiplets on the processor. Cache logic on a chiplet determines whether or not a local cache on the chiplet can satisfy a request for data specified in the command. In response to determining that the local cache can satisfy the request for data, the cache logic updates statistical data maintained on the chiplet. The statistical data indicates a probability that the local cache can satisfy a future request for data. Based at least in part on the statistical data, the cache logic determines whether to enable or disable snooping on the second bus by the local cache.
    Type: Grant
    Filed: February 10, 2014
    Date of Patent: June 21, 2016
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hien M. Le, Hugh Shen, Derek E. Williams, Phillip G. Williams
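Patent 9372797 gates snooping on a measured probability that the local cache can satisfy requests. Below is a minimal C sketch of one way such a statistic could drive the enable/disable decision; the sampling window and threshold are assumptions of this sketch, not values from the patent.

```c
/* Toy model of the statistics-driven snoop filtering in the 9372797 abstract:
 * a chiplet counts how often its local cache could satisfy commands seen on
 * the bus and enables or disables snooping on the second bus based on the
 * resulting hit probability. Window and threshold are invented for the sketch. */
#include <stdbool.h>
#include <stdio.h>

#define WINDOW     100   /* assumed sampling window                */
#define THRESHOLD  10    /* assumed enable threshold: 10% hit rate */

typedef struct {
    unsigned hits;       /* commands the local cache could satisfy  */
    unsigned commands;   /* commands observed in the current window */
    bool     snoop_enabled;
} snoop_stats;

static void observe_command(snoop_stats *s, bool local_cache_hit) {
    s->commands++;
    if (local_cache_hit)
        s->hits++;
    if (s->commands == WINDOW) {
        /* The probability estimate decides whether snooping the second bus
         * is worth the power and bandwidth. */
        s->snoop_enabled = (s->hits * 100U >= THRESHOLD * WINDOW);
        s->hits = s->commands = 0;
    }
}

int main(void) {
    snoop_stats s = { .snoop_enabled = true };
    for (int i = 0; i < 100; i++)
        observe_command(&s, i % 50 == 0);      /* ~2% local hit rate */
    printf("snooping %s\n", s.snoop_enabled ? "enabled" : "disabled");
    return 0;
}
```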
  • Publication number: 20160170881
    Abstract: A computer system comprises a processor unit arranged to run a hypervisor running one or more virtual machines; a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line and an image modification flag; and a memory connected to the cache and arranged to store an image of at least one virtual machine. The processor unit is arranged to define a log in the memory and the cache further comprises a cache controller arranged to set the image modification flag for a cache line modified by a virtual machine being backed up, but not for a cache line modified by the hypervisor operating in privilege mode; periodically check the image modification flags; and write only the memory address of the flagged cache rows in the defined log.
    Type: Application
    Filed: July 3, 2014
    Publication date: June 16, 2016
    Inventors: Guy Lynn Guthrie, Naresh Nayar, Geraint North, Hugh Shen, William Starke, Phillip Williams
  • Publication number: 20160154663
    Abstract: A computer system comprises a processor unit arranged to run a hypervisor running one or more virtual machines; a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line and an image modification flag; and a memory connected to the cache and arranged to store an image of at least one virtual machine. The processor unit is arranged to define a log in the memory and the cache further comprises a cache controller arranged to set the image modification flag for a cache line modified by a virtual machine being backed up, but not for a cache line modified by the hypervisor operating in privilege mode; periodically check the image modification flags; and write only the memory address of the flagged cache rows in the defined log.
    Type: Application
    Filed: July 2, 2014
    Publication date: June 2, 2016
    Inventors: Guy Lynn Guthrie, Naresh Nayar, Geraint North, Hugh Shen, William Starke, Phillip Williams
  • Patent number: 9336142
    Abstract: A technique for operating a data processing system includes determining whether a cache line that is to be victimized from a cache includes high availability (HA) data that has not been logged. In response to determining that the cache line that is to be victimized from the cache includes HA data that has not been logged, an address for the HA data is written to an HA dirty address data structure, e.g., a dirty address table (DAT), in a first memory via a first non-blocking channel. The cache line that is victimized from the cache is written to a second memory via a second non-blocking channel.
    Type: Grant
    Filed: November 6, 2013
    Date of Patent: May 10, 2016
    Assignee: International Business Machines Corporation
    Inventors: Sanjeev Ghai, Guy Lynn Guthrie, Hien Minh Le, Hugh Shen, Philip G. Williams
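Patent 9336142 splits a victimization into two writes: the HA address goes to a dirty address table and the line itself goes to memory. The C sketch below models that ordering with plain function calls standing in for the two non-blocking channels; all names are illustrative.

```c
/* Toy model of the victimization path in the 9336142 abstract: when a cache
 * line holding not-yet-logged high availability (HA) data is evicted, its
 * address is first written to a dirty address table (DAT) in a first memory
 * and the line is then written to a second memory. The two "channels" are
 * just function calls here; in the patent they are non-blocking channels. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t addr;
    bool     ha_data;     /* line contains HA data           */
    bool     ha_logged;   /* HA data has already been logged */
} cache_line;

static uint64_t dat[64];          /* dirty address table in the first memory */
static size_t   dat_len;

static void write_dat(uint64_t addr)      /* first non-blocking channel  */
{
    dat[dat_len++] = addr;
}

static void write_memory(uint64_t addr)   /* second non-blocking channel */
{
    printf("castout of line 0x%llx to system memory\n", (unsigned long long)addr);
}

static void victimize(cache_line *l) {
    if (l->ha_data && !l->ha_logged) {    /* HA data not yet logged */
        write_dat(l->addr);
        l->ha_logged = true;
    }
    write_memory(l->addr);
}

int main(void) {
    cache_line l = { .addr = 0x4000, .ha_data = true, .ha_logged = false };
    victimize(&l);
    printf("DAT entries: %zu\n", dat_len);
    return 0;
}
```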
  • Patent number: 9304936
    Abstract: In response to receipt of a store-conditional (STCX) request of a processor core, the STCX request is buffered in an entry of a store queue for eventual service by a read-claim (RC) machine by reference to a cache array, and the STCX request is concurrently transmitted via a bypass path bypassing the store queue. In response to dispatch logic dispatching the STCX request transmitted via the bypass path to the RC machine for service by reference to the cache array, the entry of the STCX request in the store queue is updated to prohibit selection of the STCX request in the store queue for service. In response to the STCX request transmitted via the bypass path not being dispatched by the dispatch logic, the STCX is thereafter transmitted from the store queue to the dispatch logic and dispatched to the RC machine for service by reference to the cache array.
    Type: Grant
    Filed: December 9, 2013
    Date of Patent: April 5, 2016
    Assignee: International Business Machines Corporation
    Inventors: Sanjeev Ghai, Guy L. Guthrie, Hugh Shen, Derek E. Williams
  • Publication number: 20150331798
    Abstract: In response to execution in a memory transaction of a transactional load instruction that speculatively binds to a value held in a store-through upper level cache, a processor core sets a flag, transmits a transactional load operation to a store-in lower level cache that tracks a target cache line address of a target cache line containing the value, monitors, during a core TM tracking interval, the target cache line address for invalidation messages from the store-in lower level cache until the store-in lower level cache signals that the store-in lower level cache has assumed responsibility for tracking the target cache line address, and responsive to receipt during the core TM tracking interval of an invalidation message indicating presence of a conflicting snooped operation, resets the flag. At termination of the memory transaction, the processor core fails the memory transaction responsive to the flag being reset.
    Type: Application
    Filed: May 15, 2014
    Publication date: November 19, 2015
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, Derek E. Williams
  • Publication number: 20150331796
    Abstract: In response to execution in a memory transaction of a transactional load instruction that speculatively binds to a value held in a store-through upper level cache, a processor core sets a flag, transmits a transactional load operation to a store-in lower level cache that tracks a target cache line address of a target cache line containing the value, monitors, during a core TM tracking interval, the target cache line address for invalidation messages from the store-in lower level cache until the store-in lower level cache signals that the store-in lower level cache has assumed responsibility for tracking the target cache line address, and responsive to receipt during the core TM tracking interval of an invalidation message indicating presence of a conflicting snooped operation, resets the flag. At termination of the memory transaction, the processor core fails the memory transaction responsive to the flag being reset.
    Type: Application
    Filed: June 23, 2014
    Publication date: November 19, 2015
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, Derek E. Williams
  • Publication number: 20150286569
    Abstract: A technique for operating a cache memory of a data processing system includes creating respective pollution vectors to track which of multiple concurrent threads executed by an associated processor core are currently polluted by a store operation resident in the cache memory. Dependencies in a dependency data structure of a store queue of the cache memory are set based on the pollution vectors to reduce unnecessary ordering effects. Store operations are dispatched from the store queue in accordance with the dependencies indicated by the dependency data structure.
    Type: Application
    Filed: April 4, 2014
    Publication date: October 8, 2015
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, William J. Starke, Derek E. Williams
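Publication 20150286569 (and 20150286570 below) reduces store-queue ordering by tracking, per entry, which threads a store pollutes. The sketch below is a minimal C model of building a dependency mask from such pollution vectors; the entry count, mask widths, and names are invented for the illustration.

```c
/* Toy model of the pollution vectors described in the 20150286569 abstract:
 * each store-queue entry carries a bit vector of the hardware threads it
 * "pollutes", and a newly enqueued store is made dependent only on earlier
 * entries whose pollution vector includes the storing thread, reducing
 * unnecessary ordering. All structure and field names are illustrative. */
#include <stdint.h>
#include <stdio.h>

#define STQ_ENTRIES 8

typedef struct {
    uint64_t addr;
    uint8_t  pollution;     /* bit per thread polluted by this store    */
    uint8_t  depends_on;    /* bit per older entry this one must follow */
    int      valid;
} stq_entry;

static stq_entry stq[STQ_ENTRIES];

/* Enqueue a store from a given thread, building its dependency mask from
 * the pollution vectors of older valid entries. */
static void enqueue_store(int slot, uint64_t addr, int thread) {
    stq[slot].addr = addr;
    stq[slot].pollution = (uint8_t)(1u << thread);
    stq[slot].depends_on = 0;
    for (int i = 0; i < slot; i++)
        if (stq[i].valid && (stq[i].pollution & (1u << thread)))
            stq[slot].depends_on |= (uint8_t)(1u << i);
    stq[slot].valid = 1;
}

int main(void) {
    enqueue_store(0, 0x100, /*thread*/ 0);
    enqueue_store(1, 0x200, /*thread*/ 1);   /* unrelated thread: no dependency */
    enqueue_store(2, 0x300, /*thread*/ 0);   /* depends only on entry 0         */
    printf("entry 2 depends_on mask: 0x%x\n", stq[2].depends_on);
    return 0;
}
```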
  • Publication number: 20150286570
    Abstract: A technique for operating a cache memory of a data processing system includes creating respective pollution vectors to track which of multiple concurrent threads executed by an associated processor core are currently polluted by a store operation resident in the cache memory. Dependencies in a dependency data structure of a store queue of the cache memory are set based on the pollution vectors to reduce unnecessary ordering effects. Store operations are dispatched from the store queue in accordance with the dependencies indicated by the dependency data structure.
    Type: Application
    Filed: August 28, 2014
    Publication date: October 8, 2015
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, William J. Starke, Derek E. Williams
  • Publication number: 20150269076
    Abstract: Statistical data is used to enable or disable snooping on a bus of a processor. A command is received via a first bus or a second bus communicably coupling processor cores and caches of chiplets on the processor. Cache logic on a chiplet determines whether or not a local cache on the chiplet can satisfy a request for data specified in the command. In response to determining that the local cache can satisfy the request for data, the cache logic updates statistical data maintained on the chiplet. The statistical data indicates a probability that the local cache can satisfy a future request for data. Based at least in part on the statistical data, the cache logic determines whether to enable or disable snooping on the second bus by the local cache.
    Type: Application
    Filed: June 8, 2015
    Publication date: September 24, 2015
    Inventors: Guy L. Guthrie, Hien M. Le, Hugh Shen, Derek E. Williams, Phillip G. Williams
  • Publication number: 20150242320
    Abstract: In some embodiments, in response to execution of a load-reserve instruction that binds to a load target address held in a store-through upper level cache, a processor core sets a core reservation flag, transmits a load-reserve operation to a store-in lower level cache, and tracks, during a core reservation tracking interval, the reservation requested by the load-reserve operation until the store-in lower level cache signals that the store-in lower level cache has assumed responsibility for tracking the reservation. In response to receipt during the core reservation tracking interval of an invalidation signal indicating presence of a conflicting snooped operation, the processor core cancels the reservation by resetting the core reservation flag and fails a subsequent store-conditional operation.
    Type: Application
    Filed: February 27, 2014
    Publication date: August 27, 2015
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, William J. Starke, Derek E. Williams
  • Publication number: 20150242251
    Abstract: In at least some embodiments, a cache memory of a data processing system receives a speculative memory access request including a target address of data speculatively requested for a processor core. In response to receipt of the speculative memory access request, transactional memory logic determines whether or not the target address of the speculative memory access request hits a store footprint of a memory transaction. In response to determining that the target address of the speculative memory access request hits a store footprint of a memory transaction, the transactional memory logic causes the cache memory to reject servicing the speculative memory access request.
    Type: Application
    Filed: June 23, 2014
    Publication date: August 27, 2015
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, William J. Starke, Derek E. Williams
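Publications 20150242251 and 20150242250 reject a speculative access whose target hits the store footprint of an active memory transaction. The C sketch below models just that membership check; the fixed-size footprint array and the function names are assumptions of this sketch, not the patent's implementation.

```c
/* Toy model of the check in the 20150242251 abstract: transactional memory
 * logic keeps the store footprint of an active memory transaction and makes
 * the cache reject a speculative memory access whose target address hits
 * that footprint. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define FOOTPRINT_MAX 16

typedef struct {
    uint64_t store_footprint[FOOTPRINT_MAX];  /* line addresses written in the transaction */
    size_t   count;
} tm_logic;

static void record_transactional_store(tm_logic *tm, uint64_t line_addr) {
    if (tm->count < FOOTPRINT_MAX)
        tm->store_footprint[tm->count++] = line_addr;
}

/* Returns true if the cache should service the speculative access,
 * false if it must be rejected because it hits the store footprint. */
static bool service_speculative_access(const tm_logic *tm, uint64_t line_addr) {
    for (size_t i = 0; i < tm->count; i++)
        if (tm->store_footprint[i] == line_addr)
            return false;
    return true;
}

int main(void) {
    tm_logic tm = {0};
    record_transactional_store(&tm, 0x1000);
    printf("access to 0x1000: %s\n",
           service_speculative_access(&tm, 0x1000) ? "serviced" : "rejected");
    printf("access to 0x2000: %s\n",
           service_speculative_access(&tm, 0x2000) ? "serviced" : "rejected");
    return 0;
}
```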
  • Publication number: 20150242250
    Abstract: In at least some embodiments, a cache memory of a data processing system receives a speculative memory access request including a target address of data speculatively requested for a processor core. In response to receipt of the speculative memory access request, transactional memory logic determines whether or not the target address of the speculative memory access request hits a store footprint of a memory transaction. In response to determining that the target address of the speculative memory access request hits a store footprint of a memory transaction, the transactional memory logic causes the cache memory to reject servicing the speculative memory access request.
    Type: Application
    Filed: February 27, 2014
    Publication date: August 27, 2015
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, William J. Starke, Derek E. Williams
  • Publication number: 20150242327
    Abstract: In some embodiments, in response to execution of a load-reserve instruction that binds to a load target address held in a store-through upper level cache, a processor core sets a core reservation flag, transmits a load-reserve operation to a store-in lower level cache, and tracks, during a core reservation tracking interval, the reservation requested by the load-reserve operation until the store-in lower level cache signals that the store-in lower level cache has assumed responsibility for tracking the reservation. In response to receipt during the core reservation tracking interval of an invalidation signal indicating presence of a conflicting snooped operation, the processor core cancels the reservation by resetting the core reservation flag and fails a subsequent store-conditional operation.
    Type: Application
    Filed: September 15, 2014
    Publication date: August 27, 2015
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, Derek E. Williams