Patents by Inventor Hugh Shen

Hugh Shen has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10169103
    Abstract: In at least some embodiments, a cache memory of a data processing system receives a speculative memory access request including a target address of data speculatively requested for a processor core. In response to receipt of the speculative memory access request, transactional memory logic determines whether or not the target address of the speculative memory access request hits a store footprint of a memory transaction. In response to determining that the target address of the speculative memory access request hits a store footprint of a memory transaction, the transactional memory logic causes the cache memory to reject servicing the speculative memory access request.
    Type: Grant
    Filed: February 27, 2014
    Date of Patent: January 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, William J. Starke, Derek E. Williams
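
    As a rough illustration of the mechanism in the abstract above, the C sketch below checks a speculative load's target address against the store footprint of an in-flight memory transaction and rejects the request on a hit. The types and names (tm_txn, store_footprint, the 128-byte line size) are assumptions for illustration, not details taken from the patent.

      /* Hypothetical sketch of the footprint check; structure and field
       * names are illustrative, not from the patent. */
      #include <stdbool.h>
      #include <stdint.h>
      #include <stddef.h>

      #define FOOTPRINT_MAX 64

      typedef struct {
          bool     active;                         /* transaction in flight        */
          size_t   n_stores;                       /* cache lines written so far   */
          uint64_t store_footprint[FOOTPRINT_MAX]; /* line-aligned store addresses */
      } tm_txn;

      /* Returns true if the cache should reject (not service) a speculative
       * access whose target address hits a line already written by the
       * transaction. */
      static bool reject_speculative_access(const tm_txn *txn, uint64_t target_addr)
      {
          const uint64_t line_mask = ~(uint64_t)0x7F;   /* assume 128-byte lines */
          if (!txn->active)
              return false;
          for (size_t i = 0; i < txn->n_stores; i++) {
              if ((target_addr & line_mask) == txn->store_footprint[i])
                  return true;                          /* hit the store footprint */
          }
          return false;                                 /* safe to service */
      }
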
  • Patent number: 10162755
    Abstract: A technique for operating a cache memory of a data processing system includes creating respective pollution vectors to track which of multiple concurrent threads executed by an associated processor core are currently polluted by a store operation resident in the cache memory. Dependencies in a dependency data structure of a store queue of the cache memory are set based on the pollution vectors to reduce unnecessary ordering effects. Store operations are dispatched from the store queue in accordance with the dependencies indicated by the dependency data structure.
    Type: Grant
    Filed: October 31, 2016
    Date of Patent: December 25, 2018
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, William J. Starke, Derek E. Williams
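
    A minimal C sketch of the idea above, assuming a small store queue with a per-entry pollution bit vector and dependency bitmask; the data layout and sizes are illustrative, not the patented design.

      #include <stdbool.h>
      #include <stdint.h>

      #define SQ_ENTRIES  8
      #define MAX_THREADS 4

      typedef struct {
          bool    valid;
          uint8_t pollution;   /* bit t set => thread t is polluted by this store */
          uint8_t dep_mask;    /* bit i set => must wait for entry i              */
      } sq_entry;

      typedef struct {
          sq_entry entry[SQ_ENTRIES];
      } store_queue;

      /* When a store from 'thread' enters slot 'slot', order it only behind
       * older stores whose pollution vector names that thread. */
      static void set_dependencies(store_queue *sq, int slot, int thread)
      {
          sq->entry[slot].dep_mask = 0;
          for (int i = 0; i < SQ_ENTRIES; i++) {
              if (i == slot || !sq->entry[i].valid)
                  continue;
              if (sq->entry[i].pollution & (1u << thread))
                  sq->entry[slot].dep_mask |= (uint8_t)(1u << i);
          }
      }

      /* An entry may dispatch once every entry it depends on has drained. */
      static bool can_dispatch(const store_queue *sq, int slot)
      {
          uint8_t pending = 0;
          for (int i = 0; i < SQ_ENTRIES; i++)
              if (sq->entry[i].valid)
                  pending |= (uint8_t)(1u << i);
          return (sq->entry[slot].dep_mask & pending) == 0;
      }
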
  • Publication number: 20180357131
    Abstract: A computer system comprises a processor unit arranged to run a hypervisor running one or more virtual machines; a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line and an image modification flag; and a memory connected to the cache and arranged to store an image of at least one virtual machine. The processor unit is arranged to define a log in the memory and the cache further comprises a cache controller arranged to set the image modification flag for a cache line modified by a virtual machine being backed up, but not for a cache line modified by the hypervisor operating in privilege mode; periodically check the image modification flags; and write only the memory address of the flagged cache rows in the defined log.
    Type: Application
    Filed: August 22, 2018
    Publication date: December 13, 2018
    Inventors: Guy Lynn Guthrie, Naresh Nayar, Geraint North, Hugh Shen, William Starke, Phillip Williams
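
    The C sketch below models the flag-and-sweep flow described in this abstract (and in the related entries that follow): the image modification flag is set only for writes by the virtual machine being backed up, and a periodic sweep logs only the addresses of flagged rows. The structures and the fixed-size log are illustrative assumptions.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stddef.h>

      typedef struct {
          uint64_t addr;            /* memory address of the cached line */
          bool     modified_by_vm;  /* image modification flag           */
      } cache_row;

      typedef struct {
          uint64_t entries[1024];
          size_t   count;
      } backup_log;

      /* Set the flag only for writes issued by the VM being backed up,
       * never for writes made by the hypervisor in privileged mode. */
      static void on_cache_write(cache_row *row, bool from_hypervisor, bool vm_backed_up)
      {
          if (!from_hypervisor && vm_backed_up)
              row->modified_by_vm = true;
      }

      /* Periodic sweep: record only the addresses of flagged rows, then clear. */
      static void sweep_flags(cache_row *rows, size_t n_rows, backup_log *log)
      {
          for (size_t i = 0; i < n_rows; i++) {
              if (rows[i].modified_by_vm && log->count < 1024) {
                  log->entries[log->count++] = rows[i].addr;
                  rows[i].modified_by_vm = false;
              }
          }
      }
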
  • Publication number: 20180357129
    Abstract: A computer system comprises a processor unit arranged to run a hypervisor running one or more virtual machines; a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line and an image modification flag; and a memory connected to the cache and arranged to store an image of at least one virtual machine. The processor unit is arranged to define a log in the memory and the cache further comprises a cache controller arranged to set the image modification flag for a cache line modified by a virtual machine being backed up, but not for a cache line modified by the hypervisor operating in privilege mode; periodically check the image modification flags; and write only the memory address of the flagged cache rows in the defined log.
    Type: Application
    Filed: August 22, 2018
    Publication date: December 13, 2018
    Inventors: Guy Lynn Guthrie, Naresh Nayar, Geraint North, Hugh Shen, William Starke, Phillip Williams
  • Patent number: 10152385
    Abstract: A computer system comprises a processor unit arranged to run a hypervisor running one or more virtual machines; a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line and an image modification flag; and a memory connected to the cache and arranged to store an image of at least one virtual machine. The processor unit is arranged to define a log in the memory and the cache further comprises a cache controller arranged to set the image modification flag for a cache line modified by a virtual machine being backed up, but not for a cache line modified by the hypervisor operating in privilege mode; periodically check the image modification flags; and write only the memory address of the flagged cache rows in the defined log.
    Type: Grant
    Filed: July 3, 2014
    Date of Patent: December 11, 2018
    Assignee: International Business Machines Corporation
    Inventors: Guy Lynn Guthrie, Naresh Nayar, Geraint North, Hugh Shen, William Starke, Phillip Williams
  • Publication number: 20180350426
    Abstract: A data processing system includes a plurality of processor cores each having a respective store-through upper level cache and a store-in banked lower level cache. Store requests of the plurality of processor cores destined for the banked lower level cache are buffered in multiple store queues including a first store queue and a second store queue. In response to determining that the multiple store queues contain store requests targeting a common bank of the banked lower level cache, store requests from the first store queue are temporarily favored for selection for issuance to the banked lower level cache over those in the second store queue.
    Type: Application
    Filed: June 6, 2017
    Publication date: December 6, 2018
    Inventors: Sanjeev Ghai, Guy L. Guthrie, Hugh Shen, Derek E. Williams
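
    A hedged C sketch of the bank-conflict arbitration described above, assuming two store queues and a small "favor window" counter; bank_of() and the window length are illustrative choices, not values from the patent.

      #include <stdbool.h>
      #include <stdint.h>

      #define N_BANKS 8

      static unsigned bank_of(uint64_t addr) { return (addr >> 7) & (N_BANKS - 1); }

      typedef struct {
          bool     has_head;
          uint64_t head_addr;   /* address of the oldest store in this queue */
      } store_q;

      /* Returns 0 to issue from q0, 1 to issue from q1.  While both heads
       * target the same bank, q0 is temporarily favored for a few selections
       * so its stream drains, then q1 is allowed to make progress. */
      static int select_queue(const store_q *q0, const store_q *q1, int *favor_cycles)
      {
          bool conflict = q0->has_head && q1->has_head &&
                          bank_of(q0->head_addr) == bank_of(q1->head_addr);
          if (!conflict) {
              *favor_cycles = 0;
              return q0->has_head ? 0 : 1;   /* no conflict: no preference */
          }
          if (*favor_cycles > 0) {           /* inside the favor window    */
              (*favor_cycles)--;
              return 0;                      /* keep issuing from queue 0  */
          }
          *favor_cycles = 4;                 /* re-open the window, but let
                                                queue 1 issue once first   */
          return 1;
      }
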
  • Publication number: 20180350427
    Abstract: A data processing system includes a plurality of processor cores each having a respective store-through upper level cache and a store-in banked lower level cache. Store requests of the plurality of processor cores destined for the banked lower level cache are buffered in multiple store queues including a first store queue and a second store queue. In response to determining that the multiple store queues contain store requests targeting a common bank of the banked lower level cache, store requests from the first store queue are temporarily favored for selection for issuance to the banked lower level cache over those in the second store queue.
    Type: Application
    Filed: November 29, 2017
    Publication date: December 6, 2018
    Inventors: Sanjeev Ghai, Guy L. Guthrie, Hugh Shen, Derek E. Williams
  • Patent number: 10133641
    Abstract: A computer system comprises a processor unit arranged to run a hypervisor running one or more virtual machines; a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line and an image modification flag; and a memory connected to the cache and arranged to store an image of at least one virtual machine. The processor unit is arranged to define a log in the memory and the cache further comprises a cache controller arranged to set the image modification flag for a cache line modified by a virtual machine being backed up, but not for a cache line modified by the hypervisor operating in privilege mode; periodically check the image modification flags; and write only the memory address of the flagged cache rows in the defined log.
    Type: Grant
    Filed: November 7, 2017
    Date of Patent: November 20, 2018
    Assignee: International Business Machines Corporation
    Inventors: Guy Lynn Guthrie, Naresh Nayar, Geraint North, Hugh Shen, William Starke, Phillip Williams
  • Publication number: 20180329826
    Abstract: A technique for operating a lower level cache memory of a data processing system includes receiving an operation that is associated with a first thread. Logical partition (LPAR) information for the operation is used to limit dependencies in a dependency data structure of a store queue of the lower level cache memory that are set and to remove dependencies that are otherwise unnecessary.
    Type: Application
    Filed: July 9, 2018
    Publication date: November 15, 2018
    Inventors: Guy L. Guthrie, Hugh Shen, Derek E. Williams
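
    A brief C sketch, assuming per-entry LPAR IDs and a dependency bitmask: a new store only orders behind older stores from the same logical partition, so cross-LPAR dependencies are never set. The layout is illustrative, not taken from the patent.

      #include <stdbool.h>
      #include <stdint.h>

      #define SQ_ENTRIES 8

      typedef struct {
          bool    valid;
          uint8_t lpar_id;    /* logical partition the store belongs to */
          uint8_t dep_mask;   /* bit i set => wait for entry i          */
      } sq_entry;

      /* A new store in 'slot' only takes dependencies on older valid stores
       * from the same LPAR; cross-LPAR ordering is unnecessary and skipped. */
      static void set_lpar_dependencies(sq_entry *sq, int slot)
      {
          sq[slot].dep_mask = 0;
          for (int i = 0; i < SQ_ENTRIES; i++) {
              if (i != slot && sq[i].valid && sq[i].lpar_id == sq[slot].lpar_id)
                  sq[slot].dep_mask |= (uint8_t)(1u << i);
          }
      }
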
  • Patent number: 10108464
    Abstract: In at least some embodiments, a cache memory of a data processing system receives a speculative memory access request including a target address of data speculatively requested for a processor core. In response to receipt of the speculative memory access request, transactional memory logic determines whether or not the target address of the speculative memory access request hits a store footprint of a memory transaction. In response to determining that the target address of the speculative memory access request hits a store footprint of a memory transaction, the transactional memory logic causes the cache memory to reject servicing the speculative memory access request.
    Type: Grant
    Filed: June 23, 2014
    Date of Patent: October 23, 2018
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, William J. Starke, Derek E. Williams
  • Patent number: 10108498
    Abstract: A computer system comprises a processor unit arranged to run a hypervisor running one or more virtual machines; a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line and an image modification flag; and a memory connected to the cache and arranged to store an image of at least one virtual machine. The processor unit is arranged to define a log in the memory and the cache further comprises a cache controller arranged to set the image modification flag for a cache line modified by a virtual machine being backed up, but not for a cache line modified by the hypervisor operating in privilege mode; periodically check the image modification flags; and write only the memory address of the flagged cache rows in the defined log.
    Type: Grant
    Filed: February 19, 2016
    Date of Patent: October 23, 2018
    Assignee: International Business Machines Corporation
    Inventors: Guy Lynn Guthrie, Naresh Nayar, Geraint North, Hugh Shen, William Starke, Phillip Williams
  • Patent number: 10083142
    Abstract: A technique for handling cache-inhibited operations in a data processing system includes receiving, at a topology-specific replicated bus unit, a cache-inhibited (CI) operation that is scope limited. The replicated bus unit determines whether an address associated with the CI operation matches an address for the replicated bus unit. In response to the address associated with the CI operation matching the address for the replicated bus unit, the replicated bus unit processes the CI operation based on the scope being limited to that of the replicated bus unit. In response to the address associated with the CI operation not matching the address for the replicated bus unit, the replicated bus unit ignores the CI operation.
    Type: Grant
    Filed: March 28, 2016
    Date of Patent: September 25, 2018
    Assignee: International Business Machines Corporation
    Inventors: Richard L. Arndt, Florian Auernhammer, Hugh Shen, Derek E. Williams
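
    The sketch below shows the address-match filter described above: a replicated bus unit services a scope-limited cache-inhibited operation only when the address selects that replica, and otherwise ignores it. The structures and the printf placeholder are assumptions for illustration.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      typedef struct {
          uint64_t base_addr;   /* address this replicated unit responds to */
          uint64_t addr_mask;   /* which address bits identify the unit     */
      } bus_unit;

      typedef struct {
          uint64_t addr;        /* target of the cache-inhibited operation  */
          int      scope;       /* limited scope requested by the master    */
      } ci_op;

      static void handle_ci_op(const bus_unit *u, const ci_op *op)
      {
          if ((op->addr & u->addr_mask) == u->base_addr) {
              /* Address matches this replica: service the operation within
               * the limited scope instead of broadcasting system-wide. */
              printf("servicing CI op 0x%llx at scope %d\n",
                     (unsigned long long)op->addr, op->scope);
          } else {
              /* Not ours: this replica simply ignores the operation. */
          }
      }
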
  • Patent number: 10019374
    Abstract: A technique for operating a lower level cache memory of a data processing system includes receiving an operation that is associated with a first thread. Logical partition (LPAR) information for the operation is used to limit dependencies in a dependency data structure of a store queue of the lower level cache memory that are set and to remove dependencies that are otherwise unnecessary.
    Type: Grant
    Filed: April 28, 2016
    Date of Patent: July 10, 2018
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, Derek E. Williams
  • Publication number: 20180165392
    Abstract: In a data processing system, a processor creates level qualifying logic within instrumentation of a hardware description language (HDL) simulation model of a design. The level qualifying logic is configured to generate a first event of a first type for a first simulation level and to generate a second event of a second type for a second simulation level. The processor simulates the design utilizing the HDL simulation model, where the simulation includes generating the first event of the first type responsive to the simulating being performed at the first simulation level and generating the second event of the second type responsive to the simulating being performed at the second simulation level. Responsive to the simulating, the processor records, within data storage, at least one occurrence of an event from a set including the first event and the second event.
    Type: Application
    Filed: December 12, 2016
    Publication date: June 14, 2018
    Inventors: Guy L. Guthrie, Hugh Shen, Derek E. Williams
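
    A toy C model of level-qualified event generation as described above: the same instrumentation point records a different event type depending on the level at which the model is simulated. The enum names and the counter-based recorder are illustrative assumptions.

      #include <stdio.h>

      typedef enum { SIM_LEVEL_UNIT, SIM_LEVEL_SYSTEM } sim_level;
      typedef enum { EVENT_TYPE_A, EVENT_TYPE_B } event_type;

      /* Record an event occurrence in "data storage" (here, just counters). */
      static unsigned event_counts[2];
      static void record_event(event_type e) { event_counts[e]++; }

      /* Level qualifying logic: the same instrumentation point produces a
       * different event type depending on the simulation level. */
      static void instrumentation_point(sim_level level)
      {
          if (level == SIM_LEVEL_UNIT)
              record_event(EVENT_TYPE_A);   /* first event, first level   */
          else
              record_event(EVENT_TYPE_B);   /* second event, second level */
      }

      int main(void)
      {
          instrumentation_point(SIM_LEVEL_UNIT);
          instrumentation_point(SIM_LEVEL_SYSTEM);
          printf("A=%u B=%u\n", event_counts[EVENT_TYPE_A], event_counts[EVENT_TYPE_B]);
          return 0;
      }
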
  • Patent number: 9959213
    Abstract: A technique for operating a lower level cache memory of a data processing system includes receiving, by a store queue controller, an operation that is associated with a first thread. The store queue controller uses level one (L1) cache memory miss information for the operation to limit dependencies in a dependency data structure of a store queue of the lower level cache memory that are set and to remove dependencies that are otherwise unnecessary.
    Type: Grant
    Filed: April 28, 2016
    Date of Patent: May 1, 2018
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, Derek E. Williams
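
    An illustrative C sketch only: the specific policy below (ordering a new store only behind older same-thread stores that hit in L1) is an assumption used to show how miss information could prune store-queue dependencies, not necessarily the rule the patent claims.

      #include <stdbool.h>
      #include <stdint.h>

      #define SQ_ENTRIES 8

      typedef struct {
          bool    valid;
          bool    l1_miss;    /* the store missed in the L1 cache      */
          uint8_t thread;     /* hardware thread that issued the store */
          uint8_t dep_mask;   /* bit i set => wait for entry i         */
      } sq_entry;

      /* Example policy (assumption): a new store in 'slot' only orders
       * behind older valid stores from the same thread that hit in L1;
       * dependencies on L1-miss stores are skipped as unnecessary. */
      static void set_miss_qualified_deps(sq_entry *sq, int slot)
      {
          sq[slot].dep_mask = 0;
          for (int i = 0; i < SQ_ENTRIES; i++) {
              if (i == slot || !sq[i].valid)
                  continue;
              if (sq[i].thread == sq[slot].thread && !sq[i].l1_miss)
                  sq[slot].dep_mask |= (uint8_t)(1u << i);
          }
      }
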
  • Patent number: 9928119
    Abstract: In a multithreaded data processing system including a plurality of processor cores, storage-modifying requests of a plurality of concurrently executing hardware threads are received in a shared queue. The storage-modifying requests include a translation invalidation request of an initiating hardware thread. The translation invalidation request is removed from the shared queue and buffered in sidecar logic in one of a plurality of sidecars each associated with a respective one of the plurality of hardware threads. While the translation invalidation request is buffered in the sidecar, the sidecar logic broadcasts the translation invalidation request so that it is received and processed by the plurality of processor cores. In response to confirmation of completion of processing of the translation invalidation request by the initiating processor core, the sidecar logic removes the translation invalidation request from the sidecar.
    Type: Grant
    Filed: March 28, 2016
    Date of Patent: March 27, 2018
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, Derek E. Williams
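
    A high-level C model of the sidecar flow described above: the translation invalidation request moves from the shared queue into the initiating thread's sidecar, is broadcast from there to all cores, and is retired when the initiating core confirms completion. Structure names and the broadcast hook are assumptions.

      #include <stdbool.h>
      #include <stdint.h>

      #define N_THREADS 4

      typedef struct {
          bool     busy;        /* sidecar currently holds a TLBI request   */
          uint64_t tlbi_addr;   /* address whose translation is invalidated */
      } sidecar;

      static sidecar sidecars[N_THREADS];

      /* Hypothetical hook: send the invalidation to every processor core. */
      static void broadcast_to_all_cores(uint64_t addr) { (void)addr; }

      /* Move a translation-invalidation request out of the shared queue into
       * the initiating thread's sidecar and broadcast it from there. */
      static bool issue_tlbi(int thread, uint64_t addr)
      {
          if (sidecars[thread].busy)
              return false;                 /* one outstanding TLBI per thread */
          sidecars[thread].busy = true;
          sidecars[thread].tlbi_addr = addr;
          broadcast_to_all_cores(addr);     /* cores process it asynchronously */
          return true;
      }

      /* Called when the initiating core confirms processing is complete. */
      static void tlbi_complete(int thread)
      {
          sidecars[thread].busy = false;    /* retire the request from the sidecar */
      }
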
  • Publication number: 20180074911
    Abstract: A computer system comprises a processor unit arranged to run a hypervisor running one or more virtual machines; a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line and an image modification flag; and a memory connected to the cache and arranged to store an image of at least one virtual machine. The processor unit is arranged to define a log in the memory and the cache further comprises a cache controller arranged to set the image modification flag for a cache line modified by a virtual machine being backed up, but not for a cache line modified by the hypervisor operating in privilege mode; periodically check the image modification flags; and write only the memory address of the flagged cache rows in the defined log.
    Type: Application
    Filed: November 7, 2017
    Publication date: March 15, 2018
    Inventors: Guy Lynn Guthrie, Naresh Nayar, Geraint North, Hugh Shen, William Starke, Phillip Williams
  • Publication number: 20180067816
    Abstract: A computer system comprises a processor unit arranged to run a hypervisor running one or more virtual machines; a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line and an image modification flag; and a memory connected to the cache and arranged to store an image of at least one virtual machine. The processor unit is arranged to define a log in the memory and the cache further comprises a cache controller arranged to set the image modification flag for a cache line modified by a virtual machine being backed up, but not for a cache line modified by the hypervisor operating in privilege mode; periodically check the image modification flags; and write only the memory address of the flagged cache rows in the defined log.
    Type: Application
    Filed: November 7, 2017
    Publication date: March 8, 2018
    Inventors: Guy Lynn Guthrie, Naresh Nayar, Geraint North, Hugh Shen, William Starke, Phillip Williams
  • Patent number: 9910782
    Abstract: In at least some embodiments, a processor core generates a store operation by executing a store instruction in an instruction sequence. The store operation is marked as a high priority store operation in response to the store instruction being marked as high priority and is not so marked otherwise. The store operation is buffered in a store queue associated with a cache memory of the processor core. Handling of the store operation in the store queue is expedited in response to the store operation being marked as a high priority store operation and not expedited otherwise.
    Type: Grant
    Filed: August 28, 2015
    Date of Patent: March 6, 2018
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, Jeffrey A. Stuecheli, Derek E. Williams
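
    A small C sketch of priority-aware selection, assuming a per-entry high-priority bit and an age field: a marked store is expedited ahead of ordinary stores, and otherwise the oldest store wins. The queue layout is illustrative.

      #include <stdbool.h>
      #include <stdint.h>

      #define SQ_ENTRIES 8

      typedef struct {
          bool     valid;
          bool     high_priority;  /* set when the store instruction was marked */
          uint32_t age;            /* larger = older                            */
      } sq_entry;

      /* Pick the next store to dispatch: any valid high-priority entry is
       * expedited ahead of ordinary stores; ties go to the oldest entry. */
      static int select_store(const sq_entry *sq)
      {
          int best = -1;
          for (int i = 0; i < SQ_ENTRIES; i++) {
              if (!sq[i].valid)
                  continue;
              if (best < 0 ||
                  (sq[i].high_priority && !sq[best].high_priority) ||
                  (sq[i].high_priority == sq[best].high_priority &&
                   sq[i].age > sq[best].age))
                  best = i;
          }
          return best;   /* -1 if the queue is empty */
      }
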
  • Publication number: 20180052771
    Abstract: Statistical data is used to enable or disable snooping on a bus of a processor. A command is received via a first bus or a second bus communicably coupling processor cores and caches of chiplets on the processor. Cache logic on a chiplet determines whether or not a local cache on the chiplet can satisfy a request for data specified in the command. In response to determining that the local cache can satisfy the request for data, the cache logic updates statistical data maintained on the chiplet. The statistical data indicates a probability that the local cache can satisfy a future request for data. Based at least in part on the statistical data, the cache logic determines whether to enable or disable snooping on the second bus by the local cache.
    Type: Application
    Filed: October 27, 2017
    Publication date: February 22, 2018
    Inventors: Guy L. Guthrie, Hien M. Le, Hugh Shen, Derek E. Williams, Phillip G. Williams
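
    A rough C sketch of statistics-driven snoop gating as described above, assuming a windowed hit counter and a fixed threshold; both are illustrative parameters, not values from the patent.

      #include <stdbool.h>
      #include <stdint.h>

      typedef struct {
          uint32_t window_cmds;   /* commands observed in the current window */
          uint32_t window_hits;   /* commands the local cache could satisfy  */
          bool     snoop_enabled; /* snooping on the second bus on or off    */
      } snoop_stats;

      /* Update statistics for one observed command and re-evaluate whether
       * snooping the second bus is worth the power and bandwidth. */
      static void observe_command(snoop_stats *s, bool local_hit)
      {
          s->window_cmds++;
          if (local_hit)
              s->window_hits++;

          if (s->window_cmds >= 1024) {             /* end of sampling window */
              unsigned hit_pct = (100u * s->window_hits) / s->window_cmds;
              s->snoop_enabled = (hit_pct >= 5);    /* keep snooping only if the
                                                       local cache is likely to
                                                       satisfy future requests */
              s->window_cmds = s->window_hits = 0;
          }
      }
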