Patents by Inventor Eric Francis Robinson

Eric Francis Robinson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170286303
    Abstract: Systems and methods relate to servicing a demand miss for a cache line in a first cache (e.g., an L1 cache) of a processing system, for example, when none of one or more fill buffers for servicing the demand miss are available. In exemplary aspects, the demand miss is converted to a prefetch operation to prefetch the cache line into a second cache (e.g., an L2 cache), wherein the second cache is a backing storage location for the first cache. Thus, servicing the demand miss is not delayed until a fill buffer becomes available, and once a fill buffer becomes available, the prefetched cache line is returned from the second cache to the available fill buffer. (A simplified, illustrative sketch of this miss-to-prefetch conversion appears after this listing.)
    Type: Application
    Filed: March 31, 2016
    Publication date: October 5, 2017
    Inventors: Khary Jason ALEXANDER, Eric Francis ROBINSON
  • Publication number: 20170255557
    Abstract: The disclosure relates to filtering snoops in coherent multiprocessor systems. For example, in response to a request to update a target memory location at a Level-2 (L2) cache shared among multiple local processing units each having a Level-1 (L1) cache, a lookup based on the target memory location may be performed in a snoop filter that tracks entries in the L1 caches. If the lookup misses the snoop filter and the snoop filter lacks space to store a new entry, a victim entry to evict from the snoop filter may be selected and a request to invalidate every cache line that maps to the victim entry may be sent to at least one of the processing units with one or more cache lines that map to the victim entry. The victim entry may then be replaced in the snoop filter with the new entry corresponding to the target memory location. (A simplified, illustrative sketch of this eviction flow appears after this listing.)
    Type: Application
    Filed: March 7, 2016
    Publication date: September 7, 2017
    Inventors: Eric Francis ROBINSON, Khary Jason ALEXANDER, Zeid Hartuon SAMOAIL, Benjamin Charles MICHELSON
  • Patent number: 8397029
    Abstract: A method for maintaining cache coherency operates in a data processing system with a system memory and a plurality of processing units (PUs), each PU having a cache, and each PU coupled to at least another one of the plurality of PUs. A first PU receives a first data block for storage in a first cache of the first PU. The first PU stores the first data block in the first cache. The first PU assigns a first coherency state and a first tag to the first data block, wherein the first coherency state is one of a plurality of coherency states that indicate whether the first PU has accessed the first data block. The plurality of coherency states further indicate whether, in the event the first PU has not accessed the first data block, the first PU received the first data block from a neighboring PU. (A simplified, illustrative sketch of this state assignment appears after this listing.)
    Type: Grant
    Filed: December 19, 2007
    Date of Patent: March 12, 2013
    Assignee: International Business Machines Corporation
    Inventors: Richard Nicholas, Jason Alan Cox, Robert John Dorsey, Hien Minh Le, Eric Francis Robinson, Thuong Quang Truong
  • Patent number: 8296520
    Abstract: A method for managing data operates in a data processing system with a system memory and a plurality of processing units (PUs), each PU having a cache comprising a plurality of cache lines, each cache line having one of a plurality of coherency states, and each PU coupled to at least another one of the plurality of PUs. A first PU selects a castout cache line of a plurality of cache lines in a first cache of the first PU to be castout of the first cache. The first PU sends a request to a second PU, wherein the second PU is a neighboring PU of the first PU, and the request comprises a first address and first coherency state of the selected castout cache line. The second PU determines whether the first address matches an address of any cache line in the second PU. The second PU sends a response to the first PU based on a coherency state of each of a plurality of cache lines in the second cache and whether there is an address hit. (A simplified, illustrative sketch of this castout handshake appears after this listing.)
    Type: Grant
    Filed: December 19, 2007
    Date of Patent: October 23, 2012
    Assignee: International Business Machines Corporation
    Inventors: Hien Minh Le, Jason Alan Cox, Robert John Dorsey, Richard Nicholas, Eric Francis Robinson, Thuong Quang Truong
  • Patent number: 7836257
    Abstract: A method for managing a cache operates in a data processing system with a system memory and a plurality of processing units (PUs). A first PU determines that one of a plurality of cache lines in a first cache of the first PU must be replaced with a first data block, and determines whether the first data block is a victim cache line from another one of the plurality of PUs. In the event the first data block is not a victim cache line from another one of the plurality of PUs, the first cache does not contain a cache line in coherency state invalid, and the first cache contains a cache line in coherency state moved, the first PU selects a cache line in coherency state moved, stores the first data block in the selected cache line and updates the coherency state of the first data block. (A simplified, illustrative sketch of this replacement preference appears after this listing.)
    Type: Grant
    Filed: December 19, 2007
    Date of Patent: November 16, 2010
    Assignee: International Business Machines Corporation
    Inventors: Robert John Dorsey, Jason Alan Cox, Hien Minh Le, Richard Nicholas, Eric Francis Robinson, Thuong Quang Truong
  • Patent number: 7818509
    Abstract: A cache coherency technique used in a multi-node symmetric multiprocessor system reduces the number of message phases for a read request from 5 to 4 by canceling the combined response phase for read requests in most cases, thereby improving system performance and reducing overall system power consumption.
    Type: Grant
    Filed: October 31, 2007
    Date of Patent: October 19, 2010
    Assignee: International Business Machines Corporation
    Inventors: Brian Mitchell Bass, Eric Francis Robinson, Thuong Quang Truong
  • Patent number: 7640414
    Abstract: Methods, systems, and computer program products for forwarding store data to loads in a pipelined processor are provided. In one implementation, a processor is provided that includes a decoder operable to decode an instruction, and a plurality of execution units operable to respectively execute a decoded instruction from the decoder. The plurality of execution units include a load/store execution unit operable to execute decoded load instructions and decoded store instructions and generate corresponding load memory operations and store memory operations. The processor also includes a store queue operable to buffer one or more store memory operations before they are completed, and to forward store data of the buffered store memory operations to a load memory operation on a byte-by-byte basis. (A simplified, illustrative sketch of this byte-by-byte forwarding appears after this listing.)
    Type: Grant
    Filed: November 16, 2006
    Date of Patent: December 29, 2009
    Assignee: International Business Machines Corporation
    Inventors: Jason Alan Cox, Kevin Chih Kang Lin, Eric Francis Robinson
  • Publication number: 20090164735
    Abstract: A method for maintaining cache coherency operates in a data processing system with a system memory and a plurality of processing units (PUs), each PU having a cache, and each PU coupled to at least another one of the plurality of PUs. A first PU receives a first data block for storage in a first cache of the first PU. The first PU stores the first data block in the first cache. The first PU assigns a first coherency state and a first tag to the first data block, wherein the first coherency state is one of a plurality of coherency states that indicate whether the first PU has accessed the first data block. The plurality of coherency states further indicate whether, in the event the first PU has not accessed the first data block, the first PU received the first data block from a neighboring PU.
    Type: Application
    Filed: December 19, 2007
    Publication date: June 25, 2009
    Inventors: Richard Nicholas, Jason Alan Cox, Robert John Dorsey, Hien Minh Le, Eric Francis Robinson, Thuong Quang Truong
  • Publication number: 20090164731
    Abstract: A method for managing data operates in a data processing system with a system memory and a plurality of processing units (PUs), each PU having a cache comprising a plurality of cache lines, each cache line having one of a plurality of coherency states, and each PU coupled to at least another one of the plurality of PUs. A first PU selects a castout cache line of a plurality of cache lines in a first cache of the first PU to be castout of the first cache. The first PU sends a request to a second PU, wherein the second PU is a neighboring PU of the first PU, and the request comprises a first address and first coherency state of the selected castout cache line. The second PU determines whether the first address matches an address of any cache line in the second PU. The second PU sends a response to the first PU based on a coherency state of each of a plurality of cache lines in the second cache and whether there is an address hit.
    Type: Application
    Filed: December 19, 2007
    Publication date: June 25, 2009
    Inventors: Hien Minh Le, Jason Alan Cox, Robert John Dorsey, Richard Nicholas, Eric Francis Robinson, Thuong Quang Truong
  • Publication number: 20090164736
    Abstract: A method for managing a cache operates in a data processing system with a system memory and a plurality of processing units (PUs). A first PU determines that one of a plurality of cache lines in a first cache of the first PU must be replaced with a first data block, and determines whether the first data block is a victim cache line from another one of the plurality of PUs. In the event the first data block is not a victim cache line from another one of the plurality of PUs, the first cache does not contain a cache line in coherency state invalid, and the first cache contains a cache line in coherency state moved, the first PU selects a cache line in coherency state moved, stores the first data block in the selected cache line and updates the coherency state of the first data block.
    Type: Application
    Filed: December 19, 2007
    Publication date: June 25, 2009
    Inventors: Robert John Dorsey, Jason Alan Cox, Hien Minh Le, Richard Nicholas, Eric Francis Robinson, Thuong Quang Truong
  • Publication number: 20090113138
    Abstract: A cache coherency technique used in a multi-node symmetric multiprocessor system reduces the number of message phases for a read request from 5 to 4 by canceling the combined response phase for read requests in most cases, thereby improving system performance and reducing overall system power consumption.
    Type: Application
    Filed: October 31, 2007
    Publication date: April 30, 2009
    Inventors: Brian Mitchell Bass, Eric Francis Robinson, Thuong Quang Truong
  • Publication number: 20080120472
    Abstract: Methods, systems, and computer program products for forwarding store data to loads in a pipelined processor are provided. In one implementation, a processor is provided that includes a decoder operable to decode an instruction, and a plurality of execution units operable to respectively execute a decoded instruction from the decoder. The plurality of execution units include a load/store execution unit operable to execute decoded load instructions and decoded store instructions and generate corresponding load memory operations and store memory operations. The processor also includes a store queue operable to buffer one or more store memory operations before they are completed, and to forward store data of the buffered store memory operations to a load memory operation on a byte-by-byte basis.
    Type: Application
    Filed: November 16, 2006
    Publication date: May 22, 2008
    Inventors: Jason Alan Cox, Kevin Chih Kang Lin, Eric Francis Robinson
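
The sketches below are not taken from the patent documents above; each is a minimal, hypothetical Python illustration of the mechanism the corresponding abstract describes, with invented names and heavily simplified data structures. This first one relates to publication 20170286303: an L1 demand miss that cannot obtain a fill buffer is converted into an L2 prefetch rather than stalling. The TwoLevelCache class, its dictionary-based caches, and the fill-buffer model are all assumptions made for illustration.

```python
# Hypothetical sketch (not the patent's implementation): when an L1 demand miss
# cannot get a fill buffer, it becomes a prefetch into the L2 (the L1's backing
# storage); the line is pulled into a buffer later, once one frees up.

class TwoLevelCache:
    def __init__(self, num_fill_buffers=2):
        self.l1 = {}                              # address -> data
        self.l2 = {}                              # address -> data (backs the L1)
        self.fill_buffers = [None] * num_fill_buffers

    def _free_buffer(self):
        for i, entry in enumerate(self.fill_buffers):
            if entry is None:
                return i
        return None

    def load(self, addr, memory):
        if addr in self.l1:
            return self.l1[addr]                  # L1 hit
        buf = self._free_buffer()
        if buf is None:
            # No fill buffer available: convert the demand miss into an
            # L2 prefetch so the request is not delayed behind buffer reuse.
            self.l2.setdefault(addr, memory[addr])
            return None                           # caller retries later
        # Normal demand fill: stage the line in a fill buffer (from the L2 if it
        # was prefetched there earlier, otherwise from memory), then install it.
        data = self.l2.get(addr, memory[addr])
        self.fill_buffers[buf] = (addr, data)
        self.l1[addr] = data
        self.fill_buffers[buf] = None             # buffer retires after the fill
        return data

mem = {0x40: "lineA"}
cache = TwoLevelCache(num_fill_buffers=0)         # force the no-buffer case
assert cache.load(0x40, mem) is None              # miss converted to an L2 prefetch
assert 0x40 in cache.l2                           # line is now staged in the L2
```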
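A minimal sketch of the snoop-filter eviction flow described in publication 20170255557, assuming an invented SnoopFilter class, a FIFO-style victim choice, and per-tag holder sets; the patent's actual structures and selection policy may differ.

```python
# Hypothetical sketch: on a write request to the shared L2, the snoop filter
# (which tracks L1-resident lines) is looked up; on a miss with no free entry,
# a victim entry is picked, the L1s that may hold lines mapping to it are told
# to invalidate them, and the victim is replaced with the new entry.

class SnoopFilter:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = {}                          # tag -> set of L1 ids holding the line

    def update_on_write(self, tag, requester_id, l1_caches):
        if tag in self.entries:
            self.entries[tag].add(requester_id)    # lookup hit: no eviction needed
            return
        if len(self.entries) >= self.capacity:
            victim_tag = next(iter(self.entries))  # FIFO-ish victim choice (assumption)
            holders = self.entries.pop(victim_tag)
            for pu_id in holders:                  # back-invalidate matching L1 lines
                l1_caches[pu_id].discard(victim_tag)
        self.entries[tag] = {requester_id}         # install the new entry

l1s = {0: {"A"}, 1: {"B"}}
sf = SnoopFilter(capacity=1)
sf.update_on_write("A", 0, l1s)
sf.update_on_write("B", 1, l1s)                    # evicts "A"; PU 0 invalidates its copy
print(l1s[0])                                      # -> set()
```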
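A minimal sketch of the coherency-state assignment described in patent 8397029 (and publication 20090164735), assuming invented state names and an invented assign_state helper; only the distinction the abstract draws (accessed locally vs. received from a neighboring PU) is modeled, not the patent's actual state encoding.

```python
from enum import Enum

# Hypothetical state names for illustration only.
class CoherencyState(Enum):
    ACCESSED = "accessed"            # this PU has accessed the block
    MOVED = "moved"                  # not accessed here; received from a neighboring PU
    FILLED = "filled"                # not accessed here; obtained from elsewhere

def assign_state(accessed_locally: bool, from_neighbor: bool) -> CoherencyState:
    """Pick a state that records access history and, if unaccessed, the block's source."""
    if accessed_locally:
        return CoherencyState.ACCESSED
    return CoherencyState.MOVED if from_neighbor else CoherencyState.FILLED

# A block cast out by a neighboring PU and installed here, not yet touched locally:
print(assign_state(accessed_locally=False, from_neighbor=True))   # CoherencyState.MOVED
```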
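A minimal sketch of the neighbor castout handshake in patent 8296520 (and publication 20090164731), with invented response values and a simplified per-address cache model; the patent's full response logic depends on more than is shown here.

```python
# Hypothetical sketch: the requesting PU sends the castout line's address and
# coherency state to a neighboring PU; the neighbor answers based on an address
# match and on the states of its own lines. The castout_state argument is carried
# but unused in this simplified model; a fuller model would factor it in.

def handle_castout_request(neighbor_cache, castout_addr, castout_state):
    """Neighbor-side handling of a castout request (address + coherency state)."""
    if castout_addr in neighbor_cache:
        return "address_hit"                       # neighbor already tracks this address
    if any(state == "invalid" for state in neighbor_cache.values()):
        return "accept"                            # an invalid line can hold the castout
    return "retry"                                 # no room; requester proceeds otherwise

neighbor = {"0x100": "shared", "0x140": "invalid"}
print(handle_castout_request(neighbor, "0x200", "modified"))   # -> "accept"
```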
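A minimal sketch of the replacement preference in patent 7836257 (and publication 20090164736), assuming an invented choose_replacement helper; the fallback choice and the state written for the new block are simplifying assumptions.

```python
# Hypothetical sketch: if the incoming block is not a victim line from another
# PU and the set has no "invalid" line but does have a "moved" line, a "moved"
# line is chosen and overwritten, and the new block's coherency state is updated.

def choose_replacement(cache_set, incoming_is_victim_line):
    """Return the index of the line to replace within one cache set."""
    states = [line["state"] for line in cache_set]
    if "invalid" in states:
        return states.index("invalid")             # an invalid line is always preferred
    if not incoming_is_victim_line and "moved" in states:
        return states.index("moved")               # next preference per the abstract
    return 0                                       # assumed fallback (e.g., an LRU slot)

cache_set = [{"state": "shared"}, {"state": "moved"}, {"state": "exclusive"}]
victim_index = choose_replacement(cache_set, incoming_is_victim_line=False)
cache_set[victim_index] = {"state": "accessed"}    # store the block, update its state
print(victim_index)                                # -> 1
```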
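A minimal sketch of byte-by-byte store-to-load forwarding as described in patent 7640414 (and publication 20080120472), assuming an invented forward_load function over a simple list-based store queue; a real store queue would be a hardware structure with age tracking and address matching logic.

```python
# Hypothetical sketch: each byte of a load is taken from the youngest buffered
# store that wrote it, and bytes no buffered store covers come from memory.

def forward_load(load_addr, load_size, store_queue, memory):
    """store_queue holds (address, bytes) entries in program order, oldest first."""
    result = bytearray(memory[load_addr:load_addr + load_size])   # memory values first
    for store_addr, data in store_queue:                          # oldest to youngest
        for i, byte in enumerate(data):
            offset = store_addr + i - load_addr
            if 0 <= offset < load_size:
                result[offset] = byte      # a younger store overrides older bytes
    return bytes(result)

memory = bytearray(16)
store_queue = [(4, b"\xAA\xBB"), (5, b"\xCC")]     # the second store is younger
print(forward_load(4, 4, store_queue, memory))     # -> b'\xaa\xcc\x00\x00'
```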