Patents by Inventor Judson E. Veazey

Judson E. Veazey has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO). Brief, illustrative code sketches of several of the mechanisms described in these abstracts follow the listing.

  • Patent number: 8924653
    Abstract: A method for providing a transactional memory is described. A cache coherency protocol is enforced upon a cache memory including cache lines, wherein each line is in one of a modified state, an owned state, an exclusive state, a shared state, and an invalid state. Upon initiation of a transaction accessing at least one of the cache lines, each of the lines is ensured to be either shared or invalid. During the transaction, in response to an external request for any cache line in the modified, owned, or exclusive state, each line in the modified or owned state is invalidated without writing the line to a main memory. Also, each exclusive line is demoted to either the shared or invalid state, and the transaction is aborted.
    Type: Grant
    Filed: October 31, 2006
    Date of Patent: December 30, 2014
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Blaine D. Gaither, Judson E. Veazey
  • Patent number: 8051250
    Abstract: A system for pushing data includes a source node that stores a coherent copy of a block of data. The system also includes a push engine configured to determine a next consumer of the block of data, the determination being made without the push engine detecting a request for the block of data from the next consumer. The push engine causes the source node to push the block of data to a memory associated with the next consumer to reduce the latency of the next consumer accessing the block of data.
    Type: Grant
    Filed: March 14, 2007
    Date of Patent: November 1, 2011
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Blaine D. Gaither, Darel N. Emmot, Judson E. Veazey, Benjamin D. Osecky
  • Patent number: 7739478
    Abstract: A method is provided for pre-fetching data into a cache memory. A first cache-line address of each of a number of data requests from at least one processor is stored. A second cache-line address of a next data request from the processor is compared to the first cache-line addresses. If the second cache-line address is adjacent to one of the first cache-line addresses, data associated with a third cache-line address adjacent to the second cache-line address is pre-fetched into the cache memory, if not already present in the cache memory.
    Type: Grant
    Filed: March 8, 2007
    Date of Patent: June 15, 2010
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Judson E. Veazey, Blaine D. Gaither
  • Patent number: 7600079
    Abstract: A method comprises a second device issuing a request to perform a memory write of a data unit while a first device has ownership of that data unit. The method further comprises a memory controller performing the memory write without changing ownership to the second device.
    Type: Grant
    Filed: October 27, 2006
    Date of Patent: October 6, 2009
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Blaine D. Gaither, Judson E. Veazey, Patrick Knebel
  • Publication number: 20080229009
    Abstract: A system for pushing data includes a source node that stores a coherent copy of a block of data. The system also includes a push engine configured to determine a next consumer of the block of data, the determination being made without the push engine detecting a request for the block of data from the next consumer. The push engine causes the source node to push the block of data to a memory associated with the next consumer to reduce the latency of the next consumer accessing the block of data.
    Type: Application
    Filed: March 14, 2007
    Publication date: September 18, 2008
    Inventors: Blaine D. Gaither, Darel N. Emmot, Judson E. Veazey, Benjamin D. Osecky
  • Publication number: 20080222343
    Abstract: A method is provided for pre-fetching data into a cache memory. A first cache-line address of each of a number of data requests from at least one processor is stored. A second cache-line address of a next data request from the processor is compared to the first cache-line addresses. If the second cache-line address is adjacent to one of the first cache-line addresses, data associated with a third cache-line address adjacent to the second cache-line address is pre-fetched into the cache memory, if not already present in the cache memory.
    Type: Application
    Filed: March 8, 2007
    Publication date: September 11, 2008
    Inventors: Judson E. Veazey, Blaine D. Gaither
  • Publication number: 20080104336
    Abstract: A method comprises a second device issuing a request to perform a memory write of a data unit while a first device has ownership of that data unit. The method further comprises a memory controller performing the memory write without changing ownership to the second device.
    Type: Application
    Filed: October 27, 2006
    Publication date: May 1, 2008
    Inventors: Blaine D. Gaither, Judson E. Veazey, Patrick Knebel
  • Publication number: 20080104333
    Abstract: A cache memory system is provided which includes a higher-level cache, a lower-level cache, and a bus coupling the higher-level cache and the lower-level cache together. Also included is a directory array coupled with the lower-level cache. The lower-level cache is configured to track all of the data contents of the higher-level cache in the directory array without duplicating the data contents in the lower-level cache.
    Type: Application
    Filed: October 31, 2006
    Publication date: May 1, 2008
    Inventor: Judson E. Veazey
  • Publication number: 20080104332
    Abstract: A method for providing a transactional memory is described. A cache coherency protocol is enforced upon a cache memory including cache lines, wherein each line is in one of a modified state, an owned state, an exclusive state, a shared state, and an invalid state. Upon initiation of a transaction accessing at least one of the cache lines, each of the lines is ensured to be either shared or invalid. During the transaction, in response to an external request for any cache line in the modified, owned, or exclusive state, each line in the modified or owned state is invalidated without writing the line to a main memory. Also, each exclusive line is demoted to either the shared or invalid state, and the transaction is aborted.
    Type: Application
    Filed: October 31, 2006
    Publication date: May 1, 2008
    Inventors: Blaine D. Gaither, Judson E. Veazey
  • Publication number: 20080098178
    Abstract: A computing system is provided which includes a number of processing units, and a switching system coupled with each of the processing units. The switching system includes a memory. Each of the processing units is configured to access data from another of the processing units through the switching system. The switching system is configured to store a copy of the data passing therethrough into the memory as the data passes between the processing units through the switching system. Each of the processing units is also configured to access the copy of the data in the memory of the switching system.
    Type: Application
    Filed: October 23, 2006
    Publication date: April 24, 2008
    Inventors: Judson E. Veazey, Donna E. Ott
  • Patent number: 6879270
    Abstract: A compression/decompression (codec) engine is provided for use in conjunction with a fabric agent chip in a multiprocessor computer system. The fabric agent chip serves as an interface between a first memory controller on a first cell board in the computer system and other memory controllers on other cell boards in the computer system. Cell boards in the computer system are interconnected by a system fabric. Memory data read by the first memory controller is compressed by the codec engine prior to being transmitted over the system fabric by the fabric agent chip. Conversely, memory data received over the system fabric by the fabric agent chip is decompressed by the codec engine prior to being provided to the first memory controller. Other fabric agent chips in the computer system may similarly be provided with corresponding codec engines.
    Type: Grant
    Filed: August 20, 2003
    Date of Patent: April 12, 2005
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Judson E. Veazey
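
The transactional-memory scheme of patent 8924653 (also publication 20080104332) can be pictured with a small state-machine sketch. This is a minimal illustration under assumed class and method names, not the patented implementation: lines enter a transaction in the Shared or Invalid state, speculative writes move them to Modified, and a conflicting external request causes Modified and Owned lines to be invalidated without writeback, Exclusive lines to be demoted, and the transaction to abort.

```python
from enum import Enum


class State(Enum):
    MODIFIED = "M"
    OWNED = "O"
    EXCLUSIVE = "E"
    SHARED = "S"
    INVALID = "I"


class TransactionalCache:
    """Toy model of the transactional behavior layered on MOESI coherence."""

    def __init__(self):
        self.lines = {}            # address -> State
        self.in_transaction = False
        self.aborted = False

    def begin_transaction(self, addresses):
        # Before the transaction starts, every participating line must be
        # Shared or Invalid, so no pre-transaction dirty data can be lost.
        for addr in addresses:
            if self.lines.get(addr, State.INVALID) not in (State.SHARED, State.INVALID):
                self.lines[addr] = State.SHARED    # after flushing any dirty data
        self.in_transaction = True
        self.aborted = False

    def transactional_write(self, addr):
        # Speculative writes move lines to Modified; nothing reaches memory yet.
        self.lines[addr] = State.MODIFIED

    def external_request(self, addr):
        # A snoop that hits an M/O/E line during the transaction is a conflict.
        if self.in_transaction and self.lines.get(addr) in (
            State.MODIFIED, State.OWNED, State.EXCLUSIVE
        ):
            self.abort()

    def abort(self):
        for addr, state in self.lines.items():
            if state in (State.MODIFIED, State.OWNED):
                # Discard speculative data: invalidate WITHOUT writing to memory.
                self.lines[addr] = State.INVALID
            elif state is State.EXCLUSIVE:
                self.lines[addr] = State.SHARED    # demote (S or I both allowed)
        self.in_transaction = False
        self.aborted = True


cache = TransactionalCache()
cache.begin_transaction([0x10, 0x20])
cache.transactional_write(0x10)
cache.external_request(0x10)                       # conflicting external request
assert cache.aborted and cache.lines[0x10] is State.INVALID
```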
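
Patent 8051250 (publication 20080229009) describes pushing a block toward a predicted next consumer before any request from that consumer is seen. The sketch below is a rough illustration only; the last-consumer prediction heuristic and all class names are assumptions, since the abstract does not prescribe how the next consumer is determined.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.blocks = {}          # coherent copies held at this node
        self.local_memory = {}    # blocks pushed close to this node


class PushEngine:
    def __init__(self):
        self.last_consumer = {}   # block_id -> node, a toy prediction table

    def record_consumer(self, block_id, node):
        self.last_consumer[block_id] = node

    def predict_next_consumer(self, block_id):
        # No request from the consumer is observed; the engine predicts one.
        return self.last_consumer.get(block_id)

    def push(self, source_node, block_id):
        consumer = self.predict_next_consumer(block_id)
        if consumer is not None and block_id in source_node.blocks:
            # Place the coherent copy near the predicted consumer so its
            # eventual access hits locally instead of fetching remotely.
            consumer.local_memory[block_id] = source_node.blocks[block_id]


# After node_b consumed block 7 once, the engine pushes node_a's copy of
# block 7 toward node_b before node_b ever asks for it.
engine = PushEngine()
node_a, node_b = Node("A"), Node("B")
node_a.blocks[7] = b"block data"
engine.record_consumer(7, node_b)
engine.push(node_a, 7)
assert node_b.local_memory[7] == b"block data"
```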
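
The prefetching method of patent 7739478 (publication 20080222343) compares each new request's cache-line address against recently stored addresses and, on an adjacent match, prefetches the next line over if it is not already cached. A small sketch, with an assumed history size and a simple set standing in for the cache:

```python
from collections import deque


class AdjacentLinePrefetcher:
    def __init__(self, history_size=16):
        self.recent = deque(maxlen=history_size)   # first cache-line addresses
        self.cache = set()                          # lines already resident

    def on_request(self, line_addr):
        prefetched = None
        for prev in self.recent:
            if line_addr == prev + 1:               # ascending adjacency
                candidate = line_addr + 1
            elif line_addr == prev - 1:             # descending adjacency
                candidate = line_addr - 1
            else:
                continue
            if candidate not in self.cache:
                self.cache.add(candidate)           # model the prefetch
                prefetched = candidate
            break
        self.recent.append(line_addr)
        self.cache.add(line_addr)
        return prefetched


# Requests for lines 100 and then 101 trigger a prefetch of line 102.
pf = AdjacentLinePrefetcher()
pf.on_request(100)
assert pf.on_request(101) == 102
```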
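
Patent 7600079 (publication 20080104336) has the memory controller complete a write requested by a second device while leaving ownership of the data unit with the first device. A toy sketch of that behavior, with an assumed directory layout:

```python
class MemoryController:
    def __init__(self):
        self.memory = {}       # address -> data
        self.owner = {}        # address -> owning device id

    def grant_ownership(self, addr, device):
        self.owner[addr] = device

    def write_without_ownership_transfer(self, addr, data, requesting_device):
        # The write from the requesting (second) device is performed, but the
        # directory still records the original owner afterwards.
        self.memory[addr] = data
        return self.owner.get(addr)


mc = MemoryController()
mc.grant_ownership(0x40, "device_1")
still_owner = mc.write_without_ownership_transfer(0x40, b"payload", "device_2")
assert still_owner == "device_1"
```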
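
Publication 20080104333 describes a lower-level cache that tracks everything resident in the higher-level cache through a directory array, without holding a second copy of the data itself. A minimal sketch under assumed interfaces:

```python
class LowerLevelCache:
    def __init__(self):
        self.data = {}          # lines held at this level
        self.directory = set()  # addresses resident in the higher-level cache

    def snoop(self, addr):
        # The directory answers "is this line upstream?" without the
        # lower-level cache storing the line's data redundantly.
        return addr in self.directory


class HigherLevelCache:
    def __init__(self, lower):
        self.data = {}
        self.lower = lower

    def fill(self, addr, value):
        self.data[addr] = value
        self.lower.directory.add(addr)       # tag-only tracking, no data copy

    def evict(self, addr):
        self.data.pop(addr, None)
        self.lower.directory.discard(addr)


l2 = LowerLevelCache()
l1 = HigherLevelCache(l2)
l1.fill(0x80, b"line")
assert l2.snoop(0x80) and 0x80 not in l2.data
```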
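
Publication 20080098178 describes a switching system that keeps a copy of data in its own memory as the data passes between processing units, so any unit can later read the copy from the switch. A simplified sketch with invented method names:

```python
class ProcessingUnit:
    def __init__(self):
        self.inbox = {}

    def receive(self, key, data):
        self.inbox[key] = data


class SwitchingSystem:
    def __init__(self):
        self.memory = {}     # data captured as it passes through the switch

    def transfer(self, src, dst, key, data):
        self.memory[key] = data            # keep a copy in switch memory
        dst.receive(key, data)             # normal delivery to the destination

    def read_copy(self, key):
        return self.memory.get(key)        # any unit may read the stored copy


switch = SwitchingSystem()
cpu0, cpu1, cpu2 = ProcessingUnit(), ProcessingUnit(), ProcessingUnit()
switch.transfer(cpu0, cpu1, key="blk", data=b"shared")
# A third unit can obtain the data directly from the switch's memory.
assert switch.read_copy("blk") == b"shared"
```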
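
Patent 6879270 places a compression/decompression (codec) engine alongside the fabric agent chip so memory data is compressed before crossing the system fabric and decompressed on arrival. The sketch below uses Python's zlib as a stand-in codec; the patent does not specify a particular compression algorithm, and the class name is illustrative.

```python
import zlib


class FabricAgentChip:
    """Interface between a cell board's memory controller and the fabric."""

    def send(self, memory_data: bytes) -> bytes:
        # Compress outbound memory data so less traffic crosses the fabric.
        return zlib.compress(memory_data)

    def receive(self, fabric_payload: bytes) -> bytes:
        # Decompress inbound payloads before handing them to the controller.
        return zlib.decompress(fabric_payload)


local_agent, remote_agent = FabricAgentChip(), FabricAgentChip()
payload = local_agent.send(b"\x00" * 4096)           # highly compressible page
assert len(payload) < 4096
assert remote_agent.receive(payload) == b"\x00" * 4096
```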