Patents by Inventor Donald C. Englin

Donald C. Englin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7356650
    Abstract: Systems and methods are provided for a data processing system and a cache arrangement. The data processing system includes at least one processor, a first-level cache, a second-level cache, and a memory arrangement. The first-level cache bypasses storing data for a memory request when a do-not-cache attribute is associated with the memory request. The second-level cache stores the data for the memory request. The second-level cache also bypasses updating of least-recently-used indicators of the second-level cache when the do-not-cache attribute is associated with the memory request.
    Type: Grant
    Filed: June 17, 2005
    Date of Patent: April 8, 2008
    Assignee: Unisys Corporation
    Inventors: Donald C. Englin, James A. Williams
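The do-not-cache behavior described in this abstract can be sketched in software. This is an illustrative model only, not the patented hardware: the class names, the `OrderedDict`-based LRU, and the `do_not_cache` flag are all assumptions made for the sketch. The L1 skips the fill entirely, while the L2 stores the data but skips the LRU touch, so a do-not-cache re-reference does not refresh a line's replacement position.

```python
from collections import OrderedDict

class L2Cache:
    """Second-level cache: always stores the data, but bypasses the
    LRU update when the request carries a do-not-cache attribute."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.lines = OrderedDict()  # iteration order front-to-back = LRU-to-MRU

    def store(self, addr, data, do_not_cache=False):
        # Assigning to an existing key does not move it, so a do-not-cache
        # touch leaves the line's LRU position exactly as it was.
        self.lines[addr] = data
        if not do_not_cache:
            self.lines.move_to_end(addr)          # normal LRU touch
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)        # evict least recently used

class L1Cache:
    """First-level cache: bypasses storing altogether for
    do-not-cache requests; otherwise fills normally."""
    def __init__(self, l2):
        self.lines = {}
        self.l2 = l2

    def handle_request(self, addr, data, do_not_cache=False):
        if not do_not_cache:
            self.lines[addr] = data               # normal L1 fill
        self.l2.store(addr, data, do_not_cache)   # L2 stores either way
```

In this model a do-not-cache re-reference to a line already in L2 leaves that line at its old LRU position, so it is still the first candidate for eviction.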
  • Patent number: 7356647
    Abstract: A cache arrangement of a data processing system provides a cache flush operation initiated by a command from a maintenance processor. The cache arrangement includes a cache memory, a mode register, and a controller. The mode register is settable by the maintenance processor to one of first and second values. The controller selectively writes all of the modified information in the cache memory to the system memory responsive to the command. Also in response to this command, all of the information is invalidated in the cache memory if the mode register is set to the second value. In one embodiment, none of the information except the modified data is invalidated if the mode register is set to the first value. The second value may be utilized to efficiently reassign one or more cache memories to a new partition.
    Type: Grant
    Filed: August 23, 2005
    Date of Patent: April 8, 2008
    Assignee: Unisys Corporation
    Inventors: Robert H. Andrighetti, Donald C. Englin, Douglas A. Fuller
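The two mode-register settings described above can be modeled as follows. This is a behavioral sketch under assumed names (`FLUSH_ONLY`, `FLUSH_AND_INVALIDATE`, `FlushableCache`), not the patented controller: modified lines are written back in either mode, only modified lines are invalidated in the first mode, and everything is invalidated in the second.

```python
FLUSH_ONLY, FLUSH_AND_INVALIDATE = 0, 1   # the two mode-register values

class CacheLine:
    def __init__(self, data, modified=False):
        self.data, self.modified, self.valid = data, modified, True

class FlushableCache:
    def __init__(self):
        self.lines = {}                      # addr -> CacheLine
        self.mode_register = FLUSH_ONLY      # settable by the maintenance processor

    def flush(self, system_memory):
        """Handle the maintenance processor's flush command."""
        for addr, line in self.lines.items():
            if not line.valid:
                continue
            if line.modified:
                system_memory[addr] = line.data  # write modified data back
                line.modified = False
                line.valid = False               # modified data invalidated in either mode
            if self.mode_register == FLUSH_AND_INVALIDATE:
                line.valid = False               # e.g. before reassigning to a new partition
```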
  • Patent number: 7120836
    Abstract: A system and method for increasing computing throughput through execution of parallel data error detection/correction and cache hit detection operations. In one path, hit detection occurs independent of and concurrent with error detection and correction operations, and reliance on hit detection in this path is based on the absence of storage errors. A single error correction code (ECC) is used to minimize storage requirements, and data hit comparisons based on the cached address and requested address are performed exclusive of ECC bits to minimize bit comparison requirements.
    Type: Grant
    Filed: November 7, 2000
    Date of Patent: October 10, 2006
    Assignee: Unisys Corporation
    Inventors: Donald C. Englin, Kelvin S. Vartti
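The parallel-path idea above can be illustrated with a toy lookup. This sketch is not the patented circuit: the real design uses a single error correction code over the stored bits, whereas the `ecc_bits` stand-in here is just a parity nibble, and all names are assumptions. What it shows is the structure: the hit compare uses only the address tag (exclusive of ECC bits), the error check runs independently, and the fast-path hit is trusted only when no storage error is flagged.

```python
def ecc_bits(value):
    # Stand-in check code (a parity nibble); the real design's
    # single-error-correcting code is elided here.
    return bin(value).count("1") & 0xF

class TagEntry:
    def __init__(self, tag):
        self.tag = tag
        self.check = ecc_bits(tag)   # stored alongside, never compared for hits

def lookup(entry, requested_tag):
    # Both checks proceed independently ("in parallel" in hardware):
    hit = (entry.tag == requested_tag)             # compare excludes ECC bits
    error = (ecc_bits(entry.tag) != entry.check)   # concurrent error detection
    # Rely on the hit result only in the absence of storage errors.
    return hit and not error
```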
  • Patent number: 7065614
    Abstract: The current invention provides a system and method for maintaining memory coherency within a multiprocessor environment that includes multiple requesters such as instruction processors coupled to a shared main memory. Within the system of the current invention, data may be provided from the shared memory to a requester for update purposes before all other read-only copies of this data stored elsewhere within the system have been invalidated. To ensure that this acceleration mechanism does not result in memory incoherency, an instruction is provided for inclusion within the instruction set of the processor. Execution of this instruction causes the executing processor to discontinue execution until all outstanding invalidation activities have completed for any data that has been retrieved and updated by the processor.
    Type: Grant
    Filed: June 20, 2003
    Date of Patent: June 20, 2006
    Assignee: Unisys Corporation
    Inventors: Kelvin S. Vartti, James A. Williams, Donald C. Englin
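The synchronizing instruction described above can be modeled with a simple counter. This is an illustrative sketch, not the patented implementation: the `Processor` class, the counter, and the method names are all assumptions. Ownership of data is granted early while invalidations of remote read-only copies are still outstanding; the special instruction stalls the processor until every outstanding invalidation has been acknowledged.

```python
class Processor:
    """Toy model of the accelerated-ownership scheme."""
    def __init__(self):
        self.outstanding_invalidates = 0
        self.stalled = False

    def update_data(self, pending_remote_copies):
        # Data is handed over for update before all remote read-only
        # copies are invalidated; track the invalidations in flight.
        self.outstanding_invalidates += pending_remote_copies

    def invalidate_acknowledged(self):
        self.outstanding_invalidates -= 1
        if self.outstanding_invalidates == 0:
            self.stalled = False     # the synchronizing instruction may retire

    def execute_sync_instruction(self):
        # Discontinue execution until all outstanding invalidations complete.
        self.stalled = self.outstanding_invalidates > 0
```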
  • Patent number: 6993630
    Abstract: A system and method for pre-fetching data signals is disclosed. According to one aspect of the invention, an Instruction Processor (IP) generates requests to access data signals within the cache. Predetermined ones of the requests are provided to pre-fetch control logic, which determines whether the data signals are available within the cache. If not, the data signals are retrieved from another memory within the data processing system, and are stored to the cache. According to one aspect, the rate at which pre-fetch requests are generated may be programmably selected to match the rate at which the associated requests to access the data signals are provided to the cache. In another embodiment, pre-fetch control logic receives information to generate pre-fetch requests using a dedicated interface coupling the pre-fetch control logic to the IP.
    Type: Grant
    Filed: September 26, 2002
    Date of Patent: January 31, 2006
    Assignee: Unisys Corporation
    Inventors: James A. Williams, Robert H. Andrighetti, Conrad S. Shimada, Donald C. Englin, Kelvin S. Vartti
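The programmable pre-fetch rate described above can be sketched with a request-count divider. This is an assumed model, not the patented logic: the class name, the `rate_divider` mechanism, and the dict-backed cache are all illustrative. Every Nth observed request triggers a probe; if the cache would miss, the data is staged in from the backing store ahead of the demand access.

```python
class PrefetchControl:
    """Issues a pre-fetch probe for selected IP requests; a programmable
    divider matches the pre-fetch rate to the demand-request rate."""
    def __init__(self, cache, backing, rate_divider=1):
        self.cache, self.backing = cache, backing
        self.rate_divider = rate_divider   # programmably selected
        self.count = 0
        self.issued = []                   # record of pre-fetches generated

    def observe_request(self, addr):
        self.count += 1
        if self.count % self.rate_divider:
            return                         # throttled: no probe this request
        if addr not in self.cache:         # would the demand access miss?
            self.cache[addr] = self.backing[addr]  # stage it into the cache
            self.issued.append(addr)
```

With `rate_divider=1` every request is probed; larger values slow the pre-fetch stream to match a slower demand stream.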
  • Patent number: 6976128
    Abstract: A system and method is provided to selectively flush data from cache memory to a main memory irrespective of the replacement algorithm that is used to manage the cache data. According to one aspect of the invention, novel “page flush” and “cache line flush” instructions are provided to flush a page and a cache line of memory data, respectively, from a cache to a main memory. In one embodiment, these instructions are included within the hardware instruction set of an Instruction Processor (IP). According to another aspect of the invention, flush operations are initiated using a background interface that interconnects the IP with its associated cache memory. A primary interface that also interconnects the IP to the cache memory is used to simultaneously issue higher-priority requests so that processor throughput is increased.
    Type: Grant
    Filed: September 26, 2002
    Date of Patent: December 13, 2005
    Assignee: Unisys Corporation
    Inventors: James A. Williams, Robert H. Andrighetti, Conrad S. Shimada, Donald C. Englin
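The two flush instructions named in this abstract translate naturally into a sketch. This model is illustrative only: the names, the toy `PAGE_SIZE`, and the dict-backed line store are assumptions, and the background/primary interface arbitration that lets higher-priority requests proceed simultaneously is elided. The point shown is that both instructions flush data irrespective of the replacement algorithm's state.

```python
PAGE_SIZE = 4      # lines per page in this toy model

class Cache:
    def __init__(self):
        self.lines = {}      # addr -> (data, modified)

    def cache_line_flush(self, addr, memory):
        """Flush one line regardless of the replacement algorithm."""
        if addr in self.lines:
            data, modified = self.lines[addr]
            if modified:
                memory[addr] = data       # write modified data to main memory
            del self.lines[addr]

    def page_flush(self, page, memory):
        """Flush every cached line belonging to the page."""
        base = page * PAGE_SIZE
        for addr in range(base, base + PAGE_SIZE):
            self.cache_line_flush(addr, memory)
```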
  • Patent number: 6928517
    Abstract: A method of and apparatus for improving the efficiency of a data processing system employing a multiple level cache memory system. The efficiencies result from enhancing the response to SNOOP requests. To accomplish this, the system memory bus is provided separate and independent paths to the level two cache and tag memories, so SNOOP requests are permitted to directly access the tag memories without reference to the cache memory. Second, the SNOOP requests are given a higher priority than operations associated with local processor data requests. Though this may slow down the local processor, the remote processors have less wait time for SNOOP operations, improving overall system performance.
    Type: Grant
    Filed: August 30, 2000
    Date of Patent: August 9, 2005
    Assignee: Unisys Corporation
    Inventors: Donald C. Englin, Donald W. Mackenthun, Kelvin S. Vartti
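The priority rule in this abstract (SNOOPs ahead of local processor requests) can be sketched with a priority queue. This is a behavioral model under assumed names, not the patented bus design; the separate tag-memory path is represented simply by the queue feeding the tag pipeline directly.

```python
import heapq

SNOOP, LOCAL = 0, 1     # lower value = higher priority

class TagPipeline:
    """Requests reach the tag memories through a priority queue; SNOOPs
    outrank local-processor requests so remote processors wait less."""
    def __init__(self):
        self.queue = []
        self.seq = 0    # tie-break preserves arrival order within a class

    def submit(self, kind, addr):
        heapq.heappush(self.queue, (kind, self.seq, addr))
        self.seq += 1

    def service_order(self):
        order = []
        while self.queue:
            kind, _, addr = heapq.heappop(self.queue)
            order.append((kind, addr))
        return order
```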
  • Patent number: 6868482
    Abstract: Each dual multi-processing system has a number of processors, with each processor having a store-in first-level write-through cache to a second-level cache. A third-level memory is shared by the dual system with the first-level and second-level caches being globally addressable to all of the third-level memory. Processors can write through to the local second-level cache and have access to the remote second-level cache via the local storage controller. A coherency scheme for the dual system provides each second-level cache with indicators for each cache line showing which ones are valid and which ones have been modified or are different than what is reflected in the corresponding third-level memory. The flush apparatus uses these two indicators to transfer all cache lines that are within the remote memory address range and have been modified, back to the remote memory prior to dynamically removing the local cache resources due to either system maintenance or dynamic partitioning.
    Type: Grant
    Filed: February 17, 2000
    Date of Patent: March 15, 2005
    Assignee: Unisys Corporation
    Inventors: Donald W. Mackenthun, Mitchell A. Bauman, Donald C. Englin
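The flush apparatus described above keys off the two per-line indicators. A minimal sketch, with the dict layout, field names, and half-open range all assumed for illustration: every line that is valid, modified, and addressed to the remote memory range is written back before the local cache is removed.

```python
def flush_for_partitioning(cache, remote_memory, remote_range):
    """Write back each valid, modified line whose address falls in the
    remote memory range, prior to removing the local cache resources."""
    lo, hi = remote_range
    for addr, line in cache.items():
        if line["valid"] and line["modified"] and lo <= addr < hi:
            remote_memory[addr] = line["data"]   # transfer back to remote memory
            line["modified"] = False             # now consistent with memory
```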
  • Patent number: 6857049
    Abstract: A method of and apparatus for improving the efficiency of a data processing system employing a multiple level cache memory system. The efficiencies result from managing the process of flushing old data from the second level cache memory. In the present invention, the second level cache memory is a store-in memory. Therefore, when data is to be deleted from the second level cache memory, a determination is made whether the data has been modified by the processor. If the data has been modified, the data must be rewritten to lower level memory. To free the second level cache memory for storage of the newly requested data, the data to be flushed is loaded into a flush buffer for storage during the rewriting process.
    Type: Grant
    Filed: August 30, 2000
    Date of Patent: February 15, 2005
    Assignee: Unisys Corporation
    Inventors: Donald C. Englin, Kelvin S. Vartti, James L. Federici
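The flush-buffer mechanism above can be sketched as follows. This is an illustrative model, not the patented design: the class and method names are assumptions, and the replacement choice is simply the oldest line. A modified victim moves into the flush buffer so its cache entry is freed immediately for the newly requested data, while the rewrite to lower-level memory completes later.

```python
class StoreInCache:
    """Store-in second-level cache with a flush buffer for victims."""
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.lines = {}          # addr -> (data, modified)
        self.flush_buffer = []   # modified victims awaiting write-back

    def write(self, addr, data):
        self.lines[addr] = (data, True)          # processor modifies the line

    def fill(self, addr, data):
        if len(self.lines) >= self.capacity:
            victim_addr, (vdata, modified) = next(iter(self.lines.items()))
            del self.lines[victim_addr]          # entry freed immediately
            if modified:
                self.flush_buffer.append((victim_addr, vdata))
        self.lines[addr] = (data, False)

    def drain_flush_buffer(self, memory):
        while self.flush_buffer:                 # rewrite completes in background
            addr, data = self.flush_buffer.pop(0)
            memory[addr] = data
```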
  • Patent number: 6799249
    Abstract: An apparatus for and method of queuing memory access requests resulting from level two cache memory misses. The requests are preferably queued separately by processor. To provide the most recent data to the system, write (i.e., input) requests are optimally given preference over read (i.e., output) requests for input/output processors. However, instruction processor program instruction fetches (i.e., read-only requests) are preferably given priority over operand transfers (i.e., read/write requests) to reduce instruction processor latency.
    Type: Grant
    Filed: August 30, 2000
    Date of Patent: September 28, 2004
    Assignee: Unisys Corporation
    Inventors: Donald C. Englin, Kelvin S. Vartti
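The two selection rules in this abstract can be modeled with a pair of queues per requester. The class layout and request-kind strings are assumptions made for the sketch: for an I/O processor, writes are preferred over reads so the newest data reaches memory sooner; for an instruction processor, instruction fetches are preferred over operand transfers to reduce latency.

```python
from collections import deque

class MissQueue:
    """Per-requester queue for level-two cache misses, with the
    preference rules differing by requester type."""
    def __init__(self, is_io_processor):
        self.is_io_processor = is_io_processor
        self.preferred = deque()   # IOP writes, or IP instruction fetches
        self.other = deque()       # IOP reads, or IP operand transfers

    def enqueue(self, request):
        kind = request["kind"]
        if self.is_io_processor:
            first = kind == "write"     # newest data reaches memory sooner
        else:
            first = kind == "ifetch"    # reduces instruction-stream latency
        (self.preferred if first else self.other).append(request)

    def next_request(self):
        if self.preferred:
            return self.preferred.popleft()
        return self.other.popleft() if self.other else None
```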
  • Patent number: 6122711
    Abstract: Flush apparatus for a dual multi-processing system. Each dual multi-processing system has a number of processors, with each processor having a store-in first-level write-through cache to a second-level cache. A third-level memory is shared by the dual system with the first-level and second-level caches being globally addressable to all of the third-level memory. Processors can write through to the local second-level cache and have access to the remote second-level cache via the local storage controller. A coherency scheme for the dual system provides each second-level cache with indicators for each cache line showing which ones are valid and which ones have been modified or are different than what is reflected in the corresponding third-level memory.
    Type: Grant
    Filed: January 7, 1997
    Date of Patent: September 19, 2000
    Assignee: Unisys Corporation
    Inventors: Donald W. Mackenthun, Mitchell A. Bauman, Donald C. Englin
  • Patent number: 5946710
    Abstract: Method and apparatus for maximizing cache memory throughput in a system where a plurality of requesters may contend for access to a same memory simultaneously. The memory utilizes an interleaved addressing scheme wherein each memory segment is associated with a separate queuing structure and the memory is mapped noncontiguously within the same segment so that all segments are accessed equally. Throughput is maximized as the plurality of requesters are queued evenly throughout the system.
    Type: Grant
    Filed: November 14, 1996
    Date of Patent: August 31, 1999
    Assignee: Unisys Corporation
    Inventors: Mitchell A. Bauman, Donald C. Englin
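The interleaved addressing scheme above can be shown with a small mapping function. The segment count and shift are toy values assumed for the sketch: low-order line-address bits select the segment, so consecutive addresses rotate through all segments, addresses within one segment are noncontiguous, and the per-segment queues fill evenly.

```python
NUM_SEGMENTS = 4
INTERLEAVE_SHIFT = 0   # interleave on the low line-address bits

def segment_for(addr):
    """Map an address to its memory segment; sequential traffic spreads
    evenly across the per-segment queues rather than piling onto one."""
    return (addr >> INTERLEAVE_SHIFT) % NUM_SEGMENTS

# Sequential addresses rotate through all four segments:
queues = {s: [] for s in range(NUM_SEGMENTS)}
for addr in range(16):
    queues[segment_for(addr)].append(addr)
```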
  • Patent number: 5875462
    Abstract: A cache architecture for a multiprocessor data processing system. The cache architecture includes multiple first-level caches, two second-level caches, and main storage that is addressable by each of the processors. Each first-level cache is dedicated to a respective one of the processors. Each of the second-level caches is coupled to the other second-level cache, coupled to the main storage, and coupled to predetermined ones of the first-level caches. The range of cacheable addresses for both of the second-level caches encompasses the entire address space of the main storage. Each of the second-level caches may be viewed as dedicated for write access to the set of processors associated with the predetermined set of first-level caches, and shared for read access to the other set of processors. The dedicated and shared nature enhances system efficiency. The cache architecture includes coherency control that filters invalidation traffic between the second-level caches.
    Type: Grant
    Filed: December 28, 1995
    Date of Patent: February 23, 1999
    Assignee: Unisys Corporation
    Inventors: Mitchell A. Bauman, Donald C. Englin, Mark L. Balding
  • Patent number: 5860093
    Abstract: Method and apparatus for reducing address/function transfer pins in a system where cache memories in a system controller are accessed by a number of instruction processors. The reduction of pins is obtained by using two data transfers. The increase in data addressing time, which would otherwise occur using two data transfers, is reduced to nearly the time of the data transfers themselves by responding to the first data transfer while the second data transfer is taking place.
    Type: Grant
    Filed: January 21, 1997
    Date of Patent: January 12, 1999
    Assignee: Unisys Corporation
    Inventors: Donald C. Englin, Mitchell A. Bauman
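The overlap described in this last abstract can be sketched as a two-step handshake. This is a behavioral illustration only, with all names and the set-index/tag split assumed: the tag-memory read starts as soon as the first transfer delivers the index bits, so when the second transfer delivers the tag bits the compare completes at once, hiding most of the second transfer's time.

```python
class TwoTransferInterface:
    """Address/function word crosses the pin-limited interface in two
    halves; the tag lookup begins during the first transfer."""
    def __init__(self, cache_tags):
        self.cache_tags = cache_tags   # set index -> stored tag
        self.early_tag = None

    def first_transfer(self, set_index):
        # Cycle 1: only the set-index bits have arrived; read the tag RAM now.
        self.early_tag = self.cache_tags.get(set_index)

    def second_transfer(self, tag):
        # Cycle 2: the tag bits arrive; the compare completes immediately
        # because the tag RAM was already read during the first transfer.
        hit = (self.early_tag == tag)
        self.early_tag = None
        return hit
```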