Patents by Inventor Kelvin S. Vartti
Kelvin S. Vartti has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 7797472
Abstract: A multiprocessor system in which a defer phase response method is utilized that allows a deferring agent to interrupt the normal flow of bus transactions once it gains control of the system interface bus. The deferring agent is allowed to look ahead to determine if a continuous stream of defer phase cycles is pending transfer. If so, the deferring agent will not release control of the bus until the pending defer phase cycles have been depleted. The look-ahead feature allows expedited return of higher-priority defer data while minimizing bus dead cycles caused by interleaving defer phase cycles with normal bus traffic.
Type: Grant
Filed: August 25, 2004
Date of Patent: September 14, 2010
Assignee: Unisys Corporation
Inventors: Gregory B. Wiedenman, Nathan A. Eckel, Kelvin S. Vartti
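The patented mechanism is hardware, but its control flow can be illustrated with a toy Python sketch of the look-ahead drain described in the abstract (function and signal names here are invented for illustration):

```python
from collections import deque

def drain_defer_phase(defer_queue, bus_log):
    """Once the deferring agent owns the bus, it looks ahead: while more
    defer-phase responses are pending it keeps returning them, releasing
    the bus only when the queue is depleted."""
    held_cycles = 0
    while defer_queue:                      # look-ahead: more defer data pending?
        response = defer_queue.popleft()    # return one defer-phase response
        bus_log.append(("defer", response))
        held_cycles += 1
    bus_log.append(("release", None))       # bus freed only after depletion
    return held_cycles

bus_log = []
cycles = drain_defer_phase(deque(["rsp_a", "rsp_b", "rsp_c"]), bus_log)
```

Interleaving each response with normal traffic would cost an arbitration round per response; draining the whole stream amortizes that cost across all pending cycles.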
-
Publication number: 20100131719
Abstract: A data processing system is described that reduces read latency of requested memory data, thereby improving system performance. An exemplary system includes a bus, a processor, and a controller associated with the processor. The controller is configured to send a request for data to a memory storage unit; receive, from the memory storage unit, an early response indicating that the controller will later receive the requested data; and, upon receipt of the early response, start a timer to wait a period of time. The controller is further configured to, after expiration of the timer but prior to receipt of the requested data, send an arbitration request to initiate a transaction on the bus to communicate the requested data from the controller to the processor when that data is later received.
Type: Application
Filed: January 27, 2010
Publication date: May 27, 2010
Inventors: Mark D. Luba, Gary J. Lucas, Kelvin S. Vartti
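The timing relationship the abstract relies on can be shown with a small sketch (the cycle numbers and function name are illustrative assumptions, not values from the application):

```python
def schedule_arbitration(early_response_time, timer_delay, data_arrival_time):
    """On the early response, start a timer; when it expires (still ahead
    of the data itself), issue the bus arbitration request so the bus is
    already won by the time the requested data lands at the controller."""
    arbitration_time = early_response_time + timer_delay
    if arbitration_time >= data_arrival_time:
        raise ValueError("timer must expire before the data arrives")
    # Latency hidden versus arbitrating only after the data arrives:
    latency_hidden = data_arrival_time - arbitration_time
    return arbitration_time, latency_hidden
```

The point of the timer is to overlap bus arbitration with the memory's remaining access time instead of serializing the two.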
-
Publication number: 20090172225
Abstract: A multiprocessor system in which a defer phase response method is utilized that allows a deferring agent to interrupt the normal flow of bus transactions once it gains control of the system interface bus. The deferring agent is allowed to look ahead to determine if a continuous stream of defer phase cycles is pending transfer. If so, the deferring agent will not release control of the bus until the pending defer phase cycles have been depleted. The look-ahead feature allows expedited return of higher-priority defer data while minimizing bus dead cycles caused by interleaving defer phase cycles with normal bus traffic.
Type: Application
Filed: August 25, 2004
Publication date: July 2, 2009
Inventors: Gregory B. Wiedenman, Nathan A. Eckel, Kelvin S. Vartti
-
Publication number: 20090164689
Abstract: A data processing system is described that reduces read latency of requested memory data, thereby improving system performance. An exemplary system includes a bus, a processor, and a controller associated with the processor. The controller is configured to send a request for data to a memory storage unit; receive, from the memory storage unit, an early response indicating that the controller will later receive the requested data; and, upon receipt of the early response, start a timer to wait a period of time. The controller is further configured to, after expiration of the timer but prior to receipt of the requested data, send an arbitration request to initiate a transaction on the bus to communicate the requested data from the controller to the processor when that data is later received.
Type: Application
Filed: December 21, 2007
Publication date: June 25, 2009
Inventors: Mark D. Luba, Gary J. Lucas, Kelvin S. Vartti
-
Patent number: 7533223
Abstract: A system and method are provided for tracking memory requests within a data processing system. The system includes a request tracking circuit that is coupled to receive requests for data from multiple processors. Multiple pending requests to the same memory address are tracked using a linked list, and only the oldest of these requests is issued to the memory. When data is returned from the memory, the requests are processed in the order determined by the linked list: the data is first provided to the processor associated with the oldest request, then retrieved and provided to the processor associated with the next request, and so on. A request issued by the memory soliciting the return of the data to the memory may also be added to the linked list and processed in the same manner.
Type: Grant
Filed: April 6, 2007
Date of Patent: May 12, 2009
Assignee: Unisys Corporation
Inventors: Kelvin S. Vartti, Ross M. Weber
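The per-address chaining scheme can be sketched in a few lines of Python (class and requester names are invented; the real circuit implements this in hardware):

```python
from collections import defaultdict, deque

class RequestTracker:
    """Track outstanding requests per address: only the oldest request to
    an address is issued to memory; later requests to the same address
    chain behind it and are served in arrival order on data return."""
    def __init__(self):
        self.chains = defaultdict(deque)   # address -> linked list of requesters
        self.issued = []                   # requests actually sent to memory

    def request(self, addr, requester):
        first = not self.chains[addr]
        self.chains[addr].append(requester)
        if first:                          # only the oldest request is issued
            self.issued.append(addr)

    def data_returned(self, addr):
        # Data is handed to each chained requester in linked-list order.
        return list(self.chains.pop(addr))
```

The benefit is that the memory sees one request per contended address rather than one per processor.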
-
Patent number: 7496715
Abstract: A memory control system and method are disclosed. The system includes cache tag logic and an optional cache coupled to a main memory. If available, the cache retains a subset of the data stored within the main memory. This subset is selected by programmable control indicators, which further control which data will be recorded by the tag logic. The indicators may select the subsets based on, for example, which type of memory request results in the return of data from the main memory to the cache. Alternatively, or in addition, the indicators may specify the identity of a requester, a memory response type, or a storage mode to control the selection of the subsets of data stored within the cache and recorded by the tag logic. In one embodiment, data may be tracked by the cache tag logic but not stored within the cache itself.
Type: Grant
Filed: July 16, 2003
Date of Patent: February 24, 2009
Assignee: Unisys Corporation
Inventors: Kelvin S. Vartti, Ross M. Weber
-
Patent number: 7299311
Abstract: A system and method for arbitrating for access to a resource group between agents according to a respective programmable weight for each agent. For each agent, a programmable mapping module selectively couples a respective arbitration handshake signal of the agent to one or more arbitration ports, and the number of the coupled arbitration ports for the agent is the respective programmable weight. A selection module selects one of the arbitration ports in response to a priority ranking of the arbitration ports, and access to the resource group is granted to the agent that has the respective arbitration handshake signal that is selectively coupled by the programmable mapping module to the selected arbitration port. A ranking module provides the priority ranking of the arbitration ports and updates the priority ranking in response to the selection module selecting the selected arbitration port.
Type: Grant
Filed: December 29, 2005
Date of Patent: November 20, 2007
Assignee: Unisys Corporation
Inventors: Chad M. Sepeda, Kelvin S. Vartti, Ross M. Weber
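A minimal software model of this weighting scheme, under the assumption (mine, not the patent's) that the ranking update is a simple rotate-to-back of the winning port:

```python
def build_port_map(weights):
    """Couple each agent's handshake signal to as many arbitration ports
    as its programmable weight (agent names and weights are made up)."""
    ports = []
    for agent, weight in weights.items():
        ports.extend([agent] * weight)
    return ports

def arbitrate(ports, ranking, requesting):
    """Grant the highest-ranked port whose coupled agent is requesting,
    then update the ranking by demoting the selected port."""
    for i, port in enumerate(ranking):
        if ports[port] in requesting:
            ranking.append(ranking.pop(i))  # ranking update after selection
            return ports[port]
    return None                             # no agent is requesting
```

Because an agent with weight 2 owns two ports, it wins roughly twice as often as a weight-1 agent over many arbitration rounds.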
-
Patent number: 7260677
Abstract: A memory control system and method are disclosed. In one embodiment, a first memory is coupled to one or more additional memories. The first memory receives requests for data that are completed by retrieving the data from the first memory and/or the one or more additional memories. The manner in which this data is retrieved is determined by the state of programmable control indicators. In one mode of operation, a reference is made to the first memory to retrieve the data. If it is later determined from tag information stored by the first memory that the one or more additional memories must be accessed to fulfill the request, the necessary additional memory references are initiated. In another mode of operation, references to the one or more additional memories are initiated irrespective of whether these references are required. The operating mode may be selected to optimize system efficiency.
Type: Grant
Filed: July 16, 2003
Date of Patent: August 21, 2007
Assignee: Unisys Corporation
Inventors: Kelvin S. Vartti, Ross M. Weber, Mitchell A. Bauman
-
Patent number: 7222222
Abstract: A system and method are provided for tracking memory requests within a data processing system. The system includes a request tracking circuit that is coupled to receive requests for data from multiple processors. Multiple pending requests to the same memory address are tracked using a linked list, and only the oldest of these requests is issued to the memory. When data is returned from the memory, the requests are processed in the order determined by the linked list: the data is first provided to the processor associated with the oldest request, then retrieved and provided to the processor associated with the next request, and so on. A request issued by the memory soliciting the return of the data to the memory may also be added to the linked list and processed in the same manner.
Type: Grant
Filed: June 20, 2003
Date of Patent: May 22, 2007
Assignee: Unisys Corporation
Inventors: Kelvin S. Vartti, Ross M. Weber
-
Patent number: 7120836
Abstract: A system and method for increasing computing throughput through execution of parallel data error detection/correction and cache hit detection operations. In one path, hit detection occurs independent of and concurrent with error detection and correction operations, and reliance on hit detection in this path is based on the absence of storage errors. A single error correction code (ECC) is used to minimize storage requirements, and data hit comparisons based on the cached address and requested address are performed exclusive of ECC bits to minimize bit comparison requirements.
Type: Grant
Filed: November 7, 2000
Date of Patent: October 10, 2006
Assignee: Unisys Corporation
Inventors: Donald C. Englin, Kelvin S. Vartti
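The two key ideas, comparing only the tag bits (not the ECC field) and trusting the fast hit result only when no storage error is found, can be sketched as follows (bit widths are assumed for illustration):

```python
def tag_hit(stored_word, requested_tag, tag_width):
    """Compare only the address-tag bits, masking out the ECC field that
    sits above them, so hit detection needs fewer bit comparators and
    can run concurrently with error detection/correction."""
    tag_mask = (1 << tag_width) - 1
    return (stored_word & tag_mask) == (requested_tag & tag_mask)

def use_fast_path(hit, error_detected):
    """The fast-path hit result is relied upon only when the ECC check,
    running in parallel, reports no storage error."""
    return hit and not error_detected
```

If an error is detected, the slower corrected-data path is used instead, so the common error-free case pays no serialization penalty.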
-
Patent number: 7065614
Abstract: The current invention provides a system and method for maintaining memory coherency within a multiprocessor environment that includes multiple requesters, such as instruction processors, coupled to a shared main memory. Within the system of the current invention, data may be provided from the shared memory to a requester for update purposes before all other read-only copies of this data stored elsewhere within the system have been invalidated. To ensure that this acceleration mechanism does not result in memory incoherency, an instruction is provided for inclusion within the instruction set of the processor. Execution of this instruction causes the executing processor to discontinue execution until all outstanding invalidation activities have completed for any data that has been retrieved and updated by the processor.
Type: Grant
Filed: June 20, 2003
Date of Patent: June 20, 2006
Assignee: Unisys Corporation
Inventors: Kelvin S. Vartti, James A. Williams, Donald C. Englin
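The stall condition the instruction enforces amounts to waiting on a count of outstanding invalidations; a toy model (class, method, and counter names are invented):

```python
class Processor:
    """Updated data may be handed to this processor before all read-only
    copies elsewhere are invalidated; a synchronizing instruction stalls
    execution until the outstanding-invalidate count drains to zero."""
    def __init__(self):
        self.outstanding_invalidates = 0
        self.stalled = False

    def update_data(self, copies_elsewhere):
        # Invalidations for the stale read-only copies are now in flight.
        self.outstanding_invalidates += copies_elsewhere

    def invalidate_acknowledged(self):
        self.outstanding_invalidates -= 1
        if self.outstanding_invalidates == 0:
            self.stalled = False          # safe to resume execution

    def execute_sync_instruction(self):
        self.stalled = self.outstanding_invalidates > 0
        return self.stalled
```

This lets the common case proceed at full speed while still giving software a fence point where coherency is guaranteed.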
-
Patent number: 6993630
Abstract: A system and method for pre-fetching data signals is disclosed. According to one aspect of the invention, an Instruction Processor (IP) generates requests to access data signals within the cache. Predetermined ones of the requests are provided to pre-fetch control logic, which determines whether the data signals are available within the cache. If not, the data signals are retrieved from another memory within the data processing system and are stored to the cache. According to one aspect, the rate at which pre-fetch requests are generated may be programmably selected to match the rate at which the associated requests to access the data signals are provided to the cache. In another embodiment, pre-fetch control logic receives information to generate pre-fetch requests using a dedicated interface coupling the pre-fetch control logic to the IP.
Type: Grant
Filed: September 26, 2002
Date of Patent: January 31, 2006
Assignee: Unisys Corporation
Inventors: James A. Williams, Robert H. Andrighetti, Conrad S. Shimada, Donald C. Englin, Kelvin S. Vartti
-
Patent number: 6973548
Abstract: A dual-channel memory system and accompanying coherency mechanism are disclosed. The memory includes both a request and a response channel. The memory provides data to a requester, such as an instruction processor, via the response channel. If this data is provided for update purposes, other read-only copies of the data must be invalidated. This invalidation may occur after the data is provided for update purposes, and is accomplished by issuing one or more invalidation requests via either the request channel or the response channel. Memory coherency is maintained by preventing a requester from storing any data back to memory until all invalidation activities that may be directly or indirectly associated with that data have been completed.
Type: Grant
Filed: June 20, 2003
Date of Patent: December 6, 2005
Assignee: Unisys Corporation
Inventors: Kelvin S. Vartti, Ross M. Weber, Mitchell A. Bauman, Ronald G. Arnold
-
Patent number: 6973541
Abstract: An improved system and method are provided for initializing memory in a data processing system. According to one aspect of the invention, a "page zero" instruction is provided that may be executed by an Instruction Processor (IP) to initiate memory initialization. Upon instruction execution, the IP issues one or more page zero requests using a background interface of the IP. In one embodiment, each request results in the initialization of a page of memory. While page zero requests are issued over the background interface, the IP may continue issuing other read and write requests to memory over a primary interface of the IP.
Type: Grant
Filed: September 26, 2002
Date of Patent: December 6, 2005
Assignee: Unisys Corporation
Inventors: James A. Williams, Robert H. Andrighetti, Conrad S. Shimada, Kelvin S. Vartti, Stephen Sutter, Chad M. Sonmore
-
Patent number: 6934810
Abstract: A mechanism to selectively leak data signals from a cache memory is provided. According to one aspect of the invention, an Instruction Processor (IP) is coupled to generate requests to access data signals within the cache. Some requests include a leaky designator, which is activated if the associated data signals are considered "leaky". These data signals are flushed from the cache memory after a predetermined delay has occurred. The delay is provided to allow the IP to complete any subsequent requests for the same data before the flush operation is performed, thereby preventing memory thrashing. Pre-fetch logic may also be provided to pre-fetch the data signals associated with the requests. In one embodiment, the rate at which data signals are flushed from cache memory is programmable, and is based on the rate at which requests are processed for pre-fetch purposes.
Type: Grant
Filed: September 26, 2002
Date of Patent: August 23, 2005
Assignee: Unisys Corporation
Inventors: James A. Williams, Robert H. Andrighetti, Kelvin S. Vartti, David P. Williams
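The delayed-flush behavior can be modeled with a per-line countdown; this is a toy sketch of the idea, with an invented class name and an illustrative delay value:

```python
class LeakyCache:
    """Lines filled by a request carrying the leaky designator are aged
    every cycle and flushed once a fixed delay expires, giving the IP a
    window to finish any nearby reuse of the data first."""
    def __init__(self, flush_delay):
        self.flush_delay = flush_delay
        self.lines = {}                 # addr -> cycles left (None = not leaky)

    def fill(self, addr, leaky=False):
        self.lines[addr] = self.flush_delay if leaky else None

    def tick(self):
        flushed = [a for a, t in self.lines.items() if t == 0]
        for a in flushed:
            del self.lines[a]           # leak the expired line
        for a, t in list(self.lines.items()):
            if t is not None:
                self.lines[a] = t - 1   # age remaining leaky lines
        return flushed
```

Flushing immediately would thrash on back-to-back reuse; flushing never would let single-use data crowd out hot lines. The delay splits the difference.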
-
Patent number: 6928517
Abstract: A method of and apparatus for improving the efficiency of a data processing system employing a multiple level cache memory system. The efficiencies result from enhancing the response to SNOOP requests. To accomplish this, first, the system memory bus is provided separate and independent paths to the level two cache and tag memories, permitting SNOOP requests to directly access the tag memories without reference to the cache memory. Second, SNOOP requests are given a higher priority than operations associated with local processor data requests. Though this may slow down the local processor, the remote processors have less wait time for SNOOP operations, improving overall system performance.
Type: Grant
Filed: August 30, 2000
Date of Patent: August 9, 2005
Assignee: Unisys Corporation
Inventors: Donald C. Englin, Donald W. Mackenthun, Kelvin S. Vartti
-
Patent number: 6857049
Abstract: A method of and apparatus for improving the efficiency of a data processing system employing a multiple level cache memory system. The efficiencies result from managing the process of flushing old data from the second level cache memory. In the present invention, the second level cache memory is a store-in memory. Therefore, when data is to be deleted from the second level cache memory, a determination is made whether the data has been modified by the processor. If the data has been modified, it must be rewritten to lower level memory. To free the second level cache memory for storage of the newly requested data, the data to be flushed is loaded into a flush buffer for storage during the rewriting process.
Type: Grant
Filed: August 30, 2000
Date of Patent: February 15, 2005
Assignee: Unisys Corporation
Inventors: Donald C. Englin, Kelvin S. Vartti, James L. Federici
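The flush-buffer idea in miniature (function and structure names are invented; the point is only the decoupling of the slot from the writeback):

```python
def evict_line(cache, flush_buffer, victim_addr):
    """Store-in L2 eviction sketch: a modified victim is moved into a
    flush buffer so its cache slot is free for the new fill immediately,
    while the buffered copy is rewritten to lower-level memory later."""
    data, dirty = cache.pop(victim_addr)
    if dirty:
        flush_buffer.append((victim_addr, data))  # writeback happens from here
    return victim_addr
```

Without the buffer, the fill for the newly requested data would have to wait for the full lower-level rewrite to complete.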
-
Patent number: 6816952
Abstract: The current invention provides an improved system and method for locking shared resources. The invention may operate in a data processing environment including a main memory system coupled to multiple instruction processors (IPs). Lock-type instructions are included within the hardware instruction set of one or more of the IPs. These lock-type instructions are executed to gain access to a software-lock stored at a predetermined location within the main memory. After the software-lock is activated, continued indivisible execution of the lock-type instruction causes one or more addresses associated with the software-lock to be retrieved. These addresses are used as pointers to, in turn, retrieve the data signals protected by the software-lock. Requests for the protected data signals are issued automatically by the hardware on behalf of the requesting IP, and the IP is allowed to continue instruction execution.
Type: Grant
Filed: May 31, 2002
Date of Patent: November 9, 2004
Assignee: Unisys Corporation
Inventors: Kelvin S. Vartti, Wayne D. Ward, Hans C. Mikkelsen
-
Patent number: 6799249
Abstract: An apparatus for and method of queuing memory access requests resulting from level two cache memory misses. The requests are preferably queued separately by processor. To provide the most recent data to the system, write (i.e., input) requests are optimally given preference over read (i.e., output) requests for input/output processors. However, instruction processor program instruction fetches (i.e., read-only requests) are preferably given priority over operand transfers (i.e., read/write requests) to reduce instruction processor latency.
Type: Grant
Filed: August 30, 2000
Date of Patent: September 28, 2004
Assignee: Unisys Corporation
Inventors: Donald C. Englin, Kelvin S. Vartti
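The two priority rules can be expressed as a single selection function over a per-processor queue (request kinds and field names are invented for this sketch):

```python
def next_request(queue, is_io_processor):
    """Pick the next queued L2-miss request: for I/O processors, writes
    beat reads (freshest data reaches memory first); for instruction
    processors, instruction fetches beat operand transfers (lower fetch
    latency). Ties go to the older request."""
    if is_io_processor:
        rank = lambda r: 0 if r["kind"] == "write" else 1
    else:
        rank = lambda r: 0 if r["kind"] == "fetch" else 1
    idx, _ = min(enumerate(queue), key=lambda p: (rank(p[1]), p[0]))
    return queue.pop(idx)
```

Keeping a separate queue per processor prevents one busy processor's misses from starving another's.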
-
Patent number: 6728835
Abstract: An apparatus for and method of improving the efficiency of a level two cache memory. In response to a level one cache miss, a request is made to the level two cache. A signal sent with the request identifies when the requester does not anticipate a near-term subsequent use for the requested data element. If a level two cache hit occurs, the requested data element is marked as least recently used in response to the signal. If a level two cache miss occurs, a request is made to level three storage. When the level three storage request is honored, the requested data element is immediately flushed from the level two cache memory in response to the signal.
Type: Grant
Filed: August 30, 2000
Date of Patent: April 27, 2004
Assignee: Unisys Corporation
Inventors: Mitchell A. Bauman, Conrad S. Shimada, Kelvin S. Vartti, William L. Borgerding
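The no-reuse hint maps naturally onto an LRU list; a toy sketch (class name, flag name, and the immediate-bypass simplification are my assumptions):

```python
from collections import OrderedDict

class L2Cache:
    """LRU sketch: a request flagged no_reuse marks the line least
    recently used on a hit (making it the next eviction victim), and on
    a miss the line returned by level-three storage is not retained."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()       # front = LRU victim, back = MRU

    def access(self, addr, no_reuse=False):
        if addr in self.lines:           # L2 hit
            self.lines.move_to_end(addr, last=not no_reuse)
            return "hit"
        if no_reuse:                     # L2 miss: don't pollute the cache
            return "miss-bypass"
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)   # evict the LRU line
        self.lines[addr] = True
        return "miss-fill"
```

Either way, data the requester says it won't touch again soon never displaces lines that are still hot.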