Patents by Inventor Erik Hagersten
Erik Hagersten has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 6578033
Abstract: A probabilistic queue lock divides requesters for a lock into at least three sets. In one embodiment, the requesters are divided into the owner of the lock, the first waiting contender, and the other waiting contenders. The first waiting contender is made probabilistically more likely to obtain the lock by having it spin faster than the other waiting contenders. Because the other waiting contenders spin more slowly, the first waiting contender is more likely to be able to observe the free lock and acquire it before the other waiting contenders notice that it is free. The first of the other waiting contenders that determines that the previous first waiting contender has acquired the lock is promoted to be the new first waiting contender and begins spinning fast. Because only the first waiting contender is spinning fast on the lock, it is probable that only the first waiting contender will attempt to acquire the lock when it becomes available.
Type: Grant
Filed: June 20, 2000
Date of Patent: June 10, 2003
Assignee: Sun Microsystems, Inc.
Inventors: Ashok Singhal, Erik Hagersten
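To make the mechanism concrete, here is a minimal C sketch of the probabilistic queue lock idea in the abstract: the first waiting contender spins fast, the other contenders spin slowly, and a slow contender that notices the previous first contender has acquired the lock races to be promoted. The generation/fast_gen bookkeeping and the delay constants are illustrative assumptions, not details taken from the patent.

```c
#include <stdatomic.h>

typedef struct {
    atomic_int  held;        /* 1 while the lock is owned */
    atomic_uint generation;  /* bumped on every acquisition */
    atomic_uint fast_gen;    /* generation that already has a fast spinner */
} prob_lock_t;

static void spin_delay(int iterations)
{
    for (volatile int i = 0; i < iterations; i++)
        ;                    /* placeholder back-off loop */
}

void prob_lock_acquire(prob_lock_t *l)
{
    for (;;) {
        unsigned gen = atomic_load(&l->generation);
        unsigned fg  = atomic_load(&l->fast_gen);

        /* Race to become the single first waiting contender for this
         * generation; the losers remain "other waiting contenders". */
        int am_first = (fg != gen) &&
                       atomic_compare_exchange_strong(&l->fast_gen, &fg, gen);

        /* First waiting contender spins fast, the others spin slowly, so
         * the first contender is likely to observe the free lock first. */
        while (atomic_load(&l->held) &&
               atomic_load(&l->generation) == gen)
            spin_delay(am_first ? 10 : 1000);

        /* If the generation changed, the previous first contender took the
         * lock; loop around and race to be promoted to first contender. */
        if (atomic_load(&l->generation) != gen)
            continue;

        if (!atomic_exchange(&l->held, 1)) {
            atomic_fetch_add(&l->generation, 1);   /* announce acquisition */
            return;
        }
    }
}

void prob_lock_release(prob_lock_t *l)
{
    atomic_store(&l->held, 0);
}
```

Because only one contender spins with the short delay at any time, it is the one most likely to see and take the freed lock, which is the probabilistic effect the abstract describes.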
-
Patent number: 6148300
Abstract: A probabilistic queue lock divides requesters for a lock into at least three sets. In one embodiment, the requesters are divided into the owner of the lock, the first waiting contender, and the other waiting contenders. The first waiting contender is made probabilistically more likely to obtain the lock by having it spin faster than the other waiting contenders. Because the other waiting contenders spin more slowly, the first waiting contender is more likely to be able to observe the free lock and acquire it before the other waiting contenders notice that it is free. The first of the other waiting contenders that determines that the previous first waiting contender has acquired the lock is promoted to be the new first waiting contender and begins spinning fast. Because only the first waiting contender is spinning fast on the lock, it is probable that only the first waiting contender will attempt to acquire the lock when it becomes available.
Type: Grant
Filed: June 19, 1998
Date of Patent: November 14, 2000
Assignee: Sun Microsystems, Inc.
Inventors: Ashok Singhal, Erik Hagersten
-
Patent number: 6141692
Abstract: A method and apparatus are provided which eliminate the need for an active traffic flow control protocol to manage request transaction flow between the nodes of a directory-based, scalable, shared-memory, multi-processor computer system. This is accomplished by determining the maximum number of requests that any node can receive at any given time, providing an input buffer at each node which can store at least that maximum number of requests, and transferring stored requests from the buffer as the node completes requests in process and is able to process additional incoming requests. Since each node may have only a finite number of pending requests, this bounds the number of requests that a node acting in slave capacity can receive from any other node acting in requester capacity. In addition, each node may also issue requests that must be processed within that node.
Type: Grant
Filed: July 1, 1996
Date of Patent: October 31, 2000
Assignee: Sun Microsystems, Inc.
Inventors: Paul Loewenstein, Erik Hagersten
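The buffer-sizing rule implied by the abstract can be illustrated with a short C fragment: if each of the nodes can have at most a fixed number of requests outstanding, an input buffer sized for the worst case can never overflow, so no active flow-control protocol is needed. The constants, struct layout, and ring-buffer details below are assumptions for illustration only.

```c
#include <stdint.h>

#define NUM_NODES        8
#define MAX_OUTSTANDING  16   /* assumed per-node limit on pending requests */

/* Worst case a node can ever hold: requests from every other node acting in
 * requester capacity, plus its own locally issued requests. */
enum {
    INPUT_BUFFER_SLOTS = (NUM_NODES - 1) * MAX_OUTSTANDING + MAX_OUTSTANDING
};

typedef struct {
    unsigned src_node;
    uint64_t addr;
} request_t;

static request_t input_buffer[INPUT_BUFFER_SLOTS];
static unsigned  head, tail, count;

/* Store an incoming request; by construction this cannot overflow, so the
 * interconnect never needs to apply back-pressure. */
void input_buffer_put(request_t r)
{
    input_buffer[tail] = r;
    tail = (tail + 1) % INPUT_BUFFER_SLOTS;
    count++;
}

/* Drain a stored request as the node completes requests in process and is
 * able to process additional incoming requests. Returns 0 when empty. */
int input_buffer_get(request_t *out)
{
    if (count == 0)
        return 0;
    *out = input_buffer[head];
    head = (head + 1) % INPUT_BUFFER_SLOTS;
    count--;
    return 1;
}
```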
-
Patent number: 6078996
Abstract: A method for increasing data-processing speed in computer systems containing at least one microprocessor, a memory device, and a cache connected to the processor, in which the cache is arranged to fetch data from the addresses in the memory device requested by the processor and then also fetches data from one or several addresses in the memory device not requested by the processor. The computer system includes a circuit, called the stream-detection circuit, connected to interact with a cache such that the stream-detection circuit detects the addresses which the processor requests in the cache and registers whether the requested addresses already existed in the cache. The stream-detection circuit is arranged to detect one or several sequential series of addresses requested by the processor in the cache.
Type: Grant
Filed: August 31, 1998
Date of Patent: June 20, 2000
Assignee: Sun Microsystems, Inc.
Inventor: Erik Hagersten
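A minimal C sketch of such a stream-detection mechanism follows: watch each address the processor requests in the cache, and once a sequential series is recognized, prefetch the next address the processor has not yet asked for. The table size, unit-stride assumption, and line size are illustrative assumptions, not taken from the patent.

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_STREAMS 4
#define LINE_SIZE   64        /* assumed cache-line size in bytes */

typedef struct {
    uint64_t next_addr;       /* address expected next if the stream continues */
    bool     active;
} stream_t;

static stream_t streams[MAX_STREAMS];

/* Called for every address the processor requests in the cache; 'hit'
 * records whether the address already existed in the cache. */
void stream_detect(uint64_t addr, bool hit, void (*prefetch)(uint64_t))
{
    for (int i = 0; i < MAX_STREAMS; i++) {
        stream_t *s = &streams[i];
        if (s->active && addr == s->next_addr) {
            /* Sequential series confirmed: fetch one line ahead, i.e. an
             * address the processor has not requested yet. */
            s->next_addr += LINE_SIZE;
            prefetch(s->next_addr);
            return;
        }
    }
    if (!hit) {
        /* Start (or recycle) a tentative stream at this miss address. */
        static int victim;
        stream_t *s = &streams[victim];
        victim = (victim + 1) % MAX_STREAMS;
        s->next_addr = addr + LINE_SIZE;
        s->active    = true;
    }
}
```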
-
Method and apparatus providing short latency round-robin arbitration for access to a shared resource
Patent number: 5987549
Abstract: Low-latency distributed round-robin arbitration is used to grant requests for access to a shared resource such as a computer system bus. A plurality of circuit board cards, each including two devices (such as CPUs, I/O units, and RAM) and an address controller, plug into an Address Bus in the bus system. Each address controller contains logic implementing the arbitration mechanism with a two-level hierarchy: a single top arbitrator and preferably four leaf arbitrators. Each address controller is coupled to two devices, and the logical "OR" of their arbitration requests is coupled via an Arbitration Bus to the other address controllers on other boards. Each leaf arbitrator has four prioritized request-in lines, each such line being coupled to a single address controller serviced by that leaf arbitrator. By default, each leaf arbitrator and the top arbitrator implement a prioritized algorithm.
Type: Grant
Filed: July 1, 1996
Date of Patent: November 16, 1999
Assignee: Sun Microsystems, Inc.
Inventors: Erik Hagersten, Ashok Singhal
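A small C model of the arbitration hierarchy may help: one 4-input leaf arbiter grants the request-in lines with rotating (round-robin) priority, and the top arbitrator applies the same selection over one request bit per leaf. The bit-mask interface is an illustrative assumption, not the patent's hardware implementation.

```c
#include <stdint.h>

/* 'requests' is a 4-bit mask of the request-in lines; 'last_grant' is the
 * line granted previously (0-3, or -1 if none yet). Priority rotates so the
 * line after the previous winner is considered first. Returns the granted
 * line, or -1 if no line is requesting. */
int leaf_arbitrate(uint8_t requests, int last_grant)
{
    for (int offset = 1; offset <= 4; offset++) {
        int line = (last_grant + offset + 4) % 4;
        if (requests & (1u << line))
            return line;
    }
    return -1;
}

/* The top arbitrator sees one request bit per leaf (the logical OR of that
 * leaf's four lines) and round-robins among the leaves the same way. */
int top_arbitrate(uint8_t leaf_request_or, int last_leaf)
{
    return leaf_arbitrate(leaf_request_or, last_leaf);
}
```
-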
Patent number: 5978874
Abstract: Snooping is implemented on a split transaction snooping bus for a computer system having one or many such buses. Circuit boards including CPU or other devices and/or distributed memory, data input/output buffers, queues including request tag queues and coherent input queues ("CIQ"), and an address controller implementing address bus arbitration plug into one or more split transaction snooping bus systems. All devices snoop on the address bus to learn whether an identified line is owned or shared, and an appropriate owned/shared signal is issued. Receipt of an ignore signal blocks CIQ loading of a transaction until the transaction is reloaded and ignore is deasserted. Ownership of a requested memory line transfers immediately at time of request. Asserted requests are queued such that state transitions on the address bus occur atomically, logically without dependence upon the request. Subsequent requests for the same data are tagged to become the responsibility of the owner-requestor.
Type: Grant
Filed: July 1, 1996
Date of Patent: November 2, 1999
Assignee: Sun Microsystems, Inc.
Inventors: Ashok Singhal, Bjorn Liencres, Jeff Price, Frederick M. Cerauskis, David Broniarczyk, Gerald Cheung, Erik Hagersten, Nalini Agarwal
-
Patent number: 5960179
Abstract: In a networked computer system that includes an omnibus system coupled to a plurality of workstation/computer subsystems, an optimal global reordering of transactions seeking Address Bus access is provided. Access requests are asserted by devices associated with circuit cards, each such card including an address controller, memory, and a coherent input queue. Transactions occurring on the omnibus are loaded into the associated address controller's coherent input queue. A global network interface is coupled to the omnibus system, may assert an IGNORE signal, and includes a table storing all cache lines in the distributed memory system. A transaction seeking to access an address holding invalid data or a remote address is detected by the global network interface, which asserts the IGNORE signal, thus blocking the transaction from loading into the coherent input queue. At a later time, when the subject address contains valid data, the interface reissues an identical transaction on the bus.
Type: Grant
Filed: July 1, 1996
Date of Patent: September 28, 1999
Assignee: Sun Microsystems, Inc.
Inventor: Erik Hagersten
-
Patent number: 5926829
Abstract: The present invention provides a hybrid Non-Uniform Memory Architecture (NUMA) and Cache-Only Memory Architecture (COMA) caching architecture together with a cache-coherent protocol for a computer system having a plurality of sub-systems coupled to each other via a system interconnect. In one implementation, each sub-system includes at least one processor, a page-oriented COMA cache and a line-oriented hybrid NUMA/COMA cache. Such a hybrid system provides flexibility and efficiency in caching both large and small, and/or sparse and packed data structures. Each sub-system is able to independently store data in COMA mode or in NUMA mode. When caching in COMA mode, a sub-system allocates a page of memory space and then stores the data within the allocated page in its COMA cache. Depending on the implementation, while caching in COMA mode, the sub-system may also store the same data in its hybrid cache for faster access.
Type: Grant
Filed: January 9, 1998
Date of Patent: July 20, 1999
Assignee: Sun Microsystems, Inc.
Inventors: Erik Hagersten, Robert C. Zak, Jr.
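As a rough C sketch of the per-page choice between the two modes: a sub-system either allocates a whole page in its page-oriented COMA cache or caches individual lines in the hybrid NUMA/COMA cache. The heuristic in choose_mode and the fixed-size COMA page table are illustrative assumptions; the patent does not prescribe them.

```c
#include <stdbool.h>
#include <stdint.h>

typedef enum { MODE_NUMA, MODE_COMA } cache_mode_t;

typedef struct {
    uint64_t global_page;   /* home (global) page address */
    uint64_t local_frame;   /* frame allocated in the local COMA cache */
    bool     valid;
} coma_page_entry_t;

#define COMA_PAGES 256
static coma_page_entry_t coma_page_table[COMA_PAGES];

/* Each sub-system decides independently how to cache a page: densely used
 * (packed) pages favour COMA whole-page allocation, sparse ones favour
 * NUMA line-by-line caching. The threshold here is an assumption. */
cache_mode_t choose_mode(unsigned lines_touched, unsigned lines_per_page)
{
    return (2 * lines_touched > lines_per_page) ? MODE_COMA : MODE_NUMA;
}

/* COMA mode: allocate a page of memory space in the local COMA cache so the
 * data can be stored within the allocated page. Returns the frame index, or
 * -1 if no frame is free (the caller then falls back to NUMA mode). */
int coma_allocate_page(uint64_t global_page)
{
    for (int i = 0; i < COMA_PAGES; i++) {
        if (!coma_page_table[i].valid) {
            coma_page_table[i].global_page = global_page;
            coma_page_table[i].local_frame = (uint64_t)i;
            coma_page_table[i].valid       = true;
            return i;
        }
    }
    return -1;
}
```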
-
Patent number: 5911052
Abstract: A split transaction snooping bus protocol and architecture is provided for use in a system having one or many such buses. Circuit boards including CPU or other devices and/or distributed memory, data input/output buffers, queues including request tag queues and coherent input queues ("CIQ"), and an address controller implementing address bus arbitration plug into one or more split transaction snooping bus systems. All devices snoop on the address bus to learn whether an identified line is owned or shared, and an appropriate owned/shared signal is issued. Receipt of an ignore signal blocks CIQ loading of a transaction until the transaction is reloaded and ignore is deasserted. Ownership of a requested memory line transfers immediately at time of request. Asserted requests are queued such that state transitions on the address bus occur atomically, logically without dependence upon the request. Subsequent requests for the same data are tagged to become the responsibility of the owner-requestor.
Type: Grant
Filed: July 1, 1996
Date of Patent: June 8, 1999
Assignee: Sun Microsystems, Inc.
Inventors: Ashok Singhal, Bjorn Liencres, Jeff Price, Frederick M. Cerauskis, David Broniarczyk, Gerald Cheung, Erik Hagersten, Nalini Agarwal
-
Patent number: 5893160
Abstract: An efficient streamlined coherent protocol for a multi-processor multi-cache computing system. Each subsystem includes at least one processor and an associated cache and directory. The subsystems are coupled to a global interconnect via global interfaces. In one embodiment, each global interface includes a request agent (RA), a directory agent (DA) and a slave agent (SA). The RA provides a subsystem with a mechanism for sending read and write requests to the DA of another subsystem. The DA is responsible for accessing and updating its home directory. The SA is responsible for responding to requests from the DA of another subsystem. Each subsystem also includes a blocker coupled to a DA and associated with a home directory. All requests for a cache line are screened by the blocker associated with each home directory. Blockers are responsible for blocking new request(s) for a cache line until an outstanding request for that cache line has been serviced.
Type: Grant
Filed: April 8, 1996
Date of Patent: April 6, 1999
Assignee: Sun Microsystems, Inc.
Inventors: Paul N. Loewenstein, Erik Hagersten
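A minimal C sketch of the blocker's screening behaviour: while one request for a cache line is outstanding at the home directory, later requests for the same line are held back until the first has been serviced. The table size and linear search are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_OUTSTANDING_LINES 64

static uint64_t busy_line[MAX_OUTSTANDING_LINES];
static bool     busy_valid[MAX_OUTSTANDING_LINES];

/* Screen a request: returns true if it may proceed to the directory agent
 * (DA), false if it must wait because the line already has an outstanding
 * request (or no tracking slot is free). */
bool blocker_try_enter(uint64_t line_addr)
{
    int free_slot = -1;
    for (int i = 0; i < MAX_OUTSTANDING_LINES; i++) {
        if (busy_valid[i] && busy_line[i] == line_addr)
            return false;
        if (!busy_valid[i] && free_slot < 0)
            free_slot = i;
    }
    if (free_slot < 0)
        return false;
    busy_line[free_slot]  = line_addr;
    busy_valid[free_slot] = true;
    return true;
}

/* Called once the outstanding request for the line has been serviced, so
 * blocked requests for that line may be admitted. */
void blocker_release(uint64_t line_addr)
{
    for (int i = 0; i < MAX_OUTSTANDING_LINES; i++) {
        if (busy_valid[i] && busy_line[i] == line_addr) {
            busy_valid[i] = false;
            return;
        }
    }
}
```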
-
Patent number: 5893144
Abstract: The present invention provides a hybrid Non-Uniform Memory Architecture (NUMA) and Cache-Only Memory Architecture (COMA) caching architecture together with a cache-coherent protocol for a computer system having a plurality of sub-systems coupled to each other via a system interconnect. In one implementation, each sub-system includes at least one processor, a page-oriented COMA cache and a line-oriented hybrid NUMA/COMA cache. Such a hybrid system provides flexibility and efficiency in caching both large and small, and/or sparse and packed data structures. Each sub-system is able to independently store data in COMA mode or in NUMA mode. When caching in COMA mode, a sub-system allocates a page of memory space and then stores the data within the allocated page in its COMA cache. Depending on the implementation, while caching in COMA mode, the sub-system may also store the same data in its hybrid cache for faster access.
Type: Grant
Filed: December 22, 1995
Date of Patent: April 6, 1999
Assignee: Sun Microsystems, Inc.
Inventors: David Wood, Erik Hagersten
-
Patent number: 5842026
Abstract: An interrupt mechanism handles an interrupt transaction between a source processor and a target processor on separate nodes in a multi-processor system. The nodes are connected to a network through node interface controls between each node and the network. The transaction begins by initiating the interrupt transaction at the source processor. The interrupt mechanism detects whether the target processor is at a remote node on a system bus across the network and, if it is, sends an ignore signal to the source processor. The mechanism then suspends the interrupt transaction at the source processor if it detects that the target processor is at a remote node. The mechanism performs an ACK/NACK (acknowledge/non-acknowledge) operation at the target processor and returns an ACK signal or a NACK signal to the source processor across the network. This ACK/NACK signal wakes up the source processor.
Type: Grant
Filed: July 1, 1996
Date of Patent: November 24, 1998
Assignee: Sun Microsystems, Inc.
Inventors: Monica C. Wong-Chan, Erik Hagersten
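As a rough C sketch of the handshake, under the assumption that the target ACKs when its interrupt input queue has room and NACKs otherwise: the suspended source is woken by whichever reply comes back across the network and may retry on NACK. The queue depth, types, and function names here are all illustrative assumptions.

```c
#include <stdbool.h>

#define INTR_QUEUE_SLOTS 4

typedef enum { REPLY_ACK, REPLY_NACK } reply_t;

typedef struct {
    int pending;              /* interrupts currently queued at the target */
} intr_target_t;

/* Target side of the transaction: accept or reject the interrupt. */
reply_t target_ack_nack(intr_target_t *t)
{
    if (t->pending < INTR_QUEUE_SLOTS) {
        t->pending++;
        return REPLY_ACK;     /* interrupt accepted for delivery */
    }
    return REPLY_NACK;        /* target busy; source should retry later */
}

/* Source side: after the ignore signal, the interrupt transaction is
 * suspended until the ACK or NACK returned across the network wakes it.
 * 'network_send' stands in for the node interface controls. */
bool send_remote_interrupt(intr_target_t *remote,
                           reply_t (*network_send)(intr_target_t *))
{
    reply_t r = network_send(remote);   /* blocks (suspends) until reply */
    return r == REPLY_ACK;              /* false => NACK, caller may retry */
}
```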
-
Patent number: 5829033
Abstract: In a computer system implementing state transitions that change logically and atomically at an address packet independently of a response, the coherence domain is extended among distributed memory. As such, memory line ownership transfers upon request, and not upon requestor receipt of data. Requestor receipt of data is rapidly implemented by providing a ReadToShareFork transaction that simultaneously causes a write-type operation that updates invalid data at a requested memory address and provides the updated data to the requesting device. More specifically, when writing valid data to memory, the ReadToShareFork transaction simultaneously causes reissuance of the originally requested transaction using the same memory address and ID information. The requesting device, upon recognizing its transaction ID on the bus system, will pull the now-valid data from the desired memory location.
Type: Grant
Filed: July 1, 1996
Date of Patent: October 27, 1998
Assignee: Sun Microsystems, Inc.
Inventors: Erik Hagersten, Ashok Singhal, Bjorn Liencres
-
Patent number: 5802566
Abstract: A method for increasing data-processing speed in computer systems containing at least one microprocessor (1), a memory device (3), and a cache (2,4) connected to the processor, in which the cache (2,4) is arranged to fetch data from the addresses in the memory device (3) requested by the processor (1) and then also fetches data from one or several addresses in the memory device (3) not requested by the processor (1). The computer system includes a circuit, called the stream-detection circuit (5), connected to interact with a cache (2,4) such that the stream-detection circuit (5) detects the addresses which the processor (1) requests in the cache (2,4) and registers whether the requested addresses already existed in the cache (2,4). The stream-detection circuit (5) is arranged to detect one or several sequential series of addresses requested by the processor (1) in the cache (2,4).
Type: Grant
Filed: November 16, 1993
Date of Patent: September 1, 1998
Assignee: Sun Microsystems, Inc.
Inventor: Erik Hagersten
-
Patent number: 5778427
Abstract: The present invention provides a cache manager (CM) for use with an address translation table (ATT), which together take advantage of way information, available when a cache line is first cached, for efficiently accessing a multi-way cache of a computer system having a main memory and one or more processors. The main memory and the ATT are page-oriented, while the cache is organized using cache lines. The cache includes a plurality of cache lines divided into a number of segments corresponding to the number of "ways". Each cache line includes an address tag (AT) field and a data field. The way information is stored in the ATT for later cache access. In this implementation, "waylets" provide an efficient mechanism for storing the way information whenever a cache line is cached. Accordingly, each table entry of the ATT includes a virtual address (VA) field, a physical address (PA) field, and a plurality of waylets associated with each pair of VA and PA fields.
Type: Grant
Filed: July 7, 1995
Date of Patent: July 7, 1998
Assignee: Sun Microsystems, Inc.
Inventors: Erik Hagersten, Ashok Singhal
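A small C sketch of the waylet idea: each ATT entry records, per cache line of its page, which way the line was placed in when it was first cached, so a later access can probe that way directly instead of comparing tags in every way. The field sizes and the invalid encoding are illustrative assumptions.

```c
#include <stdint.h>

#define LINES_PER_PAGE  64      /* e.g. 4 KB page with 64-byte lines */
#define WAYLET_INVALID  0xFF    /* way unknown: fall back to a full lookup */

typedef struct {
    uint64_t va;                          /* virtual address (VA) field  */
    uint64_t pa;                          /* physical address (PA) field */
    uint8_t  waylet[LINES_PER_PAGE];      /* one waylet per cache line   */
} att_entry_t;

/* Initialize a new ATT entry with all waylets marked invalid. */
void att_init_entry(att_entry_t *e, uint64_t va, uint64_t pa)
{
    e->va = va;
    e->pa = pa;
    for (int i = 0; i < LINES_PER_PAGE; i++)
        e->waylet[i] = WAYLET_INVALID;
}

/* Record the way when the cache line is first cached. */
void att_record_way(att_entry_t *e, unsigned line_index, unsigned way)
{
    e->waylet[line_index] = (uint8_t)way;
}

/* On a later access: returns the way to probe, or WAYLET_INVALID when a
 * conventional multi-way tag comparison is still required. */
unsigned att_lookup_way(const att_entry_t *e, unsigned line_index)
{
    return e->waylet[line_index];
}
```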
-
Patent number: 5710907
Abstract: The present invention provides a hybrid Non-Uniform Memory Architecture (NUMA) and Cache-Only Memory Architecture (COMA) caching architecture together with a cache-coherent protocol for a computer system having a plurality of sub-systems coupled to each other via a system interconnect. In one implementation, each sub-system includes at least one processor, a page-oriented COMA cache and a line-oriented hybrid NUMA/COMA cache. Such a hybrid system provides flexibility and efficiency in caching both large and small, and/or sparse and packed data structures. Each sub-system is able to independently store data in COMA mode or in NUMA mode. When caching in COMA mode, a sub-system allocates a page of memory space and then stores the data within the allocated page in its COMA cache. Depending on the implementation, while caching in COMA mode, the sub-system may also store the same data in its hybrid cache for faster access.
Type: Grant
Filed: December 22, 1995
Date of Patent: January 20, 1998
Assignee: Sun Microsystems, Inc.
Inventors: Erik Hagersten, Robert C. Zak, Jr.