Abstract: A system for controlling contention between conflicting transactions in a transactional memory system. During operation, the system receives a request to access a cache line and then determines if the cache line is already in use by an existing transaction in a cache state that is incompatible with the request. If so, the system determines if the request is from a processor which is in a polite mode. If this is true, the system denies the request to access the cache line and continues executing the existing transaction.
Type:
Grant
Filed:
April 18, 2005
Date of Patent:
February 24, 2009
Assignee:
Sun Microsystems, Inc.
Inventors:
Daniel S. Nussbaum, Victor M. Luchangco, Mark S. Moir, Ori Shalev, Nir N. Shavit
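A minimal sketch, not taken from the patent's claims, of the contention-control check the abstract above describes: if the requested cache line is held by an existing transaction in an incompatible state and the requester is in "polite" mode, the request is denied and the existing transaction keeps running. All names and structures (cache_line_t, handle_access, etc.) are hypothetical.

#include <stdbool.h>

/* Hypothetical cache-line record: which transaction owns it and in what state. */
typedef enum { LINE_IDLE, LINE_SHARED, LINE_EXCLUSIVE } line_state_t;

typedef struct {
    int owner_txn;        /* -1 if no transaction currently uses the line */
    line_state_t state;   /* state in which the owning transaction holds it */
} cache_line_t;

/* A write request conflicts with any held state; a read conflicts only with EXCLUSIVE. */
static bool incompatible(const cache_line_t *line, bool is_write)
{
    if (line->owner_txn < 0)
        return false;
    return is_write || line->state == LINE_EXCLUSIVE;
}

/* Returns true if the access is granted, false if it is denied so that the
 * existing transaction continues (the requester was in polite mode). */
bool handle_access(const cache_line_t *line, bool is_write, bool requester_polite)
{
    if (incompatible(line, is_write) && requester_polite)
        return false;     /* deny the request; existing transaction proceeds */
    return true;          /* otherwise grant (other conflict handling elsewhere) */
}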
Abstract: A proximity interconnect module includes a plurality of off-chip cache memories. Either disposed external to the proximity interconnect module or on the proximity interconnect module are a plurality of processors that are dependent on the plurality of off-chip cache memories for servicing requests for data. The plurality of off-chip cache memories are operatively connected to either one another or to one or more of the plurality of processors by proximity communication. Each of the plurality of off-chip cache memories may cache certain portions of the physical address space.
Abstract: A memory system is disclosed. The memory system includes a memory controller coupled to one or more memory modules, at least one of the memory modules including a buffer. The memory controller is configured to convey a command to at least one of the memory modules in response to detecting that no memory requests addressed to the at least one of the memory modules have been received during a specified window of time. In response to the command, the buffer of the at least one of the memory modules is configured to enter a reduced power state. The specified window of time may be a specified number of either memory refresh intervals or buffer sync intervals. The memory controller maintains a count of memory refresh or buffer sync intervals.
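A minimal sketch, assuming an idle-interval counter, of the power-down decision described above; the structure and function names are hypothetical and not drawn from the patent.

#include <stdbool.h>

/* Hypothetical per-module idle tracking: the controller counts refresh (or
 * buffer-sync) intervals with no requests and, once the specified window is
 * reached, commands the module's buffer into a reduced power state. */
typedef struct {
    unsigned idle_intervals;     /* intervals with no requests to this module */
    unsigned idle_threshold;     /* "specified window of time" in intervals   */
    bool     buffer_low_power;
} module_state_t;

void on_request_received(module_state_t *m)
{
    m->idle_intervals = 0;        /* any request to the module resets the window */
    m->buffer_low_power = false;  /* assume the buffer wakes to service it       */
}

void on_refresh_interval(module_state_t *m)
{
    if (!m->buffer_low_power && ++m->idle_intervals >= m->idle_threshold)
        m->buffer_low_power = true;   /* controller issues the power-down command */
}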
Abstract: In data processing systems that use a snoopy based cache coherence protocol and which contain a read only cache memory with a bounded range of addresses, a cache line hit is detected by assuming that, if an address contained in a request falls within the bounded range, the cache line is present in the cache memory for snoop results. This is equivalent to assuming that the cache line is marked as shared when it might not be so marked.
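A minimal sketch of the bounded-range snoop-hit assumption described above: any snooped address inside the read-only cache's address range is reported as present (effectively shared). The names are hypothetical.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical bounded address range covered by the read-only cache memory. */
typedef struct {
    uint64_t base;
    uint64_t limit;   /* exclusive upper bound */
} ro_cache_range_t;

/* Snoop response: assume the cache line is present whenever the address in
 * the request falls within the bounded range. */
bool snoop_hit(const ro_cache_range_t *r, uint64_t addr)
{
    return addr >= r->base && addr < r->limit;
}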
Abstract: A lightweight, concurrent detection mechanism avoids global thread suspension by operating during runtime with threads under examination. A particular configuration combines a dependency (“waits for”) snapshot with a progression check to determine advancement of purportedly deadlocked threads. Thread blocking is enumerated in a table or graph which denotes dependencies of threads and the corresponding resources. For identified circular dependencies, a successive transition check, or progression check, ratifies the potential deadlock. A transition counter corresponding to each thread is analyzed in the progression check. The transition counter is indicative of a change in state for the process in question, hence is indicative of instruction execution, an activity not performed by a blocked process. Deadlock is therefore ratified if the transition counters associated with the threads in the potential deadlock have not advanced.
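A minimal sketch of the two-phase check described above: detect a cycle in a waits-for snapshot, then ratify the deadlock only if the transition counters of the involved threads have not advanced in a later snapshot. The snapshot layout and function names are assumptions, not the patent's implementation.

#include <stdbool.h>

#define MAX_THREADS 64

/* Hypothetical snapshot: waits_for[i] is the thread that thread i is blocked
 * on (-1 if not blocked); transitions[i] is thread i's state-change counter. */
typedef struct {
    int           waits_for[MAX_THREADS];
    unsigned long transitions[MAX_THREADS];
    int           nthreads;
} snapshot_t;

/* Follow the waits-for chain from 'start'; returning to 'start' is a cycle. */
static bool in_cycle(const snapshot_t *s, int start)
{
    int t = s->waits_for[start];
    for (int steps = 0; steps < s->nthreads && t >= 0; steps++) {
        if (t == start)
            return true;
        t = s->waits_for[t];
    }
    return false;
}

/* Ratify a potential deadlock: the cycle is real only if none of the involved
 * threads' transition counters advanced between the two snapshots. */
bool deadlock_confirmed(const snapshot_t *before, const snapshot_t *after, int start)
{
    if (!in_cycle(before, start))
        return false;
    int t = start;
    do {
        if (after->transitions[t] != before->transitions[t])
            return false;          /* the thread made progress: not deadlocked */
        t = before->waits_for[t];
    } while (t != start && t >= 0);
    return true;
}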
Abstract: A system and method of navigating a mobile device display includes highlighting a first icon in a main portion of the mobile device display. The main portion is traversed to a tertiary tray. The tertiary tray includes a second icon. The second icon is highlighted. A single navigation key is used to traverse the main portion and to highlight the second icon.
Abstract: The present invention generally relates to synchronization of multiple threads in an out-of-order microprocessor utilizing the insertion of a trap. In one embodiment, while synchronizing multiple running threads, an instruction within a first running thread is identified. Upon identification of this instruction, a trap is inserted into a second running thread. All instructions within the instruction pipeline that are scheduled for execution prior to this trapped instruction must retire before the subsequent execution of the synchronizing instruction. Following the execution of the synchronizing instruction, all instructions within the instruction pipeline slated for execution after the trapped instruction in the remaining threads are flushed and refetched.
Type:
Grant
Filed:
May 1, 2003
Date of Patent:
February 17, 2009
Assignee:
Sun Microsystems, Inc.
Inventors:
Evan H. Gewirtz, Todd D. Basso, Daniel L. Leibholz, Benjamin C. Cordes
Abstract: A method for synchronized renaming between a master processor and a coprocessor includes: sending, from the master processor, an operation for execution by the coprocessor along with an identifier; at the coprocessor, renaming the operation for execution, including assigning a resource and associating the resource with the identifier; and, at a subsequent time, sending the identifier from the master processor to the coprocessor to be used in conjunction with the execution of the renamed operation.
Type:
Grant
Filed:
October 31, 2006
Date of Patent:
February 17, 2009
Assignee:
Sun Microsystems, Inc.
Inventors:
John Gregory Favor, Christopher P. Nelson
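A minimal sketch of the identifier-keyed renaming step described in the abstract above: the coprocessor assigns a free resource when an operation arrives and remembers the association so a later message carrying only the identifier can find it. Table sizes and names are hypothetical.

#include <stdbool.h>

#define NRES 32

/* Hypothetical coprocessor-side table associating master-supplied identifiers
 * with the physical resources assigned when the operation was renamed. */
typedef struct {
    bool used[NRES];
    int  resource_for_id[256];   /* identifier -> assigned resource index */
} coproc_t;

/* Step 1: the master sends an operation plus an identifier; the coprocessor
 * renames it, assigning a free resource and recording the association. */
int rename_operation(coproc_t *c, int identifier)
{
    for (int r = 0; r < NRES; r++) {
        if (!c->used[r]) {
            c->used[r] = true;
            c->resource_for_id[identifier] = r;
            return r;
        }
    }
    return -1;   /* no free resource; the master should not have issued */
}

/* Step 2: at a subsequent time, the master refers to the renamed operation by
 * the same identifier, and the coprocessor looks up the associated resource. */
int lookup_resource(const coproc_t *c, int identifier)
{
    return c->resource_for_id[identifier];
}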
Abstract: A method for testing an intent log for a file system that includes creating a first file system, issuing a command to freeze the first file system, performing a plurality of commands on the first file system to obtain a plurality of deltas, wherein each of the plurality of deltas is stored in the intent log and is not committed to the first file system, copying the first file system to obtain a second file system, committing each of the plurality of deltas in the intent log to the second file system, unfreezing the first file system and committing each of the deltas in the intent log to the first file system, and comparing the first file system, after committing each of the deltas in the intent log, to the second file system to determine whether the intent log is valid.
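A minimal sketch of the test procedure described above, using a toy in-memory "file system" rather than a real one: freeze, accumulate deltas in the intent log, copy, replay the log on the copy, then unfreeze and replay on the original, and compare. Everything here is illustrative and hypothetical.

#include <stdbool.h>
#include <string.h>

/* Toy model: a "file system" is a small block array, and the intent log holds
 * deltas (block writes) that have not yet been committed. */
#define NBLOCKS 8
#define MAX_DELTAS 16

typedef struct { int blocks[NBLOCKS]; bool frozen; } fs_t;
typedef struct { int block; int value; } delta_t;
typedef struct { delta_t d[MAX_DELTAS]; int n; } intent_log_t;

static void fs_write(fs_t *fs, intent_log_t *log, int block, int value)
{
    if (fs->frozen)                          /* frozen: record the delta only */
        log->d[log->n++] = (delta_t){ block, value };
    else
        fs->blocks[block] = value;
}

static void commit(fs_t *fs, const intent_log_t *log)
{
    for (int i = 0; i < log->n; i++)
        fs->blocks[log->d[i].block] = log->d[i].value;
}

/* The test from the abstract: apply the log to a copy, then to the unfrozen
 * original, and check that the two file systems end up identical. */
bool intent_log_valid(void)
{
    fs_t fs1 = { {0}, false }, fs2;
    intent_log_t log = { .n = 0 };

    fs1.frozen = true;                       /* freeze the first file system  */
    fs_write(&fs1, &log, 1, 42);             /* deltas land in the intent log */
    fs_write(&fs1, &log, 3, 7);

    fs2 = fs1;                               /* copy to obtain the second fs  */
    commit(&fs2, &log);                      /* replay the log on the copy    */

    fs1.frozen = false;                      /* unfreeze and commit to fs1    */
    commit(&fs1, &log);

    return memcmp(fs1.blocks, fs2.blocks, sizeof fs1.blocks) == 0;
}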
Abstract: According to one embodiment of the invention, a technique is provided for relocating the contents of kernel pages in a manner similar to techniques used for relocating the contents of user pages. Before the contents of a source page are moved to a target page, for each entry of a plurality of entries that correspond to the source page, it is determined whether a mapping indicated in that entry is a mapping into kernel virtual memory address space or user virtual memory address space. If the mapping is into user virtual memory address space, then the entry is marked invalid. If the mapping is into kernel virtual memory address space, then the mapping is marked suspended. Marking an entry suspended causes processes and threads that try to access the entry's mapping to wait until the entry is no longer marked suspended. Consequently, kernel pages may be distributed among all computing system boards.
Type:
Grant
Filed:
June 12, 2006
Date of Patent:
February 10, 2009
Assignee:
Sun Microsystems, Inc.
Inventors:
Udayakumar Cholleti, Sean McEnroe, Stan J. Studzinski
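A minimal sketch of the marking step described in the abstract above: before the source page's contents are moved, each entry mapping the source page is either invalidated (user mapping) or marked suspended (kernel mapping) so that accessors wait. The entry layout is a hypothetical simplification.

/* Hypothetical translation entry for a source page being relocated. */
typedef enum { MAP_USER, MAP_KERNEL } map_space_t;
typedef enum { ENTRY_VALID, ENTRY_INVALID, ENTRY_SUSPENDED } entry_state_t;

typedef struct {
    map_space_t   space;   /* user or kernel virtual address space */
    entry_state_t state;
} map_entry_t;

/* Before copying the source page to the target page, walk every entry that
 * maps the source page: user mappings are marked invalid, kernel mappings are
 * marked suspended so that processes and threads accessing them wait. */
void prepare_relocation(map_entry_t *entries, int nentries)
{
    for (int i = 0; i < nentries; i++) {
        if (entries[i].space == MAP_USER)
            entries[i].state = ENTRY_INVALID;
        else
            entries[i].state = ENTRY_SUSPENDED;
    }
}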
Abstract: A switch contains a first semiconductor die, which is configured to receive signals on a plurality of input ports and to output the signals on a plurality of output ports. The first semiconductor die is further configured to selectively couple the signals between the input and output ports using a plurality of switching elements in accordance with a set of control signals, which correspond to a configuration of the switch. During this process, a plurality of proximity connectors, proximate to a surface of the semiconductor die, are configured to communicate the signals by capacitive coupling.
Type:
Grant
Filed:
June 14, 2006
Date of Patent:
February 10, 2009
Assignee:
Sun Microsystems, Inc.
Inventors:
Hans Eberle, Nils Gura, Wladyslaw Olesinski
Abstract: Methods and apparatus for representing application dependencies are disclosed. A software application is executed according to an associated state machine. A set of dependencies relationship rules indicates dependencies of a set of software applications upon the software application based upon the state of the software application. The set of dependencies relationship rules may be represented by a dependencies graph, where the software application and the set of software applications are each represented by a dependency node in the dependencies graph and each line connecting the software application with one of the set of software applications corresponds to one or more dependency statements indicating a change in state in one of the set of software applications in response to a change in state of the software application.
Type:
Grant
Filed:
September 9, 2004
Date of Patent:
February 10, 2009
Assignee:
Sun Microsystems, Inc.
Inventors:
Stephen C. Hahn, Liane Praza, Michael W. Shapiro
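A minimal sketch of one way the dependency relationship rules described above could be encoded: each rule names a provider application, the provider state that triggers it, and the state change it implies for a dependent application. The rule structure and states are assumptions for illustration only.

#include <stdio.h>
#include <string.h>

/* Hypothetical dependency statement: when 'provider' reaches 'provider_state',
 * the dependent application should be driven to 'dependent_state'. A set of
 * such rules corresponds to the edges of the dependencies graph. */
typedef enum { SVC_OFFLINE, SVC_ONLINE, SVC_MAINTENANCE } svc_state_t;

typedef struct {
    const char *provider;
    svc_state_t provider_state;
    const char *dependent;
    svc_state_t dependent_state;
} dep_rule_t;

/* Apply every rule triggered by a provider's state change. */
void on_state_change(const dep_rule_t *rules, int nrules,
                     const char *provider, svc_state_t new_state)
{
    for (int i = 0; i < nrules; i++) {
        if (rules[i].provider_state == new_state &&
            strcmp(rules[i].provider, provider) == 0)
            printf("drive %s to state %d\n",
                   rules[i].dependent, rules[i].dependent_state);
    }
}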
Abstract: A resource adapter may include a modular system management interface for providing an interface between server-provided management services and back-end systems. Enterprise servers may provide management services and may host application components implementing business logic. Back-end systems may provide resources to the application components. The servers may provide services to the back-end systems to enhance efficiency, scalability, and security. Resource adapters interfacing these systems may include service adapter modules to interface between the servers and the back-end systems. For each service that a server provides to a back-end system, the corresponding resource adapter may include a service adapter module installed in the resource adapter's modular system management interface. The service adapter module may isolate the code that interacts with the service.
Abstract: A mechanism is disclosed for selectively providing mount information to processes running within operating system partitions. In one implementation, a non-global operating system partition is created within a global operating system environment. A file system is maintained for this non-global partition. This file system comprises zero or more mounts, and may be part of a larger, overall file system. When a process running within the non-global partition requests information pertaining to mounts, a determination is made as to which partition the process is running in. Because the process is running within the non-global partition, only selected information is provided to the process. More specifically, only information pertaining to the mounts that are within the file system maintained for the non-global partition is provided to the process. By doing so, the process is limited to viewing only those mounts that are part of the non-global partition's file system.
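A minimal sketch of the filtering described above: when a process in a non-global partition asks for mount information, only mounts belonging to that partition's file system are reported, while a global-partition process sees everything. The mount-table layout and zone names are hypothetical.

#include <stdio.h>
#include <string.h>

/* Toy mount table: each mount records which partition ("zone") it belongs to. */
typedef struct {
    const char *path;
    const char *zone;     /* "global" or the name of a non-global partition */
} mount_t;

/* Report only the mounts visible to the requesting process: everything for a
 * global-partition process, otherwise only mounts within the process's own
 * non-global partition. */
void list_mounts(const mount_t *mounts, int n, const char *caller_zone)
{
    for (int i = 0; i < n; i++) {
        if (strcmp(caller_zone, "global") == 0 ||
            strcmp(mounts[i].zone, caller_zone) == 0)
            printf("%s\n", mounts[i].path);
    }
}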
Abstract: A random noise generator is included in the drive circuit supplying power to a system indicator that emits optical signals. The random noise generator generates a random noise signal that is introduced into a signal input to the drive circuit so that data or covert channel information is not recoverable from the optical signals emitted by the system indicator.
Abstract: A processor including a large register file utilizes a dirty bit storage coupled to the register file and a dirty bit logic that controls resetting of the dirty bit storage. The dirty bit logic determines whether a register or group of registers in the register file has been written since the process was loaded or the context was last restored and, if written, generates a value in the dirty bit storage that designates the written condition of the register or group of registers. When the context is next saved, the dirty bit logic saves a particular register or group of registers when the dirty bit storage indicates that a register or group of registers was written. If the register or group of registers was not written, the context is switched without saving the register or group of registers. The dirty bit storage is initialized when a process is loaded or the context changes.
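A minimal sketch of the dirty-bit-gated context save described above: writes set a per-group dirty bit, a context save copies only groups whose bit is set, and the bits are cleared afterward. Group sizes and names are assumptions.

#include <stdint.h>
#include <string.h>

#define NGROUPS 8          /* register file divided into groups of registers */
#define REGS_PER_GROUP 16

/* Hypothetical per-context state: register values plus one dirty bit per group. */
typedef struct {
    uint64_t regs[NGROUPS][REGS_PER_GROUP];
    uint8_t  dirty[NGROUPS];   /* set on first write after load/restore */
} reg_context_t;

void write_reg(reg_context_t *ctx, int group, int idx, uint64_t value)
{
    ctx->regs[group][idx] = value;
    ctx->dirty[group] = 1;                    /* record that this group was written */
}

/* On a context switch, copy out only the groups whose dirty bit is set, then
 * clear the dirty bits so the next quantum starts from a clean state. */
void save_context(reg_context_t *ctx, reg_context_t *save_area)
{
    for (int g = 0; g < NGROUPS; g++) {
        if (ctx->dirty[g])
            memcpy(save_area->regs[g], ctx->regs[g], sizeof ctx->regs[g]);
    }
    memset(ctx->dirty, 0, sizeof ctx->dirty);
}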
Abstract: Synchronized register renaming between a master processor and a coprocessor that receives operations from the master enables efficient implementation of register renaming and operation execution in the processors. An ideal and an external register allocation map are implemented in the coprocessor. When registers are no longer allocated according to the ideal allocation map and the registers are currently allocated according to the external allocation map, the registers are deallocated in the external map and the number of freed registers is reported to the master. The master increments a free register credit count accordingly, and decrements the credit count by one for each operation issued to the coprocessor. An operation is not issued to the coprocessor unless at least one register is free according to the credit count. The master also throttles coprocessor operation issue based on a credit count corresponding to free scheduler entries available in the coprocessor.
Type:
Grant
Filed:
October 31, 2006
Date of Patent:
February 10, 2009
Assignee:
Sun Microsystems, Inc.
Inventors:
John Gregory Favor, Christopher P. Nelson
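A minimal sketch of the master-side credit counting described in the abstract above (the coprocessor-side allocation maps are omitted): freed registers reported by the coprocessor add credits, each issued operation consumes one, and issue is throttled when no credits remain. Names are hypothetical.

#include <stdbool.h>

/* Hypothetical master-side bookkeeping for coprocessor register credits. */
typedef struct {
    int reg_credits;   /* free physical registers the coprocessor has reported */
} master_t;

/* Called when the coprocessor reports how many registers it just freed
 * (registers deallocated from its external allocation map). */
void on_registers_freed(master_t *m, int freed)
{
    m->reg_credits += freed;
}

/* Issue an operation to the coprocessor only if the credit count shows at
 * least one free register; each issued operation consumes one credit. */
bool try_issue(master_t *m)
{
    if (m->reg_credits < 1)
        return false;   /* throttle: no destination register is known to be free */
    m->reg_credits--;
    /* ... send the operation to the coprocessor here ... */
    return true;
}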
Abstract: An apparatus and a method dynamically reassign resources in a coprocessor among master processors that require service from the coprocessor. The method includes each processor, in each processor cycle, keeping track of a number of resource units required for executing operations sent to the coprocessor in that processor cycle and receiving from the coprocessor a number of resource units released during the processor cycle. When the resources need to be reassigned, the coprocessor asserts a signal to the resource-yielding processor to cause it to reduce its expectation of resources to zero and cease sending service requests to the coprocessor. The coprocessor then moves resources from the yielding processor to the resource-receiving processor. Resources are then released to both processors over time to their respective adjusted resource allocations. Such resources may be the number of operations allowed to be executing in the coprocessor simultaneously.
Type:
Grant
Filed:
October 31, 2006
Date of Patent:
February 10, 2009
Assignee:
Sun Microsystems, Inc.
Inventors:
John Gregory Favor, Christopher P. Nelson
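A minimal sketch of the yield-and-refill behavior described in the abstract above, from one processor's point of view; the gradual refill policy shown here is an assumption for illustration, not the patent's mechanism.

/* Hypothetical per-processor view of the coprocessor resource units it may use. */
typedef struct {
    int allocated;   /* units this processor currently expects to have available */
    int yielding;    /* nonzero after the coprocessor asserts its yield signal    */
} proc_alloc_t;

/* The coprocessor asserts the yield signal: the yielding processor drops its
 * expectation of resources to zero and stops sending service requests. */
void on_yield_signal(proc_alloc_t *p)
{
    p->yielding = 1;
    p->allocated = 0;
}

/* As operations complete, released units are handed back over time so that
 * each processor grows toward its adjusted allocation rather than jumping to it. */
void release_units(proc_alloc_t *p, int units, int adjusted_target)
{
    p->allocated += units;
    if (p->allocated > adjusted_target)
        p->allocated = adjusted_target;
    if (p->allocated > 0)
        p->yielding = 0;   /* the processor may resume issuing service requests */
}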
Abstract: One embodiment of the present invention provides a system that facilitates storing results of resolvable branches during speculative execution, and then using the results to predict the same branches during non-speculative execution. During operation, the system executes code within a processor. Upon encountering a stall condition, the system speculatively executes the code from the point of the stall, without committing results of the speculative execution to the architectural state of the processor. Upon encountering a branch instruction that is resolved during speculative execution, the system stores the result of the resolved branch in a branch queue, so that the result can be subsequently used to predict the branch during non-speculative execution.
Type:
Grant
Filed:
March 29, 2005
Date of Patent:
February 10, 2009
Assignee:
Sun Microsystems, Inc.
Inventors:
Marc Tremblay, Shailender Chaudhry, Quinn A. Jacobson
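A minimal sketch of the branch queue described in the abstract above: branches resolved while executing speculatively past a stall are recorded, and their outcomes are replayed as predictions when the same branches are re-executed non-speculatively. The queue layout and matching policy are assumptions.

#include <stdbool.h>
#include <stdint.h>

#define BQ_SIZE 64

/* Hypothetical FIFO of branch outcomes resolved during speculative execution. */
typedef struct {
    uint64_t pc[BQ_SIZE];
    bool     taken[BQ_SIZE];
    int      head, tail;
} branch_queue_t;

/* Speculative execution: a branch resolved ahead of time is recorded, since
 * its result is not committed to the architectural state. */
void record_resolved_branch(branch_queue_t *q, uint64_t pc, bool taken)
{
    q->pc[q->tail % BQ_SIZE] = pc;
    q->taken[q->tail % BQ_SIZE] = taken;
    q->tail++;
}

/* Non-speculative re-execution: if the next queued entry matches this branch,
 * use the stored outcome as its prediction. Returns true on a queue hit. */
bool predict_from_queue(branch_queue_t *q, uint64_t pc, bool *taken_out)
{
    if (q->head == q->tail || q->pc[q->head % BQ_SIZE] != pc)
        return false;
    *taken_out = q->taken[q->head % BQ_SIZE];
    q->head++;
    return true;
}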
Abstract: A computer readable medium comprising software instructions for purchasing units, wherein the software instructions, when executed by a processor, enable a system including the processor to receive a request for an initial trial unit from a user, receive a request to purchase the initial trial unit from the user, complete the purchase of the initial trial unit using a first financial incentive, wherein the purchase of the initial trial unit is completed within an initial unit conversion period, provide, to the user, a second financial incentive to purchase at least one conversion unit after the completion of the purchase of the initial trial unit, receive a request to purchase the at least one conversion unit, and complete the purchase of the at least one conversion unit using the second financial incentive, wherein the purchase of the at least one conversion unit is completed within a total promotion period.