Abstract: A system for tracing a simulation design involves an encoded assertion asserting a value of a node of the simulation design at a point in a simulation, a fanin cone detection facility configured to obtain a fanin cone for the encoded assertion, a waveform trace facility configured to obtain waveform data including a history of signal values for the node, and a simulation toolkit configured to obtain node data using the fanin cone and the waveform data.
Abstract: A method and apparatus for input data selection for content addressable memory. In one embodiment, the apparatus includes an array of CAM cells, a select circuit adapted to generate a plurality of select signals each indicative of a segment of input data provided to the CAM apparatus, and switch circuitry including a plurality of programmable switch circuits each programmable to output a respective bit of the input data as a comparand bit for the array of CAM cells in response to one of the select signals.
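The segment-selection idea in this abstract can be illustrated with a minimal sketch. The segment width, word width, and function names below are assumptions for illustration, not the patent's actual circuit parameters; the select signal is modeled as a segment index routed through a switch function.

```python
# Illustrative model of segment-based comparand selection for a CAM.
# A "select signal" picks which segment of the input word the programmable
# switches route to the comparand bits (widths are assumed values).

def select_comparand(input_data: int, segment: int, seg_width: int = 4) -> int:
    """Route one segment of the input word to the comparand bits,
    mimicking programmable switch circuits driven by a select signal."""
    shift = segment * seg_width
    return (input_data >> shift) & ((1 << seg_width) - 1)

def cam_search(entries, comparand):
    """Return the indices of stored words matching the comparand."""
    return [i for i, word in enumerate(entries) if word == comparand]
```

For a 16-bit input `0xBEEF` with 4-bit segments, selecting segment 0 yields `0xF` and segment 3 yields `0xB`, which is then compared against every stored word in parallel in real hardware (sequentially in this sketch).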
Abstract: A storage virtualization environment is provided that includes a system for providing one or more virtual volumes. The system may include a host system and a set of storage devices, each of which includes physical block addresses that stores data. Further, the system includes a network switch system connecting the host system and the set of storage devices and is configured to define and manage a virtual volume associated with data distributed across the physical block addresses. The network switch system includes a first virtualization layer that maintains first tier objects including information reflecting a relationship between the physical block addresses and one or more logical partitions of virtual volume data. Moreover, the network switch system includes a second virtualization layer that maintains second tier objects including information reflecting a logical configuration of the virtual volume.
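The two-tier mapping described above can be sketched as two lookup tables: second-tier objects ordering the logical partitions of a volume, and first-tier objects mapping each partition to physical blocks on a device. All names, sizes, and the table layout below are assumptions for illustration.

```python
# Illustrative two-tier virtual-volume mapping (field names are assumed).

first_tier = {            # logical partition -> (device, physical block range)
    "part0": ("devA", range(0, 100)),
    "part1": ("devB", range(0, 100)),
}
second_tier = {           # virtual volume -> ordered logical partitions
    "vol1": ["part0", "part1"],
}

def resolve(volume: str, virtual_block: int):
    """Translate a virtual block address to (device, physical block)
    by walking the volume's partitions in order."""
    for part in second_tier[volume]:
        dev, blocks = first_tier[part]
        if virtual_block < len(blocks):
            return dev, blocks[virtual_block]
        virtual_block -= len(blocks)
    raise IndexError("virtual block out of range")
```

Resolving block 150 of `vol1` skips past the 100 blocks of `part0` and lands on block 50 of `devB`, showing how the switch system can present distributed physical storage as one contiguous volume.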
Type:
Grant
Filed:
February 27, 2004
Date of Patent:
June 26, 2007
Assignee:
Sun Microsystems, Inc.
Inventors:
Kevin Faulkner, Wai Yim, Rod DeKoning, David Kopper
Abstract: A method for generating output data for a transparent object in a digital image creates a plurality of image areas. The plurality of image areas covers a total area of the transparent object in the digital image. Each image area covers a different portion of the transparent object. The method combines information of the transparent object covered by an image area with information of a background image of the digital image also covered by the image area. The background image does not include the transparent object.
Abstract: A content addressable memory (CAM) device having concurrent compare and error checking capability. The content addressable memory (CAM) device includes circuitry to compare a comparand with a plurality of data words stored within the CAM device in a compare operation, and circuitry to determine, concurrently with the compare operation, whether one of the data words has an error.
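A minimal sketch of the concurrent compare-and-check idea, using a single even-parity bit per stored word as the assumed error-detection code (the patent does not specify the code; this is an illustration):

```python
# Sketch: compare a comparand against stored (word, parity_bit) pairs while
# simultaneously flagging entries whose stored parity no longer matches.

def parity(word: int) -> int:
    """Even-parity bit over the data bits of a word."""
    return bin(word).count("1") % 2

def compare_and_check(entries, comparand):
    """Return (match indices, error indices). In hardware the compare and
    the error check run concurrently; here they share one pass."""
    matches, errors = [], []
    for i, (word, pbit) in enumerate(entries):
        if parity(word) != pbit:
            errors.append(i)          # corrupted entry: excluded from matching
        elif word == comparand:
            matches.append(i)
    return matches, errors
```

Excluding erroneous entries from the match result models why concurrent checking matters: a bit flip could otherwise produce a false hit or a missed match.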
Abstract: A method and apparatus for replicating an image from a source to a destination disk are provided. Specific embodiments may be optimized for single source to multiple destination replication requests, for example. In one embodiment, the present invention provides tools and techniques for synchronous data replication responsive to asynchronous same-source-to-different-destination replication requests.
Type:
Grant
Filed:
December 8, 2003
Date of Patent:
June 26, 2007
Assignee:
Sun Microsystems, Inc.
Inventors:
Martin Patterson, Shriram Krishnan, Jayaraman Manni, Benjamin H. Stoltz
Abstract: A system and method for preserving POST data on a server system are presented. Embodiments of the present invention include a method for preserving POST data comprising using a generic cache agent to intercept a POST request made by a client for a resource accessible from a server, creating a URI unique to the POST request, storing the URI and POST data associated with the POST request in a cache memory, redirecting the client to an authentication URL, and after authentication, retrieving the POST data from the cache memory, creating an HTML page, the HTML page comprising the POST data, and serving the HTML page to a web server. In another embodiment of the present invention, a cache engine clears stale POST data through a LRU (least recently used) cache mechanism. The present invention provides a generic cache engine that can be plugged into any web server running any kind of web application.
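The intercept-store-redirect-replay flow can be sketched as follows. The URI scheme, function names, and the self-submitting form are illustrative assumptions; an LRU policy (mentioned in the abstract) could be layered on the cache dictionary.

```python
# Sketch of POST-data preservation across an authentication redirect.
import uuid

cache = {}  # unique URI -> stored POST data (an LRU policy could evict stale entries)

def intercept_post(post_data: dict) -> str:
    """Store the POST data under a URI unique to this request and return
    that URI; the client would then be redirected to the auth URL."""
    uri = "/cached/" + uuid.uuid4().hex   # path prefix is an assumed convention
    cache[uri] = post_data
    return uri

def replay_post(uri: str) -> str:
    """After authentication, rebuild the POST as an HTML page containing
    the preserved data as hidden form fields."""
    data = cache.pop(uri)                 # one-shot retrieval clears the entry
    fields = "".join(
        f'<input type="hidden" name="{k}" value="{v}">' for k, v in data.items()
    )
    return f'<form method="POST">{fields}</form>'
```

Popping the entry on replay keeps the cache from accumulating consumed requests; stale unconsumed entries are what the LRU mechanism in the abstract would clear.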
Abstract: A method and apparatus are provided for caching data for protocol processing. A protocol processing cache stores data needed for the generation of new headers, from both receive and transmit flows. Some data may be stored in the cache only after being processed (e.g., updated, calculated) by a protocol processor. Other data may bypass the protocol processor and be stored in the cache without being handled by the protocol processor. An entry in the cache includes data needed for header generation, a tag identifying an index into a control block memory of the TCP connection to which the data corresponds. An entry may also include one or more control indicators to indicate whether a transmit flow has been acknowledged, whether a receive flow has been observed, and whether the transmit flow has been updated with control data from the receive flow. The apparatus is protocol processor independent.
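The cache entry layout described in the abstract can be modeled with a small record type. Field names are assumptions chosen to mirror the abstract's description (tag into control block memory, header data, three control indicators):

```python
# Illustrative protocol-processing cache entry (field names are assumed).
from dataclasses import dataclass

@dataclass
class CacheEntry:
    cb_index: int             # tag: index into the TCP connection's control block memory
    header_data: bytes        # data needed to generate the next header
    tx_acked: bool = False    # transmit flow has been acknowledged
    rx_seen: bool = False     # receive flow has been observed
    tx_updated: bool = False  # transmit flow updated with receive-side control data

def lookup(cache: dict, cb_index: int):
    """Fetch the entry for a connection by its control-block tag."""
    return cache.get(cb_index)
```

Keying the cache by control-block index is what lets both receive and transmit flows of the same connection update one shared entry, as the abstract describes.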
Abstract: Computing an output interval includes producing a first result from a conditional selection using a first operand, a second operand, and a third operand, the operands respectively including a second input interval upper-point, a first input interval upper-point, and a first input interval lower-point. Next, computing an output interval includes producing a second result from the conditional selection, the operands respectively including a second input interval upper-point, the first input interval upper-point, and the first input interval lower-point. Furthermore, computing an output interval includes producing a third result from a conditional division using the first operand, the second operand, and the third operand, the operands respectively including the first result, the second input interval upper-point, and the second input interval lower-point.
Abstract: Various embodiments of a computer system employing bundled prefetching are disclosed. In one embodiment, a cache memory subsystem implements a method for prefetching data. The method comprises the cache memory subsystem receiving a read request to access a line of data and determining that a cache miss with respect to the line occurred. The method further comprises transmitting a bundled transaction on a system interconnect in response to the cache miss, wherein the bundled transaction combines a request for the line of data and a prefetch request.
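The miss-triggered bundling can be sketched in a few lines. Sequential next-line prefetching and the dictionary transaction format are assumptions for illustration; the patent covers bundling generally, not a specific prefetch policy.

```python
# Sketch: on a cache miss, emit ONE bundled transaction that carries both
# the demand request and the prefetch request(s).

def make_bundled_request(miss_line: int, prefetch_degree: int = 1):
    """Combine the demand request for the missing line with prefetch
    requests for the next sequential lines (an assumed policy)."""
    return {
        "demand": miss_line,
        "prefetch": [miss_line + i for i in range(1, prefetch_degree + 1)],
    }

def handle_read(cache: set, line: int, prefetch_degree: int = 1):
    """Return the bundled transaction on a miss, or None on a hit."""
    if line in cache:
        return None
    return make_bundled_request(line, prefetch_degree)
```

The point of the bundle is interconnect efficiency: one transaction on the system interconnect replaces separate demand and prefetch transactions.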
Abstract: If a consumer instruction specifies a 64 bit source register comprised of results provided by two 32 bit producer instructions, the number of dependencies that must be tracked per source register can be decreased by transforming one or more of the 32 bit producer instructions so that rather than simply storing its result in a 32 bit destination register, the transformed instruction stores its result into a 64 bit logical register along with another 32 bit value held in another 32 bit register.
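The transformation can be illustrated arithmetically: the transformed producer deposits its 32-bit result into one half of a 64-bit logical register whose other half already holds the second value, so the consumer tracks one dependency instead of two. Which half receives the new result is an assumption here.

```python
# Sketch of merging two 32-bit producer results into one 64-bit logical
# register so a 64-bit consumer depends on a single source register.

MASK32 = (1 << 32) - 1

def merged_producer(result32: int, other32: int) -> int:
    """Transformed producer: store result32 in the high half alongside
    other32 in the low half of a 64-bit logical register."""
    return ((result32 & MASK32) << 32) | (other32 & MASK32)

def consumer(src64: int):
    """The 64-bit consumer reads both halves from a single source."""
    return (src64 >> 32) & MASK32, src64 & MASK32
```

Before the transformation the consumer's source register had two producers to track; after it, the merged write makes the dependency graph a single edge.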
Type:
Grant
Filed:
April 5, 2004
Date of Patent:
June 26, 2007
Assignee:
Sun Microsystems, Inc.
Inventors:
Julian A. Prabhu, Atul Kalambur, Sudarshan Kadambi, Daniel L. Liebholz, Julie M. Staraitis
Abstract: A network messaging protocol enabling messages from multiple network devices to share a single display device is disclosed. The protocol enables a display device to prioritize among incoming messages from different network devices and to prioritize among incoming multiple messages from a single device. The protocol further enables multiple networked devices communicating over an IP based network to share a display device, and also provides the ability for a network device to specify the display characteristics of its message. A display device executing the messaging protocol processes and displays multiple messages from multiple network devices without the need to overwrite important messages or display messages in unreadable sizes.
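The prioritization among incoming messages can be modeled with a priority queue. The numeric-priority convention (lower number displays first) and the tuple layout are assumptions, not the patent's wire format.

```python
# Sketch: a shared display device ordering messages from multiple devices.
import heapq

def post_message(queue, priority: int, device_id: str, text: str):
    """Queue a message from a network device; lower number = higher
    display priority (an assumed convention)."""
    heapq.heappush(queue, (priority, device_id, text))

def next_to_display(queue):
    """Pop the highest-priority message for display."""
    return heapq.heappop(queue)
```

Keeping lower-priority messages queued rather than discarded is what lets the display avoid overwriting important messages, per the abstract.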
Abstract: Embodiments of the present invention provide an open and interoperable single sign-on session in a heterogeneous communication network. The open and interoperable single sign-on system is configured by exchanging an entity identifier, an account mapping, an attribute mapping, a site attribute list, an action mapping and/or the like. The entity identifier, account mapping, attribute mapping, site attribute list, action mapping and the like for each partner entity are stored in a partner list accessible to the particular entity. Thereafter, the open and interoperable single sign-on session may be provided upon receipt of a SAML request or assertion containing an entity identifier. The entity identifier contained in the SAML request or assertion is looked up in the partner list of the particular entity which received the SAML request or assertion. A record containing a matching entity identifier provides the applicable account mapping, attribute mapping, site attribute list, and/or action mapping.
Abstract: In order to provide protection for first information, protection information, for example parity information, for the first information is spatially distributed with respect to the first information in memory. A logic unit maps the first information and the spatially distributed information corresponding thereto from the memory onto a connection operating under a protocol supporting the protection information.
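One simple way to picture spatially distributed protection information is XOR parity interleaved among the data words it protects. The group size and layout below are illustrative assumptions; the patent covers the distribution concept generally.

```python
# Sketch: distribute parity words among the data they protect.

def interleave_with_parity(data: list, group: int = 4):
    """After every `group` data words, insert their XOR parity word
    (a simple layout chosen for illustration)."""
    out = []
    for i in range(0, len(data), group):
        chunk = data[i:i + group]
        out.extend(chunk)
        p = 0
        for w in chunk:
            p ^= w                    # XOR parity over the group
        out.append(p)
    return out
```

A logic unit reading this layout can check each group against its adjacent parity word before forwarding the data onto a protection-aware connection.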
Abstract: A cooling apparatus uses a plurality of pipes to cool one or more integrated circuits disposed on a circuit board. The cooling apparatus uses an array of magnets to create magnetic fields across segments of the plurality of pipes. Electrical currents are induced across the magnetic fields. A flow of electrically conductive fluid in the plurality of pipes is dependent on and controllable by the magnetic fields and/or the electrical currents.
Abstract: A heat sink has a plurality of pipes that are connected to an array of magnets. The plurality of pipes are connected to a lid that is operatively connected to an integrated circuit. Temperature sensors are disposed on the lid to measure temperatures of hot spots of the integrated circuit. Dependent on a temperature of one of the hot spots, the array of magnets may be used to propagate thermally conductive fluid toward the hot spot through the lid using the plurality of pipes.
Abstract: A heat sink operatively connected to an integrated circuit is configured to generate a magnetic field. Fluid flow toward and away from a hot spot of the integrated circuit is dependent on the magnetic field and an induced electrical current. A temperature sensor is used to take temperature measurements of the hot spot. A value of the induced electrical current is adjusted dependent on one or more temperature measurements taken by the temperature sensor.
Abstract: A deque of a local process in a memory work-stealing implementation may use one or more data structures to perform work. If the local process attempts to add a new value to its deque's circular array when that array is full (i.e., an overflow condition occurs), the contents of the array are copied to a newly allocated circular array of greater size. The entries in the original, smaller circular array are copied to positions in the now-active, larger circular array, and the system is configured to work with the newly activated circular array. By this technique, the local process is provided with space to add the new value.
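The grow-on-overflow step can be sketched with a growable circular array indexed by global positions, so existing entries keep their logical indices after the copy. Class and method names are illustrative; the patent's deque also involves concurrent top/bottom pointers not modeled here.

```python
# Sketch of a growable circular array backing a work-stealing deque.

class CircularArray:
    def __init__(self, log_size: int = 2):
        self.log_size = log_size
        self.items = [None] * (1 << log_size)

    def size(self):
        return 1 << self.log_size

    def get(self, i):
        return self.items[i % self.size()]   # global index wraps modulo size

    def put(self, i, value):
        self.items[i % self.size()] = value

    def grow(self, bottom: int, top: int):
        """On overflow, copy live entries [top, bottom) into a circular
        array of twice the size; logical indices are preserved."""
        bigger = CircularArray(self.log_size + 1)
        for i in range(top, bottom):
            bigger.put(i, self.get(i))
        return bigger
```

Because `get`/`put` take global indices, the process simply swaps in the returned larger array and keeps pushing at `bottom` without renumbering anything.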
Abstract: A heat sink has a heat spreader structure containing magneto-hydrodynamic fluid. The heat spreader includes a central metallic cylinder surrounded by a metal ring screen. Electrical and magnetic fields induce the magneto-hydrodynamic fluid to undergo a swirling motion, which acts as an MHD pump and provides efficient heat dissipation from a heat source contacting the heat spreader.