Abstract: Described herein is a point-to-point memory communications architecture, having a point-to-point signal line set associated with each of a plurality of connectors or module positions. When the system is fully populated, there is a one-to-one correspondence between signal line sets and memory modules. In systems that are not fully populated, the system is configurable to use a plurality of the signal line sets for a single memory module.
Type:
Grant
Filed:
February 28, 2001
Date of Patent:
October 27, 2009
Assignee:
Rambus Inc.
Inventors:
Richard E. Perego, Frederick A. Ware, Ely K. Tsern, Craig E. Hampel
Abstract: A memory address decoding method for determining whether a given address is located in one of a plurality of sections. Each section has a plurality of memory units, and each memory unit has a unique corresponding address expressed in binary. The method includes making the corresponding addresses in a section of greater size smaller than the corresponding addresses in a section of smaller size, building a single bit-pattern for each section from all corresponding addresses, and comparing whether at least one comparative bit of the given address matches those in any of the bit-patterns, so as to determine from the comparison which of the sections contains the given address.
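The bit-pattern decoding idea can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `'x'` don't-care notation and the helper names `section_patterns` and `decode` are assumptions for the example.

```python
def section_patterns(sections):
    """For each section (a list of equal-width binary address strings),
    derive one bit-pattern: bit positions whose value is common to every
    address keep that value; positions that vary become 'x' (don't care)."""
    patterns = []
    for addrs in sections:
        width = len(addrs[0])
        common = []
        for i in range(width):
            bits = {a[i] for a in addrs}
            common.append(bits.pop() if len(bits) == 1 else 'x')
        patterns.append(''.join(common))
    return patterns

def decode(address, patterns):
    """Return the index of the section whose pattern matches the address,
    comparing only the non-'x' (comparative) bits."""
    for idx, pat in enumerate(patterns):
        if all(p in ('x', b) for p, b in zip(pat, address)):
            return idx
    return None
```

With two sections covering addresses 000-011 and 100-101, the patterns become `"0xx"` and `"10x"`, so decoding reduces to matching a handful of fixed bits rather than range comparisons.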
Abstract: According to one embodiment of the invention, a method comprises verifying that a cache block is not exclusively owned, and if not, transmitting a message identifying both the cache block and a caching agent requesting ownership of the cache block to a broadcast interconnect.
Type:
Grant
Filed:
December 19, 2005
Date of Patent:
October 20, 2009
Assignee:
Intel Corporation
Inventors:
Bratin Saha, Hariharan L. Thantry, Ali-Reza Adl-Tabatabai
Abstract: Methods and structures for dynamic multiple indirections to improve reliability and performance of dynamically mapped storage devices. In a dynamically mapped storage device, in which all user-supplied logical blocks are dynamically mapped by the storage device controller to physical disk blocks, features and aspects hereof provide for dynamically altering the number of replicated copies (multiple mapped indirections) of user data stored on the storage device. The storage device controller may gather performance information regarding operation of the storage device so that, where the physical capacity of the storage device permits and as degrading reliability is detected, additional copies (multiple indirections) of stored user data may be written to the mapped storage device. Increased multiple indirections improve reliability by decreasing the probability of data loss in response to various failure modes of the storage device.
Type:
Grant
Filed:
October 19, 2006
Date of Patent:
October 13, 2009
Assignee:
Seagate Technology LLC
Inventors:
Bruce A. Liikanen, Mike L. Mallary, Andrew W. Vogan
Abstract: A dynamic memory refresh controller includes a first in first out (FIFO) memory, a scheduler, a refresh control unit, and a signal generator. The FIFO memory stores and manages requests from a master device. The scheduler reorders the requests from the master device based on priorities assigned to the master device or provides information about following requests. The refresh control unit determines a refresh timing of the dynamic memory based on the existence of the following requests and an idle state of banks constituting the dynamic memory. Accordingly, the dynamic memory refresh controller may maximize a refresh trigger interval by changing the management order of the requests from the master device based on the priority of the response latency.
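The scheduler and refresh-control behavior described above can be sketched roughly as below. This is a simplified model under assumed data shapes (requests as dicts with `priority` and `bank` keys); the real controller is hardware, and these function names are illustrative.

```python
def reorder_requests(requests):
    """Stable-sort pending master requests by priority (lower number =
    more urgent), preserving FIFO order among equal priorities."""
    return sorted(requests, key=lambda r: r["priority"])

def pick_refresh_bank(requests, banks):
    """Choose an idle bank to refresh: one that no pending or following
    request targets, so the refresh does not lengthen any master's
    response latency. Return None to defer when every bank is busy."""
    busy = {r["bank"] for r in requests}
    for bank in banks:
        if bank not in busy:
            return bank
    return None
```

Deferring refresh whenever all banks have pending work is what lets the controller stretch the refresh trigger interval toward its maximum.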
Abstract: Provided is a method for managing retention of stored objects, comprising: receiving a modification request with respect to an attribute or archive policy for an object; determining whether an attribute modification protection flag or setting is set in response to the modification request requesting to modify the attribute for the object; allowing the modification of the attribute for the object in response to determining that the attribute modification protection flag or setting is not set; determining whether a protection retention mechanism or setting is set in response to the modification request requesting to modify the archive policy for the object; and allowing the modification of the archive policy for the object in response to determining that the protection retention mechanism or setting is not set.
Type:
Grant
Filed:
August 18, 2006
Date of Patent:
October 6, 2009
Assignee:
International Business Machines Corporation
Inventors:
Avishai Haim Hochberg, Toby Lyn Marek, David Maxwell Cannon, Howard Newton Martin, Donald Paul Warren, Jr., Mark Alan Haye, Alan L. Stuart
Abstract: A method of tying related process threads within non-related applications together in terms of memory paging behavior. In a data processing system, a first process thread is related to one or more “partner” threads within separate high latency storage locations. The kernel analyzes the memory “page-in” patterns of multiple threads and identifies one or more partner threads of the first thread based on user input, observed memory page-in patterns, and/or pre-defined identification information within the thread data structures. The kernel marks the first thread and its corresponding related partner threads with a unique thread identifier. When the first thread is subsequently paged into a lower latency memory, the kernel also pages-in the related partner threads that are marked with the unique thread identifier in lockstep. Tying related threads from non-related applications together in terms of memory paging behavior thus eliminates memory management delays.
Type:
Grant
Filed:
February 20, 2007
Date of Patent:
September 29, 2009
Assignee:
International Business Machines Corporation
Inventors:
Gerald F. McBrearty, Shawn P. Mullen, Jessica C. Murillo, Johnny Meng-Han Shieh
Abstract: This invention provides a specified retention date within a data set that is locked against deletion or modification within a WORM storage implementation. This retention date scheme does not utilize any proprietary application program interfaces (APIs) or protocols, but rather, employs native functionality within conventional file (or other data containers, data sets or block-based logical unit numbers) properties available in commonly used operating systems. In an illustrative embodiment, the retention date/time is calculated by querying the file's last-modified time prior to commit, adding the retention period to this value and thereby deriving a retention date after which the file can be released from WORM. Prior to commit, the computed retention date is stored in the file's “last access time” property/attribute field, or another metadata field that remains permanently associated with the file and that, in being used for retention date, does not interfere with file management in a WORM state.
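The retention-date derivation in the illustrative embodiment (last-modified time plus retention period, stored in the last-access-time attribute) maps naturally onto standard filesystem calls. The sketch below is an assumption-laden analogue, not the patented WORM implementation: the 7-year default period and the function names are invented for illustration.

```python
import os
import time

RETENTION_SECONDS = 7 * 365 * 24 * 3600  # assumed retention period

def commit_to_worm(path, retention_seconds=RETENTION_SECONDS):
    """Derive the retention date by adding the retention period to the
    file's last-modified time, then record it in the file's 'last access
    time' attribute prior to commit. Returns the retention timestamp."""
    st = os.stat(path)
    retain_until = st.st_mtime + retention_seconds
    os.utime(path, (retain_until, st.st_mtime))  # (atime, mtime)
    return retain_until

def is_releasable(path, now=None):
    """A file may leave WORM state once the recorded retention date,
    read back from the last-access-time attribute, has passed."""
    now = time.time() if now is None else now
    return now >= os.stat(path).st_atime
```

Because only native file properties are used, no proprietary API or protocol is needed, matching the abstract's premise.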
Abstract: The present invention is a method of and system for program thread synchronization. In accordance with an embodiment of the invention, a method of synchronizing program threads for one or more processors is provided. An address for data for each of a plurality of program threads to be synchronized is determined. For each processor executing one or more of the threads to be synchronized, execution of the thread is halted at a barrier by attempting a data operation to the determined address and the address being unavailable. Execution of the threads is resumed.
Type:
Grant
Filed:
November 10, 2005
Date of Patent:
September 8, 2009
Assignee:
Hewlett-Packard Development Company, L.P.
Inventors:
Jean-François C. P. Collard, Norman Paul Jouppi, John Morgan Sampson
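The barrier behavior described in the abstract, threads halting by attempting an operation on a shared location that stays unavailable until all have arrived, can be mimicked in software. The class below is a rough analogue using standard threading primitives, not the processor-level mechanism the patent claims.

```python
import threading

class SpinBarrier:
    """Software analogue of the patented barrier: each arriving thread
    attempts an 'operation' on a shared location; the location remains
    unavailable (event cleared) until every participant has arrived,
    at which point all threads resume."""
    def __init__(self, n_threads):
        self.n = n_threads
        self.count = 0
        self.lock = threading.Lock()
        self.ready = threading.Event()

    def wait(self):
        with self.lock:
            self.count += 1
            last = self.count == self.n
        if last:
            self.ready.set()    # location becomes available; all resume
        else:
            self.ready.wait()   # halted: the operation cannot complete yet
```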
Abstract: An improved system and method for removing a storage server in a distributed column chunk data store is provided. A distributed column chunk data store may be provided by multiple storage servers operably coupled to a network. A storage server provided may include a database engine for partitioning a data table into the column chunks for distributing across multiple storage servers, a storage shared memory for storing the column chunks during processing of semantic operations performed on the column chunks, and a storage services manager for striping column chunks of a partitioned data table across multiple storage servers. Any data table may be flexibly partitioned into column chunks using one or more columns with various partitioning methods. Storage servers may then be removed and column chunks may be redistributed among the remaining storage servers in the column chunk data store.
Abstract: Embodiments of the present invention provide a class of computer architectures generally referred to as lightweight multi-threaded architectures (LIMA). Other embodiments may be described and claimed.
Type:
Grant
Filed:
February 15, 2007
Date of Patent:
September 1, 2009
Assignees:
University of Notre Dame du Lac, Cray, Inc.
Inventors:
Peter M. Kogge, Jay B. Brockman, David Tennyson Harper, III, Burton Smith, Charles David Callahan, II
Abstract: A firmware memory manager allocates memory for code and data based on a lifespan associated with each allocation. The memory manager determines whether each allocated block of memory is needed only for a certain lifespan. Based on this determination, blocks of memory needed beyond the certain lifespan are all allocated adjacent to each other in memory. Once execution exceeds the certain lifespan, memory needed only for boot time is reported as being available for reuse by an operating system.
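One way to realize the lifespan-based placement the abstract describes is a two-ended allocator: boot-only blocks grow from one end, longer-lived blocks from the other, so the boot-only region can be reported to the OS as a single contiguous range. This sketch is an assumed design, not the patent's firmware.

```python
class LifespanAllocator:
    """Place allocations needed beyond boot at the top of memory and
    boot-only allocations at the bottom, keeping long-lived blocks
    adjacent and the boot-only region contiguous and reclaimable."""
    def __init__(self, size):
        self.size = size
        self.lo = 0        # next free offset for boot-only blocks
        self.hi = size     # long-lived blocks grow downward from the top

    def alloc(self, nbytes, persistent):
        if self.lo + nbytes > self.hi:
            raise MemoryError("out of firmware memory")
        if persistent:
            self.hi -= nbytes
            return self.hi
        addr = self.lo
        self.lo += nbytes
        return addr

    def reclaimable_after_boot(self):
        """Once boot completes, the range [0, lo) held only boot-time
        data and can be reported as available for OS reuse."""
        return (0, self.lo)
```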
Abstract: Systems, methods, and computer program products are presented for transiently clearing a reservation on a device, where the reservation belongs to a host that owns the device and the reservation blocks a host that does not own the device from performing an operation with the device. The reservation is cleared transiently by the host that does not own the device. While the reservation is cleared, the operation is performed with the device using the host that does not own the device.
Abstract: An apparatus comprises a data storage medium including first and second partitions, wherein individual physical blocks in the first partition are paired with individual physical blocks in the second partition, a status flag for each of the pairs of physical blocks, and a controller for performing read and write operations on the physical blocks in accordance with the status flags. A method performed by the apparatus is also provided.
Abstract: A monitoring process for a NUMA system collects data from multiple monitored threads executing in different nodes of the system. The monitoring process executes on different processors in different nodes. The monitoring process intelligently collects data from monitored threads according to the node in which it is executing to reduce the proportion of inter-node data accesses. Preferably, the monitoring process has the capability to specify to the dispatcher the node to which it should be dispatched next, and traverses the nodes while collecting data from threads associated with the node in which the monitor is currently executing. By intelligently associating the data collection with the node of the monitoring process, the frequency of inter-node data accesses for purposes of collecting data by the monitoring process is reduced, increasing execution efficiency.
Type:
Grant
Filed:
April 19, 2008
Date of Patent:
August 11, 2009
Assignee:
International Business Machines Corporation
Abstract: A method and apparatus for enhancing performance of parity check in computer readable media is provided. For example, in a RAID (N+1) configuration, a virtual data strip is added for a calculation of parity. Data of the virtual data strip is set so that a predetermined portion of a data area in the virtual data strip has a predetermined value. Consequently, performance of parity check performed in a data processing system having a RAID configuration can be enhanced.
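The virtual-strip idea can be illustrated with XOR parity: adding a virtual strip whose bytes hold a predetermined value means a parity check only has to verify that the XOR of all strips reproduces that known constant. The constant 0xFF and the function names below are assumptions for the sketch.

```python
def compute_parity(strips, virtual_value=0xFF, strip_len=4):
    """Parity strip = XOR of all data strips plus one virtual strip
    whose every byte holds a predetermined value (assumed 0xFF)."""
    parity = bytes([virtual_value]) * strip_len
    for s in strips:
        parity = bytes(a ^ b for a, b in zip(parity, s))
    return parity

def parity_ok(strips, parity, virtual_value=0xFF):
    """Parity check: XOR of the data strips and the parity strip must
    reproduce the virtual strip's known constant pattern."""
    acc = parity
    for s in strips:
        acc = bytes(a ^ b for a, b in zip(acc, s))
    return all(b == virtual_value for b in acc)
```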
Abstract: A disk drive apparatus has a magnetic platter, a disk drive motor, and a disk drive controller. The disk drive controller is capable of storing data onto and retrieving data from the magnetic platter while the magnetic platter turns at a predefined maximum speed. The disk drive controller is configured to receive a command to access a storage location on the magnetic platter from an external storage controller, and direct the disk drive motor to increase rotational speed of the magnetic platter to the predefined maximum speed in response to the command. The disk drive controller is further configured to, prior to the magnetic platter reaching the predefined maximum speed, access the storage location on the magnetic platter in response to the command. Accordingly, early access to storage locations on the magnetic platter is not substantially hindered by the spin up process.
Abstract: Provided is a semiconductor memory system including a plurality of main memory chips and sub-memory chips as alternatives, in which each main memory chip includes a plurality of reserved memory blocks in the same chip as alternatives to an abnormal memory block. When it is detected that the number of remaining reserved memory blocks unused as blocks to be reassigned has reached a first predetermined value in the main memory chip, the memory blocks in the sub-memory chip start to be formatted. When the number of remaining reserved memory blocks unused in the main memory chip reaches a second predetermined value, read/write with respect to the main memory chip is switched to the sub-memory chip, bypassing the format process for the memory blocks in the sub-memory chip. Thus, in a semiconductor memory system including a main flash memory, an alternative flash memory, and a write cache memory, the capacity of a RAM for the write cache memory can be reduced.
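The two-threshold policy above can be sketched as a small state machine. The threshold values and class/attribute names are assumptions for illustration; the patent does not specify them.

```python
class ChipManager:
    """Two-threshold switchover: when spare reserved blocks in the main
    chip fall to FIRST_MARK, begin formatting the sub-chip in advance;
    when they fall to SECOND_MARK, redirect read/write to the sub-chip,
    whose formatting is already underway and need not block the switch."""
    FIRST_MARK = 8    # assumed first predetermined value
    SECOND_MARK = 2   # assumed second predetermined value

    def __init__(self, reserved_blocks):
        self.reserved = reserved_blocks
        self.sub_formatting = False
        self.active = "main"

    def consume_reserved_block(self):
        self.reserved -= 1
        if self.reserved <= self.FIRST_MARK and not self.sub_formatting:
            self.sub_formatting = True   # start formatting the sub-chip early
        if self.reserved <= self.SECOND_MARK:
            self.active = "sub"          # switch I/O to the sub-chip
```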
Abstract: A computer-implemented method for enforcing cache coherence includes multicasting a cache request for a memory address from a requesting node without an ordering restriction over a network, collecting, by the requesting node, a combined snoop response of the cache request over a unidirectional ring embedded in the network, and enforcing cache coherence for the memory address at the requesting node, according to the combined snoop response.
Type:
Grant
Filed:
November 6, 2006
Date of Patent:
July 28, 2009
Assignee:
International Business Machines Corporation
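Combining snoop responses as a request circulates the unidirectional ring can be modeled as a running reduction: each node folds its reply into the message, and the requester reads the final combined value. The response encoding below (MISS < SHARED < MODIFIED, combined by taking the strongest claim) is an assumed scheme, not the one the patent specifies.

```python
from enum import IntEnum

class Snoop(IntEnum):
    """Per-node snoop replies, ordered so the strongest ownership claim
    wins when responses are combined around the ring (assumed encoding)."""
    MISS = 0
    SHARED = 1
    MODIFIED = 2

def combined_snoop_response(ring_replies):
    """Fold each node's reply into the circulating message in ring order;
    the requesting node collects the final combined response."""
    combined = Snoop.MISS
    for reply in ring_replies:
        combined = max(combined, reply)
    return combined
```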
Abstract: Circuits, methods, and apparatus that provide an L2 cache that services requests out of order. This L2 cache processes requests that are hits without waiting for data corresponding to requests that are misses to be returned from a graphics memory. A first auxiliary memory, referred to as a side pool, is used for holding subsequent requests for data at a specific address while a previous request for data at that address is serviced by a frame buffer interface and graphics memory. This L2 cache may also use a second auxiliary memory, referred to as a take pool, to store requests or pointers to data that is ready to be retrieved from an L2 cache.
Type:
Grant
Filed:
December 20, 2005
Date of Patent:
July 21, 2009
Assignee:
NVIDIA Corporation
Inventors:
Christopher D. S. Donham, John S. Montrym, Patrick R. Marchand