Abstract: A non-volatile storage device is used to track status changes in one or more items, where it is less costly to set bits in the non-volatile storage device than to reset bits. For each of the items to be tracked, at least two bits of storage space are allocated in the non-volatile storage device. One of the bits is set when the item changes status, and another of the bits is set when the item changes status again.
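The two-bit scheme above can be sketched in a few lines. This is an illustrative model only: the class name, bit layout, and erase behavior are assumptions, not taken from the patent; the point is that every status change is recorded with a cheap bit-set, and an expensive reset (erase) is needed only after all allocated bits are consumed.

```python
class SetOnlyStatusTracker:
    """Track status changes using only bit-set operations.

    Models media (e.g., flash) where setting a bit is cheap but
    resetting requires a costly erase: each tracked item gets two
    bits that are only ever set, never individually cleared.
    """

    BITS_PER_ITEM = 2

    def __init__(self, num_items):
        # All bits start cleared; clearing again would need an erase cycle.
        self.bits = [0] * (num_items * self.BITS_PER_ITEM)

    def record_status_change(self, item):
        # Set the first still-clear bit allocated to this item.
        base = item * self.BITS_PER_ITEM
        for i in range(base, base + self.BITS_PER_ITEM):
            if self.bits[i] == 0:
                self.bits[i] = 1  # cheap set operation
                return
        raise RuntimeError("item needs an erase cycle before further tracking")

    def change_count(self, item):
        # Number of status changes recorded so far for this item.
        base = item * self.BITS_PER_ITEM
        return sum(self.bits[base:base + self.BITS_PER_ITEM])
```

With two bits per item the tracker records up to two status changes per item between erases; allocating more bits per item extends that window at the cost of storage.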
Abstract: In a particular embodiment, a circuit device is disclosed that includes a first interface to a high speed data bus of a host system and a second interface coupled to a first data storage device. The circuit device further includes a solid-state storage device having a first solid-state data storage medium and having at least one expansion slot to receive at least one second solid-state data storage medium to expand a memory capacity of the solid-state storage device. The circuit device also includes a control circuit adapted to receive data from the host system via the first interface and to selectively write the received data to one of the first data storage device and the solid-state storage device.
Type:
Application
Filed:
December 17, 2008
Publication date:
June 17, 2010
Applicant:
Seagate Technology LLC
Inventors:
Margot Ann LaPanse, Michael Edward Baum, Stanton MacDonough Keeler
Abstract: A system and method operable to manage a message queue is provided. This management may involve out-of-order asynchronous heterogeneous remote direct memory access (RDMA) to the message queue. The system includes a pair of processing devices (a primary processing device and an additional processing device), a memory or storage location, and a data bus coupled to the processing devices. The processing devices cooperate to process queue data within a shared message queue, wherein when an individual processing device successfully accesses queue data, the queue data is locked for the exclusive use of that processing device. The queue data acquired by the acquiring processing device includes the queue data for both the primary processing device and the additional processing device, such that the acquiring processing device has all the queue data necessary to process the data and return processed queue data.
Abstract: A variable-width memory may comprise multiple memory banks from which data may be selectively read in such a way that overall memory access requirements may be reduced, which may result in associated reduction in power consumption.
Abstract: Improvements to apparatus, methods, and computer program products are disclosed to improve the efficiency of pinning objects in a heap memory that is subject to a garbage collection system.
Abstract: According to one embodiment, memory is allocated between an interleaver buffer and a de-interleaver buffer in a communication device based on downstream and upstream memory requirements. The upstream de-interleaver memory requirement is determined based on upstream channel conditions obtained for a communication channel used by the communication device. The memory is allocated between the interleaver and de-interleaver buffers based on the downstream and upstream memory requirements. The downstream interleaver memory requirement may be determined based on one or more predetermined downstream configuration parameters. Alternatively, the downstream interleaver memory requirement may also be determined based on the upstream channel conditions by estimating the downstream capacity of the communication channel based on the upstream channel conditions and determining an interleaver buffer size that satisfies one or more predetermined downstream configuration parameters and the downstream capacity estimate.
Abstract: A method, apparatus and computer program product is provided for traversing computer memory. In one example embodiment, a method comprises determining whether a next cluster associated with a file is located in contiguous memory and obtaining a location of a next cluster from a file allocation table when the next cluster associated with the file is not located in contiguous memory. For example, the determining step may comprise reading from a cluster descriptor that is associated with the file wherein the cluster descriptor comprises an indication that a contiguous cluster in a data region is associated with the file. In one embodiment, the file allocation table is located in a first memory and the cluster descriptor is located in a second memory.
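The traversal step described above can be sketched as follows. All names here are illustrative assumptions: `cluster_descriptor` stands in for the per-file descriptor that flags contiguity, and `fat` stands in for the file allocation table, which in the patent may live in a separate (possibly slower) memory.

```python
def next_cluster(current, cluster_descriptor, fat):
    """Return the next cluster of a file.

    When the descriptor marks the following cluster as physically
    contiguous, the next cluster is simply the adjacent one and no
    file allocation table lookup is needed; otherwise fall back to
    the FAT to find where the chain continues.
    """
    if cluster_descriptor.get(current, False):
        # Contiguous: skip the FAT lookup entirely.
        return current + 1
    # Non-contiguous: consult the file allocation table.
    return fat[current]
```

The benefit is that long contiguous runs of a file are traversed without touching the FAT at all, which matters when the FAT resides in a different memory than the descriptor.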
Abstract: In a method and apparatus for saving power in a device coupled to a bus, the device is placed to operate in a power saving mode by powering off a selective portion of the device including a device clock. If data communication over the bus is addressed to the device then the selective portion of the device, including the device clock, is triggered to return to a power on state from the power off state. The data communication is stored in shadow registers using a bus clock while the device clock is transitioning to the power on state. The data communication stored in the shadow registers is transferred to a register map under the control of the device clock operating in the power on state. Upon completion of the transfer of the data communication to the register map, the device is returned to operate in the power saving mode.
Type:
Application
Filed:
May 23, 2008
Publication date:
November 26, 2009
Inventors:
GEORGE VINCENT KONNAIL, Robert Wayne Mounger, Jose Vicente Santos, Sanjay Pratap Singh
Abstract: A storage system connected to a terminal includes: a plurality of drive devices that respectively drive a plurality of physical disks each having a physical storage area; a RAID configuration unit that configures a plurality of RAID groups by grouping two or more of the plurality of physical disks; a logical disk creation unit that creates, for the terminal through the RAID group, a logical disk having a logical storage area associated with the physical storage area; a memory for storing a RAID group control table showing, for each RAID group, (i) a free capacity, that is, the amount of physical storage area remaining in the RAID group that can still be associated with a logical disk, and (ii) a power status of the RAID group; a receiver that receives a request for creating a new logical disk; and an area allocation unit that allocates to the new logical disk the physical storage area remaining in the RAID group selected by giving priority to a RAID group in a powered state over a RAID
Abstract: Techniques for remote redirection are discussed. Redirection may be used to mimic a local user experience on a remote system. Redirection may include redirecting a pointer file, such as a shortcut, to account for remote access to a source file designated in the pointer file. The pointer file may be remapped so that the pointer file's file path accounts for a path from the remote access to the source file. Operating system (OS) information may be forwarded to the accessing system so that the redirected pointer file may be presented in accordance with the remote system's OS. Redirection may be used to present a directory remotely in accordance with the operating system running on the system being accessed.
Type:
Application
Filed:
March 7, 2008
Publication date:
September 10, 2009
Applicant:
Microsoft Corporation
Inventors:
Seung-Hae Park, Rachel Popkin, Heather Ferguson
Abstract: The present disclosure provides a method for generating a standardized location code that comprises extracting an address component from an input location data record, parsing the address component into one or more address words, and processing the address words by validating them one by one using one or more validation rules specific to the input location data record. The method also includes constructing the standardized location code by assembling the processed address words and producing an updated location data file to be shared with a plurality of subscribing applications.
Abstract: A storage apparatus includes: storage modules for storing data, each of the data including meta data for identifying an attribute of the data, each of the storage modules having a characteristic different from one another; a memory for storing information of the characteristic of each of the storage modules; and a controller for controlling the storage modules so as to optimize allocation of the data among the storage modules, including: a receiving module for receiving external information having meta data and condition information, a selecting module for selecting data having meta data corresponding to the meta data included in the external information among data stored in the storage modules, a determining module for determining a storage module having a characteristic optimum for the condition information, and an updating module for updating the location of the selected data so as to be stored in the determined storage module.
Abstract: A method to access data in a RAID array comprising a plurality of data storage media, wherein information is written to said plurality of data storage media using a RAID configuration, and wherein the method receives from a requestor a command comprising a data access priority indicator. If a RAID rebuild is in progress, the method determines if the data access priority indicator is set. If the data access priority indicator is set, the method executes a command selected from the group consisting of writing information to the target logical block array range and returning to the requestor information read from the target logical block array range.
Type:
Application
Filed:
January 4, 2008
Publication date:
July 9, 2009
Applicant:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors:
NILS HAUSTEIN, Craig Anthony Klein, Karl Allen Nielsen, Ulf Troppens, Daniel James Winarski
Abstract: In order to provide a system (100) for managing memory space (22), the system comprising—at least one central processing unit (10) for executing at least one first task (50) and at least one second task (60),—at least one memory unit (20), in particular at least one cache,—being connected with the central processing unit (10) and—comprising the memory space (22) being subdividable into—at least one first memory space (52), in particular at least one first cache space,—and at least one second memory space (62), in particular at least one second cache space, at least one determination means (30) for determining whether the first task (50) and/or the second task (60) requires the memory space (22), and—at least one allocation means (40) for allocating the memory space (22) to the respective task, in particular for allocating—the first memory space (52) to the first task (50) and the second memory space (62) to the second task (60), wherein it is possible to maximize the memory space (22) being provided to eac
Type:
Application
Filed:
November 4, 2005
Publication date:
March 26, 2009
Applicant:
KONINKLIJKE PHILIPS ELECTRONICS, N.V.
Inventors:
Clara Maria Otero Perez, Josephus Van Eijndhoven
Abstract: Object-based conflict detection is described in the context of software transactional memory. In one example, a pointer is received for a block of instructions, the block of instructions having allocated objects. The lower bits of the pointer are masked if the pointer is in a small object space to obtain a block header for the block, and a size of the allocated objects is determined using the block header.
Type:
Application
Filed:
November 13, 2008
Publication date:
March 19, 2009
Inventors:
Ben Hertzberg, Bratin Saha, Ali-Reza Adl-Tabatabai
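The pointer-masking step in the conflict-detection abstract above can be sketched concretely. The block size and the small-object address range below are illustrative assumptions; the essential idea is that power-of-two-aligned blocks let any interior pointer be rounded down to its block header by clearing low bits.

```python
SMALL_OBJECT_BLOCK_SIZE = 4096  # assumed power-of-two block size

def block_header_address(ptr, small_object_space=(0, 1 << 20)):
    """Recover the block header address from a pointer into a block.

    If the pointer falls in the small-object space, the block header
    sits at the block's base address, found by masking off the low
    bits of the pointer (valid because blocks are size-aligned).
    """
    lo, hi = small_object_space
    if lo <= ptr < hi:
        # Mask the lower bits: round down to the block boundary.
        return ptr & ~(SMALL_OBJECT_BLOCK_SIZE - 1)
    # Large objects carry their header elsewhere (not modeled here).
    return None
```

Once the header is located, metadata stored there (such as the size of the objects allocated in the block) is available without any per-object bookkeeping.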
Abstract: In a storage device incorporating a plurality of kinds of disk drives with different interfaces, the controller performs sparing on a disk drive whose access errors exceed a predetermined number by swapping it with a spare disk drive that is prepared beforehand.
Abstract: A storage controller which uses the same buffer to store data elements retrieved from different secondary storage units. In an embodiment, the controller retrieves location descriptors ahead of when data is available for storing in a target memory. Each location descriptor indicates the memory locations at which data received from a secondary storage is to be stored. Only a subset of the location descriptors may be retrieved and stored ahead when processing each request. Due to such retrieval and storing of limited number of location descriptors, the size of a buffer used by the storage controller may be reduced. Due to retrieval of the location descriptors ahead, unneeded buffering of the data elements within the storage controller is avoided, reducing the latency in writing the data into the main memory, thus improving performance.
Abstract: A processing system includes initialization software that is executable by a processor to identify one or more memory page sizes supported by the processing system. The supported memory page sizes that are identified by the initialization software are stored in one or more memory page size identification registers. Individual bits of the one or more memory page size identification registers may be respectively associated with a memory page size. Whether a memory page size is supported by the processing system may be determined by checking the logic state of the individual bit corresponding to the memory page size.
Abstract: A method and systems are described for providing remote storage via a removable memory device. The method includes intercepting a file write operation associated with storing a first file to the device and a file read operation associated with retrieving a second file from the device. In response to intercepting the write operation, the method contacts a server based on information included on the device to identify a storage location, stores a representation of the file on the device including an identifier for identifying the storage location, and provides for sending data provided by the write operation to the server for storage at the identified storage location. In response to intercepting the file read operation, the method extracts an identifier for identifying a storage location on a server from a representation of the file stored on the device and provides for retrieving data from the identified storage location on the server.
Abstract: A mechanism is disclosed for storing one or more chunk-specific sets of executable instructions at one or more predetermined offsets within chunks of a chunked heap. The mechanism provides for storing a chunk-specific set of executable instructions within a portion of a chunk, where the set of executable instructions begins at a predetermined offset within the range of virtual memory addresses allocated to the chunk. The set of executable instructions, when executed, is operable to perform one or more operations that are specific to the chunk.
Abstract: A method, system, and computer software product for operating a collection of memory cells. Each memory cell in the collection of memory cells is configured to store a binary multi-bit value delimited by characteristic parameter bands. In one embodiment, a transforming unit transforms an original collection of data to a transformed collection of data using a reversible mathematical operator. The original collection of data has binary multi-bit values arbitrarily distributed across the binary multi-bit values assigned to the characteristic parameter bands and the transformed collection of data has binary multi-bit values substantially uniformly distributed across the binary multi-bit values assigned to the characteristic parameter bands.
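One reversible operator that produces the uniform spread described above is an XOR with a seeded pseudorandom mask; this particular choice is an assumption for illustration, not the operator specified in the patent.

```python
import random

def transform(values, seed, bits=4):
    """Reversibly whiten multi-bit cell values.

    XOR each value with a pseudorandom mask drawn from a seeded PRNG.
    XOR is its own inverse, so applying the same transform with the
    same seed restores the original data, while transformed values are
    spread roughly uniformly across the 2**bits available levels.
    """
    rng = random.Random(seed)
    return [v ^ rng.randrange(1 << bits) for v in values]
```

Applying `transform` twice with the same seed returns the original collection, so the whitening step costs nothing in information while avoiding skewed occupancy of the characteristic parameter bands.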
Abstract: A data storage system and associated method are provided wherein a policy engine continuously collects qualitative information about a network load to the data storage system in order to dynamically characterize the load. The policy engine continuously correlates the load characterization to the content of a command queue of transfer requests for writeback commands and host read commands, selectively limiting the writeback content to only those transfer requests for writeback data that are selected on a physical zone basis from a plurality of predefined physical zones of the storage media.
Type:
Application
Filed:
June 29, 2007
Publication date:
January 1, 2009
Applicant:
SEAGATE TECHNOLOGY LLC
Inventors:
Clark Edward Lubbers, Robert Michael Lester
Abstract: An interconnect for an integrated circuit communicating transactions between initiator Intellectual Property (IP) cores and multiple target IP cores coupled to the interconnect is generally described. The interconnect routes the transactions between the target IP cores and initiator IP cores in the integrated circuit. A first aggregate target of the target IP cores includes two or more memory channels that are interleaved in an address space for the first aggregate target in the address map. Each memory channel is divided up in defined memory interleave segments and then interleaved with memory interleave segments from other memory channels. An address map is divided up into two or more regions. Each interleaved memory interleave segment is assigned to at least one of those regions and populates the address space for that region, and parameters associated with the regions and memory interleave segments are configurable.
Type:
Application
Filed:
June 24, 2008
Publication date:
December 25, 2008
Applicant:
Sonics, Inc.
Inventors:
Drew E. Wingard, Chien-Chun Chou, Stephen W. Hamilton, Ian Andrew Swarbrick, Vida Vakilotojar
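The interleaving scheme in the interconnect abstract above can be sketched as a simple address-routing function. The segment size and channel count below are illustrative fixed values; in the patent these are configurable parameters, and segments may be assigned to regions rather than strictly round-robin.

```python
def route_address(addr, num_channels=2, segment_size=1024):
    """Map a flat address to (channel, offset-within-channel).

    The address space is carved into fixed-size interleave segments
    assigned round-robin across the memory channels, so consecutive
    segments land on different channels.
    """
    segment = addr // segment_size
    channel = segment % num_channels
    # Position inside the chosen channel: which of that channel's
    # segments this is, plus the offset within the segment.
    local_segment = segment // num_channels
    offset = local_segment * segment_size + addr % segment_size
    return channel, offset
```

Spreading consecutive segments across channels lets an initiator's sequential traffic exercise all channels of the aggregate target in parallel.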
Abstract: A method of implementing virtualization involves an improved approach to resource management. A virtualizing subsystem is capable of creating separate environments that logically isolate applications from each other. Some of the separate environments share physical resources including physical memory. When a separate environment is configured, properties for the separate environment are defined. Configuring a separate environment may include specifying a physical memory usage cap for the separate environment. A global resource capping background service enforces physical memory caps on any separate environments that have specified physical memory caps.
Type:
Application
Filed:
June 19, 2007
Publication date:
December 25, 2008
Applicant:
SUN MICROSYSTEMS, INC.
Inventors:
Gerald A. Jelinek, Daniel B. Price, David S. Comay, Stephen Frances Lawrence
Abstract: A result value, such as a parity value, for a set of corresponding data elements from a plurality of storage devices is determined using a commutative operation. When accessing the set of corresponding data elements from a plurality of storage devices, a dual access can be performed for the storage device accessed last for the set of corresponding data elements so as to also obtain a data element from the last-accessed storage device for the next parity calculation. As a result, the number of storage device accesses can be reduced compared to conventional systems whereby a single access is performed for each storage device to obtain a single data element from the storage device.
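The access-saving pattern above can be sketched with XOR parity. The data layout (one list per device, one element per stripe) and the accounting are illustrative assumptions; the key property used is that XOR is commutative and associative, so elements may be folded in any order, and the device touched last for one stripe can yield its element for the next stripe in the same "dual access".

```python
from functools import reduce

def stripe_parities(devices):
    """Compute XOR parity per stripe, counting device accesses.

    For each stripe, the last device read performs a dual access that
    also prefetches that device's element for the following stripe,
    so roughly every other stripe skips one device access.
    """
    num_stripes = len(devices[0])
    parities = []
    accesses = 0
    prefetched = None  # element of devices[-1] from a prior dual access
    for s in range(num_stripes):
        elems = [d[s] for d in devices[:-1]]
        accesses += len(devices) - 1
        if prefetched is not None:
            elems.append(prefetched)  # no new access needed
            prefetched = None
        else:
            # Dual access: one access returns this stripe's element and
            # prefetches the next stripe's element from the same device.
            elems.append(devices[-1][s])
            if s + 1 < num_stripes:
                prefetched = devices[-1][s + 1]
            accesses += 1
        parities.append(reduce(lambda a, b: a ^ b, elems))
    return parities, accesses
```

With three devices and four stripes, the naive approach performs 12 accesses; the dual-access variant above performs 10, and the saving grows with the number of stripes.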
Abstract: A system and method for dynamic storage based on performance throttling. The method comprises providing an array of storage devices coupled to a computing device. The method comprises determining a status of a system condition, such as ambient temperature. The method comprises throttling the operating speed of one or more storage devices in the array based on the status of the system condition. The method comprises determining relative frequency of access to data to be stored by the computing device in the array of storage devices. The method comprises optimizing storage of data by the computing device in the array of storage devices based at least in part on 1) relative frequency of access to data and 2) which of the one or more storage devices are throttled.
Abstract: A memory card is structured to support a variety of applications by dividing a storage region into a plurality of sub storage regions, each sub storage region being assigned a particular data format associated with each of a plurality of application programs stored in a controller of the memory card. The data stored in each of the sub storage regions co-exists compatibly in the memory card. This allows for a multiplicity of applications, which can be made available through the use of a single memory card.
Abstract: A multiple computer system incorporating redundancy is disclosed. Data to be stored (A, B, C) is distributed (A1, A2, A3, . . . B1, B2, B3, . . . C1, C2, C3, . . . ) amongst a multiplicity of computers (M1, M2, . . . Mn). A parity form (P[A], P[B], . . . ) of the stored data is created by use of a reversible encoding process. The parity form data is preferably cycled amongst the various computers. In the event of failure of one of the computers the lost data can be re-generated.
Abstract: A computer system for partitioning the columns of a matrix A. The computer system includes a processor and a memory unit coupled to the processor. Program code in the memory unit, when executed by the processor, implements the method. Matrix A is provided in a memory device and has n columns and m rows, wherein n is an integer of at least 3 and m is an integer of at least 1. The n columns are partitioned into a closed group of p clusters, p being a positive integer of at least 2 and less than n. The partitioning includes an affinity-based merging of pairs of clusters of the matrix A based on an affinity between the clusters in each pair being merged. Each cluster consists of one or more columns of matrix A. The p clusters are stored in a computer-readable storage device.
Abstract: A hardware memory architecture or arrangement suited for multi-processor systems or arrays is disclosed. In one aspect, the memory arrangement includes at least one memory queue between a functional unit (e.g., computation unit) and at least one memory device, which the functional unit accesses (for write and/or read access).
Type:
Application
Filed:
December 28, 2007
Publication date:
June 12, 2008
Applicant:
Interuniversitair Microelektronica Centrum (IMEC) vzw
Abstract: A method, system, and program for facilitating non-contiguous allocation of a chunked object within a Java heap, without changing the manner in which a Java Virtual Machine allocates objects within the heap, are provided. According to one embodiment, a chunking controller within a broker layer detects a large object, where a large object is one whose allocation within the Java heap would exceed the maximum contiguous free space within the heap. The broker layer operates atop the Java Virtual Machine to facilitate communication and business processes between heterogeneous systems. The chunking controller describes the large object by an underlying array of the large object divided into multiple pieces, each of a size not exceeding the maximum contiguous free space.
Type:
Application
Filed:
February 14, 2008
Publication date:
June 5, 2008
Applicant:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors:
PHANI GOPAL V. ACHANTA, ROBERT TOD DIMPSEY, HARSHAL HEMENDRA SHAH
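The chunking step in the heap abstract above reduces to splitting an underlying array into pieces that each fit the largest contiguous free region. The function below is an illustrative sketch; the name and the list-based representation are assumptions, not the broker layer's actual interface.

```python
def chunk_large_object(data, max_contiguous_free):
    """Split a large array into pieces that each fit in free space.

    An object larger than the biggest contiguous free region of the
    heap is represented as multiple pieces, each no larger than that
    region, so every piece can be allocated without compaction.
    """
    if len(data) <= max_contiguous_free:
        return [data]  # small enough: allocate contiguously as usual
    return [data[i:i + max_contiguous_free]
            for i in range(0, len(data), max_contiguous_free)]
```

Because the pieces jointly preserve element order, a thin indexing layer can still present the chunked object to callers as a single logical array.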
Abstract: A method and system for reclaiming memory space occupied by replicated memory of a multiple computer system utilizing a replicated shared memory (RSM) system, or a hybrid or partial RSM system, is disclosed. The memory is reclaimed on those computers not using the memory even though one or more other computers may still be referring to their local replica of that memory. Instead of utilizing a general background memory clean-up routine, a specific memory deletion action (177A) is provided. Thus memory deletion, or clean-up, rather than being carried out at a deferred time in the background as in the prior art, is not deferred and is carried out in the foreground under specific program control.
Abstract: Systems and methods are provided including a backup system architecture for performing backup operations. In one implementation, a method is provided. A backup process is initialized on a device. An initial backup is performed for the device including storing data from the device on a first storage device. The stored data has a format corresponding to a file system structure of the device.
Type:
Application
Filed:
August 4, 2006
Publication date:
May 29, 2008
Inventors:
Pavel Cisler, Steve Ko, Peter McInerney, Robert Ulrich, Eric Weiss
Abstract: A system and method of allocating contiguous real memory in a data processing system. A memory controller within system memory receives a request from a data processing system component for a contiguous block of memory during operation of the data processing system. In response to receiving the request, the memory controller selects a candidate contiguous block of memory. Then, after temporarily restricting access to the candidate contiguous block of memory, the memory controller identifies a set of frames currently in use within the candidate contiguous block of memory, relocates the set of frames, and allocates the candidate block of memory for exclusive use by the requesting data processing component. The allocation of contiguous real memory occurs dynamically during the operation of the data processing system.
Abstract: A system, related hard disk drive (HDD) and method are disclosed in which a firmware download to the HDD is accomplished by receiving it from the host and storing it to a first region of a disk in the HDD. The value of a download flag is set once the firmware download is complete. After the system performs an OFF/ON power cycle, it checks the value of the download flag and changes a Logical Block Address mapping a second region of a non-user data region of the disk storing a DOS boot program. The firmware download is transferred from the first region to a third region of the disk or a non-volatile memory device following execution of a boot procedure by the host using the DOS boot program.
Abstract: The present invention takes advantage of unused storage space within the ESS cells to provide for the efficient and cost effective storage of downloadable content. Specifically, the system of the present invention generally includes a download grid manager that communicates with the ESS cells. Content to be replicated to the ESS cells, and characteristics corresponding thereto, are received on the download grid manager from a content owner (or the like). Based on the characteristics, a storage policy, and storage information previously received from the ESS cells, the download grid manager will replicate the downloadable content to unused storage space within the ESS cells.
Type:
Application
Filed:
September 28, 2007
Publication date:
January 24, 2008
Inventors:
Irwin Boutboul, Moon Kim, Dikran Meliksetian, Robert Oesterlin, Anthony Ravinsky
Abstract: Data streams are stored in a non-structured arrangement in which information is defined by references in data streams identifying data elements in related data streams. A first data stream is placed in a frozen state such that the information contained in the data stream is unmodifiable, and a first data element is identified within the first data stream, the first data element containing the same information as a second data element within a second data stream. The first data element is removed from the first data stream and a reference to the second data element is inserted.
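The substitution step above can be sketched as follows. The `("ref", index)` representation is an assumption for illustration; the point is that because the streams' information content is fixed, an element duplicated across streams can be stored once and referenced from the other stream without losing information.

```python
def deduplicate(first_stream, second_stream):
    """Replace elements of the first stream that duplicate elements of
    the second stream with references to the second stream.

    Duplicated elements are removed and replaced by ("ref", i) pairs,
    where i is the position of the matching element in the second
    stream, so the shared information is stored only once.
    """
    index = {elem: i for i, elem in enumerate(second_stream)}
    return [("ref", index[e]) if e in index else e
            for e in first_stream]
```

Reading the first stream back then means resolving each `("ref", i)` pair against the second stream, which reconstructs the original information exactly.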
Abstract: The invention classifies volumes (e.g., file systems or LUNs) of a data storage system according to application requirements and allocates space for the volumes on storage devices (e.g., hard disk drives) accordingly. A person such as an IT administrator configures the volumes specifying size, type (e.g., file system or SAN LUN), and priority (e.g., high, medium, low, or archive). The host schedules I/O requests to the storage devices in priority queues using the volume definition to match the application requirements and reduce storage seek time between volumes of different priorities. The host also allocates high performance bands of the storage devices to high performance applications and lower performance bands to lower performance applications. In this manner, the data storage system places data on the band of the storage device that best supports its performance needs.
Type:
Application
Filed:
August 29, 2007
Publication date:
December 27, 2007
Inventors:
Michael Brewer, David Burton, Michael Workman