Patents Examined by Kaushik Patel
-
Patent number: 7143251
Abstract: A system and method are disclosed for processing a data stream. A data segment is received. It is determined whether the data segment has been previously stored. In the event that the data segment is determined not to have been previously stored, a unique identifier for specifying the data segment in a representation of the data stream is generated.
Type: Grant
Filed: June 30, 2003
Date of Patent: November 28, 2006
Assignee: Data Domain, Inc.
Inventor: R. Hugo Patterson
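For orientation, a minimal Python sketch of the general technique the abstract describes: derive an identifier from each segment's content, store only segments not seen before, and represent the stream as a sequence of identifiers. The SHA-256 hash and the in-memory dictionary are illustrative assumptions, not details taken from the patent.

```python
import hashlib

class SegmentStore:
    """Toy illustration: store each unique segment once, keyed by a content hash."""

    def __init__(self):
        self.segments = {}          # identifier -> segment bytes

    def process_stream(self, segments):
        """Return a representation of the stream as a list of segment identifiers."""
        representation = []
        for seg in segments:
            ident = hashlib.sha256(seg).hexdigest()   # unique identifier for the segment
            if ident not in self.segments:            # has it been previously stored?
                self.segments[ident] = seg            # store only new segments
            representation.append(ident)
        return representation

store = SegmentStore()
rep = store.process_stream([b"alpha", b"beta", b"alpha"])
print(len(store.segments), rep[0] == rep[2])   # 2 True  (the duplicate is stored once)
```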
-
Patent number: 7139865
Abstract: A LIFO type data storage device of 2N depth, N being an integer, includes two random access memories each having at least 2N-1 locations for storing data. A controller controls the reading and writing of data in one or the other of the two memories, or the direct transmission of data to multiplexing means. Outputs of the two memories are also connected to the multiplexing means and the output of the device is connected to the multiplexing means via a sampler.
Type: Grant
Filed: September 24, 2003
Date of Patent: November 21, 2006
Assignee: STMicroelectronics S.A.
Inventor: Pascal Urard
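The following toy Python model illustrates the general idea of building a LIFO of depth 2N from two memory banks, with a controller steering pushes and a multiplexer-like selection on pop. The bank sizing (N locations each) and the omission of the direct-bypass path and sampler are simplifications, not the patent's arrangement.

```python
class DualBankLifo:
    """Toy LIFO of depth 2*N built from two 'RAM' banks: a controller steers
    each push to one bank and a multiplexer-like pop selects the right bank."""

    def __init__(self, n):
        self.depth = 2 * n
        self.banks = ([None] * n, [None] * n)          # two banks of N locations each
        self.count = 0

    def push(self, value):
        if self.count >= self.depth:
            raise OverflowError("LIFO full")
        bank, slot = self.count % 2, self.count // 2   # alternate between the banks
        self.banks[bank][slot] = value
        self.count += 1

    def pop(self):
        if self.count == 0:
            raise IndexError("LIFO empty")
        self.count -= 1
        bank, slot = self.count % 2, self.count // 2   # select the bank written last
        return self.banks[bank][slot]

lifo = DualBankLifo(4)
for v in (1, 2, 3):
    lifo.push(v)
print(lifo.pop(), lifo.pop(), lifo.pop())   # 3 2 1
```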
-
Patent number: 7139879
Abstract: A system and method of improving fault-based multi-page pre-fetches are provided. When a request to read data randomly from a file is received, a determination is made as to whether previous data has been read from memory (i.e., RAM) or from a storage device. If the data has been read from memory, an attempt is made to read the present requested data from memory. If the data is in memory, it is provided to the requester. If the data is not in memory, a page fault occurs. If the requested data has a range that spans more than one page, the entire range is read in by a page fault handler. If previous data has not been read from memory, it is assumed that the present requested data is not in memory. Hence, the present requested data will be loaded into memory. Loading random data that spans a range of more than one page all at once into memory inhibits the system from pre-fetching on the range due to fault-based sequential data accesses.
Type: Grant
Filed: July 24, 2003
Date of Patent: November 21, 2006
Assignee: International Business Machines Corporation
Inventor: Zachary Merlynn Loafman
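A rough Python sketch of the decision logic described above, under the assumption that the system simply remembers whether the previous random read was satisfied from memory; the page-size constant and the residency tracking are illustrative only.

```python
PAGE_SIZE = 4096   # assumed page size for the sketch

class PageCache:
    """Toy model of the heuristic: trust memory only if the previous random read
    was served from memory; otherwise load the whole multi-page range in one step."""

    def __init__(self):
        self.resident = set()    # page numbers currently held in memory
        self.prev_hit = False    # was the previous read served from memory?

    def _pages(self, offset, length):
        return set(range(offset // PAGE_SIZE, (offset + length - 1) // PAGE_SIZE + 1))

    def read(self, offset, length):
        pages = self._pages(offset, length)
        was_resident = pages <= self.resident
        if self.prev_hit and was_resident:
            served = "memory"
        else:
            # Load the entire multi-page range at once (no page-by-page faults),
            # which also keeps fault-based sequential pre-fetch from firing on it.
            self.resident |= pages
            served = "storage"
        self.prev_hit = was_resident   # the next read trusts memory only after a hit here
        return served

cache = PageCache()
print(cache.read(0, 3 * PAGE_SIZE))   # 'storage' (cold: whole 3-page range loaded at once)
print(cache.read(0, 3 * PAGE_SIZE))   # 'storage' (previous read was not from memory)
print(cache.read(0, 3 * PAGE_SIZE))   # 'memory'
```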
-
Patent number: 7127558
Abstract: According to the present invention, it is possible to enhance performance in accessing a storage system without performing a data migration process between the storages constituting the storage system. A virtualization controller 2 connecting a host computer 1 and a storage 3 constituting the storage system controls a plurality of access paths (such as port 6-backplane 9-port 6 and port 6-backplane 9-storage controller 7-port 6) provided within the virtualization controller. It further performs control so that an optimum access path is selected for switching, out of the plurality of access paths, based on setting information entered at a management server 4 by a system administrator and various monitoring results detected by the virtualization controller itself.
Type: Grant
Filed: January 2, 2004
Date of Patent: October 24, 2006
Assignee: Hitachi, Ltd.
Inventors: Kiyoshi Honda, Norio Shimozono
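A simplified Python sketch of the path-selection idea: among the access paths the administrator has enabled via the management server, pick the one that current monitoring shows to be least loaded. The utilization metric and the tie-breaking rule are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class AccessPath:
    name: str
    enabled: bool       # set by the administrator via the management server
    utilization: float  # monitored by the virtualization controller (0.0 - 1.0)

def select_path(paths):
    """Pick the least-utilized path among those the administrator has enabled."""
    candidates = [p for p in paths if p.enabled]
    if not candidates:
        raise RuntimeError("no access path available")
    return min(candidates, key=lambda p: p.utilization)

paths = [
    AccessPath("port-backplane-port", enabled=True, utilization=0.20),
    AccessPath("port-backplane-controller-port", enabled=True, utilization=0.65),
]
print(select_path(paths).name)   # port-backplane-port
```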
-
Patent number: 7120779
Abstract: A data processing system 2 is provided supporting address offset generating instructions which encode bits of an address offset value using previously redundant bits in a legacy instruction encoding whilst maintaining backwards compatibility with that legacy encoding.
Type: Grant
Filed: January 28, 2004
Date of Patent: October 10, 2006
Assignee: ARM Limited
Inventor: David James Seal
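A hypothetical Python illustration of the encoding trick: bits that are always zero in a legacy instruction word are reused to carry part of an address offset, so an old decoder still sees a valid legacy instruction. The field position, field width, and example instruction word are invented for the sketch and do not reflect the actual ARM encoding.

```python
# Hypothetical 32-bit legacy encoding in which bits 20-23 were always zero
# ("redundant") for this instruction class; an offset-generating variant can
# reuse those bits to carry part of an address offset.
REDUNDANT_SHIFT = 20
REDUNDANT_MASK = 0xF << REDUNDANT_SHIFT

def encode_offset(legacy_word, offset_bits):
    assert legacy_word & REDUNDANT_MASK == 0, "field must be redundant in the legacy encoding"
    assert 0 <= offset_bits < 16
    return legacy_word | (offset_bits << REDUNDANT_SHIFT)

def decode_offset(word):
    return (word & REDUNDANT_MASK) >> REDUNDANT_SHIFT

word = encode_offset(0xE2000001, 0b1010)   # made-up legacy instruction word
print(hex(word), decode_offset(word))      # 0xe2a00001 10
```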
-
Patent number: 7111142
Abstract: Data storage systems and methods for writing data into a memory component and reading data from the memory component are disclosed. By utilizing a high-speed data controller, the systems and methods transfer data at a fast data transfer rate. In one implementation, the memory component comprises a memory controller for managing data within the memory component. The memory controller comprises the high-speed data controller and a data manager. The data manager comprises a compression/decompression engine that compresses and decompresses data and a storage device interface.
Type: Grant
Filed: September 13, 2002
Date of Patent: September 19, 2006
Assignee: Seagate Technology LLC
Inventors: Andrew M. Spencer, Tracy Ann Sauerwein
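As a rough illustration of the data-manager idea, the sketch below compresses data on the way into a dict-based storage device interface and decompresses it on the way out; zlib stands in for whatever compression/decompression engine the controller actually uses.

```python
import zlib

class DataManager:
    """Toy data manager: a compression/decompression engine in front of a
    simple storage-device interface (modelled here as a dict of blocks)."""

    def __init__(self):
        self.storage = {}   # block address -> compressed bytes

    def write(self, address, data):
        self.storage[address] = zlib.compress(data)    # compress before storing

    def read(self, address):
        return zlib.decompress(self.storage[address])  # decompress on the way out

dm = DataManager()
dm.write(0, b"A" * 4096)
print(len(dm.storage[0]), dm.read(0) == b"A" * 4096)   # small compressed size, True
```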
-
Patent number: 7100002
Abstract: A port independent data transaction interface for multi-port devices is provided. The port independent data transaction interface includes a command channel that receives command data and a source id. The source id indicates a source device that transmitted the command data. In addition, a data-in channel is included that receives write data and a write source id. Similar to the source id, the write source id indicates a source device that transmitted the write data. The port independent data transaction interface further includes a data-out channel that provides read data and a read id. The read id indicates a source device that transmitted a read command corresponding to the read data. The port independent data transaction interface utilizes the source id to associate command data with corresponding write data and read data.
Type: Grant
Filed: September 16, 2003
Date of Patent: August 29, 2006
Assignee: Denali Software, Inc.
Inventors: Steven Shrader, Samitinjoy Pal, Anne Espinoza, Michael McKeon
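A toy Python model of how a source id can tie the three channels together: a command arrives with a source id, write data on the data-in channel is matched to its command by the same id, and read data on the data-out channel is returned tagged with the requester's id. The queueing and memory model are assumptions for illustration.

```python
from collections import defaultdict, deque

class TransactionInterface:
    """Toy multi-port interface: the source id carried on each channel is the key
    that ties a command to its write data and to the read data returned later."""

    def __init__(self):
        self.memory = {}                          # address -> data
        self.pending = defaultdict(deque)         # source id -> queued commands

    def command(self, source_id, op, address):
        self.pending[source_id].append((op, address))

    def data_in(self, write_source_id, data):
        op, address = self.pending[write_source_id].popleft()
        assert op == "write"
        self.memory[address] = data               # write data matched to its command by id

    def data_out(self, read_id):
        op, address = self.pending[read_id].popleft()
        assert op == "read"
        return read_id, self.memory.get(address)  # read data tagged with the requester's id

bus = TransactionInterface()
bus.command(source_id=3, op="write", address=0x10)
bus.data_in(write_source_id=3, data=0xABCD)
bus.command(source_id=3, op="read", address=0x10)
print(bus.data_out(read_id=3))                    # (3, 43981)
```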
-
Patent number: 7099993
Abstract: A multi-level caching scheme for use in managing the storage of data on a data storage device is disclosed. The data is received by the data storage device as part of a write command issued by the sending interface and specifying one or more particular location(s) on the data storage device to which the data is/are to be stored. The data storage device utilizes a first level (L1) and a second level (L2) of cache memory to temporarily store the received data prior to commission to the specified storage location(s). In this embodiment, the data storage device first sends the data to the L1 cache memory, and subsequently thereafter, the data storage device transfers the data from the L1 cache memory to the L2 cache memory. Eventually, the data storage device transfers the data from the L2 cache memory to the specified storage location(s).
Type: Grant
Filed: September 24, 2003
Date of Patent: August 29, 2006
Assignee: Seagate Technology LLC
Inventor: Stanton M. Keeler
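A minimal Python sketch of the staged write path the abstract outlines: data lands in L1, is later migrated to L2, and is eventually committed to the specified location. When each transfer is triggered is left out of the sketch.

```python
class TwoLevelWriteCache:
    """Toy staging path: written data lands in L1, is later moved to L2, and is
    finally committed to its target location on the media."""

    def __init__(self):
        self.l1, self.l2, self.media = {}, {}, {}

    def write(self, lba, data):
        self.l1[lba] = data         # step 1: accept the write into L1

    def flush_l1(self):
        self.l2.update(self.l1)     # step 2: migrate L1 contents to L2
        self.l1.clear()

    def flush_l2(self):
        self.media.update(self.l2)  # step 3: commit L2 contents to the specified locations
        self.l2.clear()

cache = TwoLevelWriteCache()
cache.write(100, b"payload")
cache.flush_l1()
cache.flush_l2()
print(cache.media[100])   # b'payload'
```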
-
Patent number: 7096321
Abstract: A method, system, and program storage medium for adaptively managing pages in a cache memory included within a system having a variable workload, comprising arranging a cache memory included within a system into a circular buffer; maintaining a pointer that rotates around the circular buffer; maintaining a bit for each page in the circular buffer, wherein a bit value 0 indicates that the page was not accessed by the system since the last time that the pointer traversed over the page, and a bit value 1 indicates that the page has been accessed since the last time the pointer traversed over the page; and dynamically controlling a distribution of the number of pages in the cache memory that are marked with bit value 0 in response to a variable workload in order to increase a hit ratio of the cache memory.
Type: Grant
Filed: October 21, 2003
Date of Patent: August 22, 2006
Assignee: International Business Machines Corporation
Inventor: Dharmendra S. Modha
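The mechanism resembles the classic CLOCK replacement scheme; the sketch below shows only those basic mechanics (circular buffer, rotating pointer, one reference bit per page) and omits the patent's dynamic, workload-driven control of how many pages carry bit value 0.

```python
class ClockCache:
    """Toy CLOCK replacement: pages sit in a circular buffer, a rotating pointer
    inspects one reference bit per page, and pages whose bit is 0 are evicted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = []          # circular buffer of [page, reference_bit]
        self.hand = 0            # the rotating pointer

    def access(self, page):
        for entry in self.pages:
            if entry[0] == page:
                entry[1] = 1     # accessed since the pointer last traversed it
                return "hit"
        if len(self.pages) < self.capacity:
            self.pages.append([page, 0])
            return "miss"
        while self.pages[self.hand][1] == 1:            # recently used pages get a second chance
            self.pages[self.hand][1] = 0
            self.hand = (self.hand + 1) % self.capacity
        self.pages[self.hand] = [page, 0]               # evict a page whose bit is 0
        self.hand = (self.hand + 1) % self.capacity
        return "miss"

cache = ClockCache(2)
print([cache.access(p) for p in ("a", "b", "a", "c", "a")])
# ['miss', 'miss', 'hit', 'miss', 'hit']
```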
-
Patent number: 7093100
Abstract: Method and apparatus for increasing the number of real memory addresses accessible through a translation look-aside buffer (TLB) by a multithreaded CPU. The buffer entries include a virtual address, a real address, and a special mode bit indicating whether the address represents one of a plurality of threads being processed by the CPU. If the special mode bit is set, the higher-order bits of the real address associated with the virtual address are concatenated with the identification number of the thread being processed to obtain the real address. Buffer entries containing no special mode bit, or with the special mode bit set to 0, are processed using the full length of the real address associated with the virtual address stored in the look-aside buffer (TLB).
Type: Grant
Filed: November 14, 2003
Date of Patent: August 15, 2006
Assignee: International Business Machines Corporation
Inventors: Jeffrey Todd Bridges, Les M. DeBruyne, Robert L. Goldiez, Michael S. McIlvaine, Thomas A. Sartorius, Rodney Wayne Smith
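A small Python sketch of the lookup path, assuming a hypothetical two-bit thread-id field: when an entry's special mode bit is set, the thread id is concatenated into the low-order bits of the stored real address; otherwise the full stored real address is used.

```python
THREAD_ID_BITS = 2   # hypothetical: low-order bits replaced by the thread number

def translate(tlb, virtual_address, thread_id):
    """Toy TLB lookup. Entries are (real_address, special_mode_bit)."""
    real, special = tlb[virtual_address]
    if special:
        # Keep only the higher-order bits of the stored real address and
        # concatenate the id of the thread doing the access.
        high = real >> THREAD_ID_BITS
        return (high << THREAD_ID_BITS) | thread_id
    return real            # special bit clear: use the full stored real address

tlb = {0x4000: (0x9F00, 1), 0x5000: (0x1234, 0)}
print(hex(translate(tlb, 0x4000, thread_id=3)))   # 0x9f03
print(hex(translate(tlb, 0x5000, thread_id=3)))   # 0x1234
```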
-
Patent number: 7089361
Abstract: Methods, apparatus, and program product are disclosed for use in a computer system, comprising one or more coherent multiprocessor nodes, to provide for dynamic allocation of a directory memory in a node memory controller. The directory memory in a node is partitioned between a snoop directory portion and a remote memory directory portion. During a predetermined time interval, snoop directory entry refills and remote memory directory entry refills are accumulated. After the time interval has elapsed, a ratio of the number of snoop directory entry refills to the number of remote memory directory entry refills is computed. The ratio is compared to a desired ratio. Responsive to a difference between the ratio and the desired ratio, adjustments are made to the allocation of the directory memory between the snoop directory and the remote memory directory.
Type: Grant
Filed: August 7, 2003
Date of Patent: August 8, 2006
Assignee: International Business Machines Corporation
Inventor: John Michael Borkenhagen
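A toy Python version of the rebalancing step: compare the refill ratio observed over the interval with the desired ratio and shift a fixed number of directory entries toward whichever directory is refilling faster. The step size and the exact adjustment policy are assumptions for illustration.

```python
def rebalance_directory(snoop_refills, remote_refills, snoop_entries, remote_entries,
                        desired_ratio=1.0, step=64):
    """Compare the observed refill ratio over an interval to the desired ratio
    and shift a fixed number of entries toward the busier directory."""
    observed = snoop_refills / max(remote_refills, 1)
    if observed > desired_ratio and remote_entries > step:
        return snoop_entries + step, remote_entries - step   # grow the snoop directory
    if observed < desired_ratio and snoop_entries > step:
        return snoop_entries - step, remote_entries + step   # grow the remote memory directory
    return snoop_entries, remote_entries

print(rebalance_directory(snoop_refills=900, remote_refills=300,
                          snoop_entries=1024, remote_entries=1024))   # (1088, 960)
```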
-
Patent number: 7089373
Abstract: A method and an apparatus are provided for enhancing lock acquisition in a multiprocessor system. A lock-load instruction is sent from a first processor to a cache. In response, a reservation flag for the first processor is set, and lock data is sent to the first processor. The lock data is placed in target and shadow registers of the first processor. Upon a determination that the lock is taken, the lock-load instruction is resent from the first processor to the cache. Upon a determination that the reservation flag is still set for the first processor, a status-quo signal is sent to the first processor without resending the lock data to the first processor. In response, the lock data is copied from the shadow register to the target register.
Type: Grant
Filed: June 12, 2003
Date of Patent: August 8, 2006
Assignee: International Business Machines Corporation
Inventors: Michael Norman Day, Roy Moonseuk Kim, Mark Richard Nutter, Yasukichi Okawa, Thuong Quang Truong
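A loose Python model of the exchange between a processor and the cache: the first lock-load sets a reservation flag and returns the lock data into both the target and shadow registers, while a repeated lock-load that finds the flag still set gets only a status-quo signal, and the processor refreshes the target register from its shadow copy. The single-cache, single-CPU setup is a simplification.

```python
class LockCache:
    """Toy cache: tracks a per-processor reservation flag and, while that flag is
    still set, answers a repeated lock-load with a status-quo signal instead of
    resending the lock data."""

    def __init__(self):
        self.lock_value = 1           # 1 = lock currently taken
        self.reservations = set()

    def lock_load(self, cpu_id):
        if cpu_id in self.reservations:
            return ("status_quo", None)           # no data transfer needed
        self.reservations.add(cpu_id)
        return ("data", self.lock_value)

class Cpu:
    def __init__(self, cpu_id, cache):
        self.cpu_id, self.cache = cpu_id, cache
        self.target = self.shadow = None

    def try_acquire(self):
        kind, value = self.cache.lock_load(self.cpu_id)
        if kind == "data":
            self.target = self.shadow = value     # lock data lands in both registers
        else:
            self.target = self.shadow             # copy shadow register to target register
        return self.target == 0                   # acquired only if the lock was free

cpu = Cpu(0, LockCache())
print(cpu.try_acquire(), cpu.try_acquire())       # False False (lock taken, data not resent)
```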
-
Patent number: 7020762
Abstract: A system and method for a processor to determine a memory page management implementation used by a memory controller, without necessarily having direct access to the circuits or registers of the memory controller, are disclosed. In one embodiment, a matrix of counters corresponds to potential page management implementations and numbers of pages per block. The counters may be incremented or decremented depending upon whether the corresponding page management implementations and numbers of pages predict a page boundary whenever a long access latency is observed. The counter with the largest value after a period of time may correspond to the actual page management implementation and number of pages per block.
Type: Grant
Filed: December 24, 2002
Date of Patent: March 28, 2006
Assignee: Intel Corporation
Inventors: Eric A. Sprangle, Anwar Q. Rohillah
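A simplified Python sketch of the counter idea, restricted to the pages-per-block dimension (the patent's matrix also spans candidate page management implementations). Decrementing a candidate when it predicts a boundary on a fast access is an added assumption used to make the toy example discriminate between candidates; the page size is likewise assumed.

```python
PAGE_SIZE = 4 * 1024   # assumed page size in bytes

def predicts_boundary(prev_addr, addr, pages_per_block):
    """Would a controller with this many pages per block cross a block boundary
    between the previous access and this one?"""
    block = pages_per_block * PAGE_SIZE
    return prev_addr // block != addr // block

def guess_pages_per_block(accesses, candidates=(1, 2, 4, 8)):
    """Toy counter array: accesses is a list of (prev_address, address, long_latency).
    Candidates that correctly predict long-latency boundary crossings gain credit;
    the largest counter after the interval is taken as the likely configuration."""
    counters = {c: 0 for c in candidates}
    for prev_addr, addr, long_latency in accesses:
        for c in candidates:
            predicted = predicts_boundary(prev_addr, addr, c)
            if long_latency:
                counters[c] += 1 if predicted else -1
            elif predicted:
                counters[c] -= 1   # predicted a slow boundary crossing that did not happen
    return max(counters, key=counters.get)

# Long latencies occur whenever an access leaves a 16 KB (4-page) block:
trace = [(0x0000, 0x1000, False), (0x1000, 0x2000, False), (0x2000, 0x4000, True),
         (0x4000, 0x6000, False), (0x6000, 0x8000, True)]
print(guess_pages_per_block(trace))   # 4
```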