Patents Examined by Kaushik Patel
  • Patent number: 7143251
    Abstract: A system and method are disclosed for processing a data stream. A data segment is received. It is determined whether the data segment has been previously stored. In the event that the data segment is determined not to have been previously stored, a unique identifier for specifying the data segment in a representation of the data stream is generated.
    Type: Grant
    Filed: June 30, 2003
    Date of Patent: November 28, 2006
    Assignee: Data Domain, Inc.
    Inventor: R. Hugo Patterson
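    A minimal sketch of the deduplication flow described above, assuming SHA-256 fingerprints as the unique identifiers and an in-memory index; both choices are illustrative and not taken from the patent:
    ```python
    import hashlib

    class DedupStore:
        """Toy deduplicating store: each unique segment is kept once and the
        data stream is represented as an ordered list of segment identifiers."""

        def __init__(self):
            self.segments = {}       # identifier -> segment bytes
            self.stream_repr = []    # ordered identifiers describing the stream

        def add_segment(self, data: bytes) -> str:
            # Derive an identifier from the segment contents (illustrative choice).
            ident = hashlib.sha256(data).hexdigest()
            # Store the segment only if it has not been stored before.
            if ident not in self.segments:
                self.segments[ident] = data
            self.stream_repr.append(ident)
            return ident

    store = DedupStore()
    for chunk in [b"alpha", b"beta", b"alpha"]:
        store.add_segment(chunk)
    print(len(store.segments), "unique segments,", len(store.stream_repr), "references")
    ```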
  • Patent number: 7139865
    Abstract: A LIFO type data storage device of 2N depth, N being an integer, includes two random access memories each having at least 2N-1 locations for storing data. A controller controls the reading and writing of data in one or the other of the two memories, or the direct transmission of data to multiplexing means. Outputs of the two memories are also connected to the multiplexing means and the output of the device is connected to the multiplexing means via a sampler.
    Type: Grant
    Filed: September 24, 2003
    Date of Patent: November 21, 2006
    Assignee: STMicroelectronics S.A.
    Inventor: Pascal Urard
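    A rough software analogue of the two-memory LIFO, assuming the two memories simply hold the even- and odd-indexed stack slots and modelling the multiplexer as the choice of which bank to read; the hardware timing and sampler details are ignored:
    ```python
    class TwoBankLifo:
        """Toy LIFO of depth 2*n backed by two banks of n locations each."""

        def __init__(self, n: int):
            self.banks = [[None] * n, [None] * n]   # the two "memories"
            self.top = 0                            # number of stored items

        def push(self, value):
            bank, slot = self.top % 2, self.top // 2
            self.banks[bank][slot] = value
            self.top += 1

        def pop(self):
            self.top -= 1
            bank, slot = self.top % 2, self.top // 2   # bank select (the "mux")
            return self.banks[bank][slot]

    lifo = TwoBankLifo(4)
    for v in (1, 2, 3):
        lifo.push(v)
    print(lifo.pop(), lifo.pop(), lifo.pop())   # 3 2 1
    ```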
  • Patent number: 7139879
    Abstract: A system and method of improving fault-based multi-page pre-fetches are provided. When a request to read data randomly from a file is received, a determination is made as to whether previous data has been read from memory (i.e., RAM) or from a storage device. If the data has been read from memory, an attempt is made to read the present requested data from memory. If the data is in memory it is provided to the requester. If the data is not in memory, a page fault occurs. If the requested data has a range that spans more than one page, the entire range is read in by a page fault handler. If previous data has not been read from memory, it will be assumed that the present requested data is not in memory. Hence, the present requested data will be loaded into memory. Loading random data that spans a range of more than one page all at once into memory inhibits the system from pre-fetching on the range due to fault-based sequential data accesses.
    Type: Grant
    Filed: July 24, 2003
    Date of Patent: November 21, 2006
    Assignee: International Business Machines Corporation
    Inventor: Zachary Merlynn Loafman
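    A hedged sketch of the described read heuristic, with the page size, residency set, and "previous read came from memory" flag modelled as plain Python state (all names hypothetical):
    ```python
    PAGE_SIZE = 4096

    class RandomReadCache:
        """Toy model of the heuristic: if the previous random read was served
        from memory, try memory first; on a miss (or after a previous miss),
        load the entire requested range in one step, as the multi-page fault
        handler described above would."""

        def __init__(self, backing: bytes):
            self.backing = backing
            self.resident = set()     # page numbers currently "in memory"
            self.prev_hit = False     # did the previous read come from memory?

        def read(self, offset: int, length: int) -> bytes:
            pages = range(offset // PAGE_SIZE, (offset + length - 1) // PAGE_SIZE + 1)
            if self.prev_hit and all(p in self.resident for p in pages):
                self.prev_hit = True              # served from memory again
            else:
                self.resident.update(pages)       # fault: bring the whole range in
                self.prev_hit = False             # next read assumes data is not resident
            return self.backing[offset:offset + length]

    cache = RandomReadCache(bytes(64 * 1024))
    cache.read(10_000, 9_000)         # spans three pages, loaded together
    print(sorted(cache.resident))     # [2, 3, 4]
    ```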
  • Patent number: 7127558
    Abstract: According to the present invention, it is possible to enhance performance in accessing a storage system without performing a data migration process between the storages constituting the storage system. A virtualization controller 2 connecting a host computer 1 and a storage 3 constituting the storage system controls a plurality of access paths (such as port 6-backplane 9-port 6 and port 6-backplane 9-storage controller 7-port 6) provided within the virtualization controller. It further performs control so that an optimum access path is selected for switching, out of the plurality of access paths, based on setting information provided by a system administrator via a management server 4 and on various monitoring results detected by the virtualization controller itself.
    Type: Grant
    Filed: January 2, 2004
    Date of Patent: October 24, 2006
    Assignee: Hitachi, Ltd.
    Inventors: Kiyoshi Honda, Norio Shimozono
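    An illustrative sketch of the path-selection step, scoring candidate paths by administrator-set priority and monitored load; the class, field names, and scoring rule are assumptions, not the patent's:
    ```python
    from dataclasses import dataclass

    @dataclass
    class AccessPath:
        name: str
        admin_priority: int     # policy set by the system administrator
        observed_load: float    # monitored utilisation, 0.0 (idle) to 1.0 (busy)

    def select_path(paths):
        """Pick the path with the best combination of policy and measured load.
        The scoring function here is purely illustrative."""
        return min(paths, key=lambda p: (p.admin_priority, p.observed_load))

    paths = [
        AccessPath("port-backplane-port", admin_priority=1, observed_load=0.9),
        AccessPath("port-backplane-controller-port", admin_priority=1, observed_load=0.2),
    ]
    print(select_path(paths).name)
    ```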
  • Patent number: 7120779
    Abstract: A data processing system 2 is provided supporting address offset generating instructions which encode bits of an address offset value using previously redundant bits in a legacy instruction encoding whilst maintaining backwards compatibility with that legacy encoding.
    Type: Grant
    Filed: January 28, 2004
    Date of Patent: October 10, 2006
    Assignee: ARM Limited
    Inventor: David James Seal
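    A toy encoder/decoder showing the general idea of carrying offset bits in otherwise-unused bit positions of a fixed-width instruction word; the 16-bit layout below is invented for illustration and is not ARM's encoding:
    ```python
    # Hypothetical 16-bit instruction word:
    #   bits 15-12: opcode
    #   bits 11-8 : register field
    #   bits  7-0 : previously "must be zero" bits, reused here for an offset
    def encode(opcode, reg, offset):
        assert 0 <= offset < 256, "offset must fit in the reused field"
        return (opcode & 0xF) << 12 | (reg & 0xF) << 8 | (offset & 0xFF)

    def decode(word):
        return (word >> 12) & 0xF, (word >> 8) & 0xF, word & 0xFF

    word = encode(opcode=0xA, reg=0x3, offset=0x42)
    print(hex(word), decode(word))
    ```
    In this toy layout the opcode and register fields keep their legacy positions, so instructions that leave the reused field at zero decode exactly as before.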
  • Patent number: 7111142
    Abstract: Data storage systems and methods for writing data into a memory component and reading data from the memory component are disclosed. By utilizing a high-speed data controller, the systems and methods transfer data at a fast data transfer rate. In one implementation, the memory component comprises a memory controller for managing data within the memory component. The memory controller comprises the high-speed data controller and a data manager. The data manager comprises a compression/decompression engine that compresses and decompresses data and a storage device interface.
    Type: Grant
    Filed: September 13, 2002
    Date of Patent: September 19, 2006
    Assignee: Seagate Technology LLC
    Inventors: Andrew M. Spencer, Tracy Ann Sauerwein
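    A brief sketch of a write/read path with a compression engine in front of the storage interface, using zlib as a stand-in for the compression/decompression engine described in the abstract:
    ```python
    import zlib

    class MemoryComponent:
        """Toy memory component: the data manager compresses data on write and
        decompresses it on read before handing it to a storage interface."""

        def __init__(self):
            self.storage = {}                              # stand-in for the storage device

        def write(self, address: int, data: bytes):
            self.storage[address] = zlib.compress(data)    # compression engine

        def read(self, address: int) -> bytes:
            return zlib.decompress(self.storage[address])  # decompression engine

    mc = MemoryComponent()
    mc.write(0, b"A" * 1024)
    print(len(mc.storage[0]), "bytes stored,", len(mc.read(0)), "bytes returned")
    ```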
  • Patent number: 7099993
    Abstract: A multi-level caching scheme for use in managing the storage of data on a data storage device is disclosed. The data is received by the data storage device as part of a write command issued by the sending interface and specifying one or more particular location(s) on the data storage device to which the data is/are to be stored. The data storage device utilizes a first level (L1) and a second level (L2) of cache memory to temporarily store the received data prior to commission to the specified storage location(s). In this embodiment, the data storage device first sends the data to the L1 cache memory, and subsequently thereafter, the data storage device transfers the data from the L1 cache memory to the L2 cache memory. Eventually, the data storage device transfers the data from the L2 cache memory to the specified storage location(s).
    Type: Grant
    Filed: September 24, 2003
    Date of Patent: August 29, 2006
    Assignee: Seagate Technology LLC
    Inventor: Stanton M. Keeler
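    A minimal sketch of two-level write staging along these lines: incoming write data lands in an L1 buffer, spills to an L2 buffer, and is eventually committed to the addressed locations; the thresholds and names are placeholders:
    ```python
    class StagedWriteCache:
        """Toy two-level write buffer: data lands in L1, migrates to L2, and is
        finally committed to the target locations on the media."""

        def __init__(self, l1_limit=4, l2_limit=16):
            self.l1, self.l2, self.media = {}, {}, {}
            self.l1_limit, self.l2_limit = l1_limit, l2_limit

        def write(self, lba: int, data: bytes):
            self.l1[lba] = data
            if len(self.l1) > self.l1_limit:        # stage L1 -> L2
                self.l2.update(self.l1)
                self.l1.clear()
            if len(self.l2) > self.l2_limit:        # commit L2 -> media
                self.media.update(self.l2)
                self.l2.clear()

    cache = StagedWriteCache()
    for lba in range(10):
        cache.write(lba, b"x")
    print(len(cache.l1), "in L1,", len(cache.l2), "in L2,", len(cache.media), "on media")
    ```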
  • Patent number: 7100002
    Abstract: A port independent data transaction interface for multi-port devices is provided. The port independent data transaction interface includes a command channel that receives command data and a source id. The source id indicates a source device that transmitted the command data. In addition, a data-in channel is included that receives write data and a write source id. Similar to the source id, the write source id indicates a source device that transmitted the write data. The port independent data transaction interface further includes a data-out channel that provides read data and a read id. The read id indicates a source device that transmitted a read command corresponding to the read data. The port independent data transaction interface utilizes the source id to associate command data with corresponding write data and read data.
    Type: Grant
    Filed: September 16, 2003
    Date of Patent: August 29, 2006
    Assignee: Denali Software, Inc.
    Inventors: Steven Shrader, Samitinjoy Pal, Anne Espinoza, Michael McKeon
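    A rough sketch of how a source id can tie a command on the command channel to data on the data-in and data-out channels; the channel objects here are plain Python structures and the method names are hypothetical:
    ```python
    from collections import deque

    class TransactionInterface:
        """Toy port-independent interface: commands and write data arrive tagged
        with a source id, and read results are returned tagged the same way so
        each port can claim its own responses."""

        def __init__(self):
            self.pending_writes = {}     # source id -> buffered write data
            self.memory = {}
            self.data_out = deque()      # (read id, data) pairs

        def command(self, source_id, op, address):
            if op == "write":
                self.memory[address] = self.pending_writes.pop(source_id)
            elif op == "read":
                self.data_out.append((source_id, self.memory.get(address)))

        def data_in(self, source_id, data):
            self.pending_writes[source_id] = data

    iface = TransactionInterface()
    iface.data_in(source_id=7, data=b"hello")
    iface.command(source_id=7, op="write", address=0x100)
    iface.command(source_id=3, op="read", address=0x100)
    print(iface.data_out.popleft())      # (3, b'hello')
    ```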
  • Patent number: 7096321
    Abstract: A method, system, and program storage medium for adaptively managing pages in a cache memory included within a system having a variable workload, comprising arranging a cache memory included within a system into a circular buffer; maintaining a pointer that rotates around the circular buffer; maintaining a bit for each page in the circular buffer, wherein a bit value 0 indicates that the page was not accessed by the system since the last time that the pointer traversed over the page, and a bit value 1 indicates that the page has been accessed since the last time the pointer traversed over the page; and dynamically controlling a distribution of the number of pages in the cache memory that are marked with bit 0 in response to a variable workload in order to increase a hit ratio of the cache memory.
    Type: Grant
    Filed: October 21, 2003
    Date of Patent: August 22, 2006
    Assignee: International Business Machines Corporation
    Inventor: Dharmendra S. Modha
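    The circular buffer, rotating pointer, and per-page access bit described here are essentially the classic CLOCK replacement mechanism; below is a minimal CLOCK sketch, with the adaptive control of the bit-0 distribution omitted:
    ```python
    class ClockCache:
        """Minimal CLOCK eviction: pages sit in a circular buffer, each with a
        bit that is set on access and cleared as the pointer sweeps past."""

        def __init__(self, size):
            self.size = size
            self.pages = []          # [page id, reference bit] entries
            self.hand = 0

        def access(self, page):
            for entry in self.pages:
                if entry[0] == page:
                    entry[1] = 1     # hit: mark as recently used
                    return True
            if len(self.pages) < self.size:
                self.pages.append([page, 0])
                return False
            while self.pages[self.hand][1] == 1:        # rotate the pointer,
                self.pages[self.hand][1] = 0            # clearing bits as it goes
                self.hand = (self.hand + 1) % self.size
            self.pages[self.hand] = [page, 0]           # evict the first bit-0 page
            self.hand = (self.hand + 1) % self.size
            return False

    cache = ClockCache(3)
    for p in [1, 2, 3, 1, 4]:
        cache.access(p)
    print([e[0] for e in cache.pages])   # [1, 4, 3]
    ```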
  • Patent number: 7093100
    Abstract: Method and apparatus for increasing the number of real memory addresses accessible through a translational look-aside buffer (TLB) by a multi-thread CPU. The buffer entries include a virtual address, a real address, and a special mode bit indicating whether the address represents one of a plurality of threads being processed by the CPU. If the special mode bit is set, the higher-order bits of the real address associated with the virtual address are concatenated with the thread identification number being processed to obtain a real address. Buffer entries containing no special mode bit, or a special mode bit set to 0, are processed by using the full length of the real address associated with the virtual address stored in the look-aside buffer (TLB).
    Type: Grant
    Filed: November 14, 2003
    Date of Patent: August 15, 2006
    Assignee: International Business Machines Corporation
    Inventors: Jeffrey Todd Bridges, Les M. DeBruyne, Robert L. Goldiez, Michael S. McIlvaine, Thomas A. Sartorius, Rodney Wayne Smith
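    A toy TLB lookup illustrating the mode-bit branch: when the bit is set, the thread identifier replaces the low-order bits of the stored real page number; the field widths and entry layout are assumptions made for the sketch:
    ```python
    THREAD_ID_BITS = 2     # hypothetical width of the thread identifier

    def translate(tlb, virtual_page, thread_id):
        """Toy TLB lookup. Each entry is (virtual page, stored real page, mode bit).
        With the mode bit set, the low bits of the real page are taken from the
        thread id, so one entry can serve a group of per-thread pages."""
        for vpage, rpage, mode in tlb:
            if vpage == virtual_page:
                if mode:
                    high = rpage >> THREAD_ID_BITS << THREAD_ID_BITS
                    return high | thread_id
                return rpage                     # mode bit clear: use the full address
        raise LookupError("TLB miss")

    tlb = [(0x10, 0x200, 0), (0x20, 0x300, 1)]
    print(hex(translate(tlb, 0x20, thread_id=3)))   # 0x303
    print(hex(translate(tlb, 0x10, thread_id=3)))   # 0x200
    ```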
  • Patent number: 7089361
    Abstract: Methods, apparatus, and program product are disclosed for use in a computer system to provide for dynamic allocation of a directory memory in a node memory controller, in which one or more coherent multiprocessor nodes comprise the computer system. The directory memory in a node is partitioned between a snoop directory portion and a remote memory directory portion. During a predetermined time interval, snoop directory entry refills and remote memory directory entry refills are accumulated. After the time interval has elapsed, a ratio of the number of snoop directory entry refills to the number of remote memory directory entry refills is computed. The ratio is compared to a desired ratio. Responsive to a difference between the ratio and the desired ratio, adjustments are made to the allocation of the directory memory between the snoop directory and the remote memory directory.
    Type: Grant
    Filed: August 7, 2003
    Date of Patent: August 8, 2006
    Assignee: International Business Machines Corporation
    Inventor: John Michael Borkenhagen
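    A sketch of the periodic rebalancing step: measure the refill ratio over an interval, compare it to the desired ratio, and shift directory entries between the two partitions; the step size and target ratio are placeholders:
    ```python
    def rebalance(snoop_entries, remote_entries, snoop_refills, remote_refills,
                  desired_ratio=1.0, step=64):
        """Return new (snoop, remote) partition sizes for the shared directory
        memory, nudged toward the desired refill ratio.  Placeholder policy."""
        total = snoop_entries + remote_entries
        measured = snoop_refills / max(remote_refills, 1)
        if measured > desired_ratio:
            snoop_entries += step        # snoop directory is refilling too often: grow it
        elif measured < desired_ratio:
            snoop_entries -= step        # remote memory directory needs the space
        snoop_entries = max(step, min(total - step, snoop_entries))
        return snoop_entries, total - snoop_entries

    print(rebalance(snoop_entries=1024, remote_entries=1024,
                    snoop_refills=900, remote_refills=300))   # (1088, 960)
    ```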
  • Patent number: 7089373
    Abstract: A method and an apparatus are provided for enhancing lock acquisition in a multiprocessor system. A lock-load instruction is sent from a first processor to a cache. In response, a reservation flag for the first processor is set, and lock data is sent to the first processor. The lock data is placed in target and shadow registers of the first processor. Upon a determination that the lock is taken, the lock-load instruction is resent from the first processor to the cache. Upon a determination that the reservation flag is still set for the first processor, a status-quo signal is sent to the first processor without resending the lock data to the first processor. In response, the lock data is copied from the shadow register to the target register.
    Type: Grant
    Filed: June 12, 2003
    Date of Patent: August 8, 2006
    Assignee: International Business Machines Corporation
    Inventors: Michael Norman Day, Roy Moonseuk Kim, Mark Richard Nutter, Yasukichi Okawa, Thuong Quang Truong
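    A rough event-level sketch of the exchange between a processor and the cache, with the reservation flag, shadow register, and status-quo signal modelled as ordinary Python state and return values (all names hypothetical):
    ```python
    class LockCache:
        """Toy cache side: tracks per-processor reservation flags and answers a
        lock-load with either the lock data or a status-quo indication."""

        def __init__(self):
            self.lock_value = 1               # 1 = lock currently taken (illustrative)
            self.reservations = set()

        def lock_load(self, cpu_id):
            if cpu_id in self.reservations:
                return ("status_quo", None)   # reservation still set: no data resent
            self.reservations.add(cpu_id)
            return ("data", self.lock_value)

    class Processor:
        def __init__(self, cpu_id, cache):
            self.cpu_id, self.cache = cpu_id, cache
            self.target = self.shadow = None

        def try_acquire(self):
            kind, value = self.cache.lock_load(self.cpu_id)
            if kind == "data":
                self.target = self.shadow = value   # fill target and shadow registers
            else:
                self.target = self.shadow           # copy shadow back into target
            return self.target == 0                 # 0 would mean the lock is free

    cache = LockCache()
    cpu = Processor(0, cache)
    print(cpu.try_acquire(), cpu.try_acquire())     # False False while the lock is held
    ```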
  • Patent number: 7020762
    Abstract: A system and method for a processor to determine a memory page management implementation used by a memory controller without necessarily having direct access to the circuits or registers of the memory controller is disclosed. In one embodiment, a matrix of counters corresponds to potential page management implementations and numbers of pages per block. The counters may be incremented or decremented depending upon whether the corresponding page management implementations and numbers of pages predict a page boundary whenever a long access latency is observed. The counter with the largest value after a period of time may correspond to the actual page management implementation and number of pages per block.
    Type: Grant
    Filed: December 24, 2002
    Date of Patent: March 28, 2006
    Assignee: Intel Corporation
    Inventors: Eric A. Sprangle, Anwar Q. Rohillah
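    A hedged sketch of the counter idea, reduced to one dimension (candidate pages-per-block values) and a simplified boundary model: on each long-latency access, hypotheses that predict a page boundary between the two addresses are credited and the rest are debited:
    ```python
    PAGES_PER_BLOCK = [4, 8, 16]     # candidate configurations (hypothetical)
    PAGE_SIZE = 4096                 # assumed page size for the sketch

    counters = {n: 0 for n in PAGES_PER_BLOCK}

    def observe(prev_addr, addr, long_latency):
        """On a long-latency access, credit hypotheses that predict a block
        boundary between the two addresses and debit those that do not."""
        if not long_latency:
            return
        for n in counters:
            block = n * PAGE_SIZE
            predicted = (prev_addr // block) != (addr // block)
            counters[n] += 1 if predicted else -1

    accesses = [(0x0000, 0x3000, False), (0x3000, 0x4200, True), (0x4200, 0x9000, True)]
    for prev, cur, slow in accesses:
        observe(prev, cur, slow)
    print("likely pages per block:", max(counters, key=counters.get))   # 4
    ```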