Patents Examined by Victor W. Wang
  • Patent number: 8010764
    Abstract: A method and system for decreasing power consumption in memory arrays having usage-driven power management provides decreased power consumption in the memory array of a processing system. Per-page usage information is gathered on memory by a memory controller and periodically evaluated by software. The software distinguishes between more frequently accessed pages and less frequently accessed pages by analyzing the gathered usage information and periodically migrates physical memory pages in order to group less frequently accessed pages and more frequently accessed pages in separately power-managed memory ranks. When used in conjunction with a usage-driven power management mechanism, the ranks containing the less frequently accessed pages can enter deeper power-saving states and/or any power-saving state for longer periods.
    Type: Grant
    Filed: July 7, 2005
    Date of Patent: August 30, 2011
    Assignee: International Business Machines Corporation
    Inventors: Thomas Walter Keller, Jr., Charles R. Lefurgy, Hai Huang
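The grouping step this abstract describes can be illustrated with a minimal sketch. The function name, the two-rank split, and the page IDs are illustrative assumptions, not taken from the patent claims; a real implementation would migrate pages across many ranks under OS control.

```python
# Hypothetical sketch: partition pages into "hot" and "cold" ranks
# using per-page access counts gathered by a memory controller.

def group_pages_by_usage(access_counts, pages_per_rank):
    """Sort pages by access frequency and split them so that less
    frequently accessed pages land in a separately power-managed rank."""
    # Order page IDs from most- to least-frequently accessed.
    ordered = sorted(access_counts, key=access_counts.get, reverse=True)
    hot = ordered[:pages_per_rank]    # keep in an active rank
    cold = ordered[pages_per_rank:]   # candidates for deep power-saving ranks
    return hot, cold

counts = {"p0": 120, "p1": 3, "p2": 57, "p3": 0}
hot, cold = group_pages_by_usage(counts, pages_per_rank=2)
```

With the cold pages grouped into their own rank, that rank can be put into a deeper power-saving state without disturbing the hot pages.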
  • Patent number: 7991947
    Abstract: A multi-priority encoder includes a plurality of interconnected, single-priority encoders arranged in descending priority order. The multi-priority encoder includes circuitry for blocking a match output by a lower level single-priority encoder if a higher level single-priority encoder outputs a match output. Match data is received from a content addressable memory, and the priority encoder includes address encoding circuitry for outputting the address locations of each highest priority match line flagged by the highest priority indicator. Each single-priority encoder includes a highest priority indicator which has a plurality of indicator segments, each indicator segment being associated with a match line input.
    Type: Grant
    Filed: December 30, 2002
    Date of Patent: August 2, 2011
    Assignee: Micron Technology, Inc.
    Inventor: Zvi Regev
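The cascade of single-priority encoders with blocking can be modeled in a few lines. This is a behavioral sketch under illustrative assumptions (segment layout, address encoding as base plus local index), not the patent's circuit.

```python
def single_priority_encode(match_lines):
    """Return the index of the highest-priority (lowest-index) asserted
    match line, or None if no line matches."""
    for i, line in enumerate(match_lines):
        if line:
            return i
    return None

def multi_priority_encode(segments):
    """Cascade single-priority encoders in descending priority order:
    a match in a higher-priority segment blocks all lower segments."""
    for level, seg in enumerate(segments):
        idx = single_priority_encode(seg)
        if idx is not None:
            # Global address = segment base + local index (illustrative).
            return level * len(seg) + idx
    return None
```

Here `multi_priority_encode([[0, 0, 0, 0], [0, 1, 0, 1]])` returns address 5: the first segment has no match, so the second segment's highest-priority match wins.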
  • Patent number: 7958305
    Abstract: This invention is a system and method for managing one or more data storage networks using a new architecture. A method for handling logical to physical mapping is included in one embodiment with the new architecture. A method for handling errors is included in another embodiment with the new architecture.
    Type: Grant
    Filed: May 14, 2010
    Date of Patent: June 7, 2011
    Assignee: EMC Corporation
    Inventors: Fernando Oliveira, Bradford B. Glade, Jeffrey A. Brown, Peter J. McCann, David Harvey, James A. Wentworth, III, Walter M. Caritj, Matthew Waxman, Lee W. VanTine
  • Patent number: 7930514
    Abstract: A method, system, and computer program product for implementing a dual-addressable cache is provided. The method includes adding fields for indirect indices to each congruence class provided in a cache directory. The cache directory is indexed by primary addresses. In response to a request for a primary address based upon a known secondary address corresponding to the primary address, the method also includes generating an index for the secondary address, and inserting or updating one of the indirect indices into one of the fields for a congruence class relating to the secondary address. The indirect index is assigned a value of a virtual index corresponding to the primary address. The method further includes searching congruence classes of each of the indirect indices for the secondary address.
    Type: Grant
    Filed: February 9, 2005
    Date of Patent: April 19, 2011
    Assignee: International Business Machines Corporation
    Inventors: Norbert Hagspiel, Erwin Pfeffer, Bruce A. Wagar
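The indirect-index idea can be sketched as a toy directory. All names, the modulo index function, and the dict-based congruence classes are illustrative assumptions; the point is only that a secondary-address lookup searches just the classes named by the indirect indices rather than the whole directory.

```python
class DualAddressableDirectory:
    """Toy model of a cache directory indexed by primary address, with
    indirect-index fields that let entries be found from a secondary
    address as well."""
    def __init__(self, num_classes):
        self.num_classes = num_classes
        # congruence class -> list of (primary_addr, secondary_addr)
        self.classes = {i: [] for i in range(num_classes)}
        # congruence class -> indirect indices (classes where entries
        # reachable from this secondary-address class actually live)
        self.indirect = {i: set() for i in range(num_classes)}

    def index(self, addr):
        return addr % self.num_classes   # illustrative index function

    def insert(self, primary, secondary):
        p_idx = self.index(primary)
        s_idx = self.index(secondary)
        self.classes[p_idx].append((primary, secondary))
        # Record, in the secondary address's class, where the entry lives.
        self.indirect[s_idx].add(p_idx)

    def lookup_by_secondary(self, secondary):
        s_idx = self.index(secondary)
        # Search only the congruence classes named by the indirect indices.
        for p_idx in self.indirect[s_idx]:
            for primary, sec in self.classes[p_idx]:
                if sec == secondary:
                    return primary
        return None
```

A lookup by secondary address thus costs one index computation plus a scan of a few indicated classes, instead of a full directory search.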
  • Patent number: 7930467
    Abstract: A method of converting a hybrid hard disk drive (HDD) to a normal HDD when a system is powered on, depending on whether the total number of defective blocks in a non-volatile cache (NVC) exceeds a predetermined threshold. The method of converting an HDD having a normal hard disk and a non-volatile cache from a hybrid HDD to a normal HDD includes determining whether a mode conversion flag is enabled during a power-on period. When the mode conversion flag is enabled, the HDD operates as a normal HDD. When the mode conversion flag is disabled, it is determined whether an operating mode of the HDD is a normal mode or a hybrid mode. When the operating mode is the normal mode, the HDD operates as a normal HDD. When the HDD is in the hybrid mode, a determination is made as to whether the total number of defective blocks in the non-volatile cache is greater than a predetermined threshold.
    Type: Grant
    Filed: November 7, 2007
    Date of Patent: April 19, 2011
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hye-jeong Nam, Jae-sung Lee
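The power-on decision flow in this abstract reduces to a small function. The function name, string mode values, and parameters are illustrative stand-ins for firmware state.

```python
def select_hdd_mode(conversion_flag, current_mode, defective_blocks, threshold):
    """Power-on mode decision for a hybrid HDD, following the flow in
    the abstract above; names and constants are illustrative."""
    if conversion_flag:               # flag enabled: operate as normal HDD
        return "normal"
    if current_mode == "normal":      # already in normal mode
        return "normal"
    # Hybrid mode: convert if the NVC has too many defective blocks.
    if defective_blocks > threshold:
        return "normal"
    return "hybrid"
```

For example, a hybrid drive whose NVC defect count exceeds the threshold comes up in normal mode; one below the threshold stays hybrid.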
  • Patent number: 7925820
    Abstract: A nonvolatile semiconductor memory device and a program method are provided in an embodiment. Data is scanned to find the data bits to be selectively programmed. The found data bits are programmed simultaneously, a predetermined number at a time. Since data scanning and programming are conducted as a pipeline, the average time required for programming data is effectively shortened.
    Type: Grant
    Filed: December 5, 2006
    Date of Patent: April 12, 2011
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Jae-Woo Im
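The scan-then-program-in-groups flow can be modeled briefly. This sketch assumes a NAND-flash-style convention (cells move from erased 1 to programmed 0) and illustrative names; in hardware the scan of one group overlaps the programming of the previous group, which is the pipelining the abstract describes.

```python
def bits_to_program(data_bits):
    """Scan step: return the positions of bits that need programming
    (assumed here to be the 0 bits, NAND-style)."""
    return [i for i, b in enumerate(data_bits) if b == 0]

def program_in_groups(data_bits, group_size):
    """Split the scanned positions into fixed-size groups; each group
    would be programmed simultaneously."""
    targets = bits_to_program(data_bits)
    return [targets[i:i + group_size] for i in range(0, len(targets), group_size)]
```

Programming only the scanned positions, a group at a time, avoids spending program pulses on bits that are already in the erased state.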
  • Patent number: 7865667
    Abstract: In one embodiment, a processor is provided. The processor includes at least two cores, where each of the cores includes a first level cache memory. Each of the cores is multi-threaded. In another embodiment, each of the cores includes four threads. In another embodiment a crossbar is included. A plurality of cache bank memories in communication with the cores through the crossbar is provided. Each of the plurality of cache bank memories is in communication with a main memory interface. In another embodiment a buffer switch core in communication with each of the plurality of cache bank memories is also included. A server and a method for optimizing the utilization of a multithreaded processor core are also provided.
    Type: Grant
    Filed: March 14, 2007
    Date of Patent: January 4, 2011
    Assignee: Oracle America, Inc.
    Inventors: Leslie D. Kohn, Kunle A. Olukotun, Michael K. Wong
  • Patent number: 7853760
    Abstract: A method for managing a memory system for large data volumes includes providing a central memory management system comprising a memory management interface between applications and a memory of a programmed computer, maintaining a global priority list of data buffers allocated by the applications, storing decompressed data of the data buffers into a cache which is managed by the central memory management system using a separate priority list, and accessing the decompressed data of the data buffers in the cache.
    Type: Grant
    Filed: July 10, 2007
    Date of Patent: December 14, 2010
    Assignee: Siemens Corporation
    Inventors: Gianluca Paladini, Thomas Moeller
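The separately managed decompressed-data cache can be sketched with a simple priority order. This is a minimal stand-in: an LRU ordering plays the role of the patent's priority list, and the class name, capacity model, and `decompress` callback are illustrative assumptions.

```python
from collections import OrderedDict

class DecompressedCache:
    """Minimal sketch of a decompressed-data cache managed with its
    own priority list (modeled here as LRU order)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # buffer_id -> decompressed data

    def get(self, buffer_id, decompress):
        if buffer_id in self.entries:
            self.entries.move_to_end(buffer_id)   # raise priority on access
            return self.entries[buffer_id]
        data = decompress(buffer_id)              # decompress only on a miss
        self.entries[buffer_id] = data
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)      # evict lowest priority
        return data
```

Repeated accesses to a buffer then hit the cached decompressed copy instead of paying the decompression cost each time.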
  • Patent number: 7809888
    Abstract: A caching technique involves receiving a cache request to move data into a cache (or a particular cache level of a cache hierarchy), and generating a comparison between content of the data and content of other data already stored within the cache. The caching technique further involves providing a caching response based on the comparison between the content of the data and the content of the other data already stored within the cache. The caching response includes refraining from moving the data into the cache when the comparison indicates that the content of the data is already stored within the cache. The caching response includes moving the data into the cache when the comparison indicates that the content of the data is not already stored within the cache. Such a technique is capable of eliminating data redundancies within a cache (or within a particular cache level of a cache hierarchy).
    Type: Grant
    Filed: June 22, 2005
    Date of Patent: October 5, 2010
    Assignee: EMC Corporation
    Inventors: Roy Clark, John Harwood, James Theodore Compton
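The admit-or-refrain decision based on content comparison can be sketched with a digest map. Using SHA-256 as the content comparison and a flat dict as the cache are illustrative choices, not details from the patent.

```python
import hashlib

class ContentAwareCache:
    """Sketch of content-comparison caching: a digest of each block's
    content decides whether moving it into the cache is redundant."""
    def __init__(self):
        self.by_digest = {}

    def admit(self, data: bytes) -> bool:
        """Return True if the data was moved into the cache, or False
        if identical content was already cached and caching is skipped."""
        digest = hashlib.sha256(data).hexdigest()
        if digest in self.by_digest:
            return False                  # content already present: refrain
        self.by_digest[digest] = data     # content is new: move it in
        return True
```

Two requests carrying identical content then occupy one cache slot instead of two, which is the redundancy elimination the abstract describes.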
  • Patent number: 7802066
    Abstract: An efficient memory management method for handling large data volumes, comprising a memory management interface between a plurality of applications and a physical memory, determining a priority list of buffers accessed by the plurality of applications, providing efficient disk paging based on the priority list, ensuring sufficient physical memory is available, sharing managed data buffers among a plurality of applications, mapping and unmapping data buffers in virtual memory efficiently to overcome the limits of virtual address space.
    Type: Grant
    Filed: February 8, 2006
    Date of Patent: September 21, 2010
    Assignee: Siemens Medical Solutions USA, Inc.
    Inventors: Gianluca Paladini, Thomas Moeller
  • Patent number: 7739448
    Abstract: This invention is a system and method for managing one or more data storage networks using a new architecture. A method for handling logical to physical mapping is included in one embodiment with the new architecture. A method for handling errors is included in another embodiment with the new architecture.
    Type: Grant
    Filed: March 15, 2007
    Date of Patent: June 15, 2010
    Assignee: EMC Corporation
    Inventors: Fernando Oliveira, Bradford B. Glade, Jeffrey A. Brown, Peter J. McCann, David Harvey, James A. Wentworth, III, Walter M. Caritj, Matthew Waxman, Lee W. VanTine
  • Patent number: 7721061
    Abstract: An embodiment of a method of predicting response time for a storage request begins with a first step of a computing entity storing a training data set. The training data set comprises past performance observations for past storage requests of a storage array. Each past performance observation comprises an observed response time and a feature vector for a particular past storage request. The feature vector includes characteristics that are available external to the storage array. In a second step, the computing entity forms a response time forecaster from the training data set. In the third step, the computing entity applies the response time forecaster to a pending feature vector for a pending storage request to obtain a predicted response time for the pending storage request.
    Type: Grant
    Filed: June 22, 2005
    Date of Patent: May 18, 2010
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Terence P. Kelly, Ira Cohen, Moises Goldszmidt, Kimberly K. Keeton
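The train-then-predict flow can be illustrated with a deliberately simple model. A 1-nearest-neighbour forecaster stands in here for whatever model the method would actually fit, and the feature vector layout is an assumption.

```python
def train_forecaster(observations):
    """observations: list of (feature_vector, observed_response_time)
    pairs, i.e. the training data set of past performance observations.
    Returns a 1-nearest-neighbour predictor (an illustrative stand-in)."""
    def predict(pending):
        def dist(obs):
            vec, _ = obs
            return sum((a - b) ** 2 for a, b in zip(vec, pending))
        # Predict the response time of the closest past observation.
        _, response = min(observations, key=dist)
        return response
    return predict

# Illustrative features: (request size in KB, is_random_access)
history = [((4, 0), 1.2), ((64, 1), 8.5)]
forecast = train_forecaster(history)
```

Applying `forecast` to the feature vector of a pending request yields its predicted response time, mirroring the third step of the method.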
  • Patent number: 7617367
    Abstract: A memory system including a first memory subsystem having a buffer device with a first port and a second port, one or more memory devices coupled to the buffer device via the second port, and a first two-on-one link for coupling to a memory controller for providing communication between the buffer device and the memory controller. The first two-on-one link is coupled to the first port of the buffer device. The first memory subsystem is configured to transfer data between at least one memory device of the one or more memory devices and the memory controller via the buffer device. The first two-on-one link includes up to two transceivers connected to a single link, with at least one of the up to two transceivers consisting of any one of two or more transmitters for transmitting signals or two or more receivers for receiving signals.
    Type: Grant
    Filed: June 27, 2006
    Date of Patent: November 10, 2009
    Assignee: International Business Machines Corporation
    Inventors: John E. Campbell, Kevin C. Gower
  • Patent number: 7610465
    Abstract: Method and related apparatus for data migration of a disk array. While striping and migrating data of a source disk of the disk array, data stripes are grouped into different zones; after the data stripes of a given zone are completely written to the disks of the disk array, the data stripes of the next zone are written to the disks of the disk array and to the given zone. Because the data stripes of the next zone are distributed across various disks, only some of them overwrite the data stripes of the given zone. Therefore, the next zone can contain more data stripes than the given zone while maintaining migration integrity. In addition, with zones containing an increasing number of data stripes, migration progress can be managed easily and efficiently, achieving better data throughput.
    Type: Grant
    Filed: July 11, 2005
    Date of Patent: October 27, 2009
    Assignee: VIA Technologies Inc.
    Inventors: Guoyu Hu, Xingliang Zou
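The growing-zone schedule can be sketched as arithmetic. The growth factor below (each zone roughly `num_disks` times the previous one) is an illustrative reading of the abstract, since only about one disk's share of the next zone lands back on the already-migrated region.

```python
def migration_zone_sizes(total_stripes, first_zone, num_disks):
    """Illustrative schedule of growing migration zones: once a zone
    is safely written across all disks, the next zone can be roughly
    num_disks times larger without risking unmigrated data."""
    sizes, done = [], 0
    zone = first_zone
    while done < total_stripes:
        zone = min(zone, total_stripes - done)  # clamp the final zone
        sizes.append(zone)
        done += zone
        zone *= num_disks                       # next zone can safely grow
    return sizes
```

For a 100-stripe migration over 4 disks starting with a 2-stripe zone, the schedule is `[2, 8, 32, 58]`: a few small, cautious zones followed by large, efficient ones.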
  • Patent number: 7603528
    Abstract: Verification operations are utilized to effectively verify multiple associated write operations. A verification operation may be initiated after the issuance of a plurality of write operations that initiate the storage of data to a memory storage device, and may be configured to verify only a subset of the data written to the memory storage device by the plurality of write operations. As a result, verification operations are not required to be performed after each write operation, and consequently, the number of verification operations, and thus the processing and communication bandwidth consumed thereby, can be substantially reduced.
    Type: Grant
    Filed: October 8, 2004
    Date of Patent: October 13, 2009
    Assignee: International Business Machines Corporation
    Inventors: William Hugh Cochran, William Paul Hovis, Paul Rudrud
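The batched-verification idea can be shown with a small sketch. The choice to verify only the last address of each batch, and the dict standing in for the memory device, are illustrative assumptions; the abstract says only that a subset of the written data is verified.

```python
def write_with_batched_verify(device, writes, verify_every):
    """Issue writes and verify only one address per batch of
    `verify_every` writes, instead of read-back-verifying every write.
    `device` is a dict standing in for a memory storage device."""
    verified = []
    for i, (addr, data) in enumerate(writes, 1):
        device[addr] = data
        if i % verify_every == 0:          # one verification per batch
            assert device[addr] == data    # spot-check the batch
            verified.append(addr)
    return verified
```

With `verify_every=2`, four writes trigger only two verification reads, halving the verification bandwidth in this toy model.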
  • Patent number: 7577794
    Abstract: Methods and apparatus for reducing the amount of latency involved when accessing, by a remote device, data residing in a cache of a processor are provided. For some embodiments, virtual channels may be utilized to conduct request/response transactions between the remote device and processor that satisfy a set of associated coherency rules.
    Type: Grant
    Filed: October 8, 2004
    Date of Patent: August 18, 2009
    Assignee: International Business Machines Corporation
    Inventors: Bruce L. Beukema, Russell D. Hoover, Jon K. Kriegel, Eric O. Mejdrich, Sandra S. Woodward
  • Patent number: 7526611
    Abstract: Exemplary embodiments include a multiprocessor system including: a plurality of processors in operable communication with an address manager and a memory controller; and a unified cache in operable communication with the address manager, wherein the unified cache includes: a plurality of cache addresses; a cache data corresponding to each cache address; a data mask corresponding to each cache data; a plurality of cache agents corresponding to each cache address; and a cache state corresponding to each cache agent.
    Type: Grant
    Filed: March 22, 2006
    Date of Patent: April 28, 2009
    Assignee: International Business Machines Corporation
    Inventors: Anh-Tuan Nguyen Hoang, Christopher Tung Phan
  • Patent number: 7475190
    Abstract: Methods for quickly accessing data residing in a cache of one processor, by another processor, while avoiding lengthy accesses to main memory are provided. A portion of the cache may be placed in a lock set mode by the processor in which it resides. While in the lock set mode, this portion of the cache may be accessed directly by another processor without lengthy “backing” writes of the accessed data to main memory.
    Type: Grant
    Filed: October 8, 2004
    Date of Patent: January 6, 2009
    Assignee: International Business Machines Corporation
    Inventors: Russell D. Hoover, Eric O. Mejdrich, Sandra S. Woodward
  • Patent number: 7467260
    Abstract: An apparatus and method are disclosed for flushing a cache in a computing system. In a multinode computing system a cache in a first node may contain modified data in an address space of a second node. The cache in the first node must be purged prior to shutting down the first node. The computing system uses a random class replacement scheme for the cache. A cache flush routine sets a cache flush mode in a class replace select mechanism, overriding the random class replacement scheme. With the random class replacement scheme overridden, a minimum number of fetches will flush all the cache lines in the cache, each fetch loading the cache with a cache line not already in the cache. No additional delay penalty is incurred in a critical path through which fetches and stores to the cache must pass.
    Type: Grant
    Filed: October 8, 2004
    Date of Patent: December 16, 2008
    Assignee: International Business Machines Corporation
    Inventors: Duane Arlyn Averill, John Michael Borkenhagen, Philip Rogers Hillier, III
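The override of random replacement with a deterministic choice can be modeled with a toy set-associative cache. The per-set round-robin pointer below is an illustrative deterministic scheme; the patent only requires that the override make a minimum number of fetches evict every resident line.

```python
import random

class FlushableCache:
    """Toy set-associative cache: the victim way is random in normal
    operation, but a flush mode overrides it with a per-set round-robin
    counter, so sets * ways fetches of lines not already cached
    deterministically evict every resident line."""
    def __init__(self, sets, ways):
        self.sets, self.ways = sets, ways
        self.lines = [[None] * ways for _ in range(sets)]
        self.flush_mode = False
        self._rr = [0] * sets              # round-robin pointer per set

    def fetch(self, tag):
        s = tag % self.sets
        if tag in self.lines[s]:
            return                         # already cached
        if self.flush_mode:
            way = self._rr[s]              # deterministic victim choice
            self._rr[s] = (way + 1) % self.ways
        else:
            way = random.randrange(self.ways)
        self.lines[s][way] = tag

    def flush(self, fresh_tags):
        """Flush by fetching one fresh line per (set, way) slot."""
        self.flush_mode = True
        for tag in fresh_tags:
            self.fetch(tag)
        self.flush_mode = False
```

Under random replacement, repeated fetches may keep evicting the same way, so a flush cannot be bounded; with the override, exactly `sets * ways` fresh fetches suffice.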
  • Patent number: 7401190
    Abstract: Methods and systems for operating computing devices are described. In one embodiment, a small amount of static RAM (SRAM) is incorporated into an automotive computing device. The SRAM is battery-backed to provide a non-volatile memory space in which critical data, e.g. the object store, can be maintained in the event of a power loss.
    Type: Grant
    Filed: November 16, 2005
    Date of Patent: July 15, 2008
    Assignee: Microsoft Corporation
    Inventors: Richard Dennis Beckert, Sharon Drasnin, Ronald Otto Radko