Patents Examined by Arpan Savla
  • Patent number: 7783852
    Abstract: Allocation of memory is optimized across multiple pools of memory, based on minimizing the time it takes to successfully retrieve a given data item from each of the multiple pools. First data is generated that indicates a hit rate per pool size for each of multiple memory pools. In an embodiment, the generating step includes continuously monitoring attempts to access, or retrieve a data item from, each of the memory pools. The first data is converted to second data that accounts for a cost of a miss with respect to each of the memory pools. In an embodiment, the second data accounts for the cost of a miss in terms of time. How much of the memory to allocate to each of the memory pools is determined, based on the second data. In an embodiment, the steps of converting and determining are automatically performed, on a periodic basis.
    Type: Grant
    Filed: December 23, 2003
    Date of Patent: August 24, 2010
    Assignee: Oracle International Corporation
    Inventors: Tirthankar Lahiri, Poojan Kumar, Brian Hirano, Arvind Nithrakashyap, Kant Patel, Kiran Goyal, Juan R. Loaiza
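    The abstract above describes sizing multiple memory pools by weighting per-pool hit-rate curves ("first data") with the time cost of a miss ("second data"). A minimal sketch of that idea follows; the hit-rate curves, miss costs, and pool names are invented placeholders, not values or identifiers from the patent.
```python
# Greedy pool-sizing sketch: hand each chunk of memory to the pool whose
# expected miss time drops the most. All numbers are made-up placeholders.

def hit_rate(pool, size_mb):
    # Hypothetical diminishing-returns curve per pool.
    saturation = {"buffer_cache": 800, "shared_pool": 300, "log_buffer": 50}[pool]
    return size_mb / (size_mb + saturation)

MISS_COST_SEC = {"buffer_cache": 0.004, "shared_pool": 0.010, "log_buffer": 0.001}
ACCESSES_PER_SEC = {"buffer_cache": 5000, "shared_pool": 1200, "log_buffer": 300}

def expected_miss_time(pool, size_mb):
    # "Second data": hit rate per size converted into time lost to misses.
    return ACCESSES_PER_SEC[pool] * (1 - hit_rate(pool, size_mb)) * MISS_COST_SEC[pool]

def allocate(total_mb, chunk_mb=16):
    sizes = {p: 0 for p in MISS_COST_SEC}
    for _ in range(total_mb // chunk_mb):
        # Pick the pool where one more chunk saves the most expected miss time.
        best = max(sizes, key=lambda p: expected_miss_time(p, sizes[p])
                                        - expected_miss_time(p, sizes[p] + chunk_mb))
        sizes[best] += chunk_mb
    return sizes

print(allocate(total_mb=2048))
```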
  • Patent number: 7743205
    Abstract: A system and method for use in an automated data storage cartridge library defines cartridges for use with an external host computer (“open” cartridges), and cartridges for use only internal to the library (“closed” cartridges). Cartridges may be “virtualized” by storing data from them on disk or closed cartridges, and then “realized” by writing data to physical cartridges. Virtual cartridges may be logically exported from one library to another. When new cartridges are introduced to the library, they may be designated with one of multiple designations or uses.
    Type: Grant
    Filed: September 22, 2004
    Date of Patent: June 22, 2010
    Assignee: Quantum Corporation
    Inventors: Barry Massey, Don Doerner, Stephen Moore, John Rockenfeller, Jeff Leuschner, Doug Burling, Roderick B. Wideman
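    A rough data-model sketch of the open/closed and virtual/realized distinctions described above; the class names, fields, and export method are illustrative assumptions, not terms from the patent.
```python
# Illustrative cartridge model: "open" cartridges are usable by an external
# host, "closed" cartridges only inside the library. A virtual cartridge's
# data lives on disk (or closed cartridges) until it is realized onto a
# physical cartridge. Names and fields are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Cartridge:
    barcode: str
    designation: str = "open"      # "open" or "closed"
    is_virtual: bool = True        # data held on disk until realized
    backing_store: str = "disk"    # where the data currently resides

    def realize(self, physical_slot: str) -> None:
        """Write the virtual cartridge's data to a physical cartridge."""
        if not self.is_virtual:
            raise ValueError("already realized")
        self.is_virtual = False
        self.backing_store = physical_slot

@dataclass
class Library:
    name: str
    cartridges: dict = field(default_factory=dict)

    def logical_export(self, barcode: str, other: "Library") -> None:
        """Logically move a virtual cartridge to another library."""
        other.cartridges[barcode] = self.cartridges.pop(barcode)

lib_a, lib_b = Library("A"), Library("B")
lib_a.cartridges["X001"] = Cartridge("X001", designation="closed")
lib_a.logical_export("X001", lib_b)
lib_b.cartridges["X001"].realize(physical_slot="slot-17")
```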
  • Patent number: 7689771
    Abstract: According to one embodiment, a method of coherency management in a data processing system includes holding a cache line in an upper level cache memory in an exclusive ownership coherency state and thereafter removing the cache line from the upper level cache memory and transmitting a castout request for the cache line from the upper level cache memory to a lower level cache memory. The castout request includes an indication of a shared ownership coherency state. In response to the castout request, the cache line is placed in the lower level cache memory in a coherency state determined in accordance with the castout request.
    Type: Grant
    Filed: September 19, 2006
    Date of Patent: March 30, 2010
    Assignee: International Business Machines Corporation
    Inventors: James S. Fields, Jr., Guy L. Guthrie, William J. Starke, Derek E. Williams
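    A toy model of the castout flow in the abstract above: an upper-level line held in an exclusive-ownership state is evicted, and the castout request carries a shared-ownership indication that determines the state installed in the lower-level cache. The two-level layout and state names are simplified assumptions, not the patent's exact protocol.
```python
# Simplified castout sketch: the upper-level cache evicts a line it holds in
# an exclusive-ownership state, and the castout request tells the lower-level
# cache to install the line in a shared-ownership state instead.
# State names are generic MESI-style placeholders.

upper_cache = {0x1000: ("data_A", "EXCLUSIVE")}   # addr -> (data, coherency state)
lower_cache = {}

def castout(addr, requested_state="SHARED"):
    data, _old_state = upper_cache.pop(addr)       # remove from the upper level
    request = {"addr": addr, "data": data, "state": requested_state}
    # Lower level places the line in the state indicated by the castout request.
    lower_cache[request["addr"]] = (request["data"], request["state"])

castout(0x1000)
print(lower_cache)   # {4096: ('data_A', 'SHARED')}
```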
  • Patent number: 7647470
    Abstract: A memory device and a controlling method for a nonvolatile memory are provided by which, when a file management system that tends to use lower logical addresses frequently (such as MS-DOS) is adopted, the physical blocks of a flash memory are used evenly, thereby extending the life of the flash memory.
    Type: Grant
    Filed: August 19, 2005
    Date of Patent: January 12, 2010
    Assignee: Sony Corporation
    Inventors: Junko Sasaki, Kenichi Nakanishi, Nobuhiro Kaneko
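    A minimal wear-leveling sketch in the spirit of the abstract above: when a frequently rewritten low logical address is updated, the data goes to the least-erased free physical block so wear spreads across the flash. The block-selection policy shown is a generic one, not the patent's specific algorithm.
```python
# Generic wear-leveling sketch: map logical blocks to physical blocks and
# always write updated data to the free physical block with the fewest erase
# cycles, so heavily used logical addresses don't wear out a single block.

erase_count = {pb: 0 for pb in range(8)}     # physical block -> erase cycles
free_blocks = set(range(8))
logical_to_physical = {}

def write_logical(lb):
    old_pb = logical_to_physical.get(lb)
    # Choose the least-worn free physical block for the new copy.
    new_pb = min(free_blocks, key=lambda pb: erase_count[pb])
    free_blocks.remove(new_pb)
    logical_to_physical[lb] = new_pb
    if old_pb is not None:                   # retire and erase the old copy
        erase_count[old_pb] += 1
        free_blocks.add(old_pb)

for _ in range(20):          # logical block 0 is rewritten constantly (FAT-like)
    write_logical(0)
print(erase_count)           # the erase cycles end up spread over several blocks
```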
  • Patent number: 7620777
    Abstract: A method, apparatus, and computer instructions for providing hardware assistance to prefetch data during execution of code by a processor in a data processing system. In response to loading of an instruction in the code into a cache, the processor unit determines whether metadata for a prefetch is present for the instruction. If metadata is present for the instruction, data is selectively prefetched, from within a data structure using the metadata, into the cache in the processor.
    Type: Grant
    Filed: April 28, 2009
    Date of Patent: November 17, 2009
    Assignee: International Business Machines Corporation
    Inventors: Robert Tod Dimpsey, Frank Eliot Levine, Robert John Urquhart
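    A software analogue of the hardware-assisted prefetch described above: when an instruction enters the (modeled) instruction cache, a metadata table is consulted, and if a prefetch hint exists the referenced data is pulled into the data cache. The metadata format and cache structure are illustrative assumptions.
```python
# Sketch of metadata-driven prefetch: loading an instruction into the cache
# triggers a metadata lookup; if prefetch metadata exists, the data it points
# at (e.g. fields of a structure) is fetched into the data cache ahead of use.
# The metadata format here is invented for illustration.

prefetch_metadata = {              # instruction address -> data addrs to prefetch
    0x400: [0x9000, 0x9040],       # hypothetical: a loop walking a structure
}
memory = {0x9000: "node.next", 0x9040: "node.value"}
icache, dcache = {}, {}

def load_instruction(iaddr, encoding):
    icache[iaddr] = encoding                       # instruction enters the cache
    meta = prefetch_metadata.get(iaddr)            # is prefetch metadata present?
    if meta is not None:
        for daddr in meta:                         # selectively prefetch the data
            dcache[daddr] = memory[daddr]

load_instruction(0x400, "ld r3, 0(r4)")
print(dcache)   # data arrives in the cache before the code actually needs it
```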
  • Patent number: 7613889
    Abstract: Provided are a method, system, and program for migrating source data to target data. A write request is received to write application data to source data not yet migrated to the target data. Information is generated for the write request indicating the source data to which the application data is written. The application data is written to the source data. A request is received to migrate source data to target data, and an indication to retry the migration request is returned in response to determining that the requested source data overlaps source data indicated in the generated information for a write request.
    Type: Grant
    Filed: June 10, 2004
    Date of Patent: November 3, 2009
    Assignee: International Business Machines Corporation
    Inventors: Christopher John Stakutis, William Robert Haselton
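    A small sketch of the retry handshake in the abstract above: writes to not-yet-migrated source data are recorded, and a migration request that overlaps a recorded write gets a "retry" answer instead of proceeding. The data structures and return values are illustrative assumptions.
```python
# Migration/write coordination sketch: writes to unmigrated source ranges are
# recorded; a migration request overlapping a recorded write is answered with
# "retry" rather than migrating in-flux data. Ranges are (start, end) byte
# offsets; the bookkeeping shown is an assumption for illustration.

in_flight_writes = []        # generated information about writes to source data
migrated = set()             # source ranges already copied to the target

def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def write(source_range, data):
    if source_range not in migrated:
        in_flight_writes.append(source_range)    # record info for this write
    # ... apply the application data to the source here ...
    return "written"

def migrate(source_range):
    if any(overlaps(source_range, w) for w in in_flight_writes):
        return "retry"                           # ask the caller to try again
    migrated.add(source_range)
    return "migrated"

write((0, 4096), b"app data")
print(migrate((0, 4096)))      # -> "retry": overlaps a recorded write
print(migrate((8192, 12288)))  # -> "migrated"
```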
  • Patent number: 7610432
    Abstract: A tape library apparatus comprises a plurality of FC drives. A host computer and a fiber channel switch portion are connected with an optical fiber cable through respective fiber channel interfaces. The fiber channel switch portion and the FC drives are connected with respective optical fiber cables through respective fiber channel interfaces. A controlling portion and the FC drives are connected with respective RS-232C cables. Alias WWNNs and alias WWPNs of the FC drives are assigned by the controlling portion through the respective RS-232C cables. Data reproduced by the FC drives and data supplied thereto are transmitted to and received from the host computer through, for example, the respective optical fiber cables.
    Type: Grant
    Filed: March 2, 2004
    Date of Patent: October 27, 2009
    Assignee: Sony Corporation
    Inventor: Yasunori Azuma
  • Patent number: 7565504
    Abstract: The disclosed embodiments may relate to memory window access, which may include a memory window and protection domain associated with a process. The memory window access setting or bit may also allow a plurality of memory windows to be associated with a protection domain for a process. The memory window access setting or bit may allow access to the memory window to be for the queue pairs in a certain protection domain or a designated queue pair.
    Type: Grant
    Filed: March 27, 2003
    Date of Patent: July 21, 2009
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: David J. Garcia, Jeffrey R. Hilland, Paul R. Culley
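    A rough access-check sketch for the memory-window idea above: a window is tied to a protection domain, and an access setting decides whether any queue pair in that domain may use the window or only one designated queue pair. The field names and class are hypothetical.
```python
# Memory-window access sketch: the access setting ("bit") chooses between
# protection-domain scope (any queue pair in the window's protection domain)
# and a single designated queue pair. Names and fields are illustrative only.
from dataclasses import dataclass

@dataclass
class MemoryWindow:
    protection_domain: int
    pd_scoped: bool          # True: any QP in the PD may access the window
    bound_qp: int = -1       # used when pd_scoped is False

def may_access(window: MemoryWindow, qp_num: int, qp_pd: int) -> bool:
    if window.pd_scoped:
        return qp_pd == window.protection_domain
    return qp_num == window.bound_qp

w = MemoryWindow(protection_domain=7, pd_scoped=True)
print(may_access(w, qp_num=12, qp_pd=7))   # True: same protection domain
print(may_access(w, qp_num=12, qp_pd=3))   # False: different domain
```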
  • Patent number: 7555629
    Abstract: A memory card comprises a memory controller connected to a non-volatile memory module. The memory controller comprises a first circuit adapted to convert a first external address into a first internal address using a program stored in an internal memory. The memory controller further comprises a hardware accelerator adapted to generate a second internal address based on the first internal and external addresses.
    Type: Grant
    Filed: December 30, 2005
    Date of Patent: June 30, 2009
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Kyong-Ae Kim, Jong-Yeol Park, Dong-Hee Lee
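    A toy version of the two-stage translation described above: firmware converts the first external address into a first internal address, and a hardware accelerator derives a second internal address from the first internal and external addresses. The mapping arithmetic is invented purely to show the data flow.
```python
# Two-stage address translation sketch. The formulas are placeholders; only
# the division of labor (firmware first step, accelerator second step)
# mirrors the abstract.

BLOCK_SIZE = 512
logical_to_physical_block = {0: 40, 1: 7, 2: 19}   # firmware-managed table

def firmware_convert(external_addr):
    """First circuit: a program in internal memory maps external -> internal."""
    logical_block = external_addr // BLOCK_SIZE
    return logical_to_physical_block[logical_block] * BLOCK_SIZE

def accelerator_convert(first_internal, external_addr):
    """Hardware accelerator: derive the second internal address from both."""
    return first_internal + (external_addr % BLOCK_SIZE)   # add the page offset

ext = 1036                                   # some external (host) address
internal1 = firmware_convert(ext)
internal2 = accelerator_convert(internal1, ext)
print(hex(internal1), hex(internal2))
```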
  • Patent number: 7526616
    Abstract: A method, apparatus, and computer instructions for providing hardware assistance to prefetch data during execution of code by a process or in the data processing system. In response to loading of an instruction in the code into a cache, a determination is made, by the processor unit, as to whether metadata for a prefetch is present for the instruction. In response to metadata being present for the instruction, selectively prefetching data, from within a data structure using the metadata, into the cache in a processor.
    Type: Grant
    Filed: March 22, 2004
    Date of Patent: April 28, 2009
    Assignee: International Business Machines Corporation
    Inventors: Robert Tod Dimpsey, Frank Eliot Levine, Robert John Urquhart
  • Patent number: 7523268
    Abstract: A cache, system and method for reducing the number of rejected snoop requests. An incoming snoop request is entered in the first available latch in a pipeline of latches in a stall/reorder unit if the stall/reorder unit is not full. The entered snoop request is dispatched to a selector upon entering a bottom latch in the pipeline. The stall/reorder unit is not informed as to whether the dispatched snoop request is accepted by an arbitration mechanism for several clock cycles after the dispatch occurred. A copy of the dispatched snoop request is stored in a top latch in an overrun pipeline of latches in the first unit upon dispatching the snoop request. By maintaining information about the snoop request, the snoop request may be dispatched again to the selector in case the dispatched snoop request was rejected thereby increasing the chance that the snoop request will ultimately be accepted.
    Type: Grant
    Filed: May 4, 2008
    Date of Patent: April 21, 2009
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, William J. Starke, Derek E. Williams
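    A simplified model of the dispatch-and-keep-a-copy behavior described above: a snoop request walks a pipeline of latches, is dispatched from the bottom latch, and a copy is parked in an overrun pipeline so it can be dispatched again if the arbiter rejects it several cycles later. The depths and accept/reject signalling are assumptions.
```python
# Stall/reorder sketch with an overrun copy: the dispatched snoop request is
# remembered so that a late rejection (the accept/deny answer arrives cycles
# after dispatch) triggers a re-dispatch instead of dropping the request.
# Pipeline depth and the verdict timing are invented for illustration.
from collections import deque

PIPE_DEPTH = 4
pipeline = deque()       # stall/reorder latches (bottom = right end)
overrun = deque()        # copies of dispatched requests awaiting accept/deny

def enter(snoop):
    if len(pipeline) < PIPE_DEPTH:
        pipeline.appendleft(snoop)      # first available latch
        return True
    return False                        # unit full: caller must retry later

def dispatch():
    req = pipeline.pop()                # dispatched from the bottom latch
    overrun.appendleft(req)             # keep a copy; the verdict comes later
    return req

def verdict(req, accepted):
    overrun.remove(req)
    if not accepted:                    # rejected cycles after dispatch:
        pipeline.appendleft(req)        # re-dispatch using the saved copy

for s in ["snoopA", "snoopB", "snoopC"]:
    enter(s)
r = dispatch()
verdict(r, accepted=False)              # pretend the arbiter chose a processor request
print("pipeline:", list(pipeline), "unconfirmed:", list(overrun))
# the rejected snoop is back in the pipeline for another dispatch attempt
```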
  • Patent number: 7493446
    Abstract: A method and processor system that substantially eliminates data bus operations when completing updates of an entire cache line with a full store queue entry. The store queue within a processor chip is designed with a series of AND gates connecting individual bits of the byte enable bits of a corresponding entry. The AND output is fed to the STQ controller and signals when the entry is full. When full entries are selected for dispatch to the RC machines, the RC machine is signaled that the entry updates the entire cache line. The RC machine obtains write permission to the line, and then the RC machine overwrites the entire cache line. Because the entire cache line is overwritten, the data of the cache line is not retrieved when the request for the cache line misses at the cache or when data goes stale before write permission is obtained by the RC machine.
    Type: Grant
    Filed: February 21, 2008
    Date of Patent: February 17, 2009
    Assignee: International Business Machines Corporation
    Inventors: Ravi Kumar Arimilli, Guy Lynn Guthrie, Hugh Shen, Derek Edward Williams
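    A bit-level sketch of the full-entry check in the abstract above: ANDing all byte-enable bits of a store queue entry tells the controller the whole cache line will be overwritten, so the line's old data need not be fetched. The line size and flow are simplified assumptions.
```python
# Full store-queue entry check: if every byte-enable bit of the entry is set,
# the entire cache line will be overwritten, so the RC machine only needs
# write permission and can skip fetching the line's existing data.
# A 128-byte line is assumed for illustration.

LINE_BYTES = 128
FULL_MASK = (1 << LINE_BYTES) - 1

def entry_is_full(byte_enables: int) -> bool:
    # Hardware ANDs the individual bits; in software that's a mask compare.
    return (byte_enables & FULL_MASK) == FULL_MASK

def dispatch_store(byte_enables: int) -> str:
    if entry_is_full(byte_enables):
        return "get write permission, overwrite the whole line, no data fetch"
    return "fetch the line data, merge the enabled bytes, then write back"

partial = (1 << 64) - 1            # only the first 64 bytes written
print(dispatch_store(partial))
print(dispatch_store(FULL_MASK))   # gathered stores cover every byte
```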
  • Patent number: 7484046
    Abstract: A cache, system and method for reducing the number of rejected snoop requests. A “stall/reorder unit” in a cache receives a snoop request from an interconnect. Information, such as the address, of the snoop request is stored in a queue of the stall/reorder unit. The stall/reorder unit forwards the snoop request to a selector which also receives a request from a processor. An arbitration mechanism selects either the snoop request or the request from the processor. If the snoop request is denied by the arbitration mechanism, information, e.g., address, about the snoop request may be maintained in the stall/reorder unit. The request may be later resent to the selector. This process may be repeated up to “n” clock cycles. By providing the snoop request additional opportunities (n clock cycles) to be accepted by the arbitration mechanism, fewer snoop requests may ultimately be denied.
    Type: Grant
    Filed: December 5, 2007
    Date of Patent: January 27, 2009
    Assignee: International Business Machines Corporation
    Inventors: Benjiman L. Goodman, Guy L. Guthrie, William J. Starke, Jeffrey A. Stuecheli, Derek E. Williams
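    A tiny simulation of the "n extra chances" idea above: instead of rejecting a snoop request on its first arbitration loss, the stall/reorder unit keeps it and re-presents it for up to n clock cycles, which sharply reduces the number of requests that are finally denied. The per-cycle win probability is an arbitrary stand-in, not a figure from the patent.
```python
# Simulation: giving each snoop request up to n re-tries inside the
# stall/reorder unit versus denying it on the first arbitration loss.
# The 40% per-cycle win probability is an arbitrary assumption.
import random

def reject_rate(n_retries, p_win=0.4, trials=10_000, seed=42):
    rng = random.Random(seed)
    losses = 0
    for _ in range(trials):
        if not any(rng.random() < p_win for _ in range(1 + n_retries)):
            losses += 1                      # denied on every opportunity
    return losses / trials

print("reject rate, no retries:", reject_rate(0))   # ~0.60
print("reject rate, n = 4     :", reject_rate(4))   # ~0.60**5, about 0.08
```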
  • Patent number: 7457920
    Abstract: The proposed system and associated algorithm, when implemented, improve processor cache miss rates and overall cache efficiency in multi-core environments in which multiple CPUs share a single cache structure (as an example). Cache efficiency is improved by tracking CPU core loading patterns such as miss rate and minimum cache-line load threshold levels. Using this information along with an existing cache eviction method such as LRU determines which cache line from which CPU is evicted from the shared cache when a capacity conflict arises. This methodology allows shared cache entries to be dynamically allocated to each core within the socket based on that core's frequency of shared cache usage.
    Type: Grant
    Filed: January 26, 2008
    Date of Patent: November 25, 2008
    Assignee: International Business Machines Corporation
    Inventors: Marcus Lathan Kornegay, Ngan Ngoc Pham
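    A rough sketch of the eviction choice described above: per-core miss rates and minimum line-count thresholds are consulted alongside LRU so a busy core is not crowded out of the shared cache by a quieter one. The thresholds, miss rates, and exact victim rule are illustrative assumptions, not the patent's algorithm.
```python
# Shared-cache eviction sketch: combine LRU order with per-core occupancy
# thresholds and miss rates so the victim comes from a core that is over its
# minimum allocation and is using the shared cache the least. All numbers and
# the exact victim rule are made up for illustration.
from collections import OrderedDict

MIN_LINES = {"core0": 2, "core1": 2}          # minimum cache-line thresholds
miss_rate = {"core0": 0.30, "core1": 0.05}    # tracked loading pattern

cache = OrderedDict()                          # addr -> owning core, in LRU order

def occupancy(core):
    return sum(1 for owner in cache.values() if owner == core)

def choose_victim():
    # Prefer a line whose owner is above its minimum threshold and has the
    # lower miss rate (i.e. needs the shared cache less); fall back to LRU.
    candidates = [a for a, c in cache.items() if occupancy(c) > MIN_LINES[c]]
    pool = candidates or list(cache)
    return min(pool, key=lambda a: (miss_rate[cache[a]], list(cache).index(a)))

def access(addr, core, capacity=4):
    if addr in cache:
        cache.move_to_end(addr)
        return
    if len(cache) >= capacity:
        del cache[choose_victim()]
    cache[addr] = core

for a, c in [(1, "core0"), (2, "core0"), (3, "core1"), (4, "core1"), (5, "core0")]:
    access(a, c)
print(dict(cache))   # a core1 line was evicted: core1 misses far less often
```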
  • Patent number: 7437521
    Abstract: A method and apparatus to provide specifiable ordering between and among vector and scalar operations within a single streaming processor (SSP) via a local synchronization (Lsync) instruction that operates within a relaxed memory consistency model. Various aspects of that relaxed memory consistency model are described. Further, a combined memory synchronization and barrier synchronization (Msync) for a multistreaming processor (MSP) system is described. Also described is a global synchronization (Gsync) instruction that provides synchronization even beyond a single MSP system. Advantageously, the pipeline or queue of pending memory requests does not need to be drained before the synchronization operation, nor is it required to refrain from determining addresses for and inserting subsequent memory accesses into the pipeline.
    Type: Grant
    Filed: August 18, 2003
    Date of Patent: October 14, 2008
    Assignee: Cray Inc.
    Inventors: Steven L. Scott, Gregory J. Faanes, Brick Stephenson, William T. Moore, Jr., James R. Kohn
  • Patent number: 7386682
    Abstract: A cache, system and method for reducing the number of rejected snoop requests. An incoming snoop request is entered in the first available latch in a pipeline of latches in a stall/reorder unit if the stall/reorder unit is not full. The entered snoop request is dispatched to a selector upon entering a bottom latch in the pipeline. The stall/reorder unit is not informed as to whether the dispatched snoop request is accepted by an arbitration mechanism for several clock cycles after the dispatch occurred. A copy of the dispatched snoop request is stored in a top latch in an overrun pipeline of latches in the first unit upon dispatching the snoop request. By maintaining information about the snoop request, the snoop request may be dispatched again to the selector in case the dispatched snoop request was rejected thereby increasing the chance that the snoop request will ultimately be accepted.
    Type: Grant
    Filed: February 11, 2005
    Date of Patent: June 10, 2008
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, William J. Starke, Derek E. Williams
  • Patent number: 7386681
    Abstract: A cache, system and method for reducing the number of rejected snoop requests. A “stall/reorder unit” in a cache receives a snoop request from an interconnect. The snoop request is entered in the first available latch of the stall/reorder unit unless the stall/reorder unit is full, in which case the new snoop request is transmitted to a second unit configured to transmit a retry request so that the new snoop request is resent. Snoop requests have a higher priority than requests from processors, and the arbitration mechanism selects snoop requests over processor requests unless the arbitration mechanism issues a “stall request” to the stall/reorder unit. Because snoop requests have a higher priority than processor requests, fewer snoop requests are rejected. By having the arbitration mechanism issue a stall request, the processor will not be starved.
    Type: Grant
    Filed: February 11, 2005
    Date of Patent: June 10, 2008
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, William J. Starke, Derek E. Williams
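    A compact sketch of the arbitration policy described above: snoop requests win over processor requests by default, but the arbiter can raise a stall request toward the stall/reorder unit so processor requests are not starved. The stall trigger used here (three consecutive snoop wins) is an invented assumption.
```python
# Arbitration sketch: snoops beat processor requests unless the arbiter has
# asked the stall/reorder unit to hold off ("stall request"). The rule for
# when to assert the stall (3 snoop wins in a row) is invented for illustration.

class Arbiter:
    def __init__(self, starve_limit=3):
        self.consecutive_snoop_wins = 0
        self.starve_limit = starve_limit
        self.stall_request = False          # signalled to the stall/reorder unit

    def select(self, snoop_req, proc_req):
        if snoop_req and not self.stall_request:
            self.consecutive_snoop_wins += 1
            # Ask the stall/reorder unit to pause so the processor isn't starved.
            self.stall_request = self.consecutive_snoop_wins >= self.starve_limit
            return ("snoop", snoop_req)
        self.consecutive_snoop_wins = 0
        self.stall_request = False
        return ("processor", proc_req) if proc_req else ("snoop", snoop_req)

arb = Arbiter()
for i in range(5):
    print(arb.select(snoop_req=f"snoop{i}", proc_req=f"load{i}"))
# snoops win until the stall request lets a processor request through
```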
  • Patent number: 7360021
    Abstract: A method and processor system that substantially eliminates data bus operations when completing updates of an entire cache line with a full store queue entry. The store queue within a processor chip is designed with a series of AND gates connecting individual bits of the byte enable bits of a corresponding entry. The AND output is fed to the STQ controller and signals when the entry is full. When full entries are selected for dispatch to the RC machines, the RC machine is signaled that the entry updates the entire cache line. The RC machine obtains write permission to the line, and then the RC machine overwrites the entire cache line. Because the entire cache line is overwritten, the data of the cache line is not retrieved when the request for the cache line misses at the cache or when data goes stale before write permission is obtained by the RC machine.
    Type: Grant
    Filed: April 15, 2004
    Date of Patent: April 15, 2008
    Assignee: International Business Machines Corporation
    Inventors: Ravi Kumar Arimilli, Guy Lynn Guthrie, Hugh Shen, Derek Edward Williams
  • Patent number: 7350019
    Abstract: The configuration of a CAM device can be set in various manners, depending on the system in which CAMs having different configurations are needed. The CAM device includes a CAM array including a plurality of physical banks; a logical bank-physical bank converter for setting the assignment between logical banks and physical banks, and for outputting a control signal to set the configuration of the physical bank assigned to a logical bank, depending on a logical bank signal indicating the logical bank to be searched; a priority circuit for outputting search results in accordance with a predetermined priority; and a cascade circuit for performing a logical operation on the search results output from the priority circuit of the present CAM device and search results supplied from a higher-order CAM device, and for transmitting the results of the logical operation to a lower-order CAM device.
    Type: Grant
    Filed: March 2, 2004
    Date of Patent: March 25, 2008
    Assignee: Kawasaki Microelectronics, Inc.
    Inventors: Yoshinori Wakimoto, Yoshihiro Ishida
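    A rough software model of the logical-to-physical bank mapping and cascading described above: a search names a logical bank, the converter maps it to physical banks and their configuration, a priority rule picks among local matches, and the cascade step combines the result with the one handed down from a higher-order device. The table contents, widths, and priority rule are illustrative assumptions.
```python
# CAM sketch: logical banks are assigned to physical banks with a per-bank
# configuration; a search of a logical bank runs over its physical banks,
# the lowest matching index wins (priority), and the cascade step lets a
# higher-order device's result override the local one. All tables are made up.

logical_to_physical = {0: [0, 1], 1: [2, 3]}       # logical bank -> physical banks
bank_config = {0: "x72", 1: "x72", 2: "x144", 3: "x144"}   # width per physical bank
physical_entries = {0: ["aa", "bb"], 1: ["cc"], 2: ["bb", "dd"], 3: []}

def search(logical_bank, key):
    hits = []
    for pb in logical_to_physical[logical_bank]:    # converter output
        for index, entry in enumerate(physical_entries[pb]):
            if entry == key:
                hits.append((pb, index, bank_config[pb]))
    return min(hits) if hits else None              # priority: lowest bank/index

def cascade(local_result, higher_order_result):
    # Higher-order device's result wins if present; otherwise pass ours down.
    return higher_order_result if higher_order_result is not None else local_result

local = search(logical_bank=0, key="bb")
print(cascade(local, higher_order_result=None))     # (0, 1, 'x72')
```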
  • Patent number: 7340568
    Abstract: A cache, system and method for reducing the number of rejected snoop requests. A “stall/reorder unit” in a cache receives a snoop request from an interconnect. Information, such as the address, of the snoop request is stored in a queue of the stall/reorder unit. The stall/reorder unit forwards the snoop request to a selector which also receives a request from a processor. An arbitration mechanism selects either the snoop request or the request from the processor. If the snoop request is denied by the arbitration mechanism, information, e.g., address, about the snoop request may be maintained in the stall/reorder unit. The request may be later resent to the selector. This process may be repeated up to “n” clock cycles. By providing the snoop request additional opportunities (n clock cycles) to be accepted by the arbitration mechanism, fewer snoop requests may ultimately be denied.
    Type: Grant
    Filed: February 11, 2005
    Date of Patent: March 4, 2008
    Assignee: International Business Machines Corporation
    Inventors: Benjiman L. Goodman, Guy L. Guthrie, William J. Starke, Jeffrey A. Stuecheli, Derek E. Williams