Multiple Caches Patents (Class 711/119)
  • Patent number: 8745332
    Abstract: Provided are a computer program product, system, and method for cache management of tracks in a first cache and a second cache for a storage. The first cache maintains modified and unmodified tracks in the storage subject to Input/Output (I/O) requests. Modified and unmodified tracks are demoted from the first cache. The modified and the unmodified tracks demoted from the first cache are promoted to the second cache. The unmodified tracks demoted from the second cache are discarded. The modified tracks in the second cache that are at proximate physical locations on the storage device are grouped, and the grouped modified tracks are destaged from the second cache to the storage device.
    Type: Grant
    Filed: April 25, 2012
    Date of Patent: June 3, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Binny S. Gill, Lokesh M. Gupta, Matthew J. Kalos
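    Illustrative sketch (not from the patent): a minimal Python model of the two-cache flow the abstract describes. The class, method names, and proximity window are hypothetical; the point is that unmodified tracks demoted from the second cache are simply dropped, while modified tracks are destaged in groups at proximate physical locations.

      from collections import OrderedDict

      class TwoLevelTrackCache:
          def __init__(self, l1_size):
              self.l1 = OrderedDict()   # track -> (location, modified), LRU order
              self.l1_size = l1_size
              self.l2 = {}              # second cache (e.g. flash)

          def access(self, track, location, modified):
              self.l1[track] = (location, modified)
              self.l1.move_to_end(track)
              if len(self.l1) > self.l1_size:
                  # demote the LRU track from the first cache and promote it
                  # (modified or not) to the second cache
                  victim, meta = self.l1.popitem(last=False)
                  self.l2[victim] = meta

          def demote_from_l2(self, track):
              location, modified = self.l2.pop(track)
              if not modified:
                  return None           # unmodified tracks are discarded
              # group modified tracks at proximate physical locations and
              # destage the whole group to the storage device
              group = [t for t, (loc, mod) in list(self.l2.items())
                       if mod and abs(loc - location) < 8]
              for t in group:
                  del self.l2[t]
              return sorted(group + [track])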
  • Patent number: 8745325
    Abstract: Provided are a computer program product, system, and method for using an attribute of a write request to determine where to cache data in a storage system having multiple caches, including a non-volatile storage cache in a sequential access storage device. Received modified tracks are cached in the non-volatile storage device integrated with the sequential access storage device in response to determining to cache the modified tracks. A write request having modified tracks is received. A determination is made as to whether an attribute of the received write request satisfies a condition. The received modified tracks for the write request are cached in the non-volatile storage device in response to determining that the determined attribute does not satisfy the condition. A destage request is added to a request queue for the received write request having the determined attribute not satisfying the condition.
    Type: Grant
    Filed: May 17, 2012
    Date of Patent: June 3, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Binny S. Gill, Lokesh M. Gupta, Matthew J. Kalos
  • Publication number: 20140149669
    Abstract: In one example embodiment of the inventive concepts, a cache memory system includes a main cache memory including a nonvolatile random access memory, the main cache memory configured to exchange data with an external device and store the exchanged data, where each unit of exchanged data includes less significant bit (LSB) data and more significant bit (MSB) data. The cache memory system further includes a sub-cache memory including a random access memory, the sub-cache memory configured to store LSB data of at least a portion of the data stored at the main cache memory, wherein the main cache memory and the sub-cache memory are formed of a single-level cache memory.
    Type: Application
    Filed: November 21, 2013
    Publication date: May 29, 2014
    Inventors: Sungyeum KIM, Hyeokman KWON, Youngjun KWON, Kiyoung CHOI, Junwhan AHN
  • Publication number: 20140149668
    Abstract: Attributes of access requests can be used to distinguish one set of access requests from another set of access requests. The prefetcher can determine a pattern for each set of access requests and then prefetch cache lines accordingly. In an embodiment in which there are multiple caches, a prefetcher can determine a destination for prefetched cache lines associated with a respective set of access requests. For example, the prefetcher can prefetch one set of cache lines into one cache, and another set of cache lines into another cache. Also, the prefetcher can determine a prefetch distance for each set of access requests. For example, the prefetch distances for the sets of access requests can be different.
    Type: Application
    Filed: November 27, 2012
    Publication date: May 29, 2014
    Applicant: NVIDIA CORPORATION
    Inventor: Anurag Chaudhary
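    Illustrative sketch (not from the publication): one way to keep a separate prefetch pattern, destination cache, and prefetch distance per set of access requests, keyed on a request attribute. All names are hypothetical.

      class StreamPrefetcher:
          def __init__(self):
              self.streams = {}   # attribute -> per-stream prefetch state

          def observe(self, attribute, addr, dest_cache, distance):
              s = self.streams.setdefault(
                  attribute, {"last": None, "stride": 0,
                              "dest": dest_cache, "distance": distance})
              if s["last"] is not None:
                  s["stride"] = addr - s["last"]   # pattern for this set only
              s["last"] = addr
              if s["stride"]:
                  # prefetch 'distance' lines ahead, into this stream's cache
                  for i in range(1, s["distance"] + 1):
                      s["dest"].add(addr + i * s["stride"])

      l1, l2 = set(), set()
      pf = StreamPrefetcher()
      pf.observe("texture", 0x100, l1, distance=2)
      pf.observe("texture", 0x140, l1, distance=2)  # stride 0x40: 2 lines into l1
      pf.observe("vertex", 0x9000, l2, distance=4)  # distinct stream, other cache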
  • Patent number: 8732402
    Abstract: Provided is a method for managing track discard requests. A backup copy of a track in a cache is maintained in a cache backup device. A track discard request is generated to discard tracks in the cache backup device removed from the cache. Track discard requests are queued in a discard track queue. If a predetermined number of track discard requests are queued in the discard track queue while processing in a discard multi-track mode, one discard multiple tracks message is sent to the cache backup device indicating the tracks indicated in the queued predetermined number of track discard requests to instruct the cache backup device to discard the tracks indicated in the discard multiple tracks message. If a predetermined number of periods of inactivity occur while processing in the discard multi-track mode, processing of the track discard requests is switched to a discard single track mode.
    Type: Grant
    Filed: March 6, 2013
    Date of Patent: May 20, 2014
    Assignee: International Business Machines Corporation
    Inventors: Kevin J. Ash, Michael T. Benhase, Lokesh M. Gupta, Kenneth W. Todd
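    Illustrative sketch (not from the patent): the queueing behaviour in Python, with hypothetical batch and idle thresholds. In multi-track mode one message discards a whole batch; repeated inactivity drops back to single-track mode.

      class Backup:                          # stand-in cache backup device
          def discard_many(self, tracks): print("discard", tracks)
          def discard_one(self, track):   print("discard", track)

      class DiscardQueue:
          BATCH, MAX_IDLE = 4, 3             # hypothetical thresholds

          def __init__(self, backup):
              self.backup, self.queue = backup, []
              self.multi_mode, self.idle = True, 0

          def request_discard(self, track):
              self.queue.append(track)
              self.idle = 0
              if self.multi_mode:
                  if len(self.queue) >= self.BATCH:
                      # one discard-multiple-tracks message covers the batch
                      self.backup.discard_many(self.queue)
                      self.queue.clear()
              else:
                  self.backup.discard_one(self.queue.pop(0))

          def idle_period(self):
              self.idle += 1
              if self.multi_mode and self.idle >= self.MAX_IDLE:
                  self.multi_mode = False    # switch to single-track mode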
  • Publication number: 20140136783
    Abstract: Embodiments of the invention relate to a hybrid hardware and software implementation of transactional memory accesses in a computer system. A processor including a transactional cache and a regular cache is utilized in a computer system that includes a policy manager to select one of a first mode (a hardware mode) or a second mode (a software mode) to implement transactional memory accesses. In the hardware mode the transactional cache is utilized to perform read and write memory operations and in the software mode the regular cache is utilized to perform read and write memory operations.
    Type: Application
    Filed: March 15, 2013
    Publication date: May 15, 2014
    Inventors: Sanjeev KUMAR, Christopher J. Hughes, Partha Kundu, Anthony Nguyen
  • Patent number: 8725950
    Abstract: A processor includes multiple processor core units, each including a processor core and a cache memory. Victim lines evicted from a first processor core unit's cache may be stored in another processor core unit's cache, rather than written back to system memory. If the victim line is later requested by the first processor core unit, the victim line is retrieved from the other processor core unit's cache. The processor has low latency data transfers between processor core units. The processor transfers victim lines directly between processor core units' caches or utilizes a victim cache to temporarily store victim lines while searching for their destinations. The processor evaluates cache priority rules to determine whether victim lines are discarded, written back to system memory, or stored in other processor core units' caches. Cache priority rules can be based on cache coherency data, load balancing schemes, and architectural characteristics of the processor.
    Type: Grant
    Filed: June 30, 2010
    Date of Patent: May 13, 2014
    Assignee: MIPS Technologies, Inc.
    Inventor: Sanjay Vishin
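    Illustrative sketch (not from the patent): hedged priority rules for placing a victim line, here a coherency rule (drop if a copy exists elsewhere), a load-balancing rule (prefer the least-loaded peer cache), and a write-back fallback. The thresholds and field names are hypothetical.

      def place_victim(line, source_core, cores, memory):
          if line.get("shared_copy_exists"):
              return "discard"                      # another cache holds a copy
          peers = [c for c in cores if c is not source_core and c["load"] < 0.9]
          if peers:                                 # least-loaded peer wins
              target = min(peers, key=lambda c: c["load"])
              target["cache"][line["addr"]] = line["data"]
              return "stored_in_peer"
          if line.get("dirty"):
              memory[line["addr"]] = line["data"]   # no room anywhere: write back
              return "written_back"
          return "discard"

      cores = [{"cache": {}, "load": 0.2}, {"cache": {}, "load": 0.8}]
      memory = {}
      print(place_victim({"addr": 0x80, "data": 42, "dirty": True},
                         cores[1], cores, memory))  # -> stored_in_peer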
  • Publication number: 20140129772
    Abstract: A processor transfers prefetch requests from their targeted cache to another cache in a memory hierarchy based on a fullness of a miss address buffer (MAB) or based on confidence levels of the prefetch requests. Each cache in the memory hierarchy is assigned a number of slots at the MAB. In response to determining that the fullness of the slots assigned to a cache is above a threshold when a prefetch request to the cache is received, the processor transfers the prefetch request to the next lower level cache in the memory hierarchy. In response, the data targeted by the access request is prefetched to the next lower level cache in the memory hierarchy, and is therefore available for subsequent provision to the cache. In addition, the processor can transfer a prefetch request to lower level caches based on a confidence level of a prefetch request.
    Type: Application
    Filed: November 6, 2012
    Publication date: May 8, 2014
    Applicant: Advanced Micro Devices, Inc.
    Inventors: John Kalamatianos, Ravindra Nath Bhargava, Ramkumar Jayaseelan
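    Illustrative sketch (not from the publication): demoting a prefetch down the hierarchy while the target level's share of miss address buffer (MAB) slots is too full; a low-confidence prefetch could be started at a lower level the same way. The thresholds are hypothetical.

      def issue_prefetch(addr, level, mab, thresholds, caches):
          while level < len(caches) - 1 and \
                  mab[level]["used"] / mab[level]["total"] > thresholds[level]:
              level += 1              # transfer to the next lower level cache
          caches[level].add(addr)     # available here for later promotion
          return level

      caches = [set(), set(), set()]  # L1, L2, L3
      mab = [{"used": 7, "total": 8}, {"used": 2, "total": 8},
             {"used": 0, "total": 8}]
      print(issue_prefetch(0x40, 0, mab, [0.75, 0.75, 1.0], caches))  # -> 1 (L2)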
  • Patent number: 8719508
    Abstract: Parallel computing environments, where threads executing in neighboring processors may access the same set of data, may be designed and configured to share one or more levels of cache memory. Before a processor forwards a request for data to a higher level of cache memory following a cache miss, the processor may determine whether a neighboring processor has the data stored in a local cache memory. If so, the processor may forward the request to the neighboring processor to retrieve the data. Because access to the cache memories for the two processors is shared, the effective size of the memory is increased. This may advantageously decrease cache misses for each level of shared cache memory without increasing the individual size of the caches on the processor chip.
    Type: Grant
    Filed: December 10, 2012
    Date of Patent: May 6, 2014
    Assignee: International Business Machines Corporation
    Inventors: Miguel Comparan, Robert A. Shearer
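    Illustrative sketch (not from the patent): the lookup order in Python. On a local miss the neighbouring core's cache is probed before the request is forwarded to the next level, so the two caches behave like one larger cache.

      def load(addr, my_cache, neighbor_cache, next_level):
          if addr in my_cache:
              return my_cache[addr]
          if addr in neighbor_cache:       # neighbour hit: no higher-level traffic
              data = neighbor_cache[addr]
          else:
              data = next_level[addr]      # true miss: forward up the hierarchy
          my_cache[addr] = data
          return data

      l1a, l1b, l2 = {}, {0x10: "x"}, {0x10: "x", 0x20: "y"}
      print(load(0x10, l1a, l1b, l2))      # served from the neighbour's cache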
  • Patent number: 8719507
    Abstract: Parallel computing environments, where threads executing in neighboring processors may access the same set of data, may be designed and configured to share one or more levels of cache memory. Before a processor forwards a request for data to a higher level of cache memory following a cache miss, the processor may determine whether a neighboring processor has the data stored in a local cache memory. If so, the processor may forward the request to the neighboring processor to retrieve the data. Because access to the cache memories for the two processors is shared, the effective size of the memory is increased. This may advantageously decrease cache misses for each level of shared cache memory without increasing the individual size of the caches on the processor chip.
    Type: Grant
    Filed: January 4, 2012
    Date of Patent: May 6, 2014
    Assignee: International Business Machines Corporation
    Inventors: Miguel Comparan, Robert A. Shearer
  • Publication number: 20140122804
    Abstract: Methods for memory block protection and memory devices are disclosed. One such method for memory block protection includes programming protection data to protection bytes diagonally across different word lines of a particular memory block (e.g., Boot ROM). The protection data can be retrieved by an erase verify operation that can be performed at power-up of the memory device.
    Type: Application
    Filed: January 9, 2014
    Publication date: May 1, 2014
    Applicant: Micron Technology, Inc.
    Inventor: Kirubakaran PERIYANNAN
  • Patent number: 8713263
    Abstract: The present invention provides a method and apparatus for supporting embodiments of an out-of-order load/store queue structure. One embodiment of the apparatus includes a first queue for storing memory operations adapted to be executed out-of-order with respect to other memory operations. The apparatus also includes one or more additional queues for storing memory operations in response to completion of a memory operation. The embodiment of the apparatus is configured to remove the memory operation from the first queue in response to the completion.
    Type: Grant
    Filed: November 1, 2010
    Date of Patent: April 29, 2014
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Christopher D. Bryant
  • Publication number: 20140115255
    Abstract: Provided is a storage system, comprising a storage device for storing data and at least one controller for controlling reading/writing of the data from/to the storage device. Each controller includes a first cache memory for temporarily storing the data read from the storage device by file access, and a second cache memory for temporarily storing the data to be read/written from/to the storage device by block access. The processor reads the requested data from the storage device in the case where data requested by a file read request received from a host computer is not stored in the first cache memory, stores the data read from the storage device in the first cache memory without storing the data in the second cache memory, and transfers the data stored in the first cache memory to the host computer that has issued the file read request.
    Type: Application
    Filed: October 19, 2012
    Publication date: April 24, 2014
    Applicant: Hitachi, Ltd.
    Inventors: Masanori Takada, Akira Yamamoto, Hiroshi Hirayama
  • Patent number: 8706966
    Abstract: A system and method are provided for adaptively configuring L2 cache memory usage in a system of microprocessors. A system-on-chip (SoC) is provided with a plurality of n selectively enabled processor cores and a plurality of n L2 cache memories. The method associates each L2 cache with a corresponding processor core, and shares the n L2 caches between enabled processor cores. More explicitly, associating each L2 cache with the corresponding processor core means connecting each processor core to its L2 cache using an L2 data/address bus. Sharing the n L2 caches with enabled processors means connecting each processor core to each L2 cache via a data/address bus mesh with dedicated point-to-point connections.
    Type: Grant
    Filed: May 24, 2011
    Date of Patent: April 22, 2014
    Assignee: Applied Micro Circuits Corporation
    Inventors: Waseem Saify Kraipak, George Bendak
  • Publication number: 20140108734
    Abstract: A processor includes a first processing unit and a first level cache associated with the first processing unit and operable to store data for use by the first processing unit used during normal operation of the first processing unit. The first processing unit is operable to store first architectural state data for the first processing unit in the first level cache responsive to receiving a power down signal. A method for controlling power to processor including a hierarchy of cache levels includes storing first architectural state data for a first processing unit of the processor in a first level of the cache hierarchy responsive to receiving a power down signal and flushing contents of the first level including the first architectural state data to a first lower level of the cache hierarchy prior to powering down the first level of the cache hierarchy and the first processing unit.
    Type: Application
    Filed: October 17, 2012
    Publication date: April 17, 2014
    Inventors: Paul Edward Kitchin, William L. Walker
  • Patent number: 8700735
    Abstract: Disclosed in some examples is a method of caching by storing data in a first cache specific to a first geographic area and accessible only by a first application in the first geographic area; storing data in a second cache specific to a second geographic area and accessible by a plurality of applications in the second geographic area including the first application and a second application, the second geographic area being larger than and encompassing at least part of the first geographic area; responsive to a miss in the first cache for data, contacting the second cache and searching for the data in the second cache; and responsive to a hit for the data in the second cache, sending the data to a first application, wherein the data was placed in the second cache by a second application.
    Type: Grant
    Filed: November 19, 2012
    Date of Patent: April 15, 2014
    Assignee: Zynga Inc.
    Inventors: Scott Dale, Nathan Brown, Michael Arieh Luxton
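    Illustrative sketch (not from the patent): the two-tier geographic lookup in Python. The local cache is private to one application; the regional cache is shared, so a miss in the local cache can be satisfied by data another application placed regionally.

      def get(key, local_cache, regional_cache, origin):
          if key in local_cache:             # app-private, smaller geography
              return local_cache[key]
          if key in regional_cache:          # possibly filled by another app
              value = regional_cache[key]
          else:
              value = origin[key]
              regional_cache[key] = value    # now visible region-wide
          local_cache[key] = value
          return value

      local, regional, origin = {}, {"k": 1}, {"k": 1, "j": 2}
      print(get("k", local, regional, origin))   # hit placed by a second app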
  • Patent number: 8700854
    Abstract: Provided are a computer program product, system, and method for managing unmodified tracks maintained in both a first cache and a second cache. The first cache has unmodified tracks in the storage subject to Input/Output (I/O) requests. Unmodified tracks are demoted from the first cache to a second cache. An inclusive list indicates unmodified tracks maintained in both the first cache and a second cache. An exclusive list indicates unmodified tracks maintained in the second cache but not the first cache. The inclusive list and the exclusive list are used to determine whether to promote to the second cache an unmodified track demoted from the first cache.
    Type: Grant
    Filed: May 21, 2012
    Date of Patent: April 15, 2014
    Assignee: International Business Machines Corporation
    Inventors: Kevin J. Ash, Michael T. Benhase, Lokesh M. Gupta, Matthew J. Kalos, Kenneth W. Todd
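    Illustrative sketch (not from the patent): one hedged reading of how the two lists decide promotion when a track leaves the first cache. A track on the inclusive list already has a copy in the second cache, so no promotion is needed; only tracks on neither list are promoted.

      def should_promote(track, inclusive, exclusive):
          if track in exclusive:
              return False             # already second-cache-only
          if track in inclusive:
              inclusive.discard(track) # leaving the first cache: move lists
              exclusive.add(track)
              return False             # a copy is already in the second cache
          exclusive.add(track)         # in neither cache below: promote it
          return True

      inclusive, exclusive = {"t1"}, {"t2"}
      print(should_promote("t1", inclusive, exclusive))   # False
      print(should_promote("t3", inclusive, exclusive))   # True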
  • Patent number: 8700862
    Abstract: A compression status bit cache provides on-chip availability of compression status bits used to determine how many bits are needed to access a potentially compressed block of memory. A backing store residing in a reserved region of attached memory provides storage for a complete set of compression status bits used to represent compression status of an arbitrarily large number of blocks residing in attached memory. Physical address remapping (“swizzling”) used to distribute memory access patterns over a plurality of physical memory devices is partially replicated by the compression status bit cache to efficiently integrate allocation and access of the backing store data with other user data.
    Type: Grant
    Filed: December 3, 2008
    Date of Patent: April 15, 2014
    Assignee: Nvidia Corporation
    Inventors: David B. Glasco, Peter B. Holmqvist, George R. Lynch, Patrick R. Marchand, Karan Mehra, James Roberts
  • Patent number: 8694730
    Abstract: A binary tree based multi-level cache system for multi-core processors is described, together with its two possible implementations, the LogN and LogN+1 models, each maintaining a true pyramid.
    Type: Grant
    Filed: March 4, 2011
    Date of Patent: April 8, 2014
    Inventor: Muhammad Ali Ismail
  • Patent number: 8688913
    Abstract: For movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache. Requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache. Unrequested data of the whole data segment is split and positioned at a Least Recently Used (LRU) portion of the demotion queue of the higher level of cache. The unrequested data is pinned in place until a write of the whole data segment to the lower level of cache completes.
    Type: Grant
    Filed: November 1, 2011
    Date of Patent: April 1, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Stephen L. Blinick, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao-Yu Hu, Matthew J. Kalos, Ioannis Koltsidas, Roman A. Pletka
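    Illustrative sketch (not from the patent): a demotion queue for the higher-level cache in which the requested half of a promoted segment enters at the MRU end, the unrequested half at the LRU end, and the whole segment stays pinned until the lower-level write completes.

      from collections import deque

      class DemotionQueue:
          def __init__(self):
              self.q = deque()         # left = LRU end, right = MRU end
              self.pinned = set()

          def insert_segment(self, seg, requested, unrequested):
              self.q.append((seg, "requested", requested))          # MRU
              self.q.appendleft((seg, "unrequested", unrequested))  # LRU
              self.pinned.add(seg)     # pin until the lower-level write completes

          def write_complete(self, seg):
              self.pinned.discard(seg)

          def evict(self):
              for entry in list(self.q):   # skip pinned entries
                  if entry[0] not in self.pinned:
                      self.q.remove(entry)
                      return entry
              return None

      dq = DemotionQueue()
      dq.insert_segment("seg1", b"hot", b"cold")
      print(dq.evict())                # None: seg1 is still pinned
      dq.write_complete("seg1")
      print(dq.evict())                # the unrequested (LRU) half leaves first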
  • Patent number: 8688900
    Abstract: Provided is a method for managing cache memory to cache data units in at least one storage device. A cache controller is coupled to at least two flash bricks, each comprising a flash memory. Metadata indicates a mapping of the data units to the flash bricks caching the data units, wherein the metadata is used to determine the flash bricks on which the cache controller caches received data units. The metadata is updated to indicate the flash brick having the flash memory on which data units are cached.
    Type: Grant
    Filed: February 4, 2013
    Date of Patent: April 1, 2014
    Assignee: International Business Machines Corporation
    Inventors: Evangelos S. Eleftheriou, Robert Haas, Xiao-Yu Hu, Roman A. Pletka
  • Patent number: 8688914
    Abstract: To destage tracks to secondary storage in a more effective manner, the temporal bits employed with sequential bits for controlling the timing of destaging a track in primary storage are transferred from the primary storage to the secondary storage. The temporal bits are allowed to age on the secondary storage.
    Type: Grant
    Filed: November 1, 2011
    Date of Patent: April 1, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Stephen L. Blinick, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao-Yu Hu, Matthew J. Kalos, Ioannis Koltsidas, Karl A. Nielsen, Roman A. Pletka
  • Patent number: 8688897
    Abstract: Provided are a system, method, and computer program product for managing cache memory to cache data units in at least one storage device. A cache controller is coupled to at least two flash bricks, each comprising a flash memory. Metadata indicates a mapping of the data units to the flash bricks caching the data units, wherein the metadata is used to determine the flash bricks on which the cache controller caches received data units. The metadata is updated to indicate the flash brick having the flash memory on which data units are cached.
    Type: Grant
    Filed: April 5, 2011
    Date of Patent: April 1, 2014
    Assignee: International Business Machines Corporation
    Inventors: Evangelos S. Eleftheriou, Robert Haas, Xiao-Yu Hu, Roman A. Pletka
  • Patent number: 8683132
    Abstract: A memory controller for prefetching data for a processor, or CPU, of a computer system. The memory controller functions by interfacing the processor to system memory via a system memory bus. A prefetch cache is included in the memory controller. The prefetch cache includes a short-term storage portion and a long-term storage portion. The prefetch cache is configured to access system memory to retrieve and store a plurality of sequential cache lines subsequent to a processor access to system memory.
    Type: Grant
    Filed: September 29, 2003
    Date of Patent: March 25, 2014
    Assignee: NVIDIA Corporation
    Inventor: Radoslav Danilak
  • Patent number: 8683465
    Abstract: A cache image including only cache entries with valid durations of at least a configured deployment date for a virtual machine image is prepared via an application server for the virtual machine image. The virtual machine image is deployed to at least one other application server as a virtual machine with the cache image including only the cache entries with the valid durations of at least the configured deployment date for the virtual machine image.
    Type: Grant
    Filed: December 18, 2009
    Date of Patent: March 25, 2014
    Assignee: International Business Machines Corporation
    Inventors: Erik J. Burckart, Andrew J. Ivory, Todd E. Kaplinger, Aaron K. Shook
  • Patent number: 8683140
    Abstract: A method of processing store requests in a data processing system includes enqueuing a store request in a store queue of a cache memory of the data processing system. The store request identifies a target memory block by a target address and specifies store data. While the store request and a barrier request older than the store request are enqueued in the store queue, a read-claim machine of the cache memory is dispatched to acquire coherence ownership of the target memory block of the store request. After coherence ownership of the target memory block is acquired and the barrier request has been retired from the store queue, a cache array of the cache memory is updated with the store data.
    Type: Grant
    Filed: April 26, 2012
    Date of Patent: March 25, 2014
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, William J. Starke, Derek E. Williams
  • Publication number: 20140082284
    Abstract: The present disclosure relates to a device for controlling the access to a cache structure comprising multiple cache sets during the execution of at least one computer program, the device comprising a module for generating seed values during the execution of the at least one computer program; a parametric hash function module for generating a cache set identifier to access the cache structure, the identifier being generated by combining a seed value generated by the module for generating seed values and predetermined bits of an address to access a main memory associated to the cache structure.
    Type: Application
    Filed: September 13, 2013
    Publication date: March 20, 2014
    Applicant: BARCELONA SUPERCOMPUTING CENTER - CENTRO NACIONAL DE SUPERCOMPUTACION
    Inventors: JAIME ABELLA FERRER, EDUARDO QUIÑONES MORENO, FRANCISCO JAVIER CAZORLA ALMEIDA
  • Publication number: 20140082390
    Abstract: Embodiments of the disclosure include a cache array having a plurality of cache sets grouped into a plurality of subsets. The cache array also includes a read line configured to receive a read signal for the cache array and a set selection line configured to receive a set selection signal. The set selection signal indicates that the read signal corresponds to one of the plurality of subsets of the cache array. The read line and the set selection line are operatively coupled to the plurality of cache sets, and based on the set selection signal the subset that corresponds to the set selection signal is switched.
    Type: Application
    Filed: September 18, 2012
    Publication date: March 20, 2014
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Paul A. Bunce, John D. Davis, Diana M. Henderson, Jigar J. Vora
  • Publication number: 20140082286
    Abstract: A method and apparatus for determining data to be prefetched based on previous cache miss history is disclosed. In one embodiment, a processor includes a first cache memory and a controller circuit. The controller circuit is configured to load data from a first address into the first cache memory responsive to a cache miss corresponding to the first address. The controller circuit is further configured to determine, responsive to a cache miss for the first address, if a previous cache miss occurred at a second address. Responsive to determining that the previous cache miss occurred at the second address, the controller circuit is configured to load data from a second address into the first cache.
    Type: Application
    Filed: September 18, 2012
    Publication date: March 20, 2014
    Applicant: ORACLE INTERNATIONAL CORPORATION
    Inventor: Yuan C. Chou
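    Illustrative sketch (not from the publication): a pair-correlation prefetcher in Python. Each miss records which miss followed it; when the first address misses again, the line that followed it last time is prefetched.

      class MissHistoryPrefetcher:
          def __init__(self):
              self.next_miss = {}      # miss addr -> the miss that followed it
              self.last_miss = None

          def on_miss(self, addr, cache, memory):
              cache[addr] = memory.get(addr)             # demand fill
              if self.last_miss is not None:
                  self.next_miss[self.last_miss] = addr  # record the pair
              predicted = self.next_miss.get(addr)
              if predicted is not None:
                  cache[predicted] = memory.get(predicted)  # correlated prefetch
              self.last_miss = addr

      cache, memory = {}, {a: a for a in range(0, 256, 64)}
      pf = MissHistoryPrefetcher()
      pf.on_miss(0, cache, memory)
      pf.on_miss(64, cache, memory)    # learns that 64 follows 0
      cache.clear()
      pf.on_miss(0, cache, memory)     # fills 0 and prefetches 64
      print(64 in cache)               # True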
  • Publication number: 20140082285
    Abstract: Data storage and management systems can be interconnected as clustered systems to distribute data and operational loading. Further, independent clustered storage systems can be associated to form peered clusters. Provided herein are methods and systems for creating and managing intercluster relationships between independent clustered storage systems, allowing the respective independent clustered storage systems to exchange data and distribute management operations between each other while mitigating administrator involvement. Cluster introduction information is provided on a network interface of one or more nodes in a cluster, and intercluster relationships are created between peer clusters. A relationship can be created by initiating contact with a peer using a logical interface, and respective peers retrieving the introduction information provided on the network interface.
    Type: Application
    Filed: November 22, 2013
    Publication date: March 20, 2014
    Applicant: NetApp Inc.
    Inventor: Steven M. Ewing
  • Patent number: 8676506
    Abstract: Methods and systems for identifying missing signage are described herein. The method includes generating a route from an origin to a destination, the route having a plurality of maneuvers. The method further includes receiving missing signage information from a first device, the missing signage information relating to one or more maneuvers of the plurality of maneuvers, and providing the missing signage information and at least one of the one or more related maneuvers to a second device.
    Type: Grant
    Filed: November 15, 2011
    Date of Patent: March 18, 2014
    Assignee: Google Inc.
    Inventor: Daniel M. LaLiberte
  • Patent number: 8671232
    Abstract: A system and method for dynamically migrating stash transactions include first and second processing cores, an input/output memory management unit (IOMMU), an IOMMU mapping table, an input/output (I/O) device, a stash transaction migration management unit (STMMU), and an operating system (OS) scheduler. The first core executes a first thread associated with a frame manager. The OS scheduler migrates the first thread from the first core to the second core and generates pre-empt notifiers to indicate scheduling-out and scheduling-in of the first thread from the first core and to the second core. The STMMU uses the pre-empt notifiers to enable dynamic stash transaction migration.
    Type: Grant
    Filed: March 7, 2013
    Date of Patent: March 11, 2014
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Vakul Garg, Varun Sethi
  • Patent number: 8671245
    Abstract: In an exemplary computer system having one or more masters configured to the same slave memory using a protocol, such as the AMBA AXI protocol, a master provides an ID field to the memory as part of a data request, where the ID field has a line ID sub-field that represents a line ID value that uniquely identifies a particular cache line (or subset of cache lines) in the master, where the memory returns the line ID value back to the master along with the retrieved data. The master uses the line ID value to identify the cache line into which the retrieved data is to be stored. In this way, the master does not need to maintain a queue of address buffers to retain the addresses for data requests currently being processed, where the size of the queue limits the number of parallel in-service data requests by the master.
    Type: Grant
    Filed: December 27, 2010
    Date of Patent: March 11, 2014
    Assignee: LSI Corporation
    Inventor: Eran Dosh
  • Publication number: 20140068191
    Abstract: A computational device maintains a first type of cache and a second type of cache. The computational device receives a command from the host to release space. The computational device synchronously discards tracks from the first type of cache, and asynchronously discards tracks from the second type of cache.
    Type: Application
    Filed: November 6, 2013
    Publication date: March 6, 2014
    Inventors: Michael T. Benhase, Lokesh M. Gupta
  • Publication number: 20140067913
    Abstract: A system is provided in which two sets of content are cached in a corresponding two caches—a current cache and a next cache. A client renders content in the current cache and uses the next cache to define the expiration for the content in the current cache as well as provide the replacement content when the current content expires. When a client application renders the content in the current cache, the application checks whether the expiration for the current cache has been reached according to the expiration defined by the content in the next cache (which is not being rendered). If the expiration has been reached, the content in the next cache is moved to the current cache and rendered. New content can then be downloaded to fill the next cache and define the expiration for the content formerly in the next cache but now in the current cache.
    Type: Application
    Filed: September 6, 2012
    Publication date: March 6, 2014
    Applicant: MICROSOFT CORPORATION
    Inventors: Kyle Matthew von Haden, Ryan Patrick Heaney, Neculai Blendea
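    Illustrative sketch (not from the publication): a double-buffered client cache in Python, where the next cache supplies both the replacement content and the expiry of what is currently rendered. The fetch function is a hypothetical downloader.

      import time

      class DoubleBufferedContent:
          def __init__(self, fetch):
              self.fetch = fetch               # returns (content, expiry time)
              self.current = fetch()
              self.next = fetch()

          def render(self):
              content, _ = self.current
              _, expires_at = self.next        # next cache defines the expiry
              if time.time() >= expires_at:
                  self.current = self.next     # promote next -> current
                  self.next = self.fetch()     # refill the next cache
                  content, _ = self.current
              return content

      def fetch():
          return ("banner-%d" % int(time.time()), time.time() + 30.0)

      client = DoubleBufferedContent(fetch)
      print(client.render())   # serves current; swaps once 'next' says expired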
  • Patent number: 8667238
    Abstract: For selecting an input/output tape volume cache (TVC), a history module maintains access history instances for a plurality of clusters, each cluster comprising a TVC. A request module receives an access request for a logical volume wherein an instance of the logical volume is stored on each of the plurality of clusters and each instance of the logical volume is synchronized with each other instance of the logical volume. An adjustment module weights the access history instances in favor of recent access history instances. A calculation module calculates an affinity of the logical volume instance stored on each cluster of the plurality of clusters. A selection module selects a cluster TVC with a highest logical volume affinity as the TVC for the logical volume.
    Type: Grant
    Filed: February 23, 2012
    Date of Patent: March 4, 2014
    Assignee: International Business Machines Corporation
    Inventors: Thirumale N. Niranjan, Joseph M. Swingler
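    Illustrative sketch (not from the patent): a recency-weighted affinity score per cluster, with the highest-affinity cluster's TVC selected. The exponential decay is one hypothetical way to weight "in favor of recent access history instances".

      def select_tvc(clusters, history, decay=0.5):
          def affinity(cluster):
              events = history.get(cluster, [])   # oldest -> newest, 1 = access
              return sum(decay ** (len(events) - 1 - i) * hit
                         for i, hit in enumerate(events))
          return max(clusters, key=affinity)

      history = {"clusterA": [1, 0, 0], "clusterB": [0, 1, 1]}
      print(select_tvc(["clusterA", "clusterB"], history))   # -> clusterB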
  • Patent number: 8661214
    Abstract: The present invention appropriately processes a write command issued during data migration processing, and completes the data migration processing promptly. A copy control part 7, in a case where write data targeted at a migration-source volume 3A has been received from a host 2 during data migration processing, selects and executes one of a synchronous copy process 6A and an asynchronous copy process 6B based on one or multiple pieces of prescribed information 7A through 7D. This enables write command processing according to a process mode complying with a storage system condition.
    Type: Grant
    Filed: September 21, 2011
    Date of Patent: February 25, 2014
    Assignee: Hitachi, Ltd.
    Inventors: Haruaki Watanabe, Hidenori Suzuki
  • Publication number: 20140052914
    Abstract: A multi-ported memory that supports multiple read and write accesses is described. The multi-ported memory may include a number of read/write ports that is greater than the number of read/write ports of each memory bank of the multi-ported memory. The multi-ported memory allows for read operation(s) and write operation(s) to be received during the same clock cycle. In the event that an incoming write operation is blocked by read operation(s), data for that write operation may be stored in one of a plurality of cache banks included in the multi-port memory. The cache banks are accessible to both write and read operations. In the event that the write operation is not blocked by read operation(s), a determination is made as to whether data for that incoming write operation is stored in the memory bank targeted by that incoming write operation or in one of the cache banks.
    Type: Application
    Filed: December 17, 2012
    Publication date: February 20, 2014
    Applicant: Broadcom Corporation
    Inventors: Weihuang Wang, Chien-Hsien Wu
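    Illustrative sketch (not from the publication): a bank-conflict model in Python. A write blocked by a same-cycle read to its bank is parked in a cache bank; reads consult the cache banks first, since a parked write holds the newest data.

      class MultiPortedMemory:
          def __init__(self, n_banks):
              self.banks = [dict() for _ in range(n_banks)]
              self.cache_banks = {}            # parked writes, readable too
              self.n = n_banks

          def cycle(self, reads, writes):
              read_banks = {addr % self.n for addr in reads}
              results = {}
              for addr in reads:               # cache banks override the banks
                  bank = self.banks[addr % self.n]
                  results[addr] = self.cache_banks.get(addr, bank.get(addr))
              for addr, data in writes:
                  if addr % self.n in read_banks:
                      self.cache_banks[addr] = data       # blocked: park it
                  else:
                      self.banks[addr % self.n][addr] = data
                      self.cache_banks.pop(addr, None)    # bank copy is current
              return results

      m = MultiPortedMemory(n_banks=2)
      m.cycle(reads=[], writes=[(4, "a")])         # unblocked write to bank 0
      out = m.cycle(reads=[4], writes=[(6, "b")])  # write blocked by the read
      print(out[4], m.cache_banks)                 # a {6: 'b'}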
  • Publication number: 20140052916
    Abstract: A processing network comprising a cache configured to store copies of memory data as a plurality of cache lines, a cache controller configured to receive data requests from a plurality of cache agents, and designate at least one of the cache agents as an owner of a first of the cache lines, and a directory configured to store cache ownership designations of the first cache line, and wherein the directory is encoded to support substantially simultaneous ownership of the first cache line by a plurality but less than all of the cache agents. Also disclosed is a method comprising receiving coherent transactions from a plurality of cache agents, and storing ownership designations of a plurality of cache lines by the cache agents in a directory, wherein the directory is configured to support storage of substantially simultaneous ownership designations for a plurality but less than all of the cache agents.
    Type: Application
    Filed: July 29, 2013
    Publication date: February 20, 2014
    Applicant: Futurewei Technologies, Inc.
    Inventors: Iulin Lih, Naxin Zhang, Chenghong He, Hongbo Shi
  • Publication number: 20140052915
    Abstract: An information processing apparatus includes a plurality of cache memories, a plurality of processors configured to respectively access the plurality of cache memories, and a memory, in which each of the plurality of processors executes a program to function as a cache processing unit configured to perform cache processing, including at least one of transferring data to the memory and discarding data, with respect to all of the data stored in the cache memory.
    Type: Application
    Filed: July 12, 2013
    Publication date: February 20, 2014
    Inventors: Takamori YAMAGUCHI, Taichi SHIMOYASHIKI, Kazunori YAMAMOTO
  • Publication number: 20140052913
    Abstract: A multi-ported memory that supports multiple read and write accesses is described herein. The multi-ported memory may include a number of read/write ports that is greater than the number of read/write ports of each memory bank of the multi-ported memory. The multi-ported memory allows for at least one read operation and at least one write operation to be received during the same clock cycle. In the event that an incoming write operation is blocked by the at least one read operation, data for that incoming write operation may be stored in a cache included in the multi-port memory. That cache is accessible to both write operations and read operations. In the event that the incoming write operation is not blocked by the at least one read operation, data for that incoming write operation is stored in the memory bank targeted by that incoming write operation.
    Type: Application
    Filed: October 19, 2012
    Publication date: February 20, 2014
    Applicant: BROADCOM CORPORATION
    Inventors: Weihuang Wang, Chien-Hsien Wu
  • Patent number: 8656104
    Abstract: A point-in-time copy relationship associates tracks in a source storage with tracks in a target storage. The target storage stores the tracks in the source storage as of a point-in-time. A write request is received including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship. The point-in-time source track was in the source storage at the point-in-time the copy relationship was established. The updated source track is stored in a first cache device. A prefetch request is sent to the source storage to prefetch the point-in-time source track in the source storage subject to the write request to a second cache device. A read request is generated to read the source track in the source storage following the sending of the prefetch request. The read source track is copied to a corresponding target track in the target storage.
    Type: Grant
    Filed: May 1, 2012
    Date of Patent: February 18, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Lokesh M. Gupta
  • Patent number: 8656128
    Abstract: A first SMP computer has first and second processing units and a first system memory pool, a second SMP computer has third and fourth processing units and a second system memory pool, and a third SMP computer has at least fifth and sixth processing units and third, fourth and fifth system memory pools. The fourth system memory pool is inaccessible to the third, fourth and sixth processing units and accessible to at least the second and fifth processing units, and the fifth system memory pool is inaccessible to the first, second and sixth processing units and accessible to at least the fourth and fifth processing units. A first interconnect couples the second processing unit for load-store coherent, ordered access to the fourth system memory pool, and a second interconnect couples the fourth processing unit for load-store coherent, ordered access to the fifth system memory pool.
    Type: Grant
    Filed: August 30, 2012
    Date of Patent: February 18, 2014
    Assignee: International Business Machines Corporation
    Inventors: Guy L Guthrie, Charles F. Marino, William J. Starke, Derek E. Williams
  • Patent number: 8656129
    Abstract: An aggregate symmetric multiprocessor (SMP) data processing system includes a first SMP computer including at least first and second processing units and a first system memory pool and a second SMP computer including at least third and fourth processing units and second and third system memory pools. The second system memory pool is a restricted access memory pool inaccessible to the fourth processing unit and accessible to at least the second and third processing units, and the third system memory pool is accessible to both the third and fourth processing units. An interconnect couples the second processing unit in the first SMP computer for load-store coherent, ordered access to the second system memory pool in the second SMP computer, such that the second processing unit in the first SMP computer and the second system memory pool in the second SMP computer form a synthetic third SMP computer.
    Type: Grant
    Filed: August 30, 2012
    Date of Patent: February 18, 2014
    Assignee: International Business Machines Corporation
    Inventor: William J. Starke
  • Publication number: 20140047183
    Abstract: In one embodiment, a computer system includes a cache having one or more memory locations associated with one or more computing systems, one or more cache managers, each cache manager associated with a portion of the cache, a metadata service communicatively linked with the cache managers, a configuration manager communicatively linked with the cache managers and the metadata service, and a data store.
    Type: Application
    Filed: August 7, 2012
    Publication date: February 13, 2014
    Applicant: DELL PRODUCTS L.P.
    Inventors: Gaurav Chawla, Ranjit Pandit
  • Publication number: 20140047263
    Abstract: Synchronous local and cross-site switchover and switchback operations of a node in a disaster recovery (DR) group are described. In one embodiment, during switchover, a takeover node receives a failover request and responsively identifies a first partner node in a first cluster and a second partner node in a second cluster. The first partner node and the takeover node form a first high-availability (HA) group and the second partner node and a third partner node in the second cluster form a second HA group. The first and second HA groups form the DR group and share a storage fabric. The takeover node synchronously restores client access requests associated with a failed partner node at the takeover node.
    Type: Application
    Filed: August 8, 2012
    Publication date: February 13, 2014
    Inventors: Susan Coatney, Thomas B. Bolt, Laurent Lambert, Vaiapuri Ramasubramaniam, Chaitanya Patel, Sreelatha Reddy, Hrishikesh Keremane, Harihara Kadayam
  • Patent number: 8650363
    Abstract: A memory subsystem includes a volatile memory, a nonvolatile memory, and a controller including logic to interface the volatile memory to an external system. The volatile memory is addressable for reading and writing by the external system. The memory subsystem includes a power controller with logic to detect when power from the external system to at least one of the volatile and nonvolatile memories and to the controller fails. When external system power fails, backup power is provided to at least one of the volatile and nonvolatile memories and to the controller for long enough to enable the controller to back up data from the volatile memory to the nonvolatile memory.
    Type: Grant
    Filed: May 27, 2012
    Date of Patent: February 11, 2014
    Assignee: AgigA Tech
    Inventor: Ronald H Sartore
  • Patent number: 8645612
    Abstract: According to one embodiment, an information processing device includes an OS and a virtual machine switching section. The OS accesses a hardware resource including a nonvolatile semiconductor memory and a semiconductor memory used as a cache memory of the nonvolatile semiconductor memory. The virtual machine switching section switches the virtual machine in execution from a first virtual machine to a second virtual machine while a cache process is executed, when a cache miss in a process executed by the first virtual machine is detected.
    Type: Grant
    Filed: March 31, 2011
    Date of Patent: February 4, 2014
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Atsushi Kunimatsu, Goh Uemura, Tsutomu Owa
  • Publication number: 20140032846
    Abstract: Systems and methods for supporting a plurality of load and store accesses of a cache are disclosed. Responsive to a request of a plurality of requests to access a block of a plurality of blocks of a load cache, the block of the load cache and a logically and physically paired block of a store coalescing cache are accessed in parallel. The data that is accessed from the block of the load cache is overwritten by the data that is accessed from the block of the store coalescing cache by merging on a per byte basis. Access is provided to the merged data.
    Type: Application
    Filed: July 30, 2012
    Publication date: January 30, 2014
    Applicant: Soft Machines, Inc.
    Inventors: Karthikeyan Avudaiyappan, Mohammad Abdallah
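    Illustrative sketch (not from the publication): the per-byte merge in Python. Wherever the store coalescing cache block holds valid (newer) bytes, those bytes overwrite the load cache block's bytes.

      def merge_line(load_line, store_line, store_valid):
          assert len(load_line) == len(store_line) == len(store_valid)
          return bytes(s if valid else l
                       for l, s, valid in zip(load_line, store_line, store_valid))

      line = merge_line(b"ABCDEFGH", b"..cd..gh",
                        [False, False, True, True, False, False, True, True])
      print(line)   # b'ABcdEFgh': bytes 2, 3, 6, 7 come from the store cache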
  • Publication number: 20140025891
    Abstract: One embodiment sets forth a technique for ensuring relaxed coherency between different caches. Two different execution units may be configured to access different caches that may store one or more cache lines corresponding to the same memory address. During time periods between memory barrier instructions, relaxed coherency is maintained between the different caches. More specifically, writes to a cache line in a first cache that corresponds to a particular memory address are not necessarily propagated to a cache line in a second cache before the second cache receives a read or write request that also corresponds to the particular memory address. Therefore, the first cache and the second cache are not necessarily coherent during time periods of relaxed coherency. Execution of a memory barrier instruction ensures that the different caches will be coherent before a new period of relaxed coherency begins.
    Type: Application
    Filed: July 20, 2012
    Publication date: January 23, 2014
    Inventors: Joel James MCCORMACK, Rajesh KOTA, Olivier GIROUX, Emmett M. KILGARIFF