Multiple Caches Patents (Class 711/119)
  • Publication number: 20140281232
    Abstract: Methods, systems and software for inserting prefetches into software applications or programs are described. A baseline program is analyzed using various pattern analyses to identify target instructions for which prefetching may be beneficial. Optionally, a cost/benefit analysis can be performed to determine whether it is worthwhile to insert prefetches for the target instructions.
    Type: Application
    Filed: March 14, 2014
    Publication date: September 18, 2014
    Applicant: Hagersten Optimization AB
    Inventors: Ernst Erik Hagersten, Muneeb Anwar Khan
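
A minimal Python sketch of the pattern analysis and cost/benefit test described in the entry above; the stride detector, the cost model, and all names and numbers are this editor's illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: detect a constant-stride access pattern in a load's
# address trace, then apply a simple cost/benefit test before deciding to
# insert a prefetch for it. Latency and overhead figures are invented.

def detect_stride(addresses):
    """Return the constant stride of an address trace, or None."""
    if len(addresses) < 3:
        return None
    strides = {b - a for a, b in zip(addresses, addresses[1:])}
    return strides.pop() if len(strides) == 1 else None

def should_insert_prefetch(addresses, miss_latency=200, prefetch_overhead=4,
                           expected_miss_rate=0.9):
    """Insert a prefetch only if the cycles saved on expected misses
    outweigh the per-access overhead of the prefetch instruction."""
    if detect_stride(addresses) is None:
        return False
    return expected_miss_rate * miss_latency > prefetch_overhead

# Example: a load walking an array with a 64-byte stride qualifies.
trace = [0x1000, 0x1040, 0x1080, 0x10c0]
print(detect_stride(trace), should_insert_prefetch(trace))  # 64 True
```
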
  • Patent number: 8838899
    Abstract: One or more of the present techniques provide a compute engine buffer configured to maneuver data and increase the efficiency of a compute engine. One such compute engine buffer is connected to a compute engine which performs operations on operands retrieved from the buffer, and stores results of the operations to the buffer. Such a compute engine buffer includes a compute buffer having storage units which may be electrically connected or isolated, based on the size of the operands to be stored and the configuration of the compute engine. The compute engine buffer further includes a data buffer, which may be a simple buffer. Operands may be copied to the data buffer before being copied to the compute buffer, which may save additional clock cycles for the compute engine, further increasing the compute engine efficiency.
    Type: Grant
    Filed: August 6, 2013
    Date of Patent: September 16, 2014
    Assignee: Micron Technology, Inc.
    Inventor: Robert Walker
  • Publication number: 20140258610
    Abstract: The invention may be embodied in a cache memory volume windows data storage system to enable cache memory rebuilds in response to power-on-reset (POR) events. To handle POR events occurring while a flush from the cache memory to the permanent memory is taking place, the storage controller maintains a duplicate copy of a volume window bitmap and a volume mark register while a portion of the cache memory is unavailable due to the flush event. The second copy of the volume bitmap and volume mark register concatenation is used to account for the case where a POR event occurs while the flush is in process. The firmware uses the peer drives and the applicable cache rebuild protocol (e.g., RAID) to rebuild the data for all volume windows containing data that may have become corrupted by a POR event occurring while cache memory flush events are in progress.
    Type: Application
    Filed: March 8, 2013
    Publication date: September 11, 2014
    Applicant: LSI CORPORATION
    Inventor: Kapil Sundrani
  • Patent number: 8832415
    Abstract: A multiprocessor system includes nodes. Each node includes a data path that includes a core, a TLB, and a first level cache implementing disambiguation. The system also includes at least one second level cache and a main memory. For thread memory access requests, the core uses an address associated with an instruction format of the core. The first level cache uses an address format related to the size of the main memory plus an offset corresponding to hardware thread meta data. The second level cache uses a physical main memory address plus software thread meta data to store the memory access request. The second level cache accesses the main memory using the physical address with neither the offset nor the thread meta data after resolving speculation. In short, the system maps a virtual address to different physical addresses for value disambiguation across different threads.
    Type: Grant
    Filed: January 4, 2011
    Date of Patent: September 9, 2014
    Assignee: International Business Machines Corporation
    Inventors: Alan Gara, Martin Ohmacht
  • Patent number: 8832414
    Abstract: Technologies are generally described herein for determining a profitability of direct fetching in a multicore processor. The multicore processor may include a first and a second tile. The first tile may include a first core and a first cache. The second tile may include a second core, a second cache, and a fetch location pointer register (FLPR). The multicore processor may migrate a thread executing on the first core to the second core. The multicore processor may store a location of the first cache in the FLPR. The multicore processor may execute the thread on the second core. The multicore processor may identify a cache miss for a block in the second cache. The multicore processor may determine whether a profitability of direct fetching of the block indicates direct fetching or directory-based fetching. The multicore processor may perform direct fetching or directory-based fetching based on the determination.
    Type: Grant
    Filed: March 24, 2011
    Date of Patent: September 9, 2014
    Assignee: Empire Technology Development LLC
    Inventor: Yan Solihin
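
A rough Python sketch of the kind of profitability test the entry above describes: after a thread migrates, a miss can be served by fetching directly from the previous tile's cache (whose location the FLPR records) or through the usual directory path. The latency figures and hit probability are invented for illustration.

```python
# Hypothetical sketch: compare the expected cost of a direct fetch from the
# old tile's cache against a plain directory-based fetch. Numbers are invented.

def choose_fetch(flpr_tile, directory_latency=60, direct_latency=25,
                 hit_probability=0.7):
    """Return which fetch path to use for a block that missed in the second
    tile's cache. A direct fetch that misses in the old tile's cache still
    pays the directory cost afterwards."""
    if flpr_tile is None:
        return "directory"
    expected_direct = (hit_probability * direct_latency
                       + (1 - hit_probability)
                       * (direct_latency + directory_latency))
    return "direct" if expected_direct < directory_latency else "directory"

print(choose_fetch(flpr_tile=3))     # 'direct'  (expected 43 < 60 cycles)
print(choose_fetch(flpr_tile=None))  # 'directory'
```
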
  • Patent number: 8832377
    Abstract: Information is maintained on strides configured in a second cache and occupancy counts for the strides indicating an extent to which the strides are populated with valid tracks and invalid tracks. A determination is made of tracks to demote from a first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache having an occupancy count indicating the stride is empty. A determination is made of a target stride in the second cache based on the occupancy counts of the strides in the second cache. A determination is made of at least two source strides in the second cache having valid tracks based on the occupancy counts of the strides in the second cache. The target stride is populated with the valid tracks from the source strides.
    Type: Grant
    Filed: February 27, 2013
    Date of Patent: September 9, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Lokesh M. Gupta
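
The occupancy-count bookkeeping in this entry (and the closely related IBM entries below) can be illustrated with a short sketch. The representation below, with a stride as a list of tracks and None marking an invalid track, and the selection heuristics are this editor's assumptions, not the patent's.

```python
# Hypothetical sketch: demoted tracks fill an empty stride, and valid tracks
# from the two least-occupied source strides are consolidated into a target.

STRIDE_SIZE = 8

def add_demoted(strides, occupancy, demoted_tracks):
    """Place a stride of tracks demoted from the first cache into a stride
    whose occupancy count says it is empty."""
    target = next(i for i, n in enumerate(occupancy) if n == 0)
    strides[target] = list(demoted_tracks)
    occupancy[target] = len(demoted_tracks)
    return target

def consolidate(strides, occupancy):
    """Copy valid tracks from two partially occupied source strides into
    one empty target stride, freeing the sources."""
    sources = sorted((i for i, n in enumerate(occupancy) if 0 < n < STRIDE_SIZE),
                     key=lambda i: occupancy[i])[:2]
    target = next(i for i, n in enumerate(occupancy) if n == 0)
    for src in sources:
        strides[target] += [t for t in strides[src] if t is not None]
        strides[src], occupancy[src] = [], 0
    occupancy[target] = len(strides[target])
    return target

strides = [[], [10, None, 11], [None, 12]]
occupancy = [0, 2, 1]
print(consolidate(strides, occupancy), strides)  # 0 [[12, 10, 11], [], []]
```
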
  • Patent number: 8825956
    Abstract: Information on strides configured in the second cache includes information indicating a number of valid tracks in the strides, wherein a stride has at least one of valid tracks and free tracks not including valid data. A determination is made of tracks to demote from the first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache that has no valid tracks. A target stride in the second cache is selected based on a stride most recently used to consolidate strides from at least two strides into one stride. Data from the valid tracks is copied from at least two source strides in the second cache to the target stride.
    Type: Grant
    Filed: February 27, 2013
    Date of Patent: September 2, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Lokesh M. Gupta
  • Patent number: 8825952
    Abstract: Provided are a computer program product, system, and method for handling high priority requests in a sequential access storage device. Received modified tracks for write requests are cached in a non-volatile storage device integrated with the sequential access storage device. A destage request is added to a request queue for a received write request having modified tracks for the sequential access storage medium cached in the non-volatile storage device. A read request indicating a priority is received. A determination is made of a priority of the read request as having a first priority or a second priority. The read request is added to the request queue in response to determining that the determined priority is the first priority. The read request is processed at a higher priority than the read and destage requests in the request queue in response to determining that the determined priority is the second priority.
    Type: Grant
    Filed: May 23, 2011
    Date of Patent: September 2, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Lokesh M. Gupta
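
A small sketch of the queueing behavior in the entry above: destage requests and first-priority reads share the request queue in order, while a second-priority read is served ahead of everything queued. The class and field names are invented for illustration.

```python
# Hypothetical sketch of a request queue with a bypass path for
# second-priority (urgent) reads.

from collections import deque

class RequestQueue:
    def __init__(self):
        self.fifo = deque()    # destages and first-priority reads, in order
        self.urgent = deque()  # second-priority reads, served first

    def add_destage(self, track):
        self.fifo.append(("destage", track))

    def add_read(self, track, priority):
        if priority == 2:      # second priority: processed ahead of the queue
            self.urgent.append(("read", track))
        else:                  # first priority: queued like a destage
            self.fifo.append(("read", track))

    def next_request(self):
        return self.urgent.popleft() if self.urgent else self.fifo.popleft()

q = RequestQueue()
q.add_destage(10)
q.add_read(11, priority=1)
q.add_read(12, priority=2)
print(q.next_request())  # ('read', 12) -- the urgent read jumps the queue
```
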
  • Patent number: 8825953
    Abstract: Information on strides configured in the second cache includes information indicating a number of valid tracks in the strides, wherein a stride has at least one of valid tracks and free tracks not including valid data. A determination is made of tracks to demote from the first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache that has no valid tracks. A target stride in the second cache is selected based on a stride most recently used to consolidate strides from at least two strides into one stride. Data from the valid tracks is copied from at least two source strides in the second cache to the target stride.
    Type: Grant
    Filed: January 17, 2012
    Date of Patent: September 2, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Lokesh M. Gupta
  • Patent number: 8825957
    Abstract: Information is maintained on strides configured in a second cache and occupancy counts for the strides indicating an extent to which the strides are populated with valid tracks and invalid tracks. A determination is made of tracks to demote from a first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache having an occupancy count indicating the stride is empty. A determination is made of a target stride in the second cache based on the occupancy counts of the strides in the second cache. A determination is made of at least two source strides in the second cache having valid tracks based on the occupancy counts of the strides in the second cache. The target stride is populated with the valid tracks from the source strides.
    Type: Grant
    Filed: January 17, 2012
    Date of Patent: September 2, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Lokesh M. Gupta
  • Publication number: 20140237185
    Abstract: Technologies are generally described for methods, systems, and devices effective to implement one-cacheable multi-core architectures. In one example, a multi-core processor that includes a first and second tile may be configured to implement a one-cacheable architecture. The second tile may be configured to generate a request for a data block. The first tile may be configured to receive the request for the data block, and determine that the requested data block is part of a group of data blocks identified as one-cacheable. The first tile may further determine that the requested data block is stored in a first cache in the first tile. The first tile may send the data block from the first cache in the first tile to the second tile, and invalidate the data blocks of the group of data blocks in the first cache in the first tile.
    Type: Application
    Filed: February 21, 2013
    Publication date: August 21, 2014
    Applicant: EMPIRE TECHNOLOGY DEVELOPMENT, LLC
    Inventor: Yan Solihin
  • Patent number: 8812802
    Abstract: A memory subsystem includes a volatile memory, a nonvolatile memory, and a controller including logic to interface the volatile memory to an external system. The volatile memory is addressable for reading and writing by the external system. The memory subsystem includes a power controller with logic to detect when power from the external system to at least one of the volatile and nonvolatile memories and to the controller fails. When external system power fails, backup power is provided to at least one of the volatile and nonvolatile memories and to the controller for long enough to enable the controller to back up data from the volatile memory to the nonvolatile memory.
    Type: Grant
    Filed: May 28, 2012
    Date of Patent: August 19, 2014
    Assignee: AgigA Tech, Inc.
    Inventor: Ronald H Sartore
  • Patent number: 8812785
    Abstract: Provided are a computer program product, system, and method for managing track discard requests to include in discard track messages. A backup copy of a track in a cache is maintained in the cache backup device. A track discard request is generated to discard tracks in the cache backup device removed from the cache. Track discard requests are queued in a discard track queue. In response to detecting that a predetermined number of track discard requests are queued in the discard track queue while processing in a discard multi-track mode, one discard multiple tracks message is sent indicating the tracks indicated in the queued predetermined number of track discard requests to the cache backup device instructing the cache backup device to discard the tracks indicated in the discard multiple tracks message. In response to determining a predetermined number of periods of inactivity while processing in the discard multi-track mode, processing the track discard requests is switched to a discard single track mode.
    Type: Grant
    Filed: May 23, 2011
    Date of Patent: August 19, 2014
    Assignee: International Business Machines Corporation
    Inventors: Kevin J. Ash, Michael T. Benhase, Lokesh M. Gupta, Kenneth W. Todd
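
The batching policy in the entry above sketches easily: discard requests accumulate while in multi-track mode and go out as one multi-track message once a threshold is queued, and repeated periods of inactivity switch processing to single-track mode. The threshold, idle limit, and message format below are illustrative assumptions.

```python
# Hypothetical sketch of discard-request batching with a fallback to
# single-track mode after repeated idle periods.

class DiscardBatcher:
    def __init__(self, batch_size=8, idle_limit=3):
        self.queue, self.batch_size, self.idle_limit = [], batch_size, idle_limit
        self.multi_track_mode, self.idle_periods = True, 0

    def request_discard(self, track):
        self.idle_periods = 0
        if not self.multi_track_mode:
            self.send([track])            # single-track mode: one per message
        else:
            self.queue.append(track)
            if len(self.queue) >= self.batch_size:
                self.send(self.queue)     # one message for the whole batch
                self.queue = []

    def tick(self):                       # called once per inactivity period
        self.idle_periods += 1
        if self.multi_track_mode and self.idle_periods >= self.idle_limit:
            self.multi_track_mode = False # too quiet: batching no longer pays

    def send(self, tracks):
        print("discard message for tracks:", tracks)

b = DiscardBatcher(batch_size=3)
for t in (1, 2, 3):
    b.request_discard(t)  # one "discard multiple tracks" message for 1, 2, 3
```
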
  • Patent number: 8812796
    Abstract: Private or shared read-only memory regions. One embodiment may be practiced in a computing environment including a plurality of agents. A method includes acts for declaring one or more memory regions private to a particular agent or shared read only amongst agents by having software utilize processor level instructions to specify to hardware the private or shared read only memory address regions. The method includes an agent executing a processor level instruction to specify one or more memory regions as private to the agent or shared read-only amongst a plurality of agents. As a result of an agent executing a processor level instruction to specify one or more memory regions as private to the agent or shared read-only amongst a plurality of agents, a hardware component monitors the one or more memory regions for conflicting accesses or prevents conflicting accesses on the one or more memory regions.
    Type: Grant
    Filed: June 26, 2009
    Date of Patent: August 19, 2014
    Assignee: Microsoft Corporation
    Inventors: Jan Gray, David Callahan, Burton Jordan Smith, Gad Sheaffer, Ali-Reza Adl-Tabatabai
  • Publication number: 20140229676
    Abstract: System and techniques for rebuilding a redundant secondary storage cache including a first storage device and a second storage device are described. A metadata entry indicative of a validity of a portion of information stored by a first storage cache device and associated with a region of a primary storage device is received. When the validity of the portion of information associated with the region of the primary storage device is established, a region lock is requested on the region of the primary storage device associated with the portion of information stored by the first storage cache device. Then, the portion of information and the corresponding metadata entry associated with the region of the primary storage device are copied from the first storage cache device to a second storage cache device to rebuild the second storage cache device.
    Type: Application
    Filed: February 11, 2013
    Publication date: August 14, 2014
    Applicant: LSI Corporation
    Inventors: Sujan Biswas, Karimulla Sheik, Sumanesh Samanta, Debal K. Mridha, Naga S. Vadalamani
  • Patent number: 8806138
    Abstract: Data values are cached by dynamically determining the dependencies of computation nodes on input parameters and on other results of computation nodes. Cache data structures are maintained for computation nodes. When a node accesses a parameter, the parameter and its current value are added to the node's cache data structure. The cache data structure stores the result value of the computation node. When one computation node calls another node, the parameters and parameter values accessed by the second computation node may be added to the first and second computation nodes' cache data structures. When a computation node is called with parameter values, the cache data structure of the computation node is searched for a cached result value corresponding to at least a portion of the parameter values. If a cached result value is not found, the computation node is executed to determine and optionally cache the result value.
    Type: Grant
    Filed: March 30, 2012
    Date of Patent: August 12, 2014
    Assignee: Pixar
    Inventor: Christopher Colby
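
The memoization scheme in the entry above maps naturally to Python. The sketch below records which parameters a computation node actually reads and keys its cached result on just those values; it is an editor's reconstruction of the idea, not Pixar's implementation.

```python
# Hypothetical sketch: cache a computation node's result keyed only on the
# parameters the node actually accessed while running.

class Node:
    def __init__(self, fn):
        self.fn, self.cache = fn, {}

    def __call__(self, params):
        accessed = {}

        class Tracker(dict):              # records which keys fn reads
            def __getitem__(self, k):
                accessed[k] = params[k]
                return params[k]

        for deps, result in self.cache.values():
            if all(params.get(k) == v for k, v in deps.items()):
                return result             # hit: every parameter fn read matches
        result = self.fn(Tracker())       # miss: execute and record dependencies
        self.cache[tuple(sorted(accessed.items()))] = (dict(accessed), result)
        return result

shade = Node(lambda p: p["base"] * p["intensity"])
print(shade({"base": 2, "intensity": 5, "unused": 1}))  # computes 10
print(shade({"base": 2, "intensity": 5, "unused": 9}))  # cache hit: 10
```

Note that changing a parameter the node never read ("unused" above) still hits the cache, which is the point of tracking dependencies dynamically rather than keying on all parameters.
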
  • Patent number: 8806133
    Abstract: Protecting computers against cache poisoning, including a cache-entity table configured to maintain a plurality of associations between a plurality of data caches and a plurality of entities, where each of the caches is associated with a different one of the entities, and a cache manager configured to receive data that is associated with any of the entities and store the received data in any of the caches that the cache-entity table indicates is associated with the entity, and receive a data request that is associated with any of the entities and retrieve the requested data from any of the caches that the cache-entity table indicates is associated with the requesting entity, where any of the cache-entity table and cache manager are implemented in either of computer hardware and computer software embodied in a computer-readable medium.
    Type: Grant
    Filed: September 14, 2009
    Date of Patent: August 12, 2014
    Assignee: International Business Machines Corporation
    Inventors: Roee Hay, Adi Sharabani
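
A minimal sketch of the cache-entity isolation the entry above describes: each entity gets its own cache, and lookups consult only the requesting entity's cache, so one entity cannot poison results served to another. The dictionary-based structure and all names are assumptions for illustration.

```python
# Hypothetical sketch: a cache-entity table mapping each entity to a
# private cache, consulted only for that entity's requests.

class EntityCacheManager:
    def __init__(self):
        self.cache_entity_table = {}  # entity -> its private cache

    def store(self, entity, key, value):
        self.cache_entity_table.setdefault(entity, {})[key] = value

    def retrieve(self, entity, key):
        # Only the requesting entity's own cache is ever consulted.
        return self.cache_entity_table.get(entity, {}).get(key)

mgr = EntityCacheManager()
mgr.store("alice", "example.com", "1.2.3.4")
print(mgr.retrieve("alice", "example.com"))    # '1.2.3.4'
print(mgr.retrieve("mallory", "example.com"))  # None -- isolated per entity
```
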
  • Publication number: 20140223072
    Abstract: A data storage system includes two tiers of caching memory. Cached data is organized into cache windows, and the cache windows are organized into a plurality of priority queues. Cache windows are moved between priority queues on the basis of a threshold data access frequency; only when both a cache window is flagged for promotion and a cache window is flagged for demotion will a swap occur.
    Type: Application
    Filed: February 7, 2013
    Publication date: August 7, 2014
    Applicant: LSI CORPORATION
    Inventors: Vinay Bangalore Shivashankaraiah, Mark Ish
  • Patent number: 8799578
    Abstract: Provided are a computer program product, system, and method for managing unmodified tracks maintained in both a first cache and a second cache. The first cache maintains unmodified tracks in the storage subject to Input/Output (I/O) requests. Unmodified tracks are demoted from the first cache to a second cache. An inclusive list indicates unmodified tracks maintained in both the first cache and a second cache. An exclusive list indicates unmodified tracks maintained in the second cache but not the first cache. The inclusive list and the exclusive list are used to determine whether to promote to the second cache an unmodified track demoted from the first cache.
    Type: Grant
    Filed: May 23, 2011
    Date of Patent: August 5, 2014
    Assignee: International Business Machines Corporation
    Inventors: Kevin J. Ash, Michael T. Benhase, Lokesh M. Gupta, Matthew J. Kalos, Kenneth W. Todd
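
The inclusive/exclusive list logic in the entry above can be sketched directly; the method names and return convention below are invented for illustration.

```python
# Hypothetical sketch: an inclusive list (track in both caches) and an
# exclusive list (track only in the second cache) decide whether a track
# demoted from the first cache needs to be promoted to the second cache.

class DemoteFilter:
    def __init__(self):
        self.inclusive = set()  # unmodified tracks in first AND second cache
        self.exclusive = set()  # unmodified tracks only in the second cache

    def on_promote_to_first(self, track):
        if track in self.exclusive:
            self.exclusive.discard(track)
            self.inclusive.add(track)  # now resident in both caches

    def on_demote_from_first(self, track):
        if track in self.inclusive:
            self.inclusive.discard(track)
            self.exclusive.add(track)
            return False               # second cache already holds it
        self.exclusive.add(track)
        return True                    # promote the track to the second cache

f = DemoteFilter()
print(f.on_demote_from_first("t1"))  # True  -- promote t1 to the second cache
f.on_promote_to_first("t1")          # t1 read back into the first cache
print(f.on_demote_from_first("t1"))  # False -- second cache still holds it
```
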
  • Patent number: 8799583
    Abstract: A method and central processing unit supporting atomic access of shared data by a sequence of memory access operations. A processor status flag is reset. A processor executes, subsequent to the resetting of the processor status flag, a sequence of program instructions with instructions accessing a subset of shared data contained within its local cache. During execution of the sequence of program instructions and in response to a modification by another processor of the subset of shared data, the processor status flag is set. Subsequent to executing the sequence of program instructions and based upon the state of the processor status flag, either a first program processing or a second program processing is executed. In some examples the first program processing includes storing results data into the local cache and the second program processing includes discarding the results data.
    Type: Grant
    Filed: May 25, 2010
    Date of Patent: August 5, 2014
    Assignee: International Business Machines Corporation
    Inventors: Mark S. Farrell, Jonathan T. Hsieh, Christian Jacobi, Timothy J. Slegel
  • Patent number: 8799569
    Abstract: Various method and system embodiments for facilitating catalog sharing in multiprocessor systems use multiple ECS cache structures to which catalogs are assigned based on an attribute such as SMS storage class or a high level qualifier (HLQ) (e.g. an N-to-1 mapping) or each individual catalog (e.g. a 1-to-1 mapping). When maintenance is performed on an ECS shared catalog, the multiple ECS cache structure requires that only those catalogs associated with a particular ECS cache structure be disconnected. Any catalogs in the structure that are not involved in or affected by the maintenance may be temporarily or permanently moved to a different ECS cache structure. As a result, VVDS sharing is only required for those catalogs on which maintenance is being performed or that remain associated with that ECS cache structure during maintenance. This reduces I/O activity to the DASD, and results in a significant overall performance improvement.
    Type: Grant
    Filed: April 17, 2012
    Date of Patent: August 5, 2014
    Assignee: International Business Machines Corporation
    Inventors: Eric J. Harris, Franklin Emmert McCune, David Charles Reed, Max Douglas Smith
  • Patent number: 8799396
    Abstract: Network cache systems are used to improve network performance and reduce network traffic. An improved network cache system that uses a centralized shared cache system is disclosed. Each cache device that shares the centralized shared cache system maintains its own catalog, database or metadata index of the content stored on the centralized shared cache system. When one of the cache devices that shares the centralized shared cache system stores a new content resource to the centralized shared cache system, that cache device transmits a broadcast message to all of the peer cache devices. The other cache devices that receive the broadcast message will then update their own local catalog, database or metadata index of the centralized share cache system with the information about the new content resource.
    Type: Grant
    Filed: February 4, 2008
    Date of Patent: August 5, 2014
    Assignee: Cisco Technology, Inc.
    Inventor: Theodore Robert Grevers, Jr.
  • Publication number: 20140215156
    Abstract: Provided are a prioritized dual caching method and apparatus. The dual caching apparatus includes a content cache unit configured to store a content cache separated into a first (premium) cache and a second (general) cache, a pointer storage unit configured to store a pointer for variably separating the first and second caches, a threshold value storage unit configured to store a first threshold value and a second threshold value that is less than the first threshold value, and a cache policy execution unit configured to receive a request for content, manage a request count value for the content, and execute a cache policy based on results of comparing the request count value to the first threshold value and the second threshold value and whether there is requested content in the content cache.
    Type: Application
    Filed: March 18, 2013
    Publication date: July 31, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventor: Jong Geun PARK
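
A compact sketch of the two-threshold policy in the entry above: a request count at or above the first threshold places content in the premium cache, at or above the second threshold in the general cache, and below both it is served without caching. The threshold values and all names are invented.

```python
# Hypothetical sketch of prioritized dual caching with two thresholds.

T1, T2 = 10, 3  # first (premium) and second (general) thresholds, T2 < T1

class DualCache:
    def __init__(self):
        self.premium, self.general, self.counts = {}, {}, {}

    def request(self, key, fetch):
        self.counts[key] = self.counts.get(key, 0) + 1
        if key in self.premium:
            value = self.premium[key]
        elif key in self.general:
            value = self.general[key]
        else:
            value = fetch(key)            # not cached: fetch from the origin
        if self.counts[key] >= T1:        # hot content: premium cache
            self.general.pop(key, None)
            self.premium[key] = value
        elif self.counts[key] >= T2:      # warm content: general cache
            self.general.setdefault(key, value)
        return value

cache = DualCache()
for _ in range(3):
    cache.request("clip", fetch=lambda k: "bytes-of-" + k)
print("clip" in cache.general, "clip" in cache.premium)  # True False
```
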
  • Patent number: 8793436
    Abstract: Provided are a computer program product, system, and method for cache management of tracks in a first cache and a second cache for a storage. The first cache maintains modified and unmodified tracks in the storage subject to Input/Output (I/O) requests. Modified and unmodified tracks are demoted from the first cache. The modified and the unmodified tracks demoted from the first cache are promoted to the second cache. The unmodified tracks demoted from the second cache are discarded. The modified tracks in the second cache that are at proximate physical locations on the storage device are grouped and the grouped modified tracks are destaged from the second cache to the storage device.
    Type: Grant
    Filed: May 23, 2011
    Date of Patent: July 29, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Binny S. Gill, Lokesh M. Gupta, Matthew J. Kalos
  • Patent number: 8793437
    Abstract: A cache memory system using temporal locality information and a data storage method are provided. The cache memory system includes: a main cache which stores data accessed by a central processing unit; an extended cache which stores the data if the data is evicted from the main cache; and a separation cache which stores the data of the extended cache when the data of the extended cache is evicted from the extended cache and temporal locality information corresponding to the data of the extended cache satisfies a predetermined condition.
    Type: Grant
    Filed: August 8, 2007
    Date of Patent: July 29, 2014
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jong Myon Kim, Soojung Ryu, Dong-Hoon Yoo, Dong Kwan Suh, Jeongwook Kim
  • Publication number: 20140208021
    Abstract: For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, a Solid State Device (SSD) tier is variably shared between the lower-speed cache and the managed tiered levels of storage such that the managed tiered levels of storage are operational on large data segments, and the lower-speed cache is allocated with the large data segments, yet operates with data segments of a smaller size than the large data segments and within the large data segments.
    Type: Application
    Filed: November 7, 2013
    Publication date: July 24, 2014
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael T. BENHASE, Lokesh M. GUPTA, Karl A. NIELSEN
  • Publication number: 20140208030
    Abstract: An information processing apparatus including a plurality of mutually connected system boards, wherein each of the system boards includes: a plurality of processors; and a plurality of memories each of which stores data and directory information corresponding to the data, and corresponds to any one of the processors, and wherein each of the plurality of processors, upon receiving a read request for data stored in a memory corresponding to the own processor from another processor, performs an exclusive logical sum operation on identification information included in the read request and identifying the another processor and a check bit included in the directory information and identifying a processor which holds target data of the read request, increments a count value included in the directory information and indicating the number of processors which hold the target data, and sets presence information included in the directory information and indicating a system board which includes the another processor.
    Type: Application
    Filed: March 20, 2014
    Publication date: July 24, 2014
    Applicant: FUJITSU LIMITED
    Inventors: Hideki SAKATA, Go SUGIZAKI, Naoya ISHIMURA
  • Publication number: 20140208029
    Abstract: For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, and at a time in which at least one data segment is to be migrated from one level to another level of the tiered levels of storage, a data migration mechanism is initiated by copying data resident in the lower-speed cache corresponding to the at least one data segment to be migrated to a target on the other level, and reading remaining data, not previously copied from the lower-speed cache, from a source on the one level, and writing the remaining data to the target.
    Type: Application
    Filed: January 22, 2013
    Publication date: July 24, 2014
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael T. BENHASE, Lokesh M. GUPTA
  • Publication number: 20140208018
    Abstract: For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that uniformly hot ones of the groups of data segments are migrated to use a Solid State Drive (SSD) portion of the tiered levels of storage, clumped hot ones of the groups of data segments are migrated to use the SSD portion while using the lower-speed cache for a remaining portion of the clumped hot ones, and sparsely hot ones of the groups of data segments are migrated to use the lower-speed cache while using a lower one of the tiered levels of storage for a remaining portion of the sparsely hot ones.
    Type: Application
    Filed: January 22, 2013
    Publication date: July 24, 2014
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael T. BENHASE, Lokesh M. GUPTA, Cheng-Chung SONG
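
A sketch of the three-way classification implied by the entry above. The abstract does not specify how "uniformly", "clumped", and "sparsely" hot are measured, so the heat metric and thresholds below are purely illustrative.

```python
# Hypothetical sketch: classify a group of data segments by how its heat is
# distributed, then place it per the abstract's three cases.

def classify(heats, hot=0.5):
    """heats: per-segment access-frequency scores in [0, 1]."""
    hot_fraction = sum(h >= hot for h in heats) / len(heats)
    if hot_fraction > 0.8:
        return "uniformly hot"
    if hot_fraction > 0.3:
        return "clumped hot"
    return "sparsely hot"

PLACEMENT = {
    "uniformly hot": ("SSD tier", None),
    "clumped hot":   ("SSD tier", "lower-speed cache for the remainder"),
    "sparsely hot":  ("lower-speed cache", "lower storage tier for the remainder"),
}

heats = [0.9, 0.8, 0.1, 0.05, 0.9, 0.7, 0.2, 0.1]
label = classify(heats)
print(label, PLACEMENT[label])
# clumped hot ('SSD tier', 'lower-speed cache for the remainder')
```
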
  • Publication number: 20140208017
    Abstract: For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, a Solid State Device (SSD) tier is variably shared between the lower-speed cache and the managed tiered levels of storage such that the managed tiered levels of storage are operational on large data segments, and the lower-speed cache is allocated with the large data segments, yet operates with data segments of a smaller size than the large data segments and within the large data segments.
    Type: Application
    Filed: January 22, 2013
    Publication date: July 24, 2014
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael T. BENHASE, Lokesh M. GUPTA, Karl A. NIELSEN
  • Publication number: 20140208020
    Abstract: For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that uniformly hot ones of the groups of data segments are migrated to utilize a Solid State Drive (SSD) portion of the tiered levels of storage, while sparsely hot ones of the groups of data segments are migrated to utilize the lower-speed cache.
    Type: Application
    Filed: November 7, 2013
    Publication date: July 24, 2014
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael T. BENHASE, Lokesh M. GUPTA, Cheng-Chung SONG
  • Patent number: 8788742
    Abstract: Provided are a computer program product, system, and method for using an attribute of a write request to determine where to cache data in a storage system having multiple caches including non-volatile storage cache in a sequential access storage device. Received modified tracks are cached in the non-volatile storage device integrated with the sequential access storage device in response to determining to cache the modified tracks. A write request having modified tracks is received. A determination is made as to whether an attribute of the received write request satisfies a condition. The received modified tracks for the write request are cached in the non-volatile storage device in response to determining that the determined attribute does not satisfy the condition. A destage request is added to a request queue for the received write request having the determined attribute not satisfying the condition.
    Type: Grant
    Filed: May 23, 2011
    Date of Patent: July 22, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Binny S. Gill, Lokesh M. Gupta, Matthew J. Kalos
  • Publication number: 20140201443
    Abstract: In various embodiments, the present disclosure provides a system comprising a first plurality of processing cores, ones of the first plurality of processing cores coupled to a respective core interface module among a first plurality of core interface modules, the first plurality of core interface modules configured to be coupled to form a first ring network of processing cores; a second plurality of processing cores, ones of the second plurality of processing cores coupled to a respective core interface module among a second plurality of core interface modules, the second plurality of core interface modules configured to be coupled to form a second ring network of processing cores; a first global interface module to form an interface between the first ring network and a third ring network; and a second global interface module to form an interface between the second ring network and the third ring network.
    Type: Application
    Filed: January 22, 2014
    Publication date: July 17, 2014
    Applicant: Marvell World Trade Ltd.
    Inventors: Eitan Joshua, Shaul Chapman, Erez Amit, Moshe Raz, Husam Khshaiboun
  • Publication number: 20140201444
    Abstract: In various embodiments, the present disclosure provides a system comprising a first plurality of processing cores, ones of the first plurality of processing cores coupled to a respective core interface module among a first plurality of core interface modules, the first plurality of core interface modules configured to be coupled to form a first ring network of processing cores; a second plurality of processing cores, ones of the second plurality of processing cores coupled to a respective core interface module among a second plurality of core interface modules, the second plurality of core interface modules configured to be coupled to form a second ring network of processing cores; a first global interface module to form an interface between the first ring network and a third ring network; and a second global interface module to form an interface between the second ring network and the third ring network.
    Type: Application
    Filed: January 22, 2014
    Publication date: July 17, 2014
    Applicant: Marvell World Trade Ltd.
    Inventors: Eitan Joshua, Shaul Chapman, Erez Amit, Moshe Raz, Amit Shmilovich
  • Publication number: 20140201442
    Abstract: Systems and techniques for continuously writing to a secondary storage cache are described. A data storage region of a secondary storage cache is divided into a first cache region and a second cache region. A data storage threshold for the first cache region is determined. Data is stored in the first cache region until the data storage threshold is met. Then, additional data is stored in the second cache region while the data stored in the first cache region is written back to a primary storage device.
    Type: Application
    Filed: January 15, 2013
    Publication date: July 17, 2014
    Applicant: LSI CORPORATION
    Inventors: Jeevanandham Rajasekaran, Ankit Sihare
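
The two-region scheme in the entry above sketches cleanly: writes fill the active region until its threshold is met, then new writes switch to the other region while the full one is written back to primary storage. The region size, threshold, and names are invented.

```python
# Hypothetical sketch: continuous writing to a secondary cache split into
# two regions that alternate between accepting writes and being flushed.

class TwoRegionCache:
    def __init__(self, threshold=4):
        self.regions = [[], []]
        self.active = 0
        self.threshold = threshold

    def write(self, block, flush_to_primary):
        region = self.regions[self.active]
        region.append(block)
        if len(region) >= self.threshold:
            full, self.active = self.active, 1 - self.active
            # New writes now land in the other region; flush the full one.
            flush_to_primary(self.regions[full])
            self.regions[full] = []

cache = TwoRegionCache()
for b in range(6):
    cache.write(b, flush_to_primary=lambda blocks: print("flushing", blocks))
# flushing [0, 1, 2, 3]; blocks 4 and 5 continue into the second region
```
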
  • Publication number: 20140201445
    Abstract: In various embodiments, the present disclosure provides a system comprising a first plurality of processing cores, ones of the first plurality of processing cores coupled to a respective core interface module among a first plurality of core interface modules, the first plurality of core interface modules configured to be coupled to form a first ring network of processing cores; a second plurality of processing cores, ones of the second plurality of processing cores coupled to a respective core interface module among a second plurality of core interface modules, the second plurality of core interface modules configured to be coupled to form a second ring network of processing cores; a first global interface module to form an interface between the first ring network and a third ring network; and a second global interface module to form an interface between the second ring network and the third ring network.
    Type: Application
    Filed: January 22, 2014
    Publication date: July 17, 2014
    Applicant: Marvell World Trade Ltd.
    Inventors: Eitan Joshua, Erez Amit, Shaul Chapman, Sujat Jamil, Frank O'Bleness
  • Patent number: 8782345
    Abstract: Subject matter disclosed herein relates to sub-block accessible cache memory.
    Type: Grant
    Filed: August 5, 2013
    Date of Patent: July 15, 2014
    Assignee: Micron Technology, Inc.
    Inventors: Giuseppe Ferrari, Procolo Carannante, Angelo Di Sena, Fabio Salvati, Anna Sorgente
  • Patent number: 8782435
    Abstract: A processor comprising: an instruction processing pipeline, configured to receive a sequence of instructions for execution, said sequence comprising at least one instruction including a flow control instruction which terminates the sequence; a hash generator, configured to generate a hash associated with execution of the sequence of instructions; a memory configured to securely receive a reference signature corresponding to a hash of a verified corresponding sequence of instructions; verification logic configured to determine a correspondence between the hash and the reference signature; and authorization logic configured to selectively produce a signal, in dependence on a degree of correspondence of the hash with the reference signature.
    Type: Grant
    Filed: July 15, 2011
    Date of Patent: July 15, 2014
    Assignee: The Research Foundation for The State University of New York
    Inventor: Kanad Ghose
  • Patent number: 8782434
    Abstract: A pipelined processor comprising a cache memory system, fetching instructions for execution from a portion of said cache memory system, an instruction commencing processing before a digital signature of the cache line that contained the instruction is verified against a reference signature of the cache line, the verification being done at the point of decoding, dispatching, or committing execution of the instruction, the reference signature being stored in an encrypted form in the processor's memory, and the key for decrypting the said reference signature being stored in a secure storage location. Instruction processing proceeds when the two signatures exactly match; further instruction processing is suspended or processing is modified on a mismatch of the two signatures.
    Type: Grant
    Filed: July 15, 2011
    Date of Patent: July 15, 2014
    Assignee: The Research Foundation for the State University of New York
    Inventor: Kanad Ghose
  • Publication number: 20140195729
    Abstract: A method for minimizing soft error rates within caches by configuring a cache with certain sections to correspond to bitcell topologies that are more resistant to soft errors and then using these sections to store modified data.
    Type: Application
    Filed: January 8, 2013
    Publication date: July 10, 2014
    Inventors: Andrew C. Russell, Ravindraraj Ramaraju
  • Publication number: 20140195737
    Abstract: Techniques are disclosed related to flushing one or more data caches. In one embodiment an apparatus includes a processing element, a first cache associated with the processing element, and a circuit configured to copy modified data from the first cache to a second cache in response to determining an activity level of the processing element. In this embodiment, the apparatus is configured to alter a power state of the first cache after the circuit copies the modified data. The first cache may be at a lower level in a memory hierarchy relative to the second cache. In one embodiment, the circuit is also configured to copy data from the second cache to a third cache or a memory after a particular time interval. In some embodiments, the circuit is configured to copy data while one or more pipeline elements of the apparatus are in a low-power state.
    Type: Application
    Filed: January 4, 2013
    Publication date: July 10, 2014
    Applicant: APPLE INC.
    Inventors: Brian P. Lilly, Gerard R. Williams, III
  • Publication number: 20140189238
    Abstract: A virtually tagged cache may be configured to index virtual address entries in the cache into lockable sets based on a page offset value. When a memory operation misses on the virtually tagged cache, only the one set of virtual address entries with the same page offset may be locked. Thereafter, this general lock may be released and only an address stored in the physical tag array matching the physical address and a virtual address in the virtual tag array corresponding to the matching address stored in the physical tag array may be locked to reduce the amount and duration of locked addresses. The machine may be stalled only if a particular memory address request hits and/or tries to access one or more entries in a locked set. Devices, systems, methods, and computer readable media are provided.
    Type: Application
    Filed: December 28, 2012
    Publication date: July 3, 2014
    Inventors: Li-Gao ZEI, Fernando LATORRE, Steffen KOSINSKI, Jaroslaw TOPP, Varun MOHANDRU, Lutz NAETHKE
  • Publication number: 20140189204
    Abstract: An information processing apparatus comprises a plurality of types of cache memories having different characteristics, decides on a type of cache memory to be used as a data cache destination based on the access characteristics of cache-target data, and caches the data in the cache memory of the decided type.
    Type: Application
    Filed: December 28, 2012
    Publication date: July 3, 2014
    Applicant: Hitachi, Ltd.
    Inventors: Sadahiro Sugimoto, Akira Yamamoto, Shigeo Homma
  • Publication number: 20140181389
    Abstract: Data caching methods and systems are provided. The data cache method loads data into an installation cache and a cache (simultaneously or serially) and returns data from the installation cache when the data has not completely loaded into the cache. The data cache system includes a processor, a memory coupled to the processor, a cache coupled to the processor and the memory and an installation cache coupled to the processor and the memory. The system is configured to load data from the memory into the installation cache and the cache (simultaneously or serially) and return data from the installation cache to the processor when the data has not completely loaded into the cache.
    Type: Application
    Filed: December 21, 2012
    Publication date: June 26, 2014
    Applicant: ADVANCED MICRO DEVICES, INC.
    Inventors: Matthew R. Poremba, Gabriel H. Loh
  • Patent number: 8762664
    Abstract: A method and apparatus for replicating instances of cache nodes in a cluster is described. In one embodiment, the number of available cache nodes in the cluster is determined. Available cache nodes from the cluster are selected based on a parameter. An instance of a cache node is replicated to only one of the selected cache nodes in the cluster.
    Type: Grant
    Filed: August 30, 2007
    Date of Patent: June 24, 2014
    Assignee: Red Hat, Inc.
    Inventors: Manik Ram Surtani, Brian Edward Stansberry
  • Patent number: 8762647
    Abstract: According to one embodiment, a multicore processor system includes: a memory region, and a multicore processor that includes plural cores, a first cache, and a second cache shared between the plural cores. The memory region permits a first state in which exclusive use via the first and second caches is granted to one core, a second state in which exclusive use via the second cache is granted to one core group, and a third state in which use via neither the first cache nor the second cache is granted to all core groups. A kernel unit writes back the first cache to the second cache when a transition of the memory region from the first state to the second state is made, and writes back the second cache to the memory region when a transition of the memory region from the second state to the third state is made.
    Type: Grant
    Filed: September 21, 2011
    Date of Patent: June 24, 2014
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Akira Yokosawa
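
The state machine in the entry above can be summarized in a few lines; the state names and callback structure are this editor's, while the write-backs follow the transitions the abstract describes.

```python
# Hypothetical sketch of the three memory-region states and the write-backs
# performed on each downgrade.

L1_PRIVATE, L2_SHARED, UNCACHED = "first", "second", "third"

class MemoryRegion:
    def __init__(self):
        self.state = L1_PRIVATE       # exclusive to one core via L1 and L2

    def downgrade(self, writeback_l1_to_l2, writeback_l2_to_memory):
        if self.state == L1_PRIVATE:
            writeback_l1_to_l2()      # first -> second: flush L1 into L2
            self.state = L2_SHARED    # exclusive to one core group via L2
        elif self.state == L2_SHARED:
            writeback_l2_to_memory()  # second -> third: flush L2 into memory
            self.state = UNCACHED     # usable by all core groups, uncached

region = MemoryRegion()
region.downgrade(lambda: print("L1 -> L2"), lambda: print("L2 -> memory"))
region.downgrade(lambda: print("L1 -> L2"), lambda: print("L2 -> memory"))
# prints "L1 -> L2" then "L2 -> memory"
```
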
  • Publication number: 20140164702
    Abstract: An embodiment provides a virtual address cache memory including: a TLB virtual page memory configured to, when a rewrite to a TLB occurs, rewrite entry data; a data memory configured to hold cache data using a virtual page tag or a page offset as a cache index; a cache state memory configured to hold a cache state for the cache data stored in the data memory, in association with the cache index; a first physical address memory configured to, when the rewrite to the TLB occurs, rewrite a held physical address; and a second physical address memory configured to, when the cache data is written to the data memory after the occurrence of the rewrite to the TLB, rewrite a held physical address.
    Type: Application
    Filed: November 26, 2013
    Publication date: June 12, 2014
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Kenta Yasufuku, Shigeaki Iwasa, Yasuhiko Kurosawa, Hiroo Hayashi, Seiji Maeda, Mitsuo Saito
  • Publication number: 20140164701
    Abstract: Disclosed is a computer system (100) comprising a processor unit (110) adapted to run a virtual machine in a first operating mode; a cache (120) accessible to the processor unit, said cache including a cache controller (122); and a memory (140) accessible to the cache controller for storing an image of said virtual machine; wherein the processor unit is adapted to create a log (200) in the memory prior to running the virtual machine in said first operating mode; the cache controller is adapted to transfer a modified cache line from the cache to the memory; and write only the memory address of the transferred modified cache line in the log; and the processor unit is further adapted to update a further image of the virtual machine in a different memory location, e.g. on another computer system, by retrieving the memory addresses stored in the log, retrieve the modified cache lines from the memory addresses and update the further image with said modifications.
    Type: Application
    Filed: February 27, 2013
    Publication date: June 12, 2014
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
  • Publication number: 20140164700
    Abstract: A system and method of detecting cache inconsistencies among distributed data centers is described. Key-based sampling captures a complete history of a key for comparing cache values across data centers. In one phase of a cache inconsistency detection algorithm, a log of operations performed on a sampled key is compared in reverse chronological order for inconsistent cache values. In another phase, a log of operations performed on a candidate key having inconsistent cache values as identified in the previous phase is evaluated in near real time in forward chronological order for inconsistent cache values. In a confirmation phase, a real time comparison of actual cache values stored in the data centers is performed on the candidate keys identified by both the previous phases as having inconsistent cache values. An alert is issued that identifies the data centers in which the inconsistent cache values were reported.
    Type: Application
    Filed: December 10, 2012
    Publication date: June 12, 2014
    Applicant: Facebook, Inc.
    Inventor: Xiaojun Liang
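
A sketch of the first phase of the detection algorithm in the entry above: compare the latest logged value of a sampled key across data centers, walking each log in reverse chronological order, and flag keys whose values disagree as candidates for the later phases. The log format is an assumption for illustration.

```python
# Hypothetical sketch: find candidate keys with inconsistent cache values
# by comparing per-data-center operation logs in reverse chronological order.

def latest_value(log, key):
    """log: list of (timestamp, key, value) tuples, oldest first."""
    for ts, k, v in reversed(log):
        if k == key:
            return v
    return None

def find_candidates(logs_by_dc, sampled_keys):
    candidates = []
    for key in sampled_keys:
        values = {dc: latest_value(log, key) for dc, log in logs_by_dc.items()}
        if len(set(values.values())) > 1:
            candidates.append((key, values))  # inconsistent across centers
    return candidates

logs = {"us": [(1, "k", "a"), (3, "k", "b")],
        "eu": [(1, "k", "a")]}
print(find_candidates(logs, ["k"]))  # [('k', {'us': 'b', 'eu': 'a'})]
```
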
  • Publication number: 20140156931
    Abstract: A method and apparatus for state encoding of cache lines is described. Some embodiments of the method and apparatus support probing, in response to a first probe of a cache line in a first cache, a copy of the cache line in a second cache when the cache line is stale and the cache line is associated with a copy of the cache line stored in the second cache that can bypass notification of the first cache in response to modifying the copy of the cache line.
    Type: Application
    Filed: December 5, 2012
    Publication date: June 5, 2014
    Inventor: Robert Krick