Cache Flushing Patents (Class 711/135)
  • Patent number: 9021209
    Abstract: A processing node tracks the probe activity level associated with its cache, and the node and/or processing system also predicts an idle duration. If the probe activity level rises above a threshold probe activity level and the predicted idle duration exceeds a threshold idle duration, the processing node flushes its cache to prevent probes to the cache. If the probe activity level is above the threshold but the predicted idle duration is too short, the performance state of the processing node is raised above its current performance state to provide enhanced capability in responding to the probe requests. (See the sketch below.)
    Type: Grant
    Filed: February 8, 2010
    Date of Patent: April 28, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Alexander Branover, Maurice B. Steinman
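    Sketch: A minimal C rendering of the flush-or-boost decision described above, assuming illustrative thresholds, microsecond units, and function names (none of these come from the patent).

      enum action { DO_NOTHING, FLUSH_CACHE, RAISE_PERF_STATE };

      /* probe_rate is the observed probes per interval; the idle-duration
         predictor and P-state control are assumed to live elsewhere. */
      enum action decide(unsigned probe_rate, unsigned probe_threshold,
                         unsigned predicted_idle_us, unsigned idle_threshold_us)
      {
          if (probe_rate <= probe_threshold)
              return DO_NOTHING;       /* probe traffic is tolerable as-is */
          if (predicted_idle_us > idle_threshold_us)
              return FLUSH_CACHE;      /* long idle expected: flush so probes
                                          no longer need to reach this cache */
          return RAISE_PERF_STATE;     /* idle too short to amortize a flush:
                                          boost to answer probes faster */
      }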
  • Patent number: 9021208
    Abstract: An information processing device includes a memory and a processor coupled to the memory. The processor executes a process comprising: when deleting cached data from the memory, selecting data that belongs to the same file as the deletion-target data; and deleting both the deletion-target data and the selected data from the memory.
    Type: Grant
    Filed: January 25, 2013
    Date of Patent: April 28, 2015
    Assignee: Fujitsu Limited
    Inventors: Akira Ochi, Yasuo Koike, Toshiyuki Maeda, Tomonori Furuta, Fumiaki Itou, Tadahiro Miyaji, Kazuhisa Fujita
  • Publication number: 20150113224
    Abstract: Atomic write operations for storage devices are implemented by maintaining the data that would be overwritten in the cache until the write operation completes. After the write operation completes, including generating any related metadata, a checkpoint is created. After the checkpoint is created, the old data is discarded and the new data becomes the current data for the affected storage locations. If an interruption occurs prior to the creation of the checkpoint, the old data is recovered and any new data is discarded. If an interruption occurs after the creation of the checkpoint, any remaining old data is discarded and the new data becomes the current data. Write logs that indicate the locations affected by in-progress write operations are used in some implementations. If neither all of the new data nor all of the old data is recoverable, a predetermined pattern can be written into the affected locations. (See the sketch below.)
    Type: Application
    Filed: January 24, 2014
    Publication date: April 23, 2015
    Applicant: NetApp, Inc.
    Inventors: Greg William Achilles, Gordon Hulpieu, Donald Roman Humlicek, Martin Oree Parrish, Kent Prosch, Alan Stewart
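    Sketch: The recovery rule from the abstract written out in C, as a hedged illustration; the phase flag, buffer handling, and 0xDB fill byte are assumptions, not details from the application.

      #include <string.h>

      enum phase { BEFORE_CHECKPOINT, AFTER_CHECKPOINT };

      /* Pick the copy that becomes current for an affected location after an
         interruption; fall back to a predetermined pattern when neither the
         old nor the new data survived intact. */
      const char *recover(enum phase p, const char *old_data,
                          const char *new_data, char *pattern, size_t len)
      {
          if (p == BEFORE_CHECKPOINT && old_data)
              return old_data;   /* no checkpoint yet: recover old, drop new */
          if (p == AFTER_CHECKPOINT && new_data)
              return new_data;   /* checkpoint exists: drop remaining old data */
          memset(pattern, 0xDB, len);   /* neither copy recoverable: write a
                                           predetermined pattern (byte assumed) */
          return pattern;
      }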
  • Patent number: 9015420
    Abstract: A method of operating a memory system is provided. The method includes a controller that regulates read and write access to one or more FLASH memory devices that are employed for random access memory applications. A buffer component operates in conjunction with the controller to regulate read and write access to the one or more FLASH devices. Wear leveling components along with read and write processing components are provided to facilitate efficient operations of the FLASH memory devices.
    Type: Grant
    Filed: May 5, 2014
    Date of Patent: April 21, 2015
    Assignee: Spansion LLC
    Inventor: Tzungren Tzeng
  • Patent number: 9015421
    Abstract: A memory system includes a first storing area in a volatile semiconductor memory, second and third storing areas in a nonvolatile semiconductor memory, and a controller that allocates the storage area of the nonvolatile semiconductor memory to the second storing area and the third storing area in a logical block unit associated with one or more blocks. First and second management units respectively manage the second and third storing areas, the second management unit having a size larger than that of the first. When flushing data from the first storing area to the second or third storing area, the controller collects, from at least one of the first, second, and third storing areas, data other than the data to be flushed, and controls the flushing so that the total amount of data is, as far as possible, a natural-number multiple of the logical block unit.
    Type: Grant
    Filed: November 1, 2013
    Date of Patent: April 21, 2015
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Junji Yano, Hidenori Matsuzaki, Kosuke Hatsuda
  • Patent number: 9009408
    Abstract: This invention handles write-request cache misses. The cache controller stores the write data, sends a read request to external memory for the corresponding cache line, merges the write data with the data returned from the external memory, and stores the merged data in the cache. The cache controller includes buffers whose entries store the write address, the write data, the position of the write data within the cache line, and a unique identification number. This stored data enables the cache controller to continue servicing other access requests while waiting for the response from the external memory. (See the sketch below.)
    Type: Grant
    Filed: September 26, 2011
    Date of Patent: April 14, 2015
    Assignee: Texas Instruments Incorporated
    Inventors: Abhijeet Ashok Chachad, Raguram Damodaran, David Matthew Thompson
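    Sketch: The merge step in C, under the assumptions of a 64-byte line and a per-byte valid mask; the buffer layout is illustrative, not taken from the patent.

      #include <stdint.h>

      #define LINE_BYTES 64

      typedef struct {
          uint64_t write_addr;           /* buffered write address */
          uint8_t  data[LINE_BYTES];     /* buffered write data */
          uint8_t  valid[LINE_BYTES];    /* which bytes the write touched */
          uint32_t id;                   /* unique identification number */
      } miss_entry_t;

      /* Runs when external memory returns the line for entry e; other access
         requests keep being serviced while this response is outstanding. */
      void merge_response(const miss_entry_t *e,
                          const uint8_t from_mem[LINE_BYTES],
                          uint8_t cache_line[LINE_BYTES])
      {
          for (int i = 0; i < LINE_BYTES; i++)
              cache_line[i] = e->valid[i] ? e->data[i] : from_mem[i];
      }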
  • Patent number: 9009444
    Abstract: A method, computer program product, and computing system for receiving a reservation for a LUN from Host A, wherein the LUN is defined within a data array. A lock for the LUN is defined as Host A. A write request is received for the LUN from Host B. The lock for the LUN is defined as Transitioning A to B. The write request is delayed for a defined period of time.
    Type: Grant
    Filed: September 29, 2012
    Date of Patent: April 14, 2015
    Assignee: EMC Corporation
    Inventors: Philip Derbeko, Arieh Don, Anat Eyal, Kevin F. Martin, Richard A. Trabing
  • Patent number: 9009413
    Abstract: A processor includes a processor core including an execution unit to execute instructions, and a cache memory. The cache memory includes a controller to update each of a plurality of stale indicators in response to a lazy flush instruction. Each stale indicator is associated with respective data, and each updated stale indicator is to indicate that the respective data is stale. The cache memory also includes a plurality of cache lines. Each cache line is to store corresponding data and a foreground tag that includes a respective virtual address associated with the corresponding data, and that includes the associated stale indicator. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: April 14, 2015
    Assignee: Intel Corporation
    Inventors: Varun K. Mohandru, Fernando Latorre, Niranjan L. Cooray, Pedro Lopez, Naveen Neelakantam, Li-Gao Zei, Rami May, Jaroslaw Topp, Thomas Gaertner
  • Patent number: 9009436
    Abstract: A method and system are disclosed herein for performing operations on a parallel programming unit in a memory system. The parallel programming unit includes multiple physical structures (such as memory cells in a row) in the memory system that are configured to be operated on in parallel. The method and system perform a first operation on the parallel programming unit, the first operation operating on only part of the parallel programming unit and not operating on a remainder of the parallel programming unit, set a pointer to indicate at least one physical structure in the remainder of the parallel programming unit, and perform a second operation using the pointer to operate on no more than the remainder of the parallel programming unit. In this way, the method and system may realign programming to the parallel programming unit when partial writes to the parallel programming unit occur.
    Type: Grant
    Filed: August 9, 2011
    Date of Patent: April 14, 2015
    Assignee: SanDisk Technologies, Inc.
    Inventor: Nicholas James Thomas
  • Publication number: 20150100737
    Abstract: According to at least one example embodiment, a method and corresponding apparatus for conditionally storing data include initiating an atomic sequence by executing, by a core processor, an instruction/operation designed to initiate an atomic sequence. Executing the instruction designed to initiate the atomic sequence includes loading content associated with a memory location into a first cache memory, and maintaining an indication of the memory location and a copy of the corresponding content loaded. A conditional storing operation is then performed, the conditional storing operation includes a compare-and-swap operation, executed by a controller associated with a second cache memory, based on the maintained copy of the content and the indication of the memory location.
    Type: Application
    Filed: October 3, 2013
    Publication date: April 9, 2015
    Applicant: Cavium, Inc.
    Inventors: Richard E. Kessler, David H. Asher, Michael Sean Bertone, Shubhendu S. Mukherjee, Wilson P. Snyder, II, John M. Perveiler, Christopher J. Comis
  • Publication number: 20150100738
    Abstract: A translation lookaside buffer (TLB) of a computing device is a cache of virtual to physical memory address translations. A TLB flush promotion threshold value indicates when all entries of the TLB are to be flushed rather than individual entries of the TLB. The TLB flush promotion threshold value is determined dynamically by the computing device by determining an amount of time it takes to flush and repopulate all entries of the TLB. A determination is then made as to the number of TLB entries that can be individually flushed and repopulated in that same amount of time. The threshold value is set based on (e.g., equal to) the number of TLB entries that can be individually flushed and repopulated in that amount of time. (See the sketch below.)
    Type: Application
    Filed: October 9, 2013
    Publication date: April 9, 2015
    Applicant: Microsoft Corporation
    Inventor: Landy Wang
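    Sketch: The threshold computation in C as the abstract describes it; the nanosecond inputs are assumed to come from timing the two operations on the device.

      /* Threshold = how many single-entry flush-and-repopulate cycles fit in
         the time one full TLB flush-and-repopulate takes. */
      unsigned promotion_threshold(unsigned full_flush_ns, unsigned per_entry_ns)
      {
          return full_flush_ns / per_entry_ns;
      }

      /* At invalidation time: once this many entries would be flushed
         individually, flushing the whole TLB is no slower. */
      int should_flush_all(unsigned entries_to_flush, unsigned threshold)
      {
          return entries_to_flush >= threshold;
      }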
  • Publication number: 20150100739
    Abstract: A method, a system, and a computer-readable medium for writing to a non-volatile cache memory are provided. The method maintains a write count associated with a set of memory locations, selects a cache replacement policy based on the value of the write count, and then selects a block within the set for writing data using the selected cache replacement policy. The selected cache replacement policy can introduce a randomized selection. (See the sketch below.)
    Type: Application
    Filed: March 28, 2014
    Publication date: April 9, 2015
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Zhe Wang, Yuan Xie, Yi Xu, Junli Gu, Ting Cao
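    Sketch: One plausible C reading of the policy switch; the 8-way geometry, the switch-over write count, and the use of rand() are assumptions layered on the abstract.

      #include <stdlib.h>

      #define WAYS 8
      #define HOT_SET_WRITES 1024    /* assumed switch-over write count */

      int pick_victim_way(unsigned set_write_count, const unsigned lru_rank[WAYS])
      {
          if (set_write_count < HOT_SET_WRITES) {
              int victim = 0;                /* default policy: evict true LRU */
              for (int w = 1; w < WAYS; w++)
                  if (lru_rank[w] > lru_rank[victim])
                      victim = w;
              return victim;
          }
          return rand() % WAYS;              /* randomized selection once the
                                                set has become write-hot */
      }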
  • Patent number: 9003127
    Abstract: Embodiments relate to storing data to a system memory. An aspect includes accessing successive entries of a cache directory having a plurality of directory entries by a stepper engine, where access to the cache directory is given a lower priority than other cache operations. It is determined that a specific directory entry in the cache directory has a change line state indicating it is modified. A store operation is performed to send a copy of the corresponding cache entry to the system memory as part of a cache management function. The specific directory entry is then updated to indicate that the change line state is unmodified. (See the sketch below.)
    Type: Grant
    Filed: November 21, 2013
    Date of Patent: April 7, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blake, Timothy C. Bronson, Hieu T. Huynh, Kenneth D. Klapproth, Pak-Kin Mak, Vesselina K. Papazova
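    Sketch: A C outline of the stepper pass, assuming a simplified directory layout and two hypothetical hooks for the priority check and the store operation.

      #include <stdbool.h>
      #include <stddef.h>

      typedef struct { bool modified; } dir_entry_t;   /* change line state */

      bool higher_priority_work_pending(void);    /* assumed hook */
      void store_line_to_system_memory(size_t i); /* assumed hook */

      void stepper_pass(dir_entry_t *dir, size_t entries)
      {
          for (size_t i = 0; i < entries; i++) {
              while (higher_priority_work_pending())
                  ;                       /* defer: directory access gets lower
                                             priority than other cache ops */
              if (dir[i].modified) {
                  store_line_to_system_memory(i);  /* copy entry to memory */
                  dir[i].modified = false;         /* mark line unmodified */
              }
          }
      }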
  • Patent number: 9003118
    Abstract: In some embodiments, a method for controlling a cache having a volatile memory and a non-volatile memory during a power up sequence is provided. The method includes receiving, at a controller configured to control the cache and a storage device associated with the cache, a signal indicating whether the non-volatile memory includes dirty data copied from the volatile memory to the non-volatile memory during a power down sequence, the dirty data including data that has not been stored in the storage device. In response to the received signal, the dirty data is restored from the non-volatile memory to the volatile memory, and flushed from the volatile memory to the storage device.
    Type: Grant
    Filed: January 9, 2009
    Date of Patent: April 7, 2015
    Assignee: Dell Products L.P.
    Inventors: Jacob Cherian, Marcelo Saraiva, Shane Chiasson, Gary Kotzur, Douglas Huang, Anand Nunna, William Lynn
  • Patent number: 9003159
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, can perform data caching. In some implementations, a method and system include receiving information that includes a logical address, allocating a physical page in a non-volatile memory structure, mapping the logical address to a physical address of the physical page, and writing, based on the physical address, data to the non-volatile memory structure to cache information associated with the logical address. The logical address can include an identifier of a data storage device and a logical page number.
    Type: Grant
    Filed: October 5, 2010
    Date of Patent: April 7, 2015
    Assignee: Marvell World Trade Ltd.
    Inventors: Shekhar S. Deshkar, Sandeep Karmarkar, Arvind Pruthi, Ram Kishore Johri
  • Publication number: 20150095584
    Abstract: A computerized method for dynamic consistency management of server side cache management units in a distributed cache, comprising: updating a server side cache management unit by a client; assigning each of a plurality of server side cache management units to one of a plurality of propagation topology groups according to an analysis of a plurality of cache usage measurements thereof, each of said propagation topology groups is associated with a different write request propagation scheme; and managing client update notifications of members of each of said propagation topology groups according to the respective said different write request propagation scheme which is associated therewith.
    Type: Application
    Filed: September 30, 2013
    Publication date: April 2, 2015
    Applicant: International Business Machines Corporation
    Inventors: Gregory Chockler, Guy Laden, Eli Luboshitz, Roie Melamed, Benjamin M. Parees, Yoav Tock
  • Publication number: 20150095576
    Abstract: Updates to nonvolatile memory pages are mirrored so that certain features of a computer system, such as live migration of applications, fault tolerance, and high availability, will be available even when nonvolatile memory is local to the computer system. Mirroring may be carried out when a cache flush instruction is executed to flush contents of the cache into nonvolatile memory. In addition, mirroring may be carried out asynchronously with respect to execution of the cache flush instruction by retrieving content that is to be mirrored from the nonvolatile memory using memory addresses of the nonvolatile memory corresponding to target memory addresses of the cache flush instruction.
    Type: Application
    Filed: September 30, 2013
    Publication date: April 2, 2015
    Applicant: VMware, Inc.
    Inventors: Pratap Subrahmanyam, Rajesh Venkatasubramanian
  • Publication number: 20150095578
    Abstract: Instructions and logic provide memory fence and store functionality. Some embodiments include a processor having a cache to store cache-coherent data in cache lines for one or more memory addresses of a primary storage. A decode stage of the processor decodes an instruction specifying a source data operand, one or more memory addresses as destination operands, and a memory fence type. Responsive to the decoded instruction, one or more execution units may enforce the memory fence type, then store data from the source data operand to the one or more memory addresses, and ensure that the stored data has been committed to primary storage. For some embodiments, the primary storage may comprise persistent memory. For some embodiments, cache lines corresponding to the memory addresses may be flushed, or marked for persistent write-back to primary storage. Alternatively, the cache may be bypassed, e.g., by performing a streaming vector store. (See the sketch below.)
    Type: Application
    Filed: September 27, 2013
    Publication date: April 2, 2015
    Inventors: Kshitij Doshi, Thomas Willhalm
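    Sketch: The publication covers a single fused instruction; as an approximation only, a similar ordering can be composed from existing x86 primitives (SFENCE plus CLFLUSH). This is not the patented instruction itself.

      #include <emmintrin.h>   /* _mm_sfence, _mm_clflush */
      #include <stdint.h>

      void fenced_persistent_store(volatile uint64_t *dst, uint64_t value)
      {
          _mm_sfence();            /* enforce the fence: prior stores first */
          *dst = value;            /* store the source data operand */
          _mm_clflush((const void *)dst);   /* push the line toward primary
                                               (persistent) storage */
          _mm_sfence();            /* don't proceed until the flush commits */
      }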
  • Publication number: 20150095585
    Abstract: Updates to nonvolatile memory pages are mirrored so that certain features of a computer system, such as live migration of applications, fault tolerance, and high availability, will be available even when nonvolatile memory is local to the computer system. Mirroring may be carried out when a cache flush instruction is executed to flush contents of the cache into nonvolatile memory. In addition, mirroring may be carried out asynchronously with respect to execution of the cache flush instruction by retrieving content that is to be mirrored from the nonvolatile memory using memory addresses of the nonvolatile memory corresponding to target memory addresses of the cache flush instruction.
    Type: Application
    Filed: September 30, 2013
    Publication date: April 2, 2015
    Applicant: VMware, Inc.
    Inventors: Pratap Subrahmanyam, Rajesh Venkatasubramanian
  • Patent number: 8996831
    Abstract: Systems and methods for providing object versioning in a storage system may support the logical deletion of stored objects. In response to a delete operation specifying both a user key and a version identifier, the storage system may permanently delete the specified version of an object having the specified key. In response to a delete operation specifying a user key, but not a version identifier, the storage system may create a delete marker object that does not contain object data, and may generate a new version identifier for the delete marker. The delete marker may be stored as the latest object version of the user key, and may be addressable in the storage system using a composite key comprising the user key and the new version identifier. Subsequent attempts to retrieve the user key without specifying a version identifier may return an error, although the object was not actually deleted. (See the sketch below.)
    Type: Grant
    Filed: July 29, 2013
    Date of Patent: March 31, 2015
    Assignee: Amazon Technologies, Inc.
    Inventors: Jason G. McHugh, Praveen Kumar Gattu, Michael A. Ten-Pow, Derek Ernest Denny-Brown, II
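    Sketch: A toy C model of the delete-marker rule; the version list, id scheme, and NULL-as-error convention are illustrative assumptions, not the storage system's actual format.

      #include <stdbool.h>
      #include <stdlib.h>

      typedef struct version {
          int  version_id;
          bool is_delete_marker;     /* marker versions carry no object data */
          const char *data;
          struct version *older;
      } version_t;

      /* DELETE with key only: push a delete marker as the newest version;
         older versions stay addressable via (key, version_id). */
      version_t *delete_key(version_t *latest, int new_version_id)
      {
          version_t *m = malloc(sizeof *m);
          m->version_id = new_version_id;
          m->is_delete_marker = true;
          m->data = NULL;
          m->older = latest;
          return m;                  /* the marker is now the latest version */
      }

      /* GET with key only: a delete marker on top reads as "not found",
         even though no object version was actually deleted. */
      const char *get_latest(const version_t *latest)
      {
          return (!latest || latest->is_delete_marker) ? NULL : latest->data;
      }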
  • Patent number: 8996800
    Abstract: Techniques for deduplication of virtual machine files in a virtualized desktop environment are described, including receiving data into a page cache, the data being received from a virtual machine and indicating a write operation, and deduplicating the data in the page cache prior to committing the data to storage, the data being deduplicated in-band and in substantially real-time.
    Type: Grant
    Filed: October 7, 2011
    Date of Patent: March 31, 2015
    Assignee: Atlantis Computing, Inc.
    Inventors: Chetan Venkatesh, Kartikeya Iyer, Shravan Gaonkar, Sagar Shyam Dixit, Vinodh Dorairajan
  • Publication number: 20150089147
    Abstract: A computer system that supports virtualization may maintain multiple address spaces. Each guest operating system employs guest virtual addresses (GVAs), which are translated to guest physical addresses (GPAs). A hypervisor, which manages one or more guest operating systems, translates GPAs to root physical addresses (RPAs). A merged translation lookaside buffer (MTLB) caches translations between the multiple addressing domains, enabling faster address translation and memory access. The MTLB can be logically addressable as multiple different caches, and can be reconfigured to allot different spaces to each logical cache. Further, a collapsed TLB is an additional cache storing collapsed translations derived from the MTLB. Entries in the MTLB, the collapsed TLB, and other caches can be maintained for consistency.
    Type: Application
    Filed: September 26, 2013
    Publication date: March 26, 2015
    Applicant: Cavium, Inc.
    Inventors: Wilson P. Snyder, II, Bryan W. Chin, Shubhendu S. Mukherjee, Michael Bertone, Richard E. Kessler
  • Publication number: 20150089126
    Abstract: In an embodiment, a processor includes a cache data array including a plurality of physical ways, each physical way to store a baseline way and a victim way; a cache tag array including a plurality of tag groups, each tag group associated with a particular physical way and including a first tag associated with the baseline way stored in the particular physical way, and a second tag associated with the victim way stored in the particular physical way; and cache control logic to: select a first baseline way based on a replacement policy, select a first victim way based on an available capacity of a first physical way including the first victim way, and move a first data element from the first baseline way to the first victim way. Other embodiments are described and claimed.
    Type: Application
    Filed: September 25, 2013
    Publication date: March 26, 2015
    Inventors: Sreenivas Subramoney, Jayesh Gaur, Alaa R. Alameldeen
  • Patent number: 8990504
    Abstract: A cache page management method can include paging out a memory page to an input/output controller, paging the memory page from the input/output controller into a real memory, modifying the memory page in the real memory to an updated memory page and purging the memory page paged to the input/output controller.
    Type: Grant
    Filed: July 11, 2011
    Date of Patent: March 24, 2015
    Assignee: International Business Machines Corporation
    Inventors: Tara Astigarraga, Michael E. Browne, Joseph Demczar, Eric C. Wieder
  • Patent number: 8990507
    Abstract: Embodiments relate to storing data to a system memory. An aspect includes accessing successive entries of a cache directory having a plurality of directory entries by a stepper engine, where access to the cache directory is given a lower priority than other cache operations. It is determined that a specific directory entry in the cache directory has a change line state indicating it is modified. A store operation is performed to send a copy of the corresponding cache entry to the system memory as part of a cache management function. The specific directory entry is then updated to indicate that the change line state is unmodified.
    Type: Grant
    Filed: June 13, 2012
    Date of Patent: March 24, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blake, Pak-Kin Mak, Timothy C. Bronson, Hieu T. Huynh, Kenneth D. Klapproth, Vesselina K. Papazova
  • Publication number: 20150081979
    Abstract: Techniques for enabling integration between a storage system and a host system that performs write-back caching are provided. In one embodiment, the host system can transmit to the storage system a command indicating that the host system intends to cache, in a write-back cache, writes directed to a range of logical block addresses (LBAs). The host system can further receive from the storage system a response indicating whether the command is accepted or rejected. If the command is accepted, the host system can initiate the caching of writes in the write-back cache.
    Type: Application
    Filed: September 16, 2013
    Publication date: March 19, 2015
    Applicant: VMware, Inc.
    Inventors: Andrew Banta, Erik Cota-Robles
  • Publication number: 20150081980
    Abstract: A method includes storing architectural state data associated with a processing unit in a cache memory using an allocate-without-fill mode. A system includes a processing unit, a cache memory, and a cache controller. The cache controller is to receive architectural state data associated with the processing unit, store at least a first portion of the architectural state data in the cache memory using a first fill mode responsive to a first value of a fill mode flag, and store at least a second portion of the architectural state data in the cache memory using a second fill mode responsive to a second value of the fill mode flag, wherein the first fill mode differs from the second fill mode with respect to whether previous values of the architectural state data are retrieved prior to storing the first or second portions in the cache memory.
    Type: Application
    Filed: September 17, 2013
    Publication date: March 19, 2015
    Applicant: Advanced Micro Devices, Inc.
    Inventor: William L. Walker
  • Publication number: 20150074355
    Abstract: An apparatus includes a memory and a controller. The memory may be configured to implement a cache and store meta-data. The cache generally comprises one or more cache windows. Each of the one or more cache windows comprises a plurality of cache-lines configured to store information. Each of the plurality of cache-lines is associated with meta-data indicating one or more of a dirty state, an invalid state, and a partially dirty state. The controller is connected to the memory and may be configured to (i) detect an input/output (I/O) operation directed to a file system recovery log area, (ii) mark a corresponding I/O using a predefined hint value, and (iii) pass the corresponding I/O along with the predefined hint value to a caching layer.
    Type: Application
    Filed: October 30, 2013
    Publication date: March 12, 2015
    Applicant: LSI Corporation
    Inventors: Kishore Kaniyar Sampathkumar, Saugata Das Purkayastha
  • Patent number: 8977823
    Abstract: Provided are techniques for handling a store buffer in conjunction with a processor, the store buffer comprising a free list, a merge window, and an evict list, together with logic for, upon receipt of a T_STORE operation, comparing a first address associated with the T_STORE operation with a plurality of addresses associated with previous T_STORE operations, wherein the previous T_STORE operations are part of the same transaction as the T_STORE operation and the entries corresponding to the previous T_STORE operations are stored in the merge window; in response to a match between the first address and a second address, associated with a second T_STORE operation, of the plurality of addresses, merging a first entry corresponding to the first T_STORE operation with a second entry corresponding to the second T_STORE operation; and consolidating results associated with the first T_STORE operation with results associated with the second T_STORE operation. (See the sketch below.)
    Type: Grant
    Filed: September 16, 2012
    Date of Patent: March 10, 2015
    Assignee: International Business Machines Corporation
    Inventors: Khary J. Alexander, Christian Jacobi, Gerrit Koch, Martin Recktenwald, Timothy J. Slegel, Hans-Werner Tast
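    Sketch: The merge-window address match in C; the window size, field widths, and linear scan are assumptions made for illustration.

      #include <stdbool.h>
      #include <stdint.h>

      #define MERGE_SLOTS 16

      typedef struct {
          bool     used;
          unsigned tx_id;      /* transaction the buffered store belongs to */
          uint64_t addr;
          uint64_t data;
      } merge_entry_t;

      /* Returns true if the incoming T_STORE merged with a previous entry
         from the same transaction at the same address. */
      bool t_store(merge_entry_t win[MERGE_SLOTS], unsigned tx_id,
                   uint64_t addr, uint64_t data)
      {
          for (int i = 0; i < MERGE_SLOTS; i++)
              if (win[i].used && win[i].tx_id == tx_id && win[i].addr == addr) {
                  win[i].data = data;    /* consolidate results in one entry */
                  return true;
              }
          for (int i = 0; i < MERGE_SLOTS; i++)
              if (!win[i].used) {        /* take a slot off the free list */
                  win[i].used  = true;
                  win[i].tx_id = tx_id;
                  win[i].addr  = addr;
                  win[i].data  = data;
                  return false;
              }
          return false;                  /* window full: entry would age out
                                            to the evict list (not shown) */
      }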
  • Publication number: 20150067265
    Abstract: A system and method of operation exploit the limited associativity of a single cache set to force observable cache evictions and discover conflicts. Loads are issued to input memory addresses, one at a time, until a cache eviction is detected. After observing a cache eviction on a load from an address, that address is added to a data structure representing the current conflict set. The cache is then flushed, and loads are issued to all addresses in the current conflict set, so that all known conflicting addresses are accessed first, ensuring that the next cache miss will occur on a different conflicting address. The process is repeated, issuing loads from all input memory addresses, incrementally finding conflicting addresses, one by one. Memory addresses that conflict in the cache belong to the same partition, whereas memory addresses belonging to different partitions do not conflict. (See the sketch below.)
    Type: Application
    Filed: September 5, 2014
    Publication date: March 5, 2015
    Applicant: PRIVATECORE, INC.
    Inventors: Carl A. Waldspurger, Oded Horovitz, Stephen A. Weis, Sahil Rihan
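    Sketch: The incremental discovery loop in C; the two probing hooks are hypothetical stand-ins for the timed loads and cache flush a real implementation would use.

      #include <stdbool.h>
      #include <stddef.h>

      bool load_and_detect_eviction(const void *addr);   /* assumed hook */
      void flush_cache(void);                            /* assumed hook */

      /* Appends each conflicting address to conflict_set and returns how many
         were found; addresses that conflict belong to the same partition. */
      size_t find_conflict_set(const void *cand[], size_t n,
                               const void *conflict_set[], size_t max_out)
      {
          size_t found = 0;
          for (size_t i = 0; i < n && found < max_out; i++) {
              flush_cache();
              /* Touch all known conflicting addresses first, so the next
                 observable miss must come from a new conflicting address. */
              for (size_t j = 0; j < found; j++)
                  (void)load_and_detect_eviction(conflict_set[j]);
              if (load_and_detect_eviction(cand[i]))
                  conflict_set[found++] = cand[i];
          }
          return found;
      }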
  • Publication number: 20150058548
    Abstract: A logically arranged hierarchy, or tiered storage, may comprise one layer of faster-access storage (e.g., a solid-state drive (SSD)) and another (e.g., next) layer of traditional disk (e.g., HDD). In one embodiment, compaction occurs within the higher layer, e.g., until there is no more room, and then during the compaction sequence the data may be moved down to the lower layer. In another embodiment, compaction and migration to a lower layer may occur within the higher layer based on one or more policies, even if the higher layer is not full. In one embodiment, the data between layers are kept disjoint. In one embodiment, the more recent versions are always in the higher layer and the older versions are always in the lower layer.
    Type: Application
    Filed: August 26, 2013
    Publication date: February 26, 2015
    Applicant: International Business Machines Corporation
    Inventors: Liana L. Fong, Wei Tan
  • Patent number: 8966184
    Abstract: An apparatus, system, and method are disclosed for managing eviction of data. A grooming cost module determines a grooming cost for a selected region of a nonvolatile solid-state cache. The grooming cost includes a cost of evicting the selected region of the nonvolatile solid-state cache relative to other regions. A grooming candidate set module adds the selected region to a grooming candidate set in response to the grooming cost satisfying a grooming cost threshold. A low cost module selects a low cost region within the grooming candidate set. A groomer module recovers storage capacity of the low cost region.
    Type: Grant
    Filed: January 31, 2012
    Date of Patent: February 24, 2015
    Assignee: Intelligent Intellectual Property Holdings 2, LLC.
    Inventor: David Atkisson
  • Patent number: 8966180
    Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
    Type: Grant
    Filed: March 1, 2013
    Date of Patent: February 24, 2015
    Assignee: Intel Corporation
    Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
  • Publication number: 20150052288
    Abstract: Apparatuses and methods for providing data from a buffer are disclosed herein. An example apparatus may include an array, a buffer, and a memory control unit. The buffer may be coupled to the array and configured to store data, including data intended to be stored in a storage area of the array. The memory control unit may be coupled to the array and the buffer. The memory control unit may be configured to cause the buffer to store the data responsive, at least in part, to a write command, and may further be configured to cause the buffer to write the data intended for the storage area into the storage area of the array responsive, at least in part, to a flush command.
    Type: Application
    Filed: August 14, 2013
    Publication date: February 19, 2015
    Applicant: Micron Technology, Inc.
    Inventors: Graziano Mirichigni, Luca Porzio, Erminio Di Martino, Giacomo Bernardi, Domenico Monteleone, Stefano Zanardi
  • Publication number: 20150052293
    Abstract: A computing device includes a home node controller to couple a home processor socket to the computing device. The home processor socket includes a home core hidden from the computing device, and the home core fetches data to a home cache of the home processor socket. The computing device also includes a source processor socket with a source core that requests data, and the home node controller forwards the requested data from the home cache to the source core if the requested data is included in the home cache.
    Type: Application
    Filed: April 30, 2012
    Publication date: February 19, 2015
    Inventors: Blaine D. Gaither, Russ W. Herrell, Craig Warner
  • Publication number: 20150046633
    Abstract: According to one embodiment of the present invention, a cache control method of a storage device is provided, the storage device including a storage unit that stores data and a buffer memory having a first cache area and a second cache area serving as a cache of the storage unit. The cache control method according to the embodiment includes: storing data read from the storage unit in the first cache area in response to a read command from a host; moving retried data, on which a read retry occurred during readout from the storage unit, from the first cache area to the second cache area so that the amount of retried data in the second cache area does not exceed a predetermined amount; and transferring data in the first cache area or the second cache area to the host.
    Type: Application
    Filed: January 22, 2014
    Publication date: February 12, 2015
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Tetsuo Kuribayashi, Hironori Kanno, Nobuhiro Sugawara, Yoshinori Inoue, Yasuyuki Nagashima
  • Publication number: 20150046658
    Abstract: A method and an information processing system with improved cache organization are provided. Each register capable of accessing memory has associated metadata containing the tag, way, and line of a corresponding cache entry, along with a valid bit, allowing a memory access that hits a location in the cache to go directly to the cache's data array and avoiding the need to look up the address in the cache's tag array. When a cache line is evicted, any metadata referring to the line is marked as invalid. By reducing the number of tag lookups performed to access data in a cache's data array, the power that would otherwise be consumed by tag lookups is saved, reducing the power consumption of the information processing system; the cache area needed to achieve a desired level of performance may also be reduced.
    Type: Application
    Filed: August 8, 2013
    Publication date: February 12, 2015
    Applicant: FREESCALE SEMICONDUCTOR, INC.
    Inventor: Peter J. Wilson
  • Patent number: 8954679
    Abstract: A social data aggregator generates entries of action data describing actions taken by users. A portion of the entries are stored in an action cache to expedite retrieval. To store more recent or relevant entries in the action cache, entries are removed from the action cache based on engagement scores associated with the entries. An engagement score indicates a likelihood of a user requesting content interacting with a notification based on an entry. Entries having the lowest engagement scores or having engagement scores below a threshold are removed from the action cache.
    Type: Grant
    Filed: February 12, 2013
    Date of Patent: February 10, 2015
    Assignee: Facebook, Inc.
    Inventors: Sriya Santhanam, Varun Kacholia, Li Zhang
  • Patent number: 8954674
    Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
    Type: Grant
    Filed: October 8, 2013
    Date of Patent: February 10, 2015
    Assignee: Intel Corporation
    Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
  • Patent number: 8949530
    Abstract: Systems and methods are disclosed for improving the performance of cache memory in a computer system by dynamically selecting an index for caching main memory while an application is running. A disclosed example of a memory system includes a cache including a data array, a primary tag array, and at least one secondary tag array. A currently selected index is used to index data bits to the data array and tag bits to the primary tag array. The performance of at least one candidate index is evaluated by indexing tag bits to the secondary tag array, without caching any data using the candidate index while the candidate index is under evaluation. If the candidate index has a better hit rate than the currently selected index, the memory system switches to using the candidate index to cache data.
    Type: Grant
    Filed: August 2, 2011
    Date of Patent: February 3, 2015
    Assignee: International Business Machines Corporation
    Inventors: Mvv A. Krishna, Shaul Yifrach
  • Patent number: 8949541
    Abstract: A method for cleaning dirty data in an intermediate cache is disclosed. A dirty data notification, including a memory address and a data class, is transmitted by a level 2 (L2) cache to frame buffer logic when dirty data is stored in the L2 cache. The data classes may include evict first, evict normal, and evict last. In one embodiment, data belonging to the evict first data class is raster operations data with little reuse potential. The frame buffer logic uses a notification sorter to organize dirty data notifications, where an entry in the notification sorter stores the DRAM bank page number, a first count of cache lines that have resident dirty data, and a second count of cache lines that have resident evict_first dirty data associated with that DRAM bank. The frame buffer logic transmits dirty data associated with an entry when the first count reaches a threshold. (See the sketch below.)
    Type: Grant
    Filed: November 14, 2011
    Date of Patent: February 3, 2015
    Assignee: NVIDIA Corporation
    Inventors: David B. Glasco, Peter B. Holmqvist, George R. Lynch, Patrick R. Marchand, James Roberts, John H. Edmondson
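    Sketch: One sorter entry in C; the reset behavior after a flush and the hook name are assumptions about details the abstract leaves open.

      #include <stdbool.h>
      #include <stdint.h>

      typedef struct {
          uint32_t bank_page;      /* DRAM bank page number */
          unsigned dirty;          /* first count: resident dirty lines */
          unsigned evict_first;    /* second count: resident evict_first lines */
      } notif_entry_t;

      void transmit_page_dirty_data(uint32_t bank_page);   /* assumed hook */

      void on_dirty_notification(notif_entry_t *e, bool is_evict_first,
                                 unsigned threshold)
      {
          e->dirty++;
          if (is_evict_first)
              e->evict_first++;    /* e.g., raster operations data */
          if (e->dirty >= threshold) {       /* batch writes to one DRAM page */
              transmit_page_dirty_data(e->bank_page);
              e->dirty = 0;
              e->evict_first = 0;
          }
      }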
  • Patent number: 8949540
    Abstract: A victim cache line having a data-invalid coherence state is selected for castout from a first lower level cache of a first processing unit. The first processing unit issues on an interconnect fabric a lateral castout (LCO) command identifying the victim cache line to be castout from the first lower level cache, indicating the data-invalid coherence state, and indicating that a lower level cache is an intended destination of the victim cache line. In response to a coherence response to the LCO command indicating success of the LCO command, the victim cache line is removed from the first lower level cache and held in a second lower level cache of a second processing unit in the data-invalid coherence state.
    Type: Grant
    Filed: March 11, 2009
    Date of Patent: February 3, 2015
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hien M. Le, Alvan W. Ng, Michael S. Siegel, Derek E. Williams, Phillip G. Williams
  • Publication number: 20150033255
    Abstract: A scalable and cost-effective solution for implementing a cache in a data processing environment. A sliding window comprises a number of past time slots. For each time slot, the number of requests for a data item is counted. A mean request rate for the data item is computed over the sliding window. If the mean request rate exceeds a threshold, the data item is added to the cache; otherwise, it is removed from the cache. (See the sketch below.)
    Type: Application
    Filed: July 22, 2014
    Publication date: January 29, 2015
    Inventors: Christoph Neumann, Nicolas Le Scouarnec, Gilles Straub
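    Sketch: The admission rule in C; the window length and circular-buffer bookkeeping are assumptions.

      #define SLOTS 8    /* past time slots in the sliding window (assumed) */

      typedef struct {
          unsigned hits[SLOTS];   /* requests counted per slot, circular */
          unsigned cur;           /* index of the current slot */
      } window_t;

      void record_request(window_t *w) { w->hits[w->cur]++; }

      void advance_slot(window_t *w)   /* called at each slot boundary */
      {
          w->cur = (w->cur + 1) % SLOTS;
          w->hits[w->cur] = 0;
      }

      /* Mean request rate over the window vs. threshold: above it, cache the
         data item; otherwise remove it from the cache. */
      int should_cache(const window_t *w, double threshold)
      {
          unsigned total = 0;
          for (unsigned i = 0; i < SLOTS; i++)
              total += w->hits[i];
          return (double)total / SLOTS > threshold;
      }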
  • Publication number: 20150026411
    Abstract: A cache controller configured to detect a wait type (i.e., a wait event) associated with an imprecise collision and/or contention event is disclosed. The cache controller is configured to operatively connect to a cache memory device, which is configured to store a plurality of cache lines. The cache controller is configured to detect a wait type due to an imprecise collision and/or contention event associated with a cache line, and to cause transmission of a broadcast to one or more transaction sources (e.g., transaction sources internal to the cache controller) requesting the cache line, indicating that the transaction source can employ the cache line.
    Type: Application
    Filed: July 29, 2013
    Publication date: January 22, 2015
    Applicant: LSI Corporation
    Inventors: Gary M. Lippert, Judy M. Gehman, Scott E. Greenfield, Jerome M. Meyer, John M. Nystuen
  • Publication number: 20150026412
    Abstract: One embodiment provides an eviction system for dynamically-sized caching comprising a non-blocking data structure for maintaining one or more data nodes. Each data node corresponds to a data item in a cache. Each data node comprises information relating to a corresponding data item. The eviction system further comprises an eviction module configured for removing a data node from the data structure, and determining whether the data node is a candidate for eviction based on information included in the data node. If the data node is not a candidate for eviction, the eviction module inserts the data node back into the data structure; otherwise the eviction module evicts the data node and a corresponding data item from the system and the cache, respectively. Data nodes of the data structure circulate through the eviction module until a candidate for eviction is determined.
    Type: Application
    Filed: December 30, 2013
    Publication date: January 22, 2015
    Applicant: Samsung Electronics Company, Ltd.
    Inventors: Gage W. Eads, Juan A. Colmenares
  • Publication number: 20150026410
    Abstract: A method and apparatus for calculating a victim way that is always the least recently used way. More specifically, in an m-set, n-way set-associative cache, each way of a cache set comprises a valid bit that indicates that the way contains valid data. The valid bit is set when a way is written and cleared upon invalidation, e.g., via a snoop address. The cache system comprises a cache LRU circuit, which comprises an LRU logic unit associated with each cache set. The LRU logic unit comprises a FIFO of depth n (in certain embodiments, the depth corresponds to the number of ways in the cache) and width m. The FIFO performs push, pop, and collapse functions. Each entry in the FIFO contains the encoded way number that was last accessed. (See the sketch below.)
    Type: Application
    Filed: July 17, 2013
    Publication date: January 22, 2015
    Inventors: Thang Q. Nguyen, John D. Coddington, Sanjay R. Deshpande
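    Sketch: A C model of one set's LRU FIFO with the push/pop/collapse behavior named in the abstract; the 4-way geometry is assumed.

      #define WAYS 4

      typedef struct {
          int ways[WAYS];    /* encoded way numbers, oldest at index 0 */
          unsigned count;
      } lru_fifo_t;

      /* Collapse: remove a way from the middle and close the gap; also used
         when a snoop invalidation clears a way's valid bit. */
      void collapse(lru_fifo_t *f, int way)
      {
          unsigned j = 0;
          for (unsigned i = 0; i < f->count; i++)
              if (f->ways[i] != way)
                  f->ways[j++] = f->ways[i];
          f->count = j;
      }

      void on_access(lru_fifo_t *f, int way)
      {
          collapse(f, way);              /* drop any older entry for this way */
          f->ways[f->count++] = way;     /* push: newest entry at the tail */
      }

      /* Pop side: the head entry is always the least recently used way. */
      int victim_way(const lru_fifo_t *f)
      {
          return f->count ? f->ways[0] : 0;
      }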
  • Patent number: 8938586
    Abstract: A memory system includes: a cache memory, a nonvolatile semiconductor memory, and a controller. The controller includes a plurality of management tables that manage data stored in the cache memory and the nonvolatile semiconductor memory using a cluster unit and a track unit. The controller performs data flushing processing from the cache memory to the nonvolatile semiconductor memory when the number of track units registered in the cache memory exceeds a predetermined threshold. Data may be flushed to the nonvolatile memory in different size data units such as a cluster or a track. Data flushing processing may also be performed if a last free way is used when data writing processing is performed on the cache memory managed in a set associative system. The nonvolatile semiconductor memory can be a NAND flash memory.
    Type: Grant
    Filed: February 10, 2009
    Date of Patent: January 20, 2015
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Junji Yano, Kosuke Hatsuda, Hidenori Matsuzaki, Ryoichi Kato
  • Publication number: 20150019822
    Abstract: Nodes in a data storage system having redundant write caches identify when one node fails. A remaining active node stops caching new write operations and begins flushing cached dirty data. Metadata pertaining to each piece of data flushed from the cache is recorded. Metadata pertaining to new write operations is also recorded, and the corresponding data is flushed immediately when a new write operation involves data in the dirty-data cache. When the failed node is restored, the restored node removes all data identified by the metadata from its write cache. Removing such data synchronizes the write cache with all remaining nodes without costly copying operations.
    Type: Application
    Filed: August 15, 2013
    Publication date: January 15, 2015
    Applicant: LSI Corporation
    Inventors: Sumanesh Samanta, Sujan Biswas, Karimulla Sheik, Thanu Anna Skariah, Mohana Rao Goli
  • Patent number: 8935462
    Abstract: For efficient track destage to secondary storage, temporal bits employed with sequential bits for controlling the timing of destaging a track in primary storage are transferred, together with the sequential bits, from the primary storage to the secondary storage. The temporal bits are allowed to age on the secondary storage.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: January 13, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Stephen L. Blinick, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao-Yu Hu, Matthew J. Kalos, Ioannis Koltsidas, Karl A. Nielsen, Roman A. Pletka
  • Patent number: 8935475
    Abstract: Embodiments of the present invention provide for the execution of threads and/or work-items on multiple processors of a heterogeneous computing system in a manner that allows them to share data correctly and efficiently. Disclosed method, system, and article-of-manufacture embodiments include, responsive to an instruction from a sequence of instructions of a work-item, determining an ordering of visibility to other work-items of one or more other data items in relation to a particular data item, and performing at least one cache operation upon at least one of the particular data item or the other data items present in any one or more cache memories in accordance with the determined ordering. The semantics of the instruction include a memory operation upon the particular data item.
    Type: Grant
    Filed: March 30, 2012
    Date of Patent: January 13, 2015
    Assignees: ATI Technologies ULC, Advanced Micro Devices, Inc.
    Inventors: Anthony Asaro, Kevin Normoyle, Mark Hummel, Norman Rubin, Mark Fowler