Cache Flushing Patents (Class 711/135)
-
Patent number: 8694734
Abstract: An invention that expires cached virtual content in a virtual universe is provided. In one embodiment, there is an expiration tool, including an identification component configured to identify virtual content associated with an avatar in the virtual universe; an analysis component configured to analyze a behavior of the avatar in a region of the virtual universe; and an expiration component configured to expire cached virtual content associated with the avatar based on the behavior of the avatar in the region of the virtual universe.
Type: Grant
Filed: January 31, 2009
Date of Patent: April 8, 2014
Assignee: International Business Machines Corporation
Inventors: Ann Corrao, Rick A. Hamilton, II, Brian M. O'Connell, Brian J. Snitzer
-
Publication number: 20140095800
Abstract: Methods and apparatuses for releasing the sticky state of cache lines for one or more group IDs. A sticky removal engine walks through the tag memory of a system cache looking for matches with a first group ID which is clearing its cache lines from the system cache. The engine clears the sticky state of each cache line belonging to the first group ID. If the engine receives a release request for a second group ID, the engine records the current index to log its progress through the tag memory. Then, the engine continues its walk through the tag memory looking for matches with either the first or second group ID. The engine wraps around to the start of the tag memory and continues its walk until reaching the recorded index for the second group ID.
Type: Application
Filed: September 28, 2012
Publication date: April 3, 2014
Applicant: APPLE INC.
Inventors: Sukalpa Biswas, Shinye Shiu, James Wang, Robert Hu
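The wrap-around walk described in the abstract can be modeled in a few lines. The sketch below is illustrative only: the function name, the tag-entry layout, and the assumption that the second release request arrives during the first pass (`join_at < len(tags)`) are all mine, not from the publication.

```python
def sticky_release(tags, first_gid, second_gid=None, join_at=None):
    """Model the sticky-removal engine's walk through tag memory.

    tags: list of dicts with 'gid' (group ID) and 'sticky' keys.
    Clears sticky bits for first_gid; if a release request for
    second_gid arrives at walk index join_at, the current index is
    recorded, both groups are matched from then on, and the walk wraps
    around until the recorded index is reached again.
    Assumes join_at < len(tags) (request arrives during the first pass).
    """
    n = len(tags)
    groups = {first_gid}
    recorded = None
    for step in range(2 * n):          # bounded: at most one wrap-around
        i = step % n                   # wrap to the start of tag memory
        if second_gid is not None and step == join_at:
            groups.add(second_gid)     # release request arrives mid-walk
            recorded = i               # log progress for the second group
        if step >= n and recorded is not None and i == recorded:
            break                      # second group has covered every index
        if tags[i]['gid'] in groups:
            tags[i]['sticky'] = False  # clear the sticky state
    return tags
```

With a second group joining mid-walk, the engine still touches every index exactly as the abstract describes: the remainder of the first pass, then a wrap-around back to the recorded index.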
-
Publication number: 20140095771
Abstract: A computing system includes a storage device, and a host device configured to flush a plurality of pages to the storage device. The host device includes a write-back (WB) cache configured to store the pages, and a file system module configured to flush pages having first characteristics to the storage device from among the pages stored in the WB cache, and then flush pages having second characteristics which are different from the first characteristics to the storage device from among the pages stored in the WB cache.
Type: Application
Filed: September 27, 2013
Publication date: April 3, 2014
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: CHANG-MAN LEE, JAE-GEUK KIM, CHUL LEE, JOO-YOUNG HWANG
-
Publication number: 20140095794
Abstract: A processor is described having cache circuitry and logic circuitry. The logic circuitry is to manage the entry and removal of cache lines from the cache circuitry. The logic circuitry includes storage circuitry and control circuitry. The storage circuitry is to store information identifying a set of cache lines within the cache that are in a modified state. The control circuitry is coupled to the storage circuitry to receive the information from the storage circuitry, responsive to a signal to flush the cache, and determine addresses of the cache therefrom so that the set of cache lines are read from the cache so as to avoid reading cache lines from the cache that are in an invalid or a clean state.
Type: Application
Filed: September 28, 2012
Publication date: April 3, 2014
Inventors: Jaideep MOSES, Ravishankar IYER, Ramesh G. ILLIKKAL, Sadagopan SRINIVASAN
-
Publication number: 20140095801
Abstract: A system, method, and computer program product for retaining coherent cache contents during deep power-down operations, and reducing the low-power state entry and exit overhead to improve processor energy efficiency and performance. The embodiments flush or clean the Modified-state lines from the cache before entering a deep low-power state, and then implement a deferred snoop strategy while in the powered-down state. Upon exiting the powered-down state, the embodiments process the deferred snoops. A small additional cache and a snoop filter (or other cache-tracking structure) may be used along with additional logic to retain cache contents coherently through deep power-down operations, which may span multiple low-power states.
Type: Application
Filed: September 28, 2012
Publication date: April 3, 2014
Inventors: Devadatta V. BODAS, Zhong-Ning (George) CAI, John H. CRAWFORD
-
Patent number: 8688913
Abstract: For movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache. Requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache. Unrequested data of the whole data segment is split and positioned at a Least Recently Used (LRU) portion of the demotion queue of the higher level of cache. The unrequested data is pinned in place until a write of the whole data segment to the lower level of cache completes.
Type: Grant
Filed: November 1, 2011
Date of Patent: April 1, 2014
Assignee: International Business Machines Corporation
Inventors: Michael T. Benhase, Stephen L. Blinick, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao-Yu Hu, Matthew J. Kalos, Ioannis Koltsidas, Roman A. Pletka
-
Publication number: 20140089596
Abstract: A technique for implementing read-copy update in a shared-memory computing system having two or more processors operatively coupled to a shared memory and to associated incoherent caches that cache copies of data stored in the memory. According to example embodiments disclosed herein, cacheline information for data that has been rendered obsolete due to a data update being performed by one of the processors is recorded. The recorded cacheline information is communicated to one or more of the other processors. The one or more other processors use the communicated cacheline information to flush the obsolete data from all incoherent caches that may be caching such data.
Type: Application
Filed: November 29, 2013
Publication date: March 27, 2014
Applicant: International Business Machines Corporation
Inventor: Paul E. McKenney
-
Patent number: 8683131
Abstract: A storage device is provided for direct memory access. A controller of the storage device performs a mapping of a window of memory addresses to a logical block addressing (LBA) range of the storage device. Responsive to receiving from a host a write request specifying a write address within the window of memory addresses, the controller initializes a first memory buffer in the storage device and associates the first memory buffer with a first address range within the window of memory addresses such that the write address of the request is within the first address range. The controller writes to the first memory buffer based on the write address. Responsive to the buffer being full, the controller persists contents of the first memory buffer to the storage device using logical block addressing based on the mapping.
Type: Grant
Filed: March 13, 2013
Date of Patent: March 25, 2014
Assignee: International Business Machines Corporation
Inventors: Lee D. Cleveland, Andrew D. Walls
-
Publication number: 20140082295
Abstract: A network attached storage (NAS) caching appliance, system, and associated method to detect out-of-band accesses to a networked file system.
Type: Application
Filed: September 18, 2013
Publication date: March 20, 2014
Applicant: NetApp, Inc.
Inventors: Derek Beard, Ghassan Yammine, Greg Dahl
-
Patent number: 8677071
Abstract: Techniques are described for controlling processor cache memory within a processor system. Cache occupancy values for each of a plurality of entities executing in the processor system can be calculated. A cache replacement algorithm uses the cache occupancy values when making subsequent cache line replacement decisions. In some variations, entities can have occupancy profiles specifying a maximum cache quota and/or a minimum cache quota which can be adjusted to achieve desired performance criteria. Related methods, systems, and articles are also described.
Type: Grant
Filed: March 25, 2011
Date of Patent: March 18, 2014
Assignee: Virtualmetrix, Inc.
Inventors: Gary Allen Gibson, Valeri Popescu
-
Publication number: 20140075124
Abstract: Techniques for conflict detection in hardware transactional memory (HTM) are provided. In one aspect, a method for detecting conflicts in HTM includes the following steps. Conflict detection is performed eagerly by setting read and write bits in a cache as transactions having read and write requests are made. A given one of the transactions is stalled when a conflict is detected whereby more than one of the transactions are accessing data in the cache in a conflicting way. An address of the conflicting data is placed in a predictor. The predictor is queried whenever the write requests are made to determine whether they correspond to entries in the predictor. A copy of the data corresponding to entries in the predictor is placed in a store buffer. The write bits in the cache are set and the copy of the data in the store buffer is merged in at transaction commit.
Type: Application
Filed: September 7, 2012
Publication date: March 13, 2014
Applicant: International Business Machines Corporation
Inventors: Colin B. Blundell, Harold Wade Cain, III, Jose Eduardo Moreira
-
Publication number: 20140068196
Abstract: Web objects, such as media files, are sent through an adaptation server which includes a transcoder for adapting forwarded objects according to profiles of the receiving destinations, and a cache memory for caching frequently requested objects, including their adapted versions. The probability of additional requests for the same object before the object expires is assessed by tracking hits. Only objects having experienced hits in excess of a hit threshold are cached, the hit threshold being adaptively adjusted based on the capacity of the cache, and the space required to store cached media files. Expired objects are collected in a list, and may be ejected from the cache periodically, or when the cache is nearly full.
Type: Application
Filed: August 28, 2012
Publication date: March 6, 2014
Inventors: Louis BENOIT, Sébastien CÔTÉ, Robert BUCHNAJZER
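The hit-threshold gating described above can be sketched briefly. Everything below is an illustrative assumption, not the publication's actual design: the class name `HitGatedCache`, the linear adaptation formula tying the threshold to cache fullness, and the byte-length accounting.

```python
class HitGatedCache:
    """Cache an object only after its request count exceeds an adaptive
    hit threshold; the threshold rises as the cache fills, so a fuller
    cache demands more evidence of popularity before admitting an object.
    Illustrative sketch only."""

    def __init__(self, capacity_bytes, base_threshold=2):
        self.capacity = capacity_bytes
        self.base = base_threshold
        self.threshold = base_threshold
        self.used = 0
        self.hits = {}    # key -> request count (tracked hits)
        self.store = {}   # key -> cached object bytes

    def request(self, key, obj):
        if key in self.store:
            return self.store[key]              # served from cache
        self.hits[key] = self.hits.get(key, 0) + 1
        if (self.hits[key] > self.threshold
                and self.used + len(obj) <= self.capacity):
            self.store[key] = obj               # admit to cache
            self.used += len(obj)
        # adapt: require more hits as the cache fills up (assumed formula)
        self.threshold = self.base + int(4 * self.used / self.capacity)
        return obj
```

With `base_threshold=2`, an object is admitted on its third request while space remains; as `used` approaches `capacity`, the bar rises.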
-
Publication number: 20140068197
Abstract: A storage module may be configured to service I/O requests according to different persistence levels. The persistence level of an I/O request may relate to the storage resource(s) used to service the I/O request, the configuration of the storage resource(s), the storage mode of the resources, and so on. In some embodiments, a persistence level may relate to a cache mode of an I/O request. I/O requests pertaining to temporary or disposable data may be serviced using an ephemeral cache mode. An ephemeral cache mode may comprise storing I/O request data in cache storage without writing the data through (or back) to primary storage. Ephemeral cache data may be transferred between hosts in response to virtual machine migration.
Type: Application
Filed: March 14, 2013
Publication date: March 6, 2014
Inventors: Vikram Joshi, David Flynn, Yang Luan, Michael F. Brown
-
Publication number: 20140059298
Abstract: In one embodiment, a method performed by one or more computing devices includes receiving at a host cache, a first request to prepare a volume of the host cache for creating a snapshot of a cached logical unit number (LUN), the request indicating that a snapshot of the cached LUN will be taken, preparing, in response to the first request, the volume of the host cache for creating the snapshot of the cached LUN depending on a mode of the host cache, receiving, at the host cache, a second request to create the snapshot of the cached LUN, and in response to the second request, creating, at the host cache, the snapshot of the cached LUN.
Type: Application
Filed: August 24, 2012
Publication date: February 27, 2014
Applicant: DELL PRODUCTS L.P.
Inventors: Marc David Olin, Michael James Klemm
-
Patent number: 8656110
Abstract: When multiple pieces of content data are recorded continuously to a nonvolatile storage device having a page cache function, the preparation time before starting the next content data recording is reduced. When a cache releasing section of a nonvolatile storage device (1) receives a cache release request from an access device (2), it releases, at the same time, the addresses included in one logical block among the multiple addresses that are cache objects. Further, the nonvolatile storage device (1) includes a cache information outputting section which outputs, to the outside, information regarding the time period required for releasing the addresses that are cache objects, and the access device (2) refers to this information to select the addresses to be released.
Type: Grant
Filed: August 10, 2010
Date of Patent: February 18, 2014
Assignee: Panasonic Corporation
Inventors: Hirokazu So, Takuji Maeda, Masayuki Toyama
-
Publication number: 20140047189
Abstract: Determining and using the ideal size of memory to be transferred from high speed memory to a low speed memory may result in speedier saves to the low speed memory and a longer life for the low speed memory.
Type: Application
Filed: October 18, 2013
Publication date: February 13, 2014
Applicant: Microsoft Corporation
Inventors: Michael R. Fortin, Robert L. Reinauer
-
Patent number: 8650363
Abstract: A memory subsystem includes a volatile memory, a nonvolatile memory, and a controller including logic to interface the volatile memory to an external system. The volatile memory is addressable for reading and writing by the external system. The memory subsystem includes a power controller with logic to detect when power from the external system to at least one of the volatile and nonvolatile memories and to the controller fails. When external system power fails, backup power is provided to at least one of the volatile and nonvolatile memories and to the controller for long enough to enable the controller to back up data from the volatile memory to the nonvolatile memory.
Type: Grant
Filed: May 27, 2012
Date of Patent: February 11, 2014
Assignee: AgigA Tech
Inventor: Ronald H Sartore
-
Publication number: 20140040561
Abstract: A method implemented by a computer system comprising a first memory agent and a second memory agent coupled to the first memory agent, wherein the second memory agent has access to a cache comprising a cache line, the method comprising changing a state of the cache line by the second memory agent, and sending a non-snoop message from the second memory agent to the first memory agent via a communication channel assigned to snoop responses, wherein the non-snoop message informs the first memory agent of the state change of the cache line.
Type: Application
Filed: May 22, 2013
Publication date: February 6, 2014
Inventors: Iulin Lih, Chenghong He, Hongbo Shi, Naxin Zhang
-
Publication number: 20140040540
Abstract: Methods, apparatus, and systems, including computer programs encoded on a computer storage medium, manage metadata for virtual volumes. In some implementations, a method includes: loading into memory at least a portion of metadata for a virtual volume (VV) that spans data extents of different persistent storage devices, wherein the metadata comprises virtual metadata block (VMB) descriptors and virtual metadata blocks (VMBs); mapping an address of the VV to a VMB number and an index of an extent pointer within a VMB identified by the VMB number, wherein the extent pointer indicates an extent within one of the different persistent storage devices; locating a VMB descriptor in the memory based on the VMB number; and locating the identified VMB in the memory, or determining that it is not in the memory, based on the located VMB descriptor.
Type: Application
Filed: October 7, 2013
Publication date: February 6, 2014
Applicant: Marvell World Trade Ltd.
Inventors: Arvind Pruthi, Shailesh P. Parulekar, Mayur Shardul
-
Publication number: 20140040560
Abstract: An input/output memory management unit (IOMMU) having an “invalidate all” command available to clear the contents of cache memory is presented. The cache memory provides fast access to address translation data that has been previously obtained by a process. A typical cache memory includes device tables, page tables and interrupt remapping entries. Cache memory data can become stale or be compromised from security breaches or malfunctioning devices. In these circumstances, a rapid approach to clearing cache memory content is provided.
Type: Application
Filed: July 31, 2012
Publication date: February 6, 2014
Inventors: Andrew G. Kegel, Mark D. Hummel, Anthony Asaro
-
Publication number: 20140040552
Abstract: A method includes storing, with a first programmable processor, shared variable data to cache lines of a first cache of the first processor. The method further includes executing, with the first programmable processor, a store-with-release operation, executing, with a second programmable processor, a load-with-acquire operation, and loading, with the second programmable processor, the value of the shared variable data from a cache of the second programmable processor.
Type: Application
Filed: August 2, 2013
Publication date: February 6, 2014
Applicant: QUALCOMM Incorporated
Inventors: Bohuslav Rychlik, Tzung Ren Tzeng, Andrew Evan Gruber, Alexei V. Bourd, Colin Christopher Sharp, Eric Demers
-
Patent number: 8645796
Abstract: Dynamic pipeline cache error correction includes receiving a request to perform an operation that requires a storage cache slot, the storage cache slot residing in a cache. The dynamic pipeline cache error correction also includes accessing the storage cache slot, determining a cache hit for the storage cache slot, identifying and correcting any correctable soft errors associated with the storage cache slot. The dynamic cache error correction further includes updating the cache with results of corrected data.
Type: Grant
Filed: June 24, 2010
Date of Patent: February 4, 2014
Assignee: International Business Machines Corporation
Inventors: Ekaterina M. Ambroladze, Michael Fee, Edward T. Gerchman, Arthur J. O'Neill, Jr.
-
Publication number: 20140032851
Abstract: In general, the disclosure is directed to techniques for choosing which pages to evict from the buffer pool to make room for caching additional pages in the context of a database table scan. A buffer pool is maintained in memory. A fraction of pages of a table to persist in the buffer pool is determined. A random number is generated as a decimal value between 0 and 1 for each page of the table cached in the buffer pool. If the random number generated for a page is less than the fraction, the page is persisted in the buffer pool. If the random number generated for a page is greater than the fraction, the page is included as a candidate for eviction from the buffer pool.
Type: Application
Filed: July 27, 2012
Publication date: January 30, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Sam S. Lightstone, Adam J. Storm
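The eviction rule in this abstract amounts to one uniform random draw per cached page, a Bernoulli trial with success probability equal to the persist fraction. A minimal sketch follows; the function and parameter names are illustrative, not from the publication.

```python
import random

def select_evictions(cached_pages, persist_fraction, rng=random.random):
    """Partition a scanned table's cached pages into pages to persist in
    the buffer pool and candidates for eviction: one uniform draw in
    [0, 1) per page, persisting the page when the draw falls below the
    persist fraction. Illustrative sketch of the abstract's rule."""
    keep, candidates = [], []
    for page in cached_pages:
        if rng() < persist_fraction:
            keep.append(page)        # page persists in the buffer pool
        else:
            candidates.append(page)  # candidate for eviction
    return keep, candidates
```

Injecting a deterministic `rng` makes the rule easy to check; in expectation, `persist_fraction` of the scanned table's pages survive in the pool.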
-
Publication number: 20140032850
Abstract: Embodiments present a virtual disk image to applications such as virtual machines (VMs) executing on a computing device. The virtual disk image corresponds to one or more subparts of binary large objects (blobs) of data stored by a cloud service, and is implemented in a log structured format. Grains of the virtual disk image are cached by the computing device. The computing device caches only a subset of the grains and performs write operations without blocking the applications to reduce storage latency perceived by the applications. Some embodiments enable the applications that lack enterprise class storage to benefit from enterprise class cloud storage services.
Type: Application
Filed: July 25, 2012
Publication date: January 30, 2014
Applicant: VMWARE, INC.
Inventors: Thomas A. Phelan, Erik Cota-Robles, David William Barry, Adam Back
-
Publication number: 20140032852
Abstract: In general, the disclosure is directed to techniques for choosing which pages to evict from the buffer pool to make room for caching additional pages in the context of a database table scan. A buffer pool is maintained in memory. A fraction of pages of a table to persist in the buffer pool is determined. A random number is generated as a decimal value between 0 and 1 for each page of the table cached in the buffer pool. If the random number generated for a page is less than the fraction, the page is persisted in the buffer pool. If the random number generated for a page is greater than the fraction, the page is included as a candidate for eviction from the buffer pool.
Type: Application
Filed: March 6, 2013
Publication date: January 30, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Sam S. Lightstone, Adam J. Storm
-
Patent number: 8639873
Abstract: A detachable storage device can comprise a RAM cache, a device controller, and a storage system. The RAM cache may be configured to receive data from a digital device. The device controller may be configured to transfer the data from the RAM cache to the storage system. The storage system may be configured to store the data upon a predetermined event.
Type: Grant
Filed: December 21, 2006
Date of Patent: January 28, 2014
Assignee: Imation Corp.
Inventors: David Alexander Jevans, Gil Spencer
-
Patent number: 8635407
Abstract: A storage device is provided for direct memory access. A controller of the storage device performs a mapping of a window of memory addresses to a logical block addressing (LBA) range of the storage device. Responsive to receiving from a host a write request specifying a write address within the window of memory addresses, the controller initializes a first memory buffer in the storage device and associates the first memory buffer with a first address range within the window of memory addresses such that the write address of the request is within the first address range. The controller writes to the first memory buffer based on the write address. Responsive to the buffer being full, the controller persists contents of the first memory buffer to the storage device using logical block addressing based on the mapping.
Type: Grant
Filed: September 30, 2011
Date of Patent: January 21, 2014
Assignee: International Business Machines Corporation
Inventors: Lee D. Cleveland, Andrew D. Walls
-
Publication number: 20140019688
Abstract: Disclosed herein are systems, methods, and computer readable storage media for a database system using solid state drives as a second level cache. A database system includes random access memory configured to operate as a first level cache, solid state disk drives configured to operate as a persistent second level cache, and hard disk drives configured to operate as disk storage. The database system also includes a cache manager configured to receive a request for a data page and determine whether the data page is in cache or disk storage. If the data page is on disk, or in the second level cache, it is copied to the first level cache. If copying the data page results in an eviction, the evicted data page is copied to the second level cache. At checkpoint, dirty pages stored in the second level cache are flushed in place in the second level cache.
Type: Application
Filed: July 13, 2012
Publication date: January 16, 2014
Applicant: iAnywhere Solutions
Inventors: Pedram GHODSNIA, Reza SHERKAT, John C. SMIRNIOS, Peter BUMBULIS, Anil K. GOEL
-
Patent number: 8630418
Abstract: A system or computer usable program product for managing keys in a computer memory including receiving a request to store a first key to a first key repository, storing the first key to a second key repository in response to the request, and storing the first key from the second key repository to the first key repository within said computer memory based on a predetermined periodicity.
Type: Grant
Filed: January 5, 2011
Date of Patent: January 14, 2014
Assignee: International Business Machines Corporation
Inventors: Bruce A. Rich, Thomas H. Benjamin, John T. Peck
-
Patent number: 8627012
Abstract: A method, computer program product, and computing system for receiving, on a cache system, a plurality of data write requests, wherein each data write request identifies a data portion to be written to a data array associated with the cache system. The data portions associated with the data write requests are written to the cache system. The data portions associated with the data write requests are queued until the occurrence of a commit event. Upon the occurrence of the commit event, a consolidated write operation is performed to write the data portions associated with the data write requests to the data array.
Type: Grant
Filed: December 30, 2011
Date of Patent: January 7, 2014
Assignee: EMC Corporation
Inventors: Philip Derbeko, Assaf Natanzon, Anat Eyal, David Erel
-
Patent number: 8615633
Abstract: Technologies are generally described for maintaining cache coherency within a multi-core processor. A first cache entry to be evicted from a first cache may be identified. The first cache entry may include a block of data and a first tag indicating an owned state. An owner eviction message for the first cache entry may be broadcasted from the first cache. A second cache entry in a second cache may be identified. The second cache entry may include the block of data and a second tag indicating a shared state. The broadcasted owner eviction message may be detected with the second cache. An ownership acceptance message for the second cache entry may be broadcasted from the second cache. The broadcasted ownership acceptance message may be detected with the first cache. The second tag in the second cache entry may be transformed from the shared state to the owned state.
Type: Grant
Filed: April 23, 2009
Date of Patent: December 24, 2013
Assignee: Empire Technology Development LLC
Inventor: Yan Solihin
-
Publication number: 20130339569
Abstract: Storage system(s) for storing data in physical storage in a recurring manner, method(s) of operating thereof, and corresponding computer program product(s). For example, a possible method can include for each recurrence: generating a snapshot of at least one logical volume; destaging all data corresponding to the snapshot which was accommodated in the cache memory prior to a time of generating the snapshot and which was dirty at the time of generating said snapshot, thus giving rise to a destaged data group; and after the destaged data group has been successfully destaged, registering an indication that the snapshot is associated with an order preservation consistency condition for the at least one logical volume, thus giving rise to a consistency snapshot.
Type: Application
Filed: June 14, 2012
Publication date: December 19, 2013
Applicant: Infinidat Ltd.
Inventors: Yechiel YOCHAI, Michael DORFMAN, Efri ZEIDNER
-
Patent number: 8612721
Abstract: According to one embodiment, upon request from an information processor, a semiconductor storage controller writes pieces of data in predetermined units into storage locations in which no data has been written in erased areas within a semiconductor chip's storage area. A third table and a second table which is a subset thereof include physical addresses each indicating a storage location of each of pieces of the data within the semiconductor chip. The first table includes either information specifying a second table entry or information specifying a third table entry. The semiconductor storage controller records the first and the second tables into a volatile memory or records the first table into a volatile memory and the third table into a nonvolatile memory.
Type: Grant
Filed: March 1, 2011
Date of Patent: December 17, 2013
Assignee: Kabushiki Kaisha Toshiba
Inventors: Shigehiro Asano, Shinichi Kanno, Kenichiro Yoshii
-
Publication number: 20130326150
Abstract: A cache is maintained with write order numbers that indicate orders of writes into the cache, so that periodic partial flushes of the cache can be executed while maintaining write order consistency. A method of storing data into the cache includes receiving a request to write data into the cache, identifying lines in the cache for storing the data, writing the data into the lines of the cache, storing a write order number, and associating the write order number with the lines of the cache. A method of flushing a cache having cache lines associated with write order numbers includes the steps of identifying lines in the cache that are associated with either a selected write order number or a write order number that is less than the selected write order number, and flushing data stored in the identified lines to a persistent storage.
Type: Application
Filed: June 5, 2012
Publication date: December 5, 2013
Applicant: VMware, Inc.
Inventors: Thomas A. PHELAN, Erik COTA-ROBLES
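The partial-flush rule above (flush every line whose write order number is at or below a selected number, leave the rest cached) can be sketched in a few lines. The function name and the list-based storage model are illustrative assumptions, not the disclosed implementation.

```python
def flush_up_to(cache_lines, selected_won, persistent_storage):
    """Flush every cache line whose write order number (WON) is less
    than or equal to the selected WON, in ascending WON order, so that
    a partial flush preserves write-order consistency.

    cache_lines: list of (won, data) pairs; persistent_storage: a list
    standing in for the backing store. Returns the lines still cached.
    Illustrative sketch of the abstract's flushing method."""
    remaining = []
    for won, data in sorted(cache_lines, key=lambda pair: pair[0]):
        if won <= selected_won:
            persistent_storage.append((won, data))  # flushed in order
        else:
            remaining.append((won, data))           # stays in the cache
    return remaining
```

Because lines are emitted in ascending write-order-number sequence, the persistent copy never reflects a later write without every earlier one.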
-
Publication number: 20130326149
Abstract: A method for destaging data from a memory of a storage controller to a striped volume is provided. The method includes determining if a stripe should be destaged from a write cache of the storage controller to the striped volume, destaging a partial stripe if a full stripe write percentage is less than a full stripe write affinity value, and destaging a full stripe if the full stripe write percentage is greater than the full stripe write affinity value. The full stripe write percentage includes a full stripe count divided by the sum of the full stripe count and a partial stripe count. The full stripe count is the number of stripes in the write cache where all chunks of a stripe are dirty. The partial stripe count is the number of stripes where at least one chunk but less than all chunks of the stripe are dirty.
Type: Application
Filed: May 29, 2012
Publication date: December 5, 2013
Applicant: DOT HILL SYSTEMS CORPORATION
Inventors: Michael David Barrell, Zachary David Traut
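The destage decision above reduces to one ratio: full stripes divided by full plus partial stripes, compared against the affinity value. A short sketch of the rule as stated in the abstract (names are illustrative, and the empty-cache case is my own assumption):

```python
def destage_decision(full_count, partial_count, affinity_pct):
    """Compute the full stripe write percentage (full stripe count over
    full plus partial stripe counts, as a percentage) and pick what to
    destage: a full stripe when the percentage exceeds the affinity
    value, otherwise a partial stripe. Returns None when no stripe in
    the write cache is dirty (assumed behavior, not in the abstract)."""
    total = full_count + partial_count
    if total == 0:
        return None                      # nothing dirty to destage
    pct = 100.0 * full_count / total
    return 'full' if pct > affinity_pct else 'partial'
```

For example, with 8 full and 2 partial dirty stripes the percentage is 80, so against a 50% affinity value a full stripe is destaged.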
-
Patent number: 8601219
Abstract: A memory system includes a first storing area included in a volatile semiconductor memory, a second and a third storing area included in a nonvolatile semiconductor memory, and a controller that allocates the storage area of the nonvolatile semiconductor memory to the second storing area and the third storing area in a logical block unit associated with one or more blocks. The second storing area is configured to be managed with a first management unit. The third storing area is configured to be managed with a second management unit, a size of the second management unit being larger than a size of the first management unit.
Type: Grant
Filed: March 12, 2009
Date of Patent: December 3, 2013
Assignee: Kabushiki Kaisha Toshiba
Inventors: Junji Yano, Hidenori Matsuzaki, Kosuke Hatsuda
-
Patent number: 8601213
Abstract: A system, method, and computer-readable medium that facilitate efficient use of cache memory in a massively parallel processing system are provided. A residency time of a data block to be stored in cache memory or a disk drive is estimated. A metric is calculated for the data block as a function of the residency time. The metric may further be calculated as a function of the data block size. One or more data blocks stored in cache memory are evaluated by comparing a respective metric of the one or more data blocks with the metric of the data block to be stored. A determination is then made to either store the data block on the disk drive or flush the one or more data blocks from the cache memory and store the data block in the cache memory. In this manner, the cache memory may be more efficiently utilized by flushing larger data blocks with significant residency times from the cache memory to make room for smaller data blocks with lesser residency times.
Type: Grant
Filed: November 3, 2008
Date of Patent: December 3, 2013
Assignee: Teradata US, Inc.
Inventors: Douglas Brown, John Mark Morris
-
Patent number: 8595455
Abstract: Techniques for maintaining mirrored storage cluster data consistency can employ write-intent logging. The techniques can be scaled to any number of mirror nodes. The techniques can keep track of any outstanding I/Os, data in caches, and data that has gone out of sync between mirrored nodes due to link failures. The techniques can ensure that a power failure on any of the storage nodes does not result in inconsistent data among the storage nodes. The techniques may keep track of outstanding I/Os using a minimal memory foot-print and having a negligible impact on the I/O performance. Properly choosing the granularity of the system for tracking outstanding I/Os can result in a minimal amount of data requiring transfer to synchronize the mirror nodes. The capability to vary the granularity based on physical and logical parameters of the storage volumes may provide performance benefits.
Type: Grant
Filed: September 23, 2011
Date of Patent: November 26, 2013
Assignee: American Megatrends, Inc.
Inventors: Paresh Chatterjee, Ajit Narayanan, Narayanan Balakrishnan, Raja Jayaraman
-
Patent number: 8583872
Abstract: A cache memory having a sector function, operating in accordance with a set associative system, and performing a cache operation to replace data in a cache block in the cache way corresponding to a replacement cache way determined upon an occurrence of a cache miss comprises: storing sector ID information in association with each of the cache ways in the cache block specified by a memory access request; determining, upon the occurrence of the cache miss, replacement way candidates, in accordance with sector ID information attached to the memory access request and the stored sector ID information; selecting and outputting a replacement way from the replacement way candidates; and updating the stored sector ID information in association with each of the cache ways in the cache block specified by the memory access request, to the sector ID information attached to the memory access request.
Type: Grant
Filed: August 19, 2008
Date of Patent: November 12, 2013
Assignee: Fujitsu Limited
Inventors: Shuji Yamamura, Mikio Hondou, Iwao Yamazaki, Toshio Yoshida
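The candidate-selection step above can be sketched as follows. This is a simplification under stated assumptions: matching the request's sector ID narrows the victim set, and the final choice among candidates is random here because the abstract leaves the selection policy open.

```python
import random

def pick_replacement_way(stored_sector_ids, request_sector_id):
    """Choose a victim way for the indexed cache block on a miss.

    stored_sector_ids: per-way sector ID stored for this cache block.
    Ways whose stored sector ID matches the request's are preferred
    candidates; if none match, fall back to considering every way.
    """
    candidates = [way for way, sid in enumerate(stored_sector_ids)
                  if sid == request_sector_id]
    if not candidates:
        candidates = list(range(len(stored_sector_ids)))
    return random.choice(candidates)
```

After the fill, the victim way's stored sector ID would be updated to the request's sector ID, as the abstract's last clause describes.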
-
Publication number: 20130297880
Abstract: Apparatuses, systems, and methods are disclosed for caching data. A method includes directly mapping a logical address of a backing store to a logical address of a non-volatile cache. A method includes mapping, in a logical-to-physical mapping structure, the logical address of the non-volatile cache to a physical location in the non-volatile cache. The physical location may store data associated with the logical address of the backing store. A method includes removing the mapping from the logical-to-physical mapping structure in response to evicting the data from the non-volatile cache so that membership in the logical-to-physical mapping structure denotes storage in the non-volatile cache.
Type: Application
Filed: June 29, 2013
Publication date: November 7, 2013
Inventor: David Flynn
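The key idea above, that map membership itself denotes cache residency, can be sketched with a dictionary standing in for the logical-to-physical structure. Class and method names are illustrative assumptions.

```python
class NVCache:
    """Direct-mapped logical addressing: the backing store's logical
    address is used as the cache's logical address, and presence in the
    logical-to-physical map is the cache-hit test (no separate valid bit)."""
    def __init__(self):
        self.l2p = {}        # logical address -> physical location
        self.media = {}      # physical location -> data (stand-in for flash)
        self.next_phys = 0

    def insert(self, logical_addr, data):
        phys = self.next_phys
        self.next_phys += 1
        self.media[phys] = data
        self.l2p[logical_addr] = phys

    def lookup(self, logical_addr):
        # Membership test doubles as the hit/miss decision.
        phys = self.l2p.get(logical_addr)
        return None if phys is None else self.media[phys]

    def evict(self, logical_addr):
        # Removing the mapping IS the eviction.
        phys = self.l2p.pop(logical_addr)
        del self.media[phys]
```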
-
Publication number: 20130297884
Abstract: Various embodiments for improving hash index key lookup caching performance in a computing environment are provided. In one embodiment, for a cached fingerprint map having a plurality of entries corresponding to a plurality of data fingerprints, reference count information is used to determine a length of time to retain the plurality of entries in cache. Those of the plurality of entries having higher reference counts are retained longer than those having lower reference counts.
Type: Application
Filed: March 13, 2013
Publication date: November 7, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Joseph S. HYDE, II, Subhojit ROY
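One way to realize the retention rule above is to scale each entry's lifetime by its reference count. The linear scaling and the `base_ttl` parameter are assumptions; the abstract says only that higher-count entries are retained longer.

```python
def expired(entries, now, base_ttl):
    """Return fingerprints whose cache entries have outlived their budget.

    entries: {fingerprint: (insert_time, ref_count)}.
    An entry's lifetime grows with its reference count, so frequently
    referenced (hot) dedup fingerprints stay cached longer.
    """
    return [fp for fp, (t, refs) in entries.items()
            if now - t > base_ttl * refs]
```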
-
Patent number: 8578097
Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
Type: Grant
Filed: October 24, 2011
Date of Patent: November 5, 2013
Assignee: Intel Corporation
Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
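For readers unfamiliar with the terms, the core scatter/gather operations look like the sketch below; the patent's contribution is doing this in hardware (address calculation, shuffling, format conversion), not the operations themselves.

```python
def gather(memory, indices):
    # Gather: collect scattered elements into one dense buffer, so only
    # the useful elements cross the memory interface.
    return [memory[i] for i in indices]

def scatter(memory, indices, values):
    # Scatter: write a dense buffer back to scattered locations.
    for i, v in zip(indices, values):
        memory[i] = v
```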
-
Patent number: 8578100
Abstract: A disk drive is disclosed comprising a head actuated over a disk, a volatile semiconductor memory (VSM), and a command queue. A plurality of write commands received from a host are stored in the command queue, and write data for the write commands is stored in the VSM. A flush time needed to flush the write data from the VSM to the disk is computed, and the write data is flushed from the VSM to a non-volatile memory (NVM) in response to the flush time.
Type: Grant
Filed: November 8, 2010
Date of Patent: November 5, 2013
Assignee: Western Digital Technologies, Inc.
Inventors: Sang Huynh, Ayberk Ozturk
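A minimal sketch of the flush-time decision, under assumptions: the flush-time estimate here is transfer time plus a per-command positioning overhead, and the trigger is a simple time budget (e.g. remaining backup power). All parameter names and default values are illustrative, not from the patent.

```python
def flush_time(write_bytes, n_commands, throughput_bps, seek_overhead_s):
    # Estimated time to flush buffered write data from VSM to the disk:
    # raw transfer time plus a per-command positioning cost (hypothetical model).
    return write_bytes / throughput_bps + n_commands * seek_overhead_s

def should_flush_to_nvm(write_bytes, n_commands, budget_s,
                        throughput_bps=100e6, seek_overhead_s=0.01):
    # If the disk flush would overrun the time budget, redirect the
    # flush to faster non-volatile memory instead.
    return flush_time(write_bytes, n_commands,
                      throughput_bps, seek_overhead_s) > budget_s
```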
-
Publication number: 20130290636
Abstract: Methods, and apparatus to cause performance of such methods, for managing memory. The methods include requesting a particular unit of data from a first level of memory. If the particular unit of data is not available from the first level of memory, the methods further include determining whether a free unit of data exists in the first level of memory, evicting a unit of data from the first level of memory if a free unit of data does not exist in the first level of memory, and requesting the particular unit of data from a second level of memory. If the particular unit of data is not available from the second level of memory, the methods further include reading the particular unit of data from a third level of memory. The methods still further include writing the particular unit of data to the first level of memory.
Type: Application
Filed: April 30, 2012
Publication date: October 31, 2013
Inventors: Qiming Chen, Ren Wu, Meichun Hsu
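The three-level read path above maps directly onto a short function. Dictionaries stand in for the memory levels; the eviction policy is left open by the abstract, so a simple first-inserted victim is assumed here.

```python
def read(addr, l1, l2, l3, l1_capacity):
    """Three-level read path: try L1; on a miss, make room in L1
    (evicting if no free slot), try L2, fall back to L3, then install
    the data in L1."""
    if addr in l1:
        return l1[addr]
    if len(l1) >= l1_capacity:
        victim = next(iter(l1))   # policy unspecified; evict oldest insertion
        del l1[victim]
    data = l2.get(addr)
    if data is None:
        data = l3[addr]           # the third (backing) level always has it
    l1[addr] = data
    return data
```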
-
Patent number: 8572409
Abstract: For secure non-redundant storage of data, to store a data blocklet (sub-block), one takes a hash of each blocklet. The hash value is used as a key to encrypt the blocklet data. The key is then hashed to encrypt it and the hashed key used in the blocklet index to identify the blocklet. The blocklet index entry also conventionally includes the address of that encrypted blocklet. Unless one has a file representation which is a vector of the hash values, one cannot obtain direct information about the original blocklet from the blocklet index or the blocklet storage. To retrieve data, each original blocklet hash is hashed again to generate the index entry.Type: Grant
Filed: September 26, 2008
Date of Patent: October 29, 2013
Inventor: Stephen P. Spackman
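The store/retrieve flow above (a convergent-encryption style scheme) can be sketched as follows. SHA-256 is assumed for the hash, and an XOR keystream derived from SHA-256 stands in for a real cipher; both are illustrative simplifications, not the patent's specified primitives.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy cipher: XOR with a keystream of SHA-256(key || counter) blocks.
    out = bytearray()
    ctr = 0
    while len(out) < len(data):
        out.extend(h(key + ctr.to_bytes(8, "big")))
        ctr += 1
    return bytes(x ^ k for x, k in zip(data, out))

def store_blocklet(index, storage, blocklet: bytes):
    # The blocklet's hash is the encryption key; the hash of that key
    # is the index entry, so the index reveals nothing direct about the data.
    key = h(blocklet)
    index[h(key)] = len(storage)      # index entry -> address of ciphertext
    storage.append(xor_stream(blocklet, key))

def retrieve_blocklet(index, storage, blocklet_hash: bytes) -> bytes:
    # A file is represented as a vector of blocklet hashes; hash each one
    # again to find its index entry, then decrypt with the original hash.
    ct = storage[index[h(blocklet_hash)]]
    return xor_stream(ct, blocklet_hash)
```

Identical blocklets hash to the same key and index entry, so duplicates deduplicate naturally (the "non-redundant" property).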
-
Patent number: 8566528
Abstract: In an embodiment, a combining write buffer is configured to maintain one or more flush metrics to determine when to transmit write operations from buffer entries. The combining write buffer may be configured to dynamically modify the flush metrics in response to activity in the write buffer, modifying the conditions under which write operations are transmitted from the write buffer to the next lower level of memory. For example, in one implementation, the flush metrics may include categorizing write buffer entries as “collapsed.” A collapsed write buffer entry, and the collapsed write operations therein, may include at least one write operation that has overwritten data that was written by a previous write operation in the buffer entry. In another implementation, the combining write buffer may maintain the threshold of buffer fullness as a flush metric and may adjust it over time based on the actual buffer fullness.
Type: Grant
Filed: December 10, 2012
Date of Patent: October 22, 2013
Assignee: Apple Inc.
Inventors: Peter J. Bannon, Andrew J. Beaumont-Smith, Ramesh B. Gunna, Wei-han Lien, Brian P. Lilly, Jaidev P. Patwardhan, Shih-Chieh R. Wen, Tse-Yu Yeh
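The "collapsed" metric in the example above can be sketched like this: an entry becomes collapsed the first time a write overlaps bytes already buffered for that entry, and collapsed entries become flush candidates. Names and the byte-offset representation are illustrative assumptions.

```python
class CombiningWriteBuffer:
    """Toy combining write buffer tracking the 'collapsed' flush metric."""
    def __init__(self):
        self.entries = {}   # cache-line address -> (written_offsets, collapsed)

    def write(self, line_addr, offsets):
        written, collapsed = self.entries.get(line_addr, (set(), False))
        if written & set(offsets):
            # Overwrote data a previous write buffered: entry is collapsed.
            collapsed = True
        self.entries[line_addr] = (written | set(offsets), collapsed)

    def flush_candidates(self):
        # Collapsed entries are prioritized for transmission to the
        # next lower level of memory.
        return [a for a, (_, c) in self.entries.items() if c]
```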
-
Patent number: 8566521
Abstract: A computer-implemented method, computer program product, and system are provided for implementing a cache offloader. A current cache memory usage is compared with a memory threshold. Responsive to the current cache memory usage exceeding the memory threshold, cache records are selected to be offloaded. Available memory in a plurality of computer systems is identified and the selected cache records are sent to the identified available memory. Transactional systems are dynamically enabled to use memory cache across multiple connected computer systems on demand, eliminating conventional evictor and data miss issues that adversely impact performance.
Type: Grant
Filed: September 1, 2010
Date of Patent: October 22, 2013
Assignee: International Business Machines Corporation
Inventor: Hao Wang
-
Patent number: 8555086
Abstract: A non-volatile memory, such as a NAND memory, may be encrypted by reading source blocks, writing to destination blocks, and then erasing the source blocks. As part of the encryption sequence, a power fail recovery procedure, using sequence numbers, is used to reestablish a logical-to-physical translation table for the destination blocks.
Type: Grant
Filed: June 30, 2008
Date of Patent: October 8, 2013
Assignee: Intel Corporation
Inventors: Robert Royer, Sanjeev N. Trika
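The sequence-number recovery step can be sketched as a replay: assuming each destination block carries its logical address and a sequence number in metadata, scanning blocks in sequence order rebuilds the logical-to-physical table with the newest copy of each logical address winning. The data layout here is an assumption for illustration.

```python
def rebuild_l2p(blocks):
    """Reestablish the logical-to-physical table after a power failure.

    blocks: {physical_block: (logical_addr, sequence_number)} as read
    from per-block metadata. Replaying in ascending sequence order makes
    the most recently written copy of each logical address win.
    """
    l2p = {}
    for phys, (logical, _seq) in sorted(blocks.items(),
                                        key=lambda kv: kv[1][1]):
        l2p[logical] = phys
    return l2p
```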
-
Patent number: 8554961
Abstract: Apparatus, methods and computer-code are disclosed where an impending decoupling between a peripheral device and a host is detected. In some embodiments, in response to the detected impending disconnection, a user alert signal is generated. In some embodiments, an ‘onboard detector’ that is associated with the housing of the peripheral device and operative to detect the impending disconnection is provided. In some embodiments, the user alert signal is generated in accordance with inter-device data flow between the host and the peripheral device. Exemplary peripheral devices include but are not limited to transient storage devices such as USB flash drives (UFDs).
Type: Grant
Filed: August 5, 2011
Date of Patent: October 8, 2013
Assignee: SanDisk IL Ltd.
Inventors: Yehuda Hahn, Mordechai Teicher, Itzhak Pomerantz
-
Publication number: 20130262775
Abstract: Embodiments of the present invention provide for the execution of threads and/or work-items on multiple processors of a heterogeneous computing system in a manner that they can share data correctly and efficiently. Disclosed method, system, and article of manufacture embodiments include, responsive to an instruction from a sequence of instructions of a work-item, determining an ordering of visibility to other work-items of one or more other data items in relation to a particular data item, and performing at least one cache operation upon at least one of the particular data item or the other data items present in any one or more cache memories in accordance with the determined ordering. The semantics of the instruction includes a memory operation upon the particular data item.
Type: Application
Filed: March 30, 2012
Publication date: October 3, 2013
Applicants: ATI Technologies ULC, Advanced Micro Devices, Inc.
Inventors: Anthony ASARO, Kevin Normoyle, Mark Hummel, Norman Rubin, Mark Fowler