Write-through Patents (Class 711/142)
  • Patent number: 11429585
    Abstract: In some embodiments, systems, methods, and apparatuses are provided herein useful for managing a plurality of concurrent and nearly concurrent data requests within a computer system. The systems have a main data storage for storing source data and a high-speed and/or remote data storage for storing computed data. In some embodiments, a combination of data filters and distributed mutex processes is used to eliminate or limit duplicate reads and writes into the high-speed data storage units by ensuring that only a single service module gets a lock to do the read and update of the cache, and to make it possible for keys to expire and be removed from the data filter. The systems and methods herein have various applications, including retail sales environments where the requested data is related to product sales, product availability, and the like.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: August 30, 2022
    Assignee: Walmart Apollo, LLC
    Inventors: Gaurav Agrawal, Mingfeng Gong, Deiva Saranya Mandadi, Sandeep Singh, Tuo Shi
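    Illustrative sketch (Python): a minimal, in-process model of the per-key locking described above, where only one caller recomputes a missing cache entry and cached keys expire after a TTL. All names (FastCache, get_or_compute, ttl_seconds) are invented for illustration; the patent contemplates a distributed mutex rather than threading.Lock.
      import threading
      import time
      class FastCache:
          def __init__(self, ttl_seconds=60.0):
              self._data = {}              # key -> (value, expiry timestamp)
              self._locks = {}             # key -> per-key mutex
              self._registry_lock = threading.Lock()
              self._ttl = ttl_seconds
          def _lock_for(self, key):
              with self._registry_lock:
                  return self._locks.setdefault(key, threading.Lock())
          def get_or_compute(self, key, compute_from_main_storage):
              entry = self._data.get(key)
              if entry and entry[1] > time.time():        # filter hit: present and not expired
                  return entry[0]
              with self._lock_for(key):                   # only one service module wins the lock
                  entry = self._data.get(key)             # re-check after acquiring the lock
                  if entry and entry[1] > time.time():
                      return entry[0]
                  value = compute_from_main_storage(key)  # single read of source data
                  self._data[key] = (value, time.time() + self._ttl)  # single cache write
                  return value
      cache = FastCache(ttl_seconds=5.0)
      print(cache.get_or_compute("sku-123", lambda k: f"availability for {k}"))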
  • Patent number: 10932202
    Abstract: Technologies for dynamic multi-core packet processing distribution include a compute device having a distributor core, a direct memory access (DMA) engine, and multiple worker cores. The distributor core writes work data to a distribution buffer. The work data is associated with a packet processing operation. The distributor core may perform a work distribution operation to generate the work data. The work data may be written to a private cache of the distributor core. The distributor core programs the DMA engine to copy the work data from the distribution buffer to a shadow buffer. The DMA engine may copy the work data from one cache line of a shared cache to another cache line of the shared cache. The worker cores access the work data in the shadow buffer. The worker cores may perform the packet processing operation with the work data. Other embodiments are described and claimed.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: February 23, 2021
    Assignee: Intel Corporation
    Inventors: Jasvinder Singh, Harry van Haaren, Reshma Pattan, Radu Nicolau
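    Illustrative sketch (Python): a thread-and-queue analogue of the distributor/DMA/worker flow above. The buffer names and sizes are stand-ins; the patent concerns hardware cache lines and an on-chip DMA engine, not Python queues.
      import queue
      import threading
      distribution_buffer = queue.Queue(maxsize=64)
      shadow_buffer = queue.Queue(maxsize=64)
      def distributor(num_packets):
          for pkt in range(num_packets):
              distribution_buffer.put({"packet": pkt, "op": "classify"})
          distribution_buffer.put(None)                 # end-of-stream marker
      def dma_copier():
          while True:
              item = distribution_buffer.get()
              shadow_buffer.put(item)                   # copy into the shadow buffer
              if item is None:
                  break
      def worker(worker_id, results):
          while True:
              item = shadow_buffer.get()
              if item is None:
                  shadow_buffer.put(None)               # let the other workers terminate too
                  break
              results.append((worker_id, item["packet"]))
      results = []
      threads = [threading.Thread(target=distributor, args=(10,)),
                 threading.Thread(target=dma_copier)]
      threads += [threading.Thread(target=worker, args=(i, results)) for i in range(2)]
      for t in threads: t.start()
      for t in threads: t.join()
      print(sorted(p for _, p in results))              # [0, 1, ..., 9]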
  • Patent number: 10684989
    Abstract: Systems and methods are provided for evicting entries from a file handle cache. In accordance with certain embodiments, a two-stage eviction process is utilized. In a first stage of the eviction process, entries in the file entry cache are analyzed and marked for eviction while a shared lock is maintained on the file handle cache. The shared lock enables the file handle cache to be concurrently accessed by a content serving system to service content requests. In a second stage of the eviction process, entries in the file handle cache that are marked for eviction are removed while an exclusive lock is maintained on the file handle cache. The exclusive lock prevents the content serving system from concurrently accessing the file handle cache to service content requests.
    Type: Grant
    Filed: June 15, 2011
    Date of Patent: June 16, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel Vasquez Lopez, Won Yoo
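    Illustrative sketch (Python): the two-stage eviction above, with a marking pass that leaves readers undisturbed and a removal pass under an exclusive lock. Python's standard library has no shared/reader lock, so only the exclusive stage takes a lock here; max_idle_seconds and the entry layout are assumptions.
      import threading
      import time
      class FileHandleCache:
          def __init__(self, max_idle_seconds=300.0):
              self._entries = {}        # path -> {"handle": ..., "last_used": ..., "marked": bool}
              self._exclusive = threading.Lock()
              self._max_idle = max_idle_seconds
          def touch(self, path, handle):
              self._entries[path] = {"handle": handle, "last_used": time.time(), "marked": False}
          def evict(self):
              cutoff = time.time() - self._max_idle
              for entry in self._entries.values():      # stage 1: mark, concurrent reads continue
                  if entry["last_used"] < cutoff:
                      entry["marked"] = True
              with self._exclusive:                     # stage 2: remove marked entries exclusively
                  for path in [p for p, e in self._entries.items() if e["marked"]]:
                      del self._entries[path]
          def __len__(self):
              return len(self._entries)
      cache = FileHandleCache(max_idle_seconds=0.1)
      cache.touch("/srv/site/index.html", handle=object())
      time.sleep(0.2)
      cache.evict()
      print(len(cache))                                 # 0: stale handle marked, then removed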
  • Patent number: 10191666
    Abstract: A method of controlling write parameter selection in a memory device, can include: (i) storing a configuration set number in a configuration register, where the configuration register is accessible by a user via an interface; (ii) receiving a write command from a host via the interface; (iii) comparing the stored configuration set number against set numbers in a register block to determine a match or a mismatch; (iv) downloading configuration bits from a memory array into the register block in response to the mismatch determination; (v) selecting a configuration set corresponding to the stored configuration set number from the register block in response to the match determination; and (vi) using the selected configuration set to perform a write operation on the memory device to execute the write command.
    Type: Grant
    Filed: October 5, 2015
    Date of Patent: January 29, 2019
    Assignee: Adesto Technologies Corporation
    Inventors: Derric Jawaher Herman Lewis, John Dinh, Nathan Gonzales
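    Illustrative sketch (Python): the match/mismatch flow above, where a register block caches configuration sets and a mismatched set number triggers a download from the memory array before the write proceeds. The dictionaries and field names are invented for illustration.
      memory_array_configs = {                          # all configuration sets stored in the array
          0: {"write_pulse_us": 10, "verify": True},
          1: {"write_pulse_us": 5, "verify": False},
          2: {"write_pulse_us": 20, "verify": True},
      }
      register_block = {0: memory_array_configs[0]}     # sets currently loaded in registers
      configuration_register = 0                        # user-selected set number
      def handle_write_command(address, data):
          set_number = configuration_register
          if set_number not in register_block:          # mismatch: download from the memory array
              register_block[set_number] = memory_array_configs[set_number]
          cfg = register_block[set_number]              # match: select the stored configuration set
          return f"wrote {data!r} at {address:#x} with pulse {cfg['write_pulse_us']}us, verify={cfg['verify']}"
      configuration_register = 2
      print(handle_write_command(0x1000, b"\x5a"))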
  • Patent number: 9767032
    Abstract: A cache and/or storage module may be configured to reduce write amplification in a cache storage. Cache layer write amplification (CLWA) may occur due to an over-permissive admission policy. The cache module may be configured to reduce CLWA by configuring admission policies to avoid unnecessary writes. Admission policies may be predicated on access and/or sequentiality metrics. Flash layer write amplification (FLWA) may arise due to the write-once properties of the storage medium. FLWA may be reduced by delegating cache eviction functionality to the underlying storage layer. The cache and storage layers may be configured to communicate coordination information, which may be leveraged to improve the performance of cache and/or storage operations.
    Type: Grant
    Filed: December 5, 2013
    Date of Patent: September 19, 2017
    Assignee: SanDisk Technologies LLC
    Inventors: Nisha Talagala, Ned D. Plasson, Jingpei Yang, Robert Wood, Swaminathan Sundararaman, Gregory N. Gillis
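    Illustrative sketch (Python): one possible admission policy in the spirit of the abstract above, admitting only re-referenced blocks and bypassing long sequential streams to limit cache-layer write amplification. The thresholds and the sequentiality test are assumptions, not the patent's.
      from collections import defaultdict
      access_counts = defaultdict(int)
      last_block = None
      sequential_run = 0
      def should_admit(block, min_accesses=2, max_sequential_run=8):
          global last_block, sequential_run
          access_counts[block] += 1
          sequential_run = sequential_run + 1 if last_block is not None and block == last_block + 1 else 0
          last_block = block
          if sequential_run >= max_sequential_run:      # large sequential scan: bypass the cache
              return False
          return access_counts[block] >= min_accesses   # admit only re-referenced blocks
      workload = [7, 7, 100, 101, 102, 103, 104, 105, 106, 107, 108, 7]
      print([b for b in workload if should_admit(b)])   # [7, 7]: only the re-referenced block is admitted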
  • Patent number: 9747350
    Abstract: A scalable network of mobile data storage containers that are connected in peer-to-peer networks to achieve large data storage capacities. The various embodiments provide a method of extracting a large amount of data from a variety of sources and storing the extracted data in mobile storage units. The various embodiments provide storage units housed in mobile containers that can store multiple days or weeks of sensor data on the order of petabytes (1024 terabytes). The various embodiments integrate high performance computing devices into the mobile storage containers that are able to perform critical extraction, pattern, and index processing on the sensor data. The various embodiments provide a method for the efficient physical transport of the mobile storage containers from their current locations to a central analysis location for re-connecting in another peer-to-peer network for integration into a central enterprise data warehouse.
    Type: Grant
    Filed: July 23, 2014
    Date of Patent: August 29, 2017
    Assignee: YottaStor, LLC
    Inventor: Robert John Carlson
  • Patent number: 9645812
    Abstract: A method of updating a headset system firmware and a headset system are provided. The headset system comprises a headset and a base unit, the base unit having a base unit control circuit and being configured to connect to a computer system, the base unit comprises a headset dock to receive the headset. The method comprises the steps of receiving, in the base unit control circuit, a headset system firmware update from the computer system, the headset system firmware update comprising a headset firmware update and/or a base unit firmware update, and updating the base unit control circuit with the base unit firmware update.
    Type: Grant
    Filed: December 13, 2015
    Date of Patent: May 9, 2017
    Assignee: GN Netcom A/S
    Inventor: Morten Proschowsky
  • Patent number: 9286221
    Abstract: A heterogeneous memory system includes a main memory arrangement, a first-level cache, a second-level cache, and a memory management unit (MMU). The first-level cache includes an SRAM arrangement and the second-level cache includes a DRAM arrangement. The MMU is configured and arranged to read first data from the main memory arrangement in response to a stored first value associated with the first data and indicative of a start time. The MMU selects one of the first-level cache or the second-level cache for storage of the first data and stores the first data in the selected one of the first or second-level caches. The MMU reads second data from one of the first-level cache or second-level cache and writes the data to the main memory arrangement in response to a stored second value associated with the second data and indicative of a duration.
    Type: Grant
    Filed: April 7, 2015
    Date of Patent: March 15, 2016
    Assignee: Reniac, Inc.
    Inventors: Prasanna Sundararajan, Chidamber Kulkarni
  • Patent number: 9262325
    Abstract: A heterogeneous memory system includes a network interface card, a main memory arrangement, a first-level cache, and a memory management unit (MMU). The main memory arrangement, first-level cache and the MMU are disposed on the network interface card. The first-level cache includes an SRAM arrangement and a DRAM arrangement. The MMU is configured and arranged to read first data from the main memory arrangement in response to a stored first value associated with the first data and indicative of a start time. The MMU selects one of the SRAM arrangement or the DRAM arrangement for storage of the first data and stores the first data in the selected one of the SRAM arrangement or DRAM arrangement. The MMU reads second data from one of the SRAM arrangement or DRAM arrangement and writes the data to the main memory arrangement in response to a stored second value associated with the second data and indicative of a duration.
    Type: Grant
    Filed: April 7, 2015
    Date of Patent: February 16, 2016
    Assignee: Reniac, Inc.
    Inventors: Prasanna Sundararajan, Chidamber Kulkarni
  • Patent number: 9141549
    Abstract: A controller sets, out of a data range specified in a read request from a host device, a first data range of a predetermined size that follows the top portion of the data range and a second data range of a predetermined size that follows the first data range. After transfer to the host device of data corresponding to the first data range is started from a second storage unit or a third storage unit having smaller data output latency than the first storage unit in which read/write of data is performed, the controller searches for data corresponding to the second data range in the second storage unit or the third storage unit.
    Type: Grant
    Filed: August 17, 2009
    Date of Patent: September 22, 2015
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Hirokuni Yano, Naoki Otsuka
  • Patent number: 9037800
    Abstract: A network storage server includes a main buffer cache to buffer writes requested by clients before committing them to primary persistent storage. The server further uses a secondary cache, implemented as low-cost, solid-state memory, such as flash memory, to store data evicted from the main buffer cache or data read from the primary persistent storage. To prevent bursts of writes to the secondary cache, data is copied from the main buffer cache to the secondary cache speculatively, before there is a need to evict data from the main buffer cache. Data can be copied to the secondary cache as soon as the data is marked as clean in the main buffer cache. Data can be written to secondary cache at a substantially constant rate, which can be at or close to the maximum write rate of the secondary cache.
    Type: Grant
    Filed: June 14, 2013
    Date of Patent: May 19, 2015
    Assignee: NetApp, Inc.
    Inventor: Daniel J. Ellard
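    Illustrative sketch (Python): the speculative, rate-limited copy described above, trickling clean blocks from the main buffer cache into the secondary cache instead of writing them in a burst at eviction time. The rate, duration, and data structures are invented for illustration.
      import time
      from collections import deque
      main_buffer_cache = {}        # block id -> {"data": ..., "clean": bool, "copied": bool}
      secondary_cache = {}
      copy_queue = deque()
      def mark_clean(block_id):
          entry = main_buffer_cache[block_id]
          entry["clean"] = True
          if not entry["copied"]:
              copy_queue.append(block_id)               # eligible for speculative copy right away
      def speculative_copier(blocks_per_second=5, duration=1.0):
          interval = 1.0 / blocks_per_second
          deadline = time.time() + duration
          while time.time() < deadline and copy_queue:
              block_id = copy_queue.popleft()
              secondary_cache[block_id] = main_buffer_cache[block_id]["data"]
              main_buffer_cache[block_id]["copied"] = True
              time.sleep(interval)                      # keeps the write rate near-constant
      for i in range(3):
          main_buffer_cache[i] = {"data": f"block-{i}", "clean": False, "copied": False}
          mark_clean(i)
      speculative_copier()
      print(sorted(secondary_cache))                    # [0, 1, 2]: copied before any eviction pressure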
  • Patent number: 9032157
    Abstract: Disclosed is a computer system (100) comprising a processor unit (110) adapted to run a virtual machine in a first operating mode; a cache (120) accessible to the processor unit, said cache comprising a plurality of cache rows (1210), each cache row comprising a cache line (1214) and an image modification flag (1217) indicating a modification of said cache line caused by the running of the virtual machine; and a memory (140) accessible to the cache controller for storing an image of said virtual machine; wherein the processor unit comprises a replication manager adapted to define a log (200) in the memory prior to running the virtual machine in said first operating mode; and said cache further includes a cache controller (122) adapted to periodically check said image modification flags; write only the memory address of the flagged cache lines in the defined log and subsequently clear the image modification flags.
    Type: Grant
    Filed: December 11, 2012
    Date of Patent: May 12, 2015
    Assignee: International Business Machines Corporation
    Inventors: Sanjeev Ghai, Guy L. Guthrie, Geraint North, William J. Starke, Phillip G. Williams
  • Patent number: 9009417
    Abstract: An object is to improve the reliability of data protection for a storage control apparatus that has a redundant configuration made of a plurality of clusters. A memory unit in each of the clusters C1 and C2 is provided with a first memory 3 having a volatile property, a battery 5 that is configured to supply electrical power to the first memory 3, and a second memory 4 that stores data transferred from the first memory 3 in the case of a power outage. A control unit selects an operating mode for protecting data from among a normal mode, a write through mode, and an access disable mode (a not ready state) based on the remaining power level of the battery 5.
    Type: Grant
    Filed: August 27, 2010
    Date of Patent: April 14, 2015
    Assignee: Hitachi, Ltd.
    Inventor: Tomoaki Okawa
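    Illustrative sketch (Python): the battery-driven mode selection above. The 50% and 20% thresholds are invented for illustration; the patent does not specify numeric levels.
      def select_operating_mode(battery_percent):
          if battery_percent >= 50:
              return "normal"          # enough power to protect cached dirty data
          if battery_percent >= 20:
              return "write-through"   # bypass dirty caching so an outage loses nothing
          return "not-ready"           # too little power to guarantee protection: disable access
      for level in (80, 35, 10):
          print(level, "->", select_operating_mode(level))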
  • Patent number: 9003114
    Abstract: Methods and apparatus for cut-through cache memory management in write command processing on a mirrored virtual volume of a virtualized storage system, the virtual volume comprising a plurality of physical storage devices coupled with the storage system. Features and aspects hereof within the storage system provide for receipt of a write command and associated write data from an attached host. Using a cut-through cache technique, the write data is stored in a cache memory and transmitted to a first of the plurality of storage devices as the write data is stored in the cache memory thus eliminating one read-back of the write data for transfer to a first physical storage device. Following receipt of the write data and storage in the cache memory, the write data is transmitted from the cache memory to the other physical storage devices.
    Type: Grant
    Filed: August 29, 2014
    Date of Patent: April 7, 2015
    Assignee: NetApp, Inc.
    Inventor: Howard Young
  • Patent number: 8984234
    Abstract: A method and system for managing a cache for a host machine is disclosed. The method includes: indicating each cache line in the cache as being in a transitional meta-state when any virtual machine hosted on the host machine moves out of the host machine; each time a particular cache line is accessed, indicating that particular cache line as no longer in the transitional meta-state; and marking the cache lines still in the transitional meta-state as invalid when a virtual machine moves back to the host machine.
    Type: Grant
    Filed: January 11, 2013
    Date of Patent: March 17, 2015
    Assignee: LSI Corporation
    Inventors: Parag R. Maharana, Luca Bert, Earl Cohen
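    Illustrative sketch (Python): the transitional meta-state above, set on VM move-out, cleared by any access, and used to invalidate untouched lines when the VM returns. The cache layout and field names are assumptions.
      cache = {line: {"valid": True, "transitional": False} for line in range(8)}
      def vm_moved_out():
          for entry in cache.values():
              entry["transitional"] = True
      def access(line):
          cache[line]["transitional"] = False           # any access clears the meta-state
      def vm_moved_back():
          for entry in cache.values():
              if entry["transitional"]:
                  entry["valid"] = False                # stale relative to the returning VM
                  entry["transitional"] = False
      vm_moved_out()
      for line in (1, 4, 6):
          access(line)
      vm_moved_back()
      print([line for line, e in cache.items() if e["valid"]])   # [1, 4, 6]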
  • Patent number: 8954674
    Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
    Type: Grant
    Filed: October 8, 2013
    Date of Patent: February 10, 2015
    Assignee: Intel Corporation
    Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
  • Patent number: 8949312
    Abstract: An embodiment generally relates to a method of updating clients from a server. The method includes maintaining a master copy of a software on a server and capturing changes to the master copy of the software on an update disk image, where the changes are contained in at least one chunk. The method also includes merging the update disk image with one of two client disk images of the client copy of the software.
    Type: Grant
    Filed: May 25, 2006
    Date of Patent: February 3, 2015
    Assignee: Red Hat, Inc.
    Inventors: Mark McLoughlin, William Nottingham, Timothy Burke
  • Patent number: 8935471
    Abstract: A method for managing a cache structure of a coupling facility includes receiving a conditional write command from a computing system and determining whether data associated with the conditional write command is part of a working set of data of the cache structure. If the data associated with the conditional write command is part of the working set of data of the cache structure the conditional write command is processed as an unconditional write command. If the data associated with the conditional write command is not part of the working set of data of the cache structure a conditional write failure notification is transmitted to the computing system.
    Type: Grant
    Filed: December 11, 2013
    Date of Patent: January 13, 2015
    Assignee: International Business Machines Corporation
    Inventors: Riaz Ahmad, David A. Elko, Jeffrey W. Josten, Georgette Kurdt, Scott F. Schiffer, David H. Surman
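    Illustrative sketch (Python): the conditional-write decision above, accepting the write only when the data belongs to the cache structure's working set. Representing the working set as a plain Python set is an assumption for illustration.
      working_set = {"record-17", "record-42"}
      cache_structure = {}
      def conditional_write(key, value):
          if key in working_set:
              cache_structure[key] = value              # processed as an unconditional write
              return "written"
          return "conditional-write-failure"            # the computing system is notified instead
      print(conditional_write("record-42", b"updated"))     # written
      print(conditional_write("record-99", b"new"))         # conditional-write-failure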
  • Patent number: 8924652
    Abstract: Embodiments provide a method comprising receiving, at a cache associated with a central processing unit that is disposed on an integrated circuit, a request to perform a cache operation on the cache; in response to receiving and processing the request, determining that first data cached in a first cache line of the cache is to be written to a memory that is coupled to the integrated circuit; identifying a second cache line in the cache, the second cache line being complimentary to the first cache line; transmitting a single memory instruction from the cache to the memory to write to the memory (i) the first data from the first cache line and (ii) second data from the second cache line; and invalidating the first data in the first cache line, without invalidating the second data in the second cache line.
    Type: Grant
    Filed: April 4, 2012
    Date of Patent: December 30, 2014
    Assignee: Marvell Israel (M.I.S.L.) Ltd.
    Inventors: Adi Habusha, Eitan Joshua, Shaul Chapman
  • Patent number: 8914391
    Abstract: A method, program, and system are provided for converting graph data to a data structure that enables manipulations in various applications to be reflected in the original graph data. The method uses at least one graph matching pattern to convert at least a part of graph data including nodes and edges to a data structure as an image of a homomorphism thereof. A pattern is provided which includes one representative node variable having a first constraint that at most one node in the graph data matches the representative node variable; the node is prohibited from matching a node variable in another pattern. The method includes the step of performing matching between the graph data and the pattern to obtain a matching result that does not violate constraints including the first constraint, and the step of generating a data structure corresponding to the matching result that does not violate the constraints.
    Type: Grant
    Filed: May 22, 2012
    Date of Patent: December 16, 2014
    Assignee: International Business Machines Corporation
    Inventors: Futoshi Iwama, Hisashi Miyashita, Hideki Tai
  • Patent number: 8909872
    Abstract: A computer system is provided including a central processing unit having an internal cache, a memory controller is coupled to the central processing unit, and a closely coupled peripheral is coupled to the central processing unit. A coherent interconnection may exist between the internal cache and both the memory controller and the closely coupled peripheral, wherein the coherent interconnection is a bus.
    Type: Grant
    Filed: October 31, 2006
    Date of Patent: December 9, 2014
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Michael S. Schlansker, Boon Ang, Erwin Oertli
  • Publication number: 20140337569
    Abstract: A system, method, and computer program product for low-latency scheduling and launch of memory defined tasks. The method includes the steps of receiving a task metadata data structure to be stored in a memory associated with a processor, transmitting the task metadata data structure to a scheduling unit of the processor, storing the task metadata data structure in a cache unit included in the scheduling unit, and copying the task metadata data structure from the cache unit to the memory.
    Type: Application
    Filed: May 8, 2013
    Publication date: November 13, 2014
    Applicant: Nvidia Corporation
    Inventors: Scott Ricketts, Brian Scott Pharris, Nicholas Wang, Luke David Durant, Philip Alexander Cuadra, Jerome F. Duluk, Jr.
  • Patent number: 8886895
    Abstract: A method for fetching information in response to hazard indication information, the method includes: (i) associating hazard indication information to at least one information unit that is being fetched to the cache module; (ii) receiving a request to perform a fetch operation; and (iii) determining whether to fetch at least one information unit to the cache module in response to the hazard indication information and in response to dirty information associated with the at least one information unit.
    Type: Grant
    Filed: September 14, 2004
    Date of Patent: November 11, 2014
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Itay Peled, Moshe Anschel, Jacob Efrat, Alon Eldar, Ziv Zamsky
  • Publication number: 20140317359
    Abstract: A method for accessing data stored in a distributed caching storage system containing a home cluster and a secondary cluster is provided. A first copy of a file is stored on the home cluster and a second copy of the file is stored on the secondary cluster. The second copy of the file is associated with an inode data structure having a consistency attribute. An input/output request directed to the file is received, and the file is indicated as being in an inconsistent state by updating the inode's consistency attribute. The first copy and the second copy of the file are updated according to the received input/output request, and it is determined whether the first copy and the second copy were updated successfully. Maintaining the inode's consistency attribute is indicative of the inconsistent state of the file.
    Type: Application
    Filed: April 17, 2013
    Publication date: October 23, 2014
    Applicant: International Business Machines Corporation
    Inventors: Manoj P. Naik, Frank B. Schmuck, Anurag Sharma, Renu Tewari
  • Patent number: 8868846
    Abstract: Disclosed is a coherent storage system. A network interface device (NIC) receives network storage commands from a host. The NIC may cache the data to/from the storage commands in a solid-state disk. The NIC may respond to future network storage command by supplying the data from the solid-state disk rather than initiating a network transaction. Other NIC's on other hosts may also cache network storage data. These NICs may respond to transactions from the first NIC by supplying data, or changing the state of data in their caches.
    Type: Grant
    Filed: December 29, 2010
    Date of Patent: October 21, 2014
    Assignee: NetApp, Inc.
    Inventor: Robert E. Ober
  • Publication number: 20140281269
    Abstract: A technique includes performing an update to a location of a non-volatile memory. The update is created by execution of at least one machine executable instruction of a plurality of machine executable instructions. The technique includes using a processor-based machine to selectively track the update to allow recovery of the execution to a given consistency point based at least in part on whether the machine executable instruction(s) creating the update are located within a synchronized section of the plurality of machine executable instructions.
    Type: Application
    Filed: March 13, 2013
    Publication date: September 18, 2014
    Applicant: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
    Inventor: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
  • Patent number: 8838906
    Abstract: In a multiprocessor system with at least two levels of cache, a speculative thread may run on a core processor in parallel with other threads. When the thread seeks to do a write to main memory, this access is to be written through the first level cache to the second level cache. After the write through, the corresponding line is deleted from the first level cache and/or prefetch unit, so that any further accesses to the same location in main memory have to be retrieved from the second level cache. The second level cache keeps track of multiple versions of data, where more than one speculative thread is running in parallel, while the first level cache does not have any of the versions during speculation. A switch allows choosing between modes of operation of a speculation blind first level cache.
    Type: Grant
    Filed: January 4, 2011
    Date of Patent: September 16, 2014
    Assignee: International Business Machines Corporation
    Inventors: Alan Gara, Martin Ohmacht
  • Patent number: 8838905
    Abstract: A storage controller that includes a cache, receives a command from a host, wherein a set of criteria corresponding to read response times for executing the command have to be satisfied. A destage application that destages tracks based at least on recency of usage and spatial location of the tracks is executed, wherein a spatial ordering of the tracks is maintained in a data structure, and the destage application traverses the spatial ordering of the tracks. Tracks are destaged from at least inside or outside diameters of disks at periodic intervals, while traversing the spatial ordering of the tracks, wherein the set of criteria corresponding to the read response times for executing the command are satisfied.
    Type: Grant
    Filed: November 17, 2011
    Date of Patent: September 16, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Lokesh M. Gupta, Christopher J. Strauss, Will A. Wright
  • Patent number: 8838888
    Abstract: A method for managing a cache structure of a coupling facility includes receiving a conditional write command from a computing system and determining whether data associated with the conditional write command is part of a working set of data of the cache structure. If the data associated with the conditional write command is part of the working set of data of the cache structure the conditional write command is processed as an unconditional write command. If the data associated with the conditional write command is not part of the working set of data of the cache structure a conditional write failure notification is transmitted to the computing system.
    Type: Grant
    Filed: March 19, 2012
    Date of Patent: September 16, 2014
    Assignee: International Business Machines Corporation
    Inventors: Riaz Ahmad, David A. Elko, Jeffrey W. Josten, Georgette Kurdt, Scott F. Schiffer, David H. Surman
  • Patent number: 8832367
    Abstract: Methods and apparatus for cut-through cache memory management in write command processing on a mirrored virtual volume of a virtualized storage system, the virtual volume comprising a plurality of physical storage devices coupled with the storage system. Features and aspects hereof within the storage system provide for receipt of a write command and associated write data from an attached host. Using a cut-through cache technique, the write data is stored in a cache memory and transmitted to a first of the plurality of storage devices as the write data is stored in the cache memory thus eliminating one read-back of the write data for transfer to a first physical storage device. Following receipt of the write data and storage in the cache memory, the write data is transmitted from the cache memory to the other physical storage devices.
    Type: Grant
    Filed: July 2, 2013
    Date of Patent: September 9, 2014
    Assignee: NetApp Inc.
    Inventor: Howard Young
  • Patent number: 8819343
    Abstract: A storage controller that includes a cache, receives a command from a host, wherein a set of criteria corresponding to read response times for executing the command have to be satisfied. A destage application that destages tracks based at least on recency of usage and spatial location of the tracks is executed, wherein a spatial ordering of the tracks is maintained in a data structure, and the destage application traverses the spatial ordering of the tracks. Tracks are destaged from at least inside or outside diameters of disks at periodic intervals, while traversing the spatial ordering of the tracks, wherein the set of criteria corresponding to the read response times for executing the command are satisfied.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: August 26, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Lokesh M. Gupta, Christopher J. Strauss, Will A. Wright
  • Patent number: 8799581
    Abstract: Color-based caching allows each cache line to be distinguished by a specific color, and enables the manipulation of cache behavior based upon the colors of the cache lines. When multiple threads are able to share a cache, effective cache management is critical to overall performance. Color-based caching provides an effective method to better utilize caches and avoid unnecessary cache thrashing and pollution. Hardware maintains color-based counters relative to the cache lines to monitor and obtain feedback on cache line events. These counters are utilized for cache coherence transactions in multiple processor systems.
    Type: Grant
    Filed: January 5, 2007
    Date of Patent: August 5, 2014
    Assignee: International Business Machines Corporation
    Inventors: Xiaowei Shen, David F. Bacon, Robert W. Wisniewski, Orran Krieger
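    Illustrative sketch (Python): per-color event counters like those described above, giving feedback on which colors (for example, which threads' cache lines) generate the most events. The event names and color scheme are invented for illustration.
      from collections import Counter
      line_color = {}                                   # cache line -> color (e.g. owning thread)
      color_counters = Counter()                        # (color, event) -> count
      def install_line(line, color):
          line_color[line] = color
      def record_event(line, event):                    # event: "hit", "miss", "eviction", ...
          color_counters[(line_color[line], event)] += 1
      install_line(0x40, color="thread-A")
      install_line(0x80, color="thread-B")
      for _ in range(3):
          record_event(0x40, "eviction")
      record_event(0x80, "hit")
      print(color_counters[("thread-A", "eviction")])   # 3: thread-A's line is being thrashed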
  • Patent number: 8799584
    Abstract: A method and an apparatus for implementing multi-processor memory coherency are disclosed. The method includes: a Level-2 (L2) cache of a first cluster receives a control signal of the first cluster for reading first data; the L2 cache of the first cluster reads the first data in a Level-1 (L1) cache of a second cluster through an Accelerator Coherency Port (ACP) of the L1 cache of the second cluster if the first data is currently maintained by the second cluster, where the L2 cache of the first cluster is connected to the ACP of the L1 cache of the second cluster; and the L2 cache of the first cluster provides the first data read to the first cluster for processing. The technical solution under the present invention implements memory coherency between clusters in the ARM Cortex-A9 architecture.
    Type: Grant
    Filed: March 31, 2011
    Date of Patent: August 5, 2014
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Xiping Zhou, Jingyu Li
  • Patent number: 8793439
    Abstract: A method of accelerating memory operations using virtualization information includes executing a hypervisor on hardware resources of a computing system. A plurality of domains are created under the control of the hypervisor. Each domain is allocated memory resources that include accessible memory space that is exclusively accessible by that domain. Each domain is allocated one or more processor resources. The hypervisor identifies domain layout information that includes a boundary of accessible memory space of each domain. The hypervisor provides the domain layout information to each processor resource. Each processor resource is configured to implement, on a per domain basis, a restricted coherency protocol based on the domain layout information. The restricted coherency protocol bypasses, relative to the domain, downstream caches when a cache line falls within the accessible memory space of that domain.
    Type: Grant
    Filed: March 18, 2010
    Date of Patent: July 29, 2014
    Assignee: Oracle International Corporation
    Inventor: Lawrence Spracklen
  • Publication number: 20140201462
    Abstract: A method and system for managing a cache for a host machine is disclosed. The method includes: indicating each cache line in the cache as being in a transitional meta-state when any virtual machine hosted on the host machine moves out of the host machine; each time a particular cache line is accessed, indicating that particular cache line as no longer in the transitional meta-state; and marking the cache lines still in the transitional meta-state as invalid when a virtual machine moves back to the host machine.
    Type: Application
    Filed: January 11, 2013
    Publication date: July 17, 2014
    Applicant: LSI CORPORATION
    Inventors: Parag R. Maharana, Luca Bert, Earl Cohen
  • Publication number: 20140189252
    Abstract: A system, processor and method to monitor specific cache events and behavior based on established principles of quantized architectural vulnerability factor (AVF) through the use of a dynamic cache write policy controller. The output of the controller is then used to set the write back or write through mode policy for any given cache. This method can be used to change cache modes dynamically and does not require the system to be rebooted. The dynamic nature of the controller provides the capability of intelligently switching from reliability to performance mode and back as needed. This method eliminates the residency time of dirty lines in a cache, which increases soft error (SER) resiliency of protected caches in the system and reduces detectable unrecoverable errors (DUE), while keeping implementation cost of hardware at a minimum.
    Type: Application
    Filed: December 31, 2012
    Publication date: July 3, 2014
    Inventor: Arijit Biswas
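    Illustrative sketch (Python): a write-policy controller in the spirit of the abstract above, flipping between write-back (performance) and write-through (reliability) from a vulnerability estimate, here approximated by dirty-line residency. The metric and threshold are assumptions, not the publication's AVF formulation.
      def choose_write_policy(dirty_line_residency, threshold=0.3):
          """dirty_line_residency: fraction of time cache lines sit dirty (0.0 to 1.0)."""
          if dirty_line_residency > threshold:
              return "write-through"    # cut dirty residency to reduce soft-error exposure
          return "write-back"           # low exposure: keep the faster policy
      for residency in (0.05, 0.45):
          print(residency, "->", choose_write_policy(residency))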
  • Patent number: 8769199
    Abstract: A method for distributing IO load in a RAID storage system is disclosed. The RAID storage system may include a plurality of RAID volumes and a plurality of processors. The IO load distribution method may include determining whether the RAID storage system is operating in a write-through mode or a write-back mode; distributing the IO load to a particular processor selected among the plurality of processors when the RAID storage system is operating in the write-through mode, the particular processor being selected based on a number of available resources associated with the particular processor; and distributing the IO load among the plurality of processors when the RAID storage system is operating in the write-back mode, the distribution being determined based on: an index of a data stripe, and a number of processors in the plurality of processors.
    Type: Grant
    Filed: May 17, 2011
    Date of Patent: July 1, 2014
    Assignee: LSI Corporation
    Inventor: Kapil Sundrani
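    Illustrative sketch (Python): the distribution rule above, sending the whole IO to the least-loaded processor in write-through mode and spreading IOs by stripe index in write-back mode. The processor table and resource figures are invented for illustration.
      processors = [{"id": 0, "free_resources": 12},
                    {"id": 1, "free_resources": 30},
                    {"id": 2, "free_resources": 7}]
      def select_processor(mode, stripe_index):
          if mode == "write-through":
              return max(processors, key=lambda p: p["free_resources"])["id"]
          if mode == "write-back":
              return stripe_index % len(processors)     # distribute by data stripe index
          raise ValueError(mode)
      print(select_processor("write-through", stripe_index=5))   # 1: most available resources
      print(select_processor("write-back", stripe_index=5))      # 2: 5 mod 3 processors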
  • Patent number: 8762652
    Abstract: A data processing system includes a first master having a cache, a second master, a memory operably coupled to the first master and the second master via a system interconnect. The cache includes a cache controller which implements a set of cache coherency states for data units of the cache. The cache coherency states include an invalid state; an unmodified non-coherent state indicating that data in a data unit of the cache has not been modified and is not guaranteed to be coherent with data in at least one other storage device of the data processing system, and an unmodified coherent state indicating that the data of the data unit has not been modified and is coherent with data in the at least one other storage device of the data processing system.
    Type: Grant
    Filed: April 30, 2008
    Date of Patent: June 24, 2014
    Assignee: Freescale Semiconductor, Inc.
    Inventor: William C. Moyer
  • Patent number: 8756377
    Abstract: An apparatus for storing data that is being processed is disclosed. The apparatus comprises: a cache associated with a processor and for storing a local copy of data items stored in a memory for use by the processor, monitoring circuitry associated with the cache for monitoring write transaction requests to the memory initiated by a further device, the further device being configured not to store data in the cache. The monitoring circuitry is responsive to detecting a write transaction request to write a data item, a local copy of which is stored in the cache, to block a write acknowledge signal transmitted from the memory to the further device indicating the write has completed and to invalidate the stored local copy in the cache and on completion of the invalidation to send the write acknowledge signal to the further device.
    Type: Grant
    Filed: February 2, 2010
    Date of Patent: June 17, 2014
    Assignee: ARM Limited
    Inventors: Simon John Craske, Antony John Penton, Loic Pierron, Andrew Christopher Rose
  • Patent number: 8738840
    Abstract: A memory system is provided. The system includes an operating system kernel that regulates read and write access to one or more FLASH memory devices that are employed for random access memory applications. A buffer component operates in conjunction with the kernel to regulate read and write access to the one or more FLASH devices.
    Type: Grant
    Filed: March 31, 2008
    Date of Patent: May 27, 2014
    Assignee: Spansion LLC
    Inventor: Tzungren Allan Tzeng
  • Patent number: 8715065
    Abstract: Described herein are processes and devices that utilize non-volatile memory on a wagering game machine. One of the devices described is a wagering game system. The wagering game system can receive a request to activate a first wagering game on a wagering game machine, receive critical data for the first wagering game and store the critical data to a fixed-size block within a non-volatile memory store so that the non-volatile memory store includes critical data for only that wagering game. The wagering game system can then copy the critical data for the wagering game to a backing store, verify that the copied critical data on the backing store matches the critical data in the non-volatile memory, activate the first wagering game, present results for the wagering game, and update the backing store with changes made to the critical data on the non-volatile memory store during a game session.
    Type: Grant
    Filed: February 12, 2009
    Date of Patent: May 6, 2014
    Assignee: WMS Gaming, Inc.
    Inventor: Jason A. Smith
  • Publication number: 20140068183
    Abstract: A storage module may be configured to service I/O requests according to different persistence levels. The persistence level of an I/O request may relate to the storage resource(s) used to service the I/O request, the configuration of the storage resource(s), the storage mode of the resources, and so on. In some embodiments, a persistence level may relate to a cache mode of an I/O request. I/O requests pertaining to temporary or disposable data may be serviced using an ephemeral cache mode. An ephemeral cache mode may comprise storing I/O request data in cache storage without writing the data through (or back) to primary storage. Ephemeral cache data may be transferred between hosts in response to virtual machine migration.
    Type: Application
    Filed: March 14, 2013
    Publication date: March 6, 2014
    Inventors: Vikram Joshi, David Flynn, Yang Luan, Michael F. Brown
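    Illustrative sketch (Python): the persistence levels above, where an "ephemeral" write stays in cache storage only, while a normal write also goes through to primary storage. The dictionaries stand in for real cache and primary storage.
      cache_storage = {}
      primary_storage = {}
      def write(key, value, persistence="write-through"):
          cache_storage[key] = value
          if persistence == "write-through":
              primary_storage[key] = value              # durable copy in primary storage
          elif persistence != "ephemeral":              # ephemeral data is never written back
              raise ValueError(persistence)
      write("session-token", "abc123", persistence="ephemeral")
      write("customer-record", {"id": 7}, persistence="write-through")
      print("session-token" in primary_storage, "customer-record" in primary_storage)   # False True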
  • Publication number: 20140068197
    Abstract: A storage module may be configured to service I/O requests according to different persistence levels. The persistence level of an I/O request may relate to the storage resource(s) used to service the I/O request, the configuration of the storage resource(s), the storage mode of the resources, and so on. In some embodiments, a persistence level may relate to a cache mode of an I/O request. I/O requests pertaining to temporary or disposable data may be serviced using an ephemeral cache mode. An ephemeral cache mode may comprise storing I/O request data in cache storage without writing the data through (or back) to primary storage. Ephemeral cache data may be transferred between hosts in response to virtual machine migration.
    Type: Application
    Filed: March 14, 2013
    Publication date: March 6, 2014
    Inventors: Vikram Joshi, David Flynn, Yang Luan, Michael F. Brown
  • Publication number: 20140059300
    Abstract: In one embodiment, a method performed by one or more computing devices includes receiving at a host cache a first request for data comprising at least one snapshot of a cached logical unit number (LUN), sending, by the host cache, the data comprising at least one snapshot of the cached LUN in response to the first request, and in response to the completing sending the data comprising at least one snapshot of the cached LUN, sending, by the host cache, a first response indicating that sending the data is complete.
    Type: Application
    Filed: August 24, 2012
    Publication date: February 27, 2014
    Applicant: DELL PRODUCTS L.P.
    Inventors: Marc David Olin, Michael James Klemm, Ranjit Pandit
  • Publication number: 20140059298
    Abstract: In one embodiment, a method performed by one or more computing devices includes receiving at a host cache, a first request to prepare a volume of the host cache for creating a snapshot of a cached logical unit number (LUN), the request indicating that a snapshot of the cached LUN will be taken, preparing, in response to the first request, the volume of the host cache for creating the snapshot of the cached LUN depending on a mode of the host cache, receiving, at the host cache, a second request to create the snapshot of the cached LUN, and in response to the second request, creating, at the host cache, the snapshot of the cached LUN.
    Type: Application
    Filed: August 24, 2012
    Publication date: February 27, 2014
    Applicant: DELL PRODUCTS L.P.
    Inventors: Marc David Olin, Michael James Klemm
  • Patent number: 8645796
    Abstract: Dynamic pipeline cache error correction includes receiving a request to perform an operation that requires a storage cache slot, the storage cache slot residing in a cache. The dynamic pipeline cache error correction also includes accessing the storage cache slot, determining a cache hit for the storage cache slot, identifying and correcting any correctable soft errors associated with the storage cache slot. The dynamic cache error correction further includes updating the cache with results of corrected data.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: February 4, 2014
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Michael Fee, Edward T. Gerchman, Arthur J. O'Neill, Jr.
  • Publication number: 20140032855
    Abstract: An information processing apparatus includes a processor, a memory, and a cache. Information read from the memory by the processor is stored in the cache. The processor writes the information stored in the memory in all of the regions of the cache at a predetermined timing.
    Type: Application
    Filed: July 29, 2013
    Publication date: January 30, 2014
    Applicant: FUJITSU LIMITED
    Inventors: Tatsuya SHINOZAKI, Nina TSUKAMOTO, Hidehiko NISHIDA
  • Patent number: 8631209
    Abstract: Techniques are described for using chunk stores as building blocks to construct larger chunk stores. A chunk store constructed of other chunk stores (a composite chunk store) may have any number and type of building block chunk stores. Further, the building block chunk stores within a composite chunk store may be arranged in any manner, resulting in any number of levels within the composite chunk store. The building block chunk stores expose a common interface, and apply the same hash function to content of chunks to produce the access key for the chunks. Because the access key is based on content, all copies of the same chunk will have the same access key, regardless of the chunk store that is managing the copy. In addition, no other chunk will have that same access key.
    Type: Grant
    Filed: January 26, 2012
    Date of Patent: January 14, 2014
    Assignee: upthere, Inc.
    Inventors: Bertrand Serlet, Roger Bodamer
  • Patent number: 8627130
    Abstract: A power saving archive system includes a front storage system accessible by clients and one or more back storage systems connected to the front storage system. A client file received by the front storage system is written to one of the back storage systems, while the front storage system stores a reference to the file and deletes the file from the front storage system after a certain time period. Each back storage system enters an inactive state (e.g. a powered off state) after a period of unuse, and can become active again in response to a wakeup command (e.g. a Wake-on-LAN signal) from the front storage system. Upon receiving a file read request from a client, the front storage system wakes up the appropriate back storage system, restores the file from the back storage system, and provides the file to the client.
    Type: Grant
    Filed: October 8, 2010
    Date of Patent: January 7, 2014
    Assignee: Bridgette, Inc.
    Inventor: Lawrence John Dickson
  • Publication number: 20130346705
    Abstract: In a particular embodiment, a method of managing a cache memory includes, responsive to a cache size change command, changing a mode of operation of the cache memory to a write through/no allocate mode. The method also includes processing instructions associated with the cache memory while executing a cache clean operation when the mode of operation of the cache memory is the write through/no allocate mode. The method further includes after completion of the cache clean operation, changing a size of the cache memory and changing the mode of operation of the cache to a mode other than the write through/no allocate mode.
    Type: Application
    Filed: October 19, 2012
    Publication date: December 26, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Manojkumar Pyla, Lucian Codrescu
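    Illustrative sketch (Python): the resize sequence above, switching to write-through/no-allocate so no new dirty lines appear, cleaning the cache while work continues, then resizing and restoring the previous mode. The Cache class and its fields are invented for illustration.
      class Cache:
          def __init__(self, size):
              self.size = size
              self.mode = "write-back"
              self.dirty_lines = {"line-3", "line-9"}
          def clean(self):
              self.dirty_lines.clear()                  # dirty lines written out to memory
          def resize(self, new_size):
              previous_mode = self.mode
              self.mode = "write-through/no-allocate"   # no new dirty or newly allocated lines
              self.clean()                              # instructions keep executing meanwhile
              self.size = new_size                      # safe: nothing dirty can be lost
              self.mode = previous_mode
      cache = Cache(size=512)
      cache.resize(256)
      print(cache.size, cache.mode, cache.dirty_lines)  # 256 write-back set()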