Write-through Patents (Class 711/142)
  • Publication number: 20130326150
    Abstract: A cache is maintained with write order numbers that indicate orders of writes into the cache, so that periodic partial flushes of the cache can be executed while maintaining write order consistency. A method of storing data into the cache includes receiving a request to write data into the cache, identifying lines in the cache for storing the data, writing the data into the lines of the cache, storing a write order number, and associating the write order number with the lines of the cache. A method of flushing a cache having cache lines associated with write order numbers includes the steps of identifying lines in the cache that are associated with either a selected write order number or a write order number that is less than the selected write order number, and flushing data stored in the identified lines to a persistent storage.
    Type: Application
    Filed: June 5, 2012
    Publication date: December 5, 2013
    Applicant: VMware, Inc.
    Inventors: Thomas A. PHELAN, Erik COTA-ROBLES
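The write-order-number scheme in 20130326150 lends itself to a compact illustration. The C sketch below (the data structures, sizes, and the printf stand-in for a write-back are assumptions, not taken from the application) tags each written line with a monotonically increasing order number and flushes every dirty line at or below a chosen cutoff, which is what keeps a partial flush write-order consistent.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define NUM_LINES 8

typedef struct {
    bool     dirty;         /* line holds data not yet written to persistent storage */
    uint64_t write_order;   /* order number of the write that last dirtied the line */
    uint8_t  data[64];
} cache_line_t;

static cache_line_t cache[NUM_LINES];
static uint64_t next_write_order = 1;

/* Write data into a line and associate it with the current write order number. */
static void cache_write(int line, const uint8_t *src)
{
    for (int i = 0; i < 64; i++)
        cache[line].data[i] = src[i];
    cache[line].dirty = true;
    cache[line].write_order = next_write_order++;
}

/* Partial flush: persist every dirty line whose order number is at or below
 * the selected cutoff, so earlier writes reach storage before later ones. */
static void cache_partial_flush(uint64_t cutoff)
{
    for (int i = 0; i < NUM_LINES; i++) {
        if (cache[i].dirty && cache[i].write_order <= cutoff) {
            printf("flush line %d (write order %llu)\n",
                   i, (unsigned long long)cache[i].write_order);  /* stand-in write-back */
            cache[i].dirty = false;
        }
    }
}

int main(void)
{
    uint8_t buf[64] = {0};
    cache_write(0, buf);      /* write order 1 */
    cache_write(1, buf);      /* write order 2 */
    cache_write(2, buf);      /* write order 3 */
    cache_partial_flush(2);   /* flushes orders 1 and 2, leaves 3 dirty */
    return 0;
}
```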
  • Patent number: 8589908
    Abstract: An embodiment of the present invention provides a system and method for remotely upgrading the firmware of a target device using wireless technology from a Bluetooth-enabled PC or laptop to another Bluetooth device, e.g., a mouse, keyboard, headset, or mobile phone. Existing solutions either may not have upgrade capabilities, or may require the use of proprietary cables. An embodiment of the solution proposed here extends the “Connecting without cables” concept of Bluetooth to firmware upgrades. The system comprises a host device for sending the firmware required for the upgrade; and a target device containing a first code and a second code, wherein said first code identifies details of the firmware and said second code identifies the completion of the download operation when the firmware is successfully downloaded.
    Type: Grant
    Filed: November 29, 2006
    Date of Patent: November 19, 2013
    Assignee: ST-Ericsson SA
    Inventors: Sriharsha Mysore Subbakrishna, Naresh Kumar Gupta, Mohan Kashivasi
  • Patent number: 8578097
    Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
    Type: Grant
    Filed: October 24, 2011
    Date of Patent: November 5, 2013
    Assignee: Intel Corporation
    Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
  • Patent number: 8554851
    Abstract: Methods, apparatus and systems for facilitating one-way ordering of otherwise independent message classes. A one-way message ordering mechanism facilitates one-way ordering of messages of different message classes sent between interconnects employing independent pathways for the message classes. In one aspect, messages of a second message class may not pass messages of a first message class. Moreover, when messages of the first and second classes are received in sequence, the ordering mechanism ensures that messages of the first class are forwarded to, and received at, a next hop prior to forwarding messages of the second class.
    Type: Grant
    Filed: September 24, 2010
    Date of Patent: October 8, 2013
    Assignee: Intel Corporation
    Inventors: James R. Vash, Vida Vakilotojar, Bongjin Jung, Yen-Cheng Liu
  • Publication number: 20130262780
    Abstract: An apparatus and method to enable a fast cache shutdown is disclosed. In one embodiment, a cache subsystem includes a cache memory and a cache controller coupled to the cache memory. The cache controller is configured to, upon restoring power to the cache subsystem, inhibit writing of modified data exclusively into the cache memory.
    Type: Application
    Filed: March 30, 2012
    Publication date: October 3, 2013
    Inventors: Srilatha Manne, William L. Bircher, Madhu Sarvana Sibi Govindan, James M. O'Connor, Michael J. Schulte
  • Publication number: 20130246713
    Abstract: A method for managing a cache structure of a coupling facility includes receiving a conditional write command from a computing system and determining whether data associated with the conditional write command is part of a working set of data of the cache structure. If the data associated with the conditional write command is part of the working set of data of the cache structure, the conditional write command is processed as an unconditional write command. If the data associated with the conditional write command is not part of the working set of data of the cache structure, a conditional write failure notification is transmitted to the computing system.
    Type: Application
    Filed: March 19, 2012
    Publication date: September 19, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Riaz Ahmad, David A. Elko, Jeffrey W. Josten, Georgette Kurdt, Scott F. Schiffer, David H. Surman
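A minimal sketch of the conditional-write handling described in 20130246713, assuming a toy working-set membership test and a stand-in write routine; a real coupling facility would consult its cache-structure directory instead.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

typedef enum { WRITE_OK, COND_WRITE_FAILED } write_status_t;

/* Tiny stand-in for the cache structure's working set: a fixed list of
 * resident keys, purely for illustration. */
#define WS_SIZE 4
static const uint64_t working_set[WS_SIZE] = { 10, 11, 12, 13 };

static bool in_working_set(uint64_t key)
{
    for (int i = 0; i < WS_SIZE; i++)
        if (working_set[i] == key)
            return true;
    return false;
}

/* Stand-in for the actual unconditional write into the cache structure. */
static void unconditional_write(uint64_t key, const void *data, size_t len)
{
    (void)key; (void)data; (void)len;
}

write_status_t conditional_write(uint64_t key, const void *data, size_t len)
{
    if (in_working_set(key)) {
        unconditional_write(key, data, len);   /* resident: process as unconditional */
        return WRITE_OK;
    }
    return COND_WRITE_FAILED;                  /* not resident: notify the computing system */
}
```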
  • Patent number: 8510506
    Abstract: The invention proposes a disk array device that can improve response performance while maintaining data consistency even in the case a write request is received from a host device by a controller that does not have master authority. The disk array device includes a master controller and a slave controller. Upon adding identifying information indicating that write data has been stored in a buffer memory to the write request, the slave controller transmits, to the master controller, the write request to which the identifying information has been added as well as the write data. After having stored the write data, the master controller transmits the write request to which the identifying information has been added to the slave controller. Upon receiving the write request, the slave controller alters the attributes of the buffer memory where the write data has been stored, from the buffer memory to the cache memory.
    Type: Grant
    Filed: March 10, 2011
    Date of Patent: August 13, 2013
    Assignee: NEC Corporation
    Inventor: Yuji Kaneko
  • Publication number: 20130205097
    Abstract: A system and method facilitate processing atomic storage requests. The method includes receiving, from a storage client, an atomic storage request for a first storage device that is incapable of processing atomic write operations. The method also includes processing the atomic storage request at a translation interface. The method also includes storing the atomic storage request in one or more storage operations in a second storage device capable of processing the atomic storage request.
    Type: Application
    Filed: March 15, 2013
    Publication date: August 8, 2013
    Applicant: FUSION-IO
    Inventor: FUSION-IO
  • Patent number: 8504777
    Abstract: A method includes determining if a data processing instruction is a decorated access instruction with cache bypass, and determining if the data processing instruction generates a cache hit to a cache. When the data processing instruction is determined to be a decorated access instruction with cache bypass and the data processing instruction is determined to generate a cache hit, the method further includes invalidating a cache entry of the cache associated with the cache hit; and performing by a memory controller of the memory, a decoration operation specified by the data processor instruction on a location in the memory designated by a target address of the data processor instruction, wherein the performing the decorated access includes the memory controller performing a read of a value of the location in memory, modifying the value to generate a modified value, and writing the modified value to the location.
    Type: Grant
    Filed: September 21, 2010
    Date of Patent: August 6, 2013
    Assignee: Freescale Semiconductor, Inc.
    Inventor: William C. Moyer
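The decorated access with cache bypass in 8504777 can be sketched as follows; the "add value" decoration, the stand-in memory array, and the always-miss cache probe are illustrative assumptions. The key point is that a cache hit only invalidates the line, while the read-modify-write is performed by the memory controller on the memory location itself.

```c
#include <stdint.h>
#include <stdbool.h>

#define MEM_WORDS 1024
static uint32_t memory[MEM_WORDS];   /* stand-in backing memory */

/* Stand-in cache probe: on a hit the matching entry would be invalidated,
 * never updated. This toy version always misses. */
static bool cache_lookup_and_invalidate(uint64_t addr) { (void)addr; return false; }

static uint32_t mem_read(uint64_t addr)              { return memory[addr % MEM_WORDS]; }
static void     mem_write(uint64_t addr, uint32_t v) { memory[addr % MEM_WORDS] = v; }

/* Decorated store with cache bypass; the decoration is assumed to be
 * "add value to the target location". */
void decorated_add(uint64_t target_addr, uint32_t value)
{
    /* A hit only removes the stale cached copy from the cache. */
    cache_lookup_and_invalidate(target_addr);

    /* The memory controller performs the read-modify-write on memory. */
    uint32_t cur = mem_read(target_addr);
    mem_write(target_addr, cur + value);
}
```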
  • Patent number: 8504766
    Abstract: Methods and apparatus for cut-through cache memory management in write command processing on a mirrored virtual volume of a virtualized storage system, the virtual volume comprising a plurality of physical storage devices coupled with the storage system. Features and aspects hereof within the storage system provide for receipt of a write command and associated write data from an attached host. Using a cut-through cache technique, the write data is stored in a cache memory and transmitted to a first of the plurality of storage devices as the write data is stored in the cache memory thus eliminating one read-back of the write data for transfer to a first physical storage device. Following receipt of the write data and storage in the cache memory, the write data is transmitted from the cache memory to the other physical storage devices.
    Type: Grant
    Filed: April 15, 2010
    Date of Patent: August 6, 2013
    Assignee: NetApp, Inc.
    Inventor: Howard Young
  • Publication number: 20130198462
    Abstract: Techniques are described for using chunk stores as building blocks to construct larger chunk stores. A chunk store constructed of other chunk stores (a composite chunk store) may have any number and type of building block chunk stores. Further, the building block chunk stores within a composite chunk store may be arranged in any manner, resulting in any number of levels within the composite chunk store. The building block chunk stores expose a common interface, and apply the same hash function to content of chunks to produce the access key for the chunks. Because the access key is based on content, all copies of the same chunk will have the same access key, regardless of the chunk store that is managing the copy. In addition, no other chunk will have that same access key.
    Type: Application
    Filed: January 26, 2012
    Publication date: August 1, 2013
    Inventors: Bertrand Serlet, Roger Bodamer
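A content-addressed key is the core of 20130198462: every copy of the same chunk hashes to the same access key, no matter which building-block store manages it. The sketch below assumes a 64-bit FNV-1a hash and a flat in-memory store purely for illustration.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

typedef uint64_t chunk_key_t;

/* Access key derived from chunk content (FNV-1a 64-bit, an assumed choice). */
static chunk_key_t content_key(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint64_t h = 0xcbf29ce484222325ull;
    for (size_t i = 0; i < len; i++) { h ^= p[i]; h *= 0x100000001b3ull; }
    return h;
}

#define MAX_CHUNKS 16
static struct { chunk_key_t key; char data[64]; } store[MAX_CHUNKS];
static int count;

static chunk_key_t put_chunk(const char *data)
{
    chunk_key_t key = content_key(data, strlen(data));
    for (int i = 0; i < count; i++)
        if (store[i].key == key)
            return key;                      /* same content, same key: deduplicated */
    if (count < MAX_CHUNKS) {
        strncpy(store[count].data, data, sizeof store[count].data - 1);
        store[count].key = key;
        count++;
    }
    return key;
}

int main(void)
{
    chunk_key_t a = put_chunk("hello");
    chunk_key_t b = put_chunk("hello");      /* identical chunk stored again */
    printf("same key: %d\n", a == b);        /* prints 1 */
    return 0;
}
```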
  • Patent number: 8489820
    Abstract: A network storage server includes a main buffer cache to buffer writes requested by clients before committing them to primary persistent storage. The server further uses a secondary cache, implemented as low-cost, solid-state memory, such as flash memory, to store data evicted from the main buffer cache or data read from the primary persistent storage. To prevent bursts of writes to the secondary cache, data is copied from the main buffer cache to the secondary cache speculatively, before there is a need to evict data from the main buffer cache. Data can be copied to the secondary cache as soon as the data is marked as clean in the main buffer cache. Data can be written to secondary cache at a substantially constant rate, which can be at or close to the maximum write rate of the secondary cache.
    Type: Grant
    Filed: March 18, 2008
    Date of Patent: July 16, 2013
    Assignee: NetApp, Inc.
    Inventor: Daniel J. Ellard
  • Publication number: 20130179642
    Abstract: Systems and methods for performing non-allocating memory access instructions with physical address. A system includes a processor, one or more levels of caches, a memory, a translation look-aside buffer (TLB), and a memory access instruction specifying a memory access by the processor and an associated physical address. Execution logic is configured to bypass the TLB for the memory access instruction and perform the memory access with the physical address, while avoiding allocation of one or more intermediate levels of caches where a miss may be encountered.
    Type: Application
    Filed: February 17, 2012
    Publication date: July 11, 2013
    Applicant: QUALCOMM INCORPORATED
    Inventors: Erich James Plondke, Ajay Anant Ingle, Lucian Codrescu
  • Patent number: 8458433
    Abstract: A method and apparatus creates and manages persistent memory (PM) in a multi-node computing system. A PM Manager in the service node creates and manages pools of nodes with various sizes of PM. A node manager uses the pools of nodes to load applications to the nodes according to the size of the available PM. The PM Manager can dynamically adjust the size of the PM according to the needs of the applications based on historical use or as determined by a system administrator. The PM Manager works with an operating system kernel on the nodes to provide persistent memory for application data and system metadata. The PM Manager uses the persistent memory to load applications to preserve data from one application to the next. Also, the data preserved in persistent memory may be system metadata such as file system data that will be available to subsequent applications.
    Type: Grant
    Filed: October 29, 2007
    Date of Patent: June 4, 2013
    Assignee: International Business Machines Corporation
    Inventors: Eric Lawrence Barsness, David L. Darrington, Patrick Joseph McCarthy, Amanda Peters, John Matthew Santosuosso
  • Patent number: 8447924
    Abstract: A migration destination storage creates an expansion device for virtualizing a migration source logical unit. A host computer accesses an external volume by way of an access path of a migration destination logical unit, a migration destination storage, a migration source storage, and an external volume. After destaging all dirty data accumulated in the disk cache of the migration source storage to the external volume, an expansion device for virtualizing the external volume is mapped to the migration destination logical unit.
    Type: Grant
    Filed: April 20, 2012
    Date of Patent: May 21, 2013
    Assignee: Hitachi, Ltd.
    Inventors: Shunji Kawamura, Yasutomo Yamamoto, Yoshiaki Eguchi
  • Patent number: 8443134
    Abstract: Apparatuses, systems, and methods are disclosed for implementing a cache policy. A method may include determining a risk of data loss on a cache device. The cache device may comprise a non-volatile storage device configured to perform cache functions for a backing store. The cache device may implement a cache policy. A method may include determining that a risk of data loss on the cache devices exceeds a threshold risk level. A method may include implementing a modified cache policy for the cache device in response to the risk of data loss exceeding the threshold risk level. The modified cache policy may reduce the risk of data loss below the threshold level.
    Type: Grant
    Filed: September 17, 2010
    Date of Patent: May 14, 2013
    Assignee: Fusion-io, Inc.
    Inventor: David Flynn
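The threshold-driven policy change in 8443134 reduces to a simple decision. In the sketch below, write-through is used as an assumed example of the "modified cache policy"; the patent does not prescribe a specific replacement policy.

```c
typedef enum { POLICY_WRITE_BACK, POLICY_WRITE_THROUGH } cache_policy_t;

/* Choose a cache policy from an estimated risk of data loss on the cache
 * device. Above the threshold, fall back to a more conservative policy so
 * dirty data is not held only on the at-risk device. */
cache_policy_t select_policy(double risk_of_data_loss, double threshold)
{
    return (risk_of_data_loss > threshold) ? POLICY_WRITE_THROUGH
                                           : POLICY_WRITE_BACK;
}
```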
  • Patent number: 8423721
    Abstract: A method includes detecting a bus transaction on a system interconnect of a data processing system having at least two masters; determining whether the bus transaction is one of a first type of bus transaction or a second type of bus transaction, where the determining is based upon a burst attribute of the bus transaction; performing a cache coherency operation for the bus transaction in response to the determining that the bus transaction is of the first type, where the performing the cache coherency operation includes searching at least one cache of the data processing system to determine whether the at least one cache contains data associated with a memory address of the bus transaction; and not performing cache coherency operations for the bus transaction in response to the determining that the bus transaction is of the second type.
    Type: Grant
    Filed: April 30, 2008
    Date of Patent: April 16, 2013
    Assignee: Freescale Semiconductor, Inc.
    Inventor: William C. Moyer
  • Patent number: 8412911
    Abstract: A system and method for invalidating obsolete virtual/real address to physical address translations may employ translation lookaside buffers to cache translations. TLB entries may be invalidated in response to changes in the virtual memory space, and thus may need to be demapped. A non-cacheable unit (NCU) residing on a processor may be configured to receive and manage a global TLB demap request from a thread executing on a core residing on the processor. The NCU may send the request to local cores and/or to NCUs of external processors in a multiprocessor system using a hardware instruction to broadcast to all cores and/or processors or to multicast to designated cores and/or processors. The NCU may track completion of the demap operation across the cores and/or processors using one or more counters, and may send an acknowledgement to the initiator of the demap request when the global demap request has been satisfied.
    Type: Grant
    Filed: June 29, 2009
    Date of Patent: April 2, 2013
    Assignee: Oracle America, Inc.
    Inventors: Gregory F. Grohoski, Paul J. Jordan, Mark A. Luttrell, Zeid Hartuon Samoail
  • Patent number: 8402225
    Abstract: In a computing system, cache coherency is performed by selecting one of a plurality of coherency protocols for a first memory transaction. Each of the plurality of coherency protocols has a unique set of cache states that may be applied to cached data for the first memory transaction. Cache coherency is performed on appropriate caches in the computing system by applying the set of cache states of the selected one of the plurality of coherency protocols.
    Type: Grant
    Filed: September 21, 2010
    Date of Patent: March 19, 2013
    Assignee: Silicon Graphics International Corp.
    Inventors: Steven C. Miller, Martin M. Deneroff, Kenneth C. Yeager
  • Patent number: 8392664
    Abstract: A network on chip (‘NOC’) that includes integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers; each IP block adapted to a router through a memory communications controller and a network interface controller; and at least one IP block also including a computer processor and an L1, write-through data cache comprising high speed local memory on the IP block, the cache controlled by a cache controller having a cache line replacement policy, the cache controller configured to lock segments of the cache, the computer processor configured to store thread-private data in main memory off the IP block, the computer processor further configured to store thread-private data on a segment of the L1 data cache, the segment locked against replacement upon cache misses under the cache controller's replacement policy, the segment further locked against write-through to main memory.
    Type: Grant
    Filed: May 9, 2008
    Date of Patent: March 5, 2013
    Assignee: International Business Machines Corporation
    Inventors: Miguel Comparan, Russell D. Hoover, Eric O. Mejdrich
  • Patent number: 8386664
    Abstract: Reducing runtime coherency checking using global data flow analysis is provided. A determination is made as to whether a call is for at least one of a DMA get operation or a DMA put operation in response to the call being issued during execution of a compiled and optimized code. A determination is made as to whether a software cache write operation has been issued since a last flush operation in response to the call being the DMA get operation. A DMA get runtime coherency check is then performed in response to the software cache write operation being issued since the last flush operation.
    Type: Grant
    Filed: May 22, 2008
    Date of Patent: February 26, 2013
    Assignee: International Business Machines Corporation
    Inventors: Tong Chen, Haibo Lin, John K. O'Brien, Tao Zhang
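The condition gating the DMA get coherency check in 8386664 can be sketched with a single flag, assuming hypothetical runtime hooks inserted around software-cache writes, flushes, and DMA get calls.

```c
#include <stdbool.h>

static bool sw_cache_write_since_flush = false;

/* Called by the software cache when it services a write. */
void on_software_cache_write(void) { sw_cache_write_since_flush = true; }

/* Called after the software cache is flushed back to system memory. */
void on_software_cache_flush(void) { sw_cache_write_since_flush = false; }

/* Stand-in for the runtime coherency check itself. */
static void dma_get_coherency_check(void) { /* flush/invalidate overlapping lines */ }

/* Runtime hook executed when a DMA get call is issued. */
void on_dma_get(void)
{
    /* The check is only needed if a write could have left the software
     * cache newer than the memory the DMA get is about to read. */
    if (sw_cache_write_since_flush)
        dma_get_coherency_check();
}
```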
  • Patent number: 8375172
    Abstract: A mechanism is provided for enabling a proper write through during a write-through operation. Responsive to determining the memory access as a write-through operation, first circuitry determines whether a data input signal is in a first state or a second state. Responsive to the data input signal being in the second state, the first circuitry outputs a global write line signal in the first state. Responsive to the global write line signal being in the first state, second circuitry outputs a column select signal in the second state. Responsive to the column select signal being in the second state, third circuitry keeps a downstream read path of the cache access memory at the first state such that data output by the cache memory array is in the first state.
    Type: Grant
    Filed: April 16, 2010
    Date of Patent: February 12, 2013
    Assignee: International Business Machines Corporation
    Inventors: Eddie K. Chan, Michael J. Lee, Ricardo H. Nigaglioni, Bao G. Truong
  • Publication number: 20120303906
    Abstract: Embodiments are provided for cache memory systems. In one general embodiment, a method includes receiving a host write request from a host computer, creating a sequential log file in a storage device, and copying data received during the host write request to a storage buffer. The method further includes determining if a selected quantity of data has been accumulated in the storage buffer and executing a write through of data to sequentially write the data accumulated in the storage buffer to the sequential log file and to a storage class memory device if the selected quantity of data has been accumulated in the storage buffer.
    Type: Application
    Filed: August 6, 2012
    Publication date: November 29, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Binny S. Gill
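A rough sketch of the accumulate-then-write-through flow in 20120303906; the buffer size, trigger quantity, and stub log/SCM writers are illustrative assumptions.

```c
#include <stddef.h>
#include <string.h>

#define BUF_CAP  (64u * 1024u)
#define FLUSH_AT (48u * 1024u)   /* assumed "selected quantity" that triggers the write */

static unsigned char storage_buffer[BUF_CAP];
static size_t fill;

/* Stand-ins for appending to the sequential log file and writing to the
 * storage class memory device. */
static void append_to_sequential_log(const void *p, size_t n) { (void)p; (void)n; }
static void write_to_scm(const void *p, size_t n)             { (void)p; (void)n; }

void host_write(const void *data, size_t len)
{
    if (len > BUF_CAP - fill)
        return;                               /* illustrative overflow guard */
    memcpy(storage_buffer + fill, data, len); /* copy host data into the buffer */
    fill += len;

    if (fill >= FLUSH_AT) {                   /* enough data has accumulated */
        append_to_sequential_log(storage_buffer, fill);   /* sequential write */
        write_to_scm(storage_buffer, fill);               /* write through to SCM */
        fill = 0;
    }
}
```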
  • Patent number: 8321701
    Abstract: Methods and a processing device are provided for monitoring a level of power in a power supply of a processing device and changing a data flushing policy, with respect to data to be written to a non-volatile storage device, based on a predicted amount of time until power loss. When the predicted amount of time until power loss is higher than a threshold, as defined by a flushing policy, requests from applications for data flushes of data to a non-volatile storage device may be discarded. When the predicted amount of time remaining until power loss drops below the threshold, the requests from the applications for data flushes of the data to the non-volatile storage device may be honored and the data may be flushed to the non-volatile storage device. In some embodiments, the flushing policy may define additional thresholds.
    Type: Grant
    Filed: July 10, 2009
    Date of Patent: November 27, 2012
    Assignee: Microsoft Corporation
    Inventors: Nathan Steven Obr, Andrew Herron
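The flushing policy of 8321701 hinges on one comparison: flush requests are honored only once the predicted time until power loss drops below a policy threshold, and may be discarded above it (the abstract notes a policy may define additional thresholds). A minimal sketch, with the threshold passed in as a parameter:

```c
#include <stdbool.h>

/* Decide whether to honor an application's request to flush data to the
 * non-volatile storage device, given the predicted time remaining until
 * power loss. Above the threshold the request may be discarded; below it,
 * the data is flushed. */
bool should_honor_flush(double predicted_seconds_to_power_loss,
                        double threshold_seconds)
{
    return predicted_seconds_to_power_loss < threshold_seconds;
}
```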
  • Publication number: 20120290774
    Abstract: A method and system to allow power fail-safe write-back or write-through caching of data in a persistent storage device into one or more cache lines of a caching device. No metadata associated with any of the cache lines is written atomically into the caching device when the data in the storage device is cached. As such, specialized cache hardware to allow atomic writing of metadata during the caching of data is not required.
    Type: Application
    Filed: May 16, 2012
    Publication date: November 15, 2012
    Inventor: SANJEEV N. TRIKA
  • Patent number: 8307270
    Abstract: An advanced memory having improved performance, reduced power and increased reliability. A memory device includes a memory array, a receiver for receiving a command and associated data, error control coding circuitry for performing error control checking on the received command, and data masking circuitry for preventing the associated data from being written to the memory array in response to the error control coding circuitry detecting an error in the received command. Another memory device includes a programmable preamble. Another memory device includes a fast exit self-refresh mode. Another memory device includes an auto refresh function that is controlled by a characteristic of the memory device.
    Type: Grant
    Filed: September 3, 2009
    Date of Patent: November 6, 2012
    Assignee: International Business Machines Corporation
    Inventors: Kyu-Hyoun Kim, George L. Chiu, Paul W. Coteus, Daniel M. Dreps, Kevin C. Gower, Hillery C. Hunter, Charles A. Kilmer, Warren E. Maule
  • Patent number: 8301844
    Abstract: Multi-processor systems and methods are disclosed. One embodiment may comprise a multi-processor system including a processor that executes program instructions across at least one memory barrier. A request engine may provide an updated data fill corresponding to an invalid cache line. The invalid cache line may be associated with at least one executed load instruction. A load compare component may compare the invalid cache line to the updated data fill to evaluate the consistency of the at least one executed load instruction.
    Type: Grant
    Filed: January 13, 2004
    Date of Patent: October 30, 2012
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Simon C. Steely, Jr., Gregory Edward Tierney
  • Publication number: 20120254541
    Abstract: Methods and apparatus for updating data in passive variable resistive memory (PVRM) are provided. In one example, a method for updating data stored in PVRM is disclosed. The method includes updating a memory block of a plurality of memory blocks in a cache hierarchy without invalidating the memory block. The updated memory block may be copied from the cache hierarchy to a write through buffer. Additionally, the method includes writing the updated memory block to the PVRM, thereby updating the data in the PVRM.
    Type: Application
    Filed: April 4, 2011
    Publication date: October 4, 2012
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Brad Beckmann, Lisa Hsu
  • Publication number: 20120246413
    Abstract: Method and system for supporting multiple byte order formats, separately or simultaneously, are provided and described. In one embodiment, a page attribute table (PAT), which is programmable, is utilized to indicate byte order format. In another embodiment, a memory type range register (MTRR), which is programmable, is utilized to indicate byte order format.
    Type: Application
    Filed: March 2, 2012
    Publication date: September 27, 2012
    Inventor: H. Peter Anvin
  • Patent number: 8266386
    Abstract: A design structure for a processor system may be embodied in a machine readable medium for designing, manufacturing or testing a processor integrated circuit. The design structure may embody a processor integrated circuit including multiple processors with respective processor cache memories. The design structure may specify enhanced cache coherency protocols to achieve cache memory integrity in a multi-processor environment. The design structure may describe a processor bus controller that manages cache coherency bus interfaces to master devices and slave devices. The design structure may also describe a master I/O device controller and a slave I/O device controller that couple directly to the processor bus controller while system memory couples to the processor bus controller via a memory controller.
    Type: Grant
    Filed: November 25, 2008
    Date of Patent: September 11, 2012
    Assignee: International Business Machines Corporation
    Inventor: Bernard Charles Drerup
  • Patent number: 8250304
    Abstract: A memory device comprising a cache memory with a predetermined amount of cache sets, each cache set comprising a predetermined amount of cache lines. Each cache line is operable to indicate a cache data injection into the particular cache line triggered by a bus-actor.
    Type: Grant
    Filed: December 3, 2008
    Date of Patent: August 21, 2012
    Assignee: International Business Machines Corporation
    Inventors: Florian Alexander Auernhammer, Patricia Maria Sagmeister
  • Publication number: 20120198164
    Abstract: This invention is a cache system with a memory attribute register having plural entries. Each entry stores a write-through or a write-back indication for a corresponding memory address range. On a write to cached data, the cache consults the memory attribute register for the corresponding address range. Writes to addresses in regions marked as write-through always update all levels of the memory hierarchy. Writes to addresses in regions marked as write-back update only the first cache level that can service the write. The memory attribute register is preferably a memory mapped control register writable by the central processing unit.
    Type: Application
    Filed: September 28, 2011
    Publication date: August 2, 2012
    Applicant: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Raguram Damodaran, Abhijeet Ashok Chachad, Naveen Bhoria
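A sketch of the per-range lookup described in 20120198164, assuming an illustrative two-entry register and stub write paths; per the abstract, the real register is a memory-mapped control register written by the CPU.

```c
#include <stdint.h>

typedef enum { ATTR_WRITE_BACK, ATTR_WRITE_THROUGH } write_attr_t;

typedef struct {
    uint64_t     base;
    uint64_t     limit;    /* inclusive upper bound of the address range */
    write_attr_t attr;
} mar_entry_t;

/* A small memory attribute register with one entry per address range
 * (the ranges here are made up for illustration). */
#define MAR_ENTRIES 2
static const mar_entry_t mar[MAR_ENTRIES] = {
    { 0x00000000u, 0x0fffffffu, ATTR_WRITE_BACK    },
    { 0x10000000u, 0x1fffffffu, ATTR_WRITE_THROUGH },
};

/* Stand-ins for updating only the first cache level that can service the
 * write versus updating every level of the memory hierarchy. */
static void write_first_level(uint64_t a, uint32_t v) { (void)a; (void)v; }
static void write_all_levels(uint64_t a, uint32_t v)  { (void)a; (void)v; }

static write_attr_t lookup_attr(uint64_t addr)
{
    for (int i = 0; i < MAR_ENTRIES; i++)
        if (addr >= mar[i].base && addr <= mar[i].limit)
            return mar[i].attr;
    return ATTR_WRITE_BACK;   /* assumed default when no range matches */
}

/* On a write to cached data, consult the register for the address range. */
void cached_write(uint64_t addr, uint32_t val)
{
    if (lookup_attr(addr) == ATTR_WRITE_THROUGH)
        write_all_levels(addr, val);
    else
        write_first_level(addr, val);
}
```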
  • Patent number: 8219745
    Abstract: A method, an apparatus, and a computer program are provided to account for data stored in Dynamic Random Access Memory (DRAM) write buffers. There is difficulty in tracking the data stored in DRAM write buffers. To alleviate the difficulty, a cache line list is employed. The cache line list is maintained in a memory controller, which is updated with data movement. This list allows for ease of maintenance of data without loss of consistency.
    Type: Grant
    Filed: December 2, 2004
    Date of Patent: July 10, 2012
    Assignee: International Business Machines Corporation
    Inventors: Mark David Bellows, Kent Harold Haselhorst, Ryan Abel Heakendorf, Paul Allen Ganfield, Tolga Ozguner
  • Patent number: 8161248
    Abstract: A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode, that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.
    Type: Grant
    Filed: November 24, 2010
    Date of Patent: April 17, 2012
    Assignee: International Business Machines Corporation
    Inventors: Matthias A. Blumrich, Dong Chen, Paul W. Coteus, Alan G. Gara, Mark E. Giampapa, Phillip Heidelberger, Dirk Hoenicke, Martin Ohmacht
  • Publication number: 20120072675
    Abstract: A method includes determining if a data processing instruction is a decorated access instruction with cache bypass, and determining if the data processing instruction generates a cache hit to a cache.
    Type: Application
    Filed: September 21, 2010
    Publication date: March 22, 2012
    Inventor: William C. Moyer
  • Publication number: 20120047332
    Abstract: In an embodiment, a combining write buffer is configured to maintain one or more flush metrics to determine when to transmit write operations from buffer entries. The combining write buffer may be configured to dynamically modify the flush metrics in response to activity in the write buffer, modifying the conditions under which write operations are transmitted from the write buffer to the next lower level of memory. For example, in one implementation, the flush metrics may include categorizing write buffer entries as “collapsed.” A collapsed write buffer entry, and the collapsed write operations therein, may include at least one write operation that has overwritten data that was written by a previous write operation in the buffer entry. In another implementation, the combining write buffer may maintain the threshold of buffer fullness as a flush metric and may adjust it over time based on the actual buffer fullness.
    Type: Application
    Filed: August 20, 2010
    Publication date: February 23, 2012
    Inventors: Peter J. Bannon, Andrew J. Beaumont-Smith, Ramesh Gunna, Wei-han Lien, Brian P. Lilly, Jaidev P. Patwardhan, Shih-Chieh R. Wen, Tse-Yu Yeh
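One of the flush metrics in 20120047332, the "collapsed" entry, can be sketched with a per-entry byte mask; the mask representation, bounds guard, and fullness count below are assumptions for illustration.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint64_t addr;        /* line-aligned address covered by this entry */
    uint64_t byte_mask;   /* which of the 64 bytes have been written so far */
    bool     collapsed;   /* set once any byte is overwritten by a later write */
    uint8_t  data[64];
} wb_entry_t;

/* Merge a write of len bytes at byte offset off into an existing entry,
 * marking the entry collapsed if it overwrites previously written bytes. */
void wb_merge(wb_entry_t *e, unsigned off, unsigned len, const uint8_t *src)
{
    if (off >= 64 || len == 0 || len > 64 - off)
        return;                                             /* illustrative bounds guard */

    uint64_t mask = ((len < 64 ? (1ull << len) : 0ull) - 1ull) << off;
    if (e->byte_mask & mask)
        e->collapsed = true;        /* overlap detected: a flush candidate per the metric */

    for (unsigned i = 0; i < len; i++)
        e->data[off + i] = src[i];
    e->byte_mask |= mask;
}

/* A collapsed (or sufficiently full) entry is eligible to be flushed to the
 * next lower level of memory; the fullness threshold is the other metric
 * the abstract mentions, adjustable over time. */
bool wb_should_flush(const wb_entry_t *e, unsigned fullness_threshold_bytes)
{
    unsigned written = 0;
    for (uint64_t m = e->byte_mask; m; m >>= 1)
        written += (unsigned)(m & 1u);
    return e->collapsed || written >= fullness_threshold_bytes;
}
```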
  • Patent number: 8122197
    Abstract: A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode, that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.
    Type: Grant
    Filed: August 19, 2009
    Date of Patent: February 21, 2012
    Assignee: International Business Machines Corporation
    Inventors: Matthias A. Blumrich, Dong Chen, Paul W. Coteus, Alan G. Gara, Mark E. Giampapa, Philip Heidelberger, Dirk Hoenicke, Martin Ohmacht
  • Patent number: 8117400
    Abstract: A device and a method for fetching an information unit are provided. The method includes: receiving a request to execute a write through cacheable operation of the information unit; emptying a fetch unit of data, wherein the fetch unit is connected to a cache module and to a high level memory unit; determining, when the fetch unit is empty, whether the cache module stores an older version of the information unit; and selectively writing the information unit to the cache module in response to the determination.
    Type: Grant
    Filed: October 20, 2006
    Date of Patent: February 14, 2012
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Ziv Zamsky, Moshe Anschel, Alon Eldar, Dmitry Flat, Kostantin Godin, Itay Peled, Dvir Peleg
  • Patent number: 8108618
    Abstract: An information handling system includes a processor integrated circuit including multiple processors with respective processor cache memories. Enhanced cache coherency protocols achieve cache memory integrity in a multi-processor environment. A processor bus controller manages cache coherency bus interfaces to master devices and slave devices. In one embodiment, a master I/O device controller and a slave I/O device controller couple directly to the processor bus controller while system memory couples to the processor bus controller via a memory controller. In one embodiment, the processor bus controller blocks partial responses that it receives from all devices except the slave I/O device from being included in a combined response that the processor bus controller sends over the cache coherency buses.
    Type: Grant
    Filed: October 30, 2007
    Date of Patent: January 31, 2012
    Assignee: International Business Machines Corporation
    Inventor: Bernard Charles Drerup
  • Publication number: 20120011326
    Abstract: The configuration of a cache memory can be changed while minimizing the impact on input/output performance between the active storage system and a host system. A data transfer control unit transfers data via a cache memory by a write-after method; and as triggered by an event where an amount of input to and output from an object area in the cache memory falls below a certain value, the data transfer control unit switches from the write-after method to a write-through method and then transfers data via the cache memory. Subsequently, as triggered by an event where there is no longer any input to and output from the object area in the cache memory, a processor changes the configuration of the cache memory relating to the object area.
    Type: Application
    Filed: March 19, 2010
    Publication date: January 12, 2012
    Applicant: HITACHI, LTD.
    Inventors: Naoki Higashijima, Yuko Matsui
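The mode switch in 20120011326 is driven by I/O activity against the object area. The sketch below assumes a simple per-area I/O counter and threshold; the point is that reconfiguration waits until write-through is in effect and I/O has stopped, so no dirty data can be stranded in the cache.

```c
#include <stdbool.h>

typedef enum { MODE_WRITE_AFTER, MODE_WRITE_THROUGH } transfer_mode_t;

typedef struct {
    transfer_mode_t mode;
    unsigned long   recent_io_count;   /* recent I/O against the object area */
} object_area_t;

#define IO_SWITCH_THRESHOLD 100ul      /* assumed stand-in for the "certain value" */

/* Re-evaluate the transfer mode for an object area of the cache and report
 * whether its configuration may now be changed safely. */
bool update_transfer_mode(object_area_t *area)
{
    /* I/O amount has fallen below the threshold: switch to write-through. */
    if (area->mode == MODE_WRITE_AFTER &&
        area->recent_io_count < IO_SWITCH_THRESHOLD)
        area->mode = MODE_WRITE_THROUGH;

    /* Reconfiguration proceeds only once write-through is in effect and
     * there is no longer any I/O against the object area. */
    return area->mode == MODE_WRITE_THROUGH && area->recent_io_count == 0;
}
```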
  • Patent number: 8090907
    Abstract: A method, system, computer program product, and computer program storage device for receiving and processing I/O requests from a host device and providing data consistency in both a primary site and a secondary site, while migrating an SRC (Synchronous Peer to Peer Remote Copy) from a backend storage subsystem to a storage virtualization appliance. While transferring the SRC from the backend storage subsystem to the storage virtualization appliance, all new I/O requests are saved in both a primary cache memory and a secondary cache memory, allowing a time window during which the SRC at the backend storage subsystem can be stopped and the secondary storage device is made a readable and writable medium. The primary cache memory and secondary cache memory operate separately on each I/O request in write-through, read-write or no-flush mode.
    Type: Grant
    Filed: July 9, 2008
    Date of Patent: January 3, 2012
    Assignee: International Business Machines Corporation
    Inventors: Alexander H. Ainscow, John M. Clifton
  • Patent number: 8074026
    Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
    Type: Grant
    Filed: May 10, 2006
    Date of Patent: December 6, 2011
    Assignee: Intel Corporation
    Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
  • Patent number: 8051246
    Abstract: A method and apparatus for utilizing a semiconductor memory of a node as disk cache is described. In one embodiment, a method of utilizing a semiconductor memory of a second server for a first server comprises generating a storage access request at the first server, routing the storage access request through a communication link to the second server, and performing the storage access request using a semiconductor memory of the second server.
    Type: Grant
    Filed: December 21, 2007
    Date of Patent: November 1, 2011
    Assignee: Symantec Corporation
    Inventor: Nenad Caklovic
  • Publication number: 20110258395
    Abstract: A mechanism is provided for enabling a proper write through during a write-through operation. Responsive to determining the memory access as a write-through operation, first circuitry determines whether a data input signal is in a first state or a second state. Responsive to the data input signal being in the second state, the first circuitry outputs a global write line signal in the first state. Responsive to the global write line signal being in the first state, second circuitry outputs a column select signal in the second state. Responsive to the column select signal being in the second state, third circuitry keeps a downstream read path of the cache access memory at the first state such that data output by the cache memory array is in the first state.
    Type: Application
    Filed: April 16, 2010
    Publication date: October 20, 2011
    Applicant: International Business Machines Corporation
    Inventors: Eddie K. Chan, Michael J. Lee, Ricardo H. Nigaglioni, Bao G. Truong
  • Publication number: 20110231615
    Abstract: Disclosed is a coherent storage system. A network interface device (NIC) receives network storage commands from a host. The NIC may cache the data to/from the storage commands in a solid-state disk. The NIC may respond to future network storage commands by supplying the data from the solid-state disk rather than initiating a network transaction. Other NICs on other hosts may also cache network storage data. These NICs may respond to transactions from the first NIC by supplying data, or changing the state of data in their caches.
    Type: Application
    Filed: December 29, 2010
    Publication date: September 22, 2011
    Inventor: Robert E. Ober
  • Patent number: 8015351
    Abstract: A migration destination storage creates an expansion device for virtualizing a migration source logical unit. A host computer accesses an external volume by way of an access path of a migration destination logical unit, a migration destination storage, a migration source storage, and an external volume. After destaging all dirty data accumulated in the disk cache of the migration source storage to the external volume, an expansion device for virtualizing the external volume is mapped to the migration destination logical unit.
    Type: Grant
    Filed: November 15, 2010
    Date of Patent: September 6, 2011
    Assignee: Hitachi, Ltd.
    Inventors: Shunji Kawamura, Yasutomo Yamamoto, Yoshiaki Eguchi
  • Publication number: 20110202727
    Abstract: Techniques and methods are used to reduce allocations to a higher level cache of cache lines displaced from a lower level cache. The allocations of the displaced cache lines are prevented for displaced cache lines that are determined to be redundant in the next level cache, whereby castouts are reduced. To such ends, a line is selected to be displaced in a lower level cache. Information associated with the selected line is identified which indicates that the selected line is present in a higher level cache or the selected line is a write-through line. An allocation of the selected line in the higher level cache is prevented based on the identified information. Preventing an allocation of the selected line saves power that would be associated with the allocation.
    Type: Application
    Filed: February 18, 2010
    Publication date: August 18, 2011
    Applicant: QUALCOMM INCORPORATED
    Inventors: Thomas Philip Speier, James Norris Dieffenderfer, Thomas Andrew Sartorius
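The castout filter in 20110202727 reduces to a predicate over information kept with the displaced line; the two-flag structure below is an assumed representation of that information.

```c
#include <stdbool.h>

typedef struct {
    bool present_in_higher_level;   /* hint kept with the line in the lower cache */
    bool write_through;             /* the displaced line is a write-through line */
} displaced_line_t;

/* Decide whether a line displaced from the lower-level cache should be
 * allocated (cast out) into the higher-level cache. Redundant castouts are
 * suppressed, saving the power the allocation would have spent. */
bool should_allocate_castout(const displaced_line_t *line)
{
    if (line->present_in_higher_level || line->write_through)
        return false;   /* already present, or memory is already up to date */
    return true;        /* otherwise allocate as usual */
}
```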
  • Publication number: 20110191543
    Abstract: An apparatus for storing data that is being processed is disclosed. The apparatus comprises: a cache associated with a processor and for storing a local copy of data items stored in a memory for use by the processor, monitoring circuitry associated with the cache for monitoring write transaction requests to the memory initiated by a further device, the further device being configured not to store data in the cache. The monitoring circuitry is responsive to detecting a write transaction request to write a data item, a local copy of which is stored in the cache, to block a write acknowledge signal transmitted from the memory to the further device indicating the write has completed and to invalidate the stored local copy in the cache and on completion of the invalidation to send the write acknowledge signal to the further device.
    Type: Application
    Filed: February 2, 2010
    Publication date: August 4, 2011
    Applicant: ARM LIMITED
    Inventors: Simon John Craske, Antony John Penton, Loic Pierron, Andrew Christopher Rose
  • Publication number: 20110161586
    Abstract: Technologies are described herein related to multi-core processors that are adapted to share processor resources. An example multi-core processor can include a plurality of processor cores. The multi-core processor further can include a shared register file selectively coupled to two or more of the plurality of processor cores, where the shared register file is adapted to serve as a shared resource among the selected processor cores.
    Type: Application
    Filed: December 29, 2009
    Publication date: June 30, 2011
    Inventors: Miodrag Potkonjak, Nathan Zachary Beckmann
  • Publication number: 20110153944
    Abstract: A variety of circuits, methods and devices are implemented for secure storage of sensitive data in a computing system. A first dataset that is stored in main memory is accessed and a cache memory is configured to maintain logical consistency between the main memory and the cache. In response to determining that a second dataset is a sensitive dataset, the cache memory is directed to store the second dataset in a memory location of the cache memory without maintaining logical consistency with the dataset and main memory.
    Type: Application
    Filed: December 22, 2009
    Publication date: June 23, 2011
    Inventor: Klaus Kursawe