Write-through Patents (Class 711/142)
-
Publication number: 20130326150. Abstract: A cache is maintained with write order numbers that indicate orders of writes into the cache, so that periodic partial flushes of the cache can be executed while maintaining write order consistency. A method of storing data into the cache includes receiving a request to write data into the cache, identifying lines in the cache for storing the data, writing the data into the lines of the cache, storing a write order number, and associating the write order number with the lines of the cache. A method of flushing a cache having cache lines associated with write order numbers includes the steps of identifying lines in the cache that are associated with either a selected write order number or a write order number that is less than the selected write order number, and flushing data stored in the identified lines to a persistent storage. Type: Application. Filed: June 5, 2012. Publication date: December 5, 2013. Applicant: VMware, Inc. Inventors: Thomas A. Phelan, Erik Cota-Robles
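The flush-by-write-order scheme in this abstract can be sketched in a few lines: each cached line records the write order number (WON) in effect when it was written, and a partial flush evicts every line whose WON is at or below a selected cutoff. The class and method names below are illustrative assumptions, not the patented implementation.

```python
class WonCache:
    def __init__(self):
        self.lines = {}        # address -> (data, write order number)
        self.current_won = 0

    def write(self, address, data):
        """Store data and tag the line with the current write order number."""
        self.lines[address] = (data, self.current_won)

    def advance_won(self):
        """Begin a new write-order epoch (e.g., at a consistency point)."""
        self.current_won += 1

    def partial_flush(self, cutoff_won, storage):
        """Flush every line whose WON <= cutoff_won, oldest epoch first."""
        victims = sorted(
            (won, addr) for addr, (data, won) in self.lines.items()
            if won <= cutoff_won
        )
        for won, addr in victims:
            storage[addr] = self.lines.pop(addr)[0]
        return [addr for _, addr in victims]
```

A partial flush with cutoff 0 writes out only lines from epoch 0, leaving later writes cached, which is what lets the flush preserve write order consistency.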
-
Patent number: 8589908. Abstract: An embodiment of the present invention provides a system and method for remotely upgrading the firmware of a target device using wireless technology, from a Bluetooth-enabled PC or laptop to another Bluetooth device, e.g., a mouse, keyboard, headset, or mobile phone. Existing solutions either may not have upgrade capabilities, or may require the use of proprietary cables. An embodiment of the solution proposed here extends the “Connecting without cables” concept of Bluetooth to firmware upgrades. The system comprises a host device for sending the firmware required for the upgrade; and a target device containing a first code and a second code, wherein said first code identifies details of the firmware and said second code identifies the completion of the download operation when the firmware is successfully downloaded. Type: Grant. Filed: November 29, 2006. Date of Patent: November 19, 2013. Assignee: ST-Ericsson SA. Inventors: Sriharsha Mysore Subbakrishna, Naresh Kumar Gupta, Mohan Kashivasi
-
Patent number: 8578097. Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion. Type: Grant. Filed: October 24, 2011. Date of Patent: November 5, 2013. Assignee: Intel Corporation. Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
-
Patent number: 8554851. Abstract: Methods, apparatus and systems for facilitating one-way ordering of otherwise independent message classes. A one-way message ordering mechanism facilitates one-way ordering of messages of different message classes sent between interconnects employing independent pathways for the message classes. In one aspect, messages of a second message class may not pass messages of a first message class. Moreover, when messages of the first and second classes are received in sequence, the ordering mechanism ensures that messages of the first class are forwarded to, and received at, a next hop prior to forwarding messages of the second class. Type: Grant. Filed: September 24, 2010. Date of Patent: October 8, 2013. Assignee: Intel Corporation. Inventors: James R. Vash, Vida Vakilotojar, Bongjin Jung, Yen-Cheng Liu
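The one-way rule above — second-class messages may never pass first-class messages, but not vice versa — can be captured by a toy checker. It treats each message as a (message_class, id) tuple and verifies a delivery order against an arrival order; the class numbering and data model are assumptions for illustration only.

```python
def respects_one_way_order(arrival, delivery):
    """Return True iff the delivery order never lets a class-2 message
    pass a class-1 message that arrived earlier than it did."""
    arrival_pos = {msg: i for i, msg in enumerate(arrival)}
    delivery_pos = {msg: i for i, msg in enumerate(delivery)}
    for m2 in arrival:
        if m2[0] != 2:
            continue
        for m1 in arrival:
            earlier_class1 = m1[0] == 1 and arrival_pos[m1] < arrival_pos[m2]
            if earlier_class1 and delivery_pos[m2] < delivery_pos[m1]:
                return False  # m2 overtook an earlier class-1 message
    return True
```

Note the asymmetry: a class-1 message overtaking a class-2 message is allowed, which is why the check only iterates over class-2 messages.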
-
Publication number: 20130262780. Abstract: An apparatus and method to enable a fast cache shutdown is disclosed. In one embodiment, a cache subsystem includes a cache memory and a cache controller coupled to the cache memory. The cache controller is configured to, upon restoring power to the cache subsystem, inhibit writing of modified data exclusively into the cache memory. Type: Application. Filed: March 30, 2012. Publication date: October 3, 2013. Inventors: Srilatha Manne, William L. Bircher, Madhu Sarvana Sibi Govindan, James M. O'Connor, Michael J. Schulte
-
Publication number: 20130246713. Abstract: A method for managing a cache structure of a coupling facility includes receiving a conditional write command from a computing system and determining whether data associated with the conditional write command is part of a working set of data of the cache structure. If the data associated with the conditional write command is part of the working set of data of the cache structure, the conditional write command is processed as an unconditional write command. If the data associated with the conditional write command is not part of the working set of data of the cache structure, a conditional write failure notification is transmitted to the computing system. Type: Application. Filed: March 19, 2012. Publication date: September 19, 2013. Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION. Inventors: Riaz Ahmad, David A. Elko, Jeffrey W. Josten, Georgette Kurdt, Scott F. Schiffer, David H. Surman
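The conditional-write decision described above fits in a few lines: accept the write as unconditional if the key is already in the working set, otherwise report a conditional write failure. The class name, working-set representation, and return values are assumptions for illustration.

```python
class CouplingFacilityCache:
    def __init__(self, working_set):
        self.working_set = set(working_set)  # keys currently in the working set
        self.data = {}

    def conditional_write(self, key, value):
        """Process as an unconditional write if key is in the working set;
        otherwise notify the caller of a conditional write failure."""
        if key in self.working_set:
            self.data[key] = value
            return "written"
        return "conditional-write-failure"
```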
-
Patent number: 8510506. Abstract: The invention proposes a disk array device that can improve response performance while maintaining data consistency, even in the case where a write request is received from a host device by a controller that does not have master authority. The disk array device includes a master controller and a slave controller. Upon adding identifying information indicating that write data has been stored in a buffer memory to the write request, the slave controller transmits, to the master controller, the write request to which the identifying information has been added as well as the write data. After having stored the write data, the master controller transmits the write request to which the identifying information has been added to the slave controller. Upon receiving the write request, the slave controller alters the attribute of the buffer memory where the write data has been stored from buffer memory to cache memory. Type: Grant. Filed: March 10, 2011. Date of Patent: August 13, 2013. Assignee: NEC Corporation. Inventor: Yuji Kaneko
-
Publication number: 20130205097. Abstract: A system and method facilitate processing atomic storage requests. The method includes receiving, from a storage client, an atomic storage request for a first storage device that is incapable of processing atomic write operations. The method also includes processing the atomic storage request at a translation interface. The method also includes storing the atomic storage request in one or more storage operations in a second storage device capable of processing the atomic storage request. Type: Application. Filed: March 15, 2013. Publication date: August 8, 2013. Applicant: FUSION-IO. Inventor: FUSION-IO
-
Patent number: 8504777. Abstract: A method includes determining if a data processing instruction is a decorated access instruction with cache bypass, and determining if the data processing instruction generates a cache hit to a cache. When the data processing instruction is determined to be a decorated access instruction with cache bypass and the data processing instruction is determined to generate a cache hit, the method further includes invalidating a cache entry of the cache associated with the cache hit; and performing, by a memory controller of the memory, a decoration operation specified by the data processor instruction on a location in the memory designated by a target address of the data processor instruction, wherein the performing the decorated access includes the memory controller performing a read of a value of the location in memory, modifying the value to generate a modified value, and writing the modified value to the location. Type: Grant. Filed: September 21, 2010. Date of Patent: August 6, 2013. Assignee: Freescale Semiconductor, Inc. Inventor: William C. Moyer
-
Patent number: 8504766. Abstract: Methods and apparatus for cut-through cache memory management in write command processing on a mirrored virtual volume of a virtualized storage system, the virtual volume comprising a plurality of physical storage devices coupled with the storage system. Features and aspects hereof within the storage system provide for receipt of a write command and associated write data from an attached host. Using a cut-through cache technique, the write data is stored in a cache memory and transmitted to a first of the plurality of storage devices as the write data is stored in the cache memory, thus eliminating one read-back of the write data for transfer to a first physical storage device. Following receipt of the write data and storage in the cache memory, the write data is transmitted from the cache memory to the other physical storage devices. Type: Grant. Filed: April 15, 2010. Date of Patent: August 6, 2013. Assignee: NetApp, Inc. Inventor: Howard Young
-
Publication number: 20130198462. Abstract: Techniques are described for using chunk stores as building blocks to construct larger chunk stores. A chunk store constructed of other chunk stores (a composite chunk store) may have any number and type of building block chunk stores. Further, the building block chunk stores within a composite chunk store may be arranged in any manner, resulting in any number of levels within the composite chunk store. The building block chunk stores expose a common interface, and apply the same hash function to the content of chunks to produce the access key for the chunks. Because the access key is based on content, all copies of the same chunk will have the same access key, regardless of the chunk store that is managing the copy. In addition, no other chunk will have that same access key. Type: Application. Filed: January 26, 2012. Publication date: August 1, 2013. Inventors: Bertrand Serlet, Roger Bodamer
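The key property above — one shared hash function, so a chunk's access key is identical in every store that holds a copy — can be illustrated with a minimal content-addressed store composed of child stores. The class names, SHA-256 choice, and routing policy are assumptions, not the design claimed in the publication.

```python
import hashlib

class ChunkStore:
    def __init__(self):
        self._chunks = {}

    @staticmethod
    def key_for(content: bytes) -> str:
        # All stores apply the same hash function to chunk content.
        return hashlib.sha256(content).hexdigest()

    def put(self, content: bytes) -> str:
        key = self.key_for(content)
        self._chunks[key] = content
        return key

    def get(self, key):
        return self._chunks.get(key)

class CompositeChunkStore(ChunkStore):
    """A chunk store built from other chunk stores; it exposes the same
    interface and yields the same content-derived access keys."""
    def __init__(self, children):
        super().__init__()
        self.children = children

    def put(self, content: bytes) -> str:
        # Route the chunk to a child chosen from its key (illustrative policy).
        key = self.key_for(content)
        child = self.children[int(key, 16) % len(self.children)]
        return child.put(content)

    def get(self, key):
        for child in self.children:
            found = child.get(key)
            if found is not None:
                return found
        return None
```

Because `put` derives the key purely from content, a composite store and a plain store agree on every chunk's key, regardless of which level actually manages the copy.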
-
Patent number: 8489820. Abstract: A network storage server includes a main buffer cache to buffer writes requested by clients before committing them to primary persistent storage. The server further uses a secondary cache, implemented as low-cost, solid-state memory, such as flash memory, to store data evicted from the main buffer cache or data read from the primary persistent storage. To prevent bursts of writes to the secondary cache, data is copied from the main buffer cache to the secondary cache speculatively, before there is a need to evict data from the main buffer cache. Data can be copied to the secondary cache as soon as the data is marked as clean in the main buffer cache. Data can be written to the secondary cache at a substantially constant rate, which can be at or close to the maximum write rate of the secondary cache. Type: Grant. Filed: March 18, 2008. Date of Patent: July 16, 2013. Assignee: NetApp, Inc. Inventor: Daniel J. Ellard
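A rough sketch of the speculative population idea above: a block is staged into the flash-backed secondary cache the moment it is marked clean, so a later eviction requires no burst of secondary-cache writes. The class and method names are illustrative assumptions.

```python
class BufferCache:
    def __init__(self):
        self.blocks = {}      # block id -> (data, is_clean)
        self.secondary = {}   # flash-backed secondary cache

    def write(self, block, data):
        self.blocks[block] = (data, False)   # dirty on arrival

    def mark_clean(self, block):
        data, _ = self.blocks[block]
        self.blocks[block] = (data, True)
        # Speculative copy: stage into the secondary cache immediately,
        # long before any eviction pressure builds up.
        self.secondary[block] = data

    def evict(self, block):
        data, clean = self.blocks.pop(block)
        assert clean, "dirty blocks must be committed before eviction"
        # Nothing to write here: the secondary cache already holds the data.
        return self.secondary[block]
```

Rate-limiting the copies in `mark_clean` (not modeled here) is what lets the real system hold secondary-cache writes near a constant rate.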
-
Publication number: 20130179642. Abstract: Systems and methods for performing non-allocating memory access instructions with physical addresses. A system includes a processor, one or more levels of caches, a memory, a translation look-aside buffer (TLB), and a memory access instruction specifying a memory access by the processor and an associated physical address. Execution logic is configured to bypass the TLB for the memory access instruction and perform the memory access with the physical address, while avoiding allocation of one or more intermediate levels of caches where a miss may be encountered. Type: Application. Filed: February 17, 2012. Publication date: July 11, 2013. Applicant: QUALCOMM INCORPORATED. Inventors: Erich James Plondke, Ajay Anant Ingle, Lucian Codrescu
-
Patent number: 8458433. Abstract: A method and apparatus create and manage persistent memory (PM) in a multi-node computing system. A PM Manager in the service node creates and manages pools of nodes with various sizes of PM. A node manager uses the pools of nodes to load applications to the nodes according to the size of the available PM. The PM Manager can dynamically adjust the size of the PM according to the needs of the applications, based on historical use or as determined by a system administrator. The PM Manager works with an operating system kernel on the nodes to provide persistent memory for application data and system metadata. The PM Manager uses the persistent memory to load applications so as to preserve data from one application to the next. Also, the data preserved in persistent memory may be system metadata, such as file system data, that will be available to subsequent applications. Type: Grant. Filed: October 29, 2007. Date of Patent: June 4, 2013. Assignee: International Business Machines Corporation. Inventors: Eric Lawrence Barsness, David L. Darrington, Patrick Joseph McCarthy, Amanda Peters, John Matthew Santosuosso
-
Patent number: 8447924. Abstract: A migration destination storage creates an expansion device for virtualizing a migration source logical unit. A host computer accesses an external volume by way of an access path of a migration destination logical unit, a migration destination storage, a migration source storage, and an external volume. After destaging all dirty data accumulated in the disk cache of the migration source storage to the external volume, an expansion device for virtualizing the external volume is mapped to the migration destination logical unit. Type: Grant. Filed: April 20, 2012. Date of Patent: May 21, 2013. Assignee: Hitachi, Ltd. Inventors: Shunji Kawamura, Yasutomo Yamamoto, Yoshiaki Eguchi
-
Patent number: 8443134. Abstract: Apparatuses, systems, and methods are disclosed for implementing a cache policy. A method may include determining a risk of data loss on a cache device. The cache device may comprise a non-volatile storage device configured to perform cache functions for a backing store. The cache device may implement a cache policy. A method may include determining that the risk of data loss on the cache device exceeds a threshold risk level. A method may include implementing a modified cache policy for the cache device in response to the risk of data loss exceeding the threshold risk level. The modified cache policy may reduce the risk of data loss below the threshold level. Type: Grant. Filed: September 17, 2010. Date of Patent: May 14, 2013. Assignee: Fusion-io, Inc. Inventor: David Flynn
-
Patent number: 8423721. Abstract: A method includes detecting a bus transaction on a system interconnect of a data processing system having at least two masters; determining whether the bus transaction is of a first type or a second type, where the determining is based upon a burst attribute of the bus transaction; performing a cache coherency operation for the bus transaction in response to determining that the bus transaction is of the first type, where performing the cache coherency operation includes searching at least one cache of the data processing system to determine whether the at least one cache contains data associated with a memory address of the bus transaction; and not performing cache coherency operations for the bus transaction in response to determining that the bus transaction is of the second type. Type: Grant. Filed: April 30, 2008. Date of Patent: April 16, 2013. Assignee: Freescale Semiconductor, Inc. Inventor: William C. Moyer
-
Patent number: 8412911. Abstract: A system and method for invalidating obsolete virtual/real address to physical address translations may employ translation lookaside buffers to cache translations. TLB entries may be invalidated in response to changes in the virtual memory space, and thus may need to be demapped. A non-cacheable unit (NCU) residing on a processor may be configured to receive and manage a global TLB demap request from a thread executing on a core residing on the processor. The NCU may send the request to local cores and/or to NCUs of external processors in a multiprocessor system, using a hardware instruction to broadcast to all cores and/or processors or to multicast to designated cores and/or processors. The NCU may track completion of the demap operation across the cores and/or processors using one or more counters, and may send an acknowledgement to the initiator of the demap request when the global demap request has been satisfied. Type: Grant. Filed: June 29, 2009. Date of Patent: April 2, 2013. Assignee: Oracle America, Inc. Inventors: Gregory F. Grohoski, Paul J. Jordan, Mark A. Luttrell, Zeid Hartuon Samoail
-
Patent number: 8402225. Abstract: In a computing system, cache coherency is performed by selecting one of a plurality of coherency protocols for a first memory transaction. Each of the plurality of coherency protocols has a unique set of cache states that may be applied to cached data for the first memory transaction. Cache coherency is performed on appropriate caches in the computing system by applying the set of cache states of the selected one of the plurality of coherency protocols. Type: Grant. Filed: September 21, 2010. Date of Patent: March 19, 2013. Assignee: Silicon Graphics International Corp. Inventors: Steven C. Miller, Martin M. Deneroff, Kenneth C. Yeager
-
Patent number: 8392664. Abstract: A network on chip (‘NOC’) that includes integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers; each IP block is adapted to a router through a memory communications controller and a network interface controller, and at least one IP block also includes a computer processor and an L1 write-through data cache comprising high-speed local memory on the IP block. The cache is controlled by a cache controller having a cache line replacement policy, with the cache controller configured to lock segments of the cache. The computer processor is configured to store thread-private data in main memory off the IP block, and further configured to store thread-private data on a segment of the L1 data cache, the segment locked against replacement upon cache misses under the cache controller's replacement policy and further locked against write-through to main memory. Type: Grant. Filed: May 9, 2008. Date of Patent: March 5, 2013. Assignee: International Business Machines Corporation. Inventors: Miguel Comparan, Russell D. Hoover, Eric O. Mejdrich
-
Patent number: 8386664. Abstract: Reducing runtime coherency checking using global data flow analysis is provided. A determination is made as to whether a call is for at least one of a DMA get operation or a DMA put operation in response to the call being issued during execution of compiled and optimized code. A determination is made as to whether a software cache write operation has been issued since a last flush operation in response to the call being the DMA get operation. A DMA get runtime coherency check is then performed in response to the software cache write operation being issued since the last flush operation. Type: Grant. Filed: May 22, 2008. Date of Patent: February 26, 2013. Assignee: International Business Machines Corporation. Inventors: Tong Chen, Haibo Lin, John K. O'Brien, Tao Zhang
-
Patent number: 8375172. Abstract: A mechanism is provided for enabling a proper write-through during a write-through operation. Responsive to determining that the memory access is a write-through operation, first circuitry determines whether a data input signal is in a first state or a second state. Responsive to the data input signal being in the second state, the first circuitry outputs a global write line signal in the first state. Responsive to the global write line signal being in the first state, second circuitry outputs a column select signal in the second state. Responsive to the column select signal being in the second state, third circuitry keeps a downstream read path of the cache memory at the first state, such that data output by the cache memory array is in the first state. Type: Grant. Filed: April 16, 2010. Date of Patent: February 12, 2013. Assignee: International Business Machines Corporation. Inventors: Eddie K. Chan, Michael J. Lee, Ricardo H. Nigaglioni, Bao G. Truong
-
Publication number: 20120303906. Abstract: Embodiments are provided for cache memory systems. In one general embodiment, a method includes receiving a host write request from a host computer, creating a sequential log file in a storage device, and copying data received during the host write request to a storage buffer. The method further includes determining if a selected quantity of data has been accumulated in the storage buffer and, if so, executing a write-through of data to sequentially write the data accumulated in the storage buffer to the sequential log file and to a storage class memory device. Type: Application. Filed: August 6, 2012. Publication date: November 29, 2012. Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION. Inventor: Binny S. Gill
-
Patent number: 8321701. Abstract: Methods and a processing device are provided for monitoring a level of power in a power supply of a processing device and changing a data flushing policy, with respect to data to be written to a non-volatile storage device, based on a predicted amount of time until power loss. When the predicted amount of time until power loss is higher than a threshold, as defined by a flushing policy, requests from applications for data flushes of data to a non-volatile storage device may be discarded. When the predicted amount of time remaining until power loss drops below the threshold, the requests from the applications for data flushes of the data to the non-volatile storage device may be honored and the data may be flushed to the non-volatile storage device. In some embodiments, the flushing policy may define additional thresholds. Type: Grant. Filed: July 10, 2009. Date of Patent: November 27, 2012. Assignee: Microsoft Corporation. Inventors: Nathan Steven Obr, Andrew Herron
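The threshold rule above reduces to a single comparison per flush request: discard while predicted time-to-power-loss stays above the threshold, honor once it drops below. The class name, units, and return convention below are assumptions for illustration.

```python
class FlushPolicy:
    def __init__(self, threshold_seconds):
        self.threshold = threshold_seconds
        self.storage = []   # stands in for the non-volatile storage device

    def request_flush(self, data, predicted_seconds_to_power_loss):
        """Return True if the flush was performed, False if discarded."""
        if predicted_seconds_to_power_loss > self.threshold:
            return False             # ample power remains; discard the flush
        self.storage.append(data)    # power is running out; persist now
        return True
```

The abstract's "additional thresholds" variant would simply replace the single comparison with a lookup across several (threshold, action) pairs.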
-
Publication number: 20120290774. Abstract: A method and system to allow power fail-safe write-back or write-through caching of data in a persistent storage device into one or more cache lines of a caching device. No metadata associated with any of the cache lines is written atomically into the caching device when the data in the storage device is cached. As such, specialized cache hardware to allow atomic writing of metadata during the caching of data is not required. Type: Application. Filed: May 16, 2012. Publication date: November 15, 2012. Inventor: Sanjeev N. Trika
-
Patent number: 8307270. Abstract: An advanced memory having improved performance, reduced power and increased reliability. A memory device includes a memory array, a receiver for receiving a command and associated data, error control coding circuitry for performing error control checking on the received command, and data masking circuitry for preventing the associated data from being written to the memory array in response to the error control coding circuitry detecting an error in the received command. Another memory device includes a programmable preamble. Another memory device includes a fast-exit self-refresh mode. Another memory device includes an auto refresh function that is controlled by a characteristic of the memory device. Type: Grant. Filed: September 3, 2009. Date of Patent: November 6, 2012. Assignee: International Business Machines Corporation. Inventors: Kyu-Hyoun Kim, George L. Chiu, Paul W. Coteus, Daniel M. Dreps, Kevin C. Gower, Hillery C. Hunter, Charles A. Kilmer, Warren E. Maule
-
Patent number: 8301844. Abstract: Multi-processor systems and methods are disclosed. One embodiment may comprise a multi-processor system including a processor that executes program instructions across at least one memory barrier. A request engine may provide an updated data fill corresponding to an invalid cache line. The invalid cache line may be associated with at least one executed load instruction. A load compare component may compare the invalid cache line to the updated data fill to evaluate the consistency of the at least one executed load instruction. Type: Grant. Filed: January 13, 2004. Date of Patent: October 30, 2012. Assignee: Hewlett-Packard Development Company, L.P. Inventors: Simon C. Steely, Jr., Gregory Edward Tierney
-
Publication number: 20120254541. Abstract: Methods and apparatus for updating data in passive variable resistive memory (PVRM) are provided. In one example, a method for updating data stored in PVRM is disclosed. The method includes updating a memory block of a plurality of memory blocks in a cache hierarchy without invalidating the memory block. The updated memory block may be copied from the cache hierarchy to a write-through buffer. Additionally, the method includes writing the updated memory block to the PVRM, thereby updating the data in the PVRM. Type: Application. Filed: April 4, 2011. Publication date: October 4, 2012. Applicant: Advanced Micro Devices, Inc. Inventors: Brad Beckmann, Lisa Hsu
-
Publication number: 20120246413. Abstract: Methods and systems for supporting multiple byte order formats, separately or simultaneously, are provided and described. In one embodiment, a page attribute table (PAT), which is programmable, is utilized to indicate byte order format. In another embodiment, a memory type range register (MTRR), which is programmable, is utilized to indicate byte order format. Type: Application. Filed: March 2, 2012. Publication date: September 27, 2012. Inventor: H. Peter Anvin
-
Patent number: 8266386. Abstract: A design structure for a processor system may be embodied in a machine-readable medium for designing, manufacturing or testing a processor integrated circuit. The design structure may embody a processor integrated circuit including multiple processors with respective processor cache memories. The design structure may specify enhanced cache coherency protocols to achieve cache memory integrity in a multi-processor environment. The design structure may describe a processor bus controller that manages cache coherency bus interfaces to master devices and slave devices. The design structure may also describe a master I/O device controller and a slave I/O device controller that couple directly to the processor bus controller, while system memory couples to the processor bus controller via a memory controller. Type: Grant. Filed: November 25, 2008. Date of Patent: September 11, 2012. Assignee: International Business Machines Corporation. Inventor: Bernard Charles Drerup
-
Patent number: 8250304. Abstract: A memory device comprising a cache memory with a predetermined number of cache sets, each cache set comprising a predetermined number of cache lines. Each cache line is operable to indicate a cache data injection into the particular cache line triggered by a bus-actor. Type: Grant. Filed: December 3, 2008. Date of Patent: August 21, 2012. Assignee: International Business Machines Corporation. Inventors: Florian Alexander Auernhammer, Patricia Maria Sagmeister
-
Publication number: 20120198164. Abstract: This invention is a cache system with a memory attribute register having plural entries. Each entry stores a write-through or a write-back indication for a corresponding memory address range. On a write to cached data, the cache consults the memory attribute register for the corresponding address range. Writes to addresses in regions marked as write-through always update all levels of the memory hierarchy. Writes to addresses in regions marked as write-back update only the first cache level that can service the write. The memory attribute register is preferably a memory-mapped control register writable by the central processing unit. Type: Application. Filed: September 28, 2011. Publication date: August 2, 2012. Applicant: TEXAS INSTRUMENTS INCORPORATED. Inventors: Raguram Damodaran, Abhijeet Ashok Chachad, Naveen Bhoria
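The per-range write-through/write-back selection above can be modeled with a two-level toy hierarchy: a write always updates L1, and additionally updates memory only when the address falls in a range marked write-through. The register layout, default attribute, and two-level structure are assumptions for illustration.

```python
class AttributeCache:
    def __init__(self, ranges):
        # ranges: list of (lo, hi, "WT" or "WB"), one per register entry
        self.ranges = ranges
        self.l1 = {}
        self.memory = {}

    def attr_for(self, addr):
        for lo, hi, attr in self.ranges:
            if lo <= addr <= hi:
                return attr
        return "WT"   # assumed default for unmapped addresses

    def write(self, addr, value):
        self.l1[addr] = value
        if self.attr_for(addr) == "WT":
            self.memory[addr] = value   # write-through: update all levels
        # write-back: only L1 is updated; memory stays stale until writeback
```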
-
Patent number: 8219745. Abstract: A method, an apparatus, and a computer program are provided to account for data stored in Dynamic Random Access Memory (DRAM) write buffers. There is difficulty in tracking the data stored in DRAM write buffers. To alleviate the difficulty, a cache line list is employed. The cache line list is maintained in a memory controller, which is updated with data movement. This list allows for ease of maintenance of data without loss of consistency. Type: Grant. Filed: December 2, 2004. Date of Patent: July 10, 2012. Assignee: International Business Machines Corporation. Inventors: Mark David Bellows, Kent Harold Haselhorst, Ryan Abel Heakendorf, Paul Allen Ganfield, Tolga Ozguner
-
Patent number: 8161248. Abstract: A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements. Type: Grant. Filed: November 24, 2010. Date of Patent: April 17, 2012. Assignee: International Business Machines Corporation. Inventors: Matthias A. Blumrich, Dong Chen, Paul W. Coteus, Alan G. Gara, Mark E. Giampapa, Phillip Heidelberger, Dirk Hoenicke, Martin Ohmacht
-
Publication number: 20120072675. Abstract: A method includes determining if a data processing instruction is a decorated access instruction with cache bypass, and determining if the data processing instruction generates a cache hit to a cache. Type: Application. Filed: September 21, 2010. Publication date: March 22, 2012. Inventor: William C. Moyer
-
Publication number: 20120047332. Abstract: In an embodiment, a combining write buffer is configured to maintain one or more flush metrics to determine when to transmit write operations from buffer entries. The combining write buffer may be configured to dynamically modify the flush metrics in response to activity in the write buffer, modifying the conditions under which write operations are transmitted from the write buffer to the next lower level of memory. For example, in one implementation, the flush metrics may include categorizing write buffer entries as "collapsed." A collapsed write buffer entry, and the collapsed write operations therein, may include at least one write operation that has overwritten data that was written by a previous write operation in the buffer entry. In another implementation, the combining write buffer may maintain the threshold of buffer fullness as a flush metric and may adjust it over time based on the actual buffer fullness. Type: Application. Filed: August 20, 2010. Publication date: February 23, 2012. Inventors: Peter J. Bannon, Andrew J. Beaumont-Smith, Ramesh Gunna, Wei-han Lien, Brian P. Lilly, Jaidev P. Patwardhan, Shih-Chieh R. Wen, Tse-Yu Yeh
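The "collapsed" flush metric above is easy to model: an entry becomes collapsed when a later write overwrites data already buffered for the same address, making it a good flush candidate. The data structure and flush policy below are illustrative assumptions, not the claimed hardware.

```python
class CombiningWriteBuffer:
    def __init__(self):
        self.entries = {}   # address -> {"data": ..., "collapsed": bool}

    def write(self, addr, data):
        if addr in self.entries:
            # Overwrite of previously buffered data: mark the entry collapsed.
            self.entries[addr]["data"] = data
            self.entries[addr]["collapsed"] = True
        else:
            self.entries[addr] = {"data": data, "collapsed": False}

    def flush_collapsed(self, memory):
        """Transmit collapsed entries to the next lower level of memory."""
        for addr in [a for a, e in self.entries.items() if e["collapsed"]]:
            memory[addr] = self.entries.pop(addr)["data"]
```

Only the latest value per address ever reaches memory, which is the bandwidth win of combining; non-collapsed entries stay buffered awaiting further combining.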
-
Patent number: 8122197. Abstract: A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements. Type: Grant. Filed: August 19, 2009. Date of Patent: February 21, 2012. Assignee: International Business Machines Corporation. Inventors: Matthias A. Blumrich, Dong Chen, Paul W. Coteus, Alan G. Gara, Mark E. Giampapa, Philip Heidelberger, Dirk Hoenicke, Martin Ohmacht
-
Patent number: 8117400Abstract: A device and a method for fetching an information unit, the method includes: receiving a request to execute a write through cacheable operation of the information unit; emptying a fetch unit of data, wherein the fetch unit is connected to a cache module and to a high level memory unit; determining, when the fetch unit is empty, whether the cache module stores an older version of the information unit; and selectively writing the information unit to the cache module in response to the determination.Type: GrantFiled: October 20, 2006Date of Patent: February 14, 2012Assignee: Freescale Semiconductor, Inc.Inventors: Ziv Zamsky, Moshe Anschel, Alon Eldar, Dmitry Flat, Kostantin Godin, Itay Peled, Dvir Peleg
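The claimed sequence (drain the fetch unit, check for an older cached copy, then write selectively) can be sketched as a small Python function. Function and variable names are illustrative, not from the patent, and the data structures are simplified to dictionaries:

```python
def write_through_store(cache, memory, fetch_unit, addr, value):
    """Illustrative sequence for a write-through cacheable store:
    1. empty the fetch unit of pending data,
    2. with the fetch unit empty, check whether the cache holds an
       older version of the information unit,
    3. update the cache only if it does, and always update memory."""
    fetch_unit.clear()          # step 1: drain pending fetch data
    if addr in cache:           # step 2: older version present?
        cache[addr] = value     # step 3a: refresh the cached copy
    memory[addr] = value        # step 3b: write-through to high-level memory
```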
-
Patent number: 8108618Abstract: An information handling system includes a processor integrated circuit including multiple processors with respective processor cache memories. Enhanced cache coherency protocols achieve cache memory integrity in a multi-processor environment. A processor bus controller manages cache coherency bus interfaces to master devices and slave devices. In one embodiment, a master I/O device controller and a slave I/O device controller couple directly to the processor bus controller while system memory couples to the processor bus controller via a memory controller. In one embodiment, the processor bus controller blocks partial responses that it receives from all devices except the slave I/O device from being included in a combined response that the processor bus controller sends over the cache coherency buses.Type: GrantFiled: October 30, 2007Date of Patent: January 31, 2012Assignee: International Business Machines CorporationInventor: Bernard Charles Drerup
-
Publication number: 20120011326Abstract: The configuration of a cache memory can be changed while minimizing the impact on input-output performance between a host system and the active storage system. A data transfer control unit transfers data via a cache memory by a write-after method; and as triggered by an event where an amount of input to and output from an object area in the cache memory falls below a certain value, the data transfer control unit switches from the write-after method to a write-through method and then transfers data via the cache memory. Subsequently, as triggered by an event where there is no longer any input to and output from the object area in the cache memory, a processor changes the configuration of the cache memory relating to the object area.Type: ApplicationFiled: March 19, 2010Publication date: January 12, 2012Applicant: HITACHI, LTD.Inventors: Naoki Higashijima, Yuko Matsui
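The two-trigger flow in this abstract (write-after until I/O falls below a threshold, write-through until I/O stops, then reconfigure) is essentially a small state machine. A hedged Python sketch, with the threshold and state names chosen for illustration rather than taken from the patent:

```python
class CacheModeController:
    """Illustrative state machine for the described reconfiguration flow.

    Modes: 'write-after' (destage later) -> 'write-through' (once I/O to
    the object area drops below a threshold) -> reconfigure (once I/O
    stops entirely). The threshold value is an assumption."""

    def __init__(self, io_threshold=10):
        self.io_threshold = io_threshold
        self.mode = 'write-after'
        self.reconfigured = False

    def observe_io_rate(self, ops_per_interval):
        if self.mode == 'write-after' and ops_per_interval < self.io_threshold:
            # Trigger 1: I/O fell below the threshold, switch methods.
            self.mode = 'write-through'
        elif self.mode == 'write-through' and ops_per_interval == 0:
            # Trigger 2: no more I/O to the object area, safe to change
            # the cache configuration.
            self.reconfigured = True
```

The point of the intermediate write-through phase is that, with no dirty data accumulating, the cache configuration for the object area can be changed without a lengthy destage stalling host I/O.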
-
Patent number: 8090907Abstract: A method, system, computer program product, and computer program storage device for receiving and processing I/O requests from a host device and providing data consistency in both a primary site and a secondary site, while migrating a SRC (Synchronous Peer to Peer Remote Copy) from a backend storage subsystem to a storage virtualization appliance. While transferring SRC from the backend storage subsystem to the storage virtualization appliance, all new I/O requests are saved in both a primary cache memory and a secondary cache memory, allowing a time window during which the SRC at the backend storage subsystem can be stopped and the secondary storage device is made readable and writable. The primary cache memory and secondary cache memory operate separately on each I/O request in write-through, read-write or no-flush mode.Type: GrantFiled: July 9, 2008Date of Patent: January 3, 2012Assignee: International Business Machines CorporationInventors: Alexander H. Ainscow, John M. Clifton
-
Patent number: 8074026Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.Type: GrantFiled: May 10, 2006Date of Patent: December 6, 2011Assignee: Intel CorporationInventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
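Gather and scatter operations themselves are simple to state, even though the patent's contribution is the hardware support (address calculation, data shuffling, format conversion). A minimal Python sketch of the access pattern being optimized, with illustrative function names:

```python
def gather(memory, indices):
    """Gather: read only the useful elements at the given fine-grained
    indices, rather than full contiguous blocks."""
    return [memory[i] for i in indices]

def scatter(memory, indices, values):
    """Scatter: write values back to the same fine-grained locations."""
    for i, v in zip(indices, values):
        memory[i] = v
```

The bandwidth argument is that for sparse index sets, touching only `len(indices)` elements is far cheaper off-chip than streaming whole cache lines that are mostly discarded.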
-
Patent number: 8051246Abstract: A method and apparatus for utilizing a semiconductor memory of a node as disk cache is described. In one embodiment, a method of utilizing a semiconductor memory of a second server for a first server, comprising generating a storage access request at a first server, routing the storage access request through a communication link to a second server and performing the storage access request using a semiconductor memory of the second server.Type: GrantFiled: December 21, 2007Date of Patent: November 1, 2011Assignee: Symantec CorporationInventor: Nenad Caklovic
-
Publication number: 20110258395Abstract: A mechanism is provided for enabling a proper write through during a write-through operation. Responsive to determining the memory access as a write-through operation, first circuitry determines whether a data input signal is in a first state or a second state. Responsive to the data input signal being in the second state, the first circuitry outputs a global write line signal in the first state. Responsive to the global write line signal being in the first state, second circuitry outputs a column select signal in the second state. Responsive to the column select signal being in the second state, third circuitry keeps a downstream read path of the cache access memory at the first state such that data output by the cache memory array is in the first state.Type: ApplicationFiled: April 16, 2010Publication date: October 20, 2011Applicant: International Business Machines CorporationInventors: Eddie K. Chan, Michael J. Lee, Ricardo H. Nigaglioni, Bao G. Truong
-
Publication number: 20110231615Abstract: Disclosed is a coherent storage system. A network interface device (NIC) receives network storage commands from a host. The NIC may cache the data to/from the storage commands in a solid-state disk. The NIC may respond to future network storage command by supplying the data from the solid-state disk rather than initiating a network transaction. Other NIC's on other hosts may also cache network storage data. These NICs may respond to transactions from the first NIC by supplying data, or changing the state of data in their caches.Type: ApplicationFiled: December 29, 2010Publication date: September 22, 2011Inventor: Robert E. Ober
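The NIC-side caching behavior in this abstract (serve repeat storage reads from the local solid-state disk instead of issuing a network transaction) can be sketched in a few lines of Python. The class and attribute names are assumptions; the abstract describes the behavior, not an API:

```python
class CachingNic:
    """Sketch of a NIC that caches network storage data in a local SSD
    and satisfies repeat reads without a network transaction."""

    def __init__(self, remote_store):
        self.remote = remote_store   # stands in for the network target
        self.ssd_cache = {}          # block -> data cached on the SSD
        self.network_reads = 0       # count of actual network transactions

    def read(self, block):
        if block in self.ssd_cache:
            return self.ssd_cache[block]   # hit: no network traffic
        self.network_reads += 1            # miss: one network transaction
        data = self.remote[block]
        self.ssd_cache[block] = data       # fill the SSD cache
        return data
```

The coherence aspect the abstract mentions (peer NICs supplying data or changing each other's cache state) would sit on top of this, much like a snooping protocol among the NICs.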
-
Patent number: 8015351Abstract: A migration destination storage creates an expansion device for virtualizing a migration source logical unit. A host computer accesses an external volume by way of an access path of a migration destination logical unit, a migration destination storage, a migration source storage, and an external volume. After destaging all dirty data accumulated in the disk cache of the migration source storage to the external volume, an expansion device for virtualizing the external volume is mapped to the migration destination logical unit.Type: GrantFiled: November 15, 2010Date of Patent: September 6, 2011Assignee: Hitachi, Ltd.Inventors: Shunji Kawamura, Yasutomo Yamamoto, Yoshiaki Eguchi
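The ordering constraint in this abstract (destage all dirty data from the migration source's disk cache to the external volume before mapping that volume at the destination) can be shown with a small Python sketch. Names and data shapes are illustrative assumptions:

```python
def complete_migration(source_cache, external_volume, destination_lu):
    """Sketch of the destage-then-remap step: flush every dirty line in
    the migration source's cache to the external volume, and only then
    map the external volume to the migration destination logical unit."""
    for addr, (data, dirty) in list(source_cache.items()):
        if dirty:
            external_volume[addr] = data          # destage to the volume
            source_cache[addr] = (data, False)    # line is now clean
    # Safe to remap: the volume holds all the data the cache held.
    destination_lu['mapped_volume'] = external_volume
    return destination_lu
```

Skipping the destage would let the destination read the external volume while newer data still sits only in the source's cache, which is exactly the inconsistency the sequencing prevents.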
-
Publication number: 20110202727Abstract: Techniques and methods are used to reduce allocations to a higher level cache of cache lines displaced from a lower level cache. The allocations of the displaced cache lines are prevented for displaced cache lines that are determined to be redundant in the next level cache, whereby castouts are reduced. To such ends, a line is selected to be displaced in a lower level cache. Information associated with the selected line is identified which indicates that the selected line is present in a higher level cache or the selected line is a write-through line. An allocation of the selected line in the higher level cache is prevented based on the identified information. Preventing an allocation of the selected line saves power that would be associated with the allocation.Type: ApplicationFiled: February 18, 2010Publication date: August 18, 2011Applicant: QUALCOMM INCORPORATEDInventors: Thomas Philip Speier, James Norris Dieffenderfer, Thomas Andrew Sartorius
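The castout filter this abstract describes reduces to one check on eviction: skip the higher-level allocation when the displaced line is already present there or is write-through. A hedged Python sketch, with flag and parameter names assumed for illustration:

```python
def castout(victim_addr, victim_data, line_info, l2_cache):
    """Sketch of the redundant-castout filter: when the line selected for
    displacement from L1 is known to be in L2 already, or is a
    write-through line (so memory is already current), the L2 allocation
    is prevented, saving the power that allocation would cost."""
    if line_info['present_in_l2'] or line_info['write_through']:
        return False                      # allocation prevented
    l2_cache[victim_addr] = victim_data   # normal castout allocation
    return True
```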
-
Publication number: 20110191543Abstract: An apparatus for storing data that is being processed is disclosed. The apparatus comprises: a cache associated with a processor and for storing a local copy of data items stored in a memory for use by the processor, monitoring circuitry associated with the cache for monitoring write transaction requests to the memory initiated by a further device, the further device being configured not to store data in the cache. The monitoring circuitry is responsive to detecting a write transaction request to write a data item, a local copy of which is stored in the cache, to block a write acknowledge signal transmitted from the memory to the further device indicating the write has completed and to invalidate the stored local copy in the cache and on completion of the invalidation to send the write acknowledge signal to the further device.Type: ApplicationFiled: February 2, 2010Publication date: August 4, 2011Applicant: ARM LIMITEDInventors: Simon John Craske, Antony John Penton, Loic Pierron, Andrew Christopher Rose
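The ordering this abstract enforces (hold the write acknowledge from memory until the stale cached copy is invalidated, then forward the ack) can be sketched as a single function modeling the monitoring circuitry's behavior. All names are illustrative; the hardware signals are modeled as plain Python values:

```python
def monitored_device_write(cache, memory, addr, value):
    """Sketch of the monitored write path. A further device (one that
    never fills the cache) writes to memory; if the processor's cache
    holds a local copy of that address, the write acknowledge is held
    until that copy is invalidated, so the processor can never ack-race
    its way into reading stale data."""
    memory[addr] = value            # the further device's write completes
    if addr in cache:               # monitor detects a cached local copy
        del cache[addr]             # invalidate it before releasing the ack
    return 'ack'                    # ack forwarded only after invalidation
```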
-
Publication number: 20110161586Abstract: Technologies are described herein related to multi-core processors that are adapted to share processor resources. An example multi-core processor can include a plurality of processor cores. The multi-core processor further can include a shared register file selectively coupled to two or more of the plurality of processor cores, where the shared register file is adapted to serve as a shared resource among the selected processor cores.Type: ApplicationFiled: December 29, 2009Publication date: June 30, 2011Inventors: Miodrag Potkonjak, Nathan Zachary Beckmann
-
Publication number: 20110153944Abstract: A variety of circuits, methods and devices are implemented for secure storage of sensitive data in a computing system. A first dataset that is stored in main memory is accessed and a cache memory is configured to maintain logical consistency between the main memory and the cache. In response to determining that a second dataset is a sensitive dataset, the cache memory is directed to store the second dataset in a memory location of the cache memory without maintaining logical consistency between that dataset and main memory.Type: ApplicationFiled: December 22, 2009Publication date: June 23, 2011Inventor: Klaus Kursawe
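The split behavior this abstract describes (ordinary data kept consistent with main memory, sensitive data confined to the cache) can be illustrated with a short Python sketch. Class and flag names are assumptions for illustration; the patent describes circuits, not an API:

```python
class SecureCache:
    """Sketch: ordinary stores are kept logically consistent with main
    memory (modeled here as a write-through), while sensitive stores
    live only in the cache and are never propagated to main memory,
    where they could be read out or persist after power-off."""

    def __init__(self, main_memory):
        self.main = main_memory
        self.lines = {}   # the cache's own storage

    def store(self, addr, value, sensitive=False):
        self.lines[addr] = value
        if not sensitive:
            self.main[addr] = value   # maintain consistency for normal data
        # Sensitive data deliberately bypasses main memory entirely.
```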