Cache Status Data Bit Patents (Class 711/144)
  • Patent number: 9213652
    Abstract: Managing data in a computing system comprising one or more cores includes: providing a cache in each of one or more of the cores that includes multiple storage locations; storing data of a first type of multiple types of data in a selected storage location of a first cache of a first core that is selected according to status information associated with the first cache, and updating the status information; and storing data of a second type of the multiple types of data in a storage location within a subset of fewer than all of the storage locations of the first cache and managing the status information to ensure that subsequent data of the second type received by the first core for storage in the first cache is stored in the storage location within the subset.
    Type: Grant
    Filed: September 20, 2010
    Date of Patent: December 15, 2015
    Assignee: Tilera Corporation
    Inventors: Chyi-Chang Miao, Christopher D. Metcalf, Ian Rudolf Bratt, Carl G. Ramey
  • Patent number: 9195395
    Abstract: An apparatus, computer program product, and associated method/processing unit are provided for utilizing a memory subsystem including NAND flash memory and dynamic random access memory. Further included is a first circuit for receiving DDR signals and converting the DDR signals to SATA signals. The first circuit includes embedded dynamic random access memory. Also provided is a second circuit for receiving the SATA signals and converting the SATA signals to NAND flash signals. The second circuit is communicatively coupled to the first circuit via a first memory bus associated with a SATA protocol, the NAND flash memory via a second memory bus associated with a NAND flash protocol, and the dynamic random access memory.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: November 24, 2015
    Assignee: P4TENTS1, LLC
    Inventor: Michael S Smith
  • Patent number: 9189409
    Abstract: Methods and structure are provided for reducing the number of writes to a cache of a storage controller. One exemplary embodiment includes a storage controller that has a non-volatile flash cache memory, a primary memory that is distinct from the cache memory, and a memory manager. The memory manager is able to receive data for storage in the cache memory, to generate a hash key from the received data, and to compare the hash key to hash values for entries in the cache memory. The memory manager can write the received data to the cache memory if the hash key does not match one of the hash values. Also, the memory manager can modify the primary memory instead of writing to the cache if the hash key matches a hash value, in order to reduce the amount of data written to the cache memory.
    Type: Grant
    Filed: February 19, 2013
    Date of Patent: November 17, 2015
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventor: Parag R. Maharana
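The write-reduction idea in patent 9189409 can be sketched briefly. The following Python is a minimal model only; the class name `FlashCache` and the choice of SHA-256 as the hash are assumptions for illustration, not details from the patent.

```python
import hashlib

class FlashCache:
    """Toy model of a controller that skips redundant writes to a flash cache.

    A write goes to the flash cache only when the hash of the incoming data
    does not match the hash of any resident entry; otherwise only the primary
    memory is updated, saving a flash write.
    """

    def __init__(self):
        self.entries = {}        # hash key -> cached data (models the flash cache)
        self.primary = {}        # address -> hash key (models the primary memory)
        self.flash_writes = 0

    @staticmethod
    def _hash(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def write(self, address: int, data: bytes) -> None:
        key = self._hash(data)
        if key not in self.entries:
            # No resident entry holds identical data: write to the flash cache.
            self.entries[key] = data
            self.flash_writes += 1
        # Record the mapping in primary memory rather than issuing a
        # redundant flash write for duplicate data.
        self.primary[address] = key


cache = FlashCache()
cache.write(0x1000, b"payload")
cache.write(0x2000, b"payload")   # duplicate data: no extra flash write
print(cache.flash_writes)         # -> 1
```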
  • Patent number: 9189442
    Abstract: An apparatus and associated method/processing unit are provided for utilizing a memory subsystem including NAND flash memory and dynamic random access memory. Further included is a first circuit for receiving DDR signals and converting the DDR signals to SATA signals. The first circuit includes embedded dynamic random access memory. Also provided is a second circuit for receiving the SATA signals and converting the SATA signals to NAND flash signals. The second circuit is communicatively coupled to the first circuit via a first memory bus associated with a SATA protocol, the NAND flash memory via a second memory bus associated with a NAND flash protocol, and the dynamic random access memory. In operation, data is fetched using a time between an execution of a plurality of threads.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: November 17, 2015
    Assignee: P4TENTS1, LLC
    Inventor: Michael S Smith
  • Patent number: 9182984
    Abstract: A computer implemented instruction is executed. One or more translation table entry locations are specified by the instruction. Based on a local-clearing (LC) control specified by the instruction being a first value, the processor selectively clears, in the translation lookaside buffers (TLBs) of a plurality of the CPUs in a configuration, entries corresponding to the determined translation table entry location. Based on the local-clearing (LC) control being a second value, the processor selectively clears only those entries of the TLB of the CPU executing the instruction that correspond to the determined translation table entry location. A computer program product, computer system and computer implemented method are provided.
    Type: Grant
    Filed: June 15, 2012
    Date of Patent: November 10, 2015
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Dan F. Greiner, Gustav E. Sittmann, III, Cynthia Sittmann
  • Patent number: 9183081
    Abstract: Systems and methods for performing defect detection and data recovery within a memory system are disclosed. A controller of a memory system may receive a command to write data in a memory of the memory system; determine a physical location of the memory that is associated with the data write; write data associated with the data write to the physical location; and store the physical location of the memory that is associated with the data write in a Tag cache. The controller may further identify a data keep cache of a plurality of data keep caches that is associated with the data write based on the physical location of the memory that is associated with the data write; update an XOR sum based on the data of the data write; and store the updated XOR sum in the identified data keep cache.
    Type: Grant
    Filed: March 12, 2013
    Date of Patent: November 10, 2015
    Assignee: SanDisk Technologies Inc.
    Inventors: Abhijeet Manohar, Chris Avila, Jianmin Huang, Daniel Edward Tuers
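The XOR-sum bookkeeping described in patent 9183081 can be illustrated with a toy model. Everything below (function names, the number of data keep caches, the location-to-cache mapping) is an assumption made for illustration, not the patented design.

```python
# Minimal sketch (assumed names) of keeping a running XOR sum per "data keep
# cache" so that a single failed page can later be reconstructed.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

NUM_KEEP_CACHES = 4
keep_cache = [bytes(4)] * NUM_KEEP_CACHES   # running XOR sums (4-byte pages here)
written = {}                                # physical location -> data (the "Tag cache" role)

def write_page(location: int, data: bytes) -> None:
    idx = location % NUM_KEEP_CACHES        # pick the data keep cache for this location
    keep_cache[idx] = xor_bytes(keep_cache[idx], data)
    written[location] = data

def recover_page(bad_location: int) -> bytes:
    """Rebuild one page by XOR-ing its keep cache sum with the surviving pages."""
    idx = bad_location % NUM_KEEP_CACHES
    result = keep_cache[idx]
    for loc, data in written.items():
        if loc != bad_location and loc % NUM_KEEP_CACHES == idx:
            result = xor_bytes(result, data)
    return result

write_page(0, b"\x01\x02\x03\x04")
write_page(4, b"\x10\x20\x30\x40")          # maps to the same keep cache as location 0
assert recover_page(0) == b"\x01\x02\x03\x04"
```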
  • Patent number: 9182914
    Abstract: An apparatus, computer program product, and associated method/processing unit are provided for utilizing a memory subsystem including a first memory of a first memory class, and a second memory of a second memory class communicatively coupled to the first memory. In operation, data is fetched using a time between a plurality of threads.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: November 10, 2015
    Assignee: P4TENTS1, LLC
    Inventor: Michael S Smith
  • Patent number: 9176671
    Abstract: An apparatus and associated method/processing unit are provided for utilizing a memory subsystem including NAND flash memory and dynamic random access memory. Further included is a first circuit for receiving DDR signals and converting the DDR signals to SATA signals. The first circuit includes embedded dynamic random access memory. Also provided is a second circuit for receiving the SATA signals and converting the SATA signals to NAND flash signals. The second circuit is communicatively coupled to the first circuit via a first memory bus associated with a SATA protocol, the NAND flash memory via a second memory bus associated with a NAND flash protocol, and the dynamic random access memory. In operation, data is fetched using a time between an execution of a plurality of threads.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: November 3, 2015
    Assignee: P4TENTS1, LLC
    Inventor: Michael S Smith
  • Patent number: 9170744
    Abstract: A computer program product, apparatus and associated method/processing unit are provided for utilizing a memory subsystem including NAND flash physical memory and DRAM physical memory. Further included is a first buffer for receiving DDR signals and converting the DDR signals to SATA signals. The first buffer includes embedded DRAM physical memory. Also provided is a second buffer for receiving the SATA signals and converting the SATA signals to NAND flash signals. The second buffer is communicatively coupled to the first buffer via a first memory bus associated with a SATA protocol, the NAND flash physical memory via a second memory bus associated with a NAND flash protocol, and the DRAM physical memory.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: October 27, 2015
    Assignee: P4TENTS1, LLC
    Inventor: Michael S Smith
  • Patent number: 9164679
    Abstract: An apparatus, computer program product, and associated method/processing unit are provided for utilizing a memory subsystem including a first memory of a first memory class, and a second memory of a second memory class communicatively coupled to the first memory. In operation, data is fetched using a time between a plurality of threads.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: October 20, 2015
    Assignee: P4TENTS1, LLC
    Inventor: Michael S Smith
  • Patent number: 9164910
    Abstract: A data processing apparatus comprises: at least one processor having a private cache; a shared cache for storing data processed by the processor and by a further device; and coherency control circuitry. The coherency control circuitry is responsive to a write request from the further device to determine if data related to an address targeted by the write request is stored in the private cache and, if it is, to force an eviction of the stored data from the private cache to the shared cache prior to performing the write to the shared cache. The data is stored in the private cache in conjunction with an indicator indicating whether the stored data is consistent with data stored at a corresponding address in a further data store, and the stored data is evicted whether it is indicated as consistent or inconsistent.
    Type: Grant
    Filed: February 21, 2008
    Date of Patent: October 20, 2015
    Assignee: ARM Limited
    Inventors: Nicolas Chaussade, Stephane Eric Sebastien Brochier
  • Patent number: 9165088
    Abstract: According to an example, multi-mode storage may include operating a first array including a first memory and a second array including a second memory in one or more modes of operation. The first memory may be a relatively denser memory compared to the second memory and the second memory may be a relatively faster memory compared to the first memory. The modes of operation may include a first mode of operation where the first array functions as the relatively denser memory compared to the second memory and the second array functions as the relatively faster memory compared to the first memory, a second mode of operation where the second array is operated as an automatic cache of a portion of a dataset, and a third mode of operation where a cache-tag functionality used to support the second mode of operation is instead used to provide a CAM.
    Type: Grant
    Filed: July 8, 2013
    Date of Patent: October 20, 2015
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Robert J. Brooks
  • Patent number: 9158546
    Abstract: A computer program product, apparatus and associated method/processing unit are provided for utilizing a physical memory system including a first physical memory of a first physical memory class, and a second physical memory of a second physical memory class communicatively coupled to the first physical memory. In operation, one or more pages are fetched from the first physical memory using a time between an execution of a plurality of threads associated with the second physical memory.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: October 13, 2015
    Assignee: P4TENTS1, LLC
    Inventor: Michael S Smith
  • Patent number: 9158689
    Abstract: Technologies described herein generally relate to aggregation of cache eviction notifications to a directory. Some example technologies may be utilized to update an aggregation table to reflect evictions of a plurality of blocks from a plurality of block addresses of at least one cache memory. An aggregate message can be generated, where the message specifies the evictions of the plurality of blocks as reflected in the aggregation table. The aggregate message can be sent to the directory. The directory can parse the aggregate message and update a plurality of directory entries to reflect the evictions from the cache memory as specified in the aggregate message.
    Type: Grant
    Filed: February 11, 2013
    Date of Patent: October 13, 2015
    Assignee: Empire Technology Development LLC
    Inventor: Yan Solihin
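A rough model of the eviction aggregation in patent 9158689: evictions accumulate in an aggregation table and are delivered to the directory as one aggregate message. The class names, the batch size, and the single-node directory update are assumptions for illustration.

```python
from collections import defaultdict

class Directory:
    def __init__(self):
        self.sharers = defaultdict(set)      # block address -> set of caching nodes

    def handle_aggregate(self, evicted_addresses, node_id=0):
        # Parse the aggregate message and drop the node from each entry.
        for addr in evicted_addresses:
            self.sharers[addr].discard(node_id)

class EvictionAggregator:
    def __init__(self, directory, batch_size=8):
        self.directory = directory
        self.batch_size = batch_size
        self.table = []                      # aggregation table of evicted block addresses

    def record_eviction(self, block_address: int) -> None:
        self.table.append(block_address)
        if len(self.table) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if self.table:
            self.directory.handle_aggregate(list(self.table))  # one aggregate message
            self.table.clear()


directory = Directory()
agg = EvictionAggregator(directory, batch_size=2)
agg.record_eviction(0x40)
agg.record_eviction(0x80)    # second eviction triggers a single aggregate message
```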
  • Patent number: 9122588
    Abstract: Some implementations provide a method for managing data in a storage system that includes a persistent storage device and a non-volatile random access memory (NVRAM) cache device. The method includes: accessing a direct mapping between a logical address associated with data stored on the persistent storage device and a physical address on the NVRAM cache device; receiving, from a host computing device coupled to the storage system, a request to access a particular unit of data stored on the persistent storage device; using the direct mapping as a basis between the logical address associated with the data stored on the persistent storage device and the physical address on the NVRAM cache device to determine whether the particular unit of data being requested is present on the NVRAM cache device.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: September 1, 2015
    Assignee: Virident Systems Inc.
    Inventors: Shibabrata Mondal, Vijay Karamcheti, Ankur Arora, Ajit Yagaty
  • Patent number: 9081619
    Abstract: A method of provisioning a Web hosting resource includes providing a cloud service. A request for a Web hosting resource is received by the cloud service, wherein the request is provided by a client. The cloud service identifies a Web host based on the received request for a Web hosting resource. The cloud service sends a request to the Web host to provision a first Web hosting resource for use by the client.
    Type: Grant
    Filed: June 30, 2011
    Date of Patent: July 14, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Muhammad Bilal Aslam, Crystal L. Hoyer, Sayed Ibrahim Hashimi, Vishal R. Joshi, Omar Khan, Jonathan Kevin Wall, Bill Staples, Bradley John Bartz, Younus Aftab
  • Patent number: 9075928
    Abstract: A coherence maintenance address queue tracks each memory access from receipt until the memory reports the access complete. The address of each new access is compared against the address of all entries in the queue. This check is made when the access is ready to transmit to the memory. If there is no address match, then the current access does not conflict with any pending access. If there is an address match, the current access is stalled. The multi-core shared memory controller would then typically proceed to another access waiting for a slot to the endpoint memory. Stored addresses in the coherence maintenance address queue are retired when the endpoint memory reports completion of the operation. At this point the access is no longer a hazard to following operations.
    Type: Grant
    Filed: October 22, 2013
    Date of Patent: July 7, 2015
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Matthew D Pierson, Kai Chirca
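The address-matching stall in patent 9075928 reduces to a small set of pending addresses. The sketch below uses assumed names and omits the multi-core arbitration details.

```python
class CoherenceAddressQueue:
    def __init__(self):
        self.pending = set()     # addresses of accesses sent to memory, not yet complete

    def try_issue(self, address: int) -> bool:
        """Return True if the access may be sent to memory, False if it must stall."""
        if address in self.pending:
            return False         # address match: hazard with a pending access
        self.pending.add(address)
        return True

    def complete(self, address: int) -> None:
        """Endpoint memory reported completion: retire the queue entry."""
        self.pending.discard(address)


q = CoherenceAddressQueue()
assert q.try_issue(0x80)        # first access proceeds
assert not q.try_issue(0x80)    # overlapping access stalls
q.complete(0x80)
assert q.try_issue(0x80)        # no longer a hazard once retired
```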
  • Patent number: 9069682
    Abstract: A system and method for providing faster disk recovery are provided by bypassing the file system cache that temporarily holds a subset of the metadata objects of the file system and instead using a persistent fast storage, accessible at deterministic speeds, to hold all the metadata objects of the file system. The system speeds recovery by writing updated metadata objects to the persistent disk storage only when file system recovery is complete.
    Type: Grant
    Filed: June 29, 2012
    Date of Patent: June 30, 2015
    Assignee: EMC Corporation
    Inventor: Sairam Veeraswamy
  • Publication number: 20150149733
    Abstract: Method and system for supporting speculative modification in a data cache are provided and described. In one embodiment, a speculative cache buffer includes a plurality of cache lines and a plurality of state indicators. At least one of the cache lines is operable to receive an evicted cache line from a cache. The at least one of the cache lines is operable to return the evicted cache line to the cache if the cache requests the evicted cache line. Further, the plurality of state indicators is operable to indicate a state of a corresponding cache line of the cache lines.
    Type: Application
    Filed: January 14, 2011
    Publication date: May 28, 2015
    Inventors: Guillermo Rozas, Alexander Klaiber, David Dunn, Paul Serris, Lacky Shah
  • Patent number: 9043561
    Abstract: To provide a storage device with low power consumption. The storage device includes a plurality of cache lines. Each of the cache lines includes a data field which stores cache data; a tag which stores address data corresponding to the cache data; and a valid bit which stores valid data indicating whether the cache data stored in the data field is valid or invalid. Whether power is supplied to the tag and the data field in each of the cache lines is determined based on the valid data stored in the valid bit.
    Type: Grant
    Filed: April 30, 2013
    Date of Patent: May 26, 2015
    Assignee: Semiconductor Energy Laboratory Co., Ltd.
    Inventor: Masashi Fujita
  • Patent number: 9043507
    Abstract: An information processing system includes a CPU that is connected to a bus; a device that is connected to the bus; a memory that is accessed by the CPU or the device; and a power mode control circuit that sets a power consumption mode. The power mode control circuit sets the power consumption mode based on first information that indicates a cache hit or a cache miss of a cache memory in the CPU and second information that indicates an activated state or a non-activated state of the device.
    Type: Grant
    Filed: May 9, 2013
    Date of Patent: May 26, 2015
    Assignee: FUJITSU LIMITED
    Inventors: Koichiro Yamashita, Hiromasa Yamauchi, Takahisa Suzuki, Koji Kurihara, Fumihiko Hayakawa
  • Patent number: 9037810
    Abstract: Some of the embodiments of the present disclosure provide a method comprising receiving a data packet, and storing the received data packet in a memory; generating a descriptor for the data packet, the descriptor including information for fetching at least a portion of the data packet from the memory; and in advance of a processing core requesting the at least a portion of the data packet to execute a processing operation on the at least a portion of the data packet, fetching the at least a portion of the data packet to a cache based at least in part on information in the descriptor. Other embodiments are also described and claimed.
    Type: Grant
    Filed: March 1, 2011
    Date of Patent: May 19, 2015
    Assignee: Marvell Israel (M.I.S.L.) Ltd.
    Inventors: Adi Habusha, Alon Pais, Rabeeh Khoury
  • Publication number: 20150134917
    Abstract: A system includes reception of a client query identifying data stored by a remote data source, generation of a remote query of the remote data source based on the client query, determination of a cache name based on the remote query, determination of whether the remote data source comprises a cache associated with the cache name and, if it is determined that the remote data source comprises a valid cache associated with the cache name, instruction of the remote data source to read the data of the cache, and reception of the data of the cache from the remote data source.
    Type: Application
    Filed: November 14, 2013
    Publication date: May 14, 2015
    Inventors: Shahul Hameed P, Shasha Luo, Sinisa Knezevic, Engin Dogusoy, Petya Nikolova
  • Patent number: 9032157
    Abstract: Disclosed is a computer system (100) comprising a processor unit (110) adapted to run a virtual machine in a first operating mode; a cache (120) accessible to the processor unit, said cache comprising a plurality of cache rows (1210), each cache row comprising a cache line (1214) and an image modification flag (1217) indicating a modification of said cache line caused by the running of the virtual machine; and a memory (140) accessible to the cache controller for storing an image of said virtual machine; wherein the processor unit comprises a replication manager adapted to define a log (200) in the memory prior to running the virtual machine in said first operating mode; and said cache further includes a cache controller (122) adapted to periodically check said image modification flags; write only the memory address of the flagged cache lines in the defined log and subsequently clear the image modification flags.
    Type: Grant
    Filed: December 11, 2012
    Date of Patent: May 12, 2015
    Assignee: International Business Machines Corporation
    Inventors: Sanjeev Ghai, Guy L. Guthrie, Geraint North, William J. Starke, Phillip G. Williams
  • Patent number: 9021222
    Abstract: A method is used for managing incremental cache backup and restore. I/O operations are quiesced at a cache module. A first snapshot of a storage object and a second snapshot of an SSD cache object are taken. The I/O operations at the cache module are unquiesced. A single backup image comprising the first snapshot and the second snapshot is created.
    Type: Grant
    Filed: March 28, 2012
    Date of Patent: April 28, 2015
    Assignee: LenovoEMC Limited
    Inventors: Vamsikrishna Sadhu, Brian R. Gruttadauria
  • Patent number: 9015419
    Abstract: Embodiments relate to a transactional read footprint after a cache line eviction. An aspect includes executing one or more read instructions in an active transaction. A cross invalidate (XI) request for a target cache line is received, and it is determined if the target cache line is part of a congruence class in a local cache. It is further determined whether an extension flag associated with the congruence class is set. The extension flag is used to indicate that cache lines of the congruence class associated with the active transaction have been replaced based only on being least recently used and that the target cache line is not in the cache. Execution of the active transaction continues based on determining that the extension flag is not set. Execution of the active transaction is aborted based on determining that the extension flag is set.
    Type: Grant
    Filed: June 15, 2012
    Date of Patent: April 21, 2015
    Assignee: International Business Machines Corporation
    Inventors: Khary J. Alexander, Jonathan T. Hsieh, Christian Jacobi
  • Patent number: 9015416
    Abstract: Some embodiments provide systems and methods for validating cached content based on changes in the content instead of an expiration interval. One method involves caching content and a first checksum in response to a first request for that content. The caching produces a cached instance of the content representative of a form of the content at the time of caching. The first checksum identifies the cached instance. In response to receiving a second request for the content, the method submits a request for a second checksum representing a current instance of the content and a request for the current instance. Upon receiving the second checksum, the method serves the cached instance of the content when the first checksum matches the second checksum and serves the current instance of the content upon completion of the transfer of the current instance when the first checksum does not match the second checksum.
    Type: Grant
    Filed: July 21, 2014
    Date of Patent: April 21, 2015
    Assignee: Edgecast Networks, Inc.
    Inventor: Andrew Lientz
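The checksum-based validation in patent 9015416 can be sketched as follows. The origin-fetch helpers and the use of MD5 are placeholders for illustration, not part of any real CDN API.

```python
import hashlib

cache = {}    # url -> (checksum, content)
ORIGIN = {"/a": b"v1"}          # stand-in for the origin server's content

def origin_fetch(url: str) -> bytes:
    return ORIGIN[url]           # placeholder for a real origin request

def origin_checksum(url: str) -> str:
    return hashlib.md5(origin_fetch(url)).hexdigest()

def serve(url: str) -> bytes:
    if url not in cache:
        content = origin_fetch(url)
        cache[url] = (hashlib.md5(content).hexdigest(), content)
        return content
    cached_sum, cached_content = cache[url]
    if origin_checksum(url) == cached_sum:
        return cached_content            # cached instance is still current
    content = origin_fetch(url)          # content changed: refresh the cache
    cache[url] = (hashlib.md5(content).hexdigest(), content)
    return content

print(serve("/a"))        # b'v1' (cached on first request)
ORIGIN["/a"] = b"v2"
print(serve("/a"))        # b'v2' (checksum mismatch forces a refresh)
```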
  • Patent number: 9009363
    Abstract: A method for indicating an overload condition of a data storage system, comprises the steps of: defining one or more load indexes, wherein each of the load indexes has an overload threshold; and if one of the load indexes has met its respective overload threshold, providing an indicator of the overload condition of the storage system, else, monitoring the load indexes.
    Type: Grant
    Filed: June 14, 2010
    Date of Patent: April 14, 2015
    Assignee: Rasilient Systems, Inc.
    Inventors: Yee-Hsiang Sean Chang, Yiqiang Ding, John S. Hoch
  • Patent number: 9009426
    Abstract: A method of storing data on a data storage device having a cache, includes receiving, by the data storage device, a write command indicating a data portion and a range of addresses on the data storage device, the write command instructing the data storage device to write the data portion to each address in the range of addresses; and storing, in the cache, indicia that each address in the range of addresses comprises the data portion.
    Type: Grant
    Filed: December 30, 2010
    Date of Patent: April 14, 2015
    Assignee: EMC Corporation
    Inventors: Walter A. O'Brien III, David W. Harvey
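The range "indicia" idea in patent 9009426 can be modeled with a small range table: one cache entry records that every address in a range holds the same data portion. Names and layout below are illustrative only.

```python
class WriteSameCache:
    def __init__(self):
        self.ranges = []         # list of (start, end, data_portion) indicia
        self.blocks = {}         # individually written blocks

    def write_same(self, start: int, end: int, data_portion: bytes) -> None:
        # One indicium covers the whole address range of the write-same command.
        self.ranges.append((start, end, data_portion))

    def read(self, address: int) -> bytes:
        if address in self.blocks:
            return self.blocks[address]
        for start, end, data in reversed(self.ranges):   # most recent indicium wins
            if start <= address <= end:
                return data
        return None


c = WriteSameCache()
c.write_same(0, 1023, b"\x00" * 512)     # one entry stands in for 1024 block writes
print(c.read(512))                        # the zero block, without materializing every block
```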
  • Publication number: 20150100737
    Abstract: According to at least one example embodiment, a method and corresponding apparatus for conditionally storing data include initiating an atomic sequence by executing, by a core processor, an instruction/operation designed to initiate an atomic sequence. Executing the instruction designed to initiate the atomic sequence includes loading content associated with a memory location into a first cache memory, and maintaining an indication of the memory location and a copy of the corresponding content loaded. A conditional storing operation is then performed, the conditional storing operation includes a compare-and-swap operation, executed by a controller associated with a second cache memory, based on the maintained copy of the content and the indication of the memory location.
    Type: Application
    Filed: October 3, 2013
    Publication date: April 9, 2015
    Applicant: Cavium, Inc.
    Inventors: Richard E. Kessler, David H. Asher, Michael Sean Bertone, Shubhendu S. Mukherjee, Wilson P. Snyder, II, John M. Perveiler, Christopher J. Comis
  • Patent number: 9003130
    Abstract: A data processing device is provided that facilitates cache coherence policies. In one embodiment, a data processing device utilizes invalidation tags in connection with a cache that is associated with a processing engine. In some embodiments, the cache is configured to store a plurality of cache entries where each cache entry includes a cache line configured to store data and a corresponding cache tag configured to store address information associated with data stored in the cache line. Such address information includes invalidation flags with respect to addresses stored in the cache tags. Each cache tag is associated with an invalidation tag configured to store information related to invalidation commands of addresses stored in the cache tag. In such embodiment, the cache is configured to set invalidation flags of cache tags based upon information stored in respective invalidation tags.
    Type: Grant
    Filed: December 19, 2012
    Date of Patent: April 7, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: James O'Connor, Bradford M. Beckmann
  • Patent number: 9003129
    Abstract: A method, performed at a first storage processor (SP) connected to a mirroring second SP, includes (a) receiving a write command at the first SP from a host directed to a particular address of a data storage array, (b) identifying a reference in a first cache that is uniquely associated with the particular address, the reference having a token count field, (c) determining whether the reference is synchronized with a corresponding reference in a second cache, and (d) if the reference is synchronized with the corresponding reference, then (1) performing a cache write operation on a cache page pointed to by the reference if the reference stores a maximum token count value and (2) otherwise, sending a token request message from the first SP to the second SP over a cache mirroring bus to request a token from the second SP prior to performing the cache write operation.
    Type: Grant
    Filed: March 30, 2012
    Date of Patent: April 7, 2015
    Assignee: EMC Corporation
    Inventors: David W. Harvey, Henry Austin Spang, IV
  • Patent number: 9003123
    Abstract: An instruction cache stores cacheable instructions for access by a processing circuitry, the instruction cache having a data storage comprising a plurality of cache lines and a tag storage comprising a plurality of tag entries, each cache line for storing instruction data specifying a plurality of cacheable instructions, and each tag entry for storing an address identifier for the instruction data stored in an associated cache line. The instruction cache includes valid flag storage for identifying whether each cache line is valid. Instruction cache control circuitry is arranged to store within a selected cache line of the data storage the instruction data for a plurality of cacheable instructions as retrieved from memory, to store within the tag entry associated with that selected cache line the address identifier for that stored instruction data, and to identify that selected cache line as valid within the valid flag storage.
    Type: Grant
    Filed: June 26, 2012
    Date of Patent: April 7, 2015
    Assignee: ARM Limited
    Inventors: Alex James Waugh, Matthew Lee Winrow
  • Publication number: 20150089158
    Abstract: A method of operation of a navigation system includes: identifying a geocache at a geocache-location; determining an external association for representing the geocache associated with an entity, an event, or a combination thereof; determining an external status for identifying the external status of the entity, the event, or a combination thereof; and generating a cache status based on the external status for displaying on a device.
    Type: Application
    Filed: September 24, 2013
    Publication date: March 26, 2015
    Applicant: Telenav, Inc.
    Inventor: Gregory Stewart Aist
  • Publication number: 20150089159
    Abstract: Cache lines in a multi-processor computing environment are configurable with a coherency mode. Cache lines in full-line coherency mode are operated or managed with full-line granularity. Cache lines in sub-line coherency mode are operated or managed as sub-cache line portions of a full cache line. Each cache is associated with a directory having a number of directory entries and with a side table having a smaller number of entries. The directory entry for a cache line associates the cache line with a tag and a set of full-line descriptive bits. Creating a side table entry for the cache line places the cache line in sub-line coherency mode. The side table entry associates each of the sub-cache line portions of the cache line with a set of sub-line descriptive bits. Removing the side table entry may return the cache line to full-line coherency mode.
    Type: Application
    Filed: September 26, 2013
    Publication date: March 26, 2015
    Applicant: International Business Machines Corporation
    Inventors: Fadi Y. Busaba, Harold W. Cain, III, Michael K. Gschwind, Maged M. Michael, Valentina Salapura, Eric M. Schwarz, Chung-Lung K. Shum
  • Patent number: 8990512
    Abstract: A processor includes a core to execute instructions and a cache memory coupled to the core and having a plurality of entries. Each entry of the cache memory may include a data storage including a plurality of data storage portions, each data storage portion to store a corresponding data portion. Each entry may also include a metadata storage to store a plurality of portion modification indicators, each portion modification indicator corresponding to one of the data storage portions. Each portion modification indicator is to indicate whether the data portion stored in the corresponding data storage portion has been modified, independently of cache coherency state information of the entry. Other embodiments are described and claimed.
    Type: Grant
    Filed: October 31, 2012
    Date of Patent: March 24, 2015
    Assignee: Intel Corporation
    Inventors: Stanislav Shwartsman, Raanan Sade, Larisa Novakovsky, Arijit Biswas
  • Patent number: 8990506
    Abstract: In one embodiment, the present invention includes a cache memory including cache lines that each have a tag field including a state portion to store a cache coherency state of data stored in the line and a weight portion to store a weight corresponding to a relative importance of the data. In various implementations, the weight can be based on the cache coherency state and a recency of usage of the data. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 16, 2009
    Date of Patent: March 24, 2015
    Assignee: Intel Corporation
    Inventors: Naveen Cherukuri, Dennis W. Brzezinski, Ioannis T. Schoinas, Anahita Shayesteh, Akhilesh Kumar, Mani Azimi
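The state-plus-recency weighting in patent 8990506 might look roughly like the sketch below. The numeric weights and the recency penalty are invented for illustration and are not taken from the patent.

```python
# Assumed weights: modified lines are costlier to lose than shared ones.
STATE_WEIGHT = {"M": 3, "E": 2, "S": 1, "I": 0}

class WeightedCacheSet:
    def __init__(self, ways=4):
        self.ways = ways
        self.lines = {}          # tag -> {"state": str, "last_use": int}
        self.clock = 0

    def touch(self, tag: str, state: str) -> None:
        self.clock += 1
        if tag not in self.lines and len(self.lines) >= self.ways:
            self.evict()
        self.lines[tag] = {"state": state, "last_use": self.clock}

    def weight(self, tag: str) -> float:
        line = self.lines[tag]
        recency = self.clock - line["last_use"]      # larger means older
        return STATE_WEIGHT[line["state"]] - 0.1 * recency

    def evict(self) -> None:
        victim = min(self.lines, key=self.weight)    # lowest combined weight loses
        del self.lines[victim]


s = WeightedCacheSet(ways=2)
s.touch("A", "M"); s.touch("B", "S"); s.touch("C", "S")   # evicts B (lowest weight)
print(list(s.lines))                                       # ['A', 'C']
```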
  • Patent number: 8977821
    Abstract: A method to eliminate the delay of multiple overlapping block invalidate operations in a multi-CPU environment by overlapping the block invalidate operation with normal CPU accesses, thus making the delay transparent. The cache controller performing the block invalidate operation merges multiple overlapping requests into a parallel stream to eliminate execution delays. Cache operations other than block invalidate, such as block write back or block write back invalidate, may also be merged into the execution stream.
    Type: Grant
    Filed: October 25, 2012
    Date of Patent: March 10, 2015
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Raguram Damodaran
  • Publication number: 20150067246
    Abstract: An apparatus for processing cache requests in a computing system is disclosed. The apparatus may include a plurality of state memories, a plurality of tag memories, and a control circuit. Each of the state memories may be configured to store coherency state information for a cache memory of a respective plurality of coherent agents. Each of the tag memories may be configured to store duplicate tag information for a cache memory of the respective plurality of coherent agents. The control circuit may be configured to receive a tag address, access tag information in each of the tag memories in parallel dependent upon the received tag address, determine, for each cache memory, new coherency state information for a cache entry corresponding to the received tag address, and store the new coherency state information for each of the cache memories into a respective one of the plurality of state memories.
    Type: Application
    Filed: August 29, 2013
    Publication date: March 5, 2015
    Applicant: Apple Inc
    Inventors: Muditha Kanchana, Odutola O. Ewedemi
  • Patent number: 8959289
    Abstract: A data processing system includes a processor core supported by upper and lower level caches. In response to executing a deallocate instruction in the processor core, a deallocation request is sent from the processor core to the lower level cache, the deallocation request specifying a target address associated with a target cache line. In response to receipt of the deallocation request at the lower level cache, a determination is made if the target address hits in the lower level cache. In response to determining that the target address hits in the lower level cache, the target cache line is retained in a data array of the lower level cache and a replacement order field in a directory of the lower level cache is updated such that the target cache line is more likely to be evicted from the lower level cache in response to a subsequent cache miss.
    Type: Grant
    Filed: October 19, 2012
    Date of Patent: February 17, 2015
    Assignee: International Business Machines Corporation
    Inventors: Sanjeev Ghai, Guy L. Guthrie, William J. Starke, Jeff A. Stuecheli, Derek E. Williams, Phillip G. Williams
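The deallocation hint in patent 8959289 (and the related 8930629) amounts to demoting a line in the replacement order without invalidating it. A minimal sketch, assuming an LRU-ordered congruence class; the names are illustrative:

```python
from collections import OrderedDict

class CongruenceClass:
    def __init__(self, ways=4):
        self.ways = ways
        self.lines = OrderedDict()           # tag -> data, ordered with MRU last

    def access(self, tag, data=None):
        if tag in self.lines:
            self.lines.move_to_end(tag)      # hit: promote to MRU
        else:
            if len(self.lines) >= self.ways:
                self.lines.popitem(last=False)   # miss: evict the LRU line
            self.lines[tag] = data

    def deallocate_hint(self, tag):
        # The line is retained, but moved to the LRU position so it becomes
        # the preferred victim on the next miss in this congruence class.
        if tag in self.lines:
            self.lines.move_to_end(tag, last=False)


cc = CongruenceClass(ways=2)
cc.access("A"); cc.access("B")
cc.deallocate_hint("B")          # B is retained but is now the preferred victim
cc.access("C")                   # subsequent miss evicts B rather than A
print(list(cc.lines))            # ['A', 'C']
```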
  • Publication number: 20150032973
    Abstract: In one embodiment, a method for detecting false sharing includes running code on a plurality of cores, where the code includes instrumentation and tracking cache invalidations in the code while running the code to produce tracked invalidations in accordance with the instrumentation, where tracking the cache invalidations includes tracking cache accesses to a plurality of cache lines by a plurality of tasks. The method also includes reporting false sharing in accordance with the tracked invalidations to produce a false sharing report.
    Type: Application
    Filed: July 18, 2014
    Publication date: January 29, 2015
    Inventors: Tongping Liu, Chen Tian, Ziang Hu
  • Patent number: 8943272
    Abstract: According to one aspect of the present disclosure, a method and technique for variable cache line size management is disclosed. The method includes: determining whether an eviction of a cache line from an upper level sectored cache to an unsectored lower level cache is to be performed, wherein the upper level cache includes a plurality of sub-sectors, each sub-sector having a cache line size corresponding to a cache line size of the lower level cache; responsive to determining that an eviction is to be performed, identifying referenced sub-sectors of the cache line to be evicted; invalidating unreferenced sub-sectors of the cache line to be evicted; and storing the referenced sub-sectors in the lower level cache.
    Type: Grant
    Filed: April 20, 2012
    Date of Patent: January 27, 2015
    Assignee: International Business Machines Corporation
    Inventors: Robert H. Bell, Jr., Wen-Tzer T. Chen, Diane G. Flemming, Hong L. Hua, William A. Maron, Mysore S. Srinivas
  • Patent number: 8938587
    Abstract: A coherent attached processor proxy (CAPP) that participates in coherence communication in a primary coherent system on behalf of an attached processor external to the primary coherent system tracks delivery of data to destinations in the primary coherent system via one or more entries in a data structure. Each of the one or more entries specifies with a destination tag a destination in the primary coherent system to which data is to be delivered from the attached processor. In response to initiation of recovery operations for the CAPP, the CAPP performs data recovery operations, including transmitting, to at least one destination indicated by the destination tag of one or more entries, an indication of a data error in data to be delivered to that destination from the attached processor.
    Type: Grant
    Filed: January 11, 2013
    Date of Patent: January 20, 2015
    Assignee: International Business Machines Corporation
    Inventors: Bartholomew Blaner, Kenneth A. Lauricella, Joseph G. McDonald, Michael S. Siegel, Jeff A. Stuecheli
  • Patent number: 8935478
    Abstract: According to one aspect of the present disclosure, a system and technique for variable cache line size management is disclosed. The system includes a processor and a cache hierarchy, where the cache hierarchy includes a sectored upper level cache and an unsectored lower level cache, and wherein the upper level cache includes a plurality of sub-sectors, each sub-sector having a cache line size corresponding to a cache line size of the lower level cache. The system also includes logic executable to, responsive to determining that a cache line from the upper level cache is to be evicted to the lower level cache: identify referenced sub-sectors of the cache line to be evicted; invalidate unreferenced sub-sectors of the cache line to be evicted; and store the referenced sub-sectors in the lower level cache.
    Type: Grant
    Filed: November 1, 2011
    Date of Patent: January 13, 2015
    Assignee: International Business Machines Corporation
    Inventors: Robert H. Bell, Jr., Wen-Tzer T. Chen, Diane G. Flemming, Hong L. Hua, William A. Maron, Mysore S. Srinivas
  • Patent number: 8930619
    Abstract: A method for destaging write data from a storage controller to storage devices is provided. The method includes determining that a cache element should be transferred from a write cache of the storage controller to the storage devices, calculating that a dirty watermark is above a dirty watermark maximum value, identifying a first cache element to destage from the write cache to the storage devices, transferring a first data container including the first cache element to the storage devices, and incrementing an active destage count. The method also includes repeating determining, calculating, identifying, transferring, and incrementing if the active destage count is less than an active destage count maximum value. The active destage count is a current number of write requests issued to a virtual disk that have not yet been completed, and the virtual disk is a RAID group comprising one or more specific storage devices.
    Type: Grant
    Filed: August 21, 2014
    Date of Patent: January 6, 2015
    Assignee: Dot Hill Systems Corporation
    Inventors: Michael David Barrell, Zachary David Traut
  • Patent number: 8930518
    Abstract: An application server of a server cluster may store a payload of a write request in a local cache and thereafter serve read requests based on payloads in the local cache if the corresponding data is present when such read requests are received. The payloads are however later propagated to respective data stores at a later suitable time. Each application server in the server cluster retrieves data from the data stores if the required payload is unavailable in the respective local cache. According to another aspect, an application server signals to other application servers of the server cluster if a required payload is unavailable in the local cache. In response, the application server having the specified payload (in local cache) propagates the payload with a higher priority to the corresponding data store, such that the payload is available to the requesting application server.
    Type: Grant
    Filed: October 3, 2012
    Date of Patent: January 6, 2015
    Assignee: Oracle International Corporation
    Inventors: Krishnamoorthy Dharmalingam, Sankar Mani
  • Patent number: 8930629
    Abstract: In response to executing a deallocate instruction, a deallocation request specifying a target address of a target cache line is sent from a processor core to a lower level cache. In response, a determination is made if the target address hits in the lower level cache. If so, the target cache line is retained in a data array of the lower level cache, and a replacement order field of the lower level cache is updated such that the target cache line is more likely to be evicted in response to a subsequent cache miss in a congruence class including the target cache line. In response to the subsequent cache miss, the target cache line is cast out to the lower level cache with an indication that the target cache line was a target of a previous deallocation request of the processor core.
    Type: Grant
    Filed: October 19, 2012
    Date of Patent: January 6, 2015
    Assignee: International Business Machines Corporation
    Inventors: Sanjeev Ghai, Guy L. Guthrie, William J. Starke, Jeff A. Stuecheli, Derek E. Williams, Phillip G. Williams
  • Patent number: 8930760
    Abstract: A mechanism is provided for effectively validating cache coherency within a processor. For each node in a set of nodes, responsive to the node being a controlling node, at least one action is performed on each controlled node mapped to the controlling node. After performing the at least one action on each controlled node mapped to the controlling node, or responsive to the node failing to be a controlling node, a self-modifying branch test pattern is executed based on the selected execution pattern in the condition register through the set of nodes. Responsive to the self-modifying branch test pattern ending, values output from the execution unit during execution of the self-modifying branch test pattern are compared to a set of expected results. Responsive to a match of the comparison for the execution patterns in the set of execution patterns, the execution unit is validated.
    Type: Grant
    Filed: December 17, 2012
    Date of Patent: January 6, 2015
    Assignee: International Business Machines Corporation
    Inventors: Sangram Alapati, Prathiba Kumar, Varun Mallikarjunan, Satish K. Sadasivam
  • Patent number: 8924648
    Abstract: A method for caching attribute data for matching attributes with physical addresses. The method includes storing a plurality of attribute entries in a memory, wherein the memory is configured to provide at least one attribute entry when accessed with a physical address, and wherein the attribute entry provided describes characteristics of the physical address.
    Type: Grant
    Filed: September 20, 2013
    Date of Patent: December 30, 2014
    Inventors: H. Peter Anvin, Guillermo J. Rozas, Alexander Klaiber, John P. Banning
  • Patent number: 8924653
    Abstract: A method for providing a transactional memory is described. A cache coherency protocol is enforced upon a cache memory including cache lines, wherein each line is in one of a modified state, an owned state, an exclusive state, a shared state, and an invalid state. Upon initiation of a transaction accessing at least one of the cache lines, each of the lines is ensured to be either shared or invalid. During the transaction, in response to an external request for any cache line in the modified, owned, or exclusive state, each line in the modified or owned state is invalidated without writing the line to a main memory. Also, each exclusive line is demoted to either the shared or invalid state, and the transaction is aborted.
    Type: Grant
    Filed: October 31, 2006
    Date of Patent: December 30, 2014
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Blaine D. Gaither, Judson E. Veazey