Cache Status Data Bit Patents (Class 711/144)
  • Patent number: 9558147
    Abstract: A system and method for monitoring a plurality of data streams is disclosed. At a first processing stage, a first memory area is associated with an element of a plurality of data streams. Upon arrival of a frame associated with one of the plurality of data streams, a second memory area is associated with the arrived frame based on the element. In the second memory area, data indicating the arrival of the frame is recorded, and on successful recording, the frame is forwarded to a second processing stage. An independent process executes at a preselected time interval to erase the contents of the first memory area.
    Type: Grant
    Filed: June 12, 2014
    Date of Patent: January 31, 2017
    Assignee: NXP B.V.
    Inventors: Nicola Concer, Sujan Pandey, Hubertus Gerardus Hendrikus Vermeulen
  • Patent number: 9501418
    Abstract: The present disclosure relates to caches, methods, and systems for using an invalidation data area. The cache can include a journal configured for tracking data blocks, and an invalidation data area configured for tracking invalidated data blocks associated with the data blocks tracked in the journal. The invalidation data area can be on a separate cache region from the journal. A method for invalidating a cache block can include determining a journal block tracking a memory address associated with a received write operation. The method can also include determining a mapped journal block based on the journal block and on an invalidation record. The method can also include determining whether write operations are outstanding. If so, the method can include aggregating the outstanding write operations and performing a single write operation based on the aggregated write operations.
    Type: Grant
    Filed: June 26, 2014
    Date of Patent: November 22, 2016
    Assignee: HGST Netherlands B.V.
    Inventor: Pulkit Misra
  • Patent number: 9477514
    Abstract: A TRANSACTION BEGIN instruction and a TRANSACTION END instruction are provided. The TRANSACTION BEGIN instruction causes either a constrained or nonconstrained transaction to be initiated, depending on a field of the instruction. A constrained transaction has one or more restrictions associated therewith, while a nonconstrained transaction is not limited in the manner of a constrained transaction. The TRANSACTION END instruction ends the transaction started by the TRANSACTION BEGIN instruction.
    Type: Grant
    Filed: March 7, 2013
    Date of Patent: October 25, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Dan F. Greiner, Christian Jacobi, Marcel Mitran, Timothy J. Slegel
  • Patent number: 9471446
    Abstract: A system includes a plurality of storage devices and an information processing device including a cache memory. The information processing device is configured to access the plurality of storage devices. When a failure has occurred in a first storage device included in the plurality of storage devices, the information processing device performs a procedure including: specifying a second storage device in which no failure has occurred, among the plurality of storage devices; creating an invisible file including a cache that has been stored in the cache memory and is to be stored in the first storage device; and storing the created invisible file in the second storage device. When the failure of the first storage device is eliminated, the information processing device stores the cache included in the invisible file, held in the second storage device, in the first storage device.
    Type: Grant
    Filed: October 6, 2014
    Date of Patent: October 18, 2016
    Assignee: FUJITSU LIMITED
    Inventor: Yoshihisa Chujo
  • Patent number: 9471503
    Abstract: A processor of a multi-processor computer system having a cache subsystem, the processor having exclusive ownership of a cache line, executes a demote instruction to cause its exclusively owned cache line to become shared or read-only in that processor's cache.
    Type: Grant
    Filed: February 11, 2016
    Date of Patent: October 18, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Chung-Lung Kevin Shum, Kathryn Marie Jackson, Charles Franklin Webb
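    A minimal Python sketch of the demote behavior in the abstract above, assuming a MESI-style line state; the CacheLine class, the State enum, and the demote() method are illustrative names rather than anything specified by the patent:
      from enum import Enum

      class State(Enum):
          MODIFIED = "M"
          EXCLUSIVE = "E"
          SHARED = "S"
          INVALID = "I"

      class CacheLine:
          def __init__(self, addr, state=State.INVALID):
              self.addr = addr
              self.state = state

          def demote(self):
              """Voluntarily give up exclusive ownership: the line stays resident
              but becomes read-only/shared in this processor's cache."""
              if self.state in (State.MODIFIED, State.EXCLUSIVE):
                  # a real implementation would write back dirty data here
                  self.state = State.SHARED

      line = CacheLine(addr=0x1000, state=State.EXCLUSIVE)
      line.demote()
      assert line.state is State.SHARED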
  • Patent number: 9465739
    Abstract: A system, method, and computer program product are provided for conditionally sending a request for data to a node based on a determination. In operation, a first request for data is sent to a cache of a first node. Additionally, it is determined whether the first request can be satisfied within the first node, where the determining includes at least one of determining a type of the first request and determining a state of the data in the cache. Furthermore, a second request for the data is conditionally sent to a second node, based on the determination.
    Type: Grant
    Filed: October 17, 2013
    Date of Patent: October 11, 2016
    Assignee: Broadcom Corporation
    Inventors: Gaurav Garg, David T. Hass
  • Patent number: 9460011
    Abstract: A computer system that includes a processor, a main memory, and a processor cache for the main memory with a check-in-cache instruction may be provided. The processor executes computer readable instructions stored in the memory that include receiving a check-in-cache instruction from a check-in-cache storage location. The instructions also include, responsive to receiving the check-in-cache instruction, determining whether data bytes specified by the check-in-cache instruction are at least partially available in the processor cache. The instructions further include storing a condition code of the determination result in a storage location.
    Type: Grant
    Filed: December 14, 2015
    Date of Patent: October 4, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Marco Kraemer, Carsten Otte, Christoph Raisch
  • Patent number: 9454491
    Abstract: Systems and methods for accessing a unified translation lookaside buffer (TLB) are disclosed. A method includes receiving an indicator of a level one translation lookaside buffer (L1TLB) miss corresponding to a request for a virtual address to physical address translation, searching a cache that includes virtual addresses and page sizes that correspond to translation table entries (TTEs) that have been evicted from the L1TLB, where a page size is identified, and searching a second level TLB and identifying a physical address that is contained in the second level TLB. Access is provided to the identified physical address.
    Type: Grant
    Filed: January 6, 2015
    Date of Patent: September 27, 2016
    Assignee: SOFT MACHINES INC.
    Inventors: Karthikeyan Avudaiyappan, Mohammad Abdallah
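    A simplified Python sketch of the lookup flow in the abstract above: on an L1 TLB miss, a cache of evicted translation table entries is consulted to identify the page size before the second-level TLB is searched. The dictionary structures, field layout, and fallback page size are illustrative assumptions:
      # Each evicted-TTE record maps a virtual page number to its page size.
      evicted_tte_cache = {0x4_0000: 4096}          # VPN -> page size (bytes)
      l2_tlb = {(0x4_0000, 4096): 0x9_0000}         # (VPN, page size) -> PFN

      def translate_after_l1_miss(vpn):
          """Handle an L1 TLB miss: identify the page size from the
          evicted-TTE cache, then search the second-level TLB."""
          page_size = evicted_tte_cache.get(vpn, 4096)   # fall back to the base page size
          pfn = l2_tlb.get((vpn, page_size))
          if pfn is None:
              raise LookupError("second-level TLB miss; walk the page tables")
          return pfn, page_size

      print(translate_after_l1_miss(0x4_0000))   # -> (589824, 4096): PFN 0x90000, 4 KiB page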
  • Patent number: 9448954
    Abstract: The subject matter discloses a method for data coherency, the method comprising: receiving an interrupt request for interrupting a CPU, wherein the interrupt request is from one of a plurality of modules, the interrupt request notifies a writing instruction of first data by that module to a shared memory, and the shared memory is accessible to the plurality of modules through a shared bus; suspending the interrupt request; validating completion of the execution of the writing instruction, wherein the validating is performed after the suspending; and resuming the interrupt request after the completion of the execution of the writing instruction is validated, whereby the CPU is notified about the completion of the execution of the writing instruction.
    Type: Grant
    Filed: February 28, 2011
    Date of Patent: September 20, 2016
    Assignee: DSP GROUP LTD.
    Inventors: Leonardo Vainsencher, Yaron P. Folk, Yuval Itkin
  • Patent number: 9430410
    Abstract: A method for supporting a plurality of load accesses is disclosed. A plurality of requests to access a data cache is accessed, and in response, a tag memory is accessed that maintains a plurality of copies of tags for each entry in the data cache. Tags are identified that correspond to individual requests. The data cache is accessed based on the tags that correspond to the individual requests. A plurality of requests to access the same block of the plurality of blocks causes an access arbitration that is executed in the same clock cycle as is the access of the tag memory.
    Type: Grant
    Filed: July 30, 2012
    Date of Patent: August 30, 2016
    Assignee: SOFT MACHINES, INC.
    Inventors: Karthikeyan Avudaiyappan, Mohammad Abdallah
  • Patent number: 9432679
    Abstract: A data processing system is provided for processing video data on a window basis. At least one first memory unit (L1) is provided for fetching and storing video data from an image memory (IM) according to a first window (R) in a first scanning order (SO1). At least one second memory unit (L0) is provided for fetching and storing video data from the first memory unit (L1) according to a second window in a second scanning order (SO). Furthermore, at least one processing unit (PU) is provided for performing video processing on the video data of the second window as stored in the at least one second memory unit (L0) based on the second scanning order (SO). The second scanning order (SO) is a meandering scanning order that is orthogonal to the first scanning order (SO1).
    Type: Grant
    Filed: October 27, 2006
    Date of Patent: August 30, 2016
    Assignee: ENTROPIC COMMUNICATIONS, LLC
    Inventors: Aleksandar Beric, Ramanathan Sethuraman
  • Patent number: 9424198
    Abstract: A processor includes at least one execution unit, a near memory, and memory management logic to manage the near memory and a far memory external to the processor as a unified exclusive memory. Each of a plurality of data blocks may be exclusively stored in either the far memory or the near memory. The unified exclusive memory space may be divided into a plurality of sets and a plurality of ways. In response to a request for a first block stored in the far memory, the memory management logic may move the first block from the far memory to the near memory, and may move a second block from the near memory to the far memory. A tag buffer may store tags associated with blocks being moved between the near memory and the far memory. Fill and drain buffers may also be used. Other implementations are described and claimed.
    Type: Grant
    Filed: November 30, 2012
    Date of Patent: August 23, 2016
    Assignee: Intel Corporation
    Inventors: Shlomo Raikin, Zvika Greenfield
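    A toy Python sketch of the exclusive near/far placement in the abstract above: a block requested from far memory is moved into near memory while a victim block is moved out, so each block is stored in exactly one of the two memories. The set/way organization, tag buffer, and fill/drain buffers are omitted, and all names are illustrative:
      near = {}                 # block address -> data, limited capacity
      far = {0xA: "a", 0xB: "b", 0xC: "c"}
      NEAR_CAPACITY = 2

      def access(addr):
          """Return the block, keeping it exclusively in near OR far memory."""
          if addr in near:
              return near[addr]
          data = far.pop(addr)              # block leaves far memory...
          if len(near) >= NEAR_CAPACITY:    # ...and a victim leaves near memory
              victim, victim_data = next(iter(near.items()))
              del near[victim]
              far[victim] = victim_data
          near[addr] = data                 # ...so each block exists in exactly one place
          return data

      for a in (0xA, 0xB, 0xC):
          access(a)
      print(sorted(near), sorted(far))      # [11, 12] [10]: 0xB and 0xC are near, 0xA went back to far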
  • Patent number: 9396102
    Abstract: For cache/data management in a computing storage environment, incoming data segments into a Non Volatile Storage (NVS) device of the computing storage environment are validated against a bitmap to determine if the incoming data segments are currently in use. Those of the incoming data segments determined to be currently in use are designated to the computing storage environment to protect data integrity.
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: July 19, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kevin John Ash, Michael Thomas Benhase, Lokesh Mohan Gupta, Kenneth Wayne Todd
  • Patent number: 9390023
    Abstract: According to at least one example embodiment, a method and corresponding apparatus for conditionally storing data include initiating an atomic sequence by executing, by a core processor, an instruction/operation designed to initiate an atomic sequence. Executing the instruction designed to initiate the atomic sequence includes loading content associated with a memory location into a first cache memory, and maintaining an indication of the memory location and a copy of the corresponding content loaded. A conditional storing operation is then performed; the conditional storing operation includes a compare-and-swap operation, executed by a controller associated with a second cache memory, based on the maintained copy of the content and the indication of the memory location.
    Type: Grant
    Filed: October 3, 2013
    Date of Patent: July 12, 2016
    Assignee: Cavium, Inc.
    Inventors: Richard E. Kessler, David H. Asher, Michael Sean Bertone, Shubhendu S. Mukherjee, Wilson P. Snyder, II, John M. Perveiler, Christopher J. Comis
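    A rough Python sketch of the conditional-store idea in the abstract above: beginning the atomic sequence records the memory location and a copy of its content, and the conditional store succeeds only if a compare-and-swap against that copy succeeds. The threading lock stands in for the second-level cache controller's atomicity and is an illustrative simplification:
      import threading

      memory = {0x40: 7}
      _lock = threading.Lock()   # stands in for the second-level cache controller

      def begin_atomic(addr):
          """Start the atomic sequence: remember the address and its content."""
          return addr, memory[addr]

      def store_conditional(reservation, new_value):
          """Compare-and-swap: store only if the location still holds the
          value observed when the sequence began."""
          addr, expected = reservation
          with _lock:
              if memory[addr] != expected:
                  return False              # someone wrote in between; caller retries
              memory[addr] = new_value
              return True

      r = begin_atomic(0x40)
      print(store_conditional(r, 8))        # True: no intervening write
      print(store_conditional(r, 9))        # False: the snapshot (7) no longer matches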
  • Patent number: 9372800
    Abstract: A multi-chip system includes multiple chip devices configured to communicate to each other and share resources. According to at least one example embodiment, a method of providing memory coherence within the multi-chip system comprises maintaining, at a first chip device of the multi-chip system, state information indicative of one or more states of one or more copies, residing in one or more chip devices of the multi-chip system, of a data block. The data block is stored in a memory associated with one of the multiple chip devices. The first chip device receives a message associated with a copy of the one or more copies of the data block from a second chip device of the multiple chip devices, and, in response, executes a scheme of one or more actions determined based on the state information maintained at the first chip device and the message received.
    Type: Grant
    Filed: March 7, 2014
    Date of Patent: June 21, 2016
    Assignee: Cavium, Inc.
    Inventors: Isam Akkawi, Richard E. Kessler, David H. Asher, Bryan W. Chin, Wilson P. Snyder, II
  • Patent number: 9367464
    Abstract: A method is described that includes alternating cache requests sent to a tag array between data requests and dataless requests.
    Type: Grant
    Filed: December 30, 2011
    Date of Patent: June 14, 2016
    Assignee: Intel Corporation
    Inventor: Larisa Novakovsky
  • Patent number: 9336144
    Abstract: Three-dimensional processing systems are provided which have multiple layers of conjoined chips, wherein one or more chip layers include processor cores that share cache hierarchies over multiple chip layers. The caches can be partitioned, conjoined, and managed according to various sets of rules and configurations.
    Type: Grant
    Filed: July 25, 2013
    Date of Patent: May 10, 2016
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: Alper Buyuktosunoglu, Philip G. Emma, Allan M. Hartstein, Michael B. Healy, Krishnan K. Kailas
  • Patent number: 9336146
    Abstract: Technologies are generally described herein for accelerating a cache state transfer in a multicore processor. The multicore processor may include first, second, and third tiles. The multicore processor may initiate migration of a thread executing on the first core at the first tile from the first tile to the second tile. The multicore processor may determine block addresses of blocks to be transferred from a first cache at the first tile to a second cache at the second tile, and identify that a directory at the third tile corresponds to the block addresses. The multicore processor may update the directory to reflect that the second cache shares the blocks. The multicore processor may transfer the blocks from the first cache in the first tile to the second cache in the second tile effective to complete the migration of the thread from the first tile to the second tile.
    Type: Grant
    Filed: December 29, 2010
    Date of Patent: May 10, 2016
    Assignee: EMPIRE TECHNOLOGY DEVELOPMENT LLC
    Inventor: Yan Solihin
  • Patent number: 9329800
    Abstract: A data storage system and associated method are provided wherein a policy engine continuously collects qualitative information about a network load to the data storage system in order to dynamically characterize the load. The policy engine continuously correlates the load characterization to the content of a command queue of transfer requests for writeback commands and host read commands, selectively limiting that content, with respect to writeback commands, to only those transfer requests for writeback data that are selected on a physical zone basis from a plurality of predefined physical zones of the storage media.
    Type: Grant
    Filed: June 29, 2007
    Date of Patent: May 3, 2016
    Assignee: Seagate Technology LLC
    Inventors: Clark Edward Lubbers, Robert Michael Lester
  • Patent number: 9311244
    Abstract: An interconnect has transaction tracking circuitry for enforcing ordering of a set of data access transactions so that they are issued to slave devices in an order in which they are received from master devices. The transaction tracking circuitry is reused for also enforcing ordering of snoop transactions which are triggered by the set of data access transactions, for snooping master devices identified by a snoop filter as holding cache data for the target address of the transactions.
    Type: Grant
    Filed: August 25, 2014
    Date of Patent: April 12, 2016
    Assignee: ARM Limited
    Inventors: Sean James Salisbury, Andrew David Tune, Daniel Sara
  • Patent number: 9304863
    Abstract: A method of backstepping through a program execution includes dividing the program execution into a plurality of epochs, wherein the program execution is performed by an active core, determining, during a subsequent epoch of the plurality of epochs, that a rollback is to be performed, performing the rollback including re-executing a previous epoch of the plurality of epochs, wherein the previous epoch includes one or more instructions of the program execution stored by a checkpointing core, and adjusting a granularity of the plurality of epochs according to a frequency of the rollback.
    Type: Grant
    Filed: June 27, 2013
    Date of Patent: April 5, 2016
    Assignee: International Business Machines Corporation
    Inventors: Harold W. Cain, III, David M. Daly, Kattamuri Ekanadham, Jose E. Moreira, Mauricio J. Serrano
  • Patent number: 9304946
    Abstract: Technologies are described herein for providing a hardware-based accelerator adapted to manage copy-on-write. Some example technologies may identify a read request adapted to read a block at an original memory address. The technologies may utilize the hardware-based accelerator to determine whether the block is located at the original memory address. When a determination is made that the block is located at the original memory address, the technologies may utilize the hardware-based accelerator to pass the original memory address so that the read request can be performed utilizing the original memory address. When a determination is made that the block is not located in the memory at the original memory address, the technologies may utilize the hardware-based accelerator to generate a new memory address and to pass the new memory address so that the read request can be performed utilizing the new memory address.
    Type: Grant
    Filed: June 25, 2012
    Date of Patent: April 5, 2016
    Assignee: EMPIRE TECHNOLOGY DEVELOPMENT LLC
    Inventor: Yan Solihin
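    A small Python sketch of the read-path check in the abstract above: the accelerator passes the original memory address through unless the block has been relocated by copy-on-write, in which case it passes a new address. The remap table is an assumed data structure, not something the abstract specifies:
      # Blocks relocated by copy-on-write: original address -> new address.
      cow_remap = {0x2000: 0x8000}

      def resolve_read_address(original_addr):
          """Pass the original address through, unless the block has moved,
          in which case pass the relocated (new) address instead."""
          return cow_remap.get(original_addr, original_addr)

      print(hex(resolve_read_address(0x1000)))   # 0x1000: block still in place
      print(hex(resolve_read_address(0x2000)))   # 0x8000: block was copied on write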
  • Patent number: 9298624
    Abstract: The present disclosure relates to systems, methods, and computer program products for keeping multiple caches updated, or coherent, on multiple servers when the multiple caches contain independent copies of cached data. Example methods may include receiving a request to write data to a block of a first cache associated with a first server in a clustered server environment. The methods may also include identifying a second cache storing a copy of the block, where the second cache is associated with a second server in the clustered environment. The methods may further include transmitting a request to update the second cache with the received write data, and upon receiving a subsequent request to write subsequent data, identifying a third cache for invalidating based on access patterns of the blocks, where the third cache is associated with a third server in the clustered environment.
    Type: Grant
    Filed: May 14, 2014
    Date of Patent: March 29, 2016
    Assignee: HGST Netherlands B.V.
    Inventors: Jin Ren, Ken Qing Yang, Gregory Evan Fedynyshyn
  • Patent number: 9298665
    Abstract: This invention optimizes non-shared accesses and avoids dependencies across coherent endpoints to ensure bandwidth across the system even when sharing. The coherence controller is distributed across all coherent endpoints. The coherence controller for each memory endpoint keeps state for each coherent access to ensure the proper ordering of events. The coherence controller uses First-In-First-Out allocation, which ensures full utilization of the resources before stalling and keeps the implementation simple. The coherence controller provides Snoop Command/Response ID Allocation per memory endpoint.
    Type: Grant
    Filed: October 22, 2013
    Date of Patent: March 29, 2016
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Matthew D Pierson, Kai Chirca
  • Patent number: 9292449
    Abstract: A cache memory data compression and decompression technique is described. A processor device includes a memory controller unit (MCU) coupled to a main memory and a cache memory. The MCU includes a cache memory data compression and decompression module that compresses data received from the main memory. The compressed data may then be stored in the cache memory. The cache memory data compression and decompression module may also decompress data that is stored in the cache memory. For example, in response to a cache hit for data requested by a processor, the compressed data in the cache memory may be decompressed and subsequently read or operated upon by the processor.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: March 22, 2016
    Assignee: Intel Corporation
    Inventors: Alaa R. Alameldeen, Niranjan L. Cooray, Jayesh Gaur, Steven D. Pudar, Manuel A. Aguilar Arreola, Margareth E. Marrugo, Chinnakrishnan Ballapuram
  • Patent number: 9280480
    Abstract: A facility and cache machine instruction of a computer architecture are provided for specifying a target cache level and a target cache attribute of interest for obtaining a cache attribute of one or more target caches. The requested cache attribute of the target cache(s) is saved in a register.
    Type: Grant
    Filed: July 15, 2013
    Date of Patent: March 8, 2016
    Assignee: International Business Machines Corporation
    Inventors: Dan F. Greiner, Timothy J. Slegel
  • Patent number: 9262318
    Abstract: A system includes a processor, a memory controller module, and a flash memory module. The processor is configured to generate a request to retrieve information corresponding to an address. The memory controller module includes a cache memory configured to store information, and a cache control logic module configured to: determine whether the cache memory stores the information corresponding to the address; if the cache memory stores the information corresponding to the address, retrieve the information from the cache memory and provide the information to the processor; and if the cache memory does not store the information corresponding to the address, generate a flash memory read request based on the address. The flash memory module is configured to, in response to receiving the flash memory read request, provide the information corresponding to the address to the memory controller module.
    Type: Grant
    Filed: March 12, 2014
    Date of Patent: February 16, 2016
    Assignee: Marvell International Ltd.
    Inventors: Satya Vadlamani, Sindhu Rajaram, Yongjiang Wang, Lin Chen
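    A compact Python sketch of the hit/miss flow in the abstract above: the controller serves the request from its cache memory when possible and otherwise issues a flash memory read request. The FlashModule and MemoryControllerCache classes are illustrative stand-ins:
      class FlashModule:
          """Illustrative backing store standing in for the flash memory module."""
          def __init__(self, contents):
              self.contents = contents

          def read(self, addr):
              return self.contents[addr]

      class MemoryControllerCache:
          def __init__(self, flash):
              self.flash = flash
              self.cache = {}

          def retrieve(self, addr):
              if addr in self.cache:            # cache hit: provide directly to the processor
                  return self.cache[addr]
              data = self.flash.read(addr)      # cache miss: flash memory read request
              self.cache[addr] = data
              return data

      ctrl = MemoryControllerCache(FlashModule({0x10: b"boot"}))
      print(ctrl.retrieve(0x10))   # miss -> flash read
      print(ctrl.retrieve(0x10))   # hit  -> served from the cache memory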
  • Patent number: 9256527
    Abstract: The present idea provides high read and write performance from/to a solid state memory device. The main memory of the controller is not blocked by a complete address mapping table covering the entire memory device. Instead, such a table is stored in the memory device itself, and only selected portions of address mapping information are buffered in the main memory in a read cache and a write cache. Separating the read cache from the write cache enables an address mapping entry to be evicted from the read cache without the need to update the related flash memory page storing that entry in the flash memory device. By this design, the read cache may advantageously be stored on a DRAM even without power-down protection, while the write cache may preferably be implemented in nonvolatile or other fail-safe memory. This leads to a reduction of the overall provisioning of nonvolatile or fail-safe memory and to improved scalability and performance.
    Type: Grant
    Filed: July 25, 2011
    Date of Patent: February 9, 2016
    Assignee: International Business Machines Corporation
    Inventors: Werner Bux, Robert Haas, Xiao-Yu Hu, Roman Pletka
  • Patent number: 9213652
    Abstract: Managing data in a computing system comprising one or more cores includes: providing a cache in each of one or more of the cores that includes multiple storage locations; storing data of a first type of multiple types of data in a selected storage location of a first cache of a first core that is selected according to status information associated with the first cache, and updating the status information; and storing data of a second type of the multiple types of data in a storage location within a subset of fewer than all of the storage locations of the first cache and managing the status information to ensure that subsequent data of the second type received by the first core for storage in the first cache is stored in the storage location within the subset.
    Type: Grant
    Filed: September 20, 2010
    Date of Patent: December 15, 2015
    Assignee: Tilera Corporation
    Inventors: Chyi-Chang Miao, Christopher D. Metcalf, Ian Rudolf Bratt, Carl G. Ramey
  • Patent number: 9195395
    Abstract: An apparatus, computer program product, and associated method/processing unit are provided for utilizing a memory subsystem including NAND flash memory and dynamic random access memory. Further included is a first circuit for receiving DDR signals and converting the DDR signals to SATA signals. The first circuit includes embedded dynamic random access memory. Also provided is a second circuit for receiving the SATA signals and converting the SATA signals to NAND flash signals. The second circuit is communicatively coupled to the first circuit via a first memory bus associated with a SATA protocol, the NAND flash memory via a second memory bus associated with a NAND flash protocol, and the dynamic random access memory.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: November 24, 2015
    Assignee: P4TENTS1, LLC
    Inventor: Michael S Smith
  • Patent number: 9189442
    Abstract: An apparatus and associated method/processing unit are provided for utilizing a memory subsystem including NAND flash memory and dynamic random access memory. Further included is a first circuit for receiving DDR signals and converting the DDR signals to SATA signals. The first circuit includes embedded dynamic random access memory. Also provided is a second circuit for receiving the SATA signals and converting the SATA signals to NAND flash signals. The second circuit is communicatively coupled to the first circuit via a first memory bus associated with a SATA protocol, the NAND flash memory via a second memory bus associated with a NAND flash protocol, and the dynamic random access memory. In operation, data is fetched using a time between an execution of a plurality of threads.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: November 17, 2015
    Assignee: P4TENTS1, LLC
    Inventor: Michael S Smith
  • Patent number: 9189409
    Abstract: Methods and structure are provided for reducing the number of writes to a cache of a storage controller. One exemplary embodiment includes a storage controller that has a non-volatile flash cache memory, a primary memory that is distinct from the cache memory, and a memory manager. The memory manager is able to receive data for storage in the cache memory, to generate a hash key from the received data, and to compare the hash key to hash values for entries in the cache memory. The memory manager can write the received data to the cache memory if the hash key does not match one of the hash values. Also, the memory manager can modify the primary memory instead of writing to the cache if the hash key matches a hash value, in order to reduce the amount of data written to the cache memory.
    Type: Grant
    Filed: February 19, 2013
    Date of Patent: November 17, 2015
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventor: Parag R. Maharana
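    A minimal Python sketch of the write-reduction idea in the abstract above: incoming data is hashed, and if the hash matches an existing cache entry the controller updates primary-memory metadata instead of writing the data to the cache again. The choice of SHA-256 and the shape of the metadata are assumptions made for illustration:
      import hashlib

      flash_cache = {}      # hash key -> data actually written to the cache memory
      primary_meta = {}     # logical block -> hash key (metadata only, no data copy)

      def cache_write(lba, data):
          """Write to the flash cache only when the content is new."""
          key = hashlib.sha256(data).hexdigest()
          if key not in flash_cache:
              flash_cache[key] = data          # new content: one cache write
          primary_meta[lba] = key              # duplicate content: metadata update only
          return key

      cache_write(1, b"hello")
      cache_write(2, b"hello")                 # same content, no second cache write
      print(len(flash_cache), len(primary_meta))   # 1 2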
  • Patent number: 9182984
    Abstract: A computer implemented instruction is executed. One or more translation table entry locations are specified by the instruction. Based on a local-clearing (LC) control specified by the instruction being a first value, the processor selectively clears, in the translation lookaside buffers (TLBs) of a plurality of the CPUs in a configuration, entries corresponding to the determined translation table entry location. Based on the LC control being a second value, the processor selectively clears, only in the TLB of the CPU executing the instruction, entries corresponding to the determined translation table entry location. A computer program product, computer system, and computer implemented method are provided.
    Type: Grant
    Filed: June 15, 2012
    Date of Patent: November 10, 2015
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Dan F. Greiner, Gustav E. Sittmann, III, Cynthia Sittmann
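    A brief Python sketch of the local-clearing control in the abstract above: one LC value clears the entry in every CPU's TLB, the other clears it only in the executing CPU's TLB. Modeling each TLB as a dictionary, and the particular encoding of the two LC values, are illustrative simplifications:
      # One TLB per CPU, modeled as {virtual page: physical frame}.
      tlbs = {cpu: {0x10: 0x90, 0x20: 0xA0} for cpu in range(4)}

      def clear_tte(executing_cpu, vpage, lc_control):
          """Clear the entry locally (lc_control=1) or on all CPUs (lc_control=0)."""
          targets = [executing_cpu] if lc_control else tlbs.keys()
          for cpu in targets:
              tlbs[cpu].pop(vpage, None)

      clear_tte(executing_cpu=0, vpage=0x10, lc_control=1)   # local clear only
      print([0x10 in tlbs[c] for c in range(4)])             # [False, True, True, True]
      clear_tte(executing_cpu=0, vpage=0x20, lc_control=0)   # broadcast clear
      print([0x20 in tlbs[c] for c in range(4)])             # [False, False, False, False]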
  • Patent number: 9183081
    Abstract: Systems and methods for performing defect detection and data recovery within a memory system are disclosed. A controller of a memory system may receive a command to write data in a memory of the memory system; determine a physical location of the memory that is associated with the data write; write data associated with the data write to the physical location; and store the physical location of the memory that is associated with the data write in a Tag cache. The controller may further identify a data keep cache of a plurality of data keep caches that is associated with the data write based on the physical location of the memory that is associated with the data write; update an XOR sum based on the data of the data write; and store the updated XOR sum in the identified data keep cache.
    Type: Grant
    Filed: March 12, 2013
    Date of Patent: November 10, 2015
    Assignee: SanDisk Technologies Inc.
    Inventors: Abhijeet Manohar, Chris Avila, Jianmin Huang, Daniel Edward Tuers
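    A short Python sketch of the XOR bookkeeping in the abstract above: each write's physical location is recorded in a tag cache and selects a data keep cache whose running XOR sum is updated with the written data. Selecting the data keep cache by modulo and using fixed 4-byte words are illustrative simplifications:
      NUM_KEEP_CACHES = 2
      data_keep = [bytes(4) for _ in range(NUM_KEEP_CACHES)]   # running XOR sums
      tag_cache = []                                           # physical locations written

      def xor_bytes(a, b):
          return bytes(x ^ y for x, y in zip(a, b))

      def record_write(physical_loc, data):
          """Track the location and fold the data into its keep cache's XOR sum."""
          tag_cache.append(physical_loc)
          idx = physical_loc % NUM_KEEP_CACHES        # pick the data keep cache
          data_keep[idx] = xor_bytes(data_keep[idx], data)

      record_write(0, b"\x0f\x00\x00\x01")
      record_write(2, b"\xf0\x00\x00\x01")            # maps to the same keep cache (index 0)
      print(data_keep[0].hex())                       # 'ff000000': XOR of the two writes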
  • Patent number: 9182914
    Abstract: An apparatus, computer program product, and associated method/processing unit are provided for utilizing a memory subsystem including a first memory of a first memory class, and a second memory of a second memory class communicatively coupled to the first memory. In operation, data is fetched using a time between a plurality of threads.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: November 10, 2015
    Assignee: P4TENTS1, LLC
    Inventor: Michael S Smith
  • Patent number: 9176671
    Abstract: An apparatus and associated method/processing unit are provided for utilizing a memory subsystem including NAND flash memory and dynamic random access memory. Further included is a first circuit for receiving DDR signals and converting the DDR signals to SATA signals. The first circuit includes embedded dynamic random access memory. Also provided is a second circuit for receiving the SATA signals and converting the SATA signals to NAND flash signals. The second circuit is communicatively coupled to the first circuit via a first memory bus associated with a SATA protocol, the NAND flash memory via a second memory bus associated with a NAND flash protocol, and the dynamic random access memory. In operation, data is fetched using a time between an execution of a plurality of threads.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: November 3, 2015
    Assignee: P4TENTS1, LLC
    Inventor: Michael S Smith
  • Patent number: 9170744
    Abstract: A computer program product, apparatus and associated method/processing unit are provided for utilizing a memory subsystem including NAND flash physical memory and DRAM physical memory. Further included is a first buffer for receiving DDR signals and converting the DDR signals to SATA signals. The first buffer includes embedded DRAM physical memory. Also provided is a second buffer for receiving the SATA signals and converting the SATA signals to NAND flash signals. The second buffer is communicatively coupled to the first buffer via a first memory bus associated with a SATA protocol, the NAND flash physical memory via a second memory bus associated with a NAND flash protocol, and the DRAM physical memory.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: October 27, 2015
    Assignee: P4TENTS1, LLC
    Inventor: Michael S Smith
  • Patent number: 9165088
    Abstract: According to an example, multi-mode storage may include operating a first array including a first memory and a second array including a second memory in one or more modes of operation. The first memory may be a relatively denser memory compared to the second memory and the second memory may be a relatively faster memory compared to the first memory. The modes of operation may include a first mode of operation where the first array functions as the relatively denser memory compared to the second memory and the second array functions as the relatively faster memory compared to the first memory, a second mode of operation where the second array is operated as an automatic cache of a portion of a dataset, and a third mode of operation where a cache-tag functionality used to support the second mode of operation is instead used to provide a CAM.
    Type: Grant
    Filed: July 8, 2013
    Date of Patent: October 20, 2015
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Robert J. Brooks
  • Patent number: 9164679
    Abstract: An apparatus, computer program product, and associated method/processing unit are provided for utilizing a memory subsystem including a first memory of a first memory class, and a second memory of a second memory class communicatively coupled to the first memory. In operation, data is fetched using a time between a plurality of threads.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: October 20, 2015
    Assignee: P4TENTS1, LLC
    Inventor: Michael S Smith
  • Patent number: 9164910
    Abstract: A data processing apparatus comprises at least one processor having a private cache, a shared cache for storing data processed by the processor and by a further device, and coherency control circuitry. The coherency control circuitry is responsive to a write request from the further device to determine whether data related to an address targeted by the write request is stored in the private cache and, if it is, to force an eviction of the stored data from the private cache to the shared cache prior to performing the write to the shared cache. The data is stored in the private cache in conjunction with an indicator indicating whether the stored data is consistent with data stored in a corresponding address in a further data store, and the stored data is evicted whether the stored data is indicated as being consistent or inconsistent.
    Type: Grant
    Filed: February 21, 2008
    Date of Patent: October 20, 2015
    Assignee: ARM Limited
    Inventors: Nicolas Chaussade, Stephane Eric Sebastien Brochier
  • Patent number: 9158689
    Abstract: Technologies described herein generally relate to aggregation of cache eviction notifications to a directory. Some example technologies may be utilized to update an aggregation table to reflect evictions of a plurality of blocks from a plurality of block addresses of at least one cache memory. An aggregate message can be generated, where the message specifies the evictions of the plurality of blocks as reflected in the aggregation table. The aggregate message can be sent to the directory. The directory can parse the aggregate message and update a plurality of directory entries to reflect the evictions from the cache memory as specified in the aggregate message.
    Type: Grant
    Filed: February 11, 2013
    Date of Patent: October 13, 2015
    Assignee: Empire Technology Development LLC
    Inventor: Yan Solihin
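    A short Python sketch of the aggregation flow in the abstract above: evictions are noted in an aggregation table, flushed to the directory as one aggregate message, and the directory parses that message to update its entries. Modeling the directory as an address-to-sharer-set map, and all names here, are illustrative assumptions:
      aggregation_table = set()                    # block addresses evicted from this cache
      directory = {0x100: {"core0", "core1"}, 0x200: {"core0"}}

      def note_eviction(addr):
          aggregation_table.add(addr)

      def send_aggregate_message(core):
          """Bundle all pending eviction notices into one message and clear the table."""
          message = {"core": core, "evicted": sorted(aggregation_table)}
          aggregation_table.clear()
          return message

      def directory_apply(message):
          """Directory side: parse the aggregate message and update each entry."""
          for addr in message["evicted"]:
              directory[addr].discard(message["core"])

      note_eviction(0x100)
      note_eviction(0x200)
      directory_apply(send_aggregate_message("core0"))
      print(directory)    # 0x100 now kept only by core1; 0x200 has no sharers left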
  • Patent number: 9158546
    Abstract: A computer program product, apparatus and associated method/processing unit are provided for utilizing a physical memory system including a first physical memory of a first physical memory class, and a second physical memory of a second physical memory class communicatively coupled to the first physical memory. In operation, one or more pages are fetched from the first physical memory using a time between an execution of a plurality of threads associated with the second physical memory.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: October 13, 2015
    Assignee: P4TENTS1, LLC
    Inventor: Michael S Smith
  • Patent number: 9122588
    Abstract: Some implementations provide a method for managing data in a storage system that includes a persistent storage device and a non-volatile random access memory (NVRAM) cache device. The method includes: accessing a direct mapping between a logical address associated with data stored on the persistent storage device and a physical address on the NVRAM cache device; receiving, from a host computing device coupled to the storage system, a request to access a particular unit of data stored on the persistent storage device; using the direct mapping as a basis between the logical address associated with the data stored on the persistent storage device and the physical address on the NVRAM cache device to determine whether the particular unit of data being requested is present on the NVRAM cache device.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: September 1, 2015
    Assignee: Virident Systems Inc.
    Inventors: Shibabrata Mondal, Vijay Karamcheti, Ankur Arora, Ajit Yagaty
  • Patent number: 9081619
    Abstract: A method of provisioning a Web hosting resource includes providing a cloud service. A request for a Web hosting resource is received by the cloud service, wherein the request is provided by a client. The cloud service identifies a Web host based on the received request for a Web hosting resource. The cloud service sends a request to the Web host to provision a first Web hosting resource for use by the client.
    Type: Grant
    Filed: June 30, 2011
    Date of Patent: July 14, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Muhammad Bilal Aslam, Crystal L. Hoyer, Sayed Ibrahim Hashimi, Vishal R. Joshi, Omar Khan, Jonathan Kevin Wall, Bill Staples, Bradley John Bartz, Younus Aftab
  • Patent number: 9075928
    Abstract: A coherence maintenance address queue tracks each memory access from receipt until the memory reports the access complete. The address of each new access is compared against the addresses of all entries in the queue. This check is made when the access is ready to transmit to the memory. If there is no address match, then the current access does not conflict with any pending access. If there is an address match, the current access is stalled. The multi-core shared memory controller would then typically proceed to another access awaiting a slot to the endpoint memory. Stored addresses in the coherence maintenance address queue are retired when the endpoint memory reports completion of the operation. At this point the access is no longer a hazard to following operations.
    Type: Grant
    Filed: October 22, 2013
    Date of Patent: July 7, 2015
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Matthew D Pierson, Kai Chirca
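    A simple Python sketch of the queue behavior in the abstract above: an access that is ready to go to memory is compared against the addresses of all pending accesses, stalled on a match, and retired from the queue when the endpoint memory reports completion. The function names are illustrative:
      from collections import deque

      pending = deque()          # addresses of accesses issued to memory, not yet complete

      def try_issue(addr):
          """Issue the access unless it conflicts with a pending one (stall)."""
          if addr in pending:    # address match: hazard with an in-flight access
              return False       # the controller moves on to another waiting access
          pending.append(addr)
          return True

      def memory_complete(addr):
          """Endpoint memory reported completion: retire the queued address."""
          pending.remove(addr)

      print(try_issue(0x700))    # True: no conflict
      print(try_issue(0x700))    # False: stalled behind the pending access
      memory_complete(0x700)
      print(try_issue(0x700))    # True again after retirement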
  • Patent number: 9069682
    Abstract: A system and method for faster disk recovery are provided by bypassing the file system cache, which temporarily holds a subset of the file system's metadata objects, and instead using persistent fast storage that can be accessed at deterministic speeds to hold all the metadata objects of the file system. The system speeds recovery by writing updated metadata objects to persistent disk storage only when file system recovery is complete.
    Type: Grant
    Filed: June 29, 2012
    Date of Patent: June 30, 2015
    Assignee: EMC Corporation
    Inventor: Sairam Veeraswamy
  • Publication number: 20150149733
    Abstract: Method and system for supporting speculative modification in a data cache are provided and described. In one embodiment, a speculative cache buffer includes a plurality of cache lines and a plurality of state indicators. At least one of the cache lines is operable to receive an evicted cache line from a cache. The at least one of the cache lines is operable to return the evicted cache line to the cache if the cache requests the evicted cache line. Further, the plurality of state indicators is operable to indicate a state of a corresponding cache line of the cache lines.
    Type: Application
    Filed: January 14, 2011
    Publication date: May 28, 2015
    Inventors: Guillermo Rozas, Alexander Klaiber, David Dunn, Paul Serris, Lacky Shah
  • Patent number: 9043561
    Abstract: A storage device with low power consumption is provided. The storage device includes a plurality of cache lines. Each of the cache lines includes a data field which stores cache data; a tag which stores address data corresponding to the cache data; and a valid bit which stores valid data indicating whether the cache data stored in the data field is valid or invalid. Whether power is supplied to the tag and the data field in each of the cache lines is determined based on the valid data stored in the valid bit.
    Type: Grant
    Filed: April 30, 2013
    Date of Patent: May 26, 2015
    Assignee: Semiconductor Energy Laboratory Co., Ltd.
    Inventor: Masashi Fujita
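    A tiny Python sketch of the power-gating rule in the abstract above: a line's tag and data field receive power only while its valid bit indicates valid data. The boolean powered property stands in for the hardware power switch and is an illustrative assumption:
      class CacheLine:
          def __init__(self):
              self.valid = False
              self.tag = None
              self.data = None

          @property
          def powered(self):
              # Power is supplied to the tag and data field only for valid lines.
              return self.valid

          def fill(self, tag, data):
              self.valid = True          # powering up happens when the line becomes valid
              self.tag, self.data = tag, data

          def invalidate(self):
              self.valid = False         # invalid lines can be powered down
              self.tag = self.data = None

      line = CacheLine()
      print(line.powered)                # False: no power while invalid
      line.fill(tag=0x3F, data=b"\x00" * 64)
      print(line.powered)                # True: powered while holding valid data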
  • Patent number: 9043507
    Abstract: An information processing system includes a CPU that is connected to a bus; a device that is connected to the bus; a memory that is accessed by the CPU or the device; and a power mode control circuit that sets a power consumption mode. The power mode control circuit sets the power consumption mode based on first information that indicates a cache hit or a cache miss of a cache memory in the CPU and second information that indicates an activated state or a non-activated state of the device.
    Type: Grant
    Filed: May 9, 2013
    Date of Patent: May 26, 2015
    Assignee: FUJITSU LIMITED
    Inventors: Koichiro Yamashita, Hiromasa Yamauchi, Takahisa Suzuki, Koji Kurihara, Fumihiko Hayakawa
  • Patent number: 9037810
    Abstract: Some of the embodiments of the present disclosure provide a method comprising receiving a data packet, and storing the received data packet in a memory; generating a descriptor for the data packet, the descriptor including information for fetching at least a portion of the data packet from the memory; and in advance of a processing core requesting the at least a portion of the data packet to execute a processing operation on the at least a portion of the data packet, fetching the at least a portion of the data packet to a cache based at least in part on information in the descriptor. Other embodiments are also described and claimed.
    Type: Grant
    Filed: March 1, 2011
    Date of Patent: May 19, 2015
    Assignee: Marvell Israel (M.I.S.L.) Ltd.
    Inventors: Adi Habusha, Alon Pais, Rabeeh Khoury