Cache Status Data Bit Patents (Class 711/144)
  • Patent number: 9940247
    Abstract: The present application describes embodiments of a method and apparatus for concurrently accessing dirty bits in a cache. One embodiment of the apparatus includes a cache configurable to store a plurality of lines. The lines are grouped into a plurality of subsets of the plurality of lines. This embodiment of the apparatus also includes a plurality of dirty bits associated with the plurality of lines and first circuitry configurable to concurrently access the plurality of dirty bits associated with at least one of the plurality of subsets of lines.
    Type: Grant
    Filed: June 26, 2012
    Date of Patent: April 10, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventor: William L. Walker
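    Illustrative sketch (not from the patent text): a minimal C illustration of packing the dirty bits so that all bits of one subset of lines sit in a single word and can be read or cleared in one access; the subset size, array name, and helper functions are assumptions.

      #include <stdint.h>

      #define LINES_PER_SUBSET 64            /* one 64-bit word per subset */
      #define NUM_SUBSETS      128

      /* Dirty bits packed so the bits of one subset share a single word,
       * allowing them to be accessed concurrently rather than line by line. */
      static uint64_t dirty_bits[NUM_SUBSETS];

      static inline void mark_dirty(unsigned line)
      {
          dirty_bits[line / LINES_PER_SUBSET] |= 1ULL << (line % LINES_PER_SUBSET);
      }

      /* Fetch the dirty state of every line in a subset at once. */
      static inline uint64_t read_subset_dirty(unsigned subset)
      {
          return dirty_bits[subset];
      }

      /* Clear a whole subset after its lines have been written back. */
      static inline void clear_subset_dirty(unsigned subset)
      {
          dirty_bits[subset] = 0;
      }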
  • Patent number: 9898398
    Abstract: Reusing data in a memory buffer. A method includes reading data into a first portion of memory of a buffer implemented in the memory. The method further includes invalidating the data and marking the first portion of memory as free such that the first portion of memory is marked as being usable for storing other data, but where the data is not yet overwritten. The method further includes reusing the data in the first portion of memory after the data has been invalidated and the first portion of the memory is marked as free.
    Type: Grant
    Filed: December 30, 2013
    Date of Patent: February 20, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Cristian Petculescu, Amir Netz
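    Illustrative sketch (not from the patent text): one way to read the scheme above in C, where a buffer slot marked free keeps its old contents until it is actually overwritten, so a later request for the same data can still be served from it; the struct layout and names are assumptions.

      #include <stdbool.h>
      #include <stddef.h>
      #include <string.h>

      typedef struct {
          long key;           /* identifies the cached data                 */
          bool free;          /* slot may be reused for other data          */
          char data[4096];    /* contents remain intact until overwritten   */
      } buffer_slot;

      /* Reuse: a slot marked free still holds its old data, so a matching
       * request can be served from it instead of re-reading the source. */
      static const char *lookup(buffer_slot *slot, long key)
      {
          if (slot->key == key)      /* usable even while slot->free is set */
              return slot->data;
          return NULL;               /* overwritten or never held this key  */
      }

      static void overwrite(buffer_slot *slot, long key, const char *src, size_t n)
      {
          slot->key  = key;
          slot->free = false;
          memcpy(slot->data, src, n < sizeof slot->data ? n : sizeof slot->data);
      }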
  • Patent number: 9892051
    Abstract: A method can include executing a store instruction that instructs storing of data at an address and, in response to the store instruction, inserting a preloading instruction after the store instruction but before a dependent load instruction to the address. Executing the store instruction can include invalidating a data entry of a cache array at an address of the cache array corresponding to the address and writing the data to a backing memory at an address of the backing memory corresponding to the address. The preloading instruction can cause filling the data entry of the cache array, at the address of the cache array corresponding to the address, with the data from the backing memory at the address of the backing memory corresponding to the address and validating the data entry of the cache array.
    Type: Grant
    Filed: January 26, 2015
    Date of Patent: February 13, 2018
    Assignee: MARVELL INTERNATIONAL LTD.
    Inventors: Sujat Jamil, R. Frank O'Bleness, Russell J. Robideau, Tom Hameenanttila, Joseph Delgross, David E. Miner
  • Patent number: 9891916
    Abstract: A hardware data prefetcher is comprised in a memory access agent, wherein the memory access agent is one of a plurality of memory access agents that share a memory. The hardware data prefetcher includes a prefetch trait that is initially either exclusive or shared. The hardware data prefetcher also includes a prefetch module that performs hardware prefetches from a memory block of the shared memory using the prefetch trait. The hardware data prefetcher also includes an update module that performs analysis of accesses to the memory block by the plurality of memory access agents and, based on the analysis, dynamically updates the prefetch trait to either exclusive or shared while the prefetch module performs hardware prefetches from the memory block using the prefetch trait.
    Type: Grant
    Filed: February 18, 2015
    Date of Patent: February 13, 2018
    Assignee: VIA TECHNOLOGIES, INC.
    Inventors: Rodney E. Hooker, Albert J. Loper, John Michael Greer, Meera Ramani-Augustin
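    Illustrative sketch (not from the patent text): a possible software analogue in C of updating the prefetch trait from observed accesses, where the trait stays exclusive while this agent is the only writer and falls back to shared once other agents touch the block; the counters and the exact policy are assumptions.

      #include <stdbool.h>

      typedef enum { PREFETCH_SHARED, PREFETCH_EXCLUSIVE } prefetch_trait;

      typedef struct {
          prefetch_trait trait;     /* trait used for hardware prefetches    */
          unsigned own_stores;      /* stores to the block by this agent     */
          unsigned other_accesses;  /* accesses to the block by other agents */
      } prefetcher_state;

      /* Analyze one observed access and update the trait used for prefetches. */
      static void update_trait(prefetcher_state *p, bool access_by_other, bool is_store)
      {
          if (access_by_other)
              p->other_accesses++;
          else if (is_store)
              p->own_stores++;

          p->trait = (p->other_accesses == 0 && p->own_stores > 0)
                         ? PREFETCH_EXCLUSIVE
                         : PREFETCH_SHARED;
      }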
  • Patent number: 9891936
    Abstract: An apparatus and method for page level monitoring are described. For example, one embodiment of a method for monitoring memory pages comprises storing information related to each of a plurality of memory pages including an address identifying a location for a monitor variable for each of the plurality of memory pages in a data structure directly accessible only by a software layer operating at or above a first privilege level; detecting virtual-to-physical page mapping consistency changes or other page modifications to a particular memory page for which information is maintained in the data structure; responsively updating the monitor variable to reflect the consistency changes or page modifications; checking a first monitor variable associated with a first memory page prior to execution of first program code; and refraining from executing the first program code if the first monitor variable indicates consistency changes or page modifications to the first memory page.
    Type: Grant
    Filed: September 27, 2013
    Date of Patent: February 13, 2018
    Assignee: INTEL CORPORATION
    Inventors: Jiwei Oliver Lu, Koichi Yamada, James D. Beany, Palaniverlrajan Shanmugavelayutham, Bo Zhang
  • Patent number: 9886392
    Abstract: A method of enhancing a refresh PCI translation (RPCIT) operation to refresh a translation lookaside buffer (TLB) includes determining, by a computer processor, a request to perform at least one RPCIT instruction for purging at least one translation from the TLB. The method further includes purging, by the computer processor, the at least one translation from the TLB in response to executing the at least one RPCIT instruction. The computer processor selectively performs a synchronization operation prior to completing the at least one RPCIT instruction.
    Type: Grant
    Filed: May 19, 2014
    Date of Patent: February 6, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: David F. Craddock, Thomas A. Gregg, Dan F. Greiner, Damian L. Osisek
  • Patent number: 9880942
    Abstract: A method of enhancing a refresh PCI translation (RPCIT) operation to refresh a translation lookaside buffer (TLB) includes determining, by a computer processor, a request to perform at least one RPCIT instruction for purging at least one translation from the TLB. The method further includes purging, by the computer processor, the at least one translation from the TLB in response to executing the at least one RPCIT instruction. The computer processor selectively performs a synchronization operation prior to completing the at least one RPCIT instruction.
    Type: Grant
    Filed: June 22, 2016
    Date of Patent: January 30, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: David F. Craddock, Thomas A. Gregg, Dan F. Greiner, Damian L. Osisek
  • Patent number: 9864692
    Abstract: Managing cache evictions during transactional execution of a process. Based on initiating transactional execution of a memory data accessing instruction, memory data is fetched from a memory location, the memory data to be loaded as a new line into a cache entry of the cache. Based on determining that a threshold number of cache entries have been marked as read-set cache lines, it is determined whether a cache entry that is a read-set cache line can be replaced, by identifying a cache entry that is a read-set cache line for the transaction and contains memory data from a memory address within a predetermined non-conflict address range. The identified cache entry of the transaction is then invalidated, the fetched memory data is loaded into it, and it is marked as a read-set cache line of the transaction.
    Type: Grant
    Filed: August 12, 2015
    Date of Patent: January 9, 2018
    Assignee: International Business Machines Corporation
    Inventors: Dan F. Greiner, Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
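    Illustrative sketch (not from the patent text): a minimal C view of the replacement choice above: once the read-set threshold is reached, a read-set entry whose address falls inside a predetermined non-conflict range is selected, since invalidating and refilling it cannot cause a transactional conflict; the sizes, threshold, and field names are assumptions.

      #include <stdbool.h>
      #include <stdint.h>

      #define CACHE_ENTRIES      256
      #define READ_SET_THRESHOLD 192

      typedef struct {
          uint64_t addr;
          bool     valid;
          bool     read_set;   /* line belongs to the transaction's read set */
      } cache_entry;

      /* Returns an entry that may be invalidated, refilled with the fetched
       * data, and re-marked as a read-set line; NULL if none qualifies. */
      static cache_entry *find_replaceable(cache_entry *c, unsigned n_read_set,
                                           uint64_t nc_lo, uint64_t nc_hi)
      {
          if (n_read_set < READ_SET_THRESHOLD)
              return NULL;                        /* no replacement pressure yet */
          for (int i = 0; i < CACHE_ENTRIES; i++)
              if (c[i].valid && c[i].read_set &&
                  c[i].addr >= nc_lo && c[i].addr < nc_hi)
                  return &c[i];
          return NULL;
      }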
  • Patent number: 9804801
    Abstract: A method of processing data in a memory system including a control unit and a hybrid memory device having a first memory and a second memory, includes: receiving first write data, storing the first write data in the first memory and assigning a first group state from among a plurality of group states to the stored first write data in response to first attribution information, completing a data processing operation in the memory system directed to the stored first write data that changes the attribution information associated with the stored first write data by monitoring the first attribution information using an operating system running on the memory controller, and changing the first group state assigned to the stored first write data to a second group state from among the plurality of group states, the second group state having a different priority than a priority for the first group state.
    Type: Grant
    Filed: March 24, 2015
    Date of Patent: October 31, 2017
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sangkwon Moon, Jin-Soo Kim, Young-Sik Lee, Jinkyu Jeong, Kyung Ho Kim
  • Patent number: 9773048
    Abstract: A method includes processing a transaction on an in memory database where data being processed has a validity time, updating a time dependent data view responsive to the transaction being processed to capture time validity information regarding the data, and storing the time validity information in a historization table to provide historical access to past time dependent data following expiration of the validity time.
    Type: Grant
    Filed: September 12, 2013
    Date of Patent: September 26, 2017
    Assignee: SAP SE
    Inventor: Siar Sarferaz
  • Patent number: 9766820
    Abstract: An arithmetic processing device connected to a main memory includes a cache memory which stores data, an arithmetic unit which performs arithmetic operations on data stored in the cache memory, a first control device which controls the cache memory and outputs a first request to read data stored in the main memory, and a second control device which is connected to the main memory, transmits a plurality of second requests into which the first request output from the first control device is divided, receives data corresponding to the plurality of second requests from the main memory, and sends each of the data to the first control device.
    Type: Grant
    Filed: May 11, 2015
    Date of Patent: September 19, 2017
    Assignee: FUJITSU LIMITED
    Inventors: Yuta Toyoda, Koji Hosoe, Masatoshi Aihara, Akio Tokoyoda, Makoto Suga
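    Illustrative sketch (not from the patent text): a toy C example of the request splitting above, where one cache-side request for a line is divided by the memory-side controller into several smaller requests; the line and burst sizes are assumptions.

      #include <stdint.h>
      #include <stdio.h>

      #define LINE_SIZE  256u   /* size covered by the first request           */
      #define BURST_SIZE  64u   /* size of each second request sent to memory  */

      /* Divide one first request into several second requests; each returned
       * chunk would be forwarded back to the cache-side control device. */
      static void issue_second_requests(uint64_t line_addr)
      {
          for (uint64_t off = 0; off < LINE_SIZE; off += BURST_SIZE)
              printf("second request: addr=%#llx size=%u\n",
                     (unsigned long long)(line_addr + off), BURST_SIZE);
      }

      int main(void)
      {
          issue_second_requests(0x4000);  /* one first request -> four second requests */
          return 0;
      }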
  • Patent number: 9760486
    Abstract: Technologies are generally described herein for accelerating a cache state transfer in a multicore processor. The multicore processor may include first, second, and third tiles. The multicore processor may initiate migration of a thread executing on the first core at the first tile from the first tile to the second tile. The multicore processor may determine block addresses of blocks to be transferred from a first cache at the first tile to a second cache at the second tile, and identify that a directory at the third tile corresponds to the block addresses. The multicore processor may update the directory to reflect that the second cache shares the blocks. The multicore processor may transfer the blocks from the first cache in the first tile to the second cache in the second tile effective to complete the migration of the thread from the first tile to the second tile.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: September 12, 2017
    Assignee: EMPIRE TECHNOLOGY DEVELOPMENT LLC
    Inventor: Yan Solihin
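    Illustrative sketch (not from the patent text): a simplified C rendering of the directory update that accelerates the migration, recording the destination cache as a sharer of each transferred block so the blocks can move cache-to-cache rather than being refetched after misses; the directory layout and lookup are assumptions.

      #include <stddef.h>
      #include <stdint.h>

      typedef struct { uint64_t addr; uint32_t sharers; } dir_entry; /* sharers: cache bitmask */

      static dir_entry *dir_lookup(dir_entry *dir, size_t n, uint64_t addr)
      {
          for (size_t i = 0; i < n; i++)
              if (dir[i].addr == addr)
                  return &dir[i];
          return NULL;
      }

      /* Update the directory (held at a third tile) so the destination cache is
       * recorded as sharing each block; the blocks themselves can then be pushed
       * from the first cache to the second cache to complete the migration. */
      static void transfer_blocks(dir_entry *dir, size_t dir_n,
                                  const uint64_t *blocks, size_t n, int dst_cache)
      {
          for (size_t i = 0; i < n; i++) {
              dir_entry *e = dir_lookup(dir, dir_n, blocks[i]);
              if (e)
                  e->sharers |= 1u << dst_cache;
              /* data movement between the two caches is omitted in this sketch */
          }
      }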
  • Patent number: 9760488
    Abstract: A cache system is provided. The cache system includes a first cache and a second cache. The first cache is configured for storing a first status of a plurality of data. The second cache is configured for storing a table. The table includes the plurality of data arranged from a highest level to a lowest level. The cache system is configured to update the first status of the plurality of data in the first cache. The cache system is further configured to update the table in the second cache according to the first status of the plurality of data.
    Type: Grant
    Filed: August 13, 2015
    Date of Patent: September 12, 2017
    Assignee: MACRONIX INTERNATIONAL CO., LTD.
    Inventors: Hung-Sheng Chang, Hsiang-Pang Li, Yuan-Hao Chang, Tei-Wei Kuo
  • Patent number: 9749190
    Abstract: A computer-implemented method is operable on a device having hardware including memory and at least one processor. The method includes maintaining invalidation information in a list at a service on the device, where the invalidation information includes a plurality of invalidation commands. At least some of the invalidation commands in the list are selectively combined to form at least one other invalidation command in the list.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: August 29, 2017
    Assignee: LEVEL 3 COMMUNICATIONS, LLC
    Inventors: Christopher Newton, Lewis Robert Varney, Laurence R. Lipstone, William Crowder, Andrew Swart
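    Illustrative sketch (not from the patent text): one plausible combining rule in C, where an invalidation command ending in a wildcard subsumes narrower commands with the same prefix, so that, for example, "/images/logo.png" and "/images/*" collapse into the single command "/images/*"; the pattern syntax and helpers are assumptions.

      #include <stdbool.h>
      #include <string.h>

      /* True if `broad` ends in '*' and covers `narrow` (and is not identical). */
      static bool subsumes(const char *broad, const char *narrow)
      {
          size_t n = strlen(broad);
          return n > 0 && broad[n - 1] == '*' &&
                 strcmp(broad, narrow) != 0 &&
                 strncmp(broad, narrow, n - 1) == 0;
      }

      /* Drop every command covered by another one; returns the new list length. */
      static size_t combine(const char *cmds[], size_t count)
      {
          size_t out = 0;
          for (size_t i = 0; i < count; i++) {
              bool covered = false;
              for (size_t j = 0; j < count; j++)
                  if (i != j && subsumes(cmds[j], cmds[i]))
                      covered = true;
              if (!covered)
                  cmds[out++] = cmds[i];
          }
          return out;
      }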
  • Patent number: 9710267
    Abstract: A Load Count to Block Boundary instruction is provided that returns the distance from a specified memory address to a specified memory boundary. The memory boundary is a boundary that is not to be crossed in loading data. The boundary may be specified in a number of ways, including, but not limited to, a variable value in the instruction text, a fixed instruction text value encoded in the opcode, or a register based boundary; or it may be dynamically determined.
    Type: Grant
    Filed: March 3, 2013
    Date of Patent: July 18, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan D. Bradbury, Michael K. Gschwind, Christian Jacobi, Eric M. Schwarz, Timothy J. Slegel
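    Illustrative sketch (not from the patent text): the arithmetic behind the count for a power-of-two boundary, in C; the real instruction's operand encoding and any architectural cap on the returned count are not modeled here.

      #include <assert.h>
      #include <stdint.h>

      /* Bytes from addr up to (and not crossing) the next block boundary,
       * for a power-of-two boundary size such as 64, 128, ..., 4096. */
      static inline uint64_t count_to_block_boundary(uint64_t addr, uint64_t boundary)
      {
          return boundary - (addr & (boundary - 1));
      }

      int main(void)
      {
          assert(count_to_block_boundary(0x100, 64) == 64); /* already aligned     */
          assert(count_to_block_boundary(0x13A, 64) == 6);  /* 6 bytes up to 0x140 */
          return 0;
      }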
  • Patent number: 9710266
    Abstract: A Load Count to Block Boundary instruction is provided that returns the distance from a specified memory address to a specified memory boundary. The memory boundary is a boundary that is not to be crossed in loading data. The boundary may be specified in a number of ways, including, but not limited to, a variable value in the instruction text, a fixed instruction text value encoded in the opcode, or a register based boundary; or it may be dynamically determined.
    Type: Grant
    Filed: March 15, 2012
    Date of Patent: July 18, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan D. Bradbury, Michael K. Gschwind, Christian Jacobi, Eric M. Schwarz, Timothy J. Slegel
  • Patent number: 9697130
    Abstract: A cache automation module detects the deployment of storage resources in a virtual computing environment and, in response, automatically configures cache services for the detected storage resources. The automation module may detect new storage resources by monitoring storage operations and/or requests, by use of an interface provided by the virtualization infrastructure, and/or the like. The cache automation module may deterministically identify storage resources that are to be cached and automatically configure caching services for the identified storage resources.
    Type: Grant
    Filed: July 16, 2014
    Date of Patent: July 4, 2017
    Assignee: SanDisk Technologies LLC
    Inventors: Jaidil Karippara, Pavan Pamula, Yuepeng Feng, Vikuto Atoka Sema
  • Patent number: 9665658
    Abstract: One embodiment provides an eviction system for dynamically-sized caching comprising a non-blocking data structure for maintaining one or more data nodes. Each data node corresponds to a data item in a cache. Each data node comprises information relating to a corresponding data item. The eviction system further comprises an eviction module configured for removing a data node from the data structure, and determining whether the data node is a candidate for eviction based on information included in the data node. If the data node is not a candidate for eviction, the eviction module inserts the data node back into the data structure; otherwise the eviction module evicts the data node and a corresponding data item from the system and the cache, respectively. Data nodes of the data structure circulate through the eviction module until a candidate for eviction is determined.
    Type: Grant
    Filed: December 30, 2013
    Date of Patent: May 30, 2017
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Gage W. Eads, Juan A. Colmenares
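    Illustrative sketch (not from the patent text): the circulation logic in C, with a plain queue standing in for the non-blocking data structure; a recently referenced node is reinserted for another pass, and an unreferenced node becomes the eviction candidate. The second-chance policy and field names are assumptions.

      #include <stdbool.h>
      #include <stddef.h>

      typedef struct node {
          struct node *next;
          int          key;         /* identifies the cached item       */
          bool         referenced;  /* set on access, cleared per pass  */
      } node;

      typedef struct { node *head, *tail; } queue;  /* stand-in for the non-blocking structure */

      static node *pop(queue *q)
      {
          node *n = q->head;
          if (n) { q->head = n->next; if (!q->head) q->tail = NULL; }
          return n;
      }

      static void push(queue *q, node *n)
      {
          n->next = NULL;
          if (q->tail) q->tail->next = n; else q->head = n;
          q->tail = n;
      }

      /* Circulate nodes through the eviction module until a candidate is found. */
      static node *evict_one(queue *q)
      {
          node *n;
          while ((n = pop(q)) != NULL) {
              if (n->referenced) {
                  n->referenced = false;  /* not a candidate: reinsert for another pass   */
                  push(q, n);
              } else {
                  return n;               /* candidate: caller also drops the cached item */
              }
          }
          return NULL;
      }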
  • Patent number: 9639276
    Abstract: A request is received over a link that requests a particular line in memory. A directory state record is identified in memory that identifies a directory state of the particular line. A type of the request is identified from the request. It is determined that the directory state of the particular line is to change from the particular state to a new state based on the directory state of the particular line and the type of the request. The directory state record is changed, in response to receipt of the request, to reflect the new state. A copy of the particular line is sent in response to the request.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: May 2, 2017
    Assignee: Intel Corporation
    Inventor: Robert G. Blankenship
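    Illustrative sketch (not from the patent text): a generic C illustration of "the new directory state depends on the current state and the type of the request"; the specific states and transition rules here are invented for illustration and are not the patent's protocol.

      typedef enum { DIR_INVALID, DIR_SHARED, DIR_EXCLUSIVE } dir_state;
      typedef enum { REQ_READ_SHARED, REQ_READ_OWN } req_type;

      /* Pick the new state from the recorded state and the request type; the
       * directory record is then rewritten and a copy of the line is returned. */
      static dir_state next_state(dir_state cur, req_type req)
      {
          if (req == REQ_READ_OWN)
              return DIR_EXCLUSIVE;                   /* requester becomes sole owner */
          return (cur == DIR_INVALID) ? DIR_EXCLUSIVE /* first reader may own it      */
                                      : DIR_SHARED;   /* otherwise shared             */
      }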
  • Patent number: 9619396
    Abstract: A memory controller receives a memory invalidation request that references a line of far memory in a two level system memory topology with far memory and near memory, identifies an address of the near memory corresponding to the line, and reads data at the address to determine whether a copy of the line is in the near memory. Data of the address is to be flushed to the far memory if the data includes a copy of another line of the far memory and the copy of the other line is dirty. A completion is sent for the memory invalidation request to indicate that a coherence agent is granted exclusive access to the line. With exclusive access, the line is to be modified to generate a modified version of the line and the address of the near memory is to be overwritten with the modified version of the line.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: April 11, 2017
    Assignee: Intel Corporation
    Inventors: Robert G. Blankenship, Jeffrey D. Chamberlain, Yen-Cheng Liu, Vedaraman Geetha
  • Patent number: 9600438
    Abstract: A method and apparatus for controlling and coordinating a multi-component system. Each component in the system contains a computing device. Each computing device is controlled by software running on the computing device. A first portion of the software resident on each computing device is used to control operations needed to coordinate the activities of all the components in the system. This first portion is known as a “coordinating process.” A second portion of the software resident on each computing device is used to control local processes (local activities) specific to that component. Each component in the system is capable of hosting and running the coordinating process. The coordinating process continually cycles from component to component while it is running.
    Type: Grant
    Filed: March 24, 2011
    Date of Patent: March 21, 2017
    Assignee: Florida Institute for Human and Machine Cognition, Inc.
    Inventors: Kenneth M. Ford, Niranjan Suri
  • Patent number: 9569282
    Abstract: Fine-grained parallelism within isolated object graphs is used to provide safe concurrent operations within the isolated object graphs. One example provides an abstraction labeled IsolatedObjectGraph that encapsulates at least one object graph, but often two or more object graphs, rooted by an instance of a type member. By encapsulating the object graph, no references from outside of the object graph are allowed to objects inside of the object graph. Also, the encapsulated object graph does not contain references to objects outside of the graphs. The isolated object graphs provide for safe data parallel operations, including safe data parallel mutations such as for each loops. In an example, the ability to isolate the object graph is provided through type permissions.
    Type: Grant
    Filed: April 24, 2009
    Date of Patent: February 14, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: John J. Duffy, Niklas Gustafsson, Vance Morrison
  • Patent number: 9558147
    Abstract: A system and method for monitoring a plurality of data streams is disclosed. At a first processing stage, a first memory area is associated to an element of a plurality of data streams. Upon arrival of a frame associated with one of the plurality of data streams, a second memory area is associated to the arrived frame based on the element. In the second memory area, a data indicating an arrival of the arrived frame is recorded and on a successful recording, the frame is forwarded to a second processing stage. An independent process executes at a preselected time interval to erase contents of the first memory area.
    Type: Grant
    Filed: June 12, 2014
    Date of Patent: January 31, 2017
    Assignee: NXP B.V.
    Inventors: Nicola Concer, Sujan Pandey, Hubertus Gerardus Hendrikus Vermeulen
  • Patent number: 9501418
    Abstract: The present disclosure relates to caches, methods, and systems for using an invalidation data area. The cache can include a journal configured for tracking data blocks, and an invalidation data area configured for tracking invalidated data blocks associated with the data blocks tracked in the journal. The invalidation data area can be on a separate cache region from the journal. A method for invalidating a cache block can include determining a journal block tracking a memory address associated with a received write operation. The method can also include determining a mapped journal block based on the journal block and on an invalidation record. The method can also include determining whether write operations are outstanding. If so, the method can include aggregating the outstanding write operations and performing a single write operation based on the aggregated write operations.
    Type: Grant
    Filed: June 26, 2014
    Date of Patent: November 22, 2016
    Assignee: HGST Netherlands B.V.
    Inventor: Pulkit Misra
  • Patent number: 9477514
    Abstract: A TRANSACTION BEGIN instruction and a TRANSACTION END instruction are provided. The TRANSACTION BEGIN instruction causes either a constrained or nonconstrained transaction to be initiated, depending on a field of the instruction. A constrained transaction has one or more restrictions associated therewith, while a nonconstrained transaction is not limited in the manner of a constrained transaction. The TRANSACTION END instruction ends the transaction started by the TRANSACTION BEGIN instruction.
    Type: Grant
    Filed: March 7, 2013
    Date of Patent: October 25, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Dan F. Greiner, Christian Jacobi, Marcel Mitran, Timothy J. Slegel
  • Patent number: 9471446
    Abstract: A system includes a plurality of storage devices and an information processing device including a cache memory. The information processing device is configured to access the plurality of storage devices. When a failure has occurred in a first storage device included in the plurality of storage devices, the information processing device performs a procedure including: specifying a second storage device in which no failure has occurred, among the plurality of storage devices, creating an invisible file including a cache that has been stored in the cache memory and is to be stored in the first storage device, and storing the created invisible file in the second storage device. The information processing device stores the cache included in the invisible file stored in the second storage device, in the first storage device when the failure of the first storage device is eliminated.
    Type: Grant
    Filed: October 6, 2014
    Date of Patent: October 18, 2016
    Assignee: FUJITSU LIMITED
    Inventor: Yoshihisa Chujo
  • Patent number: 9471503
    Abstract: A computer system processor of a multi-processor computer system having a cache subsystem, the computer system having exclusive ownership of a cache line, executes a demote instruction to cause its own exclusively owned cache line to become shared or read-only in the computer processor cache.
    Type: Grant
    Filed: February 11, 2016
    Date of Patent: October 18, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Chung-Lung Kevin Shum, Kathryn Marie Jackson, Charles Franklin Webb
  • Patent number: 9465739
    Abstract: A system, method, and computer program product are provided for conditionally sending a request for data to a node based on a determination. In operation, a first request for data is sent to a cache of a first node. Additionally, it is determined whether the first request can be satisfied within the first node, where the determining includes at least one of determining a type of the first request and determining a state of the data in the cache. Furthermore, a second request for the data is conditionally sent to a second node, based on the determination.
    Type: Grant
    Filed: October 17, 2013
    Date of Patent: October 11, 2016
    Assignee: Broadcom Corporation
    Inventors: Gaurav Garg, David T. Hass
  • Patent number: 9460011
    Abstract: A computer system that includes a processor, a memory and a processor cache for the main memory with a check-in-cache instruction may be provided. The processor executes computer readable instructions stored in the memory that include receiving a check-in-cache instruction from a check-in-cache storage location. The instructions also include responsive to receiving the check-in-cache instruction, determining whether data bytes specified by the check-in-cache instruction are at least partially available in the processor cache. The instructions further include storing a condition code of the determination result in a storage location.
    Type: Grant
    Filed: December 14, 2015
    Date of Patent: October 4, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Marco Kraemer, Carsten Otte, Christoph Raisch
  • Patent number: 9454491
    Abstract: Systems and methods for accessing a unified translation lookaside buffer (TLB) are disclosed. A method includes receiving an indicator of a level one translation lookaside buffer (L1TLB) miss corresponding to a request for a virtual address to physical address translation, searching a cache that includes virtual addresses and page sizes that correspond to translation table entries (TTEs) that have been evicted from the L1TLB, where a page size is identified, and searching a second level TLB and identifying a physical address that is contained in the second level TLB. Access is provided to the identified physical address.
    Type: Grant
    Filed: January 6, 2015
    Date of Patent: September 27, 2016
    Assignee: SOFT MACHINES INC.
    Inventors: Karthikeyan Avudaiyappan, Mohammad Abdallah
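    Illustrative sketch (not from the patent text): a simplified C model of the lookup flow above, where a small cache of evicted translation entries supplies the page size, which determines how the virtual address is masked when probing the second-level TLB; the structure sizes and linear probes are assumptions.

      #include <stdbool.h>
      #include <stdint.h>

      typedef struct { uint64_t va; uint32_t page_size; bool valid; } evicted_tte;
      typedef struct { uint64_t va; uint64_t pa; uint32_t page_size; bool valid; } tlb_entry;

      #define EVICT_CACHE_SZ 32
      #define L2_TLB_SZ      512

      /* Called on an L1 TLB miss: find a matching evicted entry to learn the
       * page size, then probe the second-level TLB using that page size. */
      static bool l2_lookup(const evicted_tte *ec, const tlb_entry *l2,
                            uint64_t va, uint64_t *pa_out)
      {
          for (int i = 0; i < EVICT_CACHE_SZ; i++) {
              if (!ec[i].valid)
                  continue;
              uint64_t mask = ~((uint64_t)ec[i].page_size - 1);
              if ((ec[i].va & mask) != (va & mask))
                  continue;                          /* wrong page or page size */
              for (int j = 0; j < L2_TLB_SZ; j++)
                  if (l2[j].valid && l2[j].page_size == ec[i].page_size &&
                      (l2[j].va & mask) == (va & mask)) {
                      *pa_out = l2[j].pa | (va & ~mask);
                      return true;                   /* translation found in L2 */
                  }
          }
          return false;
      }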
  • Patent number: 9448954
    Abstract: The subject matter discloses a method for data coherency; the method comprising receiving an interrupt request for interrupting a CPU; wherein the interrupt request is from one of a plurality of modules; wherein the interrupt request notifies a writing instruction of a first data by the one of the plurality of modules to a shared memory; and wherein the shared memory is accessible to the plurality of modules through a shared bus; suspending the interrupt request; validating a completion of an execution of the writing instruction; wherein the validating is performed after the suspending; and resuming the interrupt request after the completion of the execution of the writing is validated, whereby the CPU is notified about the completion of the execution of the writing instruction.
    Type: Grant
    Filed: February 28, 2011
    Date of Patent: September 20, 2016
    Assignee: DSP GROUP LTD.
    Inventors: Leonardo Vainsencher, Yaron P. Folk, Yuval Itkin
  • Patent number: 9430410
    Abstract: A method for supporting a plurality of load accesses is disclosed. A plurality of requests to access a data cache is received, and in response, a tag memory is accessed that maintains a plurality of copies of tags for each entry in the data cache. Tags are identified that correspond to individual requests. The data cache is accessed based on the tags that correspond to the individual requests. A plurality of requests to access the same block of the plurality of blocks causes an access arbitration that is executed in the same clock cycle as the access of the tag memory.
    Type: Grant
    Filed: July 30, 2012
    Date of Patent: August 30, 2016
    Assignee: SOFT MACHINES, INC.
    Inventors: Karthikeyan Avudaiyappan, Mohammad Abdallah
  • Patent number: 9432679
    Abstract: A data processing system is provided for processing video data on a window basis. At least one memory unit (L1) is provided for fetching and storing video data from an image memory (IM) according to a first window (R) in a first scanning order. At least one second memory unit (L0) is provided for fetching and storing video data from the first memory unit (L1) according to a second window in a second scanning order (SO). Furthermore, at least one processing unit (PU) is provided for performing video processing on the video data of the second window as stored in the at least one second memory unit (L0) based on the second scanning order (SO). The second scanning order (SO) is a meandering scanning order being orthogonal to the first scanning order (SO1).
    Type: Grant
    Filed: October 27, 2006
    Date of Patent: August 30, 2016
    Assignee: ENTROPIC COMMUNICATIONS, LLC
    Inventors: Aleksandar Beric, Ramanathan Sethuraman
  • Patent number: 9424198
    Abstract: A processor includes at least one execution unit, a near memory, and memory management logic to manage the near memory and a far memory external to the processor as a unified exclusive memory. Each of a plurality of data blocks may be exclusively stored in either the far memory or the near memory. The unified exclusive memory space may be divided into a plurality of sets and a plurality of ways. In response to a request for a first block stored in the far memory, the memory management logic may move the first block from the far memory to the near memory, and may move a second block from the near memory to the far memory. A tag buffer may store tags associated with blocks being moved between the near memory and the far memory. Fill and drain buffers may also be used. Other implementations are described and claimed.
    Type: Grant
    Filed: November 30, 2012
    Date of Patent: August 23, 2016
    Assignee: Intel Corporation
    Inventors: Shlomo Raikin, Zvika Greenfield
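    Illustrative sketch (not from the patent text): the exclusive near/far swap in C at its simplest, where bringing a far block into a near-memory way pushes the block that occupied the way back out to far memory; the buffers and tags are reduced to plain arrays and the set/way bookkeeping is omitted.

      #include <stdint.h>
      #include <string.h>

      #define BLOCK_SIZE 64

      /* A block lives in exactly one of near or far memory, so a fill is a swap:
       * the incoming far block lands in the near way, and the displaced near
       * block is drained to the far location the incoming block vacated. */
      static void swap_blocks(uint8_t *near_way, uint64_t *near_tag,
                              uint8_t *far_block, uint64_t far_addr)
      {
          uint8_t drain[BLOCK_SIZE];                 /* drain buffer              */
          memcpy(drain,    near_way,  BLOCK_SIZE);   /* near block -> drain       */
          memcpy(near_way, far_block, BLOCK_SIZE);   /* far block  -> near (fill) */
          memcpy(far_block, drain,    BLOCK_SIZE);   /* drained block -> far      */
          *near_tag = far_addr;                      /* tag records the new owner */
      }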
  • Patent number: 9396102
    Abstract: For cache/data management in a computing storage environment, incoming data segments into a Non Volatile Storage (NVS) device of the computing storage environment are validated against a bitmap to determine if the incoming data segments are currently in use. Those of the incoming data segments determined to be currently in use are designated to the computing storage environment to protect data integrity.
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: July 19, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kevin John Ash, Michael Thomas Benhase, Lokesh Mohan Gupta, Kenneth Wayne Todd
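    Illustrative sketch (not from the patent text): a C sketch of validating incoming segments against an in-use bitmap and reporting any segment already marked in use so the storage environment can protect data integrity; the bitmap size and reporting interface are assumptions.

      #include <stdbool.h>
      #include <stdint.h>

      #define SEGMENTS 4096

      static uint64_t in_use_bitmap[SEGMENTS / 64];   /* one bit per NVS segment */

      static inline bool segment_in_use(unsigned seg)
      {
          return (in_use_bitmap[seg / 64] >> (seg % 64)) & 1u;
      }

      /* Returns how many incoming segments were found to be currently in use;
       * those are written to `flagged` for the environment to handle. */
      static unsigned validate_segments(const unsigned *segs, unsigned n,
                                        unsigned *flagged, unsigned max_flagged)
      {
          unsigned count = 0;
          for (unsigned i = 0; i < n; i++)
              if (segment_in_use(segs[i]) && count < max_flagged)
                  flagged[count++] = segs[i];
          return count;
      }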
  • Patent number: 9390023
    Abstract: According to at least one example embodiment, a method and corresponding apparatus for conditionally storing data include initiating an atomic sequence by executing, by a core processor, an instruction/operation designed to initiate an atomic sequence. Executing the instruction designed to initiate the atomic sequence includes loading content associated with a memory location into a first cache memory, and maintaining an indication of the memory location and a copy of the corresponding content loaded. A conditional storing operation is then performed, the conditional storing operation includes a compare-and-swap operation, executed by a controller associated with a second cache memory, based on the maintained copy of the content and the indication of the memory location.
    Type: Grant
    Filed: October 3, 2013
    Date of Patent: July 12, 2016
    Assignee: Cavium, Inc.
    Inventors: Richard E. Kessler, David H. Asher, Michael Sean Bertone, Shubhendu S. Mukherjee, Wilson P. Snyder, II, John M. Perveiler, Christopher J. Comis
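    Illustrative sketch (not from the patent text): a software analogue in C11 of the sequence above, where starting the atomic sequence records the location and the loaded value, and the conditional store is a compare-and-swap against that saved copy; the split across two cache levels in hardware is not modeled.

      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stdint.h>

      typedef struct {
          _Atomic uint64_t *addr;  /* remembered memory location          */
          uint64_t          seen;  /* copy of the content loaded at start */
      } atomic_seq;

      /* Initiate the atomic sequence: load the location, keep address and value. */
      static uint64_t seq_begin(atomic_seq *s, _Atomic uint64_t *addr)
      {
          s->addr = addr;
          s->seen = atomic_load(addr);
          return s->seen;
      }

      /* Conditional store: succeeds only if the location still holds the value
       * observed at seq_begin(), i.e. a compare-and-swap on the saved copy. */
      static bool seq_store_conditional(atomic_seq *s, uint64_t new_value)
      {
          return atomic_compare_exchange_strong(s->addr, &s->seen, new_value);
      }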
  • Patent number: 9372800
    Abstract: A multi-chip system includes multiple chip devices configured to communicate to each other and share resources. According to at least one example embodiment, a method of providing memory coherence within the multi-chip system comprises maintaining, at a first chip device of the multi-chip system, state information indicative of one or more states of one or more copies, residing in one or more chip devices of the multi-chip system, of a data block. The data block is stored in a memory associated with one of the multiple chip devices. The first chip device receives a message associated with a copy of the one or more copies of the data block from a second chip device of the multiple chip devices, and, in response, executes a scheme of one or more actions determined based on the state information maintained at the first chip device and the message received.
    Type: Grant
    Filed: March 7, 2014
    Date of Patent: June 21, 2016
    Assignee: Cavium, Inc.
    Inventors: Isam Akkawi, Richard E. Kessler, David H. Asher, Bryan W. Chin, Wilson P. Snyder, II
  • Patent number: 9367464
    Abstract: A method is described that includes alternating cache requests sent to a tag array between data requests and dataless requests.
    Type: Grant
    Filed: December 30, 2011
    Date of Patent: June 14, 2016
    Assignee: Intel Corporation
    Inventor: Larisa Novakovsky
  • Patent number: 9336146
    Abstract: Technologies are generally described herein for accelerating a cache state transfer in a multicore processor. The multicore processor may include first, second, and third tiles. The multicore processor may initiate migration of a thread executing on the first core at the first tile from the first tile to the second tile. The multicore processor may determine block addresses of blocks to be transferred from a first cache at the first tile to a second cache at the second tile, and identify that a directory at the third tile corresponds to the block addresses. The multicore processor may update the directory to reflect that the second cache shares the blocks. The multicore processor may transfer the blocks from the first cache in the first tile to the second cache in the second tile effective to complete the migration of the thread from the first tile to the second tile.
    Type: Grant
    Filed: December 29, 2010
    Date of Patent: May 10, 2016
    Assignee: EMPIRE TECHNOLOGY DEVELOPMENT LLC
    Inventor: Yan Solihin
  • Patent number: 9336144
    Abstract: Three-dimensional processing systems are provided which have multiple layers of conjoined chips, wherein one or more chip layers include processor cores that share cache hierarchies over multiple chip layers. The caches can be partitioned, conjoined, and managed according to various sets of rules and configurations.
    Type: Grant
    Filed: July 25, 2013
    Date of Patent: May 10, 2016
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: Alper Buyuktosunoglu, Philip G. Emma, Allan M. Hartstein, Michael B. Healy, Krishnan K. Kailas
  • Patent number: 9329800
    Abstract: A data storage system and associated method are provided wherein a policy engine continuously collects qualitative information about a network load to the data storage system in order to dynamically characterize the load and continuously correlates the load characterization to the content of a command queue of transfer requests for writeback commands and host read commands, selectively limiting the content with respect to writeback commands to only those transfer requests for writeback data that are selected on a physical zone basis of a plurality of predefined physical zones of a storage media.
    Type: Grant
    Filed: June 29, 2007
    Date of Patent: May 3, 2016
    Assignee: Seagate Technology LLC
    Inventors: Clark Edward Lubbers, Robert Michael Lester
  • Patent number: 9311244
    Abstract: An interconnect has transaction tracking circuitry for enforcing ordering of a set of data access transactions so that they are issued to slave devices in an order in which they are received from master devices. The transaction tracking circuitry is reused for also enforcing ordering of snoop transactions which are triggered by the set of data access transactions, for snooping master devices identified by a snoop filter as holding cache data for the target address of the transactions.
    Type: Grant
    Filed: August 25, 2014
    Date of Patent: April 12, 2016
    Assignee: ARM Limited
    Inventors: Sean James Salisbury, Andrew David Tune, Daniel Sara
  • Patent number: 9304863
    Abstract: A method of backstepping through a program execution includes dividing the program execution into a plurality of epochs, wherein the program execution is performed by an active core, determining, during a subsequent epoch of the plurality of epochs, that a rollback is to be performed, performing the rollback including re-executing a previous epoch of the plurality of epochs, wherein the previous epoch includes one or more instructions of the program execution stored by a checkpointing core, and adjusting a granularity of the plurality of epochs according to a frequency of the rollback.
    Type: Grant
    Filed: June 27, 2013
    Date of Patent: April 5, 2016
    Assignee: International Business Machines Corporation
    Inventors: Harold W. Cain, III, David M. Daly, Kattamuri Ekanadham, Jose E. Moreira, Mauricio J. Serrano
  • Patent number: 9304946
    Abstract: Technologies are described herein for providing a hardware-based accelerator adapted to manage copy-on-write. Some example technologies may identify a read request adapted to read a block at an original memory address. The technologies may utilize the hardware-based accelerator to determine whether the block is located at the original memory address. When a determination is made that the block is located at the original memory address, the technologies may utilize the hardware-based accelerator to pass the original memory address so that the read request can be performed utilizing the original memory address. When a determination is made that the block is not located in the memory at the original memory address, the technologies may utilize the hardware-based accelerator to generate a new memory address and to pass the new memory address so that the read request can be performed utilizing the new memory address.
    Type: Grant
    Filed: June 25, 2012
    Date of Patent: April 5, 2016
    Assignee: EMPIRE TECHNOLOGY DEVELOPMENT LLC
    Inventor: Yan Solihin
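    Illustrative sketch (not from the patent text): the address-resolution step in C, where the accelerator consults a remap table and returns either the original address or the new address generated when the block was copied; the table format and linear search are assumptions.

      #include <stddef.h>
      #include <stdint.h>

      typedef struct { uint64_t original; uint64_t current; } remap_entry;

      /* If a copy-on-write relocated the block, redirect the read to the new
       * address; otherwise the read proceeds at the original address. */
      static uint64_t resolve_read_address(const remap_entry *table, size_t n,
                                           uint64_t original_addr)
      {
          for (size_t i = 0; i < n; i++)
              if (table[i].original == original_addr)
                  return table[i].current;    /* block was copied elsewhere */
          return original_addr;               /* block still in place       */
      }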
  • Patent number: 9298624
    Abstract: The present disclosure relates to systems, methods, and computer program products for keeping multiple caches updated, or coherent, on multiple servers when the multiple caches contain independent copies of cached data. Example methods may include receiving a request to write data to a block of a first cache associated with a first server in a clustered server environment. The methods may also include identifying a second cache storing a copy of the block, where the second cache is associated with a second server in the clustered environment. The methods may further include transmitting a request to update the second cache with the received write data, and upon receiving a subsequent request to write subsequent data, identifying a third cache for invalidating based on access patterns of the blocks, where the third cache is associated with a third server in the clustered environment.
    Type: Grant
    Filed: May 14, 2014
    Date of Patent: March 29, 2016
    Assignee: HGST Netherlands B.V.
    Inventors: Jin Ren, Ken Qing Yang, Gregory Evan Fedynyshyn
  • Patent number: 9298665
    Abstract: This invention optimizes non-shared accesses and avoids dependencies across coherent endpoints to ensure bandwidth across the system even when sharing. The coherence controller is distributed across all coherent endpoints. The coherence controller for each memory endpoint keeps a state around for each coherent access to ensure the proper ordering of events. The coherence controller of this invention uses First-In-First-Out allocation to ensure full utilization of the resources before stalling and simplicity of implementation. The coherence controller provides Snoop Command/Response ID Allocation per memory endpoint.
    Type: Grant
    Filed: October 22, 2013
    Date of Patent: March 29, 2016
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Matthew D Pierson, Kai Chirca
  • Patent number: 9292449
    Abstract: A cache memory data compression and decompression technique is described. A processor device includes a memory controller unit (MCU) coupled to a main memory and a cache memory. The MCU includes a cache memory data compression and decompression module that compresses data received from the main memory. The compressed data may then be stored in the cache memory. The cache memory data compression and decompression module may also decompress data that is stored in the cache memory. For example, in response to a cache hit for data requested by a processor, the compressed data in the cache memory may be decompressed and subsequently read or operated upon by the processor.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: March 22, 2016
    Assignee: Intel Corporation
    Inventors: Alaa R. Alameldeen, Niranjan L. Cooray, Jayesh Gaur, Steven D. Pudar, Manuel A. Aguilar Arreola, Margareth E. Marrugo, Chinnakrishnan Ballapuram
  • Patent number: 9280480
    Abstract: A facility and cache machine instruction of a computer architecture for specifying a target cache level and a target cache attribute of interest for obtaining a cache attribute of one or more target caches. The requested cache attribute of the target cache(s) is saved in a register.
    Type: Grant
    Filed: July 15, 2013
    Date of Patent: March 8, 2016
    Assignee: International Business Machines Corporation
    Inventors: Dan F. Greiner, Timothy J. Slegel
  • Patent number: 9262318
    Abstract: A system including a processor, a memory controller, and a flash memory module. The processor is configured to generate a request to retrieve information corresponding to an address. The memory controller module includes a cache memory configured to store information, and a cache control logic module configured to determine whether the cache memory stores the information corresponding to the address, if the cache memory stores the information corresponding to the address, retrieve the information from the cache memory and provide the information to the processor, and if the cache memory does not store the information corresponding to the address, generate a flash memory read request based on the address. The flash memory module is configured to, in response to receiving the flash memory read request, provide the information corresponding to the address to the memory controller module.
    Type: Grant
    Filed: March 12, 2014
    Date of Patent: February 16, 2016
    Assignee: Marvell International Ltd.
    Inventors: Satya Vadlamani, Sindhu Rajaram, Yongjiang Wang, Lin Chen
  • Patent number: 9256527
    Abstract: The present idea provides high read and write performance from/to a solid state memory device. The main memory of the controller is not blocked by a complete address mapping table covering the entire memory device. Instead, such a table is stored in the memory device itself, and only selected portions of address mapping information are buffered in the main memory in a read cache and a write cache. A separation of the read cache from the write cache enables an address mapping entry to be evicted from the read cache without the need to update the related flash memory page storing such entry in the flash memory device. By this design, the read cache may advantageously be stored on a DRAM even without power down protection, while the write cache may preferably be implemented in nonvolatile or other fail-safe memory. This leads to a reduction of the overall provisioning of nonvolatile or fail-safe memory and to an improved scalability and performance.
    Type: Grant
    Filed: July 25, 2011
    Date of Patent: February 9, 2016
    Assignee: International Business Machines Corporation
    Inventors: Werner Bux, Robert Haas, Xiao-Yu Hu, Roman Pletka
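    Illustrative sketch (not from the patent text): the asymmetry between the two caches in C, where evicting a read-cache mapping entry needs no flash update (the authoritative entry already resides in the device) while evicting a write-cache entry must first be persisted; the entry layout and cache sizes are assumptions.

      #include <stdbool.h>
      #include <stdint.h>

      typedef struct { uint64_t lba; uint64_t flash_page; bool valid; } map_entry;

      #define READ_CACHE_SZ  1024   /* may live in plain DRAM               */
      #define WRITE_CACHE_SZ  256   /* nonvolatile or otherwise fail-safe   */

      static map_entry read_cache[READ_CACHE_SZ];
      static map_entry write_cache[WRITE_CACHE_SZ];

      /* Read-cache eviction: drop the entry; flash already holds the mapping. */
      static void evict_read_entry(unsigned i)
      {
          read_cache[i].valid = false;
      }

      /* Write-cache eviction: persist the updated mapping to the flash-resident
       * mapping table before the entry is released. */
      static void evict_write_entry(unsigned i,
                                    void (*persist)(uint64_t lba, uint64_t flash_page))
      {
          persist(write_cache[i].lba, write_cache[i].flash_page);
          write_cache[i].valid = false;
      }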