Cache Status Data Bit Patents (Class 711/144)
  • Patent number: 11119770
    Abstract: Performing atomic store-and-invalidate operations in processor-based devices is disclosed. In this regard, a processing element (PE) of one or more PEs of a processor-based device includes a store-and-invalidate logic circuit used by a memory access stage of an execution pipeline of the PE to perform an atomic store-and-invalidate operation. Upon receiving an indication to perform a store-and-invalidate operation (e.g., in response to a store-and-invalidate instruction execution) comprising a store address and store data, the memory access stage uses the store-and-invalidate logic circuit to write the store data to a memory location indicated by the store address, and to invalidate an instruction cache line corresponding to the store address in an instruction cache of the PE.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: September 14, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Thomas Philip Speier, Eric Francis Robinson
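The mechanism described in this abstract can be illustrated with a rough Python simulation (class and method names are invented here, not taken from the patent): the store data is written to memory and the instruction cache line covering the store address is invalidated in a single operation.

```python
# Hypothetical simulation of an atomic store-and-invalidate operation:
# write the store data to memory, then invalidate the instruction cache
# line that covers the store address. Sizes and names are illustrative.

LINE_SIZE = 64  # assumed instruction cache line size in bytes

class PESim:
    def __init__(self):
        self.memory = {}   # address -> stored value
        self.icache = {}   # line index -> cached instruction bytes

    def fill_icache_line(self, addr, data):
        self.icache[addr // LINE_SIZE] = data

    def store_and_invalidate(self, store_addr, store_data):
        # 1. Write the store data to the memory location.
        self.memory[store_addr] = store_data
        # 2. Invalidate the instruction cache line for store_addr.
        self.icache.pop(store_addr // LINE_SIZE, None)

pe = PESim()
pe.fill_icache_line(0x1000, b"old code")   # line holds stale code
pe.store_and_invalidate(0x1008, 0xAB)      # patch memory, drop the line
```

After the operation, the memory holds the new data and the stale instruction cache line is gone, so the next instruction fetch must re-read the patched memory.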
  • Patent number: 11113197
    Abstract: A method for joining an event stream with reference data includes loading a plurality of reference data snapshots from a reference data source into a cache. Punctuation events are supplied that indicate temporal validity for the plurality of reference data snapshots in the cache. A logical barrier is provided that restricts a flow of data events in the event stream to a cache lookup operation based on the punctuation events. The cache lookup operation is performed with respect to the data events in the event stream that are permitted to cross the logical barrier.
    Type: Grant
    Filed: April 8, 2019
    Date of Patent: September 7, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Boris Shulman, Shoupei Li, Alexander Alperovich, Xindi Zhang, Kanstantsyn Zoryn
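A minimal sketch of the barrier idea in this abstract, under invented names and assumptions: punctuation events advance a watermark, and a data event may only perform the cache lookup once punctuation has established temporal validity for its timestamp.

```python
# Illustrative sketch (not the patented implementation): punctuation
# events advance a watermark; a data event is held at the logical
# barrier until a reference snapshot valid for its timestamp is usable.

class SnapshotJoin:
    def __init__(self):
        self.snapshots = []   # list of (valid_from, reference_data)
        self.watermark = -1   # latest time covered by punctuation

    def load_snapshot(self, valid_from, data):
        self.snapshots.append((valid_from, data))

    def punctuate(self, time):
        self.watermark = time   # snapshots now valid up to this time

    def join(self, event_time, key):
        # Logical barrier: block the event until punctuation covers it.
        if event_time > self.watermark:
            return None   # event held at the barrier
        # Use the newest snapshot valid at the event's timestamp.
        valid = [s for t, s in self.snapshots if t <= event_time]
        return valid[-1].get(key) if valid else None

j = SnapshotJoin()
j.load_snapshot(0, {"a": 1})
j.load_snapshot(10, {"a": 2})
j.punctuate(5)   # punctuation: snapshots valid through time 5
```

An event at time 3 crosses the barrier and joins against the first snapshot; an event at time 7 is blocked because punctuation only covers up to time 5.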
  • Patent number: 11093388
    Abstract: The present disclosure relates to a method, an apparatus, an electronic device and a computer readable storage medium for accessing static random access memories. The method includes: receiving an access request for data associated with the static random access memories; writing a plurality of sections of the data into a plurality of different static random access memories in an interleaved manner in response to the access request being a write request for the data, each of the plurality of sections having its respective predetermined size; and reading the plurality of sections of the data from the plurality of static random access memories in an interleaved manner in response to the access request being a read request for the data, each of the plurality of sections having its respective predetermined size.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: August 17, 2021
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Xiaozhang Gong, Jing Wang
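The interleaving scheme in this abstract can be sketched as follows, with assumed section sizes and bank counts: a write splits the data into fixed-size sections distributed round-robin across banks, and a read gathers the sections back in the same order.

```python
# A minimal sketch of interleaved access across several SRAM banks.
# The section size and bank count are assumptions for illustration.

SECTION = 4      # bytes per section
NUM_BANKS = 3

banks = [dict() for _ in range(NUM_BANKS)]   # each dict: offset -> section

def interleaved_write(addr, data):
    sections = [data[i:i+SECTION] for i in range(0, len(data), SECTION)]
    for i, sec in enumerate(sections):
        banks[i % NUM_BANKS][addr + i] = sec   # round-robin placement

def interleaved_read(addr, length):
    n = (length + SECTION - 1) // SECTION      # sections to gather
    out = b"".join(banks[i % NUM_BANKS][addr + i] for i in range(n))
    return out[:length]

interleaved_write(0, b"hello interleaved sram")
```

Spreading consecutive sections over different banks lets successive accesses proceed in parallel instead of queuing on a single memory.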
  • Patent number: 11042479
    Abstract: Techniques are provided for providing a fully active and non-replicated block storage solution in a clustered filesystem that implements cache coherency. In a clustered filesystem where one or more data blocks are stored in a respective cache of each host node of a plurality of host nodes, a request is received at a host node of the plurality of host nodes from a client device to write the one or more data blocks to a shared storage device. In response to the request, the one or more data blocks are stored in the cache of the host node and a particular notification is sent to another host node of the plurality of host nodes that the one or more data blocks have been written to the shared storage device. In response to receiving the notification, the other host node invalidates a cached copy of the one or more data blocks in the respective cache of the other host node.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: June 22, 2021
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Donald Allan Graves, Jr., Frederick S. Glover, Alan David Brunelle, Pranav Dayananda Bagur, James Bensson
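A rough sketch of the write-notify-invalidate flow between host nodes sharing a storage device, with classes and method names invented for illustration:

```python
# Hedged sketch of cache coherency across clustered host nodes: a write
# goes to shared storage, and a notification makes every other node
# invalidate its cached copy of the written blocks.

class HostNode:
    def __init__(self, name, cluster, shared):
        self.name, self.cluster, self.shared = name, cluster, shared
        self.cache = {}
        cluster.append(self)

    def read(self, block):
        if block not in self.cache:            # cache miss
            self.cache[block] = self.shared.get(block)
        return self.cache[block]

    def write(self, block, data):
        self.shared[block] = data              # write to shared storage
        self.cache[block] = data               # keep own cache current
        for peer in self.cluster:              # notify the other nodes
            if peer is not self:
                peer.invalidate(block)

    def invalidate(self, block):
        self.cache.pop(block, None)            # drop the stale copy

cluster, shared = [], {"b1": "v0"}
a = HostNode("a", cluster, shared)
b = HostNode("b", cluster, shared)
first = b.read("b1")     # b caches the old value
a.write("b1", "v1")      # a's write invalidates b's cached copy
```

Because every node's cache is actively invalidated on a peer write, all nodes can serve reads and writes without replicating block state.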
  • Patent number: 11030113
    Abstract: An apparatus and method for efficient process-based compartmentalization.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: June 8, 2021
    Assignee: Intel Corporation
    Inventors: David M. Durham, Jacob Doweck, Michael Lemay, Deepak Gupta
  • Patent number: 11030109
    Abstract: A method of contention-free lookup including mapping a key of a cache lookup operation to determine an expected location of object data, walking a collision chain by determining whether a cache header signature matches a collision chain signature, when the cache header signature does not match, again walking the collision chain, when the cache header signature matches, determining whether a key in the cache header signature matches the key of the cache lookup operation, when the key does not match, reading a cache entry corresponding to the cache lookup operation, and repopulating the cache entry, when the key matches, acquiring an entry lock, and determining whether the key still matches after acquiring the entry lock, when the key still matches finding the object data in the expected location, and when the key no longer matches, releasing the entry lock, and again walking the collision chain.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: June 8, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Vijaya Jakkula, Siva Ramineni, Venkata Bhanu P. Gollapudi
  • Patent number: 11030112
    Abstract: Enhanced address space layout randomization is disclosed. For example, a memory includes first and second memory addresses of a plurality of memory addresses, where at least one of the plurality of memory addresses is a decoy address. A memory manager executes on a processor to generate a page table associated with the memory, which includes a plurality of page table entries. Each page table entry in the plurality of page table entries is flagged as in a valid state. The page table is instantiated with first and second page table entries of the plurality of page table entries associated with the first and second memory addresses respectively. A plurality of unused page table entries of the plurality of page table entries, including a decoy page table entry, is associated with a decoy address.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: June 8, 2021
    Assignee: Red Hat, Inc.
    Inventor: Michael Tsirkin
  • Patent number: 10990490
    Abstract: Establishing a synchronous replication relationship between two or more storage systems, including: identifying, for a dataset, a plurality of storage systems across which the dataset will be synchronously replicated; configuring one or more data communications links between each of the plurality of storage systems to be used for synchronously replicating the dataset; exchanging, between the plurality of storage systems, timing information for at least one of the plurality of storage systems; and establishing, in dependence upon the timing information for at least one of the plurality of storage systems, a synchronous replication lease, the synchronous replication lease identifying a period of time during which the synchronous replication relationship is valid.
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: April 27, 2021
    Assignee: Pure Storage, Inc.
    Inventors: Connor Brooks, Thomas Gill, Christopher Golden, David Grunwald, Steven Hodgson, Ronald Karr, Zoheb Shivani, Kunal Trivedi
  • Patent number: 10972537
    Abstract: The subject matter described herein relates to protecting in-flight transaction requests, where a client device is connected via at least two application servers to a backend server device that is capable of processing redundant transaction requests originated by the client device. A first instance of a transaction request identified by a transaction identifier is received at the backend server device. The first instance of the transaction request is processed and a transaction response is sent to the client device. The transaction response identified by the transaction identifier is saved in a cache. If a subsequent instance of the transaction request is received, the cached transaction response is sent to the client device.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: April 6, 2021
    Assignee: International Business Machines Corporation
    Inventors: Jose E. Garza, Stephen J. Hobson
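The idempotent replay described here can be sketched in a few lines (names invented for illustration): the backend caches each response by transaction identifier and replays the cached response for any redundant instance of the same request.

```python
# Hedged sketch of in-flight transaction protection: the backend server
# processes a transaction once, caches the response by transaction id,
# and replays the cached response for redundant request instances.

processed = []        # side effects of real processing
response_cache = {}   # transaction id -> cached response

def handle_request(txn_id, payload):
    if txn_id in response_cache:      # redundant instance: replay
        return response_cache[txn_id]
    processed.append(txn_id)          # process the transaction once
    response = f"ok:{payload}"
    response_cache[txn_id] = response
    return response

r1 = handle_request("t1", "debit 10")
r2 = handle_request("t1", "debit 10")   # retry via second app server
```

The client gets an identical response either way, and the side effect happens exactly once even though two application servers forwarded the request.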
  • Patent number: 10970208
    Abstract: A memory system includes a memory device including a main memory and a cache memory that includes a plurality of cache lines for caching data stored in the main memory, wherein each of the cache lines includes cache data, a valid bit indicating whether or not the corresponding cache data is valid, and a loading bit indicating whether or not read data of the main memory is being loaded; and a memory controller suitable for scheduling an operation of the memory device with reference to the valid bits and the loading bits.
    Type: Grant
    Filed: December 26, 2018
    Date of Patent: April 6, 2021
    Assignee: SK hynix Inc.
    Inventors: Seung-Gyu Jeong, Su-Hae Woo, Chang-Soo Ha
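The two per-line status bits in this abstract suggest a simple scheduling predicate, sketched here with invented names: the controller only schedules an operation against a line that is valid and not mid-load.

```python
# Minimal sketch of per-line valid and loading bits: the memory
# controller checks both bits before scheduling an operation.

class CacheLine:
    def __init__(self):
        self.data = None
        self.valid = False     # is the cached data valid?
        self.loading = False   # is main-memory read data being loaded?

def can_schedule(line):
    # Controller readiness check: valid data, and no load in flight.
    return line.valid and not line.loading

line = CacheLine()
line.loading = True   # main-memory read data is still being loaded
blocked = can_schedule(line)
line.data, line.valid, line.loading = b"\x00" * 64, True, False
```

Exposing the loading bit to the controller lets it reorder around in-flight fills instead of stalling on them.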
  • Patent number: 10949415
    Abstract: A computer program product, including: a computer readable storage device to store a computer readable program, wherein the computer readable program, when executed by a processor within a computer, causes the computer to perform operations for logging. The operations include: receiving a transaction including data and a log record corresponding to the data; writing the data to a data storage device; and writing the log record to a log space on a persistent memory device coupled to the data storage device.
    Type: Grant
    Filed: March 31, 2011
    Date of Patent: March 16, 2021
    Assignee: International Business Machines Corporation
    Inventors: Ru Fang, Bin He, Hui-I Hsiao, Chandrasekaran Mohan, Yun Wang
  • Patent number: 10911562
    Abstract: A method performed by a computing system includes, with the computing system, caching, within a cache module of the computing system, a resource request result from a web service, storing, by the computing system, metadata associated with the resource request result, the metadata including a set of entities used to produce the resource request result, wherein the metadata further includes a version of each entity associated with the resource request result, with the computing system, in response to determining that an entity from the set of entities has changed since the resource request result was cached, invalidating the cached resource request result, wherein determining that the entity from the set of entities has changed comprises determining that a version of the entity has changed.
    Type: Grant
    Filed: June 3, 2019
    Date of Patent: February 2, 2021
    Assignee: RED HAT, INC.
    Inventors: Pavel Slavicek, Rostislav Svoboda
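The version-tracking invalidation described above can be sketched as follows (entity names and structures are invented): each cached result stores the version of every entity used to produce it, and a lookup revalidates by comparing those versions against the current ones.

```python
# Illustrative sketch: a cached result is stored alongside the versions
# of the entities that produced it; if any entity's version has changed
# since caching, the entry is invalidated on lookup.

entity_versions = {"user": 1, "prefs": 3}   # current entity versions

cache = {}   # request key -> (result, {entity: version when cached})

def cache_result(key, result, entities):
    cache[key] = (result, {e: entity_versions[e] for e in entities})

def get_cached(key):
    if key not in cache:
        return None
    result, versions = cache[key]
    if any(entity_versions[e] != v for e, v in versions.items()):
        del cache[key]   # an entity changed: invalidate the entry
        return None
    return result

cache_result("page", "<html>ok</html>", ["user", "prefs"])
hit = get_cached("page")
entity_versions["prefs"] = 4   # an underlying entity changed
```

A lookup after the version bump finds the stale metadata and drops the entry, so the caller falls back to recomputing the result.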
  • Patent number: 10887418
    Abstract: Embodiments of the present invention include methods and systems for domain name system (DNS) pre-caching. A method for DNS pre-caching is provided. The method includes receiving uniform resource locator (URL) hostnames for DNS pre-fetch resolution prior to a user hostname request for any of the URL hostnames. The method also includes making a DNS lookup call for at least one of the URL hostnames that are not cached by a DNS cache prior to the user hostname request. The method further includes discarding at least one IP address provided by a DNS resolver for the URL hostnames, wherein a resolution result for at least one of the URL hostnames is cached in the DNS cache in preparation for the user hostname request. A system for DNS pre-caching is provided. The system includes a renderer, an asynchronous DNS pre-fetcher and a hostname table.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: January 5, 2021
    Assignee: Google LLC
    Inventor: James Roskind
  • Patent number: 10846222
    Abstract: An example method of configuring an application to manage persistent memory (PM) in a computer system includes: modifying, by a compiler during compilation of the application, source code of the application to add instructions to update tracking metadata for store instructions in the source code that target memory blocks mapped to the PM; compiling, by the compiler, the source code to generate an executable process; and issuing, by a synchronization routine executing on the computer, write-back instructions during execution of the executable process based on the tracking metadata.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: November 24, 2020
    Assignee: VMware, Inc.
    Inventors: Aasheesh Kolli, Vijaychidambaram Velayudhan Pillai
  • Patent number: 10775870
    Abstract: A processing device includes multiple processing units and multiple memory devices respectively assigned to the multiple processing units. Each of the multiple processing units includes a cache memory configured to retain data stored in the memory device assigned to itself, and fetched data taken out from the memory device of the processing unit other than itself. When an access request for the fetched data is received from a source processing unit from which the fetched data has been taken out, the cache memory determines occurrence of a crossing in which the access request is received after the cache memory issues write back information instructing to write back the fetched data to the memory device assigned to the source processing unit. If the crossing has occurred, crossing information indicating that the crossing has occurred is output as a response to the access request.
    Type: Grant
    Filed: May 7, 2018
    Date of Patent: September 15, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Takeshi Mishina, Hideki Sakata
  • Patent number: 10776383
    Abstract: A method for automatic data synchronization between a source system and a buffer system. The method includes identifying a configurable set of penalties, wherein each penalty defines a number of penalty points associated with a respective one of a plurality of events related to a data set stored by the source system. The method also includes, in response to detecting one or more events, calculating a total penalty score using the penalty points corresponding to each of the events. The method also includes determining that the total penalty score satisfies a predetermined penalty threshold indicating that the copy of the data set stored on the buffer system is presumed stale and, in response, initiating a data replication operation that updates the copy of the data set stored on the buffer system with current data from the data set stored on the source system.
    Type: Grant
    Filed: May 31, 2012
    Date of Patent: September 15, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Al Chakra, Tim Friessinger, Juergen Holtz
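The penalty-score model above lends itself to a short sketch (penalty values and threshold are invented for illustration): events accrue configurable penalty points, and crossing the threshold marks the buffered copy stale and triggers replication.

```python
# A small sketch of the penalty-score staleness model: each event type
# carries configurable penalty points; when the accumulated score
# crosses the threshold, the buffer copy is refreshed from the source.

PENALTIES = {"row_update": 1, "schema_change": 10}   # configurable set
THRESHOLD = 10

class BufferSync:
    def __init__(self):
        self.score = 0
        self.replications = 0

    def on_event(self, event):
        self.score += PENALTIES[event]
        if self.score >= THRESHOLD:   # buffered copy presumed stale
            self.replicate()

    def replicate(self):
        self.replications += 1        # refresh buffer from the source
        self.score = 0                # staleness estimate resets

s = BufferSync()
for _ in range(9):
    s.on_event("row_update")          # small changes accumulate
s.on_event("schema_change")           # a big change crosses the threshold
```

Weighting events differently lets many trivial changes coexist with immediate resynchronization after a single disruptive one.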
  • Patent number: 10768933
    Abstract: A streaming engine employed in a digital data processor specifies a fixed read only data stream defined by plural nested loops. An address generator produces addresses of data elements. A stream head register stores data elements next to be supplied to functional units for use as operands. Stream metadata is stored in response to a stream store instruction. Stored stream metadata is restored to the streaming engine in response to a stream restore instruction. An interrupt changes an open stream to a frozen state discarding stored stream data. A return from interrupt changes a frozen stream to an active state.
    Type: Grant
    Filed: February 12, 2019
    Date of Patent: September 8, 2020
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Joseph Zbiciak, Timothy D. Anderson
  • Patent number: 10719630
    Abstract: A system and method for metadata processing that can be used to encode an arbitrary number of security policies for code running on a stored-program processor. This disclosure adds metadata to every word in the system and adds a metadata processing unit that works in parallel with data flow to enforce an arbitrary set of policies, such that metadata is unbounded and software programmable to be applicable to a wide range of metadata processing policies. This instant disclosure is applicable to a wide range of uses including safety, security, and synchronization.
    Type: Grant
    Filed: September 18, 2018
    Date of Patent: July 21, 2020
    Assignees: BAE Systems Information and Electronic Systems Integration Inc., The Trustees of the University of Pennsylvania
    Inventors: Silviu S Chiricescu, Andre DeHon, Udit Dhawan
  • Patent number: 10664181
    Abstract: Protecting in-memory configuration state registers. A request to access an in-memory configuration state register, such as a read or write request, is obtained. The in-memory configuration state register is mapped to memory. Error correction code of the memory is used to protect the access to the in-memory configuration state register.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: May 26, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 10642750
    Abstract: A method and apparatus of a device that includes a shared memory hash table that notifies one or more readers of changes to the shared memory hash table is described. In an exemplary embodiment, a device receives a key that corresponds to the value, where the key is used to retrieve the value from the shared memory hash table and the shared memory hash table is written to by a writer and read from by a plurality of readers. In addition, the device retrieves an index from a local values table, where the local values table stores a plurality of indices for one of the plurality of readers and the index is an index into an entry in the shared memory hash table. The device further retrieves the value from the shared memory hash table using the index.
    Type: Grant
    Filed: August 3, 2015
    Date of Patent: May 5, 2020
    Assignee: Arista Networks, Inc.
    Inventors: Duncan Stuart Ritchie, Sebastian Sapa, Peter John Fordham
  • Patent number: 10614004
    Abstract: Examples of techniques for memory transaction prioritization for a memory are disclosed. In one example implementation according to aspects of the present disclosure, a computer-implemented method includes allocating, by a memory controller, a reserved portion of the memory controller to execute transactions. The method further includes receiving, by the memory controller, a priority based transaction from a processor to the memory. The method further includes determining, by the memory controller, whether to accommodate the priority based transaction based at least in part on a current processing state of the memory controller. The method further includes, based at least in part on determining to accommodate the priority based transaction, accommodating the priority based transaction by performing at least one of dropping a speculative command in a queue or using the reserved portion of the memory controller to execute the priority based transaction.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: April 7, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Irving G. Baysah, Prasanna Jayaraman
  • Patent number: 10613984
    Abstract: Various embodiments provide for a system that prefetches data from a main memory to a cache and then evicts unused data to a lower level cache. The prefetching system will prefetch data from a main memory to a cache, and data that is not immediately usable or is part of a data set which is too large to fit in the cache can be tagged for eviction to a lower level cache, which keeps the data available with a shorter latency than if the data had to be loaded from main memory again. This lowers the cost of prefetching usable data too far ahead and prevents cache thrashing.
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: April 7, 2020
    Assignee: AMPERE COMPUTING LLC
    Inventor: Kjeld Svendsen
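The tag-and-demote flow above can be sketched with invented names: prefetched lines that are not immediately usable are tagged, and tagged lines are demoted to a lower-level cache rather than discarded.

```python
# Sketch (invented names) of prefetch with eviction tagging: lines
# fetched too far ahead are tagged and later demoted to a lower-level
# cache, where they remain cheaper to reach than main memory.

l1, l2 = {}, {}               # upper cache and lower-level cache
tagged_for_eviction = set()   # lines to demote rather than discard

def prefetch(addr, data, immediately_usable):
    l1[addr] = data                       # prefetched into upper cache
    if not immediately_usable:
        tagged_for_eviction.add(addr)     # tag for demotion

def evict_tagged():
    for addr in list(tagged_for_eviction):
        l2[addr] = l1.pop(addr)           # demote, keeping data available
    tagged_for_eviction.clear()

prefetch(0x10, "hot", True)
prefetch(0x20, "early", False)   # useful data, fetched too far ahead
evict_tagged()
```

The early line stays one cache level away instead of round-tripping through main memory, which is the latency saving the abstract describes.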
  • Patent number: 10592465
    Abstract: A node controller for a first processor socket group may include a node memory storing a coherence directory and logic. Logic may cause the node controller to: receive a memory operation request directly from a second processor socket group, follow a coherence protocol based on the memory operation request and the coherence directory and directly access a socket group memory of the first processor socket group based on the request.
    Type: Grant
    Filed: October 26, 2017
    Date of Patent: March 17, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Frank R. Dropps, Eric C. Fromm
  • Patent number: 10592339
    Abstract: Disclosed embodiments relate to a streaming engine employed in, for example, a digital signal processor. A fixed data stream sequence including plural nested loops is specified by a control register. The streaming engine includes an address generator producing addresses of data elements and a stream head register storing data elements next to be supplied as operands. The streaming engine fetches stream data ahead of use by the central processing unit core in a stream buffer. Parity bits are formed upon storage of data in the stream buffer which are stored with the corresponding data. Upon transfer to the stream head register a second parity is calculated and compared with the stored parity. The streaming engine signals a parity fault if the parities do not match. The streaming engine preferably restarts fetching the data stream at the data element generating a parity fault.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: March 17, 2020
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Joseph Zbiciak, Timothy Anderson
  • Patent number: 10564972
    Abstract: An apparatus and method for efficiently reclaiming demoted cache lines. For example, one embodiment of a processor comprises: a cache hierarchy including at least one Level 1 (L1) cache and one or more lower level caches; a decoder to decode a cache line (CL) demote instruction specifying at least a first cache line; and execution circuitry to demote the first cache line responsive to the CL demote instruction, the execution circuitry to implement a writeback operation on the first cache line if the first cache line has been modified and homed in a specified memory tier or a default memory tier specified in a register.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: February 18, 2020
    Assignee: Intel Corporation
    Inventors: Kshitij Doshi, Vadim Sukhomlinov, Francesc Bernat Guim
  • Patent number: 10545879
    Abstract: An apparatus and method are provided for handling access requests. The apparatus has processing circuitry for processing a plurality of program threads to perform data processing operations on data, where the operations identify the data using virtual addresses, and the virtual addresses are mapped to physical addresses within a memory system. The cache storage has a plurality of cache entries to store data, an aliasing condition existing when multiple virtual addresses map to the same physical address, and allocation of data into the cache storage being constrained to prevent multiple cache entries of the cache storage simultaneously storing data for the same physical address.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: January 28, 2020
    Assignee: Arm Limited
    Inventors: Richard F. Bryant, Kim Richard Schuttenberg, David Madsen, Lalit Bansal, Sriram Samynathan
  • Patent number: 10534546
    Abstract: A storage system having an adaptive workload-based command processing clock is provided. In one embodiment, a storage system has a memory, a command processing path, and a controller in communication with the memory and the command processing path. The controller is configured to adapt an input clock signal based on a current workload of the controller and provide the adapted clock signal to the command processing path in the controller.
    Type: Grant
    Filed: June 13, 2017
    Date of Patent: January 14, 2020
    Assignee: Western Digital Technologies, Inc.
    Inventors: Shay Benisty, Tal Sharifie
  • Patent number: 10482062
    Abstract: A fleet of query accelerator nodes is established for a data store. Each accelerator node caches data items of the data store locally. In response to determining that an eviction criterion has been met, one accelerator node removes a particular data item from its local cache without notifying any other accelerator node. After the particular data item has been removed, a second accelerator node receives a read query for the particular data item and provides a response using a locally-cached replica of the data item.
    Type: Grant
    Filed: March 30, 2016
    Date of Patent: November 19, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Kiran Kumar Muniswamy Reddy, Anand Sasidharan, Omer Ahmed Zaki, Brian O'Neill
  • Patent number: 10474572
    Abstract: A system and process for recompacting digital storage space involves continuously maintaining a first log of free storage space available from multiple storage regions of a storage system such as a RAID system, and based on the first log, maintaining a second log file including a bitmap identifying the free storage space available from a given storage chunk corresponding to the storage regions. Based on the bitmaps, distributions corresponding to the storage regions are generated, where the distributions represent the percentage of free space available from each chunk, and a corresponding weight is associated with each storage region. The storage region weights may then be sorted and stored in RAM, for use in quickly identifying a particular storage region that includes the maximum amount of free space available, for recompaction.
    Type: Grant
    Filed: February 2, 2018
    Date of Patent: November 12, 2019
    Assignee: HGST, Inc.
    Inventors: Shailendra Tripathi, Sreekanth Garigala, Sandeep Sebe
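The weighting scheme above can be sketched briefly (region names and bitmap sizes are invented): each region's bitmap marks free chunks, the region's weight is its fraction of free space, and sorting the weights lets the recompactor pick the region with the most free space first.

```python
# Loose sketch of the recompaction bookkeeping: per-region free-space
# bitmaps are reduced to weights (fraction free), and the region with
# the maximum free space is selected for recompaction.

regions = {
    "r0": [1, 0, 0, 1],   # bitmap: 1 marks a free chunk
    "r1": [1, 1, 1, 0],
    "r2": [0, 0, 0, 0],
}

def region_weights(bitmaps):
    # weight = fraction of free chunks in the region
    return {r: sum(bm) / len(bm) for r, bm in bitmaps.items()}

def best_region(bitmaps):
    weights = region_weights(bitmaps)
    # In a real system the sorted weights would be kept in RAM for
    # fast selection; here we just take the maximum directly.
    return max(weights, key=weights.get)

chosen = best_region(regions)
```

Keeping only the small per-region weights resident makes the "which region is emptiest" query cheap, which is the point of the two-log design.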
  • Patent number: 10445500
    Abstract: An apparatus has a number of data holding elements for holding data values which are reset to a reset value in response to a transition of a signal at a reset signal input of the data holding element from a first value to a second value. A reset tree is provided to distribute a reset signal received at root node of the reset tree to the reset signal inputs of the data holding elements. At least one reset attack detection element is provided, with its reset signal input coupled to a given node of the reset tree, to assert an error signal when its reset signal input transitions from the first value to a second value. Reset error clearing circuitry triggers clearing of the error signal, when the reset signal at the root node of the reset tree transitions from the second value to the first value.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: October 15, 2019
    Assignee: ARM Limited
    Inventors: Guillaume Schon, Frederic Jean Denis Arsanto, Jocelyn François Orion Jaubert, Carlo Dario Fanara
  • Patent number: 10437730
    Abstract: A method for synchronizing primary and secondary read cache in a data replication environment is disclosed. In one embodiment, such a method includes monitoring contents of a primary read cache at a primary site. The method periodically sends, from the primary site to a secondary site, information regarding the contents of the primary read cache, such as a list of storage elements cached in the primary read cache. In certain embodiments, the information also includes temperature information indicating how frequently the storage elements are accessed. The method uses, at the secondary site, the information to substantially synchronize a secondary read cache with the primary read cache. A corresponding system and computer program product are also disclosed herein.
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: October 8, 2019
    Assignee: International Business Machines Corporation
    Inventors: Matthew J. Kalos, Peter G. Sutton, Harry M. Yudenfriend
  • Patent number: 10423524
    Abstract: A memory storage device, a memory control circuit unit and a data storage method for a rewritable non-volatile memory module are disclosed. The method includes: receiving first data; mapping a logical unit of the first data to a first physical unit in a first management unit and not storing the first data to the rewritable nonvolatile memory module if a data content of the first data is identical to a data content of second data stored in the first physical unit. The method also includes storing logical-to-physical bit map information to a second physical unit in the first management unit, wherein the logical-to-physical bit map information corresponds to at least one logical-to-physical mapping table and is configured for identifying valid data in the first management unit. Identifiers or symbols of data content may be compared to determine if first and second data are identical.
    Type: Grant
    Filed: September 18, 2017
    Date of Patent: September 24, 2019
    Assignee: PHISON ELECTRONICS CORP.
    Inventor: Chih-Kang Yeh
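The dedup write path described above can be sketched as follows (structures and the use of SHA-256 as the content identifier are assumptions for illustration): if incoming data matches content already stored, only the logical-to-physical mapping is updated and no second physical copy is written.

```python
# Illustrative sketch of the dedup write path: a content identifier is
# compared against stored data; on a match, the logical unit is mapped
# to the existing physical unit instead of writing a duplicate.

import hashlib

physical = {}   # physical unit -> (data, content identifier)
l2p = {}        # logical unit -> physical unit
writes = []     # record of actual physical writes

def store(logical, data):
    digest = hashlib.sha256(data).hexdigest()   # content identifier
    for unit, (d, h) in physical.items():
        if h == digest and d == data:           # identical content found
            l2p[logical] = unit                 # map without rewriting
            return
    unit = len(physical)                        # allocate a new unit
    physical[unit] = (data, digest)
    writes.append(unit)
    l2p[logical] = unit

store("L0", b"blockdata")
store("L1", b"blockdata")   # identical content: mapping only, no write
```

Comparing identifiers first and full contents second keeps the common case fast while avoiding false matches on identifier collisions.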
  • Patent number: 10402331
    Abstract: A computer processing system includes a plurality of nodes, each node having at least one processor core and at least one level of cache memory which is private to the node, a shared, last level cache (LLC) memory device and a shared, last level cache location buffer containing cache location entries, each cache location entry storing an address tag and a plurality of location information. The location information stored in a cache location entry points to an identified cacheline location within the LLC that stores a cacheline associated with the location information. The cacheline stored in the LLC has associated information identifying the cache location entry.
    Type: Grant
    Filed: May 1, 2015
    Date of Patent: September 3, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Erik Hagersten, Andreas Sembrant, David Black-Schaffer, Stefanos Kaxiras
  • Patent number: 10394784
    Abstract: Technologies for managing lookup tables are described. The lookup tables may be used for a two-level lookup scheme for packet processing. When the tables need to be updated with a new key for packet processing, information about the new key may be added to a first-level lookup table and a second-level lookup table. The first-level lookup table may be used to identify a handling node for an obtained packet, and the handling node may perform a second-level table lookup to obtain information for further packet processing. The first lookup table may be replicated on all the nodes in a cluster, and the second-level lookup table may be unique to each node in the cluster. Other embodiments are described herein and claimed.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: August 27, 2019
    Assignee: Intel Corporation
    Inventors: Byron Marohn, Christian Maciocco, Sameh Gobriel, Ren Wang, Wei Shen, Tsung-Yuan Charlie Tai, Saikrishna Edupuganti
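The two-level scheme above can be sketched with invented structures: a first-level table, replicated on every node, maps a key to its handling node; that node's private second-level table holds the details needed for further processing.

```python
# Hypothetical sketch of the two-level lookup scheme: the replicated
# first-level table identifies the handling node for a packet key, and
# that node's unique second-level table supplies the processing action.

NUM_NODES = 3
first_level = {}   # key -> handling node id (replicated on all nodes)
second_level = [dict() for _ in range(NUM_NODES)]   # per-node tables

def add_key(key, action):
    node = hash(key) % NUM_NODES       # choose a handling node
    first_level[key] = node            # update the replicated table
    second_level[node][key] = action   # update only that node's table

def process_packet(key):
    node = first_level[key]                   # level 1: find the node
    return node, second_level[node][key]      # level 2: local lookup

add_key("flow-42", "forward:eth1")
node, action = process_packet("flow-42")
```

Updates to the bulky second-level state touch only one node, while the small replicated first-level table keeps every node able to route packets to the right handler.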
  • Patent number: 10372624
    Abstract: Provided are techniques for destaging pinned retryable data in cache. A ranks scan structure is created with an indicator for each rank of multiple ranks that indicates whether pinned retryable data in a cache for that rank is destageable. A cache directory is partitioned into chunks, wherein each of the chunks includes one or more tracks from the cache. A number of tasks are determined for the scan of the cache. The number of tasks are executed to scan the cache to destage pinned retryable data that is indicated as ready to be destaged by the ranks scan structure, wherein each of the tasks selects an unprocessed chunk of the cache directory for processing until the chunks of the cache directory have been processed.
    Type: Grant
    Filed: August 18, 2017
    Date of Patent: August 6, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kyler A. Anderson, Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta
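The chunked scan above can be sketched with a shared work queue; the queue-based task model and function names are assumptions, not the patented mechanism:

```python
# Hypothetical sketch of the chunked scan: the cache directory is split
# into chunks and a fixed number of tasks each claim an unprocessed chunk
# until every chunk has been scanned for destageable pinned data.

from queue import Queue, Empty
from threading import Thread

def scan_cache(directory_chunks, destageable, num_tasks):
    work = Queue()
    for chunk in directory_chunks:
        work.put(chunk)
    destaged = []
    def task():
        while True:
            try:
                chunk = work.get_nowait()  # claim an unprocessed chunk
            except Empty:
                return                     # all chunks have been processed
            for track in chunk:
                if destageable(track):     # rank marked ready in scan structure
                    destaged.append(track)
    threads = [Thread(target=task) for _ in range(num_tasks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return destaged

chunks = [["t1", "t2"], ["t3"], ["t4", "t5"]]
result = scan_cache(chunks, destageable=lambda t: t != "t3", num_tasks=2)
```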
  • Patent number: 10353606
    Abstract: A host divides a dataset into stripes and sends the stripes to respective data chips of a distributed memory buffer system, where the data chips buffer the respective stripes. Each data chip can buffer stripes from multiple datasets. Through the use of: (i) error detection methods; (ii) tagging the stripes for identification; and (iii) acknowledgement responses from the data chips, the host keeps track of the status of each stripe at the data chips. If errors are detected for a given stripe, the host resends the stripe in the next store cycle, concurrently with stripes for the next dataset. Once all stripes have been received error-free across all the data chips, the host issues a store command which triggers the data chips to move the respective stripes from buffer to memory.
    Type: Grant
    Filed: October 12, 2017
    Date of Patent: July 16, 2019
    Assignee: International Business Machines Corporation
    Inventors: Susan M. Eickhoff, Steven R. Carlough, Patrick J. Meaney, Stephen J. Powell, Jie Zheng, Gary A. Van Huben
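The tag-ack-resend protocol above can be sketched as a retry loop; the chip-acknowledgement callback and all names are hypothetical illustration:

```python
# Hypothetical sketch of the retry protocol: the host splits a dataset into
# stripes, simulated data chips ack or report an error, and errored stripes
# are resent in the next store cycle until every chip holds its stripe.

def split_into_stripes(dataset, n_chips):
    return [dataset[i::n_chips] for i in range(n_chips)]

def send_until_clean(stripes, chip_ok):
    """chip_ok(chip, attempt) -> True if the chip received its stripe error-free."""
    buffers = [None] * len(stripes)
    pending = set(range(len(stripes)))
    attempt = 0
    while pending:
        for chip in list(pending):
            if chip_ok(chip, attempt):   # ack received, no error detected
                buffers[chip] = stripes[chip]
                pending.discard(chip)    # otherwise resend next cycle
        attempt += 1
    return buffers, attempt              # store command would fire only now

stripes = split_into_stripes(b"ABCDEFGH", 4)
flaky = lambda chip, attempt: not (chip == 2 and attempt == 0)  # chip 2 errors once
buffers, cycles = send_until_clean(stripes, flaky)
```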
  • Patent number: 10324843
    Abstract: A method, computer program product, and computing system for receiving an indication of an intent to restore at least a portion of a data array based upon a historical record of the data array. One or more changes made to the content of that data array after the generation of the historical record may be identified, thus generating a differential record. One or more data entries within a cache memory system associated with the at least a portion of a data array may be invalidated based, at least in part, upon the differential record.
    Type: Grant
    Filed: May 25, 2016
    Date of Patent: June 18, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: David Erel, Assaf Natanzon
  • Patent number: 10275288
    Abstract: The invention concerns a processing system comprising: a compute node (20) having one or more processors and one or more memory devices storing software enabling virtual computing resources and virtual memory to be assigned to support a plurality of virtual machines (VM1); a reconfigurable circuit (301) comprising a dynamically reconfigurable portion (302) comprising one or more partitions (304) that are reconfigurable during runtime and implement at least one hardware accelerator (ACC #1 to #N) assigned to at least one of the plurality of virtual machines (VM); and a virtualization manager (306) providing an interface between the at least one hardware accelerator (ACC #1 to #N) and the compute node (202) and comprising a circuit (406) adapted to translate, for the at least one hardware accelerator, virtual memory addresses into corresponding physical memory addresses to permit communication between the one or more hardware accelerators and the plurality of virtual machines.
    Type: Grant
    Filed: April 29, 2016
    Date of Patent: April 30, 2019
    Assignee: Virtual Open Systems
    Inventors: Christian Pinto, Michele Paolino, Salvatore Daniele Raho
  • Patent number: 10241715
    Abstract: A method for rendering data invalid within a memory array is described. The method includes establishing governing metadata for a memory location within a memory array. The method also includes receiving a request to retrieve data from the memory location. The method also includes determining whether color metadata associated with the data matches the governing metadata. The method also includes returning the data when the color metadata matches the governing metadata. The method also includes returning invalidated data when the color metadata does not match the governing metadata.
    Type: Grant
    Filed: January 31, 2014
    Date of Patent: March 26, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Gregg B. Lesartre, Siamak Tavallaei, Russ W. Herrell
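The color-matching check above can be sketched in a few lines; the class layout and the idea of bumping a counter as the governing metadata are assumptions for illustration:

```python
# Hypothetical sketch of color-based invalidation: each location carries
# governing metadata (a "color"); a read returns the data only when the
# stored color matches, and a sentinel invalid value otherwise.

INVALID = object()                  # sentinel standing in for invalidated data

class ColoredMemory:
    def __init__(self):
        self.cells = {}             # addr -> (color, data)
        self.governing = {}         # addr -> governing color

    def write(self, addr, color, data):
        self.cells[addr] = (color, data)
        self.governing[addr] = color        # establish governing metadata

    def invalidate(self, addr):
        # Bumping the governing color renders the old data invalid
        # without touching the stored bytes.
        self.governing[addr] += 1

    def read(self, addr):
        color, data = self.cells[addr]
        return data if color == self.governing[addr] else INVALID

mem = ColoredMemory()
mem.write(0x100, color=1, data=b"secret")
before = mem.read(0x100)            # colors match -> data returned
mem.invalidate(0x100)
after = mem.read(0x100)             # mismatch -> invalidated data returned
```

The appeal of the scheme is that invalidation is a metadata update, so large regions can be rendered invalid without rewriting their contents.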
  • Patent number: 10243990
    Abstract: A system and method for detecting replay attacks on secure data are disclosed. A system on a chip (SOC) includes a security processor. Blocks of data corresponding to sensitive information are stored in off-chip memory. The security processor uses an integrity data structure, such as an integrity tree, for the blocks. The intermediate nodes of the integrity tree use nonces which have been generated independent of any value within a corresponding block. By using only the nonces to generate tags in the root at the top layer stored in on-chip memory and the nodes of the intermediate layers stored in off-chip memory, an amount of storage used is reduced for supporting the integrity tree. When the security processor detects events which create access requests for one or more blocks, the security processor uses the integrity tree to verify a replay attack has not occurred and corrupted data.
    Type: Grant
    Filed: September 23, 2016
    Date of Patent: March 26, 2019
    Assignee: Apple Inc.
    Inventors: Zhimin Chen, Timothy R. Paaske, Gilbert H. Herbeck
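The key property above, tags computed only from nonces so a replayed node no longer matches the trusted root, can be sketched as follows; the hash choice and two-node shape are assumptions for illustration:

```python
# Hypothetical sketch of a nonce-based integrity tree: the tag over the
# intermediate layer is computed only from per-node nonces (not from block
# contents), and the root tag is held in trusted on-chip storage.

import hashlib

def tag(nonces):
    h = hashlib.sha256()
    for n in nonces:
        h.update(n.to_bytes(8, "little"))
    return h.digest()

# Two intermediate nodes, each with a fresh nonce, covered by one root tag.
leaf_nonces = [7, 13]                # nonces generated independently of data
root_tag = tag(leaf_nonces)          # stored on-chip (trusted)

def verify(nonces, trusted_root):
    """Detect replay: stale nonces no longer match the on-chip root tag."""
    return tag(nonces) == trusted_root

ok = verify(leaf_nonces, root_tag)   # fresh nonces match the root
replayed = verify([7, 12], root_tag) # attacker replayed an old node
```

Because only nonces feed the tags, the intermediate layers need not store per-block hashes, which is the storage saving the abstract points to.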
  • Patent number: 10235302
    Abstract: In an embodiment, a processor for invalidating cache entries comprises: at least one processing unit; a processor cache; and a direct cache unit. The direct cache unit is to receive, from a first device, a direct read request for data in a first cache entry in the processor cache; determine whether the direct read request is an invalidating read request; in response to a determination that the direct read request is an invalidating read request: send the data in the first cache entry directly from the processor cache to the first device without accessing a main memory; and invalidate the first cache entry in the processor cache. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 12, 2016
    Date of Patent: March 19, 2019
    Assignee: Intel Corporation
    Inventors: Samantha J. Edirisooriya, Geetani R. Edirisooriya
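The read-then-invalidate step above reduces to a single combined operation, sketched here with a plain dict standing in for the processor cache (an assumption for illustration):

```python
# Hypothetical sketch of an invalidating direct read: the requested line is
# returned straight from the processor cache to the device and, when the
# request is marked invalidating, the entry is dropped in the same step.

def direct_read(cache, addr, invalidating):
    data = cache[addr]     # serve the device from the cache, no memory access
    if invalidating:
        del cache[addr]    # invalidate the entry after the read
    return data

cache = {0x40: b"line"}
data = direct_read(cache, 0x40, invalidating=True)
```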
  • Patent number: 10235175
    Abstract: A processor of an aspect includes a plurality of logical processors. A first logical processor of the plurality is to execute software that includes a memory access synchronization instruction that is to synchronize accesses to a memory. The processor also includes memory access synchronization relaxation logic that is to prevent the memory access synchronization instruction from synchronizing accesses to the memory when the processor is in a relaxed memory access synchronization mode.
    Type: Grant
    Filed: April 4, 2016
    Date of Patent: March 19, 2019
    Assignee: Intel Corporation
    Inventors: Martin G. Dixon, William C. Rash, Yazmin A. Santiago
  • Patent number: 10230583
    Abstract: Techniques for simulation of objects in a multi-node environment are described herein. Ownership of objects in a simulation scenario is assigned to a plurality of nodes based on a first set of criteria. Simulation authority of the first object is assumed by a second node based on a second set of criteria. Simulation of the first object is performed by the second node without previous acknowledgment, by the first node, of the assumption of simulation authority. Ownership of the first object is maintained by the first node during the time that the second node has simulation authority of the first object.
    Type: Grant
    Filed: September 18, 2015
    Date of Patent: March 12, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Pablo Puo Hen Cheng, Jesse Aaron Van Beurden, Rosen Ognyanov Baklov, Igor Gorelik
  • Patent number: 10230402
    Abstract: A data processing apparatus includes a memory, a processor which outputs write data when making a write request to the memory, and which inputs read data when making a read request to the memory, a parity generating circuit which generates a parity comprising a plurality of parity bits from the write data, the parity being written with the write data into the memory, and a parity check circuit which is coupled between the memory and the processor, and which detects a presence or absence of an error of one bit or two bits in the read data and the parity read from the memory, wherein the parity generating circuit generates the parity so that at least one of a first write data bit and a second write data bit included in the write data contributes to generation of at least two parity bits.
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: March 12, 2019
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventors: Yukitoshi Tsuboi, Hideo Nagano
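The constraint above, every write-data bit contributing to at least two parity bits, is what makes a single data-bit flip disturb multiple checks. A toy sketch over 4 data bits and 3 parity bits (the coverage table is an invented example, not the patented code):

```python
# Hypothetical sketch of the parity scheme: each data bit feeds at least
# two parity bits, so a single-bit flip fires two or more parity checks
# and can be told apart from a flip of a lone parity bit.

# Column i lists which parity bits cover data bit i (each column >= 2 entries).
COVERAGE = [(0, 1), (0, 2), (1, 2), (0, 1, 2)]

def parity(data_bits):
    p = [0, 0, 0]
    for i, bit in enumerate(data_bits):
        for j in COVERAGE[i]:
            p[j] ^= bit          # data bit i contributes to parity bit j
    return p

def syndrome(data_bits, stored_parity):
    # Nonzero entries mark parity checks that fired on read-back.
    return [a ^ b for a, b in zip(parity(data_bits), stored_parity)]

data = [1, 0, 1, 1]
p = parity(data)
clean = syndrome(data, p)        # all zeros: no error
flipped = data.copy()
flipped[0] ^= 1                  # single data-bit error
err = syndrome(flipped, p)       # at least two checks fire
```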
  • Patent number: 10229127
    Abstract: In one embodiment, a computer-implemented method includes capturing a consistent state of data blocks in a namespace cache of a deduplicating storage system. The data blocks contain data for a file system namespace organized in a hierarchical data structure. Each leaf page of the hierarchical data structure contains one or more data blocks. The method further includes determining, for each data block, whether the data block has been written to based on the captured consistent state. For at least one of the written data blocks in the namespace cache, the method includes searching, in the hierarchical data structure, adjacent data blocks to find in the namespace cache one or more data blocks that have also been written to, and upon finding the one or more adjacent written data blocks, flushing the written data block and the found one or more adjacent written data blocks together into a common storage unit.
    Type: Grant
    Filed: December 18, 2014
    Date of Patent: March 12, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Pengju Shang, Pranay Singh, George Mathew
  • Patent number: 10203958
    Abstract: A streaming engine employed in a digital data processor specifies a fixed read only data stream defined by plural nested loops. An address generator produces addresses of data elements. A stream head register stores data elements next to be supplied to functional units for use as operands. Stream metadata is stored in response to a stream store instruction. Stored stream metadata is restored to the streaming engine in response to a stream restore instruction. An interrupt changes an open stream to a frozen state, discarding stored stream data. A return from interrupt changes a frozen stream to an active state.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: February 12, 2019
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Joseph Zbiciak, Timothy D. Anderson
  • Patent number: 10169254
    Abstract: Embodiments of techniques and systems for increasing efficiencies in computing systems using virtual memory are described. In embodiments, instructions which are located in two memory pages in a virtual memory system, such that one of the pages does not permit execution of the instructions located therein, are identified and then executed under temporary permissions that permit execution of the identified instructions. In various embodiments, the temporary permissions may come from modified virtual memory page tables, temporary virtual memory page tables which allow for execution, and/or emulators which have root access. In embodiments, per-core virtual memory page tables may be provided to allow two cores of a computer processor to operate in accordance with different memory access permissions. In embodiments, a physical page permission table may be utilized to provide for maintenance and tracking of per-physical-page memory access permissions. Other embodiments may be described and claimed.
    Type: Grant
    Filed: August 2, 2017
    Date of Patent: January 1, 2019
    Assignee: Intel Corporation
    Inventors: Ramesh Thomas, Kuo-Lang Tseng, Ravi L. Sahita, David M. Durham, Madhukar Tallam
  • Patent number: 10146602
    Abstract: A host data processing system provides a virtual operating environment for a guest data processing system. A transaction is initiated for translation of a guest system memory address to a host system physical address in response to a transaction request from a device overseen by a guest system. For a stalled transaction incurring an error, the following are performed: (i) storing identification information relating to that transaction including data identifying the requesting device; (ii) providing translation error condition information to the overseeing guest system; and (iii) deferring handling of the stalled transaction until a subsequent command is received from that guest system. Initiation of a closure process for a guest system initiates cancellation of certain stalled transactions.
    Type: Grant
    Filed: February 26, 2016
    Date of Patent: December 4, 2018
    Assignee: ARM Limited
    Inventor: Matthew Lucien Evans
  • Patent number: 10133669
    Abstract: An example system on a chip (SoC) includes a cache, a processor, and a predictor circuit. The cache may store data. The processor may be coupled to the cache and store a first data set at a first location in the cache and receive a first request from an application to write a second data set to the cache. The predictor circuit may be coupled to the processor and determine that a second location where the second data set is to be written to in the cache is nonconsecutive to the first location, where the processor is to perform a request-for-ownership (RFO) operation for the second data set and write the second data set to the cache.
    Type: Grant
    Filed: November 15, 2016
    Date of Patent: November 20, 2018
    Assignee: Intel Corporation
    Inventors: Pavel I. Kryukov, Stanislav Shwartsman, Joseph Nuzman, Alexandr Titov
  • Patent number: 10114747
    Abstract: Systems and methods for performing operations on memory of a computing device are disclosed. According to an aspect, a method includes storing update data on a first memory of a computing device, wherein the update data comprises data for updating a second memory on the computing device. The method also includes initiating an update mode on the second memory. Further, the method includes suspending an I/O operation of the second memory. The method also includes switching the computing device to a system management mode (SMM) while the second memory is in the update mode. Further, the method includes retrieving the update data from the first memory. The method also includes determining whether the update data is valid. The method also includes resuming the I/O operation of the second memory for updating the second memory based on the retrieved update data in response to determining that the update data is valid.
    Type: Grant
    Filed: May 13, 2015
    Date of Patent: October 30, 2018
    Assignee: Lenovo Enterprise Solutions (Singapore) Pte. Ltd.
    Inventors: Shiva R. Dasari, Scott N. Dunham, Sumeet Kochar