Hierarchical Caches Patents (Class 711/122)
  • Patent number: 11507441
    Abstract: A method of performing a remotely-initiated procedure on a computing device is provided. The method includes (a) receiving, by memory of the computing device, a request from a remote device via remote direct memory access (RDMA); (b) in response to receiving the request, assigning processing of the request to one core of a plurality of processing cores of the computing device, wherein assigning includes the one core receiving a completion signal from a shared completion queue (Shared CQ) of the computing device, the Shared CQ being shared between the plurality of cores; and (c) in response to assigning, performing, by the one core, a procedure described by the request. An apparatus, system, and computer program product for performing a similar method are also provided.
    Type: Grant
    Filed: January 21, 2021
    Date of Patent: November 22, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Leonid Ravich, Yuri Chernyavsky
  • Patent number: 11507420
    Abstract: Systems and methods for scheduling tasks using sliding time windows are provided. In certain embodiments, a system for scheduling the execution of tasks includes at least one processing unit configured to execute multiple tasks, wherein each task in the multiple tasks is scheduled to execute within a scheduler instance in multiple scheduler instances, each scheduler instance in the multiple scheduler instances being associated with a set of time windows in multiple time windows and with a set of processing units in the at least one processing unit in each time window, time windows in the plurality of time windows having a start time and an allotted duration and the scheduler instance associated with the time windows begins executing associated tasks no earlier than the start time and executes for no longer than the allotted duration, and wherein the start time is slidable to earlier moments in time.
    Type: Grant
    Filed: September 8, 2020
    Date of Patent: November 22, 2022
    Assignee: Honeywell International Inc.
    Inventors: Srivatsan Varadarajan, Larry James Miller, Arthur Kirk McCready, Aaron R. Larson, Richard Frost, Ryan Lawrence Roffelsen
  • Patent number: 11481598
    Abstract: A computer-implemented method for creating an auto-scaled predictive analytics model includes determining, via a processor, whether a queue size of a service master queue is greater than zero. Responsive to determining that the queue size is greater than zero, the processor fetches a count of requests in a plurality of requests in the service master queue and a type for each of the requests. The processor derives a value for time required for each of the requests and retrieves a number of available processing nodes based on the time required for each of the requests. The processor then auto-scales a processing node number responsive to determining that a total execution time for all of the requests in the plurality of requests exceeds a predetermined time value and outputs an auto-scaled predictive analytics model based on the processing node number and queue size.
    Type: Grant
    Filed: November 27, 2017
    Date of Patent: October 25, 2022
    Assignee: International Business Machines Corporation
    Inventors: Mahadev Khapali, Shashank V. Vagarali
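The auto-scaling decision in the abstract above can be sketched in a few lines. This is a minimal illustration, not the patented implementation; the one-node-per-budget-overrun policy and all names are assumptions.

```python
def auto_scale_nodes(request_times, current_nodes, max_total_time):
    """request_times: derived per-request time estimates from the service master queue."""
    if not request_times:
        return current_nodes
    total = sum(request_times)
    if total > max_total_time:
        # Scale so that the work, divided across nodes, fits the time budget.
        needed = -(-total // max_total_time)  # ceiling division
        return max(current_nodes, needed)
    return current_nodes
```

For example, three requests estimated at 10 time units each against a 15-unit budget would scale a single node up to two.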
  • Patent number: 11467908
    Abstract: A distributed storage places data units and parity units constituting a stripe formed by divided data into storage nodes in a distributed manner. In reference to determination formulas, either a full-stripe parity calculation method or an RPM parity calculation method is selected so as to minimize an amount of network traffic.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: October 11, 2022
    Assignee: HITACHI, LTD.
    Inventors: Kazushi Nakagawa, Mitsuo Hayasaka, Yuto Kamo
  • Patent number: 11442869
    Abstract: A cache memory includes a first cache area corresponding to even addresses, and a second cache area corresponding to odd addresses, wherein each of the first and second cache areas includes a plurality of cache sets, and each cache set includes a data set field suitable for storing data corresponding to an address among the even and odd addresses, and a pair field suitable for storing information on a location where data corresponding to an adjacent address which is adjacent to an address corresponding to the stored data is stored.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: September 13, 2022
    Assignee: SK hynix Inc.
    Inventor: Seung-Gyu Jeong
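The even/odd split with a pair field can be sketched as follows. This is a simplified direct-mapped model under assumed names (`SplitCache`, the `pair` slot); the actual organization in the patent may differ.

```python
class SplitCache:
    def __init__(self, sets_per_area):
        self.sets_per_area = sets_per_area
        # Separate cache areas for even (parity 0) and odd (parity 1) addresses.
        self.area = {0: {}, 1: {}}  # parity -> {set_index: (tag, data, pair)}

    def _locate(self, addr):
        parity = addr & 1
        line = addr >> 1  # address index within its parity area
        return parity, line % self.sets_per_area, line // self.sets_per_area

    def write(self, addr, data):
        parity, set_idx, tag = self._locate(addr)
        # Pair field: where the adjacent address (addr ^ 1) lives, if cached.
        adj_parity, adj_set, adj_tag = self._locate(addr ^ 1)
        adj = self.area[adj_parity].get(adj_set)
        pair = adj_set if adj and adj[0] == adj_tag else None
        self.area[parity][set_idx] = (tag, data, pair)

    def read(self, addr):
        parity, set_idx, tag = self._locate(addr)
        entry = self.area[parity].get(set_idx)
        return entry[1] if entry and entry[0] == tag else None
```

Adjacent addresses land in different areas, so a pair of neighboring accesses never conflicts for the same set.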
  • Patent number: 11442868
    Abstract: A caching system including a first sub-cache and a second sub-cache in parallel with the first sub-cache, wherein the second sub-cache includes: line type bits configured to store an indication that a corresponding cache line of the second sub-cache is configured to store write-miss data, and an eviction controller configured to evict a cache line of the second sub-cache storing write-miss data based on an indication that the cache line has been fully written.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: September 13, 2022
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Hippleheuser
  • Patent number: 11431790
    Abstract: The present description is directed towards systems and methods for directing a user request for content over a network to a given content server on the basis of one or more rules. Methods and systems implemented in accordance with the present description comprise receiving a request for content from a user, the request for content including a profile of the user identifying one or more characteristics associated with the user. One or more rules are retrieved for identifying a content server to which a request for content is to be delivered, the one or more rules including at least one of business rules, network rules, and user profile rules. The one or more retrieved rules are applied to the request for content to identify a content server to which the request for content is to be delivered and the request for content is delivered to the identified content server.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: August 30, 2022
    Assignee: R2 Solutions, LLC
    Inventors: Selvaraj Rameshwara Prathaban, Dorai Ashok S.A., Mahadevaswamy G. Kakoor, Bhargavaram B. Gade, Matthew Nicholas Petach
  • Patent number: 11422891
    Abstract: Example apparatus and methods control a data storage system to store data in a self-describing logical data storage capsule using a logical cylindrical recording format. Example apparatus and methods assign a searchable, globally unique identifier to the capsule and associate the globally unique identifier with a user. The logical data storage capsule is migrated from a first data storage medium to a second data storage medium without translating or reformatting the data storage capsule. The data storage capsule contains information describing to a data storage device how to migrate the capsule without translating or reformatting the data storage capsule. Example apparatus and methods dynamically select an error correction approach for storing data in the data storage capsule, de-duplicate, and encrypt the data storage capsule. The data storage capsule may be local, or may be part of a cloud-based storage system.
    Type: Grant
    Filed: November 23, 2020
    Date of Patent: August 23, 2022
    Assignee: QUANTUM CORPORATION
    Inventor: George Saliba
  • Patent number: 11422938
    Abstract: A system includes a multi-core shared memory controller (MSMC). The MSMC includes a snoop filter bank, a cache tag bank, and a memory bank. The cache tag bank is connected to both the snoop filter bank and the memory bank. The MSMC further includes a first coherent slave interface connected to a data path that is connected to the snoop filter bank. The MSMC further includes a second coherent slave interface connected to the data path that is connected to the snoop filter bank. The MSMC further includes an external memory master interface connected to the cache tag bank and the memory bank. The system further includes a first processor package connected to the first coherent slave interface and a second processor package connected to the second coherent slave interface. The system further includes an external memory device connected to the external memory master interface.
    Type: Grant
    Filed: October 15, 2019
    Date of Patent: August 23, 2022
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Matthew David Pierson, Kai Chirca, Timothy David Anderson
  • Patent number: 11416405
    Abstract: A circuit and corresponding method map memory addresses onto cache locations within set-associative (SA) caches of various cache sizes. The circuit comprises a modulo-arithmetic circuit that performs a plurality of modulo operations on an input memory address and produces a plurality of modulus results based on the plurality of modulo operations performed. The plurality of modulo operations performed are based on a cache size associated with an SA cache. The circuit further comprises a multiplexer circuit and an output circuit. The multiplexer circuit outputs selected modulus results by selecting modulus results from among the plurality of modulus results produced. The selecting is based on the cache size. The output circuit outputs a cache location within the SA cache based on the selected modulus results and the cache size. Such mapping of the input memory address onto the cache location is performed at a lower cost relative to a general-purpose divider.
    Type: Grant
    Filed: February 5, 2021
    Date of Patent: August 16, 2022
    Assignee: MARVELL ASIA PTE LTD
    Inventor: Albert Ma
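The parallel-modulus-plus-multiplexer idea above can be modeled in software. The supported size list, line size, and associativity below are illustrative assumptions; hardware would compute all moduli concurrently rather than in a loop.

```python
LINE_SIZE = 64  # bytes per cache line (assumed)
WAYS = 4        # associativity (assumed)

def num_sets(cache_size):
    return cache_size // (WAYS * LINE_SIZE)

def map_to_set(addr, cache_size, supported_sizes=(16384, 32768, 49152)):
    # Modulo-arithmetic circuit: one modulus result per supported cache size.
    results = {s: (addr // LINE_SIZE) % num_sets(s) for s in supported_sizes}
    # Multiplexer: select the result matching the configured cache size.
    return results[cache_size]
```

Precomputing moduli for a small fixed set of cache sizes is what lets the circuit avoid a general-purpose divider.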
  • Patent number: 11397680
    Abstract: A technique is provided for controlling eviction from a storage structure. An apparatus has a storage structure with a plurality of entries to store data. The apparatus also has eviction control circuitry configured to maintain eviction control information in accordance with an eviction policy, the eviction policy specifying how the eviction control information is to be updated in response to accesses to the entries of the storage structure. The eviction control circuitry is responsive to a victim selection event to employ the eviction policy to select, with reference to the eviction control information, one of the entries to be a victim entry whose data is to be discarded from the storage structure. The eviction control circuitry is further configured to maintain, for each of one or more groups of entries in the storage structure, an indication of a most-recent entry. The most-recent entry is an entry in that group that was most recently subjected to at least a given type of access.
    Type: Grant
    Filed: October 6, 2020
    Date of Patent: July 26, 2022
    Assignee: Arm Limited
    Inventor: Joseph Michael Pusdesris
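The most-recent-entry protection described above can be sketched as an LRU cache whose victim selection skips the most recently accessed entry of its group. Class and method names are assumptions; the sketch uses a single group for simplicity.

```python
from collections import OrderedDict

class GroupedEvictionCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # key -> data; iteration order = recency
        self.most_recent = None       # per-group most-recent indication

    def access(self, key, data):
        if key in self.entries:
            self.entries.move_to_end(key)
        else:
            if len(self.entries) >= self.capacity:
                self._evict()
            self.entries[key] = data
        self.most_recent = key

    def _evict(self):
        # Victim selection event: pick the least-recent entry that is not
        # the group's most-recent entry.
        for victim in self.entries:
            if victim != self.most_recent:
                del self.entries[victim]
                return
```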
  • Patent number: 11392498
    Abstract: An apparatus includes first CPU and second CPU cores, a L1 cache subsystem coupled to the first CPU core and comprising a L1 controller, and a L2 cache subsystem coupled to the L1 cache subsystem and to the second CPU core. The L2 cache subsystem includes a L2 memory and a L2 controller configured to operate in an aliased mode in response to a value in a memory map control register being asserted. In the aliased mode, the L2 controller receives a first request from the first CPU core directed to a virtual address in the L2 memory, receives a second request from the second CPU core directed to the virtual address in the L2 memory, directs the first request to a physical address A in the L2 memory, and directs the second request to a physical address B in the L2 memory.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: July 19, 2022
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Abhijeet Ashok Chachad, Timothy David Anderson, Pramod Kumar Swami, Naveen Bhoria, David Matthew Thompson, Neelima Muralidharan
  • Patent number: 11379370
    Abstract: In a multi-node system, each node includes tiles. Each tile includes a cache controller, a local cache, and a snoop filter cache (SFC). The cache controller responsive to a memory access request by the tile checks the local cache to determine whether the data associated with the request has been cached by the local cache of the tile. The cached data from the local cache is returned responsive to a cache-hit. The SFC is checked to determine whether any other tile of a remote node has cached the data associated with the memory access request. If it is determined that the data has been cached by another tile of a remote node and if there is a cache-miss by the local cache, then the memory access request is transmitted to the global coherency unit (GCU) and the snoop filter to fetch the cached data. Otherwise an interconnected memory is accessed.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: July 5, 2022
    Assignee: Marvell Asia Pte Ltd
    Inventors: Pranith Kumar Denthumdas, Rabin Sugumar, Isam Wadih Akkawi
  • Patent number: 11360891
    Abstract: A method of dynamic cache configuration includes determining, for a first clustering configuration, whether a current cache miss rate exceeds a miss rate threshold. The first clustering configuration includes a plurality of graphics processing unit (GPU) compute units clustered into a first plurality of compute unit clusters. The method further includes clustering, based on the current cache miss rate exceeding the miss rate threshold, the plurality of GPU compute units into a second clustering configuration having a second plurality of compute unit clusters fewer than the first plurality of compute unit clusters.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: June 14, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Mohamed Assem Ibrahim, Onur Kayiran, Yasuko Eckert, Gabriel H. Loh
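The reclustering decision can be sketched as below. The halving policy and ceiling-division chunking are illustrative assumptions; the patent only requires that the second configuration have fewer clusters.

```python
def recluster(compute_units, num_clusters, miss_rate, miss_rate_threshold):
    """Return a new cluster assignment as a list of lists of compute-unit ids."""
    if miss_rate > miss_rate_threshold and num_clusters > 1:
        num_clusters //= 2  # fewer, larger clusters share each cache
    size = -(-len(compute_units) // num_clusters)  # ceiling division
    return [compute_units[i:i + size]
            for i in range(0, len(compute_units), size)]
```

Fewer clusters means more compute units share each cache, which can improve hit rates when the working set overlaps across units.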
  • Patent number: 11354307
    Abstract: There is provided a database management system, comprising: a multicore processor, a shared memory, a partitioned memory, and a database engine adapted to execute at least one transaction worker thread managing transaction states and database indexes in the shared memory using a cache coherency mechanism, and execute at least one partition manager thread for handling database access actions submitted by the at least one transaction worker thread to access a database in the partitioned memory, the cache coherency mechanism being disabled in the partitioned memory; wherein the at least one transaction worker thread and the at least one partition manager thread are executed simultaneously on the multicore processor.
    Type: Grant
    Filed: March 21, 2018
    Date of Patent: June 7, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Israel Gold, Hillel Avni, Antonios Iliopoulos
  • Patent number: 11350145
    Abstract: A method, Delivery Node, DN, and Content Delivery Network, CDN, are optimized to deliver content of different categories. The DN is operative to receive a request for a content, obtain a determination of whether or not a category of content is associated with the requested content and responsive to the determination that no category of content is associated with the requested content, forward the request for the content towards an origin server and upon receiving a response from the origin server serve the content. The CDN comprises a plurality of DNs, a data processing service operative to obtain, from the DNs, and assemble, data sets into training and validation data formatted to be used for training and validating a Neural Network, NN. The CDN comprises a NN training service, operative to train and validate the NN and a configuration service, operative to configure the plurality of DNs with the trained and validated NN.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: May 31, 2022
    Assignee: Telefonaktiebolaget L M Ericsson (publ)
    Inventor: Zhongwen Zhu
  • Patent number: 11334494
    Abstract: Techniques for caching data are provided that include receiving, by a caching system, a write memory command for a memory address, the write memory command associated with a first color tag, determining, by a first sub-cache of the caching system, that the memory address is not cached in the first sub-cache, determining, by second sub-cache of the caching system, that the memory address is not cached in the second sub-cache, storing first data associated with the first write memory command in a cache line of the second sub-cache, storing the first color tag in the second sub-cache, receiving a second write memory command for the cache line, the write memory command associated with a second color tag, merging the second color tag with the first color tag, storing the merged color tag, and evicting the cache line based on the merged color tag.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: May 17, 2022
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Hippleheuser
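The color-tag merge above can be modeled compactly. Representing a color tag as a bitmask and merging with bitwise OR is an assumption for illustration, as are the function names.

```python
def write_with_color(sub_cache, addr, data, color_tag):
    line = sub_cache.get(addr)
    if line is None:
        # First write-miss: allocate the line with its color tag.
        sub_cache[addr] = {"data": data, "color": color_tag}
    else:
        line["data"] = data
        line["color"] |= color_tag  # merge the second tag with the first

def evictable(sub_cache, addr, evict_mask):
    # A line is an eviction candidate when its merged colors match the mask.
    return bool(sub_cache[addr]["color"] & evict_mask)
```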
  • Patent number: 11327887
    Abstract: Techniques related to a server-side extension of client-side caches are provided. A storage server computer receives, from a database server computer, an eviction notification indicating that a data block has been evicted from the database server computer's cache. The storage server computer comprises a memory hierarchy including a volatile cache and a persistent cache. Upon receiving the eviction notification, the storage server computer retrieves the data block from the persistent cache and stores it in the volatile cache. When the storage server computer receives, from the database server computer, a request for the data block, the storage server computer retrieves the data block from the volatile cache. Furthermore, the storage server computer sends the data block to the database server computer, thereby causing the data block to be stored in the database server computer's cache. Still further, the storage server computer evicts the data block from the volatile cache.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: May 10, 2022
    Assignee: Oracle International Corporation
    Inventors: Jia Shi, Wei Zhang, Kothanda Umamageswaran, Neil J. S. MacNaughton, Vijayakrishnan Nagarajan
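The eviction-notification flow above can be sketched as follows. Class and method names are illustrative, not Oracle's; the point is that the storage server promotes a block from its persistent cache to its volatile cache exactly when the database server evicts it.

```python
class StorageServer:
    def __init__(self, persistent):
        self.persistent = persistent  # block id -> data (persistent cache)
        self.volatile = {}            # faster tier, filled on eviction notices

    def on_eviction_notice(self, block_id):
        # The DB server evicted this block, so it may be requested again soon:
        # promote it from the persistent cache to the volatile cache.
        self.volatile[block_id] = self.persistent[block_id]

    def read_block(self, block_id):
        if block_id in self.volatile:
            # Serve from the volatile cache, then evict it there (the DB
            # server will re-cache the block itself).
            return self.volatile.pop(block_id)
        return self.persistent[block_id]
```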
  • Patent number: 11321146
    Abstract: The present disclosure relates to a method for a computer system comprising a plurality of processor cores, including a first processor core and a second processor core, wherein a cached data item is assigned to a first processor core, of the plurality of processor cores, for exclusively executing an atomic primitive. The method includes receiving, from a second processor core at a cache controller, a request for accessing the data item, and in response to determining that the execution of the atomic primitive is not completed by the first processor core, returning a rejection message to the second processor core.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: May 3, 2022
    Assignee: International Business Machines Corporation
    Inventors: Ralf Winkelmann, Michael Fee, Matthias Klein, Carsten Otte, Edward W. Chencinski, Hanno Eichelberger
  • Patent number: 11320890
    Abstract: Techniques and apparatuses are described that enable power-conserving cache memory usage. Main memory constructed using, e.g., DRAM can be placed in a low-power mode, such as a self-refresh mode, for longer time periods using the described techniques and apparatuses. A hierarchical memory system includes a supplemental cache memory operatively coupled between a higher-level cache memory and the main memory. The main memory can be placed in the self-refresh mode responsive to the supplemental cache memory being selectively activated. The supplemental cache memory can be implemented with a highly- or fully-associative cache memory that is smaller than the higher-level cache memory. Thus, the supplemental cache memory can handle those cache misses by the higher-level cache memory that arise because too many memory blocks are mapped to a single cache line. In this manner, a DRAM implementation of the main memory can be kept in the self-refresh mode for longer time periods.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: May 3, 2022
    Assignee: Google LLC
    Inventor: Christopher J. Phoenix
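The supplemental-cache idea can be sketched by counting how often main memory must leave self-refresh. All names are assumptions; the eviction policy here is simple insertion order rather than full associative replacement.

```python
class SupplementalCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}
        self.main_memory_wakeups = 0  # times DRAM had to exit self-refresh

    def access(self, addr, main_memory):
        if addr in self.lines:
            return self.lines[addr]          # hit: DRAM stays in self-refresh
        self.main_memory_wakeups += 1        # miss: wake main memory
        data = main_memory[addr]
        if len(self.lines) >= self.capacity:
            self.lines.pop(next(iter(self.lines)))  # evict oldest line
        self.lines[addr] = data
        return data
```

Because the supplemental cache is highly associative, it absorbs conflict misses that map many blocks to one higher-level cache line, keeping DRAM in its low-power mode longer.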
  • Patent number: 11316947
    Abstract: A method, computer system, and a computer program product for execution of a stateless service on a node in a workload execution environment is provided. The present invention may include defining for each node a workload container including a cache component of a cache-mesh. The present invention may include, upon receiving a state request from a stateless requesting service from one of the cache components of the cache-mesh in an execution container, determining whether a requested state is present in the cache component of a related execution container. The present invention may include, upon a cache miss, broadcasting the state request to other cache components of the cache-mesh, determining, by the other cache components, whether the requested state is present in respective caches, and upon any cache component identifying the requested state, sending the requested state to the requesting service using a protocol for communication.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: April 26, 2022
    Assignee: International Business Machines Corporation
    Inventors: Sven Sterbling, Christian Habermann, Sachin Lingadahalli Vittal
  • Patent number: 11316694
    Abstract: A computing device's trusted platform module (TPM) is configured with a cryptographic watchdog timer which forces a device reset if the TPM fails to solve a cryptographic challenge before the expiration of the timer. The computing device's TPM is configured to generate the cryptographic challenge, to which the computing device does not possess the cryptographic token for resolution. While the watchdog timer counts down, the computing device requests a cryptographic token from a remote service to solve the challenge. The remote service transmits the cryptographic token to the computing device so long as the remote service identifies no reason to withhold the token, such as the computing device being infected with malware. The interoperability of the computing device and remote service enables the remote service to exercise control and reset capabilities over the computing device.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: April 26, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stefan Thom, Brian Clifford Telfer, Paul England, Dennis James Mattoon, Marcus Peinado
  • Patent number: 11294675
    Abstract: A method for accessing a memory of a multi-core system, a related apparatus, a system, and a storage medium involve obtaining data from a system memory according to a prefetch instruction, and sending a message to a core that carries the to-be-accessed data. Each segment of data is stored in an intra-core cache based on the prefetch instruction.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: April 5, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jun Yao, Yasuhiko Nakashima, Tao Wang, Wei Zhang, Zuqi Liu, Shuzhan Bi
  • Patent number: 11294932
    Abstract: A massively parallel database management system includes an index store and a payload store including a set of storage systems of different temperatures. Both the stores each include a list of clusters. Each cluster includes a set of nodes with storage devices forming a group of segments. Nodes and clusters are connected over high speed links. The list of clusters within the payload store includes clusters of different temperatures. The payload store transitions data of a segment group from a higher temperature to a segment group in a lower temperature cluster in parallel. A node moves data of a segment in the higher temperature cluster to a corresponding node's segment in the lower temperature cluster. Once the data is written in the destination segment in the lower temperature cluster, the source segment is freed to store other data. The temperatures include blazing, hot, warm and cold.
    Type: Grant
    Filed: July 9, 2020
    Date of Patent: April 5, 2022
    Assignee: Ocient Inc.
    Inventors: George Kondiles, Rhett Colin Starr, Joseph Jablonski
  • Patent number: 11288211
    Abstract: A method for moving data includes identifying, by a staging manager in a container, a trigger condition associated with data being used by an application external to the container, performing an analysis on the trigger condition, making a first determination, based on the analysis, that the trigger condition is satisfied, and processing, based on the first determination, a data movement action.
    Type: Grant
    Filed: November 1, 2019
    Date of Patent: March 29, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Jean-Pierre Bono, Marc A. De Souter, Adrian Michaud
  • Patent number: 11288205
    Abstract: A processor maintains an access log indicating a stream of cache misses at a cache of the processor. In response to each of at least a subset of cache misses at the cache, the processor records a corresponding entry in the access log, indicating a physical memory address of the memory access request that resulted in the corresponding miss. In addition, the processor maintains an address translation log that indicates a mapping of physical memory addresses to virtual memory addresses. In response to an address translation (e.g., a page walk) that translates a virtual address to a physical address, the processor stores a mapping of the physical address to the corresponding virtual address at an entry of the address translation log. Software executing at the processor can use the two logs for memory management.
    Type: Grant
    Filed: June 23, 2015
    Date of Patent: March 29, 2022
    Assignees: Advanced Micro Devices, Inc., ATI TECHNOLOGIES ULC
    Inventors: Benjamin T. Sander, Mark Fowler, Anthony Asaro, Gongxian Jeffrey Cheng, Mike Mantor
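The two logs above can be modeled as a miss log of physical addresses plus a translation log joining physical back to virtual addresses. The structures and function names are assumptions for illustration.

```python
access_log = []        # physical addresses of cache misses, in stream order
translation_log = {}   # physical address -> virtual address

def record_miss(phys_addr):
    access_log.append(phys_addr)

def record_translation(virt_addr, phys_addr):
    # Recorded on address translation (e.g. after a page walk).
    translation_log[phys_addr] = virt_addr

def misses_by_virtual():
    # Memory-management software can join the logs to see which virtual
    # addresses miss; None marks misses with no recorded translation.
    return [translation_log.get(p) for p in access_log]
```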
  • Patent number: 11271949
    Abstract: The disclosure herein pertains to a security vulnerability scanner. The security vulnerability scanner parses a URL into a network portion and a fragment portion. The security vulnerability scanner then runs the URL on a network-side browser to generate processed results. Advantageously, the security vulnerability scanner is able to mimic a client side browser by running various fragment portions in order to analyze security risks.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: March 8, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: William Frederick Kruse, Ryan Pickren, Guifre Ruiz Utges, Zak Aaron Edwards
  • Patent number: 11271992
    Abstract: Described herein are technologies directed to lazy lock queue reduction in computing clusters. The disclosed lazy lock queue reduction techniques can be performed in preparation for cluster group changes. Prior to a cluster group change operation, such as a merge or a split of a node with a group, a notification of a planned group change operation can be sent to the nodes of a group. In response to the notification, the nodes of the group can perform lazy lock queue reduction techniques disclosed herein. In one disclosed lazy lock queue reduction technique, a node can set a drain goal for a lazy lock queue, and the node can drain the lazy lock queue according to the drain goal. In another disclosed lazy lock queue reduction technique, a node can set an age limit for lazy lock queue entries, and the node can remove lazy lock queue entries which are expired or over the age limit.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: March 8, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Antony Richards, Douglas Kilpatrick
  • Patent number: 11269785
    Abstract: A cache system includes a cache memory having a plurality of blocks, a dirty line list storing status information of a predetermined number of dirty lines among dirty lines in the plurality of blocks, and a cache controller controlling a data caching operation of the cache memory and providing statuses and variation of statuses of the dirty lines, according to the data caching operation, to the dirty line list. The cache controller performs a control operation to always store status information of a least-recently-used (LRU) dirty line into a predetermined storage location of the dirty line list.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: March 8, 2022
    Assignee: SK hynix Inc.
    Inventors: Seung Gyu Jeong, Dong Gun Kim
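The bounded dirty line list with the LRU dirty line at a fixed location can be sketched as follows; the class name and the flush-on-overflow behavior are illustrative assumptions.

```python
class DirtyLineList:
    def __init__(self, max_lines):
        self.max_lines = max_lines
        self.lines = []  # fixed convention: index 0 is the LRU dirty line

    def mark_dirty(self, line):
        if line in self.lines:
            self.lines.remove(line)
        elif len(self.lines) >= self.max_lines:
            self.lines.pop(0)    # tracked LRU dirty line is dropped (flushed)
        self.lines.append(line)  # most-recently-dirtied at the end

    def lru(self):
        # The LRU dirty line is always at the predetermined storage
        # location, so no search is needed.
        return self.lines[0] if self.lines else None
```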
  • Patent number: 11265268
    Abstract: The technology described in this document can be embodied in an integrated circuit device comprises a first data processing unit comprising one or more input ports for receiving incoming data, one or more inter-unit data links that couple the first data processing unit to one or more other data processing units, a first ingress management module connected to the one or more inter-unit data links, the first ingress management module configured to store the incoming data, and forward the stored data to the one or more inter-unit data links as multiple data packets, and a first ingress processing module. The integrated circuit device also comprises a second data processing unit comprising one or more output ports for transmitting outgoing data, and a second ingress management module connected to the one or more inter-unit data links.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: March 1, 2022
    Assignee: Innovium, Inc.
    Inventors: Ajit K. Jain, Avinash Gyanendra Mani, Mohammad Kamel Issa
  • Patent number: 11263143
    Abstract: A fabric controller is provided for a coherent accelerator fabric. The coherent accelerator fabric includes a host interconnect, a memory interconnect, and an accelerator interconnect. The host interconnect communicatively couples to a host device. The memory interconnect communicatively couples to an accelerator memory. The accelerator interconnect communicatively couples to an accelerator having a last-level cache (LLC). An LLC controller is provided that is configured to provide a bias check for memory access operations on the fabric.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: March 1, 2022
    Assignee: Intel Corporation
    Inventors: Ritu Gupta, Aravindh V. Anantaraman, Stephen R. Van Doren, Ashok Jagannathan
  • Patent number: 11256750
    Abstract: Techniques herein accelerate graph querying by caching neighbor vertices (NVs) of super-node vertices. In an embodiment, a computer receives a graph query (GQ) to extract result paths from a graph in a database. The GQ has a sequence of query vertices (QVs) and a sequence of query edges (QEs). The computer successively traverses each QE and QV to detect paths of the graph that match the GQ. Traversing each QE and QV entails retrieving NVs of a current graph vertex (CGV) of a current traversal path. If the CGV is a key in a cache whose keys are graph vertices having an excessive degree, then the computer retrieves NVs from the cache. Otherwise, the computer retrieves NVs from the database. If the degree is excessive, and the CGV is not a key in the cache, then the computer stores, into the cache, the CGV as a key for the NVs.
    Type: Grant
    Filed: August 14, 2019
    Date of Patent: February 22, 2022
    Assignee: Oracle International Corporation
    Inventors: Oskar Van Rest, Jinha Kim, Xuming Meng, Sungpack Hong, Hassan Chafi
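The super-node caching step of the traversal can be sketched in a few lines. The degree threshold and function names are assumptions; a dict stands in for the database.

```python
def neighbors_with_cache(graph, vertex, cache, degree_threshold=1000):
    """graph: dict vertex -> list of neighbor vertices (the 'database')."""
    if vertex in cache:              # CGV is a key: serve NVs from the cache
        return cache[vertex]
    nvs = graph[vertex]              # otherwise retrieve NVs from the database
    if len(nvs) >= degree_threshold:
        cache[vertex] = nvs          # excessive degree: cache the super-node
    return nvs
```

Only high-degree vertices are cached, so the cache stays small while the most expensive neighbor fetches are amortized across traversals.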
  • Patent number: 11256629
    Abstract: Techniques are disclosed relating to filtering cache accesses. In some embodiments, a control unit is configured to, in response to a request to process a set of data, determine a size of a portion of the set of data to be handled using a cache. In some embodiments, the control unit is configured to determine filtering parameters indicative of a set of addresses corresponding to the determined size. In some embodiments, the control unit is configured to process one or more access requests for the set of data based on the determined filter parameters, including: using the cache to process one or more access requests having addresses in the set of addresses and bypassing the cache to access a backing memory directly, for access requests having addresses that are not in the set of addresses. The disclosed techniques may reduce average memory bandwidth or peak memory bandwidth.
    Type: Grant
    Filed: September 21, 2020
    Date of Patent: February 22, 2022
    Assignee: Apple Inc.
    Inventors: Karthik Ramani, Fang Liu, Steven Fishwick, Jonathan M. Redshaw
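A toy model of the filtering idea above: addresses inside a chosen window are handled by the cache, while everything else bypasses it and goes straight to backing memory. The half-open address window and write-through behavior are assumptions made for demonstration.

```python
# Illustrative sketch of address-range cache filtering.
class FilteredCache:
    def __init__(self, cached_range):
        self.lo, self.hi = cached_range   # half-open [lo, hi): addresses to cache
        self.cache = {}
        self.backing = {}                 # stands in for backing memory

    def write(self, addr, value):
        self.backing[addr] = value        # write-through for simplicity
        if self.lo <= addr < self.hi:
            self.cache[addr] = value      # only filtered addresses occupy cache

    def read(self, addr):
        if self.lo <= addr < self.hi and addr in self.cache:
            return self.cache[addr]       # cache hit
        return self.backing[addr]         # bypass: direct backing-memory access

fc = FilteredCache((0, 64))
fc.write(10, "hot")    # inside window: cached
fc.write(100, "cold")  # outside window: bypasses the cache
```

Capping how much of the working set may occupy the cache is what bounds the memory-bandwidth cost of cache churn.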
  • Patent number: 11249899
    Abstract: Techniques for filesystem management for cloud object storage are described. In one embodiment, a method includes writing, by a filesystem layer, a plurality of entries to a log structured file tree, including filesystem metadata and filesystem data. The method includes performing a flush operation of the entries from the filesystem layer to one or more objects in a distributed cloud object storage layer. The method includes storing the filesystem metadata and the filesystem data to the one or more objects in the distributed cloud object storage layer. The method further includes storing flush metadata generated during each flush operation, including a flush sequence number associated with each flush operation. Each object of the one or more objects in the distributed cloud object storage layer is identified by a key that identifies the flush sequence number, an object identifier, and a rebirth identifier.
    Type: Grant
    Filed: September 19, 2018
    Date of Patent: February 15, 2022
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Shravan Gaonkar, Mayuresh Vartak
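The abstract says each object key encodes a flush sequence number, an object identifier, and a rebirth identifier. A hypothetical key scheme along those lines, with field widths and separator chosen purely for illustration:

```python
# Hypothetical key layout: zero-padded fields so keys sort by flush order.
def object_key(flush_seq: int, object_id: int, rebirth_id: int) -> str:
    """Compose a cloud-object key that encodes flush ordering."""
    return f"{flush_seq:08d}-{object_id:06d}-{rebirth_id:02d}"

def parse_key(key: str):
    """Recover the three fields from a key."""
    seq, oid, rebirth = key.split("-")
    return int(seq), int(oid), int(rebirth)

k = object_key(42, 7, 0)   # key for object 7 written during flush #42
```

Zero-padding makes lexicographic key order match flush order, which is convenient when listing objects in an object store that sorts keys as strings.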
  • Patent number: 11251966
    Abstract: Disclosed herein are computer-implemented methods; computer-implemented systems; and non-transitory, computer-readable media, for sending cross-chain messages. One computer-implemented method includes storing, through consensus of blockchain nodes of a first blockchain network, an authenticable message (AM) associated with a first account to a blockchain associated with the first blockchain network, where the AM comprises an identifier of the first blockchain network, information of the first account, information of a recipient of the AM, and content of the AM. The AM and location information are transmitted to a relay to be forwarded to the recipient located outside of the first blockchain network, where the location information identifies a location of the AM in the blockchain and the recipient includes one or more accounts outside of the first blockchain network.
    Type: Grant
    Filed: February 10, 2020
    Date of Patent: February 15, 2022
    Assignee: Advanced New Technologies Co., Ltd.
    Inventor: Honglin Qiu
  • Patent number: 11238949
    Abstract: Memory devices including a controller for access of an array of memory cells that is configured to accept a sequence of commands to cause the memory device to read a first set of data from the array of memory cells into a first register, load the first set of data into a first portion of a second register, write a set of test data to a second portion of the second register during a reading of a second set of data from the array of memory cells to the first register, read the set of test data from the second portion of the second register during the reading of the second set of data, and output the set of test data from the memory device during the reading of the second set of data.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: February 1, 2022
    Assignee: Micron Technology, Inc.
    Inventor: Terry Grunzke
  • Patent number: 11226819
    Abstract: A processing unit includes a plurality of processing elements and one or more caches. A first thread executes a program that includes one or more prefetch instructions to prefetch information into a first cache. Prefetching is selectively enabled when executing the first thread on a first processing element dependent upon whether one or more second threads previously executed the program on the first processing element. The first thread is then dispatched to execute the program on the first processing element. In some cases, a dispatcher receives the first thread for dispatching to the first processing element. The dispatcher modifies the prefetch instruction to disable prefetching into the first cache in response to the one or more second threads having previously executed the program on the first processing element.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: January 18, 2022
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Brian Emberling, Michael Mantor
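A sketch of the dispatch-time decision described above: skip the prefetch when an earlier thread already ran the same program on the target processing element, since its cache is then likely warm. The history structure and identifiers are illustrative names, not AMD's design.

```python
# Hypothetical dispatcher bookkeeping: per-PE set of programs already executed.
def should_prefetch(history, pe_id, program_id):
    """Prefetch only if no prior thread ran this program on this PE."""
    return program_id not in history.get(pe_id, set())

def dispatch(history, pe_id, program_id):
    prefetch = should_prefetch(history, pe_id, program_id)
    history.setdefault(pe_id, set()).add(program_id)   # record the execution
    return prefetch

history = {}
first = dispatch(history, pe_id=0, program_id="shader_a")   # cold cache: prefetch
second = dispatch(history, pe_id=0, program_id="shader_a")  # warm: skip prefetch
```

Suppressing the redundant prefetch avoids spending cache bandwidth refetching lines that a prior thread already installed.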
  • Patent number: 11216379
    Abstract: A processor system includes a processor core, a cache, a cache controller, and a cache assist controller. The processor core issues a read/write command for reading data from or writing data to a memory. The processor core also outputs an address range specifying addresses for which the cache assist controller can return zero fill, e.g., an address range for the read/write command. The cache controller transmits a cache request to the cache assist controller based on the read/write command. The cache assist controller receives the address range output by the processor core and compares the address range to the cache request. If a memory address in the cache request falls within the address range, the cache assist controller returns a string of zeroes, rather than fetching and returning data stored at the memory address.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: January 4, 2022
    Assignee: Analog Devices International Unlimited Company
    Inventors: Thirukumaran Natrayan, Saurbh Srivastava
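The zero-fill check in the abstract can be sketched as a simple range test: if the requested address falls inside the range the core declared, return zeroes instead of fetching. The function shape, word width, and dict-backed memory are assumptions for illustration.

```python
# Illustrative cache-assist zero-fill: no memory fetch for in-range addresses.
def cache_assist_read(addr, zero_range, fetch, width=4):
    lo, hi = zero_range
    if lo <= addr < hi:
        return bytes(width)        # return a string of zeroes, skip the fetch
    return fetch(addr)             # outside the range: normal memory access

memory = {0x2000: b"\xde\xad\xbe\xef"}
data = cache_assist_read(0x1000, (0x1000, 0x1800), memory.get)  # zero-filled
```

Answering in-range reads locally saves both the fetch latency and the memory bandwidth for regions the core knows should read as zero.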
  • Patent number: 11216338
    Abstract: A storage device includes a nonvolatile memory device that includes a plurality of pages, each of which includes a plurality of memory cells, and a controller that receives first write data expressed by 2m states (m being an integer greater than 1) from an external host device. The controller in a first operating mode shapes the first write data to second write data, which are expressed by “k” states (k being an integer greater than 2) smaller in number than the 2m states, performs first error correction encoding on the second write data to generate third write data expressed by the “k” states, and transmits the third write data to the nonvolatile memory device for writing at a selected page from the plurality of pages.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: January 4, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Youngjun Hwang, Dong-Min Shin, Changkyu Seol, Jaeyong Son, Hong Rak Son
  • Patent number: 11216387
    Abstract: A hybrid cache memory and a method for controlling the same are provided. The method for controlling a cache includes: receiving a request for data; determining whether the requested data is present in a first portion of the cache, a second portion of the cache, or not in the cache, wherein the first portion of the cache has a smaller access latency than the second portion of the cache; loading the requested data from a memory of a next level into the first portion of the cache and the second portion of the cache if the requested data is not in the cache, and retrieving the requested data from the first portion of the cache; and retrieving the requested data from the first portion of the cache or the second portion of the cache without writing data to the second portion of the cache if the requested data is in the cache.
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: January 4, 2022
    Assignee: Taiwan Semiconductor Manufacturing Company, Ltd.
    Inventor: Shih-Lien Linus Lu
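A minimal model of the fill policy this abstract describes, assuming a dict-backed cache: a miss fills both portions, while a hit is served without rewriting the slower portion. The class and counters are illustrative, not TSMC's design.

```python
# Hypothetical hybrid cache: fast (low-latency) and slow portions in one cache.
class HybridCache:
    def __init__(self, next_level):
        self.fast = {}              # low-latency portion
        self.slow = {}              # higher-latency portion
        self.next_level = next_level
        self.slow_writes = 0        # tracks writes into the slow portion

    def read(self, addr):
        if addr in self.fast:
            return self.fast[addr]      # fastest path
        if addr in self.slow:
            return self.slow[addr]      # hit: served with no slow-portion write
        value = self.next_level[addr]   # miss: load from next-level memory
        self.fast[addr] = value         # fill both portions on a miss
        self.slow[addr] = value
        self.slow_writes += 1
        return value

mem = {0xA0: 123}
hc = HybridCache(mem)
hc.read(0xA0)   # miss: fills fast and slow portions
hc.read(0xA0)   # hit: no additional slow-portion write
```

Avoiding writes to the slow portion on hits matters when that portion has a higher (or asymmetric) write cost, as many dense memory technologies do.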
  • Patent number: 11216374
    Abstract: A router device may receive a request for access to a file from a user device, wherein a master version of the file is stored in a data structure associated with a server device. The router device may generate, based on the request, a copy of a cached version of the file, wherein the cached version of the file is stored in a data structure associated with the router device. The router device may send the copy of the cached version of the file to the user device.
    Type: Grant
    Filed: January 14, 2020
    Date of Patent: January 4, 2022
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Jonathan Emerson Hirko, Rory Liam Connolly, Wei G. Tan, Nikolay Kulikaev, Manian Krishnamoorthy
  • Patent number: 11210232
    Abstract: A processor includes a page table walk cache that stores address translation information, and a page table walker. The page table walker fetches first output addresses indicated by first indexes of a first input address by looking up the address translation information and at least a part of page tables, and compares a matching level between second indexes of a second input address and the first indexes of the first input address with a walk cache hit level obtained by looking up the page table walk cache using the second indexes.
    Type: Grant
    Filed: September 5, 2019
    Date of Patent: December 28, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sung-Boem Park, Moinul Syed, Ju-Hee Choi
  • Patent number: 11204867
    Abstract: There is disclosed in an example a peripheral component interconnect express (PCIe) controller to provide coherent memory mapping between an accelerator memory and a host memory address space. The PCIe controller may include extensions to provide a coherent accelerator interconnect (CAI) to provide bias-based coherency tracking between the accelerator memory and the host memory address space. The extensions may include: a mapping engine to provide opcode mapping between PCIe instructions and on-chip system fabric (OSF) instructions for the CAI, a tunneling engine to provide scalable memory interconnect (SMI) tunneling of host memory operations to the accelerator memory via the CAI, host-bias-to-device-bias (HBDB) flip engine to enable the accelerator to flush a host cache line, and a QoS engine comprising a plurality of virtual channels.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: December 21, 2021
    Assignee: Intel Corporation
    Inventors: Ishwar Agarwal, Stephen R. Van Doren, Ramacharan Sundararaman
  • Patent number: 11200177
    Abstract: A data processing system (2) incorporates a first exclusive cache memory (8, 10) and a second exclusive cache memory (14). A snoop filter (18) located together with the second exclusive cache memory on one side of the communication interface (12) serves to track entries within the first exclusive cache memory. The snoop filter includes retention data storage circuitry to store retention data for controlling retention of cache entries within the second exclusive cache memory. Retention data transfer circuitry (20) serves to transfer the retention data to and from the retention data storage circuitry within the snoop filter and the second cache memory as the cache entries concerned are transferred between the second exclusive cache memory and the first exclusive cache memory.
    Type: Grant
    Filed: October 19, 2016
    Date of Patent: December 14, 2021
    Assignee: ARM LIMITED
    Inventors: Alex James Waugh, Dimitrios Kaseridis, Klas Magnus Bruce, Michael Filippo, Joseph Michael Pusdesris, Jamshed Jalal
  • Patent number: 11194729
    Abstract: A caching system including a first sub-cache and a second sub-cache in parallel with the first sub-cache, wherein the second sub-cache includes a set of cache lines, line type bits configured to store an indication that a corresponding cache line of the set of cache lines is configured to store write-miss data, and an eviction controller configured to flush stored write-miss data based on the line type bits.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: December 7, 2021
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Hippleheuser
  • Patent number: 11188342
    Abstract: An apparatus and method for a speculative conditional move instruction. A processor comprising: a decoder to decode a first speculative conditional move instruction; a prediction storage to store prediction data related to previously executed speculative conditional move instructions; and execution circuitry to read first prediction data associated with the speculative conditional move instruction and to execute the speculative conditional move instruction either speculatively or non-speculatively based on the first prediction data.
    Type: Grant
    Filed: April 1, 2020
    Date of Patent: November 30, 2021
    Assignee: Intel Corporation
    Inventors: Amjad Aboud, Gadi Haber, Jared Warner Stark, IV
  • Patent number: 11182188
    Abstract: Techniques for replicating virtual machine data are provided. A plurality of compute nodes running on a primary cluster determine the amount of virtual machine data cached within each compute node. Based on the amount of virtual machine data for a particular virtual machine, a particular compute node is assigned to replicate the data to a secondary cluster. The amount of particular virtual machine data copied to the secondary cluster is based on updated virtual machine data that belongs to a particular state of the virtual machine. The destination of the particular virtual machine data is based on available cache space and prior replication statistics for target compute nodes on the secondary cluster.
    Type: Grant
    Filed: April 18, 2018
    Date of Patent: November 23, 2021
    Assignee: VMware, Inc.
    Inventors: Boris Weissman, Sazzala Reddy
  • Patent number: 11169920
    Abstract: A system includes a first memory component of a first memory type, a second memory component of a second memory type with a higher access latency than the first memory component, and a third memory component of a third memory type with a higher access latency than the first and second memory components. The system further includes a processing device to identify a section of a data page stored in the first memory component, and access patterns associated with the data page and the section of the data page. Based on the access patterns, the processing device determines to cache the data page at the second memory component and copies the section of the data page stored in the first memory component to the second memory component. The processing device then copies additional sections of the data page stored at the third memory component to the second memory component.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: November 9, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Paul Stonelake, Horia C. Simionescu, Samir Mittal, Robert M. Walker, Anirban Ray, Gurpreet Anand
  • Patent number: 11169924
    Abstract: An apparatus includes a CPU core, a first memory cache with a first line size, and a second memory cache having a second line size larger than the first line size. Each line of the second memory cache includes an upper half and a lower half. A memory controller subsystem is coupled to the CPU core and to the first and second memory caches. Upon a miss in the first memory cache for a first target address, the memory controller subsystem determines that the first target address resulting in the miss maps to the lower half of a line in the second memory cache, retrieves the entire line from the second memory cache, and returns the entire line from the second memory cache to the first memory cache.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: November 9, 2021
    Assignee: Texas Instruments Incorporated
    Inventors: Bipin Prasad Heremagalur Ramaprasad, David Matthew Thompson, Abhijeet Ashok Chachad, Hung Ong
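The address math behind "maps to the lower half of a line in the second memory cache" can be illustrated with assumed line sizes: a 64-byte first-level line and a 128-byte second-level line, so each second-level line holds two first-level-sized halves. The sizes are assumptions, not the patented configuration.

```python
# Illustrative two-level line geometry (assumed sizes).
L1_LINE = 64
L2_LINE = 128   # each L2 line holds two L1-sized halves

def l2_line_base(addr):
    """Base address of the L2 line containing addr."""
    return addr & ~(L2_LINE - 1)

def maps_to_lower_half(addr):
    """True if addr falls in the first L1_LINE bytes of its L2 line."""
    return (addr - l2_line_base(addr)) < L1_LINE
```

Under the claimed scheme, a first-level miss on a lower-half address triggers retrieval of the entire second-level line, so the upper half arrives alongside the requested half.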
  • Patent number: 11169812
    Abstract: Systems, apparatuses, and methods for arbitrating threads in a computing system are disclosed. A computing system includes a processor with multiple cores, each capable of simultaneously processing instructions of multiple threads. When a thread throttling unit receives an indication that a shared cache has resource contention, the throttling unit sets a threshold number of cache misses for the cache. If the number of cache misses exceeds this threshold, then the throttling unit notifies a particular upstream computation unit to throttle the processing of instructions for the thread. After a time period elapses, if the cache continues to exceed the threshold, then the throttling unit notifies the upstream computation unit to more restrictively throttle the thread by performing one or more of reducing the selection rate and increasing the time period. Otherwise, the unit notifies the upstream computation unit to less restrictively throttle the thread.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: November 9, 2021
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Paul James Moyer, Douglas Benson Hunt, Kai Troester
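The escalate/relax loop in the abstract can be sketched as a per-window level adjustment: exceed the miss threshold and the throttle level rises; stay under it and the level relaxes. The discrete levels and threshold are illustrative assumptions.

```python
# Hedged sketch of miss-based thread throttling.
def adjust_throttle(miss_count, threshold, level, max_level=3):
    """Return the new throttle level for an upstream computation unit."""
    if miss_count > threshold:
        return min(level + 1, max_level)   # throttle more restrictively
    return max(level - 1, 0)               # relax toward full-rate issue

level = 0
for misses in (120, 150, 40):              # threshold of 100 misses per window
    level = adjust_throttle(misses, 100, level)
```

In a real design the level would map onto the knobs the abstract names, such as the instruction-selection rate and the evaluation time period.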