Coherency Patents (Class 711/141)
  • Patent number: 11126615
    Abstract: Systems for prosecuting Internet messaging campaigns. Two or more data sources are determined, where at least one of the data sources comprises demographic attributes corresponding to shared IDs such as recipient IDs. A first join operation is performed over matching instances of the shared IDs in the two or more data sources. The first join operation results in a personalization table comprising rows having at least recipient IDs, respective external addresses, and at least one of the demographic attributes. The personalization table is transformed into a key-value data structure that is published to a caching subsystem. The caching subsystem is used to select a first set of recipients; a second set of recipients is determined without performing a second join operation. Personalized messages to at least some of the first and second sets of recipients are formed using a message template and the key-value data structure.
    Type: Grant
    Filed: January 8, 2020
    Date of Patent: September 21, 2021
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Jeffrey Taihana Tuatini, Bradley Harold Sergeant, Raghu Upadhyayula, Qing Zou
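A minimal sketch of the flow described in patent 11126615 above: join demographic data with recipient records once, publish the result as a key-value structure to a cache, and then select recipients and form messages from the cache without a second join. The field names, the in-memory dictionary standing in for the caching subsystem, and the template are illustrative assumptions, not the patented implementation.

```python
# Illustrative only: join once, publish key-value rows to a cache stand-in,
# then select recipients from the cache without re-joining.
recipients = [
    {"recipient_id": 1, "email": "a@example.com"},
    {"recipient_id": 2, "email": "b@example.com"},
]
demographics = [
    {"recipient_id": 1, "age_band": "25-34"},
    {"recipient_id": 2, "age_band": "35-44"},
]

# First (and only) join: build the personalization table.
demo_by_id = {d["recipient_id"]: d for d in demographics}
personalization = [
    {**r, **demo_by_id.get(r["recipient_id"], {})} for r in recipients
]

# Transform into a key-value structure and "publish" it to the cache stand-in.
cache = {row["recipient_id"]: row for row in personalization}

# Select a recipient set from the cache -- no second join is needed.
first_set = [rid for rid, row in cache.items() if row.get("age_band") == "25-34"]

template = "Hello {email}, here is an offer for the {age_band} group"
messages = [template.format(**cache[rid]) for rid in first_set]
print(messages)
```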
  • Patent number: 11119926
    Abstract: Systems, apparatuses, and methods for maintaining a region-based cache directory are disclosed. A system includes multiple processing nodes, with each processing node including a cache subsystem. The system also includes a cache directory to help manage cache coherency among the different cache subsystems of the system. In order to reduce the number of entries in the cache directory, the cache directory tracks coherency on a region basis rather than on a cache line basis, wherein a region includes multiple cache lines. Accordingly, the system includes a region-based cache directory to track regions which have at least one cache line cached in any cache subsystem in the system. The cache directory includes a reference count in each entry to track the aggregate number of cache lines that are cached per region. If a reference count of a given entry goes to zero, the cache directory reclaims the given entry.
    Type: Grant
    Filed: December 18, 2017
    Date of Patent: September 14, 2021
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Vydhyanathan Kalyanasundharam, Kevin M. Lepak, Amit P. Apte, Ganesh Balakrishnan, Eric Christopher Morton, Elizabeth M. Cooper, Ravindra N. Bhargava
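The region-based directory in patent 11119926 above can be pictured with a toy model: keep one entry per region (a group of cache lines), count how many of the region's lines are cached anywhere in the system, and reclaim the entry when the count reaches zero. The region size, the dictionary-based directory, and the cache/evict hooks are assumptions made only for illustration.

```python
LINES_PER_REGION = 64  # assumed number of cache lines per region

class RegionDirectory:
    """Toy region-based directory: one entry with a reference count per region."""
    def __init__(self):
        self.entries = {}  # region id -> aggregate count of cached lines

    def region_of(self, line_addr):
        return line_addr // LINES_PER_REGION

    def on_line_cached(self, line_addr):
        region = self.region_of(line_addr)
        self.entries[region] = self.entries.get(region, 0) + 1

    def on_line_evicted(self, line_addr):
        region = self.region_of(line_addr)
        self.entries[region] -= 1
        if self.entries[region] == 0:
            del self.entries[region]  # reference count hit zero: reclaim the entry

d = RegionDirectory()
d.on_line_cached(100)   # lines 100 and 101 fall in the same region
d.on_line_cached(101)
d.on_line_evicted(100)
d.on_line_evicted(101)
print(d.entries)        # {} -- region entry reclaimed
```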
  • Patent number: 11113250
    Abstract: Techniques for activity tracking, data classification, and in-database archiving are described. Activity tracking refers to techniques that collect statistics related to user access patterns, such as the frequency or recency with which users access particular database elements. The statistics gathered through activity tracking can be supplied to data classification techniques to automatically classify the database elements or to assist users with manually classifying the database elements. Then, once the database elements have been classified, in-database archiving techniques can be employed to move database elements to different storage tiers based on the classifications. However, although the techniques related to activity tracking, data classification, and in-database archiving may be used together as described above; each technique may also be practiced separately.
    Type: Grant
    Filed: August 19, 2019
    Date of Patent: September 7, 2021
    Assignee: Oracle International Corporation
    Inventors: Liang Guo, Vivekanandhan Raja, Amit Ganesh, Joshua Gould
  • Patent number: 11106608
    Abstract: A processing unit includes a processor core that executes a store-conditional instruction that generates a store-conditional request specifying a store target address. The processing unit further includes a reservation register that records shared memory addresses for which the processor core has obtained reservations and a cache that services the store-conditional request by conditionally updating the shared memory with the store data based on the reservation register indicating a reservation for the store target address. The cache includes a blocking state machine configured to protect the store target address against access by any conflicting memory access request snooped on a system interconnect during a protection window extension following servicing of the store-conditional request.
    Type: Grant
    Filed: June 22, 2020
    Date of Patent: August 31, 2021
    Assignee: International Business Machines Corporation
    Inventors: Derek E. Williams, Guy L. Guthrie, Hugh Shen, Sanjeev Ghai
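As a rough model of the store-conditional handling in patent 11106608 above: the store succeeds only if a reservation for the target address is still held, and a short protection window after the store rejects conflicting snoops. The single reservation register, the fixed window length, and the dictionary memory below are simplifying assumptions.

```python
class ReservationCache:
    """Toy model: one reservation register plus a post-store protection window."""
    def __init__(self, window=4):
        self.reservation = None    # address reserved by a load-reserve
        self.protect_addr = None   # address protected after a store-conditional
        self.protect_until = 0     # time at which the protection window ends
        self.window = window
        self.memory = {}

    def load_reserve(self, addr):
        self.reservation = addr
        return self.memory.get(addr, 0)

    def store_conditional(self, addr, value, now):
        if self.reservation != addr:
            return False           # reservation lost: the store fails
        self.memory[addr] = value
        self.reservation = None
        self.protect_addr = addr   # keep protecting the address after the store
        self.protect_until = now + self.window
        return True

    def snoop(self, addr, now):
        # Conflicting snoops are rejected during the protection window extension.
        if addr == self.protect_addr and now < self.protect_until:
            return "retry"
        if self.reservation == addr:
            self.reservation = None
        return "ok"

c = ReservationCache()
c.load_reserve(0x40)
print(c.store_conditional(0x40, 7, now=1))  # True
print(c.snoop(0x40, now=2))                 # 'retry' (inside the window)
print(c.snoop(0x40, now=10))                # 'ok' (window expired)
```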
  • Patent number: 11100058
    Abstract: In accordance with an embodiment, described herein is a system and method for connection concentration in a database environment. A transparency engine provided between client applications and a database can include a connection pool (e.g., UCP connection pool). The transparency engine can operate as a proxy engine for the database and as a session abstraction layer for the client applications, to enable the client applications to utilize features provided by the connection pool without code changes. The transparency engine can receive application connections from the client applications, and concentrate the application connections on a smaller number of database connections maintained in the connection pool.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: August 24, 2021
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Pablo Silberkasten, Carol Colrain, Kevin Neel, Michael McMahon, Saurabh Verma, Jean De Lavarene
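Connection concentration, as described in patent 11100058 above, can be illustrated with a toy proxy that shares a small pool of database connections among a larger number of application sessions. The pool size and the borrow/release API are assumptions for the sketch, not Oracle UCP's actual interface.

```python
from collections import deque

class ConnectionConcentrator:
    """Toy proxy: many application sessions share few database connections."""
    def __init__(self, db_connections):
        self.idle = deque(db_connections)  # small, fixed pool of DB connections
        self.waiting = deque()             # sessions waiting for a free connection

    def borrow(self, session):
        if self.idle:
            return self.idle.popleft()     # concentrate the session onto a pooled connection
        self.waiting.append(session)       # otherwise queue until one frees up
        return None

    def release(self, conn):
        if self.waiting:
            session = self.waiting.popleft()
            print(f"{session} resumes on {conn}")
            return conn                    # immediately reused by a waiting session
        self.idle.append(conn)
        return None

pool = ConnectionConcentrator(["db-conn-1", "db-conn-2"])
for s in ["app-1", "app-2", "app-3"]:      # three sessions, two database connections
    print(s, "->", pool.borrow(s))
pool.release("db-conn-1")                  # app-3 resumes on db-conn-1
```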
  • Patent number: 11099919
    Abstract: Methods for testing a data coherency algorithm in a simulated multi-processor environment are provided. They include implementing: (i) a transactional footprint keeping the address of each cache line used by the processor core; (ii) a reference model operating on and keeping a set of timestamps for each cache line, the set including a construction date (a global timestamp when new data arrives at a private cache hierarchy) and an expiration date (a global timestamp when a cross-invalidation hits the private cache hierarchy); (iii) a core observed timestamp representing the oldest construction date of data used so far; and (iv) interface events monitoring instruction sequences guaranteed by transactional execution to ensure atomicity of a transaction. Upon detecting a transaction end event and finding a cache line of the transactional footprint whose expiration date is older than or equal to the core observed timestamp, an error is reported.
    Type: Grant
    Filed: January 6, 2020
    Date of Patent: August 24, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Christian Habermann, Gerrit Koch, Martin Recktenwald, Ralf Winkelmann
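The check in patent 11099919 above reduces to a timestamp comparison: for every cache line in the transactional footprint, if its expiration date (when a cross-invalidation hit) is not newer than the core observed timestamp, atomicity may have been violated and an error is reported. The data layout below is an assumed simplification of the reference model.

```python
def check_transaction(footprint, timestamps, core_observed_time):
    """Toy version of the reference-model check: flag footprint lines whose
    expiration date is older than or equal to the core observed timestamp."""
    errors = []
    for addr in footprint:
        construction, expiration = timestamps[addr]
        if expiration is not None and expiration <= core_observed_time:
            errors.append(addr)
    return errors

# timestamps: address -> (construction date, expiration date or None)
timestamps = {
    0x100: (5, None),   # never cross-invalidated
    0x140: (3, 9),      # cross-invalidated at global time 9
}
footprint = [0x100, 0x140]
core_observed_time = 10     # toy core observed timestamp

print(check_transaction(footprint, timestamps, core_observed_time))  # [0x140]
```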
  • Patent number: 11093287
    Abstract: Data management for edge architected computing systems extends current storage and memory schemes of edge resources to expose interfaces to allow a device, such as an endpoint or client device, or another edge resource, to specify criteria for managing data originating from the device and stored in an edge resource, and extends the storage and memory controllers to manage data in accordance with the criteria, including removing stored data that no longer satisfies the criteria. The criteria includes a temporal hint to specify a time after which the data can be removed, a physical hint to specify a list of edge resources outside of which the data can be removed, an event-based hint to specify an event after which the data can be removed, and a quality of service condition to modify the time specified in the temporal hint based on a condition, such as memory and storage capacity of the edge resource in which the data is managed.
    Type: Grant
    Filed: May 24, 2019
    Date of Patent: August 17, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Ramanathan Sethuraman, Karthik Kumar, Mark A. Schmisseur, Brinda Ganesh
  • Patent number: 11093405
    Abstract: A network processor includes a memory subsystem serving a plurality of processor cores. The memory subsystem includes a hierarchy of caches. A mid-level instruction cache provides for caching instructions for multiple processor cores. Likewise, a mid-level data cache provides for caching data for multiple cores, and can optionally serve as a point of serialization of the memory subsystem. A low-level cache is partitionable into partitions that are subsets of both ways and sets, and each partition can serve an independent process and/or processor core.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: August 17, 2021
    Assignee: MARVELL ASIA PTE, LTD.
    Inventors: Shubhendu S. Mukherjee, David H. Asher, Richard E. Kessler, Srilatha Manne
  • Patent number: 11073892
    Abstract: An apparatus to switch a central processing unit between operational modes includes, in one embodiment, a central processing unit (“CPU”) having at least a first operation mode and a second operation mode, where the second operation mode is a higher performance operation mode than the first operation mode. The apparatus also includes a switching unit that switches a state of the CPU to the second operation mode in response to starting one of an operating system or an application program based on a user operation in a state in which the first operation mode is set, and switches the state of the CPU to the first operation mode in response to a determination that a condition is met.
    Type: Grant
    Filed: June 3, 2019
    Date of Patent: July 27, 2021
    Assignee: Lenovo (Singapore) PTE. LTD.
    Inventors: Kazuhiro Kosugi, Takuroh Kamimura
  • Patent number: 11076020
    Abstract: A system and method dynamically transition the file system role of compute nodes in a distributed clustered file system for an object that includes an embedded compute engine (a storlet). Embodiments overcome prior-art problems of a storlet in a distributed storage system by using a storlet engine with a dynamic role module that assigns or changes the file system role served by a node to a role better suited to a computation operation in the storlet. The role assignment is based on a classification of the computation operation and the file system role that matches it. For example, a role could be assigned that helps reduce storage needs, communication resources, and the like.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: July 27, 2021
    Assignee: International Business Machines Corporation
    Inventors: Duane M. Baldwin, Sasikanth Eda, John T. Olson, Sandeep R. Patil
  • Patent number: 11048630
    Abstract: A symmetrical multi-processing (SMP) node, a distributed SMP (DSMP) system comprising a plurality of SMP nodes, and a method implemented in the SMP node are disclosed. The SMP node comprises: a plurality of processors, a memory coupled to the plurality of processors, and a memory coherent proxy coupled to the plurality of processors through a coherent accelerator interface. The memory coherent proxy is configured to manage statuses of cache lines in the memory.
    Type: Grant
    Filed: November 27, 2018
    Date of Patent: June 29, 2021
    Assignee: International Business Machines Corporation
    Inventors: Zhen Peng Zuo, Peng Fei Gou, Yang Fan Liu, Yang Liu, Hua Xin Yao
  • Patent number: 11042515
    Abstract: Embodiments are directed towards managing and tracking item identification of a plurality of items to determine if an item is a new or existing item, where an existing item has been previously processed. In some embodiments, two or more item identifiers may be generated. In one embodiment, generating the two or more item identifiers may include analyzing the item using a small item size characteristic, a compressed item, or for an identifier collision. The two or more item identifiers may be employed to determine if the item is a new or existing item. In one embodiment, the two or more item identifiers may be compared to a record about an existing item to determine if the item is a new or existing item. If the item is an existing item, then the item may be further processed to determine if the existing item has actually changed.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: June 22, 2021
    Assignee: Splunk Inc.
    Inventors: Amritpal Singh Bath, Mitchell Neuman Blank, Vishal Patel, Stephen Phillip Sorkin
  • Patent number: 11030115
    Abstract: An apparatus for using a dataless cache entry includes a cache memory and a cache controller configured to identify a first cache entry in cache memory as a potential cache entry to be replaced according to a cache replacement algorithm, compare a data value of the first cache entry to a predefined value, and write a memory address tag and state bits of the first cache entry to a dataless cache entry in response to the data value of the first cache entry matching the predefined value, wherein the dataless cache entry in the cache memory stores a memory address tag and state bits associated with the memory address, wherein the dataless cache entry represents the predefined value, and wherein the dataless cache entry occupies fewer bits than the first cache entry.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: June 8, 2021
    Assignee: LENOVO Enterprise Solutions (Singapore) PTE. LTD
    Inventor: Daniel J Colglazier
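Patent 11030115 above replaces a full cache entry whose data equals a predefined value (commonly an all-zero line) with a compact entry holding only the tag and state bits. A rough sketch, with the predefined value, the entry classes, and the replacement trigger all assumed for illustration:

```python
PREDEFINED_VALUE = 0  # assumed special value (e.g., an all-zero line)

class FullEntry:
    def __init__(self, tag, state, data):
        self.tag, self.state, self.data = tag, state, data

class DatalessEntry:
    """Stores only tag and state bits; the data is implied to be PREDEFINED_VALUE."""
    def __init__(self, tag, state):
        self.tag, self.state = tag, state

def maybe_convert_on_replacement(victim):
    """When the replacement algorithm picks a victim, keep a dataless entry
    instead if the victim's data matches the predefined value."""
    if victim.data == PREDEFINED_VALUE:
        return DatalessEntry(victim.tag, victim.state)
    return None  # otherwise the entry is simply replaced

victim = FullEntry(tag=0x3A, state="shared", data=0)
kept = maybe_convert_on_replacement(victim)
print(type(kept).__name__)  # DatalessEntry
```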
  • Patent number: 11016896
    Abstract: Maintaining multiple cache areas in a storage device having multiple processors includes loading data from a specific portion of non-volatile storage into a local cache slot in response to a specific processor of a first subset of the processors performing a read operation to the specific portion of non-volatile storage, where the local cache slot is accessible to the first subset of the processors and is inaccessible to a second, different subset of the processors, and converting the local cache slot into a global cache slot in response to one of the processors performing a write operation to the specific portion of non-volatile storage, where the global cache slot is accessible to both the first and second subsets of the processors. Different ones of the processors may be placed on different directors.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: May 25, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Venkata Khambam, Jeffrey R. Nelson, Brian Asselin, Rong Yu
  • Patent number: 11016913
    Abstract: In one embodiment, a cache coherent system includes one or more agents (e.g. coherent agents) that may cache data used by the system. The system may include a point of coherency in a memory controller in the system, and thus the agents may transmit read requests to the memory controller to coherently read data. The point of coherency may determine if the data is cached in another agent, and may transmit a copy back request to the other agent if the other agent has modified the data. The system may include an interconnect between the agents and the memory controller. At a point on the interconnect at which traffic from the agents converges, a copy back response may be converted to a fill for the requesting agent.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: May 25, 2021
    Assignee: Apple Inc.
    Inventors: Harshavardhan Kaushikkar, Christopher D. Shuler, Srinivasa Rangan Sridharan, Yu Zhang, Kaushik Kannan, Deniz Balkan
  • Patent number: 11010296
    Abstract: A memory arrangement includes a memory; a first buffer memory; a first buffer memory controller that, when storing memory contents from the memory in the first buffer memory, is configured to invalidate the memory contents in the memory by means of a modification; a second buffer memory; and a second buffer memory controller configured to read memory contents from the memory, to check whether the memory contents read from the memory are valid and, if they are invalid, to apply a reversal of the modification to the read memory contents.
    Type: Grant
    Filed: October 23, 2018
    Date of Patent: May 18, 2021
    Assignee: Infineon Technologies AG
    Inventor: Steffen Sonnekalb
  • Patent number: 11003585
    Abstract: A method for determining affinity domain information based on virtual memory address in a computing system where access to memory is non-uniform includes receiving a request to identify an affinity domain associated with a specified virtual memory address. The affinity domain includes a cluster of processors and memory local to the cluster of processors. A physical memory page corresponding to the specified virtual memory address is determined using a page table mapping a plurality of virtual memory addresses to a plurality of physical addresses. An affinity domain associated with the determined physical memory page is identified. Affinity domain information is provided for the identified affinity domain.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: May 11, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: William F. Quinn, Anil Kalavakolanu, Douglas Griffith, Sreenivas Makineedi, Mathew Accapadi
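The lookup in patent 11003585 above chains two maps: a page table from virtual page to physical page, and a second map from physical page to NUMA affinity domain. The page size and the dictionary-based tables below are illustrative assumptions.

```python
PAGE_SIZE = 4096  # assumed page size

# Page table: virtual page number -> physical page number
page_table = {0x10: 0x7F, 0x11: 0x80}

# Physical page number -> affinity domain (cluster of processors plus local memory)
page_to_domain = {0x7F: {"domain": 0, "cpus": [0, 1]},
                  0x80: {"domain": 1, "cpus": [2, 3]}}

def affinity_domain(virtual_addr):
    vpn = virtual_addr // PAGE_SIZE
    ppn = page_table[vpn]          # virtual address -> physical memory page
    return page_to_domain[ppn]     # physical page -> affinity domain information

print(affinity_domain(0x10_123))   # domain 0 for an address in virtual page 0x10
```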
  • Patent number: 11005912
    Abstract: Managing the timing of publication of new webpages and in particular new versions of existing webpages. New webpages are uploaded into a data repository for storing them before they are made available for access externally. A dependency processor is provided to process these new webpages to assess their readiness for publication by checking for dependencies on other webpages, the dependencies including a mutual dependency; locating any of the other webpages; and ascertaining whether each such dependency is satisfied. If dependencies are satisfied, then the new webpage is deemed ready for publication and is published. The satisfied dependencies include the content being accessible. In the case that the new webpage is a new version of an existing webpage, it replaces the old version. If dependencies are not satisfied, then the new webpage is held back until such time as they are met.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: May 11, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Gary S. Bettesworth, Andreas Martens, Sam Rogers, Paul S. Thorpe
  • Patent number: 10997082
    Abstract: According to various aspects, a memory system may include: a memory having a memory address space associated therewith to access the memory; a cache memory assigned to the memory; one or more processors configured to generate a dummy address space in addition to the memory address space, each address of the dummy address space being distinct from any address of the memory address space, and generate one or more invalid cache entries in the cache memory, the one or more invalid cache entries referencing one or more dummy addresses of the dummy address space.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: May 4, 2021
    Assignee: INTEL CORPORATION
    Inventors: Andy Rudoff, Tiffany J. Kasanicky, Wei P. Chen, Rajat Agarwal, Chet R. Douglas
  • Patent number: 10996982
    Abstract: A transaction is detected. The transaction has a begin-transaction indication and an end-transaction indication. If it is determined that the begin-transaction indication is not a no-speculation indication, then the transaction is processed.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: May 4, 2021
    Assignee: International Business Machines Corporation
    Inventors: Fadi Y. Busaba, Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum
  • Patent number: 10990393
    Abstract: Address-based filtering for load/store speculation includes maintaining a filtering table including table entries associated with ranges of addresses; in response to receiving an ordering check triggering transaction, querying the filtering table using a target address of the ordering check triggering transaction to determine whether an instruction dependent upon the ordering check triggering transaction has previously generated a physical address; and, in response to determining that the filtering table lacks an indication that the dependent instruction has previously generated a physical address, bypassing a lookup operation in an ordering violation memory structure that determines whether the dependent instruction is currently in-flight.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: April 27, 2021
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: John Kalamatianos, Krishnan V. Ramani, Susumu Mashimo
  • Patent number: 10983798
    Abstract: Embodiments of the invention are directed to methods for handling cache. The method includes retrieving a plurality of instructions from a cache. The method further includes placing the plurality of instructions into an instruction fetch buffer. The method includes retrieving a first instruction of the plurality of instructions from the instruction fetch buffer and executing the first instruction. The method includes retrieving a second instruction of the plurality of instructions from the instruction fetch buffer unless a back invalidate is received from the cache, and thereafter executing the second instruction without refreshing the instruction fetch buffer from the cache.
    Type: Grant
    Filed: November 15, 2017
    Date of Patent: April 20, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Shakti Kapoor
  • Patent number: 10977040
    Abstract: Methods, systems, and computer program products for heuristically invalidating non-useful entries in an array are provided. Aspects include receiving an instruction that is associated with an operand store compare (OSC) prediction for at least one of a store function and a load function. The OSC prediction is stored in an entry of an OSC history table (OHT). Aspects also include executing the instruction. Responsive to determining, based on the execution of the instruction, that data forwarding did not occur, aspects include incrementing a useless OSC prediction counter. Responsive to determining that the useless OSC prediction counter is equal to a predetermined value, aspects also include invalidating the entry of the OHT associated with the instruction.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: April 13, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: James Raymond Cuffney, Adam Collura, James Bonanno, Jang-Soo Lee, Eyal Naor, Yair Fried, Brian Robert Prasky
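The heuristic in patent 10977040 above can be modeled as a counter per OSC history table entry: each execution in which no data forwarding actually occurred increments the counter, and reaching a threshold invalidates the entry. The threshold value and the table layout are assumptions for this sketch.

```python
USELESS_THRESHOLD = 3  # assumed predetermined value

# OSC history table: instruction address -> {"prediction": ..., "useless": count}
oht = {0x400: {"prediction": "store-hit-load", "useless": 0}}

def after_execution(instr_addr, forwarding_occurred):
    entry = oht.get(instr_addr)
    if entry is None:
        return
    if not forwarding_occurred:
        entry["useless"] += 1                 # the prediction did not help this time
        if entry["useless"] >= USELESS_THRESHOLD:
            del oht[instr_addr]               # invalidate the non-useful entry

for _ in range(3):
    after_execution(0x400, forwarding_occurred=False)
print(oht)  # {} -- entry invalidated after three useless predictions
```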
  • Patent number: 10977560
    Abstract: A method for object classification in a decision tree based adaptive boosting (AdaBoost) classifier implemented on a single-instruction multiple-data (SIMD) processor is provided that includes receiving feature vectors extracted from N consecutive window positions in an image in a memory coupled to the SIMD processor and evaluating the N consecutive window positions concurrently by the AdaBoost classifier using the feature vectors and vector instructions of the SIMD processor, in which the AdaBoost classifier concurrently traverses decision trees for the N consecutive window positions until classification is complete for the N consecutive window positions.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: April 13, 2021
    Assignee: Texas Instruments Incorporated
    Inventors: Shyam Jagannathan, Pramod Kumar Swami
  • Patent number: 10977043
    Abstract: Embodiments of the invention are directed to methods for handling cache. The method includes retrieving a plurality of instructions from a cache. The method further includes placing the plurality of instructions into an instruction fetch buffer. The method includes retrieving a first instruction of the plurality of instructions from the instruction fetch buffer and executing the first instruction. The method includes retrieving a second instruction of the plurality of instructions from the instruction fetch buffer unless a back invalidate is received from the cache, and thereafter executing the second instruction without refreshing the instruction fetch buffer from the cache.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: April 13, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Shakti Kapoor
  • Patent number: 10970213
    Abstract: An apparatus, system, and method of enforcing cache coherency in a multiprocessor shared memory system are disclosed. A request is received from a node controller to process a cache coherent operation on a memory block in a shared memory. Based on the information included in the request, a determination is made as to whether the request was transmitted from a processor that is remote relative to the memory that includes the memory block referenced in the request. If the request is from a remote processor, the hardware-based cache coherency of the system is disabled, and the request is processed according to software-based cache coherency protocols and mechanisms. A coherent read request may be translated to a non-coherent request, such as an immediate read request, which does not trigger tracking or storing of state and ownership information for the requested memory block, or trigger communications with processors other than those involved with the request.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: April 6, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Thomas McGee, Michael S. Woodacre, Michael Malewicki
  • Patent number: 10963028
    Abstract: According to one embodiment of the invention, a processor includes a power control unit, an interface to software during runtime that permits the software to set a plurality of power management constraint parameters for the power control unit during runtime of the processor without a reboot of the processor, and a storage element to store a respective lock bit for each of the plurality of power management constraint parameters to disable the interface from changing a respective constraint parameter when set.
    Type: Grant
    Filed: November 26, 2018
    Date of Patent: March 30, 2021
    Assignee: Intel Corporation
    Inventors: Ryan D. Wells, Sanjeev S. Jahagirdar, Inder M. Sodhi, Jeremy J. Shrall, Stephen H. Gunther, Daniel J. Ragland, Nicholas J. Adams
  • Patent number: 10949945
    Abstract: One embodiment provides for a general-purpose graphics processing device comprising a general-purpose graphics processing compute block to process a workload including graphics or compute operations, a first cache memory, and a coherency module to enable the first cache memory to coherently cache data for the workload, the data stored in memory within a virtual address space, wherein the virtual address space is shared with a separate general-purpose processor including a second cache memory that is coherent with the first cache memory.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: March 16, 2021
    Assignee: Intel Corporation
    Inventors: Joydeep Ray, Altug Koker, James A. Valerio, David Puffer, Abhishek R. Appu, Stephen Junkins
  • Patent number: 10942680
    Abstract: A data writing method, a memory storage device, and a memory control circuit unit are provided. The method includes: receiving a first data and writing the first data to at least one first physical programming unit of a first physical erasing unit; receiving a second data; temporarily storing the second data to a temporary storage area if a data length of the second data is less than a predefined value; receiving a third data; writing the third data to at least one second physical programming unit of the first physical erasing unit if a logical address storing the first data is consecutive with a logical address storing the third data; and moving the second data from the temporary storage area to at least one second physical programming unit of the first physical erasing unit if the logical address storing the first data is not consecutive with the logical address storing the third data.
    Type: Grant
    Filed: July 4, 2019
    Date of Patent: March 9, 2021
    Assignee: PHISON ELECTRONICS CORP.
    Inventors: Ping-Chuan Lin, Yi-Hsuan Lin, Bing-Hong Wu
  • Patent number: 10942853
    Abstract: A method, computer program product, and computer system are disclosed that in one or more embodiments includes issuing, from an issuing processor in the computer system, an address translation invalidation instruction with a return marker, wherein the address translation invalidation instruction is to invalidate one or more address translation entries in one or more storage locations in the computer system and wherein the return marker comprises an instruction to return information to the issuing processor indicating the identity of each processor where an invalidated entry was located.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: March 9, 2021
    Assignee: International Business Machines Corporation
    Inventors: John A. Schumann, Debapriya Chatterjee, Bryant Cockcroft, Lawrence Leitner, Karen Yokum
  • Patent number: 10938559
    Abstract: Security key identifier remapping includes associating a system-level security key identifier with a local-level identifier requiring fewer bits of storage space. The remapped security key identifiers are used to receive, at a first compute complex of a processing system, a memory access request including a memory address value and a system-level security key identifier. The compute complex responds to the memory access request based on a determination of whether a security key identifier map of the first compute complex includes a mapping of the system-level security key identifier to a local-level security key identifier. In response to determining that the security key identifier map of the first compute complex does not include a mapping of the system-level security key identifier to the local-level security key identifier, a cache miss message may be returned without probing caches of the first compute complex.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: March 2, 2021
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventor: Douglas Benson Hunt
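The remapping in patent 10938559 above keeps a small per-complex table from wide system-level key identifiers to narrow local identifiers; a request whose system-level identifier has no local mapping can be answered with a miss without probing any caches. The table contents and the miss handling below are illustrative assumptions.

```python
class ComputeComplex:
    """Toy compute complex with a system-to-local security key identifier map."""
    def __init__(self, key_map, cached_lines):
        self.key_map = key_map            # system-level key id -> local-level key id
        self.cached_lines = cached_lines  # set of (local key id, address) held in caches

    def handle_access(self, address, system_key_id):
        local_id = self.key_map.get(system_key_id)
        if local_id is None:
            # No mapping: data tagged with this key cannot be cached here,
            # so return a miss without probing the caches.
            return "miss (no probe)"
        if (local_id, address) in self.cached_lines:
            return "hit"
        return "miss (after probe)"

cc = ComputeComplex(key_map={0x1234: 1}, cached_lines={(1, 0x80)})
print(cc.handle_access(0x80, 0x1234))   # hit
print(cc.handle_access(0x80, 0x9999))   # miss (no probe)
```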
  • Patent number: 10929431
    Abstract: Methods and systems for collision handling during an asynchronous replication are provided. A system includes a cache memory system comprising a number of cache memory pages. A collision detector detects when a host is attempting to overwrite a cache memory page that has not been completely replicated. A revision page tagger copies the cache memory page to a free page and tags the copied page as protected.
    Type: Grant
    Filed: August 28, 2015
    Date of Patent: February 23, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Mark Doherty, Siamak Nazari, Jin Wang, Srinivasa D. Murthy, Paul Kinnaird, Pierre Labat, Jonathan Stewart
  • Patent number: 10922237
    Abstract: Systems, apparatuses, and methods for accelerating accesses to private regions in a region-based cache directory scheme are disclosed. A system includes multiple processing nodes, one or more memory devices, and one or more region-based cache directories to manage cache coherence among the nodes' cache subsystems. Region-based cache directories track coherence on a region basis rather than on a cache line basis, wherein a region includes multiple cache lines. The cache directory entries for regions that are only accessed by a single node are cached locally at the node. Updates to the reference count for these entries are made locally rather than sending updates to the cache directory. When a second node accesses a first node's private region, the region is now considered shared, and the entry for this region is transferred from the first node back to the cache directory.
    Type: Grant
    Filed: September 12, 2018
    Date of Patent: February 16, 2021
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Vydhyanathan Kalyanasundharam, Amit P. Apte, Ganesh Balakrishnan
  • Patent number: 10915326
    Abstract: A cache system has a first cache, a second cache, and a logic circuit coupled to control the first cache and the second cache according to an execution type of a processor. When the execution type of the processor is a first type indicating non-speculative execution of instructions and the first cache is configured to service commands from a command bus for accessing a memory system, the logic circuit is configured to copy a portion of the content cached in the first cache to the second cache. The cache system can include a configurable data bit, and the logic circuit can be coupled to control the caches according to the bit. Alternatively, the caches can include cache sets, with registers associated with the cache sets respectively, and the logic circuit can be coupled to control the cache sets according to the registers.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: February 9, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Steven Jeffrey Wallach
  • Patent number: 10915506
    Abstract: In accordance with an embodiment, described herein is a system and method for row buffering in a database environment. A transparency engine can be provided between client applications and a database, and can operate as a proxy engine for the database and as a session abstraction layer for the client applications, to enable the client applications to utilize database features provided by the connection pool without code changes to the client applications. The transparency engine can maintain a plurality of local row buffers to store rows fetched from a database. The local buffers can be filled by rows pre-fetched from the database. When a client application requests rows from the database, the transparency engine can first check whether the rows exist in a local buffer. If the rows are present in the local buffer, the transparency engine sends the rows to the requesting client application, without going to the database.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: February 9, 2021
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Pablo Silberkasten, Michael McMahon, Saurabh Verma, Jean De Lavarene
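Patent 10915506 above describes serving row fetches from a local pre-fetch buffer before going to the database. A toy version, where the buffer, the fetch size, and the database stand-in are all assumptions:

```python
PREFETCH_SIZE = 3  # assumed number of rows pre-fetched per database round trip

class RowBufferingProxy:
    """Toy transparency engine: answer row requests from a local buffer first."""
    def __init__(self, database_rows):
        self.database_rows = list(database_rows)
        self.buffer = []
        self.next_index = 0

    def fetch_row(self):
        if not self.buffer:  # buffer empty: pre-fetch a batch from the database
            batch = self.database_rows[self.next_index:self.next_index + PREFETCH_SIZE]
            self.next_index += len(batch)
            self.buffer.extend(batch)
            print(f"pre-fetched {len(batch)} rows from the database")
        return self.buffer.pop(0) if self.buffer else None

proxy = RowBufferingProxy([f"row-{i}" for i in range(5)])
for _ in range(4):
    print(proxy.fetch_row())  # only two database round trips for four row requests
```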
  • Patent number: 10901767
    Abstract: In one example, a method of data localization in a hyperconverged virtual computing platform is described, which includes determining whether a logical block address (LBA) associated with a storage request received by a node maps to another one of a plurality of nodes. If it does, the page associated with the storage request is migrated from the other node to the receiving node based on a recent page hit count associated with the storage request. Mapping layers residing in each of the plurality of nodes, including the remapped LBA associated with the storage request, are then updated. The storage request is resolved at the node if the LBA associated with the storage request is found in the updated mapping layer associated with the node.
    Type: Grant
    Filed: March 29, 2016
    Date of Patent: January 26, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Rajiv Madampath
  • Patent number: 10896135
    Abstract: Facilitating page table entry (PTE) maintenance in processor-based devices is disclosed. In this regard, a processor-based device includes processing elements (PEs) configured to support two new coherence states: walker-readable (W) and modified walker accessible (MW). The W coherence state indicates that read access to a corresponding coherence granule by hardware table walkers (HTWs) is permitted, but all write operations and all read operations by non-HTW agents are disallowed. The MW coherence state indicates that cached copies of the coherence granule visible only to HTWs may exist in other caches. In some embodiments, each PE is also configured to support a special page table entry (SP-PTE) field store instruction for modifying SP-PTE fields of a PTE, indicating to the PE's local cache that the corresponding coherence granule should transition to the MW state, and indicating to remote local caches that copies of the coherence granule should update their coherence state.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: January 19, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Eric Francis Robinson, Jason Panavich, Thomas Philip Speier
  • Patent number: 10884740
    Abstract: A processing unit for a data processing system includes a cache memory having reservation logic and a processor core coupled to the cache memory. The processor core includes an execution unit that executes instructions in a plurality of concurrent hardware threads of execution, including at least first and second hardware threads. The instructions include, within the first hardware thread, a first load-reserve instruction that identifies a target address for which a reservation is requested. The processor core additionally includes a load unit that records the target address of the first load-reserve instruction and that, responsive to detecting, in the second hardware thread, a second load-reserve instruction identifying the target address recorded by the load unit, blocks the second load-reserve instruction from establishing a reservation for the target address in the reservation logic.
    Type: Grant
    Filed: November 8, 2018
    Date of Patent: January 5, 2021
    Assignee: International Business Machines Corporation
    Inventors: Derek E. Williams, Guy L. Guthrie, Kimberly M. Fernsler, Hugh Shen
  • Patent number: 10860485
    Abstract: The disclosure relates to embodiments, implemented at least partially in microcode, that use cache misses to trigger logging to a processor trace. One embodiment relies on tracking bits in a processor cache. During a transition from a non-logged context to a logged context, this embodiment invalidates or evicts cache lines whose tracking bits are not set. When logging, this first embodiment logs during cache misses, and sets tracking bits for logged cache lines. Another embodiment relies on way-locking. This second embodiment assigns first ways to a logged entity and second ways to a non-logged entity. The second embodiment ensures the logged entity cannot read cache lines from the second logging ways by flushing the second way during transitions from non-logging to logging, ensures the logged entity cannot read non-logged cache lines from the first ways, and logs based on cache misses into the first ways while executing a logged context.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: December 8, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventor: Jordi Mola
  • Patent number: 10860481
    Abstract: Techniques perform data recovery. The techniques involve: in response to receiving to-be-written data at a first cache module, storing metadata in the data into a first non-volatile cache of the first cache module. The techniques further involve storing user data in the data into a first volatile cache of the first cache module. The techniques further involve sending the metadata and the user data to a second cache module for performing data recovery on the user data. Accordingly, a larger and better guaranteed data storage space may be provided to a cache data backup/recovery system without a need to increase the battery supply capacity and even without a battery.
    Type: Grant
    Filed: January 16, 2019
    Date of Patent: December 8, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Xinlei Xu, Jian Gao, Lifeng Yang, Haiying Tang
  • Patent number: 10853177
    Abstract: Salvaging renderable content includes providing a set of salvaging instructions including a digital pattern associated with digital content to be salvaged, and a predetermined minimum threshold of usefulness of the digital content. A digital data source includes digital content to be salvaged. The digital content is simultaneously read by reviewing the multiple types of digital content independently of one another using separate software salvaging modules to review each specific type of digital content. The digital content is filtered by identifying potentially recoverable digital content. The digital pattern is compared to the filtered digital content to indicate matches between the filtered digital content and the digital pattern. The digital content is reassembled and/or repaired. The matched digital content is validated by determining whether the salvaged digital content is in a form that meets the predetermined minimum threshold of usefulness. The validated digital content is displayed/rendered.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: December 1, 2020
    Assignee: United States of America as represented by the Secretary of the Air Force
    Inventor: Eoghan Casey
  • Patent number: 10853276
    Abstract: A technology for implementing a method for distributed memory operations. A method of the disclosure includes obtaining distributed channel information for an algorithm to be executed by a plurality of spatially distributed processing elements. For each distributed channel in the distributed channel information, the method further associates one or more of the plurality of spatially distributed processing elements with the distributed channel based on the algorithm.
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: December 1, 2020
    Assignee: Intel Corporation
    Inventors: Bushra Ahsan, Michael C. Adler, Neal C. Crago, Joel S. Emer, Aamer Jaleel, Angshuman Parashar, Michael I. Pellauer
  • Patent number: 10846235
    Abstract: An integrated circuit for a coherent data processing system includes a first communication interface for communicatively coupling the integrated circuit with the coherent data processing system, a second communication interface for communicatively coupling the integrated circuit with an accelerator unit including an effective address-based accelerator cache for buffering copies of data from a system memory of the coherent data processing system, and a real address-based directory inclusive of contents of the accelerator cache. The real address-based directory assigns entries based on real addresses utilized to identify storage locations in the system memory. The integrated circuit further includes request logic that communicates memory access requests and request responses with the accelerator unit via the second communication interface.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: November 24, 2020
    Assignee: International Business Machines Corporation
    Inventors: Bartholomew Blaner, Michael S. Siegel, Jeffrey A. Stuecheli, William J. Starke, Kenneth M. Valk, John D. Irish, Lakshminarayana Arimilli
  • Patent number: 10819611
    Abstract: Techniques for implementing dynamic timeout-based fault detection in a distributed system are provided. In one set of embodiments, a node of the distributed system can set a timeout interval to a minimum value and transmit poll messages to other nodes in the distributed system. The node can further wait for acknowledgement messages from all of the other nodes, where the acknowledgement messages are responsive to the poll messages, and can check whether it has received the acknowledgement messages from all of the other nodes within the timeout interval. If the node has failed to receive an acknowledgement message from at least one of the other nodes within the timeout interval and if the timeout interval is less than a maximum value, the node can increment the timeout interval by a delta value and can repeat the setting, the transmitting, the waiting, and the checking steps.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: October 27, 2020
    Assignee: VMware, Inc.
    Inventors: Zeeshan Lokhandwala, Medhavi Dhawan, Dahlia Malkhi, Michael Wei, Maithem Munshed, Ragnar Edholm
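The adaptive polling loop in patent 10819611 above starts from a minimum timeout and grows it by a delta until either all acknowledgements arrive or a maximum is reached. The sketch below simulates acknowledgement latencies instead of sending real network messages; the constants and the return value are assumptions.

```python
def detect_faults(ack_latencies, t_min=1, t_max=8, delta=1):
    """Return (timeout used, nodes still unacknowledged) for a toy poll round.
    ack_latencies: node -> time that node needs to acknowledge a poll."""
    timeout = t_min
    while True:
        missing = [n for n, lat in ack_latencies.items() if lat > timeout]
        if not missing:
            return timeout, []          # everyone answered within the interval
        if timeout >= t_max:
            return timeout, missing     # suspected faulty nodes
        timeout += delta                # grow the interval and poll again

print(detect_faults({"n1": 1, "n2": 3, "n3": 2}))   # (3, [])
print(detect_faults({"n1": 1, "n2": 50}))           # (8, ['n2'])
```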
  • Patent number: 10817425
    Abstract: Methods and apparatus implement hardware/software co-optimization to improve performance and energy for inter-VM communication for NFVs and other producer-consumer workloads. The apparatus includes multi-core processors with multi-level cache hierarchies, including an L1 and L2 cache for each core and a shared last-level cache (LLC). One or more machine-level instructions are provided for proactively demoting cachelines from lower cache levels to higher cache levels, including demoting cachelines from L1/L2 caches to an LLC. Techniques are also provided for implementing hardware/software co-optimization in multi-socket NUMA architecture systems, wherein cachelines may be selectively demoted and pushed to an LLC in a remote socket. In addition, techniques are disclosed for implementing early snooping in multi-socket systems to reduce latency when accessing cachelines on remote sockets.
    Type: Grant
    Filed: December 26, 2014
    Date of Patent: October 27, 2020
    Assignee: Intel Corporation
    Inventors: Ren Wang, Andrew J. Herdrich, Yen-cheng Liu, Herbert H. Hum, Jong Soo Park, Christopher J. Hughes, Namakkal N. Venkatesan, Adrian C. Moga, Aamer Jaleel, Zeshan A. Chishti, Mesut A. Ergin, Jr-shian Tsai, Alexander W. Min, Tsung-yuan C. Tai, Christian Maciocco, Rajesh Sankaran
  • Patent number: 10819823
    Abstract: Disclosed herein are an in-network caching apparatus and method. The in-network caching method using the in-network caching apparatus includes receiving content from a second node in response to a request from a first node; checking a Conditional Leave Copy Everywhere (CLCE) replication condition depending on a number of requests for the content; checking a priority condition based on a result value of a priority function for the content; checking a partition depending on the number of requests for the content; performing a cache replacement operation for the content depending on a result of checking the partition for the content; and transmitting the content to the first node.
    Type: Grant
    Filed: August 3, 2017
    Date of Patent: October 27, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Muhammad Bilal, Shin-Gak Kang, Wook Hyun, Sung-Hei Kim, Ju-Young Park, Mi-Young Huh
  • Patent number: 10802982
    Abstract: An apparatus includes an interface and memory acquisition circuitry. The interface is configured to communicate over a bus operating in accordance with a bus protocol, which supports address-translation transactions that translate between bus addresses in an address space of the bus and physical memory addresses in an address space of a memory. The memory acquisition circuitry is configured to read data from the memory by issuing over the bus, using the bus protocol, one or more requests that (i) specify addresses to be read in terms of the physical memory addresses, and (ii) indicate that the physical memory addresses in the requests have been translated from corresponding bus addresses even though the addresses were not obtained by any address-translation transaction over the bus.
    Type: Grant
    Filed: April 8, 2018
    Date of Patent: October 13, 2020
    Assignee: MELLANOX TECHNOLOGIES, LTD.
    Inventors: Ahmad Atamlh, Ofir Arkin, Peter Paneah
  • Patent number: 10795817
    Abstract: Example distributed storage systems, file system interfaces, and methods provide cache coherence management. A system receives a file data request including a file data reference and identifies a data cache location with a coherence value for the file data reference. The system queries a reference data store for a coherence reference corresponding to the file data reference and compares the coherence value to the coherence reference. In response to the coherence value matching the coherence reference, the system executes the file data request using the data cache location.
    Type: Grant
    Filed: November 16, 2018
    Date of Patent: October 6, 2020
    Assignee: Western Digital Technologies, Inc.
    Inventors: Bruno Keymolen, Arne Vansteenkiste, Wim Michel Marcel De Wispelaere, Stijn Devriendt
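The coherence check in patent 10795817 above compares a coherence value stored with the cached file data against a coherence reference held in a reference data store; only a match allows the cached copy to be used. A sketch with assumed dictionaries standing in for the data cache and the reference store:

```python
# file data reference -> (cache location, coherence value)
data_cache = {"file-A:block-0": ("cache-slot-17", "v42")}

# reference data store: file data reference -> current coherence reference
reference_store = {"file-A:block-0": "v42"}

def handle_request(file_ref):
    location, coherence_value = data_cache[file_ref]
    coherence_reference = reference_store[file_ref]
    if coherence_value == coherence_reference:
        return f"serve from {location}"          # cached copy is coherent
    return "refresh from backing storage"        # stale: bypass the cached copy

print(handle_request("file-A:block-0"))          # serve from cache-slot-17
reference_store["file-A:block-0"] = "v43"        # another client updated the file
print(handle_request("file-A:block-0"))          # refresh from backing storage
```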
  • Patent number: 10795824
    Abstract: Speculative data return in parallel with an exclusive invalidate request. A requesting processor requests data from a shared cache. The data is owned by another processor. Based on the request, an invalidate request is sent to the other processor requesting the other processor to release ownership of the data. Concurrent to the invalidate request being sent to the other processor, the data is speculatively provided to the requesting processor.
    Type: Grant
    Filed: November 21, 2018
    Date of Patent: October 6, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Deanna P. Berger, Christian Jacobi, Robert J. Sonnelitter, III, Craig R. Walters
  • Patent number: 10790862
    Abstract: Systems and methods in accordance with various embodiments of the present disclosure provide approaches for mapping entries to a cache using a function, such as cyclic redundancy check (CRC). The function can calculate a colored cache index based on a main memory address. The function may cause consecutive address cache indexes to be spread throughout the cache according to the indexes calculated by the function. In some embodiments, each data context may be associated with a different function, enabling different types of packets to be processed while sharing the same cache, reducing evictions of other data contexts and improving performance. Various embodiments can identify a type of packet as the packet is received, and lookup a mapping function based on the type of packet. The function can then be used to lookup the corresponding data context for the packet from the cache, for processing the packet.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: September 29, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Ofer Frishman, Erez Izenberg, Guy Nakibly
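Patent 10790862 above indexes the cache with a function such as a CRC of the memory address, so that consecutive addresses scatter across the cache, and each data context can use a different function. The sketch below uses Python's zlib.crc32 with a per-context salt as a stand-in; the salt scheme and the cache size are assumptions, not the patented mapping.

```python
import zlib

CACHE_ENTRIES = 256  # assumed number of cache slots

def colored_index(address, context_salt):
    """Map a memory address to a cache index via CRC, salted per data context."""
    data = context_salt + address.to_bytes(8, "little")
    return zlib.crc32(data) % CACHE_ENTRIES

# Consecutive addresses spread across the cache instead of landing in order.
print([colored_index(a, b"ctx-tcp") for a in range(0x1000, 0x1004)])

# A different packet type (data context) gets its own mapping function,
# so its entries do not evict the first context's entries in a fixed pattern.
print([colored_index(a, b"ctx-udp") for a in range(0x1000, 0x1004)])
```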