Coherency Patents (Class 711/141)
  • Patent number: 10534598
    Abstract: Embodiments for performing rolling software upgrades in a disaggregated computing environment. A rolling upgrade manager is provided for upgrading one or more disaggregated servers. A designated memory area is used for storing an updated software component, and a disaggregated server is switched to the designated memory area from a currently assigned memory area when performing the software upgrade.
    Type: Grant
    Filed: January 4, 2017
    Date of Patent: January 14, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Valentina Salapura, John A. Bivens, Min Li, Ruchi Mahindru, HariGovind V. Ramasamy, Yaoping Ruan, Eugen Schenfeld
  • Patent number: 10528468
    Abstract: A storage controlling apparatus includes a processor, wherein the processor: controls a first counter configured to count, among data stored in a cache memory and relating to an access request, a number of data which are not written in storage volumes of a target of the access request, for each storage volume; determines, in response to reception of a first access request, whether or not a first ratio of a counter value of the first counter to a number of data allocated already to the cache memory into a first storage volume exceeds a first threshold value, the counter value of the first counter corresponding to the first storage volume which is a target of the first access request; and performs a write back process of data from the cache memory into the first storage volume where the first ratio exceeds the first threshold value.
    Type: Grant
    Filed: August 29, 2016
    Date of Patent: January 7, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Shigeru Akiyama, Yasuhiro Ogasawara, Tsukasa Matsuda, Hidetoshi Nishi, Hitoshi Kosokabe
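A minimal C++ sketch of the per-volume dirty-ratio test described in the 10528468 abstract above. All identifiers (VolumeCacheStats, should_write_back) and the 50% threshold are illustrative assumptions, not taken from the patent.

```cpp
#include <cstdint>
#include <iostream>

struct VolumeCacheStats {
    uint64_t dirty_blocks;      // first counter: cached data not yet written to the volume
    uint64_t allocated_blocks;  // data already allocated to the cache for this volume
};

// Returns true when the ratio of dirty to allocated cache data for the
// target volume exceeds the threshold, triggering a write-back pass.
bool should_write_back(const VolumeCacheStats& v, double threshold) {
    if (v.allocated_blocks == 0) return false;
    double ratio = static_cast<double>(v.dirty_blocks) / v.allocated_blocks;
    return ratio > threshold;
}

int main() {
    VolumeCacheStats vol{620, 1000};                     // 62% of cached data is dirty
    std::cout << std::boolalpha
              << should_write_back(vol, 0.5) << '\n';    // true: start write-back
}
```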
  • Patent number: 10515045
    Abstract: A computing system comprises one or more core processors coupled to a communication network among the cores via a switch in each core and switching circuitry to forward data among cores and switches. Features include a programmable classification processor for directing packets, techniques for managing virtual functions on an IO accelerator card, packet scheduling techniques, multi-processor communication using shared FIFOs, programmable duty cycle adjustment and delay adjustment circuits, a new class of instructions that use a ready bit, and cache coherence and memory ordering techniques.
    Type: Grant
    Filed: August 29, 2017
    Date of Patent: December 24, 2019
    Assignee: Mellanox Technologies Ltd.
    Inventor: Matthew Mattina
  • Patent number: 10514847
    Abstract: A data storage system includes multiple head nodes and multiple data storage sleds mounted in a rack. For a particular volume or volume partition one of the head nodes is designated as a primary head node for the volume or volume partition. The primary head node is configured to store data for the volume in a data storage of the primary head node and cause the data to be replicated to a secondary head node. The primary head node is also configured to cause the data for the volume to be stored in a plurality of respective mass storage devices each in different ones of the plurality of data storage sleds of the data storage system.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: December 24, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Norbert P. Kusters, Nachiappan Arumugam, Christopher Nathan Watson, Marc John Brooker, David R. Richardson, Danny Wei, John Luther Guthrie, II
  • Patent number: 10515014
    Abstract: According to one embodiment, a data processing system includes a plurality of processors, each of the processors being coupled to each of remaining processors via a processor interconnect, a plurality of memory controllers, each memory controller corresponding to one of the processors, a plurality of memory targets, each memory target includes one or more branches and a plurality of memory leaves for storing data, and an Ethernet switch fabric coupled to the memory controllers and the memory targets. When a first of the memory controllers writes data to a first of the memory leaves, the first memory controller sends a cache coherence message to remaining ones of the memory controllers to indicate that the data stored in the first memory leaf has been updated, such that any of the remaining memory controllers can update its cache by fetching the data from the first memory leaf.
    Type: Grant
    Filed: June 21, 2017
    Date of Patent: December 24, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Mark Himelstein, Kevin Rowett, Bruce Wilford, Richard Van Gaasbeck, Todd Wilde, Rick Carlson, Vikram Venkataraghavan, Vishwas Durai, Blair Barnett
  • Patent number: 10505988
    Abstract: A computer implemented method and apparatus comprises detecting a file content update on a first client computer system, the file to be synchronized on a plurality of different types of client computer systems in a plurality of formats. The method further comprises associating a security policy with the file, wherein the security policy includes restrictions to limit one or more actions that can be performed with the file, and synchronizing the file to a second client computing system while applying the security policy to provide controls for enforcement of the restrictions at the second client computer system.
    Type: Grant
    Filed: September 10, 2018
    Date of Patent: December 10, 2019
    Assignee: BlackBerry Limited
    Inventors: Adi Ruppin, Doron Peri, Yigal Ben-Natan, Gil S. Shidlansik, Miron Liram, Ori Saporta, David Potashinsky, Uri Yulevich, Timothy Choi
  • Patent number: 10489292
    Abstract: Embodiments of the present invention are directed to a computer-implemented method for ownership tracking updates across multiple simultaneous operations. A non-limiting example of the computer-implemented method includes receiving, by a cache directory control circuit, a message to update a cache directory entry. The method further includes, in response, updating, by the cache directory control circuit, the cache directory entry, and generating a reverse compare signal including an updated ownership vector of a memory line corresponding to the cache directory entry. The method further includes sending the reverse compare signal to a cache controller associated with the cache directory entry.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: November 26, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael A. Blake, Timothy C. Bronson, Ashraf ElSharif, Kenneth D. Klapproth, Vesselina K. Papazova, Guy G. Tracy
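The 10489292 abstract (and its sibling 10482015 below) describes a directory entry carrying an ownership vector that is pushed back to a cache controller via a reverse compare signal. The following C++ sketch is an illustrative interpretation; the structures and names (DirectoryEntry, ReverseCompare, kMaxCaches) are assumptions, not the patented design.

```cpp
#include <bitset>
#include <cstdint>
#include <iostream>

constexpr int kMaxCaches = 16;

// Directory entry for one memory line: which caches hold a copy.
struct DirectoryEntry {
    uint64_t line_address;
    std::bitset<kMaxCaches> owners;   // ownership vector
};

// "Reverse compare" message sent back to the associated cache controller.
struct ReverseCompare {
    uint64_t line_address;
    std::bitset<kMaxCaches> updated_owners;
};

// Apply an ownership update and emit the reverse compare signal that
// carries the updated ownership vector for the affected line.
ReverseCompare update_entry(DirectoryEntry& e, int cache_id, bool acquiring) {
    e.owners.set(cache_id, acquiring);
    return ReverseCompare{e.line_address, e.owners};
}

int main() {
    DirectoryEntry e{0x8000, {}};
    ReverseCompare rc = update_entry(e, /*cache_id=*/3, /*acquiring=*/true);
    std::cout << "line 0x" << std::hex << rc.line_address
              << " owners=" << rc.updated_owners << '\n';
}
```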
  • Patent number: 10482015
    Abstract: Embodiments of the present invention are directed to a computer-implemented method for ownership tracking updates across multiple simultaneous operations. A non-limiting example of the computer-implemented method includes receiving, by a cache directory control circuit, a message to update a cache directory entry. The method further includes, in response, updating, by the cache directory control circuit, the cache directory entry, and generating a reverse compare signal including an updated ownership vector of a memory line corresponding to the cache directory entry. The method further includes sending the reverse compare signal to a cache controller associated with the cache directory entry.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: November 19, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael A. Blake, Timothy C. Bronson, Ashraf ElSharif, Kenneth D. Klapproth, Vesselina K. Papazova, Guy G. Tracy
  • Patent number: 10474835
    Abstract: Provided is a process of operating a zero-knowledge encrypted database, the process including: obtaining a request for data in a database stored by an untrusted computing system, wherein the database is stored in a graph that includes a plurality of connected nodes, each of the nodes including: an identifier, accessible to the untrusted computing system, that distinguishes the respective node from other nodes in the graph; and an encrypted collection of data stored in encrypted form, wherein: the untrusted computing system does not have access to an encryption key to decrypt the collections of data, the encrypted collections of data in at least some of the plurality of nodes each include a plurality of keys indicating subsets of records in the database accessible via other nodes in the graph and corresponding pointers to identifiers of the other nodes.
    Type: Grant
    Filed: April 9, 2018
    Date of Patent: November 12, 2019
    Assignee: ZeroDB, Inc.
    Inventors: Mikhail Egorov, MacLane Scott Wilkison, Mohammad Ali Khan
  • Patent number: 10474218
    Abstract: In one embodiment, the present invention is directed to a processor having a plurality of cores and a cache memory coupled to the cores and including a plurality of partitions. The processor can further include a logic to dynamically vary a size of the cache memory based on a memory boundedness of a workload executed on at least one of the cores. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: November 12, 2019
    Assignee: Intel Corporation
    Inventors: Avinash N. Ananthakrishnan, Efraim Rotem, Eliezer Weissmann, Doron Rajwan, Nadav Shulman, Alon Naveh, Hisham Abu-Salah
  • Patent number: 10467139
    Abstract: A cache coherence system manages both internode and intranode cache coherence in a cluster of nodes. Each node in the cluster of nodes is either a collection of processors running an intranode coherence protocol between themselves, or a single processor. A node comprises a plurality of coherence ordering units (COUs) that are hardware circuits configured to manage intranode coherence of caches within the node and/or internode coherence with caches on other nodes in the cluster. Each node contains one or more directories that track the state of cache line entries managed by the particular node. Each node may also contain one or more scoreboards for managing the status of ongoing transactions. The internode cache coherence protocol implemented in the COUs may be used to detect and resolve communications errors, such as dropped message packets between nodes, late message delivery at a node, or node failure.
    Type: Grant
    Filed: December 29, 2017
    Date of Patent: November 5, 2019
    Assignee: Oracle International Corporation
    Inventors: Paul N. Loewenstein, Damien Walker, Priyambada Mitra, Ali Vahidsafa, Matthew Cohen, Josephus Ebergen, Andrew Brock
  • Patent number: 10467092
    Abstract: Providing space-efficient storage for dynamic random access memory (DRAM) cache tags is provided. In one aspect, a DRAM cache management circuit provides a plurality of cache entries, each of which contains a tag storage region, a data storage region, and an error protection region. The DRAM cache management circuit is configured to store data to be cached in the data storage region of each cache entry. The DRAM cache management circuit is also configured to use an error detection code (EDC) instead of an error correcting code (ECC), and to store a tag and the EDC for each cache entry in the error protection region of the cache entry. In this manner, the capacity of a DRAM cache can be increased by avoiding the need for the tag storage region for each cache entry, while still providing error detection for the cache entry.
    Type: Grant
    Filed: March 30, 2016
    Date of Patent: November 5, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Natarajan Vaidhyanathan, Mattheus Cornelis Antonius Adrianus Heddes, Colin Beaton Verrilli
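The 10467092 abstract describes storing the tag and an error-detecting code, rather than a full ECC, in each DRAM cache entry's error protection region. The C++ sketch below illustrates that layout and lookup under stated assumptions: the EDC here is a simple invented checksum, and all names are hypothetical.

```cpp
#include <array>
#include <cstdint>
#include <iostream>

// One DRAM cache entry: the data region plus an "error protection region"
// that holds the tag and an error-detecting code instead of a full ECC.
struct CacheEntry {
    std::array<uint64_t, 8> data;   // 64-byte data storage region
    uint32_t tag;                   // tag kept in the error protection region
    uint32_t edc;                   // error-detecting code over data + tag
};

// Illustrative EDC only: fold data and tag into a 32-bit checksum.
uint32_t compute_edc(const std::array<uint64_t, 8>& data, uint32_t tag) {
    uint64_t acc = tag;
    for (uint64_t w : data) acc ^= w * 0x9E3779B97F4A7C15ULL;
    return static_cast<uint32_t>(acc ^ (acc >> 32));
}

bool lookup(const CacheEntry& e, uint32_t probe_tag, bool& error_detected) {
    error_detected = compute_edc(e.data, e.tag) != e.edc;  // detect, don't correct
    return !error_detected && e.tag == probe_tag;          // hit only if clean
}

int main() {
    CacheEntry e{{1, 2, 3, 4, 5, 6, 7, 8}, 0xABCD, 0};
    e.edc = compute_edc(e.data, e.tag);
    bool err = false;
    std::cout << std::boolalpha << lookup(e, 0xABCD, err)
              << " error=" << err << '\n';                 // true error=false
}
```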
  • Patent number: 10469577
    Abstract: A caching method based on a cache cluster is provided, the method including determining a partition number of a caching partition corresponding to to-be-written data; querying, with a view service node according to the partition number, primary node information of the caching partition corresponding to the to-be-written data; receiving the primary node information that is of the caching partition corresponding to the to-be-written data and that is returned by a view service node, and sending a write request to a primary node of the caching partition corresponding to the to-be-written data; writing the to-be-written data into a local write-back cache according to the write request; and obtaining information about each secondary node of the caching partition corresponding to the to-be-written data from the view service node, and copying the to-be-written data to each secondary node of the caching partition corresponding to the to-be-written data.
    Type: Grant
    Filed: May 11, 2018
    Date of Patent: November 5, 2019
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Lixie Liu, Weikang Kong
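A minimal sketch of the 10469577 write path: resolve the caching partition for the data, ask a view service for the partition's primary, write to the primary's write-back cache, then copy to the secondaries. The view-service map, hashing scheme, and node names are invented for illustration; real communication with the view service and nodes is replaced by console output.

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

// View service record: partition number -> primary node and its secondaries.
struct PartitionView {
    std::string primary;
    std::vector<std::string> secondaries;
};

using ViewService = std::unordered_map<uint32_t, PartitionView>;

uint32_t partition_of(const std::string& key, uint32_t partitions) {
    return static_cast<uint32_t>(std::hash<std::string>{}(key) % partitions);
}

// Write path: look up the primary for the key's partition, "write" locally,
// then copy the data to each secondary of that partition.
void write(const ViewService& view, uint32_t partitions,
           const std::string& key, const std::string& value) {
    uint32_t p = partition_of(key, partitions);
    const PartitionView& pv = view.at(p);
    std::cout << "write-back cache on primary " << pv.primary
              << ": " << key << '=' << value << '\n';
    for (const auto& s : pv.secondaries)
        std::cout << "  replicate to secondary " << s << '\n';
}

int main() {
    ViewService view;
    view[0] = {"nodeA", {"nodeB", "nodeC"}};
    view[1] = {"nodeB", {"nodeA", "nodeC"}};
    write(view, 2, "blk:42", "payload");
}
```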
  • Patent number: 10452272
    Abstract: A system and method are disclosed with the ability to track usage of information and its access patterns, and to determine the most frequently used patterns to be stored and updated in a directory, thereby controlling and reducing the size allocated to storing information in the directory. The size is reduced by limiting address bits, thereby allowing subsystems to avoid transmitting, storing, and operating upon excessive address information.
    Type: Grant
    Filed: December 27, 2016
    Date of Patent: October 22, 2019
    Assignee: ARTERIS, INC.
    Inventor: Parimal Gaikwad
  • Patent number: 10445211
    Abstract: Methods and systems are disclosed for logging trace data generated by executing program code at an instruction level. In aspects, high volumes of trace data are generated during certain time periods, e.g., immediately following a start of the tracing. Processors operating at normal speeds are often unable to log such high volumes of trace data. The issue of such high volumes of trace data may be addressed by selectively and dynamically controlling logging of outstanding trace data. For example, a rate of generating the trace may be reduced by slowing processor speeds, logging of outstanding trace data may be suspended for a period, and logging of non-urgent trace data may be selectively delayed.
    Type: Grant
    Filed: August 28, 2017
    Date of Patent: October 15, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Jordi Mola
  • Patent number: 10437732
    Abstract: In an embodiment, a processor includes at least one core and a first cache memory including a first plurality of sets having a first plurality of cache lines and associated metadata to store address information, recency information and a first indicator to indicate whether the cache line is associated with an oversubscribed set of a second cache memory. A first cache controller may be configured to base an eviction decision with regard to a first set of the first plurality of sets including a first cache line at least in part on the first indicator of the first cache line. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 14, 2016
    Date of Patent: October 8, 2019
    Assignee: Intel Corporation
    Inventor: Daniel Greenspan
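The 10437732 abstract bases an L1 eviction decision partly on a per-line indicator that the corresponding L2 set is oversubscribed. One plausible policy, sketched below in C++, prefers victims whose L2 set is not oversubscribed so the evicted line is more likely to be absorbed; the policy, struct, and data are assumptions for illustration only.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

struct LineMeta {
    uint64_t tag;
    uint32_t recency;            // higher = more recently used
    bool l2_set_oversubscribed;  // indicator carried with the L1 line
};

// Among the ways of a set, pick the least recently used line whose
// corresponding L2 set is NOT oversubscribed, when such a line exists.
int pick_victim(const std::vector<LineMeta>& set) {
    int best = 0;
    for (int i = 1; i < static_cast<int>(set.size()); ++i) {
        bool prefer_i =
            (!set[i].l2_set_oversubscribed && set[best].l2_set_oversubscribed) ||
            (set[i].l2_set_oversubscribed == set[best].l2_set_oversubscribed &&
             set[i].recency < set[best].recency);
        if (prefer_i) best = i;
    }
    return best;
}

int main() {
    std::vector<LineMeta> set{{0xA, 1, true}, {0xB, 2, false}, {0xC, 5, false}};
    std::cout << "evict way " << pick_victim(set) << '\n';   // way 1 (tag 0xB)
}
```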
  • Patent number: 10402287
    Abstract: According to an example, data corruption and a single point of failure are prevented in a fault-tolerant memory fabric with multiple redundancy controllers by granting, by a parity media controller, a lock of a stripe to a redundancy controller to perform a sequence on the stripe. The lock may be broken in response to determining a failure of the redundancy controller prior to completing the sequence. In response to breaking the lock, the parity cacheline of the stripe may be flagged as invalid. Also, a journal may be updated to document the breaking of the lock.
    Type: Grant
    Filed: January 30, 2015
    Date of Patent: September 3, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Derek Alan Sherlock, Harvey Ray, Chris Michael Brueggen
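A compact C++ sketch of the 10402287 lock-breaking behavior: grant a stripe lock to a redundancy controller, and when that controller fails mid-sequence, break the lock, flag the stripe's parity as invalid, and journal the event. The classes and journal format are invented for illustration.

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

struct Stripe {
    int lock_holder = -1;       // redundancy controller currently holding the lock
    bool parity_valid = true;   // parity cacheline validity flag
};

struct ParityMediaController {
    std::unordered_map<uint64_t, Stripe> stripes;
    std::vector<std::string> journal;

    bool grant_lock(uint64_t stripe_id, int controller) {
        Stripe& s = stripes[stripe_id];
        if (s.lock_holder != -1) return false;   // already locked
        s.lock_holder = controller;
        return true;
    }

    // Called when the lock holder is detected as failed before completing its sequence.
    void break_lock(uint64_t stripe_id) {
        Stripe& s = stripes[stripe_id];
        s.parity_valid = false;                  // flag the parity cacheline as invalid
        journal.push_back("broke lock on stripe " + std::to_string(stripe_id) +
                          " held by controller " + std::to_string(s.lock_holder));
        s.lock_holder = -1;
    }
};

int main() {
    ParityMediaController pmc;
    pmc.grant_lock(7, /*controller=*/2);
    pmc.break_lock(7);                           // controller 2 failed mid-sequence
    std::cout << pmc.journal.back() << '\n';
}
```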
  • Patent number: 10402273
    Abstract: The disclosed technology is generally directed to IoT device update failure recovery. In one example of the technology, after writing an updated release to memory, a determination is made whether the updated release is valid. The updated release includes a plurality of image binaries. If the updated release is determined to be valid, the updated release is made the current release. A determination is made as to whether the current release is stable. Upon determining that the current release is unstable, an auto-rollback is performed. Performing the auto-rollback includes, via at least one processor, automatically: obtaining an uncompressed backup of a previous release; making the uncompressed backup of the previous release the current release; and executing the uncompressed backup.
    Type: Grant
    Filed: February 27, 2017
    Date of Patent: September 3, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Reuben R. Olinsky, Edmund B. Nightingale
  • Patent number: 10394492
    Abstract: According to one embodiment, a system includes a media storage device, a processor, and logic integrated with and/or executable by the processor. The logic is configured to cause the processor to determine a write rate for the media storage device or a portion thereof based on one or more factors, the write rate ranging from zero to a maximum possible write rate for the media storage device or the portion thereof. The logic is also configured to cause the processor to receive a write request to write data to the media storage device or the portion thereof and write the data to the media storage device using the determined write rate. Other systems, methods, and computer program products for defending against ransomware attacks are presented according to more embodiments.
    Type: Grant
    Filed: October 26, 2016
    Date of Patent: August 27, 2019
    Assignee: Lenovo Enterprise Solutions (Singapore) Pte. Ltd.
    Inventors: John Michael Petersen, Gary David Cudak, Shareef Fathi Alshinnawi, Ajay Dholakia
  • Patent number: 10387310
    Abstract: A data processing system includes first and second coherency domains and employs a snoop-based coherence protocol. In response to receipt by the first coherency domain of a memory access request originating from a master in the second coherency domain, a plurality of coherence participants in the first coherency domain provides partial responses for the memory access request to an early combined response generator. Based on the partial responses, the early combined response generator generates and transmits, to a memory controller of a system memory in the first coherency domain, an early combined response of only the first coherency domain. Based on the early combined response, the memory controller transmits, to the master prior to receipt by the memory controller of a systemwide combined response for the memory access request, data associated with a target memory address and/or coherence permission for the target memory address.
    Type: Grant
    Filed: January 17, 2018
    Date of Patent: August 20, 2019
    Assignee: International Business Machines Corporation
    Inventors: Eric E. Retter, Michael S. Siegel, Jeffrey A. Stuecheli, Derek E. Williams
  • Patent number: 10379856
    Abstract: A data processing system implementing a weak memory model includes a plurality of processing units coupled to an interconnect fabric. In response to execution of a multicopy atomic store instruction, an initiating processing unit broadcasts a store request on the interconnect fabric to obtain coherence ownership of a target cache line. The initiating processing unit posts a kill request to at least one of the plurality of processing units to request invalidation of a copy of the target cache line. In response to successful posting of the kill request, the initiating processing unit broadcasts a store complete request on the interconnect fabric to enforce completion of the invalidation of the copy of the target cache line. In response to the store complete request receiving a coherence response indicating success, the initiating processing unit permits an update to the target cache line requested by the multicopy atomic store instruction to be atomically visible.
    Type: Grant
    Filed: June 4, 2017
    Date of Patent: August 13, 2019
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Derek E. Williams
  • Patent number: 10380099
    Abstract: A computer-implemented method is provided for managing and sharing picture files. In one embodiment of the present invention, the method comprises providing a server platform and providing a datastore on the server platform for maintaining full resolution copies of the files shared between a plurality of sharing clients. A synchronization engine is provided on the server platform and is configured to send real-time updates to a plurality of sharing clients when at least one of the sharing clients updates or changes one of said files. A web interface may also be provided that allows a user to access files in the datastore through the use of a web browser.
    Type: Grant
    Filed: October 2, 2015
    Date of Patent: August 13, 2019
    Assignee: DROPBOX, INC.
    Inventors: Jack Benjamin Strong, Gibu Thomas
  • Patent number: 10373285
    Abstract: One embodiment provides for a general-purpose graphics processing device comprising a general-purpose graphics processing compute block to process a workload including graphics or compute operations, a first cache memory, and a coherency module to enable the first cache memory to coherently cache data for the workload, the data stored in memory within a virtual address space, wherein the virtual address space is shared with a separate general-purpose processor including a second cache memory that is coherent with the first cache memory.
    Type: Grant
    Filed: April 9, 2017
    Date of Patent: August 6, 2019
    Assignee: Intel Corporation
    Inventors: Joydeep Ray, Altug Koker, James A. Valerio, David Puffer, Abhishek R. Appu, Stephen Junkins
  • Patent number: 10372638
    Abstract: A method for modifying an address in a multi-processor system may include performing a first transaction to modify an address between a first processor and an interconnect agent associated with the first processor and storing data for the address on the interconnect agent. The method may further include performing a second transaction to modify an address between the interconnect agent and a memory associated with a second processor and storing the data in the memory.
    Type: Grant
    Filed: October 20, 2017
    Date of Patent: August 6, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Thomas E. McGee
  • Patent number: 10360054
    Abstract: File mapping and converting for dynamic disk personalization for multiple platforms are provided. A volatile file operation is detected in a first platform. The file is supported by the first platform. A determination is made that the file is sharable with a second platform. The volatile operation is performed on the file in the first platform and the modified file is converted to a second file supported by the second platform. The modified file and second file are stored in a personalized disk for a user. The personalized disk is used to modify base images for VMs of the user when the user accesses the first platform or second platform. The modified file is available within the first platform and the second file is available within the second platform.
    Type: Grant
    Filed: April 27, 2016
    Date of Patent: July 23, 2019
    Assignee: Micro Focus Software Inc.
    Inventors: Nathaniel Brent Kranendonk, Jason Allen Sabin, Lloyd Leon Burch, Jeremy Ray Brown, Kal A. Larsen, Michael John Jorgensen
  • Patent number: 10362143
    Abstract: A system and method dynamically transitions the file system role of compute nodes in a distributed clustered file system for an object that includes an embedded compute engine (a storlet). Embodiments of the invention overcome prior art problems of a storlet in a distributed storage system with a storlet engine having a dynamic role module which dynamically assigns or changes a file system role served by the node to a role which is more optimally suited for a computation operation in the storlet. The role assignment is made based on a classification of the computation operation and the appropriate filesystem role that matches the computation operation. For example, a role could be assigned which helps reduce storage needs, communication resources, etc.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: July 23, 2019
    Assignee: International Business Machines Corporation
    Inventors: Duane M. Baldwin, Sasikanth Eda, John T. Olson, Sandeep R. Patil
  • Patent number: 10353601
    Abstract: A memory system of a data processing system includes one or more storage devices and a data rearrangement engine for moving data between memory regions of the plurality of memory regions. The data rearrangement engine is configured to rearrange data stored at non-contiguous addresses in a source memory region into contiguous addresses in a destination region responsive to a rearrangement specified by a host processing unit of the data processing system. A description of the rearranged data is maintained in a metadata memory region. Rearranged data may be accessed by one or more host processing units. Write-back of data from the destination to the source region may be reduced by use of a Bloom filter or the like.
    Type: Grant
    Filed: November 28, 2016
    Date of Patent: July 16, 2019
    Assignee: Arm Limited
    Inventor: Jonathan Curtis Beard
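The 10353601 abstract gathers data from non-contiguous source addresses into a contiguous destination region and uses a Bloom filter (or similar) to limit write-back. The C++ sketch below shows one way those pieces could fit together; the tiny two-hash Bloom filter, index list, and data are all invented for illustration.

```cpp
#include <bitset>
#include <cstdint>
#include <iostream>
#include <vector>

// Tiny Bloom filter used to remember which destination slots were modified,
// so write-back can skip slots that were only read.
struct Bloom {
    std::bitset<256> bits;
    static size_t h1(size_t x) { return (x * 2654435761u) % 256; }
    static size_t h2(size_t x) { return (x * 40503u + 7) % 256; }
    void add(size_t x) { bits.set(h1(x)); bits.set(h2(x)); }
    bool maybe(size_t x) const { return bits.test(h1(x)) && bits.test(h2(x)); }
};

int main() {
    std::vector<int> source{10, 20, 30, 40, 50, 60, 70, 80};
    std::vector<size_t> indices{6, 1, 4};        // non-contiguous source addresses

    // Gather into a contiguous destination region.
    std::vector<int> dest;
    for (size_t i : indices) dest.push_back(source[i]);

    Bloom touched;
    dest[1] = 99;                                // host modifies one element
    touched.add(1);

    // Write back only elements the filter says may have been modified.
    for (size_t d = 0; d < dest.size(); ++d)
        if (touched.maybe(d)) source[indices[d]] = dest[d];

    std::cout << source[1] << '\n';              // 99: only this slot was written back
}
```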
  • Patent number: 10346082
    Abstract: A storage system manages control information, which is information related to responses corresponding to prescribed types of commands, for each of a plurality of logical units associated with a logical device, said logical units being provided to one or more host systems. The prescribed types of commands indicating the logical units provided to a first host system, which is one of the one or more host systems, are received from the first host system by the storage system. Responses based on the control information corresponding to the logical units indicated by the received prescribed types of commands are returned to the first host system by the storage system as responses to the received prescribed types of commands.
    Type: Grant
    Filed: June 24, 2015
    Date of Patent: July 9, 2019
    Assignee: HITACHI LTD.
    Inventors: Azusa Jin, Hideo Saito, Shunji Kawamura, Kenji Muraoka, Kunihiko Nashimoto
  • Patent number: 10346091
    Abstract: Methods and apparatus related to fabric resiliency support for atomic writes of many store operations to remote nodes are described. In one embodiment, non-volatile memory stores data corresponding to a plurality of write operations. A first node includes logic to perform one or more operations (in response to the plurality of write operations) to cause storage of the data at a second node atomically. The plurality of write operations are atomically bound to a transaction and the data is written to the non-volatile memory in response to release of the transaction. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: July 9, 2019
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Thomas Willhalm, Karthik Kumar, Martin P. Dimitrov, Raj K. Ramanujan
  • Patent number: 10339060
    Abstract: System, method, and processor for enabling early deallocation of tracker entries which track memory accesses are described herein. One embodiment of a method includes: maintaining an RSF corresponding to a first processing unit of a plurality of processing units to track cache lines, wherein a cache line is tracked by the RSF if the cache line is stored in both a memory and one or more other processing units, wherein the memory is coupled to and shared by the plurality of processing units; receiving a request to access a target cache line from a processing core of the first processing unit; allocating a tracker entry corresponding to the request, the tracker entry used to track a status of the request; performing a lookup in the RSF for the target cache line; and deallocating the tracker entry responsive to a detection that the target cache line is not tracked by the RSF.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: July 2, 2019
    Assignee: Intel Corporation
    Inventors: Bahaa Fahim, Ashok Jagannathan, Jeffrey D. Chamberlain, Samuel D. Strom
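The 10339060 abstract deallocates a tracker entry early when the remote snoop filter (RSF) shows the target line is not cached elsewhere. A minimal C++ sketch of that flow follows; representing the RSF as a set of line addresses and the tracker as a map are simplifying assumptions.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <unordered_set>

// Remote snoop filter (RSF): line addresses stored both in the shared memory
// and in at least one other processing unit.
using RSF = std::unordered_set<uint64_t>;

struct Tracker {
    std::unordered_map<uint64_t, int> entries;   // line address -> requesting core

    void allocate(uint64_t line, int core) { entries[line] = core; }
    void deallocate(uint64_t line) { entries.erase(line); }
};

// Handle a core's request: allocate a tracker entry, look the line up in the
// RSF, and deallocate the entry early if no other unit can hold the line.
void handle_request(Tracker& t, const RSF& rsf, uint64_t line, int core) {
    t.allocate(line, core);
    if (rsf.find(line) == rsf.end()) {
        t.deallocate(line);            // no cross-unit snoop needed: free the entry
        std::cout << "early deallocation for line 0x" << std::hex << line << '\n';
    }
}

int main() {
    RSF rsf{0x1000};                   // only line 0x1000 is shared elsewhere
    Tracker t;
    handle_request(t, rsf, 0x2000, /*core=*/0);   // deallocated early
    handle_request(t, rsf, 0x1000, /*core=*/1);   // entry stays allocated
    std::cout << std::dec << t.entries.size() << " tracker entry remains\n";
}
```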
  • Patent number: 10320904
    Abstract: A computer-implemented method is provided for managing and sharing picture files. In one embodiment of the present invention, the method comprises providing a server platform and providing a datastore on the server platform for maintaining full resolution copies of the files shared between a plurality of sharing clients. A synchronization engine is provided on the server platform and is configured to send real-time updates to a plurality of sharing clients when at least one of the sharing clients updates or changes one of said files. A web interface may also be provided that allows a user to access files in the datastore through the use of a web browser.
    Type: Grant
    Filed: October 2, 2015
    Date of Patent: June 11, 2019
    Assignee: DROPBOX, INC.
    Inventors: Jack Benjamin Strong, Gibu Thomas
  • Patent number: 10303602
    Abstract: A processing system includes at least one central processing unit (CPU) core, at least one graphics processing unit (GPU) core, a main memory, and a coherence directory for maintaining cache coherence. The at least one CPU core receives a CPU cache flush command to flush cache lines stored in cache memory of the at least one CPU core prior to launching a GPU kernel. The coherence directory transfers data associated with a memory access request by the at least one GPU core from the main memory without issuing coherence probes to caches of the at least one CPU core.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: May 28, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Onur Kayiran, Gabriel H. Loh, Yasuko Eckert
  • Patent number: 10289553
    Abstract: Disclosed aspects relate to accelerator sharing among a plurality of processors through a plurality of coherent proxies. The cache lines in a cache associated with the accelerator are allocated to one of the plurality of coherent proxies. In a cache directory for the cache lines used by the accelerator, the status of the cache lines and the identification information of the coherent proxies to which the cache lines are allocated are provided. Each coherent proxy maintains a shadow directory of the cache directory for the cache lines allocated to it. In response to receiving an operation request, a coherent proxy corresponding to the request is determined. The accelerator communicates with the determined coherent proxy for the request.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: May 14, 2019
    Assignee: International Business Machines Corporation
    Inventors: Peng Fei Bg Gou, Yang Liu, Yang Fan El Liu, Yong Lu
  • Patent number: 10282299
    Abstract: Partition information includes entries that each include an entity identifier and associated cache configuration information. A controller manages memory requests originating from processor cores, including: comparing at least a portion of an address included in a memory request with tags stored in a cache to determine whether the memory request results in a hit or a miss, and comparing an entity identifier included in the memory request with stored entity identifiers to determine a matched entry. The cache configuration information associated with the entity identifier in a matched entry is updated based at least in part on a hit or miss result. The associated cache configuration information includes cache usage information that tracks usage of the cache by an entity associated with the particular entity identifier, and partition descriptors that each define a different group of one or more of the regions.
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: May 7, 2019
    Assignee: Cavium, LLC
    Inventors: Shubhendu Sekhar Mukherjee, David Asher, Wilson P. Snyder, II
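A small C++ sketch of the 10282299 partition information: per-entity cache configuration holding usage counters and partition descriptors, updated on each hit or miss result. The field names and table layout are assumptions for illustration.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

// Per-entity cache configuration: usage counters plus partition descriptors
// that each name a group of cache regions the entity may occupy.
struct CacheConfig {
    uint64_t hits = 0;
    uint64_t misses = 0;
    std::vector<uint32_t> partition_descriptors;   // region groups
};

struct PartitionTable {
    std::unordered_map<uint32_t, CacheConfig> by_entity;   // entity id -> config

    // Update the matched entry's usage information based on the hit/miss result.
    void record_access(uint32_t entity_id, bool hit) {
        CacheConfig& cfg = by_entity[entity_id];
        if (hit) ++cfg.hits; else ++cfg.misses;
    }
};

int main() {
    PartitionTable table;
    table.by_entity[7] = {0, 0, {0, 2}};   // entity 7 may use region groups 0 and 2
    table.record_access(7, /*hit=*/true);
    table.record_access(7, /*hit=*/false);
    const CacheConfig& cfg = table.by_entity[7];
    std::cout << "entity 7: " << cfg.hits << " hits, " << cfg.misses << " misses\n";
}
```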
  • Patent number: 10261904
    Abstract: Operations associated with a memory and operations associated with one or more functional units may be received. A dependency between the operations associated with the memory and the operations associated with one or more of the functional units may be determined. A first ordering may be created for the operations associated with the memory. Furthermore, a second ordering may be created for the operations associated with one or more of the functional units based on the determined dependency and the first ordering of the operations associated with the memory.
    Type: Grant
    Filed: December 7, 2017
    Date of Patent: April 16, 2019
    Assignee: Intel Corporation
    Inventors: Chunhui Zhang, George Z. Chrysos, Edward T. Grochowski, Ramacharan Sundararaman, Chung-Lun Chan, Federico Ardanaz
  • Patent number: 10255305
    Abstract: Technologies for object-based data consistency in a fabric architecture include a network switch communicatively coupled to a plurality of computing nodes. The network switch is configured to receive an object read request that includes an object identifier and a data consistency threshold from one of the computing nodes. The network switch is additionally configured to perform a lookup for a value of an object in the cache memory as a function of the object identifier and determine whether a condition of the value of the object violates the data consistency threshold in response to a determination that the lookup successfully returned the value of the object. The network switch is further configured to transmit the value of the object to the computing node in response to a determination that the condition of the value of the object does not violate the data consistency threshold. Other embodiments are described herein.
    Type: Grant
    Filed: September 9, 2016
    Date of Patent: April 9, 2019
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Thomas Willhalm, Karthik Kumar, Raj K. Ramanujan, Daniel Rivas Barragan
  • Patent number: 10255118
    Abstract: A system and method of allocating resources among cores in a multi-core system is disclosed. The system and method determine cores that are able to process tasks to be performed, and use history of usage information to select a core to process the tasks. The system may be a heterogeneous multi-core processing system, and may include a system on chip (SoC).
    Type: Grant
    Filed: October 24, 2013
    Date of Patent: April 9, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ki Soo Yu, Kyung Il Sun, Chang Hwan Youn
  • Patent number: 10255182
    Abstract: A method of managing a cache includes storing first data of an upper level cache in a lower level cache, predicting a reuse distance level of second data having a same signature as the first data based on access information about the first data, and storing the second data in one of the lower level cache and a main memory based on the predicted reuse distance level of the second data.
    Type: Grant
    Filed: February 9, 2016
    Date of Patent: April 9, 2019
    Assignees: SAMSUNG ELECTRONICS CO., LTD., SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
    Inventors: Namhyung Kim, Junwhan Ahn, Kiyoung Choi, Woong Seo
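The 10255182 abstract predicts a reuse distance level from a data signature and places incoming data either in the lower-level cache or directly in main memory. The C++ sketch below is one plausible shape for such a predictor; the signature source, threshold, and training rule are invented assumptions.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>

enum class Placement { LowerLevelCache, MainMemory };

// Predictor keyed by signature: data whose past reuse distance was short goes
// to the lower-level cache, other data bypasses it to main memory.
struct ReusePredictor {
    std::unordered_map<uint32_t, uint32_t> observed_distance;  // signature -> distance

    void train(uint32_t signature, uint32_t reuse_distance) {
        observed_distance[signature] = reuse_distance;
    }

    Placement place(uint32_t signature, uint32_t short_threshold) const {
        auto it = observed_distance.find(signature);
        if (it != observed_distance.end() && it->second <= short_threshold)
            return Placement::LowerLevelCache;
        return Placement::MainMemory;   // unknown or long reuse distance: bypass
    }
};

int main() {
    ReusePredictor p;
    p.train(/*signature=*/0xBEEF, /*reuse_distance=*/8);   // learned from first data
    bool cached = p.place(0xBEEF, /*short_threshold=*/32) == Placement::LowerLevelCache;
    std::cout << std::boolalpha << cached << '\n';          // true: keep in the cache
}
```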
  • Patent number: 10254990
    Abstract: Method, system and product for direct access to de-duplicated data units in memory-based file systems. The method comprising: updating a page entry in a page table of a process to include a direct access pointer to a de-duplicated data unit retained by the memory-based file system, wherein the page entry is set to be write protected; detecting a page fault occurring due to the process performing a store instruction to the de-duplicated data unit; and in response to said detecting: allocating a new data unit; copying content of the de-duplicated data unit to the new data unit; and replacing the direct access pointer to the de-duplicated data unit with a direct access pointer to the new data unit.
    Type: Grant
    Filed: May 13, 2016
    Date of Patent: April 9, 2019
    Assignee: NETAPP, INC.
    Inventors: Amit Golander, Yigal Korman, Boaz Harrosh
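A user-space C++ analogue of the 10254990 fault path: a store to a write-protected entry pointing at a shared, de-duplicated unit triggers allocation of a private copy and repointing of the entry. The structures and the explicit handle_store_fault call stand in for the real page table and page-fault machinery.

```cpp
#include <cstring>
#include <iostream>
#include <memory>

constexpr size_t kUnitSize = 4096;

struct PageEntry {
    char* direct_pointer;    // points at the shared, de-duplicated data unit
    bool write_protected;    // set while the entry still references the shared unit
};

// Fault path for a store to a write-protected, de-duplicated unit:
// allocate a new unit, copy the contents, and repoint the entry at it.
std::unique_ptr<char[]> handle_store_fault(PageEntry& entry) {
    auto private_unit = std::make_unique<char[]>(kUnitSize);
    std::memcpy(private_unit.get(), entry.direct_pointer, kUnitSize);
    entry.direct_pointer = private_unit.get();
    entry.write_protected = false;           // subsequent stores hit the new unit
    return private_unit;                     // caller keeps the allocation alive
}

int main() {
    auto shared_unit = std::make_unique<char[]>(kUnitSize);
    std::strcpy(shared_unit.get(), "dedup");
    PageEntry entry{shared_unit.get(), true};

    auto copy = handle_store_fault(entry);   // would be triggered by a store instruction
    std::strcpy(entry.direct_pointer, "modified");
    std::cout << shared_unit.get() << ' ' << entry.direct_pointer << '\n';
    // prints: dedup modified  (the shared unit is untouched)
}
```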
  • Patent number: 10248325
    Abstract: Memory is to store cache lines, where the cache lines include data and directory information to indicate a directory state of the corresponding cache line. A command is received from a processor over a link, the command including an address. The address is determined to correspond to a particular cache line and the particular cache line is identified to have a particular directory state from the corresponding directory information of the particular cache line. A type of the command is identified and a determination is made that the directory state of the particular cache line is to change from the particular state to a new state based on the type of the command. The directory information of the particular cache line is changed to reflect the new state and a response is generated to the command.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: April 2, 2019
    Assignee: Intel Corporation
    Inventor: Robert G. Blankenship
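The 10248325 abstract changes a cache line's in-memory directory state based on the type of command received over the link. The sketch below uses an invented three-state, three-command transition function purely to illustrate the shape of that decision; the actual states and commands in the patent may differ.

```cpp
#include <cstdint>
#include <iostream>

enum class DirState { Invalid, Shared, Exclusive };
enum class Command  { Read, ReadForOwnership, Writeback };

struct CacheLine {
    uint64_t address;
    DirState directory;   // directory information stored with the line
};

// Illustrative next-state function: the command type decides whether the
// stored directory information must change.
DirState next_state(DirState current, Command cmd) {
    switch (cmd) {
        case Command::Read:              return DirState::Shared;
        case Command::ReadForOwnership:  return DirState::Exclusive;
        case Command::Writeback:         return DirState::Invalid;
    }
    return current;
}

// Handle a command from the processor: update the directory information of
// the addressed line and report the new state in the response.
DirState handle_command(CacheLine& line, Command cmd) {
    line.directory = next_state(line.directory, cmd);
    return line.directory;
}

int main() {
    CacheLine line{0x4000, DirState::Invalid};
    handle_command(line, Command::Read);               // Invalid -> Shared
    DirState s = handle_command(line, Command::ReadForOwnership);
    std::cout << (s == DirState::Exclusive) << '\n';   // 1
}
```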
  • Patent number: 10241945
    Abstract: In a data processing system implementing a weak memory model, a lower level cache receives, from a processor core, a plurality of copy-type requests and a plurality of paste-type requests that together indicate a memory move to be performed. The lower level cache also receives, from the processor core, a barrier request that requests enforcement of ordering of memory access requests prior to the barrier request with respect to memory access requests after the barrier request. Prior to completion of processing of the barrier request by the lower level cache, the lower level cache speculatively issues a request on the interconnect fabric to obtain a copy of a data granule specified by a memory access request among the pluralities of requests that follows the barrier request in program order.
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: March 26, 2019
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Derek E. Williams
  • Patent number: 10229024
    Abstract: An apparatus for coherent shared memory across multiple clusters is described herein. The apparatus includes a fabric memory controller and one or more nodes. The fabric memory controller manages access to a shared memory region of each node such that each shared memory region is accessible using load store semantics, even in response to failure of the node. The apparatus also includes a global memory, wherein each shared memory region is mapped to the global memory by the fabric memory controller.
    Type: Grant
    Filed: June 8, 2016
    Date of Patent: March 12, 2019
    Assignee: Intel Corporation
    Inventors: Debendra Das Sharma, Mohan J. Kumar, Balint Fleischer
  • Patent number: 10223266
    Abstract: A load store unit (LSU) in a processor core detects that new data produced by the processor core is ready to be drained to an L2 cache. In response to the LSU detecting that an earlier version of the new data is not stored in L1 cache, a memory controller sends the new data as L1 cache missed data to a store queue (STQ), where the STQ makes data available for deallocation from the STQ to the L2 cache. In response to determining that there is no newer data waiting to be stored in the STQ, or no cache line invalidate to the line containing the store data in the STQ that misses the cache, the memory controller maintains the new data in the STQ with a zombie stat bit that indicates that the new data is a zombie store entry that can be utilized by the processor core.
    Type: Grant
    Filed: November 30, 2016
    Date of Patent: March 5, 2019
    Assignee: International Business Machines Corporation
    Inventors: Robert A. Cordes, Hung Q. Le, Brian W. Thompto
  • Patent number: 10223186
    Abstract: A coherency error detection and reporting mechanism monitors for coherency errors in a processor and between processors. When a requestor broadcasts a memory address in a command and a coherency error is detected, information regarding the command that caused the coherency error is logged, and the coherency error is reported to a system error handler. The information logged for the coherency error may include the address of the coherency error, the requestor, the command, the response to the command, the scope of the coherency error, the error type, etc. Logging information relating to the coherency error provides more information to a person analyzing the processor for failures to more easily track down the cause of coherency errors.
    Type: Grant
    Filed: February 1, 2017
    Date of Patent: March 5, 2019
    Assignee: International Business Machines Corporation
    Inventors: John T. Hollaway, Jr., Charles F. Marino, Michael S. Siegel
  • Patent number: 10216781
    Abstract: Techniques are described for maintaining coherency of a portion of a database object populated in the volatile memories of multiple nodes in a database cluster. The techniques involve maintaining a local invalidation bitmap for chunks of data stored in memory in each particular node in the cluster by tracking locks granted by a lock manager. During a pre-loading operation, each given node requests a set of shared locks associated with the chunks of data to be stored in the given node's memory. When a request to release one of these shared locks occurs, the in-memory copy of those data items may be invalidated in the node releasing its shared lock.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: February 26, 2019
    Assignee: Oracle International Corporation
    Inventors: Sanket Hase, Neil MacNaughton, Vivekanandhan Raja, Atrayee Mullick, Vineet Marwah, Amit Ganesh
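A minimal C++ sketch of the 10216781 bookkeeping: each node pre-loads chunks under shared locks, and when the lock manager asks it to release a shared lock, the node marks the corresponding in-memory copy invalid in its local invalidation bitmap. Class and method names are invented for illustration.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

// Per-node bookkeeping: a local invalidation bitmap for in-memory chunks,
// driven by shared-lock grants and releases.
struct NodeImcState {
    std::vector<bool> invalid;                        // one bit per loaded chunk
    std::unordered_map<uint32_t, bool> shared_locks;  // chunk id -> lock held

    explicit NodeImcState(size_t chunks) : invalid(chunks, false) {}

    void preload(uint32_t chunk) { shared_locks[chunk] = true; }  // shared lock granted

    // The lock manager asks this node to release its shared lock on a chunk,
    // e.g. because a writer wants exclusive access: mark the copy invalid.
    void release_shared_lock(uint32_t chunk) {
        shared_locks[chunk] = false;
        invalid[chunk] = true;
    }

    bool usable(uint32_t chunk) const { return !invalid[chunk]; }
};

int main() {
    NodeImcState node(/*chunks=*/4);
    node.preload(2);
    std::cout << node.usable(2) << '\n';   // 1: in-memory copy is valid
    node.release_shared_lock(2);           // a writer elsewhere modified chunk 2
    std::cout << node.usable(2) << '\n';   // 0: must fall back to another source
}
```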
  • Patent number: 10216633
    Abstract: There is provided a data processing device including an output port to transmit a request value to an interconnect arranged to implement a coherency protocol, to indicate a request to be subjected to the coherency protocol. An input port receives an acknowledgement value from the interconnect in response to the request value and coherency administration circuitry defines behavior rules for the data processing device in accordance with the coherency protocol and in dependence on the request value and the acknowledgement value. Storage circuitry administers data in accordance with the behavior rules. There is also provided an interconnect including an input port to receive a request value, issued by a data processing device having storage circuitry, to indicate a request for the data processing device to be subjected to a coherency protocol.
    Type: Grant
    Filed: April 29, 2016
    Date of Patent: February 26, 2019
    Assignee: Arm Limited
    Inventors: Dominic William Brown, Ashley John Crawford
  • Patent number: 10216580
    Abstract: Methods, system and computer program product for backing up and restoring mainframe data onto an object storage, the methods comprising a backup operation and a restore operation, the backup operation comprising: receiving a request for backing up a data set; splitting the data set into chunks, each chunk having a predetermined size; creating a mapping object; repeating for each chunk: allocating a sender thread to the chunk; transmitting, using an object storage API, the chunk having the predetermined size as an object to the object storage by the sender thread; and updating the mapping object with details of the chunk; subject to the data set being fully split and no more chunks to be transmitted, transmitting the mapping object to the object storage by the sender thread; and writing an identifier of the data set and metadata of the mapping object to a database.
    Type: Grant
    Filed: March 29, 2018
    Date of Patent: February 26, 2019
    Assignee: MODEL9 SOFTWARE LTD.
    Inventors: Gil Peleg, Yuval Kashtan, Tomer Zelberzvig, Dori Polotsky, Offer Baruch
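A simplified C++ sketch of the 10216580 backup flow: split the data set into fixed-size chunks, record each chunk as an object, and build the mapping object that describes how to reassemble the data set. For brevity this runs sequentially and only records chunks; the patent's sender threads and object storage API calls are noted in comments, and the data set name is hypothetical.

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct ChunkRecord {
    std::string object_name;
    size_t offset;
    size_t length;
};

// Split a data set into fixed-size chunks, "upload" each one as an object,
// and build the mapping object that records how to reassemble the data set.
std::vector<ChunkRecord> backup(const std::string& dataset_name,
                                const std::string& data, size_t chunk_size) {
    std::vector<ChunkRecord> mapping;
    for (size_t off = 0; off < data.size(); off += chunk_size) {
        size_t len = std::min(chunk_size, data.size() - off);
        std::string object = dataset_name + ".chunk" + std::to_string(mapping.size());
        // A real implementation would hand this chunk to a sender thread that
        // PUTs it through the object storage API; here we only record it.
        mapping.push_back({object, off, len});
    }
    return mapping;   // finally transmitted to the object storage as the mapping object
}

int main() {
    auto mapping = backup("PROD.PAYROLL", std::string(10000, 'x'), 4096);
    for (const auto& c : mapping)
        std::cout << c.object_name << " offset=" << c.offset
                  << " length=" << c.length << '\n';   // 3 chunks: 4096, 4096, 1808
}
```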
  • Patent number: 10216519
    Abstract: A data processing system implementing a weak memory model includes a plurality of processing units coupled to an interconnect fabric. In response to execution of a multicopy atomic store instruction, an initiating processing unit broadcasts a store request on the interconnect fabric to obtain coherence ownership of a target cache line. The initiating processing unit posts a kill request to at least one of the plurality of processing units to request invalidation of a copy of the target cache line. In response to successful posting of the kill request, the initiating processing unit broadcasts a store complete request on the interconnect fabric to enforce completion of the invalidation of the copy of the target cache line. In response to the store complete request receiving a coherence response indicating success, the initiating processing unit permits an update to the target cache line requested by the multicopy atomic store instruction to be atomically visible.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: February 26, 2019
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Derek E. Williams
  • Patent number: 10216413
    Abstract: Techniques are provided by which memory pages may be migrated among PPU memories in a multi-PPU system. According to the techniques, a UVM driver determines that a particular memory page should change ownership state and/or be migrated between one PPU memory and another PPU memory. In response to this determination, the UVM driver initiates a peer transition sequence to cause the ownership state and/or location of the memory page to change. Various peer transition sequences involve modifying mappings for one or more PPU, and copying a memory page from one PPU memory to another PPU memory. Several steps in peer transition sequences may be performed in parallel for increased processing speed.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: February 26, 2019
    Assignee: NVIDIA CORPORATION
    Inventors: Jerome F. Duluk, Jr., John Mashey, Mark Hairgrove, Chenghuan Jia, Cameron Buschardt, Lucien Dunning, Brian Fahs
  • Patent number: 10216692
    Abstract: A multiprocessor system on a chip (MPSoC) implements parallel processing and includes a plurality of cores with inter-core communication. This communication is implemented by an on-chip switch fabric in communication with each core, or by shared memory in communication with each core. In another embodiment, a parallel processing system is implemented as a Howard Cascade and uses shared memory for implementing inter-chip communication. The parallel processing system includes a plurality of chips, each formed as an MPSoC, and implements communication between the chips using shared memory.
    Type: Grant
    Filed: June 17, 2010
    Date of Patent: February 26, 2019
    Assignee: Massively Parallel Technologies, Inc.
    Inventor: Kevin D. Howard