Coherency Patents (Class 711/141)
  • Patent number: 10817425
    Abstract: Methods and apparatus implementing hardware/software co-optimization to improve performance and energy for inter-VM communication for NFVs and other producer-consumer workloads. The apparatus includes multi-core processors with multi-level cache hierarchies including L1 and L2 caches for each core and a shared last-level cache (LLC). One or more machine-level instructions are provided for proactively demoting cachelines from lower cache levels to higher cache levels, including demoting cachelines from L1/L2 caches to an LLC. Techniques are also provided for implementing hardware/software co-optimization in multi-socket NUMA architecture systems, wherein cachelines may be selectively demoted and pushed to an LLC in a remote socket. In addition, techniques are disclosed for implementing early snooping in multi-socket systems to reduce latency when accessing cachelines on remote sockets.
    Type: Grant
    Filed: December 26, 2014
    Date of Patent: October 27, 2020
    Assignee: Intel Corporation
    Inventors: Ren Wang, Andrew J. Herdrich, Yen-cheng Liu, Herbert H. Hum, Jong Soo Park, Christopher J. Hughes, Namakkal N. Venkatesan, Adrian C. Moga, Aamer Jaleel, Zeshan A. Chishti, Mesut A. Ergin, Jr-shian Tsai, Alexander W. Min, Tsung-yuan C. Tai, Christian Maciocco, Rajesh Sankaran
  • Patent number: 10819611
    Abstract: Techniques for implementing dynamic timeout-based fault detection in a distributed system are provided. In one set of embodiments, a node of the distributed system can set a timeout interval to a minimum value and transmit poll messages to other nodes in the distributed system. The node can further wait for acknowledgement messages from all of the other nodes, where the acknowledgement messages are responsive to the poll messages, and can check whether it has received the acknowledgement messages from all of the other nodes within the timeout interval. If the node has failed to receive an acknowledgement message from at least one of the other nodes within the timeout interval and if the timeout interval is less than a maximum value, the node can increment the timeout interval by a delta value and can repeat the setting, the transmitting, the waiting, and the checking steps.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: October 27, 2020
    Assignee: VMware, Inc.
    Inventors: Zeeshan Lokhandwala, Medhavi Dhawan, Dahlia Malkhi, Michael Wei, Maithem Munshed, Ragnar Edholm
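The polling loop described in this abstract can be sketched as below. This is an illustrative sketch only, not the patented implementation: the function names (`send_poll`, `recv_acks`) and the default interval values are assumptions.

```python
def detect_faults(nodes, send_poll, recv_acks,
                  t_min=0.1, t_max=2.0, delta=0.1):
    """Dynamic-timeout fault detection sketch: start at the minimum
    timeout, poll all peers, and widen the interval by `delta` on each
    incomplete round until `t_max` is reached, at which point the silent
    nodes are declared faulty. `recv_acks(timeout)` returns the set of
    nodes that acknowledged within `timeout` seconds."""
    timeout = t_min
    while True:
        for node in nodes:
            send_poll(node)                     # transmit poll messages
        acked = recv_acks(timeout)              # wait for acknowledgements
        missing = set(nodes) - acked
        if not missing:
            return set()                        # all nodes answered in time
        if timeout >= t_max:
            return missing                      # give up: report faulty nodes
        timeout = min(timeout + delta, t_max)   # widen interval and retry
```

Adaptively widening the interval avoids falsely declaring slow-but-alive nodes faulty while still bounding detection latency at `t_max`.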
  • Patent number: 10819823
    Abstract: Disclosed herein are an in-network caching apparatus and method. The in-network caching method using the in-network caching apparatus includes receiving content from a second node in response to a request from a first node; checking a Conditional Leave Copy Everywhere (CLCE) replication condition depending on a number of requests for the content; checking a priority condition based on a result value of a priority function for the content; checking a partition depending on the number of requests for the content; performing a cache replacement operation for the content depending on a result of checking the partition for the content; and transmitting the content to the first node.
    Type: Grant
    Filed: August 3, 2017
    Date of Patent: October 27, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Muhammad Bilal, Shin-Gak Kang, Wook Hyun, Sung-Hei Kim, Ju-Young Park, Mi-Young Huh
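The first two checks in the abstract (the CLCE replication condition gated on request count, followed by the priority-function check) can be sketched as a simple admission test. The threshold values and function name below are illustrative assumptions, not taken from the patent.

```python
def should_cache(request_count, priority,
                 clce_threshold=3, prio_threshold=0.5):
    """Toy sketch of the CLCE admission pipeline: content is considered
    for replication only after it has been requested often enough
    (the CLCE condition), and is then cached only if its priority
    function's result clears a threshold."""
    if request_count < clce_threshold:
        return False                       # CLCE: not popular enough to copy
    return priority >= prio_threshold      # cache only high-priority content
```

The subsequent partition check and cache-replacement step in the abstract would then run only for content that passes this admission test.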
  • Patent number: 10802982
    Abstract: An apparatus includes an interface and memory acquisition circuitry. The interface is configured to communicate over a bus operating in accordance with a bus protocol, which supports address-translation transactions that translate between bus addresses in an address space of the bus and physical memory addresses in an address space of a memory. The memory acquisition circuitry is configured to read data from the memory by issuing over the bus, using the bus protocol, one or more requests that (i) specify addresses to be read in terms of the physical memory addresses, and (ii) indicate that the physical memory addresses in the requests have been translated from corresponding bus addresses even though the addresses were not obtained by any address-translation transaction over the bus.
    Type: Grant
    Filed: April 8, 2018
    Date of Patent: October 13, 2020
    Assignee: MELLANOX TECHNOLOGIES, LTD.
    Inventors: Ahmad Atamlh, Ofir Arkin, Peter Paneah
  • Patent number: 10795817
    Abstract: Example distributed storage systems, file system interfaces, and methods provide cache coherence management. A system receives a file data request including a file data reference and identifies a data cache location with a coherence value for the file data reference. The system queries a reference data store for a coherence reference corresponding to the file data reference and compares the coherence value to the coherence reference. In response to the coherence value matching the coherence reference, the system executes the file data request using the data cache location.
    Type: Grant
    Filed: November 16, 2018
    Date of Patent: October 6, 2020
    Assignee: Western Digital Technologies, Inc.
    Inventors: Bruno Keymolen, Arne Vansteenkiste, Wim Michel Marcel De Wispelaere, Stijn Devriendt
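The coherence check in this abstract can be sketched as follows. Dicts stand in for the data cache and the reference data store, and all names (`serve_request`, `fetch_from_store`) are illustrative assumptions.

```python
def serve_request(file_ref, cache, coherence_store, fetch_from_store):
    """Sketch of the described flow: look up the authoritative coherence
    reference for the file reference, compare it against the coherence
    value held with the cached copy, and serve from cache only on a
    match; otherwise refetch and refresh the cache entry."""
    reference = coherence_store[file_ref]       # authoritative version tag
    entry = cache.get(file_ref)
    if entry is not None and entry["coherence"] == reference:
        return entry["data"]                    # coherent: serve from cache
    data = fetch_from_store(file_ref)           # stale or absent: refetch
    cache[file_ref] = {"coherence": reference, "data": data}
    return data
```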
  • Patent number: 10795824
    Abstract: Speculative data return in parallel with an exclusive invalidate request. A requesting processor requests data from a shared cache. The data is owned by another processor. Based on the request, an invalidate request is sent to the other processor requesting the other processor to release ownership of the data. Concurrent to the invalidate request being sent to the other processor, the data is speculatively provided to the requesting processor.
    Type: Grant
    Filed: November 21, 2018
    Date of Patent: October 6, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Deanna P. Berger, Christian Jacobi, Robert J. Sonnelitter, III, Craig R. Walters
  • Patent number: 10790862
    Abstract: Systems and methods in accordance with various embodiments of the present disclosure provide approaches for mapping entries to a cache using a function, such as a cyclic redundancy check (CRC). The function can calculate a colored cache index based on a main memory address. The function may cause consecutive address cache indexes to be spread throughout the cache according to the indexes calculated by the function. In some embodiments, each data context may be associated with a different function, enabling different types of packets to be processed while sharing the same cache, reducing evictions of other data contexts and improving performance. Various embodiments can identify a type of packet as the packet is received, and look up a mapping function based on the type of packet. The function can then be used to look up the corresponding data context for the packet from the cache, for processing the packet.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: September 29, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Ofer Frishman, Erez Izenberg, Guy Nakibly
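The colored-index idea in this abstract can be sketched with a software CRC. `zlib.crc32` stands in for the hardware CRC circuit, and salting the CRC with a per-context value to get per-context functions is an assumption about one way to realize "a different function per data context", not the patented scheme.

```python
import zlib

def colored_index(address, context_salt, num_sets=256):
    """Sketch of a CRC-based cache index: hash the main-memory address so
    that consecutive addresses spread across the cache sets, using a
    per-data-context salt as the CRC starting value so different packet
    types index the shared cache differently."""
    data = address.to_bytes(8, "little")
    return zlib.crc32(data, context_salt) % num_sets
```

Unlike a plain modulo index, consecutive cache-line addresses land on well-scattered sets, which is the eviction-reducing behavior the abstract describes.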
  • Patent number: 10783033
    Abstract: A computing device includes a main memory, a processor, and a cache. The main memory stores data and parity, for checking an error of the data, and sends and receives the data and parity with a reference size. The processor accesses the main memory, and the cache memory caches the data. If the processor requests a write operation for current data, the current data are stored to the cache memory and the cache memory changes the stored current data to the reference size and outputs the current data changed to the reference size to the main memory. A size of the current data is smaller than the reference size.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: September 22, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hyungwoo Choi, Hyungjoon Park, Jinwoong Lee
  • Patent number: 10778660
    Abstract: Systems and methods for incorporating state machine information for tracking processing ownership of messages received by network service providers. As individual messages are received, the state machine provides any previously tracked ownership state. If a message has not been previously allocated to a specific message processing system, the state can be updated to designate processing ownership. The processing ownership can be allocated based on the allocations among the message processing systems.
    Type: Grant
    Filed: September 21, 2016
    Date of Patent: September 15, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Sasanka Rajaram, Deep Dixit, Raghunathan Kothandaraman, Peng Tea
  • Patent number: 10769067
    Abstract: A cache interconnect and method of operating a cache interconnect are disclosed. In the cache interconnect snoop circuitry stores a table containing an entry, for each of a plurality of cache lines, comprising a cache line identifier, an indication of a most recent processing element of a plurality of processing elements associated with the cache interconnect to access the cache line, and an indication of a data item in the cache line which was identified by the most recent processing element to be accessed. In response to a request from a requesting processing element of the plurality of processing elements, the request identifying a requested data item, the snoop circuitry determines a requested cache line identifier corresponding to the requested data item and looks up that identifier in the table.
    Type: Grant
    Filed: November 7, 2018
    Date of Patent: September 8, 2020
    Assignee: Arm Limited
    Inventor: Alasdair Grant
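The snoop table described in this abstract can be sketched as a small data structure. The field names, the flat dict, and the 64-byte line size are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SnoopEntry:
    """One row of the sketched snoop table: the cache line identifier,
    the most recent processing element to access the line, and which
    data item within the line that element accessed."""
    line_id: int
    last_accessor: int   # most recent processing element to touch the line
    last_item: int       # address of the data item it accessed

def lookup(table, requested_item, line_size=64):
    """Derive the requested cache line identifier from the requested data
    item's address and look it up in the table, as the abstract describes."""
    line_id = requested_item // line_size
    return table.get(line_id)
```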
  • Patent number: 10761987
    Abstract: An apparatus and method are provided for processing ownership upgrade requests in relation to cached data. The apparatus has a plurality of processing units, at least some of which have associated cache storage. A coherent interconnect couples the plurality of processing units with memory, the coherent interconnect having a snoop unit used to implement a cache coherency protocol when a request received by the coherent interconnect identifies a cacheable memory address within the memory. Contention management circuitry is provided to control contended access to a memory address by two or more processing units within the plurality of processing units.
    Type: Grant
    Filed: November 28, 2018
    Date of Patent: September 1, 2020
    Assignee: Arm Limited
    Inventors: Jamshed Jalal, Mark David Werkheiser, Michael Filippo, Klas Magnus Bruce, Paul Gilbert Meyer
  • Patent number: 10754777
    Abstract: Data units are stored in private caches in nodes of a multiprocessor system, each node containing at least one processor (CPU), at least one cache private to the node, and at least one cache location buffer (CLB) private to the node. In each CLB, location information values are stored, each location information value indicating a location associated with a respective data unit, wherein each location information value stored in a given CLB indicates the location to be either a location within the private cache disposed in the same node as the given CLB, a location in one of the other nodes, or a location in a main memory. Coherence of values of the data units is maintained using a cache coherence protocol. The location information values stored in the CLBs are updated by the cache coherence protocol in accordance with movements of their respective data units.
    Type: Grant
    Filed: November 4, 2016
    Date of Patent: August 25, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Erik Hagersten, Andreas Sembrant, David Black-Schaffer
  • Patent number: 10754559
    Abstract: A first storage system is configured to participate in a replication process with a second storage system using an active-active configuration. A request for a time-to-live (TTL) grant is received in the first storage system from the second storage system. The first storage system computes an estimate of a difference between local times in the respective first and second storage systems, utilizes the computed estimate in the first storage system to determine a TTL expiration time in the local time in the second storage system, and sends the TTL grant with the TTL expiration time to the second storage system in response to the request. The computed estimate of the difference between the local times in the respective first and second storage systems is illustratively utilized in the first storage system to determine a range for the local time in the second storage system.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: August 25, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: David Meiri, Anton Kucherov
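The clock-offset arithmetic in this abstract can be sketched in a few lines. The function name, the single-sample offset estimate, and the `max_skew` safety margin are illustrative assumptions about how the estimate might be applied.

```python
def grant_ttl(local_now, remote_now_sample, ttl_seconds, max_skew=0.0):
    """Sketch of the TTL-grant computation: estimate the difference
    between the two systems' local clocks from a sampled remote
    timestamp, then express the TTL expiration time in the *remote*
    system's local time, optionally tightened by a skew margin."""
    clock_diff = remote_now_sample - local_now            # remote minus local
    expiry_local = local_now + ttl_seconds                # expiry, local clock
    return expiry_local + clock_diff - max_skew           # expiry, remote clock
```

Granting the expiry in the remote system's own clock domain avoids the two sides disagreeing about when the grant lapses.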
  • Patent number: 10740239
    Abstract: A multiprocessor data processing system includes a processor core having a translation structure for buffering a plurality of translation entries. In response to receipt of a translation invalidation request, the processor core determines from the translation invalidation request that the translation invalidation request does not require draining of memory referent instructions for which address translation has been performed by reference to a translation entry to be invalidated. Based on the determination, the processor core invalidates the translation entry in the translation structure and confirms completion of invalidation of the translation entry without regard to draining from the processor core of memory access requests for which address translation was performed by reference to the translation entry.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: August 11, 2020
    Assignee: International Business Machines Corporation
    Inventors: Derek E. Williams, Guy L. Guthrie, Hugh Shen
  • Patent number: 10740235
    Abstract: A technique includes, in response to a cache miss occurring with a given processing node of a plurality of processing nodes, using a directory-based coherence system for the plurality of processing nodes to regulate snooping of an address that is associated with the cache miss. Using the directory-based coherence system to regulate whether the address is included in a snooping domain is based at least in part on a number of cache misses associated with the address.
    Type: Grant
    Filed: July 31, 2015
    Date of Patent: August 11, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Alexandros Daglis, Paolo Faraboschi, Qiong Cai, Gary Gostin
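The miss-count policy in this abstract can be sketched as a tiny directory. The class shape and the threshold value are illustrative assumptions; the patent leaves the exact regulation policy open.

```python
class SnoopDirectory:
    """Sketch of regulating the snooping domain by miss count: each
    address accumulates a per-address cache-miss counter, and only
    addresses whose count crosses a threshold are admitted into the
    snooping domain."""
    def __init__(self, threshold=4):
        self.threshold = threshold
        self.miss_counts = {}

    def record_miss(self, address):
        self.miss_counts[address] = self.miss_counts.get(address, 0) + 1

    def in_snoop_domain(self, address):
        # frequently-missed (hot) addresses are worth snooping; cold ones are not
        return self.miss_counts.get(address, 0) >= self.threshold
```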
  • Patent number: 10740260
    Abstract: The present invention relates to control circuitry that includes a circuit configured to receive a system level cache (SLC) dirty-set request comprising a dirty set flag, a memory address, and an address of a cache line (LA) in an SLC data array. The circuitry converts the memory address to a dynamic random-access memory (DRAM) page address (PA), which identifies a DRAM bank and a DRAM page, and identifies whether a hit is present according to whether the DRAM PA matches the PA in any valid entry in a dirty line links cache (DLL$).
    Type: Grant
    Filed: May 12, 2017
    Date of Patent: August 11, 2020
    Assignee: LG ELECTRONICS INC.
    Inventors: Arkadi Avrukin, Seungyoon Song, Yongjae Hong, Michael Frank, Hoshik Kim, Jungsook Lee
  • Patent number: 10733102
    Abstract: A processor core executes a first instruction indicating a first coherence state update policy that biases the cache memory to retain write authority, thereafter executes a second instruction indicating a second coherence state update policy that biases the cache memory to transfer write authority, and executes a store instruction following the first instruction in program order to generate a store request. A cache memory stores the cache line in association with a coherence state field set to a first modified coherence state. In response to the store request, the cache memory updates data of the cache line. If the store instruction is executed prior to the second instruction, the cache memory refrains from updating the coherence state field, but if the store instruction is executed after the second instruction, the cache memory updates the coherence state field from the first modified coherence state to a second modified coherence state.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: August 4, 2020
    Assignee: International Business Machines Corporation
    Inventors: Derek E. Williams, Guy L. Guthrie
  • Patent number: 10721719
    Abstract: Methods and systems for optimized caching of data in a network of nodes are described herein. A server node of a plurality of server nodes may receive, from a device (e.g., a client device), a request for data. The request may be transmitted to the server node via a load balancing device. The server node may retrieve the data requested by the device. The server node may cache, at a cache location internal to the server node, the data requested by the device. The server node may also transmit a request to update a data mapping table to indicate a mapping between the server node and the data requested by the device.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: July 21, 2020
    Assignee: Citrix Systems, Inc.
    Inventor: Shaunak Mistry
  • Patent number: 10705958
    Abstract: A processor partitions a coherency directory into different regions for different processor cores and manages the number of entries allocated to each region based at least in part on monitored recall costs indicating expected resource costs for reallocating entries. Examples of monitored recall costs include a number of cache evictions associated with entry reallocation, the hit rate of each region of the coherency directory, and the like, or a combination thereof. By managing the entries allocated to each region based on the monitored recall costs, the processor ensures that processor cores associated with denser memory access patterns (that is, memory access patterns that more frequently access cache lines associated with the same memory pages) are assigned more entries of the coherency directory.
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: July 7, 2020
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Michael W. Boyer, Gabriel H. Loh, Yasuko Eckert, William L. Walker
  • Patent number: 10698825
    Abstract: In a system-on-chip there is a local interconnect to connect local devices on the chip to one another, a gateway to connect the chip to a remote chip of a plurality of chips in a cache-coherent multi-chip system via an inter-chip interconnect, and a cache-coherent device. The cache-coherent device has a cache-coherency look-up table having entries for shared cache data lines. When a data access request is received via the inter-chip interconnect and the local interconnect a system-unique identifier for a request source of the data access request is generated in dependence on an inter-chip request source identifier used on the inter-chip interconnect and an identifier indicative of the remote chip. The bit-set used to express the system-unique identifier is larger than the bit-set used to express the inter-chip request source identifier.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: June 30, 2020
    Assignee: Arm Limited
    Inventors: Gurunath Ramagiri, Ashok Kumar Tummala, Mark David Werkheiser, Jamshed Jalal, Premkishore Shivakumar, Paul Gilbert Meyer
  • Patent number: 10691599
    Abstract: A data processing system includes a processor core and a cache memory storing a cache line associated with a coherence state field set to a first of multiple modified coherence states. The processor core executes a store instruction including a field having a setting that indicates a coherence state update policy and, based on the store instruction, generates a corresponding store request including the setting, store data, and a target address. Responsive to the store request, the cache memory updates data of the cache line utilizing the store data. The cache memory refrains from updating the coherence state field based on the setting indicating a first coherence state update policy and updates the coherence state field from the first modified coherence state to a second modified coherence state based on the setting indicating a second coherence state update policy.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: June 23, 2020
    Assignee: International Business Machines Corporation
    Inventors: Derek E. Williams, Guy L. Guthrie
  • Patent number: 10691550
    Abstract: A storage control apparatus includes a memory configured to store meta-information for associating addresses of a logical area and a physical area with each other, and a processor coupled to the memory and configured to read out first meta-information corresponding to a first logical area from the memory, specify a first address of the physical area corresponding to a copy source address of the data based on the first meta-information, read out second meta-information corresponding to a second logical area that is set as a copy destination of the data in the logical area from the memory, specify a second address of the physical area corresponding to a copy destination address of the data based on the second meta-information, and execute copy of the data by associating the first address and the second address with each other as storage areas of the data.
    Type: Grant
    Filed: April 18, 2018
    Date of Patent: June 23, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Yoshinari Shinozaki, Takeshi Watanabe, Norihide Kubota, Yoshihito Konta, Toshio Kikuchi, Naohiro Takeda, Yusuke Kurasawa, Yuji Tanaka, Marino Kajiyama, Yusuke Suzuki
  • Patent number: 10691375
    Abstract: In one example, a memory network may control access to a shared memory that is used by multiple compute nodes. The memory network may control the access to the shared memory by receiving a memory access request originating from an application executing on the multiple compute nodes and determining a priority for processing the memory access request. The priority determined by the memory network may correspond to a memory address range in the memory that is specifically used by the application.
    Type: Grant
    Filed: January 30, 2015
    Date of Patent: June 23, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Vanish Talwar, Paolo Faraboschi, Daniel Gmach, Yuan Chen, Al Davis, Adit Madan
  • Patent number: 10691667
    Abstract: In accordance with embodiments, there are provided mechanisms and methods for selecting amongst a plurality of processes to send a message (e.g. a message for updating an endpoint system, etc.). These mechanisms and methods for selecting amongst a plurality of processes to send a message can enable embodiments to utilize more than one queue for sending such messages. The ability of embodiments to provide such a multi-process feature can, in turn, prevent the latency that typically accompanies a mounting number of messages.
    Type: Grant
    Filed: April 6, 2017
    Date of Patent: June 23, 2020
    Assignee: salesforce.com, inc.
    Inventors: Benji Jasik, Simon Zak Fell
  • Patent number: 10681180
    Abstract: A system and method dynamically transitions the file system role of compute nodes in a distributed clustered file system for an object that includes an embedded compute engine (a storlet). Embodiments of the invention overcome prior-art problems of a storlet in a distributed storage system by providing a storlet engine with a dynamic role module, which dynamically assigns or changes the file system role served by a node to a role that is more optimally suited for a computation operation in the storlet. The role assignment is made based on a classification of the computation operation and the appropriate filesystem role that matches the computation operation. For example, a role could be assigned which helps reduce storage needs, communication resources, etc.
    Type: Grant
    Filed: March 16, 2019
    Date of Patent: June 9, 2020
    Assignee: International Business Machines Corporation
    Inventors: Duane M. Baldwin, Sasikanth Eda, John T. Olson, Sandeep R. Patil
  • Patent number: 10657057
    Abstract: A data processing system includes a processor, a cache memory, a speculative cache memory, and a control circuit. The processor is for executing instructions. The cache memory is coupled to the processor and is for storing the instructions and related data. A speculative cache is coupled to the processor and is for storing only speculative instructions and related data. The control circuit is coupled to the processor, to the cache memory, and to the speculative cache. The control circuit is for causing speculative instructions to be stored in the speculative cache in response to receiving an indication from the processor. Also, a method is provided for speculative execution in the data processing system.
    Type: Grant
    Filed: April 4, 2018
    Date of Patent: May 19, 2020
    Assignee: NXP B.V.
    Inventor: Nikita Veshchikov
  • Patent number: 10656983
    Abstract: Methods and apparatus to generate a shadow setup based on a cloud environment and upgrade the shadow setup to identify upgrade-related errors are disclosed. An example apparatus includes a topology deployment determiner to deploy a shadow setup corresponding to a replica version of a live cloud environment; an upgrade coordinator to upgrade one or more components of the shadow setup; and a reporter to generate a report corresponding to the upgrade.
    Type: Grant
    Filed: November 7, 2017
    Date of Patent: May 19, 2020
    Assignee: NICIRA, INC.
    Inventors: Prashant Shelke, Sharwari Phadnis, Kiran Bhalgat, Yogesh Vhora, Kartiki Kale, Dipesh Bhatewara
  • Patent number: 10649853
    Abstract: A computer system comprises a processor unit arranged to run a hypervisor running one or more virtual machines; a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line and an image modification flag; and a memory connected to the cache and arranged to store an image of at least one virtual machine. The processor unit is arranged to define a log in the memory and the cache further comprises a cache controller arranged to set the image modification flag for a cache line modified by a virtual machine being backed up, but not for a cache line modified by the hypervisor operating in privilege mode; periodically check the image modification flags; and write only the memory address of the flagged cache rows in the defined log.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: May 12, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Guy Lynn Guthrie, Naresh Nayar, Geraint North, Hugh Shen, William Starke, Phillip Williams
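The flag-and-log scheme in this abstract can be sketched in software. The class, field names, and sweep method are illustrative assumptions standing in for the patent's cache controller.

```python
class BackupCache:
    """Sketch of the described logging: set an image-modification flag on
    cache lines written by the backed-up virtual machine (but not by the
    hypervisor in privilege mode), then periodically write only the
    flagged rows' memory addresses to the log."""
    def __init__(self):
        self.rows = {}          # address -> {"line": data, "flag": bool}
        self.log = []

    def write(self, address, data, by_hypervisor=False):
        row = self.rows.setdefault(address, {"line": None, "flag": False})
        row["line"] = data
        if not by_hypervisor:
            row["flag"] = True  # only VM writes mark the image as modified

    def sweep(self):
        """Periodic check: log addresses (not data) of flagged rows."""
        for address, row in self.rows.items():
            if row["flag"]:
                self.log.append(address)
                row["flag"] = False
```

Logging only addresses keeps the backup log small; the backup process can later read the current line contents for each logged address.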
  • Patent number: 10649684
    Abstract: An apparatus has a monitoring data store for storing monitoring data indicating regions of a memory address space to be monitored for changes, which can include at least two non-contiguous regions. Processing circuitry updates the monitoring data in response to an update monitor instruction. Monitoring circuitry monitors accesses to the memory system and provides a notification to the processing circuitry when data associated with one of the monitored regions has changed. This improves performance and energy efficiency by reducing the overhead of polling changes to multiple regions.
    Type: Grant
    Filed: March 16, 2017
    Date of Patent: May 12, 2020
    Assignee: ARM Limited
    Inventors: Geoffrey Wyman Blake, Pavel Shamis
  • Patent number: 10635588
    Abstract: A processing system includes a first set of one or more processing units including a first processing unit, a second set of one or more processing units including a second processing unit, and a memory having an address space shared by the first and second sets. The processing system further includes a distributed coherence directory subsystem having a first coherence directory to support a first subset of one or more address regions of the address space and a second coherence directory to support a second subset of one or more address regions of the address space. In some implementations, the first coherence directory is implemented in the system so as to have a lower access latency for the first set, whereas the second coherence directory is implemented in the system so as to have a lower access latency for the second set.
    Type: Grant
    Filed: June 5, 2018
    Date of Patent: April 28, 2020
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Yasuko Eckert, Maurice B. Steinman, Steven Raasch
  • Patent number: 10635603
    Abstract: An address translation facility is provided for multiple virtualization levels, where a guest virtual address may be translated to a guest non-virtual address, the guest non-virtual address corresponding without translation to a host virtual address, and the host virtual address may be translated to a host non-virtual address, where translation within a virtualization level may be specified as a sequence of accesses to address translation tables. The address translation facility may include a first translation engine and a second translation engine, where the first and second translation engines each have capacity to perform address translation within a single virtualization level of the multiple virtualization levels.
    Type: Grant
    Filed: April 8, 2019
    Date of Patent: April 28, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Uwe Brandt, Markus Helms, Christian Jacobi, Markus Kaltenbach, Thomas Koehler, Frank Lehnert
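The two-level translation in this abstract can be sketched with flat page tables. Plain dicts stand in for the patent's sequences of address-translation table accesses, and all names and the 4 KiB page size are illustrative assumptions.

```python
def translate(gva, guest_tables, host_tables, page=4096):
    """Sketch of nested translation: a guest virtual address is first
    translated through guest tables to a guest non-virtual address,
    which (per the abstract) corresponds without translation to a host
    virtual address, and is then translated through host tables to a
    host non-virtual address."""
    offset = gva % page
    guest_real = guest_tables[gva // page] * page + offset  # guest-level walk
    host_virtual = guest_real                               # identity mapping
    return host_tables[host_virtual // page] * page + offset  # host-level walk
```

The two table walks are independent, which is what lets the abstract's two translation engines each handle a single virtualization level.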
  • Patent number: 10628053
    Abstract: The present disclosure provides methods and systems for improving a data transfer rate from an intelligent electronic device (IED) to external PC clients, via a network interface. In one embodiment, an FTP-based approach is disclosed which allows for significant optimization of download speeds, providing as much as 100 times the download speed capability. In accordance with one aspect of the present disclosure, an improved data rate is achieved by utilizing a high-speed transfer protocol, such as the FTP protocol, in conjunction with a novel file system incorporated into the IED.
    Type: Grant
    Filed: July 13, 2015
    Date of Patent: April 21, 2020
    Assignee: Electro Industries/Gauge Tech
    Inventors: Joseph Spanier, Wei Wang, Dulciane Siqueira da Silva
  • Patent number: 10628312
    Abstract: A data processing system including a cache operably coupled to an interconnect and a cache controller. The cache is accessible by each bus initiator of a plurality of bus initiators. The cache includes a plurality of entries. Each entry includes a status field having coherency bits. When an entry of the plurality of entries is in a first protocol mode, the cache controller uses the coherency bits of the entry in implementing a first cache coherency protocol for data of the entry. When the entry is in a second protocol mode, the cache controller uses the coherency bits of the entry in implementing a second cache coherency protocol. The second cache coherency protocol is utilized in implementing a paced data transfer operation between a first bus initiator of the plurality of bus initiators and a second bus initiator of the plurality of bus initiators using the cache entry.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: April 21, 2020
    Assignee: NXP USA, Inc.
    Inventors: Paul Kimelman, Brian Christopher Kahne, Ehud Kalekin
  • Patent number: 10620848
    Abstract: An aspect concerns an electronic cryptographic device (100), comprising a cache memory configured to cache a further memory, a mask storage configured for storing a mask, a mask generator configured to generate the mask and to store the mask in the mask storage, a cache write mechanism configured to write a content of the further memory in the cache memory masked with the mask stored in the mask storage, a cache read mechanism configured to read a content of the cache memory unmasked with the mask stored in the mask storage.
    Type: Grant
    Filed: July 6, 2018
    Date of Patent: April 14, 2020
    Assignee: INTRINSIC ID B.V.
    Inventors: Petrus Wijnandus Simons, Svennius Leonardus Maria Goossens
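    The masked-cache scheme above amounts to XOR-masking lines on write and unmasking them on read. A toy model, assuming XOR masking and a per-cache random mask (the class name and line size are illustrative, not from the patent):

```python
import secrets

class MaskedCache:
    """Lines are stored XORed with a random mask and unmasked on read."""
    def __init__(self, line_size=16):
        self.mask = secrets.token_bytes(line_size)  # mask generator + mask storage
        self.lines = {}

    def write(self, addr, data: bytes):
        # Cache write mechanism: store the content masked with the mask.
        self.lines[addr] = bytes(d ^ m for d, m in zip(data, self.mask))

    def read(self, addr) -> bytes:
        # Cache read mechanism: XOR again to unmask the stored content.
        return bytes(c ^ m for c, m in zip(self.lines[addr], self.mask))

cache = MaskedCache()
cache.write(0x40, b"secret data 1234")
print(cache.read(0x40))   # the plaintext round-trips through the masked store
```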
  • Patent number: 10621105
    Abstract: An address translation facility is provided for multiple virtualization levels, where a guest virtual address may be translated to a guest non-virtual address, the guest non-virtual address corresponding without translation to a host virtual address, and the host virtual address may be translated to a host non-virtual address, where translation within a virtualization level may be specified as a sequence of accesses to address translation tables. The address translation facility may include a first translation engine and a second translation engine, where the first and second translation engines each have capacity to perform address translation within a single virtualization level of the multiple virtualization levels.
    Type: Grant
    Filed: April 8, 2019
    Date of Patent: April 14, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Uwe Brandt, Markus Helms, Christian Jacobi, Markus Kaltenbach, Thomas Koehler, Frank Lehnert
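    The two-level translation described above (guest virtual to guest non-virtual, which is itself a host virtual address, then host virtual to host non-virtual) can be sketched with one lookup per virtualization level. Page tables are modeled as plain dicts and all addresses are invented for the example:

```python
PAGE = 4096

def translate(table, addr):
    """One translation engine: one lookup within a single virtualization level."""
    frame = table.get(addr // PAGE)
    if frame is None:
        raise KeyError("page fault at 0x%x" % addr)
    return frame * PAGE + addr % PAGE

guest_table = {0x10: 0x200}   # guest page 0x10 -> guest frame 0x200
host_table = {0x200: 0x7F0}   # host page 0x200 -> host frame 0x7F0

gva = 0x10 * PAGE + 0x123
hva = translate(guest_table, gva)   # first engine: guest level
hpa = translate(host_table, hva)    # second engine: host level
print(hex(hpa))                     # 0x7F0 * PAGE + 0x123
```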
  • Patent number: 10616076
    Abstract: The network asset management apparatus includes a receiver module, an inquiry module, a translator module, and a sending module. The receiver module receives a request from a host to manage a network asset. The request has a first command format corresponding to the host. The inquiry module determines a second command format compatible with a target of the request. The translator module translates the request from the first command format to the second command format. The sending module provides the translated request for communication to the target of the request.
    Type: Grant
    Filed: May 30, 2017
    Date of Patent: April 7, 2020
    Assignee: International Business Machines Corporation
    Inventor: Akshat Mithal
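    The four-module flow in this abstract (receive, inquire, translate, send) reduces to a format lookup followed by a mapping step. A sketch with invented command formats and an invented target table:

```python
# Hypothetical target-to-format table (the inquiry module's knowledge).
FORMAT_BY_TARGET = {"switch-01": "cli", "array-02": "smi-s"}

# Hypothetical translators from the host's format to each target format.
TRANSLATORS = {
    ("rest", "cli"): lambda req: "show %s" % req["resource"],
    ("rest", "smi-s"): lambda req: "<GetInstance name='%s'/>" % req["resource"],
}

def handle(request):
    target_format = FORMAT_BY_TARGET[request["target"]]          # inquiry module
    translate = TRANSLATORS[(request["format"], target_format)]  # translator module
    return translate(request)                                    # sent to the target

req = {"format": "rest", "target": "switch-01", "resource": "ports"}
print(handle(req))   # "show ports"
```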
  • Patent number: 10613940
    Abstract: A computer system comprises a processor unit arranged to run a hypervisor running one or more virtual machines; a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line and an image modification flag; and a memory connected to the cache and arranged to store an image of at least one virtual machine. The processor unit is arranged to define a log in the memory and the cache further comprises a cache controller arranged to set the image modification flag for a cache line modified by a virtual machine being backed up, but not for a cache line modified by the hypervisor operating in privilege mode; periodically check the image modification flags; and write only the memory address of the flagged cache rows in the defined log.
    Type: Grant
    Filed: November 7, 2017
    Date of Patent: April 7, 2020
    Assignee: International Business Machines Corporation
    Inventors: Guy Lynn Gutherie, Naresh Nayar, Geraint North, Hugh Shen, William Starke, Phillip Williams
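    The selective dirty-tracking above can be modeled simply: the controller flags lines modified by the backed-up virtual machine but not those modified by the hypervisor in privilege mode, and the periodic check logs only flagged addresses. Class and method names are illustrative:

```python
class TrackingCache:
    def __init__(self):
        self.flags = {}           # addr -> image modification flag

    def write(self, addr, from_hypervisor=False):
        if not from_hypervisor:   # hypervisor writes are not part of the VM image
            self.flags[addr] = True

    def flush_log(self):
        # Periodic check: write only flagged addresses to the log, then reset.
        log = [a for a, f in self.flags.items() if f]
        self.flags.clear()
        return log

cache = TrackingCache()
cache.write(0x1000)                         # VM write -> flagged
cache.write(0x2000, from_hypervisor=True)   # privileged hypervisor write -> not flagged
print(cache.flush_log())                    # [4096]
```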
  • Patent number: 10594742
    Abstract: Exemplary methods for performing web service calls include receiving, from a first client application associated with a first thread, a first request to establish a first connection with a first service endpoint providing a first service, the first request including a first connection key. The methods further include in response to the first request, identifying a first stub manager object that corresponds to the first connection key and the first thread, the first stub manager object representing a first instance of a stub manager. The methods further include providing, exclusively to the first client application of the first thread, the first stub manager object, wherein the first client application of the first thread is to use the first stub manager object for communicating with the first service endpoint.
    Type: Grant
    Filed: March 9, 2015
    Date of Patent: March 17, 2020
    Assignee: EMC IP Holding Company LLC
    Inventor: Daniel Fowler
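    The exclusivity guarantee above (one stub manager instance per thread and connection key) maps naturally onto a dictionary keyed by the pair. A sketch with a stand-in `StubManager` class; the endpoint URL is invented:

```python
import threading

class StubManager:
    """Stand-in for a stub manager instance bound to one service endpoint."""
    def __init__(self, endpoint):
        self.endpoint = endpoint

_managers = {}
_lock = threading.Lock()

def get_stub_manager(connection_key, endpoint):
    # Keyed by (thread, connection key), so no two threads share an instance.
    key = (threading.get_ident(), connection_key)
    with _lock:
        if key not in _managers:
            _managers[key] = StubManager(endpoint)
        return _managers[key]

m1 = get_stub_manager("svc-a", "https://svc-a.example/ws")
m2 = get_stub_manager("svc-a", "https://svc-a.example/ws")
print(m1 is m2)   # True: same thread + same key -> same stub manager object
```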
  • Patent number: 10592493
    Abstract: A database engine may maintain a collection of data on a first storage device. A workflow manager node may receive a request to bulk load data into the collection. The workflow manager may instruct a control plane node to allocate and configure a secondary database node and to make operable thereon a second database using a second storage device. Data may be bulk loaded to the second storage device using a schema and storage unit format compatible with the collection of data. Storage units from the second storage device may be transferred to the first storage device and integrated into the collection of data.
    Type: Grant
    Filed: June 27, 2016
    Date of Patent: March 17, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Ammon Sutherland, Stefano Stefani
  • Patent number: 10579529
    Abstract: Maintaining multiple cache areas in a storage device having multiple processors includes loading data from a specific portion of non-volatile storage into a local cache slot in response to a specific processor of a first subset of the processors performing a read operation to the specific portion of non-volatile storage, where the local cache slot is accessible to the first subset of the processors and is inaccessible to a second subset of the processors that is different than the first subset of the processors, and includes converting the local cache slot into a global cache slot in response to one of the processors performing a write operation to the specific portion of non-volatile storage, where the global cache slot is accessible to the first subset of the processors and to the second subset of the processors. Different ones of the processors may be placed on different directors.
    Type: Grant
    Filed: April 27, 2018
    Date of Patent: March 3, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Venkata Khambam, Jeffrey R. Nelson, Brian Asselin, Rong Yu
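    The local-to-global promotion above can be captured in a few lines: a slot created by a read is visible only to the first processor subset, and any write to the backing portion promotes it to global scope. This is an illustrative state model only; names and values are invented:

```python
class CacheSlot:
    def __init__(self, portion):
        self.portion = portion
        self.scope = "local"    # accessible only to the first processor subset

    def on_write(self):
        self.scope = "global"   # now accessible to both processor subsets

slot = CacheSlot(portion=0x5000)  # loaded by a read from a subset-1 processor
print(slot.scope)                 # "local"
slot.on_write()                   # any processor writes the backing portion
print(slot.scope)                 # "global"
```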
  • Patent number: 10579644
    Abstract: Data synchronization includes receiving an update request from a client system for a first record set, wherein the update request includes search criteria used to initially determine the first record set and hash summaries of records of the first record set, and searching a data storage device for records matching the search criteria. The searching generates a second record set of records having hash summaries. Record identifiers of records of the second record set may be compared with record identifiers of the hash summaries of the first record set.
    Type: Grant
    Filed: November 17, 2015
    Date of Patent: March 3, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kenneth L. Greenlee, Thomas T. Hanis, Sunil K. Mishra, Donnie A. Smith, Jr.
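    The synchronization check above boils down to diffing hash summaries keyed by record identifier. A sketch assuming SHA-256 summaries over invented record bodies (the patent does not specify a hash function):

```python
import hashlib

def summarize(records):
    # Hash summary per record identifier.
    return {rid: hashlib.sha256(body.encode()).hexdigest()
            for rid, body in records.items()}

client_set = {"r1": "alpha", "r2": "beta"}                      # first record set
server_set = {"r1": "alpha", "r2": "beta-updated", "r3": "gamma"}  # second record set

client_hashes = summarize(client_set)
server_hashes = summarize(server_set)

# Compare record identifiers and hash summaries of the two record sets.
changed = [r for r in client_hashes if server_hashes.get(r) != client_hashes[r]]
new = [r for r in server_hashes if r not in client_hashes]
print(changed, new)   # ['r2'] ['r3']
```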
  • Patent number: 10579692
    Abstract: Disclosed are examples of systems, apparatus, methods and computer program products for providing a web application builder framework in a database system. A database system maintains a multi-tenant non-relational database associated with a number of enterprises, a number of records, and a number of data objects for each of the enterprises. A dynamic virtual table is maintained as well, associated with the number of records and number of data objects. A user request is received to define a composite key for a data object. A metadata model is generated representing the data object, and a data definition script is generated. The dynamic virtual table is updated to include one or more virtual columns corresponding to the data definition script, and one or more columns of a shared table in the non-relational database are updated to match the virtual columns.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: March 3, 2020
    Assignee: salesforce.com, inc.
    Inventors: Eli Levine, Samarpan Jain, James Ferguson, Jan Asita Fernando
  • Patent number: 10572347
    Abstract: A computer program product is provided for managing point in time copies of data in object storage. The computer program product comprises a computer readable storage medium having program instructions embodied therewith. The program instructions are executable by a processor to cause the processor to create point in time copies of data, and send the point in time copies of the data to an object storage system. Also, the program instructions are executable by the processor to cause the processor to send a directive for manipulating the point in time copies of the data.
    Type: Grant
    Filed: September 23, 2015
    Date of Patent: February 25, 2020
    Assignee: International Business Machines Corporation
    Inventors: Robert B. Basham, Joseph W. Dain, Matthew J. Fairhurst
  • Patent number: 10564699
    Abstract: In one embodiment, the present invention is directed to a processor having a plurality of cores and a cache memory coupled to the cores and including a plurality of partitions. The processor can further include a logic to dynamically vary a size of the cache memory based on a memory boundedness of a workload executed on at least one of the cores. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: February 18, 2020
    Assignee: Intel Corporation
    Inventors: Avinash N. Ananthakrishnan, Efraim Rotem, Eliezer Weissmann, Doron Rajwan, Nadav Shulman, Alon Naveh, Hisham Abu-Salah
  • Patent number: 10565354
    Abstract: An apparatus and method for protecting content in a graphics processor. For example, one embodiment of an apparatus comprises: encode/decode circuitry to decode protected audio and/or video content to generate decoded audio and/or video content; a graphics cache of a graphics processing unit (GPU) to store the decoded audio and/or video content; first protection circuitry to set a protection attribute for each cache line containing the decoded audio and/or video data in the graphics cache; a cache coherency controller to generate a coherent read request to the graphics cache; second protection circuitry to read the protection attribute to determine whether the cache line identified in the read request is protected, wherein if it is protected, the second protection circuitry to refrain from including at least some of the data from the cache line in a response.
    Type: Grant
    Filed: April 7, 2017
    Date of Patent: February 18, 2020
    Assignee: Intel Corporation
    Inventors: Joydeep Ray, Abhishek R. Appu, Pattabhiraman K, Balaji Vembu, Altug Koker
  • Patent number: 10567388
    Abstract: A policy/resource decommissioning service determines whether a resource has been inactive for a period of time greater than at least one period of time threshold for decommissioning. If the resource has been inactive greater than a first period of time threshold, the service disables the resource such that requests to access the resource are denied. If the resource has been inactive for a period of time greater than a second threshold, longer than the first period of time threshold, the service archives the resource. The service deletes the resource if the inactivity period of the resource is greater than a third period of time threshold, where the third period of time threshold is longer than the first and the second period of time thresholds.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: February 18, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: William Frederick Hingle Kruse, Jeffrey John Wierer, Nima Sharifi Mehr, Ashish Rangole, Kunal Chadha, Bharath Mukkati Prakash, Radu Mihai Berciu, Kai Zhao, Hardik Nagda, Chenxi Zhang
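    The three-threshold policy above (disable, then archive, then delete, at successively longer inactivity periods) can be sketched as a simple cascade. The threshold values here are invented for illustration; the patent does not specify them:

```python
# Hypothetical thresholds in days; ordered shortest to longest per the abstract.
DISABLE_AFTER = 30
ARCHIVE_AFTER = 90
DELETE_AFTER = 365

def decommission_action(days_inactive):
    if days_inactive > DELETE_AFTER:
        return "delete"
    if days_inactive > ARCHIVE_AFTER:
        return "archive"
    if days_inactive > DISABLE_AFTER:
        return "disable"   # access requests to the resource are denied
    return "none"

for d in (10, 45, 120, 400):
    print(d, decommission_action(d))
# 10 none / 45 disable / 120 archive / 400 delete
```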
  • Patent number: 10565141
    Abstract: Systems and methods are provided that may be implemented to hide operating system kernel data in system management mode memory. An information handling system includes a system memory, central processing unit (CPU), and Basic Input Output System (BIOS). The CPU is operable in a system management mode and is programmable to specify an SMM region of the system memory that is only accessible when the CPU is operating in the SMM. The BIOS is programmed to save kernel data from a non-SMM region of the system memory to the SMM region and then clear the kernel data from the non-SMM region in response to an operating system (OS) generating a system management interrupt (SMI), and to restore the kernel data to the non-SMM region of the system memory from the SMM region in response to the OS generating an SMI.
    Type: Grant
    Filed: August 28, 2018
    Date of Patent: February 18, 2020
    Assignee: Dell Products L.P.
    Inventors: Craig L. Chaiken, Michael W. Arms, Ricardo L. Martinez
  • Patent number: 10559345
    Abstract: A decoder is disclosed that is used to select an area of address space in an Integrated Circuit. The decoder uses a hardware shifting module that performs shift operations on constants. Such a structure reduces an overall area consumption of the shifting module. Additionally, the decoder can perform a multi-bit shift operation in a single clock cycle.
    Type: Grant
    Filed: November 14, 2018
    Date of Patent: February 11, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Ron Diamant, Jonathan Cohen, Elad Valfer
  • Patent number: 10540184
    Abstract: Stores and/or loads are coalesced. In one example, a store request to store an architected register is obtained. A determination is made as to whether the store request is a potential start of a store sequence. Based on determining the store request is a potential start of a store sequence, a snapshot request to create a snapshot is initiated. The snapshot is to map architected registers with physical registers. Based on determining the store request is not a potential start of the store sequence, the store is performed.
    Type: Grant
    Filed: April 18, 2017
    Date of Patent: January 21, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 10540291
    Abstract: Translation lookaside buffer (TLB) tracking and managing technologies are described. A processing device comprises a translation lookaside buffer (TLB) and a processing core to execute a virtual machine monitor (VMM), the VMM to manage a virtual machine (VM) including virtual processors. The processing core to execute, via the VM, a plurality of conversion instructions on at least one of the virtual processors to convert a plurality of non-secure pages to a plurality of secure pages. The processing core also to execute, via the VM, one or more allocation instructions on the at least one of the virtual processors to allocate at least one secure page of the plurality of secure pages, execution of the one or more allocation instructions to include determining whether the TLB is cleared of mappings to the at least one secure page prior to allocating the at least one secure page.
    Type: Grant
    Filed: May 10, 2017
    Date of Patent: January 21, 2020
    Assignee: Intel Corporation
    Inventors: Krystof C. Zmudzinski, Carlos V. Rozas, Francis X. McKeen, Rebekah M. Leslie-Hurd, Meltem Ozsoy, Somnath Chakrabarti, Mona Vij