Private Caches Patents (Class 711/121)
  • Patent number: 11755481
    Abstract: Techniques for universal cache management are described. In an example embodiment, a plurality of caches are allocated, in volatile memory of a computing device, to a plurality of data-processing instances, where each one of the plurality of caches is exclusively allocated to a separate one of the plurality of data-processing instances. A common cache is allocated in the volatile memory of the computing device, where the common cache is shared by the plurality of data-processing instances. Each instance of the plurality of data-processing instances is configured to: identify a data block in the particular cache allocated to that instance, where the data block has not been changed since the data block was last persistently written to one or more storage devices; cause the data block to be stored in the common cache; and remove the data block from the particular cache. Data blocks in the common cache are maintained without being persistently written to the one or more storage devices.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: September 12, 2023
    Assignee: Oracle International Corporation
    Inventors: Prasad V. Bagal, Rich Long
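The demotion flow this abstract describes can be sketched in a few lines. This is a minimal illustrative model, not Oracle's implementation; the names `PrivateCache` and `demote_clean` are assumptions.

```python
# Sketch: each data-processing instance moves clean (unmodified since last
# persist) blocks from its exclusive private cache into a shared common
# cache, freeing private-cache space without a disk write.

class PrivateCache:
    def __init__(self):
        self.blocks = {}  # block_id -> (data, dirty_flag)

    def put(self, block_id, data, dirty):
        self.blocks[block_id] = (data, dirty)

def demote_clean(private, common):
    """Move unmodified (clean) blocks into the shared common cache."""
    for block_id, (data, dirty) in list(private.blocks.items()):
        if not dirty:                      # unchanged since last persisted
            common[block_id] = data        # now held only in the common cache
            del private.blocks[block_id]   # reclaim private-cache space
```

Dirty blocks stay in the private cache, since the common cache is never persistently written back in the described scheme.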
  • Patent number: 11733932
    Abstract: Example implementations relate to managing data on a memory module. Data may be transferred between a first NVM and a second NVM on a memory module. The second NVM may have a higher memory capacity and a longer access latency than the first NVM. A mapping between a first address and a second address may be stored in an NVM on the memory module. The first address may refer to a location at which data is stored in the first NVM. The second address may refer to a location, in the second NVM, from which the data was copied.
    Type: Grant
    Filed: September 27, 2013
    Date of Patent: August 22, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Gregg B Lesartre, Andrew R Wheeler
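The two-tier arrangement above can be modeled as a small fast NVM fronting a larger, slower NVM, with a persisted mapping recording each fast-tier slot's home address in the slow tier. A hedged sketch; the class and method names are illustrative assumptions, not the patent's terminology.

```python
# Sketch: data moves between a fast first NVM and a larger, slower second
# NVM on the same module; the mapping (itself kept in NVM in the patent)
# ties a fast-tier slot back to the slow-tier address it was copied from.

class TieredModule:
    def __init__(self):
        self.fast = {}      # first NVM: slot -> data (low latency, small)
        self.slow = {}      # second NVM: address -> data (high capacity)
        self.mapping = {}   # persisted map: fast slot -> slow address

    def promote(self, slot, slow_addr):
        """Copy data from the slow NVM into a fast slot, recording the mapping."""
        self.fast[slot] = self.slow[slow_addr]
        self.mapping[slot] = slow_addr

    def writeback(self, slot):
        """Use the mapping to copy a modified fast slot back to its slow home."""
        self.slow[self.mapping[slot]] = self.fast[slot]
```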
  • Patent number: 11720493
    Abstract: System and methods are disclosed include a memory device and a processing device coupled to the memory device. The processing device can determine an amount of valid management units in a memory device of a memory sub-system. The processing device can then determine a surplus amount of valid management units on the memory device based on the amount of valid management units. The processing device can then configure a size of a cache of the memory device based on the surplus amount of valid management units.
    Type: Grant
    Filed: January 21, 2022
    Date of Patent: August 8, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Kevin R. Brandt, Peter Feeley, Kishore Kumar Muchherla, Yun Li, Sampath K. Ratnam, Ashutosh Malshe, Christopher S. Hale, Daniel J. Hubbard
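The sizing step reduces to simple arithmetic: the cache is provisioned from whatever valid capacity exceeds a reserve. The threshold and unit size below are illustrative assumptions, not Micron's parameters.

```python
# Sketch: size a cache from the surplus of valid management units above
# the amount reserved to guarantee the device's advertised capacity.

def surplus_cache_size(valid_units, reserved_units, unit_bytes):
    """Cache receives whatever valid capacity exceeds the reserved amount."""
    surplus = max(0, valid_units - reserved_units)
    return surplus * unit_bytes
```

As units go bad over the device's life, `valid_units` shrinks and the cache is reconfigured smaller rather than eating into user capacity.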
  • Patent number: 11714760
    Abstract: Methods, apparatus, systems and articles of manufacture to reduce bank pressure using aggressive write merging are disclosed. An example apparatus includes a first cache storage; a second cache storage; a store queue coupled to at least one of the first cache storage and the second cache storage and operable to: receive a first memory operation; process the first memory operation for storing a first set of data in at least one of the first cache storage and the second cache storage; receive a second memory operation; and prior to storing the first set of data in the at least one of the first cache storage and the second cache storage, merge the first memory operation and the second memory operation.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: August 1, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
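The merging idea is that two queued writes to the same cache line collapse into one before the line is stored, so the cache banks see a single access. A simplified model under assumed names; real hardware merges byte lanes in the store queue, not dictionaries.

```python
# Sketch: merge queued (line, offset, value) writes targeting the same
# cache line; a later write to the same offset wins, and the banks only
# ever see one drained access per line.

def merge_stores(queue):
    """Collapse a list of pending stores into one merged write per line."""
    merged = {}
    for line, offset, value in queue:
        merged.setdefault(line, {})[offset] = value  # later write wins
    return merged
```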
  • Patent number: 11461234
    Abstract: A cache coherent node controller includes at least one or more network interface controllers, each of which includes at least one network interface, and at least two coherent interfaces, each configured for communication with a microprocessor. A computer system includes one or more nodes, wherein each node is connected to at least one network switch and each node includes at least a cache coherent node controller.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: October 4, 2022
    Assignee: Numascale AS
    Inventors: Thibaut Palfer-Sollier, Einar Rustad, Steffen Persvold
  • Patent number: 11327889
    Abstract: The invention relates to a method for managing a buffer memory space associated with a persistent data storage system of a computing machine. The buffer memory space is suitable for temporarily storing in the RAM of the machine one or more portions of a single data file of the persistent data storage system that was previously accessed by one or more processes executed on the machine. The operating system of the machine tracks each of the portions of the file that are projected in the buffer memory space by a descriptor belonging to a plurality of buffer memory projection descriptors which are all associated with the tracking of one or more portions of the file projected in the buffer memory space.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: May 10, 2022
    Assignee: BULL SAS
    Inventors: Jean-Olivier Gerphagnon, Frédéric Saunier, Grégoire Pichon
  • Patent number: 11210263
    Abstract: Embodiments are described for a multi-node file system, such as a clustered or distributed file system, with a file system buffer cache and an additional host-side tier non-volatile storage cache such as 3DXP storage. Cache coherency can be maintained by one of three models: (i) host-side tier management, (ii) file system management, or (iii) storage array management. A storage tier-specific file system action is performed in a file system that comprises a namespace spanning multiple tiers of storage.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: December 28, 2021
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Stephen Smaldone, Ian Wigmore, Arieh Don
  • Patent number: 11167775
    Abstract: Disclosed are devices, systems and methods for an audio assistant in an autonomous or semi-autonomous vehicle. In one aspect, the informational audio assistant receives a first set of data from a vehicle sensor and identifies an object or condition using the data from the vehicle sensor. Audio is generated representative of a perceived danger of an object or condition. A second set of data from the vehicle sensor subsystem is received, and the informational audio assistant determines whether an increased danger exists based on a comparison of the first set of data to the second set of data. The informational audio assistant will apply a sound profile to the generated audio based on the increased danger.
    Type: Grant
    Filed: April 21, 2020
    Date of Patent: November 9, 2021
    Assignee: TUSIMPLE, INC.
    Inventors: Cheng Zhang, Xiaodi Hou, Sven Kratz
  • Patent number: 11151039
    Abstract: An apparatus is provided for receiving requests from a plurality of processing units, at least some of which may have associated cache storage. A snoop unit implements a cache coherency protocol when a request received by the apparatus identifies a cacheable memory address. Snoop filter storage is provided comprising an N-way set associative storage structure with a plurality of entries. Each entry stores coherence data for an associated address range identifying a memory block, and the coherence data is used to determine which cache storages need to be subjected to a snoop operation when implementing the cache coherency protocol in response to a received request. The snoop filter storage stores coherence data for memory blocks of at least a plurality P of different size granularities, and is organised as a plurality of at least P banks that are accessible in parallel, where each bank has entries within each of the N-ways of the snoop filter storage.
    Type: Grant
    Filed: March 17, 2020
    Date of Patent: October 19, 2021
    Assignee: Arm Limited
    Inventors: Joshua Randall, Jesse Garrett Beu
  • Patent number: 11061751
    Abstract: A processing device can determine a configuration parameter to be used in an error correction code (ECC) operation. The configuration parameter is based on a memory type of a memory component that is associated with a controller. Data can be received from a host system. The processing device can generate a code word for the data by using the ECC operation that is based on the configuration parameter. The code word can be sent to a sequencer that is external to the controller.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: July 13, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Samir Mittal, Ying Yu Tai, Cheng Yuan Wu
  • Patent number: 11016907
    Abstract: Increasing the scope of local purges of structures associated with address translation. A hardware thread of a physical core of a machine configuration issues a purge request. A determination is made as to whether the purge request is a local request. Based on the purge request being a local request, entries of a structure associated with address translation are purged on at least multiple hardware threads of a set of hardware threads of the machine configuration.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: May 25, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan D. Bradbury, Fadi Y. Busaba, Lisa Cranton Heller
  • Patent number: 11016695
    Abstract: A disclosed example method to perform memory copy operations includes copying a first portion of data from a source location to a destination location, the first portion of the data being less than all of the data intended to be copied from the source location to the destination location; determining a cache miss measure indicative of an amount of the first portion of the data that is located in a cache; selecting a type of memory copy operation based on the cache miss measure; and initiating a memory copy operation based on the selected type of memory copy operation to copy a second portion of the data from the source location to the destination location.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: May 25, 2021
    Assignee: Intel Corporation
    Inventors: Dmitry Durnov, Sergey Gubanov, Sergey Kazakov
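The selection step above can be sketched as a threshold decision: probe-copy a first portion, measure how much of it missed in cache, then pick a strategy for the rest. The strategy names and threshold are illustrative assumptions, not Intel's.

```python
# Sketch: a high miss ratio on the probe portion suggests the data is not
# cache-resident, so a non-temporal (streaming) copy that bypasses the
# cache is chosen; a low miss ratio favors an ordinary cached copy.

def choose_copy_type(miss_ratio, threshold=0.5):
    """Pick a copy strategy for the remaining data from the probe's miss ratio."""
    return "non_temporal" if miss_ratio > threshold else "cached"
```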
  • Patent number: 11010054
    Abstract: According to one embodiment, a data processing system includes a plurality of processing units, each processing unit having one or more processor cores. The system further includes a plurality of memory roots, each memory root being associated with one of the processing units. Each memory root includes one or more branches and a plurality of memory leaves to store data. Each of the branches is associated with one or more of the memory leaves and provides access to the data stored therein. The system further includes a memory fabric coupled to each of the branches of each memory root to allow each branch to access data stored in any of the memory leaves associated with any one of the remaining branches.
    Type: Grant
    Filed: June 10, 2016
    Date of Patent: May 18, 2021
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Mark Himelstein, Bruce Wilford, Richard Van Gaasbeck, Todd Wilde, Rick Carlson, Vikram Venkataraghavan, Vishwas Durai, James Yarbrough, Blair Barnett
  • Patent number: 10936210
    Abstract: The present disclosure relates to apparatuses and methods to control memory operations on buffers. An example apparatus includes a memory device and a host. The memory device includes a buffer and an array of memory cells, and the buffer includes a plurality of caches. The host includes a system controller, and the system controller is configured to control performance of a memory operation on data in the buffer. The memory operation is associated with data movement among the plurality of caches.
    Type: Grant
    Filed: July 9, 2019
    Date of Patent: March 2, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Ali Mohammadzadeh, Jung Sheng Hoei, Dheeraj Srinivasan, Terry M. Grunzke
  • Patent number: 10866894
    Abstract: Systems and methods for controlling cache usage are described and include associating, by a server computing system, a tenant in a multi-tenant environment with a cache cluster formed by a group of cache instances; associating, by the server computing system, a memory threshold and a burst memory threshold with the tenant; enabling, by the server computing system, each of the cache instances to collect metrics information based on the tenant accessing the cache cluster, the metrics information used to determine memory usage information and burst memory usage information of the cache cluster by the tenant; and controlling, by the server computing system, usage of the cache cluster by the tenant based on comparing the memory usage information with the memory threshold and comparing the burst memory usage information with the burst memory threshold.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: December 15, 2020
    Assignee: salesforce.com, inc.
    Inventors: Gopi Krishna Mudumbai, Jayant Kumar
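The control decision described above reduces to two comparisons per tenant. A hedged sketch; the function and parameter names are assumptions, not Salesforce's API.

```python
# Sketch: a tenant may keep using its cache cluster only while both its
# steady-state memory usage and its burst memory usage (derived from the
# per-instance metrics the cluster collects) stay within their thresholds.

def allow_cache_use(usage, burst_usage, mem_threshold, burst_threshold):
    """Permit continued cluster access only while both measures are in bounds."""
    return usage <= mem_threshold and burst_usage <= burst_threshold
```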
  • Patent number: 10846164
    Abstract: A system LSI including: a first group including a first CPU and a first module; a second group including a second CPU and a second module having the same configuration as the first module has; and a shared memory including a first area for which cache coherency is maintained by an access from the first group, and a second area for which cache coherency is maintained by an access from the second group, the shared memory electrically connected to the first group and the second group. The first group includes a first bus through which cache coherency is maintained between the first CPU and the first module, and a second bus which electrically connects the first bus and the first module to each other. The second group includes a third bus through which cache coherency is maintained between the second CPU and the second module, and a fourth bus which electrically connects the third bus and the second module to each other.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: November 24, 2020
    Assignees: KABUSHIKI KAISHA TOSHIBA, TOSHIBA ELECTRONIC DEVICES & STORAGE CORPORATION
    Inventors: Naoaki Ohkubo, Jun Tanabe
  • Patent number: 10831658
    Abstract: Provided are an apparatus and method to cache data in a first memory that is stored in a second memory. At least one read-with-invalidate command is received to read and invalidate at least one portion of a cache line having modified data. The cache line having modified data is invalidated in response to receipt of read-with-invalidate commands for less than all of the portions of the cache line.
    Type: Grant
    Filed: January 3, 2019
    Date of Patent: November 10, 2020
    Assignee: INTEL CORPORATION
    Inventors: Yanru Li, Chia-Hung Kuo, Ali Taha
  • Patent number: 10802830
    Abstract: A computer data processing system includes a plurality of logical registers, each including multiple storage sections. A processor writes data to a storage section based on a dispatched first instruction, and sets a valid bit corresponding to the storage section that receives the data. In response to each subsequent instruction, the processor sets an evictor valid bit indicating a subsequent instruction has written new data to a storage section written by the first instruction, and updates the valid bit to indicate the storage section containing the new written data. A register combination unit generates a combined evictor tag to identify a most recent subsequent instruction. The processor determines the most recent subsequent instruction based on the combined evictor tag in response to a flush event, and unsets all the evictor tag valid bits set by the most recent subsequent instruction along with all previous subsequent instructions.
    Type: Grant
    Filed: March 5, 2019
    Date of Patent: October 13, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan Hsieh, Gregory William Alexander, Tu-An Nguyen
  • Patent number: 10733171
    Abstract: Lock table management is provided for a lock manager of a database system, in which lock management is provided in a manner that is fast and efficient, and that conserves processing, memory, and other computational resources. For example, the lock table management can use a hashmap in which keys and values are stored in separate arrays, which can be loaded into separate CPU cache lines.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: August 4, 2020
    Assignee: SAP SE
    Inventor: Chang Gyoo Park
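The structure-of-arrays layout the abstract mentions can be sketched as an open-addressing hashmap whose keys and values live in parallel arrays, so a probe scan touches only the key array's cache lines. An illustrative sketch, not SAP's implementation; linear probing and the capacity are assumptions.

```python
# Sketch: keys and values in separate parallel arrays. Probing for a key
# walks only self.keys (one stream of cache lines); self.values is only
# touched on a hit, which is the cache-friendliness the patent targets.

class SoAHashMap:
    EMPTY = object()  # sentinel for unused slots

    def __init__(self, capacity=64):
        self.keys = [self.EMPTY] * capacity   # key array (scanned on probe)
        self.values = [None] * capacity       # value array (touched on hit)

    def _probe(self, key):
        i = hash(key) % len(self.keys)
        while self.keys[i] is not self.EMPTY and self.keys[i] != key:
            i = (i + 1) % len(self.keys)      # linear probing
        return i

    def put(self, key, value):
        i = self._probe(key)
        self.keys[i] = key
        self.values[i] = value

    def get(self, key):
        i = self._probe(key)
        return self.values[i] if self.keys[i] == key else None
```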
  • Patent number: 10733101
    Abstract: A processing node, a computer system, and a transaction conflict detection method, where the processing node includes a processor and a transactional cache. When obtaining a first operation instruction in a transaction for accessing shared data, the processor accesses the transactional cache for caching shared data of a transaction processed by the processing node. If the transactional cache determines that the first operation instruction fails to hit a cache line in the transactional cache, the transactional cache sends a first destination address in the operation instruction to a transactional cache in another processing node. After receiving status information of a cache line hit by the first destination address from the other processing node, the transactional cache determines, based on the received status information, whether the first operation instruction conflicts with a second operation instruction executed by the other processing node.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: August 4, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Hao Xiao, Yuangang Wang, Jun Xu
  • Patent number: 10725919
    Abstract: A processor of an aspect includes a plurality of logical processors each having one or more corresponding lower level caches. A shared higher level cache is shared by the plurality of logical processors. The shared higher level cache includes a distributed cache slice for each of the logical processors. The processor includes logic to direct an access that misses in one or more lower level caches of a corresponding logical processor to a subset of the distributed cache slices in a virtual cluster that corresponds to the logical processor. Other processors, methods, and systems are also disclosed.
    Type: Grant
    Filed: April 8, 2018
    Date of Patent: July 28, 2020
    Assignee: Intel Corporation
    Inventors: Herbert H. Hum, Brinda Ganesh, James R. Vash, Ganesh Kumar, Leena K. Puthiyedath, Scott J. Erlanger, Eric J. Dehaemer, Adrian C. Moga, Michelle M. Sebot, Richard L. Carlson, David Bubien, Eric Delano
  • Patent number: 10705960
    Abstract: A processor of an aspect includes a plurality of logical processors each having one or more corresponding lower level caches. A shared higher level cache is shared by the plurality of logical processors. The shared higher level cache includes a distributed cache slice for each of the logical processors. The processor includes logic to direct an access that misses in one or more lower level caches of a corresponding logical processor to a subset of the distributed cache slices in a virtual cluster that corresponds to the logical processor. Other processors, methods, and systems are also disclosed.
    Type: Grant
    Filed: April 8, 2018
    Date of Patent: July 7, 2020
    Assignee: Intel Corporation
    Inventors: Herbert H. Hum, Brinda Ganesh, James R. Vash, Ganesh Kumar, Leena K. Puthiyedath, Scott J. Erlanger, Eric J. Dehaemer, Adrian C. Moga, Michelle M. Sebot, Richard L. Carlson, David Bubien, Eric DeLano
  • Patent number: 10698822
    Abstract: A system for writing to a cache line, the system including: at least one processor; and at least one memory having stored thereon instructions that, when executed by the at least one processor, controls the at least one processor to: pre-emptively invalidate a cache line at a reader device; receive, from the reader device, a read request for the invalidated cache line; delay a response to the read request; and after the delay, output for transmission a response to the read request to the reader device.
    Type: Grant
    Filed: November 8, 2018
    Date of Patent: June 30, 2020
    Inventor: Johnny Yau
  • Patent number: 10671512
    Abstract: Storing memory reordering hints into a processor trace includes, while a system executes a plurality of machine code instructions, the system initiating execution of a particular machine code instruction that performs a load to a memory address. Based on initiation of this instruction, a system initiates storing, into the processor trace, a particular cache line in a processor cache that stores a first value corresponding to the memory address. After initiating storing of the particular cache line, and prior to committing the particular machine code instruction, the system detects an event affecting the particular cache line. Based on this detection, the system initiates storing of a memory reordering hint into the processor trace.
    Type: Grant
    Filed: October 23, 2018
    Date of Patent: June 2, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventor: Jordi Mola
  • Patent number: 10592358
    Abstract: A distributed system implementation for cache coherence comprises distinct agent interface units, coherency controllers, and memory interface units. The agents send requests in the form of read and write transactions. The system also includes a memory that includes coherent memory regions. The memory is in communication with the agents. The system includes a coherent interconnect in communication with the memory and the agents. The system includes a second identical coherent interconnect in communication with the memory and the agents. The system also includes a comparator for comparing at least two inputs; the comparator is in communication with the two coherent interconnects.
    Type: Grant
    Filed: December 27, 2016
    Date of Patent: March 17, 2020
    Assignee: ARTERIS, INC.
    Inventors: Benoit deLescure, Jean Philippe Loison, Alexis Boutiller, Rohit Bansal, Parimal Gaikwad
  • Patent number: 10592434
    Abstract: Methods and systems for securing memory within a computing fabric are disclosed. One method includes allocating memory of one or more host computing systems in the computing fabric to a partition, the partition included among a plurality of partitions, the computing fabric including a hypervisor installed on the one or more host computing platforms and managing interactions among the plurality of partitions. The method includes defining an address range associated with the memory allocated to the partition, receiving a memory operation including an address within the address range, and, based on the memory operation including an address within the address range, issuing, by the hypervisor, an indication that the memory operation is occurring at an encrypted memory location. The method also includes performing the memory operation, and performing an encryption operation on data associated with the memory operation.
    Type: Grant
    Filed: January 20, 2016
    Date of Patent: March 17, 2020
    Assignee: Unisys Corporation
    Inventors: Robert J Sliwa, Bryan E Thompson, James R Hunter, John A Landis, David A Kershner
  • Patent number: 10585807
    Abstract: The disclosure of the present invention presents a method and system for efficiently maintaining an object cache within a maximum size by number of entries, whilst providing a means of automatically removing cache entries when the cache attempts to grow beyond its maximum size. The method for choosing which entries should be removed provides for a balance between least recently used and least frequently used policies. A flush operation is invoked only when the cache size grows beyond the maximum size and removes a fixed percentage of entries in one pass.
    Type: Grant
    Filed: April 2, 2015
    Date of Patent: March 10, 2020
    Assignee: International Business Machines Corporation
    Inventor: Andrew J. Coleman
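The flush policy described above can be sketched directly: score each entry by a blend of recency and frequency, and remove the lowest-scoring fixed fraction in one pass, but only once the cache exceeds its maximum size. The scoring blend and fraction here are illustrative assumptions.

```python
# Sketch: cache maps key -> (last_access_tick, hit_count). The score adds
# recency (higher tick = more recent) and frequency, balancing LRU and
# LFU; a single-pass flush drops the lowest-scoring fraction of entries.

def flush_if_full(cache, max_entries, fraction=0.25):
    """Remove the lowest-scoring fraction of entries when over max_entries."""
    if len(cache) <= max_entries:
        return  # flush is invoked only when the cache has grown too large
    scored = sorted(cache, key=lambda k: cache[k][0] + cache[k][1])
    for key in scored[:max(1, int(len(cache) * fraction))]:
        del cache[key]
```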
  • Patent number: 10572392
    Abstract: Increasing the scope of local purges of structures associated with address translation. A hardware thread of a physical core of a machine configuration issues a purge request. A determination is made as to whether the purge request is a local request. Based on the purge request being a local request, entries of a structure associated with address translation are purged on at least multiple hardware threads of a set of hardware threads of the machine configuration.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: February 25, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan D. Bradbury, Fadi Y. Busaba, Lisa Cranton Heller
  • Patent number: 10565113
    Abstract: Methods and systems for managing synonyms in VIPT caches are disclosed. A method includes tracking lines of a copied cache using a directory, examining a specified bit of a virtual address that is associated with a load request and determining its status and making an entry in one of a plurality of parts of the directory based on the status of the specified bit of the virtual address that is examined. The method further includes updating one of, and invalidating the other of, a cache line that is associated with the virtual address that is stored in a first index of the copied cache, and a cache line that is associated with a synonym of the virtual address that is stored at a second index of the copied cache, upon receiving a request to update a physical address associated with the virtual address.
    Type: Grant
    Filed: August 13, 2015
    Date of Patent: February 18, 2020
    Assignee: INTEL CORPORATION
    Inventor: Karthikeyan Avudaiyappan
  • Patent number: 10504045
    Abstract: An audit schedule is determined from a database storing a master data set comprising audit events, system parameters, and resources. Audit events are grouped according to information of the master data set, for example shared units (e.g., product, service, organization, risk level, audit type, etc.). Audit groups are prioritized by factors such as unit priority and audit duration. A random audit event within the group is chosen, and then a time slot is selected according to a desired distribution (e.g., left-to-right), determining resource availability for that slot. The procedure may optionally consider additional constraints (e.g., manually added, national holidays, auditor availability) outside the master data set. The procedure shuffles through audit events of the group with the highest priority, and then through audit events of lower priority groups, filling out the audit schedule according to resource availability and constraints. Audit schedule changes are recorded in a change log data object.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: December 10, 2019
    Assignee: SAP SE
    Inventors: Maxym Gerashchenko, Gordon Muckle
  • Patent number: 10452543
    Abstract: Embodiments are described for a multi-node file system, such as a clustered or distributed file system, with a file system buffer cache and an additional host-side tier non-volatile storage cache such as 3DXP storage. Cache coherency can be maintained by one of three models: (i) host-side tier management, (ii) file system management, or (iii) storage array management. A storage tier-specific file system action is performed in a file system that comprises a namespace spanning multiple tiers of storage.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: October 22, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Stephen Smaldone, Ian Wigmore, Arieh Don
  • Patent number: 10445245
    Abstract: A method, system, and apparatus may initialize a fixed plurality of page table entries for a fixed plurality of pages in memory, each page having a first size, wherein a linear address for each page table entry corresponds to a physical address and the fixed plurality of pages are aligned. A bit in each of the page table entries for the aligned pages may be set to indicate whether or not the fixed plurality of pages is to be treated as one combined page having a second page size larger than the first page size. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 19, 2016
    Date of Patent: October 15, 2019
    Assignee: Intel Corporation
    Inventors: Edward Grochowski, Julio Gago, Roger Gramunt, Roger Espasa, Rolf Kassa
  • Patent number: 10445244
    Abstract: A method, system, and apparatus may initialize a fixed plurality of page table entries for a fixed plurality of pages in memory, each page having a first size, wherein a linear address for each page table entry corresponds to a physical address and the fixed plurality of pages are aligned. A bit in each of the page table entries for the aligned pages may be set to indicate whether or not the fixed plurality of pages is to be treated as one combined page having a second page size larger than the first page size. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 19, 2016
    Date of Patent: October 15, 2019
    Assignee: Intel Corporation
    Inventors: Edward Grochowski, Julio Gago, Roger Gramunt, Roger Espasa, Rolf Kassa
  • Patent number: 10430243
    Abstract: A runtime system for distributing work between multiple threads in multi-socket shared memory machines that may support fine-grained scheduling of parallel loops. The runtime system may implement a request combining technique in which a representative thread requests work on behalf of other threads. The request combining technique may be asynchronous; a thread may execute work while waiting to obtain additional work via the request combining technique. Loops can be nested within one another, and the runtime system may provide control over the way in which hardware contexts are allocated to the loops at the different levels. An “inside out” approach may be used for nested loops in which a loop indicates how many levels are nested inside it, rather than a conventional “outside in” approach to nesting.
    Type: Grant
    Filed: February 2, 2018
    Date of Patent: October 1, 2019
    Assignee: Oracle International Corporation
    Inventor: Timothy L. Harris
  • Patent number: 10402324
    Abstract: According to an example, a processor generates a memory access request and sends the memory access request to a memory module. The processor receives data from the memory module in response to the memory access request when the memory device in the memory module targeted by the memory access request is busy and unable to execute the memory access request.
    Type: Grant
    Filed: October 31, 2013
    Date of Patent: September 3, 2019
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventors: Kevin T. Lim, Sheng Li, Parthasarathy Ranganathan, William C. Hallowell
  • Patent number: 10372353
    Abstract: The present disclosure relates to apparatuses and methods to control memory operations on buffers. An example apparatus includes a memory device and a host. The memory device includes a buffer and an array of memory cells, and the buffer includes a plurality of caches. The host includes a system controller, and the system controller is configured to control performance of a memory operation on data in the buffer. The memory operation is associated with data movement among the plurality of caches.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: August 6, 2019
    Assignee: Micron Technology, Inc.
    Inventors: Ali Mohammadzadeh, Jung Sheng Hoei, Dheeraj Srinivasan, Terry M. Grunzke
  • Patent number: 10303606
    Abstract: Technologies for migration of dynamic home tile mapping are described. A cache controller can receive coherence messages from other processor cores on the die. The cache controller records the locations from which the coherence messages originate and determines the distances between the requested home tiles and those locations. The cache controller determines whether the average distance between a particular home tile, whose identifier is stored in the home tile table, and the originating locations exceeds a threshold. When the average distance exceeds the defined threshold, the cache controller migrates the particular home tile to another location.
    Type: Grant
    Filed: March 21, 2017
    Date of Patent: May 28, 2019
    Assignee: Intel Corporation
    Inventors: Christopher J. Hughes, Daehyun Kim, Jong Soo Park, Richard M. Yoo
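The migration policy described above can be sketched roughly as follows (helper names are hypothetical, and Manhattan distance on a 2-D tile grid is an assumption; the patent does not specify the distance metric):

```python
# Hypothetical sketch: decide whether to migrate a "home tile" when the
# average distance from the tiles sending coherence messages exceeds a
# threshold, and pick a new home near those requesters.

def manhattan(a, b):
    # Distance between two (x, y) tile coordinates on a mesh.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def should_migrate(home, requesters, threshold):
    # requesters: (x, y) coordinates of tiles that sent coherence messages.
    avg = sum(manhattan(home, r) for r in requesters) / len(requesters)
    return avg > threshold

def best_new_home(requesters):
    # Simple heuristic: the rounded centroid of the requesters
    # keeps the average distance low.
    cx = round(sum(x for x, _ in requesters) / len(requesters))
    cy = round(sum(y for _, y in requesters) / len(requesters))
    return (cx, cy)
```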
  • Patent number: 10289556
    Abstract: A method and system to allow power fail-safe write-back or write-through caching of data in a persistent storage device into one or more cache lines of a caching device. No metadata associated with any of the cache lines is written atomically into the caching device when the data in the storage device is cached. As such, specialized cache hardware to allow atomic writing of metadata during the caching of data is not required.
    Type: Grant
    Filed: October 28, 2016
    Date of Patent: May 14, 2019
    Assignee: Intel Corporation
    Inventor: Sanjeev N. Trika
  • Patent number: 10223281
    Abstract: Increasing the scope of local purges of structures associated with address translation. A hardware thread of a physical core of a machine configuration issues a purge request. A determination is made as to whether the purge request is a local request. Based on the purge request being a local request, entries of a structure associated with address translation are purged on at least multiple hardware threads of a set of hardware threads of the machine configuration.
    Type: Grant
    Filed: July 18, 2016
    Date of Patent: March 5, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan D. Bradbury, Fadi Y. Busaba, Lisa Cranton Heller
  • Patent number: 10223286
    Abstract: The present disclosure presents a method and system for efficiently maintaining an object cache at a maximum size by number of entries, whilst providing a means of automatically removing cache entries when the cache attempts to grow beyond its maximum size. The method for choosing which entries should be removed balances least-recently-used and least-frequently-used policies. A flush operation is invoked only when the cache size grows beyond the maximum size and removes a fixed percentage of entries in one pass.
    Type: Grant
    Filed: August 5, 2014
    Date of Patent: March 5, 2019
    Assignee: International Business Machines Corporation
    Inventor: Andrew J. Coleman
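A minimal sketch of the described policy (names and the exact scoring are assumptions, not the patented method): the cache flushes only when it exceeds its cap, removing a fixed fraction of entries ranked by a blend of access frequency and recency.

```python
class CappedCache:
    """Hypothetical sketch: size-capped object cache whose flush removes
    a fixed percentage of entries in one pass, ranked by a balance of
    least-frequently-used and least-recently-used."""

    def __init__(self, max_entries, flush_fraction=0.25):
        self.max_entries = max_entries
        self.flush_fraction = flush_fraction
        self.entries = {}   # key -> [value, hit_count, last_access_tick]
        self.tick = 0       # logical clock for recency

    def put(self, key, value):
        self.tick += 1
        self.entries[key] = [value, 1, self.tick]
        # Flush is invoked only when the cache grows beyond its maximum.
        if len(self.entries) > self.max_entries:
            self._flush()

    def get(self, key):
        self.tick += 1
        entry = self.entries.get(key)
        if entry is None:
            return None
        entry[1] += 1          # frequency component
        entry[2] = self.tick   # recency component
        return entry[0]

    def _flush(self):
        # One pass: sort by (frequency, recency) and evict the lowest
        # flush_fraction of entries.
        n_remove = max(1, int(len(self.entries) * self.flush_fraction))
        victims = sorted(
            self.entries,
            key=lambda k: (self.entries[k][1], self.entries[k][2]),
        )[:n_remove]
        for k in victims:
            del self.entries[k]
```

Sorting by the (frequency, recency) pair is one simple way to balance the two policies: frequency dominates, with recency breaking ties among equally rare entries.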
  • Patent number: 10146735
    Abstract: The invention relates to a method for processing real-time data in a distribution unit of a distributed computer system, the computer system comprising a plurality of node computers and distribution units, the distribution unit containing, in addition to a switching engine (SE) and a switching memory (SM), one or more application computers each with one or more application central processing units and each with one or more application memories (AM), wherein the switching engine of the distribution unit, when it receives, at one of its ports, a message intended for an application computer, forwards this message to the addressed application computer through a direct memory access (DMA) unit that is arranged between the switching memory and the application memory of the addressed application computer and that is under the control of the switching engine. The invention also relates to an expanded distribution unit and a computer system with such expanded distribution units.
    Type: Grant
    Filed: February 9, 2016
    Date of Patent: December 4, 2018
    Assignee: FTS COMPUTERTECHNIK GMBH
    Inventors: Stefan Poledna, Hermann Kopetz, Martin Schwarz
  • Patent number: 10146595
    Abstract: A computer system includes a cache unit and a first processing unit. The first processing unit runs a first program thread, and performs an instruction to store information of a signal change event into the cache unit through a cache stashing operation, where the signal change event is initiated by the first program thread for alerting a second program thread.
    Type: Grant
    Filed: May 26, 2015
    Date of Patent: December 4, 2018
    Assignee: MEDIATEK INC.
    Inventor: Chi-Chang Lai
  • Patent number: 10148784
    Abstract: A dual-mode, dual-display shared resource computing (SRC) device is usable to stream SRC content from a host SRC device while in an on-line mode and maintain functionality with the content during an off-line mode. Such remote SRC devices can be used to maintain multiple user-specific caches and to back-up cached content for multi-device systems.
    Type: Grant
    Filed: December 14, 2016
    Date of Patent: December 4, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Shahram Izadi, Behrooz Chitsaz
  • Patent number: 10067959
    Abstract: Techniques described and suggested herein include implementations of caches and scalers to handle data storage requests, and storage event status requests associated with data storage requests, in a scalable fashion. For example, a data storage system, such as a data storage system implemented by a computing resource service provider in connection with providing an archival storage service or other data storage service, may be implemented to maintain a consistent response time and backend capability for incoming data storage requests, which may be a component of ensuring a consistent customer experience for customers of an associated service, with little or no regard to peaky or high data storage request rates observed by the implementing data storage system.
    Type: Grant
    Filed: December 19, 2014
    Date of Patent: September 4, 2018
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Rishabh Animesh, Sandesh Doddameti, Ryan Charles Schmitt, Mark Christopher Seigle
  • Patent number: 10025677
    Abstract: A distributed system implementation for cache coherence comprises distinct agent interface units, coherency controllers, and memory interface units. The agents send requests in the form of read and write transactions. The system also includes a memory that includes coherent memory regions. The memory is in communication with the agents. The system includes a coherent interconnect in communication with the memory and the agents. The system includes a second identical coherent interconnect in communication with the memory and the agents. The system also includes a comparator for comparing at least two inputs, the comparator is in communication with the two coherent interconnects.
    Type: Grant
    Filed: December 21, 2016
    Date of Patent: July 17, 2018
    Assignee: ARTERIS, Inc.
    Inventors: Benoit de Lescure, Jean Philippe Loison, Alexis Boutiller
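The comparator between the two identical coherent interconnects amounts to checking that both produce the same output for the same transaction stream, flagging any divergence as a fault. A trivial sketch (function and data shapes are hypothetical):

```python
# Hypothetical sketch of dual-interconnect comparison: two identical
# coherent interconnects process the same transactions; the comparator
# reports the indices of any transactions where their outputs disagree.

def compare_interconnects(responses_a, responses_b):
    return [
        i for i, (a, b) in enumerate(zip(responses_a, responses_b))
        if a != b
    ]
```

An empty result means the redundant interconnects agree; any reported index would indicate a fault in one of the two paths.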
  • Patent number: 9983994
    Abstract: An arithmetic processing device includes a plurality of core units, each including a plurality of cores each having an arithmetic and logic unit, and a cache memory shared by the plurality of cores; a home agent connected to the cache memories provided respectively in the core units; and a memory access controller that is connected to the home agent and controls access to a main memory. The cache memories each include a data memory having cache blocks, and a first tag which stores a first state, indicating a MESI state, for each of the cache blocks, and the home agent includes a second tag which stores a second state, including at least a shared-modify state in which dirty data is shared by cache memories, for each of the cache blocks in the cache memories provided respectively in each of the core units.
    Type: Grant
    Filed: July 19, 2016
    Date of Patent: May 29, 2018
    Assignee: FUJITSU LIMITED
    Inventors: Hideaki Tomatsuri, Naoya Ishimura, Hiroyuki Kojima
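A rough model of the second tag's extra state (the state names, encoding, and derivation rule here are assumptions for illustration, not the patented design): the home agent summarizes the per-cache MESI states of a block, adding a state for dirty data held in more than one cache.

```python
# Hypothetical sketch: derive a home-agent "second tag" state from the
# per-cache MESI states of one cache block across the core units.

def second_tag_state(per_cache_states):
    # per_cache_states: MESI state ("M", "E", "S", or "I") of the block
    # in each core unit's shared cache.
    holders = [s for s in per_cache_states if s != "I"]
    if "M" in holders and len(holders) > 1:
        return "SM"   # shared-modify: dirty data shared by caches
    if "M" in holders:
        return "M"    # dirty in exactly one cache
    return "S" if holders else "I"
```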
  • Patent number: 9940237
    Abstract: A method for identifying, in a system including two or more computing devices that are able to communicate with each other, each computing device having a cache and being connected to a corresponding memory, a computing device accessing one of the memories, includes monitoring memory access to any of the memories; monitoring cache coherency commands between computing devices; and identifying the computing device accessing one of the memories by using information related to the memory access and cache coherency commands.
    Type: Grant
    Filed: April 30, 2015
    Date of Patent: April 10, 2018
    Assignee: International Business Machines Corporation
    Inventors: Nobuyuki Ohba, Atsuya Okazaki
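The identification step can be sketched as correlating an observed access address with logged coherence commands (the command names and log format here are assumptions; real coherence protocols use their own transaction types):

```python
# Hypothetical sketch: attribute a memory access to a device by matching
# its address against coherence commands observed on the interconnect.

def identify_accessor(access_addr, coherence_log):
    # coherence_log: (device_id, command, address) tuples observed around
    # the time of the memory access, oldest first. The device that most
    # recently issued a read or write-back command for the same address
    # is the likely accessor.
    for device, cmd, addr in reversed(coherence_log):
        if addr == access_addr and cmd in ("ReadShared", "ReadUnique", "WriteBack"):
            return device
    return None
```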
  • Patent number: 9928175
    Abstract: A method for identifying, in a system including two or more computing devices that are able to communicate with each other, each computing device having a cache and being connected to a corresponding memory, a computing device accessing one of the memories, includes monitoring memory access to any of the memories; monitoring cache coherency commands between computing devices; and identifying the computing device accessing one of the memories by using information related to the memory access and cache coherency commands.
    Type: Grant
    Filed: June 23, 2015
    Date of Patent: March 27, 2018
    Assignee: International Business Machines Corporation
    Inventors: Nobuyuki Ohba, Atsuya Okazaki
  • Patent number: 9928071
    Abstract: A system includes a processor configured to: initiate atomic execution of a plurality of instruction units in a thread, starting with a beginning instruction unit in the plurality of instruction units, wherein the plurality of instruction units in the thread are not programmatically specified to be executed atomically; detect an atomicity terminating event during atomic execution of the plurality of instruction units, wherein the atomicity terminating event is triggered by a memory access by another processor; and commit at least some of the one or more memory modification instructions. The system further includes a memory coupled to the processor, configured to provide the processor with the plurality of instruction units.
    Type: Grant
    Filed: May 1, 2009
    Date of Patent: March 27, 2018
    Assignee: Azul Systems, Inc.
    Inventors: Gil Tene, Michael A. Wolf, Cliff N. Click, Jr.
  • Patent number: 9928072
    Abstract: A system includes a processor configured to: initiate atomic execution of a plurality of instruction units in a thread, starting with a beginning instruction unit in the plurality of instruction units, wherein the plurality of instruction units is not programmatically specified to be executed atomically; detect an atomicity terminating event during atomic execution of the plurality of instruction units, wherein the atomicity terminating event is triggered by a memory access by another processor; and establish an incidentally atomic sequence of instruction units based at least in part on detection of the atomicity terminating event, wherein the incidentally atomic sequence of instruction units corresponds to a sequence of instruction units in the plurality of instruction units. The system further includes a memory coupled to the processor, configured to provide the processor with the plurality of instruction units.
    Type: Grant
    Filed: May 1, 2009
    Date of Patent: March 27, 2018
    Assignee: Azul Systems, Inc.
    Inventors: Gil Tene, Michael A. Wolf, Cliff N. Click, Jr.