Patents Examined by Aracelis Ruiz
  • Patent number: 11409670
    Abstract: Managing lock coordinator rebalance in distributed file systems is provided herein. A node device of a cluster of node devices can comprise a processor and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations. The operations can comprise determining an occurrence of a group change within the cluster of node devices and executing a probe function based on the occurrence of the group change. Further, the operations can comprise reasserting first locks of a group of locks based on a result of the probe function indicating reassertion of the first locks. Second locks of the group of locks, other than the first locks, are not reasserted based on the result of the probe function. The cluster of node devices can operate as a distributed file system.
    Type: Grant
    Filed: December 17, 2020
    Date of Patent: August 9, 2022
    Assignee: EMC IP Holding Company LLC
    Inventor: Ron Steinke
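    A rough Python sketch of the selective lock reassertion described in the abstract above; the probe callback, the lock identifiers, and the reassert_lock helper are hypothetical illustrations, not taken from the patent.

        def reassert_lock(lock_id):
            # Hypothetical helper: re-register the lock with its new coordinator node.
            print(f"reasserting lock {lock_id}")

        def handle_group_change(held_locks, probe):
            # probe is the probe function run after the group change; it returns True
            # only for locks whose coordinator moved and therefore need reassertion.
            reasserted, untouched = [], []
            for lock_id in held_locks:
                if probe(lock_id):
                    reassert_lock(lock_id)
                    reasserted.append(lock_id)
                else:
                    untouched.append(lock_id)   # "second locks": no reassertion traffic
            return reasserted, untouched

        # Toy probe: pretend only even-numbered locks landed on a new coordinator.
        handle_group_change(range(6), probe=lambda lock_id: lock_id % 2 == 0)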
  • Patent number: 11403232
    Abstract: One example method includes determining a fall through threshold value for a cache and computing a length ‘s’ of a sequence that is close to LRU eviction, where the length ‘s’ is computed when a current fall through metric value is greater than the fall through threshold value. When the sequence length ‘s’ is greater than a predetermined threshold length ‘k,’ the method performs a first shift of an LRU position to define a protected queue of the cache, initializes a counter with a value of ‘r’, decrements the counter each time a requested page is determined to be included in the protected queue until ‘r’=0, and then performs a second shift of the LRU position.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: August 2, 2022
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Hugo De Oliveira Barbalho, Jonas F. Dias, Vinicius Michel Gottin
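    A minimal Python sketch of the protection flow outlined above, reusing the abstract's s, k and r names; the cache hooks (sequence_length_near_lru, shift_lru, and so on) are a made-up stand-in rather than the patented implementation.

        def protect_near_lru(cache, fall_through, fall_through_threshold, k, r):
            if fall_through <= fall_through_threshold:
                return
            s = cache.sequence_length_near_lru()   # length of the sequence close to LRU eviction
            if s <= k:
                return
            cache.shift_lru(s)                     # first shift: defines the protected queue
            remaining = r
            while remaining > 0:
                page = cache.next_request()
                if cache.in_protected_queue(page):
                    remaining -= 1                 # decrement on protected-queue hits
            cache.shift_lru(-s)                    # second shift of the LRU position

        class StubCache:
            # Toy stand-in exposing only the hooks the sketch needs.
            def __init__(self, protected_pages, requests):
                self.protected, self.requests = set(protected_pages), iter(requests)
            def sequence_length_near_lru(self): return len(self.protected)
            def shift_lru(self, s): print(f"shift LRU position by {s}")
            def next_request(self): return next(self.requests)
            def in_protected_queue(self, page): return page in self.protected

        protect_near_lru(StubCache({1, 2, 3}, [5, 1, 7, 2]), fall_through=0.9,
                         fall_through_threshold=0.5, k=2, r=2)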
  • Patent number: 11403177
    Abstract: A data processing system includes a host suitable for generating a plurality of write data grouped into transactions and a plurality of write commands including transaction information of each of the write data; and a memory system suitable for storing the write data in a normal region of a memory device in response to the write commands received from the host, and storing the transaction information included in each of the write commands in a spare region, which corresponds to the normal region, of the memory device.
    Type: Grant
    Filed: July 10, 2020
    Date of Patent: August 2, 2022
    Assignee: SK hynix Inc.
    Inventors: Hae-Gi Choi, Kyeong-Rho Kim, Su-Chang Kim, Sung-Kwan Hong
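    A toy Python sketch of storing write data in a page's normal region while the write command's transaction information lands in the matching spare region; the Page/MemoryDevice classes and field names are illustrative assumptions, not the patented layout.

        from dataclasses import dataclass, field

        @dataclass
        class Page:
            data: bytes = b""
            spare: dict = field(default_factory=dict)   # out-of-band area per page

        class MemoryDevice:
            def __init__(self, n_pages):
                self.pages = [Page() for _ in range(n_pages)]

            def write(self, page_no, data, txn_id, txn_state):
                page = self.pages[page_no]
                page.data = data                          # normal region
                page.spare["txn"] = (txn_id, txn_state)   # spare region tied to that page

        dev = MemoryDevice(4)
        dev.write(0, b"payload-A", txn_id=7, txn_state="open")
        dev.write(1, b"payload-B", txn_id=7, txn_state="commit")
        print(dev.pages[0].spare, dev.pages[1].spare)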
  • Patent number: 11397682
    Abstract: A network device in a communication network includes a controller and processing circuitry. The controller is configured to manage execution of an operation whose execution depends on inputs from a group of one or more work-request initiators. The processing circuitry is configured to read one or more values, which are set by the work-request initiators in one or more memory locations that are accessible to the work-request initiators and to the network device, and to trigger execution of the operation in response to verifying that the one or more values read from the one or more memory locations indicate that the work-request initiators in the group have provided the respective inputs.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: July 26, 2022
    Assignee: MELLANOX TECHNOLOGIES, LTD.
    Inventors: Daniel Marcovitch, Gil Bloch, Richard Graham, Ariel Shahar, Roee Moyal, Igor Voks
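    A small Python sketch of the trigger condition described above: each work-request initiator sets a value in a shared location, and the device fires the operation once every value indicates readiness. The polling loop and the doorbells list are software simplifications, not the hardware design.

        import threading, time

        doorbells = [0, 0, 0]                 # one memory location per initiator

        def initiator(idx, delay):
            time.sleep(delay)                 # produce the input...
            doorbells[idx] = 1                # ...then publish readiness

        def network_device(run_operation):
            while not all(doorbells):         # read the values set by the initiators
                time.sleep(0.01)
            run_operation()                   # all inputs present: trigger execution

        threads = [threading.Thread(target=initiator, args=(i, 0.05 * i)) for i in range(3)]
        for t in threads:
            t.start()
        network_device(lambda: print("operation executed"))
        for t in threads:
            t.join()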
  • Patent number: 11397686
    Abstract: A microprocessor includes a physically-indexed-and-tagged second-level set-associative cache. Each cache entry is uniquely identified by a set index and way number. Each store queue (SQ) entry holds store data for writing to a store physical address and a store physical address proxy (PAP) for the store physical line address. The store PAP specifies the set index and way number of the cache entry allocated to the store physical line address. A load unit obtains a load PAP for a load physical line address that specifies the set index and way number of the cache entry allocated to the load physical line address. The SQ compares the load PAP with each valid store PAP for use in identifying a candidate set of SQ entries whose store data overlaps requested load data and selects an entry from the candidate set from which to forward the store data to the load instruction.
    Type: Grant
    Filed: June 18, 2021
    Date of Patent: July 26, 2022
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan
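    A simplified Python sketch of PAP-based store-to-load forwarding as described above, where a PAP is modeled as a (set index, way) pair; the store-queue entry layout and the overlap test are assumptions made for illustration.

        from dataclasses import dataclass

        @dataclass
        class SQEntry:
            pap: tuple          # (set_index, way) of the cache entry for the store's line
            offset: int         # byte offset within the line
            size: int
            data: bytes
            valid: bool = True

        def forward_from_store_queue(sq, load_pap, load_offset, load_size):
            """Return data from the youngest fully overlapping store, or None (go to cache)."""
            candidates = [e for e in sq
                          if e.valid and e.pap == load_pap
                          and e.offset <= load_offset
                          and load_offset + load_size <= e.offset + e.size]
            return candidates[-1].data if candidates else None  # sq is oldest-first

        sq = [SQEntry(pap=(12, 3), offset=8, size=8, data=b"oldvalue"),
              SQEntry(pap=(12, 3), offset=8, size=8, data=b"newvalue")]
        print(forward_from_store_queue(sq, load_pap=(12, 3), load_offset=8, load_size=4))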
  • Patent number: 11386009
    Abstract: An example configuration system for a programmable device includes: a configuration memory read/write unit configured to receive configuration data for storage in a configuration memory of the programmable device, the configuration memory comprising a plurality of frames; a plurality of configuration memory read/write controllers coupled to the configuration memory read/write unit; a plurality of fabric sub-regions (FSRs) respectively coupled to the plurality of configuration memory read/write controllers, each FSR including a pipeline of memory cells of the configuration memory disposed between buffers and a configuration memory read/write pipeline unit coupled between the pipeline and a next one of the plurality of FSRs.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: July 12, 2022
    Assignee: XILINX, INC.
    Inventors: David P. Schultz, Weiguang Lu, Karthy Rajasekharan, Shidong Zhou, Michael Tsivyan, Jing Jing Chen, Sourabh Goyal
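    A loose Python sketch of routing configuration frames to per-FSR read/write controllers; the frame size and the frame-to-FSR mapping used here are placeholders, not the device's actual organization.

        FRAME_WORDS = 4

        def route_frames(bitstream_words, n_fsrs):
            frames = [bitstream_words[i:i + FRAME_WORDS]
                      for i in range(0, len(bitstream_words), FRAME_WORDS)]
            fsr_queues = [[] for _ in range(n_fsrs)]
            for frame_no, frame in enumerate(frames):
                fsr_queues[frame_no % n_fsrs].append(frame)   # per-FSR read/write controller
            return fsr_queues

        for fsr, queue in enumerate(route_frames(list(range(24)), n_fsrs=3)):
            print(f"FSR {fsr} receives frames: {queue}")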
  • Patent number: 11386019
    Abstract: The present invention discloses a data secure method, applied to a storage device and performed by a controller of the storage device. The data secure method comprises: receiving a buffer clear command from an external processing unit, wherein the buffer clear command indicates that a first secure area corresponding to a first physical address range of a buffer memory of the storage device is required to be cleared, and a first secure key corresponds to the first secure area and is used for accessing the first secure area; and in response to the buffer clear command, configuring a secure unit of the storage device to cause the secure unit to use one or more second keys different from the first secure key when accessing the first physical address range.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: July 12, 2022
    Assignee: MEDIATEK INC.
    Inventors: Yu-Tien Chang, Ching-Ming Chen, Wei-Hsun Lin, Lin-Ming Hsu, Tsung-Wei Hung
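    A brief Python sketch of the clear-by-key-change idea above: rather than scrubbing the secure buffer range, the secure unit switches to a different key for that physical range, so data written under the old key is no longer recoverable. Class and method names are hypothetical.

        import os

        class SecureUnit:
            def __init__(self):
                self.range_keys = {}                       # (start, end) -> key bytes

            def configure_range(self, start, end):
                self.range_keys[(start, end)] = os.urandom(16)

            def clear_range(self, start, end):
                # respond to the buffer clear command by rotating the key
                self.range_keys[(start, end)] = os.urandom(16)

            def key_for(self, addr):
                for (start, end), key in self.range_keys.items():
                    if start <= addr < end:
                        return key
                return None

        unit = SecureUnit()
        unit.configure_range(0x1000, 0x2000)
        old_key = unit.key_for(0x1800)
        unit.clear_range(0x1000, 0x2000)
        print("key rotated:", old_key != unit.key_for(0x1800))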
  • Patent number: 11379382
    Abstract: A method for demoting a selected storage element from a cache memory includes storing favored and non-favored storage elements within a higher performance portion and a lower performance portion of the cache memory. The favored storage elements are retained in the cache memory longer than the non-favored storage elements. The method maintains a first favored LRU list and a first non-favored LRU list, associated with the favored and non-favored storage elements stored within the higher performance portion of the cache. The method selects a favored or non-favored storage element to be demoted from the higher performance portion of the cache memory according to the life expectancy and residency of the oldest favored and non-favored storage elements in the first LRU lists. The method demotes the selected storage element from the higher performance portion of the cache to the lower performance portion of the cache, or to the data storage devices, according to a cache demotion policy.
    Type: Grant
    Filed: December 8, 2020
    Date of Patent: July 5, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lokesh M. Gupta, Kevin J. Ash, Beth A. Peterson, Matthew G. Borlick
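    A compact Python sketch of choosing a demotion candidate from the oldest favored and non-favored entries by comparing residency against a per-class life expectancy; the concrete comparison rule and the default lifetimes are illustrative, not the patented policy.

        from collections import deque

        def pick_demotion_candidate(favored_lru, non_favored_lru, now,
                                    favored_life=100, non_favored_life=25):
            oldest_f = favored_lru[0] if favored_lru else None       # (track_id, insert_time)
            oldest_n = non_favored_lru[0] if non_favored_lru else None
            if oldest_n and now - oldest_n[1] >= non_favored_life:
                return non_favored_lru.popleft()[0]
            if oldest_f and now - oldest_f[1] >= favored_life:
                return favored_lru.popleft()[0]
            # nothing has outlived its life expectancy: fall back to the oldest overall
            candidates = [q for q in (favored_lru, non_favored_lru) if q]
            oldest = min(candidates, key=lambda q: q[0][1], default=None)
            return oldest.popleft()[0] if oldest else None

        favored = deque([("F1", 0), ("F2", 40)])
        non_favored = deque([("N1", 10), ("N2", 60)])
        print(pick_demotion_candidate(favored, non_favored, now=70))   # demotes "N1"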
  • Patent number: 11372721
    Abstract: A data processing system includes a host suitable for generating a plurality of write data grouped into transactions and a plurality of write commands including transaction information of each of the write data; and a memory system suitable for storing the write data in a normal region of a memory device in response to the write commands received from the host, and storing the transaction information included in each of the write commands in a spare region, which corresponds to the normal region, of the memory device.
    Type: Grant
    Filed: July 10, 2020
    Date of Patent: June 28, 2022
    Assignee: SK hynix Inc.
    Inventors: Hae-Gi Choi, Kyeong-Rho Kim, Su-Chang Kim, Sung-Kwan Hong
  • Patent number: 11372777
    Abstract: A memory interface for interfacing between a memory bus addressable using a physical address space and a cache memory addressable using a virtual address space, the memory interface comprising: a memory management unit configured to maintain a mapping from the virtual address space to the physical address space; and a coherency manager comprising a reverse translation module configured to maintain a mapping from the physical address space to the virtual address space; wherein the memory interface is configured to: receive a memory read request from the cache memory, the memory read request being addressed in the virtual address space; translate the memory read request, at the memory management unit, to a translated memory read request addressed in the physical address space for transmission on the memory bus; receive a snoop request from the memory bus, the snoop request being addressed in the physical address space; and translate the snoop request, at the coherency manager, to a translated snoop request addressed in the virtual address space.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: June 28, 2022
    Assignee: Imagination Technologies Limited
    Inventors: Martin John Robinson, Mark Landers
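    A small Python sketch of the two translation directions described above: the MMU mapping serves cache reads (virtual to physical) while the coherency manager's reverse mapping serves snoops (physical to virtual). The page size and addresses are arbitrary examples.

        PAGE = 0x1000

        class MemoryInterface:
            def __init__(self, page_table):
                self.v2p = dict(page_table)                      # MMU mapping
                self.p2v = {p: v for v, p in page_table.items()} # reverse translation module

            def translate_read(self, vaddr):
                page, off = vaddr & ~(PAGE - 1), vaddr & (PAGE - 1)
                return self.v2p[page] | off                      # goes out on the memory bus

            def translate_snoop(self, paddr):
                page, off = paddr & ~(PAGE - 1), paddr & (PAGE - 1)
                return self.p2v[page] | off                      # forwarded to the cache

        mi = MemoryInterface({0x1000: 0x8000_0000, 0x2000: 0x8005_0000})
        print(hex(mi.translate_read(0x1010)))        # 0x80000010
        print(hex(mi.translate_snoop(0x8005_0040)))  # 0x2040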
  • Patent number: 11360905
    Abstract: A caching system including a first sub-cache, a second sub-cache, coupled in parallel with the first sub-cache, for storing write-memory commands that are not cached in the first sub-cache, the second sub-cache including privilege bits configured to store an indication that a corresponding cache line of the second sub-cache is associated with a level of privilege, and wherein the second sub-cache is further configured to receive a first write memory command for a memory address associated with a first level of privilege, store, in the second sub-cache, first data associated with the first write memory command and the level of privilege associated with the cache line, receive a second write memory command for the cache line, the second write memory command associated with a second level of privilege, merge the first level of privilege with the second level of privilege, and output the merged privilege level with the cache line.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: June 14, 2022
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Hippleheuser
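    A tiny Python sketch of merging privilege levels on a second write to the same side-cache line; the bitwise-OR merge and the USER/SUPERVISOR encoding are illustrative assumptions rather than the patented merge rule.

        USER, SUPERVISOR = 0b01, 0b10

        class SideCacheLine:
            def __init__(self):
                self.data = {}        # offset -> value, write-merged into the line
                self.privilege = 0

            def write(self, offset, value, privilege):
                self.data[offset] = value
                self.privilege |= privilege    # merge the stored and incoming privilege levels

        line = SideCacheLine()
        line.write(0, 0xAA, USER)
        line.write(4, 0xBB, SUPERVISOR)
        print(bin(line.privilege))   # 0b11: merged privilege output with the cache line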
  • Patent number: 11354127
    Abstract: A computing system includes a memory controller having a plurality of bypass parameters set by a software program, a thresholds matrix to store threshold values selectable by the plurality of bypass parameters, and a bypass function to determine whether a first cache line is to be displaced with a second cache line in a first memory or the first cache line remains in the first memory and the second cache line is to be accessed by at least one of a processor core and the cache from a second memory.
    Type: Grant
    Filed: July 13, 2020
    Date of Patent: June 7, 2022
    Assignee: INTEL CORPORATION
    Inventors: Harshad S. Sane, Anup Mohan, Kshitij A. Doshi, Mark A. Schmisseur
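    A short Python sketch of a thresholds matrix indexed by software-set bypass parameters and a bypass function that decides between displacing the resident cache line and bypassing it; the hit-count scoring is an assumed stand-in for the real decision inputs.

        THRESHOLDS = [[2, 4],     # rows/columns selected by the bypass parameters
                      [8, 16]]

        def should_displace(resident_hits, incoming_hits, row, col):
            threshold = THRESHOLDS[row][col]
            # displace only if the incoming line looks hotter by at least the threshold;
            # otherwise the incoming line is served from the second memory (bypass)
            return incoming_hits - resident_hits >= threshold

        print(should_displace(resident_hits=3, incoming_hits=10, row=0, col=1))  # True -> displace
        print(should_displace(resident_hits=3, incoming_hits=5,  row=1, col=0))  # False -> bypass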
  • Patent number: 11341036
    Abstract: A system includes a memory device and a processing device, coupled to the memory device. The processing device is to sample a first subset of data units from a set of data units of the memory device using a biased sampling process that increases a probability of sampling particular data units from the set of data units based on one or more characteristics associated with the particular data units. The processing device is to identify a first candidate data unit from the first subset of data units and perform a wear leveling operation in view of the first candidate data unit.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: May 24, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Ying Yu Tai, Jiangli Zhu
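    A small Python sketch of biased sampling for wear leveling: data units are sampled with weights derived from a characteristic (here, write counts) and a candidate is chosen from the sample. The weighting and the final selection rule are illustrative choices.

        import random

        def biased_sample(write_counts, k):
            blocks = list(write_counts)
            weights = [write_counts[b] + 1 for b in blocks]   # bias toward frequently written blocks
            return random.choices(blocks, weights=weights, k=k)

        def pick_wear_level_candidate(write_counts, k=8):
            sample = biased_sample(write_counts, k)
            return max(sample, key=lambda b: write_counts[b])  # hottest block in the sample

        write_counts = {f"block{i}": i * i for i in range(32)}
        candidate = pick_wear_level_candidate(write_counts)
        print("wear leveling operation targets", candidate)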
  • Patent number: 11340836
    Abstract: A variety of applications can include apparatus and/or methods of operating the apparatus in which functionalities of a memory device of the apparatus can be extended by changing data flow behaviour associated with standard commands used between a host platform and the memory device. Such functionalities can include debug capabilities. In an embodiment, a standard write command and data, using a standard protocol to write to a memory device, are received in the memory device, where the data is setup information to enable an extension component in the memory device. An extension component includes instructions in the memory device to execute operations on components of the memory device. The memory device can execute operations of the enabled extension component in the memory device based on the setup information. Additional apparatus, systems, and methods are disclosed.
    Type: Grant
    Filed: August 11, 2020
    Date of Patent: May 24, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Angelo Della Monica, Eric Kwok Fung Yuen, Pasquale Cimmino, Massimo Iaculo, Francesco Falanga
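    A toy Python sketch of enabling an extension component through an ordinary write whose data is treated as setup information; the reserved address and the setup payload are invented for the example.

        EXTENSION_SETUP_LBA = 0xFFFF               # hypothetical reserved address

        class ManagedMemoryDevice:
            def __init__(self):
                self.storage = {}
                self.extension_enabled = False

            def standard_write(self, lba, data):
                if lba == EXTENSION_SETUP_LBA:     # the write data is setup information
                    self.extension_enabled = data == b"ENABLE_DEBUG"
                    if self.extension_enabled:
                        self.run_extension()
                else:
                    self.storage[lba] = data       # ordinary write path is unchanged

            def run_extension(self):
                print("extension component running: dumping internal state")

        dev = ManagedMemoryDevice()
        dev.standard_write(10, b"user data")       # normal behaviour
        dev.standard_write(EXTENSION_SETUP_LBA, b"ENABLE_DEBUG")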
  • Patent number: 11334488
    Abstract: A cache management circuit that includes a predictive adjustment circuit configured to predictively generate cache control information based on a cache hit-miss indicator and the retention ranks of accessed cache lines to improve cache efficiency is disclosed. The predictive adjustment circuit stores the cache control information persistently, independent of whether the data remains in cache memory. The stored cache control information is indicative of prior cache access activity for data from a memory address, which is indicative of the data's “usefulness.” Based on the cache control information, the predictive adjustment circuit controls generation of retention ranks for data in the cache lines when the data is inserted, accessed, and evicted.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: May 17, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Rami Mohammad Al Sheikh, Arthur Perais, Michael Scott McIlvaine
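    A minimal Python sketch of keeping per-address reuse history that persists across evictions and using it to pick a retention rank at insertion; the rank values and update rules are illustrative only.

        history = {}                     # address -> reuse score, kept even after eviction

        def rank_on_insert(addr):
            return 3 if history.get(addr, 0) >= 2 else 0   # previously useful data ranks higher

        def on_hit(addr):
            history[addr] = history.get(addr, 0) + 1

        def on_evict(addr, was_reused):
            if not was_reused:
                history[addr] = max(history.get(addr, 0) - 1, 0)

        on_hit(0x40); on_hit(0x40)           # address 0x40 has shown reuse before
        print(rank_on_insert(0x40), rank_on_insert(0x80))   # 3 (retain longer) vs 0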
  • Patent number: 11327896
    Abstract: A method and apparatus for storing and accessing sparse data is disclosed. A sparse array circuit may receive information indicative of a request to perform a read operation on a memory circuit that includes multiple banks. The sparse array circuit may compare an address included in the received information to multiple entries that correspond to address locations in the memory circuit that store sparse data. In response to a determination that the address matches a particular entry, the sparse array may generate one or more control signals that disable the read operation and cause a data control circuit to transmit the sparse data pattern.
    Type: Grant
    Filed: June 22, 2020
    Date of Patent: May 10, 2022
    Assignee: Apple Inc.
    Inventors: Michael R. Seningen, Ben D. Jarrett, Edward M. McCombs, Greg M. Hess
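    A short Python sketch of the sparse-read shortcut: addresses known to hold the sparse pattern skip the bank access and the pattern is returned directly. The pattern and the bookkeeping are simplified assumptions.

        SPARSE_PATTERN = b"\x00" * 16

        class SparseMemory:
            def __init__(self):
                self.banks = {}          # addr -> 16-byte line actually stored in a bank
                self.sparse_addrs = set()

            def write(self, addr, line):
                if line == SPARSE_PATTERN:
                    self.sparse_addrs.add(addr)      # record it, skip the bank write
                    self.banks.pop(addr, None)
                else:
                    self.sparse_addrs.discard(addr)
                    self.banks[addr] = line

            def read(self, addr):
                if addr in self.sparse_addrs:        # disable the bank read
                    return SPARSE_PATTERN            # data control path supplies the pattern
                return self.banks.get(addr, SPARSE_PATTERN)

        m = SparseMemory()
        m.write(0x100, SPARSE_PATTERN)
        m.write(0x110, b"0123456789abcdef")
        print(m.read(0x100) == SPARSE_PATTERN, m.read(0x110))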
  • Patent number: 11301389
    Abstract: An executable memory page validation system for validating one or more executable memory pages on a given endpoint, the executable memory page validation system comprising at least one processing resource configured to: obtain a plurality of vectors, each vector of the vectors being a bitmask indicative of valid hash values calculated for a plurality of executable memory pages available on the endpoint, the valid hash values being calculated using a respective distinct hash function; calculate one or more validation hash values for a given executable memory page to be loaded to a computerized memory of the endpoint for execution thereof, using one or more selected hash functions of the distinct hash functions; and determine that the given executable memory page is invalid, upon one or more of the validation hash values not being indicated as valid in the corresponding one or more vectors.
    Type: Grant
    Filed: February 27, 2020
    Date of Patent: April 12, 2022
    Assignee: SAFERIDE TECHNOLOGIES LTD.
    Inventors: Yehiel Stein, Yossi Vardi, Oshri Yahav
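    A rough Python sketch of the bitmask check described above: each distinct hash function has a vector with bits set for known-good pages, and a page is rejected as soon as one of its validation hashes is not marked valid. The hash construction and vector width are invented for the example.

        import hashlib

        def page_hash(page: bytes, salt: int) -> int:
            # one "distinct hash function" per salt, truncated to 16 bits for the toy vectors
            return int.from_bytes(hashlib.sha256(bytes([salt]) + page).digest()[:2], "big")

        def build_vectors(valid_pages, n_hashes=3):
            vectors = [0] * n_hashes
            for page in valid_pages:
                for salt in range(n_hashes):
                    vectors[salt] |= 1 << page_hash(page, salt)   # mark the valid hash value
            return vectors

        def page_is_valid(page, vectors):
            return all((vectors[salt] >> page_hash(page, salt)) & 1
                       for salt in range(len(vectors)))

        vectors = build_vectors([b"good page A", b"good page B"])
        print(page_is_valid(b"good page A", vectors))   # True
        print(page_is_valid(b"tampered!!", vectors))    # almost certainly False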
  • Patent number: 11294707
    Abstract: A method includes receiving, by a L2 controller, a request to perform a global operation on a L2 cache and preventing new blocking transactions from entering a pipeline coupled to the L2 cache while permitting new non-blocking transactions to enter the pipeline. Blocking transactions include read transactions and non-victim write transactions. Non-blocking transactions include response transactions, snoop transactions, and victim transactions.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: April 5, 2022
    Assignee: Texas Instruments Incorporated
    Inventors: Abhijeet Ashok Chachad, Naveen Bhoria, David Matthew Thompson, Neelima Muralidharan
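    A tiny Python sketch of the gating rule above: while a global operation is pending, blocking transactions (reads and non-victim writes) are held back while responses, snoops and victims still enter the pipeline.

        from collections import deque

        NON_BLOCKING = {"response", "snoop", "victim"}   # always admitted
        # blocking transactions (reads, non-victim writes) wait out the global operation

        def admit(txn_type, global_op_in_progress):
            if not global_op_in_progress:
                return True
            return txn_type in NON_BLOCKING

        pipeline, stalled = deque(), deque()
        for txn in ["read", "snoop", "write", "victim", "response"]:
            (pipeline if admit(txn, global_op_in_progress=True) else stalled).append(txn)
        print("entered pipeline:", list(pipeline))   # snoop, victim, response
        print("held back:", list(stalled))           # read, write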
  • Patent number: 11288207
    Abstract: Apparatus comprises address translation circuitry configured to access translation data defining a set of memory address translations; transaction handling circuitry to receive translation transactions and to receive invalidation transactions, each translation transaction defining one or more input memory addresses in an input memory address space to be translated to respective output memory addresses in an output memory address space, in which the transaction handling circuitry is configured to control the address translation circuitry to provide the output memory address as a translation response; in which each invalidation transaction defines at least a partial invalidation of the translation data; transaction tracking circuitry to associate an invalidation epoch, of a set of at least two invalidation epochs, with each translation transaction and with each invalidation transaction; and invalidation circuitry to store data defining a given invalidation transaction and, for translation transactions having th
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: March 29, 2022
    Assignee: Arm Limited
    Inventor: Peter Andrew Riocreux
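    A speculative Python sketch (the abstract above is cut off) of epoch tracking: translations and invalidations are stamped with an epoch, and the completion rule shown here, waiting until no translation from an earlier-or-equal epoch remains in flight, is an assumption for illustration.

        class TransactionTracker:
            def __init__(self):
                self._epoch = 0
                self.in_flight = {}                    # translation txn id -> epoch

            def start_translation(self, txn_id):
                self.in_flight[txn_id] = self._epoch

            def finish_translation(self, txn_id):
                self.in_flight.pop(txn_id, None)

            def issue_invalidation(self):
                epoch = self._epoch
                self._epoch += 1                       # later translations get a new epoch
                return epoch

            def invalidation_complete(self, inval_epoch):
                return all(e > inval_epoch for e in self.in_flight.values())

        t = TransactionTracker()
        t.start_translation("T1")
        inval = t.issue_invalidation()
        t.start_translation("T2")                      # newer epoch: does not hold it up
        print(t.invalidation_complete(inval))          # False, T1 still outstanding
        t.finish_translation("T1")
        print(t.invalidation_complete(inval))          # True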
  • Patent number: 11288197
    Abstract: A method for performing pipeline-based accessing management in a storage server and associated apparatus are provided. The method includes: in response to a request of writing user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request of writing the user data with a pipeline architecture of the storage server; utilizing the host device to select a fixed size buffer pool from a plurality of fixed size buffer pools; utilizing the host device to allocate a buffer from the fixed size buffer pool to be a pipeline module of at least one pipeline within the pipeline architecture, for performing buffering for the at least one pipeline; and utilizing the host device to write metadata corresponding to the user data into the allocated buffer.
    Type: Grant
    Filed: November 29, 2020
    Date of Patent: March 29, 2022
    Assignee: Silicon Motion Technology (Hong Kong) Limited
    Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
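    A small Python sketch of selecting a fixed size buffer pool and allocating a buffer from it to hold the metadata of an object write; the pool sizes and the selection rule are placeholders, not the product's configuration.

        class FixedSizeBufferPool:
            def __init__(self, buf_size, count):
                self.buf_size = buf_size
                self.free = [bytearray(buf_size) for _ in range(count)]

            def allocate(self):
                return self.free.pop() if self.free else None

        def select_pool(pools, needed):
            # smallest pool whose buffers are large enough for the metadata
            return min((p for p in pools if p.buf_size >= needed), key=lambda p: p.buf_size)

        pools = [FixedSizeBufferPool(512, 8),
                 FixedSizeBufferPool(4096, 4),
                 FixedSizeBufferPool(65536, 2)]
        metadata = b'{"object": "obj-1", "chunks": 3}'
        buf = select_pool(pools, len(metadata)).allocate()   # pipeline module for this write
        buf[:len(metadata)] = metadata
        print(bytes(buf[:len(metadata)]))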