Patents Examined by Zhuo H. Li
  • Patent number: 11966361
    Abstract: With a forever incremental snapshot configuration and a typical caching policy (e.g., least recently used), a storage appliance may evict stable data blocks of an older snapshot, perhaps unchanged data blocks of the snapshot baseline. If stable data blocks have been evicted, restore of a recent snapshot will suffer the time penalty of downloading the stable blocks for restoring the recent snapshot. Creating synthetic baseline snapshots and refreshing eviction data of stable data blocks can avoid eviction of stable data blocks and reduce the risk of violating a recovery time objective.
    Type: Grant
    Filed: November 29, 2021
    Date of Patent: April 23, 2024
    Assignee: NetApp, Inc.
    Inventors: Ajay Pratap Singh Kushwah, Ling Zheng, Sharad Jain
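    As a rough illustration of the eviction-refresh idea described in the abstract above, the following minimal Python sketch keeps snapshot blocks in an LRU cache and "touches" the unchanged baseline blocks whenever a synthetic baseline is built, so they are not aged out before a restore. The class and function names are assumptions for illustration, not taken from the patent.
      from collections import OrderedDict

      class BlockCache:
          """LRU cache of snapshot data blocks (illustrative sketch)."""
          def __init__(self, capacity):
              self.capacity = capacity
              self.blocks = OrderedDict()   # block_id -> data, ordered oldest-first

          def put(self, block_id, data):
              self.blocks[block_id] = data
              self.blocks.move_to_end(block_id)      # mark as most recently used
              while len(self.blocks) > self.capacity:
                  self.blocks.popitem(last=False)    # evict least recently used

          def refresh(self, block_ids):
              # Reset eviction data for stable blocks so they stay resident.
              for block_id in block_ids:
                  if block_id in self.blocks:
                      self.blocks.move_to_end(block_id)

      def build_synthetic_baseline(cache, baseline_blocks, incremental_blocks):
          # A synthetic baseline references the unchanged baseline blocks plus the
          # blocks changed by later incrementals; refreshing the unchanged blocks
          # keeps them cached for a fast restore of a recent snapshot.
          synthetic = dict(baseline_blocks)
          synthetic.update(incremental_blocks)
          cache.refresh(baseline_blocks.keys())
          return synthetic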
  • Patent number: 11954047
    Abstract: Systems, methods, and apparatuses to implement spatially unique and location independent persistent memory encryption are described. In one embodiment, a system on a chip (SoC) includes at least one persistent range register to indicate a persistent range of memory, an address modifying circuit to check if an address for a memory store request is within the persistent range indicated by the at least one persistent range register, and append a unique identifier value, for a component corresponding to the memory store request for the address, to the address to generate a modified address and output the modified address as an output address when the address is within the persistent range, and output the address as the output address when the address is not within the persistent range, and an encryption engine circuit to generate a ciphertext based on the output address.
    Type: Grant
    Filed: September 26, 2020
    Date of Patent: April 9, 2024
    Assignee: Intel Corporation
    Inventors: Mahesh Natu, Anand K. Enamandram, Manjula Peddireddy, Robert A. Branch, Tiffany J. Kasanicky, Siddhartha Chhabra, Hormuzd Khosravi
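    A minimal sketch of the address-modifying step described above: if a store address falls inside the configured persistent range, a per-component identifier is appended above the address bits before the result reaches the encryption engine, making the ciphertext spatially unique; addresses outside the range pass through unchanged. The field widths and names below are assumptions for illustration only.
      ADDR_BITS = 52   # assumed physical address width for this sketch

      def modify_address(addr, component_id, persistent_start, persistent_end):
          """Return the address fed to the encryption engine (illustrative)."""
          if persistent_start <= addr < persistent_end:
              # Within the persistent range: append the unique component
              # identifier above the address bits to form the modified address.
              return (component_id << ADDR_BITS) | addr
          # Outside the persistent range: the address passes through unchanged.
          return addr

      # Example: two components storing to the same persistent address get
      # different modified addresses, hence different ciphertexts.
      a = modify_address(0x1000, component_id=1, persistent_start=0x0, persistent_end=0x10000)
      b = modify_address(0x1000, component_id=2, persistent_start=0x0, persistent_end=0x10000)
      assert a != b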
  • Patent number: 11947821
    Abstract: The present disclosure provides methods, systems, and non-transitory computer readable media for managing a primary storage unit of an accelerator. The methods include assessing activity of the accelerator; assigning, based on the assessed activity of the accelerator, a lease to a group of one or more pages of data on the primary storage unit, wherein the assigned lease indicates a lease duration; and marking, in response to the expiration of the lease duration indicated by the lease, the group of one or more pages of data as an eviction candidate.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: April 2, 2024
    Assignee: Alibaba Group Holding Limited
    Inventors: Yongbin Gu, Pengcheng Li, Tao Zhang, Yuan Xie
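    A small sketch of the lease mechanism above: groups of pages receive a lease whose duration depends on the assessed activity of the accelerator, and groups whose leases have expired are marked as eviction candidates. The data structures and the policy for choosing a duration are assumptions for illustration.
      import time

      class LeaseManager:
          def __init__(self):
              self.leases = {}   # group_id -> lease expiry time (seconds)

          def assign_lease(self, group_id, accelerator_utilization, base_duration=1.0):
              # Busier accelerator -> longer lease, so active pages stay resident
              # (an assumed policy; the patent only requires activity-based leases).
              duration = base_duration * (1.0 + accelerator_utilization)
              self.leases[group_id] = time.monotonic() + duration

          def eviction_candidates(self):
              now = time.monotonic()
              return [g for g, expiry in self.leases.items() if now >= expiry]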
  • Patent number: 11941259
    Abstract: A communication method applied to a computer system that includes a first subsystem and a second subsystem. A safety level of the first subsystem is higher than a safety level of the second subsystem. The first subsystem includes a memory access checker. The method includes the memory access checker receiving a memory access request from a memory access initiator, determining, based on preconfigured memory safety level division information, whether a safety level of a memory to be accessed by the memory access initiator matches a safety level of the memory access initiator, and allowing the memory access initiator to access the memory when the safety level of the memory matches the safety level of the memory access initiator.
    Type: Grant
    Filed: July 7, 2021
    Date of Patent: March 26, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Dongjiu Geng, Chuanlong Yang, Yan Sang, Qiangmin Lin
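    The memory access checker above can be sketched as a table lookup plus a comparison: the preconfigured division information maps address ranges to safety levels, and a request is allowed only when the initiator's level satisfies the level of the target memory. The range table and the exact matching rule below (initiator level at least the required level) are illustrative assumptions.
      # Preconfigured memory safety level division information (illustrative):
      # list of (start, end, required_safety_level) for physical address ranges.
      SAFETY_RANGES = [
          (0x0000_0000, 0x4000_0000, 2),   # high-safety region
          (0x4000_0000, 0x8000_0000, 1),   # low-safety region
      ]

      def check_access(addr, initiator_level):
          """Return True if the initiator may access addr (sketch of the checker)."""
          for start, end, required in SAFETY_RANGES:
              if start <= addr < end:
                  # One reading of "matches": the initiator's safety level must be
                  # at least the level assigned to the target memory.
                  return initiator_level >= required
          return False   # address not covered by the division information

      assert check_access(0x1000, initiator_level=2)
      assert not check_access(0x1000, initiator_level=1)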
  • Patent number: 11922032
    Abstract: A content addressable memory circuit is provided that includes: multiple integrated circuit memory devices that include memory address locations that share common memory addresses; buffer circuits operatively coupled to the memory devices; a hash table that includes a plurality of hash values that each corresponds to one or more key values; one or more processor circuits configured with instructions to perform operations that include: assigning each hash value to a memory address location based upon a first portion of the hash value; storing each key value at a memory address location assigned to a first portion of a hash value that corresponds to the key value; copying a first key value from a first memory address location within a memory device to a buffer circuit operatively coupled to the memory device; copying the first key value from the buffer circuit operatively coupled to the memory device to a second memory address location of the memory device; and assigning a second portion of a hash value that co
    Type: Grant
    Filed: March 31, 2022
    Date of Patent: March 5, 2024
    Assignee: DreamBig Semiconductor Inc.
    Inventors: Sohail A Syed, Hillel Gazit, Hon Luu, Pranab Ghosh
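    Roughly, the placement scheme above splits a hash of the key into portions: the first portion selects the memory address where the key is stored, and rebalancing copies a resident key through a buffer to a second address selected by another portion of its hash. The very reduced Python sketch below collapses the multiple devices and buffer circuits into a single table; the hash function and table size are assumptions.
      import hashlib

      NUM_ADDRESSES = 1024   # shared memory addresses (assumed for the sketch)

      def hash_portions(key: bytes):
          digest = hashlib.sha256(key).digest()
          first = int.from_bytes(digest[:4], "big") % NUM_ADDRESSES
          second = int.from_bytes(digest[4:8], "big") % NUM_ADDRESSES
          return first, second

      table = {}   # memory address -> key (one entry per address in this sketch)

      def insert(key: bytes):
          first, second = hash_portions(key)
          if first not in table:
              table[first] = key
              return first
          # First location occupied: copy the resident key aside (the buffer in
          # the patent) and place it at the address chosen by the second portion
          # of its own hash value.
          buffered = table[first]
          alt = hash_portions(buffered)[1]
          if alt not in table:
              table[alt] = buffered
              table[first] = key
              return first
          raise RuntimeError("both candidate addresses occupied (sketch only)")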
  • Patent number: 11914905
    Abstract: Examples describe memory refresh operations for memory subsystems. One example is a method for a memory controller, the method including entering a first state upon exiting self-refresh state, wherein the first state comprises activating a first timer. The method includes entering a second state from the first state upon detecting an end of an active period and detecting that the first timer has not expired. The method includes entering a third state from the second state upon detecting expiration of the second state, wherein the third state comprises re-entering the self-refresh state.
    Type: Grant
    Filed: July 15, 2021
    Date of Patent: February 27, 2024
    Assignee: XILINX, INC.
    Inventor: Martin Newman
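    The abstract above describes a small state machine around self-refresh exit; a schematic sketch follows. The state names, the use of wall-clock timers, and the trigger for leaving the second state are illustrative assumptions, since the abstract does not spell them out.
      import time

      class RefreshController:
          """Schematic sketch of the three post-self-refresh states."""
          def __init__(self, first_timer_s=0.01, second_state_s=0.01):
              self.state = "SELF_REFRESH"
              self.first_timer_expiry = None
              self.second_state_expiry = None
              self.first_timer_s = first_timer_s
              self.second_state_s = second_state_s

          def exit_self_refresh(self):
              # First state: entered on self-refresh exit; activates the first timer.
              self.state = "STATE_1"
              self.first_timer_expiry = time.monotonic() + self.first_timer_s

          def end_of_active_period(self):
              # Second state: entered when the active period ends while the
              # first timer has not yet expired.
              if self.state == "STATE_1" and time.monotonic() < self.first_timer_expiry:
                  self.state = "STATE_2"
                  self.second_state_expiry = time.monotonic() + self.second_state_s

          def tick(self):
              # Third state: entered when the second state expires; the
              # controller re-enters self-refresh.
              if self.state == "STATE_2" and time.monotonic() >= self.second_state_expiry:
                  self.state = "SELF_REFRESH"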
  • Patent number: 11907121
    Abstract: A method for caching content, a method for reading content, a client, and a storage medium are provided. The method for caching content includes: acquiring JSON data corresponding to content to be delivered, and determining identification information corresponding to the JSON data; grouping the JSON data and storing the grouped JSON data according to the identification information to obtain a memory list corresponding to the identification information, and writing the JSON data to a target disk according to the identification information and the memory list; performing video preloading processing on the content to be delivered according to the JSON data to obtain preloaded video data, and determining address information corresponding to the preloaded video data; and storing the address information in the memory list and the target disk according to the identification information, to complete caching of the JSON data and the preloaded video data.
    Type: Grant
    Filed: August 14, 2020
    Date of Patent: February 20, 2024
    Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
    Inventors: Weiqin Lian, You Tu
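    In outline, the caching flow above groups the JSON for a piece of content under its identifier, keeps the groups in an in-memory list, mirrors them to a target disk, and stores the address of any preloaded video alongside. The compressed Python sketch below stubs out video preloading and assumes a simple file-per-identifier disk layout; none of the names come from the patent.
      import json, os
      from collections import defaultdict

      memory_list = defaultdict(list)   # identifier -> grouped JSON records
      CACHE_DIR = "cache"               # assumed on-disk location

      def cache_content(identifier, records, preload_video):
          # Group the JSON data under its identifier and keep it in memory.
          memory_list[identifier].extend(records)

          # Video preloading is stubbed: the callable returns an address
          # (path or URL) for the preloaded data, stored with the JSON.
          video_address = preload_video(records)
          memory_list[identifier].append({"preloaded_video": video_address})

          # Mirror the grouped data to the target disk keyed by the identifier.
          os.makedirs(CACHE_DIR, exist_ok=True)
          with open(os.path.join(CACHE_DIR, f"{identifier}.json"), "w") as f:
              json.dump(memory_list[identifier], f)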
  • Patent number: 11907555
    Abstract: Described are memory modules that include address-buffer components and data-buffer components that together support wide- and narrow-data modes. The address-buffer component manages communication between a memory controller and two sets of memory components. In the wide-data mode, the address-buffer enables memory components in each set and instructs the data-buffer components to communicate full-width read and write data by combining data from or to both sets for each memory access. In the narrow-data mode, the address-buffer enables memory components in just one of the two sets and instructs the data-buffer components to communicate half-width read and write data with one set per memory access.
    Type: Grant
    Filed: November 18, 2022
    Date of Patent: February 20, 2024
    Assignee: Rambus Inc.
    Inventors: Suresh Rajan, Abhijit M. Abhyankar, Ravindranath Kollipara, David A. Secker
  • Patent number: 11899940
    Abstract: When load requests are generated to support data processing operations, the load requests are buffered in pending load buffer circuitry prior to being carried out. Coalescing circuitry determines for a first load request whether a set of one or more subsequent load requests buffered in the pending load buffer circuitry satisfies an address proximity condition. The address proximity condition is satisfied when all data items identified by the set of one or more subsequent load requests are comprised within a series of data items which will be retrieved from the memory system in response to the first load request. When the address proximity condition is satisfied, forwarding of the set of one or more subsequent load requests is suppressed.
    Type: Grant
    Filed: October 7, 2020
    Date of Patent: February 13, 2024
    Assignee: Arm Limited
    Inventors: Mbou Eyole, Stefanos Kaxiras
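    The address proximity condition above amounts to asking whether every data item wanted by a buffered load already falls inside the contiguous region (for example, one cache line) that the first load will bring in; if so, forwarding the later loads can be suppressed. A small sketch, with the fetch granule size as an assumption:
      FETCH_BYTES = 64   # bytes returned per memory fetch (assumed: one cache line)

      def proximity_satisfied(first_addr, first_size, pending):
          """True if every pending load lies within the granule fetched for the
          first load, so forwarding of the pending loads can be suppressed."""
          granule_start = (first_addr // FETCH_BYTES) * FETCH_BYTES
          granule_end = granule_start + FETCH_BYTES
          # The first load itself must fit, otherwise it spans granules.
          if first_addr + first_size > granule_end:
              return False
          return all(granule_start <= addr and addr + size <= granule_end
                     for addr, size in pending)

      # Two 8-byte loads inside the same 64-byte line can be coalesced:
      assert proximity_satisfied(0x100, 8, [(0x108, 8), (0x110, 8)])
      assert not proximity_satisfied(0x100, 8, [(0x148, 8)])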
  • Patent number: 11899595
    Abstract: Systems and methods for providing object versioning in a storage system may support the logical deletion of stored objects. In response to a delete operation specifying both a user key and a version identifier, the storage system may permanently delete the specified version of an object having the specified key. In response to a delete operation specifying a user key, but not a version identifier, the storage system may create a delete marker object that does not contain object data, and may generate a new version identifier for the delete marker. The delete marker may be stored as the latest object version of the user key, and may be addressable in the storage system using a composite key comprising the user key and the new version identifier. Subsequent attempts to retrieve the user key without specifying a version identifier may return an error, although the object was not actually deleted.
    Type: Grant
    Filed: September 2, 2022
    Date of Patent: February 13, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Jason G. McHugh, Praveen Kumar Gattu, Michael A. Ten-Pow, Derek Ernest Denny-Brown, II
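    The delete semantics above can be sketched as follows: a delete that names a version removes that version permanently, while a delete that names only the user key inserts a new, data-less delete marker as the latest version, so unversioned reads fail even though older versions remain. The store layout and error type below are illustrative assumptions.
      import uuid

      store = {}   # user_key -> list of versions, newest last

      def delete(user_key, version_id=None):
          versions = store.setdefault(user_key, [])
          if version_id is not None:
              # Permanently delete the named version only.
              versions[:] = [v for v in versions if v["version_id"] != version_id]
              return None
          # No version given: append a delete marker with a fresh version id as
          # the latest version; no object data is stored and nothing is removed.
          marker = {"version_id": uuid.uuid4().hex, "delete_marker": True}
          versions.append(marker)
          return marker["version_id"]

      def get(user_key, version_id=None):
          versions = store.get(user_key, [])
          if version_id is None:
              if not versions or versions[-1].get("delete_marker"):
                  raise KeyError(user_key)   # latest version is a delete marker
              return versions[-1]
          for v in versions:
              if v["version_id"] == version_id:
                  return v
          raise KeyError((user_key, version_id))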
  • Patent number: 11893242
    Abstract: Semiconductor devices, packaging architectures and associated methods are disclosed. In one embodiment, a multi-chip module (MCM) is disclosed. The MCM includes a common substrate and a first integrated circuit (IC) chip disposed on the common substrate. The first IC chip includes a first memory interface. A second IC chip is disposed on the common substrate and includes a second memory interface. A first memory device is disposed on the common substrate and includes memory and a first port coupled to the memory. The first port is configured for communicating with the first memory interface of the first IC chip. A second port is coupled to the memory and communicates with the second memory interface of the second IC chip. In-memory processing circuitry is coupled to the memory and controls transactions between the first memory device and the first and second IC chips.
    Type: Grant
    Filed: November 25, 2022
    Date of Patent: February 6, 2024
    Assignee: Eliyan Corporation
    Inventors: Ramin Farjadrad, Syrus Ziai
  • Patent number: 11892952
    Abstract: A processor of an aspect includes a plurality of packed data registers, and a decode unit to decode a no-locality hint vector memory access instruction. The no-locality hint vector memory access instruction is to indicate a packed data register of the plurality of packed data registers that is to have source packed memory indices. The source packed memory indices are to have a plurality of memory indices. The no-locality hint vector memory access instruction is to provide a no-locality hint to the processor for data elements that are to be accessed with the memory indices. The processor also includes an execution unit coupled with the decode unit and the plurality of packed data registers. The execution unit, in response to the no-locality hint vector memory access instruction, is to access the data elements at memory locations that are based on the memory indices.
    Type: Grant
    Filed: July 18, 2022
    Date of Patent: February 6, 2024
    Assignee: Intel Corporation
    Inventor: Christopher J. Hughes
  • Patent number: 11886713
    Abstract: A memory control device 100 of the present invention includes a data storage processing unit 101 that stores, in an additional data area that is an area for storing additional data in memory-stored data including compressed data and the additional data to be stored in a memory, an error correcting code of the compressed data and compression information representing the degree of compression of the compressed data, and a read processing unit 102 that controls readout of the memory-stored data on the basis of the degree of compression represented by the compression information in the additional data area of the memory-stored data, when reading out the memory-stored data from the memory.
    Type: Grant
    Filed: June 19, 2020
    Date of Patent: January 30, 2024
    Assignee: NEC CORPORATION
    Inventor: Kei Kimoto
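    A compact sketch of the layout above: the compressed payload is stored together with an additional data area holding its error correcting code and a field recording the degree of compression; on a read, that field tells the controller how much payload to fetch and check. The encoding below (zlib, a CRC32 standing in for an ECC, fixed-width header fields) is purely illustrative.
      import struct
      import zlib

      def store(data: bytes) -> bytes:
          compressed = zlib.compress(data)
          ecc = zlib.crc32(compressed)      # CRC as a stand-in for an ECC
          # Additional data area: 4-byte compressed length (degree of
          # compression) and 4-byte check value, followed by the payload.
          return struct.pack("<II", len(compressed), ecc) + compressed

      def load(stored: bytes) -> bytes:
          comp_len, ecc = struct.unpack_from("<II", stored, 0)
          payload = stored[8:8 + comp_len]  # read only what compression left
          if zlib.crc32(payload) != ecc:
              raise ValueError("check value mismatch")
          return zlib.decompress(payload)

      assert load(store(b"x" * 4096)) == b"x" * 4096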
  • Patent number: 11886715
    Abstract: Apparatuses and methods for memory array accessibility can include an apparatus with an array of memory cells. The array can include a first portion accessible by a controller of the array and inaccessible to devices external to the apparatus. The array can include a second portion accessible to the devices external to the apparatus. The array can include a number of registers that store row addresses that indicate which portion of the array is the first portion. The apparatus can include the controller configured to access the number of registers to allow access to the second portion by the devices external to the apparatus based on the stored row addresses.
    Type: Grant
    Filed: November 19, 2021
    Date of Patent: January 30, 2024
    Inventors: Daniel B. Penney, Gary L. Howe
  • Patent number: 11861211
    Abstract: An API operates in conjunction with a bridge chip and first and second hosts having first and second memories, respectively. The bridge chip connects the memories. The API comprises key identifier registration functionality to register a key identifier for each of plural computer processes performed by the first host, thereby to define plural key identifiers; and/or access control functionality to provide at least computer process P1 performed by the first host with access, typically via the bridge chip, to at least local memory buffer M2 residing in the second memory, typically after the access control functionality first validates that process P1 has a key identifier which has been registered, e.g., via the key identifier registration functionality. Typically, the access control functionality also prevents at least computer process P2, performed by the first host, which has not registered a key identifier, from accessing local memory buffer M2, e.g., via the bridge chip.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: January 2, 2024
    Assignee: MELLANOX TECHNOLOGIES, LTD.
    Inventors: Gal Shalom, Adi Horowitz, Omri Kahalon, Liran Liss, Aviad Yehezkel, Rabie Loulou
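    A schematic sketch of the two API pieces above: processes on the first host register a key identifier, and the access-control path lets a registered process reach a buffer in the second host's memory over the bridge while rejecting unregistered processes. All names are illustrative assumptions, and a plain slice stands in for the bridge-chip transfer.
      registered_keys = {}   # process id on host 1 -> registered key identifier

      def register_key_identifier(process_id, key_identifier):
          registered_keys[process_id] = key_identifier

      def access_remote_buffer(process_id, remote_buffer, offset, length):
          """Read from a buffer residing in the second host's memory via the
          bridge; only processes with a registered key identifier may do so."""
          if process_id not in registered_keys:
              raise PermissionError(f"process {process_id} has no registered key id")
          return remote_buffer[offset:offset + length]

      register_key_identifier("P1", key_identifier=0x1234)
      m2 = bytearray(b"remote data in host 2 memory")
      assert access_remote_buffer("P1", m2, 0, 6) == bytearray(b"remote")
      try:
          access_remote_buffer("P2", m2, 0, 6)   # P2 never registered a key id
      except PermissionError:
          pass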
  • Patent number: 11853565
    Abstract: A data storage device includes a memory device and a controller coupled to the memory device. The controller is configured to allocate two or more zones to a first superblock of a plurality of superblocks. The controller is further configured to allocate a zone to a second superblock, where the second superblock only stores data of the zone. The first superblock has a first priority and the second superblock has a second priority, where the second priority is greater than the first priority. Data is moved from the first superblock to another superblock dedicated for a single zone after the first superblock is closed.
    Type: Grant
    Filed: October 1, 2021
    Date of Patent: December 26, 2023
    Assignee: Western Digital Technologies, Inc.
    Inventor: Ravishankar Surianarayanan
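    The allocation policy above, roughly: several zones may share a lower-priority superblock, a single zone may own a higher-priority superblock, and once a shared superblock is closed its data is migrated to superblocks dedicated to single zones. A reduced sketch with assumed names; the data movement itself is elided.
      class Superblock:
          def __init__(self, priority):
              self.priority = priority   # higher value = higher priority
              self.zones = []            # zones whose data this superblock holds
              self.closed = False

      # Two or more zones share the first (lower-priority) superblock.
      sb_shared = Superblock(priority=1)
      sb_shared.zones = ["zone_a", "zone_b"]

      # A second superblock is dedicated to a single zone and has higher priority.
      sb_dedicated = Superblock(priority=2)
      sb_dedicated.zones = ["zone_c"]

      def close_and_migrate(shared, make_superblock):
          """After a shared superblock is closed, give each of its zones a
          dedicated superblock to move its data into."""
          shared.closed = True
          return {zone: make_superblock() for zone in shared.zones}

      dedicated = close_and_migrate(sb_shared, lambda: Superblock(priority=2))
      assert set(dedicated) == {"zone_a", "zone_b"}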
  • Patent number: 11853214
    Abstract: A method for compressing data in a local cache of a web server is described. A local cache compression engine accesses values in the local cache and determines a cardinality of the values of the local cache. The local cache compression engine determines a compression rate of a compression algorithm based on the cardinality of the values of the local cache. The compression algorithm is applied to the cache based on the compression rate to generate a compressed local cache.
    Type: Grant
    Filed: November 29, 2022
    Date of Patent: December 26, 2023
    Assignee: eBay Inc.
    Inventor: Amit Desai
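    The idea above is that the variety of values in the cache (its cardinality) drives how aggressively it is worth compressing: few distinct values compress well, so a higher compression level pays off, while high cardinality argues for a cheaper level. A small sketch using zlib; the level-selection thresholds are an assumed policy, not taken from the patent.
      import json
      import zlib

      def choose_compression_level(values):
          """Map the fraction of distinct values to a zlib level (assumed policy)."""
          cardinality_ratio = len(set(values)) / max(len(values), 1)
          if cardinality_ratio < 0.1:
              return 9   # highly repetitive data: compress hard
          if cardinality_ratio < 0.5:
              return 6
          return 1       # mostly unique data: spend little CPU on compression

      def compress_local_cache(cache: dict) -> bytes:
          level = choose_compression_level(list(cache.values()))
          return zlib.compress(json.dumps(cache).encode("utf-8"), level)

      cache = {f"key{i}": "popular-value" for i in range(1000)}
      compressed = compress_local_cache(cache)
      assert len(compressed) < len(json.dumps(cache))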
  • Patent number: 11853594
    Abstract: A neural network computing chip includes a translation circuit and a computing circuit. After input data is processed by the translation circuit, a value of each element of the data input into the computing circuit is not a negative number, thereby meeting a value limitation condition imposed by the computing circuit on the input data.
    Type: Grant
    Filed: April 18, 2022
    Date of Patent: December 26, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Xingcheng Hua, Zhong Zeng, Leibin Ni
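    The translation circuit above effectively shifts the input so no element is negative before it reaches the computing circuit, and the result can then be corrected for the shift. A sketch of that idea for a dot product, assuming the correction term is applied after the constrained computation:
      import numpy as np

      def translated_dot(x, w):
          """Compute x . w while feeding only non-negative values to the
          'computing circuit' (here just np.dot), then undo the translation."""
          shift = min(0.0, float(x.min()))   # how far x goes negative, if at all
          x_shifted = x - shift              # every element is now >= 0
          raw = np.dot(x_shifted, w)         # what the constrained hardware computes
          return raw + shift * np.sum(w)     # correct for the added offset

      x = np.array([-2.0, 0.5, 3.0])
      w = np.array([1.0, -1.0, 2.0])
      assert np.isclose(translated_dot(x, w), np.dot(x, w))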
  • Patent number: 11816036
    Abstract: A method and system for performing data movement operations are described herein. One embodiment of a method includes: storing data for a first memory address in a cache line of a memory of a first processing unit, the cache line associated with a coherency state indicating that the memory has sole ownership of the cache line; decoding an instruction for execution by a second processing unit, the instruction comprising a source data operand specifying the first memory address and a destination operand specifying a memory location in the second processing unit; and responsive to executing the decoded instruction, copying data from the cache line of the memory of the first processing unit as identified by the first memory address, to the memory location of the second processing unit, wherein responsive to the copy, the cache line is to remain in the memory and the coherency state is to remain unchanged.
    Type: Grant
    Filed: May 6, 2022
    Date of Patent: November 14, 2023
    Assignee: Intel Corporation
    Inventors: Anil Vasudevan, Venkata Krishnan, Andrew J. Herdrich, Ren Wang, Robert G. Blankenship, Vedaraman Geetha, Shrikant M. Shah, Marshall A. Millier, Raanan Sade, Binh Q. Pham, Olivier Serres, Chyi-Chang Miao, Christopher B. Wilkerson
  • Patent number: 11809321
    Abstract: Methods and apparatus for memory management are described. In one example, this disclosure describes a method that includes executing, by a first processing unit, first work unit operations specified by a first work unit message, wherein execution of the first work unit operations includes accessing data from shared memory included within the computing system, modifying the data, and storing the modified data in a first cache associated with the first processing unit; identifying, by the computing system, a second work unit message that specifies second work unit operations that access the shared memory; updating, by the computing system, the shared memory by storing the modified data in the shared memory; receiving, by the computing system, an indication that updating the shared memory with the modified data is complete; and enabling the second processing unit to execute the second work unit operations.
    Type: Grant
    Filed: June 10, 2022
    Date of Patent: November 7, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Wael Noureddine, Jean-Marc Frailong, Pradeep Sindhu, Bertrand Serlet
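    The ordering described above can be sketched as: the first processing unit works on a cached copy of the shared data, and a later work unit that touches the same shared memory is held back until the modified data has been written back and the write-back is acknowledged. A condensed sketch with threading primitives standing in for the work-unit infrastructure; the names are assumptions.
      import threading

      shared_memory = {"buf": 0}
      flush_done = threading.Event()

      def first_work_unit():
          local_cache = dict(shared_memory)   # access shared data into a local cache
          local_cache["buf"] += 1             # modify the cached copy
          shared_memory.update(local_cache)   # update shared memory with modified data
          flush_done.set()                    # indicate the update is complete

      def second_work_unit():
          flush_done.wait()                   # held back until the update completes
          assert shared_memory["buf"] == 1    # now sees the updated shared data

      t1 = threading.Thread(target=first_work_unit)
      t2 = threading.Thread(target=second_work_unit)
      t2.start(); t1.start()
      t1.join(); t2.join()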