Patents Examined by David Yi
  • Patent number: 11733870
    Abstract: Disclosed herein are systems having an integrated circuit device disposed within an integrated circuit package having a periphery, and within this periphery a transaction processor is configured to receive a combination of signals (e.g., using a standard memory interface), intercept some of the signals to initiate a data transformation, and forward the other signals to one or more memory controllers within the periphery to execute standard memory access operations (e.g., with a set of DRAM devices). The DRAM devices may or may not be within the package periphery. In some embodiments, the transaction processor can include a data plane and control plane to decode and route the combination of signals. In other embodiments, off-load engines and processor cores within the periphery can support execution and acceleration of the data transformations.
    Type: Grant
    Filed: January 16, 2019
    Date of Patent: August 22, 2023
    Assignee: Rambus Inc.
    Inventors: David Wang, Nirmal Saxena
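    A minimal Python sketch of the intercept-versus-forward routing the abstract describes; the MemRequest/TransactionProcessor names, the is_transform_command flag, and the off-load-engine interface are hypothetical, illustrative assumptions rather than the patented design.
    ```python
    from dataclasses import dataclass

    @dataclass
    class MemRequest:
        address: int
        is_transform_command: bool   # e.g., encoded in a reserved address range
        payload: bytes = b""

    class TransactionProcessor:
        def __init__(self, memory_controller, offload_engine):
            self.memory_controller = memory_controller   # executes standard DRAM accesses
            self.offload_engine = offload_engine         # accelerates data transformations

        def handle(self, request: MemRequest):
            if request.is_transform_command:
                # Intercepted signal: kick off a data transformation on the off-load engine.
                return self.offload_engine.transform(request.payload)
            # Ordinary signal: forward to a memory controller for a standard memory access.
            return self.memory_controller.access(request.address, request.payload)
    ```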
  • Patent number: 11734198
    Abstract: The present disclosure provides methods, apparatuses, and systems for implementing and operating a memory module, for example, in a computing device that includes a network interface, which is coupled to a network to enable communication with a client device, and processing circuitry, which is coupled to the network interface via a data bus and programmed to perform operations based on user inputs received from the client device. The memory module includes memory devices, which may be non-volatile memory or volatile memory, and a memory controller coupled between the data bus and the memory devices. The memory controller may be programmed to determine when the processing circuitry is expected to request a data block and control data storage in the memory devices.
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: August 22, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Richard C. Murphy
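    A hypothetical sketch of the predict-and-stage behaviour described above: the controller guesses the next block from the recent access pattern and stages it in faster memory before the processing circuitry asks for it. The class, the trivial sequential predictor, and the dict-like device interfaces are assumptions for illustration only.
    ```python
    from collections import deque

    class PredictiveMemoryController:
        def __init__(self, nonvolatile, volatile_cache, history_len=4):
            self.nv = nonvolatile          # dict-like: block_id -> data
            self.cache = volatile_cache    # dict-like staging area in faster memory
            self.history = deque(maxlen=history_len)

        def _predict_next(self):
            # Trivial sequential predictor: assume the next block follows the last one.
            return self.history[-1] + 1 if self.history else None

        def read(self, block_id):
            self.history.append(block_id)
            data = self.cache.pop(block_id) if block_id in self.cache else self.nv[block_id]
            nxt = self._predict_next()
            if nxt is not None and nxt in self.nv:
                self.cache[nxt] = self.nv[nxt]   # stage the block expected to be requested next
            return data
    ```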
  • Patent number: 11734185
    Abstract: A method to store a data value onto a cache of a storage hierarchy. A range of a collection of values that resides on a first tier of the hierarchy is initialized. The range is partitioned into disjoint range partitions, a first subset of which is designated as cached and a second subset as uncached. The collection is partitioned into a subset of uncached data and a subset of cached data and placed into the respective partitions. The range partition to which the data value belongs (i.e. the target range partition) is identified as being cached. If the cache is full, the range of the target range partition is reduced until either the data value is excluded (if the data value is an end point of the partition range) or elements within the target range are evicted to make space for the data value.
    Type: Grant
    Filed: February 16, 2021
    Date of Patent: August 22, 2023
    Assignee: Kinaxis Inc.
    Inventor: Angela Lin
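    A rough Python illustration of the range-shrinking admission described in the abstract: a cached range partition gives up values at its upper end, shrinking its range as it goes, until the new value fits or the new value itself falls outside the range. The per-partition capacity and evict-from-the-top policy are simplifying assumptions, not the patented method.
    ```python
    import bisect

    class CachedRangePartition:
        def __init__(self, low, high, capacity):
            self.low, self.high = low, high   # inclusive value range of this partition
            self.values = []                  # cached values, kept sorted
            self.capacity = capacity

        def insert(self, value):
            if not (self.low <= value <= self.high):
                return False                  # value belongs to an uncached partition
            while len(self.values) >= self.capacity:
                evicted = self.values.pop()   # evict the largest cached value...
                self.high = evicted - 1       # ...and shrink the range so it is excluded
                if value > self.high:
                    return False              # the new value was shrunk out of the range
            bisect.insort(self.values, value)
            return True
    ```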
  • Patent number: 11733932
    Abstract: Example implementations relate to managing data on a memory module. Data may be transferred between a first NVM and a second NVM on a memory module. The second NVM may have a higher memory capacity and a longer access latency than the first NVM. A mapping between a first address and a second address may be stored in an NVM on the memory module. The first address may refer to a location at which data is stored in the first NVM. The second address may refer to a location, in the second NVM, from which the data was copied.
    Type: Grant
    Filed: September 27, 2013
    Date of Patent: August 22, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Gregg B Lesartre, Andrew R Wheeler
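    A hypothetical sketch of the mapping described above: when data is copied from the larger, slower NVM into the faster NVM, the module records which second-NVM location each first-NVM location came from, so the data can later be written back or traced. Class and method names are illustrative assumptions.
    ```python
    class ModuleMappingTable:
        def __init__(self):
            # first-NVM address -> second-NVM address the data was copied from
            self._map = {}

        def record_copy(self, first_addr: int, second_addr: int) -> None:
            self._map[first_addr] = second_addr

        def origin_of(self, first_addr: int):
            return self._map.get(first_addr)

        def write_back(self, first_nvm, second_nvm, first_addr: int) -> None:
            # Flush data from the fast NVM back to the slow-NVM location it was copied from.
            second_addr = self._map[first_addr]
            second_nvm[second_addr] = first_nvm[first_addr]
    ```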
  • Patent number: 11733871
    Abstract: A request to write data corresponding to at least a first portion of a file is received. It is determined whether to perform the request either as an in-place write or as an out-of-place write. Performing the in-place write comprises performing a write to a low latency storage device, and performing the out-of-place write comprises performing a write to a higher latency storage device. The request is performed as either the in-place write or the out-of-place write based on the determination. Performing the request as the in-place write includes writing the data to a first location on a storage tier storing the first portion of the file, and performing the request as the out-of-place write includes writing the data to a second location on one of a plurality of storage tiers of a computing node, other than the first location.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: August 22, 2023
    Assignee: Cohesity, Inc.
    Inventors: Mohit Aron, Ganesha Shanmuganathan
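    An illustrative decision sketch, not the patented determination itself: here the write is performed in place when the targeted portion of the file already lives on the low-latency tier, and out of place to another tier otherwise. The file_meta interface and tier objects are hypothetical.
    ```python
    def perform_write(file_meta, offset, data, low_latency_tier, other_tiers):
        location = file_meta.location_of(offset)      # tier and address currently holding this portion
        if location.tier is low_latency_tier:
            low_latency_tier.write(location.address, data)   # in-place write to the first location
        else:
            tier = other_tiers[0]                             # some other storage tier of the node
            new_addr = tier.allocate(len(data))
            tier.write(new_addr, data)                        # out-of-place write to a second location
            file_meta.remap(offset, tier, new_addr)           # point the file at the new copy
    ```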
  • Patent number: 11726706
    Abstract: A storage device includes a memory device including a plurality of sequential areas and a random area other than the plurality of sequential areas, the plurality of sequential areas storing pieces of data corresponding to consecutive logical addresses input from a host; a buffer memory device configured to temporarily store write data corresponding to a write request provided from the host; and an operation controller configured to generate combined data by adding dummy data to the write data having a size less than a program unit size of the memory device, a size of the dummy data corresponding to a difference between the size of the write data and the program unit size, store the combined data in the memory device, and store combined data information, relating to the combined data stored in the memory device, in the buffer memory device.
    Type: Grant
    Filed: February 10, 2021
    Date of Patent: August 15, 2023
    Assignee: SK HYNIX INC.
    Inventors: Tae Jin Oh, Jung Ki Noh, Soon Yeal Yang
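    A simple illustration of the padding step described above: when the buffered write data is smaller than the program unit, dummy bytes make up the difference, and the amount of real data is recorded as the combined data information. The unit size, dummy byte value, and dict layout are assumed for the example.
    ```python
    PROGRAM_UNIT_SIZE = 16 * 1024   # illustrative value; real devices differ

    def build_combined_data(write_data: bytes, dummy_byte: int = 0xFF):
        if len(write_data) >= PROGRAM_UNIT_SIZE:
            raise ValueError("no padding needed at or above the program unit size")
        dummy_len = PROGRAM_UNIT_SIZE - len(write_data)
        combined = write_data + bytes([dummy_byte]) * dummy_len
        # Combined data information: where the real data ends inside the programmed unit.
        combined_info = {"valid_length": len(write_data), "dummy_length": dummy_len}
        return combined, combined_info
    ```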
  • Patent number: 11726688
    Abstract: A storage system communicates with a host system and includes a storage device including a storage medium divided into a plurality of blocks including high reliability blocks and reserve blocks, and a controller. The controller provides the host system with block information identifying the high reliability blocks among the plurality of blocks, receives a block allocation request from the host system, wherein the block allocation request is defined with reference to the block information and identifies at least one high reliability block to be used to store metadata, and allocates at least one high reliability block to a meta region in response to the block allocation request. The controller includes a bad block manager that manages an allocation operation performed in response to the block allocation request, and a repair module that repairs an error in metadata stored in one of the high reliability blocks.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: August 15, 2023
    Inventors: Jaeyoon Choi, Seokhwan Kim, Suman Prakash Balakrishnan, Dongjin Kim, Chansol Kim, Eunhee Rho, Hyejeong Jang, Walter Jun
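    A hypothetical sketch of the allocation flow in the abstract: the controller shares which blocks are high-reliability, and a host request to allocate metadata blocks is honoured only from that set. The BadBlockManager interface below is an assumption for illustration.
    ```python
    class BadBlockManager:
        def __init__(self, high_reliability_blocks):
            self.high_reliability_blocks = set(high_reliability_blocks)
            self.meta_region = []

        def block_information(self):
            # Provided to the host so it can phrase its block allocation request.
            return sorted(self.high_reliability_blocks)

        def allocate_meta_blocks(self, requested_blocks):
            # Allocate only blocks that really are high-reliability to the meta region.
            chosen = [b for b in requested_blocks if b in self.high_reliability_blocks]
            self.meta_region.extend(chosen)
            self.high_reliability_blocks -= set(chosen)
            return chosen
    ```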
  • Patent number: 11726922
    Abstract: Methods, systems, and computer program products for memory protection in hypervisor environments are provided herein. A method includes maintaining, by a memory management layer of a hypervisor environment, a blockchain-based hash chain associated with a page table of the memory management layer, the page table corresponding to a plurality of memory pages; and verifying, by the memory management layer, content obtained in connection with a read operation for a given one of the plurality of memory pages based at least in part on hashes maintained for the given memory page in the blockchain-based hash chain.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: August 15, 2023
    Assignee: International Business Machines Corporation
    Inventors: Akshar Kaul, Krishnasuri Narayanam, Ken Kumar, Pankaj S. Dayama
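    A hedged sketch of the idea in the abstract: a hash chain maintained alongside the page table, where each link covers a page's content hash plus the previous link, so content returned by a page read can be checked against the chain. The data layout and method names are assumptions, not the patented structure.
    ```python
    import hashlib

    class PageHashChain:
        def __init__(self):
            self.links = []            # one link per recorded page write, in order
            self.latest_for_page = {}  # page number -> index of its most recent link

        def record_write(self, page_number: int, content: bytes) -> None:
            prev = self.links[-1]["link_hash"] if self.links else b""
            content_hash = hashlib.sha256(content).digest()
            link_hash = hashlib.sha256(prev + content_hash).digest()   # chains to the previous link
            self.links.append({"page": page_number, "content_hash": content_hash,
                               "link_hash": link_hash})
            self.latest_for_page[page_number] = len(self.links) - 1

        def verify_read(self, page_number: int, content: bytes) -> bool:
            idx = self.latest_for_page.get(page_number)
            if idx is None:
                return False
            return hashlib.sha256(content).digest() == self.links[idx]["content_hash"]
    ```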
  • Patent number: 11726906
    Abstract: According to one embodiment, a memory device includes a nonvolatile memory, address translation unit, generation unit, and reception unit. The nonvolatile memory includes erase unit areas. Each of the erase unit areas includes write unit areas. The address translation unit generates address translation information relating a logical address of write data written to the nonvolatile memory to a physical address indicative of a write position of the write data in the nonvolatile memory. The generation unit generates valid/invalid information indicating whether data written to the erase unit areas is valid data or invalid data. The reception unit receives deletion information including a logical address indicative of data to be deleted in the erase unit area.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: August 15, 2023
    Assignee: Kioxia Corporation
    Inventor: Shinichi Kanno
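    An illustrative model of the three pieces of bookkeeping named in the abstract: a logical-to-physical address translation table, per-page valid/invalid information grouped by erase unit, and a deletion path that marks data invalid by logical address. Names and the page-numbering scheme are hypothetical.
    ```python
    class FlashMapping:
        def __init__(self, pages_per_erase_unit: int):
            self.pages_per_erase_unit = pages_per_erase_unit
            self.l2p = {}      # logical address -> physical page (address translation information)
            self.valid = {}    # physical page number -> True (valid) / False (invalid)

        def record_write(self, logical: int, physical: int) -> None:
            old = self.l2p.get(logical)
            if old is not None:
                self.valid[old] = False              # the superseded copy becomes invalid
            self.l2p[logical] = physical
            self.valid[physical] = True

        def delete(self, logical: int) -> None:
            physical = self.l2p.pop(logical, None)   # deletion information arrives as a logical address
            if physical is not None:
                self.valid[physical] = False

        def valid_pages_in(self, erase_unit: int):
            start = erase_unit * self.pages_per_erase_unit
            return [p for p in range(start, start + self.pages_per_erase_unit)
                    if self.valid.get(p)]
    ```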
  • Patent number: 11726812
    Abstract: A multiprocessor system and method for swapping applications executing on the multiprocessor system are disclosed. The plurality of applications may include a first application and a plurality of other applications. The first application may be dynamically swapped with a second application. The swapping may be performed without stopping the plurality of other applications. The plurality of other applications may continue to execute during the swapping to perform a real-time operation and process real-time data. After the swapping, the plurality of other applications may continue to execute with the second application, and at least a subset of the plurality of other applications may communicate with the second application to perform the real-time operation and process the real-time data.
    Type: Grant
    Filed: April 29, 2021
    Date of Patent: August 15, 2023
    Assignee: Coherent Logix, Incorporated
    Inventors: Wilbur William Kaku, Michael Lyle Purnell, Geoffrey Neil Ellis, John Mark Beardslee, Zhong Qing Shang, Teng-I Wang, Stephen E. Lim
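    A toy Python sketch (assumed names, not the patented mechanism) of swapping one application while the others keep running: only the thread hosting the first application is stopped and replaced, and the remaining application threads are never paused.
    ```python
    import threading
    import time

    class AppSlot:
        def __init__(self, app_fn):
            self._stop = threading.Event()
            self._thread = threading.Thread(target=self._run, args=(app_fn,), daemon=True)
            self._thread.start()

        def _run(self, app_fn):
            while not self._stop.is_set():
                app_fn()             # one iteration of this application's real-time work
                time.sleep(0.01)

        def swap(self, new_app_fn):
            self._stop.set()         # stop only this slot's application...
            self._thread.join()
            self._stop = threading.Event()
            self._thread = threading.Thread(target=self._run, args=(new_app_fn,), daemon=True)
            self._thread.start()     # ...then start the replacement; other slots run throughout
    ```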
  • Patent number: 11726666
    Abstract: A network adapter includes a network interface controller and a processor. The network interface controller is to communicate over a peripheral bus with a host, and over a network with a remote storage device. The processor is to expose on the peripheral bus a peripheral-bus device that communicates with the host using a bus storage protocol, to receive first I/O transactions of the bus storage protocol from the host, via the exposed peripheral-bus device, and to complete the first I/O transactions in the remote storage device by (i) translating between the first I/O transactions and second I/O transactions of a network storage protocol, and (ii) executing the second I/O transactions in the remote storage device. For receiving and completing the first I/O transactions, the processor is to cause the network interface controller to transfer data directly between the remote storage device and a memory of the host using zero-copy.
    Type: Grant
    Filed: July 11, 2021
    Date of Patent: August 15, 2023
    Assignee: MELLANOX TECHNOLOGIES, LTD.
    Inventors: Ben Ben-Ishay, Boris Pismenny, Yorai Itzhak Zack, Khalid Manaa, Liran Liss, Uria Basher, Or Gerlitz, Miriam Menes
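    A very rough, hypothetical illustration of the translation step in the abstract: a first I/O transaction arriving through the emulated peripheral-bus device is rewritten as a second I/O transaction of a network storage protocol aimed at the remote target, carrying the host buffer address so data can be placed directly (zero-copy). The field names below are invented for the sketch and do not follow any specific protocol.
    ```python
    from dataclasses import dataclass

    @dataclass
    class BusReadRequest:            # "first I/O transaction" (bus storage protocol)
        lba: int
        block_count: int
        host_buffer_addr: int        # where the host expects the data to land

    @dataclass
    class NetworkReadRequest:        # "second I/O transaction" (network storage protocol)
        remote_target: str
        offset_bytes: int
        length_bytes: int
        dma_dest_addr: int           # NIC writes straight into host memory

    def translate(req: BusReadRequest, remote_target: str, block_size: int = 512) -> NetworkReadRequest:
        return NetworkReadRequest(remote_target=remote_target,
                                  offset_bytes=req.lba * block_size,
                                  length_bytes=req.block_count * block_size,
                                  dma_dest_addr=req.host_buffer_addr)
    ```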
  • Patent number: 11720259
    Abstract: An asynchronous power loss (APL) event is detected at a memory device. A last written page is identified in the memory device in response to detecting the APL event. A count of zeros programmed in the last written page is determined. The count of zeros is compared to a threshold constraint to determine whether to perform a dummy write operation on the last written page.
    Type: Grant
    Filed: May 18, 2022
    Date of Patent: August 8, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Michael G. Miller, Gary F. Besinga
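    A small, hypothetical sketch of the check in the abstract: count the zero bits programmed in the last written page and compare the count against a threshold to decide whether a dummy write is needed. The threshold value and the direction of the comparison are assumptions for illustration.
    ```python
    ZERO_COUNT_THRESHOLD = 1024   # illustrative value only

    def needs_dummy_write(last_written_page: bytes) -> bool:
        zero_bits = sum(8 - bin(byte).count("1") for byte in last_written_page)
        # Few programmed zeros is taken here to mean the page may not have been fully
        # written before the power loss, so a dummy write would put it in a known state.
        return zero_bits < ZERO_COUNT_THRESHOLD
    ```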
  • Patent number: 11720269
    Abstract: Systems and methods for managing computer block storage for a computer application include calculating an optimal required block storage capacity based on the storage needs of the application; provisioning block storage of the optimal capacity; receiving at least one block storage usage metric of the application; using a machine learning based model, trained on historic data of at least one application, to identify at least one future time at which a block storage capacity adjustment is required; and adjusting the block storage capacity in advance of the future time at which the block storage capacity adjustment is required.
    Type: Grant
    Filed: April 5, 2022
    Date of Patent: August 8, 2023
    Assignee: ZESTY TECH LTD.
    Inventors: Alexey Baikov, Maxim Melamedov, Alon Oshri Kadashev, Michael Amar
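    A hedged sketch of the provision/predict/adjust loop described above. The patent calls for a machine-learning model trained on historic usage; as a stand-in, this example fits a simple linear trend with numpy. The headroom factor, horizon, and resize_volume callback are illustrative assumptions.
    ```python
    import numpy as np

    HEADROOM = 1.2   # provision 20% above the predicted need (assumed policy)

    def predict_needed_capacity(usage_history_gb, steps_ahead=12):
        t = np.arange(len(usage_history_gb))
        slope, intercept = np.polyfit(t, usage_history_gb, 1)   # stand-in for the trained model
        predicted = slope * (len(usage_history_gb) + steps_ahead) + intercept
        return max(predicted, max(usage_history_gb)) * HEADROOM

    def adjust_block_storage(current_capacity_gb, usage_history_gb, resize_volume):
        target = predict_needed_capacity(usage_history_gb)
        if target > current_capacity_gb:
            resize_volume(target)    # grow the block storage ahead of the predicted shortfall
        return target
    ```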
  • Patent number: 11720253
    Abstract: Methods, systems, and devices for access of a memory system based on fragmentation are described. The memory system may receive a first message indicating a set of data that the memory system is to store using a fragmentation-based write procedure. The memory system may, based on the first message, determine blocks of a memory device that satisfy a fragmentation threshold. After determining the blocks, the memory system may transmit to the host system a second message that indicates the memory system is ready to receive the set of data indicated in the first message. The memory system may then store the set of data in the determined blocks based on transmitting the second message.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: August 8, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Jun Huang, Bhagyashree Bokade, Violet Gomm, Deping He, Lavanya Sriram
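    A hypothetical sketch of the selection step in the handshake described above: the memory system picks blocks whose free space is contiguous enough (a fragmentation threshold) to hold the announced data before telling the host it is ready. The threshold, the block attributes, and the greedy selection are assumptions for illustration.
    ```python
    FRAGMENTATION_THRESHOLD = 0.25   # illustrative: at most 25% of a block's free space fragmented

    def blocks_for_fragmentation_write(blocks, data_size):
        # Each block is assumed to expose free_space and largest_contiguous_free_run (in bytes).
        chosen, remaining = [], data_size
        for block in blocks:
            fragmentation = 1 - block.largest_contiguous_free_run / max(block.free_space, 1)
            if block.free_space > 0 and fragmentation <= FRAGMENTATION_THRESHOLD:
                chosen.append(block)
                remaining -= block.free_space
                if remaining <= 0:
                    break
        return chosen if remaining <= 0 else None   # None: not ready to accept the set of data yet
    ```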
  • Patent number: 11714758
    Abstract: A method to store a data value onto a cache of a storage hierarchy. A range of a collection of values that resides on a first tier of the hierarchy is initialized. The range is partitioned into disjoint range partitions, a first subset of which is designated as cached and a second subset as uncached. The collection is partitioned into a subset of uncached data and a subset of cached data and placed into respective portions. The range partition to which the data value belongs (i.e. the target range partition) is identified as being cached. If the cache is full, the target range partition is divided into two partitions; the partition that excludes the data value is designated as uncached and the values therein are evicted. If the cache has space, the data value is copied onto the cache; otherwise the division and eviction are repeated until the cache has space.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: August 1, 2023
    Assignee: Kinaxis Inc.
    Inventor: Angela Lin
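    A rough illustration (with assumed names and a dict-based partition) of the divide-and-evict admission in this abstract: while the cache is full, the target range partition is split at its midpoint, the half that excludes the new value is evicted, and the process repeats until there is room.
    ```python
    def admit(partition, value, capacity):
        """partition: dict with 'low', 'high' and a sorted list 'values' of cached elements."""
        if not (partition["low"] <= value <= partition["high"]):
            return False                                  # value belongs to an uncached partition
        while len(partition["values"]) >= capacity:
            mid = (partition["low"] + partition["high"]) // 2
            if value <= mid:                              # keep the lower half, evict the upper half
                evicted = [v for v in partition["values"] if v > mid]
                partition["values"] = [v for v in partition["values"] if v <= mid]
                partition["high"] = mid
            else:                                         # keep the upper half, evict the lower half
                evicted = [v for v in partition["values"] if v <= mid]
                partition["values"] = [v for v in partition["values"] if v > mid]
                partition["low"] = mid + 1
            if not evicted and len(partition["values"]) >= capacity:
                return False                              # the range cannot shrink any further
        partition["values"].append(value)                 # evicted values remain only on the lower tier
        partition["values"].sort()
        return True
    ```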
  • Patent number: 11715030
    Abstract: Automatic object optimization to accelerate machine learning training is disclosed. A request for a machine learning training dataset comprising a plurality of objects is received from a requestor. The plurality of objects includes data for training a machine learning model. A uniqueness characteristic for objects of the plurality of objects is determined, the uniqueness characteristic being indicative of how unique each object is relative to each other object. A group of objects from the plurality of objects is sent to the requestor, the group of objects being selected based at least partially on the uniqueness characteristic or sent in an order based at least partially on the uniqueness characteristic.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: August 1, 2023
    Assignee: Red Hat, Inc.
    Inventors: Huamin Chen, Dennis R. C. Keefe
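    A hedged sketch of the idea above: score how unique each training object is relative to the others and serve the most unique objects first. Here uniqueness is approximated by how rare an object's content hash is, which is an assumed scoring rule rather than the patented characteristic.
    ```python
    import hashlib
    from collections import Counter

    def order_by_uniqueness(objects):
        digests = [hashlib.sha256(obj).hexdigest() for obj in objects]
        counts = Counter(digests)
        # Fewer duplicates -> more unique -> earlier in the training stream.
        ranked = sorted(zip(objects, digests), key=lambda pair: counts[pair[1]])
        return [obj for obj, _ in ranked]

    # Example: the duplicated object sinks to the end of the returned order.
    batch = [b"cat.jpg bytes", b"dog.jpg bytes", b"cat.jpg bytes"]
    print(order_by_uniqueness(batch))
    ```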
  • Patent number: 11709604
    Abstract: Technologies are provided for increasing electronic noise of a memory device during an initialization of the memory device and performing initialization operations, such as memory access centering operations, for the memory device while the electronic noise of the memory device is increased. The electronic noise of the memory device can be increased by increasing a level of ground bounce (or ground noise) during a training phase of the memory device. Increasing the ground noise can comprise increasing an inductance across a memory of the memory device during the training phase. The inductance can be increased by deactivating one or more ground connections of the memory during the memory’s training phase. Additionally or alternatively, the inductance can be increased by activating one or more inductors connected to one or more ground connections of the memory during the memory’s training phase.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: July 25, 2023
    Assignee: Amazon Technologies, Inc.
    Inventor: Adam Shobash
  • Patent number: 11709631
    Abstract: A system includes a processing device, operatively coupled with a memory device, to perform operations including receiving a media access operation command designating a first memory location, and determining whether a first media access operation command designating the first memory location and a second media access operation command designating a second memory location are synchronized, after determining that the first and second media access operation commands are not synchronized, determining that the media access operation command is an error recovery flow (ERF) read command, in response to determining that the media access operation command is an ERF read command, determining whether a head command of the first queue is blocked from execution, and in response to determining that the head command is unblocked from execution, servicing the ERF read command from a media buffer maintaining previously written ERF data.
    Type: Grant
    Filed: August 30, 2021
    Date of Patent: July 25, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Fangfang Zhu, Jiangli Zhu, Juane Li
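    A compressed, heavily simplified sketch of the decision chain in the abstract: when the commands are not synchronized, the incoming command is an ERF read, and the head of the queue is not blocked, the read is answered from the media buffer still holding the previously written ERF data. The meaning of "synchronized" and every interface below are assumptions made only for illustration.
    ```python
    def service_command(command, other_command, queue, media_buffer, media):
        synchronized = command.target == other_command.target   # assumed notion of synchronization
        if (not synchronized and command.kind == "erf_read"
                and not queue.head_is_blocked()):
            return media_buffer[command.target]                  # serve previously written ERF data
        return media.read(command.target)                        # otherwise perform a normal media read
    ```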
  • Patent number: 11709611
    Abstract: A system includes a parser that receives and parses source code for a reconfigurable dataflow processor, a tensor expression extractor configured to extract tensor indexing expressions from the source code, a logical memory constraint generator that converts the tensor indexing expressions to logical memory indexing constraints, a grouping module that groups the logical memory indexing constraints into concurrent access groups, and a memory partitioning module that determines a memory unit partitioning solution for each concurrent access group. The system also includes a reconfigurable dataflow processor that comprises an array of compute units and an array of memory units interconnected with a switching fabric. The reconfigurable dataflow processor may be configured to execute the plurality of tensor indexing expressions and access the array of memory units according to the memory unit partitioning solution. A corresponding method and computer-readable medium are also disclosed herein.
    Type: Grant
    Filed: August 1, 2022
    Date of Patent: July 25, 2023
    Assignee: SambaNova Systems, Inc.
    Inventors: Matthew S. Feldman, Yaqi Zhang
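    A simplified, assumed illustration of the final step in the abstract: given the index expressions of one concurrent access group, search for a cyclic partitioning across memory units under which no two concurrent accesses land in the same unit. Real memory partitioning in such compilers is considerably more involved; this only shows the shape of the problem.
    ```python
    def find_bank_partitioning(index_exprs, iteration_space, max_banks=16):
        # index_exprs: functions mapping a loop index to a flat memory address.
        for num_banks in range(1, max_banks + 1):
            conflict_free = all(
                len({expr(i) % num_banks for expr in index_exprs}) == len(index_exprs)
                for i in iteration_space)
            if conflict_free:
                return num_banks          # smallest cyclic partitioning with no conflicts
        return None

    # Example: concurrent accesses a[2*i] and a[2*i + 1] need only 2 memory units.
    print(find_bank_partitioning([lambda i: 2 * i, lambda i: 2 * i + 1], range(64)))
    ```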
  • Patent number: 11704036
    Abstract: Systems and methods for implementing a deduplication process based on performance analyses. The system may include a processing device to determine a first performance metric associated with retrieving a second stored data block that is within a specified range of a duplicate of the first data block and a second performance metric associated with retrieving a hash value corresponding to the second stored data block. The processing device is further to retrieve the second stored data block within the specified range of the duplicate of the first data block in response to the first performance metric not exceeding the second performance metric.
    Type: Grant
    Filed: November 16, 2018
    Date of Patent: July 18, 2023
    Assignee: PURE STORAGE, INC.
    Inventors: John Colgrove, Ronald Karr, Ethan L. Miller
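    An illustrative sketch (hypothetical names and callbacks) of the choice described in the abstract: compare the estimated cost of retrieving the nearby stored block with the cost of retrieving its hash value, and use whichever lookup is cheaper to test whether the incoming block is a duplicate.
    ```python
    import hashlib

    def is_duplicate(candidate_block, block_fetch_cost_ms, hash_fetch_cost_ms,
                     fetch_nearby_block, fetch_stored_hash):
        if block_fetch_cost_ms <= hash_fetch_cost_ms:
            # First metric does not exceed the second: retrieve the nearby stored block itself.
            return fetch_nearby_block() == candidate_block
        # Otherwise retrieve only the stored hash and compare digests.
        return fetch_stored_hash() == hashlib.sha256(candidate_block).digest()
    ```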