Patents Examined by Tuan V. Thai
  • Patent number: 11966581
    Abstract: According to one general aspect, a memory management unit (MMU) may be configured to interface with a heterogeneous memory system that comprises a plurality of types of storage mediums. Each type of storage medium may be based upon a respective memory technology and may be associated with one or more performance characteristics. The MMU may receive a data access from a virtual machine for the heterogeneous memory system. The MMU may also determine at least one of the storage mediums of the heterogeneous memory system to service the data access. The target storage medium may be selected based upon at least one performance characteristic associated with the target storage medium and a quality of service tag that is associated with the virtual machine and that indicates one or more performance characteristics. The MMU may route the data access by the virtual machine to the at least one of the storage mediums.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: April 23, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Manu Awasthi, Robert Brennan
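    Illustrative sketch: the routing decision described in the abstract above, modeled in a few lines of Python. The medium attributes, the QoS tag field, and the selection rule (meet the latency bound, then prefer bandwidth) are assumptions made for illustration, not the patented method.
```python
from dataclasses import dataclass

@dataclass
class StorageMedium:
    name: str
    read_latency_ns: int    # performance characteristic: lower is better
    bandwidth_gbps: float   # performance characteristic: higher is better

@dataclass
class QosTag:
    max_latency_ns: int     # bound carried by the virtual machine's QoS tag

def route_access(qos: QosTag, mediums: list[StorageMedium]) -> StorageMedium:
    """Pick a medium whose latency satisfies the QoS tag, preferring higher
    bandwidth among the candidates; fall back to the fastest medium."""
    candidates = [m for m in mediums if m.read_latency_ns <= qos.max_latency_ns]
    if not candidates:
        return min(mediums, key=lambda m: m.read_latency_ns)
    return max(candidates, key=lambda m: m.bandwidth_gbps)

mediums = [
    StorageMedium("DRAM", 80, 25.0),
    StorageMedium("PMEM", 350, 8.0),
    StorageMedium("NAND", 90_000, 3.0),
]
print(route_access(QosTag(max_latency_ns=500), mediums).name)  # DRAM
```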
  • Patent number: 11954050
    Abstract: A method for direct memory access includes: receiving a direct memory access request designating addresses in a data block to be accessed in a memory; randomizing an order in which the addresses of the data block are accessed; and accessing the memory at the addresses in the randomized order. A system for direct memory access is disclosed.
    Type: Grant
    Filed: April 13, 2021
    Date of Patent: April 9, 2024
    Assignee: NXP USA, Inc.
    Inventors: Jurgen Geerlings, Yang Liu, Zhijun Chen
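    Illustrative sketch: the core of the randomized DMA access order in Python. The dictionary-backed "memory" and the seeded RNG are stand-ins for the hardware; only the shuffle-then-access flow mirrors the abstract.
```python
import random

def randomized_dma_read(memory: dict[int, int], addresses: list[int],
                        rng: random.Random) -> dict[int, int]:
    """Access the requested addresses of the data block in a randomized order."""
    order = list(addresses)
    rng.shuffle(order)                    # randomize the access order
    return {addr: memory[addr] for addr in order}

memory = {addr: addr * 2 for addr in range(0x100, 0x110)}
block = list(range(0x100, 0x110))         # addresses in the data block
data = randomized_dma_read(memory, block, random.Random(42))
assert set(data) == set(block)            # every address was still accessed
```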
  • Patent number: 11934693
    Abstract: The present disclosure generally relates to data storage devices, such as solid state drives (SSDs), and specifically to utilizing the data storage device memory in the execution of host commands. A controller is configured to receive a command pointer or a data chunk from a host device, mark a destination used for the command pointer or the data chunk, determine whether a last chunk of the command pointer or the data chunk has been received, and determine whether the command pointer or the data chunk uses an illegal combination of locations after determining that the last chunk has been received. The controller is further configured to return an error message to the host device upon determining that the command pointer or the data chunk uses an illegal combination of locations.
    Type: Grant
    Filed: February 25, 2021
    Date of Patent: March 19, 2024
    Assignee: Western Digital Technologies, Inc.
    Inventors: Amir Segev, Shay Benisty
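    Illustrative sketch: the final legality check the controller performs once the last chunk has arrived. The location names and the notion of which combinations are illegal are hypothetical; the abstract does not specify them.
```python
from enum import Enum, auto

class Location(Enum):
    HOST_DRAM = auto()
    CONTROLLER_SRAM = auto()
    DEVICE_NAND = auto()

# Hypothetical policy: chunks of one command may not mix host memory with NAND.
ILLEGAL_COMBINATIONS = [frozenset({Location.HOST_DRAM, Location.DEVICE_NAND})]

def check_locations(chunk_destinations: list[Location]) -> str | None:
    """Return an error message if the chunks use an illegal mix of locations."""
    used = frozenset(chunk_destinations)
    for bad in ILLEGAL_COMBINATIONS:
        if bad <= used:
            return f"illegal combination of locations: {sorted(l.name for l in bad)}"
    return None  # combination is legal

print(check_locations([Location.HOST_DRAM, Location.CONTROLLER_SRAM]))  # None
print(check_locations([Location.HOST_DRAM, Location.DEVICE_NAND]))      # error message
```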
  • Patent number: 11880311
    Abstract: A method for controlling operations of an asynchronous FIFO memory includes: determining a current depth of the asynchronous FIFO memory according to at least one of a clock ratio, a burst length and a continuous transmission length, where the clock ratio is a ratio of a frequency of a first clock signal used by a master device to a frequency of a second clock signal used by a slave device; configuring one or more entries of the asynchronous FIFO memory to be used according to the current depth; and controlling a plurality of FIFO clock signals provided to the asynchronous FIFO memory according to the current depth. Each FIFO clock signal corresponds to one entry, and the FIFO clock signals corresponding to entries that are not configured for use under the current depth are disabled.
    Type: Grant
    Filed: August 5, 2022
    Date of Patent: January 23, 2024
    Assignee: Realtek Semiconductor Corp.
    Inventors: Yuefeng Chen, Hui Gu
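    Illustrative sketch: a possible way to derive the configured depth from the clock ratio, burst length and continuous transmission length, and to gate the per-entry FIFO clocks accordingly. The depth formula is an assumption for illustration; the abstract only states which inputs are used.
```python
def required_depth(clock_ratio: float, burst_length: int,
                   continuous_length: int, total_entries: int) -> int:
    """Enough entries to absorb the fast-side traffic while the slow side drains,
    capped at the physical FIFO size (illustrative rule only)."""
    depth = max(burst_length, int(continuous_length * max(clock_ratio, 1.0)))
    return max(1, min(depth, total_entries))

def fifo_clock_enables(depth: int, total_entries: int) -> list[bool]:
    """One clock enable per entry; entries beyond the configured depth are disabled."""
    return [entry < depth for entry in range(total_entries)]

depth = required_depth(clock_ratio=2.0, burst_length=8,
                       continuous_length=6, total_entries=16)
print(depth, fifo_clock_enables(depth, total_entries=16))  # 12 entries enabled
```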
  • Patent number: 11880760
    Abstract: A processor to perform inference on deep learning neural network models. In some embodiments, the processor includes: a first tile, a second tile, a memory, and a bus, the bus being connected to: the memory, the first tile, and the second tile, the first tile including: a first weight register, a second weight register, an activations cache, a shuffler, an activations buffer, a first multiplier, and a second multiplier, the activations buffer being configured to include: a first queue connected to the first multiplier, and a second queue connected to the second multiplier, the activations cache including a plurality of independent lanes, each of the independent lanes being randomly accessible, the first tile being configured: to receive a tensor including a plurality of two-dimensional arrays, each representing one color component of an image; and to perform a convolution of a kernel with one of the two-dimensional arrays.
    Type: Grant
    Filed: April 3, 2020
    Date of Patent: January 23, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ilia Ovsiannikov, Ali Shafiee Ardestani, Hamzah Ahmed Ali Abdelaziz, Joseph H. Hassoun
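    Illustrative sketch: the per-tile operation the abstract ends with, a convolution of a kernel with one two-dimensional array (one color channel of the image), written as plain Python without any of the tile, buffer, or multiplier structure.
```python
def conv2d_valid(channel, kernel):
    """'Valid' 2-D convolution (deep-learning convention, no kernel flip)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(channel) - kh + 1
    out_w = len(channel[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(channel[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out

red_channel = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # one 2-D array of the input tensor
kernel = [[1, 0], [0, -1]]
print(conv2d_valid(red_channel, kernel))         # [[-4, -4], [-4, -4]]
```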
  • Patent number: 11868287
    Abstract: The present disclosure describes a just-in-time (JIT) scheduling system and method for memory sub-systems. In one embodiment, a system receives a request to perform a memory operation using a hardware resource associated with a memory device. The system identifies a traffic class corresponding to the memory operation. The system determines a number of available quality of service (QoS) credits for the traffic class during a current scheduling time frame. The system determines a number of QoS credits associated with a type of the memory operation. Responsive to determining that the number of QoS credits associated with the type of the memory operation is less than the number of available QoS credits, the system submits the memory operation to be processed at the memory device.
    Type: Grant
    Filed: August 20, 2021
    Date of Patent: January 9, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Johnny A Lam, Alex J. Wesenberg, Guanying Wu, Sanjay Subbarao, Chandra Guda
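    Illustrative sketch: the credit check at the heart of the scheduling decision. The traffic classes, per-operation credit costs, and frame budgets are invented numbers; only the "cost less than available credits, then submit" rule comes from the abstract.
```python
class JitScheduler:
    """Per-traffic-class QoS credit accounting for one scheduling time frame."""

    COST = {"read": 1, "write": 4, "erase": 16}   # hypothetical credit cost per op type

    def __init__(self, credits_per_frame: dict[str, int]):
        self.available = dict(credits_per_frame)   # remaining credits this frame

    def try_submit(self, traffic_class: str, op_type: str) -> bool:
        cost = self.COST[op_type]
        if cost < self.available[traffic_class]:   # cost less than available credits
            self.available[traffic_class] -= cost
            return True                            # submit to the memory device
        return False                               # defer to a later time frame

sched = JitScheduler({"high": 32, "low": 8})
print(sched.try_submit("high", "write"))  # True
print(sched.try_submit("low", "erase"))   # False: not enough credits this frame
```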
  • Patent number: 11861165
    Abstract: A system, method, and machine-readable storage medium for analyzing a state of a data object are provided. In some embodiments, the method includes receiving, at a storage device, a metadata request for the data object from a client. The data object is composed of a plurality of segments. The method also includes selecting a subset of the plurality of segments and obtaining a segment state for each segment of the subset. Each segment state indicates whether the respective segment is accessible via a backing store. The method further includes determining a most restrictive state of the one or more segment states and sending state information to the client in response to the metadata request, the state information being derived from the most restrictive state.
    Type: Grant
    Filed: December 30, 2021
    Date of Patent: January 2, 2024
    Assignee: NETAPP, INC.
    Inventors: Raymond Yu Shun Mak, Aditya Kalyanakrishnan, Song Guen Yoon, Emalayan Vairavanathan, Dheeraj Sangamkar, Chia-Chen Chu
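    Illustrative sketch: sampling a subset of segments and reporting the most restrictive state. The particular states and their ordering from least to most restrictive are assumptions; the abstract only requires that a most restrictive state exist.
```python
import random
from enum import IntEnum

class SegmentState(IntEnum):
    CACHED = 0             # resident locally
    IN_BACKING_STORE = 1   # accessible, but must be fetched from the backing store
    UNAVAILABLE = 2        # not currently accessible

def object_state(segment_states: dict[int, SegmentState],
                 sample_size: int, rng: random.Random) -> SegmentState:
    """Select a subset of segments and return the most restrictive state seen."""
    segments = sorted(segment_states)
    sample = rng.sample(segments, k=min(sample_size, len(segments)))
    return max(segment_states[seg] for seg in sample)  # IntEnum: higher = more restrictive

states = {0: SegmentState.CACHED, 1: SegmentState.IN_BACKING_STORE, 2: SegmentState.CACHED}
print(object_state(states, sample_size=2, rng=random.Random(1)).name)
```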
  • Patent number: 11861195
    Abstract: The present disclosure generally relates to improving programming of data storage devices, such as solid state drives (SSDs). A first memory device has a first XOR element and a second memory device has a second XOR element. The ratio of the first XOR element to the capacity of the first memory device is substantially smaller than the ratio of the second XOR element to the capacity of the second memory device. A read verify operation to find program failures is executed on a wordline-to-wordline basis, an erase-block-to-erase-block basis, or both. Because the program failures are found and fixed prior to programming to the second memory device, the second XOR element may be decreased substantially.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: January 2, 2024
    Assignee: Western Digital Technologies, Inc.
    Inventors: Sergey Anatolievich Gorobets, Alan D. Bennett, Liam Parker, Yuval Shohet, Michelle Martin
  • Patent number: 11853206
    Abstract: An operation method of a memory system, the operation method may include: determining a garbage collection trigger condition based on current time and usage of the memory system over a set period of time; and performing a garbage collection operation when the garbage collection trigger condition is satisfied.
    Type: Grant
    Filed: October 18, 2019
    Date of Patent: December 26, 2023
    Assignee: SK hynix Inc.
    Inventor: Jong-Hwan Lee
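    Illustrative sketch: a trigger condition combining the current time with usage over a tracking period, as the abstract describes. The off-peak window and the thresholds are invented for the example.
```python
from datetime import datetime

def should_trigger_gc(now: datetime, busy_fraction: float, free_block_ratio: float) -> bool:
    """Trigger garbage collection during an off-peak, lightly used window,
    or immediately if free space is critically low (illustrative policy)."""
    off_peak = now.hour < 6 or now.hour >= 23   # current-time component
    lightly_used = busy_fraction < 0.2          # usage over the set period of time
    low_on_space = free_block_ratio < 0.1
    return low_on_space or (off_peak and lightly_used)

print(should_trigger_gc(datetime(2023, 12, 26, 2, 30), busy_fraction=0.05,
                        free_block_ratio=0.4))   # True: off-peak and idle
print(should_trigger_gc(datetime(2023, 12, 26, 14, 0), busy_fraction=0.6,
                        free_block_ratio=0.3))   # False
```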
  • Patent number: 11824723
    Abstract: A technique for determining a data window size allows a set of predicted blocks to be transmitted along with requested blocks. A stream enabled application executing in a virtual execution environment may use the blocks when needed.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: November 21, 2023
    Assignee: NUMECENT HOLDINGS, INC.
    Inventors: Jeffrey DeVries, Arthur S. Hitomi
  • Patent number: 11822484
    Abstract: A cache includes an upstream port, a cache memory for storing cache lines each having a line width, and a cache controller. The cache controller is coupled to the upstream port and the cache memory. The upstream port transfers data words having a transfer width less than the line width. In response to a cache line fill, the cache controller selectively determines data bus inversion information for a sequence of data words having the transfer width, and stores the data bus inversion information along with selected inverted data words for the cache line fill in the cache memory.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: November 21, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Vydhyanathan Kalyanasundharam, John Wuu, Chintan S. Patel
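    Illustrative sketch: one common data bus inversion convention (invert a word when more than half of its bits are ones, and record a DBI flag), which is the kind of per-word information the cache in the abstract stores alongside the line. The 16-bit width and the "minimize ones" rule are assumptions.
```python
def encode_dbi(word: int, width: int = 16) -> tuple[int, bool]:
    """Return the (possibly inverted) word and the DBI bit to store with it."""
    mask = (1 << width) - 1
    if bin(word & mask).count("1") > width // 2:
        return (~word) & mask, True    # store inverted word, DBI bit set
    return word & mask, False

def decode_dbi(stored: int, inverted: bool, width: int = 16) -> int:
    mask = (1 << width) - 1
    return (~stored) & mask if inverted else stored

word = 0xFFF0
stored, flag = encode_dbi(word)
assert decode_dbi(stored, flag) == word
print(hex(stored), flag)   # 0xf True
```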
  • Patent number: 11816026
    Abstract: A digital signal processing device includes: a delay means that delays audio data in units of sampling periods; and a control means that writes audio data to a first buffer memory one word at a time in sequence at a sampling period, performs control to burst transfer a burst length of audio data from the first buffer memory to a DRAM, performs control to burst transfer the burst length of audio data from the DRAM to a second buffer memory, and outputs audio data from the second buffer memory to the delay means one word at a time in sequence at the sampling period, in which a delay time of audio data output by the delay means is determined by a combination of a delay time in units of multiple sampling periods, which depends on the burst length of the DRAM, and a delay time in units of a single sampling period from the delay means.
    Type: Grant
    Filed: October 12, 2021
    Date of Patent: November 14, 2023
    Assignee: KABUSHIKI KAISHA KAWAI GAKKI SEISAKUSHO
    Inventors: Seiji Okamoto, Tetsuya Hirano
  • Patent number: 11798640
    Abstract: A memory device includes a memory cell array and a memory controller. The memory cell array includes a plurality of memory blocks. Each of the memory blocks includes a plurality of word lines. A plurality of memory chunks is coupled to at least one of the word lines. The memory controller is configured to program data to a particular memory chunk of the plurality of memory chunks by performing a chunk operation that includes selecting a particular word line from the plurality of word lines, selecting a particular memory chunk from the plurality of memory chunks that are coupled to the particular word line, and applying a program voltage to a particular memory block corresponding to the particular memory chunk to program data to the particular memory chunk.
    Type: Grant
    Filed: December 30, 2021
    Date of Patent: October 24, 2023
    Assignee: Macronix International Co., Ltd.
    Inventor: Yi-Chun Liu
  • Patent number: 11789869
    Abstract: The technology disclosed herein involves tracking contention and using the tracked contention to reduce the latency of exclusive memory operations. The technology enables a processor to track which locations in main memory are contentious and to modify the order in which exclusive memory operations are processed based on that contentiousness. A thread can include multiple exclusive operations for the same memory location (e.g., an exclusive load and a complementary exclusive store). The multiple exclusive memory operations can be added to a queue and have one or more intervening operations between them in the queue. The processor may process the operations in the queue in the order they were added and may use the tracked contention to perform out-of-order processing for some of the exclusive operations. For example, the processor can execute the exclusive load operation and, because the corresponding location is contentious, process the complementary exclusive store operation before the intervening operations.
    Type: Grant
    Filed: January 20, 2022
    Date of Patent: October 17, 2023
    Assignee: Nvidia Corporation
    Inventors: Anurag Chaudhary, Christopher Richard Feilbach, Jasjit Singh, Manuel Gautho, Aprajith Thirumalai, Shailender Chaudhry
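    Illustrative sketch: a toy model of the reordering the abstract describes. The set of contentious addresses and the operation encoding are made up; the point shown is hoisting the complementary exclusive store past the intervening operations when the location is known to be contentious.
```python
from collections import deque

contended_addresses = {0x40}   # locations tracked as contentious

def process_queue(ops: list[tuple[str, int]]) -> list[tuple[str, int]]:
    """Process ops in order, but after an exclusive load ('ldx') to a contentious
    address, execute its complementary exclusive store ('stx') out of order."""
    queue = deque(ops)
    executed = []
    while queue:
        op = queue.popleft()
        executed.append(op)
        kind, addr = op
        if kind == "ldx" and addr in contended_addresses:
            if ("stx", addr) in queue:
                queue.remove(("stx", addr))    # hoist the matching exclusive store
                executed.append(("stx", addr))
    return executed

ops = [("ldx", 0x40), ("add", 0x0), ("sub", 0x0), ("stx", 0x40)]
print(process_queue(ops))
# [('ldx', 64), ('stx', 64), ('add', 0), ('sub', 0)]
```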
  • Patent number: 11762790
    Abstract: Disclosed are a method for data synchronization between a host side and a Field Programmable Gate Array (FPGA) accelerator, a Bidirectional Memory Synchronize Engine (DMSE), an FPGA accelerator, and a data synchronization system. The method includes: in response to detection of data migration from a host side to a preset memory space, generating second state information according to first state information in a first address space, and writing the second state information to a second address space (S201); and in response to detection of the second state information in the second address space, calling Direct Memory Access (DMA) to migrate data in the preset memory space to a memory space of the FPGA accelerator, and copying the second state information to the first address space, so as to implement synchronization (S202).
    Type: Grant
    Filed: January 7, 2021
    Date of Patent: September 19, 2023
    Assignee: INSPUR SUZHOU INTELLIGENT TECHNOLOGY CO., LTD.
    Inventors: Mingyang Ou, Jiaheng Fan, Hongwei Kan
  • Patent number: 11762570
    Abstract: Direct data transfer between devices having a shared bus may be implemented with reduced involvement from a controller associated with the devices. A controller, a source memory device, and a target memory device may be coupled with a shared bus. The controller may identify a source address at the source memory device for data to be transferred to the target memory device. The controller may also identify a target address in the target memory device, and initiate a data transfer directly from the source to the target through a command that is received at both the source and the target memory device. In response to the command, the source memory device may read data out to the bus, and the target memory device may read the data from the bus and store the data starting at the target address without further commands from the controller.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: September 19, 2023
    Inventors: Yihua Zhang, James Cooke
  • Patent number: 11755486
    Abstract: A shared memory controller receives a flit from a first shared memory controller over a shared memory link, where the flit includes a node identifier (ID) field and an address of a particular line of the shared memory. The node ID field identifies that the first shared memory controller corresponds to the source of the flit. Further, a second shared memory controller is determined from at least the address field of the flit, where the second shared memory controller is connected to a memory element corresponding to the particular line. The flit is forwarded to the second shared memory controller over a shared memory link according to a routing path.
    Type: Grant
    Filed: April 21, 2021
    Date of Patent: September 12, 2023
    Assignee: Intel Corporation
    Inventors: Debendra Das Sharma, Michelle C. Jen, Brian S. Morris
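    Illustrative sketch: choosing the destination shared memory controller from the flit's address field. The static address map and controller numbering are hypothetical; link-layer details and routing-path selection are omitted.
```python
from dataclasses import dataclass

@dataclass
class Flit:
    node_id: int   # identifies the source shared memory controller
    address: int   # address of the particular line of shared memory

# Hypothetical layout: each controller owns one contiguous address range.
MEMORY_MAP = [(0x0000, 0x7FFF, 1), (0x8000, 0xFFFF, 2)]   # (low, high, controller id)

def route_flit(flit: Flit) -> int:
    """Return the id of the shared memory controller that owns the addressed line."""
    for low, high, controller in MEMORY_MAP:
        if low <= flit.address <= high:
            return controller
    raise ValueError(f"address {flit.address:#x} is outside the shared memory")

print(route_flit(Flit(node_id=0, address=0x9000)))   # 2: forward the flit to controller 2
```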
  • Patent number: 11748025
    Abstract: A nonvolatile memory device may include: a memory cell array operated by a first voltage, and including a plurality of memory cells; a peripheral circuit operated by the first voltage, and configured to store data in the memory cell array or read data from the memory cell array; an operation recorder operated by a second voltage, and configured to record information on an operation being performed in the nonvolatile memory device; and a control logic operated by the first voltage, and configured to control the peripheral circuit such that the nonvolatile memory device performs an operation corresponding to a command received from an external device, and control the operation recorder to store the information on the operation being performed in the nonvolatile memory device.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: September 5, 2023
    Assignee: SK hynix Inc.
    Inventor: Jee Yul Kim
  • Patent number: 11720500
    Abstract: Provided are a computer program product, system, and method for determining the status of tracks in storage that are cached in a cache for a host. A storage controller receives from the host a list of tracks for the host to access and determines whether the tracks in the list are available in the cache for immediate access. A response is returned to the host indicating, for each track, whether it is available in the cache for immediate access or not available in the cache for immediate access.
    Type: Grant
    Filed: September 3, 2021
    Date of Patent: August 8, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lokesh Mohan Gupta, Beth Ann Peterson, Matthew G. Borlick, Matthew J. Kalos
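    Illustrative sketch: the request/response exchange the abstract describes, reduced to a lookup. Track identifiers and the response wording are placeholders.
```python
def query_track_status(requested_tracks: list[str],
                       cached_tracks: set[str]) -> dict[str, str]:
    """For each track the host plans to access, report whether it can be read from
    the cache immediately or is not currently cached."""
    return {track: ("available" if track in cached_tracks else "not available")
            for track in requested_tracks}

cached = {"vol1:0x10", "vol1:0x11"}
print(query_track_status(["vol1:0x10", "vol1:0x12"], cached))
# {'vol1:0x10': 'available', 'vol1:0x12': 'not available'}
```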
  • Patent number: 11714759
    Abstract: Techniques are disclosed relating to private memory management using a mapping thread, which may be persistent. In some embodiments, a graphics processor is configured to generate a pool of private memory pages for a set of graphics work that includes multiple threads. The processor may maintain a translation table configured to map private memory addresses to virtual addresses based on identifiers of the threads. The processor may execute a mapping thread to receive a request to allocate a private memory page for a requesting thread, select a private memory page from the pool in response to the request, and map the selected page in the translation table for the requesting thread. The processor may then execute one or more instructions of the requesting thread to access a private memory space, wherein the execution includes translation of a private memory address to a virtual address based on the mapped page in the translation table.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: August 1, 2023
    Assignee: Apple Inc.
    Inventors: Benjiman L. Goodman, Terence M. Potter, Anjana Rajendran, Mark I. Luffel, William V. Miller
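    Illustrative sketch: the allocate-and-translate flow in miniature, with a page pool and a translation table keyed by thread identifier. The 4 KiB page size, the table layout, and the pool contents are assumptions; the persistent mapping thread and GPU specifics are not modeled.
```python
class PrivateMemoryMapper:
    """Pool of pre-generated private memory pages plus a translation table
    mapping (thread id, private page number) to a virtual page address."""

    PAGE_SIZE = 4096

    def __init__(self, pool_virtual_pages: list[int]):
        self.pool = list(pool_virtual_pages)   # free private memory pages
        self.translation = {}                  # (thread_id, page_no) -> virtual page

    def allocate(self, thread_id: int, private_page_no: int) -> int:
        """Handle an allocation request: take a page from the pool and map it."""
        virtual_page = self.pool.pop()
        self.translation[(thread_id, private_page_no)] = virtual_page
        return virtual_page

    def translate(self, thread_id: int, private_addr: int) -> int:
        """Translate a private memory address to a virtual address via the table."""
        page_no, offset = divmod(private_addr, self.PAGE_SIZE)
        return self.translation[(thread_id, page_no)] + offset

mapper = PrivateMemoryMapper([0x10000, 0x20000, 0x30000])
mapper.allocate(thread_id=7, private_page_no=0)
print(hex(mapper.translate(thread_id=7, private_addr=0x123)))   # 0x30123
```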