Patents Examined by Charles Rones
  • Patent number: 11435953
    Abstract: A method for predicting logical block address (LBA) information, including: receiving, by a Solid State Drive (SSD), a trace sent from a host, wherein the host can acquire the trace in a reusable environment; determining, by the SSD, one or more LBAs received by the SSD according to the trace; obtaining, by the SSD, a distribution of the LBAs by learning the LBAs based on a preset learning algorithm; and predicting, by the SSD, one or more subsequent LBAs based on the distribution of the LBAs. By learning the LBA distribution of the SSD in a given reusable environment of the host, the SSD can perform heat classification and predict the LBAs that will be used next, improving the read/write hit rate and the efficiency of hot/cold data classification during garbage collection.
    Type: Grant
    Filed: December 3, 2020
    Date of Patent: September 6, 2022
    Assignee: SHENZHEN DAPU MICROELECTRONICS CO., LTD.
    Inventors: Li Jiang, Xiang Chen, Weijun Li
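A minimal sketch of the flow this abstract describes, assuming a simple frequency-and-transition model stands in for the unspecified "preset learning algorithm"; the LBAPredictor class, its method names, and the hot-data threshold are hypothetical, not the patent's implementation.

```python
from collections import Counter, defaultdict

class LBAPredictor:
    """Learns an LBA distribution from a host trace and predicts likely next LBAs."""

    def __init__(self):
        self.freq = Counter()                    # how often each LBA appears in the trace
        self.transitions = defaultdict(Counter)  # LBA -> counts of the LBA that followed it
        self.prev = None

    def observe(self, lba: int) -> None:
        """Feed one LBA from the trace received from the host."""
        self.freq[lba] += 1
        if self.prev is not None:
            self.transitions[self.prev][lba] += 1
        self.prev = lba

    def predict_next(self, lba: int, k: int = 3) -> list[int]:
        """Predict the k most likely subsequent LBAs after `lba`."""
        followers = self.transitions.get(lba)
        if not followers:
            return [addr for addr, _ in self.freq.most_common(k)]
        return [addr for addr, _ in followers.most_common(k)]

    def is_hot(self, lba: int, threshold: int = 4) -> bool:
        """Classify an LBA as hot data, e.g. for garbage-collection placement."""
        return self.freq[lba] >= threshold

# Example: learn from a short trace and predict what follows LBA 100.
predictor = LBAPredictor()
for lba in [100, 101, 100, 101, 102, 100, 101]:
    predictor.observe(lba)
print(predictor.predict_next(100))   # likely [101, ...]
print(predictor.is_hot(100))         # True with the default threshold
```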
  • Patent number: 11435921
    Abstract: For each of multiple storage volumes of a distributed storage system, it is determined whether the storage volume has a relatively high potential deduplicability or a relatively low potential deduplicability. Responsive to determining that the storage volume has the relatively high potential deduplicability, a first write flow is executed for each of a plurality of write requests directed to the storage volume, the first write flow utilizing content-based signatures of respective data pages of the storage volume to store the data pages in storage devices of the distributed storage system. Responsive to determining that the storage volume has the relatively low potential deduplicability, a second write flow is executed for each of a plurality of write requests directed to the storage volume, the second write flow utilizing non-content-based signatures of respective data pages of the storage volume to store the data pages in storage devices of the distributed storage system.
    Type: Grant
    Filed: November 19, 2020
    Date of Patent: September 6, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: David Meiri, Xiangping Chen
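A minimal sketch of the two write flows, assuming SHA-256 stands in for the content-based signature and a location-derived key for the non-content-based one; the function names and the in-memory store are hypothetical.

```python
import hashlib
import uuid

def content_based_signature(page: bytes) -> str:
    # Content-based signature: identical pages hash to the same key,
    # so the deduplication layer can store the page once.
    return hashlib.sha256(page).hexdigest()

def non_content_based_signature(volume_id: str, page_index: int) -> str:
    # Non-content-based signature: derived from location, cheaper to compute,
    # but gives up deduplication for this page.
    return f"{volume_id}:{page_index}:{uuid.uuid4().hex}"

def write_page(store: dict, volume_id: str, page_index: int,
               page: bytes, high_dedup_potential: bool) -> str:
    """Pick the write flow per volume based on its estimated deduplicability."""
    if high_dedup_potential:
        sig = content_based_signature(page)                          # first write flow
    else:
        sig = non_content_based_signature(volume_id, page_index)     # second write flow
    store.setdefault(sig, page)                # duplicate pages collapse to one entry
    return sig

store: dict[str, bytes] = {}
a = write_page(store, "vol1", 0, b"same data", high_dedup_potential=True)
b = write_page(store, "vol1", 1, b"same data", high_dedup_potential=True)
assert a == b and len(store) == 1              # deduplicated under the first write flow
```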
  • Patent number: 11436017
    Abstract: A data temporary storage apparatus includes a moving unit coupled to a first storage unit and multiple second storage units. The moving unit receives a moving instruction whose contents include a read address, a destination address and a predetermined moving rule. The moving unit further executes the moving instruction to fetch input data by row from the first storage unit according to the read address, and to temporarily store the data of each row, one row after another in an alternate and sequential manner, in the second storage units indicated by the destination address. The data moving, data reading and convolution approaches of the present invention carry out data moving and convolution operations in parallel, achieving a ping-pong operation of the convolution units and enhancing convolution efficiency, while reducing memory cost since configuring two data storage spaces in a memory is not necessary.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: September 6, 2022
    Assignee: SIGMASTAR TECHNOLOGY LTD.
    Inventors: Bo Lin, Wei Zhu, Chao Li
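A minimal sketch of the alternating, row-by-row distribution the abstract describes, with plain Python lists standing in for the first and second storage units; move_rows and the round-robin rule are illustrative assumptions.

```python
def move_rows(first_storage: list, read_address: int,
              num_rows: int, second_storage: list[list]) -> None:
    """Fetch rows from the first storage unit and hand them out to the
    second storage units in an alternating, sequential (round-robin) manner,
    so a convolution unit can compute on one buffer while the next is filled."""
    num_targets = len(second_storage)
    for i in range(num_rows):
        row = first_storage[read_address + i]
        second_storage[i % num_targets].append(row)   # alternate target each row

# Example: six input rows spread over two target buffers (ping-pong).
feature_map = [[r] * 4 for r in range(6)]   # toy 6x4 feature map
buffers: list[list] = [[], []]              # two "second storage units"
move_rows(feature_map, read_address=0, num_rows=6, second_storage=buffers)
print(buffers[0])   # rows 0, 2, 4
print(buffers[1])   # rows 1, 3, 5
```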
  • Patent number: 11429416
    Abstract: Methods, systems, and computer program products are included for de-duplicating one or more memory pages. A method includes receiving, by a hypervisor, a list of read-only memory page hints from a guest running on a virtual machine. The list of read-only memory page hints specifies a first memory page marked as writeable. The method also includes determining whether the first memory page matches a second memory page. In response to a determination that the first memory page matches the second memory page, the hypervisor may deduplicate the first and second memory pages.
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: August 30, 2022
    Assignee: RED HAT ISRAEL, LTD.
    Inventors: Michael Tsirkin, Uri Lublin
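A minimal sketch of hint-driven page deduplication, assuming a dictionary page table and exact-content matching; a real hypervisor would also verify the pages byte-for-byte under locking and install copy-on-write mappings, which this sketch omits.

```python
def deduplicate_hinted_pages(memory: dict[int, bytes],
                             page_table: dict[int, int],
                             hinted_pages: list[int]) -> None:
    """For each hinted page (marked writeable but effectively read-only per the guest),
    look for another page with identical contents and map both to one copy."""
    content_index: dict[bytes, int] = {}
    for page_id, frame in list(page_table.items()):
        data = memory[frame]
        if page_id in hinted_pages and data in content_index:
            page_table[page_id] = content_index[data]   # point at the existing copy
            # the now-unreferenced frame could be freed here
        else:
            content_index.setdefault(data, frame)

# Example: pages 1 and 3 hold identical data; page 3 is on the hint list.
memory = {10: b"AAAA", 11: b"BBBB", 12: b"AAAA"}
page_table = {1: 10, 2: 11, 3: 12}
deduplicate_hinted_pages(memory, page_table, hinted_pages=[3])
print(page_table)   # {1: 10, 2: 11, 3: 10}
```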
  • Patent number: 11429306
    Abstract: A comparison unit configured to compare the volume of unnecessary data of a first semiconductor memory to a threshold of the first semiconductor memory, which is set in advance, and a transmission unit configured to transmit a delete command to the first semiconductor memory in accordance with a comparison result indicating that the volume of the unnecessary data of the first semiconductor memory is larger than the threshold are provided. The transmission unit also transmits a delete command to a second semiconductor memory upon transmission of the delete command to the first semiconductor memory.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: August 30, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventor: Takehiro Ito
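A minimal sketch of the comparison and transmission behavior, with the thresholds, byte counts, and class names chosen for illustration only.

```python
class SemiconductorMemory:
    def __init__(self, name: str, threshold: int):
        self.name = name
        self.threshold = threshold        # set in advance
        self.unnecessary_bytes = 0

    def receive_delete(self) -> None:
        print(f"delete command sent to {self.name}")
        self.unnecessary_bytes = 0

def check_and_delete(first: SemiconductorMemory, second: SemiconductorMemory) -> None:
    """Comparison unit + transmission unit: if the first memory holds more
    unnecessary data than its threshold, send it a delete command, and send
    a delete command to the second memory as well."""
    if first.unnecessary_bytes > first.threshold:
        first.receive_delete()
        second.receive_delete()   # cascaded delete upon the first transmission

first = SemiconductorMemory("first memory", threshold=1024)
second = SemiconductorMemory("second memory", threshold=2048)
first.unnecessary_bytes = 4096
check_and_delete(first, second)
```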
  • Patent number: 11429284
    Abstract: In an example, an apparatus may include a memory comprising a number of groups of memory cells and a controller coupled to the memory and configured to track respective invalidation velocities of the number of groups of memory cells and to assign categories to the number of groups of memory cells based on the invalidation velocities.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: August 30, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Shirish D. Bahirat, Jonathan M. Haswell, William Akin
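A minimal sketch of tracking an invalidation velocity per group of memory cells and mapping it to a category; the time window and the hot/warm/cold cut-offs are assumptions, since the abstract does not specify them.

```python
import time

class BlockGroup:
    """One group of memory cells whose invalidation velocity is tracked."""
    def __init__(self):
        self.invalidated_pages = 0
        self.window_start = time.monotonic()

    def invalidate(self, pages: int = 1) -> None:
        self.invalidated_pages += pages

    def invalidation_velocity(self) -> float:
        elapsed = max(time.monotonic() - self.window_start, 1e-9)
        return self.invalidated_pages / elapsed     # pages invalidated per second

def assign_category(velocity: float) -> str:
    """Map an invalidation velocity to a category the controller can use,
    e.g. when choosing garbage-collection or wear-leveling targets."""
    if velocity > 100.0:
        return "hot"
    if velocity > 10.0:
        return "warm"
    return "cold"

group = BlockGroup()
group.invalidate(50)
print(assign_category(group.invalidation_velocity()))
```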
  • Patent number: 11429525
    Abstract: In various embodiments, a predictive assignment application computes a forecasted amount of processor use for each workload included in a set of workloads using a trained machine-learning model. Based on the forecasted amounts of processor use, the predictive assignment application computes a performance cost estimate associated with an estimated level of cache interference arising from executing the set of workloads on a set of processors. Subsequently, the predictive assignment application determines processor assignment(s) based on the performance cost estimate. At least one processor included in the set of processors is subsequently configured to execute at least a portion of a first workload that is included in the set of workloads based on the processor assignment(s).
    Type: Grant
    Filed: July 12, 2019
    Date of Patent: August 30, 2022
    Assignee: NETFLIX, INC.
    Inventors: Benoit Rostykus, Gabriel Hartmann
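A minimal sketch under stated assumptions: a stub stands in for the trained machine-learning model, and the cache-interference cost is modeled as the pairwise product of forecasted CPU use on a shared processor, which is an illustrative choice rather than the patent's cost function.

```python
from itertools import product

def forecast_cpu_use(workload: str) -> float:
    # Stand-in for the trained machine-learning model in the abstract.
    return {"encode": 0.8, "index": 0.5, "serve": 0.3}.get(workload, 0.4)

def interference_cost(assignment: dict[str, int]) -> float:
    """Estimate cache-interference cost: workloads sharing a processor
    interfere in proportion to the product of their forecasted CPU use."""
    per_cpu: dict[int, list[float]] = {}
    for workload, cpu in assignment.items():
        per_cpu.setdefault(cpu, []).append(forecast_cpu_use(workload))
    cost = 0.0
    for loads in per_cpu.values():
        for i in range(len(loads)):
            for j in range(i + 1, len(loads)):
                cost += loads[i] * loads[j]
    return cost

# Enumerate assignments of three workloads to two processors and keep the cheapest.
workloads = ["encode", "index", "serve"]
cpus = [0, 1]
best = min(
    (dict(zip(workloads, combo)) for combo in product(cpus, repeat=len(workloads))),
    key=interference_cost,
)
print(best, interference_cost(best))
```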
  • Patent number: 11429289
    Abstract: An apparatus to facilitate memory map security in a system on chip (SOC) is disclosed. The apparatus includes a micro controller to receive a request to grant a host device an access to a memory device and perform an alias checking process to verify accuracy of a memory map of the memory device.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: August 30, 2022
    Assignee: Intel Corporation
    Inventors: Karunakara Kotary, Pannerkumar Rajagopal, Sahil Dureja, Mohamed Haniffa, Prashant Dewan
  • Patent number: 11429418
    Abstract: A data management system having a storage appliance configured to store a snapshot of a virtual machine; and one or more processors in communication with the storage appliance. The one or more processors are configured to perform operations including: identifying a plurality of shards of the virtual machine; requesting a shard snapshot of each of the plurality of shards; receiving the shard snapshots asynchronously; ordering the received shard snapshots sequentially into a results queue; and storing a single snapshot of the virtual machine based on the ordered shard snapshots. The operations may further include maintaining a flow control queue that limits a number of the requested shard snapshots.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: August 30, 2022
    Assignee: Rubrik, Inc.
    Inventors: Christopher Denny, Li Ding, Linglin Yu, Stephen Chu, Ying Wu
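A minimal sketch of the asynchronous shard-snapshot flow using asyncio, with a semaphore standing in for the flow control queue and a sort standing in for the sequential results queue; the delays and function names are illustrative.

```python
import asyncio
import random

async def snapshot_shard(shard_id: int) -> tuple[int, str]:
    """Request a shard snapshot; responses arrive asynchronously."""
    await asyncio.sleep(random.random() * 0.1)
    return shard_id, f"snapshot-of-shard-{shard_id}"

async def snapshot_virtual_machine(num_shards: int, max_in_flight: int = 2) -> list[str]:
    """Take shard snapshots with a flow-control limit, then order the results
    sequentially by shard before assembling the single VM snapshot."""
    flow_control = asyncio.Semaphore(max_in_flight)   # limits outstanding requests

    async def limited(shard_id: int):
        async with flow_control:
            return await snapshot_shard(shard_id)

    results = await asyncio.gather(*(limited(i) for i in range(num_shards)))
    results.sort(key=lambda pair: pair[0])            # the "results queue", in shard order
    return [snap for _, snap in results]

ordered = asyncio.run(snapshot_virtual_machine(num_shards=4))
print(ordered)   # shard snapshots in sequential order, ready to store as one snapshot
```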
  • Patent number: 11422711
    Abstract: A computing device includes an interface configured to interface and communicate with a dispersed storage network (DSN), a memory that stores operational instructions, and a processing module operably coupled to the interface and memory such that the processing module, when operable within the computing device based on the operational instructions, is configured to perform various operations. For example, the computing device monitors storage unit (SU)-based write transfer rates and SU-based write failure rates associated with each of the SUs for a write request of encoded data slices (EDSs) to the SUs within the DSN. The computing device generates and maintains a SU write performance distribution based on monitoring of the SU-based write transfer rates and the SU-based write failure rates and adaptively adjusts a trimmed write threshold number of EDSs and/or a target width of EDSs for write requests of sets of EDSs to the SUs within the DSN.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: August 23, 2022
    Assignee: PURE STORAGE, INC.
    Inventors: Greg R. Dhuse, Jason K. Resch, Ethan S. Wozniak
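A minimal sketch of adapting a trimmed write threshold from per-storage-unit failure rates; the exponential weighting and the adjustment rules are assumptions, not the patent's distribution model.

```python
class WriteThresholdTuner:
    """Tracks per-storage-unit write failure rates and adapts the trimmed
    write threshold (how many encoded data slice writes must succeed)."""

    def __init__(self, width: int, threshold: int):
        self.width = width              # target number of encoded data slices per set
        self.threshold = threshold      # trimmed write threshold
        self.failure_rates: dict[str, float] = {}

    def record(self, storage_unit: str, failure_rate: float) -> None:
        # Exponentially weighted update keeps a running distribution per unit.
        prev = self.failure_rates.get(storage_unit, failure_rate)
        self.failure_rates[storage_unit] = 0.8 * prev + 0.2 * failure_rate

    def adjust(self) -> int:
        """Lower the threshold when units are failing often; raise it back otherwise."""
        avg_failure = sum(self.failure_rates.values()) / max(len(self.failure_rates), 1)
        if avg_failure > 0.2 and self.threshold > self.width // 2:
            self.threshold -= 1
        elif avg_failure < 0.05 and self.threshold < self.width:
            self.threshold += 1
        return self.threshold

tuner = WriteThresholdTuner(width=16, threshold=14)
for unit in ("su-1", "su-2", "su-3"):
    tuner.record(unit, failure_rate=0.3)
print(tuner.adjust())   # threshold drops toward a value the units can sustain
```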
  • Patent number: 11422739
    Abstract: A memory controller controls a memory device including a memory cell array, and includes: a message information generator configured to receive a first request message from a host and to generate and output response characteristic information indicating a type of the first request message, the type defining a response time within which a message response to the first request message is provided to the host; and a response output controller configured to determine, based on the response characteristic information, a time at which the message response corresponding to the first request message is output to the host.
    Type: Grant
    Filed: July 12, 2019
    Date of Patent: August 23, 2022
    Assignee: SK hynix Inc.
    Inventor: Hung Yung Cho
  • Patent number: 11416162
    Abstract: The present application relates to a garbage collection method and a storage device for reducing write amplification. A method for selecting a data block to be collected in garbage collection includes: obtaining, according to a first selection policy, a first data block to be collected; determining, according to a first rejection policy, whether to refuse to collect the first data block; and if, according to the first rejection policy, collection of the first data block is refused, not performing garbage collection on the first data block to be collected.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: August 16, 2022
    Assignee: BEIJING MEMBLAZE TECHNOLOGY CO., LTD
    Inventors: Jinyi Wang, Xiangfeng Lu
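A minimal sketch of a selection policy plus a rejection policy; the greedy selection, the recently-written test, and the one-quarter invalid-data cut-off are illustrative stand-ins for whatever policies an implementation would actually use.

```python
from dataclasses import dataclass

@dataclass
class DataBlock:
    block_id: int
    invalid_pages: int
    total_pages: int
    recently_written: bool

def first_selection_policy(blocks: list[DataBlock]) -> DataBlock:
    """Select the candidate with the most invalid data (greedy selection)."""
    return max(blocks, key=lambda b: b.invalid_pages)

def first_rejection_policy(block: DataBlock) -> bool:
    """Refuse to collect blocks whose reclaim would amplify writes,
    e.g. recently written blocks or blocks with too little invalid data."""
    return block.recently_written or block.invalid_pages < block.total_pages // 4

def pick_block_for_gc(blocks: list[DataBlock]) -> DataBlock | None:
    candidate = first_selection_policy(blocks)
    if first_rejection_policy(candidate):
        return None          # rejection determined: do not collect this block
    return candidate

blocks = [
    DataBlock(1, invalid_pages=10, total_pages=64, recently_written=False),
    DataBlock(2, invalid_pages=40, total_pages=64, recently_written=True),
]
print(pick_block_for_gc(blocks))   # block 2 is selected but then rejected, so None
```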
  • Patent number: 11416152
    Abstract: According to one embodiment, an information processing device includes a characteristics monitoring unit, a determination unit, and a notification unit. The characteristics monitoring unit monitors characteristics information of a storage device, which indicates at least one of the storage device's performance and lifetime and includes input/output characteristics. The determination unit determines, based on the monitored characteristics information including the input/output characteristics, whether a change instruction for changing characteristics is to be notified to the storage device. The notification unit notifies the storage device of the change instruction when the determination unit determines that the change instruction is to be notified.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: August 16, 2022
    Assignee: KIOXIA CORPORATION
    Inventors: Takeshi Ishihara, Shinichi Kanno
  • Patent number: 11416410
    Abstract: A memory system includes: a memory device suitable for storing map information; and a controller suitable for storing a portion of the map information in a map cache, and accessing the memory device based on the map information stored in the map cache or accessing the memory device based on a physical address that is selectively provided together with an access request from a host, wherein the map cache includes a write map cache suitable for storing map information corresponding to a write command, and a read map cache suitable for storing map information corresponding to a read command, and wherein the controller provides the host with map information that is outputted from the read map cache.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: August 16, 2022
    Assignee: SK hynix Inc.
    Inventor: Hye-Mi Kang
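A minimal sketch of keeping separate write and read map caches and exporting map information to the host only from the read-side cache; the LRU structure, capacities, and class names are assumptions.

```python
from collections import OrderedDict

class MapCache:
    """A small LRU cache of logical-to-physical map entries."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries: OrderedDict[int, int] = OrderedDict()

    def put(self, logical: int, physical: int) -> None:
        self.entries[logical] = physical
        self.entries.move_to_end(logical)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)

    def get(self, logical: int):
        return self.entries.get(logical)

class Controller:
    """Keeps separate map caches for write and read commands; only map
    information from the read map cache is shared with the host."""
    def __init__(self):
        self.write_map_cache = MapCache(capacity=128)
        self.read_map_cache = MapCache(capacity=128)

    def on_write(self, logical: int, physical: int) -> None:
        self.write_map_cache.put(logical, physical)

    def on_read(self, logical: int, physical: int) -> None:
        self.read_map_cache.put(logical, physical)

    def map_info_for_host(self, logical: int):
        # Host-side map caching is fed from the read map cache only.
        return self.read_map_cache.get(logical)

ctrl = Controller()
ctrl.on_write(0x10, 0xAAA)
ctrl.on_read(0x20, 0xBBB)
print(ctrl.map_info_for_host(0x20))   # 0xBBB
print(ctrl.map_info_for_host(0x10))   # None: write-path entries are not exported
```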
  • Patent number: 11409459
    Abstract: The present disclosure generally relates to methods of operating storage devices. The storage device comprises a controller comprising first random access memory (RAM1), second random access memory (RAM2), and a storage unit divided into a plurality of zones. A first command to write data to a first zone is received, first parity data for the first command is generated in the RAM1, and the data of the first command is written to the first zone. When a second command to write data to a second zone is received, the generated first parity data is copied from the RAM1 to a parking section in the storage unit, and second parity data associated with the second zone is copied from the parking section to the RAM1. The second parity data is then updated in the RAM1 with the data of the second command and copied to the parking section.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: August 9, 2022
    Assignee: Western Digital Technologies, Inc.
    Inventors: Peter Grayson, Daniel L. Helmick, Liam Parker, Sergey Anatolievich Gorobets
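A minimal sketch of parking per-zone parity when a write for a different zone arrives, using a byte-wise XOR as a stand-in for the real parity computation; the ZonedDevice class and its fields are hypothetical.

```python
class ZonedDevice:
    """Sketch of swapping per-zone parity between limited controller RAM (RAM1)
    and a 'parking section' in the storage unit when the active zone changes."""

    def __init__(self):
        self.ram1_zone = None                # which zone's parity currently lives in RAM1
        self.ram1_parity = 0
        self.parking: dict[int, int] = {}    # zone -> parked parity data

    def write(self, zone: int, data: bytes) -> None:
        if self.ram1_zone is not None and self.ram1_zone != zone:
            # A command for a different zone arrived: copy the current parity
            # to the parking section and load the requested zone's parked parity.
            self.parking[self.ram1_zone] = self.ram1_parity
            self.ram1_parity = self.parking.get(zone, 0)
        self.ram1_zone = zone
        for byte in data:                    # update the in-RAM parity with the new data
            self.ram1_parity ^= byte

dev = ZonedDevice()
dev.write(zone=1, data=b"first zone data")
dev.write(zone=2, data=b"second zone data")   # zone 1 parity parked, zone 2 parity loaded
dev.write(zone=1, data=b"more for zone one")  # zone 1 parity restored and updated
print(sorted(dev.parking))                    # zones whose parity has been parked
```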
  • Patent number: 11409667
    Abstract: A deduplication engine maintains a hash table containing hash values of tracks of data stored on managed drives of a storage system. The deduplication engine keeps track of how frequently the tracks are accessed by the deduplication engine using an exponential moving average for each track. Target tracks which are frequently accessed by the deduplication engine are cached in local memory, so that required byte-by-byte comparisons between the target track and write data may be performed locally rather than requiring the target track to be read from managed drives. The deduplication engine implements a Least Recently Used (LRU) cache data structure in local memory to manage locally cached tracks of data. If a track is to be removed from local memory, a final validation of the target track is implemented on the version stored in managed resources before evicting the track from the LRU cache.
    Type: Grant
    Filed: April 15, 2021
    Date of Patent: August 9, 2022
    Assignee: Dell Products, L.P.
    Inventors: Venkata Ippatapu, Ramesh Doddaiah
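A minimal sketch combining an exponential moving average of per-track access frequency with an LRU cache of target tracks; the alpha value, the caching cut-off, and the class names are assumptions, and the final-validation step is only indicated by a comment.

```python
from collections import OrderedDict

class DedupTrackCache:
    """Caches frequently referenced deduplication target tracks locally.
    Access frequency is tracked with an exponential moving average (EMA),
    and an LRU structure decides which track to evict."""

    def __init__(self, capacity: int, alpha: float = 0.3):
        self.capacity = capacity
        self.alpha = alpha
        self.ema: dict[str, float] = {}
        self.lru: OrderedDict[str, bytes] = OrderedDict()

    def access(self, track_hash: str, read_from_drives) -> bytes:
        # Update the EMA: this track scores 1.0 for the current access.
        self.ema[track_hash] = (self.alpha * 1.0 +
                                (1 - self.alpha) * self.ema.get(track_hash, 0.0))
        if track_hash in self.lru:
            self.lru.move_to_end(track_hash)      # hit: byte-by-byte compare can run locally
            return self.lru[track_hash]
        data = read_from_drives(track_hash)       # miss: fetch from managed drives
        if self.ema[track_hash] > 0.5:            # only cache frequently seen tracks
            self.lru[track_hash] = data
            if len(self.lru) > self.capacity:
                evicted, _ = self.lru.popitem(last=False)
                # A final validation of the evicted track against the copy on
                # managed drives would happen here before it leaves local memory.
        return data

cache = DedupTrackCache(capacity=2)
fake_drives = lambda h: f"track-data-for-{h}".encode()
for _ in range(3):
    cache.access("abc123", fake_drives)
print("abc123" in cache.lru)   # True after repeated accesses raise its EMA
```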
  • Patent number: 11403025
    Abstract: A matrix transfer accelerator (MTA) system/method that coordinates data transfers between an external data memory (EDM) and a local data memory (LDM) using matrix tiling and/or grouping is disclosed. The system utilizes foreground/background buffering that overlaps compute and data transfer operations and permits EDM-to-LDM data transfers with or without zero pad peripheral matrix filling. The system may incorporate an automated zero-fill direct memory access (DMA) controller (ZDC) that transfers data from the EDM to the LDM based on a set of DMA controller registers including data width register (DWR), transfer count register (TCR), fill count register (FCR), EDM source address register (ESR), and LDM target address register (LTR). The ZDC transfers matrix data from the EDM[ESR] to the LDM[LTR] such that EDM matrix data of DWR row data width is automatically zero-filled around a periphery of a matrix written to the LDM matrix based on the FCR value.
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: August 2, 2022
    Assignee: Texas Instruments Incorporated
    Inventors: Arthur John Redfern, Asheesh Bhadwaj
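A minimal sketch of a zero-fill transfer from external to local memory, reusing the abstract's register names (DWR, TCR, FCR, ESR) loosely as Python parameters; an actual ZDC moves data by DMA descriptors rather than nested loops.

```python
def zero_fill_dma_transfer(edm: list[list[int]], esr: tuple[int, int],
                           dwr: int, tcr: int, fcr: int) -> list[list[int]]:
    """Copy a TCR x DWR tile from external memory (EDM) starting at ESR into
    local memory (LDM), automatically surrounding it with FCR rows/columns
    of zero padding, as a zero-fill DMA controller (ZDC) would."""
    row0, col0 = esr
    ldm_rows = tcr + 2 * fcr
    ldm_cols = dwr + 2 * fcr
    ldm = [[0] * ldm_cols for _ in range(ldm_rows)]    # zero-filled target tile
    for r in range(tcr):
        for c in range(dwr):
            ldm[fcr + r][fcr + c] = edm[row0 + r][col0 + c]
    return ldm

# Example: a 2x3 tile from EDM lands in LDM with a 1-element zero border,
# the padding a convolution kernel needs at the matrix periphery.
edm = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12]]
for row in zero_fill_dma_transfer(edm, esr=(0, 0), dwr=3, tcr=2, fcr=1):
    print(row)
```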
  • Patent number: 11398894
    Abstract: A method comprising initializing, by a processor, a field identification (FID) field and a file type field in a memory encryption counter block associated with pages for each file of a plurality of files stored in a persistent memory device (PMD), in response to a command by an operating system (OS). The file type field identifies whether each file associated with the FID field is an encrypted file or a memory location. The method includes decrypting data of a page stored in the PMD, based on a read command by a requesting core. When decrypting, it is determined whether the requested page is an encrypted file or a memory location. If the requested page is an encrypted file, decryption is performed based on a first encryption pad generated based on the file encryption key of the encrypted file and a second encryption pad generated based on a processor key of the secure processor.
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: July 26, 2022
    Assignee: UNIVERSITY OF CENTRAL FLORIDA RESEARCH FOUNDATION, INC.
    Inventor: Amro Awad
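A minimal conceptual sketch of applying two counter-mode-style encryption pads, one derived from a file key and one from a processor key; SHA-256 stands in for the real cipher (AES counter mode in practice), and all names and key material here are illustrative.

```python
import hashlib

def encryption_pad(key: bytes, counter_block: bytes, length: int) -> bytes:
    """Counter-mode style pad: derive a keystream from a key and a counter block.
    (A stand-in for the real cipher used by the secure processor.)"""
    pad = b""
    block = 0
    while len(pad) < length:
        pad += hashlib.sha256(key + counter_block + block.to_bytes(4, "big")).digest()
        block += 1
    return pad[:length]

def decrypt_page(ciphertext: bytes, counter_block: bytes,
                 file_key: bytes, processor_key: bytes, is_encrypted_file: bool) -> bytes:
    """If the page belongs to an encrypted file, apply both pads: one from the
    file encryption key and one from the processor key; otherwise only the
    processor pad applies (plain memory location)."""
    data = bytes(c ^ p for c, p in
                 zip(ciphertext, encryption_pad(processor_key, counter_block, len(ciphertext))))
    if is_encrypted_file:
        data = bytes(d ^ p for d, p in
                     zip(data, encryption_pad(file_key, counter_block, len(data))))
    return data

# Round trip: because the pads are XORed, encryption is the same operation.
counter = b"FID=7|type=file|page=42"
plain = b"secret page contents"
ct = decrypt_page(plain, counter, b"file-key", b"cpu-key", is_encrypted_file=True)
print(decrypt_page(ct, counter, b"file-key", b"cpu-key", is_encrypted_file=True) == plain)
```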
  • Patent number: 11397528
    Abstract: A snapshot for use in a cascaded snapshot environment includes a device level source sequence number and a Direct Image Lookup (DIL) data structure. The device level source sequence number indicates the level of the snapshot in the cascade, and the snapshot DIL indicates the location of the data within the snapshot cascade. A target device for use in the cascaded snapshot environment includes a device level target sequence number, a track level sequence data structure, and a DIL. When the target device is linked to a snapshot, the device level target sequence number is incremented, which invalidates all tracks of the target device. The snapshot DIL is copied to the target device, but a define process is not run on the target device such that the tracks of the target device remain undefined. IO operations use the device level target sequence number to identify data on the target device.
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: July 26, 2022
    Assignee: Dell Products, L.P.
    Inventors: Sandeep Chandrashekhara, Michael Ferrari, Jeffrey Wilson
  • Patent number: 11392499
    Abstract: Various implementations described herein relate to systems and methods for dynamically managing buffers of a storage device, including receiving, by a controller of the storage device from a host, information indicative of a frequency by which data stored in the storage device is accessed, and, in response to receiving the information, determining, by the controller, the order by which read buffers of the storage device are allocated for a next read command. The NAND read count of virtual Word-Lines (WLs) is also used to cache the more frequently accessed WLs, thus proactively reducing read disturb and consequently increasing NAND reliability and NAND life.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: July 19, 2022
    Assignee: KIOXIA CORPORATION
    Inventors: Saswati Das, Manish Kadam, Neil Buxton
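A minimal sketch of ordering read-buffer allocation by host-reported access frequency; the dictionary-based frequency hint and the sort-by-hotness rule are assumptions about how the received information might be used.

```python
def order_read_buffers(pending_reads: list[dict], host_frequency: dict[int, int]) -> list[dict]:
    """Allocate read buffers for the next read commands in an order driven by
    the host-reported access frequency of the data (hotter data first), so the
    most frequently accessed word-lines stay cached and see fewer NAND reads."""
    return sorted(pending_reads,
                  key=lambda cmd: host_frequency.get(cmd["lba"], 0),
                  reverse=True)

# Host hint: LBA 0x200 is read far more often than the others.
host_frequency = {0x100: 3, 0x200: 90, 0x300: 12}
pending = [{"lba": 0x100}, {"lba": 0x200}, {"lba": 0x300}]
for cmd in order_read_buffers(pending, host_frequency):
    print(hex(cmd["lba"]))   # 0x200 first: its buffer is allocated (and kept) first
```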