Patents Examined by Ryan Dare
  • Patent number: 10936226
    Abstract: According to one embodiment, when data is to be written to a first physical storage location that is designated by a first physical address, a memory system encrypts the data with the first physical address and a first encryption key, and writes the encrypted data to the first physical storage location. When the encrypted data is to be copied to a second physical storage location, the memory system decrypts the encrypted data with the first physical address and the first encryption key, and re-encrypts the decrypted data with a second encryption key and a copy destination physical address indicative of the second physical storage location.
    Type: Grant
    Filed: February 21, 2019
    Date of Patent: March 2, 2021
    Assignee: Toshiba Memory Corporation
    Inventor: Shinichi Kanno
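To make the copy path concrete, here is a minimal Python sketch of the general idea, with a toy SHA-256 keystream standing in for the real cipher; the function names and the 8-byte address encoding are illustrative assumptions, not the patented implementation.

```python
# Toy sketch (not the patented implementation): tying ciphertext to a
# physical address by deriving a keystream from (key, address), so a
# copy to a new address requires decrypt-then-re-encrypt.
import hashlib

def _keystream(key: bytes, phys_addr: int, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + phys_addr.to_bytes(8, "little") +
                              counter.to_bytes(4, "little")).digest()
        counter += 1
    return out[:length]

def encrypt(data: bytes, key: bytes, phys_addr: int) -> bytes:
    ks = _keystream(key, phys_addr, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

decrypt = encrypt  # XOR against the same keystream is symmetric

def copy_block(ciphertext, src_addr, dst_addr, old_key, new_key):
    # Decrypt with the source address and key, then re-encrypt with the
    # destination address (and possibly a new key) before writing.
    plain = decrypt(ciphertext, old_key, src_addr)
    return encrypt(plain, new_key, dst_addr)
```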
  • Patent number: 10936502
    Abstract: A computing device includes a persistent storage and a processor. The processor includes a local storage. The local storage includes blocks and an address space. The address space includes a first portion of entries that specify blocks of the local storage and a second portion of entries that specify blocks of the remote data storage. The processor obtains data for storage and makes a determination that the data cannot be stored in the local storage. In response to the determination, the processor stores the data in the remote storage using the second portion of entries.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: March 2, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Bob Yan, Helen Yan
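A rough Python sketch of the split address space described above, under the assumption that the first portion of entries maps to local blocks and the second portion to remote blocks; the class and method names are invented for illustration.

```python
# Minimal sketch (assumed interfaces): the first portion of address-space
# entries maps to local blocks; when local storage cannot hold new data,
# the second portion of entries is used to store it remotely.
class TieredStorage:
    def __init__(self, local_blocks: int, remote_blocks: int):
        self.local_free = list(range(local_blocks))                   # first portion
        self.remote_free = list(range(local_blocks,
                                      local_blocks + remote_blocks))  # second portion
        self.blocks = {}   # entry index -> data

    def store(self, data: bytes) -> int:
        # Determination: can the data be stored in the local storage?
        if self.local_free:
            entry = self.local_free.pop(0)
        elif self.remote_free:
            # Use the second portion of entries, which map to remote blocks.
            entry = self.remote_free.pop(0)
        else:
            raise MemoryError("address space exhausted")
        self.blocks[entry] = data
        return entry
```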
  • Patent number: 10915486
    Abstract: Server computers often include one or more input/output (I/O) devices for communicating with a network or directly attached storage device. The data transfer latency for a request can be reduced by utilizing ingress data placement logic to bypass the processor of the I/O device. For example, host memory descriptors can be stored in a memory of the I/O device to facilitate placement of the requested data.
    Type: Grant
    Filed: November 10, 2017
    Date of Patent: February 9, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Asif Khan, Thomas A. Volpe, Marc John Brooker, Marc Stephen Olson, Norbert Paul Kusters, Mark Bradley Davis, Robert Michael Johnson
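A simplified Python sketch of the descriptor-caching idea, assuming a dictionary of host memory descriptors kept on the I/O device and a flat byte-addressable host buffer; all names are hypothetical.

```python
# Illustrative sketch: caching host memory descriptors on the I/O device
# so ingress placement logic can copy requested data straight to host
# buffers without involving the I/O device's processor.
class IngressPlacement:
    def __init__(self):
        self.descriptors = {}   # request id -> (host_address, length)

    def post_descriptor(self, request_id: int, host_addr: int, length: int):
        # Stored in I/O-device memory when the request is issued.
        self.descriptors[request_id] = (host_addr, length)

    def on_ingress(self, request_id: int, payload: bytes, host_memory: bytearray):
        # Fast path: look up the descriptor and place the data directly.
        host_addr, length = self.descriptors.pop(request_id)
        chunk = payload[:length]
        host_memory[host_addr:host_addr + len(chunk)] = chunk
```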
  • Patent number: 10915453
    Abstract: An apparatus is described. The apparatus includes a memory controller to interface to a multi-level system memory having first and second different cache structures. The memory controller has circuitry to service a read request by concurrently performing a look-up into the first and second different cache structures for a cache line that is targeted by the read request.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: February 9, 2021
    Assignee: Intel Corporation
    Inventors: Israel Diamand, Zvika Greenfield, Julius Mandelblat, Asaf Rubinstein
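A minimal Python sketch of the concurrent dual lookup, using dictionaries for the two cache structures and threads for concurrency; this is an analogy to the hardware behavior, not the controller circuitry.

```python
# Sketch: service a read by looking up both cache structures at once and
# preferring a hit in the first (nearer) structure.
from concurrent.futures import ThreadPoolExecutor

def concurrent_lookup(addr, near_cache: dict, far_cache: dict):
    with ThreadPoolExecutor(max_workers=2) as pool:
        near = pool.submit(near_cache.get, addr)
        far = pool.submit(far_cache.get, addr)
        return near.result() if near.result() is not None else far.result()

print(concurrent_lookup(0x40, {0x40: "line-from-near"}, {0x40: "line-from-far"}))
```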
  • Patent number: 10915251
    Abstract: Techniques to optimize use of the available capacity of a backup target storage device are disclosed. In various embodiments, a current capacity of a target system to which backup data is to be streamed to handle additional streams is determined dynamically, at or near a time at which a backup operation is to be performed. One or more backup parameters of the backup operation are set dynamically, based at least in part on the dynamically determined current capacity of the target system.
    Type: Grant
    Filed: July 15, 2019
    Date of Patent: February 9, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Shelesh Chopra, Rajkumar Palkhade
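An illustrative Python sketch of setting backup parameters from the target's current stream capacity; the specific parameters and thresholds are invented for the example.

```python
# Sketch: size the backup just before it starts, from the target system's
# currently available stream capacity.
def plan_backup(target_max_streams: int, target_busy_streams: int,
                requested_streams: int) -> dict:
    available = max(target_max_streams - target_busy_streams, 0)
    streams = min(requested_streams, available)
    return {
        "streams": max(streams, 1),                    # keep the job runnable
        "deduplicate": available < requested_streams,  # example knob
    }

print(plan_backup(target_max_streams=32, target_busy_streams=30,
                  requested_streams=8))
```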
  • Patent number: 10915267
    Abstract: Examples include techniques for implementing a write transaction to two or more memory devices in a storage device. In some examples, the write transaction includes an atomic write transaction from an application or operating system executing on a computing platform to a storage device coupled with the computing platform. For these examples, the storage device includes a storage controller to receive an atomic multi-media write transaction request to write first data and second data; cause the first data to be stored in a first memory device, and cause the second data to be stored in a second memory device, simultaneously and atomically.
    Type: Grant
    Filed: December 6, 2017
    Date of Patent: February 9, 2021
    Assignee: Intel Corporation
    Inventors: Sanjeev N. Trika, Peng Li, Jawad B. Khan, Myron Loewen
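A toy Python sketch of the all-or-nothing write to two memory devices, with dictionaries standing in for the devices; real controllers would use journaling or power-fail-safe buffers, which are omitted here.

```python
# Simplified sketch (not Intel's controller logic): stage both writes,
# then commit them together so the transaction is all-or-nothing.
class TwoDeviceController:
    def __init__(self):
        self.device_a = {}
        self.device_b = {}

    def atomic_write(self, addr_a, data_a, addr_b, data_b):
        staged = [(self.device_a, addr_a, data_a),
                  (self.device_b, addr_b, data_b)]
        # Commit point: apply both staged writes; if anything fails before
        # this loop, neither device is modified.
        for device, addr, data in staged:
            device[addr] = data
```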
  • Patent number: 10908847
    Abstract: A Memory Device (MD) includes a Non-Volatile Memory (NVM) including a first memory array and a second memory array. An address is associated with a first location in the first memory array and with a second location in the second memory array. A read command is received to read data for the address, and it is determined whether data stored in the NVM for the address is persistent. If not, it is determined whether data for the address has been written for the address after a last power-up of the MD. The read command is performed by returning zeroed data if data has not been written for the address after the last power-up. If data has been written after the last power-up, data stored in the first location is returned. In one aspect, a processor sends a command to the MD setting a volatility mode for the MD.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: February 2, 2021
    Assignee: Western Digital Technologies, Inc.
    Inventors: Viacheslav Dubeyko, Luis Cargnini
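A simplified Python model of the volatility behavior described above, collapsing the two memory arrays into one map; flag names and the block size are assumptions.

```python
# Toy sketch: reads return zeroed data for addresses that are
# non-persistent and have not been written since the last power-up.
class VolatileModeNVM:
    def __init__(self, block: int = 16):
        self.block = block
        self.cells = {}                  # address -> data
        self.persistent = set()          # addresses marked persistent
        self.written_since_power_up = set()

    def power_up(self):
        self.written_since_power_up.clear()

    def write(self, addr: int, data: bytes):
        self.cells[addr] = data
        self.written_since_power_up.add(addr)

    def read(self, addr: int) -> bytes:
        if addr in self.persistent or addr in self.written_since_power_up:
            return self.cells.get(addr, bytes(self.block))
        return bytes(self.block)         # zeroed data
```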
  • Patent number: 10871905
    Abstract: A computing device includes an interface configured to interface and communicate with a dispersed storage network (DSN), a memory that stores operational instructions, and a processing module operably coupled to the interface and memory such that the processing module, when operable within the computing device based on the operational instructions, is configured to perform various operations. For example, the computing device monitors storage unit (SU)-based write transfer rates and SU-based write failure rates associated with each of the SUs for a write request of encoded data slices (EDSs) to the SUs within the DSN. The computing device generates and maintains a SU write performance distribution based on monitoring of the SU-based write transfer rates and the SU-based write failure rates and adaptively adjusts a trimmed write threshold number of EDSs and/or a target width of EDSs for write requests of sets of EDSs to the SUs within the DSN.
    Type: Grant
    Filed: January 24, 2019
    Date of Patent: December 22, 2020
    Assignee: Pure Storage, Inc.
    Inventors: Greg R. Dhuse, Jason K. Resch, Ethan S. Wozniak
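A hedged Python sketch of adapting the trimmed write threshold from observed failure rates; the 0.2/0.05 trigger points are invented, not taken from the patent.

```python
# Sketch: track per-storage-unit write failure rates and nudge the
# trimmed write threshold up or down based on the pool's health.
from statistics import mean

class WriteMonitor:
    def __init__(self, width: int, threshold: int):
        self.width = width               # target number of EDS writes
        self.threshold = threshold       # trimmed write threshold
        self.failure_rates = {}          # SU id -> recent failure rate

    def record(self, su_id: str, failure_rate: float):
        self.failure_rates[su_id] = failure_rate

    def adjust(self) -> int:
        if not self.failure_rates:
            return self.threshold
        avg_failure = mean(self.failure_rates.values())
        # More failures -> require fewer confirmed slices before success;
        # healthier pool -> raise the threshold back toward the width.
        if avg_failure > 0.2 and self.threshold > self.width // 2 + 1:
            self.threshold -= 1
        elif avg_failure < 0.05 and self.threshold < self.width:
            self.threshold += 1
        return self.threshold
```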
  • Patent number: 10853264
    Abstract: A virtual memory system includes a virtual memory engine coupled to a plurality of physical memory devices and a virtual memory database. During an initialization process, the virtual memory engine uses a first unique global identifier to create virtual memory in the virtual memory database by mapping a continuous virtual memory address range to non-continuous physical memory device address ranges that are provided across the plurality of physical memory devices. During the initialization process, or subsequently during runtime, the virtual memory engine uses a second unique global identifier to define a virtual memory device namespace in the virtual memory that includes a first continuous subset of the continuous virtual memory address range. During runtime, the virtual memory engine then provides read and write block mode access to the plurality of physical memory devices via the virtual memory device namespace defined in the virtual memory database.
    Type: Grant
    Filed: December 6, 2017
    Date of Patent: December 1, 2020
    Assignee: Dell Products L.P.
    Inventors: Shekar Babu Suryanarayana, Sumanth Vidyadhara, Parmeshwr Prasad
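A minimal Python sketch of the two-step mapping, using UUIDs as the global identifiers and simple tuples for the extent table; the structures are assumptions for illustration.

```python
# Sketch: map one continuous virtual range onto scattered physical
# extents, then carve a namespace out of a continuous subset of it.
import uuid

class VirtualMemoryEngine:
    def __init__(self):
        self.maps = {}        # guid -> [(virt_start, device, phys_start, length)]
        self.namespaces = {}  # guid -> (map_guid, virt_start, length)

    def create_virtual_memory(self, extents):
        guid = str(uuid.uuid4())
        virt, table = 0, []
        for device, phys_start, length in extents:   # non-continuous physical ranges
            table.append((virt, device, phys_start, length))
            virt += length
        self.maps[guid] = table
        return guid

    def define_namespace(self, map_guid, virt_start, length):
        ns_guid = str(uuid.uuid4())
        self.namespaces[ns_guid] = (map_guid, virt_start, length)
        return ns_guid
```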
  • Patent number: 10846242
    Abstract: Systems and methods in accordance with various embodiments of the present disclosure provide approaches for configurable allocation of ways in a cache. When a packet is received, the packet can be parsed to determine its type and a corresponding operating mode can be looked up. Based on the operating mode, one or more specific ranges of ways may be determined for the packet's data. For example, a first range of ways may be defined to include context data, a second range of ways may be defined to include descriptor data, and a third range of ways may be defined that can include both context and descriptor data. An eviction engine may clear data from and/or store data to a particular way in the cache based on the operating mode.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: November 24, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Ofer Frishman, Guy Nakibly, Erez Izenberg
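An illustrative Python sketch of way allocation by operating mode, with three assumed way ranges and LRU victim selection restricted to the permitted range.

```python
# Sketch: choose which cache ways a packet's data may occupy based on
# its type, and evict only within the permitted range.
WAY_RANGES = {
    "context":    list(range(0, 4)),    # first range: context data only
    "descriptor": list(range(4, 8)),    # second range: descriptor data only
    "either":     list(range(8, 12)),   # third range: shared by both types
}

def allowed_ways(packet_type: str) -> set:
    dedicated = WAY_RANGES.get(packet_type, WAY_RANGES["either"])
    return set(dedicated) | set(WAY_RANGES["either"])

def pick_victim_way(packet_type: str, lru_order: list) -> int:
    # Evict the least recently used way the operating mode permits.
    allowed = allowed_ways(packet_type)
    return next(way for way in lru_order if way in allowed)

print(pick_victim_way("context", lru_order=[5, 9, 1, 0]))  # -> 9
```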
  • Patent number: 10838644
    Abstract: A method for augmenting a computing device is disclosed, comprising: providing a data storage arrangement whose memory is partitioned into a first section and a subdividable second section; monitoring the computing device to determine when the first section of the memory requires augmentation; subdividing the second section of the memory into a transferable section and a remainder section; and augmenting the first section of the memory with the transferable section.
    Type: Grant
    Filed: December 6, 2017
    Date of Patent: November 17, 2020
    Assignee: Western Digital Technologies, Inc.
    Inventors: Karin Inbar, Avichay Hodes
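A small Python sketch of the monitor-and-augment loop, with megabyte sizes and a low-water mark invented for the example.

```python
# Sketch: when the first section runs low, carve a transferable piece
# out of the second section and add it to the first.
class AugmentableMemory:
    def __init__(self, first_mb: int, second_mb: int, low_water_mb: int = 2):
        self.first_mb = first_mb
        self.second_mb = second_mb
        self.low_water_mb = low_water_mb

    def monitor_and_augment(self, first_free_mb: int, transfer_mb: int) -> bool:
        if first_free_mb >= self.low_water_mb or transfer_mb > self.second_mb:
            return False
        # Subdivide: the transferable part moves, the remainder stays behind.
        self.second_mb -= transfer_mb
        self.first_mb += transfer_mb
        return True
```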
  • Patent number: 10838895
    Abstract: A processing method of data redundancy is utilized for Non-Volatile Memory Express (NVMe) to transfer data via a fabric channel from a host terminal to a Remote-direct-memory-access-enabled Network Interface Controller (RNIC) and a Just a Bunch of Flash (JBOF). The processing method comprises virtualizing a Field Programmable Gate Array (FPGA) of the RNIC into a Dynamic Random Access Memory (DRAM) and storing the data to the DRAM; replicating or splitting the data into a plurality of data packets and reporting a plurality of virtual memory addresses corresponding to the plurality of data packets to the RNIC by the FPGA; and reading and transmitting the plurality of data packets to a plurality of corresponding NVMe controllers according to the plurality of virtual memory addresses; wherein the FPGA reports to the RNIC that a memory size of the FPGA is larger than that of the DRAM.
    Type: Grant
    Filed: April 11, 2018
    Date of Patent: November 17, 2020
    Assignee: Wiwynn Corporation
    Inventors: Pei-Ling Yu, Chia-Liang Hsu, Bing-Kun Syu
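A loose Python sketch of splitting data into packets, assigning virtual addresses, and fanning packets out to NVMe controllers; it ignores the FPGA/RNIC specifics and all names are hypothetical.

```python
# Sketch: split incoming data into packets, assign each a virtual memory
# address, and distribute the packets across NVMe controllers.
def split_into_packets(data: bytes, packet_size: int):
    return [data[i:i + packet_size] for i in range(0, len(data), packet_size)]

def build_virtual_map(packets, base_addr: int, packet_size: int):
    # Virtual addresses reported upward; they may exceed real DRAM size.
    return {base_addr + i * packet_size: pkt for i, pkt in enumerate(packets)}

def dispatch(virtual_map, controllers):
    # Round-robin each (virtual address, packet) pair to a controller.
    for i, (vaddr, pkt) in enumerate(sorted(virtual_map.items())):
        controllers[i % len(controllers)].append((vaddr, pkt))

controllers = [[], []]
dispatch(build_virtual_map(split_into_packets(b"x" * 10, 4), 0x1000, 4), controllers)
print(controllers)
```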
  • Patent number: 10824337
    Abstract: A data storage system according to certain aspects manages and administers the sharing of storage resources among clients in the shared storage pool. The shared storage pool according to certain aspects can provide readily available remote storage to clients in the pool. A share list for each client may be used to determine where data is stored within the storage pool. The share list may include clients that are known to each client, and therefore, a user may feel more at ease storing the data on the clients in the storage pool. Management and administration of the storage pool and backup and restore jobs can be performed by an entity other than the client, making backup and restore more streamlined and simple for the clients in the pool.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: November 3, 2020
    Assignee: Commvault Systems, Inc.
    Inventor: Sanjay Harakhchand Kripalani
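A hedged Python sketch of choosing backup targets from a client's share list; the hash-based ranking is an invented policy used only to make the selection deterministic.

```python
# Sketch: place a client's backup data only on peers from that client's
# share list, chosen deterministically so restores find the same peers.
import hashlib

def pick_targets(client: str, share_list: list, copies: int = 2) -> list:
    ranked = sorted(share_list,
                    key=lambda peer: hashlib.sha1(f"{client}:{peer}".encode()).hexdigest())
    return ranked[:copies]

print(pick_targets("laptop-a", ["desktop-b", "nas-c", "laptop-d"]))
```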
  • Patent number: 10824352
    Abstract: A controller sets an error count margin for each of multiple units of a non-volatile memory and detects whether the error count margin of any of the multiple units has been exceeded. In response to detecting that the error count margin of a memory unit is exceeded, the controller determines whether calibration of the memory unit would improve a bit error rate of the memory unit sufficiently to warrant calibration. If so, the controller performs calibration of the memory unit. In some implementations, the controller refrains from performing the calibration in response to determining that calibration of the memory unit would not improve the bit error rate of the memory unit sufficiently to warrant calibration, but instead relocates a desired part or all of the valid data within the memory unit and, if all valid data has been relocated from it, erases the memory unit.
    Type: Grant
    Filed: December 6, 2017
    Date of Patent: November 3, 2020
    Assignee: International Business Machines Corporation
    Inventors: Nikolas Ioannou, Nikolaos Papandreou, Roman A. Pletka, Sasa Tomic
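An illustrative Python sketch of the calibrate-or-relocate decision; the 10% bit-error-rate gain threshold is an assumption, not a value from the patent.

```python
# Sketch: calibrate a memory unit only if the predicted bit-error-rate
# gain justifies it; otherwise relocate valid data and erase the unit.
def handle_margin_exceeded(unit: dict, predicted_ber_gain: float,
                           gain_threshold: float = 0.10) -> str:
    if predicted_ber_gain >= gain_threshold:
        unit["calibrated"] = True                 # worth recalibrating
        return "calibrated"
    # Not worth it: move valid data off, then erase the unit.
    relocated = unit.pop("valid_data", [])
    unit["erased"] = True
    return f"relocated {len(relocated)} pages and erased"

print(handle_margin_exceeded({"valid_data": ["p1", "p2"]}, predicted_ber_gain=0.03))
```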
  • Patent number: 10795590
    Abstract: A solid state drive (SSD) employing a redundant array of independent disks (RAID) scheme includes a flash memory chip, erasable blocks in the flash memory chip, and a flash controller. The erasable blocks are configured to store flash memory pages. The flash controller is operably coupled to the flash memory chip. The flash controller is also configured to organize certain of the flash memory pages into a RAID line group and to write RAID line group membership information to each of the flash memory pages in the RAID line group.
    Type: Grant
    Filed: January 9, 2019
    Date of Patent: October 6, 2020
    Assignee: Futurewei Technologies, Inc.
    Inventor: Yiren Huang
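A minimal Python sketch of stamping each flash page in a RAID line group with the group's membership information; the metadata layout is invented.

```python
# Sketch: group flash pages into a RAID line group and write the group's
# membership information alongside every page in the group.
def write_raid_line_group(pages: list, group_id: int) -> list:
    membership = {
        "group_id": group_id,
        "members": [(blk, pg) for blk, pg, _ in pages],  # (block, page) addresses
    }
    written = []
    for block, page, data in pages:
        # Each page carries the full membership list, so any surviving
        # page can describe the whole group during recovery.
        written.append({"block": block, "page": page,
                        "data": data, "membership": membership})
    return written

print(write_raid_line_group([(0, 0, b"a"), (1, 0, b"b"), (2, 0, b"parity")], group_id=7)[0])
```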
  • Patent number: 10788996
    Abstract: The present invention utilizes computation resources effectively by allocating them, in accordance with conditions, either to a process that shares a computation resource with another process or to a process that occupies a computation resource. Execution control causes a processor core allocated to a storage control process to be occupied by that process, causes a processor core allocated to an application process to be shared with another process, and changes the number of processor cores allocated to the storage control process on the basis of I/O information indicating a state of an I/O.
    Type: Grant
    Filed: March 25, 2015
    Date of Patent: September 29, 2020
    Assignee: Hitachi, Ltd.
    Inventors: Masakuni Agetsuma, Hiroaki Akutsu, Yusuke Nonaka
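A rough Python sketch of resizing the set of dedicated storage-control cores from I/O load; the IOPS thresholds are invented for the example.

```python
# Sketch: dedicate cores to the storage control process and resize that
# set as the observed I/O load changes.
def rebalance_cores(total_cores: int, storage_cores: int, iops: int) -> dict:
    if iops > 100_000 and storage_cores < total_cores - 1:
        storage_cores += 1          # storage path is busy: dedicate one more core
    elif iops < 20_000 and storage_cores > 1:
        storage_cores -= 1          # quiet: return a core to the shared pool
    return {
        "dedicated_storage_cores": storage_cores,       # occupied, not shared
        "shared_application_cores": total_cores - storage_cores,
    }

print(rebalance_cores(total_cores=8, storage_cores=2, iops=150_000))
```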
  • Patent number: 10768827
    Abstract: Methods, systems, apparatuses, and computer program products are provided that enable storage performance to be customized and throttled at the drive level. For example, performance metric(s) may be specified for virtual drive(s) assigned to a virtual machine. Physical storage disk(s), which are mapped to the drive(s), may be allocated based on the specified performance metric(s). By providing a means to customize and throttle on a per-drive basis, each function of the virtual machine can be provided a dedicated channel for input/output transactions, thereby ensuring that no function is starved of resources.
    Type: Grant
    Filed: April 7, 2017
    Date of Patent: September 8, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Harshad Nadkarni
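A hedged Python sketch of allocating physical disks for a virtual drive from per-drive performance metrics; the per-disk IOPS and bandwidth figures are assumptions.

```python
# Sketch: allocate enough physical disks to satisfy the virtual drive's
# specified metrics, and record the per-drive throttle caps.
import math

def allocate_disks(requested_iops: int, requested_mbps: int,
                   disk_iops: int = 500, disk_mbps: int = 100) -> dict:
    disks = max(math.ceil(requested_iops / disk_iops),
                math.ceil(requested_mbps / disk_mbps), 1)
    return {
        "physical_disks": disks,
        "iops_cap": requested_iops,   # throttle enforced per virtual drive
        "mbps_cap": requested_mbps,
    }

print(allocate_disks(requested_iops=2400, requested_mbps=300))
# {'physical_disks': 5, 'iops_cap': 2400, 'mbps_cap': 300}
```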
  • Patent number: 10732850
    Abstract: A memory card is attached to a host device, and includes a data control circuit which transfers data with respect to the host device in synchronism with a rise edge and a fall edge of a clock signal.
    Type: Grant
    Filed: December 26, 2018
    Date of Patent: August 4, 2020
    Assignee: Toshiba Memory Corporation
    Inventor: Takafumi Ito
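A toy Python model of double-data-rate transfer, moving one item on the rising edge and one on the falling edge of each clock cycle; it is purely illustrative of the timing idea.

```python
# Sketch: transfer one data item per clock edge (rising and falling).
def double_data_rate_transfer(data: list, cycles: int) -> list:
    transferred = []
    it = iter(data)
    for _ in range(cycles):
        for edge in ("rise", "fall"):
            item = next(it, None)
            if item is None:
                return transferred
            transferred.append((edge, item))
    return transferred

print(double_data_rate_transfer([0xA, 0xB, 0xC, 0xD], cycles=2))
# [('rise', 10), ('fall', 11), ('rise', 12), ('fall', 13)]
```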
  • Patent number: 10691363
    Abstract: A computing system includes a parent partition, child partitions, a hypervisor, shared memories each associated with one of the child partitions, and trigger pages each associated with one of the child partitions. The hypervisor receives a system event signal from one of the child partitions and, in response to receiving the system event signal, accesses the trigger page associated with that child partition. The hypervisor determines whether the trigger page indicates that data is available to be read from the shared memory associated with the child partition. The hypervisor can send an indication to either the parent partition or the child partitions that data is available to be read from the shared memory associated with the child partition if the hypervisor determines that the trigger page indicates that data is available to be read from the shared memory associated with the child partition.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: June 23, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Thomas Fahrig
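A simplified Python sketch of the trigger-page check: on a system event, the hypervisor notifies a reader only if the child partition's trigger page indicates pending data; structures and names are assumed.

```python
# Sketch: consult the child partition's trigger page on a system event
# and send a notification only when data is pending in shared memory.
def on_system_event(child_id: str, trigger_pages: dict, shared_memory: dict,
                    notify) -> bool:
    if not trigger_pages.get(child_id, False):
        return False                       # nothing to read; no notification
    notify(child_id, shared_memory[child_id])
    trigger_pages[child_id] = False        # consume the indication
    return True

pages = {"child-1": True}
mem = {"child-1": b"payload"}
on_system_event("child-1", pages, mem,
                lambda cid, data: print(f"{cid}: {len(data)} bytes ready"))
```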
  • Patent number: 10671537
    Abstract: Reducing translation latency within a memory management unit (MMU) using external caching structures including requesting, by the MMU on a node, page table entry (PTE) data and coherent ownership of the PTE data from a page table in memory; receiving, by the MMU, the PTE data, a source flag, and an indication that the MMU has coherent ownership of the PTE data, wherein the source flag identifies a source location of the PTE data; performing a lateral cast out to a local high-level cache on the node in response to determining that the source flag indicates that the source location of the PTE data is external to the node; and directing at least one subsequent request for the PTE data to the local high-level cache.
    Type: Grant
    Filed: August 22, 2017
    Date of Patent: June 2, 2020
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Jody B. Joyner, Ronald N. Kalla, Michael S. Siegel, Jeffrey A. Stuecheli, Charles D. Wait, Frederick J. Ziegler
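An illustrative Python sketch of the cast-out decision, keeping a copy of an off-node PTE in the node-local high-level cache so later requests stay local; flag values and cache structures are assumptions.

```python
# Sketch: after fetching a PTE with coherent ownership, cast it out to
# the node-local high-level cache when the source flag says the data
# came from off-node, then serve later lookups locally.
def install_pte(pte, source_flag: str, local_l3: dict, mmu_tlb: dict, vaddr: int):
    mmu_tlb[vaddr] = pte
    if source_flag == "off-node":
        # Lateral cast out: keep a copy on-node for subsequent requests.
        local_l3[vaddr] = pte

def lookup_pte(vaddr: int, mmu_tlb: dict, local_l3: dict):
    # Subsequent requests check the TLB, then the local high-level cache.
    return mmu_tlb.get(vaddr) or local_l3.get(vaddr)

tlb, l3 = {}, {}
install_pte({"frame": 0x1234}, "off-node", l3, tlb, vaddr=0x7000)
print(lookup_pte(0x7000, {}, l3))   # found via the local high-level cache
```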