Patents Examined by Prasith Thammavong
  • Patent number: 10938418
    Abstract: A computing device includes an interface configured to interface and communicate with a dispersed storage network (DSN), a memory that stores operational instructions, and a processing module operably coupled to the interface and memory such that the processing module, when operable within the computing device based on the operational instructions, is configured to perform various operations. The computing device detects a failed memory device (e.g., of a storage unit (SU)) that stores at least one encoded data slice (EDS). The computing device then determines a DSN address range associated with at least some EDSs associated with a data object stored within the failed memory device and transmits the DSN address range to another computing device within the DSN to instruct restriction within the DSN of memory access requests for EDSs associated with the data object that is stored within the failed memory device.
    Type: Grant
    Filed: November 28, 2016
    Date of Patent: March 2, 2021
    Assignee: PURE STORAGE, INC.
    Inventors: Dustin M. Hendrickson, Manish Motwani
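    Illustrative sketch (not the patented implementation): a minimal Python model of computing the DSN address range covered by slices on a failed memory device and registering that range for restriction; the names (MemoryDevice, RestrictionRegistry, restrict_range) are assumptions.
      from dataclasses import dataclass

      @dataclass
      class EncodedDataSlice:
          dsn_address: int          # address of this slice in the DSN namespace
          object_id: str            # data object the slice belongs to

      @dataclass
      class MemoryDevice:
          device_id: str
          healthy: bool
          slices: list              # EncodedDataSlice entries stored on this device

      def affected_address_range(device: MemoryDevice) -> tuple:
          """Return the (low, high) DSN address range covering slices on a failed device."""
          addresses = [s.dsn_address for s in device.slices]
          return (min(addresses), max(addresses))

      class RestrictionRegistry:
          """Stands in for the peer computing device that enforces the restriction."""
          def __init__(self):
              self.restricted = []

          def restrict_range(self, low: int, high: int):
              self.restricted.append((low, high))

          def is_restricted(self, dsn_address: int) -> bool:
              return any(low <= dsn_address <= high for low, high in self.restricted)

      # Usage: a device fails, its address range is computed and sent to the registry,
      # after which access requests targeting that range can be rejected or redirected.
      failed = MemoryDevice("su-3/dev-7", healthy=False,
                            slices=[EncodedDataSlice(1000, "objA"), EncodedDataSlice(1042, "objA")])
      registry = RestrictionRegistry()
      registry.restrict_range(*affected_address_range(failed))
      print(registry.is_restricted(1010))   # True: requests here would be restricted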
  • Patent number: 10936220
    Abstract: An apparatus comprises a host device configured to communicate over a network with a storage system. The host device comprises a plurality of nodes each comprising a plurality of processing devices and at least one communication adapter. The host device comprises a multi-path input-output (MPIO) driver that is configured to obtain an input-output (IO) operation that targets a given logical volume. The MPIO driver identifies a source node and a plurality of paths between the source node and the given logical volume. The MPIO driver determines a load factor and a distance for each identified path. The MPIO driver determines a weight associated with each identified path based at least in part on the determined load factor and distance and selects a target path based at least in part on the determined weight. The MPIO driver delivers the obtained IO operation to the given logical volume via the selected target path.
    Type: Grant
    Filed: May 2, 2019
    Date of Patent: March 2, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Sanjib Mallick, Kundan Kumar, Vinay G. Rao
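    Illustrative sketch of weight-based path selection as described in the abstract above; the specific weight formula (load factor plus half the distance) and the path names are assumptions, not the claimed formula:
      from dataclasses import dataclass

      @dataclass
      class Path:
          name: str
          load_factor: float   # e.g. outstanding IOs normalized to queue depth
          distance: float      # e.g. NUMA hops between source node and adapter

      def path_weight(path: Path) -> float:
          """Lower is better: combine load and distance into a single cost."""
          return path.load_factor + 0.5 * path.distance

      def select_target_path(paths):
          """Pick the path with the smallest combined cost for the next IO."""
          return min(paths, key=path_weight)

      paths = [
          Path("node0-hba0-lun5", load_factor=0.8, distance=0),
          Path("node1-hba1-lun5", load_factor=0.2, distance=2),
      ]
      print(select_target_path(paths).name)   # node1-hba1-lun5 under this weighting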
  • Patent number: 10929054
    Abstract: Methods and systems for performing memory garbage collection include determining a size of N double-ended queues (“deques”) associated with N respective garbage collection threads, where N is three or greater. A task is popped from a deque out of the N deques having a largest size. Garbage collection is performed on the popped task.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: February 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Michihiro Horie, Kazunori Ogata, Hiroshi Horii
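    A minimal Python sketch of popping a garbage-collection task from the largest of N deques (N >= 3); the task names and deque contents are illustrative:
      from collections import deque

      def pop_from_largest(deques):
          """Pop a task from whichever deque currently holds the most tasks."""
          largest = max(deques, key=len)
          return largest.pop() if largest else None

      # Three GC threads, each with its own double-ended queue of pending tasks.
      gc_deques = [deque(["scan-roots"]), deque(["mark-a", "mark-b", "mark-c"]), deque(["sweep-x"])]
      task = pop_from_largest(gc_deques)
      print(task)   # 'mark-c' -- taken from the deque of size 3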
  • Patent number: 10929044
    Abstract: An information processing apparatus 100 includes a selection unit 110 configured to select a storage device to be moved from a first place associated with a first storage device to a second place associated with a second storage device that is connected with the first storage device over a network, based on travel information 120 including the origin and the destination of the storage device. The selected storage device serves as a third storage device used to store transport target data held in the first storage device and to transport that data to the second storage device.
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: February 23, 2021
    Assignee: NEC CORPORATION
    Inventor: Jun Yokoyama
  • Patent number: 10929018
    Abstract: A mega cluster storage system includes clusters of multiple storage modules. Each module is able to access a portion of the data within the mega cluster and serves as a proxy in order for another storage module to access the remaining portion of the data. A cluster is assigned to a unique cluster volume and all the data within the cluster volume is accessible by all of the modules within the cluster. Each host connection to the mega cluster is associated with a particular cluster volume. A module that receives a host I/O request determines whether the I/O request should be satisfied by a module within its own cluster or be satisfied by a module within a different cluster. The module may forward the I/O request to a module within a different cluster as indicated by a distribution data structure that is allocated and stored within each storage module.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: February 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Zah Barzik, Dan Ben-Yaacov, Mor Griv, Maxim Kalaev, Rivka M. Matosevich
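    Illustrative sketch of forwarding a host I/O to the owning cluster using a replicated distribution table; the table layout and names (DISTRIBUTION_TABLE, Module.handle_io) are assumptions:
      # Maps a cluster-volume id to the cluster that owns it; replicated to every module.
      DISTRIBUTION_TABLE = {"vol-blue": "cluster-1", "vol-green": "cluster-2"}

      class Module:
          def __init__(self, module_id, cluster_id):
              self.module_id = module_id
              self.cluster_id = cluster_id

          def handle_io(self, volume_id, io):
              owner_cluster = DISTRIBUTION_TABLE[volume_id]
              if owner_cluster == self.cluster_id:
                  return f"{self.module_id}: served {io} on {volume_id} locally"
              # Otherwise act as a proxy and forward to the owning cluster.
              return f"{self.module_id}: forwarded {io} on {volume_id} to {owner_cluster}"

      m = Module("module-7", "cluster-1")
      print(m.handle_io("vol-blue", "read 4K"))    # served locally
      print(m.handle_io("vol-green", "write 8K"))  # forwarded to cluster-2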
  • Patent number: 10929297
    Abstract: Providing control over processing of a prefetch request in response to conditions in a receiver of the prefetch request and to conditions in a source of the prefetch request. A processor generates a prefetch request and a tag that dictates how the prefetch request is processed. A processor sends the prefetch request and the tag to a second processor. A processor generates a conflict indication based on whether a concurrent processing of the prefetch request and an atomic transaction by the second processor would generate a conflict with a memory access that is associated with the atomic transaction. Based on an analysis of the conflict indication and the tag, a processor processes (i) either the prefetch request or the atomic transaction, or (ii) both the prefetch request and the atomic transaction.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: February 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Michael Karl Gschwind, Valentina Salapura, Chung-Lung K. Shum
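    Illustrative sketch of how a tag and a conflict indication could jointly decide whether the prefetch request, the atomic transaction, or both are processed; the tag values and decision table are assumptions, not the patented protocol:
      DROP_ON_CONFLICT = "drop"        # source asks receiver to drop the prefetch on conflict
      ABORT_TXN_ON_CONFLICT = "abort"  # source asks receiver to favor the prefetch instead

      def handle_prefetch(tag, conflicts_with_atomic):
          """Return which of the two pieces of work the receiving processor performs."""
          if not conflicts_with_atomic:
              return ("prefetch", "atomic")          # no conflict: do both
          if tag == DROP_ON_CONFLICT:
              return ("atomic",)                     # keep the transaction, skip the prefetch
          if tag == ABORT_TXN_ON_CONFLICT:
              return ("prefetch",)                   # service the prefetch, abort the transaction
          return ("atomic",)                         # conservative default

      print(handle_prefetch(DROP_ON_CONFLICT, conflicts_with_atomic=True))        # ('atomic',)
      print(handle_prefetch(ABORT_TXN_ON_CONFLICT, conflicts_with_atomic=False))  # both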
  • Patent number: 10929288
    Abstract: Garbage collection is performed for a virtualized storage system whose virtual address space is addressed in extents. Valid data in source extents is copied via a cache into destination extents. Once all valid data in a source extent is copied into one or more destination extents, the source extent may be reused. A source extent is released for reuse only after the one or more destination extents that received the valid data copied from the source extent are determined to be full, and the valid data copied from the source extent to the destination extent via the cache is flushed out of the cache.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: February 23, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Roderick Guy Charles Moore, Miles Mulholland, William John Passingham, Richard Alan Bordoli
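    Illustrative sketch of the release rule described above: a source extent becomes reusable only once its destination extents are full and the copied data has been flushed from the cache. All names are assumptions:
      class Extent:
          def __init__(self, extent_id, capacity):
              self.extent_id = extent_id
              self.capacity = capacity
              self.blocks = []

          def is_full(self):
              return len(self.blocks) >= self.capacity

      class Cache:
          def __init__(self):
              self.dirty = set()
          def add(self, block):
              self.dirty.add(block)
          def flush(self):
              self.dirty.clear()
          def flushed(self):
              return not self.dirty

      def collect(source_valid_blocks, destinations, cache):
          """Copy valid blocks via the cache; report whether the source may be reused."""
          used = set()
          for block in source_valid_blocks:
              dest = next(d for d in destinations if not d.is_full())
              cache.add(block)                 # data lands in the cache first
              dest.blocks.append(block)
              used.add(dest.extent_id)
          # Source is released only when its destinations are full and the cache is clean.
          dests_full = all(d.is_full() for d in destinations if d.extent_id in used)
          return dests_full and cache.flushed()

      cache = Cache()
      dests = [Extent("dst-1", capacity=2)]
      print(collect(["b1", "b2"], dests, cache))   # False: destination full, cache still dirty
      cache.flush()
      print(all(d.is_full() for d in dests) and cache.flushed())   # True: source reusable now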
  • Patent number: 10908997
    Abstract: A technique is directed to storing data on a plurality of storage devices of a data storage array. The technique involves, on each storage device of the plurality of storage devices, providing large disk extents and small disk extents for allocation to RAID extents. The technique further involves forming, from the large disk extents, a user-data RAID extent to store user data for the data storage array. The technique further involves forming, from the small disk extents, an internal-metadata RAID extent to store internal metadata for the data storage array. In some arrangements, spare space is reserved on one or more storage devices between large and small disk extents.
    Type: Grant
    Filed: July 30, 2019
    Date of Patent: February 2, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Vamsi K. Vankamamidi, Shuyu Lee, Ronald D. Proulx
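    Illustrative sketch of carving one drive into large disk extents for user-data RAID extents, small disk extents for internal-metadata RAID extents, and reserved spare space in between; all sizes are assumed for the example:
      GiB = 1024 ** 3

      def carve_drive(drive_bytes, large_extent=4 * GiB, small_extent=128 * 1024 * 1024,
                      spare=8 * GiB, small_region=16 * GiB):
          """Return counts of large and small disk extents for one drive."""
          large_region = drive_bytes - spare - small_region
          return {
              "large_extents": large_region // large_extent,   # feed user-data RAID extents
              "small_extents": small_region // small_extent,   # feed internal-metadata RAID extents
              "spare_bytes": spare,                            # reserved between the two regions
          }

      print(carve_drive(1000 * GiB))
      # {'large_extents': 244, 'small_extents': 128, 'spare_bytes': 8589934592}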
  • Patent number: 10910025
    Abstract: Embodiments of the present invention disclose a method, computer program product, and system for utilizing a block storage device as Dynamic Random-Access Memory (DRAM) space, wherein a computer includes at least one DRAM module and at least one block storage device interfaced to the computer using a double data rate (DDR) interface. During boot-up, the computer configures the DRAM and block storage devices of the computer for utilization as DRAM or block storage. When the computer determines that more DRAM space is required, it transforms a block storage device into DRAM space. Once the computer determines that the transformed block storage device is no longer needed as DRAM space, it transforms the device back to block storage space.
    Type: Grant
    Filed: December 20, 2012
    Date of Patent: February 2, 2021
    Assignee: Lenovo Enterprise Solutions (Singapore) Pte. Ltd.
    Inventors: Gary D. Cudak, Christopher J. Hardee, Adam Roberts
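    Illustrative sketch of toggling a DDR-attached block storage device between block mode and DRAM-extension mode under memory pressure; the watermarks and device model are assumptions:
      class DDRBlockDevice:
          def __init__(self, name):
              self.name = name
              self.mode = "block"

          def to_dram(self):
              self.mode = "dram"      # device capacity now counted as system memory

          def to_block(self):
              self.mode = "block"     # device returned to block storage duty

      def rebalance(free_dram_ratio, device, low_water=0.10, high_water=0.50):
          """Borrow the device as DRAM under memory pressure; give it back when pressure eases."""
          if free_dram_ratio < low_water and device.mode == "block":
              device.to_dram()
          elif free_dram_ratio > high_water and device.mode == "dram":
              device.to_block()
          return device.mode

      dev = DDRBlockDevice("ddr-nand-0")
      print(rebalance(0.05, dev))   # 'dram'  -- memory pressure, borrow the device
      print(rebalance(0.70, dev))   # 'block' -- pressure gone, return it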
  • Patent number: 10901628
    Abstract: The present disclosure relates to a method and system for operating storage drives to increase the lifecycle of the storage drives. A drive manager receives a plurality of parameters of a plurality of storage drives and determines the operational state of one or more storage drives from the plurality of storage drives as unhealthy. Further, the drive manager identifies an application that frequently retrieves data from the one or more storage drives and determines one or more memory locations in the one or more storage drives from which the data is retrieved frequently. Thereafter, the data present in the one or more memory locations is stored in a temporary storage and is provided to the application during a future data retrieval cycle. Thus, the application need not retrieve data from the one or more storage drives, and the lifecycle of the one or more storage drives can be increased.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: January 26, 2021
    Assignee: Wipro Limited
    Inventors: Rishav Das, Maulik Yagnik
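    Illustrative sketch of a drive manager that stages frequently read locations of an unhealthy drive into temporary storage so later reads bypass the drive; the hot-read threshold and names are assumptions:
      from collections import Counter

      class DriveManager:
          def __init__(self, hot_threshold=3):
              self.read_counts = Counter()      # (drive, location) -> read count
              self.staging = {}                 # (drive, location) -> cached data
              self.hot_threshold = hot_threshold

          def stage_hot_data(self, drive, read_block):
              """For an unhealthy drive, copy its frequently read locations into staging."""
              for (d, loc), count in self.read_counts.items():
                  if d == drive and count >= self.hot_threshold:
                      self.staging[(d, loc)] = read_block(d, loc)

          def read(self, drive, location, read_block):
              self.read_counts[(drive, location)] += 1
              if (drive, location) in self.staging:
                  return self.staging[(drive, location)]   # served without touching the drive
              return read_block(drive, location)

      fake_media = {("sda", 42): b"hot-block"}
      read_block = lambda d, loc: fake_media[(d, loc)]

      mgr = DriveManager()
      for _ in range(3):
          mgr.read("sda", 42, read_block)
      mgr.stage_hot_data("sda", read_block)     # drive reported unhealthy: stage its hot data
      print(mgr.read("sda", 42, read_block))    # b'hot-block', now served from staging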
  • Patent number: 10895989
    Abstract: Provided is a multi-node storage system including a plurality of nodes that provide a volume to a computer as a logical storage area, each node including a controller that processes I/O requests from the computer; a plurality of NVMe drives PCI-connected to any one of the controllers of the nodes; and a switch that connects the controllers of the plurality of nodes to each other. Each controller includes a plurality of processors that process the I/O requests from the computer and a memory. For the plurality of NVMe drives, the memory holds virtual queues, equal in number to the processors of the plurality of controllers constituting the multi-node storage system, and real queues, smaller in number than the virtual queues, that store commands in any one of the plurality of NVMe drives.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: January 19, 2021
    Assignee: HITACHI, LTD.
    Inventors: Hajime Ikeda, Atsushi Sato, Takafumi Maruyama
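    Illustrative sketch of per-processor virtual queues funneled into a smaller number of real NVMe queues; the queue counts and round-robin draining are assumptions, not the patented mechanism:
      from collections import deque

      class DriveQueues:
          def __init__(self, num_processors, num_real_queues):
              assert num_real_queues < num_processors
              # One virtual queue per processor in the whole multi-node system.
              self.virtual = [deque() for _ in range(num_processors)]
              # Fewer real hardware queues actually exposed by the NVMe drive.
              self.real = [deque() for _ in range(num_real_queues)]

          def submit(self, processor_id, command):
              self.virtual[processor_id].append(command)

          def drain(self):
              """Move commands from virtual queues onto real queues (simple round robin)."""
              for i, vq in enumerate(self.virtual):
                  while vq:
                      self.real[i % len(self.real)].append(vq.popleft())

      q = DriveQueues(num_processors=8, num_real_queues=2)
      q.submit(5, "READ lba=100")
      q.submit(6, "WRITE lba=200")
      q.drain()
      print([list(r) for r in q.real])   # commands distributed over the two real queues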
  • Patent number: 10884739
    Abstract: Systems and methods for load canceling in a processor that is connected to an external interconnect fabric are disclosed. As part of a method for load canceling in a processor that is connected to an external bus, and responsive to a flush request and a corresponding cancellation of pending speculative loads from a load queue, the type of one or more of the pending speculative loads that are positioned in the instruction pipeline external to the processor is converted from load to prefetch. Data corresponding to one or more of the pending speculative loads that are positioned in the instruction pipeline external to the processor is accessed and returned to the cache as prefetch data. The prefetch data is retired in a cache location of the processor.
    Type: Grant
    Filed: May 24, 2018
    Date of Patent: January 5, 2021
    Assignee: INTEL CORPORATION
    Inventors: Karthikeyan Avudaiyappan, Mohammad Abdallah
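    Illustrative sketch, in Python rather than hardware, of downgrading flushed speculative loads that are already outside the processor from load to prefetch so the returning data only fills the cache; all structures are assumptions:
      LOAD, PREFETCH = "load", "prefetch"

      class PendingAccess:
          def __init__(self, address, kind=LOAD):
              self.address = address
              self.kind = kind

      def flush_speculative_loads(load_queue, external_pending):
          """Drop loads still in the local queue; downgrade those already on the bus."""
          load_queue.clear()                       # cancelled outright, no data expected
          for access in external_pending:
              if access.kind == LOAD:
                  access.kind = PREFETCH           # data will be retired into the cache only

      def on_data_return(access, cache):
          if access.kind == PREFETCH:
              cache[access.address] = "prefetched line"   # no register writeback

      load_queue = [PendingAccess(0x1000)]
      external = [PendingAccess(0x2000)]           # already issued to the external fabric
      cache = {}
      flush_speculative_loads(load_queue, external)
      on_data_return(external[0], cache)
      print(load_queue, external[0].kind, cache)   # [] prefetch {8192: 'prefetched line'}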
  • Patent number: 10884937
    Abstract: Processors configured by aspects of the present invention optimize reference cache maintenance in a serialization system by serializing a plurality of objects into a buffer and determining whether any of the objects are repeated within the buffered serialized plurality. The configured processors insert an object repetition data signal within the serialized plurality of objects that indicates to a receiver whether or not any objects are determined to be repeated within the buffered serialized plurality of objects, and send the serialized plurality of objects with the inserted object repetition data signal as a single chunk to a receiver, wherein the inserted object repetition data signal conveys reference cache management instructions to the receiver.
    Type: Grant
    Filed: May 22, 2019
    Date of Patent: January 5, 2021
    Assignee: International Business Machines Corporation
    Inventor: Sathiskumar Palaniappan
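    Illustrative sketch of prepending a repetition flag to a serialized chunk so the receiver knows whether reference-cache maintenance is needed; the JSON wire format and names are assumptions, not the serializer used by the patent:
      import json

      def serialize_chunk(objects):
          """Serialize objects and prepend a repetition flag for the receiver."""
          seen, has_repeats = set(), False
          for obj in objects:
              key = json.dumps(obj, sort_keys=True)
              if key in seen:
                  has_repeats = True
              seen.add(key)
          return json.dumps({"has_repeats": has_repeats, "objects": objects})

      def receive_chunk(chunk, reference_cache):
          payload = json.loads(chunk)
          if not payload["has_repeats"]:
              reference_cache.clear()      # safe to skip cache maintenance for this chunk
          return payload["objects"]

      cache = {"backref-3": {"id": 1}}
      chunk = serialize_chunk([{"id": 1}, {"id": 2}, {"id": 1}])
      objs = receive_chunk(chunk, cache)
      print(json.loads(chunk)["has_repeats"], len(objs), len(cache))   # True 3 1 (cache kept)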
  • Patent number: 10877691
    Abstract: An embodiment of a semiconductor package apparatus may include technology to determine a stream classification for an access request to a persistent storage media, and assign the access request to a stream based on the stream classification. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: December 29, 2017
    Date of Patent: December 29, 2020
    Assignee: Intel Corporation
    Inventors: Mariusz Barczak, Dhruvil Shah, Kapil Karkra, Andrzej Jakowski, Piotr Wysocki
  • Patent number: 10877665
    Abstract: A method includes, in one non-limiting embodiment, receiving a command originating from an initiator at a controller associated with a non-volatile mass memory coupled with a host device, the command being a command to write data that is currently resident in a memory of the host device to the non-volatile mass memory; moving the data that is currently resident in the memory of the host device from an original location to a portion of the memory allocated for use at least by the non-volatile mass memory; and acknowledging to the initiator that the command to write the data to the non-volatile mass memory has been executed. An apparatus configured to perform the method is also described.
    Type: Grant
    Filed: August 12, 2016
    Date of Patent: December 29, 2020
    Assignee: Memory Technologies LLC
    Inventors: Kimmo J. Mylly, Jani J. Klint, Jani Hyvonen, Tapio Hill, Jukka-Pekka Vihmalo, Matti K. Floman
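    Illustrative sketch of acknowledging a write once the data has moved into the host-memory portion allocated to the mass memory, before any flash programming completes; the buffer and controller names are assumptions:
      class HostMemory:
          def __init__(self):
              self.general = {}        # ordinary host RAM
              self.staging = {}        # portion allocated for use by the mass memory

      class MassMemoryController:
          def __init__(self, host_memory):
              self.host_memory = host_memory

          def write(self, initiator, buffer_id):
              # Move the data out of its original location into the staging portion.
              data = self.host_memory.general.pop(buffer_id)
              self.host_memory.staging[buffer_id] = data
              # The actual flash programming can complete later, in the background.
              return f"ack to {initiator}: write of {buffer_id} executed"

      mem = HostMemory()
      mem.general["buf-17"] = b"payload"
      ctrl = MassMemoryController(mem)
      print(ctrl.write("app-0", "buf-17"))      # acknowledged before data reaches flash
      print("buf-17" in mem.staging)            # True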
  • Patent number: 10860229
    Abstract: A request associated with one or more privileges assigned to a first entity may be received. Each of the one or more privileges may correspond to an operation of an integrated circuit. Information corresponding to the first entity and stored in a memory that is associated with the integrated circuit may be identified. Furthermore, the memory may be programmed to modify the information stored in the memory that is associated with the integrated circuit in response to the request associated with the one or more privileges assigned to the first entity.
    Type: Grant
    Filed: August 31, 2015
    Date of Patent: December 8, 2020
    Assignee: CRYPTOGRAPHY RESEARCH INC.
    Inventors: Benjamin Che-Ming Jun, William Craig Rawlings, Ambuj Kumar, Mark Evan Marson
  • Patent number: 10831393
    Abstract: A variety of applications can include systems and/or methods of partial save of memory in an apparatus such as a non-volatile dual in-line memory module. In various embodiments, a set of control registers of a non-volatile dual in-line memory module can be configured to contain an identification of a portion of the dynamic random-access memory of the non-volatile dual in-line memory module from which to back up content to the non-volatile memory of the non-volatile dual in-line memory module. Registers of the set of control registers may also be allotted to contain an amount of content to transfer from the dynamic random-access memory to the non-volatile memory. Additional apparatus, systems, and methods are disclosed.
    Type: Grant
    Filed: August 27, 2018
    Date of Patent: November 10, 2020
    Assignee: Micron Technology, Inc.
    Inventors: Jeffery J. Leyda, Nathan A. Eckel
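    Illustrative sketch of control registers that describe which portion of NVDIMM DRAM to back up to non-volatile memory, and how much; the register names and byte-level copy are assumptions:
      class NvdimmControlRegisters:
          def __init__(self):
              self.backup_start = 0       # DRAM offset of the region to preserve
              self.backup_length = 0      # amount of content to transfer, in bytes

      def partial_save(dram: bytearray, flash: bytearray, regs: NvdimmControlRegisters):
          """Copy only the register-described portion of DRAM into non-volatile memory."""
          start, length = regs.backup_start, regs.backup_length
          flash[0:length] = dram[start:start + length]
          return length

      dram = bytearray(b"cold-data|HOT-STATE|more-cold")
      flash = bytearray(len(dram))
      regs = NvdimmControlRegisters()
      regs.backup_start = dram.index(b"HOT-STATE")
      regs.backup_length = len(b"HOT-STATE")
      saved = partial_save(dram, flash, regs)
      print(saved, bytes(flash[:saved]))     # 9 b'HOT-STATE'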
  • Patent number: 10824341
    Abstract: In a flash-based accelerator, a flash-based non-volatile memory stores data in pages, and a buffer subsystem stores data in words or bytes. An accelerator controller manages data movement between the flash-based non-volatile memory and the buffer subsystem. A plurality of processors processes data stored in the buffer subsystem. A network integrates the flash-based non-volatile memory, the buffer subsystem, the accelerator controller, and the plurality of processors.
    Type: Grant
    Filed: June 16, 2016
    Date of Patent: November 3, 2020
    Assignees: MemRay Corporation, Yonsei University, University-Industry Foundation (UIF)
    Inventors: Myoungsoo Jung, Jie Zhang
  • Patent number: 10811112
    Abstract: Apparatuses, systems, and methods are disclosed for wear leveling for non-volatile memory. An apparatus may include one or more non-volatile memory elements, and a controller. A controller may perform a wear-leveling process for one or more non-volatile memory elements, by periodically updating a logical-to-physical mapping and moving data based on the updated mapping. A controller may detect a wear-based attack for one or more non-volatile memory elements. A controller may change a wear-leveling process in response to detecting a wear-based attack.
    Type: Grant
    Filed: September 29, 2018
    Date of Patent: October 20, 2020
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventors: Daniel Helmick, Amir Gholamipour, James Fitzpatrick
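    Illustrative sketch of a wear-leveler that periodically re-shuffles its logical-to-physical mapping and tightens the remap interval when write counts suggest a wear-based attack; the thresholds and detection heuristic are assumptions:
      import random

      class WearLeveler:
          def __init__(self, num_blocks, remap_interval=1000, attack_threshold=0.5):
              self.l2p = list(range(num_blocks))        # logical -> physical mapping
              self.writes = [0] * num_blocks            # per-physical-block write counts
              self.remap_interval = remap_interval
              self.attack_threshold = attack_threshold
              self.total_writes = 0

          def write(self, logical_block):
              physical = self.l2p[logical_block]
              self.writes[physical] += 1
              self.total_writes += 1
              if self.total_writes % self.remap_interval == 0:
                  self._remap()

          def _remap(self):
              # Periodic wear leveling: shuffle the logical-to-physical mapping and
              # (conceptually) move the data along with it.
              random.shuffle(self.l2p)
              if self._attack_suspected():
                  # Change the wear-leveling process: remap far more aggressively.
                  self.remap_interval = max(10, self.remap_interval // 10)

          def _attack_suspected(self):
              # Crude heuristic: one physical block absorbing most of the writes.
              return max(self.writes) > self.attack_threshold * self.total_writes

      wl = WearLeveler(num_blocks=64)
      for _ in range(5000):
          wl.write(7)                    # an attacker hammering one logical block
      print(wl.remap_interval < 1000)    # True: the leveler has tightened its interval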
  • Patent number: 10802983
    Abstract: The disclosure provides an approach for obscuring the organization of data within a storage system by embedding virtual machines within blocks of the storage system. A storage system may receive a command comprising an address. The address may correspond to the location of an embedded virtual machine within the storage system. The virtual machine instantiates and executes an opaque algorithm that returns a new address. The new address corresponds to the real location of the data on which the command is executed. The logic of the algorithm is obscured within the virtual machine, making the algorithm less predictable and thus more secure.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: October 13, 2020
    Assignee: VMware, Inc.
    Inventor: Matthew D. McClure