Patents Examined by Christopher D Birkhimer
  • Patent number: 10761735
    Abstract: An embodiment of the invention provides a method comprising: permitting an application to be aware of a distribution of data of the application across a cache and a permanent storage device. The cache comprises a solid state device and the permanent storage device comprises a disk or a memory. In yet another embodiment of the invention, an apparatus comprises: a caching application program interface configured to permit an application to be aware of a distribution of data of the application across a cache and a permanent storage device. The caching application program interface is configured to determine an input/output strategy to consume the data based on the distribution of the data.
    Type: Grant
    Filed: December 4, 2018
    Date of Patent: September 1, 2020
    Assignee: PrimaryIO, Inc.
    Inventors: Sumit Kumar, Sumit Kapoor
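    A minimal Python sketch of the idea in the abstract above, under the assumption that the cache and permanent storage can be modeled as simple key/value stores; the class and method names (CachingAPI, distribution, io_strategy) and the 50% threshold are illustrative, not PrimaryIO's actual interface:
      # Hypothetical sketch of a distribution-aware caching API (all names are illustrative).
      from dataclasses import dataclass

      @dataclass
      class Distribution:
          cached_blocks: set        # block IDs resident in the SSD cache
          persisted_blocks: set     # block IDs resident on the disk/permanent storage

      class CachingAPI:
          def __init__(self, cache, permanent):
              self.cache = cache          # dict: block_id -> bytes (SSD cache stand-in)
              self.permanent = permanent  # dict: block_id -> bytes (disk stand-in)

          def distribution(self, block_ids):
              """Expose to the application where each of its blocks currently lives."""
              return Distribution(
                  cached_blocks={b for b in block_ids if b in self.cache},
                  persisted_blocks={b for b in block_ids if b in self.permanent},
              )

          def io_strategy(self, block_ids):
              """Pick a consumption strategy from the distribution: mostly-cached data is
              read from the cache; mostly-uncached data is read from permanent storage."""
              dist = self.distribution(block_ids)
              cached_ratio = len(dist.cached_blocks) / max(len(block_ids), 1)
              return "random-read-from-cache" if cached_ratio >= 0.5 else "sequential-read-from-disk"

      api = CachingAPI(cache={1: b"a"}, permanent={1: b"a", 2: b"b", 3: b"c"})
      print(api.io_strategy([1, 2, 3]))   # 'sequential-read-from-disk' (1 of 3 blocks cached)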
  • Patent number: 10747436
    Abstract: Systems and methods for migrating stored backup data between disks (e.g., from an existing disk to another disk), such as a new or different disk in a magnetic storage library, without interrupting or otherwise affecting secondary copy operations (e.g., operations currently writing data to the storage library) utilizing the magnetic storage library, are described. In some embodiments, the systems and methods mark one or more mount paths as full when a running secondary copy operation associated with the mount path has completed a job (regardless of the actual current capacity or intended use of the mount path), and migrate the data associated with each of the one or more mount paths to a second disk of the data storage library when that mount path is marked as full.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: August 18, 2020
    Assignee: Commvault Systems, Inc.
    Inventors: Jaidev O. Kochunni, Michael F. Klose
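    A small, self-contained Python sketch of the "mark full, then migrate" idea described above; the MountPath class, its fields, and the migration flow are hypothetical stand-ins, not Commvault's implementation:
      class MountPath:
          def __init__(self, name, chunks):
              self.name = name
              self.chunks = list(chunks)      # backup data currently stored on this mount path
              self.marked_full = False
              self.job_running = False        # True while a secondary copy job is writing here

      def migrate(mount_paths, second_disk):
          """Mark each idle mount path as full (regardless of real capacity or intended
          use), then move its data to the second disk."""
          for mp in mount_paths:
              if not mp.job_running:          # only once the running copy job has completed
                  mp.marked_full = True
          for mp in mount_paths:
              if mp.marked_full:
                  second_disk.extend(mp.chunks)   # migrate this mount path's data
                  mp.chunks.clear()

      paths = [MountPath("mp1", ["c1", "c2"]), MountPath("mp2", ["c3"])]
      new_disk = []
      migrate(paths, new_disk)
      print(new_disk)                          # ['c1', 'c2', 'c3']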
  • Patent number: 10740029
    Abstract: A processing system employs an expandable memory buffer that supports enlarging the memory buffer when the processing system generates a large number of long latency memory transactions. The hybrid structure of the memory buffer allows a memory controller of the processing system to store a larger number of memory transactions while still maintaining adequate transaction throughput and also ensuring a relatively small buffer footprint and power consumption. Further, the hybrid structure allows different portions of the buffer to be placed on separate integrated circuit dies, which in turn allows the memory controller to be used in a wide variety of integrated circuit configurations, including configurations that use only one portion of the memory buffer.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: August 11, 2020
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Gabriel H. Loh, William L. Walker
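    A hypothetical Python sketch of a two-portion transaction buffer along the lines described above: a small primary portion absorbs normal traffic, and a larger expansion portion (which could sit on a separate die) is used only when long-latency transactions pile up. Slot counts and the refill policy are assumptions, not AMD's design:
      from collections import deque

      class HybridBuffer:
          def __init__(self, primary_slots=8, expansion_slots=64):
              self.primary = deque(maxlen=primary_slots)      # small, fast, always used
              self.expansion = deque(maxlen=expansion_slots)  # larger overflow portion

          def enqueue(self, transaction):
              if len(self.primary) < self.primary.maxlen:
                  self.primary.append(transaction)        # common case: stay small and cheap
              elif len(self.expansion) < self.expansion.maxlen:
                  self.expansion.append(transaction)      # heavy load: expand the buffer
              else:
                  return False                            # back-pressure the requester
              return True

          def dequeue(self):
              if self.primary:
                  txn = self.primary.popleft()
              elif self.expansion:
                  txn = self.expansion.popleft()
              else:
                  return None
              # refill the fast portion from the expansion portion to keep throughput up
              if self.expansion and len(self.primary) < self.primary.maxlen:
                  self.primary.append(self.expansion.popleft())
              return txn

      buf = HybridBuffer(primary_slots=2, expansion_slots=4)
      for txn in range(5):
          buf.enqueue(txn)
      print(len(buf.primary), len(buf.expansion))   # 2 3: the buffer expanded under load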
  • Patent number: 10740228
    Abstract: Systems, methods and/or devices are used to enable locality grouping during garbage collection of a storage device. In one aspect, the method includes, at a storage controller for the storage device: performing one or more operations for a garbage collection read, including: identifying one or more sequences of valid data in a source unit, wherein each identified sequence of valid data has a length selected from a set of predefined lengths; and for each respective sequence of the one or more sequences of valid data in the source unit, transferring the respective sequence to a respective queue of a plurality of queues, in accordance with the length of the respective sequence; and performing one or more operations for a garbage collection write, including: identifying full respective queues for writing to a destination unit; and writing from the full respective queues to the destination unit.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: August 11, 2020
    Assignee: Sandisk Technologies LLC
    Inventors: Neil D. Hutchison, Steven Theodore Sprouse, Shakeel I. Bukhari
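    A Python sketch of locality grouping as described in the abstract: valid runs from the source unit are split into sequences with lengths drawn from a predefined set, routed to per-length queues, and only full queues are written to the destination unit. The length set, queue capacity, and greedy split are illustrative assumptions:
      PREDEFINED_LENGTHS = (8, 4, 2, 1)          # allowed sequence lengths (in pages)
      QUEUE_CAPACITY = 4                         # sequences per queue before it is "full"

      def split_run(run, lengths=PREDEFINED_LENGTHS):
          """Greedily split a run of valid pages into sequences of predefined lengths."""
          sequences, i = [], 0
          while i < len(run):
              for length in lengths:
                  if len(run) - i >= length:
                      sequences.append(run[i:i + length])
                      i += length
                      break
          return sequences

      def garbage_collect(valid_runs, destination):
          queues = {length: [] for length in PREDEFINED_LENGTHS}
          # garbage-collection read: route each sequence to the queue for its length
          for run in valid_runs:
              for seq in split_run(run):
                  queues[len(seq)].append(seq)
          # garbage-collection write: flush only the queues that are full
          for length, queue in queues.items():
              while len(queue) >= QUEUE_CAPACITY:
                  batch = queue[:QUEUE_CAPACITY]
                  del queue[:QUEUE_CAPACITY]
                  destination.extend(page for seq in batch for page in seq)
          return queues   # partially filled queues wait for sequences from the next source unit

      dest = []
      leftover = garbage_collect([list(range(10)), [100, 101, 102]], dest)
      print(leftover[2])   # [[8, 9], [100, 101]] still waiting for more length-2 sequences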
  • Patent number: 10725686
    Abstract: A method of operating a storage controller is provided. The method includes receiving data transferred by a host for storage in a target partition of a storage media, and detecting properties of the data. The method also includes establishing one or more inferred partitions on the storage media based at least on the properties of the data, and based at least on the properties of the data, sorting subsets of the data for storage within the target partition and the one or more inferred partitions.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: July 28, 2020
    Assignee: Burlywood, Inc.
    Inventors: Erik Habbinga, Kevin Darveau Landin, Tod Roland Earhart, Nathan Koch, John Foister Murphy, David Christopher Pruett, John William Slattery, Amy Lee Wohlschlegel
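    A hypothetical Python sketch of sorting writes between the host's target partition and inferred partitions derived from detected data properties; the size-based property detector and partition naming are assumptions, not Burlywood's method:
      def detect_property(chunk):
          """Classify a write as sequential bulk data or small random data (illustrative rule)."""
          return "sequential" if len(chunk) >= 64 * 1024 else "random"

      def place_writes(chunks, target_partition):
          partitions = {target_partition: []}
          for chunk in chunks:
              prop = detect_property(chunk)
              if prop == "sequential":
                  partitions[target_partition].append(chunk)          # keep bulk data where asked
              else:
                  inferred = f"{target_partition}.inferred.{prop}"
                  partitions.setdefault(inferred, []).append(chunk)   # establish an inferred partition
          return partitions

      layout = place_writes([b"x" * 128 * 1024, b"y" * 512], target_partition="part0")
      print(sorted(layout))   # ['part0', 'part0.inferred.random']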
  • Patent number: 10719236
    Abstract: Subject matter disclosed herein may relate to buffers, and may relate more particularly to non-volatile buffers for memory operations.
    Type: Grant
    Filed: November 20, 2015
    Date of Patent: July 21, 2020
    Assignee: ARM Ltd.
    Inventors: Andreas Hansson, Stephan Diestelhorst, Wei Wang, Irenéus Johannes de Jong
  • Patent number: 10719266
    Abstract: A controller includes: a processor suitable for controlling a memory device to read map data stored in a memory and read out a physical address corresponding to data requested by a host to be read; a counter suitable for obtaining reliability information on the map data stored in the memory; a determining unit suitable for activating a pre-pumping mode when reliability of the map data is poor; a deciding unit suitable for determining a first target die of a pre-pumping operation for reading the data in the activated pre-pumping mode; and a pumping unit suitable for controlling the memory device to perform the pre-pumping operation on the first target die during a background operation for reading out the physical address.
    Type: Grant
    Filed: February 12, 2018
    Date of Patent: July 21, 2020
    Assignee: SK hynix Inc.
    Inventors: Byeong-Gyu Park, Hyunjun Kim, Byoung-Sung You
  • Patent number: 10713181
    Abstract: On a computer system having a processor, a single OS and a first instance of a system driver installed and performing system services, a method for sharing driver pages among Containers, including instantiating a plurality of Containers that virtualize the OS, wherein the first instance is loaded from an image, and instantiating a second instance of the system driver upon request from a Container for system services by: allocating virtual memory pages for the second instance and loading, from the image, the second instance into a physical memory; acquiring virtual addresses of identical pages of the first instance compared to the second instance; mapping the virtual addresses of the identical pages of the second instance to physical pages to which virtual addresses of the corresponding pages of the first instance are mapped, and protecting the physical pages from modification; and releasing physical memory occupied by the identical pages of the second instance.
    Type: Grant
    Filed: February 21, 2019
    Date of Patent: July 14, 2020
    Assignee: Virtuozzo International GmbH
    Inventors: Pavel Makhov, Marina Kudinova, Alexey Kostyushko, Mikhail Philippov
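    A self-contained Python sketch of the page-sharing step: pages of the second driver instance whose contents match pages of the first are remapped to the first instance's physical pages, and the duplicate copies are released. Page tables are modeled as plain dicts, and write protection is only noted in a comment:
      def share_identical_pages(first_page_table, second_page_table, physical_memory):
          """Page tables map virtual_page -> physical_page; physical_memory maps
          physical_page -> bytes. Returns the number of physical pages released."""
          content_to_phys = {physical_memory[p]: p for p in first_page_table.values()}
          released = 0
          for vpage, phys in list(second_page_table.items()):
              shared_phys = content_to_phys.get(physical_memory[phys])
              if shared_phys is not None and shared_phys != phys:
                  second_page_table[vpage] = shared_phys   # map to the first instance's page
                  # the shared physical page would be write-protected here (copy-on-write)
                  del physical_memory[phys]                # release the duplicate copy
                  released += 1
          return released

      phys_mem = {0: b"drv-code-A", 1: b"drv-data", 2: b"drv-code-A", 3: b"drv-data-2nd"}
      inst1 = {0x1000: 0, 0x1001: 1}
      inst2 = {0x2000: 2, 0x2001: 3}
      print(share_identical_pages(inst1, inst2, phys_mem))   # 1 duplicate page released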
  • Patent number: 10705752
    Abstract: Embodiments provide a method, a system, and a computer program product for performing copy operations of one or more data units in a hierarchical storage management (HSM) system. The HSM system includes an upper layer and a lower layer. The upper layer includes multiple storage nodes having a grid configuration. The method comprises scheduling copy operations of multiple data units, each of which is stored in at least one of the multiple storage nodes, such that loads on the copy operations are distributed among the multiple storage nodes in which the multiple data units are stored, and copying the multiple data units to the lower layer in accordance with the scheduling.
    Type: Grant
    Filed: February 12, 2018
    Date of Patent: July 7, 2020
    Assignee: International Business Machines Corporation
    Inventors: Kousei Kawamura, Koichi Masuda, Sosuke Matsui, Shinsuke Mitsuma, Takeshi Nohta, Takahiro Tsuda
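    A Python sketch of spreading the scheduled copies across the upper-layer grid nodes in round-robin fashion so no single node carries the whole load; the data model and the round-robin policy are illustrative, not IBM's scheduler:
      from itertools import cycle
      from collections import defaultdict

      def schedule_copies(data_units):
          """data_units: (unit_id, node_id) pairs, where node_id is the upper-layer node
          holding the unit. Returns an ordering that alternates across nodes."""
          per_node = defaultdict(list)
          for unit_id, node_id in data_units:
              per_node[node_id].append(unit_id)
          schedule = []
          nodes = cycle(sorted(per_node))
          remaining = sum(len(units) for units in per_node.values())
          while remaining:
              node = next(nodes)
              if per_node[node]:
                  schedule.append((per_node[node].pop(0), node))   # next copy comes from this node
                  remaining -= 1
          return schedule   # each entry is then copied to the lower layer in this order

      units = [("u1", "nodeA"), ("u2", "nodeA"), ("u3", "nodeB"), ("u4", "nodeC")]
      print(schedule_copies(units))   # [('u1', 'nodeA'), ('u3', 'nodeB'), ('u4', 'nodeC'), ('u2', 'nodeA')]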
  • Patent number: 10705590
    Abstract: Techniques and apparatuses are described that enable power-conserving cache memory usage. Main memory constructed using, e.g., DRAM can be placed in a low-power mode, such as a self-refresh mode, for longer time periods using the described techniques and apparatuses. A hierarchical memory system includes a supplemental cache memory operatively coupled between a higher-level cache memory and the main memory. The main memory can be placed in the self-refresh mode responsive to the supplemental cache memory being selectively activated. The supplemental cache memory can be implemented with a highly- or fully-associative cache memory that is smaller than the higher-level cache memory. Thus, the supplemental cache memory can handle those cache misses by the higher-level cache memory that arise because too many memory blocks are mapped to a single cache line. In this manner, a DRAM implementation of the main memory can be kept in the self-refresh mode for longer time periods.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: July 7, 2020
    Assignee: Google LLC
    Inventor: Christopher J. Phoenix
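    A hypothetical Python sketch of a small fully-associative supplemental cache in front of DRAM: hits let the DRAM stay in self-refresh, and only misses wake it. The capacity, LRU replacement, and wake-up counter are assumptions, not Google's design:
      from collections import OrderedDict

      class SupplementalCache:
          def __init__(self, capacity=32):
              self.lines = OrderedDict()     # block address -> data, kept in LRU order
              self.capacity = capacity
              self.dram_wakeups = 0          # times the DRAM had to leave self-refresh

          def read(self, addr, dram):
              if addr in self.lines:                     # hit: DRAM stays in self-refresh
                  self.lines.move_to_end(addr)
                  return self.lines[addr]
              self.dram_wakeups += 1                     # miss: wake DRAM and fill the line
              data = dram[addr]
              self.lines[addr] = data
              if len(self.lines) > self.capacity:
                  self.lines.popitem(last=False)         # evict the least recently used line
              return data

      dram = {addr: f"line-{addr}" for addr in range(256)}
      cache = SupplementalCache(capacity=4)
      for addr in [1, 2, 1, 3]:
          cache.read(addr, dram)
      print(cache.dram_wakeups)   # 3: the repeated read of address 1 hit the supplemental cache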
  • Patent number: 10705925
    Abstract: Examples provided herein describe a system and method for satisfying recovery service level agreements (SLAs). For example, a first entity may determine that a first recovery operation is to be performed at a first storage device. The first entity may then determine that the first storage device is available. Responsive to determining that the first storage device is available, the first entity may establish a data connection with a first storage device and may perform a first recovery operation at the first storage device. The first entity may receive a second storage device availability message from a second entity that requests a second recovery operation at the first storage device and may facilitate communication with the second entity. The first entity may then perform the second recovery operation at the first storage device and communicate the recovered data to the second entity.
    Type: Grant
    Filed: March 24, 2017
    Date of Patent: July 7, 2020
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventors: Mandar Nanivadekar, Veeresh Mallappa Anami
  • Patent number: 10698629
    Abstract: Systems, methods, and non-transitory computer readable media are configured to determine a request corresponding to a portion of data. A placement configuration associated with the portion of data can be determined. The placement configuration can belong to a set of placement configurations. A datacenter identified by the placement configuration can be selected. Subsequently, the portion of data can be accessed at the selected datacenter.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: June 30, 2020
    Assignee: Facebook, Inc.
    Inventors: Muthukaruppan Annamalai, Harish Srinivas, Kaushik Ravichandran, Igor A. Zinkovsky, Luning Pan
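    A minimal Python sketch of resolving a request through a placement configuration: each data shard maps to one configuration from a fixed set, the configuration lists candidate datacenters, and one is selected to serve the access. The configuration names and selection rule are illustrative:
      PLACEMENT_CONFIGS = {                     # illustrative set of placement configurations
          "us-replicated": ["dc-east", "dc-west"],
          "eu-local": ["dc-eu"],
      }
      SHARD_TO_CONFIG = {"user:123": "us-replicated", "user:456": "eu-local"}

      def access(shard, preferred_region=None):
          config = SHARD_TO_CONFIG[shard]               # placement configuration for this data
          datacenters = PLACEMENT_CONFIGS[config]
          if preferred_region in datacenters:           # select a datacenter it identifies
              return preferred_region
          return datacenters[hash(shard) % len(datacenters)]   # simple fallback choice

      print(access("user:123", preferred_region="dc-west"))    # dc-west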
  • Patent number: 10656869
    Abstract: A movement system of a block-level data storage service obtains usage information for a data storage volume. The movement system processes the usage information to identify a placement strategy for the data storage volume that is associated with a second operational state for the data storage volume. Based on the placement strategy, the movement system causes a set of servers to perform an operation to implement the second operational state for the data storage volume. As a result of the operation being successfully performed, the movement system provides access to the data storage volume in accordance with the second operational state.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: May 19, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Christopher Magee Greenwood, Sriram Venugopal, Mitchell Gannon Flaherty
  • Patent number: 10656839
    Abstract: In an embodiment of the invention, a method comprises: recording application-level heuristics and IO-level (input/output-level) heuristics; correlating and analyzing the application-level heuristics and IO-level heuristics; and based on an analysis and correlation of the application-level heuristics and IO-level heuristics, generating a policy for achieving optimal application performance. In another embodiment of the invention, an apparatus comprises: a system configured to record application-level heuristics and IO-level heuristics, to correlate and analyze the application-level heuristics and IO-level heuristics, and based on an analysis and correlation of the application-level heuristics and IO-level heuristics, to generate a policy for achieving optimal application performance.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: May 19, 2020
    Assignee: PrimaryIO, Inc.
    Inventor: Murali Nagaraj
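    A hypothetical Python sketch of correlating application-level and IO-level heuristics into a per-file caching policy; the event formats, thresholds, and policy labels are assumptions, not PrimaryIO's heuristics:
      from collections import defaultdict

      def generate_policy(app_events, io_events):
          """app_events: (file, operation) pairs; io_events: (file, latency_ms) pairs."""
          access_count = defaultdict(int)
          total_latency = defaultdict(float)
          for file, _operation in app_events:           # application-level heuristics
              access_count[file] += 1
          for file, latency in io_events:               # IO-level heuristics
              total_latency[file] += latency
          policy = {}
          for file, count in access_count.items():      # correlate the two views per file
              avg_latency = total_latency[file] / count
              hot_and_slow = count > 3 and avg_latency > 5.0
              policy[file] = "cache-on-ssd" if hot_and_slow else "leave-on-disk"
          return policy

      events = [("db.log", "append")] * 5
      ios = [("db.log", 12.0)] * 5
      print(generate_policy(events, ios))               # {'db.log': 'cache-on-ssd'}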
  • Patent number: 10649673
    Abstract: Embodiments of the present disclosure may relate to methods and a computer program product for allowing writes based on a granularity level. The method for a storage server may include receiving a received granularity level for a particular volume of a storage device of a client computer including an effective duration for the received granularity level. The method may include receiving an anticipated write to the particular volume at an anticipated write granularity level. The method may include verifying whether the anticipated write granularity level substantially matches the received granularity level at the effective duration. The method may also include writing, in response to the anticipated write granularity level substantially matching the received granularity level at the effective duration, the anticipated write to the particular volume for the received granularity level.
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: May 12, 2020
    Assignee: International Business Machines Corporation
    Inventors: Juan A. Coronado, Lisa R. Martinez, Beth A. Peterson, Clint A. Hardy, Jennifer S. Shioya
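    A Python sketch of the granularity check described above: a write is admitted only while the received granularity is still in effect and the anticipated write's granularity substantially matches it. The time-based expiry and the "within a factor of two" matching rule are illustrative assumptions:
      import time

      class VolumeGranularity:
          def __init__(self, granularity_kb, effective_seconds):
              self.granularity_kb = granularity_kb
              self.expires_at = time.time() + effective_seconds

          def allows(self, write_granularity_kb):
              if time.time() > self.expires_at:
                  return False                  # the received granularity is no longer in effect
              # "substantially matches": within a factor of two of the registered value
              return 0.5 <= write_granularity_kb / self.granularity_kb <= 2.0

      registered = VolumeGranularity(granularity_kb=64, effective_seconds=300)
      print(registered.allows(64))    # True: write proceeds
      print(registered.allows(4))     # False: write is not admitted at this granularity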
  • Patent number: 10579305
    Abstract: A method and an apparatus for processing a read/write request in a physical machine, where the method includes polling, by a host by accessing memory of at least one of the virtual storage devices, at least one instruction transmit queue of the at least one virtual storage device in order to obtain a first read/write request from the at least one instruction transmit queue, performing a first forwarding operation on the first read/write request, and obtaining, by the host, another first read/write request from the at least one instruction transmit queue by polling such that the host performs the first forwarding operation on the other first read/write request. According to the method and the apparatus in embodiments of the present disclosure, a speed of processing a read/write request in a virtualization storage scenario can be increased.
    Type: Grant
    Filed: July 2, 2018
    Date of Patent: March 3, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Lina Lu
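    A Python sketch of the polling loop: the host repeatedly scans the instruction transmit queues of its virtual storage devices and forwards any read/write request it finds, rather than waiting on per-request notifications. The queue and request shapes are illustrative:
      from collections import deque

      def poll_and_forward(transmit_queues, forward, max_rounds=1000):
          """transmit_queues: deques shared with the virtual storage devices;
          forward: callable performing the forwarding operation on one request."""
          for _ in range(max_rounds):              # bounded only to keep the demo finite
              for queue in transmit_queues:        # poll every virtual device's transmit queue
                  while queue:
                      request = queue.popleft()    # obtain a read/write request
                      forward(request)             # perform the forwarding operation on it

      queues = [deque([("read", 0x10)]), deque([("write", 0x20, b"data")])]
      poll_and_forward(queues, forward=print, max_rounds=1)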
  • Patent number: 10565119
    Abstract: A shingled magnetic recording (SMR) hard disk drive (HDD) performs additional SMR band copy and/or flush operations to ensure that data associated with logical bands that are adjacent or proximate in logical space are stored in physical locations in the SMR HDD that are proximate in physical space. As a result, efficient execution of read commands that span multiple logical bands of the SMR HDD is ensured.
    Type: Grant
    Filed: March 12, 2018
    Date of Patent: February 18, 2020
    Assignees: KABUSHIKI KAISHA TOSHIBA, TOSHIBA ELECTRONIC DEVICES & STORAGE CORPORATION
    Inventors: Thorsten Schmidt, Richard M. Ehrlich, Fernando Anibal Zayas
  • Patent number: 10565111
    Abstract: A processor includes a hierarchical cache memory having a higher-order cache memory and a lower-order cache memory. The hierarchical cache memory is in an inclusive state in which data stored in the higher-order cache memory is included in the lower-order cache memory. The processor also includes a cache hit determination unit configured to determine a cache hit/miss with respect to the higher-order cache memory and the lower-order cache memory at the time of accessing predetermined data, and a control unit configured to perform control to realize the inclusive state, based on the determination results of the cache hit/miss with respect to the higher-order cache memory and the lower-order cache memory.
    Type: Grant
    Filed: February 12, 2018
    Date of Patent: February 18, 2020
    Assignee: NEC CORPORATION
    Inventor: Kenji Ezoe
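    A minimal Python sketch of maintaining the inclusive state: a hit/miss is determined for both cache levels, and any fill into the higher-order cache is also installed in the lower-order cache. Sets stand in for the caches and capacity management is omitted; this is an illustration, not NEC's control logic:
      def access(addr, higher, lower, memory):
          """higher, lower: sets of cached block addresses; memory: dict of backing data."""
          if addr in higher:                   # hit in the higher-order cache
              return memory[addr], "hit-higher"
          if addr in lower:                    # hit only in the lower-order cache
              higher.add(addr)                 # fill the higher level; lower already holds it
              return memory[addr], "hit-lower"
          lower.add(addr)                      # miss in both: install in the lower level first ...
          higher.add(addr)                     # ... then in the higher level, preserving inclusion
          return memory[addr], "miss"

      L1, L2, mem = set(), set(), {0x40: "blockA"}
      print(access(0x40, L1, L2, mem))         # ('blockA', 'miss'); the block is now in both levels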
  • Patent number: 10565131
    Abstract: Disclosed is a main memory capable of speeding up a hardware accelerator and saving memory space. The main memory according to the present disclosure is at least temporarily implemented by a computer and includes a memory, and an accelerator responsible for performing an operation for hardware acceleration while sharing the storage space of a host processor and the memory.
    Type: Grant
    Filed: March 17, 2017
    Date of Patent: February 18, 2020
    Assignee: INDUSTRY-ACADEMIC COOPERATION FOUNDATION, YONSEI UNIVERSITY
    Inventors: Eui Young Chung, Hyeok Jun Seo, Sang Woo Han
  • Patent number: 10558581
    Abstract: Components of a data object are distributed throughout a data storage system. Manifests are used to store the locations of the components of data objects in a data storage system to allow for subsequent reconstruction of the data objects. The manifests may be stored in another data storage system when cost projections indicate it being economical to do so. If a manifest for a data object becomes lost or otherwise inaccessible, clues are used to regenerate the manifest, thereby providing a continued ability to access the components of the data object to reconstruct the data object.
    Type: Grant
    Filed: February 19, 2013
    Date of Patent: February 11, 2020
    Assignee: Amazon Technologies, Inc.
    Inventor: Colin Laird Lazier
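    A Python sketch of the manifest idea in the abstract: component locations are recorded in a manifest, and if the manifest is lost it is regenerated from clues, here assumed to be the object ID embedded in each component's storage key (the key format is an illustrative assumption):
      def build_manifest(object_id, components, store):
          manifest = []
          for index, part in enumerate(components):
              key = f"{object_id}/part-{index:04d}"      # clue: object ID and part index in the key
              store[key] = part
              manifest.append(key)
          return manifest

      def regenerate_manifest(object_id, store):
          """Scan storage keys for clues pointing at this object and rebuild the manifest."""
          keys = [k for k in store if k.startswith(f"{object_id}/part-")]
          return sorted(keys)                            # the part index in the key restores the order

      def reconstruct(manifest, store):
          return b"".join(store[key] for key in manifest)

      store = {}
      manifest = build_manifest("obj42", [b"alpha", b"beta"], store)
      assert reconstruct(regenerate_manifest("obj42", store), store) == b"alphabeta"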