Patents Examined by Glenn A. Gossage
  • Patent number: 10222993
    Abstract: A computing device and method for establishing more direct access to a storage device from unprivileged code in an unprivileged storage architecture component. Using a storage infrastructure driver to discover and enumerate storage architecture component(s), a user-mode application requests at least one portion of the storage device to store application-related data corresponding to executing input/output activity. The storage infrastructure driver maps the at least one portion of the storage device to substantially match an address space associated with the application-related data and configures at least one path for the user-mode application to perform block-level input/output between the storage device and the unprivileged storage architecture component. A virtual function may be generated corresponding to at least one path between a computing device and the unprivileged storage architecture component to execute input/output activity using the address space.
    Type: Grant
    Filed: July 6, 2016
    Date of Patent: March 5, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Dmitry Malloy, Dexter Paul Bradshaw, Suyash Sinha
  • Patent number: 10216440
    Abstract: A method, computer program product, and computer system for disk management in a distributed storage system. The distributed storage system comprises a plurality of disks within a main disk ring, where the disks store target data. In one embodiment, the method comprises dividing the target data into cold target data and hot target data, and grouping one or more disks within the main disk ring into a cold data disk ring and the remaining one or more disks within the main disk ring into a hot data disk ring. The method further comprises migrating the cold target data on disks not within the cold data disk ring onto disks within the cold data disk ring while migrating the hot target data on disks not within the hot data disk ring onto disks within the hot data disk ring, and reducing a spinning rate of disks within the cold data disk ring.
    Type: Grant
    Filed: December 1, 2017
    Date of Patent: February 26, 2019
    Assignee: International Business Machines Corporation
    Inventors: Lei Chen, Li Chen, Xiaoyang Yang, Jun Wei Zhang
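    A minimal Python sketch of the hot/cold split and ring migration described in the abstract above. The Disk class, the access-count threshold, the even split of disks into two rings, and the spin rates are illustrative assumptions rather than details from the patent.
    ```python
    import itertools
    from dataclasses import dataclass, field

    @dataclass
    class Disk:
        name: str
        spin_rpm: int = 7200
        data: dict = field(default_factory=dict)   # key -> access count of the item stored under it

    def split_rings(disks, hot_threshold=10, cold_rpm=5400):
        """Group disks into cold/hot rings, migrate data accordingly, and slow the cold ring."""
        mid = max(1, len(disks) // 2)
        cold_ring, hot_ring = disks[:mid], disks[mid:] or disks[:mid]

        def migrate(is_target, ring):
            targets = itertools.cycle(ring)
            for disk in disks:
                if disk in ring:
                    continue
                for key in [k for k, count in disk.data.items() if is_target(count)]:
                    next(targets).data[key] = disk.data.pop(key)

        migrate(lambda count: count < hot_threshold, cold_ring)    # cold data -> cold data disk ring
        migrate(lambda count: count >= hot_threshold, hot_ring)    # hot data  -> hot data disk ring

        for disk in cold_ring:      # reduce the spinning rate of disks within the cold data disk ring
            disk.spin_rpm = cold_rpm
        return cold_ring, hot_ring
    ```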
  • Patent number: 10216432
    Abstract: Systems and techniques are provided for managing performance of a backup environment. A set of rules are stored, with each rule specifying a threshold value of a backup configuration parameter. Configurations of the backup environment are periodically obtained. Each obtained configuration includes a current value of the backup configuration parameter. A determination is made for each configuration as to whether the current value exceeds a suggested value, where the suggested value is based on the threshold value. If the current value exceeds the suggested value, an entry including an alert of a first type is written to a log. The log is analyzed, and if the frequency of entries in the log including alerts of the first type exceeds a threshold frequency, an entry including an alert of a second type, different from the first type, is written to the log. The threshold value of the backup configuration parameter may specify a maximum number of backup streams or a maximum number of backup clients, for example.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: February 26, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Gururaj Kulkarni, Shelesh Chopra, Vladimir Mandic
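    A minimal Python sketch of the two-level alerting flow described in the abstract above, assuming a hypothetical BackupMonitor class; the rule names, the 90% suggested-value heuristic, and the escalation window are illustrative assumptions.
    ```python
    import time

    class BackupMonitor:
        """Illustrative two-level alerting over backup configuration parameters."""

        def __init__(self, rules, escalation_count=5, window_seconds=3600):
            self.rules = rules                      # {parameter: threshold_value}
            self.escalation_count = escalation_count
            self.window = window_seconds
            self.log = []                           # entries: (timestamp, alert_type, parameter)

        def check(self, config):
            """Compare a periodically obtained configuration against the stored rules."""
            now = time.time()
            for param, threshold in self.rules.items():
                suggested = 0.9 * threshold         # suggested value derived from the threshold (illustrative)
                current = config.get(param, 0)
                if current > suggested:
                    self.log.append((now, "first", param))
                    self._maybe_escalate(param, now)

        def _maybe_escalate(self, param, now):
            """Write a second-type alert when first-type alerts become too frequent."""
            recent = [t for t, kind, p in self.log
                      if p == param and kind == "first" and now - t <= self.window]
            if len(recent) >= self.escalation_count:
                self.log.append((now, "second", param))

    # Example rules: maximum number of backup streams / backup clients, as in the abstract.
    monitor = BackupMonitor({"max_backup_streams": 100, "max_backup_clients": 50})
    monitor.check({"max_backup_streams": 95})
    ```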
  • Patent number: 10198202
    Abstract: A device access system includes a memory having a supervisor memory, a processor, an input output memory management unit (IOMMU), and a supervisor. The supervisor includes a supervisor driver, which executes on the processor to allocate the supervisor memory and reserve a range of application virtual addresses. The supervisor driver programs the IOMMU to map the supervisor memory to the reserved range. A device is granted access to the reserved range, which is protected in host page table entries such that an application cannot modify data within the range. The supervisor driver configures the device to use the supervisor memory and receives a request including a virtual address and length from the application to use the device. The supervisor driver validates the request by verifying that the virtual address and length do not overlap the range reserved by the supervisor, and responsive to validating the request, submits the request to the device.
    Type: Grant
    Filed: February 24, 2017
    Date of Patent: February 5, 2019
    Assignee: Red Hat, Inc.
    Inventor: Michael Tsirkin
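    A minimal Python sketch of the request-validation step described in the abstract above: the request's virtual address and length must not overlap the supervisor-reserved range. The reserved base/length constants and the helper names are hypothetical, not taken from the patent.
    ```python
    RESERVED_BASE = 0x7F00_0000_0000      # start of the reserved application virtual range (hypothetical)
    RESERVED_LEN = 64 * 1024 * 1024       # 64 MiB reserved for supervisor memory (hypothetical)

    def validate_request(virt_addr: int, length: int) -> bool:
        """Return True if [virt_addr, virt_addr + length) does not overlap the reserved range."""
        if length <= 0:
            return False
        req_end = virt_addr + length
        res_end = RESERVED_BASE + RESERVED_LEN
        return req_end <= RESERVED_BASE or virt_addr >= res_end

    def submit_to_device(virt_addr: int, length: int, device) -> None:
        """Submit the application's request to the device only after it has been validated."""
        if not validate_request(virt_addr, length):
            raise PermissionError("request overlaps the supervisor-reserved range")
        device.submit(virt_addr, length)
    ```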
  • Patent number: 10191663
    Abstract: An accelerator intermediary node (AIN) associated with a data store obtains an indication of a control setting to be applied with respect to a write request directed to a data item, where the control setting specifies a target for one or more of replication count, data durability, a transaction grouping with respect to a write request, or back-end synchronization node. Using the control setting, a write propagation node set is identified for the write request. The write propagation node set includes another accelerator intermediary node and/or a storage node of a data store. Respective operation requests corresponding to the write request are transmitted to one or more members of the write propagation node set. A write coordinator role may be verified prior to attempting to commit a plurality of write requests together as part of a multi-write transaction.
    Type: Grant
    Filed: September 19, 2016
    Date of Patent: January 29, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Brian O'Neill, Kevin Christen, Omer Ahmed Zaki, Kiran Kumar Muniswamy Reddy
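    A minimal Python sketch of choosing a write propagation node set from a control setting, as described in the abstract above. The ControlSetting fields and the selection policy (take peer nodes up to the replication count, add one storage node when durability is requested) are illustrative assumptions.
    ```python
    from dataclasses import dataclass

    @dataclass
    class ControlSetting:
        replication_count: int = 1
        durable: bool = False           # whether the write must also reach a back-end storage node

    def propagation_node_set(setting, peer_ains, storage_nodes):
        """Choose the nodes to which operation requests for a write are propagated."""
        nodes = peer_ains[: max(0, setting.replication_count - 1)]   # other accelerator intermediary nodes
        if setting.durable and storage_nodes:
            nodes = nodes + [storage_nodes[0]]                       # a back-end synchronization node
        return nodes

    def handle_write(write_request, setting, peer_ains, storage_nodes):
        """Transmit operation requests corresponding to the write to each selected member."""
        for node in propagation_node_set(setting, peer_ains, storage_nodes):
            node.send(("write", write_request))
    ```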
  • Patent number: 10191813
    Abstract: Persistent storage for a master copy is provided using operation numbers. A master copy can include a persistent key-value store such as a B-tree with references to corresponding data. When provisioning a slave copy, the master copy sends a point-in-time copy of the B-tree to the slave copy, which stores a copy of the B-tree, allocates the necessary space, and updates the references of the B-tree to point to a local storage before the data is transferred. When writing the data to persistent storage, a snapshot created on the master copy is an operation that is replicated to the slave copy. The snapshot is generated using a volume view that includes changes to chunks of data of the master copy since a previous snapshot, as determined using the operation number for the previous snapshot. Data (and metadata) for the snapshot is written to persistent storage while new input/output operations are processed.
    Type: Grant
    Filed: September 1, 2017
    Date of Patent: January 29, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Jianhua Fan, Benjamin Arthur Hawks, Norbert Paul Kusters, Nachiappan Arumugam, Danny Wei, John Luther Guthrie, II
  • Patent number: 10176106
    Abstract: Caching extracted information from application containers by one or more processors. Upon extracting relevant information from a temporary container, the relevant information is cached at a container template level. A space guard is applied controlling an amount of storage consumed by the cached relevant information, and a time guard is applied controlling an expiration of the cached relevant information. The cached relevant information is maintained for injection into a working container. Applying the space guard includes defining a purge process for pruning or removing cached relevant information stored in the cache, and candidate files for the purge process may be identified using a predetermined criterion. Applying the time guard includes using a time metric defined in a profile of an information injection agent, where the time metric is based on one of a creation time, a last access time or a last modified time of the cached relevant information.
    Type: Grant
    Filed: February 24, 2017
    Date of Patent: January 8, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lior Aronovich, Shibin I. Ma
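    A minimal Python sketch of a template-level cache with a space guard and a time guard, as described in the abstract above; the size limit, TTL, purge order, and field names are illustrative assumptions.
    ```python
    import time

    class ContainerInfoCache:
        """Illustrative cache of extracted container information with space and time guards."""

        def __init__(self, max_bytes=10 * 1024 * 1024, ttl_seconds=24 * 3600, time_metric="last_access"):
            self.max_bytes = max_bytes          # space guard: storage the cache may consume
            self.ttl = ttl_seconds              # time guard: expiration of cached entries
            self.time_metric = time_metric      # "creation", "last_access", or "last_modified"
            self.entries = {}                   # template id -> dict of timestamps and payload

        def put(self, template_id, payload: bytes):
            now = time.time()
            self.entries[template_id] = {"data": payload, "creation": now,
                                         "last_access": now, "last_modified": now}
            self._apply_space_guard()

        def get(self, template_id):
            entry = self.entries.get(template_id)
            if entry is None:
                return None
            entry["last_access"] = time.time()
            return entry["data"]

        def _apply_space_guard(self):
            """Purge entries (oldest by the configured time metric first) until under the limit."""
            def total_size():
                return sum(len(e["data"]) for e in self.entries.values())
            candidates = sorted(self.entries, key=lambda t: self.entries[t][self.time_metric])
            while total_size() > self.max_bytes and candidates:
                del self.entries[candidates.pop(0)]

        def apply_time_guard(self):
            """Expire entries whose configured time metric is older than the TTL."""
            now = time.time()
            expired = [t for t, e in self.entries.items() if now - e[self.time_metric] > self.ttl]
            for t in expired:
                del self.entries[t]
    ```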
  • Patent number: 10176028
    Abstract: A computer program product, system, and method are provided for upgrading a kernel or kernel module with a configured persistent memory space. A persistent memory space is configured in the memory to store application data from applications in user mode. A kernel executing in the memory is prevented from accessing the persistent memory space. A service is called to load an updated kernel in the memory to replace the kernel, wherein the applications have access to the persistent memory space after the updated kernel is loaded. The service may comprise a kernel execution mechanism that directly loads the updated kernel into the memory without a full reboot of the computer system. An extended memory kernel service may be loaded during a boot operation to reserve the persistent memory space as an extended memory space for use by the applications and prevent the kernel from accessing the persistent memory space.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: January 8, 2019
    Assignee: International Business Machines Corporation
    Inventors: Lior Chen, Alex Friedman, Constantine Gavrilov, Aharon Novogrodski, Alex Snast
  • Patent number: 10169240
    Abstract: Systems and methods for managing memory access bandwidth include a spatial locality predictor. The spatial locality predictor includes a memory region table with prediction counters associated with memory regions of a memory. When cache lines are evicted from a cache, the sizes of the cache lines which were accessed by a processor are used for updating the prediction counters. Depending on values of the prediction counters, the sizes of cache lines which are likely to be used by the processor are predicted for the corresponding memory regions. Correspondingly, the memory access bandwidth between the processor and the memory may be reduced to fetch a smaller size data (e.g., half cache line) than a full cache line if the size of the cache line likely to be used is predicted to be less than that of the full cache line. Prediction counters may be incremented or decremented by different amounts depending on access bits corresponding to portions of a cache line.
    Type: Grant
    Filed: September 20, 2016
    Date of Patent: January 1, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Brandon Harley Anthony Dwiel, Harold Wade Cain, III, Shivam Priyadarshi
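    A minimal Python sketch of a per-region spatial locality predictor, as described in the abstract above. The region size, the saturating-counter bounds, the increment/decrement amounts, and the half-line granularity are illustrative assumptions.
    ```python
    class SpatialLocalityPredictor:
        """Illustrative per-region predictor deciding whether to fetch a half or full cache line."""

        def __init__(self, region_bits=12, max_count=7):
            self.region_bits = region_bits      # memory region size = 2**region_bits bytes (hypothetical)
            self.max_count = max_count
            self.counters = {}                  # region -> saturating prediction counter

        def _region(self, address):
            return address >> self.region_bits

        def on_eviction(self, address, accessed_half_only, full_bonus=2):
            """Update the region's counter using the size of the evicted line actually accessed.

            Different increments/decrements are applied depending on the access bits, as in the
            abstract; the exact amounts here are illustrative.
            """
            region = self._region(address)
            count = self.counters.get(region, 0)
            if accessed_half_only:
                count = max(0, count - 1)                        # bias the region toward half-line fetches
            else:
                count = min(self.max_count, count + full_bonus)  # bias the region toward full-line fetches
            self.counters[region] = count

        def predicted_fetch_bytes(self, address, line_size=64):
            """Fetch only half a line for regions predicted to use less than a full line."""
            region = self._region(address)
            if self.counters.get(region, 0) >= self.max_count // 2:
                return line_size
            return line_size // 2
    ```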
  • Patent number: 10162534
    Abstract: Systems and methods for utilization of notification or ordering commands are disclosed that can enable more efficient processing of flush requests from software programs and increase data consistency in storage devices. A data storage device or system may include a non-volatile memory, a memory comprising a data cache and a controller. The controller may be configured to receive an ordering command requesting commitment to the non-volatile memory of cached data items associated with a first identifier prior to commitment of cached data items associated with a second identifier, and to delay commitment of the second data item to the non-volatile memory until commitment of the first data item to the non-volatile memory, based at least in part on the ordering command. The controller may be further configured to select data items from the data cache for commitment to the non-volatile memory in accordance with native command queuing (NCQ).
    Type: Grant
    Filed: April 6, 2017
    Date of Patent: December 25, 2018
    Assignee: Western Digital Technologies, Inc.
    Inventor: Nathan Obr
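    A minimal Python sketch of honoring an ordering command between two identifiers, as described in the abstract above: cached items for the second identifier are not committed while items for the first identifier remain cached. The class and method names are hypothetical, and NCQ-style selection is reduced to "any unblocked item".
    ```python
    class OrderingCache:
        """Illustrative controller data cache honoring an ordering command between identifiers."""

        def __init__(self):
            self.cache = []              # (identifier, data) items awaiting commitment
            self.nvm = []                # committed items (stands in for the non-volatile memory)
            self.orderings = []          # (first_id, second_id): commit first_id items before second_id items

        def write(self, identifier, data):
            self.cache.append((identifier, data))

        def ordering_command(self, first_id, second_id):
            self.orderings.append((first_id, second_id))

        def _blocked(self, identifier):
            """A second identifier is blocked while any item of its first identifier is still cached."""
            return any(second == identifier and any(i == first for i, _ in self.cache)
                       for first, second in self.orderings)

        def commit_one(self):
            """Select any unblocked cached item (order otherwise free, as with NCQ) and commit it."""
            for idx, (identifier, data) in enumerate(self.cache):
                if not self._blocked(identifier):
                    self.nvm.append(self.cache.pop(idx))
                    return True
            return False
    ```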
  • Patent number: 10157107
    Abstract: A system for progressive just-in-time restoration of data from backup media. Backup data may be divided into a plurality of chunks and stored on any kind of media such as a direct attached storage (DAS) disk, object storage, USB drive, network share or tape. An index map is maintained that indicates the location of each of the plurality of chunks in cloud storage, the index map representing contiguous blocks of backup data of a volume. The backup data may be compressed, encrypted, or de-duplicated. The backup data may be located on different media, object stores, or network shares, or at differing geographic locations. To perform a recovery, a virtual LUN or virtual volume is mounted and provided to the operating system and applications of the restored computer. Chunks may be progressively copied from cloud storage to a data cache and restored in response to requests for blocks.
    Type: Grant
    Filed: July 2, 2014
    Date of Patent: December 18, 2018
    Assignee: CATALOGIC SOFTWARE, INC.
    Inventors: Kamlesh Lad, Peter Chi-Hsiung Liu
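    A minimal Python sketch of progressive, on-demand restoration through an index map, as described in the abstract above; the chunk size, the fetch_chunk callable (standing in for download, decryption, and decompression), and the class name are illustrative assumptions.
    ```python
    class ProgressiveRestoreVolume:
        """Illustrative virtual volume restoring chunks from backup storage on demand."""

        def __init__(self, index_map, fetch_chunk, chunk_size=4 * 1024 * 1024):
            self.index_map = index_map      # chunk number -> location in cloud/object/tape storage
            self.fetch_chunk = fetch_chunk  # callable: location -> bytes of the restored chunk
            self.chunk_size = chunk_size
            self.cache = {}                 # chunk number -> bytes already copied to the data cache

        def read_block(self, block_offset, length):
            """Serve a block read, progressively copying the needed chunks into the data cache."""
            data = b""
            pos, end = block_offset, block_offset + length
            while pos < end:
                chunk_no = pos // self.chunk_size
                if chunk_no not in self.cache:
                    self.cache[chunk_no] = self.fetch_chunk(self.index_map[chunk_no])
                start = pos % self.chunk_size
                take = min(self.chunk_size - start, end - pos)
                data += self.cache[chunk_no][start:start + take]
                pos += take
            return data
    ```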
  • Patent number: 10157023
    Abstract: A memory controller includes a plurality of request queues for storing requests transmitted from corresponding host devices among a plurality of host devices, and a token information generation unit for generating information related to the numbers of first and second tokens corresponding to the plurality of respective host devices. The memory controller also includes a request scheduler for selecting repeatedly and sequentially the plurality of request queues, and outputting requests stored in a selected request queue, by using the first and second tokens, wherein the request scheduler outputs one request per one first token and, when first tokens are all consumed, outputs one request per one second token. The scheduler may output requests according to a first-ready first-come first-served (FR-FCFS) rule when using a first token, and output requests according to a first-ready (FR) rule when using a second token. The number of first tokens and second tokens may depend on characteristics of the host devices.
    Type: Grant
    Filed: February 24, 2017
    Date of Patent: December 18, 2018
    Assignee: SK Hynix Inc.
    Inventors: Young-Suk Moon, Hong-Sik Kim
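    A minimal Python sketch of the two-level token scheduler described in the abstract above. The per-host token counts are taken as given, and the FR-FCFS versus FR selection within a queue is simplified to FIFO order; all names are hypothetical.
    ```python
    from collections import deque

    class TokenScheduler:
        """Illustrative request scheduler using first and second tokens per host request queue."""

        def __init__(self, token_config):
            # token_config: host -> (first_token_count, second_token_count), chosen per host
            # characteristics (e.g. a latency-sensitive host may be granted more tokens).
            self.queues = {host: deque() for host in token_config}
            self.tokens = {host: list(counts) for host, counts in token_config.items()}

        def enqueue(self, host, request):
            self.queues[host].append(request)

        def schedule(self):
            """Select queues repeatedly and sequentially, consuming one token per output request."""
            out = []
            progressed = True
            while progressed and any(self.queues.values()):
                progressed = False
                for host, queue in self.queues.items():
                    if not queue:
                        continue
                    tokens = self.tokens[host]
                    level = 0 if tokens[0] > 0 else 1   # fall back to second tokens once first are consumed
                    if tokens[level] <= 0:
                        continue
                    tokens[level] -= 1
                    out.append(queue.popleft())         # FR-FCFS vs FR ordering inside a queue is simplified
                    progressed = True
            return out

    # Example: host A gets 2 first and 1 second token, host B gets 1 of each.
    scheduler = TokenScheduler({"A": (2, 1), "B": (1, 1)})
    ```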
  • Patent number: 10157127
    Abstract: A data storage device and method for operating the data storage device are disclosed. The data storage device includes a memory device including a plurality of memory regions, and a controller for selecting one or more candidate memory regions among the plurality of memory regions based on erase counts of the plurality of memory regions, and determining an adjustment value based on the number of candidate memory regions. The controller selects a number of victim memory regions among the one or more candidate memory regions, and performs a garbage collection operation on the selected number of victim memory regions. The number of victim memory regions may be equal to or less than the adjustment value, for example. The controller may determine whether candidate memory regions exist for which garbage collection has not been performed, and may select victim memory regions depending on amounts of valid data stored therein and a number of free memory regions.
    Type: Grant
    Filed: February 24, 2017
    Date of Patent: December 18, 2018
    Assignee: SK Hynix Inc.
    Inventor: Yeong Sik Yi
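    A minimal Python sketch of candidate and victim selection for garbage collection, as described in the abstract above; the erase-count limit, the adjustment formula, and the free-region heuristic are illustrative assumptions.
    ```python
    def select_gc_victims(regions, erase_count_limit, free_region_count):
        """Illustrative victim selection for garbage collection.

        regions: list of dicts like {"id": 0, "erase_count": 12, "valid_bytes": 4096, "collected": False}
        """
        # Candidates: regions with comparatively low erase counts not yet garbage-collected.
        candidates = [r for r in regions
                      if r["erase_count"] <= erase_count_limit and not r["collected"]]

        # Adjustment value derived from the number of candidate memory regions (illustrative formula).
        adjustment = max(1, len(candidates) // 4)

        # Fewer free regions -> collect more victims this round, but never more than the adjustment value.
        victim_count = adjustment if free_region_count < 2 else max(1, adjustment // 2)

        # Prefer victims holding the least valid data, since they are cheapest to relocate.
        victims = sorted(candidates, key=lambda r: r["valid_bytes"])[:victim_count]
        for r in victims:
            r["collected"] = True
        return victims
    ```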
  • Patent number: 10089243
    Abstract: A memory controller includes a plurality of ports coupled with a host device and a plurality of channels coupled with a memory device. The memory controller also includes an arbiter receiving a first address received through the plurality of ports to output the first address; a mapping table storage block, including a plurality of address mapping tables, selecting an address mapping table, corresponding to the first address, among the plurality of address mapping tables and outputting the selected address mapping table as a variable address mapping table; an address mapping block mapping the first address to a second address according to the variable address mapping table, and a fixed address mapping table; and a scheduler outputting the second address to the channels. The plurality of address mapping tables may employ different methods of mapping one or more first bits of the first address to the second address.
    Type: Grant
    Filed: February 24, 2017
    Date of Patent: October 2, 2018
    Assignee: SK Hynix Inc.
    Inventors: Ki-Sun Kim, Sang-Yeon Kim
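    A minimal Python sketch of selecting a variable address mapping table by the first address and then applying a fixed mapping, as described in the abstract above. The bit-swap tables, the region-selection bits, and the rotation used as the fixed mapping are illustrative stand-ins, assuming 32-bit addresses.
    ```python
    def swap_bits(addr, i, j):
        """Swap bits i and j of addr (one simple way a mapping table can permute address bits)."""
        bi, bj = (addr >> i) & 1, (addr >> j) & 1
        addr &= ~((1 << i) | (1 << j))
        return addr | (bi << j) | (bj << i)

    # A few variable mapping tables, each permuting low-order address bits differently (illustrative).
    VARIABLE_TABLES = [
        lambda a: a,                      # identity mapping
        lambda a: swap_bits(a, 3, 7),     # interleave across banks one way
        lambda a: swap_bits(a, 4, 8),     # interleave across banks another way
    ]

    def map_address(first_address):
        """Map a first (host-side) address to a second (memory-device) address."""
        # Select the mapping table corresponding to the first address (here: by its high-order bits).
        table = VARIABLE_TABLES[(first_address >> 28) % len(VARIABLE_TABLES)]
        variable_mapped = table(first_address)
        # Apply a fixed address mapping to the result (here: a 32-bit rotation as a stand-in).
        second_address = ((variable_mapped << 1) | (variable_mapped >> 31)) & 0xFFFF_FFFF
        return second_address
    ```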
  • Patent number: 10078446
    Abstract: A processor of a parallel computing apparatus accumulates first release requests that are outputted, each of which requests releasing of a storage region that stores management information of a buffer storing data subjected to inter-process communication. Each of the first release requests includes one identifier of the storage region to be released. When the number of accumulated first release requests has reached a threshold, the processor selects first release requests, that request releasing of storage regions of management information that is not presently being used, out of the accumulated first release requests starting from a first release request with an oldest output time as first release requests to be executed. The processor then outputs a single second release request that collectively requests releasing of storage regions of management information indicated in the first release requests to be executed.
    Type: Grant
    Filed: September 28, 2015
    Date of Patent: September 18, 2018
    Assignee: FUJITSU LIMITED
    Inventor: Nobutaka Ihara
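    A minimal Python sketch of accumulating first release requests and emitting a single batched second release request once a threshold is reached, as described in the abstract above; the in_use callback and the class name are hypothetical.
    ```python
    from collections import deque

    class ReleaseAggregator:
        """Illustrative accumulation of buffer-management release requests into one batched release."""

        def __init__(self, threshold, in_use):
            self.threshold = threshold
            self.in_use = in_use                 # callable: region id -> bool (management info still in use?)
            self.pending = deque()               # first release requests, oldest output time first

        def add_first_release(self, region_id):
            self.pending.append(region_id)
            if len(self.pending) >= self.threshold:
                return self._emit_second_release()
            return None

        def _emit_second_release(self):
            """Pick releasable regions starting from the oldest request and batch them."""
            selected, kept = [], deque()
            while self.pending:
                region_id = self.pending.popleft()
                (selected if not self.in_use(region_id) else kept).append(region_id)
            self.pending = kept                  # regions still in use stay accumulated
            return {"release_regions": selected} # the single second release request
    ```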
  • Patent number: 10067722
    Abstract: An administrator provisions a virtual disk in a remote storage platform and defines policies for that virtual disk. A virtual machine writes to and reads from the storage platform using any storage protocol. Virtual disk data within a failed storage pool is migrated to different storage pools while still respecting the policies of each virtual disk. Snapshot and revert commands are given for a virtual disk at a particular point in time and overhead is minimal. A virtual disk is cloned utilizing snapshot information and no data need be copied. Any number of Zookeeper clusters are executing in a coordinated fashion within the storage platform, thus increasing overall throughput. A timestamp is generated that guarantees a monotonically increasing counter, even upon a crash of a virtual machine. Any virtual disk has a “hybrid cloud aware” policy in which one replica of the virtual disk is stored in a public cloud.
    Type: Grant
    Filed: July 2, 2014
    Date of Patent: September 4, 2018
    Assignee: HEDVIG, INC.
    Inventor: Avinash Lakshman
  • Patent number: 10019375
    Abstract: A cache device has a data memory capable of storing a piece of first cache line data and a piece of second cache line data for first and second ways in compressed form, and a tag memory configured to store, for each of the pieces of cache line data, a piece of tag data including uncompressed data writing state information, an absence flag, and a compression information field. In case of modifying only part of a cache line, i.e., a partial write, a request converter converts a write request into a read request, and a read-out piece of data is decompressed and written in a write status buffer. Data may be written from the write status buffer to the data memory without being compressed, which eliminates a need for decompression and compression for every writing or modifying operation of a piece of partial data, thereby reducing latency and power consumption.
    Type: Grant
    Filed: September 19, 2016
    Date of Patent: July 10, 2018
    Assignee: TOSHIBA MEMORY CORPORATION
    Inventors: Hiroyuki Usui, Seiji Maeda
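    A minimal Python sketch of handling a partial write to a compressed cache line through a write status buffer, as described in the abstract above; zlib stands in for the compressor, and the tag fields are reduced to a single flag.
    ```python
    import zlib

    class CompressedCacheLine:
        """Illustrative handling of a partial write to a compressed cache line via a write status buffer."""

        def __init__(self, line_size=64):
            self.line_size = line_size
            self.stored = zlib.compress(bytes(line_size))   # line held in compressed form in the data memory
            self.is_compressed = True                        # compression information field (simplified)
            self.write_status_buffer = None                  # uncompressed staging buffer

        def partial_write(self, offset, data):
            """Convert the partial write into a read, decompress once, and merge in the buffer."""
            if self.write_status_buffer is None:
                line = zlib.decompress(self.stored) if self.is_compressed else self.stored
                self.write_status_buffer = bytearray(line)
            data = data[: self.line_size - offset]           # keep the line length unchanged
            self.write_status_buffer[offset:offset + len(data)] = data

        def write_back(self):
            """Write the buffered line back to the data memory uncompressed, avoiding recompression."""
            if self.write_status_buffer is not None:
                self.stored = bytes(self.write_status_buffer)
                self.is_compressed = False                   # uncompressed-data-writing state in the tag
                self.write_status_buffer = None
    ```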
  • Patent number: 10019583
    Abstract: A Protected Walk-based Shadow Paging (PWSP) method includes storing a multiple level first stage (S1) page tables structure in second stage (S2) page tables. The method includes: when an S1 page table in an S2 page table entry is marked with a writable attribute: (i) permitting an operating system (OS) to write to the S1 page table, (ii) blocking a memory management unit (MMU) from reading the S1 page table for translation, and (iii) in response, verifying the S1 page table for translation and changing the marking of the S1 page table in the S2 page table entry to a read-only attribute, enabling the MMU to subsequently read the S1 page table.
    Type: Grant
    Filed: April 1, 2016
    Date of Patent: July 10, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Kirk R. Swidowski, Ahmed M. Azab
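    A minimal Python sketch of the writable/read-only state flow for one S1 page table tracked in the S2 tables, as described in the abstract above; the verify callback and the class structure are hypothetical.
    ```python
    class ShadowedS1PageTable:
        """Illustrative state flow for one S1 page table entry tracked in the S2 tables (PWSP)."""

        WRITABLE, READ_ONLY = "writable", "read_only"

        def __init__(self):
            self.s2_attribute = self.WRITABLE    # the OS may still edit this S1 table
            self.entries = {}                    # virtual page -> physical page

        def os_write(self, virtual_page, physical_page):
            """The OS may write the S1 table only while the S2 entry is marked writable."""
            if self.s2_attribute != self.WRITABLE:
                raise PermissionError("S1 table is read-only; OS writes are trapped")
            self.entries[virtual_page] = physical_page

        def mmu_translate(self, virtual_page, verify):
            """MMU reads for translation are blocked while writable; verification flips it to read-only."""
            if self.s2_attribute == self.WRITABLE:
                if not verify(self.entries):              # e.g. check the mappings against a security policy
                    raise ValueError("S1 page table failed verification")
                self.s2_attribute = self.READ_ONLY        # the MMU may subsequently read this table
            return self.entries[virtual_page]
    ```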
  • Patent number: 10019360
    Abstract: Apparatus and methods implementing a hardware predictor for reducing performance inversions caused by intra-core data transfer during inter-core data transfer optimization for network function virtualizations (NFVs) and other producer-consumer workloads. An apparatus embodiment includes a plurality of hardware processor cores each including a first cache, a second cache shared by the plurality of hardware processor cores, and a predictor circuit to track the number of inter-core versus intra-core accesses to a plurality of monitored cache lines in the first cache and control enablement of a cache line demotion instruction, such as a cache line LLC allocation (CLLA) instruction, based upon the tracked accesses. An execution of the cache line demotion instruction by one of the plurality of hardware processor cores causes a plurality of unmonitored cache lines in the first cache to be moved to the second cache, such as from L1 or L2 caches to a shared L3 or last level cache (LLC).
    Type: Grant
    Filed: September 26, 2015
    Date of Patent: July 10, 2018
    Assignee: Intel Corporation
    Inventors: Ren Wang, Andrew J. Herdrich, Christopher B. Wilkerson
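    A minimal Python sketch of a predictor that enables cache line demotion only when inter-core accesses dominate, as described in the abstract above; the threshold, the counter scheme, and the list-based stand-ins for caches are illustrative assumptions.
    ```python
    class DemotionPredictor:
        """Illustrative predictor enabling or disabling cache line demotion (a CLLA-style hint)."""

        def __init__(self, enable_threshold=4):
            self.enable_threshold = enable_threshold
            self.inter_core = 0        # accesses to monitored lines from a different core
            self.intra_core = 0        # accesses from the core that produced the line
            self.demotion_enabled = False

        def record_access(self, producing_core, accessing_core):
            """Track inter-core versus intra-core accesses to the monitored cache lines."""
            if producing_core == accessing_core:
                self.intra_core += 1
            else:
                self.inter_core += 1
            # Enable demotion only when consumers mostly sit on other cores; otherwise demoting
            # lines to the shared cache would penalize the producing core's own reuse.
            self.demotion_enabled = (self.inter_core - self.intra_core) >= self.enable_threshold

        def maybe_demote(self, unmonitored_lines, shared_cache):
            """When enabled, move unmonitored lines from the private cache to the shared cache."""
            if self.demotion_enabled:
                shared_cache.extend(unmonitored_lines)
                unmonitored_lines.clear()
    ```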
  • Patent number: 10013359
    Abstract: A redundant disk array method includes allocating identically sized logical blocks of storage units together to form a stripe on each of several data storage devices, at least two of the logical blocks in the stripe being located on different data storage devices, generating a lookup table representing a mapping between a logical location of each logical block in the stripe and a physical location of the respective logical block on the corresponding data storage device, and writing data to the physical locations of each logical block in the stripe, the physical locations being obtained from the lookup table. In some cases, at least two of the data storage devices are heterogeneous, and at least two of the data storage devices have a different total number of logical blocks.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: July 3, 2018
    Assignee: University of New Hampshire
    Inventors: András Krisztián Fekete, Elizabeth Varki
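    A minimal Python sketch of building the per-stripe lookup table and writing through it, as described in the abstract above; the device dictionaries, the free-block lists, and the handle objects are hypothetical.
    ```python
    def build_stripe(devices):
        """Illustrative stripe allocation and lookup-table construction across (possibly heterogeneous) devices.

        devices: list of dicts like {"name": "sda", "free_blocks": [17, 42, ...]} with identically
        sized logical blocks; the devices may have different total numbers of blocks.
        """
        lookup_table = {}                   # logical block index in stripe -> (device name, physical block)
        for logical_index, device in enumerate(devices):
            physical_block = device["free_blocks"].pop(0)
            lookup_table[logical_index] = (device["name"], physical_block)
        return lookup_table

    def write_stripe(lookup_table, device_handles, data_blocks):
        """Write each logical block of the stripe to the physical location obtained from the lookup table."""
        for logical_index, block in enumerate(data_blocks):
            name, physical_block = lookup_table[logical_index]
            device_handles[name].write(physical_block, block)
    ```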