Patents Examined by Thanh Vo
  • Patent number: 9916239
    Abstract: The embodiments relate to a computer system, computer program product and method for managing a garbage collection process. Processing control is obtained based on execution of a load instruction and a determination that an object pointer to be loaded indicates a location within a selected portion of memory undergoing a garbage collection process. The determination includes identifying a base address and size of a first memory block subject to the garbage collection, subdividing the first memory block into sections, assigning a binary value to each section, and determining if the first memory block corresponds to the enabled section. An image of the load instruction is obtained and a pointer address is calculated from the image. The object pointer is read and it is determined whether the object pointer is to be modified. The object pointer is modified and stored in a selected location.
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: March 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Giles R. Frazier, Michael Karl Gschwind, Younes Manton, Karl M. Taylor, Brian W. Thompto
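    Illustrative sketch: the abstract above describes checking whether an object pointer being loaded falls inside a section of a memory block that is currently under garbage collection, with one binary value kept per section. The Python below is a minimal software analogue of that check; the names, sizes, and the forwarding-table fix-up are assumptions for illustration, not the patented hardware mechanism.

      SECTION_COUNT = 64                       # number of sections (assumed)

      class GcRegion:
          """Memory block under collection, subdivided into equal sections."""
          def __init__(self, base, size):
              self.base = base
              self.size = size
              self.section_size = size // SECTION_COUNT
              self.enabled = [False] * SECTION_COUNT   # one binary value per section

          def needs_barrier(self, pointer):
              """True if the pointer lies in a section currently being collected."""
              if not (self.base <= pointer < self.base + self.size):
                  return False
              section = (pointer - self.base) // self.section_size
              return self.enabled[section]

      def load_pointer(region, pointer, forwarding_table):
          """Load an object pointer, diverting to a fix-up path when required."""
          if region.needs_barrier(pointer):
              # The handler may have relocated the object; use the new address.
              pointer = forwarding_table.get(pointer, pointer)
          return pointer

    For example, with region = GcRegion(0x100000, 0x400000) and region.enabled[0] set, load_pointer(region, 0x100010, {0x100010: 0x500010}) returns the forwarded address.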
  • Patent number: 9891847
    Abstract: A storage device with a memory may improve yield by reducing the allocation of blocks for secondary writes in a dual programming system. In a dual programming system, all host writes are written to both a primary copy and a secondary copy. If the secondary copy blocks that are available have a higher endurance, then the overall allocation of available blocks for use as secondary copy blocks can be reduced (improving yield). In one embodiment, different trim parameters may be used for the secondary copy blocks to increase the endurance of those blocks. Before programming the secondary copy, the trim parameters may be adjusted to increase endurance, and after programming the secondary copy, the trim parameters may be adjusted back to the default value that is used when programming the primary copy.
    Type: Grant
    Filed: July 28, 2015
    Date of Patent: February 13, 2018
    Assignee: SanDisk Technologies LLC
    Inventors: Narendhiran Chinnaanangur Ravimohan, Abhijeet Manohar, Muralitharan Jayaraman
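    Illustrative sketch: the abstract above programs the secondary copy with endurance-oriented trim parameters and then restores the defaults. The sketch below assumes a controller exposing set_trim and program calls; the parameter names and values are invented for the example and do not come from the patent.

      DEFAULT_TRIM = {"program_voltage": 1.0, "verify_level": 1.0}        # assumed
      HIGH_ENDURANCE_TRIM = {"program_voltage": 0.9, "verify_level": 0.95}

      class StubController:
          """Stand-in for a flash controller so the sketch runs on its own."""
          def __init__(self):
              self.trim = dict(DEFAULT_TRIM)
              self.blocks = {}

          def set_trim(self, trim):
              self.trim = dict(trim)

          def program(self, block, data):
              self.blocks[block] = (bytes(data), dict(self.trim))

      def dual_write(ctrl, data, primary_block, secondary_block):
          ctrl.set_trim(DEFAULT_TRIM)              # primary copy: default trim
          ctrl.program(primary_block, data)
          ctrl.set_trim(HIGH_ENDURANCE_TRIM)       # secondary copy: endurance trim
          try:
              ctrl.program(secondary_block, data)
          finally:
              ctrl.set_trim(DEFAULT_TRIM)          # restore defaults afterwards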
  • Patent number: 9892057
    Abstract: In a network element, a decision apparatus has a plurality of multi-way hash tables of single size and double size associative entries. A logic pipeline extracts a search key from each of a sequence of received data items. A hash circuit applies first and second hash functions to the search key to generate first and second indices. A lookup circuit reads the associative entries in the hash tables indicated respectively by the first and second indices and matches the search key against the associative entries in all the ways; upon finding a match between the search key and an entry key in an indicated associative entry, the value of that entry is used. A processor inserts associative entries from a stash of associative entries into the hash tables in accordance with a single size and a double size cuckoo insertion procedure.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: February 13, 2018
    Assignee: Mellanox Technologies TLV Ltd.
    Inventors: Gil Levy, Salvatore Pontarelli, Pedro Reviriego
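    Illustrative sketch: a deliberately simplified software version of the lookup and insertion path above, with two hash functions, two tables, and a stash for entries that cannot be placed. It ignores the single/double size entry distinction and the hardware pipeline; the hash choice, table size, and kick limit are assumptions.

      import hashlib

      class CuckooTable:
          def __init__(self, buckets=64, max_kicks=32):
              self.buckets = buckets
              self.max_kicks = max_kicks
              self.tables = [[None] * buckets, [None] * buckets]
              self.stash = {}                      # overflow entries

          def _index(self, which, key):
              digest = hashlib.blake2b(key, person=bytes([which])).digest()
              return int.from_bytes(digest[:4], "little") % self.buckets

          def lookup(self, key):
              for which in (0, 1):
                  slot = self.tables[which][self._index(which, key)]
                  if slot is not None and slot[0] == key:
                      return slot[1]
              return self.stash.get(key)

          def insert(self, key, value):
              entry = (key, value)
              which = 0
              for _ in range(self.max_kicks):
                  idx = self._index(which, entry[0])
                  if self.tables[which][idx] is None:
                      self.tables[which][idx] = entry
                      return
                  # Displace the occupant and retry it in the other table.
                  entry, self.tables[which][idx] = self.tables[which][idx], entry
                  which ^= 1
              self.stash[entry[0]] = entry[1]      # give up; park it in the stash

    Keys are byte strings here, e.g. t = CuckooTable(); t.insert(b"flow-1", "port 3"); t.lookup(b"flow-1") returns "port 3".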
  • Patent number: 9892043
    Abstract: A computer system comprising multiple nodes, each node comprising a plurality of processors and a local cache hierarchy, suppresses local cache coherency operations within a node or global cache coherency operations between nodes based on whether the coherency request is a global or local request and on the state of the cache line at the node.
    Type: Grant
    Filed: April 27, 2017
    Date of Patent: February 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Garrett Michael Drapala, William J Lewis, Pak-kin Mak, Robert J Sonnelitter, III
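    Illustrative sketch: the kind of decision the abstract above describes, in software form. The states and rules below are generic MESI-style assumptions, not the protocol claimed in the patent; they only show how request scope plus the local line state can suppress one class of coherency operation or the other.

      def coherency_actions(request_scope, line_state):
          """Return which coherency operations to run: 'local', 'global', or both."""
          actions = set()
          if request_scope == "local":
              actions.add("local")
              # Escalate to a global operation only if the line may also be
              # cached on other nodes, i.e. this node does not own it outright.
              if line_state not in ("exclusive", "modified"):
                  actions.add("global")
          else:                                    # request arriving from another node
              actions.add("global")
              # Suppress the local walk when the line is not cached here at all.
              if line_state != "invalid":
                  actions.add("local")
          return actions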
  • Patent number: 9880906
    Abstract: Embodiments include methods, apparatus, and systems for managing resources in a physical storage library behind a virtual storage library. In one embodiment, priorities are assigned to copy applications, and rules determine which applications are assigned to resources in the physical storage library and when.
    Type: Grant
    Filed: June 27, 2007
    Date of Patent: January 30, 2018
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Stephen Gold, Shannon Moyes Clark
  • Patent number: 9875026
    Abstract: Techniques to send and receive access commands are provided. The access commands may include an expected media position. The expected media position may be compared to an actual media position.
    Type: Grant
    Filed: June 29, 2011
    Date of Patent: January 23, 2018
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Curtis C Ballard, Kevin Lloyd Jones
  • Patent number: 9830271
    Abstract: Embodiments present a virtual disk image to applications such as virtual machines (VMs) executing on a computing device. The virtual disk image corresponds to one or more subparts of binary large objects (blobs) of data stored by a cloud service, and is implemented in a log structured format. Grains of the virtual disk image are cached by the computing device. The computing device caches only a subset of the grains and performs write operations without blocking the applications to reduce storage latency perceived by the applications. Some embodiments enable the applications that lack enterprise class storage to benefit from enterprise class cloud storage services.
    Type: Grant
    Filed: July 25, 2012
    Date of Patent: November 28, 2017
    Assignee: VMware, Inc.
    Inventors: Thomas A. Phelan, Erik Cota-Robles, David William Barry, Adam Back
  • Patent number: 9830269
    Abstract: Methods and systems for a storage system are provided. Simulated cache blocks of a cache system are tracked using cache metadata while performing a workload having a plurality of storage operations. The cache metadata is segmented, each segment corresponding to a cache size. Predictive statistics are determined for each cache size using a corresponding segment of the cache metadata. The predictive statistics are used to determine an amount of data that is written for each cache size within a certain duration. The process then determines if each cache size provides an endurance level after executing a certain number of write operations, where the endurance level indicates a desired life-cycle for each cache size.
    Type: Grant
    Filed: July 29, 2014
    Date of Patent: November 28, 2017
    Assignee: NetApp Inc.
    Inventors: Brian D. McKean, Donald R. Humlicek
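    Illustrative sketch: a rough software analogue of the idea above, replaying a workload against simulated LRU caches of several sizes, totaling the data each size would write, and testing that total against an endurance budget. The block size, P/E rating, and target lifetime are assumptions, and the patent's segmented cache metadata is collapsed into one independent simulation per cache size.

      from collections import OrderedDict

      BLOCK = 4096                    # bytes per cached block (assumed)
      PE_CYCLES = 3000                # rated program/erase cycles (assumed)
      TARGET_YEARS = 5                # desired life-cycle (assumed)

      def bytes_written(workload, cache_blocks):
          """Bytes a cache of `cache_blocks` blocks would write for the workload."""
          lru = OrderedDict()
          written = 0
          for op, block in workload:               # op is 'read' or 'write'
              hit = block in lru
              if hit:
                  lru.move_to_end(block)
              if op == "write" or not hit:         # host write, or fill on a miss
                  written += BLOCK
                  lru[block] = True
                  lru.move_to_end(block)
                  if len(lru) > cache_blocks:
                      lru.popitem(last=False)      # evict least recently used
          return written

      def meets_endurance(written_per_day, cache_blocks):
          budget = cache_blocks * BLOCK * PE_CYCLES
          return written_per_day * 365 * TARGET_YEARS <= budget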
  • Patent number: 9823977
    Abstract: According to certain aspects, a system includes a client device that includes a virtual machine (VM) executed by a hypervisor, a driver located within the hypervisor, and a data agent. The VM may include a virtual hard disk file and a change block bitmap file. The driver may intercept a first write operation generated by the VM to store data in a first sector, determine an identity of the first sector based on the intercepted write operation, determine an entry in the change block bitmap file that corresponds with the first sector, and modify the entry in the change block bitmap file to indicate that data in the first sector has changed. The data agent may generate an incremental backup of the VM based on the change block bitmap file in response to an instruction from a storage manager, where the incremental backup includes the data in the first sector.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: November 21, 2017
    Assignee: Commvault Systems, Inc.
    Inventors: Henry Wallace Dornemann, Rahul S. Pawar
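    Illustrative sketch: the change-block-bitmap idea above in miniature, with a write interceptor flipping one bit per written sector and an incremental backup pass copying only the flagged sectors. The class and function names are invented for the example and are not Commvault interfaces.

      class ChangeTracker:
          def __init__(self, total_sectors):
              self.bitmap = bytearray((total_sectors + 7) // 8)

          def on_write(self, sector):
              """Called when a guest write to `sector` is intercepted."""
              self.bitmap[sector // 8] |= 1 << (sector % 8)

          def changed_sectors(self):
              for sector in range(len(self.bitmap) * 8):
                  if self.bitmap[sector // 8] & (1 << (sector % 8)):
                      yield sector

          def clear(self):
              self.bitmap = bytearray(len(self.bitmap))

      def incremental_backup(tracker, read_sector, backup_store):
          """Copy only the sectors flagged since the last backup, then reset."""
          for sector in tracker.changed_sectors():
              backup_store[sector] = read_sector(sector)
          tracker.clear()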
  • Patent number: 9817769
    Abstract: In one embodiment, a method includes receiving a translation vector, selecting a translation entry from a plurality of translation entries, and determining whether the translation entry is associated with a first identifier class or a second identifier class. The translation vector includes a first identifier, a second identifier, and a virtual memory identifier. The first identifier is associated with the first identifier class, and the second identifier is associated with the second identifier class. The translation vector is received from a translation module including a memory configured to store the plurality of translation entries. Each translation entry from the plurality of translation entries includes a virtual memory identifier. The translation entry is selected from the plurality of translation entries of the translation module based on the virtual memory identifier of the translation vector.
    Type: Grant
    Filed: April 10, 2017
    Date of Patent: November 14, 2017
    Assignee: Juniper Networks, Inc.
    Inventors: Xiangwen Xu, Hexin Wang, Xiang Zhu
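    Illustrative sketch: the selection-and-classification step described above, reduced to plain data structures. The tuple layout and the use of small integers for the identifier classes are assumptions made for the example.

      from collections import namedtuple

      TranslationVector = namedtuple("TranslationVector", "first_id second_id vm_id")
      TranslationEntry = namedtuple("TranslationEntry", "vm_id id_class identifier result")

      def translate(vector, entries):
          """Pick an entry by virtual memory identifier, then match it against the
          identifier of whichever class the entry is associated with."""
          for entry in entries:
              if entry.vm_id != vector.vm_id:
                  continue
              wanted = vector.first_id if entry.id_class == 1 else vector.second_id
              if entry.identifier == wanted:
                  return entry.result
          return None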
  • Patent number: 9760492
    Abstract: A method for controlling access of a cache includes at least following steps: receiving a memory address; utilizing a hashing address logic to perform a programmable hash function upon at least a portion of the memory address to generate a hashing address; and determining an index of the cache based at least partly on the hashing address.
    Type: Grant
    Filed: November 21, 2014
    Date of Patent: September 12, 2017
    Assignee: MediaTek Singapore Pte. Ltd.
    Inventors: Hsilin Huang, Cheng-Ying Ko, Hsin-Hao Chung, Chao-Chin Chen
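    Illustrative sketch: one plausible reading of a programmable hash over a memory address, where software-loaded bit masks select which address bits are XOR-folded into each index bit, and the result is mixed with a few untouched set bits so the index is based at least partly on the hashing address. The mask scheme and widths are assumptions, not the circuit claimed in the patent.

      INDEX_BITS = 8                           # 256-set cache (assumed)

      def programmable_hash(address, masks):
          """XOR-fold the address bits selected by each mask into one index bit."""
          index = 0
          for bit, mask in enumerate(masks):
              parity = bin(address & mask).count("1") & 1
              index |= parity << bit
          return index

      def cache_index(address, masks, plain_bits=4):
          low = (address >> 6) & ((1 << plain_bits) - 1)   # skip 64-byte line offset
          hashed = programmable_hash(address, masks)
          return ((hashed << plain_bits) | low) % (1 << INDEX_BITS)

    For example, cache_index(0xDEADBEEF, [0x000FF000, 0x0FF00000, 0xF00F0000, 0x00F0F000]) combines four hashed bits with four plain set bits.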
  • Patent number: 9734062
    Abstract: An apparatus comprising a memory and a controller. The memory may be configured to (i) implement a cache and (ii) store meta-data. The cache comprises one or more cache windows. Each of the one or more cache windows comprises a plurality of cache-lines configured to store information. Each of the cache-lines comprises a plurality of sub-cache lines. Each of the plurality of cache-lines and each of the plurality of sub-cache lines is associated with meta-data indicating one or more of a dirty state and an invalid state. The controller is connected to the memory and configured to (i) recognize sub-cache line boundaries and (ii) process the I/O requests in multiples of a size of said sub-cache lines to minimize cache-fills.
    Type: Grant
    Filed: December 18, 2013
    Date of Patent: August 15, 2017
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Saugata Das Purkayastha, Luca Bert, Horia Simionescu, Kishore Kaniyar Sampathkumar, Mark Ish
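    Illustrative sketch: per-sub-cache-line valid and dirty flags, plus the rounding of a byte-addressed request to whole sub-cache lines so that only invalid sub-cache lines need a fill. The sizes below are assumptions.

      SUB_LINE = 4 * 1024            # bytes per sub-cache line (assumed)
      SUBS_PER_LINE = 16             # sub-cache lines per cache line (assumed)

      class CacheLine:
          def __init__(self):
              self.valid = [False] * SUBS_PER_LINE
              self.dirty = [False] * SUBS_PER_LINE

          def write(self, first_sub, count):
              for i in range(first_sub, first_sub + count):
                  self.valid[i] = True
                  self.dirty[i] = True

          def subs_to_fill(self, first_sub, count):
              """Only invalid sub-cache lines in the touched range need a fill."""
              return [i for i in range(first_sub, first_sub + count)
                      if not self.valid[i]]

      def span_in_sub_lines(offset, length):
          """Round a byte-addressed request to whole sub-cache lines."""
          first = offset // SUB_LINE
          last = (offset + length - 1) // SUB_LINE
          return first, last - first + 1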
  • Patent number: 9734052
    Abstract: The embodiments relate to a method for managing a garbage collection process. The method includes executing a garbage collection process on a memory block of user address space. A load instruction is run. Running the load instruction includes loading content of a storage location into a processor. The loaded content corresponds to a memory address. It is determined if the garbage collection process is being executed at the memory address. The load instruction is diverted to a process to move an object at the memory address to a location outside of the memory block in response to determining that the garbage collection process is being executed at the memory address. The load instruction is continued in response to determining that the garbage collection process is not being executed at the memory address.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: August 15, 2017
    Assignee: International Business Machines Corporation
    Inventors: Giles R. Frazier, Michael Karl Gschwind, Younes Manton, Karl M. Taylor, Brian W. Thompto
  • Patent number: 9727464
    Abstract: A computer system comprising multiple nodes, each node comprising a plurality of processors and a local cache hierarchy, suppresses local cache coherency operations within a node or global cache coherency operations between nodes based on whether the coherency request is a global or local request and on the state of the cache line at the node.
    Type: Grant
    Filed: November 20, 2014
    Date of Patent: August 8, 2017
    Assignee: International Business Machines Corporation
    Inventors: Garrett Michael Drapala, William J Lewis, Pak-kin Mak, Robert J Sonnelitter, III
  • Patent number: 9720833
    Abstract: A computer system comprising multiple nodes, each node comprising a plurality of processors and a local cache hierarchy, suppresses local cache coherency operations within a node or global cache coherency operations between nodes based on whether the coherency request is a global or local request and on the state of the cache line at the node.
    Type: Grant
    Filed: August 3, 2015
    Date of Patent: August 1, 2017
    Assignee: International Business Machines Corporation
    Inventors: Garrett Michael Drapala, William J Lewis, Pak-kin Mak, Robert J Sonnelitter, III
  • Patent number: 9697117
    Abstract: The embodiments relate to a method for managing a garbage collection process. The method includes executing a garbage collection process on a memory block of user address space. A load instruction is run. Running the load instruction includes loading content of a storage location into a processor. The loaded content corresponds to a memory address. It is determined if the garbage collection process is being executed at the memory address. The load instruction is diverted to a process to move an object at the memory address to a location outside of the memory block in response to determining that the garbage collection process is being executed at the memory address. The load instruction is continued in response to determining that the garbage collection process is not being executed at the memory address.
    Type: Grant
    Filed: August 24, 2015
    Date of Patent: July 4, 2017
    Assignee: International Business Machines Corporation
    Inventors: Giles R. Frazier, Michael Karl Gschwind, Younes Manton, Karl M. Taylor, Brian W. Thompto
  • Patent number: 9696932
    Abstract: Guaranteeing space availability for thin devices includes reserving space without committing, or fully pre-allocating, the space to specific thin device ranges. Space may be held in reserve for a particular set of thin devices and consumed as needed by those thin devices. The system guards user-critical devices from running out of space, for example due to a "rogue device" scenario in which one device allocates an excessive amount of space. The system uses a reservation entity, to which a thin device may subscribe, which reserves space for the thin device without allocating that space before it is needed to service an I/O request.
    Type: Grant
    Filed: October 5, 2015
    Date of Patent: July 4, 2017
    Assignee: EMC IP Holding Company LLC
    Inventors: Igor Fradkin, Alexandr Veprinsky, John Fitzgerald, Magnus E. Bjornsson
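    Illustrative sketch: a pool-level reservation entity that holds space back for subscribed thin devices and converts it into real allocations only when an I/O needs it, so an unsubscribed "rogue" device cannot drain the reserve. The names and allocation policy are assumptions.

      class ReservationEntity:
          def __init__(self, pool_free, reserved):
              self.pool_free = pool_free       # space available to any device
              self.reserved = reserved         # space held for subscribers only
              self.subscribers = set()

          def subscribe(self, device):
              self.subscribers.add(device)

          def allocate(self, device, amount):
              """Allocate on demand, falling back to the reservation for subscribers."""
              if self.pool_free >= amount:
                  self.pool_free -= amount
                  return True
              if device in self.subscribers and self.reserved >= amount:
                  self.reserved -= amount      # consume reserved space as needed
                  return True
              return False                     # a rogue device is refused here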
  • Patent number: 9684594
    Abstract: A coordinating node acts as a write back cache, isolating local cache storage endpoints from latencies associated with accessing geographically remote cloud cache and storage resources.
    Type: Grant
    Filed: July 16, 2014
    Date of Patent: June 20, 2017
    Assignee: ClearSky Data
    Inventors: Lazarus Vekiarides, Daniel Suman, Janice Ann Lacy
  • Patent number: 9658777
    Abstract: A storage module and host device for storage module defragmentation are disclosed. In one embodiment, a host controller sends a storage module a first set of logical block addresses of a file stored in the storage module. The host controller receives a metric from the storage module indicative of a fragmentation level of the file in physical blocks of memory in the storage module. If the metric is greater than a threshold, the host controller reads the file and then writes it back to the storage module using a different set of logical block addresses. To avoid sending the file back and forth, in another embodiment, the host controller sends the fragmentation threshold and the different set of logical block addresses to the storage module. The storage module then moves the file itself if the metric indicative of the fragmentation level is greater than the threshold. Other embodiments are provided.
    Type: Grant
    Filed: April 9, 2014
    Date of Patent: May 23, 2017
    Assignee: SanDisk Technologies LLC
    Inventors: Yacov Duzly, Hadas Oshinsky, Shahar Bar-Or, Judah Gamliel Hahn
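    Illustrative sketch: the host-side flow of the first embodiment above, assuming a storage object that exposes fragmentation_metric, read, and write calls (a hypothetical interface, not an actual SanDisk command set). The second embodiment, in which the storage module moves the file itself, is not sketched.

      FRAG_THRESHOLD = 0.5                     # assumed threshold, 0.0 .. 1.0

      def defragment_if_needed(storage, file_lbas, new_lbas):
          """Rewrite the file to a different set of LBAs if it is too fragmented."""
          metric = storage.fragmentation_metric(file_lbas)
          if metric <= FRAG_THRESHOLD:
              return False
          data = [storage.read(lba) for lba in file_lbas]     # read the file back
          for lba, block in zip(new_lbas, data):              # write it sequentially
              storage.write(lba, block)
          return True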
  • Patent number: 9652389
    Abstract: A coordinating node maintains globally consistent logical block address (LBA) metadata for a hierarchy of caches, which may be implemented in local and cloud based storage resources. Associated storage endpoints initially determine a hash associated with each access request, but forward the access request to the coordinating node to determine a unique discriminator for each hash.
    Type: Grant
    Filed: July 16, 2014
    Date of Patent: May 16, 2017
    Assignee: ClearSky Data
    Inventors: Lazarus Vekiarides, Daniel Suman, Janice Ann Lacy
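    Illustrative sketch: the division of labor described above, with the storage endpoint computing the content hash and the coordinating node handing out the discriminator that keeps each (hash, discriminator) pair globally unique. The protocol, hash choice, and data structures are assumptions.

      import hashlib

      class CoordinatingNode:
          def __init__(self):
              self.known = {}                  # hash -> {content: discriminator}

          def discriminator_for(self, digest, content):
              variants = self.known.setdefault(digest, {})
              if content not in variants:
                  variants[content] = len(variants)    # next unused value
              return variants[content]

      class StorageEndpoint:
          def __init__(self, coordinator):
              self.coordinator = coordinator

          def address_for(self, block):
              digest = hashlib.sha256(block).hexdigest()
              # Forward to the coordinating node, which resolves hash collisions
              # by assigning a per-content discriminator.
              disc = self.coordinator.discriminator_for(digest, block)
              return (digest, disc)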