Patents Examined by Mano Padmanabhan
  • Patent number: 10216629
    Abstract: A data manager may include a data opaque interface configured to provide, to an arbitrarily selected page-oriented access method, interface access to page data storage that includes latch-free access to the page data storage. In another aspect, a swap operation may be initiated, of a portion of a first page in cache layer storage to a location in secondary storage, based on initiating a prepending of a partial swap delta record to a page state associated with the first page, the partial swap delta record including a main memory address indicating a storage location of a flush delta record that indicates a location in secondary storage of a missing part of the first page. In another aspect, a page manager may initiate a flush operation of a first page in cache layer storage to a location in secondary storage, based on atomic operations with flush delta records.
    Type: Grant
    Filed: December 7, 2016
    Date of Patent: February 26, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: David B. Lomet, Justin Levandoski, Sudipta Sengupta
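A minimal sketch of the latch-free update idea in patent 10216629, assuming a Bw-tree-style mapping table; the class and field names are illustrative, and the compare-and-swap is a Python stand-in for the atomic pointer operation the patent relies on. Deltas (including a partial-swap delta that records the flush delta's address) are prepended to the page state without taking a latch.

```python
# Sketch only: names and structures are assumptions, not the patented design.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Delta:
    kind: str                       # e.g. "flush", "partial-swap", "update"
    payload: dict
    next: Optional["Delta"] = None  # older page state the delta was prepended to

class MappingTable:
    def __init__(self):
        self.slots = {}             # page id -> head of delta chain (page state)

    def compare_and_swap(self, pid, expected, new):
        # Stand-in for a hardware CAS; real code would swap an atomic pointer.
        if self.slots.get(pid) is expected:
            self.slots[pid] = new
            return True
        return False

    def prepend_delta(self, pid, kind, payload):
        while True:                 # retry loop typical of latch-free updates
            head = self.slots.get(pid)
            delta = Delta(kind, payload, next=head)
            if self.compare_and_swap(pid, head, delta):
                return delta

table = MappingTable()
flush = table.prepend_delta(7, "flush", {"secondary_offset": 0x4000})
table.prepend_delta(7, "partial-swap", {"flush_delta_addr": id(flush)})
```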
  • Patent number: 10216536
    Abstract: Memory data for a virtual machine can be stored in a swap file, which is composed of storage blocks. A defragmentation procedure can be performed on a thin swap file while the virtual machine is still running. The described defragmentation procedure traverses the page frame space of the virtual machine, identifies candidate page frames, relocates the swapped pages, and updates the page frames. Resulting unused storage blocks are released to the storage system. A data structure for aiding the defragmentation process is also described.
    Type: Grant
    Filed: March 11, 2016
    Date of Patent: February 26, 2019
    Assignee: VMware, Inc.
    Inventors: Ishan Banerjee, Preeti Agarwal, Jui-Hao Chiang
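An illustrative sketch of the defragmentation loop described in patent 10216536; the structures (a frame-to-slot map, a per-block usage count, a free-slot list) and the block size are assumptions made for the example, not the patented data structure.

```python
# Sketch only: walk the VM's page-frame space, move swapped pages out of
# sparsely used swap-file blocks, then release blocks that become empty.
BLOCK_SIZE = 4  # pages per swap-file storage block (illustrative)

def defragment(page_frames, block_usage, free_slots):
    """page_frames: frame -> swap slot or None; block_usage: block -> used count."""
    released = []
    for frame, slot in list(page_frames.items()):
        if slot is None:
            continue                       # frame is not swapped out
        block = slot // BLOCK_SIZE
        if block_usage[block] < BLOCK_SIZE and free_slots:
            new_slot = free_slots.pop(0)   # relocate the swapped page
            page_frames[frame] = new_slot  # update the page frame mapping
            block_usage[block] -= 1
            block_usage[new_slot // BLOCK_SIZE] += 1
            if block_usage[block] == 0:
                released.append(block)     # return unused block to storage
    return released

frames = {0: 9}                            # one swapped page, alone in block 2
usage = {0: 3, 2: 1}                       # block 0 mostly full, block 2 sparse
print(defragment(frames, usage, free_slots=[1]))   # -> [2]
```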
  • Patent number: 10204059
    Abstract: Embodiments of the present invention provide memory optimization by phase-dependent data residency. Application programs are profiled a priori or in real time for temporal memory usage. Memory regions such as initialization data are proactively removed from memory when the application transitions to a new phase. A hypervisor monitors application activity and coordinates the removal of memory regions that are no longer needed by the application. Additionally, memory regions that are anticipated to be needed in the future are proactively preloaded.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: February 12, 2019
    Assignee: International Business Machines Corporation
    Inventors: Peter D. Bain, Peter D. Shipton
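A small sketch of phase-dependent data residency as described in patent 10204059; the phase names, the profile table, and the eviction/preload rule are assumptions for illustration only.

```python
# Sketch only: a profile maps each application phase to the memory regions it
# needs; on a phase transition, regions no longer needed are evicted and
# regions anticipated for the next phase are preloaded.
PHASE_PROFILE = {
    "init":     {"init_data", "config"},
    "serving":  {"config", "hot_tables"},
    "shutdown": {"config"},
}

def on_phase_transition(resident, new_phase, next_phase=None):
    needed = set(PHASE_PROFILE[new_phase])
    if next_phase:                               # proactively preload
        needed |= PHASE_PROFILE[next_phase]
    evict = resident - needed                    # e.g. initialization data
    preload = needed - resident
    return (resident - evict) | preload, evict, preload

resident, evicted, loaded = on_phase_transition({"init_data", "config"}, "serving")
```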
  • Patent number: 10203880
    Abstract: A technique writes data to a storage array. The technique involves operating storage circuitry in a “FILL HOLE” mode in which the circuitry writes a stream of first data portions within storage portions of used storage stripes of the array. The technique further involves, after operating the circuitry in the “FILL HOLE” mode and in response to a first event, transitioning the circuitry from the “FILL HOLE” mode to a “STRIPE WRITE” mode in which the circuitry writes a stream of second data portions within unused storage stripes of the array. The technique further involves, after operating the circuitry in the “STRIPE WRITE” mode and in response to a second event, transitioning the circuitry from the “STRIPE WRITE” mode back to the “FILL HOLE” mode in which the circuitry writes a stream of third data portions within storage portions of used storage stripes of the array.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: February 12, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Philippe Armangau, Bruce E. Caram, Christopher A. Seibel, Christopher Jones
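A sketch of the two-mode write placement in patent 10203880; the transition events used here (holes nearly exhausted, free stripes running low) are assumptions standing in for the patent's unspecified first and second events.

```python
# Sketch only: fill holes in used stripes until a first event, then write whole
# stripes until a second event, then switch back to hole filling.
class StripeWriter:
    def __init__(self, holes, free_stripes):
        self.mode = "FILL_HOLE"
        self.holes = holes              # unused slots inside used stripes
        self.free_stripes = free_stripes

    def write(self, data):
        if self.mode == "FILL_HOLE" and self.holes:
            slot = self.holes.pop(0)
            if len(self.holes) <= 1:    # first event: holes nearly exhausted
                self.mode = "STRIPE_WRITE"
            return ("hole", slot, data)
        stripe = self.free_stripes.pop(0)
        if len(self.free_stripes) <= 1: # second event: free stripes run low
            self.mode = "FILL_HOLE"
        return ("stripe", stripe, data)

w = StripeWriter(holes=[(3, 1)], free_stripes=[7, 8, 9])
w.write("a"); w.write("b")              # fills the hole, then a full stripe
```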
  • Patent number: 10198301
    Abstract: A semiconductor device includes a central processing unit and a processor on one semiconductor substrate. The processor includes a buffer for storing a first register setting list and notifies the central processing unit of an access complete signal indicating completion of reading a second register setting list within a memory. The central processing unit changes the second register setting list within the memory based on the access complete signal and notifies the processor of an update request signal. The processor reads the second register setting list changed by the central processing unit into the buffer to update the first register setting list based on the update request signal.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: February 5, 2019
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventors: Tetsuji Tsuda, Masaru Hase, Yuki Inoue, Naohiro Nishikawa
  • Patent number: 10191820
    Abstract: Techniques for virtual proxy based backup of virtual machines in a cluster environment are disclosed. In some embodiments, each of a subset of virtual machines hosted by physical nodes in a cluster environment is configured as a virtual proxy dedicated to backup operations. During backup, data rollover of each virtual machine in the cluster environment that is subjected to backup is performed using a virtual proxy.
    Type: Grant
    Filed: April 24, 2017
    Date of Patent: January 29, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Soumen Acharya, Anupam Chakraborty, Sunil Yadav, Tushar Dethe
  • Patent number: 10191854
    Abstract: A system for providing both low-level, physical data access and high-level, logical data access to a single process is disclosed, having a data block table with a physical memory address portion and a logical memory address portion. Data blocks that are mapped to physical memory bypass multiple logical memory address layers, such as the operating system layer and a logical block address layer, while data blocks that are mapped to the logical memory will be routed through traditional API layers, providing both increased performance and flexibility.
    Type: Grant
    Filed: December 6, 2016
    Date of Patent: January 29, 2019
    Assignee: Levyx, Inc.
    Inventor: Ali Tootoonchian
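A rough sketch of the dual-path block table in patent 10191854; the field names and the reader callbacks are invented for the example, not the patented interfaces.

```python
# Sketch only: blocks mapped to physical memory are read directly, bypassing
# the OS and logical-block-address layers; logically mapped blocks go through
# the traditional API path.
def read_block(block_table, block_id, physical_read, logical_read):
    entry = block_table[block_id]
    if entry.get("physical_addr") is not None:
        return physical_read(entry["physical_addr"])   # fast, direct path
    return logical_read(entry["logical_addr"])         # flexible, layered path

table = {1: {"physical_addr": 0x1000, "logical_addr": None},
         2: {"physical_addr": None, "logical_addr": 42}}
data = read_block(table, 2, lambda a: f"raw@{a:#x}", lambda a: f"lba:{a}")
```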
  • Patent number: 10191812
    Abstract: A storage server includes an IO controller, a management controller and physical drives. The IO controller generates multiple metadata updates and writes a cache entry that includes the multiple metadata updates to a first cache in memory of the management controller. The IO controller additionally writes a copy of the cache entry to a second cache in a memory of the IO controller and increments a commit pointer in the first and second caches to indicate that the metadata updates are committed.
    Type: Grant
    Filed: March 30, 2017
    Date of Patent: January 29, 2019
    Assignee: Pavilion Data Systems, Inc.
    Inventors: Suhas Dantkale, Venkeepuram R. Satish, Raghuraman Govindasamy
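A sketch of the mirrored metadata cache with a commit pointer described in patent 10191812; the layout below is a guess made for illustration, not the actual cache format.

```python
# Sketch only: several metadata updates are batched into one cache entry,
# written to the management controller's cache and to the IO controller's own
# copy, then a commit pointer is advanced in both to mark them committed.
class MetadataCache:
    def __init__(self):
        self.entries = []
        self.commit_ptr = 0              # entries below this index are committed

def commit_updates(primary: MetadataCache, mirror: MetadataCache, updates):
    entry = list(updates)                # one cache entry holding several updates
    primary.entries.append(entry)        # management controller's cache
    mirror.entries.append(entry)         # copy in the IO controller's cache
    primary.commit_ptr += 1              # advancing both pointers marks the commit
    mirror.commit_ptr += 1

mgmt, io = MetadataCache(), MetadataCache()
commit_updates(mgmt, io, [("extent", 12, "alloc"), ("extent", 13, "alloc")])
```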
  • Patent number: 10185731
    Abstract: An apparatus has processing circuitry for processing instructions from multiple threads. A storage structure is shared between the threads and has a number of entries. Indexing circuitry generates a target index value identifying an entry of the storage structure to be accessed in response to a request from the processing circuitry specifying a requested index value corresponding to information to be accessed from the storage structure. The indexing circuitry generates the target index value as a function of the requested index value and a key value selected depending on which of the threads triggers the request. The key value for at least one of the threads is updated from time to time.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: January 22, 2019
    Assignee: ARM Limited
    Inventors: Mitchell Bryan Hayenga, Curtis Glenn Dunham, Dam Sunwoo
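A minimal sketch of per-thread index keying as in patent 10185731; the mixing function (an XOR) and the key-refresh policy are assumptions, not the patented function, but they show the shape of target-index = f(requested-index, per-thread key).

```python
# Sketch only: the entry actually accessed is a function of the requested index
# and a key chosen per thread, so one thread cannot predict where another
# thread's information lands in the shared structure.
import secrets

NUM_ENTRIES = 256
thread_keys = {tid: secrets.randbelow(NUM_ENTRIES) for tid in range(4)}

def target_index(requested_index, thread_id):
    # illustrative mixing function: XOR of the request with the thread's key
    return (requested_index ^ thread_keys[thread_id]) % NUM_ENTRIES

def rekey(thread_id):
    # "from time to time" the key value for a thread is refreshed
    thread_keys[thread_id] = secrets.randbelow(NUM_ENTRIES)
```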
  • Patent number: 10176098
    Abstract: A computer device including a node having a storage device having a plurality of first internal address spaces, a cache memory, and a processor may be provided. The processor may provide a virtual volume. The virtual volume may have a plurality of virtual address spaces including first virtual address spaces corresponding to the plurality of first internal address spaces. The processor may cache data of a virtual address space in a first cache space of the cache memory by associating the virtual address space with the first cache space. Further, the processor may cache data of a first internal address space of the first internal address spaces in a second cache space of the cache memory by associating the first internal address space with the second cache space.
    Type: Grant
    Filed: November 17, 2014
    Date of Patent: January 8, 2019
    Assignee: HITACHI, LTD.
    Inventor: Akira Deguchi
  • Patent number: 10157133
    Abstract: A data processing system having two or more processors that access a shared data resource, and a method of operation thereof. Data stored in a local cache is marked as being in a ‘UniqueDirty’, ‘SharedDirty’, ‘UniqueClean’, ‘SharedClean’ or ‘Invalid’ state. A snoop filter monitors access by the processors to the shared data resource, and includes snoop filter control logic and a snoop filter cache configured to maintain cache coherency. The snoop filter cache does not identify any local cache that stores the block of data in a ‘SharedDirty’ state, resulting in a smaller snoop filter cache size and simpler snoop control logic. The data processing system may be defined by instructions of a Hardware Description Language.
    Type: Grant
    Filed: December 10, 2015
    Date of Patent: December 18, 2018
    Assignee: Arm Limited
    Inventors: Jamshed Jalal, Mark David Werkheiser
  • Patent number: 10146441
    Abstract: An arithmetic processing device includes: a processor that issues a store command and a load command; and a memory coupled to the processor, wherein the processor: includes a cache memory which stores data to be stored corresponding to the store command and a buffer including entries which store the data to be stored; searches, in a case where the load command is issued, the entries; and selects, when data to be loaded corresponding to the load command is present in the entries, the data to be loaded from the buffer.
    Type: Grant
    Filed: January 26, 2017
    Date of Patent: December 4, 2018
    Assignee: FUJITSU LIMITED
    Inventor: Masaharu Maruyama
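A simplified sketch of the store-to-load forwarding behavior described in patent 10146441; the class and method names are assumptions for the example.

```python
# Sketch only: on a load, the buffer of pending store entries is searched, and
# if a matching address is found the data is returned from the buffer instead
# of the cache.
class StoreBuffer:
    def __init__(self):
        self.entries = []                        # (address, data), newest last

    def store(self, address, data):
        self.entries.append((address, data))

    def load(self, address, cache):
        for addr, data in reversed(self.entries):  # newest matching store wins
            if addr == address:
                return data                      # forwarded from the buffer
        return cache.get(address)                # otherwise read the cache

sb = StoreBuffer()
sb.store(0x100, b"\x2a")
assert sb.load(0x100, cache={}) == b"\x2a"
```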
  • Patent number: 10146694
    Abstract: Implementations are provided herein for associating at least two data streams with each file in a file system. The first, a cache overlay layer, can store additional state information on a per-block basis that details whether each individual block of file data within the cache overlay layer is clean, dirty, or has a write back to the storage layer in progress. The second, a storage layer, can be a use-case-defined repository that can transform data using data augmentation methods or store unmodified raw data in local storage. File system operations directed to the cache overlay layer can be processed asynchronously from file system operations directed to the storage layer.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: December 4, 2018
    Assignee: EMC IP Holding Company LLC
    Inventors: Max Laier, Evgeny Popovich, Hwanju Kim
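A sketch of the per-block state kept by the cache overlay layer in patent 10146694; the enum values and write-back routine are invented for illustration, and the asynchronous machinery is omitted.

```python
# Sketch only: the cache overlay tracks clean / dirty / writeback-in-progress
# per block, and dirty blocks are pushed to the storage layer asynchronously.
from enum import Enum

class BlockState(Enum):
    CLEAN = "clean"
    DIRTY = "dirty"
    WRITEBACK = "write back in progress"

class CacheOverlay:
    def __init__(self):
        self.blocks = {}                         # block no -> (state, data)

    def write(self, block_no, data):
        self.blocks[block_no] = (BlockState.DIRTY, data)

    def start_writeback(self, storage_layer):
        for block_no, (state, data) in self.blocks.items():
            if state is BlockState.DIRTY:
                self.blocks[block_no] = (BlockState.WRITEBACK, data)
                storage_layer[block_no] = data   # storage layer could transform here
                self.blocks[block_no] = (BlockState.CLEAN, data)

overlay, backing = CacheOverlay(), {}
overlay.write(0, b"new bytes"); overlay.start_writeback(backing)
```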
  • Patent number: 10146697
    Abstract: Embodiments are directed to a perfect physical garbage collection (PPGC) process that uses a NUMA-aware perfect hash vector. The process splits a perfect hash vector (PHVEC) into a number of perfect hash vectors, where the number corresponds to the number of nodes each having a processing core and associated local memory, directs each perfect hash vector to the local memory of its node so that each perfect hash vector accesses only local memory, and assigns fingerprints in the perfect hash vector to a respective node using a mask function. The process also creates the perfect hash vectors simultaneously in a multi-threaded manner by scanning the Index once.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: December 4, 2018
    Assignee: EMC IP Holding Company LLC
    Inventors: Abhinav Duggal, Tony Wong
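A sketch of the NUMA-aware split in patent 10146697, with an assumed mask function (the low bits of the fingerprint choose the node); the patented mask and the perfect-hash construction itself are not reproduced here.

```python
# Sketch only: fingerprints are partitioned across nodes by a mask function;
# each node's list would then seed a per-node perfect hash vector built and
# probed entirely in that node's local memory.
NUM_NODES = 4

def node_for_fingerprint(fp: int) -> int:
    return fp & (NUM_NODES - 1)              # assumed mask: low bits pick the node

def split_fingerprints(fingerprints):
    per_node = [[] for _ in range(NUM_NODES)]
    for fp in fingerprints:
        per_node[node_for_fingerprint(fp)].append(fp)
    return per_node                          # one bucket per NUMA node

buckets = split_fingerprints([0x1A2B, 0x33F0, 0x9C41, 0x7777])
```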
  • Patent number: 10140036
    Abstract: A system and method are disclosed for managing a non-volatile memory system having a multi-processor controller. The controller may be configured with a plurality of processors and a shared data queue in a cyclic data buffer. Each of the plurality of processors may manage a separate pointer pointing to a different entry of the shared data queue, and multiple processors may concurrently access or update entries in the shared data queue.
    Type: Grant
    Filed: May 31, 2016
    Date of Patent: November 27, 2018
    Assignee: SanDisk Technologies LLC
    Inventors: Vered Kelner, Noga Deshe, Alon Banin, Gadi Vishne, Yevgeny Zagalsky, Ilya Gusev, Eran Ben Abou
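A sketch of the shared cyclic queue with per-processor pointers from patent 10140036; the processor roles named below are assumptions used only to make the example concrete.

```python
# Sketch only: each processor keeps its own pointer into the same ring, so
# different pipeline stages can work on different entries of one shared queue
# concurrently, without copying data between queues.
class SharedRing:
    def __init__(self, size):
        self.size = size
        self.entries = [None] * size
        self.pointers = {}                   # processor name -> index into ring

    def register(self, name, start=0):
        self.pointers[name] = start

    def current(self, name):
        return self.entries[self.pointers[name]]

    def advance(self, name, entry=None):
        idx = self.pointers[name]
        if entry is not None:
            self.entries[idx] = entry        # update the shared entry in place
        self.pointers[name] = (idx + 1) % self.size

ring = SharedRing(8)
ring.register("front_end"); ring.register("flash_stage")
ring.advance("front_end", {"cmd": "write", "lba": 12})
```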
  • Patent number: 10126980
    Abstract: A request is received to perform a data operation that requires interaction with any one of multiple data replicas stored on one or more data storage devices and managed by a quorum-based data management protocol, in which completion of a data update is reported to the update's initiator once a majority of the data replicas report acceptance of the update. The data operation is routed to a predefined minority of the data replicas if it requires less than strong consistency, is a read-only operation, and meets a predefined criterion of being computationally time-intensive or resource-intensive. Otherwise, if the operation requires strong consistency, requires a data write operation, or does not meet the predefined criterion, it is routed to a predefined majority of the data replicas.
    Type: Grant
    Filed: April 29, 2015
    Date of Patent: November 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Guy Laden, Benjamin Mandler, Yoav Tock
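A sketch of the routing rule in patent 10126980; the cost threshold standing in for the "computationally time-intensive or resource-intensive" criterion is an assumption for the example.

```python
# Sketch only: heavy read-only operations that tolerate relaxed consistency go
# to a predefined minority of replicas; everything else goes to a majority.
def route_operation(op, minority_replicas, majority_replicas):
    relaxed = not op["needs_strong_consistency"]
    read_only = not op["is_write"]
    heavy = op["estimated_cost"] > 1.0       # assumed time/resource criterion
    if relaxed and read_only and heavy:
        return minority_replicas             # offload analytics-style reads
    return majority_replicas                 # strong consistency via quorum

target = route_operation({"needs_strong_consistency": False, "is_write": False,
                          "estimated_cost": 5.0},
                         ["replica-3"], ["replica-1", "replica-2", "replica-3"])
```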
  • Patent number: 10126957
    Abstract: A semiconductor storage device includes memory cells, select transistors, memory strings, first and second blocks, word lines, and select gate lines. In the memory string, the current paths of plural memory cells are connected in series. When data are written in a first block, after a select gate line connected to the gate of a select transistor of one of the memory strings in the first block is selected, the data are sequentially written in the memory cells in the memory string connected to the selected select gate line. When data are written in the second block, after a word line connected to the control gates of memory cells of different memory strings in the second block is selected, the data are sequentially written in the memory cells of the different memory strings in the second block which have their control gates connected to the selected word line.
    Type: Grant
    Filed: October 3, 2017
    Date of Patent: November 13, 2018
    Assignee: Toshiba Memory Corporation
    Inventor: Hiroshi Maejima
  • Patent number: 10126958
    Abstract: Techniques are disclosed for write suppression to improve endurance rating of non-volatile memories, such as QLC-NAND SSDs or other relatively slow, low endurance non-volatile memories. In an embodiment, an SSD is configured with a fast frontend non-volatile memory, a relatively slow lower endurance backend non-volatile memory, and a frontend manager that selectively transfers data from the fast memory to the slow memory based on transfer criteria. In operation, write data from the host is initially written to the fast memory by the frontend manager. The data is moved from the fast memory to the slow memory in bands. For each data band stored in the fast memory, the frontend manager tracks invalid data counts and data age. Only bands that still remain valid are transferred to the slow memory. After a given band has been fully transferred, it is erased and re-usable for other incoming writes by the frontend manager.
    Type: Grant
    Filed: October 5, 2015
    Date of Patent: November 13, 2018
    Assignee: INTEL CORPORATION
    Inventor: Anand S. Ramalingam
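A sketch of the band-transfer decision described in patent 10126958; the specific thresholds (valid ratio, age) are assumptions, since the patent only says transfer is based on invalid-data counts and data age.

```python
# Sketch only: host writes land in the fast frontend memory; a band is migrated
# to the slow, low-endurance backend only if enough of it is still valid and it
# has aged, and a fully transferred band is erased for reuse.
def select_bands_for_transfer(bands, min_valid_ratio=0.5, min_age=10):
    to_transfer = []
    for band in bands:
        valid_ratio = 1.0 - band["invalid_count"] / band["total_pages"]
        if band["age"] >= min_age and valid_ratio >= min_valid_ratio:
            to_transfer.append(band["id"])   # worth moving to the slow backend
    return to_transfer

bands = [{"id": 0, "invalid_count": 90, "total_pages": 100, "age": 30},
         {"id": 1, "invalid_count": 5,  "total_pages": 100, "age": 30}]
assert select_bands_for_transfer(bands) == [1]
```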
  • Patent number: 10120804
    Abstract: Tracking of a processor instruction is provided to limit speculative mis-prediction. A non-speculative read set indication and/or write set indication is maintained for a transaction. The indication(s) are stored in cache. In addition, a queue of at least one address corresponding to a speculatively executed instruction is maintained. For a request received from a processor, a transaction resolution process takes place, and a resolution is performed if an address match in the queue is detected. The resolution includes holding a response to the received request until the speculative instruction is committed or flushed.
    Type: Grant
    Filed: May 12, 2017
    Date of Patent: November 6, 2018
    Assignee: International Business Machines Corporation
    Inventors: Michael Karl Gschwind, Valentina Salapura, Chung-Lung K. Shum
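A simplified sketch of the resolution step in patent 10120804; the data structures below are assumptions used to show the hold-until-commit-or-flush behavior, not the patented hardware.

```python
# Sketch only: addresses touched by speculatively executed instructions sit in
# a tracking set, and an incoming request that hits one of them is held until
# the speculative instruction commits or is flushed.
class SpeculativeTracker:
    def __init__(self):
        self.pending = set()                 # addresses from speculative instructions
        self.held = []                        # requests waiting for resolution

    def on_request(self, addr):
        if addr in self.pending:
            self.held.append(addr)            # hold the response
            return None
        return f"respond({addr:#x})"          # safe to answer immediately

    def on_commit_or_flush(self, addr):
        self.pending.discard(addr)
        ready = [a for a in self.held if a not in self.pending]
        self.held = [a for a in self.held if a in self.pending]
        return [f"respond({a:#x})" for a in ready]

t = SpeculativeTracker(); t.pending.add(0x40)
t.on_request(0x40)                            # held
t.on_commit_or_flush(0x40)                    # -> ['respond(0x40)']
```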
  • Patent number: 10114559
    Abstract: Provided are a computer program product, system, and method for generating node access information for a transaction accessing nodes of a data set index. Pages in the memory are allocated to internal nodes and leaf nodes of a tree data structure representing all or a portion of a data set index for the data set. A transaction is processed with respect to the data set that involves accessing the internal and leaf nodes in the tree data structure, wherein the transaction comprises a read or write operation. Node access information is generated in transaction information, for accessed nodes comprising nodes in the tree data structure accessed as part of processing the transaction. The node access information includes a pointer to the page allocated to the accessed node prior to the transaction in response to the node being modified during the transaction.
    Type: Grant
    Filed: August 12, 2016
    Date of Patent: October 30, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Derek L. Erdmann, David C. Reed, Thomas C. Reed, Max D. Smith
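A sketch of the node-access bookkeeping in patent 10114559; the record fields and the page-table shape are invented for the example, which only illustrates recording a pointer to a modified node's prior page within the transaction information.

```python
# Sketch only: as a transaction touches nodes of the index tree, it records
# node access information, including, for modified nodes, a pointer to the page
# that held the node before the transaction.
def access_node(transaction, node_id, page_table, modify=False, new_page=None):
    record = {"node": node_id, "mode": "write" if modify else "read"}
    if modify:
        record["prior_page"] = page_table[node_id]   # page before the change
        page_table[node_id] = new_page
    transaction["node_access_info"].append(record)

txn = {"node_access_info": []}
pages = {"root": 0x10, "leaf-3": 0x2C}
access_node(txn, "root", pages)                       # read access
access_node(txn, "leaf-3", pages, modify=True, new_page=0x90)
```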