Patents by Inventor Richard P. Spillane

Richard P. Spillane has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10740039
    Abstract: Embodiments described herein are related to cloning a volume in a file system. In some embodiments, a directory hard link is used to generate a clone of the root node of the volume. In certain embodiments, upon determining that a file or directory of the clone which comprises a hard link to an index node has been modified, a new object directory is generated beneath a root node of the volume. The index node may be added to the new object directory and one or more files and directories in the volume which link to the index node may be updated to contain symbolic links to the index node in the new object directory. In certain embodiments, a copy-on-write operation is performed in order to copy the file or directory and the new object directory to the clone.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: August 11, 2020
    Assignee: VMware, Inc.
    Inventors: Richard P. Spillane, Wenguang Wang
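The clone-then-copy-on-write pattern in the abstract above can be illustrated with a minimal in-memory model. The `Volume` class, its dict-based "inodes", and the `clone`/`write` methods are illustrative stand-ins chosen for this sketch, not the patented implementation.

```python
class Volume:
    """Toy model of cloning a volume cheaply, then copying on first write."""
    def __init__(self, inodes):
        self.inodes = inodes  # name -> inode dict (plays the index-node role)

    def clone(self):
        # Like the directory hard link in the abstract: the clone references
        # the same index nodes instead of copying any file data up front.
        return Volume(dict(self.inodes))

    def write(self, name, data):
        # Copy-on-write: give this volume a private copy of the inode before
        # modifying it, so the clone keeps seeing the original contents.
        inode = dict(self.inodes[name])
        inode["data"] = data
        self.inodes[name] = inode

base = Volume({"a.txt": {"data": "v1"}})
snap = base.clone()          # cheap: shares index nodes with base
base.write("a.txt", "v2")    # first write triggers the private copy
assert base.inodes["a.txt"]["data"] == "v2"
assert snap.inodes["a.txt"]["data"] == "v1"
```

The point of the indirection is that the clone costs one reference per entry, and real copying is deferred until (and unless) a file actually diverges.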
  • Publication number: 20200241939
    Abstract: The disclosure provides an approach for performing an operation by a first process on behalf of a second process, the method comprising: obtaining, by the first process, a memory handle from the second process, wherein the memory handle allows access, by the first process, to at least some of the address space of the second process; dividing the address space of the memory handle into a plurality of sections; receiving, by the first process, a request from the second process to perform an operation; determining, by the first process, a section of the plurality of sections that is to be mapped from the address space of the memory handle to the address space of the first process for the performance of the operation by the first process; mapping the section from the address space of the memory handle to the address space of the first process; and performing the operation by the first process on behalf of the second process.
    Type: Application
    Filed: January 24, 2019
    Publication date: July 30, 2020
    Inventors: Wenguang WANG, Christoph KLEE, Adrian DRZEWIECKI, Christos KARAMANOLIS, Richard P. SPILLANE, Maxime AUSTRUY
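The section-mapping idea in the abstract above — divide the handle's address space into fixed-size sections and map only the section an operation touches — can be sketched as follows. The `SECTION` size, the `HandleMapper` class, and the use of a `bytes` buffer in place of a real memory handle are all assumptions made for illustration.

```python
SECTION = 4096  # assumed fixed section size

class HandleMapper:
    """Toy model: map only the section of the handle's address space that a
    request needs, and cache the mapping for reuse."""
    def __init__(self, handle_bytes):
        self.handle = handle_bytes   # stands in for the second process's memory
        self.mapped = {}             # section index -> mapped view

    def read(self, offset, length):
        idx = offset // SECTION      # which section holds this offset
        if idx not in self.mapped:   # map on first use only
            start = idx * SECTION
            self.mapped[idx] = memoryview(self.handle)[start:start + SECTION]
        view = self.mapped[idx]
        local = offset - idx * SECTION
        return bytes(view[local:local + length])

buf = bytes(range(256)) * 64         # 16 KiB "second process" address space
m = HandleMapper(buf)
assert m.read(5000, 4) == buf[5000:5004]
assert list(m.mapped) == [1]         # only section 1 was ever mapped
```

A real implementation would use OS mapping primitives rather than `memoryview`, but the bookkeeping — locate the section, map it lazily, serve the operation from the mapping — is the same shape.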
  • Publication number: 20200242034
    Abstract: The present disclosure provides techniques for managing a cache of a computer system using a cache management data structure. The cache management data structure includes a cold queue, a ghost queue, and a hot queue. The techniques herein improve the functioning of the computer because management of the cache management data structure can be performed in parallel with multiple cores or multiple processors, because a sequential scan will only pollute (i.e., add unimportant memory pages to) the cold queue and, to an extent, the ghost queue, but not the hot queue, and also because the cache management data structure has lower memory requirements and lower CPU overhead on a cache hit than some prior-art algorithms.
    Type: Application
    Filed: January 24, 2019
    Publication date: July 30, 2020
    Inventors: Wenguang WANG, Christoph KLEE, Adrian DRZEWIECKI, Christos KARAMANOLIS, Richard P. SPILLANE, Maxime AUSTRUY
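The cold/ghost/hot split above can be sketched with three queues: first-time pages enter the cold queue, evictions from cold leave key-only "ghost" entries, and a page seen again while ghosted is promoted to hot. The queue sizes and promotion rule here are assumptions for illustration, not the patented policy.

```python
from collections import OrderedDict

class TriQueueCache:
    """Toy cold/ghost/hot cache in the spirit of the abstract."""
    def __init__(self, cold_size=4, ghost_size=4, hot_size=4):
        self.cold = OrderedDict()   # first-time pages land here
        self.ghost = OrderedDict()  # keys only: recently evicted from cold
        self.hot = OrderedDict()    # pages re-referenced after ghosting
        self.cold_size, self.ghost_size, self.hot_size = cold_size, ghost_size, hot_size

    def access(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)
            return "hot hit"
        if key in self.cold:
            return "cold hit"
        if key in self.ghost:
            # Seen again recently: promote to hot instead of re-entering cold.
            del self.ghost[key]
            self.hot[key] = True
            if len(self.hot) > self.hot_size:
                self.hot.popitem(last=False)
            return "promoted"
        # Miss: admit into cold; a cold eviction leaves a ghost entry behind.
        self.cold[key] = True
        if len(self.cold) > self.cold_size:
            victim, _ = self.cold.popitem(last=False)
            self.ghost[victim] = True
            if len(self.ghost) > self.ghost_size:
                self.ghost.popitem(last=False)
        return "miss"

cache = TriQueueCache(cold_size=2)
for page in range(5):               # a sequential scan of one-shot pages
    cache.access(page)
assert len(cache.hot) == 0          # the scan never pollutes the hot queue
assert cache.access(0) == "promoted"  # 0 was ghosted, so a re-reference is hot
```

This shows the scan-resistance claim concretely: only pages referenced twice within the ghost window ever reach the hot queue.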
  • Publication number: 20200233801
    Abstract: Certain aspects provide systems and methods for performing an operation on a Bε-tree. A method comprises writing a message associated with the operation to a first slot in a first buffer of a first non-leaf node of the Bε-tree in an append-only manner, wherein a first filter associated with the first slot is used for query operations associated with the first slot. The method further comprises determining that the first buffer is full and, upon determining to flush the message to a non-leaf child node, flushing the message in an append-only manner to a second slot in a second buffer of the non-leaf child node, wherein a second filter associated with the second slot is used for query operations associated with the second slot. The method further comprises, upon determining to flush the message to a leaf node, flushing the message to the leaf node in a sorted manner.
    Type: Application
    Filed: January 18, 2019
    Publication date: July 23, 2020
    Inventors: Abhishek GUPTA, Robert T. JOHNSON, Richard P. SPILLANE, Sandeep RANGASWAMY, Jorge GUERRA DELGADO, Kapil CHOWKSEY, Srinath PREMACHANDRAN
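The per-slot filter idea above can be sketched with a node whose buffer is a list of append-only slots, each carrying a filter consulted on queries. Here a plain `set` stands in for a probabilistic filter (e.g., a Bloom filter); the class and method names are illustrative.

```python
class BufferedNode:
    """Sketch of append-only slot buffers with per-slot membership filters."""
    def __init__(self, capacity=2):
        self.slots = []          # each slot: (filter_set, list of messages)
        self.capacity = capacity

    def append_slot(self, messages):
        # Messages land in a new slot append-only (no re-sorting of old slots);
        # the slot's filter records which keys it contains.
        self.slots.append(({k for k, _ in messages}, list(messages)))

    def full(self):
        return len(self.slots) >= self.capacity

    def query(self, key):
        # Newest slot wins; the filter lets a query skip slots that cannot match.
        for filt, messages in reversed(self.slots):
            if key in filt:
                for k, v in reversed(messages):
                    if k == key:
                        return v
        return None

node = BufferedNode()
node.append_slot([("a", 1), ("b", 2)])
node.append_slot([("a", 3)])        # later flush shadows the earlier value
assert node.query("a") == 3
assert node.query("b") == 2
assert node.full()                  # a real tree would now flush a slot down
```

With a real Bloom filter, most slots are skipped without being read at all, which is what makes append-only (unsorted) buffering cheap to query.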
  • Patent number: 10698865
    Abstract: System and method for managing leaf nodes of a B-tree for a file system of a computer system utilize used slots in a directory section of a leaf node to index variable-size key-value pair entries stored in a data section of the leaf node and free space slots in the directory section to index contiguous free spaces in the data section. Contents of the free space slots in the directory section are updated in response to changes in the contiguous free spaces in the data section to manage free space in the data section of the leaf node.
    Type: Grant
    Filed: June 26, 2017
    Date of Patent: June 30, 2020
    Assignee: VMware, Inc.
    Inventors: Li Ding, Richard P. Spillane, Wenguang Wang
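The used-slot/free-space-slot layout above resembles a slotted page, and can be sketched as follows. The `LeafPage` class, its byte encoding, and the first-fit placement policy are assumptions for illustration.

```python
class LeafPage:
    """Toy slotted leaf node: used slots index key-value entries in the data
    section; free-space slots index the contiguous holes."""
    def __init__(self, size=64):
        self.data = bytearray(size)
        self.used = []            # used slots: (offset, length, key)
        self.free = [(0, size)]   # free-space slots: (offset, length)

    def insert(self, key, value):
        entry = f"{key}={value}".encode()
        for i, (off, length) in enumerate(self.free):
            if length >= len(entry):
                self.data[off:off + len(entry)] = entry
                self.used.append((off, len(entry), key))
                # Shrink (or drop) the free-space slot that was consumed.
                rest = (off + len(entry), length - len(entry))
                self.free[i:i + 1] = [rest] if rest[1] else []
                return True
        return False              # no contiguous hole is large enough

    def delete(self, key):
        for i, (off, length, k) in enumerate(self.used):
            if k == key:
                del self.used[i]
                self.free.append((off, length))  # hole becomes a free-space slot
                return True
        return False

page = LeafPage(size=8)
assert page.insert("a", "1") and page.insert("b", "2")
assert not page.insert("c", "3")   # data section has no hole big enough
page.delete("a")
assert page.insert("c", "3")       # the freed hole is indexed and reused
```

Tracking holes in directory slots (rather than rescanning the data section) is what keeps free-space management cheap as variable-size entries come and go; a real implementation would also coalesce adjacent holes.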
  • Publication number: 20200201821
    Abstract: The disclosure herein describes synchronizing cached index copies at a first site with indexes of a log-structured merge (LSM) tree file system on an object storage platform at a second site. An indication that the LSM tree file system has been compacted based on a compaction process is received. A cached metadata catalog for the parent catalog version included in the indication is accessed at the first site. A set of cached index copies is identified at the first site based on the metadata of the cached metadata catalog. The compaction process is applied to the identified set of cached index copies and a compacted set of cached index copies is generated at the first site, whereby the compacted set of cached index copies is synchronized with a respective set of indexes of the plurality of sorted data tables of the LSM tree file system at the second site.
    Type: Application
    Filed: December 21, 2018
    Publication date: June 25, 2020
    Inventors: Wenguang Wang, Richard P. Spillane, Junlong Gao, Robert T. Johnson, Christos Karamanolis, Maxime Austruy
  • Publication number: 20200201822
    Abstract: The disclosure herein describes synchronizing a data cache and an LSM tree file system on an object storage platform. Instructions to send a cached data set from the data cache to the LSM tree file system are received. An updated metadata catalog is generated. If the LSM tree structure is out of shape, compaction is performed on the LSM tree file system which may be on a different system or server. When an unmerged compacted metadata catalog is identified, a merged metadata catalog is generated, based on the compacted metadata catalog and the cached data set, and associated with the cached data set. The cached data set and the associated metadata catalog are sent to the LSM tree file system, whereby the data cache and the LSM tree file system are synchronized. Synchronization is enabled without the data cache or file system being locked and/or waiting for the other entity.
    Type: Application
    Filed: December 21, 2018
    Publication date: June 25, 2020
    Inventors: Wenguang Wang, Junlong Gao, Richard P. Spillane, Robert T. Johnson, Christos Karamanolis, Maxime Austruy
  • Publication number: 20200183906
    Abstract: The disclosure herein describes providing and accessing data on an object storage platform using a log-structured merge (LSM) tree file system. The LSM tree file system on the object storage platform includes sorted data tables, each sorted data table including a payload portion and an index portion. Data is written to the LSM tree file system in at least one new sorted data table. Data is read by identifying a data location of the data based on index portions of the sorted data tables and reading the data from a sorted data table associated with the identified data location. The use of the LSM tree file system on the object storage platform provides an efficient means for interacting with the data stored thereon.
    Type: Application
    Filed: December 7, 2018
    Publication date: June 11, 2020
    Inventors: Richard P. Spillane, Wenguang Wang, Junlong Gao, Robert T. Johnson, Christos Karamanolis, Maxime Austruy
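The payload/index split and the newest-table-first read path above can be sketched as follows. The `SortedDataTable` class uses a dense in-memory index for simplicity (real tables typically keep a sparse index over on-disk blocks); all names are illustrative.

```python
import bisect

class SortedDataTable:
    """Toy sorted data table: a sorted payload of key-value pairs plus an
    index portion mapping keys to payload positions."""
    def __init__(self, pairs):
        self.payload = sorted(pairs)
        self.index = [k for k, _ in self.payload]  # dense here; sparse in practice

    def get(self, key):
        i = bisect.bisect_left(self.index, key)    # the index locates the data
        if i < len(self.index) and self.index[i] == key:
            return self.payload[i][1]
        return None

def lsm_read(tables, key):
    # Tables ordered newest first: the first one whose index finds the key wins.
    for table in tables:
        value = table.get(key)
        if value is not None:
            return value
    return None

old = SortedDataTable([("a", 1), ("b", 2)])
new = SortedDataTable([("b", 20)])
assert lsm_read([new, old], "b") == 20   # newer table shadows the older one
assert lsm_read([new, old], "a") == 1
```

Because tables are immutable and sorted, writes only ever create new tables, which is what makes the layout a good fit for object storage.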
  • Publication number: 20200183886
    Abstract: The disclosure herein describes writing data to a log-structured merge (LSM) tree file system on an object storage platform. Write data instructions indicating data for writing to the LSM tree file system are received. Based on the received instructions, the data is written to a first data cache configured as a live data cache. Based on an instruction to transfer data in the live data cache to the LSM tree file system, the first data cache is converted to a stable cache. A second data cache configured as a live data cache is then generated based on cloning the first data cache. The data in the first data cache is then written to the LSM tree file system. Use of a stable cache and a cloned live data cache enables writing data to the file system from the stable cache in parallel with handling write data instructions in the live data cache.
    Type: Application
    Filed: December 7, 2018
    Publication date: June 11, 2020
    Inventors: Wenguang Wang, Richard P. Spillane, Junlong Gao, Robert T. Johnson, Christos Karamanolis, Maxime Austruy
  • Publication number: 20200183905
    Abstract: Certain aspects provide systems and methods of compacting data within a log-structured merge tree (LSM tree) using sharding. In certain aspects, a method includes determining a size of the LSM tree, determining a compaction time for a compaction of the LSM tree based on the size, determining a number of compaction entities for performing the compaction in parallel based on the compaction time, determining a number of shards based on the number of compaction entities, and determining a key range associated with the LSM tree. The method further comprises dividing the key range by the number of shards into a number of sub key ranges, wherein each of the number of sub key ranges corresponds to a shard of the number of shards and assigning the number of shards to the number of compaction entities for compaction.
    Type: Application
    Filed: December 6, 2018
    Publication date: June 11, 2020
    Inventors: Wenguang WANG, Richard P. SPILLANE, Junlong GAO, Robert T. JOHNSON, Christos KARAMANOLIS, Maxime AUSTRUY
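The sharding arithmetic in the abstract above — size → compaction time → number of parallel compaction entities → number of shards → sub key ranges — can be worked through in a few lines. The throughput model, the one-shard-per-entity choice, and the integer key space are assumptions made for this sketch.

```python
import math

def plan_shards(tree_size_bytes, target_seconds, bytes_per_second, key_lo, key_hi):
    """Sketch of planning a sharded LSM-tree compaction."""
    compaction_time = tree_size_bytes / bytes_per_second
    # Enough parallel compaction entities to finish within the target time.
    workers = max(1, math.ceil(compaction_time / target_seconds))
    shards = workers                  # assume one shard per compaction entity
    span = (key_hi - key_lo) / shards
    # Divide the tree's key range into contiguous sub key ranges, one per shard.
    return [(key_lo + round(i * span), key_lo + round((i + 1) * span))
            for i in range(shards)]

ranges = plan_shards(tree_size_bytes=8 << 30,      # 8 GiB tree
                     target_seconds=60,
                     bytes_per_second=64 << 20,    # 64 MiB/s per entity
                     key_lo=0, key_hi=1_000_000)
assert len(ranges) == 3            # 128 s of work / 60 s target -> 3 entities
assert ranges[0][0] == 0 and ranges[-1][1] == 1_000_000
```

Each sub key range can then be compacted independently, since keys in disjoint ranges never merge with one another.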
  • Publication number: 20200183890
    Abstract: The disclosure provides for isolation of concurrent read and write transactions on the same file, thereby enabling higher file system throughput relative to serial-only transactions. Race conditions and lock contentions in multi-writer scenarios are avoided in file stat (metadata) updates by the use of an aggregator to merge updates of committed transactions to maintain file stat truth, and an upgrade lock that enforces atomicity of file stat access, even while still permitting multiple processes to concurrently read from and/or write to the file data. The disclosure is applicable to generic file systems, whether native or virtualized, and may be used, for example, to speed access to database files that require prolonged input/output (I/O) transaction time periods.
    Type: Application
    Filed: December 7, 2018
    Publication date: June 11, 2020
    Inventors: Wenguang Wang, Richard P. Spillane, Junlong Gao, Fengshuang Li
  • Publication number: 20200151268
    Abstract: A buffer tree structure includes, at each internal node, a buffer having a compacted portion and an uncompacted portion. Insertion of data elements to the buffer tree can occur in units called packets. A packet is initially stored in the uncompacted portion of a receiving node's buffer. When a compaction trigger condition exists, packet compaction is performed including a data element compaction operation. A buffer-emptying (flush) operation pushes the compacted packets to children nodes.
    Type: Application
    Filed: November 8, 2018
    Publication date: May 14, 2020
    Inventors: Robert T. Johnson, Abhishek Gupta, Jorge Guerra Delgado, Ittai Abraham, Richard P. Spillane, Srinath Premachandran, Sandeep Rangaswamy, Kapil Chowksey
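The packet lifecycle above — arrive uncompacted, merge into a compacted packet when a trigger fires, flush compacted packets to children — can be sketched as follows. The trigger (a packet-count threshold) and the last-write-wins merge rule are assumptions for illustration.

```python
class BufferNode:
    """Toy internal node whose buffer has uncompacted and compacted portions."""
    def __init__(self, threshold=3):
        self.uncompacted = []   # packets (lists of (key, value)) as inserted
        self.compacted = []     # merged packets, duplicate keys resolved
        self.threshold = threshold

    def insert_packet(self, packet):
        self.uncompacted.append(packet)   # packets land uncompacted first
        if len(self.uncompacted) >= self.threshold:
            self.compact()                # compaction trigger condition

    def compact(self):
        # Data-element compaction: combine uncompacted packets into one
        # compacted packet; later insertions win for duplicate keys.
        merged = {}
        for packet in self.uncompacted:
            merged.update(packet)
        self.compacted.append(sorted(merged.items()))
        self.uncompacted = []

    def flush(self):
        # Buffer-emptying: hand the compacted packets to children nodes.
        out, self.compacted = self.compacted, []
        return out

node = BufferNode(threshold=2)
node.insert_packet([("a", 1), ("b", 2)])
node.insert_packet([("a", 9)])            # second packet trips the trigger
assert node.uncompacted == []             # both packets were compacted
assert node.flush() == [[("a", 9), ("b", 2)]]
```

Deferring the merge until several packets have accumulated is what amortizes the sorting cost across many insertions.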
  • Patent number: 10649959
    Abstract: A Bε-tree associated with a file system on a storage volume includes a hierarchy of nodes. Each node includes a buffer portion that can be characterized by a fixed maximum allowable size to store key-value pairs as messages in the buffer. Messages can be initially buffered in the root node of the Bε-tree, and flushed to descendent children from the root node. Messages stored in the buffers can be indexed using a B+-tree data structure. As the B+-tree data structure in a buffer grows (due to receiving flushed messages) and shrinks (due to messages being flushed), disk blocks can be allocated from the storage volume to increase the actual size of the buffer and deallocated from the buffer to reduce the actual size of the buffer.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: May 12, 2020
    Assignee: VMware, Inc.
    Inventors: Abhishek Gupta, Richard P. Spillane, Kapil Chowksey, Wenguang Wang, Robert T. Johnson
  • Patent number: 10642783
    Abstract: Techniques are disclosed for using an in-memory replicated object to support file services. Certain embodiments provide a method of storing persistent file handles in a storage system comprising a plurality of computing devices. The method may include requesting to write a persistent file handle corresponding to a file to a file system stored on the plurality of computing devices. The request may be translated to a block input/output (I/O) command to an in-memory object, the in-memory object representing at least a portion of the file system, a copy of the in-memory object being stored at each of the plurality of computing devices in volatile memory. The persistent file handle may then be written to the copy of the in-memory object stored in the volatile memory of each of the plurality of computing devices.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: May 5, 2020
    Assignee: VMware, Inc.
    Inventors: Wenguang Wang, Eric Knauft, Srinath Premachandran, Zhaohui Guo, Richard P. Spillane
  • Publication number: 20200089788
    Abstract: A buffer tree structure includes, at each internal node, a buffer having a compacted portion and an uncompacted portion. Insertion of data elements to the buffer tree can occur in units called packets. A packet is initially stored in the uncompacted portion of a receiving node's buffer. When a compaction trigger condition exists, packet compaction is performed including a data element compaction operation. A buffer-emptying (flush) operation pushes the compacted packets to children nodes.
    Type: Application
    Filed: September 18, 2018
    Publication date: March 19, 2020
    Inventors: Robert T. Johnson, Ittai Abraham, Abhishek Gupta, Richard P. Spillane, Srinath Premachandran, Jorge Guerra Delgado, Sandeep Rangaswamy, Kapil Chowksey
  • Patent number: 10592530
    Abstract: Data storage system and method for managing transaction requests in the data storage system utilizes prepare requests for a transaction request for multiple data storage operations. The prepare requests are sent to selected destination storage nodes of the data storage system to handle the multiple data storage operations. Each prepare request includes at least one of the multiple data storage operations to be handled by a particular destination storage node and a list of the destination storage nodes involved in the transaction request.
    Type: Grant
    Filed: January 23, 2018
    Date of Patent: March 17, 2020
    Assignee: VMware, Inc.
    Inventors: Wenguang Wang, Abhishek Gupta, Kapil Chowksey, Richard P. Spillane, Rob Johnson
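Building the prepare requests described above — group a transaction's operations by destination node, then send each node its own operations plus the full participant list — can be sketched as follows. The `locate` placement function and the request shape are assumptions for illustration.

```python
def build_prepare_requests(transaction_ops, locate):
    """Sketch of fanning a transaction out into per-node prepare requests.
    `locate` maps an operation to its destination storage node."""
    by_node = {}
    for op in transaction_ops:
        by_node.setdefault(locate(op), []).append(op)
    participants = sorted(by_node)
    # Each prepare request carries the node's own operations plus the list of
    # all destination storage nodes involved in the transaction request.
    return {node: {"ops": ops, "participants": participants}
            for node, ops in by_node.items()}

ops = [("put", "k1"), ("put", "k2"), ("del", "k3")]
# Illustrative placement: route by the digit in the key name.
reqs = build_prepare_requests(ops, locate=lambda op: int(op[1][1]) % 2)
assert set(reqs) == {0, 1}
assert reqs[1]["ops"] == [("put", "k1"), ("del", "k3")]
assert reqs[0]["participants"] == [0, 1]   # every node learns all participants
```

Carrying the participant list in every prepare request lets any node drive recovery of the transaction without consulting a central coordinator's state.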
  • Publication number: 20200012735
    Abstract: A buffer tree structure includes, at each internal node, a buffer having a compacted portion and an uncompacted portion. Insertion of data elements to the buffer tree can occur in units called packets. A packet is initially stored in the uncompacted portion of a receiving node's buffer. After a time, packets in the uncompacted portion of a buffer are combined into compacted packets in the compacted portion of the buffer. A buffer-emptying (flush) operation pushes the compacted packets to children nodes.
    Type: Application
    Filed: July 6, 2018
    Publication date: January 9, 2020
    Inventors: Robert T. Johnson, Ittai Abraham, Abhishek Gupta, Richard P. Spillane, Sandeep Rangaswamy, Jorge Guerra Delgado, Srinath Premachandran, Kapil Chowksey
  • Patent number: 10515052
    Abstract: A file system stores directories and files in a file system directory that uses case sensitive names. The same file system directory can support directory and file name lookups that treat the directory and file names in a case sensitive manner or in a case insensitive manner. The search criteria used for the lookup can be based on case-folding the name to produce a case-neutral name and on the original name with its case preserved. Search criteria can be generated for a case sensitive name lookup or for a case insensitive name lookup on the same file system directory, thus avoiding having to support separate file systems or separate file system directories for case sensitive and case insensitive file access.
    Type: Grant
    Filed: July 20, 2017
    Date of Patent: December 24, 2019
    Assignee: VMware, Inc.
    Inventors: Wenguang Wang, Richard P. Spillane
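The dual search criteria described above — a case-folded, case-neutral name alongside the original case-preserved name — can be sketched with one directory index serving both lookup styles. The `Directory` class and its dict layout are illustrative; Python's `str.casefold` stands in for the case-folding step.

```python
class Directory:
    """One index, two lookup styles: entries are bucketed by their
    case-neutral (case-folded) name, with original names preserved."""
    def __init__(self):
        self.entries = {}   # case-neutral name -> list of case-preserved names

    def add(self, name):
        self.entries.setdefault(name.casefold(), []).append(name)

    def lookup(self, name, case_sensitive=True):
        # Search criteria per the abstract: the case-folded name finds the
        # bucket; the case-preserved name narrows it when sensitivity matters.
        candidates = self.entries.get(name.casefold(), [])
        if case_sensitive:
            return [n for n in candidates if n == name]
        return candidates

d = Directory()
d.add("README.txt")
d.add("readme.TXT")
assert d.lookup("README.txt") == ["README.txt"]
assert sorted(d.lookup("Readme.Txt", case_sensitive=False)) == ["README.txt", "readme.TXT"]
assert d.lookup("Readme.Txt") == []   # no exact-case match exists
```

The payoff matches the abstract's claim: the same stored directory answers both case-sensitive and case-insensitive lookups, so no duplicate file system or directory is needed.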
  • Publication number: 20190370239
    Abstract: Embodiments herein are directed towards systems and methods for performing range lookups in Bε-trees. One example method involves receiving a request to return key-value pairs within a range of keys from the Bε-tree. The Bε-tree includes a plurality of nodes, each node being associated with a buffer that stores key-value pairs. The method further involves determining a fractional size of the range of keys. The method further involves, for each level of the Bε-tree, obtaining from within one or more buffers of one or more nodes of the level, a set of key-value pairs within the range of keys up to a size equal to the fractional size and transferring the set of key-value pairs to a result data structure. The method further involves sorting and merging all key-value pairs in the result data structure and returning the result data structure in response to the request.
    Type: Application
    Filed: June 5, 2018
    Publication date: December 5, 2019
    Inventors: Abhishek GUPTA, Richard P. SPILLANE, Rob JOHNSON, Wenguang WANG, Kapil CHOWKSEY, Jorge GUERRA DELGADO, Sandeep RANGASWAMY, Srinath PREMACHANDRAN
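The per-level fetch described above — take at most a fractional-size batch of in-range pairs from each level's buffers, then sort and merge the result — can be sketched as follows. Representing levels as lists of dict buffers, and the fixed `fraction` budget, are assumptions made for this sketch.

```python
def range_lookup(levels, lo, hi, fraction=8):
    """Sketch of a per-level, budgeted range lookup over buffered levels.
    `levels` is ordered top (newest messages) to bottom."""
    result = []
    for level in levels:
        taken = 0
        for buffer in level:
            for k in sorted(buffer):
                # Collect in-range pairs, up to the fractional size per level.
                if lo <= k <= hi and taken < fraction:
                    result.append((k, buffer[k]))
                    taken += 1
    # Sort and merge: entries from upper (newer) levels shadow lower ones.
    merged = {}
    for k, v in result:
        merged.setdefault(k, v)   # first occurrence (upper level) wins
    return sorted(merged.items())

levels = [
    [{"b": "new"}],                        # root buffer: newest messages
    [{"a": "1", "b": "old", "z": "9"}],    # lower level
]
assert range_lookup(levels, "a", "c") == [("a", "1"), ("b", "new")]
```

Bounding each level's contribution by the fractional size keeps the result structure from being swamped by any single large buffer while the levels are merged.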
  • Patent number: 10452496
    Abstract: Data storage system and method for managing transaction requests to the data storage system utilizes a write ahead log to write transaction requests received at the data storage system during a current checkpoint generation. After the transaction requests in the write ahead log are applied to a copy-on-write (COW) storage data structure stored in a storage system, one of first and second allocation bitmaps is updated to reflect changes in the COW storage data structure with respect to allocation of storage space in the storage system, and one of first and second super blocks is updated with references to central nodes of the COW storage data structure. After the allocation bitmap and the super block have been updated, an end indicator for the current checkpoint generation is written in the write ahead log to indicate that processing of the transaction requests for the current checkpoint generation has been completed.
    Type: Grant
    Filed: October 6, 2017
    Date of Patent: October 22, 2019
    Assignee: VMware, Inc.
    Inventors: Abhishek Gupta, Richard P. Spillane, Kapil Chowksey, Rob Johnson, Wenguang Wang
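The checkpoint cycle in the final abstract — log transactions to the write-ahead log, apply them to the copy-on-write structure, update the alternating allocation bitmap and super block, then write an end indicator for the generation — can be sketched with heavily simplified stand-ins (a dict for the COW structure, sets for bitmaps, tuples for super blocks). All names are illustrative.

```python
class CheckpointedStore:
    """Toy checkpoint-generation cycle over a write-ahead log."""
    def __init__(self):
        self.wal = []                    # write-ahead log entries
        self.generation = 0              # current checkpoint generation
        self.bitmaps = [set(), set()]    # two alternating allocation bitmaps
        self.superblocks = [None, None]  # two alternating super blocks
        self.tree = {}                   # stand-in for the COW storage structure

    def log(self, key, value):
        # Transactions are first written to the WAL under the current generation.
        self.wal.append((self.generation, key, value))

    def checkpoint(self):
        gen = self.generation
        slot = gen % 2                   # alternate between the two copies
        # Apply this generation's WAL entries to the COW structure.
        for g, key, value in self.wal:
            if g == gen:
                self.tree[key] = value
                self.bitmaps[slot].add(key)   # reflect new allocations
        # Update the alternate super block with the new root reference.
        self.superblocks[slot] = ("root", gen)
        # End indicator: processing of this generation is complete.
        self.wal.append(("END", gen))
        self.generation += 1

store = CheckpointedStore()
store.log("a", 1)
store.checkpoint()
assert store.tree == {"a": 1}
assert ("END", 0) in store.wal
assert store.superblocks[0] == ("root", 0)
```

Keeping two bitmaps and two super blocks means a crash mid-checkpoint leaves the previous generation's copies untouched, so recovery can fall back to the last generation whose end indicator made it into the log.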