Patents by Inventor Christos Karamanolis

Christos Karamanolis has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10855602
    Abstract: Embodiments of the disclosure provide techniques for measuring congestion and controlling quality of service to a shared resource. A module that interfaces with the shared resource monitors the usage of the shared resource by accessing clients. Upon detecting that the rate of usage of the shared resource has exceeded a maximum rate supported by the shared resource, the module determines and transmits a congestion metric to clients that are currently attempting to access the shared resource. Clients, in turn, determine a delay period based on the congestion metric prior to attempting another access of the shared resource.
    Type: Grant
    Filed: February 5, 2018
    Date of Patent: December 1, 2020
    Assignee: VMware, Inc.
    Inventors: William Earl, Christos Karamanolis
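    Illustrative sketch: a minimal Python model of the congestion feedback and client backoff described in the abstract above. This is a hedged sketch, not the patented implementation; class names, thresholds, and the backoff formula are hypothetical.
      import random
      import time

      class SharedResourceModule:
          def __init__(self, max_rate):
              self.max_rate = max_rate  # maximum operations/sec the shared resource supports

          def congestion_metric(self, observed_rate):
              """Return 0 when uncongested, otherwise a value that grows with overload."""
              if observed_rate <= self.max_rate:
                  return 0.0
              return (observed_rate - self.max_rate) / self.max_rate

      class Client:
          def __init__(self, base_delay=0.01):
              self.base_delay = base_delay

          def delay_before_retry(self, congestion_metric):
              """Translate the congestion metric into a randomized backoff period."""
              if congestion_metric == 0.0:
                  return 0.0
              return random.uniform(0, self.base_delay * (1 + congestion_metric))

      module = SharedResourceModule(max_rate=1000)
      client = Client()
      metric = module.congestion_metric(observed_rate=1500)  # the resource is overloaded
      time.sleep(client.delay_before_retry(metric))          # the client waits before retrying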
  • Publication number: 20200371721
    Abstract: Techniques are described for storing a virtual disk in an object store comprising a plurality of physical storage devices housed in a plurality of host computers. A profile is received for creation of the virtual disk wherein the profile specifies storage properties desired for an intended use of the virtual disk. A virtual disk blueprint is generated based on the profile such that the virtual disk blueprint describes a storage organization for the virtual disk that addresses redundancy or performance requirements corresponding to the profile. A set of the physical storage devices that can store components of the virtual disk in a manner that satisfies the storage organization is then determined.
    Type: Application
    Filed: August 7, 2020
    Publication date: November 26, 2020
    Inventors: Christos KARAMANOLIS, Mansi SHAH, Nathan BURNETT
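    Illustrative sketch: a minimal Python model of turning a storage profile into a placement blueprint and selecting devices, as described in the abstract above. This is a hedged sketch under assumed profile fields (failures_to_tolerate, stripe_width, size_gb); it is not the patented blueprint format.
      from dataclasses import dataclass

      @dataclass
      class Device:
          name: str
          free_gb: int

      def make_blueprint(profile):
          """Derive a simple placement plan from the requested redundancy and striping."""
          return {
              "replicas": profile["failures_to_tolerate"] + 1,  # mirrored copies
              "stripes": profile.get("stripe_width", 1),
              "size_gb": profile["size_gb"],
          }

      def place(blueprint, devices):
          """Pick one device per required component; a real placer would also balance load and fault domains."""
          needed = blueprint["replicas"] * blueprint["stripes"]
          chosen = [d for d in devices if d.free_gb >= blueprint["size_gb"]][:needed]
          if len(chosen) < needed:
              raise RuntimeError("not enough devices to satisfy the profile")
          return chosen

      devices = [Device("ssd-a", 500), Device("ssd-b", 500), Device("hdd-c", 2000)]
      blueprint = make_blueprint({"failures_to_tolerate": 1, "stripe_width": 1, "size_gb": 100})
      print([d.name for d in place(blueprint, devices)])  # e.g. ['ssd-a', 'ssd-b']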
  • Patent number: 10812582
    Abstract: Examples disclosed herein relate to propagating changes made on a file system volume of a primary cluster of nodes to the same file system volume also being managed by a secondary cluster of nodes. An application is executed on both clusters, and data changes on the primary cluster are mirrored to the secondary cluster using an exo-clone file. The exo-clone file includes the differences between two or more snapshots of the volume on the primary cluster, along with identifiers of the changed blocks and (optionally) state information thereof. Just these changes, identifiers, and state information are packaged in the exo-clone file and then exported to the secondary cluster, which in turn makes the changes to its version of the volume. Exporting just the changes to the data blocks and the corresponding block identifiers drastically reduces the information that needs to be exchanged and processed to keep the two volumes consistent.
    Type: Grant
    Filed: June 23, 2016
    Date of Patent: October 20, 2020
    Assignee: VMware, Inc.
    Inventors: Richard Spillane, Yunshan Luke Lu, Wenguang Wang, Maxime Austruy, Christos Karamanolis, Rawlinson Rivera
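    Illustrative sketch: a minimal Python model of packaging only the blocks that changed between two snapshots and replaying them on a secondary copy, in the spirit of the exo-clone file described above. Snapshots are modeled as plain dictionaries; the real exo-clone format and its state information are not reproduced here.
      def diff_snapshots(old_snapshot, new_snapshot):
          """Return {block_id: data} for blocks that differ between the two snapshots."""
          return {block_id: data
                  for block_id, data in new_snapshot.items()
                  if old_snapshot.get(block_id) != data}

      def apply_exo_clone(volume, exo_clone):
          """Apply the packaged changed blocks to the secondary's copy of the volume."""
          volume.update(exo_clone)

      primary_old = {0: b"aaaa", 1: b"bbbb"}
      primary_new = {0: b"aaaa", 1: b"BBBB", 2: b"cccc"}
      secondary = dict(primary_old)

      exo_clone = diff_snapshots(primary_old, primary_new)  # only blocks 1 and 2 are shipped
      apply_exo_clone(secondary, exo_clone)
      assert secondary == primary_new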
  • Patent number: 10776045
    Abstract: System and method for managing multiple data storages using a file system of a computer system utilize a primary data storage to cache objects of logical object containers stored in a secondary data storage in caching-tier volumes. When an access request for an object stored in the secondary data storage is received at the file system and the object is not currently cached in the primary data storage, a caching-tier volume in the primary data storage is created that corresponds to a logical object container in the secondary data storage that includes the requested object. The caching-tier volume is used to cache the object as an inflated file so that the inflated file is available at the primary data storage in the caching-tier volume for a subsequent access request for the object stored in the secondary data storage.
    Type: Grant
    Filed: August 1, 2017
    Date of Patent: September 15, 2020
    Assignee: VMware, Inc.
    Inventors: Richard P. Spillane, Wenguang Wang, Abhishek Gupta, Maxime Austruy, Christos Karamanolis
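    Illustrative sketch: a minimal Python model of caching objects from a secondary store in per-container caching-tier volumes created on demand, as described above. Class and method names are hypothetical, and the "inflated file" is modeled as raw bytes.
      class CachingFileSystem:
          def __init__(self, secondary_store):
              self.secondary = secondary_store  # {container: {object_name: data}}
              self.caching_volumes = {}         # container -> {object_name: inflated data}

          def read(self, container, object_name):
              volume = self.caching_volumes.get(container)
              if volume is None:
                  # First access to this container: create its caching-tier volume.
                  volume = self.caching_volumes[container] = {}
              if object_name not in volume:
                  # Cache miss: fetch from the secondary store and keep it as an inflated file.
                  volume[object_name] = self.secondary[container][object_name]
              return volume[object_name]

      fs = CachingFileSystem({"container1": {"obj1": b"payload"}})
      assert fs.read("container1", "obj1") == b"payload"  # miss: volume created, object cached
      assert fs.read("container1", "obj1") == b"payload"  # hit: served from the caching-tier volume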
  • Patent number: 10769036
    Abstract: Embodiments of the disclosure provide techniques for updating a distributed transaction log on a previously offline resource object component using distributed transaction logs from active host computer nodes from separate RAID mirror configurations. Each component object maintains a journal (log) where distributed transactions are recorded. If a component object goes offline and subsequently returns (e.g., if the node hosting the component object reboots), the component object is marked as stale. To return the component object to an active state, a distributed resources module retrieves the journals from other resource component objects from other RAID configurations where the data is mirrored. The module filters the data that is missing from the journal of the previously offline component object and merges the filtered data into that journal.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: September 8, 2020
    Assignee: VMware, Inc.
    Inventors: William Earl, Christos Karamanolis, Eric Knauft, Pascal Renauld
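    Illustrative sketch: a minimal Python model of resynchronizing a stale component by filtering the journal entries it is missing from mirrored components' journals, as described above. Journals are modeled as lists of (sequence number, operation) pairs; the actual journal format is not reproduced.
      def missing_entries(stale_journal, mirror_journals):
          """Collect entries (keyed by sequence number) absent from the stale journal."""
          have = {seq for seq, _ in stale_journal}
          recovered = {}
          for journal in mirror_journals:
              for seq, op in journal:
                  if seq not in have:
                      recovered[seq] = op
          return sorted(recovered.items())

      def resync(stale_journal, mirror_journals):
          merged = stale_journal + missing_entries(stale_journal, mirror_journals)
          merged.sort(key=lambda entry: entry[0])  # replay in sequence order
          return merged

      stale = [(1, "write A"), (2, "write B")]
      mirror_a = [(1, "write A"), (2, "write B"), (3, "write C")]
      mirror_b = [(1, "write A"), (2, "write B"), (3, "write C"), (4, "write D")]
      print(resync(stale, [mirror_a, mirror_b]))  # entries 3 and 4 are recovered and merged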
  • Patent number: 10747594
    Abstract: The disclosure provides an approach for performing an operation by a first process on behalf of a second process, the method comprising: obtaining, by the first process, a memory handle from the second process, wherein the memory handle allows access, by the first process, to at least some of the address space of the second process; dividing the address space of the memory handle into a plurality of sections; receiving, by the first process, a request from the second process to perform an operation; determining, by the first process, a section of the plurality of sections that is to be mapped from the address space of the memory handle to the address space of the first process for the performance of the operation by the first process; mapping the section from the address space of the memory handle to the address space of the first process; and performing the operation by the first process on behalf of the second process.
    Type: Grant
    Filed: January 24, 2019
    Date of Patent: August 18, 2020
    Assignee: VMware, Inc.
    Inventors: Wenguang Wang, Christoph Klee, Adrian Drzewiecki, Christos Karamanolis, Richard P. Spillane, Maxime Austruy
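    Illustrative sketch: a minimal Python model of dividing a memory handle's address space into fixed-size sections and mapping only the section an incoming request touches, as described above. The mapping is simulated with byte arrays; a real implementation would map views of the other process's memory.
      SECTION_SIZE = 1 << 20  # 1 MiB sections (hypothetical size)

      class MemoryHandle:
          """Stand-in for a handle granting one process access to another's address space."""
          def __init__(self, size):
              self.size = size
              self.mapped_sections = {}  # section index -> locally mapped view (simulated)

          def ensure_mapped(self, offset):
              idx = offset // SECTION_SIZE
              if idx not in self.mapped_sections:
                  # Only this section of the remote address space is mapped locally.
                  self.mapped_sections[idx] = bytearray(SECTION_SIZE)
              return self.mapped_sections[idx]

          def handle_request(self, offset, data):
              """Perform an operation (here, a write) on behalf of the requesting process."""
              view = self.ensure_mapped(offset)
              start = offset % SECTION_SIZE
              view[start:start + len(data)] = data

      handle = MemoryHandle(size=64 * SECTION_SIZE)
      handle.handle_request(offset=5 * SECTION_SIZE + 128, data=b"hello")  # maps only section 5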
  • Patent number: 10747475
    Abstract: Techniques are described for storing a virtual disk in an object store comprising a plurality of physical storage devices housed in a plurality of host computers. A profile is received for creation of the virtual disk wherein the profile specifies storage properties desired for an intended use of the virtual disk. A virtual disk blueprint is generated based on the profile such that that the virtual disk blueprint describes a storage organization for the virtual disk that addresses redundancy or performance requirements corresponding to the profile. A set of the physical storage devices that can store components of the virtual disk in a manner that satisfies the storage organization is then determined.
    Type: Grant
    Filed: August 26, 2013
    Date of Patent: August 18, 2020
    Assignee: VMware, Inc.
    Inventors: Christos Karamanolis, Mansi Shah, Nathan Burnett
  • Publication number: 20200242034
    Abstract: The present disclosure provides techniques for managing a cache of a computer system using a cache management data structure. The cache management data structure includes a cold queue, a ghost queue, and a hot queue. The techniques herein improve the functioning of the computer because management of the cache management data structure can be performed in parallel across multiple cores or multiple processors, because a sequential scan will only pollute (i.e., add unimportant memory pages to) the cold queue and, to an extent, the ghost queue, but not the hot queue, and also because the cache management data structure has lower memory requirements and lower CPU overhead on a cache hit than some prior-art algorithms.
    Type: Application
    Filed: January 24, 2019
    Publication date: July 30, 2020
    Inventors: Wenguang WANG, Christoph KLEE, Adrian DRZEWIECKI, Christos KARAMANOLIS, Richard P. SPILLANE, Maxime AUSTRUY
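    Illustrative sketch: a simplified Python model of the cold/ghost/hot queue behavior described above: a sequential scan fills only the cold queue (and, via evictions, the ghost queue), while only pages re-referenced after appearing in the ghost queue reach the hot queue. The exact admission and eviction policy of the patent is not reproduced.
      from collections import OrderedDict, deque

      class ColdGhostHotCache:
          def __init__(self, cold_size, ghost_size, hot_size):
              self.cold = OrderedDict()              # page -> data, FIFO order
              self.ghost = deque(maxlen=ghost_size)  # page ids only, no data
              self.hot = OrderedDict()               # page -> data, LRU order
              self.cold_size, self.hot_size = cold_size, hot_size

          def access(self, page, load):
              if page in self.hot:
                  self.hot.move_to_end(page)  # hit in the hot queue
                  return self.hot[page]
              if page in self.cold:
                  return self.cold[page]      # hit in the cold queue
              data = load(page)
              if page in self.ghost:
                  # Recently evicted and referenced again: genuine reuse, promote to hot.
                  self.hot[page] = data
                  if len(self.hot) > self.hot_size:
                      self.hot.popitem(last=False)
              else:
                  self.cold[page] = data
                  if len(self.cold) > self.cold_size:
                      evicted, _ = self.cold.popitem(last=False)
                      self.ghost.append(evicted)  # remember only the page id
              return data

      cache = ColdGhostHotCache(cold_size=2, ghost_size=4, hot_size=2)
      for p in range(5):
          cache.access(p, load=lambda p: f"page-{p}")  # a scan pollutes cold/ghost only
      cache.access(0, load=lambda p: f"page-{p}")      # reuse after ghosting promotes page 0 to hot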
  • Publication number: 20200241939
    Abstract: The disclosure provides an approach for performing an operation by a first process on behalf of a second process, the method comprising: obtaining, by the first process, a memory handle from the second process, wherein the memory handle allows access, by the first process, to at least some of the address space of the second process; dividing the address space of the memory handle into a plurality of sections; receiving, by the first process, a request from the second process to perform an operation; determining, by the first process, a section of the plurality of sections that is to be mapped from the address space of the memory handle to the address space of the first process for the performance of the operation by the first process; mapping the section from the address space of the memory handle to the address space of the first process; and performing the operation by the first process on behalf of the second process.
    Type: Application
    Filed: January 24, 2019
    Publication date: July 30, 2020
    Inventors: Wenguang WANG, Christoph KLEE, Adrian DRZEWIECKI, Christos KARAMANOLIS, Richard P. SPILLANE, Maxime AUSTRUY
  • Publication number: 20200233693
    Abstract: Techniques are disclosed for maintaining high availability (HA) for virtual machines (VMs) running on host systems of a host cluster, where each host system executes a HA module in a plurality of HA modules and a storage module in a plurality of storage modules, where the host cluster aggregates, via the plurality of storage modules, locally-attached storage resources of the host systems to provide an object store, where persistent data for the VMs is stored as per-VM storage objects across the locally-attached storage resources comprising the object store, and where a failure causes the plurality of storage modules to observe a network partition in the host cluster that the plurality of HA modules do not. In one embodiment, a host system in the host cluster executing a first HA module invokes an API exposed by the plurality of storage modules for persisting metadata for a VM to the object store.
    Type: Application
    Filed: July 31, 2019
    Publication date: July 23, 2020
    Inventors: Marc Sevigny, Keith Farkas, Christos Karamanolis
  • Publication number: 20200201822
    Abstract: The disclosure herein describes synchronizing a data cache and an LSM tree file system on an object storage platform. Instructions to send a cached data set from the data cache to the LSM tree file system are received. An updated metadata catalog is generated. If the LSM tree structure is out of shape, compaction is performed on the LSM tree file system, which may be on a different system or server. When an unmerged compacted metadata catalog is identified, a merged metadata catalog is generated, based on the compacted metadata catalog and the cached data set, and associated with the cached data set. The cached data set and the associated metadata catalog are sent to the LSM tree file system, whereby the data cache and the LSM tree file system are synchronized. Synchronization is enabled without the data cache or the file system being locked or waiting for the other entity.
    Type: Application
    Filed: December 21, 2018
    Publication date: June 25, 2020
    Inventors: Wenguang Wang, Junlong Gao, Richard P. Spillane, Robert T. Johnson, Christos Karamanolis, Maxime Austruy
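    Illustrative sketch: a minimal Python model of merging an unmerged compacted metadata catalog with a cached data set before shipping it to the LSM tree file system, in the spirit of the synchronization described above. Catalogs are modeled as plain dictionaries of table names to versions; the real catalog format is not reproduced.
      def build_catalog(base_catalog, cached_data_set):
          """An updated catalog lists the tables visible after this flush."""
          return dict(base_catalog, **cached_data_set)

      def synchronize(local_catalog, cached_data_set, compacted_catalog=None):
          if compacted_catalog is not None:
              # A compaction finished remotely and was never merged locally: start from its
              # view of the surviving tables, then layer the cached data set's tables on top.
              return build_catalog(compacted_catalog, cached_data_set)
          return build_catalog(local_catalog, cached_data_set)

      local = {"sst-1": "v1", "sst-2": "v1"}
      compacted = {"sst-12": "v2"}                      # sst-1 and sst-2 were merged remotely
      new_tables = {"sst-3": "v1"}
      print(synchronize(local, new_tables, compacted))  # {'sst-12': 'v2', 'sst-3': 'v1'}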
  • Publication number: 20200201821
    Abstract: The disclosure herein describes synchronizing cached index copies at a first site with indexes of a log-structured merge (LSM) tree file system on an object storage platform at a second site. An indication that the LSM tree file system has been compacted based on a compaction process is received. A cached metadata catalog of the included parent catalog version at the first site is accessed. A set of cached index copies is identified at the first site based on the metadata of the cached metadata catalog. The compaction process is applied to the identified set of cached index copies and a compacted set of cached index copies is generated at the first site, whereby the compacted set of cached index copies is synchronized with a respective set of indexes of the plurality of sorted data tables of the LSM tree file system at the second site.
    Type: Application
    Filed: December 21, 2018
    Publication date: June 25, 2020
    Inventors: Wenguang Wang, Richard P. Spillane, Junlong Gao, Robert T. Johnson, Christos Karamanolis, Maxime Austruy
  • Publication number: 20200183905
    Abstract: Certain aspects provide systems and methods of compacting data within a log-structured merge tree (LSM tree) using sharding. In certain aspects, a method includes determining a size of the LSM tree, determining a compaction time for a compaction of the LSM tree based on the size, determining a number of compaction entities for performing the compaction in parallel based on the compaction time, determining a number of shards based on the number of compaction entities, and determining a key range associated with the LSM tree. The method further comprises dividing the key range by the number of shards into a number of sub key ranges, wherein each of the sub key ranges corresponds to a shard of the number of shards, and assigning the number of shards to the number of compaction entities for compaction.
    Type: Application
    Filed: December 6, 2018
    Publication date: June 11, 2020
    Inventors: Wenguang WANG, Richard P. SPILLANE, Junlong GAO, Robert T. JOHNSON, Christos KARAMANOLIS, Maxime AUSTRUY
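    Illustrative sketch: a minimal Python model of sizing the number of compaction entities from an estimated compaction time and splitting the key range into sub key ranges, as described above. The throughput and time targets are hypothetical constants.
      import math

      def plan_sharded_compaction(tree_size_bytes, key_lo, key_hi,
                                  throughput_bytes_per_sec=100 * 1024 ** 2,
                                  target_seconds=60, shards_per_entity=1):
          est_seconds = tree_size_bytes / throughput_bytes_per_sec    # single-entity estimate
          entities = max(1, math.ceil(est_seconds / target_seconds))  # workers needed for the target
          shards = entities * shards_per_entity
          span = (key_hi - key_lo) // shards
          sub_ranges = [(key_lo + i * span,
                         key_hi if i == shards - 1 else key_lo + (i + 1) * span)
                        for i in range(shards)]
          # entity index -> the sub key ranges it compacts (round-robin assignment)
          return {i: sub_ranges[i::entities] for i in range(entities)}

      # A 10 GiB tree over the numeric key range [0, 2**64) is split across two entities.
      print(plan_sharded_compaction(10 * 1024 ** 3, 0, 2 ** 64))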
  • Publication number: 20200183720
    Abstract: Techniques for decoupling compute and storage resources in a hyper-converged infrastructure (HCI) are provided. In one set of embodiments, a control plane of the HCI deployment can provision a host from a host platform of an infrastructure on which the HCI deployment is implemented and can provision one or more storage volumes from a storage platform of the infrastructure, where the storage platform runs on physical server resources in the infrastructure that are separate from the host platform. The control plane can then cause the one or more storage volumes to be network-attached to the host in a manner that enables a hypervisor of the host to make the one or more storage volumes available, as part of a virtual storage pool, to one or more virtual machines in the HCI deployment for data storage.
    Type: Application
    Filed: December 5, 2018
    Publication date: June 11, 2020
    Inventors: Peng Dai, Matthew B. Amdur, Christos Karamanolis
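    Illustrative sketch: a minimal Python model of a control plane that provisions a host from a host platform and volumes from a separate storage platform, then attaches the volumes to the host, as described above. The platform APIs shown here are hypothetical.
      class HostPlatform:
          def provision_host(self, name):
              return {"name": name, "attached_volumes": []}

      class StoragePlatform:
          def provision_volume(self, size_gb):
              return {"size_gb": size_gb}

      class ControlPlane:
          def __init__(self, host_platform, storage_platform):
              self.hosts = host_platform
              self.storage = storage_platform

          def deploy(self, host_name, volume_sizes_gb):
              host = self.hosts.provision_host(host_name)
              for size in volume_sizes_gb:
                  volume = self.storage.provision_volume(size)
                  host["attached_volumes"].append(volume)  # network-attach the volume to the host
              return host

      control_plane = ControlPlane(HostPlatform(), StoragePlatform())
      print(control_plane.deploy("esx-01", [256, 256]))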
  • Publication number: 20200183906
    Abstract: The disclosure herein describes providing and accessing data on an object storage platform using a log-structured merge (LSM) tree file system. The LSM tree file system on the object storage platform includes sorted data tables, each sorted data table including a payload portion and an index portion. Data is written to the LSM tree file system in at least one new sorted data table. Data is read by identifying a data location of the data based on index portions of the sorted data tables and reading the data from a sorted data table associated with the identified data location. The use of the LSM tree file system on the object storage platform provides an efficient means for interacting with the data stored thereon.
    Type: Application
    Filed: December 7, 2018
    Publication date: June 11, 2020
    Inventors: Richard P. Spillane, Wenguang Wang, Junlong Gao, Robert T. Johnson, Christos Karamanolis, Maxime Austruy
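    Illustrative sketch: a minimal Python model of sorted data tables that carry a payload portion and an index portion, with reads that consult table indexes from newest to oldest, as described above. On-disk layout, object-store access, and compaction are omitted.
      class SortedDataTable:
          def __init__(self, items):
              self.payload = b""
              self.index = {}  # key -> (offset, length) into the payload portion
              for key in sorted(items):
                  value = items[key]
                  self.index[key] = (len(self.payload), len(value))
                  self.payload += value

          def get(self, key):
              if key not in self.index:
                  return None
              offset, length = self.index[key]
              return self.payload[offset:offset + length]

      class LsmFileSystem:
          def __init__(self):
              self.tables = []  # oldest first

          def write(self, items):
              self.tables.append(SortedDataTable(items))  # each flush adds a new sorted data table

          def read(self, key):
              for table in reversed(self.tables):  # the newest table that knows the key wins
                  value = table.get(key)
                  if value is not None:
                      return value
              return None

      fs = LsmFileSystem()
      fs.write({b"a": b"1", b"b": b"2"})
      fs.write({b"b": b"22"})
      assert fs.read(b"b") == b"22" and fs.read(b"a") == b"1"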
  • Publication number: 20200183886
    Abstract: The disclosure herein describes writing data to a log-structured merge (LSM) tree file system on an object storage platform. Write data instructions indicating data for writing to the LSM tree file system are received. Based on the received instructions, the data is written to a first data cache configured as a live data cache. Based on an instruction to transfer data in the live data cache to the LSM tree file system, the first data cache is converted to a stable cache. A second data cache configured as a live data cache is then generated based on cloning the first data cache. The data in the first data cache is then written to the LSM tree file system. Use of a stable cache and a cloned live data cache enables the stable cache to write data to the file system in parallel with the live data cache handling incoming write data instructions.
    Type: Application
    Filed: December 7, 2018
    Publication date: June 11, 2020
    Inventors: Wenguang Wang, Richard P. Spillane, Junlong Gao, Robert T. Johnson, Christos Karamanolis, Maxime Austruy
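    Illustrative sketch: a minimal Python model of freezing the live cache into a stable cache, cloning it into a fresh live cache, and flushing the stable cache while new writes continue, as described above. Class names are hypothetical and the LSM write is a stub.
      import threading

      class WriteCache:
          def __init__(self):
              self.live = {}  # live data cache receiving writes
              self.lock = threading.Lock()

          def write(self, key, value):
              with self.lock:
                  self.live[key] = value

          def flush_to(self, lsm_write):
              with self.lock:
                  stable = self.live        # the first cache becomes the stable cache
                  self.live = dict(stable)  # its clone takes over as the live cache
              # Flushing happens outside the lock, in parallel with new writes.
              lsm_write(stable)

      cache = WriteCache()
      cache.write(b"a", b"1")
      flusher = threading.Thread(target=cache.flush_to, args=(lambda batch: None,))
      flusher.start()
      cache.write(b"b", b"2")  # accepted while the stable cache is being flushed
      flusher.join()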
  • Publication number: 20200174974
    Abstract: Techniques are disclosed for providing a file system interface for an object store intended to support simultaneous access to objects stored in the object store by multiple clients. In accordance with one method, an abstraction of a root directory to a hierarchical namespace for the object store is exposed to clients. The object store is backed by a plurality of physical storage devices housed in or directly attached to the plurality of host computers and internally tracks its stored objects using a flat namespace that maps unique identifiers to the stored objects. The creation of top-level objects appearing as subdirectories of the root directory is enabled, wherein each top-level object represents a separate abstraction of a storage device having a separate namespace that can be organized in accordance with any designated file system.
    Type: Application
    Filed: February 4, 2020
    Publication date: June 4, 2020
    Inventors: Christos KARAMANOLIS, Soam VASANI
  • Patent number: 10642526
    Abstract: In a storage cluster having nodes, blocks of a logical storage space of a storage object are allocated flexibly by a parent node to component nodes that are backed by physical storage. The method includes maintaining a first allocation map for the parent node, and second and third allocation maps for the first and second component nodes, respectively; executing a first write operation that targets a first block of the logical storage space on the first component node and updating the second allocation map to indicate that the first block is a written block; and, upon detecting that the first component node is offline, executing a second write operation that targets a second block of the logical storage space, which is allocated to the first component node, on the second component node and updating the third allocation map to indicate that the second block is a written block.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: May 5, 2020
    Assignee: VMware, Inc.
    Inventors: Eric Knauft, Mansi Shah, Jin Zhang, Christian Dickmann, Pascal Renauld, Radhika Vullikanti, Christos Karamanolis
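    Illustrative sketch: a minimal Python model of the parent and component allocation maps described above: the parent records which component a logical block is allocated to, each component records the blocks it has written, and a write whose owning component is offline lands on another component and is recorded in that component's map. Names and structures are hypothetical.
      class ComponentNode:
          def __init__(self, name):
              self.name = name
              self.online = True
              self.written = set()  # allocation map: blocks written on this component

          def write(self, block):
              self.written.add(block)

      class ParentNode:
          def __init__(self, components):
              self.components = components
              self.allocation = {}  # allocation map: logical block -> owning component name

          def write(self, block):
              owner_name = self.allocation.setdefault(block, self.components[0].name)
              owner = next(c for c in self.components if c.name == owner_name)
              if not owner.online:
                  # The owner is offline: redirect the write to an online component.
                  owner = next(c for c in self.components if c.online)
              owner.write(block)

      c1, c2 = ComponentNode("c1"), ComponentNode("c2")
      parent = ParentNode([c1, c2])
      parent.write(block=1)  # recorded in c1's map
      c1.online = False
      parent.write(block=2)  # block 2 is allocated to c1, but c2's map records the write
      print(c1.written, c2.written)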
  • Patent number: 10628196
    Abstract: A given host machine in a virtualization system having a virtual distributed storage system may receive an iSCSI protocol packet from a computer system separate from the given host machine. Processing the iSCSI protocol packet may include accessing a distributed storage device (the iSCSI target) comprising storage connected to two or more host machines in the virtualization system. The given host machine may generate an outbound iSCSI protocol packet comprising return data received from the target and send the outbound iSCSI protocol packet to the computer system.
    Type: Grant
    Filed: November 12, 2016
    Date of Patent: April 21, 2020
    Assignee: VMware, Inc.
    Inventors: Zhaohui Guo, Zhou Huang, Jian Zhao, Yizheng Chen, Aditya Kotwal, Jin Feng, Christos Karamanolis
  • Patent number: 10614046
    Abstract: Techniques are disclosed for providing a file system interface for an object store intended to support simultaneous access to objects stored in the object store by multiple clients. In accordance with one method, an abstraction of a root directory to a hierarchical namespace for the object store is exposed to clients. The object store is backed by a plurality of physical storage devices housed in or directly attached to the plurality of host computers and internally tracks its stored objects using a flat namespace that maps unique identifiers to the stored objects. The creation of top-level objects appearing as subdirectories of the root directory is enabled, wherein each top-level object represents a separate abstraction of a storage device having a separate namespace that can be organized in accordance with any designated file system.
    Type: Grant
    Filed: November 1, 2017
    Date of Patent: April 7, 2020
    Assignee: VMware, Inc.
    Inventors: Christos Karamanolis, Soam Vasani
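    Illustrative sketch: a minimal Python model of an object store that tracks objects in a flat namespace of unique identifiers while a file-system layer exposes top-level objects as subdirectories of an abstract root directory, as described above. Names are hypothetical and per-object namespaces are omitted.
      import uuid

      class ObjectStore:
          def __init__(self):
              self.objects = {}  # flat namespace: unique id -> object payload

          def create(self, payload):
              oid = str(uuid.uuid4())
              self.objects[oid] = payload
              return oid

      class RootDirectory:
          def __init__(self, store):
              self.store = store
              self.entries = {}  # subdirectory name -> object id

          def mkdir(self, name, payload=b""):
              """Create a top-level object and expose it as a subdirectory of the root."""
              self.entries[name] = self.store.create(payload)

          def listdir(self):
              return sorted(self.entries)

      store = ObjectStore()
      root = RootDirectory(store)
      root.mkdir("datastore-1")  # appears as a subdirectory, backed by a top-level object
      root.mkdir("datastore-2")
      print(root.listdir())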