Patents by Inventor Sandeep Uttamchandani

Sandeep Uttamchandani has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10657101
    Abstract: Techniques for utilizing flash storage as an extension of hard disk (HDD) storage are provided. In one embodiment, a computer system stores a first subset of blocks of a logical file in a first physical file on flash storage, associated with a first data structure that represents a filesystem object, and a second subset of blocks in a second physical file on HDD storage, associated with a second data structure that represents a filesystem object comprising tiering configuration information that includes an identifier of the first physical file. The computer system processes an I/O request directed to one or more blocks of the logical file by verifying that the tiering configuration information exists in the second data structure, determining whether the one or more blocks are part of the first subset or the second subset, and directing the request to the physical file on flash storage or on HDD storage accordingly.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: May 19, 2020
    Assignee: VMware, Inc.
    Inventors: Deng Liu, Sandeep Uttamchandani, Li Zhou, Mayank Rawat
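The tiering scheme in the entry above splits a logical file's blocks between a flash-resident physical file and an HDD-resident physical file, with the HDD-side filesystem object carrying tiering configuration that points at the flash file. The following is a minimal, hypothetical Python sketch of that lookup-and-route flow; the class and field names are invented for illustration and are not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class TieringConfig:
    # Identifier of the flash-resident physical file, per the abstract.
    flash_file_id: str
    # Block numbers of the logical file currently held on flash.
    flash_blocks: set = field(default_factory=set)

@dataclass
class FilesystemObject:
    physical_file: dict            # block number -> data
    tiering: TieringConfig | None  # present only on the HDD-side object

def read_block(hdd_obj: FilesystemObject, flash_obj: FilesystemObject, block: int) -> bytes:
    """Route a read for one logical-file block to flash or HDD storage."""
    # Verify the tiering configuration exists before consulting it.
    if hdd_obj.tiering is not None and block in hdd_obj.tiering.flash_blocks:
        return flash_obj.physical_file[block]   # block belongs to the flash subset
    return hdd_obj.physical_file[block]         # otherwise it lives on HDD

# Example: blocks 0 and 2 are tiered to flash, block 1 stays on HDD.
flash = FilesystemObject(physical_file={0: b"hot0", 2: b"hot2"}, tiering=None)
hdd = FilesystemObject(physical_file={1: b"cold1"},
                       tiering=TieringConfig("flash-file-42", {0, 2}))
assert read_block(hdd, flash, 0) == b"hot0"
assert read_block(hdd, flash, 1) == b"cold1"
```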
  • Patent number: 10540119
    Abstract: A distributed shared log storage system employs an adapter that translates Application Programming Interfaces (APIs) for a big data application to APIs of the distributed shared log storage system. An instance of the adapter is configured for each big data application in accordance with a profile thereof, so that the big data applications can take on a variety of added characteristics to enhance the application and/or to improve its performance. Included in the added characteristics are global or local ordering of operations, replication of operations according to different replication models, atomicity of operations, and caching.
    Type: Grant
    Filed: January 10, 2017
    Date of Patent: January 21, 2020
    Assignee: VMware, Inc.
    Inventors: Sandeep Uttamchandani, Cheng Zhang
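As a rough illustration of the adapter idea in the entry above, the sketch below wraps a shared-log append/read API behind an application-facing put/get interface and switches optional behaviors (ordering, caching) on or off from a per-application profile. All class, method, and profile-key names are assumptions made for this example.

```python
import threading

class SharedLog:
    """Stand-in for the distributed shared log's append/read API."""
    def __init__(self):
        self._entries = []
    def append(self, record):
        self._entries.append(record)
        return len(self._entries) - 1          # position in the log
    def read(self, pos):
        return self._entries[pos]

class Adapter:
    """Translates a key-value style application API onto the shared log,
    adding characteristics selected by the application's profile."""
    def __init__(self, log, profile):
        self._log = log
        self._profile = profile                # e.g. {"ordering": "global", "caching": True}
        self._lock = threading.Lock()          # used only for globally ordered appends
        self._cache = {}

    def put(self, key, value):
        record = (key, value)
        if self._profile.get("ordering") == "global":
            with self._lock:                   # serialize appends for a global order
                pos = self._log.append(record)
        else:
            pos = self._log.append(record)
        if self._profile.get("caching"):
            self._cache[key] = value
        return pos

    def get(self, key):
        if self._profile.get("caching") and key in self._cache:
            return self._cache[key]
        # Fall back to scanning the log (a real adapter would keep an index).
        for k, v in reversed(self._log._entries):
            if k == key:
                return v
        return None

adapter = Adapter(SharedLog(), {"ordering": "global", "caching": True})
adapter.put("row-1", {"col": 7})
print(adapter.get("row-1"))
```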
  • Publication number: 20180276233
    Abstract: Techniques for utilizing flash storage as an extension of hard disk (HDD) storage are provided. In one embodiment, a computer system stores a first subset of blocks of a logical file in a first physical file on flash storage, associated with a first data structure that represents a filesystem object, and a second subset of blocks in a second physical file on HDD storage, associated with a second data structure that represents a filesystem object comprising tiering configuration information that includes an identifier of the first physical file. The computer system processes an I/O request directed to one or more blocks of the logical file by verifying that the tiering configuration information exists in the second data structure, determining whether the one or more blocks are part of the first subset or the second subset, and directing the request to the physical file on flash storage or on HDD storage accordingly.
    Type: Application
    Filed: May 25, 2018
    Publication date: September 27, 2018
    Inventors: Deng Liu, Sandeep Uttamchandani, Li Zhou, Mayank Rawat
  • Patent number: 10082978
    Abstract: A distributed shared log storage system employs an adapter that translates APIs for a big data application to APIs of the distributed shared log storage system. The adapter is configured for different big data applications in accordance with a profile thereof, so that storage performance using the distributed shared log storage system can be comparable to the storage performance of the profiled big data application. An over-utilized adapter instance is detected and the workload assigned to the over-utilized adapter instance is either moved to a different adapter instance that can handle the workload or split among two or more adapter instances.
    Type: Grant
    Filed: December 19, 2016
    Date of Patent: September 25, 2018
    Assignee: VMware, Inc.
    Inventors: Sandeep Uttamchandani, Cheng Zhang
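The load-balancing behavior described in the entry above could, in outline, look like the following sketch: measure each adapter instance's utilization and, when one exceeds a threshold, either move its workload to an instance with headroom or split it across two. The thresholds, the utilization metric, and all names here are purely illustrative assumptions.

```python
def rebalance(instances, threshold=0.8):
    """instances: dict name -> {"load": float in [0, 1], "workloads": [ids]}.
    Returns a list of (workload, src, dst) reassignments."""
    moves = []
    for src, info in instances.items():
        if info["load"] <= threshold or not info["workloads"]:
            continue                                   # not over-utilized
        workload = info["workloads"][0]
        # Prefer moving the whole workload to an instance that can absorb it.
        spare = [n for n, i in instances.items() if n != src and i["load"] < threshold / 2]
        if spare:
            moves.append((workload, src, spare[0]))
        else:
            # Otherwise split the workload across the two least-loaded instances.
            targets = sorted((n for n in instances if n != src),
                             key=lambda n: instances[n]["load"])[:2]
            for t in targets:
                moves.append((workload + "-shard", src, t))
    return moves

fleet = {
    "adapter-a": {"load": 0.95, "workloads": ["hive-etl"]},
    "adapter-b": {"load": 0.20, "workloads": []},
    "adapter-c": {"load": 0.30, "workloads": []},
}
print(rebalance(fleet))   # hive-etl moves off the over-utilized adapter-a
```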
  • Publication number: 20180173451
    Abstract: A distributed shared log storage system employs an adapter that translates APIs for a big data application to APIs of the distributed shared log storage system. The adapter is configured for different big data applications in accordance with a profile thereof, so that storage performance using the distributed shared log storage system can be comparable to the storage performance of the profiled big data application. An over-utilized adapter instance is detected and the workload assigned to the over-utilized adapter instance is either moved to a different adapter instance that can handle the workload or split among two or more adapter instances.
    Type: Application
    Filed: December 19, 2016
    Publication date: June 21, 2018
    Inventors: Sandeep Uttamchandani, Cheng Zhang
  • Patent number: 9984089
    Abstract: Techniques for utilizing flash storage as an extension of hard disk (HDD) storage are provided. In one embodiment, a computer system stores a first subset of blocks of a logical file in a first physical file on flash storage, associated with a first data structure that represents a filesystem object, and a second subset of blocks in a second physical file on HDD storage, associated with a second data structure that represents a filesystem object comprising tiering configuration information that includes an identifier of the first physical file. The computer system processes an I/O request directed to one or more blocks of the logical file by verifying that the tiering configuration information exists in the second data structure, determining whether the one or more blocks are part of the first subset or the second subset, and directing the request to the physical file on flash storage or on HDD storage accordingly.
    Type: Grant
    Filed: October 21, 2015
    Date of Patent: May 29, 2018
    Assignee: VMware, Inc.
    Inventors: Deng Liu, Sandeep Uttamchandani, Li Zhou, Mayank Rawat
  • Publication number: 20180060143
    Abstract: A distributed shared log storage system employs an adapter that translates APIs for a big data application to APIs of the distributed shared log storage system. An instance of the adapter is configured for each big data application in accordance with a profile thereof, so that the big data applications can take on a variety of added characteristics to enhance the application and/or to improve its performance. Included in the added characteristics are global or local ordering of operations, replication of operations according to different replication models, atomicity of operations, and caching.
    Type: Application
    Filed: January 10, 2017
    Publication date: March 1, 2018
    Inventors: Sandeep Uttamchandani, Cheng Zhang
  • Patent number: 9495104
    Abstract: Techniques for automatically allocating space in a flash storage-based cache are provided. In one embodiment, a computer system collects I/O trace logs for a plurality of virtual machines or a plurality of virtual disks and determines cache utility models for the plurality of virtual machines or the plurality of virtual disks based on the I/O trace logs. The cache utility model for each virtual machine or each virtual disk defines an expected utility of allocating space in the flash storage-based cache to the virtual machine or the virtual disk over a range of different cache allocation sizes. The computer system then calculates target cache allocation sizes for the plurality of virtual machines or the plurality of virtual disks based on the cache utility models and allocates space in the flash storage-based cache based on the target cache allocation sizes.
    Type: Grant
    Filed: January 8, 2015
    Date of Patent: November 15, 2016
    Assignee: VMware, Inc.
    Inventors: Sandeep Uttamchandani, Li Zhou, Fei Meng, Deng Liu
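A toy version of the allocation step in the entry above might look like the sketch below: each VM's cache utility model maps a candidate cache size to an expected utility (derived offline from its I/O trace logs), and the allocator greedily hands out cache blocks where the marginal utility is highest. Both the utility curves and the greedy policy are assumptions for illustration, not the patented method.

```python
def allocate_cache(total_blocks, utility_models, step=1):
    """utility_models: dict vm -> callable(size_in_blocks) -> expected utility.
    Grows allocations one step at a time toward the highest marginal gain."""
    alloc = {vm: 0 for vm in utility_models}
    for _ in range(0, total_blocks, step):
        # Marginal utility of giving each VM `step` more blocks.
        gains = {vm: f(alloc[vm] + step) - f(alloc[vm]) for vm, f in utility_models.items()}
        winner = max(gains, key=gains.get)
        alloc[winner] += step
    return alloc

# Hypothetical utility curves with diminishing returns (e.g. fitted from trace logs).
models = {
    "vm-db":  lambda s: 1.0 - 0.5 ** (s / 64),            # benefits strongly from cache
    "vm-web": lambda s: 0.4 * (1.0 - 0.5 ** (s / 256)),    # much flatter curve
}
print(allocate_cache(total_blocks=512, utility_models=models, step=64))
```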
  • Patent number: 9336035
    Abstract: Methods are presented for caching I/O data in a solid state drive (SSD) locally attached to a host computer supporting the running of a virtual machine (VM). Portions of the SSD are allocated as cache storage for VMs running on the host computer. A mapping relationship is maintained between unique identifiers for VMs running on the host computer and one or more process identifiers (PIDs) associated with processes running in the host computer that correspond to each VM's execution on the host computer. When an I/O request is received, a PID associated with the I/O request is determined and a unique identifier for the VM is extracted from the mapping relationship based on the determined PID. A portion of the SSD corresponding to the unique identifier of the VM, used as a cache for that VM, can then be accessed in order to handle the I/O request.
    Type: Grant
    Filed: October 23, 2012
    Date of Patent: May 10, 2016
    Assignee: VMware, Inc.
    Inventors: Li Zhou, Samdeep Nayak, Sandeep Uttamchandani
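The entry above describes mapping host-side process IDs back to VM identifiers so each I/O can be steered into that VM's slice of the local SSD cache. A minimal sketch of that mapping and dispatch, with invented identifiers and an in-memory stand-in for the SSD:

```python
class SsdCache:
    def __init__(self):
        self.pid_to_vm = {}     # host process id -> VM unique identifier
        self.vm_regions = {}    # VM unique identifier -> per-VM cache dict

    def register_vm(self, vm_uuid, pids):
        """Allocate a cache region for a VM and record its worker-process PIDs."""
        self.vm_regions[vm_uuid] = {}
        for pid in pids:
            self.pid_to_vm[pid] = vm_uuid

    def handle_io(self, pid, block, data=None):
        """Write (data given) or read (data None) a block in the issuing VM's region."""
        vm_uuid = self.pid_to_vm[pid]           # resolve the VM from the request's PID
        region = self.vm_regions[vm_uuid]
        if data is not None:
            region[block] = data
            return None
        return region.get(block)                # None on a cache miss

cache = SsdCache()
cache.register_vm("vm-1234", pids=[4021, 4022])
cache.handle_io(4021, block=7, data=b"payload")
print(cache.handle_io(4022, block=7))           # hit via a sibling process of the same VM
```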
  • Patent number: 9280300
    Abstract: Techniques for dynamically managing the placement of blocks of a logical file between a flash storage tier and an HDD storage tier are provided. In one embodiment, a computer system can collect I/O statistics pertaining to the logical file, where a first subset of blocks of the logical file are stored on the flash storage tier and where a second subset of blocks of the logical file are stored on the HDD storage tier. The computer system can further generate a heat map for the logical file based on the I/O statistics, where the heat map indicates, for each block of the logical file, the number of times the block has been accessed. The computer system can then identify, using the heat map, one or more blocks of the logical file as being performance-critical blocks, and can move data between the flash and HDD storage tiers such that the performance-critical blocks are placed on the flash storage tier.
    Type: Grant
    Filed: June 28, 2013
    Date of Patent: March 8, 2016
    Assignee: VMware, Inc.
    Inventors: Deng Liu, Wei Zhang, Xiaoyun Zhu, Mayank Rawat, Sandeep Uttamchandani, Li Zhou, Jianzhe Tai
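The heat-map-driven placement in the entry above can be sketched as: count accesses per block, rank blocks by heat, and keep the hottest ones (up to the flash tier's capacity) on flash. The code below is a simplified illustration with invented names and a list of block accesses standing in for the collected I/O statistics.

```python
from collections import Counter

def plan_placement(io_trace, flash_capacity_blocks):
    """io_trace: iterable of block numbers accessed.
    Returns (flash_blocks, hdd_blocks) for the logical file."""
    heat = Counter(io_trace)                      # heat map: block -> access count
    ranked = [b for b, _ in heat.most_common()]   # performance-critical blocks first
    flash_blocks = set(ranked[:flash_capacity_blocks])
    hdd_blocks = set(ranked[flash_capacity_blocks:])
    return flash_blocks, hdd_blocks

trace = [3, 3, 3, 7, 7, 1, 9, 3, 7, 2]
print(plan_placement(trace, flash_capacity_blocks=2))  # blocks 3 and 7 go to flash
```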
  • Publication number: 20160042005
    Abstract: Techniques for utilizing flash storage as an extension of hard disk (HDD) storage are provided. In one embodiment, a computer system stores a first subset of blocks of a logical file in a first physical file on flash storage, associated with a first data structure that represents a filesystem object, and a second subset of blocks in a second physical file on HDD storage, associated with a second data structure that represents a filesystem object comprising tiering configuration information that includes an identifier of the first physical file. The computer system processes an I/O request directed to one or more blocks of the logical file by verifying that the tiering configuration information exists in the second data structure, determining whether the one or more blocks are part of the first subset or the second subset, and directing the request to the physical file on flash storage or on HDD storage accordingly.
    Type: Application
    Filed: October 21, 2015
    Publication date: February 11, 2016
    Inventors: Deng Liu, Sandeep Uttamchandani, Li Zhou, Mayank Rawat
  • Patent number: 9239682
    Abstract: An I/O hint framework is provided. In one embodiment, a computer system can receive an I/O command originating from a virtual machine (VM), where the I/O command identifies a data block of a virtual disk. The computer system can further extract hint metadata from the I/O command, where the hint metadata includes one or more characteristics of the data block that are relevant for determining how to cache the data block in a flash storage-based cache. The computer system can then make the hint metadata available to a caching module configured to manage the flash storage-based cache.
    Type: Grant
    Filed: February 27, 2013
    Date of Patent: January 19, 2016
    Assignee: VMware, Inc.
    Inventors: Deng Liu, Thomas A. Phelan, Li Zhou, Ramkumar Vadivelu, Sandeep Uttamchandani
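A hypothetical rendering of the hint flow in the entry above: the handler pulls hint metadata off the I/O command and hands it to whichever caching module has registered for hints. The field names and the callback interface are invented for this sketch.

```python
from dataclasses import dataclass

@dataclass
class IoCommand:
    vm_id: str
    block: int
    payload: bytes
    hints: dict          # e.g. {"temperature": "hot", "type": "metadata"}

class HintFramework:
    def __init__(self):
        self._caching_modules = []

    def register(self, module):
        """A caching module subscribes to receive hint metadata."""
        self._caching_modules.append(module)

    def handle(self, cmd: IoCommand):
        # Extract hint metadata from the I/O command and publish it to subscribers.
        for module in self._caching_modules:
            module.on_hint(cmd.vm_id, cmd.block, cmd.hints)
        # ... the command would then continue down the normal I/O path ...

class PriorityCache:
    def on_hint(self, vm_id, block, hints):
        if hints.get("temperature") == "hot":
            print(f"pin block {block} of {vm_id} in the flash cache")

fw = HintFramework()
fw.register(PriorityCache())
fw.handle(IoCommand("vm-1", 42, b"...", {"temperature": "hot", "type": "metadata"}))
```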
  • Patent number: 9182927
    Abstract: Techniques for utilizing flash storage as an extension of hard disk (HDD) based storage are provided. In one embodiment, a computer system can store a first subset of blocks of a logical file in a first physical file residing on a flash storage tier, and a second subset of blocks of the logical file in a second physical file residing on an HDD storage tier. The computer system can then receive an I/O request directed to one or more blocks of the logical file and process the I/O request by accessing the flash storage tier or the HDD storage tier, the accessing being based on whether the one or more blocks are part of the first subset of blocks stored in the first physical file.
    Type: Grant
    Filed: June 28, 2013
    Date of Patent: November 10, 2015
    Assignee: VMware, Inc.
    Inventors: Deng Liu, Sandeep Uttamchandani, Li Zhou, Mayank Rawat
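This entry is an earlier patent in the same tiering family as the one at the top of this listing; here the core decision reduces to a membership test on the flash-resident subset. A small sketch under that assumption, with all names invented:

```python
class TieredFile:
    """A logical file whose blocks are split between two physical files."""
    def __init__(self, flash_subset):
        self.flash_subset = set(flash_subset)    # block numbers in the first physical file
        self.flash_file, self.hdd_file = {}, {}  # stand-ins for the two physical files

    def write(self, block, data):
        target = self.flash_file if block in self.flash_subset else self.hdd_file
        target[block] = data

    def read(self, block):
        source = self.flash_file if block in self.flash_subset else self.hdd_file
        return source.get(block)

f = TieredFile(flash_subset={0, 1})
f.write(0, b"on flash")
f.write(5, b"on hdd")
print(f.read(0), f.read(5))
```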
  • Patent number: 9183151
    Abstract: Systems and techniques are described for thread cache allocation. A described technique includes monitoring input and output accesses for a plurality of threads executing on a computing device that includes a cache comprising a quantity of memory blocks, determining a respective reuse intensity for each of the threads, determining a respective read ratio for each of the threads, determining a respective quantity of memory blocks for each of the cache's partitions by optimizing a combination of cache utilities, where each cache utility is based on the respective reuse intensity, the respective read ratio, and a respective hit ratio for a particular partition, and resizing one or more of the partitions to equal the respective quantity of memory blocks determined for that partition.
    Type: Grant
    Filed: August 30, 2013
    Date of Patent: November 10, 2015
    Assignee: VMware, Inc.
    Inventors: Sandeep Uttamchandani, Li Zhou, Fei Meng, Deng Liu
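One way to picture the partition-sizing step in the entry above is the toy optimizer below: each thread's cache utility is scored from its reuse intensity, read ratio, and a hit-ratio estimate at a candidate partition size, and block counts are chosen to maximize the summed utility. The utility formula and the brute-force search are stand-ins, not the patented optimization.

```python
from itertools import product

def hit_ratio(blocks, working_set):
    """Crude hit-ratio estimate: fraction of the working set that fits in the partition."""
    return min(1.0, blocks / working_set) if working_set else 1.0

def utility(thread, blocks):
    # Stand-in scoring: weight the estimated hit ratio by how reusable and
    # read-heavy the thread's accesses are.
    return thread["reuse_intensity"] * thread["read_ratio"] * hit_ratio(blocks, thread["working_set"])

def size_partitions(threads, total_blocks, step=32):
    """Exhaustively try partition splits (at `step` granularity) and keep the best."""
    choices = range(0, total_blocks + 1, step)
    best, best_score = None, -1.0
    for split in product(choices, repeat=len(threads)):
        if sum(split) != total_blocks:
            continue
        score = sum(utility(t, b) for t, b in zip(threads, split))
        if score > best_score:
            best, best_score = split, score
    return dict(zip((t["name"] for t in threads), best))

threads = [
    {"name": "t-scan",  "reuse_intensity": 0.2, "read_ratio": 0.9, "working_set": 512},
    {"name": "t-index", "reuse_intensity": 0.9, "read_ratio": 0.8, "working_set": 128},
]
print(size_partitions(threads, total_blocks=256))
```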
  • Patent number: 9183016
    Abstract: A control module is introduced to communicate with an application workload scheduler of a distributed computing application, such as a JobTracker node of a Hadoop cluster, and with the virtualized computing environment underlying the application. The control module periodically queries for resource consumption data, such as CPU utilization, and uses the data to calculate how many MapReduce task slots should be allocated on each task node of the Hadoop cluster. The control module passes the task slot allocation to the application workload scheduler, which honors the allocation by adjusting task assignments to task nodes accordingly. The task nodes may also activate and deactivate task slots according to the changed slot allocation. As a result, the distributed computing application is able to scale up and down when other workloads sharing the virtualized computing environment change.
    Type: Grant
    Filed: February 27, 2013
    Date of Patent: November 10, 2015
    Assignee: VMware, Inc.
    Inventors: Li Zhou, Sandeep Uttamchandani, Yizheng Chen
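The control loop in the entry above can be caricatured in a few lines: poll per-node CPU utilization from the virtualized environment, translate headroom into a slot count, and push the new allocation to the application workload scheduler. The utilization source, the slots-per-core factor, and the scheduler interface are all invented for this sketch.

```python
import time

def compute_slots(cpu_utilization, cores, slots_per_core=2, reserve=0.10):
    """Translate a node's spare CPU into a number of MapReduce task slots."""
    headroom = max(0.0, 1.0 - cpu_utilization - reserve)
    return int(headroom * cores * slots_per_core)

def control_loop(query_utilization, scheduler, nodes, interval_s=30, iterations=1):
    """Periodically recompute and publish slot allocations for each task node."""
    for _ in range(iterations):
        allocation = {}
        for node, cores in nodes.items():
            util = query_utilization(node)             # e.g. from the hypervisor's stats API
            allocation[node] = compute_slots(util, cores)
        scheduler.set_slot_allocation(allocation)      # the scheduler honors the new limits
        time.sleep(interval_s)

class FakeScheduler:
    def set_slot_allocation(self, allocation):
        print("new slot allocation:", allocation)

# One iteration with canned utilization numbers, purely for illustration.
control_loop(lambda node: {"node-1": 0.35, "node-2": 0.80}[node],
             FakeScheduler(), {"node-1": 8, "node-2": 8}, interval_s=0, iterations=1)
```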
  • Patent number: 9075731
    Abstract: Techniques for achieving crash consistency when performing write-behind caching using a flash storage-based cache are provided. In one embodiment, a computer system receives from a virtual machine (VM) a write request that includes data to be written to a virtual disk and caches the data in a flash storage-based cache. The computer system further logs a transaction entry for the write request in the flash storage-based cache, where the transaction entry includes information usable for flushing the data from the flash storage-based cache to a storage device storing the virtual disk. The computer system then communicates an acknowledgment to the VM indicating that the write request has been successfully processed.
    Type: Grant
    Filed: January 23, 2013
    Date of Patent: July 7, 2015
    Assignee: VMware, Inc.
    Inventors: Deng Liu, Thomas A. Phelan, Ramkumar Vadivelu, Wei Zhang, Sandeep Uttamchandani, Li Zhou
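In outline, the write-behind path in the entry above is: stage the data in the flash cache, append a transaction record describing where it must eventually land, then acknowledge the VM; a later flush (or crash recovery replay of the log) pushes anything not yet written back. The sketch below simulates that ordering with in-memory stand-ins and invented names.

```python
class WriteBehindCache:
    def __init__(self):
        self.cache = {}          # flash cache: (disk_id, block) -> data
        self.txn_log = []        # transaction entries, also kept on flash
        self.backing = {}        # stand-in for the virtual disk on shared storage

    def write(self, disk_id, block, data):
        """Handle a VM write: cache it, log it, then acknowledge."""
        self.cache[(disk_id, block)] = data
        # Log enough information to flush the data later, even after a host crash.
        self.txn_log.append({"disk": disk_id, "block": block, "flushed": False})
        return "ACK"             # the VM sees the write as complete here

    def flush(self):
        """Write-behind: push cached data to the virtual disk and mark entries done."""
        for entry in self.txn_log:
            if not entry["flushed"]:
                key = (entry["disk"], entry["block"])
                self.backing[key] = self.cache[key]
                entry["flushed"] = True

wb = WriteBehindCache()
print(wb.write("vmdk-1", 10, b"dirty"))   # ACK before the disk is touched
wb.flush()                                # later, or during crash recovery
print(wb.backing[("vmdk-1", 10)])
```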
  • Patent number: 9055119
    Abstract: The instant disclosure describes embodiments of a system and method for migrating virtual machine (VM)-specific content cached in a solid state drive (SSD) attached to an original host. During operation, the original host receives an event indicating an upcoming migration of a VM to a destination host. In response, the original host transmits a set of metadata associated with the SSD cache to the destination host. The metadata identifies data blocks stored in the SSD cache, thereby allowing the destination host to pre-fetch those blocks from storage shared by the original host and the destination host. Subsequently, the original host receives a power-off event for the VM and transmits a dirty block list to the destination host. The dirty block list specifies one or more data blocks that have changed since the transmission of the metadata.
    Type: Grant
    Filed: March 26, 2013
    Date of Patent: June 9, 2015
    Assignee: VMware, Inc.
    Inventors: Li Zhou, Samdeep Nayak, Sandeep Uttamchandani, Sanjay Acharya
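The migration protocol in the entry above has two hand-offs: cache metadata early, so the destination can warm its cache from shared storage, and a dirty-block list at power-off, so blocks that changed in the meantime are re-fetched. A schematic version follows, with all class and method names invented for the sketch.

```python
class SourceHost:
    def __init__(self, cached_blocks):
        self.cached_blocks = set(cached_blocks)   # VM blocks held in the local SSD cache
        self.dirty = set()

    def on_migration_event(self):
        return {"cached_blocks": sorted(self.cached_blocks)}   # metadata, not the data itself

    def write(self, block):
        self.dirty.add(block)                     # changed after the metadata was sent

    def on_power_off(self):
        return sorted(self.dirty)                 # dirty block list for the destination

class DestinationHost:
    def __init__(self, shared_storage):
        self.shared_storage = shared_storage
        self.ssd_cache = {}

    def prefetch(self, metadata):
        for b in metadata["cached_blocks"]:       # warm the cache from shared storage
            self.ssd_cache[b] = self.shared_storage[b]

    def apply_dirty_list(self, dirty_blocks):
        for b in dirty_blocks:                    # re-read blocks that changed since prefetch
            self.ssd_cache[b] = self.shared_storage[b]

storage = {b: f"data-{b}" for b in range(8)}
src, dst = SourceHost({1, 2, 5}), DestinationHost(storage)
dst.prefetch(src.on_migration_event())
src.write(2); storage[2] = "data-2-v2"
dst.apply_dirty_list(src.on_power_off())
print(dst.ssd_cache[2])                           # "data-2-v2"
```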
  • Publication number: 20150120994
    Abstract: Techniques for automatically allocating space in a flash storage-based cache are provided. In one embodiment, a computer system collects I/O trace logs for a plurality of virtual machines or a plurality of virtual disks and determines cache utility models for the plurality of virtual machines or the plurality of virtual disks based on the I/O trace logs. The cache utility model for each virtual machine or each virtual disk defines an expected utility of allocating space in the flash storage-based cache to the virtual machine or the virtual disk over a range of different cache allocation sizes. The computer system then calculates target cache allocation sizes for the plurality of virtual machines or the plurality of virtual disks based on the cache utility models and allocates space in the flash storage-based cache based on the target cache allocation sizes.
    Type: Application
    Filed: January 8, 2015
    Publication date: April 30, 2015
    Inventors: Sandeep Uttamchandani, Li Zhou, Fei Meng, Deng Liu
  • Publication number: 20150067262
    Abstract: Systems and techniques are described for thread cache allocation. A described technique includes monitoring input and output accesses for a plurality of threads executing on a computing device that includes a cache comprising a quantity of memory blocks, determining a respective reuse intensity for each of the threads, determining a respective read ratio for each of the threads, determining a respective quantity of memory blocks for each of the cache's partitions by optimizing a combination of cache utilities, where each cache utility is based on the respective reuse intensity, the respective read ratio, and a respective hit ratio for a particular partition, and resizing one or more of the partitions to equal the respective quantity of memory blocks determined for that partition.
    Type: Application
    Filed: August 30, 2013
    Publication date: March 5, 2015
    Applicant: VMware, Inc.
    Inventors: Sandeep Uttamchandani, Li Zhou, Fei Meng, Deng Liu
  • Patent number: 8949531
    Abstract: Techniques for automatically allocating space in a flash storage-based cache are provided. In one embodiment, a computer system collects I/O trace logs for a plurality of virtual machines or a plurality of virtual disks and determines cache utility models for the plurality of virtual machines or the plurality of virtual disks based on the I/O trace logs. The cache utility model for each virtual machine or each virtual disk defines an expected utility of allocating space in the flash storage-based cache to the virtual machine or the virtual disk over a range of different cache allocation sizes. The computer system then calculates target cache allocation sizes for the plurality of virtual machines or the plurality of virtual disks based on the cache utility models and allocates space in the flash storage-based cache based on the target cache allocation sizes.
    Type: Grant
    Filed: December 4, 2012
    Date of Patent: February 3, 2015
    Assignee: VMware, Inc.
    Inventors: Sandeep Uttamchandani, Li Zhou, Fei Meng, Deng Liu