Patents by Inventor Yury BASKAKOV
Yury BASKAKOV has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11762573
Abstract: A method of preserving the contiguity of large pages of a workload during migration of the workload from a source host to a destination host includes the steps of: detecting, at the destination host, receipt of a small page of zeros from the source host, wherein, at the source host, the small page is part of one of the large pages of the workload; and upon detecting the receipt of the small page of zeros, storing, at the destination host, all zeros in a small page that is part of one of the large pages of the workload.
Type: Grant
Filed: November 18, 2022
Date of Patent: September 19, 2023
Assignee: VMware, Inc.
Inventors: Arunachalam Ramanathan, Yury Baskakov, Anurekh Saxena, Ying Yu, Rajesh Venkatasubramanian, Michael Robert Stunes
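The zero-page technique above can be sketched in a few lines. This is an illustrative model only, not VMware's implementation: page sizes, the `receive_small_page` helper, and the bytearray model of a destination large page are all assumptions made for the example.

```python
SMALL_PAGE = 4096          # 4 KiB small page
LARGE_PAGE = 2 * 1024**2   # 2 MiB large page (typical x86 sizes)

def is_zero_page(page: bytes) -> bool:
    """Detect a received small page consisting entirely of zeros."""
    return page.count(0) == len(page)

def receive_small_page(dest_large_page: bytearray, offset: int, page: bytes) -> str:
    """Install one received small page into its backing large page.

    If the page is all zeros, the slot in the already-allocated large
    page is simply zero-filled in place, so the large mapping never has
    to be broken up into small mappings (hypothetical sketch).
    """
    assert len(page) == SMALL_PAGE and offset % SMALL_PAGE == 0
    if is_zero_page(page):
        dest_large_page[offset:offset + SMALL_PAGE] = bytes(SMALL_PAGE)
        return "zero-filled"
    dest_large_page[offset:offset + SMALL_PAGE] = page
    return "copied"
```

Either way the small page's contents end up correct at the destination; the point of the zero-detection branch is that it preserves the contiguity of the backing large page.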
-
Publication number: 20230195533
Abstract: A method of populating page tables of an executing workload during migration of the executing workload from a source host to a destination host includes the steps of: during transmission of memory pages of the executing workload from the source host to the destination host, populating the page tables of the workload at the destination host, wherein the populating comprises inserting mappings from virtual addresses of the workload to physical addresses of system memory of the destination host for all of the memory pages of the executing workload; and upon completion of transmission of all of the memory pages of the workload, resuming the workload at the destination host.
Type: Application
Filed: December 22, 2021
Publication date: June 22, 2023
Inventors: Yury BASKAKOV, Ying YU, Anurekh SAXENA, Arunachalam RAMANATHAN, Frederick Joseph JACOBS, Giritharan RASHIYAMANY
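A minimal sketch of the populate-while-transmitting idea, under stated assumptions: the `migrate` function, the dict-based page table, and the list-index "frame allocator" are all hypothetical stand-ins for real MMU structures.

```python
def migrate(pages: dict[int, bytes], dest_memory: list, page_table: dict) -> str:
    """Sketch: populate destination page tables during transmission.

    As each guest page arrives, a virtual-address -> physical-address
    mapping is inserted immediately, so no mappings remain to be faulted
    in lazily after the workload resumes.
    """
    for vaddr, contents in pages.items():
        pa = len(dest_memory)            # next free frame (toy allocator)
        dest_memory.append(contents)     # receive the page contents
        page_table[vaddr] = pa           # map it right away
    return "resumed"                     # resume only once everything is mapped
```

The design point is latency after switch-over: because every mapping already exists when the workload resumes, it avoids a storm of page faults on first touch.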
-
Publication number: 20230082951
Abstract: A method of preserving the contiguity of large pages of a workload during migration of the workload from a source host to a destination host includes the steps of: detecting, at the destination host, receipt of a small page of zeros from the source host, wherein, at the source host, the small page is part of one of the large pages of the workload; and upon detecting the receipt of the small page of zeros, storing, at the destination host, all zeros in a small page that is part of one of the large pages of the workload.
Type: Application
Filed: November 18, 2022
Publication date: March 16, 2023
Inventors: Arunachalam RAMANATHAN, Yury BASKAKOV, Anurekh SAXENA, Ying YU, Rajesh VENKATASUBRAMANIAN, Michael Robert STUNES
-
Patent number: 11586371
Abstract: A method of populating page tables of an executing workload during migration of the executing workload from a source host to a destination host includes the steps of: before resuming the workload at the destination host, populating the page tables of the workload at the destination host, wherein the populating comprises inserting mappings from virtual addresses of the workload to physical addresses of system memory of the destination host; and upon completion of populating the page tables, resuming the workload at the destination host.
Type: Grant
Filed: July 23, 2021
Date of Patent: February 21, 2023
Assignee: VMware, Inc.
Inventors: Yury Baskakov, Ying Yu, Anurekh Saxena, Arunachalam Ramanathan, Frederick Joseph Jacobs, Giritharan Rashiyamany
-
Publication number: 20230028047
Abstract: A method of preserving the contiguity of large pages of a workload during migration of the workload from a source host to a destination host includes the steps of: detecting, at the destination host, receipt of a small page of zeros from the source host, wherein, at the source host, the small page is part of one of the large pages of the workload; and upon detecting the receipt of the small page of zeros, storing, at the destination host, all zeros in a small page that is part of one of the large pages of the workload.
Type: Application
Filed: July 23, 2021
Publication date: January 26, 2023
Inventors: Arunachalam RAMANATHAN, Yury BASKAKOV, Anurekh SAXENA, Ying YU, Rajesh VENKATASUBRAMANIAN, Michael Robert STUNES
-
Publication number: 20230023452
Abstract: A method of populating page tables of an executing workload during migration of the executing workload from a source host to a destination host includes the steps of: before resuming the workload at the destination host, populating the page tables of the workload at the destination host, wherein the populating comprises inserting mappings from virtual addresses of the workload to physical addresses of system memory of the destination host; and upon completion of populating the page tables, resuming the workload at the destination host.
Type: Application
Filed: July 23, 2021
Publication date: January 26, 2023
Inventors: Yury BASKAKOV, Ying YU, Anurekh SAXENA, Arunachalam RAMANATHAN, Frederick Joseph JACOBS, Giritharan RASHIYAMANY
-
Patent number: 11543988
Abstract: A method of preserving the contiguity of large pages of a workload during migration of the workload from a source host to a destination host includes the steps of: detecting, at the destination host, receipt of a small page of zeros from the source host, wherein, at the source host, the small page is part of one of the large pages of the workload; and upon detecting the receipt of the small page of zeros, storing, at the destination host, all zeros in a small page that is part of one of the large pages of the workload.
Type: Grant
Filed: July 23, 2021
Date of Patent: January 3, 2023
Assignee: VMware, Inc.
Inventors: Arunachalam Ramanathan, Yury Baskakov, Anurekh Saxena, Ying Yu, Rajesh Venkatasubramanian, Michael Robert Stunes
-
Publication number: 20220066806
Abstract: A virtual machine (VM) is migrated from a source host to a destination host in a virtualized computing system, the VM having a plurality of virtual central processing units (CPUs). The method includes copying, by VM migration software executing in the source host and the destination host, memory of the VM from the source host to the destination host by installing, at the source host, write traces spanning all of the memory and then copying the memory from the source host to the destination host over a plurality of iterations; and performing switch-over, by the VM migration software, to quiesce the VM in the source host and resume the VM in the destination host. The VM migration software installs write traces using less than all of the virtual CPUs, and using trace granularity larger than a smallest page granularity.
Type: Application
Filed: August 25, 2020
Publication date: March 3, 2022
Inventors: Arunachalam RAMANATHAN, Yanlei ZHAO, Anurekh SAXENA, Yury BASKAKOV, Jeffrey W. SHELDON, Gabriel TARASUK-LEVIN, David A. DUNN, Sreekanth SETTY
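The coarse-granularity write-trace idea above can be modeled simply. This is a sketch under assumptions: the `install_traces`/`fire_trace` helpers and the set-of-regions representation are hypothetical, not VMware's data structures.

```python
SMALL = 4096  # smallest page granularity (4 KiB)

def install_traces(num_bytes: int, trace_gran: int) -> set:
    """Install write traces spanning all memory at a granularity larger
    than the smallest page: one trace per trace_gran-sized region.
    Coarser traces mean far fewer traces to install up front."""
    assert trace_gran % SMALL == 0 and trace_gran > SMALL
    return set(range(0, num_bytes, trace_gran))

def fire_trace(traces: set, write_addr: int, trace_gran: int) -> int:
    """A guest write fires the trace covering its region; the whole
    region is then marked dirty for the next pre-copy iteration."""
    region = (write_addr // trace_gran) * trace_gran
    traces.discard(region)   # a trace fires once, then is removed
    return region
```

The trade-off the abstract points at: installing one trace per 1 MiB region instead of per 4 KiB page cuts the installation work by a factor of 256, at the cost of re-copying a whole region when any byte in it is written.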
-
Patent number: 10691341
Abstract: One or more embodiments provide techniques for accessing a memory page of a virtual machine for which loading might have been deferred. A method according to an embodiment of the invention includes the steps of examining metadata of the memory page, determining that a flag in the metadata for indicating that the contents of the memory page need to be updated is set, and updating the contents of the memory page.
Type: Grant
Filed: December 22, 2016
Date of Patent: June 23, 2020
Assignee: VMware, Inc.
Inventors: Yury Baskakov, Alexander Garthwaite, Jesse Pool
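A toy model of the deferred-update check described above. The `Page` class, the `needs_update` flag name, and the fill-byte "signature" are illustrative assumptions; the patent's metadata is more general.

```python
class Page:
    """Guest memory page whose content fill may have been deferred.

    The metadata holds a flag saying the contents must be updated
    before use, plus a signature describing what the contents should
    be (here just a fill byte; hypothetical)."""

    def __init__(self, signature: int):
        self.data = bytes(4096)      # placeholder, not yet materialized
        self.needs_update = True     # metadata flag: update before access
        self.signature = signature

    def access(self) -> bytes:
        """On a load or store, examine the metadata: if the flag is
        set, update the contents and clear the flag."""
        if self.needs_update:
            self.data = bytes([self.signature]) * 4096
            self.needs_update = False
        return self.data
```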
-
Patent number: 9977747
Abstract: Memory performance in a computer system that implements large page mapping is improved even when memory is scarce by identifying page sharing opportunities within the large pages at the granularity of small pages and breaking up the large pages so that small pages within the large page can be freed up through page sharing. In addition, the number of small page sharing opportunities within the large pages can be used to estimate the total amount of memory that could be reclaimed through page sharing.
Type: Grant
Filed: February 24, 2016
Date of Patent: May 22, 2018
Assignee: VMware, Inc.
Inventors: Yury Baskakov, Alexander Thomas Garthwaite, Rajesh Venkatasubramanian, Irene Zhang, Seongbeom Kim, Nikhil Bhatia, Kiran Tati
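The counting step can be sketched with content hashing, a common basis for page sharing. A minimal sketch, assuming a SHA-1 digest as the page fingerprint and duplicate-within-the-scan as the definition of a sharing opportunity; real sharing also verifies contents byte-for-byte before collapsing pages.

```python
import hashlib

SMALL = 4096  # small-page granularity at which sharing is identified

def sharing_opportunities(large_page: bytes) -> int:
    """Count small pages inside a large page whose contents duplicate
    an already-seen small page (and so could be freed via sharing)."""
    seen, dups = set(), 0
    for off in range(0, len(large_page), SMALL):
        digest = hashlib.sha1(large_page[off:off + SMALL]).digest()
        if digest in seen:
            dups += 1
        else:
            seen.add(digest)
    return dups

def reclaimable_estimate(large_pages) -> int:
    """Estimate bytes reclaimable by breaking up large pages and
    sharing their duplicate small pages, as the abstract describes."""
    return sum(sharing_opportunities(lp) for lp in large_pages) * SMALL
```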
-
Publication number: 20170102876
Abstract: One or more embodiments provide techniques for accessing a memory page of a virtual machine for which loading might have been deferred. A method according to an embodiment of the invention includes the steps of examining metadata of the memory page, determining that a flag in the metadata for indicating that the contents of the memory page need to be updated is set, and updating the contents of the memory page.
Type: Application
Filed: December 22, 2016
Publication date: April 13, 2017
Inventors: Yury BASKAKOV, Alexander GARTHWAITE, Jesse POOL
-
Patent number: 9547510
Abstract: A system and method are disclosed for improving operation of a memory scheduler operating on a host machine supporting virtual machines (VMs) in which guest operating systems and guest applications run. For each virtual machine, the host machine hypervisor categorizes memory pages into memory usage classes and estimates the total number of pages for each memory usage class. The memory scheduler uses this information to perform memory reclamation and allocation operations for each virtual machine. The memory scheduler further selects between ballooning reclamation and swapping reclamation operations based in part on the numbers of pages in each memory usage class for the virtual machine. Calls to the guest operating system provide the memory usage class information. Memory reclamation not only can improve the performance of existing VMs, but can also permit the addition of a VM on the host machine without substantially impacting the performance of the existing and new VMs.
Type: Grant
Filed: December 10, 2013
Date of Patent: January 17, 2017
Assignee: VMware, Inc.
Inventors: Xavier Deguillard, Ishan Banerjee, Qasim Ali, Yury Baskakov, Kiran Tati, Rajesh Venkatasubramanian
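The selection step might look like the following sketch. Everything here is assumed for illustration: the class names ("free", "cache", "anonymous") and the compare-cheap-versus-expensive heuristic are hypothetical, since the patent does not fix a particular policy in its abstract.

```python
def choose_reclamation(class_pages: dict) -> str:
    """Pick between ballooning and swapping from per-class page counts.

    Hypothetical policy: if the guest reports plenty of free or cache
    pages, ballooning can reclaim them cheaply inside the guest;
    otherwise the host falls back to swapping."""
    cheap = class_pages.get("free", 0) + class_pages.get("cache", 0)
    expensive = class_pages.get("anonymous", 0)
    return "balloon" if cheap >= expensive else "swap"
```

The counts themselves would come from calls into the guest operating system, as the abstract notes.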
-
Patent number: 9529728
Abstract: Updating contents of certain memory pages in a virtual machine system is deferred until they are needed. Specifically, certain page update operations are deferred until the page is accessed for a load or store operation. Each page within the virtual machine system includes associated metadata, which includes a page signature characterizing the contents of a corresponding page or a reference to a page with canonical contents, and a flag that indicates the page needs to be updated before being accessed. The metadata may also include a flag to indicate that a backing store of the memory page has contents of a known content class. When such a memory page is mapped to a shared page with contents of that known content class, a flag in the metadata to indicate that contents of the memory page need to be updated is not set.
Type: Grant
Filed: October 7, 2010
Date of Patent: December 27, 2016
Assignee: VMware, Inc.
Inventors: Yury Baskakov, Alexander Garthwaite, Jesse Pool
-
Patent number: 9529609
Abstract: A system and method are disclosed for improving operation of a memory scheduler operating on a host machine supporting virtual machines (VMs) in which guest operating systems and guest applications run. For each virtual machine, the host machine hypervisor categorizes memory pages into memory usage classes and estimates the total number of pages for each memory usage class. The memory scheduler uses this information to perform memory reclamation and allocation operations for each virtual machine. The memory scheduler further selects between ballooning reclamation and swapping reclamation operations based in part on the numbers of pages in each memory usage class for the virtual machine. Calls to the guest operating system provide the memory usage class information. Memory reclamation not only can improve the performance of existing VMs, but can also permit the addition of a VM on the host machine without substantially impacting the performance of the existing and new VMs.
Type: Grant
Filed: December 10, 2013
Date of Patent: December 27, 2016
Assignee: VMware, Inc.
Inventors: Xavier DeGuillard, Ishan Banerjee, Qasim Ali, Yury Baskakov, Kiran Tati, Rajesh Venkatasubramanian
-
Patent number: 9501422
Abstract: Large pages that may impede memory performance in computer systems are identified. In operation, mappings to selected large pages are temporarily demoted to mappings to small pages and accesses to these small pages are then tracked. For each selected large page, an activity level is determined based on the tracked accesses to the small pages included in the large page. By strategically selecting relatively low activity large pages for decomposition into small pages and subsequent memory reclamation while restoring the mappings to relatively high activity large pages, memory consumption is improved, while limiting performance impact attributable to using small pages.
Type: Grant
Filed: June 11, 2014
Date of Patent: November 22, 2016
Assignee: VMware, Inc.
Inventors: Yury Baskakov, Peng Gao, Joyce Kay Spencer
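The demote-track-decide flow above reduces to a counting problem once the per-small-page access bits are available. A minimal sketch, assuming a dict mapping each demoted large page to its small pages and a set of accessed small pages; the function names and threshold are hypothetical.

```python
def measure_activity(large_to_small: dict, accessed: set) -> dict:
    """After temporarily demoting each large page to small-page
    mappings, count how many of its small pages were touched while
    the demotion was in effect."""
    return {lp: sum(1 for sp in smalls if sp in accessed)
            for lp, smalls in large_to_small.items()}

def pick_for_breakup(activity: dict, threshold: int) -> list:
    """Low-activity large pages are decomposed for reclamation; the
    rest get their large mappings restored."""
    return sorted(lp for lp, a in activity.items() if a <= threshold)
```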
-
Publication number: 20160170906
Abstract: Memory performance in a computer system that implements large page mapping is improved even when memory is scarce by identifying page sharing opportunities within the large pages at the granularity of small pages and breaking up the large pages so that small pages within the large page can be freed up through page sharing. In addition, the number of small page sharing opportunities within the large pages can be used to estimate the total amount of memory that could be reclaimed through page sharing.
Type: Application
Filed: February 24, 2016
Publication date: June 16, 2016
Inventors: Yury BASKAKOV, Alexander Thomas GARTHWAITE, Rajesh VENKATASUBRAMANIAN, Irene ZHANG, Seongbeom KIM, Nikhil BHATIA, Kiran TATI
-
Patent number: 9342248
Abstract: A computer implemented method for reducing the latency of an anticipated read of disk blocks from a swap file in a virtualized environment. The environment includes a host swap file maintained by a host operating system and a guest swap file maintained by a guest operating system. First, the method identifies a sequence of disk blocks that was written in the guest swap file. The method then detects within the sequence of blocks a first disk block that contains a reference to a second disk block that is stored in the host swap file. The method then replaces the first disk block in the guest swap file with the second disk block.
Type: Grant
Filed: April 29, 2014
Date of Patent: May 17, 2016
Assignee: VMware, Inc.
Inventors: Yury Baskakov, Kapil Arya, Alexander Thomas Garthwaite
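The replacement step can be sketched as follows. The representation is an assumption made for the example: a guest-swap block is modeled as either raw bytes or a `("ref", key)` tuple pointing at a block stored in the host swap file; real swap files use block addresses, not Python tuples.

```python
def inline_host_blocks(guest_swap: list, host_swap: dict) -> list:
    """Replace reference blocks in the guest swap file with the host
    swap blocks they point to, so an anticipated sequential read of the
    guest swap file needs no extra hop into the host swap file."""
    out = []
    for block in guest_swap:
        if isinstance(block, tuple) and block[0] == "ref":
            out.append(host_swap[block[1]])   # inline the referenced block
        else:
            out.append(block)                 # ordinary data block, keep as-is
    return out
```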
-
Patent number: 9330015
Abstract: Large pages that may impede memory performance in computer systems are identified. In operation, mappings to selected large pages are temporarily demoted to mappings to small pages and accesses to these small pages are then tracked. For each selected large page, an activity level is determined based on the tracked accesses to the small pages included in the large page. By strategically selecting relatively low activity large pages for decomposition into small pages and subsequent memory reclamation while restoring the mappings to relatively high activity large pages, memory consumption is improved, while limiting performance impact attributable to using small pages.
Type: Grant
Filed: June 11, 2014
Date of Patent: May 3, 2016
Assignee: VMware, Inc.
Inventors: Yury Baskakov, Peng Gao, Joyce Kay Spencer
-
Patent number: 9298377
Abstract: A computer implemented method for reducing the latency of an anticipated read of disk blocks from a swap file in a virtualized environment. First, the method identifies a sequence of disk blocks that was written in a guest swap file. The method then detects a first reference within the sequence of blocks that references a first disk block stored in a host swap file and a second reference within the sequence of blocks that references a second disk block stored in the host swap file. The method then moves the second disk block to a location in the host swap file that is adjacent to the first disk block. In some examples, the first block and second block are both moved to a new location in the host swap file where they are adjacent to one another.
Type: Grant
Filed: April 29, 2014
Date of Patent: March 29, 2016
Assignee: VMware, Inc.
Inventors: Yury Baskakov, Kapil Arya, Alexander Thomas Garthwaite
-
Patent number: 9292452
Abstract: Memory performance in a computer system that implements large page mapping is improved even when memory is scarce by identifying page sharing opportunities within the large pages at the granularity of small pages and breaking up the large pages so that small pages within the large page can be freed up through page sharing. In addition, the number of small page sharing opportunities within the large pages can be used to estimate the total amount of memory that could be reclaimed through page sharing.
Type: Grant
Filed: July 3, 2013
Date of Patent: March 22, 2016
Assignee: VMware, Inc.
Inventors: Yury Baskakov, Alexander Thomas Garthwaite, Rajesh Venkatasubramanian, Irene Zhang, Seongbeom Kim, Nikhil Bhatia, Kiran Tati