Patents by Inventor Irene ZHANG
Irene ZHANG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20200370824
Abstract: A method for removing heavy hydrocarbons from a feed gas by: feeding, into an absorber, a top reflux stream and a second reflux stream below the top reflux stream, wherein the absorber produces an absorber bottom product stream and an absorber overhead product stream; depressurizing and feeding the absorber bottom product stream to a stripper to produce a stripper bottom product stream and a stripper overhead product stream; cooling and feeding a portion of the absorber overhead product stream back to the absorber as the top reflux stream; and pressurizing and feeding the stripper overhead product stream back to the absorber as the second reflux stream. Systems for carrying out the method are also provided.
Type: Application
Filed: May 23, 2019
Publication date: November 26, 2020
Applicant: Fluor Technologies Corporation
Inventors: John MAK, Jacob THOMAS, Dhirav PATEL, Curt GRAHAM, Irene ZHANG
-
Patent number: 9977747
Abstract: Memory performance in a computer system that implements large page mapping is improved even when memory is scarce by identifying page sharing opportunities within the large pages at the granularity of small pages and breaking up the large pages so that small pages within the large page can be freed up through page sharing. In addition, the number of small page sharing opportunities within the large pages can be used to estimate the total amount of memory that could be reclaimed through page sharing.
Type: Grant
Filed: February 24, 2016
Date of Patent: May 22, 2018
Assignee: VMware, Inc.
Inventors: Yury Baskakov, Alexander Thomas Garthwaite, Rajesh Venkatasubramanian, Irene Zhang, Seongbeom Kim, Nikhil Bhatia, Kiran Tati
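The estimation step the abstract describes can be illustrated with a minimal Python sketch: hash each small-page-sized chunk of a large page and count how many chunks already exist in a shared-page table. This is not the patented implementation; the 2 MiB/4 KiB page sizes and the `shared_hashes` table are assumptions for illustration.

```python
import hashlib

SMALL_PAGE = 4096               # assumed 4 KiB small page
LARGE_PAGE = 2 * 1024 * 1024    # assumed 2 MiB large page (512 small pages)

def small_page_hashes(large_page: bytes):
    """Hash each small-page-sized chunk of a large page's contents."""
    assert len(large_page) == LARGE_PAGE
    return [hashlib.sha1(large_page[i:i + SMALL_PAGE]).digest()
            for i in range(0, LARGE_PAGE, SMALL_PAGE)]

def estimate_reclaimable(large_pages, shared_hashes):
    """Count small pages inside large pages whose contents already exist in a
    (hypothetical) shared-page table; each match could be freed by breaking up
    the large page and sharing the small page."""
    matches = 0
    for lp in large_pages:
        for h in small_page_hashes(lp):
            if h in shared_hashes:
                matches += 1
    return matches * SMALL_PAGE  # estimated bytes reclaimable through sharing
```

A large page full of zeros, for example, yields 512 identical small-page hashes, so the whole 2 MiB would be counted as reclaimable if the zero page is already shared.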
-
Publication number: 20160253201
Abstract: Methods and apparatus for saving and/or restoring state information for virtualized computing systems are described. An example apparatus includes a physical memory and a virtual machine monitor to: in response to a request to suspend operation of a virtual machine, place a trace on a memory page in the physical memory to detect at least one of a read access or a write access that occurs when state information of the virtual machine is saved in response to the request, the memory page associated with virtual memory hosted by the virtual machine, while the virtual machine continues to operate after the request, initiate storing of the virtual memory of the virtual machine, and in response to a trigger of the trace, store an indication that the memory page is an active memory page.
Type: Application
Filed: May 6, 2016
Publication date: September 1, 2016
Inventors: Irene Zhang, Kenneth Charles Barr, Ganesh Venkitachalam, Irfan Ahmad, Alex Garthwaite, Jesse Pool
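The tracing idea above can be sketched in a few lines of Python: when suspend is requested, a one-shot trace is placed on every guest page, and any page touched while state is being saved is recorded as active. The `SuspendTracer` class and its `on_access` hook are hypothetical names for illustration, standing in for an MMU trace mechanism.

```python
class SuspendTracer:
    """Minimal sketch: trace guest pages at suspend time; pages accessed
    while the VM keeps running during the state save are marked active."""

    def __init__(self, num_pages: int):
        self.traced = set(range(num_pages))  # one-shot traces on all pages
        self.active = set()

    def on_access(self, page_num: int):
        """Hypothetical callback fired when a traced page is read or written."""
        if page_num in self.traced:
            self.active.add(page_num)        # page is in use: record it
            self.traced.discard(page_num)    # trace triggers only once

    def active_pages(self):
        return sorted(self.active)
```

The recorded active set is what a later restore could preload first, per the working-set approach in publication 20100070678 below.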
-
Publication number: 20160170906
Abstract: Memory performance in a computer system that implements large page mapping is improved even when memory is scarce by identifying page sharing opportunities within the large pages at the granularity of small pages and breaking up the large pages so that small pages within the large page can be freed up through page sharing. In addition, the number of small page sharing opportunities within the large pages can be used to estimate the total amount of memory that could be reclaimed through page sharing.
Type: Application
Filed: February 24, 2016
Publication date: June 16, 2016
Inventors: Yury BASKAKOV, Alexander Thomas GARTHWAITE, Rajesh VENKATASUBRAMANIAN, Irene ZHANG, Seongbeom KIM, Nikhil BHATIA, Kiran TATI
-
Patent number: 9292452
Abstract: Memory performance in a computer system that implements large page mapping is improved even when memory is scarce by identifying page sharing opportunities within the large pages at the granularity of small pages and breaking up the large pages so that small pages within the large page can be freed up through page sharing. In addition, the number of small page sharing opportunities within the large pages can be used to estimate the total amount of memory that could be reclaimed through page sharing.
Type: Grant
Filed: July 3, 2013
Date of Patent: March 22, 2016
Assignee: VMware, Inc.
Inventors: Yury Baskakov, Alexander Thomas Garthwaite, Rajesh Venkatasubramanian, Irene Zhang, Seongbeom Kim, Nikhil Bhatia, Kiran Tati
-
Patent number: 9053065
Abstract: A process for lazy checkpointing is enhanced to reduce the number of read/write accesses to the checkpoint file and thereby speed up the checkpointing process. The process for restoring a state of a virtual machine (VM) running in a physical machine from a checkpoint file that is maintained in persistent storage includes the steps of detecting access to a memory page of the virtual machine that has not been read into physical memory of the VM from the checkpoint file, determining a storage block of the checkpoint file to which the accessed memory page maps, writing contents of the storage block in a buffer, and copying contents of a block of memory pages that includes the accessed memory page from the buffer to corresponding locations of the memory pages in the physical memory of the VM. The storage block of the checkpoint file may be compressed or uncompressed.
Type: Grant
Filed: December 10, 2012
Date of Patent: June 9, 2015
Assignee: VMware, Inc.
Inventors: Alexander Thomas Garthwaite, Yury Baskakov, Irene Zhang, Kevin Scott Christopher, Jesse Pool
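The restore-side steps in the abstract map naturally onto a short Python sketch: on first access to an unrestored page, find the storage block it maps to, decompress that block into a buffer, and copy every page in the block into VM memory. The page size, pages-per-block grouping, and the `read_block` callback are assumptions for illustration, not the patented implementation.

```python
import zlib

PAGE = 4096            # assumed guest page size
PAGES_PER_BLOCK = 16   # assumed pages grouped into one checkpoint storage block

def restore_on_access(page_num, memory, restored, read_block, compressed=True):
    """Lazily restore the whole storage block containing `page_num` from the
    checkpoint file the first time that page is accessed."""
    if restored[page_num]:
        return                                    # already read in
    block = page_num // PAGES_PER_BLOCK           # block the page maps to
    raw = read_block(block)                       # fetch storage block contents
    buf = zlib.decompress(raw) if compressed else raw
    base = block * PAGES_PER_BLOCK
    for i in range(PAGES_PER_BLOCK):              # copy every page in the block
        memory[base + i] = buf[i * PAGE:(i + 1) * PAGE]
        restored[base + i] = True
```

Restoring a full block per fault is the point of the enhancement: one checkpoint-file read satisfies up to `PAGES_PER_BLOCK` future page accesses.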
-
Patent number: 9053064
Abstract: A process for lazy checkpointing a virtual machine is enhanced to reduce the number of read/write accesses to the checkpoint file and thereby speed up the checkpointing process. The process for saving a state of a virtual machine running in a physical machine to a checkpoint file maintained in persistent storage includes the steps of copying contents of a block of memory pages, which may be compressed, into a staging buffer, determining after the copying if the buffer is full, and upon determining that the buffer is full, saving the buffer contents in a storage block of the checkpoint file.
Type: Grant
Filed: December 10, 2012
Date of Patent: June 9, 2015
Assignee: VMware, Inc.
Inventors: Alexander Thomas Garthwaite, Yury Baskakov, Irene Zhang, Kevin Scott Christopher, Jesse Pool
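The save side of the process can likewise be sketched: pages are (optionally compressed and) appended to a staging buffer, and whenever the buffer holds a full storage block it is flushed to the checkpoint file. The block size and the `write_block` callback are hypothetical; this is a sketch of the batching idea, not the patented implementation.

```python
import zlib

BLOCK_SIZE = 64 * 1024  # assumed checkpoint-file storage block size

def save_checkpoint(memory_pages, write_block, compress=True):
    """Copy (optionally compressed) memory pages into a staging buffer and
    flush one storage block to the checkpoint file whenever the buffer fills."""
    staging = bytearray()
    for page in memory_pages:
        staging += zlib.compress(page) if compress else page
        while len(staging) >= BLOCK_SIZE:        # buffer full: write one block
            write_block(bytes(staging[:BLOCK_SIZE]))
            del staging[:BLOCK_SIZE]
    if staging:                                  # flush the final partial block
        write_block(bytes(staging))
```

Batching pages into block-sized writes is what reduces the number of checkpoint-file accesses relative to writing each page individually.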
-
Publication number: 20150012722
Abstract: Memory performance in a computer system that implements large page mapping is improved even when memory is scarce by identifying page sharing opportunities within the large pages at the granularity of small pages and breaking up the large pages so that small pages within the large page can be freed up through page sharing. In addition, the number of small page sharing opportunities within the large pages can be used to estimate the total amount of memory that could be reclaimed through page sharing.
Type: Application
Filed: July 3, 2013
Publication date: January 8, 2015
Inventors: Yury BASKAKOV, Alexander Thomas Garthwaite, Rajesh Venkatasubramanian, Irene Zhang, Seongbeom Kim, Nikhil Bhatia, Kiran Tati
-
Publication number: 20140164722
Abstract: A process for lazy checkpointing a virtual machine is enhanced to reduce the number of read/write accesses to the checkpoint file and thereby speed up the checkpointing process. The process for saving a state of a virtual machine running in a physical machine to a checkpoint file maintained in persistent storage includes the steps of copying contents of a block of memory pages, which may be compressed, into a staging buffer, determining after the copying if the buffer is full, and upon determining that the buffer is full, saving the buffer contents in a storage block of the checkpoint file.
Type: Application
Filed: December 10, 2012
Publication date: June 12, 2014
Applicant: VMware, Inc.
Inventors: Alexander Thomas GARTHWAITE, Yury BASKAKOV, Irene ZHANG, Kevin Scott CHRISTOPHER, Jesse POOL
-
Publication number: 20140164723
Abstract: A process for lazy checkpointing is enhanced to reduce the number of read/write accesses to the checkpoint file and thereby speed up the checkpointing process. The process for restoring a state of a virtual machine (VM) running in a physical machine from a checkpoint file that is maintained in persistent storage includes the steps of detecting access to a memory page of the virtual machine that has not been read into physical memory of the VM from the checkpoint file, determining a storage block of the checkpoint file to which the accessed memory page maps, writing contents of the storage block in a buffer, and copying contents of a block of memory pages that includes the accessed memory page from the buffer to corresponding locations of the memory pages in the physical memory of the VM. The storage block of the checkpoint file may be compressed or uncompressed.
Type: Application
Filed: December 10, 2012
Publication date: June 12, 2014
Applicant: VMWARE, INC.
Inventors: Alexander Thomas GARTHWAITE, Yury BASKAKOV, Irene ZHANG, Kevin Scott CHRISTOPHER, Jesse POOL
-
Publication number: 20100070678
Abstract: Prior to or while the state of a virtual machine ("VM") is being saved, such as in connection with the suspension or checkpointing of a VM, a set of one or more "active" memory pages is identified, this set of active memory pages comprising memory pages that are in use within the VM before operation of the VM is suspended. This set of active memory pages may constitute a "working set" of memory pages. To restore the state of the VM and resume operation, in some embodiments, (a) access to persistent storage is restored to the VM, device state for the VM is restored, and one or more of the set of active memory pages are loaded into physical memory; (b) operation of the VM is resumed; and (c) additional memory pages from the saved state of the VM are loaded into memory after operation of the VM has resumed.
Type: Application
Filed: September 14, 2009
Publication date: March 18, 2010
Applicant: VMWARE, INC.
Inventors: Irene ZHANG, Kenneth Charles BARR, Ganesh VENKITACHALAM, Irfan AHMAD, Alex GARTHWAITE, Jesse POOL
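The restore sequence (a)-(c) above can be sketched as a small ordering exercise: preload the working-set pages, resume the VM, then stream in the remaining pages. The `load_page` and `start_vm` callbacks are hypothetical stand-ins for the monitor's actual restore and resume operations.

```python
def resume_vm(saved_pages, working_set, load_page, start_vm):
    """Sketch of working-set-first restore: load the active pages before
    resuming the VM, then bring in the remaining saved pages afterwards."""
    for p in sorted(saved_pages):        # (a) preload pages active at suspend
        if p in working_set:
            load_page(p, saved_pages[p])
    start_vm()                           # (b) resume execution early
    for p in sorted(saved_pages):        # (c) load the rest after resume
        if p not in working_set:
            load_page(p, saved_pages[p])
```

The benefit is resume latency: the VM starts running after only the working set is in memory, instead of waiting for the entire saved image.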