Patents by Inventor Thomas A. Phelan
Thomas A. Phelan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11093402
Abstract: Techniques for using a cache to accelerate virtual machine (VM) I/O are provided. In one embodiment, a host system can intercept an I/O request from a VM running on the host system, where the I/O request is directed to a virtual disk residing on a shared storage device. The host system can then process the I/O request by accessing a cache that resides on one or more cache devices directly attached to the host system, where the accessing of the cache is transparent to the VM.
Type: Grant
Filed: August 27, 2019
Date of Patent: August 17, 2021
Assignee: VMware, Inc.
Inventors: Thomas A. Phelan, Mayank Rawat, Deng Liu, Kiran Madnani, Sambasiva Bandarupalli
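A minimal sketch of the host-side caching idea this abstract describes: reads are intercepted and served from a host-attached cache when possible, falling back to (and populating from) the shared storage device, all transparently to the VM. The class and variable names here are illustrative, not from the patent.

```python
class HostCache:
    """Write-through read cache keyed by (virtual disk, block number)."""

    def __init__(self, backing_store):
        self.backing = backing_store   # dict simulating the shared storage device
        self.cache = {}                # simulates the host-attached cache device
        self.hits = 0
        self.misses = 0

    def read(self, disk, block):
        key = (disk, block)
        if key in self.cache:          # cache hit: no round trip to shared storage
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        data = self.backing[key]       # miss: fetch from shared storage...
        self.cache[key] = data         # ...and populate the host-local cache
        return data

    def write(self, disk, block, data):
        key = (disk, block)
        self.cache[key] = data         # keep the cache coherent on writes
        self.backing[key] = data       # write-through to shared storage

shared = {("vmdk0", 0): b"boot"}
cache = HostCache(shared)
assert cache.read("vmdk0", 0) == b"boot"   # first read misses and populates
assert cache.read("vmdk0", 0) == b"boot"   # second read is a hit
assert (cache.hits, cache.misses) == (1, 1)
```

The VM only ever sees ordinary reads and writes against its virtual disk; the hit/miss bookkeeping and the host-local copy are invisible to it, which is the sense in which the cache is "transparent."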
-
Patent number: 11080244
Abstract: Systems, methods, and software described herein provide data to large-scale processing framework (LSPF) nodes in LSPF clusters. In one example, a method to provide data includes receiving an access request from a LSPF node to access data in accordance with a version of a distributed file system. The method further includes, responsive to the access request, accessing the data for the LSPF node in accordance with a different version of the distributed file system, and presenting the data to the LSPF node in accordance with the version of the distributed file system used by the LSPF node.
Type: Grant
Filed: May 28, 2014
Date of Patent: August 3, 2021
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Thomas A. Phelan, Gunaseelan Lakshminarayanan, Michael Moretti, Joel Baxter, Lakshman Chinnakotla
-
Patent number: 11068414
Abstract: A cache is maintained with write order numbers that indicate orders of writes into the cache, so that periodic partial flushes of the cache can be executed while maintaining write order consistency. A method of storing data into the cache includes receiving a request to write data into the cache, identifying lines in the cache for storing the data, writing the data into the lines of the cache, storing a write order number, and associating the write order number with the lines of the cache. A method of flushing a cache having cache lines associated with write order numbers includes the steps of identifying lines in the cache that are associated with either a selected write order number or a write order number that is less than the selected write order number, and flushing data stored in the identified lines to a persistent storage.
Type: Grant
Filed: June 28, 2019
Date of Patent: July 20, 2021
Assignee: VMware, Inc.
Inventors: Thomas A. Phelan, Erik Cota-Robles
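A toy model of the scheme in this abstract: each cached line records the write order number (WON) it was written under, and a partial flush evicts exactly those lines whose WON is at or below a selected barrier, so earlier writes always reach persistent storage before later ones. Class and method names are invented for illustration.

```python
class OrderedCache:
    """Cache whose lines carry write order numbers, enabling partial flushes."""

    def __init__(self):
        self.lines = {}        # line address -> (write_order_number, data)
        self.current_won = 0

    def begin_write_order(self):
        """Advance the write order number; subsequent writes share it."""
        self.current_won += 1
        return self.current_won

    def write(self, addr, data):
        # Associate the current write order number with this cache line.
        self.lines[addr] = (self.current_won, data)

    def flush_through(self, won, storage):
        """Flush every line whose WON is <= the selected write order number."""
        for addr in sorted(a for a, (w, _) in self.lines.items() if w <= won):
            _, data = self.lines.pop(addr)
            storage[addr] = data

cache = OrderedCache()
storage = {}
cache.begin_write_order(); cache.write(0, "a"); cache.write(1, "b")   # WON 1
cache.begin_write_order(); cache.write(2, "c")                        # WON 2
cache.flush_through(1, storage)      # partial flush: only WON <= 1 lines
assert storage == {0: "a", 1: "b"}
assert list(cache.lines) == [2]      # the WON-2 line remains cached
```

Because a flush never skips over an older write order number, the on-disk state after any partial flush corresponds to a valid prefix of the write history, which is the write order consistency property the abstract targets.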
-
Patent number: 11042665
Abstract: Systems, methods, and software described herein facilitate interfacing between processing nodes and a plurality of data repositories. In one example, a method of interfacing between a processing node and a plurality of data repositories includes identifying, for the processing node, a data access request using a first data access format, wherein the data access request includes a data connector identifier. The method further includes translating the access request to a second data access format based on the data connector identifier, and identifying a data repository in the plurality of data repositories to service the data access request based on the data connector identifier. The method also provides accessing data for the data access request in the data repository via the second data access format.
Type: Grant
Filed: November 4, 2014
Date of Patent: June 22, 2021
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Thomas A. Phelan, Michael J. Moretti, Joel Baxter, Gunaseelan Lakshminarayanan, Kumar Sreekanti
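One way to picture the connector mechanism above: a request in one access format carries a connector identifier, and that identifier selects both the translation into a second format and the repository that services the translated request. The registry contents, formats, and paths below are all invented for illustration.

```python
# connector id -> (translator into the repository's native format, repository)
CONNECTORS = {
    "hdfs": (lambda path: path.removeprefix("hdfs://"), {"data/x": b"hadoop bytes"}),
    "nfs":  (lambda path: path.removeprefix("nfs://"),  {"vol/y":  b"nfs bytes"}),
}

def access(request):
    """request: dict with a 'connector' id and a 'path' in the first format."""
    translate, repo = CONNECTORS[request["connector"]]   # id picks both pieces
    native_path = translate(request["path"])             # second data access format
    return repo[native_path]                             # service from that repository

assert access({"connector": "hdfs", "path": "hdfs://data/x"}) == b"hadoop bytes"
assert access({"connector": "nfs",  "path": "nfs://vol/y"}) == b"nfs bytes"
```

The processing node only ever speaks the first format; adding support for a new repository type is a matter of registering one more connector entry rather than changing the node.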
-
Patent number: 10928116
Abstract: A modular water storage tank, for a refrigerator, includes a hollow storage body comprising a first body member having first and second open ends, the first open end comprising a first male extension having a first connection member on an outer surface thereof. A female end cap comprises a second connection member on an inner surface thereof that is configured to engage with the first connection member to secure the female end cap to the first open end. A male end cap comprises a second male extension having a third connection member on an outer surface thereof that is configured to engage with a fourth connection member disposed on an inner surface at the second open end of the first body member to secure the male end cap to the second open end of the first body member.
Type: Grant
Filed: February 27, 2019
Date of Patent: February 23, 2021
Assignee: Electrolux Home Products, Inc.
Inventors: Thomas McCollough, Keith Thomas Phelan
-
Patent number: 10915449
Abstract: Systems, methods, and software described herein facilitate servicing of data requests based on quality of service assigned to processing jobs. In one example, a method of prioritizing data requests in a computing system based on quality of service includes identifying a plurality of data requests from a plurality of processing jobs. The method further includes prioritizing the plurality of data requests based on a quality of service assigned to each of the plurality of processing jobs, and assigning cache memory in the computing system to each of the plurality of data requests based on the prioritization.
Type: Grant
Filed: December 16, 2014
Date of Patent: February 9, 2021
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Thomas A. Phelan, Michael J. Moretti, Joel Baxter, Gunaseelan Lakshminarayanan
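A simple reading of the scheme above: each job carries a quality-of-service level, pending data requests are ordered by that level, and a fixed cache budget is granted in priority order. The QoS values, units, and greedy split below are assumptions made for the sketch, not details from the patent.

```python
def assign_cache(requests, qos, cache_blocks):
    """requests: list of (job, blocks_wanted); qos: job -> priority (higher wins).
    Returns job -> blocks granted, serving high-QoS jobs first."""
    grants = {}
    # Order the data requests by the QoS assigned to their originating job.
    for job, wanted in sorted(requests, key=lambda r: qos[r[0]], reverse=True):
        granted = min(wanted, cache_blocks)       # give what the budget allows
        grants[job] = grants.get(job, 0) + granted
        cache_blocks -= granted
    return grants

qos = {"etl": 1, "interactive": 3}
grants = assign_cache([("etl", 8), ("interactive", 6)], qos, cache_blocks=10)
assert grants == {"interactive": 6, "etl": 4}   # high-QoS job is served in full
```

Under contention, the low-priority batch job absorbs the shortfall while the interactive job's requests are fully cached, which is the behavior the prioritization is meant to produce.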
-
Publication number: 20210011775
Abstract: The technology disclosed herein enables optimized managing of cluster deployment on a plurality of host nodes. In a particular embodiment, a method includes defining parameters of a cluster for executing a process that will execute in a plurality of containers distributed across one or more of the plurality of host nodes. The method further provides adding a first container portion of the plurality of containers to a first host node portion of the plurality of host nodes. After adding the first container portion, the method includes determining that a remaining host node portion of the plurality of host nodes will not support more of the plurality of containers and adjusting the parameters of the cluster to allow the process to execute on the first host node portion.
Type: Application
Filed: July 9, 2019
Publication date: January 14, 2021
Inventors: Joel Baxter, Thomas A. Phelan
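A rough sketch of the deployment flow this abstract describes: containers are placed on host nodes until the remaining hosts cannot support any more, and the cluster's parameters are then adjusted downward so the process runs on the hosts already used. The capacity units and placement policy are invented for illustration.

```python
def deploy(requested_containers, host_capacity):
    """host_capacity: host -> containers it can run.
    Returns (placement, adjusted container count)."""
    placement = {}
    remaining = requested_containers
    for host, cap in host_capacity.items():
        take = min(cap, remaining)       # add a portion of the containers here
        if take:
            placement[host] = take
            remaining -= take
    # Remaining hosts cannot support more containers: adjust the cluster
    # parameters so the process executes on the hosts used so far.
    adjusted_count = requested_containers - remaining
    return placement, adjusted_count

placement, count = deploy(10, {"h1": 4, "h2": 3})
assert count == 7                        # cluster shrunk from 10 to 7 containers
assert placement == {"h1": 4, "h2": 3}
```

Rather than failing the deployment when capacity runs out, the sketch mirrors the abstract's choice: the cluster definition itself is adjusted to fit what the hosts can actually support.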
-
Patent number: 10860352
Abstract: Embodiments disclosed herein provide systems, methods, and computer readable media for managing data consumption rate in a virtual data processing environment. In a particular embodiment, a method provides, in a cache node of a host system, identifying read completions for one or more virtual machines instantiated in the host system, with the one or more virtual machines processing one or more processing jobs. The method further provides allocating the read completions to individual processing jobs of the one or more processing jobs and accumulating the read completions on a per-job basis, with the cache node determining a data consumption rate for each processing job of the one or more processing jobs.
Type: Grant
Filed: July 25, 2014
Date of Patent: December 8, 2020
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Thomas A. Phelan, Joel Baxter
-
Patent number: 10810044
Abstract: Described herein are systems, methods, and software to enhance the allocation of cache resources to virtual nodes. In one implementation, a configuration system in a large-scale processing environment is configured to identify a request to initiate a large-scale processing framework (LSPF) cluster, wherein the LSPF cluster comprises a plurality of virtual nodes, and identify host computing resources of a host computing system allocated to each virtual node of the LSPF cluster. The configuration system further allocates cache memory of a cache service to each of the virtual nodes based on the host computing resources, and initiates the LSPF cluster in the computing environment.
Type: Grant
Filed: January 4, 2018
Date of Patent: October 20, 2020
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Thomas A. Phelan, Ramaswami Kishore
-
Publication number: 20200271367
Abstract: A modular water storage tank, for a refrigerator, includes a hollow storage body comprising a first body member having first and second open ends, the first open end comprising a first male extension having a first connection member on an outer surface thereof. A female end cap comprises a second connection member on an inner surface thereof that is configured to engage with the first connection member to secure the female end cap to the first open end. A male end cap comprises a second male extension having a third connection member on an outer surface thereof that is configured to engage with a fourth connection member disposed on an inner surface at the second open end of the first body member to secure the male end cap to the second open end of the first body member.
Type: Application
Filed: February 27, 2019
Publication date: August 27, 2020
Inventors: Thomas McCollough, Keith Thomas Phelan
-
Patent number: 10740148
Abstract: Systems, methods, and software described herein facilitate accelerated input and output operations with respect to virtualized environments. In an implementation, a computing system passes a process identifier to a kernel driver for a host environment, wherein the process identifier identifies a guest process spawned in a virtual machine and wherein the kernel driver uses the process identifier to determine an allocation of host memory corresponding to guest memory for the guest process and returns the allocation of host memory. Additionally, the computing system performs a mapping of the allocation of host memory to an allocation of guest memory for the guest process.
Type: Grant
Filed: July 14, 2014
Date of Patent: August 11, 2020
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Thomas A. Phelan, Michael J. Moretti, Dragan Stancevic
-
Patent number: 10642529
Abstract: A computer system that employs a solid-state memory device as a physical storage resource includes a hypervisor that is capable of supporting TRIM commands issued by virtual machines running in the computer system. When a virtual machine issues a TRIM command to its corresponding virtual storage device to invalidate data stored therein, the TRIM command is received at an interface layer in the hypervisor that translates the TRIM command to a SCSI command known as UNMAP. A SCSI virtualization layer converts the UNMAP command to a file system command to delete portions of the virtual storage device that is maintained as a file in the hypervisor's file system. Upon receiving the delete commands, the hypervisor's file system driver generates a TRIM command to invalidate the data stored in the solid-state memory device at locations corresponding to the portions of the file that are to be deleted.
Type: Grant
Filed: April 25, 2018
Date of Patent: May 5, 2020
Assignee: VMware, Inc.
Inventors: Deng Liu, Thomas A. Phelan
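The abstract describes a chain of translations through hypervisor layers. The stub functions below mirror the shape of that chain with invented names and a simplified 512-byte block size; each layer logs its command and hands off to the next, ending at the physical solid-state device.

```python
log = []

def guest_trim(lba, blocks):
    """VM invalidates blocks of its virtual storage device via TRIM."""
    scsi_unmap(lba, blocks)

def scsi_unmap(lba, blocks):
    """Hypervisor interface layer: translate the guest TRIM to SCSI UNMAP."""
    log.append(("UNMAP", lba, blocks))
    fs_delete_range(lba * 512, blocks * 512)     # 512-byte blocks assumed

def fs_delete_range(offset, length):
    """SCSI virtualization layer: delete a range of the virtual-disk file."""
    log.append(("PUNCH_HOLE", offset, length))
    device_trim(offset, length)

def device_trim(offset, length):
    """File system driver: issue TRIM to the solid-state memory device."""
    log.append(("TRIM", offset, length))

guest_trim(lba=8, blocks=2)
assert [op for op, *_ in log] == ["UNMAP", "PUNCH_HOLE", "TRIM"]
```

The point of the layering is that each component speaks only its own vocabulary (guest TRIM, SCSI UNMAP, file deletion, device TRIM), yet an invalidation initiated in the VM still reaches the physical flash at the corresponding locations.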
-
Patent number: 10564999
Abstract: Systems, methods, and software described herein provide for enhancements to large scale data processing architectures. In one implementation, a service architecture for large scale data processing includes a host computing system, and a virtual machine executing on the host computing system. The service architecture further includes a plurality of application containers executing on the virtual machine, wherein each of the application containers comprises a large scale processing node running one or more java virtual machines.
Type: Grant
Filed: October 19, 2017
Date of Patent: February 18, 2020
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Thomas Phelan, Joel Baxter, Michael J. Moretti, Gunaseelan Lakshminarayanan, Swami Viswanathan
-
Patent number: 10534714
Abstract: Systems, methods, and software described herein allocate cache memory to job processes executing on a processing node. In one example, a method of allocating cache memory to a plurality of job processes includes identifying the plurality of job processes executing on a processing node, and identifying a data object to be accessed by the plurality of job processes. The method further provides allocating a portion of the cache memory to each job process in the plurality of job processes and, for each job process in the plurality of job processes, identifying a segment of data from the data object, wherein the segment of data comprises a requested portion of data and a predicted portion of data. The method also includes providing the segments of data to the allocated portions of the cache memory.
Type: Grant
Filed: December 18, 2014
Date of Patent: January 14, 2020
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Michael J. Moretti, Joel Baxter, Thomas Phelan
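A sketch of the per-job allocation with read-ahead described above: each job gets its own slice of cache, and a fetch pulls both the requested blocks and a predicted (here, sequentially next) portion into that slice. The sequential prediction policy and the sizes are assumptions made for illustration.

```python
def fetch_segment(data_object, start, count, readahead=2):
    """Return the requested blocks plus a predicted portion after them."""
    end = min(start + count + readahead, len(data_object))
    return data_object[start:end]

def load_into_cache(jobs, data_object):
    """jobs: job -> (start, count). Each job's cache portion gets its segment."""
    return {job: fetch_segment(data_object, s, c) for job, (s, c) in jobs.items()}

blocks = [f"b{i}" for i in range(8)]
cache = load_into_cache({"job1": (0, 2), "job2": (5, 1)}, blocks)
assert cache["job1"] == ["b0", "b1", "b2", "b3"]   # 2 requested + 2 predicted
assert cache["job2"] == ["b5", "b6", "b7"]         # prediction clipped at object end
```

Keeping a separate cache portion per job means one job's read-ahead cannot evict another job's working set, while the predicted blocks let a sequential reader find its next request already cached.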
-
Publication number: 20190384712
Abstract: Techniques for using a cache to accelerate virtual machine (VM) I/O are provided. In one embodiment, a host system can intercept an I/O request from a VM running on the host system, where the I/O request is directed to a virtual disk residing on a shared storage device. The host system can then process the I/O request by accessing a cache that resides on one or more cache devices directly attached to the host system, where the accessing of the cache is transparent to the VM.
Type: Application
Filed: August 27, 2019
Publication date: December 19, 2019
Inventors: Thomas A. Phelan, Mayank Rawat, Deng Liu, Kiran Madnani, Sambasiva Bandarupalli
-
Patent number: 10496545
Abstract: Systems, methods, and software described herein facilitate an enhanced service architecture for large-scale data processing. In one implementation, a method of providing data to a large-scale data processing architecture includes identifying a data request from a container in a plurality of containers executing on a host system, wherein the plurality of containers each run an instance of a large-scale processing framework. The method further provides identifying a storage repository for the data request, and accessing data associated with the data request from the storage repository. The method also includes caching the data in a portion of a cache memory on the host system allocated to the container, wherein the cache memory comprises a plurality of portions each allocated to one of the plurality of containers.
Type: Grant
Filed: November 24, 2015
Date of Patent: December 3, 2019
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Thomas A. Phelan, Michael Moretti, Joel Baxter, Lakshminarayanan Gunaseelan, Ramaswami Kishore
-
Publication number: 20190324922
Abstract: A cache is maintained with write order numbers that indicate orders of writes into the cache, so that periodic partial flushes of the cache can be executed while maintaining write order consistency. A method of storing data into the cache includes receiving a request to write data into the cache, identifying lines in the cache for storing the data, writing the data into the lines of the cache, storing a write order number, and associating the write order number with the lines of the cache. A method of flushing a cache having cache lines associated with write order numbers includes the steps of identifying lines in the cache that are associated with either a selected write order number or a write order number that is less than the selected write order number, and flushing data stored in the identified lines to a persistent storage.
Type: Application
Filed: June 28, 2019
Publication date: October 24, 2019
Inventors: Thomas A. Phelan, Erik Cota-Robles
-
Patent number: 10437727
Abstract: Techniques for using a cache to accelerate virtual machine (VM) I/O are provided. In one embodiment, a host system can intercept an I/O request from a VM running on the host system, where the I/O request is directed to a virtual disk residing on a shared storage device. The host system can then process the I/O request by accessing a cache that resides on one or more cache devices directly attached to the host system, where the accessing of the cache is transparent to the VM.
Type: Grant
Filed: April 27, 2018
Date of Patent: October 8, 2019
Assignee: VMware, Inc.
Inventors: Thomas A. Phelan, Mayank Rawat, Deng Liu, Kiran Madnani, Sambasiva Bandarupalli
-
Patent number: 10423454
Abstract: Systems, methods, and software described herein facilitate the allocation of large scale processing jobs to host computing systems. In one example, a method of operating an administration node to allocate processes to a plurality of host computing systems includes identifying a job process for a large scale processing environment (LSPE), and identifying a data repository associated with the job process. The method further includes obtaining data retrieval performance information related to the data repository and the host systems in the LSPE. The method also provides identifying a host system from the host systems for the job process based on the data retrieval performance information, and initiating a virtual node for the job process on the identified host system.
Type: Grant
Filed: March 10, 2015
Date of Patent: September 24, 2019
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Thomas A. Phelan, Michael J. Moretti, Joel Baxter, Gunaseelan Lakshminarayanan, Kumar Sreekanti
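A simple reading of the placement step above: given measured data retrieval performance from each host to the job's repository, the administration node picks the best-performing host and starts the virtual node there. The metric (throughput in MB/s) and the numbers are illustrative assumptions.

```python
def pick_host(repository, perf):
    """perf: (host, repository) -> measured throughput (MB/s, higher is better).
    Returns the host with the best data retrieval performance for the repo."""
    candidates = {h: t for (h, r), t in perf.items() if r == repository}
    return max(candidates, key=candidates.get)

perf = {
    ("host1", "repoA"): 120.0,
    ("host2", "repoA"): 480.0,   # e.g. repoA's data is local to host2
    ("host2", "repoB"): 90.0,
}
assert pick_host("repoA", perf) == "host2"
```

Placing the job's virtual node on the host with the fastest path to its repository is a data-locality decision: the scheduler moves computation toward the data rather than streaming the data across the environment.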
-
Patent number: 10387331
Abstract: A cache is maintained with write order numbers that indicate orders of writes into the cache, so that periodic partial flushes of the cache can be executed while maintaining write order consistency. A method of storing data into the cache includes receiving a request to write data into the cache, identifying lines in the cache for storing the data, writing the data into the lines of the cache, storing a write order number, and associating the write order number with the lines of the cache. A method of flushing a cache having cache lines associated with write order numbers includes the steps of identifying lines in the cache that are associated with either a selected write order number or a write order number that is less than the selected write order number, and flushing data stored in the identified lines to a persistent storage.
Type: Grant
Filed: June 5, 2012
Date of Patent: August 20, 2019
Assignee: VMware, Inc.
Inventors: Thomas A. Phelan, Erik Cota-Robles