Patents Examined by Jonathan R Labud
-
Patent number: 11556363
Abstract: Techniques for transferring virtual machines and resource management in a virtualized computing environment are described. In one embodiment, for example, an apparatus may include at least one memory, at least one processor, and logic for transferring a virtual machine (VM), at least a portion of the logic comprised in hardware coupled to the at least one memory and the at least one processor, the logic to generate a plurality of virtualized capability registers for a virtual device (VDEV) by virtualizing a plurality of device-specific capability registers of a physical device to be virtualized by the VM, the plurality of virtualized capability registers comprising a plurality of device-specific capabilities of the physical device, determine a version of the physical device to support via a virtual machine monitor (VMM), and expose a subset of the virtualized capability registers associated with the version to the VM. Other embodiments are described and claimed.
Type: Grant
Filed: March 31, 2017
Date of Patent: January 17, 2023
Assignee: INTEL CORPORATION
Inventors: Sanjay Kumar, Philip R. Lantz, Kun Tian, Utkarsh Y. Kakaiya, Rajesh M. Sankaran
-
Patent number: 11550513
Abstract: Container images are managed in a clustered container host system with a shared storage device. Hosts of the system each include a virtualization software layer that supports execution of virtual machines (VMs), one or more of which are pod VMs that have implemented therein a container engine that supports execution of containers within the respective pod VM. A method of deploying containers includes determining, from pod objects published by a master device of the system and accessible by all hosts of the system, that a new pod VM is to be created, creating the new pod VM, and spinning up one or more containers in the new pod VM using images of containers previously spun up in another pod VM, wherein the images of the containers previously spun up in the other pod VM are stored in the storage device.
Type: Grant
Filed: January 24, 2020
Date of Patent: January 10, 2023
Assignee: VMware, Inc.
Inventor: Benjamin J. Corrie
-
Patent number: 11550614
Abstract: Techniques for packaging and deploying algorithms utilizing containers for flexible machine learning are described. In some embodiments, users can create or utilize simple containers adhering to a specification of a machine learning service in a provider network, where the containers include code for how a machine learning model is to be trained and/or executed. The machine learning service can automatically train a model and/or host a model using the containers. The containers can use a wide variety of algorithms and use a variety of types of languages, libraries, data types, etc. Users can thus implement machine learning training and/or hosting with extremely minimal knowledge of how the overall training and/or hosting is actually performed.
Type: Grant
Filed: October 9, 2020
Date of Patent: January 10, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Thomas Albert Faulhaber, Jr., Gowda Dayananda Anjaneyapura Range, Jeffrey John Geevarghese, Taylor Goodhart, Charles Drummond Swan
-
Patent number: 11520618
Abstract: Aspects of the present disclosure involve systems, methods, devices, and the like for segmentation of the processor architecture platform. In one embodiment, a system and method are introduced which enable the use of a segmented platform in an extended network. The segmented platform is introduced for processing using standardized plugins enabling the use of processing and services available at the segmented network. In another embodiment, processing on the segmented platform can include the integration of microservices for the completion of the transaction.
Type: Grant
Filed: December 27, 2019
Date of Patent: December 6, 2022
Assignee: PayPal, Inc.
Inventors: Roman Pyasetskyy, Joshua Allen, Archana Murali, Joshua Van Blake, Gaetan Le Brun, Ernesto Alejandro Menendez Castillo, Evgeny Stukalov, Myo Ohn, Kirankumar Badi, Rashmi Prakash, Vinit Agarwal, Keith Gorman
-
Patent number: 11520619
Abstract: Disclosed here are systems and methods that allow users, upon detecting errors within a running workflow, to either 1) pause the workflow and directly correct its design before resuming the workflow, or 2) pause the workflow, correct the erroneous action within the workflow, resume running the workflow, and afterwards apply the corrections to the design of the workflow. The disclosure comprises functionality that pauses a single workflow and other relevant workflows as soon as the error is detected and while it is corrected. The disclosed systems and methods improve communication technology between the networks and servers of separate parties relevant to and/or dependent on the successful execution of other workflows.
Type: Grant
Filed: March 26, 2020
Date of Patent: December 6, 2022
Assignee: Nintex USA, Inc.
Inventors: Joshua Joo Hou Tan, Alain Marie Patrice Gentilhomme
-
Patent number: 11500666
Abstract: A container isolation method for a netlink resource includes receiving, by a kernel executed by a processor, a trigger instruction from an application program. The method also includes creating, by the kernel according to the trigger instruction, a container corresponding to the application program, creating a netlink namespace for the container, and sending a notification to the application program indicating that the netlink namespace is created. The method further includes receiving, by the kernel, a netlink message from the container, wherein the netlink message comprises entries generated when the container runs. The method additionally includes storing, by the kernel, the entries based on an identifier of the netlink namespace for the container, to send an entry required by the container to user space of the container.
Type: Grant
Filed: January 30, 2020
Date of Patent: November 15, 2022
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Guocheng Zhong, Liang Zhang, Jianrui Yang, Jinmiao Liu
-
Patent number: 11494210
Abstract: A unique identifier is stored in shared data storage that is accessible to at least a first virtual storage processor and a second virtual storage processor within a virtual storage appliance. The unique identifier is generated when the virtual storage appliance is first started up, and then used by the first virtual storage processor to obtain at least one Internet Protocol (IP) address for use by a management stack that initially executes in the first virtual storage processor. In response to failure of the first virtual storage processor, the unique identifier is used by the second virtual storage processor to obtain, for use by the management stack while the management stack executes in the second virtual storage processor after the failure, the same IP address obtained by the first virtual storage processor.
Type: Grant
Filed: July 25, 2019
Date of Patent: November 8, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Dmitry Vladimirovich Krivenok, Christopher R. Dion, Michael L. Burriss
-
Patent number: 11481248
Abstract: Techniques are disclosed for completing an SMI task across multiple SMI events. An OS agent can be employed to determine a current load on a computing device. Based on the load, the OS agent can create an SMI message that specifies a maximum duration for an SMI event and that segments the SMI data for the SMI task. The OS agent can provide the SMI message to the BIOS as part of requesting that the SMI task be performed. During the resulting SMI event, the BIOS can reassemble the segmented SMI data and then perform the SMI task. If this processing cannot be completed within the specified maximum duration for an SMI event, the BIOS can pause its processing and cause a subsequent SMI event to occur during which the processing can be resumed. In this way, the SMI task can be completed across multiple SMI events while ensuring that no single SMI event exceeds the specified maximum duration.
Type: Grant
Filed: August 5, 2020
Date of Patent: October 25, 2022
Assignee: Dell Products L.P.
Inventors: Balasingh P. Samuel, Richard M. Tonry, Nicholas D. Grobelny
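The segmentation step described in this abstract can be sketched with a toy cost model in which SMI processing time is proportional to the number of bytes handled; the function name, the byte-proportional model, and the parameters are illustrative assumptions, not details from the patent:

```python
def segment_smi_data(data: bytes, max_duration_ms: float, ms_per_byte: float) -> list:
    """Split SMI task data into chunks so no single SMI event exceeds the
    OS-specified maximum duration, assuming processing time scales with
    byte count (a hypothetical model for illustration only)."""
    max_bytes = max(1, int(max_duration_ms / ms_per_byte))
    return [data[i:i + max_bytes] for i in range(0, len(data), max_bytes)]


def reassemble(segments: list) -> bytes:
    """BIOS-side reassembly of the segmented SMI data before the task runs."""
    return b"".join(segments)
```

Each chunk would then be handled in its own SMI event, with the BIOS pausing and scheduling a follow-up event whenever work remains.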
-
Patent number: 11455188
Abstract: Disclosed is a method for task pruning that can be utilized in existing resource allocation systems to improve the systems' robustness without requiring changes to existing mapping heuristics. The pruning mechanism leverages a probability model, which calculates the probability of a task completing before its deadline in the presence of task dropping, and only schedules tasks that are likely to succeed. Pruning tasks whose chance of success is low improves the chance of success for other tasks. Tasks that are unlikely to succeed are either deferred from the current scheduling event or are preemptively dropped from the system. The pruning method can benefit service providers by allowing them to utilize their resources more efficiently and use them only for tasks that can meet their deadlines. The pruning method further helps end users by making the system more robust in allowing more tasks to complete on time.
Type: Grant
Filed: April 28, 2020
Date of Patent: September 27, 2022
Assignee: UNIVERSITY OF LOUISIANA AT LAFAYETTE
Inventors: James Gentry, Mohsen Amini Salehi, Chavit Denninnart
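The pruning idea above can be sketched in a few lines. The patent describes a probability model without fixing a distribution; the normal-distribution assumption, the 0.5 threshold, and all names below are illustrative choices for the sketch:

```python
import math


def completion_probability(exec_mean: float, exec_std: float, time_left: float) -> float:
    """Estimate P(task completes before its deadline), assuming a normally
    distributed execution time (a hypothetical model, not the patented one)."""
    if exec_std == 0:
        return 1.0 if exec_mean <= time_left else 0.0
    z = (time_left - exec_mean) / exec_std
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))


def prune(tasks: list, now: float, threshold: float = 0.5):
    """Schedule only tasks likely to meet their deadlines; defer or drop the rest."""
    scheduled, pruned = [], []
    for t in tasks:
        p = completion_probability(t["mean"], t["std"], t["deadline"] - now)
        (scheduled if p >= threshold else pruned).append(t["id"])
    return scheduled, pruned
```

Dropping the low-probability tasks frees capacity, which is what raises the success chance of the remaining tasks.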
-
Patent number: 11455197
Abstract: A plurality of requests are received for computing processing. At least some of the plurality of requests are replicated. The requests are replicated based on a fractional replication factor. Each received request and each replicated request are transmitted to a computer resource for processing. At least some embodiments provide the capability for meeting tail latency targets with improved performance and reduced cost.
Type: Grant
Filed: November 18, 2019
Date of Patent: September 27, 2022
Assignee: International Business Machines Corporation
Inventors: Robert Birke, Mathias Bjoerkqvist, Yiyu L. Chen, Martin L. Schmatz
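A fractional replication factor means only a fraction of requests get a duplicate, e.g. factor 0.5 replicates every other request. The accumulator-based selection rule below is one way to realize that fraction; the patent does not prescribe which requests are chosen, so this is purely an illustrative sketch:

```python
def replicate(requests: list, factor: float) -> list:
    """Emit every request, plus a replica for a `factor` fraction of them,
    using a running accumulator to spread replicas evenly (illustrative rule)."""
    out, accumulated = [], 0.0
    for req in requests:
        out.append(req)
        accumulated += factor
        if accumulated >= 1.0:
            out.append(req)  # replica dispatched to another compute resource
            accumulated -= 1.0
    return out
```

Sending the original and the replica to different resources and taking the first response to arrive is the usual way such replication trims tail latency.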
-
Patent number: 11436055
Abstract: A first command is fetched for execution on a GPU. Dependency information for the first command, which indicates a number of parent commands that the first command depends on, is determined. The first command is inserted into an execution graph based on the dependency information. The execution graph defines an order of execution for plural commands including the first command. The number of parent commands are configured to be executed on the GPU before executing the first command. A wait count for the first command, which indicates the number of parent commands of the first command, is determined based on the execution graph. The first command is inserted into cache memory in response to determining that the wait count for the first command is zero or that each of the number of parent commands the first command depends on has already been inserted into the cache memory.
Type: Grant
Filed: November 19, 2019
Date of Patent: September 6, 2022
Assignee: Apple Inc.
Inventors: Kutty Banerjee, Michael Imbrogno
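The wait-count bookkeeping described above is the classic dependency-graph pattern: each command tracks how many parents are still outstanding, and it becomes eligible for the cache/queue when that count reaches zero. A minimal sketch, with class and attribute names of my own choosing:

```python
class ExecutionGraph:
    """Tracks command dependencies; a command with wait count zero is ready."""

    def __init__(self):
        self.wait = {}      # command -> number of unfinished parents
        self.children = {}  # command -> commands that depend on it
        self.ready = []     # commands eligible for insertion into the cache

    def insert(self, cmd, parents):
        """Insert a command with its dependency information."""
        self.wait[cmd] = len(parents)
        for p in parents:
            self.children.setdefault(p, []).append(cmd)
        if not parents:
            self.ready.append(cmd)  # wait count is zero: ready immediately

    def complete(self, cmd):
        """Mark a command finished, releasing children whose count hits zero."""
        for child in self.children.get(cmd, []):
            self.wait[child] -= 1
            if self.wait[child] == 0:
                self.ready.append(child)
```

This mirrors Kahn-style topological scheduling; the patent's cache-memory insertion corresponds to appending to `ready` here.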
-
Patent number: 11429415
Abstract: A method of dynamically tuning a hypervisor includes detecting that a high-performance virtual machine was launched on the hypervisor. The method further includes, in response to the detecting, modifying, by a processing device, a configuration of the hypervisor to increase performance of the high-performance virtual machine on the hypervisor.
Type: Grant
Filed: March 27, 2019
Date of Patent: August 30, 2022
Assignee: Red Hat Israel, Ltd.
Inventor: Yaniv Kaul
-
Patent number: 11416275
Abstract: Exemplary embodiments described herein relate to a destination path for use with multiple different types of VMs, and techniques for using the destination path to convert, copy, or move data objects stored in one type of VM to another type of VM. The destination path represents a standardized (canonical) way to refer to VM objects from a proprietary VM. A destination location may be specified using the canonical destination path, and the location may be converted into a hypervisor-specific destination location. A source data object may be copied or moved to the destination location using a hypervisor-agnostic path.
Type: Grant
Filed: September 6, 2019
Date of Patent: August 16, 2022
Assignee: NetApp Inc.
Inventors: Sung Ryu, Shweta Behere, Jeffrey Teehan
-
Patent number: 11416297
Abstract: Systems and methods are disclosed for optimizing distribution of resources to data elements, comprising receiving one or more user-defined objectives associated with a group of data elements, wherein at least one of the user-defined objectives includes an objective related to a selected target group; receiving one or more constraints associated with the group of data elements, wherein at least one of the constraints comprises resources apportionable to each data element in the group of data elements; developing a first prediction of a performance of the group of data elements during a time period based on the one or more user-defined objectives and the one or more constraints; and apportioning at least a portion of the resources to each data element in the group of data elements based on the first prediction once the time period has started.
Type: Grant
Filed: June 8, 2020
Date of Patent: August 16, 2022
Assignee: ADAP.TV, Inc.
Inventors: Amir Cory, Shubo Liu
-
Patent number: 11409556
Abstract: A component of a computing service obtains respective indications of placement policies that contain host selection rules for application execution environments such as guest virtual machines. With respect to a request for a particular application execution environment, a group of applicable placement policies is identified. A candidate pool of hosts is selected using the group of placement policies, and members of the pool are ranked to identify a particular host on which the requested application execution environment is instantiated.
Type: Grant
Filed: April 3, 2020
Date of Patent: August 9, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Joshua Dawie Mentz, Diwakar Gupta, Michael Groenewald, Alan Hadley Goodman, Marnus Freeman
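The filter-then-rank flow in this abstract can be sketched as follows. The representation of a policy as a predicate and the ranking by a caller-supplied score are assumptions for illustration; the patent's actual policy and ranking internals are not specified here:

```python
def place(hosts: list, policies: list, rank_key) -> dict:
    """Select a host: filter the fleet through every applicable placement
    policy to form a candidate pool, then rank the pool and pick the top host.
    `policies` is a list of predicates host -> bool (illustrative encoding)."""
    pool = [h for h in hosts if all(policy(h) for policy in policies)]
    if not pool:
        return None  # no host satisfies every policy
    return max(pool, key=rank_key)
```

A usage example: restrict placement to one zone, then rank candidates by free CPU.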
-
Patent number: 11372668
Abstract: A container image registry is managed in a virtualized computing system. The container image registry manages container images for deploying containers in a host cluster, the host cluster includes hosts and a virtualization layer executing on hardware platforms of the hosts, and the virtualization layer supports execution of virtual machines (VMs). The method includes: creating a namespace for an orchestration control plane integrated with the virtualization layer, the namespace including constraints for deploying workloads in the VMs; invoking, by a registry service in response to creation of the namespace, a management application programming interface (API) of the container image registry to create a project for the container images; and invoking, by the registry service, the management API of the container image registry to both add members to the project, and assign image registry roles to the members, in response to bindings of users and namespace roles derived from the constraints.
Type: Grant
Filed: April 2, 2020
Date of Patent: June 28, 2022
Assignee: VMware, Inc.
Inventors: Yanping Cao, Mark Russell Johnson, Pratik Kapadia, Xiaoyun An
-
Patent number: 11372664
Abstract: Techniques disclosed herein relate to migrating virtual computing instances such as virtual machines (VMs). In one embodiment, VMs are migrated across different virtual infrastructure platforms by, among other things, translating between resource models used by virtual infrastructure managers (VIMs) that manage the different virtual infrastructure platforms. VM migrations may also be validated prior to being performed, including based on resource policies that define what is and/or is not allowed to migrate, thereby providing compliance and controls for borderless data centers. In addition, an agent-based technique may be used to migrate VMs and physical servers to virtual infrastructure, without requiring access to an underlying hypervisor layer.
Type: Grant
Filed: May 20, 2019
Date of Patent: June 28, 2022
Assignee: VMWARE, INC.
Inventors: Sachin Thakkar, Serge Maskalik, Allwyn Sequeira, Debashis Basak
-
Patent number: 11372682
Abstract: Example embodiments of the present invention provide a method, a system, and a computer program product for managing tasks in a system. The method comprises running a first task on a system, wherein the first task has a first priority of execution time and the execution of which first task locks a resource on the system, and running a second task on the system, wherein the second task has a second priority of execution time earlier than the first priority of execution time of the first task and the execution of which second task requires the resource on the system locked by the first task. The system then may promote the first task having the later first priority of execution time to a new priority of execution time at least as early as the second priority of execution time of the second task and resume execution of the first task having the later first priority of execution time.
Type: Grant
Filed: March 11, 2020
Date of Patent: June 28, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Alexandr Veprinsky, Felix Shvaiger, Anton Kucherov, Arieh Don
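The promotion step described above resembles classic priority inheritance: the lock holder is boosted so the waiter is not blocked behind a later-scheduled task. A minimal sketch, assuming smaller numbers mean earlier execution and using field names of my own invention:

```python
def promote_on_conflict(tasks: dict, holder_id: str, waiter_id: str) -> float:
    """If the task holding a lock is scheduled later than a task waiting on
    that lock, promote the holder to run at least as early as the waiter.
    `tasks` maps task id -> {"exec_time": scheduled slot}; lower is earlier
    (an illustrative encoding of 'priority of execution time')."""
    holder, waiter = tasks[holder_id], tasks[waiter_id]
    if holder["exec_time"] > waiter["exec_time"]:
        holder["exec_time"] = waiter["exec_time"]  # inherit the earlier slot
    return holder["exec_time"]
```

After the promotion, the holder runs, releases the resource, and the waiter can proceed without an unbounded priority inversion.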
-
Patent number: 11360811
Abstract: Computer systems, data processing methods, and computer-readable media are provided to run original networks. An exemplary computer system includes first and second processors, a memory storing offline models and corresponding input data of a plurality of original networks, and a runtime system configured to run on the first processor. The runtime system, when running on the first processor, causes the first processor to implement a plurality of virtual devices comprising a data processing device configured to obtain an offline model and corresponding input data of an original network from the memory, an equipment management device configured to control the turning on or off of the second processor, and a task execution device configured to control the second processor to run the offline model of the original network.
Type: Grant
Filed: December 3, 2019
Date of Patent: June 14, 2022
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
Inventors: Linyang Wu, Qi Guo, Xunyu Chen, Kangyu Wang
-
Patent number: 11354153
Abstract: A resource utilization level and a data size may be determined for each organization within a computing pod located within an on-demand computing services organization configured to provide computing services. One of the organizations may be selected for migration away from the computing pod based on the resource utilization levels and the data sizes. The selected organization may have a respective resource utilization level that is high in relation to its respective data size.
Type: Grant
Filed: January 22, 2020
Date of Patent: June 7, 2022
Assignee: salesforce.com, Inc.
Inventors: Xiaodan Wang, Ilya Zaslavsky, Prakash Ramaswamy, Sridevi Gopala Krishnan, Mikhail Chainani, Scott Ware, Lauren Valdivia
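The selection criterion in this abstract, utilization high relative to data size, can be sketched as a simple ratio. The exact scoring function is not given in the abstract, so the ratio below is an illustrative assumption:

```python
def select_for_migration(orgs: list) -> dict:
    """Pick the organization with the highest resource-utilization-to-data-size
    ratio: it relieves the most load on the pod for the least data to move
    (ratio scoring is a sketch, not the patented formula)."""
    return max(orgs, key=lambda o: o["utilization"] / o["data_size"])
```

Such a choice is attractive because migration cost tends to grow with data size while the benefit to the pod grows with the utilization freed.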