Patents Examined by Meng-Ai An
  • Patent number: 10802876
    Abstract: A method of determining a multi-agent schedule includes defining a well-formed, non-preemptive task set that includes a plurality of tasks, with each task having at least one subtask. Each subtask is associated with at least one resource required for performing that subtask. In accordance with the method, an allocation, which assigns each task in the task set to an agent, is received and a determination is made, based on the task set and the allocation, as to whether a subtask in the task set is schedulable at a specific time. A system for implementing the method is also provided.
    Type: Grant
    Filed: May 22, 2013
    Date of Patent: October 13, 2020
    Assignee: Massachusetts Institute of Technology
    Inventors: Julie Ann Shah, Matthew Craig Gombolay
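The schedulability check below is a minimal Python sketch, assuming a hypothetical `Subtask` structure and a `busy` map of time intervals already committed per agent and per resource; it illustrates the kind of test the abstract above describes, not the patented method itself.

```python
from dataclasses import dataclass, field

@dataclass
class Subtask:
    name: str
    duration: int                                  # non-preemptive execution time
    resources: set = field(default_factory=set)    # resources this subtask needs

def is_schedulable(subtask, agent, start, busy):
    """Return True if `subtask` can start at `start` on `agent` without overlapping
    any interval already committed on the agent or on its required resources."""
    end = start + subtask.duration
    for holder in {agent, *subtask.resources}:
        for (s, e) in busy.get(holder, []):
            if start < e and s < end:              # half-open intervals overlap
                return False
    return True

busy = {"agent1": [(0, 5)], "welder": [(3, 8)]}
print(is_schedulable(Subtask("s1", 2, {"welder"}), "agent1", 8, busy))   # True
print(is_schedulable(Subtask("s1", 2, {"welder"}), "agent1", 4, busy))   # False
```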
  • Patent number: 10776147
    Abstract: Migration configuration data for an organization migration to move application data and application services of a to-be-migrated organization hosted at a source system instance to a target system instance is received. Migration components respectively representing to-be-migrated systems of record in a to-be-migrated organization are registered. In response to receiving an instruction to enter a specific organization migration state, migration steps for each migration component in the migration components are identified for execution in the specific organization migration state. Each migration component in the migration components automatically executes migration steps determined for each such migration component for execution in the specific organization migration state.
    Type: Grant
    Filed: January 29, 2018
    Date of Patent: September 15, 2020
    Assignee: salesforce.com, inc.
    Inventors: Alex Ovesea, Ilya Zaslavsky, Chen Liu, Alan Arbizu, Mikhail Chainani, Xiaodan Wang, Sridevi Gopala Krishnan
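A hedged sketch of the register-and-dispatch pattern described above: components representing systems of record register themselves, and on entering a migration state each component runs only the steps it declares for that state. Class and method names here are illustrative, not Salesforce's actual API.

```python
class MigrationComponent:
    """Represents one to-be-migrated system of record."""
    def __init__(self, name, steps_by_state):
        self.name = name
        self.steps_by_state = steps_by_state          # state name -> list of callables

    def run_state(self, state):
        for step in self.steps_by_state.get(state, []):
            step()

class OrgMigration:
    def __init__(self):
        self.components = []

    def register(self, component):
        self.components.append(component)

    def enter_state(self, state):
        # Each registered component executes its own steps for this state.
        for component in self.components:
            component.run_state(state)

migration = OrgMigration()
migration.register(MigrationComponent("contacts", {"copy": [lambda: print("copying contacts")]}))
migration.register(MigrationComponent("cases", {"copy": [lambda: print("copying cases")]}))
migration.enter_state("copy")
```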
  • Patent number: 10776160
    Abstract: A method and system for optimizing the interaction and execution of multiple service tasks associated with a logical transaction. The multiple components or “legs” of the transaction, each consisting of tasks executable by a computing service or software-as-a-service (SaaS) endpoint, are identified. The system determines a strategy type or belief level associated with each of the service tasks included in a transaction. The belief level may be categorized as either “optimistic” or “pessimistic” based on one or more performance parameters (e.g., a probability of failure of a service and an expense associated with a failure of the service) derived from historical data associated with a particular transaction or service task. A sequence of execution for the multiple service tasks associated with the transaction is determined based at least in part on the belief level associated with each of the multiple service tasks.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: September 15, 2020
    Assignee: McGraw Hill LLC
    Inventor: Kevin Kalajan
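A minimal sketch of the classification and ordering idea, assuming a simple expected-cost threshold for labeling tasks “optimistic” or “pessimistic” and an ordering that runs pessimistic tasks first; the threshold and ordering rule are illustrative assumptions, not the patented strategy.

```python
def belief_level(failure_prob, failure_cost, risk_threshold=1.0):
    """Label a service task 'pessimistic' when its expected failure cost is high."""
    return "pessimistic" if failure_prob * failure_cost >= risk_threshold else "optimistic"

def sequence_tasks(tasks):
    """tasks: list of (name, failure_prob, failure_cost) from historical data.
    Runs risky (pessimistic) tasks first so a failure aborts the transaction early."""
    labeled = [(name, belief_level(p, c), p * c) for name, p, c in tasks]
    return sorted(labeled, key=lambda t: (t[1] != "pessimistic", -t[2]))

print(sequence_tasks([("grade-sync", 0.02, 10.0), ("roster-import", 0.20, 50.0)]))
# the pessimistic roster-import task is sequenced before the optimistic grade-sync task
```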
  • Patent number: 10776159
    Abstract: A distributed storage-based file delivery system and a distributed storage-based file delivery method are provided. The system comprises: a scheduling server; at least one source group, where each source group includes a plurality of distributed file storage clusters, and each distributed file storage cluster includes a plurality of data nodes. The scheduling server is configured to, according to operators of the distributed file storage clusters and load information of each data node, perform task scheduling based on a received task, and generate task instructions, where the task is received from a client or a data node. The data nodes to which the task instructions are directed are configured to execute the task and/or perform task distribution according to the task instructions, such that data within all distributed file storage clusters in a same source group remains synchronized.
    Type: Grant
    Filed: September 11, 2016
    Date of Patent: September 15, 2020
    Assignee: WANGSU SCIENCE & TECHNOLOGY CO., LTD.
    Inventors: Liang Chen, Gengxin Lin
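A rough sketch of the scheduling decision described above, assuming the scheduling server filters data nodes by operator (network carrier) and then picks the least-loaded node; the field names and selection rule are invented for illustration.

```python
def schedule_task(task, data_nodes):
    """task: dict with 'id' and 'operator'; data_nodes: dicts with 'id', 'operator', 'load'.
    Prefer nodes served by the task's operator, then pick the lowest-loaded one."""
    same_operator = [n for n in data_nodes if n["operator"] == task["operator"]]
    candidates = same_operator or data_nodes
    target = min(candidates, key=lambda n: n["load"])
    return {"task_id": task["id"], "node_id": target["id"], "action": "distribute"}

nodes = [{"id": "n1", "operator": "ISP-A", "load": 0.7},
         {"id": "n2", "operator": "ISP-A", "load": 0.2},
         {"id": "n3", "operator": "ISP-B", "load": 0.1}]
print(schedule_task({"id": "t1", "operator": "ISP-A"}, nodes))   # routed to n2
```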
  • Patent number: 10761895
    Abstract: Techniques for resource allocation are described. Some embodiments provide a computing system and method that perform at least some of the described techniques for resource allocation in a virtualized computing environment comprising at least one physical computing system hosting multiple virtual machines. In one embodiment, a user connection server is configured to receive a request for allocation of a virtual machine for a user. The user connection server determines an attribute value of the user. Based on the attribute value of the user, allocation of physical computing resources for the virtual machine is determined. A management server is configured to boot the virtual machine for access by the user, the virtual machine booted with the determined allocation of physical computing resources for the virtual machine.
    Type: Grant
    Filed: September 26, 2013
    Date of Patent: September 1, 2020
    Assignee: VMware, Inc.
    Inventors: Sudhish Panamthanath Thankappan, Sivaprasad K. Govindankutty, Jubish Kulathumkal Jose
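A small sketch of the attribute-driven allocation step, assuming a lookup from a user attribute value (for example, a role) to a resource tier; the tiers and the attribute itself are invented for illustration.

```python
# Hypothetical tiers keyed by a user attribute value (e.g. the user's role).
RESOURCE_TIERS = {
    "engineer": {"vcpus": 4, "memory_mb": 8192},
    "analyst":  {"vcpus": 2, "memory_mb": 4096},
}
DEFAULT_TIER = {"vcpus": 1, "memory_mb": 2048}

def allocation_for_user(attribute_value):
    """Return the physical resource allocation the VM should be booted with."""
    return RESOURCE_TIERS.get(attribute_value, DEFAULT_TIER)

print(allocation_for_user("engineer"))   # {'vcpus': 4, 'memory_mb': 8192}
```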
  • Patent number: 10754675
    Abstract: A system and method include receiving, by a controller/service virtual machine, a first request associated with an element of a virtualization environment using an application programming interface (API). The first request includes a context-specific identifier. The controller/service virtual machine resides on a host machine of the virtualization environment, and the element is operatively associated with the host machine. The system and method further include determining, by the controller/service virtual machine, a type of the context-specific identifier in the first request, and mapping, by the controller/service virtual machine, the context-specific identifier to a unique identifier associated with the element based upon the determined type.
    Type: Grant
    Filed: December 1, 2017
    Date of Patent: August 25, 2020
    Assignee: NUTANIX, INC.
    Inventors: Akshay Deodhar, Venkata Vamsi Krishna Kothuri, Binny Gill
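A sketch of the type-then-map flow with invented identifier types (IP address, UUID, or display name) and invented lookup tables; the abstract does not disclose the actual controller/service VM logic, so treat this purely as an illustration.

```python
import re
import uuid

# Hypothetical lookup tables maintained by the controller/service VM.
ELEMENT_ID_BY_NAME = {"vm-web-01": "2c3a9f1c-0000-4000-8000-000000000001"}
ELEMENT_ID_BY_IP = {"10.0.0.5": "2c3a9f1c-0000-4000-8000-000000000001"}

def resolve(context_id):
    """Determine the identifier's type, then map it to the element's unique ID."""
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", context_id):
        return ELEMENT_ID_BY_IP.get(context_id)
    try:
        return str(uuid.UUID(context_id))            # already the unique identifier
    except ValueError:
        return ELEMENT_ID_BY_NAME.get(context_id)    # treat it as a display name

print(resolve("10.0.0.5"))      # maps the IP to the element's UUID
print(resolve("vm-web-01"))     # maps the name to the same UUID
```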
  • Patent number: 10754701
    Abstract: Systems and methods are described for determining a location in an on-demand code execution environment to execute user-specified code. The on-demand code execution environment may include many points of presence (POPs), some of which have limited computing resources. An execution profile for a set of user-specified code can be determined that indicates the resources likely to be used during execution of the code. Each POP of the environment may compare that execution profile to resource restrictions of the POP, to determine whether execution of the code should be permitted. In some instances, where execution of the code should not be permitted at a given POP, an alternative POP may be selected to execute the code.
    Type: Grant
    Filed: December 16, 2015
    Date of Patent: August 25, 2020
    Assignee: Amazon Technologies, Inc.
    Inventor: Timothy Allen Wagner
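A minimal sketch comparing a code set's execution profile against per-POP resource restrictions and falling back to a less constrained POP when the profile does not fit; the resource names and limits are assumptions, not values taken from the patent.

```python
def fits(profile, pop):
    """True if every resource the code is expected to use is within the POP's limits."""
    return all(profile.get(resource, 0) <= limit
               for resource, limit in pop["limits"].items())

def place(profile, pops):
    """Prefer the first (e.g. closest) POP whose restrictions permit the profile."""
    for pop in pops:
        if fits(profile, pop):
            return pop["name"]
    return None   # no POP is willing to execute this code

pops = [{"name": "edge-pop",   "limits": {"memory_mb": 128,  "duration_ms": 500}},
        {"name": "region-pop", "limits": {"memory_mb": 4096, "duration_ms": 300000}}]
print(place({"memory_mb": 512, "duration_ms": 2000}, pops))   # 'region-pop'
```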
  • Patent number: 10740152
    Abstract: Technologies for dynamic acceleration of general-purpose code include a computing device having a general-purpose processor core and one or more hardware accelerators. The computing device identifies an acceleration candidate in an application that is targeted to the processor core. The acceleration candidate may be a long-running computation of the application. The computing device translates the acceleration candidate into a translated executable targeted to the hardware accelerator. The computing device determines whether to offload execution of the acceleration candidate and, if so, executes the translated executable with the hardware accelerator. The computing device may translate the acceleration candidate into multiple translated executables, each targeted to a different hardware accelerator. The computing device may select among the translated executables in response to determining to offload execution.
    Type: Grant
    Filed: December 6, 2016
    Date of Patent: August 11, 2020
    Assignee: Intel Corporation
    Inventors: Jayaram Bobba, Niranjan K. Soundararajan
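A rough sketch of the offload decision only, assuming the candidate has already been translated for each available accelerator and that per-target runtime estimates exist; the cost model and names are placeholders, not Intel's mechanism.

```python
def choose_target(candidate, accelerator_estimates, cpu_estimate):
    """candidate: name of the long-running code region.
    accelerator_estimates: accelerator name -> estimated runtime of its translated executable.
    Offload only when some accelerator is expected to beat the general-purpose core."""
    if not accelerator_estimates:
        return "cpu"
    best_name, best_time = min(accelerator_estimates.items(), key=lambda kv: kv[1])
    return best_name if best_time < cpu_estimate else "cpu"

# The hot loop has translated executables for a GPU and an FPGA; the GPU wins here.
print(choose_target("matmul_loop", {"gpu": 12.0, "fpga": 30.0}, cpu_estimate=80.0))   # 'gpu'
```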
  • Patent number: 10740142
    Abstract: Embodiments of the present invention disclose an intelligent device, a task processing method, and a baseband processor. An intelligent device includes a baseband processor and an application processor. The baseband processor is configured to obtain task trigger information, which is used to trigger a task corresponding to an application in the intelligent device. Additionally, the baseband processor is configured to determine whether the task is a hosting task of the application, where a hosting task is a task that the application processor instructs the baseband processor in advance to process. Also, the baseband processor is configured to call and execute hosting code corresponding to the task if it is determined that the task is a hosting task of the application.
    Type: Grant
    Filed: November 20, 2014
    Date of Patent: August 11, 2020
    Assignee: Huawei Technologies Co., Ltd.
    Inventor: Yuanrong Zhang
  • Patent number: 10740130
    Abstract: A method, computer program product, and computing system for executing a first virtual machine on a hypervisor. A first communication channel is established between the first virtual machine and a first group of underlying hardware associated with the first virtual machine.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: August 11, 2020
    Assignee: EMC IP Holding Company LLC
    Inventor: Jared C. Lyon
  • Patent number: 10740146
    Abstract: Embodiments herein describe techniques for executing VMs on hosts that include an accelerator. The hosts can use the accelerators to perform specialized tasks such as floating-point arithmetic, encryption, image processing, etc. Moreover, VMs can be migrated between hosts. To do so, the state of the processor is saved on the current host, thereby saving the state of the VM. For example, by saving the processor state, once the data corresponding to the VM is loaded into a destination host, the processor can be initialized to the saved state in order to resume the VM. In addition to saving the processor state, the embodiments herein save the state of the accelerator on an FPGA. That is, unlike previous systems, where tasks executed by the accelerator are discarded when migrating the VM, the state of the accelerator can be saved and used to initialize an FPGA accelerator in the destination host.
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: August 11, 2020
    Assignee: XILINX, INC.
    Inventor: Sundararajarao Mohan
  • Patent number: 10733010
    Abstract: The current document is directed to automated application-release-management facilities that, in a described implementation, coordinate continuous development and release of cloud-computing applications. The application-release-management process is specified, in the described implementation, by application-release-management pipelines, each pipeline comprising one or more stages, with each stage comprising one or more tasks. The currently described methods and systems check whether endpoints and external tasks are reachable prior to initiating execution of application-release-management pipelines. Automatic reachability checking is scheduled for idle intervals, when the workflow-execution-engine component of the automated application-release-management facility is not executing release pipelines.
    Type: Grant
    Filed: August 11, 2017
    Date of Patent: August 4, 2020
    Assignee: VMWARE, INC.
    Inventors: Ravi Kasha, Karthikeyan Ramasamy, Bhawesh Ranjan
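A hedged sketch of an endpoint reachability pre-check using a plain TCP probe; the facility's actual probing and idle-interval scheduling are not specified in the abstract, so this only illustrates the shape of the check.

```python
import socket

def reachable(host, port, timeout=2.0):
    """Basic TCP reachability probe for a pipeline endpoint or external-task host."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def unreachable_endpoints(endpoints):
    """endpoints: list of (host, port). Returns the ones that fail the probe so
    pipeline execution can be blocked (or the user warned) before it starts."""
    return [(h, p) for h, p in endpoints if not reachable(h, p)]

print(unreachable_endpoints([("example.com", 443)]))   # [] when the endpoint answers
```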
  • Patent number: 10733032
    Abstract: A method, information processing system, and computer program product are provided for managing operating system interference on applications in a parallel processing system. A mapping of hardware multi-threading threads to at least one processing core is determined, and first and second sets of logical processors of the at least one processing core are determined. The first set includes at least one of the logical processors of the at least one processing core, and the second set includes at least one of a remainder of the logical processors of the at least one processing core. A processor schedules application tasks only on the logical processors of the first set of logical processors of the at least one processing core. Operating system interference events are scheduled only on the logical processors of the second set of logical processors of the at least one processing core.
    Type: Grant
    Filed: August 24, 2017
    Date of Patent: August 4, 2020
    Assignee: International Business Machines Corporation
    Inventors: John Divirgilio, Liana L. Fong, John Lewars, Seetharami R. Seelam, Brian F. Veale
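A minimal Linux-oriented sketch of the partitioning idea: keep the first SMT thread of each core for application tasks and leave the remaining thread(s) for operating-system interference events. The logical-CPU numbering (core * threads_per_core + thread) and the 2-way SMT layout are assumptions.

```python
import os

def partition_smt(cores, threads_per_core=2):
    """Return (app_cpus, os_cpus): the first SMT thread of each core runs application
    tasks, the remaining thread(s) absorb operating-system interference events."""
    app_cpus, os_cpus = [], []
    for core in cores:
        cpus = [core * threads_per_core + t for t in range(threads_per_core)]
        app_cpus.append(cpus[0])
        os_cpus.extend(cpus[1:])
    return app_cpus, os_cpus

app_cpus, os_cpus = partition_smt(range(4))
print("application set:", app_cpus, "OS-interference set:", os_cpus)
os.sched_setaffinity(0, app_cpus)   # pin this (application) process to the application set
```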
  • Patent number: 10733006
    Abstract: Examples described herein may include virtualized environments having multiple computing nodes accessing a storage pool. User interfaces are described which may allow a user to enter one or more IP address generation formulas for various components of computing nodes. Examples of systems described herein may evaluate the IP address generation formula(s) to generate a set of IP addresses that may be assigned to computing nodes in the system. This may advantageously allow for systematic and efficient assignment of IP addresses across large numbers of computing nodes.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: August 4, 2020
    Assignee: Nutanix, Inc.
    Inventors: Brian Finn, Jan Olderdissen, Shane Chu, YJ Yang
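A small sketch of evaluating a user-entered IP address generation formula across node indices; the formula syntax (a format string with an `{n}` placeholder) is an assumption made for illustration, not the product's actual syntax.

```python
import ipaddress

def generate_ips(formula, count, start=1):
    """Evaluate `formula` (e.g. '10.1.0.{n}') for n = start .. start+count-1,
    validating each generated address before it is assigned to a node."""
    ips = []
    for n in range(start, start + count):
        ip = formula.format(n=n)
        ipaddress.ip_address(ip)          # raises ValueError if the formula produced junk
        ips.append(ip)
    return ips

print(generate_ips("10.1.0.{n}", count=4))   # ['10.1.0.1', '10.1.0.2', '10.1.0.3', '10.1.0.4']
```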
  • Patent number: 10725804
    Abstract: An example method is provided to maintain state information of a virtual machine in a virtualized computing environment through a self-triggered approach. The method may comprise detecting, by a first host from a cluster in the virtualized computing environment, that the first host is disconnected from a network connecting the first host to a distributed storage system accessible by the cluster. The method may also comprise suspending, by the first host, a virtual machine supported by the first host and storing state information associated with the virtual machine. The method may further comprise selecting a second host from the cluster and migrating the suspended virtual machine to the second host such that the suspended virtual machine is able to resume from suspension on the second host based on the stored state information.
    Type: Grant
    Filed: November 19, 2015
    Date of Patent: July 28, 2020
    Assignee: VMWARE, INC.
    Inventors: Hariharan Jeyaraman Ganesan, Jinto Antony, Madhusudhanan Gangadharan, Muthukumar Murugan
  • Patent number: 10719369
    Abstract: Systems for provisioning virtual network interfaces (VNIs) for tasks running on a virtual machine instance in a distributed computing environment are provided. The systems receive a request to launch a task, corresponding to a plurality of containers, in an instance, along with an instruction to provide a VNI for the task with a set of network security rules. The system may select an instance with sufficient resources to launch the task and enable communication using the VNI. The system may inhibit processes running on the instance other than containers associated with the task from communicating via the VNI.
    Type: Grant
    Filed: June 1, 2017
    Date of Patent: July 21, 2020
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Anirudh Balachandra Aithal, Ryan John Marchand, Kiran Kumar Meduri
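A rough sketch of the placement step only: pick an instance with enough free resources and a free network-interface attachment slot for the task's VNI. Field names and the selection rule are invented; enforcement of the per-task security rules is omitted.

```python
def place_task(task, instances):
    """task: dict with 'cpu' and 'mem' requirements.
    instances: dicts with 'id', 'free_cpu', 'free_mem', 'free_vni_slots'."""
    for inst in instances:
        if (inst["free_cpu"] >= task["cpu"]
                and inst["free_mem"] >= task["mem"]
                and inst["free_vni_slots"] > 0):
            return inst["id"]          # launch the task's containers here, attach its VNI
    return None                        # no instance can host the task right now

instances = [{"id": "i-1", "free_cpu": 1, "free_mem": 512,  "free_vni_slots": 0},
             {"id": "i-2", "free_cpu": 4, "free_mem": 8192, "free_vni_slots": 2}]
print(place_task({"cpu": 2, "mem": 2048}, instances))   # 'i-2'
```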
  • Patent number: 10713095
    Abstract: A method of controlling a multi-core processor includes allocating at least one core of the multi-core processor to at least one process for execution; generating a translation table with respect to the at least one process to translate a logical ID of the at least one core allocated to the at least one process to a physical ID; and controlling the at least one process based on the translation table generated with respect to the at least one process.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: July 14, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Donghoon Yoo, Bernhard Egger
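A minimal sketch of the per-process translation table the abstract describes: logical core IDs 0..n-1 map to the physical IDs of the cores actually allocated to the process.

```python
def build_translation_table(allocated_physical_ids):
    """Per-process table mapping logical core ID (0..n-1) to physical core ID."""
    return dict(enumerate(allocated_physical_ids))

def to_physical(table, logical_id):
    return table[logical_id]

table = build_translation_table([5, 2, 7])   # the process was allocated physical cores 5, 2, 7
print(to_physical(table, 0))                 # 5: the process's "core 0" is physical core 5
print(to_physical(table, 2))                 # 7
```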
  • Patent number: 10691493
    Abstract: An apparatus in one embodiment comprises a processing platform configured to implement a multi-layer infrastructure comprising compute, storage, and network resources at a relatively low level of the multi-layer infrastructure, an application layer at a relatively high level of the multi-layer infrastructure, and one or more additional layers arranged between the relatively high level and the relatively low level. The processing platform is further configured to determine policies for respective different ones of the layers of the multi-layer infrastructure, the policy for a given one of the layers defining rules and requirements relating to that layer, to enforce the policies at the respective layers of the multi-layer infrastructure, and to monitor performance of an application executing in the multi-layer infrastructure. One or more configuration parameters of the multi-layer infrastructure are adjusted based at least in part on a result of the monitoring.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: June 23, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Patrick Barry, Ryan Andersen, Nitin John
  • Patent number: 10691638
    Abstract: The subject matter of this specification can be implemented in, among other things, a method that includes receiving, from within a guest operating system, a request to create a data file in a guest file system of the guest operating system. The method further includes in response to the receipt of the request to create the data file, creating an external data file in a first storage device for a file system outside the guest file system, creating a sparse file in the guest file system, and storing metadata that directs requests to access the sparse file from within the guest operating system to the external data file in the first storage device.
    Type: Grant
    Filed: January 18, 2017
    Date of Patent: June 23, 2020
    Assignee: Parallels International GmbH
    Inventors: Maxim Lyadvinsky, Nikolay Dobrovolskiy, Serguei M. Beloussov
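A hedged illustration of the redirect idea: create a sparse placeholder of the right size inside the guest file system, keep the real bytes in an external data file, and record metadata mapping one to the other. Paths and the metadata format are invented; the patented guest/host plumbing is not shown.

```python
import json
import os

def create_redirected_file(guest_path, external_path, size, metadata_path):
    # The real data lives outside the guest file system.
    open(external_path, "wb").close()
    # Sparse placeholder inside the guest file system: full logical size, no data blocks.
    with open(guest_path, "wb") as f:
        f.truncate(size)
    # Metadata that directs accesses to the sparse file toward the external data file.
    with open(metadata_path, "w") as f:
        json.dump({"sparse_file": guest_path, "backing_file": external_path}, f)

create_redirected_file("guest_data.img", "external_store.bin", 1 << 30, "redirect.json")
print(os.path.getsize("guest_data.img"))   # 1073741824 logical bytes, almost none allocated
```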
  • Patent number: 10691503
    Abstract: A method for live migration of a virtual machine includes receiving a data packet that is sent to a migrated virtual machine on the source physical machine during a stage in which the migrated virtual machine is suspended, and caching the received data packet; and sending the cached data packet to the migrated virtual machine on the destination physical machine after it is sensed that the migrated virtual machine has been restored at the destination, to speed up restoration of a TCP connection inside the virtual machine. The apparatus of the present disclosure includes a caching unit and a data restoration unit. The method and apparatus of the present disclosure improve the restoration speed of the TCP connection, make live migration of a virtual machine less perceptible to users, and improve user experience.
    Type: Grant
    Filed: June 2, 2017
    Date of Patent: June 23, 2020
    Assignee: Alibaba Group Holding Limited
    Inventors: Jingxuan Li, Junwei Zhang, Jinsong Liu, Honghao Liu
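A minimal sketch of the cache-and-replay behavior described above: packets destined for the VM are held while it is suspended and flushed to the destination once restoration is sensed, so the guest's TCP connections recover without waiting for retransmissions. The class and callbacks are invented for illustration.

```python
from collections import deque

class MigrationPacketCache:
    """Caches packets addressed to a VM while it is suspended for live migration,
    then replays them once the VM has resumed on the destination host."""
    def __init__(self):
        self.pending = deque()
        self.vm_suspended = True

    def on_packet(self, packet, send):
        if self.vm_suspended:
            self.pending.append(packet)      # hold the packet instead of dropping it
        else:
            send(packet)

    def on_vm_restored(self, send):
        self.vm_suspended = False
        while self.pending:
            send(self.pending.popleft())     # flush cached packets to the restored VM

cache = MigrationPacketCache()
cache.on_packet(b"tcp segment for the guest", send=print)   # cached: VM is suspended
cache.on_vm_restored(send=print)                            # replayed after restoration
```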