Patents Examined by Jorge A Chu Joy-Davila
  • Patent number: 10891147
    Abstract: Aspects of the embodiments are directed to forming a virtual machine management (VMM) domain in a heterogeneous datacenter. Aspects can include mapping an endpoint group to multiple VMM domains, each VMM domain associated with one or more virtual machine management systems of a single type that each share one or more management system characteristics; instantiating a virtual switch instance, the virtual switch instance associated with the VMM domain; and instantiating the endpoint group mapped to the VMM domain as a network component associated with the virtual switch instance.
    Type: Grant
    Filed: December 12, 2016
    Date of Patent: January 12, 2021
    Assignee: Cisco Technology, Inc.
    Inventors: Vijayan Ramakrishnan, Saurabh Jain, Vijay Chander, Ronak K. Desai, Praveen Jain, Munish Mehta, Yibin Yang
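    To make the mapping concrete, below is a minimal Python sketch of an endpoint group mapped onto multiple VMM domains, with a virtual switch instance and port group created per domain. All class and attribute names (VMMDomain, EndpointGroup, map_to) are invented for illustration and are not Cisco's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class VMMDomain:
    """A group of VM management systems of a single type (e.g. one vendor)."""
    name: str
    manager_type: str          # the shared management-system characteristic
    managers: list = field(default_factory=list)

@dataclass
class EndpointGroup:
    """A policy group that can be mapped onto several VMM domains."""
    name: str
    domains: list = field(default_factory=list)

    def map_to(self, domain: VMMDomain) -> dict:
        """Map this EPG to a domain and instantiate a virtual switch
        instance plus a port group representing the EPG on that switch."""
        self.domains.append(domain)
        vswitch = {"domain": domain.name, "type": domain.manager_type}
        port_group = {"epg": self.name, "vswitch": vswitch}
        return port_group

web_epg = EndpointGroup("web")
for dom in (VMMDomain("dc-vmware", "vcenter"), VMMDomain("dc-kvm", "libvirt")):
    print(web_epg.map_to(dom))
```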
  • Patent number: 10884794
    Abstract: A control apparatus is communicably connected to a plurality of processing apparatuses and includes a processor configured to: determine whether the sum of an execution time of a first process, an execution time of a second process, and a time taken for a first processing apparatus among the plurality of processing apparatuses to rewrite a logic for executing the first process to a logic for executing the second process is equal to or smaller than a unit time; determine whether data traffic between the plurality of processing apparatuses is equal to or smaller than a threshold when the first processing apparatus executes the first and second processes; and cause the first processing apparatus to execute the first and second processes when it is determined that the sum is equal to or smaller than the unit time and the data traffic is equal to or smaller than the threshold.
    Type: Grant
    Filed: July 26, 2017
    Date of Patent: January 5, 2021
    Assignee: FUJITSU LIMITED
    Inventor: Noboru Yoneoka
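    The decision in the abstract above reduces to two comparisons: the two execution times plus the logic-rewrite time must fit within a unit time, and inter-apparatus data traffic must stay at or below a threshold. A minimal sketch, with the helper name and all numbers invented:

```python
def can_colocate(exec_time_1: float, exec_time_2: float,
                 rewrite_time: float, unit_time: float,
                 traffic: float, traffic_threshold: float) -> bool:
    """Return True when the first processing apparatus may run both
    processes: the combined time fits in one unit time and the data
    traffic between apparatuses stays within the threshold."""
    fits_in_unit = exec_time_1 + exec_time_2 + rewrite_time <= unit_time
    traffic_ok = traffic <= traffic_threshold
    return fits_in_unit and traffic_ok

# Example: 3 ms + 4 ms + 2 ms = 9 ms fits in a 10 ms unit; traffic is under limit.
print(can_colocate(3.0, 4.0, 2.0, 10.0, traffic=120.0, traffic_threshold=200.0))
```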
  • Patent number: 10884787
    Abstract: Systems and methods are described for implementing execution guarantees in an on-demand code execution system or other distributed code execution environment, such that the on-demand code execution system attempts to execute code only a desired number of times. The on-demand code execution system can utilize execution identifiers to distinguish between new and duplicative requests, and can decline to allocate computing resources for duplicative requests. The on-demand code execution system can further detect errors during execution, and roll back the execution to undo its effects. The on-demand code execution system can then restart execution until the code has been executed the desired number of times.
    Type: Grant
    Filed: September 23, 2016
    Date of Patent: January 5, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Timothy Allen Wagner, Marc John Brooker, Jonathan Paul Thompson, Ajay Nair
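    A toy model of the execution-guarantee idea follows: execution identifiers distinguish duplicative requests, failed runs are rolled back and left eligible for retry, and execution stops once the desired count is reached. The ExecutionGuard class and its behavior are hypothetical, not the patented system:

```python
class ExecutionGuard:
    """Track execution identifiers so duplicative requests do not get
    fresh compute resources, and allow retries of failed executions
    until the desired execution count is reached."""

    def __init__(self, desired_executions: int = 1):
        self.desired = desired_executions
        self.seen: dict[str, int] = {}     # execution id -> completed runs

    def run(self, execution_id: str, code) -> str:
        completed = self.seen.get(execution_id, 0)
        if completed >= self.desired:
            return "duplicate request: declined"
        try:
            code()
        except Exception:
            # roll back side effects here; count stays unchanged so a retry runs
            return "failed: rolled back, eligible for retry"
        self.seen[execution_id] = completed + 1
        return "executed"

guard = ExecutionGuard()
print(guard.run("req-42", lambda: None))   # executed
print(guard.run("req-42", lambda: None))   # duplicate request: declined
```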
  • Patent number: 10877809
    Abstract: The present disclosure provides a method for information processing. The method is applied in an electronic device and comprises: acquiring, upon detecting that the electronic device has been switched from a first state to a second state, a priority list storing a priority of each user-initiated application among all applications, for use when the electronic device is switched to the first state again; and selecting one or more applications from the priority list for preloading. Also provided is an apparatus for information processing.
    Type: Grant
    Filed: September 18, 2015
    Date of Patent: December 29, 2020
    Assignees: Beijing Lenovo Software Ltd., Lenovo (Beijing) Limited
    Inventors: Wei Hou, Jingjing Liu
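    Illustrative sketch only: a priority list built from per-application usage counts, with the top entries selected for preloading. The function name and counts are invented:

```python
def apps_to_preload(usage_counts: dict[str, int], top_n: int = 3) -> list[str]:
    """Build a priority list from how often the user launched each app in
    the first state, then pick the top entries for preloading when the
    device returns to that state. Counts here are invented examples."""
    priority_list = sorted(usage_counts, key=usage_counts.get, reverse=True)
    return priority_list[:top_n]

print(apps_to_preload({"mail": 12, "camera": 3, "maps": 7, "music": 9}))
```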
  • Patent number: 10877797
    Abstract: One or more embodiments provide techniques for executing a workflow in a private data center. The cloud data center receives a request from a user. The cloud data center notifies the private data center that the request necessitates execution of the workflow in the private data center. A handler in the private data center maps a component of the cloud data center to the request. The handler determines whether a pairing exists between the component in the cloud data center and a component of the private data center. Upon determining that the pairing exists, the handler executes the workflow in the private data center. The handler publishes the results of the workflow to the cloud data center.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: December 29, 2020
    Assignee: VMware, Inc.
    Inventors: Dobrin Slavov Ivanov, Kalin Georgiev Fetvadjiev
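    A hypothetical sketch of the handler's pairing check described above; the PAIRINGS table, component names, and handle_request helper are assumptions, not VMware's implementation:

```python
PAIRINGS = {  # cloud data center component -> paired private data center component
    "cloud-catalog": "private-catalog",
    "cloud-network": "private-network",
}

def handle_request(request: dict) -> str:
    """Map the request to a cloud component; if that component is paired
    with a private data center component, run the workflow on premises
    and publish the result back to the cloud data center."""
    component = request["component"]
    if component not in PAIRINGS:
        return "no pairing: workflow not executed"
    result = f"workflow '{request['workflow']}' ran on {PAIRINGS[component]}"
    return f"published to cloud: {result}"

print(handle_request({"component": "cloud-catalog", "workflow": "provision-vm"}))
```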
  • Patent number: 10877796
    Abstract: Methods, systems, and computer-readable media for job execution with scheduled reserved compute instances are disclosed. One or more queues are mapped to a compute environment. The queue(s) are configured to store data indicative of jobs. The compute environment is associated with one or more scheduled reserved compute instances, and the one or more scheduled reserved compute instances are reserved for use in the compute environment for a window of time. The queue(s) are mapped to the compute environment prior to the window of time opening. During the window, at least one of the scheduled reserved compute instances is added to the compute environment. The scheduled reserved compute instance(s) are provisioned from a pool of available compute instances. During the window, execution is initiated of one or more jobs from the queue(s) on the scheduled reserved compute instance(s) in the compute environment.
    Type: Grant
    Filed: June 28, 2016
    Date of Patent: December 29, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: James Edward Kinney, Jr., Dougal Stuart Ballantyne, Nishant Mehta
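    The sketch below illustrates the ordering the abstract describes: queues are mapped before the reservation window opens, and instances are provisioned from the pool and jobs started only while the window is open. Class and method names are invented:

```python
from datetime import datetime

class ComputeEnvironment:
    """Queues are mapped before the reservation window opens; instances
    are added and jobs run only while the window is open."""

    def __init__(self, window_start: datetime, window_end: datetime):
        self.window = (window_start, window_end)
        self.queues: list[list[str]] = []
        self.instances: list[str] = []

    def map_queue(self, queue: list[str]) -> None:
        self.queues.append(queue)              # done prior to the window

    def tick(self, now: datetime, pool: list[str]) -> list[str]:
        start, end = self.window
        if not start <= now <= end:
            return []
        if not self.instances and pool:
            self.instances.append(pool.pop())  # provision from the available pool
        return [q.pop(0) for q in self.queues if q and self.instances]

env = ComputeEnvironment(datetime(2020, 1, 1, 9), datetime(2020, 1, 1, 17))
env.map_queue(["job-a", "job-b"])
print(env.tick(datetime(2020, 1, 1, 10), pool=["sri-1"]))  # ['job-a']
```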
  • Patent number: 10877793
    Abstract: A hypervisor associates a combined register space with a virtual device to be presented to a guest operating system of a virtual machine, the combined register space comprising a default register space and an additional register space. Responsive to detecting an access of the additional register space by the guest operating system of the virtual machine, the hypervisor performs an operation on behalf of the virtual machine, the operation pertaining to the access of the additional register space.
    Type: Grant
    Filed: February 25, 2019
    Date of Patent: December 29, 2020
    Assignee: Red Hat Israel, Ltd.
    Inventors: Michael S. Tsirkin, Paolo Bonzini
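    A minimal illustration of a combined register space, assuming (hypothetically) a default range handled by the device model and an additional range whose accesses the hypervisor intercepts; the offsets and ranges are invented:

```python
DEFAULT_SPACE = range(0x00, 0x40)      # registers the virtual device itself handles
ADDITIONAL_SPACE = range(0x40, 0x80)   # registers the hypervisor handles

def guest_register_access(offset: int, value: int) -> str:
    """The combined register space presented to the guest: accesses to the
    additional space are intercepted and serviced by the hypervisor on
    behalf of the virtual machine."""
    if offset in ADDITIONAL_SPACE:
        return f"hypervisor performs operation for write {value:#x} at {offset:#x}"
    if offset in DEFAULT_SPACE:
        return f"default device register {offset:#x} written with {value:#x}"
    raise ValueError("offset outside the combined register space")

print(guest_register_access(0x44, 0x1))
```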
  • Patent number: 10877794
    Abstract: Virtual machines may migrate between sets of implementation resources in a manner that allows the virtual machines to efficiently and effectively adapt to new implementation resources. Migration agents can be added to the virtual machines under consideration for migration. The migration agents may detect and augment relevant virtual machine capabilities, as well as trigger reconfiguration of virtual machine components in accordance with migration templates.
    Type: Grant
    Filed: March 18, 2019
    Date of Patent: December 29, 2020
    Assignee: Amazon Technologies, Inc.
    Inventor: Pradeep Vincent
  • Patent number: 10877789
    Abstract: An information processing apparatus includes a memory and a processor coupled to the memory. The processor is configured to collect charge information on electric power for running a virtual machine in respective cloud services provided by respective data centers located in different regions. The processor is configured to obtain operation information of respective first virtual machines running in the cloud services. The processor is configured to generate a migration plan for migrating the first virtual machines on the basis of the charge information and the operation information.
    Type: Grant
    Filed: August 2, 2017
    Date of Patent: December 29, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Naoto Ohnishi
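    By way of illustration, a toy migration planner that combines charge information (electricity prices per region) with operation information (per-VM utilization); the regions, prices, and the 0.8 utilization cutoff are all invented:

```python
def plan_migrations(power_price: dict[str, float],
                    vm_load: dict[str, tuple[str, float]]) -> list[tuple[str, str]]:
    """Propose moving each VM to the cheapest-power region when it is not
    already there and is lightly loaded. The real patent combines charge
    and operation information collected from each cloud service."""
    cheapest = min(power_price, key=power_price.get)
    plan = []
    for vm, (region, utilization) in vm_load.items():
        if region != cheapest and utilization < 0.8:   # only migrate quiet VMs
            plan.append((vm, cheapest))
    return plan

prices = {"eu-north": 0.04, "us-east": 0.09, "ap-east": 0.12}   # $/kWh, invented
vms = {"vm-1": ("us-east", 0.35), "vm-2": ("ap-east", 0.95)}
print(plan_migrations(prices, vms))    # [('vm-1', 'eu-north')]
```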
  • Patent number: 10877677
    Abstract: To optimize front-end operations performed on virtual machines, a storage tiering module preemptively guides the placement of virtual volumes in storage tiers within a storage system. Upon detecting a front-end operation request, the storage tiering module identifies a storage requirement, such as an expected provisioning activity level during the front-end operation. Based on the identified storage requirement, the storage tiering module selects an appropriate storage tier. Subsequently, in preparation for the front-end operation, the storage tiering module places the virtual volume at the selected storage tier. Because the storage tiering module places the virtual volume in a tier that reflects the resource consumption expected during the front-end operation, the storage system does not incur the performance degradation that often precedes tier movement in conventional, reactive approaches to storage tiering.
    Type: Grant
    Filed: October 25, 2016
    Date of Patent: December 29, 2020
    Assignee: VMware, Inc.
    Inventors: Jinto Antony, Nagendra Singh Tomar
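    A hypothetical sketch of preemptive tier placement: the front-end operation is mapped to an expected provisioning activity level, which selects the storage tier before the operation runs. The operation names, activity levels, and tier names are assumptions:

```python
# Expected provisioning activity per front-end operation (invented values).
EXPECTED_ACTIVITY = {"clone": "high", "snapshot": "medium", "power-on": "low"}
TIER_FOR_ACTIVITY = {"high": "ssd", "medium": "hybrid", "low": "hdd"}

def place_volume(volume: str, operation: str) -> str:
    """On detecting a front-end operation request, look up its expected
    provisioning activity and move the virtual volume to a matching tier
    before the operation starts, rather than reacting afterwards."""
    activity = EXPECTED_ACTIVITY.get(operation, "low")
    tier = TIER_FOR_ACTIVITY[activity]
    return f"{volume} placed on {tier} tier ahead of '{operation}'"

print(place_volume("vvol-17", "clone"))
```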
  • Patent number: 10860377
    Abstract: Systems, methods, and computer-readable media for identifying and managing memory allocation for one or more threads are described. A computer system may detect that a threshold memory utilization has been met, and may determine an aggregate memory allocation for a thread. The aggregate memory allocation may be a difference between a first memory allocation for the thread at a first time that the threshold memory utilization was met and a second memory allocation for the thread at a second time that the threshold memory utilization was met. The computer device may provide an indication that the thread has met or exceeded a threshold memory allocation when the aggregate memory allocation is greater than or equal to the threshold memory allocation. The computer device may disable the thread when the aggregate memory allocation is greater than or equal to the threshold memory allocation. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: March 22, 2017
    Date of Patent: December 8, 2020
    Assignee: SALESFORCE.COM, INC.
    Inventor: Brian Toal
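    A minimal sketch of the aggregate-allocation bookkeeping described above: the thread's allocation is sampled each time the system-wide threshold is met, and the difference between consecutive samples is compared against a per-thread threshold. Names and byte counts are invented:

```python
class ThreadMemoryWatch:
    """Record a thread's allocation each time the system-wide memory
    threshold is met; the aggregate allocation is the difference between
    the latest and the previous sample, and a thread whose aggregate
    reaches its own threshold is flagged (or disabled)."""

    def __init__(self, per_thread_threshold: int):
        self.per_thread_threshold = per_thread_threshold
        self.last_sample: dict[str, int] = {}

    def on_system_threshold(self, thread: str, allocated_bytes: int) -> bool:
        previous = self.last_sample.get(thread, allocated_bytes)
        aggregate = allocated_bytes - previous
        self.last_sample[thread] = allocated_bytes
        return aggregate >= self.per_thread_threshold   # True -> flag or disable

watch = ThreadMemoryWatch(per_thread_threshold=50_000_000)
print(watch.on_system_threshold("worker-1", 120_000_000))  # first sample: False
print(watch.on_system_threshold("worker-1", 200_000_000))  # +80 MB since last: True
```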
  • Patent number: 10846136
    Abstract: Disclosed embodiments describe a system for managing spillover via a plurality of cores of a multi-core device intermediary to a plurality of clients and one or more services. The system may include a spillover limit of a resource and a plurality of packet engines, each operating on a corresponding core of a plurality of cores of the device. The system may include a pool manager allocating to each of the plurality of packet engines a number of resource uses from an exclusive quota pool and a shared quota pool based on the spillover limit of the resource. The device determines that the number of resources used by a packet engine has reached the allocated number of resource uses of the packet engine and, responsive to the determination, forwards to a backup virtual server a request of a client received by the device for the virtual server.
    Type: Grant
    Filed: July 30, 2015
    Date of Patent: November 24, 2020
    Assignee: Citrix Systems, Inc.
    Inventors: Manikam Muthiah, Josephine Suganthi, Sandeep Kamath
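    A toy model of the quota-pool idea: each packet engine draws first on its exclusive quota, then on the shared pool, and spills further requests over to the backup virtual server. The class, quotas, and request names are invented:

```python
class PacketEngine:
    """Each engine gets an exclusive quota carved from the spillover limit
    plus access to a shared pool; once its uses are exhausted, new client
    requests spill over to the backup virtual server."""

    def __init__(self, exclusive_quota: int, shared_pool: list[int]):
        self.remaining = exclusive_quota
        self.shared_pool = shared_pool        # single shared counter in a list

    def handle_request(self, request: str) -> str:
        if self.remaining > 0:
            self.remaining -= 1
            return f"{request}: primary virtual server"
        if self.shared_pool[0] > 0:
            self.shared_pool[0] -= 1
            return f"{request}: primary virtual server (shared quota)"
        return f"{request}: forwarded to backup virtual server"

shared = [1]                                   # shared quota pool with one use left
engine = PacketEngine(exclusive_quota=1, shared_pool=shared)
for r in ("req-1", "req-2", "req-3"):
    print(engine.handle_request(r))
```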
  • Patent number: 10838770
    Abstract: Embodiments analyze historical events to calculate the impact of multi-system events and, in response, allocate resources. Embodiments determine a multi-system event is occurring based on historical multi-system event data; correlate the multi-system event with one or more predicted resource allocation results of the multi-system event based on historical multi-system event data; and in response to the correlation, initiate mitigation of the one or more predicted resource allocation results, including re-allocation of at least one affected resource to a new system.
    Type: Grant
    Filed: January 29, 2020
    Date of Patent: November 17, 2020
    Assignee: BANK OF AMERICA CORPORATION
    Inventors: Darla Nutter, Angelyn Marie Day, Clifford Todd Barnett, John J. Towey, Jr.
  • Patent number: 10810048
    Abstract: Systems and methods for dynamic allocation of compilation machines are disclosed. A method includes: initiating and storing a task to be compiled and marking the task as waiting for compilation by a compilation machine; fetching a compile command and analyzing a current compile state of the compilation machine; and determining, based on the current compile state, whether the task to be compiled is to continue waiting for compilation or to enter a compile stage, wherein if it is to continue waiting, the task is further held in storage, and otherwise it is transmitted to the compilation machine for compilation. Thus, tasks can be assigned to compilation machines automatically, achieving efficient use of the compilation machines and reducing the error rate that would otherwise result from human intervention.
    Type: Grant
    Filed: June 8, 2016
    Date of Patent: October 20, 2020
    Assignee: HUIZHOU TCL MOBILE COMMUNICATION CO., LTD
    Inventors: Fan Yang, Qiujuan Xie, Enli Xiang, Julan Chen, Jiaqiong Feng
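    Illustrative only: a dispatcher that keeps tasks in the waiting state until a compilation machine reports an idle compile state, then transmits the task to it. Machine names and states are assumptions:

```python
from collections import deque

def dispatch(waiting: deque, machines: dict[str, str]) -> list[str]:
    """Hold each queued task in the waiting state until a compilation
    machine reports an idle compile state, then transmit the task to it."""
    log = []
    for name, state in machines.items():
        if state == "idle" and waiting:
            task = waiting.popleft()
            machines[name] = "compiling"
            log.append(f"{task} -> {name}")
    if waiting:
        log.append(f"{len(waiting)} task(s) still waiting for compilation")
    return log

queue = deque(["build-app", "build-lib", "build-docs"])
print(dispatch(queue, {"cm-1": "idle", "cm-2": "compiling", "cm-3": "idle"}))
```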
  • Patent number: 10794153
    Abstract: A method can include receiving scheduled tasks associated with subsystems of a wellsite system wherein the scheduled tasks are associated with achievement of desired states of the wellsite system; transmitting task information for at least a portion of the scheduled tasks to computing devices associated with the subsystems; receiving state information via the wellsite system; assessing the state information with respect to one or more of the desired states; based at least in part on the assessing, scheduling a task; and transmitting task information for the task to one or more of the computing devices associated with the subsystems.
    Type: Grant
    Filed: April 18, 2016
    Date of Patent: October 6, 2020
    Assignee: Schlumberger Technology Corporation
    Inventors: Richard John Meehan, Benoit Foubert, Jean-Pierre Poyet
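    A minimal closed-loop sketch of the idea: reported subsystem state is assessed against desired states, and a corrective task is scheduled for any subsystem outside tolerance. Subsystem names, values, and the 5% tolerance are invented:

```python
def control_step(desired: dict[str, float], observed: dict[str, float],
                 tolerance: float = 0.05) -> list[str]:
    """Assess reported state information against the desired states and
    schedule a corrective task for every subsystem that has drifted out
    of tolerance; the task information would then be transmitted to the
    computing device associated with that subsystem."""
    tasks = []
    for subsystem, target in desired.items():
        actual = observed.get(subsystem, 0.0)
        if abs(actual - target) > tolerance * abs(target):
            tasks.append(f"adjust {subsystem} toward {target}")
    return tasks

print(control_step({"pump_rate": 100.0, "rotary_speed": 120.0},
                   {"pump_rate": 92.0, "rotary_speed": 121.0}))
```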
  • Patent number: 10789083
    Abstract: Disclosed herein are a method and apparatus for virtual desktop service. The apparatus includes a connection manager configured to perform an assignment task of assigning a virtual machine to a user terminal using virtual desktop service, a resource pool configured to allocate software resources to a virtual desktop, wherein the software resources include an OS, applications, and user profiles, and a virtual machine infrastructure configured to support hardware resources including a CPU and a memory, wherein the connection manager is configured to perform a coordination task of coordinating a delivery protocol used between the user terminal and servers that provide the virtual desktop service, wherein the resource pool has a management function, wherein the management function is based on usage pattern information about a user's average usage of resources, and wherein the management function uses a physical distance on the network from the user terminal to a server.
    Type: Grant
    Filed: November 17, 2016
    Date of Patent: September 29, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Dae-Won Kim, Sun-Wook Kim, Jong-Bae Moon, Myeong-Hoon Oh, Byeong-Thaek Oh, Soo-Cheol Oh, Seong-Woon Kim, Ji-Hyeok Choi, Hag-Young Kim, Wan Choi
  • Patent number: 10778807
    Abstract: An objective of the present disclosure is to provide a method and apparatus for scheduling resources in a cloud system. The method according to the present disclosure comprises the steps of: determining, according to the stability of computing resources in the cloud system, respective resource priority levels of the respective computing resources; determining a scheduling algorithm corresponding to a current job when resource scheduling is needed; and allocating the resources based on the scheduling algorithm and the resource priority levels of the currently available computing resources. Compared with the prior art, the present disclosure has the following advantages: by differentiating the priorities of the computing resources and supporting a plurality of scheduling algorithms, resource scheduling is performed based on a variety of scheduling algorithms and resource priorities, which enhances the flexibility of resource scheduling and improves resource utilization and system throughput.
    Type: Grant
    Filed: January 4, 2018
    Date of Patent: September 15, 2020
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Muhua Zhang, Xianjun Meng, Ru Ying
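    A small sketch of priority-aware allocation, assuming stability is already expressed as a resource priority level and a greedy fill from the highest level downwards stands in for the selected scheduling algorithm. Names and numbers are invented:

```python
def allocate(job_demand: int, resources: dict[str, tuple[int, int]]) -> list[str]:
    """Pick computing resources for a job, filling from the highest
    resource priority level downwards. Each resource maps to a tuple of
    (priority, free_slots)."""
    chosen = []
    for name, (priority, free) in sorted(resources.items(),
                                         key=lambda kv: -kv[1][0]):
        take = min(free, job_demand - len(chosen))
        chosen.extend([name] * take)
        if len(chosen) == job_demand:
            break
    return chosen

pool = {"stable-node": (3, 2), "spot-node": (1, 4), "shared-node": (2, 1)}
print(allocate(3, pool))   # ['stable-node', 'stable-node', 'shared-node']
```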
  • Patent number: 10768995
    Abstract: Managing a communications network involves allocating hosts (100) for instances (105) of a virtual network function component (155). From a request to allocate, a number N is obtained indicating the minimum number of the instances to be available, and a number M indicating how many additional instances are to be allocated. If the allocations are requested to be to different hosts (anti-affinity) and if the sharing of the instances by the virtual network function component can be adapted in the event of unavailability, then N+M of the instances are automatically allocated (230) to fewer than N+M of the hosts, so that if any one of the allocated hosts becomes unavailable there are still sufficient hosts for the virtual network function component to be shared across at least N of the instances. Fewer hosts are needed, saving costs.
    Type: Grant
    Filed: October 23, 2015
    Date of Patent: September 8, 2020
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Giuseppe Celozzi, Luca Baldini, Daniele Gaito, Gaetano Patria
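    The arithmetic behind the abstract: if no host carries more than M instances, losing any single host still leaves at least N of the N+M instances, so ceil((N+M)/M) hosts suffice. A minimal sketch with invented host names:

```python
def allocate_instances(n: int, m: int, hosts: list[str]) -> dict[str, int]:
    """Spread N+M instances over fewer than N+M hosts so that losing any
    single host still leaves at least N instances: with anti-affinity
    relaxed, that holds whenever no host carries more than M instances."""
    total, per_host = n + m, {}
    max_per_host = max(m, 1)                  # cap per host (m=0 degenerates to 1)
    needed = -(-total // max_per_host)        # ceiling division
    if needed > len(hosts):
        raise ValueError("not enough hosts for the availability requirement")
    for i in range(total):
        host = hosts[i % needed]
        per_host[host] = per_host.get(host, 0) + 1
    return per_host

# N=4 instances must survive one host failure, M=2 spares: 6 instances on 3 hosts.
print(allocate_instances(n=4, m=2, hosts=["h1", "h2", "h3", "h4"]))
```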
  • Patent number: 10733032
    Abstract: A method, information processing system, and computer program product are provided for managing operating system interference on applications in a parallel processing system. A mapping of hardware multi-threading threads to at least one processing core is determined, and first and second sets of logical processors of the at least one processing core are determined. The first set includes at least one of the logical processors of the at least one processing core, and the second set includes at least one of a remainder of the logical processors of the at least one processing core. A processor schedules application tasks only on the logical processors of the first set of logical processors of the at least one processing core. Operating system interference events are scheduled only on the logical processors of the second set of logical processors of the at least one processing core.
    Type: Grant
    Filed: August 24, 2017
    Date of Patent: August 4, 2020
    Assignee: International Business Machines Corporation
    Inventors: John Divirgilio, Liana L. Fong, John Lewars, Seetharami R. Seelam, Brian F. Veale
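    A hypothetical sketch of the partitioning: the hardware threads of an SMT core are split into a set reserved for application tasks and a set that absorbs operating-system interference events. The split and CPU numbering are invented:

```python
def partition_core(logical_cpus: list[int], os_share: int = 1):
    """Split a core's hardware threads into two sets: application tasks
    are scheduled only on the first set, and operating-system
    interference events are confined to the second."""
    app_set = logical_cpus[:-os_share]
    os_set = logical_cpus[-os_share:]
    return app_set, os_set

# A 4-way SMT core: threads 0-2 run the parallel application, thread 3
# absorbs OS interference so it never preempts application tasks.
app_cpus, os_cpus = partition_core([0, 1, 2, 3])
print("application tasks ->", app_cpus, "| OS interference ->", os_cpus)
```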
  • Patent number: 10725829
    Abstract: A server includes a processing device to execute a resource manager to receive, from a client device, a job to complete a data-processing task using processing resources of a data-processing cluster, and configure a scheduler to be associated with the data-processing cluster and to manage sharing the processing resources with at least a second job. The scheduler includes a job queue. The processing device is further to partition the job queue into a delegator queue and an application queue, wherein the delegator queue is associated with a delegator container and the application queue is associated with a child application container. The processing device is further to manage, in completion of the job, the processing resources of the data-processing cluster according to capacities allocated to the delegator queue and to the application queue, respectively.
    Type: Grant
    Filed: January 22, 2018
    Date of Patent: July 28, 2020
    Assignee: salesforce.com, inc.
    Inventors: Benson Qiu, Siddhi Mehta, Aakash Pradeep, Shangkar Mayanglambam
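    A toy sketch of the partitioned queue: the job queue is split into a delegator queue and an application queue, each with its own share of the cluster capacity. The class name, 10% delegator share, and job names are assumptions, not salesforce.com's implementation:

```python
class PartitionedScheduler:
    """A job queue split into a delegator queue and an application queue,
    each holding a share of the cluster's capacity; the delegator
    container coordinates while child application containers do the work."""

    def __init__(self, total_slots: int, delegator_share: float = 0.1):
        self.delegator_slots = max(1, int(total_slots * delegator_share))
        self.application_slots = total_slots - self.delegator_slots
        self.delegator_queue: list[str] = []
        self.application_queue: list[str] = []

    def submit(self, job: str) -> None:
        self.delegator_queue.append(f"{job}-delegator")
        self.application_queue.append(f"{job}-app-container")

    def capacities(self) -> dict[str, int]:
        return {"delegator": self.delegator_slots,
                "application": self.application_slots}

sched = PartitionedScheduler(total_slots=100)
sched.submit("etl-job")
print(sched.capacities(), sched.delegator_queue, sched.application_queue)
```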