Patents Examined by Jonathan R Labud
  • Patent number: 10831519
    Abstract: Techniques for packaging and deploying algorithms utilizing containers for flexible machine learning are described. In some embodiments, users can create or utilize simple containers adhering to a specification of a machine learning service in a provider network, where the containers include code for how a machine learning model is to be trained and/or executed. The machine learning service can automatically train a model and/or host a model using the containers. The containers can use a wide variety of algorithms, languages, libraries, data types, etc. Users can thus implement machine learning training and/or hosting with minimal knowledge of how the overall training and/or hosting is actually performed.
    Type: Grant
    Filed: February 21, 2018
    Date of Patent: November 10, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Thomas Albert Faulhaber, Jr., Gowda Dayananda Anjaneyapura Range, Jeffrey John Geevarghese, Taylor Goodhart, Charles Drummond Swan
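    Illustrative sketch: the abstract above (patent 10831519) describes containers that follow a machine learning service's specification and carry the code that trains and/or serves a model. Below is a minimal, hypothetical Python training entrypoint; the directory layout (/ml/input, /ml/model), the file names, and the toy "algorithm" are assumptions for illustration, not the service's actual specification.
      #!/usr/bin/env python3
      """Hypothetical training entrypoint for a container that follows a machine
      learning service's convention: read hyperparameters and data from mounted
      input paths, write the trained model artifact to an output path."""
      import json
      import pathlib
      import pickle

      INPUT_DIR = pathlib.Path("/ml/input")   # assumed mount point for data and config
      MODEL_DIR = pathlib.Path("/ml/model")   # assumed mount point for model artifacts

      def train() -> None:
          params = json.loads((INPUT_DIR / "hyperparameters.json").read_text())
          rows = [json.loads(line) for line in (INPUT_DIR / "data" / "train.jsonl").open()]

          # Stand-in "algorithm": a per-feature mean scaled by a hyperparameter.
          scale = float(params.get("scale", 1.0))
          model = {k: scale * sum(r[k] for r in rows) / len(rows) for k in rows[0]}

          MODEL_DIR.mkdir(parents=True, exist_ok=True)
          with (MODEL_DIR / "model.pkl").open("wb") as f:
              pickle.dump(model, f)

      if __name__ == "__main__":
          train()   # the service would run the container with a "train" command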
  • Patent number: 10810031
    Abstract: An example method of tracking memory modified by an assigned device includes allocating, by a hypervisor running a virtual machine, guest memory to a guest running on the virtual machine, where a device is assigned to the virtual machine. The method also includes reading, while the virtual machine is running on the hypervisor, a first input/output (I/O) state that indicates whether the device is currently processing one or more I/O requests, where the first I/O state is writable by the guest. The method further includes determining whether the first I/O state indicates that the device is currently processing one or more I/O requests. The method also includes determining to not transmit a memory page to a destination in response to determining that the first I/O state indicates that the device is currently processing one or more I/O requests. The memory page corresponds to the first I/O state.
    Type: Grant
    Filed: September 28, 2015
    Date of Patent: October 20, 2020
    Assignee: RED HAT ISRAEL, LTD.
    Inventor: Michael Tsirkin
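    Illustrative sketch: the abstract above (patent 10810031) turns on a guest-writable I/O state that tells the hypervisor whether the assigned device is still processing I/O, so the corresponding memory page is not transmitted during migration while the device may still be writing it. The names below (IOState, should_transmit_page) are invented; this is a sketch of the decision, not the patented method.
      from dataclasses import dataclass

      @dataclass
      class IOState:
          """Guest-writable flag: True while the assigned device is processing I/O."""
          busy: bool
          page_addr: int   # guest memory page this I/O state corresponds to

      def should_transmit_page(state: IOState) -> bool:
          # While the device may still DMA into the page, defer it; it can be
          # picked up in a later migration pass once the I/O state clears.
          return not state.busy

      # Example: the busy page is deferred, the idle page is sent to the destination.
      pages = [IOState(busy=True, page_addr=0x1000), IOState(busy=False, page_addr=0x2000)]
      print([hex(s.page_addr) for s in pages if should_transmit_page(s)])   # ['0x2000']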
  • Patent number: 10768997
    Abstract: A type of a request that is currently being processed at a system is determined. A distribution is selected from a set of processing time distributions, the distribution forming a model that is applicable to the type. A threshold point is computed for the model. A processing time that exceeds a threshold point processing time is regarded as exhibiting tail latency. Tail latency includes a delay in processing of the request due to a reason other than a utilization of a resource of the system exceeding a threshold utilization and a size of a queue in the system exceeding a threshold size. An evaluation is made that the request will experience tail latency during processing at the system and the processing of the request at the system is aborted. The request is offloaded for processing at a peer system in a load-balanced group of systems.
    Type: Grant
    Filed: December 5, 2016
    Date of Patent: September 8, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kanak B. Agarwal, Wenzhi Cui, Wesley M. Felter, Yu Gu, Eric J. Rozner
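    Illustrative sketch: the abstract above (patent 10768997) picks a processing-time model by request type, computes a threshold point on it, and offloads a request predicted to exceed that threshold to a peer in the load-balanced group. The per-type history, the mean-plus-three-sigma threshold, and all names below are illustrative assumptions.
      import statistics

      # Assumed historical processing times (ms) per request type.
      HISTORY = {
          "read":  [2.0, 2.2, 2.1, 2.3, 2.4],
          "write": [4.0, 4.4, 4.1, 4.2, 4.3],
      }

      def threshold_ms(request_type: str) -> float:
          """Threshold point on the per-type processing-time model (mean + 3 sigma here)."""
          times = HISTORY[request_type]
          return statistics.mean(times) + 3 * statistics.stdev(times)

      def dispatch(request_type: str, predicted_ms: float, peers: list) -> str:
          # A request expected to exhibit tail latency is aborted locally and
          # offloaded for processing at a peer system in the load-balanced group.
          if predicted_ms > threshold_ms(request_type):
              return f"offload to {peers[0]}"
          return "process locally"

      print(dispatch("read", predicted_ms=9.0, peers=["peer-1"]))    # offload to peer-1
      print(dispatch("write", predicted_ms=4.5, peers=["peer-1"]))   # process locally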
  • Patent number: 10761883
    Abstract: According to one embodiment, a program executing apparatus includes an event management portion to configure migration timing of the program on the basis of a state of the program being executed at reception of a migration request from the program, and a state transmission portion to migrate state information of the program on the basis of the migration timing configured in the event management portion.
    Type: Grant
    Filed: March 21, 2018
    Date of Patent: September 1, 2020
    Assignee: HITACHI LTD.
    Inventors: Tadashi Takeuchi, Takaaki Haruna
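    Illustrative sketch: the abstract above (patent 10761883) splits the work between an event management portion that decides when to migrate, based on the program's state when the migration request arrives, and a state transmission portion that ships the program's state at that time. The states and the "migrate only when idle or at a checkpoint" rule below are invented to make the split concrete.
      from enum import Enum

      class ProgState(Enum):
          RUNNING = 1
          IDLE = 2
          CHECKPOINT = 3

      class EventManager:
          """Configures migration timing from the program state at the migration request."""
          SAFE_STATES = {ProgState.IDLE, ProgState.CHECKPOINT}

          def migration_timing(self, state: ProgState) -> str:
              return "now" if state in self.SAFE_STATES else "at-next-checkpoint"

      class StateTransmitter:
          """Migrates the program's state information per the configured timing."""
          def transmit(self, timing: str, program_state: dict) -> str:
              return f"state {program_state} sent ({timing})"

      mgr, tx = EventManager(), StateTransmitter()
      timing = mgr.migration_timing(ProgState.RUNNING)
      print(tx.transmit(timing, {"pc": 42, "heap_pages": 128}))   # sent at-next-checkpoint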
  • Patent number: 10754679
    Abstract: A method for handling network I/O device virtualization is provided. The method comprises translating, by a virtual machine monitor, a guest physical address of a virtual machine to a host physical address in response to an I/O request from at least one virtual machine among a plurality of virtual machines, transmitting, by a virtual machine emulator, an instruction request including the translated address information to an extended device driver associated with the virtual machine from which the I/O request is forwarded, inserting, by the extended device driver, the translated address into a transmission queue, and performing a direct memory access for the I/O request using a physical I/O device according to the transmission queue.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: August 25, 2020
    Assignee: TMAX CLOUD CO., LTD.
    Inventors: Seong-Joong Kim, Da-Hyun Jang
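    Illustrative sketch: the abstract above (patent 10754679) is a pipeline: the virtual machine monitor translates a guest physical address to a host physical address, the translated address reaches an extended device driver, the driver places it in a transmission queue, and the physical I/O device performs DMA in queue order. The mapping table and class names below are toy stand-ins for that flow.
      from collections import deque

      GPA_TO_HPA = {0x1000: 0x7F000, 0x2000: 0x7E000}   # assumed per-VM address mapping

      class VirtualMachineMonitor:
          def translate(self, gpa: int) -> int:
              return GPA_TO_HPA[gpa]                     # guest physical -> host physical

      class ExtendedDeviceDriver:
          def __init__(self):
              self.tx_queue = deque()                    # transmission queue of host addresses
          def enqueue(self, hpa: int) -> None:
              self.tx_queue.append(hpa)

      class PhysicalIODevice:
          def dma(self, driver: ExtendedDeviceDriver) -> list:
              # Direct memory access for each queued buffer, in queue order.
              return [driver.tx_queue.popleft() for _ in range(len(driver.tx_queue))]

      vmm, drv, dev = VirtualMachineMonitor(), ExtendedDeviceDriver(), PhysicalIODevice()
      for gpa in (0x1000, 0x2000):                       # I/O requests from a guest VM
          drv.enqueue(vmm.translate(gpa))
      print([hex(h) for h in dev.dma(drv)])              # ['0x7f000', '0x7e000']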
  • Patent number: 10747567
    Abstract: Examples described herein may provide cluster check services which may determine whether at least one infrastructure dependent virtual machine (e.g., at least one domain controller and/or at least one DNS server) is located outside of a cluster. In this manner, the storage utilized by at least one of the infrastructure dependent virtual machines may not be part of the cluster itself, which may reduce and/or avoid failure and/or downtime caused by unavailability of the service(s) provided by the infrastructure dependent virtual machine.
    Type: Grant
    Filed: April 16, 2018
    Date of Patent: August 18, 2020
    Assignee: Nutanix, Inc.
    Inventors: Anupam Chakraborty, Renigunta Vinay Datta
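    Illustrative sketch: the abstract above (patent 10747567) describes a cluster check that flags infrastructure-dependent virtual machines (domain controllers, DNS servers) whose storage lives inside the cluster itself, since the cluster then depends on services it hosts. The field names and roles below are assumed for illustration.
      from dataclasses import dataclass

      @dataclass
      class VirtualMachine:
          name: str
          role: str       # e.g. "domain-controller", "dns", "app"
          cluster: str    # cluster hosting the VM's storage

      INFRA_ROLES = {"domain-controller", "dns"}

      def cluster_check(vms, cluster: str):
          """Warn about infrastructure-dependent VMs whose storage is inside the cluster."""
          return [f"{vm.name}: {vm.role} should be hosted outside cluster {cluster}"
                  for vm in vms if vm.role in INFRA_ROLES and vm.cluster == cluster]

      vms = [VirtualMachine("dc-1", "domain-controller", "cluster-a"),
             VirtualMachine("dns-1", "dns", "cluster-b"),
             VirtualMachine("web-1", "app", "cluster-a")]
      print(cluster_check(vms, "cluster-a"))   # flags dc-1 only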
  • Patent number: 10725806
    Abstract: A volume rehost tool migrates a storage volume from a source virtual server within a distributed storage system to a destination virtual server within the distributed storage system. The volume rehost tool can prevent client access to data on the volume through the source virtual server until the volume has been migrated to the destination virtual server. The tool identifies a set of storage objects associated with the volume, removes configuration information for the set of storage objects, and removes a volume record associated with the source virtual server for the volume. The tool can then create a new volume record associated with the destination virtual server, apply the configuration information for the set of storage objects to the destination virtual server, and allow client access to the data on the volume through the destination virtual server.
    Type: Grant
    Filed: February 16, 2016
    Date of Patent: July 28, 2020
    Assignee: NetApp Inc.
    Inventors: Vani Vully, Avishek Chowdhury, Balaji Ramani
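    Illustrative sketch: the abstract above (patent 10725806) gives the rehost steps in order: block client access via the source virtual server, capture the volume's associated storage-object configuration, remove the source-side volume record, create a new record on the destination virtual server, reapply the configuration there, and re-enable client access. The sketch below only encodes that ordering; the record structures are invented.
      def rehost_volume(volume: str, source: dict, destination: dict) -> None:
          """Move a volume record and its storage-object config from source to destination."""
          source["access"][volume] = False              # block client access via source
          objects = source["objects"].pop(volume)       # identify associated storage objects
          source["volumes"].remove(volume)              # remove source-side volume record

          destination["volumes"].append(volume)         # new record on destination server
          destination["objects"][volume] = objects      # reapply object configuration
          destination["access"][volume] = True          # allow client access via destination

      src = {"volumes": ["vol1"], "objects": {"vol1": ["lun0", "export0"]}, "access": {}}
      dst = {"volumes": [], "objects": {}, "access": {}}
      rehost_volume("vol1", src, dst)
      print(dst)   # vol1 is now owned by the destination virtual server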
  • Patent number: 10691481
    Abstract: A system and method include determining underprovisioning of a guest physical memory of a virtual machine running on a computing node. The node includes hardware resources that are mapped to the guest physical memory by a hypervisor. The hypervisor receives page fault information from the virtual machine based on page faults in the virtual machine. The hypervisor generates a table that includes virtual memory address-process indicator pair entries and corresponding page fault numbers. The hypervisor removes those entries that have a corresponding page fault number that is less than a first threshold value. The hypervisor determines a size of a revolving memory based on the number of remaining entries and a page size of the guest physical memory. If the revolving memory size is less than a second threshold value in relation to the allocated size of the guest physical memory, the hypervisor indicates underprovisioning of the guest physical memory.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: June 23, 2020
    Assignee: NUTANIX, INC.
    Inventors: Miao Cui, Malcolm Crossley, Gaurav Poothia
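    Illustrative sketch: the abstract above (patent 10691481) is procedural: build a table keyed by (virtual address, process) with page-fault counts, drop entries whose count falls below a first threshold, size a "revolving memory" as remaining entries times the guest page size, and compare that size against the allocated guest physical memory using a second threshold. The threshold values below are invented, and the final comparison mirrors the abstract's wording.
      GUEST_PAGE_SIZE = 4096   # bytes, assumed guest page size

      def detect_underprovisioning(page_faults: dict, allocated_bytes: int,
                                   min_faults: int = 8,          # first threshold (assumed)
                                   ratio: float = 0.5) -> bool:  # second threshold (assumed)
          """page_faults maps (virtual address, process) pairs to page-fault counts."""
          # Keep only entries with at least the first-threshold number of page faults.
          hot = {k: v for k, v in page_faults.items() if v >= min_faults}
          revolving_bytes = len(hot) * GUEST_PAGE_SIZE
          # Comparison as stated in the abstract: revolving-memory size against a
          # threshold expressed in relation to the allocated guest physical memory.
          return revolving_bytes < ratio * allocated_bytes

      faults = {(0x1000, "db"): 12, (0x2000, "db"): 3, (0x3000, "web"): 20}
      print(detect_underprovisioning(faults, allocated_bytes=16 * GUEST_PAGE_SIZE))   # True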
  • Patent number: 10691477
    Abstract: An example method for virtual machine (VM) live migration using intelligent order of pages to transfer includes receiving a request to live migrate a VM, transferring memory pages of the VM that are identified as at least one of read-only or executable in a first iteration of VM memory page transfer of the live migration, transferring, as part of a second iteration of the transfer, prioritized memory pages of the VM that have not been transferred as part of the first iteration, and transferring, as part of a third iteration of the transfer, other memory pages of the VM that have not been transferred as part of the first and second iterations and that are not identified as ignored memory pages of the VM, wherein the other memory pages of the VM comprise de-prioritized memory pages of the VM that are transferred last in the third iteration.
    Type: Grant
    Filed: September 27, 2016
    Date of Patent: June 23, 2020
    Assignee: Red Hat Israel, Ltd.
    Inventor: Yaniv Kaul
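    Illustrative sketch: the abstract above (patent 10691477) orders live-migration page transfer into three iterations: read-only or executable pages first, prioritized pages second, then everything else that is not ignored, with de-prioritized pages last within that final pass. The page descriptor below is invented to show just the ordering.
      from dataclasses import dataclass

      @dataclass
      class Page:
          addr: int
          read_only_or_exec: bool = False
          prioritized: bool = False
          deprioritized: bool = False
          ignored: bool = False

      def transfer_order(pages):
          first  = [p for p in pages if p.read_only_or_exec]
          second = [p for p in pages if p.prioritized and p not in first]
          rest   = [p for p in pages if p not in first and p not in second and not p.ignored]
          third  = sorted(rest, key=lambda p: p.deprioritized)   # de-prioritized pages go last
          return [p.addr for p in first + second + third]

      pages = [Page(1, read_only_or_exec=True), Page(2, prioritized=True),
               Page(3), Page(4, deprioritized=True), Page(5, ignored=True)]
      print(transfer_order(pages))   # [1, 2, 3, 4] -- the ignored page 5 is never sent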
  • Patent number: 10684876
    Abstract: Exemplary embodiments described herein relate to a destination path for use with multiple different types of VMs, and techniques for using the destination path to convert, copy, or move data objects stored in one type of VM to another type of VM. The destination path represents a standardized (canonical) way to refer to VM objects from a proprietary VM. A destination location may be specified using the canonical destination path, and the location may be converted into a hypervisor-specific destination location. A source data object may be copied or moved to the destination location using a hypervisor-agnostic path.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: June 16, 2020
    Assignee: NETAPP, INC.
    Inventors: Sung Ryu, Shweta Behere, Jeffrey Teehan
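    Illustrative sketch: the abstract above (patent 10684876) uses a canonical, hypervisor-agnostic destination path for VM objects and converts it into a hypervisor-specific location before copying or moving a source object. The canonical form and the per-hypervisor layouts below are assumptions, not the actual scheme.
      # Assumed canonical form: "vm://<hypervisor>/<datastore>/<vm-name>/<object>"
      LAYOUTS = {
          "esx":    "[{datastore}] {vm}/{obj}",
          "hyperv": r"C:\ClusterStorage\{datastore}\{vm}\{obj}",
      }

      def to_hypervisor_path(canonical: str) -> str:
          scheme, rest = canonical.split("://", 1)
          assert scheme == "vm", "expected a canonical vm:// destination path"
          hypervisor, datastore, vm, obj = rest.split("/", 3)
          return LAYOUTS[hypervisor].format(datastore=datastore, vm=vm, obj=obj)

      print(to_hypervisor_path("vm://esx/ds1/web01/disk0.vmdk"))      # [ds1] web01/disk0.vmdk
      print(to_hypervisor_path("vm://hyperv/ds1/web01/disk0.vhdx"))   # C:\ClusterStorage\ds1\...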
  • Patent number: 10678591
    Abstract: Systems and methods are disclosed for optimizing distribution of resources to data elements, comprising receiving one or more user-defined objectives associated with a group of data elements, wherein at least one of the user-defined objectives includes an objective related to a selected target group; receiving one or more constraints associated with the group of data elements, wherein at least one of the constraints comprises resources apportionable to each data element in the group of data elements; developing a first prediction of a performance of the group of data elements during a time period based on the one or more user-defined objectives and the one or more constraints; and apportioning at least a portion of the resources to each data element in the group of data elements based on the first prediction once the time period has started.
    Type: Grant
    Filed: September 22, 2015
    Date of Patent: June 9, 2020
    Assignee: ADAP.TV, Inc.
    Inventors: Amir Cory, Shubo Liu
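    Illustrative sketch: the abstract above (patent 10678591) apportions a constrained pool of resources across a group of data elements based on a prediction of how the group will perform against user-defined objectives over a time period. The proportional rule and names below are assumptions chosen only to make the apportionment step concrete.
      def apportion(total_resources: float, predicted_value: dict) -> dict:
          """Split the resource pool in proportion to each element's predicted performance."""
          total = sum(predicted_value.values())
          return {elem: total_resources * v / total for elem, v in predicted_value.items()}

      # Predicted contribution of each data element toward the selected target group.
      prediction = {"element-a": 3.0, "element-b": 1.0, "element-c": 1.0}
      print(apportion(100.0, prediction))   # {'element-a': 60.0, 'element-b': 20.0, 'element-c': 20.0}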
  • Patent number: 10678722
    Abstract: Systems, methods, and computer program products to perform an operation comprising processing a first logical partition on a shared processor for the duration of a dispatch cycle, issuing, by a hypervisor, at a predefined time prior to completion of the dispatch cycle, a lightweight hypervisor decrementer (HDEC) interrupt, and responsive to the lightweight HDEC interrupt, initiating an asynchronous hardware operation on the shared processor prior to completion of the dispatch cycle.
    Type: Grant
    Filed: December 6, 2016
    Date of Patent: June 9, 2020
    Assignee: International Business Machines Corporation
    Inventors: Stuart Z. Jacobs, David A. Larson, Michael J. Vance
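    Illustrative sketch: the abstract above (patent 10678722) times a lightweight hypervisor decrementer (HDEC) interrupt to fire a fixed interval before the logical partition's dispatch cycle completes, so an asynchronous hardware operation starts while the partition is still dispatched. The millisecond values and names below are invented to show that timing.
      def plan_dispatch_cycle(cycle_ms: float, lead_ms: float):
          """Timeline of events within one dispatch cycle on a shared processor."""
          return [
              (0.0, "dispatch logical partition"),
              (cycle_ms - lead_ms, "lightweight HDEC interrupt -> start async hardware op"),
              (cycle_ms, "dispatch cycle completes; next partition dispatched"),
          ]

      for t, event in plan_dispatch_cycle(cycle_ms=10.0, lead_ms=0.5):
          print(f"t={t:5.1f} ms  {event}")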
  • Patent number: 10671131
    Abstract: Systems and methods are disclosed for determining a current machine state of a processing device, predicting a future processing task to be performed by the processing device at a future time, and predicting a list of intervening processing tasks to be performed between a first time (e.g., a current time) and the start of the future processing task. The future processing task has an associated initial state. A feed-forward thermal prediction model determines a predicted future machine state at the time for starting the future processing task. Heat mitigation processes can be applied in advance of the start of the future processing task, to meet the future initial machine state for starting the future processing task.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: June 2, 2020
    Assignee: Apple Inc.
    Inventors: Nagarajan Kalyanasundaram, Jay S. Nigen, James S. Ismail, Richard H. Tan
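    Illustrative sketch: the abstract above (patent 10671131) feeds the predicted list of intervening tasks through a thermal model to estimate the machine state at the start of a known future task, and starts heat mitigation early if that state would miss the task's required initial state. The linear heating/cooling constants below are invented; only the feed-forward structure follows the abstract.
      def predict_temperature(current_c: float, intervening_tasks: list) -> float:
          """Feed-forward model: each task adds heat; idle gaps between tasks shed some."""
          temp = current_c
          for task in intervening_tasks:
              temp += task["heat_per_s"] * task["duration_s"]   # heating while the task runs
              temp -= 0.2 * task.get("idle_after_s", 0.0)       # passive cooling afterwards
          return temp

      def mitigation_needed(current_c: float, tasks: list, required_start_c: float) -> bool:
          # Apply heat mitigation in advance if the predicted machine state would
          # exceed the initial state required to start the future processing task.
          return predict_temperature(current_c, tasks) > required_start_c

      tasks = [{"heat_per_s": 0.5, "duration_s": 20, "idle_after_s": 10},
               {"heat_per_s": 0.8, "duration_s": 5}]
      print(mitigation_needed(40.0, tasks, required_start_c=50.0))   # True -> mitigate now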
  • Patent number: 10671418
    Abstract: A server computer system identifies a set of image templates for building a cloud server image and a compatible deployable template for launching the cloud server image in a template repository. The server computer system associates the set of image templates with the compatible deployable template in the template repository. Upon receiving a user selection, the server computer system obtains the set of image templates and the compatible deployable template.
    Type: Grant
    Filed: January 9, 2013
    Date of Patent: June 2, 2020
    Assignee: Red Hat, Inc.
    Inventors: Dan Macpherson, Scott Wayne Seago
  • Patent number: 10649806
    Abstract: A computer system implements a method for elastic resource management for executing a machine learning (ML) program.
    Type: Grant
    Filed: April 11, 2018
    Date of Patent: May 12, 2020
    Assignee: PETUUM, INC.
    Inventors: Aurick Qiao, Qirong Ho, Eric Xing
  • Patent number: 10628221
    Abstract: Example embodiments of the present invention provide a method, a system, and a computer program product for managing tasks in a system. The method comprises running a first task on a system, wherein the first task has a first priority of execution time and the execution of which first task locks a resource on the system, and running a second task on the system, wherein the second task has a second priority of execution time earlier than the first priority of execution time of the first task and the execution of which second task requires the resource on the system locked by the first task. The system then may promote the first task having the later first priority of execution time to a new priority of execution time at least as early as the second priority of execution time of the second task and resume execution of the first task having the later first priority of execution time.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: April 21, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Alexandr Veprinsky, Felix Shvaiger, Anton Kucherov, Arieh Don
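    Illustrative sketch: the abstract above (patent 10628221) resolves a priority inversion: a task scheduled later holds a resource that an earlier-scheduled task needs, so the holder is promoted to an execution time at least as early as the blocked task's and resumed. The task fields below are invented to show only that promotion rule.
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class Task:
          name: str
          exec_time: int                 # scheduled execution time; lower runs earlier
          holds: Optional[str] = None    # resource locked by this task
          needs: Optional[str] = None    # resource this task is waiting on

      def promote_lock_holder(first: Task, second: Task) -> None:
          """Promote `first` if `second` runs earlier but needs the resource `first` holds."""
          if second.needs and second.needs == first.holds and second.exec_time < first.exec_time:
              first.exec_time = second.exec_time   # at least as early as the blocked task
              # ...the system would then resume `first` so it can release the lock.

      t1 = Task("flush", exec_time=50, holds="extent-lock")
      t2 = Task("read", exec_time=10, needs="extent-lock")
      promote_lock_holder(t1, t2)
      print(t1.exec_time)   # 10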
  • Patent number: 10613888
    Abstract: A component of a computing service obtains respective indications of placement policies that contain host selection rules for application execution environments such as guest virtual machines. With respect to a request for a particular application execution environment, a group of applicable placement policies is identified. A candidate pool of hosts is selected using the group of placement policies, and members of the pool are ranked to identify a particular host on which the requested application execution environment is instantiated.
    Type: Grant
    Filed: December 15, 2015
    Date of Patent: April 7, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Joshua Dawie Mentz, Diwakar Gupta, Michael Groenewald, Alan Hadley Goodman, Marnus Freeman
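    Illustrative sketch: the abstract above (patent 10613888) filters hosts through the placement policies applicable to a request, then ranks the surviving candidate pool to pick the host for the new application execution environment. The dictionary hosts, predicate policies, and free-memory ranking below are illustrative assumptions.
      hosts = [
          {"name": "h1", "zone": "a", "free_mem_gb": 12, "gpu": False},
          {"name": "h2", "zone": "b", "free_mem_gb": 48, "gpu": True},
          {"name": "h3", "zone": "a", "free_mem_gb": 64, "gpu": True},
      ]

      # Placement policies applicable to this request: each is a host-selection rule.
      policies = [
          lambda h: h["zone"] == "a",   # e.g. keep the environment in zone "a"
          lambda h: h["gpu"],           # e.g. the guest virtual machine needs a GPU
      ]

      def place(hosts, policies):
          candidates = [h for h in hosts if all(rule(h) for rule in policies)]
          # Rank the candidate pool; here, prefer the host with the most free memory.
          return max(candidates, key=lambda h: h["free_mem_gb"])["name"] if candidates else None

      print(place(hosts, policies))   # h3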
  • Patent number: 10613883
    Abstract: Systems and methods for the management of migrations of virtual machine instances are provided. In response to a request to migrate a virtual machine instance, a migration manager may provide estimates regarding the requested migration before initiating the migration. During the migration process, the migration manager may report status or request instructions regarding the migration based on various determined migration events, thereby facilitating external control of the migration process.
    Type: Grant
    Filed: June 2, 2017
    Date of Patent: April 7, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Diwakar Gupta, Thomas Friebel, Sebastian Martin Biemueller, Bret David Kiraly
  • Patent number: 10592259
    Abstract: Various examples for application management detection are described. In one example, depending upon whether an installation token includes a unique token value, a client device can determine whether an application is managed or unmanaged. Additionally, the client device can determine whether the application is managed or unmanaged based on whether a keychain installation token includes a unique token value, a value of a keychain installation token, and a value of a launched flag for the application. Using the concepts described herein, an unmanaged application can proceed to execute with limited functionality, present a notification that it should be reinstalled by the management service, stop executing, or take other measures.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: March 17, 2020
    Assignee: AIRWATCH LLC
    Inventors: Lucas Chen, Raghuram Rajan, Jonathan Blake Brannon
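    Illustrative sketch: the abstract above (patent 10592259) decides whether an app is managed by checking an installation token for a unique value, falling back to a keychain installation token and a launched flag, and then choosing how the app proceeds. The token values and behaviors below are invented; the branching only loosely mirrors the abstract.
      UNIQUE_TOKEN = "mgmt-7f3a"   # value only the management service would have written

      def classify_app(install_token, keychain_token, launched_before) -> str:
          if install_token == UNIQUE_TOKEN:
              return "managed"
          # Fall back to the keychain installation token plus the launched flag.
          if keychain_token == UNIQUE_TOKEN and not launched_before:
              return "managed"
          return "unmanaged"

      def on_launch(install_token, keychain_token, launched_before) -> str:
          if classify_app(install_token, keychain_token, launched_before) == "managed":
              return "run with full functionality"
          return "run with limited functionality and prompt for reinstall via the management service"

      print(on_launch(None, "mgmt-7f3a", launched_before=False))   # treated as managed
      print(on_launch(None, None, launched_before=True))           # treated as unmanaged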
  • Patent number: 10592282
    Abstract: The technology disclosed relates to providing strong ordering in multi-stage processing of near real-time (NRT) data streams. In particular, it relates to maintaining current batch-stage information for a batch at a grid-scheduler in communication with a grid-coordinator that controls dispatch of batch-units to the physical threads for a batch-stage. This includes operating a computing grid, and queuing data from the NRT data streams as batches in pipelines for processing over multiple stages in the computing grid.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: March 17, 2020
    Assignee: salesforce.com, inc.
    Inventors: Elden Bishop, Jeffrey Chao
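    Illustrative sketch: the abstract above (patent 10592282) maintains current batch-stage information at a grid-scheduler so batch-units dispatched to physical threads keep strong ordering across the stages of the pipeline. The stage names and the "only the next stage may run" rule below are invented to show that bookkeeping.
      STAGES = ["ingest", "transform", "emit"]        # multi-stage pipeline over the grid

      class GridScheduler:
          """Keeps current batch-stage information and enforces in-order stage dispatch."""
          def __init__(self):
              self.stage_of = {}                      # batch id -> index of last completed stage

          def dispatchable(self, batch_id, stage) -> bool:
              done = self.stage_of.get(batch_id, -1)
              return STAGES.index(stage) == done + 1  # only the next stage in order may run

          def record_done(self, batch_id, stage) -> None:
              self.stage_of[batch_id] = STAGES.index(stage)

      sched = GridScheduler()
      for batch in ["b0", "b1"]:                      # NRT data queued as batches in a pipeline
          for stage in STAGES:
              assert sched.dispatchable(batch, stage) # the grid-coordinator dispatches batch-units
              sched.record_done(batch, stage)
      print(sched.stage_of)                           # {'b0': 2, 'b1': 2}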