Patents Issued on September 24, 2019
  • Patent number: 10423417
    Abstract: A fault tolerant multi-threaded processor uses the temporal and/or spatial separation of instructions running in two or more different threads. An instruction is fetched, decoded and executed by each of two or more threads to generate a result for each of the two or more threads. These results are then compared using comparison hardware logic and if there is a mismatch between the results obtained, then an error or event is raised. The comparison is performed on an instruction by instruction basis so that errors are identified (and hence can be resolved) quickly.
    Type: Grant
    Filed: June 17, 2015
    Date of Patent: September 24, 2019
    Assignee: MIPS Tech, LLC
    Inventor: Julian Bailey
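The instruction-by-instruction comparison described in the abstract above can be sketched roughly as follows. This is a toy Python model, not the patented hardware: the executor, the fault model, and all names are illustrative assumptions.

```python
# Toy model of per-instruction lockstep checking: the same instruction is
# executed by two redundant threads and the results are compared before
# the next instruction proceeds. A mismatch raises an error immediately,
# so faults are identified (and can be resolved) quickly.

def lockstep_execute(program, execute, on_error):
    """Run each instruction on two logical threads and compare results."""
    for instr in program:
        result_a = execute(instr)  # thread A
        result_b = execute(instr)  # thread B (temporally separated)
        if result_a != result_b:
            on_error(instr)        # raise an error/event on mismatch
            return False
    return True
```

Because the check happens per instruction, the faulting instruction is known at the moment of the mismatch rather than at the end of the program.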
  • Patent number: 10423418
    Abstract: A method for managing tasks in a computer system comprising a processor and a memory, the method includes performing a first task by the processor, the first task comprising task-relating branch instructions and task-independent branch instructions and executing the branch prediction method, the execution resulting in task-relating branch prediction data in the branch prediction history table. In response to determining that the first task is to be interrupted or terminated, the method includes storing the task-relating branch prediction data of the first task in the task structure of the first task. In response to determining that a second task is to be continued, the method includes reading task-relating branch prediction data of the second task from the task structure of the second task, storing the task-relating branch prediction data of the second task in the branch prediction history table, and ensuring that task-independent branch prediction data is maintained.
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: September 24, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Wolfgang Gellerich, Peter M. Held, Martin Schwidefsky, Chung-Lung K. Shum
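The save/restore flow above can be illustrated with a small sketch. The data structures and names here are assumptions for illustration only: task-relating entries are moved into the task structure on interruption and restored on resumption, while task-independent entries remain in the table.

```python
# Illustrative model of per-task branch prediction data management.

class Task:
    def __init__(self, branch_addrs):
        self.branch_addrs = set(branch_addrs)  # task-relating branch addresses
        self.saved = {}                        # task structure storage

class BranchHistoryTable:
    def __init__(self):
        self.entries = {}  # branch address -> predicted direction

    def save_task_entries(self, task):
        """Move this task's entries out of the table into its task structure."""
        task.saved = {a: d for a, d in self.entries.items()
                      if a in task.branch_addrs}
        for a in task.saved:
            del self.entries[a]

    def restore_task_entries(self, task):
        """Copy saved entries back; task-independent entries are untouched."""
        self.entries.update(task.saved)
```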
  • Patent number: 10423419
    Abstract: A computer-implemented method for predicting a taken branch that ends an instruction stream in a pipelined high frequency microprocessor includes receiving, by a processor, a first instruction within a first instruction stream, the first instruction comprising a first instruction address; searching, by the processor, an index accelerator predictor one time for the stream; determining, by the processor, a prediction for a taken branch ending the branch stream; influencing, by the processor, a metadata prediction engine based on the prediction; observing a plurality of taken branches from the exit accelerator predictor; maintaining frequency information based on the observed taken branches; determining, based on the frequency information, an updated prediction of the observed plurality of taken branches; and updating, by the processor, the index accelerator predictor with the updated prediction.
    Type: Grant
    Filed: June 27, 2016
    Date of Patent: September 24, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: James J. Bonanno, Michael J. Cadigan, Jr., Adam B. Collura, Daniel Lipetz
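The frequency-maintenance idea above can be approximated in a few lines. This is a loose sketch under assumed names, not the patented design: the predictor remembers one predicted exit (taken branch) per instruction stream and replaces it only when another observed exit becomes the most frequent.

```python
# Sketch of a frequency-updated exit predictor for instruction streams.
from collections import Counter

class ExitPredictor:
    def __init__(self):
        self.prediction = {}   # stream address -> predicted exit branch
        self.observed = {}     # stream address -> Counter of observed exits

    def predict(self, stream):
        return self.prediction.get(stream)

    def observe(self, stream, exit_branch):
        counts = self.observed.setdefault(stream, Counter())
        counts[exit_branch] += 1
        best, _ = counts.most_common(1)[0]   # most frequent observed exit
        self.prediction[stream] = best       # update the prediction
```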
  • Patent number: 10423420
    Abstract: A computer-implemented method for predicting a taken branch that ends an instruction stream in a pipelined high frequency microprocessor includes receiving, by a processor, a first instruction within a first instruction stream, the first instruction comprising a first instruction address; searching, by the processor, an index accelerator predictor one time for the stream; determining, by the processor, a prediction for a taken branch ending the branch stream; influencing, by the processor, a metadata prediction engine based on the prediction; observing a plurality of taken branches from the exit accelerator predictor; maintaining frequency information based on the observed taken branches; determining, based on the frequency information, an updated prediction of the observed plurality of taken branches; and updating, by the processor, the index accelerator predictor with the updated prediction.
    Type: Grant
    Filed: March 1, 2017
    Date of Patent: September 24, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: James J. Bonanno, Michael J. Cadigan, Jr., Adam B. Collura, Daniel Lipetz
  • Patent number: 10423421
    Abstract: A processor includes at least one processing core that includes an operation dispatch for dispatching operations from an instruction pipeline, a plurality of arithmetic logic units for executing the operations, a plurality of multiplexers, each of which connects the operation dispatch to a respective arithmetic logic unit, and a controller configured to selectively enable at least one multiplexer to connect the operation dispatch to at least one arithmetic logic unit based on a reliability mode associated with the operation.
    Type: Grant
    Filed: December 28, 2012
    Date of Patent: September 24, 2019
    Assignee: Intel Corporation
    Inventor: Dennis R. Bradford
  • Patent number: 10423422
    Abstract: A processor may include a baseline branch predictor and an empirical branch bias override circuit. The baseline branch predictor may receive a branch instruction associated with a given address identifier, and generate, based on a global branch history, an initial prediction of a branch direction for the instruction. The empirical branch bias override circuit may determine, dependent on a direction of an observed branch direction bias in executed branch instruction instances associated with the address identifier, whether the initial prediction should be overridden, may determine, in response to determining that the initial prediction should be overridden, a final prediction that matches the observed branch direction bias, or may determine, in response to determining that the initial prediction should not be overridden, a final prediction that matches the initial prediction.
    Type: Grant
    Filed: December 19, 2016
    Date of Patent: September 24, 2019
    Assignee: Intel Corporation
    Inventors: Niranjan K. Soundararajan, Sreenivas Subramoney, Rahul Pal, Ragavendra Natarajan
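The override decision described above can be sketched with a per-address bias counter. The counter, the threshold, and all names are assumptions for illustration; the patent does not specify this exact mechanism.

```python
# Hedged sketch: a signed per-address counter overrides the baseline
# (global-history) prediction when executed instances of that branch have
# shown a strong direction bias; otherwise the baseline prediction stands.

class BiasOverride:
    def __init__(self, threshold=4):
        self.bias = {}           # address -> signed counter (+taken / -not)
        self.threshold = threshold

    def record(self, addr, taken):
        self.bias[addr] = self.bias.get(addr, 0) + (1 if taken else -1)

    def final_prediction(self, addr, baseline_taken):
        c = self.bias.get(addr, 0)
        if c >= self.threshold:
            return True          # override: strongly biased taken
        if c <= -self.threshold:
            return False         # override: strongly biased not-taken
        return baseline_taken    # keep the baseline prediction
```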
  • Patent number: 10423423
    Abstract: Within a processor, speculative finishes of load instructions only are tracked in a speculative finish table by maintaining an oldest load instruction of a thread in the speculative finish table after data is loaded for the oldest load instruction, wherein a particular queue index tag assigned to the oldest load instruction by an execution unit points to a particular entry in the speculative finish table, wherein the oldest load instruction is waiting to be finished dependent upon an error check code result. Responsive to a flow unit receiving the particular queue index tag with an indicator that the error check code result for data retrieved for the oldest load instruction is good, finishing the oldest load instruction in the particular entry pointed to by the queue index tag and writing an instruction tag stored in the entry for the oldest load instruction out of the speculative finish table for completion.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: September 24, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Susan E. Eisen, David A. Hrusecky, Christopher M. Mueller, Dung Q. Nguyen, A. James Van Norstrand, Jr., Kenneth L. Ward
  • Patent number: 10423424
    Abstract: Techniques are disclosed for performing an auxiliary operation via a compute engine associated with a host computing device. The method includes determining that the auxiliary operation is directed to the compute engine, and determining that the auxiliary operation is associated with a first context comprising a first set of state parameters. The method further includes determining a first subset of state parameters related to the auxiliary operation based on the first set of state parameters. The method further includes transmitting the first subset of state parameters to the compute engine, and transmitting the auxiliary operation to the compute engine. One advantage of the disclosed technique is that surface area and power consumption are reduced within the processor by utilizing copy engines that have no context switching capability.
    Type: Grant
    Filed: September 28, 2012
    Date of Patent: September 24, 2019
    Assignee: NVIDIA CORPORATION
    Inventors: Lincoln G. Garlick, Philip Browning Johnson, Rafal Zboinski, Jeff Tuckey, Samuel H. Duncan, Peter C. Mills
  • Patent number: 10423425
    Abstract: An information handling system includes a memory, a remote access controller, and a host processor. The memory to store an extensible firmware interface (EFI) system resource table (ESRT) and an ESRT capsule. The remote access controller to detect an insertion of a hot-pluggable device into the information handling system, to retrieve firmware details for the hot-pluggable device, to create a firmware capsule payload based on the firmware details, and to store the firmware capsule payload in the memory. The host processor to operate in a pre-boot mode, and in an operating system runtime mode. The host processor, while in the operating system runtime, to retrieve the firmware capsule payload from the memory, to update a cached operating system ESRT based on the firmware capsule payload, to retrieve updated firmware for the hot-pluggable device, and to create the ESRT capsule based on the updated firmware.
    Type: Grant
    Filed: June 13, 2016
    Date of Patent: September 24, 2019
    Assignee: Dell Products, LP
    Inventors: Sumanth Vidyadhara, Raveendra Babu Madala
  • Patent number: 10423426
    Abstract: Certain aspects of the present disclosure relate to managing an operating system to set up a computer association tool. The technique includes processing an Operating System Deployment (OSD) functionality of a Microsoft System Center Configuration Manager (SCCM) to configure a server, wherein the OSD causes the server to perform a Pre-boot Execution Environment (PXE) boot. The SCCM may be launched for the PXE boot process to be associated with the server, and the SCCM may be configured to associate with a specific OSD Task Sequence. The server boots from a Network Interface Card (NIC) that has an associated MAC address using PXE, wherein the PXE boot process then hands the operation over to the designated OSD Task Sequencer (TS), which handles the configuration process according to at least one variable.
    Type: Grant
    Filed: April 12, 2016
    Date of Patent: September 24, 2019
    Assignee: OPEN INVENTION NETWORK LLC
    Inventors: Colin Lee Feeser, Robert Moore Gilbert, Richard A. Paul, Jr., Robert Keith Cahoon
  • Patent number: 10423427
    Abstract: An integrated computing system configuration system includes a computing system that executes an engine to receive component specifications for each of one or more components supplied by a plurality of suppliers, and receive user input for selecting a subset of the components to be implemented in a customized integrated computing system by generating a base integrated computing system configuration that comprises the component specifications of the subset of the components. The engine may then determine whether at least one component meets a rule using the component specification associated with the at least one component, the rule specifying an architectural standard level to be provided by the at least one component, and when the at least one component does not meet the rule, perform one or more corrective operations such that the rule is met.
    Type: Grant
    Filed: August 12, 2015
    Date of Patent: September 24, 2019
    Assignee: VCE IP Holding Company LLC
    Inventor: Jeffery J. Hayward
  • Patent number: 10423428
    Abstract: A method controls the change in operating system in selected service nodes of a high-performance computer (CHP). The method includes: a step (i) of defining, for the selected service nodes, a reduced version of a new operating system to be installed, a boot kernel, a so-called “reference” tree node software image suitable for the new operating system and including a definition of an instantiation to be established in the service nodes, and an activation module capable of locally installing the reference image in each service node; a step (ii) wherein the defined reference image, boot kernel, activation module, and reduced operating system version are transferred into the service nodes; and a step (iii) wherein the transferred activation module is used in each service node in order to locally install the transferred reference image.
    Type: Grant
    Filed: March 20, 2015
    Date of Patent: September 24, 2019
    Assignee: BULL SAS
    Inventors: Julien Georges, Thierry Iceta, Emmanuel Flacard
  • Patent number: 10423429
    Abstract: Reconfiguring processing groups for cascading data workloads including receiving a request to reconfigure a computing system to execute a workload, wherein the computing system comprises a first processing group and a second processing group, wherein the first processing group comprises a first central processing unit (CPU), a first graphics processing unit (GPU), and a second GPU, and wherein the second processing group comprises a second CPU and a third GPU; reconfiguring the computing system including activating a processor link spanning the first processor group and the second processor group between the second GPU and the third GPU; and executing the workload using the first GPU, second GPU, and third GPU including cascading data, via processor links, from the first CPU to the first GPU, from the first GPU to the second GPU, and from the second GPU to the third GPU.
    Type: Grant
    Filed: January 2, 2018
    Date of Patent: September 24, 2019
    Assignee: International Business Machines Corporation
    Inventors: Mehulkumar J. Patel, Krishna P. Prabhu, Guha Prasad Venkataraman
  • Patent number: 10423430
    Abstract: Embodiments are disclosed for methods and systems for selectively initializing elements of an operating system of a computing device. In some embodiments, a method of selectively loading classes during an initialization of an operating system of a computing device comprises starting a service-loading process, loading critical services via the service-loading process, and launching a human-machine interface. The method may further include launching a last-used application via the human-machine interface, and launching remaining services responsive to requests for use of the remaining services.
    Type: Grant
    Filed: July 9, 2015
    Date of Patent: September 24, 2019
    Assignee: Harman International Industries, Incorporated
    Inventors: Prakash Raman, Pranjal Chakraborty, Eugine Varghese
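The boot ordering described above can be sketched in miniature. Service names and the `LazyBoot` structure are assumptions for illustration: critical services load eagerly, and remaining services load lazily on first request.

```python
# Minimal sketch of selective initialization: critical services load first;
# remaining services are deferred until a request for them arrives.

class LazyBoot:
    def __init__(self, critical, remaining):
        self.loaded = []
        self.remaining = dict(remaining)   # name -> service factory
        for name, factory in critical:     # load critical services eagerly
            self.loaded.append(name)
            factory()

    def request(self, name):
        """Load a remaining service only when it is first requested."""
        if name in self.remaining:
            self.remaining.pop(name)()
            self.loaded.append(name)
        return name in self.loaded
```

The benefit is a faster path to the human-machine interface, since only the critical subset stands between power-on and the first screen.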
  • Patent number: 10423431
    Abstract: Computing systems, computer-readable media, and methods may include determining, for a hydrocarbon field, one or more formation properties and one or more fluid properties and determining, for the hydrocarbon field, a location of one or more wells and a configuration of the one or more wells. The method may include dividing the hydrocarbon field into one or more grid cells. The method may include simulating fluid flow in at least one of the one or more grid cells based on a multi-point well connection process. The multi-point well connection process may determine flow conditions between the one or more wells and the at least one of the one or more grid cells. The method may include determining one or more parameters of the one or more wells based at least in part on the fluid flow.
    Type: Grant
    Filed: July 31, 2015
    Date of Patent: September 24, 2019
    Assignee: Schlumberger Technology Corporation
    Inventor: Radek Pecher
  • Patent number: 10423432
    Abstract: A dynamic cloud stack testing system comprises a cloud network with cloud components and a cloud stack server coupled to the network. The server includes an interface, a memory, a cloud stack configuration engine, and a cloud stack testing engine. The interface receives a cloud stack request from a user device that includes functionality parameters. The memory stores historic cloud stack combinations. The cloud stack configuration engine identifies cloud components associated with the functionality parameters and determines a cloud stack configuration that incorporates them. It determines whether the configuration is a unique cloud stack configuration by comparing it to the plurality of historic cloud stack configurations. The cloud stack testing engine, in response to determining that the cloud stack configuration is unique, determines a cloud stack configuration test. The cloud stack testing engine executes the test, and stores results and the associated cloud stack configuration in the memory.
    Type: Grant
    Filed: August 23, 2017
    Date of Patent: September 24, 2019
    Assignee: Bank of America Corporation
    Inventors: Sandeep Kumar Chauhan, Sasidhar Purushothaman
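The uniqueness check described above amounts to comparing a requested component set against stored historic combinations. A minimal sketch, with all names assumed:

```python
# Illustrative uniqueness test: a requested cloud stack is only tested if
# its component combination has not been seen in any historic configuration.

def is_unique(requested_components, historic_configs):
    """A configuration is unique if no historic config has the same set."""
    requested = frozenset(requested_components)
    return all(frozenset(h) != requested for h in historic_configs)
```

Using frozensets makes the comparison order-insensitive, so `["cache", "db"]` and `["db", "cache"]` count as the same combination.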
  • Patent number: 10423433
    Abstract: Systems and methods for storing and managing pools of network addresses. An example method may comprise: receiving, by a processing device, a request for a network address to be associated with a network interface of a machine, wherein the machine is represented by one of: a virtual machine or a computer system; identifying a hierarchy of groups that include the machine; searching the hierarchy of groups to identify a group having an associated pool of network addresses; and selecting a network address from the pool of network addresses.
    Type: Grant
    Filed: February 23, 2015
    Date of Patent: September 24, 2019
    Assignee: Red Hat Israel, Inc.
    Inventors: Michael Kolesnik, Mordechay Asayag
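The hierarchy search above can be sketched as a walk over the machine's group chain. The chain ordering and pool representation are assumptions for illustration:

```python
# Sketch of hierarchical address-pool selection: walk the machine's groups
# from most to least specific and allocate from the first group whose
# associated pool still has free addresses.

def allocate_address(group_chain, pools):
    """group_chain: machine's groups, most specific first.
    pools: group name -> list of free network addresses."""
    for group in group_chain:
        pool = pools.get(group)
        if pool:                  # group has a pool with addresses left
            return pool.pop(0)
    return None                   # no pool found anywhere in the hierarchy
```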
  • Patent number: 10423434
    Abstract: A computer system authenticates a logical port for a virtual machine. A logical network maintains logical network data for a logical switch having the logical port. A virtual switch identifies a logical port authentication request for the virtual machine and transfers the logical port authentication request. A logical port authenticator receives the logical port authentication request and transfers the logical port authentication request for delivery to an authentication database. The logical port authenticator receives a logical port authentication response transferred by the authentication database that grants the logical port authentication request for the virtual machine and transfers authorization data for the logical port. The virtual switch transfers user data for the virtual machine when the virtual machine uses the logical port responsive to the authorization data.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: September 24, 2019
    Assignee: Nicira, Inc.
    Inventors: Maheedhar Nallapareddy, Akshay Katrekar
  • Patent number: 10423435
    Abstract: Disclosed are examples of memory allocation and reallocation for virtual machines operating in a shared memory configuration, including creating a swap file for at least one virtual machine. One example method may include allocating guest physical memory to the swap file to permit the at least one virtual machine to access host physical memory previously occupied by the guest physical memory. The example method may also include determining whether an amount of available host physical memory is below a minimum acceptable level threshold, and if so then freeing at least one page of host physical memory and intercepting a memory access attempt performed by the at least one virtual machine and allocating host physical memory to the virtual machine responsive to the memory access attempt.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: September 24, 2019
    Assignee: OPEN INVENTION NETWORK LLC
    Inventors: Farid Khafizov, Andrey Mokhov
  • Patent number: 10423436
    Abstract: Techniques for managing energy use of a computing deployment are provided. In one embodiment, a computer system can establish a performance model for one or more components of the computing deployment, where the performance model models a relationship between one or more tunable parameters of the one or more components and an end-to-end performance metric, and where the end-to-end performance metric reflects user-observable performance of a service provided by the computing deployment. The computer system can further execute an algorithm to determine values for the one or more tunable parameters that minimize power consumption of the one or more components, where the algorithm guarantees that the determined values will not cause the end-to-end performance metric, as calculated by the performance model, to cross a predefined threshold. The computer system can then enforce the determined values by applying changes to the one or more components.
    Type: Grant
    Filed: December 11, 2014
    Date of Patent: September 24, 2019
    Assignee: VMware Inc.
    Inventors: Xing Fu, Tariq Magdon-Ismail
  • Patent number: 10423437
    Abstract: Implementations of the disclosure provide for hot-plugging of virtual functions in a virtualized environment. In one implementation, a computer system comprising a memory to store parameters of virtual functions and a processing device, operatively coupled to the memory is provided. A determination that a virtual machine has no available virtual functions associated with a specified network. A logical network device associated with the specified network is identified. A determination is made that a number of virtual functions associated with the logical network device is below a threshold number of virtual functions. In response, a new virtual function associated with the logical network device is created. Thereupon, a virtual device of the virtual machine is associated with the new virtual function.
    Type: Grant
    Filed: August 17, 2016
    Date of Patent: September 24, 2019
    Assignee: Red Hat Israel, Ltd.
    Inventors: Alona Kaplan, Michael Kolesnik
  • Patent number: 10423438
    Abstract: In a multi-tenant environment, separate virtual machines can be used for configuring and operating different subsets of programmable integrated circuits, such as a Field Programmable Gate Array (FPGA). The programmable integrated circuits can communicate directly with each other within a subset, but cannot communicate between subsets. Generally, all of the subsets of programmable ICs are within a same host server computer within the multi-tenant environment, and are sandboxed or otherwise isolated from each other so that multiple customers can share the resources of the host server computer without knowledge or interference with other customers.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: September 24, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Islam Mohamed Hatem Abdulfattah Mohamed Atta, Mark Bradley Davis, Robert Michael Johnson, Christopher Joseph Pettey, Asif Khan, Nafea Bshara
  • Patent number: 10423439
    Abstract: Disclosed are examples of observing and measuring virtual machine (VM) activity in a VM communication system environment. According to one example embodiment, an example operation may include transmitting a request from a physical host device to monitor at least one virtual machine among various virtual machines currently operating in a virtual communication system. Additional operations may include determining which of the virtual machines are actively accessing a predetermined virtual application, such as a virtual storage application. The operations may also include receiving present operating activity results regarding the virtual machines responsive to the transmitted request.
    Type: Grant
    Filed: October 2, 2018
    Date of Patent: September 24, 2019
    Assignee: OPEN INVENTION NETWORK LLC
    Inventor: John Michael Suit
  • Patent number: 10423440
    Abstract: Provided are techniques for an operating system (OS) to be modified on a running system such that running programs, including system services, do not have to be stopped and restarted for the modification to take effect. The techniques include detecting, by a processing thread, when the processing thread has entered a shared library; in response to the detecting, setting a thread flag corresponding to the thread in an operating system (OS); detecting an OS flag, set by the OS, indicating that the OS is updating the shared library; in response to detecting the OS flag, suspending processing by the processing thread and transferring control from the thread to the OS; resuming processing by the processing thread in response to detecting that the OS has completed the updating; and executing the shared library in response to the resuming.
    Type: Grant
    Filed: November 22, 2016
    Date of Patent: September 24, 2019
    Assignee: International Business Machines Corporation
    Inventor: Stephen B. Peckham
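The flag handshake above can be modeled in user space with ordinary threading primitives. This is a rough model, not the kernel mechanism: `ready` cleared stands in for the OS flag (update in progress), and `inside` stands in for the per-thread flags.

```python
# Toy model of live shared-library update coordination. All names assumed.
import threading

class SharedLibrary:
    def __init__(self):
        self.lock = threading.Lock()
        self.inside = 0                    # threads currently in the library
        self.ready = threading.Event()     # cleared = OS flag set (updating)
        self.ready.set()
        self.idle = threading.Condition(self.lock)
        self.version = 1

    def call(self, fn):
        self.ready.wait()                  # suspend while the OS is updating
        with self.lock:
            self.inside += 1               # thread flag: we are in the library
        try:
            return fn(self.version)
        finally:
            with self.idle:
                self.inside -= 1
                self.idle.notify_all()

    def update(self, new_version):
        self.ready.clear()                 # OS flag: updating the library
        with self.idle:
            while self.inside:             # wait for threads to drain out
                self.idle.wait()
            self.version = new_version     # patch while no thread is inside
        self.ready.set()                   # resumed threads see the new code
```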
  • Patent number: 10423441
    Abstract: Embodiments generally relate to a computer-implemented method and system of automatically generating a task on a first messaging application at a first client device associated with a first user. The method includes: parsing, by the first client device, message content from an active field on the first messaging application of the first client device to identify at least one predefined character in the message content; and receiving, in relation to the message content, a selection of a user name associated with a second user. Task metadata may be automatically generated based on at least the first user, second user and a portion of the message content. The task metadata may then be attached, by the first client device, to the message content. First task data based on the task metadata may then be automatically generated at the first client device.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: September 24, 2019
    Inventor: James Cattermole
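The parsing step above can be rendered as a small sketch. The predefined character (`!` here), the `@` mention syntax, and the metadata fields are all assumptions for illustration:

```python
# Hypothetical sketch: scan the active message for a predefined character
# and an @-mention of a second user, then build task metadata from the two
# users and the message content.
import re

def generate_task(message, first_user):
    if "!" not in message:               # predefined character not present
        return None
    mention = re.search(r"@(\w+)", message)
    if not mention:                      # no second user selected
        return None
    return {
        "owner": first_user,
        "assignee": mention.group(1),
        "content": message.strip(),
    }
```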
  • Patent number: 10423442
    Abstract: One embodiment provides a method, comprising: receiving a plurality of jobs for processing, wherein each of the plurality of jobs comprises a plurality of tasks and wherein at least one of the plurality of jobs is dependent on another of the plurality of jobs; receiving task dependencies between tasks of the at least one of the plurality of jobs and tasks of the another of the plurality of jobs, wherein the task dependencies identify dependent tasks from the tasks of the at least one of the plurality of jobs and dependee tasks from the tasks of the another of the plurality of jobs; scheduling the processing of the dependent tasks as being based upon only the completed processing of the dependee tasks; and performing job processing of the dependent tasks after processing of the dependee tasks irrespective of the overall job processing status of the another of the plurality of jobs.
    Type: Grant
    Filed: May 25, 2017
    Date of Patent: September 24, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Himanshu Gupta, Nitin Gupta, Sameep Mehta
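The key point above is that scheduling hinges on dependee *tasks*, not on whole jobs. A minimal sketch of such task-level ordering, with illustrative names:

```python
# Sketch of task-level dependency scheduling: a dependent task becomes
# runnable as soon as its dependee tasks complete, irrespective of the
# overall status of the dependee's enclosing job.

def schedule(tasks, deps):
    """tasks: list of task ids. deps: task -> set of its dependee tasks.
    Returns an order in which every dependee precedes its dependents."""
    order, done = [], set()
    pending = list(tasks)
    while pending:
        progressed = False
        for t in list(pending):
            if deps.get(t, set()) <= done:   # all dependees finished
                order.append(t)
                done.add(t)
                pending.remove(t)
                progressed = True
        if not progressed:
            raise ValueError("cyclic task dependencies")
    return order
```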
  • Patent number: 10423443
    Abstract: Described herein are systems and methods for implementing a task assignment protocol. In one or more embodiments, a task management system receives task data and resource data. Responsive to the receipt, the task management system receives input for selection of a resource. A candidate subset of tasks that match the properties of the resource is then identified. Upon receipt of selection of the candidate task, a task icon updates. The task icon may update to reflect the resource identifier associated with the assigned resource. Additionally, or alternatively, a resource icon is updated to reflect the availability of the resource. The protocol repeats until one or more resources are allocated to the tasks. A resource chart is additionally displayed to aid in the evaluation of resource availability.
    Type: Grant
    Filed: April 3, 2017
    Date of Patent: September 24, 2019
    Assignee: Oracle International Corporation
    Inventors: Sanjay Kumar Bhandari, Satya Anur, Tianyi Wang, Vijay Manguluru, Andrew Watanabe, Laura Akel
  • Patent number: 10423444
    Abstract: A migration system includes a memory, a physical processor, first and second hypervisors, first and second virtual machines, and first and second networking devices. The first hypervisor is located at a migration source location and the second hypervisor is located at a migration destination location. The first virtual machine includes a guest OS which includes a first agent. The second virtual machine includes the guest OS which includes a second agent. The first hypervisor is configured to request the guest OS executing on the first hypervisor to copy a configuration of the first networking device and to store the configuration in a place-holder networking device. The second hypervisor is configured to start the second virtual machine at a destination location, request the guest OS executing on the second virtual machine to copy the configuration from the place-holder networking device and to store the configuration in the second networking device.
    Type: Grant
    Filed: August 9, 2016
    Date of Patent: September 24, 2019
    Assignee: Red Hat Israel, Ltd.
    Inventor: Michael Tsirkin
  • Patent number: 10423445
    Abstract: A platform that provides a way to automatically compose and execute even complex workflows without writing code is described. A set of pre-built functional building blocks can be provided. The building blocks perform data transformation and machine learning functions. The functional blocks have well known plug types. The building blocks can be composed to build complex compositions. Input and output files are converted to a standard data type so that modules are pluggable.
    Type: Grant
    Filed: August 31, 2016
    Date of Patent: September 24, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Debi Mishra, Parry Husbands, Sudarshan Raghunathan, Andy Linfoot, Damon Hachmeister
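The pluggability idea above can be illustrated in a few lines: because every block consumes and produces the same standard data type, composition is just function chaining. The standard type (a list of records) and the block names are made up for this sketch:

```python
# Illustrative composition of pre-built building blocks sharing one
# standard data type, which is what makes the blocks pluggable.

def compose(*blocks):
    """Chain building blocks into one pipeline over the standard type."""
    def pipeline(records):
        for block in blocks:
            records = block(records)
        return records
    return pipeline

# Two hypothetical blocks operating on the standard record type:
drop_nulls = lambda rs: [r for r in rs if r.get("value") is not None]
scale = lambda rs: [{**r, "value": r["value"] * 2} for r in rs]
```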
  • Patent number: 10423446
    Abstract: Data processing apparatus comprises one or more interconnected processing elements each configured to execute processing instructions of a program task; coherent memory circuitry storing one or more copies of data accessible by each of the processing elements, so that data written to a memory address in the coherent memory circuitry by one processing element is consistent with data read from that memory address in the coherent memory circuitry by another of the processing elements; the coherent memory circuitry comprising a memory region to store data, accessible by the processing elements, defining one or more attributes of a program task and context data associated with a most recent instance of execution of that program task; the apparatus comprising scheduling circuitry to schedule execution of a task by a processing element in response to the one or more attributes defined by data stored in the memory region corresponding to that task; and each processing element which executes a program task is configur
    Type: Grant
    Filed: November 28, 2016
    Date of Patent: September 24, 2019
    Assignee: ARM Limited
    Inventors: Curtis Glenn Dunham, Jonathan Curtis Beard, Roxana Rusitoru
  • Patent number: 10423447
    Abstract: Methods for scheduling operations in a scheduler hierarchy of a storage system. One method includes scheduling a first IO having a first cost at a first flow scheduler of a first flow configured to schedule IOs accessing a volume as executed on a first core processor. A global cost is updated with the first cost, wherein the global cost is shared by a plurality of flows of a plurality of core processors. An intervening cost is determined of at least one IO possibly scheduled before the first set of IOs by one or more flow schedulers of one or more flows configured to schedule IOs accessing the volume as executed on the plurality of core processors. A current cost is updated based on the first cost and the intervening cost. IOs and MBPS limits are set independently for the volume, each controlling scheduling through a corresponding accumulating current cost.
    Type: Grant
    Filed: January 20, 2017
    Date of Patent: September 24, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Sourabh Yerfule, Gurunatha Karaje, Mandar Samant, Sagar Trehan
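A rough sketch of the shared-cost accounting described above, assuming a single shared counter per volume; the class names and cost units are invented for illustration.

```python
# Hypothetical sketch: each flow's "current cost" is its own IO cost plus
# the intervening cost that flows on other cores charged against the same
# volume since this flow last scheduled.

class VolumeCost:
    """Global cost shared by all flows that access one volume."""
    def __init__(self):
        self.global_cost = 0

class FlowScheduler:
    def __init__(self, volume: VolumeCost):
        self.volume = volume
        self.current_cost = 0
        self.last_seen_global = 0

    def schedule_io(self, io_cost: int) -> int:
        # Intervening cost: what other flows charged since our last IO.
        intervening = self.volume.global_cost - self.last_seen_global
        self.current_cost += io_cost + intervening
        self.volume.global_cost += io_cost
        self.last_seen_global = self.volume.global_cost
        return self.current_cost

vol = VolumeCost()
a, b = FlowScheduler(vol), FlowScheduler(vol)
a.schedule_io(4)        # current_cost = 4, global = 4
b.schedule_io(2)        # sees 4 intervening -> current_cost = 6, global = 6
print(a.schedule_io(1)) # sees 2 intervening -> 4 + 1 + 2 = 7
```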
  • Patent number: 10423448
    Abstract: Described herein are techniques and systems for onboarding a service from client-managed computing infrastructure to network computing infrastructure. As part of the onboarding, a database that stores onboarding information is accessed and a set of tasks is identified. A state diagram is generated based on the onboarding information. The techniques and systems are configured to calculate, within the state diagram, a task execution path that is associated with a highest probability of success for moving the client organization from a current environment associated with the client-managed computing infrastructure to a target environment associated with the network computing infrastructure. The task execution path can be used to identify and provide subsets of tasks as part of an autonomously guided onboarding process. The task execution path can be re-calculated based on a determination that an individual task has not been completed within an expected amount of time to complete the individual task.
    Type: Grant
    Filed: October 6, 2017
    Date of Patent: September 24, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Warren Johnson, Sean Dastouri, Ian Liu
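The highest-probability path calculation can be illustrated with a toy state graph; the states, tasks, and success probabilities below are invented, and scoring a path by the product of its step probabilities is an assumption about how "probability of success" composes.

```python
# Sketch: find the task execution path from the current environment to the
# target environment with the highest probability of success. The graph is
# purely illustrative.

# edges: state -> list of (next_state, task, success_probability)
GRAPH = {
    "current": [("staged", "export", 0.9), ("staged", "sync", 0.7)],
    "staged": [("target", "cutover", 0.8)],
    "target": [],
}

def best_path(state, goal):
    if state == goal:
        return 1.0, []
    best = (0.0, None)
    for nxt, task, p in GRAPH[state]:
        prob, path = best_path(nxt, goal)
        if p * prob > best[0]:
            best = (p * prob, [task] + path)
    return best

prob, path = best_path("current", "target")
print(path)  # -> ['export', 'cutover']
```

Re-calculating after a stalled task amounts to rerunning `best_path` on the graph with the stalled edge's probability lowered.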
  • Patent number: 10423449
    Abstract: Method of allocating tasks in a computing environment including: receiving a software application having tasks for processing; splitting the software application into the tasks; selecting a task for processing in a first computing environment without encryption, a second computing environment with homomorphic encryption, or a third computing environment without encryption based on the following algorithm: analyzing the tasks for the presence of a security marker indicating a security level of the tasks; when there is no security marker, selecting the task for processing in the least costly of the first computing environment without encryption or the third computing environment without encryption; and when the security marker is medium or high and the processing of the task involves any computation, selecting the task for processing in the least costly of the second computing environment with homomorphic encryption or the third computing environment.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: September 24, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Gopal K. Bhageria, Pooja Malik, Sathya Santhar, Vikram Yadav
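A minimal sketch of the selection rule above, assuming invented environment names and a static cost table:

```python
# The marker values, cost figures, and environment names are assumptions
# for illustration, not from the patent.

COSTS = {"plain_a": 1.0, "homomorphic": 8.0, "plain_b": 1.5}

def select_environment(marker, involves_computation):
    if marker is None:
        # No marker: cheapest of the two unencrypted environments.
        return min(("plain_a", "plain_b"), key=COSTS.get)
    if marker in ("medium", "high") and involves_computation:
        # Sensitive computation: cheapest of homomorphic vs. the third env.
        return min(("homomorphic", "plain_b"), key=COSTS.get)
    return "plain_a"  # fallback for the remaining cases (an assumption)

print(select_environment(None, False))   # -> plain_a
print(select_environment("high", True))  # -> plain_b (cheaper here)
```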
  • Patent number: 10423450
    Abstract: One embodiment provides a system for scheduling I/O resources of a virtual machine. During operation, in response to receiving a plurality of I/O requests, the system identifies a plurality of target virtual disks to which the I/O requests are to be sent, wherein a virtual disk corresponds to a previously created I/O queue. The system assigns a respective I/O request to the corresponding I/O queue for an identified target virtual disk. The system schedules I/O resources to be used by the respective I/O request based on a scheduling parameter that corresponds to the identified target virtual disk.
    Type: Grant
    Filed: December 6, 2017
    Date of Patent: September 24, 2019
    Assignee: Alibaba Group Holding Limited
    Inventor: Chao Zhang
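One way to picture the per-virtual-disk I/O queues is a weighted round of service per queue; using the weight as the scheduling parameter is an assumption, and all names are invented.

```python
# Sketch: each virtual disk has a previously created I/O queue, requests
# are routed to the queue of their target disk, and I/O resources are
# scheduled per-disk by a scheduling parameter (here, a weight).
from collections import deque

class VMIOScheduler:
    def __init__(self, weights):
        self.queues = {disk: deque() for disk in weights}  # per-disk queues
        self.weights = weights  # scheduling parameter per virtual disk

    def submit(self, request, target_disk):
        # Route the request to the queue of its identified target disk.
        self.queues[target_disk].append(request)

    def next_batch(self):
        # Serve each disk in proportion to its scheduling parameter.
        batch = []
        for disk, q in self.queues.items():
            for _ in range(min(self.weights[disk], len(q))):
                batch.append(q.popleft())
        return batch

s = VMIOScheduler({"diskA": 2, "diskB": 1})
for r in ["a1", "a2", "a3", "b1", "b2"]:
    s.submit(r, "diskA" if r.startswith("a") else "diskB")
print(s.next_batch())  # -> ['a1', 'a2', 'b1']
```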
  • Patent number: 10423451
    Abstract: Computerized methods, computer systems, and computer-readable media for governing how virtual processors are scheduled to particular logical processors are provided. A scheduler is employed to balance a load imposed by virtual machines, each having a plurality of virtual processors, across various logical processors (comprising a physical machine) that are running threads in parallel. The threads are issued by the virtual processors and often cause spin waits that inefficiently consume capacity of the logical processors that are executing the threads. Upon detecting a spin-wait state of the logical processor(s), the scheduler will opportunistically grant time-slice extensions to virtual processors that are running a critical section of code, thus mitigating performance loss on the front end.
    Type: Grant
    Filed: July 20, 2015
    Date of Patent: September 24, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Thomas Fahrig, David Cutler
  • Patent number: 10423452
    Abstract: A method, executed by a computer, for allocating resources to virtual machines includes monitoring resource usage for a selected resource for one or more capped virtual machines and one or more uncapped virtual machines, and responsive to detecting a first resource violation, the first resource violation corresponding to resource usage for a capped virtual machine and a second resource violation, the second resource violation corresponding to resource usage for an uncapped virtual machine, adjusting allocation of the selected resource for each of the one or more capped virtual machines previous to adjusting allocation of the selected resource for any of the uncapped virtual machines. A computer program product and computer system corresponding to the above method are also disclosed herein.
    Type: Grant
    Filed: June 22, 2017
    Date of Patent: September 24, 2019
    Assignee: International Business Machines Corporation
    Inventors: Joseph W. Cropper, Charles J. Volzka, Sadek Jbara
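The ordering rule above (capped VMs adjusted before any uncapped VM) reduces to a stable sort of the detected violations; the data below is illustrative.

```python
# Sketch: given resource violations from both capped and uncapped virtual
# machines, adjust allocations for all capped VMs first. Names are invented.

def adjustment_order(violations):
    """violations: list of (vm_name, is_capped); capped VMs come first."""
    return sorted(violations, key=lambda v: not v[1])

print(adjustment_order([("u1", False), ("c1", True), ("c2", True)]))
# -> [('c1', True), ('c2', True), ('u1', False)]
```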
  • Patent number: 10423453
    Abstract: Systems and methods are described for performing distributed computations over a data set potentially owned or controlled by many stakeholders, each of whom may set their own policies governing access to and/or other use of their individual data.
    Type: Grant
    Filed: November 1, 2016
    Date of Patent: September 24, 2019
    Assignee: Intertrust Technologies Corporation
    Inventors: Jarl Nilsson, William Knox Carey
  • Patent number: 10423454
    Abstract: Systems, methods, and software described herein facilitate the allocation of large scale processing jobs to host computing systems. In one example, a method of operating an administration node to allocate processes to a plurality of host computing systems includes identifying a job process for a large scale processing environment (LSPE), and identifying a data repository associated with the job process. The method further includes obtaining data retrieval performance information related to the data repository and the host systems in the LSPE. The method also provides identifying a host system in the host systems for the job process based on the data retrieval performance information, and initiating a virtual node for the job process on the identified host system.
    Type: Grant
    Filed: March 10, 2015
    Date of Patent: September 24, 2019
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventors: Thomas A. Phelan, Michael J. Moretti, Joel Baxter, Gunaseelan Lakshminarayanan, Kumar Sreekanti
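Host selection by data retrieval performance can be sketched as picking the best-performing host against the job's repository and starting the virtual node there; the throughput figures and names are invented.

```python
# Sketch: choose the host system with the best data retrieval performance
# against the job's data repository, then initiate a virtual node on it.

def pick_host(perf):
    """perf: {host: retrieval throughput (MB/s) to the job's repository}."""
    return max(perf, key=perf.get)

perf = {"host1": 120, "host2": 480, "host3": 300}
print(pick_host(perf))  # -> host2
```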
  • Patent number: 10423455
    Abstract: A system receives a request to deploy a virtual machine on one of a plurality of nodes running a plurality of virtual machines in a cloud computing system. The system receives a predicted lifetime for the virtual machine and an indication of an average lifetime of virtual machines running on each of the plurality of nodes. The system allocates the virtual machine to a first node when a first policy of collocating virtual machines with similar lifetimes on a node is adopted and the predicted lifetime is within a predetermined range of the average lifetime of virtual machines running on the first node. The system allocates the virtual machine to a second node when a second policy of collocating virtual machines with dissimilar lifetimes on a node is adopted and the predicted lifetime is not within the predetermined range of the average lifetime of virtual machines running on the second node.
    Type: Grant
    Filed: February 3, 2017
    Date of Patent: September 24, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Ricardo Bianchini, Eli Cortez, Marcus Felipe Fontoura, Anand Bonde
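Both placement policies can be sketched with a similarity window around each node's average VM lifetime; the window size and node data are assumptions for illustration.

```python
# Sketch of the two collocation policies: "similar" places a VM on a node
# whose average lifetime is within a range of the VM's predicted lifetime;
# "dissimilar" places it on a node outside that range.

def place(predicted, nodes, policy, rng=60):
    """nodes: {name: avg lifetime (minutes)}; rng: similarity window."""
    for name, avg in nodes.items():
        similar = abs(predicted - avg) <= rng
        if (policy == "collocate_similar" and similar) or \
           (policy == "collocate_dissimilar" and not similar):
            return name
    return None  # no node satisfies the policy; caller falls back

nodes = {"n1": 100, "n2": 500}
print(place(120, nodes, "collocate_similar"))    # -> n1
print(place(120, nodes, "collocate_dissimilar")) # -> n2
```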
  • Patent number: 10423456
    Abstract: Apparatuses and methods related to dynamic adjustment of thresholds are disclosed. A method for dynamic adjustment of thresholds may include obtaining costs related to computing resources that have been used by a user. A resource utilization threshold may be dynamically adjusted based on at least one parameter. The at least one parameter may comprise the costs related to the computing resources that have been used by the user. A utilization rate of the computing resources by the user may be compared to the adjusted threshold.
    Type: Grant
    Filed: July 31, 2014
    Date of Patent: September 24, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Jing Dong, Phyllis Gallagher
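One possible reading of the dynamic threshold is a limit that tightens as the user's accumulated cost approaches a budget; the linear scaling rule here is purely an assumption for illustration.

```python
# Sketch: the allowed utilization threshold shrinks as spend approaches a
# budget, and the user's current utilization is compared to it.

def adjusted_threshold(base, accumulated_cost, budget):
    # Scale the allowed utilization by the fraction of budget remaining.
    remaining = max(budget - accumulated_cost, 0) / budget
    return base * remaining

def over_threshold(utilization, base, cost, budget):
    return utilization > adjusted_threshold(base, cost, budget)

print(round(adjusted_threshold(0.8, 250, 1000), 3))  # -> 0.6
```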
  • Patent number: 10423457
    Abstract: Outcome-based adjustment of a software-defined environment (SDE) includes determining a business operation and a corresponding set of tasks to be performed in a software defined environment (SDE), establishing a first resource configuration to perform the corresponding set of tasks to achieve a business outcome target, determining a first resource cost for performing the corresponding set of tasks, assigning a priority level to tasks within the corresponding set of tasks, determining a set of performance indicators corresponding to a task having a first priority level, monitoring the SDE to identify a triggering event, responsive to identifying the triggering event, establishing a second resource configuration based, at least in part, on a performance level of a performance indicator in the set of performance indicators, the second resource configuration addressing the triggering event, and determining a second resource cost for performing the corresponding set of tasks according to the second resource config
    Type: Grant
    Filed: May 12, 2017
    Date of Patent: September 24, 2019
    Assignee: International Business Machines Corporation
    Inventors: Brad L. Brech, Scott W. Crowder, Hubertus Franke, Jeffrey A. Frey, Nagui Halim, Matt R. Hogstrom, Chung-Sheng Li, David B. Lindquist, Stefan Pappe, Pratap C. Pattnaik, Balachandar Rajaraman, Radha P. Ratnaparkhi, Rodney A. Smith, Michael D. Williams
  • Patent number: 10423458
    Abstract: A parallel processing system creates a list when determining start and end times of processings for nodes and one or more of the nodes used by the processings, the list indicating an order of executing the processings and the number and positions of the nodes used by the processings on coordinate axes, with nodes that are adjacent to each other in the coordinate-axis directions being coupled to each other. When the execution of one of the processings ends before its end time, the system identifies the number of unused nodes on the coordinate axes and determines, based on that number and the list, a processing whose start time is to be advanced.
    Type: Grant
    Filed: June 2, 2017
    Date of Patent: September 24, 2019
    Assignee: FUJITSU LIMITED
    Inventors: Tsutomu Ueno, Tsuyoshi Hashimoto
  • Patent number: 10423459
    Abstract: A resource manager arranges the resources in a computer system into one or more resource pools. The resource manager allocates a number of active resources and a number of backup resources to a particular resource pool. For each resource managed by the resource manager, the resource manager acquires information that describes the capacity and reliability of the resource. Capacity and reliability information for the particular resource pool is determined based on the capacity and reliability information associated with the resources assigned to the pool. In response to a request, the resource manager may provide an application with resources from several resource pools. The likelihood that the resource manager will be able to provide sufficient resources to the application may be determined based at least in part on the reliability information associated with the several resource pools.
    Type: Grant
    Filed: September 23, 2016
    Date of Patent: September 24, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Christopher Richard Jacques de Kadt, Benjamin Warren Mercier, Carlos Vara Callau, Timothy Daniel Cole, Aaron Gifford Freshwater, Sayantan Chakravorty, Allan Henry Vermeulen
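Pool-level capacity and reliability can be derived from the per-resource figures; treating resource failures as independent is an assumption made for this sketch.

```python
# Sketch: aggregate capacity by summing, and estimate pool reliability as
# the probability that at least one resource survives (assuming independent
# failures). Figures are invented.
import math

def pool_stats(resources):
    """resources: list of (capacity, reliability) for one pool."""
    capacity = sum(c for c, _ in resources)
    all_fail = math.prod(1 - r for _, r in resources)
    return capacity, 1 - all_fail

cap, rel = pool_stats([(4, 0.99), (4, 0.95)])
print(cap)  # -> 8
```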
  • Patent number: 10423460
    Abstract: Systems and methods that restore failed reconfiguration of nodes in distributed systems. By analyzing reports from read/write quorums of nodes associated with a configuration, automatic recovery for data partitions can be facilitated. Moreover, a configuration manager component tracks current configurations for replication units and determines whether a reconfiguration is to be performed (e.g., due to node failures, node recovery, replica additions/deletions, replica moves, or replica role changes, and the like). Reconfigurations in which data is replicated from a first configuration to a second configuration may be performed in a transactionally consistent manner based on dynamic quorums associated with the second configuration and the first configuration.
    Type: Grant
    Filed: January 7, 2017
    Date of Patent: September 24, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Santeri Olavi Voutilainen, Gopala Krishna Reddy Kakivaya, Ajay Kalhan, Lu Xun
  • Patent number: 10423461
    Abstract: Pooled virtual machine resources are described. A system determines whether a number of virtual machine resources that are in a pool is less than a specified number. The system creates a calculated number of virtual machine resources for the pool if the number of virtual machine resources that are in the pool is less than the specified number, the calculated number being equal to the specified number minus the number of virtual machine resources that are in the pool. The system receives a request to create a virtual machine environment that requires at least one virtual machine resource. The system allocates a virtual machine resource from the pool to the virtual machine environment.
    Type: Grant
    Filed: November 3, 2016
    Date of Patent: September 24, 2019
    Assignee: salesforce.com, inc.
    Inventors: Kunal Sanghavi, Vijaysenthil Veeriah, Varun Gupta
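The replenish-then-allocate cycle can be sketched as follows; the class and naming scheme are invented.

```python
# Sketch: keep the pool topped up to a specified number of VM resources,
# then allocate one to a requested virtual machine environment.

class VMResourcePool:
    def __init__(self, specified_number):
        self.specified = specified_number
        self.pool = []

    def replenish(self):
        # Create (specified - current) resources when the pool runs low.
        shortfall = self.specified - len(self.pool)
        for _ in range(shortfall):
            self.pool.append(f"vm-resource-{len(self.pool)}")

    def allocate(self, environment):
        self.replenish()
        return environment, self.pool.pop()

pool = VMResourcePool(3)
env, res = pool.allocate("test-env")
print(len(pool.pool))  # -> 2
```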
  • Patent number: 10423462
    Abstract: Embodiments of the present invention provide systems and methods for dynamically allocating data to multiple nodes. The method includes determining the usage of multiple buffers and the capability factors of multiple servers. Data is then allocated to multiple buffers associated with multiple active servers, based on the determined usage and capability factors, in order to keep the processing load on the multiple servers balanced.
    Type: Grant
    Filed: February 9, 2016
    Date of Patent: September 24, 2019
    Assignee: International Business Machines Corporation
    Inventors: Mi W. Shum, DongJie Wei, Samuel H. K. Wong, Xin Ying Yang, Xiang Zhou
  • Patent number: 10423463
    Abstract: Methods, systems, and computer-readable media for computational task offloading for virtualized graphics are disclosed. A virtual GPU attached to a virtual compute instance is provisioned in a multi-tenant provider network. The virtual compute instance is implemented using a physical compute instance, and the virtual GPU is implemented using a physical GPU. Using a microcode compilation service, program code is compiled into microcode for a target GPU type associated with the virtual GPU. The microcode is executed on the virtual GPU.
    Type: Grant
    Filed: June 9, 2016
    Date of Patent: September 24, 2019
    Assignee: Amazon Technologies, Inc.
    Inventor: Nicholas Patrick Wilt
  • Patent number: 10423464
    Abstract: In one example in accordance with the present disclosure, a method may include performing a transactional operation such that if one step of the transactional operation is performed, each other step of the transactional operation is performed. The transactional operation may include making a first copy, stored in a first persistent memory, of a next ticket number stored in a second persistent memory and updating the next ticket number in the second persistent memory. The method may also include determining when to serve a first thread based on the first copy of the next ticket number.
    Type: Grant
    Filed: October 25, 2016
    Date of Patent: September 24, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Mark Lillibridge, Milind M. Chabbi, Haris Volos
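The transactional ticket step can be sketched with an ordinary lock standing in for the persistent-memory transaction; everything else here is illustrative.

```python
# Sketch: copying the next ticket number (to the "first persistent memory")
# and advancing it (in the "second persistent memory") happen as one unit,
# so either both steps occur or neither does.
import threading

class TicketDispenser:
    def __init__(self):
        self._txn = threading.Lock()   # stand-in for the transactional update
        self.next_ticket = 0           # counter in "second persistent memory"
        self.now_serving = 0

    def take_ticket(self):
        with self._txn:
            # Transactional pair: copy the ticket, then advance the counter.
            my_copy = self.next_ticket  # copy in "first persistent memory"
            self.next_ticket += 1
        return my_copy

    def should_serve(self, ticket):
        # Serve the thread whose copied ticket matches now_serving.
        return ticket == self.now_serving

d = TicketDispenser()
t0, t1 = d.take_ticket(), d.take_ticket()
print(t0, t1, d.should_serve(t0))  # -> 0 1 True
```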
  • Patent number: 10423465
    Abstract: Methods and systems for allocating disk space and other limited resources (e.g., network bandwidth) for a cluster of data storage nodes using distributed semaphores with atomic updates are described. The distributed semaphores may be built on top of a distributed key-value store and used to reserve disk space, global disk streams for writing data to disks, and per node network bandwidth settings. A distributed semaphore comprising two or more semaphores that are accessed with different keys may be used to reduce contention and allow a globally accessible semaphore to scale as the number of data storage nodes within the cluster increases over time. In some cases, the number of semaphores within the distributed semaphore may be dynamically adjusted over time and may be set based on the total amount of disk space within the cluster and/or the number of contention fails that have occurred to the distributed semaphore.
    Type: Grant
    Filed: February 21, 2018
    Date of Patent: September 24, 2019
    Assignee: Rubrik, Inc.
    Inventor: Noel Moldvai
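Splitting one logical semaphore across several keys can be sketched with an in-memory dict standing in for the distributed key-value store; the shard count and permit split are assumptions.

```python
# Sketch: permits are spread across independent keys so that concurrent
# acquirers contend on different shards instead of one hot key.
import random

class PartitionedSemaphore:
    def __init__(self, total_permits, shards=4):
        # Spread the permits as evenly as possible across the shard keys.
        base, extra = divmod(total_permits, shards)
        self.kv = {f"sem/{i}": base + (1 if i < extra else 0)
                   for i in range(shards)}

    def try_acquire(self):
        # Probe shards in random order; an empty shard is a contention
        # fail that another shard may still satisfy.
        for key in random.sample(list(self.kv), len(self.kv)):
            if self.kv[key] > 0:
                self.kv[key] -= 1
                return key
        return None

sem = PartitionedSemaphore(8, shards=4)
grants = [sem.try_acquire() for _ in range(9)]
print(grants.count(None))  # -> 1 (only 8 permits exist)
```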
  • Patent number: 10423466
    Abstract: A method, system, and device provide for the streaming of ordered requests from one or more Senders to one or more Receivers over an un-ordered interconnect while mitigating structural deadlock conditions.
    Type: Grant
    Filed: October 18, 2016
    Date of Patent: September 24, 2019
    Assignee: Arm Limited
    Inventors: Ashok Kumar Tummala, Jamshed Jalal, Paul Gilbert Meyer, Dimitrios Kaseridis