Patents Examined by Kevin X Lu
  • Patent number: 11630685
    Abstract: According to some embodiments, an automated provisioning system may receive a customer demand associated with an application to be executed in a cloud-based computing environment. The automated provisioning system may include a process allocator to communicate with Virtual Machine (“VM”) and container provisioners and determine cluster data. A machine learning based microservice setup platform, coupled to the automated provisioning system, may receive the cluster data and information about the customer demand. The machine learning based microservice setup platform may then execute policy rules based on the cluster data (and information about the customer demand) and generate a recommendation for the customer demand. The automated provisioning system may then assign the customer demand to one of a VM-based infrastructure and a container-based infrastructure in accordance with the recommendation generated by the machine learning based microservice setup platform.
    Type: Grant
    Filed: January 15, 2020
    Date of Patent: April 18, 2023
    Assignee: SAP SE
    Inventors: Prabal Mahanta, Nishii Bharill, Swati Verma
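    The placement decision this abstract describes can be pictured as a small policy function. The following Python sketch is purely illustrative: the names (ClusterData, Demand, recommend_infrastructure) and the rules themselves are assumptions, not SAP's implementation.

```python
# Hypothetical sketch of a policy-based placement recommendation; the class and
# function names and the rules themselves are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ClusterData:
    vm_free_cores: int
    container_free_cores: int

@dataclass
class Demand:
    cores: int
    stateful: bool        # stateful workloads favor VMs in this toy policy
    burst_scaling: bool   # bursty workloads favor containers in this toy policy

def recommend_infrastructure(cluster: ClusterData, demand: Demand) -> str:
    """Apply simple policy rules to cluster data and customer demand."""
    if demand.stateful and cluster.vm_free_cores >= demand.cores:
        return "vm"
    if demand.burst_scaling and cluster.container_free_cores >= demand.cores:
        return "container"
    # Fall back to whichever infrastructure currently has more headroom.
    return "vm" if cluster.vm_free_cores >= cluster.container_free_cores else "container"

print(recommend_infrastructure(ClusterData(16, 64),
                               Demand(cores=4, stateful=False, burst_scaling=True)))
# -> container
```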
  • Patent number: 11625273
    Abstract: Throughput capacity may be changed to sustain throughput for accessing individual items in a database. A table hosted at storage nodes that provide access to the table in a database may be identified as allocated with a client-specified throughput capacity for accessing the table. Performance of access requests to the table at the storage nodes may be tracked. Based on the performance of the access requests, a change may be determined that modifies a throughput capacity for the table to sustain a guaranteed throughput for each access request independent of other access requests received for the table.
    Type: Grant
    Filed: November 23, 2018
    Date of Patent: April 11, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Mostafa Elhemali, Dolev Ish-am, Jonathan L. Meed, Richard Krog, Adel Gawdat, Kai Zhao, Saumil Ramesh Hukerikar
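    A toy illustration of the capacity-adjustment loop described above; the latency thresholds, scaling factors, and function name are assumptions for the sketch, not Amazon's algorithm.

```python
# Toy capacity controller: raise or lower provisioned throughput based on tracked
# request latencies. Thresholds, factors, and names are assumptions.
from statistics import mean

def adjust_capacity(current_capacity: float, observed_latencies_ms: list[float],
                    target_latency_ms: float = 10.0) -> float:
    """Return a new throughput capacity intended to keep each request within target."""
    if not observed_latencies_ms:
        return current_capacity
    avg = mean(observed_latencies_ms)
    if avg > target_latency_ms:            # requests are queuing: add capacity
        return current_capacity * 1.25
    if avg < 0.5 * target_latency_ms:      # heavily over-provisioned: shrink gently
        return current_capacity * 0.9
    return current_capacity

print(adjust_capacity(1000.0, [14.2, 18.9, 11.0]))   # -> 1250.0
```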
  • Patent number: 11599821
    Abstract: Implementations detailed herein include description of a computer-implemented method. In an implementation, the method at least includes receiving an application instance configuration, an application of the application instance to utilize a portion of an attached accelerator during execution of a machine learning model and the application instance configuration including: an indication of the central processing unit (CPU) capability to be used, an arithmetic precision of the machine learning model to be used, an indication of the accelerator capability to be used, a storage location of the application, and an indication of an amount of random access memory to use.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: March 7, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Sudipta Sengupta, Poorna Chand Srinivas Perumalla, Dominic Rajeev Divakaruni, Nafea Bshara, Leo Parker Dirac, Bratin Saha, Matthew James Wood, Andrea Olgiati, Swaminathan Sivasubramanian
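    The configuration fields enumerated in the abstract can be pictured as a simple record; the class and field names below are hypothetical, not the provider's API.

```python
# Hypothetical record holding the configuration fields listed in the abstract;
# class and field names are illustrative, not the provider's API.
from dataclasses import dataclass

@dataclass
class ApplicationInstanceConfig:
    cpu_capability: str           # indication of the CPU capability to be used
    model_precision: str          # arithmetic precision of the ML model, e.g. "fp16"
    accelerator_capability: str   # indication of the accelerator capability to be used
    application_location: str     # storage location of the application
    ram_bytes: int                # amount of random access memory to use

cfg = ApplicationInstanceConfig(
    cpu_capability="4 vCPU",
    model_precision="fp16",
    accelerator_capability="1/4 of attached accelerator",
    application_location="s3://example-bucket/app.tar.gz",   # placeholder URI
    ram_bytes=8 * 2**30,
)
print(cfg)
```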
  • Patent number: 11593184
    Abstract: Methods, systems and apparatuses for graph processing are disclosed. One graph streaming processor includes a thread manager, wherein the thread manager is operative to dispatch operation of the plurality of threads of a plurality of thread processors before dependencies of the dependent threads have been resolved, maintain a scorecard of operation of the plurality of threads of the plurality of thread processors, and provide an indication to at least one of the plurality of thread processors of whether a dependency of the at least one of the plurality of threads, for which a request has been received, has or has not been satisfied. Further, a producer thread provides a response to the dependency when the dependency has been satisfied, and each of the plurality of thread processors is operative to provide processing updates to the thread manager, and provide queries to the thread manager upon reaching a dependency.
    Type: Grant
    Filed: August 11, 2021
    Date of Patent: February 28, 2023
    Assignee: Blaize, Inc.
    Inventors: Lokesh Agarwal, Sarvendra Govindammagari, Venkata Ganapathi Puppala, Satyaki Koneru
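    A software toy model of the scorecard-based dispatch described above; all class and method names are assumed for illustration and do not represent the hardware design.

```python
# Toy model of a thread manager that dispatches dependent threads before their
# dependencies resolve and answers queries from a scorecard. Names are assumed.
class ThreadManager:
    def __init__(self):
        self.scorecard: dict[str, bool] = {}    # dependency -> satisfied?

    def dispatch(self, thread_id: str, dependencies: list[str]) -> None:
        # Dispatch immediately; unknown dependencies start out unsatisfied.
        for dep in dependencies:
            self.scorecard.setdefault(dep, False)
        print(f"dispatched {thread_id} (deps: {dependencies})")

    def report(self, dependency: str) -> None:
        """Called by a producer thread once it has satisfied a dependency."""
        self.scorecard[dependency] = True

    def query(self, dependency: str) -> bool:
        """Called by a consumer thread upon reaching a dependency."""
        return self.scorecard.get(dependency, False)

tm = ThreadManager()
tm.dispatch("consumer-0", ["tile-7-ready"])
print(tm.query("tile-7-ready"))   # False: not yet satisfied
tm.report("tile-7-ready")
print(tm.query("tile-7-ready"))   # True
```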
  • Patent number: 11579912
    Abstract: A method includes identifying a source virtual machine to be migrated from a source domain to a target domain, extracting file-in-use metadata and shared asset metadata from virtual machine metadata of the source virtual machine, and copying one or more files identified in the file-in-use metadata to a target virtual machine in the target domain. For each of one or more shared assets identified in the shared asset metadata, the method further includes (a) determining whether or not the shared asset already exists in the target domain, (b) responsive to the shared asset already existing in the target domain, updating virtual machine metadata of the target virtual machine to specify the shared asset, and (c) responsive to the shared asset not already existing in the target domain, copying the shared asset to the target domain and updating virtual machine metadata of the target virtual machine to specify the shared asset.
    Type: Grant
    Filed: February 13, 2020
    Date of Patent: February 14, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Vaideeswaran Ganesan, Suren Kumar, Vinod Durairaj
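    The shared-asset branch of the method can be sketched as a short conditional loop; the callback names (asset_exists, copy_asset, attach_asset) are hypothetical stand-ins for the migration tooling.

```python
# Illustrative handling of shared assets during migration; the callbacks
# (asset_exists, copy_asset, attach_asset) are hypothetical stand-ins.
def migrate_shared_assets(shared_asset_metadata, target_domain,
                          asset_exists, copy_asset, attach_asset):
    for asset in shared_asset_metadata:
        if asset_exists(target_domain, asset):
            # Asset already present: only update target VM metadata to reference it.
            attach_asset(target_domain, asset)
        else:
            # Asset missing: copy it to the target domain, then reference it.
            copy_asset(target_domain, asset)
            attach_asset(target_domain, asset)

# In-memory stand-ins for the target domain and callbacks.
target = {"assets": {"iso-tools"}, "vm_metadata": []}
migrate_shared_assets(
    ["iso-tools", "gpu-driver-pack"],
    target,
    asset_exists=lambda d, a: a in d["assets"],
    copy_asset=lambda d, a: d["assets"].add(a),
    attach_asset=lambda d, a: d["vm_metadata"].append(a),
)
print(target["vm_metadata"])   # ['iso-tools', 'gpu-driver-pack']
```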
  • Patent number: 11567804
    Abstract: A virtual machine management service obtains a request to instantiate a virtual machine image (VMI) to implement a virtual network function (VNF). The request specifies a set of processor requirements corresponding to instantiation of the VMI. In response to the request, the service identifies, from a server comprising a set of processor cores, available processor capacity. The service determines, based on the available processor capacity and the set of processor requirements, whether to instantiate the VMI on to a subset of processor cores of the server. Based on this determination, the service instantiates the VMI on to the subset of processor cores to implement the VNF.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: January 31, 2023
    Assignee: Cisco Technology, Inc.
    Inventors: Yanping Qu, Sabita Jasty, Kaushik Pratap Biswas, Yegappan Lakshmanan
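    A minimal sketch of the capacity check and core selection described in this abstract, with assumed names and a deliberately simple first-fit rule.

```python
# Minimal capacity check and core selection; names and the first-fit rule are assumed.
def select_cores(available_cores: list[int], required_cores: int) -> list[int] | None:
    """Return the subset of core IDs to pin the VMI to, or None if capacity is short."""
    if len(available_cores) < required_cores:
        return None                       # insufficient capacity: do not instantiate here
    return available_cores[:required_cores]

print(select_cores(available_cores=[2, 3, 6, 7], required_cores=2))   # [2, 3]
print(select_cores(available_cores=[5], required_cores=4))            # None
```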
  • Patent number: 11550607
    Abstract: Processor core power management in a virtualized environment. A hypervisor, executing on a processor device of a computing host, the processor device having a plurality of processor cores, receives, from a guest operating system of a virtual machine, a request to set a virtual central processing unit (VCPU) of the virtual machine to a first requested P-state level of a plurality of P-state levels. Based on the request, the hypervisor associates the VCPU with a first processor core having a P-state that corresponds to the first requested P-state level.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: January 10, 2023
    Assignee: Red Hat, Inc.
    Inventor: Bandan Das
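    A toy illustration of associating a VCPU with a core whose P-state matches the guest's request; the data layout and the behavior when no core matches are assumptions.

```python
# Toy association of a VCPU with a core already at the requested P-state level;
# the data layout and the no-match behavior are assumptions.
def associate_vcpu(core_pstates: dict[int, int], requested_pstate: int) -> int | None:
    """Return the ID of a core whose P-state corresponds to the requested level."""
    for core, pstate in core_pstates.items():
        if pstate == requested_pstate:
            return core
    return None   # a real hypervisor might instead transition a core's P-state

cores = {0: 0, 1: 2, 2: 2, 3: 1}         # core ID -> current P-state level
print(associate_vcpu(cores, requested_pstate=2))   # -> 1
```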
  • Patent number: 11550634
    Abstract: A method for minimizing allocation failures in a cloud computing system without overprovisioning may include determining a predicted supply for a virtual machine series in a system unit of the cloud computing system during an upcoming time period. The predicted supply may be based on a shared available current capacity and a shared available future added capacity for the virtual machine series in the system unit. The method may also include predicting an available capacity for the virtual machine series in the system unit during the upcoming time period. The predicted available capacity may be based at least in part on a predicted demand for the virtual machine series in the system unit during the upcoming time period and the predicted supply. The method may also include taking at least one mitigation action in response to determining that the predicted demand exceeds the predicted supply during the upcoming time period.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: January 10, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Saurabh Agarwal, Maitreyee Ramprasad Joshi, Vinayak Ramnath Karnataki, Neha Keshari, Gowtham Natarajan, Yash Purohit, Sanjay Ramanujan, Karthikeyan Subramanian, Ambrose Thomas Treacy, Shandan Zhou
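    The supply/demand comparison can be illustrated in a few lines; the quantities and the mitigation label are placeholders, not Microsoft's implementation.

```python
# Toy supply/demand check; quantities and the mitigation label are placeholders.
def plan_capacity(current_capacity: float, future_added_capacity: float,
                  predicted_demand: float) -> str:
    predicted_supply = current_capacity + future_added_capacity
    if predicted_demand > predicted_supply:
        # Allocation failures are predicted: take a mitigation action, e.g. shift
        # demand to another system unit or expedite capacity build-out.
        return "mitigate"
    return "ok"

print(plan_capacity(current_capacity=800, future_added_capacity=100,
                    predicted_demand=950))   # -> mitigate
```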
  • Patent number: 11544093
    Abstract: Examples herein relate to checkpoint replication and copying of updated checkpoint data. For example, a memory controller coupled to a memory can receive a write request with an associated address to write or update checkpoint data and track updates to checkpoint data based on at least two levels of memory region sizes. A first level is associated with a smaller memory region size than a memory region size associated with the second level. In some examples, the first level is a cache-line memory region size and the second level is a page memory region size. Updates to the checkpoint data can be tracked at the second level unless an update was previously tracked at the first level. Reduced amounts of updated checkpoint data can be transmitted during a checkpoint replication by using multiple region size trackers.

    Type: Grant
    Filed: September 27, 2019
    Date of Patent: January 3, 2023
    Assignee: Intel Corporation
    Inventors: Zhe Wang, Andrew V. Anderson, Alaa R. Alameldeen, Andrew M. Rudoff
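    An illustrative two-level dirty tracker along the lines of the abstract; the region sizes, the fine-grained trigger, and the replication accounting are all assumed for the sketch.

```python
# Illustrative two-level dirty tracker: writes are recorded at page granularity
# unless the same region was already tracked at cache-line granularity.
CACHE_LINE = 64
PAGE = 4096

class CheckpointTracker:
    def __init__(self):
        self.dirty_lines: set[int] = set()   # first level: cache-line regions
        self.dirty_pages: set[int] = set()   # second level: page regions

    def record_write(self, addr: int, fine_grained: bool = False) -> None:
        if fine_grained or (addr // CACHE_LINE) in self.dirty_lines:
            self.dirty_lines.add(addr // CACHE_LINE)
        else:
            self.dirty_pages.add(addr // PAGE)

    def bytes_to_replicate(self) -> int:
        # Send fine-grained regions individually, plus any dirty pages that have
        # no finer-grained coverage.
        covered = {line * CACHE_LINE // PAGE for line in self.dirty_lines}
        return (len(self.dirty_lines) * CACHE_LINE
                + len(self.dirty_pages - covered) * PAGE)

t = CheckpointTracker()
t.record_write(0x1000, fine_grained=True)   # 64 bytes tracked at the first level
t.record_write(0x9000)                      # whole page tracked at the second level
print(t.bytes_to_replicate())               # 64 + 4096 = 4160
```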
  • Patent number: 11539784
    Abstract: Methods are provided. A method includes announcing to a network meta information describing each of a plurality of distributed data sources. The method further includes propagating the meta information amongst routing elements in the network. The method also includes inserting into the network a description of distributed datasets that match a set of requirements of an analytics task. The method additionally includes delivering, by the routing elements, a copy of the analytics task to locations of respective ones of the plurality of distributed data sources that include the distributed datasets that match the set of requirements of the analytics task.
    Type: Grant
    Filed: June 22, 2016
    Date of Patent: December 27, 2022
    Assignee: International Business Machines Corporation
    Inventors: Bong Jun Ko, Theodoros Salonidis, Rahul Urgaonkar, Dinesh C. Verma
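    The dataset-matching step can be pictured as a simple filter over announced meta information; the data structures and field names here are assumptions for illustration.

```python
# Illustrative matching step: routing elements hold announced meta information about
# distributed data sources and deliver a copy of an analytics task to every source
# whose dataset matches the task's requirements. All structures are assumptions.
def matching_sources(meta_info: dict[str, dict], requirements: dict) -> list[str]:
    """Return the locations of data sources whose datasets satisfy the requirements."""
    matches = []
    for location, description in meta_info.items():
        if all(description.get(k) == v for k, v in requirements.items()):
            matches.append(location)
    return matches

announced = {
    "edge-1": {"schema": "traffic", "region": "eu"},
    "edge-2": {"schema": "traffic", "region": "us"},
    "edge-3": {"schema": "weather", "region": "eu"},
}
task_requirements = {"schema": "traffic", "region": "eu"}
for loc in matching_sources(announced, task_requirements):
    print(f"deliver analytics task copy to {loc}")   # -> edge-1
```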
  • Patent number: 11537419
    Abstract: Disclosed is a source host including a processor. The processor operates a virtual machine (VM) to communicate network traffic over a communication link. The processor also initiates migration of the VM to a destination host. The processor also suspends the VM during migration of the VM to the destination host. The source host also includes a live migration circuit coupled to the processor. The live migration circuit manages a session associated with the communication link while the VM is suspended during migration. The live migration circuit buffers changes to a session state and transfers the buffered session state changes to the destination host for replay after the VM is reactivated on the destination host. The live migration circuit keeps the sessions alive during migration to alleviate connection losses.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: December 27, 2022
    Assignee: Intel Corporation
    Inventors: Stephen T. Palermo, Krishnamurthy Jambur Sathyanarayana, Sean Harte, Thomas Long, Eliezer Tamir, Hari K. Tadepalli
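    A minimal sketch of buffering session-state changes during suspension and replaying them after reactivation; the classes and methods are illustrative stand-ins for the live migration circuit.

```python
# Illustrative buffering of session-state changes while the VM is suspended, with a
# replay step after the VM is reactivated on the destination host. Names are assumed.
class LiveMigrationBuffer:
    def __init__(self):
        self.buffered: list[tuple[str, object]] = []
        self.suspended = False

    def on_session_event(self, key: str, value: object, session: dict) -> None:
        if self.suspended:
            self.buffered.append((key, value))   # hold the change; keep the session alive
        else:
            session[key] = value

    def replay(self, session: dict) -> None:
        """Apply buffered changes on the destination host after the VM resumes."""
        for key, value in self.buffered:
            session[key] = value
        self.buffered.clear()

buf, session = LiveMigrationBuffer(), {"tcp_seq": 100}
buf.suspended = True
buf.on_session_event("tcp_seq", 164, session)    # buffered, not yet applied
buf.suspended = False
buf.replay(session)
print(session)   # {'tcp_seq': 164}
```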
  • Patent number: 11507477
    Abstract: System and method for providing fault tolerance in virtualized computer systems use a first guest and a second guest running on virtualization software to produce outputs, which are produced when a workload is executed on the first and second guests. An output of the second guest is compared with an output of the first guest to determine if there is an output match. If there is no output match, the first guest is paused and a resynchronization of the second guest is executed to restore a checkpointed state of the first guest on the second guest. After the resynchronization of the second guest, the paused first guest is caused to resume operation.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: November 22, 2022
    Assignee: VMware, Inc.
    Inventors: Ganesh Venkitachalam, Rohit Jain, Boris Weissman, Daniel J. Scales, Vyacheslav Vladimirovich Malyugin, Jeffrey W. Sheldon, Min Xu
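    The compare-and-resynchronize loop can be illustrated as follows; the Guest class and its methods are hypothetical stand-ins for the virtualization layer.

```python
# Illustrative control loop for the output comparison described in the abstract;
# the Guest class and its methods are hypothetical stand-ins.
class Guest:
    def __init__(self, state=0):
        self.state = state
        self.paused = False
    def run(self, workload):            # deterministic toy "workload"
        self.state += workload
        return self.state
    def checkpoint(self):
        return self.state
    def restore(self, state):
        self.state = state
    def pause(self):
        self.paused = True
    def resume(self):
        self.paused = False

def compare_and_resync(primary: Guest, secondary: Guest, workload: int) -> bool:
    """Run the workload on both guests; resynchronize the secondary on divergence."""
    if primary.run(workload) == secondary.run(workload):
        return True
    primary.pause()                          # pause the first guest
    secondary.restore(primary.checkpoint())  # restore its checkpointed state on the second
    primary.resume()                         # resume only after resynchronization
    return False

print(compare_and_resync(Guest(0), Guest(1), workload=5))   # False -> resync performed
```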
  • Patent number: 11487427
    Abstract: Concurrent threads may be synchronized at the level of the memory words they access rather than at the level of the lock that protects the execution of critical sections. Each lock may be associated with an array of flags and each flag may indicate ownership of certain memory words. A pessimistic thread may set flags corresponding to memory words it is accessing in the critical section, while an optimistic thread may read the corresponding flag before any memory access to ensure that the flag is not set and that therefore the associated memory word is not being accessed by the other thread. Thus, optimistic threads that do not have conflicts with the pessimistic thread may not have to wait for the pessimistic thread to release the lock before proceeding.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: November 1, 2022
    Assignee: Oracle International Corporation
    Inventors: Alex Kogan, David Dice, Virendra J. Marathe
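    A single-threaded toy model of the flag-array protocol described above (not a real lock); the flag-to-word mapping and the array size are assumptions.

```python
# Toy model of the flag-array idea: a pessimistic thread marks the words it will
# touch; an optimistic thread checks the flag for each word before accessing it.
# This is a single-threaded illustration of the protocol, not a real lock.
NUM_FLAGS = 64

class WordLock:
    def __init__(self):
        self.flags = [False] * NUM_FLAGS          # one flag per group of memory words

    def flag_index(self, word_addr: int) -> int:
        return word_addr % NUM_FLAGS

    def pessimistic_acquire(self, word_addrs: list[int]) -> None:
        for a in word_addrs:
            self.flags[self.flag_index(a)] = True

    def pessimistic_release(self, word_addrs: list[int]) -> None:
        for a in word_addrs:
            self.flags[self.flag_index(a)] = False

    def optimistic_can_access(self, word_addr: int) -> bool:
        # Safe to proceed only if no pessimistic thread owns this word.
        return not self.flags[self.flag_index(word_addr)]

lock = WordLock()
lock.pessimistic_acquire([10, 11])
print(lock.optimistic_can_access(12))   # True: no conflict, no need to wait
print(lock.optimistic_can_access(10))   # False: conflicting word is owned
```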
  • Patent number: 11481249
    Abstract: The present disclosure discloses a service migration method including: sending, by a VNFM module, a virtual machine VM request command to a first VIM module, where the first VIM module manages a first host on which post-upgrade new-version software is configured; receiving, by the VNFM module, a VM request response from the first VIM module, where the VM request response includes information about a first VM that the first VIM module requests on the first host, and virtual machine own data that can run on the first host and that is configured by the first VIM module is configured on the first VM; and sending, by the VNFM module, a service migration command to a virtualized network function VNF module, where the service migration command is used to instruct the VNF module to migrate a running service on a second VM to the first VM.
    Type: Grant
    Filed: April 23, 2018
    Date of Patent: October 25, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Long Li, Xuewen Gong
  • Patent number: 11467886
    Abstract: Virtual machines can be migrated between computing environments. For example, a system can receive a request to perform a migration process involving migrating a virtual machine from a source computing environment to a target computing environment. The target computing environment may be a cloud computing environment. In response to the request, the system can receive first configuration data for a first version of the virtual machine that is located in the source computing environment. The first configuration data can describe virtualized features of the first version of the virtual machine. The system can use the first configuration data to generate second configuration data for a second version of the virtual machine that is to be deployed in the target computing environment. The system can then deploy the second version of the virtual machine within one or more containers of the target computing environment in accordance with the second configuration data.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: October 11, 2022
    Assignee: Red Hat, Inc.
    Inventors: Mordechay Asayag, Arik Hadas
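    The configuration translation step can be sketched as a mapping from VM fields to a container deployment description; the field names and the output shape are assumptions, not Red Hat's format.

```python
# Hedged sketch of generating second (container) configuration data from first (VM)
# configuration data; keys and the output structure are illustrative assumptions.
def translate_config(vm_config: dict) -> dict:
    """Describe a containerized deployment of a VM's virtualized features."""
    return {
        "name": vm_config["name"],
        "containers": [{
            "image": vm_config.get("disk_image", "imported-vm-disk"),
            "resources": {
                "cpu": vm_config["vcpus"],
                "memory_mb": vm_config["memory_mb"],
            },
            "env": vm_config.get("env", {}),
        }],
    }

vm = {"name": "billing-vm", "vcpus": 2, "memory_mb": 4096, "disk_image": "billing:v1"}
print(translate_config(vm))
```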
  • Patent number: 11416274
    Abstract: A computer-implemented method includes detecting, by a bridge container running inside a container scope, connection information about a first service instance running to provide a respective first service outside the container scope. A first virtual container is initialized inside the container scope. The first virtual container is connected to the first service instance, utilizing the connection information about the first service instance, to virtualize the first service instance inside the container scope. It is detected that a first source container inside the container scope requires the first service of the first service instance. The first source container is connected to the first virtual container to enable the first source container to access the first service instance through the first virtual container.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: August 16, 2022
    Assignee: International Business Machines Corporation
    Inventors: Ping Xiao, Guan Jun Liu, Guo Qiang Li, Zhi Feng Zhao
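    A minimal sketch of the bridging idea, with hypothetical classes standing in for the bridge container's discovered endpoint, the virtual container, and the source-container connection.

```python
# Illustrative only: a virtual container fronts an external service instance inside
# the container scope, and source containers connect through it. Names are assumed.
class VirtualContainer:
    def __init__(self, service_name: str, endpoint: str):
        self.service_name = service_name
        self.endpoint = endpoint          # connection info detected by the bridge container

    def access(self, request: str) -> str:
        # Forward the request to the real service instance outside the container scope.
        return f"forwarded '{request}' to {self.service_name} at {self.endpoint}"

def connect_source_container(required_service: str,
                             virtual_containers: dict[str, VirtualContainer]) -> VirtualContainer:
    """Connect a source container to the virtual container for the service it requires."""
    return virtual_containers[required_service]

registry = {"postgres": VirtualContainer("postgres", "10.0.0.5:5432")}   # assumed endpoint
vc = connect_source_container("postgres", registry)
print(vc.access("SELECT 1"))
```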
  • Patent number: 11416282
    Abstract: Systems, apparatuses and methods are disclosed for scheduling threads comprising code blocks in a graph streaming processor (GSP) system. One system includes a scheduler for scheduling a plurality of threads, where the plurality of threads includes a set of instructions operating on the graph streaming processors of the GSP system. The scheduler comprises a plurality of stages, where each stage is coupled to an input command buffer and an output command buffer. A portion of the scheduler is implemented in hardware and comprises a command parser operative to interpret commands within a corresponding input command buffer, a thread generator coupled to the command parser and operative to generate the plurality of threads, and a thread scheduler coupled to the thread generator for dispatching the plurality of threads for operation on the plurality of graph streaming processors.
    Type: Grant
    Filed: April 14, 2019
    Date of Patent: August 16, 2022
    Assignee: Blaize, Inc.
    Inventors: Satyaki Koneru, Val G. Cook, Ke Yin
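    A software analogue of one stage in the pipeline described above, with the parse/generate/dispatch split modeled as plain functions; everything here is illustrative, not the hardware design.

```python
# Software analogue of one scheduler stage: parse commands from the input buffer,
# generate threads, dispatch them, and emit commands for the next stage's buffer.
from collections import deque

def cmd_thread_count(cmd: str) -> int:
    return 2   # toy rule: every command spawns two threads

def run_stage(input_commands: deque, dispatch) -> deque:
    output_commands = deque()
    while input_commands:
        cmd = input_commands.popleft()                     # command parser
        threads = [f"{cmd}:thread{i}"                      # thread generator
                   for i in range(cmd_thread_count(cmd))]
        for t in threads:
            dispatch(t)                                    # thread scheduler dispatch
        output_commands.append(f"{cmd}-done")              # output command buffer
    return output_commands

next_stage_input = run_stage(deque(["draw", "blur"]), dispatch=print)
print(list(next_stage_input))   # ['draw-done', 'blur-done']
```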
  • Patent number: 11366684
    Abstract: In one approach, an import mechanism allows new hardware intrinsics to be utilized by writing or updating a library of source code, rather than specifically modifying the virtual machine for each new intrinsic. Thus, once the architecture is in place to allow the import mechanism to function, the virtual machine itself (e.g. the code which implements the virtual machine) no longer needs to be modified in order to allow new intrinsics to be utilized by end user programmers. Since source code is typically more convenient to write than the language used to implement the virtual machine and the risk of miscoding the virtual machine is minimized when introducing new intrinsics, the import mechanism described herein increases the efficiency at which new hardware intrinsics can be introduced.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: June 21, 2022
    Assignee: Oracle International Corporation
    Inventors: John Robert Rose, Vladimir Ivanov
  • Patent number: 11354164
    Abstract: A computerized method for processing a set of robotic process automation (RPA) tasks receives service level requirement inputs that specify a first set of RPA tasks to be performed within a specified period of time. A response to the service level requirement inputs is computed to determine a number of computing resources required to perform the first set of RPA tasks in the specified period of time. Availability of computing resources from a set of computing resources is determined to generate an allocated set of computing resources. The allocated set of computing resources is deployed. A subset of the first set of RPA tasks is queued for each computing resource and each computing resource is monitored and redeployed as it completes tasks in its queue. Quality of Service (QoS) is achieved by prioritizing certain tasks above others.
    Type: Grant
    Filed: June 30, 2018
    Date of Patent: June 7, 2022
    Assignee: Automation Anywhere, Inc.
    Inventors: James Dennis, VJ Anand, Abhijit Kakhandiki
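    The sizing and queuing steps can be illustrated with a short calculation; the task durations, the round-robin rule, and the priority encoding are assumptions for the sketch.

```python
# Toy calculation of the resource count needed to meet a service-level window, plus
# simple priority-first queuing. Numbers and names are illustrative assumptions.
import math
from collections import deque

def resources_needed(task_count: int, avg_task_minutes: float, window_minutes: float) -> int:
    """How many bot runners are required to finish all tasks inside the window."""
    return math.ceil(task_count * avg_task_minutes / window_minutes)

def build_queues(tasks: list[tuple[str, int]], n_resources: int) -> list[deque]:
    """Round-robin tasks into per-resource queues, highest priority (lowest number) first."""
    queues = [deque() for _ in range(n_resources)]
    for i, (name, _prio) in enumerate(sorted(tasks, key=lambda t: t[1])):
        queues[i % n_resources].append(name)
    return queues

n = resources_needed(task_count=120, avg_task_minutes=3, window_minutes=60)   # -> 6
print(n, build_queues([("invoice", 1), ("report", 2), ("audit", 1)], n_resources=2))
```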
  • Patent number: 11307903
    Abstract: Embodiments of the present invention set forth techniques for allocating execution resources to groups of threads within a graphics processing unit. A compute work distributor included in the graphics processing unit receives an indication from a process that a first group of threads is to be launched. The compute work distributor determines that a first subcontext associated with the process has at least one processor credit. In some embodiments, CTAs may be launched even when there are no processor credits, if one of the TPCs that was already acquired has sufficient space. The compute work distributor identifies a first processor included in a plurality of processors that has a processing load that is less than or equal to the processor loads associated with all other processors included in the plurality of processors. The compute work distributor launches the first group of threads to execute on the first processor.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: April 19, 2022
    Assignee: NVIDIA Corporation
    Inventors: Jerome F. Duluk, Jr., Luke Durant, Ramon Matas Navarro, Alan Menezes, Jeffrey Tuckey, Gentaro Hirota, Brian Pharris
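    The launch decision can be sketched as a credit check followed by least-loaded processor selection; the names and the load metric are assumed for illustration.

```python
# Illustrative launch decision: verify the subcontext has a processor credit, then
# pick the processor with the lowest load. Names and the load metric are assumptions.
def launch_thread_group(subcontext_credits: dict[str, int],
                        processor_loads: dict[str, float],
                        subcontext: str) -> str | None:
    if subcontext_credits.get(subcontext, 0) < 1:
        return None                        # no processor credit: do not launch yet
    target = min(processor_loads, key=processor_loads.get)   # least-loaded processor
    subcontext_credits[subcontext] -= 1
    processor_loads[target] += 1.0
    return target

credits = {"ctx-A": 2}
loads = {"sm0": 3.0, "sm1": 1.0, "sm2": 2.0}
print(launch_thread_group(credits, loads, "ctx-A"))   # -> sm1
```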