Patents Examined by Jacob D Dascomb
  • Patent number: 11373011
    Abstract: A security module is disclosed. In embodiments, the security module includes a common host platform configured to co-host a plurality of certified functions via a plurality of interconnected hardware resources. The common host platform may be configured to host a first certified function independently certified via a first certifying authority, and a second certified function independently certified via a second certifying authority. The first certified function may be hosted on a first sub-set of dedicated hardware resources and a first sub-set of shared hardware resources. The second certified function may be hosted on a second sub-set of dedicated hardware resources and the first sub-set of shared hardware resources including one or more hardware resources shared with the first certified function.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: June 28, 2022
    Assignee: Rockwell Collins, Inc.
    Inventors: James A. Marek, Sarah A. Miller, Adriane R. Van Auken
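The partitioning described in this entry (per-function dedicated resources plus a shared subset on a common host platform) can be illustrated with a small bookkeeping sketch. This is a minimal illustration, not the patented design; the resource names, fields, and isolation check are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class CertifiedFunction:
    """A function certified by a single authority, with its resource subsets."""
    name: str
    certifying_authority: str
    dedicated: set = field(default_factory=set)   # hardware used only by this function
    shared: set = field(default_factory=set)      # hardware shared with other functions

def check_isolation(functions):
    """Dedicated resources must never overlap across certified functions."""
    seen = {}
    for fn in functions:
        for res in fn.dedicated:
            if res in seen:
                raise ValueError(f"{res} is dedicated to both {seen[res]} and {fn.name}")
            seen[res] = fn.name

# Two functions certified by different authorities on a common host platform.
f1 = CertifiedFunction("crypto", "Authority A", dedicated={"aes_core"}, shared={"dma_0", "sram_0"})
f2 = CertifiedFunction("key_mgmt", "Authority B", dedicated={"otp_bank"}, shared={"dma_0", "sram_0"})
check_isolation([f1, f2])   # passes: only the shared subset overlaps
```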
  • Patent number: 11366909
    Abstract: Data processing systems and methods, according to various embodiments, are adapted for efficiently processing data to allow for the streamlined assessment of risk ratings for one or more vendors. In various embodiments, the systems/methods may use one or more particular vendor attributes (e.g., as determined from scanning one or more webpages associated with the particular vendor) and the contents of one or more completed privacy templates for the vendor to determine a vendor risk rating for the particular vendor. As a particular example, the system may scan a website associated with the vendor to automatically determine one or more security certifications associated with the vendor and use that information, along with information from a completed privacy template for the vendor, to calculate a vendor risk rating that indicates the risk of doing business with the vendor.
    Type: Grant
    Filed: June 8, 2021
    Date of Patent: June 21, 2022
    Assignee: OneTrust, LLC
    Inventor: Jonathan Blake Brannon
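As a rough illustration of combining scanned vendor attributes with privacy-template answers into a single risk rating, the sketch below uses an assumed 0-100 scale and assumed weights; none of these values or attribute names come from the patent.

```python
def vendor_risk_rating(certifications, template_answers):
    """Combine scanned certifications and privacy-template answers into a 0-100
    risk score (higher = riskier). Weights and inputs are illustrative only."""
    score = 100.0
    # Each recognized certification found on the vendor's site reduces risk.
    for cert in ("ISO27001", "SOC2"):
        if cert in certifications:
            score -= 20
    # Each favorable answer in the completed privacy template reduces risk further.
    favorable = sum(1 for answer in template_answers.values() if answer is True)
    score -= 5 * favorable
    return max(0.0, min(100.0, score))

rating = vendor_risk_rating(
    certifications={"ISO27001"},
    template_answers={"encrypts_data_at_rest": True, "has_breach_plan": False},
)
print(rating)  # 75.0 under these assumed weights
```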
  • Patent number: 11366652
    Abstract: The functionality offered through a game development application can be extended using a plurality of extension modules. A customer portal can expose information about the available modules to a developer, where the extension modules can include components such as a customer interface enabling an authorized entity to access or modify the functionality, as well as an application programming interface (API) or other interface for enabling the functionality to be accessed during gameplay. The API can be associated with various resources that can be allocated to the customer over a period of time, or that can be allocated dynamically in order to process discrete tasks. The allocation of resources can occur after release of the game and during live gameplay. Similar functionality can be accessed by a game administrator to trigger specific actions during gameplay.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: June 21, 2022
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Anthony Pressacco, Michael Eric Deem
  • Patent number: 11360758
    Abstract: A communication processing device including: a memory that stores first data relating to a pre-update firmware and second data relating to a post-update firmware, and that stores a first reference destination address indicating the storage area of a reference destination included in the first data in association with the reference destination; a rewriting unit configured to rewrite at least some of the first reference destination addresses stored in the memory with second reference destination addresses indicating the storage area of the reference destination in the second data; and a control unit configured to, when referring to the reference destination in the first data, refer to the second data on the basis of the second reference destination address stored in the memory.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: June 14, 2022
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Takumi Harada, Hirotaka Ujikawa, Manabu Yoshino, Noriyuki Oota, Kenichi Suzuki
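The core of this abstract is an indirection: references in the pre-update (first) data resolve through stored reference destination addresses, some of which are rewritten to point into the post-update (second) data. A minimal sketch of that table, with all names and addresses invented for illustration:

```python
class ReferenceTable:
    """Maps each reference destination to the address at which it is stored.
    Initially the addresses point into the first (pre-update) data; rewriting
    redirects selected entries to addresses in the second (post-update) data."""

    def __init__(self, first_addresses):
        self.addresses = dict(first_addresses)   # reference -> address in first data

    def rewrite(self, second_addresses):
        """Replace at least some first addresses with second addresses."""
        self.addresses.update(second_addresses)

    def resolve(self, reference):
        """When the first data refers to a destination, follow the stored address,
        which after rewriting points into the second data."""
        return self.addresses[reference]

table = ReferenceTable({"config_block": 0x1000, "boot_vector": 0x1200})
table.rewrite({"config_block": 0x8000})    # post-update firmware location
print(hex(table.resolve("config_block")))  # 0x8000 -> the second data is used
```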
  • Patent number: 11347498
    Abstract: Systems and methods for modifying bytecode at runtime are provided. A virtual machine can execute bytecode of an application. The virtual machine can receive a modification for the application that includes modified bytecode for the application. The virtual machine can identify a portion of the bytecode of the application that corresponds to the modified bytecode. The virtual machine can update the portion of the bytecode of the application at runtime using the modification.
    Type: Grant
    Filed: February 26, 2013
    Date of Patent: May 31, 2022
    Assignee: Red Hat, Inc.
    Inventors: Filip Elias, Filip Nguyen
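This abstract describes replacing a matching portion of an application's bytecode inside a running virtual machine. As a hedged analogue (not the patented mechanism, which targets a different runtime), CPython lets the same idea be demonstrated by swapping a function's code object while the interpreter is running:

```python
def greet():
    return "hello"

def greet_patched():
    return "hello, world"

# The running "virtual machine" (here, CPython) executes greet's bytecode.
assert greet() == "hello"

# A modification arrives containing modified bytecode; the portion of the
# application it corresponds to (the greet function) is identified and its
# bytecode is replaced at runtime, without restarting the interpreter.
greet.__code__ = greet_patched.__code__
assert greet() == "hello, world"
```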
  • Patent number: 11347562
    Abstract: Described herein are systems, methods, and software to manage configurations between dependent clusters. In one implementation, a management system maintains a data structure that indicates relationships between clusters in a computing environment. The management system further identifies a configuration modification to a first cluster and identifies other clusters associated with the first cluster based on the data structure. Once the other clusters are identified, the management system may determine configuration modifications for the other clusters based on the data structure and initiate deployment of the configuration modifications.
    Type: Grant
    Filed: July 9, 2019
    Date of Patent: May 31, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Joel Baxter
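A minimal sketch of the propagation idea in this entry: a relationship data structure identifies which clusters depend on a modified cluster, and derived modifications are deployed to them. The graph contents and the derivation rule below are placeholders, not the patented policy.

```python
# Relationships between clusters: cluster -> clusters that depend on its configuration.
DEPENDENTS = {
    "storage": ["compute-a", "compute-b"],
    "compute-a": [],
    "compute-b": [],
}

def propagate(cluster, modification, deploy):
    """Identify clusters related to the modified cluster, derive their
    modifications from the relationship data structure, and deploy them."""
    pending = [(cluster, modification)]
    while pending:
        current, change = pending.pop()
        deploy(current, change)
        for dependent in DEPENDENTS.get(current, []):
            # Derived modification for a dependent cluster; copying the change
            # unchanged is a placeholder for the real derivation rule.
            pending.append((dependent, change))

propagate("storage", {"replication_factor": 3},
          deploy=lambda c, m: print(f"deploy {m} to {c}"))
```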
  • Patent number: 11340948
    Abstract: A method for controlling a transactional processing system having transactions that include multiple tasks, a throughput limit, and a transaction processing time limit includes allocating a plurality of threads to be used by multiple tasks to achieve a throughput approximating the throughput limit. The method assigns the multiple tasks to the plurality of threads and assigns respectively different processing delays to the plurality of threads. The processing delays span an interval less than the transaction processing time limit. The method processes the multiple tasks within the transaction processing time limit by executing the plurality of threads at times determined by the respective processing delays.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: May 24, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jovin Vasanth Kumar Deva Sahayam Arul Raj, Avinash G. Pillai, Apsara Karen Selvanayagam, Jinghua Chen
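The scheduling idea in this entry can be sketched directly: allocate a set of threads, assign each a different processing delay, and keep the delays inside an interval shorter than the transaction processing time limit so all tasks still finish in time. The 0.8 margin and the round-robin task split below are assumptions.

```python
import threading
import time

def staggered_delays(num_threads, time_limit, margin=0.8):
    """Spread per-thread start delays across an interval shorter than the
    transaction processing time limit (the 0.8 margin is an assumption)."""
    span = time_limit * margin
    return [i * span / num_threads for i in range(num_threads)]

def run_tasks(tasks, num_threads, time_limit):
    delays = staggered_delays(num_threads, time_limit)
    chunks = [tasks[i::num_threads] for i in range(num_threads)]  # assign tasks to threads
    threads = []
    for delay, chunk in zip(delays, chunks):
        def worker(d=delay, items=chunk):
            time.sleep(d)                # thread-specific processing delay
            for task in items:
                task()
        t = threading.Thread(target=worker)
        threads.append(t)
        t.start()
    for t in threads:
        t.join()

run_tasks([lambda i=i: print("task", i) for i in range(8)], num_threads=4, time_limit=1.0)
```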
  • Patent number: 11334397
    Abstract: Techniques for migrating virtual machines in logical clusters based on demand for the applications are disclosed. In one example, a system may include a logical cluster that spans across a first datacenter located at a first site and a second datacenter located at a second site, the second datacenter being a replication of the first datacenter. The first datacenter may include a virtual machine executing an application. Further, the system may include a management node communicatively coupled to the first datacenter and the second datacenter. The management node may include a dynamic affinity policy engine to monitor the application running in the first datacenter, determine a demand for the application from the first datacenter and the second datacenter based on the monitoring, and recommend migration of the virtual machine hosting the application from the first datacenter to the second datacenter based on the demand for the application.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: May 17, 2022
    Assignee: VMWARE, INC.
    Inventors: Ravi Kumar Reddy Kottapalli, Srinivas Sampatkumar Hemige
  • Patent number: 11321141
    Abstract: A method comprises receiving a request to execute an instance of a given software container, determining source code entities of source code of the given software container, and generating a given software container profile for the given software container based at least in part on rankings associated with the source code entities. The method also comprises creating a resource management plan for the given software container utilizing one or more machine learning algorithms, the resource management plan comprising resource management metric thresholds determined based at least in part on historical resource utilization data for additional software containers having associated software container profiles similar to the given software container profile.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: May 3, 2022
    Assignee: Dell Products L.P.
    Inventors: Mohammad Rafey, Siddharth Agrawal
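A toy version of the profiling-and-thresholding flow described in this entry: derive a coarse profile from source-code-entity rankings, then set a resource-management metric threshold from historical utilization of containers with a similar profile. The profile labels, numbers, and percentile choice are illustrative, and the patent's machine learning step is reduced to a lookup here.

```python
# Historical peak CPU utilization (millicores) for containers, grouped by profile.
# Both the profiles and the numbers are illustrative assumptions.
HISTORY = {
    "io-heavy":  [350, 420, 500, 610],
    "cpu-heavy": [900, 1100, 1250],
}

def profile_from_rankings(entity_rankings):
    """Collapse source-code-entity rankings into a coarse profile label.
    The ranking scheme and the cut-off are placeholders, not the patented model."""
    return "cpu-heavy" if sum(entity_rankings.values()) > 10 else "io-heavy"

def cpu_threshold(profile, pct=0.95):
    """Resource-management metric threshold taken from historical utilization of
    containers whose profile is similar (here: an identical label)."""
    data = sorted(HISTORY[profile])
    idx = min(len(data) - 1, int(pct * len(data)))
    return data[idx]

profile = profile_from_rankings({"image_decode": 7, "matrix_ops": 6})
print(profile, cpu_threshold(profile))   # cpu-heavy 1250
```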
  • Patent number: 11321114
    Abstract: A virtualized application runs on top of a guest operating system (OS) of a virtual machine and is supported by a file system of the guest OS. The method of supporting the virtualized application with the file system includes provisioning a first virtual disk as a data store of the file system and a second virtual disk for the virtualized application, wherein the first and second virtual disks store first and second files of the virtualized application, respectively, retrieving metadata of the virtualized application, updating a master file table of the file system according to the retrieved metadata to map the first files to logical blocks of the file system, updating the master file table to map the second files to additional logical blocks according to the retrieved metadata, and creating a mapping for the additional logical blocks, that is used during an input/output operation, according to the retrieved metadata.
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: May 3, 2022
    Assignee: VMware, Inc.
    Inventors: Jairam Choudhary, Arun Passi
  • Patent number: 11307899
    Abstract: A method, computer program product, and computing system for generating a virtual storage appliance configuration file. A storage system may be queried for physical configuration information associated with deploying a virtual storage appliance based upon, at least in part, the virtual storage appliance configuration file. One or more virtual storage appliance deployment vulnerabilities associated with the storage system may be identified based upon, at least in part, the virtual storage appliance configuration file and the physical configuration information. A notification including the identified one or more virtual storage appliance deployment vulnerabilities may be generated.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: April 19, 2022
    Assignee: EMC IP HOLDING COMPANY, LLC
    Inventors: Dmitry V. Krivenok, Jared C. Lyon
  • Patent number: 11263037
    Abstract: According to a computer-implemented method, a first virtual machine (VM) is deployed on a first hypervisor from a non-clustered server pool to run a workload of one or more applications. A dummy VM is configured on a second hypervisor from the non-clustered server pool to reserve same resources as the first VM without powering the dummy VM. The first VM is powered with a cold start on the second hypervisor using the resources on the dummy VM. Also, the first VM is provided with a same VM configuration on the second hypervisor that was on the first hypervisor.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: March 1, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ramesh Kumble, Pramod Belsare, Satish Lodam
  • Patent number: 11263054
    Abstract: Disclosed are aspects of memory-aware placement in systems that include graphics processing units (GPUs) that are virtual GPU (vGPU) enabled. In some embodiments, a computing environment is monitored to identify graphics processing unit (GPU) data for a plurality of virtual GPU (vGPU) enabled GPUs of the computing environment, and a plurality of vGPU requests are received. A respective vGPU request includes a GPU memory requirement. GPU configurations are determined in order to accommodate the vGPU requests. The GPU configurations are determined based on an integer linear programming (ILP) vGPU request placement model. Configured vGPU profiles are applied for the vGPU enabled GPUs, and vGPUs are created based on the configured vGPU profiles. The vGPU requests are assigned to the vGPUs.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: March 1, 2022
    Assignee: VMWARE, INC.
    Inventors: Anshuj Garg, Uday Pundalik Kurkure, Hari Sivaraman, Lan Vu
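An illustrative integer linear program for memory-aware vGPU placement, in the spirit of the abstract but not necessarily the patent's exact model: let x_{ij} = 1 if request i is placed on GPU j, m_i the request's GPU memory requirement, and M_j the memory capacity of GPU j.

```latex
\begin{aligned}
\text{maximize}   \quad & \sum_{i} \sum_{j} x_{ij} && \text{(requests successfully placed)} \\
\text{subject to} \quad & \sum_{j} x_{ij} \le 1 && \forall i \quad \text{(each request placed at most once)} \\
                        & \sum_{i} m_i \, x_{ij} \le M_j && \forall j \quad \text{(GPU memory capacity)} \\
                        & x_{ij} \in \{0, 1\} && \forall i, j
\end{aligned}
```

A solver's assignment would then drive which vGPU profiles get configured on each GPU and which vGPUs are created for the requests.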
  • Patent number: 11263053
    Abstract: Tags are applied to gather information about an application that has been deployed across a plurality of resources, so that application resources can be brought under management and a blueprint for the deployed application can be constructed using information gathered from the tags. A method of identifying resources of a deployed application for management comprises applying tags to currently deployed resources of the application including virtual machines (VMs), storing tag data in association with the VMs to which the tags have been applied in an inventory data store, searching the inventory data store for VMs to which first tags have been applied, wherein the first tags each have a tag value that identifies the deployed application, searching the inventory data store for second tags that have been applied to the VMs, and adding the resources identified by the first and second tags to a group of application resources to be managed.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: March 1, 2022
    Assignee: VMware, Inc.
    Inventors: Bryan P. Halter, Alexey Intelegator, Chinmay Gore, Maximilian Choly
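A small sketch of the two-pass tag search over an inventory data store: first tags identify the deployed application, and second tags found on those same VMs add further grouping. The inventory contents and tag keys are invented for illustration.

```python
# Inventory: VM name -> set of (tag_key, tag_value) pairs applied to it.
INVENTORY = {
    "vm-web-1": {("app", "storefront"), ("tier", "web")},
    "vm-db-1":  {("app", "storefront"), ("tier", "db")},
    "vm-other": {("app", "billing")},
}

def application_resources(app_name):
    """Search the inventory for VMs whose first tag identifies the deployed
    application, then collect the second tags applied to those VMs and add
    everything to one managed group."""
    managed = {}
    for vm, tags in INVENTORY.items():
        if ("app", app_name) in tags:                             # first tag: application identity
            secondary = {k: v for k, v in tags if k != "app"}     # second tags, e.g. tier
            managed[vm] = secondary
    return managed

print(application_resources("storefront"))
# {'vm-web-1': {'tier': 'web'}, 'vm-db-1': {'tier': 'db'}}
```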
  • Patent number: 11243799
    Abstract: An apparatus includes a plurality of virtual machines, a hypervisor coupled to the plurality of virtual machines, and a graphical processing unit (GPU) coupled to the hypervisor. The plurality of virtual machines are allocated a plurality of time slices. The hypervisor initiates a world switch to a first virtual machine of the plurality of virtual machines. The GPU makes a determination as to whether to adjust the time slice associated with the first virtual machine based on an assessment of time slice adjustment parameters related to an execution time of at least one of the plurality of virtual machines.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: February 8, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Alexander Fuad Ashkar, Hans Fernlund
  • Patent number: 11237868
    Abstract: Systems and methods for machine learning-based power capping and virtual machine placement in cloud platforms are disclosed. A method includes applying a machine learning model to predict whether a request for deployment of a virtual machine corresponds to deployment of a user-facing (UF) virtual machine or a non-user-facing (NUF) virtual machine. The method further includes sorting a list of candidate servers based on both a chassis score and a server score for each server to determine a ranked list of the candidate servers, where the server score depends at least on whether the request for the deployment of the virtual machine is determined to be a request for a deployment of a UF virtual machine or a request for a deployment of an NUF virtual machine. The method further includes deploying the virtual machine to a server with highest rank among the ranked list of the candidate servers.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: February 1, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ioannis Manousakis, Marcus F. Fontoura, Alok Gautam Kumbhare, Ricardo G. Bianchini, Nithish Mahalingam, Reza Azimi
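A hedged sketch of the ranking step described in this entry: candidate servers are sorted by chassis score and by a server score whose sense depends on whether the VM was predicted user-facing. The classifier itself is not shown, and the scoring fields and weights are assumptions.

```python
def rank_servers(candidates, is_user_facing):
    """Sort candidate servers by a combined chassis score and server score; the
    server score depends on whether the VM was predicted user-facing (UF) or
    non-user-facing (NUF). Scoring choices here are illustrative assumptions."""
    def server_score(s):
        # Assumed policy: UF VMs favor headroom (latency); NUF VMs favor packing.
        return s["headroom"] if is_user_facing else -s["headroom"]
    return sorted(candidates,
                  key=lambda s: (s["chassis_score"], server_score(s)),
                  reverse=True)

candidates = [
    {"name": "srv-1", "chassis_score": 0.9, "headroom": 0.2},
    {"name": "srv-2", "chassis_score": 0.9, "headroom": 0.6},
]
# A classifier (not shown) predicts the deployment request is user-facing.
best = rank_servers(candidates, is_user_facing=True)[0]
print("deploy to", best["name"])   # srv-2: same chassis score, more headroom
```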
  • Patent number: 11232034
    Abstract: A cache circuit associated with a hypervisor system is disclosed. The cache circuit comprises a cache memory circuit comprising a plurality of cachelines, wherein each cacheline is configured to store data associated with one or more virtual machines (VMs) of a plurality of VMs associated with the hypervisor system, and a plurality of tag array entries respectively associated with the plurality of cachelines. In some embodiments, each tag array entry of the plurality of tag array entries comprises a tag field configured to store a tag identifier (ID) that identifies an address of a main memory circuit to which the data stored in the corresponding cacheline is associated, and a VM tag field configured to store a VM ID associated with a VM to which the data stored in the corresponding cacheline is associated.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: January 25, 2022
    Assignee: Infineon Technologies AG
    Inventors: Manoj Kumar Harihar, Romain Ygnace
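The tag-array layout described in this entry pairs an address tag with a VM ID so one VM cannot hit another VM's cached data. A toy direct-mapped model of that lookup (sizes, indexing, and field widths are simplifications):

```python
from dataclasses import dataclass

@dataclass
class TagEntry:
    tag: int     # identifies the main-memory address the cached data belongs to
    vm_id: int   # identifies the virtual machine the cached data belongs to

class VmAwareCache:
    """Toy direct-mapped cache: a lookup hits only when both the address tag
    and the VM ID stored in the tag array entry match the request."""

    def __init__(self, num_lines=4):
        self.num_lines = num_lines
        self.tags = [None] * num_lines       # tag array entries
        self.lines = [None] * num_lines      # cachelines (data)

    def _index(self, address):
        return address % self.num_lines

    def read(self, address, vm_id):
        i = self._index(address)
        entry = self.tags[i]
        if entry and entry.tag == address // self.num_lines and entry.vm_id == vm_id:
            return self.lines[i]             # hit: same address and same VM
        return None                          # miss (or another VM's data)

    def fill(self, address, vm_id, data):
        i = self._index(address)
        self.tags[i] = TagEntry(tag=address // self.num_lines, vm_id=vm_id)
        self.lines[i] = data

cache = VmAwareCache()
cache.fill(0x10, vm_id=1, data=b"vm1-data")
print(cache.read(0x10, vm_id=1))  # b'vm1-data'
print(cache.read(0x10, vm_id=2))  # None: VM 2 cannot hit VM 1's line
```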
  • Patent number: 11221880
    Abstract: The present invention provides an adaptive computing resource allocation approach for virtual network functions (VNFs), including the following two steps. Step 1: predicting VNFs' real-time computing resource requirements; Step 1.1: offline profiling different types of VNFs to obtain a parameter relation between the required amount of computing resources and the ingress packet rate; Step 1.2: online monitoring the network traffic information of each VNF, and predicting VNFs' required amount of computing resources in combination with the parameters from Step 1.1. Step 2: reallocating computing resources based on VNFs' resource requirements. The computing resource allocation approach includes a direct allocation approach and an incremental approach. The adaptive computing resource allocation approach for virtual network functions of the present invention allocates computing resources based on VNFs' actual requirements, and remedies performance bottlenecks caused by fair allocation.
    Type: Grant
    Filed: July 4, 2017
    Date of Patent: January 11, 2022
    Assignee: Shanghai Jiao Tong University
    Inventors: Haibing Guan, Ruhui Ma, Jian Li, Xiaokang Hu
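A compact sketch of the two steps in this entry: an offline-profiled relation from ingress packet rate to required compute, used online to predict each VNF's demand, followed by either direct or incremental reallocation. The linear form, coefficients, and step size are assumptions, not profiled values.

```python
# Offline profiling result per VNF type: CPU share ~= a * ingress_packet_rate + b.
# The linear form and the coefficients are assumptions for illustration.
PROFILE = {
    "firewall": (0.002, 5.0),    # (a, b): CPU% per kpps, baseline CPU%
    "nat":      (0.001, 3.0),
}

def predict_cpu(vnf_type, packet_rate_kpps):
    """Predict a VNF's required compute from its monitored ingress packet rate."""
    a, b = PROFILE[vnf_type]
    return a * packet_rate_kpps + b

def reallocate(current_alloc, predicted, incremental=True, step=5.0):
    """Direct approach: jump straight to the prediction.
    Incremental approach: move toward it by a bounded step."""
    if not incremental:
        return predicted
    delta = predicted - current_alloc
    return current_alloc + max(-step, min(step, delta))

demand = predict_cpu("firewall", packet_rate_kpps=20000)   # monitored online
print(demand)                              # 45.0% of a core under these assumptions
print(reallocate(30.0, demand))            # incremental: 35.0
print(reallocate(30.0, demand, False))     # direct: 45.0
```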
  • Patent number: 11221885
    Abstract: A method for allocating resources for a machine learning model is disclosed. A machine learning model to be executed on a special purpose machine learning model processor is received. A computational dataflow graph is generated from the machine learning model. The computational dataflow graph represents the machine learning model and includes nodes, connector directed edges, and parameter directed edges. The operations of the computational dataflow graph are scheduled and then compiled using a deterministic instruction set architecture that specifies functionality of a special purpose machine learning model processor. An amount of resources required to execute the computational dataflow graph is determined. Resources are allocated based on the determined amount of resources required to execute the machine learning model represented by the computational dataflow graph.
    Type: Grant
    Filed: May 7, 2020
    Date of Patent: January 11, 2022
    Assignee: Google LLC
    Inventors: Jonathan Ross, John Michael Stivoric
  • Patent number: 11221884
    Abstract: According to one aspect of the present disclosure, a method and technique for hybrid virtual machine configuration management is disclosed. The method includes assigning to a first set of virtual resources associated with a virtual machine a first priority and assigning to a second set of virtual resources associated with the virtual machine a second priority lower than the first priority. An operating system of the virtual machine is provided with the first and second priorities assigned to the respective first and second sets of virtual resources. To process a workload, the operating system dispatches the virtual resources from the first set before dispatching the virtual resources from the second set.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: January 11, 2022
    Assignee: International Business Machines Corporation
    Inventors: Vaijayanthimala K. Anand, Wen-Tzer T. Chen, William A. Maron, Mysore S. Srinivas, Basu Vaidyanathan
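The dispatch order in this entry reduces to a simple rule: exhaust the higher-priority first set of virtual resources before drawing from the second set. A minimal sketch, with resource names invented for illustration:

```python
def dispatch(first_set, second_set, workload_units):
    """Dispatch virtual resources for a workload, drawing from the
    higher-priority first set before touching the lower-priority second set."""
    chosen = []
    for pool in (first_set, second_set):          # first priority, then second
        for vcpu in pool:
            if len(chosen) == workload_units:
                return chosen
            chosen.append(vcpu)
    return chosen

first = ["vcpu0", "vcpu1"]         # e.g. higher-priority virtual processors
second = ["vcpu2", "vcpu3"]        # e.g. lower-priority virtual processors
print(dispatch(first, second, 3))  # ['vcpu0', 'vcpu1', 'vcpu2']
```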