Patents Examined by Lewis A. Bullock, Jr.
  • Patent number: 11900093
    Abstract: A pipeline can be constructed for implementing a software-stack resolution process. For example, a system can receive a request from a client device for a recommended software-stack for a target software item. The system can also receive pipeline configuration data specifying configurable pipeline units to be included in the pipeline. The pipeline can include a search process for identifying and analyzing a group of software-stack candidates associated with the target software item. The system can construct the pipeline using the configurable pipeline units based on the pipeline configuration data. One or more of the configurable pipeline units can be arranged in the pipeline to guide the search process by adjusting one or more parameters of the search process. The system can then execute the pipeline and transmit a response to the client device indicating a recommended software-stack for the target software item.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: February 13, 2024
    Assignee: Red Hat, Inc.
    Inventors: Fridolin Pokorny, Christoph Goern
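A minimal Python sketch of a configurable-unit pipeline in the spirit of patent 11900093 above; the unit names, the registry, and the pruning logic are illustrative assumptions, not the patented resolver.

```python
# Illustrative sketch only: unit names and behavior are assumptions.
from dataclasses import dataclass, field

@dataclass
class SearchState:
    candidates: list                                   # software-stack candidates under consideration
    params: dict = field(default_factory=dict)         # tunable search parameters

class PipelineUnit:
    def run(self, state: SearchState) -> None:
        raise NotImplementedError

class BoostRecentReleases(PipelineUnit):
    def run(self, state):
        # A unit "guiding" the search by adjusting a search parameter.
        state.params["recency_weight"] = 2.0

class PruneKnownBadStacks(PipelineUnit):
    def run(self, state):
        state.candidates = [c for c in state.candidates if "broken-lib" not in c]

def build_pipeline(config: list[str]) -> list[PipelineUnit]:
    # Pipeline constructed from configurable units named in the configuration data.
    registry = {"boost_recent": BoostRecentReleases, "prune_bad": PruneKnownBadStacks}
    return [registry[name]() for name in config]

def resolve(target: str, candidates: list, config: list[str]) -> list:
    state = SearchState(candidates=list(candidates))
    for unit in build_pipeline(config):
        unit.run(state)
    return state.candidates                            # recommendation(s) for the target item

print(resolve("app", [["flask", "broken-lib"], ["flask", "gunicorn"]], ["prune_bad"]))
```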
  • Patent number: 11893137
    Abstract: According to a disclosed embodiment, data analysis is secured with a microservice architecture and data anonymization in a multitenant application. Tenant data is received by a first microservice in a multitenant application. The tenant data is isolated from other tenant data in the first microservice and stored separately from other tenant data in a tenant database. The tenant data is anonymized in the first microservice and thereafter provided to a second microservice. The second microservice stores the anonymized tenant data in an analytics database. The second microservice, upon request, analyzes anonymized tenant data from a plurality of tenants from the analytics database and provides an analytics result to the first microservice.
    Type: Grant
    Filed: September 21, 2021
    Date of Patent: February 6, 2024
    Assignee: SAP SE
    Inventors: Konstantin Schwed, Sergey Smirnov
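A hypothetical sketch of the anonymize-then-forward flow described in patent 11893137 above; the in-memory "databases" and SHA-256 tokenization are assumptions, not SAP's implementation.

```python
# Illustrative sketch only: storage and anonymization choices are assumptions.
import hashlib

tenant_db = {}      # per-tenant storage kept by the first microservice
analytics_db = []   # store used by the second (analytics) microservice

def anonymize(record: dict, tenant_id: str) -> dict:
    # Replace direct identifiers with a one-way token; keep only analytic fields.
    token = hashlib.sha256(f"{tenant_id}:{record['user']}".encode()).hexdigest()[:12]
    return {"user_token": token, "metric": record["metric"]}

def first_microservice_ingest(tenant_id: str, record: dict) -> None:
    tenant_db.setdefault(tenant_id, []).append(record)   # isolated per tenant
    analytics_db.append(anonymize(record, tenant_id))    # handed to the 2nd service

def second_microservice_analyze() -> float:
    # Cross-tenant analytics over anonymized data only.
    return sum(r["metric"] for r in analytics_db) / len(analytics_db)

first_microservice_ingest("tenant-a", {"user": "alice", "metric": 3})
first_microservice_ingest("tenant-b", {"user": "bob", "metric": 5})
print(second_microservice_analyze())  # 4.0
```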
  • Patent number: 11893405
    Abstract: A client device includes resource caches, and a processor coupled to the resource caches. The processor receives resources from different resource feeds, and caches user interfaces (UI) of the resources from the different resource feeds, with at least one resource feed having a resource cache separate from the resource cache of the other resource feeds. Statuses of the resource feeds are determined, with at least one status indicating availability of the at least one resource feed having the separate resource cache. UI elements from the separate resource cache are retrieved for display in response to the at least one resource feed associated with the separate resource cache not being available.
    Type: Grant
    Filed: August 13, 2020
    Date of Patent: February 6, 2024
    Inventors: Georgy Momchilov, Avijit Gahtori, Mukund Ingale
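A toy sketch of the per-feed cache fallback in patent 11893405 above; the feed names, availability flags, and the live_fetch callback are assumptions.

```python
# Illustrative sketch only: feeds and the availability check are assumptions.
class ResourceCache:
    def __init__(self):
        self._ui = {}
    def put(self, resource_id, ui_element):
        self._ui[resource_id] = ui_element
    def all(self):
        return list(self._ui.values())

feeds = {
    "feed-a": {"available": True,  "cache": ResourceCache()},
    "feed-b": {"available": False, "cache": ResourceCache()},  # separate cache
}
feeds["feed-b"]["cache"].put("mail", "<MailTile>")

def ui_elements_for(feed_name, live_fetch):
    feed = feeds[feed_name]
    if feed["available"]:
        return live_fetch(feed_name)      # feed reachable: use live UI data
    return feed["cache"].all()            # feed down: serve cached UI elements

print(ui_elements_for("feed-b", live_fetch=lambda f: []))  # ['<MailTile>']
```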
  • Patent number: 11893404
    Abstract: A system is provided that enables efficient traffic forwarding in a hypervisor. During operation, the hypervisor determines that a packet is from a first virtual machine (VM) running on the hypervisor and destined to a second VM running on a remote hypervisor. The hypervisor then includes a virtual local area network (VLAN) identifier of a transit VLAN (TVLAN) in a layer-2 header of the packet. The TVLAN is dedicated for inter-VM traffic associated with a distributed virtual routing (DVR) instance operating on the hypervisor and the remote hypervisor. Subsequently, the hypervisor sets a first media access control (MAC) address of the hypervisor as a source MAC address and a second MAC address of the remote hypervisor as a destination MAC address in the layer-2 header. The hypervisor then determines an egress port for the packet based on the second MAC address.
    Type: Grant
    Filed: October 23, 2019
    Date of Patent: February 6, 2024
    Assignee: Nutanix, Inc.
    Inventor: Ankur Kumar Sharma
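A simplified model of the layer-2 rewrite in patent 11893404 above; the VLAN number, MAC tables, and packet dict are assumptions standing in for real datapath structures.

```python
# Illustrative sketch only: tables and field names are assumptions.
TRANSIT_VLAN = 4000                                    # TVLAN dedicated to DVR inter-VM traffic
LOCAL_HYPERVISOR_MAC = "aa:aa:aa:00:00:01"
REMOTE_HYPERVISOR_MAC = {"vm-2": "bb:bb:bb:00:00:02"}  # destination VM -> remote host MAC
EGRESS_PORT_BY_MAC = {"bb:bb:bb:00:00:02": "uplink0"}

def forward_inter_vm(packet: dict) -> dict:
    l2 = packet["l2"]
    l2["vlan"] = TRANSIT_VLAN                           # tag with the transit VLAN
    l2["src_mac"] = LOCAL_HYPERVISOR_MAC                # source MAC = local hypervisor
    l2["dst_mac"] = REMOTE_HYPERVISOR_MAC[packet["dst_vm"]]  # dest MAC = remote hypervisor
    packet["egress_port"] = EGRESS_PORT_BY_MAC[l2["dst_mac"]]
    return packet

pkt = {"dst_vm": "vm-2", "l2": {"vlan": 10, "src_mac": "...", "dst_mac": "..."}}
print(forward_inter_vm(pkt)["egress_port"])  # uplink0
```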
  • Patent number: 11886326
    Abstract: Techniques for configuring test operations on a per-module basis are disclosed. A system receives a command for configuring, on a per-module basis, test operations recited in a set of module code corresponding to a particular module of a plurality of modules in a module system. The module system specifies accessibility of each module in the plurality of modules to other modules in the plurality of modules. The system stores configuration information based on the command and configures a test operation included in an element of the particular module based on the stored configuration information. Configuring the test operation includes one of: (a) enabling the test operation without affecting other code recited in-line with the test operation in the element of the particular module; or (b) disabling the test operation without affecting other code recited in-line with the test operation in the element of the particular module.
    Type: Grant
    Filed: September 19, 2018
    Date of Patent: January 30, 2024
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Chris Hegarty, Daniel Jean-Michel Fuchs, Sean James Coffey
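A loose Python analogue of the per-module test-operation switch in patent 11886326 above (the patent targets a module system such as Java's); the configuration store and the assert-based test operation are assumptions.

```python
# Illustrative sketch only: a per-module enable/disable switch for test operations.
test_op_config = {}   # module name -> True (enabled) / False (disabled)

def configure(module: str, enabled: bool) -> None:
    test_op_config[module] = enabled      # stored per module, per the command

def run_element(module: str, value: int) -> int:
    if test_op_config.get(module, False):
        # Test operation enabled: the check runs; surrounding code is unaffected.
        assert value >= 0, f"negative value in {module}"
    return value * 2                      # in-line code runs either way

configure("com.example.orders", True)
configure("com.example.billing", False)
print(run_element("com.example.billing", -1))  # -2: test op disabled for this module
print(run_element("com.example.orders", 3))    # 6: test op enabled and passes
```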
  • Patent number: 11886903
    Abstract: Systems and methods for providing a continuous uptime of guest Virtual Machines (“VMs”) during upgrade of a virtualization host device. The methods comprising: connecting all of the guest VMs' frontends or drivers to at least one old control VM which is currently running on the virtualization host device and which contains old virtualization software; creating at least one upgraded control VM that contains new virtualization software and that is to replace the old control VM in the virtualization host device; connecting the guest VMs' frontends or drivers to the upgraded control VM; and uninstalling the old control VM from the virtualization host device.
    Type: Grant
    Filed: October 20, 2020
    Date of Patent: January 30, 2024
    Inventor: Marcus Granado
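A high-level sketch of the control-VM swap sequence in patent 11886903 above; the ControlVM class and frontend strings are assumptions, and no real hypervisor plumbing is shown.

```python
# Illustrative sketch only: a state model of the upgrade sequence.
class ControlVM:
    def __init__(self, version):
        self.version = version
        self.frontends = []

def upgrade_host(guest_frontends, old_ctrl: ControlVM, new_version: str) -> ControlVM:
    old_ctrl.frontends = list(guest_frontends)     # guests served by the old control VM
    new_ctrl = ControlVM(new_version)              # create the upgraded control VM
    for fe in guest_frontends:                     # reconnect guest frontends one by one;
        old_ctrl.frontends.remove(fe)              # guests keep running throughout
        new_ctrl.frontends.append(fe)
    assert not old_ctrl.frontends                  # old control VM now idle; can be uninstalled
    return new_ctrl

new_ctrl = upgrade_host(["guest1-blkfront", "guest2-netfront"], ControlVM("v1"), "v2")
print(new_ctrl.version, new_ctrl.frontends)
```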
  • Patent number: 11886898
    Abstract: Various aspects are disclosed for graphics processing unit (GPU)-remoting latency aware migration. In some aspects, a host executes a GPU-remoting client that includes a GPU workload. GPU-remoting latencies are identified for hosts of a cluster. A destination host is identified based on having a lower GPU-remoting latency than the host currently executing the GPU-remoting client. The GPU-remoting client is migrated from its current host to the destination host.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: January 30, 2024
    Assignee: VMware, Inc.
    Inventors: Lan Vu, Uday Pundalik Kurkure, Hari Sivaraman
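A minimal sketch of the latency-aware placement decision in patent 11886898 above; the latency table and the simple lower-is-better comparison are assumptions.

```python
# Illustrative sketch only: host names and latencies are assumptions.
def pick_destination(current_host: str, latencies_ms: dict) -> str | None:
    # Migrate only if another host in the cluster offers lower GPU-remoting latency.
    best = min(latencies_ms, key=latencies_ms.get)
    return best if latencies_ms[best] < latencies_ms[current_host] else None

latencies = {"host-a": 4.1, "host-b": 1.7, "host-c": 2.9}  # GPU-remoting latency per host
dest = pick_destination("host-a", latencies)
if dest:
    print(f"migrate GPU-remoting client from host-a to {dest}")
```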
  • Patent number: 11886900
    Abstract: Facilitating running a multi-process application using a set of unikernels includes receiving an indication of a request to fork a first process running in a first unikernel virtual machine. It further includes, in response to receiving the indication of the request to fork the process running in the first unikernel virtual machine, deploying a second unikernel virtual machine to run a second process that is a child of the first process. Unikernel scaling includes determining that a unikernel virtual machine to be deployed is associated with at least a portion of a kernel image that is already cached. It further includes, in response to determining that the unikernel virtual machine to be deployed is associated with the at least portion of the kernel image that is already cached, mapping the unikernel virtual machine to the at least portion of the kernel image that is already cached.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: January 30, 2024
    Assignee: NanoVMs, Inc.
    Inventors: Ian Eyberg, William Yongwoo Jhun
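A toy model of fork-as-new-unikernel plus kernel-image caching from patent 11886900 above; the dict-based cache and the deploy_unikernel helper are assumptions, not NanoVMs' code.

```python
# Illustrative sketch only: the cache and VM records are plain dicts.
image_cache = {}   # kernel image id -> shared "mapped" image object

def deploy_unikernel(image_id: str, program: str) -> dict:
    # Reuse an already-cached kernel image instead of loading it again.
    image = image_cache.setdefault(image_id, {"id": image_id, "loaded": True})
    return {"image": image, "program": program, "children": []}

def handle_fork(parent_vm: dict) -> dict:
    # fork() in a unikernel: spin up a second unikernel VM as the child process.
    child = deploy_unikernel(parent_vm["image"]["id"], parent_vm["program"])
    parent_vm["children"].append(child)
    return child

parent = deploy_unikernel("nanos-0.1", "webserver")
child = handle_fork(parent)
print(child["image"] is parent["image"])   # True: kernel image mapped from the cache
```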
  • Patent number: 11875170
    Abstract: Examples described herein relate to a manageability controller for controlling a display of a screen video. The manageability controller may receive screen video data from a hypervisor running on a host operating system (OS) that is executable by a main processing resource separate from the manageability controller's processing resource. The screen video data may include a host OS screen video data corresponding to the host OS, a virtual machine (VM) screen video data corresponding to a VM running on the hypervisor, or both. Further, the manageability controller may store the host OS screen video data or the VM screen video data in a physical video memory based on a screen selection input.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: January 16, 2024
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Lee A. Preimesberger, Jorge Cisneros, Vartan Yosef Kasheshian
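A small sketch of the screen-selection behavior in patent 11875170 above; the frame bytes and the dict standing in for physical video memory are assumptions.

```python
# Illustrative sketch only: a dict stands in for the physical video memory.
physical_video_memory = {"frame": None}

def store_selected_frame(host_os_frame: bytes, vm_frames: dict, selection: str) -> None:
    # The screen selection input decides which screen video data is kept.
    if selection == "host-os":
        physical_video_memory["frame"] = host_os_frame
    else:
        physical_video_memory["frame"] = vm_frames[selection]

store_selected_frame(b"host-os-frame", {"vm-1": b"vm-1-frame"}, selection="vm-1")
print(physical_video_memory["frame"])   # b'vm-1-frame'
```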
  • Patent number: 11853804
    Abstract: Routing log-based information between production servers and logging servers is disclosed. A log entry for a logging server is generated at a production server. A shard identifier is computed for a shard associated with the logging server based on application of a hashing algorithm to properties associated with the production server. The hashing algorithm and properties are selected to prevent or minimize the likelihood of another production server computing the same shard identifier for the same shard associated with the logging server. The log entry is transmitted to the shard associated with the logging server. A determination is made that the logging server has malfunctioned by detecting that the log entry transmitted to the shard is absent. In response, another shard identifier is computed for another shard of another logging server, and a subsequent log entry from the production server is transmitted to that shard. No load balancers are used by the routing system.
    Type: Grant
    Filed: May 24, 2022
    Date of Patent: December 26, 2023
    Assignee: Rapid7, Inc.
    Inventors: Frank Mitchell, Andrew Thompson
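A minimal sketch of hashing production-server properties to a shard, as in patent 11853804 above; the SHA-256 choice, the property mix, and the failover counter are assumptions.

```python
# Illustrative sketch only: the hash inputs and shard count are assumptions.
import hashlib

NUM_SHARDS = 16

def shard_for(server_props: dict, attempt: int = 0) -> int:
    # Hash server-specific properties (plus a failover counter) into a shard id,
    # so different production servers land on different shards without a load balancer.
    key = f"{server_props['hostname']}|{server_props['instance_id']}|{attempt}"
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

props = {"hostname": "prod-17", "instance_id": "i-0abc"}
primary = shard_for(props)
failover = shard_for(props, attempt=1)   # used if the transmitted log entry never appears
print(primary, failover)
```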
  • Patent number: 11853802
    Abstract: Disclosed are various embodiments for centralized and dynamically generated service configurations for data center and other region builds. A computing environment may be configured to receive requests to deploy a new computing stack or service from different geographical regions at a central service. For the individual ones of the requests, the computing environment may identify computing resources of the computing environment required to deploy the new computing stack or service, determine an order of creation of the computing resources, and create at least one virtual process that automatically allocates the computing resources for the new computing stack based on a predetermined order of creation.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: December 26, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Eric Wei, Andrew J. Lusk, Juan-Pierre Longmore, Anuj Prateek
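A sketch of creating resources in a predetermined dependency order, as in patent 11853802 above; the resource names and the use of graphlib's topological sort are assumptions.

```python
# Illustrative sketch only: dependencies and the allocator callback are assumptions.
from graphlib import TopologicalSorter

def creation_order(dependencies: dict) -> list:
    # dependencies: resource -> resources it needs before it can be created
    return list(TopologicalSorter(dependencies).static_order())

def build_stack(dependencies: dict, allocate) -> None:
    for resource in creation_order(dependencies):
        allocate(resource)               # virtual process allocating each resource in order

deps = {"service": {"compute", "database"}, "database": {"network"},
        "compute": {"network"}, "network": set()}
build_stack(deps, allocate=lambda r: print("allocating", r))
```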
  • Patent number: 11847482
    Abstract: Methods and systems for balancing resources in a virtual machine computing environment are disclosed. A server can receive data describing the configuration of host machines and virtual machines in a client computing environment. A simulated computing environment can be created that mirrors the configuration of the client computing environment. Data relating to resource usage (e.g., processor, memory, and storage) of the host machines can be received. The resource usage can be simulated in the simulated computing environment to mirror the usage of the client computing environment. A recommendation to execute a migration of a virtual machine can be received from the simulated computing environment. Instructions to execute a migration corresponding to the recommended migration can be generated and sent to the client computing environment.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: December 19, 2023
    Assignee: VMWARE, INC.
    Inventors: Rahul Ajmera, Amit Ratnapal Sangodkar, Jivan Madtha
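A toy "simulation" of the rebalancing recommendation in patent 11847482 above; the load numbers, the 0.2 imbalance threshold, and the busiest-to-calmest heuristic are assumptions, not VMware's algorithm.

```python
# Illustrative sketch only: thresholds and the mirrored data are assumptions.
def recommend_migration(hosts: dict) -> dict | None:
    # hosts: host -> {"cpu": usage fraction, "vms": {vm: cpu share}}
    busiest = max(hosts, key=lambda h: hosts[h]["cpu"])
    calmest = min(hosts, key=lambda h: hosts[h]["cpu"])
    if hosts[busiest]["cpu"] - hosts[calmest]["cpu"] < 0.2:
        return None                                     # already balanced
    vm = max(hosts[busiest]["vms"], key=hosts[busiest]["vms"].get)
    return {"vm": vm, "from": busiest, "to": calmest}

mirror = {  # simulated environment populated from the client's reported usage data
    "esx-1": {"cpu": 0.92, "vms": {"db": 0.6, "web": 0.3}},
    "esx-2": {"cpu": 0.35, "vms": {"cache": 0.3}},
}
print(recommend_migration(mirror))   # {'vm': 'db', 'from': 'esx-1', 'to': 'esx-2'}
```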
  • Patent number: 11847554
    Abstract: The present disclosure discloses a data processing method and related products, in which the data processing method includes: generating, by a general-purpose processor, a binary instruction according to device information of an AI processor, and generating an AI learning task according to the binary instruction; transmitting, by the general-purpose processor, the AI learning task to the cloud AI processor for running; receiving, by the general-purpose processor, a running result corresponding to the AI learning task; and determining, by the general-purpose processor, an offline running file according to the running result, where the offline running file is generated according to the device information of the AI processor and the binary instruction when the running result satisfies a preset requirement. By implementing the present disclosure, the debugging between the AI algorithm model and the AI processor can be achieved in advance.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: December 19, 2023
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Yao Zhang, Xiaofu Meng, Shaoli Liu
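A placeholder sketch of the compile, run-in-cloud, and conditionally-save-offline-file flow in patent 11847554 above; every function body and the 0.95 accuracy requirement are assumptions, not Cambricon's toolchain.

```python
# Illustrative sketch only: stand-in functions for the steps named in the abstract.
import json

def compile_for_device(model: str, device_info: dict) -> bytes:
    # "Binary instruction" generated from the model and the target device info.
    return json.dumps({"model": model, "isa": device_info["isa"]}).encode()

def run_on_cloud_ai_processor(binary: bytes) -> dict:
    return {"accuracy": 0.97, "latency_ms": 3.2}       # stand-in running result

def maybe_write_offline_file(model: str, device_info: dict, path: str) -> bool:
    binary = compile_for_device(model, device_info)
    result = run_on_cloud_ai_processor(binary)
    if result["accuracy"] >= 0.95:                     # preset requirement met
        with open(path, "wb") as f:
            f.write(binary)                            # offline running file
        return True
    return False

print(maybe_write_offline_file("resnet18", {"isa": "mlu-v2"}, "model.offline"))
```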
  • Patent number: 11842218
    Abstract: A virtual machine management service obtains a request to instantiate a virtual machine image (VMI) to implement a virtual network function (VNF). The request specifies a set of processor requirements corresponding to instantiation of the VMI. In response to the request, the service identifies, from a server comprising a set of processor cores, available processor capacity. The service determines, based on the available processor capacity and the set of processor requirements, whether to instantiate the VMI on to a subset of processor cores of the server. Based on this determination, the service instantiates the VMI on to the subset of processor cores to implement the VNF.
    Type: Grant
    Filed: January 24, 2023
    Date of Patent: December 12, 2023
    Assignee: Cisco Technology, Inc.
    Inventors: Yanping Qu, Sabita Jasty, Kaushik Pratap Biswas, Yegappan Lakshmanan
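A minimal sketch of the capacity check and core-subset placement in patent 11842218 above; the requirement field and the per-server free-core lists are assumptions, not Cisco's scheduler.

```python
# Illustrative sketch only: bookkeeping structures are assumptions.
def place_vnf(free_cores_by_server: dict, requirements: dict) -> dict | None:
    needed = requirements["dedicated_cores"]
    for server, free_cores in free_cores_by_server.items():
        if len(free_cores) >= needed:                  # enough available processor capacity
            chosen = free_cores[:needed]               # subset of processor cores for the VMI
            free_cores_by_server[server] = free_cores[needed:]
            return {"server": server, "cores": chosen}
    return None                                        # do not instantiate the VMI

capacity = {"server-1": [2, 3], "server-2": [0, 1, 4, 5]}
print(place_vnf(capacity, {"dedicated_cores": 3}))     # placed on server-2, cores [0, 1, 4]
```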
  • Patent number: 11836519
    Abstract: A method of marshalling existing software applications to automatically execute a task in a cloud environment includes generating actions that together execute the task; passing the actions to code generation services, where each of the code generation services is associated with a corresponding software application. Each of the code generation services is configured to select a subset of the actions that can be executed by the corresponding software application, and to generate second actions to be executed by the corresponding software application that implement the subset of the actions. The method also includes providing a job definition for the task including each of the second actions for each of the software applications.
    Type: Grant
    Filed: October 18, 2019
    Date of Patent: December 5, 2023
    Assignee: Oracle International Corporation
    Inventor: Frank Joseph Klein
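A sketch of per-application code generation services each claiming a subset of actions, as in patent 11836519 above; the action names and generated command strings are purely illustrative.

```python
# Illustrative sketch only: each "code generation service" picks the actions its
# application can execute and emits second actions for them.
def storage_codegen(actions):
    mine = [a for a in actions if a.startswith("storage.")]
    return [f"storage-tool {a}" for a in mine]          # second actions for this app

def compute_codegen(actions):
    mine = [a for a in actions if a.startswith("compute.")]
    return [f"compute-tool {a}" for a in mine]

def build_job_definition(task_actions, services):
    job = []
    for service in services:                            # each service picks its subset
        job.extend(service(task_actions))
    return {"steps": job}                               # job definition for the task

actions = ["storage.create_bucket", "compute.launch_instance"]
print(build_job_definition(actions, [storage_codegen, compute_codegen]))
```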
  • Patent number: 11829277
    Abstract: Systems and methods for remote debugging perform remote debugging of a receiving device, such as a set-top box or other connected media player, even when the receiving device is located behind a firewall. The receiving device has a persistent outbound connection with a message server. Since it is an outbound connection, it connects across firewall restrictions. A remote debug machine sends a message via the message server to the receiving device over a network. The message carries the command/operation to be executed by the receiving device. The receiving device, which receives the command, executes the command and sends the output of the command to a debug data upload server to which the remote debug machine has access.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: November 28, 2023
    Assignee: DISH NETWORK TECHNOLOGIES INDIA PRIVATE LIMITED
    Inventors: Rakesh Eluvan Periyaeluvan, Gopikumar Ranganathan, Amit Kumar
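A sketch of the message-server-mediated remote-debug loop in patent 11829277 above, with in-process queues standing in for the message and upload servers; no real networking or firewall traversal is shown.

```python
# Illustrative sketch only: queues stand in for the servers in the abstract.
import queue, subprocess

message_server = queue.Queue()   # receiving device holds a persistent outbound connection here
debug_upload_server = []         # remote debug machine reads command output from here

def remote_debug_machine_send(command: str) -> None:
    message_server.put(command)                         # command to be run on the device

def receiving_device_loop(max_messages: int = 1) -> None:
    for _ in range(max_messages):
        command = message_server.get()                  # arrives over the outbound connection
        output = subprocess.run(command, shell=True,
                                capture_output=True, text=True).stdout
        debug_upload_server.append({"cmd": command, "out": output})  # upload the result

remote_debug_machine_send("echo hello-from-stb")
receiving_device_loop()
print(debug_upload_server[0]["out"].strip())   # hello-from-stb
```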
  • Patent number: 11822948
    Abstract: In response to a request to remove a PCI device from a virtual machine (VM), a processing device may transmit, to a guest operating system (OS) of a VM, an indication that a peripheral component interconnect (PCI) device connected to the VM has been disconnected such that the PCI device appears disconnected to a PCI port driver of the guest OS and simultaneously communicates with a device driver of the guest OS. The processing device may transmit a device removal request to the device driver. The removal request may be transmitted to the device driver without the delay associated with the “push button” approach to device removal since the guest OS already believes the PCI device has been disconnected from the VM. A graceful removal of the device driver may be performed and the PCI device may be disconnected from the VM.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: November 21, 2023
    Assignee: Red Hat, Inc.
    Inventor: Michael Tsirkin
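A toy state model of the removal sequence in patent 11822948 above; the guest dict and driver strings are assumptions, not Red Hat's guest or hypervisor code.

```python
# Illustrative sketch only: a state model of the removal steps.
def remove_pci_device(guest: dict, device: str) -> None:
    # 1. Make the device appear disconnected to the guest's PCI port driver,
    #    while the device driver can still communicate with the device.
    guest["pci_port_driver"][device] = "disconnected"
    # 2. Ask the device driver to remove itself gracefully; no "push button" delay,
    #    since the port driver already considers the device unplugged.
    guest["device_drivers"].remove(device)
    # 3. Actually disconnect the device from the VM.
    guest["attached_devices"].remove(device)

guest = {"pci_port_driver": {"vnic0": "connected"},
         "device_drivers": ["vnic0"], "attached_devices": ["vnic0"]}
remove_pci_device(guest, "vnic0")
print(guest)
```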
  • Patent number: 11822907
    Abstract: A code repository stores application code. A code management system determines, based at least in part on requested features selected in a graphical user interface, code requirements that include attributes of application code needed to achieve the requested features. The code management system determines, based at least in part on the determined code requirements and the metadata for each entry of application code stored in the code repository, one or more candidate application code entries from the code repository. The code management system presents the candidate application code entries for user selection in the graphical user interface. After receipt of a user selection of a selected application code, the selected application code is provided to a computing device associated with the user.
    Type: Grant
    Filed: August 11, 2021
    Date of Patent: November 21, 2023
    Assignee: Bank of America Corporation
    Inventors: Madhusudhanan Krishnamoorthy, Shadab Bubere, Vaasudevan Sundaram, Samrat Bhasin
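A minimal sketch of matching repository metadata against requested features, as in patent 11822907 above; the metadata schema and the overlap-count ranking are assumptions.

```python
# Illustrative sketch only: repository entries and scoring are assumptions.
repository = [
    {"name": "csv-export", "features": {"export", "csv"}},
    {"name": "pdf-report", "features": {"export", "pdf", "charts"}},
    {"name": "auth-widget", "features": {"login"}},
]

def candidates_for(requested_features: set, top_n: int = 2) -> list:
    # Rank stored code entries by how many requested features their metadata covers.
    scored = [(len(requested_features & entry["features"]), entry["name"])
              for entry in repository]
    scored.sort(reverse=True)
    return [name for score, name in scored[:top_n] if score > 0]

print(candidates_for({"export", "charts"}))   # ['pdf-report', 'csv-export']
```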
  • Patent number: 11822462
    Abstract: A method and computer program for generating code coverage information during testing of a code sequence are described, in which the code sequence comprises decisions, each having one or more conditions as inputs. The method includes executing the code sequence on target processing circuitry under the control of test stimuli and maintaining, in memory, a code coverage table for at least one decision. When a decision is evaluated, a bitstring is created within a storage element, each position in the bitstring being associated with one of the conditions and the value in that position representing the value of that condition used in evaluating the decision. The bitstring is used to identify the entry, in the code coverage table associated with the evaluated decision, for that combination of values of the conditions, and a confirmation value is recorded in that entry, indicating that the decision has been evaluated for that entry.
    Type: Grant
    Filed: July 5, 2019
    Date of Patent: November 21, 2023
    Assignee: Arm Limited
    Inventors: Sanne Wouda, Robert James Catherall
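A Python model of the per-decision bitstring coverage table in patent 11822462 above; the patent instruments code on target processing circuitry, so the dict-based table and the all()-based example decision here are only illustrative.

```python
# Illustrative sketch only: the table is a dict, the decision is all(conditions).
coverage = {}   # decision id -> {bitstring: True once that combination was observed}

def evaluate_decision(decision_id: str, conditions: list) -> bool:
    # One bit per condition, in order, records the values used in this evaluation.
    bitstring = "".join("1" if c else "0" for c in conditions)
    coverage.setdefault(decision_id, {})[bitstring] = True   # confirmation value
    return all(conditions)                                   # the decision itself

evaluate_decision("d1", [True, False])
evaluate_decision("d1", [True, True])
print(coverage)   # {'d1': {'10': True, '11': True}}
```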
  • Patent number: 11822457
    Abstract: Disclosed are systems, methods, and articles for determining compatibility of a mobile application and operating system on a mobile device. In some aspects, a method includes receiving one or more data values from a mobile device having a mobile medical software application installed thereon, the data value(s) characterizing a version of the software application, a version of an operating system installed on the mobile device, and one or more attributes of the mobile device; determining whether the mobile medical software application is compatible with the operating system by at least comparing the received data value(s) to one or more test values in a configuration file; and sending a message to the mobile device based on the determining, the message causing the software application to operate in one or more of a normal mode, a safe mode, and a non-operational mode.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: November 21, 2023
    Assignee: Dexcom, Inc.
    Inventors: Issa Sami Salameh, Douglas William Burnette, Tifo Vu Hoang, Steven David King, Stephen M. Madigan, Michael Robert Mensinger, Andrew Attila Pal, Michael Ranen Tyler
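A sketch of comparing reported device values against a configuration file to pick an operating mode, as in patent 11822457 above; the config schema and comparison rules are assumptions, not Dexcom's logic.

```python
# Illustrative sketch only: mode names mirror the abstract; the rules are assumptions.
def select_mode(device_report: dict, config: dict) -> str:
    app, os_ver, model = (device_report["app_version"],
                          device_report["os_version"],
                          device_report["device_model"])
    if model in config["blocked_models"]:
        return "non-operational"
    if os_ver in config["tested_os_versions"].get(app, []):
        return "normal"
    return "safe"                          # untested combination: limited functionality

config = {"tested_os_versions": {"1.4.0": ["iOS 17.1", "iOS 17.2"]},
          "blocked_models": {"OldPhone-3"}}
report = {"app_version": "1.4.0", "os_version": "iOS 18.0", "device_model": "Phone-X"}
print(select_mode(report, config))         # safe
```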