Patents Examined by Bradley A Teets
  • Patent number: 11281393
    Abstract: A data management and storage (DMS) cluster of peer DMS nodes manages data of a tenant of a multi-tenant compute infrastructure. The compute infrastructure includes an envoy connecting the DMS cluster to virtual machines of the tenant executing on the compute infrastructure. The envoy provides the DMS cluster with access to the virtual tenant network and the virtual machines of the tenant connected via the virtual tenant network for DMS services such as data fetch jobs to generate snapshots of the virtual machines. The envoy sends the snapshot from the virtual machine to a peer DMS node via the connection for storage within the DMS cluster. The envoy provides the DMS cluster with secure access to authorized tenants of the compute infrastructure while maintaining data isolation of tenants within the compute infrastructure.
    Type: Grant
    Filed: November 26, 2018
    Date of Patent: March 22, 2022
    Assignee: Rubrik, Inc.
    Inventors: Abdul Jabbar Abdul Rasheed, Soham Mazumdar, Hardik Vohra, Mudit Malpani
  • Patent number: 11275621
    Abstract: A device and a method for operating a computer system are described. A job to be processed by the computer system is assignable to one of a plurality of tasks for processing, and the job is assigned as a function of the result of a comparison in which a first value is compared to a second value. The first value characterizes a first computing expenditure expected in the computer system if the job to be processed is processed in a first task of the plurality of tasks; the second value characterizes a second computing expenditure expected in the computer system if the job is processed in a second task of the plurality of tasks.
    Type: Grant
    Filed: November 8, 2017
    Date of Patent: March 15, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Bjoern Saballus, Elmar Ott, Jascha Friedrich, Juergen Bregenzer, Simon Kramer, Michael Pressler, Sebastian Stuermer
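A minimal sketch of the expenditure comparison described in the abstract of patent 11275621 above, not an implementation of the patent itself; the cost model, task fields, and names below are hypothetical.
```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    queued_jobs: int      # jobs already waiting in this task
    cost_per_job: float   # hypothetical per-job processing cost

def expected_expenditure(task: Task, job_cost: float) -> float:
    # The first/second "value" of the abstract: expenditure expected if the
    # job were processed in this task (existing load plus the new job).
    return task.queued_jobs * task.cost_per_job + job_cost

def assign_job(job_cost: float, first: Task, second: Task) -> Task:
    # Assign the job as a function of the result of comparing the two values.
    v1 = expected_expenditure(first, job_cost)
    v2 = expected_expenditure(second, job_cost)
    return first if v1 <= v2 else second

if __name__ == "__main__":
    t1 = Task("task_a", queued_jobs=3, cost_per_job=2.0)
    t2 = Task("task_b", queued_jobs=1, cost_per_job=5.0)
    print(assign_job(job_cost=1.5, first=t1, second=t2).name)
```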
  • Patent number: 11269685
    Abstract: In an approach for managing physical processor usage of a shared memory buffer, a processor receives a request for memory from a process running on a physical processor. A processor determines whether the request for memory is less than or equal to a pre-determined threshold, wherein the pre-determined threshold is based on characteristics of a server on which the physical processor resides, needs of the server, and a frequency of requests of each memory size. Responsive to determining the request for memory is greater than the pre-determined threshold, a processor identifies a node on which the physical processor resides. A processor identifies a memory buffer of a set of memory buffers allocated to the node on which the physical processor resides. A processor allocates the memory buffer.
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: March 8, 2022
    Assignee: International Business Machines Corporation
    Inventors: Muruganandam Somasundaram, Jeffrey Paul Kubala, Jerry A. Moody, Hunter J. Kauffman
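A rough sketch of the threshold-and-node buffer selection described in the abstract of patent 11269685 above; the threshold value, node names, and buffer pools are assumptions for illustration only.
```python
THRESHOLD_BYTES = 4096  # hypothetical pre-determined threshold

# Hypothetical per-node pools of shared memory buffers.
node_buffers = {
    "node0": ["buf0", "buf1"],
    "node1": ["buf2"],
}

def allocate(request_bytes: int, requesting_node: str):
    # Requests at or below the threshold could be served by a general-purpose
    # allocator (simplified here to a placeholder string).
    if request_bytes <= THRESHOLD_BYTES:
        return "small-allocation"
    # Larger requests: identify the node the physical processor resides on
    # and hand out a buffer from the set allocated to that node.
    pool = node_buffers[requesting_node]
    return pool.pop() if pool else None

print(allocate(16384, "node0"))
```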
  • Patent number: 11256531
    Abstract: In an approach for isolating physical processors during optimization of virtual machine placement, a server is provided comprising a plurality of containers and a plurality of physical processors. A processor builds a set of bit masks for each type of physical processor required for a logical partition. A processor builds a set of solution spaces based on the plurality of containers and an amount of each type of container of the plurality of containers. A processor completes a combinatorial search of the set of bit masks and the set of solution spaces. A processor identifies a solution space of the set of solution spaces for the logical partition. The physical and logical configuration of the server is changed based on the solution space identified for the logical partition.
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: February 22, 2022
    Assignee: International Business Machines Corporation
    Inventors: Muruganandam Somasundaram, Jeffrey Paul Kubala, Seth E. Lederer, Jeffrey G. Chan, Jerry A. Moody
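One possible reading of the bit-mask and combinatorial-search idea in the abstract of patent 11256531 above, sketched with made-up container types and processor counts; the real method is more involved than this subset enumeration.
```python
# Hypothetical containers: (name, processor type, processors provided).
containers = [("c0", "general", 4), ("c1", "general", 2),
              ("c2", "ziip", 2), ("c3", "ziip", 4)]
required = {"general": 4, "ziip": 4}   # hypothetical partition requirement

def search():
    # Each integer 'mask' selects a subset of containers; bit i set means
    # container i is part of the candidate solution space.
    for mask in range(1, 1 << len(containers)):
        supply = {}
        for i, (_, ptype, count) in enumerate(containers):
            if mask & (1 << i):
                supply[ptype] = supply.get(ptype, 0) + count
        if all(supply.get(t, 0) >= n for t, n in required.items()):
            yield mask

first = next(search())
print([containers[i][0] for i in range(len(containers)) if first & (1 << i)])
```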
  • Patent number: 11231959
    Abstract: A system may detect a quit operation corresponding to a child application. In response to detecting the quit operation, the system may store, in a memory, a mapping between the child application identifier and the child application task, and suspend display of the child application. The system may generate a foreground and background switching entry corresponding to the child application, the foreground and background switching entry associated with the child application identifier. The system may display the foreground and background switching entry in a visible region of the graphical user interface generated based on a parent application. The system may detect a selection operation indicative of the foreground and background switching entry. In response to the selection operation, the system may obtain the child application task from the memory according to the child application identifier, and resume display of the child application based on the child application task.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: January 25, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Hao Jun Hu, Zong Zhuo Wu, Zhaowei Wang, Shang Tao Liang, Yi Duan, Xiao Kang Long, Chao Lin, Ji Sheng Huang, Qing Jie Lin, Hao Hu
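A minimal sketch of the suspend/resume flow described in the abstract of patent 11231959 above, assuming an in-memory identifier-to-task mapping; the data structures and field names are illustrative, not from the patent.
```python
# Hypothetical in-memory mapping from child-application identifier to its
# suspended task, plus the switching entries surfaced in the parent app's UI.
suspended_tasks = {}
switch_entries = []

def quit_child(app_id: str, task: dict):
    # On quit: persist the task, hide the child app, and surface a
    # foreground/background switching entry tied to the identifier.
    suspended_tasks[app_id] = task
    switch_entries.append(app_id)

def select_entry(app_id: str) -> dict:
    # On selection: look the task up by identifier and resume display.
    return suspended_tasks[app_id]

quit_child("mini-app-1", {"scroll": 120, "route": "/cart"})
print(select_entry("mini-app-1"))
```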
  • Patent number: 11226846
    Abstract: Systems and methods are disclosed for managing resources associated with cluster-based resource pool(s). According to illustrative implementations, innovations herein may include or involve one or more of best fit algorithms, infrastructure based service provision, tolerance and/or ghost processing features, dynamic management service having monitoring and/or decision process features, as well as virtual machine and resource distribution features.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: January 18, 2022
    Assignee: Virtustream IP Holding Company LLC
    Inventors: Vincent G. Lubsey, Kevin D. Reid, Karl J. Simpson, Rodney John Rogers
  • Patent number: 11204793
    Abstract: Aspects of the present invention provide an approach that evaluates a locally running image (e.g., such as that for a virtual machine (VM)) and determines if that image could run more efficiently and/or more effectively in an alternate computing environment (e.g., a cloud computing environment). Specifically, embodiments of the present invention evaluate the local (existing/target) image's actual and perceived performance, as well as the anticipated/potential performance if the image were to be migrated to an alternate environment. The anticipated/potential performance can be measured based on another image that is similar to the existing/target image but where that image is running in a different computing environment. Regardless, the system would display a recommendation to the end user if it were determined that the image could perform better in the alternate environment (or vice versa). It is understood that performance is just one illustrative metric for which the system would perform a comparison.
    Type: Grant
    Filed: April 11, 2019
    Date of Patent: December 21, 2021
    Assignee: SERVICENOW, INC.
    Inventors: Kulvir S. Bhogal, Gregory J. Boss, Nitin Gaur, Andrew R. Jones
  • Patent number: 11200083
    Abstract: Reconstituting a machine image separates constituent parts of a machine image, and for each part, determines whether an exact version of the part is available on the target machine. If an exact version of the part is not available on the target machine, an inexact part is looked for on the target machine. Whether an inexact part is found may be determined based on attribute policy specification and similarity computation. For the inexact part found on the target machine, any dependencies may be identified and processed as a part to be reconstituted for the machine image. If no exact part and no inexact part are found on the target machine, the part is transferred from a source machine to the target machine. A machine image is created based on parts.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: December 14, 2021
    Assignee: International Business Machines Corporation
    Inventors: Alexei Karve, Andrzej Kochut
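A sketch of the exact/inexact/transfer decision described in the abstract of patent 11200083 above; the part inventories, substitution table, and dependency map are hypothetical stand-ins for whatever catalogs the real system would consult.
```python
# Hypothetical inventories; none of these names come from the patent.
target_exact = {"kernel-5.10", "libc-2.31"}
target_inexact = {"python-3.9": "python-3.8"}     # wanted part -> similar part on target
dependencies = {"python-3.8": ["libssl-1.1"]}     # deps of inexact substitutes

def reconstitute(image_parts):
    plan, queue = [], list(image_parts)
    while queue:
        part = queue.pop(0)
        if part in target_exact:
            plan.append(("reuse-exact", part))
        elif part in target_inexact:
            substitute = target_inexact[part]
            plan.append(("reuse-inexact", substitute))
            # Dependencies of the inexact part are themselves reconstituted.
            queue.extend(dependencies.get(substitute, []))
        else:
            plan.append(("transfer-from-source", part))
    return plan

print(reconstitute(["kernel-5.10", "python-3.9", "app-layer"]))
```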
  • Patent number: 11188372
    Abstract: A computing system may be in communication with client computing devices. The computing system may include a cloud infrastructure, an offline cache, and a VDA configured to concurrently have a first registration with the cloud infrastructure, and a second registration with the offline cache, and provide corresponding virtual desktop instances for the client computing devices based upon either the first registration or the second registration. The offline cache may be configured to broker local resources for the virtual desktop instances when the cloud infrastructure is unavailable. The VDA may be configured to transition to the offline cache using the second registration when the cloud infrastructure is unavailable.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: November 30, 2021
    Assignee: CITRIX SYSTEMS, INC.
    Inventors: Leo C. Singleton, IV, Mukund Ingale, Georgy Momchilov, Balasubramanian Swaminathan
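A very small sketch of the dual-registration fallback described in the abstract of patent 11188372 above; the Broker class and method names here are assumptions, not Citrix APIs.
```python
class Broker:
    def __init__(self, name, up=True):
        self.name, self.up = name, up
    def available(self):
        return self.up
    def broker(self, client):
        return f"{client} -> desktop via {self.name}"

class VDA:
    # Holds two concurrent registrations, as in the abstract above.
    def __init__(self, cloud, offline_cache):
        self.cloud, self.offline_cache = cloud, offline_cache
    def broker_session(self, client):
        # Prefer the cloud registration; transition to the offline cache,
        # which brokers local resources, when the cloud is unavailable.
        target = self.cloud if self.cloud.available() else self.offline_cache
        return target.broker(client)

vda = VDA(Broker("cloud", up=False), Broker("offline-cache"))
print(vda.broker_session("client-1"))
```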
  • Patent number: 11182211
    Abstract: The present application discloses a task allocation method and task allocation apparatus for distributed data computing. The task allocation method includes: receiving storage parameters for target data to be computed in distributed data; mapping data slices of the target data to a resilient distributed dataset based on the storage parameters, each data slice corresponding respectively to a partition in the resilient distributed dataset; and assigning each partition to a storage node to generate and perform a computing task. By using data storage information in the distributed database to allocate computing tasks to the storage nodes corresponding to the data, only data in local memory needs to be called during the computing process, reducing I/O redundancy and the time consumed by repeated data forwarding.
    Type: Grant
    Filed: May 25, 2016
    Date of Patent: November 23, 2021
    Assignee: HANGZHOU HIKVISION DIGITAL TECHNOLOGY CO., LTD
    Inventor: Zhihui Liu
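A minimal sketch of the locality-aware partition assignment described in the abstract of patent 11182211 above; the slice-to-node mapping and names are hypothetical.
```python
# Hypothetical mapping of data-slice storage locations, e.g. as reported by
# a distributed database; slice and node names are made up.
slice_locations = {
    "slice-0": "node-a",
    "slice-1": "node-b",
    "slice-2": "node-a",
}

def plan_tasks(slices):
    # Each slice maps to one RDD partition; each partition is assigned to
    # the storage node that already holds the slice, so the computing task
    # reads from local memory instead of forwarding data over the network.
    return [{"partition": i, "slice": s, "node": slice_locations[s]}
            for i, s in enumerate(slices)]

for task in plan_tasks(["slice-0", "slice-1", "slice-2"]):
    print(task)
```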
  • Patent number: 11176483
    Abstract: Described herein are systems and methods for providing data sets from a constantly changing database to a streaming machine learning component. In one embodiment, a data streaming sub-system receives multiple incoming streams of data sets, in which each stream is generated in real-time by one of multiple data sources. The streaming sub-system sends data sets, on-the-fly as they are received, to storage in the memory of a database, in which there is a linkage between the storage and the time of arrival or the time of storage, of the data sets. The database receives, from a machine learning component, a request to receive data sets according to a particular time or time period. In response to such request, the database identifies such data sets according to the particular time or time period and sends them to the machine learning component.
    Type: Grant
    Filed: May 3, 2017
    Date of Patent: November 16, 2021
    Assignee: DataRobot Inc.
    Inventors: Swaminathan Sundararaman, Nisha Darshi Talagala, Gal Zuckerman
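A sketch of the time-of-arrival linkage described in the abstract of patent 11176483 above: records are stored in arrival order and served back by time window. The storage layout and function names are illustrative assumptions.
```python
import time
from bisect import bisect_left, bisect_right

# Arrival-time-ordered store: each record keeps the time it was stored,
# which is the linkage the abstract describes.
times, records = [], []

def store(record, t=None):
    t = time.time() if t is None else t
    times.append(t)
    records.append(record)

def query(t_start, t_end):
    # Serve a machine learning component's request for a time period.
    lo, hi = bisect_left(times, t_start), bisect_right(times, t_end)
    return records[lo:hi]

store({"sensor": 1}, t=100.0)
store({"sensor": 2}, t=105.0)
print(query(99.0, 101.0))
```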
  • Patent number: 11175948
    Abstract: A plurality of processing entities are maintained. A plurality of task control block (TCB) groups are generated, wherein each of the plurality of TCB groups are restricted to one or more different processing entities of the plurality of processing entities. A TCB is assigned to one of the plurality of TCB groups, at TCB creation time.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: November 16, 2021
    Assignee: International Business Machines Corporation
    Inventors: Seamus J. Burke, Trung N. Nguyen, Louis A. Rasor
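A small sketch of assigning a TCB to a group at creation time, as described in the abstract of patent 11175948 above; the group names and CPU sets are hypothetical.
```python
# Hypothetical TCB-group layout: each group is restricted to a subset of
# processing entities (here, CPU ids).
tcb_groups = {
    "group-io":  {"cpus": {0, 1}, "tcbs": []},
    "group-cpu": {"cpus": {2, 3}, "tcbs": []},
}

def create_tcb(name: str, group: str) -> dict:
    # The TCB is assigned to a group at creation time, which bounds the
    # processing entities it may later run on.
    tcb = {"name": name, "allowed_cpus": tcb_groups[group]["cpus"]}
    tcb_groups[group]["tcbs"].append(tcb)
    return tcb

print(create_tcb("cache-writeback", "group-io"))
```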
  • Patent number: 11163599
    Abstract: Techniques promote monitoring of hypervisor systems by presenting dynamic representations of hypervisor architectures that include performance indicators. A reviewer can interact with the representation to progressively view select lower-level performance indicators. Higher level performance indicators can be determined based on lower level state assessments. A reviewer can also view historical performance metrics and indicators, which can aid in understanding which configuration changes or system usages may have led to sub-optimal performance.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: November 2, 2021
    Assignee: SPLUNK INC.
    Inventors: Brian Bingham, Tristan Fletcher
  • Patent number: 11150956
    Abstract: A set of resources required to process a data integration job is determined. In response to determining that the set of resources is not available, queue occupation, for each queue in the computing environment, is predicted. Queue occupation is a workload of queue resources for a future time based on a previous workload. A best queue is selected based on the predicted queue occupation. The best queue is the queue or queues in the computing environment available to be assigned to process the data integration job without preemption. The data integration job is processed using the best queue. It is determined whether a preemption event occurred causing the removal of resources from the best queue. A checkpoint is created in response to determining that a preemption event occurred. The checkpoint indicates the last successful operation completed and provides a point where processing can resume when resources become available.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: October 19, 2021
    Assignee: International Business Machines Corporation
    Inventors: Krishna Kishore Bonagiri, Eric A. Jacobson, Ritesh Kumar Gupta, Scott Louis Brokaw
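A rough sketch of the best-queue selection and preemption checkpoint described in the abstract of patent 11150956 above; the occupation numbers, operation list, and "lowest predicted occupation wins" rule are simplifying assumptions.
```python
# Hypothetical predicted occupation per queue (0.0 = idle, 1.0 = fully busy).
predicted_occupation = {"q1": 0.9, "q2": 0.35, "q3": 0.6}

def pick_best_queue():
    # The "best" queue is one expected to run the job without preemption;
    # here that is simply the queue with the lowest predicted occupation.
    return min(predicted_occupation, key=predicted_occupation.get)

def run_job(operations):
    queue = pick_best_queue()
    checkpoint = 0
    for i, op in enumerate(operations):
        if op == "PREEMPTED":
            # Preemption event: checkpoint the operation to resume from
            # (everything before index i completed successfully).
            checkpoint = i
            break
    return queue, checkpoint

print(run_job(["extract", "transform", "PREEMPTED", "load"]))
```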
  • Patent number: 11126470
    Abstract: An allocation method for central processing units and a server using the allocation method are provided. The allocation method includes the following steps: testing a first efficacy of a server and recording a first number of first central processing unit(s) configured to perform a first task, a second number of second central processing unit(s) configured to perform a second task and the first efficacy; determining whether the first central processing unit(s) is in a busy state; increasing the first number when the first central processing unit(s) is in the busy state; determining whether a bandwidth occupied by the first task reaches a maximum bandwidth when the first central processing unit(s) is not in the busy state; increasing the second number when the bandwidth occupied by the first task does not reach the maximum bandwidth; continuously performing the aforementioned steps.
    Type: Grant
    Filed: March 29, 2017
    Date of Patent: September 21, 2021
    Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventor: Yu-Cheng Wang
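A minimal sketch of one iteration of the CPU-allocation loop described in the abstract of patent 11126470 above; the measurements passed in are hypothetical.
```python
def rebalance(first_cpus, second_cpus, first_busy, bandwidth, max_bandwidth):
    # One pass of the allocation steps in the abstract: grow the first task's
    # CPU count when it is busy, otherwise grow the second task's count while
    # the first task's bandwidth has not yet reached its maximum.
    if first_busy:
        first_cpus += 1
    elif bandwidth < max_bandwidth:
        second_cpus += 1
    return first_cpus, second_cpus

# Hypothetical measurements for one iteration.
print(rebalance(first_cpus=2, second_cpus=2, first_busy=False,
                bandwidth=800, max_bandwidth=1000))
```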
  • Patent number: 11126530
    Abstract: Systems and methods for containerized IT intelligence and management. In one embodiment, a system for containerized IT financial management comprises at least one collector, at least one meter, at least one connector, and a reporting dashboard. The at least one collector is customized and connected to at least one container platform. The at least one collector sends capacity and consumption metrics to the at least one meter for processing and aggregation.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: September 21, 2021
    Assignee: EDJX, INC.
    Inventors: Delano Seymour, Douglas Steele, John Cowan
  • Patent number: 11113090
    Abstract: A container management utility tool may deploy an object model that may persist one or more container dependencies, relationships, or a collection of containers that may represent a system function. Through a web front-end interface, for example, the containers may be started, stopped, or restarted in a specific order, and the tool automatically determines the additional containers that need to be started in order to maintain the integrity of the environment. Through the web interface, for example, the object model may be managed, and start-up orders, container dependencies, or collection maps of containers that represent a system function may be updated. For containers that may not start under load, the object model may block access to the containers until the containers are fully initialized.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: September 7, 2021
    Assignee: United Services Automobile Association (USAA)
    Inventors: Christopher T. Wilkinson, Neelsen Edward Cyrus
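A small sketch of resolving the additional containers that must be started, in dependency order, as described in the abstract of patent 11113090 above; the dependency map is hypothetical and the real object model persists far more state.
```python
# Hypothetical dependency map persisted by the object model: each container
# lists the containers it needs running before it can start.
depends_on = {
    "web": ["app"],
    "app": ["db", "cache"],
    "db": [],
    "cache": [],
}

def start_order(target):
    # Resolve the containers that must be started to maintain the integrity
    # of the environment, dependencies first.
    order, seen = [], set()
    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in depends_on.get(name, []):
            visit(dep)
        order.append(name)
    visit(target)
    return order

print(start_order("web"))   # ['db', 'cache', 'app', 'web']
```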
  • Patent number: 11106480
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to improve containerized application visibility. An example apparatus includes a container application manager to build an inventory of the containerized application, the containerized application including a virtual machine, the virtual machine hosting one or more containers, and a network topology builder to invoke a virtual machine agent of the virtual machine to obtain network traffic events from the one or more containers to generate network topology information associated with the containerized application based on the inventory, generate a network topology for the containerized application based on the network topology information, build the visualization based on the network topology, the visualization including the inventory and the network topology information, and launch a user interface to display the visualization to execute one or more computing tasks.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: August 31, 2021
    Assignee: VMWARE, INC.
    Inventors: Bin Wang, Aditi Vutukuri, Lan Luo, Margaret Petrus
  • Patent number: 11108866
    Abstract: Methods, devices, and computer programs for dynamic management of a first server application on a first server platform of a telecommunication system are disclosed, wherein a further server application is operating or installable on the first server platform or a further server platform. The first server platform has a maximum processing capacity, and a capacity fraction of the maximum processing capacity is assignable to the first server application, reserving the capacity fraction for processing the first server application. A determination of a required processing capacity for processing at least one of the first server application and the further server application, an analysis of the required processing capacity for an assignment of the capacity fraction to the first server application, and an assignment of the capacity fraction are performed.
    Type: Grant
    Filed: January 30, 2015
    Date of Patent: August 31, 2021
    Assignee: TELEFONAKTIEBOLAGET L M ERICSSON (PUBL)
    Inventors: Heino Hameleers, Frank Hundscheidt
  • Patent number: 11099894
    Abstract: A multi-tenant environment is described with configurable hardware logic (e.g., a Field Programmable Gate Array (FPGA)) positioned on a host server computer. For communicating with the configurable hardware logic, an intermediate host integrated circuit (IC) is positioned between the configurable hardware logic and virtual machines executing on the host server computer. The host IC can include management functionality and mapping functionality to map requests between the configurable hardware logic and the virtual machines. Shared peripherals can be located either on the host IC or the configurable hardware logic. The host IC can apportion resources amongst the different configurable hardware logics to ensure that no one customer can over-consume resources.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: August 24, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Mark Bradley Davis, Asif Khan, Christopher Joseph Pettey, Erez Izenberg, Nafea Bshara