Patents Examined by Charlie Sun
  • Patent number: 11163585
    Abstract: Embodiments of the present invention provide a method, system and computer program product for post-hoc image review of short-lived Linux containers. In an embodiment of the invention, a post-hoc image review method for short-lived Linux containers includes first directing the creation of a short-lived Linux container in a container management system and applying an initial configuration to the short-lived Linux container. Thereafter, the method includes detecting a termination of the short-lived Linux container. Finally, in response to the termination, the method includes snapshotting a configuration of the short-lived Linux container, comparing the initial configuration to the snapshotted configuration, and displaying a list of differences in a container management display.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: November 2, 2021
    Assignee: Google LLC
    Inventor: Richard Reinders
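    The before/after comparison this abstract describes can be sketched as a plain configuration diff. A minimal sketch, assuming dictionary-shaped container configurations; the keys and values are hypothetical, not taken from the patent.

    ```python
    def diff_config(initial: dict, snapshot: dict) -> list:
        """Compare a container's initial configuration against the snapshot
        taken at termination and return a list of (key, before, after) diffs."""
        diffs = []
        for key in sorted(set(initial) | set(snapshot)):
            before, after = initial.get(key), snapshot.get(key)
            if before != after:
                diffs.append((key, before, after))
        return diffs

    # Example: a package was installed during the container's short life.
    initial = {"packages": ["bash"], "ports": [80]}
    snapshot = {"packages": ["bash", "curl"], "ports": [80]}
    print(diff_config(initial, snapshot))
    # → [('packages', ['bash'], ['bash', 'curl'])]
    ```
    
    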
  • Patent number: 11157326
    Abstract: A method for deploying a task includes allocating nodes to the task; determining, in the network, a subnetwork, for interconnecting the allocated nodes, satisfying one or more predefined determination criteria including a first criterion according to which the subnetwork uses only links that are not allocated to any other task already deployed or that are allocated to fewer than N other tasks already deployed, N being a predefined number of one or more; allocating the subnetwork, and in particular the links belonging to that subnetwork, to the task; and implementing inter-node communication routes in the allocated subnetwork.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: October 26, 2021
    Assignee: BULL SAS
    Inventor: Jean-Noël Quintin
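    The first criterion above (use only links allocated to fewer than N already-deployed tasks) can be sketched as a simple link filter. A minimal sketch under assumed data shapes; the link-to-task mapping is hypothetical.

    ```python
    def eligible_links(link_allocations: dict, N: int) -> set:
        """Return links a new task may use: those currently allocated to
        fewer than N already-deployed tasks (N=1 means exclusive links only)."""
        return {link for link, tasks in link_allocations.items() if len(tasks) < N}

    # link -> set of tasks already using it (hypothetical topology)
    links = {("a", "b"): {"t1"}, ("b", "c"): set(), ("a", "c"): {"t1", "t2"}}
    print(eligible_links(links, N=1))  # only the unallocated link
    print(eligible_links(links, N=2))  # links shared by fewer than 2 tasks
    ```
    
    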
  • Patent number: 11150942
    Abstract: A communication device among a plurality of communication devices is used in a distributed computing system. The distributed computing system executes a target process including a plurality of partial processes by using the plurality of communication devices. The communication device includes a memory and a processor. The memory stores a trail that represents a state of the plurality of partial processes. The processor selects, from among the plurality of partial processes, an uncompleted partial process whose number of equivalent execution results is less than a target number according to the trail. The processor executes the selected uncompleted partial process and records the resulting execution result in the trail.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: October 19, 2021
    Assignee: FUJITSU LIMITED
    Inventor: Motoshi Horii
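    The selection step above (pick a partial process whose count of equivalent results is still below the target) can be sketched directly. A minimal sketch assuming the trail maps each partial process to its recorded results; the structure is hypothetical.

    ```python
    def select_partial_process(trail: dict, target: int):
        """Pick an uncompleted partial process whose number of equivalent
        (i.e. matching) execution results in the trail is below the target."""
        for proc, results in trail.items():
            # count of the most common (equivalent) result recorded so far
            best = max((results.count(r) for r in set(results)), default=0)
            if best < target:
                return proc
        return None  # all partial processes have reached the target

    trail = {"p1": ["x", "x"], "p2": ["y"]}  # hypothetical trail contents
    print(select_partial_process(trail, target=2))  # → p2 (p1 already has 2 matches)
    ```
    
    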
  • Patent number: 11153184
    Abstract: Systems, methods, and computer-readable media for annotating process and user information for network flows. In some embodiments, a capturing agent, executing on a first device in a network, can monitor a network flow associated with the first device. The first device can be, for example, a virtual machine, a hypervisor, a server, or a network device. Next, the capturing agent can generate a control flow based on the network flow. The control flow may include metadata that describes the network flow. The capturing agent can then determine which process executing on the first device is associated with the network flow and label the control flow with this information. Finally, the capturing agent can transmit the labeled control flow to a second device, such as a collector, in the network.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: October 19, 2021
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Navindra Yadav, Abhishek Ranjan Singh, Anubhav Gupta, Shashidhar Gandham, Jackson Ngoc Ki Pang, Shih-Chun Chang, Hai Trong Vu
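    The labeling step this abstract describes (generate a control flow of metadata and tag it with the owning process) can be sketched as a lookup against a process table. The field names and the port-to-process mapping are hypothetical illustrations, not the patent's actual data model.

    ```python
    def label_control_flow(flow: dict, process_table: dict) -> dict:
        """Build a control flow: metadata describing a network flow,
        labeled with the process associated with the flow's source port."""
        control_flow = {"src": flow["src"], "dst": flow["dst"], "bytes": flow["bytes"]}
        control_flow["process"] = process_table.get(flow["src_port"], "unknown")
        return control_flow

    flow = {"src": "10.0.0.1", "dst": "10.0.0.2", "src_port": 443, "bytes": 1500}
    print(label_control_flow(flow, {443: "nginx"}))
    # → {'src': '10.0.0.1', 'dst': '10.0.0.2', 'bytes': 1500, 'process': 'nginx'}
    ```
    
    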
  • Patent number: 11150948
    Abstract: Systems and methods provide an extensible, multi-stage, realtime application program processing load adaptive, manycore data processing architecture shared dynamically among instances of parallelized and pipelined application software programs, according to processing load variations of said programs and their tasks and instances, as well as contractual policies. The invented techniques provide, at the same time, both application software development productivity, through presenting for software a simple, virtual static view of the actually dynamically allocated and assigned processing hardware resources, together with high program runtime performance, through scalable pipelined and parallelized program execution with minimized overhead, as well as high resource efficiency, through adaptively optimized processing resource allocation.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: October 19, 2021
    Assignee: ThroughPuter, Inc.
    Inventor: Mark Henrik Sandstrom
  • Patent number: 11144419
    Abstract: Controlled use of a memory monitor instruction and memory wait instruction in a virtualized environment. A hypervisor executing on a computing host determines that a first process executing in a virtual machine (VM) attempted to execute a memory monitor instruction. The hypervisor determines a memory range associated with the memory monitor instruction. A virtual machine control structure that corresponds to the VM is altered to prevent a virtual machine exit upon a subsequent execution of a memory wait instruction by the first process. The hypervisor executes the memory monitor instruction to cause the memory range to be monitored for a store command to the memory range. The hypervisor returns control to the first process to begin execution at an instruction after the memory monitor instruction.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: October 12, 2021
    Assignee: Red Hat, Inc.
    Inventors: Bandan Das, Karen L. Noel
  • Patent number: 11119821
    Abstract: In one embodiment, a method for FPGA accelerated serverless computing comprises receiving, from a user, a definition of a serverless computing task comprising one or more functions to be executed. A task scheduler performs an initial placement of the serverless computing task to a first host determined to be a first optimal host for executing the serverless computing task. The task scheduler determines a supplemental placement of a first function to a second host determined to be a second optimal host for accelerating execution of the first function, wherein the first function cannot be accelerated by one or more FPGAs in the first host. The serverless computing task is executed on the first host and the second host according to the initial placement and the supplemental placement.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: September 14, 2021
    Assignee: Cisco Technology, Inc.
    Inventors: Komei Shimamura, Xinyuan Huang, Amit Kumar Saha, Debojyoti Dutta
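    The two-step placement above (initial placement to the best general host, supplemental placement of functions its FPGAs cannot accelerate) can be sketched as follows. Host scores, FPGA capability tags, and function fields are hypothetical.

    ```python
    def place_task(functions: list, hosts: list) -> dict:
        """Initial placement on the highest-scoring host; supplemental
        placement of any function that host's FPGAs cannot accelerate."""
        primary = max(hosts, key=lambda h: h["score"])
        placements = {}
        for fn in functions:
            if fn["accel"] and fn["accel"] not in primary["fpgas"]:
                # supplemental placement: another host whose FPGAs support it
                alt = next(h for h in hosts if fn["accel"] in h["fpgas"])
                placements[fn["name"]] = alt["name"]
            else:
                placements[fn["name"]] = primary["name"]
        return placements

    hosts = [{"name": "h1", "score": 9, "fpgas": set()},
             {"name": "h2", "score": 5, "fpgas": {"crypto"}}]
    functions = [{"name": "f0", "accel": None}, {"name": "f1", "accel": "crypto"}]
    print(place_task(functions, hosts))  # → {'f0': 'h1', 'f1': 'h2'}
    ```
    
    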
  • Patent number: 11113108
    Abstract: Systems and methods provide an extensible, multi-stage, realtime application program processing load adaptive, manycore data processing architecture shared dynamically among instances of parallelized and pipelined application software programs, according to processing load variations of said programs and their tasks and instances, as well as contractual policies. The invented techniques provide, at the same time, both application software development productivity, through presenting for software a simple, virtual static view of the actually dynamically allocated and assigned processing hardware resources, together with high program runtime performance, through scalable pipelined and parallelized program execution with minimized overhead, as well as high resource efficiency, through adaptively optimized processing resource allocation.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: September 7, 2021
    Assignee: ThroughPuter, Inc.
    Inventor: Mark Henrik Sandstrom
  • Patent number: 11113109
    Abstract: Various examples are disclosed for cluster resource management using adaptive memory demands. Some aspects involve determining a destination memory estimate and a local memory estimate for various workloads executing in a datacenter. Goodness scores are determined corresponding to the candidate workload being executed on a number of different hosts. The goodness scores are determined using the local memory estimates for the currently executing workloads; the destination memory estimate is utilized for the candidate workload if it is not already executing on the corresponding host. The workloads are balanced based on the goodness scores.
    Type: Grant
    Filed: January 14, 2020
    Date of Patent: September 7, 2021
    Assignee: VMWARE, INC.
    Inventors: Zhelong Pan, Rajesh Venkatasubramanian, Julien Freche, Prashanth Victor
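    The scoring rule above (local estimates for resident workloads, the destination estimate for an incoming candidate) can be sketched numerically. The capacity figures and the free-fraction scoring formula are illustrative assumptions, not the patent's actual goodness metric.

    ```python
    def goodness(host: dict, candidate: dict, migrating: bool) -> float:
        """Score a host: local memory estimates for workloads already running
        there, and the destination estimate for a candidate not yet resident."""
        used = sum(w["local_est"] for w in host["workloads"])
        used += candidate["dest_est"] if migrating else candidate["local_est"]
        return (host["capacity"] - used) / host["capacity"]  # higher is better

    host = {"capacity": 64, "workloads": [{"local_est": 16}]}
    cand = {"local_est": 8, "dest_est": 12}
    print(goodness(host, cand, migrating=True))   # → 0.5625
    print(goodness(host, cand, migrating=False))  # → 0.625
    ```
    
    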
  • Patent number: 11106497
    Abstract: A first scheduler stores into a memory of a first virtual machine, a first block of jobs to be executed by the first virtual machine, the first block of jobs included in a table stored in a database associated with a server computer system. A second scheduler stores into a memory of a second virtual machine, a second block of jobs to be executed by the second virtual machine. The second block of jobs is also included in the table, has a second block size equal to the first block size, and includes jobs not in the first block. From the first virtual machine memory, the first scheduler schedules one or more jobs in the first block for execution by the first virtual machine. From the second virtual machine memory, the second scheduler schedules one or more jobs in the second block for execution by the second virtual machine.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: August 31, 2021
    Assignee: salesforce.com, inc.
    Inventors: Bhinav Sura, Dilip Devaraj, Rajavardhan Sarkapally, Kirankumar Kakanuru Gowdru
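    The partitioning described above (equal-size, non-overlapping blocks of jobs from a shared table, one block per scheduler) can be sketched as a slice of the job list. The job names are hypothetical.

    ```python
    def job_blocks(jobs: list, block_size: int) -> list:
        """Split the shared job table into equal-size, non-overlapping
        blocks, one per scheduler/virtual machine."""
        return [jobs[i:i + block_size] for i in range(0, len(jobs), block_size)]

    jobs = ["j1", "j2", "j3", "j4"]
    first, second = job_blocks(jobs, block_size=2)
    print(first, second)  # → ['j1', 'j2'] ['j3', 'j4']  (same size, no shared jobs)
    ```
    
    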
  • Patent number: 11106490
    Abstract: Context switch by changing memory pointers. A determination is made that a context switch is to be performed from a first context to a second context. Data of the first context is stored in one or more configuration state registers stored at least in part in a first memory unit and data of the second context is stored in one or more configuration state registers stored at least in part in a second memory unit. The context switch is performed by changing a pointer from the first memory unit to the second memory unit.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: August 31, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
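    The core idea above (switch contexts by changing a pointer between memory units rather than copying register contents) can be sketched with two in-memory register banks. The bank names and register values are hypothetical.

    ```python
    class ConfigStateBanks:
        """Two memory units holding configuration state registers; a context
        switch just repoints the active reference, copying nothing."""
        def __init__(self):
            self.banks = {"ctx_a": {"pc": 0x100}, "ctx_b": {"pc": 0x200}}
            self.active = "ctx_a"  # pointer to the current context's memory unit

        def context_switch(self, target: str):
            # No register copying: only the pointer changes.
            self.active = target

        def read(self, reg: str) -> int:
            return self.banks[self.active][reg]

    cpu = ConfigStateBanks()
    print(hex(cpu.read("pc")))   # → 0x100
    cpu.context_switch("ctx_b")
    print(hex(cpu.read("pc")))   # → 0x200
    ```
    
    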
  • Patent number: 11099902
    Abstract: Distributed machine learning systems and other distributed computing systems are improved by embedding compute logic at the network switch level to perform collective actions, such as reduction operations, on gradients or other data processed by the nodes of the system. The switch is configured to recognize data units that carry data associated with a collective action that needs to be performed by the distributed system, referred to herein as “compute data,” and process that data using a compute subsystem within the switch. The compute subsystem includes a compute engine that is configured to perform various operations on the compute data, such as “reduction” operations, and forward the results back to the compute nodes. The reduction operations may include, for instance, summation, averaging, bitwise operations, and so forth. In this manner, the network switch may take over some or all of the processing of the distributed system during the collective phase.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: August 24, 2021
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal
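    The reduction operations named above (summation, averaging over gradient data from the compute nodes) can be sketched as an element-wise fold a compute-enabled switch might apply. The data shapes are hypothetical; a real switch operates on packets, not Python lists.

    ```python
    def switch_reduce(fragments: list, op: str = "sum") -> list:
        """Element-wise reduction over same-length gradient fragments,
        one fragment per compute node, as a switch compute engine might do."""
        assert fragments and all(len(f) == len(fragments[0]) for f in fragments)
        if op == "sum":
            return [sum(col) for col in zip(*fragments)]
        if op == "avg":
            return [sum(col) / len(fragments) for col in zip(*fragments)]
        raise ValueError(f"unsupported reduction: {op}")

    grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # one row per node
    print(switch_reduce(grads, "sum"))  # → [9.0, 12.0]
    print(switch_reduce(grads, "avg"))  # → [3.0, 4.0]
    ```
    
    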
  • Patent number: 11093287
    Abstract: Data management for edge architected computing systems extends current storage and memory schemes of edge resources to expose interfaces to allow a device, such as an endpoint or client device, or another edge resource, to specify criteria for managing data originating from the device and stored in an edge resource, and extends the storage and memory controllers to manage data in accordance with the criteria, including removing stored data that no longer satisfies the criteria. The criteria includes a temporal hint to specify a time after which the data can be removed, a physical hint to specify a list of edge resources outside of which the data can be removed, an event-based hint to specify an event after which the data can be removed, and a quality of service condition to modify the time specified in the temporal hint based on a condition, such as memory and storage capacity of the edge resource in which the data is managed.
    Type: Grant
    Filed: May 24, 2019
    Date of Patent: August 17, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Ramanathan Sethuraman, Karthik Kumar, Mark A. Schmisseur, Brinda Ganesh
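    The removal criteria above (a temporal hint and an event-based hint) can be sketched as an eviction pass over stored objects. The field names are illustrative assumptions; the patent also describes physical-location hints and quality-of-service adjustments not shown here.

    ```python
    def evict(objects: list, now: int, events: set) -> list:
        """Keep only objects whose retention criteria still hold: drop any
        whose temporal hint has expired or whose event-based hint has fired."""
        kept = []
        for obj in objects:
            expired = obj.get("expires") is not None and now >= obj["expires"]
            triggered = obj.get("until_event") in events
            if not (expired or triggered):
                kept.append(obj)
        return kept

    store = [{"id": 1, "expires": 100},
             {"id": 2, "until_event": "trip_end"},
             {"id": 3, "expires": 500}]
    print([o["id"] for o in evict(store, now=200, events={"trip_end"})])  # → [3]
    ```
    
    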
  • Patent number: 11086681
    Abstract: A workflow resource manager receives a request to execute a workflow in a cloud computing environment, where the workflow comprises a first set of operations and a second set of operations, and where the first set of operations precedes the second set of operations in the workflow. The workflow resource manager determines a set of cloud computing resource requirements associated with the second set of operations, determines whether the set of cloud computing resource requirements associated with the second set of operations is satisfied by available cloud computing resources, and responsive to determining that the set of cloud computing resource requirements associated with the second set of operations is not satisfied by the available cloud computing resources, rejects the request to execute the workflow.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: August 10, 2021
    Assignee: Red Hat, Inc.
    Inventor: Juana Nakfour
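    The admission check above (reject the workflow up front if a later stage's resource requirements cannot be satisfied) can be sketched as follows. The resource names and workflow shape are hypothetical.

    ```python
    def admit_workflow(stages: list, available: dict) -> bool:
        """Reject a workflow if any stage's requirements exceed available
        cloud resources, rather than stranding a partially executed run."""
        for stage in stages:
            for resource, needed in stage["requires"].items():
                if needed > available.get(resource, 0):
                    return False
        return True

    workflow = [{"requires": {"cpu": 2}}, {"requires": {"cpu": 4, "gpu": 1}}]
    print(admit_workflow(workflow, {"cpu": 8, "gpu": 0}))  # → False (no GPU for stage 2)
    print(admit_workflow(workflow, {"cpu": 8, "gpu": 2}))  # → True
    ```
    
    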
  • Patent number: 11074103
    Abstract: The scheduling device divides the execution duration into a plurality of unit periods. The scheduling device then allocates each of the plurality of physical machines to a physical machine group that may execute, while satisfying the constraint conditions, migration of the virtual machine operating on the allocated physical machine and maintenance work on the allocated physical machine within one or more unit periods of the plurality of unit periods. Thereafter, for each of the plurality of physical machine groups, the scheduling device creates individual schedule information indicating a work execution duration of migration of the virtual machine operating on the allocated physical machine and that of maintenance work on the allocated physical machine within one or more unit periods of the plurality of unit periods, and outputs overall schedule information obtained by integrating the individual schedule information on each of the plurality of physical machine groups.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: July 27, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Shingo Okuno, Yukihiro Watanabe, Fumi Iikura
  • Patent number: 11068296
    Abstract: A method of improving performance of a software application executing within a virtualized computing infrastructure, wherein the application has associated with it: a hypervisor profile of characteristics of a hypervisor in the infrastructure; a network communication profile of characteristics of network communication for the application; a data storage profile of characteristics of data storage for the infrastructure; and an application profile defined collectively by the other profiles.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: July 20, 2021
    Assignee: British Telecommunications Public Limited Company
    Inventors: Kashaf Khan, Newland Andrews
  • Patent number: 11068306
    Abstract: Techniques for retaining in-memory dataframes beyond an in-memory processing session. One technique includes receiving a request to execute a first run having a first set of tasks, creating a first session to execute the first run, and executing the first run in the first session using a dataframe constructed for a dataset defined as a component of the first run. The executing the first run generates an updated dataframe. The technique further includes receiving a request to execute a second run having a second set of tasks. A dependency exists between the first run and the second run based on a condition that the dataset is defined as a component of the first run and the second run. The technique further includes creating a second session to execute the second run, and executing the second run in the second session using the updated dataframe for the dataset.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: July 20, 2021
    Assignee: ORACLE FINANCIAL SERVICES SOFTWARE LIMITED
    Inventors: Rajaram Narasimha Vadapandeshwara, Pramit Dey
  • Patent number: 11061731
    Abstract: A method of scheduling a dedicated processing resource includes: obtaining source code of an application to be compiled; extracting, during compiling of the source code, metadata associated with the application, the metadata indicating an amount of the dedicated processing resource required by the application; and obtaining, based on the metadata, the dedicated processing resource allocated to the application. In this manner, performance of the dedicated processing resource scheduling system and resource utilization is improved.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: July 13, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Junping Zhao, Kun Wang, Layne Lin Peng, Fei Chen
  • Patent number: 11061727
    Abstract: Systems and techniques are described for predicting future overlap in requests for a compute resource to address a system overload before it occurs. Requests for a resource may be tracked in time and grouped based on one or more common characteristics of the requests, such as a time of occurrence of the requests and a period that they repeat. Once grouped, different groups of requests for the resource may be tracked across at least one dimension, such as a periodic time of occurrence, a volume of requests, or a length of time of each occurrence of a group, to generate tracking data. Based on the tracking data, predictions may be generated indicating whether and to what extent the groups of resources will overlap at a future time. Additional resources may be provisioned to process the requests to prevent or reduce the likelihood of a system overload at the future time.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: July 13, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Monika Marta Gnyp, Jamie Plenderleith
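    The prediction step above (project grouped periodic requests forward and flag times where groups coincide) can be sketched as follows. The group definitions and horizon are hypothetical, and a real predictor would track volume and duration dimensions as well.

    ```python
    def groups_overlap(groups: list, horizon: int) -> dict:
        """Predict times within the horizon at which two or more periodic
        request groups fire together, signalling a possible future overload."""
        hits = {}
        for g in groups:
            t = g["start"]
            while t < horizon:
                hits.setdefault(t, []).append(g["name"])
                t += g["period"]
        return {t: names for t, names in hits.items() if len(names) > 1}

    groups = [{"name": "backup", "start": 0, "period": 6},
              {"name": "report", "start": 0, "period": 4}]
    print(groups_overlap(groups, horizon=24))  # collisions at t=0 and t=12
    ```
    
    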
  • Patent number: 11061724
    Abstract: A method, and a system embodying the method, for programmable scheduling is disclosed, encompassing: enqueueing at least one command into one of a plurality of queues having a plurality of entries; determining a category of the command at the head entry of each of the plurality of queues; processing each determined non-job category command by a non-job command arbitrator; and processing each determined job category command by a job arbitrator and assignor.
    Type: Grant
    Filed: June 5, 2017
    Date of Patent: July 13, 2021
    Assignee: MARVELL ASIA PTE, LTD.
    Inventors: Timothy Toshio Nakada, Jason Daniel Zebchuk, Gregg Alan Bouchard, Tejas Maheshbhai Bhatt, Hong Jik Kim, Ahmed Shahid, Mark Jon Kwong
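    The arbitration described above (classify the command at each queue head and route it to the job arbitrator or the non-job handler) can be sketched as one scheduling pass. The command strings and the prefix-based classifier are hypothetical.

    ```python
    from collections import deque

    def arbitrate(queues: list, is_job) -> tuple:
        """One pass: pop the head entry of each non-empty queue, classify
        it, and dispatch to the job or non-job command path."""
        jobs, non_jobs = [], []
        for q in queues:
            if q:
                cmd = q.popleft()  # head entry of this queue
                (jobs if is_job(cmd) else non_jobs).append(cmd)
        return jobs, non_jobs

    queues = [deque(["JOB:render", "CFG:reset"]), deque(["CFG:flush"])]
    jobs, others = arbitrate(queues, is_job=lambda c: c.startswith("JOB:"))
    print(jobs, others)  # → ['JOB:render'] ['CFG:flush']
    ```
    
    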