Patents Examined by Sisley N Kim
  • Patent number: 11853808
    Abstract: A method includes receiving a request to set up a computing cluster comprising at least one node, the request comprising a selection of a node graphical user interface element that represents at least one virtual machine associated with at least one of at least one cloud service provider and at least one on-premise computing device, dynamically generating a configuration file comprising configuration language to set up the computing cluster comprising the at least one node, parsing the configuration file to convert the configuration file into at least one application programming interface (API) request and sending the at least one API request to the at least one of the at least one cloud service provider and the at least one on-premise computing device to set up the computing cluster, and receiving real-time deployment information.
    Type: Grant
    Filed: January 26, 2023
    Date of Patent: December 26, 2023
    Assignee: HARPOON CORP.
    Inventors: Dominic Holt, Manuel Gauto, Mathew Jackson
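
A minimal Python sketch of the flow described for patent 11853808: a GUI node selection is turned into a dynamically generated configuration file, which is then parsed into provider API requests. The JSON configuration format, field names, and endpoint URL are illustrative assumptions, not the patent's actual configuration language or APIs.

```python
import json
from typing import Any


def generate_config(node_selections: list[dict[str, Any]]) -> str:
    """Dynamically build a configuration file (JSON here) from GUI node selections."""
    config = {
        "cluster": {
            "nodes": [
                {
                    "name": sel["name"],
                    "provider": sel["provider"],  # a cloud provider name or "on-premise"
                    "vm_size": sel.get("vm_size", "small"),
                }
                for sel in node_selections
            ]
        }
    }
    return json.dumps(config, indent=2)


def config_to_api_requests(config_text: str) -> list[dict[str, Any]]:
    """Parse the configuration file and convert each node entry into one API request."""
    config = json.loads(config_text)
    return [
        {
            "endpoint": f"https://{node['provider']}.example/api/v1/vms",  # hypothetical endpoint
            "method": "POST",
            "body": {"name": node["name"], "size": node["vm_size"]},
        }
        for node in config["cluster"]["nodes"]
    ]


# One GUI node selection becomes one provider API request.
requests_out = config_to_api_requests(
    generate_config([{"name": "node-1", "provider": "cloud-a", "vm_size": "medium"}])
)
print(requests_out)
```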
  • Patent number: 11853803
    Abstract: A workload compliance governor system includes a management system coupled to a computing system. A workload compliance governor subsystem in the computing system receives a workload performance request associated with a workload, exchanges hardware compose communications with the management system to compose hardware components for the workload, and receives back an identification of hardware components. The workload compliance governor subsystem then determines that the identified hardware components satisfy hardware compliance requirements for the workload, and configures the identified hardware components in the computing system based on the software compliance requirements for the workload in order to cause those identified hardware components to provide an operating system and at least one application that operate to perform the workload.
    Type: Grant
    Filed: October 28, 2022
    Date of Patent: December 26, 2023
    Assignee: Dell Products L.P.
    Inventors: Mukund P. Khatri, Gaurav Chawla, William Price Dawkins, Elie Jreij, Mark Steven Sanders, Walter A. O'Brien, III, Robert W. Hormuth, Jimmy D. Pike
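
To make the compliance check in patent 11853803 concrete, here is a small Python sketch, assuming hypothetical component properties and requirement keys; the patent does not prescribe how compliance requirements are represented.

```python
from dataclasses import dataclass, field


@dataclass
class HardwareComponent:
    name: str
    properties: dict = field(default_factory=dict)  # e.g. {"tpm": True}


def satisfies_compliance(components, hardware_requirements):
    """Every hardware compliance requirement must be met by at least one composed component."""
    return all(
        any(comp.properties.get(key) == value for comp in components)
        for key, value in hardware_requirements.items()
    )


def configure_for_workload(components, software_requirements):
    """Configure the composed components per the workload's software compliance requirements."""
    return [{"component": c.name, "config": software_requirements} for c in components]


# Components identified by the management system are validated, then configured.
identified = [HardwareComponent("cpu-0", {"tpm": True}),
              HardwareComponent("nic-0", {"encryption": "AES-256"})]
if satisfies_compliance(identified, {"tpm": True, "encryption": "AES-256"}):
    print(configure_for_workload(identified, {"os": "hardened-linux", "app": "workload-app"}))
```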
  • Patent number: 11853800
    Abstract: Apparatuses and methods for providing resources are provided that include receiving power statuses of resources of a system capable of providing the resources; quantifying the power statuses of the resources; calculating an available soft capacity of the system based on the quantified power statuses and a total capacity of the system; and assigning an amount of the resources beyond the calculated available soft capacity to one or more users.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: December 26, 2023
    Assignee: Alibaba Group Holding Limited
    Inventors: Youquan Feng, Yijun Lu, Jun Song
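
A rough Python illustration of the soft-capacity idea in patent 11853800: power statuses are quantified, an available soft capacity is derived from them and the total capacity, and an assignment may exceed that soft capacity. The status-to-number mapping and the oversubscription ratio are invented here for illustration.

```python
def quantify_power_statuses(power_statuses):
    """Map each resource's reported power status to a numeric utilization estimate."""
    weights = {"idle": 0.1, "normal": 0.6, "busy": 0.9}  # hypothetical mapping
    return [weights.get(status, 1.0) for status in power_statuses]


def available_soft_capacity(power_statuses, total_capacity):
    """Soft capacity = total capacity minus the capacity implied by the quantified statuses."""
    quantified = quantify_power_statuses(power_statuses)
    used = sum(quantified) / len(quantified) * total_capacity
    return max(total_capacity - used, 0.0)


def assign(amount_requested, power_statuses, total_capacity, oversubscribe_ratio=1.2):
    """Allow assignment up to a bounded amount beyond the calculated soft capacity."""
    soft = available_soft_capacity(power_statuses, total_capacity)
    return min(amount_requested, soft * oversubscribe_ratio)


print(assign(80.0, ["idle", "normal", "busy"], total_capacity=100.0))
```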
  • Patent number: 11847505
    Abstract: Embodiments of the present disclosure relate to a method, an electronic device, and a computer program product for managing a storage system. The method includes: determining, at a first device of the storage system, whether a load of a first accelerator resource of the first device exceeds a load threshold; sending, if it is determined that the load exceeds the load threshold, a job processing request to a second device in a candidate device list to cause the second device to process a target job of the first device using a second accelerator resource of the second device, the candidate device list indicating devices in the storage system that can be used to assist the first device in job processing; receiving, from the second device, latency information related to remote processing latency of processing the target job using the second accelerator resource; and updating the candidate device list based on the latency information. The embodiments of the present disclosure can optimize the system performance.
    Type: Grant
    Filed: May 21, 2021
    Date of Patent: December 19, 2023
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Tao Chen, Bing Liu, Geng Han, Jian Gao
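
The offload-and-update loop in patent 11847505 might look roughly like the following Python sketch; the load threshold, latency limit, and list-ordering policy are assumptions, not values from the patent.

```python
def maybe_offload(local_load, load_threshold, candidate_devices, send_job):
    """If the local accelerator is over its load threshold, offload to the first candidate."""
    if local_load <= load_threshold or not candidate_devices:
        return None
    target = candidate_devices[0]
    return send_job(target)  # returns latency information from the remote device


def update_candidates(candidate_devices, device, remote_latency_ms, latency_limit_ms=5.0):
    """Demote or drop a candidate whose reported remote processing latency is too high."""
    devices = [d for d in candidate_devices if d != device]
    if remote_latency_ms <= latency_limit_ms:
        devices.insert(0, device)  # keep fast devices at the front of the list
    return devices


# Usage with a stubbed remote call that reports 3 ms of remote processing latency.
candidates = ["dev-b", "dev-c"]
latency = maybe_offload(0.95, 0.8, candidates, send_job=lambda d: 3.0)
if latency is not None:
    candidates = update_candidates(candidates, "dev-b", latency)
print(candidates)
```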
  • Patent number: 11842209
    Abstract: Exemplary methods, apparatuses, and systems include a client virtual machine processing a system call for a device driver to instruct a physical device to perform a function and transmitting the system call to an appliance virtual machine to execute the system call. The client virtual machine determines, in response to the system call, that an established connection with the appliance virtual machine has switched from a first protocol to a second protocol, the first and second protocols including a high-performance transmission protocol and Transmission Control Protocol and Internet Protocol (TCP/IP). The client virtual machine transmits the system call to the appliance virtual machine according to the second protocol. For example, the established connection may switch to the second protocol in response to the client virtual machine migrating to the first host device from a second host device.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: December 12, 2023
    Assignee: VMware, Inc.
    Inventors: Lawrence Spracklen, Hari Sivaraman, Vikram Makhija, Rishi Bidarkar
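
The protocol selection in patent 11842209 can be illustrated with a tiny Python sketch; using host co-residency as the trigger for switching between a high-performance transport and TCP/IP is an assumption drawn from the migration example in the abstract.

```python
def current_protocol(client_host, appliance_host):
    """Pick the transport for the established connection: a high-performance protocol when
    both VMs share a host, TCP/IP after the client VM migrates to a different host."""
    return "high-performance" if client_host == appliance_host else "tcp/ip"


def forward_system_call(syscall, client_host, appliance_host, send):
    """Re-check the connection's protocol before transmitting the system call."""
    protocol = current_protocol(client_host, appliance_host)
    return send(protocol, syscall)


# Before migration the VMs share host-1; after migration the call goes over TCP/IP.
send = lambda protocol, call: f"sent {call!r} via {protocol}"
print(forward_system_call("ioctl", "host-1", "host-1", send))
print(forward_system_call("ioctl", "host-2", "host-1", send))
```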
  • Patent number: 11842215
    Abstract: Techniques described herein can optimize usage of computing resources in a data system. Dynamic throttling can be performed locally on a computing resource in the foreground and autoscaling can be performed in a centralized fashion in the background. Dynamic throttling can lower the load without overshooting while minimizing oscillation and reducing the throttle quickly. Autoscaling may involve scaling in or out the number of computing resources in a cluster as well as scaling up or down the type of computing resources to handle different types of situations.
    Type: Grant
    Filed: January 28, 2023
    Date of Patent: December 12, 2023
    Assignee: Snowflake Inc.
    Inventors: Johan Harjono, Daniel Geoffrey Karp, Kunal Prafulla Nabar, Rares Radut, Arthur Kelvin Shi
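
A toy Python sketch of the division of labor described for patent 11842215: a local, fast-reacting throttle in the foreground and a centralized autoscaler in the background. The specific update rules and thresholds are illustrative, not Snowflake's algorithm.

```python
def next_throttle(current_throttle, load, target_load):
    """Tighten the throttle quickly when load exceeds the target (to avoid overshoot),
    and relax it in small steps otherwise (to limit oscillation)."""
    if load > target_load:
        return min(current_throttle + 0.25, 1.0)  # back off fast
    return max(current_throttle - 0.05, 0.0)      # relax gradually


def autoscale(cluster_size, avg_load, scale_out_at=0.8, scale_in_at=0.3):
    """Background autoscaler: scale the cluster out or in based on sustained average load."""
    if avg_load > scale_out_at:
        return cluster_size + 1
    if avg_load < scale_in_at and cluster_size > 1:
        return cluster_size - 1
    return cluster_size


throttle, size = 0.0, 2
for load in (0.95, 0.9, 0.6, 0.4, 0.2):
    throttle = next_throttle(throttle, load, target_load=0.75)
    size = autoscale(size, load)
print(throttle, size)
```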
  • Patent number: 11834071
    Abstract: Methods, systems, non-transitory media, and devices for supporting safety compliant computing in a heterogeneous computing system, such as a vehicle heterogeneous computing system, are disclosed. Various aspects include methods enabling a vehicle, such as an autonomous vehicle or a semi-autonomous vehicle, to achieve algorithm safety for various algorithms on a heterogeneous compute platform with various safety levels.
    Type: Grant
    Filed: August 4, 2020
    Date of Patent: December 5, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Ahmed Kamel Sadek, Avdhut Joshi, Gautam Sachdeva, Yoga Y Nadaraajan
  • Patent number: 11816495
    Abstract: Embodiments of the present invention include a method for running a virtual manager scheduler for scheduling activities for virtual machines. The method may include: defining a schedule for one or more activities to be executed for a virtual machine; applying an adjustment to the schedule in accordance with feedback information received via a virtual machine client; aggregating the feedback information from a plurality of virtual machine clients, each being related to a virtual machine, per scheduled activity type; and determining a group adjustment for a determined group of the virtual machine clients based on a function of the feedback information of the plurality of virtual machine clients.
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: November 14, 2023
    Assignee: International Business Machines Corporation
    Inventors: Piotr Kania, Wlodzimierz Martowicz, Piotr Padkowski, Marek Peszt
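
The feedback aggregation and group adjustment in patent 11816495 can be pictured with a short Python sketch; treating feedback as a suggested schedule shift in minutes and using the mean as the adjustment function are assumptions made here for concreteness.

```python
from collections import defaultdict
from statistics import mean


def aggregate_feedback(feedback_events):
    """Aggregate per-client feedback (a suggested schedule shift in minutes) per activity type."""
    by_activity = defaultdict(list)
    for event in feedback_events:
        by_activity[event["activity"]].append(event["suggested_shift_min"])
    return by_activity


def group_adjustment(by_activity, activity):
    """Group adjustment as a function (here, the mean) of the clients' feedback."""
    values = by_activity.get(activity, [])
    return mean(values) if values else 0.0


feedback = [{"client": "vm-1", "activity": "backup", "suggested_shift_min": 30},
            {"client": "vm-2", "activity": "backup", "suggested_shift_min": 10}]
print(group_adjustment(aggregate_feedback(feedback), "backup"))  # shift backups by 20 minutes
```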
  • Patent number: 11815870
    Abstract: A method for performing computing procedures with a control unit of a transportation vehicle, wherein the control unit is not installed in a fixed position in the transportation vehicle but instead has a removable design. The control unit performs control tasks for transportation vehicle functions in the transportation vehicle and is used outside the transportation vehicle for vehicle-independent calculations. In the transportation vehicle, the control unit uses computing power and/or memory capacity that is not required for the control tasks to perform vehicle-independent calculations, and these vehicle-independent calculations are continued outside the transportation vehicle when the control unit is removed from the transportation vehicle.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: November 14, 2023
    Inventors: Mukayil Kilic, Thomas Christian Lesinski
  • Patent number: 11809912
    Abstract: A system control processor manager for servicing workloads using composed information handling systems instantiated using information handling systems includes persistent storage and a workload manager. The workload manager obtains a workload request for a workload of the workloads; predicts future resource needs for the workload during a future time period; makes a determination that a portion of free resources of the information handling systems are available to meet the future resource needs; reserves the portion of the free resources based on the determination to obtain reserved resources during the future time period; and composes a composed information handling system of the composed information handling systems using the reserved resources during the future time period to service the workload request.
    Type: Grant
    Filed: December 9, 2020
    Date of Patent: November 7, 2023
    Assignee: DELL PRODUCTS L.P.
    Inventors: Elie Antoun Jreij, William Price Dawkins, Gaurav Chawla, Mark Steven Sanders, Walter A. O'Brien, III, Mukund P. Khatri, Robert Wayne Hormuth, Yossef Saad, Jimmy Doyle Pike
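
A simplified Python sketch of the reserve-then-compose flow in patent 11809912; the forecasting rule (mean plus headroom) and the single-dimension CPU pool are illustrative stand-ins for whatever prediction and resource model the system actually uses.

```python
def predict_future_needs(history_cpu):
    """Naive forecast of next-period CPU need (the patent does not prescribe a model)."""
    return sum(history_cpu) / len(history_cpu) * 1.1  # mean plus 10% headroom


def reserve(free_pool_cpu, needed_cpu):
    """Reserve from the free pool only if enough free resources are available."""
    if free_pool_cpu >= needed_cpu:
        return needed_cpu, free_pool_cpu - needed_cpu
    return 0.0, free_pool_cpu


history = [4.0, 5.0, 6.0]
needed = predict_future_needs(history)
reserved, remaining_free = reserve(free_pool_cpu=16.0, needed_cpu=needed)
if reserved:
    composed_system = {"cpu": reserved}  # compose the system using the reserved resources
    print(composed_system, remaining_free)
```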
  • Patent number: 11803424
    Abstract: Systems described herein may allow for the intelligent configuration of containers onto virtualized resources. Different configurations may be generated based on the simulation of alternate placements of containers onto nodes, where the placement of a particular container onto a particular node may serve as a root for several branches which may themselves simulate the placement of additional containers on the node (in addition to the container(s) indicated in the root). Once a set of configurations is generated, a particular configuration may be selected according to determined selection parameters and/or intelligent selection techniques.
    Type: Grant
    Filed: January 3, 2023
    Date of Patent: October 31, 2023
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Matthew Kapala, Hector A. Garcia Crespo, Sudha Subramaniam, Brian A. Ward, Brent D. Segner
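
The simulate-then-select approach in patent 11803424 could be sketched in Python as follows; brute-force enumeration stands in for the branching simulation, and "fewest nodes used" stands in for the real selection parameters.

```python
import itertools


def simulate_placements(containers, nodes, node_capacity):
    """Enumerate candidate configurations: every assignment of containers to nodes that fits
    within capacity.  (A real system would branch from root placements rather than
    brute-force every combination.)"""
    configs = []
    for assignment in itertools.product(nodes, repeat=len(containers)):
        usage = {n: 0 for n in nodes}
        for container, node in zip(containers, assignment):
            usage[node] += container["cpu"]
        if all(used <= node_capacity for used in usage.values()):
            configs.append(dict(zip((c["name"] for c in containers), assignment)))
    return configs


def select_config(configs):
    """Select a configuration according to a simple selection parameter: fewest nodes used."""
    return min(configs, key=lambda cfg: len(set(cfg.values())))


containers = [{"name": "web", "cpu": 2}, {"name": "db", "cpu": 3}]
configs = simulate_placements(containers, ["node-a", "node-b"], node_capacity=6)
print(select_config(configs))  # e.g. both containers packed onto one node
```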
  • Patent number: 11789775
    Abstract: The visualization of progress of a distributed computational job at multiple points of execution. After a computational job is compiled into multiple vertices and those vertices are scheduled on multiple processing nodes in a distributed environment, a processing gathering module gathers processing information regarding the processing of multiple vertices of the computational job at multiple instances in time in the execution of the computational job. A user interface module graphically presents a representation of an execution structure representing multiple nodes of the computational job and the dependencies between the multiple nodes, where a node may be a single vertex or a group of vertices (such as a stage).
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: October 17, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Pu Li, Omid Afnan, Dian Zhang
  • Patent number: 11789778
    Abstract: An FPGA cloud platform allocates and coordinates accelerator card resources according to the delays between a user's host and FPGA accelerator cards deployed at various network segments. Upon an FPGA usage request from the user, an FPGA accelerator card in the FPGA resource pool that has the minimum delay to the host is allocated. A cloud monitoring management platform obtains transmission delays to the virtual machine network according to the different geographic locations of the various FPGA cards in the FPGA resource pool and allocates a card having the minimum delay to each user. The cloud monitoring management platform also prevents unauthorized users from accessing acceleration resources in the resource pool. The invention protects FPGA accelerator cards from access by unauthorized users and ensures that the card allocated to a user has the minimum network delay, thereby optimizing acceleration performance and improving user experience.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: October 17, 2023
    Assignee: Inspur Suzhou Intelligent Technology Co., Ltd.
    Inventors: Zhixin Ren, Jiaheng Fan
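
A compact Python sketch of the allocation policy in patent 11789778: reject unauthorized users, then pick the card with the minimum measured delay to the user's host. Card names and delay values are made up for the example.

```python
def allocate_fpga(user, host_delays_ms, authorized_users):
    """Allocate the FPGA accelerator card with the minimum measured delay to the user's host,
    refusing unauthorized users access to the resource pool."""
    if user not in authorized_users:
        raise PermissionError(f"user {user!r} is not authorized for the FPGA resource pool")
    card, _delay = min(host_delays_ms.items(), key=lambda item: item[1])
    return card


# Delays (in ms) measured between the user's VM network and cards in different segments.
delays = {"fpga-east-1": 4.2, "fpga-west-1": 11.8, "fpga-central-1": 6.9}
print(allocate_fpga("alice", delays, authorized_users={"alice"}))  # -> fpga-east-1
```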
  • Patent number: 11775352
    Abstract: Methods and apparatuses are described for automated prediction of computing resource performance scaling using reinforcement learning. A server executes performance tests against a production computing environment comprising a plurality of computing layers to capture performance data for computing resources in the production environment, where the performance tests are configured according to transactions-per-second (TPS) values. The server trains a classification model using the performance data, the trained model configured to predict computing power required by the plurality of computing layers. The server identifies a target TPS value and a target cost tolerance for the production environment and executes the trained classification model using the target TPS value and the target cost tolerance as input to generate a prediction of computing power required by the plurality of computing layers.
    Type: Grant
    Filed: November 28, 2022
    Date of Patent: October 3, 2023
    Assignee: FMR LLC
    Inventors: Nikhil Krishnegowda, Saloni Priyani, Samir Kakkar
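
For patent 11775352, the prediction step might be pictured as below; a simple lookup over captured performance-test rows stands in for the trained classification model, and the field names are assumptions.

```python
def train_lookup(performance_data):
    """Stand-in for the trained model: remember (tps, cost, cpu_cores) rows from the tests."""
    return sorted(performance_data, key=lambda row: (row["tps"], row["cost"]))


def predict_power(model, target_tps, target_cost):
    """Predict the computing power of the smallest tested configuration that meets the target
    TPS within the cost tolerance; None if no tested configuration qualifies."""
    candidates = [row for row in model if row["tps"] >= target_tps and row["cost"] <= target_cost]
    return min(candidates, key=lambda row: row["cpu_cores"])["cpu_cores"] if candidates else None


perf_data = [{"tps": 500, "cost": 40, "cpu_cores": 8},
             {"tps": 1200, "cost": 90, "cpu_cores": 16},
             {"tps": 1500, "cost": 150, "cpu_cores": 32}]
print(predict_power(train_lookup(perf_data), target_tps=1000, target_cost=100))  # -> 16
```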
  • Patent number: 11768698
    Abstract: A storage device is disclosed. The storage device may include storage for data and at least one Input/Output (I/O) queue for requests from at least one virtual machine (VM) on a host device. The storage device may support an I/O queue creation command to request the allocation of an I/O queue for a VM. The I/O queue creation command may include an LBA range attribute for a range of Logical Block Addresses (LBAs) to be associated with the I/O queue. The storage device may map the range of LBAs to a range of Physical Block Addresses (PBAs) in the storage.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: September 26, 2023
    Inventor: Oscar P. Pinto
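
The I/O queue creation command in patent 11768698 can be sketched in Python as follows; the sequential LBA-to-PBA mapping and the class and field names are illustrative, not the device's actual command set.

```python
from dataclasses import dataclass


@dataclass
class IOQueue:
    vm_id: str
    lba_start: int
    lba_count: int
    pba_start: int  # physical block the LBA range is mapped to


class StorageDevice:
    def __init__(self, total_blocks):
        self.next_free_pba = 0
        self.total_blocks = total_blocks
        self.queues = []

    def create_io_queue(self, vm_id, lba_start, lba_count):
        """Handle an I/O queue creation command carrying an LBA range attribute:
        allocate a queue for the VM and map the LBA range to a PBA range."""
        if self.next_free_pba + lba_count > self.total_blocks:
            raise ValueError("not enough physical blocks for the requested LBA range")
        queue = IOQueue(vm_id, lba_start, lba_count, self.next_free_pba)
        self.next_free_pba += lba_count
        self.queues.append(queue)
        return queue


dev = StorageDevice(total_blocks=1 << 20)
print(dev.create_io_queue("vm-1", lba_start=0, lba_count=4096))
```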
  • Patent number: 11768708
    Abstract: A media data processing system suppresses a decrease in the request processing rate while suppressing an increase in response time in media data recognition processing, where it is difficult to estimate the processing load properly. A first load estimation unit 5 estimates a range of the processing load of the media data recognition processing based on header information of the media data. A determination unit 31 determines whether to allow or disallow execution of the media data recognition processing, or to estimate the processing load, based on the range of the processing load. A second load estimation unit 6 estimates the processing load of the media data recognition processing based on the content of the media data when it is determined to estimate the processing load. The determination unit 31 then determines whether to allow or disallow execution of the media data recognition processing based on the processing load.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: September 26, 2023
    Assignee: NEC CORPORATION
    Inventor: Yosuke Iwamatsu
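
A small Python sketch of the two-stage estimation in patent 11768708: a cheap range estimate from header information decides most requests, and a content-based estimate is computed only when the range is inconclusive. The scaling constants and complexity values are invented for the example.

```python
def estimate_load_range(header):
    """First-stage estimate from header information only (resolution and frame count)."""
    pixels = header["width"] * header["height"] * header["frames"]
    return pixels * 0.8e-6, pixels * 1.2e-6  # (low, high) load estimate, hypothetical scale


def estimate_load_from_content(frames_complexity):
    """Second-stage estimate based on the content of the media data itself."""
    return sum(frames_complexity) / len(frames_complexity)


def admit(header, capacity_left, frames_complexity=None):
    """Allow, disallow, or refine: decide from the range when possible, otherwise
    fall back to the content-based estimate."""
    low, high = estimate_load_range(header)
    if high <= capacity_left:
        return True   # even the worst case fits: allow
    if low > capacity_left:
        return False  # even the best case does not fit: disallow
    return estimate_load_from_content(frames_complexity) <= capacity_left


print(admit({"width": 1280, "height": 720, "frames": 300}, capacity_left=400.0,
            frames_complexity=[350.0, 390.0, 360.0]))
```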
  • Patent number: 11768713
    Abstract: Systems and methods for dynamically relocating pods to optimize inter-pod networking efficiency are provided. The method comprises receiving and storing inter-pod traffic data for a plurality of pods. The plurality of pods includes a first pod, a second pod, and a third pod. The method further includes receiving and storing node resource availability data for each node of a plurality of nodes, generating a queue that sorts the plurality of pods by an amount of inter-pod traffic indicated by the inter-pod traffic data, generating a hash that maps one or more parameters to the plurality of nodes, selecting, based on the generated hash, a node of the plurality of nodes, and dynamically relocating a highest ranked pod of the plurality of pods from the generated queue to the selected node.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: September 26, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vidush Vishwanath, Kendall Stratton, Rohit Raina
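
The queue-and-hash relocation in patent 11768713 might look like the following Python sketch; summing pairwise traffic per pod and hashing the pod name over eligible nodes are assumptions about details the abstract leaves open.

```python
import hashlib


def build_queue(inter_pod_traffic):
    """Sort pods by total inter-pod traffic, heaviest talkers first."""
    totals = {}
    for (pod_a, pod_b), bytes_per_s in inter_pod_traffic.items():
        totals[pod_a] = totals.get(pod_a, 0) + bytes_per_s
        totals[pod_b] = totals.get(pod_b, 0) + bytes_per_s
    return sorted(totals, key=totals.get, reverse=True)


def select_node(pod, nodes, node_free_cpu, pod_cpu):
    """Hash the pod name onto the eligible nodes (those with enough free CPU)."""
    eligible = sorted(n for n in nodes if node_free_cpu[n] >= pod_cpu)
    digest = int(hashlib.sha256(pod.encode()).hexdigest(), 16)
    return eligible[digest % len(eligible)] if eligible else None


traffic = {("web", "cache"): 900, ("web", "db"): 400, ("cache", "db"): 100}
queue = build_queue(traffic)  # 'web' ranks highest
target = select_node(queue[0], ["node-a", "node-b"], {"node-a": 4, "node-b": 8}, pod_cpu=2)
print(queue[0], "->", target)
```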
  • Patent number: 11762679
    Abstract: The image processing device is provided with: a first input unit which, with respect to one or more virtual models including a virtual model of an operation machine, receives an input of a first parameter for identifying a type; a second input unit which receives an input of a second parameter relating to a stochastic distribution having, as a random variable, a characteristic of an element constituting the one or more virtual models; a virtual model generation unit which, using the first parameter and the second parameter, generates the one or more virtual models stochastically; a determination unit which determines the correctness of an operation of the virtual model of the operation machine when operated in a virtual space including the one or more stochastically generated virtual models; and a learning unit which learns a control module for the operation machine for achieving a predetermined operation.
    Type: Grant
    Filed: March 1, 2019
    Date of Patent: September 19, 2023
    Assignee: OMRON Corporation
    Inventors: Yohei Okawa, Yoshiya Shibata, Chisato Saito, Kennosuke Hayashi, Yu Tomono
  • Patent number: 11762704
    Abstract: [Problem] To achieve resource allocation suitable for both a resource providing side and a using side. [Solution] A resource allocation apparatus 1 includes a filtering unit 12 configured to receive an allocation request specifying an amount of use of a physical CPU to be used by a virtual CPU and a characteristic of the physical CPU for each virtual CPU and select resources for allocation 2 that match the characteristic of the physical CPU specified in the allocation request, a weighting unit 13 configured to choose a physical CPU that is to serve as an allocation destination of a virtual CPU based on an amount of use in which each of the selected resources for allocation 2 is available and an amount of use of the physical CPU specified in the allocation request, and a virtual machine generation unit 14 configured to allocate the virtual CPU specified in the allocation request to the physical CPU chosen as the allocation destination.
    Type: Grant
    Filed: July 4, 2019
    Date of Patent: September 19, 2023
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventor: Keiko Kuriu
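
The filter-then-weight allocation in patent 11762704 might be sketched in Python as follows; representing a physical CPU by its free capacity and a set of characteristic tags, and choosing the candidate with the most remaining capacity, are assumptions made for the example.

```python
def allocate_vcpu(request, physical_cpus):
    """Filter physical CPUs by the requested characteristic, then weight the remaining
    candidates by available capacity and pick an allocation destination."""
    # Filtering step: keep only CPUs that match the requested characteristic
    # and still have enough free capacity for the requested amount of use.
    candidates = [
        cpu for cpu in physical_cpus
        if request["characteristic"] in cpu["characteristics"]
        and cpu["free"] >= request["amount"]
    ]
    if not candidates:
        return None
    # Weighting step: prefer the CPU with the most remaining capacity after allocation.
    chosen = max(candidates, key=lambda cpu: cpu["free"] - request["amount"])
    chosen["free"] -= request["amount"]  # allocate the virtual CPU to this physical CPU
    return chosen["name"]


cpus = [{"name": "pcpu-0", "free": 2.0, "characteristics": {"high-clock"}},
        {"name": "pcpu-1", "free": 6.0, "characteristics": {"high-clock", "numa-0"}}]
print(allocate_vcpu({"amount": 1.5, "characteristic": "high-clock"}, cpus))  # -> pcpu-1
```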
  • Patent number: 11762707
    Abstract: A computer implemented method and related system determine a current load result of a software container executing on a compute node in a container system. In response to determining that the current load result exceeds a predetermined scale-up threshold for the software container, the method adds a first plurality of replicas of the software container to the compute node, where a quantity of the first plurality of replicas is related to the current load result. In response to determining that the current load result is less than a predetermined scale-down threshold for the software container, the method deletes a second plurality of replicas of the software container from the compute node, where a quantity of the second plurality of replicas is related to the current load result.
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: September 19, 2023
    Assignee: International Business Machines Corporation
    Inventors: Szymon Kowalczyk, Piotr P. Godowski, Michal Paluch, Tomasz Hanusiak, Andrzej Pietrzak
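
Finally, the threshold-driven replica scaling in patent 11762707 can be sketched as below; the thresholds and the "step per load" factor that relates the replica count to the load result are illustrative choices.

```python
def scale_replicas(current_replicas, load, scale_up_at=0.8, scale_down_at=0.3, step_per_load=10):
    """Add or delete a number of replicas related to the current load result."""
    if load > scale_up_at:
        added = max(1, int((load - scale_up_at) * step_per_load))
        return current_replicas + added
    if load < scale_down_at:
        removed = max(1, int((scale_down_at - load) * step_per_load))
        return max(1, current_replicas - removed)
    return current_replicas


for load in (0.95, 0.55, 0.1):
    print(load, scale_replicas(current_replicas=4, load=load))
```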