Resource Allocation Patents (Class 718/104)
  • Patent number: 11714688
    Abstract: Methods, computer systems, and apparatus, including computer programs encoded on computer storage media, for allocating computing resources for a data object. The system obtains a plurality of characteristics of a data object, and estimates, from the obtained characteristics, one or more cumulative sustainability metrics characterizing one or more categories of energy consumption during a life-cycle of the data object. The system further determines, from the cumulative sustainability metrics, allocations of one or more computing resources to the data object to optimize one or more objectives including minimizing a cumulative carbon cost during the life-cycle of the data object.
    Type: Grant
    Filed: November 17, 2022
    Date of Patent: August 1, 2023
    Assignee: Accenture Global Solutions Limited
    Inventors: Janardan Misra, Navveen Gordhan Balani
  • Patent number: 11716384
    Abstract: A method of distributed resource management in a distributed computing system includes determining usage of respective hardware resources by an application and generating usage metrics for the application, and assigning the application to a cluster of hardware resources to optimize diversity of usage of hardware resources in the cluster and to enhance utilization of the hardware resources by applications running in that cluster. The diversity of usage of the hardware resources is determined from respective usage metrics of the respective applications running in that cluster. The diversity of usage of the hardware resources in the cluster is optimized by assigning the application to a diversity pool of hardware resources adapted to minimize interference when applications assigned to the diversity pool of hardware resources access the hardware resources in the diversity pool and assigning applications from different diversity pools to the cluster of hardware resources.
    Type: Grant
    Filed: May 6, 2021
    Date of Patent: August 1, 2023
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Sharanyan Srikanthan, Zongfang Lin, Chen Tian, Ziang Hu
  • Patent number: 11711268
    Abstract: Methods and apparatus to execute a workload in an edge environment are disclosed. An example apparatus includes a node scheduler to accept a task from a workload scheduler, the task including a description of a workload and tokens, a workload executor to execute the workload, the node scheduler to access a result of execution of the workload and provide the result to the workload scheduler, and a controller to access the tokens and distribute at least one of the tokens to at least one provider, the provider to provide a resource to the apparatus to execute the workload.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: July 25, 2023
    Assignee: INTEL CORPORATION
    Inventors: Ned Smith, Francesc Guim Bernat, Sanjay Bakshi, Katalin Bartfai-Walcott, Kapil Sood, Kshitij Doshi, Robert Munoz
  • Patent number: 11704316
    Abstract: The present invention is generally directed to systems and methods of determining and provisioning peak memory requirements in Structured Query Language Processing engines. More specifically, methods may include determining or obtaining a query execution plan; gathering statistics associated with each database table; breaking the query execution plan into one or more subtasks; calculating an estimated memory usage for each subtask using the statistics; determining or obtaining a dependency graph of the one or more subtasks; based at least in part on the dependency graph, determining which subtasks can execute concurrently on a single worker node; and totaling the amount of estimated memory for each subtask that can execute concurrently on a single worker node and setting this amount of estimated memory as the estimated peak memory requirement for the specific database query.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: July 18, 2023
    Assignee: Qubole, Inc.
    Inventors: Ankit Dixit, Shubham Tagra
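    Illustrative sketch: a minimal Python reading of the peak-memory estimate described in the abstract above, assuming per-subtask memory estimates and a dependency DAG are already available; the level-based concurrency rule and the subtask names are assumptions for illustration, not taken from the patent.

```python
from collections import defaultdict

def peak_memory_estimate(est_mem, deps):
    """Estimate peak memory for one query plan.

    est_mem: {subtask: estimated memory in MB}
    deps:    {subtask: set of subtasks it depends on}
    Subtasks whose dependency chains place them at the same depth are treated
    as able to run concurrently on one worker node (a simple stand-in rule).
    """
    level = {}

    def depth(task):
        if task not in level:
            level[task] = 1 + max((depth(d) for d in deps.get(task, ())), default=0)
        return level[task]

    per_level = defaultdict(int)
    for task, mem in est_mem.items():
        per_level[depth(task)] += mem        # total memory of concurrent subtasks
    return max(per_level.values())           # worst concurrent total = peak estimate

# Example: the two scans can run together; the join waits for both.
estimates = {"scan_a": 512, "scan_b": 256, "join": 1024}
dag = {"join": {"scan_a", "scan_b"}}
print(peak_memory_estimate(estimates, dag))  # 1024 (join) beats 768 (both scans)
```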
  • Patent number: 11706086
    Abstract: The present disclosure relates to a baseboard management controller (BMC)-based switch monitoring method, a system, a computer device, and a readable medium thereof. The BMC-based switch monitoring method includes: virtualizing a multi-core BMC into a plurality of logical computers such that each core operates separately; in response to a start of a switch, migrating functions of the switch into a first core of the multi-core BMC for startup, and monitoring other cores of the multi-core BMC based on a second core of the multi-core BMC; determining, in response to detecting that a new service is received by the switch, whether the new service is a firmware upgrade service; and allocating the new service to the second core in the multi-core BMC for operation, in response to determining that the new service is the firmware upgrade service.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: July 18, 2023
    Assignee: INSPUR SUZHOU INTELLIGENT TECHNOLOGY CO., LTD.
    Inventors: Shengna Che, Shaomei Wang, Yong Liu
  • Patent number: 11693721
    Abstract: A system for generating a robustness score for hardware components, nodes, and clusters of nodes in a computing infrastructure is provided. The system includes a memory and at least one processing device coupled to the memory. The processing device is to obtain first telemetry data associated with a selected portion of a computing infrastructure, and the selected portion includes a first node and a first hardware component. The processing device is further to obtain first metadata associated with the selected portion, input one or more telemetry inputs corresponding to the first telemetry data into a machine learning model, input one or more metadata inputs corresponding to the first metadata into the machine learning model, and generate, from the machine learning model, a first robustness score for the first hardware component representing a health state of the first hardware component.
    Type: Grant
    Filed: September 25, 2021
    Date of Patent: July 4, 2023
    Assignee: Intel Corporation
    Inventors: Rita H. Wouhaybi, Patricia M. Mwove Shaffer, Aline C. Kenfack Sadate, Lidia Warnes
  • Patent number: 11693710
    Abstract: Resource management includes storing, for multiple workload pools of a data intake and query system, a workload pool hierarchy arranged in multiple workload pool layers. After storing, a processing request is assigned to a selected subset of workload pools in a second layer of the workload pool hierarchy based on the type of processing request. The processing request is then assigned to an individual workload pool in the selected subset to obtain a selected workload pool. Execution of the processing request is initiated on the selected workload pool.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: July 4, 2023
    Assignee: Splunk Inc.
    Inventors: Bharath Kishore Reddy Aleti, Alexandros Batsakis, Mitchell Neuman Blank, Rama Gopalan, Hongxun Liu, Anish Shrigondekar
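    Illustrative sketch: a minimal Python reading of the two-step assignment described in the abstract above; the pool names, request types, and least-loaded selection rule are assumptions for illustration.

```python
# Illustrative hierarchy: request type (layer-2 subsets) -> individual pools.
HIERARCHY = {
    "ingest": ["ingest_pool_a", "ingest_pool_b"],
    "search": ["search_pool_a", "search_pool_b", "search_pool_c"],
}
CURRENT_LOAD = {pool: 0 for pools in HIERARCHY.values() for pool in pools}

def assign_request(request_type: str) -> str:
    """Pick the subset of pools by request type, then one pool inside it."""
    subset = HIERARCHY[request_type]                  # selected subset of workload pools
    pool = min(subset, key=CURRENT_LOAD.__getitem__)  # e.g. the least-loaded pool
    CURRENT_LOAD[pool] += 1                           # stand-in for initiating execution
    return pool

print(assign_request("search"))  # search_pool_a
print(assign_request("search"))  # search_pool_b
```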
  • Patent number: 11695674
    Abstract: Network request data is collected over a time window. The network request data is filtered to generate bypass network traffic records. Network performance categories are generated from the bypass network traffic records. Sufficient statistics of network optimization parameters are calculated for the network performance categories. The sufficient statistics of the network optimization parameters are used to generate network optimization parameters to determine data download performances of web applications.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: July 4, 2023
    Assignee: Salesforce, Inc.
    Inventors: Tejaswini Ganapathi, Shauli Gal, Satish Raghunath, Kartikeya Chandrayana
  • Patent number: 11693678
    Abstract: A state management server applies configuration information to a set of virtual computer system instances in accordance with one or more limitations specified by an administrator. In an embodiment, the limitations include a velocity parameter that limits the number of virtual computer system instances to which the configuration may be applied concurrently. In an embodiment, the limitations include an error threshold that stops the application of the configuration if the number of configuration failures meets or exceeds the error threshold. In an embodiment, the set of virtual computer systems is identified by providing a list of the individual virtual computer system instances, or by specifying one or more tags that are associated with the virtual computer systems in the set. In an embodiment, the administrator is able to specify that an association be applied according to a predetermined schedule.
    Type: Grant
    Filed: July 15, 2022
    Date of Patent: July 4, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Samuel Seung Keun Carl, Amjad Hussain, Upender Sandadi, Anupam Shrivastava
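    Illustrative sketch: a minimal Python reading of the velocity and error-threshold limitations described in the abstract above; the instances and apply function are hypothetical, and batches are processed sequentially here for brevity whereas the abstract describes concurrent application.

```python
def apply_configuration(instances, apply_fn, velocity=3, error_threshold=2):
    """Apply a configuration to `instances` at most `velocity` at a time,
    stopping once the number of failures reaches `error_threshold`."""
    failures = 0
    for start in range(0, len(instances), velocity):
        for instance in instances[start:start + velocity]:   # one "velocity" batch
            try:
                apply_fn(instance)                            # hypothetical apply call
            except Exception:
                failures += 1
                if failures >= error_threshold:
                    return f"stopped after {failures} failures"
    return "configuration applied"

print(apply_configuration(["i-1", "i-2", "i-3", "i-4"], lambda i: None))
# configuration applied
```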
  • Patent number: 11693816
    Abstract: Systems and techniques are provided for flexible data ingestion. Data including a file including a database table may be received at a computing device. The file may be in a non-standard binary format. The data including the file may be stored unaltered as a source data chunk. A processed data chunk may be generated from the source data chunk by converting the file to a standard binary format and storing the file in the processed data chunk without altering the source data chunk. A materialized data chunk may be generated from the processed data chunk by performing, with a database server engine of the computing device, a database operation on the database table of the file of the processed data chunk and storing the file in the materialized data chunk without altering the processed data chunk. The database table of the file of the materialized data chunk may be made available for querying by the database server engine.
    Type: Grant
    Filed: September 17, 2020
    Date of Patent: July 4, 2023
    Assignee: ActionIQ, Inc.
    Inventors: Nitay Joffe, Allen Ajit George, Casey Lewis Green, Mitesh Patel, Panagiotis Mousoulis
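    Illustrative sketch: a minimal Python reading of the source/processed/materialized chunk flow described in the abstract above; CSV stands in for the abstract's non-standard binary format, and an in-memory index stands in for the database server engine's operation.

```python
import csv, io, json

def ingest(raw_bytes: bytes):
    """Source chunk -> processed chunk -> materialized chunk, each stage derived
    without altering the one before it."""
    source_chunk = raw_bytes                                   # stored unaltered
    rows = list(csv.DictReader(io.StringIO(raw_bytes.decode())))
    processed_chunk = json.dumps(rows).encode()                # standard-format copy
    materialized_chunk = {row["id"]: row for row in rows}      # "database operation": index by id
    return source_chunk, processed_chunk, materialized_chunk

source, processed, materialized = ingest(b"id,amount\n1,10\n2,25\n")
print(materialized["2"]["amount"])   # 25, queryable without touching earlier chunks
```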
  • Patent number: 11687367
    Abstract: A method of scheduling a plurality of tasks in an autonomous vehicle system (AVS) includes, by a processor, prior to runtime of an autonomous vehicle, identifying a plurality of tasks to be implemented by the AVS of the autonomous vehicle, for each of the tasks, identifying at least one fixed parameter and at least one variable, and developing a schedule for each of the tasks. The schedule includes an event loop that minimizes an overall time for execution of the tasks. The method includes compiling the schedule into an execution plan, and saving the execution plan to a memory of the autonomous vehicle. During runtime of the autonomous vehicle, the processor receives data corresponding to the variables of the tasks, and uses the variables to implement the execution plan on the autonomous vehicle.
    Type: Grant
    Filed: May 11, 2021
    Date of Patent: June 27, 2023
    Assignee: ARGO AI, LLC
    Inventors: Evgeny Televitckiy, Guillaume Binet
  • Patent number: 11687377
    Abstract: An apparatus can include a control board operatively coupled to a modular compute board and to a resource board by (1) a first connection associated with control information and not data, and (2) a second connection associated with data and not control information. The control board can determine a computation load and a physical resource requirement for a time period. The control board can send, to the modular compute board and via the first connection, a signal indicating an allocation of that modular compute board during the time period. The control board can send, from the control board to the resource board, a signal indicating an allocation of that resource board to the modular compute board such that that resource board allocates at least a portion of its resources during the time period based on at least one of the computation load or the physical resource requirement.
    Type: Grant
    Filed: May 20, 2022
    Date of Patent: June 27, 2023
    Assignee: Management Services Group, Inc.
    Inventors: Thomas Scott Morgan, Steven Yates
  • Patent number: 11687370
    Abstract: An embodiment for resource management is provided. The embodiment may include receiving created text of an assigned activity to a proposed assignee. The embodiment may also include identifying information about the assigned activity. The embodiment may further include predicting resources and capabilities required to complete the assigned activity. The embodiment may also include identifying the proposed assignee. The embodiment may further include analyzing the resources and capabilities available on one or more devices of the proposed assignee. The embodiment may also include in response to determining the proposed assignee is able to complete the assigned activity, displaying to an assignor a predicted start time and time of completion of the assigned activity and in response to determining the proposed assignee is unable to complete the assigned activity, recommending to the assignor another assignee that is able to complete the assigned activity.
    Type: Grant
    Filed: November 23, 2020
    Date of Patent: June 27, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Raghuveer Prasad Nagar, Sarbajit K. Rakshit, Jagadesh Ramaswamy Hulugundi, Prashanth Krishna Rao
  • Patent number: 11689474
    Abstract: Central processing units (CPUs) are configured to support host access instruction(s) that are associated with accessing solid state storage. A resource management module, implemented independently of the CPUs, receives a resource allocation request that includes a usage type identifier and a requested amount of a resource, where the usage type identifier is associated with a group identifier. Adjustable resource configuration information is accessed to obtain: (1) a maximum associated with the usage type identifier, (2) a minimum associated with the usage type identifier, and (3) a group limit associated with the group identifier. Resource state information is accessed and it is determined whether to grant the request based at least in part on the maximum, minimum, group limit, and resource state information. The resource allocation request is then granted or denied based on the determination.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: June 27, 2023
    Inventors: Priyanka Nilay Thakore, Lyle E. Adams, Chen Xiu
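    Illustrative sketch: a minimal Python reading of the grant/deny decision described in the abstract above; the usage types, group, and limit values are assumptions for illustration.

```python
CONFIG = {  # adjustable resource configuration (illustrative numbers)
    "usage_max": {"read_cache": 64, "write_buffer": 32},
    "usage_min": {"read_cache": 4,  "write_buffer": 4},
    "group_limit": {"io": 80},
}
GROUP_OF = {"read_cache": "io", "write_buffer": "io"}
STATE = {"allocated_by_usage": {"read_cache": 0, "write_buffer": 0},
         "allocated_by_group": {"io": 0}}

def request_resource(usage_type: str, amount: int) -> bool:
    """Grant or deny based on the per-usage max/min and the group limit."""
    group = GROUP_OF[usage_type]
    if amount < CONFIG["usage_min"][usage_type]:
        return False
    if STATE["allocated_by_usage"][usage_type] + amount > CONFIG["usage_max"][usage_type]:
        return False
    if STATE["allocated_by_group"][group] + amount > CONFIG["group_limit"][group]:
        return False
    STATE["allocated_by_usage"][usage_type] += amount
    STATE["allocated_by_group"][group] += amount
    return True

print(request_resource("read_cache", 16))    # True
print(request_resource("write_buffer", 40))  # False: exceeds the per-usage maximum
```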
  • Patent number: 11687373
    Abstract: Embodiments of the present disclosure may provide dynamic and fair assignment techniques for allocating resources on a demand basis. Assignment control may be separated into at least two components: a local component and a global component. Each component may have an active dialog with each other; the dialog may include two aspects: 1) a demand for computing resources, and 2) a total allowed number of computing resources. The global component may allocate resources from a pool of resources to different local components, and the local components in turn may assign their allocated resources to local competing requests. The allocation may also be throttled or limited at various levels.
    Type: Grant
    Filed: October 28, 2022
    Date of Patent: June 27, 2023
    Assignee: Snowflake Inc.
    Inventors: Thierry Cruanes, Igor Demura, Varun Ganesh, Prasanna Rajaperumal, Libo Wang, Jiaqi Yan
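    Illustrative sketch: a minimal Python reading of the global/local split described in the abstract above; the proportional global allocation and FIFO local assignment are assumptions for illustration, and the dialog between the two components is reduced to plain function calls.

```python
def global_allocate(total_pool: int, demands: dict) -> dict:
    """Global component: split the pool across local components by demand share."""
    total_demand = sum(demands.values()) or 1
    return {name: min(d, total_pool * d // total_demand) for name, d in demands.items()}

def local_assign(allowed: int, requests: list) -> list:
    """Local component: grant its allowed total to competing requests, FIFO."""
    granted, remaining = [], allowed
    for req_id, amount in requests:
        take = min(amount, remaining)
        if take:
            granted.append((req_id, take))
            remaining -= take
    return granted

allowed = global_allocate(100, {"warehouse_a": 120, "warehouse_b": 40})
print(allowed)                                                      # {'warehouse_a': 75, 'warehouse_b': 25}
print(local_assign(allowed["warehouse_b"], [("q1", 10), ("q2", 30)]))  # [('q1', 10), ('q2', 15)]
```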
  • Patent number: 11681560
    Abstract: Embodiments of the present disclosure relate to a method, an electronic device, and a computer program product for adjusting a computing load. The method in an illustrative embodiment includes: determining a total computing power demand of at least one user device that will be switched, due to movement, to being provided a computing service by a computing node; determining an available computing power of the computing node; and if the available computing power is unable to meet the total computing power demand, by adjusting a computing load of the computing node, adjusting the available computing power before the at least one user device is switched to being provided the computing service by the computing node, so as to meet the total computing power demand.
    Type: Grant
    Filed: November 23, 2020
    Date of Patent: June 20, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Bin He, Zhen Jia, Danqing Sha, Si Chen, Zhenzhen Lin
  • Patent number: 11676013
    Abstract: Based on historic job data, a computer processor can predict a configuration of a computer node for running a future computer job. The computer processor can pre-configure the computer node based on the predicted configuration. Responsive to receiving a submission of a job, the computer processor can launch the job on the pre-configured computer node.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: June 13, 2023
    Assignee: International Business Machines Corporation
    Inventors: Eun Kyung Lee, Giacomo Domeniconi, Alessandro Morari, Yoonho Park
  • Patent number: 11675882
    Abstract: A system and method for scheduling tasks associated with changing a personality of a ticketing interface. One or more processors generate interaction scores for each of a plurality of user devices based on interactions received between the ticketing engine and the plurality of user devices. The system further generates interaction patterns for each of the plurality of user devices that include a relation between the interaction scores generated for each of the plurality of user devices and the interactions from the plurality of user devices. The system further classifies each of the plurality of user devices based on the generated interaction patterns to identify whether a user device from the plurality of user devices is a fraudulent or a non-fraudulent user device, and modifies the interface of the ticketing engine based on the classification of each of the plurality of user devices.
    Type: Grant
    Filed: April 12, 2021
    Date of Patent: June 13, 2023
    Assignee: Live Nation Entertainment, Inc.
    Inventors: Robert McEwen, Debbie Hsu, John Carnahan, Vasanth Kumar
  • Patent number: 11663045
    Abstract: A method and apparatus using machine learning for scheduling server maintenance. In one embodiment of the method, load values for a server are recorded over a period of time, wherein each of the load values is time stamped with a date and time. A first plurality of the load values are classified. The classified first plurality of values are then processed to create a model for predicting a load value of the server. The model is used to generate a first predicted load value of the server for a first date and a first time.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: May 30, 2023
    Assignee: Dell Products L.P.
    Inventors: Shanand Reddy Sukumaran, Lead Ta Choo
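    Illustrative sketch: a minimal Python reading of the record/classify/predict loop described in the abstract above, with a weekday-and-hour bucket average standing in for the patent's machine-learning model; the sample values are assumptions for illustration.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

def build_model(samples):
    """samples: [(iso_timestamp, load_value)] -> average load per (weekday, hour).

    A stand-in for the classification/training step: time-stamped load values
    are bucketed (classified) by weekday and hour and each bucket is averaged.
    """
    buckets = defaultdict(list)
    for ts, load in samples:
        t = datetime.fromisoformat(ts)
        buckets[(t.weekday(), t.hour)].append(load)
    return {key: mean(vals) for key, vals in buckets.items()}

def predict(model, ts):
    t = datetime.fromisoformat(ts)
    return model.get((t.weekday(), t.hour))   # predicted load for that date and time

model = build_model([("2023-01-02T03:00", 0.20), ("2023-01-09T03:00", 0.30)])
print(predict(model, "2023-01-16T03:15"))     # 0.25 -> a quiet Monday 3am maintenance slot
```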
  • Patent number: 11656853
    Abstract: Various embodiments are generally directed to techniques for supporting the distributed execution of a task routine among multiple secure controllers incorporated into multiple computing devices. An apparatus includes a first processor component and first secure controller of a first computing device, where the first secure controller includes: a selection component to select the first secure controller or a second secure controller of a second computing device to compile a task routine based on a comparison of required resources to compile the task routine and available resources of the first secure controller; and a compiling component to compile the task routine into a first version of compiled routine for execution within the first secure controller by the first processor component and a second version for execution within the second secure controller by a second processor component in response to selection of the first secure controller. Other embodiments are described and claimed.
    Type: Grant
    Filed: August 10, 2022
    Date of Patent: May 23, 2023
    Assignee: INTEL CORPORATION
    Inventors: Mingqiu Sun, Rajesh Poornachandran, Vincent J. Zimmer, Ned M. Smith, Gopinatth Selvaraje
  • Patent number: 11650859
    Abstract: Example methods and computer systems for cloud environment configuration based on task parallelization. One example method may comprise: obtaining a task data structure specifying execution dependency information associated with a set of multiple configuration tasks that are executable to perform cloud environment configuration. The method may also comprise: in response to identifying a first configuration task and a second configuration task that are ready for execution based on the task data structure, triggering execution of the first configuration task and the second configuration task. The method may further comprise: in response to determining that the first configuration task has been completed, identifying third configuration task(s) that are ready for execution based on the task data structure; and triggering execution of the third configuration task(s) by respective third compute node(s).
    Type: Grant
    Filed: May 27, 2021
    Date of Patent: May 16, 2023
    Assignee: VMWARE, INC.
    Inventor: Suman Chandra Shil
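    Illustrative sketch: a minimal Python reading of the dependency-driven triggering described in the abstract above, using a thread pool in place of separate compute nodes; the task names and the dependency map are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def run_config_tasks(tasks, deps):
    """tasks: {name: callable}; deps: {name: set of prerequisite task names}.
    Trigger every task whose prerequisites are complete; as each task finishes,
    identify the newly ready tasks and trigger them too."""
    done, running = set(), {}
    with ThreadPoolExecutor() as pool:
        while len(done) < len(tasks):
            for name, fn in tasks.items():
                if name not in done and name not in running \
                        and deps.get(name, set()) <= done:
                    running[name] = pool.submit(fn)          # ready -> trigger execution
            finished, _ = wait(running.values(), return_when=FIRST_COMPLETED)
            for name in [n for n, f in running.items() if f in finished]:
                running.pop(name)
                done.add(name)

tasks = {"create_network": lambda: None, "create_subnet": lambda: None,
         "deploy_gateway": lambda: None}
deps = {"create_subnet": {"create_network"}, "deploy_gateway": {"create_subnet"}}
run_config_tasks(tasks, deps)
```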
  • Patent number: 11650856
    Abstract: Systems and methods for inter-cluster deployment of compute services using federated operator components are generally described. In some examples, a first request to deploy a compute service may be received by a federated operator component. In various examples, the federated operator component may send a second request to provision a first compute resource for the compute service to a first cluster of compute nodes. In various examples, the first cluster of compute nodes may be associated with a first hierarchical level of a computing network. In some examples, the federated operator component may send a third request to provision a second compute resource for the compute service to a second cluster of compute nodes. The second cluster of compute nodes may be associated with a second hierarchical level of the computing network that is different from the first hierarchical level.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: May 16, 2023
    Assignee: RED HAT INC.
    Inventor: Huamin Chen
  • Patent number: 11645111
    Abstract: The present disclosure provides a computer-implemented method, computer system and computer program product for managing a task flow. According to the computer-implemented method, a definer module may receive a request for executing a task flow. The definer module may determine a cluster of edge devices to execute the task flow from a set of edge devices. The definer module may retrieve metadata information for the task flow and edge devices in the cluster, wherein the metadata information is used to schedule the task flow in the cluster. Then the edge devices in the cluster may execute the task flow according to the metadata information.
    Type: Grant
    Filed: October 23, 2020
    Date of Patent: May 9, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Yue Wang, Xin Peng Liu, Liang Wang, Zheng Li, Wei Wu
  • Patent number: 11645594
    Abstract: In an example, a method is performed by a computing system that is one of a group of computing systems involved in facilitating a manufacturing of an aircraft. The method comprises generating a plurality of manufacturing task work statements (MTWSs), each MTWS being associated with a task involved in the manufacturing and comprising smart contract data and computer code. The method also comprises receiving system state information indicating (i) a schedule according to which the aircraft is to be manufactured, (ii) resources available for use in executing the MTWSs, and (iii) one or more aircraft certification requirements with which the tasks involved in the manufacturing of the aircraft are to comply. The method also comprises executing the MTWSs based on the system state information and storing, in a blockchain-based distributed ledger accessible by the group of computing systems, an end state result of the execution of each MTWS.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: May 9, 2023
    Assignee: The Boeing Company
    Inventors: Stephen Acey Walls, Robert Leon Malone, Kristen Ann Bengtson, Michael Marcus Vander Wel, Sharon Filipowski Arroyo
  • Patent number: 11645123
    Abstract: Disclosed are systems, methods, and computer readable media for automatically assessing and allocating virtualized resources (such as CPU and GPU resources). In some embodiments, this method involves a computing infrastructure receiving a request to perform a workload, determining one or more workflows for performing the workload, selecting a virtualized resource, from a plurality of virtualized resources, wherein the virtualized resource is associated with a hardware configuration, and wherein selecting the virtualized resource is based on a suitability score determined based on benchmark scores of the one or more workflows on the hardware configuration, scheduling performance of at least part of the workload on the selected virtualized resource, and outputting results of the at least part of the workload.
    Type: Grant
    Filed: July 6, 2022
    Date of Patent: May 9, 2023
    Assignee: Entefy Inc.
    Inventor: Alston Ghafourifar
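    Illustrative sketch: a minimal Python reading of the benchmark-based suitability score described in the abstract above; summing workflow benchmark scores per hardware configuration is one assumed scoring rule, and the resource, workflow, and configuration names are hypothetical.

```python
# benchmark_scores[workflow][hardware_config] -> throughput-style benchmark score
BENCHMARKS = {
    "decode_video": {"gpu_small": 40, "gpu_large": 95, "cpu_only": 10},
    "run_inference": {"gpu_small": 55, "gpu_large": 90, "cpu_only": 15},
}

def select_resource(workflows, resources):
    """resources: {virtualized resource name: hardware configuration}.
    Suitability of a resource = sum of benchmark scores of the workload's
    workflows on that resource's hardware configuration (one simple rule)."""
    def suitability(hw):
        return sum(BENCHMARKS[w][hw] for w in workflows)
    return max(resources, key=lambda r: suitability(resources[r]))

print(select_resource(["decode_video", "run_inference"],
                      {"vm-1": "cpu_only", "vm-2": "gpu_small", "vm-3": "gpu_large"}))
# vm-3: the gpu_large configuration scores highest across both workflows
```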
  • Patent number: 11640443
    Abstract: Based on a predetermined number of available processor sockets, a plurality of candidate matrix decompositions are identified, which correspond to a multiplication of matrices. Based on a first comparative relationship of a variation of first sizes of the plurality of candidate matrix decompositions along a first dimension and a second comparative relationship of a variation of second sizes of the plurality of candidate matrix decompositions along a second dimension, a given candidate matrix decomposition is selected. Processing of the multiplication is distributed among the processor sockets based on the given candidate matrix decomposition.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: May 2, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Aaron M. Collier
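    Illustrative sketch: a minimal Python reading of the decomposition selection described in the abstract above, assuming the candidate decompositions are 2-D grid factorizations of the socket count and that comparing size variation per dimension reduces to picking the most balanced per-socket block; both are assumptions for illustration.

```python
def candidate_grids(sockets: int):
    """All (rows, cols) factorizations of the socket count."""
    return [(r, sockets // r) for r in range(1, sockets + 1) if sockets % r == 0]

def choose_decomposition(m: int, n: int, sockets: int):
    """Pick the grid whose per-socket blocks of the m x n result are most
    balanced (smallest spread between the block's two side lengths)."""
    def imbalance(grid):
        rows, cols = grid
        return abs(m / rows - n / cols)
    return min(candidate_grids(sockets), key=imbalance)

print(choose_decomposition(8192, 4096, 8))   # (4, 2): 2048 x 2048 blocks per socket
```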
  • Patent number: 11640395
    Abstract: A method and apparatus for carrying out a database select, or query, on a data storage device, upon data stored on that device. Data is received from a host and compressed on the data storage device using a compression code developed on the data storage device for the data. When the host issues a database select request on the compressed data, the compression code is distributed to processing cores of the data storage device and compiled, including the select request, into machine code. The machine code is used to decompress the compressed data while filtering the data with the select request. The filtering result is returned to the host.
    Type: Grant
    Filed: February 22, 2021
    Date of Patent: May 2, 2023
    Assignee: Western Digital Technologies, Inc.
    Inventors: Israel Zimmerman, Eyal Hakoun, Judah Gamliel Hahn
  • Patent number: 11640263
    Abstract: Embodiments of the present disclosure relate to a memory system and an operating method thereof. According to the embodiments of the present disclosure, the memory system may include a plurality of cores, a control core, and a shared memory. When processing the event, the control core may select a first core executing a target job requiring distributed execution among the plurality of cores, and the first core may run a first firmware among the plurality of firmwares to execute the target job. The control core may select a second core to execute the target job together with the first core, and may control the second core to run the first firmware. The control core may control the first core and the second core to perform distributed execution of the target job.
    Type: Grant
    Filed: March 3, 2021
    Date of Patent: May 2, 2023
    Assignee: SK hynix Inc.
    Inventors: Su Ik Park, Ku Ik Kwon, Kyeong Seok Kim, Yong Joon Joo
  • Patent number: 11637791
    Abstract: A method and system for allocating tasks among processing devices in a data center. The method may include receiving a request to allocate a task to one or more processing devices, the request indicating a required bandwidth for performing the task, a list of predefined processing device groups connected to a host server and indicating availability of the processing device groups included therein for allocation of tasks and available bandwidth for each available processing device group, assigning the task to a processing device group having an available bandwidth greater than or equal to the required bandwidth for performing the task, and updating the list to indicate that the processing device group to which the task is assigned, and any other processing device group sharing at least one processing device with it, are unavailable. The task may be assigned to an available processing device group having a lowest amount of power needed.
    Type: Grant
    Filed: May 31, 2022
    Date of Patent: April 25, 2023
    Assignee: Google LLC
    Inventor: Umang Sureshbhai Patel
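    Illustrative sketch: a minimal Python reading of the group assignment and list update described in the abstract above; the group definitions, bandwidth, and power figures are assumptions for illustration.

```python
GROUPS = {
    # group -> (member devices, available bandwidth in GB/s, power in watts)
    "pair_ab": ({"dev_a", "dev_b"}, 40, 300),
    "pair_cd": ({"dev_c", "dev_d"}, 40, 250),
    "quad_abcd": ({"dev_a", "dev_b", "dev_c", "dev_d"}, 80, 600),
}
AVAILABLE = set(GROUPS)

def assign_task(required_bw: float):
    """Pick the lowest-power available group meeting the bandwidth requirement,
    then mark it and every group sharing a device with it as unavailable."""
    eligible = [g for g in AVAILABLE if GROUPS[g][1] >= required_bw]
    if not eligible:
        return None
    chosen = min(eligible, key=lambda g: GROUPS[g][2])
    for g in list(AVAILABLE):
        if GROUPS[g][0] & GROUPS[chosen][0]:          # shares at least one device
            AVAILABLE.discard(g)
    return chosen

print(assign_task(30))   # pair_cd (lowest power); quad_abcd also becomes unavailable
print(assign_task(30))   # pair_ab (the only group left)
print(assign_task(30))   # None
```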
  • Patent number: 11635994
    Abstract: A system and method have been devised for optimization and load balancing for computer clusters, comprising a distributed computational graph, a server architecture using multi-dimensional time-series databases for continuous load simulation and forecasting, a server architecture using traditional databases for discrete load simulation and forecasting, and using a combination of real-time data and records of previous activity for continuous and precise load forecasting for computer clusters, datacenters, or servers.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: April 25, 2023
    Assignee: QOMPLX, INC.
    Inventors: Jason Crabtree, Andrew Sellers
  • Patent number: 11632338
    Abstract: Described herein are systems, methods, and software to manage resources in a gateway shared by multiple tenants. In one example, a system may monitor usage of resources by a tenant of the gateway and compare the usage with usage limits associated with the resources. The system may further determine when the usage of a resource exceeds a usage limit associated with the resource and, when the usage of the resource exceeds the usage limit, identify an operation associated with causing the usage limit to be exceeded and blocking the operation.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: April 18, 2023
    Assignee: VMware, Inc.
    Inventors: Ravi Kumar Reddy Kottapalli, Srinivas Sampatkumar Hemige
  • Patent number: 11630810
    Abstract: Implementations described and claimed herein provide systems and methods for tuning and sizing one or more storage appliances in a storage system with respect to an application load and for optimizing a storage system based on a configuration of a client network and/or a storage appliance in a storage network. In one implementation, data corresponding to an application load configured to be applied to a storage appliance in the storage system is obtained. The application load is characterized in the context of a configuration of the storage system. One or more recommendations for optimizing performance of the storage system based on the characterized application load are generated.
    Type: Grant
    Filed: August 5, 2022
    Date of Patent: April 18, 2023
    Assignee: Oracle International Corporation
    Inventor: Michael J. Baranowsky
  • Patent number: 11630706
    Abstract: Systems and techniques for adaptive limited-duration edge resource management are described herein. Available capacity may be calculated for a resource for a node of the edge computing network based on workloads executing on the node. Available set-aside resources may be determined based on the available capacity. A service request may be received from an application executing on the edge computing node. A priority category may be determined for the service request. Set-aside resources from the available set-aside resources may be assigned to a workload associated with the service request based on the priority category.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: April 18, 2023
    Assignee: Intel Corporation
    Inventors: Kshitij Arun Doshi, Francesc Guim Bernat, Ned M. Smith, Christian Maciocco
  • Patent number: 11630689
    Abstract: Image subunit based guest scheduling is disclosed. For example, a memory stores an image registry, which stores a plurality of reference entries each associated with subunits hosted on each node of a plurality of nodes. A scheduler executing on a processor manages deployment of guests to the plurality of nodes including a first node and a second node, where a first guest is associated with an image file that includes a first subunit and a second subunit. The image registry is queried for at least one node of the plurality of nodes hosting the first subunit and/or the second subunit and the first node is determined to host the first subunit. The first guest is scheduled to the first node based on the first node hosting the first subunit.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: April 18, 2023
    Assignee: Red Hat, Inc.
    Inventor: Huamin Chen
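    Illustrative sketch: a minimal Python reading of the registry query and placement described in the abstract above; scoring nodes by how many of the image's subunits they already host is one assumed scheduling rule, and the node and subunit names are hypothetical.

```python
# Image registry: node -> set of image subunits (layers) already hosted there.
IMAGE_REGISTRY = {
    "node-1": {"base-os", "python-runtime"},
    "node-2": {"base-os"},
}

def schedule_guest(image_subunits: set) -> str:
    """Query the registry and place the guest on the node that already hosts
    the most of its image's subunits (ties broken by node name)."""
    return max(sorted(IMAGE_REGISTRY),
               key=lambda node: len(IMAGE_REGISTRY[node] & image_subunits))

print(schedule_guest({"base-os", "python-runtime", "app-layer"}))  # node-1
```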
  • Patent number: 11630702
    Abstract: Technologies for composing a managed node with multiple processors on multiple compute sleds to cooperatively execute a workload include a memory, one or more processors connected to the memory, and an accelerator. The accelerator further includes a coherence logic unit that is configured to receive a node configuration request to execute a workload. The node configuration request identifies the compute sled and a second compute sled to be included in a managed node. The coherence logic unit is further configured to modify a portion of local working data associated with the workload on the compute sled in the memory with the one or more processors of the compute sled, determine coherence data indicative of the modification made by the one or more processors of the compute sled to the local working data in the memory, and send the coherence data to the second compute sled of the managed node.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: April 18, 2023
    Assignee: Intel Corporation
    Inventors: Mohan J. Kumar, Murugasamy K. Nachimuthu, Krishna Bhuyan
  • Patent number: 11627181
    Abstract: Systems and methods for monitoring utilization rates of a plurality of network-connected databases; receiving a first data read request from a first user device for a data element stored in the plurality of network-connected databases; selecting a first target database among the plurality of network-connected databases based on the utilization rates and load sharing ratios; generating a first data query for a copy of the data element stored in the first target database; and forwarding the copy of the data element from the first target database to the first user device in response to the first data read request.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: April 11, 2023
    Assignee: Coupang Corp.
    Inventors: Zhan Chen, Seong Hyun Jeong, Hyeong Gun Lee
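    Illustrative sketch: a minimal Python reading of the utilization- and ratio-based selection described in the abstract above; scaling each load-sharing ratio by spare capacity and sampling from the result is one assumed way to combine the two signals, and the database names and figures are hypothetical.

```python
import random

DATABASES = ["db_primary", "db_replica_1", "db_replica_2"]
LOAD_SHARING_RATIOS = {"db_primary": 0.2, "db_replica_1": 0.4, "db_replica_2": 0.4}
UTILIZATION = {"db_primary": 0.90, "db_replica_1": 0.35, "db_replica_2": 0.60}

def select_target_database() -> str:
    """Weight each database by its load-sharing ratio scaled by spare capacity,
    then pick one at random with those weights."""
    weights = [LOAD_SHARING_RATIOS[db] * (1.0 - UTILIZATION[db]) for db in DATABASES]
    return random.choices(DATABASES, weights=weights, k=1)[0]

def handle_read(data_element_id: str):
    target = select_target_database()
    query = f"SELECT * FROM elements WHERE id = '{data_element_id}'"  # first data query
    return target, query

print(handle_read("sku-123"))   # usually routed to the lightly loaded replica
```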
  • Patent number: 11625271
    Abstract: A data management process determines, from user-implemented provisional reservations (400) for data processing resources, a projected total capacity requirement for each said data processing resource, by maintaining a record (9, 90, 91) recording previous such reservations made by each user and comparing each reservation with records (87, 88, 89) of the actual resources used, to provide an estimate of resources required to meet the projected capacity requirement, and to provide data for a demand management processor (2), which controls associated configurable data processing equipment (1) to provide the resources required to meet the estimated capacity required. The process takes account of over- and under-ordering of capacity by comparing each reservation (400) with the use actually made (600), and includes a record (10) of ad-hoc (unreserved) usage.
    Type: Grant
    Filed: April 29, 2016
    Date of Patent: April 11, 2023
    Assignee: BRITISH TELECOMMUNICATIONS public limited company
    Inventors: Carla Di Cairano-Gilfedder, Kjeld Jensen, Gilbert Owusu
  • Patent number: 11627085
    Abstract: Provided is a non-transitory computer-readable recording medium storing a service management program that causes a computer to execute a process, the process including acquiring a first input load indicating an amount of inputs received by a service at a first point in time, the service being implemented by containers, identifying first numbers of the containers corresponding to the first input load by referring to a storage unit that stores information where a second input load is associated with second numbers of the containers, the second input load indicating an amount of inputs received by the service when a response time of the service is reduced by increasing numbers of the containers to the second numbers of the containers in each of second points in time prior to the first point in time, and increasing the numbers of containers to the first numbers of the containers.
    Type: Grant
    Filed: November 10, 2021
    Date of Patent: April 11, 2023
    Assignee: FUJITSU LIMITED
    Inventor: Shinya Kuwamura
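    Illustrative sketch: a minimal Python reading of the stored load-to-container mapping described in the abstract above; the recorded history values and the "smallest covering entry" lookup rule are assumptions for illustration.

```python
# Stored mapping from a past input load to the container count that restored
# the response time at that load (the "second input load" records).
SCALE_HISTORY = [(100, 2), (500, 4), (1200, 8)]   # (requests/sec, containers)

def containers_for(first_input_load: float) -> int:
    """Return the smallest recorded container count whose associated input
    load covers the current load; fall back to the largest known count."""
    for recorded_load, containers in sorted(SCALE_HISTORY):
        if first_input_load <= recorded_load:
            return containers
    return max(c for _, c in SCALE_HISTORY)

print(containers_for(450))    # 4 containers
print(containers_for(3000))   # 8 containers (beyond the recorded history)
```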
  • Patent number: 11620201
    Abstract: A method for managing a storage system includes monitoring the storage system to obtain a set of input/output (I/O) telemetry entries, determining a workload signature based on the set of I/O telemetry entries, obtaining, based on the workload signature, a set of performance metrics, performing a parameter analysis to determine a set of alternative storage system parameterizations, based on the set of alternative storage system parameterizations, generating a set of alternative performance metrics, updating a performance index structure based on the performance metrics and the set of alternative performance metrics to obtain an updated performance index structure, and initiating a storage system update based on the updated performance index structure.
    Type: Grant
    Filed: August 30, 2021
    Date of Patent: April 4, 2023
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Vinicius Michel Gottin, Jaumir Valenca Da Silveira Junior
  • Patent number: 11622015
    Abstract: A method for configuring at least one OPC UA PubSub subscriber in an in particular industrial network, in which a) a virtual address space is provided for the at least one subscriber on a configuration module that is separate from the at least one subscriber, b) a configuration for the at least one subscriber is performed and/or a configuration already existing for the at least one subscriber is changed in the virtual address space of the at least one subscriber, c) the configuration module converts the configuration and/or configuration change into at least one PubSub message, d) the at least one PubSub message is transmitted to the at least one subscriber, and e) the at least one subscriber is configured according to the at least one PubSub message. In addition, the invention relates to an automation system, a computer program and a computer-readable medium.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: April 4, 2023
    Assignee: SIEMENS AKTIENGESELLSCHAFT
    Inventors: Sven Kerschbaum, Stephan Home, Frank Volkmann
  • Patent number: 11614972
    Abstract: Techniques are described for distributing network device tasks across virtual machines executing in a computing cloud. A network device includes a network interface to send and receive messages, a routing unit comprising one or more processors configured to execute a version of a network operating system, and a virtual machine agent. The virtual machine agent is configured to identify a virtual machine executing at a computing cloud communicatively coupled to the network device, wherein the identified virtual machine executes an instance of the version of the network operating system, to send, using the at least one network interface and to the virtual machine, a request to perform a task, and to receive, using the at least one network interface and from the virtual machine, a task response that includes a result of performing the task. The routing unit is configured to update the network device based on the result.
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: March 28, 2023
    Assignee: Juniper Networks, Inc.
    Inventors: Joel Obstfeld, David Ward, Colby Barth, Mu Lin
  • Patent number: 11608135
    Abstract: An operating device for a bicycle is disclosed. The operating device includes a display ring and a coaxially arranged operating ring arranged to rotate relative to the display ring. The display ring includes a display for displaying information and a first magnetic element that cooperates with a second magnetic element arranged on or in the operating ring. In a first position of the operating ring relative to the display ring, the first magnetic element is proximate to the second magnetic element, and if the operating ring is rotated away from the first position, the first magnetic element and the second element magnetically cooperate to bias the operating ring back to the first position. The display ring further includes a sensor that receives a signal when in proximity of a signal providing element.
    Type: Grant
    Filed: July 19, 2022
    Date of Patent: March 21, 2023
    Inventor: Thomas Roberts
  • Patent number: 11610166
    Abstract: A predefined hierarchical service tree can be stored that includes a top at a service category definition level and a bottom at a level of a number of devices, each of the number of devices selected to perform a specific service function. A sequential progression can be enforced through the predefined hierarchical service tree to perform a service.
    Type: Grant
    Filed: October 29, 2013
    Date of Patent: March 21, 2023
    Assignee: Micro Focus LLC
    Inventors: Gil Tzadikevitch, Ran Biron, Oded Zilinsky
  • Patent number: 11609799
    Abstract: A method and system for distributing a compute model and data to process to heterogeneous and distributed compute devices. The compute model and a portion of the data is processed on a benchmark system and the timing used to make a job execution speed estimate for each compute device. Compute devices are selected and assigned data chunks based on the estimate so distributed processing is completed within a predefined time period. The compute model and data chunks can be sent to the respective compute devices using separate processes, such as a payload manager configured to transfer compute jobs to remote devices and a messaging engine configured to transfer data messages, and where the payload manager and messaging engine communicate with corresponding software engines on the compute devices.
    Type: Grant
    Filed: June 9, 2022
    Date of Patent: March 21, 2023
    Assignee: SAILION INC.
    Inventors: Tyler Gross, Ronald A. Felice
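    Illustrative sketch: a minimal Python reading of the benchmark-based distribution plan described in the abstract above; selecting the fastest devices until the deadline is covered and splitting the data chunks proportionally to measured rate are assumptions for illustration.

```python
def plan_distribution(total_items: int, deadline_s: float,
                      benchmark_items: int, benchmark_times: dict) -> dict:
    """benchmark_times: {device: seconds the device took on the benchmark portion}.
    Estimate each device's items/second, keep devices until the combined rate
    can finish within the deadline, and split the data chunks proportionally."""
    rates = {dev: benchmark_items / t for dev, t in benchmark_times.items()}
    selected, combined = {}, 0.0
    for dev, rate in sorted(rates.items(), key=lambda kv: -kv[1]):   # fastest first
        selected[dev] = rate
        combined += rate
        if combined * deadline_s >= total_items:
            break
    if combined * deadline_s < total_items:
        raise RuntimeError("deadline not achievable with available devices")
    return {dev: round(total_items * rate / combined) for dev, rate in selected.items()}

print(plan_distribution(300, deadline_s=60, benchmark_items=100,
                        benchmark_times={"edge-1": 25.0, "edge-2": 50.0, "edge-3": 100.0}))
# {'edge-1': 200, 'edge-2': 100}: two devices already meet the 60 s budget
```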
  • Patent number: 11606268
    Abstract: A cloud service provider network may receive, from a cloud subscriber device, a request to access an application, wherein the cloud service provider network includes a split interface associated with the cloud subscriber device. The cloud service provider network may provide, to the cloud operator device, the request to access the application, wherein the cloud operator device stores the application. The cloud service provider network may receive, from the cloud operator device, the application, based on the request to access the application. The cloud service provider network may provide the application to the cloud subscriber device via the application interface of the split interface, wherein the connectivity interface connects the cloud subscriber device and the cloud operator device so that the application is provided to the cloud subscriber device via the application interface.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: March 14, 2023
    Assignee: Verizon Patent and Licensing Inc.
    Inventor: Mehmet Toy
  • Patent number: 11606392
    Abstract: An apparatus, related devices and methods, having a memory element operable to store instructions; and a processor operable to execute the instructions, such that the apparatus is configured to determine, based on operating system workload demands, whether a high-demand application is running and, based on a determination that a high-demand application is running, apply an optimization policy that modifies a security application, wherein the optimization policy modification includes reducing a protection applied by the security application.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: March 14, 2023
    Assignee: McAfee, LLC
    Inventors: Shuborno Biswas, Junmin Qiu, Siddaraya B. Revashetti
  • Patent number: 11599783
    Abstract: A function creation method is disclosed. The method comprises defining one or more database function inputs, defining cluster processing information, defining a deep learning model, and defining one or more database function outputs. A database function is created based at least in part on the one or more database function inputs, the cluster set-up information, the deep learning model, and the one or more database function outputs. In some embodiments, the database function enables a non-technical user to utilize deep learning models.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: March 7, 2023
    Assignee: Databricks, Inc.
    Inventors: Sue Ann Hong, Shi Xin, Timothee Hunter, Ali Ghodsi
  • Patent number: 11599343
    Abstract: A method, an improvement node, a system and a computer program for computing an improvement result for a runtime environment of at least one application, on a device in a medical context. An embodiment of the method includes detecting a state of the runtime environment on the device; accessing a database with the state detected, to retrieve a corresponding at least one candidate improvement result; using the at least one candidate improvement result retrieved, for test-wise execution on a test infrastructure in which the state of the runtime environment detected is provided identically; measuring improvement parameters of the test-wise execution; and adding those candidate improvement results, of the at least one candidate improvement result retrieved, for which the measured improvement parameters meet defined requirements.
    Type: Grant
    Filed: May 6, 2019
    Date of Patent: March 7, 2023
    Assignee: SIEMENS HEALTHCARE GMBH
    Inventors: Lutz Dominick, Vladyslav Ukis
  • Patent number: 11593368
    Abstract: A cluster view method of a database to perform compaction and clustering of database objects, such as a database materialized view, is shown. The database can comprise a cache to store changes to storage units of tables of the database objects. The cluster view method can implement clustering to remove data based on the cache and clustering to group the data of the materialized view.
    Type: Grant
    Filed: April 29, 2022
    Date of Patent: February 28, 2023
    Assignee: Snowflake Inc.
    Inventors: Varun Ganesh, Saiyang Gou, Prasanna Rajaperumal, Wenhao Song, Libo Wang, Jiaqi Yan
  • Patent number: 11586467
    Abstract: Certain embodiments of the present disclosure provide techniques for dynamically and reliably scaling a data processing pipeline in a computing environment. The method generally includes receiving a definition of a data pipeline to be instantiated on a set of resources in a computing environment. The data pipeline is converted into a plurality of steps, each step being defined as one or more workers. The one or more workers are instantiated. Each worker generally includes a user process and a processing coordinator to coordinate termination of the user process. Communications are orchestrated between one or more data sources and the one or more workers. The one or more workers are terminated by invoking a termination coordination process exposed by the user process and the processing coordinator associated with each worker of the one or more workers.
    Type: Grant
    Filed: March 30, 2022
    Date of Patent: February 21, 2023
    Assignee: INTUIT INC.
    Inventor: Alexander Edwin Collins