Patents Examined by Hiren P Patel
  • Patent number: 11687370
    Abstract: An embodiment for resource management is provided. The embodiment may include receiving created text of an activity assigned to a proposed assignee. The embodiment may also include identifying information about the assigned activity. The embodiment may further include predicting resources and capabilities required to complete the assigned activity. The embodiment may also include identifying the proposed assignee. The embodiment may further include analyzing the resources and capabilities available on one or more devices of the proposed assignee. The embodiment may also include, in response to determining the proposed assignee is able to complete the assigned activity, displaying to an assignor a predicted start time and time of completion of the assigned activity, and, in response to determining the proposed assignee is unable to complete the assigned activity, recommending to the assignor another assignee that is able to complete the assigned activity.
    Type: Grant
    Filed: November 23, 2020
    Date of Patent: June 27, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Raghuveer Prasad Nagar, Sarbajit K. Rakshit, Jagadesh Ramaswamy Hulugundi, Prashanth Krishna Rao
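    Illustrative sketch (not part of the patent): a minimal Python sketch of the decision flow the abstract describes: predict the resources and capabilities an activity needs, check them against what is available to the proposed assignee, and either report a predicted start and completion time or recommend another assignee. All names (Assignee, assign, the predict callback) are hypothetical.
      from dataclasses import dataclass, field
      from datetime import datetime, timedelta

      @dataclass
      class Assignee:
          name: str
          capabilities: set = field(default_factory=set)      # e.g. {"laptop", "cad_license"}
          next_free: datetime = datetime(2023, 1, 2, 9, 0)     # when this person is next available

      def assign(activity_text, proposed, others, predict):
          """Schedule the proposed assignee if able, else recommend someone who is."""
          required_caps, est_hours = predict(activity_text)    # predicted resources and effort
          for person in [proposed] + others:
              if required_caps <= person.capabilities:         # able to complete the activity?
                  start = person.next_free
                  finish = start + timedelta(hours=est_hours)
                  verdict = "assign" if person is proposed else "recommend"
                  return verdict, person.name, start, finish
          return "no_assignee", None, None, None

      # Toy predictor: every activity needs a laptop and takes 4 hours.
      print(assign("Prepare quarterly report",
                   Assignee("Avery", {"laptop"}),
                   [Assignee("Blake", {"laptop", "gpu"})],
                   predict=lambda text: ({"laptop"}, 4)))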
  • Patent number: 11650857
    Abstract: Disclosed are systems, hybrid compute environments, methods and computer-readable media for dynamically provisioning nodes for a workload. In the hybrid compute environment, each node communicates with a first resource manager associated with a first operating system and a second resource manager associated with a second operating system. The method includes receiving an instruction to provision at least one node in the hybrid compute environment from the first operating system to the second operating system, after provisioning the second operating system, polling at least one signal from the second resource manager associated with the at least one node, processing at least one signal from the second resource manager associated with the at least one node and consuming resources associated with the at least one node having the second operating system provisioned thereon.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: May 16, 2023
    Assignee: III Holdings 12, LLC
    Inventor: David Brian Jackson
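    Illustrative sketch (not part of the patent): a rough Python sketch of the provisioning loop described above, assuming the flow is to reprovision the node to the second operating system, poll the second resource manager until the node signals readiness, and then consume the node's resources; class and method names are invented.
      import time

      class ResourceManager:
          """Stand-in for an OS-specific resource manager."""
          def __init__(self, os_name):
              self.os_name = os_name
              self._ready_at = time.time() + 0.2       # pretend provisioning takes 200 ms

          def poll(self, node):
              return {"node": node, "os": self.os_name,
                      "ready": time.time() >= self._ready_at}

      def provision_and_consume(node, second_rm, workload):
          print(f"provisioning {node} to {second_rm.os_name}")              # reprovision the node
          while not (signal := second_rm.poll(node))["ready"]:              # poll the second manager
              time.sleep(0.05)
          print(f"running {workload} on {signal['node']} ({signal['os']})")  # consume its resources

      provision_and_consume("node-07", ResourceManager("linux"), "batch-job-42")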
  • Patent number: 11640313
    Abstract: Embodiments of the present disclosure provide a device upgrade method and apparatus. Both NFVI (network functions virtualisation infrastructure) resources used to create a target VNFC (virtualised network function component) and NFVI resources used to scale out/up the target VNFC are fewer than those occupied by a to-be-upgraded VNFC; or both the NFVI resources used to create the target VNFC and the NFVI resources used to scale out/up the target VNFC are fewer than those required for upgrading the to-be-upgraded VNFC.
    Type: Grant
    Filed: April 28, 2020
    Date of Patent: May 2, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Cunxiu Qin, Yajun He
  • Patent number: 11635991
    Abstract: According to one or more embodiments of the present invention, a computer implemented method includes receiving a query for an amount of storage in memory of a computer system to be donated to a secure interface control of the computer system. The secure interface control can determine the amount of storage to be donated based on a plurality of secure entities supported by the secure interface control as a plurality of predetermined values. The secure interface control can return a response to the query indicative of the amount of storage. A donation of storage to be secured for use by the secure interface control can be received based on the response to the query.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: April 25, 2023
    Assignee: International Business Machines Corporation
    Inventors: Utz Bacher, Reinhard Theodor Buendgen, Jonathan D. Bradbury, Lisa Cranton Heller, Fadi Y. Busaba
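    Illustrative sketch (not part of the patent): a small Python model of the query/response/donation exchange, under the assumption that the required donation is a fixed per-entity amount multiplied by the number of secure entities the secure interface control supports; the per-entity value and all names are hypothetical.
      class SecureInterfaceControl:
          PER_ENTITY_BYTES = 4096                  # assumed predetermined per-entity value

          def __init__(self, supported_entities):
              self.supported_entities = supported_entities
              self.donated = 0

          def query_donation_amount(self):
              # Amount to be donated, derived from the supported secure entities.
              return self.supported_entities * self.PER_ENTITY_BYTES

          def donate(self, nbytes):
              if nbytes < self.query_donation_amount():
                  raise ValueError("donation smaller than the amount returned by the query")
              self.donated = nbytes

      sic = SecureInterfaceControl(supported_entities=8)
      needed = sic.query_donation_amount()         # query for the amount of storage
      sic.donate(needed)                           # donate that much storage to be secured
      print(needed, "bytes donated")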
  • Patent number: 11635987
    Abstract: A system and corresponding method queue work within a virtualized scheduler based on in-unit accounting (IUA) of in-unit entries (IUEs). The system comprises an IUA resource and arbiter. The IUA resource stores, in association with an IUA identifier, an IUA count and threshold. The IUA count represents a global count of work-queue entries (WQEs) that are associated with the IUA identifier and occupy respective IUEs of an IUE resource. The IUA threshold limits the global count. The arbiter retrieves the IUA count and threshold from the IUA resource based on the IUA identifier and controls, as a function of the IUA count and threshold, whether a given IUE from a given scheduling group, assigned to the IUA identifier, is moved into the IUE resource to be queued for scheduling. The IUA count and threshold prevent group(s) assigned to the IUA identifier from using more than an allocated amount of IUEs.
    Type: Grant
    Filed: February 24, 2022
    Date of Patent: April 25, 2023
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Jason D. Zebchuk, Wilson P. Snyder, II
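    Illustrative sketch (not part of the patent): a compact Python sketch of the admission check the abstract describes: a per-identifier count and threshold gate whether another work-queue entry may occupy an in-unit entry. Field and method names are invented.
      class InUnitAccounting:
          """Per-identifier count/threshold gating admission into the in-unit entry resource."""
          def __init__(self, thresholds):
              self.threshold = dict(thresholds)            # iua_id -> allocated amount of IUEs
              self.count = {k: 0 for k in thresholds}      # iua_id -> current global count

          def try_admit(self, iua_id):
              if self.count[iua_id] >= self.threshold[iua_id]:
                  return False                             # group would exceed its allocation
              self.count[iua_id] += 1
              return True

          def release(self, iua_id):
              self.count[iua_id] -= 1

      iua = InUnitAccounting({"groupA": 2})
      print([iua.try_admit("groupA") for _ in range(3)])   # [True, True, False]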
  • Patent number: 11635982
    Abstract: According to one embodiment, an information processing apparatus includes: a resource calculator configured to calculate a computing resource amount required to execute a test on a computer platform, the test causing an emulator to transmit data based on a communication model defined in a test scenario and causing a service to receive the data, and configured to determine allocation of the emulator for a computer on the computer platform; a first controller configured to access the computer platform to acquire the computing resource amount; and a second controller configured to configure a setting of the emulator allocated to the computer.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: April 25, 2023
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Tomonori Maegawa, Yumiko Sakai
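    Illustrative sketch (not part of the patent): a Python sketch of the allocation step: estimate the compute the emulators in a test scenario require, then place each emulator on a platform computer that still has capacity. The first-fit policy and all names are assumptions; the abstract does not prescribe them.
      def allocate_emulators(scenario, computers):
          """scenario: list of (emulator, cpus_needed); computers: name -> free cpus."""
          placement, free = {}, dict(computers)
          for emulator, cpus_needed in scenario:
              # First-fit: any computer with enough remaining capacity (assumed policy).
              host = next((c for c, cpus in free.items() if cpus >= cpus_needed), None)
              if host is None:
                  raise RuntimeError(f"not enough compute for {emulator}")
              free[host] -= cpus_needed
              placement[emulator] = host
          total_cpus = sum(cpus for _, cpus in scenario)   # resource amount for the whole test
          return total_cpus, placement

      print(allocate_emulators([("can_bus_emulator", 2), ("gps_emulator", 1)],
                               {"node-a": 2, "node-b": 4}))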
  • Patent number: 11630700
    Abstract: An edge computing device receives, from a user device via an isolated local area network, a request for computing services that are hosted on the edge computing device and not on the user device. The edge computing device accesses policies that are applicable to the user device and the requested computing services. Based on the policies and the requested computing services, the edge computing device instantiates a container configured to provide the requested computing services. The container receives offloaded processing tasks from the device. The container executes the offloaded processing tasks, and sends, to the user device, data indicative of the processed tasks.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: April 18, 2023
    Assignee: T-Mobile USA, Inc.
    Inventor: Ali Daniali
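    Illustrative sketch (not part of the patent): a minimal Python sketch of the request path in the abstract: check the policies that apply to the requesting user device, instantiate a container for the requested service, and run the offloaded tasks in it. The policy format and Container class are assumed for illustration.
      class Container:
          """Illustrative container providing one requested service."""
          def __init__(self, service):
              self.service = service
          def execute(self, task):
              return {"service": self.service, "result": task()}

      def handle_request(user_device, service, policies, offloaded_tasks):
          allowed = policies.get(user_device, set())       # policies applicable to this device
          if service not in allowed:
              raise PermissionError(f"{user_device} may not use {service}")
          container = Container(service)                   # instantiated per policy and request
          return [container.execute(task) for task in offloaded_tasks]

      print(handle_request("phone-123", "image-classify",
                           policies={"phone-123": {"image-classify"}},
                           offloaded_tasks=[lambda: "cat", lambda: "dog"]))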
  • Patent number: 11625283
    Abstract: The technology disclosed relates to inter-processor execution of configuration files on reconfigurable processors using smart network interface controller (SmartNIC) buffers. In particular, the technology disclosed relates to a runtime logic that is configured to execute configuration files that define applications and process application data for applications using a first reconfigurable processor and a second reconfigurable processor. The execution includes streaming configuration data in the configuration files and the application data between the first reconfigurable processor and the second reconfigurable processor using one or more SmartNIC buffers.
    Type: Grant
    Filed: November 9, 2021
    Date of Patent: April 11, 2023
    Assignee: SambaNova Systems, Inc.
    Inventors: Ram Sivaramakrishnan, Sumti Jairath, Emre Ali Burhan, Manish K. Shah, Raghu Prabhakar, Ravinder Kumar, Arnav Goel, Ranen Chatterjee, Gregory Frederick Grohoski, Kin Hing Leung, Dawei Huang, Manoj Unnikrishnan, Martin Russell Raumann, Bandish B. Shah
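    Illustrative sketch (not part of the patent): a loose Python analogy of the streaming arrangement described above, with a bounded in-memory queue standing in for a SmartNIC buffer between two executors; it does not reflect SambaNova's actual runtime.
      import queue, threading

      buffer = queue.Queue(maxsize=4)              # stands in for a SmartNIC buffer

      def first_processor(config_files, app_data):
          for item in list(config_files) + list(app_data):
              buffer.put(item)                     # stream configuration data, then application data
          buffer.put(None)                         # end-of-stream marker

      def second_processor():
          while (item := buffer.get()) is not None:
              print("received:", item)

      t = threading.Thread(target=second_processor)
      t.start()
      first_processor(["cfg_part0.bin", "cfg_part1.bin"], ["batch0", "batch1"])
      t.join()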
  • Patent number: 11625284
    Abstract: The technology disclosed relates to inter-node execution of configuration files on reconfigurable processors using smart network interface controller (SmartNIC) buffers. In particular, the technology disclosed relates to a runtime logic that is configured to execute configuration files that define applications and process application data for applications using a first reconfigurable processor on a first node, and a second host processor on a second node. The execution includes streaming configuration data in the configuration files and the application data between the first reconfigurable processor and the second host processor using one or more SmartNIC buffers.
    Type: Grant
    Filed: November 9, 2021
    Date of Patent: April 11, 2023
    Assignee: SambaNova Systems, Inc.
    Inventors: Ram Sivaramakrishnan, Sumti Jairath, Emre Ali Burhan, Manish K. Shah, Raghu Prabhakar, Ravinder Kumar, Arnav Goel, Ranen Chatterjee, Gregory Frederick Grohoski, Kin Hing Leung, Dawei Huang, Manoj Unnikrishnan, Martin Russell Raumann, Bandish B. Shah
  • Patent number: 11614966
    Abstract: An apparatus includes a processor to receive a request to provide a view of an object associated with a job flow, and in response to determining that the object is associated with a task type requiring access to a particular resource not accessible to a first interpretation routine: store, within a job queue, a job flow generation request message to cause generation of a job flow definition that defines another job flow for generating the requested view; within a task container in which a second interpretation routine that does have access to the particular resource is executed, generate the job flow definition; store, within a task queue, a job flow generation completion message that includes a copy of the job flow definition; use the job flow definition to perform the other job flow to generate the requested view; and transmit the requested view to the requesting device.
    Type: Grant
    Filed: April 29, 2022
    Date of Patent: March 28, 2023
    Assignee: SAS INSTITUTE INC.
    Inventors: Henry Gabriel Victor Bequet, Ronald Earl Stogner, Eric Jian Yang, Chaowang “Ricky” Zhang
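    Illustrative sketch (not part of the patent): a simplified Python sketch of the message flow: when the first interpretation routine lacks access, a generation request goes onto a job queue, a task container holding the second interpretation routine produces the job flow definition, and a completion message carrying that definition comes back on a task queue. All queue and message names are invented.
      from collections import deque

      job_queue, task_queue = deque(), deque()

      def request_view(obj, first_routine_has_access):
          if first_routine_has_access:
              return f"view({obj})"                            # simple case, handled directly
          job_queue.append({"type": "generate_job_flow", "object": obj})
          run_task_container()                                 # second routine has the needed access
          msg = task_queue.popleft()                           # job flow generation completion message
          return perform_job_flow(msg["job_flow_definition"])

      def run_task_container():
          req = job_queue.popleft()
          definition = {"tasks": [f"load({req['object']})", "render_view"]}
          task_queue.append({"type": "completion", "job_flow_definition": definition})

      def perform_job_flow(definition):
          return " -> ".join(definition["tasks"])

      print(request_view("sales_table", first_routine_has_access=False))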
  • Patent number: 11614965
    Abstract: An apparatus includes a processor to: receive a request to perform a job flow; within a performance container, based on the data dependencies among a set of tasks of the job flow, derive an order of performance of the set of tasks that includes a subset able to be performed in parallel, and derive a quantity of task containers to enable the parallel performance of the subset; based on the derived quantity of task containers, derive a quantity of virtual machines (VMs) to enable the parallel performance of the subset; provide, to a VM allocation routine, an indication of a need for provision of the quantity of VMs; and store, within a task queue, multiple task routine execution request messages to enable parallel execution of task routines within the quantity of task containers to cause the parallel performance of the subset.
    Type: Grant
    Filed: April 29, 2022
    Date of Patent: March 28, 2023
    Assignee: SAS INSTITUTE INC.
    Inventors: Henry Gabriel Victor Bequet, Ronald Earl Stogner, Eric Jian Yang, Chaowang “Ricky” Zhang
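    Illustrative sketch (not part of the patent): a small Python sketch of the sizing step: group tasks into dependency levels (tasks within a level can run in parallel), size the task-container pool to the widest level, and derive a VM count assuming a fixed number of containers per VM; the containers-per-VM ratio is an assumption the abstract does not state.
      import math

      def parallel_levels(deps):
          """deps: task -> set of prerequisites; returns tasks grouped by dependency depth."""
          levels, done, remaining = [], set(), dict(deps)
          while remaining:
              ready = [t for t, pre in remaining.items() if pre <= done]
              levels.append(ready)
              done.update(ready)
              for t in ready:
                  del remaining[t]
          return levels

      deps = {"load": set(), "clean_a": {"load"}, "clean_b": {"load"}, "join": {"clean_a", "clean_b"}}
      levels = parallel_levels(deps)
      containers = max(len(level) for level in levels)      # widest parallel subset of tasks
      vms = math.ceil(containers / 2)                       # assume 2 task containers per VM
      print(levels, containers, vms)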
  • Patent number: 11614976
    Abstract: In accordance with an embodiment, described herein are systems and methods for determining or allocating an amount, quantity, or number of compute instances or virtual machines for use with extract, transform, load (ETL) processes. In an example embodiment, a particular (e.g., optimal) number of virtual machines (VMs) can be determined by predicting ETL completion times for customers, using historical data. ETL processes can be simulated with an initial/particular number of virtual machines. If the predicted duration is greater than the desired duration, the number of virtual machines can be incremented, and the simulation repeated. Actual completion times from ETL processes can be fed back, to update a determined number of compute instances or virtual machines. In accordance with an embodiment, the system can be used, for example, to generate alerts associated with customer service level agreements (SLAs).
    Type: Grant
    Filed: April 20, 2020
    Date of Patent: March 28, 2023
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Krishnan Ramanathan, Jagan Narayanareddy, Gunaranjan Vasireddy, Aman Madaan
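    Illustrative sketch (not part of the patent): a toy Python version of the sizing loop: simulate the ETL run with the current VM count and increment it until the predicted completion time fits the desired duration. The linear-speedup model in predict_duration is an assumption.
      def predict_duration(rows, rows_per_hour_per_vm, vms):
          return rows / (rows_per_hour_per_vm * vms)         # assumed linear-speedup model

      def size_etl(rows, rows_per_hour_per_vm, desired_hours, vms=1, max_vms=64):
          while predict_duration(rows, rows_per_hour_per_vm, vms) > desired_hours and vms < max_vms:
              vms += 1                                       # predicted too slow: add a VM, re-simulate
          return vms

      vms = size_etl(rows=10_000_000, rows_per_hour_per_vm=400_000, desired_hours=4)
      print(vms, "VMs,", predict_duration(10_000_000, 400_000, vms), "hours predicted")
    Actual completion times, fed back as the abstract describes, would then be used to recalibrate the assumed per-VM throughput over time.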
  • Patent number: 11609798
    Abstract: The technology disclosed relates to runtime execution of configuration files on reconfigurable processors with varying configuration granularity. In particular, the technology disclosed relates to a runtime logic that is configured to receive a set of configuration files for an application, and load and execute a first subset of configuration files in the set of configuration files and associated application data on a first reconfigurable processor. The first reconfigurable processor has a first level of configurable granularity. The runtime logic is further configured to load and execute a second subset of configuration files in the set of configuration files and associated application data on a second reconfigurable processor. The second reconfigurable processor has a second level of configurable granularity that is different from the first level of configurable granularity.
    Type: Grant
    Filed: November 9, 2021
    Date of Patent: March 21, 2023
    Assignee: SambaNova Systems, Inc.
    Inventors: Ram Sivaramakrishnan, Sumti Jairath, Emre Ali Burhan, Manish K. Shah, Raghu Prabhakar, Ravinder Kumar, Arnav Goel, Ranen Chatterjee, Gregory Frederick Grohoski, Kin Hing Leung, Dawei Huang, Manoj Unnikrishnan, Martin Russell Raumann, Bandish B. Shah
  • Patent number: 11593171
    Abstract: A method includes communicatively coupling a shared computing resource to core computing resources associated with a first project. The core computing resources associated with the first project are configured to use the shared computing resource to perform data processing operations associated with the first project. The method also includes reassigning the shared computing resource to a second project by (i) powering down the shared computing resource, (ii) disconnecting the shared computing resource from the core computing resources associated with the first project, (iii) communicatively coupling the shared computing resource to core computing resources associated with the second project, and (iv) powering up the shared computing resource. The core computing resources associated with the second project are configured to use the shared computing resource to perform data processing operations associated with the second project.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: February 28, 2023
    Assignee: Raytheon Company
    Inventors: Douglas A. Meyer, John D. Stone, Dudley F. Spooner, II, Ryan L. Bird, Amzie L. McWhorter
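    Illustrative sketch (not part of the patent): a short Python sketch of the reassignment sequence (i)-(iv): power down, disconnect from the first project's core resources, couple to the second project's, and power back up. The resource object is purely illustrative.
      class SharedResource:
          def __init__(self, name):
              self.name, self.powered, self.project = name, False, None
          def power_down(self):           self.powered = False
          def power_up(self):             self.powered = True
          def disconnect(self):           self.project = None
          def connect(self, project):     self.project = project

      def reassign(resource, new_project):
          resource.power_down()           # (i)   power down the shared resource
          resource.disconnect()           # (ii)  disconnect from the first project's core resources
          resource.connect(new_project)   # (iii) couple it to the second project's core resources
          resource.power_up()             # (iv)  power it back up

      node = SharedResource("compute-rack-3")
      node.connect("project-alpha"); node.power_up()
      reassign(node, "project-bravo")
      print(node.project, node.powered)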
  • Patent number: 11586467
    Abstract: Certain embodiments of the present disclosure provide techniques for dynamically and reliably scaling a data processing pipeline in a computing environment. The method generally includes receiving a definition of a data pipeline to be instantiated on a set of resources in a computing environment. The data pipeline is converted into a plurality of steps, each step being defined as one or more workers. The one or more workers are instantiated. Each worker generally includes a user process and a processing coordinator to coordinate termination of the user process. Communications are orchestrated between one or more data sources and the one or more workers. The one or more workers are terminated by invoking a termination coordination process exposed by the user process and the processing coordinator associated with each worker of the one or more workers.
    Type: Grant
    Filed: March 30, 2022
    Date of Patent: February 21, 2023
    Assignee: INTUIT INC.
    Inventor: Alexander Edwin Collins
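    Illustrative sketch (not part of the patent): a minimal Python sketch of the worker shape described above: each worker pairs a user process with a processing coordinator, and shutdown goes through a coordinated termination call rather than killing the user process directly. All class names are invented.
      class UserProcess:
          def __init__(self, step):
              self.step, self.running = step, True
          def handle(self, record):
              return f"{self.step}:{record}"
          def request_stop(self):              # termination hook exposed by the user process
              self.running = False

      class Worker:
          """A user process plus a processing coordinator that owns its shutdown."""
          def __init__(self, step):
              self.user_process = UserProcess(step)
          def process(self, record):
              return self.user_process.handle(record)
          def terminate(self):
              # Coordinated termination: ask the user process to stop, then confirm it did.
              self.user_process.request_stop()
              return not self.user_process.running

      workers = [Worker(step) for step in ["extract", "transform"]]
      print([w.process("row-1") for w in workers])
      print([w.terminate() for w in workers])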
  • Patent number: 11586470
    Abstract: A method, system, and computer program product for running workflows and events using a stateless orchestrator includes: receiving first task data for a first task, where the first task data is information necessary for execution of the first task. The method may also include transmitting a request for a worker node to a provider, where the provider creates the worker node. The method may also include receiving a request from the worker node for the first task data. The method may also include transmitting the first task data to the worker node, where the worker node executes the first task. The method may also include receiving results of the execution of the first task from the worker node. The method may also include, in response to receiving the results, transmitting the results to a database.
    Type: Grant
    Filed: August 7, 2019
    Date of Patent: February 21, 2023
    Assignee: International Business Machines Corporation
    Inventors: Benjamin Ralf Salchow, Markus Reichart
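    Illustrative sketch (not part of the patent): a compact Python sketch of the exchange in the abstract: the orchestrator asks a provider for a worker node, hands over the task data when the worker requests it, and writes the returned result to a database. The in-memory provider and database are stand-ins.
      class Orchestrator:
          """Task state lives in the task data and the database, not in the orchestrator."""
          def __init__(self, provider, database):
              self.provider, self.database = provider, database

          def run(self, task_id, task_data):
              worker = self.provider()                 # request a worker node from the provider
              result = worker(task_data)               # worker pulls the task data and executes it
              self.database[task_id] = result          # persist the result
              return result

      database = {}
      orchestrator = Orchestrator(provider=lambda: (lambda data: data["x"] + data["y"]),
                                  database=database)
      orchestrator.run("task-1", {"x": 2, "y": 3})
      print(database)                                  # {'task-1': 5}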
  • Patent number: 11573830
    Abstract: Methods, apparatus, systems and articles of manufacture (e.g., physical storage media) to implement and manage software defined silicon products are disclosed. Example semiconductor devices disclosed herein include circuitry configurable to provide one or more features. Disclosed example semiconductor devices also include a license processor to activate or deactivate at least one of the one or more features based on a license received via a network from a first remote enterprise system. Disclosed example semiconductor devices further include an analytics engine to report telemetry data associated with operation of the semiconductor device to at least one of the first remote enterprise system or a second remote enterprise system, the analytics engine to report the telemetry data in response to activation or deactivation of the at least one of the one or more features based on the license.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: February 7, 2023
    Assignee: Intel Corporation
    Inventors: Katalin Klara Bartfai-Walcott, Arkadiusz Berent, Vasuki Chilukuri, Mark Baldwin, Vasudevan Srinivasan, Vinila Rose, Mariusz Oriol, Justyna Chilczuk, Bartosz Gotowalski
  • Patent number: 11561817
    Abstract: A method includes, with a Virtual Network Function (VNF) manager, managing a VNF that includes a plurality of VNF components running on a plurality of virtual machines, the virtual machines running on a plurality of physical computing machines, and with the VNF manager, causing a Network Function Virtualization Infrastructure (NFVI) to have a total number of virtual machines provisioned, the total number being equal to a number of virtual machines capable of providing for a current demand for VNF components plus an additional number of virtual machines equal to the highest number of virtual machines being provided by a single one of the plurality of physical computing machines.
    Type: Grant
    Filed: November 3, 2021
    Date of Patent: January 24, 2023
    Assignee: GENBAND US LLC
    Inventor: Paul Miller
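    Illustrative sketch (not part of the patent): the sizing rule in the abstract reduces to a small formula: provision enough virtual machines for current demand plus headroom equal to the largest number of VMs hosted on any single physical machine, so that losing that machine leaves capacity intact. A one-line Python sketch:
      def vms_to_provision(demand_vms, vms_per_physical_machine):
          """Total = VMs covering current VNF component demand
          + the highest VM count hosted on any single physical machine."""
          return demand_vms + max(vms_per_physical_machine)

      print(vms_to_provision(demand_vms=12, vms_per_physical_machine=[4, 5, 3]))   # 17
    With the example numbers, losing the busiest physical machine (5 VMs) still leaves the 12 VMs needed for current demand.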
  • Patent number: 11556380
    Abstract: The present disclosure relates to a task execution method, apparatus, device and system, and a storage medium, which relate to the technical field of computers. The method comprises: acquiring a plurality of tasks to be processed; acquiring a first task in the plurality of tasks to be processed, and controlling a task execution device associated with a current control device to execute the first task; and, when a task request from any task execution device is received, acquiring, from the plurality of tasks to be processed, a second task whose target task execution device is that task execution device, and sending the second task to that task execution device.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: January 17, 2023
    Assignee: BEIJING DAJIA INTERNET INFORMATION TECHNOLOGY CO., LTD.
    Inventor: Yuchen Wang
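    Illustrative sketch (not part of the patent): a small Python sketch of the dispatch pattern: the control device runs the first task on its associated execution device, and when another execution device requests work it pulls the next pending task targeted at that device and sends it over. The matching rule and all names are assumed.
      from collections import deque

      class ControlDevice:
          def __init__(self, tasks, local_executor):
              self.pending = deque(tasks)              # the plurality of tasks to be processed
              self.local_executor = local_executor

          def start(self):
              self.local_executor(self.pending.popleft())      # execute the first task locally

          def on_task_request(self, requester_id):
              # Hand out the next pending task whose target matches the requesting device.
              for task in list(self.pending):
                  if task["target"] == requester_id:
                      self.pending.remove(task)
                      return task
              return None

      ctrl = ControlDevice([{"name": "t1", "target": "ctrl"}, {"name": "t2", "target": "dev-2"}],
                           local_executor=lambda t: print("running", t["name"], "locally"))
      ctrl.start()
      print("sent to dev-2:", ctrl.on_task_request("dev-2"))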
  • Patent number: 11556393
    Abstract: A resource management system of an application takes various actions to improve or maintain the health of the application (e.g., keep the application from becoming sluggish). The resource management system maintains a reinforcement learning model indicating which actions the resource management system is to take for various different states of the application. The resource management system performs multiple iterations of a process of identifying a current state of the application, determining an action to take to manage resources for the application, and taking the determined action. In each iteration, the resource management system determines the result of the action taken in the previous iteration and updates the reinforcement learning model so that the reinforcement learning model learns which actions improve the health of the application and which actions do not improve the health of the application.
    Type: Grant
    Filed: January 7, 2020
    Date of Patent: January 17, 2023
    Assignee: Adobe Inc.
    Inventors: Bhakti Ramnani, Sachin Tripathi, Reetesh Mukul, Prabal Kumar Ghosh
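    Illustrative sketch (not part of the patent): a toy Python version of the iterate-observe-update loop, using a simple tabular value update as a stand-in for the patent's reinforcement learning model; the states, actions, and reward function are invented for illustration.
      import random

      states = ["healthy", "sluggish"]
      actions = ["do_nothing", "free_cache", "throttle_background"]
      q = {(s, a): 0.0 for s in states for a in actions}       # the learned model

      def observe_state():                 # stand-in for measuring the application's health
          return random.choice(states)

      def reward(next_state):              # positive when the application ends up healthy
          return 1.0 if next_state == "healthy" else -1.0

      state = observe_state()
      for _ in range(200):
          if random.random() < 0.2:
              action = random.choice(actions)                          # occasionally explore
          else:
              action = max(actions, key=lambda a: q[(state, a)])       # otherwise exploit
          next_state = observe_state()                                 # result of the action taken
          q[(state, action)] += 0.1 * (reward(next_state) - q[(state, action)])  # update the model
          state = next_state

      print(max(q, key=q.get))             # state/action pair the model currently favors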