Patents Examined by Charles M Swift
  • Patent number: 11972302
    Abstract: Certain aspects of the present disclosure provide techniques for processing computing resource access requests from users of an application service. An example method generally includes measuring computing resource access metrics over a time window for a user of a computing system. The measured computing access metrics for the user of the computing system are determined to exceed a threshold. Based on determining that the measured computing access metrics for the user of the computing system exceed the threshold, computing resource access requests from the user of the computing system are migrated from a first queue to a second queue, wherein the first queue comprises a rate-unlimited queue and the second queue comprises a rate-controlled queue having a defined rate for processing received requests. Computing resource access requests from the user of the computing system are processed based on the defined rate for processing received requests.
    Type: Grant
    Filed: December 30, 2022
    Date of Patent: April 30, 2024
    Assignee: Intuit Inc.
    Inventors: Anjaneya Murthy Gabbiti, Fan Li Gabbett, Apurva Patel, Sujay Sundaram, Ajith Kuttappan Rajeswari, Sanjay Channarayapatna Ramakrishna
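    Example sketch: the queue migration in 11972302 can be pictured as a router that counts per-user request metrics and, once a threshold is crossed, sends that user's requests to a rate-controlled queue instead of the rate-unlimited one. This is a minimal Python sketch with hypothetical names (RateControlledQueue, route_request); it rate-limits newly arriving requests rather than migrating already-queued ones, and omits the sliding time window.
      import collections
      import time

      class RateControlledQueue:
          """FIFO queue drained at a defined rate (requests per second)."""
          def __init__(self, rate_per_second):
              self.items = collections.deque()
              self.interval = 1.0 / rate_per_second
              self._next_slot = 0.0

          def put(self, request):
              self.items.append(request)

          def drain_one(self):
              # Process at most one request, honoring the defined rate.
              now = time.monotonic()
              if self.items and now >= self._next_slot:
                  self._next_slot = now + self.interval
                  return self.items.popleft()
              return None

      class RequestRouter:
          """Routes a user's requests to a rate-unlimited or rate-controlled queue."""
          def __init__(self, threshold, rate_per_second):
              self.threshold = threshold
              self.unlimited = collections.deque()
              self.controlled = RateControlledQueue(rate_per_second)
              self.metrics = collections.Counter()  # requests per user (window reset omitted)

          def route_request(self, user_id, request):
              self.metrics[user_id] += 1
              if self.metrics[user_id] > self.threshold:
                  self.controlled.put((user_id, request))    # over threshold: rate-controlled queue
              else:
                  self.unlimited.append((user_id, request))  # normal, rate-unlimited path

      router = RequestRouter(threshold=100, rate_per_second=5)
      router.route_request("user-42", {"op": "read"})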
  • Patent number: 11960931
    Abstract: Systems and methods for media production and broadcasting are provided. A method for a video production system according to the present disclosure includes receiving a request for media production assets from different categories from a connected computing device of an end user; determining a plurality of available production assets for each of the categories of production assets; receiving a selection of production assets from the categories of production assets from the connected computing device.
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: April 16, 2024
    Assignee: NECF
    Inventor: Joseph Henry Maar
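    Example sketch: the request/selection flow in 11960931 can be reduced to a per-category catalog lookup plus validation of the end user's picks. The category names and in-memory catalog below are hypothetical.
      # Hypothetical in-memory catalog of production assets, keyed by category.
      CATALOG = {
          "graphics": ["lower-third-a", "scoreboard", "ticker"],
          "audio": ["intro-theme", "stinger"],
          "camera-feeds": ["cam-1", "cam-2", "drone"],
      }

      def available_assets(categories):
          """Return the available production assets for each requested category."""
          return {c: CATALOG.get(c, []) for c in categories}

      def apply_selection(selection):
          """Validate the end user's selection against the catalog."""
          for category, asset in selection.items():
              if asset not in CATALOG.get(category, []):
                  raise ValueError(f"{asset!r} is not available in {category!r}")
          return selection

      print(available_assets(["graphics", "audio"]))
      print(apply_selection({"graphics": "scoreboard", "audio": "stinger"}))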
  • Patent number: 11960912
    Abstract: A method of generating a user interface for presentation to a user. The method comprises executing a first application computer program to provide a user interface, executing agent computer program code to interrogate and modify said user interface during execution of said first application computer program, and presenting said modified user interface. The first application computer program may be run on a server, while the modified user interface may be presented to a user at a client connected to said server.
    Type: Grant
    Filed: January 23, 2023
    Date of Patent: April 16, 2024
    Assignee: Versata FZ-LLC
    Inventor: Plamen Ivanov Valtchev
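    Example sketch: 11960912 describes agent code that interrogates and modifies the first application's user interface before it is presented. The sketch treats the UI as a nested dict tree and the agent as a post-processing hook; the tree representation and the specific modification are assumptions, not taken from the patent.
      def render_base_ui():
          """Stand-in for the user interface produced by the first application program."""
          return {"type": "form", "children": [
              {"type": "field", "name": "email", "label": "Email"},
              {"type": "button", "name": "submit", "label": "Submit"},
          ]}

      def agent_modify(ui):
          """Agent code: interrogate the UI tree and modify it before presentation."""
          for node in ui.get("children", []):
              if node.get("type") == "field":
                  node["required"] = True   # example modification
              agent_modify(node)            # walk nested elements
          return ui

      def present(ui):
          print(ui)

      # The server runs the application and the agent; the client presents the result.
      present(agent_modify(render_base_ui()))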
  • Patent number: 11948006
    Abstract: A computing resource sharing system and a computing resource sharing method are provided. The method includes: in response to receiving a resource request signal from a resource request device, obtaining a foreground process, a background process, a name of a software service, and an operating status of the software service of a resource sharing device; determining a specific graphic computing resource to be shared according to the foreground process, the background process, the name of the software service, and the operating status of the software service; applying the specific graphic computing resource to assist the resource request device in performing a graphic computing operation; transmitting a graphic computing result of the graphic computing operation back to the resource request device.
    Type: Grant
    Filed: August 16, 2021
    Date of Patent: April 2, 2024
    Assignee: Acer Incorporated
    Inventors: Kuan-Ju Chen, Chia-Jen Tao
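    Example sketch: the sharing decision in 11948006 depends on the resource sharing device's foreground process, background processes, and software-service status. The policy below is purely illustrative; the patent does not publish its rules.
      def select_shared_gpu_resource(foreground, background, service_name, service_running):
          """Decide how much of the local graphic computing resource can be shared."""
          gpu_heavy = {"game.exe", "render.exe"}
          if foreground in gpu_heavy or (service_running and service_name == "encoder"):
              return None                                   # device is busy: share nothing
          if any(p in gpu_heavy for p in background):
              return {"share": "partial", "fraction": 0.25}
          return {"share": "full", "fraction": 0.9}

      def handle_resource_request(request_device, device_state):
          resource = select_shared_gpu_resource(**device_state)
          if resource is None:
              return {"device": request_device, "granted": False}
          # In the real system the graphic computing result would be sent back here.
          return {"device": request_device, "granted": True, "resource": resource}

      state = {"foreground": "browser.exe", "background": ["chat.exe"],
               "service_name": "encoder", "service_running": False}
      print(handle_resource_request("laptop-7", state))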
  • Patent number: 11948001
    Abstract: Methods and apparatus consistent with the present disclosure may be used in environments where multiple different virtual sets of program instructions are executed by shared computing resources. These methods may allow actions associated with a first set of virtual software to be paused to allow a second set of virtual software to be executed by the shared computing resources. In certain instances, methods and apparatus consistent with the present disclosure may manage the operation of one or more sets of virtual software at a point in time. Apparatus consistent with the present disclosure may include a memory and one or more processors that execute instructions out of the memory. At certain points in time, a processor of a computing system may pause a virtual process while allowing instructions associated with another virtual process to be executed.
    Type: Grant
    Filed: June 17, 2021
    Date of Patent: April 2, 2024
    Assignee: SONICWALL INC.
    Inventors: Miao Mao, Wei Zhou, Zhong Chen
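    Example sketch: the pause/resume behavior in 11948001 can be mimicked with Python generators standing in for virtual sets of program instructions; a scheduler pauses one and lets another run on the shared interpreter. This is an analogy, not the patented mechanism.
      def virtual_process(name, steps):
          """A virtual set of program instructions that yields at each pause point."""
          for i in range(steps):
              print(f"{name}: step {i}")
              yield

      class SharedScheduler:
          """Runs virtual processes on shared resources, pausing one to run another."""
          def __init__(self):
              self.paused = {}

          def run_some(self, proc, name, budget):
              for _ in range(budget):
                  try:
                      next(proc)
                  except StopIteration:
                      return False
              self.paused[name] = proc   # paused; can be resumed later
              return True

      sched = SharedScheduler()
      a, b = virtual_process("vm-a", 4), virtual_process("vm-b", 2)
      sched.run_some(a, "vm-a", 2)                      # vm-a paused after two steps
      sched.run_some(b, "vm-b", 2)                      # vm-b executes while vm-a is paused
      sched.run_some(sched.paused["vm-a"], "vm-a", 2)   # vm-a resumes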
  • Patent number: 11922213
    Abstract: Techniques for behavioral pairing in a task assignment system are disclosed. In one particular embodiment, the techniques may be realized as a method for behavioral pairing in a task assignment system comprising: determining, by at least one computer processor communicatively coupled to and configured to operate in the task assignment system, a priority for each of a plurality of tasks; determining, by the at least one computer processor, an agent available for assignment to any of the plurality of tasks; and assigning, by the at least one computer processor, a first task of the plurality of tasks to the agent using a task assignment strategy, wherein the first task has a lower priority than a second task of the plurality of tasks.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: March 5, 2024
    Assignee: AFINITI, LTD.
    Inventors: Ittai Kan, Zia Chishti, Vikash Khatri, James Edward Elmore
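    Example sketch: the claim in 11922213 (and its sibling 11915042 below, which shares the same abstract) amounts to choosing a task for an available agent with a strategy that may deliberately pick a lower-priority task. The behavioral score below is invented for illustration; the patented pairing strategies are not reproduced.
      def choose_task(tasks, agent, strategy):
          """Pick one pending task for the available agent (lower number = higher priority)."""
          if strategy == "priority":                         # baseline: strict priority order
              return min(tasks, key=lambda t: t["priority"])
          if strategy == "behavioral":
              # Pair on overlap between agent skills and task tags, which can select a
              # lower-priority task than strict priority ordering would.
              return min(tasks, key=lambda t: -len(set(agent["skills"]) & set(t["tags"])))
          raise ValueError(strategy)

      tasks = [
          {"id": "t1", "priority": 1, "tags": ["billing"]},
          {"id": "t2", "priority": 2, "tags": ["network", "billing"]},
      ]
      agent = {"id": "a9", "skills": ["network"]}
      print(choose_task(tasks, agent, "behavioral"))   # returns the lower-priority t2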
  • Patent number: 11915062
    Abstract: A tool may provide a real-time analysis of potential bottlenecks while threads wait on locks held by other threads. For each job currently operating on the server instance, the tool may access a list of threads and retrieve call stacks associated with those threads. The call stacks may then be analyzed to identify threads that are holding a lock, along with any corresponding threads that are waiting on the lock. The locks may be held on memory resources or any other type of computing resource. These bottlenecks may be identified and an adjustment of the configuration of the server instance may be triggered in response that is configured to reduce the likelihood that these types of bottlenecks may occur in the future.
    Type: Grant
    Filed: December 17, 2020
    Date of Patent: February 27, 2024
    Assignee: Oracle International Corporation
    Inventor: Pradip Kumar Pandey
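    Example sketch: 11915062 pairs lock holders with their waiters by inspecting the call stacks of a job's threads. The sketch below runs over synthetic stack snapshots; a real tool would retrieve live call stacks from the server instance.
      from collections import defaultdict

      # Synthetic snapshots: each thread reports locks it holds and, optionally,
      # the lock it is currently waiting on.
      THREADS = [
          {"tid": 1, "holds": ["cache-lock"], "waits_on": None},
          {"tid": 2, "holds": [], "waits_on": "cache-lock"},
          {"tid": 3, "holds": [], "waits_on": "cache-lock"},
          {"tid": 4, "holds": ["io-lock"], "waits_on": None},
      ]

      def find_bottlenecks(threads):
          """Map each contended lock to its holder and the threads waiting on it."""
          holder, waiters = {}, defaultdict(list)
          for t in threads:
              for lock in t["holds"]:
                  holder[lock] = t["tid"]
              if t["waits_on"]:
                  waiters[t["waits_on"]].append(t["tid"])
          return {lock: {"holder": holder.get(lock), "waiters": w}
                  for lock, w in waiters.items()}

      print(find_bottlenecks(THREADS))
      # -> {'cache-lock': {'holder': 1, 'waiters': [2, 3]}}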
  • Patent number: 11915042
    Abstract: Techniques for behavioral pairing in a task assignment system are disclosed. In one particular embodiment, the techniques may be realized as a method for behavioral pairing in a task assignment system comprising: determining, by at least one computer processor communicatively coupled to and configured to operate in the task assignment system, a priority for each of a plurality of tasks; determining, by the at least one computer processor, an agent available for assignment to any of the plurality of tasks; and assigning, by the at least one computer processor, a first task of the plurality of tasks to the agent using a task assignment strategy, wherein the first task has a lower priority than a second task of the plurality of tasks.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: February 27, 2024
    Assignee: AFINITI, LTD.
    Inventors: Ittai Kan, Zia Chishti, Vikash Khatri, James Edward Elmore
  • Patent number: 11886224
    Abstract: A processing unit of a processing system compiles a priority queue listing of a plurality of processor cores to run a workload based on a cost of running the workload on each of the processor cores. The cost is based on at least one of a system usage policy, characteristics of the workload, and one or more physical constraints of each processor core. The processing unit selects a processor core based on the cost to run the workload and communicates an identifier of the selected processor core to an operating system of the processing system.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: January 30, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Leonardo De Paula Rosa Piga, Karthik Rao, Indrani Paul, Mahesh Subramony, Kenneth Mitchell, Dana Glenn Lewis, Sriram Sambamurthy, Wonje Choi
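    Example sketch: 11886224 ranks processor cores by a cost built from a usage policy, workload characteristics, and per-core physical constraints, then reports the selected core to the operating system. The cost model below is made up for illustration.
      import heapq

      def core_cost(core, workload, policy):
          """Illustrative cost: policy-weighted energy plus a thermal-constraint penalty."""
          energy = workload["instructions"] * core["energy_per_inst"]
          thermal_penalty = 0.0 if core["temp_c"] < core["temp_limit_c"] else 100.0
          return policy["energy_weight"] * energy + thermal_penalty

      def rank_cores(cores, workload, policy):
          """Compile the priority queue listing of cores (lowest cost first)."""
          heap = [(core_cost(c, workload, policy), c["id"]) for c in cores]
          heapq.heapify(heap)
          return heap

      cores = [
          {"id": 0, "energy_per_inst": 1.0, "temp_c": 60, "temp_limit_c": 95},
          {"id": 1, "energy_per_inst": 0.7, "temp_c": 96, "temp_limit_c": 95},
      ]
      queue = rank_cores(cores, {"instructions": 1_000_000}, {"energy_weight": 1e-6})
      cost, core_id = heapq.heappop(queue)
      print(f"run workload on core {core_id} (cost {cost:.2f})")   # identifier passed to the OS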
  • Patent number: 11880714
    Abstract: Technologies for providing dynamic selection of edge and local accelerator resources includes a device having circuitry to identify a function of an application to be accelerated, determine one or more properties of an accelerator resource available at the edge of a network where the device is located, and determine one or more properties of an accelerator resource available in the device. Additionally, the circuitry is to determine a set of acceleration selection factors associated with the function, wherein the acceleration factors are indicative of one or more objectives to be satisfied in the acceleration of the function. Further, the circuitry is to select, as a function of the one or more properties of the accelerator resource available at the edge, the one or more properties of the accelerator resource available in the device, and the acceleration selection factors, one or more of the accelerator resources to accelerate the function.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: January 23, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Ned Smith, Thomas Willhalm, Timothy Verrall
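    Example sketch: 11880714 selects between device-local and edge accelerator resources as a function of their properties and the function's acceleration selection factors. The weighted score below is an assumption; the patent only says the selection is a function of those inputs.
      def score(accel, factors):
          """Lower is better: weighted latency plus weighted inverse throughput."""
          return (factors["latency_weight"] * accel["latency_ms"]
                  + factors["throughput_weight"] / accel["ops_per_sec"])

      def select_accelerator(local, edge, factors):
          candidates = [("local", local), ("edge", edge)]
          return min(candidates, key=lambda kv: score(kv[1], factors))

      local_fpga = {"latency_ms": 0.2, "ops_per_sec": 5e4}
      edge_gpu = {"latency_ms": 4.0, "ops_per_sec": 5e6}
      factors = {"latency_weight": 1.0, "throughput_weight": 1e4}
      print(select_accelerator(local_fpga, edge_gpu, factors))   # latency-heavy factors favor local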
  • Patent number: 11875173
    Abstract: Systems and methods are described for providing auxiliary functions in an on-demand code execution system in a manner that enables efficient execution of code. A user may generate a task on the system by submitting code. The system may determine the auxiliary functions that the submitted code may require when executed on the system, and may provide these auxiliary functions by provisioning or configuring sidecar virtualized execution environments that work in conjunction with the main virtualized execution environment executing the submitted code. Sidecar virtualized execution environments may be identified and obtained from a library of preconfigured sidecar virtualized execution environments, or a sidecar agent that provides the auxiliary function may be identified from a library, and then a virtualized execution environment may be provisioned with the agent and/or configured to work in conjunction with the main virtualized execution environment.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: January 16, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Niall Mullen, Philip Daniel Piwonka, Timothy Allen Wagner, Marc John Brooker
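    Example sketch: 11875173 pairs the main execution environment with sidecar environments drawn from a library of preconfigured auxiliary functions. The library contents and the marker-based code analysis below are hypothetical.
      # Hypothetical library of preconfigured sidecar environments, keyed by the
      # auxiliary function they provide.
      SIDECAR_LIBRARY = {
          "logging": {"image": "sidecar-logging:1.2"},
          "metrics": {"image": "sidecar-metrics:0.9"},
      }

      def required_auxiliary_functions(submitted_code):
          """Toy analysis: infer auxiliary needs from markers in the submitted code."""
          needs = []
          if "log(" in submitted_code:
              needs.append("logging")
          if "emit_metric(" in submitted_code:
              needs.append("metrics")
          return needs

      def provision_task(submitted_code):
          """Provision the main environment plus one sidecar per auxiliary function."""
          main_env = {"role": "main", "code": submitted_code}
          sidecars = [{"role": "sidecar", **SIDECAR_LIBRARY[f]}
                      for f in required_auxiliary_functions(submitted_code)]
          return {"main": main_env, "sidecars": sidecars}

      print(provision_task("def handler(e):\n    log('hi')\n    emit_metric('ok')"))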
  • Patent number: 11868820
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for implementing critical section subgraphs in a computational graph system. One of the methods includes executing a lock operation including providing, by a task server, a request to a value server to create a shared critical section object. If the task server determines that the shared critical section object was created by the value server, the task server executes one or more other operations of the critical section subgraph in serial. The task server executes an unlock operation including providing, by the task server, a request to the value server to delete the shared critical section object.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: January 9, 2024
    Assignee: Google LLC
    Inventors: Eugene Brevdo, Alexandre Tachard Passos
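    Example sketch: in 11868820 the lock operation asks a value server to create a shared critical-section object, the subgraph's other operations run serially while it exists, and the unlock operation deletes it. Below, an in-process class with atomic create/delete stands in for the value server.
      import threading, time

      class ValueServer:
          """Toy value server: creation of a named object is atomic and exclusive."""
          def __init__(self):
              self._objects, self._guard = set(), threading.Lock()

          def try_create(self, name):
              with self._guard:
                  if name in self._objects:
                      return False
                  self._objects.add(name)
                  return True

          def delete(self, name):
              with self._guard:
                  self._objects.discard(name)

      def run_critical_subgraph(server, ops, name="critical-section-0"):
          """Task-server side: lock op, serial subgraph ops, unlock op."""
          while not server.try_create(name):   # lock: create the shared object
              time.sleep(0.01)                 # another task server still holds it
          try:
              for op in ops:                   # execute the subgraph's other ops in serial
                  op()
          finally:
              server.delete(name)              # unlock: delete the shared object

      server = ValueServer()
      run_critical_subgraph(server, [lambda: print("op-1"), lambda: print("op-2")])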
  • Patent number: 11868890
    Abstract: A computer implemented method, computer program product, and system for managing execution of a workflow comprising a set of subworkflows, comprising: optimizing the set of subworkflows using a deep neural network, wherein each subworkflow of the set of subworkflows has a set of tasks, wherein each task of the sets of tasks has a requirement of resources of a set of resources, and wherein each task of the sets of tasks is enabled to be dependent on another task of the sets of tasks; training the deep neural network by executing the set of subworkflows, collecting provenance data from the execution, and collecting monitoring data that represents the state of said set of resources, wherein the training causes the neural network to learn relationships between the states of said set of resources, the sets of tasks, their parameters, and the obtained performance; and optimizing an allocation of resources of the set of resources to each task of the sets of tasks to ensure compliance with a user-defined quality metric.
    Type: Grant
    Filed: April 6, 2022
    Date of Patent: January 9, 2024
    Assignees: LANDMARK GRAPHICS CORPORATION, EMC IP HOLDING COMPANY LLC
    Inventors: Chandra Yeleshwarapu, Jonas F. Dias, Angelo Ciarlini, Romulo D. Pinho, Vinicius Gottin, Andre Maximo, Edward Pacheco, David Holmes, Keshava Rangarajan, Scott David Senften, Joseph Blake Winston, Xi Wang, Clifton Brent Walker, Ashwani Dev, Nagaraj Sirinivasan
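    Example sketch: 11868890 trains a deep neural network on provenance and monitoring data and then uses it to pick resource allocations that satisfy a quality metric. In the sketch a lookup-table runtime predictor stands in for the neural network, and the deadline-driven allocation rule is an assumption.
      from collections import defaultdict

      class RuntimePredictor:
          """Stand-in for the deep neural network: averages observed runtimes per
          (task, cpus) pair from provenance records."""
          def __init__(self):
              self._obs = defaultdict(list)

          def train(self, provenance):
              for task, cpus, runtime in provenance:   # (task, cpus_allocated, runtime_s)
                  self._obs[(task, cpus)].append(runtime)

          def predict(self, task, cpus):
              samples = self._obs.get((task, cpus))
              return sum(samples) / len(samples) if samples else float("inf")

      def allocate(predictor, tasks, cpu_options, deadline_s):
          """Pick the smallest allocation per task that is predicted to meet the deadline."""
          plan = {}
          for task in tasks:
              feasible = [(c, predictor.predict(task, c)) for c in cpu_options
                          if predictor.predict(task, c) <= deadline_s]
              plan[task] = min(feasible)[0] if feasible else max(cpu_options)
          return plan

      p = RuntimePredictor()
      p.train([("simulate", 2, 120.0), ("simulate", 4, 70.0), ("render", 2, 40.0)])
      print(allocate(p, ["simulate", "render"], cpu_options=[2, 4], deadline_s=90.0))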
  • Patent number: 11861391
    Abstract: The disclosure relates to a method, executed in a Network Function Virtualization Infrastructure (NFVI) software modification manager, for coordination of NFVI software modifications of an NFVI providing at least one Virtual Resource (VR) hosting at least one Virtual Network Function (VNF), comprising receiving an NFVI software modifications request; sending a notification that a software modification procedure of the at least one VR is about to start to a VNF level manager, the VNF level manager managing a VNF hosted on the at least one VR provided by the NFVI; applying software modifications to at least one resource of the at least one VR; and notifying the VNF level manager about completion of the software modifications.
    Type: Grant
    Filed: June 14, 2022
    Date of Patent: January 2, 2024
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventor: Maria Toeroe
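    Example sketch: the coordination flow in 11861391 is notify, modify, notify. The class and method names below are hypothetical; the actual ETSI NFV interfaces are not reproduced.
      class VnfLevelManager:
          """Manages a VNF hosted on the virtual resources being modified."""
          def on_modification_start(self, vr_id):
              print(f"VNF manager: preparing the VNF on {vr_id} (e.g., drain or fail over)")

          def on_modification_complete(self, vr_id):
              print(f"VNF manager: VNF on {vr_id} back to normal operation")

      class NfviSoftwareModificationManager:
          def __init__(self, vnf_manager):
              self.vnf_manager = vnf_manager

          def handle_request(self, vr_id, apply_patch):
              self.vnf_manager.on_modification_start(vr_id)     # 1. about-to-start notification
              apply_patch(vr_id)                                # 2. modify the VR's resources
              self.vnf_manager.on_modification_complete(vr_id)  # 3. completion notification

      mgr = NfviSoftwareModificationManager(VnfLevelManager())
      mgr.handle_request("vr-17", lambda vr: print(f"patching the host software under {vr}"))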
  • Patent number: 11861419
    Abstract: Systems, methods, and devices for offloading network data to a datastore. A system includes routing chip hardware and an asynchronous object manager in communication with the routing chip hardware. The asynchronous object manager is configurable to execute instructions stored in non-transitory computer readable storage media. The instructions include asynchronously receiving a plurality of objects from one or more producers. The instructions include identifying one or more dependencies between two or more of the plurality of objects. The instructions include reordering the plurality of objects according to the one or more dependencies. The instructions include determining whether the one or more dependencies are resolved. The instructions include, in response to determining the one or more dependencies are resolved, calling back an application and providing one or more of the plurality of objects to the application.
    Type: Grant
    Filed: December 1, 2021
    Date of Patent: January 2, 2024
    Assignee: ARRCUS INC.
    Inventors: Nalinaksh Pai, Kalyani Rajaraman, Vikram Ragukumar
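    Example sketch: 11861419 buffers asynchronously produced objects, reorders them by dependency, and calls the application back once the dependencies are resolved. The sketch uses Python's graphlib for the reordering; queuing of objects whose dependencies never arrive is omitted.
      from graphlib import TopologicalSorter   # Python 3.9+

      class AsyncObjectManager:
          """Buffers objects from producers, reorders by dependency, then calls back."""
          def __init__(self, callback):
              self.callback = callback
              self.objects = {}   # name -> payload
              self.deps = {}      # name -> set of names it depends on

          def receive(self, name, payload, depends_on=()):
              self.objects[name] = payload
              self.deps[name] = set(depends_on)

          def flush(self):
              # Deliver objects so that every dependency precedes its dependents.
              for name in TopologicalSorter(self.deps).static_order():
                  if name in self.objects:
                      self.callback(name, self.objects[name])

      mgr = AsyncObjectManager(lambda name, payload: print("deliver", name, payload))
      mgr.receive("route-10.0.0.0/8", {"via": "if-1"}, depends_on=("if-1",))
      mgr.receive("if-1", {"state": "up"})
      mgr.flush()   # delivers if-1 first, then the route that depends on it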
  • Patent number: 11853780
    Abstract: Disclosed is an improved approach to implement I/O and storage device management in a virtualization environment. According to some approaches, a Service VM is employed to control and manage any type of storage device, including directly attached storage in addition to networked and cloud storage. The Service VM implements the Storage Controller logic in the user space, and can be migrated as needed from one node to another. IP-based requests are used to send I/O requests to the Service VMs. The Service VM can directly implement storage and I/O optimizations within the direct data access path, without the need for add-on products.
    Type: Grant
    Filed: March 25, 2022
    Date of Patent: December 26, 2023
    Assignee: Nutanix, Inc.
    Inventors: Mohit Aron, Dheeraj Pandey, Ajeet Singh
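    Example sketch: 11853780 puts Storage Controller logic in a user-space Service VM that user VMs reach over IP. Below, a JSON-over-TCP line protocol and an in-memory block map stand in for the IP-based request path and the attached storage; this is an analogy, not Nutanix's implementation.
      import json, socketserver

      class StorageController:
          """User-space Storage Controller logic inside the Service VM."""
          def __init__(self):
              self.blocks = {}   # stands in for directly attached or networked storage

          def handle(self, req):
              if req["op"] == "write":
                  self.blocks[req["lba"]] = req["data"]
                  return {"ok": True}
              return {"ok": True, "data": self.blocks.get(req["lba"])}

      controller = StorageController()

      class IoRequestHandler(socketserver.StreamRequestHandler):
          """IP-based I/O endpoint: user VMs send one JSON request per line."""
          def handle(self):
              for line in self.rfile:
                  reply = controller.handle(json.loads(line))
                  self.wfile.write((json.dumps(reply) + "\n").encode())

      if __name__ == "__main__":
          # A Service VM like this would run on each node, addressable over IP.
          with socketserver.TCPServer(("127.0.0.1", 9050), IoRequestHandler) as srv:
              srv.serve_forever()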
  • Patent number: 11847500
    Abstract: A method can include receiving, at a workflow controller, a machine learning workflow, the machine learning workflow associated with a first task and a second task. The first task is training a machine learning model and the second task is deploying the model. The method can include segmenting, by the workflow controller, the machine learning workflow into a first sub-workflow associated with the first task and a second sub-workflow associated with the second task, assigning a first workflow agent to the first sub-workflow and assigning a second workflow agent to the second sub-workflow, selecting, by the first workflow agent and based on first resources needed to perform the first task, a first cluster for performing the first task and selecting, by the second workflow agent and based on second resources needed to perform the second task, a second cluster for performing the second task.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: December 19, 2023
    Assignee: Cisco Technology, Inc.
    Inventors: Johnu George, Sourav Chakraborty, Amit Kumar Saha, Debojyoti Dutta, Xinyuan Huang, Adhita Selvaraj
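    Example sketch: 11847500 splits a machine learning workflow into per-task sub-workflows and lets a workflow agent pick a cluster that satisfies each task's resource needs. Cluster names, resource fields, and the first-fit rule are assumptions.
      CLUSTERS = [
          {"name": "gpu-cluster", "gpus": 8, "cpus": 64},
          {"name": "cpu-cluster", "gpus": 0, "cpus": 256},
      ]

      def select_cluster(required):
          """Workflow-agent side: first cluster that satisfies the task's resource needs."""
          for cluster in CLUSTERS:
              if all(cluster.get(k, 0) >= v for k, v in required.items()):
                  return cluster["name"]
          raise RuntimeError("no cluster can satisfy the request")

      def segment_workflow(workflow):
          """Workflow-controller side: one sub-workflow and one agent per task."""
          return [{"task": t["name"],
                   "agent": f"agent-{i}",
                   "cluster": select_cluster(t["resources"])}
                  for i, t in enumerate(workflow["tasks"])]

      ml_workflow = {"tasks": [
          {"name": "train", "resources": {"gpus": 4, "cpus": 16}},
          {"name": "deploy", "resources": {"gpus": 0, "cpus": 8}},
      ]}
      print(segment_workflow(ml_workflow))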
  • Patent number: 11847493
    Abstract: A system may include a receiver to receive a task. The task may include a portion of an algorithm, and may include a task power level and a task precision. The system may also include a circuit including a circuit power level and a circuit precision. The system may include first software to identify the circuit, and second software to assign the task to the circuit to reduce total power. The circuit precision may be greater than the task precision.
    Type: Grant
    Filed: July 13, 2021
    Date of Patent: December 19, 2023
    Inventor: Yang Seok Ki
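    Example sketch: 11847493 assigns a task to a circuit whose precision is at least the task's precision, choosing among eligible circuits to reduce total power. Precisions in bits and power levels in watts are illustrative units.
      def assign_task(task, circuits):
          """Assign the task to the lowest-power circuit that meets its precision."""
          eligible = [c for c in circuits if c["precision_bits"] >= task["precision_bits"]]
          if not eligible:
              raise ValueError("no circuit meets the task's precision requirement")
          return min(eligible, key=lambda c: c["power_w"])

      circuits = [
          {"id": "fp16-unit", "precision_bits": 16, "power_w": 2.0},
          {"id": "fp32-unit", "precision_bits": 32, "power_w": 5.0},
          {"id": "fp64-unit", "precision_bits": 64, "power_w": 12.0},
      ]
      task = {"name": "partial-sum", "precision_bits": 16}
      print(assign_task(task, circuits))   # fp16 unit: sufficient precision, least power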
  • Patent number: 11847494
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for allocating computing resources. In one aspect, a method includes receiving intent data specifying one or more computing services to be hosted by a computing network, requested characteristics of computing resources for use in hosting the computing service, and a priority value for each requested characteristic. A budget constraint is identified for each computing service. Available resources data is identified that specifies a set of available computing resources. A resource allocation problem for allocating computing resources for the one or more computing services is generated based on the intent data, each budget constraint, and the available resources data. At least a portion of the set of computing resources is allocated for the one or more computing services based on results of evaluating the resource allocation problem to meet a particular resource allocation objective.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: December 19, 2023
    Assignee: Google LLC
    Inventors: David J. Helstroom, Patricia Weir, Cameron Cody Smith, Zachary A. Hirsch, Ulric B. Longyear
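    Example sketch: 11847494 builds an allocation problem from intent data (requested characteristics with priorities), per-service budget constraints, and available resources. The greedy solver below is a stand-in for whatever optimization the patent actually evaluates.
      def allocate_resources(intents, available):
          """Greedy allocation: honor each service's budget, higher-priority wants first.

          intents:   service -> {"budget": float, "wants": [(characteristic, priority, cost)]}
          available: characteristic -> units remaining
          """
          plan = {}
          for service, intent in intents.items():
              spent, granted = 0.0, []
              for characteristic, priority, cost in sorted(intent["wants"], key=lambda w: -w[1]):
                  if available.get(characteristic, 0) > 0 and spent + cost <= intent["budget"]:
                      available[characteristic] -= 1
                      spent += cost
                      granted.append(characteristic)
              plan[service] = {"granted": granted, "spent": spent}
          return plan

      intents = {"web-frontend": {"budget": 10.0,
                                  "wants": [("ssd", 2, 6.0), ("gpu", 1, 8.0)]}}
      print(allocate_resources(intents, {"ssd": 3, "gpu": 1}))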
  • Patent number: 11803414
    Abstract: Methods and systems for scaling computing processes within a serverless computing environment are provided. In one embodiment, a method is provided that includes receiving a request to execute a computing process in the serverless computing environment. A first node may be created within the serverless computing environment to execute the computing process. A first amount of computing resources may be assigned to the first node. It may be determined later that the first amount of computing resources are not sufficient to implement the first node. A second amount of computing resources may be determined with a vertical autoscaling process and a second node may be created within the serverless computing environment using a horizontal autoscaling process. The second node may be assigned the second amount of computing resources. The computing process may then be executed using both the first and second nodes within the serverless computing environment.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: October 31, 2023
    Assignee: Red Hat, Inc.
    Inventors: Huamin Chen, Roland Huss
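    Example sketch: 11803414 combines a vertical autoscaler (size the new node's resources) with a horizontal autoscaler (add the node) when the first node's allocation proves insufficient. The sizing rule and field names below are assumptions.
      def vertical_recommendation(observed_usage_mb, headroom=1.5):
          """Vertical autoscaling step: recommend a larger allocation for a new node."""
          return int(observed_usage_mb * headroom)

      def scale(process, first_node, observed_usage_mb):
          """If the first node is insufficient, add a second node sized by the vertical step."""
          nodes = [first_node]
          if observed_usage_mb > first_node["memory_mb"]:
              second = {"name": f"{process}-node-2",
                        "memory_mb": vertical_recommendation(observed_usage_mb)}
              nodes.append(second)   # horizontal autoscaling step: one more node
          return nodes

      first = {"name": "imgproc-node-1", "memory_mb": 256}
      print(scale("imgproc", first, observed_usage_mb=400))
      # -> the original 256 MB node plus a second node sized at 600 MB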