Patents Examined by Zujia Xu
  • Patent number: 12223336
    Abstract: An edge network computing system includes: a plurality of terminal devices; a plurality of edge servers connected to the terminal devices through an access network; and a plurality of cloud servers connected to the plurality of edge servers through a core network. Each edge server is configured to: receive a plurality of computing tasks originating from one of the plurality of terminal devices; use a deep Q-learning neural network (DQN) with experience replay to select one of the plurality of cloud servers to offload a portion of the plurality of computing tasks; and send the portion of the plurality of computing tasks to the selected cloud server and forward results of the portion of the plurality of computing tasks received from the selected cloud server to the originating terminal device.
    Type: Grant
    Filed: September 30, 2021
    Date of Patent: February 11, 2025
    Assignee: Intelligent Fusion Technology, Inc.
    Inventors: Qi Zhao, Yi Li, Mingjie Feng, Li Li, Genshe Chen
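The server-selection loop described in the abstract can be illustrated with a heavily simplified sketch: a bandit-style Q-table with an experience-replay buffer stands in for the patented DQN, and the reward values (server 1 consistently yielding the lowest latency) are invented purely for illustration.

```python
import random
from collections import deque

class OffloadAgent:
    """Bandit-style stand-in for the abstract's DQN with experience replay:
    each action is the choice of one cloud server for task offloading."""
    def __init__(self, n_servers, alpha=0.5, eps=0.1, buf=64):
        self.q = [0.0] * n_servers       # estimated reward per server
        self.replay = deque(maxlen=buf)  # experience-replay buffer
        self.alpha, self.eps = alpha, eps

    def select(self):
        if random.random() < self.eps:   # explore occasionally
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=self.q.__getitem__)

    def observe(self, server, reward):
        self.replay.append((server, reward))
        # learn from a mini-batch of replayed transitions
        for s, r in random.sample(self.replay, min(8, len(self.replay))):
            self.q[s] += self.alpha * (r - self.q[s])

random.seed(0)
agent = OffloadAgent(n_servers=3)
for _ in range(200):
    s = agent.select()
    # hypothetical reward: server 1 consistently has the lowest latency
    agent.observe(s, 1.0 if s == 1 else 0.1)
best = agent.q.index(max(agent.q))
```

After enough observations the agent's estimate for server 1 dominates, so offloaded tasks concentrate on the lowest-latency cloud server; the real system would use a neural Q-function over a richer state (queue lengths, link conditions) rather than a single-state table.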
  • Patent number: 12223354
    Abstract: Described herein are embodiments for performing pattern recognition using a hierarchical network. The hierarchical network is made up of fractal cognitive computing nodes that manage their own interconnections and domains in an unsupervised manner. The fractal cognitive computing nodes are also self-replicating and may create new levels within the hierarchical network in an unsupervised manner. Signals processed in the hierarchical network may take the form of key-value pairs. This may allow the hierarchical network to replicate and perform adaptive pattern recognition in a non-domain-specific manner with regard to the input signals.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: February 11, 2025
    Assignee: Avatar Cognition Barcelona S.L.
    Inventor: Enric Guinovart
  • Patent number: 12222858
    Abstract: Disclosed here are systems and methods for optimized computation and data management. The systems and methods can be implemented, for example, in a Directed Acyclic Graph (DAG). The disclosed methods and systems involve receiving user instructions to create a graph configured to represent computations and data as a plurality of resources. Cache rules are set in accordance with the user instructions for cached resources to prevent the cached resources from being removed by a garbage collector. The disclosed methods and systems may also involve performing dynamic garbage collection of one or more un-cached resources in response to detection that the one or more un-cached resources are not referenced by any other resource or that all caching periods are over. Iterated computations and data are identified, and recovery policies and deduplication policies are determined for the iterated computations and data.
    Type: Grant
    Filed: March 8, 2024
    Date of Patent: February 11, 2025
    Inventors: Dmitriy Bolotin, Stanislav Poslavsky, Denis Korenevskii, Gleb Zakharov, Dmitriy Chudakov
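The cache rule in this abstract can be pictured as a toy mark phase (names and graph structure below are invented): a resource survives collection only if it is pinned by a cache rule or still referenced by another resource.

```python
class Resource:
    """A node in the computation graph; `cached` marks a cache-rule pin."""
    def __init__(self, name, cached=False):
        self.name, self.cached, self.deps = name, cached, []

def collect(resources):
    # a resource survives if pinned by a cache rule OR referenced by another
    referenced = {d for r in resources for d in r.deps}
    return [r for r in resources if r.cached or r in referenced]

raw = Resource("raw", cached=True)       # pinned by a cache rule
mid = Resource("intermediate")           # un-cached
out = Resource("result", cached=True)
out.deps.append(mid)                     # result references intermediate
tmp = Resource("scratch")                # un-cached and unreferenced
live = collect([raw, mid, out, tmp])     # only "scratch" is collected
```

The patented system layers more on top (caching periods, recovery and deduplication policies for iterated computations), but the survival condition reduces to this cached-or-referenced test.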
  • Patent number: 12216552
    Abstract: Systems and techniques for multi-phase cloud service node error prediction are described herein. A set of spatial metrics and a set of temporal metrics may be obtained for node devices in a cloud computing platform. The node devices may be evaluated using a spatial machine learning model and a temporal machine learning model to create a spatial output and a temporal output. One or more potentially faulty nodes may be determined based on an evaluation of the spatial output and the temporal output using a ranking model. The one or more potentially faulty nodes may be a subset of the node devices. One or more migration source nodes may be identified from one or more potentially faulty nodes. The one or more migration source nodes may be identified by minimization of a cost of false positive and false negative node detection.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: February 4, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Qingwei Lin, Kaixin Sui, Yong Xu
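One way to picture the ranking step (all scores, the fusion rule, and the costs below are invented for illustration): fuse the spatial and temporal model outputs per node, then flag a node as a migration source when the expected cost of a missed fault outweighs the expected cost of a false alarm.

```python
def rank_nodes(spatial, temporal, cost_fp=1.0, cost_fn=5.0):
    """Toy ranking model: average the spatial and temporal fault scores,
    flag when expected miss cost p*cost_fn exceeds false-alarm cost."""
    flagged = []
    for node in spatial:
        p = 0.5 * (spatial[node] + temporal[node])   # fuse the two outputs
        if p * cost_fn > (1 - p) * cost_fp:
            flagged.append((p, node))
    # highest fault probability first
    return [n for _, n in sorted(flagged, reverse=True)]

spatial  = {"n1": 0.9, "n2": 0.1, "n3": 0.4}
temporal = {"n1": 0.8, "n2": 0.2, "n3": 0.1}
sources = rank_nodes(spatial, temporal)
```

With a missed fault five times as costly as a false alarm, even a moderately suspicious node ("n3") is flagged, matching the abstract's emphasis on minimizing the combined false-positive/false-negative cost.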
  • Patent number: 12190143
    Abstract: Systems, methods, and apparatus are provided for modification of RPA controls during a workflow without impacting bot performance. A background scan initiated in parallel to an RPA application workflow may identify values for an application control parameter and a corresponding bot control parameter. Hashes of the values may be validated against a key value pair stored as a block in a distributed ledger. If changes have been made to the application control values, the bot control value will not match the application control value and validation will fail. If validation fails, an override may be generated for the bot value. An updated bot value may be stored in a temporary cache until the workflow is complete. Following completion of the workflow, a new block may be added to the blockchain storing a new key value pair including the application control value and the updated bot control value.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: January 7, 2025
    Assignee: Bank of America Corporation
    Inventors: Siva Paini, Sakshi Bakshi, Srinivasa Dhanwada, Sudhakar Balu
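The validation-and-override flow can be sketched roughly as follows. SHA-256, the control values, and the block layout are illustrative assumptions; the patent does not specify a hash function or ledger format.

```python
import hashlib

def h(v):
    return hashlib.sha256(v.encode()).hexdigest()

# latest block on the (toy) ledger: hashes of app and bot control values
ledger = [{"app": h("dropdown-v1"), "bot": h("dropdown-v1")}]
temp_cache = {}

def validate(app_value, bot_value):
    """Hash both current values and compare against the key-value pair
    stored in the latest block; on mismatch, override the bot value."""
    block = ledger[-1]
    if h(app_value) == block["app"] and h(bot_value) == block["bot"]:
        return True
    temp_cache["bot"] = app_value     # park the override in a temp cache
    return False

ok = validate("dropdown-v2", "dropdown-v1")   # the app control changed
if not ok:
    # workflow complete: persist the override as a new block
    ledger.append({"app": h("dropdown-v2"), "bot": h(temp_cache["bot"])})
```

Because the changed application value no longer hashes to the stored pair, validation fails, the bot value is overridden from the temporary cache, and a fresh block records the updated pair for the next run.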
  • Patent number: 12190162
    Abstract: A method including a) receiving a computer generated data set of sequencing constraints describing a software system to be executed on an automation system and including software components and runnable function entities distributed over the number of computing nodes; b) generating a transition matrix from the data set of sequencing constraints, the transition matrix having a plurality of matrix elements each of them describing, by a transition value, a transition from a runnable function entity to another runnable function entity; c) receiving a computer generated communication matrix describing communication links between the computing nodes in the automation system; d) generating a Markov chain out of the data set of sequencing constraints and the communication matrix; e) generating a distribution function from the Markov chain describing used resources of the computing nodes by the software components and runnable function entities; and f) optimizing the allocation of resources.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: January 7, 2025
    Assignee: Siemens Aktiengesellschaft
    Inventors: Andrés Botero Halblaub, Jan Richter
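Steps (d) and (e) can be illustrated with a small power-iteration sketch: given a transition matrix over three runnable function entities (values invented), the stationary distribution approximates how often each entity, and hence its node's resources, will be used.

```python
def stationary(T, iters=200):
    """Power iteration on a row-stochastic transition matrix: a stand-in
    for deriving the distribution function from the Markov chain."""
    n = len(T)
    p = [1.0 / n] * n                 # start from a uniform distribution
    for _ in range(iters):
        p = [sum(p[i] * T[i][j] for i in range(n)) for j in range(n)]
    return [round(x, 3) for x in p]

# invented transition values between three runnable function entities
T = [[0.1, 0.9, 0.0],
     [0.0, 0.2, 0.8],
     [0.5, 0.0, 0.5]]
dist = stationary(T)
```

Here the third entity absorbs the largest share of visits, so an optimizer following step (f) would bias resource allocation toward its computing node.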
  • Patent number: 12190174
    Abstract: A technique for synchronizing workgroups is provided. Multiple workgroups execute a wait instruction that specifies a condition variable and a condition. A workgroup scheduler stops execution of a workgroup that executes a wait instruction and an advanced controller begins monitoring the condition variable. In response to the advanced controller detecting that the condition is met, the workgroup scheduler determines whether there is a high contention scenario, which occurs when the wait instruction is part of a mutual exclusion synchronization primitive and is detected by determining that there is a low number of updates to the condition variable prior to detecting that the condition has been met. In a high contention scenario, the workgroup scheduler wakes up one workgroup and schedules another workgroup to be woken up at a time in the future. In a non-contention scenario, more than one workgroup can be woken up at the same time.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: January 7, 2025
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Alexandru Dutu, Sergey Blagodurov, Anthony T. Gutierrez, Matthew D. Sinclair, David A. Wood, Bradford M. Beckmann
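The wake-up policy reduces to a small decision rule. The threshold below is an invented placeholder for the abstract's "low number of updates" test, and workgroup names are illustrative.

```python
def wake_plan(waiters, updates_before_met, low_update_threshold=2):
    """Few updates to the condition variable before the condition was met
    suggests a mutex-style (high contention) primitive: wake one waiter
    now and defer the rest; otherwise wake all waiters together."""
    if updates_before_met <= low_update_threshold:
        return {"wake_now": waiters[:1], "wake_later": waiters[1:]}
    return {"wake_now": list(waiters), "wake_later": []}

contended = wake_plan(["wg0", "wg1", "wg2"], updates_before_met=1)
relaxed   = wake_plan(["wg0", "wg1", "wg2"], updates_before_met=7)
```

Staggering wake-ups in the contended case avoids a thundering herd on the lock, while the non-contention path keeps the throughput benefit of releasing every workgroup at once.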
  • Patent number: 12182624
    Abstract: A device, system and method for assigning portions of a global resource limit to application engines based on relative load are provided. A system comprises a plurality of application engines that share a global resource limit; and a plurality of operator engines. The plurality of operator engines are each configured to: monitor a respective metric representative of respective load at a respective application engine; share the respective metric with others of the plurality of operator engines; determine a relative load at the respective application engine based on the respective metric and respective metrics received from the others of the plurality of operator engines; and assign a portion of the global resource limit to the respective application engine based on the relative load.
    Type: Grant
    Filed: February 18, 2021
    Date of Patent: December 31, 2024
    Assignee: AMADEUS S.A.S., SOPHIA ANTIPOLIS
    Inventors: Philippe Grabarsky, Mohamed Wadie Nsiri
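The assignment rule is essentially proportional sharing. A minimal sketch, with an invented global limit and load metrics:

```python
def assign_portions(global_limit, metrics):
    """Each operator engine, having seen every engine's load metric,
    takes a share of the global limit proportional to relative load."""
    total = sum(metrics.values())
    return {e: round(global_limit * m / total, 1) for e, m in metrics.items()}

portions = assign_portions(1000, {"engineA": 30, "engineB": 50, "engineC": 20})
```

In the patented system each operator engine computes this independently from the gossiped metrics, so no central allocator is needed and the portions still sum to the global limit.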
  • Patent number: 12175302
    Abstract: A transitioning process integrates a computer-related service with one or more other computer-related services. The computer-related service and the one or more other computer-related services are analyzed to determine whether there is a conflict in integrating the computer-related service in the computing environment. A determination is made based on the analyzing whether one or more changes are to be made to a selected component. At least the analyzing and the determining are part of an automated process generated to integrate the computer-related service, and the automated process is at least a part of the transitioning process. An indication of a performance impact of executing at least the automated process to integrate the computer-related service is obtained. The transitioning process is to continue based on the performance impact meeting one or more selected criteria and based on determining that no changes are to be made to the selected component.
    Type: Grant
    Filed: November 27, 2020
    Date of Patent: December 24, 2024
    Assignee: Kyndryl, Inc.
    Inventors: Hong Dan Zhan, Kim Poh Wong
  • Patent number: 12124878
    Abstract: A system and method of dynamically controlling a reservation of resources within a cluster environment to maximize a response time are disclosed. The method embodiment of the invention includes receiving from a requestor a request for a reservation of resources in the cluster environment, reserving a first group of resources, and evaluating resources within the cluster environment to determine whether the response time can be improved; if the response time can be improved, the reservation for the first group of resources is canceled and a second group of resources is reserved to process the request at the improved response time.
    Type: Grant
    Filed: March 17, 2022
    Date of Patent: October 22, 2024
    Assignee: III HOLDINGS 12, LLC
    Inventor: David B. Jackson
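The cancel-and-rebook logic can be sketched as follows, with invented group capacities and response times:

```python
def reserve(request, groups, current=None):
    """Hold the current reservation, but swap to another group only when
    it both fits the request and improves the response time."""
    best = current
    for g in groups:
        if g["capacity"] >= request and (
                best is None or g["response"] < best["response"]):
            best = g
    return best

groups = [{"name": "g1", "capacity": 8, "response": 120},
          {"name": "g2", "capacity": 8, "response": 90},
          {"name": "g3", "capacity": 4, "response": 50}]  # too small

first = reserve(request=8, groups=groups[:1])             # initial booking
final = reserve(request=8, groups=groups, current=first)  # re-evaluation
```

The re-evaluation swaps from g1 to g2 because g2 improves response time while still fitting the request; g3 is faster still but lacks capacity, so the first reservation is only canceled when a strictly better one exists.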
  • Patent number: 12124888
    Abstract: Example implementations relate to a role-based autoscaling approach for scaling of nodes of a stateful application in a large scale virtual data processing (LSVDP) environment. Information is received regarding a role performed by the nodes of a virtual cluster of an LSVDP environment on which a stateful application is or will be deployed. Role-based autoscaling policies are maintained defining conditions under which the roles are to be scaled. A policy for a first role upon which a second role is dependent specifies a condition for scaling out the first role by a first step and a second step by which the second role is to be scaled out in tandem. When load information for the first role meets the condition, nodes in the virtual cluster that perform the first role are increased by the first step and nodes that perform the second role are increased by the second step.
    Type: Grant
    Filed: June 28, 2023
    Date of Patent: October 22, 2024
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Xiongbing Ou, Lakshminarayanan Gunaseelan, Joel Baxter, Swami Viswanathan
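A toy version of the tandem policy (role names, thresholds, and step sizes are invented):

```python
def autoscale(cluster, policies, load):
    """When a role's load crosses its policy threshold, scale it out by
    its step and scale each dependent role out by that role's own step."""
    for role, pol in policies.items():
        if load.get(role, 0) > pol["threshold"]:
            cluster[role] += pol["step"]
            for dep_role, dep_step in pol.get("tandem", {}).items():
                cluster[dep_role] += dep_step
    return cluster

cluster = {"worker": 3, "controller": 1}
policies = {"worker": {"threshold": 0.8, "step": 2,
                       "tandem": {"controller": 1}}}
scaled = autoscale(cluster, policies, load={"worker": 0.9})
```

Scaling the dependent role in the same action keeps the two roles in ratio, instead of waiting for the dependent role's own load to trip a second policy later.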
  • Patent number: 12093720
    Abstract: A container specification is received. The container specification includes a definition of an image. The image definition may specify the running of one or more prestart runtime commands. The image definition is inspected to identify whether the image definition includes specifying the running of one or more prestart runtime commands. The image is started on a host system, wherein in response to identifying that the image definition includes running one or more prestart runtime commands, the starting of the image includes running the one or more prestart runtime commands prior to the container entering a running state.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: September 17, 2024
    Assignee: International Business Machines Corporation
    Inventors: Zach Taylor, Randy A. Rendahl, Stephen Paul Ridgill, II, Aditya Mandhare
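The start sequence can be sketched as below; the spec layout and command names are invented, and real container specifications differ.

```python
def start_container(spec):
    """Inspect the image definition for prestart commands and run them
    before the container enters the running state."""
    events = []
    for cmd in spec.get("image", {}).get("prestart", []):
        events.append(f"ran {cmd}")          # prestart runtime command
    events.append("running")                 # container enters running state
    return events

spec = {"image": {"prestart": ["migrate-db", "warm-cache"]}}
timeline = start_container(spec)
```

An image with no prestart list simply proceeds straight to the running state, which is why the inspection step precedes the start.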
  • Patent number: 12067414
    Abstract: Inadvertent data swaps can be prevented by measuring the volume of transactions in a distributed computing environment to determine locations for potential data swaps, and by managing a correlation between a thread identification (ID) and transaction header (ID) for transactions in the distributed computing environment. In some embodiments, the prevention of data swaps can further include performing a data transmission interruption to avoid data swaps at the locations for potential data swaps. When the thread identification (ID) and the transaction header (ID) do not match, the potential for data swaps can be high.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: August 20, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Abhay Kumar Patra, Rakesh Shinde, Harish Bharti, Vijay Ekambaram
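The correlation check reduces to comparing the two IDs per transaction. A minimal sketch with invented transaction records:

```python
def check_swap_risk(transactions):
    """Flag transactions whose executing thread ID no longer matches the
    thread ID recorded in the transaction header -- the high-risk case
    where data transmission would be interrupted."""
    risky = []
    for t in transactions:
        if t["thread_id"] != t["header"]["thread_id"]:
            risky.append(t["id"])
    return risky

txs = [{"id": "t1", "thread_id": 7, "header": {"thread_id": 7}},
       {"id": "t2", "thread_id": 9, "header": {"thread_id": 4}}]
risky = check_swap_risk(txs)
```

Only the mismatched transaction is flagged, so the interruption is targeted at the locations where a swap could actually occur rather than stalling all traffic.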
  • Patent number: 12056513
    Abstract: A server includes a hardware platform, a hypervisor platform, and at least one virtual machine operating as an independent guest computing device. The hypervisor includes a memory facilitator, at least two hardware emulators, a toolstack and an emulator manager. The memory facilitator provides memory for a virtual machine, with the memory having state data associated therewith at a current location within the virtual machine. The at least one hardware emulator provides at least one set of hardware resources for the virtual machine, with the at least one set of hardware resources having state data associated therewith at the current location within the virtual machine. The toolstack controls the hypervisor including generation of a start state data transfer request. The emulator manager coordinates transfer of the respective state data from the current location to a different location, and tracks progress of the transfer of the respective state data to the different location.
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: August 6, 2024
    Assignee: Citrix Systems, Inc.
    Inventor: Jennifer Rachel Herbert
  • Patent number: 12050929
    Abstract: A data processing device is provided that includes a plurality of hardware data processing nodes, wherein each hardware data processing node performs a task, and a hardware thread scheduler including a plurality of hardware task schedulers configured to control execution of a respective task on a respective hardware data processing node of the plurality of hardware data processing nodes, and a proxy hardware task scheduler coupled to a data processing node external to the data processing device, wherein the proxy hardware task scheduler is configured to control execution of a task by the external data processing device, wherein the hardware thread scheduler is configurable to execute a thread of tasks, the tasks including the task controlled by the proxy hardware task scheduler and a first task controlled by a first hardware task scheduler of the plurality of hardware task schedulers.
    Type: Grant
    Filed: December 16, 2020
    Date of Patent: July 30, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Hetul Sanghvi, Niraj Nandan, Mihir Narendra Mody, Kedar Satish Chitnis
  • Patent number: 12039355
    Abstract: A telemetry service can receive telemetry collection requirements that are expressed as an “intent” that defines how telemetry is to be collected. A telemetry intent compiler can receive the telemetry intent and translate the high level intent into abstract telemetry configuration parameters that provide a generic description of desired telemetry data. The telemetry service can determine, from the telemetry intent, a set of devices from which to collect telemetry data. For each device, the telemetry service can determine capabilities of the device with respect to telemetry data collection. The capabilities may include a telemetry protocol supported by the device. The telemetry service can create a protocol specific device configuration based on the abstract telemetry configuration parameters and the telemetry protocol supported by the device. Devices in a network system that support a particular telemetry protocol can be allocated to instances of a telemetry collector that supports the telemetry protocol.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: July 16, 2024
    Assignee: JUNIPER NETWORKS, INC.
    Inventors: Gauresh Dilip Vanjare, Shruti Jadon, Tarun Banka, Venny Kranthi Teja Kommarthi, Aditi Ghotikar, Harshit Naresh Chitalia, Keval Nimeshkumar Shah, Mithun Chakaravarrti Dharmaraj, Rajenkumar Patel, Yixiao Wei
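The compilation step can be sketched as grouping devices by supported protocol and stamping each with the abstract telemetry parameters. Field names and protocol labels below are invented for illustration.

```python
def compile_intent(intent, devices):
    """Translate a high-level telemetry intent into per-device configs,
    grouped by each device's supported telemetry protocol so that each
    group can be handed to a matching collector instance."""
    abstract = {"paths": intent["metrics"], "interval_s": intent["interval_s"]}
    configs = {}
    for d in devices:
        proto = d["protocol"]          # capability reported by the device
        configs.setdefault(proto, []).append({"device": d["name"], **abstract})
    return configs

intent = {"metrics": ["cpu", "if-stats"], "interval_s": 30}
devices = [{"name": "r1", "protocol": "gnmi"},
           {"name": "r2", "protocol": "snmp"},
           {"name": "r3", "protocol": "gnmi"}]
cfg = compile_intent(intent, devices)
```

The operator states the intent once; the per-protocol grouping is what lets the service allocate each device to a collector instance that speaks its protocol.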
  • Patent number: 12020075
    Abstract: Techniques are disclosed relating to dispatching compute work from a compute stream. In some embodiments, a graphics processor executes instructions of compute kernels. Workload parser circuitry may determine, for distribution to the graphics processor circuitry, a set of workgroups from a compute kernel that includes workgroups organized in multiple dimensions, including a first number of workgroups in a first dimension and a second number of workgroups in a second dimension. This may include determining multiple sub-kernels for the compute kernel, wherein a first sub-kernel includes, in the first dimension, a limited number of workgroups that is smaller than the first number of workgroups. The parser circuitry may iterate through workgroups in both the first and second dimensions to generate the set of workgroups, proceeding through the first sub-kernel before iterating through any of the other sub-kernels. Disclosed techniques may provide desirable shapes for batches of workgroups.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: June 25, 2024
    Assignee: Apple Inc.
    Inventors: Andrew M. Havlir, Ajay Simha Modugala, Karl D. Mann
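The sub-kernel iteration order can be sketched with invented sizes: slice the first dimension into sub-kernels of at most `limit` workgroups, then exhaust each slice across both dimensions before moving to the next.

```python
def batches(nx, ny, limit):
    """Emit (x, y) workgroup coordinates for a 2-D kernel of nx-by-ny
    workgroups, iterating each x-slice (sub-kernel) fully before the next."""
    order = []
    for x0 in range(0, nx, limit):            # one sub-kernel per x-slice
        for y in range(ny):
            for x in range(x0, min(x0 + limit, nx)):
                order.append((x, y))
    return order

order = batches(nx=4, ny=2, limit=2)
```

Capping the first dimension keeps each dispatched batch compact in both dimensions, which is the "desirable shape" the abstract mentions, instead of long single-row strips.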
  • Patent number: 11983575
    Abstract: The embodiments herein describe a virtualization framework for cache coherent accelerators where the framework incorporates a layered approach for accelerators in their interactions between a cache coherent protocol layer and the functions performed by the accelerator. In one embodiment, the virtualization framework includes a first layer containing the different instances of accelerator functions (AFs), a second layer containing accelerator function engines (AFE) in each of the AFs, and a third layer containing accelerator function threads (AFTs) in each of the AFEs. Partitioning the hardware circuitry using multiple layers in the virtualization framework allows the accelerator to be quickly re-provisioned in response to requests made by guest operating systems or virtual machines executing in a host. Further, using the layers to partition the hardware permits the host to re-provision sub-portions of the accelerator while the remaining portions of the accelerator continue to operate as normal.
    Type: Grant
    Filed: September 6, 2022
    Date of Patent: May 14, 2024
    Assignee: XILINX, INC.
    Inventors: Millind Mittal, Jaideep Dastidar
  • Patent number: 11972297
    Abstract: Systems and methods are provided for offloading a task from a central processor in a radio access network (RAN) server to one or more heterogeneous accelerators. For example, a task associated with one or more operational partitions (or a service application) associated with processing data traffic in the RAN is dynamically allocated for offloading from the central processor based on workload status information. One or more accelerators are dynamically allocated for executing the task, where the accelerators may be heterogeneous and may not comprise pre-programming for executing the task. The disclosed technology further enables generating specific application programs for execution on the respective heterogeneous accelerators based on a single set of program instructions.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: April 30, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Daehyeok Kim
  • Patent number: 11960938
    Abstract: The disclosed system specifies, based on measurement results of communication times taken for accessing a plurality of external databases, the relation between the communication times taken for accessing the plurality of external databases; calculates, when accepting an instruction to execute processing using at least one of the plurality of external databases, a processing load when accessing the at least one external database, based on the relation between the communication times; and controls access to data included in the at least one external database according to the calculated processing load.
    Type: Grant
    Filed: April 29, 2021
    Date of Patent: April 16, 2024
    Assignee: FUJITSU LIMITED
    Inventors: Takuma Maeda, Kazuhiro Taniguchi, Junji Kawai
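The load-based control can be sketched as follows, with invented communication times and budget: estimate a query's processing load from the measured per-database communication times and defer it when the estimate exceeds the budget.

```python
def plan_access(comm_times_ms, query_dbs, budget_ms=100):
    """Estimate the processing load of a query from measured communication
    times of the external databases it touches; defer when over budget."""
    load = sum(comm_times_ms[db] for db in query_dbs)
    return ("defer" if load > budget_ms else "run", load)

comm = {"db_a": 20, "db_b": 45, "db_c": 80}   # measured round-trip times
light = plan_access(comm, ["db_a", "db_b"])   # within budget
heavy = plan_access(comm, ["db_b", "db_c"])   # over budget
```

Deriving the load from measured communication times (rather than query size alone) lets the controller throttle exactly the accesses that would monopolize slow external links.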