Patents Examined by Zujia Xu
-
Patent number: 12190174
Abstract: A technique for synchronizing workgroups is provided. Multiple workgroups execute a wait instruction that specifies a condition variable and a condition. A workgroup scheduler stops execution of a workgroup that executes a wait instruction, and an advanced controller begins monitoring the condition variable. In response to the advanced controller detecting that the condition is met, the workgroup scheduler determines whether there is a high-contention scenario, which occurs when the wait instruction is part of a mutual exclusion synchronization primitive and is detected by observing a low number of updates to the condition variable prior to detecting that the condition has been met. In a high-contention scenario, the workgroup scheduler wakes up one workgroup and schedules another workgroup to be woken up at a future time. In a non-contention scenario, more than one workgroup can be woken up at the same time.
Type: Grant
Filed: May 29, 2019
Date of Patent: January 7, 2025
Assignee: Advanced Micro Devices, Inc.
Inventors: Alexandru Dutu, Sergey Blagodurov, Anthony T. Gutierrez, Matthew D. Sinclair, David A. Wood, Bradford M. Beckmann
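The wake-up policy the abstract describes can be sketched in Python. This is an illustrative model only, not the patented implementation: the contention heuristic, the threshold of two updates, and the workgroup names are all assumptions.

```python
import collections

def detect_high_contention(update_count, threshold=2):
    # Few updates to the condition variable before the condition held
    # suggests a mutex-style primitive (hypothetical heuristic).
    return update_count <= threshold

def wake_workgroups(waiting, update_count):
    """Return (woken_now, deferred) per the sketched wake policy."""
    queue = collections.deque(waiting)
    if detect_high_contention(update_count):
        woken = [queue.popleft()] if queue else []
        deferred = [queue.popleft()] if queue else []  # woken at a future time
        return woken, deferred
    return list(queue), []  # non-contention: wake everyone at once

# Three workgroups waiting, only one update before the condition was met
now, later = wake_workgroups(["wg0", "wg1", "wg2"], update_count=1)
print(now, later)  # ['wg0'] ['wg1']
```

Staggering wake-ups under contention avoids a thundering herd on the mutex, which is the point of distinguishing the two scenarios.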
-
Patent number: 12190143
Abstract: Systems, methods, and apparatus are provided for modification of RPA controls during a workflow without impacting bot performance. A background scan initiated in parallel to an RPA application workflow may identify values for an application control parameter and a corresponding bot control parameter. Hashes of the values may be validated against a key-value pair stored as a block in a distributed ledger. If changes have been made to the application control values, the bot control value will not match the application control value and validation will fail. If validation fails, an override may be generated for the bot value. An updated bot value may be stored in a temporary cache until the workflow is complete. Following completion of the workflow, a new block may be added to the blockchain storing a new key-value pair including the application control value and the updated bot control value.
Type: Grant
Filed: September 14, 2021
Date of Patent: January 7, 2025
Assignee: Bank of America Corporation
Inventors: Siva Paini, Sakshi Bakshi, Srinivasa Dhanwada, Sudhakar Balu
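The validate-then-override flow can be illustrated with a short sketch. The ledger block, control names, and values below are hypothetical; a real implementation would read the block from a distributed ledger rather than a dict.

```python
import hashlib

def digest(value):
    return hashlib.sha256(value.encode()).hexdigest()

def validate(app_value, bot_value, block):
    """Hashes of the live control values must match the key-value pair
    stored as a block in the ledger."""
    return (digest(app_value) == block["app_hash"]
            and digest(bot_value) == block["bot_hash"])

# Ledger block recorded when the workflow was last provisioned (hypothetical)
block = {"app_hash": digest("submit_v1"), "bot_hash": digest("submit_v1")}

app_value, bot_value = "submit_v2", "submit_v1"  # application control changed
overrides = {}
if not validate(app_value, bot_value, block):
    overrides["bot"] = app_value  # cached override until the workflow ends
print(overrides)  # {'bot': 'submit_v2'}
```

After the workflow completes, a new block holding `digest("submit_v2")` for both values would replace the stale pair.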
-
Patent number: 12190162
Abstract: A method including: a) receiving a computer-generated data set of sequencing constraints describing a software system to be executed on an automation system and including software components and runnable function entities distributed over a number of computing nodes; b) generating a transition matrix from the data set of sequencing constraints, the transition matrix having a plurality of matrix elements, each describing, by a transition value, a transition from one runnable function entity to another; c) receiving a computer-generated communication matrix describing communication links between the computing nodes in the automation system; d) generating a Markov chain from the data set of sequencing constraints and the communication matrix; e) generating a distribution function from the Markov chain describing the resources of the computing nodes used by the software components and runnable function entities; and f) optimizing the allocation of resources.
Type: Grant
Filed: February 28, 2019
Date of Patent: January 7, 2025
Assignee: Siemens Aktiengesellschaft
Inventors: Andrés Botero Halblaub, Jan Richter
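Steps b), d) and e) hinge on treating the transition matrix as a Markov chain whose stationary distribution approximates long-run resource use per runnable entity. A minimal sketch, with a hypothetical three-entity transition matrix standing in for one derived from real sequencing constraints:

```python
def stationary(P, iters=200):
    """Power-iterate a row-stochastic transition matrix toward its
    stationary distribution, a proxy for long-run resource use."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical chain over three runnable function entities
P = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [1.0, 0.0, 0.0]]
dist = stationary(P)
print([round(x, 2) for x in dist])  # [0.4, 0.4, 0.2]
```

Entities with higher stationary mass would be the first candidates for extra resources in step f).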
-
Patent number: 12182624
Abstract: A device, system and method for assigning portions of a global resource limit to application engines based on relative load is provided. A system comprises a plurality of application engines that share a global resource limit, and a plurality of operator engines. The operator engines are each configured to: monitor a respective metric representative of respective load at a respective application engine; share the respective metric with others of the plurality of operator engines; determine a relative load at the respective application engine based on the respective metric and respective metrics received from the others of the plurality of operator engines; and assign a portion of the global resource limit to the respective application engine based on the relative load.
Type: Grant
Filed: February 18, 2021
Date of Patent: December 31, 2024
Assignee: AMADEUS S.A.S., Sophia Antipolis
Inventors: Philippe Grabarsky, Mohamed Wadie Nsiri
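The core computation is a proportional split of the global limit by each engine's share of the total load metric. A minimal sketch, with hypothetical engine names and metric values:

```python
def assign_portions(global_limit, metrics):
    """Split a global resource limit in proportion to each engine's
    load metric; fall back to an even split if all metrics are zero."""
    total = sum(metrics.values())
    if total == 0:
        even = global_limit / len(metrics)
        return {name: even for name in metrics}
    return {name: global_limit * m / total for name, m in metrics.items()}

# Metrics as shared among the operator engines (hypothetical values)
portions = assign_portions(900, {"engine_a": 10, "engine_b": 20, "engine_c": 30})
print(portions)  # engine_c carries half the load, so it gets half the limit
```

Because every operator engine sees the same shared metrics, each can compute its own portion independently and the portions still sum to the global limit.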
-
Patent number: 12175302
Abstract: A transitioning process to integrate a computer-related service with one or more other computer-related services. The computer-related service and the one or more other computer-related services are analyzed to determine whether there is a conflict in integrating the computer-related service in the computing environment. A determination is made, based on the analyzing, whether one or more changes are to be made to a selected component. At least the analyzing and the determining are part of an automated process generated to integrate the computer-related service, and the automated process is at least a part of the transitioning process. An indication of a performance impact of executing at least the automated process to integrate the computer-related service is obtained. The transitioning process is to continue based on the performance impact meeting one or more selected criteria and based on determining that there are not one or more changes to be made to the selected component.
Type: Grant
Filed: November 27, 2020
Date of Patent: December 24, 2024
Assignee: Kyndryl, Inc.
Inventors: Hong Dan Zhan, Kim Poh Wong
-
Patent number: 12124888
Abstract: Example implementations relate to a role-based autoscaling approach for scaling of nodes of a stateful application in a large scale virtual data processing (LSVDP) environment. Information is received regarding a role performed by the nodes of a virtual cluster of an LSVDP environment on which a stateful application is or will be deployed. Role-based autoscaling policies are maintained defining conditions under which the roles are to be scaled. A policy for a first role upon which a second role is dependent specifies a condition for scaling out the first role by a first step and a second step by which the second role is to be scaled out in tandem. When load information for the first role meets the condition, nodes in the virtual cluster that perform the first role are increased by the first step and nodes that perform the second role are increased by the second step.
Type: Grant
Filed: June 28, 2023
Date of Patent: October 22, 2024
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Xiongbing Ou, Lakshminarayanan Gunaseelan, Joel Baxter, Swami Viswanathan
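The tandem-scaling rule can be sketched directly: when the first role's load crosses the policy's condition, both the role and its dependent role grow by their respective steps. Role names, the threshold, and step sizes below are illustrative assumptions.

```python
def scale_roles(counts, policy, load):
    """Scale out a role and, in tandem, the role that depends on it,
    when the role's load meets the policy condition."""
    new = dict(counts)
    if load[policy["role"]] >= policy["threshold"]:
        new[policy["role"]] += policy["step"]
        new[policy["tandem_role"]] += policy["tandem_step"]
    return new

policy = {"role": "worker", "threshold": 0.8, "step": 2,
          "tandem_role": "shuffle", "tandem_step": 1}
print(scale_roles({"worker": 4, "shuffle": 2}, policy, {"worker": 0.9}))
# {'worker': 6, 'shuffle': 3}
```

Scaling the dependent role in the same action keeps the two roles' capacities in step, rather than waiting for the second role to trip its own threshold later.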
-
Patent number: 12124878
Abstract: A system and method of dynamically controlling a reservation of resources within a cluster environment to maximize a response time are disclosed. The method embodiment of the invention includes receiving from a requestor a request for a reservation of resources in the cluster environment, reserving a first group of resources, evaluating resources within the cluster environment to determine if the response time can be improved and, if the response time can be improved, then canceling the reservation for the first group of resources and reserving a second group of resources to process the request at the improved response time.
Type: Grant
Filed: March 17, 2022
Date of Patent: October 22, 2024
Assignee: III Holdings 12, LLC
Inventor: David B. Jackson
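The reserve-evaluate-rereserve loop reduces to comparing the current reservation's predicted response time against other candidate resource groups. A minimal sketch; group names and times are hypothetical:

```python
def best_reservation(candidates, current):
    """Keep the current reservation unless some other resource group
    offers a strictly better (lower) response time."""
    best = min(candidates, key=lambda g: g["response_time"])
    if best["response_time"] < current["response_time"]:
        return best  # cancel `current`, reserve `best` instead
    return current

current = {"group": "A", "response_time": 120}
candidates = [{"group": "B", "response_time": 90},
              {"group": "C", "response_time": 150}]
print(best_reservation(candidates, current)["group"])  # B
```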
-
Patent number: 12093720
Abstract: A container specification is received. The container specification includes a definition of an image. The image definition specifies the running of one or more prestart runtime commands. The image definition is inspected to identify whether the image definition includes specifying the running of one or more prestart runtime commands. The image is started on a host system, wherein in response to identifying that the image definition includes running one or more prestart runtime commands, the starting of the image includes running the one or more prestart runtime commands prior to the container entering a running state.
Type: Grant
Filed: March 15, 2021
Date of Patent: September 17, 2024
Assignee: International Business Machines Corporation
Inventors: Zach Taylor, Randy A. Rendahl, Stephen Paul Ridgill, II, Aditya Mandhare
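The ordering constraint is the essential point: prestart commands complete before the container is considered running. A simplified model, with a pluggable runner and hypothetical command names in place of a real container runtime:

```python
def start_image(image_def, run):
    """Inspect the image definition and run any prestart commands it
    names before the container enters the running state."""
    for cmd in image_def.get("prestart", []):
        run(cmd)  # each command must finish before "running"
    return "running"

executed = []
image = {"name": "app:latest", "prestart": [["warm-cache"], ["migrate-db"]]}
state = start_image(image, run=executed.append)
print(state, executed)  # running [['warm-cache'], ['migrate-db']]
```

An image definition without a `prestart` key simply transitions straight to running.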
-
Patent number: 12067414
Abstract: Inadvertent data swaps can be prevented by measuring the volume of transactions in a distributed computing environment to determine locations of potential data swaps, and by managing a correlation between a thread identification (ID) and a transaction header ID for transactions in the distributed computing environment. In some embodiments, the prevention of data swaps further includes interrupting data transmission to avoid swaps at the identified locations. When the thread ID and transaction header ID do not match, the potential for data swaps can be high.
Type: Grant
Filed: November 4, 2021
Date of Patent: August 20, 2024
Assignee: International Business Machines Corporation
Inventors: Abhay Kumar Patra, Rakesh Shinde, Harish Bharti, Vijay Ekambaram
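The mismatch check itself is simple to illustrate: flag any in-flight transaction whose thread ID diverges from the thread ID recorded in its header. Field names and values here are hypothetical.

```python
def swap_risk(in_flight):
    """Flag transactions whose thread ID and transaction-header thread ID
    diverge -- the condition treated above as a high swap risk."""
    return [tx for tx in in_flight
            if tx["thread_id"] != tx["header_thread_id"]]

txs = [{"id": 1, "thread_id": "t-7", "header_thread_id": "t-7"},
       {"id": 2, "thread_id": "t-3", "header_thread_id": "t-9"}]
suspect = swap_risk(txs)
print([tx["id"] for tx in suspect])  # [2] -> interrupt transmission here
```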
-
Patent number: 12056513
Abstract: A server includes a hardware platform, a hypervisor platform, and at least one virtual machine operating as an independent guest computing device. The hypervisor includes a memory facilitator, at least two hardware emulators, a toolstack and an emulator manager. The memory facilitator provides memory for a virtual machine, with the memory having state data associated therewith at a current location within the virtual machine. The at least one hardware emulator provides at least one set of hardware resources for the virtual machine, with the at least one set of hardware resources having state data associated therewith at the current location within the virtual machine. The toolstack controls the hypervisor, including generation of a start state data transfer request. The emulator manager coordinates transfer of the respective state data from the current location to a different location, and tracks progress of the transfer of the respective state data to the different location.
Type: Grant
Filed: March 17, 2021
Date of Patent: August 6, 2024
Assignee: Citrix Systems, Inc.
Inventor: Jennifer Rachel Herbert
-
Patent number: 12050929
Abstract: A data processing device is provided that includes a plurality of hardware data processing nodes, wherein each hardware data processing node performs a task, and a hardware thread scheduler including a plurality of hardware task schedulers configured to control execution of a respective task on a respective hardware data processing node of the plurality of hardware data processing nodes, and a proxy hardware task scheduler coupled to a data processing node external to the data processing device, wherein the proxy hardware task scheduler is configured to control execution of a task by the external data processing device, and wherein the hardware thread scheduler is configurable to execute a thread of tasks, the tasks including the task controlled by the proxy hardware task scheduler and a first task controlled by a first hardware task scheduler of the plurality of hardware task schedulers.
Type: Grant
Filed: December 16, 2020
Date of Patent: July 30, 2024
Assignee: Texas Instruments Incorporated
Inventors: Hetul Sanghvi, Niraj Nandan, Mihir Narendra Mody, Kedar Satish Chitnis
-
Patent number: 12039355
Abstract: A telemetry service can receive telemetry collection requirements that are expressed as an "intent" that defines how telemetry is to be collected. A telemetry intent compiler can receive the telemetry intent and translate the high level intent into abstract telemetry configuration parameters that provide a generic description of desired telemetry data. The telemetry service can determine, from the telemetry intent, a set of devices from which to collect telemetry data. For each device, the telemetry service can determine capabilities of the device with respect to telemetry data collection. The capabilities may include a telemetry protocol supported by the device. The telemetry service can create a protocol specific device configuration based on the abstract telemetry configuration parameters and the telemetry protocol supported by the device. Devices in a network system that support a particular telemetry protocol can be allocated to instances of a telemetry collector that supports the telemetry protocol.
Type: Grant
Filed: August 24, 2020
Date of Patent: July 16, 2024
Assignee: Juniper Networks, Inc.
Inventors: Gauresh Dilip Vanjare, Shruti Jadon, Tarun Banka, Venny Kranthi Teja Kommarthi, Aditi Ghotikar, Harshit Naresh Chitalia, Keval Nimeshkumar Shah, Mithun Chakaravarrti Dharmaraj, Rajenkumar Patel, Yixiao Wei
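The compilation step maps one abstract parameter set onto a per-device configuration keyed by each device's supported protocol. A toy sketch of that fan-out; the device names, protocol strings, and parameter fields are invented for illustration:

```python
def compile_intent(abstract_params, devices):
    """Fan one abstract telemetry description out into per-device
    configs tagged with each device's supported protocol."""
    configs = {}
    for dev in devices:
        configs[dev["name"]] = {"protocol": dev["protocol"],
                                "metrics": abstract_params["metrics"],
                                "interval_s": abstract_params["interval_s"]}
    return configs

params = {"metrics": ["cpu", "if_octets"], "interval_s": 30}
devs = [{"name": "r1", "protocol": "gnmi"}, {"name": "r2", "protocol": "snmp"}]
print(compile_intent(params, devs)["r2"]["protocol"])  # snmp
```

Grouping the resulting configs by protocol then gives the allocation of devices to protocol-specific collector instances that the abstract describes.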
-
Patent number: 12020075
Abstract: Techniques are disclosed relating to dispatching compute work from a compute stream. In some embodiments, a graphics processor executes instructions of compute kernels. Workload parser circuitry may determine, for distribution to the graphics processor circuitry, a set of workgroups from a compute kernel that includes workgroups organized in multiple dimensions, including a first number of workgroups in a first dimension and a second number of workgroups in a second dimension. This may include determining multiple sub-kernels for the compute kernel, wherein a first sub-kernel includes, in the first dimension, a limited number of workgroups that is smaller than the first number of workgroups. The parser circuitry may iterate through workgroups in both the first and second dimensions to generate the set of workgroups, proceeding through the first sub-kernel before iterating through any of the other sub-kernels. Disclosed techniques may provide desirable shapes for batches of workgroups.
Type: Grant
Filed: September 11, 2020
Date of Patent: June 25, 2024
Assignee: Apple Inc.
Inventors: Andrew M. Havlir, Ajay Simha Modugala, Karl D. Mann
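The iteration order can be made concrete with a small sketch: the first dimension is split into sub-kernels no wider than a limit, and each sub-kernel is walked through both dimensions before the next one starts. The dimension sizes and limit below are arbitrary example values.

```python
def batches(first_dim, second_dim, limit):
    """Split the first dimension into sub-kernels of at most `limit`
    workgroups and exhaust each sub-kernel before starting the next."""
    order = []
    for x0 in range(0, first_dim, limit):          # one sub-kernel at a time
        for y in range(second_dim):                # both dimensions inside it
            for x in range(x0, min(x0 + limit, first_dim)):
                order.append((x, y))
    return order

print(batches(4, 2, limit=2)[:4])  # [(0, 0), (1, 0), (0, 1), (1, 1)]
```

Note that the first four workgroups dispatched all come from the first sub-kernel (x < 2), giving the compact batch shape the abstract aims for.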
-
Patent number: 11983575
Abstract: The embodiments herein describe a virtualization framework for cache coherent accelerators where the framework incorporates a layered approach for accelerators in their interactions between a cache coherent protocol layer and the functions performed by the accelerator. In one embodiment, the virtualization framework includes a first layer containing the different instances of accelerator functions (AFs), a second layer containing accelerator function engines (AFEs) in each of the AFs, and a third layer containing accelerator function threads (AFTs) in each of the AFEs. Partitioning the hardware circuitry using multiple layers in the virtualization framework allows the accelerator to be quickly re-provisioned in response to requests made by guest operating systems or virtual machines executing in a host. Further, using the layers to partition the hardware permits the host to re-provision sub-portions of the accelerator while the remaining portions of the accelerator continue to operate as normal.
Type: Grant
Filed: September 6, 2022
Date of Patent: May 14, 2024
Assignee: Xilinx, Inc.
Inventors: Millind Mittal, Jaideep Dastidar
-
Patent number: 11972297
Abstract: Systems and methods are provided for offloading a task from a central processor in a radio access network (RAN) server to one or more heterogeneous accelerators. For example, a task associated with one or more operational partitions (or a service application) associated with processing data traffic in the RAN is dynamically allocated for offloading from the central processor based on workload status information. One or more accelerators are dynamically allocated for executing the task, where the accelerators may be heterogeneous and may not comprise pre-programming for executing the task. The disclosed technology further enables generating specific application programs for execution on the respective heterogeneous accelerators based on a single set of program instructions.
Type: Grant
Filed: May 18, 2021
Date of Patent: April 30, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventor: Daehyeok Kim
-
Patent number: 11960938
Abstract: A disclosed system determines, based on measurement results of communication times taken for accessing a plurality of external databases, a relation between those communication times; when accepting an instruction to execute processing using at least one of the external databases, it calculates the processing load of accessing that database based on the relation between the communication times; and it controls access to data included in that database according to the calculated processing load.
Type: Grant
Filed: April 29, 2021
Date of Patent: April 16, 2024
Assignee: Fujitsu Limited
Inventors: Takuma Maeda, Kazuhiro Taniguchi, Junji Kawai
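One way to picture the load-based gating is to score each database's access cost relative to the fastest measured database and refuse accesses that exceed a budget. This is a loose illustrative model, not the patented method; the database names, times, and budget are invented.

```python
def access_allowed(comm_times, target_db, load_budget):
    """Estimate the processing load of accessing `target_db` relative to
    the fastest database, and gate the access against a budget."""
    baseline = min(comm_times.values())
    load = comm_times[target_db] / baseline  # relative cost of this access
    return load <= load_budget

times = {"db_a": 5.0, "db_b": 20.0}  # measured round-trip times (ms)
print(access_allowed(times, "db_b", load_budget=3.0))  # False (20/5 = 4)
```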
-
Patent number: 11960937
Abstract: A system and method of dynamically controlling a reservation of resources within a cluster environment to maximize a response time are disclosed. The method embodiment of the invention includes receiving from a requestor a request for a reservation of resources in the cluster environment, reserving a first group of resources, evaluating resources within the cluster environment to determine if the response time can be improved and, if the response time can be improved, then canceling the reservation for the first group of resources and reserving a second group of resources to process the request at the improved response time.
Type: Grant
Filed: March 17, 2022
Date of Patent: April 16, 2024
Assignee: III Holdings 12, LLC
Inventor: David B. Jackson
-
Patent number: 11875198
Abstract: At least one processing device comprises a processor and a memory coupled to the processor. The at least one processing device is configured to establish one or more groups of synchronization objects in a storage system based at least in part on object type, and for each of the one or more groups, to insert entries into a corresponding object type queue for respective objects of the group, to execute a monitor thread for the group, the monitor thread being configured to scan the entries of the corresponding object type queue, and responsive to at least one of the scanned entries meeting one or more designated conditions, to take at least one automated action for its associated object. The synchronization objects illustratively comprise respective locks, or other objects. The at least one processing device illustratively comprises at least a subset of a plurality of processing cores of the storage system.
Type: Grant
Filed: March 22, 2021
Date of Patent: January 16, 2024
Assignee: EMC IP Holding Company LLC
Inventors: Vladimir Shveidel, Lior Kamran
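The scan-and-act loop of the per-group monitor thread can be sketched briefly. The condition (a lock held past its timeout) and the lock entries are illustrative assumptions; the abstract only says the objects are "illustratively locks".

```python
import time

def monitor(queue, condition, action):
    """Scan an object-type queue and take an automated action on every
    entry that meets the designated condition."""
    now = time.monotonic()
    for entry in queue:
        if condition(entry, now):
            action(entry)

held_too_long = lambda e, now: now - e["acquired"] > e["timeout"]
events = []
locks = [{"name": "lk1", "acquired": time.monotonic() - 10, "timeout": 5},
         {"name": "lk2", "acquired": time.monotonic(), "timeout": 5}]
monitor(locks, held_too_long, lambda e: events.append(e["name"]))
print(events)  # ['lk1']
```

Running one such monitor per object-type group keeps each scan short and lets different object types use different conditions and actions.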
-
Patent number: 11874758
Abstract: Some embodiments are directed to logging within a software application executed over an assembly of information processing devices. More particularly, some embodiments relate to a method enabling process logging when a software application operates with several processes and/or threads.
Type: Grant
Filed: August 25, 2015
Date of Patent: January 16, 2024
Assignee: BULL SAS
Inventor: Pierre Vigneras
-
Patent number: 11842224
Abstract: A client application (112) submits a request (118) to a resource status service (110) for resource status data ("data") regarding one or more computing resources (108) provided in a service provider network (102). The resource status service submits requests to the resources for the data. The resource status service provides a reply to the client application that includes any data received from the resources within a specified time. If all requested data was not received from the resources within the specified time, the resource status service can also provide, in the reply, an identifier ("ID") that identifies the request and can be utilized to identify and retrieve additional status data received at a later time. The client application can also submit additional requests for the status data, and may include the ID, may wait for additional data to be pushed to it, or may check a queue for the status data.
Type: Grant
Filed: September 1, 2017
Date of Patent: December 12, 2023
Assignee: Amazon Technologies, Inc.
Inventor: Nima Sharifi Mehr
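The partial-reply behavior can be sketched in a few lines: return whatever arrived before the deadline, and attach a request ID only when some resources have not answered yet. The resource names and reply shape below are hypothetical.

```python
import uuid

def status_reply(received, expected, deadline_hit):
    """Reply with whatever status data arrived in time, plus a request
    ID the client can use later if some resources did not answer."""
    reply = {"data": received}
    if deadline_hit and set(received) != set(expected):
        reply["request_id"] = str(uuid.uuid4())  # retrieve the rest later
    return reply

r = status_reply({"db-1": "ok"}, ["db-1", "db-2"], deadline_hit=True)
print("request_id" in r)  # True: db-2 has not answered yet
```

When every expected resource answers in time, the reply carries only the data and no ID, so the client knows nothing is outstanding.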