Patents Examined by Gregory A Kessler
-
Patent number: 11868821
Abstract: A system and method for launching parallel processes on a server configured to process a number of parallel processes. A request is received from a parallel application to start a number of parallel processes. In response to this request, a launcher creates a surrogate. The surrogate inherits communications channels from the launcher. The surrogate then executes activities related to the launch of the parallel processes and launches them. Once the parallel processes are launched, the surrogate is terminated.
Type: Grant
Filed: January 17, 2023
Date of Patent: January 9, 2024
Assignee: International Business Machines Corporation
Inventors: Joshua J. Hursey, David Solt, Austen William Lauria
-
Patent number: 11861408
Abstract: The present disclosure includes systems, methods, and computer-readable mediums for discovering capabilities of a hardware (HW) accelerator card. A processor may communicate a request for a listing of acceleration services to a HW accelerator card connected to the processor via the communication interface. The HW accelerator card may retrieve the listing from memory and provide a response to the processor that includes a listing of the HW acceleration services provided by the HW accelerator card.
Type: Grant
Filed: June 18, 2021
Date of Patent: January 2, 2024
Assignee: Google LLC
Inventors: Shrikant Kelkar, Lakshmi Sharma, Manoj Jayadevan, Gargi Adhav, Parveen Patel, Parthasarathy Ranganathan
-
Patent number: 11853748
Abstract: The current document is directed to automated application-release-management facilities that, in a described implementation, coordinate continuous development and release of cloud-computing applications. The application-release-management process is specified, in the described implementation, by application-release-management pipelines, each pipeline comprising one or more stages, with each stage comprising one or more tasks. The currently described methods and systems allow resources to be shared among multiple, interdependent release pipelines and allow access to shared resources to be controlled.
Type: Grant
Filed: October 25, 2021
Date of Patent: December 26, 2023
Assignee: VMware, Inc.
Inventors: Agila Govindaraju, Ravi Kasha, Mohammed Muneebuddin
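The pipeline-of-stages-of-tasks structure described in this abstract can be illustrated with a minimal Python sketch. This is not the patented implementation; all names are invented, and a `threading.Lock` stands in for whatever mechanism the patent uses to control access to a resource shared by multiple pipelines:

```python
import threading
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str
    tasks: List[Callable[[], None]]

@dataclass
class Pipeline:
    name: str
    stages: List[Stage]

    def run(self):
        # Stages execute in order; each stage runs its tasks in order.
        for stage in self.stages:
            for task in stage.tasks:
                task()

# Stand-in for a resource shared among multiple release pipelines.
shared_artifact_store = threading.Lock()
log = []

def build():
    log.append("build")

def deploy():
    # Only one pipeline touches the shared store at a time.
    with shared_artifact_store:
        log.append("deploy")

pipeline = Pipeline("release", [Stage("ci", [build]), Stage("cd", [deploy])])
pipeline.run()
```

Running the pipeline executes `build` then `deploy`, with the deploy task holding the shared-resource lock for its duration.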
-
Patent number: 11853779
Abstract: A host device and methods for efficient distributed security forensics. The method includes creating, at a host device configured to run a virtualization entity, an event index for the virtualization entity; encoding a plurality of events related to the virtualization entity, wherein each event includes a process having a process path; and updating the event index based on the encoded plurality of events.
Type: Grant
Filed: October 15, 2021
Date of Patent: December 26, 2023
Assignee: Twistlock, Ltd.
Inventors: Liron Levin, Dima Stopel, Ami Bizamcher, Michael Kletselman, John Morello
-
Patent number: 11853789
Abstract: In one embodiment, a system includes first host machines implementing a public-cloud computing environment and second host machines implementing a private-cloud computing environment. At least one of the first host machines includes a resource manager that provides a public-cloud resource interface through which one or more public-cloud clients interact with one or more virtual machines, and at least one of the second host machines includes one or more private-cloud virtual machines. At least one of the first host machines further includes a private-cloud VM resource provider through which the resource manager interacts with the private-cloud virtual machines. The VM resource provider translates requests to perform virtual machine operations from a public-cloud resource interface to a private-cloud virtual machine interface, and the private-cloud virtual machines perform the requested virtual machine operations in response to receiving the translated requests from the VM resource provider.
Type: Grant
Filed: November 23, 2022
Date of Patent: December 26, 2023
Assignee: Google LLC
Inventors: Ilya Beyer, Manoj Sharma, Gururaj Pangal, Maurilio Cometto
-
Patent number: 11853811
Abstract: Methods of arbitrating between requestors and a shared resource are described. The method comprises generating a vector with one bit per requestor, each initially set to one. Based on a plurality of select signals (one per decision node in a first layer of a binary decision tree, where each select signal is configured to be used by the corresponding decision node to select one of two child nodes), bits in the vector corresponding to non-selected requestors are set to zero. The method is repeated for each subsequent layer in the binary decision tree, based on the select signals for the decision nodes in those layers. The resulting vector is a priority vector in which only a single bit has a value of one. Access to the shared resource is granted, for a current processing cycle, to the requestor corresponding to the bit having a value of one.
Type: Grant
Filed: October 17, 2022
Date of Patent: December 26, 2023
Assignee: Imagination Technologies Limited
Inventor: Casper Van Benthem
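The arbitration scheme in this abstract is concrete enough to sketch in Python. This is an illustration, not the patented hardware: the layer-by-layer layout and the select-signal encoding (0 = left child, 1 = right child, one list of select bits per tree layer, root first) are assumptions:

```python
def arbitrate(num_requestors, selects):
    """Tree-based arbiter sketch: reduce a vector of ones to a one-hot
    priority vector by zeroing the non-selected half at each decision node.

    num_requestors must be a power of two; selects[d] holds the select bits
    for the 2**d decision nodes in layer d of the binary decision tree.
    """
    vec = [1] * num_requestors
    for depth, layer in enumerate(selects):
        half = num_requestors >> (depth + 1)  # requestors under each child
        for node, sel in enumerate(layer):
            base = node * half * 2
            # Zero the bits of the child this node did NOT select.
            start = base + half if sel == 0 else base
            for i in range(start, start + half):
                vec[i] = 0
    return vec

# Three layers of select signals arbitrate among 8 requestors.
priority = arbitrate(8, [[0], [1, 0], [0, 1, 1, 0]])
granted = priority.index(1)  # the single surviving requestor wins this cycle
```

Each layer halves the set of candidate requestors, so after log2(N) layers exactly one bit survives, matching the abstract's one-hot priority vector.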
-
Patent number: 11853794
Abstract: A pipeline task verification method and system is disclosed, and may use one or more processors. The method may comprise providing a data processing pipeline specification, wherein the data processing pipeline specification defines a plurality of data elements of a data processing pipeline. The method may further comprise identifying from the data processing pipeline specification one or more tasks defining a relationship between a first data element and a second data element. The method may further comprise receiving for a given task one or more data processing elements intended to receive the first data element and to produce the second data element. The method may further comprise verifying that the received one or more data processing elements receive the first data element and produce the second data element according to the defined relationship.
Type: Grant
Filed: September 30, 2022
Date of Patent: December 26, 2023
Assignee: Palantir Technologies Inc.
Inventor: Kaan Tekelioglu
-
Patent number: 11847498
Abstract: A system and method for multi-region deployment of application jobs in a federated cloud computing infrastructure. A job is received for execution in two or more regions of the federated cloud computing infrastructure, each of the two or more regions comprising a collection of servers joined in a raft group for separate, regional execution of the job. A copy of the job is generated for each of the two or more regions. The job is then deployed to the two or more regions, the workload orchestrator deploying the job according to a deployment plan. A state indication is received from each of the two or more regions, the state indication representing a state of completion of the job by each respective region of the multi-cloud computing infrastructure.
Type: Grant
Filed: July 8, 2021
Date of Patent: December 19, 2023
Assignee: HashiCorp
Inventor: Timothy Gross
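The copy-per-region deployment and state-collection flow can be sketched in a few lines of Python. This is an illustration only; `run_in_region` is an invented stand-in for handing a job copy to a region's raft group of servers:

```python
def deploy_multi_region(job, regions, run_in_region):
    """Create one copy of the job per region, deploy each copy, and
    collect a completion-state indication per region."""
    states = {}
    for region in regions:
        job_copy = dict(job)  # separate copy for each region's raft group
        states[region] = run_in_region(region, job_copy)
    return states

states = deploy_multi_region(
    {"name": "billing-batch"},
    ["us-east", "eu-west"],
    lambda region, job: "complete",  # stub regional executor
)
```

In a real deployment the orchestrator would follow a deployment plan and the per-region states would arrive asynchronously; the dictionary of states mirrors the abstract's per-region state indications.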
-
Patent number: 11847533
Abstract: A distributed computing network includes a quantum computation network and a processor. The quantum computation network includes one or more quantum processor units (QPUs) interconnected with one another using quantum interconnects, each comprising a quantum link and quantum network interface cards (QNICs), where each QPU is further connected, using its QNIC, to a quantum memory. The processor is configured to receive a quantum computation task and, using a network interface card (NIC), (i) allocate the quantum computation task to the computation network by activating any of the quantum interconnects between the QPUs according to the quantum computation task, and (ii) solve the quantum computation task using the quantum computation network.
Type: Grant
Filed: December 3, 2020
Date of Patent: December 19, 2023
Assignee: MELLANOX TECHNOLOGIES, LTD.
Inventors: Elad Mentovich, Kyle Scheps, Juan Jose Vegas Olmos
-
Patent number: 11847497
Abstract: Methods, apparatus, systems and articles of manufacture are disclosed that enable out-of-order pipelined execution of static mapping of a workload to one or more computational building blocks of an accelerator. An example apparatus includes an interface to load a first number of credits into memory; a comparator to compare the first number of credits to a threshold number of credits associated with memory availability in a buffer; and a dispatcher to, when the first number of credits meets the threshold number of credits, select a workload node of the workload to be executed at a first one of the one or more computational building blocks.
Type: Grant
Filed: December 23, 2021
Date of Patent: December 19, 2023
Assignee: Intel Corporation
Inventors: Michael Behar, Moshe Maor, Ronen Gabbai, Roni Rosner, Zigi Walter, Oren Agam
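The credit-gated dispatch described in this abstract can be sketched in software. This is an illustration, not the patented apparatus: the function names are invented, and "meets the threshold" is assumed to mean greater-than-or-equal:

```python
def dispatch(credits, threshold, ready_nodes):
    """Credit-gated dispatcher sketch.

    Each credit represents free space in the downstream buffer. A workload
    node is selected for execution only when the available credits meet
    the threshold; otherwise the dispatcher stalls."""
    if credits >= threshold and ready_nodes:
        node = ready_nodes.pop(0)          # select the next ready workload node
        return node, credits - threshold   # spend credits for its buffer space
    return None, credits                   # not enough buffer space: stall

node, credits_left = dispatch(5, 3, ["conv1", "pool1"])
```

With 5 credits and a threshold of 3, the first ready node is dispatched and 2 credits remain; a second call with only 2 credits would stall until the buffer drains and credits are returned.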
-
Patent number: 11836536
Abstract: Systems and methods for analyzing an event log for a plurality of instances of execution of a process to identify a bottleneck are provided. An event log for a plurality of instances of execution of a process is received and segments executed during one or more of the plurality of instances of execution are identified from the event log. The segments represent a pair of activities of the process. For each particular segment of the identified segments, a measure of performance is calculated for each of the one or more instances of execution of the particular segment based on the event log, each of the one or more instances of execution of the particular segment is classified based on the calculated measures of performance, and one or more metrics are computed for the particular segment based on the classified one or more instances of execution of the particular segment.
Type: Grant
Filed: March 14, 2022
Date of Patent: December 5, 2023
Assignee: UiPath, Inc.
Inventors: Martijn Copier, Roeland Johannus Scheepens
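The per-segment calculate/classify/compute-metrics loop can be sketched in Python. This is an illustration only: the patent does not fix a classification rule, so the median-based slow/normal split below is an assumption, as are all names:

```python
from statistics import median

def segment_metrics(durations_by_segment, slow_factor=2.0):
    """For each segment (a pair of activities), classify each instance's
    duration against the segment's median and compute summary metrics."""
    metrics = {}
    for segment, durations in durations_by_segment.items():
        med = median(durations)
        # Classify: an instance is "slow" if it exceeds slow_factor * median.
        slow = [d for d in durations if d > slow_factor * med]
        metrics[segment] = {
            "median": med,
            "instances": len(durations),
            "slow_fraction": len(slow) / len(durations),
        }
    return metrics

# Durations (hours, invented) for one segment across four process instances.
m = segment_metrics({("Approve", "Pay"): [1.0, 1.1, 0.9, 10.0]})
```

A segment whose `slow_fraction` is high across many instances is a bottleneck candidate; here one of four instances is classified slow.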
-
Patent number: 11836642
Abstract: A method, system, and computer program product for dynamically scheduling machine learning inference jobs receive or determine a plurality of performance profiles associated with a plurality of system resources, wherein each performance profile is associated with a machine learning model; receive a request for system resources for an inference job associated with the machine learning model; determine a system resource of the plurality of system resources for processing the inference job associated with the machine learning model based on the plurality of performance profiles and a quality of service requirement associated with the inference job; assign the system resource to the inference job for processing the inference job; receive result data associated with processing of the inference job with the system resource; and update, based on the result data, a performance profile of the plurality of the performance profiles associated with the system resource and the machine learning model.
Type: Grant
Filed: December 23, 2022
Date of Patent: December 5, 2023
Assignee: Visa International Service Association
Inventors: Yinhe Cheng, Yu Gu, Igor Karpenko, Peter Walker, Ranglin Lu, Subir Roy
-
Patent number: 11836524
Abstract: Representative apparatus, method, and system embodiments are disclosed for configurable computing. A representative memory interface circuit comprises: a plurality of registers storing a plurality of tables, a state machine circuit, and a plurality of queues. The plurality of tables include a memory request table, a memory request identifier table, a memory response table, a memory data message table, and a memory response buffer. The state machine circuit is adapted to receive a load request, and in response, to obtain a first memory request identifier from the load request, to store the first memory request identifier in the memory request identifier table, to generate one or more memory load request data packets having the memory request identifier for transmission to the memory circuit, and to store load request information in the memory request table. The plurality of queues store one or more data packets for transmission.
Type: Grant
Filed: August 19, 2020
Date of Patent: December 5, 2023
Assignee: Micron Technology, Inc.
Inventor: Tony M. Brewer
-
Patent number: 11836522
Abstract: In one or more embodiments, one or more systems, one or more methods, and/or one or more processes may determine an identification of an application executing on an information handling system (IHS); determine a first performance profile based at least on a policy and based at least on the identification of the application; configure a processor to utilize power up to a first power level based at least on the first performance profile; determine that a user physically utilizes at least one human input device of the IHS within an amount of time transpiring; receive information indicating that the user is physically in contact with the IHS; determine a second performance profile based at least on the policy and based at least on the information; and configure the processor to utilize power up to a second power level based at least on the second performance profile.
Type: Grant
Filed: March 23, 2021
Date of Patent: December 5, 2023
Assignee: Dell Products L.P.
Inventors: Travis Christian North, Daniel Lawrence Hamlin
-
Patent number: 11829805
Abstract: A plurality of low-performance locks within a computing environment are monitored. It is identified that, during a time window, threads of one of the plurality of low-performance locks are in a lock queue for an average time that exceeds a time threshold. It is further identified that, during that same time window, the average queue depth of the one of the plurality of low-performance locks exceeds a depth threshold. The one of the plurality of low-performance locks is converted from a low-performance lock into a high-performance lock.
Type: Grant
Filed: May 27, 2021
Date of Patent: November 28, 2023
Assignee: International Business Machines Corporation
Inventor: Louis A. Rasor
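The two-threshold conversion test in this abstract reduces to a small predicate. A minimal Python sketch, assuming per-window samples of queue wait times and queue depths (the function and parameter names are invented):

```python
def should_convert(wait_times, queue_depths, time_threshold, depth_threshold):
    """Return True if a monitored low-performance lock should be converted
    to a high-performance lock for the window just observed: both the
    average time threads spend queued AND the average queue depth must
    exceed their thresholds."""
    avg_wait = sum(wait_times) / len(wait_times)
    avg_depth = sum(queue_depths) / len(queue_depths)
    return avg_wait > time_threshold and avg_depth > depth_threshold
```

Requiring both conditions avoids promoting a lock that is briefly deep but fast, or slow but rarely contended; only sustained contention in both dimensions triggers conversion.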
-
Patent number: 11809973
Abstract: A modularized model interaction system and method of use, including an orchestrator, a set of hardware modules each including a standard set of hardware submodules with hardware-specific logic, and a set of model modules each including a standard set of model submodules with model-specific logic. In operation, the orchestrator determines a standard set of submodule calls to the standard submodules of a given hardware module and model module to implement model interaction on hardware associated with the hardware module.
Type: Grant
Filed: May 10, 2022
Date of Patent: November 7, 2023
Assignee: Grid.ai, Inc.
Inventors: Williams Falcon, Adrian Walchli, Thomas Henri Marceau Chaton, Sean Presley Narenthiran
-
Patent number: 11809915
Abstract: A parallel processing technique can be used to expedite reconciliation of a hierarchy of forecasts on a computer system. As one example, the computer system can receive forecasts that have a hierarchical relationship with respect to one another. The computer system can distribute the forecasts among a group of computing nodes by time point, so that all data points corresponding to the same time point in the forecasts are assigned to the same computing node. The computing nodes can receive the datasets corresponding to the time points, organize the data points in each of the datasets by forecast to generate ordered datasets, and assign the ordered datasets to processing threads. The processing threads (across the computing nodes) can then execute a reconciliation process in parallel to one another to generate reconciled values, which can be output by the computing nodes.
Type: Grant
Filed: August 2, 2023
Date of Patent: November 7, 2023
Assignee: SAS Institute Inc.
Inventors: Matthew Wayne Simpson, Caiqin Wang, Nilesh Jakhotiya, Michele Angelo Trovero
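The key idea in this abstract is the partitioning: every series' value for a given time point lands on the same computing node, so each time point can be reconciled independently and in parallel. A minimal Python sketch of that partitioning (round-robin on the time index is an assumption; the patent only requires that same-time points co-locate, and the series names are invented):

```python
def partition_by_time_point(forecasts, num_nodes):
    """Assign all series' data points for time t to the same node bucket.

    forecasts: {series_name: [(time_index, value), ...]}
    Returns one {time_index: {series_name: value}} dict per node."""
    buckets = [{} for _ in range(num_nodes)]
    for series, points in forecasts.items():
        for t, value in points:
            # Deterministic mapping: every series' value for time t
            # lands in the same node's bucket.
            buckets[t % num_nodes].setdefault(t, {})[series] = value
    return buckets

forecasts = {
    "total": [(0, 12.0), (1, 14.0)],  # parent forecast in the hierarchy
    "east":  [(0, 4.0),  (1, 6.0)],   # child forecasts
    "west":  [(0, 6.0),  (1, 8.0)],
}
buckets = partition_by_time_point(forecasts, 2)
```

Each node then holds complete hierarchies per time point (here, `total`, `east`, and `west` for its assigned times), so its threads can reconcile them without cross-node communication.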
-
Patent number: 11803418
Abstract: Systems and methods for implementing robotic process automation (RPA) in the cloud are provided. An instruction for managing an RPA robot is received at an orchestrator in a cloud computing environment from a user in a local computing environment. In response to receiving the instruction, the instruction for managing the RPA robot is effectuated.
Type: Grant
Filed: March 17, 2022
Date of Patent: October 31, 2023
Assignee: UiPath, Inc.
Inventor: Tarek Madkour
-
Patent number: 11803427
Abstract: The present invention relates to a method of contention mitigation for an operational application implemented by an embedded platform comprising a plurality of cores and a plurality of shared resources. This method comprises the steps of executing the operational application by one of the cores of the embedded platform, executing a stressor application on at least some of the other cores of the embedded platform in parallel with the operational application, the stressor application being composed of a set of contention tasks generating maximum contention on interference channels, and determining the contentions generated by the stressor application on the operational application.
Type: Grant
Filed: May 12, 2021
Date of Patent: October 31, 2023
Assignee: THALES
Inventors: Pierrick Lamour, Marc Fumey
-
Patent number: 11797356
Abstract: A method of synchronizing tasks in a test and measurement system, includes receiving, at a client in the system, a task input, receiving, at a job manager running on a first device processor in the system, a call from the client to create a job associated with the task, returning to the client an action containing at least one job code block associated with the job, receiving a call for the action, executing the at least one job code block by at least one processor in the system, determining that the job has completed, and completing the task.
Type: Grant
Filed: September 11, 2020
Date of Patent: October 24, 2023
Assignee: Tektronix, Inc.
Inventors: Timothy E. Sauerwein, Clinton M. Alter, Sean T. Marty, Jenny Yang, Keith D. Rule
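The create-job/return-action/execute flow can be sketched in Python. This is an illustration of the described sequence only; the class, method names, and the example task are all invented:

```python
class JobManager:
    """Sketch of the job-manager flow: a client asks for a job to be
    created for a task and receives back an action wrapping the job's
    code block; calling the action runs the block and completes the task."""

    def __init__(self):
        self.tasks = {}

    def create_job(self, task_name, job_code_block):
        self.tasks[task_name] = "pending"

        def action():                       # returned to the client
            result = job_code_block()       # execute the job code block
            self.tasks[task_name] = "complete"
            return result

        return action

manager = JobManager()
action = manager.create_job("measure-rise-time", lambda: 4.2e-9)
result = action()                           # client invokes the action
```

Invoking the returned action runs the job's code block and marks the task complete, mirroring the final "determining that the job has completed, and completing the task" steps.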