Patents Examined by Gregory Kessler
-
Patent number: 11847533
Abstract: A distributed computing network includes a quantum computation network and a processor. The quantum computation network includes one or more quantum processor units (QPUs) interconnected with one another using quantum interconnects, each including a quantum link and quantum network interface cards (QNICs), where each QPU is further connected, using the QNIC, to a quantum memory. The processor is configured to receive a quantum computation task and, using a network interface card (NIC), (i) allocate the quantum computation task to the quantum computation network by activating any of the quantum interconnects between the QPUs according to the quantum computation task, and (ii) solve the quantum computation task using the quantum computation network.
Type: Grant
Filed: December 3, 2020
Date of Patent: December 19, 2023
Assignee: MELLANOX TECHNOLOGIES, LTD.
Inventors: Elad Mentovich, Kyle Scheps, Juan Jose Vegas Olmos
-
Patent number: 11847498
Abstract: A system and method for multi-region deployment of application jobs in a federated cloud computing infrastructure. A job is received for execution in two or more regions of the federated cloud computing infrastructure, each of the two or more regions comprising a collection of servers joined in a raft group for separate, regional execution of the job. A copy of the job is generated for each of the two or more regions. The job is then deployed to the two or more regions, the workload orchestrator deploying the job according to a deployment plan. A state indication is received from each of the two or more regions, the state indication representing a state of completion of the job by each respective region of the multi-cloud computing infrastructure.
Type: Grant
Filed: July 8, 2021
Date of Patent: December 19, 2023
Assignee: HashiCorp
Inventor: Timothy Gross
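The multi-region deployment flow above can be sketched as a simple fan-out with per-region state collection. This is an illustrative sketch only, not HashiCorp's implementation; the `Region` class and `deploy_job` function are hypothetical names.

```python
# Hypothetical sketch: copy a job to each target region, let each region
# execute its copy independently, then gather per-region state indications.
from dataclasses import dataclass, field

@dataclass
class Region:
    name: str
    jobs: dict = field(default_factory=dict)  # job_id -> completion state

    def run(self, job_id):
        # Each region (its own raft group) executes its own copy of the job.
        self.jobs[job_id] = "complete"

def deploy_job(job_id, regions):
    """Deploy a copy of the job to every region and collect state indications."""
    for region in regions:
        region.run(job_id)
    return {r.name: r.jobs[job_id] for r in regions}

states = deploy_job("batch-42", [Region("us-east"), Region("eu-west")])
```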
-
Patent number: 11847497
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed that enable out-of-order pipelined execution of static mapping of a workload to one or more computational building blocks of an accelerator. An example apparatus includes an interface to load a first number of credits into memory; a comparator to compare the first number of credits to a threshold number of credits associated with memory availability in a buffer; and a dispatcher to, when the first number of credits meets the threshold number of credits, select a workload node of the workload to be executed at a first one of the one or more computational building blocks.
Type: Grant
Filed: December 23, 2021
Date of Patent: December 19, 2023
Assignee: Intel Corporation
Inventors: Michael Behar, Moshe Maor, Ronen Gabbai, Roni Rosner, Zigi Walter, Oren Agam
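The comparator-and-dispatcher interaction described above reduces to a credit check gating node selection. The following is a minimal, hypothetical sketch (the `dispatch` function and its arguments are invented for illustration):

```python
# Hypothetical sketch of credit-based dispatch: a workload node is selected
# only when the available credits meet a threshold tied to buffer availability.
def dispatch(credits, threshold, workload_nodes):
    """Return the next workload node to execute, or None if credits fall short."""
    if credits >= threshold:          # comparator: credits vs. buffer threshold
        return workload_nodes[0]      # dispatcher selects a workload node
    return None

node = dispatch(credits=8, threshold=4, workload_nodes=["conv1", "pool1"])
```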
-
Patent number: 11836524
Abstract: Representative apparatus, method, and system embodiments are disclosed for configurable computing. A representative memory interface circuit comprises: a plurality of registers storing a plurality of tables, a state machine circuit, and a plurality of queues. The plurality of tables include a memory request table, a memory request identifier table, a memory response table, a memory data message table, and a memory response buffer. The state machine circuit is adapted to receive a load request, and in response, to obtain a first memory request identifier from the load request, to store the first memory request identifier in the memory request identifier table, to generate one or more memory load request data packets having the memory request identifier for transmission to the memory circuit, and to store load request information in the memory request table. The plurality of queues store one or more data packets for transmission.
Type: Grant
Filed: August 19, 2020
Date of Patent: December 5, 2023
Assignee: Micron Technology, Inc.
Inventor: Tony M. Brewer
-
Patent number: 11836642
Abstract: A method, system, and computer program product for dynamically scheduling machine learning inference jobs receive or determine a plurality of performance profiles associated with a plurality of system resources, wherein each performance profile is associated with a machine learning model; receive a request for system resources for an inference job associated with the machine learning model; determine a system resource of the plurality of system resources for processing the inference job associated with the machine learning model based on the plurality of performance profiles and a quality of service requirement associated with the inference job; assign the system resource to the inference job for processing the inference job; receive result data associated with processing of the inference job with the system resource; and update, based on the result data, a performance profile of the plurality of performance profiles associated with the system resource and the machine learning model.
Type: Grant
Filed: December 23, 2022
Date of Patent: December 5, 2023
Assignee: Visa International Service Association
Inventors: Yinhe Cheng, Yu Gu, Igor Karpenko, Peter Walker, Ranglin Lu, Subir Roy
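The profile-based selection and feedback loop above can be illustrated with a small sketch. This is a hedged approximation, not Visa's method; the latency-based profiles, the QoS check, and the exponential-smoothing update are all invented for illustration.

```python
# Hypothetical sketch: choose the resource whose performance profile satisfies
# a QoS latency requirement, then fold the observed result back into the profile.
def pick_resource(profiles, qos_latency_ms):
    """profiles: {resource: expected_latency_ms}. Pick the fastest that meets QoS."""
    eligible = {r: lat for r, lat in profiles.items() if lat <= qos_latency_ms}
    if not eligible:
        return None
    return min(eligible, key=eligible.get)

def update_profile(profiles, resource, observed_latency_ms, alpha=0.5):
    """Blend the observed latency into the stored profile (exponential smoothing)."""
    profiles[resource] = alpha * observed_latency_ms + (1 - alpha) * profiles[resource]
    return profiles

profiles = {"gpu-0": 12.0, "cpu-0": 45.0}
chosen = pick_resource(profiles, qos_latency_ms=20.0)
profiles = update_profile(profiles, chosen, observed_latency_ms=16.0)
```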
-
Patent number: 11836536
Abstract: Systems and methods for analyzing an event log for a plurality of instances of execution of a process to identify a bottleneck are provided. An event log for a plurality of instances of execution of a process is received, and segments executed during one or more of the plurality of instances of execution are identified from the event log. The segments represent a pair of activities of the process. For each particular segment of the identified segments, a measure of performance is calculated for each of the one or more instances of execution of the particular segment based on the event log, each of the one or more instances of execution of the particular segment is classified based on the calculated measures of performance, and one or more metrics are computed for the particular segment based on the classified one or more instances of execution of the particular segment.
Type: Grant
Filed: March 14, 2022
Date of Patent: December 5, 2023
Assignee: UiPath, Inc.
Inventors: Martijn Copier, Roeland Johannus Scheepens
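The segment-level analysis above can be sketched concretely: derive per-segment durations from the log (a simple measure of performance), classify each traversal against a threshold, and compute a metric. The log layout and the slow-fraction metric are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch: segments are (activity_a, activity_b) pairs; the metric
# computed here is the fraction of traversals classified as slow.
from collections import defaultdict

def segment_metrics(event_log, slow_threshold):
    """event_log: list of (case_id, activity, timestamp). Returns {segment: slow_fraction}."""
    traces = defaultdict(list)
    for case_id, activity, ts in sorted(event_log, key=lambda e: (e[0], e[2])):
        traces[case_id].append((activity, ts))
    durations = defaultdict(list)   # (act_a, act_b) -> [duration, ...]
    for events in traces.values():
        for (a, t1), (b, t2) in zip(events, events[1:]):
            durations[(a, b)].append(t2 - t1)   # measure of performance
    return {seg: sum(d > slow_threshold for d in ds) / len(ds)
            for seg, ds in durations.items()}

log = [(1, "receive", 0), (1, "approve", 10),
       (2, "receive", 0), (2, "approve", 50)]
metrics = segment_metrics(log, slow_threshold=30)
```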
-
Patent number: 11836522
Abstract: In one or more embodiments, one or more systems, one or more methods, and/or one or more processes may determine an identification of an application executing on an information handling system (IHS); determine a first performance profile based at least on a policy and based at least on the identification of the application; configure a processor to utilize power up to a first power level based at least on the first performance profile; determine that a user physically utilizes at least one human input device of the IHS within an amount of time transpiring; receive information indicating that the user is physically in contact with the IHS; determine a second performance profile based at least on the policy and based at least on the information; and configure the processor to utilize power up to a second power level based at least on the second performance profile.
Type: Grant
Filed: March 23, 2021
Date of Patent: December 5, 2023
Assignee: Dell Products L.P.
Inventors: Travis Christian North, Daniel Lawrence Hamlin
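The policy-driven selection above amounts to a lookup keyed on the running application and user presence. The table below is a minimal, hypothetical sketch; the application names and wattages are invented, not Dell's policy.

```python
# Hypothetical sketch: the power cap granted to the processor depends on
# which application is running and whether the user is physically present.
POLICY = {
    # (application, user_present) -> power cap in watts (illustrative values)
    ("video_editor", True): 45,
    ("video_editor", False): 28,
    ("text_editor", True): 15,
    ("text_editor", False): 10,
}

def power_cap(application, user_present):
    """Select a performance profile (here, a power cap) from the policy table."""
    return POLICY[(application, user_present)]

cap_active = power_cap("video_editor", user_present=True)
cap_idle = power_cap("video_editor", user_present=False)
```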
-
Patent number: 11829805
Abstract: A plurality of low-performance locks within a computing environment are monitored. It is identified that, during a time window, threads of one of the plurality of low-performance locks are in a lock queue for an average time that exceeds a time threshold. It is further identified that, during that same time window, the average queue depth of the one of the plurality of low-performance locks exceeds a depth threshold. The one of the plurality of low-performance locks is converted from a low-performance lock into a high-performance lock.
Type: Grant
Filed: May 27, 2021
Date of Patent: November 28, 2023
Assignee: International Business Machines Corporation
Inventor: Louis A. Rasor
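The promotion rule above requires both conditions to hold over the same window, which can be sketched directly. The threshold values and function name are illustrative assumptions.

```python
# Hypothetical sketch: a low-performance lock is promoted to high-performance
# only when, within one time window, BOTH the average queue wait and the
# average queue depth exceed their thresholds.
def should_promote(avg_wait, avg_depth, wait_threshold, depth_threshold):
    """Both conditions must hold over the same monitoring window."""
    return avg_wait > wait_threshold and avg_depth > depth_threshold

promote = should_promote(avg_wait=5.2, avg_depth=9.0,
                         wait_threshold=2.0, depth_threshold=4.0)
keep = should_promote(avg_wait=5.2, avg_depth=1.0,
                      wait_threshold=2.0, depth_threshold=4.0)
```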
-
Patent number: 11809915
Abstract: A parallel processing technique can be used to expedite reconciliation of a hierarchy of forecasts on a computer system. As one example, the computer system can receive forecasts that have a hierarchical relationship with respect to one another. The computer system can distribute the forecasts among a group of computing nodes by time point, so that all data points corresponding to the same time point in the forecasts are assigned to the same computing node. The computing nodes can receive the datasets corresponding to the time points, organize the data points in each of the datasets by forecast to generate ordered datasets, and assign the ordered datasets to processing threads. The processing threads (across the computing nodes) can then execute a reconciliation process in parallel to one another to generate reconciled values, which can be output by the computing nodes.
Type: Grant
Filed: August 2, 2023
Date of Patent: November 7, 2023
Assignee: SAS Institute Inc.
Inventors: Matthew Wayne Simpson, Caiqin Wang, Nilesh Jakhotiya, Michele Angelo Trovero
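The key idea above is that partitioning by time point makes each time point independently reconcilable. The sketch below illustrates that partitioning with a deliberately simple reconciliation rule (scale children to sum to the parent); the hierarchy, the rule, and all values are illustrative assumptions, not SAS's method.

```python
# Hypothetical sketch: group hierarchical forecasts by time point, so one
# worker holds every series' value for a given time and can reconcile it
# without touching the other time points.
def partition_by_time(forecasts):
    """forecasts: {series: [v_t0, v_t1, ...]}. Returns one dataset per time point."""
    horizon = len(next(iter(forecasts.values())))
    return [{s: vals[t] for s, vals in forecasts.items()} for t in range(horizon)]

def reconcile(point, parent, children):
    """Illustrative rule: scale children so they sum to the parent's value."""
    total = sum(point[c] for c in children)
    return {c: point[c] * point[parent] / total for c in children}

forecasts = {"total": [100.0, 200.0], "a": [30.0, 90.0], "b": [50.0, 90.0]}
per_time = partition_by_time(forecasts)       # each entry could go to a separate node
rec0 = reconcile(per_time[0], "total", ["a", "b"])
```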
-
Patent number: 11809973
Abstract: A modularized model interaction system and method of use, including an orchestrator, a set of hardware modules each including a standard set of hardware submodules with hardware-specific logic, and a set of model modules each including a standard set of model submodules with model-specific logic. In operation, the orchestrator determines a standard set of submodule calls to the standard submodules of a given hardware module and model module to implement model interaction on hardware associated with the hardware module.
Type: Grant
Filed: May 10, 2022
Date of Patent: November 7, 2023
Assignee: Grid.ai, Inc.
Inventors: Williams Falcon, Adrian Walchli, Thomas Henri Marceau Chaton, Sean Presley Narenthiran
-
Patent number: 11803418
Abstract: Systems and methods for implementing robotic process automation (RPA) in the cloud are provided. An instruction for managing an RPA robot is received at an orchestrator in a cloud computing environment from a user in a local computing environment. In response to receiving the instruction, the instruction for managing the RPA robot is effectuated.
Type: Grant
Filed: March 17, 2022
Date of Patent: October 31, 2023
Assignee: UiPath, Inc.
Inventor: Tarek Madkour
-
Patent number: 11803427
Abstract: The present invention relates to a method of contention mitigation for an operational application implemented by an embedded platform comprising a plurality of cores and a plurality of shared resources. This method comprises the steps of executing the operational application by one of the cores of the embedded platform, executing a stressor application on at least some other cores of the embedded platform in parallel with the operational application, the stressor application being composed of a set of contention tasks generating a maximum contention on interference channels, and determining contentions generated by the stressor application on the operational application.
Type: Grant
Filed: May 12, 2021
Date of Patent: October 31, 2023
Assignee: THALES
Inventors: Pierrick Lamour, Marc Fumey
-
Patent number: 11797356
Abstract: A method of synchronizing tasks in a test and measurement system includes receiving, at a client in the system, a task input, receiving, at a job manager running on a first device processor in the system, a call from the client to create a job associated with the task, returning to the client an action containing at least one job code block associated with the job, receiving a call for the action, executing the at least one job code block by at least one processor in the system, determining that the job has completed, and completing the task.
Type: Grant
Filed: September 11, 2020
Date of Patent: October 24, 2023
Assignee: Tektronix, Inc.
Inventors: Timothy E. Sauerwein, Clinton M. Alter, Sean T. Marty, Jenny Yang, Keith D. Rule
-
Patent number: 11789725
Abstract: A modular electronic warfare (EW) framework that is implemented into a first preexisting EW system with associated hardware and firmware to leverage the capabilities of the first preexisting EW system into a second, different preexisting EW system with associated hardware and firmware. The modular EW framework includes a tracking framework and a logic framework. The tracking framework is configured to receive a first set of objects from a preexisting EW system, augment the first set of objects with at least one parameter, and output a second set of objects. The logic framework is configured to receive the second set of objects from the tracking framework, implement at least one process on the second set of objects, and output a third set of objects. A multi-core processor is used to operate the modular EW framework.
Type: Grant
Filed: February 8, 2021
Date of Patent: October 17, 2023
Assignee: BAE Systems Information and Electronic Systems Integration Inc.
Inventors: Charles C. Gasdick, Daniel B. Harrison, Michael F. Roske
-
Patent number: 11789786
Abstract: Aspects of the disclosure relate to providing and maintaining efficient and effective processing of sets of work items in enterprise computing environments by optimizing distributed and parallelized batch data processing. A computing platform may initialize at least two processing workers. Subsequently, the computing platform may cause a first processing worker to perform a first query on a work queue database and initiate parallel processing of a first set of work items. Thereafter, the computing platform may cause the second processing worker to perform a second query on the work queue database and initiate parallel processing of a second set of work items. In some instances, performing the second query on the work queue database comprises reading at least one work item that was read and locked by the first processing worker.
Type: Grant
Filed: September 9, 2022
Date of Patent: October 17, 2023
Assignee: Bank of America Corporation
Inventor: Marcus Matos
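The query-and-lock pattern above can be sketched with an in-memory stand-in for the work queue database. This is a simplified illustration in which the second worker skips items the first worker locked; the "database", batch size, and function names are all hypothetical.

```python
# Hypothetical sketch: two workers drain a shared work-queue table; each
# query claims unlocked items and locks them so the other worker skips them.
def claim_batch(queue, worker_id, batch_size):
    """Query for unlocked work items and lock up to batch_size of them."""
    claimed = []
    for item in queue:
        if item["locked_by"] is None and len(claimed) < batch_size:
            item["locked_by"] = worker_id   # lock so other workers pass over it
            claimed.append(item)
    return claimed

queue = [{"id": i, "locked_by": None} for i in range(4)]
first = claim_batch(queue, "worker-1", batch_size=2)
second = claim_batch(queue, "worker-2", batch_size=2)   # sees worker-1's locks
```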
-
Patent number: 11782760
Abstract: A method for executing applications in a system comprising general hardware and reconfigurable hardware includes accessing a first execution file comprising metadata storing a first priority indicator associated with a first application, and a second execution file comprising metadata storing a second priority indicator associated with a second application. In an example, use of the reconfigurable hardware is interleaved between the first application and the second application, and the interleaving is scheduled to take into account (i) workload of the reconfigurable hardware and (ii) the first priority indicator and the second priority indicator associated with the first application and the second application, respectively. In an example, when the reconfigurable hardware is used by one of the first and second applications, the general hardware is used by another of the first and second applications.
Type: Grant
Filed: February 25, 2021
Date of Patent: October 10, 2023
Assignee: SambaNova Systems, Inc.
Inventors: Anand Misra, Arnav Goel, Qi Zheng, Raghunath Shenbagam, Ravinder Kumar
-
Patent number: 11782770
Abstract: A processor may analyze, using an AI system, an application, where the application includes one or more application modules. The processor may determine, using the AI system, that an application module is critical based on a contextual scenario. The AI system may be trained utilizing data regarding heat generation of hardware on which the application module is operating. The processor may identify, using the AI system, required resources of the hardware for the application module to function during the contextual scenario. The processor may allocate an availability of the required resources for the application module.
Type: Grant
Filed: December 2, 2020
Date of Patent: October 10, 2023
Assignee: International Business Machines Corporation
Inventors: Aaron K. Baughman, Shikhar Kwatra, Jennifer L. Szkatulski, Sarbajit K. Rakshit
-
Patent number: 11768714
Abstract: Hardware semaphores are utilized to increase the speed with which preconditions are evaluated. On an individual basis, each hardware semaphore can implement a binary semaphore or a counting semaphore. Collections of hardware semaphores can be chained together to implement a chain semaphore that can support multiple conditionals. In addition, hardware semaphores can not only generate an interrupt but also generate commands, such as to other semaphores. The implementation of a chain semaphore spanning multiple hardware semaphores can be performed by a compiler at compile time or at run time. An integrated circuit chip can comprise multiple execution units, such as processing cores, and individual ones of the execution units can be associated with multiple hardware semaphores, such as in the form of hardware semaphore arrays. A dedicated network-on-chip enables hardware semaphore communication.
Type: Grant
Filed: June 22, 2021
Date of Patent: September 26, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Xiaoling Xu, Timothy Hume Heil, Deepak Goel
-
Patent number: 11768704
Abstract: Systems and methods for intelligently scheduling a pod in a cluster of worker nodes are described. A scheduling service may account for previous scheduling attempts by considering the time and node (scheduling data) at which a preceding attempt to schedule the pod was made, and factoring this information into the scheduling decision. Upon making a determination of a node on which to attempt to schedule the pod, the scheduling data may be updated with the time and node ID of the determined node, and the pod may be scheduled on the determined node. In response to determining that the pod has been evicted from the determined node, the above process may continue iteratively until the pod has been successfully scheduled.
Type: Grant
Filed: April 28, 2021
Date of Patent: September 26, 2023
Assignee: Red Hat, Inc.
Inventors: Swat Sehgal, Marcel Apfelbaum
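The iterative attempt-record-retry loop above can be sketched compactly. This is an illustrative approximation only; the node names, the round-robin candidate order, and the `is_evicted` predicate are invented stand-ins, not Red Hat's scheduler logic.

```python
# Hypothetical sketch: each attempt records (attempt_index, node) as
# scheduling data; if the pod is evicted, the loop moves on to another node.
import itertools

def schedule_pod(nodes, is_evicted, max_attempts=10):
    """Iteratively place the pod until an attempt sticks, keeping a history."""
    history = []                                  # scheduling data: (attempt, node)
    candidates = itertools.cycle(nodes)
    for attempt in range(max_attempts):
        node = next(candidates)
        history.append((attempt, node))           # update scheduling data
        if not is_evicted(node):
            return node, history                  # successfully scheduled
    return None, history

ok_node, history = schedule_pod(["n1", "n2", "n3"],
                                is_evicted=lambda n: n == "n1")
```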
-
Patent number: 11762691
Abstract: An information processing system includes: a storage device configured to store first clock time scheduled for execution of a task; and a processing device configured to: execute a task at second clock time earlier than first clock time scheduled for execution of the task; control the executing of the task not to execute the task at the first clock time when data used in the task is not updated in a period from the second clock time to the first clock time; and control the executing of the task to re-execute the task either at the first clock time or at third clock time earlier than the first clock time when the data is updated in the period from the second clock time to the first clock time.
Type: Grant
Filed: September 8, 2020
Date of Patent: September 19, 2023
Assignee: FUJITSU LIMITED
Inventors: Eiichi Takahashi, Miwa Okabayashi, Akira Shiba, Naoki Nishiguchi, Hisatoshi Yamaoka, Kota Itakura, Takafumi Onishi, Tatsuro Matsumoto
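The speculative-execution rule above (run early; re-run only if the input data changed before the scheduled time) can be sketched as follows. The function name, timestamps, and task are illustrative assumptions, not Fujitsu's system.

```python
# Hypothetical sketch: a task scheduled for t_scheduled is run early at
# t_early; it is re-executed only when its input data was updated in the
# window [t_early, t_scheduled], otherwise the early result stands.
def run_speculatively(task, data, t_early, t_scheduled, data_updated_at):
    """Return (result, executions): re-run only if data changed in the window."""
    result = task(data)                 # speculative early execution at t_early
    executions = 1
    if t_early <= data_updated_at <= t_scheduled:
        result = task(data)             # data changed: re-execute before/at t_scheduled
        executions += 1
    return result, executions

# Data updated before the early run: the early result is simply reused.
res, runs = run_speculatively(lambda d: d * 2, 21, t_early=100,
                              t_scheduled=200, data_updated_at=50)
```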