Process Scheduling Patents (Class 718/102)
  • Patent number: 11989590
    Abstract: Provided are a more efficient resource allocation method and system using a genetic algorithm (GA). The present technology includes a method for allocating resources to a production process comprising a plurality of processes, the method including: allocating priorities to the plurality of processes; selecting processes that are executable at a first time and to which the necessary resources can be allocated; allocating the necessary resources to the selected processes in descending order of priority; selecting processes that are executable at a second time later than the first time and to which the necessary resources can be allocated; and allocating the necessary resources to those processes in descending order of priority. In the GA gene representation, genes do not carry direct allocation information but instead carry priorities that determine the order of allocation.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: May 21, 2024
    Assignee: SYNAPSE INNOVATION INC.
    Inventors: Kazuya Izumikawa, Shigeo Fujimoto
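The priority-driven allocation loop described in the abstract can be sketched as follows. This is a simplified illustration (field names, the fixed resource pool, and the lack of resource release are assumptions, not the patented method); in the GA, each gene would encode the `priority` values.

```python
def allocate_by_priority(processes, resources, times):
    """Greedy sketch: at each time step, grant resources to ready,
    not-yet-scheduled processes in descending priority order."""
    schedule = []
    for t in times:
        scheduled_ids = {pid for _, pid in schedule}
        ready = [p for p in processes
                 if p["ready_at"] <= t
                 and p["id"] not in scheduled_ids
                 and p["needs"] <= resources]
        for p in sorted(ready, key=lambda p: p["priority"], reverse=True):
            if p["needs"] <= resources:
                resources -= p["needs"]       # allocate; release not modeled
                schedule.append((t, p["id"]))
    return schedule
```

A GA would then evolve the priority vector and score each candidate by the quality of the resulting schedule.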
  • Patent number: 11973845
    Abstract: Managing organization disconnections from a shared resource of a communication platform is described. In a sharing approval repository of a communication platform, a shared resource can be associated with a host organization identifier and a non-host organization identifier. In an example, in response to receiving, from a user computing device associated with the host organization identifier or the non-host organization identifier, a resource disconnection request comprising a disconnecting organization identifier and a resource identifier associated with the shared resource, the sharing approval repository can be updated to add a disconnection indication for the resource identifier in association with the disconnecting organization identifier.
    Type: Grant
    Filed: November 6, 2021
    Date of Patent: April 30, 2024
    Assignee: Salesforce, Inc.
    Inventors: Christopher Sullivan, Myles Grant, Michael Demmer, Shanan Delp, Sri Vasamsetti
  • Patent number: 11972267
    Abstract: Tasks are selected for hibernation by recording user preferences for tasks having no penalty for hibernation and sleep, and assigning thresholds for battery power at which tasks are selected for at least one of hibernation and sleep. The assigning of the thresholds for battery power includes considering the user's current usage of hardware resources and battery health per battery segment. A penalty score is determined for tasks based upon the user preferences for tasks having no penalty, and task performance including at least one of frequency of utilization, memory utilization, task dependency characteristics, and task memory hierarchy. The penalty score is a value combining both the user preference and the task performance. Tasks can then be put into at least one of hibernation mode and sleep mode, as dictated by their penalty scores, when battery power falls below the thresholds.
    Type: Grant
    Filed: October 4, 2022
    Date of Patent: April 30, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Madhu Pavan Kothapally, Rajesh Kumar Pirati, Bharath Sakthivel, Sarika Sinha
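A minimal sketch of the penalty-score scheme above; the weights, metric names, and the ascending-order selection rule are illustrative assumptions, not values from the patent.

```python
def penalty_score(no_penalty_pref, frequency_util, memory_util):
    """Combine the user's no-penalty preference with task performance
    into one score; weights are hypothetical."""
    if no_penalty_pref:
        return 0.0                      # user marked this task penalty-free
    return 0.6 * frequency_util + 0.4 * memory_util

def select_for_hibernation(tasks, battery_pct, threshold=20.0):
    """Once battery drops below the threshold, order tasks so the
    lowest-penalty tasks are hibernated first."""
    if battery_pct >= threshold:
        return []
    return sorted(tasks, key=lambda t: penalty_score(**t["metrics"]))
```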
  • Patent number: 11971824
    Abstract: Disclosed is a method for enhancing memory utilization and throughput of a computing platform when training a deep neural network (DNN). The critical features of the method include: calculating a memory size for every operation in a computational graph; storing the operations of the computational graph in multiple groups, with the operations in each group being executable in parallel and having a total memory size less than a memory threshold of a computational device; sequentially selecting a group and updating a prefetched group buffer; and simultaneously executing the group and prefetching data for a group in the prefetched group buffer to the corresponding computational device when the prefetched group buffer is updated. Because of group execution and data prefetch, memory utilization is optimized and throughput is significantly increased, eliminating out-of-memory and thrashing issues.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: April 30, 2024
    Assignee: AETHERAI IP HOLDING LLC
    Inventors: Chi-Chung Chen, Wei-Hsiang Yu, Chao-Yuan Yeh
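The grouping step above can be sketched as a sequential packing pass: operations accumulate into a group until the device memory threshold would be exceeded, then a new group starts (execution of group i would overlap with prefetching group i+1). Field names are illustrative.

```python
def group_operations(ops, memory_threshold):
    """Pack graph operations into sequential groups whose total memory
    stays under the device threshold; ops within one group are assumed
    to be executable in parallel."""
    groups, current, used = [], [], 0
    for op in ops:
        if current and used + op["mem"] > memory_threshold:
            groups.append(current)          # flush the full group
            current, used = [], 0
        current.append(op)
        used += op["mem"]
    if current:
        groups.append(current)
    return groups
```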
  • Patent number: 11966789
    Abstract: Systems and methods for optimal load distribution and data processing of a plurality of files in anti-malware solutions are provided herein. In some embodiments, the system includes a plurality of node processors and a control processor programmed to: receive a plurality of files used for malware analysis and training of anti-malware ML models; separate the plurality of files into a plurality of subsets based on the byte size of each file, such that processing each subset produces similar workloads amongst all available node processors; distribute the subsets amongst all available node processors such that each node processor processes its respective subset of files in parallel and within a similar timeframe as the other node processors; and receive a report of performance and/or anti-malware processing results of the subset of files from each node processor.
    Type: Grant
    Filed: April 27, 2022
    Date of Patent: April 23, 2024
    Assignee: UAB 360 IT
    Inventor: Mantas Briliauskas
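One plausible reading of the byte-size balancing step is a longest-processing-time greedy partition; this is a standard heuristic used as an illustration here, not necessarily the patented method.

```python
import heapq

def partition_by_size(files, n_nodes):
    """Assign each (name, size) file, largest first, to the node with the
    smallest accumulated byte load so per-node workloads stay similar."""
    heap = [(0, i, []) for i in range(n_nodes)]   # (load, node index, bucket)
    heapq.heapify(heap)
    for name, size in sorted(files, key=lambda f: f[1], reverse=True):
        load, i, bucket = heapq.heappop(heap)      # lightest-loaded node
        bucket.append(name)
        heapq.heappush(heap, (load + size, i, bucket))
    return [bucket for _, _, bucket in sorted(heap, key=lambda x: x[1])]
```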
  • Patent number: 11966619
    Abstract: An apparatus for executing a software program, comprising at least one hardware processor configured for: identifying in a plurality of computer instructions at least one remote memory access instruction and a following instruction following the at least one remote memory access instruction; executing after the at least one remote memory access instruction a sequence of other instructions, where the sequence of other instructions comprises a return instruction to execute the following instruction; and executing the following instruction; wherein executing the sequence of other instructions comprises executing an updated plurality of computer instructions produced by at least one of: inserting into the plurality of computer instructions the sequence of other instructions or at least one flow-control instruction to execute the sequence of other instructions; and replacing the at least one remote memory access instruction with at least one non-blocking memory access instruction.
    Type: Grant
    Filed: September 17, 2021
    Date of Patent: April 23, 2024
    Assignee: Next Silicon Ltd
    Inventors: Elad Raz, Yaron Dinkin
  • Patent number: 11968098
    Abstract: A method of initiating a KPI backfill operation for a cellular network based on detecting a network data anomaly, where the method includes receiving, by an anomaly detection and backfill engine (ADBE) executed by a computing device, a data quality metric that is based on a KPI of the cellular network; detecting, by the ADBE, the network data anomaly based on the data quality metric being more than a threshold amount different than a predicted value for the data quality metric, where the network data anomaly indicates that at least a portion of a data stream from which the KPI is calculated was unavailable for a previous iteration of the KPI; and providing, by the ADBE and based on detecting the network data anomaly, a backfill command to a backfill processing pipeline to perform the backfill operation by reaggregating the KPI when the portion of the data stream becomes available.
    Type: Grant
    Filed: March 31, 2023
    Date of Patent: April 23, 2024
    Assignee: T-Mobile Innovations LLC
    Inventors: Vikas Ranjan, Raymond Wu
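The anomaly-then-backfill control flow described in the abstract can be sketched in a few lines; the ADBE's actual prediction model and command format are not specified here, so the names and queue shape below are hypothetical.

```python
def detect_anomaly(observed, predicted, threshold):
    """Flag a network-data anomaly when the data-quality metric deviates
    from its predicted value by more than the threshold."""
    return abs(observed - predicted) > threshold

def maybe_backfill(observed, predicted, threshold, backfill_queue, kpi_id):
    """On anomaly, enqueue a backfill command so the KPI is reaggregated
    once the missing portion of the data stream becomes available."""
    if detect_anomaly(observed, predicted, threshold):
        backfill_queue.append({"cmd": "backfill", "kpi": kpi_id})
        return True
    return False
```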
  • Patent number: 11967321
    Abstract: Implementations set forth herein relate to an automated assistant that can interact with applications that may not have been pre-configured for interfacing with the automated assistant. The automated assistant can identify content of an application interface of the application to determine synonymous terms that a user may speak when commanding the automated assistant to perform certain tasks. Speech processing operations employed by the automated assistant can be biased towards these synonymous terms when the user is accessing an application interface of the application. In some implementations, the synonymous terms can be identified in a responsive language of the automated assistant when the content of the application interface is being rendered in a different language. This can allow the automated assistant to operate as an interface between the user and certain applications that may not be rendering content in a native language of the user.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: April 23, 2024
    Assignee: GOOGLE LLC
    Inventors: Joseph Lange, Abhanshu Sharma, Adam Coimbra, Gökhan Bakir, Gabriel Taubman, Ilya Firman, Jindong Chen, James Stout, Marcin Nowak-Przygodzki, Reed Enger, Thomas Weedon Hume, Vishwath Mohan, Jacek Szmigiel, Yunfan Jin, Kyle Pedersen, Gilles Baechler
  • Patent number: 11966756
    Abstract: The present disclosure generally relates to dataflow applications. In aspects, a system is disclosed for scheduling execution of feature services within a distributed data flow service (DDFS) framework. The DDFS framework includes a main system-on-chip (SoC), at least one sensing service, and a plurality of feature services. Each of the plurality of feature services includes a common pattern with an algorithm for processing the input data, a feature for encapsulating the algorithm into a generic wrapper rendering the algorithm compatible with other algorithms, a feature interface for encapsulating a feature output into a generic interface allowing generic communication with other feature services, and a configuration file including a scheduling policy to execute the feature services. For each of the plurality of feature services, processor(s) schedule the execution of a given feature service using the scheduling policy and execute it on the standard and/or accelerator cores.
    Type: Grant
    Filed: July 7, 2022
    Date of Patent: April 23, 2024
    Assignee: Aptiv Technologies AG
    Inventors: Vinod Aluvila, Miguel Angel Aguilar
  • Patent number: 11968248
    Abstract: Methods are provided. A method includes announcing to a network meta information describing each of a plurality of distributed data sources. The method further includes propagating the meta information amongst routing elements in the network. The method also includes inserting into the network a description of distributed datasets that match a set of requirements of the analytics task. The method additionally includes delivering, by the routing elements, a copy of the analytics task to locations of respective ones of the plurality of distributed data sources that include the distributed datasets that match the set of requirements of the analytics task.
    Type: Grant
    Filed: October 19, 2022
    Date of Patent: April 23, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Bong Jun Ko, Theodoros Salonidis, Rahul Urgaonkar, Dinesh C. Verma
  • Patent number: 11962659
    Abstract: Metrics that characterize one or more computing devices are received. A time value associated with the performance of the one or more computing devices is determined based on the received metrics. A first scheduling parameter is determined based on the time value, wherein the first scheduling parameter is associated with a first discovery process that is associated with at least a portion of the one or more computing devices. The first discovery process is then executed according to the first scheduling parameter.
    Type: Grant
    Filed: July 17, 2023
    Date of Patent: April 16, 2024
    Assignee: ServiceNow, Inc.
    Inventors: Steven W. Francis, Sai Saketh Nandagiri
  • Patent number: 11960940
    Abstract: A FaaS system comprises a plurality of execution nodes. A software package comprising a function to be executed in the FaaS system is received. Data location information related to the data that the function will access during execution is obtained. Based on the data location information, an execution node in which the function is to be executed is determined. The function is loaded into the determined execution node and executed there.
    Type: Grant
    Filed: May 29, 2018
    Date of Patent: April 16, 2024
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Zoltán Turányi, Dániel Géhberger
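The locality-aware placement step can be sketched as "pick the node that already holds the most bytes the function will read". The data structures below are assumptions for illustration; the patent does not prescribe them.

```python
def pick_execution_node(nodes, data_locations, function_data):
    """Choose the execution node holding the most of the data the
    function will access during execution."""
    def local_bytes(node):
        return sum(size for item, size in function_data.items()
                   if node in data_locations.get(item, ()))
    return max(nodes, key=local_bytes)
```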
  • Patent number: 11954352
    Abstract: A request to perform a first operation in a system that stores deduplicated data can be received. The system can include a data block stored at multiple logical addresses, each referencing the data block. A reference count can be associated with the data block, denoting the number of logical addresses referencing it. Processing can be performed to service the request and perform the first operation, wherein the processing can include: acquiring a non-exclusive lock for a page that includes the reference count of the data block; storing, in a metadata log while holding the non-exclusive lock on the page, an entry to decrement the reference count of the data block; and releasing the non-exclusive lock on the page.
    Type: Grant
    Filed: June 29, 2022
    Date of Patent: April 9, 2024
    Assignee: Dell Products L.P.
    Inventors: Vladimir Shveidel, Uri Shabi
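The key idea, appending the decrement to a metadata log under a shared lock instead of mutating the page in place, can be sketched as below. Python's `threading.Lock` stands in for a true reader/shared lock, and the class layout is an illustrative assumption.

```python
import threading

class RefcountPage:
    """Decrements are logged, not applied, so requests holding only a
    non-exclusive page lock need not serialize on the counter itself."""
    def __init__(self, refcounts):
        self.refcounts = dict(refcounts)
        self.metadata_log = []
        self._lock = threading.Lock()   # stand-in for a shared/reader lock

    def log_decrement(self, block_id):
        with self._lock:                # "non-exclusive" lock on the page
            self.metadata_log.append(("dec", block_id))

    def apply_log(self):
        """A later flusher replays the log with exclusive access."""
        for op, block_id in self.metadata_log:
            if op == "dec":
                self.refcounts[block_id] -= 1
        self.metadata_log.clear()
```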
  • Patent number: 11953972
    Abstract: Selective privileged container augmentation is provided. A target group of edge devices is selected from a plurality of edge devices to run a plurality of child tasks comprising a pending task by mapping edge device tag attributes of the plurality of edge devices to child task tag attributes of the plurality of child tasks. A privileged container corresponding to the pending task is installed in each edge device of the target group to monitor execution of a child task by a given edge device of the target group. A privileged container installation tag that corresponds to the privileged container is added to an edge device tag attribute of each edge device of the target group having the privileged container installed. A child task of the plurality of child tasks comprising the pending task is sent to a selected edge device in the target group to run the child task.
    Type: Grant
    Filed: April 6, 2022
    Date of Patent: April 9, 2024
    Assignee: International Business Machines Corporation
    Inventors: Yue Wang, Xin Peng Liu, Wei Wu, Liang Wang, Biao Chai
  • Patent number: 11946368
    Abstract: A system to determine a contamination level of a formation fluid, the system including a formation tester tool to be positioned in a borehole, wherein the borehole has a mixture of the formation fluid and a drilling fluid and the formation tester tool includes a sensor to detect time series measurements from a plurality of sensor channels. The system includes a processor to dimensionally reduce the time series measurements to generate a set of reduced measurement scores in a multi-dimensional measurement space and determine an end member in the multi-dimensional measurement space based on the set of reduced measurement scores, wherein the end member comprises a position in the multi-dimensional measurement space that corresponds with a predetermined fluid concentration. The processor also determines the contamination level of the formation fluid at a time point based on the set of reduced measurement scores and the end member.
    Type: Grant
    Filed: December 16, 2022
    Date of Patent: April 2, 2024
    Assignee: Halliburton Energy Services, Inc.
    Inventors: Bin Dai, Dingding Chen, Christopher Michael Jones
  • Patent number: 11949566
    Abstract: Methods, systems, and computer readable media for testing a system under test (SUT). An example system includes a distributed processing node emulator configured for emulating a multi-processing node distributed computing system using a processing node communications model and generating intra-processing node communications and inter-processing node communications in the multi-processing node distributed computing system. At least a portion of the inter-processing node communications comprises one or more messages communicated with the SUT by way of a switching fabric. The system includes a test execution manager configured for managing the distributed processing node emulator to execute a pre-defined test case, monitoring the SUT, and outputting a test report based on monitoring the SUT during execution of the pre-defined test case.
    Type: Grant
    Filed: September 6, 2022
    Date of Patent: April 2, 2024
    Assignee: KEYSIGHT TECHNOLOGIES, INC.
    Inventors: Winston Wencheng Liu, Dan Mihailescu, Matthew R. Bergeron
  • Patent number: 11947454
    Abstract: Apparatuses, systems, and methods for controlling cache allocations in a configurable combined private and shared cache in a processor-based system. The processor-based system is configured to receive a cache allocation request to allocate a line in a shared cache structure, which may further include a client identification (ID). The cache allocation request and the client ID can be compared to a sub-non-uniform memory access (sub-NUMA) bit mask and a client allocation bit mask to generate a cache allocation vector. The sub-NUMA bit mask may have been programmed to indicate that processing cores associated with one sub-NUMA region are available while cores associated with other sub-NUMA regions are not, and the client allocation bit mask may have been programmed to indicate which processing cores are available to that client.
    Type: Grant
    Filed: June 7, 2022
    Date of Patent: April 2, 2024
    Assignee: Ampere Computing LLC
    Inventors: Richard James Shannon, Stephan Jean Jourdan, Matthew Robert Erler, Jared Eric Bendt
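One plausible reading of the mask combination is a bitwise AND of the sub-NUMA mask with the requesting client's allocation mask: a set bit in the result marks a cache slice the line may land in. Mask widths and client names below are hypothetical.

```python
def cache_allocation_vector(sub_numa_mask, client_masks, client_id):
    """Intersect the sub-NUMA region's core mask with the client's
    allocation mask to get the cache allocation vector."""
    return sub_numa_mask & client_masks[client_id]
```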
  • Patent number: 11941722
    Abstract: A kernel comprising at least one dynamically configurable parameter is submitted by a processor. The kernel is to be executed at a later time. Data is received after the kernel has been submitted. The at least one dynamically configurable parameter of the kernel is updated based on the data. The kernel having the at least one updated dynamically configurable parameter is executed after the at least one dynamically configurable parameter has been updated.
    Type: Grant
    Filed: October 13, 2021
    Date of Patent: March 26, 2024
    Assignee: Mellanox Technologies, Ltd.
    Inventors: Sayantan Sur, Stephen Anthony Bernard Jones, Shahaf Shuler
  • Patent number: 11941494
    Abstract: Systems and methods for developing enterprise machine learning (ML) models within a notebook application are described. The system may include a notebook application, a packaging service, and an online ML platform. The method may include initiating a runtime environment within the notebook application, creating a plurality of files based on a notebook recipe template, generating a prototype model within the data science notebook application by accessing the plurality of files through the runtime environment, generating a production recipe including the runtime environment and the plurality of files, and publishing the production recipe to the online ML platform.
    Type: Grant
    Filed: May 13, 2019
    Date of Patent: March 26, 2024
    Assignee: ADOBE INC.
    Inventors: Pari Sawant, Shankar Srinivasan, Nirmal Mani
  • Patent number: 11934873
    Abstract: A first processing unit, such as a graphics processing unit (GPU), includes pipelines that execute commands and a scheduler to schedule one or more first commands for execution by one or more of the pipelines. The one or more first commands are received from a user mode driver in a second processing unit such as a central processing unit (CPU). The scheduler schedules one or more second commands for execution in response to completing execution of the one or more first commands and without notifying the second processing unit. In some cases, the first processing unit includes a direct memory access (DMA) engine that writes blocks of information from the first processing unit to a memory. The one or more second commands program the DMA engine to write a block of information including results generated by executing the one or more first commands.
    Type: Grant
    Filed: September 16, 2022
    Date of Patent: March 19, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Rex Eldon McCrary
  • Patent number: 11928760
    Abstract: Techniques are described for automatically detecting and accommodating state changes in a computer-generated forecast. In one or more embodiments, a representation of a time-series signal is generated within volatile and/or non-volatile storage of a computing device. The representation may be generated in such a way as to approximate the behavior of the time-series signal across one or more seasonal periods. Once generated, a set of one or more state changes within the representation of the time-series signal is identified. Based at least in part on at least one state change in the set of one or more state changes, a subset of values from the sequence of values is selected to train a model. An analytical output is then generated, within volatile and/or non-volatile storage of the computing device, using the trained model.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: March 12, 2024
    Assignee: Oracle International Corporation
    Inventors: Dustin Garvey, Uri Shaft, Sampanna Shahaji Salunke, Lik Wong
  • Patent number: 11930080
    Abstract: The present application discloses a vehicle-mounted heterogeneous network collaborative task offloading method comprising the following steps: at the vehicle terminal, calculating the communication delay incurred when a vehicle requests a cache from a smart lamp post; taking the maximum communication delay across all the caches as the communication delay between the vehicle and the smart lamp post network, and determining whether this delay is less than the time for the vehicle to send a request to a cloud center; if so, offloading the task to the smart lamp post network, and otherwise offloading it to the cloud center; and, at the smart lamp post terminal, taking the profit of each individual smart lamp post as an index, dividing the smart lamp post network into a plurality of coalitions, and, with coalition profit maximization as the optimization objective, optimizing the combination of smart lamp posts within each coalition.
    Type: Grant
    Filed: October 13, 2023
    Date of Patent: March 12, 2024
    Assignee: HUNAN UNIVERSITY
    Inventors: Hongbo Jiang, Zhu Xiao, Kehua Yang, Daibo Liu
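The offloading decision rule reduces to a single comparison: the worst-case cache delay across the lamp-post network versus the delay of going to the cloud. A minimal sketch, with hypothetical target labels:

```python
def choose_offload_target(cache_delays, cloud_delay):
    """Offload to the smart-lamp-post network only when its maximum
    per-cache communication delay beats the cloud-center request time."""
    lamp_post_delay = max(cache_delays)
    return "lamp_post" if lamp_post_delay < cloud_delay else "cloud"
```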
  • Patent number: 11928612
    Abstract: The system obtains a first acyclic graph including multiple nodes and edges connecting the multiple nodes. A process to create a weave of the first acyclic graph produces a matching weave when executed on the first acyclic graph by different computing devices. An addition of a node to the first acyclic graph produces a second acyclic graph. The addition of the node to the first acyclic graph changes the weave of the first acyclic graph. The system obtains a process to reach a global consensus among the multiple computing devices. The process indicates a criterion to satisfy prior to reaching the global consensus and determines whether the multiple computing devices in the network satisfy the criterion. Upon determining that the criterion is satisfied, the system adds a finalize node to the first acyclic graph to obtain a third acyclic graph. A weave of the third acyclic graph cannot change.
    Type: Grant
    Filed: July 20, 2023
    Date of Patent: March 12, 2024
    Assignee: SpiderOak, Inc.
    Inventor: Jonathan Andrew Crockett Moore
  • Patent number: 11907194
    Abstract: The disclosed systems and methods can comprise executing a modeling sequence comprising a first model and a second model, obtaining a first result from the first model being used as an input for the second model to obtain a second result, hashing data representative of a first configuration and a second configuration to create a first hash and a second hash, respectively, storing the first result in a first location and the second result in a second location, receiving one or more configuration changes to the second model thereby creating a third configuration associated with the second model, hashing data representative of the third configuration to create a third hash, receiving a request to rerun the modeling sequence, determining that the first configuration is associated with the first model, and providing the first result to the second model without rerunning the first model.
    Type: Grant
    Filed: November 3, 2021
    Date of Patent: February 20, 2024
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventors: Bryan Der, Steven Devisch, Justin Essert
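The hash-and-reuse scheme above can be sketched as a configuration-keyed result cache: a model is rerun only when its configuration hash changes, and upstream results are fed forward unchanged. The dictionary layout and `run` callable are illustrative assumptions.

```python
import hashlib
import json

def config_hash(config):
    """Stable hash of a model configuration (JSON-normalized)."""
    return hashlib.sha256(
        json.dumps(config, sort_keys=True).encode()).hexdigest()

def run_sequence(models, cache):
    """Rerun only models whose configuration hash changed; otherwise
    reuse the stored result as input to the next model."""
    result = None
    for model in models:
        key = (model["name"], config_hash(model["config"]))
        if key not in cache:
            cache[key] = model["run"](result)
        result = cache[key]
    return result
```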
  • Patent number: 11907756
    Abstract: A graphics processing apparatus includes at least a memory device and an execution unit coupled to the memory device. The memory device can store a command buffer with at least one command that is dependent on completion of at least one other command. The command buffer can include a jump command that causes a jump to a location in the command buffer to identify any unscheduled command. The execution unit is to jump to a location in the command buffer based on execution of the jump command. The execution unit is to perform one or more jumps to one or more locations in the command buffer to attempt to schedule a command with a dependency on completion of at least one other command, until that command is scheduled.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: February 20, 2024
    Assignee: Intel Corporation
    Inventors: Bartosz Dunajski, Brandon Fliflet, Michal Mrozek
  • Patent number: 11899799
    Abstract: A system performs an application update process based on security management information, which includes meta information for each of a plurality of security services. The application update process adds to the application one or more security services, including a security service that reduces the security risk of an application whose distributed microservices are related in a graph structure.
    Type: Grant
    Filed: September 23, 2020
    Date of Patent: February 13, 2024
    Assignee: HITACHI, LTD.
    Inventors: Jens Doenhoff, Nodoka Mimura, Yoshiaki Isobe
  • Patent number: 11900123
    Abstract: A system includes a processing unit such as a GPU that itself includes a command processor configured to receive instructions for execution from a software application. A processor pipeline coupled to the processing unit includes a set of parallel processing units for executing the instructions in sets. A set manager is coupled to one or more of the processor pipeline and the command processor. The set manager includes at least one table for storing a set start time, a set end time, and a set execution time. The set manager determines an execution time for one or more sets of instructions of a first window of sets of instructions submitted to the processor pipeline. Based on the execution time of the one or more sets of instructions, a set limit is determined and applied to one or more sets of instructions of a second window subsequent to the first window.
    Type: Grant
    Filed: December 13, 2019
    Date of Patent: February 13, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Alexander Fuad Ashkar, Manu Rastogi, Harry J. Wise
  • Patent number: 11900248
    Abstract: Methods, apparatus, and processor-readable storage media for correlating data center resources in a multi-tenant execution environment using machine learning techniques are provided herein. An example computer-implemented method includes obtaining multiple data streams pertaining to one or more data center resources in at least one multi-tenant executing environment; correlating one or more portions of the multiple data streams by processing at least a portion of the multiple data streams using at least one multi-tenant-capable search engine; determining one or more anomalies within the multiple data streams by processing the one or more correlated portions of the multiple data streams using a machine learning-based anomaly detection engine; and performing at least one automated action based at least in part on the one or more determined anomalies.
    Type: Grant
    Filed: October 14, 2020
    Date of Patent: February 13, 2024
    Assignee: Dell Products L.P.
    Inventors: James S. Watt, Bijan K. Mohanty, Bhaskar Todi
  • Patent number: 11900149
    Abstract: Data queries that are agnostic to any particular data source may include a data source alias. The data source alias may be replaced with a data source identifier to obtain a data query configured for a target data source. Data processing jobs may be agnostic to any particular data processing platform. A data processing job may include a data processing task that is agnostic to any particular data processing platform. A code library may provide platform-specific code configured to implement a data processing task on a data processing platform. A data query configured for a particular data source and a data processing task configured for a particular data processing platform may be used to create a data processing job. Configurations that restrict execution of a data processing job to execution via an interactive development environment may be removed to allow its execution directly at the data processing platform itself.
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: February 13, 2024
    Assignee: Capital One Services, LLC
    Inventors: Timothy Haggerty, Yuting Zhou, Venu Nannapaneni, Pravin Nair, Hussein Ali Khalif Samao
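The alias-to-identifier substitution that makes a query source-agnostic can be sketched in one function. The `@alias` syntax and the names below are purely illustrative assumptions, not the patent's notation.

```python
def resolve_query(query, alias_map):
    """Replace data-source aliases (written here as @alias) with concrete
    data source identifiers so the same query text can target any backend."""
    for alias, source_id in alias_map.items():
        query = query.replace("@" + alias, source_id)
    return query
```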
  • Patent number: 11893417
    Abstract: A processing request management apparatus includes: an estimation unit which estimates the time required to process each of a plurality of processing requests for which desired processing completion times are designated; and a determination unit which determines an execution order for the plurality of processing requests such that the sum of the delays of the estimated processing completion times, based on the required times, relative to the desired processing completion times is minimized, increasing the likelihood that the restrictions on the processing requests are satisfied.
    Type: Grant
    Filed: November 5, 2019
    Date of Patent: February 6, 2024
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Masahiro Kobayashi, Shigeaki Harada
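The objective above, minimizing total lateness past each request's desired completion time, can be illustrated with a brute-force search over orderings. This sketch only demonstrates the objective function on a tiny batch; the patent's solver for larger inputs would necessarily be smarter than enumeration.

```python
from itertools import permutations

def best_order(requests):
    """Return the execution order minimizing the summed delay (time past
    each request's deadline); exhaustive, so only for small batches."""
    def total_delay(order):
        t, delay = 0, 0
        for req in order:
            t += req["estimated_time"]
            delay += max(0, t - req["deadline"])
        return delay
    return min(permutations(requests), key=total_delay)
```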
  • Patent number: 11893502
    Abstract: A system assigns experts of a mixture-of-experts artificial intelligence model to processing devices in an automated manner. The system includes an orchestrator component that maintains priority data that stores, for each of a set of experts, and for each of a set of execution parameters, ranking information that ranks different processing devices for the particular execution parameter. In one example, for the execution parameter of execution speed, and for a first expert, the priority data indicates that a central processing unit (“CPU”) executes the first expert faster than a graphics processing unit (“GPU”). In this example, for the execution parameter of power consumption, and for the first expert, the priority data indicates that a GPU uses less power than a CPU. The priority data stores such information for one or more processing devices, one or more experts, and one or more execution characteristics.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: February 6, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Nicholas Malaya, Nuwan Jayasena
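The priority-data lookup described above — per-expert, per-execution-parameter rankings of processing devices — can be sketched in Python. The schema and names below are assumptions for illustration, not the patent's data structures:

```python
# Assumed schema: priority_data[expert][parameter] ranks devices best-first.
priority_data = {
    "expert_0": {"speed": ["CPU", "GPU"], "power": ["GPU", "CPU"]},
    "expert_1": {"speed": ["GPU", "CPU"], "power": ["GPU", "CPU"]},
}

def assign(expert, parameter, available):
    # Pick the highest-ranked device, for the chosen execution
    # parameter, that is actually available.
    for device in priority_data[expert][parameter]:
        if device in available:
            return device
    raise LookupError("no ranked device available")
```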
  • Patent number: 11892970
    Abstract: A method for data processing and a processor chip are provided. The method includes: acquiring a first relationship instruction; executing at least one first computing instruction acquired before the first relationship instruction, based on the first relationship instruction; and, in response to completing execution of the at least one first computing instruction, sending acknowledgment information based on the first relationship instruction, to cause a second coprocessor that receives the acknowledgment information to revert, based on that information, to a state of acquiring a second computing instruction following a second relationship instruction the second coprocessor has acquired.
    Type: Grant
    Filed: July 19, 2022
    Date of Patent: February 6, 2024
    Assignee: KUNLUNXIN TECHNOLOGY (BEIJING) COMPANY
    Inventors: Jing Wang, Jiaxin Shi, Hanlin Xie, Xiaozhang Gong
  • Patent number: 11886846
    Abstract: A method for executing computation, a computing device, a computing system, and a storage medium are provided. The method includes: confirming, via a compiler, whether there is a call instruction related to a thread block modification request in a kernel function to be compiled; in response to confirming that there is the call instruction related to the thread block modification request in the kernel function to be compiled, determining a corresponding program segment associated with the call instruction; configuring a required thread block and thread local register for the corresponding program segment; and inserting a control instruction into the corresponding program segment to enable the thread block configured for the corresponding program segment to execute relevant computation of the corresponding program segment, and an unconfigured thread block not to execute the relevant computation. The disclosure can improve overall performance, make coding and maintenance easier, and reduce the error rate of code.
    Type: Grant
    Filed: March 4, 2022
    Date of Patent: January 30, 2024
    Assignee: Shanghai Biren Technology Co., Ltd
    Inventors: HaiChuan Wang, Huayuan Tian, Long Chen
  • Patent number: 11886960
    Abstract: Parallel training of a machine learning model on a computerized system may be provided. Computing tasks can be assigned to multiple workers of a system. A method may include accessing training data. A parallel training of the machine learning model can be started based on the accessed training data, so as for the training to be distributed through a first number K of workers, K > 1. Responsive to detecting a change in a temporal evolution of a quantity indicative of a convergence rate of the parallel training (e.g., where said change reflects a deterioration of the convergence rate), the parallel training of the machine learning model is scaled in, so as for the parallel training to be subsequently distributed through a second number K′ of workers, where K > K′ ≥ 1. Related computerized systems and computer program products may be provided.
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: January 30, 2024
    Assignee: International Business Machines Corporation
    Inventors: Michael Kaufmann, Thomas Parnell, Antonios Kornilios Kourtis
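The scale-in decision above — shrink from K workers to K′ workers (K > K′ ≥ 1) when the convergence rate deteriorates — can be sketched with a simple moving-improvement heuristic. The window, tolerance, and halving policy below are assumptions for illustration, not the patent's detection mechanism:

```python
def detect_deterioration(losses, window=3, tol=1e-3):
    # Flag when the average per-step loss improvement over the last
    # `window` steps falls below `tol` (a proxy for a convergence-rate change).
    if len(losses) < window + 1:
        return False
    recent = losses[-(window + 1):]
    improvements = [a - b for a, b in zip(recent, recent[1:])]
    return sum(improvements) / window < tol

def next_worker_count(k, losses):
    # Scale in from K workers to K' with K > K' >= 1 on deterioration.
    return max(1, k // 2) if detect_deterioration(losses) else k
```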
  • Patent number: 11868235
    Abstract: Examples include aggregating logs, where each of the logs is associated with a workflow instance. Each log includes information indicative of an event occurring during the workflow instance. Further, examples include assigning, based on user intent of the workflow instance, a workflow name to each log, where the user intent is indicative of an outcome of execution of the workflow instance, and assigning an instance identifier to each log, where the instance identifier corresponds to the workflow instance. Further, identifying a subset of the plurality of logs having an identical workflow name and an identical instance identifier, associating a tracking identifier to the subset, and creating an index of processed logs, wherein each processed log in the index includes the tracking identifier. Further, analyzing the index of processed logs based on a set of rules and identifying, based on the analysis, an error in execution of each workflow instance.
    Type: Grant
    Filed: July 21, 2021
    Date of Patent: January 9, 2024
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Akshar Kumar Ranka, Nitish Midha, Christopher Wild
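The grouping step above — logs sharing a workflow name and instance identifier receive a common tracking identifier and go into an index of processed logs — can be sketched in Python. Deriving the tracking identifier from a hash of the pair is an assumption for illustration:

```python
from collections import defaultdict
from hashlib import sha1

def index_logs(logs):
    # Group logs by (workflow_name, instance_id), then stamp every
    # log in a group with the same tracking identifier.
    groups = defaultdict(list)
    for log in logs:
        groups[(log["workflow_name"], log["instance_id"])].append(log)
    index = []
    for (name, inst), subset in groups.items():
        tracking_id = sha1(f"{name}:{inst}".encode()).hexdigest()[:8]
        for log in subset:
            index.append({**log, "tracking_id": tracking_id})
    return index
```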
  • Patent number: 11861211
    Abstract: An API in conjunction with a bridge chip and first and second hosts having first and second memories respectively. The bridge chip connects the memories. The API comprises key identifier registration functionality to register a key identifier for each of plural computer processes performed by the first host, thereby to define plural key identifiers; and/or access control functionality to provide at least computer process P1 performed by the first host with access, typically via the bridge chip, to at least local memory buffer M2 residing in the second memory, typically after the access control functionality first validates that process P1 has a key identifier which has been registered, e.g., via the key identifier registration functionality. Typically, the access control functionality also prevents at least computer process P2, performed by the first host, which has not registered a key identifier, from accessing local memory buffer M2, e.g., via the bridge chip.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: January 2, 2024
    Assignee: MELLANOX TECHNOLOGIES, LTD.
    Inventors: Gal Shalom, Adi Horowitz, Omri Kahalon, Liran Liss, Aviad Yehezkel, Rabie Loulou
  • Patent number: 11860755
    Abstract: An approach is provided for implementing memory profiling aggregation. A hardware aggregator provides memory profiling aggregation by controlling the execution of a plurality of hardware profilers that monitor memory performance in a system. For each hardware profiler of the plurality of hardware profilers, a hardware counter value is compared to a threshold value. When a threshold value is satisfied, execution of a respective hardware profiler of the plurality of hardware profilers is initiated to monitor memory performance. Multiple hardware profilers of the plurality of hardware profilers may execute concurrently and each generate a result counter value. The result counter values generated by each hardware profiler of the plurality of hardware profilers are aggregated to generate an aggregate result counter value. The aggregate result counter value is stored in memory that is accessible by software processes for use in optimizing memory-management policy decisions.
    Type: Grant
    Filed: July 11, 2022
    Date of Patent: January 2, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Sergey Blagodurov, Jinyoung Choi
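The threshold-triggered aggregation above — start a profiler when its trigger counter crosses a threshold, then sum the result counters of every profiler that ran — can be modeled in software as a minimal sketch. Class and field names are assumptions; the patent describes a hardware aggregator:

```python
class Aggregator:
    # Software model of the described hardware aggregator: profilers
    # start when their trigger counter satisfies a threshold, and
    # their result counters are summed into an aggregate value.
    def __init__(self, thresholds):
        self.thresholds = thresholds   # profiler name -> trigger threshold
        self.active = set()

    def tick(self, counters):
        for name, value in counters.items():
            if value >= self.thresholds[name]:
                self.active.add(name)  # threshold satisfied: start profiler

    def aggregate(self, results):
        # Aggregate the result counters across every profiler that ran.
        return sum(results[name] for name in self.active)
```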
  • Patent number: 11853761
    Abstract: In some examples, a first segment of computer language text in a first rule in IT workflow data and a second segment of computer language text in a second rule in the IT workflow data may be identified. In some examples, a similarity score may be determined between the first and the second rules based on a comparison of the first segment with the second segment.
    Type: Grant
    Filed: February 26, 2016
    Date of Patent: December 26, 2023
    Assignee: Micro Focus LLC
    Inventors: Shlomi Chovel, Hava Babay Adi, Rotem Chen, Ran Biron, Olga Tubman
  • Patent number: 11847503
    Abstract: Example techniques for execution of functions by clusters of computing nodes are described. In an example, if a cluster does not have resources available for executing a function for handling a service request, the cluster may request another cluster for executing the function. A result of execution of the function may be received by the cluster and used for handling the service request.
    Type: Grant
    Filed: October 17, 2020
    Date of Patent: December 19, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Jyoti Ranjan, Prabhu Murthy, Siddhartha Singh
  • Patent number: 11842221
    Abstract: Techniques are disclosed for utilizing directed acyclic graphs for deployment instructions. A computer-implemented method can include various operations. Instructions may be executed by a computing device to perform parses of configuration data associated with deploying one or more services to various execution targets. The computing device may cause a first graph to be generated that indicates dependencies between tasks associated with deploying the service(s). A second graph may be generated that specifies dependencies between different deployments of the service(s) to the execution target(s). Services may be deployed based on traversing the first and second graph.
    Type: Grant
    Filed: December 21, 2022
    Date of Patent: December 12, 2023
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Nathaniel Martin Glass, Gregory Mark Jablonski
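The two-graph scheme above — one DAG for dependencies between deployment tasks, a second for dependencies between deployments to different execution targets — can be sketched by traversing both graphs in topological order. The example tasks and targets are hypothetical, not from the patent:

```python
from graphlib import TopologicalSorter

# First graph: dependencies between deployment tasks (hypothetical).
tasks = {"build": [], "push_image": ["build"], "deploy": ["push_image"]}
# Second graph: dependencies between per-target deployments (hypothetical).
targets = {"staging": [], "prod": ["staging"]}

def deployment_plan(tasks, targets):
    # Flatten both DAGs: for each execution target in dependency
    # order, run that target's tasks in dependency order.
    plan = []
    for target in TopologicalSorter(targets).static_order():
        for task in TopologicalSorter(tasks).static_order():
            plan.append((target, task))
    return plan
```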
  • Patent number: 11841872
    Abstract: Disclosed are some implementations of systems, apparatus, methods and computer program products for executing a process flow represented by a graph or portion thereof using cached subgraphs. A first request to execute a first portion of a process flow is processed, where the first portion of the process flow is represented by a first subgraph of a graph representing the process flow and a final node of the first subgraph corresponds to a set of computer-readable instructions. The first portion of the process flow is executed such that a first output of executing the first portion of the process flow is obtained. The first subgraph is stored in association with the first output in a first cache entry of a cache. A second request to execute a second portion of the process flow is processed, where the second portion of the process flow is represented by a second subgraph of the graph.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: December 12, 2023
    Assignee: Salesforce, Inc.
    Inventors: Gregory Hui, Alex Field, Brittany Zenger, Magnus Byne
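The caching pattern above — store a subgraph's output on first execution and serve later requests for the same subgraph from the cache — reduces, in essence, to memoization keyed by the subgraph. A minimal sketch, with a call counter to show the second request skips re-execution (names are illustrative, not the patent's):

```python
cache = {}
calls = {"count": 0}

def execute_subgraph(key):
    # Stand-in for running the instructions at the subgraph's final node.
    calls["count"] += 1
    return f"output-of-{key}"

def run(key):
    # Serve a repeated request for the same subgraph from the cache.
    if key not in cache:
        cache[key] = execute_subgraph(key)
    return cache[key]
```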
  • Patent number: 11829799
    Abstract: A method, a structure, and a computer system for predicting pipeline training requirements. The exemplary embodiments may include receiving one or more worker node features from one or more worker nodes, extracting one or more pipeline features from one or more pipelines to be trained, and extracting one or more dataset features from one or more datasets used to train the one or more pipelines. The exemplary embodiments may further include predicting an amount of one or more resources required for each of the one or more worker nodes to train the one or more pipelines using the one or more datasets based on one or more models that correlate the one or more worker node features, one or more pipeline features, and one or more dataset features with the one or more resources. Lastly, the exemplary embodiments may include identifying a worker node requiring a least amount of the one or more resources of the one or more worker nodes for training the one or more pipelines.
    Type: Grant
    Filed: October 13, 2020
    Date of Patent: November 28, 2023
    Assignee: International Business Machines Corporation
    Inventors: Saket Sathe, Gregory Bramble, Long Vu, Theodoros Salonidis
  • Patent number: 11829780
    Abstract: A system may include a cluster and a module of the cluster. The module may include a user resource definition and a catalog server. The catalog server may maintain a configuration of the cluster.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: November 28, 2023
    Assignee: International Business Machines Corporation
    Inventors: Ning Ding, Yongjie Gong, Yao Zhou, Ke Zhao Li, Dan Dan Wang
  • Patent number: 11822961
    Abstract: A method includes that: a user event to be processed is received; the user event to be processed is stored into an event queue corresponding to an event attribute of the user event to be processed, user events with different event attributes corresponding to different event queues; the user event is read from the event queue through multiple processes and is processed; and the processed user event is deleted from the event queue.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: November 21, 2023
    Assignee: Beijing Xiaomi Mobile Software Co., Ltd.
    Inventors: Fuye Wang, Xiaobing Mao, Zenghui Liu
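The queueing discipline above — route each user event to the queue for its attribute, read and process it, and delete it from the queue only after processing — can be sketched single-process with `deque`s (the patent uses multiple reader processes; the field names here are assumed):

```python
from collections import defaultdict, deque

queues = defaultdict(deque)   # one queue per event attribute

def enqueue(event):
    # Store the event in the queue matching its attribute.
    queues[event["attribute"]].append(event)

def process_one(attribute, handler):
    # Read the event, process it, and only then delete it from the queue.
    if not queues[attribute]:
        return None
    event = queues[attribute][0]      # read without removing
    result = handler(event)
    queues[attribute].popleft()       # delete after successful processing
    return result
```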
  • Patent number: 11816713
    Abstract: A merchant may use an e-commerce platform to sell products to customers on an online store. The merchant may have more than one online store, each with its own separate inventory, orders, domain name (or subdomain), currency, etc. A computer-implemented system and method are provided that allow the merchant to build workflows to automate tasks at the organizational level, i.e. workflows that can incorporate triggers, conditions, and/or actions from and across the different online stores that belong to the merchant.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: November 14, 2023
    Assignee: SHOPIFY INC.
    Inventors: Hanan Ayad, Stanislav Korsei
  • Patent number: 11809266
    Abstract: Failure impact analysis (or "impact analysis") is a process that involves identifying effects that may or will result from a network event. In one example, this disclosure describes a method that includes generating, by a control system managing a resource group, a resource graph that models resource and event dependencies between a plurality of resources within the resource group; detecting, by the control system, a first event affecting a first resource of the plurality of resources, wherein the first event is a network event; and identifying, by the control system and based on the dependencies modeled by the resource graph, a second resource that is expected to be affected by the first event.
    Type: Grant
    Filed: February 22, 2022
    Date of Patent: November 7, 2023
    Assignee: Juniper Networks, Inc.
    Inventors: Jayanthi R, Javier Antich, Chandrasekhar A
  • Patent number: 11809219
    Abstract: A method for executing instructions embedded in two threads stored in a system including two operating units and a virtual managing entity for holding queues for virtual objects (VO) waiting to use a respective operating unit and diverting them between queues. Each VO is associated with two virtual timers, one measuring a time period during which the VO is held in the queue (TIQ) and the other providing a time period during which the VO will remain in an alive state (TTL). The method includes receiving information relating to VOs associated with the two threads; operating on VOs for which their TTLs have expired; upon emerging from its respective queue, determining whether each VO should be diverted to another queue; upon diverting the VO, resetting its TIQ timer; and allocating an access time to each VO based on a number of threads requiring that VO and the TIQ associated therewith.
    Type: Grant
    Filed: June 18, 2019
    Date of Patent: November 7, 2023
    Assignee: DRIVENETS LTD.
    Inventors: Ori Zakin, Amir Krayden, Or Sadeh, Yuval Lev
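The two virtual timers above — TIQ (time in queue, reset on diversion to another queue) and TTL (time the object remains alive) — can be modeled with an explicit clock parameter. A minimal sketch; the class shape is assumed, not the patent's:

```python
class VirtualObject:
    # Tracks time-in-queue (TIQ) and time-to-live (TTL) as described:
    # TTL fixes when the object stops being alive; TIQ measures how
    # long it has waited in its current queue.
    def __init__(self, ttl, now):
        self.expires_at = now + ttl
        self.enqueued_at = now        # TIQ timer start

    def tiq(self, now):
        return now - self.enqueued_at

    def expired(self, now):
        return now >= self.expires_at

    def divert(self, now):
        # Diverting the VO to another queue resets its TIQ timer.
        self.enqueued_at = now
```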
  • Patent number: 11809978
    Abstract: An apparatus to facilitate workload scheduling is disclosed. The apparatus includes one or more clients and one or more processing units to process workloads received from the one or more clients, the processing units including hardware resources and scheduling logic to schedule direct access to the hardware resources for the one or more clients to process the workloads.
    Type: Grant
    Filed: April 18, 2022
    Date of Patent: November 7, 2023
    Assignee: Intel Corporation
    Inventors: Liwei Ma, Nadathur Rajagopalan Satish, Jeremy Bottleson, Farshad Akhbari, Eriko Nurvitadhi, Chandrasekaran Sakthivel, Barath Lakshmanan, Jingyi Jin, Justin E. Gottschlich, Michael Strickland
  • Patent number: 11803391
    Abstract: Devices and techniques for threads in a programmable atomic unit to self-schedule are described herein. When it is determined that an instruction will not complete within a threshold prior to insertion into a pipeline of the processor, a thread identifier (ID) can be passed with the instruction. Here, the thread ID corresponds to a thread of the instruction. When a response to completion of the instruction is received that includes the thread ID, the thread is rescheduled using the thread ID in the response.
    Type: Grant
    Filed: October 20, 2020
    Date of Patent: October 31, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Tony Brewer
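The self-scheduling flow above — park a thread whose instruction will not complete in time, tag the instruction with the thread ID, and reschedule the thread when the completion response carries that ID back — can be sketched in Python. This is a software analogy; the patent describes hardware in a programmable atomic unit:

```python
from collections import deque

ready = deque()   # threads ready to run
waiting = {}      # thread_id -> thread parked on a slow instruction

def issue(thread_id, will_complete_quickly):
    # If the instruction won't finish within the threshold, park the
    # thread; its ID travels with the instruction.
    if will_complete_quickly:
        ready.append(thread_id)
    else:
        waiting[thread_id] = True     # response will carry this ID back

def on_response(thread_id):
    # The completion response includes the thread ID: reschedule it.
    if waiting.pop(thread_id, None):
        ready.append(thread_id)
```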
  • Patent number: 11797339
    Abstract: Systems and methods for maintaining data objects include receiving an event in a queue indicating a change to a data source; obtaining data corresponding to the event from the data source; determining that a monitored item condition defined in a workflow is satisfied based on the data corresponding to the event; generating a data object responsive to the monitored item condition being satisfied; identifying, using a mapping between fields and triggers generated based on the workflow, a trigger defined in the workflow that uses a first field of one or more fields; determining that the value of the first field satisfies a trigger condition of the trigger; and performing, responsive to determining that the value satisfies the trigger condition, an action corresponding to the trigger defined in the workflow.
    Type: Grant
    Filed: March 31, 2023
    Date of Patent: October 24, 2023
    Assignee: TONKEAN, INC.
    Inventors: Sagi Eliyahu, Offir Talmor
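The field-to-trigger mapping above — look up which triggers use a changed field, then fire those whose condition the new value satisfies — can be sketched in Python. The example workflow (fields, triggers, conditions) is hypothetical:

```python
# Hypothetical workflow: which triggers read each field, and each
# trigger's condition on the field's new value.
field_to_triggers = {"status": ["notify_owner"], "amount": ["flag_large"]}
conditions = {"notify_owner": lambda v: v == "closed",
              "flag_large": lambda v: v > 1000}

def fire(field, value):
    # Use the field->trigger mapping to find candidate triggers,
    # then keep those whose trigger condition the value satisfies.
    return [t for t in field_to_triggers.get(field, [])
            if conditions[t](value)]
```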