Patents Examined by Gregory Kessler
  • Patent number: 12164972
    Abstract: The present disclosure provides various embodiments of information handling systems and related methods to provide workload remediation on client devices running multiple concurrent workloads. More specifically, the present disclosure provides software services and computer-implemented methods that utilize workload performance metrics and contextual information to provide workload remediation for each workload/application included within a user's workspace. The disclosed embodiments provide an automated iterative remediation framework, which identifies degradation of workload performance metrics of each workload, takes one or more corrective actions to remediate the performance degradation based on a set of observed states obtained for each workload, measures the efficacy of each corrective action using a weighted scoring function, and improves the workload performance for each workload by selecting the corrective action that optimizes the weighted scoring function for the set of observed states.
    Type: Grant
    Filed: October 26, 2021
    Date of Patent: December 10, 2024
    Assignee: Dell Products L.P.
    Inventors: Nikhil M. Vichare, Vivek V. Iyer
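    Code sketch (illustrative): The remediation loop in this abstract observes per-workload metrics, tries corrective actions, scores each action with a weighted function, and keeps the best-scoring action for the observed state. A minimal Python sketch of that loop follows; the metric names, weights, and the measure/apply/undo callbacks are assumptions for illustration, not the patented implementation.

      # Hypothetical weights over observed workload metrics; illustrative only.
      WEIGHTS = {"latency_ms": -0.5, "cpu_pct": -0.3, "throughput": 1.0}

      def weighted_score(metrics):
          """Collapse observed workload metrics into a single efficacy score."""
          return sum(WEIGHTS.get(name, 0.0) * value for name, value in metrics.items())

      def remediate(workload, candidate_actions, measure, apply_action, undo_action):
          """Try each corrective action and commit the one that maximizes the weighted score."""
          baseline = weighted_score(measure(workload))
          best_action, best_score = None, baseline
          for action in candidate_actions:
              apply_action(workload, action)
              score = weighted_score(measure(workload))
              undo_action(workload, action)        # restore state before the next trial
              if score > best_score:
                  best_action, best_score = action, score
          if best_action is not None:
              apply_action(workload, best_action)  # commit the best-scoring remediation
          return best_action, best_score - baseline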
  • Patent number: 12164960
    Abstract: The present disclosure relates to a database-based data processing method, device, medium, and electronic apparatus, the method including: receiving a query request task to be executed and determining a plurality of coroutine tasks corresponding to the query request task; in each thread, determining a target coroutine task to be executed according to time information of each coroutine task in the thread's local task queue; interrupting the target coroutine task and adding it to the global task queue when its execution is not completed but it has already run in the thread for the current time slice; and determining, according to the global task queue and the thread's local task queue, a new target coroutine task for the thread and executing it in the next time slice.
    Type: Grant
    Filed: June 7, 2024
    Date of Patent: December 10, 2024
    Assignees: DOUYIN VISION CO., LTD., LEMON INC.
    Inventors: Yuanjin Lin, Wei Ding
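    Code sketch (illustrative): The per-thread scheduling cycle in this abstract picks a target coroutine from the thread's local queue by its time information, preempts it when its time slice expires, parks it on a global queue, and then chooses a new target from the global and local queues. The sketch below assumes a heap-based local queue, a shared deque as the global queue, a made-up slice length, and a run_for_slice hook; none of these come from the patent.

      import heapq
      from collections import deque

      TIME_SLICE_S = 0.005  # assumed slice length; illustrative only

      def schedule_one_slice(local_queue, global_queue, run_for_slice):
          """One scheduling round for a single worker thread.

          local_queue:  heap of (time_info, coroutine_task) tuples owned by this thread
          global_queue: deque shared by all threads, holding interrupted coroutine tasks
          run_for_slice(task, seconds) -> True when the coroutine ran to completion
          """
          if local_queue:
              _, task = heapq.heappop(local_queue)   # earliest time information runs first
          elif global_queue:
              task = global_queue.popleft()          # take over an interrupted task
          else:
              return                                 # nothing to run in this slice
          if not run_for_slice(task, TIME_SLICE_S):
              # Slice expired before completion: interrupt the coroutine and make it
              # visible to every thread by appending it to the global task queue.
              global_queue.append(task)

      # Example wiring: one local heap for this thread, one shared global deque.
      local_q, global_q = [(0.001, "coroutine-A")], deque()
      schedule_one_slice(local_q, global_q, run_for_slice=lambda task, s: False)
      print(list(global_q))   # ['coroutine-A'], preempted and awaiting its next time slice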
  • Patent number: 12164964
    Abstract: A memory management method for a device, a memory management device, and a computing system.
    Type: Grant
    Filed: September 15, 2022
    Date of Patent: December 10, 2024
    Assignee: SHENZHEN MICROBT ELECTRONICS TECHNOLOGY CO., LTD.
    Inventors: Guo Ai, Zuoxing Yang, Ruming Fang, Zhihong Xiang
  • Patent number: 12153958
    Abstract: Systems, apparatuses, and methods for abstracting tasks in virtual memory identifier (VMID) containers are disclosed. A processor coupled to a memory executes a plurality of concurrent tasks including a first task. Responsive to detecting one or more instructions of the first task which correspond to a first operation, the processor retrieves a first identifier (ID) which is used to uniquely identify the first task, wherein the first ID is transparent to the first task. Then, the processor maps the first ID to a second ID and/or a third ID. The processor completes the first operation by using the second ID and/or the third ID to identify the first task to at least a first data structure. In one implementation, the first operation is a memory access operation and the first data structure is a set of page tables. Also, in one implementation, the second ID identifies a first application of the first task and the third ID identifies a first operating system (OS) of the first task.
    Type: Grant
    Filed: October 7, 2022
    Date of Patent: November 26, 2024
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Anirudh R. Acharya, Michael J. Mantor, Rex Eldon McCrary, Anthony Asaro, Jeffrey Gongxian Cheng, Mark Fowler
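    Code sketch (illustrative): The ID indirection described above maps a task-unique first ID, which the task itself never sees, to an application ID and/or an OS ID before a memory access is resolved against the page tables. The table shapes and field names below are invented for illustration.

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class IdMapping:
          app_id: int   # "second ID": identifies the task's application
          os_id: int    # "third ID": identifies the task's operating system

      # Hypothetical mapping table keyed by the task-unique "first ID".
      ID_TABLE = {0x11: IdMapping(app_id=3, os_id=1), 0x12: IdMapping(app_id=7, os_id=2)}

      def translate_memory_access(first_id, virtual_addr, page_tables):
          """Complete a memory access by identifying the task to the page tables
          with its application/OS IDs rather than the transparent first ID."""
          mapping = ID_TABLE[first_id]
          # page_tables is assumed to be nested dicts: os_id -> app_id -> {vaddr: paddr}
          return page_tables[mapping.os_id][mapping.app_id][virtual_addr]

      page_tables = {1: {3: {0x1000: 0x8000}}, 2: {7: {0x2000: 0x9000}}}
      print(hex(translate_memory_access(0x11, 0x1000, page_tables)))   # 0x8000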
  • Patent number: 12153969
    Abstract: The disclosure provides a shelf label communication method based on a synchronous network, a shelf label system and a computer device. In the method, an electronic shelf label establishes a first timing task of a timer when receiving a timing service instruction; it determines a timing duration for the first timing task based on the time difference between the local system time at which the timing service instruction is received and the system time at which the instruction is to be executed, and starts the first timing task; the electronic shelf label then cyclically calibrates the current timing duration of the first timing task against the periodically received base-station system time to obtain a calibrated current timing duration.
    Type: Grant
    Filed: May 6, 2024
    Date of Patent: November 26, 2024
    Assignee: HANSHOW TECHNOLOGY CO., LTD.
    Inventors: Min Liang, Yaping Ji, Yujing Wang, Longfei Gao, Qi Jiang, Ju Zhang, Gengfeng Chen, Guofeng Zhang
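    Code sketch (illustrative): The timing arithmetic is straightforward: the initial duration of the first timing task is the difference between the instruction's execution system time and the local system time at which the instruction arrives, and the countdown is re-derived from each periodically received base-station system time. The numbers below are made up.

      def initial_timing_duration(instruction_exec_time, local_time_at_receipt):
          """Duration to wait before executing the timing service instruction."""
          return max(0.0, instruction_exec_time - local_time_at_receipt)

      def calibrated_duration(instruction_exec_time, base_station_time):
          """Re-derive the remaining duration from the periodically broadcast
          base-station system time, correcting any local clock drift."""
          return max(0.0, instruction_exec_time - base_station_time)

      # Instruction should fire at system time 1000.0; the label's local clock reads
      # 993.5 on receipt, so it starts a 6.5 s timer, then trims it to 4.0 s when a
      # base-station broadcast reports the true system time is 996.0.
      print(initial_timing_duration(1000.0, 993.5))   # 6.5
      print(calibrated_duration(1000.0, 996.0))       # 4.0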
  • Patent number: 12153947
    Abstract: An allocation control apparatus (1) includes a virtual core allocation unit (112) configured to allocate a task whose priority is equal to or greater than a threshold to a virtual core, selecting from among virtual cores created on a virtual machine that occupy a physical core and virtual cores created on the virtual machine that share a physical core, and an interrupt core allocation unit (113) configured to set the virtual core to which the task is allocated as the interrupt destination of a virtual network interface card used by the task.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: November 26, 2024
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventor: Makoto Hamada
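    Code sketch (illustrative): One way to picture the two allocation units: a task at or above the priority threshold is placed on a virtual core, and that same virtual core is then registered as the interrupt destination for the task's virtual NIC. The sketch assumes that high-priority tasks go to virtual cores that occupy a physical core, which the abstract does not state explicitly; the threshold, structures, and placement rule are all invented.

      PRIORITY_THRESHOLD = 10  # assumed threshold; illustrative only

      def allocate(task, occupying_vcores, shared_vcores, vnic_irq_map):
          """Pick a virtual core for the task, then point its vNIC interrupts at that core.

          occupying_vcores: virtual cores that occupy (pin) a physical core
          shared_vcores:    virtual cores that share a physical core
          vnic_irq_map:     vNIC -> interrupt-destination virtual core
          """
          pool = occupying_vcores if task["priority"] >= PRIORITY_THRESHOLD else shared_vcores
          vcore = pool[task["id"] % len(pool)]     # naive placement for illustration
          # Interrupt core allocation: the core running the task also receives the
          # interrupts of the virtual network interface card the task uses.
          vnic_irq_map[task["vnic"]] = vcore
          return vcore

      irq_map = {}
      chosen = allocate({"id": 4, "priority": 12, "vnic": "vnic0"},
                        occupying_vcores=["vc0", "vc1"], shared_vcores=["vc2", "vc3"],
                        vnic_irq_map=irq_map)
      print(chosen, irq_map)   # vc0 {'vnic0': 'vc0'}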
  • Patent number: 12153955
    Abstract: An Internet of Things system comprises an IoT hub and a local subsystem with a plurality of subsystem devices. These subsystem devices include an edge hub communicatively coupled to the IoT hub and to each other subsystem device; a requestor module configured to perform a task according to a requestor module schedule; and a scheduler module with a persistent time loop. The scheduler module receives a scheduler request from the requestor module via the edge hub, and based on this scheduler request generates a subsystem schedule that includes the requestor module schedule. The scheduler module transmits at least a part of this subsystem schedule to a persistence layer outside of the local IoT subsystem, via the IoT hub. The scheduler module flags scheduled event occurrences via the time loop and the subsystem schedule, and transmits task-specific triggered messages to the requestor module in response to these event occurrences.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: November 26, 2024
    Assignee: Insight Direct USA, Inc.
    Inventor: Ben Kotvis
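    Code sketch (illustrative): The scheduler module accepts scheduler requests from requestor modules over the edge hub, folds them into a subsystem schedule, persists that schedule outside the local subsystem via the IoT hub, and lets a persistent time loop flag due occurrences and send task-specific triggered messages back to the requestors. The message shapes, the polling interval, and the max_iters cutoff below are invented for the sketch.

      import time

      def build_subsystem_schedule(scheduler_requests):
          """Merge requestor-module schedules into one subsystem schedule.

          scheduler_requests: list of {"module": name, "task": name, "due_at": epoch seconds}
          """
          return sorted(scheduler_requests, key=lambda r: r["due_at"])

      def time_loop(schedule, send_trigger, persist, now=time.time, poll_s=1.0, max_iters=None):
          """Persistent loop: persist the schedule upstream, then flag due occurrences
          and send task-specific triggered messages to the requestor modules."""
          persist(schedule)            # e.g. via the IoT hub to an external persistence layer
          pending, iters = list(schedule), 0
          while pending and (max_iters is None or iters < max_iters):
              for event in [e for e in pending if e["due_at"] <= now()]:
                  send_trigger(event["module"], {"task": event["task"], "fired_at": now()})
                  pending.remove(event)
              time.sleep(poll_s)
              iters += 1

      fired = []
      time_loop(build_subsystem_schedule([{"module": "sensor", "task": "read", "due_at": 0}]),
                send_trigger=lambda mod, msg: fired.append((mod, msg["task"])),
                persist=lambda sched: None, poll_s=0.0, max_iters=1)
      print(fired)   # [('sensor', 'read')]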
  • Patent number: 12147829
    Abstract: Provided is a data processing system for a heterogeneous architecture, including: a job decomposing component, configured to decompose a to-be-completed job into a series of tasks executed by an execution subject in the heterogeneous architecture; a task topology generating component, configured to generate a task relationship topology based on the inherent relationships between the decomposed tasks during the job decomposition, where each task node of the task topology includes all node attributes required to execute the corresponding task; an execution subject creating component, configured to create a corresponding execution subject for each task in a computing resource based on the task relationship topology; and an execution subject network component, comprising one or more data processing paths made up of the created execution subjects and configured to fragment actual job data into task data when receiving the actual job data.
    Type: Grant
    Filed: January 4, 2022
    Date of Patent: November 19, 2024
    Assignee: BEIJING ONEFLOW TECHNOLOGY CO., LTD
    Inventor: Jinhui Yuan
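    Code sketch (illustrative): The pipeline described is: decompose a job into tasks, derive a task relationship topology whose nodes carry the attributes needed to execute each task, create one execution subject per node on the available computing resources, and let the resulting network fragment incoming job data into per-task data. The node and subject structures below are invented; this is not the OneFlow implementation.

      def build_task_topology(tasks, depends_on):
          """Task relationship topology: each node carries the attributes needed to run its task.

          tasks:      {task_name: attribute dict, e.g. {"device": ..., "op": ...}}
          depends_on: {task_name: [upstream_task_name, ...]}
          """
          return {name: {"attrs": attrs, "upstream": depends_on.get(name, [])}
                  for name, attrs in tasks.items()}

      def create_execution_subjects(topology, resources):
          """One execution subject per task node, bound to a computing resource."""
          return {name: {"node": node, "resource": resources[node["attrs"]["device"]]}
                  for name, node in topology.items()}

      def fragment_job_data(job_data, subjects, chunk=2):
          """Fragment actual job data into task data and hand it, round-robin,
          to the entry subjects (task nodes with no upstream dependencies)."""
          entries = [n for n, s in subjects.items() if not s["node"]["upstream"]]
          if not entries:
              return {}
          chunks = [job_data[i:i + chunk] for i in range(0, len(job_data), chunk)]
          return {entry: chunks[i::len(entries)] for i, entry in enumerate(entries)}

      topo = build_task_topology({"load": {"device": "cpu", "op": "read"},
                                  "train": {"device": "gpu", "op": "matmul"}},
                                 depends_on={"train": ["load"]})
      subjects = create_execution_subjects(topo, {"cpu": "cpu-0", "gpu": "gpu-0"})
      print(fragment_job_data([1, 2, 3, 4, 5], subjects))   # {'load': [[1, 2], [3, 4], [5]]}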
  • Patent number: 12147832
    Abstract: The handling of external calls from one or more services to one or more subservices is described. Upon detecting that a service has made an external call to a subservice and prior to allowing the external call to be sent to the subservice, a system evaluates the external call against one or more pre-call thresholds to determine whether or not the one or more pre-call thresholds are met. If the determination is that a pre-call threshold of the one or more pre-call thresholds is not met, the external call is failed without sending the external call to the subservice. This failing might include communicating to the service that placed the external call that the external call has failed. Otherwise, the system sends the external call to the subservice. By applying these thresholds, the service is kept from using too many resources.
    Type: Grant
    Filed: December 2, 2021
    Date of Patent: November 19, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Nishand Lalithambika Vasudevan, Akshay Navneetlal Mutha, Abhishek Anil Kakhandiki, Sathya Narayanan Ramamirtham
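    Code sketch (illustrative): The gatekeeping step is a pre-flight check: before an external call leaves the service, it is evaluated against one or more pre-call thresholds, and if any threshold is not met the call is failed back to the caller without ever reaching the subservice. The specific thresholds below (in-flight cap, payload cap) are invented examples, not thresholds named by the patent.

      class ExternalCallFailed(Exception):
          """Raised back to the calling service when a pre-call threshold is not met."""

      def send_external_call(call, thresholds, send_to_subservice):
          """Evaluate every pre-call threshold; fail fast or forward the call to the subservice."""
          for name, threshold_met in thresholds.items():
              if not threshold_met(call):
                  # Fail the call without sending it, so the calling service does not
                  # keep spending resources on a subservice that is over budget.
                  raise ExternalCallFailed(f"pre-call threshold '{name}' not met")
          return send_to_subservice(call)

      state = {"inflight": 9}
      thresholds = {
          "max_inflight": lambda call: state["inflight"] < 10,
          "max_payload":  lambda call: len(call.get("payload", "")) <= 4096,
      }
      print(send_external_call({"payload": "ping"}, thresholds,
                               send_to_subservice=lambda c: "sent"))   # sent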
  • Patent number: 12147848
    Abstract: A workload execution manager receives a request to execute a workload process in a cloud computing environment, where the cloud computing environment comprises a plurality of nodes; identifies a set of eligible nodes of the plurality of nodes for executing the workload process; determines whether a first eligible node of the set of eligible nodes satisfies a version threshold; responsive to determining that the first eligible node satisfies the version threshold, selects the first eligible node as a target node for executing the workload process; and executes the workload process on the target node.
    Type: Grant
    Filed: August 20, 2021
    Date of Patent: November 19, 2024
    Assignee: Red Hat, Inc.
    Inventor: Yaniv Kaul
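    Code sketch (illustrative): Node selection here reduces to filtering the cluster's nodes down to an eligible set and picking the first eligible node whose version satisfies the version threshold. The node fields and the integer version comparison below are simplifications for illustration.

      def select_target_node(nodes, is_eligible, version_threshold):
          """Return the first eligible node whose version meets the threshold, else None."""
          for node in (n for n in nodes if is_eligible(n)):
              if node["version"] >= version_threshold:
                  return node
          return None

      nodes = [
          {"name": "node-a", "version": 3, "ready": True},
          {"name": "node-b", "version": 5, "ready": True},
      ]
      target = select_target_node(nodes, lambda n: n["ready"], version_threshold=4)
      print(target["name"] if target else "no eligible node")   # node-b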
  • Patent number: 12136002
    Abstract: A system-on-chip can include a data input chiplet to obtain data from one or more data sources. The system-on-chip can further include one or more workload processing chiplets that access the data obtained by the data input chiplet to execute respective workloads. The system-on-chip further includes a central chiplet including a shared memory comprising a reservation table listing a plurality of workload entries. Each respective workload entry can correspond to a specified workload to be executed by the one or more workload processing chiplets. The central chiplet can input a thread number for each respective workload entry in the reservation table, where the thread number identifies a workload pipeline in which the specified workload is to be executed.
    Type: Grant
    Filed: January 24, 2024
    Date of Patent: November 5, 2024
    Assignee: Mercedes-Benz Group AG
    Inventor: Francois Piednoel
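    Code sketch (illustrative): The shared-memory reservation table is a small bookkeeping structure: one entry per pending workload, with a thread number stamped by the central chiplet naming the pipeline in which that workload will run. The fields, the pipeline count, and the round-robin assignment rule below are assumptions, not the chip's actual layout.

      from dataclasses import dataclass, field

      @dataclass
      class WorkloadEntry:
          workload_id: int
          target_chiplet: str      # which workload-processing chiplet should execute it
          thread_number: int = -1  # pipeline assignment, filled in by the central chiplet

      @dataclass
      class ReservationTable:
          entries: list = field(default_factory=list)
          num_pipelines: int = 4   # assumed number of workload pipelines

          def reserve(self, workload_id, target_chiplet):
              entry = WorkloadEntry(workload_id, target_chiplet)
              # The central chiplet inputs a thread number identifying the workload
              # pipeline for this entry (round-robin here, purely for illustration).
              entry.thread_number = workload_id % self.num_pipelines
              self.entries.append(entry)
              return entry

      table = ReservationTable()
      print(table.reserve(workload_id=7, target_chiplet="vision"))
      # WorkloadEntry(workload_id=7, target_chiplet='vision', thread_number=3)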
  • Patent number: 12124874
    Abstract: A pipeline task verification method and system is disclosed, and may use one or more processors. The method may comprise providing a data processing pipeline specification, wherein the data processing pipeline specification defines a plurality of data elements of a data processing pipeline. The method may further comprise identifying from the data processing pipeline specification one or more tasks defining a relationship between a first data element and a second data element. The method may further comprise receiving for a given task one or more data processing elements intended to receive the first data element and to produce the second data element. The method may further comprise verifying that the received one or more data processing elements receive the first data element and produce the second data element according to the defined relationship.
    Type: Grant
    Filed: November 9, 2023
    Date of Patent: October 22, 2024
    Assignee: Palantir Technologies Inc.
    Inventor: Kaan Tekelioglu
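    Code sketch (illustrative): Verification amounts to checking that, for each task in the pipeline specification, the data processing element assigned to that task actually consumes the task's first data element and produces its second data element. The toy check below runs each element on a sample of the first data element; the spec and element shapes are invented.

      def verify_pipeline(spec_tasks, processing_elements):
          """Verify each task's processing element honors the spec's defined relationship.

          spec_tasks:          {task_name: (first_data_element, expected_second_data_element)}
          processing_elements: {task_name: callable that receives the first data element}
          Returns {task_name: True/False}.
          """
          results = {}
          for task, (first_elem, expected_second) in spec_tasks.items():
              element = processing_elements.get(task)
              if element is None:
                  results[task] = False          # no element wired to receive the first element
                  continue
              results[task] = (element(first_elem) == expected_second)
          return results

      # Example: the spec says task "normalize" maps raw rows to trimmed rows.
      spec = {"normalize": (["  a ", "b  "], ["a", "b"])}
      elements = {"normalize": lambda rows: [r.strip() for r in rows]}
      print(verify_pipeline(spec, elements))   # {'normalize': True}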
  • Patent number: 12124882
    Abstract: Disclosed are a method and an electronic apparatus including an accelerator for lightweight and parallel accelerator task scheduling. The method includes pre-running a deep learning model with sample input data having a preset data form and generating a scheduling result through the pre-running.
    Type: Grant
    Filed: November 12, 2021
    Date of Patent: October 22, 2024
    Assignee: Seoul National University R&DB Foundation
    Inventors: Byung-Gon Chun, Gyeongin Yu, Woosuk Kwon
  • Patent number: 12112210
    Abstract: A method for alleviating data poisoning in an edge computing resource includes receiving a numeric value from an Internet of Things (IoT) unit and associating the numeric value with a cluster selected from a plurality of clusters in accordance with a suitable clustering algorithm such as a k-means clustering algorithm. In at least some embodiments, the numeric value comprises a poisoned numeric value including an adversarial component injected by an adversary to negatively impact a trained model of a cloud-based artificial intelligence engine. Rather than permitting the injected adversarial component to corrupt the AI engine, a cluster with which the numeric value is associated is sampled in accordance with a probability distribution of the cluster to obtain a surrogate for the poisoned numeric value. The surrogate may then be provided as an input to an inference module of the AI engine to generate a prediction.
    Type: Grant
    Filed: October 25, 2021
    Date of Patent: October 8, 2024
    Assignee: Dell Products L.P.
    Inventors: Ofir Ezrielev, Nadav Azaria, Avitan Gefen, Amihai Savir
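    Code sketch (illustrative): The mitigation can be pictured in a few lines: cluster historical values (the abstract cites k-means as one suitable algorithm), assign each incoming IoT value to its nearest cluster, and feed the model a surrogate sampled from that cluster's distribution instead of the possibly poisoned raw value. The sketch assumes a Gaussian per-cluster distribution and made-up sensor numbers.

      import numpy as np

      def nearest_cluster(value, centroids):
          """Index of the cluster centroid closest to the incoming numeric value."""
          return int(np.argmin(np.abs(centroids - value)))

      def surrogate_for(value, centroids, cluster_std, rng=None):
          """Replace a possibly poisoned value with a sample drawn from its cluster."""
          rng = rng or np.random.default_rng(0)
          k = nearest_cluster(value, centroids)
          # Sampling the cluster's (assumed Gaussian) distribution yields the surrogate
          # that is passed to the inference module in place of the raw value.
          return rng.normal(loc=centroids[k], scale=cluster_std[k])

      centroids = np.array([20.0, 55.0, 90.0])   # e.g. k-means centers of a sensor reading
      cluster_std = np.array([1.5, 2.0, 1.0])
      poisoned_reading = 300.0                   # adversarially shifted input
      print(surrogate_for(poisoned_reading, centroids, cluster_std))   # close to 90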
  • Patent number: 12106154
    Abstract: Systems, apparatuses and methods include technology that analyzes an input stream and an artificial intelligence (AI) model graph to generate a workload characterization. The workload characterization characterizes one or more of compute resources or memory resources, and the one or more of the compute resources or the memory resources is associated with execution of the AI model graph based on the input stream. The technology partitions the AI model graph into subgraphs based on the workload characterization. The technology selects a plurality of hardware devices to execute the subgraphs.
    Type: Grant
    Filed: August 19, 2021
    Date of Patent: October 1, 2024
    Assignee: Intel Corporation
    Inventors: Yamini Nimmagadda, Akhila Vidiyala, Suryaprakash Shanmugam, Divya Prakash
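    Code sketch (illustrative): The flow is: characterize the compute/memory demand of the AI model graph for the given input stream, partition the graph into subgraphs along that characterization, and map each subgraph to a suitable hardware device. The greedy rule, characterization fields, and device names below are invented simplifications.

      def partition_and_place(graph_nodes, devices):
          """Split nodes into subgraphs by dominant resource, then place each subgraph
          on the device claiming strength in that resource.

          graph_nodes: list of {"name", "compute", "memory"} workload characterizations
          devices:     {"compute": device_name, "memory": device_name}
          """
          subgraphs = {"compute": [], "memory": []}
          for node in graph_nodes:
              key = "compute" if node["compute"] >= node["memory"] else "memory"
              subgraphs[key].append(node["name"])
          return {devices[kind]: names for kind, names in subgraphs.items() if names}

      nodes = [
          {"name": "conv1", "compute": 9.0, "memory": 2.0},
          {"name": "embed", "compute": 1.0, "memory": 8.0},
      ]
      print(partition_and_place(nodes, {"compute": "GPU", "memory": "CPU"}))
      # {'GPU': ['conv1'], 'CPU': ['embed']}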
  • Patent number: 12106159
    Abstract: The system of the present technology includes an embodiment that provides a host audio, video and control operating system configured to establish or interact with one or more virtual machines, each with a guest operating system.
    Type: Grant
    Filed: June 22, 2023
    Date of Patent: October 1, 2024
    Inventor: Gerrit Eimbertus Rosenboom
  • Patent number: 12099873
    Abstract: A method includes, by a scheduling controller, receiving from a user a request for an application to be executed by a computing system associated with a data center, wherein the application includes a plurality of tasks, and wherein the request includes an estimated execution time corresponding to an estimated amount of real-world time that the tasks will be actively running on the computing system to fully execute the application. The method includes receiving from the user a service level objective indicating a target percentage of a total amount of real-world time that the tasks will be actively running on the computing system and generating, in response to determining that the job can be completed according to the service level objective and the estimated execution time, a notification indicating acceptance of the job.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: September 24, 2024
    Assignee: LANCIUM LLC
    Inventors: Andrew Grimshaw, Vanamala Venkataswamy, Raymond E. Cline, Jr., Michael T. McNamara
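    Code sketch (illustrative): The admission decision reduces to arithmetic over two user-supplied numbers: the estimated execution time (real-world time the tasks must be actively running) and the service level objective (the target percentage of total time during which they will be actively running). The scheduling horizon below is an assumed data-center input, not something the abstract specifies.

      def can_accept_job(estimated_execution_hours, slo_active_fraction, horizon_hours):
          """Accept the job if the required active time fits the horizon at the SLO."""
          # At an SLO of 0.25, each hour of execution needs up to 4 wall-clock hours.
          required_wall_clock = estimated_execution_hours / slo_active_fraction
          return required_wall_clock <= horizon_hours

      # Example: 10 h of compute at a 25% activity SLO needs 40 h of wall clock,
      # which fits a 72 h horizon, so the controller would notify acceptance.
      print(can_accept_job(10.0, 0.25, 72.0))   # True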
  • Patent number: 12099882
    Abstract: Techniques are disclosed for deploying a computing resource (e.g., a service) in response to user input. A computer-implemented method can include operations of identifying a first set of computing components already deployed within the cloud-computing environment and identifying a second set of computing components available for deployment within the cloud-computing environment. A request for deployment may be subsequently received for one of the available computing components. A bootstrap request corresponding to the particular computing component requested may be transmitted to a deployment orchestrator, the deployment orchestrator being configured to deploy the particular computing component to the cloud-computing environment based at least in part on the bootstrap request. A user interface may present status indicators for each computing component (e.g., deployed, available, requested, etc.).
    Type: Grant
    Filed: October 5, 2021
    Date of Patent: September 24, 2024
    Assignee: Oracle International Corporation
    Inventors: Eden Grail Adogla, Matthew Victor Rushton, Iliya Roitburg, Brijesh Singh
  • Patent number: 12086049
    Abstract: Techniques for capacity management in computing systems are disclosed herein. In one embodiment, a method includes analyzing data representing a number of enabled users or a number of provisioned users to determine whether the analyzed data represents an anomaly based on historical data. The method can also include upon determining that the data represents an anomaly, determining a conversion rate between a change in the number of enabled users or the number of provisioned users and a change in a number of active users of the computing service and deriving a future value of the number of active users of the computing service based on both the detected anomaly and the determined conversion rate. The method can further include allocating and provisioning an amount of the computing resource in the distributed computing system in accordance with the determined future value of the active users of the computing resource.
    Type: Grant
    Filed: December 30, 2021
    Date of Patent: September 10, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jieqiu Chen, Yow-Gwo Wang, Qizhi Xu, Feiyue Jiang, Harsh Mahendra Mehta, Boon Yeap, Dimple Kaul
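    Code sketch (illustrative): The forecasting step chains three quantities: an anomalous jump in enabled or provisioned users, a conversion rate from that jump to a change in active users, and a projected future active-user count used to size the allocation. The numbers and the users-per-unit sizing below are invented.

      def project_active_users(current_active, anomalous_user_delta, conversion_rate):
          """Future active users: current active users plus the share of the anomalous
          enabled/provisioned-user jump expected to convert to active users."""
          return current_active + conversion_rate * anomalous_user_delta

      def capacity_units_needed(active_users, users_per_unit):
          """Translate projected active users into resource units to allocate and provision."""
          return -(-active_users // users_per_unit)   # ceiling division

      projected = project_active_users(current_active=12_000,
                                       anomalous_user_delta=5_000,  # detected anomaly
                                       conversion_rate=0.4)         # 40% become active
      print(projected)                                              # 14000.0
      print(capacity_units_needed(int(projected), users_per_unit=500))   # 28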
  • Patent number: 12086643
    Abstract: Techniques for managing critical workloads in container-based computing environments are disclosed. In one example, a method determines a resource trigger threshold associated with executing at least one containerized workload associated with a first service having a first criticality level, the resource trigger threshold corresponding to a resource capacity allocated to execute the first service. The method determines when the resource capacity allocated to execute the first service reaches the resource trigger threshold, and then re-allocates resource capacity allocated to execute at least one containerized workload associated with a second service having a second criticality level to the first service when the resource trigger threshold is reached. For example, the first criticality level may be higher than the second criticality level.
    Type: Grant
    Filed: September 16, 2021
    Date of Patent: September 10, 2024
    Assignee: Dell Products L.P.
    Inventors: Shibi Panikkar, Rohit Gosain, Dhilip S. Kumar
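    Code sketch (illustrative): The control point is a resource trigger threshold on the higher-criticality service: once the capacity consumed by that service reaches the threshold, capacity allocated to a lower-criticality service's containerized workloads is re-allocated to it. The trigger fraction, reclaim step, and capacity fields below are assumptions for the sketch.

      def rebalance(services, trigger_fraction=0.9, reclaim_step=1.0):
          """Re-allocate capacity to services that hit their resource trigger threshold.

          services: list of {"name", "criticality", "allocated", "used"}, where a
                    higher criticality value means a more critical service.
          """
          by_criticality = sorted(services, key=lambda s: s["criticality"], reverse=True)
          for svc in by_criticality:
              if svc["used"] < trigger_fraction * svc["allocated"]:
                  continue                              # below its trigger threshold
              for donor in reversed(by_criticality):    # least critical donors first
                  if donor["criticality"] < svc["criticality"] and donor["allocated"] > reclaim_step:
                      donor["allocated"] -= reclaim_step   # shrink the less critical service
                      svc["allocated"] += reclaim_step     # grow the critical one
                      break
          return services

      services = [
          {"name": "payments", "criticality": 2, "allocated": 4.0, "used": 3.9},
          {"name": "reports",  "criticality": 1, "allocated": 4.0, "used": 1.0},
      ]
      print(rebalance(services))   # payments grows to 5.0 units, reports shrinks to 3.0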