Patents Examined by Eric C. Wai
  • Patent number: 12197963
    Abstract: This on-vehicle control device includes: an acquisition unit configured to acquire a plurality of pieces of wear information regarding a degree of wear of each of a plurality of function units mounted on a vehicle; a selection unit configured to select, on the basis of each piece of wear information acquired by the acquisition unit, one or a plurality of function units from the plurality of function units to perform a target process that is to be performed by one or more of the plurality of function units; and a control unit configured to cause the one or plurality of function units selected by the selection unit to perform the target process.
    Type: Grant
    Filed: February 12, 2019
    Date of Patent: January 14, 2025
    Assignee: SUMITOMO ELECTRIC INDUSTRIES, LTD.
    Inventor: Toshihiro Ichimaru
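    Illustrative sketch (not part of the patent): a minimal Python illustration of the wear-based selection idea in the abstract, choosing the least-worn function unit(s) to perform a target process; the names FunctionUnit and select_least_worn are hypothetical.
      from dataclasses import dataclass

      @dataclass
      class FunctionUnit:
          name: str
          wear: float  # degree of wear, 0.0 (new) .. 1.0 (worn out)

      def select_least_worn(units, count=1):
          # Choose the 'count' least-worn units for the target process,
          # spreading wear across redundant function units over time.
          return sorted(units, key=lambda u: u.wear)[:count]

      units = [FunctionUnit("ecu_a", 0.7), FunctionUnit("ecu_b", 0.2), FunctionUnit("ecu_c", 0.4)]
      print([u.name for u in select_least_worn(units, count=1)])  # ['ecu_b']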
  • Patent number: 12190153
    Abstract: In an M2M device management system, a Task Orchestration Module, TOM (32), external to the M2M device (20) manages the execution of tasks wholly or partly on the M2M device (20). This relieves the M2M device (20) of the need to store code, execute tasks, monitor task execution, and the like. The tasks are specified using Finite State Machine, FSM, syntax. A task URL, tURL (34), resource on the M2M device (20) provides a tURL (34) to a resource hosting (36) a service (38) mapping task-IDs to FSM specifications. Communication between the TOM (32) and the M2M device (20) is compactly and efficiently achieved using a device management protocol server/client system (16, 18), such as LightWeightM2M (LWM2M). A predetermined mapping (40) at the M2M device (20) maps action labels to library functions (22) of the M2M device (20), obviating the need for code in the M2M device (20) to interpret and execute actions.
    Type: Grant
    Filed: August 14, 2018
    Date of Patent: January 7, 2025
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Swarup Kumar Mohalik, Senthamiz Selvi Arumugam, Chakri Padala
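    Illustrative sketch (not part of the patent): a minimal Python illustration of the predetermined mapping idea above, where action labels in an FSM task specification resolve to local library functions so the device never interprets downloaded code; ACTION_MAP, run_fsm, and the tiny FSM format are hypothetical.
      # Hypothetical mapping of action labels to local library functions.
      ACTION_MAP = {
          "read_sensor": lambda: 21.5,           # stands in for a driver call
          "report": lambda value: print(value),  # stands in for an uplink call
      }

      def run_fsm(spec):
          # Execute a tiny FSM specification: each state names an action label
          # and the next state; "end" terminates the task.
          state, last = spec["start"], None
          while state != "end":
              action, nxt = spec["states"][state]
              fn = ACTION_MAP[action]
              last = fn(last) if action == "report" else fn()
              state = nxt

      run_fsm({"start": "measure",
               "states": {"measure": ("read_sensor", "send"),
                          "send": ("report", "end")}})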
  • Patent number: 12190164
    Abstract: Disclosed embodiments relate to controlling sets of graphics work (e.g., kicks) assigned to graphics processor circuitry. In some embodiments, tracking slot circuitry implements entries for multiple tracking slots. Slot manager circuitry may store, using an entry of the tracking slot circuitry, software-specified information for a set of graphics work, where the information includes: type of work, dependencies on other sets of graphics work, and location of data for the set of graphics work. The slot manager circuitry may prefetch, from the location and prior to allocating shader core resources for the set of graphics work, configuration register data for the set of graphics work. Control circuitry may program configuration registers for the set of graphics work using the prefetched data and initiate processing of the set of graphics work by the graphics processor circuitry according to the dependencies. Disclosed techniques may reduce kick-to-kick transition time, in some embodiments.
    Type: Grant
    Filed: August 11, 2021
    Date of Patent: January 7, 2025
    Assignee: Apple Inc.
    Inventors: Steven Fishwick, Fergus W. MacGarry, Jonathan M. Redshaw, David A. Gotwalt, Ali Rabbani Rankouhi, Benjamin Bowman
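    Illustrative sketch (not part of the patent): a minimal Python illustration of the tracking-slot idea in software terms, where each slot records a kick's type, dependencies, and data location, and kicks launch only after their dependencies complete; the Slot structure and run_kicks are hypothetical, and the prefetch of configuration-register data is modeled as a plain dictionary read.
      from dataclasses import dataclass

      @dataclass
      class Slot:
          kick_id: str
          work_type: str
          deps: list            # kick IDs this set of work depends on
          data_location: dict   # stands in for prefetched config-register data

      def run_kicks(slots):
          # Launch each kick only after all of its dependencies have completed.
          pending = {s.kick_id: s for s in slots}
          while pending:
              ready = [s for s in pending.values()
                       if all(d not in pending for d in s.deps)]
              if not ready:
                  raise RuntimeError("dependency cycle between kicks")
              for s in ready:
                  print(f"prefetch config registers for {s.kick_id} from {s.data_location}")
                  print(f"run {s.work_type} kick {s.kick_id}")
                  del pending[s.kick_id]

      run_kicks([Slot("k0", "vertex", [], {"regs_at": "0x1000"}),
                 Slot("k1", "fragment", ["k0"], {"regs_at": "0x2000"})])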
  • Patent number: 12174716
    Abstract: An information handling system may include at least one central processing unit (CPU) and a plurality of special-purpose processing units. The information handling system may be configured to: receive information regarding cooling characteristics of the plurality of special-purpose processing units; and assign identification (ID) numbers to each of the plurality of special-purpose processing units in an order that is determined based at least in part on the cooling characteristics.
    Type: Grant
    Filed: July 22, 2021
    Date of Patent: December 24, 2024
    Assignee: Dell Products L.P.
    Inventors: Ramesh Radhakrishnan, Elizabeth Raymond
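    Illustrative sketch (not part of the patent): a minimal Python illustration of assigning IDs in an order driven by cooling characteristics, e.g. so the units that are hardest to cool receive the earliest IDs; the ordering rule and names are hypothetical.
      def assign_ids(units):
          # units: list of (name, cooling_headroom_celsius); lower headroom means
          # the unit is harder to cool, so it is assigned an earlier ID.
          ordered = sorted(units, key=lambda u: u[1])
          return {name: idx for idx, (name, _) in enumerate(ordered)}

      gpus = [("gpu_a", 15.0), ("gpu_b", 5.0), ("gpu_c", 10.0)]
      print(assign_ids(gpus))  # {'gpu_b': 0, 'gpu_c': 1, 'gpu_a': 2}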
  • Patent number: 12175303
    Abstract: Implementations are disclosed for adaptively reallocating computing resources of resource-constrained devices between tasks performed in situ by those resource-constrained devices. In various implementations, while the resource-constrained device is transported through an agricultural area, computing resource usage of the resource-constrained device may be monitored. Additionally, phenotypic output generated by one or more phenotypic tasks performed onboard the resource-constrained device may be monitored. Based on the monitored computing resource usage and the monitored phenotypic output, a state may be generated and processed based on a policy model to generate a probability distribution over a plurality of candidate reallocation actions.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: December 24, 2024
    Assignee: Deere & Company
    Inventors: Zhiqiang Yuan, Rhishikesh Pethe, Francis Ebong
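    Illustrative sketch (not part of the patent): a minimal Python illustration of the reallocation loop, combining monitored resource usage and phenotypic output into a state and feeding it to a policy that returns a probability distribution over candidate actions; the hand-coded softmax policy and all names are hypothetical stand-ins for the learned policy model.
      import math

      ACTIONS = ["shift_cpu_to_task_a", "shift_cpu_to_task_b", "no_change"]

      def policy(state):
          # Hypothetical policy model: score each action from the state and
          # return a softmax probability distribution over the actions.
          scores = [state["cpu_load"] - state["fruit_count_rate"],
                    state["fruit_count_rate"] - state["cpu_load"],
                    0.0]
          exps = [math.exp(s) for s in scores]
          total = sum(exps)
          return {a: e / total for a, e in zip(ACTIONS, exps)}

      state = {"cpu_load": 0.9, "fruit_count_rate": 0.4}  # monitored usage + phenotypic output
      print(policy(state))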
  • Patent number: 12169708
    Abstract: A gateway device is connected to a plurality of electronic controllers on-board a vehicle. The gateway device acquires firmware update information, which includes at least a part of updated firmware to be applied to a first electronic controller, patch data, and information indicating where to apply the patch data. When the gateway device determines that the first electronic controller does not include a firmware cache for performing a pre-update firmware cache operation, the gateway device executes a proxy process. In this regard, the gateway device requests the first electronic controller to transmit boot ROM data to the gateway device, merges the patch data and existing firmware to create updated boot ROM data with updated firmware, and transmits the updated boot ROM data to the first electronic controller that updates the boot ROM data and resets the first electronic controller with the updated firmware.
    Type: Grant
    Filed: October 27, 2023
    Date of Patent: December 17, 2024
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Yoshihiro Ujiie, Hideki Matsushima, Jun Anzai, Toshihisa Nakano, Tomoyuki Haga, Manabu Maeda, Takeshi Kishikawa
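    Illustrative sketch (not part of the patent): a minimal Python illustration of the proxy process, merging patch data into the existing firmware image at the offsets named in the update information to produce an updated boot ROM image; byte-offset patching and the names are hypothetical.
      def merge_patches(boot_rom: bytes, patches):
          # patches: list of (offset, patch_bytes) pairs from the update information.
          image = bytearray(boot_rom)
          for offset, data in patches:
              image[offset:offset + len(data)] = data
          return bytes(image)

      existing = b"\x00" * 16                          # firmware read back from the ECU
      update_info = [(4, b"\xAA\xBB"), (10, b"\xCC")]  # where to apply the patch data
      updated = merge_patches(existing, update_info)
      print(updated.hex())  # 00000000aabb00000000cc0000000000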
  • Patent number: 12164963
    Abstract: A system and method for detecting an artificial intelligence (AI) pipeline in a cloud computing environment. The method includes: inspecting a cloud computing environment for an AI pipeline component; detecting a connection between a first AI pipeline component and a second AI pipeline component; generating a representation of each of: the first AI pipeline component, the second AI pipeline component, and the connection, in a security database; and generating an AI pipeline based on the generated representations.
    Type: Grant
    Filed: November 16, 2023
    Date of Patent: December 10, 2024
    Assignee: Wiz, Inc.
    Inventors: Ami Luttwak, Alon Schindel, Amitai Cohen, Yinon Costica, Roy Reznik, Mattan Shalev
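    Illustrative sketch (not part of the patent): a minimal Python illustration of representing detected AI pipeline components and their connections as a graph in a security database; here the "database" is an in-memory dict and all component names are hypothetical.
      security_db = {"nodes": {}, "edges": []}

      def represent_component(name, kind):
          security_db["nodes"][name] = {"kind": kind}

      def represent_connection(src, dst):
          security_db["edges"].append((src, dst))

      def build_pipeline():
          # Reconstruct the pipeline by following the stored connections.
          return security_db["edges"]

      # Components found while inspecting the cloud environment (hypothetical names).
      represent_component("s3://training-data", "dataset")
      represent_component("training-job-1", "training")
      represent_connection("s3://training-data", "training-job-1")
      print(build_pipeline())  # [('s3://training-data', 'training-job-1')]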
  • Patent number: 12141616
    Abstract: Systems and methods presented herein provide examples for distributing resources in a UEM system. In one example, the UEM system can receive a request to check out a user device enrolled in the UEM system. The request can include a profile identifier (“ID”) of a user profile making the request and attributes of the user device. The UEM system can create a hash of group IDs associated with the profile ID. The UEM system can create a device context that includes the device attributes and the hash. The UEM system can then determine whether the device context matches a resource context. Resource contexts can identify a set of UEM resources associated with a device context. Where a match is found, the UEM system can provide the corresponding resources to the user device.
    Type: Grant
    Filed: July 1, 2021
    Date of Patent: November 12, 2024
    Assignee: Omnissa, LLC
    Inventors: Shanger Sivaramachandran, Prashanth Rao, Janani Vedapuri, Adarsh Subhash Chandra Jain
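    Illustrative sketch (not part of the patent): a minimal Python illustration of the check-out flow, hashing the profile's group IDs, building a device context from the hash and device attributes, and matching it against resource contexts to find the resources to deliver; the hashing scheme and names are hypothetical.
      import hashlib

      def group_hash(group_ids):
          # Stable hash over the sorted group IDs associated with the profile ID.
          joined = ",".join(sorted(group_ids))
          return hashlib.sha256(joined.encode()).hexdigest()

      def find_resources(device_context, resource_contexts):
          # Each resource context pairs a device context with a set of UEM resources.
          for ctx, resources in resource_contexts:
              if ctx == device_context:
                  return resources
          return []

      ctx = {"hash": group_hash(["sales", "emea"]), "os": "android"}
      resource_contexts = [({"hash": group_hash(["emea", "sales"]), "os": "android"},
                            ["vpn_profile", "email_app"])]
      print(find_resources(ctx, resource_contexts))  # ['vpn_profile', 'email_app']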
  • Patent number: 12136001
    Abstract: A computer system includes a plurality of compute clusters that are located at different geographical locations. Each compute cluster is powered by a local energy source at a geographical location of that compute cluster. Each local energy source has a pattern of energy supply that is variable over time based on an environmental factor. The computer system further includes a server system that executes a global scheduler that distributes virtual machines that perform compute tasks for server-executed software programs to the plurality of compute clusters of the distributed compute platform. To distribute virtual machines for a target server-executed software program, the global scheduler is configured to select a subset of compute clusters that have different complementary patterns of energy supply such that the subset of compute clusters aggregately provide a target compute resource availability for virtual machines for the target server-executed software program.
    Type: Grant
    Filed: September 2, 2021
    Date of Patent: November 5, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Shadi Abdollahian Noghabi, Ranveer Chandra, Anirudh Badam, Riyaz Mohamed Pishori, Shivkumar Kalyanaraman, Srinivasan Iyengar
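    Illustrative sketch (not part of the patent): a minimal Python illustration of selecting a subset of clusters whose hourly energy-supply patterns are complementary, i.e. together meet a target availability in every hour; the greedy selection and cluster names are hypothetical.
      def select_clusters(supply_by_cluster, target):
          # supply_by_cluster: {name: [hourly available compute units]}.
          # Greedily add clusters until every hour meets the target availability.
          hours = len(next(iter(supply_by_cluster.values())))
          chosen, total = [], [0.0] * hours
          for name, supply in supply_by_cluster.items():
              if all(t >= target for t in total):
                  break
              chosen.append(name)
              total = [t + s for t, s in zip(total, supply)]
          return chosen if all(t >= target for t in total) else None

      clusters = {"solar_west": [0, 8, 8, 0], "wind_north": [6, 0, 2, 7], "hydro": [3, 3, 3, 3]}
      print(select_clusters(clusters, target=6))  # ['solar_west', 'wind_north']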
  • Patent number: 12131238
    Abstract: A method for database management includes receiving an algorithm from a user. Based on the algorithm, a hierarchical dataflow graph (hDFG) may be generated. The method may further include generating an architecture for a chip based on the hDFG. The architecture for a chip may retrieve a data table from a database. The data table may be associated with the architecture for a chip. Finally, the algorithm may be executed against the data table, such that an action included in the algorithm is performed.
    Type: Grant
    Filed: October 12, 2022
    Date of Patent: October 29, 2024
    Assignee: Georgia Tech Research Corporation
    Inventors: Hadi Esmaeilzadeh, Divya Mahajan, Joon Kyung Kim
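    Illustrative sketch (not part of the patent): a minimal Python illustration of turning a simple query-like algorithm into a dataflow graph, where operations become nodes and data dependencies become edges that a later stage could map onto chip resources; the representation and names are hypothetical.
      def build_hdfg(ops):
          # ops: list of (node_name, operation, input_node_names).
          nodes = {name: {"op": op, "inputs": list(inputs)} for name, op, inputs in ops}
          edges = [(src, name) for name, _, inputs in ops for src in inputs]
          return {"nodes": nodes, "edges": edges}

      # "SELECT SUM(price) FROM orders WHERE qty > 10" as a tiny dataflow.
      hdfg = build_hdfg([
          ("scan", "read_table(orders)", []),
          ("filter", "qty > 10", ["scan"]),
          ("sum", "sum(price)", ["filter"]),
      ])
      print(hdfg["edges"])  # [('scan', 'filter'), ('filter', 'sum')]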
  • Patent number: 12124884
    Abstract: Examples described herein relate to a management node and a method for managing deployment of a workload. The management node may obtain values of resource labels related to platform characteristics of a plurality of worker nodes. Further, the management node may determine values of one or more custom resource labels for each of the plurality of worker nodes, wherein a value of each custom resource label of the one or more custom resource labels is determined based on values of a respective set of resource labels of the resource labels. Furthermore, the management node may receive a workload deployment request including a workload description of a workload. Moreover, the management node may deploy the workload on a worker node of the plurality of worker nodes based on the workload description and the values of the one or more custom resource labels.
    Type: Grant
    Filed: April 21, 2021
    Date of Patent: October 22, 2024
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Srinivasan Varadarajan Sahasranamam, Mohan Parthasarathy
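    Illustrative sketch (not part of the patent): a minimal Python illustration of deriving a custom resource label from a set of platform resource labels and deploying a workload onto a worker node whose labels satisfy the workload description; the derivation rule and names are hypothetical.
      def derive_custom_labels(node_labels):
          # Hypothetical rule: a node is "ml-capable" if it has a GPU and >= 64 GiB RAM.
          return {"ml-capable": node_labels.get("gpu", False) and node_labels.get("ram_gib", 0) >= 64}

      def place_workload(workload_description, nodes):
          for name, labels in nodes.items():
              custom = derive_custom_labels(labels)
              if all(custom.get(k) == v for k, v in workload_description.items()):
                  return name
          return None

      nodes = {"worker-1": {"gpu": False, "ram_gib": 32},
               "worker-2": {"gpu": True, "ram_gib": 128}}
      print(place_workload({"ml-capable": True}, nodes))  # worker-2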
  • Patent number: 12112208
    Abstract: A method for automated switching of workloads on computing devices to increase returns such as rewards or transaction fees in a cryptocurrency blockchain network is disclosed. A plurality of signals impacting the profitability of mining for a plurality of different cryptocurrencies and a plurality of different mining pools are monitored. In response to the plurality of signals indicating a different cryptocurrency and mining pool combination is more profitable, the computing device workload is automatically switched. Switching cost may be calculated and used to prevent unprofitable switches. The signals may be used to train a machine learning model that may be used to predict future profitability for automatic switching.
    Type: Grant
    Filed: July 22, 2021
    Date of Patent: October 8, 2024
    Assignee: Core Scientific, Inc.
    Inventors: Kristy-Leigh A. Minehan, Ganesh Balakrishnan, Evan Adams, Carla Cortez, Ian Ferreira
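    Illustrative sketch (not part of the patent): a minimal Python illustration of the switching rule, estimating profitability per cryptocurrency/pool combination from monitored signals and switching only when the gain over the current workload exceeds the switching cost; the profitability figures and names are hypothetical.
      def best_switch(current, candidates, switching_cost):
          # candidates: {(coin, pool): expected profit per hour}.
          best = max(candidates, key=candidates.get)
          gain = candidates[best] - candidates[current]
          return best if best != current and gain > switching_cost else current

      candidates = {("coin_a", "pool_1"): 1.00, ("coin_b", "pool_2"): 1.15, ("coin_c", "pool_3"): 0.90}
      print(best_switch(("coin_a", "pool_1"), candidates, switching_cost=0.05))  # ('coin_b', 'pool_2')
      print(best_switch(("coin_a", "pool_1"), candidates, switching_cost=0.25))  # ('coin_a', 'pool_1')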
  • Patent number: 12106137
    Abstract: A server-based desktop-virtual machines architecture may be extended to a client machine. In one embodiment, a user desktop is remotely accessed from a client system. The remote desktop is generated by a first virtual machine running on a server system, which may comprise one or more server computers. During execution of the first virtual machine, writes to a corresponding virtual disk are directed to a delta disk file or redo log. A copy of the virtual disk is created on the client system. When a user decides to “check out” his or her desktop, the first virtual machine is terminated (if it is running) and a copy of the delta disk is created on the client system. Once the delta disk is present on the client system, a second virtual machine can be started on the client system using the virtual disk and delta disk to provide local access to the user's desktop at the client system. This allows the user to then access his or her desktop without being connected to a network.
    Type: Grant
    Filed: April 26, 2023
    Date of Patent: October 1, 2024
    Assignee: Omnissa, LLC
    Inventors: Yaron Halperin, Jad Chamcham, Christian Matthew Leroy, Gerald Cheong, Matthew Eccleston, Ji Feng
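    Illustrative sketch (not part of the patent): a minimal Python illustration of the check-out sequence, stopping the server-side virtual machine if it is running, copying its delta disk next to the previously copied base virtual disk on the client, and starting a local virtual machine from the pair; everything is modeled with plain dictionaries and hypothetical names.
      def check_out_desktop(server_vm, client):
          # Stop the server-side VM so the delta disk (redo log) is quiescent.
          if server_vm["running"]:
              server_vm["running"] = False
          # The base virtual disk was copied to the client earlier; add the delta disk.
          client["delta_disk"] = dict(server_vm["delta_disk"])
          # Start a local VM from base disk + delta disk for offline access.
          client["local_vm_running"] = True
          return client

      server_vm = {"running": True, "delta_disk": {"writes": 42}}
      client = {"base_disk": "desktop.vmdk", "delta_disk": None, "local_vm_running": False}
      print(check_out_desktop(server_vm, client))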
  • Patent number: 12106150
    Abstract: The present invention relates to a system for data analytics in a network between one or more local device(s) (130) and a cloud computing platform (120), in which data collected and/or stored on the local device(s) (130) and/or stored on the cloud computing platform (120) are processed by an analytical algorithm (A) which is subdivided into at least two sub-algorithms (SA1, SA2), wherein one sub-algorithm (SA1) is executed on the local device(s) (130) and the other sub-algorithm (SA2) is executed on the cloud computing platform (120).
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: October 1, 2024
    Assignee: Siemens Aktiengesellschaft
    Inventor: Amit Verma
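    Illustrative sketch (not part of the patent): a minimal Python illustration of splitting an analytical algorithm A into sub-algorithm SA1 on the local device and SA2 on the cloud platform, with the device pre-aggregating raw readings and shipping only a summary that the cloud sub-algorithm finishes; the particular split and names are hypothetical.
      def sa1_local(readings):
          # SA1, runs on the local device (130): reduce raw samples to a compact summary.
          return {"count": len(readings), "total": sum(readings), "max": max(readings)}

      def sa2_cloud(summary):
          # SA2, runs on the cloud computing platform (120): finish the analysis.
          return {"mean": summary["total"] / summary["count"], "peak": summary["max"]}

      readings = [3.1, 2.9, 3.4, 3.0]        # collected on the device
      print(sa2_cloud(sa1_local(readings)))  # only the summary crosses the network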
  • Patent number: 12093155
    Abstract: Certain aspects of the present disclosure provide techniques for improved hardware utilization. An input data tensor is divided into a first plurality of sub-tensors, and a plurality of logical sub-arrays in a physical multiply-and-accumulate (MAC) array is identified. For each respective sub-tensor of the first plurality of sub-tensors, the respective sub-tensor is mapped to a respective logical sub-array of the plurality of logical sub-arrays, and the respective sub-tensor is processed using the respective logical sub-array.
    Type: Grant
    Filed: September 24, 2021
    Date of Patent: September 17, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Hee Jun Park, Bohuslav Rychlik, Niraj Shantilal Paliwal
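    Illustrative sketch (not part of the patent): a minimal Python illustration of dividing an input tensor into sub-tensors and mapping each onto a logical sub-array of a MAC array, modeled here as splitting a matrix by rows and multiply-accumulating each slice independently; the split and names are hypothetical.
      def split_rows(tensor, parts):
          # Divide the input tensor into 'parts' row-wise sub-tensors.
          step = len(tensor) // parts
          return [tensor[i * step:(i + 1) * step] for i in range(parts)]

      def mac_sub_array(sub_tensor, weights):
          # Each logical sub-array multiplies-and-accumulates its own slice.
          return [sum(x * w for x, w in zip(row, weights)) for row in sub_tensor]

      tensor = [[1, 2], [3, 4], [5, 6], [7, 8]]
      weights = [10, 1]
      results = [mac_sub_array(st, weights) for st in split_rows(tensor, parts=2)]
      print(results)  # [[12, 34], [56, 78]]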
  • Patent number: 12086632
    Abstract: A task manager tightly coupled to a programmable real-time unit (PRU), the task manager configured to: detect a first event; assert, during a first clock cycle, a request to the PRU that the PRU perform a second task; receive an acknowledgement of the request from the PRU during the first clock cycle; save, during the first clock cycle, a first address in a memory of the PRU, the first address corresponding to a first task of the PRU, the first address present in a current program counter of the PRU; load a second address of the memory into a second program counter during the first clock cycle, the second address corresponding to the second task; and load, during a second clock cycle, the second address into the current program counter, wherein the second clock cycle immediately follows the first clock cycle.
    Type: Grant
    Filed: March 21, 2022
    Date of Patent: September 10, 2024
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Thomas Anton Leyrer, William Cronin Wallace
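    Illustrative sketch (not part of the patent): a minimal Python illustration of the preemption mechanism in software terms, saving the current program counter for the running task, staging the new task's entry address, and swapping it in on the following step; cycle-accurate hardware behavior is not modeled and all names are hypothetical.
      class TaskManager:
          def __init__(self, task_table):
              self.task_table = task_table      # task name -> entry address
              self.saved_pc = {}                # task name -> saved program counter
              self.current_task, self.pc = None, None
              self.next_pc = None

          def on_event(self, new_task):
              # "First clock cycle": save the running task's PC, stage the new address.
              if self.current_task is not None:
                  self.saved_pc[self.current_task] = self.pc
              self.next_pc = self.task_table[new_task]
              # "Second clock cycle": the staged address becomes the current PC.
              self.current_task, self.pc, self.next_pc = new_task, self.next_pc, None

      tm = TaskManager({"task1": 0x100, "task2": 0x200})
      tm.current_task, tm.pc = "task1", 0x104
      tm.on_event("task2")
      print(hex(tm.pc), tm.saved_pc)  # 0x200 {'task1': 260}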
  • Patent number: 12073242
    Abstract: A method for containerized workload scheduling can include determining a network state for a first hypervisor in a virtual computing cluster (VCC). The method can further include determining a network state for a second hypervisor. Containerized workload scheduling can further include deploying a container to run a containerized workload on a virtual computing instance (VCI) deployed on the first hypervisor or the second hypervisor based, at least in part, on the determined network state for the first hypervisor and the second hypervisor.
    Type: Grant
    Filed: December 21, 2022
    Date of Patent: August 27, 2024
    Assignee: VMware LLC
    Inventors: Aditi Ghag, Pranshu Jain, Yaniv Ben-Itzhak, Jianjun Shen
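    Illustrative sketch (not part of the patent): a minimal Python illustration of the scheduling decision, comparing a network-state score for each hypervisor and deploying the container to a VCI on the hypervisor with the healthier network; the scoring rule and names are hypothetical.
      def pick_hypervisor(states):
          # states: {hypervisor: {"latency_ms": ..., "drop_rate": ...}}.
          def score(s):
              return s["latency_ms"] + 1000 * s["drop_rate"]  # lower is better
          return min(states, key=lambda h: score(states[h]))

      states = {"hv1": {"latency_ms": 4.0, "drop_rate": 0.02},
                "hv2": {"latency_ms": 6.0, "drop_rate": 0.001}}
      target = pick_hypervisor(states)
      print(f"deploy containerized workload to a VCI on {target}")  # hv2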
  • Patent number: 12073254
    Abstract: A system for providing synchronous access to hardware resources includes a first network interface element to receive a network time signal from a data communication network and a memory to store a sequence of one or more instructions selected from an instruction set of a first processing circuit. The sequence of one or more instructions includes a first instruction that is configured to synchronize execution of a second instruction of the sequence of one or more instructions with the network time signal. The system further includes the first processing circuit to use the first instruction and a timing parameter associated with the second instruction to execute the second instruction in synchrony with the network time signal.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: August 27, 2024
    Assignee: Analog Devices International Unlimited Company
    Inventors: David Kenneth Bydeley, Gary Wayne Ng, Gordon Alexander Charles
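    Illustrative sketch (not part of the patent): a minimal Python illustration of the synchronize-then-execute pattern, where a first "sync" step blocks until a network-derived time plus a timing parameter is reached and the second instruction then runs aligned to that time; real hardware would do this in the instruction pipeline, and all names are hypothetical.
      import time

      def sync_to(network_time_fn, start_time, offset_s):
          # First instruction: wait until network time reaches start_time + offset.
          while network_time_fn() < start_time + offset_s:
              time.sleep(0.001)

      def execute_synchronized(network_time_fn, start_time, offset_s, op):
          sync_to(network_time_fn, start_time, offset_s)
          return op()  # second instruction, executed in synchrony with the time signal

      t0 = time.time()
      result = execute_synchronized(time.time, t0, 0.05, lambda: "toggle DAC output")
      print(result)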
  • Patent number: 12067419
    Abstract: Load balancing processes are performed in an observability pipeline system comprising a plurality of computing resources. In some aspects, the observability pipeline system defines a leader role and worker roles. A plurality of computing jobs each include computing tasks associated with event data. The leader role dispatches the computing tasks to the worker roles according to a least in-flight task dispatch criteria, which includes iteratively: identifying an available worker role; identifying one or more incomplete computing jobs; selecting, from the one or more incomplete computing jobs, a computing job that has the least number of in-flight computing tasks currently being executed in the observability pipeline system; identifying a next computing task from the selected computing job; and dispatching the next computing task to the available worker role. The worker roles execute the computing tasks by applying an observability pipeline process to the event data associated with the respective computing task.
    Type: Grant
    Filed: June 27, 2023
    Date of Patent: August 20, 2024
    Assignee: Cribl, Inc.
    Inventors: Dritan Bitincka, Ledion Bitincka, Nicholas Robert Romito, Clint Sharp
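    Illustrative sketch (not part of the patent): a minimal Python illustration of the least in-flight dispatch criteria, where an available worker is handed the next task from the incomplete job with the fewest tasks currently in flight; the job/task structures and names are hypothetical.
      def dispatch_next(jobs, available_worker):
          # jobs: {job_id: {"pending": [tasks], "in_flight": int}}.
          incomplete = {j: d for j, d in jobs.items() if d["pending"]}
          if not incomplete or available_worker is None:
              return None
          job_id = min(incomplete, key=lambda j: incomplete[j]["in_flight"])
          task = jobs[job_id]["pending"].pop(0)
          jobs[job_id]["in_flight"] += 1
          return available_worker, job_id, task

      jobs = {"job_a": {"pending": ["a1", "a2"], "in_flight": 3},
              "job_b": {"pending": ["b1"], "in_flight": 1}}
      print(dispatch_next(jobs, "worker-7"))  # ('worker-7', 'job_b', 'b1')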
  • Patent number: 12033000
    Abstract: In one embodiment, an apparatus comprises a communication interface to communicate over a network, and a processor. The processor is to: receive a workload provisioning request from a user, wherein the workload provisioning request comprises information associated with a workload, a network topology, and a plurality of potential hardware choices for deploying the workload over the network topology; receive hardware performance information for the plurality of potential hardware choices from one or more hardware providers; generate a task dependency graph associated with the workload; generate a device connectivity graph associated with the network topology; select, based on the task dependency graph and the device connectivity graph, one or more hardware choices from the plurality of potential hardware choices; and provision a plurality of resources for deploying the workload over the network topology, wherein the plurality of resources are provisioned based on the one or more hardware choices.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: July 9, 2024
    Assignee: Intel Corporation
    Inventor: Shao-Wen Yang
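    Illustrative sketch (not part of the patent): a minimal Python illustration of the provisioning decision, choosing for each task in the dependency graph the best-performing hardware option reported by the providers that is reachable in the device connectivity graph; the cost model and names are hypothetical.
      def choose_hardware(task_graph, connectivity, hw_perf):
          # task_graph: {task: [dependencies]}; connectivity: set of reachable devices;
          # hw_perf: {task: {device: latency_ms}} from the hardware providers.
          choices = {}
          for task in task_graph:
              options = {dev: lat for dev, lat in hw_perf[task].items() if dev in connectivity}
              choices[task] = min(options, key=options.get)
          return choices

      task_graph = {"decode": [], "infer": ["decode"]}
      connectivity = {"edge_cpu", "edge_gpu"}
      hw_perf = {"decode": {"edge_cpu": 5, "edge_gpu": 8},
                 "infer": {"edge_cpu": 40, "edge_gpu": 12, "cloud_tpu": 6}}
      print(choose_hardware(task_graph, connectivity, hw_perf))
      # {'decode': 'edge_cpu', 'infer': 'edge_gpu'}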