Patents Examined by Eric C. Wai
  • Patent number: 12164963
    Abstract: A system and method for detecting an artificial intelligence (AI) pipeline in a cloud computing environment. The method includes: inspecting a cloud computing environment for an AI pipeline component; detecting a connection between a first AI pipeline component and a second AI pipeline component; generating a representation of each of: the first AI pipeline component, the second AI pipeline component, and the connection, in a security database; and generating an AI pipeline based on the generated representations.
    Type: Grant
    Filed: November 16, 2023
    Date of Patent: December 10, 2024
    Assignee: Wiz, Inc.
    Inventors: Ami Luttwak, Alon Schindel, Amitai Cohen, Yinon Costica, Roy Reznik, Mattan Shalev
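    Illustrative sketch: a minimal, hypothetical Python sketch of how detected pipeline components and connections might be represented in a security database and assembled into a pipeline, in the spirit of the abstract above. The Component and SecurityDatabase names and fields are assumptions for illustration, not taken from the patent.

      # Hypothetical representation of AI pipeline components, their
      # connections, and a pipeline derived from those representations.
      from dataclasses import dataclass, field

      @dataclass(frozen=True)
      class Component:
          name: str   # e.g. a training notebook or a model bucket (assumed)
          kind: str   # e.g. "notebook", "model-store", "endpoint"

      @dataclass
      class SecurityDatabase:
          nodes: set = field(default_factory=set)
          edges: set = field(default_factory=set)

          def add_component(self, c: Component):
              self.nodes.add(c)

          def add_connection(self, src: Component, dst: Component):
              self.edges.add((src, dst))

          def pipeline(self, start: Component):
              """Walk stored connections to assemble a pipeline from `start`."""
              ordered, current = [start], start
              while True:
                  nxt = next((d for s, d in self.edges if s == current), None)
                  if nxt is None:
                      return ordered
                  ordered.append(nxt)
                  current = nxt

      db = SecurityDatabase()
      notebook = Component("training-notebook", "notebook")
      store = Component("model-bucket", "model-store")
      db.add_component(notebook)
      db.add_component(store)
      db.add_connection(notebook, store)
      print([c.name for c in db.pipeline(notebook)])  # ['training-notebook', 'model-bucket']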
  • Patent number: 12141616
    Abstract: Systems and methods presented herein provide examples for distributing resources in a unified endpoint management (UEM) system. In one example, the UEM system can receive a request to check out a user device enrolled in the UEM system. The request can include a profile identifier (“ID”) of a user profile making the request and attributes of the user device. The UEM system can create a hash of group IDs associated with the profile ID. The UEM system can create a device context that includes the device attributes and the hash. The UEM system can then determine if the device context matches a resource context. Resource contexts can identify a set of UEM resources associated with a device context. Where a match is found, the UEM system can provide the corresponding resources to the user device.
    Type: Grant
    Filed: July 1, 2021
    Date of Patent: November 12, 2024
    Assignee: Omnissa, LLC
    Inventors: Shanger Sivaramachandran, Prashanth Rao, Janani Vedapuri, Adarsh Subhash Chandra Jain
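    Illustrative sketch: a small, hypothetical Python sketch of the check-out flow described above: hashing a profile's group IDs, building a device context, and matching it against resource contexts. Function and field names (group_hash, resources_for, "attrs") are assumptions, not the patent's terminology.

      # Hypothetical device-context matching for a UEM check-out request.
      import hashlib

      def group_hash(group_ids):
          """Order-independent hash over the group IDs tied to a profile ID."""
          return hashlib.sha256(",".join(sorted(group_ids)).encode()).hexdigest()

      def build_device_context(device_attrs, group_ids):
          return {"attrs": dict(device_attrs), "group_hash": group_hash(group_ids)}

      def resources_for(device_context, resource_contexts):
          """Return the resources of the first resource context that matches."""
          for ctx in resource_contexts:
              if (ctx["group_hash"] == device_context["group_hash"]
                      and ctx["attrs"].items() <= device_context["attrs"].items()):
                  return ctx["resources"]
          return []

      resource_contexts = [{
          "group_hash": group_hash(["sales", "emea"]),
          "attrs": {"platform": "android"},
          "resources": ["vpn-profile", "crm-app"],
      }]
      ctx = build_device_context({"platform": "android", "os": "14"}, ["emea", "sales"])
      print(resources_for(ctx, resource_contexts))  # ['vpn-profile', 'crm-app']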
  • Patent number: 12136001
    Abstract: A computer system includes a plurality of compute clusters that are located at different geographical locations. Each compute cluster is powered by a local energy source at a geographical location of that compute cluster. Each local energy source has a pattern of energy supply that is variable over time based on an environmental factor. The computer system further includes a server system that executes a global scheduler that distributes virtual machines that perform compute tasks for server-executed software programs to the plurality of compute clusters of the distributed compute platform. To distribute virtual machines for a target server-executed software program, the global scheduler is configured to select a subset of compute clusters that have different complementary patterns of energy supply such that the subset of compute clusters aggregately provide a target compute resource availability for virtual machines for the target server-executed software program.
    Type: Grant
    Filed: September 2, 2021
    Date of Patent: November 5, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Shadi Abdollahian Noghabi, Ranveer Chandra, Anirudh Badam, Riyaz Mohamed Pishori, Shivkumar Kalyanaraman, Srinivasan Iyengar
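    Illustrative sketch: a hypothetical greedy selection of compute clusters whose energy-supply patterns complement each other so that together they meet a target hourly availability, loosely following the abstract above. The cluster names and 24-hour supply profiles are invented example data.

      # Hypothetical greedy selection of clusters with complementary supply.
      def select_clusters(supply_by_cluster, target):
          """supply_by_cluster: name -> 24 hourly capacity values.
          Repeatedly add the cluster that best fills the remaining shortfall."""
          hours = len(next(iter(supply_by_cluster.values())))
          chosen, total = [], [0.0] * hours
          remaining = dict(supply_by_cluster)
          while remaining and any(t < target for t in total):
              def gain(name):
                  pattern = remaining[name]
                  return sum(min(target, total[h] + pattern[h]) - min(target, total[h])
                             for h in range(hours))
              best = max(remaining, key=gain)
              chosen.append(best)
              pattern = remaining.pop(best)
              total = [total[h] + pattern[h] for h in range(hours)]
          return chosen

      clusters = {  # invented example supply profiles (per hour of day)
          "solar-west": [0]*6 + [2, 6, 8, 8, 8, 8, 8, 8, 6, 4, 2] + [0]*7,
          "wind-north": [6, 6, 7, 7, 6, 5, 4, 2, 1, 1, 1, 1, 1, 1, 2, 3, 5, 6, 7, 7, 7, 6, 6, 6],
          "hydro-east": [3] * 24,
      }
      print(select_clusters(clusters, target=8))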
  • Patent number: 12131238
    Abstract: A method for database management that includes receiving an algorithm from a user. Based on the algorithm, a hierarchical dataflow graph (hDFG) may be generated. The method may further include generating an architecture for a chip based on the hDFG. The architecture for a chip may retrieve a data table from a database. The data table may be associated with the architecture for a chip. Finally, the algorithm may be executed against the data table, such that an action included in the algorithm is performed.
    Type: Grant
    Filed: October 12, 2022
    Date of Patent: October 29, 2024
    Assignee: Georgia Tech Research Corporation
    Inventors: Hadi Esmaeilzadeh, Divya Mahajan, Joon Kyung Kim
  • Patent number: 12124884
    Abstract: Examples described herein relate to a management node and a method for managing deployment of a workload. The management node may obtain values of resource labels related to platform characteristics of a plurality of worker nodes. Further, the management node may determine values of one or more custom resource labels for each of the plurality of worker nodes, wherein a value of each custom resource label of the one or more custom resource labels is determined based on values of a respective set of resource labels of the resource labels. Furthermore, the management node may receive a workload deployment request including a workload description of a workload. Moreover, the management node may deploy the workload on a worker node of the plurality of worker nodes based on the workload description and the values of the one or more custom resource labels.
    Type: Grant
    Filed: April 21, 2021
    Date of Patent: October 22, 2024
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Srinivasan Varadarajan Sahasranamam, Mohan Parthasarathy
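    Illustrative sketch: a hypothetical Python sketch of deriving custom resource labels from base platform labels and placing a workload on a matching worker node. The specific label names (gpu.count, memory.gb, accelerated, high-memory) are assumptions for illustration.

      # Hypothetical custom-label derivation and workload placement.
      def derive_custom_labels(base_labels):
          """Each custom label is computed from a set of base resource labels."""
          return {
              "accelerated": base_labels.get("gpu.count", 0) > 0,
              "high-memory": base_labels.get("memory.gb", 0) >= 256,
          }

      def pick_node(workload_description, nodes):
          """Deploy on the first node whose custom labels satisfy the description."""
          for name, base_labels in nodes.items():
              custom = derive_custom_labels(base_labels)
              if all(custom.get(k) == v for k, v in workload_description.items()):
                  return name
          return None

      nodes = {
          "worker-1": {"gpu.count": 0, "memory.gb": 128},
          "worker-2": {"gpu.count": 4, "memory.gb": 512},
      }
      print(pick_node({"accelerated": True, "high-memory": True}, nodes))  # worker-2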
  • Patent number: 12112208
    Abstract: A method for automated switching of workloads on computing devices to increase returns such as rewards or transaction fees in a cryptocurrency blockchain network is disclosed. A plurality of signals impacting the profitability of mining for a plurality of different cryptocurrencies and a plurality of different mining pools are monitored. In response to the plurality of signals indicating a different cryptocurrency and mining pool combination is more profitable, the computing device workload is automatically switched. Switching cost may be calculated and used to prevent unprofitable switches. The signals may be used to train a machine learning model that may be used to predict future profitability for automatic switching.
    Type: Grant
    Filed: July 22, 2021
    Date of Patent: October 8, 2024
    Assignee: Core Scientific, Inc.
    Inventors: Kristy-Leigh A. Minehan, Ganesh Balakrishnan, Evan Adams, Carla Cortez, Ian Ferreira
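    Illustrative sketch: a hypothetical Python sketch of the switching decision described above: pick the most profitable coin/pool combination from monitored signals and switch only when the projected gain exceeds the switching cost. Profit figures and the 24-hour horizon are invented example values.

      # Hypothetical profitability-based workload switching with a cost check.
      def best_combination(signals):
          """signals: dicts with coin, pool, and estimated profit per hour."""
          return max(signals, key=lambda s: s["profit_per_hour"])

      def should_switch(current, candidate, switching_cost, horizon_hours=24):
          gain = (candidate["profit_per_hour"] - current["profit_per_hour"]) * horizon_hours
          return gain > switching_cost          # avoid unprofitable switches

      current = {"coin": "BTC", "pool": "pool-a", "profit_per_hour": 1.00}
      signals = [current,
                 {"coin": "ETC", "pool": "pool-b", "profit_per_hour": 1.05}]
      candidate = best_combination(signals)
      if candidate is not current and should_switch(current, candidate, switching_cost=0.8):
          print("switch to", candidate["coin"], "on", candidate["pool"])
      else:
          print("stay on current workload")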
  • Patent number: 12106137
    Abstract: A server-based desktop virtual machine architecture may be extended to a client machine. In one embodiment, a user desktop is remotely accessed from a client system. The remote desktop is generated by a first virtual machine running on a server system, which may comprise one or more server computers. During execution of the first virtual machine, writes to a corresponding virtual disk are directed to a delta disk file or redo log. A copy of the virtual disk is created on the client system. When a user decides to “check out” his or her desktop, the first virtual machine is terminated (if it is running) and a copy of the delta disk is created on the client system. Once the delta disk is present on the client system, a second virtual machine can be started on the client system using the virtual disk and delta disk to provide local access to the user's desktop at the client system. This allows the user to then access his or her desktop without being connected to a network.
    Type: Grant
    Filed: April 26, 2023
    Date of Patent: October 1, 2024
    Assignee: Omnissa, LLC
    Inventors: Yaron Halperin, Jad Chamcham, Christian Matthew Leroy, Gerald Cheong, Matthew Eccleston, Ji Feng
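    Illustrative sketch: a hypothetical Python outline of the check-out sequence described above: copy the base virtual disk to the client, stop the server-side virtual machine, copy the delta disk, then start a local virtual machine from both. File names and the stop/start callbacks are placeholders, not the product's actual interfaces.

      # Hypothetical check-out flow for a server-hosted desktop VM.
      import shutil
      from pathlib import Path

      def check_out_desktop(server_dir: Path, client_dir: Path,
                            stop_server_vm, start_local_vm):
          client_dir.mkdir(parents=True, exist_ok=True)
          # 1. Copy the large, mostly static base virtual disk to the client.
          shutil.copy2(server_dir / "desktop.vmdk", client_dir / "desktop.vmdk")
          # 2. Terminate the server-side VM so the delta disk stops changing.
          stop_server_vm()
          # 3. Copy the small delta disk / redo log that holds recent writes.
          shutil.copy2(server_dir / "desktop-delta.vmdk",
                       client_dir / "desktop-delta.vmdk")
          # 4. Start a local VM from the base disk plus the delta disk, giving
          #    the user offline access to the same desktop.
          start_local_vm(client_dir / "desktop.vmdk", client_dir / "desktop-delta.vmdk")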
  • Patent number: 12106150
    Abstract: The present invention relates to a system for data analytics in a network between one or more local device(s) (130) and a cloud computing platform (120), in which data collected and/or stored on the local device(s) (130) and/or stored on the cloud computing platform (120) are processed by an analytical algorithm (A) which is subdivided into at least two sub-algorithms (SA1, SA2), wherein one sub-algorithm (SA1) is executed on the local device(s) (130) and the other sub-algorithm (SA2) is executed on the cloud computing platform (120).
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: October 1, 2024
    Assignee: Siemens Aktiengesellschaft
    Inventor: Amit Verma
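    Illustrative sketch: a hypothetical Python split of an analytical algorithm A into a device-side sub-algorithm SA1 (which reduces raw data locally) and a cloud-side sub-algorithm SA2 (which combines the partial results). The statistics chosen here are invented; the patent does not prescribe a particular algorithm.

      # Hypothetical split of algorithm A into SA1 (local device) and SA2 (cloud).
      def sa1_local(raw_samples):
          """Device-side sub-algorithm: pre-aggregate raw sensor data."""
          return {"count": len(raw_samples), "sum": sum(raw_samples),
                  "max": max(raw_samples)}

      def sa2_cloud(partial_results):
          """Cloud-side sub-algorithm: combine partial results from many devices."""
          total = sum(p["sum"] for p in partial_results)
          count = sum(p["count"] for p in partial_results)
          return {"mean": total / count,
                  "max": max(p["max"] for p in partial_results)}

      device_a = sa1_local([1.0, 2.0, 3.0])   # runs on a local device (example)
      device_b = sa1_local([4.0, 5.0])
      print(sa2_cloud([device_a, device_b]))  # runs on the cloud platform (example)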
  • Patent number: 12093155
    Abstract: Certain aspects of the present disclosure provide techniques for improved hardware utilization. An input data tensor is divided into a first plurality of sub-tensors, and a plurality of logical sub-arrays in a physical multiply-and-accumulate (MAC) array is identified. For each respective sub-tensor of the first plurality of sub-tensors, the respective sub-tensor is mapped to a respective logical sub-array of the plurality of logical sub-arrays, and the respective sub-tensor is processed using the respective logical sub-array.
    Type: Grant
    Filed: September 24, 2021
    Date of Patent: September 17, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Hee Jun Park, Bohuslav Rychlik, Niraj Shantilal Paliwal
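    Illustrative sketch: a hypothetical NumPy sketch of dividing an input tensor into sub-tensors and giving each one to a logical sub-array of a MAC array; here the sub-arrays are simulated as independent matrix multiplications whose results are reassembled. Shapes and the row-wise split are assumptions for illustration.

      # Hypothetical sub-tensor to logical MAC sub-array mapping (simulated).
      import numpy as np

      def process_on_mac_array(inputs, weights, num_subarrays=4):
          """Split the input tensor row-wise; each logical sub-array multiplies
          its sub-tensor by the shared weights, then results are reassembled."""
          subtensors = np.array_split(inputs, num_subarrays, axis=0)
          partials = [sub @ weights for sub in subtensors]  # one sub-array each
          return np.concatenate(partials, axis=0)

      inputs = np.random.rand(8, 16)
      weights = np.random.rand(16, 4)
      out = process_on_mac_array(inputs, weights)
      assert np.allclose(out, inputs @ weights)   # same result, 4-way parallel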
  • Patent number: 12086632
    Abstract: A task manager tightly coupled to a programmable real-time unit (PRU), the task manager configured to: detect a first event; assert a request to the PRU during a first clock cycle that the PRU perform a second task; receive an acknowledgement of the request from the PRU during the first clock cycle; save a first address in a memory of the PRU during the first clock cycle, the first address corresponding to a first task of the PRU, the first address present in a current program counter of the PRU; load a second address of the memory into a second program counter during the first clock cycle, the second address corresponding to the second task; and load, during a second clock cycle, the second address into the current program counter, wherein the second clock cycle immediately follows the first clock cycle.
    Type: Grant
    Filed: March 21, 2022
    Date of Patent: September 10, 2024
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Thomas Anton Leyrer, William Cronin Wallace
  • Patent number: 12073254
    Abstract: A system for providing synchronous access to hardware resources includes a first network interface element to receive a network time signal from a data communication network and a memory to store a sequence of one or more instructions selected from an instruction set of the first processing circuit. The sequence of one or more instructions includes a first instruction that is configured to synchronize execution of a second instruction of the sequence of one or more instructions with the network time signal. The system further includes a first processing circuit to use the first instruction and a timing parameter associated with the second instruction to execute the second instruction in synchrony with the network time signal.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: August 27, 2024
    Assignee: Analog Devices International Unlimited Company
    Inventors: David Kenneth Bydeley, Gary Wayne Ng, Gordon Alexander Charles
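    Illustrative sketch: a hypothetical software analogue of the idea above, where a first "sync" step delays execution of a second operation until a time derived from a network time signal. Here the local clock stands in for the network time signal; the real invention operates at the instruction level in hardware.

      # Hypothetical time-synchronized execution using a network time source.
      import time

      def sync_to_network_time(network_time_fn, start_at):
          """First step: wait until the (network) clock reaches start_at."""
          while network_time_fn() < start_at:
              time.sleep(0.0005)

      def run_synchronized(network_time_fn, start_at, operation):
          sync_to_network_time(network_time_fn, start_at)
          return operation()   # second step, executed in synchrony with the clock

      start = time.time() + 0.05          # timing parameter (example value)
      run_synchronized(time.time, start, lambda: print("executed at", time.time()))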
  • Patent number: 12073242
    Abstract: A method for containerized workload scheduling can include determining a network state for a first hypervisor in a virtual computing cluster (VCC). The method can further include determining a network state for a second hypervisor. Containerized workload scheduling can further include deploying a container to run a containerized workload on a virtual computing instance (VCI) deployed on the first hypervisor or the second hypervisor based, at least in part, on the determined network state for the first hypervisor and the second hypervisor.
    Type: Grant
    Filed: December 21, 2022
    Date of Patent: August 27, 2024
    Assignee: VMware LLC
    Inventors: Aditi Ghag, Pranshu Jain, Yaniv Ben-Itzhak, Jianjun Shen
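    Illustrative sketch: a hypothetical Python sketch of choosing between two hypervisors based on their network state before deploying a container onto a VCI. The scoring function and the latency/packet-loss figures are invented examples, not the patented method's actual metrics.

      # Hypothetical network-state-aware placement of a containerized workload.
      def network_score(state):
          """Lower is better: weight observed latency and packet loss."""
          return state["latency_ms"] + 1000 * state["packet_loss"]

      def choose_hypervisor(states):
          return min(states, key=lambda name: network_score(states[name]))

      states = {
          "hypervisor-1": {"latency_ms": 4.0, "packet_loss": 0.01},
          "hypervisor-2": {"latency_ms": 9.0, "packet_loss": 0.0},
      }
      print("deploy container on a VCI on", choose_hypervisor(states))  # hypervisor-2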
  • Patent number: 12067419
    Abstract: Load balancing processes are performed in an observability pipeline system comprising a plurality of computing resources. In some aspects, the observability pipeline system defines a leader role and worker roles. A plurality of computing jobs each include computing tasks associated with event data. The leader role dispatches the computing tasks to the worker roles according to a least in-flight task dispatch criteria, which includes iteratively: identifying an available worker role; identifying one or more incomplete computing jobs; selecting, from the one or more incomplete computing jobs, a computing job that has the least number of in-flight computing tasks currently being executed in the observability pipeline system; identifying a next computing task from the selected computing job; and dispatching the next computing task to the available worker role. The worker roles execute the computing tasks by applying an observability pipeline process to the event data associated with the respective computing task.
    Type: Grant
    Filed: June 27, 2023
    Date of Patent: August 20, 2024
    Assignee: Cribl, Inc.
    Inventors: Dritan Bitincka, Ledion Bitincka, Nicholas Robert Romito, Clint Sharp
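    Illustrative sketch: a hypothetical Python version of the leader's dispatch loop: whenever a worker is available, pick the incomplete job with the fewest in-flight tasks and hand the worker that job's next task. Data structures and names are assumptions for illustration.

      # Hypothetical least in-flight task dispatch loop for an observability pipeline.
      from collections import deque

      def dispatch(jobs, available_workers):
          """jobs: name -> {'pending': deque of tasks, 'in_flight': int}."""
          assignments, workers = [], deque(available_workers)
          while workers:
              incomplete = [j for j, s in jobs.items() if s["pending"]]
              if not incomplete:
                  break
              job = min(incomplete, key=lambda j: jobs[j]["in_flight"])
              task = jobs[job]["pending"].popleft()
              jobs[job]["in_flight"] += 1          # task is now in flight
              assignments.append((workers.popleft(), job, task))
          return assignments

      jobs = {
          "job-a": {"pending": deque(["a1", "a2"]), "in_flight": 3},
          "job-b": {"pending": deque(["b1"]), "in_flight": 0},
      }
      print(dispatch(jobs, ["worker-1", "worker-2"]))
      # [('worker-1', 'job-b', 'b1'), ('worker-2', 'job-a', 'a1')]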
  • Patent number: 12033000
    Abstract: In one embodiment, an apparatus comprises a communication interface to communicate over a network, and a processor. The processor is to: receive a workload provisioning request from a user, wherein the workload provisioning request comprises information associated with a workload, a network topology, and a plurality of potential hardware choices for deploying the workload over the network topology; receive hardware performance information for the plurality of potential hardware choices from one or more hardware providers; generate a task dependency graph associated with the workload; generate a device connectivity graph associated with the network topology; select, based on the task dependency graph and the device connectivity graph, one or more hardware choices from the plurality of potential hardware choices; and provision a plurality of resources for deploying the workload over the network topology, wherein the plurality of resources are provisioned based on the one or more hardware choices.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: July 9, 2024
    Assignee: Intel Corporation
    Inventor: Shao-Wen Yang
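    Illustrative sketch: a hypothetical Python sketch of one part of the selection step: scoring candidate hardware choices against the workload's task dependency graph using provider-reported performance, then provisioning the best choice. It ignores the device connectivity graph for brevity; all names and figures are invented.

      # Hypothetical hardware selection from a task graph and performance reports.
      def choose_hardware(task_dependency_graph, hardware_performance):
          """task_dependency_graph: task -> prerequisite tasks.
          hardware_performance: hardware choice -> {task: throughput}."""
          tasks = set(task_dependency_graph)
          def score(hw):
              perf = hardware_performance[hw]
              if not tasks <= set(perf):       # must be able to run every task
                  return float("-inf")
              return sum(perf[t] for t in tasks)
          return max(hardware_performance, key=score)

      tasks = {"decode": [], "infer": ["decode"], "publish": ["infer"]}
      perf = {
          "cpu-node": {"decode": 30, "infer": 5, "publish": 50},
          "gpu-node": {"decode": 25, "infer": 40, "publish": 45},
      }
      print(choose_hardware(tasks, perf))  # gpu-node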
  • Patent number: 12032993
    Abstract: Techniques described herein relate to a method for deploying workflows. The method may include receiving, by a platform controller of a domain, a workflow portion from a service controller of a federated controller; provisioning a set of devices in the domain to the workflow portion based on a first fit; generating, by the platform controller, a workflow fingerprint based on the provisioning of the set of devices and on the workflow portion; and executing the workflow portion in the domain using the set of devices. Upon making a determination that the workflow portion requires additional resources, the method may further include provisioning additional resources of the domain to the workflow portion to obtain an updated execution resource set, updating the workflow fingerprint based on the updated execution resource set to obtain an updated workflow fingerprint, and executing the workflow portion using the updated execution resource set.
    Type: Grant
    Filed: April 21, 2021
    Date of Patent: July 9, 2024
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: John S. Harwood, Robert Anthony Lincourt, Jr., William Jeffery White, William Price Dawkins, Elie Antoun Jreij, Susan Elizabeth Young
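    Illustrative sketch: a hypothetical Python sketch of first-fit provisioning for a workflow portion plus a fingerprint over the provisioned resources that is updated when more resources are added. The CPU-count model and hashing scheme are assumptions for illustration.

      # Hypothetical first-fit provisioning and workflow fingerprinting.
      import hashlib, json

      def first_fit(required_cpus, devices):
          """devices: name -> free CPUs; take devices in order until covered."""
          chosen, remaining = [], required_cpus
          for name, free in devices.items():
              if remaining <= 0:
                  break
              if free > 0:
                  chosen.append(name)
                  remaining -= free
          return chosen if remaining <= 0 else []

      def fingerprint(workflow_portion, provisioned_devices):
          payload = json.dumps({"portion": workflow_portion,
                                "devices": sorted(provisioned_devices)}, sort_keys=True)
          return hashlib.sha256(payload.encode()).hexdigest()

      devices = {"dev-1": 8, "dev-2": 4, "dev-3": 16}
      provisioned = first_fit(10, devices)                 # ['dev-1', 'dev-2']
      fp = fingerprint("ingest-and-train", provisioned)
      provisioned.append("dev-3")                          # more resources needed
      print(fp != fingerprint("ingest-and-train", provisioned))  # True: fingerprint updated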
  • Patent number: 12026556
    Abstract: A method for processing a neural network includes receiving a graph corresponding to an artificial neural network including multiple nodes connected by edges. The method determines a set of independent nodes of the multiple nodes to be executed in the neural network. The method also determines a next node in the set of independent nodes to add to an ordered set of the multiple nodes corresponding to an order of execution via a hardware resource for processing the neural network. The next node is determined based on a common hardware resource with a first preceding node in the ordered set or a frequency of nodes in the set of independent nodes to be executed via a same hardware resource. The ordered set of the multiple nodes is generated based on the next node. The method may be repeated until each of the nodes of the graph is included in the ordered set of the nodes.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: July 2, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Zakir Hossain Syed, Durk Van Veen, Nathan Omer Kaslan
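    Illustrative sketch: a hypothetical Python ordering of graph nodes that, at each step, chooses among the currently independent (ready) nodes, preferring one that runs on the same hardware resource as the previously scheduled node and breaking ties by how frequent that resource is among the ready nodes. The tiny graph and resource assignments are invented.

      # Hypothetical hardware-resource-aware ordering of neural network nodes.
      def order_nodes(edges, resource):
          """edges: node -> predecessor nodes; resource: node -> 'dsp'/'gpu'/..."""
          ordered, done = [], set()
          while len(ordered) < len(edges):
              independent = [n for n in edges
                             if n not in done and all(p in done for p in edges[n])]
              last = resource[ordered[-1]] if ordered else None
              counts = {}
              for n in independent:
                  counts[resource[n]] = counts.get(resource[n], 0) + 1
              # Prefer the same resource as the preceding node, then the most
              # frequent resource among the ready nodes.
              nxt = max(independent,
                        key=lambda n: (resource[n] == last, counts[resource[n]]))
              ordered.append(nxt)
              done.add(nxt)
          return ordered

      edges = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
      resource = {"a": "dsp", "b": "gpu", "c": "dsp", "d": "gpu"}
      print(order_nodes(edges, resource))  # ['a', 'c', 'b', 'd']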
  • Patent number: 12026557
    Abstract: In general, the invention relates to providing computer-implemented services using information handling systems. One or more embodiments of the invention include identifying a hardware resource requirement in a composition request for a composed information handling system, wherein the hardware resource requirement specifies a hardware resource with data transformation functionality (DTF), identifying a hardware resource that does not have the DTF, connecting the hardware resource to a DTF container, wherein the DTF container implements the DTF and emulates the hardware resource with DTF, and initiating composition of the composed information handling system using the DTF container, wherein the DTF container satisfies the hardware resource requirement.
    Type: Grant
    Filed: July 22, 2021
    Date of Patent: July 2, 2024
    Assignee: DELL PRODUCTS L.P.
    Inventors: William Price Dawkins, Valerie Diane Padilla, Sudhir Vittal Shetty, Jon Robert Hass, James Robert King
  • Patent number: 12014219
    Abstract: A method can include receiving monitoring information associated with a machine learning (ML) or artificial intelligence (AI) workload implemented by an edge compute unit of a plurality of edge compute units. Status information corresponding to a plurality of connected edge assets can be received, the plurality of edge compute units and connected edge assets included in a fleet of edge devices. A remote fleet management graphical user interface (GUI) can display a portion of the monitoring or status information for a subset of the fleet of edge devices, based on a user selection input, and can receive a user configuration input indicative of an updated configuration associated with at least one edge compute unit of the fleet. A cloud computing environment can transmit control information corresponding to the updated configuration to the at least one edge compute unit.
    Type: Grant
    Filed: September 5, 2023
    Date of Patent: June 18, 2024
    Assignee: Armada Systems Inc.
    Inventors: Pradeep Nair, Pragyana K Mishra, Anish Swaminathan, Janardhan Prabhakara
  • Patent number: 12001889
    Abstract: An approach is provided for forecasting and reporting available access times of a physical resource. Availability data of a physical resource may be determined from historical data, counter data, reservation data, constraint data, and criteria data of the physical resource, or any combination thereof. The availability data may be displayed on client computing devices upon request.
    Type: Grant
    Filed: March 3, 2021
    Date of Patent: June 4, 2024
    Assignee: RICOH COMPANY, LTD.
    Inventor: Alex Reyes
  • Patent number: 12001864
    Abstract: Containerized software discovery and identification can include discovering a plurality of container remnants by electronically scanning portions of computer memory of at least one computer system of one or more computing nodes, the portions of computer memory being allocated to persistent storage of computer data, and each container remnant containing computer data providing a record of system-generated execution attributes generated in response to execution of one or more containerized applications. One or more inactive container remnants unutilized by a currently running containerized application can be identified among the plurality of container remnants. Each inactive container remnant can be categorized, the categorizing being based on system-generated execution attributes contained in each inactive container remnant.
    Type: Grant
    Filed: December 24, 2020
    Date of Patent: June 4, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Piotr P. Godowski, Michal Paluch, Tomasz Hanusiak, Szymon Kowalczyk, Andrzej Pietrzak
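    Illustrative sketch: a hypothetical Python sketch of discovering container remnants on persistent storage, skipping those still used by a running container, and categorizing the inactive ones by a recorded execution attribute. The directory layout (one config.json per remnant) and the attribute used for categorization are assumptions, not the patented method's actual format.

      # Hypothetical discovery and categorization of inactive container remnants.
      import json
      from pathlib import Path

      def discover_remnants(storage_root: Path):
          """Assume each remnant is a directory holding a config.json record."""
          for config in storage_root.glob("*/config.json"):
              yield config.parent.name, json.loads(config.read_text())

      def categorize_inactive(storage_root: Path, running_container_ids):
          categories = {}
          for container_id, attrs in discover_remnants(storage_root):
              if container_id in running_container_ids:
                  continue            # still utilized by a running container
              image = attrs.get("image", "unknown")   # example execution attribute
              categories.setdefault(image, []).append(container_id)
          return categories

      # Example (path is illustrative): categorize_inactive(Path("/var/lib/containers"), set())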