Patents Examined by Qing-Yuan Wu
  • Patent number: 11966766
    Abstract: A data processing system that includes one or more host processing devices, which may be configured to support instantiation of a plurality of virtual machines such that a first set of virtual machines runs one or more worker processes, each worker process operating on a respective data set to produce a respective gradient. The host processing devices may also be configured to support instantiation of a second set of virtual machines running one or more reducer processes that operate on each respective gradient produced by each worker process to produce an aggregated gradient. The one or more reducer processes may cause the aggregated gradient to be broadcast to each worker process.
    Type: Grant
    Filed: October 21, 2020
    Date of Patent: April 23, 2024
    Assignee: Google LLC
    Inventors: Chang Lan, Soroush Radpour
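    Illustrative sketch: a minimal Python sketch of the worker/reducer gradient flow summarized in the abstract above; the function names, data, and the simple averaging are hypothetical stand-ins, not taken from the patent.
      def worker_gradient(data_shard):
          # Stand-in for a real gradient computation over one worker's data set.
          return sum(data_shard) / len(data_shard)

      def reducer(gradients):
          # Combine the per-worker gradients into a single aggregated gradient.
          return sum(gradients) / len(gradients)

      data_shards = [[1.0, 2.0], [3.0, 5.0], [8.0, 13.0]]   # one shard per worker VM
      per_worker = [worker_gradient(shard) for shard in data_shards]
      aggregated = reducer(per_worker)
      broadcast = {f"worker-{i}": aggregated for i in range(len(data_shards))}   # aggregate sent back to every worker
      print(per_worker, aggregated, broadcast)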
  • Patent number: 11960898
    Abstract: The technology disclosed herein enables a processor that processes instructions synchronously in accordance with a processor clock to identify a first instruction specifying an asynchronous operation to be processed independently of the processor clock. The asynchronous operation is performed by an asynchronous execution unit that executes the asynchronous operation independently of the processor clock and generates at least one result of the asynchronous operation. A synchronous execution unit executes, in parallel with the execution of the asynchronous operation by the asynchronous execution unit, one or more second instructions specifying respective synchronous operations. Responsive to determining that the asynchronous execution unit has generated the at least one result of the asynchronous operation, the processor receives the at least one result of the asynchronous operation.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: April 16, 2024
    Assignee: Red Hat, Inc.
    Inventor: Ulrich Drepper
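    Illustrative sketch: a rough Python analogue of the flow described in the abstract above, with a background thread standing in for the asynchronous execution unit and the main thread standing in for the synchronous pipeline; all names and timings are hypothetical.
      from concurrent.futures import ThreadPoolExecutor
      import time

      def asynchronous_operation():
          # Runs independently of the "processor clock" (the main loop below).
          time.sleep(0.1)
          return 42

      with ThreadPoolExecutor(max_workers=1) as async_unit:
          pending = async_unit.submit(asynchronous_operation)   # first instruction: dispatch the async operation
          # The synchronous execution unit keeps processing other instructions in parallel.
          for step in range(5):
              _ = step * step
          # Responsive to the result being available, the processor receives it.
          print("asynchronous result:", pending.result())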
  • Patent number: 11954535
    Abstract: Systems, devices, and methods for executing one or more tasks in an Internet-of-Things (IoT) environment are disclosed herein. An exemplary method includes determining an event associated with overloading of a first sensor node in the IoT environment based on resources available in real time on the first sensor node, wherein the event is determined based on the number of tasks pending for execution at the first sensor node. Further, the method includes identifying the one or more tasks executable by a second sensor node. Furthermore, the method includes establishing communication with the second sensor node in the IoT environment and assigning the one or more tasks to the second sensor node such that the second sensor node executes the one or more tasks.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: April 9, 2024
    Assignee: Siemens Aktiengesellschaft
    Inventors: Raju Hs, Himanshu Kumar Singh
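    Illustrative sketch: a minimal Python sketch of the offloading decision described in the abstract above; the threshold, task names, and executability check are hypothetical.
      PENDING_LIMIT = 3   # hypothetical threshold at which the first node counts as overloaded

      def assign_overflow(first_node_tasks, second_node_can_run):
          """Detect overload on the first sensor node and hand off tasks that
          the second sensor node is able to execute."""
          if len(first_node_tasks) <= PENDING_LIMIT:
              return first_node_tasks, []          # no overload event
          offloaded = [t for t in first_node_tasks[PENDING_LIMIT:] if second_node_can_run(t)]
          remaining = [t for t in first_node_tasks if t not in offloaded]
          return remaining, offloaded

      tasks = ["read-temp", "filter", "aggregate", "compress", "upload"]
      keep, moved = assign_overflow(tasks, lambda t: t in {"compress", "upload"})
      print("first node keeps:", keep, "| second node runs:", moved)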
  • Patent number: 11941454
    Abstract: Features are disclosed for correlating a workload type with particular volume characteristics for a block storage volume. The volume characteristics may include a durability or a performance consistency for a particular volume. A computing device can obtain a set of workload parameters indicating a workload associated with a particular block storage volume. Based on the set of workload parameters, the computing device can determine a workload classification that links the set of workload parameters to a set of volume characteristics. The computing device can further compare the set of volume characteristics with the current set of volume characteristics for the block storage volume. Based on comparing the sets of volume characteristics, the computing device may determine a recommendation for a user. The computing device can dynamically modify the block storage volume based on the recommendation.
    Type: Grant
    Filed: December 8, 2020
    Date of Patent: March 26, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Mohit Gupta, Letian Feng, Leslie Johann Lamprecht
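    Illustrative sketch: a minimal Python sketch of mapping workload parameters to a classification, then to desired volume characteristics, and comparing them against the current volume; the classifications, parameters, and rule are hypothetical.
      CLASSIFICATIONS = {
          "bursty-read": {"durability": "standard", "consistency": "variable"},
          "steady-write": {"durability": "high", "consistency": "consistent"},
      }

      def recommend(workload_params, current_characteristics):
          # Toy classification rule standing in for the workload-type determination.
          cls = "steady-write" if workload_params["write_ratio"] > 0.5 else "bursty-read"
          desired = CLASSIFICATIONS[cls]
          changes = {k: v for k, v in desired.items() if current_characteristics.get(k) != v}
          return cls, changes    # an empty dict means the volume already matches the recommendation

      print(recommend({"write_ratio": 0.8}, {"durability": "standard", "consistency": "variable"}))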
  • Patent number: 11924293
    Abstract: A system and method for configuring components added to a network is disclosed. The method includes determining, by a host, that network identifying information of a plurality of networks to which the host is connected is unknown, listening for messages on the plurality of networks to obtain network identifying information for respective networks, receiving a message on a network of the plurality of networks, the message including a network identifier and a set of configuration settings, and configuring a network connection of the host for the network in view of the network identifier and the set of configuration settings from the message.
    Type: Grant
    Filed: February 17, 2023
    Date of Patent: March 5, 2024
    Assignee: Red Hat Israel, Ltd.
    Inventors: Michael Kolesnik, Mordechay Asayag
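    Illustrative sketch: a minimal Python sketch of a host learning network identities by listening for messages carrying an identifier and configuration settings; the message fields and values are hypothetical.
      def configure_unknown_networks(host_networks, observed_messages):
          """For each connected network whose identity is unknown, take the first
          message seen on it and apply the identifier and settings it carries."""
          config = {}
          for net in host_networks:
              for msg in observed_messages:
                  if msg["network"] == net:
                      config[net] = {"id": msg["network_id"], **msg["settings"]}
                      break
          return config

      messages = [{"network": "eth1", "network_id": "mgmt", "settings": {"mtu": 9000, "vlan": 10}}]
      print(configure_unknown_networks(["eth0", "eth1"], messages))   # eth0 stays unconfigured until a message arrives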
  • Patent number: 11922217
    Abstract: A computer system includes a transceiver that receives over a data communications network different types of input data from multiple source nodes and a processing system that defines for each of multiple data categories, a set of groups of data objects for the data category based on the different types of input data. Predictive machine learning model(s) predict a selection score for each group of data objects in the set of groups of data objects for the data category for a predetermined time period. Control machine learning model(s) determine how many data objects are permitted for each group of data objects based on the selection score. Decision-making machine learning model(s) prioritize the permitted data objects based on one or more predetermined priority criteria. Subsequent activities of the computer system are monitored to calculate performance metrics for each group of data objects and for data objects actually selected during the predetermined time period.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: March 5, 2024
    Assignee: Nasdaq, Inc.
    Inventors: Shihui Chen, Keon Shik Kim, Douglas Hamilton
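    Illustrative sketch: a minimal Python sketch of the three-stage flow in the abstract above, with toy rules standing in for the predictive, control, and decision-making machine learning models; all groups, scores, and caps are hypothetical.
      groups = {"A": ["a1", "a2", "a3"], "B": ["b1", "b2"]}
      scores = {"A": 0.9, "B": 0.4}                                     # predictive stage: per-group selection scores
      permitted = {g: max(1, round(scores[g] * len(objs)))              # control stage: cap objects per group
                   for g, objs in groups.items()}
      selected = []                                                     # decision-making stage: prioritize permitted objects
      for g, objs in groups.items():
          selected.extend((scores[g], obj) for obj in objs[:permitted[g]])
      selected.sort(reverse=True)
      print(permitted, [obj for _, obj in selected])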
  • Patent number: 11907762
    Abstract: A method for conserving resources in a distributed system includes receiving an event-criteria list from a resource controller. The event-criteria list includes one or more events watched by the resource controller and the resource controller controls at least one target resource and is configured to respond to events from the event-criteria list that occur. The method also includes determining whether the resource controller is idle. When the resource controller is idle, the method includes terminating the resource controller, determining whether any event from the event-criteria list occurs after terminating the resource controller, and, when at least one event from the event-criteria list occurs after terminating the resource controller, recreating the resource controller.
    Type: Grant
    Filed: January 26, 2021
    Date of Patent: February 20, 2024
    Assignee: Google LLC
    Inventors: Justin Santa Barbara, Timothe Hockin, Robert Bailey, Jeffrey Johnson
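    Illustrative sketch: a minimal Python sketch of terminating an idle resource controller and recreating it when a watched event occurs; the class and event names are hypothetical, not the patent's terminology.
      class ControllerManager:
          def __init__(self, event_criteria):
              self.event_criteria = set(event_criteria)   # events watched by the resource controller
              self.running = True

          def reap_if_idle(self, idle):
              if idle and self.running:
                  self.running = False                    # terminate the idle controller to conserve resources

          def on_event(self, event):
              if event in self.event_criteria and not self.running:
                  self.running = True                     # recreate the controller so it can respond

      mgr = ControllerManager(["pod-added", "pod-deleted"])
      mgr.reap_if_idle(idle=True)
      mgr.on_event("pod-added")
      print("controller running:", mgr.running)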
  • Patent number: 11886223
    Abstract: In one set of embodiments, confidential data needed by a workload component running within a worker VM can be placed on an encrypted virtual disk that is attached to the worker VM and hardware-based attestation can be used to validate the worker VM's software and isolate its guest memory from its hypervisor. Upon successful completion of this attestation process, a data decryption key can be delivered to the worker VM via a secure channel established via the attestation, such that the hypervisor cannot read or alter the key. The worker VM can then decrypt the contents of the encrypted virtual disk using the data decryption key, thereby granting the workload component access to the confidential data.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: January 30, 2024
    Assignee: VMware LLC
    Inventors: Abhishek Srivastava, David Dunn, Jesse Pool, Adrian Drzewiecki
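    Illustrative sketch: a highly simplified Python sketch of the control flow only, with a plain equality check standing in for hardware-based attestation and a random token standing in for the data decryption key; none of these stand-ins reflect the actual attestation or key-delivery mechanism.
      import secrets

      def attest(measured_software, expected_measurement):
          # Stand-in for hardware-based attestation of the worker VM's software.
          return measured_software == expected_measurement

      def release_key(attestation_ok):
          # The decryption key is delivered over the secure channel only after
          # attestation succeeds, so the hypervisor never sees or alters it.
          return secrets.token_bytes(16) if attestation_ok else None

      key = release_key(attest("sha256:worker-vm-image", "sha256:worker-vm-image"))
      print("key delivered to worker VM:", key is not None)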
  • Patent number: 11868803
    Abstract: A method and apparatus for controlling and coordinating a multi-component system. Each component in the system contains a computing device. Each computing device is controlled by software running on the computing device. A first portion of the software resident on each computing device is used to control operations needed to coordinate the activities of all the components in the system. This first portion is known as a “coordinating process.” A second portion of the software resident on each computing device is used to control local processes (local activities) specific to that component. Each component in the system is capable of hosting and running the coordinating process. The coordinating process continually cycles from component to component while it is running.
    Type: Grant
    Filed: April 15, 2020
    Date of Patent: January 9, 2024
    Inventors: Kenneth M. Ford, Niranjan Suri
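    Illustrative sketch: a minimal Python sketch of a coordinating process that continually cycles from component to component while each component also runs its own local process; the component names and loop counts are hypothetical.
      from itertools import cycle

      components = ["arm", "base", "gripper"]            # each component hosts a computing device

      def run_local_process(component):
          return f"{component}: local work done"

      coordinator_hosts = cycle(components)              # the coordinating process migrates continually
      for _ in range(5):
          host = next(coordinator_hosts)                 # component currently hosting the coordinating process
          results = [run_local_process(c) for c in components]   # local processes keep running on every component
          print(f"coordinator on {host}:", results)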
  • Patent number: 11853793
    Abstract: An electronic device includes at least one transceiver, at least one memory, and at least one processor coupled to the at least one transceiver and the at least one memory. The at least one processor is configured to receive, via the at least one transceiver, an AI model in a trusted execution environment (TEE). The at least one processor is also configured to receive an inference request and input data from a source outside the TEE. The at least one processor is further configured to partition a calculation of an inference result between an internal calculation performed by processor resources within the TEE and an external calculation performed by processor resources outside the TEE. In addition, the at least one processor is configured to produce the inference result based on results of the internal calculation and the external calculation.
    Type: Grant
    Filed: October 9, 2020
    Date of Patent: December 26, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Xun Chen, Jianwei Qian
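    Illustrative sketch: a minimal Python sketch of partitioning an inference between a calculation kept inside the TEE and one performed outside it; the split (per-weight products inside, summation outside) is a hypothetical example, not the patent's partitioning rule.
      def inside_tee(weights, x):
          # Sensitive part of the calculation, kept within the trusted execution environment.
          return [w * x for w in weights]

      def outside_tee(partials):
          # Non-sensitive reduction, offloaded to processor resources outside the TEE.
          return sum(partials)

      def infer(weights, x):
          partials = inside_tee(weights, x)     # internal calculation
          return outside_tee(partials)          # external calculation combined into the inference result

      print(infer([0.2, 0.5, 0.3], 10.0))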
  • Patent number: 11847504
    Abstract: A method for a CPU to execute artificial-intelligence-related processes is disclosed. The method includes: when executing TensorFlow on an electronic device, calling a corresponding AI model of TensorFlow according to the content of program code; determining and obtaining one or multiple sparse matrices used by the AI model in performing calculations; executing a matrix-simplifying procedure on the one or multiple sparse matrices; executing an instruction-transforming procedure on an instruction set applied to the AI model; issuing an instruction from the AI model to a weighted CPU of the electronic device according to the transformed instruction set; and, after receiving the instruction, the weighted CPU evenly distributing the multiple procedures indicated by the AI model to each of multiple threads of the weighted CPU to be respectively executed according to a weighting value of each of the multiple procedures.
    Type: Grant
    Filed: December 16, 2020
    Date of Patent: December 19, 2023
    Assignee: NEXCOM INTERNATIONAL CO., LTD.
    Inventors: Chien-Wei Tseng, Chin-Ling Chiang, Po-Hsu Chen
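    Illustrative sketch: a minimal Python sketch of spreading weighted procedures across CPU threads so that the per-thread load stays balanced; the greedy heap strategy, procedure names, and weights are hypothetical stand-ins for the patent's weighting-based distribution.
      import heapq

      def distribute(procedures, num_threads):
          heap = [(0.0, t, []) for t in range(num_threads)]    # (accumulated weight, thread id, assigned procedures)
          heapq.heapify(heap)
          for name, weight in sorted(procedures, key=lambda p: -p[1]):
              load, tid, assigned = heapq.heappop(heap)        # always extend the least-loaded thread
              assigned.append(name)
              heapq.heappush(heap, (load + weight, tid, assigned))
          return {tid: assigned for _, tid, assigned in heap}

      procs = [("conv", 5.0), ("pool", 1.0), ("dense", 3.0), ("softmax", 0.5)]
      print(distribute(procs, 2))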
  • Patent number: 11842197
    Abstract: A new approach for supporting tag-based synchronization among different tasks of a machine learning (ML) operation. When a first task tagged with a set tag indicating that one or more subsequent tasks need to be synchronized with it is received at an instruction streaming engine, the engine saves the set tag in a tag table and transmits instructions of the first task to a set of processing tiles for execution. When a second task having an instruction sync tag indicating that it needs to be synchronized with one or more prior tasks is received at the engine, the engine matches the instruction sync tag with the set tags in the tag table to identify prior tasks that the second task depends on. The engine holds instructions of the second task until these matching prior tasks have been completed and then releases the instructions to the processing tiles for execution.
    Type: Grant
    Filed: February 28, 2023
    Date of Patent: December 12, 2023
    Assignee: Marvell Asia Pte Ltd
    Inventors: Avinash Sodani, Gopal Nalamalapu
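    Illustrative sketch: a minimal Python sketch of the tag-table mechanism described in the abstract above, where a set tag records a task that later tasks may synchronize on and a sync tag holds a task until its matching prior tasks complete; the task and tag names are hypothetical.
      tag_table = {}          # set tags recorded by the instruction streaming engine
      completed = set()       # tags whose tasks have finished on the processing tiles

      def receive_task(name, set_tag=None, sync_tags=()):
          if set_tag is not None:
              tag_table[set_tag] = name              # later tasks may synchronize on this task
          blocking = [t for t in sync_tags if t in tag_table and t not in completed]
          if blocking:
              return f"{name}: held until {blocking} complete"
          return f"{name}: released to processing tiles"

      print(receive_task("task1", set_tag="T1"))
      print(receive_task("task2", sync_tags=("T1",)))   # held: T1 has not completed yet
      completed.add("T1")
      print(receive_task("task2", sync_tags=("T1",)))   # now released for execution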
  • Patent number: 11842219
    Abstract: Computer agents can be throttled individually. In an example, when a computer agent completes a work item, the computer agent reports this to a central component that maintains a vote value for that agent and increases the respective vote value based on the completed work item. When the central component determines that system performance is sufficiently diminished, the central component can throttle the performance of those computer agents having respective vote values above a predetermined threshold value.
    Type: Grant
    Filed: March 5, 2021
    Date of Patent: December 12, 2023
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventor: Leo Hendrik Reyes Lozano
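    Illustrative sketch: a minimal Python sketch of per-agent vote counting and selective throttling; the threshold and agent names are hypothetical.
      from collections import Counter

      votes = Counter()                 # per-agent vote values kept by the central component
      THROTTLE_THRESHOLD = 5            # hypothetical cut-off for throttling

      def report_completed(agent, work_items=1):
          votes[agent] += work_items    # each completed work item raises the agent's vote value

      def agents_to_throttle(system_degraded):
          if not system_degraded:
              return []
          return [a for a, v in votes.items() if v > THROTTLE_THRESHOLD]

      for _ in range(8):
          report_completed("agent-a")
      report_completed("agent-b", 2)
      print(agents_to_throttle(system_degraded=True))    # only the busiest agent gets throttled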
  • Patent number: 11816204
    Abstract: Some embodiments may be associated with a cloud-based actor framework. A dispatcher platform may determine that a first tenant actor is to be created for a first tenant in connection with a workload associated with a plurality of tenant identifiers. The first tenant may be, for example, associated with a first tenant identifier. The dispatcher platform may then select a first thread for the first tenant actor from a pool of available threads and spin up a first web assembly module such that execution of the first web assembly module is associated with a first web assembly browser sandbox. The dispatcher platform can then securely create the first tenant actor within the first web assembly browser sandbox to execute the workload for the first tenant identifier. Similarly, a second web assembly browser sandbox may execute a second tenant actor for a second tenant identifier.
    Type: Grant
    Filed: February 3, 2022
    Date of Patent: November 14, 2023
    Assignee: SAP SE
    Inventor: Shashank Mohan Jain
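    Illustrative sketch: a rough Python analogue of dispatching one actor per tenant onto threads from a shared pool; a plain thread pool and a function stand in for the web assembly modules and browser sandboxes, so the isolation shown here is far weaker than what the abstract describes.
      from concurrent.futures import ThreadPoolExecutor

      def run_tenant_actor(tenant_id, workload):
          # Stand-in for a tenant actor executing inside its own sandboxed module.
          return f"tenant {tenant_id}: {workload()}"

      thread_pool = ThreadPoolExecutor(max_workers=4)       # pool of available threads
      futures = {
          tid: thread_pool.submit(run_tenant_actor, tid, lambda: "workload done")
          for tid in ("tenant-1", "tenant-2")
      }
      for tid, fut in futures.items():
          print(fut.result())
      thread_pool.shutdown()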
  • Patent number: 11816511
    Abstract: According to embodiments, a method for virtual partitioning of data includes receiving a data stream comprising a plurality of traces, each trace comprising a plurality of spans from a plurality of users. The method also includes assigning the plurality of traces of the data stream to a plurality of virtual partitions based on each user of the plurality of users, each virtual partition of the plurality of virtual partitions comprising data of a user of the plurality of users. The method also includes scheduling at least a subset of the plurality of virtual partitions to at least one user partition of a shared topic, the at least one user partition comprising data from at least one virtual partition of at least one user of the plurality of users. The method also includes indexing each user partition of the shared topic based on each user and each virtual partition.
    Type: Grant
    Filed: February 28, 2023
    Date of Patent: November 14, 2023
    Assignee: Splunk Inc.
    Inventors: Steven Karis, Maxime Petazzoni, Matthew William Pound, Charles Smith, Chengyu Yang
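    Illustrative sketch: a minimal Python sketch of grouping traces into per-user virtual partitions and mapping each onto a partition of a shared topic; the partition count, hashing choice, and field names are hypothetical.
      from collections import defaultdict
      import zlib

      NUM_USER_PARTITIONS = 2                       # partitions of the shared topic (hypothetical)
      virtual_partitions = defaultdict(list)        # one virtual partition per user

      def ingest(trace):
          virtual_partitions[trace["user"]].append(trace)

      def schedule():
          """Map each user's virtual partition onto a user partition of the shared
          topic and index the result by user and partition."""
          index = defaultdict(list)
          for user, traces in virtual_partitions.items():
              partition = zlib.crc32(user.encode()) % NUM_USER_PARTITIONS
              index[(user, partition)].extend(t["trace_id"] for t in traces)
          return dict(index)

      ingest({"user": "alice", "trace_id": "t1", "spans": 3})
      ingest({"user": "bob", "trace_id": "t2", "spans": 5})
      print(schedule())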
  • Patent number: 11809910
    Abstract: A system includes a subsystem, a database, a memory, and a processor. The subsystem includes a computational resource associated with a resource usage and having a capacity corresponding to a maximum resource usage value. The database stores training data that includes historical resource usages and historical events. The memory stores a machine learning algorithm that is trained, based on the training data, to predict, based on the occurrence of an event, that a future value of the resource usage at a future time will be greater than the maximum value. The processor detects that the event has occurred. In response, the processor applies the machine learning algorithm to predict that the future value of the resource usage will be greater than the maximum value. Prior to the future time, the processor increases the capacity of the computational resource to accommodate the future value of the resource usage.
    Type: Grant
    Filed: October 14, 2020
    Date of Patent: November 7, 2023
    Assignee: Bank of America Corporation
    Inventor: Naga Vamsi Krishna Akkapeddi
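    Illustrative sketch: a minimal Python sketch of reacting to an event by predicting future resource usage and growing capacity ahead of time; the extrapolation rule stands in for the trained machine learning algorithm, and all numbers are hypothetical.
      MAX_USAGE = 100.0       # current capacity of the computational resource

      def predict_future_usage(event, history):
          # Stand-in for the trained model: extrapolate from past occurrences of this event.
          past = [usage for e, usage in history if e == event]
          return max(past) * 1.2 if past else 0.0

      def on_event(event, history, capacity=MAX_USAGE):
          predicted = predict_future_usage(event, history)
          if predicted > capacity:
              capacity = predicted * 1.1          # increase capacity before the spike arrives
          return predicted, capacity

      history = [("quarter-end", 95.0), ("quarter-end", 99.0), ("weekday", 40.0)]
      print(on_event("quarter-end", history))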
  • Patent number: 11809889
    Abstract: In an approach to intelligent connection placement across multiple logical ports, a mapping table for a virtual machine is created. A connection request to connect a local port to a port on a peer device is received. Whether an entry exists in the mapping table for the port on the peer device is determined. Responsive to determining that an entry exists in the mapping table for the port on the peer device, whether a virtual function exists for the port on the peer device in the mapping table for the same physical function is determined. A virtual function is selected from the mapping table to connect the local port to the port on the peer device.
    Type: Grant
    Filed: August 11, 2020
    Date of Patent: November 7, 2023
    Assignee: International Business Machines Corporation
    Inventors: Vishal Mansur, Sivakumar Krishnasamy, Niranjan Srinivasan
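    Illustrative sketch: a minimal Python sketch of consulting a mapping table to choose a virtual function for a connection; the table contents and port names are hypothetical.
      # mapping_table[peer_port] -> list of (physical_function, virtual_function) pairs
      mapping_table = {
          "peer-port-1": [("pf0", "vf0"), ("pf0", "vf1")],
      }

      def place_connection(local_port, peer_port):
          entries = mapping_table.get(peer_port)
          if not entries:
              return None                                      # no entry: fall back to default placement
          physical_function, virtual_function = entries[0]     # pick a virtual function on the matching physical function
          return (local_port, peer_port, physical_function, virtual_function)

      print(place_connection("local-port-A", "peer-port-1"))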
  • Patent number: 11809218
    Abstract: Systems and methods are provided for incorporating an optimized dispatcher with an FaaS infrastructure to permit and restrict access to resources. For example, the dispatcher may assign requests to "warm" resources and initiate a fault process if the resource is overloaded or a cache miss is identified (e.g., by restarting or rebooting the resource). The warm instances or accelerators identified for the allocation size may be commensurate with the demand and help dynamically route requests to faster accelerators.
    Type: Grant
    Filed: March 11, 2021
    Date of Patent: November 7, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Dejan S. Milojicic, Kimberly Keeton, Paolo Faraboschi, Cullen E. Bash
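    Illustrative sketch: a minimal Python sketch of a dispatcher routing requests to warm instances and taking a fault path on a miss; the resource names and counts are hypothetical.
      warm = {"fpga-small": 2, "gpu-large": 1}    # warm accelerator instances by allocation size

      def dispatch(request_size):
          if warm.get(request_size, 0) > 0:
              warm[request_size] -= 1
              return f"routed to warm {request_size} instance"
          # Overload or cache miss: trigger the fault process and bring up a resource.
          warm[request_size] = warm.get(request_size, 0) + 1
          return f"fault: cold-started a {request_size} instance"

      print(dispatch("gpu-large"))    # warm hit
      print(dispatch("gpu-large"))    # miss, so the fault path restarts a resource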
  • Patent number: 11803425
    Abstract: An apparatus comprises at least one processing device comprising a processor coupled to a memory. The at least one processing device is configured to select an application workload running on a storage system, and to identify one or more copies of the application workload running on the storage system. The at least one processing device is also configured to determine an amount of storage resources of the storage system to allocate to the identified one or more copies of the application workload running on the storage system. The at least one processing device is further configured to allocate a portion of the determined amount of storage resources of the storage system to each of the identified one or more copies of the application workload.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: October 31, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Prashant Pokharna, Sunil Kumar, Shivasharan Dalasanur Narayana Gowda
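    Illustrative sketch: a minimal Python sketch of splitting a determined amount of storage resources across the identified copies of an application workload; the even split and the copy names are hypothetical, since the abstract only says a portion is allocated to each copy.
      def allocate_to_copies(total_resources_gb, copies):
          """Allocate a portion of the determined storage resources to each
          identified copy of the application workload."""
          if not copies:
              return {}
          share = total_resources_gb / len(copies)
          return {copy: share for copy in copies}

      print(allocate_to_copies(300, ["snapshot-1", "clone-dev", "clone-test"]))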
  • Patent number: 11801643
    Abstract: A method of enhancing a performance characteristic of an additive manufacturing apparatus, the method including: (a) dispensing a batch of a light polymerizable resin into the additive manufacturing apparatus, the batch characterized by at least one physical characteristic; (b) determining the unique identity of the batch; (c) sending the unique identity of the batch to a database; then (d) either: (i) receiving on the controller from the database modified operating instructions for the resin batch, which modified operating instructions have been modified based on the at least one physical characteristic, or (ii) receiving on the controller from the database the at least one physical characteristic for the specific resin batch and modifying the operating instructions based on the at least one physical characteristic; and then (e) producing an object from the batch of light polymerizable resin on the additive manufacturing apparatus with the modified operating instructions.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: October 31, 2023
    Assignee: Carbon, Inc.
    Inventors: John R. Tumbleston, Clarissa Gutierrez, Ronald Truong, Kyle Laaker, Craig B. Carlson, Roy Goldman, Abhishek Parmar
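    Illustrative sketch: a minimal Python sketch of looking up a resin batch's physical characteristics by its unique identity and adjusting operating instructions accordingly; the batch identifier, characteristic names, and scaling rule are entirely hypothetical.
      batch_database = {       # keyed by the batch's unique identity
          "LOT-2024-0042": {"viscosity_cps": 1500, "cure_dose_mj_cm2": 11.0},
      }

      def operating_instructions(batch_id, defaults):
          characteristics = batch_database.get(batch_id, {})
          instructions = dict(defaults)
          if "cure_dose_mj_cm2" in characteristics:
              # Hypothetical adjustment: scale exposure time to the batch's measured cure dose.
              instructions["exposure_s"] = defaults["exposure_s"] * characteristics["cure_dose_mj_cm2"] / 10.0
          return instructions

      print(operating_instructions("LOT-2024-0042", {"exposure_s": 2.0, "lift_mm": 5.0}))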