Patents by Inventor Arjun Anand

Arjun Anand has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240155025
    Abstract: An apparatus of an edge computing node, a method, and a machine-readable storage medium. The apparatus is to decode messages from a plurality of clients within the edge computing network, the messages including respective coded data for respective ones of the plurality of clients; compute estimates of metrics related to a global model for federated learning using the coded data, the metrics including a gradient on the coded data; use the metrics to update the global model to generate an updated global model, wherein the edge computing node is to update the global model by calculating the gradient on the coded data based on a linear fit of the global model to estimated labels from the federated learning; and send a message including the updated global model for transmission to at least some of the clients.
    Type: Application
    Filed: June 9, 2022
    Publication date: May 9, 2024
    Applicant: Intel Corporation
    Inventors: Mustafa Riza Akdeniz, Arjun Anand, Ravikumar Balakrishnan, Sagar Dhakal, Nageen Himayat
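
    A minimal Python sketch of the server-side update described in the abstract above, assuming a linear-regression global model, squared loss, and random linear coding of each client's data; the dimensions, learning rate, and the encode/coded_gradient helpers are illustrative stand-ins rather than the publication's actual scheme:

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical setup: each of four clients holds a small local dataset (X, y).
      d, n_clients, n_local = 5, 4, 20
      clients = [(rng.normal(size=(n_local, d)), rng.normal(size=n_local))
                 for _ in range(n_clients)]

      def encode(X, y, n_coded=8):
          """Client-side coding: random linear combinations of the local samples."""
          G = rng.normal(size=(n_coded, X.shape[0])) / np.sqrt(X.shape[0])
          return G @ X, G @ y          # coded features and coded (estimated) labels

      def coded_gradient(w, X_coded, y_coded):
          """Gradient of a squared-loss linear fit, evaluated on the coded data."""
          return X_coded.T @ (X_coded @ w - y_coded) / len(y_coded)

      # Server side: average the per-client coded gradients, update the global model.
      w, lr = np.zeros(d), 0.1
      for _ in range(50):
          grads = [coded_gradient(w, *encode(X, y)) for X, y in clients]
          w -= lr * np.mean(grads, axis=0)

      print("updated global weights:", np.round(w, 3))
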
  • Patent number: 11917655
    Abstract: For example, a wireless communication device may be configured to determine a Resource Unit (RU) allocation of a plurality of RUs to a plurality of wireless communication stations (STAs), respectively, the RU allocation to allocate to a STA of the plurality of STAs an RU of the plurality of RUs, wherein an RU size of the RU allocated to the STA is based at least on a traffic rate parameter, which is dependent on a traffic rate of Downlink (DL) traffic for the STA; and to transmit a Multi-User (MU) DL Orthogonal-Frequency-Division-Multiple-Access (OFDMA) Physical-layer Protocol Data Unit (PPDU) to the plurality of STAs according to the RU allocation.
    Type: Grant
    Filed: December 26, 2019
    Date of Patent: February 27, 2024
    Assignee: INTEL CORPORATION
    Inventors: Arjun Anand, Vinod Kristem, Rath Vannithamby
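
    As a rough illustration of sizing RUs from a traffic rate parameter, the Python sketch below maps each STA's share of the offered downlink traffic to the largest standard 802.11ax RU size that fits its proportional share; the weighting heuristic and the allocate_rus helper are assumptions for illustration and do not enforce the total tone budget:

      # Standard 802.11ax RU sizes in tones (26-tone up to 996-tone).
      RU_SIZES = [26, 52, 106, 242, 484, 996]

      def allocate_rus(dl_rates_mbps, total_tones=996):
          """Give larger RUs to STAs with higher downlink traffic rates: each STA's
          share of tones tracks its share of the offered DL traffic, rounded down
          to the nearest defined RU size (the total tone budget is not enforced)."""
          total_rate = sum(dl_rates_mbps.values())
          allocation = {}
          for sta, rate in sorted(dl_rates_mbps.items(), key=lambda kv: -kv[1]):
              target = total_tones * rate / total_rate
              allocation[sta] = max((s for s in RU_SIZES if s <= target),
                                    default=RU_SIZES[0])
          return allocation

      # Example: three STAs with different downlink traffic rates (Mbps).
      print(allocate_rus({"STA1": 50.0, "STA2": 20.0, "STA3": 5.0}))
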
  • Publication number: 20230252359
    Abstract: Techniques for non-linear distributed multitask support vector machines are disclosed. In the illustrative embodiment, a coordinator node sends initial parameters (or a random number generator along with model choice) for a global model to participant nodes. Each participant node performs a round of training based on the common global model parameters, the local model parameters, and local data. Each participant node determines updated parameters for the global model and updated parameters for a local model. Each participant node sends an update of the parameters of the global model to the coordinator node, while keeping the parameters of the local model private. The coordinator node aggregates the updates from the participant nodes, updates the global model parameters, and sends them back to the participant nodes. The process can repeat until a desired error level is reached.
    Type: Application
    Filed: April 21, 2023
    Publication date: August 10, 2023
    Inventors: Aleksei Ponomarenko-Timofeev, Olga Galinina, Ravikumar Balakrishnan, Arjun Anand, Nageen Himayat, Sergey Andreev
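
    A simplified Python sketch of the shared-versus-private parameter split described above, using a linear (rather than non-linear) multitask SVM trained by subgradient steps; the loss, learning rates, and the local_round helper are illustrative assumptions, and only the shared weights ever leave a node:

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical participants, each holding a small labelled dataset (y in {-1, +1}).
      d, n_nodes, n_local = 4, 3, 40
      data = []
      for _ in range(n_nodes):
          X = rng.normal(size=(n_local, d))
          y = np.sign(X @ rng.normal(size=d) + 0.1 * rng.normal(size=n_local))
          data.append((X, y))

      def local_round(w_global, v_local, X, y, lr=0.05, lam=0.01):
          """One subgradient pass of a multitask linear SVM: predictions use the
          shared weights plus the node's private weights (w_global + v_local)."""
          for xi, yi in zip(X, y):
              margin = yi * ((w_global + v_local) @ xi)
              g = -yi * xi if margin < 1 else np.zeros_like(xi)   # hinge subgradient
              w_global = w_global - lr * (g + lam * w_global)
              v_local = v_local - lr * (g + lam * v_local)
          return w_global, v_local

      # Coordinator loop: broadcast the global parameters, average the returned
      # updates; each node's v_local never leaves that node.
      w = np.zeros(d)
      v = [np.zeros(d) for _ in range(n_nodes)]
      for _ in range(20):
          updates = []
          for i, (X, y) in enumerate(data):
              w_i, v[i] = local_round(w, v[i], X, y)
              updates.append(w_i)
          w = np.mean(updates, axis=0)

      print("shared model norm:", round(float(np.linalg.norm(w)), 3))
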
  • Publication number: 20230195531
    Abstract: A task modeling system, including a plurality of processing clients having a plurality of processing cores; a task modeler, including a memory storing an artificial neural network; and a processor, configured to receive input data representing a plurality of processing tasks to be completed by the processing client within a predefined time duration; and implement its artificial neural network to determine from the input data an assignment of the processing tasks among the processing cores for completion of the processing tasks within the predefined time duration, and determine a power management factor for each of the plurality of processing cores for power management during the predefined time duration; wherein the artificial neural network is configured to select the power management factor for each of the plurality of processing cores to achieve a power usage within a predefined threshold for the plurality of processing cores during the predefined time duration.
    Type: Application
    Filed: December 22, 2021
    Publication date: June 22, 2023
    Inventors: Maruti GUPTA HYDE, Nageen HIMAYAT, Ravikumar BALAKRISHNAN, Mustafa AKDENIZ, Marcin SPOCZYNSKI, Arjun ANAND, Marius ARVINTE
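
    The sketch below illustrates the kind of mapping such a task modeler must learn, using a tiny untrained two-layer network that outputs a task-to-core assignment and a per-core power management factor; the network shape, the base core frequency and power figures, and the cubic power rule are assumptions for illustration only:

      import numpy as np

      rng = np.random.default_rng(2)

      n_tasks, n_cores, hidden = 6, 4, 16
      task_cycles = rng.uniform(1e6, 5e6, size=n_tasks)   # work per task, in cycles
      deadline_s, power_budget_w = 0.01, 8.0

      # Toy, untrained two-layer network standing in for the task modeler's ANN.
      W1 = rng.normal(scale=0.1, size=(n_tasks, hidden))
      W2 = rng.normal(scale=0.1, size=(hidden, n_tasks * n_cores + n_cores))

      def softmax(z, axis=-1):
          e = np.exp(z - z.max(axis=axis, keepdims=True))
          return e / e.sum(axis=axis, keepdims=True)

      def plan(task_cycles):
          """Map task sizes to (a) a task-to-core assignment and (b) a per-core
          power management factor (a relative frequency scale in (0, 1))."""
          h = np.tanh((task_cycles / task_cycles.max()) @ W1)
          out = h @ W2
          assign = softmax(out[:n_tasks * n_cores].reshape(n_tasks, n_cores), axis=1)
          power_factor = 1.0 / (1.0 + np.exp(-out[n_tasks * n_cores:]))   # sigmoid
          return assign.argmax(axis=1), power_factor

      cores, factors = plan(task_cycles)
      # Check the plan against the deadline and the power budget (core specs assumed).
      base_hz, base_w = 2.0e9, 3.0
      for c in range(n_cores):
          t = task_cycles[cores == c].sum() / (base_hz * factors[c])
          p = base_w * factors[c] ** 3      # cubic frequency-to-power rule of thumb
          print(f"core {c}: {t * 1e3:.2f} ms (deadline {deadline_s * 1e3:.0f} ms), {p:.2f} W")
      print("total power:", round(float(sum(base_w * f ** 3 for f in factors)), 2),
            "W (budget", power_budget_w, "W)")
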
  • Publication number: 20230177349
    Abstract: The apparatus of an edge computing node, a system, a method and a machine-readable medium. The apparatus includes a processor to cause an initial set of weights for a global machine learning (ML) model to be transmitted to a set of client compute nodes of the edge computing network; process Hessians computed by each of the client compute nodes based on a dataset stored on the client compute node; evaluate a gradient expression for the ML model based on a second dataset and an updated set of weights received from the client compute nodes; and generate a meta-updated set of weights for the global model based on the initial set of weights, the Hessians received, and the evaluated gradient expression.
    Type: Application
    Filed: May 29, 2021
    Publication date: June 8, 2023
    Applicant: Intel Corporation
    Inventors: Ravikumar Balakrishnan, Nageen Himayat, Mustafa Riza Akdeniz, Sagar Dhakal, Arjun Anand, Hesham Mostafa
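
    A compact Python sketch of a MAML-style meta-update for a quadratic (linear-regression) loss, where clients supply Hessians and inner-loop weights and the server evaluates the outer gradient on a second dataset; the learning rates and synthetic data are illustrative, and the publication is not limited to this loss:

      import numpy as np

      rng = np.random.default_rng(3)

      d, n_clients = 4, 3
      alpha, beta = 0.05, 0.1            # inner-loop and meta learning rates

      def grad(w, X, y):
          return X.T @ (X @ w - y) / len(y)    # squared-loss gradient

      def hess(X):
          return X.T @ X / len(X)              # squared-loss Hessian (constant in w)

      # Client-side support data and the server-side "second dataset" per client.
      support = [(rng.normal(size=(30, d)), rng.normal(size=30)) for _ in range(n_clients)]
      query = [(rng.normal(size=(30, d)), rng.normal(size=30)) for _ in range(n_clients)]

      w0 = np.zeros(d)                   # initial global weights sent to clients
      for _ in range(25):
          meta_grads = []
          for (Xs, ys), (Xq, yq) in zip(support, query):
              # Client side: one inner gradient step plus the local Hessian.
              H = hess(Xs)
              w_inner = w0 - alpha * grad(w0, Xs, ys)
              # Server side: outer gradient on the second dataset, corrected by the
              # Hessian so the meta-update differentiates through the inner step.
              meta_grads.append((np.eye(d) - alpha * H) @ grad(w_inner, Xq, yq))
          w0 = w0 - beta * np.mean(meta_grads, axis=0)

      print("meta-updated global weights:", np.round(w0, 3))
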
  • Publication number: 20230068386
    Abstract: The apparatus of an edge computing node, a system, a method and a machine-readable medium. The apparatus includes a processor to perform rounds of federated machine learning training including: processing client reports from a plurality of clients of the edge computing network; selecting a candidate set of clients from the plurality of clients for an epoch of the federated machine learning training; causing a global model to be sent to the candidate set of clients; and performing the federated machine learning training on the candidate set of clients. The processor may perform rounds of federated machine learning training including: obtaining coded training data from each of the selected clients; and performing machine learning training on the coded training data.
    Type: Application
    Filed: December 26, 2020
    Publication date: March 2, 2023
    Applicant: Intel Corporation
    Inventors: Mustafa Riza Akdeniz, Arjun Anand, Nageen Himayat, Amir S. Avestimehr, Ravikumar Balakrishnan, Prashant Bhardwaj, Jeongsik Choi, Yang-Seok Choi, Sagar Dhakal, Brandon Gary Edwards, Saurav Prakash, Amit Solomon, Shilpa Talwar, Yair Eliyahu Yona
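
    As a sketch of the per-epoch client selection loop, the Python below scores hypothetical client reports, keeps a top-k candidate set, and runs a stubbed federated round; the report fields, scoring weights, and the perturbation standing in for local training are all assumptions, and the coded-training-data path is omitted:

      import random

      random.seed(0)

      # Hypothetical client reports: id -> (link quality, spare compute, dataset size).
      reports = {f"client{i}": (random.random(), random.random(), random.randint(50, 500))
                 for i in range(10)}

      def select_candidates(reports, k=4):
          """Score each client from its report and keep the top-k as this epoch's
          candidate set (the weights below are arbitrary illustrations)."""
          def score(r):
              link, compute, n_samples = r
              return 0.5 * link + 0.3 * compute + 0.2 * (n_samples / 500)
          return sorted(reports, key=lambda c: score(reports[c]), reverse=True)[:k]

      def federated_round(global_model, candidates):
          """Send the global model to the candidate set and average their returned
          updates; local training is stubbed out as a small random perturbation."""
          updates = [[w + random.gauss(0, 0.01) for w in global_model]
                     for _ in candidates]
          return [sum(ws) / len(ws) for ws in zip(*updates)]

      model = [0.0] * 5
      for epoch in range(3):
          chosen = select_candidates(reports)
          model = federated_round(model, chosen)
          print(f"epoch {epoch}: trained on {chosen}")
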
  • Publication number: 20220377614
    Abstract: An apparatus of a transmitter computing node n (TX node n) of a wireless network, one or more computer readable media, a system, and a method.
    Type: Application
    Filed: April 1, 2022
    Publication date: November 24, 2022
    Applicant: Intel Corporation
    Inventors: Ravikumar Balakrishnan, Nageen Himayat, Arjun Anand, Mustafa Riza Akdeniz, Sagar Dhakal, Mark R. Eisen, Navid Naderializadeh
  • Publication number: 20220114033
    Abstract: An apparatus, one or more computer readable media, a distributed edge computing system, and a method. The apparatus includes one or more processors to determine dependencies between sets of tasks of a plurality of tasks to be executed by a plurality of cores of a network; determine latency deadlines of respective ones of the plurality of tasks; and determine an allocation of individual ones of the plurality of tasks among the plurality of cores for execution based on the dependencies and based on the latency deadlines.
    Type: Application
    Filed: December 22, 2021
    Publication date: April 14, 2022
    Inventors: Marius O. Arvinte, Maruti Gupta Hyde, Mustafa Riza Akdeniz, Arjun Anand, Ravikumar Balakrishnan, Nageen Himayat, Sumesh Subramanian, Alexander Bachmutsky, John M. Belstner
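
    A small Python sketch of allocating dependent, deadline-constrained tasks across cores: walk the task DAG in dependency order, greedily place each task on the core that lets it start earliest, and check the latency deadline; the task graph, runtimes, and greedy rule are illustrative assumptions rather than the claimed method:

      from graphlib import TopologicalSorter

      # Hypothetical task graph: task -> (runtime ms, latency deadline ms, dependencies).
      tasks = {
          "decode": (2.0, 5.0, []),
          "detect": (4.0, 12.0, ["decode"]),
          "track": (3.0, 16.0, ["detect"]),
          "render": (2.5, 16.0, ["detect"]),
          "log": (1.0, 20.0, ["track", "render"]),
      }
      n_cores = 2

      core_free = [0.0] * n_cores    # time at which each core next becomes idle
      finish = {}                    # completion time of every scheduled task

      # Walk the DAG in dependency order; place each task on the core that lets it
      # start earliest, then check the result against the task's latency deadline.
      deps_only = {name: spec[2] for name, spec in tasks.items()}
      for name in TopologicalSorter(deps_only).static_order():
          runtime, deadline, deps = tasks[name]
          ready = max((finish[dep] for dep in deps), default=0.0)
          core = min(range(n_cores), key=lambda c: max(core_free[c], ready))
          start = max(core_free[core], ready)
          finish[name] = start + runtime
          core_free[core] = finish[name]
          print(f"{name}: core {core}, {start:.1f}-{finish[name]:.1f} ms, "
                f"deadline met: {finish[name] <= deadline}")
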
  • Publication number: 20220007382
    Abstract: In one embodiment, an apparatus of an access point (AP) node of a network includes an interconnect interface to connect the apparatus to one or more components of the AP node and a processor to: access scheduling requests from a plurality of devices, select a subset of the devices for scheduling of resource blocks in a time slot, and schedule wireless resource blocks in the time slot for the subset of devices using a neural network (NN) trained via deep reinforcement learning (DRL).
    Type: Application
    Filed: September 20, 2021
    Publication date: January 6, 2022
    Applicant: Intel Corporation
    Inventors: Arjun Anand, Ravikumar Balakrishnan, Vallabhajosyula S. Somayazulu, Rath Vannithamby
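
    The Python sketch below shows the shape of the per-slot scheduling decision: a policy network scores each requesting device from its state and the top scorers receive resource blocks; here the network weights are random stand-ins, whereas the publication trains the network via deep reinforcement learning:

      import numpy as np

      rng = np.random.default_rng(4)

      n_devices, n_blocks = 8, 4   # devices with scheduling requests; blocks per slot

      # Stand-in for the DRL-trained policy: a linear scorer over each device's state
      # (queue backlog, channel quality indicator, slots since last grant).
      W = rng.normal(size=(3, 1))

      def schedule_slot(state):
          """Score every requesting device and grant one resource block to each of
          the top-scoring devices for this time slot."""
          scores = (state @ W).ravel()
          return set(np.argsort(scores)[-n_blocks:].tolist())

      state = np.column_stack([
          rng.integers(0, 50, n_devices).astype(float),   # queued packets
          rng.uniform(0.0, 1.0, n_devices),               # channel quality indicator
          np.zeros(n_devices),                            # slots since last grant
      ])
      for slot in range(3):
          grants = schedule_slot(state)
          state[:, 2] += 1             # everyone ages by one slot...
          state[list(grants), 2] = 0   # ...except the devices just served
          print(f"slot {slot}: granted devices {sorted(grants)}")
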
  • Publication number: 20210204291
    Abstract: For example, a wireless communication device may be configured to determine a Resource Unit (RU) allocation of a plurality of RUs to a plurality of wireless communication stations (STAs), respectively, the RU allocation to allocate to a STA of the plurality of STAs an RU of the plurality of RUs, wherein an RU size of the RU allocated to the STA is based at least on a traffic rate parameter, which is dependent on a traffic rate of Downlink (DL) traffic for the STA; and to transmit a Multi-User (MU) DL Orthogonal-Frequency-Division-Multiple-Access (OFDMA) Physical-layer Protocol Data Unit (PPDU) to the plurality of STAs according to the RU allocation.
    Type: Application
    Filed: December 26, 2019
    Publication date: July 1, 2021
    Applicant: INTEL CORPORATION
    Inventors: Arjun Anand, Vinod Kristem, Rath Vannithamby
  • Publication number: 20210204303
    Abstract: Embodiments of the present disclosure provide for determination of transmit power allocations and modulation and coding schemes for multiuser orthogonal frequency division multiple access downlink transmissions. Other embodiments may be described and claimed.
    Type: Application
    Filed: December 26, 2019
    Publication date: July 1, 2021
    Inventors: Vinod Kristem, Arjun Anand, Alexander W. Min, Rath Vannithamby, Shahrnaz Azizi
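
    The abstract does not spell out the method, so as a generic illustration of the problem it names, the Python sketch below allocates downlink transmit power by classic water-filling and then picks a modulation and coding scheme from an assumed SNR-threshold table; none of the numbers come from the publication:

      import numpy as np

      # Per-user channel gains (linear SNR per unit power) and a total power budget.
      gains = np.array([8.0, 3.0, 1.5, 0.7])
      total_power = 4.0

      def water_filling(gains, total_power, iters=60):
          """Classic water-filling: bisect on the water level so stronger users get
          more power while the total transmit power constraint is met."""
          lo, hi = 0.0, total_power + 1.0 / gains.min()
          for _ in range(iters):
              level = (lo + hi) / 2.0
              powers = np.maximum(level - 1.0 / gains, 0.0)
              lo, hi = (level, hi) if powers.sum() < total_power else (lo, level)
          return np.maximum((lo + hi) / 2.0 - 1.0 / gains, 0.0)

      # Simplified SNR-threshold table standing in for an MCS lookup (dB -> MCS index).
      MCS_THRESHOLDS_DB = [2, 5, 9, 11, 15, 18, 20, 25]

      def pick_mcs(snr_db):
          return sum(snr_db >= t for t in MCS_THRESHOLDS_DB)

      for i, (g, p) in enumerate(zip(gains, water_filling(gains, total_power))):
          snr_db = 10.0 * np.log10(max(g * p, 1e-9))
          print(f"user {i}: power {p:.2f}, SNR {snr_db:.1f} dB, MCS {pick_mcs(snr_db)}")
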
  • Publication number: 20210120497
    Abstract: For example, an Access Point (AP) wireless communication station (STA) may be configured to identify a plurality of non-AP STAs in a Power Save (PS) poll-less (PS-Poll-less) Power Save Mode (PSM) to be addressed by a PS-Poll-less Multi-User (MU) Downlink (DL) transmission, which does not require prior receipt of PS Poll frames from the plurality of non-AP STAs in the PS-Poll-less PSM; to transmit a beacon frame including a traffic indication to indicate that traffic is pending for transmission from the AP STA to the plurality of non-AP STAs in the PS-Poll-less PSM; and, subsequent to the beacon frame, transmit the PS-Poll-less MU DL transmission to the plurality of non-AP STAs in the PS-Poll-less PSM.
    Type: Application
    Filed: December 24, 2020
    Publication date: April 22, 2021
    Inventors: Alexander W. Min, Rath Vannithamby, Arjun Anand, Vinod Kristem
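
    A Python sketch of the AP-side message flow described above: advertise pending traffic in the beacon's traffic indication, then send the multi-user downlink transmission to PS-Poll-less stations without waiting for PS-Poll frames; the Station class and the frame bookkeeping are illustrative simplifications:

      from dataclasses import dataclass, field

      @dataclass
      class Station:
          name: str
          ps_poll_less: bool       # True if the STA operates in PS-Poll-less PSM
          pending_frames: list = field(default_factory=list)

      def beacon_and_deliver(stations):
          """AP-side flow: advertise pending traffic in the beacon's traffic
          indication, then send the MU DL transmission to the PS-Poll-less STAs
          without waiting for PS-Poll frames from them."""
          pending = [s for s in stations if s.pending_frames]
          print("beacon traffic indication:", [s.name for s in pending])

          mu_dl_group = [s for s in pending if s.ps_poll_less]
          print("MU DL transmission (no PS-Poll required):",
                [s.name for s in mu_dl_group])
          for s in mu_dl_group:
              s.pending_frames.clear()   # delivered in the MU DL PPDU

          # Legacy PSM STAs still send a PS-Poll before the AP delivers their data.
          print("awaiting PS-Poll from:",
                [s.name for s in pending if not s.ps_poll_less])

      beacon_and_deliver([Station("STA1", True, ["f1", "f2"]),
                          Station("STA2", True, ["f3"]),
                          Station("STA3", False, ["f4"])])
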
  • Publication number: 20200358685
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed that generate dynamic latency values. An example apparatus includes an active status controller to determine that a modem is active based on a number of packets obtained from a network, a prediction controller to predict that the number of packets are indicative of a workload type based on a trained model, and a latency value generator to generate a latency value based on the workload type of the number of packets, the latency value to cause a processor processing the number of packets to enter a power saving state or a power executing state.
    Type: Application
    Filed: July 23, 2020
    Publication date: November 12, 2020
    Inventors: Ajay Gupta, Ravikumar Balakrishnan, Shahrnaz Azizi, Maruti Gupta Hyde, Ariela Zeira, Arjun Anand, Jacob Winick
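
    A minimal Python sketch of the pipeline in the abstract: an activity check on the packet rate, a stand-in for the trained workload classifier, and a latency value that steers the processor toward a power-saving or power-executing state; every threshold and latency number here is an assumption, not a value from the publication:

      # Illustrative thresholds and latency values (microseconds).
      ACTIVE_PKTS_PER_SEC = 10
      LATENCY_US = {"idle": 1000, "background": 500, "streaming": 100, "gaming": 20}

      def modem_active(pkts_per_sec):
          """Active-status check: the modem counts as active above a packet-rate threshold."""
          return pkts_per_sec >= ACTIVE_PKTS_PER_SEC

      def predict_workload(pkts_per_sec):
          """Stand-in for the trained prediction model: classify the workload type
          from the observed packet rate."""
          if pkts_per_sec < 50:
              return "background"
          return "streaming" if pkts_per_sec < 500 else "gaming"

      def latency_value(pkts_per_sec):
          """Generate the dynamic latency value the power manager consumes: a large
          value lets the processor enter a power-saving state, a small value keeps
          it in a power-executing state."""
          if not modem_active(pkts_per_sec):
              return LATENCY_US["idle"]
          return LATENCY_US[predict_workload(pkts_per_sec)]

      for rate in (2, 30, 200, 2000):
          print(f"{rate} packets/s -> latency value {latency_value(rate)} us")
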
  • Publication number: 20200022166
    Abstract: For example, a wireless communication device may be configured to, based on a grouping criterion, select from a plurality of STAs a group of two or more STAs for a Trigger-Based (TB) Multi-User (MU) UL OFDMA control frame transmission to be communicated from the group of two or more STAs to the wireless communication device, the grouping criterion based on two or more RSSI values corresponding to the two or more STAs, respectively; to transmit a trigger frame to trigger the TB MU UL OFDMA control frame transmission, the trigger frame including two or more STA Identifiers to identify the two or more STAs, respectively; and to process the TB MU UL OFDMA control frame transmission from the group of two or more STAs, the TB MU UL OFDMA control frame transmission including two or more control frames from the two or more STAs, respectively.
    Type: Application
    Filed: September 26, 2019
    Publication date: January 16, 2020
    Inventors: Alexander W. Min, Rath Vannithamby, Arjun Anand, Vinod Kristem
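
    A short Python sketch of an RSSI-based grouping criterion and the trigger frame that identifies the selected STAs; the 6 dB spread window, the group size, and the frame fields are illustrative assumptions rather than the claimed scheme:

      # Hypothetical per-STA RSSI reports at the AP, in dBm.
      rssi_dbm = {"STA1": -45, "STA2": -48, "STA3": -62, "STA4": -64, "STA5": -80}
      MAX_SPREAD_DB = 6        # grouping criterion: RSSIs within this window
      GROUP_SIZE = 2           # at most this many STAs per trigger

      def group_for_trigger(rssi_dbm):
          """Cluster STAs whose RSSI values are close, so their trigger-based UL
          OFDMA control frames arrive at the AP with balanced power."""
          ordered = sorted(rssi_dbm, key=rssi_dbm.get, reverse=True)
          groups, current = [], []
          for sta in ordered:
              if current and (rssi_dbm[current[0]] - rssi_dbm[sta] > MAX_SPREAD_DB
                              or len(current) == GROUP_SIZE):
                  groups.append(current)
                  current = []
              current.append(sta)
          if current:
              groups.append(current)
          return groups

      def build_trigger_frame(group):
          """Trigger frame listing the STA identifiers that should respond with
          their UL OFDMA control frames."""
          return {"type": "trigger", "sta_ids": list(group)}

      for group in group_for_trigger(rssi_dbm):
          print(build_trigger_frame(group))
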