Patents by Inventor Brinda Ganesh

Brinda Ganesh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250097120
    Abstract: Examples include techniques for artificial intelligence (AI) capabilities at a network switch. These examples include receiving a request to register a neural network for loading to an inference resource located at the network switch and loading the neural network based on information included in the request to support an AI service to be provided to users requesting the AI service.
    Type: Application
    Filed: December 2, 2024
    Publication date: March 20, 2025
    Applicant: Intel Corporation
    Inventors: Francesc GUIM BERNAT, Suraj PRABHAKARAN, Kshitij A. DOSHI, Brinda GANESH, Timothy VERRALL
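
A minimal Python sketch of the registration flow described in the entry above: a request carries a serialized neural network, and the switch loads it onto a co-located inference resource keyed by the AI service it backs. Every name here (RegistrationRequest, InferenceResource, SwitchAIService) is an illustrative assumption, not terminology from the patent.

    from dataclasses import dataclass, field

    @dataclass
    class RegistrationRequest:
        """Request to register a neural network at the switch (fields are illustrative)."""
        model_id: str
        model_blob: bytes       # serialized network weights/graph
        service_name: str       # AI service the model will back

    @dataclass
    class InferenceResource:
        """Stand-in for an inference engine co-located with the network switch."""
        loaded_models: dict = field(default_factory=dict)

        def load(self, model_id: str, model_blob: bytes) -> None:
            # Real hardware would program an accelerator; here we just keep the blob.
            self.loaded_models[model_id] = model_blob

    class SwitchAIService:
        """Registers models on the switch-resident inference resource."""
        def __init__(self, resource: InferenceResource):
            self.resource = resource
            self.services = {}  # service_name -> model_id

        def register(self, req: RegistrationRequest) -> None:
            self.resource.load(req.model_id, req.model_blob)
            self.services[req.service_name] = req.model_id

    switch = SwitchAIService(InferenceResource())
    switch.register(RegistrationRequest("resnet-v1", b"\x00weights", "image-classify"))
    print(switch.services)      # {'image-classify': 'resnet-v1'}
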
  • Publication number: 20250097306
    Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes.
    Type: Application
    Filed: September 24, 2024
    Publication date: March 20, 2025
    Inventors: Francesc Guim Bernat, Patrick Bohan, Kshitij Arun Doshi, Brinda Ganesh, Andrew J. Herdrich, Monica Kenguva, Karthik Kumar, Patrick G. Kutch, Felipe Pastor Beneyto, Rashmin Patel, Suraj Prabhakaran, Ned M. Smith, Petar Torre, Alexander Vul
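
One of the techniques named in the entry above, deadline-driven resource allocation, can be sketched roughly as follows in Python; the node model and the completion-time estimate are assumptions made purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class EdgeNode:
        name: str
        queued_work_ms: float   # work already queued on the node
        speed_factor: float     # relative processing speed

    def pick_node_for_deadline(nodes, task_cost_ms, deadline_ms):
        """Return the node that finishes the task soonest while still meeting its deadline."""
        feasible = []
        for node in nodes:
            completion = (node.queued_work_ms + task_cost_ms) / node.speed_factor
            if completion <= deadline_ms:
                feasible.append((completion, node))
        if not feasible:
            return None         # caller might fall back to another node tier or reject
        return min(feasible, key=lambda pair: pair[0])[1]

    nodes = [EdgeNode("edge-a", 40.0, 1.0), EdgeNode("edge-b", 10.0, 0.5)]
    chosen = pick_node_for_deadline(nodes, task_cost_ms=20.0, deadline_ms=80.0)
    print(chosen.name if chosen else "no node meets the deadline")   # edge-a
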
  • Publication number: 20250055987
    Abstract: Techniques related to distributing the video encoding processing of an input video across hardware and software systems. Such techniques include evaluating the content of the video and determining whether the encoding operation is best performed on the hardware system only, the software system only, or a hybrid hardware and software system.
    Type: Application
    Filed: August 22, 2024
    Publication date: February 13, 2025
    Applicant: Intel Corporation
    Inventors: Brinda Ganesh, Nilesh Jain, Sumit Mohan, Faouzi Kossentini, Jill Boyce, James Holland, Zhijun Lei, Chekib Nouira, Foued Ben Amara, Hassene Tmar, Sebastian Possos, Craig Hurst
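
A simplified sketch of the dispatch decision that this entry and its granted counterpart (patent 12101475, below) describe: examine the content, then route encoding to hardware only, software only, or a hybrid of both. The complexity metric and thresholds are invented placeholders, not the patent's actual criteria.

    def choose_encoder(frame_complexity: float, hw_queue_depth: int) -> str:
        """Pick an encoding target from content complexity and hardware load.
        Thresholds are placeholders; a real system would tune them from measured
        quality/throughput trade-offs."""
        if frame_complexity < 0.3 and hw_queue_depth < 8:
            return "hardware"   # simple content: fixed-function encoder suffices
        if frame_complexity > 0.7:
            return "software"   # complex content: flexible software encoder for quality
        return "hybrid"         # split the work across both systems

    for complexity in (0.1, 0.5, 0.9):
        print(complexity, "->", choose_encoder(complexity, hw_queue_depth=2))
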
  • Patent number: 12197601
    Abstract: Examples described herein relate to offload circuitry comprising one or more compute engines that are configurable to perform a workload offloaded from a process executed by a processor based on a descriptor particular to the workload. In some examples, the offload circuitry is configurable to perform the workload, among multiple different workloads. In some examples, the multiple different workloads include one or more of: data transformation (DT) for data format conversion, Locality Sensitive Hashing (LSH) for neural network (NN), similarity search, sparse general matrix-matrix multiplication (SpGEMM) acceleration of hash based sparse matrix multiplication, data encode, data decode, or embedding lookup.
    Type: Grant
    Filed: December 22, 2021
    Date of Patent: January 14, 2025
    Assignee: Intel Corporation
    Inventors: Ren Wang, Sameh Gobriel, Somnath Paul, Yipeng Wang, Priya Autee, Abhirupa Layek, Shaman Narayana, Edwin Verplanke, Mrittika Ganguli, Jr-Shian Tsai, Anton Sorokin, Suvadeep Banerjee, Abhijit Davare, Desmond Kirkpatrick, Rajesh M. Sankaran, Jaykant B. Timbadiya, Sriram Kabisthalam Muthukumar, Narayan Ranganathan, Nalini Murari, Brinda Ganesh, Nilesh Jain
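
A software analogy for the descriptor-driven offload the entry above describes: a descriptor names the workload and carries its arguments, and a dispatcher routes it to the matching compute engine. The workload names follow the abstract; the descriptor layout and the toy engines are assumptions.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Workload(Enum):
        DATA_TRANSFORM = auto()     # data format conversion
        LSH = auto()                # locality sensitive hashing for similarity search
        SPGEMM = auto()             # sparse matrix-matrix multiplication
        EMBEDDING_LOOKUP = auto()

    @dataclass
    class Descriptor:
        """Descriptor particular to one offloaded workload (fields are illustrative)."""
        workload: Workload
        args: dict

    def dispatch(desc: Descriptor):
        """Route a descriptor to the engine implementing its workload."""
        engines = {
            Workload.EMBEDDING_LOOKUP: lambda a: [a["table"][i] for i in a["indices"]],
            Workload.DATA_TRANSFORM: lambda a: bytes(a["data"]).decode("utf-8"),
        }
        engine = engines.get(desc.workload)
        if engine is None:
            raise NotImplementedError(f"no engine wired up for {desc.workload}")
        return engine(desc.args)

    table = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
    desc = Descriptor(Workload.EMBEDDING_LOOKUP, {"table": table, "indices": [2, 0]})
    print(dispatch(desc))           # [[0.5, 0.6], [0.1, 0.2]]
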
  • Patent number: 12132790
    Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes.
    Type: Grant
    Filed: July 28, 2022
    Date of Patent: October 29, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Patrick Bohan, Kshitij Arun Doshi, Brinda Ganesh, Andrew J. Herdrich, Monica Kenguva, Karthik Kumar, Patrick G Kutch, Felipe Pastor Beneyto, Rashmin Patel, Suraj Prabhakaran, Ned M. Smith, Petar Torre, Alexander Vul
  • Patent number: 12132805
    Abstract: Technologies for fulfilling service requests in an edge architecture include an edge gateway device to receive a request from an edge device or an intermediate tier device of an edge network to perform a function of a service by an entity hosting the service. The edge gateway device is to identify one or more input data to fulfill the request by the service and request the one or more input data from an edge resource identified to provide the input data. The edge gateway device is to provide the input data to the entity associated with the request.
    Type: Grant
    Filed: December 3, 2021
    Date of Patent: October 29, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Petar Torre, Ned Smith, Brinda Ganesh, Evan Custodio, Suraj Prabhakaran
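
A rough Python sketch of the gateway flow in the entry above: identify the inputs a requested function needs, fetch each from an edge resource that can provide it, then hand the inputs to the entity hosting the service. The request and resource shapes are assumptions for illustration.

    def fulfill_request(request, edge_resources, service):
        """Gather the inputs needed to fulfill a service request, then invoke the service."""
        inputs = {}
        for name in request["required_inputs"]:
            provider = next((r for r in edge_resources if name in r["provides"]), None)
            if provider is None:
                raise LookupError(f"no edge resource provides {name}")
            inputs[name] = provider["fetch"](name)
        return service(request["function"], inputs)

    resources = [{"provides": {"camera_frame"}, "fetch": lambda n: b"jpeg-bytes"},
                 {"provides": {"location"}, "fetch": lambda n: (53.3, -6.2)}]
    request = {"function": "detect_objects", "required_inputs": ["camera_frame", "location"]}
    print(fulfill_request(request, resources,
                          service=lambda fn, inp: f"{fn} ran with {sorted(inp)}"))
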
  • Patent number: 12120012
    Abstract: A device of a service coordinating entity includes communications circuitry to communicate with a plurality of access networks via a corresponding plurality of network function virtualization (NFV) instances, processing circuitry, and a memory device. The processing circuitry is to perform operations to monitor stored performance metrics for the plurality of NFV instances. Each of the NFV instances is instantiated by a corresponding scheduler of a plurality of schedulers on a virtualization infrastructure of the service coordinating entity. A plurality of stored threshold metrics is retrieved, indicating a desired level for each of the plurality of performance metrics. A threshold condition is detected for at least one of the performance metrics for an NFV instance of the plurality of NFV instances, based on the retrieved plurality of threshold metrics. A hardware resource used by the NFV instance to communicate with an access network is adjusted based on the detected threshold condition.
    Type: Grant
    Filed: August 19, 2021
    Date of Patent: October 15, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Andrew J. Herdrich, Karthik Kumar, Felipe Pastor Beneyto, Edwin Verplanke, Rashmin Patel, Monica Kenguva, Brinda Ganesh, Alexander Vul, Ned M. Smith
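
The monitoring loop in the entry above can be caricatured in a few lines of Python: compare each stored performance metric for an NFV instance against its desired threshold and, when a threshold condition is detected, adjust the hardware resource the instance uses. The metric names and the additive adjustment policy are assumptions, not the patented mechanism.

    def adjust_nfv_resources(metrics, thresholds, allocation, step=1):
        """Grow the resource allocation for every metric that falls below its threshold."""
        for name, desired in thresholds.items():
            observed = metrics.get(name)
            if observed is not None and observed < desired:   # threshold condition detected
                allocation[name] = allocation.get(name, 0) + step
        return allocation

    metrics = {"throughput_gbps": 8.0, "packets_per_sec": 2.0e6}
    thresholds = {"throughput_gbps": 10.0, "packets_per_sec": 1.5e6}
    print(adjust_nfv_resources(metrics, thresholds, allocation={"throughput_gbps": 4}))
    # {'throughput_gbps': 5} -- only the metric below its threshold gets more resources
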
  • Patent number: 12113853
    Abstract: Example methods, apparatus, and systems to manage quality of service with respect to service level agreements in a computing device are disclosed. An example apparatus includes a first mesh proxy assigned to a first platform-agnostic application, the first mesh proxy to generate a first resource request signal based on a first service level agreement requirement from the first platform-agnostic application; a second mesh proxy assigned to a second platform-agnostic application, the second mesh proxy to generate a second resource request signal based on a second service level agreement requirement from the second platform-agnostic application; and a load balancer to allocate hardware resources for the first platform-agnostic application and the second platform-agnostic application based on the first resource request signal and the second resource request signal.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: October 8, 2024
    Assignee: INTEL CORPORATION
    Inventors: Francesc Guim Bernat, Andrew J. Herdrich, Kshitij Arun Doshi, Monica Kenguva, Ned M. Smith, Nilesh Jain, Brinda Ganesh, Rashmin Patel, Alexander Vul
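
A toy version of the arrangement in the entry above: each per-application mesh proxy turns its service level agreement requirement into a resource request, and a load balancer allocates hardware across the requests. The one-core-per-100-requests rule and the proportional scaling are invented for the sketch.

    from dataclasses import dataclass

    @dataclass
    class MeshProxy:
        """Per-application proxy that turns an SLA requirement into a resource request."""
        app_name: str
        sla_requests_per_sec: float

        def resource_request(self) -> dict:
            return {"app": self.app_name,
                    "cores": max(1, round(self.sla_requests_per_sec / 100))}

    def load_balance(requests, total_cores):
        """Allocate cores proportionally to the proxies' requests, capped by what exists."""
        asked = sum(r["cores"] for r in requests)
        scale = min(1.0, total_cores / asked) if asked else 0.0
        return {r["app"]: int(r["cores"] * scale) for r in requests}

    proxies = [MeshProxy("search", 800), MeshProxy("checkout", 200)]
    print(load_balance([p.resource_request() for p in proxies], total_cores=5))
    # {'search': 4, 'checkout': 1}
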
  • Patent number: 12101475
    Abstract: Techniques related to distributing the video encoding processing of an input video across hardware and software systems. Such techniques include evaluating the content of the video and determining whether the encoding operation is best performed on the hardware system only, the software system only, or a hybrid hardware and software system.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: September 24, 2024
    Assignee: Intel Corporation
    Inventors: Brinda Ganesh, Nilesh Jain, Sumit Mohan, Faouzi Kossentini, Jill Boyce, James Holland, Zhijun Lei, Chekib Nouira, Foued Ben Amara, Hassene Tmar, Sebastian Possos, Craig Hurst
  • Patent number: 12095844
    Abstract: Methods, apparatus, systems and articles of manufacture for re-use of a container in an edge computing environment are disclosed. An example method includes detecting that a container executed at an edge node of a cloud computing environment is to be cleaned, deleting user data from the container, the deletion of the user data performed without deleting the container from the memory of the edge node, restoring settings of the container to a default state, and storing information identifying the container, the information including a flavor of the container, the storing of the information to enable the container to be re-used by a subsequent requestor.
    Type: Grant
    Filed: October 13, 2020
    Date of Patent: September 17, 2024
    Assignee: INTEL CORPORATION
    Inventors: Francesc Guim Bernat, Brinda Ganesh, Timothy Verrall, Ned Smith, Kshitij Doshi
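
The cleaning steps in the entry above map naturally onto a short Python sketch: drop the user data without deleting the container, restore default settings, and record the container by flavor so a later requestor can re-use it. The container fields and the pool structure are assumptions.

    import copy

    DEFAULT_SETTINGS = {"env": {}, "mounts": [], "network": "default"}

    class Container:
        """Toy stand-in for an edge-node container."""
        def __init__(self, flavor):
            self.flavor = flavor                    # e.g. image plus resource shape
            self.user_data = {}
            self.settings = copy.deepcopy(DEFAULT_SETTINGS)

    def clean_for_reuse(container, reusable_pool):
        """Clean a container in place and index it by flavor for re-use."""
        container.user_data.clear()                 # delete user data, keep the container
        container.settings = copy.deepcopy(DEFAULT_SETTINGS)
        reusable_pool.setdefault(container.flavor, []).append(container)

    pool = {}
    c = Container(flavor="python3.11-512mb")
    c.user_data["session"] = "tenant-a"
    clean_for_reuse(c, pool)
    print(list(pool), pool["python3.11-512mb"][0].user_data)   # ['python3.11-512mb'] {}
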
  • Patent number: 11972001
    Abstract: Technologies for securely providing one or more remote accelerators hosted on edge resources to a client compute device includes a device that further includes an accelerator and one or more processors. The one or more processors are to determine whether to enable acceleration of an encrypted workload, receive, via an edge network, encrypted data from a client compute device, and transfer the encrypted data to the accelerator without exposing content of the encrypted data to the one or more processors. The accelerator is to receive, in response to a determination to enable the acceleration of the encrypted workload, an accelerator key from a secure server via a secured channel, and process, in response to a transfer of the encrypted data from the one or more processors, the encrypted data using the accelerator key.
    Type: Grant
    Filed: May 13, 2022
    Date of Patent: April 30, 2024
    Assignee: Intel Corporation
    Inventors: Ned M. Smith, Brinda Ganesh, Francesc Guim Bernat, Eoin Walsh, Evan Custodio
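
The host/accelerator split in the entry above, sketched as control flow: the processors forward ciphertext they never interpret, while the accelerator, which obtained its key from a secure server over a secured channel, is the only component that touches plaintext. The XOR "cipher" is a placeholder so the example stays self-contained; it is not the cryptography the patent implies.

    def xor_bytes(data: bytes, key: bytes) -> bytes:
        """Placeholder cipher so the flow is runnable; not real cryptography."""
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    class Accelerator:
        def __init__(self):
            self._key = None

        def receive_key(self, key: bytes):
            self._key = key                  # delivered over a secured channel in the design

        def process(self, ciphertext: bytes) -> bytes:
            plaintext = xor_bytes(ciphertext, self._key)
            return plaintext.upper()         # stand-in for the accelerated workload

    def host_path(ciphertext: bytes, accel: Accelerator) -> bytes:
        # The host only moves opaque bytes; it never sees the key or the plaintext.
        return accel.process(ciphertext)

    key = b"\x42" * 8
    accel = Accelerator()
    accel.receive_key(key)
    print(host_path(xor_bytes(b"edge workload", key), accel))    # b'EDGE WORKLOAD'
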
  • Publication number: 20240080246
    Abstract: Examples include techniques for artificial intelligence (AI) capabilities at a network switch. These examples include receiving a request to register a neural network for loading to an inference resource located at the network switch and loading the neural network based on information included in the request to support an AI service to be provided to users requesting the AI service.
    Type: Application
    Filed: October 2, 2023
    Publication date: March 7, 2024
    Inventors: Francesc GUIM BERNAT, Suraj PRABHAKARAN, Kshitij A. DOSHI, Brinda GANESH, Timothy VERRALL
  • Patent number: 11824732
    Abstract: Examples include techniques for artificial intelligence (AI) capabilities at a network switch. These examples include receiving a request to register a neural network for loading to an inference resource located at the network switch and loading the neural network based on information included in the request to support an AI service to be provided to users requesting the AI service.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: November 21, 2023
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Suraj Prabhakaran, Kshitij A. Doshi, Brinda Ganesh, Timothy Verrall
  • Patent number: 11797343
    Abstract: Data management for edge architected computing systems extends current storage and memory schemes of edge resources to expose interfaces to allow a device, such as an endpoint or client device, or another edge resource, to specify criteria for managing data originating from the device and stored in an edge resource, and extends the storage and memory controllers to manage data in accordance with the criteria, including removing stored data that no longer satisfies the criteria. The criteria includes a temporal hint to specify a time after which the data can be removed, a physical hint to specify a list of edge resources outside of which the data can be removed, an event-based hint to specify an event after which the data can be removed, and a quality of service condition to modify the time specified in the temporal hint based on a condition, such as memory and storage capacity of the edge resource in which the data is managed.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: October 24, 2023
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Ramanathan Sethuraman, Karthik Kumar, Mark A. Schmisseur, Brinda Ganesh
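
A small sketch of how the hints listed in the entry above could be evaluated against a stored record; the field names, and the choice to model only the temporal, physical, and event-based hints (not the quality-of-service adjustment), are made purely for the example.

    import time

    def removable(record, now=None, local_resource=None, events_seen=()):
        """Return True when any criterion attached to the record says it may be removed."""
        now = time.time() if now is None else now
        if "expire_after" in record and now > record["expire_after"]:
            return True                     # temporal hint
        if "keep_only_on" in record and local_resource not in record["keep_only_on"]:
            return True                     # physical hint: list of edge resources
        if "remove_on_event" in record and record["remove_on_event"] in events_seen:
            return True                     # event-based hint
        return False

    record = {"payload": b"...", "expire_after": 1_000.0, "keep_only_on": {"edge-a"}}
    print(removable(record, now=2_000.0, local_resource="edge-a"))   # True: expired
    print(removable(record, now=500.0, local_resource="edge-b"))     # True: wrong resource
    print(removable(record, now=500.0, local_resource="edge-a"))     # False
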
  • Publication number: 20230102279
    Abstract: Systems, methods, and apparatuses relating to sparsity-based FMA. In some examples, an instance of a single FMA instruction has one or more fields for an opcode, one or more fields to identify a source/destination matrix operand, one or more fields to identify a first plurality of source matrix operands, one or more fields to identify a second plurality of matrix operands, wherein the opcode is to indicate that execution circuitry is to select a proper subset of data elements from the first plurality of source matrix operands based on sparsity controls from a first matrix operand of the second plurality of matrix operands and perform an FMA.
    Type: Application
    Filed: September 25, 2021
    Publication date: March 30, 2023
    Inventors: Menachem ADELMAN, Robert VALENTINE, Dan BAUM, Amit GRADSTEIN, Simon RUBANOVICH, Regev SHEMY, Zeev SPERBER, Alexander HEINECKE, Christopher HUGHES, Evangelos GEORGANAS, Mark CHARNEY, Arik NARKIS, Rinat RAPPOPORT, Barukh ZIV, Yaroslav POLLAK, Nilesh JAIN, Yash AKHAURI, Brinda GANESH, Rajesh POORNACHANDRAN, Guy BOUDOUKH
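
A scalar model of the select-then-FMA behavior the entry above describes: sparsity controls pick a proper subset of elements from the first source, and only those elements are multiplied against the compressed second source and accumulated. The 2-of-4 grouping and operand layout are assumptions for illustration, not the instruction's actual encoding.

    def sparse_fma(acc, src_a, src_b, controls, group=4):
        """Accumulate products of the selected src_a elements with compressed src_b."""
        b_idx = 0
        for g, ctrl in enumerate(controls):
            base = g * group
            for pos in ctrl:                # positions kept within this group of src_a
                acc += src_a[base + pos] * src_b[b_idx]
                b_idx += 1
        return acc

    src_a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]   # dense operand
    src_b = [10.0, 10.0, 10.0, 10.0]                   # compressed: 2 values kept per group of 4
    controls = [(0, 2), (1, 3)]                        # keep a[0], a[2], then a[5], a[7]
    print(sparse_fma(0.0, src_a, src_b, controls))     # (1 + 3 + 6 + 8) * 10 = 180.0
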
  • Publication number: 20230091205
    Abstract: Methods and apparatus relating to memory side prefetch architecture for improved memory bandwidth are described. In an embodiment, logic circuitry transmits a Direct Memory Controller Prefetch (DMCP) request to cause a prefetch of data from memory. A memory controller receives the DMCP request and issues a plurality of read operations to the memory in response to the DMCP request. Data read from the memory is stored in a storage structure in response to the plurality of read operations. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: September 20, 2021
    Publication date: March 23, 2023
    Applicant: Intel Corporation
    Inventors: Adrian Moga, Ugonna Echeruo, Eduard Roytman, Krishnakanth Sistla, Joseph Nuzman, Brinda Ganesh, Meenakshisundaram Chinthamani, Yen-Cheng Liu, Sai Prashanth Muralidhara, Vivek Kozhikkottu, Hanna Alam, Narasimha Sridhar Srirangam
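
A toy model of the flow in the entry above: one Direct Memory Controller Prefetch (DMCP) request fans out into several reads at the memory controller, and the returned lines land in a staging structure near memory rather than travelling back to the requesting core. Line size, buffer shape, and names are illustrative.

    class MemoryController:
        LINE = 64                               # bytes per read (assumed)

        def __init__(self, memory: bytes):
            self.memory = memory
            self.prefetch_buffer = {}           # address -> staged line

        def dmcp(self, start: int, num_lines: int):
            """One DMCP request triggers a plurality of reads into the staging buffer."""
            for i in range(num_lines):
                addr = start + i * self.LINE
                self.prefetch_buffer[addr] = self.memory[addr:addr + self.LINE]

        def read(self, addr: int) -> bytes:
            """A later demand read is served from the staging buffer when it hits."""
            return self.prefetch_buffer.get(addr, self.memory[addr:addr + self.LINE])

    mc = MemoryController(bytes(range(256)) * 16)
    mc.dmcp(start=128, num_lines=4)             # logic circuitry issues one DMCP request
    print(sorted(mc.prefetch_buffer))           # [128, 192, 256, 320]
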
  • Publication number: 20230035468
    Abstract: Technologies for securely providing one or more remote accelerators hosted on edge resources to a client compute device includes a device that further includes an accelerator and one or more processors. The one or more processors are to determine whether to enable acceleration of an encrypted workload, receive, via an edge network, encrypted data from a client compute device, and transfer the encrypted data to the accelerator without exposing content of the encrypted data to the one or more processors. The accelerator is to receive, in response to a determination to enable the acceleration of the encrypted workload, an accelerator key from a secure server via a secured channel, and process, in response to a transfer of the encrypted data from the one or more processors, the encrypted data using the accelerator key.
    Type: Application
    Filed: May 13, 2022
    Publication date: February 2, 2023
    Inventors: Ned M. Smith, Brinda Ganesh, Francesc Guim Bernat, Eoin Walsh, Evan Custodio
  • Publication number: 20230022620
    Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes.
    Type: Application
    Filed: July 28, 2022
    Publication date: January 26, 2023
    Inventors: Francesc Guim Bernat, Patrick Bohan, Kshitij Arun Doshi, Brinda Ganesh, Andrew J. Herdrich, Monica Kenguva, Karthik Kumar, Patrick G. Kutch, Felipe Pastor Beneyto, Rashmin Patel, Suraj Prabhakaran, Ned M. Smith, Petar Torre, Alexander Vul
  • Patent number: 11451435
    Abstract: Technologies for providing multi-tenant support in edge resources using edge channels include a device that includes circuitry to obtain a message associated with a service provided at the edge of a network. Additionally, the circuitry is to identify an edge channel based on metadata associated with the message. The edge channel has a predefined amount of resource capacity allocated to the edge channel to process the message. Further, the circuitry is to determine the predefined amount of resource capacity allocated to the edge channel and process the message using the allocated resource capacity for the identified edge channel.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: September 20, 2022
    Assignee: INTEL CORPORATION
    Inventors: Francesc Guim Bernat, Karthik Kumar, Benjamin Graniello, Timothy Verrall, Andrew J. Herdrich, Rashmin Patel, Monica Kenguva, Brinda Ganesh, Alexander Vul, Ned M. Smith, Suraj Prabhakaran
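
A compact sketch of the per-tenant channel handling the entry above describes: the message's metadata identifies the edge channel, the channel's predefined capacity is looked up, and the message is processed against that budget. The metadata key and the unit of "capacity" are assumptions.

    class EdgeChannel:
        """Channel with a predefined amount of resource capacity (units are abstract)."""
        def __init__(self, name, capacity):
            self.name = name
            self.capacity = capacity

    def process_message(message, channels):
        """Identify the channel from message metadata and spend its allocated capacity."""
        channel = channels[message["metadata"]["tenant"]]
        if message["cost"] > channel.capacity:
            return f"{channel.name}: rejected (over allocated capacity)"
        channel.capacity -= message["cost"]
        return f"{channel.name}: processed, {channel.capacity} capacity left"

    channels = {"tenant-a": EdgeChannel("chan-a", 10), "tenant-b": EdgeChannel("chan-b", 2)}
    print(process_message({"metadata": {"tenant": "tenant-a"}, "cost": 4}, channels))
    print(process_message({"metadata": {"tenant": "tenant-b"}, "cost": 4}, channels))
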
  • Patent number: 11412052
    Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: August 9, 2022
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Patrick Bohan, Kshitij Arun Doshi, Brinda Ganesh, Andrew J. Herdrich, Monica Kenguva, Karthik Kumar, Patrick G Kutch, Felipe Pastor Beneyto, Rashmin Patel, Suraj Prabhakaran, Ned M. Smith, Petar Torre, Alexander Vul