Patents by Inventor Ranganath Sunku

Ranganath Sunku has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11888710
    Abstract: Technologies for managing cache quality of service (QoS) include a compute node that includes a network interface controller (NIC) configured to identify a total amount of available shared cache ways of a last level cache (LLC) of the compute node and identify a destination address for each of a plurality of virtual machines (VMs) managed by the compute node. The NIC is further configured to calculate a recommended amount of cache ways for each workload type associated with VMs based on network traffic to be received by the NIC and processed by each of the VMs, wherein the recommended amount of cache ways includes a recommended amount of hardware I/O LLC cache ways and a recommended amount of isolated LLC cache ways usable to update a cache QoS register that includes the recommended amount of cache ways for each workload type. Other embodiments are described herein.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: January 30, 2024
    Assignee: Intel Corporation
    Inventors: Iosif Gasparakis, Malini Bhandaru, Ranganath Sunku
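    A minimal sketch of the cache-way recommendation described above (not taken from the patent text): shared LLC ways are split across workload types in proportion to the NIC traffic the associated VMs are expected to handle, then divided into hardware I/O and isolated ways and encoded as a way bitmask suitable for updating a cache QoS register. The 12-way LLC, traffic figures, and the even I/O/isolated split are illustrative assumptions.
    ```python
    # Minimal sketch: proportionally split shared LLC ways between workload
    # types, divide each allocation into hardware I/O and isolated ways, and
    # encode it as a contiguous way bitmask (in the spirit of a cache QoS
    # register update). Traffic figures and the 12-way LLC are assumptions.

    def recommend_cache_ways(total_ways, traffic_by_workload):
        total_traffic = sum(traffic_by_workload.values())
        plan, next_way = {}, 0
        for workload, traffic in sorted(traffic_by_workload.items()):
            ways = max(1, round(total_ways * traffic / total_traffic))
            ways = min(ways, total_ways - next_way)      # don't overrun the LLC
            if ways == 0:
                continue                                 # no ways left to assign
            io_ways = max(1, ways // 2)                  # hardware I/O LLC ways
            isolated_ways = ways - io_ways               # isolated LLC ways
            mask = ((1 << ways) - 1) << next_way         # contiguous way bitmask
            plan[workload] = {"io_ways": io_ways,
                              "isolated_ways": isolated_ways,
                              "way_mask": f"{mask:#05x}"}
            next_way += ways
        return plan

    if __name__ == "__main__":
        # Expected Mbps of NIC traffic per workload type (assumed values).
        traffic = {"latency-sensitive": 4000, "throughput": 8000, "best-effort": 1000}
        for workload, rec in recommend_cache_ways(12, traffic).items():
            print(workload, rec)
    ```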
  • Publication number: 20240031236
    Abstract: A cross-domain distributed network function may be constructed by instantiating a local-domain endpoint for a first application component. Here, the local-domain endpoint is in a first network domain that includes the first application component. A connection to an extra-domain endpoint may then be made. Here, the extra-domain endpoint is in a second network domain that does not include the first network domain, and the second network domain includes a second application component for the application. The local-domain endpoint may then provide a network service for a third network domain that includes the application. The first application component may then use that network service to connect to the second application component.
    Type: Application
    Filed: September 29, 2023
    Publication date: January 25, 2024
    Inventors: Akhilesh S. Thyagaturu, Mohit Kumar Garg, Ranganath Sunku
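    A minimal sketch of the endpoint flow described above (not from the patent text; class, domain, and component names are illustrative): a local-domain endpoint is instantiated for the first application component, connected to an extra-domain endpoint in the second component's domain, and then exposed as a network service the first component uses to reach the second.
    ```python
    # Minimal sketch of the described flow, using plain objects rather than any
    # real networking stack; domain and component names are illustrative.

    class Endpoint:
        def __init__(self, name, domain):
            self.name, self.domain, self.peer = name, domain, None

        def connect(self, other):
            # Join a local-domain endpoint to an extra-domain endpoint.
            self.peer, other.peer = other, self

        def forward(self, message):
            # The endpoint pair relays traffic between domains.
            return f"{self.name} -> {self.peer.name}: {message}"

    def build_cross_domain_function(component_a, domain_a, component_b, domain_b):
        # 1. Instantiate a local-domain endpoint in the first component's domain.
        local_ep = Endpoint(f"ep-{domain_a}", domain_a)
        # 2. Connect it to an extra-domain endpoint in the second component's domain.
        extra_ep = Endpoint(f"ep-{domain_b}", domain_b)
        local_ep.connect(extra_ep)
        # 3. Expose a network service the application can use across domains.
        def network_service(message):
            return local_ep.forward(f"{component_a} to {component_b}: {message}")
        return network_service

    service = build_cross_domain_function("frontend", "edge-site", "backend", "cloud-region")
    print(service("hello"))  # the first component reaches the second through the endpoint pair
    ```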
  • Publication number: 20240004708
    Abstract: Systems, apparatus, articles of manufacture, and methods are disclosed for adaptive platform power management. These improve energy source management by switching energy sources of an edge node, incorporating memory, machine-readable instructions, and processor circuitry to: evaluate operational parameters of a first energy source connected to a node and a second energy source connected to the node; determine a preferred energy source to run a workload of the edge node based on a comparison of a first renewability to a second renewability, the evaluation of the operational parameters, and a power requirement of the workload, wherein the preferred energy source is the first energy source or the second energy source; and cause the edge node to switch to the preferred energy source.
    Type: Application
    Filed: June 29, 2023
    Publication date: January 4, 2024
    Inventors: Neal Conrad Oliver, Ranganath Sunku, Francesc Guim Bernat, Kannan Babu Ramachari Ramia, Gregory James Allison
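    A minimal sketch of the energy-source selection described above (field names, renewability scores, and power figures are assumptions, not taken from the patent): each source's operational parameters are checked against the workload's power requirement, and the more renewable viable source becomes the preferred source the edge node switches to.
    ```python
    # Minimal sketch: pick a preferred energy source for an edge-node workload by
    # comparing renewability and checking operational parameters against the
    # workload's power requirement. Field names and values are assumptions.

    def choose_energy_source(first, second, workload_power_watts):
        candidates = [s for s in (first, second)
                      if s["available_watts"] >= workload_power_watts
                      and s["healthy"]]
        if not candidates:
            return None  # neither source can run the workload right now
        # Prefer the more renewable source; break ties on available headroom.
        return max(candidates,
                   key=lambda s: (s["renewability"], s["available_watts"]))

    solar = {"name": "solar", "renewability": 0.9, "available_watts": 120, "healthy": True}
    grid  = {"name": "grid",  "renewability": 0.3, "available_watts": 500, "healthy": True}

    preferred = choose_energy_source(solar, grid, workload_power_watts=100)
    print("switch edge node to:", preferred["name"] if preferred else "no source")
    ```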
  • Publication number: 20230105491
    Abstract: Examples described herein relate to a system to estimate latency of operations of a process, based on received performance values, without receiving a latency value directly, and/or to estimate throughput of packets transmitted for the process, based on received performance values, without receiving a throughput value directly. In some examples, the system is to request to adjust resource allocation to perform the process based on the determined latency and throughput.
    Type: Application
    Filed: December 2, 2022
    Publication date: April 6, 2023
    Inventors: Mrittika Ganguli, Dmytro Yermolenko, Adrian C. Moga, Abhirupa Layek, Qiming Liu, Robert Zmuda Trzebiatowski, Rafal Sznejder, Piotr Wysocki, Mohan J. Kumar, Ranganath Sunku, Vishakh Nair
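    A minimal sketch of the indirect estimation described above (counter names, the Little's-law derivation, and the thresholds are assumptions, not from the patent): latency and throughput are derived from raw performance counters rather than reported directly, and a resource-allocation request is made when either estimate misses its target.
    ```python
    # Minimal sketch: derive latency and throughput indirectly from performance
    # counters (no direct latency/throughput telemetry), then decide whether to
    # request more resources. Counter names and thresholds are assumptions.

    def estimate_from_counters(sample):
        # Little's law: average latency ~= in-flight operations / completion rate.
        completion_rate = sample["ops_completed"] / sample["interval_s"]
        latency_s = sample["ops_in_flight"] / completion_rate if completion_rate else float("inf")
        # Throughput from transmitted bytes over the sampling interval.
        throughput_bps = 8 * sample["bytes_tx"] / sample["interval_s"]
        return latency_s, throughput_bps

    def resource_request(sample, latency_slo_s, min_throughput_bps):
        latency_s, throughput_bps = estimate_from_counters(sample)
        if latency_s > latency_slo_s or throughput_bps < min_throughput_bps:
            return {"action": "increase_allocation", "latency_s": latency_s,
                    "throughput_bps": throughput_bps}
        return {"action": "hold", "latency_s": latency_s, "throughput_bps": throughput_bps}

    sample = {"ops_completed": 50_000, "ops_in_flight": 120,
              "bytes_tx": 2_500_000_000, "interval_s": 1.0}
    print(resource_request(sample, latency_slo_s=0.002, min_throughput_bps=10e9))
    ```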
  • Publication number: 20220413915
    Abstract: Techniques are disclosed for the cell/cluster formation of compute nodes and workload and processing resource scheduling. Compute nodes within an environment may be grouped (clustered) together to perform one or more designated workload tasks. The clustered compute nodes may be associated with (or assigned to) a workload cell formed to perform one or more identified task(s).
    Type: Application
    Filed: September 2, 2022
    Publication date: December 29, 2022
    Inventors: Raju Arvind, Anil Keshavamurthy, Greeshma Pisharody, Masoud Sajadieh, Mukund Shenoy, Ranganath Sunku
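    A minimal sketch of the cell formation described above (node inventory, capabilities, cell sizes, and task names are illustrative assumptions): compute nodes are clustered into workload cells by a required capability, and each cell is assigned the task(s) it was formed to perform.
    ```python
    # Minimal sketch: group compute nodes into workload cells and assign each
    # cell its designated tasks, matching nodes to a cell by a required
    # capability. Node inventory, capabilities, and task names are illustrative.

    from collections import defaultdict

    def form_cells(nodes, cell_specs):
        cells = defaultdict(list)
        for node in nodes:
            for cell_name, spec in cell_specs.items():
                if spec["capability"] in node["capabilities"] and len(cells[cell_name]) < spec["size"]:
                    cells[cell_name].append(node["name"])
                    break  # each node joins at most one cell
        return {name: {"nodes": members, "tasks": cell_specs[name]["tasks"]}
                for name, members in cells.items()}

    nodes = [{"name": "node-1", "capabilities": {"gpu"}},
             {"name": "node-2", "capabilities": {"gpu"}},
             {"name": "node-3", "capabilities": {"sr-iov"}}]
    cell_specs = {"inference-cell": {"capability": "gpu", "size": 2, "tasks": ["video-analytics"]},
                  "packet-cell": {"capability": "sr-iov", "size": 1, "tasks": ["packet-processing"]}}
    print(form_cells(nodes, cell_specs))
    ```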
  • Patent number: 11494212
    Abstract: Technologies for adaptive platform resource management include a compute node to manage a processor core mapping scheme between virtual machines (VMs) and a virtual switch of the compute node via a set of virtual ports. The virtual switch is also coupled to a network interface controller (NIC) of the compute node via another set of virtual ports. Each of the VMs is configured to either provide ingress or egress to the NIC or provide ingress/egress across the VMs, via the virtual ports. The virtual ports for providing ingress or egress to the NIC are pinned to a same processor core of a processor of the compute node, and each of the virtual ports for providing ingress and/or egress across the VMs are pinned to a respective processor core of the processor such that data is transferred across VMs by coupled virtual ports that are pinned to the same processor core.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: November 8, 2022
    Assignee: Intel Corporation
    Inventors: Ranganath Sunku, Dinesh Kumar, Irene Liew, Kavindya Deegala, Sravanthi Tangeda
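    A minimal sketch of the core-mapping scheme described above (port names and core numbers are illustrative assumptions): virtual ports that provide ingress or egress to the NIC are pinned to one shared core, while each pair of virtual ports carrying traffic between two VMs is pinned to its own core so both ends of the pair share that core.
    ```python
    # Minimal sketch of the core-mapping scheme: virtual ports carrying NIC
    # ingress/egress all share one core, while each VM-to-VM port pair gets its
    # own dedicated core so both ends of the pair stay on the same core.
    # Port names and core numbers are illustrative.

    def map_ports_to_cores(nic_ports, vm_to_vm_pairs, cores):
        mapping = {}
        nic_core, remaining = cores[0], list(cores[1:])
        for port in nic_ports:
            mapping[port] = nic_core        # all NIC-facing ports share one core
        for pair in vm_to_vm_pairs:
            core = remaining.pop(0)         # one dedicated core per port pair
            for port in pair:
                mapping[port] = core        # both ends pinned to the same core
        return mapping

    nic_ports = ["vport-nic-rx", "vport-nic-tx"]
    vm_to_vm_pairs = [("vport-vm1-out", "vport-vm2-in"),
                      ("vport-vm2-out", "vport-vm3-in")]
    print(map_ports_to_cores(nic_ports, vm_to_vm_pairs, cores=[0, 1, 2]))
    ```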
  • Publication number: 20190042298
    Abstract: Technologies for adaptive platform resource management include a compute node to manage a processor core mapping scheme between virtual machines (VMs) and a virtual switch of the compute node via a set of virtual ports. The virtual switch is also coupled to a network interface controller (NIC) of the compute node via another set of virtual ports. Each of the VMs is configured to either provide ingress or egress to the NIC or provide ingress/egress across the VMs, via the virtual ports. The virtual ports for providing ingress or egress to the NIC are pinned to a same processor core of a processor of the compute node, and each of the virtual ports for providing ingress and/or egress across the VMs are pinned to a respective processor core of the processor such that data is transferred across VMs by coupled virtual ports that are pinned to the same processor core.
    Type: Application
    Filed: September 27, 2018
    Publication date: February 7, 2019
    Inventors: Ranganath Sunku, Dinesh Kumar, Irene Liew, Kavindya Deegala, Sravanthi Tangeda
  • Publication number: 20190044828
    Abstract: Technologies for managing cache quality of service (QoS) include a compute node that includes a network interface controller (NIC) configured to identify a total amount of available shared cache ways of a last level cache (LLC) of the compute node and identify a destination address for each of a plurality of virtual machines (VMs) managed by the compute node. The NIC is further configured to calculate a recommended amount of cache ways for each workload type associated with VMs based on network traffic to be received by the NIC and processed by each of the VMs, wherein the recommended amount of cache ways includes a recommended amount of hardware I/O LLC cache ways and a recommended amount of isolated LLC cache ways usable to update a cache QoS register that includes the recommended amount of cache ways for each workload type. Other embodiments are described herein.
    Type: Application
    Filed: September 25, 2018
    Publication date: February 7, 2019
    Inventors: Iosif Gasparakis, Malini Bhandaru, Ranganath Sunku