Patents by Inventor Parthasarathy Sarangam

Parthasarathy Sarangam has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10819643
    Abstract: Embodiments regard load balancing data on one or more network ports. A device may include processing circuitry, the processing circuitry to transmit a first packet of a first series of packets to a destination device via a first port, determine a time gap between a first packet and a second packet of the first series of packets, and in response to a determination that the time gap is greater than a time threshold, transmit the second packet to the destination device via a second port.
    Type: Grant
    Filed: February 22, 2019
    Date of Patent: October 27, 2020
    Assignee: Intel Corporation
    Inventors: Patrick L. Connor, Parthasarathy Sarangam
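A minimal C sketch of the time-gap-based port selection this abstract describes; the threshold value, function names, and port numbering are illustrative assumptions, not taken from the patent.

```c
/* Minimal sketch of time-gap-based port selection, as described in the
 * abstract above. All names and the threshold value are illustrative
 * assumptions, not taken from the patent itself. */
#include <stdint.h>
#include <stdio.h>

#define TIME_THRESHOLD_NS 50000ULL  /* hypothetical gap threshold */

/* Choose the egress port for the next packet of a flow: stay on the
 * current port unless the inter-packet gap exceeded the threshold,
 * in which case a second port may be used without risking reordering. */
static int select_port(uint64_t prev_tx_ns, uint64_t now_ns,
                       int current_port, int alternate_port)
{
    if (now_ns - prev_tx_ns > TIME_THRESHOLD_NS)
        return alternate_port;   /* gap is large enough to switch ports */
    return current_port;         /* keep the flow on its current port */
}

int main(void)
{
    /* First packet sent at t=0 on port 0; second packet 80 us later. */
    int port = select_port(0, 80000, 0, 1);
    printf("second packet goes out on port %d\n", port);
    return 0;
}
```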
  • Publication number: 20200314011
    Abstract: Flexible schemes for adding rules to a NIC pipeline and associated apparatus. Multiple match-action tables are implemented in host memory of a platform defining actions to be taken for matching packet flows. A packet processing pipeline and an exact match (EM) cache are implemented on a network interface, such as a NIC, installed in the platform. A portion of the match-action entries in the host memory match-action tables are cached in the EM cache. Received packets are processed to generate a key that is used as a lookup for the EM cache. If a match is found, the action is taken. For a miss, the key is forwarded to the host software and the match-action tables are searched. For a match, the action is taken, and the entry is added to the EM cache. If no match is found, a new match-action entry is added to a match-action table. Aging-out mechanisms are used for the match-action tables and the EM cache. A multi-hash scheme is used that supports a very large number of match-action entries.
    Type: Application
    Filed: June 16, 2020
    Publication date: October 1, 2020
    Inventors: Manasi Deval, Elazar Cohen, Shaul Yifrach, Parthasarathy Sarangam
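A minimal C sketch of the lookup flow in this abstract: an on-NIC exact-match cache in front of host-memory match-action tables, with cache insertion on a host-table hit. The key derivation, table sizes, and action codes are assumptions.

```c
/* Minimal sketch of an exact-match (EM) cache in front of host-memory
 * match-action tables, loosely following the flow in the abstract. */
#include <stdint.h>
#include <stdio.h>

#define EM_CACHE_SIZE 256

struct match_action { uint64_t key; int action; int valid; };

static struct match_action em_cache[EM_CACHE_SIZE];     /* on-NIC cache   */
static struct match_action host_table[4096];            /* in host memory */
static int host_entries;

/* Look up a flow key: a hit in the EM cache takes the cached action;
 * a miss falls back to the host table and, on a match, installs the
 * entry into the cache for subsequent packets. */
static int lookup_action(uint64_t key)
{
    struct match_action *slot = &em_cache[key % EM_CACHE_SIZE];
    if (slot->valid && slot->key == key)
        return slot->action;                   /* EM cache hit */

    for (int i = 0; i < host_entries; i++) {   /* host-table search */
        if (host_table[i].key == key) {
            *slot = host_table[i];             /* cache the entry */
            slot->valid = 1;
            return slot->action;
        }
    }
    return -1;  /* no match: a new match-action entry would be added */
}

int main(void)
{
    host_table[host_entries++] = (struct match_action){ .key = 42, .action = 7, .valid = 1 };
    printf("action for key 42: %d (miss path)\n", lookup_action(42));
    printf("action for key 42: %d (cache hit)\n", lookup_action(42));
    return 0;
}
```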
  • Patent number: 10768841
    Abstract: Technologies for managing network statistic counters include a network interface controller (NIC) of a computing device configured to identify a statistic counter of a received network packet and a software consumer associated with the received network packet, and to identify an active counter page as a function of the identified software consumer. The NIC is further configured to read a value of the statistic counter stored at a counter memory address of a corresponding counter identifier entry of the identified active counter page, increment the read value of the statistic counter, and write the incremented value of the statistic counter back to the counter memory address. Additionally, in response to detecting a notification triggering event, the NIC generates a notification message that includes a present value of the statistic counter and a present value of each of the other statistic counters of the active counter page, and transmits the generated notification message to the software consumer. Other embodiments are described herein.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: September 8, 2020
    Assignee: Intel Corporation
    Inventors: Linden Cornett, Chih-Jen Chang, Manasi Deval, Parthasarathy Sarangam, Naru D. Sundar, Padma Akkiraju, Alexander Nguyen
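A minimal C sketch of per-consumer counter pages with a read-increment-write update and a notification that snapshots the whole active page, as outlined above. The page layout, page size, and notification mechanism are illustrative assumptions.

```c
/* Minimal sketch of per-consumer statistic counter pages. */
#include <stdint.h>
#include <stdio.h>

#define COUNTERS_PER_PAGE 8

struct counter_page {
    uint64_t counters[COUNTERS_PER_PAGE];  /* values at counter memory addresses */
};

/* Read-modify-write of one statistic counter on the consumer's active page. */
static void bump_counter(struct counter_page *page, int counter_id)
{
    uint64_t v = page->counters[counter_id];  /* read current value      */
    page->counters[counter_id] = v + 1;       /* write incremented value */
}

/* On a notification-triggering event, report the present value of every
 * counter on the active page to the software consumer (stdout here). */
static void notify_consumer(const struct counter_page *page)
{
    for (int i = 0; i < COUNTERS_PER_PAGE; i++)
        printf("counter[%d] = %llu\n", i, (unsigned long long)page->counters[i]);
}

int main(void)
{
    struct counter_page page = { 0 };
    bump_counter(&page, 3);       /* e.g., rx_packets for this consumer */
    bump_counter(&page, 3);
    notify_consumer(&page);
    return 0;
}
```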
  • Publication number: 20200274952
    Abstract: Technologies for programming flexible accelerated network pipelines include a computing device with a network controller. The computing device loads a program binary file that includes a packet processing program and a requested hint section. The binary file may be an executable and linkable format (ELF) file with an extended Berkeley packet filter (eBPF) program. The computing device determines a hardware configuration for the network controller based on the requested offload hints and programs the network controller. The network controller processes network packets with the requested offloads, such as packet classification, hashing, checksums, traffic shaping, or other offloads. The network controller returns results of the offloads as hints in metadata. The packet processing program performs actions based on the metadata, such as forwarding, dropping, packet modification, or other actions. The computing device may compile an eBPF source file to generate the binary file.
    Type: Application
    Filed: September 10, 2018
    Publication date: August 27, 2020
    Inventors: Peter P. WASKIEWICZ, Anjali Singhai JAIN, Neerav PARIKH, Parthasarathy SARANGAM
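A minimal sketch, in plain C, of a packet-processing step that consumes offload results ("hints") returned by the hardware in per-packet metadata, in the spirit of the abstract above. It is not a real eBPF program; the metadata layout and action codes are assumptions.

```c
/* Sketch of acting on hardware offload hints carried in packet metadata. */
#include <stdint.h>
#include <stdio.h>

enum action { ACT_DROP, ACT_FORWARD };

struct pkt_metadata {
    uint32_t rx_hash;        /* hash computed by the NIC       */
    uint8_t  csum_ok;        /* checksum verified in hardware  */
    uint8_t  class_id;       /* classification result          */
};

/* Decide what to do with a packet based purely on the hardware hints. */
static enum action process_packet(const struct pkt_metadata *md)
{
    if (!md->csum_ok)
        return ACT_DROP;                   /* bad checksum: drop        */
    if (md->class_id == 0)
        return ACT_DROP;                   /* unclassified traffic      */
    return ACT_FORWARD;                    /* forward, e.g. by rx_hash  */
}

int main(void)
{
    struct pkt_metadata md = { .rx_hash = 0xdeadbeef, .csum_ok = 1, .class_id = 3 };
    printf("%s\n", process_packet(&md) == ACT_FORWARD ? "forward" : "drop");
    return 0;
}
```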
  • Publication number: 20200204503
    Abstract: Packets received non-contiguously from a network are processed by a network interface controller by coalescing received packet payload into receive buffers on a receive buffer queue and writing descriptors associated with the receive buffers for a same flow consecutively in a receive completion queue. System performance is optimized by reusing a small working set of provisioned receive buffers to minimize the memory footprint of memory allocated to store packet data. The remainder of the provisioned buffers are in an overflow queue and can be assigned to the network interface controller if the small working set of receive buffers is not sufficient to keep up with the received packet rate. The receive buffer queue can be refilled based on either timers or when the number of buffers in the receive buffer queue is below a configurable low watermark.
    Type: Application
    Filed: March 2, 2020
    Publication date: June 25, 2020
    Inventors: Linden CORNETT, Noam ELATI, Anjali Singhai JAIN, Parthasarathy SARANGAM, Eliel LOUZOUN, Manasi DEVAL
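A minimal C sketch of refilling a small working set of receive buffers from an overflow pool when the receive buffer queue drops below a low watermark, as the abstract describes. The queue sizes, the watermark, and all names are illustrative assumptions.

```c
/* Sketch of low-watermark refill of a small working set of receive buffers. */
#include <stdio.h>

#define WORKING_SET    8    /* provisioned buffers handed to the NIC   */
#define LOW_WATERMARK  2    /* refill when fewer than this many remain */

static int rx_buffers_avail = WORKING_SET;  /* buffers on the receive buffer queue */
static int overflow_avail   = 64;           /* remaining provisioned buffers        */

/* Called after the NIC consumes a buffer for a received packet. */
static void on_buffer_consumed(void)
{
    rx_buffers_avail--;
    if (rx_buffers_avail < LOW_WATERMARK && overflow_avail > 0) {
        int refill = WORKING_SET - rx_buffers_avail;
        if (refill > overflow_avail)
            refill = overflow_avail;
        overflow_avail   -= refill;          /* take buffers from the overflow queue */
        rx_buffers_avail += refill;          /* ...and post them to the receive queue */
        printf("refilled %d buffers\n", refill);
    }
}

int main(void)
{
    for (int i = 0; i < 10; i++)             /* simulate 10 received packets */
        on_buffer_consumed();
    printf("available: %d, overflow left: %d\n", rx_buffers_avail, overflow_avail);
    return 0;
}
```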
  • Publication number: 20200183729
    Abstract: Methods and apparatus for evolving hypervisor pass-through devices supporting platform independence through a core solution called MUSE (Mdev in User SpacE) that allows a mediated pass-through device to be served by software running in user space. The MUSE architecture supports platform hardware independence while providing pass-through performance similar to hardware-specific solutions and providing enhanced performance in virtualized environments using existing software components, including various operating systems and associated libraries for implementing SDN (Software Defined Networking) and VNF (Virtualized Network Function).
    Type: Application
    Filed: January 9, 2020
    Publication date: June 11, 2020
    Inventors: Xiuchun Lu, Cunming Liang, Shaopeng He, Nrupal Jani, Anjali Jain, Edwin Verplanke, Parthasarathy Sarangam, Zhirun Yan
  • Publication number: 20200183732
    Abstract: Methods for performing efficient receive interrupt signaling and associated apparatus, computing platform, software, and firmware. Receive (RX) queues in which descriptors associated with packets are enqueued are implemented in host memory and logically partitioned into pools, with each RX queue pool associated with a respective interrupt vector. Receive event queues (REQs) associated with respective RX queue pools and interrupt vectors are also implemented in host memory. Event generation is selectively enabled for some RX queues, while event generation is masked for others. In response to event causes for RX queues that are event generation-enabled, associated events are generated and enqueued in the REQs and interrupts on associated interrupt vectors are asserted. The events are serviced by accessing the events in the REQs, which identify the RX queue for the event and a next activity location at which a next descriptor to be processed is located.
    Type: Application
    Filed: December 11, 2019
    Publication date: June 11, 2020
    Inventors: Linden Cornett, Anil Vasudevan, Parthasarathy Sarangam, Kiran Patil
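A minimal C sketch of the event flow above: receive events are enqueued in a receive event queue (REQ) only for RX queues with event generation enabled, and the pool's interrupt vector is asserted. The pool size, queue depth, and field names are assumptions.

```c
/* Sketch of selective event generation per RX queue pool. */
#include <stdint.h>
#include <stdio.h>

#define RXQ_PER_POOL 4
#define REQ_DEPTH    16

struct rx_event {
    uint16_t rxq_id;         /* which RX queue the event is for            */
    uint32_t next_activity;  /* location of the next descriptor to process */
};

struct rxq_pool {
    uint8_t         event_enabled[RXQ_PER_POOL];  /* 0 = event generation masked */
    struct rx_event req[REQ_DEPTH];               /* receive event queue (REQ)   */
    int             req_tail;
    int             irq_asserted;                 /* state of this pool's vector */
};

/* Called when a packet arrives on RX queue 'rxq' of this pool. */
static void on_rx(struct rxq_pool *pool, uint16_t rxq, uint32_t next_desc)
{
    if (!pool->event_enabled[rxq])
        return;                                   /* masked: no event, no interrupt */
    pool->req[pool->req_tail % REQ_DEPTH] =
        (struct rx_event){ .rxq_id = rxq, .next_activity = next_desc };
    pool->req_tail++;
    pool->irq_asserted = 1;                       /* assert the pool's vector */
}

int main(void)
{
    struct rxq_pool pool = { .event_enabled = { 1, 0, 1, 0 } };
    on_rx(&pool, 0, 12);   /* generates an event and asserts the interrupt */
    on_rx(&pool, 1, 34);   /* masked queue: nothing happens                */
    printf("events queued: %d, irq: %d\n", pool.req_tail, pool.irq_asserted);
    return 0;
}
```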
  • Publication number: 20200117605
    Abstract: Examples described herein can be used to allocate replacement receive buffers for use by a network interface, switch, or accelerator. Multiple refill queues can be used to receive identifications of available receive buffers. A refill processor can select one or more identifications from a refill queue and allocate the identifications to a buffer queue. None of the refill queues is locked from receiving identifications of available receive buffers, but only one of the refill queues is accessed at a time to provide identifications of available receive buffers. Identifications of available receive buffers from the buffer queue are provided to the network interface, switch, or accelerator to store content of received packets.
    Type: Application
    Filed: December 16, 2019
    Publication date: April 16, 2020
    Inventors: Linden CORNETT, Parthasarathy Sarangam, Jesse Brandeburg
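A minimal C sketch of a refill processor pulling receive-buffer identifiers from one of several refill queues and posting them to the buffer queue used by the device, following the abstract above. The queue counts, depths, and names are illustrative assumptions.

```c
/* Sketch of multiple refill queues feeding a single buffer queue. */
#include <stdint.h>
#include <stdio.h>

#define NUM_REFILL_QUEUES 4
#define QUEUE_DEPTH       8

struct id_queue { uint32_t ids[QUEUE_DEPTH]; int count; };

static struct id_queue refill[NUM_REFILL_QUEUES];   /* filled by software      */
static struct id_queue buffer_queue;                /* consumed by the device  */

/* Visit one refill queue at a time and move its buffer identifiers onto the
 * buffer queue; no refill queue is ever locked against new identifiers. */
static void refill_processor(void)
{
    for (int q = 0; q < NUM_REFILL_QUEUES; q++) {
        struct id_queue *rq = &refill[q];
        while (rq->count > 0 && buffer_queue.count < QUEUE_DEPTH)
            buffer_queue.ids[buffer_queue.count++] = rq->ids[--rq->count];
    }
}

int main(void)
{
    refill[0].ids[refill[0].count++] = 100;   /* software posts free buffers */
    refill[2].ids[refill[2].count++] = 200;
    refill_processor();
    printf("buffer queue now holds %d buffer ids\n", buffer_queue.count);
    return 0;
}
```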
  • Publication number: 20190391940
    Abstract: Technologies for interrupt disassociated queuing for multi-queue input/output devices include determining whether a network packet has arrived in an interrupt-disassociated queue and delivering the network packet to an application managed by the compute node. The application is associated with an application thread and the interrupt-disassociated queue may be in a polling mode. Subsequently, in response to a transition event, the interrupt-disassociated queue may be transitioned to an interrupt mode.
    Type: Application
    Filed: June 28, 2019
    Publication date: December 26, 2019
    Inventors: Anil Vasudevan, Sridhar Samudrala, Parthasarathy Sarangam, Kiran Patil
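A minimal C sketch of an interrupt-disassociated queue that is polled and transitions to interrupt mode on a transition event (modeled here as an idle poll). The mode names and the choice of transition event are illustrative assumptions.

```c
/* Sketch of a queue transitioning between polling and interrupt modes. */
#include <stdio.h>

enum queue_mode { MODE_POLLING, MODE_INTERRUPT };

struct id_queue {
    enum queue_mode mode;
    int pending;               /* packets waiting to be delivered */
};

/* Poll the queue: deliver any pending packet to the application; if the
 * poll comes up empty, transition the queue to interrupt mode. */
static void poll_queue(struct id_queue *q)
{
    if (q->pending > 0) {
        q->pending--;
        printf("delivered packet to application thread\n");
    } else {
        q->mode = MODE_INTERRUPT;   /* transition event: nothing arrived */
        printf("queue switched to interrupt mode\n");
    }
}

int main(void)
{
    struct id_queue q = { .mode = MODE_POLLING, .pending = 1 };
    poll_queue(&q);   /* packet arrives and is delivered      */
    poll_queue(&q);   /* idle poll triggers the transition    */
    return 0;
}
```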
  • Publication number: 20190386924
    Abstract: A switch or network interface can detect congestion caused by a flow of packets. The switch or network interface can generate a congestion hint packet and send the congestion hint packet directly to a source transmitter of the flow of packets that caused the congestion. The congestion hint packet can include information that the source transmitter can use to determine a remedial action to attempt to alleviate or stop congestion at the switch or network interface. For example, the transmitter can reduce a transmit rate of the flow of packets and/or select another route for the flow of packets. Some or all switches or network interfaces between the source transmitter and a destination endpoint can employ flow differentiation whereby a queue is selected to accommodate for a flow's sensitivity to latency.
    Type: Application
    Filed: July 19, 2019
    Publication date: December 19, 2019
    Inventors: Arvind SRINIVASAN, Ramakrishna HUGGAHALLI, Parthasarathy SARANGAM, Sunil AHLUWALIA, Mrittika GANGULI, Malek MUSLEH
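A minimal C sketch of the remedial action the abstract describes: a congestion hint delivered to the source transmitter, which reduces the transmit rate of the offending flow. The hint fields and the rate-adjustment factor are illustrative assumptions.

```c
/* Sketch of a transmitter reacting to a congestion hint packet. */
#include <stdint.h>
#include <stdio.h>

struct congestion_hint {
    uint32_t flow_id;        /* flow that caused the congestion         */
    uint32_t queue_depth;    /* congestion level observed at the switch */
};

struct tx_flow {
    uint32_t flow_id;
    double   rate_gbps;      /* current transmit rate for the flow      */
};

/* Remedial action at the transmitter: halve the rate of the named flow.
 * An alternative action could be selecting another route for the flow. */
static void handle_hint(struct tx_flow *flow, const struct congestion_hint *hint)
{
    if (flow->flow_id == hint->flow_id)
        flow->rate_gbps /= 2.0;
}

int main(void)
{
    struct tx_flow flow = { .flow_id = 7, .rate_gbps = 100.0 };
    struct congestion_hint hint = { .flow_id = 7, .queue_depth = 900 };
    handle_hint(&flow, &hint);
    printf("flow %u now limited to %.1f Gbps\n", flow.flow_id, flow.rate_gbps);
    return 0;
}
```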
  • Publication number: 20190372914
    Abstract: Technologies for network interface controllers (NICs) include a compute sled and an accelerator sled in communication over a network. The accelerator sled configures a virtual switch endpoint associated with a remote direct memory access (RDMA) server instance that is associated with a field-programmable gate array (FPGA) of the accelerator sled. The accelerator sled updates local software defined networking (SDN) tables with a virtual tunnel associated with the virtual switch endpoint and a remote compute sled. A virtual switch of the accelerator sled switches virtual tunnel traffic from the remote compute sled to the RDMA server instance, which transfers data to or from the FPGA. The compute sled also updates a local SDN table with the virtual tunnel, and a virtual switch of the compute sled switches virtual tunnel traffic to or from the accelerator sled. Other embodiments are described and claimed.
    Type: Application
    Filed: August 14, 2019
    Publication date: December 5, 2019
    Inventors: Mrittika Ganguli, Sugesh Chandran, Parthasarathy Sarangam, Sujoy Sen, Susanne M. Balle, Rajesh Sankaran
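A minimal C sketch of the SDN-table update described above: both sleds record a virtual tunnel so their virtual switches can steer traffic between the compute sled and the RDMA server instance. The table layout and all field names are illustrative assumptions.

```c
/* Sketch of installing a virtual tunnel entry in local SDN tables. */
#include <stdint.h>
#include <stdio.h>

struct sdn_entry {
    uint32_t tunnel_id;      /* virtual tunnel between the two sleds              */
    uint32_t local_endpoint; /* e.g. the virtual switch endpoint / RDMA server    */
    uint32_t remote_sled;    /* the peer sled on the other end of the tunnel      */
};

struct sdn_table { struct sdn_entry entries[16]; int count; };

static void add_tunnel(struct sdn_table *t, uint32_t tunnel_id,
                       uint32_t local_endpoint, uint32_t remote_sled)
{
    t->entries[t->count++] = (struct sdn_entry){ tunnel_id, local_endpoint, remote_sled };
}

int main(void)
{
    struct sdn_table accel_sled = { .count = 0 }, compute_sled = { .count = 0 };
    /* Tunnel 5 links the accelerator sled's RDMA endpoint to compute sled 2. */
    add_tunnel(&accel_sled,   5, /* rdma endpoint */ 10, /* compute sled */ 2);
    add_tunnel(&compute_sled, 5, /* local vport   */  1, /* accel sled   */ 9);
    printf("tunnels installed: %d + %d\n", accel_sled.count, compute_sled.count);
    return 0;
}
```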
  • Publication number: 20190297015
    Abstract: A network interface controller can be programmed to direct write received data to a memory buffer via either a host-to-device fabric or an accelerator fabric. For packets received that are to be written to a memory buffer associated with an accelerator device, the network interface controller can determine an address translation of a destination memory address of the received packet and determine whether to use a secondary head. If a translated address is available and a secondary head is to be used, a direct memory access (DMA) engine is used to copy a portion of the received packet via the accelerator fabric to a destination memory buffer associated with the address translation. Accordingly, copying a portion of the received packet through the host-to-device fabric and to a destination memory can be avoided and utilization of the host-to-device fabric can be reduced for accelerator bound traffic.
    Type: Application
    Filed: June 7, 2019
    Publication date: September 26, 2019
    Inventors: Pratik M. MAROLIA, Rajesh M. SANKARAN, Ashok RAJ, Nrupal JANI, Parthasarathy SARANGAM, Robert O. SHARP
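A minimal C sketch of the receive-path decision above: if an address translation for the packet's destination exists and the secondary head is in use, copy over the accelerator fabric; otherwise fall back to the host-to-device fabric. The translation logic, address ranges, and names are illustrative assumptions.

```c
/* Sketch of choosing between the host-to-device and accelerator fabrics. */
#include <stdint.h>
#include <stdio.h>

enum fabric { HOST_TO_DEVICE_FABRIC, ACCELERATOR_FABRIC };

struct rx_pkt {
    uint64_t dest_addr;      /* destination memory address from the packet */
};

/* Hypothetical translation lookup: returns 1 and fills *out if a translation
 * to accelerator-attached memory is available for this address. */
static int translate(uint64_t dest_addr, uint64_t *out)
{
    if (dest_addr >= 0x100000000ULL) {   /* pretend this range is accelerator memory */
        *out = dest_addr - 0x100000000ULL;
        return 1;
    }
    return 0;
}

static enum fabric choose_fabric(const struct rx_pkt *pkt, int secondary_head_enabled)
{
    uint64_t translated;
    if (secondary_head_enabled && translate(pkt->dest_addr, &translated))
        return ACCELERATOR_FABRIC;       /* DMA engine copies via the accelerator fabric */
    return HOST_TO_DEVICE_FABRIC;        /* default path through the host fabric */
}

int main(void)
{
    struct rx_pkt pkt = { .dest_addr = 0x180000000ULL };
    printf("%s\n", choose_fabric(&pkt, 1) == ACCELERATOR_FABRIC
                       ? "accelerator fabric" : "host-to-device fabric");
    return 0;
}
```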
  • Patent number: 10346326
    Abstract: Generally, this disclosure relates to adaptive interrupt moderation. A method may include determining, by a host device, a number of connections between the host device and one or more link partners based, at least in part, on a connection identifier associated with each connection; determining, by the host device, a new interrupt rate based at least in part on a number of connections; updating, by the host device, an interrupt moderation timer with a value related to the new interrupt rate; and configuring the interrupt moderation timer to allow interrupts to occur at the new interrupt rate.
    Type: Grant
    Filed: January 27, 2016
    Date of Patent: July 9, 2019
    Assignee: Intel Corporation
    Inventors: Yadong Li, Linden Cornett, Manasi Deval, Anil Vasudevan, Parthasarathy Sarangam
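A minimal C sketch of scaling the interrupt moderation timer with the number of active connections, as the abstract describes. The mapping from connection count to interrupt rate and the clamp value are illustrative assumptions.

```c
/* Sketch of adaptive interrupt moderation based on connection count. */
#include <stdio.h>

/* More connections -> lower interrupt rate (larger moderation interval).
 * Returns the moderation timer value in microseconds. */
static unsigned int moderation_timer_us(unsigned int num_connections)
{
    unsigned int rate_hz = 100000 / (num_connections ? num_connections : 1);
    if (rate_hz < 1000)
        rate_hz = 1000;                 /* clamp to a minimum interrupt rate */
    return 1000000 / rate_hz;           /* interval between interrupts */
}

int main(void)
{
    printf("1 connection  : %u us between interrupts\n", moderation_timer_us(1));
    printf("64 connections: %u us between interrupts\n", moderation_timer_us(64));
    return 0;
}
```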
  • Publication number: 20190190832
    Abstract: Embodiments regard load balancing data on one or more network ports. A device may include processing circuitry, the processing circuitry to transmit a first packet of a first series of packets to a destination device via a first port, determine a time gap between a first packet and a second packet of the first series of packets, and in response to a determination that the time gap is greater than a time threshold, transmit the second packet to the destination device via a second port.
    Type: Application
    Filed: February 22, 2019
    Publication date: June 20, 2019
    Inventors: Patrick L. Connor, Parthasarathy Sarangam
  • Publication number: 20190114196
    Abstract: Examples include a method of live migrating a virtual device by creating a virtual device in a virtual machine, creating first and second interfaces for the virtual device, transferring data over the first interface, detecting a disconnection of the virtual device from the virtual machine, switching data transfers for the virtual device from the first interface to the second interface, detecting a reconnection of the virtual device to the virtual machine, and switching data transfers for the virtual device from the second interface to the first interface.
    Type: Application
    Filed: December 6, 2018
    Publication date: April 18, 2019
    Inventors: Mitu AGGARWAL, Nrupal JANI, Manasi DEVAL, Kiran PATIL, Parthasarathy SARANGAM, Rajesh M. SANKARAN, Sanjay K. KUMAR, Utkarsh Y. KAKAIYA, Philip LANTZ, Kun TIAN
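A minimal C sketch of the interface switchover above: a virtual device with two interfaces moves data transfers to the second interface while the first is disconnected (for example during live migration) and moves them back on reconnection. The device model and names are illustrative assumptions.

```c
/* Sketch of switching a virtual device between two interfaces. */
#include <stdio.h>

struct virtual_device {
    int active_interface;   /* 0 = first (fast path), 1 = second */
};

static void on_disconnect(struct virtual_device *vdev)
{
    vdev->active_interface = 1;   /* first interface detached: switch transfers */
}

static void on_reconnect(struct virtual_device *vdev)
{
    vdev->active_interface = 0;   /* first interface reattached: switch back */
}

int main(void)
{
    struct virtual_device vdev = { .active_interface = 0 };
    on_disconnect(&vdev);
    printf("during migration, data flows on interface %d\n", vdev.active_interface);
    on_reconnect(&vdev);
    printf("after migration, data flows on interface %d\n", vdev.active_interface);
    return 0;
}
```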
  • Publication number: 20190114283
    Abstract: Examples may include a computing platform having a host driver to get a packet descriptor of a received packet stored in a receive queue and to modify the packet descriptor from a first format to a second format. The computing platform also includes a guest virtual machine including a guest driver coupled to the host driver, the guest driver to receive the modified packet descriptor and to read a packet buffer stored in the receive queue using the modified packet descriptor, the packet buffer corresponding to the packet descriptor.
    Type: Application
    Filed: December 6, 2018
    Publication date: April 18, 2019
    Applicant: Intel Corporation
    Inventors: Manasi DEVAL, Nrupal JANI, Anjali SINGHAI, Parthasarathy SARANGAM, Mitu AGGARWAL, Neerav PARIKH, Kiran PATIL, Rajesh M. SANKARAN, Sanjay K. KUMAR, Utkarsh Y. KAKAIYA, Philip LANTZ, Kun TIAN
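A minimal C sketch of the descriptor rewrite above: the host driver converts a packet descriptor from the device's format into a second format the guest driver can use to locate the packet buffer in the shared receive queue. Both descriptor layouts here are illustrative assumptions.

```c
/* Sketch of converting a packet descriptor from a first to a second format. */
#include <stdint.h>
#include <stdio.h>

struct hw_desc {              /* first format: written by the device   */
    uint64_t buf_addr;
    uint16_t len;
    uint16_t status_flags;
};

struct guest_desc {           /* second format: consumed by the guest  */
    uint64_t buf_addr;
    uint32_t len;
    uint8_t  ready;
};

static struct guest_desc convert_desc(const struct hw_desc *hw)
{
    return (struct guest_desc){
        .buf_addr = hw->buf_addr,             /* same buffer in the receive queue */
        .len      = hw->len,
        .ready    = (hw->status_flags & 0x1)  /* e.g. descriptor-done bit */
    };
}

int main(void)
{
    struct hw_desc hw = { .buf_addr = 0x1000, .len = 128, .status_flags = 0x1 };
    struct guest_desc gd = convert_desc(&hw);
    printf("guest reads %u bytes at 0x%llx (ready=%u)\n",
           gd.len, (unsigned long long)gd.buf_addr, (unsigned)gd.ready);
    return 0;
}
```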
  • Publication number: 20190114195
    Abstract: Examples may include a method of instantiating a virtual machine, instantiating a virtual device to transmit data to and receive data from assigned resources of a shared physical device; and assigning the virtual device to the virtual machine, the virtual machine to transmit data to and receive data from the physical device via the virtual device.
    Type: Application
    Filed: December 6, 2018
    Publication date: April 18, 2019
    Inventors: Nrupal JANI, Manasi DEVAL, Anjali SINGHAI, Parthasarathy SARANGAM, Mitu AGGARWAL, Neerav PARIKH, Alexander H. DUYCK, Kiran PATIL, Rajesh M. SANKARAN, Sanjay K. KUMAR, Utkarsh Y. KAKAIYA, Philip LANTZ, Kun TIAN
  • Publication number: 20190114194
    Abstract: Examples may include a method of instantiating a virtual machine; instantiating a virtual device to transmit data to and receive data from assigned resources of a shared physical device by receiving input data requesting assigned resources for the virtual device, allocating assigned resources to the virtual device based at least in part on the input data, and mapping a page location in an address space of the shared physical device for a selected one of the assigned resources to a page location in a memory-mapped input/output (MMIO) space of the virtual device; and assigning the virtual device to the virtual machine, the virtual machine to transmit data to and receive data from the physical device via the MMIO space of the virtual device.
    Type: Application
    Filed: December 6, 2018
    Publication date: April 18, 2019
    Inventors: Nrupal JANI, Manasi DEVAL, Anjali SINGHAI, Parthasarathy SARANGAM, Mitu AGGARWAL, Neerav PARIKH, Alexander H. DUYCK, Kiran PATIL, Rajesh M. SANKARAN, Sanjay K. KUMAR, Utkarsh Y. KAKAIYA, Philip LANTZ, Kun TIAN
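A minimal C sketch of recording the mapping between a page in the physical device's address space (for one assigned resource) and a page in the virtual device's MMIO space, as the abstract describes. The page size, structure layouts, and example offsets are illustrative assumptions.

```c
/* Sketch of mapping a physical-device page into a virtual device's MMIO space. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096ULL

struct mmio_mapping {
    uint64_t phys_dev_page;    /* page offset in the physical device's address space */
    uint64_t virt_mmio_page;   /* page offset in the virtual device's MMIO space     */
};

struct virtual_device {
    struct mmio_mapping map[8];
    int num_mappings;
};

/* Map the page backing one assigned resource into the virtual device. */
static void map_resource_page(struct virtual_device *vdev,
                              uint64_t phys_page, uint64_t virt_page)
{
    vdev->map[vdev->num_mappings++] =
        (struct mmio_mapping){ .phys_dev_page = phys_page, .virt_mmio_page = virt_page };
}

int main(void)
{
    struct virtual_device vdev = { .num_mappings = 0 };
    /* Resource (e.g. a queue doorbell) at device page 5 appears at guest MMIO page 0. */
    map_resource_page(&vdev, 5 * PAGE_SIZE, 0 * PAGE_SIZE);
    printf("mapped device page 0x%llx -> virtual MMIO page 0x%llx\n",
           (unsigned long long)vdev.map[0].phys_dev_page,
           (unsigned long long)vdev.map[0].virt_mmio_page);
    return 0;
}
```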
  • Publication number: 20190107965
    Abstract: Examples may include a method of protecting memory and I/O transactions. The method includes allocating memory for an application, assigning a resource of a physical device to the application, assigning a process address space identifier to the assigned resource, creating a security enclave to protect the allocated memory of the application, and associating the security enclave with the process address space identifier to protect the allocated memory and the assigned resource.
    Type: Application
    Filed: December 6, 2018
    Publication date: April 11, 2019
    Inventors: Manasi DEVAL, Nrupal JANI, Parthasarathy SARANGAM, Mitu AGGARWAL, Kiran PATIL, Rajesh M. SANKARAN, Sanjay K. KUMAR, Utkarsh Y. KAKAIYA, Philip LANTZ, Kun TIAN
  • Publication number: 20190108106
    Abstract: Examples include a method of performing failover in an I/O architecture by allocating a first set of resources, associated with a first port of a physical device, to a virtual device, allocating a second set of resources, associated with a second port of the physical device, to the virtual device, assigning the virtual device to a virtual machine, activating the first set of resources, and transferring data between the virtual machine and the first port using the virtual device and the first set of resources. The method further includes detecting an error in the first set of resources, deactivating the first set of resources and activating the second set of resources, and transferring data between the virtual machine and the second port using the virtual device and the second set of resources.
    Type: Application
    Filed: December 6, 2018
    Publication date: April 11, 2019
    Inventors: Mitu AGGARWAL, Nrupal JANI, Manasi DEVAL, Kiran PATIL, Parthasarathy SARANGAM, Rajesh M. SANKARAN, Sanjay K. KUMAR, Utkarsh Y. KAKAIYA, Philip LANTZ, Kun TIAN
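A minimal C sketch of the failover flow in the abstract above: a virtual device is backed by two resource sets tied to two physical ports, and on an error in the active set, traffic moves to the other set. The structures and the error signal are illustrative assumptions.

```c
/* Sketch of failover between two resource sets backing one virtual device. */
#include <stdio.h>

struct resource_set { int port; int active; int failed; };

struct virtual_device {
    struct resource_set set[2];   /* one resource set per physical port */
};

/* Deactivate the failed resource set and activate the other one. */
static void failover(struct virtual_device *vdev, int failed_set)
{
    vdev->set[failed_set].failed = 1;
    vdev->set[failed_set].active = 0;
    vdev->set[1 - failed_set].active = 1;
}

static int active_port(const struct virtual_device *vdev)
{
    return vdev->set[0].active ? vdev->set[0].port : vdev->set[1].port;
}

int main(void)
{
    struct virtual_device vdev = { .set = { { .port = 0, .active = 1 },
                                            { .port = 1, .active = 0 } } };
    printf("traffic on port %d\n", active_port(&vdev));
    failover(&vdev, 0);           /* error detected on the first resource set */
    printf("traffic on port %d\n", active_port(&vdev));
    return 0;
}
```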