Patents by Inventor Rishi Mehta

Rishi Mehta has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11928502
    Abstract: Some embodiments provide a method for scheduling networking threads associated with a data compute node (DCN) executing at a host computer. When a virtual networking device is instantiated for the DCN, the method assigns the virtual networking device to a particular non-uniform memory access (NUMA) node of multiple NUMA nodes associated with the DCN. Based on the assignment of the virtual networking device to the particular NUMA node, the method assigns networking threads associated with the DCN to the same particular NUMA node and provides information to the DCN regarding the particular NUMA node in order for the DCN to assign a thread associated with an application executing on the DCN to the same particular NUMA node.
    Type: Grant
    Filed: April 29, 2021
    Date of Patent: March 12, 2024
    Assignee: VMware LLC
    Inventors: Rishi Mehta, Boon S. Ang, Petr Vandrovec, Xunjia Lu
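
    As a rough illustration of the thread-placement bookkeeping this abstract describes, the sketch below models assigning a virtual NIC and its networking threads to one NUMA node and exposing that choice to the guest. It is a minimal Python sketch under assumed names (NumaPlacement, assign_device, hint_for_guest, the node IDs and thread names); none of these identifiers come from the patent or from any VMware API.

    ```python
    # Illustrative sketch only: models the NUMA-affinity bookkeeping described
    # in the abstract above. All identifiers are hypothetical, not VMware internals.
    from __future__ import annotations
    from dataclasses import dataclass, field


    @dataclass
    class NumaPlacement:
        """Tracks which NUMA node a DCN's virtual NIC and threads are pinned to."""
        dcn_id: str
        numa_nodes: list[int]                  # NUMA nodes backing this DCN
        device_node: int | None = None         # node chosen for the virtual NIC
        thread_nodes: dict[str, int] = field(default_factory=dict)

        def assign_device(self, preferred: int | None = None) -> int:
            # Pick a NUMA node for the virtual networking device; here simply the
            # caller's preference if valid, otherwise the DCN's first node.
            self.device_node = preferred if preferred in self.numa_nodes else self.numa_nodes[0]
            return self.device_node

        def assign_networking_thread(self, thread_name: str) -> int:
            # Networking threads follow the device so packet data stays node-local.
            assert self.device_node is not None, "assign_device() must run first"
            self.thread_nodes[thread_name] = self.device_node
            return self.device_node

        def hint_for_guest(self) -> dict:
            # Information handed to the DCN so application threads can pin
            # themselves to the same node (the "provides information" step).
            return {"dcn": self.dcn_id, "preferred_numa_node": self.device_node}


    placement = NumaPlacement(dcn_id="vm-42", numa_nodes=[0, 1])
    placement.assign_device(preferred=1)
    placement.assign_networking_thread("vnic-tx")
    placement.assign_networking_thread("vnic-rx")
    print(placement.hint_for_guest())   # {'dcn': 'vm-42', 'preferred_numa_node': 1}
    ```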
  • Publication number: 20220350647
    Abstract: Some embodiments provide a method for scheduling networking threads associated with a data compute node (DCN) executing at a host computer. When a virtual networking device is instantiated for the DCN, the method assigns the virtual networking device to a particular non-uniform memory access (NUMA) node of multiple NUMA nodes associated with the DCN. Based on the assignment of the virtual networking device to the particular NUMA node, the method assigns networking threads associated with the DCN to the same particular NUMA node and provides information to the DCN regarding the particular NUMA node in order for the DCN to assign a thread associated with an application executing on the DCN to the same particular NUMA node.
    Type: Application
    Filed: April 29, 2021
    Publication date: November 3, 2022
    Inventors: Rishi Mehta, Boon S. Ang, Petr Vandrovec, Xunjia Lu
  • Publication number: 20220178557
    Abstract: A heat source unit for a heat pump having a refrigerant circuit, the heat source unit having: an outer casing including a bottom plate; and a compressor assembly accommodated in the outer casing, the compressor assembly including a compressor of the refrigerant circuit of the heat pump including a compressor housing, a support plate supporting the compressor, the support plate being mounted via dampers to the bottom plate, and a compressor casing enclosing the compressor and the compressor housing. A damping mechanism is arranged between the compressor and the support plate, and the compressor casing is fixed to the support plate out of contact with the compressor housing.
    Type: Application
    Filed: February 14, 2020
    Publication date: June 9, 2022
    Applicants: DAIKIN INDUSTRIES, LTD., DAIKIN EUROPE N.V.
    Inventors: Kouta YOSHIKAWA, Wim VANSTEENKISTE, Akshay HATTIANGADI, Jose Daniel GARCIA LOPEZ, Tom SURMONT, Rishi MEHTA
  • Patent number: 11356381
    Abstract: A method for managing several queues of a network interface card (NIC) of a computer. The method initially configures the NIC to direct data messages received for a data compute node (DCN) executing on the computer to a default first NIC queue. When the DCN requests data messages addressed to the particular DCN to be processed with a first feature for load balancing data messages across multiple queues and a second feature for aggregating multiple related data messages into a single data message, the method configures the NIC to direct subsequent data messages received for the DCN to a second queue in a first subset of queues associated with the first feature if a load on the default first queue exceeds a first threshold. Otherwise, if a load on the first subset of queues exceeds a second threshold, the method configures the NIC to direct subsequent data messages received for the particular DCN to a third queue in a second subset of queues associated with both the first and second features.
    Type: Grant
    Filed: June 6, 2020
    Date of Patent: June 7, 2022
    Assignee: VMWARE, INC.
    Inventors: Aditya G. Holla, Rishi Mehta, Boon Ang, Rajeev Nair, Wenyi Jiang
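
    One way to read the two-threshold queue-selection policy in this abstract is sketched below. The queue names, load values, and thresholds are invented for the example and are not taken from the patent.

    ```python
    # Illustrative sketch only: one reading of the two-threshold queue-selection
    # policy described in the abstract above, with made-up queue IDs and loads.

    DEFAULT_QUEUE = "q0"
    RSS_QUEUES = ["q1", "q2"]          # subset with the load-balancing (RSS) feature
    RSS_LRO_QUEUES = ["q3", "q4"]      # subset with both RSS and LRO (aggregation)

    DEFAULT_LOAD_THRESHOLD = 0.70      # first threshold (default queue load)
    RSS_SUBSET_LOAD_THRESHOLD = 0.80   # second threshold (RSS subset load)


    def select_queue(queue_load: dict) -> str:
        """Pick the queue that should receive the DCN's subsequent data messages."""
        if queue_load[DEFAULT_QUEUE] <= DEFAULT_LOAD_THRESHOLD:
            return DEFAULT_QUEUE                       # default queue still has headroom
        rss_load = sum(queue_load[q] for q in RSS_QUEUES) / len(RSS_QUEUES)
        if rss_load <= RSS_SUBSET_LOAD_THRESHOLD:
            # First threshold exceeded: move to a queue with the RSS feature.
            return min(RSS_QUEUES, key=lambda q: queue_load[q])
        # Both thresholds exceeded: move to a queue with RSS and LRO.
        return min(RSS_LRO_QUEUES, key=lambda q: queue_load[q])


    load = {"q0": 0.9, "q1": 0.85, "q2": 0.9, "q3": 0.2, "q4": 0.4}
    print(select_queue(load))   # q3
    ```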
  • Publication number: 20220136750
    Abstract: An outdoor unit for a heat pump includes a refrigerant circuit, the outdoor unit including a compressor, a discharge pipe of the refrigerant circuit connected to a discharge side of the compressor, a bottom plate, the bottom plate having a base and an outer flange protruding upward from an outer edge of the base, a heat source heat exchanger supported on the bottom plate, a liquid refrigerant pipe of the refrigerant circuit connected to the heat source heat exchanger, and a defrosting bypass pipe connected at one end to the discharge pipe and at the opposite end to the liquid refrigerant pipe, the defrosting bypass pipe being arranged between an inner side of the flange and an outer side of the heat source heat exchanger.
    Type: Application
    Filed: March 6, 2020
    Publication date: May 5, 2022
    Applicants: DAIKIN INDUSTRIES, LTD., DAIKIN EUROPE N.V.
    Inventors: Kouta YOSHIKAWA, Wim VANSTEENKISTE, Akshay HATTIANGADI, Jose Daniel GARCIA LOPEZ, Tom SURMONT, Rishi MEHTA
  • Patent number: 11240111
    Abstract: Some embodiments provide a method for presenting packets captured in a network. The method identifies a first set of packets from a first packet group of multiple captured packet groups. The method identifies a second set of packets, that corresponds to the first set of packets, from a second packet group of the multiple captured packet groups. The method displays representations of the multiple captured packet groups. At least one of the first set of packets and the second set of packets are presented with a different appearance from other packets of their respective packet group.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: February 1, 2022
    Assignee: NICIRA, INC.
    Inventors: Neelima Balakrishnan, Ninad Ghodke, Rishi Mehta, Banit Agrawal, Ramya Bolla, Siming Li
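
    The display step this abstract describes, marking the corresponding packets in each captured group so they stand out, might look roughly like the sketch below. The flow_id key, the packet fields, and the text rendering are assumptions made for illustration.

    ```python
    # Illustrative sketch only: marking corresponding packets across two captured
    # packet groups so they can be displayed differently, as in the abstract above.
    # Packet fields and rendering are invented for the example.


    def find_corresponding(group_a: list, group_b: list, key: str = "flow_id") -> set:
        """Return the key values present in both capture groups."""
        return {p[key] for p in group_a} & {p[key] for p in group_b}


    def render(group: list, highlighted: set, key: str = "flow_id") -> list:
        """Prefix packets that belong to the matched set so they stand out."""
        return [("*" if p[key] in highlighted else " ") + f" pkt {p['seq']} flow {p[key]}"
                for p in group]


    capture_1 = [{"seq": 1, "flow_id": "a"}, {"seq": 2, "flow_id": "b"}]
    capture_2 = [{"seq": 7, "flow_id": "b"}, {"seq": 8, "flow_id": "c"}]
    matched = find_corresponding(capture_1, capture_2)
    for line in render(capture_1, matched) + render(capture_2, matched):
        print(line)
    ```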
  • Publication number: 20210250278
    Abstract: A computer network device includes: a data plane and ports that direct packets or frames in a network based at least in part on destinations of the packets or frames, where the ports are associated with physical links that use communication protocols; and a control plane that performs network functions. Moreover, the computer network device may determine one or more first communication performance metrics of a first port in a first physical link in a link aggregation group (LAG) and one or more second communication performance metrics of a second port in a second physical link in the LAG. Then, based at least in part on the determined one or more first communication performance metrics and the determined one or more second communication performance metrics, the computer network device may assign a given packet or a given frame to the first port or the second port in the LAG.
    Type: Application
    Filed: February 8, 2021
    Publication date: August 12, 2021
    Applicant: ARRIS Enterprises LLC
    Inventors: Virendra Malaviya, Rishi Mehta
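
    A minimal sketch of metric-based member selection for a LAG, in the spirit of this publication's abstract, is shown below. The specific metrics (utilization, error rate) and their weighting are assumptions; they are not prescribed by the publication.

    ```python
    # Illustrative sketch only: choosing a LAG member port from per-port
    # performance metrics. The metric names and weights are assumptions.


    def port_score(metrics: dict) -> float:
        """Lower is better: combine utilization and error rate into one score."""
        return metrics["utilization"] + 10.0 * metrics["error_rate"]


    def assign_frame_to_port(lag_ports: dict) -> str:
        """Send the next frame to the LAG member with the best (lowest) score."""
        return min(lag_ports, key=lambda port: port_score(lag_ports[port]))


    lag = {
        "eth1": {"utilization": 0.60, "error_rate": 0.001},
        "eth2": {"utilization": 0.35, "error_rate": 0.020},
    }
    print(assign_frame_to_port(lag))   # eth2 (score 0.55) beats eth1 (score 0.61)
    ```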
  • Publication number: 20200304418
    Abstract: Some embodiments provide a method for managing multiple queues of a network interface card (NIC) of a host computer that executes a data compute node (DCN). The method defines first, second, and third subsets of the queues. The first subset of queues is associated with a first feature for processing data messages received by the NIC, the second subset of queues is associated with a second feature, and the third subset is associated with both features. The method receives a request from the DCN to process data messages addressed to the DCN using both the first and second features. The method configures the NIC to direct data messages received for the DCN to a queue that is selected from the third subset of queues.
    Type: Application
    Filed: June 6, 2020
    Publication date: September 24, 2020
    Inventors: Aditya G. Holla, Rishi Mehta, Boon Ang, Rajeev Nair, Wenyi Jiang
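
    The abstract's mapping from a requested feature combination to one of three queue subsets could be modeled as in the sketch below. The feature names ("rss", "lro") and queue IDs are placeholders chosen for illustration, not terms from the application.

    ```python
    # Illustrative sketch only: mapping a DCN's feature request to one of three
    # queue subsets, as described in the abstract above. Names are placeholders.

    QUEUE_SUBSETS = {
        frozenset({"rss"}): ["q1", "q2"],            # first feature only
        frozenset({"lro"}): ["q3", "q4"],            # second feature only
        frozenset({"rss", "lro"}): ["q5", "q6"],     # both features
    }


    def configure_queue_for(requested_features: set, queue_load: dict) -> str:
        """Pick the least-loaded queue from the subset supporting the requested features."""
        subset = QUEUE_SUBSETS[frozenset(requested_features)]
        return min(subset, key=lambda q: queue_load.get(q, 0.0))


    load = {"q5": 0.7, "q6": 0.3}
    print(configure_queue_for({"rss", "lro"}, load))   # q6
    ```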
  • Patent number: 10747577
    Abstract: A computer system and method for characterizing the throughput performance of a datacenter utilize bandwidth information of the physical network interfaces in the datacenter and the results of benchmark testing of throughput on a single processor core to compute a plurality of throughput constraints that define a throughput capacity region for the datacenter, in order to improve the throughput performance of the datacenter.
    Type: Grant
    Filed: August 25, 2018
    Date of Patent: August 18, 2020
    Assignee: NICIRA, INC.
    Inventors: Dexiang Wang, Bin Qian, Jinqiang Yang, Naga S. S. Kishore Kankipati, Sanal Pillai, Sujatha Sundararaman, Ganesan Chandrashekhar, Rishi Mehta
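
    As a toy illustration of a throughput capacity region bounded by a NIC-bandwidth constraint and a per-core processing constraint, consider the sketch below. All figures (two 10 Gbps uplinks, 4 Gbps per core) are invented; the patented method derives its constraints from measured bandwidth and benchmark data rather than fixed numbers.

    ```python
    # Illustrative sketch only: checking a traffic mix against two kinds of
    # throughput constraints (NIC bandwidth and per-core processing capacity)
    # that together bound a capacity region. All figures are invented.

    NIC_BANDWIDTH_GBPS = 2 * 10.0        # two 10 Gbps uplinks
    CORES = 8
    PER_CORE_GBPS = 4.0                  # from a single-core benchmark


    def within_capacity_region(flows_gbps: list) -> bool:
        """A traffic mix is feasible only if it violates neither constraint."""
        total = sum(flows_gbps)
        nic_ok = total <= NIC_BANDWIDTH_GBPS          # bandwidth constraint
        cpu_ok = total <= CORES * PER_CORE_GBPS       # CPU processing constraint
        return nic_ok and cpu_ok


    print(within_capacity_region([6.0, 5.0, 4.0]))   # True  (15 <= 20 and 15 <= 32)
    print(within_capacity_region([12.0, 11.0]))      # False (23 > 20)
    ```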
  • Patent number: 10686716
    Abstract: Some embodiments provide a method for managing multiple queues of a network interface card (NIC) of a host computer that executes a data compute node (DCN). The method defines first, second, and third subsets of the queues. The first subset of queues is associated with a first feature for processing data messages received by the NIC, the second subset of queues is associated with a second feature, and the third subset is associated with both features. The method receives a request from the DCN to process data messages addressed to the DCN using both the first and second features. The method configures the NIC to direct data messages received for the DCN to a queue that is selected from the third subset of queues.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: June 16, 2020
    Assignee: VMWARE, INC.
    Inventors: Aditya G. Holla, Rishi Mehta, Boon Ang, Rajeev Nair, Wenyi Jiang
  • Publication number: 20200028792
    Abstract: Some embodiments provide a method for managing multiple queues of a network interface card (NIC) of a host computer that executes a data compute node (DCN). The method defines first, second, and third subsets of the queues. The first subset of queues is associated with a first feature for processing data messages received by the NIC, the second subset of queues is associated with a second feature, and the third subset is associated with both features. The method receives a request from the DCN to process data messages addressed to the DCN using both the first and second features. The method configures the NIC to direct data messages received for the DCN to a queue that is selected from the third subset of queues.
    Type: Application
    Filed: July 23, 2018
    Publication date: January 23, 2020
    Inventors: Aditya G. Holla, Rishi Mehta, Boon Ang, Rajeev Nair, Wenyi Jiang
  • Patent number: 10476801
    Abstract: An approach for dynamically distributing RSS engines to virtual machines based on flow data is disclosed. A method comprises receiving first absolute counts of data packets that belong to at least one data flow. Flow load indicator values are computed based on the first absolute counts, and stored in a lookup table. A sorted table is generated by sorting entries of the lookup table. A first count of filters that can be applied on RSS engines is determined. A second count of data flows in the sorted table and having corresponding flow load indicator values exceeding a threshold value is determined. If the second count exceeds the first count, then the first count of data flows is selected from the sorted table. The first count of filters that correspond to the data flows is determined, and the first count of the filters is assigned to at least one RSS engine.
    Type: Grant
    Filed: April 27, 2018
    Date of Patent: November 12, 2019
    Assignee: NICIRA, INC.
    Inventors: Aditya G. Holla, Shrikrishna Khare, Rajeev Nair, Aditya Sonthy, Wenyi Jiang, Rishi Mehta
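
    The flow-ranking and filter-assignment steps listed in this abstract might be sketched as follows. The flow keys, packet counts, threshold, and filter budget are all made-up example values.

    ```python
    # Illustrative sketch only: ranking flows by a load indicator and assigning
    # the limited number of hardware filters to the heaviest flows, following
    # the steps in the abstract above. All values are invented.


    def pick_flows_for_rss(packet_counts: dict, max_filters: int, threshold: int) -> list:
        """Return the flows whose filters should be programmed onto RSS engines."""
        # Flow load indicators: here simply the absolute packet counts, sorted.
        sorted_flows = sorted(packet_counts, key=packet_counts.get, reverse=True)
        heavy = [f for f in sorted_flows if packet_counts[f] > threshold]
        # More heavy flows than filters: keep only the top max_filters of them.
        return heavy[:max_filters] if len(heavy) > max_filters else heavy


    counts = {"flow-a": 9000, "flow-b": 200, "flow-c": 4800, "flow-d": 3100}
    print(pick_flows_for_rss(counts, max_filters=2, threshold=1000))  # ['flow-a', 'flow-c']
    ```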
  • Publication number: 20190334821
    Abstract: An approach for dynamically distributing RSS engines to virtual machines based on flow data is disclosed. A method comprises receiving first absolute counts of data packets that belong to at least one data flow. Flow load indicator values are computed based on the first absolute counts, and stored in a lookup table. A sorted table is generated by sorting entries of the lookup table. A first count of filters that can be applied on RSS engines is determined. A second count of data flows in the sorted table and having corresponding flow load indicator values exceeding a threshold value is determined. If the second count exceeds the first count, then the first count of data flows is selected from the sorted table. The first count of filters that correspond to the data flows is determined, and the first count of the filters is assigned to at least one RSS engine.
    Type: Application
    Filed: April 27, 2018
    Publication date: October 31, 2019
    Applicant: NICIRA, INC.
    Inventors: Aditya G. HOLLA, Shrikrishna KHARE, Rajeev NAIR, Aditya SONTHY, Wenyi JIANG, Rishi MEHTA
  • Patent number: 10338822
    Abstract: Systems and methods described herein align various types of hypervisor threads with a non-uniform memory access (NUMA) client of a virtual machine (VM) that is driving I/O transactions from an application so that no remote memory access is required and the I/O transactions can be completed with local accesses to the CPUs, caches, and I/O devices of the same NUMA node of a hardware NUMA system. First, the hypervisor of the VM detects whether the VM runs on a single NUMA node or on multiple NUMA nodes. If the VM runs on multiple NUMA nodes, the NUMA client on which the application is executing the I/O transactions is identified and knowledge of resource sharing between the NUMA client and its related hypervisor threads is established. Such knowledge is then utilized to schedule the NUMA client and its related hypervisor threads on the same NUMA node of the NUMA system.
    Type: Grant
    Filed: June 15, 2016
    Date of Patent: July 2, 2019
    Assignee: VMware, Inc.
    Inventors: Amitabha Banerjee, Rishi Mehta, Xiaochuan Shen, Seongbeom Kim
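
    A compact sketch of the placement decision described above: once the NUMA client driving the I/O is known, the hypervisor's I/O threads are scheduled on that client's node. The dictionary-based model of NUMA clients is an assumption for illustration only.

    ```python
    # Illustrative sketch only: deciding where to schedule hypervisor I/O threads
    # once the NUMA client driving the I/O is known, as outlined in the abstract
    # above. The data structures are placeholders, not VMware internals.


    def node_for_hypervisor_threads(vm_numa_clients: dict, io_client: int) -> int:
        """Place the VM's I/O-related hypervisor threads on the NUMA node backing
        the client issuing the I/O, so all accesses stay local."""
        if len(set(vm_numa_clients.values())) == 1:
            # Single-node VM: every thread already shares one node.
            return next(iter(vm_numa_clients.values()))
        return vm_numa_clients[io_client]     # multi-node VM: follow the active client


    clients_to_nodes = {0: 0, 1: 1}            # NUMA client id -> physical NUMA node
    print(node_for_hypervisor_threads(clients_to_nodes, io_client=1))   # 1
    ```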
  • Patent number: 10341245
    Abstract: In a computer-implemented method for reducing delay of bursty data transmission in a network employing a congestion control protocol, data is accessed that is to be periodically transmitted over a network employing a congestion control protocol. The data is to be periodically transmitted with a high burst rate followed by an idle period. The congestion control protocol progressively increases a data transmission rate during a data transmission rate increase period invoked immediately following a predetermined idle period. Prior to transmitting the data, priming data is transmitted during at least a portion of the idle period until the congestion control protocol progressively increases the data transmission rate to a desired transmission rate. The data is transmitted at the desired transmission rate.
    Type: Grant
    Filed: March 24, 2014
    Date of Patent: July 2, 2019
    Assignee: VMware, Inc.
    Inventors: Kalyan Saladi, Rishi Mehta
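
    The effect of the priming traffic described in this abstract can be illustrated with a toy slow-start model, shown below. The doubling-per-RTT growth and the numbers used are simplifications for illustration, not the patent's congestion control protocol.

    ```python
    # Illustrative sketch only: sending "priming" traffic during the idle period
    # so the congestion window has already ramped up when the real burst starts,
    # as the abstract above describes. Window growth here is a toy model.


    def window_after_priming(initial_cwnd: int, priming_round_trips: int, max_cwnd: int) -> int:
        """Toy slow-start model: the window doubles each RTT of priming traffic."""
        cwnd = initial_cwnd
        for _ in range(priming_round_trips):
            cwnd = min(cwnd * 2, max_cwnd)
        return cwnd


    # Without priming the burst starts at cwnd=10; with 4 RTTs of priming it
    # starts near the desired rate, so the burst is not throttled by slow start.
    print(window_after_priming(initial_cwnd=10, priming_round_trips=0, max_cwnd=200))  # 10
    print(window_after_priming(initial_cwnd=10, priming_round_trips=4, max_cwnd=200))  # 160
    ```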
  • Patent number: 10313926
    Abstract: Example methods are provided for a host to perform large receive offload (LRO) processing in a virtualized computing environment. The method may comprise receiving, via a physical network interface controller (NIC), incoming packets that are destined for the virtualized computing instance, and processing the incoming packets to generate at least one processed packet using a networking service pipeline that includes a packet aggregation service and multiple networking services. The packet aggregation service may be configured to aggregate the incoming packets into an aggregated packet and enabled at a service point along the networking service pipeline based on an LRO capability of at least one of the multiple networking services to process the aggregated packet. The method may also comprise forwarding the at least one processed packet generated by the networking service pipeline to the virtualized computing instance.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: June 4, 2019
    Assignee: NICIRA, INC.
    Inventors: Rishi Mehta, Boon Ang, Guolin Yang, Wenyi Jiang, Jayant Jain
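
    A rough sketch of choosing the service point at which packet aggregation (LRO) can be enabled, based on which downstream services tolerate aggregated packets, follows. The pipeline contents and the capability flags are illustrative assumptions, not the patent's service list.

    ```python
    # Illustrative sketch only: enabling packet aggregation (LRO) at the point in
    # a service pipeline after which every remaining service can handle large
    # aggregated packets, in the spirit of the abstract above.

    PIPELINE = [
        ("firewall", False),        # (service name, can it process aggregated packets?)
        ("overlay_decap", True),
        ("virtual_switch", True),
    ]


    def aggregation_service_point(pipeline: list) -> int:
        """Return the index before which aggregation must not be enabled."""
        for i in range(len(pipeline)):
            if all(lro_capable for _, lro_capable in pipeline[i:]):
                return i            # everything from here on tolerates aggregation
        return len(pipeline)        # no safe point: aggregate only after the pipeline


    print(aggregation_service_point(PIPELINE))   # 1 -> aggregate after the firewall
    ```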
  • Patent number: 10250450
    Abstract: Some embodiments provide a method for performing a multi-point capture of packets in a network. The method identifies multiple nodes for the multi-point capture in the network. The method configures each node of the multiple nodes to capture a set of packets. The method receives multiple captured packet sets from the multiple nodes. The method analyzes the multiple captured packet sets.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: April 2, 2019
    Assignee: NICIRA, INC.
    Inventors: Neelima Balakrishnan, Ninad Ghodke, Rishi Mehta, Banit Agrawal, Ramya Bolla, Siming Li
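
    The orchestration steps in this abstract (identify the nodes, configure each to capture, collect the packet sets) are sketched below with a placeholder capture function; the node names, filter syntax, and capture API are invented.

    ```python
    # Illustrative sketch only: configuring several capture points, collecting
    # their packet sets, and summarizing them, mirroring the steps listed in the
    # abstract above. The capture function is a stand-in, not a real agent.


    def capture_at(node: str, packet_filter: str) -> list:
        # Placeholder for a per-node capture; a real system would run a capture
        # agent on each node and stream results back.
        return [{"node": node, "filter": packet_filter, "seq": i} for i in range(3)]


    def multi_point_capture(nodes: list, packet_filter: str) -> dict:
        """Identify the nodes, capture a packet set at each, and return all sets."""
        return {node: capture_at(node, packet_filter) for node in nodes}


    captures = multi_point_capture(["host-1", "host-2", "edge-1"], packet_filter="tcp port 443")
    total = sum(len(pkts) for pkts in captures.values())
    print(f"captured {total} packets at {len(captures)} points")   # 9 packets at 3 points
    ```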
  • Publication number: 20190065265
    Abstract: A computer system and method for characterizing the throughput performance of a datacenter utilize bandwidth information of the physical network interfaces in the datacenter and the results of benchmark testing of throughput on a single processor core to compute a plurality of throughput constraints that define a throughput capacity region for the datacenter, in order to improve the throughput performance of the datacenter.
    Type: Application
    Filed: August 25, 2018
    Publication date: February 28, 2019
    Inventors: Dexiang WANG, Bin QIAN, Jinqiang YANG, Naga S. S. Kishore KANKIPATI, Sanal PILLAI, Sujatha SUNDARARAMAN, Ganesan CHANDRASHEKHAR, Rishi MEHTA
  • Publication number: 20180352474
    Abstract: Example methods are provided for a host to perform large receive offload (LRO) processing in a virtualized computing environment. The method may comprise receiving, via a physical network interface controller (NIC), incoming packets that are destined for the virtualized computing instance, and processing the incoming packets to generate at least one processed packet using a networking service pipeline that includes a packet aggregation service and multiple networking services. The packet aggregation service may be configured to aggregate the incoming packets into an aggregated packet and enabled at a service point along the networking service pipeline based on an LRO capability of at least one of the multiple networking services to process the aggregated packet. The method may also comprise forwarding the at least one processed packet generated by the networking service pipeline to the virtualized computing instance.
    Type: Application
    Filed: May 31, 2017
    Publication date: December 6, 2018
    Applicant: Nicira, Inc.
    Inventors: Rishi MEHTA, Boon ANG, Guolin YANG, Wenyi JIANG, Jayant JAIN
  • Patent number: 9992113
    Abstract: Techniques disclosed herein provide an approach for using receive side scaling (RSS) offloads from a physical network interface controller (PNIC) to improve the performance of a virtual network interface controller (VNIC). In one embodiment, the PNIC is configured to write hash values it computes for RSS purposes to the packets themselves. The VNIC then reads the hash values from the packets and places the packets into VNIC RSS queues, which are processed by respective CPUs, based on the hash values. CPU overhead is thereby reduced, as RSS processing by the VNIC no longer requires computing hash values. In another embodiment, in which the numbers of PNIC RSS queues and VNIC RSS queues are identical, the VNIC may map packets from PNIC RSS queues to VNIC RSS queues using the PNIC RSS queue ID numbers, which also does not require computing RSS hash values.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: June 5, 2018
    Assignee: VMware, Inc.
    Inventors: Rishi Mehta, Lenin Singaravelu
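
    The two embodiments summarized above, reusing the PNIC-computed hash carried in the packet or reusing the PNIC queue ID when the queue counts match, might look like the sketch below. The packet field names are assumptions for illustration.

    ```python
    # Illustrative sketch only: a VNIC reusing the RSS hash already written into
    # each packet by the PNIC to pick a VNIC RSS queue, instead of recomputing
    # the hash, as the abstract above describes. Field names are invented.

    NUM_VNIC_RSS_QUEUES = 4


    def vnic_queue_for(packet: dict) -> int:
        """Place the packet using the PNIC-computed hash carried in the packet."""
        return packet["pnic_rss_hash"] % NUM_VNIC_RSS_QUEUES


    def vnic_queue_by_pnic_queue(packet: dict) -> int:
        """Variant for the case where PNIC and VNIC queue counts match: reuse the
        PNIC queue ID directly as the VNIC queue ID."""
        return packet["pnic_queue_id"]


    pkt = {"pnic_rss_hash": 0x9E3779B9, "pnic_queue_id": 2, "payload": b"..."}
    print(vnic_queue_for(pkt), vnic_queue_by_pnic_queue(pkt))   # 1 2
    ```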