Patents by Inventor Wenyi Jiang

Wenyi Jiang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10965601
    Abstract: A method for a sender side assisted flow classification is disclosed. In an embodiment, a method comprises detecting a packet by a network virtualization layer engine implemented in a hypervisor on a sender side of a virtualization computer system; and determining, by the network virtualization layer engine, whether the packet requires special processing. In response to determining that the packet requires special processing, a special processing flag is inserted in a certain field of an outer header of the packet; and the packet is forwarded toward a destination of the packet for a PNIC on a receiver side to process the packet.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: March 30, 2021
    Assignee: VMware, Inc.
    Inventors: Wenyi Jiang, Guolin Yang, Boon Seong Ang, Ying Gross
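
A minimal sketch of the sender-side marking step, in Go. The outer-header layout, the flag bit, and the classifier callback below are assumptions made for illustration; the patent only states that the hypervisor's network virtualization layer sets a special-processing flag in a field of the outer header so the receiver-side PNIC can act on it.

```go
package main

import "fmt"

// OuterHeader is a hypothetical encapsulation header; the patent does not
// specify the exact layout, only that some outer-header field carries a flag.
type OuterHeader struct {
	Flags   uint8
	VNI     uint32 // virtual network identifier
	Payload []byte // inner (encapsulated) packet
}

// specialProcessingFlag is an assumed bit position, for illustration only.
const specialProcessingFlag uint8 = 1 << 0

// classifyAndMark mimics the sender-side hypervisor step: decide whether the
// packet needs special processing and, if so, mark the outer header before
// the packet is forwarded toward its destination.
func classifyAndMark(pkt *OuterHeader, needsSpecialProcessing func([]byte) bool) {
	if needsSpecialProcessing(pkt.Payload) {
		pkt.Flags |= specialProcessingFlag
	}
}

func main() {
	pkt := &OuterHeader{VNI: 5001, Payload: []byte("inner packet bytes")}
	// Toy classifier: treat every non-empty payload as needing special handling.
	classifyAndMark(pkt, func(p []byte) bool { return len(p) > 0 })
	fmt.Printf("outer header flags = %#x (the receiver-side PNIC can branch on this bit)\n", pkt.Flags)
}
```
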
  • Publication number: 20210029083
    Abstract: Example methods and computer systems are provided for filter-based packet handling at a virtual network adapter. The method may comprise: receiving an ingress packet destined for the virtualized computing instance that is supported by the host and connected to the virtual network adapter; and matching the ingress packet to one of multiple filters configured for the virtual network adapter. The multiple filters may include a first filter specifying one or more first packet characteristics and a second filter specifying one or more second packet characteristics. The method may also comprise: in response to matching the ingress packet to the first filter, assigning the ingress packet to a first packet queue; and in response to matching the ingress packet to the second filter, assigning the ingress packet to a second packet queue.
    Type: Application
    Filed: July 22, 2019
    Publication date: January 28, 2021
    Applicant: VMware, Inc.
    Inventors: Peng Li, Guolin Yang, Yong Wang, Wenyi Jiang, Boon Seong Ang
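
The filter-to-queue dispatch in this application can be pictured with a short, hedged Go sketch. The Packet fields, the first-match policy, and the default queue are illustrative assumptions; the claims speak only of matching an ingress packet against per-adapter filters and assigning it to the corresponding queue.

```go
package main

import "fmt"

// Packet carries just the characteristics this sketch matches on; real
// filters could key on any header fields, so these are assumptions.
type Packet struct {
	DstMAC  string
	OuterIP string
}

// Filter pairs a match predicate with the queue it steers traffic into.
type Filter struct {
	Match func(Packet) bool
	Queue int
}

// assignQueue walks the filters configured for a virtual network adapter and
// returns the queue of the first match, or a default queue if nothing matches.
func assignQueue(p Packet, filters []Filter, defaultQueue int) int {
	for _, f := range filters {
		if f.Match(p) {
			return f.Queue
		}
	}
	return defaultQueue
}

func main() {
	filters := []Filter{
		{Match: func(p Packet) bool { return p.DstMAC == "00:50:56:aa:bb:cc" }, Queue: 1},
		{Match: func(p Packet) bool { return p.OuterIP == "10.0.0.5" }, Queue: 2},
	}
	pkt := Packet{DstMAC: "00:50:56:aa:bb:cc", OuterIP: "192.168.1.9"}
	fmt.Println("assigned queue:", assignQueue(pkt, filters, 0)) // matches the first filter
}
```
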
  • Publication number: 20200304418
    Abstract: Some embodiments provide a method for managing multiple queues of a network interface card (NIC) of a host computer that executes a data compute node (DCN). The method defines first, second, and third subsets of the queues. The first subset of queues is associated with a first feature for processing data messages received by the NIC, the second subset of queues is associated with a second feature, and the third subset is associated with both features. The method receives a request from the DCN to process data messages addressed to the DCN using both the first and second features. The method configures the NIC to direct data messages received for the DCN to a queue that is selected from the third subset of queues.
    Type: Application
    Filed: June 6, 2020
    Publication date: September 24, 2020
    Inventors: Aditya G. Holla, Rishi Mehta, Boon Ang, Rajeev Nair, Wenyi Jiang
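
A compact Go sketch of the three-subset queue selection described above. The feature names, the bitmask encoding, and the "first queue in the subset" policy are assumptions for illustration; the abstract only requires that a DCN requesting both features be steered to a queue from the subset associated with both.

```go
package main

import "fmt"

// Feature flags a DCN can request for its received traffic; the names are
// placeholders, not the patent's terminology.
type Feature uint8

const (
	FeatureA Feature = 1 << iota
	FeatureB
)

// QueuePools mirrors the three subsets: queues supporting only feature A,
// only feature B, and both features.
type QueuePools struct {
	OnlyA []int
	OnlyB []int
	Both  []int
}

// selectQueue picks a queue for a DCN based on which features it asked for.
// Selection within a subset is simplified to "first listed" for the sketch.
func selectQueue(p QueuePools, requested Feature) (int, bool) {
	var pool []int
	switch {
	case requested&(FeatureA|FeatureB) == FeatureA|FeatureB:
		pool = p.Both
	case requested&FeatureA != 0:
		pool = p.OnlyA
	case requested&FeatureB != 0:
		pool = p.OnlyB
	}
	if len(pool) == 0 {
		return 0, false
	}
	return pool[0], true
}

func main() {
	pools := QueuePools{OnlyA: []int{0, 1}, OnlyB: []int{2, 3}, Both: []int{4, 5}}
	q, ok := selectQueue(pools, FeatureA|FeatureB)
	fmt.Println("queue:", q, "found:", ok) // queue 4, drawn from the "both" subset
}
```
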
  • Publication number: 20200274820
    Abstract: An approach for a dynamic provisioning of multiple RSS engines is provided. In an embodiment, a method comprises monitoring a CPU usage of hardware queues implemented in a plurality of RSS pools, and determining whether a CPU usage of any hardware queue, implemented in a particular RSS pool of the plurality of RSS pools, has increased above a threshold value. In response to determining that a CPU usage of a particular hardware queue, implemented in the particular RSS pool, has increased above the threshold value, it is determined whether the particular RSS pool includes an unused hardware queue (a queue with light CPU usage). If such an unused hardware queue is present, then an indirection table that is associated with the particular RSS pool is modified to remap one or more data flows from the particular hardware queue to the unused hardware queue.
    Type: Application
    Filed: May 6, 2020
    Publication date: August 27, 2020
    Inventors: Aditya G. Holla, Rajeev Nair, Shilpi Agarwal, Subbarao Narahari, Zongyun Lai, Wenyi Jiang, Srikar Tati
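
The rebalancing rule in this abstract reduces to: find a hardware queue whose CPU usage crossed the threshold, look for a lightly used queue in the same RSS pool, and rewrite indirection-table entries to shift flows onto it. A hedged Go sketch follows; the thresholds, the "move every other bucket" policy, and the data layout are assumptions, not the patented mechanism's exact behavior.

```go
package main

import "fmt"

// RSSPool is a simplified view of one RSS pool: per-hardware-queue CPU usage
// (as a fraction) and the indirection table mapping hash buckets to queues.
type RSSPool struct {
	CPUUsage    []float64 // indexed by hardware queue
	Indirection []int     // hash bucket -> hardware queue
}

// rebalance checks each queue against the hot threshold; when one is hot and
// a lightly used queue exists in the same pool, it remaps some of the hot
// queue's indirection-table buckets onto the idle queue.
func rebalance(p *RSSPool, hotThreshold, idleThreshold float64) {
	for hot, usage := range p.CPUUsage {
		if usage <= hotThreshold {
			continue
		}
		idle := -1
		for q, u := range p.CPUUsage {
			if q != hot && u < idleThreshold {
				idle = q
				break
			}
		}
		if idle < 0 {
			return // no spare capacity in this pool
		}
		seen := 0
		for bucket, q := range p.Indirection {
			if q == hot && seen%2 == 0 { // move every other bucket of the hot queue
				p.Indirection[bucket] = idle
			}
			if q == hot {
				seen++
			}
		}
	}
}

func main() {
	pool := &RSSPool{
		CPUUsage:    []float64{0.95, 0.10, 0.40},
		Indirection: []int{0, 0, 0, 0, 1, 2, 2, 1},
	}
	rebalance(pool, 0.85, 0.20)
	fmt.Println("indirection table after rebalance:", pool.Indirection)
}
```
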
  • Patent number: 10757076
    Abstract: Described herein are systems, methods, and software to enhance the management of packet filters for host computing systems. In one implementation, a method of managing packet filters for a physical network interface on a host computing system includes obtaining dispatch statistics for media access control (MAC) addresses associated with virtual nodes communicating over the physical network interface via a virtual switch. After obtaining the dispatch statistics, the method further provides identifying a filter configuration based on the dispatch statistics, wherein the filter configuration classifies received packets at the physical network interface into processing queues based on attributes identified in the received packets, and applying the filter configuration in the physical network interface.
    Type: Grant
    Filed: July 20, 2017
    Date of Patent: August 25, 2020
    Assignee: Nicira, Inc.
    Inventors: Shrikrishna Khare, Ayyappan Veeraiyan, Craige Wenyi Jiang, Guolin Yang
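
A small Go sketch of the statistics-driven filter configuration. Reducing the dispatch statistics to a single per-MAC packet counter and choosing "busiest MACs get dedicated hardware filters" are assumptions made for illustration only.

```go
package main

import (
	"fmt"
	"sort"
)

// DispatchStats holds a per-MAC packet count observed at the virtual switch;
// a single counter is a simplification of the patent's dispatch statistics.
type DispatchStats map[string]uint64

// buildFilterConfig returns the MACs that should receive dedicated receive
// queues on the physical NIC: the busiest MACs, capped by the number of
// hardware filters available. Everything else falls back to a shared queue.
func buildFilterConfig(stats DispatchStats, hwFilters int) []string {
	macs := make([]string, 0, len(stats))
	for mac := range stats {
		macs = append(macs, mac)
	}
	sort.Slice(macs, func(i, j int) bool { return stats[macs[i]] > stats[macs[j]] })
	if len(macs) > hwFilters {
		macs = macs[:hwFilters]
	}
	return macs
}

func main() {
	stats := DispatchStats{
		"00:50:56:00:00:01": 120000,
		"00:50:56:00:00:02": 500,
		"00:50:56:00:00:03": 88000,
	}
	fmt.Println("MACs given dedicated filters:", buildFilterConfig(stats, 2))
}
```
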
  • Patent number: 10735341
    Abstract: An approach for a dynamic provisioning of multiple RSS engines is provided. In an embodiment, a method comprises monitoring a CPU usage of hardware queues implemented in a plurality of RSS pools, and determining whether a CPU usage of any hardware queue, implemented in a particular RSS pool of the plurality of RSS pools, has increased above a threshold value. In response to determining that a CPU usage of a particular hardware queue, implemented in the particular RSS pool, has increased above the threshold value, it is determined whether the particular RSS pool includes an unused hardware queue (a queue with light CPU usage). If such an unused hardware queue is present, then an indirection table that is associated with the particular RSS pool is modified to remap one or more data flows from the particular hardware queue to the unused hardware queue.
    Type: Grant
    Filed: April 26, 2018
    Date of Patent: August 4, 2020
    Assignee: Nicira, Inc.
    Inventors: Aditya G. Holla, Rajeev Nair, Shilpi Agarwal, Subbarao Narahari, Zongyun Lai, Wenyi Jiang, Srikar Tati
  • Patent number: 10686716
    Abstract: Some embodiments provide a method for managing multiple queues of a network interface card (NIC) of a host computer that executes a data compute node (DCN). The method defines first, second, and third subsets of the queues. The first subset of queues is associated with a first feature for processing data messages received by the NIC, the second subset of queues is associated with a second feature, and the third subset is associated with both features. The method receives a request from the DCN to process data messages addressed to the DCN using both the first and second features. The method configures the NIC to direct data messages received for the DCN to a queue that is selected from the third subset of queues.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: June 16, 2020
    Assignee: VMware, Inc.
    Inventors: Aditya G. Holla, Rishi Mehta, Boon Ang, Rajeev Nair, Wenyi Jiang
  • Publication number: 20200067842
    Abstract: A method for a sender side assisted flow classification is disclosed. In an embodiment, a method comprises detecting a packet by a network virtualization layer engine implemented in a hypervisor on a sender side of a virtualization computer system; and determining, by the network virtualization layer engine, whether the packet requires special processing. In response to determining that the packet requires special processing, a special processing flag is inserted in a certain field of an outer header of the packet; and the packet is forwarded toward a destination of the packet for a PNIC on a receiver side to process the packet.
    Type: Application
    Filed: August 23, 2018
    Publication date: February 27, 2020
    Applicant: VMware, Inc.
    Inventors: Wenyi Jiang, Guolin Yang, Boon Seong Ang, Ying Gross
  • Publication number: 20200036636
    Abstract: Some embodiments provide a method for selecting a transmit queue of a network interface card (NIC) of a host computer for an outbound data message. The NIC includes multiple transmit queues and multiple receive queues. Each of the transmit queues is individually associated with a different receive queue, and the NIC performs a load balancing operation to distribute inbound data messages among multiple receive queues. The method extracts a set of header values from a header of the outbound data message. The method uses the extracted set of header values to identify a receive queue which the NIC would select for a corresponding inbound data message upon which the NIC performed the load balancing operation. The method selects a transmit queue associated with the identified receive queue to process the outbound data message.
    Type: Application
    Filed: July 25, 2018
    Publication date: January 30, 2020
    Inventors: Aditya G. Holla, Wenyi Jiang, Rajeev Nair, Srikar Tati, Boon Ang, Kairav Padarthy
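
The symmetry trick in this application can be sketched briefly: hash the header tuple of the reverse-direction (inbound) packet the same way the NIC's receive-side load balancing would, find the receive queue that hash lands on, and transmit on the queue paired with it. The FNV hash below is only a stand-in for whatever hash the NIC actually applies, and the pairing map is an assumed data structure.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// FlowKey is the header tuple used for queue selection.
type FlowKey struct {
	SrcIP, DstIP     string
	SrcPort, DstPort uint16
}

// rxQueueFor mimics the NIC's receive-side spreading: hash the tuple and
// index into an indirection table of receive queues.
func rxQueueFor(k FlowKey, indirection []int) int {
	h := fnv.New32a()
	fmt.Fprintf(h, "%s|%s|%d|%d", k.SrcIP, k.DstIP, k.SrcPort, k.DstPort)
	return indirection[int(h.Sum32()%uint32(len(indirection)))]
}

// txQueueFor selects the transmit queue for an outbound packet: build the key
// of the corresponding inbound packet (source and destination swapped), find
// the receive queue the NIC would pick for it, and use the transmit queue
// paired one-to-one with that receive queue.
func txQueueFor(out FlowKey, indirection []int, txPairedWithRx map[int]int) int {
	inbound := FlowKey{SrcIP: out.DstIP, DstIP: out.SrcIP, SrcPort: out.DstPort, DstPort: out.SrcPort}
	return txPairedWithRx[rxQueueFor(inbound, indirection)]
}

func main() {
	indirection := []int{0, 1, 2, 3, 0, 1, 2, 3}
	pairs := map[int]int{0: 0, 1: 1, 2: 2, 3: 3} // receive queue -> its paired transmit queue
	out := FlowKey{SrcIP: "10.0.0.2", DstIP: "10.0.0.9", SrcPort: 49152, DstPort: 443}
	fmt.Println("tx queue:", txQueueFor(out, indirection, pairs))
}
```
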
  • Publication number: 20200028792
    Abstract: Some embodiments provide a method for managing multiple queues of a network interface card (NIC) of a host computer that executes a data compute node (DCN). The method defines first, second, and third subsets of the queues. The first subset of queues is associated with a first feature for processing data messages received by the NIC, the second subset of queues is associated with a second feature, and the third subset is associated with both features. The method receives a request from the DCN to process data messages addressed to the DCN using both the first and second features. The method configures the NIC to direct data messages received for the DCN to a queue that is selected from the third subset of queues.
    Type: Application
    Filed: July 23, 2018
    Publication date: January 23, 2020
    Inventors: Aditya G. Holla, Rishi Mehta, Boon Ang, Rajeev Nair, Wenyi Jiang
  • Publication number: 20200028785
    Abstract: A method to offload network function packet processing from a virtual machine onto an offload destination is disclosed. In an embodiment, a method comprises: defining an application programming interface (“API”) for capturing, in a packet processor offload, a network function packet processing for a data flow by specifying how to perform the network function packet processing on data packets that belong to the data flow. Based on capabilities of the packet processor offload and available resources, a packet processing offload destination is selected. Based at least on the API, the packet processor offload for the packet processing offload destination is generated. The packet processor offload is downloaded to the packet processing offload destination to configure the packet processing offload destination to provide the network function packet processing on the data packets that belong to the data flow. The packet processing offload destination is a PNIC or a hypervisor.
    Type: Application
    Filed: July 19, 2018
    Publication date: January 23, 2020
    Applicant: VMware, Inc.
    Inventors: Boon Seong Ang, Yong Wang, Guolin Yang, Craige Wenyi Jiang
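
A hedged Go sketch of the offload flow described above: an API captures per-flow packet processing, a destination is chosen based on capability and then the processor is installed there. Every type and method name below (FlowSpec, PacketProcessor, OffloadDestination, and the toy capability check) is a placeholder, not the patent's actual interface.

```go
package main

import "fmt"

// FlowSpec and PacketProcessor stand in for the "capture" side of the API: a
// description of a data flow plus how its packets should be transformed.
type FlowSpec struct{ Match string }

type PacketProcessor func(packet []byte) []byte

// OffloadDestination abstracts the two destinations the abstract mentions:
// a PNIC or the hypervisor itself.
type OffloadDestination interface {
	Name() string
	Capable(f FlowSpec) bool
	Install(f FlowSpec, p PacketProcessor) error
}

type pnic struct{}       // hardware offload destination
type hypervisor struct{} // software fallback destination

func (pnic) Name() string                            { return "pnic" }
func (pnic) Capable(f FlowSpec) bool                 { return f.Match == "udp" } // toy capability check
func (pnic) Install(FlowSpec, PacketProcessor) error { return nil }

func (hypervisor) Name() string                            { return "hypervisor" }
func (hypervisor) Capable(FlowSpec) bool                   { return true }
func (hypervisor) Install(FlowSpec, PacketProcessor) error { return nil }

// offload picks the first destination capable of handling the flow and
// installs the generated packet processor there.
func offload(f FlowSpec, p PacketProcessor, dests []OffloadDestination) string {
	for _, d := range dests {
		if d.Capable(f) && d.Install(f, p) == nil {
			return d.Name()
		}
	}
	return "none"
}

func main() {
	passthrough := func(b []byte) []byte { return b } // placeholder network function
	dest := offload(FlowSpec{Match: "tcp"}, passthrough, []OffloadDestination{pnic{}, hypervisor{}})
	fmt.Println("flow offloaded to:", dest) // this flow falls back to the hypervisor
}
```
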
  • Patent number: 10476801
    Abstract: An approach for dynamically distributing RSS engines to virtual machines based on flow data is disclosed. A method comprises receiving first absolute counts of data packets that belong to at least one data flow. Flow load indicator values are computed based on the first absolute counts, and stored in a lookup table. A sorted table is generated by sorting entries of the lookup table. A first count of filters that can be applied on RSS engines is determined. A second count of data flows in the sorted table and having corresponding flow load indicator values exceeding a threshold value is determined. If the second count exceeds the first count, then the first count of data flows is selected from the sorted table. The first count of filters that correspond to the data flows is determined, and the first count of the filters is assigned to at least one RSS engine.
    Type: Grant
    Filed: April 27, 2018
    Date of Patent: November 12, 2019
    Assignee: Nicira, Inc.
    Inventors: Aditya G. Holla, Shrikrishna Khare, Rajeev Nair, Aditya Sonthy, Wenyi Jiang, Rishi Mehta
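
The selection logic in this patent's abstract is essentially a sort-and-cap: order flows by their load indicator, keep those above the threshold, and stop once the available filter count is exhausted. A minimal Go sketch, with the load values and the filter budget as illustrative inputs:

```go
package main

import (
	"fmt"
	"sort"
)

// Flow pairs a flow's hardware filter with its load indicator, which the
// abstract derives from absolute packet counts; here it is just a number.
type Flow struct {
	FilterID int
	Load     float64
}

// selectFilters sorts flows by load, keeps only those above the threshold,
// and caps the result at the number of filters the RSS engines can accept.
func selectFilters(flows []Flow, threshold float64, filterBudget int) []int {
	sort.Slice(flows, func(i, j int) bool { return flows[i].Load > flows[j].Load })
	var chosen []int
	for _, f := range flows {
		if f.Load <= threshold || len(chosen) == filterBudget {
			break
		}
		chosen = append(chosen, f.FilterID)
	}
	return chosen
}

func main() {
	flows := []Flow{{1, 0.2}, {2, 0.9}, {3, 0.7}, {4, 0.05}}
	// Budget of two filters: the two heaviest qualifying flows win.
	fmt.Println("filters assigned to RSS engines:", selectFilters(flows, 0.1, 2))
}
```
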
  • Publication number: 20190334821
    Abstract: An approach for dynamically distributing RSS engines to virtual machines based on flow data is disclosed. A method comprises receiving first absolute counts of data packets that belong to at least one data flow. Flow load indicator values are computed based on the first absolute counts, and stored in a lookup table. A sorted table is generated by sorting entries of the lookup table. A first count of filters that can be applied on RSS engines is determined. A second count of data flows in the sorted table and having corresponding flow load indicator values exceeding a threshold value is determined. If the second count exceeds the first count, then the first count of data flows is selected from the sorted table. The first count of filters that correspond to the data flows is determined, and the first count of the filters is assigned to at least one RSS engine.
    Type: Application
    Filed: April 27, 2018
    Publication date: October 31, 2019
    Applicant: Nicira, Inc.
    Inventors: Aditya G. Holla, Shrikrishna Khare, Rajeev Nair, Aditya Sonthy, Wenyi Jiang, Rishi Mehta
  • Publication number: 20190334829
    Abstract: An approach for a dynamic provisioning of multiple RSS engines is provided. In an embodiment, a method comprises monitoring a CPU usage of hardware queues implemented in a plurality of RSS pools, and determining whether a CPU usage of any hardware queue, implemented in a particular RSS pool of the plurality of RSS pools, has increased above a threshold value. In response to determining that a CPU usage of a particular hardware queue, implemented in the particular RSS pool, has increased above the threshold value, it is determined whether the particular RSS pool includes an unused hardware queue (a queue with light CPU usage). If such an unused hardware queue is present, then an indirection table that is associated with the particular RSS pool is modified to remap one or more data flows from the particular hardware queue to the unused hardware queue.
    Type: Application
    Filed: April 26, 2018
    Publication date: October 31, 2019
    Applicant: Nicira, Inc.
    Inventors: Aditya G. Holla, Rajeev Nair, Shilpi Agarwal, Subbarao Narahari, Zongyun Lai, Wenyi Jiang, Srikar Tati
  • Patent number: 10348683
    Abstract: Described herein are systems, methods, and software to enhance the management of packet filters for host computing systems. In one implementation, a computing system may identify media access control (MAC) addresses and communication statistics for virtual nodes communicating over physical network interfaces of the computing system. The computing system may further prioritize the MAC addresses based on the virtual network interface ports and physical network interface ports that the MAC addresses were identified on, and generate a filter configuration for the physical network interfaces based on the prioritization and the communication statistics.
    Type: Grant
    Filed: November 2, 2017
    Date of Patent: July 9, 2019
    Assignee: Nicira, Inc.
    Inventors: Aditya Holla, Wenyi Jiang, Shrikrishna Khare, Ayyappan Veeraiyan, Rajeev Nair
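
A short Go sketch of the prioritization idea: rank MACs using both where they were learned and how much traffic they carry, then hand the top of the list to the limited physical-NIC filter slots. The specific scoring rule below (VNIC-facing MACs ahead of uplink-learned ones, then by packet count) is an assumed policy for illustration, not the claimed prioritization.

```go
package main

import (
	"fmt"
	"sort"
)

// MACInfo records where a MAC address was identified and how busy it is.
type MACInfo struct {
	MAC      string
	OnUplink bool   // seen on a physical (uplink) port rather than a VNIC port
	Packets  uint64 // communication statistics for the MAC
}

// prioritize orders MACs for the limited physical-NIC filter slots.
func prioritize(macs []MACInfo) []MACInfo {
	sort.Slice(macs, func(i, j int) bool {
		if macs[i].OnUplink != macs[j].OnUplink {
			return !macs[i].OnUplink // VNIC-facing MACs ahead of uplink-learned ones
		}
		return macs[i].Packets > macs[j].Packets
	})
	return macs
}

func main() {
	macs := []MACInfo{
		{"00:50:56:00:00:01", false, 1000},
		{"00:50:56:00:00:02", true, 90000},
		{"00:50:56:00:00:03", false, 75000},
	}
	fmt.Println("filter priority order:", prioritize(macs))
}
```
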
  • Patent number: 10313926
    Abstract: Example methods are provided for a host to perform large receive offload (LRO) processing in a virtualized computing environment. The method may comprise receiving, via a physical network interface controller (NIC), incoming packets that are destined for the virtualized computing instance, and processing the incoming packets to generate at least one processed packet using a networking service pipeline that includes a packet aggregation service and multiple networking services. The packet aggregation service may be configured to aggregate the incoming packets into an aggregated packet and enabled at a service point along the networking service pipeline based on an LRO capability of at least one of the multiple networking services to process the aggregated packet. The method may also comprise forwarding the at least one processed packet generated by the networking service pipeline to the virtualized computing instance.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: June 4, 2019
    Assignee: NICIRA, INC.
    Inventors: Rishi Mehta, Boon Ang, Guolin Yang, Wenyi Jiang, Jayant Jain
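
The placement decision in this patent can be pictured as: enable the packet aggregation service at the earliest point in the pipeline after which every remaining service can process an aggregated packet. The Go sketch below encodes that rule; the stage names and the "as early as possible" preference are assumptions for illustration.

```go
package main

import "fmt"

// Service is one stage in the networking service pipeline; CanHandleLRO says
// whether the stage can process an aggregated (larger-than-MSS) packet.
type Service struct {
	Name         string
	CanHandleLRO bool
}

// aggregationPoint returns the index in the pipeline at which the packet
// aggregation service can be enabled: the earliest point from which every
// remaining downstream service is able to handle aggregated packets.
func aggregationPoint(pipeline []Service) int {
	point := len(pipeline) // default: aggregate only after all services
	for i := len(pipeline) - 1; i >= 0; i-- {
		if !pipeline[i].CanHandleLRO {
			break
		}
		point = i
	}
	return point
}

func main() {
	pipeline := []Service{
		{"overlay-decap", true},
		{"firewall", false}, // cannot process aggregated packets
		{"dvfilter", true},
		{"vnic-delivery", true},
	}
	if i := aggregationPoint(pipeline); i < len(pipeline) {
		fmt.Printf("enable aggregation just before stage %d (%q)\n", i, pipeline[i].Name)
	} else {
		fmt.Println("aggregation is only safe after the full pipeline")
	}
}
```
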
  • Publication number: 20190132286
    Abstract: Described herein are systems, methods, and software to enhance the management of packet filters for host computing systems. In one implementation, a computing system may identify media access control (MAC) addresses and communication statistics for virtual nodes communicating over physical network interfaces of the computing system. The computing system may further prioritize the MAC addresses based on the virtual network interface ports and physical network interface ports that the MAC addresses were identified on, and generate a filter configuration for the physical network interfaces based on the prioritization and the communication statistics.
    Type: Application
    Filed: November 2, 2017
    Publication date: May 2, 2019
    Inventors: Aditya Holla, Wenyi Jiang, Shrikrishna Khare, Ayyappan Veeraiyan, Rajeev Nair
  • Publication number: 20190132296
    Abstract: A first host receives a packet from a first compute node for a second compute node of a second host. The payload is larger than a maximum transmission unit size. The first packet is encapsulated with an outer header. The first host analyzes a length of at least a portion of the outer header in determining a size of an encrypted segment of the payload. Then, the first host forms a plurality of packets where each packet in the packets includes an encrypted segment of the payload, a respective encryption header, and a respective authentication value. The payload of the first packet is segmented to form a plurality of encrypted segments based on the size. The first host sends the packets to the second host and receives an indication that a packet was not received. A second packet including the encrypted segment is sent to the second compute node.
    Type: Application
    Filed: October 27, 2017
    Publication date: May 2, 2019
    Inventors: Wenyi Jiang, Daniel G. Wing, Bin Qian, Dexiang Wang
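
The sizing step in this application is simple arithmetic: the plaintext room in each packet is the MTU minus the outer header length the first host measured, minus the per-segment encryption header and authentication value. A hedged Go sketch with placeholder byte counts (the real overheads depend on the encapsulation and cipher in use):

```go
package main

import "fmt"

// Overheads that shrink the room available for each encrypted segment. The
// byte values below are placeholders chosen for illustration, not the sizes
// any particular encapsulation or cipher actually uses.
const (
	mtu              = 1500
	outerHeaderLen   = 70 // encapsulation headers analyzed by the first host
	encryptionHdrLen = 16 // per-segment encryption header
	authValueLen     = 16 // per-segment authentication value
)

// segmentSize is the plaintext bytes that fit in one packet once the outer
// header, encryption header, and authentication value are accounted for.
func segmentSize() int {
	return mtu - outerHeaderLen - encryptionHdrLen - authValueLen
}

// segmentPayload splits a large payload into chunks of at most segmentSize
// bytes; each chunk would then be encrypted and sent as its own packet, and
// any chunk reported missing can be re-encrypted and retransmitted alone.
func segmentPayload(payload []byte) [][]byte {
	size := segmentSize()
	var segments [][]byte
	for len(payload) > 0 {
		n := size
		if len(payload) < n {
			n = len(payload)
		}
		segments = append(segments, payload[:n])
		payload = payload[n:]
	}
	return segments
}

func main() {
	payload := make([]byte, 9000) // larger than the MTU
	segs := segmentPayload(payload)
	fmt.Printf("segment size %d bytes -> %d packets for a %d-byte payload\n",
		segmentSize(), len(segs), len(payload))
}
```
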
  • Publication number: 20190028435
    Abstract: Described herein are systems, methods, and software to enhance the management of packet filters for host computing systems. In one implementation, a method of managing packet filters for a physical network interface on a host computing system includes obtaining dispatch statistics for media access control (MAC) addresses associated with virtual nodes communicating over the physical network interface via a virtual switch. After obtaining the dispatch statistics, the method further provides identifying a filter configuration based on the dispatch statistics, wherein the filter configuration classifies received packets at the physical network interface into processing queues based on attributes identified in the received packets, and applying the filter configuration in the physical network interface.
    Type: Application
    Filed: July 20, 2017
    Publication date: January 24, 2019
    Inventors: Shrikrishna Khare, Ayyappan Veeraiyan, Craige Wenyi Jiang, Guolin Yang
  • Publication number: 20190014039
    Abstract: A method of creating containers in a physical host that includes a managed forwarding element (MFE) configured to forward packets to and from a set of data compute nodes (DCNs) hosted by the physical host. The method creates a container DCN in the host. The container DCN includes a virtual network interface card (VNIC) configured to exchange packets with the MFE. The method creates a plurality of containers in the container DCN. The method, for each container in the container DCN, creates a corresponding port on the MFE. The method sends packets addressed to each of the plurality of containers from the corresponding MFE port to the VNIC of the container DCN.
    Type: Application
    Filed: August 25, 2018
    Publication date: January 10, 2019
    Inventors: Jianjun Shen, Ganesan Chandrashekhar, Donghai Han, Jingchun Jason Jiang, Wenyi Jiang, Ayyappan Veeraiyan
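
A minimal Go sketch of the per-container port wiring: one MFE port is created for each container in the container DCN, and packets addressed to a container are forwarded from that port to the DCN's single VNIC. The types and the integer port identifiers are placeholders for illustration.

```go
package main

import "fmt"

// MFE is a toy stand-in for the managed forwarding element.
type MFE struct {
	portFor map[string]int // container name -> MFE port created for it
	next    int
}

// ContainerDCN models a container DCN: one VNIC shared by its containers.
type ContainerDCN struct {
	VNIC       string
	Containers []string
}

// createContainerPorts creates one MFE port per container in the DCN, so the
// MFE can forward packets addressed to each container out of its own port and
// onto the DCN's single VNIC.
func (m *MFE) createContainerPorts(dcn ContainerDCN) {
	for _, c := range dcn.Containers {
		m.portFor[c] = m.next
		m.next++
	}
}

// deliver resolves the MFE port for a destination container; the actual frame
// would then be sent from that port to the container DCN's VNIC.
func (m *MFE) deliver(container string, dcn ContainerDCN) string {
	return fmt.Sprintf("port %d -> %s -> container %s", m.portFor[container], dcn.VNIC, container)
}

func main() {
	mfe := &MFE{portFor: map[string]int{}}
	dcn := ContainerDCN{VNIC: "vnic-0", Containers: []string{"web", "db"}}
	mfe.createContainerPorts(dcn)
	fmt.Println(mfe.deliver("db", dcn))
}
```
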