Patents by Inventor Boon Ang

Boon Ang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11848869
    Abstract: Some embodiments provide a method for selecting a transmit queue of a network interface card (NIC) of a host computer for an outbound data message. The NIC includes multiple transmit queues and multiple receive queues. Each of the transmit queues is individually associated with a different receive queue, and the NIC performs a load balancing operation to distribute inbound data messages among multiple receive queues. The method extracts a set of header values from a header of the outbound data message. The method uses the extracted set of header values to identify a receive queue which the NIC would select for a corresponding inbound data message upon which the NIC performed the load balancing operation. The method selects a transmit queue associated with the identified receive queue to process the outbound data message.
    Type: Grant
    Filed: May 5, 2021
    Date of Patent: December 19, 2023
    Assignee: VMWARE, INC.
    Inventors: Aditya G. Holla, Wenyi Jiang, Rajeev Nair, Srikar Tati, Boon Ang, Kairav Padarthy
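
The transmit-queue selection described in this abstract can be illustrated with a minimal Python sketch. All names here are hypothetical, and SHA-256 stands in for the NIC's actual receive-side hash (e.g. an RSS/Toeplitz hash); the point is only that the outbound header tuple is reversed so a flow's transmit queue matches the receive queue its return traffic would hash to.

```python
import hashlib
from typing import Tuple

NUM_QUEUE_PAIRS = 8  # assumed: TX queue i is paired 1:1 with RX queue i

def rx_queue_for(five_tuple: Tuple[str, str, int, int, str]) -> int:
    """Stand-in for the NIC's receive-side load-balancing hash (e.g. RSS)."""
    key = "|".join(str(f) for f in five_tuple).encode()
    return int(hashlib.sha256(key).hexdigest(), 16) % NUM_QUEUE_PAIRS

def select_tx_queue(hdr: dict) -> int:
    """Pick the TX queue paired with the RX queue the NIC would choose
    for the corresponding inbound data message (source/destination swapped)."""
    inbound_tuple = (hdr["dst_ip"], hdr["src_ip"],      # swap addresses
                     hdr["dst_port"], hdr["src_port"],  # swap ports
                     hdr["proto"])
    return rx_queue_for(inbound_tuple)  # same index selects the paired TX queue

# Example: outbound packets of a TCP flow land on the queue pair that will
# also receive the flow's inbound packets.
hdr = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
       "src_port": 49152, "dst_port": 443, "proto": "tcp"}
print(select_tx_queue(hdr))
```
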
  • Patent number: 11356381
    Abstract: A method for managing several queues of a network interface card (NIC) of a computer. The method initially configures the NIC to direct data messages received for a data compute node (DCN) executing on the computer to a default first NIC queue. When the DCN requests data messages addressed to the particular DCN to be processed with a first feature for load balancing data messages across multiple queues and a second feature for aggregating multiple related data messages into a single data message, the method configures the NIC to direct subsequent data messages received for the DCN to a second queue in a first subset of queues associated with the first feature if a load on the default first queue exceeds a first threshold. Otherwise, if a load on the first subset of queues exceeds a second threshold, the method configures the NIC to direct subsequent data messages received for the particular DCN to a third queue in a second subset of queues associated with both the first and second features.
    Type: Grant
    Filed: June 6, 2020
    Date of Patent: June 7, 2022
    Assignee: VMWARE, INC.
    Inventors: Aditya G. Holla, Rishi Mehta, Boon Ang, Rajeev Nair, Wenyi Jiang
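
One way to read the threshold-driven escalation in this abstract is sketched below in Python. The threshold values, the load metric, and the concrete feature names (receive-side load balancing plus LRO-style aggregation) are assumptions, not the patented driver logic.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Queue:
    qid: int
    load: float = 0.0  # assumed load metric, e.g. utilization in [0, 1]

@dataclass
class NicQueues:
    default: Queue
    rss_only: List[Queue]   # first feature: load balancing across queues
    rss_lro: List[Queue]    # first + second feature: load balancing + aggregation

FIRST_THRESHOLD = 0.7    # assumed threshold on the default queue
SECOND_THRESHOLD = 0.7   # assumed threshold on the first (RSS-only) subset

def queue_for_dcn(nic: NicQueues) -> Queue:
    """Escalate the DCN's traffic: stay on the default queue while it is lightly
    loaded, move to the RSS-only subset when the default queue is overloaded,
    and move to the RSS+LRO subset when the RSS-only subset is also overloaded."""
    if nic.default.load <= FIRST_THRESHOLD:
        return nic.default
    rss_load = sum(q.load for q in nic.rss_only) / len(nic.rss_only)
    if rss_load <= SECOND_THRESHOLD:
        return min(nic.rss_only, key=lambda q: q.load)
    return min(nic.rss_lro, key=lambda q: q.load)

nic = NicQueues(default=Queue(0, 0.9),
                rss_only=[Queue(1, 0.8), Queue(2, 0.9)],
                rss_lro=[Queue(3, 0.2), Queue(4, 0.1)])
print(queue_for_dcn(nic).qid)  # -> 4
```
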
  • Patent number: 11196651
    Abstract: Some embodiments provide a method for monitoring the status of a network connection between first and second host computers. The method is performed in some embodiments by a tunnel monitor executing on the first host computer that also separately executes a machine, where the machine uses a tunnel to send and receive messages to and from the second host computer. The method establishes a liveness channel with the machine to iteratively determine whether the first machine is operational. The method further establishes a monitoring session with the second host computer to iteratively determine whether the tunnel is operational. When a determination is made through the liveness channel that the machine is no longer operational, the method terminates the monitoring session with the second host computer. When a determination is made that the tunnel is no longer operational, the method notifies the machine through the liveness channel.
    Type: Grant
    Filed: October 23, 2019
    Date of Patent: December 7, 2021
    Assignee: VMWARE, INC.
    Inventors: Yong Wang, Boon Ang, Guolin Yang, Wenyi Jiang
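
The monitor's control loop in this abstract reduces to two periodic checks with two reactions. A schematic Python sketch follows, with the liveness-channel and monitoring-session mechanics abstracted into callables; all names are assumed.

```python
import time
from typing import Callable

def monitor(machine_alive: Callable[[], bool],
            tunnel_alive: Callable[[], bool],
            notify_machine: Callable[[str], None],
            end_monitoring_session: Callable[[], None],
            interval: float = 1.0) -> None:
    """Iteratively check the machine over its liveness channel and the tunnel
    over the monitoring session with the peer host, reacting as the abstract
    describes."""
    while True:
        if not machine_alive():
            # The machine is no longer operational: terminate the monitoring
            # session with the second host computer.
            end_monitoring_session()
            return
        if not tunnel_alive():
            # The tunnel is no longer operational: tell the machine over the
            # liveness channel.
            notify_machine("tunnel to peer host is down")
        time.sleep(interval)
```
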
  • Publication number: 20210258257
    Abstract: Some embodiments provide a method for selecting a transmit queue of a network interface card (NIC) of a host computer for an outbound data message. The NIC includes multiple transmit queues and multiple receive queues. Each of the transmit queues is individually associated with a different receive queue, and the NIC performs a load balancing operation to distribute inbound data messages among multiple receive queues. The method extracts a set of header values from a header of the outbound data message. The method uses the extracted set of header values to identify a receive queue which the NIC would select for a corresponding inbound data message upon which the NIC performed the load balancing operation. The method selects a transmit queue associated with the identified receive queue to process the outbound data message.
    Type: Application
    Filed: May 5, 2021
    Publication date: August 19, 2021
    Inventors: Aditya G. Holla, Wenyi Jiang, Rajeev Nair, Srikar Tati, Boon Ang, Kairav Padarthy
  • Patent number: 11025546
    Abstract: Some embodiments provide a method for selecting a transmit queue of a network interface card (NIC) of a host computer for an outbound data message. The NIC includes multiple transmit queues and multiple receive queues. Each of the transmit queues is individually associated with a different receive queue, and the NIC performs a load balancing operation to distribute inbound data messages among multiple receive queues. The method extracts a set of header values from a header of the outbound data message. The method uses the extracted set of header values to identify a receive queue which the NIC would select for a corresponding inbound data message upon which the NIC performed the load balancing operation. The method selects a transmit queue associated with the identified receive queue to process the outbound data message.
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: June 1, 2021
    Assignee: VMWARE, INC.
    Inventors: Aditya G. Holla, Wenyi Jiang, Rajeev Nair, Srikar Tati, Boon Ang, Kairav Padarthy
  • Publication number: 20210126848
    Abstract: Some embodiments provide a method for monitoring the status of a network connection between first and second host computers. The method is performed in some embodiments by a tunnel monitor executing on the first host computer that also separately executes a machine, where the machine uses a tunnel to send and receive messages to and from the second host computer. The method establishes a liveness channel with the machine to iteratively determine whether the first machine is operational. The method further establishes a monitoring session with the second host computer to iteratively determine whether the tunnel is operational. When a determination is made through the liveness channel that the machine is no longer operational, the method terminates the monitoring session with the second host computer. When a determination is made that the tunnel is no longer operational, the method notifies the machine through the liveness channel.
    Type: Application
    Filed: October 23, 2019
    Publication date: April 29, 2021
    Inventors: Yong Wang, Boon Ang, Guolin Yang, Wenyi Jiang
  • Publication number: 20200304418
    Abstract: Some embodiments provide a method for managing multiple queues of a network interface card (NIC) of a host computer that executes a data compute node (DCN). The method defines first, second, and third subsets of the queues. The first subset of queues is associated with a first feature for processing data messages received by the NIC, the second subset of queues is associated with a second feature, and the third subset is associated with both features. The method receives a request from the DCN to process data messages addressed to the DCN using both the first and second features. The method configures the NIC to direct data messages received for the DCN to a queue that is selected from the third subset of queues.
    Type: Application
    Filed: June 6, 2020
    Publication date: September 24, 2020
    Inventors: Aditya G. Holla, Rishi Mehta, Boon Ang, Rajeev Nair, Wenyi Jiang
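
The first/second/third subset arrangement in this abstract amounts to keying queue subsets by the feature combination a DCN requests. A small Python sketch of that lookup; the feature names and queue IDs are made up for illustration.

```python
from typing import Dict, FrozenSet, List

# Hypothetical feature names; the abstract only calls them "first" and "second".
RSS = "rss"   # receive-side load balancing
LRO = "lro"   # aggregation of related data messages

# Queues grouped by the feature combination they support.
subsets: Dict[FrozenSet[str], List[int]] = {
    frozenset({RSS}): [1, 2, 3],
    frozenset({LRO}): [4, 5],
    frozenset({RSS, LRO}): [6, 7],
}

def queue_subset_for_request(requested: FrozenSet[str]) -> List[int]:
    """Return the queue subset matching exactly the features the DCN asked for."""
    return subsets[requested]

# A DCN requesting both features is steered to a queue from the third subset.
print(queue_subset_for_request(frozenset({RSS, LRO})))   # -> [6, 7]
```
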
  • Patent number: 10686716
    Abstract: Some embodiments provide a method for managing multiple queues of a network interface card (NIC) of a host computer that executes a data compute node (DCN). The method defines first, second, and third subsets of the queues. The first subset of queues is associated with a first feature for processing data messages received by the NIC, the second subset of queues is associated with a second feature, and the third subset is associated with both features. The method receives a request from the DCN to process data messages addressed to the DCN using both the first and second features. The method configures the NIC to direct data messages received for the DCN to a queue that is selected from the third subset of queues.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: June 16, 2020
    Assignee: VMWARE, INC.
    Inventors: Aditya G. Holla, Rishi Mehta, Boon Ang, Rajeev Nair, Wenyi Jiang
  • Publication number: 20200036636
    Abstract: Some embodiments provide a method for selecting a transmit queue of a network interface card (NIC) of a host computer for an outbound data message. The NIC includes multiple transmit queues and multiple receive queues. Each of the transmit queues is individually associated with a different receive queue, and the NIC performs a load balancing operation to distribute inbound data messages among multiple receive queues. The method extracts a set of header values from a header of the outbound data message. The method uses the extracted set of header values to identify a receive queue which the NIC would select for a corresponding inbound data message upon which the NIC performed the load balancing operation. The method selects a transmit queue associated with the identified receive queue to process the outbound data message.
    Type: Application
    Filed: July 25, 2018
    Publication date: January 30, 2020
    Inventors: Aditya G. Holla, Wenyi Jiang, Rajeev Nair, Srikar Tati, Boon Ang, Kairav Padarthy
  • Publication number: 20200028792
    Abstract: Some embodiments provide a method for managing multiple queues of a network interface card (NIC) of a host computer that executes a data compute node (DCN). The method defines first, second, and third subsets of the queues. The first subset of queues is associated with a first feature for processing data messages received by the NIC, the second subset of queues is associated with a second feature, and the third subset is associated with both features. The method receives a request from the DCN to process data messages addressed to the DCN using both the first and second features. The method configures the NIC to direct data messages received for the DCN to a queue that is selected from the third subset of queues.
    Type: Application
    Filed: July 23, 2018
    Publication date: January 23, 2020
    Inventors: Aditya G. Holla, Rishi Mehta, Boon Ang, Rajeev Nair, Wenyi Jiang
  • Patent number: 10313926
    Abstract: Example methods are provided for a host to perform large receive offload (LRO) processing in a virtualized computing environment. The method may comprise receiving, via a physical network interface controller (NIC), incoming packets that are destined for the virtualized computing instance, and processing the incoming packets to generate at least one processed packet using a networking service pipeline that includes a packet aggregation service and multiple networking services. The packet aggregation service may be configured to aggregate the incoming packets into an aggregated packet and enabled at a service point along the networking service pipeline based on an LRO capability of at least one of the multiple networking services to process the aggregated packet. The method may also comprise forwarding the at least one processed packet generated by the networking service pipeline to the virtualized computing instance.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: June 4, 2019
    Assignee: NICIRA, INC.
    Inventors: Rishi Mehta, Boon Ang, Guolin Yang, Wenyi Jiang, Jayant Jain
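
The placement decision in this abstract, enabling packet aggregation only at a point in the service pipeline where the remaining services can handle an aggregated packet, might look roughly like the Python sketch below. The service names and the suffix-scan heuristic are assumptions, not the patented method.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Service:
    name: str
    lro_capable: bool   # can this service process an aggregated (oversized) packet?

def aggregation_insertion_point(pipeline: List[Service]) -> int:
    """Return the index at which the packet aggregation service can be enabled:
    just before the longest trailing stretch of services that all handle
    aggregated packets."""
    point = len(pipeline)           # default: aggregate only after every service
    for i in range(len(pipeline) - 1, -1, -1):
        if pipeline[i].lro_capable:
            point = i               # extend the LRO-capable suffix backwards
        else:
            break
    return point

pipeline = [Service("firewall", lro_capable=False),
            Service("load_balancer", lro_capable=True),
            Service("vswitch_forwarding", lro_capable=True)]
print(aggregation_insertion_point(pipeline))   # -> 1: aggregate after the firewall
```
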
  • Patent number: 10225106
    Abstract: Certain embodiments described herein are generally directed to a hypervisor-wide data structure that holds service rule address information for multiple VIFs in a compact way, which can later be processed per-VIF, in order to perform VIF-specific address group updates. For example, certain embodiments described herein provide a network controller that maintains a global hash table for multiple VIFs that maps network addresses to groups of one or more service rules. In certain embodiments, a network address to service rules table for each VIF may be derived based on the global hash table by using set intersections.
    Type: Grant
    Filed: November 29, 2016
    Date of Patent: March 5, 2019
    Assignee: VMware, Inc.
    Inventors: Soner Sevinc, Anupam Chanda, Pankaj Thakkar, Boon Ang
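
The global-table-plus-intersection idea in this abstract can be shown in a few lines of Python. The addresses and rule identifiers are invented, and the hypervisor-wide hash table is represented here as a plain dict.

```python
from typing import Dict, Set

# Global (hypervisor-wide) table: network address -> service rules that reference it.
global_rules: Dict[str, Set[str]] = {
    "10.0.0.0/24": {"rule-allow-web", "rule-log"},
    "10.0.1.0/24": {"rule-allow-db"},
}

# Rules actually configured on each VIF (hypothetical identifiers).
vif_rules: Dict[str, Set[str]] = {
    "vif-1": {"rule-allow-web"},
    "vif-2": {"rule-allow-db", "rule-log"},
}

def per_vif_table(vif: str) -> Dict[str, Set[str]]:
    """Derive a VIF-specific address -> rules table by intersecting the global
    table's rule groups with the rules applied on that VIF."""
    applied = vif_rules[vif]
    return {addr: rules & applied            # set intersection
            for addr, rules in global_rules.items()
            if rules & applied}

print(per_vif_table("vif-1"))   # {'10.0.0.0/24': {'rule-allow-web'}}
```
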
  • Patent number: 10225233
    Abstract: Example methods are provided for a host to perform Media Access Control (MAC) address learning in a virtualized computing environment. The host includes multiple physical network interface controllers (NICs) configured as a team. The method may comprise: in response to detecting an egress packet that includes a source MAC address from a virtualized computing instance, learning address mapping information that associates the source MAC address with a virtual port; and sending the egress packet to a physical network via a first physical NIC selected from the team based on a NIC teaming policy. The method may also comprise: in response to detecting an ingress packet that also includes the source MAC address, determining whether the source MAC address has moved based on whether the ingress packet is received via the first physical NIC, or a second physical NIC from the team, but otherwise, maintaining the address mapping information.
    Type: Grant
    Filed: June 7, 2017
    Date of Patent: March 5, 2019
    Assignee: NICIRA, INC.
    Inventors: Shrikrishna Khare, Boon Ang, Guolin Yang, Subin Cyriac Mathew
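
One plausible reading of the ingress-side check in this abstract is sketched below in Python: when a locally learned source MAC shows up on an ingress packet, the arrival NIC is compared with the team NIC selected for that MAC before the learned mapping is invalidated. The teaming policy, the table layout, and the reflection heuristic are assumptions; the patented logic may differ.

```python
import zlib
from typing import Dict, List, Tuple

mac_table: Dict[str, Tuple[str, str]] = {}   # MAC -> (virtual port, uplink NIC)

def teaming_policy_nic(mac: str, team: List[str]) -> str:
    """Assumed NIC-teaming policy: hash the source MAC onto one NIC of the team."""
    return team[zlib.crc32(mac.encode()) % len(team)]

def on_egress(src_mac: str, vport: str, team: List[str]) -> str:
    """Learn MAC -> vport and send via the NIC chosen by the teaming policy."""
    nic = teaming_policy_nic(src_mac, team)
    mac_table[src_mac] = (vport, nic)
    return nic

def mac_has_moved(src_mac: str, arrival_nic: str) -> bool:
    """Decide whether an ingress packet carrying a locally learned source MAC
    indicates the MAC moved off the host, or is our own frame reflected back
    through the physical network on another NIC of the team."""
    if src_mac not in mac_table:
        return False
    _, egress_nic = mac_table[src_mac]
    if arrival_nic != egress_nic:
        return False   # likely a reflection on a different team NIC: keep the mapping
    return True        # arrived on the egress NIC itself: treat as a possible move

team = ["vmnic0", "vmnic1"]
uplink = on_egress("00:50:56:aa:bb:cc", "vport-7", team)
print(mac_has_moved("00:50:56:aa:bb:cc", uplink))          # True  (possible move)
print(mac_has_moved("00:50:56:aa:bb:cc", "vmnic_other"))   # False (mapping kept)
```
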
  • Publication number: 20180359215
    Abstract: Example methods are provided for a host to perform Media Access Control (MAC) address learning in a virtualized computing environment. The host includes multiple physical network interface controllers (NICs) configured as a team. The method may comprise: in response to detecting an egress packet that includes a source MAC address from a virtualized computing instance, learning address mapping information that associates the source MAC address with a virtual port; and sending the egress packet to a physical network via a first physical NIC selected from the team based on a NIC teaming policy. The method may also comprise: in response to detecting an ingress packet that also includes the source MAC address, determining whether the source MAC address has moved based on whether the ingress packet is received via the first physical NIC, or a second physical NIC from the team, but otherwise, maintaining the address mapping information.
    Type: Application
    Filed: June 7, 2017
    Publication date: December 13, 2018
    Applicant: Nicira, Inc.
    Inventors: Shrikrishna KHARE, Boon ANG, Guolin YANG, Subin Cyriac MATHEW
  • Publication number: 20180352474
    Abstract: Example methods are provided for a host to perform large receive offload (LRO) processing in a virtualized computing environment. The method may comprise receiving, via a physical network interface controller (NIC), incoming packets that are destined for the virtualized computing instance, and processing the incoming packets to generate at least one processed packet using a networking service pipeline that includes a packet aggregation service and multiple networking services. The packet aggregation service may be configured to aggregate the incoming packets into an aggregated packet and enabled at a service point along the networking service pipeline based on an LRO capability of at least one of the multiple networking services to process the aggregated packet. The method may also comprise forwarding the at least one processed packet generated by the networking service pipeline to the virtualized computing instance.
    Type: Application
    Filed: May 31, 2017
    Publication date: December 6, 2018
    Applicant: Nicira, Inc.
    Inventors: Rishi MEHTA, Boon ANG, Guolin YANG, Wenyi JIANG, Jayant JAIN
  • Publication number: 20180152321
    Abstract: Certain embodiments described herein are generally directed to a hypervisor-wide data structure that holds service rule address information for multiple VIFs in a compact way, which can later be processed per-VIF, in order to perform VIF-specific address group updates. For example, certain embodiments described herein provide a network controller that maintains a global hash table for multiple VIFs that maps network addresses to groups of one or more service rules. In certain embodiments, a network address to service rules table for each VIF may be derived based on the global hash table by using set intersections.
    Type: Application
    Filed: November 29, 2016
    Publication date: May 31, 2018
    Inventors: Soner SEVINC, Anupam CHANDA, Pankaj THAKKAR, Boon ANG
  • Patent number: 8909872
    Abstract: A computer system is provided including a central processing unit having an internal cache, a memory controller is coupled to the central processing unit, and a closely coupled peripheral is coupled to the central processing unit. A coherent interconnection may exist between the internal cache and both the memory controller and the closely coupled peripheral, wherein the coherent interconnection is a bus.
    Type: Grant
    Filed: October 31, 2006
    Date of Patent: December 9, 2014
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Michael S. Schlansker, Boon Ang, Erwin Oertli
  • Patent number: 7993497
    Abstract: In a magnetic disk having at least a glass substrate, a plurality of underlayers formed over the glass substrate, and a magnetic layer formed over the plurality of underlayers, at least one of the underlayers is an amorphous underlayer containing a VIa group element and carbon and, given that the remanent magnetization in a circumferential direction of the disk is Mrc and the remanent magnetization in a radial direction of the disk is Mrr, the magnetic disk has a magnetic anisotropy in which Mrc/Mrr being a ratio between Mrc and Mrr exceeds 1.
    Type: Grant
    Filed: November 21, 2006
    Date of Patent: August 9, 2011
    Assignee: WD Media (Singapore) Pte. Ltd.
    Inventors: Keiji Moroishi, Chor Boon Ang
  • Patent number: 7788437
    Abstract: A computer system is provided including a computer having a bus coupled to a computer system memory with a user buffer allocated therein. A network interface controller is coupled between the bus and a network. A retransmit buffer is coupled to the computer system memory, a transmit/receive buffer coupled to the computer system memory, and a retransmit direct memory access is within the network interface controller for moving data between the user buffer and the transmit/receive buffer, the retransmit buffer, or both as well as for moving the data to the network.
    Type: Grant
    Filed: October 27, 2006
    Date of Patent: August 31, 2010
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Michael S. Schlansker, Boon Ang, Erwin Oertli
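
The data movement in this abstract, a retransmit DMA engine that copies data from the user buffer into a retransmit buffer so later retransmissions never revisit the user buffer, can be modeled in a few lines of Python. The class and method names are invented for illustration.

```python
from typing import Dict, List

class RetransmitNic:
    """Toy model of the send path described above: the retransmit DMA moves data
    from the user buffer into NIC-side buffers so it can be resent without
    re-reading the user buffer."""

    def __init__(self) -> None:
        self.retransmit_buffer: Dict[int, bytes] = {}   # seq -> payload awaiting ack
        self.wire: List[bytes] = []                     # stands in for the network

    def send(self, seq: int, user_buffer: bytes) -> None:
        # "Retransmit DMA": copy from the user buffer into the retransmit buffer
        # and onto the network in one pass.
        self.retransmit_buffer[seq] = bytes(user_buffer)
        self.wire.append(self.retransmit_buffer[seq])

    def on_timeout(self, seq: int) -> None:
        # Resend straight from the retransmit buffer; the user buffer is untouched.
        self.wire.append(self.retransmit_buffer[seq])

    def on_ack(self, seq: int) -> None:
        self.retransmit_buffer.pop(seq, None)

nic = RetransmitNic()
nic.send(1, b"payload")
nic.on_timeout(1)   # retransmitted without touching the user buffer
nic.on_ack(1)
```
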
  • Publication number: 20080162663
    Abstract: A computer system is provided including a computer having a bus coupled to a computer system memory with a user buffer allocated therein. A network interface controller is coupled between the bus and a network. A retransmit buffer is coupled to the computer system memory, a transmit/receive buffer coupled to the computer system memory, and a retransmit direct memory access is within the network interface controller for moving data between the user buffer and the transmit/receive buffer, the retransmit buffer, or both as well as for moving the data to the network.
    Type: Application
    Filed: October 27, 2006
    Publication date: July 3, 2008
    Applicant: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
    Inventors: Michael S. Schlansker, Boon Ang, Erwin Oertli