Patents by Inventor Nithin B. Raju
Nithin B. Raju has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11336486
Abstract: Some embodiments provide a method for a set of central controllers that manages forwarding elements operating in a plurality of datacenters. The method receives a configuration for a bridge between (i) a logical L2 network that spans at least two datacenters and (ii) a physical L2 network. The configuration specifies a particular one of the datacenters for implementation of the bridge. The method identifies multiple managed forwarding elements that implement the logical L2 network and are operating in the particular datacenter. The method selects one of the identified managed forwarding elements to implement the bridge. The method distributes bridge configuration data to the selected managed forwarding element.
Type: Grant
Filed: November 4, 2019
Date of Patent: May 17, 2022
Assignee: NICIRA, INC.
Inventors: Ankur Kumar Sharma, Xiaohu Wang, Hongwei Zhu, Ganesan Chandrashekhar, Vivek Agarwal, Nithin B. Raju
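The abstract above describes a controller-side placement step: filter the managed forwarding elements (MFEs) down to those that implement the logical L2 network in the specified datacenter, pick one, and push the bridge configuration to it. A minimal Python sketch of that selection flow, assuming hypothetical names such as ManagedForwardingElement, BridgeConfig, and CentralControllerSet (none of which come from the patent itself), might look like this:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ManagedForwardingElement:
    mfe_id: str
    datacenter: str
    logical_networks: List[str]  # logical L2 networks this MFE implements

@dataclass
class BridgeConfig:
    logical_l2_network: str   # spans two or more datacenters
    physical_l2_network: str  # e.g. a VLAN on the physical fabric
    datacenter: str           # datacenter chosen to host the bridge

class CentralControllerSet:
    """Illustrative controller that places an L2 bridge on one MFE."""

    def __init__(self, mfes: List[ManagedForwardingElement]):
        self.mfes = mfes

    def configure_bridge(self, config: BridgeConfig) -> ManagedForwardingElement:
        # 1. Identify MFEs that implement the logical L2 network and that
        #    operate in the datacenter named by the bridge configuration.
        candidates = [
            mfe for mfe in self.mfes
            if config.logical_l2_network in mfe.logical_networks
            and mfe.datacenter == config.datacenter
        ]
        if not candidates:
            raise LookupError("no eligible MFE in the specified datacenter")

        # 2. Select one candidate (a simple deterministic choice here; the
        #    patent does not prescribe this particular policy).
        selected = min(candidates, key=lambda mfe: mfe.mfe_id)

        # 3. Distribute the bridge configuration data to the selected MFE.
        self._push_bridge_config(selected, config)
        return selected

    def _push_bridge_config(self, mfe, config):
        # Placeholder for the controller-to-MFE configuration channel.
        print(f"push bridge {config.logical_l2_network}<->"
              f"{config.physical_l2_network} to {mfe.mfe_id}")
```

This is only a sketch of the filter-select-distribute pattern the abstract outlines, not the actual controller implementation.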
-
Publication number: 20200067732
Abstract: Some embodiments provide a method for a set of central controllers that manages forwarding elements operating in a plurality of datacenters. The method receives a configuration for a bridge between (i) a logical L2 network that spans at least two datacenters and (ii) a physical L2 network. The configuration specifies a particular one of the datacenters for implementation of the bridge. The method identifies multiple managed forwarding elements that implement the logical L2 network and are operating in the particular datacenter. The method selects one of the identified managed forwarding elements to implement the bridge. The method distributes bridge configuration data to the selected managed forwarding element.
Type: Application
Filed: November 4, 2019
Publication date: February 27, 2020
Inventors: Ankur Kumar Sharma, Xiaohu Wang, Hongwei Zhu, Ganesan Chandrashekhar, Vivek Agarwal, Nithin B. Raju
-
Patent number: 10511459
Abstract: Some embodiments provide a method for a set of central controllers that manages forwarding elements operating in a plurality of datacenters. The method receives a configuration for a bridge between (i) a logical L2 network that spans at least two datacenters and (ii) a physical L2 network. The configuration specifies a particular one of the datacenters for implementation of the bridge. The method identifies multiple managed forwarding elements that implement the logical L2 network and are operating in the particular datacenter. The method selects one of the identified managed forwarding elements to implement the bridge. The method distributes bridge configuration data to the selected managed forwarding element.
Type: Grant
Filed: November 14, 2017
Date of Patent: December 17, 2019
Assignee: NICIRA, INC.
Inventors: Ankur Kumar Sharma, Xiaohu Wang, Hongwei Zhu, Ganesan Chandrashekhar, Vivek Agarwal, Nithin B. Raju
-
Patent number: 10412015
Abstract: The congestion notification system of some embodiments sends congestion notification messages from lower layer (e.g., closer to a network) components to higher layer (e.g., closer to a packet sender) components. When the higher layer components receive the congestion notification messages, the higher layer components reduce the sending rate of packets (in some cases the rate is reduced to zero) to allow the lower layer components to lower congestion (i.e., create more space in their queues by sending more data packets along the series of components). In some embodiments, the higher layer components resume full speed sending of packets after a threshold time elapses without further notification of congestion. In other embodiments, the higher layer components resume full speed sending of packets after receiving a message indicating reduced congestion in the lower layers.
Type: Grant
Filed: January 31, 2017
Date of Patent: September 10, 2019
Assignee: VMware, Inc.
Inventors: Santhosh Sundararaman, Nithin B. Raju, Akshay K. Sreeramoju, Ricardo Koller
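The abstract above describes a backpressure scheme: a lower-layer component signals congestion, the higher-layer sender throttles (possibly to zero), and full-rate sending resumes either after a quiet period or after an explicit "congestion cleared" message. A minimal Python sketch of that state machine, assuming hypothetical names such as HigherLayerSender and an arbitrary resume threshold (neither is specified by the patent), might look like this:

```python
import time

class HigherLayerSender:
    """Illustrative packet source that throttles on congestion signals."""

    FULL_RATE = 1.0        # normalized sending rate
    REDUCED_RATE = 0.0     # the abstract allows reducing the rate to zero
    RESUME_AFTER_S = 0.5   # quiet-period threshold (arbitrary value here)

    def __init__(self):
        self.rate = self.FULL_RATE
        self.last_congestion = None

    def on_congestion_notification(self):
        # A lower-layer component (closer to the network) reported that its
        # queue is filling; back off so it can drain.
        self.rate = self.REDUCED_RATE
        self.last_congestion = time.monotonic()

    def on_congestion_cleared(self):
        # Variant from the abstract: an explicit message indicating reduced
        # congestion in the lower layers restores full-speed sending.
        self.rate = self.FULL_RATE

    def current_rate(self):
        # Other variant from the abstract: resume full speed after a
        # threshold time elapses without further congestion notifications.
        if (self.last_congestion is not None
                and time.monotonic() - self.last_congestion > self.RESUME_AFTER_S):
            self.rate = self.FULL_RATE
        return self.rate
```

Again, this is a sketch of the described behavior under assumed names, not the claimed implementation.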
-
Publication number: 20190149358
Abstract: Some embodiments provide a method for a set of central controllers that manages forwarding elements operating in a plurality of datacenters. The method receives a configuration for a bridge between (i) a logical L2 network that spans at least two datacenters and (ii) a physical L2 network. The configuration specifies a particular one of the datacenters for implementation of the bridge. The method identifies multiple managed forwarding elements that implement the logical L2 network and are operating in the particular datacenter. The method selects one of the identified managed forwarding elements to implement the bridge. The method distributes bridge configuration data to the selected managed forwarding element.
Type: Application
Filed: November 14, 2017
Publication date: May 16, 2019
Inventors: Ankur Kumar Sharma, Xiaohu Wang, Hongwei Zhu, Ganesan Chandrashekhar, Vivek Agarwal, Nithin B. Raju
-
Patent number: 10091125
Abstract: Multiple TCP/IP stack processors on a host. The multiple TCP/IP stack processors are provided independently of TCP/IP stack processors implemented by virtual machines on the host. The TCP/IP stack processors provide multiple different default gateway addresses for use with multiple processes. The default gateway addresses allow a service to communicate across an L3 network. Processes outside of virtual machines that utilize the TCP/IP stack processor on a first host can benefit from using their own gateway, and communicate with their peer process on a second host, regardless of whether the second host is located within the same subnet or a different subnet. The multiple TCP/IP stack processors can use separately allocated resources. Separate TCP/IP stack processors can be provided for each of multiple tenants on the host. Separate loopback interfaces of multiple TCP/IP stack processors can be used to create separate containment for separate sets of processes on a host.
Type: Grant
Filed: March 31, 2014
Date of Patent: October 2, 2018
Assignee: NICIRA, INC.
Inventors: Nithin B. Raju, Ganesan Chandrashekhar, Gopakumar Pillai
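The abstract above centers on a host hosting several independent TCP/IP stack instances, each with its own default gateway (and optionally its own loopback and resources), with non-VM host processes bound to a particular stack. A minimal Python sketch of that per-stack bookkeeping, assuming hypothetical names such as TcpIpStack and HostStackManager and made-up gateway addresses (none of which come from the patent), might look like this:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TcpIpStack:
    """One host-level TCP/IP stack instance (distinct from any VM's stack)."""
    name: str
    default_gateway: str          # each stack can use a different gateway
    loopback: str = "127.0.0.1"   # separate loopback gives separate containment
    processes: List[str] = field(default_factory=list)

class HostStackManager:
    """Illustrative per-host registry of independent TCP/IP stacks."""

    def __init__(self):
        self.stacks: Dict[str, TcpIpStack] = {}

    def create_stack(self, name: str, default_gateway: str) -> TcpIpStack:
        # e.g. one stack per service class or per tenant, each with its own
        # gateway so its traffic can cross L3 subnet boundaries.
        stack = TcpIpStack(name=name, default_gateway=default_gateway)
        self.stacks[name] = stack
        return stack

    def attach_process(self, stack_name: str, process: str):
        # A non-VM host process is bound to one stack and therefore uses
        # that stack's default gateway for traffic to peers on other hosts.
        self.stacks[stack_name].processes.append(process)

# Example use: two host services on different subnets, each with its own gateway.
mgr = HostStackManager()
mgr.create_stack("vmotion", default_gateway="10.0.1.1")
mgr.create_stack("storage", default_gateway="10.0.2.1")
mgr.attach_process("vmotion", "migration-agent")
mgr.attach_process("storage", "nfs-client")
```

The sketch only models the association between stacks, gateways, and processes that the abstract describes; it does not implement actual packet processing.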
-
Patent number: 9940180
Abstract: Multiple TCP/IP stack processors on a host. The multiple TCP/IP stack processors are provided independently of TCP/IP stack processors implemented by virtual machines on the host. The TCP/IP stack processors provide multiple different default gateway addresses for use with multiple processes. The default gateway addresses allow a service to communicate across an L3 network. Processes outside of virtual machines that utilize the TCP/IP stack processor on a first host can benefit from using their own gateway, and communicate with their peer process on a second host, regardless of whether the second host is located within the same subnet or a different subnet. The multiple TCP/IP stack processors can use separately allocated resources. Separate TCP/IP stack processors can be provided for each of multiple tenants on the host. Separate loopback interfaces of multiple TCP/IP stack processors can be used to create separate containment for separate sets of processes on a host.
Type: Grant
Filed: March 31, 2014
Date of Patent: April 10, 2018
Assignee: NICIRA, INC.
Inventors: Nithin B. Raju, Ganesan Chandrashekhar
-
Patent number: 9832112
Abstract: Multiple TCP/IP stack processors on a host. The multiple TCP/IP stack processors are provided independently of TCP/IP stack processors implemented by virtual machines on the host. The TCP/IP stack processors provide multiple different default gateway addresses for use with multiple processes. The default gateway addresses allow a service to communicate across an L3 network. Processes outside of virtual machines that utilize the TCP/IP stack processor on a first host can benefit from using their own gateway, and communicate with their peer process on a second host, regardless of whether the second host is located within the same subnet or a different subnet. The multiple TCP/IP stack processors can use separately allocated resources. Separate TCP/IP stack processors can be provided for each of multiple tenants on the host. Separate loopback interfaces of multiple TCP/IP stack processors can be used to create separate containment for separate sets of processes on a host.
Type: Grant
Filed: March 31, 2014
Date of Patent: November 28, 2017
Assignee: NICIRA, INC.
Inventors: Nithin B. Raju, Ganesan Chandrashekhar, Frank Pan, Tihomir Varbanov, Tony Ganchev
-
Patent number: 9729679
Abstract: Multiple TCP/IP stack processors on a host. The multiple TCP/IP stack processors are provided independently of TCP/IP stack processors implemented by virtual machines on the host. The TCP/IP stack processors provide multiple different default gateway addresses for use with multiple processes. The default gateway addresses allow a service to communicate across an L3 network. Processes outside of virtual machines that utilize the TCP/IP stack processor on a first host can benefit from using their own gateway, and communicate with their peer process on a second host, regardless of whether the second host is located within the same subnet or a different subnet. The multiple TCP/IP stack processors can use separately allocated resources. Separate TCP/IP stack processors can be provided for each of multiple tenants on the host. Separate loopback interfaces of multiple TCP/IP stack processors can be used to create separate containment for separate sets of processes on a host.
Type: Grant
Filed: March 31, 2014
Date of Patent: August 8, 2017
Assignee: NICIRA, INC.
Inventors: Nithin B. Raju, Ganesan Chandrashekhar
-
Publication number: 20170142020
Abstract: The congestion notification system of some embodiments sends congestion notification messages from lower layer (e.g., closer to a network) components to higher layer (e.g., closer to a packet sender) components. When the higher layer components receive the congestion notification messages, the higher layer components reduce the sending rate of packets (in some cases the rate is reduced to zero) to allow the lower layer components to lower congestion (i.e., create more space in their queues by sending more data packets along the series of components). In some embodiments, the higher layer components resume full speed sending of packets after a threshold time elapses without further notification of congestion. In other embodiments, the higher layer components resume full speed sending of packets after receiving a message indicating reduced congestion in the lower layers.
Type: Application
Filed: January 31, 2017
Publication date: May 18, 2017
Inventors: Santhosh Sundararaman, Nithin B. Raju, Akshay K. Sreeramoju, Ricardo Koller
-
Patent number: 9621471
Abstract: The congestion notification system of some embodiments sends congestion notification messages from lower layer (e.g., closer to a network) components to higher layer (e.g., closer to a packet sender) components. When the higher layer components receive the congestion notification messages, the higher layer components reduce the sending rate of packets (in some cases the rate is reduced to zero) to allow the lower layer components to lower congestion (i.e., create more space in their queues by sending more data packets along the series of components). In some embodiments, the higher layer components resume full speed sending of packets after a threshold time elapses without further notification of congestion. In other embodiments, the higher layer components resume full speed sending of packets after receiving a message indicating reduced congestion in the lower layers.
Type: Grant
Filed: June 30, 2014
Date of Patent: April 11, 2017
Assignee: VMware, Inc.
Inventors: Santhosh Sundararaman, Nithin B. Raju, Akshay K. Sreeramoju, Ricardo Koller
-
Publication number: 20150381505
Abstract: The congestion notification system of some embodiments sends congestion notification messages from lower layer (e.g., closer to a network) components to higher layer (e.g., closer to a packet sender) components. When the higher layer components receive the congestion notification messages, the higher layer components reduce the sending rate of packets (in some cases the rate is reduced to zero) to allow the lower layer components to lower congestion (i.e., create more space in their queues by sending more data packets along the series of components). In some embodiments, the higher layer components resume full speed sending of packets after a threshold time elapses without further notification of congestion. In other embodiments, the higher layer components resume full speed sending of packets after receiving a message indicating reduced congestion in the lower layers.
Type: Application
Filed: June 30, 2014
Publication date: December 31, 2015
Inventors: Santhosh Sundararaman, Nithin B. Raju, Akshay K. Sreeramoju, Ricardo Koller
-
Publication number: 20150281112
Abstract: Multiple TCP/IP stack processors on a host. The multiple TCP/IP stack processors are provided independently of TCP/IP stack processors implemented by virtual machines on the host. The TCP/IP stack processors provide multiple different default gateway addresses for use with multiple processes. The default gateway addresses allow a service to communicate across an L3 network. Processes outside of virtual machines that utilize the TCP/IP stack processor on a first host can benefit from using their own gateway, and communicate with their peer process on a second host, regardless of whether the second host is located within the same subnet or a different subnet. The multiple TCP/IP stack processors can use separately allocated resources. Separate TCP/IP stack processors can be provided for each of multiple tenants on the host. Separate loopback interfaces of multiple TCP/IP stack processors can be used to create separate containment for separate sets of processes on a host.
Type: Application
Filed: March 31, 2014
Publication date: October 1, 2015
Inventors: Nithin B. Raju, Ganesan Chandrashekhar, Gopakumar Pillai
-
Publication number: 20150281407
Abstract: Multiple TCP/IP stack processors on a host. The multiple TCP/IP stack processors are provided independently of TCP/IP stack processors implemented by virtual machines on the host. The TCP/IP stack processors provide multiple different default gateway addresses for use with multiple processes. The default gateway addresses allow a service to communicate across an L3 network. Processes outside of virtual machines that utilize the TCP/IP stack processor on a first host can benefit from using their own gateway, and communicate with their peer process on a second host, regardless of whether the second host is located within the same subnet or a different subnet. The multiple TCP/IP stack processors can use separately allocated resources. Separate TCP/IP stack processors can be provided for each of multiple tenants on the host. Separate loopback interfaces of multiple TCP/IP stack processors can be used to create separate containment for separate sets of processes on a host.
Type: Application
Filed: March 31, 2014
Publication date: October 1, 2015
Inventors: Nithin B. Raju, Ganesan Chandrashekhar
-
Publication number: 20150281047
Abstract: Multiple TCP/IP stack processors on a host. The multiple TCP/IP stack processors are provided independently of TCP/IP stack processors implemented by virtual machines on the host. The TCP/IP stack processors provide multiple different default gateway addresses for use with multiple processes. The default gateway addresses allow a service to communicate across an L3 network. Processes outside of virtual machines that utilize the TCP/IP stack processor on a first host can benefit from using their own gateway, and communicate with their peer process on a second host, regardless of whether the second host is located within the same subnet or a different subnet. The multiple TCP/IP stack processors can use separately allocated resources. Separate TCP/IP stack processors can be provided for each of multiple tenants on the host. Separate loopback interfaces of multiple TCP/IP stack processors can be used to create separate containment for separate sets of processes on a host.
Type: Application
Filed: March 31, 2014
Publication date: October 1, 2015
Inventors: Nithin B. Raju, Ganesan Chandrashekhar, Frank Pan, Tihomir Varbanov, Tony Ganchev
-
Publication number: 20150277995
Abstract: Multiple TCP/IP stack processors on a host. The multiple TCP/IP stack processors are provided independently of TCP/IP stack processors implemented by virtual machines on the host. The TCP/IP stack processors provide multiple different default gateway addresses for use with multiple processes. The default gateway addresses allow a service to communicate across an L3 network. Processes outside of virtual machines that utilize the TCP/IP stack processor on a first host can benefit from using their own gateway, and communicate with their peer process on a second host, regardless of whether the second host is located within the same subnet or a different subnet. The multiple TCP/IP stack processors can use separately allocated resources. Separate TCP/IP stack processors can be provided for each of multiple tenants on the host. Separate loopback interfaces of multiple TCP/IP stack processors can be used to create separate containment for separate sets of processes on a host.
Type: Application
Filed: March 31, 2014
Publication date: October 1, 2015
Inventors: Nithin B. Raju, Ganesan Chandrashekhar