Patents by Inventor Shekhar Agarwal

Shekhar Agarwal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230140555
    Abstract: The present disclosure relates to systems, methods, and computer-readable media for facilitating the transparent insertion of network virtual appliances into a cloud computing system. For example, a transparent network virtual appliance system can dynamically, seamlessly, and quickly add one or more network virtual appliances utilizing a chained gateway load balancer. In particular, the transparent network virtual appliance system can provide additional services to an application virtual network within a cloud computing system without disrupting or modifying the existing architecture of the cloud computing system.
    Type: Application
    Filed: February 22, 2022
    Publication date: May 4, 2023
    Inventors: Geoffrey Hugh OUTHRED, Anavi Arun NAHAR, Shuo DONG, Xun FAN, Matthew Heeuk YANG, Plaban MOHANTY, Jinzhou JIANG, Yifeng HUANG, Nicole Antonette KISTER, Shekhar AGARWAL, Yanan SUN, Caleb Lee-Yen WYLLIE
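
The abstract above (publication 20230140555) describes steering an application's traffic through network virtual appliances via a chained gateway load balancer without modifying the application virtual network. The sketch below is only an illustration of that general idea, not the patented design; the class names, the 5-tuple hash, and the health-tracking fields are all assumptions.

```python
# Illustrative sketch only: a toy "gateway load balancer" that hashes each
# flow onto a pool of network virtual appliances (NVAs) inserted in front of
# an application. All class and field names here are hypothetical.
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str

class GatewayLoadBalancer:
    """Chains a pool of NVAs in front of an application endpoint."""

    def __init__(self, appliances):
        self.appliances = list(appliances)   # NVA instance identifiers
        self.healthy = set(self.appliances)

    def mark_unhealthy(self, appliance):
        self.healthy.discard(appliance)

    def pick_appliance(self, flow: Flow):
        """Consistently map a flow to a healthy NVA via a 5-tuple hash."""
        candidates = [a for a in self.appliances if a in self.healthy]
        if not candidates:
            return None  # no healthy NVA: bypass the appliance chain
        key = f"{flow.src_ip}:{flow.src_port}-{flow.dst_ip}:{flow.dst_port}-{flow.protocol}"
        digest = hashlib.sha256(key.encode()).hexdigest()
        return candidates[int(digest, 16) % len(candidates)]

if __name__ == "__main__":
    glb = GatewayLoadBalancer(["nva-0", "nva-1", "nva-2"])
    flow = Flow("10.0.0.4", "52.1.2.3", 50123, 443, "tcp")
    print("flow steered through:", glb.pick_appliance(flow))
```
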
  • Patent number: 11258876
    Abstract: The techniques disclosed herein improve the efficiency, reliability, and scalability of flow processing systems by providing a multi-tier flow cache structure that can reduce the size of a flow table and also reduce replicated flow sets. In some configurations, a system can partition a flow space across workers and replicate the flows within a partition to a set of workers. In some configurations, a flow cache structure can include three tiers: (1) a scalable flow processing layer for executing the actions and transformations of a flow, (2) a flow state management layer for managing distributed flow state decisions, and (3) a flow decider layer for identifying the actions and transformations that need to be executed on each packet of a flow. Flow replication allows other workers to pick up flows allocated to a particular worker that is taken offline in the event of a crash or update.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: February 22, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Selim Ciraci, Shekhar Agarwal, Geoffrey Outhred
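
The multi-tier flow cache described in patent 11258876 above partitions the flow space across workers and replicates each partition so another worker can take over when one goes offline. The snippet below is a minimal sketch of that partition-and-replicate idea under assumed parameters (the partition count, replication factor, and worker names are all invented); it does not reproduce the three-tier structure itself.

```python
# A minimal sketch, not the patented implementation: flows are hashed into
# partitions, each partition is replicated to a small set of workers, and a
# surviving replica takes over a crashed worker's flows.
import hashlib

NUM_PARTITIONS = 8
REPLICATION_FACTOR = 2
WORKERS = ["worker-0", "worker-1", "worker-2", "worker-3"]

def partition_of(five_tuple: str) -> int:
    """Split the flow space: map a flow key to a partition."""
    digest = hashlib.md5(five_tuple.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

def replica_set(partition: int) -> list:
    """Replicate each partition onto REPLICATION_FACTOR consecutive workers."""
    start = partition % len(WORKERS)
    return [WORKERS[(start + i) % len(WORKERS)] for i in range(REPLICATION_FACTOR)]

def owner(partition: int, offline: set):
    """The first healthy replica processes the flow; the others stand by."""
    for worker in replica_set(partition):
        if worker not in offline:
            return worker
    return None

if __name__ == "__main__":
    p = partition_of("10.0.0.4:50123->52.1.2.3:443/tcp")
    print("partition:", p, "replicas:", replica_set(p))
    print("owner after a worker crash:", owner(p, offline={replica_set(p)[0]}))
```
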
  • Publication number: 20210329087
    Abstract: The techniques disclosed herein improve the efficiency, reliability, and scalability of flow processing systems by providing a multi-tier flow cache structure that can reduce the size of a flow table and also reduce replicated flow sets. In some configurations, a system can partition a flow space across workers and replicate the flows within a partition to a set of workers. In some configurations, a flow cache structure can include three tiers: (1) a scalable flow processing layer for executing the actions and transformations of a flow, (2) a flow state management layer for managing distributed flow state decisions, and (3) a flow decider layer for identifying the actions and transformations that need to be executed on each packet of a flow. Flow replication allows other workers to pick up flows allocated to a particular worker that is taken offline in the event of a crash or update.
    Type: Application
    Filed: August 6, 2020
    Publication date: October 21, 2021
    Inventors: Selim CIRACI, Shekhar AGARWAL, Geoffrey OUTHRED
  • Patent number: 10992582
    Abstract: A load balancer capable of instantiating a data plane within the load balancer, deleting the data plane from the load balancer, and/or enacting a change to the data plane. The load balancer instantiates a data plane for an identified tenant. The instantiated data plane is placed in a data path of network data transmitted from one or more sources to a plurality of tenant addresses that each corresponds to a different tenant or group of tenants. The instantiated data plane is also dedicated to the identified tenant such that the data plane isolates first network data destined to a first tenant address that corresponds to the identified tenant from second network data destined to one or more other tenant addresses. The load balancer also deletes the instantiated data plane from the load balancer, or enacts a change to the instantiated data plane.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: April 27, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Deepak Bansal, Geoffrey Hugh Outhred, Narasimhan Agrahara Venkataramaiah, Shekhar Agarwal
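
Patent 10992582 above covers instantiating, changing, and deleting a data plane that is dedicated to a single tenant inside a load balancer. The following is a hedged, in-memory sketch of that lifecycle with invented names and a trivial round-robin backend choice; the real system operates on network infrastructure, not Python objects.

```python
# Hypothetical illustration: a load balancer object that can instantiate,
# change, and delete a per-tenant data plane, so traffic to one tenant's
# address is isolated from traffic to other tenants' addresses.
class DataPlane:
    def __init__(self, tenant_address):
        self.tenant_address = tenant_address
        self.backends = []
        self._next = 0

    def set_backends(self, backends):
        self.backends = list(backends)
        self._next = 0

    def route(self, packet_dst):
        # Isolation: this plane only handles its own tenant's address.
        if packet_dst != self.tenant_address or not self.backends:
            return None
        backend = self.backends[self._next % len(self.backends)]
        self._next += 1
        return backend

class LoadBalancer:
    def __init__(self):
        self._planes = {}

    def instantiate_data_plane(self, tenant, tenant_address):
        self._planes[tenant] = DataPlane(tenant_address)
        return self._planes[tenant]

    def change_data_plane(self, tenant, backends):
        self._planes[tenant].set_backends(backends)

    def delete_data_plane(self, tenant):
        self._planes.pop(tenant, None)

if __name__ == "__main__":
    lb = LoadBalancer()
    plane = lb.instantiate_data_plane("contoso", "52.1.2.3")
    lb.change_data_plane("contoso", ["10.0.0.4", "10.0.0.5"])
    print(plane.route("52.1.2.3"))   # handled by contoso's dedicated plane
    print(plane.route("52.9.9.9"))   # None: other tenants' traffic is not handled here
    lb.delete_data_plane("contoso")
```
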
  • Patent number: 10911406
    Abstract: Techniques for allowing access to shared cloud resource using private network addresses are disclosed herein. In one embodiment, a connection packet representing a connection request to a shared cloud resource in the cloud computing system can be intercepted. In response, the connection packet can be encapsulated with data representing one or more of a VNET ID, a VNET source address, or a VNET destination address of a virtual network from which the connection packet is received. The encapsulated connection packet can then be forwarded to the shared cloud resource while retaining the data representing one or more of the VNET ID, the VNET source address, or the VNET destination address for access control at the shared cloud resource.
    Type: Grant
    Filed: April 30, 2018
    Date of Patent: February 2, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Rishabh Tewari, Deepak Bansal, Longzhang Fu, Harish Kumar Chandrappa, Tomas Talius, Dhruv Malik, Anitha Adusumilli, Parag Sharma, Nimish Aggarwal, Shekhar Agarwal, Joemmanuel Ponce Galindo
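
Patent 10911406 above describes intercepting a connection packet bound for a shared cloud resource and encapsulating it with VNET identity (VNET ID, VNET source and destination addresses) so that the resource can apply access control. Below is a rough sketch of that flow using made-up dataclasses and a simple allow-list check; the actual encapsulation format and access-control policy are not taken from the patent.

```python
# A rough sketch under assumed names: a connection request is wrapped with
# VNET metadata, and the shared resource checks that retained metadata
# against an allow-list before accepting the connection.
from dataclasses import dataclass

@dataclass
class ConnectionPacket:
    payload: bytes

@dataclass
class EncapsulatedPacket:
    vnet_id: str
    vnet_src: str
    vnet_dst: str
    inner: ConnectionPacket

def encapsulate(packet, vnet_id, vnet_src, vnet_dst):
    """Intercept the connection packet and retain VNET identity on the wire."""
    return EncapsulatedPacket(vnet_id, vnet_src, vnet_dst, packet)

def shared_resource_accepts(packet, allowed_vnets):
    """Access control at the shared resource, driven by the retained VNET ID."""
    return packet.vnet_id in allowed_vnets

if __name__ == "__main__":
    conn = ConnectionPacket(b"SYN")
    wrapped = encapsulate(conn, vnet_id="vnet-42", vnet_src="10.0.0.4", vnet_dst="10.0.0.10")
    print(shared_resource_accepts(wrapped, allowed_vnets={"vnet-42"}))  # True
```
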
  • Publication number: 20200053008
    Abstract: A load balancer capable of instantiating a data plane within the load balancer, deleting the data plane from the load balancer, and/or enacting a change to the data plane. The load balancer instantiates a data plane for an identified tenant. The instantiated data plane is placed in a data path of network data transmitted from one or more sources to a plurality of tenant addresses that each corresponds to a different tenant or group of tenants. The instantiated data plane is also dedicated to the identified tenant such that the data plane isolates first network data destined to a first tenant address that corresponds to the identified tenant from second network data destined to one or more other tenant addresses. The load balancer also deletes the instantiated data plane from the load balancer, or enacts a change to the instantiated data plane.
    Type: Application
    Filed: October 8, 2019
    Publication date: February 13, 2020
    Inventors: Deepak BANSAL, Geoffrey Hugh OUTHRED, Narasimhan Agrahara VENKATARAMAIAH, Shekhar AGARWAL
  • Publication number: 20190334868
    Abstract: Techniques for allowing access to shared cloud resource using private network addresses are disclosed herein. In one embodiment, a connection packet representing a connection request to a shared cloud resource in the cloud computing system can be intercepted. In response, the connection packet can be encapsulated with data representing one or more of a VNET ID, a VNET source address, or a VNET destination address of a virtual network from which the connection packet is received. The encapsulated connection packet can then be forwarded to the shared cloud resource while retaining the data representing one or more of the VNET ID, the VNET source address, or the VNET destination address for access control at the shared cloud resource.
    Type: Application
    Filed: April 30, 2018
    Publication date: October 31, 2019
    Inventors: Rishabh Tewari, Deepak Bansal, Longzhang Fu, Harish Kumar Chandrappa, Tomas Talius, Dhruv Malik, Anitha Adusumilli, Parag Sharma, Nimish Aggarwal, Shekhar Agarwal, Joemmanuel Ponce Galindo
  • Patent number: 10447602
    Abstract: A load balancer capable of adjusting how network data is distributed to a tenant or group of tenants by manipulating the data plane. The load balancer is placed directly in the flow path of network data that is destined for a tenant or group of tenants having a tenant address. The load balancer includes a control plane and one or more data planes. Each data plane may contain one or more network traffic multiplexors. Further, each data plane may be dedicated to a tenant or group of tenants. Data planes may be added or deleted from the load balancer; additionally, multiplexors may be added or deleted from a data plane. Accordingly, network data directed towards one tenant is less likely to affect the performance of load balancing performed for another tenant.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: October 15, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Deepak Bansal, Geoffrey Hugh Outhred, Narasimhan Agrahara Venkataramaiah, Shekhar Agarwal
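
Patent 10447602 above (and the related publications listed below) describes a load balancer split into a control plane and per-tenant data planes, where each data plane can hold one or more traffic multiplexors that are added or removed without affecting other tenants. The sketch that follows is purely illustrative; the ControlPlane/DataPlane/Multiplexor names and the round-robin behavior are assumptions.

```python
# Assumed-name sketch of the control plane / data plane / multiplexor split:
# the control plane adds or deletes a tenant's data plane, and each data
# plane manages its own multiplexors independently of other tenants.
import itertools

class Multiplexor:
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def forward(self, packet):
        # Spread the tenant's packets across its backend instances.
        return next(self._cycle)

class DataPlane:
    def __init__(self):
        self.muxes = {}

    def add_mux(self, name, backends):
        self.muxes[name] = Multiplexor(backends)

    def remove_mux(self, name):
        self.muxes.pop(name, None)

class ControlPlane:
    def __init__(self):
        self.data_planes = {}

    def add_data_plane(self, tenant):
        self.data_planes[tenant] = DataPlane()
        return self.data_planes[tenant]

    def delete_data_plane(self, tenant):
        self.data_planes.pop(tenant, None)

if __name__ == "__main__":
    cp = ControlPlane()
    dp = cp.add_data_plane("fabrikam")
    dp.add_mux("mux-0", ["10.1.0.4", "10.1.0.5"])
    print(dp.muxes["mux-0"].forward(b"packet"))   # -> 10.1.0.4
```
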
  • Publication number: 20180375762
    Abstract: A system is provided and includes a processor and a non-transitory computer-readable medium configured to store instructions for execution by the processor. The instructions include: accessing a resource via a first machine in a cloud-based network, where the first machine is a virtual machine; converting at the first machine an IPv4 packet to an IPv6 packet; while converting the IPv4 packet, embedding metadata in the IPv6 packet, where the metadata includes information identifying the first machine or a virtual network of the first machine; and transmitting the IPv6 packet to a second machine to limit access to the resource based on the information identifying the first machine or the virtual network of the first machine. The second machine limits access to the resource based on the information identifying at least one of the first machine or the virtual network of the first machine.
    Type: Application
    Filed: June 21, 2017
    Publication date: December 27, 2018
    Inventors: Deepak BANSAL, Parag SHARMA, Nimish AGGARWAL, Longzhang FU, Harish Kumar CHANDRAPPA, Daniel FIRESTONE, Shekhar AGARWAL, Anitha ADUSUMILLI
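
Publication 20180375762 above describes converting IPv4 packets to IPv6 at the source virtual machine and embedding metadata that identifies the machine or its virtual network, which the receiving machine then uses to limit access. The sketch below assumes a simple, invented address layout (prefix | 32-bit VNET ID | 32-bit IPv4 address) just to show how such metadata could ride inside an IPv6 address; it is not the encoding defined in the application.

```python
# Illustrative only, with an assumed address layout: during IPv4-to-IPv6
# conversion, a VNET identifier is embedded alongside the original IPv4
# address, and the receiver extracts it to make an access-control decision.
import ipaddress

PREFIX = int(ipaddress.IPv6Address("fd00::"))  # hypothetical translation prefix

def to_ipv6_with_metadata(ipv4: str, vnet_id: int) -> ipaddress.IPv6Address:
    """Pack <prefix | vnet_id (32 bits) | ipv4 (32 bits)> into one IPv6 address."""
    packed = PREFIX | (vnet_id << 32) | int(ipaddress.IPv4Address(ipv4))
    return ipaddress.IPv6Address(packed)

def extract_metadata(ipv6: ipaddress.IPv6Address):
    """Recover the embedded VNET ID and original IPv4 source address."""
    value = int(ipv6)
    vnet_id = (value >> 32) & 0xFFFFFFFF
    ipv4 = str(ipaddress.IPv4Address(value & 0xFFFFFFFF))
    return vnet_id, ipv4

if __name__ == "__main__":
    addr = to_ipv6_with_metadata("10.0.0.4", vnet_id=42)
    print(addr)                      # fd00::2a:a00:4
    print(extract_metadata(addr))    # (42, '10.0.0.4')
```
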
  • Publication number: 20180183713
    Abstract: A load balancer capable of adjusting how network data is distributed to a tenant or group of tenants by manipulating the data plane. The load balancer is placed directly in the flow path of network data that is destined for a tenant or group of tenants having a tenant address. The load balancer includes a control plane and one or more data planes. Each data plane may contain one or more network traffic multiplexors. Further, each data plane may be dedicated to a tenant or group of tenants. Data planes may be added or deleted from the load balancer; additionally, multiplexors may be added or deleted from a data plane. Accordingly, network data directed towards one tenant is less likely to affect the performance of load balancing performed for another tenant.
    Type: Application
    Filed: December 20, 2017
    Publication date: June 28, 2018
    Inventors: Deepak Bansal, Geoffrey Hugh Outhred, Narasimhan Agrahara Venkataramaiah, Shekhar Agarwal
  • Publication number: 20180054475
    Abstract: A load balancing system is provided including: one or more virtual machines implemented in a cloud-based network and including a processor; and a load balancing application implemented in the virtual machines and executed by the processor. The load balancing application is configured such that the processor: receives one or more health messages indicating states of health of network appliances implemented in an appliance layer of the cloud-based network; receives a forwarding packet from a network device for an application server; based on the health messages, determines whether to perform a failover process or select a network appliance; performs a first iteration of a symmetric conversion to route the forwarding packet to the application server via the selected network appliance; receives a return packet from the application server based on the forwarding packet; and performs a second iteration of the symmetric conversion to route the return packet to the network device.
    Type: Application
    Filed: August 16, 2016
    Publication date: February 22, 2018
    Inventors: Shekhar AGARWAL, Maitrey KUMAR, Narayan ANNAMALAI, Deepak BANSAL
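
Publication 20180054475 above describes a load-balancing application that consumes health messages about network appliances, fails over when appliances are unhealthy, and applies a symmetric conversion so a forwarding packet and its return packet traverse the same path. The minimal sketch below captures only the health-driven selection and the symmetry of the flow hash; every name and the exact failover behavior are assumptions.

```python
# A minimal sketch under assumed semantics: appliance health reports decide
# whether to steer through an appliance or fail over, and a symmetric hash of
# the flow endpoints keeps forward and return packets on the same appliance.
import hashlib

class ApplianceLoadBalancer:
    def __init__(self, appliances):
        self.appliances = list(appliances)
        self.health = {a: True for a in appliances}

    def on_health_message(self, appliance, healthy):
        self.health[appliance] = healthy

    def _symmetric_key(self, src, dst):
        # Sorting the endpoints makes both traffic directions hash alike.
        return "|".join(sorted([src, dst]))

    def select(self, src, dst):
        healthy = [a for a in self.appliances if self.health[a]]
        if not healthy:
            return None   # failover: bypass the appliance layer entirely
        digest = hashlib.sha256(self._symmetric_key(src, dst).encode()).hexdigest()
        return healthy[int(digest, 16) % len(healthy)]

if __name__ == "__main__":
    lb = ApplianceLoadBalancer(["fw-0", "fw-1"])
    forward = lb.select("198.51.100.7:50000", "10.0.0.10:443")   # client -> app
    back = lb.select("10.0.0.10:443", "198.51.100.7:50000")      # app -> client
    print(forward, back, forward == back)                        # same appliance
```
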
  • Patent number: 9871731
    Abstract: A load balancer capable of adjusting how network data is distributed to a tenant or group of tenants by manipulating the data plane. The load balancer is placed directly in the flow path of network data that is destined for a tenant or group of tenants having a tenant address. The load balancer includes a control plane and one or more data planes. Each data plane may contain one or more network traffic multiplexors. Further, each data plane may be dedicated to a tenant or group of tenants. Data planes may be added or deleted from the load balancer; additionally, multiplexors may be added or deleted from a data plane. Accordingly, network data directed towards one tenant is less likely to affect the performance of load balancing performed for another tenant.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: January 16, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Deepak Bansal, Geoffrey Hugh Outhred, Narasimhan Agrahara Venkataramaiah, Shekhar Agarwal
  • Publication number: 20170093724
    Abstract: A load balancer capable of adjusting how network data is distributed to a tenant or group of tenants by manipulating the data plane. The load balancer is placed directly in the flow path of network data that is destined for a tenant or group of tenants having a tenant address. The load balancer includes a control plane and one or more data planes. Each data plane may contain one or more network traffic multiplexors. Further, each data plane may be dedicated to a tenant or group of tenants. Data planes may be added or deleted from the load balancer; additionally, multiplexors may be added or deleted from a data plane. Accordingly, network data directed towards one tenant is less likely to affect the performance of load balancing performed for another tenant.
    Type: Application
    Filed: September 30, 2015
    Publication date: March 30, 2017
    Inventors: Deepak Bansal, Geoffrey Hugh Outhred, Narasimhan Agrahara Venkataramaiah, Shekhar Agarwal