Patents by Inventor Aniruddha Bohra

Aniruddha Bohra has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11658910
    Abstract: Described in this document, among other things, is an overload protection system that can protect data sinks from overload by controlling the volume of data sent to those data sinks in a fine-grained manner. The protection system preferably sits in between edge servers, or other producers of data, and data sinks that will receive some or all of the data. Preferably, each data sink owner defines a policy to control how and when overload protection will be applied. Each policy can include definitions of how to monitor the stream of data for overload and specify one or more conditions upon which throttling actions are necessary. In embodiments, a policy can contain a multi-part specification to identify the class(es) of traffic to monitor to see if the conditions have been triggered.
    Type: Grant
    Filed: March 28, 2022
    Date of Patent: May 23, 2023
    Assignee: Akamai Technologies, Inc.
    Inventors: Aniruddha Bohra, Vadim Grinshpun, Hari Raghunathan, Mithila Nagendra
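The policy mechanism in the abstract above is a configuration construct: each data-sink owner specifies classes of traffic to monitor, overload conditions, and throttling actions. The patent text here does not include an implementation, so the following Python sketch is purely illustrative; the class names, fields, and thresholds are assumptions, not the patented design.

```python
# Hypothetical sketch of a data-sink overload-protection policy.
# Names and thresholds are illustrative assumptions, not taken from the patent.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class TrafficClass:
    """Identifies a class of traffic to monitor (multi-part specification)."""
    name: str
    matcher: Callable[[dict], bool]   # e.g. match on sink id, topic, source region


@dataclass
class Condition:
    """An overload condition on a monitored metric of the stream."""
    metric: str                        # e.g. "bytes_per_sec" or "records_per_sec"
    threshold: float

    def triggered(self, metrics: Dict[str, float]) -> bool:
        return metrics.get(self.metric, 0.0) > self.threshold


@dataclass
class Policy:
    """A per-data-sink policy: which traffic to watch and when to throttle."""
    traffic_classes: List[TrafficClass]
    conditions: List[Condition]
    throttle_fraction: float = 0.5     # fraction of traffic to drop or defer when triggered

    def applies_to(self, record: dict) -> bool:
        return any(tc.matcher(record) for tc in self.traffic_classes)

    def should_throttle(self, metrics: Dict[str, float]) -> bool:
        return any(c.triggered(metrics) for c in self.conditions)


# Example usage with made-up numbers: throttle log traffic to a sink
# once the monitored stream exceeds 10 MB/s.
policy = Policy(
    traffic_classes=[TrafficClass("edge-logs", lambda r: r.get("type") == "log")],
    conditions=[Condition("bytes_per_sec", 10_000_000)],
)
print(policy.should_throttle({"bytes_per_sec": 12_500_000}))  # True -> apply throttling
```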
  • Publication number: 20230053164
    Abstract: Edge server compute capacity demand in an overlay network is predicted and used to pre-position compute capacity in advance of application-specific demands. Preferably, machine learning is used to proactively predict anticipated compute capacity needs for an edge server region (e.g., a set of co-located edge servers). In advance, compute capacity (application instances) is made available in-region, and data associated with an application instance is migrated to be close to the instance. The approach facilitates compute-at-the-edge services, which require data (state) to be close to a pre-positioned latency-sensitive application instance. Overlay network mapping (globally) may be used for longer-term positioning, with short-duration scheduling then being done in-region as needed. Compute instances and associated state are migrated intelligently based on predicted (e.g., machine-learned) demand, and with full data consistency enforced.
    Type: Application
    Filed: November 1, 2022
    Publication date: February 16, 2023
    Applicant: Akamai Technologies, Inc.
    Inventors: Vinay Kanitkar, Robert B. Bird, Aniruddha Bohra, Michael Merideth
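The abstract above centers on predicting per-region compute demand and pre-positioning application instances ahead of need. As a rough illustration only, the sketch below substitutes a simple moving average for the machine-learned predictor mentioned in the abstract; all names and numbers are assumptions.

```python
# Hypothetical sketch: pre-position application instances in an edge region
# based on predicted demand. The patent describes machine-learned prediction;
# a simple moving average stands in for the model here. All names are illustrative.
from collections import deque
from statistics import mean


class RegionCapacityPlanner:
    def __init__(self, region: str, window: int = 12, headroom: float = 1.2):
        self.region = region
        self.history = deque(maxlen=window)   # recent observed instance demand
        self.headroom = headroom              # over-provision factor
        self.positioned = 0                   # instances currently pre-positioned

    def observe(self, demand: int) -> None:
        self.history.append(demand)

    def predict(self) -> int:
        # Stand-in for the machine-learned predictor named in the abstract.
        return round(mean(self.history)) if self.history else 0

    def plan(self) -> int:
        """Return how many instances to add (positive) or drain (negative)."""
        target = round(self.predict() * self.headroom)
        delta = target - self.positioned
        self.positioned = target
        return delta


planner = RegionCapacityPlanner("us-east-edge")
for observed in [40, 55, 70]:          # rising demand in the region
    planner.observe(observed)
print(planner.plan())                   # 66 instances to pre-position
```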
  • Publication number: 20230025059
    Abstract: This patent document describes failure recovery technologies for the processing of streaming data, also referred to as pipelined data. The technologies described herein have particular applicability in distributed computing systems that are required to process streams of data and provide at-most-once and/or exactly-once service levels. In a preferred embodiment, a system comprises many nodes configured in a network topology, such as a hierarchical tree structure. Data is generated at leaf nodes. Intermediate nodes process the streaming data in a pipelined fashion, sending aggregated or otherwise combined data from the source data streams towards the root. To reduce overhead and provide locally handled failure recovery, system nodes transfer data using a protocol that controls which node owns the data for purposes of failure recovery as it moves through the network.
    Type: Application
    Filed: July 22, 2021
    Publication date: January 26, 2023
    Inventors: Aniruddha Bohra, Florin Sultan, Umberto Boscolo Bragadin, James Lee, Solomon Lifshits
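The key idea in the abstract above is a protocol that tracks which node currently owns a piece of data, for failure-recovery purposes, as it moves through the pipeline. The toy Python sketch below illustrates that ownership hand-off under assumed, simplified semantics; it is not the protocol from the patent.

```python
# Hypothetical sketch of the ownership-transfer idea from the abstract: a node
# retains responsibility for a data batch until the downstream node explicitly
# accepts ownership, so a failure can be recovered locally by replaying owned
# batches exactly once. Class and method names are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Batch:
    batch_id: int
    records: List[str]


class PipelineNode:
    def __init__(self, name: str):
        self.name = name
        self.owned: Dict[int, Batch] = {}     # batches this node must replay on failure

    def produce(self, batch: Batch) -> None:
        self.owned[batch.batch_id] = batch    # producer owns until hand-off completes

    def send(self, batch_id: int, downstream: "PipelineNode") -> None:
        batch = self.owned[batch_id]
        if downstream.accept(batch):          # downstream durably stores, takes ownership
            del self.owned[batch_id]          # sender may now forget the batch

    def accept(self, batch: Batch) -> bool:
        self.owned[batch.batch_id] = batch    # ownership transfers on acceptance
        return True

    def recover(self) -> List[Batch]:
        """On a failure, replay only the batches this node still owns."""
        return list(self.owned.values())


leaf, intermediate = PipelineNode("leaf"), PipelineNode("intermediate")
leaf.produce(Batch(1, ["a", "b"]))
leaf.send(1, intermediate)
print(len(leaf.recover()), len(intermediate.recover()))  # 0 1 -> exactly one owner
```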
  • Publication number: 20220353189
    Abstract: Described in this document, among other things, is an overload protection system that can protect data sinks from overload by controlling the volume of data sent to those data sinks in a fine-grained manner. The protection system preferably sits in between edge servers, or other producers of data, and data sinks that will receive some or all of the data. Preferably, each data sink owner defines a policy to control how and when overload protection will be applied. Each policy can include definitions of how to monitor the stream of data for overload and specify one or more conditions upon which throttling actions are necessary. In embodiments, a policy can contain a multi-part specification to identify the class(es) of traffic to monitor to see if the conditions have been triggered.
    Type: Application
    Filed: March 28, 2022
    Publication date: November 3, 2022
    Applicant: Akamai Technologies, Inc.
    Inventors: Aniruddha Bohra, Vadim Grinshpun, Hari Raghunathan, Mithila Nagendra
  • Patent number: 11490307
    Abstract: Edge server compute capacity demand in an overlay network is predicted and used to pre-position compute capacity in advance of application-specific demands. Preferably, machine learning is used to proactively predict anticipated compute capacity needs for an edge server region (e.g., a set of co-located edge servers). In advance, compute capacity (application instances) is made available in-region, and data associated with an application instance is migrated to be close to the instance. The approach facilitates compute-at-the-edge services, which require data (state) to be close to a pre-positioned latency-sensitive application instance. Overlay network mapping (globally) may be used for longer-term positioning, with short-duration scheduling then being done in-region as needed. Compute instances and associated state are migrated intelligently based on predicted (e.g., machine-learned) demand, and with full data consistency enforced.
    Type: Grant
    Filed: June 13, 2019
    Date of Patent: November 1, 2022
    Assignee: Akamai Technologies, Inc.
    Inventors: Vinay Kanitkar, Robert B. Bird, Aniruddha Bohra, Michael Merideth
  • Patent number: 11290383
    Abstract: Described in this document, among other things, is an overload protection system that can protect data sinks from overload by controlling the volume of data sent to those data sinks in a fine-grained manner. The protection system preferably sits in between edge servers, or other producers of data, and data sinks that will receive some or all of the data. Preferably, each data sink owner defines a policy to control how and when overload protection will be applied. Each policy can include definitions of how to monitor the stream of data for overload and specify one or more conditions upon which throttling actions are necessary. In embodiments, a policy can contain a multi-part specification to identify the class(es) of traffic to monitor to see if the conditions have been triggered.
    Type: Grant
    Filed: September 1, 2020
    Date of Patent: March 29, 2022
    Assignee: Akamai Technologies, Inc.
    Inventors: Aniruddha Bohra, Vadim Grinshpun, Hari Raghunathan, Mithila Nagendra
  • Publication number: 20210051103
    Abstract: Described in this document, among other things, is an overload protection system that can protect data sinks from overload by controlling the volume of data sent to those data sinks in a fine-grained manner. The protection system preferably sits in between edge servers, or other producers of data, and data sinks that will receive some or all of the data. Preferably, each data sink owner defines a policy to control how and when overload protection will be applied. Each policy can include definitions of how to monitor the stream of data for overload and specify one or more conditions upon which throttling actions are necessary. In embodiments, a policy can contain a multi-part specification to identify the class(es) of traffic to monitor to see if the conditions have been triggered.
    Type: Application
    Filed: September 1, 2020
    Publication date: February 18, 2021
    Applicant: Akamai Technologies, Inc.
    Inventors: Aniruddha Bohra, Vadim Grinshpun, Hari Raghunathan, Mithila Nagendra
  • Patent number: 10798006
    Abstract: Described in this document, among other things, is an overload protection system that can protect data sinks from overload by controlling the volume of data sent to those data sinks in a fine-grained manner. The protection system preferably sits in between edge servers, or other producers of data, and data sinks that will receive some or all of the data. Preferably, each data sink owner defines a policy to control how and when overload protection will be applied. Each policy can include definitions of how to monitor the stream of data for overload and specify one or more conditions upon which traffic shaping actions are necessary. In embodiments, a policy can contain a multi-part specification to identify the class(es) of traffic to monitor to see if the conditions have been triggered.
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: October 6, 2020
    Assignee: Akamai Technologies, Inc.
    Inventors: Aniruddha Bohra, Vadim Grinshpun, Hari Raghunathan, Mithila Nagendra
  • Publication number: 20200196210
    Abstract: Edge server compute capacity demand in an overlay network is predicted and used to pre-position compute capacity in advance of application-specific demands. Preferably, machine learning is used to proactively predict anticipated compute capacity needs for an edge server region (e.g., a set of co-located edge servers). In advance, compute capacity (application instances) is made available in-region, and data associated with an application instance is migrated to be close to the instance. The approach facilitates compute-at-the-edge services, which require data (state) to be close to a pre-positioned latency-sensitive application instance. Overlay network mapping (globally) may be used for longer-term positioning, with short-duration scheduling then being done in-region as needed. Compute instances and associated state are migrated intelligently based on predicted (e.g., machine-learned) demand, and with full data consistency enforced.
    Type: Application
    Filed: June 13, 2019
    Publication date: June 18, 2020
    Applicant: Akamai Technologies, Inc.
    Inventors: Vinay Kanitkar, Robert B. Bird, Aniruddha Bohra, Michael Merideth
  • Publication number: 20200120032
    Abstract: Described in this document, among other things, is an overload protection system that can protect data sinks from overload by controlling the volume of data sent to those data sinks in a fine-grained manner. The protection system preferably sits in between edge servers, or other producers of data, and data sinks that will receive some or all of the data. Preferably, each data sink owner defines a policy to control how and when overload protection will be applied. Each policy can include definitions of how to monitor the stream of data for overload and specify one or more conditions upon which throttling actions are necessary. In embodiments, a policy can contain a multi-part specification to identify the class(es) of traffic to monitor to see if the conditions have been triggered.
    Type: Application
    Filed: February 11, 2019
    Publication date: April 16, 2020
    Applicant: Akamai Technologies, Inc.
    Inventors: Aniruddha Bohra, Vadim Grinshpun, Hari Raghunathan, Mithila Nagendra
  • Patent number: 8868626
    Abstract: According to various embodiments of the invention, a system and method are provided for controlling a file system. In some embodiments, a control plane interposes between a data plane user and a data plane, intercepts file system operations, and performs control plane operations upon the file system operations. In one such embodiment, the system and method are implemented between a data plane user that is a local file system user and a data plane that is a local file system. In another such embodiment, the system and method are implemented between a data plane user that is a client and a data plane that is a file server. Furthermore, for an embodiment where the control plane interposes between a client and a file server, the control plane can be implemented as a file system proxy. Control plane operations include, but are not limited to, observation, verification, and transformation of a file system operation.
    Type: Grant
    Filed: April 14, 2008
    Date of Patent: October 21, 2014
    Assignee: Rutgers, The State University of New Jersey
    Inventors: Liviu Iftode, Stephen Smaldone, Aniruddha Bohra
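The abstract above describes a control plane that intercepts file system operations and applies observation, verification, and transformation before they reach the data plane. The following sketch illustrates that interposition pattern in Python under invented names and toy rules; it is not the patented system.

```python
# Hypothetical sketch of a control plane interposed between a data-plane user
# and a file system: each operation is observed, verified, and optionally
# transformed before being passed through. Names are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class FsOperation:
    kind: str        # e.g. "open", "write", "unlink"
    path: str
    data: bytes = b""


class ControlPlane:
    def __init__(self, data_plane: Callable[[FsOperation], str]):
        self.data_plane = data_plane
        self.audit_log: List[str] = []              # observation

    def _verify(self, op: FsOperation) -> None:     # verification
        if op.kind == "unlink" and op.path.startswith("/protected/"):
            raise PermissionError(f"unlink denied for {op.path}")

    def _transform(self, op: FsOperation) -> FsOperation:   # transformation
        if op.kind == "write":
            return FsOperation(op.kind, op.path, op.data.upper())  # toy transform
        return op

    def intercept(self, op: FsOperation) -> str:
        self.audit_log.append(f"{op.kind} {op.path}")
        self._verify(op)
        return self.data_plane(self._transform(op))


# A toy "file server" stands in for the data plane.
backend = lambda op: f"{op.kind} ok: {op.path}"
cp = ControlPlane(backend)
print(cp.intercept(FsOperation("write", "/home/user/notes.txt", b"hello")))
```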
  • Patent number: 7840618
    Abstract: Traditional networked file systems like NFS do not extend to the wide area due to network latency and the dynamics introduced in the WAN environment. To address that problem, a wide-area networked file system is based on a traditional networked file system (NFS/CIFS) and extends to the WAN environment by introducing a file redirector infrastructure residing between the central file server and clients. The file redirector infrastructure is invisible to both the central server and the clients so that the change to NFS is minimal. That minimizes the interruption to the existing file service when deploying WireFS on top of NFS. The system includes an architecture for an enterprise-wide read/write wide area network file system, protocols and data structures for metadata and data management in this system, algorithms for history-based prefetching to minimize access latency in metadata operations, and a distributed randomized algorithm for the implementation of a global LRU cache replacement scheme.
    Type: Grant
    Filed: December 28, 2006
    Date of Patent: November 23, 2010
    Assignee: NEC Laboratories America, Inc.
    Inventors: Hui Zhang, Aniruddha Bohra, Samrat Ganguly, Rauf Izmailov, Jian Liang
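Among the components listed in the abstract above is history-based prefetching to reduce metadata access latency over the WAN. The sketch below shows one simple way such a predictor could work, keeping per-path successor counts; it is an illustrative assumption rather than the WireFS algorithm.

```python
# Hypothetical sketch of history-based prefetching for metadata operations,
# one of the WireFS components named in the abstract: a redirector records
# which metadata lookups tend to follow one another and prefetches the most
# frequent successor to hide WAN latency. Purely illustrative assumptions.
from collections import defaultdict, Counter
from typing import Optional


class MetadataPrefetcher:
    def __init__(self):
        self.successors = defaultdict(Counter)   # path -> Counter of next paths
        self.last_path: Optional[str] = None

    def record_access(self, path: str) -> None:
        if self.last_path is not None:
            self.successors[self.last_path][path] += 1
        self.last_path = path

    def predict_next(self, path: str) -> Optional[str]:
        """Return the historically most likely next metadata lookup, if any."""
        counts = self.successors.get(path)
        if not counts:
            return None
        return counts.most_common(1)[0][0]


prefetcher = MetadataPrefetcher()
for p in ["/proj", "/proj/src", "/proj/src/main.c",
          "/proj", "/proj/src", "/proj/src/util.c",
          "/proj", "/proj/src"]:
    prefetcher.record_access(p)
print(prefetcher.predict_next("/proj"))   # '/proj/src' -> worth prefetching
```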
  • Patent number: 7643427
    Abstract: A multipath routing architecture for large data transfers is disclosed. The architecture employs an overlay network that provides diverse paths for packets from communicating end hosts, allowing them to utilize as much capacity as is available across multiple paths while ensuring network-wide fair allocation of resources across competing data transfers. A set of transit nodes is interposed between the end-hosts, and for each end-to-end connection a transit node can logically operate as an entry gateway, a relay, or an exit gateway. Packets from the sender enter the entry node and go to the exit node either directly or through one of a plurality of relay nodes. The exit node delivers the packets to the receiver. A multipath congestion control protocol is executed on the entry node to harness network capacity for large data transfers.
    Type: Grant
    Filed: March 26, 2007
    Date of Patent: January 5, 2010
    Assignee: NEC Laboratories America, Inc.
    Inventors: Ravindranath Kokku, Aniruddha Bohra, Samrat Ganguly, Rauf Izmailov
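The entry gateway described in the abstract above spreads a large transfer across multiple overlay paths under a multipath congestion control protocol. The sketch below only illustrates the path-splitting idea, assigning packets in proportion to assumed per-path congestion windows; the actual protocol is not described in the abstract and everything here is hypothetical.

```python
# Hypothetical sketch of the entry-node role from the abstract: packets of a
# large transfer are spread over several overlay paths (direct or via relays)
# in proportion to each path's current congestion window. The real multipath
# congestion control protocol is not published here; this split is illustrative.
from typing import Dict, List


class EntryNode:
    def __init__(self, path_windows: Dict[str, int]):
        # path name -> current congestion window (packets in flight allowed)
        self.path_windows = dict(path_windows)

    def schedule(self, packets: List[int]) -> Dict[str, List[int]]:
        """Assign packets to paths proportionally to their windows."""
        total = sum(self.path_windows.values())
        assignment: Dict[str, List[int]] = {p: [] for p in self.path_windows}
        for pkt in packets:
            # weighted round-robin: pick the path with the largest remaining share
            shares = {p: self.path_windows[p] / total * len(packets) - len(assignment[p])
                      for p in self.path_windows}
            best = max(shares, key=shares.get)
            assignment[best].append(pkt)
        return assignment


entry = EntryNode({"direct": 10, "via_relay_A": 30, "via_relay_B": 20})
plan = entry.schedule(list(range(12)))
print({path: len(pkts) for path, pkts in plan.items()})  # roughly a 2/6/4 split
```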
  • Publication number: 20090080377
    Abstract: In accordance with the invention, a method includes i) first obtaining, for each AP, the hopping sequences of the other interfering APs; and ii) determining for the AP a respective hopping sequence that maximizes each AP's throughput. Preferably, the step of determining comprises, for each slot, the AP choosing a channel that minimizes the number of edges which violate a k-coloring property. In an exemplary embodiment, the method further includes the step of the AP selecting one of (a) a channel chosen uniformly at random from all such channels and (b) a channel that distributes interference as evenly as possible among neighboring APs.
    Type: Application
    Filed: September 24, 2007
    Publication date: March 26, 2009
    Applicant: NEC Laboratories America, Inc.
    Inventors: Samrat Ganguly, Vishnu Navda, Aniruddha Bohra, Daniel S. Rubenstein
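The per-slot channel choice in the abstract above amounts to a greedy coloring step: pick the channel that conflicts with the fewest interfering neighbors, breaking ties at random. The sketch below illustrates that step with invented inputs; it is not the patented method.

```python
# Hypothetical sketch of the per-slot channel choice described above: given the
# channels that interfering neighbor APs will use in a slot, the AP picks a
# channel that conflicts with (i.e. "colors the same as") the fewest neighbors,
# breaking ties uniformly at random. Illustrative, not the patented algorithm.
import random
from typing import Dict, List


def choose_channel(neighbor_channels: Dict[str, int], channels: List[int]) -> int:
    """Pick the channel minimizing conflicting edges to interfering neighbors."""
    conflicts = {ch: sum(1 for nch in neighbor_channels.values() if nch == ch)
                 for ch in channels}
    fewest = min(conflicts.values())
    candidates = [ch for ch, c in conflicts.items() if c == fewest]
    return random.choice(candidates)          # option (a) in the abstract


def build_hopping_sequence(neighbor_sequences: Dict[str, List[int]],
                           channels: List[int], slots: int) -> List[int]:
    """For each slot, choose a channel given the neighbors' hopping sequences."""
    return [choose_channel({ap: seq[slot] for ap, seq in neighbor_sequences.items()},
                           channels)
            for slot in range(slots)]


neighbors = {"AP-1": [1, 6, 11, 1], "AP-2": [6, 6, 1, 11]}
print(build_hopping_sequence(neighbors, channels=[1, 6, 11], slots=4))
```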
  • Publication number: 20090043823
    Abstract: According to various embodiments of the invention, a system and method are provided for controlling a file system. In some embodiments, a control plane interposes between a data plane user and a data plane, intercepts file system operations, and performs control plane operations upon the file system operations. In one such embodiment, the system and method are implemented between a data plane user that is a local file system user and a data plane that is a local file system. In another such embodiment, the system and method are implemented between a data plane user that is a client and a data plane that is a file server. Furthermore, for an embodiment where the control plane interposes between a client and a file server, the control plane can be implemented as a file system proxy. Control plane operations include, but are not limited to, observation, verification, and transformation of a file system operation.
    Type: Application
    Filed: April 14, 2008
    Publication date: February 12, 2009
    Inventors: Liviu Iftode, Stephen Smaldone, Aniruddha Bohra
  • Publication number: 20070230352
    Abstract: A multipath routing architecture for large data transfers is disclosed. The architecture employs an overlay network that provides diverse paths for packets from communicating end hosts, allowing them to utilize as much capacity as is available across multiple paths while ensuring network-wide fair allocation of resources across competing data transfers. A set of transit nodes is interposed between the end-hosts, and for each end-to-end connection a transit node can logically operate as an entry gateway, a relay, or an exit gateway. Packets from the sender enter the entry node and go to the exit node either directly or through one of a plurality of relay nodes. The exit node delivers the packets to the receiver. A multipath congestion control protocol is executed on the entry node to harness network capacity for large data transfers.
    Type: Application
    Filed: March 26, 2007
    Publication date: October 4, 2007
    Applicant: NEC Laboratories America, Inc.
    Inventors: Ravindranath Kokku, Aniruddha Bohra, Samrat Ganguly, Rauf Izmailov
  • Publication number: 20070177739
    Abstract: Disclosed is a data replication technique for providing erasure encoded replication of large data sets over a geographically distributed replica set. The technique utilizes a multicast tree to store, forward, and erasure encode the data set. The erasure encoding of data may be performed at various locations within the multicast tree, including the source, intermediate nodes, and destination nodes. In one embodiment, the system comprises a source node for storing the original data set, a plurality of intermediate nodes, and a plurality of leaf nodes for storing the unique replica fragments. The nodes are configured as a multicast tree to convert the original data into the unique replica fragments by performing distributed erasure encoding at a plurality of levels of the multicast tree.
    Type: Application
    Filed: January 27, 2006
    Publication date: August 2, 2007
    Applicant: NEC Laboratories America, Inc.
    Inventors: Samrat Ganguly, Aniruddha Bohra, Rauf Izmailov, Yoshihide Kikuchi
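The abstract above distributes unique erasure-coded fragments of a data set across leaf nodes of a multicast tree. The sketch below uses a single XOR parity fragment as a stand-in for the erasure code to show how a lost fragment can be rebuilt; the fragment layout, tree structure, and code are all illustrative assumptions.

```python
# Hypothetical sketch of erasure-encoded replication: the original data set is
# split into k fragments plus an XOR parity fragment (a stand-in for the
# erasure code used in the patent), and the unique fragments are stored on
# different leaf replicas. Any single lost fragment can be rebuilt from the
# others. Fragment sizes and the replica layout are illustrative.
from functools import reduce
from typing import Dict, List


def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def encode(data: bytes, k: int) -> List[bytes]:
    """Split data into k equal fragments and append one XOR parity fragment."""
    size = -(-len(data) // k)                       # ceiling division
    frags = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    parity = reduce(xor_bytes, frags)
    return frags + [parity]


def recover(fragments: Dict[int, bytes], k: int) -> bytes:
    """Rebuild one missing fragment (if any) and reassemble the original data."""
    missing = [i for i in range(k + 1) if i not in fragments]
    if missing:
        fragments[missing[0]] = reduce(xor_bytes, fragments.values())
    return b"".join(fragments[i] for i in range(k)).rstrip(b"\0")


# Distribute fragments to leaf replicas, then lose one and still recover.
replica_fragments = dict(enumerate(encode(b"geo-replicated data set", k=3)))
del replica_fragments[1]                            # one leaf replica fails
print(recover(replica_fragments, k=3))              # b'geo-replicated data set'
```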
  • Publication number: 20070162462
    Abstract: Traditional networked file systems like NFS do not extend to the wide area due to network latency and the dynamics introduced in the WAN environment. To address that problem, a wide-area networked file system is based on a traditional networked file system (NFS/CIFS) and extends to the WAN environment by introducing a file redirector infrastructure residing between the central file server and clients. The file redirector infrastructure is invisible to both the central server and the clients so that the change to NFS is minimal. That minimizes the interruption to the existing file service when deploying WireFS on top of NFS. The system includes an architecture for an enterprise-wide read/write wide area network file system, protocols and data structures for metadata and data management in this system, algorithms for history-based prefetching to minimize access latency in metadata operations, and a distributed randomized algorithm for the implementation of a global LRU cache replacement scheme.
    Type: Application
    Filed: December 28, 2006
    Publication date: July 12, 2007
    Applicant: NEC Laboratories America, Inc.
    Inventors: Hui Zhang, Aniruddha Bohra, Samrat Ganguly, Rauf Izmailov, Jian Liang