Patents by Inventor Aniruddha Bohra
Aniruddha Bohra has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11658910
Abstract: Described in this document, among other things, is an overload protection system that can protect data sinks from overload by controlling the volume of data sent to those data sinks in a fine-grained manner. The protection system preferably sits in between edge servers, or other producers of data, and data sinks that will receive some or all of the data. Preferably, each data sink owner defines a policy to control how and when overload protection will be applied. Each policy can include definitions of how to monitor the stream of data for overload and specify one or more conditions upon which throttling actions are necessary. In embodiments, a policy can contain a multi-part specification to identify the class(es) of traffic to monitor to see if the conditions have been triggered.
Type: Grant
Filed: March 28, 2022
Date of Patent: May 23, 2023
Assignee: Akamai Technologies, Inc.
Inventors: Aniruddha Bohra, Vadim Grinshpun, Hari Raghunathan, Mithila Nagendra
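The policy-driven throttling described in the abstract can be illustrated with a minimal Python sketch. All names (`Policy`, `OverloadProtector`, the policy fields) are hypothetical stand-ins, not the patented implementation: each sink owner's policy names a multi-part traffic class to monitor, a condition (a per-window byte budget here), and the throttling action taken when it triggers.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    traffic_class: dict        # multi-part spec of which traffic to monitor
    max_bytes_per_window: int  # condition: overload once the window exceeds this

class OverloadProtector:
    """Interposes between data producers (e.g. edge servers) and data sinks."""

    def __init__(self, policies):
        self.policies = policies   # sink name -> Policy, defined by the sink owner
        self.observed = {}         # sink name -> monitored bytes in current window

    def admit(self, sink, message_class, size):
        """Return True if the message may be forwarded to the sink."""
        policy = self.policies.get(sink)
        if policy is None:
            return True
        # Apply the multi-part traffic-class specification.
        if any(message_class.get(k) != v for k, v in policy.traffic_class.items()):
            return True            # traffic outside the monitored class passes
        seen = self.observed.get(sink, 0)
        if seen + size > policy.max_bytes_per_window:
            return False           # condition triggered: throttle this message
        self.observed[sink] = seen + size
        return True

    def end_window(self):
        self.observed.clear()      # counters reset at each monitoring window
```

Unmonitored traffic classes bypass the budget entirely, which matches the abstract's point that throttling is fine-grained rather than a blanket rate limit on the sink.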
-
Publication number: 20230053164
Abstract: Edge server compute capacity demand in an overlay network is predicted and used to pre-position compute capacity in advance of application-specific demands. Preferably, machine learning is used to proactively predict anticipated compute capacity needs for an edge server region (e.g., a set of co-located edge servers). In advance, compute capacity (application instances) is made available in-region, and data associated with an application instance is migrated to be close to the instance. The approach facilitates compute-at-the-edge services, which require data (state) to be close to a pre-positioned latency-sensitive application instance. Overlay network mapping (globally) may be used for more long-term positioning, with short-duration scheduling then being done in-region as needed. Compute instances and associated state are migrated intelligently based on predicted (e.g., machine-learned) demand, and with full data consistency enforced.
Type: Application
Filed: November 1, 2022
Publication date: February 16, 2023
Applicant: Akamai Technologies, Inc.
Inventors: Vinay Kanitkar, Robert B. Bird, Aniruddha Bohra, Michael Merideth
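The pre-positioning idea can be sketched in a few lines of Python. This is a hypothetical illustration only: a trivial weighted moving average stands in for the machine-learned predictor the abstract describes, and `preposition` merely returns how many application instances to make available per region in advance of demand.

```python
def predict_demand(history, weights=(0.5, 0.3, 0.2)):
    """Weighted moving average over the most recent samples (newest weighted most).

    A stand-in for the machine-learned demand predictor; `history` is a list of
    past per-interval compute-demand samples for one edge server region.
    """
    recent = list(reversed(history[-len(weights):]))  # newest first
    used = weights[:len(recent)]
    return sum(w * h for w, h in zip(used, recent)) / sum(used)

def preposition(regions):
    """regions: region name -> demand history.

    Returns the number of application instances to pre-launch in each region
    so capacity is in place before the application-specific demand arrives.
    """
    return {name: max(0, round(predict_demand(history)))
            for name, history in regions.items()}
```

State migration and consistency enforcement, which the abstract couples to instance placement, are elided here; the sketch shows only the predict-then-pre-position loop.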
-
Publication number: 20230025059
Abstract: This patent document describes failure recovery technologies for the processing of streaming data, also referred to as pipelined data. The technologies described herein have particular applicability in distributed computing systems that are required to process streams of data and provide at-most-once and/or exactly-once service levels. In a preferred embodiment, a system comprises many nodes configured in a network topology, such as a hierarchical tree structure. Data is generated at leaf nodes. Intermediate nodes process the streaming data in a pipelined fashion, sending towards the root data that is aggregated or otherwise combined from the source data streams. To reduce overhead and provide locally handled failure recovery, system nodes transfer data using a protocol that controls which node owns the data for purposes of failure recovery as it moves through the network.
Type: Application
Filed: July 22, 2021
Publication date: January 26, 2023
Inventors: Aniruddha Bohra, Florin Sultan, Umberto Boscolo Bragadin, James Lee, Solomon Lifshits
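The ownership-transfer protocol in the abstract can be sketched as follows, assuming hypothetical class names: exactly one node "owns" a batch for recovery purposes at any time, the sender keeps the batch buffered until the receiver acknowledges taking ownership, and a failure is therefore recovered locally by whichever node currently owns the data.

```python
class Node:
    """A node in the processing tree; `owned` is what it must re-send on failure."""

    def __init__(self, name):
        self.name = name
        self.owned = {}            # batch_id -> data this node is responsible for

    def produce(self, batch_id, data):
        self.owned[batch_id] = data   # data generated here is owned here

    def send(self, batch_id, downstream):
        """Transfer a batch toward the root; ownership moves only on ack."""
        data = self.owned[batch_id]
        ack = downstream.receive(batch_id, data)
        if ack:
            del self.owned[batch_id]  # downstream now owns it; drop our copy
        return ack

class ReliableNode(Node):
    def receive(self, batch_id, data):
        self.owned[batch_id] = data   # take ownership before acknowledging
        return True
```

Because the sender releases its copy only after the acknowledgement, a crash on either side leaves exactly one node holding the batch, which is what allows recovery to be handled locally without an end-to-end replay.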
-
Publication number: 20220353189
Abstract: Described in this document, among other things, is an overload protection system that can protect data sinks from overload by controlling the volume of data sent to those data sinks in a fine-grained manner. The protection system preferably sits in between edge servers, or other producers of data, and data sinks that will receive some or all of the data. Preferably, each data sink owner defines a policy to control how and when overload protection will be applied. Each policy can include definitions of how to monitor the stream of data for overload and specify one or more conditions upon which throttling actions are necessary. In embodiments, a policy can contain a multi-part specification to identify the class(es) of traffic to monitor to see if the conditions have been triggered.
Type: Application
Filed: March 28, 2022
Publication date: November 3, 2022
Applicant: Akamai Technologies, Inc.
Inventors: Aniruddha Bohra, Vadim Grinshpun, Hari Raghunathan, Mithila Nagendra
-
Patent number: 11490307
Abstract: Edge server compute capacity demand in an overlay network is predicted and used to pre-position compute capacity in advance of application-specific demands. Preferably, machine learning is used to proactively predict anticipated compute capacity needs for an edge server region (e.g., a set of co-located edge servers). In advance, compute capacity (application instances) is made available in-region, and data associated with an application instance is migrated to be close to the instance. The approach facilitates compute-at-the-edge services, which require data (state) to be close to a pre-positioned latency-sensitive application instance. Overlay network mapping (globally) may be used for more long-term positioning, with short-duration scheduling then being done in-region as needed. Compute instances and associated state are migrated intelligently based on predicted (e.g., machine-learned) demand, and with full data consistency enforced.
Type: Grant
Filed: June 13, 2019
Date of Patent: November 1, 2022
Assignee: Akamai Technologies, Inc.
Inventors: Vinay Kanitkar, Robert B. Bird, Aniruddha Bohra, Michael Merideth
-
Patent number: 11290383
Abstract: Described in this document, among other things, is an overload protection system that can protect data sinks from overload by controlling the volume of data sent to those data sinks in a fine-grained manner. The protection system preferably sits in between edge servers, or other producers of data, and data sinks that will receive some or all of the data. Preferably, each data sink owner defines a policy to control how and when overload protection will be applied. Each policy can include definitions of how to monitor the stream of data for overload and specify one or more conditions upon which throttling actions are necessary. In embodiments, a policy can contain a multi-part specification to identify the class(es) of traffic to monitor to see if the conditions have been triggered.
Type: Grant
Filed: September 1, 2020
Date of Patent: March 29, 2022
Assignee: Akamai Technologies, Inc.
Inventors: Aniruddha Bohra, Vadim Grinshpun, Hari Raghunathan, Mithila Nagendra
-
Publication number: 20210051103
Abstract: Described in this document, among other things, is an overload protection system that can protect data sinks from overload by controlling the volume of data sent to those data sinks in a fine-grained manner. The protection system preferably sits in between edge servers, or other producers of data, and data sinks that will receive some or all of the data. Preferably, each data sink owner defines a policy to control how and when overload protection will be applied. Each policy can include definitions of how to monitor the stream of data for overload and specify one or more conditions upon which throttling actions are necessary. In embodiments, a policy can contain a multi-part specification to identify the class(es) of traffic to monitor to see if the conditions have been triggered.
Type: Application
Filed: September 1, 2020
Publication date: February 18, 2021
Applicant: Akamai Technologies, Inc.
Inventors: Aniruddha Bohra, Vadim Grinshpun, Hari Raghunathan, Mithila Nagendra
-
Patent number: 10798006
Abstract: Described in this document, among other things, is an overload protection system that can protect data sinks from overload by controlling the volume of data sent to those data sinks in a fine-grained manner. The protection system preferably sits in between edge servers, or other producers of data, and data sinks that will receive some or all of the data. Preferably, each data sink owner defines a policy to control how and when overload protection will be applied. Each policy can include definitions of how to monitor the stream of data for overload and specify one or more conditions upon which traffic shaping actions are necessary. In embodiments, a policy can contain a multi-part specification to identify the class(es) of traffic to monitor to see if the conditions have been triggered.
Type: Grant
Filed: February 11, 2019
Date of Patent: October 6, 2020
Assignee: Akamai Technologies, Inc.
Inventors: Aniruddha Bohra, Vadim Grinshpun, Hari Raghunathan, Mithila Nagendra
-
Publication number: 20200196210
Abstract: Edge server compute capacity demand in an overlay network is predicted and used to pre-position compute capacity in advance of application-specific demands. Preferably, machine learning is used to proactively predict anticipated compute capacity needs for an edge server region (e.g., a set of co-located edge servers). In advance, compute capacity (application instances) is made available in-region, and data associated with an application instance is migrated to be close to the instance. The approach facilitates compute-at-the-edge services, which require data (state) to be close to a pre-positioned latency-sensitive application instance. Overlay network mapping (globally) may be used for more long-term positioning, with short-duration scheduling then being done in-region as needed. Compute instances and associated state are migrated intelligently based on predicted (e.g., machine-learned) demand, and with full data consistency enforced.
Type: Application
Filed: June 13, 2019
Publication date: June 18, 2020
Applicant: Akamai Technologies, Inc.
Inventors: Vinay Kanitkar, Robert B. Bird, Aniruddha Bohra, Michael Merideth
-
Publication number: 20200120032
Abstract: Described in this document, among other things, is an overload protection system that can protect data sinks from overload by controlling the volume of data sent to those data sinks in a fine-grained manner. The protection system preferably sits in between edge servers, or other producers of data, and data sinks that will receive some or all of the data. Preferably, each data sink owner defines a policy to control how and when overload protection will be applied. Each policy can include definitions of how to monitor the stream of data for overload and specify one or more conditions upon which throttling actions are necessary. In embodiments, a policy can contain a multi-part specification to identify the class(es) of traffic to monitor to see if the conditions have been triggered.
Type: Application
Filed: February 11, 2019
Publication date: April 16, 2020
Applicant: Akamai Technologies, Inc.
Inventors: Aniruddha Bohra, Vadim Grinshpun, Hari Raghunathan, Mithila Nagendra
-
Patent number: 8868626
Abstract: According to various embodiments of the invention, a system and method for controlling a file system are described. In some embodiments, a control plane interposes between a data plane user and a data plane, intercepts file system operations, and performs control plane operations upon the file system operations. In one such embodiment, the system and method is implemented between a data plane user that is a local file system user and a data plane that is a local file system. In another such embodiment, the system and method is implemented between a data plane user that is a client and a data plane that is a file server. Furthermore, for an embodiment where the control plane interposes between a client and a file server, the control plane can be implemented as a file system proxy. Control plane operations include, but are not limited to, observation, verification, and transformation of a file system operation.
Type: Grant
Filed: April 14, 2008
Date of Patent: October 21, 2014
Assignee: Rutgers, The State University of New Jersey
Inventors: Liviu Iftode, Stephen Smaldone, Aniruddha Bohra
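The interposition idea reads naturally as a proxy pattern. The sketch below is hypothetical (an in-memory dict stands in for the real file system, and the observe/verify/transform steps are illustrative): the control plane intercepts each operation and applies observation (logging), verification (a path check), and transformation (rewriting the payload) before forwarding it to the data plane.

```python
class DataPlane:
    """Stand-in for a local file system or file server."""

    def __init__(self):
        self.files = {}

    def write(self, path, data):
        self.files[path] = data

class ControlPlane:
    """Interposes between a data plane user and the data plane (a file system proxy)."""

    def __init__(self, data_plane):
        self.data_plane = data_plane
        self.log = []                        # observation record

    def write(self, path, data):
        self.log.append(("write", path))     # observe the operation
        if not path.startswith("/"):         # verify it (illustrative rule)
            raise ValueError("relative paths rejected")
        data = data.rstrip()                 # transform it (illustrative rule)
        self.data_plane.write(path, data)    # forward to the data plane
```

The data plane user calls `ControlPlane.write` exactly as it would call the file system, which is what makes the control plane transparent in the client/file-server embodiment.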
-
Patent number: 7840618
Abstract: Traditional networked file systems like NFS do not extend to the wide area due to the network latency and dynamics introduced in the WAN environment. To address that problem, a wide-area networked file system is based on a traditional networked file system (NFS/CIFS) and extends to the WAN environment by introducing a file redirector infrastructure residing between the central file server and clients. The file redirector infrastructure is invisible to both the central server and clients, so that the change to NFS is minimal. That minimizes the interruption to the existing file service when deploying WireFS on top of NFS. The system includes an architecture for an enterprise-wide read/write wide area network file system, protocols and data structures for metadata and data management in this system, algorithms for history-based prefetching to minimize access latency in metadata operations, and a distributed randomized algorithm implementing a global LRU cache replacement scheme.
Type: Grant
Filed: December 28, 2006
Date of Patent: November 23, 2010
Assignee: NEC Laboratories America, Inc.
Inventors: Hui Zhang, Aniruddha Bohra, Samrat Ganguly, Rauf Izmailov, Jian Liang
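History-based prefetching of the kind the abstract mentions can be illustrated with a first-order successor model. This is a generic sketch, not WireFS's actual algorithm: the redirector records which file tends to be accessed after which, and prefetches the most likely successor to hide WAN latency on metadata operations.

```python
from collections import defaultdict, Counter

class HistoryPrefetcher:
    """Learns access-order history and predicts the next file to prefetch."""

    def __init__(self):
        self.successors = defaultdict(Counter)  # path -> counts of what followed it
        self.last = None

    def record(self, path):
        """Observe one access in the client's metadata-operation stream."""
        if self.last is not None:
            self.successors[self.last][path] += 1
        self.last = path

    def predict(self, path):
        """Most frequent successor of `path` seen so far, or None if unknown."""
        counts = self.successors.get(path)
        if not counts:
            return None
        return counts.most_common(1)[0][0]
```

A redirector node would call `predict` on each access and fetch the predicted entry's metadata from the central server in the background, so the follow-up access hits locally instead of crossing the WAN.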
-
Patent number: 7643427
Abstract: A multipath routing architecture for large data transfers is disclosed. The architecture employs an overlay network that provides diverse paths for packets from communicating end hosts to utilize as much capacity as available across multiple paths while ensuring network-wide fair allocation of resources across competing data transfers. A set of transit nodes are interposed between the end-hosts and for each end-to-end connection, a transit node can logically operate as an entry gateway, a relay or exit gateway. Packets from the sender enter the entry node and go to the exit node either directly or through one of a plurality of relay nodes. The exit node delivers the packets to the receiver. A multipath congestion control protocol is executed on the entry node to harness network capacity for large data transfers.
Type: Grant
Filed: March 26, 2007
Date of Patent: January 5, 2010
Assignee: NEC Laboratories America, Inc.
Inventors: Ravindranath Kokku, Aniruddha Bohra, Samrat Ganguly, Rauf Izmailov
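The entry node's role can be illustrated with a simple capacity-proportional scheduler. This is a hypothetical sketch, not the patented multipath congestion control protocol: the entry gateway splits a large transfer across the direct path and the relay paths in proportion to each path's currently available capacity.

```python
def schedule(total_packets, path_capacity):
    """Split a transfer across paths in proportion to available capacity.

    path_capacity: path name -> available capacity estimate.
    Returns a dict of packets to send on each path, summing to total_packets.
    """
    total_cap = sum(path_capacity.values())
    plan, assigned = {}, 0
    paths = sorted(path_capacity)              # deterministic iteration order
    for path in paths[:-1]:
        share = int(total_packets * path_capacity[path] / total_cap)
        plan[path] = share
        assigned += share
    plan[paths[-1]] = total_packets - assigned  # remainder keeps the sum exact
    return plan
```

In the real architecture these per-path shares would be driven by the congestion control feedback on each path rather than static capacity numbers, so the split adapts as relays become loaded.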
-
Publication number: 20090080377
Abstract: In accordance with the invention, a method includes i) first obtaining, for each AP, the hopping sequences of other interfering APs; and ii) determining for the AP a respective hopping sequence that maximizes each AP's throughput. Preferably, the determining step comprises, for each slot, the AP choosing a channel that minimizes the number of edges which violate a k-coloring property. In an exemplary embodiment, the method further includes the step of the AP selecting one of (a) a channel chosen uniformly at random from all such channels and (b) a channel that distributes interference as evenly as possible among neighboring APs.
Type: Application
Filed: September 24, 2007
Publication date: March 26, 2009
Applicant: NEC Laboratories America, Inc.
Inventors: Samrat Ganguly, Vishnu Navda, Aniruddha Bohra, Daniel S. Rubenstein
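The per-slot channel choice can be sketched as follows, under stated assumptions: interfering neighbors' hopping sequences are given as per-slot channel lists, and "k-coloring violations" for a candidate channel is counted here as the number of interfering neighbors already on that channel in that slot. Ties are broken by the lowest channel number rather than the random or interference-balancing selection the application describes.

```python
def choose_sequence(neighbor_sequences, channels, slots):
    """Build an AP's hopping sequence from interfering neighbors' sequences.

    neighbor_sequences: list of per-slot channel lists, one per interfering AP.
    For each slot, pick the channel minimizing same-channel neighbors
    (i.e. interference-graph edges violating the k-coloring property).
    """
    sequence = []
    for slot in range(slots):
        in_use = [seq[slot] for seq in neighbor_sequences]
        # Cost of a candidate = neighbors sharing it; tie-break on channel number.
        best = min(channels, key=lambda c: (in_use.count(c), c))
        sequence.append(best)
    return sequence
```

With two neighbors both on channel 1 in slot 0, for example, the AP avoids channel 1 entirely in that slot, which is exactly the edge-violation minimization the abstract describes.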
-
Publication number: 20090043823
Abstract: According to various embodiments of the invention, a system and method for controlling a file system are described. In some embodiments, a control plane interposes between a data plane user and a data plane, intercepts file system operations, and performs control plane operations upon the file system operations. In one such embodiment, the system and method is implemented between a data plane user that is a local file system user and a data plane that is a local file system. In another such embodiment, the system and method is implemented between a data plane user that is a client and a data plane that is a file server. Furthermore, for an embodiment where the control plane interposes between a client and a file server, the control plane can be implemented as a file system proxy. Control plane operations include, but are not limited to, observation, verification, and transformation of a file system operation.
Type: Application
Filed: April 14, 2008
Publication date: February 12, 2009
Inventors: Liviu Iftode, Stephen Smaldone, Aniruddha Bohra
-
Publication number: 20070230352
Abstract: A multipath routing architecture for large data transfers is disclosed. The architecture employs an overlay network that provides diverse paths for packets from communicating end hosts to utilize as much capacity as available across multiple paths while ensuring network-wide fair allocation of resources across competing data transfers. A set of transit nodes are interposed between the end-hosts and for each end-to-end connection, a transit node can logically operate as an entry gateway, a relay or exit gateway. Packets from the sender enter the entry node and go to the exit node either directly or through one of a plurality of relay nodes. The exit node delivers the packets to the receiver. A multipath congestion control protocol is executed on the entry node to harness network capacity for large data transfers.
Type: Application
Filed: March 26, 2007
Publication date: October 4, 2007
Applicant: NEC Laboratories America, Inc.
Inventors: Ravindranath Kokku, Aniruddha Bohra, Samrat Ganguly, Rauf Izmailov
-
Publication number: 20070177739
Abstract: Disclosed is a data replication technique for providing erasure encoded replication of large data sets over a geographically distributed replica set. The technique utilizes a multicast tree to store, forward, and erasure encode the data set. The erasure encoding of data may be performed at various locations within the multicast tree, including the source, intermediate nodes, and destination nodes. In one embodiment, the system comprises a source node for storing the original data set, a plurality of intermediate nodes, and a plurality of leaf nodes for storing the unique replica fragments. The nodes are configured as a multicast tree to convert the original data into the unique replica fragments by performing distributed erasure encoding at a plurality of levels of the multicast tree.
Type: Application
Filed: January 27, 2006
Publication date: August 2, 2007
Applicant: NEC Laboratories America, Inc.
Inventors: Samrat Ganguly, Aniruddha Bohra, Rauf Izmailov, Yoshihide Kikuchi
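The fragment-encoding step can be illustrated with a toy erasure code. This sketch uses a single XOR parity fragment (a real deployment would use a stronger code such as Reed-Solomon), and the distributed, multi-level encoding along the multicast tree is elided: the source splits the data set into k fragments plus one parity fragment, each leaf stores a unique fragment, and any k of the k+1 fragments reconstruct the data.

```python
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data, k):
    """Split into k equal fragments plus 1 XOR parity fragment (toy code)."""
    size = -(-len(data) // k)                 # ceiling division
    frags = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    return frags + [reduce(xor, frags)]       # parity = XOR of all data fragments

def reconstruct(pieces, k, length):
    """pieces: list of (index, fragment) with at least k of the k+1 fragments.

    Index k is the parity fragment; a single missing data fragment is the XOR
    of every other available fragment, since all k+1 fragments XOR to zero.
    """
    have = dict(pieces)
    if sum(1 for i in have if i < k) < k:
        missing = next(i for i in range(k) if i not in have)
        have[missing] = reduce(xor, have.values())
    return b"".join(have[i] for i in range(k))[:length]
```

The XOR scheme tolerates exactly one lost fragment; the patented technique's value lies in where the (generally stronger) encoding happens, i.e. spread across levels of the multicast tree rather than done entirely at the source.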
-
Publication number: 20070162462
Abstract: Traditional networked file systems like NFS do not extend to the wide area due to the network latency and dynamics introduced in the WAN environment. To address that problem, a wide-area networked file system is based on a traditional networked file system (NFS/CIFS) and extends to the WAN environment by introducing a file redirector infrastructure residing between the central file server and clients. The file redirector infrastructure is invisible to both the central server and clients, so that the change to NFS is minimal. That minimizes the interruption to the existing file service when deploying WireFS on top of NFS. The system includes an architecture for an enterprise-wide read/write wide area network file system, protocols and data structures for metadata and data management in this system, algorithms for history-based prefetching to minimize access latency in metadata operations, and a distributed randomized algorithm implementing a global LRU cache replacement scheme.
Type: Application
Filed: December 28, 2006
Publication date: July 12, 2007
Applicant: NEC Laboratories America, Inc.
Inventors: Hui Zhang, Aniruddha Bohra, Samrat Ganguly, Rauf Izmailov, Jian Liang