Patents by Inventor Srinivasa Aditya Akella
Srinivasa Aditya Akella has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11895177
Abstract: A method of automatically identifying state information in different middlebox programs first identifies relevant program portions by expanding outward from a packet processing loop to statements dependent either by control or data dependency on that packet processing loop. Persistent variables in the statements are then collected and optionally winnowed by whether they are “used” or modified by those statements. The identified state variables may be segregated according to flow-spaces and/or output function so that a request for state data may be tailored precisely to the necessary state data, greatly reducing network burden in state data transfer.
Type: Grant
Filed: September 30, 2016
Date of Patent: February 6, 2024
Assignee: Wisconsin Alumni Research Foundation
Inventors: Srinivasa Aditya Akella, Junaid Khalid, Aaron Robert Gember-Jacobson
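The abstract above describes a program-analysis procedure. The toy Python sketch below is not the patented method; the statement names, dependency graph, and variables are invented to illustrate the general idea of expanding outward from a packet-processing loop along dependency edges and then keeping only the persistent variables that the reached statements modify.

    from collections import deque

    def reachable_statements(packet_loop_stmts, dependencies):
        """Return all statements reachable from the packet-processing loop
        via control or data dependency edges (breadth-first expansion)."""
        seen = set(packet_loop_stmts)
        queue = deque(packet_loop_stmts)
        while queue:
            stmt = queue.popleft()
            for dep in dependencies.get(stmt, ()):
                if dep not in seen:
                    seen.add(dep)
                    queue.append(dep)
        return seen

    def candidate_state_variables(statements, persistent_vars, writes):
        """Keep only the persistent variables that the relevant statements modify."""
        touched = set()
        for stmt in statements:
            touched |= writes.get(stmt, set())
        return touched & persistent_vars

    if __name__ == "__main__":
        # Toy middlebox: a per-flow packet counter plus an unrelated config table.
        deps = {"loop": ["update_counter"], "update_counter": []}
        writes = {"update_counter": {"flow_counters"}}
        persistent = {"flow_counters", "config_table"}
        stmts = reachable_statements(["loop"], deps)
        print(candidate_state_variables(stmts, persistent, writes))  # {'flow_counters'}

Only "flow_counters" survives, because the unrelated "config_table" is never written by statements reachable from the packet loop.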
-
Patent number: 10637797
Abstract: Latency in responding to queries directed to geographically distributed data can be reduced by allocating the individual steps of a multi-step compute operation requested by the query among the geographically distributed computing devices so as to reduce the duration of shuffling of intermediate data among such devices, and, additionally, by pre-moving, prior to the receipt of the query, portions of the distributed data that are input to a first step of the multi-step compute operation, to, again, reduce the duration of the exchange of intermediate data. The pre-moving of input data and the adaptive allocation of intermediate steps are prioritized for high-value data sets. Additionally, a threshold increase in the quantity of data exchanged across network communications can be established to avoid incurring network communication usage without an attendant gain in latency reduction.
Type: Grant
Filed: May 22, 2019
Date of Patent: April 28, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Paramvir Bahl, Ganesh Ananthanarayanan, Srikanth Kandula, Peter Bodik, Qifan Pu, Srinivasa Aditya Akella
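The core idea above is placing the tasks of an intermediate step so the cross-site shuffle finishes quickly. The following minimal Python sketch is not the patented method; the site names, data sizes (GB), bandwidths (GB/s), and the assumption that every task reads an equal share of every site's intermediate data are all invented for illustration.

    DATA = {"us": 8.0, "eu": 2.0}          # intermediate data produced per site (GB)
    UP   = {"us": 1.0, "eu": 0.25}         # uplink bandwidth per site (GB/s)
    DOWN = {"us": 1.0, "eu": 0.25}         # downlink bandwidth per site (GB/s)

    def shuffle_duration(fraction):
        """Shuffle time if site s runs fraction[s] of the next step's tasks and
        each task reads an equal share of every site's intermediate data."""
        total = sum(DATA.values())
        worst = 0.0
        for s in DATA:
            upload = DATA[s] * (1.0 - fraction[s])        # data that must leave s
            download = fraction[s] * (total - DATA[s])    # data that must reach s
            worst = max(worst, upload / UP[s], download / DOWN[s])
        return worst

    # Coarse grid search over task splits between the two sites.
    best = min(
        ({"us": f, "eu": 1.0 - f} for f in [i / 20 for i in range(21)]),
        key=shuffle_duration,
    )
    print(best, round(shuffle_duration(best), 2))

With these made-up numbers the search pushes most tasks to the well-connected site, which is the kind of bottleneck-aware placement the abstract describes at a high level.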
-
Publication number: 20190273695
Abstract: Latency in responding to queries directed to geographically distributed data can be reduced by allocating the individual steps of a multi-step compute operation requested by the query among the geographically distributed computing devices so as to reduce the duration of shuffling of intermediate data among such devices, and, additionally, by pre-moving, prior to the receipt of the query, portions of the distributed data that are input to a first step of the multi-step compute operation, to, again, reduce the duration of the exchange of intermediate data. The pre-moving of input data and the adaptive allocation of intermediate steps are prioritized for high-value data sets. Additionally, a threshold increase in the quantity of data exchanged across network communications can be established to avoid incurring network communication usage without an attendant gain in latency reduction.
Type: Application
Filed: May 22, 2019
Publication date: September 5, 2019
Inventors: Paramvir Bahl, Ganesh Ananthanarayanan, Srikanth Kandula, Peter Bodik, Qifan Pu, Srinivasa Aditya Akella
-
Patent number: 10320708
Abstract: Latency in responding to queries directed to geographically distributed data can be reduced by allocating the individual steps of a multi-step compute operation requested by the query among the geographically distributed computing devices so as to reduce the duration of shuffling of intermediate data among such devices, and, additionally, by pre-moving, prior to the receipt of the query, portions of the distributed data that are input to a first step of the multi-step compute operation, to, again, reduce the duration of the exchange of intermediate data. The pre-moving of input data and the adaptive allocation of intermediate steps are prioritized for high-value data sets. Additionally, a threshold increase in the quantity of data exchanged across network communications can be established to avoid incurring network communication usage without an attendant gain in latency reduction.
Type: Grant
Filed: November 20, 2015
Date of Patent: June 11, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Paramvir Bahl, Ganesh Ananthanarayanan, Srikanth Kandula, Peter Bodik, Qifan Pu, Srinivasa Aditya Akella
-
Publication number: 20180095773
Abstract: A method of automatically identifying state information in different middlebox programs first identifies relevant program portions by expanding outward from a packet processing loop to statements dependent either by control or data dependency on that packet processing loop. Persistent variables in the statements are then collected and optionally winnowed by whether they are “used” or modified by those statements. The identified state variables may be segregated according to flow-spaces and/or output function so that a request for state data may be tailored precisely to the necessary state data, greatly reducing network burden in state data transfer.
Type: Application
Filed: September 30, 2016
Publication date: April 5, 2018
Inventors: Srinivasa Aditya Akella, Junaid Khalid, Aaron Robert Gember-Jacobson
-
Patent number: 9705785
Abstract: An enterprise computer system efficiently adjusts the number of middleboxes associated with the enterprise, for example, with changes in demand, by transferring not only flows of instructions but also middlebox states associated with those flows. Loss-less transfer, preventing the loss of packets and their state, and order-preserving transfer, preserving packet ordering, may be provided by a two-step transfer process in which packets are buffered during the transfer and are marked to be processed by a receiving middlebox before that middlebox processes ongoing packets for the given flow.
Type: Grant
Filed: December 19, 2014
Date of Patent: July 11, 2017
Assignee: Wisconsin Alumni Research Foundation
Inventors: Aaron Robert Gember-Jacobson, Srinivasa Aditya Akella, Chaithan M. Prakash, Raajay Viswanathan
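The two-step, order-preserving transfer described above can be pictured with a short Python sketch. This is a hypothetical simplification, not the patented mechanism: the class, method names, and packet strings are invented, and the only point shown is that marked, buffered packets are drained before newly arriving packets for the same flow.

    from collections import deque

    class ReceivingMiddlebox:
        def __init__(self):
            self.state = {}            # per-flow state installed by the transfer
            self.backlog = deque()     # packets buffered (and marked) during transfer
            self.log = []

        def install_state(self, flow, state):
            self.state[flow] = state

        def enqueue_marked(self, packet):
            """Packets captured and marked while the flow was in transit."""
            self.backlog.append(packet)

        def deliver(self, packet):
            """Ongoing traffic: drain the marked backlog first to preserve order."""
            while self.backlog:
                self.log.append(("backlog", self.backlog.popleft()))
            self.log.append(("live", packet))

    mb = ReceivingMiddlebox()
    mb.install_state("flow-1", {"pkt_count": 42})
    mb.enqueue_marked("p1")
    mb.enqueue_marked("p2")
    mb.deliver("p3")
    print(mb.log)   # buffered p1 and p2 are processed before the live p3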
-
Publication number: 20170149691
Abstract: Latency in responding to queries directed to geographically distributed data can be reduced by allocating the individual steps of a multi-step compute operation requested by the query among the geographically distributed computing devices so as to reduce the duration of shuffling of intermediate data among such devices, and, additionally, by pre-moving, prior to the receipt of the query, portions of the distributed data that are input to a first step of the multi-step compute operation, to, again, reduce the duration of the exchange of intermediate data. The pre-moving of input data and the adaptive allocation of intermediate steps are prioritized for high-value data sets. Additionally, a threshold increase in the quantity of data exchanged across network communications can be established to avoid incurring network communication usage without an attendant gain in latency reduction.
Type: Application
Filed: November 20, 2015
Publication date: May 25, 2017
Inventors: Paramvir Bahl, Ganesh Ananthanarayanan, Srikanth Kandula, Peter Bodik, Qifan Pu, Srinivasa Aditya Akella
-
Publication number: 20160344597
Abstract: A facility for managing a distributed system for delivering online services is described. For each of a plurality of distributed system components of a first type, the facility receives operating statistics for that component. For each of a plurality of distributed system components of a second type, the facility receives operating statistics for that component. The facility uses the received operating statistics for distributed system components of the first and second types to generate a model predicting operating statistics for the distributed system for a future period of time.
Type: Application
Filed: May 22, 2015
Publication date: November 24, 2016
Inventors: Ming Zhang, Hongqiang Liu, Jitendra D. Padhye, Ratul Mahajan, Srinivasa Aditya Akella, Raajay Viswanathan, Matthew J. Calder
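The abstract describes combining statistics from two kinds of components to predict future operating statistics. The small Python sketch below is only an invented illustration of that pattern, not the described facility: the component names ("edge proxies" and "backend servers"), the hourly numbers, and the choice of a plain least-squares trend are all assumptions.

    def fit_linear_trend(samples):
        """Ordinary least squares fit y = a + b*t over (t, y) samples."""
        n = len(samples)
        mean_t = sum(t for t, _ in samples) / n
        mean_y = sum(y for _, y in samples) / n
        num = sum((t - mean_t) * (y - mean_y) for t, y in samples)
        den = sum((t - mean_t) ** 2 for t, _ in samples)
        b = num / den
        a = mean_y - b * mean_t
        return a, b

    # Hourly request rates reported by components of the first type ("edge
    # proxies") and of the second type ("backend servers"); the model is fit
    # on their combined load.
    edge = [(0, 100), (1, 120), (2, 150), (3, 170)]
    backend = [(0, 80), (1, 90), (2, 110), (3, 130)]
    combined = [(t, e + bk) for (t, e), (_, bk) in zip(edge, backend)]
    a, b = fit_linear_trend(combined)
    print("predicted load at t=4:", a + b * 4)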
-
Patent number: 9438504
Abstract: A network employing multiple redundancy-aware routers that can eliminate the transmission of redundant data is greatly improved by steering redundant data preferentially into common data paths possibly contrary to other routing paradigms. By collecting redundant data in certain pathways, the effectiveness of the redundancy-aware routers is substantially increased.
Type: Grant
Filed: June 8, 2009
Date of Patent: September 6, 2016
Assignee: Wisconsin Alumni Research Foundation
Inventors: Srinivasa Aditya Akella, Ashok Anand, Srinivasan Seshan
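The routing bias described above can be illustrated with a toy Python sketch. It is not the patented routing scheme: the link names, candidate paths, and the simple "maximize overlap with links already carrying redundant copies" rule are invented stand-ins for the general idea of steering redundant traffic onto common paths.

    def pick_path(candidates, links_with_redundant_traffic):
        """Choose the candidate path sharing the most links with traffic that
        duplicates this flow's content; break ties by shorter path length."""
        def score(path):
            overlap = len(set(path) & links_with_redundant_traffic)
            return (-overlap, len(path))
        return min(candidates, key=score)

    candidates = [
        ["a-b", "b-d"],            # shortest path
        ["a-c", "c-e", "e-d"],     # longer, but overlaps with redundant traffic
    ]
    redundant_links = {"a-c", "c-e"}
    print(pick_path(candidates, redundant_links))   # the overlapping path wins

Choosing the longer, overlapping path is deliberately "contrary to other routing paradigms": the redundancy-aware routers on the shared links then see the repeated content and can suppress it.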
-
Publication number: 20160182360
Abstract: An enterprise computer system efficiently adjusts the number of middleboxes associated with the enterprise, for example, with changes in demand, by transferring not only flows of instructions but also middlebox states associated with those flows. Loss-less transfer, preventing the loss of packets and their state, and order-preserving transfer, preserving packet ordering, may be provided by a two-step transfer process in which packets are buffered during the transfer and are marked to be processed by a receiving middlebox before that middlebox processes ongoing packets for the given flow.
Type: Application
Filed: December 19, 2014
Publication date: June 23, 2016
Inventors: Aaron Robert Gember-Jacobson, Srinivasa Aditya Akella, Chaithan M. Prakash, Raajay Viswanathan, Robert Grandl, Junaid Khalid, Sourav Das
-
Patent number: 9104492
Abstract: A virtual network virtual machine may be implemented on a cloud computing facility to control communication among virtual machines executing applications and virtual machines executing middlebox functions. This virtual network virtual machine may provide for automatic scaling of middleboxes according to a heuristic algorithm that monitors the effectiveness of each middlebox on the network performance as application virtual machines are scaled. The virtual network virtual machine may also locate virtual machines in actual hardware to further optimize performance.
Type: Grant
Filed: September 4, 2012
Date of Patent: August 11, 2015
Assignee: Wisconsin Alumni Research Foundation
Inventors: Aaron Robert Gember, Robert Daniel Grandl, Theophilus Aderemi Benson, Ashok Anand, Srinivasa Aditya Akella
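A small Python sketch can make the scaling heuristic above concrete. This is not the patented algorithm: the middlebox tiers, the "benefit of the last added instance" numbers, and the diminishing-returns factor are invented; the only idea shown is growing the middlebox type whose recent scaling helped performance most as application VMs are added.

    def pick_tier_to_scale(tiers):
        """tiers: name -> (instances, observed benefit of the last added instance)."""
        return max(tiers, key=lambda name: tiers[name][1])

    def scale_with_apps(tiers, added_app_vms):
        """Add one middlebox instance to the most effective tier per new app VM."""
        for _ in range(added_app_vms):
            chosen = pick_tier_to_scale(tiers)
            count, benefit = tiers[chosen]
            tiers[chosen] = (count + 1, benefit * 0.9)   # assume diminishing returns
        return tiers

    tiers = {"firewall": (2, 0.05), "ids": (1, 0.20), "load_balancer": (2, 0.10)}
    print(scale_with_apps(tiers, added_app_vms=3))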
-
Publication number: 20140068602
Abstract: A virtual network virtual machine may be implemented on a cloud computing facility to control communication among virtual machines executing applications and virtual machines executing middlebox functions. This virtual network virtual machine may provide for automatic scaling of middleboxes according to a heuristic algorithm that monitors the effectiveness of each middlebox on the network performance as application virtual machines are scaled. The virtual network virtual machine may also locate virtual machines in actual hardware to further optimize performance.
Type: Application
Filed: September 4, 2012
Publication date: March 6, 2014
Inventors: Aaron Robert Gember, Robert Daniel Grandl, Theophilus Aderemi Benson, Ashok Anand, Srinivasa Aditya Akella
-
Patent number: 8509237
Abstract: A network employing redundancy-aware hardware may actively allocate decompression tasks among different devices along a single path to improve data throughput. The allocation can be performed by a hash or similar process operating on a header of the packets to distribute caching according to predefined ranges of hash values without significant additional communication overhead. Decompression of packets may be similarly distributed by marking shim values to match the earlier caching of antecedent packets. Nodes may use coordinated cache sizes and organizations to eliminate the need for separate cache protocol communications.
Type: Grant
Filed: June 26, 2009
Date of Patent: August 13, 2013
Assignee: Wisconsin Alumni Research Foundation
Inventors: Srinivasa Aditya Akella, Ashok Anand, Vyas Sekar
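The hash-range allocation described above is easy to sketch. The Python below is a hypothetical illustration, not the patented design: the node names, the header bytes, and the use of SHA-256 over the full hash space are assumptions standing in for "a hash or similar process" that assigns each packet's caching duty to one node on the path, with a shim naming that node for later decompression.

    import hashlib

    NODES = ["n1", "n2", "n3"]                     # nodes along one path, in order

    def owner(header: bytes) -> str:
        """Map a packet header into [0, 1) and return the node owning that range."""
        h = int(hashlib.sha256(header).hexdigest(), 16) / 2**256
        return NODES[int(h * len(NODES))]

    def forward(header: bytes, payload: bytes, caches: dict) -> str:
        """Cache the payload only at the owning node; return a shim naming it."""
        node = owner(header)
        caches.setdefault(node, {})[header] = payload
        return node                                 # shim: which node holds the copy

    caches = {}
    shim = forward(b"flow42|seq1", b"payload-bytes", caches)
    # A later redundant packet carries the shim, so only that node restores it:
    print(shim, caches[shim][b"flow42|seq1"])

Because every node applies the same hash over the same predefined ranges, no extra cache-coordination messages are needed, which is the point the last sentence of the abstract makes.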
-
Patent number: 7933745
Abstract: Disclosed is a method and system for determining one or more performance characteristics of a target server. A command is transmitted from a coordinator to a plurality of clients. The command instructs the plurality of clients to each transmit a request targeting a sub-system of said target server. A response time is then received from each client and a performance characteristic is determined from the received response times.
Type: Grant
Filed: April 8, 2008
Date of Patent: April 26, 2011
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Balachander Krishnamurthy, Srinivasa Aditya Akella, Pratap Ramamurthy, Vyas Sekar, Anees Shaikh
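The coordinator/client measurement pattern above can be shown in a few lines of Python. This is a hypothetical sketch rather than the patented system: the probe is a stand-in function that sleeps instead of issuing a real network request, and the client count, target name, and summary statistics are invented.

    import random
    import statistics
    import time

    def probe(target: str) -> float:
        """Stand-in for one client's request to a sub-system of the target
        server; returns the observed response time in seconds."""
        start = time.perf_counter()
        time.sleep(random.uniform(0.01, 0.03))     # pretend to wait on the server
        return time.perf_counter() - start

    def coordinate(clients: int, target: str) -> dict:
        """Send the command to every client and summarize the response times."""
        times = [probe(target) for _ in range(clients)]
        return {"median_s": statistics.median(times), "max_s": max(times)}

    print(coordinate(clients=5, target="db-subsystem-of-example-server"))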
-
Publication number: 20100329256
Abstract: A network employing redundancy-aware hardware may actively allocate decompression tasks among different devices along a single path to improve data throughput. The allocation can be performed by a hash or similar process operating on a header of the packets to distribute caching according to predefined ranges of hash values without significant additional communication overhead. Decompression of packets may be similarly distributed by marking shim values to match the earlier caching of antecedent packets. Nodes may use coordinated cache sizes and organizations to eliminate the need for separate cache protocol communications.
Type: Application
Filed: June 26, 2009
Publication date: December 30, 2010
Inventors: Srinivasa Aditya Akella, Ashok Anand, Vyas Sekar
-
Publication number: 20100254378
Abstract: A network employing multiple redundancy-aware routers that can eliminate the transmission of redundant data is greatly improved by steering redundant data preferentially into common data paths possibly contrary to other routing paradigms. By collecting redundant data in certain pathways, the effectiveness of the redundancy-aware routers is substantially increased.
Type: Application
Filed: June 8, 2009
Publication date: October 7, 2010
Inventors: Srinivasa Aditya Akella, Ashok Anand, Srinivasan Seshan
-
Publication number: 20100254377
Abstract: A network employing multiple redundancy-aware routers that can eliminate the transmission of redundant data is greatly improved by steering redundant data preferentially into common data paths possibly contrary to other routing paradigms. By collecting redundant data in certain pathways, the effectiveness of the redundancy-aware routers is substantially increased.
Type: Application
Filed: April 3, 2009
Publication date: October 7, 2010
Inventors: Srinivasa Aditya Akella, Ashok Anand, Srinivasan Seshan
-
Publication number: 20090094000
Abstract: Disclosed is a method and system for determining one or more performance characteristics of a target server. A command is transmitted from a coordinator to a plurality of clients. The command instructs the plurality of clients to each transmit a request targeting a sub-system of said target server. A response time is then received from each client and a performance characteristic is determined from the received response times.
Type: Application
Filed: April 8, 2008
Publication date: April 9, 2009
Inventors: Balachander Krishnamurthy, Srinivasa Aditya Akella, Pratap Ramamurthy, Vyas Sekar, Anees Shaikh