Patents by Inventor Praneeth Vepakomma

Praneeth Vepakomma has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11843586
    Abstract: Disclosed is a method that includes training, at a client, a part of a deep learning network up to a split layer of the client. Based on an output of the split layer, the method includes completing, at a server, training of the deep learning network by forward propagating the output received at a split layer of the server to a last layer of the server. The server calculates a weighted loss function for the client at the last layer and stores the calculated loss function. After each respective client of a plurality of clients has a respective loss function stored, the server averages the plurality of respective weighted client loss functions and back propagates gradients based on the average loss value from the last layer of the server to the split layer of the server and transmits just the server split layer gradients to the respective clients.
    Type: Grant
    Filed: August 29, 2022
    Date of Patent: December 12, 2023
    Assignee: TripleBlind, Inc.
    Inventors: Gharib Gharibi, Ravi Patel, Babak Poorebrahim Gilkalaye, Praneeth Vepakomma, Greg Storm, Riddhiman Das
  • Publication number: 20220417225
    Abstract: Disclosed is a method that includes training, at a client, a part of a deep learning network up to a split layer of the client. Based on an output of the split layer, the method includes completing, at a server, training of the deep learning network by forward propagating the output received at a split layer of the server to a last layer of the server. The server calculates a weighted loss function for the client at the last layer and stores the calculated loss function. After each respective client of a plurality of clients has a respective loss function stored, the server averages the plurality of respective weighted client loss functions and back propagates gradients based on the average loss value from the last layer of the server to the split layer of the server and transmits just the server split layer gradients to the respective clients.
    Type: Application
    Filed: August 29, 2022
    Publication date: December 29, 2022
    Inventors: Gharib Gharibi, Ravi Patel, Babak Poorebrahim Gilkalaye, Praneeth Vepakomma, Greg Storm, Riddhiman Das
  • Patent number: 11481635
    Abstract: A distributed deep learning network may prevent an attacker from reconstructing raw data from activation outputs of an intermediate layer of the network. To achieve this, the loss function of the network may tend to reduce distance correlation between raw data and the activation outputs. For instance, the loss function may be the sum of two terms, where the first term is weighted distance correlation between raw data and activation outputs of a split layer of the network, and the second term is weighted categorical cross entropy of actual labels and label predictions. Distance correlation with the entire raw data may be minimized. Alternatively, distance correlation with only certain features of the raw data may be minimized, in order to ensure attribute-level privacy. In some cases, a client computer calculates decorrelated representations of raw data before sharing information about the data with external computers.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: October 25, 2022
    Assignee: Massachusetts Institute of Technology
    Inventors: Praneeth Vepakomma, Abhishek Singh, Otkrist Gupta, Ramesh Raskar
  • Patent number: 11431688
    Abstract: Disclosed is a method that includes training, at a client, a part of a deep learning network up to a split layer of the client. Based on an output of the split layer, the method includes completing, at a server, training of the deep learning network by forward propagating the output received at a split layer of the server to a last layer of the server. The server calculates a weighted loss function for the client at the last layer and stores the calculated loss function. After each respective client of a plurality of clients has a respective loss function stored, the server averages the plurality of respective weighted client loss functions and back propagates gradients based on the average loss value from the last layer of the server to the split layer of the server and transmits just the server split layer gradients to the respective clients.
    Type: Grant
    Filed: October 12, 2021
    Date of Patent: August 30, 2022
    Assignee: TripleBlind, Inc.
    Inventors: Gharib Gharibi, Ravi Patel, Babak Poorebrahim Gilkalaye, Praneeth Vepakomma, Greg Storm, Riddhiman Das
  • Publication number: 20220029971
    Abstract: Disclosed is a method that includes training, at a client, a part of a deep learning network up to a split layer of the client. Based on an output of the split layer, the method includes completing, at a server, training of the deep learning network by forward propagating the output received at a split layer of the server to a last layer of the server. The server calculates a weighted loss function for the client at the last layer and stores the calculated loss function. After each respective client of a plurality of clients has a respective loss function stored, the server averages the plurality of respective weighted client loss functions and back propagates gradients based on the average loss value from the last layer of the server to the split layer of the server and transmits just the server split layer gradients to the respective clients.
    Type: Application
    Filed: October 12, 2021
    Publication date: January 27, 2022
    Inventors: Gharib Gharibi, Ravi Patel, Babak Poorebrahim Gilkalaye, Praneeth Vepakomma, Greg Storm, Riddhiman Das
  • Publication number: 20200349443
    Abstract: A distributed deep learning network may prevent an attacker from reconstructing raw data from activation outputs of an intermediate layer of the network. To achieve this, the loss function of the network may tend to reduce distance correlation between raw data and the activation outputs. For instance, the loss function may be the sum of two terms, where the first term is weighted distance correlation between raw data and activation outputs of a split layer of the network, and the second term is weighted categorical cross entropy of actual labels and label predictions. Distance correlation with the entire raw data may be minimized. Alternatively, distance correlation with only certain features of the raw data may be minimized, in order to ensure attribute-level privacy. In some cases, a client computer calculates decorrelated representations of raw data before sharing information about the data with external computers.
    Type: Application
    Filed: April 29, 2020
    Publication date: November 5, 2020
    Inventors: Praneeth Vepakomma, Abhishek Singh, Otkrist Gupta, Ramesh Raskar
  • Publication number: 20180082202
    Abstract: A device and method for generating a crime type combination based on historical incident data. The device includes a memory and an electronic processor. The memory includes historical incident data, which includes a plurality of incidents, each having a crime type. The electronic processor is configured to obtain a sample set of incidents from the historical incident data for each crime type of a plurality of unique crime type combinations. The electronic processor is configured to, for each crime type combination, compute a distance correlation between the sample sets of incidents of the crime types forming the crime type combination. The electronic processor is configured to select a crime type combination from the plurality of crime type combinations based on the distance correlations of the plurality of crime type combinations, and generate a crime prediction geographic area for the selected crime type combination.
    Type: Application
    Filed: September 20, 2016
    Publication date: March 22, 2018
    Inventor: Praneeth Vepakomma
  • Publication number: 20150073759
    Abstract: An apparatus, method, and computer program product are disclosed for improving prediction accuracy in a spatio-temporal prediction system. A data module receives spatio-temporal data comprising one or more of a time and a location. An estimation module generates one or more prediction probabilities for the spatio-temporal data. A sampling module generates one or more resamples of the prediction probabilities.
    Type: Application
    Filed: September 8, 2014
    Publication date: March 12, 2015
    Inventors: Praneeth Vepakomma, Eric Copp, Andrew Reynolds
  • Publication number: 20150066828
    Abstract: An apparatus, method, and computer program product are disclosed for correcting inconsistencies in a spatio-temporal prediction system. A data module receives event-prediction data comprising a plurality of prediction probabilities. The plurality of prediction probabilities includes one or more ordering inconsistencies. A ranking module calculates one or more event-prediction rankings based on the event-prediction data while adjusting for the one or more ordering inconsistencies. A probability-ordering module orders the prediction probabilities based on the one or more event-prediction rankings.
    Type: Application
    Filed: August 27, 2014
    Publication date: March 5, 2015
    Inventor: Praneeth Vepakomma
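The split-learning training loop recited in the abstracts above (e.g., patent 11843586) can be illustrated with a minimal NumPy sketch. All specifics here are assumptions made for illustration, not details from the patents: single linear layers on each side of the split, softmax cross entropy as the loss, and uniform per-client loss weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for illustration.
n_clients, d_in, d_split, n_classes, batch = 3, 8, 4, 2, 16
lr = 0.1

# Each client holds the layers up to its split layer (here one linear map);
# the server holds the layers from its split layer to the last layer.
client_W = [0.1 * rng.normal(size=(d_in, d_split)) for _ in range(n_clients)]
server_W = 0.1 * rng.normal(size=(d_split, n_classes))
loss_weights = np.ones(n_clients) / n_clients  # per-client weights (assumed uniform)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

stored_losses, cache = [], []
for c in range(n_clients):
    X = rng.normal(size=(batch, d_in))
    y = rng.integers(0, n_classes, size=batch)
    A = X @ client_W[c]                        # client: forward to its split layer
    P = softmax(A @ server_W)                  # server: split layer -> last layer
    ce = -np.log(P[np.arange(batch), y] + 1e-12).mean()
    stored_losses.append(loss_weights[c] * ce)  # server stores the weighted loss
    cache.append((X, y, A, P))

# Weights sum to 1, so the sum of weighted losses is their weighted average.
avg_loss = float(np.sum(stored_losses))

# Back-propagate the averaged loss from the server's last layer to its split
# layer; only the split-layer gradients are sent back to each client.
dW_server = np.zeros_like(server_W)
for c, (X, y, A, P) in enumerate(cache):
    dlogits = P.copy()
    dlogits[np.arange(batch), y] -= 1.0        # d(cross-entropy)/d(logits)
    dlogits *= loss_weights[c] / batch
    dW_server += A.T @ dlogits
    dA = dlogits @ server_W.T                  # gradient at the server split layer
    client_W[c] -= lr * (X.T @ dA)             # client finishes backprop locally
server_W -= lr * dW_server
```

Note that the raw data `X` never leaves the client in the claimed method; it appears in one loop here only because this single-process sketch plays both roles.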
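Patent 11481635 above builds its loss around distance correlation between raw data and split-layer activations. A hedged sketch of that two-term loss form, using the standard sample distance correlation (Székely-style double centering); the weighting coefficients `alpha1`/`alpha2` and all function names are invented for illustration:

```python
import numpy as np

def pairwise_dist(X):
    # Euclidean distance matrix between rows of X.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.sqrt(np.maximum(d2, 0.0))

def distance_correlation(X, Z):
    # Sample distance correlation: double-center each distance matrix,
    # then normalize distance covariance by the distance variances.
    def centered(D):
        return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()
    A, B = centered(pairwise_dist(X)), centered(pairwise_dist(Z))
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0

def privacy_loss(X, Z, y_true, y_pred, alpha1=0.1, alpha2=1.0, eps=1e-12):
    # First term: weighted distance correlation between raw data X and
    # split-layer activations Z (privacy). Second term: weighted categorical
    # cross entropy of labels and predictions (utility).
    ce = -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=1))
    return alpha1 * distance_correlation(X, Z) + alpha2 * ce
```

Minimizing this loss pushes the split-layer activations to be statistically decorrelated from the raw inputs while still supporting classification; restricting `X` to a subset of columns would give the attribute-level variant the abstract mentions.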