Patents by Inventor Otkrist Gupta

Otkrist Gupta has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO). Illustrative code sketches of several of the techniques summarized in the abstracts follow the listing.

  • Publication number: 20240112318
    Abstract: The present disclosure generally relates to systems that include an artificial intelligence (AI) architecture for determining whether an image is manipulated. The architecture can include a constrained convolutional layer, separable convolutional layers, maximum-pooling layers, a global average-pooling layer, and a fully connected layer. In one specific example, the constrained convolutional layer can detect one or more image-manipulation fingerprints with respect to an image and can generate feature maps corresponding to the image. The global average-pooling layer can generate a vector of feature values by averaging the feature maps. The fully connected layer can then generate, based on the vector of feature values, an indication of whether the image was manipulated or not manipulated.
    Type: Application
    Filed: December 5, 2023
    Publication date: April 4, 2024
    Applicant: Lendbuzz, Inc.
    Inventors: Otkrist Gupta, Dan Raviv, Hailey James
  • Patent number: 11875494
    Abstract: The present disclosure generally relates to systems that include an artificial intelligence (AI) architecture for determining whether an image is manipulated. The architecture can include a constrained convolutional layer, separable convolutional layers, maximum-pooling layers, a global average-pooling layer, and a fully connected layer. In one specific example, the constrained convolutional layer can detect one or more image-manipulation fingerprints with respect to an image and can generate feature maps corresponding to the image. The global average-pooling layer can generate a vector of feature values by averaging the feature maps. The fully connected layer can then generate, based on the vector of feature values, an indication of whether the image was manipulated or not manipulated.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: January 16, 2024
    Assignee: Lendbuzz, Inc.
    Inventors: Otkrist Gupta, Dan Raviv, Hailey James
  • Patent number: 11669737
    Abstract: A deep neural network may be trained on the data of one or more entities, also known as Alices. An outside computing entity, also known as a Bob, may assist in these computations without receiving access to Alices' data. Data privacy may be preserved by employing a “split” neural network. The network may comprise an Alice part and a Bob part. The Alice part may comprise at least three neural layers, and the Bob part may comprise at least two neural layers. When training on data of an Alice, that Alice may input her data into the Alice part, perform forward propagation through the Alice part, and then pass the output activations of the final layer of the Alice part to Bob. Bob may then forward propagate through the Bob part. Similarly, backpropagation may proceed backwards through the Bob part and then through the Alice part of the network.
    Type: Grant
    Filed: July 21, 2020
    Date of Patent: June 6, 2023
    Assignee: Massachusetts Institute of Technology
    Inventors: Otkrist Gupta, Ramesh Raskar
  • Publication number: 20220414854
    Abstract: The present disclosure generally relates to systems that include an artificial intelligence (AI) architecture for determining whether an image is manipulated. The architecture can include a constrained convolutional layer, separable convolutional layers, maximum-pooling layers, a global average-pooling layer, and a fully connected layer. In one specific example, the constrained convolutional layer can detect one or more image-manipulation fingerprints with respect to an image and can generate feature maps corresponding to the image. The global average-pooling layer can generate a vector of feature values by averaging the feature maps. The fully connected layer can then generate, based on the vector of feature values, an indication of whether the image was manipulated or not manipulated.
    Type: Application
    Filed: June 23, 2021
    Publication date: December 29, 2022
    Inventors: Otkrist Gupta, Dan Raviv, Hailey James
  • Patent number: 11481635
    Abstract: A distributed deep learning network may prevent an attacker from reconstructing raw data from activation outputs of an intermediate layer of the network. To achieve this, the loss function of the network may tend to reduce distance correlation between raw data and the activation outputs. For instance, the loss function may be the sum of two terms, where the first term is weighted distance correlation between raw data and activation outputs of a split layer of the network, and the second term is weighted categorical cross entropy of actual labels and label predictions. Distance correlation with the entire raw data may be minimized. Alternatively, distance correlation with only certain features of the raw data may be minimized, in order to ensure attribute-level privacy. In some cases, a client computer calculates decorrelated representations of raw data before sharing information about the data with external computers.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: October 25, 2022
    Assignee: Massachusetts Institute of Technology
    Inventors: Praneeth Vepakomma, Abhishek Singh, Otkrist Gupta, Ramesh Raskar
  • Publication number: 20220309365
    Abstract: The present disclosure generally relates to techniques for constructing an artificial-intelligence (AI) architecture and for executing it to detect whether characters in a digital document have been manipulated. The AI architecture can be configured to classify each character in a digital document as manipulated or not manipulated by constructing a graph for each character, generating features for each node of the graph, and inputting a vector representation of the graph into a trained machine-learning model to generate the character classification.
    Type: Application
    Filed: March 29, 2021
    Publication date: September 29, 2022
    Inventors: Hailey James, Otkrist Gupta, Dan Raviv
  • Publication number: 20200349435
    Abstract: A deep neural network may be trained on the data of one or more entities, also known as Alices. An outside computing entity, also known as a Bob, may assist in these computations without receiving access to Alices' data. Data privacy may be preserved by employing a “split” neural network. The network may comprise an Alice part and a Bob part. The Alice part may comprise at least three neural layers, and the Bob part may comprise at least two neural layers. When training on data of an Alice, that Alice may input her data into the Alice part, perform forward propagation through the Alice part, and then pass the output activations of the final layer of the Alice part to Bob. Bob may then forward propagate through the Bob part. Similarly, backpropagation may proceed backwards through the Bob part and then through the Alice part of the network.
    Type: Application
    Filed: July 21, 2020
    Publication date: November 5, 2020
    Inventors: Otkrist Gupta, Ramesh Raskar
  • Publication number: 20200349443
    Abstract: A distributed deep learning network may prevent an attacker from reconstructing raw data from activation outputs of an intermediate layer of the network. To achieve this, the loss function of the network may tend to reduce distance correlation between raw data and the activation outputs. For instance, the loss function may be the sum of two terms, where the first term is weighted distance correlation between raw data and activation outputs of a split layer of the network, and the second term is weighted categorical cross entropy of actual labels and label predictions. Distance correlation with the entire raw data may be minimized. Alternatively, distance correlation with only certain features of the raw data may be minimized, in order to ensure attribute-level privacy. In some cases, a client computer calculates decorrelated representations of raw data before sharing information about the data with external computers.
    Type: Application
    Filed: April 29, 2020
    Publication date: November 5, 2020
    Inventors: Praneeth Vepakomma, Abhishek Singh, Otkrist Gupta, Ramesh Raskar
  • Patent number: 10755172
    Abstract: A deep neural network may be trained on the data of one or more entities, also known as Alices. An outside computing entity, also known as a Bob, may assist in these computations without receiving access to Alices' data. Data privacy may be preserved by employing a “split” neural network. The network may comprise an Alice part and a Bob part. The Alice part may comprise at least three neural layers, and the Bob part may comprise at least two neural layers. When training on data of an Alice, that Alice may input her data into the Alice part, perform forward propagation through the Alice part, and then pass the output activations of the final layer of the Alice part to Bob. Bob may then forward propagate through the Bob part. Similarly, backpropagation may proceed backwards through the Bob part and then through the Alice part of the network.
    Type: Grant
    Filed: June 22, 2017
    Date of Patent: August 25, 2020
    Assignee: Massachusetts Institute of Technology
    Inventors: Otkrist Gupta, Ramesh Raskar
  • Publication number: 20170372201
    Abstract: A deep neural network may be trained on the data of one or more entities, also known as Alices. An outside computing entity, also known as a Bob, may assist in these computations without receiving access to Alices' data. Data privacy may be preserved by employing a “split” neural network. The network may comprise an Alice part and a Bob part. The Alice part may comprise at least three neural layers, and the Bob part may comprise at least two neural layers. When training on data of an Alice, that Alice may input her data into the Alice part, perform forward propagation through the Alice part, and then pass the output activations of the final layer of the Alice part to Bob. Bob may then forward propagate through the Bob part. Similarly, backpropagation may proceed backwards through the Bob part and then through the Alice part of the network.
    Type: Application
    Filed: June 22, 2017
    Publication date: December 28, 2017
    Inventors: Otkrist Gupta, Ramesh Raskar
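
Illustrative code sketches

The image-manipulation detector recited in publication 20240112318, patent 11875494, and publication 20220414854 stacks a constrained convolutional layer, separable convolutional layers, maximum-pooling layers, a global average-pooling layer, and a fully connected layer. The sketch below is a minimal reading of the abstract only: the layer widths, kernel sizes, activations, and the use of PyTorch are assumptions, and the centre-tap constraint shown is one common way to implement a constrained convolution, not necessarily the claimed one.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConstrainedConv2d(nn.Conv2d):
    """Convolution whose kernels are re-normalized on every forward pass so the
    centre tap is -1 and the remaining taps sum to 1 (an assumed, common way to
    make the first layer respond to manipulation fingerprints rather than content)."""
    def forward(self, x):
        w = self.weight
        k = w.shape[-1] // 2
        mask = torch.ones_like(w)
        mask[:, :, k, k] = 0.0
        w = w * mask                                      # zero the centre tap
        w = w / (w.sum(dim=(2, 3), keepdim=True) + 1e-8)  # off-centre taps sum to 1
        w = w - (1.0 - mask)                              # fix the centre tap to -1
        return F.conv2d(x, w, self.bias, self.stride, self.padding)

class SeparableConv2d(nn.Module):
    """Depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, 1)
    def forward(self, x):
        return F.relu(self.pointwise(self.depthwise(x)))

class ManipulationDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.constrained = ConstrainedConv2d(3, 8, kernel_size=5, padding=2)
        self.sep1 = SeparableConv2d(8, 32)
        self.sep2 = SeparableConv2d(32, 64)
        self.pool = nn.MaxPool2d(2)            # maximum-pooling layers
        self.gap = nn.AdaptiveAvgPool2d(1)     # global average pooling
        self.fc = nn.Linear(64, 2)             # fully connected classifier
    def forward(self, x):
        x = self.constrained(x)                # feature maps of manipulation fingerprints
        x = self.pool(self.sep1(x))
        x = self.pool(self.sep2(x))
        v = self.gap(x).flatten(1)             # vector of averaged feature values
        return self.fc(v)                      # manipulated vs. not manipulated

logits = ManipulationDetector()(torch.randn(1, 3, 128, 128))
```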
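
The split ("Alice/Bob") training procedure described in patents 11669737 and 10755172 (and applications 20200349435 and 20170372201) can be sketched as a single training step. The layer widths, the optimizer, the assumption that Bob holds the labels, and the use of PyTorch are assumptions made for illustration; only the hand-off of cut-layer activations and gradients follows the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Alice's part (at least three layers) stays with the data owner; Bob's part
# (at least two layers) stays with the outside computing entity.
alice = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                      nn.Linear(64, 32), nn.ReLU(),
                      nn.Linear(32, 16))
bob = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 2))

opt_alice = torch.optim.SGD(alice.parameters(), lr=0.1)
opt_bob = torch.optim.SGD(bob.parameters(), lr=0.1)

x = torch.randn(8, 20)             # Alice's private data; never sent to Bob
y = torch.randint(0, 2, (8,))      # labels (assumed here to be available to Bob)

# Alice: forward propagate through her part and send only the final-layer activations.
cut = alice(x)
received = cut.detach().requires_grad_(True)   # what Bob actually receives

# Bob: forward propagate through his part, compute the loss, and backpropagate
# down to the activations he received.
loss = F.cross_entropy(bob(received), y)
opt_bob.zero_grad()
loss.backward()
opt_bob.step()

# Bob returns the gradient at the cut layer; Alice completes backpropagation locally.
opt_alice.zero_grad()
cut.backward(received.grad)
opt_alice.step()
```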
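
The privacy-preserving loss described in patent 11481635 and application 20200349443 is the sum of a weighted distance-correlation term (between raw data and split-layer activations) and a weighted categorical cross-entropy term. The sketch below shows that composite loss on a toy split network; the network, the weights alpha and beta, and the use of PyTorch are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distance_correlation(x, z, eps=1e-9):
    """Sample distance correlation between two batches of vectors (rows are samples):
    double-centred pairwise-distance matrices, then the usual dCov / sqrt(dVar*dVar) ratio."""
    def centred(d):
        return d - d.mean(dim=0, keepdim=True) - d.mean(dim=1, keepdim=True) + d.mean()
    A = centred(torch.cdist(x, x))
    B = centred(torch.cdist(z, z))
    dcov2 = (A * B).mean()
    denom = ((A * A).mean() * (B * B).mean()).sqrt().clamp_min(eps)
    return (dcov2 / denom).clamp_min(0.0).sqrt()

client = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 16))  # layers up to the split
head = nn.Linear(16, 2)                                                  # remainder of the network
alpha, beta = 0.5, 1.0                                                   # relative term weights (assumed)

x = torch.randn(64, 20)
y = torch.randint(0, 2, (64,))

z = client(x)                                  # split-layer activations that would be shared
loss = alpha * distance_correlation(x, z) + beta * F.cross_entropy(head(z), y)
loss.backward()                                # gradients now discourage correlation with raw data
```

Attribute-level privacy, as mentioned in the abstract, would correspond to passing only the sensitive feature columns of x (for example x[:, sensitive_cols], a hypothetical index set) into distance_correlation instead of the full input.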
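
Application 20220309365 classifies each character of a digital document by building a per-character graph, attaching features to each node, and feeding a vector representation of the graph to a trained model. The sketch below is one hedged reading of that pipeline: the neighbourhood definition, the node features, the single mean-aggregation step, the mean-pool readout, and the classifier are all assumptions.

```python
import torch
import torch.nn as nn

def propagate(node_feats, adj):
    """One message-passing step: each node averages the features of its neighbours
    through a row-normalised adjacency matrix."""
    deg = adj.sum(dim=1, keepdim=True).clamp_min(1.0)
    return (adj / deg) @ node_feats

# Trained model that maps a graph vector to manipulated / not-manipulated logits.
classifier = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# Toy graph for one character: node 0 is the character under test; nodes 1-3 are
# neighbouring characters (an assumed neighbourhood definition).
node_feats = torch.randn(4, 8)        # per-node features (e.g. geometry, intensity statistics)
adj = torch.tensor([[1., 1., 1., 0.],
                    [1., 1., 1., 1.],
                    [1., 1., 1., 1.],
                    [0., 1., 1., 1.]])

h = propagate(node_feats, adj)        # node features after one aggregation step
graph_vector = h.mean(dim=0)          # vector representation of the whole graph
logits = classifier(graph_vector)     # character classification: manipulated or not
```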