Patents by Inventor Otkrist Gupta
Otkrist Gupta has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240112318
Abstract: The present disclosure generally relates to systems that include an artificial intelligence (AI) architecture for determining whether an image is manipulated. The architecture can include a constrained convolutional layer, separable convolutional layers, maximum-pooling layers, a global average-pooling layer, and a fully connected layer. In one specific example, the constrained convolutional layer can detect one or more image-manipulation fingerprints with respect to an image and can generate feature maps corresponding to the image. The global average-pooling layer can generate a vector of feature values by averaging the feature maps. The fully connected layer can then generate, based on the vector of feature values, an indication of whether the image was manipulated or not manipulated.
Type: Application
Filed: December 5, 2023
Publication date: April 4, 2024
Applicant: Lendbuzz, Inc.
Inventors: Otkrist Gupta, Dan Raviv, Hailey James
-
Patent number: 11875494
Abstract: The present disclosure generally relates to systems that include an artificial intelligence (AI) architecture for determining whether an image is manipulated. The architecture can include a constrained convolutional layer, separable convolutional layers, maximum-pooling layers, a global average-pooling layer, and a fully connected layer. In one specific example, the constrained convolutional layer can detect one or more image-manipulation fingerprints with respect to an image and can generate feature maps corresponding to the image. The global average-pooling layer can generate a vector of feature values by averaging the feature maps. The fully connected layer can then generate, based on the vector of feature values, an indication of whether the image was manipulated or not manipulated.
Type: Grant
Filed: June 23, 2021
Date of Patent: January 16, 2024
Assignee: Lendbuzz, Inc.
Inventors: Otkrist Gupta, Dan Raviv, Hailey James
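The abstract's pipeline (constrained convolution filters, then feature maps reduced by global average pooling to a vector for classification) can be illustrated with a small NumPy sketch. The specific filter constraint used here (center tap fixed to -1, remaining taps normalized to sum to 1, so each filter acts as a prediction-error filter) is an assumption borrowed from common manipulation-detection networks; the abstract does not state which constraint the patent uses.

```python
import numpy as np

def constrain_filter(w):
    """Project a filter onto an assumed prediction-error constraint:
    center tap = -1, remaining taps sum to 1."""
    w = w.copy()
    c = w.shape[0] // 2
    w[c, c] = 0.0
    w /= w.sum()       # normalize so the off-center taps sum to 1
    w[c, c] = -1.0     # fix the center tap to -1
    return w

def global_average_pool(feature_maps):
    """Average each feature map to one value, yielding a feature vector."""
    return feature_maps.mean(axis=(1, 2))

rng = np.random.default_rng(0)
w = constrain_filter(rng.normal(size=(5, 5)))
maps = rng.normal(size=(8, 32, 32))   # 8 feature maps, 32x32 each
vec = global_average_pool(maps)       # length-8 vector for the final layer
print(vec.shape)
```

A fully connected layer would then map `vec` to a manipulated / not-manipulated score; the separable convolutions and max-pooling stages between the constrained layer and the pooling stage are omitted here for brevity.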
-
Patent number: 11669737
Abstract: A deep neural network may be trained on the data of one or more entities, also known as Alices. An outside computing entity, also known as a Bob, may assist in these computations without receiving access to Alices' data. Data privacy may be preserved by employing a “split” neural network. The network may comprise an Alice part and a Bob part. The Alice part may comprise at least three neural layers, and the Bob part may comprise at least two neural layers. When training on the data of an Alice, that Alice may input her data into the Alice part, perform forward propagation through the Alice part, and then pass the output activations of the final layer of the Alice part to Bob. Bob may then forward propagate through the Bob part. Similarly, backpropagation may proceed backwards through the Bob part and then through the Alice part of the network.
Type: Grant
Filed: July 21, 2020
Date of Patent: June 6, 2023
Assignee: Massachusetts Institute of Technology
Inventors: Otkrist Gupta, Ramesh Raskar
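The split training step in the abstract above can be sketched with a toy two-part network in NumPy. The layer sizes, ReLU activations, and squared-error loss are illustrative assumptions; the essential point is that only the activations at the split cross from Alice to Bob, and only the gradient at the split returns, never the raw data `x`.

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda z: np.maximum(z, 0.0)

# Alice holds the first part of the network (at least three layers).
Wa = [rng.normal(scale=0.1, size=s) for s in [(16, 32), (32, 32), (32, 8)]]
# Bob holds the remaining part (at least two layers).
Wb = [rng.normal(scale=0.1, size=s) for s in [(8, 8), (8, 1)]]

x = rng.normal(size=(4, 16))   # Alice's private raw data (never sent to Bob)
y = rng.normal(size=(4, 1))

# --- Alice: forward propagate through her part; send split activations ---
acts = [x]
for W in Wa:
    acts.append(relu(acts[-1] @ W))
split_out = acts[-1]           # this is all Bob receives

# --- Bob: forward propagate through his part and compute the loss ---
h = relu(split_out @ Wb[0])
pred = h @ Wb[1]
loss = ((pred - y) ** 2).mean()

# --- Bob: backpropagate through his part; return the gradient at the split ---
g = 2 * (pred - y) / y.size
g_h = (g @ Wb[1].T) * (h > 0)
grad_at_split = g_h @ Wb[0].T  # sent back to Alice, who continues backprop
print(round(float(loss), 4), grad_at_split.shape)
```

Alice would then backpropagate `grad_at_split` through her three layers and update `Wa` locally, completing one training step without exposing `x`.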
-
Publication number: 20220414854
Abstract: The present disclosure generally relates to systems that include an artificial intelligence (AI) architecture for determining whether an image is manipulated. The architecture can include a constrained convolutional layer, separable convolutional layers, maximum-pooling layers, a global average-pooling layer, and a fully connected layer. In one specific example, the constrained convolutional layer can detect one or more image-manipulation fingerprints with respect to an image and can generate feature maps corresponding to the image. The global average-pooling layer can generate a vector of feature values by averaging the feature maps. The fully connected layer can then generate, based on the vector of feature values, an indication of whether the image was manipulated or not manipulated.
Type: Application
Filed: June 23, 2021
Publication date: December 29, 2022
Inventors: Otkrist Gupta, Dan Raviv, Hailey James
-
Patent number: 11481635
Abstract: A distributed deep learning network may prevent an attacker from reconstructing raw data from the activation outputs of an intermediate layer of the network. To achieve this, the loss function of the network may tend to reduce the distance correlation between the raw data and the activation outputs. For instance, the loss function may be the sum of two terms, where the first term is a weighted distance correlation between the raw data and the activation outputs of a split layer of the network, and the second term is a weighted categorical cross-entropy of the actual labels and label predictions. Distance correlation with the entire raw data may be minimized. Alternatively, distance correlation with only certain features of the raw data may be minimized, in order to ensure attribute-level privacy. In some cases, a client computer calculates decorrelated representations of raw data before sharing information about the data with external computers.
Type: Grant
Filed: April 29, 2020
Date of Patent: October 25, 2022
Assignee: Massachusetts Institute of Technology
Inventors: Praneeth Vepakomma, Abhishek Singh, Otkrist Gupta, Ramesh Raskar
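The two-term loss described in the abstract can be sketched in NumPy: the sample distance correlation (the standard Székely-style estimator is assumed here, since the abstract does not spell out the estimator) plus a categorical cross-entropy term, each with an illustrative weight.

```python
import numpy as np

def dcor(X, Y):
    """Sample distance correlation between the rows of X and Y."""
    def centered_dist(M):
        D = np.linalg.norm(M[:, None, :] - M[None, :, :], axis=-1)
        return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()
    A, B = centered_dist(X), centered_dist(Y)
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return 0.0 if denom == 0 else float(np.sqrt(max(dcov2 / denom, 0.0)))

def privacy_loss(raw, split_acts, preds, labels, alpha=0.5, beta=0.5):
    """Weighted distance correlation (privacy term) plus weighted
    categorical cross-entropy (utility term); weights are illustrative."""
    eps = 1e-12
    xent = -(labels * np.log(preds + eps)).sum(axis=1).mean()
    return alpha * dcor(raw, split_acts) + beta * xent

rng = np.random.default_rng(2)
X = rng.normal(size=(32, 5))        # stand-in for raw data
Z = rng.normal(size=(32, 8))        # stand-in for split-layer activations
labels = np.eye(3)[rng.integers(0, 3, size=32)]
preds = np.full((32, 3), 1.0 / 3)   # uniform label predictions
total = privacy_loss(X, Z, preds, labels)
print(round(dcor(X, X), 3), round(total, 3))
```

Training against `privacy_loss` pushes the split-layer activations toward low distance correlation with the raw data while keeping label predictions accurate; restricting `raw` to selected columns would correspond to the attribute-level variant mentioned in the abstract.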
-
Publication number: 20220309365
Abstract: The present disclosure generally relates to techniques for constructing an artificial-intelligence (AI) architecture, and for executing that architecture to detect whether characters in a digital document have been manipulated. The AI architecture can be configured to classify each character in a digital document as manipulated or not manipulated by constructing a graph for each character, generating features for each node of the graph, and inputting a vector representation of the graph into a trained machine-learning model to generate the character classification.
Type: Application
Filed: March 29, 2021
Publication date: September 29, 2022
Inventors: Hailey James, Otkrist Gupta, Dan Raviv
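A minimal sketch of the graph-to-vector-to-classifier flow described above. The neighborhood construction, the two node features, the single mean-aggregation step, and the logistic classifier are all illustrative assumptions; the abstract does not specify the node features, the graph-pooling scheme, or the machine-learning model.

```python
import numpy as np

def graph_to_vector(node_feats, adj):
    """One message-passing step (mean of neighbor features appended to each
    node's own features), then mean-pooled into a fixed-length graph vector."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    neighbor_mean = (adj @ node_feats) / deg
    node_repr = np.concatenate([node_feats, neighbor_mean], axis=1)
    return node_repr.mean(axis=0)

def classify(vec, w, b):
    """Logistic score: probability that the character was manipulated."""
    return 1.0 / (1.0 + np.exp(-(vec @ w + b)))

# Toy graph: one character node linked to 3 neighboring characters.
feats = np.array([[1.0, 0.2],   # e.g. stroke width, relative font size
                  [0.9, 0.1],
                  [1.1, 0.3],
                  [0.8, 0.2]])
adj = np.array([[0, 1, 1, 1],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0]], dtype=float)
vec = graph_to_vector(feats, adj)   # fixed-length representation of the graph
score = classify(vec, np.zeros(4), 0.0)
print(vec.shape, round(score, 2))
```

In a trained system `w` and `b` would come from fitting the model on labeled manipulated/genuine characters; with zero weights, as here, the score is simply the 0.5 decision boundary.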
-
Publication number: 20200349435
Abstract: A deep neural network may be trained on the data of one or more entities, also known as Alices. An outside computing entity, also known as a Bob, may assist in these computations without receiving access to Alices' data. Data privacy may be preserved by employing a “split” neural network. The network may comprise an Alice part and a Bob part. The Alice part may comprise at least three neural layers, and the Bob part may comprise at least two neural layers. When training on the data of an Alice, that Alice may input her data into the Alice part, perform forward propagation through the Alice part, and then pass the output activations of the final layer of the Alice part to Bob. Bob may then forward propagate through the Bob part. Similarly, backpropagation may proceed backwards through the Bob part and then through the Alice part of the network.
Type: Application
Filed: July 21, 2020
Publication date: November 5, 2020
Inventors: Otkrist Gupta, Ramesh Raskar
-
Publication number: 20200349443
Abstract: A distributed deep learning network may prevent an attacker from reconstructing raw data from the activation outputs of an intermediate layer of the network. To achieve this, the loss function of the network may tend to reduce the distance correlation between the raw data and the activation outputs. For instance, the loss function may be the sum of two terms, where the first term is a weighted distance correlation between the raw data and the activation outputs of a split layer of the network, and the second term is a weighted categorical cross-entropy of the actual labels and label predictions. Distance correlation with the entire raw data may be minimized. Alternatively, distance correlation with only certain features of the raw data may be minimized, in order to ensure attribute-level privacy. In some cases, a client computer calculates decorrelated representations of raw data before sharing information about the data with external computers.
Type: Application
Filed: April 29, 2020
Publication date: November 5, 2020
Inventors: Praneeth Vepakomma, Abhishek Singh, Otkrist Gupta, Ramesh Raskar
-
Patent number: 10755172
Abstract: A deep neural network may be trained on the data of one or more entities, also known as Alices. An outside computing entity, also known as a Bob, may assist in these computations without receiving access to Alices' data. Data privacy may be preserved by employing a “split” neural network. The network may comprise an Alice part and a Bob part. The Alice part may comprise at least three neural layers, and the Bob part may comprise at least two neural layers. When training on the data of an Alice, that Alice may input her data into the Alice part, perform forward propagation through the Alice part, and then pass the output activations of the final layer of the Alice part to Bob. Bob may then forward propagate through the Bob part. Similarly, backpropagation may proceed backwards through the Bob part and then through the Alice part of the network.
Type: Grant
Filed: June 22, 2017
Date of Patent: August 25, 2020
Assignee: Massachusetts Institute of Technology
Inventors: Otkrist Gupta, Ramesh Raskar
-
Publication number: 20170372201
Abstract: A deep neural network may be trained on the data of one or more entities, also known as Alices. An outside computing entity, also known as a Bob, may assist in these computations without receiving access to Alices' data. Data privacy may be preserved by employing a “split” neural network. The network may comprise an Alice part and a Bob part. The Alice part may comprise at least three neural layers, and the Bob part may comprise at least two neural layers. When training on the data of an Alice, that Alice may input her data into the Alice part, perform forward propagation through the Alice part, and then pass the output activations of the final layer of the Alice part to Bob. Bob may then forward propagate through the Bob part. Similarly, backpropagation may proceed backwards through the Bob part and then through the Alice part of the network.
Type: Application
Filed: June 22, 2017
Publication date: December 28, 2017
Inventors: Otkrist Gupta, Ramesh Raskar