Patents by Inventor Masoud Faraki

Masoud Faraki has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240160938
    Abstract: Methods and systems for training a model include determining a dropout mask based on the gradient signal-to-noise ratio of the parameters of a neural network model. The neural network model is trained with parameters zeroed out according to the dropout mask. The dropout mask is iteratively updated, and training is repeated based on each updated mask.
    Type: Application
    Filed: November 6, 2023
    Publication date: May 16, 2024
    Inventors: Masoud Faraki, Xiang Yu, Mateusz Michalkiewicz
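The abstract above selects which parameters to zero out from the gradient signal-to-noise ratio (GSNR) of the model's parameters. Below is a minimal PyTorch sketch of that idea, assuming GSNR is estimated as squared mean over variance of per-batch gradients and that the lowest-GSNR fraction of weights is masked; the toy linear model, drop fraction, and update schedule are illustrative assumptions, not the claimed method.

```python
# Hedged sketch: GSNR-based dropout mask (assumption: GSNR = mean^2 / variance
# of per-batch gradients, bottom-k parameters zeroed; not the patented method verbatim).
import torch
import torch.nn as nn

def gsnr_mask(grad_samples, drop_fraction=0.2):
    """Binary mask zeroing the parameters with the lowest gradient
    signal-to-noise ratio (mean^2 / variance across batches)."""
    grads = torch.stack(grad_samples)          # (num_batches, num_params)
    mean, var = grads.mean(0), grads.var(0) + 1e-12
    gsnr = mean.pow(2) / var
    k = int(drop_fraction * gsnr.numel())
    mask = torch.ones_like(gsnr)
    mask[gsnr.argsort()[:k]] = 0.0             # zero out lowest-GSNR parameters
    return mask

model = nn.Linear(16, 4)                       # stand-in for the neural network model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(20):
    # Collect per-batch gradients of the weight matrix to estimate GSNR.
    grad_samples = []
    for _ in range(4):
        xb, yb = torch.randn(32, 16), torch.randint(0, 4, (32,))
        model.zero_grad()
        loss_fn(model(xb), yb).backward()
        grad_samples.append(model.weight.grad.detach().flatten().clone())
    mask = gsnr_mask(grad_samples).view_as(model.weight)

    # Train with the masked (zeroed-out) parameters; the mask is refreshed next iteration.
    x, y = torch.randn(32, 16), torch.randint(0, 4, (32,))
    with torch.no_grad():
        model.weight.mul_(mask)
    model.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```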
  • Patent number: 11977602
    Abstract: A method for training a model for face recognition is provided. The method forward trains a training batch of samples to form a face recognition model w(t), and calculates sample weights for the batch. The method obtains a training batch gradient with respect to model weights thereof and updates, using the gradient, the model w(t) to a face recognition model ŵ(t). The method forwards a validation batch of samples to the face recognition model ŵ(t). The method obtains a validation batch gradient, and updates, using the validation batch gradient and ŵ(t), a sample-level importance weight of samples in the training batch to obtain an updated sample-level importance weight. The method obtains a training batch upgraded gradient based on the updated sample-level importance weight of the training batch samples, and updates, using the upgraded gradient, the model w(t) to a trained model w(t+1) corresponding to a next iteration.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: May 7, 2024
    Assignee: NEC Corporation
    Inventors: Xiang Yu, Yi-Hsuan Tsai, Masoud Faraki, Ramin Moslemi, Manmohan Chandraker, Chang Liu
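The abstract describes a look-ahead scheme: a virtual update from w(t) to ŵ(t), a validation pass through ŵ(t), and a gradient-derived update of per-sample importance weights before the real step to w(t+1). The sketch below follows that outline in PyTorch under simplifying assumptions (a linear stand-in for the face recognition model, a learnable per-sample weight vector eps initialised to zero, and clamped negative validation gradients as the updated weights); it illustrates the general reweighting pattern, not the patented procedure.

```python
# Hedged sketch of sample-level reweighting via a validation look-ahead.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(128, 10)            # stand-in for a face recognition model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lr_inner = 0.1                        # assumed inner-step learning rate

for it in range(10):
    x_tr, y_tr = torch.randn(32, 128), torch.randint(0, 10, (32,))
    x_va, y_va = torch.randn(32, 128), torch.randint(0, 10, (32,))

    # 1) Per-sample importance weights (eps), initialised to zero.
    eps = torch.zeros(32, requires_grad=True)
    losses = F.cross_entropy(model(x_tr), y_tr, reduction="none")
    weighted_loss = (eps * losses).sum()

    # 2) Virtual update w(t) -> w_hat(t) using the training-batch gradient.
    grads = torch.autograd.grad(weighted_loss, list(model.parameters()),
                                create_graph=True)
    w_hat = [w - lr_inner * g for w, g in zip(model.parameters(), grads)]

    # 3) Validation loss under w_hat(t); its gradient w.r.t. eps yields the
    #    updated sample-level importance weights.
    logits_va = F.linear(x_va, w_hat[0], w_hat[1])
    val_loss = F.cross_entropy(logits_va, y_va)
    eps_grad = torch.autograd.grad(val_loss, eps)[0]
    w_sample = torch.clamp(-eps_grad, min=0)
    if w_sample.sum() > 0:
        w_sample = w_sample / w_sample.sum()

    # 4) Upgraded training-batch gradient with the new sample weights: w(t) -> w(t+1).
    opt.zero_grad()
    ((w_sample.detach() *
      F.cross_entropy(model(x_tr), y_tr, reduction="none")).sum()).backward()
    opt.step()
```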
  • Patent number: 11947626
    Abstract: A method for improving face recognition from unseen domains by learning semantically meaningful representations is presented. The method includes obtaining face images with associated identities from a plurality of datasets, randomly selecting two datasets of the plurality of datasets to train a model, sampling batch face images and their corresponding labels, sampling triplet samples including one anchor face image, a sample face image from a same identity, and a sample face image from a different identity than that of the one anchor face image, performing a forward pass by using the samples of the selected two datasets, finding representations of the face images by using a backbone convolutional neural network (CNN), generating covariances from the representations of the face images and the backbone CNN, the covariances made in different spaces by using positive pairs and negative pairs, and employing the covariances to compute a cross-domain similarity loss function.
    Type: Grant
    Filed: November 5, 2021
    Date of Patent: April 2, 2024
    Assignee: NEC Corporation
    Inventors: Masoud Faraki, Xiang Yu, Yi-Hsuan Tsai, Yumin Suh, Manmohan Chandraker
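As a rough illustration of the covariance-based cross-domain loss sketched in the abstract, the snippet below computes covariances of positive-pair and negative-pair difference vectors in two domains and penalises their Frobenius distance alongside an ordinary triplet loss. The toy backbone, the specific alignment term, and the loss weighting are assumptions for brevity and do not reproduce the claimed formulation.

```python
# Hedged sketch: cross-domain similarity loss from pair-difference covariances.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pair_covariance(anchor, other):
    """Covariance of difference vectors between paired embeddings."""
    d = anchor - other                        # (N, D) difference vectors
    d = d - d.mean(0, keepdim=True)
    return d.t() @ d / max(d.size(0) - 1, 1)

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))  # toy backbone CNN stand-in

def domain_batch(n=16):
    """Toy (anchor, positive, negative) face images for one domain."""
    return [torch.randn(n, 3, 32, 32) for _ in range(3)]

(a1, p1, n1), (a2, p2, n2) = domain_batch(), domain_batch()
f = lambda x: F.normalize(backbone(x), dim=1)

# Covariances in the "positive" and "negative" spaces for each domain.
cov_pos_1, cov_neg_1 = pair_covariance(f(a1), f(p1)), pair_covariance(f(a1), f(n1))
cov_pos_2, cov_neg_2 = pair_covariance(f(a2), f(p2)), pair_covariance(f(a2), f(n2))

# Cross-domain similarity loss: align second-order statistics across domains,
# plus an ordinary triplet loss over the concatenated batches.
align = (cov_pos_1 - cov_pos_2).norm() ** 2 + (cov_neg_1 - cov_neg_2).norm() ** 2
triplet = F.triplet_margin_loss(f(torch.cat([a1, a2])),
                                f(torch.cat([p1, p2])),
                                f(torch.cat([n1, n2])), margin=0.2)
loss = triplet + 0.1 * align
loss.backward()
```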
  • Patent number: 11710346
    Abstract: Methods and systems for training a neural network include generating an image of a mask. A copy of an image is generated from an original set of training data. The copy is altered to add the image of the mask to a face detected within the copy. An augmented set of training data is generated that includes the original set of training data and the altered copy. A neural network model is trained to recognize masked faces using the augmented set of training data.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: July 25, 2023
    Inventors: Manmohan Chandraker, Ting Wang, Xiang Xu, Francesco Pittaluga, Gaurav Sharma, Yi-Hsuan Tsai, Masoud Faraki, Yuheng Chen, Yue Tian, Ming-Fang Huang, Jian Fang
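A small sketch of the augmentation idea in this patent: generate a mask image, paste it over the lower half of a detected face, and add the altered copy to the training set. The fixed face box, the solid-colour mask patch, and the tensor layout below are illustrative assumptions.

```python
# Hedged sketch of mask augmentation for training a masked-face recognizer.
import torch

def add_synthetic_mask(image, face_box):
    """Return an altered copy with a mask image drawn over the lower face.

    image:    (3, H, W) float tensor
    face_box: (x1, y1, x2, y2) face detection in pixel coordinates (assumed given)
    """
    x1, y1, x2, y2 = face_box
    masked = image.clone()
    mask_top = y1 + (y2 - y1) // 2                  # cover nose and mouth
    mask_color = torch.tensor([0.3, 0.5, 0.9]).view(3, 1, 1)
    masked[:, mask_top:y2, x1:x2] = mask_color      # the "image of a mask"
    return masked

# Build the augmented training set: originals plus altered (masked) copies.
originals = [(torch.rand(3, 112, 112), label) for label in range(4)]
face_boxes = [(20, 20, 92, 100)] * len(originals)   # assumed face-detector output
augmented = originals + [(add_synthetic_mask(img, box), lbl)
                         for (img, lbl), box in zip(originals, face_boxes)]
# `augmented` can then feed any face-recognition training loop.
```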
  • Publication number: 20230196122
    Abstract: Systems and methods include generating a hypernetwork configured to be trained for a plurality of tasks; receiving, as a tuple, a task preference vector identifying a hierarchical priority for the plurality of tasks and a resource constraint; finding tree sub-structures and the corresponding modulation of features for every tuple within an N-stream anchor network; optimizing a branching-regularized loss function to train an edge hypernet; and training a weight hypernet while keeping the anchor net and the edge hypernet fixed.
    Type: Application
    Filed: August 31, 2022
    Publication date: June 22, 2023
    Inventors: Yumin Suh, Samuel Schulter, Xiang Yu, Masoud Faraki, Manmohan Chandraker, Dripta Raychaudhuri
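A very rough sketch of one piece of this publication: an edge hypernet that maps a (task preference, resource constraint) tuple to branching choices over an anchor network, trained with a branching-regularised loss. The layer counts, the placeholder task term, and the sharing regulariser below are assumptions made for illustration; the claimed N-stream anchor network and weight hypernet are only indicated in comments.

```python
# Hedged sketch: tiny "edge hypernet" producing soft tree sub-structures
# from a (task-preference, budget) tuple, with a branching-regularised loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_tasks, num_layers = 3, 4

class EdgeHypernet(nn.Module):
    """Maps (preference vector, resource budget) -> per-layer, per-task
    distributions over which parent branch each task reuses."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_tasks + 1, 64), nn.ReLU(),
            nn.Linear(64, num_layers * num_tasks * num_tasks))

    def forward(self, preference, budget):
        tup = torch.cat([preference, budget.view(1)])
        logits = self.net(tup).view(num_layers, num_tasks, num_tasks)
        return F.softmax(logits, dim=-1)     # soft tree sub-structure

edge_hypernet = EdgeHypernet()
opt = torch.optim.Adam(edge_hypernet.parameters(), lr=1e-3)

preference = torch.tensor([0.6, 0.3, 0.1])   # hierarchical task priority
budget = torch.tensor(0.5)                   # resource constraint in [0, 1]

for step in range(100):
    branching = edge_hypernet(preference, budget)      # (layers, tasks, parents)
    # Placeholder task term: prefer decisive (low-entropy) branching for
    # high-priority tasks; a real system plugs per-task losses in here.
    entropy = -(branching * branching.clamp_min(1e-8).log()).sum(-1)
    task_term = (preference * entropy.mean(0)).sum()
    # Resource term: encourage tasks to share parent branches when the
    # budget is tight (the branching regulariser).
    sharing = branching.mean(dim=1)                    # parent usage per layer
    branch_cost = -(sharing * sharing).sum()           # low when tasks share
    loss = task_term + (1.0 - budget) * branch_cost
    opt.zero_grad()
    loss.backward()
    opt.step()
# A separate "weight hypernet" would then be trained with the anchor network
# and this edge hypernet kept fixed, as the abstract describes.
```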
  • Publication number: 20230153572
    Abstract: A computer-implemented method for model training is provided. The method includes receiving, by a hardware processor, sets of images, each set corresponding to a respective task. The method further includes training, by the hardware processor, a task-based neural network classifier having a center and a covariance matrix for each of a plurality of classes in a last layer of the task-based neural network classifier and a plurality of convolutional layers preceding the last layer, by using a similarity between an image feature of a last convolutional layer from among the plurality of convolutional layers and the center and the covariance matrix for a given one of the plurality of classes, the similarity minimizing an impact of a data model forgetting problem.
    Type: Application
    Filed: October 21, 2022
    Publication date: May 18, 2023
    Inventors: Masoud Faraki, Yi-Hsuan Tsai, Xiang Yu, Samuel Schulter, Yumin Suh, Christian Simon
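The classifier described above keeps a center and a covariance matrix per class in its last layer and scores features by their similarity to those statistics. The sketch below implements one plausible reading, a Mahalanobis-style head with a diagonal covariance per class; the parameterisation and the toy backbone are assumptions.

```python
# Hedged sketch: per-class center + covariance head scored by a
# Mahalanobis-style similarity (diagonal covariance assumed for brevity).
import torch
import torch.nn as nn

class CenterCovarianceHead(nn.Module):
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        # Diagonal covariance per class, parameterised by its log for positivity.
        self.log_var = nn.Parameter(torch.zeros(num_classes, feat_dim))

    def forward(self, feats):                          # feats: (N, D)
        diff = feats.unsqueeze(1) - self.centers       # (N, C, D)
        inv_var = torch.exp(-self.log_var)             # (C, D)
        # Negative Mahalanobis distance (plus log-det term) as the class similarity.
        return -0.5 * (diff.pow(2) * inv_var).sum(-1) - 0.5 * self.log_var.sum(-1)

backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = CenterCovarianceHead(feat_dim=8, num_classes=5)

x, y = torch.rand(16, 3, 32, 32), torch.randint(0, 5, (16,))
logits = head(backbone(x))                             # similarity to each class
loss = nn.functional.cross_entropy(logits, y)
loss.backward()
```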
  • Publication number: 20220147735
    Abstract: A method for employing facial information in unsupervised person re-identification is presented. The method includes extracting, by a body feature extractor, body features from a first data stream, extracting, by a head feature extractor, head features from a second data stream, outputting a body descriptor vector from the body feature extractor, outputting a head descriptor vector from the head feature extractor, and concatenating the body descriptor vector and the head descriptor vector to enable a model to generate a descriptor vector.
    Type: Application
    Filed: November 5, 2021
    Publication date: May 12, 2022
    Inventors: Yumin Suh, Xiang Yu, Yi-Hsuan Tsai, Masoud Faraki, Manmohan Chandraker
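A minimal sketch of the two-stream descriptor in this application: a body feature extractor and a head feature extractor each produce a descriptor vector, and the two are concatenated. The tiny CNNs and crop sizes below are placeholders.

```python
# Hedged sketch: concatenated body + head descriptors for person re-identification.
import torch
import torch.nn as nn

def small_cnn(out_dim):
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(16, out_dim))

body_extractor = small_cnn(out_dim=128)   # first stream: full-body crops
head_extractor = small_cnn(out_dim=64)    # second stream: head/face crops

body_crops = torch.rand(8, 3, 256, 128)   # person bounding boxes
head_crops = torch.rand(8, 3, 64, 64)     # head regions from the same people

body_descriptor = body_extractor(body_crops)                        # (8, 128)
head_descriptor = head_extractor(head_crops)                        # (8, 64)
descriptor = torch.cat([body_descriptor, head_descriptor], dim=1)   # (8, 192)
# `descriptor` is what an unsupervised re-ID pipeline (e.g. clustering-based
# pseudo-labelling) would consume downstream.
```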
  • Publication number: 20220148189
    Abstract: Methods and systems for training a model include combining data from multiple datasets, the datasets having different respective label spaces. Relationships between labels in the different label spaces are identified. A neural network model is trained, using the combined data and the identified relationships, with a class-relational binary cross-entropy loss to generate a unified model.
    Type: Application
    Filed: November 5, 2021
    Publication date: May 12, 2022
    Inventors: Yi-Hsuan Tsai, Masoud Faraki, Yumin Suh, Sparsh Garg, Manmohan Chandraker, Dongwan Kim
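One way to picture the class-relational binary cross-entropy mentioned above is to keep a relation matrix over the unified label space and exclude classes related to the ground-truth label from the negative term. The sketch below does exactly that under assumed label relations (person/rider, car/vehicle); the patented loss may weight relations differently.

```python
# Hedged sketch: class-relational BCE over a unified label space.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Unified label space across two datasets: 0 "person", 1 "rider", 2 "car",
# 3 "vehicle".  "rider" relates to "person", "car" relates to "vehicle".
num_classes = 4
related = torch.eye(num_classes)
related[0, 1] = related[1, 0] = 1.0   # person <-> rider
related[2, 3] = related[3, 2] = 1.0   # car    <-> vehicle

def class_relational_bce(logits, labels):
    """BCE over the unified space; classes related to the ground truth are
    excluded from the negative term so coarser label spaces do not conflict."""
    target = F.one_hot(labels, num_classes).float()
    weight = torch.ones_like(target)
    weight[related[labels] > 0] = 0.0      # do not penalise related classes...
    weight[target > 0] = 1.0               # ...but keep the positive class
    return F.binary_cross_entropy_with_logits(logits, target, weight=weight)

model = nn.Linear(32, num_classes)          # unified model stand-in
x = torch.randn(16, 32)
y = torch.randint(0, num_classes, (16,))    # labels mapped into the unified space
loss = class_relational_bce(model(x), y)
loss.backward()
```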
  • Publication number: 20220147765
    Abstract: A method for improving face recognition from unseen domains by learning semantically meaningful representations is presented. The method includes obtaining face images with associated identities from a plurality of datasets, randomly selecting two datasets of the plurality of datasets to train a model, sampling batch face images and their corresponding labels, sampling triplet samples including one anchor face image, a sample face image from a same identity, and a sample face image from a different identity than that of the one anchor face image, performing a forward pass by using the samples of the selected two datasets, finding representations of the face images by using a backbone convolutional neural network (CNN), generating covariances from the representations of the face images and the backbone CNN, the covariances made in different spaces by using positive pairs and negative pairs, and employing the covariances to compute a cross-domain similarity loss function.
    Type: Application
    Filed: November 5, 2021
    Publication date: May 12, 2022
    Inventors: Masoud Faraki, Xiang Yu, Yi-Hsuan Tsai, Yumin Suh, Manmohan Chandraker
  • Publication number: 20220147767
    Abstract: A method for training a model for face recognition is provided. The method forward trains a training batch of samples to form a face recognition model w(t), and calculates sample weights for the batch. The method obtains a training batch gradient with respect to model weights thereof and updates, using the gradient, the model w(t) to a face recognition model ŵ(t). The method forwards a validation batch of samples to the face recognition model ŵ(t). The method obtains a validation batch gradient, and updates, using the validation batch gradient and ŵ(t), a sample-level importance weight of samples in the training batch to obtain an updated sample-level importance weight. The method obtains a training batch upgraded gradient based on the updated sample-level importance weight of the training batch samples, and updates, using the upgraded gradient, the model w(t) to a trained model w(t+1) corresponding to a next iteration.
    Type: Application
    Filed: November 8, 2021
    Publication date: May 12, 2022
    Inventors: Xiang Yu, Yi-Hsuan Tsai, Masoud Faraki, Ramin Moslemi, Manmohan Chandraker, Chang Liu
  • Publication number: 20220121953
    Abstract: A method for multi-task learning via gradient split for rich human analysis is presented. The method includes extracting images from training data having a plurality of datasets, each dataset associated with one task, feeding the training data into a neural network model including a feature extractor and task-specific heads, wherein the feature extractor has a shared component and a task-specific component, dividing filters of the deeper convolutional layers of the feature extractor into N groups, N being the number of tasks, assigning one task to each of the N groups, and manipulating gradients so that each task loss updates only one subset of filters.
    Type: Application
    Filed: October 7, 2021
    Publication date: April 21, 2022
    Inventors: Yumin Suh, Xiang Yu, Masoud Faraki, Manmohan Chandraker, Weijian Deng
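The gradient manipulation in this abstract can be pictured as follows: the deeper convolutional layer's filters are split into N groups, one per task, and each task's loss only updates its own group while shared layers receive gradients from every task. The two-task toy network and manual gradient masking below are illustrative assumptions.

```python
# Hedged sketch: gradient split so each task loss updates only its filter group.
import torch
import torch.nn as nn

num_tasks = 2
shared = nn.Conv2d(3, 8, 3, padding=1)               # shared shallow layer
deep = nn.Conv2d(8, 8, 3, padding=1)                 # deeper layer: filters split by task
heads = nn.ModuleList([nn.Linear(8, 5) for _ in range(num_tasks)])
params = (list(shared.parameters()) + list(deep.parameters())
          + list(heads.parameters()))
opt = torch.optim.SGD(params, lr=0.01)

# Assign filters 0-3 to task 0 and filters 4-7 to task 1.
filter_groups = [torch.arange(0, 4), torch.arange(4, 8)]

x = torch.rand(4, 3, 32, 32)
labels = [torch.randint(0, 5, (4,)) for _ in range(num_tasks)]

opt.zero_grad()
accumulated = torch.zeros_like(deep.weight)
for task in range(num_tasks):
    feat = torch.relu(deep(torch.relu(shared(x))))
    pooled = feat.mean(dim=(2, 3))
    loss = nn.functional.cross_entropy(heads[task](pooled), labels[task])
    loss.backward()                                   # fills .grad for all params
    # Keep only the gradient rows of this task's filter group in the deep layer.
    keep = torch.zeros_like(deep.weight)
    keep[filter_groups[task]] = 1.0
    accumulated += deep.weight.grad * keep
    deep.weight.grad.zero_()                          # discard the rest
deep.weight.grad = accumulated                        # split gradients in place
opt.step()
```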
  • Publication number: 20220108226
    Abstract: A method for employing a general label space voting-based differentially private federated learning (DPFL) framework is presented. The method includes labeling a first subset of unlabeled data from a first global server, to generate first pseudo-labeled data, by employing a first voting-based DPFL computation where each agent trains a local agent model by using private local data associated with the agent, labeling a second subset of unlabeled data from a second global server, to generate second pseudo-labeled data, by employing a second voting-based DPFL computation where each agent maintains a data-independent feature extractor, and training a global model by using the first and second pseudo-labeled data to provide provable differential privacy (DP) guarantees for both instance-level and agent-level privacy regimes.
    Type: Application
    Filed: October 1, 2021
    Publication date: April 7, 2022
    Inventors: Xiang Yu, Yi-Hsuan Tsai, Francesco Pittaluga, Masoud Faraki, Manmohan Chandraker, Yuqing Zhu
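As a simplified picture of the voting-based labelling step, the sketch below has each agent's local model vote on an unlabeled sample, perturbs the vote histogram with Laplace noise, and trains a global model on the resulting pseudo-labels (PATE-style aggregation used as a stand-in). The noise scale and toy models are assumptions, and no formal differential-privacy accounting is included.

```python
# Hedged sketch: noisy-vote pseudo-labelling followed by global-model training.
import torch
import torch.nn as nn

num_agents, num_classes = 5, 3
agents = [nn.Linear(16, num_classes) for _ in range(num_agents)]  # local agent models
global_model = nn.Linear(16, num_classes)
opt = torch.optim.SGD(global_model.parameters(), lr=0.1)

def noisy_vote_label(x, noise_scale=1.0):
    """Each agent votes with its local model; Laplace noise on the vote
    histogram yields a differentially private pseudo-label."""
    votes = torch.zeros(num_classes)
    for agent in agents:
        with torch.no_grad():
            votes[agent(x).argmax()] += 1.0
    noise = torch.distributions.Laplace(0.0, noise_scale).sample(votes.shape)
    return int((votes + noise).argmax())

# Label a batch of the server's unlabeled data by private voting, then train
# the global model on the resulting pseudo-labels.
unlabeled = torch.randn(32, 16)
pseudo_labels = torch.tensor([noisy_vote_label(x) for x in unlabeled])
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(global_model(unlabeled), pseudo_labels)
    loss.backward()
    opt.step()
```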
  • Publication number: 20210374468
    Abstract: Methods and systems for training a neural network include generating an image of a mask. A copy of an image is generated from an original set of training data. The copy is altered to add the image of the mask to a face detected within the copy. An augmented set of training data is generated that includes the original set of training data and the altered copy. A neural network model is trained to recognize masked faces using the augmented set of training data.
    Type: Application
    Filed: May 26, 2021
    Publication date: December 2, 2021
    Inventors: Manmohan Chandraker, Ting Wang, Xiang Xu, Francesco Pittaluga, Gaurav Sharma, Yi-Hsuan Tsai, Masoud Faraki, Yuheng Chen, Yue Tian, Ming-Fang Huang, Jian Fang