Patents by Inventor Renqiang Min

Renqiang Min has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230154167
    Abstract: A method for implementing source-free domain adaptive detection is presented. The method includes, in a pretraining phase, applying strong data augmentation to labeled source images to produce perturbed labeled source images and training an object detection model by using the perturbed labeled source images to generate a source-only model. The method further includes, in an adaptation phase, training a self-trained mean teacher model by generating a weakly augmented image and multiple strongly augmented images from unlabeled target images, generating a plurality of region proposals from the weakly augmented image, selecting a region proposal from the plurality of region proposals as a pseudo ground truth, detecting, by the self-trained mean teacher model, object boxes and selecting pseudo ground truth boxes by employing a confidence constraint and a consistency constraint, and training a student model by using one of the multiple strongly augmented images jointly with an object detection loss.
    Type: Application
    Filed: October 14, 2022
    Publication date: May 18, 2023
    Inventors: Kai Li, Renqiang Min, Hans Peter Graf
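    Code sketch: The entry above selects pseudo ground-truth boxes from the mean teacher's detections using a confidence constraint and a consistency constraint. The snippet below is a minimal illustrative sketch of that selection step in Python/NumPy, not the patented implementation; the thresholds, the box format, and the helper names (`iou`, `select_pseudo_boxes`) are assumptions.

```python
# Minimal sketch (assumption, not the patented method): keep teacher boxes that are
# both confident and re-detected consistently in another augmented view.
# Box format: [x1, y1, x2, y2]; scores in [0, 1].
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def select_pseudo_boxes(teacher_boxes, teacher_scores, other_view_boxes,
                        conf_thresh=0.8, iou_thresh=0.5):
    """Return boxes that pass both the confidence and the consistency check."""
    keep = []
    for box, score in zip(teacher_boxes, teacher_scores):
        if score < conf_thresh:                                       # confidence constraint
            continue
        if any(iou(box, b) >= iou_thresh for b in other_view_boxes):  # consistency constraint
            keep.append(box)
    return np.array(keep)

boxes = np.array([[10, 10, 50, 50], [60, 60, 90, 90]], dtype=float)
scores = np.array([0.92, 0.55])
other = np.array([[12, 11, 49, 52]], dtype=float)
print(select_pseudo_boxes(boxes, scores, other))   # only the first box survives
```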
  • Publication number: 20230153606
    Abstract: A method is provided that includes training a CLIP model to learn embeddings of images and text from matched image-text pairs, where the text represents image attributes. The method trains a StyleGAN on the images in a training dataset of matched image-text pairs. The method also trains, using a CLIP-model-guided contrastive loss that attracts matched text embedding pairs and repels unmatched pairs, a text-to-direction model to predict, from an input text and a random latent code, a text direction that is semantically aligned with the input text. A triplet loss is used to learn text directions using the embeddings learned by the trained CLIP model. The method generates, by the trained StyleGAN, positive and negative synthesized images by respectively adding and subtracting, in the latent space of the trained StyleGAN, the text direction corresponding to each word in the training dataset.
    Type: Application
    Filed: October 19, 2022
    Publication date: May 18, 2023
    Inventors: Renqiang Min, Kai Li, Hans Peter Graf, Zhiheng Li
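    Code sketch: The abstract above generates positive and negative images by adding and subtracting a predicted text direction in the StyleGAN latent space. Below is a minimal NumPy sketch of that latent edit only; the latent dimensionality, the edit strength `alpha`, and the random stand-ins for the latent code and direction are assumptions, and the generator and CLIP-based losses are omitted.

```python
# Minimal sketch (assumption): edit a latent code by adding/subtracting a text direction.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=512)                 # a latent code (e.g., in StyleGAN W space)
d = rng.normal(size=512)                 # direction predicted for a given word (stand-in)
d = d / np.linalg.norm(d)                # unit-normalize the direction

alpha = 3.0                              # edit strength (illustrative hyperparameter)
w_positive = w + alpha * d               # "add the text direction"
w_negative = w - alpha * d               # "subtract the text direction"
# w_positive / w_negative would then be fed to the trained generator to synthesize
# the positive and negative images; the generator itself is omitted here.
```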
  • Publication number: 20230129568
    Abstract: Systems and methods are provided for predicting T-Cell receptor (TCR)-peptide interaction, including training a deep learning model for the prediction of TCR-peptide interaction by determining a multiple sequence alignment (MSA) for TCR-peptide pair sequences from a dataset of TCR-peptide pair sequences using a sequence analyzer, building TCR structures and peptide structures from the MSA and corresponding structures from the Protein Data Bank (PDB) using MODELLER, and generating an extended TCR-peptide training dataset based on docking energy scores determined by docking peptides to TCRs with physical modeling based on the TCR structures and peptide structures built using MODELLER. TCR-peptide pairs are classified and labeled as positive or negative pairs using pseudo-labels based on the docking energy scores, and the deep learning model is iteratively retrained on the extended TCR-peptide training dataset and the pseudo-labels until convergence.
    Type: Application
    Filed: October 20, 2022
    Publication date: April 27, 2023
    Inventors: Renqiang Min, Hans Peter Graf, Erik Kruus, Yiren Jian
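    Code sketch: The entry above pseudo-labels TCR-peptide pairs as positive or negative from docking energy scores and iteratively retrains on the extended dataset. The sketch below illustrates only the thresholding step in Python/NumPy; the energy thresholds and the label encoding are assumptions, not values from the patent.

```python
# Minimal sketch (assumption): assign pseudo-labels to TCR-peptide pairs from docking
# energy scores (lower energy = stronger predicted binding). Thresholds are illustrative.
import numpy as np

def pseudo_label(energies, pos_thresh=-20.0, neg_thresh=-5.0):
    """Return +1 (binder), 0 (non-binder), or -1 (ambiguous / left unlabeled) per pair."""
    labels = np.full(len(energies), -1, dtype=int)
    labels[energies <= pos_thresh] = 1     # confidently low energy -> positive pair
    labels[energies >= neg_thresh] = 0     # confidently high energy -> negative pair
    return labels

scores = np.array([-32.1, -12.4, -3.8, -25.0])
print(pseudo_label(scores))                # [ 1 -1  0  1]
# The labeled pairs extend the training set; the model is retrained and the
# loop repeats until the pseudo-labels stop changing (convergence).
```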
  • Publication number: 20230085160
    Abstract: A method for generating binding peptides presented by any given Major Histocompatibility Complex (MHC) protein is presented. The method includes, given a peptide and an MHC protein pair, enabling a Reinforcement Learning (RL) agent to interact with and exploit a peptide mutation environment by repeatedly mutating the peptide and observing an observation score of the peptide, learning to form a mutation policy, via a mutation policy network, to iteratively mutate amino acids of the peptide to obtain desired presentation scores, and generating, based on the desired presentation scores, qualified peptides and binding motifs of MHC Class I proteins.
    Type: Application
    Filed: August 30, 2022
    Publication date: March 16, 2023
    Inventors: Renqiang Min, Hans Peter Graf, Ziqi Chen
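    Code sketch: The abstract above has an RL agent repeatedly mutate a peptide and observe a score for it. The sketch below shows one plausible environment step in plain Python; `presentation_score` is a placeholder stand-in for the trained MHC presentation predictor, and defining the reward as the score improvement is an assumption.

```python
# Minimal sketch (assumption): a peptide mutation "environment" in the RL sense.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def presentation_score(peptide, mhc):
    # Placeholder scorer (assumed); a real system would call the trained predictor.
    return sum(ord(a) for a in peptide) % 100 / 100.0

def step(peptide, position, new_aa, mhc):
    """Apply one mutation action and return (next_peptide, reward)."""
    mutated = peptide[:position] + new_aa + peptide[position + 1:]
    reward = presentation_score(mutated, mhc) - presentation_score(peptide, mhc)
    return mutated, reward

peptide, mhc = "SIINFEKL", "HLA-A*02:01"
pos = random.randrange(len(peptide))      # which residue to mutate
aa = random.choice(AMINO_ACIDS)           # which amino acid to substitute
print(step(peptide, pos, aa, mhc))
```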
  • Publication number: 20230083313
    Abstract: A system for binding peptide search for immunotherapy is presented. The system includes employing a deep neural network to predict a peptide presentation given Major Histocompatibility Complex allele sequences and peptide sequences, training a Variational Autoencoder (VAE) to reconstruct peptides by converting the peptide sequences into continuous embedding vectors, running a Monte Carlo Tree Search to generate a first set of positive peptide vaccine candidates, running a Bayesian Optimization search with the trained VAE and a Backpropagation search with the trained VAE to generate a second set of positive peptide vaccine candidates, using a sampling from a Position Weight Matrix (sPWM) to generate a third set of positive peptide vaccine candidates, screening and merging the first, second, and third sets of positive peptide vaccine candidates, and outputting qualified peptides for immunotherapy from the screened and merged sets of positive peptide vaccine candidates.
    Type: Application
    Filed: August 30, 2022
    Publication date: March 16, 2023
    Inventors: Renqiang Min, Hans Peter Graf, Ziqi Chen
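    Code sketch: One of the three candidate generators in the entry above samples peptides from a Position Weight Matrix (sPWM). The NumPy sketch below shows such sampling with a random toy matrix; the peptide length and the matrix values are illustrative assumptions.

```python
# Minimal sketch (assumption): sample candidate peptides from a Position Weight Matrix,
# one column of amino-acid probabilities per peptide position.
import numpy as np

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")
rng = np.random.default_rng(0)

length = 9                                          # e.g., 9-mer MHC class I peptides
pwm = rng.random((length, len(AMINO_ACIDS)))
pwm /= pwm.sum(axis=1, keepdims=True)               # each row is a probability vector

def sample_peptide(pwm):
    return "".join(rng.choice(AMINO_ACIDS, p=row) for row in pwm)

candidates = [sample_peptide(pwm) for _ in range(5)]
print(candidates)   # these would then be screened by the presentation predictor
```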
  • Publication number: 20220327425
    Abstract: Methods and systems for training a machine learning model include embedding a state, including a peptide sequence and a protein, as a vector. An action, including a modification to an amino acid in the peptide sequence, is predicted using a presentation score of the peptide sequence by the protein as a reward. A mutation policy model is trained, using the state and the reward, to generate modifications that increase the presentation score.
    Type: Application
    Filed: April 1, 2022
    Publication date: October 13, 2022
    Inventors: Renqiang Min, Hans Peter Graf, Ligong Han
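    Code sketch: The entry above trains a mutation policy model from embedded states and presentation-score rewards. The sketch below shows a generic REINFORCE-style update for a linear softmax policy in NumPy, as one way such a policy could be updated; the state embedding, action space, learning rate, and reward value are assumptions, not the patented architecture.

```python
# Minimal sketch (assumption): one REINFORCE step for a linear softmax mutation policy,
# using the presentation-score improvement as the reward.
import numpy as np

rng = np.random.default_rng(0)
n_actions, state_dim = 20, 32            # 20 amino acids; toy state embedding size
W = np.zeros((n_actions, state_dim))     # policy parameters

def policy(state):
    logits = W @ state
    p = np.exp(logits - logits.max())
    return p / p.sum()

def reinforce_update(state, action, reward, lr=0.1):
    """W += lr * reward * grad log pi(action | state) for a linear softmax policy."""
    global W
    p = policy(state)
    grad_logp = np.outer(np.eye(n_actions)[action] - p, state)
    W = W + lr * reward * grad_logp

state = rng.normal(size=state_dim)       # embedded (peptide sequence, protein) state
action = rng.integers(n_actions)         # sampled amino-acid substitution
reward = 0.3                             # e.g., increase in presentation score
reinforce_update(state, action, reward)
```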
  • Publication number: 20220328127
    Abstract: A computer-implemented method is provided for generating new binding peptides to Major Histocompatibility Complex (MHC) proteins. The method includes training, by a processor device, a Generative Adversarial Network (GAN) having a generator and a discriminator only on a set of binding peptide sequences, given training data comprising the set of binding peptide sequences and a set of non-binding peptide sequences. The GAN training objective iteratively updates the discriminator to distinguish generated peptide sequences from sampled binding peptide sequences as fake or real, and iteratively updates the generator to fool the discriminator. The training includes optimizing the GAN training objective while learning two projection vectors for the binding class with two cross-entropy losses, the first of which discriminates binding peptide sequences in the training data from non-binding peptide sequences in the training data.
    Type: Application
    Filed: April 1, 2022
    Publication date: October 13, 2022
    Inventors: Renqiang Min, Hans Peter Graf, Ligong Han
  • Publication number: 20220327814
    Abstract: A reinforcement learning based approach to the problem of query object localization is presented, in which an agent is trained to localize objects of interest specified by a small exemplary set. A transferable reward signal formulated using the exemplary set is learned via ordinal metric learning. This enables test-time policy adaptation to new environments where reward signals are not readily available, so the approach outperforms fine-tuning approaches that are limited to annotated images. In addition, the transferable reward allows repurposing of the trained agent for new tasks, such as annotation refinement or selective localization of common objects across a set of images.
    Type: Application
    Filed: April 7, 2022
    Publication date: October 13, 2022
    Applicant: NEC LABORATORIES AMERICA, INC.
    Inventors: Shaobo Han, Renqiang Min, Tingfeng Li
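    Code sketch: The abstract above learns a transferable reward from an exemplary set via ordinal metric learning. The NumPy sketch below shows one simple distance-ordering reward of that flavor (moving closer to the exemplar prototype earns +1); the prototype construction and the ±1 reward values are assumptions.

```python
# Minimal sketch (assumption): a distance-ordering ("ordinal") reward computed from
# exemplar embeddings, usable at test time without ground-truth boxes.
import numpy as np

def ordinal_reward(prev_crop_feat, curr_crop_feat, exemplar_feats):
    """+1 if the new crop moved closer to the exemplar set in feature space, else -1."""
    proto = exemplar_feats.mean(axis=0)                 # prototype of the exemplary set
    d_prev = np.linalg.norm(prev_crop_feat - proto)
    d_curr = np.linalg.norm(curr_crop_feat - proto)
    return 1.0 if d_curr < d_prev else -1.0

rng = np.random.default_rng(0)
exemplars = rng.normal(size=(5, 128))                   # embeddings of exemplar images
prev_f = rng.normal(size=128)                           # crop embedding before the action
curr_f = exemplars.mean(axis=0) + 0.1 * rng.normal(size=128)   # crop embedding after
print(ordinal_reward(prev_f, curr_f, exemplars))        # 1.0: the agent moved closer
```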
  • Publication number: 20220327489
    Abstract: Systems and methods for matching job descriptions with job applicants are provided. The method includes allocating each of one or more job applicants' curriculum vitae (CV) into sections; applying max pooled word embedding to each section of the job applicants' CVs; using concatenated max-pooling and average-pooling to compose the section embeddings into an applicant's CV representation; allocating each of one or more job position descriptions into specified sections; applying max pooled word embedding to each section of the job position descriptions; using concatenated max-pooling and average-pooling to compose the section embeddings into a job representation; calculating a cosine similarity between each of the job representations and each of the CV representations to perform job-to-applicant matching; and presenting an ordered list of the one or more job applicants or an ordered list of the one or more job position descriptions to a user.
    Type: Application
    Filed: April 6, 2022
    Publication date: October 13, 2022
    Inventors: Renqiang Min, Iain Melvin, Christopher A White, Christopher Malon, Hans Peter Graf
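    Code sketch: The method above max-pools word embeddings per section, composes document vectors by concatenated max- and average-pooling over sections, and ranks by cosine similarity. The NumPy sketch below mirrors those steps on random toy embeddings; the section counts, dimensions, and data are assumptions.

```python
# Minimal sketch (assumption): section-wise pooling of word embeddings and
# cosine-similarity matching between a job description and a CV.
import numpy as np

def section_embedding(word_vectors):            # word_vectors: (n_words, dim)
    return word_vectors.max(axis=0)             # max-pooled word embedding per section

def document_embedding(section_embs):           # section_embs: (n_sections, dim)
    return np.concatenate([section_embs.max(axis=0), section_embs.mean(axis=0)])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

rng = np.random.default_rng(0)
dim = 50
cv_sections = [rng.normal(size=(n, dim)) for n in (30, 12, 20)]   # e.g., education, skills, work
job_sections = [rng.normal(size=(n, dim)) for n in (25, 15)]      # e.g., duties, requirements

cv_vec = document_embedding(np.stack([section_embedding(s) for s in cv_sections]))
job_vec = document_embedding(np.stack([section_embedding(s) for s in job_sections]))
print(cosine(job_vec, cv_vec))     # higher score = better job-to-applicant match
```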
  • Publication number: 20220319635
    Abstract: Methods and systems for training a model include encoding training peptide sequences using an encoder model. A new peptide sequence is generated using a generator model. The encoder model, the generator model, and a discriminator model are trained to cause the generator model to generate new peptides that the discriminator mistakes for the training peptide sequences, including learning projection vectors with respective cross-entropy losses for binding sequences and non-binding sequences.
    Type: Application
    Filed: April 1, 2022
    Publication date: October 6, 2022
    Inventors: Renqiang Min, Hans Peter Graf, Ligong Han
  • Patent number: 11423655
    Abstract: A computer-implemented method is provided for disentangled data generation. The method includes accessing, by a variational autoencoder, a plurality of supervision signals. The method further includes accessing, by the variational autoencoder, a plurality of auxiliary tasks that utilize the supervision signals as reward signals to learn a disentangled representation. The method also includes training the variational autoencoder to disentangle a sequential data input into a time-invariant factor and a time-varying factor using a self-supervised training approach which is based on outputs of the auxiliary tasks obtained by using the supervision signals to accomplish the plurality of auxiliary tasks.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: August 23, 2022
    Inventors: Renqiang Min, Yizhe Zhu, Asim Kadav, Hans Peter Graf
  • Publication number: 20220254152
    Abstract: A method for learning disentangled representations of videos is presented. The method includes feeding each frame of video data into an encoder to produce a sequence of visual features, passing the sequence of visual features through a deep convolutional network to obtain a posterior of a dynamic latent variable and a posterior of a static latent variable, sampling static and dynamic representations from the posterior of the static latent variable and the posterior of the dynamic latent variable, respectively, concatenating the static and dynamic representations to be fed into a decoder to generate reconstructed sequences, and applying three regularizers to the dynamic and static latent variables to trigger representation disentanglement. To facilitate the disentangled sequential representation learning, orthogonal factorization in generative adversarial network (GAN) latent space is leveraged to pre-train a generator as a decoder in the method.
    Type: Application
    Filed: January 27, 2022
    Publication date: August 11, 2022
    Inventors: Renqiang Min, Hans Peter Graf, Ligong Han
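    Code sketch: The entry above samples a static and a dynamic latent representation and concatenates them before decoding. The NumPy sketch below shows only that reparameterized sampling and concatenation; the latent sizes and the random posterior parameters are assumptions, and the encoder, decoder, and regularizers are omitted.

```python
# Minimal sketch (assumption): sample a static (per-video) and a dynamic (per-frame)
# latent variable via the reparameterization trick and concatenate them frame-wise.
import numpy as np

rng = np.random.default_rng(0)
T, d_static, d_dyn = 8, 16, 8                      # frames and latent sizes (illustrative)

mu_s, logvar_s = rng.normal(size=d_static), rng.normal(size=d_static)
mu_d, logvar_d = rng.normal(size=(T, d_dyn)), rng.normal(size=(T, d_dyn))

z_static = mu_s + np.exp(0.5 * logvar_s) * rng.normal(size=d_static)       # one per video
z_dynamic = mu_d + np.exp(0.5 * logvar_d) * rng.normal(size=(T, d_dyn))    # one per frame

# Broadcast the static code to every frame and concatenate with the dynamic code;
# the result is what the decoder (omitted here) consumes to reconstruct the sequence.
z = np.concatenate([np.tile(z_static, (T, 1)), z_dynamic], axis=1)
print(z.shape)   # (8, 24)
```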
  • Publication number: 20220171989
    Abstract: A computer-implemented method for representation disentanglement is provided. The method includes encoding an input vector into an embedding. The method further includes learning, by a hardware processor, disentangled representations of the input vector including a style embedding and a content embedding by performing sample-based mutual information minimization on the embedding under a Wasserstein distance regularization and a Kullback-Leibler (KL) divergence. The method also includes decoding the style and content embeddings to obtain a reconstructed vector.
    Type: Application
    Filed: November 18, 2021
    Publication date: June 2, 2022
    Inventors: Renqiang Min, Asim Kadav, Hans Peter Graf, Ligong Han
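    Code sketch: One ingredient of the regularization above is a Kullback-Leibler (KL) divergence on the learned embeddings. The NumPy sketch below implements the standard closed-form KL between a diagonal Gaussian and a standard normal as an illustration; the sample-based mutual-information and Wasserstein terms are not shown, and the example values are arbitrary.

```python
# Minimal sketch (assumption): closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ).
import numpy as np

def kl_to_standard_normal(mu, logvar):
    """KL divergence of a diagonal Gaussian from the standard normal, summed over dims."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

mu = np.array([0.3, -0.1, 0.8])
logvar = np.array([-0.2, 0.1, -0.5])
print(kl_to_standard_normal(mu, logvar))
```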
  • Publication number: 20220130490
    Abstract: Methods and systems for generating a peptide sequence include transforming an input peptide sequence into disentangled representations, including a structural representation and an attribute representation, using an autoencoder model. One of the disentangled representations is modified. The disentangled representations, including the modified disentangled representation, are transformed to generate a new peptide sequence using the autoencoder model.
    Type: Application
    Filed: October 26, 2021
    Publication date: April 28, 2022
    Inventors: Renqiang Min, Igor Durdanovic, Hans Peter Graf
  • Publication number: 20220101007
    Abstract: A method for using a multi-hop reasoning framework to perform multi-step compositional long-term reasoning is presented. The method includes extracting feature maps and frame-level representations from a video stream by using a convolutional neural network (CNN), performing object representation learning and detection, linking objects through time via tracking to generate object tracks and image feature tracks, feeding the object tracks and the image feature tracks to a multi-hop transformer that hops over frames in the video stream while concurrently attending to one or more of the objects in the video stream until the multi-hop transformer arrives at a correct answer, and employing video representation learning and recognition from the objects and image context to locate a target object within the video stream.
    Type: Application
    Filed: September 1, 2021
    Publication date: March 31, 2022
    Inventors: Asim Kadav, Farley Lai, Hans Peter Graf, Alexandru Niculescu-Mizil, Renqiang Min, Honglu Zhou
  • Patent number: 11227108
    Abstract: A computer-implemented method for employing input-conditioned filters to perform natural language processing tasks using a convolutional neural network architecture includes receiving one or more inputs, generating one or more sets of filters conditioned on respective ones of the one or more inputs by implementing one or more encoders to encode the one or more inputs into one or more respective hidden vectors and implementing one or more decoders to determine the one or more sets of filters based on the one or more hidden vectors, and performing adaptive convolution by applying the one or more sets of filters to respective ones of the one or more inputs to generate one or more representations.
    Type: Grant
    Filed: July 18, 2018
    Date of Patent: January 18, 2022
    Inventors: Renqiang Min, Dinghan Shen, Yitong Li
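    Code sketch: The patent above generates convolution filters conditioned on the input (an encoder produces a hidden vector, a decoder turns it into filters) and then convolves the input with those filters. The NumPy sketch below is a toy 1-D version of that idea; the random encoder/decoder weights, the mean-pooling of the input, and all dimensions are assumptions.

```python
# Minimal sketch (assumption): derive a 1-D convolution filter from the input itself,
# then slide that input-conditioned filter over the input sequence.
import numpy as np

rng = np.random.default_rng(0)
seq_len, emb_dim, hid_dim, filt_width = 12, 16, 8, 3

x = rng.normal(size=(seq_len, emb_dim))                    # embedded input sequence
W_enc = rng.normal(size=(hid_dim, emb_dim))                # toy "encoder" weights (assumed)
W_dec = rng.normal(size=(filt_width * emb_dim, hid_dim))   # toy "decoder" weights (assumed)

h = np.tanh(W_enc @ x.mean(axis=0))                # encode the input into a hidden vector
filt = (W_dec @ h).reshape(filt_width, emb_dim)    # decode the hidden vector into a filter

# Adaptive convolution: one response per valid filter position.
out = np.array([np.sum(x[t:t + filt_width] * filt)
                for t in range(seq_len - filt_width + 1)])
print(out.shape)   # (10,)
```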
  • Patent number: 11170256
    Abstract: Systems and methods for processing video are provided. The method includes receiving a text-based description of active scenes and representing the text-based description as a word embedding matrix. The method includes using a text encoder, implemented by a neural network, to output a frame-level textual representation and a video-level representation of the word embedding matrix. The method also includes generating, by a shared generator, frame-by-frame video based on the frame-level textual representation, the video-level representation, and noise vectors. A frame-level and a video-level convolutional filter of a video discriminator are generated to classify frames and video of the frame-by-frame video as true or false. The method also includes training a conditional video generator that includes the text encoder, the video discriminator, and the shared generator in a generative adversarial network to convergence.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: November 9, 2021
    Inventors: Renqiang Min, Bing Bai, Yogesh Balaji
  • Publication number: 20210319847
    Abstract: A method is provided for peptide-based vaccine generation. The method receives a dataset of positive and negative binding peptide sequences. The method pre-trains a set of peptide binding property predictors on the dataset to generate training data. The method trains a Wasserstein Generative Adversarial Network (WGAN) only on the positive binding peptide sequences, in which a discriminator of the WGAN is updated to distinguish generated peptide sequences from sampled positive peptide sequences from the training data, and a generator of the WGAN is updated to fool the discriminator. While training the WGAN on the positive binding peptide sequences, the generator is simultaneously updated to minimize a kernel Maximum Mean Discrepancy (MMD) loss between the generated peptide sequences and the sampled peptide sequences and to maximize the prediction accuracies of the set of pre-trained peptide binding property predictors, whose parameters are kept fixed.
    Type: Application
    Filed: March 10, 2021
    Publication date: October 14, 2021
    Inventors: Renqiang Min, Wenchao Yu, Hans Peter Graf, Igor Durdanovic
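    Code sketch: The method above updates the WGAN generator to minimize a kernel Maximum Mean Discrepancy (MMD) loss between generated and sampled positive peptides. The NumPy sketch below computes a simple (biased) Gaussian-kernel estimate of MMD^2 on toy feature vectors; the kernel bandwidth and the random features are assumptions.

```python
# Minimal sketch (assumption): biased (V-statistic) estimate of squared MMD with a
# Gaussian kernel between generated and real (sampled positive) peptide features.
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimator of MMD^2 between sample sets x and y (rows are feature vectors)."""
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
generated = rng.normal(size=(64, 32))              # features of generated peptides
real = rng.normal(loc=0.5, size=(64, 32))          # features of sampled positive peptides
print(mmd2(generated, real))                       # the generator is updated to shrink this
```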
  • Patent number: 11087174
    Abstract: A method is provided for visual inspection. The method includes learning, by a processor, group disentangled visual feature embedding vectors of input images. The input images include defective objects and defect-free objects. The method further includes generating, by the processor using a weight generation network, classification weights from visual features and semantic descriptions. Both the visual features and the semantic descriptions are for predicting defective and defect-free labels. The method also includes calculating, by the processor, a cosine similarity score between the classification weights and the group disentangled visual feature embedding vectors. The method additionally includes episodically training, by the processor, the weight generation network on the input images to update parameters of the weight generation network.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: August 10, 2021
    Inventors: Renqiang Min, Kai Li, Bing Bai, Hans Peter Graf
  • Patent number: 11087452
    Abstract: A false alarm reduction system and method are provided for reducing false alarms in an automatic defect detection system. The false alarm reduction system includes a defect detection system that generates a list of image boxes marking detected potential defects in an input image. The false alarm reduction system further includes a feature extractor that transforms each of the image boxes in the list into a respective set of numerical features. The false alarm reduction system also includes a classifier that computes, as a classification outcome for each of the image boxes, whether the detected potential defect is a true defect or a false alarm, responsive to the respective set of numerical features for each of the image boxes.
    Type: Grant
    Filed: January 16, 2019
    Date of Patent: August 10, 2021
    Inventors: Alexandru Niculescu-Mizil, Renqiang Min, Eric Cosatto, Farley Lai, Hans Peter Graf, Xavier Fontaine
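    Code sketch: The system above classifies each detected box's numerical features as a true defect or a false alarm. The NumPy sketch below shows a minimal logistic-regression-style filter of that kind; the feature dimension, the "pre-trained" weights, and the decision threshold are assumptions, not the patented classifier.

```python
# Minimal sketch (assumption): keep only boxes whose features are classified as true defects.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def filter_false_alarms(box_features, w, b, threshold=0.5):
    """Return indices of boxes whose predicted defect probability exceeds the threshold."""
    probs = sigmoid(box_features @ w + b)
    return np.flatnonzero(probs >= threshold)

rng = np.random.default_rng(0)
features = rng.normal(size=(6, 24))          # one feature vector per detected box
w, b = rng.normal(size=24), 0.0              # classifier parameters (assumed pre-trained)
print(filter_false_alarms(features, w, b))   # indices of boxes kept as true defects
```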