Patents by Inventor Ayush Chopra

Ayush Chopra has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11972466
    Abstract: A search system provides search results with images of products based on associations of primary products and secondary products from product image sets. The search system analyzes a product image set containing multiple images to determine a primary product and secondary products. Information associating the primary and secondary products is stored in a search index. When the search system receives a query image containing a search product, the search index is queried using the search product to identify search result images based on associations of products in the search index, and the result images are provided as a response to the query image.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: April 30, 2024
    Assignee: Adobe Inc.
    Inventors: Jonas Dahl, Mausoom Sarkar, Hiresh Gupta, Balaji Krishnamurthy, Ayush Chopra, Abhishek Sinha
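
    The abstract above describes, at a high level, an index that associates primary and secondary products detected in the same product image set. Below is a minimal sketch of such an association index; the class and method names (ProductIndex, add_image_set, query) are illustrative assumptions, not taken from the patent.

    ```python
    # Illustrative sketch of an association index built from product image sets.
    # All names here are hypothetical, not from the patent.
    from collections import defaultdict

    class ProductIndex:
        """Maps each product to products it co-occurred with in an image set."""
        def __init__(self):
            self._associations = defaultdict(set)
            self._images = defaultdict(set)          # product -> images that show it

        def add_image_set(self, primary, secondaries, image_ids):
            # Store associations between the primary product and each secondary product.
            for secondary in secondaries:
                self._associations[primary].add(secondary)
                self._associations[secondary].add(primary)
            for product in [primary, *secondaries]:
                self._images[product].update(image_ids)

        def query(self, search_product):
            # Return images of products associated with the detected search product.
            results = set()
            for associated in self._associations.get(search_product, ()):
                results.update(self._images[associated])
            return sorted(results)

    index = ProductIndex()
    index.add_image_set("denim-jacket", ["white-tee", "sneakers"], ["img_001", "img_002"])
    print(index.query("white-tee"))   # -> ['img_001', 'img_002']
    ```
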
  • Patent number: 11907816
    Abstract: A data classification system is trained to classify input data into multiple classes. The system is initially trained by adjusting weights within the system based on a set of training data that includes multiple tuples, each comprising a training instance and a corresponding training label. Two training instances, one from a minority class and one from a majority class, are selected from the set of training data based on entropies for the training instances. A synthetic training instance is generated by combining the two selected training instances, and a corresponding synthetic training label is generated. A tuple including the synthetic training instance and the synthetic training label is added to the set of training data, resulting in an augmented training data set. One or more such synthetic training instances can be added to the augmented training data set, and the system is then re-trained on the augmented training data set.
    Type: Grant
    Filed: August 22, 2022
    Date of Patent: February 20, 2024
    Assignee: Adobe Inc.
    Inventors: Pinkesh Badjatiya, Nikaash Puri, Ayush Chopra, Anubha Kabra
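
    As a rough illustration of the abstract above, the sketch below selects one minority-class and one majority-class instance by prediction entropy and linearly combines them into a synthetic instance and label. The selection rule, the fixed mixing weight, and all names are simplifying assumptions rather than the patented method.

    ```python
    # Hedged sketch: entropy-guided mixing of one minority and one majority instance.
    import numpy as np

    def entropy(probs, eps=1e-12):
        """Shannon entropy of a predicted class distribution."""
        probs = np.clip(probs, eps, 1.0)
        return -np.sum(probs * np.log(probs))

    def make_synthetic(minority_x, minority_probs, majority_x, majority_probs,
                       minority_label, majority_label, lam=0.5):
        # Pick the highest-entropy (most uncertain) instance from each class.
        i = int(np.argmax([entropy(p) for p in minority_probs]))
        j = int(np.argmax([entropy(p) for p in majority_probs]))
        # Combine the two selected instances and their labels into a synthetic tuple.
        x_syn = lam * minority_x[i] + (1.0 - lam) * majority_x[j]
        y_syn = lam * minority_label + (1.0 - lam) * majority_label
        return x_syn, y_syn

    rng = np.random.default_rng(0)
    minority_x, majority_x = rng.normal(size=(5, 4)), rng.normal(size=(50, 4))
    minority_p = rng.dirichlet(np.ones(2), size=5)     # per-instance predicted probabilities
    majority_p = rng.dirichlet(np.ones(2), size=50)
    x_syn, y_syn = make_synthetic(minority_x, minority_p, majority_x, majority_p,
                                  minority_label=1.0, majority_label=0.0)
    print(x_syn.shape, y_syn)   # synthetic tuple to append to the training set
    ```
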
  • Patent number: 11874902
    Abstract: Techniques are disclosed for text-conditioned image searching. A methodology implementing the techniques according to an embodiment includes receiving a source image and a text query defining a target image attribute. The method also includes decomposing the source image into image content and style feature vectors and decomposing the text query into text content and style feature vectors, wherein image style is descriptive of image content and text style is descriptive of text content. The method further includes composing a global content feature vector based on the text content feature vector and the image content feature vector and composing a global style feature vector based on the text style feature vector and the image style feature vector. The method further includes identifying a target image that relates to the global content feature vector and the global style feature vector so that the target image relates to the target image attribute.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Pinkesh Badjatiya, Surgan Jandial, Pranit Chawla, Mausoom Sarkar, Ayush Chopra
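
    The sketch below illustrates the general idea of decomposing image and text features into content and style vectors, composing global content and style vectors, and ranking candidate images by similarity. It is a minimal PyTorch stand-in; the projection heads and the additive composition are assumptions, not the patented architecture.

    ```python
    # Minimal PyTorch sketch of content/style composition for text-conditioned retrieval.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ContentStyleComposer(nn.Module):
        def __init__(self, dim=128):
            super().__init__()
            # Hypothetical projection heads standing in for the real encoders.
            self.img_content, self.img_style = nn.Linear(dim, dim), nn.Linear(dim, dim)
            self.txt_content, self.txt_style = nn.Linear(dim, dim), nn.Linear(dim, dim)

        def forward(self, image_feat, text_feat):
            # Decompose each modality into content and style feature vectors.
            img_c, img_s = self.img_content(image_feat), self.img_style(image_feat)
            txt_c, txt_s = self.txt_content(text_feat), self.txt_style(text_feat)
            # Compose global content and global style vectors across modalities.
            global_content = F.normalize(img_c + txt_c, dim=-1)
            global_style = F.normalize(img_s + txt_s, dim=-1)
            return torch.cat([global_content, global_style], dim=-1)

    model = ContentStyleComposer()
    query = model(torch.randn(1, 128), torch.randn(1, 128))    # source image + text query
    candidates = F.normalize(torch.randn(1000, 256), dim=-1)   # pre-computed target features
    scores = candidates @ F.normalize(query, dim=-1).squeeze(0)
    print(scores.topk(5).indices)                              # best-matching target images
    ```
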
  • Patent number: 11861772
    Abstract: In implementations of systems for generating images for virtual try-on and pose transfer, a computing device implements a generator system to receive input data describing a first digital image that depicts a person in a pose and a second digital image that depicts a garment. Candidate appearance flow maps are computed that warp the garment based on the pose at different pixel-block sizes using a first machine learning model. The generator system generates a warped garment image by combining the candidate appearance flow maps as an aggregate per-pixel displacement map using a convolutional gated recurrent network. A conditional segmentation mask is predicted that segments portions of a geometry of the person using a second machine learning model. The generator system outputs a digital image that depicts the person in the pose wearing the garment based on the warped garment image and the conditional segmentation mask using a third machine learning model.
    Type: Grant
    Filed: February 23, 2022
    Date of Patent: January 2, 2024
    Assignee: Adobe Inc.
    Inventors: Ayush Chopra, Rishabh Jain, Mayur Hemani, Balaji Krishnamurthy
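
    Below is a simplified PyTorch sketch of one step described in the abstract: upsampling candidate appearance flow maps predicted at different pixel-block sizes, combining them into an aggregate per-pixel displacement map, and warping the garment image with it. A plain softmax weighting stands in for the convolutional gated recurrent aggregation, and all tensors are random placeholders.

    ```python
    # Simplified sketch: aggregate multi-scale flow maps and warp the garment image.
    import torch
    import torch.nn.functional as F

    def warp(garment, flow):
        """Warp a garment image (N, C, H, W) by a per-pixel displacement map (N, 2, H, W)."""
        n, _, h, w = garment.shape
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                                indexing="ij")
        base_grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
        grid = base_grid + flow.permute(0, 2, 3, 1)     # displacements in normalized coords
        return F.grid_sample(garment, grid, align_corners=True)

    # Candidate flows predicted at different pixel-block sizes, upsampled to full resolution.
    h, w = 64, 48
    candidate_flows = [torch.randn(1, 2, h // s, w // s) * 0.05 for s in (1, 2, 4)]
    upsampled = [F.interpolate(f, size=(h, w), mode="bilinear", align_corners=True)
                 for f in candidate_flows]

    # Aggregate with softmax weights (a stand-in for the convolutional GRU combination).
    weights = torch.softmax(torch.randn(len(upsampled)), dim=0)
    aggregate_flow = sum(wgt * f for wgt, f in zip(weights, upsampled))

    garment = torch.rand(1, 3, h, w)
    warped_garment = warp(garment, aggregate_flow)
    print(warped_garment.shape)   # (1, 3, 64, 48) -- input to the try-on synthesis model
    ```
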
  • Patent number: 11797823
    Abstract: Generating a machine learning model that is trained using retrospective loss is described. A retrospective loss system receives an untrained machine learning model and a task for training the model. The retrospective loss system initially trains the model over warm-up iterations using task-specific loss that is determined based on a difference between predictions output by the model during training on input data and a ground truth dataset for the input data. Following the warm-up training iterations, the retrospective loss system continues to train the model using retrospective loss, which is model-agnostic and constrains the model such that a subsequently output prediction is more similar to the ground truth dataset than the previously output prediction. After determining that the model's outputs are within a threshold similarity to the ground truth dataset, the model is output with its current parameters as a trained model.
    Type: Grant
    Filed: February 18, 2020
    Date of Patent: October 24, 2023
    Assignee: Adobe Inc.
    Inventors: Ayush Chopra, Balaji Krishnamurthy, Mausoom Sarkar, Surgan Jandial
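
    A hedged sketch of the training loop described above: warm-up iterations use only the task-specific loss, after which a retrospective term encourages the current predictions to be closer to the ground truth than to predictions snapshotted at an earlier step. The exact scaling and scheduling here are simplifications, not the patented formulation.

    ```python
    # Sketch of warm-up training followed by training with an added retrospective term.
    import torch
    import torch.nn as nn

    model = nn.Linear(10, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    task_loss_fn = nn.MSELoss()

    x, y = torch.randn(256, 10), torch.randn(256, 1)
    warmup_iters, snapshot_every, kappa = 100, 20, 2.0
    past_preds = None                      # predictions from an earlier training step

    for step in range(300):
        preds = model(x)
        loss = task_loss_fn(preds, y)      # task-specific loss against the ground truth
        if step >= warmup_iters and past_preds is not None:
            # Retrospective term: reward predictions that are closer to the ground truth
            # than to what an earlier version of the model predicted (clamped at zero).
            retro = task_loss_fn(preds, y) - task_loss_fn(preds, past_preds)
            loss = loss + kappa * torch.clamp(retro, min=0.0)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if step % snapshot_every == 0:
            past_preds = preds.detach()    # snapshot of the previously output predictions
    ```
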
  • Publication number: 20230316379
    Abstract: Systems, methods, and computer storage media are disclosed for predicting visual compatibility between a bundle of catalog items (e.g., a partial outfit) and a candidate catalog item to add to the bundle. Visual compatibility prediction may be jointly conditioned on item type, context, and style by determining a first compatibility score jointly conditioned on type (e.g., category) and context, determining a second compatibility score conditioned on outfit style, and combining the first and second compatibility scores into a unified visual compatibility score. A unified visual compatibility score may be determined for each of a plurality of candidate items, and the candidate item with the highest unified visual compatibility score may be selected to add to the bundle (e.g., fill in the blank for the partial outfit).
    Type: Application
    Filed: March 20, 2023
    Publication date: October 5, 2023
    Inventors: Kumar Ayush, Ayush Chopra, Patel Utkarsh Govind, Balaji Krishnamurthy, Anirudh Singhal
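
    The sketch below illustrates only the score-combination step from the abstract: a type-and-context score and a style score are computed for each candidate item, blended into a unified compatibility score, and the highest-scoring candidate is selected. Both scoring functions are random placeholders; the underlying compatibility models are not reproduced.

    ```python
    # Minimal sketch of combining two compatibility scores and picking the best candidate.
    import numpy as np

    rng = np.random.default_rng(7)

    def type_context_score(partial_outfit, candidate):
        # Stand-in for the model jointly conditioned on item type/category and context.
        return float(rng.uniform())

    def style_score(partial_outfit, candidate):
        # Stand-in for the model conditioned on overall outfit style.
        return float(rng.uniform())

    def best_candidate(partial_outfit, candidates, alpha=0.5):
        # Unified visual compatibility score per candidate, then take the maximum.
        unified = {
            c: alpha * type_context_score(partial_outfit, c)
               + (1 - alpha) * style_score(partial_outfit, c)
            for c in candidates
        }
        return max(unified, key=unified.get), unified

    partial_outfit = ["blazer", "trousers"]
    choice, scores = best_candidate(partial_outfit, ["loafers", "sandals", "boots"])
    print(choice, scores)   # candidate with the highest unified visual compatibility score
    ```
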
  • Publication number: 20230267663
    Abstract: In implementations of systems for generating images for virtual try-on and pose transfer, a computing device implements a generator system to receive input data describing a first digital image that depicts a person in a pose and a second digital image that depicts a garment. Candidate appearance flow maps are computed that warp the garment based on the pose at different pixel-block sizes using a first machine learning model. The generator system generates a warped garment image by combining the candidate appearance flow maps as an aggregate per-pixel displacement map using a convolutional gated recurrent network. A conditional segmentation mask is predicted that segments portions of a geometry of the person using a second machine learning model. The generator system outputs a digital image that depicts the person in the pose wearing the garment based on the warped garment image and the conditional segmentation mask using a third machine learning model.
    Type: Application
    Filed: February 23, 2022
    Publication date: August 24, 2023
    Applicant: Adobe Inc.
    Inventors: Ayush Chopra, Rishabh Jain, Mayur Hemani, Balaji Krishnamurthy
  • Patent number: 11734337
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for generating tags for an object portrayed in a digital image based on predicted attributes of the object. For example, the disclosed systems can utilize interleaved neural network layers of alternating inception layers and dilated convolution layers to generate a localization feature vector. Based on the localization feature vector, the disclosed systems can generate attribute localization feature embeddings, for example, using a pooling layer such as a global average pooling layer. The disclosed systems can then apply the attribute localization feature embeddings to corresponding attribute group classifiers to generate tags based on predicted attributes. In particular, attribute group classifiers can predict attributes as associated with a query image (e.g., based on a scoring comparison with other potential attributes of an attribute group).
    Type: Grant
    Filed: June 14, 2022
    Date of Patent: August 22, 2023
    Assignee: Adobe Inc.
    Inventors: Ayush Chopra, Mausoom Sarkar, Jonas Dahl, Hiresh Gupta, Balaji Krishnamurthy, Abhishek Sinha
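
    As a rough PyTorch illustration of the tagging pipeline described above, the sketch below pools a feature map into an embedding and applies one classifier head per attribute group, taking the top-scoring attribute in each group as a tag. The tiny two-layer backbone merely stands in for the interleaved inception and dilated convolution layers; all names and shapes are assumptions.

    ```python
    # Sketch of attribute-group tagging with a pooled localization embedding.
    import torch
    import torch.nn as nn

    class AttributeTagger(nn.Module):
        def __init__(self, groups):
            super().__init__()
            # Placeholder backbone (the patent describes inception + dilated conv layers).
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=2, dilation=2), nn.ReLU(),
            )
            self.pool = nn.AdaptiveAvgPool2d(1)           # global average pooling
            self.group_heads = nn.ModuleDict(
                {name: nn.Linear(64, len(attrs)) for name, attrs in groups.items()}
            )
            self.groups = groups

        def forward(self, image):
            feat = self.pool(self.backbone(image)).flatten(1)   # localization embedding
            tags = {}
            for name, head in self.group_heads.items():
                scores = head(feat)                # score the attributes within the group
                tags[name] = self.groups[name][scores.argmax(dim=1).item()]
            return tags

    groups = {"color": ["red", "blue", "black"], "sleeve": ["short", "long", "sleeveless"]}
    tagger = AttributeTagger(groups)
    print(tagger(torch.rand(1, 3, 128, 128)))   # predicted tag per attribute group
    ```
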
  • Patent number: 11720651
    Abstract: Techniques are disclosed for text-conditioned image searching. A methodology implementing the techniques includes decomposing a source image into visual feature vectors associated with different levels of granularity. The method also includes decomposing a text query (defining a target image attribute) into feature vectors associated with different levels of granularity including a global text feature vector. The method further includes generating image-text embeddings based on the visual feature vectors and the text feature vectors to encode information from visual and textual features. The method further includes composing a visio-linguistic representation based on a hierarchical aggregation of the image-text embeddings to encode visual and textual information at multiple levels of granularity.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: August 8, 2023
    Assignee: Adobe Inc.
    Inventors: Pinkesh Badjatiya, Surgan Jandial, Pranit Chawla, Mausoom Sarkar, Ayush Chopra
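
    The sketch below gestures at the hierarchical aggregation described in the abstract: image and text feature vectors at several granularity levels are fused into image-text embeddings, which are then aggregated into a single visio-linguistic representation used for retrieval. The elementwise-product fusion and softmax-weighted aggregation are illustrative simplifications, not the patented design.

    ```python
    # Hedged sketch: fuse per-level image-text features, then aggregate across levels.
    import torch
    import torch.nn.functional as F

    def fuse(visual_feat, text_feat):
        # One image-text embedding per granularity level (hypothetical fusion rule).
        return F.normalize(visual_feat * text_feat, dim=-1)

    # Visual and text feature vectors at three levels of granularity (coarse -> fine).
    levels = 3
    visual = [F.normalize(torch.randn(1, 128), dim=-1) for _ in range(levels)]
    textual = [F.normalize(torch.randn(1, 128), dim=-1) for _ in range(levels)]

    embeddings = [fuse(v, t) for v, t in zip(visual, textual)]
    level_weights = torch.softmax(torch.randn(levels), dim=0)

    # Visio-linguistic representation: weighted aggregation across granularity levels.
    visio_linguistic = F.normalize(
        sum(w * e for w, e in zip(level_weights, embeddings)), dim=-1
    )

    gallery = F.normalize(torch.randn(500, 128), dim=-1)   # candidate target images
    scores = gallery @ visio_linguistic.squeeze(0)
    print(scores.topk(3).indices)                          # retrieved image indices
    ```
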
  • Publication number: 20230196191
    Abstract: A data classification system is trained to classify input data into multiple classes. The system is initially trained by adjusting weights within the system based on a set of training data that includes multiple tuples, each comprising a training instance and a corresponding training label. Two training instances, one from a minority class and one from a majority class, are selected from the set of training data based on entropies for the training instances. A synthetic training instance is generated by combining the two selected training instances, and a corresponding synthetic training label is generated. A tuple including the synthetic training instance and the synthetic training label is added to the set of training data, resulting in an augmented training data set. One or more such synthetic training instances can be added to the augmented training data set, and the system is then re-trained on the augmented training data set.
    Type: Application
    Filed: August 22, 2022
    Publication date: June 22, 2023
    Applicant: Adobe Inc.
    Inventors: Pinkesh Badjatiya, Nikaash Puri, Ayush Chopra, Anubha Kabra
  • Patent number: 11640634
    Abstract: Systems, methods, and computer storage media are disclosed for predicting visual compatibility between a bundle of catalog items (e.g., a partial outfit) and a candidate catalog item to add to the bundle. Visual compatibility prediction may be jointly conditioned on item type, context, and style by determining a first compatibility score jointly conditioned on type (e.g., category) and context, determining a second compatibility score conditioned on outfit style, and combining the first and second compatibility scores into a unified visual compatibility score. A unified visual compatibility score may be determined for each of a plurality of candidate items, and the candidate item with the highest unified visual compatibility score may be selected to add to the bundle (e.g., fill in the blank for the partial outfit).
    Type: Grant
    Filed: May 4, 2020
    Date of Patent: May 2, 2023
    Inventors: Kumar Ayush, Ayush Chopra, Patel Utkarsh Govind, Balaji Krishnamurthy, Anirudh Singhal
  • Patent number: 11631029
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for generating combined feature embeddings for minority class upsampling in training machine learning models with imbalanced training samples. For example, the disclosed systems can select training sample values from a set of training samples and a combination ratio value from a continuous probability distribution. Additionally, the disclosed systems can generate a combined synthetic training sample value by modifying the selected training sample values using the combination ratio value and combining the modified training sample values. Moreover, the disclosed systems can generate a combined synthetic ground truth label based on the combination ratio value. In addition, the disclosed systems can utilize the combined synthetic training sample value and the combined synthetic ground truth label to generate a combined synthetic training sample and utilize the combined synthetic training sample to train a machine learning model.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: April 18, 2023
    Assignee: Adobe Inc.
    Inventors: Nikaash Puri, Balaji Krishnamurthy, Ayush Chopra
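
    A minimal sketch of the augmentation step described above: a combination ratio is drawn from a continuous probability distribution (a Beta distribution is assumed here), two training sample values and their labels are combined using that ratio, and the resulting synthetic pair is appended to the training set.

    ```python
    # Sketch: combined synthetic sample and label from a sampled combination ratio.
    import numpy as np

    rng = np.random.default_rng(1)

    def combined_synthetic_sample(x_a, y_a, x_b, y_b, alpha=0.4):
        lam = rng.beta(alpha, alpha)                  # combination ratio value
        x_syn = lam * x_a + (1.0 - lam) * x_b         # combined synthetic training sample
        y_syn = lam * y_a + (1.0 - lam) * y_b         # combined synthetic ground truth label
        return x_syn, y_syn

    features = rng.normal(size=(100, 16))
    labels = np.eye(2)[rng.integers(0, 2, size=100)]  # one-hot ground truth labels
    x_syn, y_syn = combined_synthetic_sample(features[0], labels[0], features[1], labels[1])

    aug_features = np.vstack([features, x_syn])
    aug_labels = np.vstack([labels, y_syn])
    print(aug_features.shape, aug_labels.shape)       # augmented set used for (re)training
    ```
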
  • Publication number: 20220309093
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for generating tags for an object portrayed in a digital image based on predicted attributes of the object. For example, the disclosed systems can utilize interleaved neural network layers of alternating inception layers and dilated convolution layers to generate a localization feature vector. Based on the localization feature vector, the disclosed systems can generate attribute localization feature embeddings, for example, using a pooling layer such as a global average pooling layer. The disclosed systems can then apply the attribute localization feature embeddings to corresponding attribute group classifiers to generate tags based on predicted attributes. In particular, attribute group classifiers can predict attributes as associated with a query image (e.g., based on a scoring comparison with other potential attributes of an attribute group).
    Type: Application
    Filed: June 14, 2022
    Publication date: September 29, 2022
    Inventors: Ayush Chopra, Mausoom Sarkar, Jonas Dahl, Hiresh Gupta, Balaji Krishnamurthy, Abhishek Sinha
  • Patent number: 11423264
    Abstract: A data classification system is trained to classify input data into multiple classes. The system is initially trained by adjusting weights within the system based on a set of training data that includes multiple tuples, each comprising a training instance and a corresponding training label. Two training instances, one from a minority class and one from a majority class, are selected from the set of training data based on entropies for the training instances. A synthetic training instance is generated by combining the two selected training instances, and a corresponding synthetic training label is generated. A tuple including the synthetic training instance and the synthetic training label is added to the set of training data, resulting in an augmented training data set. One or more such synthetic training instances can be added to the augmented training data set, and the system is then re-trained on the augmented training data set.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: August 23, 2022
    Assignee: Adobe Inc.
    Inventors: Pinkesh Badjatiya, Nikaash Puri, Ayush Chopra, Anubha Kabra
  • Publication number: 20220245391
    Abstract: Techniques are disclosed for text-conditioned image searching. A methodology implementing the techniques includes decomposing a source image into visual feature vectors associated with different levels of granularity. The method also includes decomposing a text query (defining a target image attribute) into feature vectors associated with different levels of granularity including a global text feature vector. The method further includes generating image-text embeddings based on the visual feature vectors and the text feature vectors to encode information from visual and textual features. The method further includes composing a visio-linguistic representation based on a hierarchical aggregation of the image-text embeddings to encode visual and textual information at multiple levels of granularity.
    Type: Application
    Filed: January 28, 2021
    Publication date: August 4, 2022
    Applicant: Adobe Inc.
    Inventors: Pinkesh Badjatiya, Surgan Jandial, Pranit Chawla, Mausoom Sarkar, Ayush Chopra
  • Publication number: 20220237406
    Abstract: Techniques are disclosed for text-conditioned image searching. A methodology implementing the techniques according to an embodiment includes receiving a source image and a text query defining a target image attribute. The method also includes decomposing the source image into image content and style feature vectors and decomposing the text query into text content and style feature vectors, wherein image style is descriptive of image content and text style is descriptive of text content. The method further includes composing a global content feature vector based on the text content feature vector and the image content feature vector and composing a global style feature vector based on the text style feature vector and the image style feature vector. The method further includes identifying a target image that relates to the global content feature vector and the global style feature vector so that the target image relates to the target image attribute.
    Type: Application
    Filed: January 28, 2021
    Publication date: July 28, 2022
    Applicant: Adobe Inc.
    Inventors: Pinkesh Badjatiya, Surgan Jandial, Pranit Chawla, Mausoom Sarkar, Ayush Chopra
  • Patent number: 11386144
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for generating tags for an object portrayed in a digital image based on predicted attributes of the object. For example, the disclosed systems can utilize interleaved neural network layers of alternating inception layers and dilated convolution layers to generate a localization feature vector. Based on the localization feature vector, the disclosed systems can generate attribute localization feature embeddings, for example, using a pooling layer such as a global average pooling layer. The disclosed systems can then apply the attribute localization feature embeddings to corresponding attribute group classifiers to generate tags based on predicted attributes. In particular, attribute group classifiers can predict attributes as associated with a query image (e.g., based on a scoring comparison with other potential attributes of an attribute group).
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: July 12, 2022
    Assignee: Adobe Inc.
    Inventors: Ayush Chopra, Mausoom Sarkar, Jonas Dahl, Hiresh Gupta, Balaji Krishnamurthy, Abhishek Sinha
  • Patent number: 11367271
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for one-shot and few-shot image segmentation on classes of objects that were not represented during training. In some embodiments, a dual prediction scheme may be applied in which query and support masks are jointly predicted using a shared decoder, which aids in similarity propagation between the query and support features. Additionally or alternatively, foreground and background attentive fusion may be applied to utilize cues from foreground and background feature similarities between the query and support images. Finally, to prevent overfitting on class-conditional similarities across training classes, input channel averaging may be applied for the query image during training. Accordingly, the techniques described herein may be used to achieve state-of-the-art performance for both one-shot and few-shot segmentation tasks.
    Type: Grant
    Filed: June 19, 2020
    Date of Patent: June 21, 2022
    Assignee: Adobe Inc.
    Inventors: Mayur Hemani, Siddhartha Gairola, Ayush Chopra, Balaji Krishnamurthy, Jonas Dahl
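
    The sketch below illustrates two ingredients mentioned in the abstract: foreground and background similarity cues computed from masked support prototypes, and input channel averaging applied to the query image during training. The shared decoder and attentive fusion are reduced to cosine-similarity maps here, and every tensor is a random placeholder.

    ```python
    # Illustrative sketch: fg/bg prototype similarity cues and query channel averaging.
    import torch
    import torch.nn.functional as F

    def prototypes(support_feat, support_mask):
        """Masked average pooling -> foreground and background prototype vectors."""
        mask = F.interpolate(support_mask, size=support_feat.shape[-2:], mode="nearest")
        fg = (support_feat * mask).sum(dim=(2, 3)) / mask.sum(dim=(2, 3)).clamp(min=1e-6)
        bg = (support_feat * (1 - mask)).sum(dim=(2, 3)) / (1 - mask).sum(dim=(2, 3)).clamp(min=1e-6)
        return fg, bg

    def similarity_cues(query_feat, fg, bg):
        # Cosine similarity of every query location to the fg/bg prototypes.
        q = F.normalize(query_feat, dim=1)
        sims = [F.cosine_similarity(q, p[:, :, None, None], dim=1) for p in (fg, bg)]
        return torch.stack(sims, dim=1)           # (N, 2, H, W) cues fed to the decoder

    def channel_average(image):
        # Input channel averaging for the query image during training (regularization).
        return image.mean(dim=1, keepdim=True).expand_as(image)

    query_img = channel_average(torch.rand(1, 3, 64, 64))
    query_feat, support_feat = torch.randn(1, 32, 16, 16), torch.randn(1, 32, 16, 16)
    support_mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
    fg, bg = prototypes(support_feat, support_mask)
    print(query_img.shape)                             # channel-averaged query for training
    print(similarity_cues(query_feat, fg, bg).shape)   # torch.Size([1, 2, 16, 16])
    ```
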
  • Publication number: 20210397876
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for one-shot and few-shot image segmentation on classes of objects that were not represented during training. In some embodiments, a dual prediction scheme may be applied in which query and support masks are jointly predicted using a shared decoder, which aids in similarity propagation between the query and support features. Additionally or alternatively, foreground and background attentive fusion may be applied to utilize cues from foreground and background feature similarities between the query and support images. Finally, to prevent overfitting on class-conditional similarities across training classes, input channel averaging may be applied for the query image during training. Accordingly, the techniques described herein may be used to achieve state-of-the-art performance for both one-shot and few-shot segmentation tasks.
    Type: Application
    Filed: June 19, 2020
    Publication date: December 23, 2021
    Inventors: Mayur Hemani, Siddhartha Gairola, Ayush Chopra, Balaji Krishnamurthy, Jonas Dahl
  • Publication number: 20210342701
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for predicting visual compatibility between a bundle of catalog items (e.g., a partial outfit) and a candidate catalog item to add to the bundle. Visual compatibility prediction may be jointly conditioned on item type, context, and style by determining a first compatibility score jointly conditioned on type (e.g., category) and context, determining a second compatibility score conditioned on outfit style, and combining the first and second compatibility scores into a unified visual compatibility score. A unified visual compatibility score may be determined for each of a plurality of candidate items, and the candidate item with the highest unified visual compatibility score may be selected to add to the bundle (e.g., fill in the blank for the partial outfit).
    Type: Application
    Filed: May 4, 2020
    Publication date: November 4, 2021
    Inventors: Kumar Ayush, Ayush Chopra, Patel Utkarsh Govind, Balaji Krishnamurthy, Anirudh Singhal