Patents by Inventor Surgan Jandial

Surgan Jandial has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240119122
    Abstract: Systems and methods for data augmentation are provided. One aspect of the systems and methods include receiving an image that is misclassified by a classification network; computing an augmentation image based on the image using an augmentation network; and generating an augmented image by combining the image and the augmentation image, wherein the augmented image is correctly classified by the classification network.
    Type: Application
    Filed: October 11, 2022
    Publication date: April 11, 2024
    Inventors: Shripad Vilasrao Deshmukh, Surgan Jandial, Abhinav Java, Milan Aggarwal, Mausoom Sarkar, Arneh Jain, Balaji Krishnamurthy
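    Illustrative sketch (not part of the filing): a minimal PyTorch example, assuming a small convolutional augmentation network, an additive combination, and a toy frozen classifier, of the flow described in the abstract above: the augmentation network produces an augmentation image that is combined with a misclassified input so the classifier predicts the correct label.
    ```python
    # Hypothetical sketch of the augmentation idea above. Module names, shapes,
    # and the additive combination are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AugmentationNetwork(nn.Module):
        """Small conv net that maps an image to an augmentation image."""
        def __init__(self, channels: int = 3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),  # bounded output
            )

        def forward(self, image: torch.Tensor) -> torch.Tensor:
            return self.net(image)

    def augmentation_step(classifier, aug_net, optimizer, image, label):
        """One training step: combine the image with its augmentation image and
        encourage the frozen classifier to predict the correct label."""
        augmentation_image = aug_net(image)
        augmented = (image + augmentation_image).clamp(0.0, 1.0)  # combine
        loss = F.cross_entropy(classifier(augmented), label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.detach()

    if __name__ == "__main__":
        torch.manual_seed(0)
        # Toy frozen classifier standing in for the real classification network.
        classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
        for p in classifier.parameters():
            p.requires_grad_(False)
        aug_net = AugmentationNetwork()
        opt = torch.optim.Adam(aug_net.parameters(), lr=1e-3)
        image = torch.rand(4, 3, 32, 32)    # batch of "misclassified" images
        label = torch.randint(0, 10, (4,))  # their true labels
        print(float(augmentation_step(classifier, aug_net, opt, image, label)))
    ```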
  • Publication number: 20240070816
    Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure receive a reference image depicting a reference object with a target spatial attribute; generate object saliency noise based on the reference image by updating random noise to resemble the reference image; and generate an output image based on the object saliency noise, wherein the output image depicts an output object with the target spatial attribute.
    Type: Application
    Filed: August 31, 2022
    Publication date: February 29, 2024
    Inventors: Surgan Jandial, Siddarth Ramesh, Shripad Vilasrao Deshmukh, Balaji Krishnamurthy
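    Illustrative sketch (not part of the filing): a minimal example, assuming a pixel-space MSE objective and no real generator, of the idea in the abstract above: random noise is updated by gradient descent to resemble the reference image and kept as a spatially structured seed for later generation.
    ```python
    # Hedged sketch of "object saliency noise": update random noise toward a
    # reference image. The MSE distance and the missing generator are assumptions.
    import torch
    import torch.nn.functional as F

    def object_saliency_noise(reference: torch.Tensor,
                              steps: int = 200,
                              lr: float = 0.1) -> torch.Tensor:
        """Update random noise toward the reference image with gradient descent."""
        noise = torch.randn_like(reference, requires_grad=True)
        opt = torch.optim.Adam([noise], lr=lr)
        for _ in range(steps):
            loss = F.mse_loss(noise, reference)  # "resemble the reference image"
            opt.zero_grad()
            loss.backward()
            opt.step()
        return noise.detach()

    if __name__ == "__main__":
        torch.manual_seed(0)
        reference = torch.rand(1, 3, 64, 64)           # reference object / layout
        seed_noise = object_saliency_noise(reference)  # spatially structured noise
        # A real system would feed `seed_noise` to a generative model so the
        # output object inherits the reference's spatial attribute; here we just
        # report how close the noise has come to the reference.
        print(float(F.mse_loss(seed_noise, reference)))
    ```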
  • Publication number: 20240062057
    Abstract: This disclosure describes one or more implementations of systems, non-transitory computer-readable media, and methods that regularize learning targets for a student network by leveraging past state outputs of the student network with outputs of a teacher network to determine a retrospective knowledge distillation loss. For example, the disclosed systems utilize past outputs from a past state of a student network with outputs of a teacher network to compose student-regularized teacher outputs that regularize training targets by making the training targets similar to student outputs while preserving semantics from the teacher training targets. Additionally, the disclosed systems utilize the student-regularized teacher outputs with student outputs of the present states to generate retrospective knowledge distillation losses.
    Type: Application
    Filed: August 9, 2022
    Publication date: February 22, 2024
    Inventors: Surgan Jandial, Nikaash Puri, Balaji Krishnamurthy
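    Illustrative sketch (not part of the filing): a minimal loss function, assuming a convex blend and temperature scaling, of the retrospective distillation described above: outputs from a past state of the student are blended with teacher outputs to form student-regularized targets, and the present student is trained toward them.
    ```python
    # Hedged sketch of a retrospective knowledge distillation loss. The convex
    # blend weight alpha and the temperature are assumptions, not the claims.
    import torch
    import torch.nn.functional as F

    def retrospective_kd_loss(student_logits: torch.Tensor,
                              past_student_logits: torch.Tensor,
                              teacher_logits: torch.Tensor,
                              alpha: float = 0.3,
                              temperature: float = 4.0) -> torch.Tensor:
        """KL distillation loss against student-regularized teacher targets."""
        t = temperature
        teacher_probs = F.softmax(teacher_logits / t, dim=-1)
        past_probs = F.softmax(past_student_logits / t, dim=-1)
        # Targets keep the teacher's semantics but lean toward what the past
        # student already produced (the "regularization" of the targets).
        targets = (1.0 - alpha) * teacher_probs + alpha * past_probs
        log_student = F.log_softmax(student_logits / t, dim=-1)
        return F.kl_div(log_student, targets, reduction="batchmean") * t * t

    if __name__ == "__main__":
        torch.manual_seed(0)
        current = torch.randn(8, 10)  # present-state student outputs
        past = torch.randn(8, 10)     # outputs saved from an earlier checkpoint
        teacher = torch.randn(8, 10)  # teacher outputs for the same batch
        print(float(retrospective_kd_loss(current, past, teacher)))
    ```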
  • Patent number: 11874902
    Abstract: Techniques are disclosed for text conditioned image searching. A methodology implementing the techniques according to an embodiment includes receiving a source image and a text query defining a target image attribute. The method also includes decomposing the source image into image content and style feature vectors and decomposing the text query into text content and style feature vectors, wherein image style is descriptive of image content and text style is descriptive of text content. The method further includes composing a global content feature vector based on the text content feature vector and the image content feature vector and composing a global style feature vector based on the text style feature vector and the image style feature vector. The method further includes identifying a target image that relates to the global content feature vector and the global style feature vector so that the target image relates to the target image attribute.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Pinkesh Badjatiya, Surgan Jandial, Pranit Chawla, Mausoom Sarkar, Ayush Chopra
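    Illustrative sketch (not part of the filing): a minimal retrieval example, assuming random stand-in encoders, additive fusion, and cosine ranking, of the composition in the abstract above: image and text are each split into content and style vectors, fused into global content and style vectors, and used to rank a gallery.
    ```python
    # Hedged sketch of content/style composition for text-conditioned retrieval.
    # The "encoders", fusion, and ranking below are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    DIM = 64

    def fake_encode(x: torch.Tensor, seed: int):
        """Stand-in encoder returning (content, style) feature vectors."""
        g = torch.Generator().manual_seed(seed)
        w_content = torch.randn(x.shape[-1], DIM, generator=g)
        w_style = torch.randn(x.shape[-1], DIM, generator=g)
        return x @ w_content, x @ w_style

    def rank_gallery(source_image_feat, text_feat, gallery_feats):
        img_content, img_style = fake_encode(source_image_feat, seed=1)
        txt_content, txt_style = fake_encode(text_feat, seed=2)
        global_content = img_content + txt_content  # compose content
        global_style = img_style + txt_style        # compose style
        query = F.normalize(torch.cat([global_content, global_style], dim=-1), dim=-1)
        gallery = F.normalize(gallery_feats, dim=-1)
        scores = gallery @ query.squeeze(0)         # cosine similarity
        return scores.argsort(descending=True)

    if __name__ == "__main__":
        torch.manual_seed(0)
        source_image = torch.rand(1, 128)  # features of the source image
        text_query = torch.rand(1, 128)    # features of the text query
        gallery = torch.rand(100, 2 * DIM) # precomputed gallery embeddings
        print(rank_gallery(source_image, text_query, gallery)[:5])
    ```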
  • Patent number: 11797823
    Abstract: Generating a machine learning model that is trained using retrospective loss is described. A retrospective loss system receives an untrained machine learning model and a task for training the model. The retrospective loss system initially trains the model over warm-up iterations using task-specific loss that is determined based on a difference between predictions output by the model during training on input data and a ground truth dataset for the input data. Following the warm-up training iterations, the retrospective loss system continues to train the model using retrospective loss, which is model-agnostic and constrains the model such that a subsequently output prediction is more similar to the ground truth dataset than the previously output prediction. After determining that the model's outputs are within a threshold similarity to the ground truth dataset, the model is output with its current parameters as a trained model.
    Type: Grant
    Filed: February 18, 2020
    Date of Patent: October 24, 2023
    Assignee: Adobe Inc.
    Inventors: Ayush Chopra, Balaji Krishnamurthy, Mausoom Sarkar, Surgan Jandial
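    Illustrative sketch (not part of the filing): a minimal training loop, assuming L1 distances, a margin-style term, and a periodic snapshot of the model as its "past state", of the idea above: after warm-up on the task loss alone, an extra term pulls the current prediction toward the ground truth and away from the earlier prediction.
    ```python
    # Hedged sketch of retrospective-loss training. The margin form, distances,
    # kappa, and snapshot schedule are assumptions, not a restatement of claims.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def retrospective_term(pred, target, past_pred, kappa: float = 2.0):
        """Encourage pred to be closer to the target than to the past prediction."""
        to_target = F.l1_loss(pred, target)
        to_past = F.l1_loss(pred, past_pred.detach())
        return F.relu(kappa * to_target - to_past)

    if __name__ == "__main__":
        torch.manual_seed(0)
        model = nn.Linear(16, 1)
        past_model = nn.Linear(16, 1)  # frozen copy of an earlier state
        past_model.load_state_dict(model.state_dict())
        opt = torch.optim.SGD(model.parameters(), lr=0.05)
        x, y = torch.rand(64, 16), torch.rand(64, 1)
        warmup_steps, update_every = 20, 10
        for step in range(100):
            pred = model(x)
            loss = F.mse_loss(pred, y)        # task-specific loss
            if step >= warmup_steps:          # add retrospective loss after warm-up
                loss = loss + retrospective_term(pred, y, past_model(x))
            opt.zero_grad()
            loss.backward()
            opt.step()
            if step % update_every == 0:      # refresh the "past state" snapshot
                past_model.load_state_dict(model.state_dict())
        print(float(F.mse_loss(model(x), y)))
    ```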
  • Patent number: 11720651
    Abstract: Techniques are disclosed for text-conditioned image searching. A methodology implementing the techniques includes decomposing a source image into visual feature vectors associated with different levels of granularity. The method also includes decomposing a text query (defining a target image attribute) into feature vectors associated with different levels of granularity including a global text feature vector. The method further includes generating image-text embeddings based on the visual feature vectors and the text feature vectors to encode information from visual and textual features. The method further includes composing a visio-linguistic representation based on a hierarchical aggregation of the image-text embeddings to encode visual and textual information at multiple levels of granularity.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: August 8, 2023
    Assignee: Adobe Inc.
    Inventors: Pinkesh Badjatiya, Surgan Jandial, Pranit Chawla, Mausoom Sarkar, Ayush Chopra
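    Illustrative sketch (not part of the filing): a minimal module, assuming gated per-level fusion and a simple running-average aggregation, of the hierarchy described above: visual features at several granularities are each fused with a global text vector, and the per-level image-text embeddings are aggregated into one visio-linguistic representation.
    ```python
    # Hedged sketch of hierarchical image-text composition. The gating and the
    # bottom-up averaging are illustrative assumptions.
    import torch
    import torch.nn as nn

    class LevelFusion(nn.Module):
        """Fuse one level's visual features with the global text features."""
        def __init__(self, dim: int):
            super().__init__()
            self.gate = nn.Linear(2 * dim, dim)

        def forward(self, visual: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
            g = torch.sigmoid(self.gate(torch.cat([visual, text], dim=-1)))
            return g * visual + (1.0 - g) * text  # per-level image-text embedding

    class HierarchicalComposer(nn.Module):
        def __init__(self, dim: int, levels: int = 3):
            super().__init__()
            self.fusions = nn.ModuleList([LevelFusion(dim) for _ in range(levels)])

        def forward(self, visual_levels, text_global):
            rep = None
            # Aggregate from the finest level up to the coarsest.
            for fusion, visual in zip(self.fusions, visual_levels):
                embedding = fusion(visual, text_global)
                rep = embedding if rep is None else 0.5 * (rep + embedding)
            return rep  # visio-linguistic representation used for retrieval

    if __name__ == "__main__":
        torch.manual_seed(0)
        dim = 32
        composer = HierarchicalComposer(dim)
        visual_levels = [torch.rand(1, dim) for _ in range(3)]  # fine, mid, coarse
        text_global = torch.rand(1, dim)                        # global text vector
        print(composer(visual_levels, text_global).shape)
    ```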
  • Publication number: 20220245391
    Abstract: Techniques are disclosed for text-conditioned image searching. A methodology implementing the techniques includes decomposing a source image into visual feature vectors associated with different levels of granularity. The method also includes decomposing a text query (defining a target image attribute) into feature vectors associated with different levels of granularity including a global text feature vector. The method further includes generating image-text embeddings based on the visual feature vectors and the text feature vectors to encode information from visual and textual features. The method further includes composing a visio-linguistic representation based on a hierarchical aggregation of the image-text embeddings to encode visual and textual information at multiple levels of granularity.
    Type: Application
    Filed: January 28, 2021
    Publication date: August 4, 2022
    Applicant: Adobe Inc.
    Inventors: Pinkesh Badjatiya, Surgan Jandial, Pranit Chawla, Mausoom Sarkar, Ayush Chopra
  • Publication number: 20220237406
    Abstract: Techniques are disclosed for text conditioned image searching. A methodology implementing the techniques according to an embodiment includes receiving a source image and a text query defining a target image attribute. The method also includes decomposing the source image into image content and style feature vectors and decomposing the text query into text content and style feature vectors, wherein image style is descriptive of image content and text style is descriptive of text content. The method further includes composing a global content feature vector based on the text content feature vector and the image content feature vector and composing a global style feature vector based on the text style feature vector and the image style feature vector. The method further includes identifying a target image that relates to the global content feature vector and the global style feature vector so that the target image relates to the target image attribute.
    Type: Application
    Filed: January 28, 2021
    Publication date: July 28, 2022
    Applicant: Adobe Inc.
    Inventors: Pinkesh Badjatiya, Surgan Jandial, Pranit Chawla, Mausoom Sarkar, Ayush Chopra
  • Publication number: 20210256387
    Abstract: Generating a machine learning model that is trained using retrospective loss is described. A retrospective loss system receives an untrained machine learning model and a task for training the model. The retrospective loss system initially trains the model over warm-up iterations using task-specific loss that is determined based on a difference between predictions output by the model during training on input data and a ground truth dataset for the input data. Following the warm-up training iterations, the retrospective loss system continues to train the model using retrospective loss, which is model-agnostic and constrains the model such that a subsequently output prediction is more similar to the ground truth dataset than the previously output prediction. After determining that the model's outputs are within a threshold similarity to the ground truth dataset, the model is output with its current parameters as a trained model.
    Type: Application
    Filed: February 18, 2020
    Publication date: August 19, 2021
    Applicant: Adobe Inc.
    Inventors: Ayush Chopra, Balaji Krishnamurthy, Mausoom Sarkar, Surgan Jandial
  • Patent number: 11080817
    Abstract: Generating a synthesized image of a person wearing clothing is described. A two-dimensional reference image depicting a person wearing an article of clothing and a two-dimensional image of target clothing in which the person is to be depicted as wearing are received. To generate the synthesized image, a warped image of the target clothing is generated via a geometric matching module, which implements a machine learning model trained to recognize similarities between warped and non-warped clothing images using multi-scale patch adversarial loss. The multi-scale patch adversarial loss is determined by sampling patches of different sizes from corresponding locations of warped and non-warped clothing images. The synthesized image is generated on a per-person basis, such that the target clothing fits the particular body shape, pose, and unique characteristics of the person.
    Type: Grant
    Filed: November 4, 2019
    Date of Patent: August 3, 2021
    Assignee: Adobe Inc.
    Inventors: Kumar Ayush, Surgan Jandial, Mayur Hemani, Balaji Krishnamurthy, Ayush Chopra
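    Illustrative sketch (not part of the filing): a minimal example, assuming particular patch sizes, a small patch discriminator, and hinge-style terms, of the multi-scale patch adversarial loss mentioned above: patches of several sizes are cropped from corresponding locations of the warped and non-warped clothing images and scored by the discriminator.
    ```python
    # Hedged sketch of a multi-scale patch adversarial loss. Patch sizes, the
    # discriminator, and the hinge losses are illustrative assumptions.
    import random
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def sample_patch_pair(warped, real, size):
        """Crop a patch of `size` from the same location in both images."""
        _, _, h, w = warped.shape
        top = random.randint(0, h - size)
        left = random.randint(0, w - size)
        return (warped[:, :, top:top + size, left:left + size],
                real[:, :, top:top + size, left:left + size])

    class PatchDiscriminator(nn.Module):
        def __init__(self, channels: int = 3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(32, 1, 3, padding=1),
            )

        def forward(self, patch):
            return self.net(patch)  # per-location real/fake scores

    def multi_scale_patch_adv_loss(disc, warped, real, sizes=(16, 32, 64)):
        """Discriminator hinge loss summed over patches of different sizes."""
        loss = 0.0
        for size in sizes:
            fake_patch, real_patch = sample_patch_pair(warped, real, size)
            loss = loss + (F.relu(1.0 - disc(real_patch)).mean()
                           + F.relu(1.0 + disc(fake_patch.detach())).mean())
        return loss

    if __name__ == "__main__":
        torch.manual_seed(0)
        random.seed(0)
        disc = PatchDiscriminator()
        warped = torch.rand(2, 3, 128, 128)  # output of the geometric matching step
        real = torch.rand(2, 3, 128, 128)    # non-warped clothing image
        print(float(multi_scale_patch_adv_loss(disc, warped, real)))
    ```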
  • Patent number: 11030782
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a virtual try-on digital image utilizing a unified neural network framework. For example, the disclosed systems can utilize a coarse-to-fine warping process to generate a warped version of a product digital image to fit a model digital image. In addition, the disclosed systems can utilize a texture transfer process to generate a corrected segmentation mask indicating portions of a model digital image to replace with a warped product digital image. The disclosed systems can further generate a virtual try-on digital image based on a warped product digital image, a model digital image, and a corrected segmentation mask. In some embodiments, the disclosed systems can train one or more neural networks to generate accurate outputs for various stages of generating a virtual try-on digital image.
    Type: Grant
    Filed: November 9, 2019
    Date of Patent: June 8, 2021
    Assignee: Adobe Inc.
    Inventors: Kumar Ayush, Surgan Jandial, Abhijeet Kumar, Mayur Hemani, Balaji Krishnamurthy, Ayush Chopra
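    Illustrative sketch (not part of the filing): a minimal pipeline example in which a plain resize stands in for the coarse-to-fine warping stage and a hand-made mask stands in for the corrected segmentation mask; only the final masked composition follows the abstract above directly, and everything else is an assumption.
    ```python
    # Hedged sketch of the try-on composition step. The stand-in warping and the
    # illustrative mask are assumptions; the blend reflects the abstract.
    import torch
    import torch.nn.functional as F

    def warp_product(product: torch.Tensor, size) -> torch.Tensor:
        """Placeholder for the coarse-to-fine warping network."""
        return F.interpolate(product, size=size, mode="bilinear", align_corners=False)

    def compose_try_on(model_img, warped_product, mask):
        """Replace masked regions of the model image with the warped product."""
        return mask * warped_product + (1.0 - mask) * model_img

    if __name__ == "__main__":
        torch.manual_seed(0)
        model_img = torch.rand(1, 3, 256, 192)  # person wearing something
        product = torch.rand(1, 3, 128, 96)     # catalogue product image
        warped = warp_product(product, (256, 192))
        mask = torch.zeros(1, 1, 256, 192)      # corrected segmentation mask
        mask[:, :, 64:192, 48:144] = 1.0        # torso region (illustrative)
        try_on = compose_try_on(model_img, warped, mask)
        print(try_on.shape)
    ```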
  • Publication number: 20210142539
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a virtual try-on digital image utilizing a unified neural network framework. For example, the disclosed systems can utilize a coarse-to-fine warping process to generate a warped version of a product digital image to fit a model digital image. In addition, the disclosed systems can utilize a texture transfer process to generate a corrected segmentation mask indicating portions of a model digital image to replace with a warped product digital image. The disclosed systems can further generate a virtual try-on digital image based on a warped product digital image, a model digital image, and a corrected segmentation mask. In some embodiments, the disclosed systems can train one or more neural networks to generate accurate outputs for various stages of generating a virtual try-on digital image.
    Type: Application
    Filed: November 9, 2019
    Publication date: May 13, 2021
    Inventors: Kumar Ayush, Surgan Jandial, Abhijeet Kumar, Mayur Hemani, Balaji Krishnamurthy, Ayush Chopra
  • Publication number: 20210133919
    Abstract: Generating a synthesized image of a person wearing clothing is described. A two-dimensional reference image depicting a person wearing an article of clothing and a two-dimensional image of target clothing in which the person is to be depicted as wearing are received. To generate the synthesized image, a warped image of the target clothing is generated via a geometric matching module, which implements a machine learning model trained to recognize similarities between warped and non-warped clothing images using multi-scale patch adversarial loss. The multi-scale patch adversarial loss is determined by sampling patches of different sizes from corresponding locations of warped and non-warped clothing images. The synthesized image is generated on a per-person basis, such that the target clothing fits the particular body shape, pose, and unique characteristics of the person.
    Type: Application
    Filed: November 4, 2019
    Publication date: May 6, 2021
    Applicant: Adobe Inc.
    Inventors: Kumar Ayush, Surgan Jandial, Mayur Hemani, Balaji Krishnamurthy, Ayush Chopra