Patents by Inventor Surgan Jandial
Surgan Jandial has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20240119122
  Abstract: Systems and methods for data augmentation are provided. One aspect of the systems and methods includes receiving an image that is misclassified by a classification network; computing an augmentation image based on the image using an augmentation network; and generating an augmented image by combining the image and the augmentation image, wherein the augmented image is correctly classified by the classification network.
  Type: Application
  Filed: October 11, 2022
  Publication date: April 11, 2024
  Inventors: Shripad Vilasrao Deshmukh, Surgan Jandial, Abhinav Java, Milan Aggarwal, Mausoom Sarkar, Arneh Jain, Balaji Krishnamurthy
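The combining step this abstract describes can be sketched as an iterative blend of the misclassified image with the output of an augmentation network, repeated until the classifier is satisfied. The blending weight, the retry cap, and the callable classifier/augmentation-network stubs below are illustrative assumptions, not the claimed method:

```python
import numpy as np

def augment_until_correct(image, augmentation_net, classifier, label,
                          weight=0.2, max_tries=5):
    """Blend the misclassified image with augmentation images until the
    classification network predicts the correct label (or tries run out).
    `weight` and `max_tries` are illustrative, not from the patent."""
    augmented = image
    for _ in range(max_tries):
        if classifier(augmented) == label:
            return augmented
        aug_img = augmentation_net(augmented)
        # Combine the image and the augmentation image, keeping pixels in range.
        augmented = np.clip((1.0 - weight) * augmented + weight * aug_img, 0.0, 1.0)
    return augmented
```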
- Publication number: 20240070816
  Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure receive a reference image depicting a reference object with a target spatial attribute; generate object saliency noise based on the reference image by updating random noise to resemble the reference image; and generate an output image based on the object saliency noise, wherein the output image depicts an output object with the target spatial attribute.
  Type: Application
  Filed: August 31, 2022
  Publication date: February 29, 2024
  Inventors: Surgan Jandial, Siddarth Ramesh, Shripad Vilasrao Deshmukh, Balaji Krishnamurthy
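The "updating random noise to resemble the reference image" step can be sketched as gradient descent from a random start toward the reference; the plain MSE objective, step size, and iteration count here are illustrative stand-ins for whatever objective the disclosed system uses:

```python
import numpy as np

def object_saliency_noise(reference, steps=100, lr=0.1):
    """Start from random noise and take gradient steps so the noise comes
    to resemble the reference image (simple MSE descent, illustrative)."""
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(reference.shape)
    for _ in range(steps):
        grad = 2.0 * (noise - reference)  # d/d(noise) of ||noise - reference||^2
        noise -= lr * grad
    return noise
```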
- Publication number: 20240062057
  Abstract: This disclosure describes one or more implementations of systems, non-transitory computer-readable media, and methods that regularize learning targets for a student network by leveraging past state outputs of the student network with outputs of a teacher network to determine a retrospective knowledge distillation loss. For example, the disclosed systems utilize past outputs from a past state of a student network with outputs of a teacher network to compose student-regularized teacher outputs that regularize training targets by making the training targets similar to student outputs while preserving semantics from the teacher training targets. Additionally, the disclosed systems utilize the student-regularized teacher outputs with student outputs of the present state to generate retrospective knowledge distillation losses.
  Type: Application
  Filed: August 9, 2022
  Publication date: February 22, 2024
  Inventors: Surgan Jandial, Nikaash Puri, Balaji Krishnamurthy
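The composition of student-regularized teacher outputs described above can be sketched as a blend of teacher logits with the student's own past logits, followed by a standard distillation cross-entropy. The mixing weight `alpha` and the specific blend/loss forms are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def retrospective_kd_target(teacher_logits, past_student_logits, alpha=0.5):
    """Blend teacher outputs with the student's past-state outputs to form
    a student-regularized teacher target (alpha is an illustrative weight)."""
    return softmax(alpha * teacher_logits + (1.0 - alpha) * past_student_logits)

def retrospective_kd_loss(student_logits, target_probs):
    """Cross-entropy between the regularized target and the student's
    present-state outputs."""
    return float(-(target_probs * np.log(softmax(student_logits))).sum())
```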
- Patent number: 11874902
  Abstract: Techniques are disclosed for text conditioned image searching. A methodology implementing the techniques according to an embodiment includes receiving a source image and a text query defining a target image attribute. The method also includes decomposing the source image into image content and style feature vectors and decomposing the text query into text content and style feature vectors, wherein image style is descriptive of image content and text style is descriptive of text content. The method further includes composing a global content feature vector based on the text content feature vector and the image content feature vector and composing a global style feature vector based on the text style feature vector and the image style feature vector. The method further includes identifying a target image that relates to the global content feature vector and the global style feature vector so that the target image relates to the target image attribute.
  Type: Grant
  Filed: January 28, 2021
  Date of Patent: January 16, 2024
  Assignee: Adobe Inc.
  Inventors: Pinkesh Badjatiya, Surgan Jandial, Pranit Chawla, Mausoom Sarkar, Ayush Chopra
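The compose-then-retrieve flow in this abstract can be sketched as follows. The sigmoid-gated blend standing in for the learned composition module, and cosine similarity as the retrieval score, are illustrative assumptions rather than the patented components:

```python
import numpy as np

def compose(image_feat, text_feat):
    """Compose a global feature from an image-stream and a text-stream
    feature; a gated sum stands in for the learned composition."""
    gate = 1.0 / (1.0 + np.exp(-(image_feat + text_feat)))  # sigmoid gate
    return gate * image_feat + (1.0 - gate) * text_feat

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_targets(src_content, src_style, txt_content, txt_style, gallery):
    """Score each candidate (content_vec, style_vec) pair against the
    composed global content and global style features; return best index."""
    g_content = compose(src_content, txt_content)
    g_style = compose(src_style, txt_style)
    scores = [cosine(g_content, c) + cosine(g_style, s) for c, s in gallery]
    return int(np.argmax(scores))
```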
- Patent number: 11797823
  Abstract: Generating a machine learning model that is trained using retrospective loss is described. A retrospective loss system receives an untrained machine learning model and a task for training the model. The retrospective loss system initially trains the model over warm-up iterations using task-specific loss that is determined based on a difference between predictions output by the model during training on input data and a ground truth dataset for the input data. Following the warm-up training iterations, the retrospective loss system continues to train the model using retrospective loss, which is model-agnostic and constrains the model such that a subsequently output prediction is more similar to the ground truth dataset than the previously output prediction. After determining that the model's outputs are within a threshold similarity to the ground truth dataset, the model is output with its current parameters as a trained model.
  Type: Grant
  Filed: February 18, 2020
  Date of Patent: October 24, 2023
  Assignee: Adobe Inc.
  Inventors: Ayush Chopra, Balaji Krishnamurthy, Mausoom Sarkar, Surgan Jandial
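The constraint described — pull the current prediction toward the ground truth and away from the model's own past prediction — can be sketched as a two-term loss. The L1 distance and the scaling factor `kappa` below are illustrative choices:

```python
import numpy as np

def retrospective_loss(pred, target, past_pred, kappa=2.0):
    """Reward predictions that are closer to the ground truth than to the
    model's past-state prediction (illustrative L1 form)."""
    pull = (kappa + 1.0) * np.abs(pred - target).sum()   # toward ground truth
    push = kappa * np.abs(pred - past_pred).sum()        # away from past prediction
    return float(pull - push)
```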
- Patent number: 11720651
  Abstract: Techniques are disclosed for text-conditioned image searching. A methodology implementing the techniques includes decomposing a source image into visual feature vectors associated with different levels of granularity. The method also includes decomposing a text query (defining a target image attribute) into feature vectors associated with different levels of granularity including a global text feature vector. The method further includes generating image-text embeddings based on the visual feature vectors and the text feature vectors to encode information from visual and textual features. The method further includes composing a visio-linguistic representation based on a hierarchical aggregation of the image-text embeddings to encode visual and textual information at multiple levels of granularity.
  Type: Grant
  Filed: January 28, 2021
  Date of Patent: August 8, 2023
  Assignee: Adobe Inc.
  Inventors: Pinkesh Badjatiya, Surgan Jandial, Pranit Chawla, Mausoom Sarkar, Ayush Chopra
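The hierarchical aggregation step can be sketched as per-level fusion of image and text features followed by aggregation across levels. The elementwise-product fusion, mean aggregation, and the assumption that every level is projected to a common dimension are all illustrative stand-ins for the learned modules:

```python
import numpy as np

def hierarchical_representation(image_feats, text_feats):
    """Fuse image and text features level by level (elementwise product as
    an illustrative image-text embedding), then aggregate across levels.
    Assumes each level's features share one projected dimension."""
    fused = [img * txt for img, txt in zip(image_feats, text_feats)]
    return np.mean(fused, axis=0)  # visio-linguistic representation
```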
- Publication number: 20220245391
  Abstract: Techniques are disclosed for text-conditioned image searching. A methodology implementing the techniques includes decomposing a source image into visual feature vectors associated with different levels of granularity. The method also includes decomposing a text query (defining a target image attribute) into feature vectors associated with different levels of granularity including a global text feature vector. The method further includes generating image-text embeddings based on the visual feature vectors and the text feature vectors to encode information from visual and textual features. The method further includes composing a visio-linguistic representation based on a hierarchical aggregation of the image-text embeddings to encode visual and textual information at multiple levels of granularity.
  Type: Application
  Filed: January 28, 2021
  Publication date: August 4, 2022
  Applicant: Adobe Inc.
  Inventors: Pinkesh Badjatiya, Surgan Jandial, Pranit Chawla, Mausoom Sarkar, Ayush Chopra
- Publication number: 20220237406
  Abstract: Techniques are disclosed for text conditioned image searching. A methodology implementing the techniques according to an embodiment includes receiving a source image and a text query defining a target image attribute. The method also includes decomposing the source image into image content and style feature vectors and decomposing the text query into text content and style feature vectors, wherein image style is descriptive of image content and text style is descriptive of text content. The method further includes composing a global content feature vector based on the text content feature vector and the image content feature vector and composing a global style feature vector based on the text style feature vector and the image style feature vector. The method further includes identifying a target image that relates to the global content feature vector and the global style feature vector so that the target image relates to the target image attribute.
  Type: Application
  Filed: January 28, 2021
  Publication date: July 28, 2022
  Applicant: Adobe Inc.
  Inventors: Pinkesh Badjatiya, Surgan Jandial, Pranit Chawla, Mausoom Sarkar, Ayush Chopra
- Publication number: 20210256387
  Abstract: Generating a machine learning model that is trained using retrospective loss is described. A retrospective loss system receives an untrained machine learning model and a task for training the model. The retrospective loss system initially trains the model over warm-up iterations using task-specific loss that is determined based on a difference between predictions output by the model during training on input data and a ground truth dataset for the input data. Following the warm-up training iterations, the retrospective loss system continues to train the model using retrospective loss, which is model-agnostic and constrains the model such that a subsequently output prediction is more similar to the ground truth dataset than the previously output prediction. After determining that the model's outputs are within a threshold similarity to the ground truth dataset, the model is output with its current parameters as a trained model.
  Type: Application
  Filed: February 18, 2020
  Publication date: August 19, 2021
  Applicant: Adobe Inc.
  Inventors: Ayush Chopra, Balaji Krishnamurthy, Mausoom Sarkar, Surgan Jandial
- Patent number: 11080817
  Abstract: Generating a synthesized image of a person wearing clothing is described. A two-dimensional reference image depicting a person wearing an article of clothing and a two-dimensional image of target clothing in which the person is to be depicted as wearing are received. To generate the synthesized image, a warped image of the target clothing is generated via a geometric matching module, which implements a machine learning model trained to recognize similarities between warped and non-warped clothing images using multi-scale patch adversarial loss. The multi-scale patch adversarial loss is determined by sampling patches of different sizes from corresponding locations of warped and non-warped clothing images. The synthesized image is generated on a per-person basis, such that the target clothing fits the particular body shape, pose, and unique characteristics of the person.
  Type: Grant
  Filed: November 4, 2019
  Date of Patent: August 3, 2021
  Assignee: Adobe Inc.
  Inventors: Kumar Ayush, Surgan Jandial, Mayur Hemani, Balaji Krishnamurthy, Ayush Chopra
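The patch-sampling idea — patches of different sizes taken from corresponding locations of the warped and non-warped images, each scored by a discriminator — can be sketched as follows. The squared score gap, the patch sizes, and the fixed corner location are illustrative simplifications of an adversarial loss:

```python
import numpy as np

def sample_patches(image, sizes, top_left=(0, 0)):
    """Crop patches of several sizes from the same (corresponding) location."""
    y, x = top_left
    return [image[y:y + s, x:x + s] for s in sizes]

def multiscale_patch_loss(warped, reference, disc, sizes=(4, 8, 16)):
    """Average a discriminator's score gap over corresponding patches
    sampled at each scale from warped and non-warped clothing images."""
    pairs = zip(sample_patches(warped, sizes), sample_patches(reference, sizes))
    return sum((disc(pw) - disc(pr)) ** 2 for pw, pr in pairs) / len(sizes)
```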
- Patent number: 11030782
  Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a virtual try-on digital image utilizing a unified neural network framework. For example, the disclosed systems can utilize a coarse-to-fine warping process to generate a warped version of a product digital image to fit a model digital image. In addition, the disclosed systems can utilize a texture transfer process to generate a corrected segmentation mask indicating portions of a model digital image to replace with a warped product digital image. The disclosed systems can further generate a virtual try-on digital image based on a warped product digital image, a model digital image, and a corrected segmentation mask. In some embodiments, the disclosed systems can train one or more neural networks to generate accurate outputs for various stages of generating a virtual try-on digital image.
  Type: Grant
  Filed: November 9, 2019
  Date of Patent: June 8, 2021
  Assignee: Adobe Inc.
  Inventors: Kumar Ayush, Surgan Jandial, Abhijeet Kumar, Mayur Hemani, Balaji Krishnamurthy, Ayush Chopra
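The final compositing step — combining the warped product image, the model image, and the corrected segmentation mask — can be sketched as a straightforward masked blend; this is only the assembly stage, not the warping or texture-transfer networks:

```python
import numpy as np

def composite_try_on(model_img, warped_product, mask):
    """Replace the portions of the model image selected by the corrected
    segmentation mask (1 = product pixel) with the warped product image."""
    return mask * warped_product + (1.0 - mask) * model_img
```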
- Publication number: 20210142539
  Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a virtual try-on digital image utilizing a unified neural network framework. For example, the disclosed systems can utilize a coarse-to-fine warping process to generate a warped version of a product digital image to fit a model digital image. In addition, the disclosed systems can utilize a texture transfer process to generate a corrected segmentation mask indicating portions of a model digital image to replace with a warped product digital image. The disclosed systems can further generate a virtual try-on digital image based on a warped product digital image, a model digital image, and a corrected segmentation mask. In some embodiments, the disclosed systems can train one or more neural networks to generate accurate outputs for various stages of generating a virtual try-on digital image.
  Type: Application
  Filed: November 9, 2019
  Publication date: May 13, 2021
  Inventors: Kumar Ayush, Surgan Jandial, Abhijeet Kumar, Mayur Hemani, Balaji Krishnamurthy, Ayush Chopra
- Publication number: 20210133919
  Abstract: Generating a synthesized image of a person wearing clothing is described. A two-dimensional reference image depicting a person wearing an article of clothing and a two-dimensional image of target clothing in which the person is to be depicted as wearing are received. To generate the synthesized image, a warped image of the target clothing is generated via a geometric matching module, which implements a machine learning model trained to recognize similarities between warped and non-warped clothing images using multi-scale patch adversarial loss. The multi-scale patch adversarial loss is determined by sampling patches of different sizes from corresponding locations of warped and non-warped clothing images. The synthesized image is generated on a per-person basis, such that the target clothing fits the particular body shape, pose, and unique characteristics of the person.
  Type: Application
  Filed: November 4, 2019
  Publication date: May 6, 2021
  Applicant: Adobe Inc.
  Inventors: Kumar Ayush, Surgan Jandial, Mayur Hemani, Balaji Krishnamurthy, Ayush Chopra