Patents by Inventor Kumar AYUSH

Kumar AYUSH has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230316379
    Abstract: Systems, methods, and computer storage media are disclosed for predicting visual compatibility between a bundle of catalog items (e.g., a partial outfit) and a candidate catalog item to add to the bundle. Visual compatibility prediction may be jointly conditioned on item type, context, and style by determining a first compatibility score jointly conditioned on type (e.g., category) and context, determining a second compatibility score conditioned on outfit style, and combining the first and second compatibility scores into a unified visual compatibility score. A unified visual compatibility score may be determined for each of a plurality of candidate items, and the candidate item with the highest unified visual compatibility score may be selected to add to the bundle (e.g., fill in the blank for the partial outfit).
    Type: Application
    Filed: March 20, 2023
    Publication date: October 5, 2023
    Inventors: Kumar AYUSH, Ayush Chopra, Patel Utkarsh Govind, Balaji Krishnamurthy, Anirudh Singhal
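
    A minimal sketch of the scoring scheme described above, assuming hypothetical type_context_score and style_score callables in place of the patent's learned models, and a simple weighted sum as the combination rule:

    ```python
    import numpy as np

    def unified_compatibility(bundle, candidate, type_context_score, style_score, alpha=0.5):
        """Combine two compatibility scores into one unified score.

        type_context_score and style_score are placeholders for the learned
        models; alpha is an assumed mixing weight.
        """
        s1 = type_context_score(bundle, candidate)  # conditioned on item type and context
        s2 = style_score(bundle, candidate)         # conditioned on overall outfit style
        return alpha * s1 + (1 - alpha) * s2

    def fill_in_the_blank(bundle, candidates, type_context_score, style_score):
        """Pick the candidate with the highest unified compatibility score."""
        scores = [unified_compatibility(bundle, c, type_context_score, style_score)
                  for c in candidates]
        return candidates[int(np.argmax(scores))]
    ```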
  • Patent number: 11663463
    Abstract: A location-sensitive saliency prediction neural network generates location-sensitive saliency data for an image. The location-sensitive saliency prediction neural network includes, at least, a filter module, an inception module, and a location-bias module. The filter module extracts visual features at multiple contextual levels, and generates a feature map of the image. The inception module generates a multi-scale semantic structure, based on multiple scales of semantic content depicted in the image. In some cases, the inception module performs parallel analysis of the feature map, such as via multiple parallel layers, to determine the multiple scales of semantic content. The location-bias module generates a location-sensitive saliency map of location-dependent context of the image based on the multi-scale semantic structure and on a bias map. In some cases, the bias map indicates location-specific weights for one or more regions of the image.
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: May 30, 2023
    Assignee: Adobe Inc.
    Inventors: Kumar Ayush, Atishay Jain
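
    A rough PyTorch sketch of the three-module pipeline (filter, inception, location-bias); the channel widths, kernel sizes, and fixed 64x64 bias map are assumptions for illustration, not the patented architecture:

    ```python
    import torch
    import torch.nn as nn

    class LocationSensitiveSaliency(nn.Module):
        """Filter module -> inception module -> location-bias module."""

        def __init__(self, in_ch=3, feat_ch=32, h=64, w=64):
            super().__init__()
            # Filter module: extracts visual features into a feature map.
            self.filter = nn.Sequential(
                nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
            )
            # Inception module: parallel branches with different kernel sizes
            # stand in for the multiple scales of semantic content.
            self.branch1 = nn.Conv2d(feat_ch, feat_ch, 1)
            self.branch3 = nn.Conv2d(feat_ch, feat_ch, 3, padding=1)
            self.branch5 = nn.Conv2d(feat_ch, feat_ch, 5, padding=2)
            self.fuse = nn.Conv2d(3 * feat_ch, 1, 1)
            # Location-bias module: a learned per-pixel weight map (the bias map).
            self.bias_map = nn.Parameter(torch.ones(1, 1, h, w))

        def forward(self, x):
            feats = self.filter(x)
            multi_scale = torch.cat(
                [self.branch1(feats), self.branch3(feats), self.branch5(feats)], dim=1)
            saliency = torch.sigmoid(self.fuse(multi_scale))
            return saliency * self.bias_map  # location-sensitive saliency map
    ```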
  • Patent number: 11640634
    Abstract: Systems, methods, and computer storage media are disclosed for predicting visual compatibility between a bundle of catalog items (e.g., a partial outfit) and a candidate catalog item to add to the bundle. Visual compatibility prediction may be jointly conditioned on item type, context, and style by determining a first compatibility score jointly conditioned on type (e.g., category) and context, determining a second compatibility score conditioned on outfit style, and combining the first and second compatibility scores into a unified visual compatibility score. A unified visual compatibility score may be determined for each of a plurality of candidate items, and the candidate item with the highest unified visual compatibility score may be selected to add to the bundle (e.g., fill in the blank for the partial outfit).
    Type: Grant
    Filed: May 4, 2020
    Date of Patent: May 2, 2023
    Inventors: Kumar Ayush, Ayush Chopra, Patel Utkarsh Govind, Balaji Krishnamurthy, Anirudh Singhal
  • Patent number: 11238093
    Abstract: Systems and methods for content-based video retrieval are described. The systems and methods may break a video into multiple frames, generate a feature vector from the frames based on the temporal relationship between them, and then embed the feature vector into a vector space along with a vector representing a search query. In some embodiments, the video feature vector is converted into a text caption prior to the embedding. In other embodiments, the video feature vector and a sentence vector are each embedded into a common space using a joint video-sentence embedding model. Once the video and the search query are embedded into a common vector space, a distance between them may be calculated. After calculating the distance between the search query and a set of videos, the distances may be used to select a subset of the videos to present as the result of the search.
    Type: Grant
    Filed: October 15, 2019
    Date of Patent: February 1, 2022
    Assignee: Adobe Inc.
    Inventors: Kumar Ayush, Harnish Lakhani, Atishay Jain
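
    A minimal sketch of the retrieval step, assuming the video vectors and the query vector already live in the common embedding space (the embedding models themselves are not shown) and using cosine distance as the metric:

    ```python
    import numpy as np

    def cosine_distance(a, b):
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def retrieve(query_vec, video_vecs, k=5):
        """Return indices of the k videos closest to the query in the shared space."""
        dists = [cosine_distance(query_vec, v) for v in video_vecs]
        return np.argsort(dists)[:k]
    ```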
  • Publication number: 20210342701
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for predicting visual compatibility between a bundle of catalog items (e.g., a partial outfit) and a candidate catalog item to add to the bundle. Visual compatibility prediction may be jointly conditioned on item type, context, and style by determining a first compatibility score jointly conditioned on type (e.g., category) and context, determining a second compatibility score conditioned on outfit style, and combining the first and second compatibility scores into a unified visual compatibility score. A unified visual compatibility score may be determined for each of a plurality of candidate items, and the candidate item with the highest unified visual compatibility score may be selected to add to the bundle (e.g., fill in the blank for the partial outfit).
    Type: Application
    Filed: May 4, 2020
    Publication date: November 4, 2021
    Inventors: Kumar AYUSH, Ayush CHOPRA, Patel Utkarsh GOVIND, Balaji KRISHNAMURTHY, Anirudh SINGHAL
  • Patent number: 11158100
    Abstract: The present invention enables the automatic generation and recommendation of embedded images. An embedded image includes a visual representation of a context-appropriate object embedded within a scene image. The context and aesthetic properties (e.g., the colors, textures, lighting, position, orientation, and size) of the visual representation of the object may be automatically varied to increase an associated objective compatibility score that is based on the context and aesthetics of the scene image. The scene image may depict a visual representation of a scene, e.g., a background scene. Thus, a scene image may be a background image that depicts a background and/or scene to automatically pair with the object. The object may be a three-dimensional (3D) physical or virtual object. The automatically generated embedded image may be a composite image that includes an at least partially optimized visual representation of a context-appropriate object composited within the scene image.
    Type: Grant
    Filed: February 13, 2019
    Date of Patent: October 26, 2021
    Assignee: Adobe Inc.
    Inventors: Kumar Ayush, Harsh Vardhan Chopra
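
    A toy sketch of the optimization loop the abstract implies: random search over aesthetic parameters to increase an objective compatibility score. The compatibility_score callable, the parameter set, and the search ranges are all hypothetical:

    ```python
    import random

    def optimize_placement(scene, obj, compatibility_score, n_trials=200, seed=0):
        """Randomly vary aesthetic properties of the object; keep the best composite."""
        rng = random.Random(seed)
        best, best_score = None, float("-inf")
        for _ in range(n_trials):
            params = {
                "x": rng.random(), "y": rng.random(),  # normalized position
                "scale": rng.uniform(0.2, 1.0),        # relative size
                "rotation": rng.uniform(0, 360),       # orientation in degrees
            }
            score = compatibility_score(scene, obj, params)
            if score > best_score:
                best, best_score = params, score
        return best, best_score
    ```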
  • Patent number: 11080817
    Abstract: Generating a synthesized image of a person wearing clothing is described. A two-dimensional reference image depicting a person wearing an article of clothing and a two-dimensional image of the target clothing the person is to be depicted as wearing are received. To generate the synthesized image, a warped image of the target clothing is generated via a geometric matching module, which implements a machine learning model trained to recognize similarities between warped and non-warped clothing images using multi-scale patch adversarial loss. The multi-scale patch adversarial loss is determined by sampling patches of different sizes from corresponding locations of warped and non-warped clothing images. The synthesized image is generated on a per-person basis, such that the target clothing fits the particular body shape, pose, and unique characteristics of the person.
    Type: Grant
    Filed: November 4, 2019
    Date of Patent: August 3, 2021
    Assignee: Adobe Inc.
    Inventors: Kumar Ayush, Surgan Jandial, Mayur Hemani, Balaji Krishnamurthy, Ayush Chopra
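
    A small sketch of the patch-sampling step behind the multi-scale patch adversarial loss; the patch sizes are assumed, and the discriminator that consumes the pairs is not shown:

    ```python
    import numpy as np

    def sample_patch_pairs(warped, non_warped, sizes=(16, 32, 64), seed=0):
        """Sample same-location patch pairs at several scales from two aligned (H, W, C) images."""
        rng = np.random.default_rng(seed)
        h, w = warped.shape[:2]
        pairs = []
        for s in sizes:
            y = int(rng.integers(0, h - s + 1))
            x = int(rng.integers(0, w - s + 1))
            pairs.append((warped[y:y + s, x:x + s], non_warped[y:y + s, x:x + s]))
        return pairs
    ```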
  • Patent number: 11030782
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a virtual try-on digital image utilizing a unified neural network framework. For example, the disclosed systems can utilize a coarse-to-fine warping process to generate a warped version of a product digital image to fit a model digital image. In addition, the disclosed systems can utilize a texture transfer process to generate a corrected segmentation mask indicating portions of a model digital image to replace with a warped product digital image. The disclosed systems can further generate a virtual try-on digital image based on a warped product digital image, a model digital image, and a corrected segmentation mask. In some embodiments, the disclosed systems can train one or more neural networks to generate accurate outputs for various stages of generating a virtual try-on digital image.
    Type: Grant
    Filed: November 9, 2019
    Date of Patent: June 8, 2021
    Assignee: Adobe Inc.
    Inventors: Kumar Ayush, Surgan Jandial, Abhijeet Kumar, Mayur Hemani, Balaji Krishnamurthy, Ayush Chopra
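
    A sketch of the data flow between the three stages the abstract names, with each trained network replaced by a placeholder callable:

    ```python
    def virtual_try_on(model_img, product_img, warper, texture_transfer, compose):
        """Coarse-to-fine warp, texture transfer, then final composition."""
        # Stage 1: warp the product image onto the model (coarse-to-fine).
        warped_product = warper(product_img, model_img)
        # Stage 2: texture transfer yields a corrected segmentation mask marking
        # which model-image pixels the warped product should replace.
        corrected_mask = texture_transfer(model_img, warped_product)
        # Stage 3: fuse the three inputs into the final try-on image.
        return compose(model_img, warped_product, corrected_mask)
    ```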
  • Publication number: 20210142539
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a virtual try-on digital image utilizing a unified neural network framework. For example, the disclosed systems can utilize a coarse-to-fine warping process to generate a warped version of a product digital image to fit a model digital image. In addition, the disclosed systems can utilize a texture transfer process to generate a corrected segmentation mask indicating portions of a model digital image to replace with a warped product digital image. The disclosed systems can further generate a virtual try-on digital image based on a warped product digital image, a model digital image, and a corrected segmentation mask. In some embodiments, the disclosed systems can train one or more neural networks to generate accurate outputs for various stages of generating a virtual try-on digital image.
    Type: Application
    Filed: November 9, 2019
    Publication date: May 13, 2021
    Inventors: Kumar Ayush, Surgan Jandial, Abhijeet Kumar, Mayur Hemani, Balaji Krishnamurthy, Ayush Chopra
  • Publication number: 20210133850
    Abstract: Techniques for providing a machine learning prediction of a recommended product to a user using augmented reality include identifying at least one real-world object and a virtual product in an AR viewpoint of the user. The AR viewpoint includes a camera image of the real-world object(s) and an image of the virtual product. The image of the virtual product is inserted into the camera image of the real-world object. A candidate product is predicted from a set of recommendation images using a machine learning algorithm based on, for example, a type of the virtual product, to provide a recommendation that includes both the virtual product and the candidate product. In an embodiment, the recommendation can include different types of products that are complementary to each other. An image of the selected candidate product is inserted into the AR viewpoint along with the image of the virtual product.
    Type: Application
    Filed: November 6, 2019
    Publication date: May 6, 2021
    Applicant: Adobe Inc.
    Inventors: Kumar Ayush, Harnish Naresh Lakhani, Atishay Jain
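
    A minimal sketch of the recommendation step, assuming dict-shaped catalog items with a "type" field and a placeholder predict_score for the machine-learning prediction:

    ```python
    def recommend_complementary(virtual_product, catalog, predict_score, k=3):
        """Rank candidates of a different product type than the virtual product."""
        candidates = [item for item in catalog
                      if item["type"] != virtual_product["type"]]
        candidates.sort(key=lambda item: predict_score(virtual_product, item),
                        reverse=True)
        return candidates[:k]
    ```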
  • Publication number: 20210133919
    Abstract: Generating a synthesized image of a person wearing clothing is described. A two-dimensional reference image depicting a person wearing an article of clothing and a two-dimensional image of the target clothing the person is to be depicted as wearing are received. To generate the synthesized image, a warped image of the target clothing is generated via a geometric matching module, which implements a machine learning model trained to recognize similarities between warped and non-warped clothing images using multi-scale patch adversarial loss. The multi-scale patch adversarial loss is determined by sampling patches of different sizes from corresponding locations of warped and non-warped clothing images. The synthesized image is generated on a per-person basis, such that the target clothing fits the particular body shape, pose, and unique characteristics of the person.
    Type: Application
    Filed: November 4, 2019
    Publication date: May 6, 2021
    Applicant: Adobe Inc.
    Inventors: Kumar Ayush, Surgan Jandial, Mayur Hemani, Balaji Krishnamurthy, Ayush Chopra
  • Patent number: 10984467
    Abstract: The technology described herein is directed to object compatibility-based identification and replacement of objects in digital representations of real-world environments for contextualized content delivery. In some implementations, an object compatibility and retargeting service is described that selects and analyzes a viewpoint (received from a user's client device) to identify the objects least compatible with their surroundings, in terms of style compatibility with the surrounding real-world objects and color compatibility with the background. The object compatibility and retargeting service also generates recommendations for replacing the least compatible object with objects/products having more style/design compatibility with the surrounding real-world objects and color compatibility with the background.
    Type: Grant
    Filed: February 21, 2019
    Date of Patent: April 20, 2021
    Assignee: Adobe Inc.
    Inventors: Kumar Ayush, Harnish Lakhani, Atishay Jain
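
    A toy sketch of the selection step: each detected object gets a style score (against surrounding objects) and a color score (against the background), both placeholder callables here, and the lowest combined score marks the replacement target. The plain sum is an assumed combination rule:

    ```python
    def least_compatible(objects, style_score, color_score):
        """Pick the scene object with the lowest combined compatibility score."""
        return min(objects, key=lambda o: style_score(o) + color_score(o))
    ```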
  • Publication number: 20210109966
    Abstract: Systems and methods for content-based video retrieval are described. The systems and methods may break a video into multiple frames, generate a feature vector from the frames based on the temporal relationship between them, and then embed the feature vector into a vector space along with a vector representing a search query. In some embodiments, the video feature vector is converted into a text caption prior to the embedding. In other embodiments, the video feature vector and a sentence vector are each embedded into a common space using a joint video-sentence embedding model. Once the video and the search query are embedded into a common vector space, a distance between them may be calculated. After calculating the distance between the search query and a set of videos, the distances may be used to select a subset of the videos to present as the result of the search.
    Type: Application
    Filed: October 15, 2019
    Publication date: April 15, 2021
    Inventors: Kumar Ayush, Harnish Lakhani, Atishay Jain
  • Patent number: 10956967
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating augmented reality representations of recommended products based on style similarity with real-world surroundings. For example, the disclosed systems can identify a real-world object within a camera feed and can utilize a 2D-3D alignment algorithm to identify a three-dimensional model that matches the real-world object. In addition, the disclosed systems can utilize a style similarity algorithm to generate style similarity scores for products in relation to the identified three-dimensional model. The disclosed systems can also utilize a color compatibility algorithm to generate color compatibility scores for products, and the systems can determine overall scores for products based on a combination of style similarity scores and color compatibility scores. The disclosed systems can further generate AR representations of recommended products based on the overall scores.
    Type: Grant
    Filed: June 11, 2018
    Date of Patent: March 23, 2021
    Assignee: Adobe Inc.
    Inventors: Kumar Ayush, Gaurush Hiranandani
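
    A minimal sketch of the score combination; the abstract says only that overall scores are based on "a combination" of the two, so the linear blend and its weight are assumptions:

    ```python
    def rank_products(product_ids, style_sim, color_comp, weight=0.5):
        """Rank products by a weighted blend of style similarity and color compatibility."""
        overall = {p: weight * style_sim(p) + (1 - weight) * color_comp(p)
                   for p in product_ids}
        return sorted(overall, key=overall.get, reverse=True)
    ```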
  • Patent number: 10950060
    Abstract: Certain embodiments involve enhancing personalization of a virtual-commerce environment by identifying an augmented-reality visual of the virtual-commerce environment. For example, a system obtains a data set that indicates a plurality of augmented-reality visuals generated in a virtual-commerce environment and provided for view by a user. The system obtains data indicating a triggering user input that corresponds to a predetermined user input providable by the user as the user views an augmented-reality visual of the plurality of augmented-reality visuals. The system obtains data indicating a user input provided by the user. The system compares the user input to the triggering user input to determine a correspondence (e.g., a similarity) between the user input and the triggering user input. The system identifies a particular augmented-reality visual of the plurality of augmented-reality visuals that is viewed by the user based on the correspondence and stores the identified augmented-reality visual.
    Type: Grant
    Filed: June 22, 2020
    Date of Patent: March 16, 2021
    Assignee: Adobe Inc.
    Inventors: Gaurush Hiranandani, Chinnaobireddy Varsha, Sai Varun Reddy Maram, Kumar Ayush, Atanu Ranjan Sinha
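
    A small sketch of the correspondence check, with cosine similarity over input feature vectors standing in for the patent's comparison and an assumed threshold:

    ```python
    import numpy as np

    def detect_trigger(user_input, trigger_input, current_visual, threshold=0.9):
        """Return the AR visual in view if the user's input matches the trigger, else None."""
        sim = np.dot(user_input, trigger_input) / (
            np.linalg.norm(user_input) * np.linalg.norm(trigger_input))
        return current_visual if sim >= threshold else None
    ```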
  • Patent number: 10922716
    Abstract: This disclosure generally covers systems and methods that identify objects within an augmented reality ("AR") scene (received from a user) to gather information concerning the user's physical environment or physical features and to recommend products. In particular, the disclosed systems and methods detect characteristics of multiple objects shown within an AR scene received from a user and, based on the detected characteristics, select products to recommend to the user. When analyzing characteristics, in some embodiments, the disclosed systems and methods determine visual characteristics associated with the real object or virtual object, such as the color or location of an object. The disclosed systems and methods, in some embodiments, then select an endorsed product to recommend for use with the real object, based on the determined visual characteristics, and create a product recommendation for that endorsed product.
    Type: Grant
    Filed: March 9, 2017
    Date of Patent: February 16, 2021
    Assignee: Adobe Inc.
    Inventors: Gaurush Hiranandani, Kumar Ayush, Chinnaobireddy Varsha, Sai Varun Reddy Maram
  • Publication number: 20210042625
    Abstract: Methods and systems are provided for facilitating the creation and utilization of a transformation function system capable of providing network-agnostic performance improvement. The transformation function system receives a representation from a task neural network. The representation can be input into a composite function neural network of the transformation function system. A learned composite function can be generated using the composite function neural network. The composite function can be specifically constructed for the task neural network based on the input representation. The learned composite function can be applied to a feature embedding of the task neural network to transform the feature embedding. Transforming the feature embedding can optimize the output of the task neural network.
    Type: Application
    Filed: August 7, 2019
    Publication date: February 11, 2021
    Inventors: Ayush CHOPRA, Abhishek SINHA, Hiresh GUPTA, Mausoom SARKAR, Kumar AYUSH, Balaji KRISHNAMURTHY
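
    A rough PyTorch sketch of applying a learned transformation to a task network's feature embedding; the FiLM-style scale-and-shift is an assumed concrete form of the learned composite function, which the abstract leaves unspecified:

    ```python
    import torch
    import torch.nn as nn

    class CompositeFunctionNet(nn.Module):
        """Map a task-network representation to a transform of its embedding."""

        def __init__(self, rep_dim, emb_dim):
            super().__init__()
            self.to_params = nn.Linear(rep_dim, 2 * emb_dim)  # scale and shift

        def forward(self, representation, embedding):
            scale, shift = self.to_params(representation).chunk(2, dim=-1)
            return embedding * (1 + scale) + shift  # transformed feature embedding
    ```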
  • Publication number: 20210012201
    Abstract: A location-sensitive saliency prediction neural network generates location-sensitive saliency data for an image. The location-sensitive saliency prediction neural network includes, at least, a filter module, an inception module, and a location-bias module. The filter module extracts visual features at multiple contextual levels, and generates a feature map of the image. The inception module generates a multi-scale semantic structure, based on multiple scales of semantic content depicted in the image. In some cases, the inception module performs parallel analysis of the feature map, such as via multiple parallel layers, to determine the multiple scales of semantic content. The location-bias module generates a location-sensitive saliency map of location-dependent context of the image based on the multi-scale semantic structure and on a bias map. In some cases, the bias map indicates location-specific weights for one or more regions of the image.
    Type: Application
    Filed: July 10, 2019
    Publication date: January 14, 2021
    Inventors: Kumar Ayush, Atishay Jain
  • Publication number: 20200320797
    Abstract: Certain embodiments involve enhancing personalization of a virtual-commerce environment by identifying an augmented-reality visual of the virtual-commerce environment. For example, a system obtains a data set that indicates a plurality of augmented-reality visuals generated in a virtual-commerce environment and provided for view by a user. The system obtains data indicating a triggering user input that corresponds to a predetermined user input providable by the user as the user views an augmented-reality visual of the plurality of augmented-reality visuals. The system obtains data indicating a user input provided by the user. The system compares the user input to the triggering user input to determine a correspondence (e.g., a similarity) between the user input and the triggering user input. The system identifies a particular augmented-reality visual of the plurality of augmented-reality visuals that is viewed by the user based on the correspondence and stores the identified augmented-reality visual.
    Type: Application
    Filed: June 22, 2020
    Publication date: October 8, 2020
    Inventors: Gaurush Hiranandani, Chinnaobireddy Varsha, Sai Varun Reddy Maram, Kumar Ayush, Atanu Ranjan Sinha
  • Patent number: 10789622
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating augmented reality representations of recommended products based on style compatibility with real-world surroundings. For example, the disclosed systems can identify a real-world object within a camera feed and can utilize a 2D-3D alignment algorithm to identify a three-dimensional model that matches the real-world object. In addition, the disclosed systems can utilize a style compatibility algorithm to generate recommended products based on style compatibility in relation to the identified three-dimensional model. The disclosed systems can further utilize a color compatibility algorithm to determine product textures which are color compatible with the real-world surroundings and generate augmented reality representations of recommended products to provide as an overlay of the real-world environment of the camera feed.
    Type: Grant
    Filed: May 7, 2018
    Date of Patent: September 29, 2020
    Assignee: Adobe Inc.
    Inventors: Kumar Ayush, Gaurush Hiranandani