Patents by Inventor Pranav Vineet Aggarwal

Pranav Vineet Aggarwal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240037881
    Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure receive a first image depicting a scene and a second image that includes a style; segment the first image to obtain a first segment and a second segment, wherein the first segment has a shape of an object in the scene; apply a style transfer network to the first segment and the second image to obtain a first image part, wherein the first image part has the shape of the object and the style from the second image; combine the first image part with a second image part corresponding to the second segment to obtain a combined image; and apply a lenticular effect to the combined image to obtain an output image.
    Type: Application
    Filed: July 26, 2022
    Publication date: February 1, 2024
    Inventors: Pranav Vineet Aggarwal, Alvin Ghouas, Ajinkya Gorakhnath Kale
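To make the pipeline in the abstract above concrete, the sketch below walks through segmenting a scene, restyling the object segment, recombining the parts, and interleaving strips as a stand-in for the lenticular effect. All function names are illustrative, and the color blend used in place of the style transfer network is purely a placeholder; the application does not disclose this code.

```python
# Hypothetical sketch of the pipeline described in publication 20240037881.
import numpy as np

def style_transfer(segment: np.ndarray, style_image: np.ndarray) -> np.ndarray:
    """Placeholder for the style transfer network: blends the segment with
    the mean color of the style image."""
    style_color = style_image.reshape(-1, 3).mean(axis=0)
    return (0.5 * segment + 0.5 * style_color).astype(np.uint8)

def apply_lenticular(image_a: np.ndarray, image_b: np.ndarray, strip_width: int = 8) -> np.ndarray:
    """Interleave vertical strips of two views to approximate a lenticular effect."""
    out = image_a.copy()
    for x in range(0, out.shape[1], 2 * strip_width):
        out[:, x:x + strip_width] = image_b[:, x:x + strip_width]
    return out

def compose(scene: np.ndarray, mask: np.ndarray, style_image: np.ndarray) -> np.ndarray:
    """Segment the scene, restyle the object segment, recombine, and add the effect."""
    object_part = np.where(mask[..., None], scene, 0)          # first segment (object shape)
    background = np.where(mask[..., None], 0, scene)           # second segment
    styled_object = style_transfer(object_part, style_image)   # first image part, restyled
    combined = np.where(mask[..., None], styled_object, background)
    return apply_lenticular(combined, scene)                   # output image
```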
  • Publication number: 20230360294
    Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure identify target style attributes and target structure attributes for a composite image; generate a matrix of composite feature tokens based on the target style attributes and the target structure attributes, wherein subsequent feature tokens of the matrix of composite feature tokens are sequentially generated based on previous feature tokens of the matrix of composite feature tokens according to a linear ordering of the matrix of composite feature tokens; and generate the composite image based on the matrix of composite feature tokens, wherein the composite image includes the target style attributes and the target structure attributes.
    Type: Application
    Filed: May 9, 2022
    Publication date: November 9, 2023
    Inventors: Pranav Vineet Aggarwal, Midhun Harikumar, Ajinkya Gorakhnath Kale
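The abstract above describes generating a matrix of feature tokens sequentially, in a linear ordering, conditioned on target style and structure attributes. The toy sketch below shows only that control flow; the uniform-sampling predictor is a placeholder for the learned model, and the grid and codebook sizes are assumptions.

```python
# Minimal sketch (not the patented model) of sequentially generating a matrix of
# composite feature tokens in a linear ordering (publication 20230360294).
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 1024          # size of the discrete token codebook (assumed)
H, W = 16, 16         # dimensions of the token matrix (assumed)

def next_token(prev_tokens, style_attrs, structure_attrs):
    """Stand-in for the learned predictor: the real model would condition on the
    previously generated tokens and on the target style/structure attributes;
    this placeholder simply samples a token id uniformly."""
    return int(rng.integers(VOCAB))

def generate_token_matrix(style_attrs, structure_attrs):
    tokens = []
    for _ in range(H * W):                    # linear ordering over the matrix
        tokens.append(next_token(tokens, style_attrs, structure_attrs))
    return np.array(tokens).reshape(H, W)     # matrix of composite feature tokens

matrix = generate_token_matrix(style_attrs="watercolor", structure_attrs="portrait")
# A decoder (not shown) would map this token matrix to the composite image.
```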
  • Patent number: 11775578
    Abstract: Text-to-visual machine learning embedding techniques are described that overcome the challenges of conventional techniques in a variety of ways. These techniques include use of query-based training data which may expand availability and types of training data usable to train a model. Generation of negative digital image samples is also described that may increase accuracy in training the model using machine learning. A loss function is also described that supports increased accuracy and computational efficiency by computing losses separately, e.g., between positive or negative sample embeddings and a text embedding.
    Type: Grant
    Filed: August 10, 2021
    Date of Patent: October 3, 2023
    Assignee: Adobe Inc.
    Inventors: Pranav Vineet Aggarwal, Zhe Lin, Baldo Antonio Faieta, Saeid Motiian
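As a rough illustration of computing losses separately between a text embedding and positive/negative image embeddings, the sketch below uses a generic cosine-based contrastive objective. The margin, the normalization, and the random stand-in embeddings are assumptions; the patent does not specify this exact formulation.

```python
# Sketch of a triplet-style objective in the spirit of patent 11775578's abstract.
import torch
import torch.nn.functional as F

def text_visual_loss(text_emb, pos_img_emb, neg_img_emb, margin=0.2):
    text_emb = F.normalize(text_emb, dim=-1)
    pos_img_emb = F.normalize(pos_img_emb, dim=-1)
    neg_img_emb = F.normalize(neg_img_emb, dim=-1)
    pos_loss = 1.0 - F.cosine_similarity(text_emb, pos_img_emb, dim=-1)              # pull positives in
    neg_loss = F.relu(F.cosine_similarity(text_emb, neg_img_emb, dim=-1) - margin)   # push negatives out
    return pos_loss.mean() + neg_loss.mean()   # the two losses are computed separately, then combined

# Example with random embeddings standing in for encoder outputs:
t, p, n = (torch.randn(8, 512) for _ in range(3))
loss = text_visual_loss(t, p, n)
```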
  • Publication number: 20230298224
    Abstract: A method and system for color optimization in generated images are described. The method and system include receiving an image generation prompt that includes a text description of target image content and color information describing a target color palette; encoding the image generation prompt to obtain image features that represent the target image content and the target color palette; and generating an image representing the target image content with the target color palette based on the image features.
    Type: Application
    Filed: March 16, 2022
    Publication date: September 21, 2023
    Inventors: Pranav Vineet Aggarwal, Midhun Harikumar, Ajinkya Gorakhnath Kale
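A minimal sketch of encoding a generation prompt that carries both a text description and a target color palette follows. The text and color encoders are toy stand-ins, and the joint feature layout is an assumption made for illustration only.

```python
# Illustrative sketch of the prompt encoding in publication 20230298224.
import numpy as np

def encode_text(prompt: str, dim: int = 256) -> np.ndarray:
    """Stand-in text encoder: a deterministic pseudo-embedding of the prompt."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.standard_normal(dim)

def encode_palette(palette_rgb: list[tuple[int, int, int]], dim: int = 64) -> np.ndarray:
    """Stand-in color encoder: flatten and pad normalized RGB values."""
    flat = np.array(palette_rgb, dtype=np.float32).ravel() / 255.0
    return np.pad(flat, (0, dim - flat.size))

def encode_prompt(prompt: str, palette_rgb) -> np.ndarray:
    """Joint image features representing both target content and target palette."""
    return np.concatenate([encode_text(prompt), encode_palette(palette_rgb)])

features = encode_prompt("a lighthouse at dusk", [(20, 30, 80), (230, 120, 40)])
# A generative decoder (not shown) would produce the image from `features`.
```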
  • Publication number: 20230185844
    Abstract: Visually guided machine-learning language model and embedding techniques are described that overcome the challenges of conventional techniques in a variety of ways. In one example, a model is trained to support a visually guided machine-learning embedding space that supports visual intuition as to “what” is represented by text. The visually guided language embedding space supported by the model, once trained, may then be used to support visual intuition as part of a variety of functionality. In one such example, the visually guided language embedding space as implemented by the model may be leveraged as part of a multi-modal differential search to support search of digital images and other digital content with real-time focus adaptation which overcomes the challenges of conventional techniques.
    Type: Application
    Filed: February 2, 2023
    Publication date: June 15, 2023
    Applicant: Adobe Inc.
    Inventors: Pranav Vineet Aggarwal, Zhe Lin, Baldo Antonio Faieta, Saeid Motiian
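The sketch below illustrates one way to read "visually guided language embedding": a text encoder is trained so that its outputs land in the same space as a frozen image encoder, giving text embeddings a visual grounding. The toy encoder, the alignment loss, and all dimensions are assumptions, not the patented architecture.

```python
# Minimal sketch, under assumed architecture choices, of aligning a text encoder
# with an image embedding space (publication 20230185844).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyTextEncoder(nn.Module):
    def __init__(self, vocab=10000, dim=512):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab, dim)   # bag-of-words text features
        self.proj = nn.Linear(dim, dim)            # projection into the visual space

    def forward(self, token_ids):
        return F.normalize(self.proj(self.embed(token_ids)), dim=-1)

text_encoder = ToyTextEncoder()
optimizer = torch.optim.Adam(text_encoder.parameters(), lr=1e-4)

# One alignment step: pull each text embedding toward its paired image embedding.
token_ids = torch.randint(0, 10000, (4, 12))           # a batch of tokenized captions
image_emb = F.normalize(torch.randn(4, 512), dim=-1)   # frozen image-encoder outputs (stand-in)
loss = (1 - F.cosine_similarity(text_encoder(token_ids), image_emb, dim=-1)).mean()
loss.backward()
optimizer.step()
```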
  • Publication number: 20230137774
    Abstract: Systems and methods for image retrieval are described. Embodiments of the present disclosure receive a search query from a user; extract an entity and a color phrase describing the entity from the search query; generate an entity color embedding in a color embedding space from the color phrase using a multi-modal color encoder; identify an image in a database based on metadata for the image including an object label corresponding to the extracted entity and an object color embedding in the color embedding space corresponding to the object label; and provide image information for the image to the user based on the metadata.
    Type: Application
    Filed: November 4, 2021
    Publication date: May 4, 2023
    Inventors: Baldo Faieta, Ajinkya Gorakhnath Kale, Pranav Vineet Aggarwal, Naveen Marri, Saeid Motiian, Tracy Holloway King, Alex Filipkowski, Shabnam Ghadar
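The sketch below traces the retrieval flow described above: extract an entity and a color phrase from the query, embed the color phrase, filter images by object label, and rank by distance to the stored per-object color embedding. The extractor, color encoder, and metadata schema are simplified assumptions for illustration.

```python
# Hypothetical sketch of the retrieval flow in publication 20230137774.
import numpy as np

COLOR_SPACE = {"red": [1, 0, 0], "teal": [0, 0.5, 0.5], "yellow": [1, 1, 0]}

def extract_entity_and_color(query: str):
    """Toy extractor: assumes the query has the form '<color> <entity>'."""
    color, _, entity = query.partition(" ")
    return entity, color

def encode_color(phrase: str) -> np.ndarray:
    """Stand-in for the multi-modal color encoder."""
    return np.array(COLOR_SPACE.get(phrase, [0.5, 0.5, 0.5]), dtype=np.float32)

def search(query: str, catalog: list[dict]) -> dict:
    entity, color = extract_entity_and_color(query)
    color_emb = encode_color(color)
    candidates = [img for img in catalog if entity in img["object_labels"]]
    return min(candidates,
               key=lambda img: np.linalg.norm(img["object_color_emb"][entity] - color_emb))

catalog = [
    {"id": 1, "object_labels": {"car"}, "object_color_emb": {"car": np.array([1.0, 0.0, 0.0])}},
    {"id": 2, "object_labels": {"car"}, "object_color_emb": {"car": np.array([0.0, 0.5, 0.5])}},
]
best = search("teal car", catalog)   # returns image 2
```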
  • Patent number: 11604822
    Abstract: Multi-modal differential search with real-time focus adaptation techniques are described that overcome the challenges of conventional techniques in a variety of ways. In one example, a model is trained to support a visually guided machine-learning embedding space that supports visual intuition as to “what” is represented by text. The visually guided language embedding space supported by the model, once trained, may then be used to support visual intuition as part of a variety of functionality. In one such example, the visually guided language embedding space as implemented by the model may be leveraged as part of a multi-modal differential search to support search of digital images and other digital content with real-time focus adaptation which overcomes the challenges of conventional techniques.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: March 14, 2023
    Assignee: Adobe Inc.
    Inventors: Pranav Vineet Aggarwal, Zhe Lin, Baldo Antonio Faieta, Saeid Motiian
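One way to picture multi-modal differential search with real-time focus adaptation is as a weighted blend of an image embedding and a text embedding in a shared space, where adjusting the focus weight re-ranks results without re-encoding anything. The sketch below uses random embeddings as stand-ins for encoder outputs; the blending rule is an assumption, not the claimed method.

```python
# Sketch, under assumed conventions, of the differential-search idea in patent 11604822.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def differential_query(image_emb, text_emb, focus: float):
    """focus in [0, 1]: 0 keeps the original image, 1 follows the text edit."""
    return normalize((1 - focus) * normalize(image_emb) + focus * normalize(text_emb))

def nearest(query_emb, index_embs):
    scores = index_embs @ query_emb      # cosine similarity on normalized rows
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
index_embs = np.stack([normalize(v) for v in rng.standard_normal((100, 512))])
image_emb, text_emb = rng.standard_normal(512), rng.standard_normal(512)
# Sliding the focus weight re-ranks results in real time.
for focus in (0.2, 0.5, 0.8):
    print(focus, nearest(differential_query(image_emb, text_emb, focus), index_embs))
```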
  • Patent number: 11605019
    Abstract: Visually guided machine-learning language model and embedding techniques are described that overcome the challenges of conventional techniques in a variety of ways. In one example, a model is trained to support a visually guided machine-learning embedding space that supports visual intuition as to “what” is represented by text. The visually guided language embedding space supported by the model, once trained, may then be used to support visual intuition as part of a variety of functionality. In one such example, the visually guided language embedding space as implemented by the model may be leveraged as part of a multi-modal differential search to support search of digital images and other digital content with real-time focus adaptation which overcomes the challenges of conventional techniques.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: March 14, 2023
    Assignee: Adobe Inc.
    Inventors: Pranav Vineet Aggarwal, Zhe Lin, Baldo Antonio Faieta, Saeid Motiian
  • Publication number: 20220391450
    Abstract: Technology is disclosed herein for enhanced similarity search. In an implementation, a search environment includes one or more computing hardware, software, and/or firmware components in support of enhanced similarity search. The one or more components identify a modality for a similarity search with respect to a query object. The components generate an embedding for the query object based on the modality and based on connections between the query object and neighboring nodes in a graph.
    Type: Application
    Filed: August 15, 2022
    Publication date: December 8, 2022
    Inventors: Pranav Vineet Aggarwal, Ali Aminian, Ajinkya Gorakhnath Kale, Aashish Kumar Misraa
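The sketch below illustrates the graph-based idea in the abstract above: the query object's embedding combines its own modality-specific features with an aggregation over its neighbors in a graph. The mean aggregation, the blend weight, and the feature setup are assumptions made for illustration.

```python
# Illustrative sketch of the enhanced similarity search in publication 20220391450.
import numpy as np

def embed_query(node_id: int, modality: str, features: dict, graph: dict, alpha: float = 0.5):
    """Blend the node's modality-specific features with the mean of its neighbors."""
    own = features[modality][node_id]
    neighbors = graph.get(node_id, [])
    if neighbors:
        neighbor_mean = np.mean([features[modality][n] for n in neighbors], axis=0)
        own = alpha * own + (1 - alpha) * neighbor_mean
    return own / np.linalg.norm(own)

rng = np.random.default_rng(1)
features = {"image": rng.standard_normal((5, 64)), "text": rng.standard_normal((5, 64))}
graph = {0: [1, 3], 1: [0], 2: [], 3: [0, 4], 4: [3]}
query_emb = embed_query(0, modality="image", features=features, graph=graph)
# `query_emb` would then be compared against indexed embeddings to find similar objects.
```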
  • Patent number: 11500939
    Abstract: Technology is disclosed herein for enhanced similarity search. In an implementation, a search environment includes one or more computing hardware, software, and/or firmware components in support of enhanced similarity search. The one or more components identify a modality for a similarity search with respect to a query object. The components generate an embedding for the query object based on the modality and based on connections between the query object and neighboring nodes in a graph. The embedding for the query object provides the basis for the search for similar objects.
    Type: Grant
    Filed: April 21, 2020
    Date of Patent: November 15, 2022
    Assignee: Adobe Inc.
    Inventors: Pranav Vineet Aggarwal, Ali Aminian, Ajinkya Gorakhnath Kale, Aashish Kumar Misraa
  • Publication number: 20210365727
    Abstract: Text-to-visual machine learning embedding techniques are described that overcome the challenges of conventional techniques in a variety of ways. These techniques include use of query-based training data which may expand availability and types of training data usable to train a model. Generation of negative digital image samples is also described that may increase accuracy in training the model using machine learning. A loss function is also described that supports increased accuracy and computational efficiency by computing losses separately, e.g., between positive or negative sample embeddings and a text embedding.
    Type: Application
    Filed: August 10, 2021
    Publication date: November 25, 2021
    Applicant: Adobe Inc.
    Inventors: Pranav Vineet Aggarwal, Zhe Lin, Baldo Antonio Faieta, Saeid Motiian
  • Publication number: 20210326393
    Abstract: Technology is disclosed herein for enhanced similarity search. In an implementation, a search environment includes one or more computing hardware, software, and/or firmware components in support of enhanced similarity search. The one or more components identify a modality for a similarity search with respect to a query object. The components generate an embedding for the query object based on the modality and based on connections between the query object and neighboring nodes in a graph.
    Type: Application
    Filed: April 21, 2020
    Publication date: October 21, 2021
    Inventors: Pranav Vineet Aggarwal, Ali Aminian, Ajinkya Gorakhnath Kale, Aashish Kumar Misraa
  • Patent number: 11144784
    Abstract: Text-to-visual machine learning embedding techniques are described that overcome the challenges of conventional techniques in a variety of ways. These techniques include use of query-based training data which may expand availability and types of training data usable to train a model. Generation of negative digital image samples is also described that may increase accuracy in training the model using machine learning. A loss function is also described that supports increased accuracy and computational efficiency by computing losses separately, e.g., between positive or negative sample embeddings and a text embedding.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: October 12, 2021
    Assignee: Adobe Inc.
    Inventors: Pranav Vineet Aggarwal, Zhe Lin, Baldo Antonio Faieta, Saeid Motiian
  • Publication number: 20200380298
    Abstract: Text-to-visual machine learning embedding techniques are described that overcome the challenges of conventional techniques in a variety of ways. These techniques include use of query-based training data which may expand availability and types of training data usable to train a model. Generation of negative digital image samples is also described that may increase accuracy in training the model using machine learning. A loss function is also described that supports increased accuracy and computational efficiency by computing losses separately, e.g., between positive or negative sample embeddings and a text embedding.
    Type: Application
    Filed: May 30, 2019
    Publication date: December 3, 2020
    Applicant: Adobe Inc.
    Inventors: Pranav Vineet Aggarwal, Zhe Lin, Baldo Antonio Faieta, Saeid Motiian
  • Publication number: 20200380027
    Abstract: Multi-modal differential search with real-time focus adaptation techniques are described that overcome the challenges of conventional techniques in a variety of ways. In one example, a model is trained to support a visually guided machine-learning embedding space that supports visual intuition as to “what” is represented by text. The visually guided language embedding space supported by the model, once trained, may then be used to support visual intuition as part of a variety of functionality. In one such example, the visually guided language embedding space as implemented by the model may be leveraged as part of a multi-modal differential search to support search of digital images and other digital content with real-time focus adaptation which overcomes the challenges of conventional techniques.
    Type: Application
    Filed: May 30, 2019
    Publication date: December 3, 2020
    Applicant: Adobe Inc.
    Inventors: Pranav Vineet Aggarwal, Zhe Lin, Baldo Antonio Faieta, Saeid Motiian
  • Publication number: 20200380403
    Abstract: Visually guided machine-learning language model and embedding techniques are described that overcome the challenges of conventional techniques in a variety of ways. In one example, a model is trained to support a visually guided machine-learning embedding space that supports visual intuition as to “what” is represented by text. The visually guided language embedding space supported by the model, once trained, may then be used to support visual intuition as part of a variety of functionality. In one such example, the visually guided language embedding space as implemented by the model may be leveraged as part of a multi-modal differential search to support search of digital images and other digital content with real-time focus adaptation which overcomes the challenges of conventional techniques.
    Type: Application
    Filed: May 30, 2019
    Publication date: December 3, 2020
    Applicant: Adobe Inc.
    Inventors: Pranav Vineet Aggarwal, Zhe Lin, Baldo Antonio Faieta, Saeid Motiian