Patents by Inventor Nihal Jain
Nihal Jain has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12332935
Abstract: Embodiments are disclosed for multichannel content recommendation. The method may include receiving an input collection comprising a plurality of images. The method may include extracting a set of feature channels from each of the images. The method may include generating, by a trained machine learning model, an intent channel of the input collection from the set of feature channels. The method may include retrieving, from a content library, a plurality of search result images that include a channel that matches the intent channel. The method may include generating a recommended set of images based on the intent channel and the set of feature channels.
Type: Grant
Filed: July 11, 2022
Date of Patent: June 17, 2025
Assignee: Adobe Inc.
Inventors: Praneetha Vaddamanu, Nihal Jain, Paridhi Maheshwari, Kuldeep Kulkarni, Vishwa Vinay, Balaji Vasan Srinivasan, Niyati Chhaya, Harshit Agrawal, Prabhat Mahapatra, Rizurekh Saha
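The multichannel recommendation flow described above lends itself to a short illustration. The sketch below is a toy approximation only: the channel names, the feature extractor, and the intent model are hypothetical stand-ins, and the similarity-based retrieval is one assumed reading of the abstract, not the patented method.

```python
# Toy sketch of the abstract's flow: extract per-channel features from an
# input collection, infer a shared "intent channel", then retrieve library
# images whose matching channel is closest to that intent.
from dataclasses import dataclass
import numpy as np

CHANNELS = ("color", "composition", "subject")  # assumed feature channels


@dataclass
class LibraryImage:
    image_id: str
    channels: dict  # channel name -> feature vector


def extract_feature_channels(image: np.ndarray) -> dict:
    """Placeholder extractor: one feature vector per channel."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    return {name: rng.standard_normal(16) for name in CHANNELS}


def infer_intent_channel(channel_sets: list) -> str:
    """Stand-in for the trained model: pick the channel whose features are
    most consistent (lowest variance) across the input collection."""
    variances = {
        name: np.stack([c[name] for c in channel_sets]).var(axis=0).mean()
        for name in CHANNELS
    }
    return min(variances, key=variances.get)


def recommend(collection: list, library: list, k: int = 5):
    channel_sets = [extract_feature_channels(img) for img in collection]
    intent = infer_intent_channel(channel_sets)
    query = np.mean([c[intent] for c in channel_sets], axis=0)
    # Rank library images by cosine similarity on the intent channel.
    scored = sorted(
        library,
        key=lambda item: -float(
            query @ item.channels[intent]
            / (np.linalg.norm(query) * np.linalg.norm(item.channels[intent]) + 1e-8)
        ),
    )
    return intent, scored[:k]
```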
-
Publication number: 20240119646
Abstract: Digital image text editing techniques as implemented by an image processing system are described that support increased user interaction in the creation and editing of digital images through understanding a content creator's intent as expressed using text. In one example, a text user input is received by a text input module. The text user input describes a visual object and a visual attribute, in which the visual object specifies a visual context of the visual attribute. A feature representation is generated by a text-to-feature system using a machine-learning module based on the text user input. The feature representation is passed to an image editing system to edit a digital object in a digital image, e.g., by applying a texture to an outline of the digital object within the digital image.
Type: Application
Filed: December 15, 2023
Publication date: April 11, 2024
Applicant: Adobe Inc.
Inventors: Paridhi Maheshwari, Vishwa Vinay, Shraiysh Vaishay, Praneetha Vaddamanu, Nihal Jain, Dhananjay Bhausaheb Raut
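The text-driven editing pipeline in this abstract (text naming a visual object and a visual attribute, a learned text-to-feature step, then an edit applied within the object's outline) can be sketched roughly as follows. Everything here is an assumption for illustration: the parsing, the embedding function, and the flat texture fill are placeholders, not the system claimed in the application.

```python
# Rough sketch: "<attribute> <object>" text -> feature representation ->
# edit applied inside the object's mask. All components are illustrative.
import numpy as np


def parse_text_input(text: str):
    """Naive parse of '<attribute> <object>', e.g. 'wooden chair'."""
    attribute, obj = text.strip().split(maxsplit=1)
    return obj, attribute


def text_to_feature(obj: str, attribute: str, dim: int = 8) -> np.ndarray:
    """Stand-in for the machine-learning module: a deterministic embedding of
    the attribute conditioned on its object context."""
    seed = abs(hash((obj, attribute))) % (2**32)
    return np.random.default_rng(seed).random(dim)


def apply_texture(image: np.ndarray, object_mask: np.ndarray, feature: np.ndarray) -> np.ndarray:
    """Toy 'editor': derive an RGB color from the feature and paint it over
    the pixels inside the object's outline."""
    color = (feature[:3] * 255).astype(np.uint8)
    edited = image.copy()
    edited[object_mask] = color
    return edited


# Example: "wooden chair" edits the (pretend) chair region of a 64x64 image.
obj, attribute = parse_text_input("wooden chair")
feature = text_to_feature(obj, attribute)
image = np.zeros((64, 64, 3), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True  # assume this mask came from an object detector
edited = apply_texture(image, mask, feature)
```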
-
Patent number: 11915343
Abstract: Systems and methods for color representation are described. Embodiments of the inventive concept are configured to receive an attribute-object pair including a first term comprising an attribute label and a second term comprising an object label, encode the attribute-object pair to produce encoded features using a neural network that orders the first term and the second term based on the attribute label and the object label, and generate a color profile for the attribute-object pair based on the encoded features, wherein the color profile is based on a compositional relationship between the first term and the second term.
Type: Grant
Filed: December 4, 2020
Date of Patent: February 27, 2024
Assignee: Adobe Inc.
Inventors: Paridhi Maheshwari, Vishwa Vinay, Dhananjay Raut, Nihal Jain, Praneetha Vaddamanu, Shraiysh Vaishay
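A rough sketch of the attribute-object color-profile idea follows: the two terms are embedded, kept in a fixed (attribute, object) order so their compositional relationship is preserved, encoded jointly, and decoded into a distribution over color bins. The vocabulary, dimensions, and architecture below are invented for illustration and are not the patented network.

```python
# Illustrative attribute-object -> color-profile model (not the patented one).
import torch
import torch.nn as nn


class AttributeObjectColorModel(nn.Module):
    def __init__(self, vocab_size: int = 1000, embed_dim: int = 32, bins: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # The encoder always sees the attribute embedding first and the object
        # embedding second, so ("crisp", "apple") differs from ("apple", "crisp").
        self.encoder = nn.Sequential(
            nn.Linear(2 * embed_dim, 128), nn.ReLU(), nn.Linear(128, 128), nn.ReLU()
        )
        self.color_head = nn.Linear(128, bins)  # logits over color-histogram bins

    def forward(self, attribute_ids: torch.Tensor, object_ids: torch.Tensor) -> torch.Tensor:
        pair = torch.cat([self.embed(attribute_ids), self.embed(object_ids)], dim=-1)
        return torch.softmax(self.color_head(self.encoder(pair)), dim=-1)


# Example: one ("crisp", "apple") pair with made-up token ids -> (1, 64) profile.
model = AttributeObjectColorModel()
color_profile = model(torch.tensor([17]), torch.tensor([42]))
```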
-
Patent number: 11887217
Abstract: Digital image text editing techniques as implemented by an image processing system are described that support increased user interaction in the creation and editing of digital images through understanding a content creator's intent as expressed using text. In one example, a text user input is received by a text input module. The text user input describes a visual object and a visual attribute, in which the visual object specifies a visual context of the visual attribute. A feature representation is generated by a text-to-feature system using a machine-learning module based on the text user input. The feature representation is passed to an image editing system to edit a digital object in a digital image, e.g., by applying a texture to an outline of the digital object within the digital image.
Type: Grant
Filed: October 26, 2020
Date of Patent: January 30, 2024
Assignee: Adobe Inc.
Inventors: Paridhi Maheshwari, Vishwa Vinay, Shraiysh Vaishay, Praneetha Vaddamanu, Nihal Jain, Dhananjay Bhausaheb Raut
-
Publication number: 20240012849
Abstract: Embodiments are disclosed for multichannel content recommendation. The method may include receiving an input collection comprising a plurality of images. The method may include extracting a set of feature channels from each of the images. The method may include generating, by a trained machine learning model, an intent channel of the input collection from the set of feature channels. The method may include retrieving, from a content library, a plurality of search result images that include a channel that matches the intent channel. The method may include generating a recommended set of images based on the intent channel and the set of feature channels.
Type: Application
Filed: July 11, 2022
Publication date: January 11, 2024
Applicant: Adobe Inc.
Inventors: Praneetha Vaddamanu, Nihal Jain, Paridhi Maheshwari, Kuldeep Kulkarni, Vishwa Vinay, Balaji Vasan Srinivasan, Niyati Chhaya, Harshit Agrawal, Prabhat Mahapatra, Rizurekh Saha
-
Publication number: 20220180572
Abstract: Systems and methods for color representation are described. Embodiments of the inventive concept are configured to receive an attribute-object pair including a first term comprising an attribute label and a second term comprising an object label, encode the attribute-object pair to produce encoded features using a neural network that orders the first term and the second term based on the attribute label and the object label, and generate a color profile for the attribute-object pair based on the encoded features, wherein the color profile is based on a compositional relationship between the first term and the second term.
Type: Application
Filed: December 4, 2020
Publication date: June 9, 2022
Inventors: Paridhi Maheshwari, Vishwa Vinay, Dhananjay Raut, Nihal Jain, Praneetha Vaddamanu, Shraiysh Vaishay
-
Publication number: 20220130078
Abstract: Digital image text editing techniques as implemented by an image processing system are described that support increased user interaction in the creation and editing of digital images through understanding a content creator's intent as expressed using text. In one example, a text user input is received by a text input module. The text user input describes a visual object and a visual attribute, in which the visual object specifies a visual context of the visual attribute. A feature representation is generated by a text-to-feature system using a machine-learning module based on the text user input. The feature representation is passed to an image editing system to edit the digital object in the digital image, e.g., by applying a texture to an outline of the digital object within the digital image.
Type: Application
Filed: October 26, 2020
Publication date: April 28, 2022
Applicant: Adobe Inc.
Inventors: Paridhi Maheshwari, Vishwa Vinay, Shraiysh Vaishay, Praneetha Vaddamanu, Nihal Jain, Dhananjay Bhausaheb Raut