Patents by Inventor Nihal Jain

Nihal Jain has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240119646
    Abstract: Digital image text editing techniques as implemented by an image processing system are described that support increased user interaction in the creation and editing of digital images through understanding a content creator's intent as expressed using text. In one example, a text user input is received by a text input module. The text user input describes a visual object and a visual attribute, in which the visual object specifies a visual context of the visual attribute. A feature representation is generated by a text-to-feature system using a machine-learning module based on the text user input. The feature representation is passed to an image editing system to edit a digital object in a digital image, e.g., by applying a texture to an outline of the digital object within the digital image.
    Type: Application
    Filed: December 15, 2023
    Publication date: April 11, 2024
    Applicant: Adobe Inc.
    Inventors: Paridhi Maheshwari, Vishwa Vinay, Shraiysh Vaishay, Praneetha Vaddamanu, Nihal Jain, Dhananjay Bhausaheb Raut
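The pipeline this abstract describes (text prompt → learned feature representation → edit applied within an object outline) can be sketched loosely as follows. All function names here are hypothetical stand-ins; in particular, the hash-based embedding is a toy substitute for the trained text-to-feature model the patent actually covers.

```python
# Illustrative sketch of the text-to-feature editing flow, not the patented
# implementation: embed a text prompt, then apply the derived "texture" only
# inside the object outline (mask).
import hashlib

def embed_text(text: str, dim: int = 8) -> list[float]:
    """Map a text prompt to a fixed-size feature vector (toy stand-in for a
    learned text-to-feature model)."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

def edit_image(image: list[list[float]], mask: list[list[bool]],
               features: list[float]) -> list[list[float]]:
    """Apply a feature-derived texture value inside the object outline,
    leaving pixels outside the mask untouched."""
    texture = sum(features) / len(features)
    return [
        [texture if mask[r][c] else image[r][c]
         for c in range(len(image[0]))]
        for r in range(len(image))
    ]

# A 2x2 image where only the top-left pixel lies inside the object outline.
image = [[0.0, 0.0], [0.0, 0.0]]
mask = [[True, False], [False, False]]
feats = embed_text("peeling paint on a wooden door")
edited = edit_image(image, mask, feats)
```

The key interface property mirrored from the abstract is that the edit is confined to the object's outline while the rest of the image is preserved.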
  • Patent number: 11942082
    Abstract: Techniques described herein relate to facilitating end-to-end multilingual communications with automated assistants. In various implementations, speech recognition output may be generated based on voice input in a first language. A first language intent may be identified based on the speech recognition output and fulfilled in order to generate a first natural language output candidate in the first language. At least part of the speech recognition output may be translated to a second language to generate an at least partial translation, which may then be used to identify a second language intent that is fulfilled to generate a second natural language output candidate in the second language. Scores may be determined for the first and second natural language output candidates, and based on the scores, a natural language output may be selected for presentation.
    Type: Grant
    Filed: May 26, 2022
    Date of Patent: March 26, 2024
    Assignee: GOOGLE LLC
    Inventors: James Kuczmarski, Vibhor Jain, Amarnag Subramanya, Nimesh Ranjan, Melvin Jose Johnson Premkumar, Vladimir Vuskovic, Luna Dai, Daisuke Ikeda, Nihal Sandeep Balani, Jinna Lei, Mengmeng Niu, Hongjie Chai, Wangqing Yuan
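The final selection step in this abstract (score the first-language and translated second-language candidates, then present the winner) reduces to a simple argmax over scored candidates. The confidence values below are invented for illustration; the patent does not specify a scoring model.

```python
# Hedged sketch of candidate selection for a multilingual assistant: two
# natural language output candidates are scored, and the higher-scoring
# one is chosen for presentation to the user.

def select_output(candidates: list[tuple[str, float]]) -> str:
    """Return the candidate text with the highest score."""
    best_text, _ = max(candidates, key=lambda c: c[1])
    return best_text

first_lang = ("Il pleut a Paris aujourd'hui.", 0.62)   # fulfilled in the first language
second_lang = ("It is raining in Paris today.", 0.91)  # fulfilled via translation
print(select_output([first_lang, second_lang]))
# -> It is raining in Paris today.
```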
  • Patent number: 11915343
    Abstract: Systems and methods for color representation are described. Embodiments of the inventive concept are configured to receive an attribute-object pair including a first term comprising an attribute label and a second term comprising an object label, encode the attribute-object pair to produce encoded features using a neural network that orders the first term and the second term based on the attribute label and the object label, and generate a color profile for the attribute-object pair based on the encoded features, wherein the color profile is based on a compositional relationship between the first term and the second term.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: February 27, 2024
    Assignee: Adobe Inc.
    Inventors: Paridhi Maheshwari, Vishwa Vinay, Dhananjay Raut, Nihal Jain, Praneetha Vaddamanu, Shraiysh Vaishay
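The interface described in this abstract (an ordered attribute-object pair is encoded, and a color profile is derived from the composition of both terms) can be sketched as below. The hash-based encoder is a hypothetical stand-in for the patented neural network; only the shape of the interface follows the abstract.

```python
# Toy sketch of compositional color representation: the ordering of the
# attribute and object terms matters, so ("ripe", "banana") encodes
# differently from ("banana", "ripe").
import hashlib

def encode_pair(attribute: str, obj: str) -> bytes:
    """Encode the pair with the attribute ordered before the object."""
    return hashlib.sha256(f"{attribute}|{obj}".encode()).digest()

def color_profile(attribute: str, obj: str) -> tuple[int, int, int]:
    """Derive an RGB triple from the encoded features, so the color depends
    on the composition of both terms rather than either term alone."""
    enc = encode_pair(attribute, obj)
    return (enc[0], enc[1], enc[2])

rgb = color_profile("ripe", "banana")
```

The design point carried over from the abstract is that the representation is compositional: swapping or changing either term changes the encoding.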
  • Patent number: 11915692
    Abstract: Techniques described herein relate to facilitating end-to-end multilingual communications with automated assistants. In various implementations, speech recognition output may be generated based on voice input in a first language. A first language intent may be identified based on the speech recognition output and fulfilled in order to generate a first natural language output candidate in the first language. At least part of the speech recognition output may be translated to a second language to generate an at least partial translation, which may then be used to identify a second language intent that is fulfilled to generate a second natural language output candidate in the second language. Scores may be determined for the first and second natural language output candidates, and based on the scores, a natural language output may be selected for presentation.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: February 27, 2024
    Assignee: GOOGLE LLC
    Inventors: James Kuczmarski, Vibhor Jain, Amarnag Subramanya, Nimesh Ranjan, Melvin Jose Johnson Premkumar, Vladimir Vuskovic, Luna Dai, Daisuke Ikeda, Nihal Sandeep Balani, Jinna Lei, Mengmeng Niu
  • Patent number: 11887217
    Abstract: Digital image text editing techniques as implemented by an image processing system are described that support increased user interaction in the creation and editing of digital images through understanding a content creator's intent as expressed using text. In one example, a text user input is received by a text input module. The text user input describes a visual object and a visual attribute, in which the visual object specifies a visual context of the visual attribute. A feature representation is generated by a text-to-feature system using a machine-learning module based on the text user input. The feature representation is passed to an image editing system to edit a digital object in a digital image, e.g., by applying a texture to an outline of the digital object within the digital image.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Paridhi Maheshwari, Vishwa Vinay, Shraiysh Vaishay, Praneetha Vaddamanu, Nihal Jain, Dhananjay Bhausaheb Raut
  • Publication number: 20240012849
    Abstract: Embodiments are disclosed for multichannel content recommendation. The method may include receiving an input collection comprising a plurality of images. The method may include extracting a set of feature channels from each of the images. The method may include generating, by a trained machine learning model, an intent channel of the input collection from the set of feature channels. The method may include retrieving, from a content library, a plurality of search result images that include a channel that matches the intent channel. The method may include generating a recommended set of images based on the intent channel and the set of feature channels.
    Type: Application
    Filed: July 11, 2022
    Publication date: January 11, 2024
    Applicant: Adobe Inc.
    Inventors: Praneetha Vaddamanu, Nihal Jain, Paridhi Maheshwari, Kuldeep Kulkarni, Vishwa Vinay, Balaji Vasan Srinivasan, Niyati Chhaya, Harshit Agrawal, Prabhat Mahapatra, Rizurekh Saha
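The recommendation flow this abstract outlines (extract feature channels per image, derive an intent channel for the collection, retrieve library images whose channels match) can be sketched roughly as follows. The channel names and the frequency-based "model" are illustrative assumptions, not the trained machine-learning model the application covers.

```python
# Rough sketch of multichannel content recommendation: derive a dominant
# "intent channel" from an input collection, then retrieve library images
# carrying that channel.
from collections import Counter

def extract_channels(image_meta: dict) -> set[str]:
    """Stand-in feature extraction: read channels from metadata tags."""
    return set(image_meta["tags"])

def intent_channel(collection: list[dict]) -> str:
    """Pick the most frequent channel across the collection (a crude proxy
    for the trained intent model in the abstract)."""
    counts = Counter(tag for img in collection for tag in extract_channels(img))
    return counts.most_common(1)[0][0]

def retrieve(library: list[dict], intent: str) -> list[dict]:
    """Return library images that include the intent channel."""
    return [img for img in library if intent in extract_channels(img)]

collection = [{"tags": ["beach", "sunset"]}, {"tags": ["beach", "people"]}]
library = [{"id": 1, "tags": ["beach"]}, {"id": 2, "tags": ["city"]}]
intent = intent_channel(collection)  # "beach" dominates the collection
results = retrieve(library, intent)  # only image 1 matches the intent
```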
  • Publication number: 20220180572
    Abstract: Systems and methods for color representation are described. Embodiments of the inventive concept are configured to receive an attribute-object pair including a first term comprising an attribute label and a second term comprising an object label, encode the attribute-object pair to produce encoded features using a neural network that orders the first term and the second term based on the attribute label and the object label, and generate a color profile for the attribute-object pair based on the encoded features, wherein the color profile is based on a compositional relationship between the first term and the second term.
    Type: Application
    Filed: December 4, 2020
    Publication date: June 9, 2022
    Inventors: Paridhi Maheshwari, Vishwa Vinay, Dhananjay Raut, Nihal Jain, Praneetha Vaddamanu, Shraiysh Vaishay
  • Publication number: 20220130078
    Abstract: Digital image text editing techniques as implemented by an image processing system are described that support increased user interaction in the creation and editing of digital images through understanding a content creator's intent as expressed using text. In one example, a text user input is received by a text input module. The text user input describes a visual object and a visual attribute, in which the visual object specifies a visual context of the visual attribute. A feature representation is generated by a text-to-feature system using a machine-learning module based on the text user input. The feature representation is passed to an image editing system to edit a digital object in a digital image, e.g., by applying a texture to an outline of the digital object within the digital image.
    Type: Application
    Filed: October 26, 2020
    Publication date: April 28, 2022
    Applicant: Adobe Inc.
    Inventors: Paridhi Maheshwari, Vishwa Vinay, Shraiysh Vaishay, Praneetha Vaddamanu, Nihal Jain, Dhananjay Bhausaheb Raut