Patents by Inventor Midhun Gundapuneni

Midhun Gundapuneni has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10503738
    Abstract: Certain embodiments involve creating multimedia content with text and media assets that illustrate the text. Multiple sentences are ranked on various features. A sentence's ranking is determined based on, for example, the presence of important phrases in the sentence, the degree to which the informational content of the sentence can be represented through a media asset, the presence of one or more sentiments associated with the sentence, and the readability of the sentence. In some examples, the ranked sentences are analyzed to identify similar informational content, and the sentences are re-ranked based on this analysis. A subset of the higher-ranked sentences is then analyzed to determine similarities between the content of each sentence and the text descriptions of media assets. This analysis can be used to select appropriate images or other media assets. Multimedia content is generated in which the selected media assets are positioned near the corresponding sentences. (An illustrative code sketch of this ranking approach appears after the listing below.)
    Type: Grant
    Filed: May 24, 2016
    Date of Patent: December 10, 2019
    Assignee: Adobe Inc.
    Inventors: Harsh Jhamtani, Siddhartha Kumar Dutta, Midhun Gundapuneni, Shubham Varma, Cedric Huesler
  • Patent number: 9858264
    Abstract: A text sentence is automatically converted to an image sentence that conveys the semantic roles of the text sentence. In some embodiments, this is accomplished by identifying the semantic roles associated with each verb of the sentence and any associated verb adjunctions, and by identifying the grammatical dependencies between words and phrases in the sentence. An image database, in which each image is tagged with descriptive information about what it depicts, is queried for images corresponding to the semantic roles of the identified verbs. Unless a single image is found to depict every semantic role, the text sentence is split into two smaller fragments. This process is then repeated recursively until images have been identified that describe each semantic role of each sentence fragment. (An illustrative code sketch of this recursive matching appears after the listing below.)
    Type: Grant
    Filed: March 22, 2017
    Date of Patent: January 2, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Siddhartha Kumar Dutta, Midhun Gundapuneni, Harsh Jhamtani, Shubham Varma
  • Publication number: 20170270123
    Abstract: Certain embodiments involve creating multimedia content with text and media assets that illustrate the text. Multiple sentences are ranked on various features. A sentence's ranking is determined based on, for example, the presence of important phrases in the sentence, the degree to which the informational content of the sentence can be represented through a media asset, the presence of one or more sentiments associated with the sentence, and the readability of the sentence. In some examples, the ranked sentences are analyzed to identify similar informational content, and the sentences are re-ranked based on this analysis. A subset of the higher-ranked sentences is then analyzed to determine similarities between the content of each sentence and the text descriptions of media assets. This analysis can be used to select appropriate images or other media assets. Multimedia content is generated in which the selected media assets are positioned near the corresponding sentences.
    Type: Application
    Filed: May 24, 2016
    Publication date: September 21, 2017
    Inventors: Harsh Jhamtani, Siddhartha Kumar Dutta, Midhun Gundapuneni, Shubham Varma, Cedric Huesler
  • Publication number: 20170192961
    Abstract: A text sentence is automatically converted to an image sentence that conveys the semantic roles of the text sentence. In some embodiments, this is accomplished by identifying the semantic roles associated with each verb of the sentence and any associated verb adjunctions, and by identifying the grammatical dependencies between words and phrases in the sentence. An image database, in which each image is tagged with descriptive information about what it depicts, is queried for images corresponding to the semantic roles of the identified verbs. Unless a single image is found to depict every semantic role, the text sentence is split into two smaller fragments. This process is then repeated recursively until images have been identified that describe each semantic role of each sentence fragment.
    Type: Application
    Filed: March 22, 2017
    Publication date: July 6, 2017
    Applicant: Adobe Systems Incorporated
    Inventors: Siddhartha Kumar Dutta, Midhun Gundapuneni, Harsh Jhamtani, Shubham Varma
  • Publication number: 20170139955
    Abstract: A text sentence is automatically converted to an image sentence that conveys the semantic roles of the text sentence. In some embodiments, this is accomplished by identifying the semantic roles associated with each verb of the sentence and any associated verb adjunctions, and by identifying the grammatical dependencies between words and phrases in the sentence. An image database, in which each image is tagged with descriptive information about what it depicts, is queried for images corresponding to the semantic roles of the identified verbs. Unless a single image is found to depict every semantic role, the text sentence is split into two smaller fragments. This process is then repeated recursively until images have been identified that describe each semantic role of each sentence fragment.
    Type: Application
    Filed: November 16, 2015
    Publication date: May 18, 2017
    Applicant: Adobe Systems Incorporated
    Inventors: Siddhartha Kumar Dutta, Midhun Gundapuneni, Harsh Jhamtani, Shubham Varma
  • Patent number: 9633048
    Abstract: A text sentence is automatically converted to an image sentence that conveys the semantic roles of the text sentence. In some embodiments, this is accomplished by identifying the semantic roles associated with each verb of the sentence and any associated verb adjunctions, and by identifying the grammatical dependencies between words and phrases in the sentence. An image database, in which each image is tagged with descriptive information about what it depicts, is queried for images corresponding to the semantic roles of the identified verbs. Unless a single image is found to depict every semantic role, the text sentence is split into two smaller fragments. This process is then repeated recursively until images have been identified that describe each semantic role of each sentence fragment.
    Type: Grant
    Filed: November 16, 2015
    Date of Patent: April 25, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Siddhartha Kumar Dutta, Midhun Gundapuneni, Harsh Jhamtani, Shubham Varma
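
To make the sentence-ranking approach of patent 10503738 (and publication 20170270123) more concrete, the following is a minimal, illustrative Python sketch. It is not the patented implementation: the feature set, weights, redundancy penalty, and token-overlap similarity are simplifying assumptions chosen for demonstration, and the sentences, key phrases, and asset descriptions are invented examples.

    import re

    STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "are", "with"}

    def tokens(text):
        # Lowercased word tokens with common stopwords removed.
        return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]

    def jaccard(a, b):
        # Token-set overlap, used here as a cheap similarity proxy.
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    def score_sentence(sentence, key_phrases):
        # Combine a few simple feature scores; features and weights are
        # illustrative stand-ins, not the patented feature set.
        toks = tokens(sentence)
        if not toks:
            return 0.0
        key_score = sum(1.0 for p in key_phrases if p.lower() in sentence.lower())
        # Crude "visualizability" proxy: capitalized (likely concrete/named) words.
        visual = sum(1.0 for w in sentence.split()[1:] if w[:1].isupper()) / len(toks)
        # Crude readability proxy: shorter sentences score higher.
        readability = 1.0 / (1.0 + len(toks) / 15.0)
        return 2.0 * key_score + visual + readability

    def rerank_for_redundancy(ranked, penalty=0.5):
        # Greedy re-ranking: demote sentences that repeat content already kept.
        kept = []
        for score, sent in ranked:
            overlap = max((jaccard(tokens(sent), tokens(k)) for _, k in kept), default=0.0)
            kept.append((score * (1.0 - penalty * overlap), sent))
        return sorted(kept, reverse=True)

    def match_assets(top_sentences, asset_descriptions):
        # Pair each selected sentence with the asset whose description overlaps it most.
        pairs = []
        for _, sent in top_sentences:
            best = max(asset_descriptions,
                       key=lambda name: jaccard(tokens(sent), tokens(asset_descriptions[name])))
            pairs.append((sent, best))
        return pairs

    if __name__ == "__main__":
        sentences = [
            "The Golden Gate Bridge opened in 1937 and spans the San Francisco Bay.",
            "It is painted a distinctive shade called International Orange.",
            "The bridge spans the bay and opened to traffic in 1937.",
        ]
        key_phrases = ["Golden Gate Bridge", "International Orange"]
        assets = {
            "bridge_photo.jpg": "Golden Gate Bridge over San Francisco Bay at sunset",
            "paint_swatch.jpg": "International Orange paint color swatch",
        }
        ranked = sorted(((score_sentence(s, key_phrases), s) for s in sentences), reverse=True)
        reranked = rerank_for_redundancy(ranked)
        for sentence, asset in match_assets(reranked[:2], assets):
            print(f"{asset}  <-  {sentence}")

Run as written, the sketch keeps the two highest-scoring, least redundant sentences and pairs each with the media asset whose description overlaps it most, which mirrors the rank, re-rank, and asset-selection steps the abstract describes at a high level.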
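Similarly, the following minimal Python sketch illustrates the recursive sentence-to-image matching described in patents 9633048 and 9858264 (and publications 20170139955 and 20170192961). It is not the patented implementation: the semantic roles are assumed to be given rather than extracted by a semantic-role labeler, the tagged image "database" is a hand-built stand-in, and the sketch splits the ordered role list in half rather than splitting the sentence text itself.

    # Hand-crafted stand-in for a tagged image database (image name -> tag set).
    IMAGE_DB = {
        "img_chef.jpg": {"chef", "cook", "kitchen"},
        "img_serving.jpg": {"waiter", "serve", "dinner", "table"},
        "img_dinner_table.jpg": {"dinner", "table", "family"},
    }

    def find_image(roles, image_db):
        # Return an image whose tags cover every requested semantic role, if any.
        for name, tags in image_db.items():
            if set(roles) <= tags:
                return name
        return None

    def images_for_roles(roles, image_db):
        # Recursively split the ordered role list in half until each piece is
        # covered by a single image (or a piece shrinks to one uncovered role).
        if not roles:
            return []
        image = find_image(roles, image_db)
        if image is not None:
            return [image]
        if len(roles) == 1:
            return []  # no image depicts this role; leave it unillustrated
        mid = len(roles) // 2
        return (images_for_roles(roles[:mid], image_db)
                + images_for_roles(roles[mid:], image_db))

    # Roles assumed (for illustration) to come from "The chef cooked and served dinner."
    print(images_for_roles(["chef", "cook", "serve", "dinner"], IMAGE_DB))
    # -> ['img_chef.jpg', 'img_serving.jpg']

Because no single image in the toy database covers all four roles, the role list is split in two and each half is matched separately, which is the recursive fallback behavior the abstract describes.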