Patents by Inventor Ankur Aher

Ankur Aher has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210097977
    Abstract: Systems and methods for determining hint words that improve the accuracy of automated speech recognition (ASR) systems. Hint words are determined in the context of a user issuing voice commands in connection with a voice interface system. Terms are initially taken from most frequently occurring terms in operation of a voice interface system. For example, most frequently occurring terms that arise in electronic search queries or received commands are selected. Certain of these terms are selected as hint words, and the selected hint words are then transmitted to an ASR system to assist in translation of speech to text.
    Type: Application
    Filed: October 1, 2019
    Publication date: April 1, 2021
    Inventors: Ankur Aher, Jeffry Copps Robert Jose
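The hint-word selection this abstract describes (taking the most frequently occurring terms from past queries and sending them to the ASR system) can be sketched as a simple frequency count. The function name, the `top_n` cutoff, and the sample queries below are illustrative assumptions, not details from the patent.

```python
from collections import Counter

def select_hint_words(queries, top_n=3):
    """Pick the most frequently occurring terms across past voice
    queries as hint words to transmit to an ASR system (sketch)."""
    counts = Counter(term.lower() for q in queries for term in q.split())
    return [term for term, _ in counts.most_common(top_n)]

hints = select_hint_words([
    "play comedy movies",
    "play action movies",
    "record comedy show",
])
# The three most frequent terms ("play", "comedy", "movies") become hints.
```

A production system would draw the term counts from logged search queries and commands rather than a literal list, but the selection step is the same shape.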
  • Publication number: 20210063177
    Abstract: Systems and methods are disclosed herein for selecting alternate routes with fewer directions during vehicle navigation. The disclosed techniques herein determine directions from route data and direction timestamps for each of the directions. For each direction, a corresponding media asset from media assets in a playlist having a media asset duration that matches a direction duration is determined. The direction duration is the time difference between the direction timestamp and a subsequent direction timestamp.
    Type: Application
    Filed: August 30, 2019
    Publication date: March 4, 2021
    Inventors: Nishchit Mahajan, Ankur Aher
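The core matching step (finding the playlist asset whose duration best fits the time between two direction timestamps) can be sketched as a nearest-duration search. The field names and sample durations are hypothetical.

```python
def best_asset_for_direction(direction_duration, playlist):
    """Pick the playlist asset whose duration is closest to the time
    between the current direction and the next one (durations in seconds)."""
    return min(playlist, key=lambda a: abs(a["duration"] - direction_duration))

playlist = [
    {"title": "Song A", "duration": 180},
    {"title": "Song B", "duration": 240},
    {"title": "Song C", "duration": 300},
]
# Direction timestamp at t=0, next direction at t=250 -> duration of 250 s.
choice = best_asset_for_direction(250, playlist)
```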
  • Publication number: 20210063193
    Abstract: Systems and methods are disclosed herein for providing uninterrupted media content during vehicle navigation. The disclosed techniques determine directions from route data and navigation announcements for each of the directions. For each navigation announcement, a determination is made whether current playback of a media asset in a playlist ends within a predefined time threshold before the navigation announcement. If so, playback of the playlist is paused until the navigation announcement has elapsed.
    Type: Application
    Filed: August 30, 2019
    Publication date: March 4, 2021
    Inventors: Nishchit Mahajan, Ankur Aher
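The pause rule in this abstract reduces to a single comparison: does current playback end within the threshold window before the announcement? A minimal sketch, with an assumed 10-second threshold:

```python
def should_pause(playback_end, announcement_time, threshold=10):
    """Return True when the current media asset ends within `threshold`
    seconds before the navigation announcement, in which case the
    playlist is paused until the announcement has elapsed.
    Times are seconds into the trip; the threshold is an assumption."""
    gap = announcement_time - playback_end
    return 0 <= gap <= threshold
```

If the track ends well before the announcement, or only after it, playback proceeds uninterrupted.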
  • Publication number: 20210063194
    Abstract: Systems and methods are disclosed herein for providing uninterrupted media content by reordering playlists during vehicle navigation. The disclosed techniques herein determine directions from route data and navigation announcements for each of the directions. For each direction, a corresponding media asset from media assets in a playlist having a media asset duration that matches a direction duration is determined. The direction duration is the time difference between the navigation announcement and a subsequent navigation announcement.
    Type: Application
    Filed: August 30, 2019
    Publication date: March 4, 2021
    Inventors: Nishchit Mahajan, Ankur Aher
  • Publication number: 20210034663
    Abstract: The system receives a voice query at an audio interface and converts the voice query to text. The system can determine pronunciation information during conversion and generate metadata that indicates a pronunciation of one or more words of the query, include phonetic information in the text query, or both. A query includes one or more entities, which may be more accurately identified based on pronunciation. The system searches for information, content, or both among one or more databases based on the generated text query, pronunciation information, user profile information, search histories or trends, and optionally other information. The system identifies one or more entities or content items that match the text query, and retrieves the identified information to provide to the user.
    Type: Application
    Filed: July 31, 2019
    Publication date: February 4, 2021
    Inventors: Ankur Aher, Indranil Coomar Doss, Aashish Goyal, Aman Puniyani, Kandala Reddy, Mithun Umesh
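The idea of annotating a converted query with pronunciation metadata can be sketched with a toy phonetic key. The key function below is a deliberately crude stand-in (not Soundex and not the patent's method): it keeps the first letter, drops later vowels, and collapses repeats, purely to show the shape of the metadata.

```python
def phonetic_key(word):
    """Very rough phonetic key: keep the first letter, drop later
    vowels, collapse repeated consonants. Illustrative only."""
    word = word.lower()
    rest = [c for c in word[1:] if c not in "aeiou"]
    collapsed = []
    for c in rest:
        if not collapsed or collapsed[-1] != c:
            collapsed.append(c)
    return word[0] + "".join(collapsed)

def annotate_query(text):
    """Attach hypothetical pronunciation metadata to each query term,
    alongside the plain text, for use during entity lookup."""
    return [{"term": t, "phonetic": phonetic_key(t)} for t in text.split()]

annotated = annotate_query("play songs")
```

A real system would derive the phonetic information from the speech-to-text engine itself during conversion, rather than recomputing it from the text.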
  • Publication number: 20210034662
    Abstract: The system receives a voice query at an audio interface and converts the voice query to text. The system can determine pronunciation information during conversion and generate metadata that indicates a pronunciation of one or more words of the query, include phonetic information in the text query, or both. A query includes one or more entities that may be more accurately identified based on pronunciation. The system searches for information, content, or both among one or more databases based on the generated text query, pronunciation information, user profile information, search histories or trends, and optionally other information. The system identifies one or more entities or content items that match the text query, and retrieves the identified information to provide to the user.
    Type: Application
    Filed: July 31, 2019
    Publication date: February 4, 2021
    Inventors: Ankur Aher, Indranil Coomar Doss, Aashish Goyal, Aman Puniyani, Kandala Reddy, Mithun Umesh
  • Publication number: 20210037293
    Abstract: Methods and systems are described for providing content, such as a movie, with dialogue including a quotation that was input. For example, using voice search, a viewer may input a famous quotation from a movie to find the original film and related content. The methods and systems use a quotation engine in a digital device to receive an input including the quotation and access a plurality of content items that include dialogue. The quotation engine identifies a subset of content items that include dialogue similar to the input quotation. The quotation engine accesses metadata of each of the subset of content items, ranks the subset based on predetermined criteria and the metadata, and provides the ranked subset of the plurality of content items for consumption. The quotation engine may use a graphical user interface to identify the earliest release, trending content, or the program best known for the quote.
    Type: Application
    Filed: July 31, 2019
    Publication date: February 4, 2021
    Inventors: Ankur Aher, Nikhil Gabhane, Raman Gupta, Aman Puniyani
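The match-then-rank flow of the quotation engine can be sketched with substring matching and a single ranking criterion (earliest release, one of the criteria the abstract mentions). The catalog structure and titles are invented for illustration.

```python
def rank_by_quotation(quote, catalog):
    """Find content items whose dialogue contains the quoted line,
    then rank them earliest release first (a simple stand-in for
    the metadata-based ranking criteria)."""
    matches = [c for c in catalog if quote.lower() in c["dialogue"].lower()]
    return sorted(matches, key=lambda c: c["year"])

catalog = [
    {"title": "Remake", "year": 2005, "dialogue": "here's looking at you, kid."},
    {"title": "Original", "year": 1942, "dialogue": "Here's looking at you, kid."},
    {"title": "Unrelated", "year": 1999, "dialogue": "a completely different line"},
]
ranked = rank_by_quotation("here's looking at you", catalog)
```

A real quotation engine would use fuzzy dialogue matching rather than exact substrings, and would combine several criteria (trending status, best-known program) in the ranking.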
  • Publication number: 20210035587
    Abstract: The system identifies one or more entities or content items among a plurality of stored information. The system generates an audio file based on a first text string that represents the entity or content item. Based on the first text string and at least one speech criterion, the system generates, using a speech-to-text module, a second text string from the audio file. The system then compares the text strings and stores the second text string if it is not identical to the first text string. The system generates metadata that includes results from text-speech-text conversions to forecast possible misidentifications when responding to voice queries during search operations.
    Type: Application
    Filed: July 31, 2019
    Publication date: February 4, 2021
    Inventors: Ankur Aher, Indranil Coomar Doss, Aashish Goyal, Aman Puniyani, Kandala Reddy, Mithun Umesh
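The text-speech-text round trip can be sketched as follows, with the actual TTS and ASR engines replaced by a stand-in function, since the point is the comparison-and-store step, not the conversion itself. Every name here is hypothetical.

```python
def forecast_misidentifications(titles, tts_stt_round_trip):
    """For each stored title, run a text->speech->text round trip and
    record the result whenever it differs from the original, so later
    voice queries can be matched against likely mis-transcriptions.
    `tts_stt_round_trip` stands in for real TTS + speech-to-text engines."""
    metadata = {}
    for title in titles:
        heard = tts_stt_round_trip(title)
        if heard != title:
            metadata[title] = heard
    return metadata

# Hypothetical round trip that confuses similar-sounding words:
fake_round_trip = {"Two Towers": "To Towers", "Frozen": "Frozen"}.get
aliases = forecast_misidentifications(["Two Towers", "Frozen"], fake_round_trip)
```

Only the title that round-trips differently is stored, matching the abstract's rule that the second string is kept only when it is not identical to the first.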
  • Publication number: 20200413163
    Abstract: Systems and methods are provided for presenting an interactive content item matching a user-selected category to a user for a desired duration. A user selects a category and selects a first interactive content item on a media system. The system calculates a total duration of a storyline from the selected interactive content item that matches the selected category (e.g., a genre “comedy”) and compares the calculated duration to a desired predetermined duration for which the user wishes to watch the selected show. If the system determines, for instance, that the total duration of the selected storyline is less than the predetermined duration, the system identifies scenes from another show and interleaves them with scenes from the first interactive content item to generate a combined interactive content item that satisfies the user viewing preferences.
    Type: Application
    Filed: June 28, 2019
    Publication date: December 31, 2020
    Inventors: Ankur Aher, Sandeep Jangra, Aman Puniyani, Mohammed Yasir
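The duration check and fill step can be sketched as below. For simplicity the sketch appends filler scenes at the end rather than truly interleaving them, and all field names, categories, and durations are assumptions.

```python
def build_storyline(scenes, category, target_duration, filler_scenes):
    """Collect scenes matching the chosen category; if their total
    running time falls short of the desired viewing duration, add
    scenes from another show until the target is met (simplified:
    appended rather than interleaved)."""
    selected = [s for s in scenes if s["category"] == category]
    total = sum(s["duration"] for s in selected)
    for f in filler_scenes:
        if total >= target_duration:
            break
        selected.append(f)
        total += f["duration"]
    return selected, total

scenes = [
    {"category": "comedy", "duration": 10},
    {"category": "drama", "duration": 5},
]
fillers = [
    {"category": "comedy", "duration": 8},
    {"category": "comedy", "duration": 8},
]
combined, total = build_storyline(scenes, "comedy", 25, fillers)
```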
  • Publication number: 20200410995
    Abstract: Systems and methods are described herein for disambiguating a voice search query by determining whether the user made a gesture while speaking a quotation from a content item and whether the user mimicked or approximated a gesture made by a character in the content item when the character spoke the words quoted by the user. If so, a search result comprising an identifier of the content item is generated. A search result representing the content item from which the quotation comes may be ranked highest among other search results returned and therefore presented first in a list of search results. If the user did not mimic or approximate a gesture made by a character in the content item when the quotation is spoken in the content item, then a search result may not be generated for the content item or may be ranked lowest among other search results.
    Type: Application
    Filed: June 28, 2019
    Publication date: December 31, 2020
    Inventors: Ankur Aher, Nishchit Mahajan, Narendra Purushothama, Sai Durga Venkat Reddy Pulikunta
  • Publication number: 20200342859
    Abstract: Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
    Type: Application
    Filed: April 29, 2019
    Publication date: October 29, 2020
    Inventors: Ankur Aher, Sindhuja Chonat Sri, Aman Puniyani, Nishchit Mahajan
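The signature-comparison step at the heart of this abstract can be sketched by modeling each audio signature as a vector of per-frame features and requiring every frame to agree within a tolerance. The vector representation and the tolerance value are illustrative assumptions, not the patented matching method.

```python
def matches_quotation(query_signature, quote_signature, tolerance=0.15):
    """Compare a voice query's audio signature against the signature
    stored in the quotation metadata. Signatures are modeled as
    equal-length lists of per-frame feature values; a match requires
    every frame to be within `tolerance` (an assumed threshold)."""
    if len(query_signature) != len(quote_signature):
        return False
    return all(abs(a - b) <= tolerance
               for a, b in zip(query_signature, quote_signature))
```

When this returns True, the system would generate a search result identifying the content item the quotation comes from; otherwise the result is omitted or ranked lower.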