Patents by Inventor Sindhuja Chonat Sri

Sindhuja Chonat Sri has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11961507
    Abstract: A transcription of a query for content discovery is generated, and a context of the query is identified, as well as a first plurality of candidate entities to which the query refers. A search is performed based on the context of the query and the first plurality of candidate entities, and results are generated for output. A transcription of a second voice query is then generated, and it is determined whether the second transcription includes a trigger term indicating a corrective query. If so, the context of the first query is retrieved. A second term of the second query similar to a term of the first query is identified, and a second plurality of candidate entities to which the second term refers is determined. A second search is performed based on the second plurality of candidates and the context, and new search results are generated for output.
    Type: Grant
    Filed: March 2, 2023
    Date of Patent: April 16, 2024
    Assignee: Rovi Guides, Inc.
    Inventors: Jeffry Copps Robert Jose, Sindhuja Chonat Sri
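For illustration only, a minimal Python sketch of the corrective-query flow summarized in the abstract of patent 11961507 above: the first query's context and candidate entities are retained, a trigger term in the follow-up query marks it as corrective, and the search is re-run against re-resolved entities. The trigger-term list, entity resolver, and similarity check are placeholders invented for this example, not details taken from the patent.

```python
# Hypothetical sketch of a corrective voice query, assuming toy stand-ins for
# entity resolution, catalog search, and trigger-term detection.
import difflib
from dataclasses import dataclass, field

TRIGGER_TERMS = ("no,", "not that", "i meant", "actually")  # assumed corrective cues

@dataclass
class QueryContext:
    transcript: str
    entities: list = field(default_factory=list)

def resolve_entities(transcript):
    """Stand-in entity linker: treat capitalized tokens as candidate entities."""
    return [tok.strip(",.") for tok in transcript.split() if tok[:1].isupper()]

def search(entities, context):
    """Stand-in catalog search constrained by the stored query context."""
    return [f"result for {e} (context: {context.transcript!r})" for e in entities]

def handle_voice_query(transcript, prior=None):
    lowered = transcript.lower()
    if prior and any(t in lowered for t in TRIGGER_TERMS):
        # Corrective query: reuse the first query's context, find terms of the
        # new query similar to terms of the first query, and re-resolve entities.
        new_terms = resolve_entities(transcript)
        old_terms = resolve_entities(prior.transcript)
        similar = [t for t in new_terms if difflib.get_close_matches(t, old_terms, cutoff=0.5)]
        entities = similar or new_terms
        context = QueryContext(prior.transcript, entities)
    else:
        entities = resolve_entities(transcript)
        context = QueryContext(transcript, entities)
    return search(entities, context), context

results, ctx = handle_voice_query("Show me movies with Tom Holland")
results2, _ = handle_voice_query("No, I meant Tom Hardy", prior=ctx)
```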
  • Publication number: 20240005923
    Abstract: Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
    Type: Application
    Filed: September 14, 2023
    Publication date: January 4, 2024
    Inventors: Ankur Aher, Sindhuja Chonat Sri, Aman Puniyani, Nishchit Mahajan
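The quotation-matching step shared by publication 20240005923 above and the related grants below can be pictured with the following sketch: the transcribed string selects quotation metadata from a database, and the query's audio signature is compared against the reference signature stored in that metadata. The feature-vector signature format, the sample database, and the 0.85 threshold are assumptions made for the example, not details from the application.

```python
# Hypothetical sketch: match a transcribed voice query against stored quotation
# metadata by comparing audio signatures. The signature format (a fixed-length
# feature vector) and the similarity threshold are assumptions for illustration.
import math

QUOTATION_DB = {
    # transcript string -> (content item identifier, reference audio signature)
    "i'll be back": ("The Terminator (1984)", [0.8, 0.1, 0.3, 0.5]),
    "play it again": ("Casablanca (1942)", [0.2, 0.7, 0.4, 0.1]),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def disambiguate(transcript, query_signature, threshold=0.85):
    """Return a search result naming the content item if the user's delivery
    matches how the quotation is spoken in that item; otherwise None."""
    entry = QUOTATION_DB.get(transcript.lower())
    if entry is None:
        return None  # no quotation metadata matches the transcribed string
    content_item, reference_signature = entry
    if cosine(query_signature, reference_signature) >= threshold:
        return {"content_item": content_item, "matched_quotation": transcript}
    return None  # string matched, but the delivery did not mimic the original

print(disambiguate("I'll be back", [0.79, 0.12, 0.31, 0.48]))
```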
  • Patent number: 11790915
    Abstract: Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
    Type: Grant
    Filed: March 7, 2023
    Date of Patent: October 17, 2023
    Assignee: Rovi Guides, Inc.
    Inventors: Ankur Aher, Sindhuja Chonat Sri, Aman Puniyani, Nishchit Mahajan
  • Publication number: 20230206920
    Abstract: Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
    Type: Application
    Filed: March 7, 2023
    Publication date: June 29, 2023
    Inventors: Ankur Aher, Sindhuja Chonat Sri, Aman Puniyani, Nishchit Mahajan
  • Publication number: 20230206904
    Abstract: A transcription of a query for content discovery is generated, and a context of the query is identified, as well as a first plurality of candidate entities to which the query refers. A search is performed based on the context of the query and the first plurality of candidate entities, and results are generated for output. A transcription of a second voice query is then generated, and it is determined whether the second transcription includes a trigger term indicating a corrective query. If so, the context of the first query is retrieved. A second term of the second query similar to a term of the first query is identified, and a second plurality of candidate entities to which the second term refers is determined. A second search is performed based on the second plurality of candidates and the context, and new search results are generated for output.
    Type: Application
    Filed: March 2, 2023
    Publication date: June 29, 2023
    Inventors: Jeffry Copps Robert Jose, Sindhuja Chonat Sri
  • Patent number: 11626113
    Abstract: Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
    Type: Grant
    Filed: August 26, 2021
    Date of Patent: April 11, 2023
    Assignee: Rovi Guides, Inc.
    Inventors: Ankur Aher, Sindhuja Chonat Sri, Aman Puniyani, Nishchit Mahajan
  • Patent number: 11620982
    Abstract: A transcription of a query for content discovery is generated, and a context of the query is identified, as well as a first plurality of candidate entities to which the query refers. A search is performed based on the context of the query and the first plurality of candidate entities, and results are generated for output. A transcription of a second voice query is then generated, and it is determined whether the second transcription includes a trigger term indicating a corrective query. If so, the context of the first query is retrieved. A second term of the second query similar to a term of the first query is identified, and a second plurality of candidate entities to which the second term refers is determined. A second search is performed based on the second plurality of candidates and the context, and new search results are generated for output.
    Type: Grant
    Filed: June 1, 2020
    Date of Patent: April 4, 2023
    Assignee: Rovi Guides, Inc.
    Inventors: Jeffry Copps Robert Jose, Sindhuja Chonat Sri
  • Publication number: 20220157348
    Abstract: Systems and methods for generating individualized content trailers. Content such as a video is divided into segments each representing a set of common features. With reference to a set of stored user preferences, certain segments are selected as aligning with the user's interests. Each selected segment may then be assigned a label corresponding to the plot portion or element to which it belongs. A coherent trailer may then be assembled from the selected segments, ordered according to their plot elements. This allows a user to see not only segments containing subject matter that aligns with their interests, but also a set of such segments arranged to give the user an idea of the plot, and a sense of drama, increasing the likelihood of engagement with the content.
    Type: Application
    Filed: February 2, 2022
    Publication date: May 19, 2022
    Inventors: Jeffry Copps Robert Jose, Mithun Umesh, Sindhuja Chonat Sri
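A toy sketch of the trailer-assembly idea in publication 20220157348 above (and patent 11276434 below): keep the segments whose features overlap the stored user preferences, then order them by plot element so the result still plays as a coherent story. The segment representation, feature sets, and plot ordering are assumptions for illustration.

```python
# Hypothetical sketch of individualized trailer assembly: select segments whose
# features overlap the user's stored preferences, then order them by plot element.
PLOT_ORDER = ["setup", "rising_action", "climax", "resolution"]  # assumed arc

segments = [
    {"start": 120, "end": 135, "features": {"car_chase", "city"},      "plot": "rising_action"},
    {"start": 10,  "end": 25,  "features": {"romance"},                "plot": "setup"},
    {"start": 300, "end": 320, "features": {"explosion", "car_chase"}, "plot": "climax"},
    {"start": 400, "end": 410, "features": {"dialogue"},               "plot": "resolution"},
]

user_preferences = {"car_chase", "explosion"}

def assemble_trailer(segments, preferences):
    # Keep segments that align with the user's interests...
    selected = [s for s in segments if s["features"] & preferences]
    # ...then order them by plot element so the trailer reads as a story arc.
    return sorted(selected, key=lambda s: PLOT_ORDER.index(s["plot"]))

for seg in assemble_trailer(segments, user_preferences):
    print(seg["plot"], seg["start"], seg["end"])
```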
  • Publication number: 20220148600
    Abstract: Methods and systems are disclosed herein for training a network to detect mimicked voice input, so that it can be determined whether a voice input signal is a mimicked voice signal. First voice data is received. The first voice data comprises at least a voice signal of a first individual and another voice signal. The voice signal of the first individual and at least one other voice signal is combined to create a composite voice signal. Second voice data is received. The second voice data comprises at least a voice signal of the first individual. The network is trained using at least the composite voice signal and the second voice data to determine whether a voice input signal is a mimicked voice input signal.
    Type: Application
    Filed: November 11, 2020
    Publication date: May 12, 2022
    Inventors: Jeffry Copps Robert Jose, Sindhuja Chonat Sri, Mithun Umesh
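The training-data construction in publication 20220148600 above can be approximated as follows: the target speaker's voice is mixed with another voice to form composite, mimic-like examples, genuine recordings of the same speaker supply the other class, and a classifier is trained on the two. The mixing weight, the two-value feature vector, and the logistic-regression stand-in for "the network" are assumptions, not details from the application.

```python
# Hypothetical sketch: build composite (mimic-like) vs. genuine training pairs
# and fit a tiny classifier. A real system would use raw audio and a neural
# network; simple feature vectors and logistic regression stand in here.
import math
import random

random.seed(0)

def fake_voice_signal(base, n=64):
    """Stand-in for a recorded voice signal (list of samples)."""
    return [base + random.gauss(0, 0.1) for _ in range(n)]

def combine(sig_a, sig_b, w=0.6):
    """Composite signal: weighted mix of the target voice and another voice."""
    return [w * a + (1 - w) * b for a, b in zip(sig_a, sig_b)]

def features(sig):
    mean = sum(sig) / len(sig)
    var = sum((x - mean) ** 2 for x in sig) / len(sig)
    return [mean, var]

# First voice data: target speaker combined with another voice -> label 1 (mimicked)
# Second voice data: genuine recordings of the target speaker    -> label 0 (real)
data = []
for _ in range(50):
    composite = combine(fake_voice_signal(0.5), fake_voice_signal(-0.5))
    data.append((features(composite), 1))
    data.append((features(fake_voice_signal(0.5)), 0))

# Minimal logistic-regression loop standing in for "training the network".
w0, w1, b, lr = 0.0, 0.0, 0.0, 0.1
for _ in range(200):
    for x, y in data:
        z = w0 * x[0] + w1 * x[1] + b
        p = 1 / (1 + math.exp(-z))
        g = p - y
        w0 -= lr * g * x[0]
        w1 -= lr * g * x[1]
        b -= lr * g
```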
  • Patent number: 11276434
    Abstract: Systems and methods for generating individualized content trailers. Content such as a video is divided into segments each representing a set of common features. With reference to a set of stored user preferences, certain segments are selected as aligning with the user's interests. Each selected segment may then be assigned a label corresponding to the plot portion or element to which it belongs. A coherent trailer may then be assembled from the selected segments, ordered according to their plot elements. This allows a user to see not only segments containing subject matter that aligns with their interests, but also a set of such segments arranged to give the user an idea of the plot, and a sense of drama, increasing the likelihood of engagement with the content.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: March 15, 2022
    Assignee: Rovi Guides, Inc.
    Inventors: Jeffry Copps Robert Jose, Mithun Umesh, Sindhuja Chonat Sri
  • Publication number: 20210390954
    Abstract: Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
    Type: Application
    Filed: August 26, 2021
    Publication date: December 16, 2021
    Inventors: Ankur Aher, Sindhuja Chonat Sri, Aman Puniyani, Nishchit Mahajan
  • Publication number: 20210375263
    Abstract: A transcription of a query for content discovery is generated, and a context of the query is identified, as well as a first plurality of candidate entities to which the query refers. A search is performed based on the context of the query and the first plurality of candidate entities, and results are generated for output. A transcription of a second voice query is then generated, and it is determined whether the second transcription includes a trigger term indicating a corrective query. If so, the context of the first query is retrieved. A second term of the second query similar to a term of the first query is identified, and a second plurality of candidate entities to which the second term refers is determined. A second search is performed based on the second plurality of candidates and the context, and new search results are generated for output.
    Type: Application
    Filed: June 1, 2020
    Publication date: December 2, 2021
    Inventors: Jeffry Copps Robert Jose, Sindhuja Chonat Sri
  • Patent number: 11133005
    Abstract: Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: September 28, 2021
    Assignee: Rovi Guides, Inc.
    Inventors: Ankur Aher, Sindhuja Chonat Sri, Aman Puniyani, Nishchit Mahajan
  • Publication number: 20200342859
    Abstract: Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
    Type: Application
    Filed: April 29, 2019
    Publication date: October 29, 2020
    Inventors: Ankur Aher, Sindhuja Chonat Sri, Aman Puniyani, Nishchit Mahajan