Patents by Inventor Sindhuja Chonat Sri
Sindhuja Chonat Sri has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11961507
Abstract: A transcription of a query for content discovery is generated, and a context of the query is identified, as well as a first plurality of candidate entities to which the query refers. A search is performed based on the context of the query and the first plurality of candidate entities, and results are generated for output. A transcription of a second voice query is then generated, and it is determined whether the second transcription includes a trigger term indicating a corrective query. If so, the context of the first query is retrieved. A second term of the second query similar to a term of the first query is identified, and a second plurality of candidate entities to which the second term refers is determined. A second search is performed based on the second plurality of candidates and the context, and new search results are generated for output.
Type: Grant
Filed: March 2, 2023
Date of Patent: April 16, 2024
Assignee: Rovi Guides, Inc.
Inventors: Jeffry Copps Robert Jose, Sindhuja Chonat Sri
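The corrective-query flow above can be sketched in a few lines. This is a minimal illustration, not the patented method: the trigger-token list, the comma stripping, and the use of `difflib.SequenceMatcher` as the term-similarity measure are all assumptions made for the example.

```python
from difflib import SequenceMatcher

# Illustrative trigger tokens; the abstract does not enumerate the actual list.
TRIGGER_TOKENS = {"no", "meant", "instead", "actually"}

def is_corrective(query: str) -> bool:
    """Return True if the transcription contains a trigger token
    indicating a corrective query."""
    return bool(TRIGGER_TOKENS & set(query.lower().replace(",", "").split()))

def replaced_term(first_query: str, second_query: str) -> tuple[str, str]:
    """Find the term of the second query most similar (but not identical)
    to a term of the first query -- the likely correction target."""
    old_terms = first_query.lower().split()
    new_terms = [t for t in second_query.lower().replace(",", "").split()
                 if t not in TRIGGER_TOKENS]
    best = ("", "", 0.0)
    for old in old_terms:
        for new in new_terms:
            if old == new:
                continue  # unchanged terms carry the context forward
            score = SequenceMatcher(None, old, new).ratio()
            if score > best[2]:
                best = (old, new, score)
    return best[0], best[1]

# "Tom Holland" was misrecognized or unwanted; the user corrects to "Tom Hanks".
old, new = replaced_term("show me movies with tom holland",
                         "no, I meant Tom Hanks")
```

With the corrected term identified, a second search could then be issued against the retained context of the first query (here, movies) with the new candidate entity.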
-
Publication number: 20240005923
Abstract: Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
Type: Application
Filed: September 14, 2023
Publication date: January 4, 2024
Inventors: Ankur Aher, Sindhuja Chonat Sri, Aman Puniyani, Nishchit Mahajan
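The signature-matching step described in this family of filings can be illustrated with a toy model. This sketch assumes an audio signature can be reduced to a fixed-length feature vector and uses cosine similarity with a threshold as the match test; the quotation database, the vector contents, and the threshold are all hypothetical, and real systems would use richer acoustic fingerprints.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical database: transcribed string -> (content item, audio
# signature of the line as spoken within that content item).
QUOTATION_DB = {
    "i'll be back": ("The Terminator", [0.9, 0.2, 0.4]),
}

def disambiguate(query_string, query_signature, threshold=0.95):
    """If the transcription matches a known quotation AND the query's audio
    signature matches how the line is spoken in the content item, return an
    identifier of that content item; otherwise return None."""
    entry = QUOTATION_DB.get(query_string.lower())
    if entry is None:
        return None  # string does not match any stored quotation
    title, stored_signature = entry
    if cosine(query_signature, stored_signature) >= threshold:
        return title  # user mimicked the delivery: treat as a quotation search
    return None  # same words, different delivery: likely a plain command
```

A query spoken in imitation of the line (a signature close to the stored one) resolves to the content item, while the same words spoken plainly fall through to ordinary query handling.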
-
Patent number: 11790915
Abstract: Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
Type: Grant
Filed: March 7, 2023
Date of Patent: October 17, 2023
Assignee: Rovi Guides, Inc.
Inventors: Ankur Aher, Sindhuja Chonat Sri, Aman Puniyani, Nishchit Mahajan
-
Publication number: 20230206920
Abstract: Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
Type: Application
Filed: March 7, 2023
Publication date: June 29, 2023
Inventors: Ankur Aher, Sindhuja Chonat Sri, Aman Puniyani, Nishchit Mahajan
-
Publication number: 20230206904
Abstract: A transcription of a query for content discovery is generated, and a context of the query is identified, as well as a first plurality of candidate entities to which the query refers. A search is performed based on the context of the query and the first plurality of candidate entities, and results are generated for output. A transcription of a second voice query is then generated, and it is determined whether the second transcription includes a trigger term indicating a corrective query. If so, the context of the first query is retrieved. A second term of the second query similar to a term of the first query is identified, and a second plurality of candidate entities to which the second term refers is determined. A second search is performed based on the second plurality of candidates and the context, and new search results are generated for output.
Type: Application
Filed: March 2, 2023
Publication date: June 29, 2023
Inventors: Jeffry Copps Robert Jose, Sindhuja Chonat Sri
-
Patent number: 11626113
Abstract: Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
Type: Grant
Filed: August 26, 2021
Date of Patent: April 11, 2023
Assignee: Rovi Guides, Inc.
Inventors: Ankur Aher, Sindhuja Chonat Sri, Aman Puniyani, Nishchit Mahajan
-
Patent number: 11620982
Abstract: A transcription of a query for content discovery is generated, and a context of the query is identified, as well as a first plurality of candidate entities to which the query refers. A search is performed based on the context of the query and the first plurality of candidate entities, and results are generated for output. A transcription of a second voice query is then generated, and it is determined whether the second transcription includes a trigger term indicating a corrective query. If so, the context of the first query is retrieved. A second term of the second query similar to a term of the first query is identified, and a second plurality of candidate entities to which the second term refers is determined. A second search is performed based on the second plurality of candidates and the context, and new search results are generated for output.
Type: Grant
Filed: June 1, 2020
Date of Patent: April 4, 2023
Assignee: ROVI GUIDES, INC.
Inventors: Jeffry Copps Robert Jose, Sindhuja Chonat Sri
-
Publication number: 20220157348
Abstract: Systems and methods for generating individualized content trailers. Content such as a video is divided into segments each representing a set of common features. With reference to a set of stored user preferences, certain segments are selected as aligning with the user's interests. Each selected segment may then be assigned a label corresponding to the plot portion or element to which it belongs. A coherent trailer may then be assembled from the selected segments, ordered according to their plot elements. This allows a user to see not only segments containing subject matter that aligns with their interests, but also a set of such segments arranged to give the user an idea of the plot, and a sense of drama, increasing the likelihood of engagement with the content.
Type: Application
Filed: February 2, 2022
Publication date: May 19, 2022
Inventors: Jeffry Copps Robert Jose, Mithun Umesh, Sindhuja Chonat Sri
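The select-then-order pipeline in this abstract can be sketched as follows. This is an illustrative simplification: the five-act plot-element labels, the feature tags, and the "at least one shared feature" selection rule are assumptions for the example, not details from the filing.

```python
from dataclasses import dataclass

# An assumed canonical plot-element order; the abstract does not fix the labels.
PLOT_ORDER = ["setup", "rising_action", "climax", "falling_action", "resolution"]

@dataclass
class Segment:
    start: float          # seconds into the content item
    features: set         # e.g. {"car_chase", "dialogue"}
    plot_element: str     # label for the plot portion the segment belongs to

def build_trailer(segments, user_preferences, min_overlap=1):
    """Select segments whose features align with the stored user preferences,
    then order them by plot element so the trailer keeps a coherent arc."""
    selected = [s for s in segments
                if len(s.features & user_preferences) >= min_overlap]
    return sorted(selected, key=lambda s: PLOT_ORDER.index(s.plot_element))

# A climax segment appearing late in the film still follows the setup segment
# in the assembled trailer, because ordering is by plot element, not timestamp.
trailer = build_trailer(
    [Segment(3000.0, {"explosion"}, "climax"),
     Segment(10.0, {"car_chase"}, "setup"),
     Segment(4000.0, {"romance"}, "resolution")],
    user_preferences={"car_chase", "explosion"},
)
```

Note that the romance segment is dropped entirely: selection by preference overlap runs before plot ordering, so only interest-aligned segments reach the assembly step.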
-
Publication number: 20220148600
Abstract: Methods and systems are disclosed herein for training a network to detect mimicked voice input, so that it can be determined whether a voice input signal is a mimicked voice signal. First voice data is received. The first voice data comprises at least a voice signal of a first individual and another voice signal. The voice signal of the first individual and at least one other voice signal is combined to create a composite voice signal. Second voice data is received. The second voice data comprises at least a voice signal of the first individual. The network is trained using at least the composite voice signal and the second voice data to determine whether a voice input signal is a mimicked voice input signal.
Type: Application
Filed: November 11, 2020
Publication date: May 12, 2022
Inventors: Jeffry Copps Robert Jose, Sindhuja Chonat Sri, Mithun Umesh
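The data-preparation step this abstract describes can be sketched as building a labeled training set. The weighted-sum mixing is a hypothetical stand-in for whatever combination the method actually uses, and the 1/0 labels (composite = mimic-like, genuine recording = authentic) are an assumed encoding; the network training itself is omitted.

```python
def mix(signal_a, signal_b, weight=0.5):
    """Combine two voice signals into a composite voice signal.
    A simple weighted sum over samples; purely illustrative."""
    n = min(len(signal_a), len(signal_b))
    return [weight * a + (1 - weight) * b
            for a, b in zip(signal_a[:n], signal_b[:n])]

def build_training_set(first_voice, other_voices, genuine_samples):
    """Pair each composite signal with label 1 (mimic-like) and each
    genuine recording of the first individual with label 0, yielding
    (signal, label) examples for the mimicry-detection network."""
    data = [(mix(first_voice, other), 1) for other in other_voices]
    data += [(sample, 0) for sample in genuine_samples]
    return data

# Toy signals: one individual's voice mixed with two other voices,
# plus one genuine recording as the negative class.
dataset = build_training_set(
    first_voice=[1.0, 1.0],
    other_voices=[[0.0, 0.0], [2.0, 2.0]],
    genuine_samples=[[1.0, 1.0]],
)
```

The resulting `(signal, label)` pairs could then be fed to any binary classifier; the composites approximate what a mimic of the first individual's voice might sound like without needing real impersonation recordings.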
-
Patent number: 11276434
Abstract: Systems and methods for generating individualized content trailers. Content such as a video is divided into segments each representing a set of common features. With reference to a set of stored user preferences, certain segments are selected as aligning with the user's interests. Each selected segment may then be assigned a label corresponding to the plot portion or element to which it belongs. A coherent trailer may then be assembled from the selected segments, ordered according to their plot elements. This allows a user to see not only segments containing subject matter that aligns with their interests, but also a set of such segments arranged to give the user an idea of the plot, and a sense of drama, increasing the likelihood of engagement with the content.
Type: Grant
Filed: November 17, 2020
Date of Patent: March 15, 2022
Assignee: ROVI GUIDES, INC.
Inventors: Jeffry Copps Robert Jose, Mithun Umesh, Sindhuja Chonat Sri
-
Publication number: 20210390954
Abstract: Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
Type: Application
Filed: August 26, 2021
Publication date: December 16, 2021
Inventors: Ankur Aher, Sindhuja Chonat Sri, Aman Puniyani, Nishchit Mahajan
-
Publication number: 20210375263
Abstract: A transcription of a query for content discovery is generated, and a context of the query is identified, as well as a first plurality of candidate entities to which the query refers. A search is performed based on the context of the query and the first plurality of candidate entities, and results are generated for output. A transcription of a second voice query is then generated, and it is determined whether the second transcription includes a trigger term indicating a corrective query. If so, the context of the first query is retrieved. A second term of the second query similar to a term of the first query is identified, and a second plurality of candidate entities to which the second term refers is determined. A second search is performed based on the second plurality of candidates and the context, and new search results are generated for output.
Type: Application
Filed: June 1, 2020
Publication date: December 2, 2021
Inventors: Jeffry Copps Robert Jose, Sindhuja Chonat Sri
-
Patent number: 11133005
Abstract: Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
Type: Grant
Filed: April 29, 2019
Date of Patent: September 28, 2021
Assignee: Rovi Guides, Inc.
Inventors: Ankur Aher, Sindhuja Chonat Sri, Aman Puniyani, Nishchit Mahajan
-
Publication number: 20200342859
Abstract: Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
Type: Application
Filed: April 29, 2019
Publication date: October 29, 2020
Inventors: Ankur Aher, Sindhuja Chonat Sri, Aman Puniyani, Nishchit Mahajan