Patents by Inventor Abhinav Sethy
Abhinav Sethy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11527237
Abstract: Techniques for recommending a skill experience to a user after a user-system dialog session has ended are described. Upon a dialog session ending, the system uses a first machine learning model to determine potential intents to recommend to a user. The system then uses a second machine learning model to determine a particular skill and intent to recommend. The system then prompts the user to accept the recommended skill and intent. If the user accepts, the system calls the recommended skill to execute. As part of calling the skill, the system sends to the skill at least one entity provided in a natural language user input of the ended dialog session. This enables the skill to skip welcome prompts and initiate processing to output a response based on the intent and the at least one entity of the ended dialog session.
Type: Grant
Filed: September 18, 2020
Date of Patent: December 13, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Ruhi Sarikaya, Hung Tuan Pham, Savas Parastatidis, Dean Curtis, Pushpendre Rastogi, Nitin Ashok Jain, John Arland Nave, Abhinav Sethy, Arpit Gupta, Mayank Kumar, Nakul Dahiwade, Arshdeep Singh, Nikhil Reddy Kortha, Rohit Prasad
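A minimal Python sketch of the two-stage recommendation flow this abstract describes, with simple lookup functions standing in for the two machine learning models; the skill names, intents, entities, and scores are hypothetical placeholders, not details from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class EndedSession:
    intent: str                                   # intent of the dialog session that just ended
    entities: dict = field(default_factory=dict)  # e.g. {"city": "Seattle"}

def model_one_candidate_intents(session: EndedSession) -> list[str]:
    """Stand-in for the first model: propose intents worth recommending after this session."""
    related = {"GetWeather": ["GetTrafficReport", "PlayFlashBriefing"]}
    return related.get(session.intent, [])

def model_two_pick_skill_intent(session: EndedSession, candidates: list[str]):
    """Stand-in for the second model: choose one (skill, intent) pair from the candidates."""
    skill_for_intent = {"GetTrafficReport": "TrafficSkill",
                        "PlayFlashBriefing": "NewsSkill"}
    scored = [(0.9 if c == "GetTrafficReport" else 0.5, c) for c in candidates]
    if not scored:
        return None
    _, best = max(scored)
    return skill_for_intent[best], best

def recommend_and_invoke(session: EndedSession, user_accepts: bool) -> str:
    candidates = model_one_candidate_intents(session)
    choice = model_two_pick_skill_intent(session, candidates)
    if choice is None:
        return "no recommendation"
    skill, intent = choice
    if not user_accepts:
        return f"user declined {skill}/{intent}"
    # Pass the ended session's entities so the skill can skip its welcome
    # prompt and respond directly.
    return f"{skill} handling {intent} with carried-over entities {session.entities}"

print(recommend_and_invoke(EndedSession("GetWeather", {"city": "Seattle"}), True))
```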
-
Publication number: 20220059086
Abstract: Techniques for decreasing (or eliminating) the possibility of a skill performing an action that is not responsive to a corresponding user input are described. A system may train one or more machine learning models with respect to user inputs that resulted in incorrect actions being performed by skills and corresponding user inputs that resulted in the correct action being performed. The system may use the trained machine learning model(s) to rewrite user inputs that, if not rewritten, may result in incorrect actions being performed. The system may implement the trained machine learning model(s) with respect to ASR output text data to determine if the ASR output text data corresponds (or substantially corresponds) to previous ASR output text data that resulted in an incorrect action being performed.
Type: Application
Filed: September 2, 2021
Publication date: February 24, 2022
Inventors: Bigyan Rajbhandari, Praveen Kumar Bodigutla, Zhenxiang Zhou, Karen Catelyn Stabile, Chenlei Guo, Abhinav Sethy, Alireza Roshan Ghias, Pragaash Ponnusamy, Kevin Quinn
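As a rough illustration of rewriting ASR output that substantially corresponds to a previously problematic utterance, the sketch below uses a string-similarity match over a toy table of (bad, corrected) pairs; the example phrases and threshold are assumptions, and the trained models described in the abstract are not reproduced here.

```python
import difflib

# Toy "rewrite memory" learned from pairs of
# (ASR text that caused a wrong action, ASR text that worked).
REWRITE_PAIRS = {
    "play maj and dragons": "play imagine dragons",
    "turn of the lights": "turn off the lights",
}

def rewrite_if_needed(asr_text: str, threshold: float = 0.85) -> str:
    """Rewrite ASR output if it substantially matches a known-bad utterance."""
    best_match, best_score = None, 0.0
    for bad, good in REWRITE_PAIRS.items():
        score = difflib.SequenceMatcher(None, asr_text, bad).ratio()
        if score > best_score:
            best_match, best_score = good, score
    return best_match if best_match and best_score >= threshold else asr_text

print(rewrite_if_needed("play maj and dragons"))   # -> "play imagine dragons"
print(rewrite_if_needed("what is the weather"))    # unchanged
```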
-
Patent number: 11151986
Abstract: Techniques for decreasing (or eliminating) the possibility of a skill performing an action that is not responsive to a corresponding user input are described. A system may train one or more machine learning models with respect to user inputs that resulted in incorrect actions being performed by skills and corresponding user inputs that resulted in the correct action being performed. The system may use the trained machine learning model(s) to rewrite user inputs that, if not rewritten, may result in incorrect actions being performed. The system may implement the trained machine learning model(s) with respect to ASR output text data to determine if the ASR output text data corresponds (or substantially corresponds) to previous ASR output text data that resulted in an incorrect action being performed.
Type: Grant
Filed: September 21, 2018
Date of Patent: October 19, 2021
Assignee: Amazon Technologies, Inc.
Inventors: Bigyan Rajbhandari, Praveen Kumar Bodigutla, Zhenxiang Zhou, Karen Catelyn Stabile, Chenlei Guo, Abhinav Sethy, Alireza Roshan Ghias, Pragaash Ponnusamy, Kevin Quinn
-
Patent number: 11145308
Abstract: Symbol sequences are estimated using a computer-implemented method including detecting one or more candidates of a target symbol sequence from speech-to-text data, extracting a related portion of each candidate from the speech-to-text data, detecting repetition of at least a partial sequence of each candidate within the related portion of the corresponding candidate, labeling the detected repetition with a repetition indication, and estimating whether each candidate is the target symbol sequence, using the corresponding related portion including the repetition indication of each of the candidates.
Type: Grant
Filed: September 20, 2019
Date of Patent: October 12, 2021
Assignee: International Business Machines Corporation
Inventors: Kenneth W. Church, Gakuto Kurata, Bhuvana Ramabhadran, Abhinav Sethy, Masayuki Suzuki, Ryuki Tachibana
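A toy sketch of the pipeline this abstract claims: detect candidate symbol sequences, extract the related portion, label a repeated partial sequence, and estimate each candidate. The regex, the <REP> tag, and the scoring rule are illustrative assumptions, not the patented method.

```python
import re

def find_candidates(text: str, pattern: str = r"\b\d{4,}\b") -> list[str]:
    """Detect candidate target symbol sequences (e.g. long digit strings)."""
    return re.findall(pattern, text)

def related_portion(text: str, candidate: str, window: int = 40) -> str:
    """Extract the portion of the transcript surrounding the candidate."""
    i = text.find(candidate)
    return text[max(0, i - window): i + len(candidate) + window]

def label_repetitions(portion: str, candidate: str) -> str:
    """Label later occurrences of a partial sequence of the candidate."""
    partial = candidate[:3]
    first = portion.find(candidate)
    tail = portion[first + len(candidate):].replace(partial, f"<REP>{partial}</REP>")
    return portion[: first + len(candidate)] + tail

def estimate(portion_with_labels: str) -> float:
    # Toy estimator: a repeated partial sequence raises confidence that the
    # candidate really is the target symbol sequence.
    return 0.9 if "<REP>" in portion_with_labels else 0.5

transcript = "my number is 5551234 yes that is 555 one two three four"
for cand in find_candidates(transcript):
    portion = label_repetitions(related_portion(transcript, cand), cand)
    print(cand, estimate(portion), portion)
```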
-
Patent number: 11019306
Abstract: A method of combining data streams from fixed audio-visual sensors with data streams from personal mobile devices, including: forming a communication link with at least one of one or more personal mobile devices; receiving at least one of an audio data stream and/or a video data stream from the at least one of the one or more personal mobile devices; determining the quality of the at least one of the audio data stream and/or the video data stream, wherein the audio data stream and/or the video data stream having a quality above a threshold quality is retained; and combining the retained audio data stream and/or the video data stream with the data streams from the fixed audio-visual sensors.
Type: Grant
Filed: January 9, 2019
Date of Patent: May 25, 2021
Assignee: International Business Machines Corporation
Inventors: Stanley Chen, Kenneth W. Church, Vaibhava Goel, Lidia L. Mangu, Etienne Marcheret, Bhuvana Ramabhadran, Laurence P. Sansone, Abhinav Sethy, Samuel Thomas
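A short sketch of the quality-thresholded combination this abstract describes: streams from personal mobile devices are retained only when their quality exceeds a threshold, then merged with the fixed-sensor streams. The Stream fields and the scalar quality score are placeholders for whatever quality measure an implementation would actually use.

```python
from dataclasses import dataclass

@dataclass
class Stream:
    source: str      # "fixed" or "mobile"
    kind: str        # "audio" or "video"
    quality: float   # placeholder quality score in [0, 1]

def combine_streams(fixed: list[Stream], mobile: list[Stream],
                    threshold: float = 0.6) -> list[Stream]:
    """Keep mobile streams above the quality threshold and merge with fixed streams."""
    retained = [s for s in mobile if s.quality > threshold]
    return fixed + retained

fixed = [Stream("fixed", "audio", 0.7), Stream("fixed", "video", 0.8)]
mobile = [Stream("mobile", "audio", 0.9), Stream("mobile", "video", 0.3)]
print([(s.source, s.kind) for s in combine_streams(fixed, mobile)])
```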
-
Patent number: 10692488
Abstract: A computer selects a test set of sentences from among sentences applied to train a whole sentence recurrent neural network language model to estimate the likelihood that each whole sentence processed by natural language processing is correct. The computer generates imposter sentences from among the test set of sentences by substituting one word in each sentence of the test set of sentences. The computer generates, through the whole sentence recurrent neural network language model, a first score for each sentence of the test set of sentences and at least one additional score for each of the imposter sentences. The computer evaluates the accuracy of the natural language processing system in performing sequential classification tasks based on an accuracy value of the first score in reflecting a correct sentence and the at least one additional score in reflecting an incorrect sentence.
Type: Grant
Filed: August 23, 2019
Date of Patent: June 23, 2020
Assignee: International Business Machines Corporation
Inventors: Yinghui Huang, Abhinav Sethy, Kartik Audhkhasi, Bhuvana Ramabhadran
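A simplified sketch of the evaluation loop described here: generate an imposter sentence by substituting one word, score both sentences, and check whether the scorer prefers the original. The vocabulary and the toy bigram scorer stand in for the whole sentence RNN LM; none of these values come from the patent.

```python
import random

VOCAB = ["cat", "stock", "rain", "guitar"]

def toy_sentence_score(sentence: str) -> float:
    """Placeholder for the whole-sentence RNN LM score."""
    common_bigrams = {("the", "cat"), ("cat", "sat"), ("sat", "on")}
    words = sentence.split()
    return sum((a, b) in common_bigrams for a, b in zip(words, words[1:]))

def make_imposter(sentence: str, rng: random.Random) -> str:
    """Create an imposter sentence by substituting one word."""
    words = sentence.split()
    i = rng.randrange(len(words))
    words[i] = rng.choice([w for w in VOCAB if w != words[i]])
    return " ".join(words)

def evaluate(test_sentences: list[str], seed: int = 0) -> float:
    """Fraction of test sentences the scorer ranks above their imposters."""
    rng = random.Random(seed)
    wins = sum(toy_sentence_score(s) > toy_sentence_score(make_imposter(s, rng))
               for s in test_sentences)
    return wins / len(test_sentences)

print(evaluate(["the cat sat on the mat", "the cat sat down"]))
```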
-
Publication number: 20200013393
Abstract: A computer selects a test set of sentences from among sentences applied to train a whole sentence recurrent neural network language model to estimate the likelihood that each whole sentence processed by natural language processing is correct. The computer generates imposter sentences from among the test set of sentences by substituting one word in each sentence of the test set of sentences. The computer generates, through the whole sentence recurrent neural network language model, a first score for each sentence of the test set of sentences and at least one additional score for each of the imposter sentences. The computer evaluates the accuracy of the natural language processing system in performing sequential classification tasks based on an accuracy value of the first score in reflecting a correct sentence and the at least one additional score in reflecting an incorrect sentence.
Type: Application
Filed: August 23, 2019
Publication date: January 9, 2020
Inventors: Yinghui Huang, Abhinav Sethy, Kartik Audhkhasi, Bhuvana Ramabhadran
-
Publication number: 20200013408
Abstract: Symbol sequences are estimated using a computer-implemented method including detecting one or more candidates of a target symbol sequence from speech-to-text data, extracting a related portion of each candidate from the speech-to-text data, detecting repetition of at least a partial sequence of each candidate within the related portion of the corresponding candidate, labeling the detected repetition with a repetition indication, and estimating whether each candidate is the target symbol sequence, using the corresponding related portion including the repetition indication of each of the candidates.
Type: Application
Filed: September 20, 2019
Publication date: January 9, 2020
Inventors: Kenneth W. Church, Gakuto Kurata, Bhuvana Ramabhadran, Abhinav Sethy, Masayuki Suzuki, Ryuki Tachibana
-
Patent number: 10529337
Abstract: Symbol sequences are estimated using a computer-implemented method including detecting one or more candidates of a target symbol sequence from speech-to-text data, extracting a related portion of each candidate from the speech-to-text data, detecting repetition of at least a partial sequence of each candidate within the related portion of the corresponding candidate, labeling the detected repetition with a repetition indication, and estimating whether each candidate is the target symbol sequence, using the corresponding related portion including the repetition indication of each of the candidates.
Type: Grant
Filed: January 7, 2019
Date of Patent: January 7, 2020
Assignee: International Business Machines Corporation
Inventors: Kenneth W. Church, Gakuto Kurata, Bhuvana Ramabhadran, Abhinav Sethy, Masayuki Suzuki, Ryuki Tachibana
-
Publication number: 20190318732
Abstract: A whole sentence recurrent neural network (RNN) language model (LM) is provided for estimating the likelihood that each whole sentence processed by natural language processing is correct. A noise contrastive estimation sampler is applied against at least one entire sentence from a corpus of multiple sentences to generate at least one incorrect sentence. The whole sentence RNN LM is trained, using the at least one entire sentence from the corpus and the at least one incorrect sentence, to distinguish the at least one entire sentence as correct. The whole sentence recurrent neural network language model is applied to estimate the likelihood that each whole sentence processed by natural language processing is correct.
Type: Application
Filed: April 16, 2018
Publication date: October 17, 2019
Inventors: Yinghui Huang, Abhinav Sethy, Kartik Audhkhasi, Bhuvana Ramabhadran
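For readers unfamiliar with noise contrastive estimation (NCE), the sketch below shows the standard per-sentence NCE loss that such training typically minimizes, with the model score and noise log-probabilities passed in as plain numbers. The loss shape is standard NCE; the example scores and the scoring function they would come from are assumptions, not details from the patent.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def nce_loss(model_score_real: float, noise_logprob_real: float,
             model_scores_noise: list[float], noise_logprobs_noise: list[float],
             k: int) -> float:
    """Standard NCE loss for one real sentence and k sampled noise sentences.

    The model should assign the real sentence a higher score than the noise
    distribution does, and assign sampled noise sentences lower scores.
    """
    log_k = math.log(k)
    loss = -math.log(sigmoid(model_score_real - noise_logprob_real - log_k))
    for s, q in zip(model_scores_noise, noise_logprobs_noise):
        loss -= math.log(sigmoid(-(s - q - log_k)))
    return loss

# Example: one real sentence, two noise sentences sampled from a noise model.
print(nce_loss(2.0, -5.0, [-3.0, -4.0], [-5.0, -5.0], k=2))
```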
-
Patent number: 10431210
Abstract: A whole sentence recurrent neural network (RNN) language model (LM) is provided for estimating the likelihood that each whole sentence processed by natural language processing is correct. A noise contrastive estimation sampler is applied against at least one entire sentence from a corpus of multiple sentences to generate at least one incorrect sentence. The whole sentence RNN LM is trained, using the at least one entire sentence from the corpus and the at least one incorrect sentence, to distinguish the at least one entire sentence as correct. The whole sentence recurrent neural network language model is applied to estimate the likelihood that each whole sentence processed by natural language processing is correct.
Type: Grant
Filed: April 16, 2018
Date of Patent: October 1, 2019
Assignee: International Business Machines Corporation
Inventors: Yinghui Huang, Abhinav Sethy, Kartik Audhkhasi, Bhuvana Ramabhadran
-
Publication number: 20190149769
Abstract: A method of combining data streams from fixed audio-visual sensors with data streams from personal mobile devices, including: forming a communication link with at least one of one or more personal mobile devices; receiving at least one of an audio data stream and/or a video data stream from the at least one of the one or more personal mobile devices; determining the quality of the at least one of the audio data stream and/or the video data stream, wherein the audio data stream and/or the video data stream having a quality above a threshold quality is retained; and combining the retained audio data stream and/or the video data stream with the data streams from the fixed audio-visual sensors.
Type: Application
Filed: January 9, 2019
Publication date: May 16, 2019
Inventors: Stanley Chen, Kenneth W. Church, Vaibhava Goel, Lidia L. Mangu, Etienne Marcheret, Bhuvana Ramabhadran, Laurence P. Sansone, Abhinav Sethy, Samuel Thomas
-
Publication number: 20190139550
Abstract: Symbol sequences are estimated using a computer-implemented method including detecting one or more candidates of a target symbol sequence from speech-to-text data, extracting a related portion of each candidate from the speech-to-text data, detecting repetition of at least a partial sequence of each candidate within the related portion of the corresponding candidate, labeling the detected repetition with a repetition indication, and estimating whether each candidate is the target symbol sequence, using the corresponding related portion including the repetition indication of each of the candidates.
Type: Application
Filed: January 7, 2019
Publication date: May 9, 2019
Inventors: Kenneth W. Church, Gakuto Kurata, Bhuvana Ramabhadran, Abhinav Sethy, Masayuki Suzuki, Ryuki Tachibana
-
Patent number: 10230922
Abstract: A method of combining data streams from fixed audio-visual sensors with data streams from personal mobile devices, including: forming a communication link with at least one of one or more personal mobile devices; receiving at least one of an audio data stream and/or a video data stream from the at least one of the one or more personal mobile devices; determining the quality of the at least one of the audio data stream and/or the video data stream, wherein the audio data stream and/or the video data stream having a quality above a threshold quality is retained; and combining the retained audio data stream and/or the video data stream with the data streams from the fixed audio-visual sensors.
Type: Grant
Filed: October 2, 2017
Date of Patent: March 12, 2019
Assignee: International Business Machines Corporation
Inventors: Stanley Chen, Kenneth W. Church, Vaibhava Goel, Lidia L. Mangu, Etienne Marcheret, Bhuvana Ramabhadran, Laurence P. Sansone, Abhinav Sethy, Samuel Thomas
-
Patent number: 10229685
Abstract: Symbol sequences are estimated using a computer-implemented method including detecting one or more candidates of a target symbol sequence from speech-to-text data, extracting a related portion of each candidate from the speech-to-text data, detecting repetition of at least a partial sequence of each candidate within the related portion of the corresponding candidate, labeling the detected repetition with a repetition indication, and estimating whether each candidate is the target symbol sequence, using the corresponding related portion including the repetition indication of each of the candidates.
Type: Grant
Filed: January 18, 2017
Date of Patent: March 12, 2019
Assignee: International Business Machines Corporation
Inventors: Kenneth W. Church, Gakuto Kurata, Bhuvana Ramabhadran, Abhinav Sethy, Masayuki Suzuki, Ryuki Tachibana
-
Publication number: 20180204567
Abstract: Symbol sequences are estimated using a computer-implemented method including detecting one or more candidates of a target symbol sequence from speech-to-text data, extracting a related portion of each candidate from the speech-to-text data, detecting repetition of at least a partial sequence of each candidate within the related portion of the corresponding candidate, labeling the detected repetition with a repetition indication, and estimating whether each candidate is the target symbol sequence, using the corresponding related portion including the repetition indication of each of the candidates.
Type: Application
Filed: January 18, 2017
Publication date: July 19, 2018
Inventors: Kenneth W. Church, Gakuto Kurata, Bhuvana Ramabhadran, Abhinav Sethy, Masayuki Suzuki, Ryuki Tachibana
-
Patent number: 10019438
Abstract: A mechanism is provided in a data processing system for external word embedding neural network language models. The mechanism configures the data processing system with an external word embedding neural network language model that accepts as input a sequence of words and predicts a current word based on the sequence of words. The external word embedding neural network language model combines an external embedding matrix with a history word embedding matrix and a prediction word embedding matrix of the external word embedding neural network language model. The mechanism receives a sequence of input words by the data processing system. The mechanism applies a plurality of previous words in the sequence of input words as inputs to the external word embedding neural network language model. The external word embedding neural network language model generates a predicted current word based on the plurality of previous words.
Type: Grant
Filed: March 18, 2016
Date of Patent: July 10, 2018
Assignee: International Business Machines Corporation
Inventors: Kartik Audhkhasi, Bhuvana Ramabhadran, Abhinav Sethy
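A toy NumPy sketch of the general idea of combining an external (pre-trained) embedding matrix with a model's history and prediction embedding matrices, here by concatenation. The dimensions, the averaging of history embeddings, and the single softmax layer are illustrative assumptions, not the patented architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<s>", "the", "cat", "sat", "</s>"]
V, d_model, d_ext = len(vocab), 8, 4

E_hist = rng.normal(size=(V, d_model))   # model's history word embedding matrix
E_pred = rng.normal(size=(V, d_model))   # model's prediction word embedding matrix
E_ext = rng.normal(size=(V, d_ext))      # external embeddings (e.g. pre-trained word vectors)

# Combine the external embeddings with both model matrices by concatenation.
H = np.concatenate([E_hist, E_ext], axis=1)   # (V, d_model + d_ext)
P = np.concatenate([E_pred, E_ext], axis=1)   # (V, d_model + d_ext)

def predict_next(history_ids: list[int]) -> int:
    """Score every vocabulary word against the averaged history representation."""
    h = H[history_ids].mean(axis=0)
    logits = P @ h
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(probs.argmax())

history = [vocab.index("<s>"), vocab.index("the")]
print("predicted:", vocab[predict_next(history)])
```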
-
Patent number: 9934778
Abstract: Techniques are provided for conversion of non-back-off language models for use in speech decoders. For example, an apparatus is configured to convert a non-back-off language model to a back-off language model. The converted back-off language model is pruned and is usable for decoding speech.
Type: Grant
Filed: August 1, 2016
Date of Patent: April 3, 2018
Assignee: International Business Machines Corporation
Inventors: Ebru Arisoy, Bhuvana Ramabhadran, Abhinav Sethy, Stanley Chen
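A hedged sketch of one way to tabulate an arbitrary (non-back-off) scoring function into an explicit back-off bigram table with pruning; real conversions to, e.g., ARPA format involve considerably more bookkeeping, and every function, vocabulary entry, and threshold below is an assumption for illustration only.

```python
import math
from collections import defaultdict

VOCAB = ["the", "cat", "sat", "mat"]

def nonbackoff_prob(word: str, history: str) -> float:
    """Stand-in for querying the non-back-off (e.g. neural) language model."""
    boost = {("the", "cat"): 0.5, ("cat", "sat"): 0.6}
    base = 1.0 / len(VOCAB)
    p = boost.get((history, word), base)
    z = sum(boost.get((history, w), base) for w in VOCAB)
    return p / z

def build_backoff_model(histories, keep_threshold: float = 0.25):
    """Tabulate explicit bigrams, prune ones close to the unigram estimate,
    and set per-history back-off weights so probabilities still sum to one."""
    unigram = {w: 1.0 / len(VOCAB) for w in VOCAB}
    bigram, backoff = defaultdict(dict), {}
    for h in histories:
        for w in VOCAB:
            p = nonbackoff_prob(w, h)
            # Keep only bigrams whose log-probability differs enough from
            # the unigram; the rest are pruned and served via back-off.
            if abs(math.log(p) - math.log(unigram[w])) > keep_threshold:
                bigram[h][w] = p
        kept_mass = sum(bigram[h].values())
        backed_mass = sum(unigram[w] for w in VOCAB if w not in bigram[h])
        backoff[h] = (1.0 - kept_mass) / backed_mass if backed_mass else 0.0
    return unigram, dict(bigram), backoff

unigram, bigram, backoff = build_backoff_model(["the", "cat"])
print(bigram["the"], backoff["the"])
```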
-
Patent number: 9912909
Abstract: A method of combining data streams from fixed audio-visual sensors with data streams from personal mobile devices, including: forming a communication link with at least one of one or more personal mobile devices; receiving at least one of an audio data stream and/or a video data stream from the at least one of the one or more personal mobile devices; determining the quality of the at least one of the audio data stream and/or the video data stream, wherein the audio data stream and/or the video data stream having a quality above a threshold quality is retained; and combining the retained audio data stream and/or the video data stream with the data streams from the fixed audio-visual sensors.
Type: Grant
Filed: December 6, 2016
Date of Patent: March 6, 2018
Assignee: International Business Machines Corporation
Inventors: Stanley Chen, Kenneth W. Church, Vaibhava Goel, Lidia L. Mangu, Etienne Marcheret, Bhuvana Ramabhadran, Laurence P. Sansone, Abhinav Sethy, Samuel Thomas
-
Publication number: 20180027213
Abstract: A method of combining data streams from fixed audio-visual sensors with data streams from personal mobile devices, including: forming a communication link with at least one of one or more personal mobile devices; receiving at least one of an audio data stream and/or a video data stream from the at least one of the one or more personal mobile devices; determining the quality of the at least one of the audio data stream and/or the video data stream, wherein the audio data stream and/or the video data stream having a quality above a threshold quality is retained; and combining the retained audio data stream and/or the video data stream with the data streams from the fixed audio-visual sensors.
Type: Application
Filed: October 2, 2017
Publication date: January 25, 2018
Inventors: Stanley Chen, Kenneth W. Church, Vaibhava Goel, Lidia L. Mangu, Etienne Marcheret, Bhuvana Ramabhadran, Laurence P. Sansone, Abhinav Sethy, Samuel Thomas