Patents by Inventor Melvin Jose Johnson Premkumar

Melvin Jose Johnson Premkumar has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11942082
    Abstract: Techniques described herein relate to facilitating end-to-end multilingual communications with automated assistants. In various implementations, speech recognition output may be generated based on voice input in a first language. A first language intent may be identified based on the speech recognition output and fulfilled in order to generate a first natural language output candidate in the first language. At least part of the speech recognition output may be translated to a second language to generate an at least partial translation, which may then be used to identify a second language intent that is fulfilled to generate a second natural language output candidate in the second language. Scores may be determined for the first and second natural language output candidates, and based on the scores, a natural language output may be selected for presentation.
    Type: Grant
    Filed: May 26, 2022
    Date of Patent: March 26, 2024
    Assignee: GOOGLE LLC
    Inventors: James Kuczmarski, Vibhor Jain, Amarnag Subramanya, Nimesh Ranjan, Melvin Jose Johnson Premkumar, Vladimir Vuskovic, Luna Dai, Daisuke Ikeda, Nihal Sandeep Balani, Jinna Lei, Mengmeng Niu, Hongjie Chai, Wangqing Yuan
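
A minimal, hypothetical sketch of the candidate-scoring flow summarized in the abstract above: one intent is fulfilled directly in the user's language, a second intent is fulfilled from a translation, and the better-scoring natural language output is presented. None of these function names come from the patent or from a real Google API; they are toy stand-ins so the control flow runs end to end.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    language: str
    score: float

# --- toy stand-ins for the real components (assumed, not from the patent) ---

def recognize_speech(audio: bytes, language: str) -> str:
    return "wie ist das wetter"          # pretend ASR transcript

def translate(text: str, source: str, target: str) -> str:
    return "what is the weather"         # pretend (partial) translation

def parse_intent(text: str, language: str) -> str:
    return "weather_query" if "wetter" in text or "weather" in text else "unknown"

def fulfill_intent(intent: str, output_language: str) -> str:
    replies = {("weather_query", "de"): "Es ist sonnig.",
               ("weather_query", "en"): "It is sunny."}
    return replies.get((intent, output_language), "")

def score_candidate(text: str, intent: str) -> float:
    # e.g. confidence that the candidate actually answers the recognized intent
    return 0.0 if not text or intent == "unknown" else len(text) / 100.0

# --- the flow described in the abstract --------------------------------------

def respond(audio: bytes, first_lang: str, second_lang: str) -> str:
    transcript = recognize_speech(audio, language=first_lang)

    # First-language path: identify and fulfill an intent directly.
    intent_1 = parse_intent(transcript, language=first_lang)
    output_1 = fulfill_intent(intent_1, output_language=first_lang)

    # Second-language path: translate at least part of the transcript,
    # then identify and fulfill an intent in the second language.
    translation = translate(transcript, source=first_lang, target=second_lang)
    intent_2 = parse_intent(translation, language=second_lang)
    output_2 = fulfill_intent(intent_2, output_language=second_lang)

    # Score both natural language output candidates and present the best one.
    candidates = [
        Candidate(output_1, first_lang, score_candidate(output_1, intent_1)),
        Candidate(output_2, second_lang, score_candidate(output_2, intent_2)),
    ]
    return max(candidates, key=lambda c: c.score).text

print(respond(b"...", first_lang="de", second_lang="en"))
```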
  • Patent number: 11915692
    Abstract: Techniques described herein relate to facilitating end-to-end multilingual communications with automated assistants. In various implementations, speech recognition output may be generated based on voice input in a first language. A first language intent may be identified based on the speech recognition output and fulfilled in order to generate a first natural language output candidate in the first language. At least part of the speech recognition output may be translated to a second language to generate an at least partial translation, which may then be used to identify a second language intent that is fulfilled to generate a second natural language output candidate in the second language. Scores may be determined for the first and second natural language output candidates, and based on the scores, a natural language output may be selected for presentation.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: February 27, 2024
    Assignee: GOOGLE LLC
    Inventors: James Kuczmarski, Vibhor Jain, Amarnag Subramanya, Nimesh Ranjan, Melvin Jose Johnson Premkumar, Vladimir Vuskovic, Luna Dai, Daisuke Ikeda, Nihal Sandeep Balani, Jinna Lei, Mengmeng Niu
  • Publication number: 20240020491
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for machine translation using neural networks. In some implementations, a text in one language is translated into a second language using a neural network model. The model can include an encoder neural network comprising a plurality of bidirectional recurrent neural network layers. The encoding vectors are processed using a multi-headed attention module configured to generate multiple attention context vectors for each encoding vector. A decoder neural network generates a sequence of decoder output vectors using the attention context vectors. The decoder output vectors can represent distributions over various language elements of the second language, allowing a translation of the text into the second language to be determined based on the sequence of decoder output vectors.
    Type: Application
    Filed: September 28, 2023
    Publication date: January 18, 2024
    Inventors: Zhifeng Chen, Macduff Richard Hughes, Yonghui Wu, Michael Schuster, Xu Chen, Llion Owen Jones, Niki J. Parmar, George Foster, Orhan Firat, Ankur Bapna, Wolfgang Macherey, Melvin Jose Johnson Premkumar
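
A hypothetical PyTorch sketch of the architecture outlined in the abstract above: a bidirectional recurrent encoder, a multi-headed attention module over the encoder states, and a recurrent decoder that emits distributions over the target vocabulary. Layer sizes and the exact wiring are illustrative assumptions, not the patented configuration.

```python
import torch
import torch.nn as nn

class ToyNMT(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, dim=256, heads=4, layers=2):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, dim)
        self.tgt_embed = nn.Embedding(tgt_vocab, dim)
        # Encoder: stacked bidirectional recurrent layers.
        self.encoder = nn.LSTM(dim, dim // 2, num_layers=layers,
                               bidirectional=True, batch_first=True)
        # Multi-headed attention: several context vectors per step, one per head.
        self.attention = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Decoder: recurrent layers over target embeddings plus attention context.
        self.decoder = nn.LSTM(2 * dim, dim, num_layers=layers, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        enc_states, _ = self.encoder(self.src_embed(src_ids))    # (B, S, dim)
        tgt = self.tgt_embed(tgt_ids)                             # (B, T, dim)
        # Attend from each (teacher-forced) decoder input over the encoder states.
        context, _ = self.attention(tgt, enc_states, enc_states)  # (B, T, dim)
        dec_out, _ = self.decoder(torch.cat([tgt, context], dim=-1))
        # Decoder output vectors -> distributions over target-language tokens.
        return self.out(dec_out).log_softmax(dim=-1)              # (B, T, tgt_vocab)

model = ToyNMT(src_vocab=1000, tgt_vocab=1000)
src = torch.randint(0, 1000, (2, 7))   # batch of source token ids
tgt = torch.randint(0, 1000, (2, 5))   # batch of target token ids (teacher forcing)
print(model(src, tgt).shape)           # torch.Size([2, 5, 1000])
```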
  • Patent number: 11875788
    Abstract: Techniques described herein relate to facilitating end-to-end multilingual communications with automated assistants. In various implementations, speech recognition output may be generated based on voice input in a first language. A first language intent may be identified based on the speech recognition output and fulfilled in order to generate a first natural language output candidate in the first language. At least part of the speech recognition output may be translated to a second language to generate an at least partial translation, which may then be used to identify a second language intent that is fulfilled to generate a second natural language output candidate in the second language. Scores may be determined for the first and second natural language output candidates, and based on the scores, a natural language output may be selected for presentation.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: January 16, 2024
    Assignee: GOOGLE LLC
    Inventors: James Kuczmarski, Vibhor Jain, Amarnag Subramanya, Nimesh Ranjan, Melvin Jose Johnson Premkumar, Vladimir Vuskovic, Luna Dai, Daisuke Ikeda, Nihal Sandeep Balani, Jinna Lei, Mengmeng Niu
  • Patent number: 11809834
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for machine translation using neural networks. In some implementations, a text in one language is translated into a second language using a neural network model. The model can include an encoder neural network comprising a plurality of bidirectional recurrent neural network layers. The encoding vectors are processed using a multi-headed attention module configured to generate multiple attention context vectors for each encoding vector. A decoder neural network generates a sequence of decoder output vectors using the attention context vectors. The decoder output vectors can represent distributions over various language elements of the second language, allowing a translation of the text into the second language to be determined based on the sequence of decoder output vectors.
    Type: Grant
    Filed: August 27, 2021
    Date of Patent: November 7, 2023
    Assignee: Google LLC
    Inventors: Zhifeng Chen, Macduff Richard Hughes, Yonghui Wu, Michael Schuster, Xu Chen, Llion Owen Jones, Niki J. Parmar, George Foster, Orhan Firat, Ankur Bapna, Wolfgang Macherey, Melvin Jose Johnson Premkumar
  • Publication number: 20220284198
    Abstract: Techniques described herein relate to facilitating end-to-end multilingual communications with automated assistants. In various implementations, speech recognition output may be generated based on voice input in a first language. A first language intent may be identified based on the speech recognition output and fulfilled in order to generate a first natural language output candidate in the first language. At least part of the speech recognition output may be translated to a second language to generate an at least partial translation, which may then be used to identify a second language intent that is fulfilled to generate a second natural language output candidate in the second language. Scores may be determined for the first and second natural language output candidates, and based on the scores, a natural language output may be selected for presentation.
    Type: Application
    Filed: May 26, 2022
    Publication date: September 8, 2022
    Inventors: James Kuczmarski, Vibhor Jain, Amarnag Subramanya, Nimesh Ranjan, Melvin Jose Johnson Premkumar, Vladimir Vuskovic, Luna Dai, Daisuke Ikeda, Nihal Sandeep Balani, Jinna Lei, Mengmeng Niu, Hongjie Chai, Wangqing Yuan
  • Patent number: 11373049
    Abstract: Training and/or using a multilingual classification neural network model to perform a natural language processing classification task, where the model reuses an encoder portion of a multilingual neural machine translation model. In a variety of implementations, a client device can generate a natural language data stream from a spoken input from a user. The natural language data stream can be applied as input to an encoder portion of the multilingual classification model. The output generated by the encoder portion can be applied as input to a classifier portion of the multilingual classification model. The classifier portion can generate a predicted classification label of the natural language data stream. In many implementations, an output can be generated based on the predicted classification label, and a client device can present the output.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: June 28, 2022
    Assignee: GOOGLE LLC
    Inventors: Melvin Jose Johnson Premkumar, Akiko Eriguchi, Orhan Firat
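
A hypothetical sketch of the idea in the abstract above: the encoder portion of a multilingual NMT model is reused, and only a classifier portion is added for the NLP classification task. The pretrained encoder is represented here by a placeholder module; everything else is an illustrative assumption.

```python
import torch
import torch.nn as nn

class EncoderClassifier(nn.Module):
    def __init__(self, pretrained_encoder: nn.Module, enc_dim: int, num_labels: int):
        super().__init__()
        # Encoder portion reused from a multilingual translation model.
        self.encoder = pretrained_encoder
        # Classifier portion trained for the classification task.
        self.classifier = nn.Sequential(
            nn.Linear(enc_dim, enc_dim), nn.Tanh(), nn.Linear(enc_dim, num_labels))

    def forward(self, token_ids):
        states = self.encoder(token_ids)   # (batch, seq, enc_dim)
        pooled = states.mean(dim=1)        # simple mean pooling over tokens
        return self.classifier(pooled)     # logits over classification labels

# Placeholder "pretrained" encoder so the sketch runs; in practice this would be
# the encoder taken from the multilingual NMT model.
toy_encoder = nn.Sequential(nn.Embedding(1000, 256))
model = EncoderClassifier(toy_encoder, enc_dim=256, num_labels=3)
logits = model(torch.randint(0, 1000, (2, 9)))
print(logits.argmax(dim=-1))               # predicted classification labels
```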
  • Patent number: 11354521
    Abstract: Techniques described herein relate to facilitating end-to-end multilingual communications with automated assistants. In various implementations, speech recognition output may be generated based on voice input in a first language. A first language intent may be identified based on the speech recognition output and fulfilled in order to generate a first natural language output candidate in the first language. At least part of the speech recognition output may be translated to a second language to generate an at least partial translation, which may then be used to identify a second language intent that is fulfilled to generate a second natural language output candidate in the second language. Scores may be determined for the first and second natural language output candidates, and based on the scores, a natural language output may be selected for presentation.
    Type: Grant
    Filed: February 17, 2020
    Date of Patent: June 7, 2022
    Assignee: GOOGLE LLC
    Inventors: James Kuczmarski, Vibhor Jain, Amarnag Subramanya, Nimesh Ranjan, Melvin Jose Johnson Premkumar, Vladimir Vuskovic, Luna Dai, Daisuke Ikeda, Nihal Sandeep Balani, Jinna Lei, Mengmeng Niu, Hongjie Chai, Wangqing Yuan
  • Publication number: 20220083746
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for machine translation using neural networks. In some implementations, a text in one language is translated into a second language using a neural network model. The model can include an encoder neural network comprising a plurality of bidirectional recurrent neural network layers. The encoding vectors are processed using a multi-headed attention module configured to generate multiple attention context vectors for each encoding vector. A decoder neural network generates a sequence of decoder output vectors using the attention context vectors. The decoder output vectors can represent distributions over various language elements of the second language, allowing a translation of the text into the second language to be determined based on the sequence of decoder output vectors.
    Type: Application
    Filed: August 27, 2021
    Publication date: March 17, 2022
    Inventors: Zhifeng Chen, Macduff Richard Hughes, Yonghui Wu, Michael Schuster, Xu Chen, Llion Owen Jones, Niki J. Parmar, George Foster, Orhan Firat, Ankur Bapna, Wolfgang Macherey, Melvin Jose Johnson Premkumar
  • Patent number: 11138392
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for machine translation using neural networks. In some implementations, a text in one language is translated into a second language using a neural network model. The model can include an encoder neural network comprising a plurality of bidirectional recurrent neural network layers. The encoding vectors are processed using a multi-headed attention module configured to generate multiple attention context vectors for each encoding vector. A decoder neural network generates a sequence of decoder output vectors using the attention context vectors. The decoder output vectors can represent distributions over various language elements of the second language, allowing a translation of the text into the second language to be determined based on the sequence of decoder output vectors.
    Type: Grant
    Filed: July 25, 2019
    Date of Patent: October 5, 2021
    Assignee: Google LLC
    Inventors: Zhifeng Chen, Macduff Richard Hughes, Yonghui Wu, Michael Schuster, Xu Chen, Llion Owen Jones, Niki J. Parmar, George Foster, Orhan Firat, Ankur Bapna, Wolfgang Macherey, Melvin Jose Johnson Premkumar
  • Patent number: 11113481
    Abstract: Techniques described herein may serve to increase the language coverage of an automated assistant system, i.e. they may serve to increase the number of queries in one or more non-native languages for which the automated assistant is able to deliver reasonable responses. For example, techniques are described herein for training and utilizing a machine translation model to map a plurality of semantically-related natural language inputs in one language to one or more canonical translations in another language. In various implementations, the canonical translations may be selected and/or optimized for determining an intent of the speaker by the automated assistant, so that one or more responsive actions can be performed based on the speaker's intent. Put another way, the canonical translations may be specifically formatted for indicating the intent of the speaker to the automated assistant.
    Type: Grant
    Filed: May 2, 2019
    Date of Patent: September 7, 2021
    Assignee: GOOGLE LLC
    Inventors: Melvin Jose Johnson Premkumar, Vladimir Vuskovic, James Kuczmarski, Hongjie Chai
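
A minimal, hypothetical sketch of the canonical-translation idea in the abstract above: many semantically related non-native queries are mapped to one canonical form that the assistant's existing intent parser already understands. The lookup table stands in for the trained machine translation model, and all names are placeholders.

```python
CANONICAL_TRANSLATIONS = {
    # several German phrasings, one canonical English form
    "stelle einen wecker für 7 uhr": "set an alarm for 7 am",
    "wecke mich um 7 uhr":           "set an alarm for 7 am",
    "kannst du mich um 7 wecken":    "set an alarm for 7 am",
}

def to_canonical(query: str) -> str:
    # In the described system this mapping is produced by a translation model
    # trained to emit canonical, intent-friendly forms rather than free text.
    return CANONICAL_TRANSLATIONS.get(query.lower(), query)

def parse_intent(canonical_query: str) -> dict:
    # Existing native-language intent parser (toy version).
    if canonical_query.startswith("set an alarm"):
        return {"intent": "create_alarm", "time": "7:00"}
    return {"intent": "unknown"}

for q in CANONICAL_TRANSLATIONS:
    print(q, "->", parse_intent(to_canonical(q)))
```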
  • Publication number: 20210210076
    Abstract: Techniques described herein relate to facilitating end-to-end multilingual communications with automated assistants. In various implementations, speech recognition output may be generated based on voice input in a first language. A first language intent may be identified based on the speech recognition output and fulfilled in order to generate a first natural language output candidate in the first language. At least part of the speech recognition output may be translated to a second language to generate an at least partial translation, which may then be used to identify a second language intent that is fulfilled to generate a second natural language output candidate in the second language. Scores may be determined for the first and second natural language output candidates, and based on the scores, a natural language output may be selected for presentation.
    Type: Application
    Filed: March 24, 2021
    Publication date: July 8, 2021
    Inventors: James Kuczmarski, Vibhor Jain, Amarnag Subramanya, Nimesh Ranjan, Melvin Jose Johnson Premkumar, Vladimir Vuskovic, Luna Dai, Daisuke Ikeda, Nihal Sandeep Balani, Jinna Lei, Mengmeng Niu
  • Patent number: 10984784
    Abstract: Techniques described herein relate to facilitating end-to-end multilingual communications with automated assistants. In various implementations, speech recognition output may be generated based on voice input in a first language. A first language intent may be identified based on the speech recognition output and fulfilled in order to generate a first natural language output candidate in the first language. At least part of the speech recognition output may be translated to a second language to generate an at least partial translation, which may then be used to identify a second language intent that is fulfilled to generate a second natural language output candidate in the second language. Scores may be determined for the first and second natural language output candidates, and based on the scores, a natural language output may be selected for presentation.
    Type: Grant
    Filed: April 16, 2018
    Date of Patent: April 20, 2021
    Assignee: GOOGLE LLC
    Inventors: James Kuczmarski, Vibhor Jain, Amarnag Subramanya, Nimesh Ranjan, Melvin Jose Johnson Premkumar, Vladimir Vuskovic, Luna Dai, Daisuke Ikeda, Nihal Sandeep Balani, Jinna Lei, Mengmeng Niu
  • Publication number: 20210064828
    Abstract: Techniques described herein may serve to increase the language coverage of an automated assistant system, i.e. they may serve to increase the number of queries in one or more non-native languages for which the automated assistant is able to deliver reasonable responses. For example, techniques are described herein for training and utilizing a machine translation model to map a plurality of semantically-related natural language inputs in one language to one or more canonical translations in another language. In various implementations, the canonical translations may be selected and/or optimized for determining an intent of the speaker by the automated assistant, so that one or more responsive actions can be performed based on the speaker's intent. Put another way, the canonical translations may be specifically formatted for indicating the intent of the speaker to the automated assistant.
    Type: Application
    Filed: May 2, 2019
    Publication date: March 4, 2021
    Inventors: Melvin Jose Johnson Premkumar, Vladimir Vuskovic, James Kuczmarski, Hongjie Chai
  • Publication number: 20200410396
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for performing machine learning tasks. One method includes receiving (i) a model input, and (ii) data identifying a first machine learning task to be performed on the model input to generate a first type of model output for the model input; augmenting the model input with an identifier for the first machine learning task to generate an augmented model input; and processing the augmented model input using a machine learning model, wherein the machine learning model has been trained on training data to perform a plurality of machine learning tasks including the first machine learning task, and wherein the machine learning model has been configured through training to process the augmented model input to generate a machine learning model output of the first type for the model input.
    Type: Application
    Filed: July 13, 2020
    Publication date: December 31, 2020
    Inventors: Zhifeng Chen, Michael Schuster, Melvin Jose Johnson Premkumar, Yonghui Wu, Quoc V. Le, Maxim Krikun, Thorsten Brants
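
A hypothetical sketch of the input-augmentation step in the abstract above: the model input is prefixed with an identifier for the requested task, and a single shared model is then run on the augmented input. The token values and the stand-in model are illustrative assumptions (the prefix mirrors the "<2xx>" target-language tokens used in multilingual NMT, but this is not the patented implementation).

```python
TASK_TOKENS = {"translate_en_to_fr": "<2fr>", "translate_en_to_de": "<2de>"}

def augment_input(model_input: list, task: str) -> list:
    # Prepend the task identifier so one model can serve many tasks.
    return [TASK_TOKENS[task]] + model_input

def shared_model(augmented_input: list) -> str:
    # Stand-in for a model trained on many tasks; it dispatches on the task token.
    task_token, tokens = augmented_input[0], augmented_input[1:]
    canned = {("<2fr>", "hello world"): "bonjour le monde",
              ("<2de>", "hello world"): "hallo welt"}
    return canned.get((task_token, " ".join(tokens)), "")

print(shared_model(augment_input(["hello", "world"], "translate_en_to_fr")))
print(shared_model(augment_input(["hello", "world"], "translate_en_to_de")))
```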
  • Publication number: 20200342182
    Abstract: Training and/or using a multilingual classification neural network model to perform a natural language processing classification task, where the model reuses an encoder portion of a multilingual neural machine translation model. In a variety of implementations, a client device can generate a natural language data stream from a spoken input from a user. The natural language data stream can be applied as input to an encoder portion of the multilingual classification model. The output generated by the encoder portion can be applied as input to a classifier portion of the multilingual classification model. The classifier portion can generate a predicted classification label of the natural language data stream. In many implementations, an output can be generated based on the predicted classification label, and a client device can present the output.
    Type: Application
    Filed: August 26, 2019
    Publication date: October 29, 2020
    Inventors: Melvin Jose Johnson Premkumar, Akiko Eriguchi, Orhan Firat
  • Publication number: 20200320984
    Abstract: Techniques described herein relate to facilitating end-to-end multilingual communications with automated assistants. In various implementations, speech recognition output may be generated based on voice input in a first language. A first language intent may be identified based on the speech recognition output and fulfilled in order to generate a first natural language output candidate in the first language. At least part of the speech recognition output may be translated to a second language to generate an at least partial translation, which may then be used to identify a second language intent that is fulfilled to generate a second natural language output candidate in the second language. Scores may be determined for the first and second natural language output candidates, and based on the scores, a natural language output may be selected for presentation.
    Type: Application
    Filed: April 16, 2018
    Publication date: October 8, 2020
    Inventors: James Kuczmarski, Vibhor Jain, Amarnag Subramanya, Nimesh Ranjan, Melvin Jose Johnson Premkumar, Vladimir Vuskovic, Luna Dai, Daisuke Ikeda, Nihal Sandeep Balani, Jinna Lei, Mengmeng Niu
  • Patent number: 10713593
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for performing machine learning tasks. One method includes receiving (i) a model input, and (ii) data identifying a first machine learning task to be performed on the model input to generate a first type of model output for the model input; augmenting the model input with an identifier for the first machine learning task to generate an augmented model input; and processing the augmented model input using a machine learning model, wherein the machine learning model has been trained on training data to perform a plurality of machine learning tasks including the first machine learning task, and wherein the machine learning model has been configured through training to process the augmented model input to generate a machine learning model output of the first type for the model input.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: July 14, 2020
    Assignee: Google LLC
    Inventors: Zhifeng Chen, Michael Schuster, Melvin Jose Johnson Premkumar, Yonghui Wu, Quoc V. Le, Maxim Krikun, Thorsten Brants
  • Publication number: 20200184158
    Abstract: Techniques described herein relate to facilitating end-to-end multilingual communications with automated assistants. In various implementations, speech recognition output may be generated based on voice input in a first language. A first language intent may be identified based on the speech recognition output and fulfilled in order to generate a first natural language output candidate in the first language. At least part of the speech recognition output may be translated to a second language to generate an at least partial translation, which may then be used to identify a second language intent that is fulfilled to generate a second natural language output candidate in the second language. Scores may be determined for the first and second natural language output candidates, and based on the scores, a natural language output may be selected for presentation.
    Type: Application
    Filed: February 17, 2020
    Publication date: June 11, 2020
    Inventors: James Kuczmarski, Vibhor Jain, Amarnag Subramanya, Nimesh Ranjan, Melvin Jose Johnson Premkumar, Vladimir Vuskovic, Luna Dai, Daisuke Ikeda, Nihal Sandeep Balani, Jinna Lei, Mengmeng Niu, Hongjie Chai, Wangqing Yuan
  • Patent number: 10679148
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for performing machine learning tasks. One method includes receiving (i) a model input, and (ii) data identifying a first machine learning task to be performed on the model input to generate a first type of model output for the model input; augmenting the model input with an identifier for the first machine learning task to generate an augmented model input; and processing the augmented model input using a machine learning model. An exemplary system applying implicit bridging for machine learning tasks, as described in this specification, trains a machine learning model to perform certain types of machine learning tasks without requiring explicit training data for the certain types of machine learning tasks to be used during training.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: June 9, 2020
    Assignee: Google LLC
    Inventors: Zhifeng Chen, Michael Schuster, Melvin Jose Johnson Premkumar, Yonghui Wu, Quoc V. Le, Maxim Krikun, Thorsten Brants