Patents by Inventor Yun-hsuan Sung
Yun-hsuan Sung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20240370487
  Abstract: Systems and methods of the present disclosure are directed to a computer-implemented method for machine-learned multimodal search refinement. The method includes obtaining a query image embedding for a query image and a textual query refinement associated with the query image. The method includes processing the query image embedding and the textual query refinement with a machine-learned query refinement model to obtain a refined query image embedding that incorporates the textual query refinement. The method includes evaluating a loss function that evaluates a distance between the refined query image embedding and an embedding for a ground truth image within an image embedding space. The method includes modifying value(s) of parameter(s) of the machine-learned query refinement model based on the loss function.
  Type: Application
  Filed: November 4, 2022
  Publication date: November 7, 2024
  Inventors: Severin Heiniger, Balint Miklos, Yun-Hsuan Sung, Zhen Li, Yinfei Yang, Chao Jia
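For illustration, here is a minimal training-step sketch of the procedure this abstract describes, assuming a simple PyTorch fusion network; the model architecture, dimensions, and the cosine-distance loss are assumptions for the sketch, not details taken from the filing.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryRefinementModel(nn.Module):
    """Illustrative fusion model: combines a query image embedding with a
    text-refinement embedding and outputs a refined image embedding."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, image_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([image_emb, text_emb], dim=-1))

model = QueryRefinementModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Toy batch: precomputed embeddings stand in for the outputs of separate
# image/text encoders (not shown here).
query_image_emb = torch.randn(8, 512)
refinement_text_emb = torch.randn(8, 512)
ground_truth_image_emb = torch.randn(8, 512)

refined = model(query_image_emb, refinement_text_emb)

# Loss on the distance between the refined query embedding and the ground-truth
# image embedding in the shared embedding space (cosine distance here).
loss = (1.0 - F.cosine_similarity(refined, ground_truth_image_emb, dim=-1)).mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()  # update the refinement model's parameter values
```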
- Patent number: 12086720
  Abstract: Systems, methods, and computer readable media related to information retrieval. Some implementations are related to training and/or using a relevance model for information retrieval. The relevance model includes an input neural network model and a subsequent content neural network model. The input neural network model and the subsequent content neural network model can be separate, but trained and/or used cooperatively. The input neural network model and the subsequent content neural network model can be “separate” in that separate inputs are applied to the neural network models, and each of the neural network models is used to generate its own feature vector based on its applied input. A comparison of the feature vectors generated based on the separate network models can then be performed, where the comparison indicates relevance of the input applied to the input neural network model to the separate input applied to the subsequent content neural network model.
  Type: Grant
  Filed: October 15, 2021
  Date of Patent: September 10, 2024
  Assignee: GOOGLE LLC
  Inventors: Brian Strope, Yun-hsuan Sung, Matthew Henderson, Rami Al-Rfou', Raymond Kurzweil
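The dual-model arrangement described here can be sketched roughly as two towers whose feature vectors are compared with a dot product. The tower architectures, dimensions, and the dot-product comparison below are illustrative assumptions rather than the claimed design.

```python
import torch
import torch.nn as nn

class Tower(nn.Module):
    """One of the two separate neural network models; each tower sees only its
    own input and emits a fixed-size feature vector."""
    def __init__(self, in_dim: int, out_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, out_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

input_tower = Tower(in_dim=300)        # encodes the input (e.g. a query)
content_tower = Tower(in_dim=300)      # encodes candidate "subsequent content"

query_features = torch.randn(4, 300)       # placeholder input representations
candidate_features = torch.randn(10, 300)  # placeholder content representations

q_vecs = input_tower(query_features)        # (4, 256) feature vectors
c_vecs = content_tower(candidate_features)  # (10, 256) feature vectors

# Comparing the two sets of feature vectors: a higher dot product indicates
# higher relevance of a candidate to an input.
relevance_scores = q_vecs @ c_vecs.T        # (4, 10)
best_candidate_per_query = relevance_scores.argmax(dim=1)
```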
- Publication number: 20240273294
  Abstract: The technology employs soft knowledge prompts (KPs) to inject relevant world knowledge into language models. This includes training KPs via self-supervised learning on data from one or more knowledge bases. KPs are task independent and can function as an external memory of the language models. KPs may be entity-centric, meaning that each prompt primarily encodes information about one entity from a given knowledge base. A method includes identifying a KP in response to a received input text, concatenating that KP to a sequence of word embeddings of the input text, applying the concatenated information to a trained language model, predicting an object entity name, computing a cross-entropy loss, and updating the identified KP based on the computed cross-entropy loss.
  Type: Application
  Filed: February 9, 2023
  Publication date: August 15, 2024
  Inventors: Siamak Shakeri, Cicero Nogueira dos Santos, Daniel Matthew Cer, Zhe Dong, Jianmo Ni, Yun-Hsuan Sung, John Nham
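The enumerated method steps (identify a KP, concatenate it to the word embeddings, apply the trained language model, compute a cross-entropy loss, update only the identified KP) can be mocked up as below. Every module, dimension, entity name, and the mean-pooling shortcut are placeholders for the sketch, not the patented architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sizes only; the real system uses a pretrained language model and
# entity-centric prompts learned from a knowledge base.
vocab_size, dim, prompt_len = 1000, 64, 4

# One soft knowledge prompt (KP) per entity; each KP is a small block of
# trainable embedding vectors acting as external memory.
knowledge_prompts = {
    "Example_Entity": nn.Parameter(torch.randn(prompt_len, dim) * 0.02),
}

word_embeddings = nn.Embedding(vocab_size, dim)

# Stand-in "trained language model": frozen here, since only the KP is updated.
language_model = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, vocab_size))
for p in list(language_model.parameters()) + list(word_embeddings.parameters()):
    p.requires_grad_(False)

def kp_training_step(entity: str, input_ids: torch.Tensor, target_id: torch.Tensor) -> float:
    kp = knowledge_prompts[entity]                     # 1. identify the KP for the input text
    tokens = word_embeddings(input_ids)                # word embeddings of the input text
    sequence = torch.cat([kp, tokens], dim=0)          # 2. concatenate the KP to the embeddings
    logits = language_model(sequence.mean(dim=0))      # 3. apply the (frozen) language model
    loss = F.cross_entropy(logits.unsqueeze(0),        # 4. cross-entropy against the
                           target_id.unsqueeze(0))     #    object-entity target
    loss.backward()
    with torch.no_grad():                              # 5. update only the identified KP
        kp -= 0.1 * kp.grad
        kp.grad = None
    return loss.item()

kp_training_step("Example_Entity", torch.tensor([5, 42, 7]), torch.tensor(123))
```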
- Publication number: 20240062111
  Abstract: Systems, methods, and computer readable media related to: training an encoder model that can be utilized to determine semantic similarity of a natural language textual string to each of one or more additional natural language textual strings (directly and/or indirectly); and/or using a trained encoder model to determine one or more responsive actions to perform in response to a natural language query. The encoder model is a machine learning model, such as a neural network model. In some implementations of training the encoder model, the encoder model is trained as part of a larger network architecture trained based on one or more tasks that are distinct from a “semantic textual similarity” task for which the encoder model can be used.
  Type: Application
  Filed: November 1, 2023
  Publication date: February 22, 2024
  Inventors: Brian Strope, Yun-Hsuan Sung, Wangqing Yuan
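As a hedged sketch of how such a trained encoder might be used at inference time for the semantic-textual-similarity task, the snippet below encodes two tokenized strings with a placeholder encoder and compares their vectors by cosine similarity; the encoder, tokenization, and ranking step are assumptions, not the trained model from this family of filings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEncoder(nn.Module):
    """Placeholder for the trained encoder model: maps a tokenized string to a
    fixed-size vector. In the described system the encoder is trained inside a
    larger multi-task architecture, then reused on its own."""
    def __init__(self, vocab_size: int = 5000, dim: int = 128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.embed(token_ids.unsqueeze(0)).squeeze(0)

def semantic_similarity(encoder: nn.Module, a_ids: torch.Tensor, b_ids: torch.Tensor) -> float:
    """Cosine similarity between the encodings of two textual strings."""
    with torch.no_grad():
        va, vb = encoder(a_ids), encoder(b_ids)
    return F.cosine_similarity(va, vb, dim=0).item()

encoder = ToyEncoder()
query_ids = torch.tensor([12, 87, 430])    # e.g. a tokenized natural language query
action_ids = torch.tensor([12, 87, 431])   # e.g. a tokenized candidate action description
score = semantic_similarity(encoder, query_ids, action_ids)
# A responsive action could then be chosen by ranking candidates on this score.
print(score)
```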
- Patent number: 11842253
  Abstract: Systems, methods, and computer readable media related to: training an encoder model that can be utilized to determine semantic similarity of a natural language textual string to each of one or more additional natural language textual strings (directly and/or indirectly); and/or using a trained encoder model to determine one or more responsive actions to perform in response to a natural language query. The encoder model is a machine learning model, such as a neural network model. In some implementations of training the encoder model, the encoder model is trained as part of a larger network architecture trained based on one or more tasks that are distinct from a “semantic textual similarity” task for which the encoder model can be used.
  Type: Grant
  Filed: August 17, 2020
  Date of Patent: December 12, 2023
  Assignee: GOOGLE LLC
  Inventors: Brian Strope, Yun-hsuan Sung, Wangqing Yuan
- Publication number: 20230385543
  Abstract: A computing system is described that includes user interface components configured to receive typed user input; and one or more processors. The one or more processors are configured to: receive, by a computing system and at a first time, a first portion of text typed by a user in an electronic message being edited; predict, based on the first portion of text, a first candidate portion of text to follow the first portion of text; output, for display, the predicted first candidate portion of text for optional selection to append to the first portion of text; determine, at a second time that is after the first time, that the electronic message is directed to a sensitive topic; and responsive to determining that the electronic message is directed to a sensitive topic, refrain from outputting subsequent candidate portions of text for optional selection to append to text in the electronic message.
  Type: Application
  Filed: August 9, 2023
  Publication date: November 30, 2023
  Inventors: Paul Roland Lambert, Timothy Youngjin Sohn, Jacqueline Amy Tsay, Gagan Bansal, Cole Austin Bevis, Kaushik Roy, Justin Tzi-jay Lu, Katherine Anna Evans, Tobias Bosch, Yinan Wang, Matthew Vincent Dierker, Greg Russell Bullock, Ettore Randazzo, Tobias Kaufmann, Yonghui Wu, Benjamin N. Lee, Xu Chen, Brian Strope, Yun-hsuan Sung, Do Kook Choe, Rami Eid Sammour Al-Rfou'
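The behavior described here, suggesting completions while the message is not on a sensitive topic and refraining once it is, reduces to a simple gate. In the sketch below the predictor and classifier are stand-ins for the trained models in the filing.

```python
from typing import Callable, Optional

def suggest_completion(
    message_text: str,
    predict_candidate: Callable[[str], str],
    is_sensitive_topic: Callable[[str], bool],
) -> Optional[str]:
    """Return a candidate continuation for the message being edited, or None.

    Mirrors the described behavior: candidate text is offered while the message
    is not classified as concerning a sensitive topic; once it is, the system
    refrains from outputting further suggestions.
    """
    if is_sensitive_topic(message_text):
        return None                      # refrain from suggesting anything
    return predict_candidate(message_text)

# Placeholder components; the real system uses trained models for both.
fake_predictor = lambda text: " you next week."
fake_classifier = lambda text: "condolences" in text.lower()

print(suggest_completion("Looking forward to seeing", fake_predictor, fake_classifier))
print(suggest_completion("My condolences on the loss of", fake_predictor, fake_classifier))
```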
- Patent number: 11755834
  Abstract: A computing system is described that includes user interface components configured to receive typed user input; and one or more processors. The one or more processors are configured to: receive, by a computing system and at a first time, a first portion of text typed by a user in an electronic message being edited; predict, based on the first portion of text, a first candidate portion of text to follow the first portion of text; output, for display, the predicted first candidate portion of text for optional selection to append to the first portion of text; determine, at a second time that is after the first time, that the electronic message is directed to a sensitive topic; and responsive to determining that the electronic message is directed to a sensitive topic, refrain from outputting subsequent candidate portions of text for optional selection to append to text in the electronic message.
  Type: Grant
  Filed: December 22, 2017
  Date of Patent: September 12, 2023
  Assignee: Google LLC
  Inventors: Paul Roland Lambert, Timothy Youngjin Sohn, Jacqueline Amy Tsay, Gagan Bansal, Cole Austin Bevis, Kaushik Roy, Justin Tzi-jay Lu, Katherine Anna Evans, Tobias Bosch, Yinan Wang, Matthew Vincent Dierker, Gregory Russell Bullock, Ettore Randazzo, Tobias Kaufmann, Yonghui Wu, Benjamin N. Lee, Xu Chen, Brian Strope, Yun-hsuan Sung, Do Kook Choe, Rami Eid Sammour Al-Rfou'
- Patent number: 11373086
  Abstract: Systems, methods, and computer readable media related to determining one or more responses to provide that are responsive to an electronic communication that is generated through interaction with a client computing device. For example, determining one or more responses to provide for presentation to a user as suggestions for inclusion in a reply to an electronic communication sent to the user. Some implementations are related to training and/or using separate input and response neural network models for determining responses for electronic communications. The input neural network model and the response neural network model can be separate, but trained and/or used cooperatively.
  Type: Grant
  Filed: March 31, 2017
  Date of Patent: June 28, 2022
  Assignee: GOOGLE LLC
  Inventors: Brian Strope, Yun-hsuan Sung, Matthew Henderson, Rami Al-Rfou', Raymond Kurzweil
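One plausible reading of the separate input and response models is a retrieval setup in which response encodings are precomputed and reply suggestions are ranked by dot product against the encoded incoming message. The encoders, feature shapes, and candidate replies below are illustrative only.

```python
import torch
import torch.nn as nn

embedding_dim = 128
input_encoder = nn.Linear(300, embedding_dim)     # stands in for the input neural network model
response_encoder = nn.Linear(300, embedding_dim)  # stands in for the response neural network model

# Candidate reply texts with placeholder feature vectors; in practice the
# response encodings would be precomputed once and reused across messages.
candidate_replies = ["Sounds good!", "Sorry, I can't make it.", "Let me check and get back to you."]
candidate_features = torch.randn(len(candidate_replies), 300)
with torch.no_grad():
    response_vectors = response_encoder(candidate_features)

def suggest_replies(message_features: torch.Tensor, top_k: int = 2) -> list:
    """Rank candidate replies by dot product between the encoded incoming
    message and each precomputed response encoding; return the top-k."""
    with torch.no_grad():
        msg_vector = input_encoder(message_features)
    scores = response_vectors @ msg_vector
    top = torch.topk(scores, k=top_k).indices.tolist()
    return [candidate_replies[i] for i in top]

print(suggest_replies(torch.randn(300)))
```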
- Publication number: 20220036197
  Abstract: Systems, methods, and computer readable media related to information retrieval. Some implementations are related to training and/or using a relevance model for information retrieval. The relevance model includes an input neural network model and a subsequent content neural network model. The input neural network model and the subsequent content neural network model can be separate, but trained and/or used cooperatively. The input neural network model and the subsequent content neural network model can be “separate” in that separate inputs are applied to the neural network models, and each of the neural network models is used to generate its own feature vector based on its applied input. A comparison of the feature vectors generated based on the separate network models can then be performed, where the comparison indicates relevance of the input applied to the input neural network model to the separate input applied to the subsequent content neural network model.
  Type: Application
  Filed: October 15, 2021
  Publication date: February 3, 2022
  Inventors: Brian Strope, Yun-hsuan Sung, Matthew Henderson, Rami Al-Rfou', Raymond Kurzweil
- Patent number: 11238211
  Abstract: A system may use a machine-learned model to determine whether to classify a sequence of one or more words within a first document that is being edited as a candidate hyperlink based at least in part on context associated with the first document. In response to classifying the sequence of one or more words as the candidate hyperlink, the system may use the machine-learned model, based at least in part on the sequence of one or more words and the context, to determine one or more candidate documents to be hyperlinked from the sequence of one or more words. In response to receiving an indication of a second document being selected out of the one or more candidate documents, the system may modify the first document to associate the sequence of one or more words with a hyperlink to the second document.
  Type: Grant
  Filed: March 14, 2019
  Date of Patent: February 1, 2022
  Assignee: Google LLC
  Inventors: Jan van de Kerkhof, Balint Miklos, Amr Abdelfattah, Tobias Kaufmann, László Lukács, Bjarke Ebert, Victor Anchidin, Brian Strope, Heeyoung Lee, Yun-hsuan Sung, Noah Constant, Neil Smith
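A rough, non-neural sketch of the flow this abstract describes follows: classify a word span in the document being edited as a candidate hyperlink from its context, rank candidate target documents, then link the user's selection. The scoring function, helper names, and URL are hypothetical stand-ins for the machine-learned model.

```python
from typing import Callable, List, Tuple

Scorer = Callable[[str, str], float]

def classify_candidate_anchor(span: str, context: str, score_fn: Scorer) -> bool:
    """Placeholder for the machine-learned decision: should this sequence of
    words, given the surrounding document context, become a candidate hyperlink?"""
    return score_fn(span, context) > 0.5

def rank_candidate_documents(span: str, context: str,
                             documents: List[Tuple[str, str]], score_fn: Scorer) -> List[str]:
    """Rank candidate target documents given as (title, url) pairs with the same
    placeholder score; return URLs from most to least relevant."""
    ranked = sorted(documents, key=lambda d: score_fn(span + " " + d[0], context), reverse=True)
    return [url for _, url in ranked]

def insert_hyperlink(document_text: str, span: str, url: str) -> str:
    """Modify the document so the span is associated with the chosen link."""
    return document_text.replace(span, f'<a href="{url}">{span}</a>', 1)

# Toy scorer standing in for the trained model; the URL is hypothetical.
toy_score: Scorer = lambda text, context: 0.9 if "quarterly report" in text.lower() else 0.1

doc = "Please review the quarterly report before Friday."
if classify_candidate_anchor("quarterly report", doc, toy_score):
    urls = rank_candidate_documents("quarterly report", doc,
                                    [("Quarterly report Q3", "https://example.com/q3")], toy_score)
    doc = insert_hyperlink(doc, "quarterly report", urls[0])  # user selected the first candidate
print(doc)
```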
- Patent number: 11188824
  Abstract: Systems, methods, and computer readable media related to information retrieval. Some implementations are related to training and/or using a relevance model for information retrieval. The relevance model includes an input neural network model and a subsequent content neural network model. The input neural network model and the subsequent content neural network model can be separate, but trained and/or used cooperatively. The input neural network model and the subsequent content neural network model can be “separate” in that separate inputs are applied to the neural network models, and each of the neural network models is used to generate its own feature vector based on its applied input. A comparison of the feature vectors generated based on the separate network models can then be performed, where the comparison indicates relevance of the input applied to the input neural network model to the separate input applied to the subsequent content neural network model.
  Type: Grant
  Filed: March 31, 2017
  Date of Patent: November 30, 2021
  Assignee: GOOGLE LLC
  Inventors: Brian Strope, Yun-hsuan Sung, Matthew Henderson, Rami Al-Rfou', Raymond Kurzweil
- Publication number: 20200410157
  Abstract: A system may use a machine-learned model to determine whether to classify a sequence of one or more words within a first document that is being edited as a candidate hyperlink based at least in part on context associated with the first document. In response to classifying the sequence of one or more words as the candidate hyperlink, the system may use the machine-learned model, based at least in part on the sequence of one or more words and the context, to determine one or more candidate documents to be hyperlinked from the sequence of one or more words. In response to receiving an indication of a second document being selected out of the one or more candidate documents, the system may modify the first document to associate the sequence of one or more words with a hyperlink to the second document.
  Type: Application
  Filed: March 14, 2019
  Publication date: December 31, 2020
  Inventors: Jan van de Kerkhof, Balint Miklos, Amr Abdelfattah, Tobias Kaufmann, László Lukács, Bjarke Ebert, Victor Anchidin, Brian Strope, Heeyoung Lee, Yun-hsuan Sung, Noah Constant, Neil Smith
- Publication number: 20200380418
  Abstract: Systems, methods, and computer readable media related to: training an encoder model that can be utilized to determine semantic similarity of a natural language textual string to each of one or more additional natural language textual strings (directly and/or indirectly); and/or using a trained encoder model to determine one or more responsive actions to perform in response to a natural language query. The encoder model is a machine learning model, such as a neural network model. In some implementations of training the encoder model, the encoder model is trained as part of a larger network architecture trained based on one or more tasks that are distinct from a “semantic textual similarity” task for which the encoder model can be used.
  Type: Application
  Filed: August 17, 2020
  Publication date: December 3, 2020
  Inventors: Brian Strope, Yun-hsuan Sung, Wangqing Yuan
- Patent number: 10783456
  Abstract: Systems, methods, and computer readable media related to: training an encoder model that can be utilized to determine semantic similarity of a natural language textual string to each of one or more additional natural language textual strings (directly and/or indirectly); and/or using a trained encoder model to determine one or more responsive actions to perform in response to a natural language query. The encoder model is a machine learning model, such as a neural network model. In some implementations of training the encoder model, the encoder model is trained as part of a larger network architecture trained based on one or more tasks that are distinct from a “semantic textual similarity” task for which the encoder model can be used.
  Type: Grant
  Filed: December 14, 2018
  Date of Patent: September 22, 2020
  Assignee: GOOGLE LLC
  Inventors: Brian Strope, Yun-hsuan Sung, Wangqing Yuan
- Publication number: 20200104746
  Abstract: Systems, methods, and computer readable media related to: training an encoder model that can be utilized to determine semantic similarity of a natural language textual string to each of one or more additional natural language textual strings (directly and/or indirectly); and/or using a trained encoder model to determine one or more responsive actions to perform in response to a natural language query. The encoder model is a machine learning model, such as a neural network model. In some implementations of training the encoder model, the encoder model is trained as part of a larger network architecture trained based on one or more tasks that are distinct from a “semantic textual similarity” task for which the encoder model can be used.
  Type: Application
  Filed: December 14, 2018
  Publication date: April 2, 2020
  Inventors: Brian Strope, Yun-hsuan Sung, Wangqing Yuan
- Publication number: 20190197101
  Abstract: A computing system is described that includes user interface components configured to receive typed user input; and one or more processors. The one or more processors are configured to: receive, by a computing system and at a first time, a first portion of text typed by a user in an electronic message being edited; predict, based on the first portion of text, a first candidate portion of text to follow the first portion of text; output, for display, the predicted first candidate portion of text for optional selection to append to the first portion of text; determine, at a second time that is after the first time, that the electronic message is directed to a sensitive topic; and responsive to determining that the electronic message is directed to a sensitive topic, refrain from outputting subsequent candidate portions of text for optional selection to append to text in the electronic message.
  Type: Application
  Filed: December 22, 2017
  Publication date: June 27, 2019
  Inventors: Paul Roland Lambert, Timothy Youngjin Sohn, Jacqueline Amy Tsay, Gagan Bansal, Cole Austin Bevis, Kaushik Roy, Justin Tzi-jay Lu, Katherine Anna Evans, Tobias Bosch, Yinan Wang, Matthew Vincent Dierker, Gregory Russell Bullock, Ettore Randazzo, Tobias Kaufmann, Yonghui Wu, Benjamin N. Lee, Xu Chen, Brian Strope, Yun-hsuan Sung, Do Kook Choe, Rami Eid Sammour Al-Rfou'
- Publication number: 20180240013
  Abstract: Systems, methods, and computer readable media related to information retrieval. Some implementations are related to training and/or using a relevance model for information retrieval. The relevance model includes an input neural network model and a subsequent content neural network model. The input neural network model and the subsequent content neural network model can be separate, but trained and/or used cooperatively. The input neural network model and the subsequent content neural network model can be “separate” in that separate inputs are applied to the neural network models, and each of the neural network models is used to generate its own feature vector based on its applied input. A comparison of the feature vectors generated based on the separate network models can then be performed, where the comparison indicates relevance of the input applied to the input neural network model to the separate input applied to the subsequent content neural network model.
  Type: Application
  Filed: March 31, 2017
  Publication date: August 23, 2018
  Inventors: Brian Strope, Yun-hsuan Sung, Matthew Henderson, Rami Al-Rfou', Raymond Kurzweil
- Publication number: 20180240014
  Abstract: Systems, methods, and computer readable media related to determining one or more responses to provide that are responsive to an electronic communication that is generated through interaction with a client computing device. For example, determining one or more responses to provide for presentation to a user as suggestions for inclusion in a reply to an electronic communication sent to the user. Some implementations are related to training and/or using separate input and response neural network models for determining responses for electronic communications. The input neural network model and the response neural network model can be separate, but trained and/or used cooperatively.
  Type: Application
  Filed: March 31, 2017
  Publication date: August 23, 2018
  Inventors: Brian Strope, Yun-hsuan Sung, Matthew Henderson, Rami Al-Rfou', Raymond Kurzweil
- Patent number: 9460088
  Abstract: An automatic speech recognition system and method are provided for written-domain language modeling.
  Type: Grant
  Filed: May 31, 2013
  Date of Patent: October 4, 2016
  Assignee: Google Inc.
  Inventors: Hasim Sak, Yun-hsuan Sung, Cyril Georges Luc Allauzen
- Patent number: 9275635
  Abstract: Speech recognition systems may perform the following operations: receiving audio at a computing device; identifying a language associated with the audio; recognizing the audio using recognition models for different versions of the language to produce recognition candidates for the audio, where the recognition candidates are associated with corresponding information; comparing the information of the recognition candidates to identify agreement between at least two of the recognition models; selecting a recognition candidate based on information of the recognition candidate and agreement between the at least two of the recognition models; and outputting data corresponding to the selected recognition candidate as a recognized version of the audio.
  Type: Grant
  Filed: November 9, 2012
  Date of Patent: March 1, 2016
  Assignee: Google Inc.
  Inventors: Francoise Beaufays, Brian Strope, Yun-hsuan Sung
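The agreement-based selection step can be illustrated with a small stub: run recognizers for several versions of the identified language, prefer transcripts that at least two recognizers agree on, and break ties by confidence. The recognizer names, confidence scores, and the exact tie-breaking rule below are assumptions for the sketch.

```python
from collections import Counter
from typing import Dict, Tuple

def select_recognition_candidate(candidates: Dict[str, Tuple[str, float]]) -> str:
    """candidates maps a recognizer name (e.g. 'en-US', 'en-GB', 'en-IN') to a
    (transcript, confidence) pair. Prefer transcripts that at least two
    recognizers agree on; among those, pick the highest-confidence candidate."""
    transcript_counts = Counter(t for t, _ in candidates.values())
    agreed = {name: (t, c) for name, (t, c) in candidates.items() if transcript_counts[t] >= 2}
    pool = agreed if agreed else candidates        # fall back if no two models agree
    best = max(pool.values(), key=lambda tc: tc[1])
    return best[0]

# Example: two dialect models agree, so their transcript is selected even though
# a third, slightly more confident model produced a different one.
print(select_recognition_candidate({
    "en-US": ("call mom at noon", 0.82),
    "en-GB": ("call mum at noon", 0.88),
    "en-IN": ("call mom at noon", 0.79),
}))
```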