Patents by Inventor Yu Ouyang
Yu Ouyang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220027645
Abstract: Machine-learning models are described for detecting the signaling state of a traffic signaling unit. A system can obtain an image of the traffic signaling unit, and select a model of the traffic signaling unit that identifies a position of each traffic lighting element on the unit. First and second neural network inputs are processed with a neural network to generate an estimated signaling state of the traffic signaling unit. The first neural network input can represent the image of the traffic signaling unit, and the second neural network input can represent the model of the traffic signaling unit. Using the estimated signaling state of the traffic signaling unit, the system can inform a driving decision of a vehicle.
Type: Application
Filed: July 23, 2020
Publication date: January 27, 2022
Inventors: Edward Hsiao, Yu Ouyang, Maoqing Yao
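A minimal sketch of the two-input inference flow this abstract describes. The real system feeds both inputs to a trained neural network; here a simple stub scorer stands in, and all names, the layout model, and brightness values are illustrative assumptions.

```python
# Stub for the two-input flow: an image-derived feature vector plus a
# layout model of the signaling unit produce an estimated state.

def estimate_signaling_state(image_features, unit_layout, states=("red", "yellow", "green")):
    """Combine per-element image features with the unit's layout model
    and return the most strongly supported signaling state."""
    # First input: brightness of each lighting element measured from the image.
    # Second input: layout model mapping each lighting element to a state.
    scores = {s: 0.0 for s in states}
    for element, state in unit_layout.items():
        scores[state] += image_features.get(element, 0.0)
    return max(scores, key=scores.get)

layout = {"top": "red", "middle": "yellow", "bottom": "green"}
brightness = {"top": 0.1, "middle": 0.2, "bottom": 0.9}  # bottom lamp is lit
state = estimate_signaling_state(brightness, layout)
print(state)  # the estimated state can then inform a driving decision
```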
-
Publication number: 20210405868
Abstract: In some examples, a computing device includes at least one processor; and at least one module, operable by the at least one processor to: output, for display at an output device, a graphical keyboard; receive an indication of a gesture detected at a location of a presence-sensitive input device, wherein the location of the presence-sensitive input device corresponds to a location of the output device that outputs the graphical keyboard; determine, based on at least one spatial feature of the gesture that is processed by the computing device using a neural network, at least one character string, wherein the at least one spatial feature indicates at least one physical property of the gesture; and output, for display at the output device, based at least in part on the processing of the at least one spatial feature of the gesture using the neural network, the at least one character string.
Type: Application
Filed: September 8, 2021
Publication date: December 30, 2021
Applicant: Google LLC
Inventors: Shumin Zhai, Thomas Breuel, Ouais Alsharif, Yu Ouyang, Francoise Beaufays, Johan Schalkwyk
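A hedged sketch of the idea in this abstract: extract spatial features (physical properties) of a touch gesture and map them to a character string. A trained neural network performs the mapping in the patented system; a nearest-key lookup over a toy two-key layout stands in here, and the key positions are assumptions.

```python
import math

# Toy keyboard layout (assumption): key name -> center coordinates.
KEY_CENTERS = {"h": (0, 0), "i": (2, 0)}

def spatial_features(points):
    """Physical properties of the gesture: total path length and endpoints."""
    length = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    return {"length": length, "start": points[0], "end": points[-1]}

def decode(points):
    """Map the gesture's spatial features to a character string
    (stand-in for the neural network decoder)."""
    feats = spatial_features(points)
    nearest = lambda p: min(KEY_CENTERS, key=lambda k: math.dist(KEY_CENTERS[k], p))
    return nearest(feats["start"]) + nearest(feats["end"])

print(decode([(0.1, 0.1), (1.0, 0.2), (1.9, 0.0)]))  # "hi"
```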
-
Patent number: 11164363
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing point cloud data using dynamic voxelization. When deployed within an on-board system of a vehicle, processing the point cloud data using dynamic voxelization can be used to make autonomous driving decisions for the vehicle with enhanced accuracy, for example by combining representations of point cloud data characterizing a scene from multiple views of the scene.
Type: Grant
Filed: July 8, 2020
Date of Patent: November 2, 2021
Assignee: Waymo LLC
Inventors: Yin Zhou, Pei Sun, Yu Zhang, Dragomir Anguelov, Jiyang Gao, Yu Ouyang, Zijian Guo, Jiquan Ngiam, Vijay Vasudevan
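A minimal sketch of the dynamic-voxelization idea: every point is assigned to its voxel without a fixed per-voxel point budget, so no points are dropped and no empty capacity is padded. The voxel size and sample points below are assumptions for illustration, not the patented pipeline.

```python
from collections import defaultdict

def dynamic_voxelize(points, voxel_size=1.0):
    """Group 3D points into voxels with no fixed per-voxel capacity."""
    voxels = defaultdict(list)  # voxel index -> all points falling inside it
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        voxels[key].append((x, y, z))
    return dict(voxels)

pts = [(0.2, 0.3, 0.1), (0.8, 0.9, 0.4), (1.5, 0.2, 0.0)]
vox = dynamic_voxelize(pts)
print(len(vox))             # 2 occupied voxels
print(len(vox[(0, 0, 0)]))  # 2 -> both nearby points kept, none dropped
```

The same assignment can be run per view (e.g. a birds-eye grid and a perspective grid) and the per-view representations combined, as the abstract notes.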
-
Patent number: 11150804
Abstract: In some examples, a computing device includes at least one processor; and at least one module, operable by the at least one processor to: output, for display at an output device, a graphical keyboard; receive an indication of a gesture detected at a location of a presence-sensitive input device, wherein the location of the presence-sensitive input device corresponds to a location of the output device that outputs the graphical keyboard; determine, based on at least one spatial feature of the gesture that is processed by the computing device using a neural network, at least one character string, wherein the at least one spatial feature indicates at least one physical property of the gesture; and output, for display at the output device, based at least in part on the processing of the at least one spatial feature of the gesture using the neural network, the at least one character string.
Type: Grant
Filed: April 30, 2020
Date of Patent: October 19, 2021
Assignee: Google LLC
Inventors: Shumin Zhai, Thomas Breuel, Ouais Alsharif, Yu Ouyang, Francoise Beaufays, Johan Schalkwyk
-
Publication number: 20210192135
Abstract: A computing device outputs a keyboard for display, receives an indication of a first gesture to select a first sequence of one or more keys, determines a set of candidate strings based in part on the first sequence of keys, and outputs for display at least one of the set of candidate strings. The computing device receives an indication of a second gesture to select a second sequence of one or more keys, and determines that characters associated with the second sequence of keys are included in a first candidate word based at least in part on the set of candidate strings, or are included in a second candidate word not based on the first sequence of keys. The computing device modifies the set of candidate strings based at least in part on the determination and outputs for display at least one of the modified candidate strings.
Type: Application
Filed: March 4, 2021
Publication date: June 24, 2021
Applicant: Google LLC
Inventors: Yu Ouyang, Shumin Zhai, Xiaojun Bi
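An illustrative sketch of the two-gesture decision this abstract describes: after a second gesture, decide whether its keys continue the first candidate word or begin a new word, and update the candidate set accordingly. The tiny lexicon and function names are assumptions standing in for the real language model.

```python
LEXICON = {"than", "thank", "thanks", "this", "is"}  # toy lexicon (assumption)

def candidates(prefix):
    """Candidate strings whose spelling starts with the selected keys."""
    return sorted(w for w in LEXICON if w.startswith(prefix))

def merge(first_prefix, second_prefix):
    """Decide whether the second gesture continues the first word or
    starts a second word, then return the modified candidate set."""
    combined = first_prefix + second_prefix
    if candidates(combined):          # characters belong to the first candidate word
        return candidates(combined)
    return candidates(second_prefix)  # second word; first word stands on its own

print(merge("than", "k"))   # ['thank', 'thanks'] -> continuation
print(merge("than", "is"))  # ['is'] -> treated as a new word
```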
-
Patent number: 10977440
Abstract: A computing device outputs a keyboard for display, receives an indication of a first gesture to select a first sequence of one or more keys, determines a set of candidate strings based in part on the first sequence of keys, and outputs for display at least one of the set of candidate strings. The computing device receives an indication of a second gesture to select a second sequence of one or more keys, and determines that characters associated with the second sequence of keys are included in a first candidate word based at least in part on the set of candidate strings, or are included in a second candidate word not based on the first sequence of keys. The computing device modifies the set of candidate strings based at least in part on the determination and outputs for display at least one of the modified candidate strings.
Type: Grant
Filed: July 12, 2017
Date of Patent: April 13, 2021
Assignee: Google LLC
Inventors: Yu Ouyang, Shumin Zhai, Xiaojun Bi
-
Publication number: 20210019046
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for cross input modality learning in a mobile device are disclosed. In one aspect, a method includes activating a first modality user input mode in which user inputs by way of a first modality are recognized using a first modality recognizer; and receiving a user input by way of the first modality. The method includes obtaining, as a result of the first modality recognizer recognizing the user input, a transcription that includes a particular term; and generating an input context data structure that references at least the particular term. The method further includes transmitting, by the first modality recognizer, the input context data structure to a second modality recognizer for use in updating a second modality recognition model associated with the second modality recognizer.
Type: Application
Filed: October 6, 2020
Publication date: January 21, 2021
Inventors: Yu Ouyang, Diego Melendo Casado, Mohammadinamul Hasan Sheik, Francoise Beaufays, Dragan Zivkovic, Meltem Oktem
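A hedged sketch of the cross-modality flow: a term recognized by one modality (say, voice) is packaged into an input-context structure and handed to a second modality's recognizer (say, the keyboard) to update its model. The dict shapes and function names below are illustrative assumptions.

```python
def recognize_voice(utterance):
    # Stand-in for the first modality recognizer; the string stands in
    # for audio and its own transcription.
    return {"transcription": utterance, "terms": utterance.split()}

def build_input_context(result):
    """Input context data structure referencing the recognized terms."""
    return {"terms": result["terms"], "source": "voice"}

def update_keyboard_model(lexicon, context):
    """Second modality recognizer learns the terms (here, unigram counts)."""
    for term in context["terms"]:
        lexicon[term] = lexicon.get(term, 0) + 1
    return lexicon

lexicon = {"hello": 3}
ctx = build_input_context(recognize_voice("zeitgeist"))
update_keyboard_model(lexicon, ctx)
print(lexicon)  # 'zeitgeist' is now known to the keyboard model
```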
-
Publication number: 20210012555
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing point cloud data using dynamic voxelization. When deployed within an on-board system of a vehicle, processing the point cloud data using dynamic voxelization can be used to make autonomous driving decisions for the vehicle with enhanced accuracy, for example by combining representations of point cloud data characterizing a scene from multiple views of the scene.
Type: Application
Filed: July 8, 2020
Publication date: January 14, 2021
Inventors: Yin Zhou, Pei Sun, Yu Zhang, Dragomir Anguelov, Jiyang Gao, Yu Ouyang, Zijian Guo, Jiquan Ngiam, Vijay Vasudevan
-
Patent number: 10831366
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for cross input modality learning in a mobile device are disclosed. In one aspect, a method includes activating a first modality user input mode in which user inputs by way of a first modality are recognized using a first modality recognizer; and receiving a user input by way of the first modality. The method includes obtaining, as a result of the first modality recognizer recognizing the user input, a transcription that includes a particular term; and generating an input context data structure that references at least the particular term. The method further includes transmitting, by the first modality recognizer, the input context data structure to a second modality recognizer for use in updating a second modality recognition model associated with the second modality recognizer.
Type: Grant
Filed: December 29, 2016
Date of Patent: November 10, 2020
Assignee: Google LLC
Inventors: Yu Ouyang, Diego Melendo Casado, Mohammadinamul Hasan Sheik, Francoise Beaufays, Dragan Zivkovic, Meltem Oktem
-
Publication number: 20200257447
Abstract: In some examples, a computing device includes at least one processor; and at least one module, operable by the at least one processor to: output, for display at an output device, a graphical keyboard; receive an indication of a gesture detected at a location of a presence-sensitive input device, wherein the location of the presence-sensitive input device corresponds to a location of the output device that outputs the graphical keyboard; determine, based on at least one spatial feature of the gesture that is processed by the computing device using a neural network, at least one character string, wherein the at least one spatial feature indicates at least one physical property of the gesture; and output, for display at the output device, based at least in part on the processing of the at least one spatial feature of the gesture using the neural network, the at least one character string.
Type: Application
Filed: April 30, 2020
Publication date: August 13, 2020
Applicant: Google LLC
Inventors: Shumin Zhai, Thomas Breuel, Ouais Alsharif, Yu Ouyang, Francoise Beaufays, Johan Schalkwyk
-
Patent number: 10671281
Abstract: In some examples, a computing device includes at least one processor; and at least one module, operable by the at least one processor to: output, for display at an output device, a graphical keyboard; receive an indication of a gesture detected at a location of a presence-sensitive input device, wherein the location of the presence-sensitive input device corresponds to a location of the output device that outputs the graphical keyboard; determine, based on at least one spatial feature of the gesture that is processed by the computing device using a neural network, at least one character string, wherein the at least one spatial feature indicates at least one physical property of the gesture; and output, for display at the output device, based at least in part on the processing of the at least one spatial feature of the gesture using the neural network, the at least one character string.
Type: Grant
Filed: January 30, 2019
Date of Patent: June 2, 2020
Assignee: Google LLC
Inventors: Shumin Zhai, Thomas Breuel, Ouais Alsharif, Yu Ouyang, Francoise Beaufays, Johan Schalkwyk
-
Publication number: 20200050661
Abstract: In one example, a computing device includes at least one processor that is operatively coupled to a presence-sensitive display and a gesture module operable by the at least one processor. The gesture module may be operable by the at least one processor to output, for display at the presence-sensitive display, a graphical keyboard comprising a plurality of keys and receive an indication of a continuous gesture detected at the presence-sensitive display, the continuous gesture to select a group of keys of the plurality of keys. The gesture module may be further operable to determine, in response to receiving the indication of the continuous gesture and based at least in part on the group of keys of the plurality of keys, a candidate phrase comprising a group of candidate words.
Type: Application
Filed: October 16, 2019
Publication date: February 13, 2020
Applicant: Google LLC
Inventors: Shumin Zhai, Yu Ouyang, Ken Wakasa, Satoshi Kataoka
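A minimal sketch of the phrase step: segment the key sequence selected by a single continuous gesture into a candidate phrase of multiple words. The recursive segmentation over a toy lexicon below is an assumption standing in for the patented decoder.

```python
LEXICON = {"i", "am", "a", "ham"}  # toy lexicon (assumption)

def phrase_candidates(keys, lexicon=LEXICON):
    """Return every segmentation of the selected key string into
    lexicon words, each segmentation being one candidate phrase."""
    if not keys:
        return [[]]
    phrases = []
    for i in range(1, len(keys) + 1):
        word = keys[:i]
        if word in lexicon:
            for rest in phrase_candidates(keys[i:], lexicon):
                phrases.append([word] + rest)
    return phrases

print(phrase_candidates("iam"))  # [['i', 'am']]
```

A real decoder scores segmentations with a language model instead of enumerating all of them, but the candidate-phrase structure is the same.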
-
Patent number: 10489508
Abstract: In one example, a computing device includes at least one processor that is operatively coupled to a presence-sensitive display and a gesture module operable by the at least one processor. The gesture module may be operable by the at least one processor to output, for display at the presence-sensitive display, a graphical keyboard comprising a plurality of keys and receive an indication of a continuous gesture detected at the presence-sensitive display, the continuous gesture to select a group of keys of the plurality of keys. The gesture module may be further operable to determine, in response to receiving the indication of the continuous gesture and based at least in part on the group of keys of the plurality of keys, a candidate phrase comprising a group of candidate words.
Type: Grant
Filed: September 13, 2017
Date of Patent: November 26, 2019
Assignee: Google LLC
Inventors: Shumin Zhai, Yu Ouyang, Ken Wakasa, Satoshi Kataoka
-
Publication number: 20190155504
Abstract: In some examples, a computing device includes at least one processor; and at least one module, operable by the at least one processor to: output, for display at an output device, a graphical keyboard; receive an indication of a gesture detected at a location of a presence-sensitive input device, wherein the location of the presence-sensitive input device corresponds to a location of the output device that outputs the graphical keyboard; determine, based on at least one spatial feature of the gesture that is processed by the computing device using a neural network, at least one character string, wherein the at least one spatial feature indicates at least one physical property of the gesture; and output, for display at the output device, based at least in part on the processing of the at least one spatial feature of the gesture using the neural network, the at least one character string.
Type: Application
Filed: January 30, 2019
Publication date: May 23, 2019
Inventors: Shumin Zhai, Thomas Breuel, Ouais Alsharif, Yu Ouyang, Francoise Beaufays, Johan Schalkwyk
-
Patent number: 10282666
Abstract: A method may include determining, by a computing device and based on at least one user coherency factor, a user coherency level. The coherency level may include a predicted ability of a user to comprehend information. The method may also include determining, by the computing device and based on the user coherency level, information having a complexity that satisfies the predicted ability of the user to comprehend information. The method may further include outputting, by the computing device, at least a portion of the information.
Type: Grant
Filed: November 10, 2015
Date of Patent: May 7, 2019
Assignee: Google LLC
Inventor: Yu Ouyang
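A hedged sketch of the two determinations in this abstract: combine coherency factors into a level, then pick the most detailed information whose complexity that level satisfies. The factors, weights, and thresholds are illustrative assumptions, not the patented method.

```python
def coherency_level(factors):
    """factors: dict of signal name -> score in [0, 1] (e.g. time of day,
    recent device activity). Returns an averaged coherency level."""
    return sum(factors.values()) / len(factors)

def select_information(level, versions):
    """versions: list of (complexity, text) sorted by complexity.
    Return the most detailed text whose complexity the level satisfies."""
    suitable = [text for complexity, text in versions if complexity <= level]
    return suitable[-1] if suitable else versions[0][1]

versions = [
    (0.2, "Alarm at 7am."),
    (0.8, "Alarm at 7am; traffic adds 15 min, so leave by 7:40."),
]
level = coherency_level({"just_woke_up": 0.3, "screen_unlocked": 0.5})
print(select_information(level, versions))  # the simpler version is chosen
```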
-
Patent number: 10248313
Abstract: In some examples, a computing device includes at least one processor; and at least one module, operable by the at least one processor to: output, for display at an output device, a graphical keyboard; receive an indication of a gesture detected at a location of a presence-sensitive input device, wherein the location of the presence-sensitive input device corresponds to a location of the output device that outputs the graphical keyboard; determine, based on at least one spatial feature of the gesture that is processed by the computing device using a neural network, at least one character string, wherein the at least one spatial feature indicates at least one physical property of the gesture; and output, for display at the output device, based at least in part on the processing of the at least one spatial feature of the gesture using the neural network, the at least one character string.
Type: Grant
Filed: March 29, 2017
Date of Patent: April 2, 2019
Assignee: Google LLC
Inventors: Shumin Zhai, Thomas Breuel, Ouais Alsharif, Yu Ouyang, Francoise Beaufays, Johan Schalkwyk
-
Patent number: 10241673
Abstract: In one example, a method may include outputting, by a computing device and for display, a graphical keyboard comprising a plurality of keys, and receiving an indication of a gesture. The method may include determining an alignment score that is based at least in part on a word prefix and an alignment point traversed by the gesture. The method may include determining at least one alternative character that is based at least in part on a misspelling that includes at least a portion of the word prefix. The method may include determining an alternative alignment score based at least in part on the alternative character; and outputting, by the computing device and for display, based at least in part on the alternative alignment score, a candidate word based at least in part on the alternative character.
Type: Grant
Filed: November 9, 2017
Date of Patent: March 26, 2019
Assignee: Google LLC
Inventors: Yu Ouyang, Shumin Zhai
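A minimal sketch of the alignment idea: score a word prefix against the gesture's alignment points, and also score an alternative character that models a common misspelling (e.g. "teh" intended as "the"), letting the better score win. The key positions and the misspelling penalty are assumptions.

```python
import math

# Toy key centers (assumption): key -> coordinates on the keyboard.
KEYS = {"t": (0, 0), "e": (1, 0), "h": (2, 0)}

def alignment_score(prefix, points, penalty=0.0):
    """Lower is better: summed distance from each prefix character's key
    to the corresponding alignment point, plus any misspelling penalty."""
    return penalty + sum(math.dist(KEYS[c], p) for c, p in zip(prefix, points))

points = [(0.0, 0.1), (2.0, 0.0)]                 # gesture passed near 't' then 'h'
literal = alignment_score("te", points)           # prefix taken literally
alt = alignment_score("th", points, penalty=0.5)  # alternative char for "teh" -> "the"
best = min(("te", literal), ("th", alt), key=lambda pair: pair[1])
print(best[0])  # the alternative prefix wins despite its penalty
```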
-
Patent number: 10140284
Abstract: A graphical keyboard including a number of keys is output for display at a display device. The computing device receives an indication of a gesture to select at least two of the keys based at least in part on detecting an input unit at locations of a presence-sensitive input device. In response to the detecting and while the input unit is detected at the presence-sensitive input device, the computing device determines a candidate word for the gesture based at least in part on the at least two keys, and the candidate word is output for display at a first location of the output device. In response to determining that the input unit is no longer detected at the presence-sensitive input device, the displayed candidate word is output for display at a second location of the display device.
Type: Grant
Filed: April 4, 2017
Date of Patent: November 27, 2018
Assignee: Google LLC
Inventors: Xiaojun Bi, Yu Ouyang, Shumin Zhai
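A tiny sketch of the display behavior: while the input unit (e.g. a finger) is still detected, the candidate word is previewed at one location; once it lifts off, the word moves to a second location. The location names are assumptions.

```python
def display_location(candidate, input_unit_detected):
    """Choose where the candidate word is shown, per the abstract:
    a preview location while gesturing, a commit location after lift-off."""
    where = "suggestion_bar" if input_unit_detected else "text_field"
    return (where, candidate)

print(display_location("hello", input_unit_detected=True))   # previewed mid-gesture
print(display_location("hello", input_unit_detected=False))  # committed on lift-off
```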
-
Patent number: D829221
Type: Grant
Filed: September 19, 2016
Date of Patent: September 25, 2018
Assignee: Google LLC
Inventors: Yu Ouyang, Shumin Zhai
-
Patent number: D918755
Type: Grant
Filed: July 17, 2020
Date of Patent: May 11, 2021
Assignee: SHENZHEN LANFENG TECHNOLOGY CO., LTD.
Inventor: Yu Ouyang