Patents by Inventor Yu Ouyang

Yu Ouyang has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220027645
    Abstract: Machine-learning models are described for detecting the signaling state of a traffic signaling unit. A system can obtain an image of the traffic signaling unit, and select a model of the traffic signaling unit that identifies a position of each traffic lighting element on the unit. First and second neural network inputs are processed with a neural network to generate an estimated signaling state of the traffic signaling unit. The first neural network input can represent the image of the traffic signaling unit, and the second neural network input can represent the model of the traffic signaling unit. Using the estimated signaling state of the traffic signaling unit, the system can inform a driving decision of a vehicle.
    Type: Application
    Filed: July 23, 2020
    Publication date: January 27, 2022
    Inventors: Edward Hsiao, Yu Ouyang, Maoqing Yao
  • Publication number: 20210405868
    Abstract: In some examples, a computing device includes at least one processor; and at least one module, operable by the at least one processor to: output, for display at an output device, a graphical keyboard; receive an indication of a gesture detected at a location of a presence-sensitive input device, wherein the location of the presence-sensitive input device corresponds to a location of the output device that outputs the graphical keyboard; determine, based on at least one spatial feature of the gesture that is processed by the computing device using a neural network, at least one character string, wherein the at least one spatial feature indicates at least one physical property of the gesture; and output, for display at the output device, based at least in part on the processing of the at least one spatial feature of the gesture using the neural network, the at least one character string.
    Type: Application
    Filed: September 8, 2021
    Publication date: December 30, 2021
    Applicant: Google LLC
    Inventors: Shumin Zhai, Thomas Breuel, Ouais Alsharif, Yu Ouyang, Francoise Beaufays, Johan Schalkwyk
  • Patent number: 11164363
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing point cloud data using dynamic voxelization. When deployed within an on-board system of a vehicle, processing the point cloud data using dynamic voxelization can be used to make autonomous driving decisions for the vehicle with enhanced accuracy, for example by combining representations of point cloud data characterizing a scene from multiple views of the scene.
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: November 2, 2021
    Assignee: Waymo LLC
    Inventors: Yin Zhou, Pei Sun, Yu Zhang, Dragomir Anguelov, Jiyang Gao, Yu Ouyang, Zijian Guo, Jiquan Ngiam, Vijay Vasudevan
  • Patent number: 11150804
    Abstract: In some examples, a computing device includes at least one processor; and at least one module, operable by the at least one processor to: output, for display at an output device, a graphical keyboard; receive an indication of a gesture detected at a location of a presence-sensitive input device, wherein the location of the presence-sensitive input device corresponds to a location of the output device that outputs the graphical keyboard; determine, based on at least one spatial feature of the gesture that is processed by the computing device using a neural network, at least one character string, wherein the at least one spatial feature indicates at least one physical property of the gesture; and output, for display at the output device, based at least in part on the processing of the at least one spatial feature of the gesture using the neural network, the at least one character string.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: October 19, 2021
    Assignee: Google LLC
    Inventors: Shumin Zhai, Thomas Breuel, Ouais Alsharif, Yu Ouyang, Francoise Beaufays, Johan Schalkwyk
  • Publication number: 20210192135
    Abstract: A computing device outputs a keyboard for display, receives an indication of a first gesture to select a first sequence of one or more keys, determines a set of candidate strings based in part on the first sequence of keys, and outputs for display at least one of the set of candidate strings. The computing device receives an indication of a second gesture to select a second sequence of one or more keys, and determines that characters associated with the second sequence of keys are included in a first candidate word based at least in part on the set of candidate strings, or are included in a second candidate word not based on the first sequence of keys. The computing device modifies the set of candidate strings based at least in part on the determination and outputs for display at least one of the modified candidate strings.
    Type: Application
    Filed: March 4, 2021
    Publication date: June 24, 2021
    Applicant: Google LLC
    Inventors: Yu Ouyang, Shumin Zhai, Xiaojun Bi
  • Patent number: 10977440
    Abstract: A computing device outputs a keyboard for display, receives an indication of a first gesture to select a first sequence of one or more keys, determines a set of candidate strings based in part on the first sequence of keys, and outputs for display at least one of the set of candidate strings. The computing device receives an indication of a second gesture to select a second sequence of one or more keys, and determines that characters associated with the second sequence of keys are included in a first candidate word based at least in part on the set of candidate strings, or are included in a second candidate word not based on the first sequence of keys. The computing device modifies the set of candidate strings based at least in part on the determination and outputs for display at least one of the modified candidate strings.
    Type: Grant
    Filed: July 12, 2017
    Date of Patent: April 13, 2021
    Assignee: Google LLC
    Inventors: Yu Ouyang, Shumin Zhai, Xiaojun Bi
  • Publication number: 20210019046
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for cross input modality learning in a mobile device are disclosed. In one aspect, a method includes activating a first modality user input mode in which user inputs by way of a first modality are recognized using a first modality recognizer; and receiving a user input by way of the first modality. The method includes obtaining, as a result of the first modality recognizer recognizing the user input, a transcription that includes a particular term; and generating an input context data structure that references at least the particular term. The method further includes transmitting, by the first modality recognizer, the input context data structure to a second modality recognizer for use in updating a second modality recognition model associated with the second modality recognizer.
    Type: Application
    Filed: October 6, 2020
    Publication date: January 21, 2021
    Inventors: Yu Ouyang, Diego Melendo Casado, Mohammadinamul Hasan Sheik, Francoise Beaufays, Dragan Zivkovic, Meltem Oktem
  • Publication number: 20210012555
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing point cloud data using dynamic voxelization. When deployed within an on-board system of a vehicle, processing the point cloud data using dynamic voxelization can be used to make autonomous driving decisions for the vehicle with enhanced accuracy, for example by combining representations of point cloud data characterizing a scene from multiple views of the scene.
    Type: Application
    Filed: July 8, 2020
    Publication date: January 14, 2021
    Inventors: Yin Zhou, Pei Sun, Yu Zhang, Dragomir Anguelov, Jiyang Gao, Yu Ouyang, Zijian Guo, Jiquan Ngiam, Vijay Vasudevan
  • Patent number: 10831366
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for cross input modality learning in a mobile device are disclosed. In one aspect, a method includes activating a first modality user input mode in which user inputs by way of a first modality are recognized using a first modality recognizer; and receiving a user input by way of the first modality. The method includes obtaining, as a result of the first modality recognizer recognizing the user input, a transcription that includes a particular term; and generating an input context data structure that references at least the particular term. The method further includes transmitting, by the first modality recognizer, the input context data structure to a second modality recognizer for use in updating a second modality recognition model associated with the second modality recognizer.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: November 10, 2020
    Assignee: Google LLC
    Inventors: Yu Ouyang, Diego Melendo Casado, Mohammadinamul Hasan Sheik, Francoise Beaufays, Dragan Zivkovic, Meltem Oktem
  • Publication number: 20200257447
    Abstract: In some examples, a computing device includes at least one processor; and at least one module, operable by the at least one processor to: output, for display at an output device, a graphical keyboard; receive an indication of a gesture detected at a location of a presence-sensitive input device, wherein the location of the presence-sensitive input device corresponds to a location of the output device that outputs the graphical keyboard; determine, based on at least one spatial feature of the gesture that is processed by the computing device using a neural network, at least one character string, wherein the at least one spatial feature indicates at least one physical property of the gesture; and output, for display at the output device, based at least in part on the processing of the at least one spatial feature of the gesture using the neural network, the at least one character string.
    Type: Application
    Filed: April 30, 2020
    Publication date: August 13, 2020
    Applicant: Google LLC
    Inventors: Shumin Zhai, Thomas Breuel, Ouais Alsharif, Yu Ouyang, Francoise Beaufays, Johan Schalkwyk
  • Patent number: 10671281
    Abstract: In some examples, a computing device includes at least one processor; and at least one module, operable by the at least one processor to: output, for display at an output device, a graphical keyboard; receive an indication of a gesture detected at a location of a presence-sensitive input device, wherein the location of the presence-sensitive input device corresponds to a location of the output device that outputs the graphical keyboard; determine, based on at least one spatial feature of the gesture that is processed by the computing device using a neural network, at least one character string, wherein the at least one spatial feature indicates at least one physical property of the gesture; and output, for display at the output device, based at least in part on the processing of the at least one spatial feature of the gesture using the neural network, the at least one character string.
    Type: Grant
    Filed: January 30, 2019
    Date of Patent: June 2, 2020
    Assignee: Google LLC
    Inventors: Shumin Zhai, Thomas Breuel, Ouais Alsharif, Yu Ouyang, Francoise Beaufays, Johan Schalkwyk
  • Publication number: 20200050661
    Abstract: In one example, a computing device includes at least one processor that is operatively coupled to a presence-sensitive display and a gesture module operable by the at least one processor. The gesture module may be operable by the at least one processor to output, for display at the presence-sensitive display, a graphical keyboard comprising a plurality of keys and receive an indication of a continuous gesture detected at the presence-sensitive display, the continuous gesture to select a group of keys of the plurality of keys. The gesture module may be further operable to determine, in response to receiving the indication of the continuous gesture and based at least in part on the group of keys of the plurality of keys, a candidate phrase comprising a group of candidate words.
    Type: Application
    Filed: October 16, 2019
    Publication date: February 13, 2020
    Applicant: Google LLC
    Inventors: Shumin Zhai, Yu Ouyang, Ken Wakasa, Satoshi Kataoka
  • Patent number: 10489508
    Abstract: In one example, a computing device includes at least one processor that is operatively coupled to a presence-sensitive display and a gesture module operable by the at least one processor. The gesture module may be operable by the at least one processor to output, for display at the presence-sensitive display, a graphical keyboard comprising a plurality of keys and receive an indication of a continuous gesture detected at the presence-sensitive display, the continuous gesture to select a group of keys of the plurality of keys. The gesture module may be further operable to determine, in response to receiving the indication of the continuous gesture and based at least in part on the group of keys of the plurality of keys, a candidate phrase comprising a group of candidate words.
    Type: Grant
    Filed: September 13, 2017
    Date of Patent: November 26, 2019
    Assignee: Google LLC
    Inventors: Shumin Zhai, Yu Ouyang, Ken Wakasa, Satoshi Kataoka
  • Publication number: 20190155504
    Abstract: In some examples, a computing device includes at least one processor; and at least one module, operable by the at least one processor to: output, for display at an output device, a graphical keyboard; receive an indication of a gesture detected at a location of a presence-sensitive input device, wherein the location of the presence-sensitive input device corresponds to a location of the output device that outputs the graphical keyboard; determine, based on at least one spatial feature of the gesture that is processed by the computing device using a neural network, at least one character string, wherein the at least one spatial feature indicates at least one physical property of the gesture; and output, for display at the output device, based at least in part on the processing of the at least one spatial feature of the gesture using the neural network, the at least one character string.
    Type: Application
    Filed: January 30, 2019
    Publication date: May 23, 2019
    Inventors: Shumin Zhai, Thomas Breuel, Ouais Alsharif, Yu Ouyang, Francoise Beaufays, Johan Schalkwyk
  • Patent number: 10282666
    Abstract: A method may include determining, by a computing device and based on at least one user coherency factor, a user coherency level. The coherency level may include a predicted ability of a user to comprehend information. The method may also include determining, by the computing device and based on the user coherency level, information having a complexity that satisfies the predicted ability of the user to comprehend information. The method may further include outputting, by the computing device, at least a portion of the information.
    Type: Grant
    Filed: November 10, 2015
    Date of Patent: May 7, 2019
    Assignee: Google LLC
    Inventor: Yu Ouyang
  • Patent number: 10248313
    Abstract: In some examples, a computing device includes at least one processor; and at least one module, operable by the at least one processor to: output, for display at an output device, a graphical keyboard; receive an indication of a gesture detected at a location of a presence-sensitive input device, wherein the location of the presence-sensitive input device corresponds to a location of the output device that outputs the graphical keyboard; determine, based on at least one spatial feature of the gesture that is processed by the computing device using a neural network, at least one character string, wherein the at least one spatial feature indicates at least one physical property of the gesture; and output, for display at the output device, based at least in part on the processing of the at least one spatial feature of the gesture using the neural network, the at least one character string.
    Type: Grant
    Filed: March 29, 2017
    Date of Patent: April 2, 2019
    Assignee: Google LLC
    Inventors: Shumin Zhai, Thomas Breuel, Ouais Alsharif, Yu Ouyang, Francoise Beaufays, Johan Schalkwyk
  • Patent number: 10241673
    Abstract: In one example, a method may include outputting, by a computing device and for display, a graphical keyboard comprising a plurality of keys, and receiving an indication of a gesture. The method may include determining an alignment score that is based at least in part on a word prefix and an alignment point traversed by the gesture. The method may include determining at least one alternative character that is based at least in part on a misspelling that includes at least a portion of the word prefix. The method may include determining an alternative alignment score based at least in part on the alternative character; and outputting, by the computing device and for display, based at least in part on the alternative alignment score, a candidate word based at least in part on the alternative character.
    Type: Grant
    Filed: November 9, 2017
    Date of Patent: March 26, 2019
    Assignee: Google LLC
    Inventors: Yu Ouyang, Shumin Zhai
  • Patent number: 10140284
    Abstract: A graphical keyboard including a number of keys is output for display at a display device. The computing device receives an indication of a gesture to select at least two of the keys based at least in part on detecting an input unit at locations of a presence-sensitive input device. In response to the detecting and while the input unit is detected at the presence-sensitive input device: the computing device determines a candidate word for the gesture based at least in part on the at least two keys and the candidate word is output for display at a first location of the display device. In response to determining that the input unit is no longer detected at the presence-sensitive input device, the displayed candidate word is output for display at a second location of the display device.
    Type: Grant
    Filed: April 4, 2017
    Date of Patent: November 27, 2018
    Assignee: Google LLC
    Inventors: Xiaojun Bi, Yu Ouyang, Shumin Zhai
  • Patent number: D829221
    Type: Grant
    Filed: September 19, 2016
    Date of Patent: September 25, 2018
    Assignee: Google LLC
    Inventors: Yu Ouyang, Shumin Zhai
  • Patent number: D918755
    Type: Grant
    Filed: July 17, 2020
    Date of Patent: May 11, 2021
    Assignee: SHENZHEN LANFENG TECHNOLOGY CO., LTD.
    Inventor: Yu Ouyang
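
Several of the gesture-typing entries above (e.g., patent numbers 10248313 and 10671281) describe decoding graphical-keyboard gestures from "spatial features" that indicate physical properties of the gesture, processed with a neural network. A minimal sketch of that idea, not the claimed implementation: the two features (path length, net displacement) and the single-layer scoring function here are purely illustrative stand-ins for whatever features and network the patents actually cover.

```python
import numpy as np

def spatial_features(points):
    # Two simple physical properties of a gesture path:
    # total arc length, and straight-line displacement from start to end.
    pts = np.asarray(points, dtype=float)
    segments = np.diff(pts, axis=0)  # deltas between consecutive touch points
    path_length = float(np.linalg.norm(segments, axis=1).sum())
    net_displacement = float(np.linalg.norm(pts[-1] - pts[0]))
    return np.array([path_length, net_displacement])

def score_candidates(features, weight_rows):
    # Toy stand-in for a neural network: one linear layer followed by a
    # softmax, yielding a probability per candidate character string.
    logits = np.array([row @ features for row in weight_rows])
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    return exp / exp.sum()

# A straight swipe from (0, 0) to (3, 4) has path length 5 and displacement 5.
feats = spatial_features([(0, 0), (3, 4)])
probs = score_candidates(feats, [np.array([1.0, 0.0]), np.array([0.0, 1.0])])
```

A real decoder would combine many more features (curvature, dwell time, key proximity along the path) and a trained model; the point here is only the shape of the pipeline the abstracts describe: gesture → spatial features → network → scored character strings.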