Patents by Inventor Kai Zhenyu Wang

Kai Zhenyu Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11292462
    Abstract: A trajectory estimate of a wheeled vehicle can be determined based at least in part on determining a wheel angle associated with the vehicle. In some examples, at least a portion of an image associated with the wheeled vehicle may be input into a machine-learned model that is trained to classify and/or regress wheel directions of wheeled vehicles. The machine-learned model may output a predicted wheel direction. The wheel direction and/or additional or historical sensor data may be used to estimate a trajectory of the wheeled vehicle. The predicted trajectory of the object can then be used to generate and refine an autonomous vehicle's trajectory as the autonomous vehicle proceeds through the environment.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: April 5, 2022
    Assignee: Zoox, Inc.
    Inventors: Vasiliy Karasev, James William Vaisey Philbin, Sarah Tariq, Kai Zhenyu Wang
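    The abstract above describes turning a predicted wheel angle into a trajectory estimate. One conventional way to do this (a minimal sketch, not the patent's method; function names, the kinematic bicycle model, and all parameters are assumptions) is to roll a bicycle model forward from the vehicle's current state:

    ```python
    import math

    def predict_trajectory(x, y, heading, speed, wheel_angle, wheelbase,
                           dt=0.1, steps=10):
        """Roll a kinematic bicycle model forward to sketch a trajectory.

        wheel_angle is a predicted front-wheel steering angle in radians,
        e.g. the output of a wheel-direction model. Illustrative only.
        """
        points = []
        for _ in range(steps):
            x += speed * math.cos(heading) * dt
            y += speed * math.sin(heading) * dt
            # Heading changes in proportion to speed and steering angle.
            heading += (speed / wheelbase) * math.tan(wheel_angle) * dt
            points.append((x, y, heading))
        return points
    ```

    With a zero wheel angle the rollout is a straight line; a positive (left) steering angle curves the path left, which is the qualitative behavior a downstream planner would consume.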
  • Publication number: 20220092983
    Abstract: Techniques are discussed for determining prediction probabilities of an object based on a top-down representation of an environment. Data representing objects in an environment can be captured. Aspects of the environment can be represented as map data. A multi-channel image representing a top-down view of object(s) in the environment can be generated based on the data representing the objects and map data. The multi-channel image can be used to train a machine learned model by minimizing an error between predictions from the machine learned model and a captured trajectory associated with the object. Once trained, the machine learned model can be used to generate prediction probabilities of objects in an environment, and the vehicle can be controlled based on such prediction probabilities.
    Type: Application
    Filed: December 6, 2021
    Publication date: March 24, 2022
    Inventors: Xi Joey Hong, Benjamin John Sapp, James William Vaisey Philbin, Kai Zhenyu Wang
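    The abstract above centers on a multi-channel top-down image built from object data and map data. A toy rasterization along those lines (channel layout, shapes, and names are illustrative assumptions, not the patent's encoding) might look like:

    ```python
    import numpy as np

    def build_topdown_image(objects, height=64, width=64, channels=3):
        """Rasterize objects into a multi-channel top-down image.

        Illustrative channel layout: channel 0 marks occupancy,
        channel 1 stores speed, channel 2 is reserved for map data.
        Each object is a dict with a grid "cell" and a "speed".
        """
        img = np.zeros((channels, height, width), dtype=np.float32)
        for obj in objects:
            r, c = obj["cell"]
            img[0, r, c] = 1.0          # occupancy channel
            img[1, r, c] = obj["speed"] # velocity channel
        return img
    ```

    A tensor of this shape is what a convolutional prediction model would take as input; stacking several such images over time captures object motion.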
  • Patent number: 11280630
    Abstract: Techniques are disclosed for updating map data. The techniques may include detecting a traffic light in a first image, determining, based at least in part on the traffic light detected in the first image, a proposed three-dimensional position of the traffic light in a three-dimensional coordinate system associated with map data. The proposed three-dimensional position may then be projected into a second image to determine a two-dimensional position of the traffic light in the second image and the second image may be annotated, as an annotated image, with a proposed traffic light location indicator associated with the traffic light. The techniques further include causing a display to display the annotated image to a user, receiving user input associated with the annotated images, and updating, as updated map data, the map data to include a position of the traffic light in the map data based at least in part on the user input.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: March 22, 2022
    Assignee: Zoox, Inc.
    Inventors: Christopher James Gibson, Kai Zhenyu Wang
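    The abstract above hinges on projecting a proposed 3-D traffic-light position into a second image to obtain a 2-D annotation location. A standard pinhole-camera projection (a sketch under assumed conventions; the pose/intrinsics representation is not specified by the patent) illustrates the step:

    ```python
    import numpy as np

    def project_to_image(point_3d, camera_pose, intrinsics):
        """Project a 3-D world point into pixel coordinates.

        camera_pose: 4x4 world-to-camera transform; intrinsics: 3x3 K
        matrix. Returns None if the point is behind the camera.
        """
        p = np.append(point_3d, 1.0)   # homogeneous world point
        cam = camera_pose @ p          # into the camera frame
        if cam[2] <= 0:
            return None                # not visible in this image
        uv = intrinsics @ cam[:3]
        return uv[:2] / uv[2]          # perspective divide -> pixels
    ```

    The resulting pixel location is where a proposed traffic-light indicator would be drawn on the annotated image for a human reviewer to confirm or correct.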
  • Patent number: 11276179
    Abstract: Techniques for determining predictions on a top-down representation of an environment based on object movement are discussed herein. Sensors of a first vehicle (such as an autonomous vehicle) may capture sensor data of an environment, which may include object(s) separate from the first vehicle (e.g., a vehicle, a pedestrian, a bicycle). A multi-channel image representing a top-down view of the object(s) and the environment may be generated based in part on the sensor data. Environmental data (object extents, velocities, lane positions, crosswalks, etc.) may also be encoded in the image. Multiple images may be generated representing the environment over time and input into a prediction system configured to output a trajectory template (e.g., general intent for future movement) and a predicted trajectory (e.g., more accurate predicted movement) associated with each object. The prediction system may include a machine learned model configured to output the trajectory template(s) and the predicted trajector(ies).
    Type: Grant
    Filed: December 18, 2019
    Date of Patent: March 15, 2022
    Assignee: Zoox, Inc.
    Inventors: Andres Guillermo Morales Morales, Marin Kobilarov, Gowtham Garimella, Kai Zhenyu Wang
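    The abstract above pairs a coarse trajectory template (general intent) with a refined predicted trajectory. One simple way to combine two such heads (a hypothetical decoding step; the shapes and the offset-refinement scheme are assumptions for illustration) is to pick the most likely template and apply regressed per-waypoint offsets:

    ```python
    import numpy as np

    def decode_prediction(template_logits, offsets, templates):
        """Combine a trajectory template with per-waypoint refinements.

        templates: (T, N, 2) library of coarse future paths,
        template_logits: (T,) scores, offsets: (N, 2) regressed
        corrections. Illustrative shapes only.
        """
        t = int(np.argmax(template_logits))  # most likely intent
        return templates[t] + offsets        # refined trajectory
    ```

    Splitting the problem this way lets the classification head capture discrete intent (e.g. turn vs. go straight) while the regression head sharpens the exact path.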
  • Patent number: 11195418
    Abstract: Techniques are discussed for determining prediction probabilities of an object based on a top-down representation of an environment. Data representing objects in an environment can be captured. Aspects of the environment can be represented as map data. A multi-channel image representing a top-down view of object(s) in the environment can be generated based on the data representing the objects and map data. The multi-channel image can be used to train a machine learned model by minimizing an error between predictions from the machine learned model and a captured trajectory associated with the object. Once trained, the machine learned model can be used to generate prediction probabilities of objects in an environment, and the vehicle can be controlled based on such prediction probabilities.
    Type: Grant
    Filed: May 22, 2019
    Date of Patent: December 7, 2021
    Assignee: Zoox, Inc.
    Inventors: Xi Joey Hong, Benjamin John Sapp, James William Vaisey Philbin, Kai Zhenyu Wang
  • Publication number: 20210347383
    Abstract: Techniques to predict object behavior in an environment are discussed herein. For example, such techniques may include determining a trajectory of the object, determining an intent of the trajectory, and sending the trajectory and the intent to a vehicle computing system to control an autonomous vehicle. The vehicle computing system may implement a machine learned model to process data such as sensor data and map data. The machine learned model can associate different intentions of an object in an environment with different trajectories. A vehicle, such as an autonomous vehicle, can be controlled to traverse an environment based on the objects' intentions and trajectories.
    Type: Application
    Filed: May 8, 2020
    Publication date: November 11, 2021
    Inventors: Kenneth Michael Siebert, Gowtham Garimella, Benjamin Isaac Mattinson, Samir Parikh, Kai Zhenyu Wang
  • Publication number: 20210331703
    Abstract: Techniques relating to monitoring map consistency are described. In an example, a monitoring component associated with a vehicle can receive sensor data associated with an environment in which the vehicle is positioned. The monitoring component can generate, based at least in part on the sensor data, an estimated map of the environment, wherein the estimated map is encoded with policy information for driving within the environment. The monitoring component can then compare first information associated with a stored map of the environment with second information associated with the estimated map to determine whether the estimated map and the stored map are consistent. Component(s) associated with the vehicle can then control the vehicle based at least in part on results of the comparing.
    Type: Application
    Filed: April 23, 2020
    Publication date: October 28, 2021
    Inventors: Pengfei Duan, James William Vaisey Philbin, Cooper Stokes Sloan, Sarah Tariq, Feng Tian, Chuang Wang, Kai Zhenyu Wang, Yi Xu
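    The abstract above compares a stored map against a map estimated from live sensor data. A toy consistency check (the per-cell policy-label representation and the agreement threshold are illustrative assumptions, not the patent's comparison) could be:

    ```python
    def maps_consistent(stored, estimated, threshold=0.9):
        """Compare per-cell policy labels of a stored and an estimated map.

        Both maps are dicts mapping a grid cell to a policy label
        (e.g. 'drivable', 'stop'). Returns True when the fraction of
        agreeing cells meets the threshold. Illustrative only.
        """
        cells = stored.keys() & estimated.keys()
        if not cells:
            return False  # nothing overlapping to compare
        agree = sum(stored[c] == estimated[c] for c in cells)
        return agree / len(cells) >= threshold
    ```

    A disagreement signal like this is what would prompt the vehicle to fall back to more conservative behavior or flag the stored map for an update.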
  • Patent number: 11126873
    Abstract: Techniques for determining lighting states of a tracked object, such as a vehicle, are discussed herein. An autonomous vehicle can include an image sensor to capture image data of an environment. Objects can be identified in the image data as objects to be tracked. Frames of the image data representing the tracked object can be selected and input to a machine learning algorithm (e.g., a convolutional neural network, a recurrent neural network, etc.) that is trained to determine probabilities associated with one or more lighting states of the tracked object. Such lighting states include, but are not limited to, a blinker state(s), a brake state, a hazard state, etc. Based at least in part on the one or more probabilities associated with the one or more lighting states, the autonomous vehicle can determine a trajectory for the autonomous vehicle and/or can determine a predicted trajectory for the tracked object.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: September 21, 2021
    Assignee: Zoox, Inc.
    Inventors: Tencia Lee, Kai Zhenyu Wang, James William Vaisey Philbin
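    The abstract above produces per-state probabilities for lighting states that can be active at the same time (e.g. brake plus blinker). A common way to model that (a hedged sketch of the output head only; the state list and sigmoid multi-label formulation are assumptions) is independent sigmoids over per-state logits:

    ```python
    import math

    LIGHT_STATES = ("left_blinker", "right_blinker", "brake", "hazard")

    def lighting_state_probs(logits):
        """Map per-state logits (e.g. from a CNN over cropped frames of
        the tracked vehicle) to independent probabilities.

        Multi-label sigmoids rather than a softmax, because several
        lighting states can be on simultaneously. Illustrative only.
        """
        return {s: 1.0 / (1.0 + math.exp(-z))
                for s, z in zip(LIGHT_STATES, logits)}
    ```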
  • Patent number: 11126179
    Abstract: Techniques for determining and/or predicting a trajectory of an object by using the appearance of the object, as captured in an image, are discussed herein. Image data, sensor data, and/or a predicted trajectory of the object (e.g., a pedestrian, animal, and the like) may be used to train a machine learning model that can subsequently be provided to, and used by, an autonomous vehicle for operation and navigation. In some implementations, predicted trajectories may be compared to actual trajectories and such comparisons are used as training data for machine learning.
    Type: Grant
    Filed: February 21, 2019
    Date of Patent: September 21, 2021
    Assignee: Zoox, Inc.
    Inventors: Vasiliy Karasev, Tencia Lee, James William Vaisey Philbin, Sarah Tariq, Kai Zhenyu Wang
  • Publication number: 20210271901
    Abstract: Techniques for determining predictions on a top-down representation of an environment based on vehicle action(s) are discussed herein. Sensors of a first vehicle (such as an autonomous vehicle) can capture sensor data of an environment, which may include object(s) separate from the first vehicle (e.g., a vehicle or a pedestrian). A multi-channel image representing a top-down view of the object(s) and the environment can be generated based on the sensor data, map data, and/or action data. Environmental data (object extents, velocities, lane positions, crosswalks, etc.) can be encoded in the image. Action data can represent a target lane, trajectory, etc. of the first vehicle. Multiple images can be generated representing the environment over time and input into a prediction system configured to output prediction probabilities associated with possible locations of the object(s) in the future, which may be based on the actions of the autonomous vehicle.
    Type: Application
    Filed: May 20, 2021
    Publication date: September 2, 2021
    Inventors: Gowtham Garimella, Marin Kobilarov, Andres Guillermo Morales Morales, Kai Zhenyu Wang
  • Publication number: 20210192748
    Abstract: Techniques for determining predictions on a top-down representation of an environment based on object movement are discussed herein. Sensors of a first vehicle (such as an autonomous vehicle) may capture sensor data of an environment, which may include object(s) separate from the first vehicle (e.g., a vehicle, a pedestrian, a bicycle). A multi-channel image representing a top-down view of the object(s) and the environment may be generated based in part on the sensor data. Environmental data (object extents, velocities, lane positions, crosswalks, etc.) may also be encoded in the image. Multiple images may be generated representing the environment over time and input into a prediction system configured to output a trajectory template (e.g., general intent for future movement) and a predicted trajectory (e.g., more accurate predicted movement) associated with each object. The prediction system may include a machine learned model configured to output the trajectory template(s) and the predicted trajector(ies).
    Type: Application
    Filed: December 18, 2019
    Publication date: June 24, 2021
    Inventors: Andres Guillermo Morales Morales, Marin Kobilarov, Gowtham Garimella, Kai Zhenyu Wang
  • Patent number: 11023749
    Abstract: Techniques for determining predictions on a top-down representation of an environment based on vehicle action(s) are discussed herein. Sensors of a first vehicle (such as an autonomous vehicle) can capture sensor data of an environment, which may include object(s) separate from the first vehicle (e.g., a vehicle or a pedestrian). A multi-channel image representing a top-down view of the object(s) and the environment can be generated based on the sensor data, map data, and/or action data. Environmental data (object extents, velocities, lane positions, crosswalks, etc.) can be encoded in the image. Action data can represent a target lane, trajectory, etc. of the first vehicle. Multiple images can be generated representing the environment over time and input into a prediction system configured to output prediction probabilities associated with possible locations of the object(s) in the future, which may be based on the actions of the autonomous vehicle.
    Type: Grant
    Filed: July 5, 2019
    Date of Patent: June 1, 2021
    Assignee: Zoox, Inc.
    Inventors: Gowtham Garimella, Marin Kobilarov, Andres Guillermo Morales Morales, Kai Zhenyu Wang
  • Publication number: 20210156704
    Abstract: Techniques are disclosed for updating map data. The techniques may include detecting a traffic light in a first image, determining, based at least in part on the traffic light detected in the first image, a proposed three-dimensional position of the traffic light in a three-dimensional coordinate system associated with map data. The proposed three-dimensional position may then be projected into a second image to determine a two-dimensional position of the traffic light in the second image and the second image may be annotated, as an annotated image, with a proposed traffic light location indicator associated with the traffic light. The techniques further include causing a display to display the annotated image to a user, receiving user input associated with the annotated images, and updating, as updated map data, the map data to include a position of the traffic light in the map data based at least in part on the user input.
    Type: Application
    Filed: November 27, 2019
    Publication date: May 27, 2021
    Inventors: Christopher James Gibson, Kai Zhenyu Wang
  • Publication number: 20210053570
    Abstract: Techniques for determining a vehicle action and controlling a vehicle to perform the vehicle action for navigating the vehicle in an environment can include determining a vehicle action, such as a lane change action, for a vehicle to perform in an environment. The vehicle can detect, based at least in part on sensor data, an object associated with a target lane associated with the lane change action. In some instances, the vehicle may determine attribute data associated with the object and input the attribute data to a machine-learned model that can output a yield score. Based on such a yield score, the vehicle may determine whether it is safe to perform the lane change action.
    Type: Application
    Filed: August 23, 2019
    Publication date: February 25, 2021
    Inventors: Abishek Krishna Akella, Vasiliy Karasev, Kai Zhenyu Wang, Rick Zhang
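    The abstract above scores whether an object in the target lane will yield, then gates the lane change on that score. A toy stand-in for the learned model (a linear-logistic sketch; the attribute set, weights, and threshold are all illustrative assumptions) shows the decision flow:

    ```python
    import math

    def yield_score(attributes, weights, bias=0.0):
        """Score how likely an object in the target lane is to yield.

        attributes might encode relative speed, gap distance, etc.;
        weights stand in for a learned model. Illustrative only.
        """
        z = sum(a * w for a, w in zip(attributes, weights)) + bias
        return 1.0 / (1.0 + math.exp(-z))

    def safe_to_change_lane(score, threshold=0.8):
        """Gate the lane change on the yield score."""
        return score >= threshold
    ```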
  • Publication number: 20210004611
    Abstract: Techniques for determining predictions on a top-down representation of an environment based on vehicle action(s) are discussed herein. Sensors of a first vehicle (such as an autonomous vehicle) can capture sensor data of an environment, which may include object(s) separate from the first vehicle (e.g., a vehicle or a pedestrian). A multi-channel image representing a top-down view of the object(s) and the environment can be generated based on the sensor data, map data, and/or action data. Environmental data (object extents, velocities, lane positions, crosswalks, etc.) can be encoded in the image. Action data can represent a target lane, trajectory, etc. of the first vehicle. Multiple images can be generated representing the environment over time and input into a prediction system configured to output prediction probabilities associated with possible locations of the object(s) in the future, which may be based on the actions of the autonomous vehicle.
    Type: Application
    Filed: July 5, 2019
    Publication date: January 7, 2021
    Inventors: Gowtham Garimella, Marin Kobilarov, Andres Guillermo Morales Morales, Kai Zhenyu Wang
  • Publication number: 20200272148
    Abstract: Techniques for determining and/or predicting a trajectory of an object by using the appearance of the object, as captured in an image, are discussed herein. Image data, sensor data, and/or a predicted trajectory of the object (e.g., a pedestrian, animal, and the like) may be used to train a machine learning model that can subsequently be provided to, and used by, an autonomous vehicle for operation and navigation. In some implementations, predicted trajectories may be compared to actual trajectories and such comparisons are used as training data for machine learning.
    Type: Application
    Filed: February 21, 2019
    Publication date: August 27, 2020
    Inventors: Vasiliy Karasev, Tencia Lee, James William Vaisey Philbin, Sarah Tariq, Kai Zhenyu Wang
  • Publication number: 20190354786
    Abstract: Techniques for determining lighting states of a tracked object, such as a vehicle, are discussed herein. An autonomous vehicle can include an image sensor to capture image data of an environment. Objects can be identified in the image data as objects to be tracked. Frames of the image data representing the tracked object can be selected and input to a machine learning algorithm (e.g., a convolutional neural network, a recurrent neural network, etc.) that is trained to determine probabilities associated with one or more lighting states of the tracked object. Such lighting states include, but are not limited to, a blinker state(s), a brake state, a hazard state, etc. Based at least in part on the one or more probabilities associated with the one or more lighting states, the autonomous vehicle can determine a trajectory for the autonomous vehicle and/or can determine a predicted trajectory for the tracked object.
    Type: Application
    Filed: May 17, 2018
    Publication date: November 21, 2019
    Inventors: Tencia Lee, Kai Zhenyu Wang, James William Vaisey Philbin
  • Publication number: 20190272446
    Abstract: A system may automatically create training datasets for training a segmentation model to recognize features such as lanes on a road. The system may receive sensor data representative of a portion of an environment and map data from a map data store including existing map data for the portion of the environment that includes features present in that portion of the environment. The system may project or overlay the features onto the sensor data to create training datasets for training the segmentation model, which may be a neural network. The training datasets may be communicated to the segmentation model to train the segmentation model to segment data associated with similar features present in different sensor data. The trained segmentation model may be used to update the map data store, and may be used to segment sensor data obtained from other portions of the environment, such as portions not previously mapped.
    Type: Application
    Filed: March 2, 2018
    Publication date: September 5, 2019
    Inventors: Juhana Kangaspunta, Kai Zhenyu Wang, James William Vaisey Philbin
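    The abstract above auto-labels training data by overlaying existing map features onto sensor data. A toy version of that overlay step (grid-cell representation and names are illustrative assumptions, not the patent's pipeline) produces a per-pixel label mask aligned with the sensor frame:

    ```python
    def rasterize_labels(feature_cells, height, width):
        """Build a per-pixel label mask by overlaying map features
        (e.g. lane cells already known from the map store) onto a
        sensor-aligned grid, auto-generating segmentation labels.
        Cells outside the grid are ignored. Illustrative only.
        """
        mask = [[0] * width for _ in range(height)]
        for r, c in feature_cells:
            if 0 <= r < height and 0 <= c < width:
                mask[r][c] = 1
        return mask
    ```

    Pairing such masks with the corresponding sensor frames yields training examples without manual annotation, which is the labor-saving point of the described system.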