Patents by Inventor Mengye Ren

Mengye Ren has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240085908
    Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
    Type: Application
    Filed: November 17, 2023
    Publication date: March 14, 2024
    Inventors: Raquel Urtasun, Mengye Ren, Andrei Pokrovsky, Bin Yang
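    A minimal NumPy sketch of the core idea in this entry, namely restricting convolution to the relevant portions of sparse imagery. The block size, the occupancy test, and the `dense_conv2d`/`sparse_block_conv` helpers are illustrative assumptions, not the claimed implementation.

    ```python
    import numpy as np

    def dense_conv2d(patch, kernel):
        """Plain 'valid' 2-D convolution, standing in for a CNN layer."""
        kh, kw = kernel.shape
        ph, pw = patch.shape
        out = np.zeros((ph - kh + 1, pw - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(patch[i:i + kh, j:j + kw] * kernel)
        return out

    def sparse_block_conv(image, kernel, block=8):
        """Convolve only the blocks that contain non-zero (relevant) pixels.

        Blocks that are entirely zero are skipped, mirroring the idea of not
        performing convolutions over sparse or irrelevant regions of the imagery.
        """
        h, w = image.shape
        kh, kw = kernel.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for by in range(0, h - kh + 1, block):
            for bx in range(0, w - kw + 1, block):
                # Each patch carries a halo of kernel size so block outputs stay exact.
                patch = image[by:by + block + kh - 1, bx:bx + block + kw - 1]
                if not np.any(patch):
                    continue  # sparse region: skip the computation entirely
                res = dense_conv2d(patch, kernel)
                out[by:by + res.shape[0], bx:bx + res.shape[1]] = res
        return out

    # Usage: a mostly empty image with one populated corner.
    img = np.zeros((32, 32))
    img[2:6, 2:6] = 1.0
    print(sparse_block_conv(img, np.ones((3, 3)) / 9.0).max())  # 1.0
    ```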
  • Publication number: 20240010241
    Abstract: A computing system can input first relative location embedding data into an interaction transformer model and receive, as an output of the interaction transformer model, motion forecast data for actors relative to a vehicle. The computing system can input the motion forecast data into a prediction model to receive respective trajectories for the actors for a current time step and respective projected trajectories for the actors for a subsequent time step. The computing system can generate second relative location embedding data based on the respective projected trajectories from the second time step. The computing system can produce second motion forecast data using the interaction transformer model based on the second relative location embedding. The computing system can determine second respective trajectories for the actors using the prediction model based on the second forecast data.
    Type: Application
    Filed: August 31, 2023
    Publication date: January 11, 2024
    Inventors: Lingyun Li, Bin Yang, Wenyuan Zeng, Ming Liang, Mengye Ren, Sean Segal, Raquel Urtasun Sotil
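    The interaction transformer described above can be pictured as a two-pass loop: embed relative actor locations, attend over the actors, predict trajectories, then re-embed the projected positions and forecast again. The sketch below is a hedged NumPy approximation; `relative_location_embedding`, the toy prediction head, and all weight shapes are assumptions rather than the patented architecture.

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def relative_location_embedding(positions):
        """Pairwise offsets between actors, flattened per actor (illustrative encoding)."""
        rel = positions[:, None, :] - positions[None, :, :]        # (N, N, 2)
        return rel.reshape(len(positions), -1)                     # (N, 2N)

    def interaction_step(features, rel_embed, wq, wk, wv):
        """One transformer-style attention pass over actors plus their relative embedding."""
        x = np.concatenate([features, rel_embed], axis=-1)
        q, k, v = x @ wq, x @ wk, x @ wv
        attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))
        return attn @ v                                            # motion forecast features

    def predict_trajectories(forecast, positions, horizon=5):
        """Toy prediction head: read a velocity off the forecast and roll it out."""
        vel = forecast[:, :2]
        return positions[None] + vel[None] * np.arange(1, horizon + 1)[:, None, None]

    rng = np.random.default_rng(0)
    n_actors, fdim = 4, 8
    positions = rng.normal(size=(n_actors, 2))
    features = rng.normal(size=(n_actors, fdim))
    d_in = fdim + 2 * n_actors
    wq, wk, wv = (rng.normal(size=(d_in, 16)) * 0.1 for _ in range(3))

    # Pass 1: forecast from the first relative location embedding.
    emb1 = relative_location_embedding(positions)
    traj1 = predict_trajectories(interaction_step(features, emb1, wq, wk, wv), positions)

    # Pass 2: re-embed the projected positions and refine the forecast.
    emb2 = relative_location_embedding(traj1[-1])
    traj2 = predict_trajectories(interaction_step(features, emb2, wq, wk, wv), traj1[-1])
    print(traj2.shape)  # (horizon, actors, 2)
    ```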
  • Patent number: 11860629
    Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: January 2, 2024
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Mengye Ren, Andrei Pokrovsky, Bin Yang
  • Patent number: 11834069
    Abstract: Systems and methods for generating semantic occupancy maps are provided. In particular, a computing system can obtain map data for a geographic area and sensor data obtained by an autonomous vehicle. The computing system can identify feature data included in the map data and sensor data. For each semantic object type from a plurality of semantic object types, the computing system can use the feature data as input to a respective machine-learned model from a plurality of machine-learned models to determine one or more occupancy maps for one or more future timesteps, where the respective machine-learned model is trained to determine occupancy for that semantic object type. The computing system can select a trajectory for the autonomous vehicle based on the plurality of occupancy maps associated with the plurality of semantic object types.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: December 5, 2023
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Abbas Sadat, Sergio Casas, Mengye Ren
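    As a rough illustration of how per-class occupancy maps can feed trajectory selection, the sketch below scores candidate trajectories by the occupancy they accumulate across semantic object types and picks the cheapest one. The `predict_occupancy` stand-in, grid size, and cost definition are assumptions for illustration only.

    ```python
    import numpy as np

    def predict_occupancy(features, timesteps, grid=(20, 20), seed=0):
        """Stand-in for one per-class machine-learned model: occupancy probabilities
        over a grid for each future timestep."""
        rng = np.random.default_rng(seed)
        return rng.random(size=(timesteps, *grid)) * features.mean()

    def trajectory_cost(trajectory, occupancy_maps):
        """Sum occupancy over every semantic class at the cells the trajectory visits."""
        cost = 0.0
        for maps in occupancy_maps.values():                  # one stack of maps per class
            for t, (row, col) in enumerate(trajectory):
                cost += maps[t, row, col]
        return cost

    semantic_classes = ["vehicle", "pedestrian", "bicycle"]
    features = np.ones(16)                                    # fused map + sensor features
    timesteps = 4

    # One occupancy model per semantic object type (hypothetical stand-ins).
    occupancy = {cls: predict_occupancy(features, timesteps, seed=i)
                 for i, cls in enumerate(semantic_classes)}

    # Candidate trajectories given as the grid cell visited at each future timestep.
    candidates = [
        [(5, 5), (5, 6), (5, 7), (5, 8)],
        [(5, 5), (6, 5), (7, 5), (8, 5)],
    ]
    best = min(candidates, key=lambda traj: trajectory_cost(traj, occupancy))
    print("selected trajectory:", best)
    ```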
  • Publication number: 20230359202
    Abstract: Systems and methods for generating motion plans for autonomous vehicles are provided. An autonomous vehicle can include a machine-learned motion planning system including one or more machine-learned models configured to generate target trajectories for the autonomous vehicle. The model(s) include a behavioral planning stage configured to receive situational data based at least in part on the one or more outputs of the set of sensors and to generate behavioral planning data based at least in part on the situational data and a unified cost function. The model(s) include a trajectory planning stage configured to receive the behavioral planning data from the behavioral planning stage and to generate target trajectory data for the autonomous vehicle based at least in part on the behavioral planning data and the unified cost function.
    Type: Application
    Filed: July 19, 2023
    Publication date: November 9, 2023
    Inventors: Raquel Urtasun, Yen-Chen Lin, Andrei Pokrovsky, Mengye Ren, Abbas Sadat, Ersin Yumer
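    The two-stage structure described above, a behavioral stage and a trajectory stage sharing one cost, can be sketched as follows. The lane-offset maneuvers, the cost terms, and the random refinement are hypothetical simplifications of what a learned planner would do; none of this is taken from the patent claims.

    ```python
    import numpy as np

    def unified_cost(trajectory, obstacles, target_speed=1.0):
        """Single cost shared by both stages: obstacle proximity plus speed deviation."""
        pts = np.asarray(trajectory)
        speed_err = np.abs(np.diff(pts[:, 0]) - target_speed).sum()
        proximity = sum(
            np.exp(-np.linalg.norm(pts - np.asarray(ob), axis=1)).sum() for ob in obstacles
        )
        return speed_err + proximity

    def behavioral_stage(obstacles):
        """Pick a coarse maneuver (here: a lane offset) by evaluating the unified cost."""
        maneuvers = {offset: [(t, offset) for t in range(5)] for offset in (-1.0, 0.0, 1.0)}
        return min(maneuvers, key=lambda o: unified_cost(maneuvers[o], obstacles))

    def trajectory_stage(lane_offset, obstacles, n_samples=50, seed=0):
        """Refine a continuous trajectory around the chosen maneuver under the same cost."""
        rng = np.random.default_rng(seed)
        best, best_cost = None, np.inf
        for _ in range(n_samples):
            lateral = lane_offset + rng.normal(scale=0.2, size=5)
            traj = list(zip(range(5), lateral))
            cost = unified_cost(traj, obstacles)
            if cost < best_cost:
                best, best_cost = traj, cost
        return best

    obstacles = [(2.0, 0.0)]                       # e.g. a stopped actor in the ego lane
    offset = behavioral_stage(obstacles)
    target_trajectory = trajectory_stage(offset, obstacles)
    print("maneuver offset:", offset)
    ```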
  • Patent number: 11780472
    Abstract: A computing system can input first relative location embedding data into an interaction transformer model and receive, as an output of the interaction transformer model, motion forecast data for actors relative to a vehicle. The computing system can input the motion forecast data into a prediction model to receive respective trajectories for the actors for a current time step and respective projected trajectories for the actors for a subsequent time step. The computing system can generate second relative location embedding data based on the respective projected trajectories from the second time step. The computing system can produce second motion forecast data using the interaction transformer model based on the second relative location embedding. The computing system can determine second respective trajectories for the actors using the prediction model based on the second forecast data.
    Type: Grant
    Filed: September 2, 2020
    Date of Patent: October 10, 2023
    Assignee: UATC, LLC
    Inventors: Lingyun Li, Bin Yang, Wenyuan Zeng, Ming Liang, Mengye Ren, Sean Segal, Raquel Urtasun
  • Patent number: 11769058
    Abstract: Systems and methods of the present disclosure provide an improved approach for open-set instance segmentation by identifying both known and unknown instances in an environment. For example, a method can include receiving sensor point cloud input data including a plurality of three-dimensional points. The method can include determining a feature embedding and at least one of an instance embedding, class embedding, and/or background embedding for each of the plurality of three-dimensional points. The method can include determining a first subset of points associated with one or more known instances within the environment based on the class embedding and the background embedding associated with each point in the plurality of points. The method can include determining a second subset of points associated with one or more unknown instances within the environment based on the first subset of points. The method can include segmenting the input data into known and unknown instances.
    Type: Grant
    Filed: October 17, 2022
    Date of Patent: September 26, 2023
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Kelvin Ka Wing Wong, Shenlong Wang, Mengye Ren, Ming Liang
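    A compact sketch of the open-set split described above: confident per-point class scores yield known instances, while points that are neither background nor confidently classified are grouped into unknown instances. The thresholds, the greedy grouping, and the pre-computed embeddings are illustrative assumptions.

    ```python
    import numpy as np

    def segment_open_set(class_logits, background_logits, instance_embed,
                         known_thresh=0.7, merge_dist=0.5):
        """Split points into known-class instances and unknown instances."""
        probs = np.exp(class_logits) / np.exp(class_logits).sum(axis=1, keepdims=True)
        bg = 1.0 / (1.0 + np.exp(-background_logits))
        known_mask = probs.max(axis=1) >= known_thresh
        unknown_mask = (~known_mask) & (bg < 0.5)      # neither known nor background

        known = {c: np.where(known_mask & (probs.argmax(axis=1) == c))[0]
                 for c in range(class_logits.shape[1])}

        # Greedy grouping of unknown points in instance-embedding space.
        remaining = list(np.where(unknown_mask)[0])
        unknown_instances = []
        while remaining:
            seed = remaining.pop(0)
            group = [seed]
            for idx in remaining[:]:
                if np.linalg.norm(instance_embed[idx] - instance_embed[seed]) < merge_dist:
                    group.append(idx)
                    remaining.remove(idx)
            unknown_instances.append(group)
        return known, unknown_instances

    rng = np.random.default_rng(0)
    n_points, n_classes = 200, 3
    class_logits = rng.normal(size=(n_points, n_classes))       # per-point class embedding scores
    background_logits = rng.normal(size=n_points)               # per-point background scores
    instance_embed = rng.normal(size=(n_points, 4))             # per-point instance embeddings
    known, unknown = segment_open_set(class_logits, background_logits, instance_embed)
    print(len(unknown), "unknown instance groups")
    ```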
  • Patent number: 11755014
    Abstract: Systems and methods for generating motion plans for autonomous vehicles are provided. An autonomous vehicle can include a machine-learned motion planning system including one or more machine-learned models configured to generate target trajectories for the autonomous vehicle. The model(s) include a behavioral planning stage configured to receive situational data based at least in part on the one or more outputs of the set of sensors and to generate behavioral planning data based at least in part on the situational data and a unified cost function. The model(s) include a trajectory planning stage configured to receive the behavioral planning data from the behavioral planning stage and to generate target trajectory data for the autonomous vehicle based at least in part on the behavioral planning data and the unified cost function.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: September 12, 2023
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Abbas Sadat, Mengye Ren, Andrei Pokrovsky, Yen-Chen Lin, Ersin Yumer
  • Patent number: 11691650
    Abstract: A computing system can be configured to input data that describes sensor data into an object detection model and receive, as an output of the object detection model, object detection data describing features of the plurality of the actors relative to the autonomous vehicle. The computing system can generate an input sequence that describes the object detection data. The computing system can analyze the input sequence using an interaction model to produce, as an output of the interaction model, an attention embedding with respect to the plurality of actors. The computing system can be configured to input the attention embedding into a recurrent model and determine respective trajectories for the plurality of actors based on motion forecast data received as an output of the recurrent model.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: July 4, 2023
    Assignee: UATC, LLC
    Inventors: Lingyun Li, Bin Yang, Ming Liang, Wenyuan Zeng, Mengye Ren, Sean Segal, Raquel Urtasun
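    This entry differs from the interaction transformer sketch earlier in that the attention embedding feeds a recurrent model that unrolls the forecast one step at a time. A hedged NumPy sketch follows; the tanh recurrence and the "first two hidden units as displacement" decoder are placeholders, not the patented models.

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attention_embedding(actor_features, wq, wk, wv):
        """Interaction model: one self-attention pass over the detected actors."""
        q, k, v = actor_features @ wq, actor_features @ wk, actor_features @ wv
        return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

    def recurrent_rollout(embedding, positions, steps, wx, wh):
        """Recurrent model: unroll a simple tanh RNN and decode a displacement per step."""
        h = np.zeros((embedding.shape[0], wh.shape[0]))
        traj = [positions]
        for _ in range(steps):
            h = np.tanh(embedding @ wx + h @ wh)
            traj.append(traj[-1] + h[:, :2])             # first two hidden units as (dx, dy)
        return np.stack(traj[1:])                        # (steps, actors, 2)

    rng = np.random.default_rng(1)
    n_actors, fdim, hdim = 5, 16, 8
    actor_features = rng.normal(size=(n_actors, fdim))   # output of the object detection model
    positions = rng.normal(size=(n_actors, 2))
    wq, wk, wv = (rng.normal(size=(fdim, hdim)) * 0.1 for _ in range(3))
    wx = rng.normal(size=(hdim, hdim)) * 0.1
    wh = rng.normal(size=(hdim, hdim)) * 0.1

    embedding = attention_embedding(actor_features, wq, wk, wv)
    print(recurrent_rollout(embedding, positions, steps=6, wx=wx, wh=wh).shape)  # (6, 5, 2)
    ```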
  • Patent number: 11686848
    Abstract: Systems and methods for training object detection models using adversarial examples are provided. A method includes obtaining a training scene and identifying a target object within the training scene. The method includes obtaining an adversarial object and generating a modified training scene based on the adversarial object, the target object, and the training scene. The modified training scene includes the training scene modified to include the adversarial object placed on the target object. The modified training scene is input to a machine-learned model configured to detect the target object. A detection output is determined based on whether the target object is detected, and the machine-learned model and the parameters of the adversarial object are trained based on the detection output. The machine-learned model is trained to maximize the detection output. The parameters of the adversarial object are trained to minimize the detection output.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: June 27, 2023
    Assignee: UATC, LLC
    Inventors: Xuanyuan Tu, Sivabalan Manivasagam, Mengye Ren, Ming Liang, Bin Yang, Raquel Urtasun
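    The min-max structure in this abstract, an adversary minimizing the detection output while the detector maximizes it, reduces to alternating gradient steps. The logistic "detector" below is a deliberately tiny stand-in so the opposing updates are easy to see; the parameterization and learning rate are assumptions.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def detection_score(model_w, target_feat, adv_params):
        """Toy detector: the adversarial object perturbs the target's features before scoring."""
        return sigmoid(model_w @ (target_feat + adv_params))

    rng = np.random.default_rng(0)
    dim = 8
    target_feat = rng.normal(size=dim)      # features of the target object in the training scene
    model_w = rng.normal(size=dim) * 0.1    # detector parameters
    adv_params = np.zeros(dim)              # parameters of the adversarial object (e.g. its shape)
    lr = 0.5

    for _ in range(200):
        s = detection_score(model_w, target_feat, adv_params)
        g = s * (1.0 - s)                                   # d(sigmoid)/d(logit)
        # Adversary: gradient descent on the detection output (hide the target).
        adv_params = np.clip(adv_params - lr * g * model_w, -1.0, 1.0)
        # Detector: gradient ascent on the detection output (stay robust to the adversary).
        model_w += lr * g * (target_feat + adv_params)

    print("final detection output:", float(detection_score(model_w, target_feat, adv_params)))
    ```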
  • Publication number: 20230196909
    Abstract: Example aspects of the present disclosure describe a scene generator for simulating scenes in an environment. For example, snapshots of simulated traffic scenes can be generated by sampling a joint probability distribution trained on real-world traffic scenes. In some implementations, samples of the joint probability distribution can be obtained by sampling a plurality of factorized probability distributions for a plurality of objects for sequential insertion into the scene.
    Type: Application
    Filed: February 13, 2023
    Publication date: June 22, 2023
    Inventors: Shuhan Tan, Kelvin Ka Wing Wong, Shenlong Wang, Sivabalan Manivasagam, Mengye Ren, Raquel Urtasun
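    The sampling procedure sketched below mirrors the sequential-insertion idea: draw an object count, then place each object from a simple distribution conditioned on what has already been inserted. The rejection-sampling placement rule and the minimum-gap constraint are stand-ins for the learned factorized distributions.

    ```python
    import numpy as np

    def sample_scene(rng, max_objects=10, region=50.0, min_gap=4.0):
        """Sequentially sample actors into one simulated traffic-scene snapshot."""
        n_objects = rng.integers(1, max_objects + 1)        # p(count)
        scene = []
        for _ in range(n_objects):
            for _ in range(100):                            # rejection-sampling attempts
                pos = rng.uniform(0.0, region, size=2)      # p(position | objects so far)
                heading = rng.uniform(0.0, 2 * np.pi)       # p(heading | position)
                if all(np.linalg.norm(pos - placed) >= min_gap for placed, _ in scene):
                    scene.append((pos, heading))
                    break
        return scene

    snapshot = sample_scene(np.random.default_rng(42))
    print(len(snapshot), "actors placed in the simulated scene")
    ```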
  • Publication number: 20230057604
    Abstract: Systems and methods of the present disclosure provide an improved approach for open-set instance segmentation by identifying both known and unknown instances in an environment. For example, a method can include receiving sensor point cloud input data including a plurality of three-dimensional points. The method can include determining a feature embedding and at least one of an instance embedding, class embedding, and/or background embedding for each of the plurality of three-dimensional points. The method can include determining a first subset of points associated with one or more known instances within the environment based on the class embedding and the background embedding associated with each point in the plurality of points. The method can include determining a second subset of points associated with one or more unknown instances within the environment based on the first subset of points. The method can include segmenting the input data into known and unknown instances.
    Type: Application
    Filed: October 17, 2022
    Publication date: February 23, 2023
    Inventors: Raquel Urtasun, Kelvin Ka Wing Wong, Shenlong Wang, Mengye Ren, Ming Liang
  • Patent number: 11580851
    Abstract: Example aspects of the present disclosure describe a scene generator for simulating scenes in an environment. For example, snapshots of simulated traffic scenes can be generated by sampling a joint probability distribution trained on real-world traffic scenes. In some implementations, samples of the joint probability distribution can be obtained by sampling a plurality of factorized probability distributions for a plurality of objects for sequential insertion into the scene.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: February 14, 2023
    Assignee: UATC, LLC
    Inventors: Shuhan Tan, Kelvin Ka Wing Wong, Shenlong Wang, Sivabalan Manivasagam, Mengye Ren, Raquel Urtasun
  • Patent number: 11475675
    Abstract: Systems and methods of the present disclosure provide an improved approach for open-set instance segmentation by identifying both known and unknown instances in an environment. For example, a method can include receiving sensor point cloud input data including a plurality of three-dimensional points. The method can include determining a feature embedding and at least one of an instance embedding, class embedding, and/or background embedding for each of the plurality of three-dimensional points. The method can include determining a first subset of points associated with one or more known instances within the environment based on the class embedding and the background embedding associated with each point in the plurality of points. The method can include determining a second subset of points associated with one or more unknown instances within the environment based on the first subset of points. The method can include segmenting the input data into known and unknown instances.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: October 18, 2022
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Kelvin Ka Wing Wong, Shenlong Wang, Mengye Ren, Ming Liang
  • Publication number: 20220153298
    Abstract: Techniques for generating testing data for an autonomous vehicle (AV) are described herein. A system can obtain sensor data descriptive of a traffic scenario. The traffic scenario can include a subject vehicle and actors in an environment. Additionally, the system can generate a perturbed trajectory for a first actor in the environment based on perturbation values. Moreover, the system can generate simulated sensor data. The simulated sensor data can include data descriptive of the perturbed trajectory for the first actor in the environment. Furthermore, the system can provide the simulated sensor data as input to an AV control system. The AV control system can be configured to process the simulated sensor data to generate an updated trajectory for the subject vehicle in the environment. Subsequently, the system can evaluate an adversarial loss function based on the updated trajectory for the subject vehicle to generate an adversarial loss value.
    Type: Application
    Filed: November 17, 2021
    Publication date: May 19, 2022
    Inventors: Jingkang Wang, Ava Alison Pun, Xuanyuan Tu, Mengye Ren, Abbas Sadat, Sergio Casas, Sivabalan Manivasagam, Raquel Urtasun
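    A small sketch of the adversarial test loop described above: perturb one actor's trajectory, let a stand-in AV controller react, and keep the perturbation that makes the resulting interaction most challenging under an adversarial loss. The controller stub, the loss (negative closest approach), and the random search are all illustrative assumptions.

    ```python
    import numpy as np

    def av_control_stub(ego_traj, actor_traj, avoid_dist=3.0):
        """Stand-in for the AV control system: nudge the ego away when the actor gets close."""
        updated = ego_traj.copy()
        for t in range(len(updated)):
            gap = updated[t] - actor_traj[t]
            dist = np.linalg.norm(gap)
            if dist < avoid_dist:
                updated[t] += gap / (dist + 1e-6) * (avoid_dist - dist)
        return updated

    def adversarial_loss(ego_traj, actor_traj):
        """Negative closest approach; larger values mean a more challenging scenario."""
        return -np.min(np.linalg.norm(ego_traj - actor_traj, axis=1))

    rng = np.random.default_rng(0)
    horizon = 10
    ego_nominal = np.stack([np.arange(horizon, dtype=float), np.zeros(horizon)], axis=1)
    actor_nominal = np.stack([np.arange(horizon, dtype=float), np.full(horizon, 5.0)], axis=1)

    worst_loss, worst_perturbation = -np.inf, None
    for _ in range(200):                                     # random search over perturbation values
        perturbation = rng.normal(scale=1.0, size=(horizon, 2))
        actor_traj = actor_nominal + perturbation            # perturbed trajectory for the first actor
        ego_traj = av_control_stub(ego_nominal, actor_traj)  # updated trajectory for the subject vehicle
        loss = adversarial_loss(ego_traj, actor_traj)
        if loss > worst_loss:
            worst_loss, worst_perturbation = loss, perturbation

    print("adversarial loss of the hardest perturbation found:", round(float(worst_loss), 3))
    ```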
  • Publication number: 20220157161
    Abstract: Example aspects of the present disclosure describe a scene generator for simulating scenes in an environment. For example, snapshots of simulated traffic scenes can be generated by sampling a joint probability distribution trained on real-world traffic scenes. In some implementations, samples of the joint probability distribution can be obtained by sampling a plurality of factorized probability distributions for a plurality of objects for sequential insertion into the scene.
    Type: Application
    Filed: November 17, 2021
    Publication date: May 19, 2022
    Inventors: Shuhan Tan, Kelvin Ka Wing Wong, Shenlong Wang, Sivabalan Manivasagam, Mengye Ren, Raquel Urtasun
  • Publication number: 20220032970
    Abstract: Systems and methods for improved vehicle-to-vehicle communications are provided. A system can obtain sensor data depicting its surrounding environment and input the sensor data (or processed sensor data) to a machine-learned model to perceive its surrounding environment based on its location within the environment. The machine-learned model can generate an intermediate environmental representation that encodes features within the surrounding environment. The system can receive a number of different intermediate environmental representations and corresponding locations from various other systems, aggregate the representations based on the corresponding locations, and perceive its surrounding environment based on the aggregated representations. The system can determine relative poses between each of the systems and an absolute pose for each system based on the representations.
    Type: Application
    Filed: January 15, 2021
    Publication date: February 3, 2022
    Inventors: Nicholas Baskar Vadivelu, Mengye Ren, Xuanyuan Tu, Raquel Urtasun, Jingkang Wang
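    The aggregation step described above can be pictured as: warp each received intermediate representation into the ego frame using the relative pose, then fuse. The translational grid shift and the mean fusion below are simplifying assumptions; a full system would also handle rotation and refine noisy poses from the representations themselves.

    ```python
    import numpy as np

    def warp_to_ego_frame(feature_map, sender_pose, ego_pose, cell_size=1.0):
        """Shift another vehicle's BEV feature map into the ego frame (translation only)."""
        offset = np.round((np.asarray(sender_pose) - np.asarray(ego_pose)) / cell_size).astype(int)
        return np.roll(feature_map, shift=(offset[0], offset[1]), axis=(0, 1))

    def aggregate(ego_map, received_maps):
        """Fuse the ego representation with every warped received representation."""
        return np.mean([ego_map] + received_maps, axis=0)

    rng = np.random.default_rng(0)
    grid = (32, 32)
    ego_pose = (0.0, 0.0)
    ego_map = rng.random(grid)                               # ego's intermediate representation

    # Representations and corresponding locations broadcast by two other vehicles.
    others = [((5.0, -3.0), rng.random(grid)), ((-8.0, 2.0), rng.random(grid))]
    warped = [warp_to_ego_frame(feat, pose, ego_pose) for pose, feat in others]
    print("fused representation shape:", aggregate(ego_map, warped).shape)
    ```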
  • Publication number: 20210325882
    Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
    Type: Application
    Filed: June 30, 2021
    Publication date: October 21, 2021
    Inventors: Raquel Urtasun, Mengye Ren, Andrei Pokrovsky, Bin Yang
  • Publication number: 20210303922
    Abstract: Systems and methods for training object detection models using adversarial examples are provided. A method includes obtaining a training scene and identifying a target object within the training scene. The method includes obtaining an adversarial object and generating a modified training scene based on the adversarial object, the target object, and the training scene. The modified training scene includes the training scene modified to include the adversarial object placed on the target object. The modified training scene is input to a machine-learned model configured to detect the target object. A detection output is determined based on whether the target object is detected, and the machine-learned model and the parameters of the adversarial object are trained based on the detection output. The machine-learned model is trained to maximize the detection output. The parameters of the adversarial object are trained to minimize the detection output.
    Type: Application
    Filed: August 31, 2020
    Publication date: September 30, 2021
    Inventors: James Tu, Sivabalan Manivasagam, Mengye Ren, Ming Liang, Bin Yang, Raquel Urtasun
  • Publication number: 20210276591
    Abstract: Systems and methods for generating semantic occupancy maps are provided. In particular, a computing system can obtain map data for a geographic area and sensor data obtained by an autonomous vehicle. The computing system can identify feature data included in the map data and sensor data. For each semantic object type from a plurality of semantic object types, the computing system can use the feature data as input to a respective machine-learned model from a plurality of machine-learned models to determine one or more occupancy maps for one or more future timesteps, where the respective machine-learned model is trained to determine occupancy for that semantic object type. The computing system can select a trajectory for the autonomous vehicle based on the plurality of occupancy maps associated with the plurality of semantic object types.
    Type: Application
    Filed: January 15, 2021
    Publication date: September 9, 2021
    Inventors: Raquel Urtasun, Abbas Sadat, Sergio Casas, Mengye Ren