Patents by Inventor Raquel Urtasun

Raquel Urtasun has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11797407
    Abstract: The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned model can predict one or more dropout probabilities for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
    Type: Grant
    Filed: April 22, 2022
    Date of Patent: October 24, 2023
    Assignee: UATC, LLC
    Inventors: Sivabalan Manivasagam, Shenlong Wang, Wei-Chiu Ma, Kelvin Ka Wing Wong, Wenyuan Zeng, Raquel Urtasun
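The two-stage recipe this abstract describes, physics-based ray casting followed by a learned "raydrop" stage, can be sketched as follows. Everything here is illustrative: the point cloud is random rather than ray-cast from a 3D map, and `dropout_probability` is a hand-written stand-in for the machine-learned model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (physics): ray casting against a 3D map would yield an initial
# point cloud. Here we fabricate one: N points with a range per point.
n_points = 1000
points = rng.uniform(-50.0, 50.0, size=(n_points, 3))
ranges = np.linalg.norm(points, axis=1)

# Step 2 (learned): a raydrop model predicts, per point, the probability
# that a real sensor would *not* return this point. A trained network
# would consume geometry/intensity features; this stand-in just makes
# far-away returns more likely to drop, a known LiDAR characteristic.
def dropout_probability(r, max_range=90.0):
    """Hypothetical stand-in for the machine-learned raydrop model."""
    return np.clip(r / max_range, 0.0, 1.0) ** 2

p_drop = dropout_probability(ranges)

# Step 3: sample a keep/drop decision per point and adjust the cloud.
keep = rng.uniform(size=n_points) > p_drop
adjusted_cloud = points[keep]

print(f"kept {adjusted_cloud.shape[0]} of {n_points} simulated returns")
```

The adjusted cloud is what downstream perception models would consume in place of real sweeps.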
  • Patent number: 11794785
    Abstract: Generally, the disclosed systems and methods utilize multi-task machine-learned models for object intention determination in autonomous driving applications. For example, a computing system can receive sensor data obtained relative to an autonomous vehicle and map data associated with a surrounding geographic environment of the autonomous vehicle. The sensor data and map data can be provided as input to a machine-learned intent model. The computing system can receive a jointly determined prediction from the machine-learned intent model for multiple outputs including at least one detection output indicative of one or more objects detected within the surrounding environment of the autonomous vehicle, a first corresponding forecasting output descriptive of a trajectory indicative of an expected path of the one or more objects towards a goal location, and/or a second corresponding forecasting output descriptive of a discrete behavior intention determined from a predefined group of possible behavior intentions.
    Type: Grant
    Filed: May 20, 2022
    Date of Patent: October 24, 2023
    Assignee: UATC, LLC
    Inventors: Sergio Casas, Wenjie Luo, Raquel Urtasun
  • Patent number: 11780472
    Abstract: A computing system can input first relative location embedding data into an interaction transformer model and receive, as an output of the interaction transformer model, motion forecast data for actors relative to a vehicle. The computing system can input the motion forecast data into a prediction model to receive respective trajectories for the actors for a current time step and respective projected trajectories for the actors for a subsequent time step. The computing system can generate second relative location embedding data based on the respective projected trajectories for the subsequent time step. The computing system can produce second motion forecast data using the interaction transformer model based on the second relative location embedding data. The computing system can determine second respective trajectories for the actors using the prediction model based on the second motion forecast data.
    Type: Grant
    Filed: September 2, 2020
    Date of Patent: October 10, 2023
    Assignee: UATC, LLC
    Inventors: Lingyun Li, Bin Yang, Wenyuan Zeng, Ming Liang, Mengye Ren, Sean Segal, Raquel Urtasun
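The iterative loop in this abstract (forecast, project trajectories one step ahead, re-embed at the projected locations, forecast again) can be sketched with toy stand-ins. The real interaction transformer and prediction model are replaced by simple hand-written functions, so only the control flow matches the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scene: 4 actors with 2D positions and velocities relative to the ego vehicle.
positions = rng.uniform(-20, 20, size=(4, 2))
velocities = rng.uniform(-2, 2, size=(4, 2))

def relative_location_embedding(pos):
    """Pairwise offsets between actors, flattened per actor.
    A hand-made stand-in for the patent's relative location embedding."""
    return (pos[:, None, :] - pos[None, :, :]).reshape(len(pos), -1)

def interaction_model(embedding, vel):
    """Stand-in for the interaction transformer: nudges each actor's
    velocity toward the mean of the others (a crude interaction effect;
    the real model would attend over the embedding)."""
    mean_vel = (vel.sum(axis=0, keepdims=True) - vel) / (len(vel) - 1)
    return 0.8 * vel + 0.2 * mean_vel

# Unrolled two-step loop from the abstract: forecast, project one step
# ahead, re-embed at the projected locations, then forecast again.
emb_t0 = relative_location_embedding(positions)
forecast_t0 = interaction_model(emb_t0, velocities)
projected = positions + forecast_t0              # projected trajectories

emb_t1 = relative_location_embedding(projected)  # second embedding data
forecast_t1 = interaction_model(emb_t1, forecast_t0)
trajectories_t1 = projected + forecast_t1        # second respective trajectories
```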
  • Patent number: 11769058
    Abstract: Systems and methods of the present disclosure provide an improved approach for open-set instance segmentation by identifying both known and unknown instances in an environment. For example, a method can include receiving sensor point cloud input data including a plurality of three-dimensional points. The method can include determining a feature embedding and at least one of an instance embedding, class embedding, and/or background embedding for each of the plurality of three-dimensional points. The method can include determining a first subset of points associated with one or more known instances within the environment based on the class embedding and the background embedding associated with each point in the plurality of points. The method can include determining a second subset of points associated with one or more unknown instances within the environment based on the first subset of points. The method can include segmenting the input data into known and unknown instances.
    Type: Grant
    Filed: October 17, 2022
    Date of Patent: September 26, 2023
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Kelvin Ka Wing Wong, Shenlong Wang, Mengye Ren, Ming Liang
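A minimal sketch of the open-set idea: points whose class embedding lies near a known prototype get that class label, and everything left over is treated as an unknown instance. The 2D "embeddings", the prototypes, and the distance threshold are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Per-point embeddings for a toy cloud: three clusters, two of which
# match known class prototypes and one of which is an unknown object.
known_prototypes = np.array([[0.0, 0.0], [5.0, 5.0]])   # e.g. car, pedestrian
cloud = np.vstack([
    rng.normal([0, 0], 0.3, size=(30, 2)),    # known class 0
    rng.normal([5, 5], 0.3, size=(30, 2)),    # known class 1
    rng.normal([10, 0], 0.3, size=(30, 2)),   # unknown instance
])

# First subset: points confidently explained by a known class, scored
# here by distance to the nearest prototype (threshold is an assumption).
dists = np.linalg.norm(cloud[:, None, :] - known_prototypes[None, :, :], axis=2)
nearest = dists.argmin(axis=1)
confident = dists.min(axis=1) < 2.0

# Second subset: whatever the known classes cannot explain.
labels = np.where(confident, nearest, -1)     # -1 marks "unknown"
known_mask = labels >= 0
unknown_mask = ~known_mask
```

A real system would additionally cluster the unknown points into separate instances; this sketch stops at the known/unknown split.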
  • Patent number: 11768292
    Abstract: Generally, the disclosed systems and methods implement improved detection of objects in three-dimensional (3D) space. More particularly, an improved 3D object detection system can exploit continuous fusion of multiple sensors and/or integrated geographic prior map data to enhance effectiveness and robustness of object detection in applications such as autonomous driving. In some implementations, geographic prior data (e.g., geometric ground and/or semantic road features) can be exploited to enhance three-dimensional object detection for autonomous vehicle applications. In some implementations, object detection systems and methods can be improved based on dynamic utilization of multiple sensor modalities. More particularly, an improved 3D object detection system can exploit both LIDAR systems and cameras to perform very accurate localization of objects within three-dimensional space relative to an autonomous vehicle.
    Type: Grant
    Filed: January 10, 2022
    Date of Patent: September 26, 2023
    Assignee: UATC, LLC
    Inventors: Ming Liang, Bin Yang, Shenlong Wang, Wei-Chiu Ma, Raquel Urtasun
  • Publication number: 20230298263
    Abstract: Real world object reconstruction and representation include performing operations that include sampling locations along a camera ray from a virtual camera to a target object to obtain a sample set of the locations along the camera ray. For each location in at least a subset of the sample set, the operations include determining a position of the location with respect to the target object, executing, based on the position, a reflectance multilayer perceptron (MLP) model to determine an albedo and material shininess for the location, and computing a radiance for the location, based on a viewing direction of the camera ray, using the albedo and the material shininess. The operations further include rendering a color value for the camera ray by compositing the radiance across the sample set.
    Type: Application
    Filed: March 13, 2023
    Publication date: September 21, 2023
    Applicant: WAABI Innovation Inc.
    Inventors: Ze Yang, Sivabalan Manivasagam, Yun Chen, Jingkang Wang, Raquel Urtasun
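The per-ray rendering loop the abstract describes (shade each sample from its albedo and shininess, then composite along the ray) can be sketched as standard front-to-back alpha compositing. The Phong-style specular term and all numeric values are assumptions; the patent leaves the actual shading to the learned reflectance MLP.

```python
import numpy as np

# K samples along one camera ray: a per-sample opacity plus the two
# reflectance-MLP outputs the abstract names, albedo and shininess.
# Real values would come from the trained MLP; these are illustrative.
alphas = np.array([0.1, 0.3, 0.8, 0.9])            # opacity per sample
albedo = np.array([[0.8, 0.2, 0.2],
                   [0.7, 0.3, 0.2],
                   [0.6, 0.4, 0.3],
                   [0.5, 0.5, 0.4]])               # RGB albedo per sample
shininess = np.array([8.0, 8.0, 16.0, 32.0])

# A simple Phong-style radiance using the viewing direction: diffuse
# albedo plus a specular lobe sharpened by shininess (an assumption).
cos_view = np.array([0.9, 0.7, 0.5, 0.2])          # view/reflection alignment
radiance = albedo + (np.maximum(cos_view, 0.0) ** shininess)[:, None]

# Front-to-back alpha compositing of radiance along the ray: each
# sample contributes in proportion to its opacity times the light
# transmitted past all samples in front of it.
transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
weights = transmittance * alphas
color = (weights[:, None] * radiance).sum(axis=0)   # rendered RGB for the ray
```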
  • Patent number: 11760385
    Abstract: Systems and methods for vehicle-to-vehicle communications are provided. An example computer-implemented method includes obtaining, by a computing system onboard a first autonomous vehicle, sensor data associated with an environment of the first autonomous vehicle. The method includes determining, by the computing system, an intermediate environmental representation of at least a portion of the environment of the first autonomous vehicle based at least in part on the sensor data. The method includes generating, by the computing system, a compressed intermediate environmental representation by compressing the intermediate environmental representation of at least the portion of the environment of the first autonomous vehicle. The method includes communicating, by the computing system, the compressed intermediate environmental representation to a second autonomous vehicle.
    Type: Grant
    Filed: October 8, 2020
    Date of Patent: September 19, 2023
    Assignee: UATC, LLC
    Inventors: Sivabalan Manivasagam, Ming Liang, Bin Yang, Wenyuan Zeng, Raquel Urtasun, Tsun-Hsuan Wang
  • Patent number: 11760386
    Abstract: Systems and methods for vehicle-to-vehicle communications are provided. An example computer-implemented method includes obtaining from a first autonomous vehicle, by a computing system onboard a second autonomous vehicle, a first compressed intermediate environmental representation. The first compressed intermediate environmental representation is indicative of at least a portion of an environment of the second autonomous vehicle and is based at least in part on sensor data acquired by the first autonomous vehicle at a first time. The method includes generating, by the computing system, a first decompressed intermediate environmental representation by decompressing the first compressed intermediate environmental representation. The method includes determining, by the computing system, a first time-corrected intermediate environmental representation based at least in part on the first decompressed intermediate environmental representation.
    Type: Grant
    Filed: October 8, 2020
    Date of Patent: September 19, 2023
    Assignee: UATC, LLC
    Inventors: Sivabalan Manivasagam, Ming Liang, Bin Yang, Wenyuan Zeng, Raquel Urtasun, Tsun-Hsuan Wang
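Taken together, these two patents describe a sender/receiver round trip: compress an intermediate feature map on one vehicle, transmit it, and decompress it on the other. A minimal sketch, using 8-bit quantization plus `zlib` as a stand-in for whatever learned or classical codec the system actually uses:

```python
import zlib
import numpy as np

rng = np.random.default_rng(3)

# Sender side: an intermediate environmental representation, e.g. a
# C x H x W feature map from the first vehicle's perception backbone.
features = rng.normal(size=(32, 64, 64)).astype(np.float32)

# Quantize to 8 bits and entropy-code the bytes.
lo, hi = features.min(), features.max()
quantized = np.round((features - lo) / (hi - lo) * 255).astype(np.uint8)
payload = zlib.compress(quantized.tobytes())   # what gets transmitted

# Receiver side (the second vehicle): decompress and de-quantize.
restored_q = np.frombuffer(zlib.decompress(payload), dtype=np.uint8)
restored = restored_q.reshape(features.shape).astype(np.float32)
restored = restored / 255 * (hi - lo) + lo

ratio = features.nbytes / len(payload)         # compression ratio achieved
err = np.abs(restored - features).max()        # worst-case quantization error
```

The receiver-side patent additionally time-corrects the decompressed representation for transmission latency, which this sketch omits.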
  • Patent number: 11755014
    Abstract: Systems and methods for generating motion plans for autonomous vehicles are provided. An autonomous vehicle can include a machine-learned motion planning system including one or more machine-learned models configured to generate target trajectories for the autonomous vehicle. The model(s) include a behavioral planning stage configured to receive situational data based at least in part on the one or more outputs of the set of sensors and to generate behavioral planning data based at least in part on the situational data and a unified cost function. The model(s) include a trajectory planning stage configured to receive the behavioral planning data from the behavioral planning stage and to generate target trajectory data for the autonomous vehicle based at least in part on the behavioral planning data and the unified cost function.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: September 12, 2023
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Abbas Sadat, Mengye Ren, Andrei Pokrovsky, Yen-Chen Lin, Ersin Yumer
  • Patent number: 11755018
    Abstract: Systems and methods for generating motion plans including target trajectories for autonomous vehicles are provided. An autonomous vehicle may include or access a machine-learned motion planning model including a backbone network configured to generate a cost volume including data indicative of a cost associated with future locations of the autonomous vehicle. The cost volume can be generated from raw sensor data as part of motion planning for the autonomous vehicle. The backbone network can generate intermediate representations associated with object detections and object predictions. The motion planning model can include a trajectory generator configured to evaluate one or more potential trajectories for the autonomous vehicle and to select a target trajectory based at least in part on the cost volume generated by the backbone network.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: September 12, 2023
    Assignee: UATC, LLC
    Inventors: Wenyuan Zeng, Wenjie Luo, Abbas Sadat, Bin Yang, Raquel Urtasun
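The cost-volume planner in this abstract can be sketched as: sample candidate trajectories, sum the cost volume along each, and keep the cheapest. The toy volume below has a hand-carved low-cost corridor so the selection has an obvious optimum; in the patent the volume comes from the learned backbone network.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy cost volume over future (t, x, y): low values mark where the
# backbone believes the vehicle can safely be at each future time step.
T, H, W = 5, 20, 20
cost_volume = rng.uniform(0.1, 1.0, size=(T, H, W))
cost_volume[:, 10, :] = 0.0        # carve a free corridor along row x=10

def trajectory_cost(traj, volume):
    """Score a candidate by summing the cost volume along its cells."""
    return sum(volume[t, x, y] for t, (x, y) in enumerate(traj))

# Candidate trajectories: straight lines fanning out from a start cell,
# clipped to the grid. A real planner would sample far richer shapes.
start_x, start_y = 10, 2
candidates = []
for dx in (-2, -1, 0, 1, 2):
    traj = [(min(max(start_x + t * dx, 0), H - 1),
             min(start_y + 3 * t, W - 1)) for t in range(T)]
    candidates.append(traj)

costs = [trajectory_cost(tr, cost_volume) for tr in candidates]
target_trajectory = candidates[int(np.argmin(costs))]   # stays in the corridor
```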
  • Publication number: 20230278582
    Abstract: Trajectory value learning for autonomous systems includes generating an environment image from sensor input and processing the environment image through an image neural network to obtain a feature map. Trajectory value learning further includes sampling possible trajectories to obtain candidate trajectories for an autonomous system, extracting, from the feature map, feature vectors corresponding to each candidate trajectory, combining the feature vectors into an input vector, and processing, by a score neural network model, the input vector to obtain a projected score for each candidate trajectory. Trajectory value learning further includes selecting, from the candidate trajectories, a candidate trajectory as a selected trajectory based on the projected score, and implementing the selected trajectory.
    Type: Application
    Filed: March 7, 2023
    Publication date: September 7, 2023
    Applicant: WAABI Innovation Inc.
    Inventors: Chris Jia Han Zhang, Runsheng Guo, Wenyuan Zeng, Raquel Urtasun
  • Publication number: 20230274540
    Abstract: Systems and methods for facilitating communication with autonomous vehicles are provided. In one example embodiment, a computing system can obtain a first type of sensor data (e.g., camera image data) associated with a surrounding environment of an autonomous vehicle and/or a second type of sensor data (e.g., LIDAR data) associated with the surrounding environment of the autonomous vehicle. The computing system can generate overhead image data indicative of at least a portion of the surrounding environment of the autonomous vehicle based at least in part on the first and/or second types of sensor data. The computing system can determine one or more lane boundaries within the surrounding environment of the autonomous vehicle based at least in part on the overhead image data indicative of at least the portion of the surrounding environment of the autonomous vehicle and a machine-learned lane boundary detection model.
    Type: Application
    Filed: May 8, 2023
    Publication date: August 31, 2023
    Inventors: Min Bai, Gellert Sandor Mattyus, Namdar Homayounfar, Shenlong Wang, Shrinidihi Kowshika Lakshmikanth, Raquel Urtasun
  • Patent number: 11731663
    Abstract: Systems and methods are provided for forecasting the motion of actors within a surrounding environment of an autonomous platform. For example, a computing system of an autonomous platform can use machine-learned model(s) to generate actor-specific graphs with past motions of actors and the local map topology. The computing system can project the actor-specific graphs of all actors to a global graph. The global graph can allow the computing system to determine which actors may interact with one another by propagating information over the global graph. The computing system can distribute the interactions determined using the global graph to the individual actor-specific graphs. The computing system can then predict a motion trajectory for an actor based on the associated actor-specific graph, which captures the actor-to-actor interactions and actor-to-map relations.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: August 22, 2023
    Assignee: UATC, LLC
    Inventors: Wenyuan Zeng, Ming Liang, Renjie Liao, Raquel Urtasun
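The actor-graph-to-global-graph flow can be sketched as one round of message passing: per-actor state vectors are aggregated over a proximity graph and mixed back into each actor's state before a trajectory is decoded. The states, the distance threshold, and the linear read-out are all illustrative stand-ins for the patent's learned components.

```python
import numpy as np

rng = np.random.default_rng(5)

# Per-actor states summarizing past motion and local map topology
# (these come from actor-specific graphs in the patent; random here).
n_actors = 5
states = rng.normal(size=(n_actors, 8))
positions = rng.uniform(-30, 30, size=(n_actors, 2))

# Global graph: connect actors close enough to plausibly interact.
dists = np.linalg.norm(positions[:, None] - positions[None, :], axis=2)
adjacency = (dists < 25.0) & ~np.eye(n_actors, dtype=bool)

# One round of propagation over the global graph: each actor averages
# its neighbors' states and mixes that message into its own state.
deg = np.maximum(adjacency.sum(axis=1, keepdims=True), 1)
messages = adjacency.astype(float) @ states / deg
updated = 0.7 * states + 0.3 * messages

# The interaction-aware states are handed back to each actor-specific
# graph to decode a motion trajectory; a linear read-out stands in here.
readout = rng.normal(size=(8, 2))
predicted_displacements = updated @ readout
```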
  • Patent number: 11734885
    Abstract: The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned geometry model can predict one or more adjusted depths for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
    Type: Grant
    Filed: October 3, 2022
    Date of Patent: August 22, 2023
    Assignee: UATC, LLC
    Inventors: Sivabalan Manivasagam, Shenlong Wang, Wei-Chiu Ma, Raquel Urtasun
  • Patent number: 11734828
    Abstract: Disclosed herein are methods and systems for performing instance segmentation that can provide improved estimation of object boundaries. Implementations can include a machine-learned segmentation model trained to estimate an initial object boundary based on a truncated signed distance function (TSDF) generated by the model. The model can also generate outputs for optimizing the TSDF over a series of iterations to produce a final TSDF that can be used to determine the segmentation mask.
    Type: Grant
    Filed: August 1, 2022
    Date of Patent: August 22, 2023
    Assignee: UATC, LLC
    Inventors: Namdar Homayounfar, Yuwen Xiong, Justin Liang, Wei-Chiu Ma, Raquel Urtasun
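To make the TSDF machinery concrete: a truncated signed distance function is negative inside an object boundary, positive outside, and clipped to a truncation band, and the segmentation mask is its sub-zero level set. The sketch below builds such a field analytically for a circle rather than predicting and iteratively refining it with the patent's learned model.

```python
import numpy as np

# A TSDF for a circular object on a small grid. The patent's model
# *predicts* a field like this from sensor data and then refines it
# over iterations; here it is constructed directly for illustration.
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
center, radius, trunc = (32.0, 32.0), 14.0, 5.0

signed_dist = np.hypot(yy - center[0], xx - center[1]) - radius
tsdf = np.clip(signed_dist, -trunc, trunc)   # truncate far from the boundary

# The segmentation mask is wherever the (refined) TSDF is below zero.
mask = tsdf < 0.0
```

Representing the boundary implicitly like this is what lets the model refine object outlines smoothly instead of committing to hard pixel labels up front.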
  • Patent number: 11726208
    Abstract: Aspects of the present disclosure involve systems, methods, and devices for autonomous vehicle localization using a Lidar intensity map. A system is configured to generate a map embedding using a first neural network and to generate an online Lidar intensity embedding using a second neural network. The map embedding is based on input map data comprising a Lidar intensity map, and the online Lidar intensity embedding is based on online Lidar sweep data. The system is further configured to generate multiple pose candidates based on the online Lidar intensity embedding and compute a three-dimensional (3D) score map comprising a match score for each pose candidate that indicates a similarity between the pose candidate and the map embedding. The system is further configured to determine a pose of a vehicle based on the 3D score map and to control one or more operations of the vehicle based on the determined pose.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: August 15, 2023
    Assignee: UATC, LLC
    Inventors: Shenlong Wang, Andrei Pokrovsky, Raquel Urtasun Sotil, Ioan Andrei Bârsan
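The pose-scoring step can be sketched as template matching in embedding space: correlate the online embedding against the map embedding at every candidate offset and take the best score. Rotation search and the learned embeddings themselves are omitted; a planted patch of random features stands in for the two networks' outputs.

```python
import numpy as np

rng = np.random.default_rng(6)

# Stand-ins for the two network outputs: a map embedding over a large
# spatial grid, and an online embedding for the current sweep that is
# an exact patch of the map at a planted "true" pose.
map_embedding = rng.normal(size=(64, 64))
true_pose = (20, 31)                                   # row, col offset
online_embedding = map_embedding[20:20 + 16, 31:31 + 16].copy()

# Score every translational pose candidate by correlating the online
# embedding with the map embedding at that offset. The patent's 3D
# score map would add a heading dimension; only x-y is searched here.
scores = np.full((64 - 16 + 1, 64 - 16 + 1), -np.inf)
for r in range(scores.shape[0]):
    for c in range(scores.shape[1]):
        patch = map_embedding[r:r + 16, c:c + 16]
        scores[r, c] = (patch * online_embedding).sum()

estimated_pose = np.unravel_index(scores.argmax(), scores.shape)
```

Because the online patch matches the map exactly at the planted offset, the correlation peaks there; real systems face noisy, partially observed embeddings.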
  • Publication number: 20230252777
    Abstract: Systems and methods for performing semantic segmentation of three-dimensional data are provided. In one example embodiment, a computing system can be configured to obtain sensor data including three-dimensional data associated with an environment. The three-dimensional data can include a plurality of points and can be associated with one or more times. The computing system can be configured to determine data indicative of a two-dimensional voxel representation associated with the environment based at least in part on the three-dimensional data. The computing system can be configured to determine a classification for each point of the plurality of points within the three-dimensional data based at least in part on the two-dimensional voxel representation associated with the environment and a machine-learned semantic segmentation model. The computing system can be configured to initiate one or more actions based at least in part on the per-point classifications.
    Type: Application
    Filed: April 13, 2023
    Publication date: August 10, 2023
    Inventors: Chris Jia-Han Zhang, Wenjie Luo, Raquel Urtasun
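One plausible reading of the two-dimensional voxel representation is a bird's-eye-view grid whose channels are height slabs: discretize x-y, mark occupancy per vertical bin, and let a 2D network consume the result. Grid extents and resolutions below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# A toy 3D point cloud (x, y, z) in meters around the sensor.
points = rng.uniform([-40, -40, -2], [40, 40, 4], size=(5000, 3))

# Discretize x-y at 1 m resolution and z into 6 one-meter slabs.
res, z_bins = 1.0, 6
x_idx = ((points[:, 0] + 40) / res).astype(int).clip(0, 79)
y_idx = ((points[:, 1] + 40) / res).astype(int).clip(0, 79)
z_idx = ((points[:, 2] + 2) / 1.0).astype(int).clip(0, z_bins - 1)

# 2D voxel representation: an 80 x 80 grid with height-slab occupancy
# channels, suitable as input to an ordinary 2D convolutional network.
bev = np.zeros((80, 80, z_bins), dtype=np.float32)
bev[x_idx, y_idx, z_idx] = 1.0

# Per-point classification then follows by running the 2D network over
# `bev` and reading each point's score back through its (x_idx, y_idx).
```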
  • Patent number: 11715012
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices associated with object localization and generation of compressed feature representations are provided. For example, a computing system can access source data and target data. The source data can include a source representation of an environment including a source object. The target data can include a compressed target feature representation of the environment. The compressed target feature representation can be based on compression of a target feature representation of the environment produced by machine-learned models. A source feature representation can be generated based on the source representation and the machine-learned models. The machine-learned models can include machine-learned feature extraction models or machine-learned attention models. A localized state of the source object with respect to the environment can be determined based on the source feature representation and the compressed target feature representation.
    Type: Grant
    Filed: October 10, 2019
    Date of Patent: August 1, 2023
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Xinkai Wei, Ioan Andrei Barsan, Julieta Martinez Covarrubias, Shenlong Wang
  • Publication number: 20230229889
    Abstract: Systems and methods for generating motion forecast data for actors with respect to an autonomous vehicle and training a machine learned model for the same are disclosed. The computing system can include an object detection model and a graph neural network including a plurality of nodes and a plurality of edges. The computing system can be configured to input sensor data into the object detection model; receive object detection data describing the location of the plurality of the actors relative to the autonomous vehicle as an output of the object detection model; input the object detection data into the graph neural network; iteratively update a plurality of node states respectively associated with the plurality of nodes; and receive, as an output of the graph neural network, the motion forecast data with respect to the plurality of actors.
    Type: Application
    Filed: March 20, 2023
    Publication date: July 20, 2023
    Inventors: Raquel Urtasun, Renjie Liao, Sergio Casas, Cole Christian Gulino
  • Patent number: 11691650
    Abstract: A computing system can be configured to input data that describes sensor data into an object detection model and receive, as an output of the object detection model, object detection data describing features of the plurality of the actors relative to the autonomous vehicle. The computing system can generate an input sequence that describes the object detection data. The computing system can analyze the input sequence using an interaction model to produce, as an output of the interaction model, an attention embedding with respect to the plurality of actors. The computing system can be configured to input the attention embedding into a recurrent model and determine respective trajectories for the plurality of actors based on motion forecast data received as an output of the recurrent model.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: July 4, 2023
    Assignee: UATC, LLC
    Inventors: Lingyun Li, Bin Yang, Ming Liang, Wenyuan Zeng, Mengye Ren, Sean Segal, Raquel Urtasun