Patents by Inventor Raquel Urtasun

Raquel Urtasun has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210383616
    Abstract: The present disclosure provides systems and methods for generating photorealistic image simulation data with geometry-aware composition for testing autonomous vehicles. In particular, aspects of the present disclosure can involve the intake of data on an environment and output of augmented data on the environment with the photorealistic addition of an object. As one example, data on the driving experiences of a self-driving vehicle can be augmented to add another vehicle into the collected environment data. The augmented data may then be used to test safety features of software for a self-driving vehicle.
    Type: Application
    Filed: January 15, 2021
    Publication date: December 9, 2021
    Inventors: Frieda Rong, Yun Chen, Shivam Duggal, Shenlong Wang, Xinchen Yan, Sivabalan Manivasagam, Ersin Yumer, Raquel Urtasun
  • Publication number: 20210374437
    Abstract: A method includes receiving image data associated with an image of a roadway including a crosswalk, generating a plurality of different characteristics of the image based on the image data, determining a position of the crosswalk on the roadway based on the plurality of different characteristics, the position including a first boundary and a second boundary of the crosswalk in the roadway, and providing map data associated with a map of the roadway, the map data including the position of the crosswalk on the roadway in the map. The plurality of different characteristics include a classification of one or more elements of the image, a segmentation of the one or more elements of the image, and one or more angles of the one or more elements of the image with respect to a line in the roadway.
    Type: Application
    Filed: August 5, 2021
    Publication date: December 2, 2021
    Inventors: Justin Jin-Wei Liang, Raquel Urtasun Sotil
  • Publication number: 20210362596
    Abstract: Systems and methods for detecting and tracking objects are provided. In one example, a computer-implemented method includes receiving sensor data from one or more sensors. The method includes inputting the sensor data to one or more machine-learned models including one or more first neural networks configured to detect one or more objects based at least in part on the sensor data and one or more second neural networks configured to track the one or more objects over a sequence of sensor data. The method includes generating, as an output of the one or more first neural networks, a 3D bounding box and detection score for a plurality of object detections. The method includes generating, as an output of the one or more second neural networks, a matching score associated with pairs of object detections. The method includes determining a trajectory for each object detection.
    Type: Application
    Filed: May 24, 2021
    Publication date: November 25, 2021
    Inventors: Davi Eugenio Nascimento Frossard, Raquel Urtasun
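    The detect-then-track pipeline described in the abstract above can be sketched in a few lines. This is an illustrative stand-in, not the patented method: the neural networks' outputs are replaced by a plain dictionary of pairwise matching scores, and the trajectory-assembly step is reduced to greedy linking; all names and the 0.5 threshold are hypothetical.

    ```python
    def link_detections(match_scores, threshold=0.5):
        """Greedily link detections across two consecutive frames.

        match_scores: dict mapping (prev_id, next_id) -> score in [0, 1],
        standing in for the second network's output. Returns a list of
        (prev_id, next_id) links, best-scoring pairs first.
        """
        ranked = sorted(match_scores.items(), key=lambda kv: kv[1], reverse=True)
        used_prev, used_next, links = set(), set(), []
        for (p, n), score in ranked:
            # skip weak matches and detections already assigned to a track
            if score < threshold or p in used_prev or n in used_next:
                continue
            links.append((p, n))
            used_prev.add(p)
            used_next.add(n)
        return links
    ```

    Repeating this linking over each consecutive frame pair chains detections into per-object trajectories; a production system would typically use optimal bipartite matching rather than this greedy rule.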
  • Publication number: 20210326607
    Abstract: Systems and methods for facilitating communication with autonomous vehicles are provided. In one example embodiment, a computing system can obtain a first type of sensor data (e.g., camera image data) associated with a surrounding environment of an autonomous vehicle and/or a second type of sensor data (e.g., LIDAR data) associated with the surrounding environment of the autonomous vehicle. The computing system can generate overhead image data indicative of at least a portion of the surrounding environment of the autonomous vehicle based at least in part on the first and/or second types of sensor data. The computing system can determine one or more lane boundaries within the surrounding environment of the autonomous vehicle based at least in part on the overhead image data indicative of at least the portion of the surrounding environment of the autonomous vehicle and a machine-learned lane boundary detection model.
    Type: Application
    Filed: June 25, 2021
    Publication date: October 21, 2021
    Inventors: Min Bai, Gellert Sandor Mattyus, Namdar Homayounfar, Shenlong Wang, Shrinidhi Kowshika Lakshmikanth, Raquel Urtasun
  • Publication number: 20210325882
    Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
    Type: Application
    Filed: June 30, 2021
    Publication date: October 21, 2021
    Inventors: Raquel Urtasun, Mengye Ren, Andrei Pokrovsky, Bin Yang
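    The core idea of the abstract above, skipping convolutions over empty or irrelevant regions of sparse imagery, can be illustrated with a toy block-scan. This is a hedged sketch, not the patented system: a real implementation would operate on tensors inside the network, while here a nested list stands in for an image and the "relevant portions" are simply the blocks containing any nonzero pixel.

    ```python
    def relevant_blocks(image, block=2):
        """Return top-left coordinates of block-by-block tiles that contain
        at least one nonzero pixel, so a downstream convolution can run
        only on those tiles and skip the empty regions entirely."""
        h, w = len(image), len(image[0])
        coords = []
        for i in range(0, h, block):
            for j in range(0, w, block):
                if any(image[r][c]
                       for r in range(i, min(i + block, h))
                       for c in range(j, min(j + block, w))):
                    coords.append((i, j))
        return coords
    ```

    For LIDAR-derived overhead imagery, which is mostly empty space, restricting computation to the returned tiles is where the claimed speedup comes from.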
  • Publication number: 20210303922
    Abstract: Systems and methods for training object detection models using adversarial examples are provided. A method includes obtaining a training scene and identifying a target object within the training scene. The method includes obtaining an adversarial object and generating a modified training scene based on the adversarial object, the target object, and the training scene. The modified training scene includes the training scene modified to include the adversarial object placed on the target object. The modified training scene is input to a machine-learned model configured to detect the target object. A detection output is determined based on whether the target object is detected, and the machine-learned model and the parameters of the adversarial object are trained based on the detection output. The machine-learned model is trained to maximize the detection output. The parameters of the adversarial object are trained to minimize the detection output.
    Type: Application
    Filed: August 31, 2020
    Publication date: September 30, 2021
    Inventors: James Tu, Sivabalan Manivasagam, Mengye Ren, Ming Liang, Bin Yang, Raquel Urtasun
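    The min-max training described in the abstract above, where the detector maximizes a detection score while the adversarial object's parameters minimize it, can be sketched with a one-dimensional toy. Everything here is an illustrative assumption: a sigmoid of `w - a` stands in for a real detector's score, and the alternating gradient steps stand in for full backpropagation.

    ```python
    import math

    def detection_score(w, a):
        """Toy detection score: detector weight w raises it, adversarial
        object parameter a lowers it (a stand-in for a real detector)."""
        return 1.0 / (1.0 + math.exp(-(w - a)))

    def train_step(w, a, lr_det=0.2, lr_adv=0.1):
        """One alternating update of the min-max game: the detector takes
        a gradient-ascent step on the score, the adversarial object takes
        a gradient-descent step on the same score."""
        s = detection_score(w, a)
        grad_w = s * (1.0 - s)         # d(score)/dw for the sigmoid
        grad_a = -grad_w               # d(score)/da
        w_new = w + lr_det * grad_w    # detector: maximize detection
        a_new = a - lr_adv * grad_a    # adversary: minimize detection
        return w_new, a_new
    ```

    Iterating this game hardens the detector against the strongest adversarial object the inner minimization can find.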
  • Publication number: 20210276595
    Abstract: A computer-implemented method for determining scene-consistent motion forecasts from sensor data can include obtaining scene data including one or more actor features. The computer-implemented method can include providing the scene data to a latent prior model, the latent prior model configured to generate scene latent data in response to receipt of scene data, the scene latent data including one or more latent variables. The computer-implemented method can include obtaining the scene latent data from the latent prior model. The computer-implemented method can include sampling latent sample data from the scene latent data. The computer-implemented method can include providing the latent sample data to a decoder model, the decoder model configured to decode the latent sample data into a motion forecast including one or more predicted trajectories of the one or more actor features.
    Type: Application
    Filed: January 15, 2021
    Publication date: September 9, 2021
    Inventors: Sergio Casas, Cole Christian Gulino, Shun Da Suo, Katie Z. Luo, Renjie Liao, Raquel Urtasun
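    The sample-then-decode flow in the abstract above, drawing latent variables from a scene prior and decoding them into actor trajectories, can be sketched with toy stand-ins. The standard-normal prior, the constant-velocity "decoder", and all names below are illustrative assumptions, not the patented models.

    ```python
    import random

    def sample_scene_latent(num_latents, rng):
        """Draw scene latent variables from an assumed standard-normal
        prior (standing in for the learned latent prior model)."""
        return [rng.gauss(0.0, 1.0) for _ in range(num_latents)]

    def decode_trajectory(position, velocity, latent, horizon=5, dt=0.1):
        """Toy 'decoder': the latent perturbs a constant-velocity rollout
        of one actor, yielding one predicted (x, y) trajectory."""
        x, y = position
        vx, vy = velocity
        return [(x + (vx + latent[0]) * dt * t,
                 y + (vy + latent[1]) * dt * t)
                for t in range(1, horizon + 1)]
    ```

    Because every actor's trajectory is decoded from the same scene latent sample, the forecasts for different actors stay consistent with one another, which is the point of a scene-level latent.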
  • Publication number: 20210276587
    Abstract: Systems and methods of the present disclosure are directed to a method. The method can include obtaining simplified scenario data associated with a simulated scenario. The method can include determining, using a machine-learned perception-prediction simulation model, a simulated perception-prediction output based at least in part on the simplified scenario data. The method can include evaluating a loss function comprising a perception loss term and a prediction loss term. The method can include adjusting one or more parameters of the machine-learned perception-prediction simulation model based at least in part on the loss function.
    Type: Application
    Filed: January 15, 2021
    Publication date: September 9, 2021
    Inventors: Raquel Urtasun, Kelvin Ka Wing Wong, Qiang Zhang, Bin Yang, Ming Liang, Renjie Liao
  • Publication number: 20210276591
    Abstract: Systems and methods for generating semantic occupancy maps are provided. In particular, a computing system can obtain map data for a geographic area and sensor data obtained by an autonomous vehicle. The computing system can identify feature data included in the map data and sensor data. The computing system can, for each respective semantic object type from a plurality of semantic object types, determine, using the feature data as input to a respective machine-learned model from a plurality of machine-learned models, one or more occupancy maps for one or more timesteps in the future, where the respective machine-learned model is trained to determine occupancy for the respective semantic object type. The computing system can select a trajectory for the autonomous vehicle based on a plurality of occupancy maps associated with the plurality of semantic object types.
    Type: Application
    Filed: January 15, 2021
    Publication date: September 9, 2021
    Inventors: Raquel Urtasun, Abbas Sadat, Sergio Casas, Mengye Ren
  • Publication number: 20210278852
    Abstract: Systems and methods for generating attention masks are provided. In particular, a computing system can access sensor data and map data for an area around an autonomous vehicle. The computing system can generate a voxel grid representation of the sensor data and map data. The computing system can generate an attention mask based on the voxel grid representation. The computing system can generate, by using the voxel grid representation and the attention mask as input to a machine-learned model, an attention weighted feature map. The computing system can determine using the attention weighted feature map, a planning cost volume for an area around the autonomous vehicle. The computing system can select a trajectory for the autonomous vehicle based, at least in part, on the planning cost volume.
    Type: Application
    Filed: January 15, 2021
    Publication date: September 9, 2021
    Inventors: Raquel Urtasun, Bob Qingyuan Wei, Mengye Ren, Wenyuan Zeng, Ming Liang, Bin Yang
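    The central operation implied by the abstract above is weighting a feature map by an attention mask before downstream planning. A minimal element-wise sketch follows; in the actual system both inputs come from learned models over a voxel grid, whereas here plain nested lists stand in for them.

    ```python
    def attention_weighted(feature_map, attention_mask):
        """Element-wise product of a 2-D feature map with an attention
        mask of the same shape, yielding an attention-weighted feature
        map (a toy stand-in for the learned model's weighting)."""
        return [[f * a for f, a in zip(f_row, a_row)]
                for f_row, a_row in zip(feature_map, attention_mask)]
    ```

    Regions the mask scores near zero contribute little to the planning cost volume computed downstream, which is how attention focuses the planner on the relevant parts of the scene.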
  • Publication number: 20210279640
    Abstract: Systems and methods for vehicle-to-vehicle communications are provided. An adverse system can obtain sensor data representative of an environment proximate to a target system. The adverse system can generate an intermediate representation of the environment and a representation deviation for the intermediate representation. The representation deviation can be designed to disrupt a machine-learned model associated with the target system. The adverse system can communicate the intermediate representation modified by the representation deviation to the target system. The target system can train the machine-learned model associated with the target system to detect the modified intermediate representation. Detected modified intermediate representations can be discarded before disrupting the machine-learned model.
    Type: Application
    Filed: January 15, 2021
    Publication date: September 9, 2021
    Inventors: Xuanyuan Tu, Raquel Urtasun, Tsu-shuan Wang, Sivabalan Manivasagam, Jingkang Wang, Mengye Ren
  • Publication number: 20210278523
    Abstract: Systems and methods for integrating radar and LIDAR data are disclosed. In particular, a computing system can access radar sensor data and LIDAR data for the area around an autonomous vehicle. The computing system can determine, using one or more machine-learned models, one or more objects in the area of the autonomous vehicle. The computing system can, for a respective object, select a plurality of radar points from the radar sensor data. The computing system can generate a similarity score for each selected radar point. The computing system can generate a weight for each radar point based on its similarity score. The computing system can calculate a predicted velocity for the respective object based on a weighted average of the plurality of velocities associated with the plurality of radar points. The computing system can generate a proposed motion plan based on the predicted velocity for the respective object.
    Type: Application
    Filed: January 15, 2021
    Publication date: September 9, 2021
    Inventors: Raquel Urtasun, Bin Yang, Ming Liang, Sergio Casas, Runsheng Benson Guo
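    The similarity-weighted velocity average in the abstract above reduces to a short computation. This is a hedged sketch: the patent's learned similarity scoring is replaced here by a dot product plus softmax, and all names and feature shapes are illustrative.

    ```python
    import math

    def fuse_radar_velocity(radar_points, object_feature):
        """Predict an object's velocity as a similarity-weighted average
        of per-radar-point velocities.

        radar_points: list of (velocity, feature) pairs; object_feature is
        the object's feature vector. Dot-product similarity and a softmax
        stand in for the learned scoring and weighting.
        """
        scores = [sum(a * b for a, b in zip(feat, object_feature))
                  for _, feat in radar_points]
        top = max(scores)
        exps = [math.exp(s - top) for s in scores]   # numerically stable softmax
        total = sum(exps)
        weights = [e / total for e in exps]
        return sum(w * v for w, (v, _) in zip(weights, radar_points))
    ```

    Radar points whose features resemble the object's dominate the average, so a single spurious return contributes little to the predicted velocity.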
  • Publication number: 20210272018
    Abstract: The present disclosure provides systems and methods for training probabilistic object motion prediction models using non-differentiable representations of prior knowledge. As one example, object motion prediction models can be used by autonomous vehicles to probabilistically predict the future location(s) of observed objects (e.g., other vehicles, bicyclists, pedestrians, etc.). For example, such models can output a probability distribution that provides a distribution of probabilities for the future location(s) of each object at one or more future times. Aspects of the present disclosure enable these models to be trained using non-differentiable prior knowledge about motion of objects within the autonomous vehicle's environment such as, for example, prior knowledge about lane or road geometry or topology and/or traffic information such as current traffic control states (e.g., traffic light status).
    Type: Application
    Filed: January 15, 2021
    Publication date: September 2, 2021
    Inventors: Sergio Casas, Cole Christian Gulino, Shun Da Suo, Raquel Urtasun
  • Publication number: 20210258611
    Abstract: A machine-learned image compression model includes a first encoder configured to generate a first image code based at least in part on first image data. The first encoder includes a first series of convolutional layers configured to generate a first series of respective feature maps based at least in part on the first image. A second encoder is configured to generate a second image code based at least in part on second image data and includes a second series of convolutional layers configured to generate a second series of respective feature maps based at least in part on the second image and disparity-warped feature data. Respective parametric skip functions associated with convolutional layers of the second series are configured to generate disparity-warped feature data based at least in part on disparity associated with the first series of respective feature maps and the second series of respective feature maps.
    Type: Application
    Filed: May 4, 2021
    Publication date: August 19, 2021
    Inventors: Jerry Junkai Liu, Shenlong Wang, Raquel Urtasun
  • Publication number: 20210248460
    Abstract: A computing system can be configured to generate, for an autonomous vehicle, a route through a transportation network comprising a plurality of segments. The computing system can receive sets of agent attention data from additional autonomous vehicles that are respectively located at one or more other segments of the transportation network. The computing system can input the sets of agent attention data into a value iteration graph neural network that comprises a plurality of nodes respectively corresponding to the plurality of segments of the transportation network. The computing system can receive node values for the respective segments as an output of the value iteration graph neural network. The computing system can select a next segment to include in the route for the autonomous vehicle based at least in part on the node values.
    Type: Application
    Filed: September 25, 2020
    Publication date: August 12, 2021
    Inventors: Quinlan Sykora, Mengye Ren, Raquel Urtasun
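    The final step of the abstract above, picking the next route segment from the network's node values, can be sketched directly. The value-iteration graph network is replaced here by a precomputed `node_values` dictionary; the adjacency structure and all names are illustrative assumptions.

    ```python
    def select_next_segment(current, adjacency, node_values):
        """Pick the neighboring segment with the highest node value, where
        node_values stands in for the output of the value-iteration graph
        neural network described in the abstract."""
        return max(adjacency[current], key=lambda seg: node_values[seg])

    def greedy_route(start, goal, adjacency, node_values, max_steps=10):
        """Roll the one-step choice forward into a full route, stopping at
        the goal or after max_steps segments."""
        route = [start]
        while route[-1] != goal and len(route) <= max_steps:
            route.append(select_next_segment(route[-1], adjacency, node_values))
        return route
    ```

    Because the node values already summarize attention data shared by other vehicles on the network, this one-step-greedy rule can account for congestion elsewhere without replanning from scratch.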
  • Patent number: 11080537
    Abstract: Systems and methods for facilitating communication with autonomous vehicles are provided. In one example embodiment, a computing system can obtain a first type of sensor data (e.g., camera image data) associated with a surrounding environment of an autonomous vehicle and/or a second type of sensor data (e.g., LIDAR data) associated with the surrounding environment of the autonomous vehicle. The computing system can generate overhead image data indicative of at least a portion of the surrounding environment of the autonomous vehicle based at least in part on the first and/or second types of sensor data. The computing system can determine one or more lane boundaries within the surrounding environment of the autonomous vehicle based at least in part on the overhead image data indicative of at least the portion of the surrounding environment of the autonomous vehicle and a machine-learned lane boundary detection model.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: August 3, 2021
    Assignee: UATC, LLC
    Inventors: Min Bai, Gellert Sandor Mattyus, Namdar Homayounfar, Shenlong Wang, Shrinidhi Kowshika Lakshmikanth, Raquel Urtasun
  • Patent number: 11061402
    Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
    Type: Grant
    Filed: February 7, 2018
    Date of Patent: July 13, 2021
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Mengye Ren, Andrei Pokrovsky, Bin Yang
  • Publication number: 20210209370
    Abstract: Systems and methods for performing semantic segmentation of three-dimensional data are provided. In one example embodiment, a computing system can be configured to obtain sensor data including three-dimensional data associated with an environment. The three-dimensional data can include a plurality of points and can be associated with one or more times. The computing system can be configured to determine data indicative of a two-dimensional voxel representation associated with the environment based at least in part on the three-dimensional data. The computing system can be configured to determine a classification for each point of the plurality of points within the three-dimensional data based at least in part on the two-dimensional voxel representation associated with the environment and a machine-learned semantic segmentation model. The computing system can be configured to initiate one or more actions based at least in part on the per-point classifications.
    Type: Application
    Filed: March 22, 2021
    Publication date: July 8, 2021
    Inventors: Chris Jia-Han Zhang, Wenjie Luo, Raquel Urtasun
  • Publication number: 20210200212
    Abstract: Systems and methods for generating motion plans for autonomous vehicles are provided. An autonomous vehicle can include a machine-learned motion planning system including one or more machine-learned models configured to generate target trajectories for the autonomous vehicle. The model(s) include a behavioral planning stage configured to receive situational data based at least in part on the one or more outputs of the set of sensors and to generate behavioral planning data based at least in part on the situational data and a unified cost function. The model(s) includes a trajectory planning stage configured to receive the behavioral planning data from the behavioral planning stage and to generate target trajectory data for the autonomous vehicle based at least in part on the behavioral planning data and the unified cost function.
    Type: Application
    Filed: March 20, 2020
    Publication date: July 1, 2021
    Inventors: Raquel Urtasun, Abbas Sadat, Mengye Ren, Andrei Pokrovsky, Yen-Chen Lin, Ersin Yumer
  • Patent number: 11017550
    Abstract: Systems and methods for detecting and tracking objects are provided. In one example, a computer-implemented method includes receiving sensor data from one or more sensors. The method includes inputting the sensor data to one or more machine-learned models including one or more first neural networks configured to detect one or more objects based at least in part on the sensor data and one or more second neural networks configured to track the one or more objects over a sequence of sensor data. The method includes generating, as an output of the one or more first neural networks, a 3D bounding box and detection score for a plurality of object detections. The method includes generating, as an output of the one or more second neural networks, a matching score associated with pairs of object detections. The method includes determining a trajectory for each object detection.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: May 25, 2021
    Assignee: UATC, LLC
    Inventors: Davi Eugenio Nascimento Frossard, Raquel Urtasun