Patents by Inventor Raquel Urtasun

Raquel Urtasun has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240302530
    Abstract: LiDAR-based memory segmentation includes obtaining a LiDAR point cloud that includes LiDAR points from a LiDAR sensor, voxelizing the LiDAR points to obtain LiDAR voxels, and encoding the LiDAR voxels to obtain encoded voxels. A LiDAR voxel memory is revised using the encoded voxels to obtain a revised LiDAR voxel memory, which is decoded to obtain decoded LiDAR voxel memory features. The LiDAR points are segmented using the decoded LiDAR voxel memory features to generate a segmented LiDAR point cloud.
    Type: Application
    Filed: March 7, 2024
    Publication date: September 12, 2024
    Applicant: Waabi Innovation Inc.
    Inventors: Enxu LI, Sergio CASAS ROMERO, Raquel URTASUN
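To make the pipeline above concrete, here is a minimal Python sketch of just the voxelization step; the grid origin, resolution, and dictionary-based grouping are illustrative assumptions, not details from the filing.

```python
import numpy as np

def voxelize(points, voxel_size=0.2, grid_min=(-50.0, -50.0, -3.0)):
    """Map LiDAR points (N, 3) to integer voxel indices and group them.

    voxel_size and grid_min are illustrative parameters, not values
    from the patent.
    """
    idx = np.floor((points - np.asarray(grid_min)) / voxel_size).astype(np.int64)
    # Group points that fall into the same voxel.
    voxels = {}
    for pt, key in zip(points, map(tuple, idx)):
        voxels.setdefault(key, []).append(pt)
    return {k: np.stack(v) for k, v in voxels.items()}

cloud = np.random.uniform(-50, 50, size=(1000, 3))
vox = voxelize(cloud)
print(f"{len(vox)} occupied voxels from {len(cloud)} points")
```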
  • Patent number: 12051001
    Abstract: Provided are systems and methods that perform multi-task and/or multi-sensor fusion for three-dimensional object detection in furtherance of, for example, autonomous vehicle perception and control. In particular, according to one aspect of the present disclosure, example systems and methods described herein exploit simultaneous training of a machine-learned model ensemble relative to multiple related tasks to learn to perform more accurate multi-sensor 3D object detection. For example, the present disclosure provides an end-to-end learnable architecture with multiple machine-learned models that interoperate to reason about 2D and/or 3D object detection as well as one or more auxiliary tasks. According to another aspect of the present disclosure, example systems and methods described herein can perform multi-sensor fusion (e.g. …
    Type: Grant
    Filed: October 24, 2022
    Date of Patent: July 30, 2024
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Bin Yang, Ming Liang
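As a rough illustration of joint multi-task training over fused sensor features, here is a toy PyTorch sketch; the feature dimensions, the single auxiliary task, and the loss are invented for the example and are not the patented architecture.

```python
import torch
import torch.nn as nn

class MultiTaskFusionNet(nn.Module):
    """Toy stand-in: a shared trunk over fused LiDAR+camera features,
    with a 3D-detection head and one auxiliary head trained jointly."""
    def __init__(self, lidar_dim=64, camera_dim=32, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(lidar_dim + camera_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.det_head = nn.Linear(hidden, 7)    # box: x, y, z, l, w, h, yaw
        self.aux_head = nn.Linear(hidden, 1)    # e.g., ground-height regression

    def forward(self, lidar_feat, camera_feat):
        fused = torch.cat([lidar_feat, camera_feat], dim=-1)  # sensor fusion
        h = self.trunk(fused)
        return self.det_head(h), self.aux_head(h)

net = MultiTaskFusionNet()
boxes, aux = net(torch.randn(4, 64), torch.randn(4, 32))
# Joint training: sum the task losses so the ensemble learns both at once.
loss = boxes.pow(2).mean() + aux.pow(2).mean()
loss.backward()
```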
  • Patent number: 12037027
    Abstract: Systems and methods for generating synthetic testing data for autonomous vehicles are provided. A computing system can obtain map data descriptive of an environment and object data descriptive of a plurality of objects within the environment. The computing system can generate context data including deep or latent features extracted from the map and object data by one or more machine-learned models. The computing system can process the context data with a machine-learned model to generate synthetic motion predictions for the plurality of objects. The synthetic motion predictions for the objects can include one or more synthesized states for the objects at future times. The computing system can provide, as an output, synthetic testing data that includes the plurality of synthetic motion predictions for the objects. The synthetic testing data can be used to test an autonomous vehicle control system in a simulation.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: July 16, 2024
    Assignee: UATC, LLC
    Inventors: Shun Da Suo, Sebastián David Regalado Lozano, Sergio Casas, Raquel Urtasun
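A minimal sketch of what synthetic motion predictions look like as data, assuming a constant-velocity-plus-noise sampler as a stand-in for the machine-learned model; only the shape of the output (sampled future states per object) is meant to match the abstract.

```python
import numpy as np

def synthesize_motions(states, n_steps=10, dt=0.5, n_samples=3, noise=0.2):
    """Roll out synthetic future states for each object.

    states: (N, 4) array of [x, y, vx, vy]. A constant-velocity model
    plus noise stands in for the learned motion model in the patent.
    Returns (n_samples, N, n_steps, 2) future positions.
    """
    rng = np.random.default_rng(0)
    pos, vel = states[:, :2], states[:, 2:]
    t = np.arange(1, n_steps + 1)[None, :, None] * dt         # (1, T, 1)
    rollout = pos[:, None, :] + vel[:, None, :] * t           # (N, T, 2)
    jitter = rng.normal(0.0, noise, size=(n_samples,) + rollout.shape)
    return rollout[None] + jitter

futures = synthesize_motions(np.array([[0.0, 0.0, 5.0, 0.0]]))
print(futures.shape)  # (3, 1, 10, 2): candidate test scenarios
```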
  • Patent number: 12037025
    Abstract: Systems and methods are disclosed for motion forecasting and planning for autonomous vehicles. For example, a plurality of future traffic scenarios are determined by modeling a joint distribution of actor trajectories for a plurality of actors, as opposed to an approach that models actors individually. As another example, a diversity objective is evaluated that rewards sampling of the future traffic scenarios that require distinct reactions from the autonomous vehicle. An estimated probability for the plurality of future traffic scenarios can be determined and used to generate a contingency plan for motion of the autonomous vehicle. The contingency plan can include at least one initial short-term trajectory intended for immediate action of the autonomous vehicle and a plurality of subsequent long-term trajectories associated with the plurality of future traffic scenarios.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: July 16, 2024
    Assignee: UATC, LLC
    Inventors: Alexander Yuhao Cui, Abbas Sadat, Sergio Casas, Renjie Liao, Raquel Urtasun
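The contingency-plan idea reduces to a simple expected-cost comparison once scenarios and their probabilities are given. A toy sketch, with all costs and probabilities invented:

```python
import numpy as np

def contingency_cost(short_cost, long_costs, probs):
    """Expected cost of a contingency plan: one committed short-term
    trajectory plus one long-term continuation per sampled scenario.
    All costs here are illustrative scalars."""
    return short_cost + float(np.dot(probs, long_costs))

# Three sampled futures with estimated probabilities; each candidate
# short-term trajectory carries its best continuation per scenario.
probs = np.array([0.5, 0.3, 0.2])
candidates = {
    "keep_lane": contingency_cost(1.0, np.array([0.5, 4.0, 6.0]), probs),
    "slow_down": contingency_cost(1.5, np.array([0.7, 0.9, 1.1]), probs),
}
print(min(candidates, key=candidates.get))  # -> "slow_down"
```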
  • Patent number: 12032067
    Abstract: Systems and methods for identifying travel way features in real time are provided. A method can include receiving two-dimensional and three-dimensional data associated with the surrounding environment of a vehicle. The method can include providing the two-dimensional data as one or more inputs into a machine-learned segmentation model to output a two-dimensional segmentation. The method can include fusing the two-dimensional segmentation with the three-dimensional data to generate a three-dimensional segmentation. The method can include storing the three-dimensional segmentation in a classification database with data indicative of one or more previously generated three-dimensional segmentations. The method can include providing one or more datapoint sets from the classification database as one or more inputs into a machine-learned enhancing model to obtain an enhanced three-dimensional segmentation.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: July 9, 2024
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Min Bai, Shenlong Wang
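A minimal sketch of the 2D-to-3D fusion step, assuming a standard pinhole projection and that fusion amounts to copying each projected point's pixel label; the intrinsics and shapes are illustrative.

```python
import numpy as np

def fuse_segmentation(points, labels_2d, K):
    """Lift a 2D segmentation onto 3D points by pinhole projection.

    points: (N, 3) in camera coordinates; labels_2d: (H, W) class map;
    K: 3x3 intrinsics. Copying the pixel label to the projected point
    stands in for the fusion step the abstract describes.
    """
    uvw = points @ K.T
    uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)
    h, w = labels_2d.shape
    valid = (points[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
            & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    labels_3d = np.full(len(points), -1)        # -1 = unlabeled
    labels_3d[valid] = labels_2d[uv[valid, 1], uv[valid, 0]]
    return labels_3d

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pts = np.array([[0.0, 0.0, 10.0], [1.0, 0.5, 5.0]])
print(fuse_segmentation(pts, np.ones((480, 640), dtype=int), K))
```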
  • Patent number: 12023812
    Abstract: Systems and methods for streaming sensor packets in real time are provided. An example method includes obtaining a sensor data packet representing a first portion of a 360-degree view of a surrounding environment of a robotic platform. The method includes generating, using machine-learned model(s), a local feature map based at least in part on the sensor data packet. The local feature map is indicative of local feature(s) associated with the first portion of the 360-degree view. The method includes updating, based at least in part on the local feature map, a spatial map to include the local feature(s). The spatial map includes previously extracted local features associated with a previous sensor data packet representing a different portion of the 360-degree view than the first portion. The method includes determining an object within the surrounding environment based on the updated spatial map.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: July 2, 2024
    Assignee: UATC, LLC
    Inventors: Sergio Casas, Davi Eugenio Nascimento Frossard, Shun Da Suo, Xuanyuan Tu, Raquel Urtasun
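A toy sketch of the streaming update: each arriving packet writes its local features into the cells of a persistent bird's-eye-view map, so detection need not wait for the full sweep. Grid size and the overwrite rule are illustrative assumptions.

```python
import numpy as np

class SpatialMap:
    """Bird's-eye-view feature map updated packet by packet, so the
    full 360-degree sweep never has to be buffered."""
    def __init__(self, size=100, channels=8):
        self.grid = np.zeros((size, size, channels))

    def update(self, cells, local_features):
        # cells: (M, 2) grid coordinates covered by this packet's sector;
        # overwrite stale features from the previous sweep at those cells.
        self.grid[cells[:, 0], cells[:, 1]] = local_features

m = SpatialMap()
sector_cells = np.array([[0, 0], [0, 1], [1, 0]])
m.update(sector_cells, np.random.randn(3, 8))
# A detector can now run on m.grid without waiting for the full sweep.
```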
  • Publication number: 20240199058
    Abstract: Systems and methods for determining object intentions through visual attributes are provided. A method can include determining, by a computing system, one or more regions of interest. The regions of interest can be associated with the surrounding environment of a first vehicle. The method can include determining, by the computing system, spatial features and temporal features associated with the regions of interest. The spatial features can be indicative of a vehicle orientation associated with a vehicle of interest. The temporal features can be indicative of a semantic state associated with signal lights of the vehicle of interest. The method can include determining, by the computing system, a vehicle intention. The vehicle intention can be based on the spatial and temporal features. The method can include initiating, by the computing system, an action. The action can be based on the vehicle intention.
    Type: Application
    Filed: February 29, 2024
    Publication date: June 20, 2024
    Inventors: Davi Eugenio Nascimento Frossard, Eric Randall Kee, Raquel Urtasun
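A toy PyTorch sketch of fusing a spatial feature (orientation) with a temporal feature (signal-light sequence) to score intentions; the architecture, feature sizes, and three-class output are invented for illustration.

```python
import torch
import torch.nn as nn

class IntentionHead(nn.Module):
    """Toy classifier fusing a spatial feature (vehicle orientation)
    with a temporal feature (turn-signal state sequence) to predict a
    discrete intention."""
    def __init__(self, spatial_dim=16, temporal_dim=8, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(temporal_dim, hidden, batch_first=True)
        self.cls = nn.Linear(spatial_dim + hidden, 3)  # left / right / straight

    def forward(self, spatial_feat, signal_seq):
        _, h = self.rnn(signal_seq)                    # summarize signal lights
        fused = torch.cat([spatial_feat, h[-1]], dim=-1)
        return self.cls(fused).softmax(-1)             # intention distribution

head = IntentionHead()
probs = head(torch.randn(2, 16), torch.randn(2, 10, 8))
# Downstream, the system initiates an action (e.g., yield) based on probs.
```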
  • Patent number: 12013457
    Abstract: Systems and methods for integrating radar and LiDAR data are disclosed. In particular, a computing system can access radar sensor data and LiDAR data for the area around the autonomous vehicle. The computing system can determine, using the one or more machine-learned models, one or more objects in the area of the autonomous vehicle. The computing system can, for a respective object, select a plurality of radar points from the radar sensor data. The computing system can generate a similarity score for each selected radar point. The computing system can generate a weight associated with each radar point based on the similarity score. The computing system can calculate a predicted velocity for the respective object based on a weighted average of a plurality of velocities associated with the plurality of radar points. The computing system can generate a proposed motion plan based on the predicted velocity for the respective object.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: June 18, 2024
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Bin Yang, Ming Liang, Sergio Casas, Runsheng Benson Guo
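The weighting scheme described above can be sketched in a few lines of numpy; the softmax mapping from similarity scores to weights is an illustrative choice, not necessarily the patented one.

```python
import numpy as np

def predict_velocity(radar_velocities, similarity_scores):
    """Weighted-average velocity for one object: turn per-point
    similarity scores into weights, then average the radar-measured
    velocities."""
    w = np.exp(similarity_scores - similarity_scores.max())
    w /= w.sum()
    return w @ radar_velocities          # (2,) mean velocity

vels = np.array([[9.8, 0.1], [10.2, -0.1], [3.0, 5.0]])  # last is an outlier
scores = np.array([2.0, 2.1, -3.0])     # low similarity down-weights it
print(predict_velocity(vels, scores))
```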
  • Patent number: 12008454
    Abstract: Systems and methods for generating motion forecast data for actors with respect to an autonomous vehicle and training a machine-learned model for the same are disclosed. The computing system can include an object detection model and a graph neural network including a plurality of nodes and a plurality of edges. The computing system can be configured to input sensor data into the object detection model; receive object detection data describing the location of the plurality of the actors relative to the autonomous vehicle as an output of the object detection model; input the object detection data into the graph neural network; iteratively update a plurality of node states respectively associated with the plurality of nodes; and receive, as an output of the graph neural network, the motion forecast data with respect to the plurality of actors.
    Type: Grant
    Filed: March 20, 2023
    Date of Patent: June 11, 2024
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Renjie Liao, Sergio Casas, Cole Christian Gulino
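A minimal PyTorch sketch of the iterative node-state update in a graph neural network over detected actors; the message function, sum aggregation via index_add_, and GRU update are common choices used here for illustration only.

```python
import torch
import torch.nn as nn

class InteractionGNN(nn.Module):
    """Toy message-passing network over actor nodes: each iteration,
    every node aggregates messages from its neighbours and updates its
    state with a GRU cell."""
    def __init__(self, dim=32):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)
        self.gru = nn.GRUCell(dim, dim)

    def forward(self, states, edges, n_iters=3):
        for _ in range(n_iters):
            src, dst = edges                 # (E,), (E,)
            m = torch.relu(self.msg(torch.cat([states[src], states[dst]], -1)))
            agg = torch.zeros_like(states).index_add_(0, dst, m)
            states = self.gru(agg, states)   # iterative node-state update
        return states

states = torch.randn(4, 32)                  # 4 detected actors
edges = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])
out = InteractionGNN()(states, edges)        # feeds a forecasting head
```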
  • Patent number: 11989847
    Abstract: The present disclosure provides systems and methods for generating photorealistic image simulation data with geometry-aware composition for testing autonomous vehicles. In particular, aspects of the present disclosure can involve the intake of data on an environment and output of augmented data on the environment with the photorealistic addition of an object. As one example, data on the driving experiences of a self-driving vehicle can be augmented to add another vehicle into the collected environment data. The augmented data may then be used to test safety features of software for a self-driving vehicle.
    Type: Grant
    Filed: February 10, 2022
    Date of Patent: May 21, 2024
    Assignee: UATC, LLC
    Inventors: Frieda Rong, Yun Chen, Shivam Duggal, Shenlong Wang, Xinchen Yan, Sivabalan Manivasagam, Ersin Yumer, Raquel Urtasun
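The core of geometry-aware composition can be illustrated with a per-pixel depth test: the inserted object only appears where it is nearer than the existing scene. A toy sketch with synthetic buffers (rendering and lighting omitted):

```python
import numpy as np

def composite(scene_rgb, scene_depth, obj_rgb, obj_depth):
    """Geometry-aware paste: the inserted object only overwrites pixels
    it is actually in front of, so real geometry occludes it correctly."""
    in_front = obj_depth < scene_depth
    out = scene_rgb.copy()
    out[in_front] = obj_rgb[in_front]
    return out

scene = np.zeros((4, 4, 3)); scene_d = np.full((4, 4), 20.0)
car = np.ones((4, 4, 3));    car_d = np.full((4, 4), np.inf)
car_d[1:3, 1:3] = 10.0                      # car occupies the centre
print(composite(scene, scene_d, car, car_d)[:, :, 0])
```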
  • Publication number: 20240159871
    Abstract: Unsupervised object detection from LiDAR point clouds includes forecasting a set of new positions of a set of objects in a geographic region based on a first set of object tracks to obtain a set of forecasted object positions, and obtaining a new LiDAR point cloud of the geographic region. A detector model processes the new LiDAR point cloud to obtain a new set of bounding boxes around the set of objects detected in the new LiDAR point cloud. Object detection further includes matching the new set of bounding boxes to the set of forecasted object positions to generate a set of matches, updating the first set of object tracks with the new set of bounding boxes according to the set of matches to obtain an updated set of object tracks, and filtering, after updating, the updated set of object tracks to remove object tracks failing to satisfy a track length threshold, to generate a training set of object tracks.
    Type: Application
    Filed: November 10, 2023
    Publication date: May 16, 2024
    Inventors: Lunjun ZHANG, Yuwen XIONG, Sergio CASAS ROMERO, Mengye REN, Raquel URTASUN, Anqi Joyce YANG
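A toy sketch of one iteration of the match-update-filter loop described above; the greedy nearest-neighbour matching and the numeric thresholds are illustrative assumptions.

```python
import numpy as np

def update_tracks(tracks, forecasts, detections, match_dist=2.0, min_len=5):
    """Greedily match new detections to forecasted positions, extend
    matched tracks, then keep only tracks long enough to serve as
    trustworthy pseudo-labels."""
    used = set()
    for tid, fpos in forecasts.items():
        dists = np.linalg.norm(detections - fpos, axis=1)
        j = int(dists.argmin())
        if dists[j] < match_dist and j not in used:
            tracks[tid].append(detections[j])   # extend matched track
            used.add(j)
    # Filter: short tracks are likely spurious and are dropped.
    return {tid: t for tid, t in tracks.items() if len(t) >= min_len}

tracks = {0: [np.zeros(2)] * 5, 1: [np.zeros(2)]}
forecasts = {0: np.array([1.0, 0.0]), 1: np.array([9.0, 9.0])}
dets = np.array([[1.1, 0.1], [5.0, 5.0]])
print(update_tracks(tracks, forecasts, dets).keys())  # track 1 filtered out
```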
  • Publication number: 20240157978
    Abstract: A method includes obtaining, from sensor data, map data of a geographic region and multiple trajectories of multiple agents located in the geographic region. The agents and the map data have corresponding physical locations in the geographic region. The method further includes determining, for an agent, an agent route from a trajectory that corresponds to the agent, generating, by an encoder model, an interaction encoding that encodes the trajectories and the map data, and generating, from the interaction encoding, an agent attribute encoding of the agent and the agent route. The method further includes processing the agent attribute encoding to generate positional information for the agent, and updating the trajectory of the agent using the positional information to obtain an updated trajectory.
    Type: Application
    Filed: November 10, 2023
    Publication date: May 16, 2024
    Inventors: Kelvin WONG, Simon SUO, Raquel URTASUN
  • Publication number: 20240161436
    Abstract: Compact LiDAR representation includes performing operations that include generating a three-dimensional (3D) LiDAR image from LiDAR input data, encoding, by an encoder model, the 3D LiDAR image to a continuous embedding in continuous space, and performing, using a code map, a vector quantization of the continuous embedding to generate a discrete embedding. The operations further include decoding, by a decoder model, the discrete embedding to generate modified LiDAR data, and outputting the modified LiDAR data.
    Type: Application
    Filed: November 10, 2023
    Publication date: May 16, 2024
    Inventors: Yuwen XIONG, Wei-Chiu MA, Jingkang WANG, Raquel URTASUN
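Vector quantization against a code map reduces to a nearest-neighbour lookup, sketched below in numpy; the codebook size, embedding dimension, and random "learned" codes are placeholders.

```python
import numpy as np

def vector_quantize(embedding, code_map):
    """Snap each continuous embedding vector to its nearest codebook
    entry, the core of the vector-quantization step in the abstract."""
    # (N, K) pairwise distances between embeddings and codes
    d = np.linalg.norm(embedding[:, None, :] - code_map[None, :, :], axis=-1)
    codes = d.argmin(axis=1)            # discrete token per vector
    return codes, code_map[codes]       # tokens and the quantized embedding

rng = np.random.default_rng(0)
code_map = rng.normal(size=(512, 16))   # learned codebook in practice
z = rng.normal(size=(8, 16))            # encoder output for a LiDAR image
tokens, z_q = vector_quantize(z, code_map)
print(tokens)  # the discrete embedding passed to the decoder
```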
  • Patent number: 11972606
    Abstract: Systems and methods for facilitating communication with autonomous vehicles are provided. In one example embodiment, a computing system can obtain a first type of sensor data (e.g., camera image data) associated with a surrounding environment of an autonomous vehicle and/or a second type of sensor data (e.g., LIDAR data) associated with the surrounding environment of the autonomous vehicle. The computing system can generate overhead image data indicative of at least a portion of the surrounding environment of the autonomous vehicle based at least in part on the first and/or second types of sensor data. The computing system can determine one or more lane boundaries within the surrounding environment of the autonomous vehicle based at least in part on the overhead image data indicative of at least the portion of the surrounding environment of the autonomous vehicle and a machine-learned lane boundary detection model.
    Type: Grant
    Filed: May 8, 2023
    Date of Patent: April 30, 2024
    Assignee: UATC, LLC
    Inventors: Min Bai, Gellert Sandor Mattyus, Namdar Homayounfar, Shenlong Wang, Shrinidhi Kowshika Lakshmikanth, Raquel Urtasun
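A minimal sketch of producing an overhead image from LiDAR returns, the kind of input the lane-boundary model consumes; resolution, extent, and the last-return-wins rasterization rule are illustrative.

```python
import numpy as np

def overhead_image(points, intensity, res=0.1, extent=50.0):
    """Rasterize LiDAR returns into an overhead (bird's-eye-view)
    intensity image."""
    n = int(2 * extent / res)
    img = np.zeros((n, n))
    ij = ((points[:, :2] + extent) / res).astype(int)
    ok = (ij >= 0).all(1) & (ij < n).all(1)
    img[ij[ok, 1], ij[ok, 0]] = intensity[ok]   # keep latest return per cell
    return img

pts = np.random.uniform(-50, 50, (1000, 3))
img = overhead_image(pts, np.random.rand(1000))
print(img.shape)   # (1000, 1000) input for the lane-boundary model
```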
  • Publication number: 20240104335
    Abstract: Motion forecasting for autonomous systems includes obtaining map data of a geographic region and historical trajectories of agents located in the geographic region. The map data includes map elements. The agents and the map elements have corresponding physical locations in the geographic region. Motion forecasting further includes building, from the historical trajectories and the map data, a heterogeneous graph for the agents and the map elements. The heterogeneous graph defines the corresponding physical locations of the agents and the map elements relative to each other. Motion forecasting further includes modeling, by a graph neural network, agent actions using the heterogeneous graph to generate an agent goal location, and operating an autonomous system based on the agent goal location.
    Type: Application
    Filed: September 14, 2023
    Publication date: March 28, 2024
    Applicant: Waabi Innovation Inc.
    Inventors: Alexander CUI, Sergio CASAS, Raquel URTASUN
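A minimal sketch of a heterogeneous graph as a data structure: typed nodes with physical locations and typed edges carrying relative geometry. The node and edge types shown are invented examples, not the patent's schema.

```python
import numpy as np

# Typed nodes: agents and map elements, each with a physical location.
nodes = {
    "agent": np.array([[0.0, 0.0], [10.0, 2.0]]),
    "lane":  np.array([[5.0, 0.0], [15.0, 0.0]]),
}

def relative_edges(src, dst):
    """Edge features as relative displacement: where each dst node sits
    as seen from each src node."""
    return dst[None, :, :] - src[:, None, :]

edges = {
    ("agent", "near", "agent"): relative_edges(nodes["agent"], nodes["agent"]),
    ("agent", "on", "lane"):    relative_edges(nodes["agent"], nodes["lane"]),
}
print(edges[("agent", "on", "lane")].shape)  # (2, 2, 2) relative geometry
# A graph neural network would message-pass over these typed edges to
# predict each agent's goal location.
```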
  • Publication number: 20240096083
    Abstract: A computer-implemented method for determining scene-consistent motion forecasts from sensor data can include obtaining scene data including one or more actor features. The computer-implemented method can include providing the scene data to a latent prior model, the latent prior model configured to generate scene latent data in response to receipt of scene data, the scene latent data including one or more latent variables. The computer-implemented method can include obtaining the scene latent data from the latent prior model. The computer-implemented method can include sampling latent sample data from the scene latent data. The computer-implemented method can include providing the latent sample data to a decoder model, the decoder model configured to decode the latent sample data into a motion forecast including one or more predicted trajectories of the one or more actor features.
    Type: Application
    Filed: November 27, 2023
    Publication date: March 21, 2024
    Inventors: Sergio Casas, Cole Christian Gulino, Shun Da Suo, Katie Z. Luo, Renjie Liao, Raquel Urtasun
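A toy PyTorch sketch of the latent-prior-plus-decoder pattern: one sample of the scene latent conditions every actor's decoded trajectory, which is what makes a single draw scene-consistent. All sizes and the Gaussian parameterization are illustrative.

```python
import torch
import torch.nn as nn

class LatentPrior(nn.Module):
    """Toy scene-level latent prior: map scene features to the mean and
    log-variance of a Gaussian over latent variables, then sample."""
    def __init__(self, scene_dim=32, latent_dim=8):
        super().__init__()
        self.net = nn.Linear(scene_dim, 2 * latent_dim)

    def forward(self, scene_feat):
        mu, logvar = self.net(scene_feat).chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # sample

decoder = nn.Linear(8 + 16, 10 * 2)     # latent + actor feature -> 10 waypoints
z = LatentPrior()(torch.randn(1, 32))   # one draw = one coherent scene future
actors = torch.randn(3, 16)             # three actor features from the scene
trajs = decoder(torch.cat([z.expand(3, -1), actors], -1)).view(3, 10, 2)
```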
  • Publication number: 20240085908
    Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
    Type: Application
    Filed: November 17, 2023
    Publication date: March 14, 2024
    Inventors: Raquel Urtasun, Mengye Ren, Andrei Pokrovsky, Bin Yang
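A naive sketch of skipping convolutions over empty regions, assuming a tile-based occupancy test; production systems gather and scatter sparse features rather than looping in Python, and the tile size and threshold here are invented.

```python
import torch
import torch.nn.functional as F

def sparse_conv(image, weight, patch=64, keep_thresh=1e-3):
    """Run a convolution only on patches that contain signal, skipping
    regions where the input is all (near) zero."""
    _, _, h, w = image.shape
    out = torch.zeros(1, weight.shape[0], h, w)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = image[:, :, y:y + patch, x:x + patch]
            if tile.abs().max() < keep_thresh:
                continue                       # skip empty region
            out[:, :, y:y + patch, x:x + patch] = F.conv2d(
                tile, weight, padding=weight.shape[-1] // 2)
    return out

img = torch.zeros(1, 3, 256, 256)
img[:, :, :64, :64] = torch.randn(1, 3, 64, 64)   # sparse content
w = torch.randn(8, 3, 3, 3)
out = sparse_conv(img, w)   # only 1 of 16 tiles is convolved
```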
  • Patent number: 11926337
    Abstract: Systems and methods for determining object intentions through visual attributes are provided. A method can include determining, by a computing system, one or more regions of interest. The regions of interest can be associated with the surrounding environment of a first vehicle. The method can include determining, by the computing system, spatial features and temporal features associated with the regions of interest. The spatial features can be indicative of a vehicle orientation associated with a vehicle of interest. The temporal features can be indicative of a semantic state associated with signal lights of the vehicle of interest. The method can include determining, by the computing system, a vehicle intention. The vehicle intention can be based on the spatial and temporal features. The method can include initiating, by the computing system, an action. The action can be based on the vehicle intention.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: March 12, 2024
    Assignee: UATC, LLC
    Inventors: Davi Eugenio Nascimento Frossard, Eric Randall Kee, Raquel Urtasun
  • Publication number: 20240054407
    Abstract: The present disclosure provides systems and methods for training probabilistic object motion prediction models using non-differentiable representations of prior knowledge. As one example, object motion prediction models can be used by autonomous vehicles to probabilistically predict the future location(s) of observed objects (e.g., other vehicles, bicyclists, pedestrians, etc.). For example, such models can output a probability distribution that provides a distribution of probabilities for the future location(s) of each object at one or more future times. Aspects of the present disclosure enable these models to be trained using non-differentiable prior knowledge about motion of objects within the autonomous vehicle's environment such as, for example, prior knowledge about lane or road geometry or topology and/or traffic information such as current traffic control states (e.g., traffic light status).
    Type: Application
    Filed: October 26, 2023
    Publication date: February 15, 2024
    Inventors: Sergio Casas, Cole Christian Gulino, Shun Da Suo, Raquel Urtasun
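One standard way to train through a non-differentiable check is a score-function (REINFORCE) penalty; the sketch below uses that estimator with a toy "off-road" rule standing in for the prior knowledge. The Gaussian output head and the estimator choice are illustrative, not necessarily the patent's mechanism.

```python
import torch

def prior_knowledge_loss(mu, sigma, violates, n_samples=64):
    """Sample future positions from the predicted Gaussian, check each
    sample against a non-differentiable rule (e.g., an off-road test on
    the map), and push probability mass away from violating samples via
    the log-probability (REINFORCE) term."""
    dist = torch.distributions.Normal(mu, sigma)
    x = dist.sample((n_samples,))               # no gradient through samples
    penalty = violates(x).float()               # 1 where prior knowledge is violated
    return (penalty * dist.log_prob(x).sum(-1)).mean()

mu = torch.zeros(2, requires_grad=True)
sigma = torch.ones(2)
off_road = lambda x: x[..., 0] > 1.0            # toy non-differentiable rule
loss = prior_knowledge_loss(mu, sigma, off_road)
loss.backward()                                 # gradient flows via log-prob
print(mu.grad)
```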
  • Patent number: 11880771
    Abstract: Systems and methods are provided for machine-learned models including convolutional neural networks that generate predictions using continuous convolution techniques. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can perform, with a machine-learned convolutional neural network, one or more convolutions over input data using a continuous filter relative to a support domain associated with the input data, and receive a prediction from the machine-learned convolutional neural network. A machine-learned convolutional neural network in some examples includes at least one continuous convolution layer configured to perform convolutions over input data with a parametric continuous kernel.
    Type: Grant
    Filed: January 12, 2023
    Date of Patent: January 23, 2024
    Assignee: UATC, LLC
    Inventors: Shenlong Wang, Wei-Chiu Ma, Shun Da Suo, Raquel Urtasun, Ming Liang
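A minimal sketch of a parametric continuous kernel: an MLP maps each neighbour's offset in the support domain to convolution weights, so the operation works at arbitrary (off-grid) point locations. All dimensions are illustrative.

```python
import torch
import torch.nn as nn

class ContinuousConv(nn.Module):
    """Parametric continuous convolution sketch: an MLP plays the role
    of the kernel, mapping each neighbour's relative offset to a weight
    matrix applied to that neighbour's features."""
    def __init__(self, in_dim=8, out_dim=16, hidden=32):
        super().__init__()
        self.kernel = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim * out_dim))
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, feats, offsets):
        # feats: (N, K, in_dim) neighbour features; offsets: (N, K, 3)
        w = self.kernel(offsets).view(*offsets.shape[:2], self.out_dim, self.in_dim)
        # weight each neighbour by its continuous kernel, then sum over neighbours
        return torch.einsum("nkoi,nki->no", w, feats)

conv = ContinuousConv()
out = conv(torch.randn(100, 16, 8), torch.randn(100, 16, 3))
print(out.shape)   # (100, 16): one feature per query point
```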