Patents by Inventor Raquel Urtasun

Raquel Urtasun has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240427022
    Abstract: Systems and methods for identifying travel way features in real time are provided. A method can include receiving two-dimensional and three-dimensional data associated with the surrounding environment of a vehicle. The method can include providing the two-dimensional data as one or more inputs into a machine-learned segmentation model to output a two-dimensional segmentation. The method can include fusing the two-dimensional segmentation with the three-dimensional data to generate a three-dimensional segmentation. The method can include storing the three-dimensional segmentation in a classification database with data indicative of one or more previously generated three-dimensional segmentations. The method can include providing one or more datapoint sets from the classification database as one or more inputs into a machine-learned enhancing model to obtain an enhanced three-dimensional segmentation.
    Type: Application
    Filed: May 23, 2024
    Publication date: December 26, 2024
    Inventors: Raquel Urtasun, Min Bai, Shenlong Wang
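As an illustrative sketch only (not code from the filing), the 2D-to-3D fusion step described above can be approximated by projecting each 3D point into the image and inheriting the class label of the pixel it lands on. The pinhole camera model, the intrinsics, and all names here are assumptions:

```python
def fuse_segmentation(labels_2d, points_3d, fx=100.0, fy=100.0, cx=32.0, cy=32.0):
    """Attach a 2D class label to every 3D point that projects into the image."""
    h, w = len(labels_2d), len(labels_2d[0])
    fused = []
    for (x, y, z) in points_3d:
        if z <= 0:                     # behind the camera: no label available
            continue
        u = int(fx * x / z + cx)       # pinhole projection to pixel coords
        v = int(fy * y / z + cy)
        if 0 <= u < w and 0 <= v < h:
            fused.append(((x, y, z), labels_2d[v][u]))
    return fused

labels = [[0] * 64 for _ in range(64)]
labels[32][32] = 7                             # say, a "road" pixel
points = [(0.0, 0.0, 5.0), (10.0, 0.0, 1.0)]   # second point projects off-image
print(fuse_segmentation(labels, points))       # only the first point gets a label
```

Points that fall outside the image (or behind the camera) are simply left unlabeled, which is why the patent's enhancing model operates on accumulated datapoint sets afterwards.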
  • Publication number: 20240411663
    Abstract: Latent representation based appearance modification for adversarial testing and training includes obtaining a first latent representation of an actor, performing a modification of the first latent representation to obtain a second latent representation, and generating a 3D model from the second latent representation. The operations further include performing, by a simulator interacting with the virtual driver, a simulation of the virtual world having the 3D model of the actor and the autonomous system moving in the virtual world, evaluating the virtual driver interacting in the virtual world during the simulation to obtain an evaluation result, and outputting the evaluation result.
    Type: Application
    Filed: June 6, 2024
    Publication date: December 12, 2024
    Applicant: WAABI Innovation Inc.
    Inventors: Jay SARVA, Jingkang WANG, James TU, Yuwen XIONG, Sivabalan MANIVASAGAM, Raquel URTASUN
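A minimal sketch of the adversarial appearance-modification loop, assuming random perturbation of the latent code and toy stand-ins for the decoder and the driver evaluation (none of these names or choices come from the filing):

```python
import random

def adversarial_latent_search(z0, decode, evaluate, steps=20, sigma=0.2, seed=0):
    """Random-search sketch: perturb the actor's latent appearance code and
    keep the perturbation that most degrades the virtual driver's score."""
    rng = random.Random(seed)
    best_z, best_score = z0, evaluate(decode(z0))
    for _ in range(steps):
        cand = [v + rng.gauss(0, sigma) for v in best_z]
        score = evaluate(decode(cand))
        if score < best_score:   # lower score = worse driving = harder test case
            best_z, best_score = cand, score
    return best_z, best_score

# Toy stand-ins: "decoding" is identity, and the driver scores worse as the
# actor's appearance drifts from the nominal one.
decode = lambda z: z
evaluate = lambda model: 1.0 - sum(abs(v) for v in model)
z, s = adversarial_latent_search([0.0, 0.0], decode, evaluate)
print(s <= evaluate(decode([0.0, 0.0])))  # the search never worsens the attack
```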
  • Publication number: 20240409124
    Abstract: A method implements automatic labeling of objects from LiDAR point clouds via trajectory level refinement. The method includes executing an encoder model using a set of bounding box vectors and a set of point clouds to generate a set of combined feature vectors and executing an attention model using the set of combined feature vectors to generate a set of updated feature vectors. The method further includes executing a decoder model using the set of updated feature vectors to generate a set of pose residuals and a size residual and updating the set of bounding box vectors with the set of pose residuals and the size residual to generate a set of refined bounding box vectors. The method further includes executing an action responsive to the set of refined bounding box vectors.
    Type: Application
    Filed: June 6, 2024
    Publication date: December 12, 2024
    Applicant: Waabi Innovation Inc.
    Inventors: Anqi Joyce YANG, Sergio CASAS ROMERO, Mikita DVORNIK, Sean SEGAL, Raquel URTASUN
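The final update step of this entry (applying per-frame pose residuals plus one shared size residual to a track of boxes) can be sketched as follows. The (x, y, heading, length, width) box layout is an assumption for illustration:

```python
def apply_residuals(boxes, pose_residuals, size_residual):
    """Refine (x, y, heading, length, width) boxes: one pose residual per
    frame, one size residual shared across the whole trajectory."""
    dl, dw = size_residual
    return [(x + dx, y + dy, h + dh, l + dl, w + dw)
            for (x, y, h, l, w), (dx, dy, dh) in zip(boxes, pose_residuals)]

track = [(0.0, 0.0, 0.0, 4.5, 1.9), (1.0, 0.1, 0.0, 4.5, 1.9)]
refined = apply_residuals(track, [(0.1, -0.1, 0.0), (0.0, 0.0, 0.05)], (0.2, 0.0))
print(refined)
```

A single size residual per trajectory reflects that the tracked object is rigid, which is the point of trajectory-level (rather than per-frame) refinement.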
  • Publication number: 20240412497
    Abstract: A method implements multimodal four-dimensional panoptic segmentation. The method includes receiving a set of images and a set of point clouds and executing an image encoder model using the set of images to extract a set of image feature maps. The method further includes executing a point voxel encoder model using the set of image feature maps and the set of point clouds to extract a set of voxel features, a set of image features, and a set of point features and executing a panoptic decoder model using the set of voxel features, the set of image features, the set of point features, and a set of queries to generate a semantic mask and a track mask. The method further includes performing an action responsive to at least one of the semantic mask and the track mask.
    Type: Application
    Filed: June 6, 2024
    Publication date: December 12, 2024
    Applicant: Waabi Innovation Inc.
    Inventors: Ali ATHAR, Enxu LI, Sergio CASAS ROMERO, Raquel URTASUN
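The semantic mask and track mask produced by the panoptic decoder are commonly folded into a single panoptic id per point. This encoding is a standard convention, not something specified in the filing:

```python
def panoptic_ids(semantic_mask, track_mask, offset=1000):
    """Fold per-point semantic class and track id into one panoptic id
    (class * offset + instance), the usual panoptic encoding."""
    return [s * offset + t for s, t in zip(semantic_mask, track_mask)]

print(panoptic_ids([2, 2, 9], [5, 6, 0]))  # -> [2005, 2006, 9000]
```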
  • Publication number: 20240391097
    Abstract: Systems and methods for streaming sensor packets in real-time are provided. An example method includes obtaining a sensor data packet representing a first portion of a three-hundred and sixty degree view of a surrounding environment of a robotic platform. The method includes generating, using machine-learned model(s), a local feature map based at least in part on the sensor data packet. The local feature map is indicative of local feature(s) associated with the first portion of the three-hundred and sixty degree view. The method includes updating, based at least in part on the local feature map, a spatial map to include the local feature(s). The spatial map includes previously extracted local features associated with a previous sensor data packet representing a different portion of the three-hundred and sixty degree view than the first portion. The method includes determining an object within the surrounding environment based on the updated spatial map.
    Type: Application
    Filed: May 21, 2024
    Publication date: November 28, 2024
    Inventors: Sergio Casas, Davi Eugenio Nascimento Frossard, Shun Da Suo, Xuanyuan Tu, Raquel Urtasun
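A toy sketch of the spatial-map update: each incoming packet covers one angular sector of the 360-degree sweep, so only that sector's features are replaced while earlier sectors persist. The sector discretization and dictionary representation are assumptions for illustration:

```python
import math

def update_spatial_map(spatial_map, packet_points, packet_features, sectors=8):
    """Write the packet's local features into the angular sector(s) it covers,
    keeping features extracted from earlier packets of the same sweep."""
    updated = dict(spatial_map)
    for (x, y), feat in zip(packet_points, packet_features):
        azimuth = math.atan2(y, x) % (2 * math.pi)
        sector = int(azimuth / (2 * math.pi / sectors))
        updated[sector] = feat
    return updated

m = update_spatial_map({}, [(1.0, 0.0)], ["front-features"])
m = update_spatial_map(m, [(-1.0, 0.0)], ["rear-features"])
print(m)  # both sectors populated, from two different packets
```

Processing packets as they stream in, instead of waiting for a full sweep, is what reduces latency in the described method.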
  • Publication number: 20240391504
    Abstract: Systems and methods for generating synthetic testing data for autonomous vehicles are provided. A computing system can obtain map data descriptive of an environment and object data descriptive of a plurality of objects within the environment. The computing system can generate context data including deep or latent features extracted from the map and object data by one or more machine-learned models. The computing system can process the context data with a machine-learned model to generate synthetic motion predictions for the plurality of objects. The synthetic motion predictions for the objects can include one or more synthesized states for the objects at future times. The computing system can provide, as an output, synthetic testing data that includes the plurality of synthetic motion predictions for the objects. The synthetic testing data can be used to test an autonomous vehicle control system in a simulation.
    Type: Application
    Filed: May 28, 2024
    Publication date: November 28, 2024
    Inventors: Shun Da Suo, Sebastián David Regalado Lozano, Sergio Casas, Raquel Urtasun
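For intuition only, here is what rolling out synthesized future states could look like with a constant-velocity model plus noise standing in for the machine-learned predictor; the object state layout, horizon, and noise model are all assumptions:

```python
import random

def synthesize_motion(objects, horizon=4, dt=0.5, sigma=0.1, seed=0):
    """Roll out noisy future (x, y) states for objects given as (x, y, vx, vy).
    A constant-velocity + noise stand-in for the learned prediction model."""
    rng = random.Random(seed)
    predictions = []
    for x, y, vx, vy in objects:
        states = []
        for _ in range(horizon):
            x += vx * dt + rng.gauss(0, sigma)
            y += vy * dt + rng.gauss(0, sigma)
            states.append((x, y))
        predictions.append(states)
    return predictions

preds = synthesize_motion([(0.0, 0.0, 2.0, 0.0)])
print(preds)  # one object, four synthesized future states
```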
  • Publication number: 20240386656
    Abstract: Deferred neural lighting in augmented image generation includes performing operations. The operations include generating a source lighting representation of a real-world scene from a panoramic image of the real-world scene, augmenting the real-world scene in an object representation of the real-world scene to generate an augmented scene, and processing the augmented scene to generate augmented image buffers. The operations further include selecting a target lighting representation identifying a target light source, processing, by a neural deferred rendering model, the augmented image buffers, the source lighting representation, and the target lighting representation to generate an augmented image having a lighting appearance according to the target light source, and outputting the augmented image.
    Type: Application
    Filed: May 16, 2024
    Publication date: November 21, 2024
    Applicant: WAABI Innovation Inc.
    Inventors: Ava PUN, Gary SUN, Jingkang WANG, Yun CHEN, Ze YANG, Sivabalan MANIVASAGAM, Wei-Chiu MA, Raquel URTASUN
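As a classical (non-neural) stand-in for the deferred rendering model above, relighting can be pictured per pixel as dividing out the source shading and applying the target shading. This is only an intuition-building sketch; the patented approach uses a learned model rather than this ratio trick:

```python
def relight(albedo, source_shading, target_shading, eps=1e-6):
    """Per-pixel toy relighting: remove the source lighting contribution,
    then apply the target lighting contribution."""
    return [a / max(s, eps) * t
            for a, s, t in zip(albedo, source_shading, target_shading)]

# Two pixels: dim the first (target light weaker), brighten the second.
print(relight([0.8, 0.4], [1.0, 0.5], [0.5, 1.0]))
```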
  • Patent number: 12141995
    Abstract: Systems and methods for generating simulation data based on real-world dynamic objects are provided. A method includes obtaining two- and three-dimensional data descriptive of a dynamic object in the real world. The two- and three-dimensional information can be provided as an input to a machine-learned model to receive object model parameters descriptive of a pose and shape modification with respect to a three-dimensional template object model. The parameters can represent a three-dimensional dynamic object model indicative of an object pose and an object shape for the dynamic object. The method can be repeated on sequential two- and three-dimensional information to generate a sequence of object model parameters over time. Portions of a sequence of parameters can be stored as simulation data descriptive of a simulated trajectory of a unique dynamic object. The parameters can be evaluated by an objective function to refine the parameters and train the machine-learned model.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: November 12, 2024
    Assignee: AURORA OPERATIONS, INC.
    Inventors: Ming Liang, Wei-Chiu Ma, Sivabalan Manivasagam, Raquel Urtasun, Bin Yang, Ze Yang
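A sketch of the template-deformation idea in this entry: the model parameters describe a shape modification and a pose applied to a three-dimensional template. Here both are reduced to per-vertex offsets plus a rigid translation, which is a simplification for illustration (real pose parameters would include rotation and articulation):

```python
def deform_template(template_vertices, shape_offsets, pose_translation):
    """Apply per-vertex shape offsets to a template model, then a rigid
    translation, yielding one frame of the dynamic object model."""
    tx, ty, tz = pose_translation
    return [(x + dx + tx, y + dy + ty, z + dz + tz)
            for (x, y, z), (dx, dy, dz) in zip(template_vertices, shape_offsets)]

verts = deform_template([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
                        [(0.1, 0.0, 0.0), (-0.1, 0.0, 0.0)],
                        (5.0, 0.0, 0.0))
print(verts)
```

Repeating this over sequential frames, as the abstract describes, yields a sequence of model parameters that traces a simulated trajectory.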
  • Publication number: 20240367688
    Abstract: Systems and methods are disclosed for motion forecasting and planning for autonomous vehicles. For example, a plurality of future traffic scenarios are determined by modeling a joint distribution of actor trajectories for a plurality of actors, as opposed to an approach that models actors individually. As another example, a diversity objective is evaluated that rewards sampling of the future traffic scenarios that require distinct reactions from the autonomous vehicle. An estimated probability for the plurality of future traffic scenarios can be determined and used to generate a contingency plan for motion of the autonomous vehicle. The contingency plan can include at least one initial short-term trajectory intended for immediate action of the autonomous vehicle and a plurality of subsequent long-term trajectories associated with the plurality of future traffic scenarios.
    Type: Application
    Filed: May 8, 2024
    Publication date: November 7, 2024
    Inventors: Alexander Yuhao Cui, Abbas Sadat, Sergio Casas, Renjie Liao, Raquel Urtasun
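The contingency-planning idea (one committed short-term trajectory, scenario-specific long-term continuations) can be sketched as choosing the short-term option with the lowest probability-weighted cost over sampled scenarios. The cost table and scenario names below are invented for illustration:

```python
def plan_contingency(short_term_options, scenarios, cost):
    """Pick the short-term trajectory with the lowest expected cost over
    sampled future scenarios, each given as (probability, scenario)."""
    def expected_cost(prefix):
        return sum(p * cost(prefix, scenario) for p, scenario in scenarios)
    return min(short_term_options, key=expected_cost)

# Toy example: "keep" is disastrous if the lead car stops; "brake" is mildly
# costly either way, so it wins under uncertainty.
costs = {("keep", "lead_stops"): 10.0, ("keep", "lead_goes"): 0.0,
         ("brake", "lead_stops"): 1.0, ("brake", "lead_goes"): 2.0}
best = plan_contingency(["keep", "brake"],
                        [(0.5, "lead_stops"), (0.5, "lead_goes")],
                        lambda action, s: costs[(action, s)])
print(best)  # -> "brake"
```

This is exactly why the diversity objective in the abstract matters: scenarios that demand distinct reactions are the ones that change which short-term plan is safest.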
  • Patent number: 12127085
    Abstract: Systems and methods for improved vehicle-to-vehicle communications are provided. A system can obtain sensor data depicting its surrounding environment and input the sensor data (or processed sensor data) to a machine-learned model to perceive its surrounding environment based on its location within the environment. The machine-learned model can generate an intermediate environmental representation that encodes features within the surrounding environment. The system can receive a number of different intermediate environmental representations and corresponding locations from various other systems, aggregate the representations based on the corresponding locations, and perceive its surrounding environment based on the aggregated representations. The system can determine relative poses between each of the systems and an absolute pose for each system based on the representations.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: October 22, 2024
    Assignee: AURORA OPERATIONS, INC.
    Inventors: Nicholas Baskar Vadivelu, Mengye Ren, Xuanyuan Tu, Raquel Urtasun, Jingkang Wang
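A toy sketch of aggregating intermediate representations from multiple senders based on their locations; the distance-based weighting is an assumption standing in for the learned aggregation:

```python
def aggregate_representations(own_rep, received, own_loc):
    """Distance-weighted mean of intermediate environmental representations:
    nearer senders contribute more. `received` holds (location, representation)."""
    def weight(loc):
        d = ((loc[0] - own_loc[0]) ** 2 + (loc[1] - own_loc[1]) ** 2) ** 0.5
        return 1.0 / (1.0 + d)
    pairs = [(1.0, own_rep)] + [(weight(loc), rep) for loc, rep in received]
    total = sum(w for w, _ in pairs)
    dim = len(own_rep)
    return [sum(w * rep[i] for w, rep in pairs) / total for i in range(dim)]

agg = aggregate_representations([1.0, 0.0], [((0.0, 0.0), [0.0, 1.0])], (0.0, 0.0))
print(agg)  # equal-weight blend of own and received features
```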
  • Patent number: 12124269
    Abstract: Systems and methods for the simultaneous localization and mapping of autonomous vehicle systems are provided. A method includes receiving a plurality of input image frames from the plurality of asynchronous image devices triggered at different times to capture the plurality of input image frames. The method includes identifying reference image frame(s) corresponding to a respective input image frame by matching the field of view of the respective input image frame to the fields of view of the reference image frame(s). The method includes determining association(s) between the respective input image frame and three-dimensional map point(s) based on a comparison of the respective input image frame to the one or more reference image frames. The method includes generating an estimated pose for the autonomous vehicle based on the one or more three-dimensional map points. The method includes updating a continuous-time motion model of the autonomous vehicle based on the estimated pose.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: October 22, 2024
    Assignee: AURORA OPERATIONS, INC.
    Inventors: Anqi Joyce Yang, Can Cui, Ioan Andrei Bârsan, Shenlong Wang, Raquel Urtasun
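The continuous-time motion model is what lets asynchronously triggered cameras be handled: the pose can be queried at any trigger timestamp, not just at discrete keyframes. A linear-interpolation sketch (real continuous-time models use splines, and heading interpolation here ignores angle wraparound):

```python
def interpolate_pose(t, t0, pose0, t1, pose1):
    """Query an (x, y, heading) pose at an arbitrary timestamp t between two
    pose estimates, via linear interpolation."""
    a = (t - t0) / (t1 - t0)
    return tuple(p0 + a * (p1 - p0) for p0, p1 in zip(pose0, pose1))

# Camera triggered midway between two pose estimates:
print(interpolate_pose(0.5, 0.0, (0.0, 0.0, 0.0), 1.0, (2.0, 0.0, 0.2)))
```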
  • Patent number: 12116015
    Abstract: Techniques for improving the performance of an autonomous vehicle (AV) by automatically annotating objects surrounding the AV are described herein. A system can obtain sensor data from a sensor coupled to the AV and generate an initial object trajectory for an object using the sensor data. Additionally, the system can determine a fixed value for the object size of the object based on the initial object trajectory. Moreover, the system can generate an updated initial object trajectory, wherein the object size corresponds to the fixed value. Furthermore, the system can determine, based on the sensor data and the updated initial object trajectory, a refined object trajectory. Subsequently, the system can generate a multi-dimensional label for the object based on the refined object trajectory. A motion plan for controlling the AV can be generated based on the multi-dimensional label.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: October 15, 2024
    Assignee: AURORA OPERATIONS, INC.
    Inventors: Bin Yang, Ming Liang, Wenyuan Zeng, Min Bai, Raquel Urtasun
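The size-fixing step of this auto-labeling pipeline can be sketched directly: since a rigid object cannot change size, noisy per-frame size estimates are collapsed to a single fixed value across the trajectory. Using the median is an assumption for the sketch; the filing does not specify the statistic:

```python
from statistics import median

def fix_track_size(track):
    """Replace noisy per-frame (length, width) estimates with one fixed
    value across the whole trajectory of (x, y, heading, length, width) boxes."""
    length = median(b[3] for b in track)
    width = median(b[4] for b in track)
    return [(x, y, heading, length, width) for (x, y, heading, _, _) in track]

track = [(0.0, 0.0, 0.0, 4.4, 1.8),
         (1.0, 0.0, 0.0, 4.6, 2.0),
         (2.0, 0.0, 0.0, 4.5, 1.9)]
print(fix_track_size(track))  # every box now shares the same size
```

The refined trajectory then feeds the multi-dimensional label generation described in the abstract.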
  • Publication number: 20240338567
    Abstract: Provided are systems and methods that perform multi-task and/or multi-sensor fusion for three-dimensional object detection in furtherance of, for example, autonomous vehicle perception and control. In particular, according to one aspect of the present disclosure, example systems and methods described herein exploit simultaneous training of a machine-learned model ensemble relative to multiple related tasks to learn to perform more accurate multi-sensor 3D object detection. For example, the present disclosure provides an end-to-end learnable architecture with multiple machine-learned models that interoperate to reason about 2D and/or 3D object detection as well as one or more auxiliary tasks. According to another aspect of the present disclosure, example systems and methods described herein can perform multi-sensor fusion (e.g.
    Type: Application
    Filed: June 18, 2024
    Publication date: October 10, 2024
    Inventors: Raquel Urtasun, Bin Yang, Ming Liang
  • Patent number: 12103554
    Abstract: Systems and methods of the present disclosure are directed to a method. The method can include obtaining simplified scenario data associated with a simulated scenario. The method can include determining, using a machine-learned perception-prediction simulation model, a simulated perception-prediction output based at least in part on the simplified scenario data. The method can include evaluating a loss function comprising a perception loss term and a prediction loss term. The method can include adjusting one or more parameters of the machine-learned perception-prediction simulation model based at least in part on the loss function.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: October 1, 2024
    Assignee: AURORA OPERATIONS, INC.
    Inventors: Raquel Urtasun, Kelvin Ka Wing Wong, Qiang Zhang, Bin Yang, Ming Liang, Renjie Liao
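The evaluate-loss-then-adjust-parameters cycle in this entry can be illustrated with a scalar toy: a combined perception + prediction loss and a finite-difference parameter update standing in for backpropagation. Everything here (the quadratic losses, the learning rate, the update rule) is an invented stand-in:

```python
def adjust_parameter(theta, loss_fn, lr=0.1, h=1e-4):
    """One finite-difference descent step on a scalar parameter
    (a generic stand-in for gradient-based training)."""
    grad = (loss_fn(theta + h) - loss_fn(theta - h)) / (2 * h)
    return theta - lr * grad

# Toy combined loss: perception term (theta - 1)^2 plus prediction term theta^2,
# minimized at theta = 0.5.
loss = lambda th: (th - 1.0) ** 2 + th ** 2
theta = 0.0
for _ in range(50):
    theta = adjust_parameter(theta, loss)
print(round(theta, 3))  # converges to 0.5
```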
  • Patent number: 12106435
    Abstract: The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned geometry model can predict one or more adjusted depths for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
    Type: Grant
    Filed: June 30, 2023
    Date of Patent: October 1, 2024
    Assignee: AURORA OPERATIONS, INC.
    Inventors: Sivabalan Manivasagam, Shenlong Wang, Wei-Chiu Ma, Raquel Urtasun
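The geometry-adjustment step (a learned model predicting adjusted depths for ray-cast points) amounts to moving each point along its ray by a residual. A sketch with invented names, taking the residuals as given:

```python
def adjust_raycast_depths(points, residuals):
    """Move each ray-cast point along its ray by a (learned) depth residual,
    nudging ideal map geometry toward realistic LiDAR returns."""
    adjusted = []
    for (x, y, z), dr in zip(points, residuals):
        r = (x * x + y * y + z * z) ** 0.5   # current range along the ray
        s = (r + dr) / r                      # scale to the adjusted range
        adjusted.append((x * s, y * s, z * s))
    return adjusted

# One point at range 5, pushed out by 1 meter along its ray:
print(adjust_raycast_depths([(3.0, 0.0, 4.0)], [1.0]))
```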
  • Publication number: 20240320466
    Abstract: Systems and methods for generating motion forecast data for actors with respect to an autonomous vehicle and training a machine-learned model for the same are disclosed. The computing system can include an object detection model and a graph neural network including a plurality of nodes and a plurality of edges. The computing system can be configured to input sensor data into the object detection model; receive object detection data describing the location of the plurality of the actors relative to the autonomous vehicle as an output of the object detection model; input the object detection data into the graph neural network; iteratively update a plurality of node states respectively associated with the plurality of nodes; and receive, as an output of the graph neural network, the motion forecast data with respect to the plurality of actors.
    Type: Application
    Filed: May 6, 2024
    Publication date: September 26, 2024
    Inventors: Raquel Urtasun, Renjie Liao, Sergio Casas, Cole Christian Gulino
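The iterative node-state update in the graph neural network can be pictured with a toy synchronous message-passing step, where each actor's state moves toward the mean of its neighbors'. Scalar states and the averaging rule are simplifications, not the patented update:

```python
def propagate(node_states, edges, iterations=2, step=0.5):
    """Toy synchronous message passing over an undirected graph: each node's
    scalar state moves toward the mean of its neighbors' states."""
    states = list(node_states)
    neighbors = {i: [] for i in range(len(states))}
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    for _ in range(iterations):
        states = [s + step * (sum(states[j] for j in neighbors[i])
                              / len(neighbors[i]) - s)
                  if neighbors[i] else s
                  for i, s in enumerate(states)]
    return states

# Two connected actors exchange information and meet in the middle:
print(propagate([0.0, 1.0], [(0, 1)], iterations=1))  # -> [0.5, 0.5]
```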
  • Publication number: 20240303400
    Abstract: A method includes generating a first sample including first raw parameter values of a plurality of modifiable parameters by a probabilistic model and a kernel and executing a first test of a virtual driver of an autonomous system according to the first sample to generate a first evaluation result of multiple evaluation results. The method further includes updating the probabilistic model according to the first evaluation result and training the kernel using the first evaluation result. The method additionally includes generating a second sample including second raw parameter values of the parameters by the probabilistic model and the kernel and executing a second test of the virtual driver according to the second sample to generate a second evaluation result of the evaluation results. The method further includes presenting the evaluation results.
    Type: Application
    Filed: March 7, 2024
    Publication date: September 12, 2024
    Applicant: Waabi Innovation Inc.
    Inventors: James TU, Simon SUO, Raquel URTASUN
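The sample-test-update cycle above can be caricatured with a one-parameter loop: draw a value from the current probabilistic model, run the test, and shift the model toward parameter regions where the driver does poorly. The Gaussian model and update rule are invented stand-ins for the patented probabilistic model and kernel:

```python
import random

def adaptive_test_loop(run_test, rounds=3, seed=0):
    """Sketch of sample -> test -> update: draw a parameter from a Gaussian
    'probabilistic model', evaluate the virtual driver, and shift the model
    toward failing regions (lower evaluation results)."""
    rng = random.Random(seed)
    mean, spread = 0.0, 1.0
    results = []
    for _ in range(rounds):
        param = rng.gauss(mean, spread)
        result = run_test(param)
        if result < 0.5:                       # poor result: sample nearby next
            mean = 0.5 * mean + 0.5 * param
        results.append(result)
    return results

# Toy test: the driver fails whenever the scenario parameter exceeds 1.
results = adaptive_test_loop(lambda p: 0.0 if p > 1.0 else 1.0)
print(results)
```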
  • Publication number: 20240300527
    Abstract: Diffusion for realistic scene generation includes obtaining a current set of agent state vectors and a map data of a geographic region, and iteratively, through multiple diffusion timesteps, updating the current set of agent state vectors. Iteratively updating includes processing, by a noise prediction model, the current set of agent state vectors, a current diffusion timestep of the plurality of diffusion timesteps, and the map data to obtain a noise prediction value, generating a mean using the noise prediction value, generating a distribution function according to the mean, sampling a revised set of agent state vectors from the distribution function, and replacing the current set of agent state vectors with the revised set of agent state vectors. The current set of agent state vectors are outputted.
    Type: Application
    Filed: March 7, 2024
    Publication date: September 12, 2024
    Applicant: Waabi Innovation Inc.
    Inventors: Jack LU, Kelvin WONG, Chris ZHANG, Simon SUO, Raquel URTASUN
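The per-timestep update in this entry (noise prediction → mean → sample → replace) matches the standard DDPM-style reverse step. A scalar sketch with fixed noise predictions standing in for the model, and simplified schedule constants chosen for illustration:

```python
import math
import random

def reverse_step(x, eps_hat, alpha_t, alpha_bar_t, sigma_t, rng):
    """One simplified DDPM-style reverse update for a scalar agent state:
    use the predicted noise eps_hat to form the mean, then sample around it."""
    mean = (x - (1 - alpha_t) / math.sqrt(1 - alpha_bar_t) * eps_hat) \
           / math.sqrt(alpha_t)
    return mean + sigma_t * rng.gauss(0, 1)

rng = random.Random(0)
x = 1.0  # current (noisy) agent state
for alpha_t, alpha_bar_t in [(0.95, 0.5), (0.99, 0.9)]:
    x = reverse_step(x, eps_hat=0.1, alpha_t=alpha_t,
                     alpha_bar_t=alpha_bar_t, sigma_t=0.0, rng=rng)
print(x)  # deterministic here since sigma_t = 0
```

In the described method the noise prediction model is also conditioned on the map data and the current diffusion timestep, which this sketch omits.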
  • Publication number: 20240303501
    Abstract: Imitation and reinforcement learning for multi-agent simulation includes performing operations. The operations include obtaining a first real-world scenario of agents moving according to first trajectories and simulating the first real-world scenario in a virtual world to generate first simulated states. The simulating includes processing, by an agent model, the first simulated states for the agents to obtain second trajectories. For each of at least a subset of the agents, a difference between a first corresponding trajectory of the agent and a second corresponding trajectory of the agent is calculated, and an imitation loss is determined based on the difference. The operations further include evaluating the second trajectories according to a reward function to generate a reinforcement learning loss, calculating a total loss as a combination of the imitation loss and the reinforcement learning loss, and updating the agent model using the total loss.
    Type: Application
    Filed: March 7, 2024
    Publication date: September 12, 2024
    Applicant: Waabi Innovation Inc.
    Inventors: Chris ZHANG, James TU, Lunjun ZHANG, Kelvin WONG, Simon SUO, Raquel URTASUN
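The total-loss combination can be sketched concretely: an imitation term measuring how far the simulated trajectory drifts from the real one, plus a reinforcement term from a reward function. The mean-displacement metric, the toy reward, and the weights are assumptions:

```python
def total_loss(real_traj, sim_traj, reward, w_im=1.0, w_rl=1.0):
    """Imitation term: mean displacement between real and simulated (x, y)
    trajectories. RL term: negative reward of the simulated rollout."""
    disp = [((xr - xs) ** 2 + (yr - ys) ** 2) ** 0.5
            for (xr, yr), (xs, ys) in zip(real_traj, sim_traj)]
    imitation = sum(disp) / len(disp)
    return w_im * imitation + w_rl * (-reward(sim_traj))

real = [(0.0, 0.0), (1.0, 0.0)]
sim = [(0.0, 0.0), (1.0, 1.0)]
# Toy reward: +1 if the rollout stays left of x = 2 (e.g. "on the road").
reward = lambda traj: 1.0 if all(x < 2 for x, _ in traj) else 0.0
print(total_loss(real, sim, reward))
```

Combining the two terms lets the agent model stay human-like (imitation) while still being penalized for violating driving constraints (reward).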
  • Publication number: 20240300526
    Abstract: Motion planning with implicit occupancy for autonomous systems includes obtaining a set of trajectories through a geographic region for an autonomous system, and generating, for each trajectory in the set of trajectories, a set of points of interest in the geographic region to obtain sets of points of interest. Motion planning further includes quantizing the sets of points of interest to obtain a set of query points in the geographic region and querying the implicit decoder model with the set of query points to obtain point attributes for the set of query points. Motion planning further includes processing, for each trajectory of at least a subset of trajectories, the point attributes corresponding to the set of points of interest to obtain a trajectory cost for the trajectory. From the set of trajectories, a trajectory is selected according to the trajectory cost.
    Type: Application
    Filed: March 7, 2024
    Publication date: September 12, 2024
    Applicant: Waabi Innovation Inc.
    Inventors: Sourav BISWAS, Sergio CASAS ROMERO, Quinlan SYKORA, Ben Taylor Caldwell AGRO, Abbas SADAT, Raquel URTASUN
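The selection step reduces to costing each candidate trajectory by the occupancy attributes queried at its points of interest and picking the cheapest. A sketch where the implicit decoder is replaced by a toy occupancy field (all names invented):

```python
def select_trajectory(trajectories, points_of_interest, occupancy):
    """Cost each candidate trajectory by querying occupancy at its points of
    interest; return the lowest-cost trajectory."""
    def cost(traj):
        return sum(occupancy(p) for p in points_of_interest(traj))
    return min(trajectories, key=cost)

# Toy occupancy field: everything with y > 0 is occupied.
occ = lambda p: 1.0 if p[1] > 0 else 0.0
pts = lambda traj: traj  # points of interest = the trajectory's own waypoints
best = select_trajectory([[(0, 1), (1, 1)], [(0, 0), (1, 0)]], pts, occ)
print(best)  # the trajectory avoiding occupied space
```

The appeal of the implicit (query-based) decoder is visible even in this toy: only the points each trajectory actually cares about are evaluated, rather than a dense occupancy grid.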