Patents by Inventor Raquel Urtasun
Raquel Urtasun has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12103554
Abstract: Systems and methods of the present disclosure are directed to a method. The method can include obtaining simplified scenario data associated with a simulated scenario. The method can include determining, using a machine-learned perception-prediction simulation model, a simulated perception-prediction output based at least in part on the simplified scenario data. The method can include evaluating a loss function comprising a perception loss term and a prediction loss term. The method can include adjusting one or more parameters of the machine-learned perception-prediction simulation model based at least in part on the loss function.
Type: Grant
Filed: January 15, 2021
Date of Patent: October 1, 2024
Assignee: AURORA OPERATIONS, INC.
Inventors: Raquel Urtasun, Kelvin Ka Wing Wong, Qiang Zhang, Bin Yang, Ming Liang, Renjie Liao
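The abstract above describes a loss with two terms, one for perception and one for prediction. A minimal sketch of such a combined objective, assuming a simple mean-squared-error form and hypothetical weight parameters (the actual patented loss is not specified here):

```python
def combined_loss(perception_pred, perception_gt, prediction_pred, prediction_gt,
                  w_perception=1.0, w_prediction=1.0):
    """Toy two-term loss: weighted sum of a perception term and a prediction term."""
    def mse(a, b):
        # mean squared error over paired scalar values
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return (w_perception * mse(perception_pred, perception_gt)
            + w_prediction * mse(prediction_pred, prediction_gt))
```

In a training loop, the gradient of this scalar with respect to the simulation model's parameters would drive the parameter adjustment the abstract describes.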
-
Patent number: 12106435
Abstract: The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned geometry model can predict one or more adjusted depths for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
Type: Grant
Filed: June 30, 2023
Date of Patent: October 1, 2024
Assignee: AURORA OPERATIONS, INC.
Inventors: Sivabalan Manivasagam, Shenlong Wang, Wei-Chiu Ma, Raquel Urtasun
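The geometry-adjustment step above amounts to moving each ray-cast point along its ray so that its range matches a learned depth correction. A toy sketch of that operation, with the per-point depth deltas standing in for the machine-learned geometry model's output (an assumption for illustration):

```python
import math

def adjust_point_depths(points, depth_deltas):
    """Scale each 3D point along its ray from the sensor origin so that its
    range becomes (original range + delta), mimicking the depth adjustment
    of the initial ray-cast point cloud."""
    adjusted = []
    for (x, y, z), delta in zip(points, depth_deltas):
        r = math.sqrt(x * x + y * y + z * z)  # original range along the ray
        s = (r + delta) / r                    # scale factor for the new range
        adjusted.append((x * s, y * s, z * s))
    return adjusted
```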
-
Publication number: 20240320466
Abstract: Systems and methods for generating motion forecast data for actors with respect to an autonomous vehicle and training a machine learned model for the same are disclosed. The computing system can include an object detection model and a graph neural network including a plurality of nodes and a plurality of edges. The computing system can be configured to input sensor data into the object detection model; receive object detection data describing the location of the plurality of the actors relative to the autonomous vehicle as an output of the object detection model; input the object detection data into the graph neural network; iteratively update a plurality of node states respectively associated with the plurality of nodes; and receive, as an output of the graph neural network, the motion forecast data with respect to the plurality of actors.
Type: Application
Filed: May 6, 2024
Publication date: September 26, 2024
Inventors: Raquel Urtasun, Renjie Liao, Sergio Casas, Cole Christian Gulino
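The iterative node-state update above is the core of graph-neural-network message passing. A minimal sketch with scalar node states and mean aggregation over a directed edge list (the aggregation and update rules here are illustrative assumptions, not the disclosed architecture):

```python
def update_node_states(states, edges, steps=1):
    """Iteratively update node states: each node blends its own state with
    the mean of its incoming neighbours' states."""
    for _ in range(steps):
        new_states = {}
        for node, state in states.items():
            msgs = [states[src] for src, dst in edges if dst == node]
            # nodes with no incoming edges keep their current state
            agg = sum(msgs) / len(msgs) if msgs else state
            new_states[node] = 0.5 * state + 0.5 * agg
        states = new_states
    return states
```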
-
Publication number: 20240303400
Abstract: A method includes generating a first sample including first raw parameter values of a first set of modifiable parameters by a probabilistic model and a kernel, and executing a first test of a virtual driver of an autonomous system according to the first sample to generate a first evaluation result of multiple evaluation results. The method further includes updating the probabilistic model according to the first evaluation result and training the kernel using the first evaluation result. The method additionally includes generating a second sample including second raw parameter values of the parameters by the probabilistic model and the kernel, and executing a second test of the virtual driver of the autonomous system according to the second sample to generate a second evaluation result of the evaluation results. The method further includes presenting the evaluation results.
Type: Application
Filed: March 7, 2024
Publication date: September 12, 2024
Applicant: Waabi Innovation Inc.
Inventors: James TU, Simon SUO, Raquel URTASUN
-
Publication number: 20240300527
Abstract: Diffusion for realistic scene generation includes obtaining a current set of agent state vectors and map data of a geographic region, and iteratively, through multiple diffusion timesteps, updating the current set of agent state vectors. Iteratively updating includes processing, by a noise prediction model, the current set of agent state vectors, a current diffusion timestep of the plurality of diffusion timesteps, and the map data to obtain a noise prediction value, generating a mean using the noise prediction value, generating a distribution function according to the mean, sampling a revised set of agent state vectors from the distribution function, and replacing the current set of agent state vectors with the revised set of agent state vectors. The current set of agent state vectors is then output.
Type: Application
Filed: March 7, 2024
Publication date: September 12, 2024
Applicant: Waabi Innovation Inc.
Inventors: Jack LU, Kelvin WONG, Chris ZHANG, Simon SUO, Raquel URTASUN
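The iterative update above follows the usual reverse-diffusion pattern: predict noise, form a mean, sample from a distribution around it. A minimal sketch with scalar states and a hypothetical fixed step size (the actual schedule, mean formula, and noise model are not specified in the abstract):

```python
import random

def reverse_diffusion(states, predict_noise, timesteps, step_size=0.1, noise_scale=0.0):
    """Iteratively denoise a vector of agent states. At each timestep the
    noise prediction model produces per-element noise; the mean subtracts a
    scaled version of it, and the next states are sampled as Gaussians
    around that mean (collapsing to the mean when noise_scale is 0)."""
    for t in reversed(range(timesteps)):
        eps = predict_noise(states, t)
        mean = [s - step_size * e for s, e in zip(states, eps)]
        states = [random.gauss(m, noise_scale) for m in mean]
    return states
```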
-
Publication number: 20240300526
Abstract: Motion planning with implicit occupancy for autonomous systems includes obtaining a set of trajectories through a geographic region for an autonomous system, and generating, for each trajectory in the set of trajectories, a set of points of interest in the geographic region to obtain sets of points of interest. Motion planning further includes quantizing the sets of points of interest to obtain a set of query points in the geographic region and querying an implicit decoder model with the set of query points to obtain point attributes for the set of query points. Motion planning further includes processing, for each trajectory of at least a subset of the trajectories, the point attributes corresponding to the set of points of interest to obtain a trajectory cost for the trajectory. From the set of trajectories, a trajectory is then selected according to trajectory cost.
Type: Application
Filed: March 7, 2024
Publication date: September 12, 2024
Applicant: Waabi Innovation Inc.
Inventors: Sourav BISWAS, Sergio CASAS ROMERO, Quinlan SKYORA, Ben Taylor Caldwell AGRO, Abbas SADAT, Raquel URTASUN
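The selection step above reduces to costing each trajectory by the attributes queried at its points of interest and picking the cheapest. A toy sketch, with a plain occupancy callable standing in for the implicit decoder model (an assumption for illustration):

```python
def select_trajectory(trajectories, points_of_interest, query_occupancy):
    """Cost each trajectory as the sum of queried occupancy at its points
    of interest; return the lowest-cost trajectory and its cost."""
    best, best_cost = None, float('inf')
    for traj in trajectories:
        cost = sum(query_occupancy(p) for p in points_of_interest[traj])
        if cost < best_cost:
            best, best_cost = traj, cost
    return best, best_cost
```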
-
Publication number: 20240302530
Abstract: LiDAR-based memory segmentation includes obtaining a LiDAR point cloud that includes LiDAR points from a LiDAR sensor, voxelizing the LiDAR points to obtain LiDAR voxels, and encoding the LiDAR voxels to obtain encoded voxels. A LiDAR voxel memory is revised using the encoded voxels to obtain a revised LiDAR voxel memory, and the revised LiDAR voxel memory is decoded to obtain decoded LiDAR voxel memory features. The LiDAR points are segmented using the decoded LiDAR voxel memory features to generate a segmented LiDAR point cloud.
Type: Application
Filed: March 7, 2024
Publication date: September 12, 2024
Applicant: Waabi Innovation Inc.
Inventors: Enxu LI, Sergio CASAS ROMERO, Raquel URTASUN
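The voxelization step above groups raw points into a regular 3D grid before encoding. A minimal sketch, assuming a uniform cubic voxel size (the grid parameters here are illustrative):

```python
def voxelize(points, voxel_size):
    """Group LiDAR points into voxels keyed by their integer grid index."""
    voxels = {}
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        voxels.setdefault(key, []).append((x, y, z))
    return voxels
```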
-
Publication number: 20240303501
Abstract: Imitation and reinforcement learning for multi-agent simulation includes performing operations. The operations include obtaining a first real-world scenario of agents moving according to first trajectories and simulating the first real-world scenario in a virtual world to generate first simulated states. The simulating includes processing, by an agent model, the first simulated states for the agents to obtain second trajectories. For each of at least a subset of the agents, a difference between a first corresponding trajectory of the agent and a second corresponding trajectory of the agent is calculated, and an imitation loss is determined based on the difference. The operations further include evaluating the second trajectories according to a reward function to generate a reinforcement learning loss, calculating a total loss as a combination of the imitation loss and the reinforcement learning loss, and updating the agent model using the total loss.
Type: Application
Filed: March 7, 2024
Publication date: September 12, 2024
Applicant: Waabi Innovation Inc.
Inventors: Chris ZHANG, James TU, Lunjun ZHANG, Kelvin WONG, Simon SUO, Raquel URTASUN
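The total loss described above combines an imitation term (trajectory differences) with a reinforcement term (derived from a reward function). A toy sketch, assuming a mean absolute trajectory difference and a simple convex mixing weight, neither of which is specified by the abstract:

```python
def combined_sim_loss(real_trajs, sim_trajs, reward_fn, alpha=0.5):
    """Total loss = alpha * imitation loss + (1 - alpha) * RL loss, where the
    RL loss is the negative mean reward of the simulated trajectories."""
    imitation = sum(
        sum(abs(a - b) for a, b in zip(real, sim)) / len(real)
        for real, sim in zip(real_trajs, sim_trajs)
    ) / len(real_trajs)
    rl = -sum(reward_fn(sim) for sim in sim_trajs) / len(sim_trajs)
    return alpha * imitation + (1 - alpha) * rl
```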
-
Patent number: 12051001
Abstract: Provided are systems and methods that perform multi-task and/or multi-sensor fusion for three-dimensional object detection in furtherance of, for example, autonomous vehicle perception and control. In particular, according to one aspect of the present disclosure, example systems and methods described herein exploit simultaneous training of a machine-learned model ensemble relative to multiple related tasks to learn to perform more accurate multi-sensor 3D object detection. For example, the present disclosure provides an end-to-end learnable architecture with multiple machine-learned models that interoperate to reason about 2D and/or 3D object detection as well as one or more auxiliary tasks. According to another aspect of the present disclosure, example systems and methods described herein can perform multi-sensor fusion (e.g.
Type: Grant
Filed: October 24, 2022
Date of Patent: July 30, 2024
Assignee: UATC, LLC
Inventors: Raquel Urtasun, Bin Yang, Ming Liang
-
Patent number: 12037027
Abstract: Systems and methods for generating synthetic testing data for autonomous vehicles are provided. A computing system can obtain map data descriptive of an environment and object data descriptive of a plurality of objects within the environment. The computing system can generate context data including deep or latent features extracted from the map and object data by one or more machine-learned models. The computing system can process the context data with a machine-learned model to generate synthetic motion predictions for the plurality of objects. The synthetic motion predictions for the objects can include one or more synthesized states for the objects at future times. The computing system can provide, as an output, synthetic testing data that includes the plurality of synthetic motion predictions for the objects. The synthetic testing data can be used to test an autonomous vehicle control system in a simulation.
Type: Grant
Filed: November 17, 2021
Date of Patent: July 16, 2024
Assignee: UATC, LLC
Inventors: Shun Da Suo, Sebastián David Regalado Lozano, Sergio Casas, Raquel Urtasun
-
Patent number: 12037025
Abstract: Systems and methods are disclosed for motion forecasting and planning for autonomous vehicles. For example, a plurality of future traffic scenarios are determined by modeling a joint distribution of actor trajectories for a plurality of actors, as opposed to an approach that models actors individually. As another example, a diversity objective is evaluated that rewards sampling of the future traffic scenarios that require distinct reactions from the autonomous vehicle. An estimated probability for the plurality of future traffic scenarios can be determined and used to generate a contingency plan for motion of the autonomous vehicle. The contingency plan can include at least one initial short-term trajectory intended for immediate action of the autonomous vehicle and a plurality of subsequent long-term trajectories associated with the plurality of future traffic scenarios.
Type: Grant
Filed: November 17, 2021
Date of Patent: July 16, 2024
Assignee: UATC, LLC
Inventors: Alexander Yuhao Cui, Abbas Sadat, Sergio Casas, Renjie Liao, Raquel Urtasun
-
Patent number: 12032067
Abstract: Systems and methods for identifying travel way features in real time are provided. A method can include receiving two-dimensional and three-dimensional data associated with the surrounding environment of a vehicle. The method can include providing the two-dimensional data as one or more inputs into a machine-learned segmentation model to output a two-dimensional segmentation. The method can include fusing the two-dimensional segmentation with the three-dimensional data to generate a three-dimensional segmentation. The method can include storing the three-dimensional segmentation in a classification database with data indicative of one or more previously generated three-dimensional segmentations. The method can include providing one or more datapoint sets from the classification database as one or more inputs into a machine-learned enhancing model to obtain an enhanced three-dimensional segmentation.
Type: Grant
Filed: December 10, 2021
Date of Patent: July 9, 2024
Assignee: UATC, LLC
Inventors: Raquel Urtasun, Min Bai, Shenlong Wang
-
Patent number: 12023812
Abstract: Systems and methods for streaming sensor packets in real-time are provided. An example method includes obtaining a sensor data packet representing a first portion of a three-hundred and sixty degree view of a surrounding environment of a robotic platform. The method includes generating, using machine-learned model(s), a local feature map based at least in part on the sensor data packet. The local feature map is indicative of local feature(s) associated with the first portion of the three-hundred and sixty degree view. The method includes updating, based at least in part on the local feature map, a spatial map to include the local feature(s). The spatial map includes previously extracted local features associated with a previous sensor data packet representing a different portion of the three-hundred and sixty degree view than the first portion. The method includes determining an object within the surrounding environment based on the updated spatial map.
Type: Grant
Filed: July 29, 2021
Date of Patent: July 2, 2024
Assignee: UATC, LLC
Inventors: Sergio Casas, Davi Eugenio Nascimento Frossard, Shun Da Suo, Xuanyuan Tu, Raquel Urtasun
-
Publication number: 20240199058
Abstract: Systems and methods for determining object intentions through visual attributes are provided. A method can include determining, by a computing system, one or more regions of interest. The regions of interest can be associated with the surrounding environment of a first vehicle. The method can include determining, by the computing system, spatial features and temporal features associated with the regions of interest. The spatial features can be indicative of a vehicle orientation associated with a vehicle of interest. The temporal features can be indicative of a semantic state associated with signal lights of the vehicle of interest. The method can include determining, by the computing system, a vehicle intention. The vehicle intention can be based on the spatial and temporal features. The method can include initiating, by the computing system, an action. The action can be based on the vehicle intention.
Type: Application
Filed: February 29, 2024
Publication date: June 20, 2024
Inventors: Davi Eugenio Nascimento Frossard, Eric Randall Kee, Raquel Urtasun
-
Patent number: 12013457
Abstract: Systems and methods for integrating radar and LIDAR data are disclosed. In particular, a computing system can access radar sensor data and LIDAR data for the area around the autonomous vehicle. The computing system can determine, using the one or more machine-learned models, one or more objects in the area of the autonomous vehicle. The computing system can, for a respective object, select a plurality of radar points from the radar sensor data. The computing system can generate a similarity score for each selected radar point. The computing system can generate a weight associated with each radar point based on the similarity score. The computing system can calculate a predicted velocity for the respective object based on a weighted average of a plurality of velocities associated with the plurality of radar points. The computing system can generate a proposed motion plan based on the predicted velocity for the respective object.
Type: Grant
Filed: January 15, 2021
Date of Patent: June 18, 2024
Assignee: UATC, LLC
Inventors: Raquel Urtasun, Bin Yang, Ming Liang, Sergio Casas, Runsheng Benson Guo
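The velocity-fusion step above is a weighted average over per-point radar velocities, with weights derived from similarity scores. A minimal sketch, assuming a softmax over the scores (one plausible weighting; the abstract does not specify the score-to-weight mapping):

```python
import math

def fuse_radar_velocity(radar_points):
    """Weighted average of per-point radar velocities, with softmax weights
    computed from each point's similarity score."""
    exps = [math.exp(p['similarity']) for p in radar_points]
    total = sum(exps)
    weights = [e / total for e in exps]
    return sum(w * p['velocity'] for w, p in zip(weights, radar_points))
```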
-
Patent number: 12008454
Abstract: Systems and methods for generating motion forecast data for actors with respect to an autonomous vehicle and training a machine learned model for the same are disclosed. The computing system can include an object detection model and a graph neural network including a plurality of nodes and a plurality of edges. The computing system can be configured to input sensor data into the object detection model; receive object detection data describing the location of the plurality of the actors relative to the autonomous vehicle as an output of the object detection model; input the object detection data into the graph neural network; iteratively update a plurality of node states respectively associated with the plurality of nodes; and receive, as an output of the graph neural network, the motion forecast data with respect to the plurality of actors.
Type: Grant
Filed: March 20, 2023
Date of Patent: June 11, 2024
Assignee: UATC, LLC
Inventors: Raquel Urtasun, Renjie Liao, Sergio Casas, Cole Christian Gulino
-
Patent number: 11989847
Abstract: The present disclosure provides systems and methods for generating photorealistic image simulation data with geometry-aware composition for testing autonomous vehicles. In particular, aspects of the present disclosure can involve the intake of data on an environment and output of augmented data on the environment with the photorealistic addition of an object. As one example, data on the driving experiences of a self-driving vehicle can be augmented to add another vehicle into the collected environment data. The augmented data may then be used to test safety features of software for a self-driving vehicle.
Type: Grant
Filed: February 10, 2022
Date of Patent: May 21, 2024
Assignee: UATC, LLC
Inventors: Frieda Rong, Yun Chen, Shivam Duggal, Shenlong Wang, Xinchen Yan, Sivabalan Manivasagam, Ersin Yumer, Raquel Urtasun
-
Publication number: 20240157978
Abstract: A method includes obtaining, from sensor data, map data of a geographic region and multiple trajectories of multiple agents located in the geographic region. The agents and the map data have a corresponding physical location in the geographic region. The method further includes determining, for an agent, an agent route from a trajectory that corresponds to the agent, generating, by an encoder model, an interaction encoding that encodes the trajectories and the map data, and generating, from the interaction encoding, an agent attribute encoding of the agent and the agent route. The method further includes processing the agent attribute encoding to generate positional information for the agent, and updating the trajectory of the agent using the positional information to obtain an updated trajectory.
Type: Application
Filed: November 10, 2023
Publication date: May 16, 2024
Inventors: Kelvin WONG, Simon SUO, Raquel URTASUN
-
Publication number: 20240159871
Abstract: Unsupervised object detection from lidar point clouds includes forecasting a set of new positions of a set of objects in a geographic region based on a first set of object tracks to obtain a set of forecasted object positions, and obtaining a new LiDAR point cloud of the geographic region. A detector model processes the new LiDAR point cloud to obtain a new set of bounding boxes around the set of objects detected in the new LiDAR point cloud. Object detection further includes matching the new set of bounding boxes to the set of forecasted object positions to generate a set of matches, updating the first set of object tracks with the new set of bounding boxes according to the set of matches to obtain an updated set of object tracks, and filtering, after updating, the updated set of object tracks to remove object tracks failing to satisfy a track length threshold, to generate a training set of object tracks.
Type: Application
Filed: November 10, 2023
Publication date: May 16, 2024
Inventors: Lunjun ZHANG, Yuwen XIONG, Sergio CASAS ROMERO, Mengye REN, Raquel URTASUN, Angi Joyce YANG
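The matching step above pairs new detections with forecasted object positions. A toy sketch using greedy nearest-neighbour matching of 2D box centers under a distance gate (the actual matching criterion and association algorithm are assumptions for illustration):

```python
def match_boxes_to_forecasts(boxes, forecasts, max_dist):
    """Greedily match each detected box center to the nearest unused
    forecasted position within max_dist; return (box_idx, forecast_idx) pairs."""
    matches, used = [], set()
    for bi, (bx, by) in enumerate(boxes):
        best, best_d = None, max_dist
        for fi, (fx, fy) in enumerate(forecasts):
            if fi in used:
                continue
            d = ((bx - fx) ** 2 + (by - fy) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = fi, d
        if best is not None:
            matches.append((bi, best))
            used.add(best)
    return matches
```

Tracks extended by these matches would then be filtered by the track length threshold before forming the training set.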
-
Publication number: 20240161436
Abstract: Compact LiDAR representation includes performing operations that include generating a three-dimensional (3D) LiDAR image from LiDAR input data, encoding, by an encoder model, the 3D LiDAR image to a continuous embedding in continuous space, and performing, using a code map, a vector quantization of the continuous embedding to generate a discrete embedding. The operations further include decoding, by a decoder model, the discrete embedding to generate modified LiDAR data, and outputting the modified LiDAR data.
Type: Application
Filed: November 10, 2023
Publication date: May 16, 2024
Inventors: Yuwen XIONG, Wei-Chiu MA, Jingkang WANG, Raquel URTASUN
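The vector-quantization step above replaces each continuous vector with its nearest entry in the code map. A minimal sketch with a list-of-lists codebook (nearest-neighbour lookup under squared Euclidean distance, the standard VQ formulation; the actual code map structure is not specified by the abstract):

```python
def vector_quantize(embedding, codebook):
    """Map each continuous vector to the index of its nearest codebook entry,
    yielding the discrete embedding."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: dist2(vec, codebook[i]))
            for vec in embedding]
```

Decoding then looks these indices back up in the codebook to reconstruct (modified) LiDAR data.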