Patents by Inventor Raquel Urtasun

Raquel Urtasun has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11354820
    Abstract: Systems and methods for determining a location based on image data are provided. A method can include receiving, by a computing system, a query image depicting a surrounding environment of a vehicle. The query image can be input into a machine-learned image embedding model and a machine-learned feature extraction model to obtain a query embedding and a query feature representation, respectively. The method can include identifying a subset of candidate embeddings that are similar to the query embedding. The method can include obtaining a respective feature representation for each image associated with the subset of candidate embeddings. The method can include determining a set of relative displacements between each image associated with the subset of candidate embeddings and the query image and determining a localized state of the vehicle based at least in part on the set of relative displacements.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: June 7, 2022
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Julieta Martinez Covarrubias, Shenlong Wang, Hongbo Fan
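
A minimal sketch of the retrieval-then-refine localization idea in patent 11354820 above: retrieve map images whose embeddings are closest to the query embedding, apply a relative displacement per retrieved image, and fuse the resulting position hypotheses. The displacement estimator, the 2D pose representation, and the similarity-weighted fusion are assumptions for illustration, not the patented method itself.

```python
# Sketch of retrieval-based localization, assuming precomputed candidate
# embeddings/poses and a hypothetical displacement estimator.
import numpy as np

def localize(query_embedding, candidate_embeddings, candidate_poses, estimate_displacement, k=5):
    """Return an estimated 2D position (x, y) for the query image.

    query_embedding:       (d,) embedding of the query image
    candidate_embeddings:  (n, d) unit-norm embeddings of map images
    candidate_poses:       (n, 2) known (x, y) positions of the map images
    estimate_displacement: callable(candidate_index) -> (2,) relative displacement
                           from that map image to the query image (stands in for
                           the feature-matching step in the abstract)
    """
    # 1. Retrieve the k candidates whose embeddings are most similar to the query.
    sims = candidate_embeddings @ query_embedding
    topk = np.argsort(-sims)[:k]
    # 2. Turn each retrieved candidate into a position hypothesis via its displacement.
    hypotheses = np.stack([candidate_poses[i] + estimate_displacement(i) for i in topk])
    # 3. Fuse the hypotheses, weighting by retrieval similarity.
    weights = np.exp(sims[topk]) / np.exp(sims[topk]).sum()
    return weights @ hypotheses

# Toy usage with random embeddings and a zero-displacement stub.
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16)); emb /= np.linalg.norm(emb, axis=1, keepdims=True)
poses = rng.uniform(0, 50, size=(100, 2))
query = emb[3] + 0.01 * rng.normal(size=16); query /= np.linalg.norm(query)
print(localize(query, emb, poses, lambda i: np.zeros(2)))
```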
  • Publication number: 20220165043
    Abstract: The present disclosure provides systems and methods for generating photorealistic image simulation data with geometry-aware composition for testing autonomous vehicles. In particular, aspects of the present disclosure can involve the intake of data on an environment and output of augmented data on the environment with the photorealistic addition of an object. As one example, data on the driving experiences of a self-driving vehicle can be augmented to add another vehicle into the collected environment data. The augmented data may then be used to test safety features of software for a self-driving vehicle.
    Type: Application
    Filed: February 10, 2022
    Publication date: May 26, 2022
    Inventors: Frieda Rong, Yun Chen, Shivam Duggal, Shenlong Wang, Xinchen Yan, Sivabalan Manivasagam, Ersin Yumer, Raquel Urtasun
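
A sketch of the geometry-aware part of the composition described in publication 20220165043 above: once an added object has been rendered into the camera frame, depth decides per pixel whether it occludes or is occluded by existing scene geometry. The buffer layout and the simple z-test are assumptions; the publication's photorealistic rendering pipeline is not reproduced here.

```python
# Sketch of geometry-aware compositing: insert a rendered object into a camera
# image, using depth to resolve occlusion. Assumes the object has already been
# rendered into the camera frame with its own color/depth/mask buffers.
import numpy as np

def composite(scene_rgb, scene_depth, obj_rgb, obj_depth, obj_mask):
    """Paste the object wherever it is closer to the camera than existing geometry."""
    visible = obj_mask & (obj_depth < scene_depth)             # per-pixel occlusion test
    out_rgb = np.where(visible[..., None], obj_rgb, scene_rgb)
    out_depth = np.where(visible, obj_depth, scene_depth)
    return out_rgb, out_depth

# Toy usage: a 4x4 scene and a small object patch in front of it.
H = W = 4
scene_rgb = np.zeros((H, W, 3)); scene_depth = np.full((H, W), 10.0)
obj_rgb = np.ones((H, W, 3)); obj_depth = np.full((H, W), 5.0)
obj_mask = np.zeros((H, W), dtype=bool); obj_mask[1:3, 1:3] = True
rgb, depth = composite(scene_rgb, scene_depth, obj_rgb, obj_depth, obj_mask)
print(depth)
```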
  • Patent number: 11341356
    Abstract: Systems and methods for determining object intentions through visual attributes are provided. A method can include determining, by a computing system, one or more regions of interest. The regions of interest can be associated with the surrounding environment of a first vehicle. The method can include determining, by a computing system, spatial features and temporal features associated with the regions of interest. The spatial features can be indicative of a vehicle orientation associated with a vehicle of interest. The temporal features can be indicative of a semantic state associated with signal lights of the vehicle of interest. The method can include determining, by the computing system, a vehicle intention. The vehicle intention can be based on the spatial and temporal features. The method can include initiating, by the computing system, an action. The action can be based on the vehicle intention.
    Type: Grant
    Filed: February 26, 2019
    Date of Patent: May 24, 2022
    Assignee: UATC, LLC
    Inventors: Davi Eugenio Nascimento Frossard, Eric Randall Kee, Raquel Urtasun
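
A toy illustration of combining the two cue types named in patent 11341356 above: a spatial cue (the observed vehicle's orientation) and a temporal cue (the per-frame semantic state of its turn signals). The thresholded rules are stand-ins for the learned spatial and temporal feature models.

```python
# Sketch of inferring a vehicle's intention from spatial and temporal cues.
import numpy as np

def infer_intention(orientation_deg, signal_states):
    """orientation_deg: heading of the observed vehicle relative to its lane (positive = left).
    signal_states: per-frame semantic signal state, e.g. ['off', 'left_on', ...].
    Returns one of 'turn_left', 'turn_right', 'go_straight'."""
    left_frac = np.mean([s == 'left_on' for s in signal_states])     # temporal cue
    right_frac = np.mean([s == 'right_on' for s in signal_states])
    if left_frac > 0.3 or orientation_deg > 20:                      # spatial cue: already angled left
        return 'turn_left'
    if right_frac > 0.3 or orientation_deg < -20:
        return 'turn_right'
    return 'go_straight'

print(infer_intention(5.0, ['off', 'left_on', 'left_on', 'left_on']))  # -> 'turn_left'
```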
  • Publication number: 20220156939
    Abstract: Systems and methods for generating object segmentations across videos are provided. An example system can enable an annotator to identify objects within a first image frame of a video sequence by clicking anywhere within the object. The system processes the first image frame and a second, subsequent, image frame to assign each pixel of the second image frame to one of the objects identified in the first image frame or the background. The system refines the resulting object masks for the second image frame using a recurrent attention module based on contextual features extracted from the second image frame. The system receives additional user input for the second image frame and uses the input, in combination with the object masks for the second image frame, to determine object masks for a third, subsequent, image frame in the video sequence. The process is repeated for each image in the video sequence.
    Type: Application
    Filed: November 17, 2021
    Publication date: May 19, 2022
    Inventors: Namdar Homayounfar, Wei-Chiu Ma, Raquel Urtasun
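
A minimal sketch of the frame-to-frame propagation step in publication 20220156939 above: object masks identified in one frame are carried to the next frame by assigning every pixel to one of the known objects or to the background. A mean-color appearance rule stands in for the learned assignment and for the recurrent-attention refinement.

```python
# Sketch of propagating click-initialized object masks from one frame to the next.
import numpy as np

def propagate_masks(prev_frame, prev_labels, next_frame, bg_threshold=0.3):
    """prev_labels: integer mask over prev_frame (0 = background, 1..K = objects).
    Returns an integer mask for next_frame."""
    K = prev_labels.max()
    # Mean color of each object in the previous frame acts as a tiny appearance model.
    prototypes = np.stack([prev_frame[prev_labels == k].mean(axis=0) for k in range(1, K + 1)])
    dists = np.linalg.norm(next_frame[:, :, None, :] - prototypes[None, None, :, :], axis=-1)
    best = dists.argmin(axis=-1)                      # closest object per pixel
    labels = best + 1
    labels[dists.min(axis=-1) > bg_threshold] = 0     # far from every object -> background
    return labels

# Toy usage: a red object on a blue background moves one pixel to the right.
f0 = np.zeros((4, 4, 3)); f0[..., 2] = 1.0
f0[1:3, 1:3] = [1.0, 0.0, 0.0]
labels0 = np.zeros((4, 4), dtype=int); labels0[1:3, 1:3] = 1
f1 = np.roll(f0, 1, axis=1)
print(propagate_masks(f0, labels0, f1))
```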
  • Publication number: 20220157161
    Abstract: Example aspects of the present disclosure describe a scene generator for simulating scenes in an environment. For example, snapshots of simulated traffic scenes can be generated by sampling a joint probability distribution trained on real-world traffic scenes. In some implementations, samples of the joint probability distribution can be obtained by sampling a plurality of factorized probability distributions for a plurality of objects for sequential insertion into the scene.
    Type: Application
    Filed: November 17, 2021
    Publication date: May 19, 2022
    Inventors: Shuhan Tan, Kelvin Ka Wing Wong, Shenlong Wang, Sivabalan Manivasagam, Mengye Ren, Raquel Urtasun
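
A minimal sketch of sequential scene sampling as described in publication 20220157161 above: objects are inserted one at a time, with each new sample conditioned on the objects already in the scene. The Gaussian placement model and the non-overlap rejection rule are illustrative assumptions, not the learned factorized distributions.

```python
# Sketch of generating a traffic-scene snapshot by sequential sampling.
import numpy as np

def sample_scene(n_actors, rng, min_gap=4.0, max_tries=50):
    actors = []                                   # each actor: an (x, y) position on the map
    for _ in range(n_actors):
        for _ in range(max_tries):
            candidate = rng.normal(loc=0.0, scale=20.0, size=2)   # stand-in for p(actor_i | map)
            # Condition on previously inserted actors: reject overlapping placements.
            if all(np.linalg.norm(candidate - a) > min_gap for a in actors):
                actors.append(candidate)
                break
    return np.array(actors)

print(sample_scene(5, np.random.default_rng(0)))
```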
  • Publication number: 20220153309
    Abstract: Systems and methods are disclosed for motion forecasting and planning for autonomous vehicles. For example, a plurality of future traffic scenarios are determined by modeling a joint distribution of actor trajectories for a plurality of actors, as opposed to an approach that models actors individually. As another example, a diversity objective is evaluated that rewards sampling of the future traffic scenarios that require distinct reactions from the autonomous vehicle. An estimated probability for the plurality of future traffic scenarios can be determined and used to generate a contingency plan for motion of the autonomous vehicle. The contingency plan can include at least one initial short-term trajectory intended for immediate action of the autonomous vehicle and a plurality of subsequent long-term trajectories associated with the plurality of future traffic scenarios.
    Type: Application
    Filed: November 17, 2021
    Publication date: May 19, 2022
    Inventors: Alexander Yuhao Cui, Abbas Sadat, Sergio Casas, Renjie Liao, Raquel Urtasun
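
A small sketch of the contingency-planning idea in publication 20220153309 above: commit now to one short-term trajectory while keeping a separate long-term continuation for each sampled future scenario, and pick the short-term trajectory with the lowest probability-weighted cost. The scalar trajectories and cost function are toy assumptions; the diversity-rewarding sampling objective is not modeled here.

```python
# Sketch of contingency planning over sampled future scenarios.
import numpy as np

def plan_contingency(short_options, long_options, scenarios, probs, cost):
    """Pick the short-term trajectory with the lowest expected cost, assuming the best
    long-term continuation can be chosen once the true scenario is revealed."""
    best_short, best_value, best_branches = None, np.inf, None
    for s in short_options:
        # For each scenario, the cheapest long-term continuation after committing to s.
        branch_costs = [min(cost(s, l, sc) for l in long_options) for sc in scenarios]
        branches = [min(long_options, key=lambda l: cost(s, l, sc)) for sc in scenarios]
        value = float(np.dot(probs, branch_costs))
        if value < best_value:
            best_short, best_value, best_branches = s, value, branches
    return best_short, best_branches

# Toy usage: scalar "trajectories" and scenarios, cost = squared mismatch.
shorts, longs, scens, p = [0.0, 1.0], [0.0, 1.0, 2.0], [0.2, 1.8], [0.5, 0.5]
print(plan_contingency(shorts, longs, scens, p, lambda s, l, sc: (s - sc) ** 2 + (l - sc) ** 2))
```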
  • Publication number: 20220153310
    Abstract: Techniques for improving the performance of an autonomous vehicle (AV) by automatically annotating objects surrounding the AV are described herein. A system can obtain sensor data from a sensor coupled to the AV and generate an initial object trajectory for an object using the sensor data. Additionally, the system can determine a fixed value for the object size of the object based on the initial object trajectory. Moreover, the system can generate an updated initial object trajectory, wherein the object size corresponds to the fixed value. Furthermore, the system can determine, based on the sensor data and the updated initial object trajectory, a refined object trajectory. Subsequently, the system can generate a multi-dimensional label for the object based on the refined object trajectory. A motion plan for controlling the AV can be generated based on the multi-dimensional label.
    Type: Application
    Filed: November 17, 2021
    Publication date: May 19, 2022
    Inventors: Bin Yang, Ming Liang, Wenyuan Zeng, Min Bai, Raquel Urtasun
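
A minimal sketch of the size-fixing step in publication 20220153310 above: a tracked object's size is fixed to a single value estimated across its noisy per-frame detections, and the trajectory is then refined with that size held constant. Median size estimation and moving-average smoothing are stand-ins for the learned refinement.

```python
# Sketch of auto-annotation refinement: fix the object size, then smooth the track.
import numpy as np

def refine_track(centers, sizes, window=5):
    """centers: (T, 2) noisy per-frame box centers; sizes: (T, 2) noisy per-frame box sizes."""
    fixed_size = np.median(sizes, axis=0)                  # one size for the whole track
    kernel = np.ones(window) / window
    smoothed = np.stack([np.convolve(centers[:, d], kernel, mode='same') for d in range(2)], axis=1)
    return [{'center': c, 'size': fixed_size} for c in smoothed]   # per-frame labels

# Toy usage: a straight track with detection noise.
rng = np.random.default_rng(0)
T = 20
centers = np.stack([np.linspace(0, 19, T), np.zeros(T)], axis=1) + rng.normal(scale=0.2, size=(T, 2))
sizes = np.array([4.5, 1.8]) + rng.normal(scale=0.3, size=(T, 2))
print(refine_track(centers, sizes)[0])
```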
  • Publication number: 20220153315
    Abstract: Systems and methods are provided for forecasting the motion of actors within a surrounding environment of an autonomous platform. For example, a computing system of an autonomous platform can use machine-learned model(s) to generate actor-specific graphs with past motions of actors and the local map topology. The computing system can project the actor-specific graphs of all actors to a global graph. The global graph can allow the computing system to determine which actors may interact with one another by propagating information over the global graph. The computing system can distribute the interactions determined using the global graph to the individual actor-specific graphs. The computing system can then predict a motion trajectory for an actor based on the associated actor-specific graph, which captures the actor-to-actor interactions and actor-to-map relations.
    Type: Application
    Filed: November 17, 2021
    Publication date: May 19, 2022
    Inventors: Wenyuan Zeng, Ming Liang, Renjie Liao, Raquel Urtasun
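
A minimal sketch of the interaction-propagation step in publication 20220153315 above: per-actor features are placed on a global graph whose edges connect nearby actors, one round of message passing mixes information between connected actors, and the updated features would then drive each actor's trajectory prediction. Distance-based edges and simple feature averaging are illustrative assumptions.

```python
# Sketch of propagating information over a global actor graph.
import numpy as np

def propagate_interactions(positions, features, radius=10.0):
    """positions: (N, 2) actor locations; features: (N, d) per-actor features."""
    n = len(positions)
    updated = features.copy()
    for i in range(n):
        dists = np.linalg.norm(positions - positions[i], axis=1)
        neighbors = (dists < radius) & (dists > 0)         # edges of the global graph
        if neighbors.any():
            # One synchronous round: mix each actor's feature with its neighbors' originals.
            updated[i] = 0.5 * features[i] + 0.5 * features[neighbors].mean(axis=0)
    return updated

# Toy usage: actors 0 and 1 interact, actor 2 is too far away.
pos = np.array([[0.0, 0.0], [3.0, 0.0], [50.0, 0.0]])
feats = np.eye(3)
print(propagate_interactions(pos, feats))
```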
  • Publication number: 20220153298
    Abstract: Techniques for generating testing data for an autonomous vehicle (AV) are described herein. A system can obtain sensor data descriptive of a traffic scenario. The traffic scenario can include a subject vehicle and actors in an environment. Additionally, the system can generate a perturbed trajectory for a first actor in the environment based on perturbation values. Moreover, the system can generate simulated sensor data. The simulated sensor data can include data descriptive of the perturbed trajectory for the first actor in the environment. Furthermore, the system can provide the simulated sensor data as input to an AV control system. The AV control system can be configured to process the simulated sensor data to generate an updated trajectory for the subject vehicle in the environment. Subsequently, the system can evaluate an adversarial loss function based on the updated trajectory for the subject vehicle to generate an adversarial loss value.
    Type: Application
    Filed: November 17, 2021
    Publication date: May 19, 2022
    Inventors: Jingkang Wang, Ava Alison Pun, Xuanyuan Tu, Mengye Ren, Abbas Sadat, Sergio Casas, Sivabalan Manivasagam, Raquel Urtasun
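
A minimal sketch of the adversarial search loop in publication 20220153298 above: sample perturbation values for one actor's trajectory, let the AV control system replan against the perturbed scenario, score the outcome with an adversarial loss, and keep the worst case. The random-search strategy, the straight-line planner, and the near-miss loss in the toy usage are all assumptions.

```python
# Sketch of searching for a worst-case trajectory perturbation.
import numpy as np

def worst_case_perturbation(actor_traj, plan_fn, adv_loss, n_samples=100, scale=1.0, seed=0):
    rng = np.random.default_rng(seed)
    best_delta, best_loss = None, -np.inf
    for _ in range(n_samples):
        delta = rng.normal(scale=scale, size=actor_traj.shape)    # perturbation values
        perturbed = actor_traj + delta
        av_traj = plan_fn(perturbed)                              # updated subject-vehicle trajectory
        loss = adv_loss(av_traj, perturbed)
        if loss > best_loss:
            best_delta, best_loss = delta, loss
    return best_delta, best_loss

# Toy usage: the "planner" keeps a straight line; the loss rewards near misses.
actor = np.stack([np.linspace(0, 10, 11), np.full(11, 3.0)], axis=1)
plan = lambda a: np.stack([np.linspace(0, 10, 11), np.zeros(11)], axis=1)
loss = lambda av, a: -np.min(np.linalg.norm(av - a, axis=1))
print(worst_case_perturbation(actor, plan, loss)[1])
```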
  • Publication number: 20220153314
    Abstract: Systems and methods for generating synthetic testing data for autonomous vehicles are provided. A computing system can obtain map data descriptive of an environment and object data descriptive of a plurality of objects within the environment. The computing system can generate context data including deep or latent features extracted from the map and object data by one or more machine-learned models. The computing system can process the context data with a machine-learned model to generate synthetic motion predictions for the plurality of objects. The synthetic motion predictions for the objects can include one or more synthesized states for the objects at future times. The computing system can provide, as an output, synthetic testing data that includes the plurality of synthetic motion predictions for the objects. The synthetic testing data can be used to test an autonomous vehicle control system in a simulation.
    Type: Application
    Filed: November 17, 2021
    Publication date: May 19, 2022
    Inventors: Shun Da Suo, Sebastián David Regalado Lozano, Sergio Casas, Raquel Urtasun
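
A minimal sketch of the output of publication 20220153314 above: for each object in the scene, synthesize a sequence of future states that can serve as testing data. A noisy constant-velocity rollout stands in for the machine-learned generator that consumes the map and object context.

```python
# Sketch of producing synthetic motion predictions for a set of objects.
import numpy as np

def synthesize_motion(objects, horizon=10, dt=0.1, noise=0.05, seed=0):
    """objects: list of dicts with 'position' (2,) and 'velocity' (2,).
    Returns, per object, a (horizon, 2) array of synthesized future positions."""
    rng = np.random.default_rng(seed)
    predictions = []
    for obj in objects:
        pos, vel = np.array(obj['position'], float), np.array(obj['velocity'], float)
        states = []
        for _ in range(horizon):
            pos = pos + vel * dt + rng.normal(scale=noise, size=2)   # synthesized next state
            states.append(pos.copy())
        predictions.append(np.stack(states))
    return predictions

objs = [{'position': [0, 0], 'velocity': [1, 0]}, {'position': [5, 5], 'velocity': [0, -1]}]
print(synthesize_motion(objs)[0].shape)
```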
  • Publication number: 20220137636
    Abstract: Systems and methods for the simultaneous localization and mapping of autonomous vehicle systems are provided. A method includes receiving a plurality of input image frames from a plurality of asynchronous image devices triggered at different times to capture the plurality of input image frames. The method includes identifying reference image frame(s) corresponding to a respective input image frame by matching the field of view of the respective input image frame to the fields of view of the reference image frame(s). The method includes determining association(s) between the respective input image frame and three-dimensional map point(s) based on a comparison of the respective input image frame to the one or more reference image frames. The method includes generating an estimated pose for the autonomous vehicle based at least in part on the one or more three-dimensional map points. The method includes updating a continuous-time motion model of the autonomous vehicle based on the estimated pose.
    Type: Application
    Filed: November 1, 2021
    Publication date: May 5, 2022
    Inventors: Anqi Joyce Yang, Can Cui, Ioan Andrei Bârsan, Shenlong Wang, Raquel Urtasun
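
A minimal sketch of the continuous-time element in publication 20220137636 above: because the cameras trigger asynchronously, the vehicle pose at any capture timestamp is read from a continuous-time motion model, and each new pose estimate is folded back into that model. Linear interpolation over (time, pose) samples is a stand-in for the actual model.

```python
# Sketch of a continuous-time motion model queried at asynchronous trigger times.
import numpy as np

class ContinuousTimeMotionModel:
    def __init__(self):
        self.times, self.poses = [], []              # pose here is just (x, y, heading)

    def update(self, t, pose):
        """Fold a new pose estimate into the model."""
        self.times.append(t)
        self.poses.append(np.asarray(pose, float))

    def pose_at(self, t):
        """Interpolate the pose at an arbitrary capture timestamp."""
        times, poses = np.array(self.times), np.stack(self.poses)
        return np.array([np.interp(t, times, poses[:, d]) for d in range(poses.shape[1])])

model = ContinuousTimeMotionModel()
model.update(0.00, [0.0, 0.0, 0.0])
model.update(0.10, [1.0, 0.0, 0.05])
print(model.pose_at(0.07))   # pose at one camera's asynchronous trigger time
```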
  • Publication number: 20220101600
    Abstract: Systems and methods for identifying travel way features in real time are provided. A method can include receiving two-dimensional and three-dimensional data associated with the surrounding environment of a vehicle. The method can include providing the two-dimensional data as one or more inputs into a machine-learned segmentation model to output a two-dimensional segmentation. The method can include fusing the two-dimensional segmentation with the three-dimensional data to generate a three-dimensional segmentation. The method can include storing the three-dimensional segmentation in a classification database with data indicative of one or more previously generated three-dimensional segmentations. The method can include providing one or more datapoint sets from the classification database as one or more inputs into a machine-learned enhancing model to obtain an enhanced three-dimensional segmentation.
    Type: Application
    Filed: December 10, 2021
    Publication date: March 31, 2022
    Inventors: Raquel Urtasun, Min Bai, Shenlong Wang
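
A minimal sketch of the 2D-to-3D fusion step shared by publication 20220101600 above (and granted patent 11217012 later in this list): each 3D point is projected into the image with the camera intrinsics and takes its class from the 2D segmentation at that pixel. The pinhole projection and the given intrinsics are assumptions; the classification database and enhancing model are not modeled here.

```python
# Sketch of fusing a 2D segmentation with 3D points via projection.
import numpy as np

def fuse_segmentation(points_cam, seg_mask, K):
    """points_cam: (N, 3) points in camera coordinates (z forward);
    seg_mask: (H, W) integer class image; K: (3, 3) camera intrinsics.
    Returns an (N,) class label per point (-1 if behind the camera or outside the image)."""
    H, W = seg_mask.shape
    labels = np.full(len(points_cam), -1, dtype=int)
    front = points_cam[:, 2] > 1e-6                       # keep points in front of the camera
    uv = (K @ points_cam[front].T).T
    u = uv[:, 0] / uv[:, 2]
    v = uv[:, 1] / uv[:, 2]
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    idx = np.flatnonzero(front)[inside]
    labels[idx] = seg_mask[v[inside].astype(int), u[inside].astype(int)]
    return labels

# Toy usage with a 2x2 class mask and a unit-focal camera.
K = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [0.0, 0.0, 1.0]])
mask = np.array([[0, 1], [2, 3]])
pts = np.array([[0.0, 0.0, 1.0], [-0.5, -0.5, 1.0], [0.0, 0.0, -1.0]])
print(fuse_segmentation(pts, mask, K))   # -> [ 3  0 -1]
```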
  • Patent number: 11245927
    Abstract: A machine-learned image compression model includes a first encoder configured to generate a first image code based at least in part on first image data. The first encoder includes a first series of convolutional layers configured to generate a first series of respective feature maps based at least in part on the first image. A second encoder is configured to generate a second image code based at least in part on second image data and includes a second series of convolutional layers configured to generate a second series of respective feature maps based at least in part on the second image and disparity-warped feature data. Respective parametric skip functions associated with convolutional layers of the second series are configured to generate disparity-warped feature data based at least in part on disparity associated with the first series of respective feature maps and the second series of respective feature maps.
    Type: Grant
    Filed: May 4, 2021
    Date of Patent: February 8, 2022
    Assignee: UATC, LLC
    Inventors: Jerry Junkai Liu, Shenlong Wang, Raquel Urtasun
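
A minimal sketch of the disparity-warping idea behind the parametric skip functions in patent 11245927 above: features from the first image's encoder are shifted by per-pixel disparity so they align with the second image, so the second encoder only needs to describe what warping cannot explain. Integer per-row shifts over a dense feature map are an illustrative simplification.

```python
# Sketch of warping a first-image feature map to the second view by disparity.
import numpy as np

def disparity_warp(features, disparity):
    """features: (H, W, C) feature map from the first image;
    disparity: (H, W) integer horizontal shift from the first view to the second.
    Returns the warped (H, W, C) feature map aligned with the second image."""
    H, W, C = features.shape
    warped = np.zeros_like(features)
    cols = np.arange(W)
    for r in range(H):
        src = cols - disparity[r]                  # source column for each target pixel
        ok = (src >= 0) & (src < W)
        warped[r, ok] = features[r, src[ok]]
    # The second encoder would then encode: second_features - warped (the residual).
    return warped

# Toy usage: one row, four columns, unit disparity shifts features right by one.
feat = np.arange(12, dtype=float).reshape(1, 4, 3)
disp = np.ones((1, 4), dtype=int)
print(disparity_warp(feat, disp)[0, :, 0])
```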
  • Publication number: 20220036184
    Abstract: A computing system can include one or more processors and one or more computer-readable media storing instructions that, when executed by the one or more processors, cause the computing system to perform operations including obtaining model structure data indicative of a plurality of parameters of a machine-learned model; determining a codebook comprising a plurality of centroids, the plurality of centroids having a respective index of a plurality of indices indicative of an ordering of the codebook; determining a plurality of codes respective to the plurality of parameters, the plurality of codes respectively comprising a code index of the plurality of indices corresponding to a closest centroid of the plurality of centroids to a respective parameter of the plurality of parameters; and providing encoded data as an encoded representation of the plurality of parameters of the machine-learned model, the encoded data comprising the codebook and the plurality of codes.
    Type: Application
    Filed: July 29, 2021
    Publication date: February 3, 2022
    Inventors: Ting Wei Liu, Julieta Martinez Covarrubias, Jashan Sunil Shewakramani, Raquel Urtasun, Wenyuan Zeng
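
A minimal sketch of the codebook encoding described in publication 20220036184 above: learn a small set of centroids over the model's parameters, store each parameter as the index of its closest centroid, and reconstruct by lookup. One-dimensional k-means is an assumption about how the codebook is built; the encoded data is then simply the codebook plus the codes.

```python
# Sketch of codebook-based compression of model parameters.
import numpy as np

def build_codebook(params, n_centroids=16, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = rng.choice(params, size=n_centroids, replace=False)
    for _ in range(iters):
        codes = np.abs(params[:, None] - centroids[None, :]).argmin(axis=1)
        for k in range(n_centroids):
            if np.any(codes == k):
                centroids[k] = params[codes == k].mean()
    return np.sort(centroids)

def encode(params, codebook):
    # One code index per parameter: the index of its closest centroid.
    return np.abs(params[:, None] - codebook[None, :]).argmin(axis=1)

def decode(codes, codebook):
    return codebook[codes]

params = np.random.default_rng(1).normal(size=1000).astype(np.float32)
cb = build_codebook(params)
codes = encode(params, cb)
print(np.abs(decode(codes, cb) - params).mean())   # mean reconstruction error
```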
  • Publication number: 20220032452
    Abstract: Systems and methods for streaming sensor packets in real-time are provided. An example method includes obtaining a sensor data packet representing a first portion of a three-hundred and sixty degree view of a surrounding environment of a robotic platform. The method includes generating, using machine-learned model(s), a local feature map based at least in part on the sensor data packet. The local feature map is indicative of local feature(s) associated with the first portion of the three-hundred and sixty degree view. The method includes updating, based at least in part on the local feature map, a spatial map to include the local feature(s). The spatial map includes previously extracted local features associated with a previous sensor data packet representing a different portion of the three-hundred and sixty degree view than the first portion. The method includes determining an object within the surrounding environment based on the updated spatial map.
    Type: Application
    Filed: July 29, 2021
    Publication date: February 3, 2022
    Inventors: Sergio Casas, Davi Eugenio Nascimento Frossard, Shun Da Suo, Xuanyuan Tu, Raquel Urtasun
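
A minimal sketch of the streaming update in publication 20220032452 above: each packet covers only a slice of the 360-degree sweep, its local features are extracted immediately, and they overwrite the matching slice of a persistent spatial map so downstream detection never waits for a full sweep. The fixed angular sectors and the hand-made feature extractor are assumptions.

```python
# Sketch of updating a spatial map from streaming sensor packets.
import numpy as np

N_SECTORS = 36                                    # 10-degree slices of the sweep

def update_spatial_map(spatial_map, packet_points, packet_sector, extract_features):
    """Refresh only the slice of the map covered by this packet."""
    spatial_map[packet_sector] = extract_features(packet_points)
    return spatial_map

# Toy usage: features are just (point count, mean range) per sector.
extract = lambda pts: np.array([len(pts), np.linalg.norm(pts, axis=1).mean()])
spatial_map = np.zeros((N_SECTORS, 2))
packet = np.random.default_rng(0).uniform(1, 30, size=(50, 2))
spatial_map = update_spatial_map(spatial_map, packet, packet_sector=7, extract_features=extract)
print(spatial_map[7])
```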
  • Publication number: 20220036579
    Abstract: Systems and methods for generating simulation data based on real-world dynamic objects are provided. A method includes obtaining two- and three-dimensional data descriptive of a dynamic object in the real world. The two- and three-dimensional information can be provided as an input to a machine-learned model to receive object model parameters descriptive of a pose and shape modification with respect to a three-dimensional template object model. The parameters can represent a three-dimensional dynamic object model indicative of an object pose and an object shape for the dynamic object. The method can be repeated on sequential two- and three-dimensional information to generate a sequence of object model parameters over time. Portions of a sequence of parameters can be stored as simulation data descriptive of a simulated trajectory of a unique dynamic object. The parameters can be evaluated by an objective function to refine the parameters and train the machine-learned model.
    Type: Application
    Filed: July 29, 2021
    Publication date: February 3, 2022
    Inventors: Ming Liang, Wei-Chiu Ma, Sivabalan Manivasagam, Raquel Urtasun, Bin Yang, Ze Yang
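
A minimal sketch of the object parameterization in publication 20220036579 above: a fixed 3D template is deformed by shape parameters and posed by a rigid transform, and a sequence of such parameters over time describes a simulated trajectory. Per-axis scaling as the shape modification and a yaw-only pose are illustrative assumptions.

```python
# Sketch of deforming a 3D template object model with pose and shape parameters.
import numpy as np

def apply_params(template, scale, yaw, translation):
    """template: (N, 3) template points; scale: (3,); yaw: rotation about z; translation: (3,)."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return (template * scale) @ R.T + translation

# A simulated trajectory is a sequence of per-frame parameters applied to one template.
template = np.random.default_rng(0).uniform(-1, 1, size=(100, 3))
frames = [apply_params(template, np.array([2.0, 1.0, 1.5]), 0.1 * t, np.array([0.5 * t, 0.0, 0.0]))
          for t in range(10)]
print(len(frames), frames[0].shape)
```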
  • Publication number: 20220032970
    Abstract: Systems and methods for improved vehicle-to-vehicle communications are provided. A system can obtain sensor data depicting its surrounding environment and input the sensor data (or processed sensor data) to a machine-learned model to perceive its surrounding environment based on its location within the environment. The machine-learned model can generate an intermediate environmental representation that encodes features within the surrounding environment. The system can receive a number of different intermediate environmental representations and corresponding locations from various other systems, aggregate the representations based on the corresponding locations, and perceive its surrounding environment based on the aggregated representations. The system can determine relative poses between each of the systems and an absolute pose for each system based on the representations.
    Type: Application
    Filed: January 15, 2021
    Publication date: February 3, 2022
    Inventors: Nicholas Baskar Vadivelu, Mengye Ren, Xuanyuan Tu, Raquel Urtasun, Jingkang Wang
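
A minimal sketch of the aggregation step in publication 20220032970 above: each vehicle shares an intermediate bird's-eye-view representation together with its location, the receiver aligns every received grid to its own frame using the relative position, and the aligned grids are combined. Integer cell shifts via np.roll and plain averaging are simplifications of the pose-aware aggregation.

```python
# Sketch of aggregating intermediate environmental representations from other vehicles.
import numpy as np

def aggregate(ego_grid, ego_pos, others, cell_size=1.0):
    """others: list of (grid, sender_pos) pairs; all grids share one (H, W, C) layout."""
    stacked = [ego_grid]
    for grid, pos in others:
        offset = np.round((np.asarray(pos) - np.asarray(ego_pos)) / cell_size).astype(int)
        shifted = np.roll(grid, shift=(offset[0], offset[1]), axis=(0, 1))   # align to ego frame
        stacked.append(shifted)
    return np.mean(stacked, axis=0)

# Toy usage: one other vehicle reports a feature two cells away in x, one in -y.
ego = np.zeros((20, 20, 4))
other = np.zeros((20, 20, 4)); other[10, 10] = 1.0
print(aggregate(ego, ego_pos=(0.0, 0.0), others=[(other, (2.0, -1.0))]).max())
```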
  • Patent number: 11221413
    Abstract: Generally, the disclosed systems and methods implement improved detection of objects in three-dimensional (3D) space. More particularly, an improved 3D object detection system can exploit continuous fusion of multiple sensors and/or integrated geographic prior map data to enhance effectiveness and robustness of object detection in applications such as autonomous driving. In some implementations, geographic prior data (e.g., geometric ground and/or semantic road features) can be exploited to enhance three-dimensional object detection for autonomous vehicle applications. In some implementations, object detection systems and methods can be improved based on dynamic utilization of multiple sensor modalities. More particularly, an improved 3D object detection system can exploit both LIDAR systems and cameras to perform very accurate localization of objects within three-dimensional space relative to an autonomous vehicle.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: January 11, 2022
    Assignee: UATC, LLC
    Inventors: Ming Liang, Bin Yang, Shenlong Wang, Wei-Chiu Ma, Raquel Urtasun
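
A minimal sketch of the geographic-prior idea in patent 11221413 above: a ground-height prior from the map is subtracted from each LiDAR point before rasterizing into a bird's-eye-view grid, so the detector reasons about height above the road rather than absolute elevation. The grid extents, the cell size, and the prior lookup are assumptions; the LiDAR-camera continuous fusion itself is not shown here.

```python
# Sketch of building a BEV height grid with a geometric ground prior.
import numpy as np

def bev_with_ground_prior(points, ground_height_fn, cell=0.5, extent=50.0):
    """points: (N, 3) LiDAR points; ground_height_fn(x, y) -> road height from the map prior."""
    n_cells = int(2 * extent / cell)
    grid = np.full((n_cells, n_cells), -np.inf)             # max height-above-ground per cell
    for x, y, z in points:
        if -extent <= x < extent and -extent <= y < extent:
            i, j = int((x + extent) / cell), int((y + extent) / cell)
            grid[i, j] = max(grid[i, j], z - ground_height_fn(x, y))
    grid[np.isinf(grid)] = 0.0                               # empty cells
    return grid

# Toy usage with a flat-road prior.
pts = np.random.default_rng(0).uniform(-40, 40, size=(200, 3))
print(bev_with_ground_prior(pts, lambda x, y: 0.0).shape)
```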
  • Patent number: 11216004
    Abstract: A computer system including one or more processors programmed or configured to receive image data associated with an image of one or more roads, where the one or more roads comprise one or more lanes, determine a lane classification of the one or more lanes based on the image data associated with the image of the one or more roads, and provide lane classification data associated with the lane classification of the one or more lanes.
    Type: Grant
    Filed: November 7, 2018
    Date of Patent: January 4, 2022
    Assignee: UATC, LLC
    Inventors: Justin Jin-Wei Liang, Raquel Urtasun Sotil
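
A toy illustration of the interface described in patent 11216004 above: given lanes detected in a road image, emit lane classification data for each lane. The specific classes and the color/dash rule are invented stand-ins for the learned classifier.

```python
# Sketch of producing lane classification data per detected lane.
def classify_lane(lane):
    """lane: dict with 'marking_color' and 'is_dashed' extracted from the image (toy features)."""
    if lane['marking_color'] == 'yellow':
        return 'opposite-direction boundary'
    return 'dashed same-direction divider' if lane['is_dashed'] else 'solid edge line'

lanes = [{'marking_color': 'white', 'is_dashed': True},
         {'marking_color': 'yellow', 'is_dashed': False}]
lane_classification_data = [{'lane': i, 'class': classify_lane(l)} for i, l in enumerate(lanes)]
print(lane_classification_data)
```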
  • Patent number: 11217012
    Abstract: Systems and methods for identifying travel way features in real time are provided. A method can include receiving two-dimensional and three-dimensional data associated with the surrounding environment of a vehicle. The method can include providing the two-dimensional data as one or more inputs into a machine-learned segmentation model to output a two-dimensional segmentation. The method can include fusing the two-dimensional segmentation with the three-dimensional data to generate a three-dimensional segmentation. The method can include storing the three-dimensional segmentation in a classification database with data indicative of one or more previously generated three-dimensional segmentations. The method can include providing one or more datapoint sets from the classification database as one or more inputs into a machine-learned enhancing model to obtain an enhanced three-dimensional segmentation.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: January 4, 2022
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Min Bai, Shenlong Wang