Patents by Inventor Raquel Urtasun

Raquel Urtasun has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190383945
    Abstract: Aspects of the present disclosure involve systems, methods, and devices for autonomous vehicle localization using a Lidar intensity map. A system is configured to generate a map embedding using a first neural network and to generate an online Lidar intensity embedding using a second neural network. The map embedding is based on input map data comprising a Lidar intensity map, and the online Lidar intensity embedding is based on online Lidar sweep data. The system is further configured to generate multiple pose candidates based on the online Lidar intensity embedding and compute a three-dimensional (3D) score map comprising a match score for each pose candidate that indicates a similarity between the pose candidate and the map embedding. The system is further configured to determine a pose of a vehicle based on the 3D score map and to control one or more operations of the vehicle based on the determined pose.
    Type: Application
    Filed: June 11, 2019
    Publication date: December 19, 2019
    Inventors: Shenlong Wang, Andrei Pokrovsky, Raquel Urtasun Sotil, Ioan Andrei Bârsan
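The matching step this abstract describes, scoring pose candidates by how well the online LiDAR embedding matches the map embedding, can be sketched as follows. This is a minimal illustration, not the patent's networks: random arrays stand in for the two learned embeddings, and plain cross-correlation stands in for the learned match score.

```python
import numpy as np

# Stand-ins for the two network outputs: a map embedding and an online
# LiDAR embedding cropped from it at an unknown offset (the true pose).
rng = np.random.default_rng(0)
map_emb = rng.normal(size=(32, 32))          # map embedding (first network)
true_dx, true_dy = 3, 5
online_emb = map_emb[true_dx:true_dx + 16, true_dy:true_dy + 16]  # online embedding (second network)

def match_score(dx, dy):
    """Similarity between one pose candidate and the map embedding."""
    patch = map_emb[dx:dx + 16, dy:dy + 16]
    return float((patch * online_emb).sum())

# Build a score map over all candidate shifts; the best-scoring candidate
# is taken as the vehicle pose, mirroring the selection step in the abstract.
scores = {(dx, dy): match_score(dx, dy) for dx in range(16) for dy in range(16)}
best_pose = max(scores, key=scores.get)
print(best_pose)  # recovers the true offset (3, 5)
```

The embeddings here are random stand-ins, but the control flow (generate candidates, score each against the map, pick the best) follows the abstract.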
  • Publication number: 20190384994
    Abstract: Systems and methods for determining object intentions through visual attributes are provided. A method can include determining, by a computing system, one or more regions of interest. The regions of interest can be associated with the surrounding environment of a first vehicle. The method can include determining, by the computing system, spatial features and temporal features associated with the regions of interest. The spatial features can be indicative of a vehicle orientation associated with a vehicle of interest. The temporal features can be indicative of a semantic state associated with signal lights of the vehicle of interest. The method can include determining, by the computing system, a vehicle intention. The vehicle intention can be based on the spatial and temporal features. The method can include initiating, by the computing system, an action. The action can be based on the vehicle intention.
    Type: Application
    Filed: February 26, 2019
    Publication date: December 19, 2019
    Inventors: Davi Eugenio Nascimento Frossard, Eric Randall Kee, Raquel Urtasun
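As a hedged illustration of how the two feature types combine into an intention, here is a hand-written rule table. The patent learns this mapping rather than hard-coding it; the thresholds, labels, and function name below are invented for the example. Spatial features supply the observed vehicle's orientation, temporal features supply the semantic state of its signal lights.

```python
def vehicle_intention(orientation_deg, signal_state):
    """Illustrative intention rule (the patent learns this mapping).

    orientation_deg: heading of the vehicle of interest relative to our lane.
    signal_state: semantic state of its signal lights: 'left', 'right', or 'off'.
    """
    if signal_state == "left" or orientation_deg < -15:
        return "merging_left"
    if signal_state == "right" or orientation_deg > 15:
        return "merging_right"
    return "keeping_lane"

print(vehicle_intention(2.0, "left"))   # signal light dominates: merging_left
print(vehicle_intention(20.0, "off"))   # orientation alone: merging_right
print(vehicle_intention(0.0, "off"))    # keeping_lane
```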
  • Publication number: 20190382007
    Abstract: Generally, the disclosed systems and methods utilize multi-task machine-learned models for object intention determination in autonomous driving applications. For example, a computing system can receive sensor data obtained relative to an autonomous vehicle and map data associated with a surrounding geographic environment of the autonomous vehicle. The sensor data and map data can be provided as input to a machine-learned intent model. The computing system can receive a jointly determined prediction from the machine-learned intent model for multiple outputs including at least one detection output indicative of one or more objects detected within the surrounding environment of the autonomous vehicle, a first corresponding forecasting output descriptive of a trajectory indicative of an expected path of the one or more objects towards a goal location, and/or a second corresponding forecasting output descriptive of a discrete behavior intention determined from a predefined group of possible behavior intentions.
    Type: Application
    Filed: May 23, 2019
    Publication date: December 19, 2019
    Inventors: Sergio Casas, Wenjie Luo, Raquel Urtasun
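The jointly determined multi-output prediction can be sketched as one shared feature feeding several heads. Everything below is illustrative (random linear heads in place of the machine-learned intent model; sizes and the three-intention set are assumptions), but the structure mirrors the abstract: one detection output, one trajectory output, one discrete behavior intention.

```python
import numpy as np

# Shared backbone feature computed from sensor data plus map data (stand-in).
rng = np.random.default_rng(1)
backbone_feat = rng.normal(size=64)

W_det = rng.normal(size=(1, 64))    # detection head: objectness score
W_traj = rng.normal(size=(10, 64))  # forecasting head: 5 future (x, y) waypoints
W_int = rng.normal(size=(3, 64))    # intention head: 3 predefined behaviors

# All three outputs are produced jointly from the same feature.
detection_score = float(W_det @ backbone_feat)
trajectory = (W_traj @ backbone_feat).reshape(5, 2)   # expected path toward a goal
intent = int(np.argmax(W_int @ backbone_feat))        # index into the behavior set

print(trajectory.shape, intent)
```

The design point is that the heads share the backbone, so detection, forecasting, and intention are predicted from one pass rather than by separate models.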
  • Publication number: 20190286921
    Abstract: A method includes receiving image data associated with an image of a roadway including a crosswalk, generating a plurality of different characteristics of the image based on the image data, determining a position of the crosswalk on the roadway based on the plurality of different characteristics, the position including a first boundary and a second boundary of the crosswalk in the roadway, and providing map data associated with a map of the roadway, the map data including the position of the crosswalk on the roadway in the map. The plurality of different characteristics include a classification of one or more elements of the image, a segmentation of the one or more elements of the image, and one or more angles of the one or more elements of the image with respect to a line in the roadway.
    Type: Application
    Filed: March 14, 2019
    Publication date: September 19, 2019
    Inventors: Justin Jin-Wei Liang, Raquel Urtasun Sotil
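The boundary-finding step can be illustrated with a deliberate simplification: given segmented crosswalk pixels and the road's travel direction, take the two boundaries as the extreme extents of those pixels along that direction. The pixel data and the single-cue reduction below are assumptions; the patent combines classification, segmentation, and angle features.

```python
import numpy as np

# Illustrative segmented crosswalk pixels: a band spanning x = 10..13.
stripe_pixels = np.array([[x, y] for x in range(10, 14) for y in range(0, 20)],
                         dtype=float)
road_dir = np.array([1.0, 0.0])   # unit vector along the roadway

# Project every pixel onto the travel direction; the crosswalk's first and
# second boundaries are the minimum and maximum extents of the projection.
proj = stripe_pixels @ road_dir
first_boundary, second_boundary = proj.min(), proj.max()
print(first_boundary, second_boundary)  # 10.0 13.0
```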
  • Publication number: 20190147255
    Abstract: Systems and methods for generating sparse geographic data for autonomous vehicles are provided. In one example embodiment, a computing system can obtain sensor data associated with at least a portion of a surrounding environment of an autonomous vehicle. The computing system can identify a plurality of lane boundaries within the portion of the surrounding environment of the autonomous vehicle based at least in part on the sensor data and a first machine-learned model. The computing system can generate a plurality of polylines indicative of the plurality of lane boundaries based at least in part on a second machine-learned model. Each polyline of the plurality of polylines can be indicative of a lane boundary of the plurality of lane boundaries. The computing system can output a lane graph including the plurality of polylines.
    Type: Application
    Filed: September 6, 2018
    Publication date: May 16, 2019
    Inventors: Namdar Homayounfar, Wei-Chiu Ma, Shrinidhi Kowshika Lakshmikanth, Raquel Urtasun
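The dense-boundary-to-polyline step can be illustrated with a non-learned stand-in: the patent uses a second machine-learned model to draw each polyline, whereas the sketch below simplifies dense lane-boundary points into sparse polyline vertices with Douglas-Peucker. The boundary data is invented for the example.

```python
import numpy as np

def douglas_peucker(points, eps):
    """Keep only vertices deviating more than eps from the chord (recursive)."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points.tolist()
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    # Perpendicular distance of each point to the start-end chord (2D cross product).
    d = np.abs(chord[0] * (points[:, 1] - start[1])
               - chord[1] * (points[:, 0] - start[0])) / norm
    i = int(np.argmax(d))
    if d[i] <= eps:
        return [start.tolist(), end.tolist()]
    left = douglas_peucker(points[: i + 1], eps)
    right = douglas_peucker(points[i:], eps)
    return left[:-1] + right  # drop the duplicated split vertex

# Dense detected boundary: straight, then bending at x = 5.
dense_boundary = [(x, 0.0 if x < 5 else (x - 5) * 0.5) for x in range(11)]
polyline = douglas_peucker(dense_boundary, eps=0.1)
print(polyline)  # three vertices: the bend is kept, the straight runs collapse
```

A lane graph as in the abstract would then collect one such polyline per detected lane boundary.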
  • Publication number: 20190147250
    Abstract: Systems and methods for performing semantic segmentation of three-dimensional data are provided. In one example embodiment, a computing system can be configured to obtain sensor data including three-dimensional data associated with an environment. The three-dimensional data can include a plurality of points and can be associated with one or more times. The computing system can be configured to determine data indicative of a two-dimensional voxel representation associated with the environment based at least in part on the three-dimensional data. The computing system can be configured to determine a classification for each point of the plurality of points within the three-dimensional data based at least in part on the two-dimensional voxel representation associated with the environment and a machine-learned semantic segmentation model. The computing system can be configured to initiate one or more actions based at least in part on the per-point classifications.
    Type: Application
    Filed: September 6, 2018
    Publication date: May 16, 2019
    Inventors: Chris Jia-Han Zhang, Wenjie Luo, Raquel Urtasun
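The representation step can be sketched directly: 3D points are collapsed into a 2D (x, y) voxel grid, a decision is made per voxel, and each point inherits its voxel's class. The per-voxel height rule below is a hypothetical stand-in for the machine-learned semantic segmentation model.

```python
import numpy as np

# Three 3D points; the first two fall in the same (x, y) voxel.
points = np.array([
    [0.2, 0.1, 0.0],
    [0.3, 0.2, 1.5],
    [2.1, 2.2, 0.3],
])
voxel_size = 1.0
ij = np.floor(points[:, :2] / voxel_size).astype(int)  # drop z: 2D voxel index

# Hypothetical per-voxel "model": record the max height seen in each voxel,
# then label the voxel 1 if that height exceeds 1 m.
max_height = {}
for (i, j), z in zip(map(tuple, ij), points[:, 2]):
    max_height[(i, j)] = max(max_height.get((i, j), 0.0), z)

# Read the voxel decision back out to get a per-point classification.
per_point = [int(max_height[tuple(v)] > 1.0) for v in ij]
print(per_point)  # [1, 1, 0]
```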
  • Publication number: 20190147610
    Abstract: Systems and methods for detecting and tracking objects are provided. In one example, a computer-implemented method includes receiving sensor data from one or more sensors. The method includes inputting the sensor data to one or more machine-learned models including one or more first neural networks configured to detect one or more objects based at least in part on the sensor data and one or more second neural networks configured to track the one or more objects over a sequence of sensor data. The method includes generating, as an output of the one or more first neural networks, a 3D bounding box and detection score for a plurality of object detections. The method includes generating, as an output of the one or more second neural networks, a matching score associated with pairs of object detections. The method includes determining a trajectory for each object detection.
    Type: Application
    Filed: September 5, 2018
    Publication date: May 16, 2019
    Inventors: Davi Eugenio Nascimento Frossard, Raquel Urtasun
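The final linking step can be sketched as follows. In the patent the matching scores come from the second set of neural networks; here they are given, and a simple greedy assignment (an assumption, in place of whatever solver the patent uses) links detections across a pair of frames into trajectories.

```python
# Pairwise matching scores between detections in frame 0 (a*) and frame 1 (b*).
match_scores = {
    ("a0", "b0"): 0.9, ("a0", "b1"): 0.2,
    ("a1", "b0"): 0.3, ("a1", "b1"): 0.8,
}

def link(scores):
    """Greedily pair detections by descending matching score."""
    used_a, used_b, pairs = set(), set(), []
    for (a, b), s in sorted(scores.items(), key=lambda kv: -kv[1]):
        if a not in used_a and b not in used_b:
            pairs.append((a, b, s))
            used_a.add(a)
            used_b.add(b)
    return pairs

trajectories = link(match_scores)
print(trajectories)  # [('a0', 'b0', 0.9), ('a1', 'b1', 0.8)]
```

Chaining such links over a longer sequence of sensor data yields one trajectory per detected object, as in the abstract.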
  • Publication number: 20190147254
    Abstract: Systems and methods for facilitating communication with autonomous vehicles are provided. In one example embodiment, a computing system can obtain a first type of sensor data (e.g., camera image data) associated with a surrounding environment of an autonomous vehicle and/or a second type of sensor data (e.g., LIDAR data) associated with the surrounding environment of the autonomous vehicle. The computing system can generate overhead image data indicative of at least a portion of the surrounding environment of the autonomous vehicle based at least in part on the first and/or second types of sensor data. The computing system can determine one or more lane boundaries within the surrounding environment of the autonomous vehicle based at least in part on the overhead image data indicative of at least the portion of the surrounding environment of the autonomous vehicle and a machine-learned lane boundary detection model.
    Type: Application
    Filed: September 5, 2018
    Publication date: May 16, 2019
    Inventors: Min Bai, Gellert Sandor Mattyus, Namdar Homayounfar, Shenlong Wang, Shrinidhi Kowshika Lakshmikanth, Raquel Urtasun
  • Publication number: 20190147372
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for object detection, tracking, and motion prediction are provided. For example, the disclosed technology can include receiving sensor data including information based on sensor outputs associated with detection of objects in an environment over one or more time intervals by one or more sensors. The operations can include generating, based on the sensor data, an input representation of the objects. The input representation can include a temporal dimension and spatial dimensions. The operations can include determining, based on the input representation and a machine-learned model, detected object classes of the objects, locations of the objects over the one or more time intervals, or predicted paths of the objects. Furthermore, the operations can include generating, based on the input representation and the machine-learned model, an output including bounding shapes corresponding to the objects.
    Type: Application
    Filed: September 7, 2018
    Publication date: May 16, 2019
    Inventors: Wenjie Luo, Bin Yang, Raquel Urtasun
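The input representation with one temporal and two spatial dimensions can be sketched as stacked bird's-eye-view rasters: each sweep's points are binned into a 2D occupancy grid, and grids from successive time intervals are stacked along a leading time axis. Grid size, extent, and the point data are illustrative.

```python
import numpy as np

def rasterize(points, grid=8, extent=4.0):
    """Bin (x, y) points into a grid x grid occupancy image."""
    img = np.zeros((grid, grid), dtype=np.float32)
    for x, y in points:
        i = int((x + extent) / (2 * extent) * grid)
        j = int((y + extent) / (2 * extent) * grid)
        if 0 <= i < grid and 0 <= j < grid:
            img[i, j] = 1.0
    return img

# One detected object moving along x over three time intervals.
sweeps = [[(0.0, 0.0)], [(0.5, 0.0)], [(1.0, 0.0)]]
tensor = np.stack([rasterize(p) for p in sweeps])  # shape: (time, x, y)
print(tensor.shape)  # (3, 8, 8)
```

A single machine-learned model consuming this tensor can then read motion from the temporal axis and layout from the spatial axes, which is what lets the abstract's model produce detections, tracks, and predicted paths jointly.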
  • Publication number: 20190146497
    Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
    Type: Application
    Filed: February 7, 2018
    Publication date: May 16, 2019
    Inventors: Raquel Urtasun, Mengye Ren, Andrei Pokrovsky, Bin Yang
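The skip-sparse-regions idea can be sketched without any neural network: gather only the locations known to contain data, apply the kernel there, and leave the rest of the output untouched. The image, kernel, and active set below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
img = np.zeros((16, 16))
active = [(3, 4), (8, 8), (12, 2)]   # the only non-empty (relevant) locations
for r, c in active:
    img[r, c] = rng.normal()
kernel = rng.normal(size=(3, 3))

# Scatter-form convolution restricted to active inputs: each occupied cell
# contributes to its 3x3 neighborhood, so empty regions cost no work at all.
out = np.zeros_like(img)
for r, c in active:
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < 16 and 0 <= cc < 16:
                out[rr, cc] += img[r, c] * kernel[1 + dr, 1 + dc]

print(np.count_nonzero(out))  # only 3 small patches were ever touched
```

With 3 active cells out of 256, the sketch does 27 multiply-adds instead of the dense cost over the full grid, which is the saving the abstract describes.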
  • Publication number: 20190147253
    Abstract: Systems and methods for facilitating communication with autonomous vehicles are provided. In one example embodiment, a computing system can obtain rasterized LIDAR data associated with a surrounding environment of an autonomous vehicle. The rasterized LIDAR data can include LIDAR image data that is rasterized from a LIDAR point cloud. The computing system can access data indicative of a machine-learned lane boundary detection model. The computing system can input the rasterized LIDAR data associated with the surrounding environment of the autonomous vehicle into the machine-learned lane boundary detection model. The computing system can obtain an output from the machine-learned lane boundary detection model. The output can be indicative of one or more lane boundaries within the surrounding environment of the autonomous vehicle.
    Type: Application
    Filed: September 5, 2018
    Publication date: May 16, 2019
    Inventors: Min Bai, Gellert Sandor Mattyus, Namdar Homayounfar, Shenlong Wang, Shrinidhi Kowshika Lakshmikanth, Raquel Urtasun, Wei-Chiu Ma
  • Publication number: 20190145765
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for object detection are provided. For example, sensor data associated with objects can be received. Segments encompassing areas associated with the objects can be generated based on the sensor data and a machine-learned model. A position, a shape, and an orientation of each of the objects in each of the one or more segments can be determined over a plurality of time intervals. Further, a predicted position, a predicted shape, and a predicted orientation of each of the objects at a last one of the plurality of time intervals can be determined. Furthermore, an output based at least in part on the predicted position, the predicted shape, or the predicted orientation of each of the one or more objects at the last one of the plurality of time intervals can be generated.
    Type: Application
    Filed: September 17, 2018
    Publication date: May 16, 2019
    Inventors: Wenjie Luo, Bin Yang, Raquel Urtasun
  • Publication number: 20190147335
    Abstract: Systems and methods are provided for machine-learned models including convolutional neural networks that generate predictions using continuous convolution techniques. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can perform, with a machine-learned convolutional neural network, one or more convolutions over input data using a continuous filter relative to a support domain associated with the input data, and receive a prediction from the machine-learned convolutional neural network. A machine-learned convolutional neural network in some examples includes at least one continuous convolution layer configured to perform convolutions over input data with a parametric continuous kernel.
    Type: Application
    Filed: October 30, 2018
    Publication date: May 16, 2019
    Inventors: Shenlong Wang, Wei-Chiu Ma, Shun Da Suo, Raquel Urtasun, Ming Liang
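A continuous convolution with a parametric kernel can be sketched in a few lines: instead of indexing a fixed grid of weights, the kernel weight for each neighbor is computed by a small function of the continuous offset between points. The tiny MLP, radius, and point cloud below are illustrative, not the patent's parametric continuous kernel.

```python
import numpy as np

rng = np.random.default_rng(3)
points = rng.uniform(-1, 1, size=(20, 2))   # unordered support domain (no grid)
feats = rng.normal(size=(20, 4))            # per-point input features

# Tiny MLP mapping a continuous 2D offset to a scalar kernel weight.
W1 = rng.normal(size=(8, 2))
W2 = rng.normal(size=(1, 8))

def kernel(offset):
    return float(W2 @ np.tanh(W1 @ offset))

def continuous_conv(i, radius=0.5):
    """Output feature at point i: kernel-weighted sum over continuous neighbors."""
    out = np.zeros(4)
    for j in range(len(points)):
        offset = points[j] - points[i]
        if np.linalg.norm(offset) <= radius:
            out += kernel(offset) * feats[j]
    return out

y = continuous_conv(0)
print(y.shape)  # (4,)
```

Because the kernel is a function of the offset rather than a lookup table, the same layer handles irregular supports such as LiDAR point clouds.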
  • Publication number: 20190147320
    Abstract: A method includes obtaining training data including one or more images and one or more ground truth labels of the one or more images, and training an adversarial network including a siamese discriminator network and a generator network. The training includes generating, with the generator network, one or more generated images based on the one or more images; processing, with the siamese discriminator network, at least one pair of images to determine a prediction of whether the at least one pair of images includes the one or more generated images; and modifying, using a loss function of the adversarial network that depends on the ground truth label and the prediction, one or more parameters of the generator network.
    Type: Application
    Filed: November 15, 2018
    Publication date: May 16, 2019
    Inventors: Gellert Sandor Mattyus, Raquel Urtasun Sotil
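The siamese part of the discriminator can be sketched as follows: both images of a pair pass through the same embedding weights, and the pair is scored from the embedding distance. Shapes, the embedding, and the distance-based score are all illustrative assumptions, not the patent's discriminator.

```python
import numpy as np

rng = np.random.default_rng(4)
W = rng.normal(size=(8, 16))   # embedding weights shared by both branches

def embed(img_vec):
    return np.tanh(W @ img_vec)

def discriminate(img_a, img_b):
    """Pair score: larger embedding distance = pair judged more likely
    to contain a generated image."""
    return float(np.linalg.norm(embed(img_a) - embed(img_b)))

real = rng.normal(size=16)
generated = real + rng.normal(scale=0.5, size=16)  # stand-in generator output

same_pair = discriminate(real, real)       # identical pair scores 0
mixed_pair = discriminate(real, generated)
print(same_pair, mixed_pair)
```

During adversarial training, a loss on these pair scores (together with the ground-truth labels) would update the generator's parameters, as in the abstract.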
  • Publication number: 20190145784
    Abstract: Systems and methods for autonomous vehicle localization are provided. In one example embodiment, a computer-implemented method includes obtaining, by a computing system that includes one or more computing devices onboard an autonomous vehicle, sensor data indicative of one or more geographic cues within the surrounding environment of the autonomous vehicle. The method includes obtaining, by the computing system, sparse geographic data associated with the surrounding environment of the autonomous vehicle. The sparse geographic data is indicative of the one or more geographic cues. The method includes determining, by the computing system, a location of the autonomous vehicle within the surrounding environment based at least in part on the sensor data indicative of the one or more geographic cues and the sparse geographic data. The method includes outputting, by the computing system, data indicative of the location of the autonomous vehicle within the surrounding environment.
    Type: Application
    Filed: September 6, 2018
    Publication date: May 16, 2019
    Inventors: Wei-Chiu Ma, Shenlong Wang, Namdar Homayounfar, Shrinidhi Kowshika Lakshmikanth, Raquel Urtasun
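The localization idea can be reduced to a toy calculation: the sparse map stores only the positions of a few geographic cues, and given vehicle-relative observations of those same cues, the vehicle location is the translation that aligns observations to the map. The cue coordinates below are invented, and correspondence between observed and mapped cues is assumed known.

```python
import numpy as np

map_cues = np.array([[10.0, 5.0], [12.0, 9.0], [7.0, 8.0]])  # cue positions in the map
observed = np.array([[2.0, -1.0], [4.0, 3.0], [-1.0, 2.0]])  # same cues, vehicle frame

# Each matched cue gives one estimate map = vehicle + observed; averaging the
# estimates is the least-squares translation over the matches.
vehicle_xy = (map_cues - observed).mean(axis=0)
print(vehicle_xy)  # [8. 6.]
```

Because only cue positions are stored, the map stays sparse, which is the point of the abstract.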
  • Publication number: 20190138024
    Abstract: A computer system including one or more processors programmed or configured to receive image data associated with an image of one or more roads, where the one or more roads comprise one or more lanes, determine a lane classification of the one or more lanes based on the image data associated with the image of the one or more roads, and provide lane classification data associated with the lane classification of the one or more lanes.
    Type: Application
    Filed: November 7, 2018
    Publication date: May 9, 2019
    Inventors: Justin Jin-Wei Liang, Raquel Urtasun Sotil
  • Patent number: 9430847
    Abstract: A method determines a motion between a first and second coordinate system, by first extracting a first set of primitives from a 3D image acquired in the first coordinate system from an environment, and extracting a second set of primitives from a 3D image acquired in the second coordinate system from the environment. Motion hypotheses are generated for different combinations of the first and second sets of primitives using a RANdom SAmple Consensus procedure. Each motion hypothesis is scored using a scoring function learned using parameter learning techniques. Then, a best motion hypothesis is selected as the motion between the first and second coordinate system.
    Type: Grant
    Filed: June 12, 2014
    Date of Patent: August 30, 2016
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Srikumar Ramalingam, Yuichi Taguchi, Michel Antunes, Raquel Urtasun
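The RANSAC loop from this abstract can be sketched in reduced form: a 2D translation in place of the general 3D motion between primitive sets, and a plain inlier count in place of the learned scoring function. The data and thresholds are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(5)
src = rng.uniform(size=(30, 2))
true_t = np.array([0.3, -0.2])
dst = src + true_t
dst[:5] += rng.uniform(-1, 1, size=(5, 2))  # corrupt 5 correspondences (outliers)

best_t, best_score = None, -1
for _ in range(50):
    i = rng.integers(len(src))   # minimal sample: one correspondence
    t = dst[i] - src[i]          # motion hypothesis from that sample
    # Score the hypothesis; the patent learns this scoring function,
    # here it is just the number of correspondences the motion explains.
    score = int((np.linalg.norm(src + t - dst, axis=1) < 1e-6).sum())
    if score > best_score:
        best_t, best_score = t, score

print(best_t, best_score)  # best hypothesis recovers the true motion
```

The best-scoring hypothesis is selected as the motion between the two coordinate systems, matching the final step of the abstract.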
  • Publication number: 20150363938
    Abstract: A method determines a motion between a first and second coordinate system, by first extracting a first set of primitives from a 3D image acquired in the first coordinate system from an environment, and extracting a second set of primitives from a 3D image acquired in the second coordinate system from the environment. Motion hypotheses are generated for different combinations of the first and second sets of primitives using a RANdom SAmple Consensus procedure. Each motion hypothesis is scored using a scoring function learned using parameter learning techniques. Then, a best motion hypothesis is selected as the motion between the first and second coordinate system.
    Type: Application
    Filed: June 12, 2014
    Publication date: December 17, 2015
    Inventors: Srikumar Ramalingam, Yuichi Taguchi, Michel Antunes, Raquel Urtasun