Patents by Inventor Shenlong Wang
Shenlong Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250130909
Abstract: The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned model can predict one or more dropout probabilities for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
Type: Application
Filed: December 31, 2024
Publication date: April 24, 2025
Inventors: Sivabalan Manivasagam, Shenlong Wang, Wei-Chiu Ma, Kelvin Ka Wing Wong, Wenyuan Zeng, Raquel Urtasun
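The two-stage pipeline described in the abstract (physics-based ray casting followed by a learned per-point dropout) can be sketched as follows. This is a minimal numpy illustration: the ray caster and the logistic dropout model are toy stand-ins I invented for the example, not the patent's actual map renderer or deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

def raycast_point_cloud(n_rays: int) -> np.ndarray:
    """Stand-in for physics-based ray casting against a 3D map:
    returns an (n_rays, 3) point cloud with synthetic hit ranges."""
    azimuth = np.linspace(0.0, 2 * np.pi, n_rays, endpoint=False)
    ranges = 5.0 + 20.0 * rng.random(n_rays)       # hit distances in metres
    return np.stack([ranges * np.cos(azimuth),
                     ranges * np.sin(azimuth),
                     np.zeros(n_rays)], axis=1)

def dropout_probability(points: np.ndarray) -> np.ndarray:
    """Toy stand-in for the machine-learned model: farther points are
    more likely to be dropped (the real model is a deep network)."""
    ranges = np.linalg.norm(points, axis=1)
    return 1.0 / (1.0 + np.exp(-(ranges - 20.0) / 4.0))  # logistic in range

def simulate_lidar(n_rays: int = 1024) -> np.ndarray:
    initial = raycast_point_cloud(n_rays)           # physics-based stage
    p_drop = dropout_probability(initial)           # learned stage
    keep = rng.random(n_rays) >= p_drop             # sample per-point dropout
    return initial[keep]                            # adjusted point cloud

cloud = simulate_lidar()
print(cloud.shape)
```

Sampling the dropout mask, rather than thresholding, is what lets the adjusted cloud reproduce the stochastic ray misses of a real sensor.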
-
Patent number: 12260483
Abstract: Described herein are techniques for learning neural reflectance shaders from images. A set of one or more machine learning models can be trained to optimize an illumination latent code and a set of reflectance latent codes for an object within a set of input images. A shader can then be generated based on a machine learning model of the one or more machine learning models. The shader is configured to sample the illumination latent code and the set of reflectance latent codes for the object. A 3D representation of the object can be rendered using the generated shader.
Type: Grant
Filed: January 30, 2024
Date of Patent: March 25, 2025
Assignee: Intel Corporation
Inventors: Benjamin Ummenhofer, Shenlong Wang, Sanskar Agrawal, Yixing Lao, Kai Zhang, Stephan Richter, Vladlen Koltun
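The key structure here is a shader that samples a scene-wide illumination latent code plus a per-material reflectance latent code and maps them, together with geometry, to a color. A minimal sketch, assuming tiny fixed-weight networks and made-up latent sizes (the real shader and its training are learned from images, per the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent codes as recovered by training (sizes are assumptions).
illumination_code = rng.standard_normal(8)        # one code for the scene
reflectance_codes = rng.standard_normal((4, 8))   # one code per material

# Tiny fixed MLP standing in for the generated shader.
W1 = rng.standard_normal((8 + 8 + 3 + 3, 32)) * 0.1
W2 = rng.standard_normal((32, 3)) * 0.1

def shade(normal, view_dir, material_id):
    """The shader samples the illumination code and the material's
    reflectance code, then predicts an RGB value for this surface point."""
    latent = np.concatenate([illumination_code, reflectance_codes[material_id]])
    x = np.concatenate([latent, normal, view_dir])
    h = np.tanh(x @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ W2)))        # RGB in [0, 1]

rgb = shade(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0]), material_id=2)
print(rgb)
```

Evaluating `shade` per pixel over an object's surface is what "rendering the 3D representation using the generated shader" amounts to in this sketch.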
-
Patent number: 12248075
Abstract: Systems and methods for identifying travel way features in real time are provided. A method can include receiving two-dimensional and three-dimensional data associated with the surrounding environment of a vehicle. The method can include providing the two-dimensional data as one or more inputs into a machine-learned segmentation model to output a two-dimensional segmentation. The method can include fusing the two-dimensional segmentation with the three-dimensional data to generate a three-dimensional segmentation. The method can include storing the three-dimensional segmentation in a classification database with data indicative of one or more previously generated three-dimensional segmentations. The method can include providing one or more datapoint sets from the classification database as one or more inputs into a machine-learned enhancing model to obtain an enhanced three-dimensional segmentation.
Type: Grant
Filed: May 23, 2024
Date of Patent: March 11, 2025
Assignee: AURORA OPERATIONS, INC.
Inventors: Raquel Urtasun, Min Bai, Shenlong Wang
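The fusion step (attaching a 2D segmentation class to each 3D point) is typically done by projecting the 3D points into the camera image. A minimal sketch assuming a pinhole camera with made-up intrinsics; the segmentation and enhancing models themselves are elided:

```python
import numpy as np

def fuse_segmentation(points_3d, labels_2d, fx, fy, cx, cy):
    """Project 3D points into the camera image (pinhole model assumed)
    and attach the 2D segmentation class at each projected pixel."""
    h, w = labels_2d.shape
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    valid = z > 0                                   # in front of the camera
    u = np.clip((fx * x[valid] / z[valid] + cx).astype(int), 0, w - 1)
    v = np.clip((fy * y[valid] / z[valid] + cy).astype(int), 0, h - 1)
    labels = labels_2d[v, u]                        # per-point class id
    return np.column_stack([points_3d[valid], labels])

labels_2d = np.zeros((4, 4), dtype=int); labels_2d[:, 2:] = 1   # toy 2D mask
pts = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0], [0.0, 0.0, -1.0]])
seg = fuse_segmentation(pts, labels_2d, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
print(seg)
```

Each row of `seg` is a labeled 3D point; accumulating such rows over time is what would populate the classification database the abstract describes.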
-
Patent number: 12222832
Abstract: The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned model can predict one or more dropout probabilities for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
Type: Grant
Filed: September 13, 2023
Date of Patent: February 11, 2025
Assignee: AURORA OPERATIONS, INC.
Inventors: Sivabalan Manivasagam, Shenlong Wang, Wei-Chiu Ma, Kelvin Ka Wing Wong, Wenyuan Zeng, Raquel Urtasun
-
Patent number: 12198358
Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices associated with motion flow estimation are provided. For example, scene data including representations of an environment over a first set of time intervals can be accessed. Extracted visual cues can be generated based on the representations and machine-learned feature extraction models. At least one of the machine-learned feature extraction models can be configured to generate a portion of the extracted visual cues based on a first set of the representations of the environment from a first perspective and a second set of the representations of the environment from a second perspective. The extracted visual cues can be encoded using energy functions. Three-dimensional motion estimates of object instances at time intervals subsequent to the first set of time intervals can be determined based on the energy functions and machine-learned inference models.
Type: Grant
Filed: October 10, 2022
Date of Patent: January 14, 2025
Assignee: AURORA OPERATIONS, INC.
Inventors: Raquel Urtasun, Wei-Chiu Ma, Shenlong Wang, Yuwen Xiong, Rui Hu
-
Patent number: 12198357
Abstract: Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image can be used to generate multiple scaled images. The multiple scaled images are input into a feature net, which outputs feature data for the multiple scaled images. An attention net is used to generate an attention map from the input image. The attention map assigns emphasis as a soft distribution to different scales based on texture analysis. The feature data and the attention data can be combined through a multiplication process and then summed to generate dense features for comparison.
Type: Grant
Filed: September 12, 2023
Date of Patent: January 14, 2025
Assignee: Snap Inc.
Inventors: Shenlong Wang, Linjie Luo, Ning Zhang, Jia Li
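The combine step in the abstract (per-scale feature maps, multiplied by a soft attention distribution over scales, then summed) can be sketched as below. The hand-crafted "feature net" and the random attention logits are placeholders I made up; in the patent both are convolutional networks, and image resizing per scale is elided here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_net(image: np.ndarray) -> np.ndarray:
    """Stand-in for the convolutional feature net: per-pixel features
    (input: H x W, output: H x W x C)."""
    return np.stack([image, image ** 2, np.gradient(image, axis=0)], axis=-1)

def dense_features(image: np.ndarray, n_scales: int = 3) -> np.ndarray:
    h, w = image.shape
    # Feature maps for each scaled image (resizing elided: full res reused).
    feats = np.stack([feature_net(image / (2 ** s)) for s in range(n_scales)])
    # Attention net output: per-pixel soft distribution over scales.
    logits = rng.standard_normal((n_scales, h, w))
    attn = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
    # Multiply feature data by attention, then sum over the scale axis.
    return (feats * attn[..., None]).sum(axis=0)    # H x W x C

img = rng.random((8, 8))
out = dense_features(img)
print(out.shape)
```

Because the attention weights form a softmax over scales at every pixel, the summed result stays a convex blend of the per-scale features, which is what makes the resulting dense features stable for pixel matching.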
-
Publication number: 20250013235
Abstract: Systems and methods for the simultaneous localization and mapping of autonomous vehicle systems are provided. A method includes receiving a plurality of input image frames from a plurality of asynchronous image devices triggered at different times to capture the plurality of input image frames. The method includes identifying reference image frame(s) corresponding to a respective input image frame by matching the field of view of the respective input image frame to the fields of view of the reference image frame(s). The method includes determining association(s) between the respective input image frame and three-dimensional map point(s) based on a comparison of the respective input image frame to the one or more reference image frames. The method includes generating an estimated pose for the autonomous vehicle based on the one or more three-dimensional map points. The method includes updating a continuous-time motion model of the autonomous vehicle based on the estimated pose.
Type: Application
Filed: September 20, 2024
Publication date: January 9, 2025
Inventors: Anqi Joyce Yang, Can Cui, Ioan Andrei Bârsan, Shenlong Wang, Raquel Urtasun
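A continuous-time motion model is what lets cameras triggered at different times each be assigned a pose: the trajectory is a function of time that can be evaluated at any trigger timestamp. A minimal sketch using linear interpolation between timestamped pose knots (the real model and its SE(3) parameterization are more sophisticated; the knot values here are invented):

```python
import numpy as np

# Continuous-time motion model: timestamped pose knots, queried by linear
# interpolation so each asynchronously triggered camera gets its own pose.
knots_t = np.array([0.0, 0.1, 0.2, 0.3])          # seconds
knots_pose = np.array([[0.0, 0.0, 0.00],
                       [1.0, 0.1, 0.05],
                       [2.0, 0.3, 0.10],
                       [3.0, 0.6, 0.15]])          # x, y, heading per knot

def pose_at(t: float) -> np.ndarray:
    """Evaluate the motion model at an arbitrary camera trigger time t."""
    x = np.interp(t, knots_t, knots_pose[:, 0])
    y = np.interp(t, knots_t, knots_pose[:, 1])
    theta = np.interp(t, knots_t, knots_pose[:, 2])
    return np.array([x, y, theta])

print(pose_at(0.15))   # a camera triggered halfway between two knots
```

Updating the model, as the abstract's last step describes, would amount to adjusting the knot poses so that the interpolated trajectory agrees with each newly estimated pose.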
-
Publication number: 20240427022
Abstract: Systems and methods for identifying travel way features in real time are provided. A method can include receiving two-dimensional and three-dimensional data associated with the surrounding environment of a vehicle. The method can include providing the two-dimensional data as one or more inputs into a machine-learned segmentation model to output a two-dimensional segmentation. The method can include fusing the two-dimensional segmentation with the three-dimensional data to generate a three-dimensional segmentation. The method can include storing the three-dimensional segmentation in a classification database with data indicative of one or more previously generated three-dimensional segmentations. The method can include providing one or more datapoint sets from the classification database as one or more inputs into a machine-learned enhancing model to obtain an enhanced three-dimensional segmentation.
Type: Application
Filed: May 23, 2024
Publication date: December 26, 2024
Inventors: Raquel Urtasun, Min Bai, Shenlong Wang
-
Patent number: 12124269
Abstract: Systems and methods for the simultaneous localization and mapping of autonomous vehicle systems are provided. A method includes receiving a plurality of input image frames from a plurality of asynchronous image devices triggered at different times to capture the plurality of input image frames. The method includes identifying reference image frame(s) corresponding to a respective input image frame by matching the field of view of the respective input image frame to the fields of view of the reference image frame(s). The method includes determining association(s) between the respective input image frame and three-dimensional map point(s) based on a comparison of the respective input image frame to the one or more reference image frames. The method includes generating an estimated pose for the autonomous vehicle based on the one or more three-dimensional map points. The method includes updating a continuous-time motion model of the autonomous vehicle based on the estimated pose.
Type: Grant
Filed: November 1, 2021
Date of Patent: October 22, 2024
Assignee: AURORA OPERATIONS, INC.
Inventors: Anqi Joyce Yang, Can Cui, Ioan Andrei Bârsan, Shenlong Wang, Raquel Urtasun
-
Patent number: 12106435
Abstract: The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned geometry model can predict one or more adjusted depths for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
Type: Grant
Filed: June 30, 2023
Date of Patent: October 1, 2024
Assignee: AURORA OPERATIONS, INC.
Inventors: Sivabalan Manivasagam, Shenlong Wang, Wei-Chiu Ma, Raquel Urtasun
-
Patent number: 12032067
Abstract: Systems and methods for identifying travel way features in real time are provided. A method can include receiving two-dimensional and three-dimensional data associated with the surrounding environment of a vehicle. The method can include providing the two-dimensional data as one or more inputs into a machine-learned segmentation model to output a two-dimensional segmentation. The method can include fusing the two-dimensional segmentation with the three-dimensional data to generate a three-dimensional segmentation. The method can include storing the three-dimensional segmentation in a classification database with data indicative of one or more previously generated three-dimensional segmentations. The method can include providing one or more datapoint sets from the classification database as one or more inputs into a machine-learned enhancing model to obtain an enhanced three-dimensional segmentation.
Type: Grant
Filed: December 10, 2021
Date of Patent: July 9, 2024
Assignee: UATC, LLC
Inventors: Raquel Urtasun, Min Bai, Shenlong Wang
-
Publication number: 20240221277
Abstract: Described herein are techniques for learning neural reflectance shaders from images. A set of one or more machine learning models can be trained to optimize an illumination latent code and a set of reflectance latent codes for an object within a set of input images. A shader can then be generated based on a machine learning model of the one or more machine learning models. The shader is configured to sample the illumination latent code and the set of reflectance latent codes for the object. A 3D representation of the object can be rendered using the generated shader.
Type: Application
Filed: January 30, 2024
Publication date: July 4, 2024
Applicant: Intel Corporation
Inventors: Benjamin Ummenhofer, Shenlong Wang, Sanskar Agrawal, Yixing Lao, Kai Zhang, Stephan Richter, Vladlen Koltun
-
Patent number: 11989847
Abstract: The present disclosure provides systems and methods for generating photorealistic image simulation data with geometry-aware composition for testing autonomous vehicles. In particular, aspects of the present disclosure can involve the intake of data on an environment and output of augmented data on the environment with the photorealistic addition of an object. As one example, data on the driving experiences of a self-driving vehicle can be augmented to add another vehicle into the collected environment data. The augmented data may then be used to test safety features of software for a self-driving vehicle.
Type: Grant
Filed: February 10, 2022
Date of Patent: May 21, 2024
Assignee: UATC, LLC
Inventors: Frieda Rong, Yun Chen, Shivam Duggal, Shenlong Wang, Xinchen Yan, Sivabalan Manivasagam, Ersin Yumer, Raquel Urtasun
-
Patent number: 11972606
Abstract: Systems and methods for facilitating communication with autonomous vehicles are provided. In one example embodiment, a computing system can obtain a first type of sensor data (e.g., camera image data) associated with a surrounding environment of an autonomous vehicle and/or a second type of sensor data (e.g., LIDAR data) associated with the surrounding environment of the autonomous vehicle. The computing system can generate overhead image data indicative of at least a portion of the surrounding environment of the autonomous vehicle based at least in part on the first and/or second types of sensor data. The computing system can determine one or more lane boundaries within the surrounding environment of the autonomous vehicle based at least in part on the overhead image data indicative of at least the portion of the surrounding environment of the autonomous vehicle and a machine-learned lane boundary detection model.
Type: Grant
Filed: May 8, 2023
Date of Patent: April 30, 2024
Assignee: UATC, LLC
Inventors: Min Bai, Gellert Sandor Mattyus, Namdar Homayounfar, Shenlong Wang, Shrinidihi Kowshika Lakshmikanth, Raquel Urtasun
-
Patent number: 11972519
Abstract: Described herein are techniques for learning neural reflectance shaders from images. A set of one or more machine learning models can be trained to optimize an illumination latent code and a set of reflectance latent codes for an object within a set of input images. A shader can then be generated based on a machine learning model of the one or more machine learning models. The shader is configured to sample the illumination latent code and the set of reflectance latent codes for the object. A 3D representation of the object can be rendered using the generated shader.
Type: Grant
Filed: June 24, 2022
Date of Patent: April 30, 2024
Assignee: Intel Corporation
Inventors: Benjamin Ummenhofer, Shenlong Wang, Sanskar Agrawal, Yixing Lao, Kai Zhang, Stephan Richter, Vladlen Koltun
-
Patent number: 11880771
Abstract: Systems and methods are provided for machine-learned models including convolutional neural networks that generate predictions using continuous convolution techniques. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can perform, with a machine-learned convolutional neural network, one or more convolutions over input data using a continuous filter relative to a support domain associated with the input data, and receive a prediction from the machine-learned convolutional neural network. A machine-learned convolutional neural network in some examples includes at least one continuous convolution layer configured to perform convolutions over input data with a parametric continuous kernel.
Type: Grant
Filed: January 12, 2023
Date of Patent: January 23, 2024
Assignee: UATC, LLC
Inventors: Shenlong Wang, Wei-Chiu Ma, Shun Da Suo, Raquel Urtasun, Ming Liang
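The idea of a parametric continuous kernel is that, instead of a fixed grid of weights, a small network maps each continuous offset between a point and its neighbor to a weight. A minimal sketch with a fixed-weight toy kernel MLP and brute-force nearest neighbors (layer sizes and the single-channel weight are my simplifications, not the patent's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Kernel MLP: maps a continuous 3D offset to a scalar weight.
# (Per-channel weight matrices are omitted for brevity.)
W1 = rng.standard_normal((3, 16)) * 0.5
W2 = rng.standard_normal((16, 1)) * 0.5

def kernel(offsets: np.ndarray) -> np.ndarray:
    """Parametric continuous kernel: weight = MLP(support-point offset)."""
    return np.tanh(offsets @ W1) @ W2               # (k, 1)

def continuous_conv(points, features, k=4):
    """For each point, weight its k nearest neighbours' features by the
    kernel evaluated at their continuous offsets, then sum."""
    out = np.empty_like(features)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = np.argsort(d)[:k]                    # support domain
        w = kernel(points[nbrs] - p)                # (k, 1) continuous weights
        out[i] = (w * features[nbrs]).sum(axis=0)
    return out

pts = rng.random((32, 3))
feats = rng.random((32, 2))
out = continuous_conv(pts, feats)
print(out.shape)
```

Because the kernel is a function of arbitrary real-valued offsets rather than integer grid positions, the same layer works on irregular data such as LiDAR point clouds, where standard grid convolutions do not apply.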
-
Patent number: 11861854
Abstract: Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image can be used to generate multiple scaled images. The multiple scaled images are input into a feature net, which outputs feature data for the multiple scaled images. An attention net is used to generate an attention map from the input image. The attention map assigns emphasis as a soft distribution to different scales based on texture analysis. The feature data and the attention data can be combined through a multiplication process and then summed to generate dense features for comparison.
Type: Grant
Filed: May 26, 2022
Date of Patent: January 2, 2024
Assignee: Snap Inc.
Inventors: Shenlong Wang, Linjie Luo, Ning Zhang, Jia Li
-
Publication number: 20230419512
Abstract: Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image can be used to generate multiple scaled images. The multiple scaled images are input into a feature net, which outputs feature data for the multiple scaled images. An attention net is used to generate an attention map from the input image. The attention map assigns emphasis as a soft distribution to different scales based on texture analysis. The feature data and the attention data can be combined through a multiplication process and then summed to generate dense features for comparison.
Type: Application
Filed: September 12, 2023
Publication date: December 28, 2023
Inventors: Shenlong Wang, Linjie Luo, Ning Zhang, Jia Li
-
Publication number: 20230418717
Abstract: The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned model can predict one or more dropout probabilities for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
Type: Application
Filed: September 13, 2023
Publication date: December 28, 2023
Inventors: Sivabalan Manivasagam, Shenlong Wang, Wei-Chiu Ma, Kelvin Ka Wing Wong, Wenyuan Zeng, Raquel Urtasun
-
Patent number: 11820397
Abstract: A computer-implemented method for localizing a vehicle can include accessing, by a computing system comprising one or more computing devices, a machine-learned retrieval model that has been trained using a ground truth dataset comprising a plurality of pre-localized sensor observations. Each of the plurality of pre-localized sensor observations has a predetermined pose value associated with a previously obtained sensor reading representation. The method also includes obtaining, by the computing system, a current sensor reading representation obtained by one or more sensors located at the vehicle. The method also includes inputting, by the computing system, the current sensor reading representation into the machine-learned retrieval model.
Type: Grant
Filed: September 11, 2020
Date of Patent: November 21, 2023
Assignee: UATC, LLC
Inventors: Julieta Martinez Covarrubias, Raquel Urtasun, Shenlong Wang, Ioan Andrei Barsan, Gellert Sandor Mattyus, Alexandre Doubov, Hongbo Fan
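Retrieval-based localization of this kind can be sketched as: embed the current sensor reading with the learned model, find the most similar pre-localized observation, and return its predetermined pose. In this toy version the "retrieval model" is plain L2 normalization and the database is random; both are stand-ins I invented, not the patented model.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(reading: np.ndarray) -> np.ndarray:
    """Stand-in for the machine-learned retrieval model's embedding."""
    v = reading.ravel()
    return v / (np.linalg.norm(v) + 1e-9)

# Ground-truth dataset: pre-localized sensor observations, each paired
# with a predetermined pose value.
database = [rng.random((4, 4)) for _ in range(50)]  # toy sensor readings
poses = rng.random((50, 3))                         # (x, y, heading) each
db_embeddings = np.stack([embed(r) for r in database])

def localize(current_reading: np.ndarray) -> np.ndarray:
    """Embed the current reading and return the pose of the most
    similar pre-localized observation (cosine similarity)."""
    q = embed(current_reading)
    best = np.argmax(db_embeddings @ q)
    return poses[best]

# A slightly perturbed copy of observation 7 should retrieve pose 7.
pose = localize(database[7] + 0.01 * rng.random((4, 4)))
print(pose)
```

The learned embedding is what makes this work at scale: readings taken from nearby poses land close together in embedding space, so nearest-neighbor search doubles as localization.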