Patents by Inventor Shenlong Wang

Shenlong Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11972519
    Abstract: Described herein are techniques for learning neural reflectance shaders from images. A set of one or more machine learning models can be trained to optimize an illumination latent code and a set of reflectance latent codes for an object within a set of input images. A shader can then be generated based on a machine learning model of the one or more machine learning models. The shader is configured to sample the illumination latent code and the set of reflectance latent codes for the object. A 3D representation of the object can be rendered using the generated shader.
    Type: Grant
    Filed: June 24, 2022
    Date of Patent: April 30, 2024
    Assignee: Intel Corporation
    Inventors: Benjamin Ummenhofer, Shenlong Wang, Sanskar Agrawal, Yixing Lao, Kai Zhang, Stephan Richter, Vladlen Koltun
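The entry above describes a shader that maps learned latent codes to appearance. As a rough illustration (not the patent's actual implementation), the sketch below uses random stand-in weights for a tiny MLP "shader" that consumes an illumination latent code, a per-material reflectance latent code, and a surface normal, and emits an RGB value; all names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent codes: one illumination code per scene,
# one reflectance code per material on the object.
illum_code = rng.normal(size=16)          # optimized over the input images
refl_codes = rng.normal(size=(4, 16))     # one code per surface material

# Toy "shader" MLP weights (learned in the actual system).
W1 = rng.normal(size=(16 + 16 + 3, 32)) * 0.1
W2 = rng.normal(size=(32, 3)) * 0.1

def shade(material_id, normal):
    """Map (illumination code, reflectance code, surface normal) -> RGB."""
    x = np.concatenate([illum_code, refl_codes[material_id], normal])
    h = np.tanh(x @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ W2)))  # RGB in [0, 1]

rgb = shade(material_id=2, normal=np.array([0.0, 0.0, 1.0]))
print(rgb.shape)  # (3,)
```

Rendering the 3D representation would invoke such a shader at every visible surface point.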
  • Patent number: 11972606
    Abstract: Systems and methods for facilitating communication with autonomous vehicles are provided. In one example embodiment, a computing system can obtain a first type of sensor data (e.g., camera image data) associated with a surrounding environment of an autonomous vehicle and/or a second type of sensor data (e.g., LIDAR data) associated with the surrounding environment of the autonomous vehicle. The computing system can generate overhead image data indicative of at least a portion of the surrounding environment of the autonomous vehicle based at least in part on the first and/or second types of sensor data. The computing system can determine one or more lane boundaries within the surrounding environment of the autonomous vehicle based at least in part on the overhead image data indicative of at least the portion of the surrounding environment of the autonomous vehicle and a machine-learned lane boundary detection model.
    Type: Grant
    Filed: May 8, 2023
    Date of Patent: April 30, 2024
    Assignee: UATC, LLC
    Inventors: Min Bai, Gellert Sandor Mattyus, Namdar Homayounfar, Shenlong Wang, Shrinidihi Kowshika Lakshmikanth, Raquel Urtasun
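A key step in the abstract above is turning sensor data into an overhead image before running the lane boundary detection model. The sketch below shows only that preprocessing step, assuming LiDAR points as input; the rasterization parameters and function name are illustrative, not from the patent.

```python
import numpy as np

def lidar_to_overhead(points, x_range=(0, 40), y_range=(-20, 20), res=0.5):
    """Rasterize LiDAR (x, y, z) points into an overhead occupancy image."""
    h = int((y_range[1] - y_range[0]) / res)
    w = int((x_range[1] - x_range[0]) / res)
    img = np.zeros((h, w), dtype=np.float32)
    xs = ((points[:, 0] - x_range[0]) / res).astype(int)
    ys = ((points[:, 1] - y_range[0]) / res).astype(int)
    ok = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    img[ys[ok], xs[ok]] = 1.0
    return img

pts = np.array([[10.0, 0.0, 0.2], [10.0, 3.5, 0.1], [10.0, -3.5, 0.1]])
bev = lidar_to_overhead(pts)
# `bev` would then be fed to the machine-learned lane boundary detector.
print(bev.shape)  # (80, 80)
```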
  • Patent number: 11880771
    Abstract: Systems and methods are provided for machine-learned models including convolutional neural networks that generate predictions using continuous convolution techniques. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can perform, with a machine-learned convolutional neural network, one or more convolutions over input data using a continuous filter relative to a support domain associated with the input data, and receive a prediction from the machine-learned convolutional neural network. A machine-learned convolutional neural network in some examples includes at least one continuous convolution layer configured to perform convolutions over input data with a parametric continuous kernel.
    Type: Grant
    Filed: January 12, 2023
    Date of Patent: January 23, 2024
    Assignee: UATC, LLC
    Inventors: Shenlong Wang, Wei-Chiu Ma, Shun Da Suo, Raquel Urtasun, Ming Liang
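The parametric continuous kernel named above replaces a fixed grid of filter taps with a small network evaluated at arbitrary offsets. A minimal sketch, assuming scalar features and a scalar-valued kernel (real layers map offsets to full channel-mixing matrices); the weights here are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy kernel MLP: maps a 3D offset to a scalar weight.
W1 = rng.normal(size=(3, 8)) * 0.5
W2 = rng.normal(size=(8, 1)) * 0.5

def kernel(offset):
    return np.tanh(offset @ W1) @ W2  # g(x_i - x_j; theta)

def continuous_conv(query_pts, support_pts, support_feats):
    """h_i = sum_j g(x_i - x_j) * f_j over the support domain."""
    out = np.zeros(len(query_pts))
    for i, q in enumerate(query_pts):
        offsets = q - support_pts                       # (N, 3)
        w = np.array([kernel(o)[0] for o in offsets])   # kernel at each offset
        out[i] = float(w @ support_feats)
    return out

support = rng.normal(size=(32, 3))
feats = rng.normal(size=32)
queries = rng.normal(size=(4, 3))
out = continuous_conv(queries, support, feats)
print(out.shape)  # (4,)
```

Because the kernel is a function of continuous offsets, the layer can convolve over unstructured point sets rather than pixel grids.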
  • Patent number: 11861854
    Abstract: Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image can be used to generate multiple scaled images. The multiple scaled images are input into a feature net, which outputs feature data for the multiple scaled images. An attention net is used to generate an attention map from the input image. The attention map assigns emphasis as a soft distribution to different scales based on texture analysis. The feature data and the attention data can be combined through a multiplication process and then summed to generate dense features for comparison.
    Type: Grant
    Filed: May 26, 2022
    Date of Patent: January 2, 2024
    Assignee: Snap Inc.
    Inventors: Shenlong Wang, Linjie Luo, Ning Zhang, Jia Li
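The multiply-then-sum combination described above can be sketched in a few lines. This toy version assumes per-scale feature maps already resampled to a common resolution and per-scale attention logits; all shapes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
H, W, C, S = 8, 8, 4, 3        # height, width, channels, number of scales

# Stand-ins for the feature net's output on each scaled image and the
# attention net's per-scale logits for the input image.
scale_feats = rng.normal(size=(S, H, W, C))
attn_logits = rng.normal(size=(S, H, W))

# Soft distribution over scales at every pixel (the "attention map").
attn = np.exp(attn_logits)
attn /= attn.sum(axis=0, keepdims=True)

# Multiply features by attention per scale, then sum over scales.
dense = (scale_feats * attn[..., None]).sum(axis=0)   # (H, W, C)
print(dense.shape)
```

The resulting dense features can then be matched across images pixel by pixel.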
  • Publication number: 20230418717
    Abstract: The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned model can predict one or more dropout probabilities for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
    Type: Application
    Filed: September 13, 2023
    Publication date: December 28, 2023
    Inventors: Sivabalan Manivasagam, Shenlong Wang, Wei-Chiu Ma, Kelvin Ka Wing Wong, Wenyuan Zeng, Raquel Urtasun
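The dropout step in the abstract above amounts to thinning the ray-cast point cloud with learned per-point probabilities. A minimal sketch with a hand-written stand-in for the learned model (range-dependent dropout is an assumption for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)

# Initial point cloud from physics-based ray casting against a 3D map
# (stand-in: random points).
cloud = rng.uniform(-50, 50, size=(1000, 3))

def dropout_prob(points):
    """Stand-in for the learned model: farther points drop more often."""
    r = np.linalg.norm(points, axis=1)
    return np.clip(r / 100.0, 0.0, 0.9)

p_drop = dropout_prob(cloud)
keep = rng.uniform(size=len(cloud)) > p_drop   # sample per-point dropout
adjusted = cloud[keep]                         # more realistic simulated sweep
print(adjusted.shape[1])  # 3
```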
  • Publication number: 20230419512
    Abstract: Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image can be used to generate multiple scaled images. The multiple scaled images are input into a feature net, which outputs feature data for the multiple scaled images. An attention net is used to generate an attention map from the input image. The attention map assigns emphasis as a soft distribution to different scales based on texture analysis. The feature data and the attention data can be combined through a multiplication process and then summed to generate dense features for comparison.
    Type: Application
    Filed: September 12, 2023
    Publication date: December 28, 2023
    Inventors: Shenlong Wang, Linjie Luo, Ning Zhang, Jia Li
  • Patent number: 11820397
    Abstract: A computer-implemented method for localizing a vehicle can include accessing, by a computing system comprising one or more computing devices, a machine-learned retrieval model that has been trained using a ground truth dataset comprising a plurality of pre-localized sensor observations. Each of the plurality of pre-localized sensor observations has a predetermined pose value associated with a previously obtained sensor reading representation. The method also includes obtaining, by the computing system, a current sensor reading representation obtained by one or more sensors located at the vehicle. The method also includes inputting, by the computing system, the current sensor reading representation into the machine-learned retrieval model.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: November 21, 2023
    Assignee: UATC, LLC
    Inventors: Julieta Martinez Covarrubias, Raquel Urtasun, Shenlong Wang, Ioan Andrei Barsan, Gellert Sandor Mattyus, Alexandre Doubov, Hongbo Fan
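The retrieval step described above reduces localization to nearest-neighbor search over embedded observations. A minimal sketch, using random stand-in embeddings and cosine similarity (the actual similarity measure and embedding dimensions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Ground-truth database: embeddings of pre-localized sensor observations,
# each paired with its predetermined pose (x, y, heading).
db_embed = rng.normal(size=(500, 64))
db_embed /= np.linalg.norm(db_embed, axis=1, keepdims=True)
db_poses = rng.uniform(-100, 100, size=(500, 3))

def localize(current_embedding):
    """Retrieve the pose of the most similar stored observation."""
    e = current_embedding / np.linalg.norm(current_embedding)
    idx = int(np.argmax(db_embed @ e))         # cosine-similarity search
    return db_poses[idx]

# The current sensor reading would be embedded by the learned retrieval
# model; here a perturbed stored embedding stands in for it.
query = db_embed[42] + 0.01 * rng.normal(size=64)
pose = localize(query)
print(pose.shape)  # (3,)
```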
  • Publication number: 20230351689
    Abstract: The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned geometry model can predict one or more adjusted depths for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
    Type: Application
    Filed: June 30, 2023
    Publication date: November 2, 2023
    Inventors: Sivabalan Manivasagam, Shenlong Wang, Wei-Chiu Ma, Raquel Urtasun
  • Publication number: 20230343014
    Abstract: Described herein are techniques for learning neural reflectance shaders from images. A set of one or more machine learning models can be trained to optimize an illumination latent code and a set of reflectance latent codes for an object within a set of input images. A shader can then be generated based on a machine learning model of the one or more machine learning models. The shader is configured to sample the illumination latent code and the set of reflectance latent codes for the object. A 3D representation of the object can be rendered using the generated shader.
    Type: Application
    Filed: June 24, 2022
    Publication date: October 26, 2023
    Applicant: Intel Corporation
    Inventors: Benjamin Ummenhofer, Shenlong Wang, Sanskar Agrawal, Yixing Lao, Kai Zhang, Stephan Richter, Vladlen Koltun
  • Patent number: 11797407
    Abstract: The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned model can predict one or more dropout probabilities for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
    Type: Grant
    Filed: April 22, 2022
    Date of Patent: October 24, 2023
    Assignee: UATC, LLC
    Inventors: Sivabalan Manivasagam, Shenlong Wang, Wei-Chiu Ma, Kelvin Ka Wing Wong, Wenyuan Zeng, Raquel Urtasun
  • Patent number: 11768292
    Abstract: Generally, the disclosed systems and methods implement improved detection of objects in three-dimensional (3D) space. More particularly, an improved 3D object detection system can exploit continuous fusion of multiple sensors and/or integrated geographic prior map data to enhance effectiveness and robustness of object detection in applications such as autonomous driving. In some implementations, geographic prior data (e.g., geometric ground and/or semantic road features) can be exploited to enhance three-dimensional object detection for autonomous vehicle applications. In some implementations, object detection systems and methods can be improved based on dynamic utilization of multiple sensor modalities. More particularly, an improved 3D object detection system can exploit both LIDAR systems and cameras to perform very accurate localization of objects within three-dimensional space relative to an autonomous vehicle.
    Type: Grant
    Filed: January 10, 2022
    Date of Patent: September 26, 2023
    Assignee: UATC, LLC
    Inventors: Ming Liang, Bin Yang, Shenlong Wang, Wei-Chiu Ma, Raquel Urtasun
  • Patent number: 11769058
    Abstract: Systems and methods of the present disclosure provide an improved approach for open-set instance segmentation by identifying both known and unknown instances in an environment. For example, a method can include receiving sensor point cloud input data including a plurality of three-dimensional points. The method can include determining a feature embedding and at least one of an instance embedding, class embedding, and/or background embedding for each of the plurality of three-dimensional points. The method can include determining a first subset of points associated with one or more known instances within the environment based on the class embedding and the background embedding associated with each point in the plurality of points. The method can include determining a second subset of points associated with one or more unknown instances within the environment based on the first subset of points. The method can include segmenting the input data into known and unknown instances.
    Type: Grant
    Filed: October 17, 2022
    Date of Patent: September 26, 2023
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Kelvin Ka Wing Wong, Shenlong Wang, Mengye Ren, Ming Liang
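The known/unknown split above can be illustrated by comparing per-point class scores against a background score. This toy version uses random stand-ins for the learned embeddings and omits the clustering of unknown instances:

```python
import numpy as np

rng = np.random.default_rng(5)
N, K = 200, 3   # points, known classes

# Stand-ins for the learned per-point outputs: a score per known class
# and a background score.
class_scores = rng.normal(size=(N, K))
background_scores = rng.normal(size=N)

best_class = class_scores.max(axis=1)
known = best_class > background_scores   # first subset: known instances
candidates = ~known                      # remaining points
# The second subset (unknown instances) would be formed by clustering the
# instance embeddings of `candidates`; here we just report the split.
print(int(known.sum() + candidates.sum()))  # 200
```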
  • Publication number: 20230274540
    Abstract: Systems and methods for facilitating communication with autonomous vehicles are provided. In one example embodiment, a computing system can obtain a first type of sensor data (e.g., camera image data) associated with a surrounding environment of an autonomous vehicle and/or a second type of sensor data (e.g., LIDAR data) associated with the surrounding environment of the autonomous vehicle. The computing system can generate overhead image data indicative of at least a portion of the surrounding environment of the autonomous vehicle based at least in part on the first and/or second types of sensor data. The computing system can determine one or more lane boundaries within the surrounding environment of the autonomous vehicle based at least in part on the overhead image data indicative of at least the portion of the surrounding environment of the autonomous vehicle and a machine-learned lane boundary detection model.
    Type: Application
    Filed: May 8, 2023
    Publication date: August 31, 2023
    Inventors: Min Bai, Gellert Sandor Mattyus, Namdar Homayounfar, Shenlong Wang, Shrinidihi Kowshika Lakshmikanth, Raquel Urtasun
  • Patent number: 11734885
    Abstract: The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned geometry model can predict one or more adjusted depths for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
    Type: Grant
    Filed: October 3, 2022
    Date of Patent: August 22, 2023
    Assignee: UATC, LLC
    Inventors: Sivabalan Manivasagam, Shenlong Wang, Wei-Chiu Ma, Raquel Urtasun
  • Patent number: 11726208
    Abstract: Aspects of the present disclosure involve systems, methods, and devices for autonomous vehicle localization using a Lidar intensity map. A system is configured to generate a map embedding using a first neural network and to generate an online Lidar intensity embedding using a second neural network. The map embedding is based on input map data comprising a Lidar intensity map, and the online Lidar intensity embedding is based on online Lidar sweep data. The system is further configured to generate multiple pose candidates based on the online Lidar intensity embedding and compute a three-dimensional (3D) score map comprising a match score for each pose candidate that indicates a similarity between the pose candidate and the map embedding. The system is further configured to determine a pose of a vehicle based on the 3D score map and to control one or more operations of the vehicle based on the determined pose.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: August 15, 2023
    Assignee: UATC, LLC
    Inventors: Shenlong Wang, Andrei Pokrovsky, Raquel Urtasun Sotil, Ioan Andrei Bârsan
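Scoring pose candidates against the map embedding, as described above, resembles a cross-correlation. The sketch below scores only translation candidates with random stand-in embeddings (rotation candidates and the learned networks are omitted):

```python
import numpy as np

rng = np.random.default_rng(6)

map_embed = rng.normal(size=(64, 64))     # from the map network
sweep_embed = rng.normal(size=(16, 16))   # from the online sweep network

# Score each (dy, dx) translation candidate by correlating the sweep
# embedding against the corresponding map patch.
H, W = sweep_embed.shape
scores = np.zeros((64 - H + 1, 64 - W + 1))
for dy in range(scores.shape[0]):
    for dx in range(scores.shape[1]):
        patch = map_embed[dy:dy + H, dx:dx + W]
        scores[dy, dx] = float((patch * sweep_embed).sum())

dy, dx = np.unravel_index(np.argmax(scores), scores.shape)
print(scores.shape)  # (49, 49) score map; (dy, dx) is the best candidate
```

With rotation included, the score map becomes three-dimensional, matching the abstract's 3D score map.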
  • Patent number: 11715223
    Abstract: An active depth detection system can generate a depth map from an image and user interaction data, such as a pair of clicks. The active depth detection system can be implemented as a recurrent neural network that can receive the user interaction data as runtime inputs after training. The active depth detection system can store the generated depth map for further processing, such as image manipulation or real-world object detection.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: August 1, 2023
    Assignee: Snap Inc.
    Inventors: Kun Duan, Daniel Ron, Chongyang Ma, Ning Xu, Shenlong Wang, Sumant Milind Hanumante, Dhritiman Sagar
  • Patent number: 11715012
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices associated with object localization and generation of compressed feature representations are provided. For example, a computing system can access source data and target data. The source data can include a source representation of an environment including a source object. The target data can include a compressed target feature representation of the environment. The compressed target feature representation can be based on compression of a target feature representation of the environment produced by machine-learned models. A source feature representation can be generated based on the source representation and the machine-learned models. The machine-learned models can include machine-learned feature extraction models or machine-learned attention models. A localized state of the source object with respect to the environment can be determined based on the source feature representation and the compressed target feature representation.
    Type: Grant
    Filed: October 10, 2019
    Date of Patent: August 1, 2023
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Xinkai Wei, Ioan Andrei Barsan, Julieta Martinez Covarrubias, Shenlong Wang
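The core idea above is matching a freshly computed source representation against a *compressed* target representation. As a loose illustration only, the sketch below substitutes uniform quantization for the patent's learned compression; every detail here is an assumption:

```python
import numpy as np

rng = np.random.default_rng(8)

target_feats = rng.normal(size=(32, 32))   # target feature representation

# Compress the target features (stand-in: coarse uniform quantization).
step = 0.5
compressed = np.round(target_feats / step).astype(np.int8)

# At localization time, decompress and correlate with the source features
# produced by the feature extraction model.
decompressed = compressed.astype(np.float32) * step
source_feats = target_feats + 0.05 * rng.normal(size=(32, 32))
score = float((source_feats * decompressed).sum())
print(compressed.dtype)  # int8
```

Storing the compressed target representation instead of raw features reduces the memory footprint of the map.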
  • Publication number: 20230196909
    Abstract: Example aspects of the present disclosure describe a scene generator for simulating scenes in an environment. For example, snapshots of simulated traffic scenes can be generated by sampling a joint probability distribution trained on real-world traffic scenes. In some implementations, samples of the joint probability distribution can be obtained by sampling a plurality of factorized probability distributions for a plurality of objects for sequential insertion into the scene.
    Type: Application
    Filed: February 13, 2023
    Publication date: June 22, 2023
    Inventors: Shuhan Tan, Kelvin Ka Wing Wong, Shenlong Wang, Sivabalan Manivasagam, Mengye Ren, Raquel Urtasun
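Sequential insertion by sampling factorized distributions, as described above, can be sketched with a toy rejection sampler standing in for each learned factor p(object_k | objects_1..k-1); the positions, distances, and counts below are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_next_object(scene):
    """Stand-in for one factor: propose a position conditioned on the
    scene so far, rejecting overlaps with already-inserted objects."""
    for _ in range(100):
        pos = rng.uniform(0, 50, size=2)
        if all(np.linalg.norm(pos - o) > 5.0 for o in scene):
            return pos
    return None

# Sequentially insert objects, each sampled given the scene built so far.
scene = []
for _ in range(6):
    obj = sample_next_object(scene)
    if obj is not None:
        scene.append(obj)

print(len(scene) > 0)  # True
```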
  • Patent number: 11682196
    Abstract: Systems and methods for facilitating communication with autonomous vehicles are provided. In one example embodiment, a computing system can obtain a first type of sensor data (e.g., camera image data) associated with a surrounding environment of an autonomous vehicle and/or a second type of sensor data (e.g., LIDAR data) associated with the surrounding environment of the autonomous vehicle. The computing system can generate overhead image data indicative of at least a portion of the surrounding environment of the autonomous vehicle based at least in part on the first and/or second types of sensor data. The computing system can determine one or more lane boundaries within the surrounding environment of the autonomous vehicle based at least in part on the overhead image data indicative of at least the portion of the surrounding environment of the autonomous vehicle and a machine-learned lane boundary detection model.
    Type: Grant
    Filed: June 25, 2021
    Date of Patent: June 20, 2023
    Assignee: UATC, LLC
    Inventors: Min Bai, Gellert Sandor Mattyus, Namdar Homayounfar, Shenlong Wang, Shrinidihi Kowshika Lakshmikanth, Raquel Urtasun
  • Patent number: 11676310
    Abstract: The present disclosure is directed to encoding LIDAR point cloud data. In particular, a computing system can receive point cloud data for a three-dimensional space. The computing system can generate a tree-based data structure from the point cloud data, the tree-based data structure comprising a plurality of nodes. The computing system can generate a serial representation of the tree-based data structure. The computing system can, for each respective node represented by a symbol in the serial representation: determine contextual information for the respective node, generate, using the contextual information as input to a machine-learned model, a statistical distribution associated with the respective node, and generate a compressed representation of the symbol associated with the respective node by encoding the symbol using the statistical distribution for the respective node.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: June 13, 2023
    Assignee: UATC, LLC
    Inventors: Yushu Huang, Jerry Junkai Liu, Kelvin Ka Wing Wong, Shenlong Wang, Raquel Urtasun, Sourav Biswas
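The entropy-coding loop described above predicts a symbol distribution per node from context, then codes the symbol under it. The sketch below computes the ideal code length (in bits) rather than running a real arithmetic coder, and uses a hand-written stand-in for the learned entropy model:

```python
import numpy as np

# Serialized tree symbols: each node's 8-bit child-occupancy pattern.
symbols = [0b10000001, 0b11000000, 0b00000001, 0b10000001]

def entropy_model(context):
    """Stand-in for the learned model: contextual information (here just
    the previous symbol) -> a distribution over the 256 possible symbols."""
    logits = np.cos(np.arange(256) * (context + 1) * 0.01)
    p = np.exp(logits)
    return p / p.sum()

total_bits = 0.0
context = 0
for s in symbols:
    p = entropy_model(context)       # distribution for this node
    total_bits += -np.log2(p[s])     # ideal code length under that model
    context = s                      # next node conditions on this symbol
print(total_bits > 0)  # True
```

Sharper learned distributions assign higher probability to the observed symbols, which directly lowers the total coded size.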