Patents by Inventor Wei-Chiu Ma

Wei-Chiu Ma has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210150722
    Abstract: Disclosed herein are methods and systems for performing instance segmentation that can provide improved estimation of object boundaries. Implementations can include a machine-learned segmentation model trained to estimate an initial object boundary based on a truncated signed distance function (TSDF) generated by the model. The model can also generate outputs for optimizing the TSDF over a series of iterations to produce a final TSDF that can be used to determine the segmentation mask.
    Type: Application
    Filed: September 10, 2020
    Publication date: May 20, 2021
    Inventors: Namdar Homayounfar, Yuwen Xiong, Justin Liang, Wei-Chiu Ma, Raquel Urtasun
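The core idea above — represent an object boundary as a truncated signed distance function, iteratively refine it, then threshold it into a mask — can be illustrated with a toy NumPy sketch. This is not the patented model: `refine_tsdf` uses a fixed residual array as a stand-in for the per-iteration corrections a learned model would predict, and all function names are hypothetical.

```python
import numpy as np

def tsdf_to_mask(tsdf):
    # The segmentation mask is the region where the signed distance is
    # non-positive, i.e. inside the estimated object boundary.
    return tsdf <= 0.0

def refine_tsdf(tsdf, predicted_residual, num_iters=3, step=0.5):
    # Gradient-descent-style refinement: each iteration nudges the TSDF
    # by a fraction of a predicted correction. `predicted_residual` is a
    # fixed array standing in for per-iteration model outputs.
    for _ in range(num_iters):
        tsdf = tsdf + step * predicted_residual
    return tsdf

# Coarse initial TSDF: signed distance to a circle of radius 2.5 at (4, 4).
ys, xs = np.mgrid[0:8, 0:8]
init = np.hypot(ys - 4, xs - 4) - 2.5
# Pretend the model consistently says "grow the mask outward".
residual = np.full_like(init, -0.4)
final = refine_tsdf(init, residual, num_iters=2, step=0.5)
mask = tsdf_to_mask(final)
```

After refinement the zero level set has moved outward, so the final mask covers more pixels than the mask thresholded from the initial TSDF.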
  • Publication number: 20210150410
    Abstract: Systems and methods for predicting instance geometry are provided. A method includes obtaining an input image depicting at least one object. The method includes determining an instance mask for the object by inputting the input image into a machine-learned instance segmentation model. The method includes determining an initial polygon with a number of initial vertices outlining the border of the object within the input image. The method includes obtaining a feature embedding for one or more pixels of the input image and determining a vertex embedding including a feature embedding for each pixel corresponding to an initial vertex of the initial polygon. The method includes determining a vertex offset for each initial vertex of the initial polygon based on the vertex embedding and applying the vertex offset to the initial polygon to obtain one or more enhanced polygons.
    Type: Application
    Filed: August 31, 2020
    Publication date: May 20, 2021
    Inventors: Justin Liang, Namdar Homayounfar, Wei-Chiu Ma, Yuwen Xiong, Raquel Urtasun
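The two mechanical steps in this abstract — gathering a per-vertex embedding from a pixel feature map, then displacing each vertex by a regressed offset — can be sketched in a few lines of NumPy. This is an illustrative simplification, not the patented method: the offsets here are supplied directly rather than predicted by a model, and the function names are hypothetical.

```python
import numpy as np

def gather_vertex_embeddings(feature_map, polygon):
    # feature_map: (H, W, C) per-pixel feature embeddings.
    # polygon: (N, 2) integer (row, col) vertex coordinates.
    rows, cols = polygon[:, 0], polygon[:, 1]
    return feature_map[rows, cols]  # (N, C) vertex embedding

def apply_vertex_offsets(polygon, offsets):
    # offsets: (N, 2) per-vertex displacement, standing in for the
    # offsets a learned model would regress from the vertex embedding.
    return polygon + offsets

feature_map = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)
polygon = np.array([[0, 1], [1, 2]])
embedding = gather_vertex_embeddings(feature_map, polygon)   # (2, 4)
refined = apply_vertex_offsets(polygon.astype(float),
                               np.array([[0.5, -0.5], [0.0, 1.0]]))
```

In the actual method the offset regressor consumes the vertex embedding; here the two steps are decoupled purely to show the data flow.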
  • Publication number: 20210152831
    Abstract: The present disclosure is directed to video compression using conditional entropy coding. An ordered sequence of image frames can be transformed to produce an entropy coding for each image frame. Each of the entropy codings provides a compressed form of image information based on a prior image frame and a current image frame (the current image frame occurring after the prior image frame). In this manner, the compression model can capture temporal relationships between image frames or encoded representations of the image frames using a conditional entropy encoder trained to approximate the joint entropy between frames in the image frame sequence.
    Type: Application
    Filed: September 10, 2020
    Publication date: May 20, 2021
    Inventors: Jerry Junkai Liu, Shenlong Wang, Wei-Chiu Ma, Raquel Urtasun
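The benefit of conditioning on the prior frame can be made concrete with a toy code-length calculation: under a conditional model, a pixel's cost in bits is -log2 p(current | prior), so frames that are predictable from their predecessor compress to nearly nothing. The sketch below uses an empirical conditional distribution as a crude stand-in for the learned conditional entropy model; the function name is hypothetical.

```python
import numpy as np

def conditional_bits(prev, curr, num_symbols=256):
    # Code length (in bits) of `curr` under the empirical conditional
    # distribution p(curr_pixel | prev_pixel) -- a toy stand-in for a
    # learned conditional entropy model.
    prev, curr = prev.ravel(), curr.ravel()
    joint = np.zeros((num_symbols, num_symbols))
    np.add.at(joint, (prev, curr), 1)  # co-occurrence counts
    cond = joint / np.maximum(joint.sum(axis=1, keepdims=True), 1)
    return float(-np.log2(np.maximum(cond[prev, curr], 1e-12)).sum())

frame = np.array([[0, 1], [2, 3]])
static_cost = conditional_bits(frame, frame)  # identical frames: ~0 bits
```

A static scene costs essentially zero bits because each pixel is fully determined by its predecessor, whereas pixels whose value is ambiguous given the prior frame cost -log2 of their conditional probability.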
  • Patent number: 10859384
    Abstract: Systems and methods for autonomous vehicle localization are provided. In one example embodiment, a computer-implemented method includes obtaining, by a computing system that includes one or more computing devices onboard an autonomous vehicle, sensor data indicative of one or more geographic cues within the surrounding environment of the autonomous vehicle. The method includes obtaining, by the computing system, sparse geographic data associated with the surrounding environment of the autonomous vehicle. The sparse geographic data is indicative of the one or more geographic cues. The method includes determining, by the computing system, a location of the autonomous vehicle within the surrounding environment based at least in part on the sensor data indicative of the one or more geographic cues and the sparse geographic data. The method includes outputting, by the computing system, data indicative of the location of the autonomous vehicle within the surrounding environment.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: December 8, 2020
    Assignee: UATC, LLC
    Inventors: Wei-Chiu Ma, Shenlong Wang, Namdar Homayounfar, Shrinidhi Kowshika Lakshmikanth, Raquel Urtasun
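At its simplest, localization against sparse geographic data amounts to scoring candidate poses by how well the observed cues, transformed by each pose, line up with the cues in the sparse map. The sketch below scores candidate translations with a nearest-neighbour residual; it is a toy stand-in for the patented method (which is not limited to this matching scheme), and the names are hypothetical.

```python
import numpy as np

def localize(observed_cues, map_cues, candidate_poses):
    # observed_cues: (N, 2) cue positions in the vehicle frame.
    # map_cues: (M, 2) cue positions from the sparse geographic data.
    # candidate_poses: (P, 2) candidate vehicle translations to score.
    best, best_cost = None, np.inf
    for pose in candidate_poses:
        world = observed_cues + pose  # transform observations into map frame
        # Cost: for each observed cue, distance to the closest map cue.
        d = np.linalg.norm(world[:, None, :] - map_cues[None, :, :], axis=2)
        cost = d.min(axis=1).sum()
        if cost < best_cost:
            best, best_cost = pose, cost
    return best

map_cues = np.array([[5.0, 5.0], [6.0, 5.0]])
observed = np.array([[0.0, 0.0], [1.0, 0.0]])
pose = localize(observed, map_cues, np.array([[5.0, 5.0], [0.0, 0.0]]))
```

Because the sparse map stores only a handful of cues rather than dense geometry, the matching step stays cheap even with many candidate poses.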
  • Patent number: 10803325
    Abstract: Systems and methods for facilitating communication with autonomous vehicles are provided. In one example embodiment, a computing system can obtain rasterized LIDAR data associated with a surrounding environment of an autonomous vehicle. The rasterized LIDAR data can include LIDAR image data that is rasterized from a LIDAR point cloud. The computing system can access data indicative of a machine-learned lane boundary detection model. The computing system can input the rasterized LIDAR data associated with the surrounding environment of the autonomous vehicle into the machine-learned lane boundary detection model. The computing system can obtain an output from the machine-learned lane boundary detection model. The output can be indicative of one or more lane boundaries within the surrounding environment of the autonomous vehicle.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: October 13, 2020
    Assignee: UATC, LLC
    Inventors: Min Bai, Gellert Sandor Mattyus, Namdar Homayounfar, Shenlong Wang, Shrinidhi Kowshika Lakshmikanth, Raquel Urtasun, Wei-Chiu Ma
  • Publication number: 20200302662
    Abstract: The present disclosure is directed to generating high quality map data using obtained sensor data. In particular, a computing system comprising one or more computing devices can obtain sensor data associated with a portion of a travel way. The computing system can identify, using a machine-learned model, feature data associated with one or more lane boundaries in the portion of the travel way based on the obtained sensor data. The computing system can generate a graph representing lane boundaries associated with the portion of the travel way by identifying a respective node location for the respective lane boundary based in part on identified feature data associated with lane boundary information, determining, for the respective node location, an estimated direction value and an estimated lane state, and generating, based on the respective node location, the estimated direction value, and the estimated lane state, a predicted next node location.
    Type: Application
    Filed: March 20, 2020
    Publication date: September 24, 2020
    Inventors: Namdar Homayounfar, Justin Liang, Wei-Chiu Ma, Raquel Urtasun
  • Publication number: 20200301799
    Abstract: The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned model can predict one or more dropout probabilities for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
    Type: Application
    Filed: March 23, 2020
    Publication date: September 24, 2020
    Inventors: Sivabalan Manivasagam, Shenlong Wang, Wei-Chiu Ma, Kelvin Ka Wing Wong, Wenyuan Zeng, Raquel Urtasun
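The final step described in this abstract — using predicted per-point dropout probabilities to thin a ray-cast point cloud so it better resembles a real scan — is mechanically simple. The sketch below applies given dropout probabilities with Bernoulli sampling; the probabilities here are inputs rather than outputs of a learned model, and the function name is hypothetical.

```python
import numpy as np

def apply_dropout(points, dropout_prob, rng):
    # points: (N, 3) ray-cast point cloud from the physics-based step.
    # dropout_prob: (N,) predicted probability that each simulated return
    # would be missing from a real scan (e.g. dark or specular surfaces).
    keep = rng.random(len(points)) >= dropout_prob
    return points[keep]

rng = np.random.default_rng(0)
cloud = np.zeros((10, 3))
realistic = apply_dropout(cloud, np.full(10, 0.3), rng)
```

Points with dropout probability 1 are always removed and points with probability 0 always survive; in between, each point is kept independently with probability 1 - p.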
  • Publication number: 20200302627
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices associated with depth estimation are provided. For example, a feature representation associated with stereo images including a first and second plurality of points can be accessed. Sparse disparity estimates associated with disparities between the first and second plurality of points can be determined. The sparse disparity estimates can be based on machine-learned models that estimate disparities based on comparisons of the first plurality of points to the second plurality of points. Confidence ranges associated with the disparities between the first and second plurality of points can be determined based on the sparse disparity estimates and the machine-learned models. A disparity map for the stereo images can be generated based on using the confidence ranges and machine-learned models to prune the disparities outside the confidence ranges.
    Type: Application
    Filed: March 23, 2020
    Publication date: September 24, 2020
    Inventors: Shivam Duggal, Shenlong Wang, Wei-Chiu Ma, Raquel Urtasun
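The pruning step at the end of this abstract — discard candidate disparities that fall outside a confidence range around a sparse estimate — is easy to illustrate. The sketch below uses a fixed symmetric radius as a simplification of the learned, per-pixel confidence ranges; the names are hypothetical.

```python
import numpy as np

def prune_candidates(candidates, sparse_disparity, radius):
    # candidates: (D,) candidate disparity values for one pixel.
    # sparse_disparity: the sparse estimate for that pixel; the
    # confidence range [d - r, d + r] stands in for the learned ranges.
    lo, hi = sparse_disparity - radius, sparse_disparity + radius
    return candidates[(candidates >= lo) & (candidates <= hi)]

survivors = prune_candidates(np.arange(10), sparse_disparity=4, radius=2)
```

Shrinking the candidate set this way is what makes the subsequent dense matching cheap: instead of evaluating every possible disparity, the model only scores values near the sparse estimate.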
  • Publication number: 20200160537
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices associated with motion flow estimation are provided. For example, scene data including representations of an environment over a first set of time intervals can be accessed. Extracted visual cues can be generated based on the representations and machine-learned feature extraction models. At least one of the machine-learned feature extraction models can be configured to generate a portion of the extracted visual cues based on a first set of the representations of the environment from a first perspective and a second set of the representations of the environment from a second perspective. The extracted visual cues can be encoded using energy functions. Three-dimensional motion estimates of object instances at time intervals subsequent to the first set of time intervals can be determined based on the energy functions and machine-learned inference models.
    Type: Application
    Filed: August 5, 2019
    Publication date: May 21, 2020
    Inventors: Raquel Urtasun, Wei-Chiu Ma, Shenlong Wang, Yuwen Xiong, Rui Hu
  • Publication number: 20200160598
    Abstract: The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned geometry model can predict one or more adjusted depths for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
    Type: Application
    Filed: September 11, 2019
    Publication date: May 21, 2020
    Inventors: Sivabalan Manivasagam, Shenlong Wang, Wei-Chiu Ma, Raquel Urtasun
  • Publication number: 20200025935
    Abstract: Generally, the disclosed systems and methods implement improved detection of objects in three-dimensional (3D) space. More particularly, an improved 3D object detection system can exploit continuous fusion of multiple sensors and/or integrated geographic prior map data to enhance effectiveness and robustness of object detection in applications such as autonomous driving. In some implementations, geographic prior data (e.g., geometric ground and/or semantic road features) can be exploited to enhance three-dimensional object detection for autonomous vehicle applications. In some implementations, object detection systems and methods can be improved based on dynamic utilization of multiple sensor modalities. More particularly, an improved 3D object detection system can exploit both LIDAR systems and cameras to perform very accurate localization of objects within three-dimensional space relative to an autonomous vehicle.
    Type: Application
    Filed: March 14, 2019
    Publication date: January 23, 2020
    Inventors: Ming Liang, Bin Yang, Shenlong Wang, Wei-Chiu Ma, Raquel Urtasun
  • Publication number: 20200025931
    Abstract: Generally, the disclosed systems and methods implement improved detection of objects in three-dimensional (3D) space. More particularly, an improved 3D object detection system can exploit continuous fusion of multiple sensors and/or integrated geographic prior map data to enhance effectiveness and robustness of object detection in applications such as autonomous driving. In some implementations, geographic prior data (e.g., geometric ground and/or semantic road features) can be exploited to enhance three-dimensional object detection for autonomous vehicle applications. In some implementations, object detection systems and methods can be improved based on dynamic utilization of multiple sensor modalities. More particularly, an improved 3D object detection system can exploit both LIDAR systems and cameras to perform very accurate localization of objects within three-dimensional space relative to an autonomous vehicle.
    Type: Application
    Filed: March 14, 2019
    Publication date: January 23, 2020
    Inventors: Ming Liang, Bin Yang, Shenlong Wang, Wei-Chiu Ma, Raquel Urtasun
  • Publication number: 20190147253
    Abstract: Systems and methods for facilitating communication with autonomous vehicles are provided. In one example embodiment, a computing system can obtain rasterized LIDAR data associated with a surrounding environment of an autonomous vehicle. The rasterized LIDAR data can include LIDAR image data that is rasterized from a LIDAR point cloud. The computing system can access data indicative of a machine-learned lane boundary detection model. The computing system can input the rasterized LIDAR data associated with the surrounding environment of the autonomous vehicle into the machine-learned lane boundary detection model. The computing system can obtain an output from the machine-learned lane boundary detection model. The output can be indicative of one or more lane boundaries within the surrounding environment of the autonomous vehicle.
    Type: Application
    Filed: September 5, 2018
    Publication date: May 16, 2019
    Inventors: Min Bai, Gellert Sandor Mattyus, Namdar Homayounfar, Shenlong Wang, Shrinidhi Kowshika Lakshmikanth, Raquel Urtasun, Wei-Chiu Ma
  • Publication number: 20190147255
    Abstract: Systems and methods for generating sparse geographic data for autonomous vehicles are provided. In one example embodiment, a computing system can obtain sensor data associated with at least a portion of a surrounding environment of an autonomous vehicle. The computing system can identify a plurality of lane boundaries within the portion of the surrounding environment of the autonomous vehicle based at least in part on the sensor data and a first machine-learned model. The computing system can generate a plurality of polylines indicative of the plurality of lane boundaries based at least in part on a second machine-learned model. Each polyline of the plurality of polylines can be indicative of a lane boundary of the plurality of lane boundaries. The computing system can output a lane graph including the plurality of polylines.
    Type: Application
    Filed: September 6, 2018
    Publication date: May 16, 2019
    Inventors: Namdar Homayounfar, Wei-Chiu Ma, Shrinidhi Kowshika Lakshmikanth, Raquel Urtasun
  • Publication number: 20190145784
    Abstract: Systems and methods for autonomous vehicle localization are provided. In one example embodiment, a computer-implemented method includes obtaining, by a computing system that includes one or more computing devices onboard an autonomous vehicle, sensor data indicative of one or more geographic cues within the surrounding environment of the autonomous vehicle. The method includes obtaining, by the computing system, sparse geographic data associated with the surrounding environment of the autonomous vehicle. The sparse geographic data is indicative of the one or more geographic cues. The method includes determining, by the computing system, a location of the autonomous vehicle within the surrounding environment based at least in part on the sensor data indicative of the one or more geographic cues and the sparse geographic data. The method includes outputting, by the computing system, data indicative of the location of the autonomous vehicle within the surrounding environment.
    Type: Application
    Filed: September 6, 2018
    Publication date: May 16, 2019
    Inventors: Wei-Chiu Ma, Shenlong Wang, Namdar Homayounfar, Shrinidhi Kowshika Lakshmikanth, Raquel Urtasun
  • Publication number: 20190147335
    Abstract: Systems and methods are provided for machine-learned models including convolutional neural networks that generate predictions using continuous convolution techniques. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can perform, with a machine-learned convolutional neural network, one or more convolutions over input data using a continuous filter relative to a support domain associated with the input data, and receive a prediction from the machine-learned convolutional neural network. A machine-learned convolutional neural network in some examples includes at least one continuous convolution layer configured to perform convolutions over input data with a parametric continuous kernel.
    Type: Application
    Filed: October 30, 2018
    Publication date: May 16, 2019
    Inventors: Shenlong Wang, Wei-Chiu Ma, Shun Da Suo, Raquel Urtasun, Ming Liang
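The defining feature of a parametric continuous convolution is that the kernel is a function of the continuous offset between a query point and its neighbours, rather than a fixed grid of weights. The sketch below uses a linear map over offsets as a toy kernel function, standing in for the small MLP the abstract describes; it is an illustration of the idea, not the patented layer, and the names are hypothetical.

```python
import numpy as np

def continuous_conv(points, feats, queries, kernel_weights, radius=1.0):
    # points: (N, d) continuous support locations; feats: (N,) features.
    # kernel_weights: (d,) parameters of a toy linear kernel
    # w(offset) = offset @ kernel_weights, standing in for the learned
    # parametric continuous kernel.
    out = []
    for q in queries:
        offsets = points - q
        mask = np.linalg.norm(offsets, axis=1) <= radius  # local support
        w = offsets[mask] @ kernel_weights                # per-neighbour weights
        out.append(float((w * feats[mask]).sum()))        # weighted sum
    return np.array(out)

points = np.array([[0.0, 0.0], [1.0, 0.0]])
feats = np.array([1.0, 2.0])
result = continuous_conv(points, feats,
                         queries=np.array([[0.0, 0.0]]),
                         kernel_weights=np.array([0.5, 0.0]))
```

Because the kernel is evaluated at arbitrary real-valued offsets, the same layer applies directly to irregular data such as point clouds, with no voxelization or grid resampling.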