Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating a driving scenario machine learning network and providing a simulated driving environment. One of the methods includes receiving video data that includes multiple video frames depicting an aerial view of vehicles moving about an area. The video data is processed to generate driving scenario data, which includes information about dynamic objects identified in the video. A machine learning network is trained using the generated driving scenario data, and a three-dimensional simulated environment is provided that is configured to allow an autonomous vehicle to interact with one or more of the dynamic objects.
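A minimal Python sketch of this pipeline follows. It is illustrative only: the names ObjectTrack, extract_driving_scenarios, train_scenario_model, and simulate_step are hypothetical, and the detection, tracking, and learning stages are reduced to toy placeholders rather than the patented method.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectTrack:
    """Trajectory of one dynamic object (e.g., a vehicle) across frames."""
    object_id: int
    positions: list = field(default_factory=list)  # (frame_idx, x, y) tuples

def extract_driving_scenarios(frames):
    """Stand-in for detecting and tracking dynamic objects in aerial frames.

    A real system would run a detector/tracker here; we fabricate one
    stationary and one moving track for illustration.
    """
    return [
        ObjectTrack(0, [(i, 10.0, 5.0) for i in range(len(frames))]),
        ObjectTrack(1, [(i, 2.0 * i, 3.0) for i in range(len(frames))]),
    ]

def train_scenario_model(tracks):
    """Fit a trivial per-object velocity model from first and last positions."""
    model = {}
    for t in tracks:
        (f0, x0, y0), (f1, x1, y1) = t.positions[0], t.positions[-1]
        steps = max(f1 - f0, 1)
        model[t.object_id] = ((x1 - x0) / steps, (y1 - y0) / steps)
    return model

def simulate_step(model, state, dt=1.0):
    """Advance every dynamic object one step in the simulated environment."""
    return {oid: (x + model[oid][0] * dt, y + model[oid][1] * dt)
            for oid, (x, y) in state.items()}

frames = [None] * 10               # stand-in for decoded video frames
tracks = extract_driving_scenarios(frames)
model = train_scenario_model(tracks)
state = {t.object_id: t.positions[0][1:] for t in tracks}
print(simulate_step(model, state))  # object positions after one simulated step
```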
Abstract: Methods are disclosed for identifying local motions in point cloud data generated by one or more rounds of LIDAR scans. The point cloud data describes an environment with points, each point having a scanned time and scanned coordinates. A subset of points is selected from the point cloud data, and a surface is reconstructed at a common reference time using the subset. The reconstructed surface includes points that are moved from their scanned coordinates; the moved points are derived from a projected movement under a projected motion parameter over the duration between the scanned time and the common reference time. The quality of the reconstructed surface is then determined. If the surface has a high quality, the projected motion parameter is taken as the final motion parameter, which serves as an indication of whether an object is moving.
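A toy Python sketch of this parameter search follows. It assumes a one-dimensional stand-in for the surface: points are projected to the common reference time under each candidate velocity, and an inverse-variance score substitutes for a real surface-fit residual. The function names and the quality metric are illustrative assumptions, not the patented method.

```python
import random

def reconstruct_quality(points, velocity, t_ref):
    """Project each (time, x) point to the common reference time under a
    candidate velocity and score how tightly the moved points collapse onto
    a surface. Here the 'surface' is a flat line at constant x, so quality
    is the inverse variance of the projected coordinates."""
    moved = [x + velocity * (t_ref - t) for t, x in points]
    mean = sum(moved) / len(moved)
    var = sum((m - mean) ** 2 for m in moved) / len(moved)
    return 1.0 / (var + 1e-9)

# Synthetic scan: a planar patch moving at 2.0 m/s, sampled at varying times.
true_v = 2.0
points = [(t, true_v * t + random.gauss(0, 0.01))
          for t in [random.uniform(0, 1) for _ in range(200)]]

# Search over projected motion parameters; the candidate that yields the
# highest reconstruction quality becomes the final motion parameter, which
# indicates whether the object is moving.
candidates = [v / 10 for v in range(-50, 51)]
best_v = max(candidates, key=lambda v: reconstruct_quality(points, v, t_ref=0.0))
print(f"estimated velocity: {best_v:.1f} m/s (moving: {abs(best_v) > 0.05})")
```

Projecting a point sampled at time t back to t_ref removes velocity * (t - t_ref) of displacement, so only the correct candidate makes all samples of the moving surface coincide; any other candidate leaves a time-dependent smear that inflates the variance.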
Abstract: The present invention generally relates to generating a three-dimensional representation of a physical environment, including dynamic scenarios.
Abstract: A method is described for using a drone equipped with a camera and an inertial measurement unit (IMU) to survey an environment and reconstruct a 3D map. A key frame location is first identified, and a first image of the environment is captured by the camera from that location. The drone then moves to another location, captures a second image of the environment, and returns to the key frame location. The drone may perform additional rounds of scans, returning to the key frame location between rounds. By repeatedly returning to the key frame location, the drone's precise location can be determined from the IMU's acceleration data, because the location estimate is recalibrated each time the drone reaches the key frame location.
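A simplified Python sketch of why the key-frame return bounds IMU drift follows, using one-dimensional dead reckoning. The noise level, acceleration profile, and function names are illustrative assumptions; a real system would integrate full 3D IMU readings.

```python
import random

def integrate_imu(accels, dt, pos0=0.0, vel0=0.0):
    """Dead-reckon position by double-integrating (noisy) IMU acceleration."""
    pos, vel = pos0, vel0
    for a in accels:
        vel += a * dt
        pos += vel * dt
    return pos, vel

DT = 0.01
KEY_FRAME_POS = 0.0  # anchor whose location is known exactly

# One out-and-back leg: accelerate away from the key frame, brake and
# reverse, then brake again so the drone ends back where it started.
leg = [1.0] * 100 + [-1.0] * 200 + [1.0] * 100  # net displacement ~ 0 m

# Without recalibration, sensor noise accumulates across rounds.
pos, vel = 0.0, 0.0
for _ in range(3):
    noisy = [a + random.gauss(0, 0.2) for a in leg]
    pos, vel = integrate_imu(noisy, DT, pos, vel)
print(f"uncorrected estimate after 3 rounds: {pos:.3f} m")

# With the key-frame strategy, the estimate is reset to the known anchor
# each time the drone returns, so drift cannot accumulate between rounds.
pos, vel = KEY_FRAME_POS, 0.0
for _ in range(3):
    noisy = [a + random.gauss(0, 0.2) for a in leg]
    pos, vel = integrate_imu(noisy, DT, pos, vel)
    pos, vel = KEY_FRAME_POS, 0.0  # recalibrate at the key frame
print(f"recalibrated estimate after 3 rounds: {pos:.3f} m")
```

Because double integration turns small acceleration errors into position error that grows with time, resetting the estimate at a known location caps the error at whatever accrues within a single round rather than across the whole survey.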