Abstract: A method of automatically producing maps and measures that visualize and quantify placement and progress of construction elements, such as walls, ducts, etc. in images. From a set of images depicting a scene, element confidences per pixel in each of the images are produced using a classification model that assigns such confidences. Thereafter, an element confidence for each respective one of a set of 3D points represented in the scene is determined by aggregating the per-pixel element confidences from corresponding pixels of each of the images that is known to observe the respective 3D point. These element confidences are then updated based on primitive templates representing element geometry to produce a 3D progress model of the scene.
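The aggregation step described above — combining per-pixel classifier confidences across every image known to observe a given 3D point — can be sketched as follows. This is a minimal illustration, not the patented method: the mean-pooling rule, the `observations` mapping, and all names are assumptions for demonstration.

```python
import numpy as np

def aggregate_point_confidences(conf_maps, observations):
    """Aggregate per-pixel element confidences into per-3D-point confidences.

    conf_maps: list of HxW arrays, one per image, giving element confidence
               per pixel (e.g., output of a wall/duct classification model).
    observations: dict mapping point_id -> list of (image_idx, row, col)
                  pixels known to observe that 3D point.
    Returns a dict mapping point_id -> mean confidence over observing images.
    """
    point_conf = {}
    for pid, obs in observations.items():
        vals = [conf_maps[i][r, c] for (i, r, c) in obs]
        point_conf[pid] = float(np.mean(vals)) if vals else 0.0
    return point_conf

# Two toy 2x2 confidence maps; 3D point 0 is observed at pixel (0, 0)
# in both images, so its confidence is the mean of 0.9 and 0.7.
maps = [np.array([[0.9, 0.1], [0.2, 0.8]]),
        np.array([[0.7, 0.3], [0.4, 0.6]])]
obs = {0: [(0, 0, 0), (1, 0, 0)]}
print(aggregate_point_confidences(maps, obs))
```

Mean pooling is only one plausible aggregation rule; a weighted or max-based rule could be substituted without changing the structure of the loop.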
Abstract: A 360 panoramic video sequence of an architectural or industrial site includes a plurality of 360 images. 3D poses of the 360 images are determined with respect to one another, and a subset of the images is extracted for further processing according to selection criteria. The 3D poses of the extracted images are refined based on determined correspondences of features in the images, and a sparse 3D point cloud of the site is developed from the extracted images with refined pose estimates. An “as-built” representation of the site captured in the panoramic video sequence is created from the sparse 3D point cloud and aligned to a 2D or 3D plan of the site. Once so aligned, the extracted images may be presented to a user in conjunction with their plan positions (e.g., overlaid on the site plan). Optionally, a point cloud or mesh view of the site may also be returned.
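The frame-subset extraction described above can be illustrated with a simple baseline-distance criterion: keep a frame only when the camera has moved far enough from the last kept frame. This is a hypothetical sketch of one possible selection criterion — the abstract does not specify the actual criteria, and the `min_baseline` threshold and function names are assumptions.

```python
import numpy as np

def select_keyframes(positions, min_baseline=0.5):
    """Select a subset of frames whose camera centers are at least
    min_baseline apart from the most recently selected keyframe.

    positions: Nx3 array of per-frame camera centers, from the rough
               relative pose estimates over the 360 video sequence.
    Returns the list of selected frame indices (frame 0 is always kept).
    """
    keep = [0]
    for i in range(1, len(positions)):
        if np.linalg.norm(positions[i] - positions[keep[-1]]) >= min_baseline:
            keep.append(i)
    return keep

# Frames along a walk-through: several are nearly stationary and are skipped.
pos = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.6, 0, 0], [0.7, 0, 0], [1.3, 0, 0]])
print(select_keyframes(pos))  # -> [0, 2, 4]
```

Pruning near-duplicate frames this way keeps the later pose refinement and point-cloud reconstruction tractable while preserving coverage of the site.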
Abstract: A system initializes a set of calibrated images with known 3D pose relative to a 3D building information model (BIM) to be anchor images, detects features within images of unknown position and orientation, and determines matches with features of the calibrated images. The system determines a subset of the images that have at least a threshold number of matching features, selects an image from the subset of the images having the largest number of matching features, and executes a reconstruction algorithm using the image and the anchor images to calibrate the image to the BIM and generate an initial 3D point cloud model. The system repeats the last steps to identify a second image from the subset and perform, starting with the initial 3D point cloud model and using the second image, 3D reconstruction to generate an updated 3D point cloud model that is displayable in a graphical user interface.
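The repeat-until-done loop in the abstract above — threshold the candidate images by feature matches to the calibrated set, then register the best-matching image first — amounts to a greedy ordering. The sketch below shows only that ordering logic; the match counts, threshold value, and file names are illustrative assumptions, and the actual reconstruction step is out of scope here.

```python
def order_images_for_reconstruction(match_counts, threshold=50):
    """Greedy registration order for incremental reconstruction.

    match_counts: dict image_name -> number of feature matches against the
                  already-calibrated (anchor) image set.
    Images with fewer than `threshold` matches are excluded; the rest are
    registered in descending match order, mirroring the loop that repeatedly
    selects the image with the largest number of matching features.
    """
    eligible = {k: v for k, v in match_counts.items() if v >= threshold}
    return sorted(eligible, key=lambda k: eligible[k], reverse=True)

counts = {"img_a.jpg": 120, "img_b.jpg": 30, "img_c.jpg": 95}
print(order_images_for_reconstruction(counts))  # -> ['img_a.jpg', 'img_c.jpg']
```

In a real pipeline the match counts would be recomputed after each registration, since newly calibrated images join the anchor set and change which candidates qualify.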
Type: Application
Filed: April 18, 2018
Publication date: October 24, 2019
Applicant: Reconstruct Inc.
Inventors: Mani Golparvar-Fard, Derek Hoiem, Jacob Je-Chian Lin, Kook In Han, Joseph M. Degol