Patents by Inventor Derek Hoiem
Derek Hoiem has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220398808
Abstract: A method of automatically producing maps and measures that visualize and quantify placement and progress of construction elements, such as walls and ducts, in images. From a set of images depicting a scene, element confidences per pixel in each of the images are produced using a classification model that assigns such confidences. Thereafter, element confidences for each respective one of a set of 3D points represented in the scene are determined by aggregating the per-pixel element confidences from corresponding pixels of each of the images known to observe the respective 3D point. These element confidences are then updated based on primitive templates representing element geometry to produce a 3D progress model of the scene.
Type: Application
Filed: April 28, 2022
Publication date: December 15, 2022
Inventors: Abhishek Bhatia, Derek Hoiem
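The aggregation step described in the abstract can be sketched in a few lines. The mean-pooling rule and the dictionary layout below are illustrative assumptions, not details from the patent:

```python
import numpy as np

def aggregate_point_confidences(conf_maps, observations):
    """Aggregate per-pixel element confidences into per-3D-point confidences.

    conf_maps: dict image_id -> 2D array of per-pixel confidences in [0, 1].
    observations: dict point_id -> list of (image_id, row, col) pixels known
        to observe that 3D point.
    Returns dict point_id -> mean confidence over all observing pixels.
    """
    point_conf = {}
    for pid, pixels in observations.items():
        vals = [conf_maps[img][r, c] for img, r, c in pixels]
        # Pool across views; points with no observations default to 0.
        point_conf[pid] = float(np.mean(vals)) if vals else 0.0
    return point_conf
```

In the patented method the pooled confidences are further refined against primitive templates of element geometry; that step is omitted here.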
-
Patent number: 11443444
Abstract: A 360 panoramic video sequence of an architectural or industrial site includes a plurality of 360 images. 3D poses of the 360 images are determined with respect to one another, and a subset of the images is extracted for further processing according to selection criteria. The 3D poses of the extracted images are refined based on determined correspondences of the features in the images, and a 3D point cloud of the site is developed from the extracted images with refined pose estimates. An “as-built” representation of the site captured in the panoramic video sequence is created from the sparse 3D point cloud and aligned to a 2D or 3D plan of the site. Once so aligned, the extracted images may be presented to a user in conjunction with their plan positions (e.g., overlaid on the site plan). Optionally, a point cloud or mesh view of the site may also be returned.
Type: Grant
Filed: June 22, 2021
Date of Patent: September 13, 2022
Assignee: Reconstruct, Inc.
Inventors: Derek Hoiem, Shengze Wang, Abhishek Bhatia
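The abstract does not specify how the reconstruction is aligned to the plan; a standard least-squares similarity alignment (the Umeyama method) is one plausible way to implement that step, sketched here in 2D under the assumption that point correspondences between the reconstruction and the plan are already known:

```python
import numpy as np

def align_to_plan(points, plan_points):
    """Estimate a 2D similarity transform (scale s, rotation R, translation t)
    mapping reconstructed points onto plan coordinates, q ~ s * R @ p + t,
    by least squares (Umeyama alignment).

    points, plan_points: (N, 2) arrays with corresponding rows.
    """
    mu_p, mu_q = points.mean(0), plan_points.mean(0)
    P, Q = points - mu_p, plan_points - mu_q
    # Cross-covariance between centered target and source point sets.
    U, S, Vt = np.linalg.svd(Q.T @ P / len(points))
    D = np.eye(2)
    if np.linalg.det(U @ Vt) < 0:
        D[1, 1] = -1  # guard against a reflection solution
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / P.var(0).sum()
    t = mu_q - s * R @ mu_p
    return s, R, t
```

With the transform in hand, every extracted image's refined pose can be mapped into plan coordinates for the overlay view the abstract describes.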
-
Patent number: 11288412
Abstract: A system initializes a set of calibrated images with known 3D pose relative to a 3D building information model (BIM) to be anchor images, detects features within images of unknown position and orientation, and determines matches with features of the calibrated images. The system determines a subset of the images that have at least a threshold number of matching features, selects the image from the subset having the largest number of matching features, and executes a reconstruction algorithm using the image and the anchor images to calibrate the image to the BIM and generate an initial 3D point cloud model. The system repeats the last steps to identify a second image from the subset and performs, starting with the initial 3D point cloud model and using the second image, 3D reconstruction to generate an updated 3D point cloud model that is displayable in a graphical user interface.
Type: Grant
Filed: April 18, 2018
Date of Patent: March 29, 2022
Assignee: The Board of Trustees of the University of Illinois
Inventors: Mani Golparvar-Fard, Derek Hoiem, Jacob Je-Chian Lin, Kook In Han, Joseph M. Degol
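The selection loop in the abstract (keep images above a match threshold, then take the best-matched one) reduces to a small routine. The `match_counts` layout is an assumption for illustration; the patent's feature matching itself is not shown:

```python
def select_next_image(match_counts, threshold):
    """Pick the next image to calibrate against the anchor set.

    match_counts: dict image_id -> number of feature matches against the
        calibrated anchor images.
    threshold: minimum number of matches required to qualify.
    Returns the qualifying image id with the most matches, or None.
    """
    qualified = {img: n for img, n in match_counts.items() if n >= threshold}
    if not qualified:
        return None
    return max(qualified, key=qualified.get)
```

Calling this repeatedly, removing each selected image and re-counting matches after reconstruction, mirrors the "repeats the last steps" behavior described above.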
-
Publication number: 20210343032
Abstract: A 360 panoramic video sequence of an architectural or industrial site includes a plurality of 360 images. 3D poses of the 360 images are determined with respect to one another, and a subset of the images is extracted for further processing according to selection criteria. The 3D poses of the extracted images are refined based on determined correspondences of the features in the images, and a 3D point cloud of the site is developed from the extracted images with refined pose estimates. An “as-built” representation of the site captured in the panoramic video sequence is created from the sparse 3D point cloud and aligned to a 2D or 3D plan of the site. Once so aligned, the extracted images may be presented to a user in conjunction with their plan positions (e.g., overlaid on the site plan). Optionally, a point cloud or mesh view of the site may also be returned.
Type: Application
Filed: June 22, 2021
Publication date: November 4, 2021
Inventors: Derek Hoiem, Shengze Wang, Abhishek Bhatia
-
Patent number: 11074701
Abstract: A 360 panoramic video sequence of an architectural or industrial site includes a plurality of 360 images. 3D poses of the 360 images are determined with respect to one another, and a subset of the images is extracted for further processing according to selection criteria. The 3D poses of the extracted images are refined based on determined correspondences of the features in the images, and a 3D point cloud of the site is developed from the extracted images with refined pose estimates. An “as-built” representation of the site captured in the panoramic video sequence is created from the sparse 3D point cloud and aligned to a 2D or 3D plan of the site. Once so aligned, the extracted images may be presented to a user in conjunction with their plan positions (e.g., overlaid on the site plan). Optionally, a point cloud or mesh view of the site may also be returned.
Type: Grant
Filed: October 7, 2020
Date of Patent: July 27, 2021
Assignee: Reconstruct, Inc.
Inventors: Derek Hoiem, Shengze Wang, Abhishek Bhatia
-
Publication number: 20210183080
Abstract: A 360 panoramic video sequence of an architectural or industrial site includes a plurality of 360 images. 3D poses of the 360 images are determined with respect to one another, and a subset of the images is extracted for further processing according to selection criteria. The 3D poses of the extracted images are refined based on determined correspondences of the features in the images, and a 3D point cloud of the site is developed from the extracted images with refined pose estimates. An “as-built” representation of the site captured in the panoramic video sequence is created from the sparse 3D point cloud and aligned to a 2D or 3D plan of the site. Once so aligned, the extracted images may be presented to a user in conjunction with their plan positions (e.g., overlaid on the site plan). Optionally, a point cloud or mesh view of the site may also be returned.
Type: Application
Filed: October 7, 2020
Publication date: June 17, 2021
Inventors: Derek Hoiem, Shengze Wang, Abhishek Bhatia
-
Publication number: 20190325089
Abstract: A system initializes a set of calibrated images with known 3D pose relative to a 3D building information model (BIM) to be anchor images, detects features within images of unknown position and orientation, and determines matches with features of the calibrated images. The system determines a subset of the images that have at least a threshold number of matching features, selects the image from the subset having the largest number of matching features, and executes a reconstruction algorithm using the image and the anchor images to calibrate the image to the BIM and generate an initial 3D point cloud model. The system repeats the last steps to identify a second image from the subset and performs, starting with the initial 3D point cloud model and using the second image, 3D reconstruction to generate an updated 3D point cloud model that is displayable in a graphical user interface.
Type: Application
Filed: April 18, 2018
Publication date: October 24, 2019
Applicant: Reconstruct Inc.
Inventors: Mani Golparvar-Fard, Derek Hoiem, Jacob Je-Chian Lin, Kook In Han, Joseph M. Degol
-
Patent number: 9330500
Abstract: An image into which one or more objects are to be inserted is obtained. Based on the image, both a 3-dimensional (3D) representation and a light model of the scene in the image are generated. One or more objects are added to the 3D representation of the scene. The 3D representation of the scene is rendered, based on the light model, to generate a modified image that is the obtained image modified to include the one or more objects.
Type: Grant
Filed: December 8, 2011
Date of Patent: May 3, 2016
Assignee: The Board of Trustees of the University of Illinois
Inventors: Kevin Karsch, Varsha Chandrashekhar Hedau, David A. Forsyth, Derek Hoiem
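The abstract does not detail how the light model drives rendering. As a crude stand-in, a single directional light with Lambertian shading can illustrate the compositing idea; the shading model, array layout, and default ambient term here are all assumptions, not the patented light model:

```python
import numpy as np

def composite_object(image, obj_albedo, obj_normals, obj_mask,
                     light_dir, ambient=0.2):
    """Shade an inserted object with one estimated directional light and
    composite it into the image.

    image, obj_albedo: (H, W, 3) float arrays; obj_normals: (H, W, 3) unit
    normals of the object's visible surface; obj_mask: (H, W) bool mask of
    pixels the object covers; light_dir: estimated light direction.
    """
    L = np.asarray(light_dir, float)
    L = L / np.linalg.norm(L)
    # Lambertian term; clamp negatives for surfaces facing away from the light.
    ndotl = np.clip(obj_normals @ L, 0.0, None)
    shaded = obj_albedo * (ambient + (1 - ambient) * ndotl[..., None])
    out = image.copy()
    out[obj_mask] = shaded[obj_mask]  # overwrite only the object's pixels
    return out
```

A full implementation would also render shadows and interreflections from the 3D scene representation, which this sketch omits.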
-
Patent number: 9117281
Abstract: Surface segmentation from RGB and depth images is described. In one example, a computer receives an image of a scene. The image has pixels which each have an associated color value and an associated depth value representing a distance from an image sensor to a surface in the scene. The computer uses the depth values to derive a set of three-dimensional planes present within the scene. A cost function is used to determine whether each pixel belongs to one of the planes, and the image elements are labeled accordingly. The cost function has terms dependent on the depth value of a pixel, and the color values of the pixels and at least one neighboring pixel. In various examples, the planes can be extended until they intersect to determine the extent of the scene, and pixels not belonging to a plane can be labeled as objects on the surfaces.
Type: Grant
Filed: November 2, 2011
Date of Patent: August 25, 2015
Assignee: Microsoft Corporation
Inventors: Derek Hoiem, Pushmeet Kohli
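The depth (data) term of such a cost function, assigning each back-projected pixel to its nearest plane within a tolerance, can be sketched as follows. The color and neighbor-smoothness terms from the abstract are omitted, and the plane parameterization is an assumption:

```python
import numpy as np

def label_pixels(points, planes, depth_threshold):
    """Label each 3D point (a back-projected pixel) with the index of the
    plane it lies on, or -1 ("object") if no plane fits closely enough.

    points: (N, 3) array of 3D points.
    planes: list of (normal, d) pairs with unit normal n, plane n.x + d = 0.
    depth_threshold: maximum point-to-plane distance for membership.
    """
    labels = np.full(len(points), -1)
    best = np.full(len(points), np.inf)
    for i, (n, d) in enumerate(planes):
        dist = np.abs(points @ np.asarray(n, float) + d)
        # Claim points that this plane fits better than any plane so far.
        better = (dist < best) & (dist <= depth_threshold)
        labels[better] = i
        best = np.minimum(best, dist)
    return labels
```

Points left at -1 correspond to the abstract's non-planar pixels, which the method labels as objects resting on the surfaces.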
-
Publication number: 20130147798
Abstract: An image into which one or more objects are to be inserted is obtained. Based on the image, both a 3-dimensional (3D) representation and a light model of the scene in the image are generated. One or more objects are added to the 3D representation of the scene. The 3D representation of the scene is rendered, based on the light model, to generate a modified image that is the obtained image modified to include the one or more objects.
Type: Application
Filed: December 8, 2011
Publication date: June 13, 2013
Applicant: The Board of Trustees of the University of Illinois
Inventors: Kevin Karsch, Varsha Chandrashekhar Hedau, David A. Forsyth, Derek Hoiem
-
Publication number: 20130107010
Abstract: Surface segmentation from RGB and depth images is described. In one example, a computer receives an image of a scene. The image has pixels which each have an associated color value and an associated depth value representing a distance from an image sensor to a surface in the scene. The computer uses the depth values to derive a set of three-dimensional planes present within the scene. A cost function is used to determine whether each pixel belongs to one of the planes, and the image elements are labeled accordingly. The cost function has terms dependent on the depth value of a pixel, and the color values of the pixels and at least one neighboring pixel. In various examples, the planes can be extended until they intersect to determine the extent of the scene, and pixels not belonging to a plane can be labeled as objects on the surfaces.
Type: Application
Filed: November 2, 2011
Publication date: May 2, 2013
Applicant: Microsoft Corporation
Inventors: Derek Hoiem, Pushmeet Kohli
-
Patent number: 7512899
Abstract: A unified user interface includes one or more component tables and a master table. The one or more component tables include resource information for the user interface with respect to a particular component. The master table includes resource information for the application and is merged from the one or more component tables. Component tables may be added or removed at any time, and the master table is recreated by again merging whatever component tables then exist. The master table is used by a host application to build the user interface for a suite application. Components are loaded, and their corresponding user interfaces built, only when the appropriate commands are accessed. Thus, applications may be developed, and components added or modified later, without rewriting the shell application or re-releasing a product.
Type: Grant
Filed: March 6, 2000
Date of Patent: March 31, 2009
Assignee: Microsoft Corporation
Inventors: Derek Hoiem, Martyn S. Lovell, Steve Seixeiro
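The merge-and-rebuild behavior can be illustrated with plain dictionaries standing in for the resource tables; the key and value formats below are assumptions for illustration, not the patent's actual resource layout:

```python
def build_master_table(component_tables):
    """Merge per-component resource tables into a single master table.

    component_tables: iterable of dicts mapping a command id to its resource
    information. Later components override earlier ones on key collisions.
    The master can be rebuilt at any time from whichever components remain.
    """
    master = {}
    for table in component_tables:
        master.update(table)
    return master
```

Because the master table is always derived from the current set of component tables, adding or removing a component only requires calling the merge again, which mirrors the abstract's claim that components can change without rewriting the shell application.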