Patents Assigned to Matterport, Inc.
  • Patent number: 10803208
    Abstract: Systems and techniques for processing three-dimensional (3D) data are presented. Captured three-dimensional (3D) data associated with a 3D model of an architectural environment is received and at least a portion of the captured 3D data associated with a flat surface is identified. Furthermore, missing data associated with the portion of the captured 3D data is identified and additional 3D data for the missing data is generated based on other data associated with the portion of the captured 3D data.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: October 13, 2020
    Assignee: Matterport, Inc.
    Inventors: Matthew Tschudy Bell, David Alan Gausebeck, Daniel Ford, Gregory William Coombe
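The flat-surface fill described in the abstract of 10803208 above can be illustrated with a small sketch. This is not Matterport's implementation: it assumes a captured point cloud, fits a plane to the points bounding a hole with a least-squares fit, and resamples that plane to synthesize the missing 3D data. The function names and grid resolution are illustrative only.
```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def fill_flat_hole(boundary_points, grid_resolution=0.05):
    """Generate synthetic 3D points filling a hole bounded by points lying
    on a roughly flat surface such as a wall or floor."""
    centroid, normal = fit_plane(boundary_points)
    # Build two in-plane axes orthogonal to the normal.
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:          # normal is vertical; pick another axis
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    # Project the boundary into plane coordinates and sample its bounding box.
    local = (boundary_points - centroid) @ np.stack([u, v]).T
    lo, hi = local.min(axis=0), local.max(axis=0)
    us = np.arange(lo[0], hi[0], grid_resolution)
    vs = np.arange(lo[1], hi[1], grid_resolution)
    uu, vv = np.meshgrid(us, vs)
    samples = np.stack([uu.ravel(), vv.ravel()], axis=1)
    # Lift the 2D samples back onto the fitted plane in world coordinates.
    return centroid + samples[:, :1] * u + samples[:, 1:2] * v

if __name__ == "__main__":
    # Points ringing a hole in a wall at x = 2.0 (synthetic example data).
    ring = np.array([[2.0, y, z] for y in (0.0, 1.0) for z in (0.0, 1.0)])
    print(fill_flat_hole(ring).shape)
```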
  • Patent number: 10775959
    Abstract: This application generally relates to defining, displaying and interacting with tags in a 3D model. In an embodiment, a method includes generating, by a system including a processor, a three-dimensional model of an environment based on sets of aligned three-dimensional data captured from the environment, and associating tags with defined locations of the three-dimensional model, wherein the tags are respectively represented by tag icons that are spatially aligned with the defined locations of the three-dimensional model as included in different representations of the three-dimensional model rendered via an interface of a device, wherein the different representations correspond to different perspectives of the three-dimensional model, and wherein selection of the tag icons causes the tags respectively associated therewith to be rendered at the device.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: September 15, 2020
    Assignee: Matterport, Inc.
    Inventors: James Mildrew, Matthew Tschudy Bell, Dustin Michael Cook, Preston Cowley, Lester Lee, Peter McColgan, Daniel Prochazka, Brian Schulman, James Sundra, Alan Tan
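As a rough illustration of keeping tag icons spatially aligned across perspectives (patent 10775959 above), the sketch below anchors each tag at a fixed 3D position and reprojects it with a simple pinhole camera whenever the view changes. The Tag class, intrinsics, and projection are assumptions for illustration, not the disclosed interface.
```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Tag:
    """A tag anchored to a fixed 3D location in the model."""
    label: str
    position: np.ndarray  # (x, y, z) in model coordinates

def project_tag_icons(tags, view_matrix, fx=800.0, fy=800.0, cx=640.0, cy=360.0):
    """Project each tag's anchor point into screen space for the current
    camera perspective so its icon stays spatially aligned with the model."""
    icons = []
    for tag in tags:
        p = view_matrix @ np.append(tag.position, 1.0)   # world -> camera
        if p[2] <= 0:                                    # behind the camera: skip
            continue
        u = fx * p[0] / p[2] + cx                        # pinhole projection
        v = fy * p[1] / p[2] + cy
        icons.append((tag.label, u, v))
    return icons

if __name__ == "__main__":
    tags = [Tag("Water heater", np.array([1.0, 0.5, 3.0]))]
    view = np.eye(4)                                     # camera at the origin
    print(project_tag_icons(tags, view))
```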
  • Patent number: 10706615
    Abstract: Systems and techniques for determining and/or generating data for an architectural opening area associated with a three-dimensional (3D) model are presented. A portion of an image associated with a 3D model that corresponds to a window view or another architectural opening area is identified based at least in part on color data or depth data. Furthermore, a surface associated with the 3D model and visual data for the window view or the other architectural opening area are determined. The visual data for the window view or the other architectural opening area is applied to the surface associated with the 3D model.
    Type: Grant
    Filed: December 8, 2015
    Date of Patent: July 7, 2020
    Assignee: Matterport, Inc.
    Inventors: Daniel Ford, David Alan Gausebeck, Gunnar Hovden, Matthew Tschudy Bell
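A minimal sketch of the window-detection idea in 10706615: flag pixels that are both very bright and missing depth returns (glass often yields no depth), then paste captured view pixels onto the corresponding surface texture. The cues, thresholds, and function names are assumptions chosen for illustration.
```python
import numpy as np

def window_mask(color, depth, brightness_thresh=0.9, depth_invalid=0.0):
    """Flag pixels that look like a window opening: very bright color and
    missing depth returns.  `color` is HxWx3 in [0, 1]; `depth` is HxW with
    `depth_invalid` meaning no return from the sensor."""
    brightness = color.mean(axis=2)
    return (brightness > brightness_thresh) & (depth == depth_invalid)

def apply_view_to_surface(texture, mask, view_image):
    """Paste the captured window-view pixels onto the surface texture that
    the 3D model uses for that opening."""
    out = texture.copy()
    out[mask] = view_image[mask]
    return out

if __name__ == "__main__":
    h, w = 4, 6
    color = np.ones((h, w, 3)) * 0.95        # bright region seen through glass
    depth = np.zeros((h, w))                 # no depth returns there
    texture = np.zeros((h, w, 3))
    view = np.full((h, w, 3), 0.5)           # captured exterior view
    print(apply_view_to_surface(texture, window_mask(color, depth), view).mean())
```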
  • Patent number: 10666929
    Abstract: This disclosure is directed to a hardware system for inverse graphics capture. An inverse graphics capture system (IGCS) captures data regarding a physical space that can be used to generate a photorealistic graphical model of that physical space. In certain approaches, the system includes hardware and accompanying software used to create a photorealistic six degree of freedom (6DOF) graphical model of the physical space. In certain approaches, the system includes hardware and accompanying software used for projection mapping onto the physical space. In certain approaches, the model produced by the IGCS is built using data regarding the geometry, lighting, surfaces, and environment of the physical space. In certain approaches, the model produced by the IGCS is both photorealistic and fully modifiable.
    Type: Grant
    Filed: August 6, 2018
    Date of Patent: May 26, 2020
    Assignee: Matterport, Inc.
    Inventors: Gary Bradski, Moshe Benezra, Daniel A. Aden, Ethan Rublee
  • Patent number: 10655969
    Abstract: Systems and techniques for determining and/or generating a navigation path through a three-dimensional (3D) model are presented. At least one waypoint location within a captured 3D model of an architectural environment is determined. A path within the captured 3D model, to navigate between a first location associated with the captured 3D model and a second location associated with the captured 3D model, is determined based on the at least one waypoint location. Visual data indicative of 2D data or 3D data of the captured 3D model along the path is transmitted to a remote client device to simulate navigation of the path within the captured 3D model between the first location and the second location.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: May 19, 2020
    Assignee: Matterport, Inc.
    Inventors: Kevin Allen Bjorke, Matthew Tschudy Bell
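The waypoint-based path determination in 10655969 can be sketched as a shortest-path search over a waypoint graph. Dijkstra's algorithm and the Euclidean edge costs below are assumptions for illustration; the patent does not commit to a particular search.
```python
import heapq
import math

def shortest_waypoint_path(waypoints, edges, start, goal):
    """Dijkstra over a waypoint graph.  `waypoints` maps id -> (x, y, z);
    `edges` maps id -> iterable of neighbouring ids."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, math.inf):
            continue
        for nxt in edges[node]:
            nd = d + math.dist(waypoints[node], waypoints[nxt])
            if nd < dist.get(nxt, math.inf):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    # Walk back from the goal to recover the waypoint sequence.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

if __name__ == "__main__":
    pts = {"entry": (0, 0, 0), "hall": (3, 0, 0), "kitchen": (3, 4, 0)}
    nbrs = {"entry": ["hall"], "hall": ["entry", "kitchen"], "kitchen": ["hall"]}
    print(shortest_waypoint_path(pts, nbrs, "entry", "kitchen"))
```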
  • Patent number: 10650106
    Abstract: Systems and methods are provided for automatically separating and reconstructing individual stories of a three-dimensional model of a multi-story structure based on captured image data of the multi-story structure. In an aspect, a system is provided that includes an analysis component configured to analyze a three-dimensional model of a structure comprising a plurality of stories, generated based on captured three-dimensional image data of the structure, and identify the respective stories of the plurality of stories with which features of the three-dimensional model are associated. The system further includes a separation component configured to separate the respective stories from one another based on the features respectively associated therewith, and an interface component configured to generate a graphical user interface that facilitates viewing the respective stories as separated from one another.
    Type: Grant
    Filed: January 28, 2016
    Date of Patent: May 12, 2020
    Assignee: Matterport, Inc.
    Inventors: Matthew Tschudy Bell, Haakon Erichsen, Mykhaylo Kurinnyy
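A deliberately simplified illustration of associating model features with stories (10650106): sort feature heights and cut wherever consecutive heights jump by more than a typical floor-to-floor gap. The gap threshold and the height-only clustering are assumptions, not the patented analysis.
```python
import numpy as np

def split_stories(feature_heights, min_gap=2.2):
    """Group model features into stories by sorting their heights (metres)
    and cutting wherever consecutive heights jump by more than `min_gap`.
    Returns one array of heights per detected story."""
    sorted_h = np.sort(np.asarray(feature_heights))
    cuts = np.where(np.diff(sorted_h) > min_gap)[0] + 1
    return np.split(sorted_h, cuts)

if __name__ == "__main__":
    # Sparse feature heights sampled from a two-story structure.
    heights = [0.1, 0.4, 1.2, 2.3, 2.6, 5.1, 5.4, 6.8, 7.2]
    for i, story in enumerate(split_stories(heights)):
        print(f"story {i}: {story}")
```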
  • Patent number: 10586386
    Abstract: Systems and techniques for processing and/or transmitting three-dimensional (3D) data are presented. A partitioning component receives captured 3D data associated with a 3D model of an interior environment and partitions the captured 3D data into at least one data chunk associated with at least a first level of detail and a second level of detail. A data component stores 3D data including at least the first level of detail and the second level of detail for the at least one data chunk. An output component transmits a portion of data from the at least one data chunk that is associated with the first level of detail or the second level of detail to a remote client device based on information associated with the first level of detail and the second level of detail.
    Type: Grant
    Filed: June 13, 2018
    Date of Patent: March 10, 2020
    Assignee: Matterport, Inc.
    Inventors: Matthew Tschudy Bell, David Alan Gausebeck, Gregory William Coombe, Daniel Ford
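The chunk-plus-level-of-detail idea in 10586386 can be pictured as follows: partition points into spatial chunks, keep a fine and a coarse version of each chunk, and serve whichever level the client can handle. The grid partition, crude decimation, and bandwidth flag are illustrative assumptions, not the disclosed components.
```python
import numpy as np

def build_chunks(points, chunk_size=2.0, coarse_keep=0.1):
    """Partition points into axis-aligned spatial chunks and keep two levels
    of detail per chunk: all points (fine) and a crude decimation (coarse)."""
    chunks = {}
    cells = np.floor(points[:, :2] / chunk_size).astype(int)
    for cell in {tuple(c) for c in cells}:
        members = points[(cells == cell).all(axis=1)]
        n_coarse = max(1, int(len(members) * coarse_keep))
        chunks[cell] = {"fine": members, "coarse": members[:n_coarse]}
    return chunks

def serve_chunk(chunks, cell, bandwidth_ok):
    """Send the fine level when the client can handle it, otherwise coarse."""
    return chunks[cell]["fine" if bandwidth_ok else "coarse"]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 4, size=(1000, 3))
    chunks = build_chunks(pts)
    cell = next(iter(chunks))
    print(len(serve_chunk(chunks, cell, bandwidth_ok=False)),
          len(serve_chunk(chunks, cell, bandwidth_ok=True)))
```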
  • Patent number: 10540054
    Abstract: Techniques are disclosed for automated selection of navigation points for navigating through a computer-generated virtual environment such as, for example, a virtual reality (VR) environment. Specifically, an input set of connected navigation points in a virtual environment is automatically pruned to a connected subset thereof, according to one or more selection factors where at least one of the selection factors is whether the subset will continue to be “connected” after pruning a navigation point from the input set. In addition to whether the pruning of a navigation point will allow the remaining navigation points to remain connected, techniques for pruning based on one or more additional selection factors are also disclosed. According to one additional selection factor, navigation point pruning is based on the degree to which coverage of an input set of navigation points would be reduced by pruning any given navigation point from the input set.
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: January 21, 2020
    Assignee: Matterport, Inc.
    Inventors: Gunnar Hovden, David V. Buchhofer, Jr., Matthew Bell
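A hedged sketch of the connectivity factor described in 10540054: a candidate navigation point is pruned only if the remaining points stay connected. The coverage factor mentioned in the abstract is acknowledged in a comment but omitted here for brevity; the greedy loop and data structures are assumptions.
```python
from collections import deque

def is_connected(nodes, edges):
    """BFS reachability within the restricted node set: True if every
    remaining node can be reached from an arbitrary start node."""
    nodes = set(nodes)
    if not nodes:
        return True
    seen, queue = set(), deque([next(iter(nodes))])
    while queue:
        n = queue.popleft()
        if n in seen:
            continue
        seen.add(n)
        queue.extend(m for m in edges.get(n, ()) if m in nodes)
    return seen == nodes

def prune_navigation_points(nodes, edges, candidates):
    """Greedily drop candidate navigation points, keeping a point whenever
    removing it would disconnect the remaining set.  (The patent also weighs
    how much spatial coverage would be lost; that factor is omitted here.)"""
    kept = set(nodes)
    for c in candidates:
        trial = kept - {c}
        if is_connected(trial, edges):
            kept = trial
    return kept

if __name__ == "__main__":
    edges = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
    print(sorted(prune_navigation_points(["a", "b", "c", "d"], edges, ["b", "c"])))
```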
  • Patent number: 10534962
    Abstract: Techniques are provided for increasing the accuracy of automated classifications produced by a machine learning engine. Specifically, the classification produced by a machine learning engine for one photo-realistic image is adjusted based on the classifications produced by the machine learning engine for other photo-realistic images that correspond to the same portion of a 3D model that has been generated based on the photo-realistic images. Techniques are also provided for using the classifications of the photo-realistic images that were used to create a 3D model to automatically classify portions of the 3D model. The classifications assigned to the various portions of the 3D model in this manner may also be used as a factor for automatically segmenting the 3D model.
    Type: Grant
    Filed: June 17, 2017
    Date of Patent: January 14, 2020
    Assignee: Matterport, Inc.
    Inventors: Gunnar Hovden, Mykhaylo Kurinnyy, Matthew Bell
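One way to picture the cross-image adjustment in 10534962, under the assumption of a simple majority vote: pool the classifier outputs of every photo that covers the same 3D-model region and override outlier predictions with the pooled label. The voting scheme and mappings below are illustrative, not the disclosed technique.
```python
from collections import Counter

def adjust_classifications(image_labels, region_of_image):
    """Pool per-image classifier outputs that map to the same 3D-model
    region and replace each image's label with the region's majority vote.
    `image_labels`: image id -> predicted label.
    `region_of_image`: image id -> id of the model region the image covers."""
    votes = {}
    for img, label in image_labels.items():
        votes.setdefault(region_of_image[img], Counter())[label] += 1
    region_label = {r: c.most_common(1)[0][0] for r, c in votes.items()}
    return {img: region_label[region_of_image[img]] for img in image_labels}

if __name__ == "__main__":
    labels = {"img1": "kitchen", "img2": "kitchen", "img3": "bathroom"}
    regions = {"img1": "roomA", "img2": "roomA", "img3": "roomA"}
    # img3's outlier prediction is overridden by the region's majority vote.
    print(adjust_classifications(labels, regions))
```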
  • Patent number: 10529141
    Abstract: Systems and methods for building a three-dimensional composite scene are disclosed. Certain embodiments of the systems and methods may include the use of a three-dimensional capture device that captures a plurality of three-dimensional images of an environment. Some embodiments may further include elements concerning aligning and/or mapping the captured images. Various embodiments may further include elements concerning reconstructing the environment from which the images were captured. The methods disclosed herein may be performed by a program embodied on a non-transitory computer-readable storage medium when the program is executed by a processor.
    Type: Grant
    Filed: November 1, 2013
    Date of Patent: January 7, 2020
    Assignee: Matterport, Inc.
    Inventors: Matthew Bell, David Gausebeck, Michael Beebe
  • Patent number: 10529143
    Abstract: Systems and methods for building a three-dimensional composite scene are disclosed. Certain embodiments of the systems and methods may include the use of a three-dimensional capture device that captures a plurality of three-dimensional images of an environment. Some embodiments may further include elements concerning aligning and/or mapping the captured images. Various embodiments may further include elements concerning reconstructing the environment from which the images were captured. The methods disclosed herein may be performed by a program embodied on a non-transitory computer-readable storage medium when the program is executed by a processor.
    Type: Grant
    Filed: November 1, 2013
    Date of Patent: January 7, 2020
    Assignee: Matterport, Inc.
    Inventors: Matthew Bell, David Gausebeck, Michael Beebe
  • Patent number: 10529142
    Abstract: Systems and methods for building a three-dimensional composite scene are disclosed. Certain embodiments of the systems and methods may include the use of a three-dimensional capture device that captures a plurality of three-dimensional images of an environment. Some embodiments may further include elements concerning aligning and/or mapping the captured images. Various embodiments may further include elements concerning reconstructing the environment from which the images were captured. The methods disclosed herein may be performed by a program embodied on a non-transitory computer-readable storage medium when the program is executed by a processor.
    Type: Grant
    Filed: November 1, 2013
    Date of Patent: January 7, 2020
    Assignee: Matterport, Inc.
    Inventors: Matthew Bell, David Gausebeck, Michael Beebe
  • Patent number: 10482679
    Abstract: Systems and methods for building a three-dimensional composite scene are disclosed. Certain embodiments of the systems and methods may include the use of a three-dimensional capture device that captures a plurality of three-dimensional images of an environment. Some embodiments may further include elements concerning aligning and/or mapping the captured images. Various embodiments may further include elements concerning reconstructing the environment from which the images were captured. The methods disclosed herein may be performed by a program embodied on a non-transitory computer-readable storage medium when the program is executed by a processor.
    Type: Grant
    Filed: November 1, 2013
    Date of Patent: November 19, 2019
    Assignee: Matterport, Inc.
    Inventors: Matthew Bell, David Gausebeck, Michael Beebe
  • Patent number: 10325399
    Abstract: Systems and methods for generating three-dimensional models having regions of various resolutions are provided. In particular, imagery data can be captured and utilized to generate three-dimensional models. Regions of texture can be mapped to regions of a three-dimensional model when rendered. Resolutions of texture can be selectively altered and regions of texture can be selectively segmented to reduce texture memory cost. Texture can be algorithmically generated based on alternative texturing techniques. Models can be rendered having regions at various resolutions.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: June 18, 2019
    Assignee: Matterport, Inc.
    Inventors: Daniel Ford, Matthew Tschudy Bell, David Alan Gausebeck, Mykhaylo Kurinnyy
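A small sketch of region-selective texture resolution (10325399): downsample one region of a texture atlas by block averaging while the rest stays at full resolution, trading detail in that region for texture memory. Block averaging is an assumed stand-in for whatever resampling the system actually uses.
```python
import numpy as np

def downsample_region(texture, region, factor):
    """Reduce the resolution of one rectangular texture region by block
    averaging, leaving the rest of the texture at full resolution.
    `region` = (row0, row1, col0, col1); `factor` must divide its size."""
    r0, r1, c0, c1 = region
    block = texture[r0:r1, c0:c1]
    h, w, ch = block.shape
    coarse = block.reshape(h // factor, factor, w // factor, factor, ch).mean(axis=(1, 3))
    # Re-expand the averaged blocks so the atlas keeps its layout but the
    # region now carries only 1 / factor**2 as much unique detail.
    out = texture.copy()
    out[r0:r1, c0:c1] = np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    atlas = rng.random((8, 8, 3))
    low = downsample_region(atlas, (0, 4, 0, 4), factor=2)
    print(np.unique(atlas[:4, :4]).size, "->", np.unique(low[:4, :4]).size,
          "unique texel values in the coarsened region")
```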
  • Patent number: 10304240
    Abstract: The present disclosure concerns a methodology that allows a user to “orbit” around a model on a specific axis of rotation and view an orthographic floor plan of the model. A user may view and “walk through” the model while staying at a specific height above the ground with smooth transitions between orbiting, floor plan, and walking modes.
    Type: Grant
    Filed: October 2, 2017
    Date of Patent: May 28, 2019
    Assignee: Matterport, Inc.
    Inventors: Matthew Bell, Michael Beebe
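The orbit and walking modes in 10304240 can be pictured with some assumed camera math: the camera circles a pivot about a vertical axis of rotation, and a smoothstep blends its height toward eye level when transitioning to walking mode. None of these formulas are taken from the patent.
```python
import math

def orbit_camera(pivot, radius, azimuth_deg, height):
    """Camera position orbiting `pivot` about a vertical axis at a fixed
    radius and height; the camera always looks back at the pivot."""
    a = math.radians(azimuth_deg)
    eye = (pivot[0] + radius * math.cos(a),
           pivot[1] + radius * math.sin(a),
           pivot[2] + height)
    return eye, pivot                      # (eye position, look-at target)

def blend_heights(orbit_h, walk_h, t):
    """Smoothstep between the orbit height and an eye-level walking height
    so the transition between modes feels continuous (t goes from 0 to 1)."""
    s = t * t * (3 - 2 * t)
    return orbit_h + (walk_h - orbit_h) * s

if __name__ == "__main__":
    for t in (0.0, 0.5, 1.0):
        h = blend_heights(orbit_h=12.0, walk_h=1.7, t=t)
        print(orbit_camera(pivot=(0, 0, 0), radius=8.0, azimuth_deg=45, height=h))
```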
  • Patent number: 10163261
    Abstract: Systems and methods for generating three-dimensional models with correlated three-dimensional and two-dimensional imagery data are provided. In particular, imagery data can be captured in two dimensions and three dimensions. Imagery data can be transformed into models. Two-dimensional data and three-dimensional data can be correlated within models. Two-dimensional data can be selected for display within a three-dimensional model. Modifications can be made to the three-dimensional model and can be displayed within a three-dimensional model or within two-dimensional data. Models can transition between two-dimensional imagery data and three-dimensional imagery data.
    Type: Grant
    Filed: March 19, 2014
    Date of Patent: December 25, 2018
    Assignee: Matterport, Inc.
    Inventors: Matthew Tschudy Bell, David Alan Gausebeck, Gregory William Coombe, Daniel Ford, William John Brown
  • Patent number: 10139985
    Abstract: This application generally relates to defining, displaying and interacting with tags in a 3D model. In an embodiment, a method includes generating, by a system including a processor, a three-dimensional model of an environment based on sets of aligned three-dimensional data captured from the environment, and associating tags with defined locations of the three-dimensional model, wherein the tags are respectively represented by tag icons that are spatially aligned with the defined locations of the three-dimensional model as included in different representations of the three-dimensional model rendered via an interface of a device, wherein the different representations correspond to different perspectives of the three-dimensional model, and wherein selection of the tag icons causes the tags respectively associated therewith to be rendered at the device.
    Type: Grant
    Filed: September 21, 2016
    Date of Patent: November 27, 2018
    Assignee: Matterport, Inc.
    Inventors: James Mildrew, Matthew Tschudy Bell, Dustin Michael Cook, Preston Cowley, Lester Lee, Peter McColgan, Daniel Prochazka, Brian Schulman, James Sundra, Alan Tan
  • Patent number: 10127722
    Abstract: This application generally relates to systems and methods for generating and rendering visualizations of an object or environment using 2D and 3D image data of the object or the environment captured by a mobile device. In one embodiment, a method includes providing, by the system, a representation of a 3D model of an environment from a first perspective of the virtual camera relative to the 3D model, receiving, by the system, input requesting movement of the virtual camera relative to the 3D model, and selecting, by the system, a first 2D image from a plurality of two-dimensional images associated with different capture positions and orientations relative to the 3D model based on association of a capture position and orientation of the first 2D image with a second perspective of the virtual camera relative to the 3D model determined based on the movement.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: November 13, 2018
    Assignee: Matterport, Inc.
    Inventors: Babak Robert Shakib, Kevin Allen Bjorke, Matthew Tschudy Bell
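A sketch of the image-selection step in 10127722, with an assumed scoring rule: choose the captured 2D image whose capture position is nearest the virtual camera and whose capture orientation best matches its view direction. The weighting of the two terms is arbitrary and illustrative.
```python
import numpy as np

def select_best_image(captures, cam_pos, cam_dir, angle_weight=2.0):
    """Pick the captured 2D image whose capture position is closest to the
    virtual camera and whose capture orientation best matches its view
    direction.  `captures` is a list of (image_id, position, unit_direction)."""
    best_id, best_score = None, np.inf
    cam_dir = cam_dir / np.linalg.norm(cam_dir)
    for image_id, pos, direction in captures:
        dist = np.linalg.norm(pos - cam_pos)
        angle = np.arccos(np.clip(np.dot(direction, cam_dir), -1.0, 1.0))
        score = dist + angle_weight * angle          # lower is better
        if score < best_score:
            best_id, best_score = image_id, score
    return best_id

if __name__ == "__main__":
    captures = [
        ("pano_01", np.array([0.0, 0.0, 1.6]), np.array([1.0, 0.0, 0.0])),
        ("pano_02", np.array([4.0, 0.0, 1.6]), np.array([0.0, 1.0, 0.0])),
    ]
    print(select_best_image(captures, cam_pos=np.array([0.5, 0.2, 1.6]),
                            cam_dir=np.array([1.0, 0.1, 0.0])))
```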
  • Patent number: 10102639
    Abstract: The capture and alignment of multiple 3D scenes is disclosed. Three dimensional capture device data from different locations is received thereby allowing for different perspectives of 3D scenes. An algorithm uses the data to determine potential alignments between different 3D scenes via coordinate transformations. Potential alignments are evaluated for quality and subsequently aligned subject to the existence of sufficiently high relative or absolute quality. A global alignment of all or most of the input 3D scenes into a single coordinate frame may be achieved. The presentation of areas around a particular hole or holes takes place thereby allowing the user to capture the requisite 3D scene containing areas within the hole or holes as well as part of the surrounding area using, for example, the 3D capture device. The new 3D captured scene is aligned with existing 3D scenes and/or 3D composite scenes.
    Type: Grant
    Filed: August 29, 2017
    Date of Patent: October 16, 2018
    Assignee: Matterport, Inc.
    Inventors: Matthew Bell, David Gausebeck
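The alignment-quality evaluation in 10102639 can be illustrated with an assumed inlier-fraction score: transform one scene into the other's coordinate frame and measure how many of its points land near the other scene's points. A real system would use a spatial index and a more refined metric; the brute-force version below is only a sketch.
```python
import numpy as np

def apply_transform(points, rotation, translation):
    """Map one scene's points into another scene's coordinate frame."""
    return points @ rotation.T + translation

def alignment_quality(scene_a, scene_b, rotation, translation, inlier_dist=0.05):
    """Score a candidate alignment: transform scene_b into scene_a's frame
    and measure what fraction of its points land near some scene_a point.
    (Brute-force nearest neighbours, fine only for small point sets.)"""
    moved = apply_transform(scene_b, rotation, translation)
    d = np.linalg.norm(moved[:, None, :] - scene_a[None, :, :], axis=2)
    return float((d.min(axis=1) < inlier_dist).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    a = rng.uniform(0, 1, size=(200, 3))
    # scene_b is scene_a shifted by a known offset; the correct transform
    # should score near 1.0 and a wrong one should score poorly.
    b = a + np.array([0.3, 0.0, 0.0])
    identity = np.eye(3)
    print(alignment_quality(a, b, identity, np.array([-0.3, 0.0, 0.0])),
          alignment_quality(a, b, identity, np.array([0.0, 0.0, 0.0])))
```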
  • Patent number: 10055876
    Abstract: Systems and methods for generating three-dimensional models having regions of various resolutions are provided. In particular, imagery data can be captured and utilized to generate three-dimensional models. Regions of texture can be mapped to regions of a three-dimensional model when rendered. Resolutions of texture can be selectively altered and regions of texture can be selectively segmented to reduce texture memory cost. Texture can be algorithmically generated based on alternative texturing techniques. Models can be rendered having regions at various resolutions.
    Type: Grant
    Filed: June 6, 2014
    Date of Patent: August 21, 2018
    Assignee: Matterport, Inc.
    Inventors: Daniel Ford, Matthew Tschudy Bell, David Alan Gausebeck, Mykhaylo Kurinnyy