Patents by Inventor Matthew Tschudy Bell

Matthew Tschudy Bell has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10655969
    Abstract: Systems and techniques for determining and/or generating a navigation path through a three-dimensional (3D) model are presented. At least one waypoint location within a captured 3D model of an architectural environment is determined. A path within the captured 3D model, to navigate between a first location associated with the captured 3D model and a second location associated with the captured 3D model, is determined based on the at least one waypoint location. Visual data indicative of 2D data or 3D data of the captured 3D model along the path is transmitted to a remote client device to simulate navigation of the path within the captured 3D model between the first location and the second location.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: May 19, 2020
    Assignee: Matterport, Inc.
    Inventors: Kevin Allen Bjorke, Matthew Tschudy Bell
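As a rough illustration of the waypoint-based path determination described in the abstract above, the sketch below treats waypoint locations as graph nodes, links waypoints that lie within an assumed traversal radius, and runs Dijkstra's algorithm between the first and second locations. The coordinates, radius, and function names are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch of waypoint-based path planning through a captured 3D model.
# The waypoint coordinates, connection radius, and graph construction are
# illustrative assumptions, not the patented method.
import heapq
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_graph(points, radius):
    """Connect every pair of points closer than `radius` (assumed traversable)."""
    graph = {i: [] for i in range(len(points))}
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = dist(points[i], points[j])
            if d <= radius:
                graph[i].append((j, d))
                graph[j].append((i, d))
    return graph

def shortest_path(points, radius, start_idx, goal_idx):
    """Dijkstra over the waypoint graph; returns a list of waypoint indices."""
    graph = build_graph(points, radius)
    dist_to = {start_idx: 0.0}
    prev = {}
    pq = [(0.0, start_idx)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal_idx:
            break
        if d > dist_to.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist_to.get(v, float("inf")):
                dist_to[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if goal_idx not in dist_to:
        return None  # no traversable route between the two locations
    path = [goal_idx]
    while path[-1] != start_idx:
        path.append(prev[path[-1]])
    return list(reversed(path))

# Example: the first and last points act as the start and end locations,
# the rest are waypoints placed inside the captured model.
locations = [(0, 0, 0), (2, 0, 0), (4, 1, 0), (6, 1, 0), (8, 0, 0)]
print(shortest_path(locations, radius=2.5, start_idx=0, goal_idx=4))
```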
  • Patent number: 10650106
    Abstract: Systems and methods are provided for automatically separating and reconstructing individual stories of a three-dimensional model of a multi-story structure based on captured image data of the multi-story structure. In an aspect, a system is provided that includes an analysis component configured to analyze a three-dimensional model of a structure comprising a plurality of stories, generated based on captured three-dimensional image data of the structure, and identify the respective stories of the plurality of stories with which features of the three-dimensional model are associated. The system further includes a separation component configured to separate the respective stories from one another based on the features respectively associated therewith, and an interface component configured to generate a graphical user interface that facilitates viewing the respective stories as separated from one another.
    Type: Grant
    Filed: January 28, 2016
    Date of Patent: May 12, 2020
    Assignee: Matterport, Inc.
    Inventors: Matthew Tschudy Bell, Haakon Erichsen, Mykhaylo Kurinnyy
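A minimal sketch of the story-separation idea from the abstract above: floor heights are guessed from peaks in a histogram of point heights, and each 3D point is then assigned to the story whose floor lies at or below it. The bin size, peak threshold, and synthetic point cloud are assumptions for illustration, not the patented analysis or separation components.

```python
# Hypothetical sketch of splitting a multi-story point cloud into per-story subsets.
# Floor heights are guessed from peaks in a histogram of z-values; the bin size and
# peak threshold are illustrative assumptions, not the patented analysis.
import numpy as np

def estimate_floor_heights(z_values, bin_size=0.1, min_fraction=0.05):
    """Return z heights where many points concentrate (likely floor slabs)."""
    bins = np.arange(z_values.min(), z_values.max() + bin_size, bin_size)
    counts, edges = np.histogram(z_values, bins=bins)
    threshold = min_fraction * len(z_values)
    peaks = []
    for i, c in enumerate(counts):
        if c >= threshold and (i == 0 or counts[i - 1] < c) and \
           (i == len(counts) - 1 or counts[i + 1] <= c):
            peaks.append(edges[i])  # bottom of the dense slab
    return peaks

def split_into_stories(points, floor_heights):
    """Assign each 3D point to the story whose floor lies at or below it."""
    floors = sorted(floor_heights)
    story_ids = np.searchsorted(floors, points[:, 2], side="right") - 1
    story_ids = np.clip(story_ids, 0, len(floors) - 1)
    return [points[story_ids == s] for s in range(len(floors))]

# Example: two floor slabs about 3 m apart plus scattered wall points.
rng = np.random.default_rng(0)
floor1 = rng.uniform([0, 0, 0.0], [10, 10, 0.05], size=(400, 3))
floor2 = rng.uniform([0, 0, 3.0], [10, 10, 3.05], size=(400, 3))
walls = rng.uniform([0, 0, 0.0], [10, 10, 5.5], size=(200, 3))
cloud = np.vstack([floor1, floor2, walls])
floors = estimate_floor_heights(cloud[:, 2])
stories = split_into_stories(cloud, floors)
print([round(float(f), 2) for f in floors], [len(s) for s in stories])
```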
  • Patent number: 10586386
    Abstract: Systems and techniques for processing and/or transmitting three-dimensional (3D) data are presented. A partitioning component receives captured 3D data associated with a 3D model of an interior environment and partitions the captured 3D data into at least one data chunk associated with at least a first level of detail and a second level of detail. A data component stores 3D data including at least the first level of detail and the second level of detail for the at least one data chunk. An output component transmits a portion of data from the at least one data chunk that is associated with the first level of detail or the second level of detail to a remote client device based on information associated with the first level of detail and the second level of detail.
    Type: Grant
    Filed: June 13, 2018
    Date of Patent: March 10, 2020
    Assignee: Matterport, Inc.
    Inventors: Matthew Tschudy Bell, David Alan Gausebeck, Gregory William Coombe, Daniel Ford
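The sketch below illustrates the general chunk-plus-level-of-detail idea from the abstract above, under assumed parameters: captured points are partitioned into grid-cell chunks, each chunk stores a fine and a coarse level of detail, and the level transmitted depends on the chunk's distance from the client's viewpoint. It is not the patented partitioning or transmission logic.

```python
# Hypothetical sketch of chunking 3D data and serving it at two levels of detail.
# The grid-cell chunking, downsampling stride, and distance cutoff are illustrative
# assumptions, not the patented partitioning scheme.
import numpy as np
from collections import defaultdict

def partition_into_chunks(points, cell_size):
    """Group points by the grid cell they fall in; each cell is one data chunk."""
    chunks = defaultdict(list)
    for p in points:
        key = tuple((p // cell_size).astype(int))
        chunks[key].append(p)
    return {k: np.array(v) for k, v in chunks.items()}

def build_levels_of_detail(chunk, coarse_stride=8):
    """Store a fine level (all points) and a coarse level (every Nth point)."""
    return {"fine": chunk, "coarse": chunk[::coarse_stride]}

def select_level(chunk_key, cell_size, viewpoint, cutoff=10.0):
    """Send the fine level for nearby chunks and the coarse level for distant ones."""
    center = (np.array(chunk_key) + 0.5) * cell_size
    return "fine" if np.linalg.norm(center - viewpoint) <= cutoff else "coarse"

rng = np.random.default_rng(1)
points = rng.uniform(0, 40, size=(5000, 3))
cell = 5.0
chunks = {k: build_levels_of_detail(v)
          for k, v in partition_into_chunks(points, cell).items()}
viewpoint = np.array([2.0, 2.0, 2.0])
for key, lods in sorted(chunks.items())[:5]:
    level = select_level(key, cell, viewpoint)
    print(key, level, len(lods[level]), "points transmitted")
```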
  • Publication number: 20190394441
    Abstract: This application generally relates to capturing and aligning panoramic image and depth data. In one embodiment, a device is provided that comprises a housing and a plurality of cameras configured to capture two-dimensional images, wherein the cameras are arranged at different positions on the housing and have different azimuth orientations relative to a center point such that the cameras have a collective field-of-view spanning up to 360° horizontally. The device further comprises a plurality of depth detection components configured to capture depth data, wherein the depth detection components are arranged at different positions on the housing and have different azimuth orientations relative to the center point such that the depth detection components have the collective field-of-view spanning up to 360° horizontally.
    Type: Application
    Filed: September 3, 2019
    Publication date: December 26, 2019
    Inventors: Kyle Simek, David Gausebeck, Matthew Tschudy Bell
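As a loose geometric illustration of the arrangement described above, the sketch below spaces an assumed number of cameras evenly in azimuth around a center point and checks whether their assumed per-camera horizontal field of view overlaps enough to span 360°. The camera count, fields of view, and radius are illustrative, not the device's actual layout.

```python
# Hypothetical sketch of placing cameras around a housing so their horizontal
# fields of view collectively span 360 degrees. The camera count and per-camera
# FOV are illustrative assumptions, not the device's actual geometry.
import math

def camera_azimuths(num_cameras):
    """Evenly spaced azimuth orientations (degrees) about the center point."""
    return [i * 360.0 / num_cameras for i in range(num_cameras)]

def covers_full_circle(num_cameras, horizontal_fov_deg):
    """True if adjacent cameras' FOVs overlap, i.e. the collective FOV spans 360."""
    spacing = 360.0 / num_cameras
    return horizontal_fov_deg >= spacing

def camera_positions(num_cameras, radius):
    """(x, y) positions of each camera on a circle of the given radius."""
    return [(radius * math.cos(math.radians(a)), radius * math.sin(math.radians(a)))
            for a in camera_azimuths(num_cameras)]

print(camera_azimuths(4))            # [0.0, 90.0, 180.0, 270.0]
print(covers_full_circle(4, 100.0))  # True: 100-degree lenses overlap at 90-degree spacing
print(covers_full_circle(4, 80.0))   # False: gaps remain between cameras
print(camera_positions(4, 0.05))     # camera offsets (metres) from the center point
```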
  • Publication number: 20190259194
    Abstract: Systems and methods for generating three-dimensional models having regions of various resolutions are provided. In particular, imagery data can be captured and utilized to generate three-dimensional models. Regions of texture can be mapped to regions of a three-dimensional model when rendered. Resolutions of texture can be selectively altered and regions of texture can be selectively segmented to reduce texture memory cost. Texture can be algorithmically generated based on alternative texturing techniques. Models can be rendered having regions at various resolutions.
    Type: Application
    Filed: May 6, 2019
    Publication date: August 22, 2019
    Inventors: Daniel Ford, Matthew Tschudy Bell, David Alan Gausebeck, Mykhaylo Kurinnyy
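A minimal sketch of selectively reducing texture resolution to cut texture memory, in the spirit of the abstract above: the most important texture regions are kept at full resolution until an assumed memory budget is spent, and the rest are downsampled. The importance scores, budget, and downsampling factor are illustrative assumptions, not the patented technique.

```python
# Hypothetical sketch of keeping high-resolution texture only for selected regions
# and downsampling the rest to cut texture memory. Scores, budget, and the 4x
# downsampling factor are illustrative assumptions, not the patented technique.
import numpy as np

def downsample(region, factor):
    """Reduce an (H, W, 3) texture region by simple striding."""
    return region[::factor, ::factor]

def build_texture_regions(regions, importance, budget_fraction=0.5, factor=4):
    """Keep the most important regions at full resolution until the memory budget
    is spent, then store the remaining regions at reduced resolution."""
    budget = budget_fraction * sum(r.nbytes for r in regions)
    order = np.argsort(importance)[::-1]           # most important first
    out, spent = [None] * len(regions), 0
    for idx in order:
        region = regions[idx]
        if spent + region.nbytes <= budget:
            out[idx] = region                      # full resolution
        else:
            out[idx] = downsample(region, factor)  # reduced resolution
        spent += out[idx].nbytes
    return out

rng = np.random.default_rng(2)
regions = [rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8) for _ in range(6)]
importance = [0.9, 0.1, 0.8, 0.2, 0.5, 0.3]        # e.g. how often each region is seen up close
result = build_texture_regions(regions, importance)
print([r.shape for r in result])
print("memory:", sum(r.nbytes for r in result), "of", sum(r.nbytes for r in regions), "bytes")
```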
  • Patent number: 10325399
    Abstract: Systems and methods for generating three-dimensional models having regions of various resolutions are provided. In particular, imagery data can be captured and utilized to generate three-dimensional models. Regions of texture can be mapped to regions of a three-dimensional model when rendered. Resolutions of texture can be selectively altered and regions of texture can be selectively segmented to reduce texture memory cost. Texture can be algorithmically generated based on alternative texturing techniques. Models can be rendered having regions at various resolutions.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: June 18, 2019
    Assignee: Matterport, Inc.
    Inventors: Daniel Ford, Matthew Tschudy Bell, David Alan Gausebeck, Mykhaylo Kurinnyy
  • Publication number: 20190051050
    Abstract: Systems and methods for generating three-dimensional models with correlated three-dimensional and two-dimensional imagery data are provided. In particular, imagery data can be captured in two dimensions and three dimensions. Imagery data can be transformed into models. Two-dimensional data and three-dimensional data can be correlated within models. Two-dimensional data can be selected for display within a three-dimensional model. Modifications can be made to the three-dimensional model and can be displayed within a three-dimensional model or within two-dimensional data. Models can transition between two-dimensional imagery data and three-dimensional imagery data.
    Type: Application
    Filed: October 9, 2018
    Publication date: February 14, 2019
    Inventors: Matthew Tschudy Bell, David Alan Gausebeck, Gregory William Coombe, Daniel Ford, William John Brown
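One simple way to correlate 2D images with a 3D model, sketched below under assumed pinhole intrinsics and camera poses, is to project a selected 3D point into each captured photo and keep the photos whose image bounds contain it. This illustrates the general idea only, not the patented correlation method.

```python
# Hypothetical sketch of correlating 2D photos with a 3D model via camera pose:
# a 3D point is projected into each photo and the photos that actually see it are
# selected. Intrinsics and poses are illustrative assumptions, not the patented method.
import numpy as np

def project(point_world, pose_world_to_cam, fx, fy, cx, cy):
    """Project a 3D world point into pixel coordinates; None if behind the camera."""
    R, t = pose_world_to_cam
    p_cam = R @ point_world + t
    if p_cam[2] <= 0:
        return None
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return np.array([u, v])

def images_showing(point_world, cameras, width, height):
    """Return indices of the 2D images whose field of view contains the point."""
    hits = []
    for i, (pose, intrinsics) in enumerate(cameras):
        uv = project(point_world, pose, *intrinsics)
        if uv is not None and 0 <= uv[0] < width and 0 <= uv[1] < height:
            hits.append(i)
    return hits

# Two example captures at the origin: one looking along +Z, one rotated 180 degrees.
cam0 = ((np.eye(3), np.zeros(3)), (500.0, 500.0, 320.0, 240.0))
R_back = np.array([[-1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, -1.0]])
cam1 = ((R_back, np.zeros(3)), (500.0, 500.0, 320.0, 240.0))
point = np.array([0.5, 0.2, 5.0])   # a location selected in the 3D model
print(images_showing(point, [cam0, cam1], width=640, height=480))  # [0]
```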
  • Publication number: 20190050137
    Abstract: This application generally relates to defining, displaying and interacting with tags in a 3D model. In an embodiment, a method includes generating, by a system including a processor, a three-dimensional model of an environment based on sets of aligned three-dimensional data captured from the environment, and associating tags with defined locations of the three-dimensional model, wherein the tags are respectively represented by tag icons that are spatially aligned with the defined locations of the three-dimensional model as included in different representations of the three-dimensional model rendered via an interface of a device, wherein the different representations correspond to different perspectives of the three-dimensional model, and wherein selection of the tag icons causes the tags respectively associated therewith to be rendered at the device.
    Type: Application
    Filed: October 17, 2018
    Publication date: February 14, 2019
    Inventors: James Mildrew, Matthew Tschudy Bell, Dustin Michael Cook, Preston Cowley, Lester Lee, Peter McColgan, Daniel Prochazka, Brian Schulman, James Sundra, Alan Tan
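A minimal sketch of tags anchored to 3D locations: each tag stores a world-space anchor and a payload, and a click ray selects the nearest tag icon it passes through, after which the tag's content would be rendered. The icon radius, ray-picking rule, and example tags are illustrative assumptions, not the patented interaction model.

```python
# Hypothetical sketch of tags anchored to 3D locations in a model. The icon radius
# and ray-picking rule are illustrative assumptions, not the patented design.
import numpy as np
from dataclasses import dataclass

@dataclass
class Tag:
    name: str
    anchor: np.ndarray   # 3D location the tag icon stays aligned with
    content: str         # what is rendered when the icon is selected

def pick_tag(tags, ray_origin, ray_dir, icon_radius=0.15):
    """Return the closest tag whose icon sphere the click ray intersects, else None."""
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    best, best_t = None, float("inf")
    for tag in tags:
        to_anchor = tag.anchor - ray_origin
        t = float(np.dot(to_anchor, ray_dir))            # distance along the ray
        if t <= 0:
            continue                                     # icon is behind the viewer
        miss = np.linalg.norm(to_anchor - t * ray_dir)   # perpendicular miss distance
        if miss <= icon_radius and t < best_t:
            best, best_t = tag, t
    return best

tags = [
    Tag("boiler", np.array([2.0, 1.0, 0.5]), "Boiler installed 2016, serviced annually."),
    Tag("panel", np.array([5.0, 1.2, 0.4]), "Electrical panel, 200 A service."),
]
# A click ray from the viewer toward the boiler tag.
hit = pick_tag(tags, ray_origin=np.array([0.0, 1.0, 0.5]), ray_dir=np.array([1.0, 0.0, 0.0]))
print(hit.content if hit else "no tag selected")
```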
  • Publication number: 20190026956
    Abstract: The disclosed subject matter is directed to employing machine learning models configured to predict 3D data from 2D images using deep learning techniques to derive 3D data for the 2D images. In some embodiments, a system is described comprising a memory that stores computer executable components, and a processor that executes the computer executable components stored in the memory. The computer executable components comprise a reception component configured to receive two-dimensional images, and a three-dimensional data derivation component configured to employ one or more three-dimensional data from two-dimensional data (3D-from-2D) neural network models to derive three-dimensional data for the two-dimensional images.
    Type: Application
    Filed: September 25, 2018
    Publication date: January 24, 2019
    Inventors: David Alan Gausebeck, Matthew Tschudy Bell, Waleed K. Abdulla, Peter Kyuhee Hahn
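As a stand-in for the 3D-from-2D models mentioned above, the sketch below defines a tiny untrained convolutional network that maps an RGB image to a per-pixel depth map, one common form of deriving 3D data from 2D images. The architecture and layer sizes are illustrative assumptions, not the models described in the application.

```python
# Hypothetical sketch of a "3D-from-2D" model: a small convolutional network that
# predicts a per-pixel depth map from an RGB image. Untrained and illustrative only,
# not the neural network models described in the application.
import torch
import torch.nn as nn

class DepthFrom2D(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # one depth value per pixel
            nn.Softplus(),                               # keep predicted depth positive
        )

    def forward(self, images):
        # images: (batch, 3, H, W) RGB -> (batch, 1, H, W) depth in arbitrary units
        return self.net(images)

model = DepthFrom2D()
rgb = torch.rand(1, 3, 64, 64)   # a stand-in for a received 2D image
depth = model(rgb)
print(depth.shape)               # torch.Size([1, 1, 64, 64])
```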
  • Patent number: 10163261
    Abstract: Systems and methods for generating three-dimensional models with correlated three-dimensional and two-dimensional imagery data are provided. In particular, imagery data can be captured in two dimensions and three dimensions. Imagery data can be transformed into models. Two-dimensional data and three-dimensional data can be correlated within models. Two-dimensional data can be selected for display within a three-dimensional model. Modifications can be made to the three-dimensional model and can be displayed within a three-dimensional model or within two-dimensional data. Models can transition between two-dimensional imagery data and three-dimensional imagery data.
    Type: Grant
    Filed: March 19, 2014
    Date of Patent: December 25, 2018
    Assignee: Matterport, Inc.
    Inventors: Matthew Tschudy Bell, David Alan Gausebeck, Gregory William Coombe, Daniel Ford, William John Brown
  • Patent number: 10139985
    Abstract: This application generally relates to defining, displaying and interacting with tags in a 3D model. In an embodiment, a method includes generating, by a system including a processor, a three-dimensional model of an environment based on sets of aligned three-dimensional data captured from the environment, and associating tags with defined locations of the three-dimensional model, wherein the tags are respectively represented by tag icons that are spatially aligned with the defined locations of the three-dimensional model as included in different representations of the three-dimensional model rendered via an interface of a device, wherein the different representations correspond to different perspectives of the three-dimensional model, and wherein selection of the tag icons causes the tags respectively associated therewith to be rendered at the device.
    Type: Grant
    Filed: September 21, 2016
    Date of Patent: November 27, 2018
    Assignee: Matterport, Inc.
    Inventors: James Mildrew, Matthew Tschudy Bell, Dustin Michael Cook, Preston Cowley, Lester Lee, Peter McColgan, Daniel Prochazka, Brian Schulman, James Sundra, Alan Tan
  • Patent number: 10127722
    Abstract: This application generally relates to systems and methods for generating and rendering visualizations of an object or environment using 2D and 3D image data of the object or the environment captured by a mobile device. In one embodiment, a method includes providing, by the system, a representation of a 3D model of an environment from a first perspective of the virtual camera relative to the 3D model, receiving, by the system, input requesting movement of the virtual camera relative to the 3D model, and selecting, by the system, a first 2D image from a plurality of two-dimensional images associated with different capture positions and orientations relative to the 3D model based on association of a capture position and orientation of the first 2D image with a second perspective of the virtual camera relative to the 3D model determined based on the movement.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: November 13, 2018
    Assignee: Matterport, Inc.
    Inventors: Babak Robert Shakib, Kevin Allen Bjorke, Matthew Tschudy Bell
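A minimal sketch of selecting a captured 2D image to display for a moved virtual camera, in the spirit of the abstract above: each capture is scored by its positional distance and viewing-direction difference from the requested perspective, and the best-scoring image is chosen. The scoring weights and example poses are illustrative assumptions, not the patented selection criteria.

```python
# Hypothetical sketch of picking the captured 2D image whose capture pose best
# matches the virtual camera's new perspective. Weights are illustrative assumptions,
# not the patented selection criteria.
import numpy as np

def pose_score(cam_pos, cam_dir, capture_pos, capture_dir, angle_weight=2.0):
    """Lower is better: positional distance plus weighted viewing-direction difference."""
    cam_dir = cam_dir / np.linalg.norm(cam_dir)
    capture_dir = capture_dir / np.linalg.norm(capture_dir)
    distance = np.linalg.norm(cam_pos - capture_pos)
    angle = np.arccos(np.clip(np.dot(cam_dir, capture_dir), -1.0, 1.0))
    return distance + angle_weight * angle

def select_image(cam_pos, cam_dir, captures):
    """Return the index of the captured image closest to the requested perspective."""
    scores = [pose_score(cam_pos, cam_dir, pos, d) for pos, d in captures]
    return int(np.argmin(scores))

captures = [
    (np.array([0.0, 0.0, 1.6]), np.array([1.0, 0.0, 0.0])),   # capture position, orientation
    (np.array([4.0, 0.0, 1.6]), np.array([0.0, 1.0, 0.0])),
    (np.array([4.0, 3.0, 1.6]), np.array([-1.0, 0.0, 0.0])),
]
# After the user moves the virtual camera, pick the best-matching photo.
print(select_image(np.array([3.5, 0.5, 1.6]), np.array([0.0, 1.0, 0.0]), captures))  # 1
```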
  • Publication number: 20180306588
    Abstract: Systems and techniques for determining and/or generating a navigation path through a three-dimensional (3D) model are presented. At least one waypoint location within a captured 3D model of an architectural environment is determined. A path within the captured 3D model, to navigate between a first location associated with the captured 3D model and a second location associated with the captured 3D model, is determined based on the at least one waypoint location. Visual data indicative of 2D data or 3D data of the captured 3D model along the path is transmitted to a remote client device to simulate navigation of the path within the captured 3D model between the first location and the second location.
    Type: Application
    Filed: June 26, 2018
    Publication date: October 25, 2018
    Inventors: Kevin Allen Bjorke, Matthew Tschudy Bell
  • Publication number: 20180300936
    Abstract: Systems and methods for generating three-dimensional models having regions of various resolutions are provided. In particular, imagery data can be captured and utilized to generate three-dimensional models. Regions of texture can be mapped to regions of a three-dimensional model when rendered. Resolutions of texture can be selectively altered and regions of texture can be selectively segmented to reduce texture memory cost. Texture can be algorithmically generated based on alternative texturing techniques. Models can be rendered having regions at various resolutions.
    Type: Application
    Filed: June 22, 2018
    Publication date: October 18, 2018
    Inventors: Daniel Ford, Matthew Tschudy Bell, David Alan Gausebeck, Mykhaylo Kurinnyy
  • Publication number: 20180293793
    Abstract: Systems and techniques for processing and/or transmitting three-dimensional (3D) data are presented. A partitioning component receives captured 3D data associated with a 3D model of an interior environment and partitions the captured 3D data into at least one data chunk associated with at least a first level of detail and a second level of detail. A data component stores 3D data including at least the first level of detail and the second level of detail for the at least one data chunk. An output component transmits a portion of data from the at least one data chunk that is associated with the first level of detail or the second level of detail to a remote client device based on information associated with the first level of detail and the second level of detail.
    Type: Application
    Filed: June 13, 2018
    Publication date: October 11, 2018
    Inventors: Matthew Tschudy Bell, David Alan Gausebeck, Gregory William Coombe, Daniel Ford
  • Patent number: 10055876
    Abstract: Systems and methods for generating three-dimensional models having regions of various resolutions are provided. In particular, imagery data can be captured and utilized to generate three-dimensional models. Regions of texture can be mapped to regions of a three-dimensional model when rendered. Resolutions of texture can be selectively altered and regions of texture can be selectively segmented to reduce texture memory cost. Texture can be algorithmically generated based on alternative texturing techniques. Models can be rendered having regions at various resolutions.
    Type: Grant
    Filed: June 6, 2014
    Date of Patent: August 21, 2018
    Assignee: Matterport, Inc.
    Inventors: Daniel Ford, Matthew Tschudy Bell, David Alan Gausebeck, Mykhaylo Kurinnyy
  • Patent number: 10030979
    Abstract: Systems and techniques for determining and/or generating a navigation path through a three-dimensional (3D) model are presented. At least one waypoint location within a captured 3D model of an architectural environment is determined. A path within the captured 3D model, to navigate between a first location associated with the captured 3D model and a second location associated with the captured 3D model, is determined based on the at least one waypoint location. Visual data indicative of 2D data or 3D data of the captured 3D model along the path is transmitted to a remote client device to simulate navigation of the path within the captured 3D model between the first location and the second location.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: July 24, 2018
    Assignee: Matterport, Inc.
    Inventors: Kevin Allen Bjorke, Matthew Tschudy Bell
  • Publication number: 20180203955
    Abstract: Systems and techniques for processing three-dimensional (3D) data are presented. Captured three-dimensional (3D) data associated with a 3D model of an architectural environment is received and at least a portion of the captured 3D data associated with a flat surface is identified. Furthermore, missing data associated with the portion of the captured 3D data is identified and additional 3D data for the missing data is generated based on other data associated with the portion of the captured 3D data.
    Type: Application
    Filed: March 16, 2018
    Publication date: July 19, 2018
    Inventors: Matthew Tschudy Bell, David Alan Gausebeck, Daniel Ford, Gregory William Coombe
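A rough sketch of filling missing data on a flat surface, in the spirit of the abstract above: a plane is fitted to the captured surface points (here, a horizontal floor), and synthetic points are added on that plane wherever the capture left gaps. The grid spacing, gap test, and synthetic scan are illustrative assumptions, not the patented hole-filling method.

```python
# Hypothetical sketch of filling holes on a flat surface in captured 3D data.
# The plane fit, grid spacing, and gap test are illustrative assumptions, not the
# patented hole-filling method.
import numpy as np

def fit_horizontal_plane(points):
    """For a roughly horizontal flat surface, take the plane height as the mean z."""
    return float(points[:, 2].mean())

def fill_gaps(points, spacing=0.25, gap_radius=0.2):
    """Return synthetic points on the fitted plane at grid cells with no captured data."""
    z = fit_horizontal_plane(points)
    xs = np.arange(points[:, 0].min(), points[:, 0].max(), spacing)
    ys = np.arange(points[:, 1].min(), points[:, 1].max(), spacing)
    filled = []
    for x in xs:
        for y in ys:
            d = np.hypot(points[:, 0] - x, points[:, 1] - y)
            if d.min() > gap_radius:   # no captured point covers this spot
                filled.append([x, y, z])
    return np.array(filled)

# A floor scan with a square hole in the middle (e.g. occluded during capture).
rng = np.random.default_rng(3)
floor = rng.uniform([0, 0, -0.01], [4, 4, 0.01], size=(2000, 3))
keep = ~((floor[:, 0] > 1.5) & (floor[:, 0] < 2.5) & (floor[:, 1] > 1.5) & (floor[:, 1] < 2.5))
floor = floor[keep]
patch = fill_gaps(floor)
print(len(patch), "synthetic points added near z =", round(fit_horizontal_plane(floor), 3))
```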
  • Patent number: 10026224
    Abstract: Systems and techniques for processing and/or transmitting three-dimensional (3D) data are presented. A partitioning component receives captured 3D data associated with a 3D model of an interior environment and partitions the captured 3D data into at least one data chunk associated with at least a first level of detail and a second level of detail. A data component stores 3D data including at least the first level of detail and the second level of detail for the at least one data chunk. An output component transmits a portion of data from the at least one data chunk that is associated with the first level of detail or the second level of detail to a remote client device based on information associated with the first level of detail and the second level of detail.
    Type: Grant
    Filed: June 20, 2016
    Date of Patent: July 17, 2018
    Assignee: Matterport, Inc.
    Inventors: Matthew Tschudy Bell, David Alan Gausebeck, Gregory William Coombe, Daniel Ford
  • Publication number: 20180144547
    Abstract: This application generally relates to systems and methods for generating and rendering visualizations of an object or environment using 2D and 3D image data of the object or the environment captured by a mobile device. In one embodiment, a method includes providing, by the system, a representation of a 3D model of an environment from a first perspective of the virtual camera relative to the 3D model, receiving, by the system, input requesting movement of the virtual camera relative to the 3D model, and selecting, by the system, a first 2D image from a plurality of two-dimensional images associated with different capture positions and orientations relative to the 3D model based on association of a capture position and orientation of the first 2D image with a second perspective of the virtual camera relative to the 3D model determined based on the movement.
    Type: Application
    Filed: June 30, 2016
    Publication date: May 24, 2018
    Inventors: Babak Robert Shakib, Kevin Allen Bjorke, Matthew Tschudy Bell