Patents Assigned to Matterport, Inc.
-
Patent number: 11640000
Abstract: An apparatus comprising a housing, a mount configured to be coupled to a motor to horizontally move the apparatus, a wide-angle lens coupled to the housing, the wide-angle lens being positioned above the mount thereby being along an axis of rotation, the axis of rotation being the axis along which the apparatus rotates, an image capture device within the housing, the image capture device configured to receive two-dimensional images of an environment through the wide-angle lens, and a LiDAR device within the housing, the LiDAR device configured to generate depth data based on the environment.
Type: Grant
Filed: May 23, 2022
Date of Patent: May 2, 2023
Assignee: Matterport, Inc.
Inventors: David Alan Gausebeck, Kirk Stromberg, Louis D. Marzano, David Proctor, Naoto Sakakibara, Simeon Trieu, Kevin Kane, Simon Wynn
-
Publication number: 20230117227
Abstract: An example method comprises applying learning (e.g., weights) developed for training a first model for a first data set of image data to training a second model for a second data set of sensor data. The first and the second data sets may be from the same environment. The second data set has a greater number of channels than the first data set. Weights of layers determined in the first model training may be initially applied to training the second model for the second set of data. Channels of the second data set equal to the number of channels of the first data set may be utilized for each of the layers, using the same weights from the first model. All or some of the channels may be applied in training the second model and using the layers, while determining new weights for the generation of the second trained model.
Type: Application
Filed: October 20, 2022
Publication date: April 20, 2023
Applicant: Matterport, Inc.
Inventor: Matthieu Francois Perrinel
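As an illustration only (not code from the publication), the weight-reuse idea above can be sketched as follows. `expand_first_layer`, its toy per-channel weight lists, and the initialization scale are all hypothetical stand-ins for a real first convolutional layer:

```python
import random

def expand_first_layer(weights, extra_channels, scale=0.01):
    """Reuse weights trained on a C-channel input for a (C + extra)-channel input.

    `weights` is a list of per-input-channel weight lists. Channels shared
    with the first model keep their trained weights; the extra sensor
    channels get small random initial weights and are learned from scratch
    when the second model is trained.
    """
    expanded = [list(w) for w in weights]            # copy trained channels
    width = len(weights[0])
    for _ in range(extra_channels):                  # initialize new channels
        expanded.append([random.uniform(-scale, scale) for _ in range(width)])
    return expanded

# 3 trained image channels (e.g. RGB) plus 2 extra sensor channels
trained = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.2]]
expanded = expand_first_layer(trained, extra_channels=2)
```

The same pattern applies per layer: carry over what transfers, and let training determine new weights only where the second data set genuinely differs.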
-
Patent number: 11630214
Abstract: An apparatus comprising a housing, a mount configured to be coupled to a motor to horizontally move the apparatus, a wide-angle lens coupled to the housing, the wide-angle lens being positioned above the mount thereby being along an axis of rotation, the axis of rotation being the axis along which the apparatus rotates, an image capture device within the housing, the image capture device configured to receive two-dimensional images of an environment through the wide-angle lens, and a LiDAR device within the housing, the LiDAR device configured to generate depth data based on the environment.
Type: Grant
Filed: June 10, 2022
Date of Patent: April 18, 2023
Assignee: Matterport, Inc.
Inventors: David Alan Gausebeck, Kirk Stromberg, Louis D. Marzano, David Proctor, Naoto Sakakibara, Simeon Trieu, Kevin Kane, Simon Wynn
-
Publication number: 20230104674
Abstract: Example systems, methods, and non-transitory computer readable media are directed to obtaining a point cloud that represents an environment based at least in part on a plurality of points in three-dimensional space; determining corresponding classifications of points in the point cloud as ground or not-ground based at least in part on a plurality of ground classification algorithms; determining respective point cloud features associated with the points in the point cloud; determining respective cell features associated with a plurality of cells that segment the point cloud; generating feature data for a machine learning model based at least in part on one or more of: the classifications of the points based on the plurality of ground classification algorithms, the point cloud features, or the cell features; and classifying the points in the point cloud based at least in part on an output from the machine learning model.
Type: Application
Filed: October 6, 2022
Publication date: April 6, 2023
Applicant: Matterport, Inc.
Inventors: Kevin Balkoski, Edward Melcher, Eleanor Crane, Matthieu Francois Perrinel
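A minimal sketch of the feature-assembly step described above, purely illustrative: the two heuristic classifiers, `build_feature_vector`, and the cell features are hypothetical, not the publication's actual algorithms:

```python
def build_feature_vector(point, classifiers, cell_features):
    """Assemble per-point feature data for a learned ground classifier.

    Each algorithm in `classifiers` votes ground (1) / not-ground (0) for
    the point; the votes are concatenated with a simple per-point feature
    and the features of the grid cell containing the point.
    """
    votes = [1 if clf(point) else 0 for clf in classifiers]
    x, y, z = point
    point_features = [z]                  # height above datum, a classic ground cue
    return votes + point_features + cell_features

# two toy heuristics standing in for real ground-classification algorithms
by_height = lambda p: p[2] < 0.2
by_origin_dist = lambda p: (p[0] ** 2 + p[1] ** 2) ** 0.5 < 10.0

fv = build_feature_vector((1.0, 2.0, 0.1), [by_height, by_origin_dist],
                          cell_features=[0.05, 0.3])
```

The machine learning model then makes the final ground/not-ground call from this combined vector rather than trusting any single algorithm.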
-
Publication number: 20230098138
Abstract: An example method comprises simulating an environment including at least one simulated object, a simulated ray source, and a simulated receiver, simulating a plurality of rays emitting from the simulated ray source, tracking each ray in the environment, detecting changes for at least one ray that interacts with the at least one simulated object, the changes for the at least one ray including a reflection from the at least one object, tracking the reflection of at least part of the ray from the at least one object in the environment, determining measurements for any of the plurality of rays that interact with the simulated receiver in the simulation, at least part of the ray being received by the receiver, the measurements including intensity for any of the plurality of rays that interact with the receiver, and generating synthetic point cloud data based on the measurements.
Type: Application
Filed: September 28, 2022
Publication date: March 30, 2023
Applicant: Matterport, Inc.
Inventors: Kevin Balkoski, Eleanor Crane
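The simulation loop above can be illustrated with a deliberately tiny sketch: rays from a source at the origin hit a flat wall, and each hit yields a synthetic point with a distance-based intensity. `simulate_scan`, the wall geometry, and the inverse-square intensity model are assumptions for illustration:

```python
import math

def simulate_scan(angles_deg, wall_x=5.0):
    """Trace rays from a simulated source at the origin toward a wall at
    x = wall_x. Each ray that hits contributes an (x, y, intensity) sample,
    with intensity falling off with the square of the travelled distance."""
    points = []
    for a in angles_deg:
        rad = math.radians(a)
        dx, dy = math.cos(rad), math.sin(rad)
        if dx <= 0:
            continue                       # ray never reaches the wall
        t = wall_x / dx                    # distance at which the ray hits x = wall_x
        points.append((wall_x, t * dy, 1.0 / t ** 2))
    return points

cloud = simulate_scan([-30, 0, 30, 120])   # the 120° ray points away and misses
```

A full simulator would also track reflections off intermediate objects before a ray reaches the receiver; the point is that every measurement the receiver records becomes one synthetic point.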
-
Publication number: 20230079307
Abstract: This application generally relates to defining, displaying and interacting with tags in a 3D model. In an embodiment, a method includes generating, by a system including a processor, a three-dimensional model of an environment based on sets of aligned three-dimensional data captured from the environment, and associating tags with defined locations of the three-dimensional model, wherein the tags are respectively represented by tag icons that are spatially aligned with the defined locations of the three-dimensional model as included in different representations of the three-dimensional model rendered via an interface of a device, wherein the different representations correspond to different perspectives of the three-dimensional model, and wherein selection of the tag icons causes the tags respectively associated therewith to be rendered at the device.
Type: Application
Filed: August 16, 2022
Publication date: March 16, 2023
Applicant: Matterport, Inc.
Inventors: James Mildrew, Matthew Tschudy Bell, Dustin Michael Cook, Preston Cowley, Lester Lee, Peter McColgan, Daniel Prochazka, Brian Schulman, James Sundra, Alan Tan
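Keeping a tag icon spatially aligned across perspectives amounts to re-projecting its fixed 3D anchor into each rendered view. A hypothetical sketch (not the patent application's method), using an axis-aligned pinhole camera for simplicity:

```python
def project_tag(tag_pos, cam_pos, focal=500.0, cx=320.0, cy=240.0):
    """Project a tag's 3D anchor point into screen coordinates for the
    current view (camera at cam_pos looking down +z), so the tag icon can
    be drawn spatially aligned in each rendered representation."""
    x = tag_pos[0] - cam_pos[0]
    y = tag_pos[1] - cam_pos[1]
    z = tag_pos[2] - cam_pos[2]
    if z <= 0:
        return None                        # tag is behind the camera in this view
    return (cx + focal * x / z, cy + focal * y / z)

# the same tag lands at different screen positions from different perspectives
p1 = project_tag((1.0, 0.0, 5.0), (0.0, 0.0, 0.0))
p2 = project_tag((1.0, 0.0, 5.0), (0.5, 0.0, 0.0))
```

Hit-testing a click against these projected icon positions is then what triggers rendering the tag's content.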
-
Patent number: 11600046
Abstract: Systems and methods for generating three-dimensional models with correlated three-dimensional and two-dimensional imagery data are provided. In particular, imagery data can be captured in two dimensions and three dimensions. Imagery data can be transformed into models. Two-dimensional data and three-dimensional data can be correlated within models. Two-dimensional data can be selected for display within a three-dimensional model. Modifications can be made to the three-dimensional model and can be displayed within a three-dimensional model or within two-dimensional data. Models can transition between two-dimensional imagery data and three-dimensional imagery data.
Type: Grant
Filed: February 2, 2021
Date of Patent: March 7, 2023
Assignee: Matterport, Inc.
Inventors: Matthew Tschudy Bell, David Alan Gausebeck, Gregory William Coombe, Daniel Ford, William John Brown
-
Patent number: 11551410
Abstract: The present disclosure concerns a methodology that allows a user to “orbit” around a model on a specific axis of rotation and view an orthographic floor plan of the model. A user may view and “walk through” the model while staying at a specific height above the ground with smooth transitions between orbiting, floor plan, and walking modes.
Type: Grant
Filed: July 9, 2021
Date of Patent: January 10, 2023
Assignee: Matterport, Inc.
Inventors: Matthew Bell, Michael Beebe
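The orbit mode described above reduces to circling a fixed target about a vertical axis at constant radius and height. A minimal sketch, with `orbit_position` and all its parameters assumed for illustration (not taken from the patent):

```python
import math

def orbit_position(target, radius, height, angle_deg):
    """Camera position while orbiting a model about a vertical axis through
    `target`: the camera circles at a fixed radius and stays at a fixed
    height, which is what makes the orbiting motion feel smooth."""
    a = math.radians(angle_deg)
    return (target[0] + radius * math.cos(a),
            height,
            target[2] + radius * math.sin(a))

p0  = orbit_position((0.0, 0.0, 0.0), radius=4.0, height=1.7, angle_deg=0)
p90 = orbit_position((0.0, 0.0, 0.0), radius=4.0, height=1.7, angle_deg=90)
```

Transitions to floor-plan mode would then animate the camera upward toward a top-down orthographic view, and walking mode would pin the height to eye level.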
-
Publication number: 20220334262
Abstract: An apparatus comprising a housing, a mount configured to be coupled to a motor to horizontally move the apparatus, a wide-angle lens coupled to the housing, the wide-angle lens being positioned above the mount thereby being along an axis of rotation, the axis of rotation being the axis along which the apparatus rotates, an image capture device within the housing, the image capture device configured to receive two-dimensional images of an environment through the wide-angle lens, and a LiDAR device within the housing, the LiDAR device configured to generate depth data based on the environment.
Type: Application
Filed: June 10, 2022
Publication date: October 20, 2022
Applicant: Matterport, Inc.
Inventors: David Alan Gausebeck, Kirk Stromberg, Louis D. Marzano, David Proctor, Naoto Sakakibara, Simeon Trieu, Kevin Kane, Simon Wynn
-
Publication number: 20220317307
Abstract: An apparatus comprising a housing, a mount configured to be coupled to a motor to horizontally move the apparatus, a wide-angle lens coupled to the housing, the wide-angle lens being positioned above the mount thereby being along an axis of rotation, the axis of rotation being the axis along which the apparatus rotates, an image capture device within the housing, the image capture device configured to receive two-dimensional images of an environment through the wide-angle lens, and a LiDAR device within the housing, the LiDAR device configured to generate depth data based on the environment.
Type: Application
Filed: May 23, 2022
Publication date: October 6, 2022
Applicant: Matterport, Inc.
Inventors: David Alan Gausebeck, Kirk Stromberg, Louis D. Marzano, David Proctor, Naoto Sakakibara, Simeon Trieu, Kevin Kane, Simon Wynn
-
Patent number: 11422671
Abstract: This application generally relates to defining, displaying and interacting with tags in a 3D model. In an embodiment, a method includes generating, by a system including a processor, a three-dimensional model of an environment based on sets of aligned three-dimensional data captured from the environment, and associating tags with defined locations of the three-dimensional model, wherein the tags are respectively represented by tag icons that are spatially aligned with the defined locations of the three-dimensional model as included in different representations of the three-dimensional model rendered via an interface of a device, wherein the different representations correspond to different perspectives of the three-dimensional model, and wherein selection of the tag icons causes the tags respectively associated therewith to be rendered at the device.
Type: Grant
Filed: September 15, 2020
Date of Patent: August 23, 2022
Assignee: Matterport, Inc.
Inventors: James Mildrew, Matthew Tschudy Bell, Dustin Michael Cook, Preston Cowley, Lester Lee, Peter McColgan, Daniel Prochazka, Brian Schulman, James Sundra, Alan Tan
-
Patent number: 11379992
Abstract: Systems and methods for frame and scene segmentation are disclosed herein. One method includes associating a first primary element from a first frame with a background tag, associating a second primary element from the first frame with a subject tag, generating a background texture using the first primary element, generating a foreground texture using the second primary element, and combining the background texture and the foreground texture into a synthesized frame. The method also includes training a segmentation network using the background tag, the subject tag, and the synthesized frame.
Type: Grant
Filed: May 14, 2019
Date of Patent: July 5, 2022
Assignee: Matterport, Inc.
Inventors: Gary Bradski, Prasanna Krishnasamy, Mona Fathollahi, Michael Tetelman
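The compositing step can be sketched with a simple mask-based combine; `synthesize_frame` and the toy single-channel "textures" below are hypothetical, not the patented method itself:

```python
def synthesize_frame(background, foreground, mask):
    """Combine a background texture and a foreground (subject) texture into
    one synthesized training frame: wherever `mask` is 1 the subject pixel
    is used, elsewhere the background shows through. The mask itself then
    serves as ground truth for training the segmentation network."""
    return [[fg if m else bg for bg, fg, m in zip(brow, frow, mrow)]
            for brow, frow, mrow in zip(background, foreground, mask)]

bg   = [[0, 0, 0], [0, 0, 0]]     # 2x3 single-channel background texture
fg   = [[9, 9, 9], [9, 9, 9]]     # subject texture
mask = [[0, 1, 0], [1, 1, 0]]     # 1 = subject pixel
frame = synthesize_frame(bg, fg, mask)
```

Because the frame is synthesized, its segmentation labels are known exactly, which is what makes such frames useful as training data.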
-
Publication number: 20220207849
Abstract: The disclosed subject matter is directed to employing machine learning models configured to predict 3D data from 2D images using deep learning techniques to derive 3D data for the 2D images. In some embodiments, a method is provided that comprises receiving, by a system comprising a processor, a panoramic image, and employing, by the system, a three-dimensional data from two-dimensional data (3D-from-2D) convolutional neural network model to derive three-dimensional data from the panoramic image, wherein the 3D-from-2D convolutional neural network model employs convolutional layers that wrap around the panoramic image as projected on a two-dimensional plane to facilitate deriving the three-dimensional data.
Type: Application
Filed: March 15, 2022
Publication date: June 30, 2022
Applicant: Matterport, Inc.
Inventor: David Alan Gausebeck
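The "wrap around" behavior of the convolutional layers corresponds to circular padding along the horizontal axis of the projected panorama. A hypothetical illustration of just the padding step (`wrap_pad` is not from the publication):

```python
def wrap_pad(rows, pad):
    """Horizontally wrap-pad a 2D grid (an equirectangular panorama
    projected onto a plane): columns from the right edge are prepended and
    columns from the left edge appended, so a convolution sliding over the
    padded rows sees the panorama as continuous across the 360-degree seam."""
    return [row[-pad:] + row + row[:pad] for row in rows]

pano = [[1, 2, 3, 4],
        [5, 6, 7, 8]]
padded = wrap_pad(pano, pad=1)
```

Without such wrapping, a standard zero-padded convolution would treat the left and right image borders as real scene boundaries, even though they are adjacent in the captured environment.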
-
Patent number: 11282287
Abstract: The disclosed subject matter is directed to employing machine learning models configured to predict 3D data from 2D images using deep learning techniques to derive 3D data for the 2D images. In some embodiments, a method is provided that comprises receiving, by a system comprising a processor, a panoramic image, and employing, by the system, a three-dimensional data from two-dimensional data (3D-from-2D) convolutional neural network model to derive three-dimensional data from the panoramic image, wherein the 3D-from-2D convolutional neural network model employs convolutional layers that wrap around the panoramic image as projected on a two-dimensional plane to facilitate deriving the three-dimensional data.
Type: Grant
Filed: September 25, 2018
Date of Patent: March 22, 2022
Assignee: Matterport, Inc.
Inventor: David Alan Gausebeck
-
Publication number: 20220075080
Abstract: Systems, computer-implemented methods, apparatus and/or computer program products are provided that facilitate improving the accuracy of global positioning system (GPS) coordinates of indoor photos. The disclosed subject matter further provides systems, computer-implemented methods, apparatus and/or computer program products that facilitate generating exterior photos of structures based on GPS coordinates of indoor photos.
Type: Application
Filed: April 27, 2021
Publication date: March 10, 2022
Applicant: Matterport, Inc.
Inventors: Gunnar Hovden, Scott Adams
-
Patent number: 11263823
Abstract: The disclosed subject matter is directed to employing machine learning models configured to predict 3D data from 2D images using deep learning techniques to derive 3D data for the 2D images. In some embodiments, a method is provided that comprises employing, by a system comprising a processor, one or more three-dimensional data from two-dimensional data (3D-from-2D) neural network models to derive three-dimensional data from one or more two-dimensional images captured of an object or environment from a current perspective of the object or environment viewed on or through a display of the device. The method further comprises, determining, by the system, a position for integrating a graphical data object on or within a representation of the object or environment viewed on or through the display based on the current perspective and the three-dimensional data.
Type: Grant
Filed: September 25, 2018
Date of Patent: March 1, 2022
Assignee: Matterport, Inc.
Inventors: David Alan Gausebeck, Babak Robert Shakib
-
Publication number: 20220058414
Abstract: Systems and methods for registering arbitrary visual features for use as fiducial elements are disclosed. An example method includes aligning a geometric reference object and a visual feature and capturing an image of the reference object and feature. The method also includes identifying, in the image of the object and the visual feature, a set of at least four non-colinear feature points in the visual feature. The method also includes deriving, from the image, a coordinate system using the geometric object. The method also comprises providing a set of measures to each of the points in the set of at least four non-colinear feature points using the coordinate system. The measures can then be saved in a memory to represent the registered visual feature and serve as the basis for using the registered visual feature as a fiducial element.
Type: Application
Filed: April 30, 2021
Publication date: February 24, 2022
Applicant: Matterport, Inc.
Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
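The "measures in a derived coordinate system" step can be sketched in its simplest possible form, assuming the reference object contributes only an origin and a known scale (a real implementation would recover a full pose; `register_feature` and the pixel values are hypothetical):

```python
def register_feature(ref_origin, ref_unit, feature_points):
    """Express each detected feature point in the coordinate system derived
    from the geometric reference object: `ref_origin` is the object's corner
    in image pixels and `ref_unit` its known edge length in pixels, so the
    measures come out in object units, independent of where the photo was
    taken."""
    ox, oy = ref_origin
    return [((x - ox) / ref_unit, (y - oy) / ref_unit) for x, y in feature_points]

# four non-colinear feature points measured against a 100-pixel reference edge
measures = register_feature((50, 50), 100.0,
                            [(50, 50), (150, 50), (50, 150), (120, 90)])
```

Once saved, these measures let any later image use the arbitrary visual feature as a fiducial, the same way a printed marker with known corner geometry would be used.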
-
Publication number: 20210374410
Abstract: Techniques are provided for increasing the accuracy of automated classifications produced by a machine learning engine. Specifically, the classification produced by a machine learning engine for one photo-realistic image is adjusted based on the classifications produced by the machine learning engine for other photo-realistic images that correspond to the same portion of a 3D model that has been generated based on the photo-realistic images. Techniques are also provided for using the classifications of the photo-realistic images that were used to create a 3D model to automatically classify portions of the 3D model. The classifications assigned to the various portions of the 3D model in this manner may also be used as a factor for automatically segmenting the 3D model.
Type: Application
Filed: April 20, 2021
Publication date: December 2, 2021
Applicant: Matterport, Inc.
Inventors: Gunnar Hovden, Mykhaylo Kurinnyy
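One simple way to realize the adjustment described above is a majority vote across every image that views the same portion of the 3D model. This is an assumed illustration, not necessarily the publication's exact adjustment rule:

```python
from collections import Counter

def adjust_classification(labels):
    """Adjust the prediction for one portion of the 3D model by majority
    vote over the labels predicted from every photo-realistic image that
    views that portion: an outlier label from a single image is overruled
    by agreement among the other views."""
    return Counter(labels).most_common(1)[0][0]

# three views classify a surface as "floor"; one view misclassifies it
label = adjust_classification(["floor", "floor", "wall", "floor"])
```

The agreed label can then be written back onto the corresponding mesh faces, which is what enables classifying, and ultimately segmenting, the 3D model itself.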
-
Publication number: 20210375047
Abstract: Systems and techniques for processing and/or transmitting three-dimensional (3D) data are presented. A partitioning component receives captured 3D data associated with a 3D model of an interior environment and partitions the captured 3D data into at least one data chunk associated with at least a first level of detail and a second level of detail. A data component stores 3D data including at least the first level of detail and the second level of detail for the at least one data chunk. An output component transmits a portion of data from the at least one data chunk that is associated with the first level of detail or the second level of detail to a remote client device based on information associated with the first level of detail and the second level of detail.
Type: Application
Filed: August 17, 2021
Publication date: December 2, 2021
Applicant: Matterport, Inc.
Inventors: Matthew Tschudy Bell, David Alan Gausebeck, Gregory William Coombe, Daniel Ford
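A common criterion for picking which stored level of detail to transmit is the chunk's distance from the viewer. The function and thresholds below are hypothetical, offered only to make the selection step concrete:

```python
def choose_level(chunk_distance, thresholds=(5.0, 20.0)):
    """Pick which level of detail of a data chunk to transmit to the remote
    client: nearby chunks get the fine level, distant ones a coarser level,
    so bandwidth is spent on the part of the 3D model closest to the viewer."""
    for level, limit in enumerate(thresholds):
        if chunk_distance <= limit:
            return level                   # 0 = finest level of detail
    return len(thresholds)                 # coarsest fallback for far chunks

levels = [choose_level(d) for d in (2.0, 10.0, 50.0)]
```

As the viewer moves through the model, chunks are re-evaluated and finer levels are streamed in for whatever just came close.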
-
Patent number: 11189031
Abstract: Methods and systems regarding importance sampling for the modification of a training procedure used to train a segmentation network are disclosed herein. A disclosed method includes segmenting an image using a trainable directed graph to generate a segmentation, displaying the segmentation, receiving a first selection directed to the segmentation, and modifying a training procedure for the trainable directed graph using the first selection. In a more specific method, the training procedure alters a set of trainable values associated with the trainable directed graph based on a delta between the segmentation and a ground truth segmentation, the first selection is spatially indicative with respect to the segmentation, and the delta is calculated based on the first selection.
Type: Grant
Filed: May 14, 2019
Date of Patent: November 30, 2021
Assignee: Matterport, Inc.
Inventors: Gary Bradski, Ethan Rublee, Mona Fathollahi, Michael Tetelman, Ian Meeder, Varsha Vivek, William Nguyen
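The "delta calculated based on the first selection" can be illustrated as a spatially weighted error: pixels inside the user's selection count more toward the loss. `weighted_delta`, its 1D pixel arrays, and the boost factor are assumptions for illustration, not the patented formulation:

```python
def weighted_delta(predicted, truth, selected, boost=5.0):
    """Delta between a predicted segmentation and the ground truth,
    up-weighting pixels inside the user's spatial selection so training
    concentrates on the region the reviewer flagged as wrong."""
    total = 0.0
    for i, (p, t) in enumerate(zip(predicted, truth)):
        err = abs(p - t)
        total += boost * err if i in selected else err
    return total

pred  = [1, 0, 1, 1]          # predicted per-pixel labels (1D for brevity)
truth = [1, 1, 0, 1]          # ground-truth labels
delta = weighted_delta(pred, truth, selected={2})   # user flagged pixel 2
```

Feeding this weighted delta to the optimizer is the importance-sampling modification: the same labeled example contributes more gradient where the user pointed.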