Patents Assigned to Matterport, Inc.
-
Patent number: 11943539
Abstract: An environmental capture system (ECS) captures image data and depth information in a 360-degree scene. The captured image data and depth information can be used to generate a 360-degree scene. The ECS comprises a frame, a drive train mounted to the frame, and an image capture device coupled to the drive train to capture, while pointed in a first direction, a plurality of images at different exposures in a first field of view (FOV) of the 360-degree scene. The ECS further comprises a depth information capture device coupled to the drive train. The depth information capture device and the image capture device are rotated by the drive train about a first, substantially vertical, axis from the first direction to a second direction. The depth information capture device, while being rotated from the first direction to the second direction, captures depth information for a first portion of the 360-degree scene.
Type: Grant
Filed: May 13, 2022
Date of Patent: March 26, 2024
Assignee: Matterport, Inc.
Inventors: David Alan Gausebeck, Kirk Stromberg, Louis D. Marzano, David Proctor, Naoto Sakakibara, Simeon Trieu, Kevin Kane, Simon Wynn
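The capture sequence this abstract describes (an exposure bracket at each heading, with depth gathered during each rotation between headings) can be sketched as follows. This is a minimal illustration only; the function name, heading count, and exposure values are assumptions, not Matterport's implementation.

```python
# Hypothetical sketch of the capture loop: shoot an exposure bracket at
# each heading, and record that depth is captured while the drive train
# rotates from one heading to the next.

def capture_360_scene(num_headings=6, exposures_ev=(-2, 0, 2)):
    """Return per-heading exposure brackets and the angular spans over
    which depth information is captured while rotating."""
    brackets = []      # list of (heading_deg, [frame labels]) brackets
    depth_sweeps = []  # (start_deg, end_deg) spans of depth capture
    step = 360.0 / num_headings
    for i in range(num_headings):
        heading = i * step
        # HDR bracket: one frame per exposure value at this heading
        brackets.append(
            (heading, [f"img@{heading:.0f}deg_ev{ev:+d}" for ev in exposures_ev]))
        # depth captured during the rotation from this heading to the next
        depth_sweeps.append((heading, (heading + step) % 360.0))
    return brackets, depth_sweeps
```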
-
Patent number: 11852732
Abstract: An apparatus comprising a housing, a mount configured to be coupled to a motor to horizontally move the apparatus, a wide-angle lens coupled to the housing, the wide-angle lens being positioned above the mount thereby being along an axis of rotation, the axis of rotation being the axis along which the apparatus rotates, an image capture device within the housing, the image capture device configured to receive two-dimensional images of the environment through the wide-angle lens, and a LiDAR device within the housing, the LiDAR device configured to generate depth data based on the environment.
Type: Grant
Filed: April 3, 2023
Date of Patent: December 26, 2023
Assignee: Matterport, Inc.
Inventors: David Alan Gausebeck, Kirk Stromberg, Louis D. Marzano, David Proctor, Naoto Sakakibara, Simeon Trieu, Kevin Kane, Simon Wynn
-
Patent number: 11775788
Abstract: Systems and methods for registering arbitrary visual features for use as fiducial elements are disclosed. An example method includes aligning a geometric reference object and a visual feature and capturing an image of the reference object and feature. The method also includes identifying, in the image of the object and the visual feature, a set of at least four non-colinear feature points in the visual feature. The method also includes deriving, from the image, a coordinate system using the geometric object. The method also comprises providing a set of measures to each of the points in the set of at least four non-colinear feature points using the coordinate system. The measures can then be saved in a memory to represent the registered visual feature and serve as the basis for using the registered visual feature as a fiducial element.
Type: Grant
Filed: April 30, 2021
Date of Patent: October 3, 2023
Assignee: Matterport, Inc.
Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
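The core of the method above (derive a coordinate system from a reference object, then assign measures to each feature point in that system) can be sketched as a small linear-algebra routine. A sketch under stated assumptions: the reference object is taken to supply an origin and two axis points in the image, which is one simple way to realize "deriving a coordinate system"; the function name is hypothetical.

```python
import numpy as np

# Express feature points in the 2D frame defined by a reference object's
# origin and two axis points (all given in pixel coordinates).

def register_feature_points(origin, x_axis_pt, y_axis_pt, feature_points):
    """Return the measures (coordinates) of >= 4 non-colinear image
    points in the reference object's coordinate system."""
    origin = np.asarray(origin, float)
    # columns of the basis are the reference object's axis vectors
    basis = np.column_stack([np.asarray(x_axis_pt, float) - origin,
                             np.asarray(y_axis_pt, float) - origin])
    coords = []
    for p in feature_points:
        # solve basis @ c = (p - origin) for the point's measures c
        c = np.linalg.solve(basis, np.asarray(p, float) - origin)
        coords.append(tuple(c))
    return coords  # saved to memory as the registered fiducial
```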
-
Publication number: 20230306688
Abstract: Systems and methods for generating three-dimensional models with correlated three-dimensional and two-dimensional imagery data are provided. In particular, imagery data can be captured in two dimensions and three dimensions. Imagery data can be transformed into models. Two-dimensional data and three-dimensional data can be correlated within models. Two-dimensional data can be selected for display within a three-dimensional model. Modifications can be made to the three-dimensional model and can be displayed within a three-dimensional model or within two-dimensional data. Models can transition between two-dimensional imagery data and three-dimensional imagery data.
Type: Application
Filed: February 17, 2023
Publication date: September 28, 2023
Applicant: Matterport, Inc.
Inventors: Matthew Tschudy Bell, David Alan Gausebeck, Gregory William Coombe, Daniel Ford, William John Brown
-
Publication number: 20230290072
Abstract: A system comprising: processors and memory containing instructions to control the processors to: receive images representing an interior of a physical environment; identify, using a neural network for object recognition, an object in an image, the object being associated with a location relative to the physical environment; identify, using the neural network for object recognition, another object in another image; determine whether the objects in the images are located near or at a similar location based on location information associated with the objects; if the objects are located near or at a similar location, treat the objects as an instance of a single object; store the similar location associated with the single object; display an interactive walkthrough visualization of a 3D model of the physical environment including the single object; receive a request regarding object location through the interactive walkthrough visualization; and provide the similar location of the single object for display in the interactive walkthrough visualization.
Type: Application
Filed: March 10, 2023
Publication date: September 14, 2023
Applicant: Matterport, Inc.
Inventors: Gunnar Hovden, Azwad Sabik
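The de-duplication step described above (detections from different images merged into one object instance when their locations are near or similar) can be sketched as follows. The distance threshold, the Euclidean test, and the averaging of locations are illustrative assumptions; the abstract only requires some location-similarity criterion.

```python
import math

# Merge per-image detections into single object instances when two
# detections of the same label fall within a distance threshold.

def merge_detections(detections, threshold=0.5):
    """detections: [(label, (x, y, z)), ...]. Returns object instances,
    each with a label and the averaged location of its detections."""
    instances = []  # [(label, [locations])]
    for label, loc in detections:
        for inst in instances:
            # compare against the most recent location in the instance
            if inst[0] == label and math.dist(inst[1][-1], loc) <= threshold:
                inst[1].append(loc)
                break
        else:
            instances.append((label, [loc]))
    # one stored location per instance: the mean of its detections
    return [(label, tuple(sum(c) / len(locs) for c in zip(*locs)))
            for label, locs in instances]
```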
-
Publication number: 20230274385
Abstract: Systems, computer-implemented methods, apparatus and/or computer program products are provided that facilitate improving the accuracy of global positioning system (GPS) coordinates of indoor photos. The disclosed subject matter further provides systems, computer-implemented methods, apparatus and/or computer program products that facilitate generating exterior photos of structures based on GPS coordinates of indoor photos.
Type: Application
Filed: May 5, 2023
Publication date: August 31, 2023
Applicant: Matterport, Inc.
Inventors: Gunnar Hovden, Scott Adams
-
Patent number: 11741669
Abstract: Systems and techniques for processing and/or transmitting three-dimensional (3D) data are presented. A partitioning component receives captured 3D data associated with a 3D model of an interior environment and partitions the captured 3D data into at least one data chunk associated with at least a first level of detail and a second level of detail. A data component stores 3D data including at least the first level of detail and the second level of detail for the at least one data chunk. An output component transmits a portion of data from the at least one data chunk that is associated with the first level of detail or the second level of detail to a remote client device based on information associated with the first level of detail and the second level of detail.
Type: Grant
Filed: August 17, 2021
Date of Patent: August 29, 2023
Assignee: Matterport, Inc.
Inventors: Matthew Tschudy Bell, David Alan Gausebeck, Gregory William Coombe, Daniel Ford
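The partition/store/transmit pipeline in the abstract above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the uniform grid partitioning and point-decimation levels of detail are assumptions standing in for whatever chunking and LOD scheme the system actually uses.

```python
# Partition 3D points into spatial chunks, store two levels of detail
# per chunk (full and decimated), and transmit only the requested level.

def build_chunks(points, cell=1.0, lod_factors=(1, 4)):
    """Partition (x, y, z) points into grid chunks; LOD 0 keeps every
    point, LOD 1 keeps every 4th point, and so on per lod_factors."""
    chunks = {}
    for p in points:
        key = (int(p[0] // cell), int(p[1] // cell), int(p[2] // cell))
        chunks.setdefault(key, []).append(p)
    # each chunk stores one point list per level of detail
    return {key: [pts[::f] for f in lod_factors] for key, pts in chunks.items()}

def transmit(chunks, key, level):
    """Return only the portion of the chunk at the requested level."""
    return chunks[key][level]
```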
-
Publication number: 20230269353
Abstract: This application generally relates to capturing and aligning panoramic image and depth data. In one embodiment, a device is provided that comprises a housing and a plurality of cameras configured to capture two-dimensional images, wherein the cameras are arranged at different positions on the housing and have different azimuth orientations relative to a center point such that the cameras have a collective field-of-view spanning up to 360° horizontally. The device further comprises a plurality of depth detection components configured to capture depth data, wherein the depth detection components are arranged at different positions on the housing and have different azimuth orientations relative to the center point such that the depth detection components have the collective field-of-view spanning up to 360° horizontally.
Type: Application
Filed: April 27, 2023
Publication date: August 24, 2023
Applicant: Matterport, Inc.
Inventors: Kyle Simek, David Alan Gausebeck, Matthew Tschudy Bell
-
Patent number: 11734827
Abstract: Systems and methods for user guided iterative frame and scene segmentation are disclosed herein. The systems and methods can rely on overtraining a segmentation network on a frame. A disclosed method includes selecting a frame from a scene and generating a frame segmentation using the frame and a segmentation network. The method also includes displaying the frame and frame segmentation overlain on the frame, receiving a correction input on the frame, and training the segmentation network using the correction input. The method includes overtraining the segmentation network for the scene by iterating the above steps on the same frame or a series of frames from the scene.
Type: Grant
Filed: May 11, 2021
Date of Patent: August 22, 2023
Assignee: Matterport, Inc.
Inventor: Gary Bradski
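The iterate-on-one-frame loop the abstract describes can be sketched as control flow. The network, display, and correction input are stand-ins passed as callables; the point of the sketch is only the loop structure: segment, show, collect a correction, train, and repeat on the same frame until the user accepts.

```python
# User-guided overtraining loop: keep training the segmentation network
# on the *same* frame until its output matches the user's corrections.

def overtrain_on_frame(frame, segment, train, get_correction, max_iters=10):
    """Iteratively refine a segmentation network on a single frame.
    segment(frame) -> mask; get_correction(frame, mask) -> correction
    or None when the user accepts; train(frame, correction) updates
    the network."""
    for _ in range(max_iters):
        mask = segment(frame)                     # generate frame segmentation
        correction = get_correction(frame, mask)  # user marks errors on overlay
        if correction is None:                    # user accepts the result
            return mask
        train(frame, correction)                  # overtrain on this frame
    return segment(frame)
```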
-
Publication number: 20230260265
Abstract: Techniques are provided for increasing the accuracy of automated classifications produced by a machine learning engine. Specifically, the classification produced by a machine learning engine for one photo-realistic image is adjusted based on the classifications produced by the machine learning engine for other photo-realistic images that correspond to the same portion of a 3D model that has been generated based on the photo-realistic images. Techniques are also provided for using the classifications of the photo-realistic images that were used to create a 3D model to automatically classify portions of the 3D model. The classifications assigned to the various portions of the 3D model in this manner may also be used as a factor for automatically segmenting the 3D model.
Type: Application
Filed: April 18, 2023
Publication date: August 17, 2023
Applicant: Matterport, Inc.
Inventors: Gunnar Hovden, Mykhaylo Kurinnyy
-
Publication number: 20230243978
Abstract: An apparatus comprising a housing, a mount configured to be coupled to a motor to horizontally move the apparatus, a wide-angle lens coupled to the housing, the wide-angle lens being positioned above the mount thereby being along an axis of rotation, the axis of rotation being the axis along which the apparatus rotates, an image capture device within the housing, the image capture device configured to receive two-dimensional images of the environment through the wide-angle lens, and a LiDAR device within the housing, the LiDAR device configured to generate depth data based on the environment.
Type: Application
Filed: April 3, 2023
Publication date: August 3, 2023
Applicant: Matterport, Inc.
Inventors: David Alan Gausebeck, Kirk Stromberg, Louis D. Marzano, David Proctor, Naoto Sakakibara, Simeon Trieu, Kevin Kane, Simon Wynn
-
Patent number: 11682103
Abstract: Systems, computer-implemented methods, apparatus and/or computer program products are provided that facilitate improving the accuracy of global positioning system (GPS) coordinates of indoor photos. The disclosed subject matter further provides systems, computer-implemented methods, apparatus and/or computer program products that facilitate generating exterior photos of structures based on GPS coordinates of indoor photos.
Type: Grant
Filed: April 27, 2021
Date of Patent: June 20, 2023
Assignee: Matterport, Inc.
Inventors: Gunnar Hovden, Scott Adams
-
Patent number: 11677920
Abstract: This application generally relates to capturing and aligning panoramic image and depth data. In one embodiment, a device is provided that comprises a housing and a plurality of cameras configured to capture two-dimensional images, wherein the cameras are arranged at different positions on the housing and have different azimuth orientations relative to a center point such that the cameras have a collective field-of-view spanning up to 360° horizontally. The device further comprises a plurality of depth detection components configured to capture depth data, wherein the depth detection components are arranged at different positions on the housing and have different azimuth orientations relative to the center point such that the depth detection components have the collective field-of-view spanning up to 360° horizontally.
Type: Grant
Filed: September 3, 2019
Date of Patent: June 13, 2023
Assignee: Matterport, Inc.
Inventors: Kyle Simek, David Gausebeck, Matthew Tschudy Bell
-
Patent number: 11670076
Abstract: Techniques are provided for increasing the accuracy of automated classifications produced by a machine learning engine. Specifically, the classification produced by a machine learning engine for one photo-realistic image is adjusted based on the classifications produced by the machine learning engine for other photo-realistic images that correspond to the same portion of a 3D model that has been generated based on the photo-realistic images. Techniques are also provided for using the classifications of the photo-realistic images that were used to create a 3D model to automatically classify portions of the 3D model. The classifications assigned to the various portions of the 3D model in this manner may also be used as a factor for automatically segmenting the 3D model.
Type: Grant
Filed: April 20, 2021
Date of Patent: June 6, 2023
Assignee: Matterport, Inc.
Inventors: Gunnar Hovden, Mykhaylo Kurinnyy
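The adjustment described above (reconciling per-image classifications that map to the same portion of the 3D model) can be sketched with a simple voting rule. The majority vote is an illustrative assumption; the abstract covers the general idea of using co-located classifications as a correction signal, not this specific rule.

```python
from collections import Counter

# Reconcile per-image labels for one 3D-model portion by majority vote,
# then label every portion of the model from its covering images.

def reconcile_labels(labels_by_image):
    """labels_by_image: {image_id: label} for the photos covering one
    model portion. Returns the consensus label for that portion."""
    counts = Counter(labels_by_image.values())
    label, _ = counts.most_common(1)[0]
    return label

def classify_model_portions(coverage, image_labels):
    """coverage: {portion_id: [image_ids]}; image_labels: {image_id: label}.
    Returns {portion_id: consensus label}."""
    return {pid: reconcile_labels({i: image_labels[i] for i in imgs})
            for pid, imgs in coverage.items()}
```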
-
Patent number: 11640000
Abstract: An apparatus comprising a housing, a mount configured to be coupled to a motor to horizontally move the apparatus, a wide-angle lens coupled to the housing, the wide-angle lens being positioned above the mount thereby being along an axis of rotation, the axis of rotation being the axis along which the apparatus rotates, an image capture device within the housing, the image capture device configured to receive two-dimensional images of the environment through the wide-angle lens, and a LiDAR device within the housing, the LiDAR device configured to generate depth data based on the environment.
Type: Grant
Filed: May 23, 2022
Date of Patent: May 2, 2023
Assignee: Matterport, Inc.
Inventors: David Alan Gausebeck, Kirk Stromberg, Louis D. Marzano, David Proctor, Naoto Sakakibara, Simeon Trieu, Kevin Kane, Simon Wynn
-
Publication number: 20230117227
Abstract: An example method comprises applying learning (e.g., weights) developed in training a first model on a first data set of image data to training a second model on a second data set of sensor data. The first and second data sets may be from the same environment. The second data set has a greater number of channels than the first data set. Weights of layers determined in training the first model may be initially applied to training the second model on the second data set. Channels of the second data set equal in number to the channels of the first data set may be utilized for each of the layers, using the same weights from the first model. All or some of the channels may be applied in training the second model using those layers, with new weights determined in generating the second trained model.
Type: Application
Filed: October 20, 2022
Publication date: April 20, 2023
Applicant: Matterport, Inc.
Inventor: Matthieu Francois Perrinel
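The channel-wise weight transfer described above can be sketched concretely for a first convolutional layer. A sketch under stated assumptions: the kernel shape convention (out_channels, in_channels, k, k), the small random initialization for the extra channels, and the function name are all illustrative, not the published method's exact details.

```python
import numpy as np

# Seed a wider-input kernel with weights trained on fewer channels:
# the first in_ch channels reuse the trained weights, the extra
# channels start from small random weights, and all weights are then
# re-trained on the second data set.

def expand_input_weights(w_small, new_in_channels, rng=None):
    """w_small: (out_ch, in_ch, k, k) trained kernel. Returns a kernel
    accepting new_in_channels inputs, reusing w_small for the first
    in_ch channels."""
    rng = rng or np.random.default_rng(0)
    out_ch, in_ch, k, _ = w_small.shape
    w_big = rng.normal(0.0, 0.01, (out_ch, new_in_channels, k, k))
    w_big[:, :in_ch] = w_small  # transfer learned weights channel-wise
    return w_big
```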
-
Patent number: 11630214
Abstract: An apparatus comprising a housing, a mount configured to be coupled to a motor to horizontally move the apparatus, a wide-angle lens coupled to the housing, the wide-angle lens being positioned above the mount thereby being along an axis of rotation, the axis of rotation being the axis along which the apparatus rotates, an image capture device within the housing, the image capture device configured to receive two-dimensional images of the environment through the wide-angle lens, and a LiDAR device within the housing, the LiDAR device configured to generate depth data based on the environment.
Type: Grant
Filed: June 10, 2022
Date of Patent: April 18, 2023
Assignee: Matterport, Inc.
Inventors: David Alan Gausebeck, Kirk Stromberg, Louis D. Marzano, David Proctor, Naoto Sakakibara, Simeon Trieu, Kevin Kane, Simon Wynn
-
Publication number: 20230104674
Abstract: Example systems, methods, and non-transitory computer readable media are directed to obtaining a point cloud that represents an environment based at least in part on a plurality of points in three-dimensional space; determining corresponding classifications of points in the point cloud as ground or not-ground based at least in part on a plurality of ground classification algorithms; determining respective point cloud features associated with the points in the point cloud; determining respective cell features associated with a plurality of cells that segment the point cloud; generating feature data for a machine learning model based at least in part on one or more of: the classifications of the points based on the plurality of ground classification algorithms, the point cloud features, or the cell features; and classifying the points in the point cloud based at least in part on an output from the machine learning model.
Type: Application
Filed: October 6, 2022
Publication date: April 6, 2023
Applicant: Matterport, Inc.
Inventors: Kevin Balkoski, Edward Melcher, Eleanor Crane, Matthieu Francois Perrinel
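The feature-generation step described above (per-point votes from several ground classifiers, per-point features, and features of the grid cell containing each point, concatenated into rows for the downstream model) can be sketched as follows. The specific features chosen here (raw height and height above the cell's minimum) are illustrative assumptions.

```python
# Assemble one feature row per point: ground-classifier votes, a point
# feature (z), and a cell feature (height above the cell's minimum z,
# a common ground cue).

def build_feature_rows(points, algorithms, cell_size=5.0):
    """points: [(x, y, z), ...]; algorithms: callables returning 1 for
    ground and 0 for not-ground. Returns one feature row per point."""
    # per-cell feature: minimum z among points in the cell
    cells = {}
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        cells[key] = min(cells.get(key, z), z)
    rows = []
    for x, y, z in points:
        votes = [alg(x, y, z) for alg in algorithms]
        cell_min_z = cells[(int(x // cell_size), int(y // cell_size))]
        rows.append(votes + [z, z - cell_min_z])  # votes + point + cell features
    return rows
```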
-
Publication number: 20230098138
Abstract: An example method comprises simulating an environment including at least one simulated object, a simulated ray source, and a simulated receiver; simulating a plurality of rays emitting from the simulated ray source; tracking each ray in the environment; detecting changes for at least one ray that interacts with the at least one simulated object, the changes including a reflection from the at least one object; tracking the reflection of at least part of the ray from the at least one object in the environment; determining measurements for any of the plurality of rays that interact with the simulated receiver in the simulation, at least part of the ray being received by the receiver, the measurements including intensity for any of the plurality of rays that interact with the receiver; and generating synthetic point cloud data based on the measurements.
Type: Application
Filed: September 28, 2022
Publication date: March 30, 2023
Applicant: Matterport, Inc.
Inventors: Kevin Balkoski, Eleanor Crane
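The simulation described above can be sketched end to end: rays leave a simulated source, any ray that intersects the simulated object reflects back toward a co-located receiver, and the measured range and intensity yield one synthetic point per returned ray. The single wall plane and the inverse-square intensity model are illustrative assumptions, not the published method's ray or reflectance model.

```python
import math

# Trace a fan of rays from the origin in the XY plane; rays that hit a
# wall at x = wall_x return to the co-located receiver, producing one
# synthetic (x, y, intensity) point each.

def simulate_scan(wall_x=5.0, num_rays=8, power=1.0):
    points = []
    for i in range(num_rays):
        # fan from -45 deg to +45 deg
        angle = -math.pi / 4 + i * (math.pi / 2) / (num_rays - 1)
        dx, dy = math.cos(angle), math.sin(angle)
        if dx <= 0:
            continue  # ray travels away from the wall; no return
        t = wall_x / dx                 # range to the intersection point
        intensity = power / (t * t)     # inverse-square return model
        points.append((wall_x, t * dy, intensity))
    return points
```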
-
Publication number: 20230079307
Abstract: This application generally relates to defining, displaying and interacting with tags in a 3D model. In an embodiment, a method includes generating, by a system including a processor, a three-dimensional model of an environment based on sets of aligned three-dimensional data captured from the environment, and associating tags with defined locations of the three-dimensional model, wherein the tags are respectively represented by tag icons that are spatially aligned with the defined locations of the three-dimensional model as included in different representations of the three-dimensional model rendered via an interface of a device, wherein the different representations correspond to different perspectives of the three-dimensional model, and wherein selection of the tag icons causes the tags respectively associated therewith to be rendered at the device.
Type: Application
Filed: August 16, 2022
Publication date: March 16, 2023
Applicant: Matterport, Inc.
Inventors: James Mildrew, Matthew Tschudy Bell, Dustin Michael Cook, Preston Cowley, Lester Lee, Peter McColgan, Daniel Prochazka, Brian Schulman, James Sundra, Alan Tan
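Keeping a tag icon spatially aligned across different perspectives, as the abstract describes, amounts to re-projecting the tag's fixed 3D anchor through each view's camera. A minimal sketch, assuming a simple pinhole camera (focal length f, camera at the origin looking down +Z); the function name and defaults are hypothetical.

```python
# Project a tag's 3D anchor point into a rendered view so its icon
# lands at the correct screen position for that perspective.

def project_tag(anchor, f=500.0, width=800, height=600):
    """anchor: (x, y, z) in camera coordinates. Returns the (u, v)
    pixel where the tag icon should be drawn, or None if the anchor
    is behind the camera in this view."""
    x, y, z = anchor
    if z <= 0:
        return None
    u = width / 2 + f * x / z
    v = height / 2 + f * y / z
    return (u, v)
```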