Patents by Inventor RYAN R. FINK

RYAN R. FINK has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240111927
    Abstract: Systems and methods for creating a digital twin of a structure, including facilitating communications during construction and during the structure's lifetime. Building documents such as construction plans along with vendor and contractor specifications and information may be ingested and used to automatically create a digital model of the structure. The digital model may be used to facilitate communication between various parties responsible for construction and subsequent maintenance of the structure. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: October 3, 2022
    Publication date: April 4, 2024
    Applicant: Digs Space, Inc.
    Inventors: Ryan R. Fink, Ty Frackiewicz
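
The ingestion flow described in 20240111927 can be pictured with a minimal sketch. The class names, fields, and record format below are hypothetical stand-ins, and real ingestion of construction plans would need document parsing (OCR/CAD tooling) not shown here.

```python
# Minimal sketch (hypothetical names) of ingesting building documents
# into a digital-twin record, per the abstract above. Real parsing of
# construction plans would require OCR/CAD tooling not shown here.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str            # e.g. "HVAC unit"
    vendor: str          # vendor drawn from ingested specifications
    contractor: str      # party responsible for install/maintenance

@dataclass
class DigitalTwin:
    address: str
    components: list[Component] = field(default_factory=list)

    def ingest_spec(self, record: dict) -> None:
        """Map one parsed vendor/contractor record onto the model."""
        self.components.append(
            Component(record["name"], record["vendor"], record["contractor"])
        )

    def contacts_for(self, component_name: str) -> list[str]:
        """Route a maintenance question to the responsible parties."""
        return [c.contractor for c in self.components if c.name == component_name]

twin = DigitalTwin("123 Main St")
twin.ingest_spec({"name": "HVAC unit", "vendor": "Acme", "contractor": "CoolCo"})
print(twin.contacts_for("HVAC unit"))  # ['CoolCo']
```
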
  • Publication number: 20240111928
    Abstract: Systems and methods for creating a digital twin of a structure, including facilitating communications during construction and during the structure's lifetime. Building documents such as construction plans along with vendor and contractor specifications and information may be ingested and used to automatically create a digital model of the structure. The digital model may be used to facilitate communication between various parties responsible for construction and subsequent maintenance of the structure. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: April 7, 2023
    Publication date: April 4, 2024
    Applicant: Digs Space, Inc.
    Inventors: Ryan R. Fink, Ty Frackiewicz
  • Publication number: 20240111914
    Abstract: Access to a digital twin of a building or other structure on a limited basis may be obtained by scanning a marker, such as an optical marker or a radio frequency marker (e.g., an RFID tag). In embodiments, the marker encodes an identifier that is associated with the building or structure, which allows access to the digital twin. A remote device can obtain the identifier and transmit it to a remote server hosting the digital twin to obtain access according to predetermined access permissions. Access may be limited to times when the remote device is in geographic proximity to the building or structure. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: April 17, 2023
    Publication date: April 4, 2024
    Applicant: Digs Space, Inc.
    Inventors: Ryan R. Fink, Ty Frackiewicz
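
A minimal sketch of the proximity-gated access check in 20240111914 follows. The registry schema, role model, and 200 m threshold are assumptions for illustration; the patent leaves access permissions and proximity limits to the embodiment.

```python
# Hypothetical sketch of the proximity-gated access check described in
# 20240111914: a scanned marker yields an identifier, and the server
# grants access only while the device is near the structure.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

REGISTRY = {  # identifier -> structure location and permitted roles (assumed schema)
    "twin-42": {"lat": 45.52, "lon": -122.68, "roles": {"inspector", "owner"}},
}

def grant_access(identifier, device_lat, device_lon, role, max_m=200.0):
    entry = REGISTRY.get(identifier)
    if entry is None or role not in entry["roles"]:
        return False
    return haversine_m(device_lat, device_lon, entry["lat"], entry["lon"]) <= max_m

print(grant_access("twin-42", 45.5201, -122.6801, "inspector"))  # True
```
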
  • Publication number: 20240111929
    Abstract: Systems and methods for creating a digital twin of a structure, including facilitating communications during construction and during the structure's lifetime. Building documents such as construction plans along with vendor and contractor specifications and information may be ingested and used to automatically create a digital model of the structure. The digital model may be used to facilitate communication between various parties responsible for construction and subsequent maintenance of the structure. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: April 7, 2023
    Publication date: April 4, 2024
    Applicant: Digs Space, Inc.
    Inventors: Ryan R. Fink, Ty Frackiewicz
  • Publication number: 20240112420
    Abstract: A digital twin that is representative of a building may be viewed using augmented reality or virtual reality. In embodiments, a device may establish a connection with a remote server that hosts a digital twin and portfolio of a building or structure, and transmit its position to the remote server. In response, the remote server may transmit one or more augmented reality (AR) objects to the device that correspond to the device's position and view of the structure, the AR objects reflecting information tagged to the digital twin. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: April 17, 2023
    Publication date: April 4, 2024
    Applicant: Digs Space, Inc.
    Inventors: Ryan R. Fink, Ty Frackiewicz
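
The device-to-server exchange in 20240112420 can be reduced to a selection problem: given the device's reported position, return the AR objects tagged to the digital twin near that viewpoint. The `ARObject` schema and the simple radius test below are assumptions; a real server would also account for the device's view direction.

```python
# Illustrative sketch (hypothetical schema) of the exchange in 20240112420:
# the device reports its position, and the server returns AR objects
# tagged to the digital twin that fall near the reported viewpoint.
from dataclasses import dataclass

@dataclass
class ARObject:
    label: str
    x: float
    y: float
    z: float   # anchor position in the twin's coordinate frame

TWIN_OBJECTS = [
    ARObject("Water shutoff valve", 1.0, 0.5, 2.0),
    ARObject("Breaker panel", 10.0, 1.2, -3.0),
]

def objects_near(device_pos, radius=5.0):
    """Server-side selection: objects anchored within `radius` meters."""
    dx, dy, dz = device_pos
    return [
        o for o in TWIN_OBJECTS
        if ((o.x - dx) ** 2 + (o.y - dy) ** 2 + (o.z - dz) ** 2) ** 0.5 <= radius
    ]

for obj in objects_near((0.0, 1.0, 0.0)):
    print(obj.label)  # Water shutoff valve
```
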
  • Patent number: 11842444
    Abstract: Embodiments include systems and methods for visualizing the position of a capturing device within a 3D mesh, generated from a video stream from the capturing device. A capturing device may provide a video stream along with point cloud data and camera pose data. This video stream, point cloud data, and camera pose data are then used to progressively generate a 3D mesh. The camera pose data and point cloud data can further be used, in conjunction with a SLAM algorithm, to indicate the position and orientation of the capturing device within the generated 3D mesh.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: December 12, 2023
    Assignee: STREEM, LLC
    Inventors: Sean M. Adkinson, Teressa Chizeck, Ryan R. Fink
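
A minimal sketch of the progressive accumulation described in 11842444, assuming the capture pipeline delivers per-frame point clouds and 4x4 camera-to-world pose matrices, as SLAM-style tracking commonly does. Meshing of the accumulated points and rendering of the position indicator are omitted.

```python
# A minimal sketch, not the patented implementation: accumulating point-cloud
# batches into a growing vertex set while tracking the capturing camera's
# position from its 4x4 pose matrix (as a SLAM pipeline would supply).
import numpy as np

class ProgressiveMesh:
    def __init__(self):
        self.vertices = np.empty((0, 3))
        self.camera_trail = []  # camera positions over time, in mesh coords

    def add_frame(self, points: np.ndarray, pose: np.ndarray) -> None:
        """points: (N,3) cloud for this frame; pose: 4x4 camera-to-world."""
        self.vertices = np.vstack([self.vertices, points])
        self.camera_trail.append(pose[:3, 3])  # translation column = position

    def camera_position(self) -> np.ndarray:
        return self.camera_trail[-1]

mesh = ProgressiveMesh()
pose = np.eye(4); pose[:3, 3] = [0.5, 1.6, -2.0]
mesh.add_frame(np.random.rand(100, 3), pose)
print(mesh.camera_position())  # [ 0.5  1.6 -2. ]
```
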
  • Patent number: 11830142
    Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other image captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are disclosed.
    Type: Grant
    Filed: March 8, 2022
    Date of Patent: November 28, 2023
    Assignee: STREEM, LLC
    Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink, Sean M. Adkinson
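
One concrete step implied by 11830142 is turning a per-frame depth map into 3D points that can be fused into a mesh. The sketch below performs the standard pinhole back-projection; the intrinsics are placeholder values, and the fusion and object-detection stages are not shown.

```python
# Hedged sketch of one step implied by the abstract: back-projecting a
# per-frame depth map into 3D points using pinhole intrinsics, the raw
# material for mesh fusion. Intrinsic values here are placeholders.
import numpy as np

def depth_to_points(depth: np.ndarray, fx, fy, cx, cy) -> np.ndarray:
    """depth: (H,W) meters -> (H*W, 3) camera-space points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((4, 4), 2.0)            # toy 4x4 depth map, 2 m everywhere
pts = depth_to_points(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```
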
  • Patent number: 11830213
    Abstract: Embodiments include systems and methods for remotely measuring distances in an environment captured by a device. The device captures a video stream of the environment along with AR data that may include camera pose information and/or depth information, and transmits the video stream and AR data to a remote device. The remote device receives a selection of a first point and a second point within the video stream and, using the AR data, calculates a distance between the first and second points. The first and second points may be at different locations not simultaneously in view of the device. Other embodiments may capture additional points to compute areas and/or volumes.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: November 28, 2023
    Assignee: STREEM, LLC
    Inventors: Sean M. Adkinson, Ryan R. Fink, Brian Gram, Nicholas Degroot, Alexander Fallenstedt
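
The measurement in 11830213 reduces to unprojecting each selected pixel with its frame's depth and camera pose and taking the distance between the resulting world points, which is why the two points need not be in view simultaneously. A minimal sketch, with placeholder intrinsics and poses:

```python
# Minimal sketch of the measurement math in 11830213: each selected pixel
# is unprojected with its frame's depth and camera pose into world space,
# so the two points need not be visible in the same frame.
import numpy as np

def unproject(u, v, depth_m, fx, fy, cx, cy, cam_to_world):
    """Pixel + depth -> homogeneous camera point -> world point."""
    cam = np.array([(u - cx) * depth_m / fx, (v - cy) * depth_m / fy, depth_m, 1.0])
    return (cam_to_world @ cam)[:3]

K = dict(fx=500.0, fy=500.0, cx=320.0, cy=240.0)      # placeholder intrinsics
pose_a = np.eye(4)                                    # frame A camera-to-world
pose_b = np.eye(4); pose_b[:3, 3] = [1.0, 0.0, 0.0]   # camera moved 1 m for frame B

p1 = unproject(320, 240, 2.0, **K, cam_to_world=pose_a)
p2 = unproject(320, 240, 2.0, **K, cam_to_world=pose_b)
print(np.linalg.norm(p2 - p1))  # 1.0 m between the two world points
```
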
  • Patent number: 11823310
    Abstract: Methods for replacing or obscuring objects detected in an image or video on the basis of image context are disclosed. Context of the image or video may be obtained via pattern recognition on audio associated with the image or video, by user-supplied context, and/or by context derived from image capture, such as the nature of an application used to capture the image. The image or video may be analyzed for object detection and recognition, and depending upon policy, the image or video context used to select objects related or unrelated to the context for replacement or obfuscation. The selected objects may then be replaced with generic objects rendered from 3D models, or blurred or otherwise obscured.
    Type: Grant
    Filed: September 6, 2019
    Date of Patent: November 21, 2023
    Assignee: STREEM, LLC
    Inventors: Ryan R. Fink, Sean M. Adkinson
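
The selection step in 11823310 can be sketched as a policy over detected object labels. The context keywords and the two-way policy below are illustrative stand-ins for the pattern-recognition and policy machinery the patent describes.

```python
# Illustrative sketch (hypothetical policy) of the selection step in
# 11823310: given a derived context and detected objects, a policy picks
# which detections to obscure before rendering.
CONTEXT_KEYWORDS = {"plumbing": {"sink", "pipe", "water heater"}}

def select_for_obfuscation(context: str, detections: list[str],
                           hide_unrelated: bool = True) -> list[str]:
    related = CONTEXT_KEYWORDS.get(context, set())
    if hide_unrelated:
        return [d for d in detections if d not in related]   # hide off-topic items
    return [d for d in detections if d in related]           # or hide on-topic ones

detections = ["sink", "family photo", "pipe", "laptop"]
print(select_for_obfuscation("plumbing", detections))
# ['family photo', 'laptop'] -> candidates for blurring or replacement
```
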
  • Patent number: 11790025
    Abstract: Methods and systems disclosed herein are directed to detection and recognition of items of data on labels applied to equipment and identifying metadata labels for the items of data using NLP. Embodiments may include identifying one or more items of data on an image of a label associated with a piece of equipment, determining, using NLP on the one or more items of data of the image, one or more metadata associated, respectively, with the identified one or more items of data, and outputting at least one of the one or more metadata and associated items of data.
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: October 17, 2023
    Assignee: STREEM, LLC
    Inventors: Pavan K. Kamaraju, Ghislain Fouodji Tasse, Flora Ponjou Tasse, Sean M. Adkinson, Ryan R. Fink
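
As a toy stand-in for the NLP step in 11790025, the sketch below classifies items read off an equipment label into metadata fields with fixed patterns. The field names and regexes are assumptions; the patent describes using NLP rather than hand-written rules.

```python
# A toy stand-in for the NLP step in 11790025: classify items read off an
# equipment label into metadata fields. The patterns are illustrative;
# the patent describes using NLP rather than fixed regexes.
import re

PATTERNS = {
    "serial_number": re.compile(r"^SN[:\s]*\w+", re.I),
    "voltage": re.compile(r"^\d+(\.\d+)?\s*V$", re.I),
    "model": re.compile(r"^(model|mdl)[:\s]*\S+", re.I),
}

def label_items(items: list[str]) -> dict[str, str]:
    out = {}
    for item in items:
        for meta, pat in PATTERNS.items():
            if pat.match(item.strip()):
                out[meta] = item
                break
    return out

print(label_items(["SN: A12345", "Model: XK-9", "240 V"]))
# {'serial_number': 'SN: A12345', 'model': 'Model: XK-9', 'voltage': '240 V'}
```
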
  • Patent number: 11783546
    Abstract: A method for creating and storing a captured image and associated spatial data and augmented reality (AR) data in a file that allows subsequent manipulation and processing of AR objects is disclosed. In embodiments, one or more frames are extracted from a video stream, along with spatial information about the camera capturing the video stream. The one or more frames are analyzed in conjunction with the spatial information to calculate a point cloud of depth data. The one or more frames are stored in a file in a first layer, and the point cloud is stored in the file in a second layer. In some embodiments, one or more AR objects are stored in a third layer.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: October 10, 2023
    Assignee: STREEM, LLC
    Inventors: Ryan R. Fink, Sean M. Adkinson
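
The layered file described in 11783546 can be approximated with any container that keeps named layers. The sketch below uses a ZIP archive and JSON payloads purely for illustration; the patent does not specify a container format.

```python
# Hedged sketch of a layered container like the one described in 11783546,
# using a ZIP archive as the file format (the patent does not specify ZIP):
# layer 1 holds frames, layer 2 the point cloud, layer 3 AR objects.
import io, json, zipfile

def write_capture(path, frame_jpegs: list[bytes], point_cloud: list, ar_objects: list):
    with zipfile.ZipFile(path, "w") as zf:
        for i, jpg in enumerate(frame_jpegs):
            zf.writestr(f"layer1/frame_{i:04d}.jpg", jpg)      # image layer
        zf.writestr("layer2/point_cloud.json", json.dumps(point_cloud))
        zf.writestr("layer3/ar_objects.json", json.dumps(ar_objects))

buf = io.BytesIO()
write_capture(buf, [b"\xff\xd8fake-jpeg"], [[0.1, 0.2, 1.5]],
              [{"type": "arrow", "anchor": [0.1, 0.2, 1.5]}])
print(zipfile.ZipFile(buf).namelist())
# ['layer1/frame_0000.jpg', 'layer2/point_cloud.json', 'layer3/ar_objects.json']
```
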
  • Publication number: 20230245391
    Abstract: Embodiments include systems and methods for creation of a 3D mesh from a video stream or a sequence of frames. A sparse point cloud is first created from the video stream, which is then densified per frame by comparison with spatially proximate frames. A 3D mesh is then created from the densified depth maps, and the mesh is textured by projecting the images from the video stream or sequence of frames onto the mesh. Metric scale of the depth maps may be estimated where direct measurements are not able to be measured or calculated using a machine learning depth estimation network.
    Type: Application
    Filed: April 5, 2023
    Publication date: August 3, 2023
    Inventors: Sean M. Adkinson, Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink
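
The texturing step named in 20230245391 projects mesh vertices back into source frames to sample color. A minimal sketch of that projection, with placeholder intrinsics and a world-to-camera pose:

```python
# Minimal sketch of the texturing step named in 20230245391: projecting a
# mesh vertex into a source frame with the pinhole model to sample color.
# Intrinsics and the world-to-camera pose are placeholder values.
import numpy as np

def project(vertex_w, world_to_cam, fx, fy, cx, cy):
    """World-space vertex -> (u, v) pixel in the source frame, or None."""
    v = world_to_cam @ np.append(vertex_w, 1.0)
    if v[2] <= 0:                      # behind the camera: not visible
        return None
    return (fx * v[0] / v[2] + cx, fy * v[1] / v[2] + cy)

w2c = np.eye(4)                        # camera at origin, looking down +z
print(project(np.array([0.0, 0.0, 2.0]), w2c, 500.0, 500.0, 320.0, 240.0))
# (320.0, 240.0): the vertex lands at the principal point
```
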
  • Patent number: 11715302
    Abstract: Methods for automatically tagging one or more images and/or video clips using an audio stream are disclosed. The audio stream may be processed using an automatic speech recognition algorithm, to extract possible keywords. The image(s) and/or video clip(s) may then be tagged with the possible keywords. In some embodiments, the image(s) and/or video clip(s) may be tagged automatically. In other embodiments, a user may be presented with a list of possible keywords extracted from the audio stream, from which the user may then select to manually tag the image(s) and/or video clip(s).
    Type: Grant
    Filed: August 21, 2019
    Date of Patent: August 1, 2023
    Assignee: STREEM, LLC
    Inventors: Ryan R. Fink, Sean M. Adkinson
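
A toy sketch of the tagging flow in 11715302 follows. A real system would run automatic speech recognition to produce the transcript; here it is given, a stopword filter stands in for keyword extraction, and tags are applied either automatically or from a user-approved subset.

```python
# Toy sketch of the tagging flow in 11715302. A real system would run an
# automatic speech recognition model; here the transcript is given, and
# keywords are proposed by filtering stopwords, then applied as tags.
STOPWORDS = {"the", "a", "an", "is", "on", "and", "this", "of"}

def propose_keywords(transcript: str) -> list[str]:
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    return sorted({w for w in words if w and w not in STOPWORDS})

def tag_clip(clip: dict, keywords: list[str], approved=None) -> dict:
    """Tag automatically, or only with the user-approved subset."""
    clip["tags"] = keywords if approved is None else [k for k in keywords if k in approved]
    return clip

kws = propose_keywords("The water heater is leaking on the left side.")
print(tag_clip({"file": "clip01.mp4"}, kws, approved={"water", "heater", "leaking"}))
# {'file': 'clip01.mp4', 'tags': ['heater', 'leaking', 'water']}
```
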
  • Patent number: 11640694
    Abstract: Embodiments include systems and methods for creation of a 3D mesh from a video stream or a sequence of frames. A sparse point cloud is first created from the video stream, which is then densified per frame by comparison with spatially proximate frames. A 3D mesh is then created from the densified depth maps, and the mesh is textured by projecting the images from the video stream or sequence of frames onto the mesh. Metric scale of the depth maps may be estimated where direct measurements are not able to be measured or calculated using a machine learning depth estimation network.
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: May 2, 2023
    Assignee: STREEM, LLC
    Inventors: Sean M. Adkinson, Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink
  • Publication number: 20220392167
    Abstract: Embodiments include systems and methods for visualizing the position of a capturing device within a 3D mesh, generated from a video stream from the capturing device. A capturing device may provide a video stream along with point cloud data and camera pose data. This video stream, point cloud data, and camera pose data are then used to progressively generate a 3D mesh. The camera pose data and point cloud data can further be used, in conjunction with a SLAM algorithm, to indicate the position and orientation of the capturing device within the generated 3D mesh.
    Type: Application
    Filed: June 2, 2021
    Publication date: December 8, 2022
    Inventors: Sean M. Adkinson, Teressa Chizeck, Ryan R. Fink
  • Publication number: 20220189118
    Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other image captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are disclosed.
    Type: Application
    Filed: March 8, 2022
    Publication date: June 16, 2022
    Applicant: STREEM, INC.
    Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink, Sean M. Adkinson
  • Publication number: 20220138979
    Abstract: Embodiments include systems and methods for remotely measuring distances in an environment captured by a device. The device captures a video stream of the environment along with AR data that may include camera pose information and/or depth information, and transmits the video stream and AR data to a remote device. The remote device receives a selection of a first point and a second point within the video stream and, using the AR data, calculates a distance between the first and second points. The first and second points may be at different locations not simultaneously in view of the device. Other embodiments may capture additional points to compute areas and/or volumes.
    Type: Application
    Filed: November 5, 2020
    Publication date: May 5, 2022
    Inventors: Sean M. Adkinson, Ryan R. Fink, Brian Gram, Nicholas Degroot, Alexander Fallenstedt
  • Patent number: 11323657
    Abstract: Methods and systems for the remote delivery of professional services, using augmented reality (AR), are disclosed. In embodiments, a user scans or acquires a physical marker, such as an optical code or radio beacon. The physical marker provides information to the user's device to allow it to connect to a server. The physical marker may also provide contextual data, possibly in conjunction with contextual data from the user's device. The server then provides a list of professionals on the basis of the contextual data. The user selects a professional, and the server initiates a video session between a user device and a professional device, where the professional can superimpose one or more AR objects on the video, to be displayed on the user device.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: May 3, 2022
    Assignee: STREEM, INC.
    Inventors: Ryan R. Fink, Sean M. Adkinson
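
The matching step in 11323657 can be sketched as a filter over a roster of professionals using contextual data from the scanned marker and the user's device. The roster schema and matching fields below are hypothetical; session setup and AR annotation are not shown.

```python
# Hypothetical sketch of the matching step in 11323657: contextual data
# decoded from a scanned marker (plus device context) filters a roster of
# professionals before a video session is initiated.
PROFESSIONALS = [
    {"name": "A. Plumber", "skills": {"plumbing"}, "region": "OR"},
    {"name": "B. Electrician", "skills": {"electrical"}, "region": "OR"},
]

def match_professionals(marker_context: dict, device_context: dict) -> list[dict]:
    need, region = marker_context["category"], device_context["region"]
    return [p for p in PROFESSIONALS
            if need in p["skills"] and p["region"] == region]

choices = match_professionals({"category": "plumbing"}, {"region": "OR"})
print([p["name"] for p in choices])  # ['A. Plumber'] -> user picks, session starts
```
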
  • Patent number: 11270505
    Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other image captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are disclosed.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: March 8, 2022
    Assignee: STREEM, INC.
    Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink, Sean M. Adkinson
  • Publication number: 20210295599
    Abstract: Embodiments include systems and methods for creation of a 3D mesh from a video stream or a sequence of frames. A sparse point cloud is first created from the video stream, which is then densified per frame by comparison with spatially proximate frames. A 3D mesh is then created from the densified depth maps, and the mesh is textured by projecting the images from the video stream or sequence of frames onto the mesh. Metric scale of the depth maps may be estimated where direct measurements are not able to be measured or calculated using a machine learning depth estimation network.
    Type: Application
    Filed: March 22, 2021
    Publication date: September 23, 2021
    Inventors: Sean M. Adkinson, Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink