Patents by Inventor RYAN R. FINK
RYAN R. FINK has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240386684
Abstract: A digital model of a structure may be aligned with a device view of the structure using a location marker. In embodiments, the location marker may be an optical marker, a radio beacon, or one or more objects that can be recognized as a unique pattern. Device orientation information is used in conjunction with the location marker to align the digital model, so that the device can overlay one or more AR objects or other information on a view of the structure with relative precision. Other embodiments may be described and/or claimed.
Type: Application
Filed: May 16, 2023
Publication date: November 21, 2024
Applicant: Digs Space, Inc.
Inventors: Ryan R. Fink, Ty Frackiewicz
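To illustrate the alignment step, here is a minimal sketch, assuming a single marker whose position is known in both model and device/world coordinates and a device heading from a compass; the function names and the simple yaw-only rotation are illustrative, not the patented method.

```python
import numpy as np

def rotation_about_z(yaw_radians: float) -> np.ndarray:
    """Rotation matrix for a device heading about the vertical axis."""
    c, s = np.cos(yaw_radians), np.sin(yaw_radians)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def model_to_world_transform(marker_pos_model: np.ndarray,
                             marker_pos_world: np.ndarray,
                             device_yaw: float) -> np.ndarray:
    """Build a 4x4 transform mapping model coordinates to world coordinates.

    marker_pos_model: the marker's known position in the digital model.
    marker_pos_world: the marker's observed position in device/world space.
    device_yaw: device orientation (e.g. from the compass), in radians.
    """
    R = rotation_about_z(device_yaw)
    t = marker_pos_world - R @ marker_pos_model
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Any model-space point (e.g. an AR annotation) can then be placed in the view:
T = model_to_world_transform(np.array([2.0, 1.0, 0.0]),
                             np.array([0.5, 0.2, 0.0]),
                             np.deg2rad(30))
annotation_model = np.array([2.5, 1.0, 1.2, 1.0])  # homogeneous coordinates
print(T @ annotation_model)
```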
-
Patent number: 12131297
Abstract: Methods and systems for automatic detection and recognition of visual tags on equipment are disclosed. The make and model of an object such as an appliance or consumer device is recognized from an image or video using object detection. This make and model information may then be used to direct a user to locate an equipment information tag that includes model and serial number information. Object recognition and optical character recognition can then be employed to extract the model and serial number from the tag, along with any other relevant information. The extracted information may then be used to locate service and/or operation information.
Type: Grant
Filed: December 11, 2019
Date of Patent: October 29, 2024
Assignee: STREEM, LLC
Inventors: Ryan R. Fink, Sean M. Adkinson
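A hedged sketch of the tag-reading flow: crop the equipment information tag, OCR it, and pull out the model and serial fields. The object detector is out of scope here (the crop box is supplied by the caller), and the field regexes are illustrative assumptions since real tags vary by manufacturer.

```python
import re
import cv2
import pytesseract  # assumes a local Tesseract install; any OCR engine would do

def read_equipment_tag(image_path: str, tag_box: tuple) -> dict:
    """Extract model and serial numbers from the tag region (x, y, w, h)."""
    image = cv2.imread(image_path)
    x, y, w, h = tag_box
    tag = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(tag)
    model = re.search(r"MODEL[:#\s]+([A-Z0-9-]+)", text, re.IGNORECASE)
    serial = re.search(r"(?:SERIAL(?:\s*(?:NO|NUMBER))?|S/N)[:#\s.]+([A-Z0-9-]+)",
                       text, re.IGNORECASE)
    return {"model": model.group(1) if model else None,
            "serial": serial.group(1) if serial else None,
            "raw_text": text}

# Example (hypothetical image and box from an upstream detector):
# info = read_equipment_tag("appliance.jpg", (150, 300, 400, 200))
```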
-
Patent number: 12067683
Abstract: Methods for placement of location-persistent 3D objects or annotations in an augmented reality scene are disclosed. By capturing location data along with device spatial orientation and the placement of 3D objects or annotations, the augmented reality scene can be recreated and manipulated. Placed 3D objects or annotations can reappear in a subsequent capture by the same or a different device when brought back to the location of the initial capture and placement of objects. Still further, the placed 3D objects or annotations may be supplemented with additional objects or annotations, or the placed objects or annotations may be removed or modified.
Type: Grant
Filed: September 13, 2019
Date of Patent: August 20, 2024
Assignee: STREEM, LLC
Inventors: Ryan R. Fink, Sean M. Adkinson
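An illustrative sketch of persisting an annotation against a real-world location so a later session (on the same or a different device) can restore it. The schema is an assumption; the patent describes the idea, not this format.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PersistedAnnotation:
    label: str
    latitude: float     # device GPS fix at capture time
    longitude: float
    heading_deg: float  # device orientation when the annotation was placed
    offset_m: tuple     # annotation position relative to the device (x, y, z)

def save_annotations(annotations: list, path: str) -> None:
    with open(path, "w") as f:
        json.dump([asdict(a) for a in annotations], f, indent=2)

def load_annotations(path: str) -> list:
    with open(path) as f:
        return [PersistedAnnotation(**record) for record in json.load(f)]

# A device returning to the location can reload, then re-anchor each annotation
# by comparing its own GPS fix and heading against the stored values.
save_annotations([PersistedAnnotation("leak here", 45.52, -122.68, 90.0,
                                      (0.4, 1.2, 2.0))], "annotations.json")
print(load_annotations("annotations.json"))
```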
-
Patent number: 12039734
Abstract: Methods for improving object recognition using depth data are disclosed. An image is captured of a 3-D scene along with depth data, such as in the form of a point cloud. The depth data is correlated with the image of the captured scene, such as by determining the frame of reference of each of the image and the depth data, thereby allowing the depth data to be mapped to the correct corresponding pixels of the image. Object recognition on the image is then improved by employing the correlated depth data. The depth data may be captured contemporaneously with the image of the 3-D scene, such as by using photogrammetry, or at a different time.
Type: Grant
Filed: August 21, 2019
Date of Patent: July 16, 2024
Assignee: STREEM, LLC
Inventors: Ryan R. Fink, Sean M. Adkinson
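The correlation step can be illustrated with a standard pinhole projection: each 3D point, once expressed in the camera frame, maps to a pixel, giving a per-pixel depth channel a recognizer can consume. This is a minimal sketch assuming known intrinsics and a shared frame of reference, not the claimed method.

```python
import numpy as np

def project_points_to_depth_map(points_cam: np.ndarray, fx, fy, cx, cy,
                                width: int, height: int) -> np.ndarray:
    """points_cam: (N, 3) points already expressed in the camera frame."""
    depth = np.full((height, width), np.nan)
    X, Y, Z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    valid = Z > 0                        # keep points in front of the camera
    u = np.round(fx * X[valid] / Z[valid] + cx).astype(int)
    v = np.round(fy * Y[valid] / Z[valid] + cy).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth[v[inside], u[inside]] = Z[valid][inside]
    return depth  # per-pixel depth, usable as an extra channel for recognition

cloud = np.random.rand(1000, 3) * [2, 2, 5] + [0, 0, 1]  # synthetic point cloud
dm = project_points_to_depth_map(cloud, 500, 500, 320, 240, 640, 480)
print(np.nanmin(dm), np.nanmax(dm))
```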
-
Publication number: 20240111927
Abstract: Systems and methods for creating a digital twin of a structure, including facilitating communications during construction and during the structure's lifetime. Building documents such as construction plans along with vendor and contractor specifications and information may be ingested and used to automatically create a digital model of the structure. The digital model may be used to facilitate communication between various parties responsible for construction and subsequent maintenance of the structure. Other embodiments may be described and/or claimed.
Type: Application
Filed: October 3, 2022
Publication date: April 4, 2024
Applicant: Digs Space, Inc.
Inventors: Ryan R. Fink, Ty Frackiewicz
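A loose sketch of the ingestion idea: parsed building documents are attached to components of a digital-twin record that all parties can query. The record shape and the `ingest` interface are purely illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    address: str
    components: dict = field(default_factory=dict)  # component name -> documents

    def ingest(self, component: str, document: dict) -> None:
        """Attach a vendor/contractor document to a component of the structure."""
        self.components.setdefault(component, []).append(document)

twin = DigitalTwin("123 Main St")
twin.ingest("water heater", {"vendor": "Acme", "spec": "50 gal",
                             "installed": "2023-04-07"})
twin.ingest("roof", {"contractor": "TopCo", "warranty_years": 20})

# During construction or later maintenance, each party queries the same record:
print(twin.components["water heater"])
```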
-
Publication number: 20240111928
Abstract: Systems and methods for creating a digital twin of a structure, including facilitating communications during construction and during the structure's lifetime. Building documents such as construction plans along with vendor and contractor specifications and information may be ingested and used to automatically create a digital model of the structure. The digital model may be used to facilitate communication between various parties responsible for construction and subsequent maintenance of the structure. Other embodiments may be described and/or claimed.
Type: Application
Filed: April 7, 2023
Publication date: April 4, 2024
Applicant: Digs Space, Inc.
Inventors: Ryan R. Fink, Ty Frackiewicz
-
Publication number: 20240111929
Abstract: Systems and methods for creating a digital twin of a structure, including facilitating communications during construction and during the structure's lifetime. Building documents such as construction plans along with vendor and contractor specifications and information may be ingested and used to automatically create a digital model of the structure. The digital model may be used to facilitate communication between various parties responsible for construction and subsequent maintenance of the structure. Other embodiments may be described and/or claimed.
Type: Application
Filed: April 7, 2023
Publication date: April 4, 2024
Applicant: Digs Space, Inc.
Inventors: Ryan R. Fink, Ty Frackiewicz
-
Publication number: 20240112420
Abstract: A digital twin that is representative of a building may be viewed using augmented reality or virtual reality. In embodiments, a device may establish a connection with a remote server that hosts a digital twin and portfolio of a building or structure, and transmit its position to the remote server. In response, the remote server may transmit one or more augmented reality (AR) objects to the device that correspond to the device's position and view of the structure, the AR objects reflecting information tagged to the digital twin. Other embodiments may be described and/or claimed.
Type: Application
Filed: April 17, 2023
Publication date: April 4, 2024
Applicant: Digs Space, Inc.
Inventors: Ryan R. Fink, Ty Frackiewicz
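A sketch of the client side of this exchange: the device posts its position and pose to the server hosting the twin and receives AR objects to render. The endpoint route and payload shape are assumptions; the publication does not specify a protocol.

```python
import requests

def fetch_ar_objects(server: str, building_id: str,
                     position: dict, heading_deg: float) -> list:
    response = requests.post(
        f"{server}/twins/{building_id}/ar-objects",  # hypothetical route
        json={"position": position, "heading_deg": heading_deg},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()["objects"]  # e.g. anchors, labels, 3D overlays

# Example call against a hypothetical server:
# objects = fetch_ar_objects("https://twin.example.com", "bldg-42",
#                            {"lat": 45.52, "lon": -122.68, "floor": 2}, 135.0)
```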
-
Publication number: 20240111914
Abstract: Access to a digital twin of a building or other structure on a limited basis may be obtained by scanning a marker, such as an optical marker or a radio-frequency marker, e.g., an RFID tag. In embodiments, the marker encodes an identifier that is associated with the building or structure, which allows access to the digital twin. A remote device can obtain the identifier and transmit it to a remote server hosting the digital twin to obtain access according to predetermined access permissions. The access may be limited to when the remote device is in geographic proximity to the building or structure. Other embodiments may be described and/or claimed.
Type: Application
Filed: April 17, 2023
Publication date: April 4, 2024
Applicant: Digs Space, Inc.
Inventors: Ryan R. Fink, Ty Frackiewicz
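An illustrative check for the geographic-proximity limit: access is granted only while the scanning device is within some radius of the structure. The haversine math is standard; the radius policy value is made up.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two lat/lon points, in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def may_access(device_lat, device_lon, building_lat, building_lon,
               radius_m: float = 100.0) -> bool:
    """The identifier from the scanned marker selects the building;
    proximity then gates access per the predetermined permissions."""
    return haversine_m(device_lat, device_lon, building_lat, building_lon) <= radius_m

print(may_access(45.5201, -122.6801, 45.5200, -122.6800))  # True: roughly 14 m away
```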
-
Patent number: 11842444
Abstract: Embodiments include systems and methods for visualizing the position of a capturing device within a 3D mesh, generated from a video stream from the capturing device. A capturing device may provide a video stream along with point cloud data and camera pose data. This video stream, point cloud data, and camera pose data are then used to progressively generate a 3D mesh. The camera pose data and point cloud data can further be used, in conjunction with a SLAM algorithm, to indicate the position and orientation of the capturing device within the generated 3D mesh.
Type: Grant
Filed: June 2, 2021
Date of Patent: December 12, 2023
Assignee: STREEM, LLC
Inventors: Sean M. Adkinson, Teressa Chizeck, Ryan R. Fink
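A minimal sketch of the visualization step, assuming the SLAM-style camera pose arrives as a 4x4 camera-to-world matrix: the device indicator is the pose's translation, plus a forward vector, drawn inside the mesh. The -Z forward convention is an assumption.

```python
import numpy as np

def device_marker_from_pose(camera_to_world: np.ndarray):
    """Return position and viewing direction of the capturing device."""
    position = camera_to_world[:3, 3]
    forward = camera_to_world[:3, :3] @ np.array([0.0, 0.0, -1.0])  # -Z convention
    return position, forward

pose = np.eye(4)
pose[:3, 3] = [1.0, 0.5, 2.0]   # device 2 m into the scene
pos, fwd = device_marker_from_pose(pose)
print("draw marker at", pos, "facing", fwd)
```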
-
Patent number: 11830213
Abstract: Embodiments include systems and methods for remotely measuring distances in an environment captured by a device. A device captures a video stream of an environment along with AR data that may include camera pose information and/or depth information, and transmits the video stream and AR data to a remote device. The remote device receives a selection of a first point and a second point within the video stream and, using the AR data, calculates a distance between the first and second points. The first and second points may be at different locations not simultaneously in view of the device. Other embodiments may capture additional points to compute areas and/or volumes.
Type: Grant
Filed: November 5, 2020
Date of Patent: November 28, 2023
Assignee: STREEM, LLC
Inventors: Sean M. Adkinson, Ryan R. Fink, Brian Gram, Nicholas Degroot, Alexander Fallenstedt
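The measurement math can be sketched as follows: each selected pixel is unprojected into a world-space point using its frame's depth and camera pose, and the distance is the norm of the difference. Because each pick carries its own pose, the two points may come from different frames, as the abstract notes. Intrinsics and pose formats here are assumptions.

```python
import numpy as np

def unproject(u, v, depth, fx, fy, cx, cy, camera_to_world: np.ndarray) -> np.ndarray:
    """Pixel (u, v) with depth in meters -> 3D point in world coordinates."""
    p_cam = np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth, 1.0])
    return (camera_to_world @ p_cam)[:3]

def measure(pick_a, pick_b) -> float:
    """Each pick is (u, v, depth, camera_to_world) captured at selection time."""
    intr = dict(fx=500.0, fy=500.0, cx=320.0, cy=240.0)  # example intrinsics
    a = unproject(pick_a[0], pick_a[1], pick_a[2], camera_to_world=pick_a[3], **intr)
    b = unproject(pick_b[0], pick_b[1], pick_b[2], camera_to_world=pick_b[3], **intr)
    return float(np.linalg.norm(a - b))

I = np.eye(4)  # both picks from the same frame in this toy example
print(measure((100, 200, 1.5, I), (400, 220, 2.0, I)))
```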
-
Patent number: 11830142
Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other image captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are disclosed.
Type: Grant
Filed: March 8, 2022
Date of Patent: November 28, 2023
Assignee: STREEM, LLC
Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink, Sean M. Adkinson
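A sketch of the fingerprinting idea: each frame is keyed to the AR data (pose, depth samples, intrinsics) captured with it, so the pairs can later be replayed to build depth maps and fuse the mesh. The container format is an assumption.

```python
import json

def fingerprint_stream(frames, ar_records, out_path: str) -> None:
    """frames: iterable of (timestamp, frame_id); ar_records: timestamp -> AR data."""
    index = []
    for timestamp, frame_id in frames:
        ar = ar_records.get(timestamp)  # pose, depth samples, intrinsics, ...
        index.append({"t": timestamp, "frame": frame_id, "ar": ar})
    with open(out_path, "w") as f:
        json.dump(index, f)

fingerprint_stream(
    [(0.000, "f0"), (0.033, "f1")],
    {0.000: {"pose": [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]}},
    "stream_index.json",
)
```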
-
Patent number: 11823310
Abstract: Methods for replacing or obscuring objects detected in an image or video on the basis of image context are disclosed. Context of the image or video may be obtained via pattern recognition on audio associated with the image or video, by user-supplied context, and/or by context derived from image capture, such as the nature of an application used to capture the image. The image or video may be analyzed for object detection and recognition, and depending upon policy, the image or video context used to select objects related or unrelated to the context for replacement or obfuscation. The selected objects may then be replaced with generic objects rendered from 3D models, or blurred or otherwise obscured.
Type: Grant
Filed: September 6, 2019
Date of Patent: November 21, 2023
Assignee: STREEM, LLC
Inventors: Ryan R. Fink, Sean M. Adkinson
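A minimal sketch of the obfuscation branch: once the context policy selects an object's bounding box, that region is blurred in place. Detection and the policy itself are out of scope here; the box is supplied by the caller.

```python
import cv2

def obscure_region(image, box, ksize: int = 51):
    """Blur the (x, y, w, h) region of a BGR image in place. ksize must be odd."""
    x, y, w, h = box
    roi = image[y:y + h, x:x + w]
    image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (ksize, ksize), 0)
    return image

frame = cv2.imread("frame.png")  # any captured frame
if frame is not None:
    frame = obscure_region(frame, (120, 80, 200, 150))  # e.g. a detected logo box
    cv2.imwrite("frame_obscured.png", frame)
```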
-
Patent number: 11790025
Abstract: Methods and systems disclosed herein are directed to detection and recognition of items of data on labels applied to equipment and identifying metadata labels for the items of data using NLP. Embodiments may include identifying one or more items of data on an image of a label associated with a piece of equipment, determining, using NLP on the one or more items of data of the image, one or more metadata associated, respectively, with the identified one or more items of data, and outputting at least one of the one or more metadata and associated items of data.
Type: Grant
Filed: March 30, 2021
Date of Patent: October 17, 2023
Assignee: STREEM, LLC
Inventors: Pavan K. Kamaraju, Ghislain Fouodji Tasse, Flora Ponjou Tasse, Sean M. Adkinson, Ryan R. Fink
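A toy sketch of the labeling step's input/output shape: items of data read off an equipment label are assigned metadata labels. The patent describes NLP; the regex rules here are only a stand-in to show the interface.

```python
import re

RULES = [
    (re.compile(r"^\d{3}-?\d{4}[A-Z]?$"), "model_number"),
    (re.compile(r"^[A-Z]{2}\d{8,}$"), "serial_number"),
    (re.compile(r"^\d+\s?(V|VOLTS?)$", re.IGNORECASE), "voltage"),
]

def label_items(items: list) -> list:
    """Attach a metadata label to each extracted item of data."""
    labeled = []
    for item in items:
        metadata = next((name for pattern, name in RULES if pattern.match(item)),
                        "unknown")
        labeled.append({"value": item, "metadata": metadata})
    return labeled

print(label_items(["123-4567A", "GE20251104", "115 V"]))
```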
-
Patent number: 11783546
Abstract: A method for creating and storing a captured image and associated spatial data and augmented reality (AR) data in a file that allows subsequent manipulation and processing of AR objects is disclosed. In embodiments, one or more frames are extracted from a video stream, along with spatial information about the camera capturing the video stream. The one or more frames are analyzed in conjunction with the spatial information to calculate a point cloud of depth data. The one or more frames are stored in a file in a first layer, and the point cloud is stored in the file in a second layer. In some embodiments, one or more AR objects are stored in a third layer.
Type: Grant
Filed: December 17, 2018
Date of Patent: October 10, 2023
Assignee: STREEM, LLC
Inventors: Ryan R. Fink, Sean M. Adkinson
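One way such a layered file could be realized, sketched here as a zip container with one entry per layer so frames, the point cloud, and AR objects can be read and edited independently. The layout and the ".arf" name are assumptions, not the patented format.

```python
import json
import zipfile

def write_capture(path, frame_bytes: bytes, point_cloud: list, ar_objects: list):
    with zipfile.ZipFile(path, "w") as z:
        z.writestr("layer1/frame0.jpg", frame_bytes)                  # image layer
        z.writestr("layer2/points.json", json.dumps(point_cloud))     # depth layer
        z.writestr("layer3/ar_objects.json", json.dumps(ar_objects))  # AR layer

def read_ar_objects(path) -> list:
    """AR objects can be loaded and edited without touching the other layers."""
    with zipfile.ZipFile(path) as z:
        return json.loads(z.read("layer3/ar_objects.json"))

write_capture("capture.arf", b"\xff\xd8", [[0.1, 0.2, 1.5]],
              [{"type": "arrow", "position": [0.1, 0.2, 1.4]}])
print(read_ar_objects("capture.arf"))
```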
-
Publication number: 20230245391
Abstract: Embodiments include systems and methods for creation of a 3D mesh from a video stream or a sequence of frames. A sparse point cloud is first created from the video stream, which is then densified per frame by comparison with spatially proximate frames. A 3D mesh is then created from the densified depth maps, and the mesh is textured by projecting the images from the video stream or sequence of frames onto the mesh. Metric scale of the depth maps may be estimated using a machine learning depth estimation network where direct measurements cannot be obtained or calculated.
Type: Application
Filed: April 5, 2023
Publication date: August 3, 2023
Inventors: Sean M. Adkinson, Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink
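A high-level sketch of the later pipeline stages, using Open3D's Poisson meshing as a stand-in rather than the claimed method: a (densified) point cloud is meshed, after which the source frames could be projected onto the result as texture.

```python
import numpy as np
import open3d as o3d

points = np.random.rand(5000, 3)  # placeholder for the densified depth maps
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.estimate_normals()  # Poisson reconstruction requires oriented normals

mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8)
print(mesh)  # texturing would project the source frames onto this mesh
```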
-
Patent number: 11715302
Abstract: Methods for automatically tagging one or more images and/or video clips using an audio stream are disclosed. The audio stream may be processed using an automatic speech recognition algorithm, to extract possible keywords. The image(s) and/or video clip(s) may then be tagged with the possible keywords. In some embodiments, the image(s) and/or video clip(s) may be tagged automatically. In other embodiments, a user may be presented with a list of possible keywords extracted from the audio stream, from which the user may then select to manually tag the image(s) and/or video clip(s).
Type: Grant
Filed: August 21, 2019
Date of Patent: August 1, 2023
Assignee: STREEM, LLC
Inventors: Ryan R. Fink, Sean M. Adkinson
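A sketch of the tagging flow, assuming some ASR engine has already produced a transcript: candidate keywords are extracted and either applied automatically or offered to the user for manual selection. The stopword filter is a minimal stand-in for real keyword extraction.

```python
STOPWORDS = {"the", "a", "an", "is", "this", "that", "and", "of", "to", "in", "under"}

def keywords_from_transcript(transcript: str) -> list:
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    return sorted({w for w in words if w and w not in STOPWORDS})

def tag_media(media: dict, transcript: str, auto: bool = True) -> dict:
    candidates = keywords_from_transcript(transcript)
    # In the manual flow, the user picks from `candidates` instead.
    media["tags"] = candidates if auto else []
    media["suggested_tags"] = candidates
    return media

clip = tag_media({"file": "kitchen.mp4"},
                 "This is the leak under the kitchen sink.")
print(clip["tags"])  # e.g. ['kitchen', 'leak', 'sink']
```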
-
Patent number: 11640694
Abstract: Embodiments include systems and methods for creation of a 3D mesh from a video stream or a sequence of frames. A sparse point cloud is first created from the video stream, which is then densified per frame by comparison with spatially proximate frames. A 3D mesh is then created from the densified depth maps, and the mesh is textured by projecting the images from the video stream or sequence of frames onto the mesh. Metric scale of the depth maps may be estimated using a machine learning depth estimation network where direct measurements cannot be obtained or calculated.
Type: Grant
Filed: March 22, 2021
Date of Patent: May 2, 2023
Assignee: STREEM, LLC
Inventors: Sean M. Adkinson, Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink
-
Publication number: 20220392167
Abstract: Embodiments include systems and methods for visualizing the position of a capturing device within a 3D mesh, generated from a video stream from the capturing device. A capturing device may provide a video stream along with point cloud data and camera pose data. This video stream, point cloud data, and camera pose data are then used to progressively generate a 3D mesh. The camera pose data and point cloud data can further be used, in conjunction with a SLAM algorithm, to indicate the position and orientation of the capturing device within the generated 3D mesh.
Type: Application
Filed: June 2, 2021
Publication date: December 8, 2022
Inventors: Sean M. Adkinson, Teressa Chizeck, Ryan R. Fink
-
Publication number: 20220189118
Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other image captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are disclosed.
Type: Application
Filed: March 8, 2022
Publication date: June 16, 2022
Applicant: STREEM, INC.
Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink, Sean M. Adkinson