Patents by Inventor Sean M. Adkinson
Sean M. Adkinson has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11842444
Abstract: Embodiments include systems and methods for visualizing the position of a capturing device within a 3D mesh generated from a video stream from the capturing device. A capturing device may provide a video stream along with point cloud data and camera pose data. This video stream, point cloud data, and camera pose data are then used to progressively generate a 3D mesh. The camera pose data and point cloud data can further be used, in conjunction with a SLAM algorithm, to indicate the position and orientation of the capturing device within the generated 3D mesh.
Type: Grant
Filed: June 2, 2021
Date of Patent: December 12, 2023
Assignee: STREEM, LLC
Inventors: Sean M. Adkinson, Teressa Chizeck, Ryan R. Fink
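The abstract above describes rendering the capturing device's pose inside the reconstructed mesh. As a minimal sketch of that idea (plain Python; the function names and the 4x4 row-major camera-to-world pose convention are our assumptions, not taken from the patent), the snippet below transforms a small camera-frustum outline by a SLAM-reported pose so it could be drawn in mesh coordinates:

```python
# Sketch: place a marker for the capturing device inside the mesh.
# A SLAM system reports the camera pose as a 4x4 row-major matrix mapping
# camera-space points into mesh (world) coordinates; transforming a small
# frustum outline by that pose yields geometry renderable in the mesh.

def transform_point(pose, p):
    """Apply a 4x4 row-major pose matrix to a 3D point (w assumed 1)."""
    x, y, z = p
    return tuple(
        pose[r][0] * x + pose[r][1] * y + pose[r][2] * z + pose[r][3]
        for r in range(3)
    )

def camera_marker(pose, size=0.1):
    """Frustum outline: apex at the camera origin, four corners ahead (+Z)."""
    local = [
        (0.0, 0.0, 0.0),           # camera center
        (-size, -size, 2 * size),  # four image-plane corners
        ( size, -size, 2 * size),
        ( size,  size, 2 * size),
        (-size,  size, 2 * size),
    ]
    return [transform_point(pose, p) for p in local]

# Identity rotation, camera translated to (1, 2, 3) in mesh coordinates:
pose = [
    [1.0, 0.0, 0.0, 1.0],
    [0.0, 1.0, 0.0, 2.0],
    [0.0, 0.0, 1.0, 3.0],
    [0.0, 0.0, 0.0, 1.0],
]
print(camera_marker(pose)[0])  # → (1.0, 2.0, 3.0)
```

Updating the marker each frame from the latest pose gives the progressive "you are here" visualization the abstract describes.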
-
Patent number: 11830142
Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other image captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are disclosed.
Type: Grant
Filed: March 8, 2022
Date of Patent: November 28, 2023
Assignee: STREEM, LLC
Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink, Sean M. Adkinson
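The depth-map fusion described above relies on back-projecting depth pixels into 3D. Here is a minimal sketch of that sub-step using the standard pinhole camera model (the function name and the toy 2×2 depth map are invented for illustration; the patent's actual fusion method is not specified here):

```python
def backproject(depth, fx, fy, cx, cy):
    """Lift a depth map (row-major list of lists, metres) to 3D points in
    camera space with the pinhole model: X=(u-cx)Z/fx, Y=(v-cy)Z/fy, Z=Z."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # zero/negative depth marks a missing sample
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

depth = [[0.0, 2.0],
         [2.0, 4.0]]
pts = backproject(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(len(pts))  # → 3
```

The resulting point set, accumulated over frames using each frame's camera pose, is the kind of input a surface-reconstruction step can turn into the full mesh.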
-
Patent number: 11830213
Abstract: Embodiments include systems and methods for remotely measuring distances in an environment captured by a device. A device captures a video stream of its environment along with AR data that may include camera pose information and/or depth information, and transmits the video stream and AR data to a remote device. The remote device receives a selection of a first point and a second point within the video stream and, using the AR data, calculates a distance between the first and second points. The first and second points may be at different locations not simultaneously in view of the device. Other embodiments may capture additional points to compute areas and/or volumes.
Type: Grant
Filed: November 5, 2020
Date of Patent: November 28, 2023
Assignee: STREEM, LLC
Inventors: Sean M. Adkinson, Ryan R. Fink, Brian Gram, Nicholas Degroot, Alexander Fallenstedt
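The measurement idea reduces to unprojecting each selected point using the AR data (depth plus camera pose) into a shared world frame, then taking a Euclidean distance; the shared frame is what lets the two points come from different views. A toy sketch (the intrinsics, poses, and function names below are illustrative assumptions):

```python
import math

def to_world(pose, p):
    """Apply a row-major rigid-transform matrix (camera -> world) to a point."""
    x, y, z = p
    return tuple(pose[r][0] * x + pose[r][1] * y + pose[r][2] * z + pose[r][3]
                 for r in range(3))

def tap_to_point(u, v, z, intr, pose):
    """Screen tap (u, v) with depth z metres -> 3D point in world space."""
    fx, fy, cx, cy = intr
    return to_world(pose, ((u - cx) * z / fx, (v - cy) * z / fy, z))

# Two taps made at different times, each with its own camera pose:
intr = (500.0, 500.0, 320.0, 240.0)
pose_a = [[1, 0, 0, 0.0], [0, 1, 0, 0], [0, 0, 1, 0]]
pose_b = [[1, 0, 0, 1.0], [0, 1, 0, 0], [0, 0, 1, 0]]  # camera moved 1 m right
p1 = tap_to_point(320, 240, 2.0, intr, pose_a)  # principal point, 2 m away
p2 = tap_to_point(320, 240, 2.0, intr, pose_b)
print(round(math.dist(p1, p2), 3))  # → 1.0
```

Areas and volumes follow the same pattern: unproject three or more points to world space, then apply the usual polygon-area or volume formulas.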
-
Patent number: 11823310
Abstract: Methods for replacing or obscuring objects detected in an image or video on the basis of image context are disclosed. Context of the image or video may be obtained via pattern recognition on audio associated with the image or video, by user-supplied context, and/or by context derived from image capture, such as the nature of an application used to capture the image. The image or video may be analyzed for object detection and recognition, and depending upon policy, the image or video context used to select objects related or unrelated to the context for replacement or obfuscation. The selected objects may then be replaced with generic objects rendered from 3D models, or blurred or otherwise obscured.
Type: Grant
Filed: September 6, 2019
Date of Patent: November 21, 2023
Assignee: STREEM, LLC
Inventors: Ryan R. Fink, Sean M. Adkinson
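For the obscuring branch, once an object has been detected and its region selected by policy, the pixel operation itself can be very simple. Below is a crude stand-in for blurring on a grayscale image stored as a list of lists; the `(x0, y0, x1, y1)` box format and the mean-fill approach are our illustrative assumptions, not the patented method:

```python
def obscure(image, box):
    """Flatten the pixels inside box = (x0, y0, x1, y1) (exclusive upper
    bounds) to their mean value -- a crude stand-in for blurring the
    detected object's bounding box."""
    x0, y0, x1, y1 = box
    region = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    mean = sum(region) // len(region)
    for y in range(y0, y1):
        for x in range(x0, x1):
            image[y][x] = mean
    return image

img = [[0, 0, 9, 1],
       [0, 0, 5, 5]]
obscure(img, (2, 0, 4, 2))  # obscure the right half
print(img)  # → [[0, 0, 5, 5], [0, 0, 5, 5]]
```

A real pipeline would use a proper blur kernel or render a generic 3D model over the region, but the control flow (detect, select by context policy, overwrite pixels) is the same.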
-
Patent number: 11790025
Abstract: Methods and systems disclosed herein are directed to detection and recognition of items of data on labels applied to equipment and identifying metadata labels for the items of data using NLP. Embodiments may include identifying one or more items of data on an image of a label associated with a piece of equipment, determining, using NLP on the one or more items of data of the image, one or more metadata associated, respectively, with the identified one or more items of data, and outputting at least one of the one or more metadata and associated items of data.
Type: Grant
Filed: March 30, 2021
Date of Patent: October 17, 2023
Assignee: STREEM, LLC
Inventors: Pavan K. Kamaraju, Ghislain Fouodji Tasse, Flora Ponjou Tasse, Sean M. Adkinson, Ryan R. Fink
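A rule-based stand-in can convey the metadata-identification idea: map raw text items read off an equipment label to metadata keys. The patterns below are illustrative assumptions only; the patent's NLP approach is not specified here:

```python
import re

# Hypothetical label-field rules; a real system would use a trained NLP
# model rather than hand-written patterns.
RULES = [
    ("model_number",  re.compile(r"\b(model|mod\.?|m/n)\b", re.I)),
    ("serial_number", re.compile(r"\b(serial|ser\.?|s/n)\b", re.I)),
    ("voltage",       re.compile(r"\b\d+\s*v(olts?)?\b", re.I)),
]

def classify(item):
    """Return a metadata key for one item of label text, or 'unknown'."""
    for key, pattern in RULES:
        if pattern.search(item):
            return key
    return "unknown"

for item in ("Model RX-7", "Serial No. 48A-22", "Input: 120 V"):
    print(item, "->", classify(item))
```

The output of this step, metadata keys paired with the recognized items of data, is exactly what the abstract describes outputting.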
-
Patent number: 11783546
Abstract: A method for creating and storing a captured image and associated spatial data and augmented reality (AR) data in a file that allows subsequent manipulation and processing of AR objects is disclosed. In embodiments, one or more frames are extracted from a video stream, along with spatial information about the camera capturing the video stream. The one or more frames are analyzed in conjunction with the spatial information to calculate a point cloud of depth data. The one or more frames are stored in a file in a first layer, and the point cloud is stored in the file in a second layer. In some embodiments, one or more AR objects are stored in a third layer.
Type: Grant
Filed: December 17, 2018
Date of Patent: October 10, 2023
Assignee: STREEM, LLC
Inventors: Ryan R. Fink, Sean M. Adkinson
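The layered-file idea can be sketched with an ordinary container format. JSON here stands in for whatever encoding the patent actually uses, and the layer names are invented:

```python
import json

def pack(frames, point_cloud, ar_objects=None):
    """Bundle frames (layer 1), a point cloud (layer 2) and, optionally,
    AR objects (layer 3) into one self-describing blob."""
    layers = {"frames": frames, "point_cloud": point_cloud}
    if ar_objects is not None:
        layers["ar_objects"] = ar_objects
    return json.dumps({"version": 1, "layers": layers})

def unpack(blob, layer):
    """Read one named layer back out of the container."""
    return json.loads(blob)["layers"][layer]

blob = pack(frames=["frame0.jpg"], point_cloud=[[0.0, 0.1, 1.2]])
print(unpack(blob, "point_cloud"))  # → [[0.0, 0.1, 1.2]]
```

Keeping the AR objects in their own layer is what preserves their editability: a reader can replace or re-render layer 3 without touching the captured imagery or the depth data.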
-
Publication number: 20230245391
Abstract: Embodiments include systems and methods for creation of a 3D mesh from a video stream or a sequence of frames. A sparse point cloud is first created from the video stream, which is then densified per frame by comparison with spatially proximate frames. A 3D mesh is then created from the densified depth maps, and the mesh is textured by projecting the images from the video stream or sequence of frames onto the mesh. Where direct measurements cannot be obtained or calculated, the metric scale of the depth maps may be estimated using a machine learning depth estimation network.
Type: Application
Filed: April 5, 2023
Publication date: August 3, 2023
Inventors: Sean M. Adkinson, Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink
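One concrete way to realize the metric-scale step in this abstract is a median-ratio fit between sparse metrically-scaled depths (e.g. from SLAM points) and a depth network's up-to-scale predictions at the same pixels. This estimator is our assumption for illustration, not the claimed method:

```python
def metric_scale(sparse_depths, predicted_depths):
    """Single scale factor for an up-to-scale learned depth map: the
    median of the ratios between sparse metric depths and the network's
    predictions at the corresponding pixels."""
    ratios = sorted(s / p for s, p in zip(sparse_depths, predicted_depths)
                    if p > 0)
    mid = len(ratios) // 2
    if len(ratios) % 2:
        return ratios[mid]
    return 0.5 * (ratios[mid - 1] + ratios[mid])

# Network predicts 1.0, 2.0, 4.0; sparse points measured 2.1, 3.9, 8.2 m:
print(round(metric_scale([2.1, 3.9, 8.2], [1.0, 2.0, 4.0]), 3))  # → 2.05
```

Multiplying the predicted depth map by this factor puts the densified depth maps, and hence the final mesh, in metres. The median is chosen over the mean so that a few bad sparse points do not skew the scale.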
-
Patent number: 11715302
Abstract: Methods for automatically tagging one or more images and/or video clips using an audio stream are disclosed. The audio stream may be processed using an automatic speech recognition algorithm to extract possible keywords. The image(s) and/or video clip(s) may then be tagged with the possible keywords. In some embodiments, the image(s) and/or video clip(s) may be tagged automatically. In other embodiments, a user may be presented with a list of possible keywords extracted from the audio stream, from which the user may then select to manually tag the image(s) and/or video clip(s).
Type: Grant
Filed: August 21, 2019
Date of Patent: August 1, 2023
Assignee: STREEM, LLC
Inventors: Ryan R. Fink, Sean M. Adkinson
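Once speech recognition has produced a transcript, the keyword-extraction step can be sketched as a frequency count over non-stopword tokens. This is only an illustration of the idea; the stopword list and the example sentence are invented, and a production system would also use the ASR engine's confidence scores:

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "is", "are", "it", "to", "of"}

def keywords(transcript, top=3):
    """Candidate tags from an ASR transcript: the most frequent
    non-stopword tokens, punctuation stripped."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top)]

kws = keywords("The water heater is leaking, and the heater valve is stuck.")
print(kws[0])  # → heater
```

The resulting list can either be applied as tags automatically or shown to the user for manual selection, matching the two embodiments in the abstract.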
-
Patent number: 11640694
Abstract: Embodiments include systems and methods for creation of a 3D mesh from a video stream or a sequence of frames. A sparse point cloud is first created from the video stream, which is then densified per frame by comparison with spatially proximate frames. A 3D mesh is then created from the densified depth maps, and the mesh is textured by projecting the images from the video stream or sequence of frames onto the mesh. Where direct measurements cannot be obtained or calculated, the metric scale of the depth maps may be estimated using a machine learning depth estimation network.
Type: Grant
Filed: March 22, 2021
Date of Patent: May 2, 2023
Assignee: STREEM, LLC
Inventors: Sean M. Adkinson, Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink
-
Patent number: 11600050
Abstract: Embodiments include systems and methods for determining a 6D pose estimate associated with an image of a physical 3D object captured in a video stream. An initial 6D pose estimate is inferred and then further iteratively refined. The video stream may be frozen to allow the user to tap or touch a display to indicate a location of the user-input keypoints. The resulting 6D pose estimate is used to assist in replacing or superimposing the physical 3D object with digital or virtual content in an augmented reality (AR) frame.
Type: Grant
Filed: April 2, 2021
Date of Patent: March 7, 2023
Assignee: STREEM, LLC
Inventors: Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Sean M. Adkinson
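The refinement idea can be miniaturized: hold rotation and depth fixed and solve one step for the x/y translation that lines reprojected model keypoints up with the user-tapped image keypoints. Everything here (names, intrinsics, the translation-only simplification) is our illustrative assumption; full 6D refinement iterates over all six pose parameters:

```python
def refine_xy(model_pts, taps, intr, tx=0.0, ty=0.0):
    """One refinement step for the x/y translation of an initial pose:
    average, over keypoints, the camera-space offset implied by the gap
    between each tapped pixel and the current reprojection."""
    fx, fy, cx, cy = intr
    du, dv = [], []
    for (x, y, z), (u, v) in zip(model_pts, taps):
        du.append((u - (fx * (x + tx) / z + cx)) * z / fx)
        dv.append((v - (fy * (y + ty) / z + cy)) * z / fy)
    return tx + sum(du) / len(du), ty + sum(dv) / len(dv)

intr = (500.0, 500.0, 320.0, 240.0)
model = [(0.0, 0.0, 2.0), (0.1, 0.0, 2.0)]
# Taps consistent with the object actually sitting at tx = 0.2, ty = 0.0:
taps = [(500.0 * 0.2 / 2.0 + 320.0, 240.0),
        (500.0 * 0.3 / 2.0 + 320.0, 240.0)]
tx, ty = refine_xy(model, taps, intr)
print(round(tx, 3), round(ty, 3))  # → 0.2 0.0
```

Because reprojection is linear in translation at fixed depth, this toy case converges in a single step; the rotational parameters are what make the real problem iterative.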
-
Publication number: 20220392167
Abstract: Embodiments include systems and methods for visualizing the position of a capturing device within a 3D mesh generated from a video stream from the capturing device. A capturing device may provide a video stream along with point cloud data and camera pose data. This video stream, point cloud data, and camera pose data are then used to progressively generate a 3D mesh. The camera pose data and point cloud data can further be used, in conjunction with a SLAM algorithm, to indicate the position and orientation of the capturing device within the generated 3D mesh.
Type: Application
Filed: June 2, 2021
Publication date: December 8, 2022
Inventors: Sean M. Adkinson, Teressa Chizeck, Ryan R. Fink
-
Publication number: 20220319120
Abstract: Embodiments include systems and methods for determining a 6D pose estimate associated with an image of a physical 3D object captured in a video stream. An initial 6D pose estimate is inferred and then further iteratively refined. The video stream may be frozen to allow the user to tap or touch a display to indicate a location of the user-input keypoints. The resulting 6D pose estimate is used to assist in replacing or superimposing the physical 3D object with digital or virtual content in an augmented reality (AR) frame.
Type: Application
Filed: April 2, 2021
Publication date: October 6, 2022
Inventors: Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Sean M. Adkinson
-
Publication number: 20220318322
Abstract: Methods and systems disclosed herein are directed to detection and recognition of items of data on labels applied to equipment and identifying metadata labels for the items of data using NLP. Embodiments may include identifying one or more items of data on an image of a label associated with a piece of equipment, determining, using NLP on the one or more items of data of the image, one or more metadata associated, respectively, with the identified one or more items of data, and outputting at least one of the one or more metadata and associated items of data.
Type: Application
Filed: March 30, 2021
Publication date: October 6, 2022
Inventors: Pavan K. Kamaraju, Ghislain Fouodji Tasse, Flora Ponjou Tasse, Sean M. Adkinson
-
Publication number: 20220189118
Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other image captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are disclosed.
Type: Application
Filed: March 8, 2022
Publication date: June 16, 2022
Applicant: STREEM, INC.
Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink, Sean M. Adkinson
-
Publication number: 20220138979
Abstract: Embodiments include systems and methods for remotely measuring distances in an environment captured by a device. A device captures a video stream of its environment along with AR data that may include camera pose information and/or depth information, and transmits the video stream and AR data to a remote device. The remote device receives a selection of a first point and a second point within the video stream and, using the AR data, calculates a distance between the first and second points. The first and second points may be at different locations not simultaneously in view of the device. Other embodiments may capture additional points to compute areas and/or volumes.
Type: Application
Filed: November 5, 2020
Publication date: May 5, 2022
Inventors: Sean M. Adkinson, Ryan R. Fink, Brian Gram, Nicholas Degroot, Alexander Fallenstedt
-
Patent number: 11323657
Abstract: Methods and systems for the remote delivery of professional services, using augmented reality (AR), are disclosed. In embodiments, a user scans or acquires a physical marker, such as an optical code or radio beacon. The physical marker provides information to the user's device to allow it to connect to a server. The physical marker may also provide contextual data, possibly in conjunction with contextual data from the user's device. The server then provides a list of professionals on the basis of the contextual data. The user selects a professional, and the server initiates a video session between a user device and a professional device, where the professional can superimpose one or more AR objects on the video, to be displayed on the user device.
Type: Grant
Filed: August 24, 2020
Date of Patent: May 3, 2022
Assignee: STREEM, INC.
Inventors: Ryan R. Fink, Sean M. Adkinson
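Server-side, the step of providing "a list of professionals on the basis of the contextual data" could be as simple as ranking by tag overlap. The data shapes, field names, and names below are invented purely for illustration:

```python
def rank_professionals(context_tags, professionals):
    """Order professionals by overlap between the contextual tags gathered
    from the scanned marker / user device and each professional's tags;
    professionals with no overlap are dropped. Ties break alphabetically."""
    scored = [(len(set(context_tags) & set(p["tags"])), p["name"])
              for p in professionals]
    scored.sort(key=lambda s: (-s[0], s[1]))
    return [name for score, name in scored if score > 0]

pros = [
    {"name": "Ada",   "tags": ["plumbing", "heating"]},
    {"name": "Grace", "tags": ["electrical"]},
    {"name": "Linus", "tags": ["plumbing"]},
]
print(rank_professionals(["plumbing", "heating"], pros))  # → ['Ada', 'Linus']
```

The user would then pick from this list, after which the server brokers the AR-annotated video session described in the abstract.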
-
Patent number: 11270505
Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other image captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are disclosed.
Type: Grant
Filed: May 22, 2020
Date of Patent: March 8, 2022
Assignee: STREEM, INC.
Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink, Sean M. Adkinson
-
Publication number: 20210295599
Abstract: Embodiments include systems and methods for creation of a 3D mesh from a video stream or a sequence of frames. A sparse point cloud is first created from the video stream, which is then densified per frame by comparison with spatially proximate frames. A 3D mesh is then created from the densified depth maps, and the mesh is textured by projecting the images from the video stream or sequence of frames onto the mesh. Where direct measurements cannot be obtained or calculated, the metric scale of the depth maps may be estimated using a machine learning depth estimation network.
Type: Application
Filed: March 22, 2021
Publication date: September 23, 2021
Inventors: Sean M. Adkinson, Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink
-
Publication number: 20200396418
Abstract: Methods and systems for the remote delivery of professional services, using augmented reality (AR), are disclosed. In embodiments, a user scans or acquires a physical marker, such as an optical code or radio beacon. The physical marker provides information to the user's device to allow it to connect to a server. The physical marker may also provide contextual data, possibly in conjunction with contextual data from the user's device. The server then provides a list of professionals on the basis of the contextual data. The user selects a professional, and the server initiates a video session between a user device and a professional device, where the professional can superimpose one or more AR objects on the video, to be displayed on the user device.
Type: Application
Filed: August 24, 2020
Publication date: December 17, 2020
Inventors: Ryan R. Fink, Sean M. Adkinson
-
Publication number: 20200372709
Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other image captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are disclosed.
Type: Application
Filed: May 22, 2020
Publication date: November 26, 2020
Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink, Sean M. Adkinson