Patents by Inventor Ghislain FOUODJI TASSE
Ghislain FOUODJI TASSE has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11830142
Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other image captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are disclosed.
Type: Grant
Filed: March 8, 2022
Date of Patent: November 28, 2023
Assignee: STREEM, LLC
Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink, Sean M. Adkinson
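The abstract names fingerprinting each frame with its contemporaneous AR data but does not specify a mechanism. A minimal sketch, assuming a hash over the frame bytes plus a canonical encoding of the AR record (the function and field names are hypothetical):

```python
import hashlib
import json

def fingerprint_frame(frame_bytes, ar_data):
    """Bind a video frame to the AR data captured with it by hashing
    the raw pixels together with a canonical JSON encoding of the AR record."""
    payload = frame_bytes + json.dumps(ar_data, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

frame = bytes(range(12))  # stand-in for raw RGB pixel bytes of one frame
ar = {"pose": [0.0, 0.0, 0.0, 1.0], "depth_scale": 0.001, "t": 0.033}
fp = fingerprint_frame(frame, ar)  # 64-char hex digest
```

Any change to either the frame or its AR data yields a different digest, so a stored fingerprint can verify that a frame and its AR data still belong together.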
-
Patent number: 11790025
Abstract: Methods and systems disclosed herein are directed to detection and recognition of items of data on labels applied to equipment and identifying metadata labels for the items of data using NLP. Embodiments may include identifying one or more items of data on an image of a label associated with a piece of equipment, determining, using NLP on the one or more items of data of the image, one or more metadata associated, respectively, with the identified one or more items of data, and outputting at least one of the one or more metadata and associated items of data.
Type: Grant
Filed: March 30, 2021
Date of Patent: October 17, 2023
Assignee: STREEM, LLC
Inventors: Pavan K. Kamaraju, Ghislain Fouodji Tasse, Flora Ponjou Tasse, Sean M. Adkinson, Ryan R. Fink
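The abstract describes pairing each recognized item of label data with a metadata label via NLP. As a rough sketch only, with simple regular-expression rules standing in for the NLP step (the rule names and patterns are illustrative assumptions, not the patented method):

```python
import re

def tag_label_items(items):
    """Assign a metadata label to each recognized item of data from an
    equipment label; regex rules stand in for the NLP determination."""
    rules = [
        ("serial_number", re.compile(r"^S/?N[:\s]*\w+", re.I)),
        ("model_number", re.compile(r"^(MODEL|MDL)[:\s]*[\w-]+", re.I)),
        ("voltage", re.compile(r"\d+\s*V(AC|DC)?$", re.I)),
    ]
    tagged = []
    for item in items:
        meta = next((name for name, rx in rules if rx.search(item)), "unknown")
        tagged.append((meta, item))  # output metadata with its associated item
    return tagged

tagged = tag_label_items(["SN: A12345", "MODEL X-200", "115 VAC"])
```

A production system would replace the rule table with a trained NLP model, but the input/output shape (items in, metadata/item pairs out) matches the abstract.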
-
Publication number: 20230290037
Abstract: Embodiments include systems and methods for real-time progressive texture mapping of a 3D mesh. A sequence of frames of a scene is captured by a capturing device, and keyframes that partially overlap in the sequence of frames are added to a queue of keyframes. A 3D mesh created from the sequence of frames is accessed. A computing device determines when changes to a property of the 3D mesh meet a predetermined threshold. One of the keyframes from the queue of keyframes is assigned to each face in the 3D mesh, and the 3D mesh is divided into mesh segments based on the assigned keyframes. The keyframe assigned to each of the mesh segments is used to compute texture coordinates for vertices in the respective mesh segment, and an image in the keyframe is assigned as a texture for the respective mesh segment.
Type: Application
Filed: August 18, 2022
Publication date: September 14, 2023
Applicant: STREEM, LLC
Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse
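The keyframe-assignment and segmentation step described above can be sketched in a few lines. This is a toy interpretation, assuming a hypothetical visibility test `sees(keyframe, face)`; the real method's assignment criterion is not specified here:

```python
from collections import defaultdict

def assign_and_segment(faces, keyframe_queue, sees):
    """Assign one keyframe from the queue to each face, then divide the
    mesh into segments of faces that share the same assigned keyframe."""
    assignment = {}
    for face in faces:
        # Illustrative choice: prefer the most recently queued keyframe
        # that can see this face.
        for kf in reversed(keyframe_queue):
            if sees(kf, face):
                assignment[face] = kf
                break
    segments = defaultdict(list)
    for face, kf in assignment.items():
        segments[kf].append(face)
    return dict(segments)

faces = ["f0", "f1", "f2"]
queue = ["kf0", "kf1"]
visible = {("kf0", "f0"), ("kf0", "f1"), ("kf1", "f1"), ("kf1", "f2")}
segs = assign_and_segment(faces, queue, lambda kf, f: (kf, f) in visible)
```

Each resulting segment can then be textured from its single assigned keyframe image, which is the point of the segmentation.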
-
Publication number: 20230290036
Abstract: Real-time local live texturing of a 3D mesh includes adding keyframes that partially overlap in a sequence of frames to a queue of keyframes. When changes to a property of the 3D mesh created from the sequence of frames meet a predetermined threshold, the face vertices are projected into RGB images of the keyframes to test visibility, and the keyframes from which a face is visible are added to that face's visible keyframe list. A most recently added keyframe from the queue of keyframes is assigned to each face in the 3D mesh, and the 3D mesh is divided into mesh segments based on the assigned keyframes. The keyframe assigned to each of the mesh segments is used to compute texture coordinates for vertices in the respective mesh segment, and an image in the keyframe is assigned as a texture for the respective mesh segment. Colors from the visible keyframe list associated with each vertex are averaged into a single RGB value, which is assigned to the vertex.
Type: Application
Filed: August 18, 2022
Publication date: September 14, 2023
Applicant: STREEM, LLC
Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse
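The final step above — averaging the colors a vertex receives from its visible keyframes into one RGB value — is straightforward to sketch (a plain arithmetic mean is assumed here; the publication may weight samples differently):

```python
def average_vertex_color(samples):
    """Collapse the RGB samples a vertex receives from each keyframe in
    its visible-keyframe list into a single averaged RGB value."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(3))

# Two keyframes see the same vertex with slightly different colors.
color = average_vertex_color([(100, 0, 50), (200, 0, 150)])
```

Averaging over all visible keyframes smooths out per-frame exposure and noise differences that would otherwise show as seams between mesh segments.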
-
Publication number: 20230245391
Abstract: Embodiments include systems and methods for creation of a 3D mesh from a video stream or a sequence of frames. A sparse point cloud is first created from the video stream, which is then densified per frame by comparison with spatially proximate frames. A 3D mesh is then created from the densified depth maps, and the mesh is textured by projecting the images from the video stream or sequence of frames onto the mesh. Metric scale of the depth maps may be estimated using a machine learning depth estimation network where direct measurements cannot be obtained.
Type: Application
Filed: April 5, 2023
Publication date: August 3, 2023
Inventors: SEAN M. ADKINSON, FLORA PONJOU TASSE, PAVAN K. KAMARAJU, GHISLAIN FOUODJI TASSE, RYAN R. FINK
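Densifying a frame "by comparison with spatially proximate frames" first requires choosing which frames count as proximate. A minimal sketch of that selection step only, assuming camera positions are known and Euclidean distance between them is the proximity measure (an assumption, not the claimed method):

```python
import math

def proximate_frames(positions, ref_idx, k=2):
    """Return the indices of the k frames whose camera positions are
    nearest the reference frame -- the candidates to compare against
    when densifying the reference frame's sparse depth."""
    ref = positions[ref_idx]
    others = [i for i in range(len(positions)) if i != ref_idx]
    return sorted(others, key=lambda i: math.dist(positions[i], ref))[:k]

# Camera positions along a short capture path.
poses = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5.0, 0.0, 0.0), (1.1, 0.0, 0.0)]
nearest = proximate_frames(poses, 0, k=2)
```

The selected neighbors would then feed a stereo-style comparison to fill in depth between the sparse points; that heavier step is omitted here.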
-
Patent number: 11640694
Abstract: Embodiments include systems and methods for creation of a 3D mesh from a video stream or a sequence of frames. A sparse point cloud is first created from the video stream, which is then densified per frame by comparison with spatially proximate frames. A 3D mesh is then created from the densified depth maps, and the mesh is textured by projecting the images from the video stream or sequence of frames onto the mesh. Metric scale of the depth maps may be estimated using a machine learning depth estimation network where direct measurements cannot be obtained.
Type: Grant
Filed: March 22, 2021
Date of Patent: May 2, 2023
Assignee: STREEM, LLC
Inventors: Sean M. Adkinson, Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink
-
Patent number: 11600050
Abstract: Embodiments include systems and methods for determining a 6D pose estimate associated with an image of a physical 3D object captured in a video stream. An initial 6D pose estimate is inferred and then further iteratively refined. The video stream may be frozen to allow the user to tap or touch a display to indicate the locations of user-input keypoints. The resulting 6D pose estimate is used to assist in replacing or superimposing the physical 3D object with digital or virtual content in an augmented reality (AR) frame.
Type: Grant
Filed: April 2, 2021
Date of Patent: March 7, 2023
Assignee: STREEM, LLC
Inventors: Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Sean M. Adkinson
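To illustrate the "iteratively refined" step, here is a deliberately tiny stand-in: greedy coordinate descent on the translation part of a pose, reducing squared reprojection error against user-tapped keypoints. The real refinement is not specified in the abstract, and `project` here is a toy 2D translation, not a camera model:

```python
def refine_translation(t0, observed, project, step=0.25, iters=100):
    """Try +/- steps on each translation axis, keep any move that lowers
    total squared reprojection error against user-input keypoints, and
    halve the step when no move helps (toy iterative refinement)."""
    t = list(t0)

    def err(tt):
        return sum((px - ox) ** 2 + (py - oy) ** 2
                   for (px, py), (ox, oy) in zip(project(tt), observed))

    best = err(t)
    for _ in range(iters):
        improved = False
        for axis in range(len(t)):
            for d in (step, -step):
                trial = t[:]
                trial[axis] += d
                e = err(trial)
                if e < best:
                    t, best, improved = trial, e, True
        if not improved:
            step /= 2.0
            if step < 1e-6:
                break
    return t, best

# Toy scene: two model keypoints, "projection" is pure 2D translation.
model = [(0.0, 0.0), (1.0, 0.0)]
project = lambda t: [(x + t[0], y + t[1]) for x, y in model]
observed = [(2.0, 3.0), (3.0, 3.0)]   # user taps consistent with t = (2, 3)
t_hat, residual = refine_translation([0.0, 0.0], observed, project)
```

A real refiner would optimize all six pose parameters against a projective camera model (e.g. with Gauss-Newton), but the stop-when-error-stops-improving loop structure is the same idea.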
-
Publication number: 20220319120
Abstract: Embodiments include systems and methods for determining a 6D pose estimate associated with an image of a physical 3D object captured in a video stream. An initial 6D pose estimate is inferred and then further iteratively refined. The video stream may be frozen to allow the user to tap or touch a display to indicate the locations of user-input keypoints. The resulting 6D pose estimate is used to assist in replacing or superimposing the physical 3D object with digital or virtual content in an augmented reality (AR) frame.
Type: Application
Filed: April 2, 2021
Publication date: October 6, 2022
Inventors: FLORA PONJOU TASSE, PAVAN K. KAMARAJU, GHISLAIN FOUODJI TASSE, SEAN M. ADKINSON
-
Publication number: 20220318322
Abstract: Methods and systems disclosed herein are directed to detection and recognition of items of data on labels applied to equipment and identifying metadata labels for the items of data using NLP. Embodiments may include identifying one or more items of data on an image of a label associated with a piece of equipment, determining, using NLP on the one or more items of data of the image, one or more metadata associated, respectively, with the identified one or more items of data, and outputting at least one of the one or more metadata and associated items of data.
Type: Application
Filed: March 30, 2021
Publication date: October 6, 2022
Inventors: PAVAN K. KAMARAJU, GHISLAIN FOUODJI TASSE, FLORA PONJOU TASSE, SEAN M. ADKINSON
-
Publication number: 20220189118
Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other image captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are disclosed.
Type: Application
Filed: March 8, 2022
Publication date: June 16, 2022
Applicant: STREEM, INC.
Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink, Sean M. Adkinson
-
Patent number: 11270505
Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other image captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are disclosed.
Type: Grant
Filed: May 22, 2020
Date of Patent: March 8, 2022
Assignee: STREEM, INC.
Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink, Sean M. Adkinson
-
Publication number: 20210295599
Abstract: Embodiments include systems and methods for creation of a 3D mesh from a video stream or a sequence of frames. A sparse point cloud is first created from the video stream, which is then densified per frame by comparison with spatially proximate frames. A 3D mesh is then created from the densified depth maps, and the mesh is textured by projecting the images from the video stream or sequence of frames onto the mesh. Metric scale of the depth maps may be estimated using a machine learning depth estimation network where direct measurements cannot be obtained.
Type: Application
Filed: March 22, 2021
Publication date: September 23, 2021
Inventors: SEAN M. ADKINSON, FLORA PONJOU TASSE, PAVAN K. KAMARAJU, GHISLAIN FOUODJI TASSE, RYAN R. FINK
-
Publication number: 20200372709
Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other image captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are disclosed.
Type: Application
Filed: May 22, 2020
Publication date: November 26, 2020
Inventors: Flora PONJOU TASSE, Pavan Kumar KAMARAJU, Ghislain FOUODJI TASSE, Ryan R. FINK, Sean M. ADKINSON
-
Publication number: 20200104318
Abstract: The present invention relates to methods for searching for two-dimensional or three-dimensional objects. More particularly, the present invention relates to searching for two-dimensional or three-dimensional objects in a collection by using a multi-modal query of image and/or tag data. Aspects and/or embodiments seek to provide a method of searching for digital objects using any combination of images, three-dimensional shapes and text by embedding the vector representations for these multiple modes in the same space. Aspects and/or embodiments can be easily extended to any other type of modality, making the approach more general.
Type: Application
Filed: March 7, 2018
Publication date: April 2, 2020
Inventors: FLORA PONJOU TASSE, Ghislain FOUODJI TASSE
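Once images, 3D shapes and text are embedded in the same vector space, search reduces to nearest-neighbor ranking. A minimal sketch of that retrieval step, assuming embeddings already exist and cosine similarity is the metric (the collection contents are made up for illustration):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def search(query_vec, collection):
    """Rank objects by similarity to the query in the shared embedding
    space; images, 3D shapes and text tags are assumed to be embedded
    into the same space upstream."""
    return sorted(collection, key=lambda k: cosine(query_vec, collection[k]),
                  reverse=True)

collection = {
    "chair_mesh": [1.0, 0.0],   # embedding of a 3D shape
    "table_img": [0.0, 1.0],    # embedding of an image
    "stool_tag": [0.9, 0.1],    # embedding of a text tag
}
ranked = search([1.0, 0.05], collection)
```

Because every modality lands in the same space, a text query can retrieve a 3D shape and vice versa, and adding a new modality only requires a new encoder into that space — which is the extensibility the abstract claims.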