Patents by Inventor Flora Ponjou Tasse

Flora Ponjou Tasse has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240144595
    Abstract: A neural network architecture is provided for reconstructing, in real time, a 3D scene with additional attributes such as color and segmentation from a stream of camera-tracked RGB images. The neural network can include a number of modules that process image data in sequence. In an example implementation, the processing can include capturing frames of color data, selecting key frames, processing a set of key frames to obtain partial 3D scene data, including a mesh model and associated voxels, fusing the partial 3D scene data into existing scene data, and extracting a 3D colored and segmented mesh from the 3D scene data. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: October 26, 2022
    Publication date: May 2, 2024
    Applicant: STREEM, LLC
    Inventor: Flora Ponjou Tasse
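
A minimal sketch of the key-frame selection stage described in the abstract above. The pose-distance thresholds and helper names (is_keyframe, select_keyframes) are illustrative assumptions, not the patented architecture; downstream modules (partial reconstruction, fusion, mesh extraction) would consume the selected frames in sequence.

```python
import numpy as np

def is_keyframe(pose, last_kf_pose, t_thresh=0.10, r_thresh=np.deg2rad(15)):
    """Accept a frame as a key frame once the tracked camera has moved far
    enough from the previous key frame (thresholds are assumed values)."""
    t_dist = np.linalg.norm(pose[:3, 3] - last_kf_pose[:3, 3])
    r_rel = last_kf_pose[:3, :3].T @ pose[:3, :3]  # relative rotation
    angle = np.arccos(np.clip((np.trace(r_rel) - 1) / 2, -1.0, 1.0))
    return t_dist > t_thresh or angle > r_thresh

def select_keyframes(frames):
    """frames: list of (rgb_image, 4x4 camera-to-world pose) pairs.
    Returns the subset handed to the reconstruction and fusion modules."""
    keyframes = [frames[0]]
    for rgb, pose in frames[1:]:
        if is_keyframe(pose, keyframes[-1][1]):
            keyframes.append((rgb, pose))
    return keyframes
```
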
  • Patent number: 11830142
    Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other imagery captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are also disclosed. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: March 8, 2022
    Date of Patent: November 28, 2023
    Assignee: STREEM, LLC
    Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink, Sean M. Adkinson
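
One reading of the fingerprinting step is that each frame's pixel data is bound to the AR data captured with it; the sketch below guesses at that bookkeeping, and the contents of ar_record are assumptions.

```python
import hashlib
import json
import time

def fingerprint_frame(frame_bytes, ar_record):
    """Bind a video frame to AR data captured contemporaneously with it.
    The hash ties the exact pixels to the exact AR snapshot, so the pair
    can be matched up later when the depth map and mesh are built."""
    digest = hashlib.sha256(frame_bytes).hexdigest()
    return {"frame_sha256": digest, "ar": ar_record}

# Hypothetical usage: one fingerprint per captured frame.
fp = fingerprint_frame(
    b"...raw frame bytes...",
    {"timestamp": time.time(),
     "pose": [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]})
print(json.dumps(fp, indent=2))
```
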
  • Patent number: 11790025
    Abstract: Methods and systems disclosed herein are directed to detecting and recognizing items of data on labels applied to equipment and identifying metadata labels for those items of data using natural language processing (NLP). Embodiments may include identifying one or more items of data on an image of a label associated with a piece of equipment; determining, using NLP on the one or more items of data of the image, one or more metadata labels associated, respectively, with the identified one or more items of data; and outputting at least one of the one or more metadata labels and associated items of data. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: October 17, 2023
    Assignee: STREEM, LLC
    Inventors: Pavan K. Kamaraju, Ghislain Fouodji Tasse, Flora Ponjou Tasse, Sean M. Adkinson, Ryan R. Fink
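
A toy stand-in for the recognition step: mapping OCR'd items from an equipment label to metadata keys. The regular expressions and key names below are assumptions; the patent describes NLP, for which these hand-written rules are only a placeholder.

```python
import re

# Hypothetical metadata patterns; a production system would use a trained
# NLP model rather than regular expressions.
PATTERNS = {
    "model_number":  re.compile(r"\b(?:MODEL|MOD)[:#]?\s*([A-Z0-9-]+)", re.I),
    "serial_number": re.compile(r"\b(?:SERIAL|S/?N)[:#]?\s*([A-Z0-9-]+)", re.I),
    "voltage":       re.compile(r"\b(\d{2,3})\s*V(?:OLTS?)?\b", re.I),
}

def label_items(ocr_lines):
    """Assign a metadata key to each item of data OCR'd from a label."""
    out = {}
    for line in ocr_lines:
        for key, pat in PATTERNS.items():
            m = pat.search(line)
            if m:
                out[key] = m.group(1)
    return out

print(label_items(["MODEL: WX-500", "S/N 12345A", "INPUT 120V 60Hz"]))
```
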
  • Publication number: 20230290061
    Abstract: Frames for texturing a 3D mesh may be selected to minimize the number of frames required to completely texture the mesh, thus reducing the overhead of texturing. Keyframes are selected from a video stream on the basis of their overlap with previously selected keyframes, with the amount of overlap held below a predetermined threshold. The 3D mesh may also be refined and corrected to ensure a higher-quality mesh application, including color correction of the selected keyframes. Other embodiments are described. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: March 10, 2023
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventors: Pavan Kumar Kamaraju, Nikilesh Urella, Flora Ponjou Tasse
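
A sketch of overlap-bounded keyframe selection as the abstract describes it. The representation of coverage as sets of visible mesh faces, and the 0.8 threshold, are assumptions.

```python
def select_texture_keyframes(frame_ids, visible_faces, max_overlap=0.8):
    """visible_faces: dict frame_id -> set of mesh-face ids seen in that
    frame. A frame becomes a keyframe only while its overlap with faces
    already covered stays below max_overlap (an assumed threshold)."""
    covered, keyframes = set(), []
    for f in frame_ids:
        faces = visible_faces[f]
        overlap = len(faces & covered) / max(len(faces), 1)
        if overlap < max_overlap:
            keyframes.append(f)
            covered |= faces
    return keyframes
```
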
  • Publication number: 20230290037
    Abstract: Embodiments include systems and methods for real-time progressive texture mapping of a 3D mesh. A sequence of frames of a scene is captured by a capturing device, and keyframes that partially overlap in the sequence of frames are added to a queue of keyframes. A 3D mesh created from the sequence of frames is accessed. A computing device determines when changes to a property of the 3D mesh meet a predetermined threshold. One of the keyframes from the queue is assigned to each face in the 3D mesh, and the 3D mesh is divided into mesh segments based on the assigned keyframes. The keyframe assigned to each mesh segment is used to compute texture coordinates for the vertices in that segment, and an image in the keyframe is assigned as the texture for the segment. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: August 18, 2022
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse
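
Two pieces of this pipeline lend themselves to a short sketch: grouping faces into mesh segments by their assigned keyframe, and computing texture coordinates by projecting a vertex into its segment's keyframe. The camera model (a 4x4 world-to-camera matrix plus a 3x3 intrinsics matrix K) is an assumption.

```python
import numpy as np
from collections import defaultdict

def segment_by_keyframe(face_keyframe):
    """Group mesh faces into segments that share an assigned keyframe.
    face_keyframe: dict face_id -> keyframe_id."""
    segments = defaultdict(list)
    for face, kf in face_keyframe.items():
        segments[kf].append(face)
    return segments

def project_uv(vertex, world_to_cam, K, width, height):
    """Texture coordinates for one vertex: project it into the keyframe
    image with a pinhole model and normalize pixel coords to [0, 1]."""
    p = world_to_cam @ np.append(vertex, 1.0)   # to camera space
    u, v = (K @ p[:3])[:2] / p[2]               # to pixel coordinates
    return u / width, v / height
```
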
  • Publication number: 20230290036
    Abstract: Real-time local live texturing of a 3D mesh includes adding keyframes that partially overlap in a sequence of frames to a queue of keyframes. When changes to a property of the 3D mesh created from the sequence of frames meet a predetermined threshold, face vertices are projected into the RGB images of the keyframes to test visibility, and the keyframes from which a face is visible are added to that face's visible-keyframe list. The most recently added keyframe from the queue is assigned to each face in the 3D mesh, and the 3D mesh is divided into mesh segments based on the assigned keyframes. The keyframe assigned to each mesh segment is used to compute texture coordinates for the vertices in that segment, and an image in the keyframe is assigned as the texture for the segment. Finally, the colors from the visible-keyframe list associated with each vertex are averaged into a single RGB value, which is assigned to the vertex. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: August 18, 2022
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse
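
The visibility test and per-vertex color averaging might look like the following; in_frustum, occluded, and sample_color are assumptions standing in for the projection and depth checks the abstract implies.

```python
import numpy as np

def visible_keyframes(vertex, keyframes, in_frustum, occluded):
    """Keyframes from which a vertex is actually visible: it projects
    inside the image and is not blocked by other geometry."""
    return [kf for kf in keyframes
            if in_frustum(kf, vertex) and not occluded(kf, vertex)]

def average_vertex_color(vertex_id, visible_kfs, sample_color):
    """Average the RGB read-outs of one vertex over every keyframe that
    sees it, yielding the single RGB value assigned to the vertex."""
    samples = np.array([sample_color(kf, vertex_id) for kf in visible_kfs])
    return samples.mean(axis=0)
```
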
  • Publication number: 20230245391
    Abstract: Embodiments include systems and methods for creating a 3D mesh from a video stream or a sequence of frames. A sparse point cloud is first created from the video stream and then densified per frame by comparison with spatially proximate frames. A 3D mesh is then created from the densified depth maps, and the mesh is textured by projecting the images from the video stream or sequence of frames onto the mesh. The metric scale of the depth maps may be estimated using a machine learning depth estimation network where direct measurements cannot be made or calculated. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: April 5, 2023
    Publication date: August 3, 2023
    Inventors: Sean M. Adkinson, Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink
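
Two steps of this pipeline are easy to sketch: choosing spatially proximate frames for densification, and recovering metric scale by aligning a learned depth map to sparse metric measurements. The k-nearest heuristic and the median-ratio alignment are assumptions, not the claimed method.

```python
import numpy as np

def proximate_frames(poses, ref, k=4):
    """Indices of the k frames whose camera centers lie nearest to frame
    `ref`; these neighbors drive per-frame densification of the point cloud."""
    centers = np.array([p[:3, 3] for p in poses])
    dist = np.linalg.norm(centers - centers[ref], axis=1)
    dist[ref] = np.inf                       # never pick the frame itself
    return np.argsort(dist)[:k]

def metric_scale(pred_depth, sparse_depth):
    """One scale factor aligning a scale-ambiguous learned depth map to
    sparse metric depths: the median of per-pixel ratios where valid."""
    mask = sparse_depth > 0
    return float(np.median(sparse_depth[mask] / pred_depth[mask]))
```
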
  • Patent number: 11640694
    Abstract: Embodiments include systems and methods for creating a 3D mesh from a video stream or a sequence of frames. A sparse point cloud is first created from the video stream and then densified per frame by comparison with spatially proximate frames. A 3D mesh is then created from the densified depth maps, and the mesh is textured by projecting the images from the video stream or sequence of frames onto the mesh. The metric scale of the depth maps may be estimated using a machine learning depth estimation network where direct measurements cannot be made or calculated.
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: May 2, 2023
    Assignee: STREEM, LLC
    Inventors: Sean M. Adkinson, Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink
  • Patent number: 11600050
    Abstract: Embodiments include systems and methods for determining a 6D pose estimate associated with an image of a physical 3D object captured in a video stream. An initial 6D pose estimate is inferred and then iteratively refined. The video stream may be frozen to allow the user to tap or touch a display to indicate the locations of user-input keypoints. The resulting 6D pose estimate is used to assist in replacing the physical 3D object with digital or virtual content, or superimposing such content over it, in an augmented reality (AR) frame. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: April 2, 2021
    Date of Patent: March 7, 2023
    Assignee: STREEM, LLC
    Inventors: Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Sean M. Adkinson
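
The infer-then-refine flow can be approximated with OpenCV's PnP solvers, used here purely as a stand-in for the patented estimator. User-tapped 2D keypoints matched against known 3D model points play the role the abstract describes.

```python
import numpy as np
import cv2  # OpenCV's generic PnP solver, a stand-in, not the patented method

def pose_from_keypoints(obj_pts, img_pts, K):
    """obj_pts: Nx3 model points; img_pts: Nx2 user-tapped pixel locations;
    K: 3x3 camera intrinsics. Returns an initial 6D pose estimate that is
    then iteratively refined by Levenberg-Marquardt minimization."""
    obj = obj_pts.astype(np.float32)
    img = img_pts.astype(np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, None)
    if not ok:
        raise RuntimeError("initial pose estimate failed")
    rvec, tvec = cv2.solvePnPRefineLM(obj, img, K, None, rvec, tvec)
    return rvec, tvec  # axis-angle rotation and translation
```

The refined pose can then anchor the digital or virtual content over the physical object in the AR frame.
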
  • Publication number: 20220319120
    Abstract: Embodiments include systems and methods for determining a 6D pose estimate associated with an image of a physical 3D object captured in a video stream. An initial 6D pose estimate is inferred and then iteratively refined. The video stream may be frozen to allow the user to tap or touch a display to indicate the locations of user-input keypoints. The resulting 6D pose estimate is used to assist in replacing the physical 3D object with digital or virtual content, or superimposing such content over it, in an augmented reality (AR) frame.
    Type: Application
    Filed: April 2, 2021
    Publication date: October 6, 2022
    Inventors: Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Sean M. Adkinson
  • Publication number: 20220318322
    Abstract: Methods and systems disclosed herein are directed to detecting and recognizing items of data on labels applied to equipment and identifying metadata labels for those items of data using natural language processing (NLP). Embodiments may include identifying one or more items of data on an image of a label associated with a piece of equipment; determining, using NLP on the one or more items of data of the image, one or more metadata labels associated, respectively, with the identified one or more items of data; and outputting at least one of the one or more metadata labels and associated items of data.
    Type: Application
    Filed: March 30, 2021
    Publication date: October 6, 2022
    Inventors: Pavan K. Kamaraju, Ghislain Fouodji Tasse, Flora Ponjou Tasse, Sean M. Adkinson
  • Publication number: 20220189118
    Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other imagery captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are also disclosed.
    Type: Application
    Filed: March 8, 2022
    Publication date: June 16, 2022
    Applicant: STREEM, INC.
    Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink, Sean M. Adkinson
  • Patent number: 11270505
    Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other imagery captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are also disclosed.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: March 8, 2022
    Assignee: STREEM, INC.
    Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink, Sean M. Adkinson
  • Publication number: 20210295599
    Abstract: Embodiments include systems and methods for creating a 3D mesh from a video stream or a sequence of frames. A sparse point cloud is first created from the video stream and then densified per frame by comparison with spatially proximate frames. A 3D mesh is then created from the densified depth maps, and the mesh is textured by projecting the images from the video stream or sequence of frames onto the mesh. The metric scale of the depth maps may be estimated using a machine learning depth estimation network where direct measurements cannot be made or calculated.
    Type: Application
    Filed: March 22, 2021
    Publication date: September 23, 2021
    Inventors: Sean M. Adkinson, Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink
  • Publication number: 20200372709
    Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other imagery captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are also disclosed.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 26, 2020
    Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink, Sean M. Adkinson
  • Publication number: 20200104318
    Abstract: The present invention relates to methods for searching for two-dimensional or three-dimensional objects, and more particularly to searching a collection of such objects using a multi-modal query of image and/or tag data. Aspects and/or embodiments seek to provide a method of searching for digital objects using any combination of images, three-dimensional shapes, and text by embedding the vector representations of these multiple modes in the same space. Aspects and/or embodiments can be easily extended to any other type of modality, making the approach more general. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: March 7, 2018
    Publication date: April 2, 2020
    Inventors: Flora Ponjou Tasse, Ghislain Fouodji Tasse
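
Retrieval in a shared embedding space reduces to nearest-neighbour search. The sketch below assumes encoders mapping images, 3D shapes, and tags into that space already exist and shows only the similarity step; averaging modal embeddings into one query vector is a guessed combination rule, not necessarily the patented one.

```python
import numpy as np

def cosine_search(query_vec, index_vecs, top_k=5):
    """Rank indexed objects by cosine similarity to the query inside the
    shared space that images, 3D shapes, and text are all embedded into."""
    q = query_vec / np.linalg.norm(query_vec)
    X = index_vecs / np.linalg.norm(index_vecs, axis=1, keepdims=True)
    return np.argsort(-(X @ q))[:top_k]

# Hypothetical multi-modal query: combine an image embedding with a tag
# embedding, e.g. q = (embed_image(img) + embed_text("office chair")) / 2,
# then rank the collection with cosine_search(q, index_vecs).
```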