Patents Assigned to STREEM, LLC
  • Publication number: 20240144595
    Abstract: A neural network architecture is provided for reconstructing, in real time, a 3D scene with additional attributes such as color and segmentation from a stream of camera-tracked RGB images. The neural network can include a number of modules which process image data in sequence. In an example implementation, the processing can include capturing frames of color data, selecting key frames, processing a set of key frames to obtain partial 3D scene data, including a mesh model and associated voxels, fusing the partial 3D scene data into existing scene data, and extracting a 3D colored and segmented mesh from the 3D scene data. A sketch of this pipeline follows this entry.
    Type: Application
    Filed: October 26, 2022
    Publication date: May 2, 2024
    Applicant: STREEM, LLC
    Inventor: Flora Ponjou Tasse
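A minimal sketch of the staged pipeline this abstract describes. The module callbacks (`select_key_frame`, `reconstruct_chunk`, `fuse`, `extract_mesh`) and the chunk size of eight key frames are assumptions for illustration; the patent does not disclose these names or parameters.

```python
from dataclasses import dataclass, field

@dataclass
class SceneData:
    """Accumulated global scene: voxels plus a colored, segmented mesh."""
    voxels: dict = field(default_factory=dict)
    mesh: list = field(default_factory=list)

def reconstruct_stream(frames, select_key_frame, reconstruct_chunk, fuse, extract_mesh):
    """Run the modules in sequence over a stream of camera-tracked RGB frames."""
    scene = SceneData()
    key_frames = []
    for frame in frames:                             # capture frames of color data
        if select_key_frame(frame, key_frames):      # select key frames
            key_frames.append(frame)
        if len(key_frames) == 8:                     # process a set of key frames
            partial = reconstruct_chunk(key_frames)  # -> partial 3D scene data
            fuse(scene, partial)                     # fuse into existing scene data
            key_frames.clear()
    return extract_mesh(scene)                       # extract colored, segmented mesh
```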
  • Publication number: 20240096019
    Abstract: A sequence of frames including color and depth data is processed to identify key frames while minimizing redundancy. A sparse 3D point cloud is obtained for each frame and represented by a set of voxels. Each voxel has associated data indicating, e.g., a depth and a camera viewing angle. When a new frame is processed, a new sparse 3D point cloud is obtained. For points which are not encompassed by the existing voxels, new voxels are created. For points which are encompassed by the existing voxels, a comparison determines whether the depth data of the new frame is more accurate than the existing depth data. A frame is selected as a key frame based on factors such as the number of new voxels created, the number of existing voxels for which the depth data is updated, and accuracy scores. A sketch of the voxel bookkeeping follows this entry.
    Type: Application
    Filed: September 21, 2022
    Publication date: March 21, 2024
    Applicant: STREEM, LLC
    Inventor: Nikilesh Urella
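A hedged sketch of the voxel bookkeeping described above. The 5 cm voxel size, the use of viewing angle as the accuracy proxy, and the selection ratio are assumptions; the patent describes accuracy scores more generally.

```python
import numpy as np

VOXEL = 0.05  # voxel edge length in meters (assumed)

def voxel_key(p):
    """Quantize a 3D point to the index of the voxel that encompasses it."""
    return tuple(np.floor(np.asarray(p) / VOXEL).astype(int))

def score_frame(points, depths, angles, voxels):
    """Return (new_voxel_count, updated_count) for a candidate frame.

    `voxels` maps voxel index -> (depth, viewing_angle). A smaller viewing
    angle is treated here as a proxy for more accurate depth.
    """
    new, updated = 0, 0
    for p, d, a in zip(points, depths, angles):
        k = voxel_key(p)
        if k not in voxels:
            voxels[k] = (d, a)       # point not encompassed: create a new voxel
            new += 1
        elif a < voxels[k][1]:       # new view judged more accurate
            voxels[k] = (d, a)       # update the existing voxel's depth data
            updated += 1
    return new, updated

def is_key_frame(new, updated, n_points, min_ratio=0.2):
    # select as key frame when enough voxels were created or improved
    return (new + updated) / max(n_points, 1) >= min_ratio
```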
  • Publication number: 20240070985
    Abstract: Embodiments herein may relate to a technique to be performed by surface partition logic. The technique may include identifying a first mesh portion that is related to a first plane of a three-dimensional (3D) space, and a second mesh portion that is related to a second plane of the 3D space. The technique may include identifying, based on a linear representation of a border between the first mesh portion and the second mesh portion, an element of the first mesh portion that at least partially overlaps the second mesh portion. The technique may further include altering the element of the first mesh portion to reduce the amount that the element overlaps the second mesh portion. Other embodiments may be described and/or claimed. A sketch of the overlap correction follows this entry.
    Type: Application
    Filed: August 29, 2022
    Publication date: February 29, 2024
    Applicant: STREEM, LLC
    Inventor: Nikilesh Urella
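One way to read "altering the element to reduce the overlap" is to pull intruding vertices back onto the linear border, as in this sketch. Working in the first plane's 2D coordinates and using projection as the alteration are both assumptions for illustration.

```python
import numpy as np

def clip_element_to_border(triangle, border_point, border_normal):
    """Pull triangle vertices that cross a linear border back onto it.

    `triangle` is a (3, 2) array of vertices in the first plane's 2D
    coordinates; the border between the two mesh portions is the line
    through `border_point` with outward normal `border_normal`.
    """
    n = border_normal / np.linalg.norm(border_normal)
    out = triangle.copy()
    for i, v in enumerate(triangle):
        overlap = np.dot(v - border_point, n)   # signed distance past the border
        if overlap > 0:                         # vertex intrudes into the second portion
            out[i] = v - overlap * n            # project it back onto the border
    return out

clipped = clip_element_to_border(np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 0.5]]),
                                 np.array([0.8, 0.0]), np.array([1.0, 0.0]))
```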
  • Publication number: 20240056491
    Abstract: Embodiments include systems and methods for offloading media service operations to one or more graphics processing units (GPUs). In some embodiments, a computer system includes a first computer device to provide a media service that involves implementing media service operations and to transmit a first media service request for a first media service operation of the media service operations. In addition, the computer system includes a second computer device that includes one or more GPUs. The second computer device is to implement the first media service operation with the one or more GPUs in response to the first computer device transmitting the first media service request for the first media service operation. A sketch of the offload flow follows this entry.
    Type: Application
    Filed: August 11, 2022
    Publication date: February 15, 2024
    Applicant: STREEM, LLC
    Inventors: Pavan K. Kamaraju, Steven Funasaki, Renganathan Veerasubramanian
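A structural sketch of the request/response split between the two computer devices. The class names, the `operation` string, and the in-process hand-off standing in for the network transport are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class MediaServiceRequest:
    operation: str   # e.g. "transcode" -- illustrative operation name
    payload: bytes   # media data the operation runs on

def run_on_gpu(operation: str, payload: bytes) -> bytes:
    """Placeholder for actual GPU execution on the second device."""
    return payload

class GpuWorker:
    """Second computer device: fulfills requests with its one or more GPUs."""
    def handle(self, req: MediaServiceRequest) -> bytes:
        return run_on_gpu(req.operation, req.payload)

class MediaService:
    """First computer device: offloads an operation instead of running it locally."""
    def __init__(self, worker: GpuWorker):
        self.worker = worker

    def process(self, operation: str, payload: bytes) -> bytes:
        req = MediaServiceRequest(operation, payload)
        return self.worker.handle(req)   # transmit the request; receive the result
```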
  • Patent number: 11842444
    Abstract: Embodiments include systems and methods for visualizing the position of a capturing device within a 3D mesh, generated from a video stream from the capturing device. A capturing device may provide a video stream along with point cloud data and camera pose data. This video stream, point cloud data, and camera pose data are then used to progressively generate a 3D mesh. The camera pose data and point cloud data can further be used, in conjunction with a SLAM algorithm, to indicate the position and orientation of the capturing device within the generated 3D mesh. A sketch of the pose-marker computation follows this entry.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: December 12, 2023
    Assignee: STREEM, LLC
    Inventors: Sean M. Adkinson, Teressa Chizeck, Ryan R. Fink
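Deriving a position-and-orientation marker from camera pose data might look like the following. The 4x4 camera-to-world matrix layout and the OpenGL-style -Z view direction are assumptions; rendering the marker inside the mesh is omitted.

```python
import numpy as np

def camera_marker(pose):
    """Position and view direction of the capturing device in mesh coordinates,
    from a 4x4 camera-to-world pose produced by SLAM-style tracking."""
    pose = np.asarray(pose)
    position = pose[:3, 3]    # translation column: the camera center
    forward = -pose[:3, 2]    # -Z column as the view direction (convention assumed)
    return position, forward

position, forward = camera_marker(np.eye(4))  # identity pose: origin, looking down -Z
```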
  • Patent number: 11830213
    Abstract: Embodiments include systems and methods for remotely measuring distances in an environment captured by a device. A device captures a video stream of its environment along with AR data that may include camera pose information and/or depth information, and transmits the video stream and AR data to a remote device. The remote device receives a selection of a first point and a second point within the video stream and, using the AR data, calculates a distance between the first and second points. The first and second points may be at different locations not simultaneously in view of the device. Other embodiments may capture additional points to compute areas and/or volumes. A sketch of the two-point measurement follows this entry.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: November 28, 2023
    Assignee: STREEM, LLC
    Inventors: Sean M. Adkinson, Ryan R. Fink, Brian Gram, Nicholas Degroot, Alexander Fallenstedt
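The measurement reduces to lifting each selected pixel into world coordinates using its frame's depth, intrinsics, and camera pose, then taking the Euclidean distance; because each point carries its own pose, the two points need not come from the same frame. The frame field names below are hypothetical, since the patent does not specify an AR data format.

```python
import numpy as np

def unproject(pixel, depth, intrinsics, cam_to_world):
    """Lift a 2D pixel with known depth into world coordinates (pinhole model)."""
    fx, fy, cx, cy = intrinsics
    u, v = pixel
    p_cam = np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth, 1.0])
    return (cam_to_world @ p_cam)[:3]

def measure(pixel_a, frame_a, pixel_b, frame_b):
    """Distance between two selected points, possibly from different frames.

    Each frame is a dict with 'depth_at' (pixel -> depth), 'intrinsics', and
    'pose' (camera-to-world) taken from the AR data -- assumed field names.
    """
    a = unproject(pixel_a, frame_a["depth_at"](pixel_a), frame_a["intrinsics"], frame_a["pose"])
    b = unproject(pixel_b, frame_b["depth_at"](pixel_b), frame_b["intrinsics"], frame_b["pose"])
    return float(np.linalg.norm(a - b))
```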
  • Patent number: 11831965
    Abstract: Embodiments include systems and methods for filtering augmented reality (“AR”) data streams, and in particular, video and/or audio streams, to change or remove undesirable content, such as personally identifiable information, embarrassing content, financial data, etc. For example, a handheld device may capture video using a built-in camera, and as needed superimpose AR objects on the video to change its content. A cloud service may host an AR session between the handheld device and a remote machine. The cloud service and remote machine may also operate their own filter on the AR data to remove and/or replace content to comply with their interests, regulations, policies, etc. Filtering may be cumulative. For example, the handheld device may remove financial data before it leaves the device, and then the cloud service may replace commercial logos before sharing AR data between session participants. A sketch of the cumulative filter chain follows this entry.
    Type: Grant
    Filed: July 6, 2022
    Date of Patent: November 28, 2023
    Assignee: STREEM, LLC
    Inventor: Pavan K. Kamaraju
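Cumulative filtering composes naturally: each stage receives the previous stage's output. The individual filters below are stubs; real ones would detect and replace content in the frame.

```python
def redact_financial(frame):
    """Device-side filter: remove financial data before the frame leaves (stub)."""
    return frame

def replace_logos(frame):
    """Cloud-side filter: swap commercial logos before sharing (stub)."""
    return frame

def make_pipeline(*filters):
    """Compose filters so that filtering is cumulative across stages."""
    def run(frame):
        for f in filters:
            frame = f(frame)
        return frame
    return run

device_stage = make_pipeline(redact_financial)
cloud_stage = make_pipeline(replace_logos)
shared = cloud_stage(device_stage({"pixels": None}))  # device filters first, then cloud
```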
  • Patent number: 11830142
    Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other image captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are disclosed. A sketch of one frame-fingerprinting reading follows this entry.
    Type: Grant
    Filed: March 8, 2022
    Date of Patent: November 28, 2023
    Assignee: STREEM, LLC
    Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink, Sean M. Adkinson
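One plausible reading of "fingerprinting the video with AR data" is a per-frame hash that binds the image bytes to the AR metadata captured with them, as sketched here; the patent may use a different scheme.

```python
import hashlib
import json

def fingerprint_frame(frame_bytes: bytes, ar_data: dict) -> str:
    """Bind a video frame to its contemporaneous AR data by hashing both
    together; any later mismatch between frame and metadata changes the tag."""
    meta = json.dumps(ar_data, sort_keys=True).encode()
    return hashlib.sha256(frame_bytes + meta).hexdigest()

tag = fingerprint_frame(b"<jpeg bytes>", {"pose": [0.0] * 16, "timestamp": 1.25})
```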
  • Patent number: 11823310
    Abstract: Methods for replacing or obscuring objects detected in an image or video on the basis of image context are disclosed. Context of the image or video may be obtained via pattern recognition on audio associated with the image or video, by user-supplied context, and/or by context derived from image capture, such as the nature of an application used to capture the image. The image or video may be analyzed for object detection and recognition, and depending upon policy, the image or video context may be used to select objects related or unrelated to the context for replacement or obfuscation. The selected objects may then be replaced with generic objects rendered from 3D models, or blurred or otherwise obscured. A sketch of the context-based selection follows this entry.
    Type: Grant
    Filed: September 6, 2019
    Date of Patent: November 21, 2023
    Assignee: STREEM, LLC
    Inventors: Ryan R. Fink, Sean M. Adkinson
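The policy step might reduce to set membership between detected object labels and the derived context terms, as in this sketch; the tuple layout and the policy names are assumptions.

```python
def select_for_replacement(detected_objects, context_terms, policy="unrelated"):
    """Pick detected objects to replace or obscure, given image context.

    Each object is a (label, bounding_box) pair. Depending on policy, the
    objects related to the context or those unrelated to it are targeted.
    """
    def is_related(label):
        return label in context_terms
    if policy == "related":
        return [obj for obj in detected_objects if is_related(obj[0])]
    return [obj for obj in detected_objects if not is_related(obj[0])]

targets = select_for_replacement([("sofa", (0, 0, 9, 9)), ("document", (5, 5, 8, 8))],
                                 context_terms={"sofa"}, policy="unrelated")
```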
  • Patent number: 11790025
    Abstract: Methods and systems disclosed herein are directed to detection and recognition of items of data on labels applied to equipment and identifying metadata labels for the items of data using natural language processing (NLP). Embodiments may include identifying one or more items of data on an image of a label associated with a piece of equipment, determining, using NLP on the one or more items of data of the image, one or more metadata associated, respectively, with the identified one or more items of data, and outputting at least one of the one or more metadata and associated items of data. A sketch of the item-to-metadata mapping follows this entry.
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: October 17, 2023
    Assignee: STREEM, LLC
    Inventors: Pavan K. Kamaraju, Ghislain Fouodji Tasse, Flora Ponjou Tasse, Sean M. Adkinson, Ryan R. Fink
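A toy stand-in for the item-to-metadata step: map each recognized item from an equipment label to a metadata label. The fixed regex rules are illustrative only; the claimed method uses NLP rather than hand-written patterns.

```python
import re

def infer_metadata(items):
    """Assign a metadata label to each recognized item from an equipment label."""
    rules = [
        ("serial_number", re.compile(r"^S/?N[:\s]", re.I)),
        ("model_number", re.compile(r"^(MODEL|MOD)[:\s]", re.I)),
        ("voltage", re.compile(r"\d+\s*V(AC|DC)?\b", re.I)),
    ]
    out = []
    for item in items:
        label = next((name for name, rx in rules if rx.search(item)), "unknown")
        out.append((label, item))
    return out

print(infer_metadata(["SN: 12345-AB", "MODEL: XR-7", "115 VAC 60Hz"]))
```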
  • Patent number: 11783546
    Abstract: A method for creating and storing a captured image and associated spatial data and augmented reality (AR) data in a file that allows subsequent manipulation and processing of AR objects is disclosed. In embodiments, one or more frames are extracted from a video stream, along with spatial information about the camera capturing the video stream. The one or more frames are analyzed in conjunction with the spatial information to calculate a point cloud of depth data. The one or more frames are stored in a file in a first layer, and the point cloud is stored in the file in a second layer. In some embodiments, one or more AR objects are stored in a third layer. A sketch of one layered container follows this entry.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: October 10, 2023
    Assignee: STREEM, LLC
    Inventors: Ryan R. Fink, Sean M. Adkinson
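A concrete but assumed container for the layered file: one zip entry per layer. The patent claims the layer structure, not any particular container format or naming.

```python
import json
import zipfile

def write_capture(path, frames, point_cloud, ar_objects=None):
    """Store frames, the depth point cloud, and AR objects as separate layers."""
    with zipfile.ZipFile(path, "w") as z:
        for i, frame in enumerate(frames):
            z.writestr(f"layer1/frame_{i:04d}.jpg", frame)    # first layer: images
        z.writestr("layer2/point_cloud.json", json.dumps(point_cloud))
        if ar_objects is not None:                            # optional third layer
            z.writestr("layer3/ar_objects.json", json.dumps(ar_objects))

write_capture("capture.arfile", [b"<jpeg>"], {"points": [[0, 0, 1.5]]},
              [{"type": "anchor", "pose": [0.0] * 16}])
```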
  • Publication number: 20230290068
    Abstract: A mesh model of a 3D space is provided with improved accuracy based on user inputs. In one aspect, a triangle face of the mesh is divided into three smaller triangle faces based on a user-selected point in a 3D space. A user can select the point on a display screen, for example, with the corresponding vertex in the mesh being the point intersected by a ray cast from the selected point. This process can be repeated to provide new vertices in the mesh model which more accurately represent an object in the 3D space and therefore allow a more accurate measurement of the size or area of the object. For example, the user might select four points to identify a rectangular object. A sketch of the face split follows this entry.
    Type: Application
    Filed: August 1, 2022
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventor: Huapeng Su
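The one-to-three face split is small enough to show in full. `new_point` is the mesh point intersected by the ray cast from the user's screen selection; the index-based mesh representation is an assumption.

```python
def split_face(faces, vertices, face_index, new_point):
    """Replace one triangle with three triangles meeting at `new_point`.

    `faces` holds vertex-index triples; `vertices` holds 3D points.
    """
    a, b, c = faces.pop(face_index)      # remove the original face
    vertices.append(new_point)           # add the new vertex
    p = len(vertices) - 1
    faces.extend([(a, b, p), (b, c, p), (c, a, p)])  # three smaller faces

vertices = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
faces = [(0, 1, 2)]
split_face(faces, vertices, 0, (0.3, 0.3, 0.0))
assert len(faces) == 3
```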
  • Publication number: 20230290037
    Abstract: Embodiments include systems and methods for real-time progressive texture mapping of a 3D mesh. A sequence of frames of a scene is captured by a capturing device, and keyframes that partially overlap in the sequence of frames are added to a queue of keyframes. A 3D mesh created from the sequence of frames is accessed. A computing device determines when changes to a property of the 3D mesh meet a predetermined threshold. One of the keyframes from the queue of keyframes is assigned to each face in the 3D mesh, and the 3D mesh is divided into mesh segments based on the assigned keyframes. The keyframe assigned to each of the mesh segments is used to compute texture coordinates for vertices in the respective mesh segment, and an image in the keyframe is assigned as a texture for the respective mesh segment. A sketch of the texture-coordinate step follows this entry.
    Type: Application
    Filed: August 18, 2022
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse
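Computing texture coordinates for a mesh segment from its assigned keyframe amounts to projecting each vertex into the keyframe's image, as sketched below; the keyframe field names are assumptions.

```python
import numpy as np

def project(vertex, keyframe):
    """Project a 3D vertex into a keyframe image; return normalized (u, v).

    `keyframe` carries 'pose' (4x4 world-to-camera), 'intrinsics'
    (fx, fy, cx, cy), and 'size' (width, height) -- assumed field names.
    """
    x, y, z, _ = keyframe["pose"] @ np.append(vertex, 1.0)
    fx, fy, cx, cy = keyframe["intrinsics"]
    w, h = keyframe["size"]
    return (fx * x / z + cx) / w, (fy * y / z + cy) / h

def texture_segment(segment_faces, vertices, keyframe):
    """Texture coordinates for every vertex of a segment, from its assigned
    keyframe; the keyframe's image then serves as the segment texture."""
    return {vi: project(vertices[vi], keyframe)
            for face in segment_faces for vi in face}
```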
  • Publication number: 20230290036
    Abstract: Real-time local live texturing of a 3D mesh includes adding keyframes that partially overlap in a sequence of frames to a queue of keyframes. When changes to a property of the 3D mesh created from the sequence of frames meet a predetermined threshold, each face's vertices are projected into the RGB images of the keyframes to test visibility, and the keyframes from which a face is visible are added to that face's visible keyframe list. A most recently added keyframe from the queue of keyframes is assigned to each face in the 3D mesh, and the 3D mesh is divided into mesh segments based on the assigned keyframes. The keyframe assigned to each of the mesh segments is used to compute texture coordinates for vertices in the respective mesh segment, and an image in the keyframe is assigned as a texture for the respective mesh segment. Colors from the visible keyframe list associated with each of the vertices are averaged into a single RGB value, and the RGB value is assigned to the vertex. A sketch of the color-averaging step follows this entry.
    Type: Application
    Filed: August 18, 2022
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse
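The final per-vertex coloring step might look like this sketch; `sample_rgb` is a hypothetical helper standing in for the projection into each visible keyframe's RGB image.

```python
import numpy as np

def average_vertex_color(vertex_index, visible_keyframes, sample_rgb):
    """Average the RGB samples of one vertex across every keyframe from which
    it is visible, yielding the single value assigned to that vertex.

    `sample_rgb(keyframe, vertex_index)` reads the color at the vertex's
    projected pixel in the keyframe image.
    """
    samples = [sample_rgb(kf, vertex_index) for kf in visible_keyframes]
    return np.mean(np.asarray(samples, dtype=float), axis=0)
```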
  • Publication number: 20230290061
    Abstract: Frames for texturing a 3D mesh may be selected to minimize the number of frames required to completely texture the mesh, thus reducing the overhead of texturing. Keyframes are selected from a video stream on the basis of the amount of overlap with previously selected keyframes, with the amount of overlap held below a predetermined threshold. The 3D mesh may also be refined and corrected to ensure a higher-quality mesh application, including color correction of the selected keyframes. Other embodiments are described. A sketch of the overlap-threshold selection follows this entry.
    Type: Application
    Filed: March 10, 2023
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventors: Pavan Kumar Kamaraju, Nikilesh Urella, Flora Ponjou Tasse
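A sketch of the threshold rule, with `overlap` left abstract (it could compare viewing frusta or textured-face coverage); the 0.7 threshold is an arbitrary placeholder.

```python
def select_keyframes(frames, overlap, max_overlap=0.7):
    """Keep a frame only while its overlap with every previously selected
    keyframe stays below the threshold, minimizing redundant textures.

    `overlap(a, b)` returns the fraction of shared scene coverage in [0, 1].
    """
    keyframes = []
    for frame in frames:
        if all(overlap(frame, kf) < max_overlap for kf in keyframes):
            keyframes.append(frame)
    return keyframes
```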
  • Publication number: 20230290070
    Abstract: Embodiments of devices and techniques of obtaining a three-dimensional (3D) representation of an area are disclosed. In one embodiment, a two-dimensional (2D) frame is obtained of an array of pixels of the area. Also, a depth frame of the area is obtained. The depth frame includes an array of depth estimation values. Each of the depth estimation values in the array of depth estimation values corresponds to one or more corresponding pixels in the array of pixels. Furthermore, an array of confidence scores is generated. Each confidence score in the array of confidence scores corresponds to one or more corresponding depth estimation values in the array of depth estimation values. Each of the confidence scores in the array of confidence scores indicates a confidence level that the one or more corresponding depth estimation values in the array of depth estimation values is accurate. A sketch of one confidence heuristic follows this entry.
    Type: Application
    Filed: October 5, 2022
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventor: Nikilesh Urella
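The confidence array has the same layout as the depth array, with one score per depth estimate. The local-variability heuristic below (flat neighborhoods score high, noisy edges score low) is purely illustrative; the patent does not fix a scoring formula.

```python
import numpy as np

def confidence_scores(depth, window=3, scale=0.1):
    """Assign each depth estimate a confidence in [0, 1] from the standard
    deviation of its local neighborhood in the depth frame."""
    pad = window // 2
    padded = np.pad(depth, pad, mode="edge")
    scores = np.empty_like(depth, dtype=float)
    h, w = depth.shape
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + window, j:j + window]
            scores[i, j] = 1.0 / (1.0 + patch.std() / scale)
    return scores

scores = confidence_scores(np.random.rand(4, 4))  # one score per depth value
```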
  • Publication number: 20230290069
    Abstract: A mesh model of a 3D space is modified based on semantic segmentation data to more accurately represent boundaries of an object in the 3D space. In one aspect, semantic segmentation images define one or more boundaries of the object. The semantic segmentation images are projected to a 3D mesh representation of the 3D space, and the 3D mesh representation is updated based on the one or more boundaries in the projected semantic segmentation image. In another aspect, the 3D mesh representation is updated based on one or more boundaries defined by the semantic segmentation images as applied to a point cloud of the 3D space. A sketch of per-vertex label projection follows this entry.
    Type: Application
    Filed: August 1, 2022
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventor: Huapeng Su
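One simplified way to apply projected segmentation to the mesh is a per-vertex majority vote across the segmentation images, as here; `project` is a hypothetical helper, and the boundary-driven mesh update itself is not shown.

```python
from collections import Counter

def label_mesh_vertices(vertices, seg_images, project):
    """Give each mesh vertex the majority class label across the semantic
    segmentation images that see it.

    `project(image, vertex)` returns the class label at the vertex's
    projected pixel, or None if the vertex is out of view.
    """
    labels = []
    for v in vertices:
        votes = Counter(l for img in seg_images
                        if (l := project(img, v)) is not None)
        labels.append(votes.most_common(1)[0][0] if votes else None)
    return labels
```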
  • Publication number: 20230290062
    Abstract: Artificial neural networks (ANNs) may be trained to output estimated floor plans from 3D spaces that would be challenging or impossible for existing techniques to estimate. In embodiments, an ANN may be trained using a supervised approach where top-down views of 3D meshes or point clouds are provided to the ANN as input, with ground truth floor plans provided as output for comparison. A suitably large training set may be used to fully train the ANN on challenging scenarios such as open loop scans and/or unusual geometries. The trained ANN may then be used to accurately estimate floor plans for such 3D spaces. Other embodiments are described. A training-step sketch follows this entry.
    Type: Application
    Filed: March 10, 2023
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventor: Huapeng Su
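A supervised training step under the stated setup: top-down views rendered from the mesh or point cloud go in, ground-truth floor plans come out for comparison. The tiny convolutional model and binary-mask loss are assumptions; the patent does not disclose an architecture.

```python
import torch
import torch.nn as nn

# Small encoder over top-down occupancy images (architecture assumed).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),             # per-pixel floor-plan logit
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(top_down_view, ground_truth_plan):
    """One supervised step: predicted plan compared against ground truth."""
    optimizer.zero_grad()
    loss = loss_fn(model(top_down_view), ground_truth_plan)
    loss.backward()
    optimizer.step()
    return loss.item()

loss = train_step(torch.rand(1, 1, 64, 64),
                  torch.randint(0, 2, (1, 1, 64, 64)).float())
```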
  • Publication number: 20230290090
    Abstract: Embodiments herein may relate to generating, based on a three-dimensional (3D) graphical representation of a 3D space, a two-dimensional (2D) image that includes respective indications of respective locations of one or more objects in the 3D space. The 2D image may then be displayed to a user that provides user input related to selection of an object of the one or more objects. The graphical representation of the object in the 2D image may then be altered based on the user input. Other embodiments may be described and/or claimed. A sketch of the selection hit-test follows this entry.
    Type: Application
    Filed: June 27, 2022
    Publication date: September 14, 2023
    Applicant: STREEM, LLC
    Inventor: Huapeng Su
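The selection-and-alteration loop could be as simple as hit-testing the click against the 2D indications and toggling the hit one, as sketched here with assumed field names.

```python
def toggle_selection(objects, click_xy, radius=10):
    """Alter the 2D indication of whichever object the user clicked.

    `objects` maps an id to {'xy': image location, 'highlight': bool}; the
    nearest indication within `radius` pixels of the click is toggled.
    """
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    hit = min(objects, key=lambda k: dist2(objects[k]["xy"], click_xy), default=None)
    if hit is not None and dist2(objects[hit]["xy"], click_xy) <= radius ** 2:
        objects[hit]["highlight"] = not objects[hit]["highlight"]
        return hit
    return None
```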
  • Patent number: 11715302
    Abstract: Methods for automatically tagging one or more images and/or video clips using an audio stream are disclosed. The audio stream may be processed using an automatic speech recognition algorithm to extract possible keywords. The image(s) and/or video clip(s) may then be tagged with the possible keywords. In some embodiments, the image(s) and/or video clip(s) may be tagged automatically. In other embodiments, a user may be presented with a list of possible keywords extracted from the audio stream, from which the user may then select to manually tag the image(s) and/or video clip(s). A sketch of the tagging step follows this entry.
    Type: Grant
    Filed: August 21, 2019
    Date of Patent: August 1, 2023
    Assignee: STREEM, LLC
    Inventors: Ryan R. Fink, Sean M. Adkinson
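The tagging step, given keywords already extracted by any speech-recognition pass, is a small amount of glue; `confirm` models the optional manual selection from the candidate list, and all names here are assumptions.

```python
def tag_media(clips, transcript_keywords, confirm=None):
    """Attach keywords extracted from an audio stream to media clips.

    With `confirm=None`, clips are tagged automatically with every candidate
    keyword; otherwise `confirm(candidates)` returns the user's selection.
    """
    chosen = confirm(transcript_keywords) if confirm else transcript_keywords
    return [{"clip": clip, "tags": list(chosen)} for clip in clips]

tags = tag_media(["clip_01.mp4"], ["furnace", "thermostat"],
                 confirm=lambda kws: [k for k in kws if k != "thermostat"])
```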