Patents Assigned to STREEM, LLC
-
Patent number: 11830142
Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other image captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are disclosed.
Type: Grant
Filed: March 8, 2022
Date of Patent: November 28, 2023
Assignee: STREEM, LLC
Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink, Sean M. Adkinson
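The per-frame pairing described here lends itself to a timestamp-keyed association between video frames and AR samples. Below is a minimal sketch of one way such fingerprinting might be structured; the `ARFingerprint` fields, the skew tolerance, and the `fingerprint_stream` helper are illustrative assumptions, not the patented method.

```python
from dataclasses import dataclass

@dataclass
class ARFingerprint:
    timestamp: float      # capture time of the AR sample, in seconds
    camera_pose: list     # hypothetical 4x4 pose matrix, row-major
    depth_map_id: str     # key of the depth map captured with this sample

@dataclass
class TaggedFrame:
    pixels: bytes
    fingerprint: ARFingerprint  # None when no AR sample is close enough

def fingerprint_stream(frames, ar_samples, max_skew=1 / 60):
    """Pair each video frame with the AR sample captured closest in time."""
    tagged = []
    for ts, pixels in frames:
        nearest = min(ar_samples, key=lambda s: abs(s.timestamp - ts))
        close_enough = abs(nearest.timestamp - ts) <= max_skew
        tagged.append(TaggedFrame(pixels, nearest if close_enough else None))
    return tagged

samples = [ARFingerprint(0.000, [1.0] * 16, "d0"),
           ARFingerprint(0.033, [1.0] * 16, "d1")]
print(fingerprint_stream([(0.034, b"\x00")], samples)[0].fingerprint.depth_map_id)
```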
-
Patent number: 11831965
Abstract: Embodiments include systems and methods for filtering augmented reality (“AR”) data streams, in particular video and/or audio streams, to change or remove undesirable content, such as personally identifiable information, embarrassing content, financial data, etc. For example, a handheld device may capture video using a built-in camera and, as needed, superimpose AR objects on the video to change its content. A cloud service may host an AR session between the handheld device and a remote machine. The cloud service and remote machine may also operate their own filters on the AR data to remove and/or replace content to comply with their interests, regulations, policies, etc. Filtering may be cumulative. For example, the handheld device may remove financial data before it leaves the device, and then the cloud service may replace commercial logos before sharing AR data between session participants.
Type: Grant
Filed: July 6, 2022
Date of Patent: November 28, 2023
Assignee: STREEM, LLC
Inventor: Pavan K. Kamaraju
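The cumulative filtering the abstract describes behaves like a chain of stages, each consuming the previous stage's output. A minimal sketch follows, where the regex, the "AcmeCorp" brand, and the text-only payload are all stand-ins for real video/audio filters:

```python
import re

def redact_financial(text):
    """Device-side stage: mask digit runs that look like card numbers."""
    return re.sub(r"\b(?:\d[ -]?){12,15}\d\b", "[REDACTED]", text)

def replace_logos(text):
    """Cloud-side stage: swap a known brand mark for a neutral placeholder.
    'AcmeCorp' is a made-up brand for the demo."""
    return text.replace("AcmeCorp", "[logo removed]")

def run_pipeline(payload, stages):
    """Apply each filter in order; later stages see earlier stages' output,
    which is what makes the filtering cumulative."""
    for stage in stages:
        payload = stage(payload)
    return payload

# Device-side filter runs first, then the cloud service applies its policy.
print(run_pipeline("Card 4111 1111 1111 1111 by the AcmeCorp sign",
                   [redact_financial, replace_logos]))
```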
-
Patent number: 11823310
Abstract: Methods for replacing or obscuring objects detected in an image or video on the basis of image context are disclosed. Context of the image or video may be obtained via pattern recognition on audio associated with the image or video, by user-supplied context, and/or by context derived from image capture, such as the nature of the application used to capture the image. The image or video may be analyzed for object detection and recognition and, depending upon policy, the image or video context is used to select objects, related or unrelated to the context, for replacement or obfuscation. The selected objects may then be replaced with generic objects rendered from 3D models, or blurred or otherwise obscured.
Type: Grant
Filed: September 6, 2019
Date of Patent: November 21, 2023
Assignee: STREEM, LLC
Inventors: Ryan R. Fink, Sean M. Adkinson
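Selecting which detected objects to replace or obscure, given a context and a policy, might reduce to a lookup like the following sketch; the policy entries and detection format are invented for illustration.

```python
def select_objects_to_obscure(detections, context, policy):
    """Pick detected objects to replace or blur given the media's context.

    `detections` is a list of (label, bbox) pairs from an object detector;
    `policy` maps a context string to the set of labels that should be
    obscured when that context applies.
    """
    targets = policy.get(context, set())
    return [(label, bbox) for label, bbox in detections if label in targets]

policy = {
    "home_repair": {"family_photo", "mail"},   # unrelated to the task: hide
    "retail": {"price_tag"},                   # hypothetical policy entries
}
detections = [("water_heater", (10, 10, 200, 300)),
              ("family_photo", (220, 40, 300, 120))]
print(select_objects_to_obscure(detections, "home_repair", policy))
```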
-
Patent number: 11790025
Abstract: Methods and systems disclosed herein are directed to detecting and recognizing items of data on labels applied to equipment, and to identifying metadata labels for those items of data using NLP. Embodiments may include identifying one or more items of data on an image of a label associated with a piece of equipment, determining, using NLP on the one or more items of data in the image, one or more metadata labels associated respectively with the identified items of data, and outputting at least one of the metadata labels and its associated item of data.
Type: Grant
Filed: March 30, 2021
Date of Patent: October 17, 2023
Assignee: STREEM, LLC
Inventors: Pavan K. Kamaraju, Ghislain Fouodji Tasse, Flora Ponjou Tasse, Sean M. Adkinson, Ryan R. Fink
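As a rough stand-in for the NLP step, the sketch below classifies OCR'd label items against per-metadata patterns; the regexes and metadata names are assumptions, and a real system would use a trained model rather than rules.

```python
import re

# Hypothetical patterns standing in for a trained NLP model: each maps a
# metadata label to a regex recognizing the shape of a data item.
PATTERNS = {
    "serial_number": re.compile(r"^[A-Z]{2}\d{6,}$"),
    "voltage":       re.compile(r"^\d+(\.\d+)?\s*V(AC|DC)?$", re.I),
    "model":         re.compile(r"^(MOD|MODEL)[:\s-]*\S+$", re.I),
}

def label_metadata(items):
    """Assign a metadata label to each OCR'd item from an equipment label."""
    out = []
    for item in items:
        label = next((name for name, pat in PATTERNS.items()
                      if pat.match(item.strip())), "unknown")
        out.append((label, item))
    return out

print(label_metadata(["SN123456789", "240 VAC", "MODEL: X-90"]))
```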
-
Patent number: 11783546
Abstract: A method for creating and storing a captured image and associated spatial data and augmented reality (AR) data in a file that allows subsequent manipulation and processing of AR objects is disclosed. In embodiments, one or more frames are extracted from a video stream, along with spatial information about the camera capturing the video stream. The one or more frames are analyzed in conjunction with the spatial information to calculate a point cloud of depth data. The one or more frames are stored in a file in a first layer, and the point cloud is stored in the file in a second layer. In some embodiments, one or more AR objects are stored in a third layer.
Type: Grant
Filed: December 17, 2018
Date of Patent: October 10, 2023
Assignee: STREEM, LLC
Inventors: Ryan R. Fink, Sean M. Adkinson
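One plausible concrete shape for the layered file is a container with one named member per layer, as in this sketch; the zip/JSON container, layer names, and `write_capture` helper are assumptions, since the abstract leaves the encoding open.

```python
import json, zipfile

def write_capture(path, frames, point_cloud, ar_objects=None):
    """Write a layered capture file: frames in one layer, the depth point
    cloud in a second, and optional AR objects in a third, so AR content
    can be re-read and edited later without re-deriving depth."""
    with zipfile.ZipFile(path, "w") as zf:
        for i, frame_bytes in enumerate(frames):
            zf.writestr(f"layer1/frame_{i:04d}.bin", frame_bytes)
        zf.writestr("layer2/point_cloud.json", json.dumps(point_cloud))
        if ar_objects is not None:
            zf.writestr("layer3/ar_objects.json", json.dumps(ar_objects))

write_capture("capture.arfile",
              frames=[b"\x00" * 16],                 # stand-in image data
              point_cloud=[[0.0, 0.1, 1.2]],         # x, y, z in meters
              ar_objects=[{"type": "arrow", "pos": [0, 0, 1]}])
```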
-
Publication number: 20230290061
Abstract: Frames for texturing a 3D mesh may be selected to minimize the number of frames required to completely texture the mesh, thus reducing the overhead of texturing. Keyframes are selected from a video stream on the basis of the amount of overlap with previously selected keyframes, with the amount of overlap held below a predetermined threshold. The 3D mesh may also be refined and corrected to ensure a higher-quality mesh application, including color correction of the selected keyframes. Other embodiments are described.
Type: Application
Filed: March 10, 2023
Publication date: September 14, 2023
Applicant: STREEM, LLC
Inventors: Pavan Kumar Kamaraju, Nikilesh Urella, Flora Ponjou Tasse
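The selection rule (keep a frame only while its overlap with every chosen keyframe stays under the threshold) can be sketched greedily as below; the `pose_overlap` proxy, based on camera distance, is a toy stand-in for a real overlap measure such as frustum intersection.

```python
def select_keyframes(frames, overlap, max_overlap=0.8):
    """Greedy keyframe selection: keep a frame only if its overlap with
    every previously selected keyframe stays below `max_overlap`.

    `overlap(a, b)` returns the fraction [0, 1] of shared view between two
    frames; its implementation is assumed.
    """
    keyframes = []
    for f in frames:
        if all(overlap(f, k) < max_overlap for k in keyframes):
            keyframes.append(f)
    return keyframes

def pose_overlap(a, b):
    """Toy overlap proxy: closer camera positions => more shared view."""
    d = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return max(0.0, 1.0 - d)      # 1.0 when identical, 0.0 beyond 1 m

poses = [(0, 0, 0), (0.05, 0, 0), (0.5, 0, 0), (1.2, 0, 0)]
print(select_keyframes(poses, pose_overlap))   # drops the near-duplicate
```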
-
Publication number: 20230290037
Abstract: Embodiments include systems and methods for real-time progressive texture mapping of a 3D mesh. A sequence of frames of a scene is captured by a capturing device, and keyframes that partially overlap in the sequence of frames are added to a queue of keyframes. A 3D mesh created from the sequence of frames is accessed. A computing device determines when changes to a property of the 3D mesh meet a predetermined threshold. One of the keyframes from the queue of keyframes is assigned to each face in the 3D mesh, and the 3D mesh is divided into mesh segments based on the assigned keyframes. The keyframe assigned to each of the mesh segments is used to compute texture coordinates for vertices in the respective mesh segment, and an image in the keyframe is assigned as a texture for the respective mesh segment.
Type: Application
Filed: August 18, 2022
Publication date: September 14, 2023
Applicant: STREEM, LLC
Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse
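Assigning a keyframe to each face and then segmenting the mesh by shared keyframe might look like the following sketch; the `visible` predicate is assumed, and the most-recent-wins rule mirrors related publication 20230290036 later in this list.

```python
from collections import defaultdict

def assign_and_segment(faces, keyframe_queue, visible):
    """Assign each mesh face one keyframe from the queue, then divide the
    mesh into segments of faces that share the same keyframe.

    `visible(face, kf)` is an assumed predicate saying whether the face is
    seen in keyframe `kf`; the most recently queued visible keyframe wins.
    """
    segments = defaultdict(list)
    for face in faces:
        kf = next((k for k in reversed(keyframe_queue) if visible(face, k)),
                  None)
        segments[kf].append(face)
    return segments   # each segment later gets texture coords from its kf

faces = ["f0", "f1", "f2"]
queue = ["kf_a", "kf_b"]
seen = {("f0", "kf_a"), ("f1", "kf_b"), ("f2", "kf_a"), ("f2", "kf_b")}
print(assign_and_segment(faces, queue, lambda f, k: (f, k) in seen))
```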
-
Publication number: 20230290068
Abstract: A mesh model of a 3D space is provided with improved accuracy based on user inputs. In one aspect, a triangle face of the mesh is divided into three smaller triangle faces based on a user-selected point in a 3D space. A user can select the point on a display screen, for example; the corresponding vertex in the mesh is the point at which a ray cast from the selected point intersects the mesh. This process can be repeated to provide new vertices in the mesh model which more accurately represent an object in the 3D space and therefore allow a more accurate measurement of the size or area of the object. For example, the user might select four points to identify a rectangular object.
Type: Application
Filed: August 1, 2022
Publication date: September 14, 2023
Applicant: STREEM, LLC
Inventor: Huapeng SU
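The subdivision step is small enough to show directly: insert the ray-hit point as a new vertex and fan the original triangle into three. Index-based vertex/face lists are an assumption about the mesh representation.

```python
def split_triangle(vertices, faces, face_index, new_point):
    """Split one triangle face into three by inserting a new vertex.

    The new vertex is the point where a ray cast from the user-selected
    screen point intersects the mesh. The original face (a, b, c) becomes
    (a, b, p), (b, c, p), and (c, a, p), where p is the new vertex index.
    """
    a, b, c = faces[face_index]
    vertices.append(new_point)
    p = len(vertices) - 1
    faces[face_index] = (a, b, p)          # reuse the old face slot
    faces.extend([(b, c, p), (c, a, p)])
    return p

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
tris = [(0, 1, 2)]
split_triangle(verts, tris, 0, (0.25, 0.25, 0.0))
print(tris)    # three faces now meet at the inserted vertex
```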
-
Publication number: 20230290069
Abstract: A mesh model of a 3D space is modified based on semantic segmentation data to more accurately represent boundaries of an object in the 3D space. In one aspect, semantic segmentation images define one or more boundaries of the object. The semantic segmentation images are projected to a 3D mesh representation of the 3D space, and the 3D mesh representation is updated based on the one or more boundaries in the projected semantic segmentation image. In another aspect, the 3D mesh representation is updated based on one or more boundaries defined by the semantic segmentation images as applied to a point cloud of the 3D space.
Type: Application
Filed: August 1, 2022
Publication date: September 14, 2023
Applicant: STREEM, LLC
Inventor: Huapeng Su
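Projecting segmentation labels onto the mesh might, at its simplest, look up each vertex's pixel in the segmentation image, as sketched here; the `project` callback (standing in for camera intrinsics/extrinsics) and the toy orthographic demo are assumptions.

```python
def relabel_vertices(vertices, seg_image, project):
    """Carry 2D semantic-segmentation labels onto mesh vertices.

    `project(vertex)` maps a 3D vertex into (row, col) pixel coordinates of
    the segmentation image; vertices projecting outside the image keep no
    label (None).
    """
    h, w = len(seg_image), len(seg_image[0])
    labels = []
    for v in vertices:
        r, c = project(v)
        labels.append(seg_image[r][c] if 0 <= r < h and 0 <= c < w else None)
    return labels

# Toy example: orthographic "camera" looking down the z-axis onto a 2x2 mask.
mask = [["wall", "wall"], ["floor", "floor"]]
verts = [(0.0, 0.0, 1.0), (1.0, 1.0, 1.0)]
print(relabel_vertices(verts, mask, lambda v: (int(v[1]), int(v[0]))))
```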
-
Publication number: 20230290062
Abstract: Artificial neural networks (ANN) may be trained to output estimated floor plans from 3D spaces that would be challenging or impossible for existing techniques to estimate. In embodiments, an ANN may be trained using a supervised approach where top-down views of 3D meshes or point clouds are provided to the ANN as input, with ground truth floor plans provided as output for comparison. A suitably large training set may be used to fully train the ANN on challenging scenarios such as open loop scans and/or unusual geometries. The trained ANN may then be used to accurately estimate floor plans for such 3D spaces. Other embodiments are described.
Type: Application
Filed: March 10, 2023
Publication date: September 14, 2023
Applicant: STREEM, LLC
Inventor: Huapeng Su
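A supervised setup of the kind described, top-down rasters in and ground-truth floor plans out, could be skeletonized as below (PyTorch assumed); the architecture, tensor shapes, and random stand-in data are all invented for illustration.

```python
import torch
from torch import nn

# Minimal sketch: a top-down raster of the 3D mesh or point cloud in, a
# per-pixel floor-plan occupancy logit out.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

top_down = torch.rand(8, 1, 64, 64)        # stand-in top-down rasters
ground_truth = (torch.rand(8, 1, 64, 64) > 0.5).float()  # stand-in plans

for step in range(10):     # real training would iterate over a large dataset
    opt.zero_grad()
    loss = loss_fn(model(top_down), ground_truth)
    loss.backward()
    opt.step()
```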
-
Publication number: 20230290090
Abstract: Embodiments herein may relate to generating, based on a three-dimensional (3D) graphical representation of a 3D space, a two-dimensional (2D) image that includes respective indications of respective locations of one or more objects in the 3D space. The 2D image may then be displayed to a user that provides user input related to selection of an object of the one or more objects. The graphical representation of the object in the 2D image may then be altered based on the user input. Other embodiments may be described and/or claimed.
Type: Application
Filed: June 27, 2022
Publication date: September 14, 2023
Applicant: STREEM, LLC
Inventor: Huapeng SU
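A minimal sketch of the flow: project object positions into a top-down 2D image, then alter a marker when the user selects it. The y-up convention, marker dict, and function names are assumptions.

```python
def top_down_image(objects, scale=10):
    """Place 2D markers for 3D objects by dropping the vertical axis.

    `objects` maps an id to an (x, y, z) position in the 3D space; y is
    assumed to be 'up', so the top-down pixel is (x, z) scaled.
    """
    return {oid: {"pixel": (int(p[0] * scale), int(p[2] * scale)),
                  "selected": False}
            for oid, p in objects.items()}

def toggle_selection(markers, oid):
    """Alter the object's graphical representation based on user input."""
    markers[oid]["selected"] = not markers[oid]["selected"]
    return markers

markers = top_down_image({"sofa": (1.2, 0.0, 2.5), "lamp": (0.4, 0.0, 0.9)})
toggle_selection(markers, "sofa")
print(markers)
```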
-
Publication number: 20230290070
Abstract: Embodiments of devices and techniques for obtaining a three-dimensional (3D) representation of an area are disclosed. In one embodiment, a two-dimensional (2D) frame comprising an array of pixels of the area is obtained. Also, a depth frame of the area is obtained. The depth frame includes an array of depth estimation values, each of which corresponds to one or more corresponding pixels in the array of pixels. Furthermore, an array of confidence scores is generated. Each confidence score corresponds to one or more depth estimation values and indicates a confidence level that those depth estimation values are accurate.
Type: Application
Filed: October 5, 2022
Publication date: September 14, 2023
Applicant: STREEM, LLC
Inventor: Nikilesh URELLA
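The abstract requires only that each confidence score align with its depth estimates; the scoring method itself is left open. The sketch below uses local depth variance as a stand-in heuristic.

```python
import numpy as np

def depth_confidence(depth, neighborhood=3, max_std=0.25):
    """Assign each depth estimate a confidence score in [0, 1].

    Stand-in heuristic: depth values that agree with their local
    neighborhood get high confidence; noisy regions get low confidence.
    """
    pad = neighborhood // 2
    padded = np.pad(depth, pad, mode="edge")
    conf = np.empty_like(depth, dtype=float)
    for r in range(depth.shape[0]):
        for c in range(depth.shape[1]):
            window = padded[r:r + neighborhood, c:c + neighborhood]
            conf[r, c] = max(0.0, 1.0 - window.std() / max_std)
    return conf

depth = np.array([[1.0, 1.0, 3.0],
                  [1.0, 1.1, 1.0],
                  [1.0, 1.0, 1.0]])
print(depth_confidence(depth).round(2))   # low confidence near the outlier
```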
-
Publication number: 20230290036
Abstract: Real-time local live texturing of a 3D mesh includes adding keyframes that partially overlap in a sequence of frames to a queue of keyframes. When changes to a property of the 3D mesh created from the sequence of frames meet a predetermined threshold, the face vertices are projected into the RGB images of the keyframes to test visibility, and the keyframes from which a face is visible are added to that face's visible keyframe list. The most recently added keyframe from the queue of keyframes is assigned to each face in the 3D mesh, and the 3D mesh is divided into mesh segments based on the assigned keyframes. The keyframe assigned to each of the mesh segments is used to compute texture coordinates for vertices in the respective mesh segment, and an image in the keyframe is assigned as a texture for the respective mesh segment. Colors from the visible keyframe list associated with each of the vertices are averaged into a single RGB value, and the RGB value is assigned to the vertex.
Type: Application
Filed: August 18, 2022
Publication date: September 14, 2023
Applicant: STREEM, LLC
Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse
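The final color-averaging step might look like this sketch; `visible_keyframes` and `sample_rgb` are assumed helpers standing in for the visibility test and the per-keyframe image lookup.

```python
def average_vertex_colors(vertices, visible_keyframes, sample_rgb):
    """Blend per-vertex colors across every keyframe that sees the vertex.

    `visible_keyframes[v]` lists keyframes in which vertex index `v` passed
    the visibility test; `sample_rgb(kf, v)` reads the RGB value the vertex
    projects to in that keyframe's image.
    """
    colors = []
    for v in range(len(vertices)):
        samples = [sample_rgb(kf, v) for kf in visible_keyframes[v]]
        if samples:
            n = len(samples)
            colors.append(tuple(sum(ch) / n for ch in zip(*samples)))
        else:
            colors.append((0.0, 0.0, 0.0))   # vertex seen by no keyframe
    return colors

vis = {0: ["kf_a", "kf_b"], 1: []}
rgb = {("kf_a", 0): (1.0, 0.0, 0.0), ("kf_b", 0): (0.0, 0.0, 1.0)}
print(average_vertex_colors([None, None], vis, lambda k, v: rgb[(k, v)]))
```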
-
Patent number: 11715302
Abstract: Methods for automatically tagging one or more images and/or video clips using an audio stream are disclosed. The audio stream may be processed using an automatic speech recognition algorithm to extract possible keywords. The image(s) and/or video clip(s) may then be tagged with the possible keywords. In some embodiments, the image(s) and/or video clip(s) may be tagged automatically. In other embodiments, a user may be presented with a list of possible keywords extracted from the audio stream, from which the user may then select to manually tag the image(s) and/or video clip(s).
Type: Grant
Filed: August 21, 2019
Date of Patent: August 1, 2023
Assignee: STREEM, LLC
Inventors: Ryan R. Fink, Sean M. Adkinson
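Downstream of speech recognition, tagging might reduce to keyword extraction plus either automatic application or a user-facing candidate list, as in this sketch; the stopword filter is a toy stand-in for real keyword extraction.

```python
def keywords_from_transcript(transcript, stopwords=frozenset(
        {"the", "a", "an", "is", "to", "and", "of", "it", "this"})):
    """Extract candidate tags from an ASR transcript of the audio stream."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    seen, keywords = set(), []
    for w in words:
        if w and w not in stopwords and w not in seen:
            seen.add(w)
            keywords.append(w)
    return keywords

def tag_media(media, transcript, auto=True):
    """Tag an image/clip record automatically, or return candidates so the
    user can pick tags manually (the other mode the abstract describes)."""
    candidates = keywords_from_transcript(transcript)
    if auto:
        media.setdefault("tags", []).extend(candidates)
        return media
    return candidates

clip = {"name": "clip_001"}
print(tag_media(clip, "The water heater is leaking near the valve."))
```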
-
Patent number: 11640694
Abstract: Embodiments include systems and methods for creation of a 3D mesh from a video stream or a sequence of frames. A sparse point cloud is first created from the video stream, which is then densified per frame by comparison with spatially proximate frames. A 3D mesh is then created from the densified depth maps, and the mesh is textured by projecting the images from the video stream or sequence of frames onto the mesh. Metric scale of the depth maps may be estimated with a machine learning depth estimation network where direct measurements cannot be taken or calculated.
Type: Grant
Filed: March 22, 2021
Date of Patent: May 2, 2023
Assignee: STREEM, LLC
Inventors: Sean M. Adkinson, Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink
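One common way to recover metric scale, consistent with the abstract though not confirmed by it, is to align the up-to-scale depths against a learned metric depth prediction via a robust ratio:

```python
import numpy as np

def metric_scale(sfm_depth, learned_depth):
    """Estimate the metric scale of an up-to-scale depth map.

    Structure-from-motion depth is only defined up to a global scale; a
    machine-learning depth network predicts rough but metric depth. The
    median ratio between the two is a robust scale estimate (an assumed
    recovery strategy, for illustration only).
    """
    valid = (sfm_depth > 0) & (learned_depth > 0)
    return float(np.median(learned_depth[valid] / sfm_depth[valid]))

sfm = np.array([0.5, 1.0, 2.0, 0.0])        # arbitrary-unit depths
learned = np.array([1.2, 2.4, 4.9, 3.0])    # network's metric estimate, m
print(metric_scale(sfm, learned))           # ~2.4: multiply sfm by this
```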
-
Patent number: 11600050
Abstract: Embodiments include systems and methods for determining a 6D pose estimate associated with an image of a physical 3D object captured in a video stream. An initial 6D pose estimate is inferred and then iteratively refined. The video stream may be frozen to allow the user to tap or touch a display to indicate the locations of user-input keypoints. The resulting 6D pose estimate is used to assist in replacing or superimposing the physical 3D object with digital or virtual content in an augmented reality (AR) frame.
Type: Grant
Filed: April 2, 2021
Date of Patent: March 7, 2023
Assignee: STREEM, LLC
Inventors: Flora Ponjou Tasse, Pavan K. Kamaraju, Ghislain Fouodji Tasse, Sean M. Adkinson
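The refinement loop might be structured as below; for brevity the pose is reduced to translation only and the update is a damped mean residual, whereas the actual method infers and refines a full 6D (rotation plus translation) pose.

```python
def refine_pose(translation, model_points, observed_points, iters=20, lr=0.5):
    """Iteratively refine a pose estimate against user-indicated keypoints.

    Each iteration nudges the translation toward the mean residual between
    transformed model keypoints and observed keypoints.
    """
    tx, ty, tz = translation
    for _ in range(iters):
        residuals = [(ox - (mx + tx), oy - (my + ty), oz - (mz + tz))
                     for (mx, my, mz), (ox, oy, oz)
                     in zip(model_points, observed_points)]
        n = len(residuals)
        dx, dy, dz = (sum(r[i] for r in residuals) / n for i in range(3))
        tx, ty, tz = tx + lr * dx, ty + lr * dy, tz + lr * dz
    return tx, ty, tz

model = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
observed = [(2.0, 3.0, 0.5), (3.0, 3.0, 0.5), (2.0, 4.0, 0.5)]
print(refine_pose((0.0, 0.0, 0.0), model, observed))  # -> about (2, 3, 0.5)
```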
-
Patent number: 11385856
Abstract: Multiple devices may use cameras, sensors, and other inputs to record characteristics of an environment and generate an Augmented Reality (AR) model of the environment. Although devices may implement AR models in different incompatible systems, such as positioning systems with different scales, unit sizes, etc., one or more devices may coordinate to determine one or more transforms to be applied by one or more devices to establish a common framework for referencing AR objects in the models for the environment. With the common framework, and a language to facilitate establishing the common framework, devices may share rich content between the devices. This, for example, allows a tablet device presenting an AR space to “grab” an object out of a display present in the environment, and place the grabbed object into the AR space while maintaining proper relative dimensions and other characteristics of the object after the transfer from one space to the other.
Type: Grant
Filed: October 23, 2020
Date of Patent: July 12, 2022
Assignee: STREEM, LLC
Inventor: Zachary Babb
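Recovering a transform between two devices' AR coordinate systems from jointly observed anchor points can be sketched as a scale-plus-translation fit; rotation is omitted for brevity (a full solution would fit a similarity transform, e.g. Umeyama alignment), and the anchor data is invented.

```python
def fit_transform(points_a, points_b):
    """Recover the uniform scale and translation mapping device A's AR
    coordinates onto device B's, from anchor points both devices observe."""
    n = len(points_a)
    ca = [sum(p[i] for p in points_a) / n for i in range(3)]   # centroids
    cb = [sum(p[i] for p in points_b) / n for i in range(3)]

    def spread(pts, c):
        """Root of the summed squared deviation from the centroid."""
        return sum(sum((x - c[i]) ** 2 for i, x in enumerate(p))
                   for p in pts) ** 0.5

    scale = spread(points_b, cb) / spread(points_a, ca)
    translation = [cb[i] - scale * ca[i] for i in range(3)]
    return scale, translation

def apply(p, scale, t):
    """Map a point from system A into system B."""
    return [scale * x + t[i] for i, x in enumerate(p)]

# Device B uses units half the size of A's, with a shifted origin.
a = [[0, 0, 0], [1, 0, 0], [0, 2, 0]]
b = [[1, 1, 0], [3, 1, 0], [1, 5, 0]]
s, t = fit_transform(a, b)
print(s, t, apply([1, 1, 0], s, t))   # scale 2, shift (1, 1, 0)
```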