Patents Assigned to STREEM, LLC
-
Patent number: 12154289
Abstract: Embodiments include systems and methods for remotely measuring distances in an environment captured by a device. A device captures a video stream of its environment along with AR data that may include camera pose information and/or depth information, and transmits the video stream and AR data to a remote device. The environment may be analyzed to identify objects such as lines, edges, curves, shapes, anchors/corners, and products of interest, e.g., appliances. Some or all of the objects may be identified to the remote device to facilitate selecting an object or region of interest. Selected points for an object may be located more precisely by snapping them to the object's corresponding anchor points. Using anchor points facilitates more precise identification and/or measurement of an aspect of an object, such as one of its dimensions or the volume of a space, as well as other actions such as replacement of an object.
Type: Grant
Filed: November 5, 2021
Date of Patent: November 26, 2024
Assignee: STREEM, LLC
Inventors: Pavan K. Kamaraju, Nicholas Degroot
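The snapping step can be illustrated with a minimal sketch: given a user-selected 3D point and a set of detected anchor points, choose the nearest anchor within a tolerance. The function name and threshold below are illustrative assumptions, not code from the patent:

```python
import math

def snap_to_anchor(selected, anchors, max_dist=0.05):
    """Return the anchor point closest to the user's selected 3D point,
    or the original point if no anchor lies within max_dist (meters)."""
    best, best_d = None, max_dist
    for a in anchors:
        d = math.dist(selected, a)
        if d < best_d:
            best, best_d = a, d
    return best if best is not None else selected

# Two candidate corners detected in the scene; the tap lands near the first.
corners = [(0.0, 0.0, 0.0), (1.2, 0.0, 0.0)]
print(snap_to_anchor((0.01, 0.02, 0.0), corners))  # snaps to (0.0, 0.0, 0.0)
```

A real implementation would draw the anchors from the scene analysis described above rather than from a hard-coded list.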
-
Patent number: 12137129
Abstract: Embodiments include systems and methods for offloading media service operations to one or more graphical processing units (GPUs). In some embodiments, a computer system includes a first computer device to provide a media service that involves implementing media service operations and to transmit a first media service request for a first media service operation of the media service operations. In addition, the computer system includes a second computer device that includes one or more GPUs. The second computer device is to implement the first media service operation with the one or more GPUs in response to the first computer device transmitting the first media service request for the first media service operation.
Type: Grant
Filed: August 11, 2022
Date of Patent: November 5, 2024
Assignee: STREEM, LLC
Inventors: Pavan K. Kamaraju, Steven Funasaki, Renganathan Veerasubramanian
-
Patent number: 12131426
Abstract: A mesh model of a 3D space is provided with improved accuracy based on user inputs. In one aspect, a triangle face of the mesh is divided into three smaller triangle faces based on a user-selected point in the 3D space. A user can select the point on a display screen, for example, where the corresponding vertex in the mesh is the point at which the mesh is intersected by a ray cast from the selected point. This process can be repeated to provide new vertices in the mesh model which more accurately represent an object in the 3D space and therefore allow a more accurate measurement of the size or area of the object. For example, the user might select four points to identify a rectangular object.
Type: Grant
Filed: August 1, 2022
Date of Patent: October 29, 2024
Assignee: STREEM, LLC
Inventor: Huapeng Su
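The core subdivision step, splitting one triangle face into three at a new vertex, can be sketched as follows (a hypothetical helper, not code from the patent):

```python
def split_triangle(tri, p):
    """Subdivide triangle (a, b, c) at interior point p into three faces,
    each sharing p as a new vertex -- the refinement step described above."""
    a, b, c = tri
    return [(a, b, p), (b, c, p), (c, a, p)]

# Split a triangle at a point chosen inside it; every new face includes p.
faces = split_triangle(((0, 0), (4, 0), (0, 4)), (1, 1))
print(len(faces))  # 3
```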
-
Patent number: 12131297
Abstract: Methods and systems for automatic detection and recognition of visual tags on equipment are disclosed. The make and model of an object such as an appliance or consumer device is recognized from an image or video using object detection. This make and model information may then be used to direct a user to locate an equipment information tag that includes model and serial number information. Object recognition and optical character recognition can then be employed to extract the model and serial number from the tag, along with any other relevant information. The extracted information may then be used to locate service and/or operation information.
Type: Grant
Filed: December 11, 2019
Date of Patent: October 29, 2024
Assignee: STREEM, LLC
Inventors: Ryan R. Fink, Sean M. Adkinson
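After OCR has produced text from the tag, extracting the model and serial numbers is essentially pattern matching. The regular expressions below are illustrative assumptions; real tag formats vary by manufacturer, and the patent does not specify particular patterns:

```python
import re

# Hypothetical patterns for a tag whose OCR text labels its fields.
MODEL_RE = re.compile(r"MODEL(?:\s*(?:NO|#|:))?\s*([A-Z0-9-]+)", re.I)
SERIAL_RE = re.compile(r"(?:SERIAL|S/N)(?:\s*(?:NO|#|:))?\s*([A-Z0-9-]+)", re.I)

def parse_tag(ocr_text):
    """Extract (model, serial) from OCR'd tag text; None where not found."""
    model = MODEL_RE.search(ocr_text)
    serial = SERIAL_RE.search(ocr_text)
    return (model.group(1) if model else None,
            serial.group(1) if serial else None)

print(parse_tag("MODEL: WRF535SWHZ  SERIAL NO 1234ABC"))
# ('WRF535SWHZ', '1234ABC')
```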
-
Patent number: 12131427
Abstract: A mesh model of a 3D space is modified based on semantic segmentation data to more accurately represent boundaries of an object in the 3D space. In one aspect, semantic segmentation images define one or more boundaries of the object. The semantic segmentation images are projected to a 3D mesh representation of the 3D space, and the 3D mesh representation is updated based on the one or more boundaries in the projected semantic segmentation images. In another aspect, the 3D mesh representation is updated based on one or more boundaries defined by the semantic segmentation images as applied to a point cloud of the 3D space.
Type: Grant
Filed: August 1, 2022
Date of Patent: October 29, 2024
Assignee: STREEM, LLC
Inventor: Huapeng Su
-
Publication number: 20240331292
Abstract: Embodiments include systems, processes, and/or techniques for creating a semantically segmented 3D mesh of a physical environment, where the semantically segmented 3D mesh may be at least partially created by a first user, and where a second user, for example a service professional, may view and modify one of the segments of the semantically segmented 3D mesh for subsequent viewing by the first user. Other embodiments may be described and/or claimed.
Type: Application
Filed: March 30, 2023
Publication date: October 3, 2024
Applicant: STREEM, LLC
Inventors: Ghislain Tasse, Nikilesh Urella, Pavan K. Kamaraju
-
Patent number: 12093310
Abstract: The present invention relates to methods for searching for two-dimensional or three-dimensional objects. More particularly, the present invention relates to searching for two-dimensional or three-dimensional objects in a collection by using a multi-modal query of image and/or tag data. Aspects and/or embodiments seek to provide a method of searching for digital objects using any combination of images, three-dimensional shapes, and text by embedding the vector representations for these multiple modes in the same space. Aspects and/or embodiments can be easily extended to any other type of modality, making the approach more general.
Type: Grant
Filed: March 7, 2018
Date of Patent: September 17, 2024
Assignee: STREEM, LLC
Inventors: Flora Ponjou Tasse, Ghislain Fouodji Tasse
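Once every modality is embedded in the same vector space, search reduces to nearest-neighbor ranking. The sketch below uses a toy normalizing "encoder" and cosine similarity; in the approach described above, trained per-modality encoders (for images, shapes, and text) would produce the embeddings:

```python
import numpy as np

def embed(vec):
    """Toy stand-in for a learned encoder: normalize onto the unit sphere."""
    v = np.asarray(vec, dtype=float)
    return v / np.linalg.norm(v)

# Hypothetical collection of objects already embedded in the shared space.
collection = {
    "chair": embed([1.0, 0.1, 0.0]),
    "lamp": embed([0.0, 1.0, 0.2]),
}

def search(query_vec, k=1):
    """Rank stored objects by cosine similarity to a query embedding,
    regardless of which modality produced the query."""
    q = embed(query_vec)
    scored = sorted(collection.items(), key=lambda kv: -float(q @ kv[1]))
    return [name for name, _ in scored[:k]]

print(search([0.9, 0.2, 0.0]))  # ['chair']
```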
-
Patent number: 12073512
Abstract: A sequence of frames including color and depth data is processed to identify key frames while minimizing redundancy. A sparse 3D point cloud is obtained for each frame and represented by a set of voxels. Each voxel has associated data indicating, e.g., a depth and a camera viewing angle. When a new frame is processed, a new sparse 3D point cloud is obtained. For points which are not encompassed by the existing voxels, new voxels are created. For points which are encompassed by the existing voxels, a comparison determines whether the depth data of the new frame is more accurate than the existing depth data. A frame is selected as a key frame based on factors such as a number of new voxels which are created, a number of existing voxels for which the depth data is updated, and accuracy scores.
Type: Grant
Filed: September 21, 2022
Date of Patent: August 27, 2024
Assignee: STREEM, LLC
Inventor: Nikilesh Urella
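One of the selection factors above, counting how many new voxels a frame's sparse point cloud covers, can be sketched as follows. The voxel size and the new-voxel threshold are illustrative assumptions:

```python
def voxel_key(point, size=0.05):
    """Map a 3D point to its containing voxel on a 5 cm grid (by default)."""
    return tuple(int(c // size) for c in point)

def is_key_frame(points, seen_voxels, new_voxel_threshold=0.3):
    """Select a frame as a key frame when enough of its sparse point cloud
    falls in voxels not covered by earlier frames; a full implementation
    would also weigh depth-accuracy updates to existing voxels."""
    new = {voxel_key(p) for p in points} - seen_voxels
    is_key = len(new) >= new_voxel_threshold * max(len(points), 1)
    seen_voxels |= new  # register the newly covered voxels
    return is_key

seen = set()
frame = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0), (0.2, 0.0, 1.0)]
print(is_key_frame(frame, seen))  # True: every voxel is new
print(is_key_frame(frame, seen))  # False: same voxels already covered
```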
-
Patent number: 12067683
Abstract: Methods for placement of location-persistent 3D objects or annotations in an augmented reality scene are disclosed. By capturing location data along with device spatial orientation and the placement of 3D objects or annotations, the augmented reality scene can be recreated and manipulated. Placed 3D objects or annotations can reappear in a subsequent capture by the same or a different device when brought back to the location of the initial capture and placement of objects. Still further, the placed 3D objects or annotations may be supplemented with additional objects or annotations, or the placed objects or annotations may be removed or modified.
Type: Grant
Filed: September 13, 2019
Date of Patent: August 20, 2024
Assignee: STREEM, LLC
Inventors: Ryan R. Fink, Sean M. Adkinson
-
Publication number: 20240257461
Abstract: A mesh model of a 3D space is provided with improved accuracy by refining the locations of edges of objects in the space. The mesh model includes vertices which define surfaces of triangles. Triangles are identified which have two vertices in one plane and a third, outlier vertex in an adjacent plane. A line is fitted to the outlier vertices to define an edge of an object, and the outlier vertices are moved to the line, referred to as a mesh-based line. Texture data from images of the space can be used to further refine the edge. In one approach, gradients in grayscale pixels which correspond to the vertices of the mesh-based line are used to define a grayscale-based line. The two line definitions can be combined or otherwise used to provide a final definition of the edge. The object can be measured based on the length and position of the edge.
Type: Application
Filed: February 1, 2023
Publication date: August 1, 2024
Applicant: STREEM, LLC
Inventor: Nikilesh Urella
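The mesh-based line step, fitting a line to the outlier vertices and moving them onto it, can be sketched with a least-squares fit. This is one standard way to fit the line (SVD of the centered points); the publication does not specify the fitting method:

```python
import numpy as np

def fit_edge_line(outlier_vertices):
    """Least-squares line through the outlier vertices (here projected to
    2D for simplicity), returned as a point on the line plus a unit
    direction -- a sketch of the mesh-based line described above."""
    pts = np.asarray(outlier_vertices, dtype=float)
    centroid = pts.mean(axis=0)
    # Principal direction of the scattered vertices via SVD.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def snap_to_line(p, origin, direction):
    """Move a vertex to its orthogonal projection onto the fitted line."""
    p = np.asarray(p, dtype=float)
    return origin + np.dot(p - origin, direction) * direction

# Vertices scattered around the edge y = 0; an off-edge vertex is snapped.
origin, direction = fit_edge_line([(0, 0.02), (1, -0.01), (2, 0.01), (3, 0.0)])
print(snap_to_line((1.5, 0.5), origin, direction))
```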
-
Patent number: 12039734
Abstract: Methods for improving object recognition using depth data are disclosed. An image is captured of a 3-D scene along with depth data, such as in the form of a point cloud. The depth data is correlated with the image of the captured scene, such as by determining the frame of reference of each of the image and the depth data, thereby allowing the depth data to be mapped to the correct corresponding pixels of the image. Object recognition on the image is then improved by employing the correlated depth data. The depth data may be captured contemporaneously with the image of the 3-D scene, such as by using photogrammetry, or at a different time.
Type: Grant
Filed: August 21, 2019
Date of Patent: July 16, 2024
Assignee: STREEM, LLC
Inventors: Ryan R. Fink, Sean M. Adkinson
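Mapping a depth point to its corresponding image pixel is commonly done with a pinhole camera model once both are in the same frame of reference. The intrinsics below (focal lengths fx, fy and principal point cx, cy) are illustrative values, not from any particular device:

```python
def project_point(point, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Project a camera-frame 3D point (x, y, z in meters) to (u, v) pixels
    using a pinhole model, so depth can be attached to that pixel."""
    x, y, z = point
    if z <= 0:
        return None  # behind the camera; no corresponding pixel
    return (fx * x / z + cx, fy * y / z + cy)

print(project_point((0.0, 0.0, 2.0)))  # (320.0, 240.0): hits the image center
print(project_point((0.5, 0.0, 2.0)))  # (470.0, 240.0)
```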
-
Publication number: 20240144595
Abstract: A neural network architecture is provided for reconstructing, in real-time, a 3D scene with additional attributes such as color and segmentation, from a stream of camera-tracked RGB images. The neural network can include a number of modules which process image data in sequence. In an example implementation, the processing can include capturing frames of color data, selecting key frames, processing a set of key frames to obtain partial 3D scene data, including a mesh model and associated voxels, fusing the partial 3D scene data into existing scene data, and extracting a 3D colored and segmented mesh from the 3D scene data.
Type: Application
Filed: October 26, 2022
Publication date: May 2, 2024
Applicant: STREEM, LLC
Inventor: Flora Ponjou Tasse
-
Publication number: 20240096019
Abstract: A sequence of frames including color and depth data is processed to identify key frames while minimizing redundancy. A sparse 3D point cloud is obtained for each frame and represented by a set of voxels. Each voxel has associated data indicating, e.g., a depth and a camera viewing angle. When a new frame is processed, a new sparse 3D point cloud is obtained. For points which are not encompassed by the existing voxels, new voxels are created. For points which are encompassed by the existing voxels, a comparison determines whether the depth data of the new frame is more accurate than the existing depth data. A frame is selected as a key frame based on factors such as a number of new voxels which are created, a number of existing voxels for which the depth data is updated, and accuracy scores.
Type: Application
Filed: September 21, 2022
Publication date: March 21, 2024
Applicant: STREEM, LLC
Inventor: Nikilesh Urella
-
Publication number: 20240070985
Abstract: Embodiments herein may relate to a technique to be performed by surface partition logic. The technique may include identifying a first mesh portion that is related to a first plane of a three-dimensional (3D) space, and a second mesh portion that is related to a second plane of the 3D space. The technique may include identifying, based on a linear representation of a border between the first mesh portion and the second mesh portion, an element of the first mesh portion that at least partially overlaps the second mesh portion. The technique may further include altering the element of the first mesh portion to reduce the amount that the element overlaps the second mesh portion. Other embodiments may be described and/or claimed.
Type: Application
Filed: August 29, 2022
Publication date: February 29, 2024
Applicant: STREEM, LLC
Inventor: Nikilesh Urella
-
Publication number: 20240056491
Abstract: Embodiments include systems and methods for offloading media service operations to one or more graphical processing units (GPUs). In some embodiments, a computer system includes a first computer device to provide a media service that involves implementing media service operations and to transmit a first media service request for a first media service operation of the media service operations. In addition, the computer system includes a second computer device that includes one or more GPUs. The second computer device is to implement the first media service operation with the one or more GPUs in response to the first computer device transmitting the first media service request for the first media service operation.
Type: Application
Filed: August 11, 2022
Publication date: February 15, 2024
Applicant: STREEM, LLC
Inventors: Pavan K. Kamaraju, Steven Funasaki, Renganathan Veerasubramanian
-
Patent number: 11842444
Abstract: Embodiments include systems and methods for visualizing the position of a capturing device within a 3D mesh, generated from a video stream from the capturing device. A capturing device may provide a video stream along with point cloud data and camera pose data. This video stream, point cloud data, and camera pose data are then used to progressively generate a 3D mesh. The camera pose data and point cloud data can further be used, in conjunction with a SLAM algorithm, to indicate the position and orientation of the capturing device within the generated 3D mesh.
Type: Grant
Filed: June 2, 2021
Date of Patent: December 12, 2023
Assignee: STREEM, LLC
Inventors: Sean M. Adkinson, Teressa Chizeck, Ryan R. Fink
-
Patent number: 11830213
Abstract: Embodiments include systems and methods for remotely measuring distances in an environment captured by a device. A device captures a video stream of its environment along with AR data that may include camera pose information and/or depth information, and transmits the video stream and AR data to a remote device. The remote device receives a selection of a first point and a second point within the video stream and, using the AR data, calculates a distance between the first and second points. The first and second points may be at different locations not simultaneously in view of the device. Other embodiments may capture additional points to compute areas and/or volumes.
Type: Grant
Filed: November 5, 2020
Date of Patent: November 28, 2023
Assignee: STREEM, LLC
Inventors: Sean M. Adkinson, Ryan R. Fink, Brian Gram, Nicholas Degroot, Alexander Fallenstedt
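Once the two selected points have been unprojected into a common world coordinate frame using the camera pose and depth data, the measurement itself is a straight-line distance, which is why the points need not be simultaneously in view. A minimal sketch (the unprojection step is assumed to have already happened):

```python
import math

def measure_distance(p1, p2):
    """Euclidean distance between two world-space points, in meters."""
    return math.dist(p1, p2)

# Two points unprojected from different frames of the same AR session.
print(measure_distance((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))  # 5.0
```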
-
Patent number: 11831965
Abstract: Embodiments include systems and methods for filtering augmented reality ("AR") data streams, and in particular, video and/or audio streams, to change or remove undesirable content, such as personally identifiable information, embarrassing content, financial data, etc. For example, a handheld device may capture video using a built-in camera and, as needed, superimpose AR objects on the video to change its content. A cloud service may host an AR session between the handheld device and a remote machine. The cloud service and remote machine may also apply their own filters to the AR data to remove and/or replace content to comply with their interests, regulations, policies, etc. Filtering may be cumulative. For example, the handheld device may remove financial data before it leaves the device, and then the cloud service may replace commercial logos before sharing AR data between session participants.
Type: Grant
Filed: July 6, 2022
Date of Patent: November 28, 2023
Assignee: STREEM, LLC
Inventor: Pavan K. Kamaraju
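The cumulative filtering idea, each stage scrubbing what matters to it before passing the stream on, can be sketched on text content. The patterns and the "AcmeCo" logo name are hypothetical; a real system would filter pixels and audio, not captions:

```python
import re

def device_filter(text):
    """Device-side stage: scrub financial data (here, card-like numbers)
    before the stream ever leaves the handheld device."""
    return re.sub(r"\b\d{4}(?:[ -]\d{4}){3}\b", "[REDACTED CARD]", text)

def cloud_filter(text):
    """Cloud-side stage: replace a commercial logo before sharing the
    stream between session participants."""
    return text.replace("AcmeCo", "[LOGO]")

# Filters compose: the cloud only ever sees the device-filtered stream.
frame_caption = "Paid with 1234-5678-9012-3456 at AcmeCo"
print(cloud_filter(device_filter(frame_caption)))
# Paid with [REDACTED CARD] at [LOGO]
```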
-
Patent number: 11830142
Abstract: Embodiments include systems and methods for generating a 3D mesh from a video stream or other image captured contemporaneously with AR data. The AR data is used to create a depth map, which is then fused with images from frames of the video to form a full 3D mesh. The images and depth map can also be used with an object detection algorithm to recognize 3D objects within the 3D mesh. Methods for fingerprinting the video with AR data captured contemporaneously with each frame are disclosed.
Type: Grant
Filed: March 8, 2022
Date of Patent: November 28, 2023
Assignee: STREEM, LLC
Inventors: Flora Ponjou Tasse, Pavan Kumar Kamaraju, Ghislain Fouodji Tasse, Ryan R. Fink, Sean M. Adkinson
-
Patent number: 11823310
Abstract: Methods for replacing or obscuring objects detected in an image or video on the basis of image context are disclosed. Context of the image or video may be obtained via pattern recognition on audio associated with the image or video, by user-supplied context, and/or by context derived from image capture, such as the nature of an application used to capture the image. The image or video may be analyzed for object detection and recognition, and depending upon policy, the image or video context used to select objects related or unrelated to the context for replacement or obfuscation. The selected objects may then be replaced with generic objects rendered from 3D models, or blurred or otherwise obscured.
Type: Grant
Filed: September 6, 2019
Date of Patent: November 21, 2023
Assignee: STREEM, LLC
Inventors: Ryan R. Fink, Sean M. Adkinson