Patents Examined by Anh-Tuan V Nguyen
-
Patent number: 11978149
Abstract: A method, computer system, and a computer program product for projecting a 3D model defined by x, y, z coordinates onto the surface of a 2D image defined by u, v coordinates is provided. The present invention may include receiving a 3D model having a plurality of polygons, wherein certain edges are marked as seams. The present invention may include receiving input from a user, wherein the input comprises painting one or more parts of the 3D model in different colors, wherein the colors correspond with a weight of the area painted. The present invention may include unwrapping, by a processor, a 2D texture from the 3D model using a projection algorithm. The present invention may include generating a rectangular boundary around each island. The present invention may include scaling each island according to a gradient score.
Type: Grant
Filed: June 21, 2021
Date of Patent: May 7, 2024
Assignee: The Weather Company, LLC
Inventors: Cindy Han Lu, Angela Monique Lloyd, Thai Quoc Tran, Weiwei Liu
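The final step of this abstract, scaling each island's bounding rectangle by a per-island score, can be sketched as follows. This is an illustrative assumption, not the patent's implementation; the dictionary keys and the square-root weighting are hypothetical choices.

```python
# Illustrative sketch of scaling UV-island bounding rectangles by a
# per-island gradient score derived from painted weights.  All names
# and the sqrt weighting are assumptions for demonstration only.

def scale_islands(islands):
    """islands: list of dicts with a bounding 'rect' (w, h) and a
    'score' in (0, 1].  Higher-scored islands keep more texture area."""
    out = []
    for isl in islands:
        w, h = isl["rect"]
        s = isl["score"] ** 0.5   # scale side lengths by sqrt so area is proportional to score
        out.append({"rect": (w * s, h * s), "score": isl["score"]})
    return out

print(scale_islands([{"rect": (4.0, 2.0), "score": 0.25}]))
# → [{'rect': (2.0, 1.0), 'score': 0.25}]
```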
-
Patent number: 11972609
Abstract: Systems, apparatuses, interfaces, and methods for implementing the systems, apparatuses, and interfaces include capturing an image, displaying the image on a display device, scanning and identifying objects and/or attributes associated with the image and/or objects therein, generating a 3D AR environment within the display overlaid on the image, generating a ray pointer for improved interaction with the image and the generated 3D AR environment, where the environment includes virtual constructs corresponding to the image objects and/or attributes, and selecting, activating, animating, and/or manipulating the virtual constructs within the 3D AR environment.
Type: Grant
Filed: May 17, 2023
Date of Patent: April 30, 2024
Assignee: Quantum Interface LLC
Inventors: Jonathan Josephson, Robert W. Strozier
-
Patent number: 11941749
Abstract: A processor causes a storage medium to store three-dimensional data of a subject in a storage step. The processor selects a reference image in a first selection step. The processor selects a selected image that is a two-dimensional image used for generating the three-dimensional data on the basis of the reference image in a second selection step. The processor estimates a second camera coordinate regarding the reference image on the basis of a first camera coordinate regarding the selected image in an estimation step. The processor displays an image of the subject on a display in a display step. The image of the subject visualizes at least one of the second camera coordinate and a set of three-dimensional coordinates of one or more points of the subject calculated on the basis of the second camera coordinate.
Type: Grant
Filed: October 19, 2021
Date of Patent: March 26, 2024
Assignee: Evident Corporation
Inventor: Yohei Sakamoto
-
Patent number: 11941774
Abstract: The present disclosure is directed to automatically generating a 360 Virtual Photographic Representation ("spin") of an object using multiple images of the object. The system uses machine learning to automatically differentiate between images of the object taken from different angles. A user supplies multiple images and/or videos of an object and the system automatically analyzes and classifies the images into the proper order before incorporating the images into an interactive spin. The system automatically classifies the images using features identified in the images. The classifications are based on predetermined classifications associated with the object to facilitate proper ordering of the images in the resulting spin.
Type: Grant
Filed: September 27, 2022
Date of Patent: March 26, 2024
Assignee: Freddy Technologies LLC
Inventor: Sudheer Kumar Pamuru
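The ordering step this abstract describes, classifying each image into a predetermined angle class and sorting the images into spin order, can be sketched as below. The classifier itself is assumed to exist (stubbed here); the class labels and function names are hypothetical.

```python
# Hypothetical sketch of ordering object photos into a 360-degree "spin"
# by predicted viewing-angle class.  The classifier is an assumption,
# stubbed here as any callable returning one of the angle labels.

from typing import Callable, List

# Predetermined angle classes for the object, in spin order.
ANGLE_CLASSES = ["front", "front-right", "right", "rear-right",
                 "rear", "rear-left", "left", "front-left"]

def order_for_spin(images: List[str],
                   classify: Callable[[str], str]) -> List[str]:
    """Classify each image and sort the list into spin order."""
    rank = {label: i for i, label in enumerate(ANGLE_CLASSES)}
    return sorted(images, key=lambda img: rank[classify(img)])

# Usage with a stub classifier that reads the label from the filename.
photos = ["car_rear.jpg", "car_front.jpg", "car_right.jpg"]
stub = lambda name: name.split("_")[1].split(".")[0]
print(order_for_spin(photos, stub))
# → ['car_front.jpg', 'car_right.jpg', 'car_rear.jpg']
```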
-
Patent number: 11936840
Abstract: A method of generating a composite image includes capturing a video image of a physical scene with a camera, identifying a green-screen region within the video image, identifying a viewpoint and a position and/or orientation of the green-screen region relative to the viewpoint, and generating a modified video image rendered from the viewpoint onto a display surface in which the green-screen region is replaced with an image of a virtual object. The image of the virtual object is generated by projection rendering of a model of the virtual object based on the position and/or orientation of the green-screen region relative to the viewpoint such that the virtual object is constrained within the green-screen region.
Type: Grant
Filed: March 9, 2023
Date of Patent: March 19, 2024
Assignee: Tanzle, Inc.
Inventors: Nancy L. Clemens, Michael A. Vesely
-
Patent number: 11936839
Abstract: Disclosed are systems and methods for the out-of-order predictive streaming of elements from a three-dimensional ("3D") image file so that a recipient device is able to produce a first visualization of at least a first streamed element from a particular perspective, similar to the instant transfer of two-dimensional ("2D") images, while the additional elements and perspectives of the 3D image are streamed. The sending device prioritizes the 3D image elements based on a predicted viewing order, streams a particular element from a particular perspective with a priority that is greater than a priority associated with other elements and other perspectives, determines a next element to stream after the particular element based on the next element being positioned adjacent to the particular element and having a priority that is greater than adjacent elements, and streams the next element to the recipient device.
Type: Grant
Filed: August 7, 2023
Date of Patent: March 19, 2024
Assignee: Illuscio, Inc.
Inventor: Robert Monaghan
-
Patent number: 11924393
Abstract: Described are techniques for shared viewing of video among remote users, including methods and corresponding systems and apparatuses. A first computer device operated by a first user can receive an input video stream representing a three-dimensional (3D) space. From the input video stream, an output video stream corresponding to the 3D space as seen from a viewing direction specified by the first user is generated. The output video stream includes images that cover a narrower field of view compared to the input video stream. The first computer device can receive information indicating a viewing direction specified by a second user, who is a user of a second computer device. The first computer device can update the output video stream based on the information indicating the viewing direction specified by the second user. The updated output video stream can then be presented on a display of the first user.
Type: Grant
Filed: January 22, 2021
Date of Patent: March 5, 2024
Assignee: VALEO COMFORT AND DRIVING ASSISTANCE
Inventors: Nicodemus Estee, Jaime Almeida
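Extracting a narrower field of view from a wide input stream, given a shared viewing direction, can be sketched for the simple equirectangular case. This is a minimal illustration, not the patented method; the default field of view and frame width are assumptions.

```python
# Minimal sketch (not the patented method) of selecting the pixel
# columns of an equirectangular 360-degree frame that cover a narrower
# field of view centred on a shared viewing direction (yaw).

def crop_columns(frame_width: int, yaw_deg: float, fov_deg: float = 90.0):
    """Return (start, end) pixel-column ranges covering fov_deg degrees
    centred on yaw_deg.  A crop that wraps past the 360-degree seam is
    returned as two ranges."""
    px_per_deg = frame_width / 360.0
    start = (yaw_deg - fov_deg / 2) % 360.0
    end = (yaw_deg + fov_deg / 2) % 360.0
    s, e = int(start * px_per_deg), int(end * px_per_deg)
    if s <= e:
        return [(s, e)]
    return [(s, frame_width), (0, e)]  # crop wraps around the seam

print(crop_columns(3600, yaw_deg=180.0))  # → [(1350, 2250)]
print(crop_columns(3600, yaw_deg=0.0))    # → [(3150, 3600), (0, 450)]
```

When the second user's viewing direction arrives, the first device only needs to recompute the crop, not re-receive the input stream.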
-
Patent number: 11893744
Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to determine a two-dimensional (2D) profile of a portion of a three-dimensional (3D) point cloud. A 3D region of interest is determined that includes a width along a first axis, a height along a second axis, and a depth along a third axis. The 3D points within the 3D region of interest are represented as a set of 2D points based on coordinate values of the first and second axes. The 2D points are grouped into a plurality of 2D bins arranged along the first axis. For each 2D bin, a representative 2D position is determined based on the associated set of 2D points. Each of the representative 2D positions are connected to neighboring representative 2D positions to generate the 2D profile.
Type: Grant
Filed: May 10, 2021
Date of Patent: February 6, 2024
Inventors: Hongwei Zhu, Nathaniel Bogan, David J. Michael
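The binning step in this abstract is concrete enough to sketch: project the in-region points onto the first two axes, bin them along the first axis, and emit one representative position per non-empty bin. Using the per-bin mean as the representative position is an assumption; the patent leaves the choice open.

```python
# Sketch of the binning step described above: group 2D points into bins
# along the first axis and connect one representative position per bin.
# Taking the per-bin mean as the representative position is an
# illustrative assumption.

def profile_2d(points, n_bins, x_min, x_max):
    """points: iterable of (x, y) pairs already inside the region of
    interest.  Returns the 2D profile as an ordered polyline of
    representative positions (mean x, mean y per non-empty bin)."""
    width = (x_max - x_min) / n_bins
    bins = [[] for _ in range(n_bins)]
    for x, y in points:
        i = min(int((x - x_min) / width), n_bins - 1)  # clamp right edge
        bins[i].append((x, y))
    profile = []
    for b in bins:
        if b:  # empty bins contribute no representative position
            mx = sum(p[0] for p in b) / len(b)
            my = sum(p[1] for p in b) / len(b)
            profile.append((mx, my))
    return profile

pts = [(0.1, 1.0), (0.2, 1.2), (1.1, 2.0), (1.3, 2.4)]
print(profile_2d(pts, n_bins=2, x_min=0.0, x_max=2.0))
```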
-
Patent number: 11893690
Abstract: A computer-implemented method for 3D reconstruction including obtaining 2D images and, for each 2D image, camera parameters which define a perspective projection. The 2D images all represent a same real object. The real object is fixed. The method also includes obtaining, for each 2D image, a smooth map. The smooth map has pixel values, and each pixel value represents a measurement of contour presence. The method also includes determining a 3D modeled object that represents the real object. The determining iteratively optimizes energy. The energy rewards, for each smooth map, projections of silhouette vertices of the 3D modeled object having pixel values representing a high measurement of contour presence. This forms an improved solution for 3D reconstruction.
Type: Grant
Filed: September 21, 2022
Date of Patent: February 6, 2024
Assignee: DASSAULT SYSTEMES
Inventors: Serban Alexandru State, Eloi Mehr, Yoan Souty
-
Patent number: 11887249
Abstract: A method includes receiving video data of a user, the video data comprising a first captured image and a second captured image, generating a two-dimensional planar proxy of the user, determining a pose comprising a location and orientation of the two-dimensional planar proxy within a three-dimensional virtual environment, rendering one or more display images for one or more displays of an artificial-reality device based on the two-dimensional planar proxy having the determined pose and at least one of the first and second captured images, displaying the rendered one or more display images using the one or more displays, respectively, determining that a viewing angle of the artificial-reality device relative to the two-dimensional planar proxy exceeds a predetermined maximum threshold, and based on the determination that the viewing angle exceeds the predetermined maximum threshold, ceasing to display the one or more display images.
Type: Grant
Filed: December 22, 2022
Date of Patent: January 30, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Alexander Sorkine Hornung, Panya Inversin
-
Patent number: 11880935
Abstract: An image-based method of modeling and rendering a three-dimensional model of an object is provided. The method comprises: obtaining a three-dimensional point cloud at each frame of a synchronized, multi-view video of an object, wherein the video comprises a plurality of frames; extracting a feature descriptor for each point in the point cloud for the plurality of frames without storing the feature descriptor for each frame; producing a two-dimensional feature map for a target camera; and using an anti-aliased convolutional neural network to decode the feature map into an image and a foreground mask.
Type: Grant
Filed: September 23, 2022
Date of Patent: January 23, 2024
Assignee: SHANGHAITECH UNIVERSITY
Inventors: Minye Wu, Jingyi Yu
-
Patent number: 11875457
Abstract: Techniques for three-dimensional product reconstruction using multiple images collected at checkout lanes are disclosed herein. An example method includes capturing, by a barcode reader associated with a point of sale (POS) workstation, first image data associated with each of a plurality of products passing through a product scanning region of the POS workstation; analyzing barcode data from the first image data captured by the barcode reader to identify each product of the plurality of products passing through the product scanning region of the POS workstation; capturing, by one or more color cameras associated with the POS workstation, second image data associated with the identified product, of the plurality of products passing through the product scanning region of the POS workstation; and generating, by a processor, a textured three-dimensional mesh reconstruction for the identified product based on the second image data associated with the identified product.
Type: Grant
Filed: February 22, 2022
Date of Patent: January 16, 2024
Assignee: Zebra Technologies Corporation
Inventors: Alessandro Bay, Andrea Mirabile
-
Patent number: 11869141
Abstract: Techniques related to validating an image based 3D model of a scene are discussed. Such techniques include detecting an object within a captured image used to generate the scene, projecting the 3D model to a view corresponding to the captured image to generate a reconstructed image, and comparing image regions of the captured and reconstructed images corresponding to the object to validate the 3D model.
Type: Grant
Filed: May 14, 2019
Date of Patent: January 9, 2024
Assignee: Intel Corporation
Inventors: Xiaofeng Tong, Wenlong Li
-
Patent number: 11869135
Abstract: A three-dimensional representation of a scene captured in an action shot base video may be determined. The three-dimensional representation may identify a camera pose. A representation of an object may be determined from a multi-view representation of the object that includes images of the object and that is navigable in one or more dimensions. An action shot video of the scene that includes a rendering of the object determined based on the representation and the camera pose may be generated.
Type: Grant
Filed: January 8, 2021
Date of Patent: January 9, 2024
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Julius Santiago, Milos Vlaski, Endre Ajandi, Radu Bogdan Rusu
-
Patent number: 11830210
Abstract: A method and device for generating points of a 3D scene from a depth image. To reach that aim, depth information associated with a current pixel is compared with depth information associated with pixels spatially adjacent to the current pixel in the depth image. When the difference of depth between the current pixel and an adjacent pixel is greater than a first value and less than a second value, at least an additional point of said 3D scene is generated, in addition to a current point associated with the current pixel of the depth image, the number of additional points depending on the depth difference.
Type: Grant
Filed: October 3, 2018
Date of Patent: November 28, 2023
Assignee: InterDigital VC Holdings, Inc.
Inventors: Sebastien Lasserre, Julien Ricard, Remi Jullian
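The two-threshold rule in this abstract can be sketched for a single pixel pair. The threshold values, the uniform point spacing, and the linear relation between gap size and point count are illustrative assumptions; the patent only requires that the count depend on the depth difference.

```python
# Hedged sketch of the idea above: compare a depth pixel with an
# adjacent one; if the gap lies between two thresholds, generate
# additional points filling the gap.  Thresholds, spacing, and the
# linear count rule are assumptions for illustration.

def extra_points(depth_cur, depth_adj, t1=1.0, t2=10.0, step=1.0):
    """Return depths of additional points to generate between the
    current pixel and an adjacent one.  No points are added when the
    gap is small (same surface) or very large (a true occlusion edge)."""
    diff = abs(depth_adj - depth_cur)
    if not (t1 < diff < t2):
        return []
    n = int(diff / step)  # number of points scales with the gap
    sign = 1.0 if depth_adj > depth_cur else -1.0
    return [depth_cur + sign * step * (k + 1) for k in range(n)]

print(extra_points(5.0, 8.5))   # → [6.0, 7.0, 8.0]
print(extra_points(5.0, 5.2))   # → []  (gap below first threshold)
print(extra_points(5.0, 40.0))  # → []  (gap above second threshold)
```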
-
Patent number: 11808941
Abstract: Systems, devices, and methods relate to generation of augmented images using virtual content that is part of an augmented reality presentation and images of a scene captured via an image sensor. A wearable heads-up display (WHUD) may present an augmented reality presentation with virtual content projected into a field of view of a scene, while an image sensor captures images of the scene. The image sensor may be part of the WHUD or part of a separate device, for instance part of a smartphone. A wearer may view a scene via the WHUD and capture an image of all or a portion of the scene, for instance via a camera of either the WHUD or a separate device. An application monitors the virtual content and can generate an augmented image, which can be transmitted/printed, replicating the augmented reality experience and/or adding customized messages to the resulting augmented image.
Type: Grant
Filed: December 2, 2019
Date of Patent: November 7, 2023
Assignee: GOOGLE LLC
Inventors: Stephen Lake, Samuel Legge, Lee Payne, Lahiru Maramba Kodippilige, Eric A. Patterson
-
Patent number: 11803936
Abstract: A graphics processing unit configured to process graphics data using a rendering space which is sub-divided into a plurality of tiles, the graphics processing unit comprising: a plurality of processing cores configured to render graphics data; cost indication logic configured to obtain a cost indication for each of a plurality of sets of one or more tiles of the rendering space, wherein the cost indication for a set of one or more tiles is suggestive of a cost of processing the set of one or more tiles; similarity indication logic configured to obtain similarity indications between sets of one or more tiles of the rendering space, wherein the similarity indication between two sets of one or more tiles is indicative of a level of similarity between the two sets of tiles according to at least one processing metric; and scheduling logic configured to assign the sets of one or more tiles to the processing cores for rendering in dependence on the cost indications and the similarity indications.
Type: Grant
Filed: October 13, 2022
Date of Patent: October 31, 2023
Assignee: Imagination Technologies Limited
Inventors: Rudi Bonfiglioli, Richard Broadhurst
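Cost-based assignment of tile sets to cores can be illustrated with a standard greedy heuristic. This is not Imagination's scheduler: it uses only the cost indications (the classic longest-processing-time rule), and a real implementation would additionally use the similarity indications, for example biasing similar tile sets onto the same core to reuse cached state.

```python
# Greedy sketch (not the patented scheduler) of assigning tile sets to
# cores from cost indications: take tile sets in descending estimated
# cost and give each to the currently least-loaded core (LPT heuristic).

import heapq

def schedule(costs, n_cores):
    """costs: {tile_set_id: estimated cost}.  Returns {core: [ids]}."""
    heap = [(0.0, core) for core in range(n_cores)]  # (load, core)
    heapq.heapify(heap)
    assignment = {core: [] for core in range(n_cores)}
    for tid in sorted(costs, key=costs.get, reverse=True):
        load, core = heapq.heappop(heap)             # least-loaded core
        assignment[core].append(tid)
        heapq.heappush(heap, (load + costs[tid], core))
    return assignment

print(schedule({"A": 9, "B": 5, "C": 4, "D": 2}, n_cores=2))
# → {0: ['A', 'D'], 1: ['B', 'C']}  (loads 11 and 9)
```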
-
Patent number: 11804007
Abstract: An image processing system includes a computing platform having processing hardware, a display, and a system memory storing a software code. The processing hardware is configured to execute the software code to receive a three-dimensional (3D) digital model, surround the 3D digital model with multiple virtual cameras oriented toward the 3D digital model, and generate, using the virtual cameras, multiple renders of the 3D digital model. The processing hardware is further configured to execute the software code to generate a UV texture coordinate space for a surface projection of the 3D digital model, and to transfer, using the multiple renders, lighting color values for each of multiple surface portions of the 3D digital model to the UV texture coordinate space.
Type: Grant
Filed: March 31, 2021
Date of Patent: October 31, 2023
Assignee: Disney Enterprises, Inc.
Inventors: Dane M. Coffey, Siroberto Scerbo, Evan M. Goldberg, Christopher Richard Schroers, Daniel L Baker, Mark R. Mine, Erika Varis Doggett
-
Patent number: 11798219
Abstract: A graphics processing engine has a geometry shading stage having two modes of operation. In the first mode of operation, each primitive output by the geometry shading stage is independent, whereas in the second mode of operation, connectivity between input primitives is maintained by the geometry shading stage. The mode of operation of the geometry shading stage can be determined based on the value of control state data which may be generated at compile-time for a geometry shader based on analysis of that geometry shader.
Type: Grant
Filed: September 6, 2022
Date of Patent: October 24, 2023
Assignee: Imagination Technologies Limited
Inventor: John W. Howson
-
Patent number: 11790598
Abstract: A three-dimensional (3D) density volume of an object is constructed from tomography images (e.g., x-ray images) of the object. The tomography images are projection images that capture all structures of an object (e.g., human body) between a beam source and imaging sensor. The beam effectively integrates along a path through the object, producing a tomography image at the imaging sensor, where each pixel represents attenuation. A 3D reconstruction pipeline includes a first neural network model, a fixed function backprojection unit, and a second neural network model. Given information for the capture environment, the tomography images are processed by the reconstruction pipeline to produce a reconstructed 3D density volume of the object. In contrast with a set of 2D slices, the entire 3D density volume is reconstructed, so two-dimensional (2D) density images may be produced by slicing through any portion of the 3D density volume at any angle.
Type: Grant
Filed: July 1, 2021
Date of Patent: October 17, 2023
Assignee: NVIDIA Corporation
Inventors: Onni August Kosomaa, Jaakko T. Lehtinen, Samuli Matias Laine, Tero Tapani Karras, Miika Samuli Aittala
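The fixed-function backprojection unit in the middle of this pipeline can be illustrated by a toy, unfiltered parallel-beam backprojection in 2D: each 1D projection is smeared back across the reconstruction grid along its acquisition angle. This is a generic stand-in, not NVIDIA's implementation; the real pipeline brackets backprojection with learned networks and operates on cone-beam geometry.

```python
# Toy unfiltered parallel-beam backprojection: smear each 1-D
# projection back across the grid along its acquisition angle and
# average the results.  A generic illustration, not the patented
# fixed-function unit.

import numpy as np

def backproject(projections, angles_deg, size):
    """projections: (n_angles, n_detectors) array; returns (size, size)."""
    recon = np.zeros((size, size))
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    for proj, ang in zip(projections, np.deg2rad(angles_deg)):
        # detector coordinate of each grid point for this angle
        t = (xs - c) * np.cos(ang) + (ys - c) * np.sin(ang) + c
        idx = np.clip(np.round(t).astype(int), 0, proj.shape[0] - 1)
        recon += proj[idx]
    return recon / len(angles_deg)

# Usage: two orthogonal projections of a single bright pixel at centre
# reinforce at the centre and leave faint streaks elsewhere.
p = np.zeros((2, 5)); p[:, 2] = 1.0
print(backproject(p, [0, 90], size=5))
```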