Patents Examined by Anh-Tuan V Nguyen
  • Patent number: 11783527
    Abstract: An apparatus comprises receivers (201, 203) receiving texture maps and meshes representing a scene from a first and a second view point. An image generator (205) determines a light intensity image for a third view point based on the received data. A first view transformer (207) determines first image positions and depth values in the image for vertices of the first mesh, and a second view transformer (209) determines second image positions and depth values for vertices of the second mesh. A first shader (211) determines a first light intensity value and a first depth value based on the first image positions and depth values, and a second shader (213) determines a second light intensity value and a second depth value from the second image positions and depth values. A combiner (215) generates an output value as a weighted combination of the first and second light intensity values, where the weighting of a light intensity value increases for an increasing depth value.
    Type: Grant
    Filed: February 10, 2020
    Date of Patent: October 10, 2023
    Assignee: Koninklijke Philips N.V.
    Inventor: Christiaan Varekamp
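A minimal NumPy sketch of the blending rule described in the abstract above: two warped views are combined per pixel with weights that increase with depth value. The exponential weighting, the sharpness parameter, and the array names are illustrative assumptions, not the patent's exact formulation.

```python
import numpy as np

def blend_views(color_a, depth_a, color_b, depth_b, sharpness=10.0):
    """Blend two warped views so that the view with the larger depth value
    gets more weight, per the abstract's weighting rule.  The exponential
    weighting is an illustrative choice."""
    w_a = np.exp(sharpness * depth_a)      # weight grows with depth
    w_b = np.exp(sharpness * depth_b)
    total = w_a + w_b
    return (w_a[..., None] * color_a + w_b[..., None] * color_b) / total[..., None]

# Toy 2x2 example: two warped views with per-pixel depths.
color_a = np.ones((2, 2, 3)) * 0.2
color_b = np.ones((2, 2, 3)) * 0.8
depth_a = np.array([[0.1, 0.9], [0.5, 0.5]])
depth_b = np.array([[0.9, 0.1], [0.5, 0.5]])
print(blend_views(color_a, depth_a, color_b, depth_b))
```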
  • Patent number: 11776217
    Abstract: Disclosed in the embodiments of the present disclosure are a method for planning three-dimensional scanning viewpoints, a device for planning three-dimensional scanning viewpoints, and a computer-readable storage medium. After a low-precision digitized model of the object to be scanned is acquired, viewpoint planning is performed on the point cloud data of the low-precision digitized model using a viewpoint planning algorithm, which calculates the positions and line-of-sight directions in space of the viewpoints from which a three-dimensional sensor needs to scan the object. Calculating the viewpoints of a three-dimensional sensor by means of a viewpoint planning algorithm can effectively improve the accuracy and rigor of sensor pose determination, greatly improve the efficiency of viewpoint planning, and reduce the time consumed in the overall three-dimensional measurement process.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: October 3, 2023
    Assignee: SHENZHEN UNIVERSITY
    Inventors: Xiaoli Liu, Yifan Liao, Hailong Chen, Xiang Peng, Qijian Tang
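The abstract leaves the viewpoint planning algorithm unspecified; the toy sketch below shows one common flavor of such planning (greedy coverage of point-cloud normals) purely to illustrate what is being computed, namely viewpoint positions and line-of-sight directions. The standoff distance, coverage cone, and greedy strategy are assumptions.

```python
import numpy as np

def greedy_viewpoints(points, normals, standoff=0.3, max_views=10, cone_cos=0.7):
    """Greedily pick viewpoints until every point is seen by at least one view.
    A point counts as covered when a view looks at it roughly along its normal."""
    covered = np.zeros(len(points), dtype=bool)
    views = []
    while not covered.all() and len(views) < max_views:
        # Pick an uncovered point and place a candidate view along its normal.
        idx = np.flatnonzero(~covered)[0]
        position = points[idx] + standoff * normals[idx]
        direction = -normals[idx]
        # Mark every point whose normal roughly faces the new view as covered.
        facing = (normals @ -direction) > cone_cos
        covered |= facing
        views.append((position, direction))
    return views

# Toy object: points on a unit sphere, normals pointing outward.
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
for pos, d in greedy_viewpoints(pts, pts):
    print(np.round(pos, 2), np.round(d, 2))
```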
  • Patent number: 11776105
    Abstract: A contaminant detection system includes a light source configured to emit excitation light on an object to be inspected; a detector configured to detect fluorescence emitted from a contaminant adhering to the object to be inspected; and a processor. The fluorescence is caused by emission of the excitation light from the light source onto the object to be inspected. The processor is configured to perform a determination of a location of the contaminant and a type of the contaminant, based on the fluorescence emitted from the contaminant; and output a result of the determination.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: October 3, 2023
    Assignee: Tokyo Electron Limited
    Inventors: Tsuyoshi Moriya, Yoshitaka Enoki, Tokio Toyama, Michihiro Takahashi, Takuya Mori
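A hedged sketch of the determination step: threshold the fluorescence response and classify each hit by its peak emission band. The wavelength-to-contaminant table and the thresholding scheme are hypothetical, used only to illustrate how location and type could both fall out of the detected fluorescence.

```python
import numpy as np

# Hypothetical lookup from peak emission band (nm) to contaminant type.
TYPE_BY_PEAK_NM = {450: "organic residue", 520: "photoresist", 610: "oil film"}

def detect_contaminants(spectral_image, wavelengths_nm, intensity_threshold=0.5):
    """spectral_image: (H, W, B) fluorescence intensity per wavelength band.
    Returns a list of (row, col, type) for pixels whose fluorescence exceeds
    the threshold, classified by the band with the strongest response."""
    peak_band = spectral_image.argmax(axis=-1)    # (H, W) index of brightest band
    peak_value = spectral_image.max(axis=-1)      # (H, W) brightest response
    hits = []
    for r, c in zip(*np.nonzero(peak_value > intensity_threshold)):
        nm = wavelengths_nm[peak_band[r, c]]
        hits.append((int(r), int(c), TYPE_BY_PEAK_NM.get(nm, "unknown")))
    return hits

# Toy 4x4 image with three bands; one bright pixel in the 520 nm band.
img = np.zeros((4, 4, 3))
img[1, 2, 1] = 0.9
print(detect_contaminants(img, wavelengths_nm=[450, 520, 610]))
```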
  • Patent number: 11763427
    Abstract: A method of intelligently transforming a digital asset for a target environment is disclosed. Asset data describing the digital asset is received. The received asset data is analyzed to determine a classification type for the digital asset. A database is communicated with to request additional data associated with the determined classification type. The additional data includes semantic data associated with the classification type. The additional data is compared to the received asset data. The comparing includes determining missing data and conflicting data. The missing data includes data from the additional data which is absent from the asset data. The missing data is added to the asset data.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: September 19, 2023
    Assignee: Unity IPR ApS
    Inventors: Gregory Lionel Xavier Jean Palmaro, Charles Janusz Migos, Patrick Gustaevel, Gerald James William Orban
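A small sketch of the comparison step described above: determine which fields of the semantic data are missing from the asset data and which conflict with it, then add the missing fields. The field names are invented for illustration.

```python
def compare_asset(asset_data, semantic_data):
    """Return (missing, conflicting) between the received asset data and the
    semantic data fetched for the asset's classification type."""
    missing = {k: v for k, v in semantic_data.items() if k not in asset_data}
    conflicting = {k: (asset_data[k], v) for k, v in semantic_data.items()
                   if k in asset_data and asset_data[k] != v}
    return missing, conflicting

asset_data = {"name": "chair_01", "category": "prop", "mass_kg": 150}
semantic_data = {"category": "furniture", "mass_kg": 7, "sit_height_m": 0.45}

missing, conflicting = compare_asset(asset_data, semantic_data)
asset_data.update(missing)          # per the abstract, missing data is added
print(missing)       # {'sit_height_m': 0.45}
print(conflicting)   # {'category': ('prop', 'furniture'), 'mass_kg': (150, 7)}
```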
  • Patent number: 11758108
    Abstract: Provided is an image transmission method. On the image processing device side, left-eye image data and right-eye image data corresponding to the same original image frame in a target video are divided into sub-images, multiple sets of image data in one-to-one correspondence with the sub-images are generated, and the sets are transmitted to the image display device through transmission threads. Each of the multiple sets of image data includes a sub-image and the first and second sequence numbers that correspond to the sub-image. On the image display device, the left-eye image data and right-eye image data corresponding to the same original image frame are obtained by combining the sub-images in the multiple sets of image data based on their first and second sequence numbers. Finally, a left-eye image corresponding to the left-eye image data and a right-eye image corresponding to the right-eye image data are played.
    Type: Grant
    Filed: August 5, 2022
    Date of Patent: September 12, 2023
    Assignee: QINGDAO PICO TECHNOLOGY CO., LTD.
    Inventor: Jian Wu
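A hedged NumPy sketch of the split/combine path: each eye's frame is cut into numbered sub-images and later reassembled from the sequence numbers, so arrival order over the transmission threads does not matter. Tile counts and the sequence-number layout are assumptions.

```python
import numpy as np

def split(frame, eye, frame_no, rows=2, cols=2):
    """Split one eye's frame into rows*cols sub-images, each tagged with a
    first sequence number (frame, eye) and a second sequence number (tile)."""
    h, w = frame.shape[:2]
    tiles = []
    for i in range(rows):
        for j in range(cols):
            sub = frame[i*h//rows:(i+1)*h//rows, j*w//cols:(j+1)*w//cols]
            tiles.append({"first_seq": (frame_no, eye),
                          "second_seq": i * cols + j, "sub": sub})
    return tiles

def combine(tiles, rows=2, cols=2):
    """Reassemble one eye's frame from its tiles using the second sequence numbers."""
    ordered = sorted(tiles, key=lambda t: t["second_seq"])
    grid = [ordered[i*cols:(i+1)*cols] for i in range(rows)]
    return np.block([[t["sub"] for t in row] for row in grid])

left = np.arange(16).reshape(4, 4)
tiles = split(left, eye="L", frame_no=7)
np.random.shuffle(tiles)                    # arrival order is not guaranteed
assert np.array_equal(combine(tiles), left)
```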
  • Patent number: 11758104
    Abstract: Disclosed are systems and methods for the out-of-order predictive streaming of elements from a three-dimensional ("3D") image file so that a recipient device is able to produce a first visualization of at least a first streamed element from a particular perspective, similar to the instant transfer of two-dimensional ("2D") images, while the additional elements and perspectives of the 3D image are streamed. The sending device prioritizes the 3D image elements based on a predicted viewing order, streams a particular element from a particular perspective with a priority that is greater than a priority associated with other elements and other perspectives, determines a next element to stream after the particular element based on the next element being positioned adjacent to the particular element and having a priority that is greater than that of other adjacent elements, and streams the next element to the recipient device.
    Type: Grant
    Filed: October 18, 2022
    Date of Patent: September 12, 2023
    Assignee: Illuscio, Inc.
    Inventor: Robert Monaghan
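A toy sketch of the prioritized ordering the abstract describes: stream the highest-priority element first, then repeatedly stream the highest-priority element adjacent to what has already been sent. The adjacency graph and priority values are made-up example data.

```python
import heapq

def stream_order(priorities, adjacency, start):
    """Yield element ids in a predicted viewing order: start with the given
    element, then always take the highest-priority neighbour of the elements
    streamed so far (a toy reading of the abstract)."""
    sent, frontier, order = {start}, [], [start]
    for n in adjacency[start]:
        heapq.heappush(frontier, (-priorities[n], n))
    while frontier:
        _, nxt = heapq.heappop(frontier)
        if nxt in sent:
            continue
        sent.add(nxt)
        order.append(nxt)
        for n in adjacency[nxt]:
            if n not in sent:
                heapq.heappush(frontier, (-priorities[n], n))
    return order

priorities = {"A": 9, "B": 5, "C": 7, "D": 2}
adjacency = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(stream_order(priorities, adjacency, start="A"))   # ['A', 'C', 'B', 'D']
```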
  • Patent number: 11756262
    Abstract: The present invention relates to a mobile terminal and a method for controlling the same, the mobile terminal comprising: a camera for recording a video; a display unit for displaying a recording screen of the video being recorded through the camera; and a control unit for, when a portion of the video being recorded satisfies a video upload requirement of at least one video upload service available from the mobile terminal, controlling the display unit to display an item indicating the video upload services whose requirements are satisfied. The present invention allows a user to quickly recognize an SNS to which the already-recorded portion of a video currently being recorded in real time can be uploaded, and to easily upload that portion to the SNS.
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: September 12, 2023
    Assignee: LG ELECTRONICS INC.
    Inventors: Changhwan Lee, Yunsup Shin, Salkmann Ji
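A minimal sketch of the requirement check: compare the already-recorded portion's duration and size against each upload service's limits and report the services whose requirements are satisfied. The service names and limits are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class UploadRequirement:
    service: str
    max_seconds: float
    max_megabytes: float

# Hypothetical requirements; real services and limits will differ.
REQUIREMENTS = [
    UploadRequirement("ServiceA", max_seconds=60, max_megabytes=100),
    UploadRequirement("ServiceB", max_seconds=600, max_megabytes=2048),
]

def satisfied_services(recorded_seconds, recorded_megabytes):
    """Return the services whose upload requirements the recorded portion
    already satisfies, so the UI can show an indicator item for each."""
    return [r.service for r in REQUIREMENTS
            if recorded_seconds <= r.max_seconds
            and recorded_megabytes <= r.max_megabytes]

print(satisfied_services(recorded_seconds=45, recorded_megabytes=80))
# ['ServiceA', 'ServiceB']
```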
  • Patent number: 11748932
    Abstract: In various examples there is a method of image processing comprising: storing a real image of an object in memory, the object being a specified type of object. The method involves computing, using a first encoder, a factorized embedding of the real image. The method receives a value of at least one parameter of a synthetic image rendering apparatus for rendering synthetic images of objects of the specified type. The parameter controls an attribute of synthetic images of objects rendered by the rendering apparatus. The method computes an embedding factor of the received value using a second encoder. The factorized embedding is modified with the computed embedding factor. The method computes, using a decoder with the modified embedding as input, an output image of an object which is substantially the same as the real image except for the attribute controlled by the parameter.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: September 5, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Marek Adam Kowalski, Stephan Joachim Garbin, Matthew Alastair Johnson, Tadas Baltrusaitis, Martin De la Gorce, Virginia Estellers Casas, Sebastian Karol Dziadzio
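A heavily hedged toy sketch of the data flow only: a factorized embedding of the real image, one factor replaced by the embedding of a rendering-parameter value, then decoded into an output image. The random linear maps stand in for the learned first encoder, second encoder, and decoder and have no relation to the actual trained models.

```python
import numpy as np

rng = np.random.default_rng(0)
IMG_DIM, FACTOR_DIM, N_FACTORS = 64, 8, 4

# Stand-ins for the learned networks (random linear maps, illustration only).
first_encoder = rng.normal(size=(N_FACTORS * FACTOR_DIM, IMG_DIM))
second_encoder = rng.normal(size=(FACTOR_DIM, 1))
decoder = rng.normal(size=(IMG_DIM, N_FACTORS * FACTOR_DIM))

def modify_and_decode(real_image, param_value, factor_index=2):
    """Compute a factorized embedding of the image, replace one factor with the
    embedding of the rendering-parameter value, and decode an output image."""
    factors = (first_encoder @ real_image).reshape(N_FACTORS, FACTOR_DIM)
    factors[factor_index] = (second_encoder @ np.array([param_value])).ravel()
    return decoder @ factors.ravel()

out = modify_and_decode(rng.normal(size=IMG_DIM), param_value=0.7)
print(out.shape)   # (64,)
```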
  • Patent number: 11748940
    Abstract: In one embodiment, a computing system may determine a view position, a view direction, and a time with respect to a scene. The system may access a spatiotemporal representation of the scene generated based on (1) a monocular video including images each capturing at least a portion of the scene at a corresponding time and (2) depth values of the portion of the scene captured by each image. The system may generate an image based on the view position, the view direction, the time, and the spatiotemporal representation. A pixel value of the image corresponding to the view position may be determined based on volume densities and color values at sampling locations along the view direction and at the time in the spatiotemporal representation. The system may output the image to the display, representing the scene at the time as viewed from the view position and in the view direction.
    Type: Grant
    Filed: October 11, 2021
    Date of Patent: September 5, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Wenqi Xian, Jia-Bin Huang, Johannes Peter Kopf, Changil Kim
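A minimal sketch of the per-pixel step described above: composite color values and volume densities sampled along the view direction using standard volume-rendering quadrature. This is the textbook formulation, not necessarily the patent's exact one.

```python
import numpy as np

def composite_ray(densities, colors, step):
    """Standard volume-rendering quadrature: alpha at each sample from the
    volume density, accumulated with front-to-back transmittance."""
    alphas = 1.0 - np.exp(-densities * step)                       # (S,)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]  # transmittance
    weights = trans * alphas                                        # (S,)
    return (weights[:, None] * colors).sum(axis=0)                  # (3,) pixel color

# Toy ray with 4 samples: mostly empty space, then a red-ish surface.
densities = np.array([0.0, 0.1, 5.0, 5.0])
colors = np.array([[0, 0, 0], [0.2, 0.2, 0.2], [0.9, 0.1, 0.1], [0.5, 0.0, 0.0]])
print(composite_ray(densities, colors, step=0.1))
```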
  • Patent number: 11734872
    Abstract: The disclosure presents a technique for utilizing ray tracing to produce high quality visual scenes with shadows while minimizing computing costs. The disclosed technique can lower the number of rays needed for shadow region rendering and still maintain a targeted visual quality for the scene. In one example, a method for denoising a ray traced scene is disclosed that includes: (1) applying a pixel mask to a data structure of data from the scene, wherein the applying uses the scene at full resolution and pixels at the edge of a depth boundary change are identified using the pixel mask, (2) generating a penumbra mask using the data structure, (3) adjusting HitT values in the packed data buffer utilizing the penumbra mask, and (4) denoising the scene by reducing scene noise in the data of the data structure with adjusted HitT values.
    Type: Grant
    Filed: June 21, 2022
    Date of Patent: August 22, 2023
    Assignee: NVIDIA Corporation
    Inventor: Jon Story
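A hedged sketch of two of the listed steps: a pixel mask marking depth-boundary changes at full resolution, and a penumbra mask marking pixels whose shadow-ray HitT indicates a partial occluder. The gradient threshold and the HitT convention (0 meaning no hit) are assumptions; the actual HitT adjustment and denoising filter are not shown.

```python
import numpy as np

def depth_boundary_mask(depth, threshold=0.1):
    """Mark pixels where depth changes sharply against a neighbour (a simple
    stand-in for the full-resolution depth-boundary test)."""
    dx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    dy = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
    return (dx > threshold) | (dy > threshold)

def penumbra_mask(hit_t, light_dist):
    """Mark pixels whose shadow ray hit an occluder before reaching the light
    (0 < HitT < distance to light), i.e. candidates for soft-shadow filtering."""
    return (hit_t > 0.0) & (hit_t < light_dist)

depth = np.array([[1.0, 1.0, 3.0], [1.0, 1.0, 3.0]])
hit_t = np.array([[0.0, 0.4, 0.0], [0.0, 0.6, 0.0]])
print(depth_boundary_mask(depth))
print(penumbra_mask(hit_t, light_dist=2.0))
```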
  • Patent number: 11727644
    Abstract: An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area at least partially surrounded by multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in the performance area while the displays present images of a virtual environment. The images of the virtual environment within a frustum of the camera are updated on the one or more displays based on movement of the camera, while images of the virtual environment outside of the frustum of the camera are not updated based on movement of the camera. The techniques further include generating content based on the plurality of captured images.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: August 15, 2023
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC
    Inventors: Roger Cordes, Nicholas Rasmussen, Kevin Wooley, Rachel Rose
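A small sketch of the frustum test that decides which display regions receive camera-dependent updates. The conical field-of-view approximation and the tile-center test are simplifying assumptions.

```python
import numpy as np

def in_frustum(points, cam_pos, cam_dir, fov_deg=60.0):
    """Return a boolean per point: True if the point lies inside a simple
    conical approximation of the taking camera's frustum."""
    cam_dir = cam_dir / np.linalg.norm(cam_dir)
    to_pts = points - cam_pos
    to_pts = to_pts / np.linalg.norm(to_pts, axis=1, keepdims=True)
    return to_pts @ cam_dir > np.cos(np.radians(fov_deg) / 2.0)

# Display wall tiles (their centre points) and a camera looking down +X.
tiles = np.array([[5.0, 0.0, 0.0], [5.0, 4.0, 0.0], [-5.0, 0.0, 0.0]])
update = in_frustum(tiles, cam_pos=np.zeros(3), cam_dir=np.array([1.0, 0.0, 0.0]))
# Only tiles inside the frustum are re-rendered for the new camera pose.
print(update)   # [ True False False]
```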
  • Patent number: 11715263
    Abstract: A file generation apparatus generates a file which includes material data used for generation of a virtual viewpoint image that is based on a multi-viewpoint image and type information for specifying a type of the material data, and outputs the generated file.
    Type: Grant
    Filed: January 13, 2021
    Date of Patent: August 1, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Hironao Ito, Kazufumi Onuma, Mitsuru Maeda
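A hedged sketch of the idea of a file that carries material data blocks together with type information for each block. The chunk layout and type codes are invented for illustration and are not Canon's actual format.

```python
import struct

# Invented type codes for illustration; the real format's identifiers differ.
TYPE_POINT_CLOUD, TYPE_TEXTURE, TYPE_MESH = 1, 2, 3

def write_material_file(path, blocks):
    """blocks: list of (type_code, payload_bytes). Each block is written as
    [uint32 type][uint32 length][payload] so a reader can pick what it needs."""
    with open(path, "wb") as f:
        for type_code, payload in blocks:
            f.write(struct.pack("<II", type_code, len(payload)))
            f.write(payload)

def read_material_file(path):
    out = []
    with open(path, "rb") as f:
        while header := f.read(8):
            type_code, length = struct.unpack("<II", header)
            out.append((type_code, f.read(length)))
    return out

write_material_file("scene.bin", [(TYPE_MESH, b"mesh-bytes"), (TYPE_TEXTURE, b"tex")])
print(read_material_file("scene.bin"))
```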
  • Patent number: 11710213
    Abstract: An application processor includes a reconfigurable hardware scaler which includes dedicated circuits configured to perform different scaling techniques, respectively, and a shared circuit configured to be shared by the dedicated circuits. One of the different scaling techniques is performed by one of the dedicated circuits together with the shared circuit.
    Type: Grant
    Filed: February 24, 2022
    Date of Patent: July 25, 2023
    Inventors: Sung Chul Yoon, Ha Na Yang
  • Patent number: 11694089
    Abstract: A simulation environment is disclosed. In embodiments, the simulation environment includes deep learning neural networks trained to generate photorealistic geotypical image content while preserving spatial coherence. The deep learning networks are trained to correlate geo-specific datasets with input images of increasing detail and resolution to iteratively generate output images until the desired level of detail is reached. The correlation of image input with geo-specific data (e.g., labeled data elements, land use data, biome data, elevational data, near-IR imagery) preserves spatial coherence of objects and image elements between output images, e.g., between adjacent levels of detail and/or between adjacent image tiles sharing a common level of detail.
    Type: Grant
    Filed: February 4, 2020
    Date of Patent: July 4, 2023
    Assignee: Rockwell Collins, Inc.
    Inventors: Daniel J. Lowe, Rishabh Kaushik
  • Patent number: 11687691
    Abstract: Systems, computer-implemented methods, and computer program products that can facilitate a transformation of a model of an entity by a model of a plurality of entities are provided. According to an embodiment, a computer-implemented method can comprise identifying a plurality of parameters of a model of a plurality of entities; generating a model of an entity based on collected data of an operation of the entity, wherein the model of the entity comprises a subset of the plurality of parameters; and transforming the model of the entity based on the model of the plurality of entities such that a first result from the model of the plurality of entities and a second result from the model of the entity have a relationship that satisfies a defined criterion, given same values used for the subset of the plurality of parameters.
    Type: Grant
    Filed: January 3, 2019
    Date of Patent: June 27, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Viatcheslav Gurev, Paolo Di Achille, Jaimit Parikh
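A toy sketch of the transformation step: fit a simple map so that, for the shared parameter values, the entity model's results agree with the population model's results within a defined criterion. The scale-and-offset fit is an illustrative stand-in for whatever transformation the patent actually uses.

```python
import numpy as np

def transform_entity_model(entity_model, population_model, shared_inputs, tol=1e-3):
    """Fit a scale/offset so the entity model's results track the population
    model's results over the shared parameter values (illustrative only)."""
    y_pop = np.array([population_model(x) for x in shared_inputs])
    y_ent = np.array([entity_model(x) for x in shared_inputs])
    a, b = np.polyfit(y_ent, y_pop, 1)           # least-squares linear map
    transformed = lambda x: a * entity_model(x) + b
    err = max(abs(transformed(x) - population_model(x)) for x in shared_inputs)
    return transformed, err <= tol               # model and "criterion satisfied"

population_model = lambda x: 3.0 * x + 1.0       # model of the plurality of entities
entity_model = lambda x: 1.5 * x - 2.0           # built from the entity's own data
model, ok = transform_entity_model(entity_model, population_model, [0.0, 1.0, 2.0])
print(model(4.0), ok)                            # ~13.0 True
```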
  • Patent number: 11688137
    Abstract: An apparatus for rendering a three-dimensional (3D) polygon mesh includes a processor, and a memory storing instructions, wherein the processor is configured to execute the instructions to obtain a plurality of multi-view images corresponding to a plurality of captured images obtained by capturing an object at different capture camera positions, obtain a 3D polygon mesh for the object, select one or more of the plurality of multi-view images as one or more texture images based on a render camera position and capture camera positions of the plurality of multi-view images, and render the 3D polygon mesh as a two-dimensional (2D) image based on the selected one or more texture images.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: June 27, 2023
    Assignee: FXGEAR INC.
    Inventor: Kyung Gun Na
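A minimal sketch of the texture-image selection: rank the capture cameras by how closely their view of the object matches the render camera's view and keep the best few as texture sources. The cosine-similarity metric and the object-center assumption are illustrative choices.

```python
import numpy as np

def select_texture_views(capture_positions, render_position, object_center, k=2):
    """Rank capture cameras by how close their viewing direction toward the
    object is to the render camera's, and return the indices of the best k."""
    def direction(p):
        d = object_center - p
        return d / np.linalg.norm(d)
    render_dir = direction(render_position)
    scores = np.array([direction(p) @ render_dir for p in capture_positions])
    return np.argsort(-scores)[:k]               # highest cosine similarity first

captures = np.array([[3.0, 0.0, 0.0], [0.0, 3.0, 0.0], [-3.0, 0.0, 0.0]])
render_cam = np.array([2.5, 0.5, 0.0])
print(select_texture_views(captures, render_cam, object_center=np.zeros(3)))
# [0 1]: the capture views most aligned with the render view become texture images
```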
  • Patent number: 11682162
    Abstract: A system, method, or computer program product for generating stereoscopic images. One of the methods includes identifying, in a first three-dimensional coordinate system of a first three-dimensional virtual environment, a location and orientation of a first virtual object that is a virtual stereoscopic display object; identifying an eyepoint pair in the first virtual environment; identifying, in a second three-dimensional coordinate system of a second three-dimensional virtual environment, a location and orientation of a second virtual object that is in the second virtual environment; for each eyepoint of the eyepoint pair, rendering an inferior image of the second virtual object; for each eyepoint of the eyepoint pair, rendering a superior image of the first virtual environment, comprising rendering, in the superior image for each eyepoint, the corresponding inferior image onto the virtual stereoscopic display object; and displaying, on a physical stereoscopic display, the first virtual environment.
    Type: Grant
    Filed: October 8, 2021
    Date of Patent: June 20, 2023
    Assignee: Tanzle, Inc.
    Inventor: Michael T. Mayers
  • Patent number: 11676335
    Abstract: A tessellation method uses vertex tessellation factors. For a quad patch, the method involves comparing the vertex tessellation factors for each vertex of the quad patch to a threshold value and if none exceed the threshold, the quad is sub-divided into two or four triangles. If at least one of the four vertex tessellation factors exceeds the threshold, a recursive or iterative method is used which considers each vertex of the quad patch and determines how to further tessellate the patch dependent upon the value of the vertex tessellation factor of the selected vertex or dependent upon values of the vertex tessellation factors of the selected vertex and a neighbor vertex. A similar method is described for a triangle patch.
    Type: Grant
    Filed: February 2, 2022
    Date of Patent: June 13, 2023
    Assignee: Imagination Technologies Limited
    Inventors: Simon Fenney, Vasiliki Simaiaki
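A hedged sketch of the top-level decision for a quad patch: if no vertex tessellation factor exceeds the threshold, emit a fixed split into triangles; otherwise subdivide and recurse. The uniform four-way subdivision and factor halving below are simplifications, not the patent's scheme.

```python
THRESHOLD = 0.5

def tessellate_quad(quad, factors, depth=0, max_depth=3):
    """quad: four (x, y) vertices; factors: four vertex tessellation factors.
    If no factor exceeds the threshold, emit a fixed split; otherwise subdivide
    (here: a uniform 4-way split standing in for the patent's recursion)."""
    if max(factors) <= THRESHOLD or depth >= max_depth:
        a, b, c, d = quad
        return [(a, b, c), (a, c, d)]            # split the quad into two triangles
    # Illustrative recursion: split into four sub-quads with halved factors.
    (ax, ay), (bx, by), (cx, cy), (dx, dy) = quad
    mx, my = (ax + bx + cx + dx) / 4, (ay + by + cy + dy) / 4
    mids = [((ax + bx) / 2, (ay + by) / 2), ((bx + cx) / 2, (by + cy) / 2),
            ((cx + dx) / 2, (cy + dy) / 2), ((dx + ax) / 2, (dy + ay) / 2)]
    sub_quads = [(quad[0], mids[0], (mx, my), mids[3]),
                 (mids[0], quad[1], mids[1], (mx, my)),
                 ((mx, my), mids[1], quad[2], mids[2]),
                 (mids[3], (mx, my), mids[2], quad[3])]
    tris = []
    for sq in sub_quads:
        tris += tessellate_quad(sq, [f / 2 for f in factors], depth + 1, max_depth)
    return tris

quad = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(len(tessellate_quad(quad, factors=[0.2, 0.2, 0.2, 0.2])))   # 2
print(len(tessellate_quad(quad, factors=[1.5, 0.2, 0.2, 0.2])))   # 32 (two levels)
```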
  • Patent number: 11669986
    Abstract: Enhanced methods and systems for generating both a geometry model and an optical-reflectance model (an object reconstruction model) for a physical object, based on a sparse set of images of the object under a sparse set of viewpoints. The geometry model is a mesh model that includes a set of vertices representing the object's surface. The reflectance model is an SVBRDF (spatially varying bidirectional reflectance distribution function) parameterized via multiple channels (e.g., diffuse albedo, surface roughness, specular albedo, and surface normals). For each vertex of the geometry model, the reflectance model includes a value for each of the multiple channels. The object reconstruction model is employed to render graphical representations of a virtualized object (a VO based on the physical object) within a computation-based (e.g., virtual or immersive) environment. Via the reconstruction model, the VO may be rendered from arbitrary viewpoints and under arbitrary lighting conditions.
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: June 6, 2023
    Assignees: ADOBE INC., THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, David Jay Kriegman, Ravi Ramamoorthi
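A small sketch of the reconstruction model's storage: a mesh plus, for each vertex, one value per SVBRDF channel. The channel shapes and the container layout are assumptions for illustration.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ReconstructionModel:
    """Geometry plus per-vertex SVBRDF channels, as described in the abstract.
    Channel shapes (RGB albedos, scalar roughness, unit normals) are assumptions."""
    vertices: np.ndarray          # (V, 3) mesh vertex positions
    faces: np.ndarray             # (F, 3) triangle indices
    diffuse_albedo: np.ndarray    # (V, 3)
    specular_albedo: np.ndarray   # (V, 3)
    roughness: np.ndarray         # (V,)
    normals: np.ndarray           # (V, 3)

    def channel_at(self, vertex_id):
        return {"diffuse": self.diffuse_albedo[vertex_id],
                "specular": self.specular_albedo[vertex_id],
                "roughness": float(self.roughness[vertex_id]),
                "normal": self.normals[vertex_id]}

V = 4
model = ReconstructionModel(
    vertices=np.zeros((V, 3)), faces=np.array([[0, 1, 2], [0, 2, 3]]),
    diffuse_albedo=np.full((V, 3), 0.5), specular_albedo=np.full((V, 3), 0.04),
    roughness=np.full(V, 0.3), normals=np.tile([0.0, 0.0, 1.0], (V, 1)))
print(model.channel_at(2))
```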
  • Patent number: 11663820
    Abstract: Systems, apparatuses, interfaces, and methods for implementing the systems, apparatuses, and interfaces include capturing an image, displaying the image on a display device, scanning and identifying objects and/or attributes associated with the image and/or objects therein, generating a 3D AR environment within the display overlaid on the image, generating a ray pointer for improved interaction with the image and the generated 3D AR environment, where the environment includes virtual constructs corresponding to the image objects and/or attributes, and selecting, activating, animating, and/or manipulating the virtual constructs within the 3D AR environment.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: May 30, 2023
    Assignee: Quantum Interface LLC
    Inventors: Jonathan Josephson, Robert W. Strozier
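A minimal sketch of the ray-pointer selection step: cast a ray from the pointer and pick the nearest virtual construct it intersects, with constructs modeled as bounding spheres. The sphere representation is an assumption; the patent does not specify the intersection test.

```python
import numpy as np

def pick_construct(origin, direction, constructs):
    """constructs: list of (name, center, radius) bounding spheres.
    Returns the name of the nearest construct hit by the pointer ray, or None."""
    direction = direction / np.linalg.norm(direction)
    best, best_t = None, np.inf
    for name, center, radius in constructs:
        oc = origin - center
        b = oc @ direction
        disc = b * b - (oc @ oc - radius * radius)
        if disc < 0:
            continue                              # ray misses this sphere
        t = -b - np.sqrt(disc)                    # nearest intersection distance
        if 0 < t < best_t:
            best, best_t = name, t
    return best

constructs = [("lamp", np.array([0.0, 0.0, 5.0]), 1.0),
              ("chair", np.array([3.0, 0.0, 5.0]), 1.0)]
print(pick_construct(np.zeros(3), np.array([0.0, 0.0, 1.0]), constructs))  # lamp
```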