Patents Examined by Ryan McCulley
  • Patent number: 11756254
    Abstract: Light contribution information can be determined and cached for use in rendering image frames for a scene. In at least one embodiment, a spatial hash data structure can be used to split the scene into regions, such as octahedral voxels. Using cast light rays, an average light contribution can be computed for each individual voxel. Those light values can then be used to build a cumulative distribution function for each voxel that can be used to select which lights to sample for a given frame during rendering. The sampling for a region or voxel can be based at least in part upon the number of contributing lights for that region, as well as the relative contributions of those lights. Such an approach can be very bandwidth and cache efficient, while providing high image quality.
    Type: Grant
    Filed: December 8, 2020
    Date of Patent: September 12, 2023
    Assignee: Nvidia Corporation
    Inventors: Blagovest Borislavov Taskov, Apollo Ellis
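    Illustrative sketch: a minimal Python example of the per-voxel light selection idea above, in which cached average contributions are turned into a cumulative distribution function and sampled; the light names and contribution values are assumptions, not taken from the patent.
      import random
      from bisect import bisect_left

      def build_light_cdf(avg_contributions):
          # Normalise the voxel's per-light average contributions into a CDF.
          total = sum(avg_contributions.values())
          lights, cdf, running = [], [], 0.0
          for light_id, contribution in avg_contributions.items():
              running += contribution / total
              lights.append(light_id)
              cdf.append(running)
          return lights, cdf

      def sample_light(lights, cdf):
          # Importance-sample one light for this frame according to the voxel's CDF.
          u = random.random()
          return lights[min(bisect_left(cdf, u), len(lights) - 1)]

      # Hypothetical cached averages for one voxel: light id -> average contribution.
      voxel_lights = {"key_light": 5.0, "fill_light": 1.0, "lamp": 0.25}
      lights, cdf = build_light_cdf(voxel_lights)
      print(sample_light(lights, cdf))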
  • Patent number: 11756256
    Abstract: A ray tracing unit implemented in a graphics rendering system includes processing logic configured to perform ray tracing operations on rays, a dedicated ray memory coupled to the processing logic and configured to store ray data for rays to be processed by the processing logic, an interface to a memory system, and control logic configured to manage allocation of ray data to either the dedicated ray memory or the memory system. Core ray data for rays to be processed by the processing logic is stored in the dedicated ray memory, and at least some non-core ray data for the rays is stored in the memory system. This allows core ray data for many rays to be stored in the dedicated ray memory without the size of the dedicated ray memory becoming too wasteful when the ray tracing unit is not in use.
    Type: Grant
    Filed: March 17, 2022
    Date of Patent: September 12, 2023
    Assignee: Imagination Technologies Limited
    Inventors: John W. Howson, Steven J. Clohset, Ali Rabbani
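    Illustrative sketch: a small Python model of the core/non-core split described above, with hot ray fields kept in a bounded "dedicated" store and the remaining payload pushed to a backing store; the field choices and capacity are assumptions, not the patent's hardware design.
      from dataclasses import dataclass, field

      @dataclass
      class CoreRayData:
          # Fields the traversal logic touches on every step (illustrative choice).
          origin: tuple
          direction: tuple
          t_max: float

      @dataclass
      class RayStore:
          capacity: int                                   # size of the dedicated ray memory
          dedicated: dict = field(default_factory=dict)   # ray id -> core ray data
          backing: dict = field(default_factory=dict)     # ray id -> non-core data

      def allocate(store, ray_id, core, non_core):
          # Core data stays in the dedicated memory; non-core data goes to the memory system.
          if len(store.dedicated) >= store.capacity:
              raise MemoryError("dedicated ray memory full; defer or flush rays")
          store.dedicated[ray_id] = core
          store.backing[ray_id] = non_core

      store = RayStore(capacity=2)
      allocate(store, 0, CoreRayData((0, 0, 0), (0, 0, 1), 1e9), {"pixel": (10, 20)})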
  • Patent number: 11748948
    Abstract: Disclosed are a mesh reconstruction method and apparatus for a transparent object, a computer device and a storage medium. The method includes: acquiring object images of the transparent object at multiple capture view angles and calibration information corresponding to an image capture device, the image capture device being configured to capture the object images; generating an initial mesh model corresponding to the transparent object according to the object images acquired at the multiple capture view angles; determining a light ray refraction loss corresponding to an emergent light ray of the image capture device according to the calibration information, and determining a model loss corresponding to the initial mesh model according to the light ray refraction loss; and reconstructing the initial mesh model according to the model loss, to obtain a target mesh model corresponding to the transparent object.
    Type: Grant
    Filed: October 12, 2020
    Date of Patent: September 5, 2023
    Assignee: Shenzhen University
    Inventors: Hui Huang, Jiahui Lyu
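    Illustrative sketch: one plausible building block of the refraction loss mentioned above is refracting an emergent ray with Snell's law and penalising its deviation from the observed ray; the loss form and refractive index below are assumptions, not the patent's exact formulation.
      import numpy as np

      def refract(direction, normal, eta):
          # Refract a unit direction at a surface with unit normal; eta = n1 / n2.
          cos_i = -np.dot(direction, normal)
          sin2_t = eta * eta * (1.0 - cos_i * cos_i)
          if sin2_t > 1.0:
              return None                        # total internal reflection
          cos_t = np.sqrt(1.0 - sin2_t)
          return eta * direction + (eta * cos_i - cos_t) * normal

      def refraction_loss(predicted_dir, observed_dir):
          # Angular mismatch between the refracted ray and the observed ray.
          return 1.0 - float(np.dot(predicted_dir, observed_dir))

      d = np.array([0.0, 0.0, -1.0])             # incoming ray, normal incidence
      n = np.array([0.0, 0.0, 1.0])
      out = refract(d, n, 1.0 / 1.5)             # air into glass
      print(refraction_loss(out, d))             # 0.0: no deviation at normal incidence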
  • Patent number: 11744643
    Abstract: Systems and methods that facilitate the pre-operative prediction of post-operative tissue function to assist a clinician in planning for and carrying out a surgical procedure. In particular, systems and methods that facilitate the pre-operative prediction of post-resection lung tissue function, thus assisting a clinician in determining the location(s) and volume(s) of lung tissue to be resected.
    Type: Grant
    Filed: December 18, 2019
    Date of Patent: September 5, 2023
    Assignee: COVIDIEN LP
    Inventors: Francesca Rossetto, Joe D. Sartor
  • Patent number: 11735065
    Abstract: A method for building a vehicle crash dummy skull model includes: building an original skull model according to computed tomography scan data of a human skull; inserting a new first node according to a distance between adjacent first nodes on a hole boundary contour line, to obtain a target contour line; inserting a plurality of second nodes into holes according to the target contour line to obtain an intermediate boundary contour line; forming triangular nets according to the nodes on the target contour line and the intermediate boundary contour line; and, with the intermediate boundary contour line as a new target contour line, returning to the operation of inserting a plurality of second nodes until the number of nodes of a final target contour line is less than or equal to a preset value. This embodiment is conducive to building a skull model with a smooth cranial surface.
    Type: Grant
    Filed: March 26, 2023
    Date of Patent: August 22, 2023
    Assignees: CHINA AUTOMOTIVE TECHNOLOGY AND RESEARCH CENTER CO., LTD, CATARC AUTOMOTIVE TEST CENTER (TIANJIN) CO., LTD
    Inventors: Zhixin Wu, Zhixin Liu, Hengxu Lv, Weidong Liu, Zhengqi Fan, Yongqiang Wu, Hai Liu, Kai Wang
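    Illustrative sketch: a toy Python version of the ring-by-ring hole filling described above, inserting intermediate contours between the hole boundary and its centroid and triangulating between successive rings; the shrink factor and stopping rule are assumptions, not the patent's node-insertion criteria.
      import numpy as np

      def fill_hole(boundary, shrink=0.5, min_nodes=3):
          triangles = []
          ring = np.asarray(boundary, dtype=float)
          while len(ring) > min_nodes:
              centroid = ring.mean(axis=0)
              inner = centroid + shrink * (ring - centroid)   # intermediate boundary contour
              n = len(ring)
              for i in range(n):
                  j = (i + 1) % n
                  triangles.append((ring[i], ring[j], inner[i]))
                  triangles.append((ring[j], inner[j], inner[i]))
              # The inner contour becomes the new target contour, with fewer nodes.
              ring = inner[::2] if len(inner) > 2 * min_nodes else inner[:min_nodes]
          return triangles

      hole = [(np.cos(a), np.sin(a), 0.0) for a in np.linspace(0, 2 * np.pi, 12, endpoint=False)]
      print(len(fill_hole(hole)))                # number of triangles filling the hole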
  • Patent number: 11734892
    Abstract: The present application relates to methods and apparatuses for three-dimensional reconstruction of a transparent object, computer devices, and storage mediums.
    Type: Grant
    Filed: July 26, 2018
    Date of Patent: August 22, 2023
    Assignee: SHENZHEN UNIVERSITY
    Inventors: Hui Huang, Bojian Wu
  • Patent number: 11734845
    Abstract: Systems and methods for extracting ground plane information directly from monocular images using self-supervised depth networks are disclosed. Self-supervised depth networks are used to generate a three-dimensional reconstruction of observed structures. From this reconstruction the system may generate surface normals. The surface normals can be calculated directly from depth maps in a way that is far less computationally expensive than surface-normal extraction from standard LiDAR data, while remaining accurate. Surface normals that face substantially the same direction and point upwards may be determined to reflect a ground plane.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: August 22, 2023
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Vitor Guizilini, Rares A. Ambrus, Adrien David Gaidon
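    Illustrative sketch: estimating per-pixel surface normals from a depth map by back-projecting to 3D and taking cross products of image-space derivatives, then flagging upward-facing normals as ground candidates; the intrinsics, "up" vector, and threshold are assumptions.
      import numpy as np

      def normals_from_depth(depth, fx=500.0, fy=500.0):
          h, w = depth.shape
          u, v = np.meshgrid(np.arange(w), np.arange(h))
          x = (u - w / 2) * depth / fx           # back-project pixels to camera space
          y = (v - h / 2) * depth / fy
          points = np.dstack([x, y, depth])
          dx = np.gradient(points, axis=1)       # derivatives along image columns
          dy = np.gradient(points, axis=0)       # derivatives along image rows
          normals = np.cross(dx, dy)
          return normals / (np.linalg.norm(normals, axis=2, keepdims=True) + 1e-8)

      def ground_mask(normals, up=(0.0, -1.0, 0.0), cos_threshold=0.95):
          # Pixels whose normal points roughly "up" are ground-plane candidates.
          return normals @ np.asarray(up) > cos_threshold

      depth = np.ones((4, 4)) * 5.0              # toy depth map standing in for a network output
      print(ground_mask(normals_from_depth(depth)).shape)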
  • Patent number: 11727645
    Abstract: In a device (100) and a method implemented by said device, two immersive systems are connected such that a virtual environment generated on a source immersive system (10) is reproduced on a target immersive system (20). The images of the virtual environment displayed on the display system of the source immersive system are transformed in order to be displayed on the display system of the target immersive system, such that a virtual reproduction of the virtual environment is correctly represented on the target immersive system for an observer, irrespective of the structural and software differences between the two immersive systems. Freezing certain display data and observation conditions of the source system results in a temporary stabilisation of the representation of the virtual environment on the target system without any negative effect on the coherence of the representation on said target system.
    Type: Grant
    Filed: April 27, 2017
    Date of Patent: August 15, 2023
    Assignee: IMMERSION
    Inventors: Jean-Baptiste De La Riviere, Valentin Logeais, Cédric Kervegant
  • Patent number: 11721081
    Abstract: Systems and methods are provided for a workflow framework that scriptwriters can utilize when developing (live-action/animation/cinematic) virtual reality (VR) experiences or content. A script can be parsed to identify one or more elements in a script, and a VR representation of the one or more elements can be automatically generated. A user may develop or edit the script which can be presented in a visual and temporal manner along with the VR representation. The user may edit the VR representation, and the visual and temporal presentation of the script can be commensurately represented. The script may be analyzed for consistency and/or cohesiveness in the context of the VR representation or experience. A preview of the VR experience or content can be generated from the script and/or the VR representation.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: August 8, 2023
    Assignee: Disney Enterprises, Inc.
    Inventors: Sasha Anna Schriber, Isa Simo, Merada Richter, Mubbasir Kapadia, Markus Gross
  • Patent number: 11715261
    Abstract: A method for detecting and modelling an object on a surface of a road by first scanning the road and generating a 3D model of the scanned road (which 3D model contains a description of the 3D surface of the road) and then creating a top-view image of the road. The object is detected on the surface of the road by evaluating the top-view image of the road. The detected object is projected onto the surface of the road in the 3D model of the scanned road. The object projected onto the surface of the road in the 3D model of the scanned road is then modelled.
    Type: Grant
    Filed: June 10, 2021
    Date of Patent: August 1, 2023
    Assignee: Continental Automotive GmbH
    Inventors: Haitao Xue, Dongbing Quan, Changhong Yang, James Herbst
  • Patent number: 11710465
    Abstract: A method and apparatus analyze a difference of at least two gradings of an image on the basis of: obtaining a first graded picture (LDR) with a first luminance dynamic range; obtaining data encoding a grading of a second graded picture (HDR) with a second luminance dynamic range, different from the first luminance dynamic range; and determining a grading difference data structure (DATGRAD) on the basis of at least the data encoding the grading of the second graded picture (HDR), which allows more intelligently adaptive encoding of the imaged scenes, and consequently also better use of those pictures, such as higher quality rendering under various rendering scenarios.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: July 25, 2023
    Assignee: Koninklijke Philips N.V.
    Inventors: Remco Theodorus Johannes Muijs, Mark Jozef Willem Mertens, Wilhelmus Hendrikus Alfonsus Bruls, Chris Damkat, Martin Hammer, Cornelis Wilhelmus Kwisthout
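    Illustrative sketch: one simple way to summarise how a second (HDR) grading departs from a first (LDR) grading is to bin pixels by LDR luminance and record the mean luminance offset per bin; this toy structure only stands in for the patent's grading difference data structure (DATGRAD), whose encoding is not given here.
      import numpy as np

      def grading_difference(ldr_luma, hdr_luma, bins=16):
          edges = np.linspace(0.0, 1.0, bins + 1)
          idx = np.clip(np.digitize(ldr_luma.ravel(), edges) - 1, 0, bins - 1)
          diff = np.zeros(bins)
          for b in range(bins):
              mask = idx == b
              if mask.any():
                  # Average luminance shift introduced by the second grading in this bin.
                  diff[b] = hdr_luma.ravel()[mask].mean() - ldr_luma.ravel()[mask].mean()
          return diff

      ldr = np.random.rand(64, 64)
      hdr = np.clip(ldr ** 2.2 * 1.5, 0.0, None)  # toy second grading of the same image
      print(grading_difference(ldr, hdr))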
  • Patent number: 11704916
    Abstract: A three-dimensional environment analysis method is disclosed. The method includes (i) receiving original point cloud data of a working environment, (ii) processing a map constructed on the basis of the original point cloud data in order to separate out a ground surface, a wall surface and an obstacle in the working environment, (iii) pairing the ground surface with the wall surface according to the degree of proximity between the ground surface and wall surface that are separated out to form one or more adjacent ground-wall pair sets, and (iv) subjecting the one or more adjacent ground-wall pair sets to ray tracing analysis in order to obtain a line-of-sight zone and a non-line-of-sight zone in the working environment. A three-dimensional environment analysis device, a computer storage medium and a wireless sensor system are also disclosed.
    Type: Grant
    Filed: June 27, 2021
    Date of Patent: July 18, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Marc Patrick Zapf, William Wang, Hao Sun
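    Illustrative sketch: pairing each separated wall surface with the nearest ground surface when the two are close enough, as a stand-in for the proximity-based pairing step above; the centroids and distance threshold are assumptions.
      import numpy as np

      def pair_ground_walls(ground_centroids, wall_centroids, max_distance=2.0):
          pairs = []
          for wi, wall in enumerate(wall_centroids):
              dists = np.linalg.norm(np.asarray(ground_centroids) - np.asarray(wall), axis=1)
              gi = int(np.argmin(dists))
              if dists[gi] <= max_distance:
                  pairs.append((gi, wi))         # adjacent ground-wall pair
          return pairs

      grounds = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
      walls = [(0.5, 0.0, 1.5), (10.2, 0.0, 1.4), (50.0, 0.0, 1.0)]
      print(pair_ground_walls(grounds, walls))   # the distant third wall stays unpaired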
  • Patent number: 11690675
    Abstract: A process according to certain embodiments includes generating a distal femur model including an intercondylar surface model, receiving information related to user-selected points on the intercondylar surface model, generating a datum line extending between the points, generating an axis line, and determining an AP axis based upon the axis line. Generating the axis line includes performing an axis line procedure including generating a plurality of planes along the datum line, generating a plurality of contours at intersections between the intercondylar surface model and the planes, generating saddle points at local extrema of the contours, and fitting the axis line to the saddle points. The process may further include generating an updated datum line based upon the axis line, and performing a subsequent iteration of the axis line procedure using the updated datum line.
    Type: Grant
    Filed: December 22, 2021
    Date of Patent: July 4, 2023
    Assignee: SMITH & NEPHEW, INC.
    Inventors: Stephen Mirdo, Yangqiu Hu
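    Illustrative sketch: the "fit the axis line to the saddle points" step could be realised as a principal-component line fit; this is one plausible reading, not the patent's specified fitting method.
      import numpy as np

      def fit_axis_line(saddle_points):
          pts = np.asarray(saddle_points, dtype=float)
          centroid = pts.mean(axis=0)
          _, _, vt = np.linalg.svd(pts - centroid)
          return centroid, vt[0]                 # a point on the line and its direction

      saddles = [(0.0, 0.1, 0.0), (1.0, -0.05, 0.1), (2.0, 0.02, -0.05), (3.0, 0.0, 0.03)]
      origin, direction = fit_axis_line(saddles)
      print(origin, direction)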
  • Patent number: 11676334
    Abstract: The present invention relates to a plenoptic point cloud generation method. The method of generating a plenoptic point cloud according to one embodiment of the present invention comprises: obtaining a two-dimensional (2D) image for each view and depth information from a plurality of cameras; determining a method of generating a plenoptic point cloud; and generating the plenoptic point cloud by applying the determined method to at least one of the 2D image for each view or the depth information, wherein the method of generating the plenoptic point cloud includes at least one of a simultaneous generation method of the point cloud and a sequential generation method of the point cloud.
    Type: Grant
    Filed: December 3, 2020
    Date of Patent: June 13, 2023
    Assignees: Electronics and Telecommunications Research Institute, HANDONG GLOBAL UNIVERSITY INDUSTRY-ACADEMIC COOPERATION FOUNDATION
    Inventors: Ha Hyun Lee, Jung Won Kang, Soo Woong Kim, Gun Bang, Jin Ho Lee, Sung Chang Lim, Sung Soo Hwang, Se Hui Kang, Ji Won Moon, Mu Hyun Back, Jin Kyu Lee, Hyun Min Han
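    Illustrative sketch: the sequential generation route could accumulate the cloud one view at a time by back-projecting each view's depth map and attaching that view's colours to the points; the intrinsics and pose below are placeholders, and the patent's simultaneous/sequential specifics are not reproduced.
      import numpy as np

      def backproject_view(depth, color, intrinsics, pose):
          fx, fy, cx, cy = intrinsics
          h, w = depth.shape
          u, v = np.meshgrid(np.arange(w), np.arange(h))
          z = depth.ravel()
          pts_cam = np.stack([(u.ravel() - cx) * z / fx,
                              (v.ravel() - cy) * z / fy,
                              z,
                              np.ones_like(z)])
          pts_world = (pose @ pts_cam)[:3].T     # 4x4 camera-to-world pose
          return pts_world, color.reshape(-1, 3) # per-view colour kept with each point

      depth = np.full((2, 2), 2.0)               # toy depth map for one view
      color = np.zeros((2, 2, 3))
      pts, cols = backproject_view(depth, color, (100.0, 100.0, 1.0, 1.0), np.eye(4))
      print(pts.shape, cols.shape)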
  • Patent number: 11670052
    Abstract: Disclosed is a system and associated methods for generating a mutable tree to efficiently access data within a three-dimensional (“3D”) environment. The system generates the mutable tree with a root node defined at a root node position, a first branch with nodes for each of a first set of subdivided regions that are a first distance from the root node position, and a second branch with nodes for each of a second set of subdivided regions that are a second distance from the root node position. The system sorts the mutable tree in response to a request to access data from a first position within the 3D environment so that the first node in the first branch is the first subtree node that is closest to the first position, and the first node in the second branch is the second subtree node that is closest to the first position.
    Type: Grant
    Filed: December 28, 2022
    Date of Patent: June 6, 2023
    Assignee: Illuscio, Inc.
    Inventor: Kevin Edward Dean
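    Illustrative sketch: the mutable-tree sort can be pictured as reordering every branch so the child region nearest the query position is visited first; the node layout below is an assumption, not Illuscio's data format.
      import math

      class Node:
          def __init__(self, center):
              self.center = center               # centre of the subdivided region
              self.children = []

      def sort_for_query(node, query):
          # Nearest child region first, applied recursively down the tree.
          node.children.sort(key=lambda child: math.dist(child.center, query))
          for child in node.children:
              sort_for_query(child, query)

      root = Node((0.0, 0.0, 0.0))
      root.children = [Node((10.0, 0.0, 0.0)), Node((1.0, 0.0, 0.0)), Node((5.0, 0.0, 0.0))]
      sort_for_query(root, query=(0.5, 0.0, 0.0))
      print([c.center for c in root.children])   # nearest region now comes first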
  • Patent number: 11663467
    Abstract: Embodiments of the present invention provide systems, methods, and non-transitory computer storage media for generating an ambient occlusion (AO) map for a 2D image that can be combined with the 2D image to adjust the contrast of the 2D image based on the geometric information in the 2D image. In embodiments, using a trained neural network, an AO map for a 2D image is automatically generated without any predefined 3D scene information. Optimizing the neural network to generate an estimated AO map for a 2D image requires training, testing, and validating the neural network using a synthetic dataset comprised of pairs of images and ground truth AO maps rendered from 3D scenes. By using an estimated AO map to adjust the contrast of a 2D image, the contrast of the image can be adjusted to make the image appear lifelike by modifying the shadows and shading in the image based on the ambient lighting present in the image.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: May 30, 2023
    Assignee: ADOBE INC.
    Inventors: Long Mai, Yannick Hold-Geoffroy, Naoto Inoue, Daichi Ito, Brian Lynn Price
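    Illustrative sketch: once an AO map has been estimated for a 2D image, one simple way to use it is to darken occluded regions by blending the map into the image; the compositing formula and strength are assumptions, since the patent does not fix them here.
      import numpy as np

      def apply_ao(image, ao_map, strength=0.7):
          # 1.0 = unoccluded, 0.0 = fully occluded; blend toward darker shading.
          ao = np.clip(ao_map, 0.0, 1.0)[..., None]
          return image * ((1.0 - strength) + strength * ao)

      image = np.random.rand(4, 4, 3)
      ao_map = np.ones((4, 4)) * 0.5             # placeholder for a network prediction
      print(apply_ao(image, ao_map).shape)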
  • Patent number: 11663776
    Abstract: A computer system maintains structure data indicating geometrical constraints for each structure category of a plurality of structure categories. The computer system generates a virtual 3D representation of a structure based on a set of images depicting the structure. For each image in the set of images, one or more landmarks are identified. Based on the landmarks, a candidate structure category is selected. The virtual 3D representation is generated based on the geometrical constraints of the candidate structure category and the landmarks identified in the set of images.
    Type: Grant
    Filed: July 19, 2021
    Date of Patent: May 30, 2023
    Assignee: HOVER Inc.
    Inventors: Ajay Mishra, Manish Upendran, A. J. Altman, William Castillo
  • Patent number: 11640690
    Abstract: Methods and systems are provided for training a machine learning model to generate density values and radiance components based on positional data, along with a weighting scheme associated with a particular view direction based on directional data to compute a final RGB value for each point along a plurality of camera rays. The positional data and directional data are extracted from set of training images of a particular static scene. The radiance components, density values, and weighting schemes are cached for efficient image data processing to perform volume rendering for each point sampled. A novel viewpoint of a static scene is generated based on the volume rendering for each point sampled.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: May 2, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephan Joachim Garbin, Marek Adam Kowalski, Matthew Alastair Johnson
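    Illustrative sketch: compositing one camera ray from cached quantities, where each sample's colour is a view-direction-weighted sum of radiance components and samples are accumulated with standard volume-rendering weights; the array shapes and values are assumptions, not the patent's caching layout.
      import numpy as np

      def render_ray(densities, components, view_weights, deltas):
          colors = components @ view_weights      # (samples, 3, k) x (k,) -> (samples, 3)
          alphas = 1.0 - np.exp(-densities * deltas)
          transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
          weights = alphas * transmittance        # contribution of each sample
          return (weights[:, None] * colors).sum(axis=0)

      n, k = 8, 4                                 # samples per ray, radiance components
      densities = np.full(n, 0.5)                 # cached density values
      components = np.random.rand(n, 3, k)        # cached per-position radiance components
      view_weights = np.full(k, 1.0 / k)          # cached weighting for one view direction
      print(render_ray(densities, components, view_weights, deltas=np.full(n, 0.1)))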
  • Patent number: 11631211
    Abstract: Disclosed is surface guided cropping in volume rendering of 3D volumetric data from intervening anatomical structures in the patient's body. A digital 3D representation expressing the topography of a first anatomical structure is used to define a clipping surface or a bounding volume which then is used in the volume rendering to exclude data from an intervening structure when generating a 2D projection of the first anatomical structure.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: April 18, 2023
    Assignee: 3SHAPE A/S
    Inventors: Anders Kjær-Nielsen, Adam Carsten Stoot
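    Illustrative sketch: during ray casting, samples in front of a clipping depth derived from the first structure's topography can simply be skipped so intervening anatomy does not contribute; this depth-threshold form is a simplification of the patent's clipping surface or bounding volume.
      import numpy as np

      def accumulate_clipped(sample_values, sample_depths, clip_depth):
          kept = sample_depths >= clip_depth       # exclude data in front of the clipping surface
          return sample_values[kept].sum()         # toy accumulation of the remaining samples

      values = np.array([0.9, 0.8, 0.1, 0.2, 0.3]) # intensities along one cast ray
      depths = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
      print(accumulate_clipped(values, depths, clip_depth=2.5))   # first two samples are clipped away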
  • Patent number: 11631187
    Abstract: Systems, apparatuses, and methods for implementing a depth buffer pre-pass are disclosed. A rendering application uses a binning approach to render primitives of a virtual scene on a tile-by-tile basis, with each tile corresponding to a portion of the screen. The application causes a depth buffer pre-pass to be performed for the primitives of the tile before a pixel shader is invoked. During the depth buffer pre-pass, only the depth part of the virtual scene is rendered to determine which pixel samples are visible and which pixel samples are hidden. Then, the scene is redrawn, but the pixel samples that are hidden are not sent to the pixel shader. In cases where a relatively large percentage of primitives overlap, this technique increases the efficiency of the rendering application since pixel shading can be avoided for the pixel samples that are hidden.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: April 18, 2023
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Jan Henrik Achrenius, Mika Tuomi, Kiia Kallio, Pazhani Pillai, Laurent Lefebvre
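    Illustrative sketch: a two-pass, per-tile flow in Python where the first pass renders only depth and the second pass shades only samples that survive the depth test; the fragment representation is an assumption, not the patent's hardware pipeline.
      def depth_prepass(tile_fragments, width, height):
          # Pass 1: depth-only rendering, no pixel shading.
          depth = [[float("inf")] * width for _ in range(height)]
          for x, y, z, _ in tile_fragments:
              if z < depth[y][x]:
                  depth[y][x] = z
          return depth

      def shading_pass(tile_fragments, depth):
          # Pass 2: hidden samples never reach the pixel shader.
          shaded = 0
          for x, y, z, shade in tile_fragments:
              if z <= depth[y][x]:
                  shade()
                  shaded += 1
          return shaded

      fragments = [(0, 0, 5.0, lambda: None), (0, 0, 2.0, lambda: None)]   # two overlapping samples
      depth = depth_prepass(fragments, width=1, height=1)
      print(shading_pass(fragments, depth))      # only the nearer sample is shaded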