Lighting/shading Patents (Class 345/426)
-
Patent number: 12141414
Abstract: Various implementations disclosed herein include devices, systems, and methods that provide a CGR environment in which virtual objects from one or more apps are included. User interactions with the virtual objects are detected and interpreted by a system that is separate from the apps that provide the virtual objects. The system detects user interactions received via one or more input modalities and interprets those user interactions as events. These events provide higher-level, input-modality-independent abstractions of the lower-level, input-modality-dependent user interactions that are detected. The system uses UI capability data provided by the apps to interpret user interactions with respect to the virtual objects provided by the apps. For example, the UI capability data can identify whether a virtual object is moveable, actionable, hover-able, etc., and the system interprets user interactions at or near the virtual object accordingly.
Type: Grant
Filed: July 3, 2023
Date of Patent: November 12, 2024
Assignee: Apple Inc.
Inventors: Edwin Iskandar, Ittinop Dumnernchanvanit, Samuel L. Iglesias, Timothy R. Oriol
-
Patent number: 12131503
Abstract: Virtual creation and visualization of a physically based virtual material swatch includes identifying a capture device model to support for use in the generation of the virtual material swatch, determining the camera response of the capture device model, determining lookup tables of linearly transformed cosines for a plurality of parameters of a shading model, identifying a sample physical material, and determining a material description for the sample physical material. A system and method may also include scanning a user physical material, selecting one or more of a material preset, geometry preset, environment preset, and camera preset, and creating a virtual material swatch for the user physical material based on the selections.
Type: Grant
Filed: December 21, 2021
Date of Patent: October 29, 2024
Assignee: Aurora Operations, Inc.
Inventors: Alex Harvill, Allen Hemberger, Michael Fu
-
Patent number: 12112008
Abstract: Methods and systems for simulating light interaction and physical materials in a graphical user interface (GUI) of a resource-constrained device are provided. Simulating physical materials, such as glass and metal, in a GUI can allow a user to feel more natural in interacting with the GUI. The user experience can be further enhanced if the simulated physical materials in the GUI can interact with the device's environment in a manner similar to how the actual physical materials would interact. However, continually polling various sensors can be resource-intensive, especially for resource-constrained mobile devices. Accordingly, a mobile device can intelligently determine whether to begin a reduced detail mode, and then render user interface objects in the reduced detail mode to conserve resources.
Type: Grant
Filed: July 26, 2021
Date of Patent: October 8, 2024
Assignee: Apple Inc.
Inventors: Jesse W. Boettcher, Michael I. Ingrassia, Jr., Jeri C. Mason, Anton M. Davydov, David J. Rempel, Imran Chaudhri
-
Patent number: 12112492
Abstract: A three-dimensional (3D) sensing device is configured to sense an object. The 3D sensing device includes a flood light source, a structured light source, an image sensor, and a controller. The controller is configured to perform: commanding the flood light source and the structured light source to emit a flood light and a structured light in sequence; commanding the image sensor to sense a first reflective light and a second reflective light in sequence, so as to obtain a first image frame and a second image frame; combining the first image frame and the second image frame into a determination frame; and determining that the object is a specular reflection object in response to determining that the determination frame has at least two spots having gray levels satisfying a predetermined condition. A specular reflection object detection method is also provided.
Type: Grant
Filed: March 10, 2022
Date of Patent: October 8, 2024
Assignee: HIMAX TECHNOLOGIES LIMITED
Inventors: Wu-Feng Chen, Pen-Hsin Chen, Cheng-Che Tsai, Hsueh-Tsung Lu
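The detection step above reduces to combining the two captures and counting spots whose gray levels meet a condition. A minimal sketch, assuming 8-bit NumPy frames and an illustrative "saturated spot surrounded by dark pixels" heuristic; the patent's actual predetermined condition is not disclosed here:

```python
import numpy as np
from scipy import ndimage

def is_specular_object(flood_frame, structured_frame, dark=10, bright=250, min_spots=2):
    """Combine the flood-light and structured-light captures, then look for saturated
    spots whose immediate surroundings stay dark (an assumed stand-in for the patent's
    'gray levels satisfying a predetermined condition')."""
    combined = np.maximum(flood_frame.astype(np.int32), structured_frame.astype(np.int32))
    labels, n_spots = ndimage.label(combined >= bright)
    qualifying = 0
    for i in range(1, n_spots + 1):
        spot = labels == i
        ring = ndimage.binary_dilation(spot, iterations=3) & ~spot   # pixels just around the spot
        if ring.any() and combined[ring].mean() <= dark:
            qualifying += 1
    return qualifying >= min_spots
```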
-
Patent number: 12111177
Abstract: According to an aspect of an embodiment, operations may comprise receiving sensor data from one or more vehicles, determining, by combining the received sensor data, a high definition map comprising a point cloud, and labeling one or more objects in the point cloud. The operations may also comprise generating training data by receiving a new image captured by one of the vehicles, receiving a pose of the vehicle when the new image was captured, determining an object having a label in the point cloud that is observable from the pose of the vehicle, determining a position of the object in the new image, and labeling the new image by assigning the label of the object to the new image, the labeled new image comprising the training data. The operations may also comprise training a deep learning model using the training data.
Type: Grant
Filed: July 2, 2020
Date of Patent: October 8, 2024
Assignee: NVIDIA CORPORATION
Inventors: Yu Zhang, Lin Yang
-
Patent number: 12106435
Abstract: The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned geometry model can predict one or more adjusted depths for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
Type: Grant
Filed: June 30, 2023
Date of Patent: October 1, 2024
Assignee: AURORA OPERATIONS, INC.
Inventors: Sivabalan Manivasagam, Shenlong Wang, Wei-Chiu Ma, Raquel Urtasun
-
Patent number: 12102395
Abstract: A system for positioning a medical object at a desired depth includes a light guiding facility and a processing unit. The processing unit is configured to receive planning information that specifies a desired depth for an arrangement of a predefined section of a medical object in an examination object with respect to an entry point of the medical object in the examination object. The medical object has a mark that has a predefined relative positioning with respect to the predefined section. The processing unit is configured to control the light guiding facility for emitting a light distribution as a function of the planning information and the predefined relative positioning such that the light distribution illuminates the mark if the predefined section is arranged at the desired depth.
Type: Grant
Filed: June 1, 2023
Date of Patent: October 1, 2024
Assignee: Siemens Healthineers AG
Inventor: Alois Regensburger
-
Patent number: 12106423
Abstract: Techniques applicable to a ray tracing hardware accelerator for traversing a hierarchical acceleration structure with reduced false positive ray intersections are disclosed. The reduction of false positives may be based upon one or more of selectively performing a secondary higher precision intersection test for a bounding volume, identifying and culling bounding volumes that degenerate to a point, and parametrically clipping rays that exceed certain configured distance thresholds.
Type: Grant
Filed: September 16, 2022
Date of Patent: October 1, 2024
Assignee: NVIDIA CORPORATION
Inventors: Gregory Muthler, John Burgess, Magnus Andersson, Ian Kwong, Edward Biddulph
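Two of the listed reductions (culling point-degenerate boxes and parametrically clipping rays) can be illustrated with a standard slab test. This is a software sketch of the idea, not the accelerator's actual arithmetic; the epsilon, the distance range, and the use of precomputed inverse direction components are assumptions:

```python
def ray_hits_aabb(origin, inv_dir, box_min, box_max, t_min=0.0, t_max=1e30, eps=1e-7):
    """Slab test with two false-positive reductions: boxes degenerated to (nearly) a point
    are culled, and the ray is parametrically clipped to [t_min, t_max].  Assumes finite
    inverse-direction components (axis-parallel rays are not special-cased here)."""
    if all(bmax - bmin < eps for bmin, bmax in zip(box_min, box_max)):
        return False                      # degenerate box: cull instead of reporting a hit
    for o, inv, bmin, bmax in zip(origin, inv_dir, box_min, box_max):
        t0, t1 = (bmin - o) * inv, (bmax - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0
        t_min, t_max = max(t_min, t0), min(t_max, t1)
        if t_min > t_max:                 # clipped interval is empty: miss
            return False
    return True
```

A secondary, higher-precision retest of borderline hits (the third technique in the abstract) would sit on top of a test like this and is not shown.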
-
Patent number: 12100074
Abstract: Provided are systems and methods for synthesizing novel views of complex scenes (e.g., outdoor scenes). In some implementations, the systems and methods can include or use machine-learned models that are capable of learning from unstructured and/or unconstrained collections of imagery such as, for example, "in the wild" photographs. In particular, example implementations of the present disclosure can learn a volumetric scene density and radiance represented by a machine-learned model such as one or more multilayer perceptrons (MLPs).
Type: Grant
Filed: June 1, 2023
Date of Patent: September 24, 2024
Assignee: GOOGLE LLC
Inventors: Daniel Christopher Duckworth, Alexey Dosovitskiy, Ricardo Martin-Brualla, Jonathan Tilton Barron, Noha Radwan, Seyed Mohammad Mehdi Sajjadi
-
Patent number: 12094026
Abstract: Apparatus and method for enhancing graphics rendering photorealism. For example, one embodiment of a graphics processor comprises: a graphics processing pipeline comprising a plurality of graphics processing stages to render a graphics image; a local storage to store intermediate rendering data to generate the graphics image; and machine-learning hardware logic to perform a refinement operation on the graphics image using at least a portion of the intermediate rendering data to generate a translated image.
Type: Grant
Filed: July 27, 2020
Date of Patent: September 17, 2024
Assignee: Intel Corporation
Inventors: Stephan Richter, Vladlen Koltun, Hassan Abu Alhaija
-
Patent number: 12094178
Abstract: An encoding device and a method for point cloud encoding are disclosed. The method includes segmenting an area including points representing a three-dimensional (3D) point cloud into multiple voxels. The method also includes generating patch information for each of the multiple voxels that include at least one of the points of the 3D point cloud. The method further includes assigning the patch information of the multiple voxels to the points included in each respective voxel, to generate patches that represent the 3D point cloud. Additionally, the method includes generating frames that include pixels that represent the patches. The method also includes encoding the frames to generate a bitstream and transmitting the bitstream.
Type: Grant
Filed: August 18, 2021
Date of Patent: September 17, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Esmaeil Faramarzi, Madhukar Budagavi, Rajan Laxman Joshi
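A minimal sketch of the segmentation and assignment steps: points are grouped by voxel, patch information is computed once per occupied voxel, and that information is copied back to every point in the voxel. The `patch_info_fn` helper is hypothetical (for example, estimating the voxel's dominant projection plane); the patent's actual patch information is not reproduced here:

```python
import numpy as np
from collections import defaultdict

def voxelize(points, voxel_size):
    """Group 3D points (an (N, 3) array) by the voxel cell they fall in."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    voxels = defaultdict(list)
    for idx, key in enumerate(map(tuple, keys)):
        voxels[key].append(idx)
    return voxels

def assign_patch_info(points, voxels, patch_info_fn):
    """Compute patch information per occupied voxel and assign it to every point in
    that voxel; patch_info_fn is a caller-supplied (hypothetical) helper."""
    per_point = [None] * len(points)
    for idx_list in voxels.values():
        info = patch_info_fn(points[idx_list])
        for i in idx_list:
            per_point[i] = info
    return per_point
```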
-
Patent number: 12094066
Abstract: Methods and systems are disclosed for performing operations for applying augmented reality elements to a person depicted in an image. The operations include receiving an image that includes data representing a depiction of a person; generating a segmentation of the data representing the person depicted in the image; extracting a portion of the image corresponding to the segmentation of the data representing the person depicted in the image; applying a machine learning model to the portion of the image to predict a surface normal tensor for the data representing the depiction of the person, the surface normal tensor representing surface normals of each pixel within the portion of the image; and applying one or more augmented reality (AR) elements to the image based on the surface normal tensor.
Type: Grant
Filed: June 16, 2022
Date of Patent: September 17, 2024
Assignee: SNAP INC.
Inventors: Madiyar Aitbayev, Brian Fulkerson, Riza Alp Guler, Georgios Papandreou, Himmy Tam
-
Patent number: 12086923
Abstract: Examples described herein provide a method that includes obtaining, by a processing device, three-dimensional (3D) voxel data. The method further includes performing, by the processing device, gray value thresholding based at least in part on the 3D voxel data and assigning a classification value to at least one voxel of the 3D voxel data. The method further includes defining, by the processing device, segments based on the classification value. The method further includes filtering, by the processing device, the segments based on the classification value. The method further includes evaluating, by the processing device, the segments to identify a surface voxel per segment. The method further includes determining, by the processing device, a position of a surface point within the surface voxel.
Type: Grant
Filed: April 19, 2022
Date of Patent: September 10, 2024
Assignee: FARO Technologies, Inc.
Inventors: Ariane Stiebeiner, Georgios Balatzis, Festim Xhohaj, Antonin Klopp-Tosser
-
Patent number: 12086899
Abstract: Systems and methods related to run-time selection of a render mode in which to execute command buffers with a graphics processing unit (GPU) of a device based on performance data corresponding to the device are provided. A user mode driver (UMD) or kernel mode driver (KMD) executed at a central processing unit (CPU) selects a binning mode based on whether performance data that includes sensor data or performance counter data indicates that an associated binning condition or override condition has been met. The UMD or the KMD causes pending command buffers to be patched to execute in the selected binning mode based on whether the binning mode is enabled or disabled.
Type: Grant
Filed: September 25, 2020
Date of Patent: September 10, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: Anirudh R. Acharya, Ruijin Wu, Paul E. Ruggieri
-
Patent number: 12079920
Abstract: A graphics processing unit (GPU) includes one or more processor cores adapted to execute a software-implemented shader program, and one or more hardware-implemented ray tracing units (RTU) adapted to traverse an acceleration structure to calculate intersections of rays with bounding volumes and graphics primitives asynchronously with shader operation. The RTU implements traversal logic to traverse the acceleration structure including transformation of rays as needed to account for variations in coordinate space between levels, stack management, and other tasks to relieve burden on the shader, communicating intersections to the shader which then calculates whether the intersection hit a transparent or opaque portion of the object intersected. Thus, one or more processing cores within the GPU perform accelerated ray tracing by offloading aspects of processing to the RTU, which traverses the acceleration structure within which the 3D environment is represented.
Type: Grant
Filed: November 2, 2022
Date of Patent: September 3, 2024
Assignee: Sony Interactive Entertainment LLC
Inventor: Mark Evan Cerny
-
Patent number: 12080018
Abstract: A tracking apparatus, method, and non-transitory computer readable storage medium thereof are provided. The tracking apparatus generates a map information of simultaneous localization and mapping corresponding to a regional space based on a real-time image. The tracking apparatus calculates a first spatial position and a first orientation of a first display related to the image capturing device in the regional space based on the map information. The tracking apparatus calculates a human pose of a first operating user in the regional space. The tracking apparatus transforms the real-time image to generate a first transformed image corresponding to the first operating user based on the first spatial position, the first orientation, and the human pose, wherein the first transformed image is displayed on the first display.
Type: Grant
Filed: March 27, 2023
Date of Patent: September 3, 2024
Assignee: HTC Corporation
Inventors: Yen-Ting Liu, Meng-Ju Wu
-
Patent number: 12073504
Abstract: A bounding volume is used to approximate the space an object occupies. If a more precise understanding beyond an approximation is required, the object itself is then inspected to determine what space it occupies. Often, a simple volume (such as an axis-aligned box) is used as a bounding volume to approximate the space occupied by an object. But objects can be arbitrary, complicated shapes. So a simple volume often does not fit the object very well. That causes a lot of space that is not occupied by the object to be included in the approximation of the space being occupied by the object. Hardware-based techniques are disclosed herein, for example, for efficiently using multiple bounding volumes (such as axis-aligned bounding boxes) to represent, in effect, an arbitrarily shaped bounding volume to better fit the object, and for using such arbitrary bounding volumes to improve performance in applications such as ray tracing.
Type: Grant
Filed: April 20, 2023
Date of Patent: August 27, 2024
Assignee: NVIDIA Corporation
Inventors: Gregory Muthler, John Burgess
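One way to picture "multiple boxes standing in for one arbitrary bounding volume" is to split the primitives along their longest axis and box each chunk. The sketch below only illustrates the kind of data such a scheme would consume; it is not how the patented hardware builds its boxes:

```python
import numpy as np

def fit_multi_aabb(centroids, prim_bounds, k=4):
    """Split primitives into k chunks along the longest axis of their centroid spread and
    box each chunk.  centroids: (N, 3); prim_bounds: (N, 2, 3) per-primitive (min, max) corners.
    The chunking rule and k are illustrative assumptions."""
    axis = int(np.argmax(centroids.max(axis=0) - centroids.min(axis=0)))
    order = np.argsort(centroids[:, axis])
    boxes = []
    for chunk in np.array_split(order, k):
        if chunk.size == 0:
            continue
        lo = prim_bounds[chunk, 0].min(axis=0)
        hi = prim_bounds[chunk, 1].max(axis=0)
        boxes.append((lo, hi))
    return boxes
```

During traversal a ray would be tested against each of the k boxes; only if it misses all of them can the object be culled, which gives a tighter fit than a single enclosing box.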
-
Patent number: 12066282
Abstract: A lighting stage includes a plurality of lights that project alternating spherical color gradient illumination patterns onto an object or human performer at a predetermined frequency. The lighting stage also includes a plurality of cameras that capture images of an object or human performer corresponding to the alternating spherical color gradient illumination patterns. The lighting stage also includes a plurality of depth sensors that capture depth maps of the object or human performer at the predetermined frequency. The lighting stage also includes (or is associated with) one or more processors that implement a machine learning algorithm to produce a three-dimensional (3D) model of the object or human performer. The 3D model includes relighting parameters used to relight the 3D model under different lighting conditions.
Type: Grant
Filed: November 11, 2020
Date of Patent: August 20, 2024
Assignee: GOOGLE LLC
Inventors: Sean Ryan Francesco Fanello, Kaiwen Guo, Peter Christopher Lincoln, Philip Lindsley Davidson, Jessica L. Busch, Xueming Yu, Geoffrey Harvey, Sergio Orts Escolano, Rohit Kumar Pandey, Jason Dourgarian, Danhang Tang, Adarsh Prakash Murthy Kowdle, Emily B. Cooper, Mingsong Dou, Graham Fyffe, Christoph Rhemann, Jonathan James Taylor, Shahram Izadi, Paul Ernest Debevec
-
Patent number: 12067668
Abstract: There is provided an instruction, or instructions, that can be included in a program to perform a ray tracing operation, with individual execution threads in a group of execution threads executing the program performing the ray tracing operation for a respective ray in a corresponding group of rays, such that the group of rays performs the ray tracing operation together. The instruction(s), when executed by the execution threads, will cause one or more rays from the group of plural rays to be tested for intersection with a set of primitives. A result of the ray-primitive intersection testing can then be returned for the traversal operation.
Type: Grant
Filed: June 3, 2022
Date of Patent: August 20, 2024
Assignee: Arm Limited
Inventors: Richard Bruce, William Robert Stoye, Mathieu Jean Joseph Robart, Jørn Nystad
-
Patent number: 12056808
Abstract: Methods, devices, and apparatuses are provided to facilitate a positioning of an item of virtual content in an extended reality environment. For example, a placement position for an item of virtual content can be transmitted to one or more of a first device and a second device. The placement position can be based on correlated map data generated based on first map data obtained from the first device and second map data obtained from the second device. In some examples, the first device can transmit the placement position to the second device.
Type: Grant
Filed: May 4, 2023
Date of Patent: August 6, 2024
Assignee: QUALCOMM Incorporated
Inventors: Pushkar Gorur Sheshagiri, Pawan Kumar Baheti, Ajit Deepak Gupte, Sandeep Kanakapura Lakshmikantha
-
Ray clustering learning method based on weakly-supervised learning for denoising through ray tracing
Patent number: 12051146
Abstract: Disclosed is a ray clustering learning method based on weakly-supervised learning for denoising using ray tracing. The ray clustering learning method is for learning a denoising model for removing noise from a rendered image through ray tracing, and includes extracting a feature of a simulated ray through the ray tracing and clustering the ray through contrastive learning for the feature.
Type: Grant
Filed: June 24, 2022
Date of Patent: July 30, 2024
Assignee: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
Inventors: Sung-Eui Yoon, In Young Cho, Yuchi Huo
-
Patent number: 12042727
Abstract: The present invention provides an image processing method, wherein a computer executes: an acquisition step S10 of acquiring image information indicating, on a per-pixel basis, distance information indicating the distance from a camera and color information; a determination step S20 of determining, on a per-pixel basis and on the basis of the distance information of individual pixels, settings of a modulation filter that converts the color information of the individual pixels to modulate an image into the style of a painting; and a conversion step S30 of converting the color information on a per-pixel basis on the basis of the settings of the modulation filter determined for the individual pixels.
Type: Grant
Filed: June 24, 2021
Date of Patent: July 23, 2024
Assignee: CYGAMES, INC.
Inventors: Akira Horibata, Kensaku Fujita, Sotaro Hori
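A compact sketch of the determination (S20) and conversion (S30) steps, assuming the "modulation filter" is approximated by a box blur whose radius grows with camera distance; the actual painting-style filter and radius schedule in the patent are not specified here:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def painterly_filter(rgb, depth, near=1.0, far=20.0, max_radius=7):
    """Blur each pixel with a box filter whose radius is driven by its camera distance.
    rgb: float image (H, W, 3); depth: per-pixel camera distance (H, W).
    The radius schedule and the box filter are illustrative choices only."""
    t = np.clip((depth - near) / (far - near), 0.0, 1.0)
    radius = np.rint(1 + t * (max_radius - 1)).astype(int)   # per-pixel filter setting (S20)
    out = np.empty_like(rgb)
    for r in np.unique(radius):                              # per-pixel conversion (S30)
        blurred = uniform_filter(rgb, size=(2 * r + 1, 2 * r + 1, 1))
        mask = radius == r
        out[mask] = blurred[mask]
    return out
```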
-
Patent number: 12045451
Abstract: An electronic device, with a touch-sensitive surface, displays a respective control, which is associated with respective contact intensity criteria, used to determine whether or not a function associated with the respective control will be performed. The device detects a gesture on the touch-sensitive surface, corresponding to an interaction with the respective control. In accordance with a determination that the gesture does not include a contact that meets the respective contact intensity criteria, the device changes the appearance of the respective control to indicate progress toward meeting the respective contact intensity criteria that is used to determine whether or not a function associated with the respective control will be performed. In response to detecting activation of the control, the device performs the function associated with the respective control in accordance with the detected gesture including a contact that meets the respective contact intensity criteria.
Type: Grant
Filed: May 28, 2021
Date of Patent: July 23, 2024
Assignee: APPLE INC.
Inventors: May-Li Khoe, Matthew I. Brown, Bianca C. Costanzo, Avi E. Cieplinski, Jeffrey T. Bernstein, Julian K. Missig
-
Patent number: 12038328
Abstract: Described herein is a method for generating a bi-directional texture function (BTF) of an object, the method including at least the following steps: measuring an initial BTF for the object using a camera-based measurement device, capturing spectral reflectance data for the object for a pre-given number of different measurement geometries using a spectrophotometer, and adapting the initial BTF to the captured spectral reflectance data, thus gaining an optimized BTF. Also described herein are respective systems for generating a bi-directional texture function of an object.
Type: Grant
Filed: March 25, 2020
Date of Patent: July 16, 2024
Assignee: BASF COATINGS GMBH
Inventors: Benjamin Lanfer, Guido Bischoff, Thomas Kantimm
-
Patent number: 12039654
Abstract: Systems and methods for super sampling and viewport shifting of non-real time 3D applications are disclosed. In one embodiment, a graphics processing unit includes a processing resource to execute graphics commands to provide graphics for an application, a capture tool to capture the graphics commands, and a data generator to generate a dataset including at least one frame based on the captured graphics commands and to modify viewport settings for each frame of interest to generate a conditioned dataset.
Type: Grant
Filed: October 15, 2021
Date of Patent: July 16, 2024
Assignee: Intel Corporation
Inventors: Joanna Douglas, Michal Taryma, Mario Garcia, Carlos Dominguez
-
Patent number: 12033230
Abstract: One embodiment provides a method for recommending model characteristics to be used in developing a target geo-spatial physical model for a target geographic location utilizing historical lineage data corresponding to historical geo-spatial physical models, including: receiving information related to the target geographic location, wherein the information describes geographical and domain features of the target geographic location; identifying, using at least one similarity algorithm, at least one other geographic location that is similar to the target geographic location, wherein the at least one geographic location has at least one corresponding historical geo-spatial physical model; and recommending, using at least one machine-learning model and based upon the at least one other geographic location, initial model characteristics for developing and deploying the target geo-spatial physical model.
Type: Grant
Filed: February 18, 2020
Date of Patent: July 9, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Andrew T. Penrose, Jitendra Singh, Himanshu Gupta, Vijay Arya
-
Patent number: 12025853
Abstract: The invention relates to a 9 million pixel black light full-color lens comprising a first lens, a second lens, a third lens, a fourth lens, a fifth lens, a sixth lens, a diaphragm, a seventh lens, an eighth lens, a ninth lens, a tenth lens, an eleventh lens and an equivalent prism which are sequentially arranged from front to back along a light incident direction. The invention overcomes the poor visible-light and infrared resolution, the large chromatic aberration in imaging magnification, and similar shortcomings of existing black light full-color lenses. By adopting a structure of eleven spherical lenses cooperating with the equivalent prism, together with a wide-spectrum optimization design for the lenses, it improves the imaging effect (i.e., resolution) in the visible and infrared bands and provides a high-resolution video stream for image fusion, so that a bright, colored image is output even in a low-illumination environment.
Type: Grant
Filed: June 8, 2021
Date of Patent: July 2, 2024
Assignee: FOCtek Photonics, Inc.
Inventors: Xiaofeng Wen, Yongjie Lin, Jianfang Liu, Muwang Huang, Shaoqin Guo
-
Patent number: 12026849
Abstract: According to example embodiments, an Image View Aggregator identifies a frontal view of an item within an image. The Image View Aggregator identifies at least one reflection view of the item within the image, each reflection view of the item having been captured off a corresponding reflective physical surface. The Image View Aggregator extracts the frontal view of the item and each reflection view of the item from the image. The Image View Aggregator generates a representation of the item based at least on the extracted frontal view of the item and each extracted reflection view of the item.
Type: Grant
Filed: October 5, 2022
Date of Patent: July 2, 2024
Assignee: eBay Inc.
Inventor: Sergio Pinzon Gonzales, Jr.
-
Patent number: 12026822
Abstract: In various examples, the actual spatial properties of a virtual environment are used to produce, for a pixel, an anisotropic filter kernel for a filter having dimensions and weights that accurately reflect the spatial characteristics of the virtual environment. Geometry of the virtual environment may be computed based at least in part on a projection of a light source onto a surface through an occluder, in order to determine a footprint that reflects a contribution of the light source to lighting conditions of the pixel associated with a point on the surface. The footprint may define a size, orientation, and/or shape of the anisotropic filter kernel and corresponding filter weights. The anisotropic filter kernel may be applied to the pixel to produce a graphically-rendered image of the virtual environment.
Type: Grant
Filed: June 21, 2022
Date of Patent: July 2, 2024
Assignee: NVIDIA Corporation
Inventor: Shiqui Liu
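A rough sketch of the two pieces the abstract describes: estimating a footprint from the light/occluder/receiver geometry (here with a simple similar-triangles penumbra estimate, which is an assumption, not the patent's projection) and turning major/minor axes plus an orientation into an anisotropic filter kernel:

```python
import numpy as np

def penumbra_width(light_radius, d_occluder, d_receiver):
    """Similar-triangles estimate of the penumbra footprint on the receiver produced by
    projecting the light through the occluder (assumed footprint model)."""
    return light_radius * max(d_receiver - d_occluder, 0.0) / max(d_occluder, 1e-6)

def anisotropic_gaussian_kernel(size, sigma_major, sigma_minor, theta):
    """Normalized 2D Gaussian stretched along a major axis rotated by theta, standing in
    for a filter kernel whose size, shape, and orientation come from the footprint."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    yr = -xx * np.sin(theta) + yy * np.cos(theta)
    k = np.exp(-0.5 * ((xr / sigma_major) ** 2 + (yr / sigma_minor) ** 2))
    return k / k.sum()
```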
-
Patent number: 12026825
Abstract: Apparatus and method for efficient BVH construction. For example, one embodiment of an apparatus comprises: a memory to store graphics data for a scene including a plurality of primitives at a first precision; a geometry quantizer to read vertices of the primitives at the first precision and to adaptively quantize the vertices of the primitives to a second precision associated with a first local coordinate grid of a first BVH node positioned within a global coordinate grid, the second precision lower than the first precision; a BVH builder to determine coordinates of child nodes of the first BVH node by performing non-spatial-split binning or spatial-split binning for the first BVH node using primitives associated with the first BVH node, the BVH builder to determine final coordinates for the child nodes based, at least in part, on an evaluation of surface areas of different bounding boxes generated for each of the child nodes.
Type: Grant
Filed: April 25, 2023
Date of Patent: July 2, 2024
Assignee: Intel Corporation
Inventors: Michael Doyle, Karthik Vaidyanathan
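The adaptive quantization step can be pictured as snapping full-precision vertices onto an integer grid spanning one node's local bounds. The grid resolution (`bits`) and the snapping rule below are illustrative assumptions, not the hardware's exact scheme:

```python
import numpy as np

def quantize_to_local_grid(vertices, node_min, node_max, bits=8):
    """Map full-precision vertices onto a (2**bits)-cell integer grid local to one BVH node."""
    cells = (1 << bits) - 1
    scale = cells / np.maximum(node_max - node_min, 1e-20)
    q = np.floor((vertices - node_min) * scale)
    return np.clip(q, 0, cells).astype(np.uint16)

def dequantize_from_local_grid(q, node_min, node_max, bits=8):
    """Map grid coordinates back to (snapped) positions in the global frame."""
    step = (node_max - node_min) / ((1 << bits) - 1)
    return node_min + q * step
```

Binning and the surface-area evaluation for child placement would then operate on these low-precision coordinates rather than on the full-precision vertices.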
-
Patent number: 12026833
Abstract: Systems and methods are described for utilizing an image processing system with at least one processing device to perform operations including receiving a plurality of input images of a user, and generating a three-dimensional mesh proxy based on a first set of features extracted from the plurality of input images and a second set of features extracted from the plurality of input images. The method may further include generating a neural texture based on the three-dimensional mesh proxy and the plurality of input images, generating a representation of the user including at least the neural texture, and sampling at least one portion of the neural texture from the three-dimensional mesh proxy. In response to providing the at least one sampled portion to a neural renderer, the method may include receiving, from the neural renderer, a synthesized image of the user that is previously not captured by the image processing system.
Type: Grant
Filed: October 28, 2020
Date of Patent: July 2, 2024
Assignee: Google LLC
Inventors: Ricardo Martin Brualla, Moustafa Meshry, Daniel Goldman, Rohit Kumar Pandey, Sofien Bouaziz, Ke Li
-
Patent number: 12020364
Abstract: Techniques are provided for modifying coloring of images utilizing machine learning. A trained model is generated utilizing machine learning with training data that includes images of a plurality of different scenes with different illumination characteristics. New original images of a scene may each be downsampled and transformed to a corresponding output image utilizing the trained model. A color transformation from each original image to its corresponding output image may be determined. In an embodiment, the color transformation is determined utilizing a spline fitting approach. The determined color transformations may be applied to each of the original images to generate corrected images. Specifically, the color transformation that is applied to a particular original image is the color transformation determined for the input image that corresponds to the particular original image. The corrected images are utilized to generate a digital model of the scene, and the digital model has accurate model texture.
Type: Grant
Filed: April 7, 2022
Date of Patent: June 25, 2024
Assignee: Bentley Systems, Incorporated
Inventors: Alexandrina Orzan, Hugo Lavezac, Prince Ngattai Lam, Luc Robert
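A hedged sketch of the spline-fitting idea: fit a smooth per-channel curve from the downsampled original to the model's output, then apply those curves to the full-resolution original. The binning, `scipy.interpolate.UnivariateSpline`, and the smoothing value are illustrative choices, not the patented procedure:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def fit_channel_spline(src, dst, num_bins=64, smooth=1e-3):
    """Fit a smooth 1D mapping from one channel of the downsampled original (src) to the
    same channel of the model output (dst), via binned averages; values in [0, 1]."""
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    which = np.clip(np.digitize(src.ravel(), edges) - 1, 0, num_bins - 1)
    xs, ys = [], []
    for b in range(num_bins):
        sel = which == b
        if sel.any():
            xs.append(src.ravel()[sel].mean())
            ys.append(dst.ravel()[sel].mean())
    return UnivariateSpline(xs, ys, k=3, s=smooth)

def apply_color_transform(original, splines):
    """Apply per-channel curves (fitted on the downsampled pair) to the full-resolution
    original image, shape (H, W, 3) in [0, 1]."""
    h, w, _ = original.shape
    out = np.empty_like(original)
    for c in range(3):
        out[..., c] = np.clip(splines[c](original[..., c].ravel()), 0.0, 1.0).reshape(h, w)
    return out
```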
-
Patent number: 12014456
Abstract: A method of operating a graphics processor when rendering a frame representing a view of a scene using a ray tracing process in which part of the processing for a ray tracing operation is offloaded to a texture mapper unit of the graphics processor. Thus, when the graphics processor's execution unit is executing a program to perform a ray tracing operation the execution unit is able to message the texture mapper unit to perform one or more processing operations for the ray tracing operation. This operation can be triggered by including an appropriate instruction to message the texture mapper unit within the ray tracing program.
Type: Grant
Filed: July 22, 2022
Date of Patent: June 18, 2024
Assignee: Arm Limited
Inventors: Edvard Fielding, Carmelo Giliberto
-
Patent number: 12008708
Abstract: A method for creating a second series of individual images with a first series of individual images, the individual images of the first or the second series of individual images having been captured with an objective, includes determining the entrance pupil and the field of vision of the objective for the individual images of the first series and creating or adapting the individual images of the second series in accordance with the entrance pupil and the field of vision of the objective of the individual image in question of the first series.
Type: Grant
Filed: January 27, 2021
Date of Patent: June 11, 2024
Assignee: Carl Zeiss AG
Inventors: Michael Wick, Christian Wojek, Vladan Blahnik, Torsten Sievers
-
Patent number: 12008703
Abstract: A computer-implemented method of creating a bounding volume hierarchy (BVH) for a model defined with respect to a local coordinate system for the model. The method includes defining BVH branch nodes within the model, establishing a plurality of local transformation matrices for the BVH, and, for each BVH branch node, determining a first bounding volume and associating the branch node with one of the plurality of local transformation matrices that maps between the first bounding volume and a second bounding volume in the local coordinate system.
Type: Grant
Filed: July 14, 2021
Date of Patent: June 11, 2024
Assignee: Imagination Technologies Limited
Inventor: Simon Fenney
-
Patent number: 12008917
Abstract: A patient simulation system for healthcare training is provided. The system includes one or more interchangeable shells comprising a physical anatomical model of at least a portion of a patient's body, the shell adapted to be illuminated from within the shell to provide one or more dynamic images viewable on the outer surface of the shells; wherein the system comprises one or more imaging devices enclosed within the shell and adapted to render the one or more dynamic images on an inner surface of the shell and viewable on the outer surface of the shells; one or more interface devices located about the patient shells to receive input and provide output; and one or more computing units in communication with the image units and interface devices, the computing units adapted to provide an interactive simulation for healthcare training. In other embodiments, the shell is adapted to be illuminated from outside the shell.
Type: Grant
Filed: February 10, 2020
Date of Patent: June 11, 2024
Assignee: UNIVERSITY OF CENTRAL FLORIDA RESEARCH FOUNDATION, INC.
Inventors: Gregory F. Welch, Gerd Bruder, Salam Daher, Jason Eric Hochreiter, Mindi A. Anderson, Laura Gonzalez, Desiree A. Diaz
-
Patent number: 12002144
Abstract: This technology relates to rendering content from discrete applications. In this regard, one or more computing devices may receive a global scene graph containing resources provided by two or more discrete processes, wherein the global scene graph is instantiated by a first process of the two or more discrete processes. The one or more computing devices may render and output for display the global scene graph in accordance with the resources contained therein.
Type: Grant
Filed: April 14, 2023
Date of Patent: June 4, 2024
Assignee: Google LLC
Inventors: Joshua Gargus, Jeffrey Brown, Michael Jurka
-
Patent number: 12002150
Abstract: Methods and systems are provided for rendering photo-realistic images of a subject or an object using a differentiable neural network for predicting indirect light behavior. In one example, the differentiable neural network outputs a volumetric light map comprising a plurality of spherical harmonic representations. Further, using a reflectance neural network, roughness and scattering coefficients associated with the subject or the object are computed. The volumetric light map, as well as the roughness and scattering coefficients, are then utilized for rendering a final image under one or more of a desired lighting condition, desired camera view angle, and/or with a desired visual effect (e.g., expression change).
Type: Grant
Filed: May 3, 2022
Date of Patent: June 4, 2024
Assignee: UNIVERSITY OF SOUTHERN CALIFORNIA
Inventors: Yajie Zhao, Jing Yang, Hanyuan Xiao
-
Patent number: 11995759
Abstract: In examples, the number of rays used to sample lighting conditions of a light source in a virtual environment with respect to particular locations in the virtual environment may be adapted to scene conditions. An additional ray(s) may be used for locations that tend to be associated with visual artifacts in rendered images. A determination may be made on whether to cast an additional ray(s) to a light source for a location and/or a quantity of rays to cast. To make the determination, variables such as visibilities and/or hit distances of ray-traced samples of the light source may be analyzed for related locations in the virtual environment, such as those in a region around the location (e.g., within an N-by-N kernel centered at the location). Factors may include variability in visibilities and/or hit distances, differences between visibilities and/or hit distances relative to the location, and magnitudes of hit distances.
Type: Grant
Filed: September 7, 2021
Date of Patent: May 28, 2024
Assignee: NVIDIA Corporation
Inventor: Jonathan Paul Story
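A simplified sketch of the per-pixel decision: look at the ray-traced visibilities and hit distances in the surrounding N-by-N kernel and turn their variability, plus the pixel's disagreement with its neighbourhood, into an extra-ray budget. The score weighting and the clamp are illustrative, not values from the patent:

```python
import numpy as np

def extra_ray_count(visibility, hit_distance, x, y, n=5, max_extra=3, w_dist=0.1):
    """Return how many additional shadow rays to cast for pixel (x, y), based on the
    variability of visibility and hit distance in the surrounding n-by-n kernel."""
    h, w = visibility.shape
    y0, y1 = max(0, y - n // 2), min(h, y + n // 2 + 1)
    x0, x1 = max(0, x - n // 2), min(w, x + n // 2 + 1)
    v = visibility[y0:y1, x0:x1]
    d = hit_distance[y0:y1, x0:x1]
    d = d[np.isfinite(d)]                          # misses may be stored as inf
    score = v.std() + abs(v.mean() - visibility[y, x])
    if d.size:
        score += w_dist * d.std()
    return int(np.clip(np.ceil(score * max_extra), 0, max_extra))
```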
-
Patent number: 11989870
Abstract: A method for detecting objects on systems includes providing a three-dimensional representation of the system, wherein the position and orientation of the representation and the system are known, and capturing a first image and a second image of the system, the two images being captured from different positions above the system. For a plurality of sections of the system, a respective comparison of the first and the second image is carried out using a parallax effect. If the images in a region surrounding the system match, an object is detected on the system.
Type: Grant
Filed: August 20, 2019
Date of Patent: May 21, 2024
Assignee: Siemens Energy Global GmbH & Co. KG
Inventors: Josef Alois Birchbauer, Vlad Comanelea-Serban, Olaf Kähler
-
Patent number: 11989823
Abstract: The invention discloses a method for rendering on the basis of hemispherical orthogonal functions, the method comprising the following steps: selecting rendering fragments and establishing a local coordinate system; acquiring a bidirectional reflectance distribution function of a material; if the global illumination is an orthogonal function, determining a rotation matrix of an orthogonal function coefficient according to the rotation angles of the global coordinate system and the local coordinate system, and calculating a local orthogonal function illumination coefficient; converting the local orthogonal function illumination coefficient into a hemispherical orthogonal function illumination coefficient; sampling to obtain the spatial distribution of a bidirectional reflection distribution function of a rendered material; obtaining a hemispherical orthogonal function of the bidirectional reflection distribution function of the rendered material; and using the dot product of a hemispherical orthogonal function coef…
Type: Grant
Filed: September 14, 2020
Date of Patent: May 21, 2024
Assignee: NANJING INSTITUTE OF ASTRONOMICAL OPTICS & TECHNOLOGY, NATIONAL ASTRONOMICAL OBSERVATORIES, CAS
Inventors: Yi Zheng, Kai Wei, Bin Liang, Ying Li, Changpeng Ding
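The final shading step (truncated in the listing above) reduces, for an orthonormal basis, to a dot product of coefficient vectors: once both the rotated illumination and the BRDF are projected into the same basis, the integral of their product collapses to the dot product of their coefficients. A minimal sketch, with the per-band coefficient rotation matrix assumed to be supplied by the basis implementation:

```python
import numpy as np

def shade_fragment(global_light_coeffs, brdf_coeffs, rotation_matrix):
    """Rotate the global-illumination coefficients into the fragment's local frame, then
    take the dot product with the (cosine-weighted) BRDF coefficients.  For an orthonormal
    basis, the reflection integral of the product of two functions equals exactly this.
    rotation_matrix is the basis' coefficient-rotation matrix for the global-to-local
    rotation angles (assumed to be provided by the basis implementation)."""
    local_light_coeffs = rotation_matrix @ np.asarray(global_light_coeffs)
    return float(np.dot(local_light_coeffs, np.asarray(brdf_coeffs)))
```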
-
Patent number: 11989971
Abstract: Techniques are disclosed for capturing facial appearance properties. In some examples, a facial capture system includes light source(s) that produce linearly polarized light, at least one camera that is cross-polarized with respect to the polarization of light produced by the light source(s), and at least one other camera that is not cross-polarized with respect to the polarization of the light produced by the light source(s). Images captured by the cross-polarized camera(s) are used to determine facial appearance properties other than specular intensity, such as diffuse albedo, while images captured by the camera(s) that are not cross-polarized are used to determine facial appearance properties including specular intensity. In addition, a coarse-to-fine optimization procedure is disclosed for determining appearance and detailed geometry maps based on images captured by the cross-polarized camera(s) and the camera(s) that are not cross-polarized.
Type: Grant
Filed: December 2, 2021
Date of Patent: May 21, 2024
Assignee: Disney Enterprises, Inc.
Inventors: Jeremy Riviere, Paulo Fabiano Urnau Gotardo, Abhijeet Ghosh, Derek Edward Bradley, Dominik Thabo Beeler
-
Patent number: 11978156
Abstract: In a tile-based graphics processor, when rendering a tile of a render output, it is determined which sub-regions, of a plurality of sub-regions that the tile has been divided into for fragment tracking purposes, fragments generated by the rasterisation stage fall within. Then, for at least one sub-region of the plurality of sub-regions that the tile has been divided into, the processing of fragments for the sub-region of the tile is tracked to determine when the processing of all fragments for the sub-region of the tile has been finished. The writing of rendered fragment data for the sub-region of the tile from the tile buffer to memory is controlled on the basis of the tracking of the processing of fragments for the sub-region of the tile.
Type: Grant
Filed: March 18, 2022
Date of Patent: May 7, 2024
Assignee: Arm Limited
Inventor: Ole Magnus Ruud
-
Patent number: 11972586
Abstract: A method to dynamically and adaptively sample the depths of a scene using the principle of triangulation light curtains is described. The approach directly detects the presence or absence of obstacles (or scene points) at specified 3D lines in a scene by sampling the scene. The scene can be sampled sparsely, non-uniformly, or densely at specified regions. The depth sampling can be varied in real-time, enabling quick object discovery or detailed exploration of areas of interest. Once an object is discovered in the scene, adaptive light curtains, comprising dense sampling of a region of the scene containing the object, can be used to better define the position, shape and size of the discovered object.
Type: Grant
Filed: September 25, 2019
Date of Patent: April 30, 2024
Assignee: Carnegie Mellon University
Inventors: Srinivasa Narasimhan, Joseph Bartels, William L Whittaker, Jian Wang
-
Patent number: 11966785
Abstract: A method for controlling hardware resource configuration for a processing system comprises obtaining performance monitoring data indicative of processing performance associated with workloads to be executed on the processing system, providing a trained machine learning model with input data depending on the performance monitoring data; and based on an inference made from the input data by the trained machine learning model, setting control information for configuring the processing system to control an amount of hardware resource allocated for use by at least one processor core. A corresponding method of training the model is provided. This is particularly useful for controlling inter-core borrowing of resource between processor cores in a multi-core processing system, where resource is borrowed between respective cores, e.g. cores on different layers of a 3D integrated circuit.
Type: Grant
Filed: July 30, 2020
Date of Patent: April 23, 2024
Assignee: Arm Limited
Inventors: Dam Sunwoo, Supreet Jeloka, Saurabh Pijuskumar Sinha, Jaekyu Lee, Jose Alberto Joao, Krishnendra Nathella
-
Patent number: 11962952
Abstract: A method for retrieval of multi spectral bidirectional reflectance distribution function (BRDF) parameters by using red-green-blue-depth (RGB-D) data includes capturing, by an RGB-D camera, at least one image of one or more objects in a scene. The captured at least one image of the one or more objects includes RGB-D data including color and geometry information of the objects. A processing unit reconstructs the captured at least one image of the one or more objects to one or more 3D reconstructions by using the RGB-D data. A deep neural network classifies the BRDF of a surface of the one or more objects based on the 3D reconstructions. The deep neural network includes an input layer, an output layer, and at least one hidden layer between the input layer and the output layer. The multi spectral BRDF parameters are retrieved by approximating the classified BRDF by using an iterative optimization method.
Type: Grant
Filed: August 16, 2019
Date of Patent: April 16, 2024
Assignee: Siemens Industry Software NV
Inventors: Ahmet Bilgili, Serkan Ergun
-
Patent number: 11948485
Abstract: An electronic apparatus and a method for controlling the same are provided. The method includes acquiring a first light field (LF) image of different viewpoints, inputting the first LF image to a first artificial intelligence model to acquire a pixel shift value for converting pixels in the first LF image, converting the pixels in the first LF image according to the pixel shift value to acquire a second LF image, inputting the first LF image and the second LF image to a second artificial intelligence model for converting the LF image to a layer image to acquire the layer image, inputting the acquired layer image to a simulation model for restoring the LF image to acquire a third LF image, and learning the first artificial intelligence model and the second artificial intelligence model based on the second LF image and the third LF image.
Type: Grant
Filed: January 3, 2022
Date of Patent: April 2, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Bora Jin, Youngjin Yoon, Jihye Lee, Yeoul Lee, Sunil Lee, Jaesung Lee
-
Patent number: 11948246
Abstract: Apparatuses, systems, and techniques to render computer graphics. In at least one embodiment, a first one or more lights are selected from among lights in a virtual scene to be rendered as a frame of graphics, and a second one or more lights are selected from among lights used to render one or more pixels in at least one of a prior frame or the current frame. A pixel of the current frame is rendered using the first and second one or more lights, and a light is selected for reuse in rendering a subsequent frame from among the first and second one or more lights.
Type: Grant
Filed: March 24, 2022
Date of Patent: April 2, 2024
Assignee: NVIDIA CORPORATION
Inventor: Christopher Ryan Wyman
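A small sketch of resampling-style light selection in the spirit of the abstract: fresh candidate lights for the current frame are pooled with lights reused from the prior frame (or nearby pixels), and one is picked in proportion to an importance weight. `weight_fn` is a hypothetical caller-supplied contribution estimate; the patent's exact selection and reuse rules are not reproduced here:

```python
import random

def pick_light(candidate_lights, reused_lights, weight_fn):
    """Pool fresh candidates with reused lights and pick one proportionally to its weight.
    Returns (light, selection_probability) so downstream shading can stay unbiased."""
    pool = list(candidate_lights) + list(reused_lights)
    weights = [max(weight_fn(light), 0.0) for light in pool]
    total = sum(weights)
    if not pool or total <= 0.0:
        return None, 0.0
    r = random.uniform(0.0, total)
    acc = 0.0
    for light, w in zip(pool, weights):
        acc += w
        if r <= acc:
            return light, w / total
    return pool[-1], weights[-1] / total
```

The selected light can then be carried forward as a "reused" candidate when rendering the next frame, which is the feedback loop the abstract describes.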
-
Patent number: 11941723
Abstract: Systems, methods, and techniques dynamically utilize load balancing for workgroup assignments between a group of shader engines by a command processor of a graphics processing unit (GPU). Based on one or more commands received for execution, a plurality of workgroups is generated for assignment to a plurality of shader engines for processing, each shader engine including a respective quantity of active compute units. Each workgroup of the plurality of workgroups is dynamically assigned to a respective shader engine for execution based at least in part on indications of available resources respectively associated with each of the shader engines. In various embodiments, the indications of available resources may include physical parameters regarding each shader engine, as well as current status information regarding the processing of workgroups assigned to each shader engine.
Type: Grant
Filed: December 29, 2021
Date of Patent: March 26, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: Randy Ramsey, Yash Ukidave
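A toy model of the dynamic assignment: each workgroup is sent to the shader engine currently reporting the most spare capacity, using active compute units minus outstanding workgroups as the availability signal. The dictionary layout and the greedy rule are assumptions for illustration, not the command processor's actual policy:

```python
def assign_workgroups(workgroups, engines):
    """Greedily assign each workgroup to the shader engine with the most spare capacity.
    engines: list of dicts like {"id": 0, "active_cus": 32, "pending": 0} (assumed layout)."""
    schedule = []
    for wg in workgroups:
        target = max(engines, key=lambda e: e["active_cus"] - e["pending"])
        target["pending"] += 1       # in hardware, completion signals would decrement this
        schedule.append((wg, target["id"]))
    return schedule
```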
-
Patent number: 11941743
Abstract: A system and method for generating a set of samples stratified across two-dimensional elementary intervals of a two-dimensional space are disclosed within the application. A computer-implemented technique for generating the set of samples includes selecting an elementary interval associated with a stratification of the two-dimensional space, initializing at least one data structure that indicates valid regions within the elementary interval based on other samples previously placed within the two-dimensional space, and generating a sample in a valid region of the elementary interval utilizing the at least one data structure to identify the valid region prior to generating the sample. In some embodiments, the data structures comprise a pair of binary trees. The process can be repeated for each elementary interval of a selected stratification to generate the set of stratified two-dimensional samples.
Type: Grant
Filed: July 20, 2022
Date of Patent: March 26, 2024
Assignee: NVIDIA Corporation
Inventor: Matthew Milton Pharr
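The stratification property such a generator maintains is easy to state as a check: for n = 2^m samples in the unit square, every elementary interval of every 2^i-by-2^(m-i) stratification must contain exactly one sample. Below is a small verifier of that property, not the generation algorithm itself (which uses the binary-tree bookkeeping described in the abstract):

```python
import numpy as np

def is_stratified(samples):
    """Verify that n = 2**m points in [0, 1)^2 place exactly one point in every elementary
    interval of every 2**i-by-2**(m-i) stratification of the unit square."""
    n = len(samples)
    m = n.bit_length() - 1
    if 1 << m != n:
        raise ValueError("sample count must be a power of two")
    for i in range(m + 1):
        cols, rows = 1 << i, 1 << (m - i)
        counts = np.zeros((cols, rows), dtype=int)
        for x, y in samples:
            counts[int(x * cols), int(y * rows)] += 1
        if not np.all(counts == 1):
            return False
    return True
```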