Patents by Inventor Paul Debevec
Paul Debevec has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230419600
Abstract: Example embodiments relate to techniques for volumetric performance capture with neural rendering. A technique may involve initially obtaining images that depict a subject from multiple viewpoints and under various lighting conditions using a light stage, and depth data corresponding to the subject using infrared cameras. A neural network may extract features of the subject from the images based on the depth data and map the features into a texture space (e.g., the UV texture space). A neural renderer can be used to generate an output image depicting the subject from a target view such that illumination of the subject in the output image aligns with the target view. The neural renderer may resample the features of the subject from the texture space to an image space to generate the output image.
Type: Application
Filed: November 5, 2020
Publication date: December 28, 2023
Inventors: Sean Ryan Francesco FANELLO, Abhi MEKA, Rohit Kumar PANDEY, Christian HAENE, Sergio Orts ESCOLANO, Christoph RHEMANN, Paul DEBEVEC, Sofien BOUAZIZ, Thabo BEELER, Ryan OVERBECK, Peter BARNUM, Daniel ERICKSON, Philip DAVIDSON, Yinda ZHANG, Jonathan TAYLOR, Chloe LeGENDRE, Shahram IZADI
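The resampling step above — gathering per-pixel features from the UV texture space back into image space — can be sketched with a nearest-neighbor gather. The function name, the normalized-UV convention, and the toy single-channel texture are illustrative assumptions, not details from the application:

```python
import numpy as np

def resample_texture_to_image(texture, uv):
    """Gather per-pixel features from a UV texture map (nearest neighbor).

    texture: (H_t, W_t, C) feature map in texture space.
    uv: (H, W, 2) per-pixel UV coordinates in [0, 1].
    Returns an (H, W, C) image-space feature map.
    """
    h_t, w_t, _ = texture.shape
    # Convert normalized UVs to integer texel indices and clamp to bounds.
    u = np.clip((uv[..., 0] * (w_t - 1)).round().astype(int), 0, w_t - 1)
    v = np.clip((uv[..., 1] * (h_t - 1)).round().astype(int), 0, h_t - 1)
    return texture[v, u]

tex = np.arange(16.0).reshape(4, 4, 1)      # toy 4x4 single-channel texture
uv = np.zeros((2, 2, 2))
uv[..., 0] = [[0.0, 1.0], [0.0, 1.0]]       # U varies across columns
uv[..., 1] = [[0.0, 0.0], [1.0, 1.0]]       # V varies across rows
img = resample_texture_to_image(tex, uv)    # corners of the texture
```

A production renderer would use bilinear rather than nearest-neighbor sampling, but the gather structure is the same.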
-
Publication number: 20230360182
Abstract: Apparatus and methods related to applying lighting models to images of objects are provided. An example method includes applying a geometry model to an input image to determine a surface orientation map indicative of a distribution of lighting on an object based on a surface geometry. The method further includes applying an environmental light estimation model to the input image to determine a direction of synthetic lighting to be applied to the input image. The method also includes applying, based on the surface orientation map and the direction of synthetic lighting, a light energy model to determine a quotient image indicative of an amount of light energy to be applied to each pixel of the input image. The method additionally includes enhancing, based on the quotient image, a portion of the input image. One or more neural networks can be trained to perform one or more of the aforementioned aspects.
Type: Application
Filed: May 17, 2021
Publication date: November 9, 2023
Inventors: Sean Ryan Francesco Fanello, Yun-Ta Tsai, Rohit Kumar Pandey, Paul Debevec, Michael Milne, Chloe LeGendre, Jonathan Tilton Barron, Christoph Rhemann, Sofien Bouaziz, Navin Padman Sarma
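The quotient-image step lends itself to a short sketch: the quotient acts as a per-pixel multiplicative gain on light energy, so enhancement is an elementwise product. The function name and the clipping choice below are illustrative assumptions:

```python
import numpy as np

def enhance_with_quotient(image, quotient):
    """Apply a per-pixel light-energy quotient to an image.

    image: (H, W, 3) linear RGB in [0, 1].
    quotient: (H, W) multiplicative gain per pixel (1.0 = unchanged).
    """
    out = image * quotient[..., None]   # broadcast gain over color channels
    return np.clip(out, 0.0, 1.0)

img = np.full((2, 2, 3), 0.4)
q = np.array([[1.0, 2.0],
              [0.5, 1.5]])             # brighten right column, dim lower-left
out = enhance_with_quotient(img, q)
```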
-
Publication number: 20230206511
Abstract: Mechanisms for generating compressed images are provided. More particularly, methods, systems, and media for capturing, reconstructing, compressing, and rendering view-dependent immersive light field video with a layered mesh representation are provided.
Type: Application
Filed: March 6, 2023
Publication date: June 29, 2023
Inventors: Ryan Overbeck, Michael Joseph Broxton, John Flynn, Daniel William Erickson, Lars Peter Johannes Hedman, Matthew Nowicki DuVall, Jason Angelo Dourgarian, Jessica Lynn Busch, Matthew Stephen Whalen, Paul Debevec
-
Patent number: 11601636
Abstract: Mechanisms for generating compressed images are provided. More particularly, methods, systems, and media for capturing, reconstructing, compressing, and rendering view-dependent immersive light field video with a layered mesh representation are provided.
Type: Grant
Filed: May 20, 2021
Date of Patent: March 7, 2023
Assignee: Google LLC
Inventors: Ryan Overbeck, Michael Joseph Broxton, John Flynn, Daniel William Erickson, Lars Peter Johannes Hedman, Matthew Nowicki DuVall, Jason Angelo Dourgarian, Jessica Lynn Busch, Matthew Stephen Whalen, Paul Debevec
-
Patent number: 11288844
Abstract: Systems, methods, and computer program products are described that implement obtaining, at an electronic computing device and for at least one image of a scene rendered in an Augmented Reality (AR) environment, a scene lighting estimation captured at a first time period. The scene lighting estimation may include at least a first image measurement associated with the scene. The implementations may include determining, at the electronic computing device, a second image measurement associated with the scene at a second time period, and determining a function of the first image measurement and the second image measurement. Based on the determined function, the implementations may also include triggering calculation of a partial lighting estimation update or of a full lighting estimation update, and rendering, on a screen of the electronic computing device, the scene using the partial lighting estimation update or the full lighting estimation update.
Type: Grant
Filed: October 16, 2019
Date of Patent: March 29, 2022
Assignee: Google LLC
Inventors: Chloe LeGendre, Laurent Charbonnel, Christina Tong, Konstantine Nicholas John Tsotsos, Wan-Chun Ma, Paul Debevec
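The partial-versus-full trigger can be sketched as a threshold on a function of the two image measurements. Using relative change in a scalar measurement (e.g., mean luminance) as that function, and the specific threshold value, are illustrative assumptions rather than the patent's claimed parameterization:

```python
def choose_update(first_measure, second_measure, threshold=0.25):
    """Decide between a partial and a full lighting-estimation update.

    first_measure, second_measure: scalar image measurements (e.g., mean
    luminance) taken at the first and second time periods.
    Returns "full" when the scene changed enough to warrant recomputing
    the whole lighting estimation, else "partial".
    """
    change = abs(second_measure - first_measure) / max(first_measure, 1e-6)
    return "full" if change > threshold else "partial"

small = choose_update(0.50, 0.52)   # slight drift -> cheap partial update
large = choose_update(0.50, 0.90)   # big change -> full re-estimation
```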
-
Publication number: 20220027659
Abstract: Techniques of estimating lighting from portraits include generating a lighting estimate from a single image of a face based on a machine learning (ML) system using multiple bidirectional reflectance distribution functions (BRDFs) as a loss function. In some implementations, the ML system is trained using images of faces formed with HDR illumination computed from LDR imagery. The technical solution includes training a lighting estimation model in a supervised manner using a dataset of portraits and their corresponding ground truth illumination.
Type: Application
Filed: September 21, 2020
Publication date: January 27, 2022
Inventors: Chloe LeGendre, Paul Debevec, Wan-Chun Ma, Rohit Pandey, Sean Ryan Francesco Fanello, Christina Tong
-
Publication number: 20210406581
Abstract: An example method, apparatus, and computer-readable storage medium are provided to predict high-dynamic range (HDR) lighting from low-dynamic range (LDR) background images. In an example implementation, a method may include receiving low-dynamic range (LDR) background images of scenes, each LDR background image captured with the appearance of one or more reference objects with different reflectance properties; and training a lighting estimation model based at least on the received LDR background images to predict high-dynamic range (HDR) lighting based at least on the trained model. In another example implementation, a method may include capturing a low-dynamic range (LDR) background image of a scene from an LDR video captured by a camera of the electronic computing device; predicting high-dynamic range (HDR) lighting for the image, the predicting, using a trained model, based at least on the LDR background image; and rendering a virtual object based at least on the predicted HDR lighting.
Type: Application
Filed: November 15, 2019
Publication date: December 30, 2021
Inventors: Chloe LeGendre, Wan-Chun Ma, Graham Fyffe, John Flynn, Jessica Busch, Paul Debevec
-
Publication number: 20210368157
Abstract: Mechanisms for generating compressed images are provided. More particularly, methods, systems, and media for capturing, reconstructing, compressing, and rendering view-dependent immersive light field video with a layered mesh representation are provided.
Type: Application
Filed: May 20, 2021
Publication date: November 25, 2021
Inventors: Ryan Overbeck, Michael Joseph Broxton, John Flynn, Daniel William Erickson, Lars Peter Johannes Hedman, Matthew Nowicki DuVall, Jason Angelo Dourgarian, Jessica Lynn Busch, Matthew Stephen Whalen, Paul Debevec
-
Publication number: 20210166437
Abstract: Systems, methods, and computer program products are described that implement obtaining, at an electronic computing device and for at least one image of a scene rendered in an Augmented Reality (AR) environment, a scene lighting estimation captured at a first time period. The scene lighting estimation may include at least a first image measurement associated with the scene. The implementations may include determining, at the electronic computing device, a second image measurement associated with the scene at a second time period, and determining a function of the first image measurement and the second image measurement. Based on the determined function, the implementations may also include triggering calculation of a partial lighting estimation update or of a full lighting estimation update, and rendering, on a screen of the electronic computing device, the scene using the partial lighting estimation update or the full lighting estimation update.
Type: Application
Filed: October 16, 2019
Publication date: June 3, 2021
Inventors: Chloe LeGendre, Laurent Charbonnel, Christina Tong, Konstantine Nicholas John Tsotsos, Wan-Chun Ma, Paul Debevec
-
Patent number: 10997457
Abstract: Methods, systems, and media for relighting images using predicted deep reflectance fields are provided.
Type: Grant
Filed: October 16, 2019
Date of Patent: May 4, 2021
Assignee: Google LLC
Inventors: Christoph Rhemann, Abhimitra Meka, Matthew Whalen, Jessica Lynn Busch, Sofien Bouaziz, Geoffrey Douglas Harvey, Andrea Tagliasacchi, Jonathan Taylor, Paul Debevec, Peter Joseph Denny, Sean Ryan Francesco Fanello, Graham Fyffe, Jason Angelo Dourgarian, Xueming Yu, Adarsh Prakash Murthy Kowdle, Julien Pascal Christophe Valentin, Peter Christopher Lincoln, Rohit Kumar Pandey, Christian Häne, Shahram Izadi
-
Patent number: 10922878
Abstract: Systems and methods for lighting inserted content are provided. For example, the inserted content may include augmented reality content that is inserted into an image of a physical space. An example system and method may include determining a location within an image to insert content. For example, the image may be captured by a camera device. The example system and method may also include identifying a region of the image based on the determined location to insert the content, determining at least one lighting parameter based on the identified region, and rendering the content using the determined at least one lighting parameter.
Type: Grant
Filed: October 3, 2018
Date of Patent: February 16, 2021
Assignee: GOOGLE LLC
Inventors: Ivan Neulander, Chloe LeGendre, Paul Debevec
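The region-based lighting parameter above can be sketched by deriving a simple statistic from the image region around the insertion point. Using a square window and its mean color as an ambient-light estimate is an illustrative assumption, not the patent's specific parameterization:

```python
import numpy as np

def lighting_param_from_region(image, center, radius):
    """Estimate an ambient-light color for inserted content from the
    mean color of a square window around the insertion point.

    image: (H, W, 3) image of the physical space.
    center: (row, col) insertion location.
    radius: half-width of the sampling window, in pixels.
    """
    h, w, _ = image.shape
    cy, cx = center
    y0, y1 = max(0, cy - radius), min(h, cy + radius + 1)
    x0, x1 = max(0, cx - radius), min(w, cx + radius + 1)
    return image[y0:y1, x0:x1].mean(axis=(0, 1))

img = np.zeros((4, 4, 3))
img[:2] = 1.0                                   # bright top half of the scene
ambient = lighting_param_from_region(img, center=(1, 1), radius=1)
```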
-
Publication number: 20200372284
Abstract: Methods, systems, and media for relighting images using predicted deep reflectance fields are provided.
Type: Application
Filed: October 16, 2019
Publication date: November 26, 2020
Inventors: Christoph Rhemann, Abhimitra Meka, Matthew Whalen, Jessica Lynn Busch, Sofien Bouaziz, Geoffrey Douglas Harvey, Andrea Tagliasacchi, Jonathan Taylor, Paul Debevec, Peter Joseph Denny, Sean Ryan Francesco Fanello, Graham Fyffe, Jason Angelo Dourgarian, Xueming Yu, Adarsh Prakash Murthy Kowdle, Julien Pascal Christophe Valentin, Peter Christopher Lincoln, Rohit Kumar Pandey, Christian Häne, Shahram Izadi
-
Publication number: 20190102936
Abstract: Systems and methods for lighting inserted content are provided. For example, the inserted content may include augmented reality content that is inserted into an image of a physical space. An example system and method may include determining a location within an image to insert content. For example, the image may be captured by a camera device. The example system and method may also include identifying a region of the image based on the determined location to insert the content, determining at least one lighting parameter based on the identified region, and rendering the content using the determined at least one lighting parameter.
Type: Application
Filed: October 3, 2018
Publication date: April 4, 2019
Inventors: Ivan Neulander, Chloe LeGendre, Paul Debevec
-
Publication number: 20160330376
Abstract: An imaging apparatus comprises a gantry with a pan and tilt rotating mechanism to which an elongated member is attached. The elongated member enables placing an image capture device at a forward offset from the center of rotation of the pan and tilt rotating mechanism. Real-world scenery is captured via rotation of the pan and tilt rotating mechanism, which inscribes a sphere and positions the imaging device at the nodal points of the sphere for capturing images.
Type: Application
Filed: May 5, 2016
Publication date: November 10, 2016
Inventor: Paul Debevec
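The sphere geometry above can be sketched numerically: with the capture device mounted at a forward offset from the center of rotation, each (pan, tilt) pose places it on a sphere whose radius equals that offset. The angle convention below (pan about the vertical axis, tilt toward the vertical axis) is an illustrative assumption:

```python
import math

def camera_position(pan_deg, tilt_deg, offset):
    """Position of a camera mounted at a forward offset from the
    pan/tilt center of rotation, for a given pose.

    Returns (x, y, z) with z along the zero-pan, zero-tilt direction.
    All poses lie on a sphere of radius `offset` about the origin.
    """
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    x = offset * math.cos(tilt) * math.sin(pan)
    y = offset * math.sin(tilt)
    z = offset * math.cos(tilt) * math.cos(pan)
    return (x, y, z)

home = camera_position(0.0, 0.0, 1.0)     # straight ahead: (0, 0, 1)
side = camera_position(90.0, 0.0, 1.0)    # panned 90 degrees: (1, 0, ~0)
```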
-
Patent number: 9443343
Abstract: Provided are a method and apparatus for realistically reproducing an eyeball that may verify and analyze a material property and a deformation property with respect to each of the constituent portions of an eyeball, and may render each of the constituent portions based on the analyzed properties, thereby more realistically reproducing the eyeball.
Type: Grant
Filed: November 23, 2011
Date of Patent: September 13, 2016
Assignees: Samsung Electronics Co., Ltd., University of Southern California
Inventors: Tae Hyun Rhee, Seon Min Rhee, Hyun Jung Shim, Do Kyoon Kim, Abhijeet Ghosh, Jay Busch, Jen-Yuan Chiang, Paul Debevec
-
Patent number: 8988599
Abstract: A controllable lighting system may include a plurality of light source groups, a group controller for each light source group, a master controller, and a network communication system. Each group controller may be configured to control the light sources in its light source group based on a group control command. The master controller may be configured to receive a master control command relating to the light sources and to issue a group control command to each of the group controllers that collectively effectuate compliance with the master control command. The network communication system may be configured to communicate the group control commands from the master controller to the group controllers.
Type: Grant
Filed: August 31, 2010
Date of Patent: March 24, 2015
Assignee: University of Southern California
Inventors: Paul Debevec, Xueming Yu, Mark Bolas, Graham Fyffe, Jay Busch, Pieter Peers, Abhijeet Ghosh
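The master-to-group command fan-out described above can be sketched as follows. The class and method names are illustrative assumptions, and direct method calls stand in for the network communication system:

```python
class GroupController:
    """Drives one group of light sources from a group control command."""
    def __init__(self, n_lights):
        self.levels = [0.0] * n_lights

    def apply(self, level):
        # Set every light source in this group to the commanded level.
        self.levels = [level] * len(self.levels)

class MasterController:
    """Splits a master control command into per-group commands."""
    def __init__(self, groups):
        self.groups = groups

    def set_all(self, level):
        # Fan out one group command per controller; together the groups
        # effectuate compliance with the master command.
        for group in self.groups:
            group.apply(level)

groups = [GroupController(4), GroupController(4)]
master = MasterController(groups)
master.set_all(0.75)    # every light in every group now at 75% intensity
```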
-
Publication number: 20060007502
Abstract: A high dynamic range image editing system edits an image file whose pixels span a first range of light intensity levels on a display that can show differences only within a second, narrower range of light intensity levels, without reducing the range of light intensity levels stored in the image file.
Type: Application
Filed: February 1, 2005
Publication date: January 12, 2006
Inventors: Paul Debevec, Timothy Hawkins, Chris Tchou
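The core idea above — showing a narrow window of a wide intensity range while leaving the stored data untouched — can be sketched as a linear window mapping. The function name and the linear (rather than tone-curve) mapping are illustrative assumptions:

```python
import numpy as np

def display_window(hdr, lo, hi):
    """Map the [lo, hi] slice of an HDR image to the display range [0, 1].

    Only the returned view-for-display is clipped; the full-range data
    in `hdr` is untouched, so edits can still address values outside
    the current window.
    """
    return np.clip((hdr - lo) / (hi - lo), 0.0, 1.0)

hdr = np.array([0.01, 1.0, 50.0, 2000.0])   # radiances spanning ~5 orders
shown = display_window(hdr, lo=0.0, hi=100.0)
```

Changing `lo` and `hi` lets the editor inspect shadows or highlights in turn, much like exposure adjustment on a camera.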
-
Publication number: 20050276441
Abstract: A lighting apparatus may be configured to illuminate a subject while the subject is undergoing a motion during a time period. An imaging system may be configured to generate image data representative of a sequence of frames of the moving subject. A controller may be configured to drive the lighting apparatus and the imaging system so that the lighting apparatus sequentially illuminates the moving subject with a time-multiplexed series of lighting conditions, and so that each one of the frames shows the subject illuminated with a respective one of the lighting conditions. The controller may be further configured to process the image data to generate re-illumination data representative of novel illumination conditions under which the subject can be re-illuminated, subsequent to the time period.
Type: Application
Filed: June 10, 2005
Publication date: December 15, 2005
Inventor: Paul Debevec
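Because light transport is linear, frames captured under the time-multiplexed basis lighting conditions can be recombined into novel illumination as a weighted sum. This sketch of that recombination step uses illustrative names; the abstract does not specify this exact formulation:

```python
import numpy as np

def relight(basis_frames, weights):
    """Re-illuminate a subject as a weighted sum of basis-lit frames.

    basis_frames: (N, H, W, 3) images, one per basis lighting condition.
    weights: (N,) intensity of each basis light in the novel condition.
    Relies on the linearity of light transport.
    """
    # Contract the N axis: sum_i weights[i] * basis_frames[i].
    return np.tensordot(weights, basis_frames, axes=1)

frames = np.stack([np.full((2, 2, 3), 1.0),     # frame lit by light A
                   np.full((2, 2, 3), 0.5)])    # frame lit by light B
novel = relight(frames, np.array([0.2, 0.8]))   # 20% of A plus 80% of B
```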
-
Publication number: 20050018223
Abstract: A lighting reproduction apparatus for illuminating a subject includes a reproduction light optical source that generates reproduction light. The optical source includes a plurality of light emitters, each characterized by an individual color channel. There may be nine different color channels. A driver drives the light emitter color channels with intensity values at which a substantial spectral match is achieved between the reproduction light and the desired illuminant, so that the subject appears to be illuminated by the desired illuminant. These channel intensity values may be determined by solving a minimization equation that minimizes a sum of squared residuals of the reproduction light spectra to the desired illuminant spectra. The output reproduction light may be metamerically matched with the desired illuminant with respect to a particular camera's spectral response. One or more spectral reflectances of the subject may be measured and incorporated into the optimization process.
Type: Application
Filed: June 17, 2004
Publication date: January 27, 2005
Inventors: Paul Debevec, Timothy Hawkins, Andreas Wenger
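The minimization above is a least-squares fit: find channel intensities whose summed emission spectra best match the desired illuminant spectrum. A minimal sketch, with the caveat that clipping negative solutions to zero is a simplification (a production solver would use proper non-negative least squares):

```python
import numpy as np

def match_illuminant(channel_spectra, target_spectrum):
    """Solve for emitter channel intensities whose combined spectrum
    best matches a target illuminant, by least squares.

    channel_spectra: (n_wavelengths, n_channels) emission spectra,
    one column per color channel.
    target_spectrum: (n_wavelengths,) desired illuminant spectrum.
    """
    w, _, _, _ = np.linalg.lstsq(channel_spectra, target_spectrum, rcond=None)
    # Physical light sources cannot emit negative intensity.
    return np.clip(w, 0.0, None)

# Toy 3-wavelength, 2-channel example where the target is exactly
# 1x channel 0 plus 2x channel 1.
S = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
d = S @ np.array([1.0, 2.0])
w = match_illuminant(S, d)
```

With nine channels the system would use a (n_wavelengths, 9) matrix, and a metameric match would weight the residuals by the camera's spectral response.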