Patents by Inventor Christoph Rhemann

Christoph Rhemann has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230419600
    Abstract: Example embodiments relate to techniques for volumetric performance capture with neural rendering. A technique may involve initially obtaining images that depict a subject from multiple viewpoints and under various lighting conditions using a light stage, along with depth data corresponding to the subject captured by infrared cameras. A neural network may extract features of the subject from the images based on the depth data and map the features into a texture space (e.g., the UV texture space). A neural renderer can be used to generate an output image depicting the subject from a target view such that illumination of the subject in the output image aligns with the target view. The neural renderer may resample the features of the subject from the texture space to an image space to generate the output image.
    Type: Application
    Filed: November 5, 2020
    Publication date: December 28, 2023
    Inventors: Sean Ryan Francesco FANELLO, Abhi MEKA, Rohit Kumar PANDEY, Christian HAENE, Sergio Orts ESCOLANO, Christoph RHEMANN, Paul DEBEVEC, Sofien BOUAZIZ, Thabo BEELER, Ryan OVERBECK, Peter BARNUM, Daniel ERICKSON, Philip DAVIDSON, Yinda ZHANG, Jonathan TAYLOR, Chloe LeGENDRE, Shahram IZADI
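A minimal sketch of the texture-space-to-image-space resampling step mentioned in the entry above, not the patented implementation: given per-texel features in UV space and a per-pixel UV map rendered for the target view, gather the features into image space. All array names, shapes, and the nearest-neighbour lookup are illustrative assumptions.

```python
import numpy as np

def resample_features_to_image(feature_texture, uv_map):
    """feature_texture: (H_tex, W_tex, C) features living in UV texture space.
    uv_map: (H_img, W_img, 2) per-pixel UV coordinates in [0, 1] for the
    target view (e.g. rasterized from a coarse geometry proxy).
    Returns an (H_img, W_img, C) image-space feature map."""
    h_tex, w_tex, _ = feature_texture.shape
    # Convert normalized UVs to integer texel indices (nearest neighbour).
    us = np.clip((uv_map[..., 0] * (w_tex - 1)).round().astype(int), 0, w_tex - 1)
    vs = np.clip((uv_map[..., 1] * (h_tex - 1)).round().astype(int), 0, h_tex - 1)
    # Gather texels; a neural renderer would then decode these features
    # into the final output RGB image.
    return feature_texture[vs, us]

# Toy usage with random data standing in for learned features.
tex = np.random.rand(256, 256, 16).astype(np.float32)
uv = np.random.rand(120, 160, 2).astype(np.float32)
image_space_features = resample_features_to_image(tex, uv)
print(image_space_features.shape)  # (120, 160, 16)
```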
  • Publication number: 20230360182
    Abstract: Apparatus and methods related to applying lighting models to images of objects are provided. An example method includes applying a geometry model to an input image to determine a surface orientation map indicative of a distribution of lighting on an object based on a surface geometry. The method further includes applying an environmental light estimation model to the input image to determine a direction of synthetic lighting to be applied to the input image. The method also includes applying, based on the surface orientation map and the direction of synthetic lighting, a light energy model to determine a quotient image indicative of an amount of light energy to be applied to each pixel of the input image. The method additionally includes enhancing, based on the quotient image, a portion of the input image. One or more neural networks can be trained to perform one or more of the aforementioned aspects.
    Type: Application
    Filed: May 17, 2021
    Publication date: November 9, 2023
    Inventors: Sean Ryan Francesco Fanello, Yun-Ta Tsai, Rohit Kumar Pandey, Paul Debevec, Michael Milne, Chloe LeGendre, Jonathan Tilton Barron, Christoph Rhemann, Sofien Bouaziz, Navin Padman Sarma
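A hedged sketch of the quotient-image idea from the entry above: given a per-pixel surface orientation (normal) map and a synthetic light direction, compute a shading ratio and scale the input image by it. The learned geometry, light-estimation, and light-energy models are replaced here by simple analytic stand-ins; all parameters are illustrative.

```python
import numpy as np

def quotient_relight(image, normals, light_dir, ambient=0.3, strength=0.7):
    """image: (H, W, 3) in [0, 1]; normals: (H, W, 3) unit surface normals;
    light_dir: (3,) direction of the synthetic light to add."""
    light_dir = light_dir / np.linalg.norm(light_dir)
    # Per-pixel diffuse term for the synthetic light.
    ndotl = np.clip((normals * light_dir).sum(axis=-1), 0.0, 1.0)
    # Quotient image: how much light energy to apply to each pixel,
    # relative to a flat ambient baseline.
    quotient = (ambient + strength * ndotl) / ambient
    return np.clip(image * quotient[..., None], 0.0, 1.0)

# Toy usage: a flat grey image with upward-facing normals.
img = np.full((64, 64, 3), 0.4, dtype=np.float32)
nrm = np.zeros((64, 64, 3), dtype=np.float32); nrm[..., 2] = 1.0
out = quotient_relight(img, nrm, light_dir=np.array([0.3, 0.3, 1.0]))
```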
  • Publication number: 20230209036
    Abstract: An electronic device estimates a depth map of an environment based on matching reduced-resolution stereo depth images captured by depth cameras to generate a coarse disparity (depth) map. The electronic device downsamples depth images captured by the depth cameras and matches sections of the reduced-resolution images to each other to generate a coarse depth map. The electronic device upsamples the coarse depth map to a higher resolution and refines the upsampled depth map to generate a high-resolution depth map to support location-based functionality.
    Type: Application
    Filed: February 17, 2023
    Publication date: June 29, 2023
    Inventors: Sameh KHAMIS, Yinda ZHANG, Christoph RHEMANN, Julien VALENTIN, Adarsh KOWDLE, Vladimir TANKOVICH, Michael SCHOENBERG, Shahram IZADI, Thomas FUNKHOUSER, Sean FANELLO
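A hedged sketch of the coarse-to-fine strategy described above: match at reduced resolution, upsample the coarse disparity, then refine at full resolution. Real systems use learned matching costs and edge-aware refinement; this stand-in uses plain SAD block matching and omits the refinement pass.

```python
import numpy as np

def block_match(left, right, max_disp, patch=5):
    """Brute-force SAD block matching on grayscale images (left, right)."""
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y-half:y+half+1, x-half:x+half+1]
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y-half:y+half+1, x-d-half:x-d+half+1]
                cost = np.abs(ref - cand).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

def coarse_to_fine_disparity(left, right, max_disp=32, scale=4):
    # 1) Downsample and match coarsely (cheap: small image, small disparity range).
    small_l, small_r = left[::scale, ::scale], right[::scale, ::scale]
    coarse = block_match(small_l, small_r, max_disp // scale)
    # 2) Upsample the coarse disparity back to full resolution and rescale it.
    up = np.kron(coarse, np.ones((scale, scale))) * scale
    up = up[:left.shape[0], :left.shape[1]]
    # 3) A refinement step would search a narrow band around `up` at full
    #    resolution; omitted here for brevity.
    return up
```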
  • Patent number: 11589031
    Abstract: An electronic device estimates a depth map of an environment based on matching reduced-resolution stereo depth images captured by depth cameras to generate a coarse disparity (depth) map. The electronic device downsamples depth images captured by the depth cameras and matches sections of the reduced-resolution images to each other to generate a coarse depth map. The electronic device upsamples the coarse depth map to a higher resolution and refines the upsampled depth map to generate a high-resolution depth map to support location-based functionality.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: February 21, 2023
    Assignee: GOOGLE LLC
    Inventors: Sameh Khamis, Yinda Zhang, Christoph Rhemann, Julien Valentin, Adarsh Kowdle, Vladimir Tankovich, Michael Schoenberg, Shahram Izadi, Thomas Funkhouser, Sean Fanello
  • Publication number: 20220065620
    Abstract: A lighting stage includes a plurality of lights that project alternating spherical color gradient illumination patterns onto an object or human performer at a predetermined frequency. The lighting stage also includes a plurality of cameras that capture images of an object or human performer corresponding to the alternating spherical color gradient illumination patterns. The lighting stage also includes a plurality of depth sensors that capture depth maps of the object or human performer at the predetermined frequency. The lighting stage also includes (or is associated with) one or more processors that implement a machine learning algorithm to produce a three-dimensional (3D) model of the object or human performer. The 3D model includes relighting parameters used to relight the 3D model under different lighting conditions.
    Type: Application
    Filed: November 11, 2020
    Publication date: March 3, 2022
    Inventors: Sean Ryan Francesco Fanello, Kaiwen Guo, Peter Christopher Lincoln, Philip Lindsley Davidson, Jessica L. Busch, Xueming Yu, Geoffrey Harvey, Sergio Orts Escolano, Rohit Kumar Pandey, Jason Dourgarian, Danhang Tang, Adarsh Prakash Murthy Kowdle, Emily B. Cooper, Mingsong Dou, Graham Fyffe, Christoph Rhemann, Jonathan James Taylor, Shahram Izadi, Paul Ernest Debevec
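A hedged sketch of spherical colour-gradient illumination patterns like those described above: each light's RGB intensity encodes its direction on the sphere, and the stage alternates between the gradient and its complement at the capture frequency. The light layout and the direction-to-colour mapping are illustrative assumptions, not the patented design.

```python
import numpy as np

def color_gradient_patterns(light_dirs):
    """light_dirs: (N, 3) unit vectors from the capture volume to each light.
    Returns two (N, 3) RGB intensity patterns in [0, 1] that the stage
    alternates between."""
    gradient = (light_dirs + 1.0) / 2.0   # map x, y, z in [-1, 1] to RGB in [0, 1]
    complement = 1.0 - gradient           # inverse pattern
    return gradient, complement

# Toy usage: random directions standing in for the stage's light layout.
dirs = np.random.randn(331, 3)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
grad, comp = color_gradient_patterns(dirs)
```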
  • Patent number: 11265534
    Abstract: The subject disclosure is directed towards controlling the intensity of illumination of a scene or part of a scene, including to conserve illumination power. Quality of depth data in stereo images may be measured with different illumination states; environmental conditions, such as ambient light and natural texture, may affect the quality. The illumination intensity may be controllably varied to obtain sufficient quality while conserving power. The control may be directed to one or more regions of interest corresponding to an entire scene or part of a scene.
    Type: Grant
    Filed: February 8, 2014
    Date of Patent: March 1, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Adam G. Kirk, Christoph Rhemann, Oliver A. Whyte, Shahram Izadi, Sing Bing Kang, Andreas Georgiou
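A hedged sketch of the power-aware control loop suggested by the abstract above: measure a depth-quality score for a region of interest at the current illuminator power, then step power up or down to stay just above a quality target. The quality metric, step size, and convergence scheme are illustrative assumptions.

```python
def control_illumination(measure_quality, set_power, initial_power=0.5,
                         quality_target=0.8, step=0.05, iterations=20):
    """Simple feedback loop: raise power when depth quality is too low,
    lower it when quality is already sufficient, to conserve power."""
    power = initial_power
    for _ in range(iterations):
        set_power(power)
        q = measure_quality()  # e.g. fraction of valid stereo matches in the ROI
        if q < quality_target:
            power = min(1.0, power + step)   # scene too dark / textureless: add light
        else:
            power = max(0.0, power - step)   # quality is sufficient: save power
    return power

# Toy usage with a fake sensor whose quality saturates with power.
state = {"power": 0.5}
final_power = control_illumination(
    measure_quality=lambda: min(1.0, 0.5 + state["power"]),
    set_power=lambda p: state.update(power=p))
```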
  • Patent number: 11145075
    Abstract: A handheld user device includes a monocular camera to capture a feed of images of a local scene and a processor to select, from the feed, a keyframe and perform, for a first image from the feed, stereo matching using the first image, the keyframe, and a relative pose based on a pose associated with the first image and a pose associated with the keyframe to generate a sparse disparity map representing disparities between the first image and the keyframe. The processor further is to determine a dense depth map from the disparity map using a bilateral solver algorithm, and process a viewfinder image generated from a second image of the feed with occlusion rendering based on the depth map to incorporate one or more virtual objects into the viewfinder image to generate an AR viewfinder image. Further, the processor is to provide the AR viewfinder image for display.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: October 12, 2021
    Assignee: Google LLC
    Inventors: Julien Valentin, Onur G. Guleryuz, Mira Leung, Maksym Dzitsiuk, Jose Pascoal, Mirko Schmidt, Christoph Rhemann, Neal Wadhwa, Eric Turner, Sameh Khamis, Adarsh Prakash Murthy Kowdle, Ambrus Csaszar, João Manuel Castro Afonso, Jonathan T. Barron, Michael Schoenberg, Ivan Dryanovski, Vivek Verma, Vladimir Tankovich, Shahram Izadi, Sean Ryan Francesco Fanello, Konstantine Nicholas John Tsotsos
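A hedged sketch of the densification step in the entry above. The patent uses a bilateral solver; as a simplified stand-in, this fills a sparse disparity map with a joint-bilateral weighted average guided by the viewfinder image, so that depth edges follow image edges. All parameters are illustrative.

```python
import numpy as np

def densify_sparse_disparity(guide, sparse_disp, valid, radius=7,
                             sigma_s=4.0, sigma_r=0.1):
    """guide: (H, W) grayscale image in [0, 1]; sparse_disp: (H, W) with
    disparities at valid pixels; valid: (H, W) boolean mask."""
    h, w = guide.shape
    dense = np.zeros((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            m = valid[y0:y1, x0:x1]
            if not m.any():
                continue
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # Spatial proximity and guide-image similarity weights.
            spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            range_w = np.exp(-((guide[y0:y1, x0:x1] - guide[y, x]) ** 2) / (2 * sigma_r ** 2))
            wgt = spatial * range_w * m
            if wgt.sum() > 0:
                dense[y, x] = (wgt * sparse_disp[y0:y1, x0:x1]).sum() / wgt.sum()
    return dense
```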
  • Patent number: 11037026
    Abstract: Values of pixels in an image are mapped to a binary space using a first function that preserves characteristics of values of the pixels. Labels are iteratively assigned to the pixels in the image in parallel based on a second function. The label assigned to each pixel is determined based on values of a set of nearest-neighbor pixels. The first function is trained to map values of pixels in a set of training images to the binary space and the second function is trained to assign labels to the pixels in the set of training images. Considering only the nearest neighbors in the inference scheme results in a computational complexity that is independent of the size of the solution space and produces sufficient approximations of the true distribution when the solution for each pixel is most likely found in a small subset of the set of potential solutions.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: June 15, 2021
    Assignee: Google LLC
    Inventors: Sean Ryan Fanello, Julien Pascal Christophe Valentin, Adarsh Prakash Murthy Kowdle, Christoph Rhemann, Vladimir Tankovich, Philip L. Davidson, Shahram Izadi
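A hedged sketch of the two-stage idea in the entry above: (1) map pixel patch values into a binary space with a projection that roughly preserves similarity (random hyperplanes stand in for the learned first function), and (2) iteratively update each pixel's label in parallel, letting every pixel consider only the labels currently held by its four nearest neighbours, so per-iteration cost does not depend on the full label-space size. The cost callback, label set, and smoothness handling are all illustrative.

```python
import numpy as np

def binary_codes(patches, n_bits=32, seed=0):
    """patches: (N, D) flattened patch vectors -> (N, n_bits) binary codes.
    For an H x W grid, call on flattened patches and reshape to (H, W, n_bits)."""
    rng = np.random.default_rng(seed)
    hyperplanes = rng.standard_normal((patches.shape[1], n_bits))
    return (patches @ hyperplanes > 0).astype(np.uint8)

def nearest_neighbor_inference(codes_grid, label_cost, n_labels, iters=5, seed=0):
    """codes_grid: (H, W, B) binary codes; label_cost(code, label) -> data cost.
    Each pixel repeatedly adopts the cheapest label among its own and its
    4-neighbours' current labels."""
    h, w, _ = codes_grid.shape
    labels = np.random.default_rng(seed).integers(0, n_labels, size=(h, w))
    for _ in range(iters):
        new_labels = labels.copy()
        for y in range(h):
            for x in range(w):
                candidates = {int(labels[y, x])}
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        candidates.add(int(labels[ny, nx]))
                new_labels[y, x] = min(
                    candidates, key=lambda l: label_cost(codes_grid[y, x], l))
        labels = new_labels
    return labels
```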
  • Patent number: 10997457
    Abstract: Methods, systems, and media for relighting images using predicted deep reflectance fields are provided.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: May 4, 2021
    Assignee: Google LLC
    Inventors: Christoph Rhemann, Abhimitra Meka, Matthew Whalen, Jessica Lynn Busch, Sofien Bouaziz, Geoffrey Douglas Harvey, Andrea Tagliasacchi, Jonathan Taylor, Paul Debevec, Peter Joseph Denny, Sean Ryan Francesco Fanello, Graham Fyffe, Jason Angelo Dourgarian, Xueming Yu, Adarsh Prakash Murthy Kowdle, Julien Pascal Christophe Valentin, Peter Christopher Lincoln, Rohit Kumar Pandey, Christian Häne, Shahram Izadi
  • Patent number: 10937182
    Abstract: An electronic device estimates a pose of one or more subjects in an environment based on estimating a correspondence between a data volume containing a data mesh based on a current frame captured by a depth camera and a reference volume containing a plurality of fused prior data frames based on spectral embedding and performing bidirectional non-rigid matching between the reference volume and the current data frame to refine the correspondence so as to support location-based functionality. The electronic device predicts correspondences between the data volume and the reference volume based on spectral embedding. The correspondences provide constraints that accelerate the convergence between the data volume and the reference volume. By tracking changes between the current data mesh frame and the reference volume, the electronic device avoids tracking failures that can occur when relying solely on a previous data mesh frame.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: March 2, 2021
    Assignee: GOOGLE LLC
    Inventors: Mingsong Dou, Sean Ryan Fanello, Adarsh Prakash Murthy Kowdle, Christoph Rhemann, Sameh Khamis, Philip L. Davidson, Shahram Izadi, Vladimir Tankovich
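A hedged sketch of spectral-embedding correspondences like those used to initialise the non-rigid matching above: embed the vertices of two meshes with the low-frequency eigenvectors of their graph Laplacians and match vertices by nearest neighbour in that pose-insensitive space. Sign and ordering alignment of eigenvectors, which a real system must handle, is ignored here; the k-NN graph and embedding dimension are illustrative.

```python
import numpy as np

def spectral_embedding(points, k=8, dims=4):
    """points: (N, 3) vertices. Builds a k-NN graph Laplacian and returns
    the (N, dims) embedding from its smallest non-trivial eigenvectors."""
    n = len(points)
    d2 = ((points[:, None] - points[None]) ** 2).sum(-1)
    adj = np.zeros((n, n))
    nearest = np.argsort(d2, axis=1)[:, 1:k + 1]   # skip self (distance 0)
    for i, nbrs in enumerate(nearest):
        adj[i, nbrs] = 1.0
        adj[nbrs, i] = 1.0
    laplacian = np.diag(adj.sum(1)) - adj
    _, vecs = np.linalg.eigh(laplacian)
    return vecs[:, 1:dims + 1]                     # skip the constant eigenvector

def correspondences(src_points, dst_points):
    """For each source vertex, the closest destination vertex in embedding space."""
    a, b = spectral_embedding(src_points), spectral_embedding(dst_points)
    d2 = ((a[:, None] - b[None]) ** 2).sum(-1)
    return d2.argmin(axis=1)
```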
  • Patent number: 10929658
    Abstract: Systems and methods are provided for stereo matching based upon active illumination, using a patch in a non-actively illuminated image to obtain weights that are used in patch similarity determinations in actively illuminated stereo images. To correlate pixels in actively illuminated stereo images, adaptive support weights computations are used to determine similarity of patches corresponding to the pixels. In order to obtain adaptive support weights for the adaptive support weights computations, weights are obtained by processing a non-actively illuminated (“clean”) image.
    Type: Grant
    Filed: June 21, 2013
    Date of Patent: February 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adam G. Kirk, Christoph Rhemann, Oliver A. Whyte, Shahram Izadi, Sing Bing Kang
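A hedged sketch of the adaptive-support-weights idea above: weights are computed from a patch of the non-actively-illuminated ("clean") image using colour similarity and spatial proximity, and then used to aggregate per-pixel matching costs over the corresponding patch of the actively illuminated stereo pair. Parameters and the one-sided weighting are illustrative simplifications.

```python
import numpy as np

def support_weights(clean_patch, sigma_c=0.1, sigma_r=5.0):
    """clean_patch: (P, P, 3) patch of the clean image around the centre pixel.
    Returns (P, P) weights favouring pixels likely on the same surface as
    the centre (similar colour, spatially close)."""
    p = clean_patch.shape[0]
    c = p // 2
    color_diff = np.linalg.norm(clean_patch - clean_patch[c, c], axis=-1)
    yy, xx = np.mgrid[:p, :p]
    dist = np.hypot(yy - c, xx - c)
    return np.exp(-color_diff / sigma_c - dist / sigma_r)

def weighted_patch_cost(active_left_patch, active_right_patch, weights):
    """Aggregate absolute differences of the actively illuminated patches
    using weights taken from the clean image."""
    per_pixel = np.abs(active_left_patch - active_right_patch).sum(axis=-1)
    return (weights * per_pixel).sum() / weights.sum()
```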
  • Publication number: 20210004979
    Abstract: A handheld user device includes a monocular camera to capture a feed of images of a local scene and a processor to select, from the feed, a keyframe and perform, for a first image from the feed, stereo matching using the first image, the keyframe, and a relative pose based on a pose associated with the first image and a pose associated with the keyframe to generate a sparse disparity map representing disparities between the first image and the keyframe. The processor further is to determine a dense depth map from the disparity map using a bilateral solver algorithm, and process a viewfinder image generated from a second image of the feed with occlusion rendering based on the depth map to incorporate one or more virtual objects into the viewfinder image to generate an AR viewfinder image. Further, the processor is to provide the AR viewfinder image for display.
    Type: Application
    Filed: October 4, 2019
    Publication date: January 7, 2021
    Inventors: Julien VALENTIN, Onur G. GULERYUZ, Mira LEUNG, Maksym DZITSIUK, Jose PASCOAL, Mirko SCHMIDT, Christoph RHEMANN, Neal WADHWA, Eric TURNER, Sameh KHAMIS, Adarsh Prakash Murthy KOWDLE, Ambrus CSASZAR, João Manuel Castro AFONSO, Jonathan T. BARRON, Michael SCHOENBERG, Ivan DRYANOVSKI, Vivek VERMA, Vladimir TANKOVICH, Shahram IZADI, Sean Ryan Francesco FANELLO, Konstantine Nicholas John TSOTSOS
  • Publication number: 20200372284
    Abstract: Methods, systems, and media for relighting images using predicted deep reflectance fields are provided.
    Type: Application
    Filed: October 16, 2019
    Publication date: November 26, 2020
    Inventors: Christoph Rhemann, Abhimitra Meka, Matthew Whalen, Jessica Lynn Busch, Sofien Bouaziz, Geoffrey Douglas Harvey, Andrea Tagliasacchi, Jonathan Taylor, Paul Debevec, Peter Joseph Denny, Sean Ryan Francesco Fanello, Graham Fyffe, Jason Angelo Dourgarian, Xueming Yu, Adarsh Prakash Murthy Kowdle, Julien Pascal Christophe Valentin, Peter Christopher Lincoln, Rohit Kumar Pandey, Christian Häne, Shahram Izadi
  • Patent number: 10839539
    Abstract: An electronic device estimates a depth map of an environment based on stereo depth images captured by depth cameras having exposure times that are offset from each other in conjunction with illuminators pulsing illumination patterns into the environment. A processor of the electronic device matches small sections of the depth images from the cameras to each other and to corresponding patches of immediately preceding depth images (e.g., a spatio-temporal image patch “cube”). The processor computes a matching cost for each spatio-temporal image patch cube by converting each spatio-temporal image patch into binary codes and defining a cost function between two stereo image patches as the difference between the binary codes. The processor minimizes the matching cost to generate a disparity map, and optimizes the disparity map by rejecting outliers using a decision tree with learned pixel offsets and refining subpixels to generate a depth map of the environment.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: November 17, 2020
    Assignee: GOOGLE LLC
    Inventors: Adarsh Prakash Murthy Kowdle, Vladimir Tankovich, Danhang Tang, Cem Keskin, Jonathan James Taylor, Philip L. Davidson, Shahram Izadi, Sean Ryan Fanello, Julien Pascal Christophe Valentin, Christoph Rhemann, Mingsong Dou, Sameh Khamis, David Kim
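A hedged sketch of the binary matching cost described above. Here the patches are purely spatial rather than the spatio-temporal patch "cubes" of the abstract: patches are converted to binary codes by random pairwise intensity comparisons (a census/BRIEF-style stand-in for the learned conversion), and the stereo matching cost is the Hamming distance between left and right codes, minimised over disparity.

```python
import numpy as np

def binarize_patch(patch, pairs):
    """patch: (P, P) intensities; pairs: (B, 2, 2) pixel-coordinate pairs.
    Bit b is 1 if patch[pairs[b, 0]] > patch[pairs[b, 1]]."""
    a = patch[pairs[:, 0, 0], pairs[:, 0, 1]]
    b = patch[pairs[:, 1, 0], pairs[:, 1, 1]]
    return (a > b).astype(np.uint8)

def best_disparity(left, right, y, x, max_disp, patch=9, n_bits=64, seed=0):
    """Pick the disparity with minimum Hamming cost for pixel (y, x).
    Assumes (y, x) lies at least patch//2 pixels from the image borders."""
    half = patch // 2
    rng = np.random.default_rng(seed)
    pairs = rng.integers(0, patch, size=(n_bits, 2, 2))
    ref = binarize_patch(left[y-half:y+half+1, x-half:x+half+1], pairs)
    costs = []
    for d in range(min(max_disp, x - half) + 1):
        cand = binarize_patch(right[y-half:y+half+1, x-d-half:x-d+half+1], pairs)
        costs.append(np.count_nonzero(ref != cand))   # Hamming distance
    return int(np.argmin(costs))
```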
  • Patent number: 10726255
    Abstract: Systems and methods are provided for stereo matching based upon active illumination, using a patch in a non-actively illuminated image to obtain weights that are used in patch similarity determinations in actively illuminated stereo images. To correlate pixels in actively illuminated stereo images, adaptive support weights computations are used to determine similarity of patches corresponding to the pixels. In order to obtain adaptive support weights for the adaptive support weights computations, weights are obtained by processing a non-actively illuminated (“clean”) image.
    Type: Grant
    Filed: June 21, 2013
    Date of Patent: July 28, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adam G. Kirk, Christoph Rhemann, Oliver A. Whyte, Shahram Izadi, Sing Bing Kang
  • Publication number: 20200160109
    Abstract: Values of pixels in an image are mapped to a binary space using a first function that preserves characteristics of values of the pixels. Labels are iteratively assigned to the pixels in the image in parallel based on a second function. The label assigned to each pixel is determined based on values of a set of nearest-neighbor pixels. The first function is trained to map values of pixels in a set of training images to the binary space and the second function is trained to assign labels to the pixels in the set of training images. Considering only the nearest neighbors in the inference scheme results in a computational complexity that is independent of the size of the solution space and produces sufficient approximations of the true distribution when the solution for each pixel is most likely found in a small subset of the set of potential solutions.
    Type: Application
    Filed: January 22, 2020
    Publication date: May 21, 2020
    Inventors: Sean Ryan FANELLO, Julien Pascal Christophe VALENTIN, Adarsh Prakash Murthy KOWDLE, Christoph RHEMANN, Vladimir TANKOVICH, Philip L. DAVIDSON, Shahram IZADI
  • Publication number: 20200099920
    Abstract: An electronic device estimates a depth map of an environment based on matching reduced-resolution stereo depth images captured by depth cameras to generate a coarse disparity (depth) map. The electronic device downsamples depth images captured by the depth cameras and matches sections of the reduced-resolution images to each other to generate a coarse depth map. The electronic device upsamples the coarse depth map to a higher resolution and refines the upsampled depth map to generate a high-resolution depth map to support location-based functionality.
    Type: Application
    Filed: September 24, 2019
    Publication date: March 26, 2020
    Inventors: Sameh KHAMIS, Yinda ZHANG, Christoph RHEMANN, Julien VALENTIN, Adarsh KOWDLE, Vladimir TANKOVICH, Michael SCHOENBERG, Shahram IZADI, Thomas FUNKHOUSER, Sean FANELLO
  • Patent number: 10579905
    Abstract: Values of pixels in an image are mapped to a binary space using a first function that preserves characteristics of values of the pixels. Labels are iteratively assigned to the pixels in the image in parallel based on a second function. The label assigned to each pixel is determined based on values of a set of nearest-neighbor pixels. The first function is trained to map values of pixels in a set of training images to the binary space and the second function is trained to assign labels to the pixels in the set of training images. Considering only the nearest neighbors in the inference scheme results in a computational complexity that is independent of the size of the solution space and produces sufficient approximations of the true distribution when the solution for each pixel is most likely found in a small subset of the set of potential solutions.
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: March 3, 2020
    Assignee: GOOGLE LLC
    Inventors: Sean Ryan Fanello, Julien Pascal Christophe Valentin, Adarsh Prakash Murthy Kowdle, Christoph Rhemann, Vladimir Tankovich, Philip L. Davidson, Shahram Izadi
  • Patent number: 10554957
    Abstract: A first and second image of a scene are captured. Each of a plurality of pixels in the first image is associated with a disparity value. An image patch associated with each of the plurality of pixels of the first image and the second image is mapped into a binary vector. Thus, values of pixels in an image are mapped to a binary space using a function that preserves characteristics of values of the pixels. The difference between the binary vector associated with each of the plurality of pixels of the first image and its corresponding binary vector in the second image designated by the disparity value associated with each of the plurality of pixels of the first image is determined. Based on the determined difference between binary vectors, correspondence between the plurality of pixels of the first image and the second image is established.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: February 4, 2020
    Assignee: GOOGLE LLC
    Inventors: Julien Pascal Christophe Valentin, Sean Ryan Fanello, Adarsh Prakash Murthy Kowdle, Christoph Rhemann, Vladimir Tankovich, Philip L. Davidson, Shahram Izadi
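A hedged sketch of the correspondence check in the entry above: given a candidate disparity for each pixel of the first image, compare the binary vector of its patch with the binary vector of the patch the disparity points to in the second image, and accept the correspondence when the Hamming distance is small. The random-projection binarization here is a stand-in for the learned characteristic-preserving function, and the threshold is illustrative.

```python
import numpy as np

def patch_codes(image, patch=7, n_bits=32, seed=0):
    """Return an (H, W, n_bits) grid of binary codes (zero near borders)."""
    h, w = image.shape
    half = patch // 2
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((patch * patch, n_bits))
    codes = np.zeros((h, w, n_bits), dtype=np.uint8)
    for y in range(half, h - half):
        for x in range(half, w - half):
            vec = image[y-half:y+half+1, x-half:x+half+1].reshape(-1)
            codes[y, x] = (vec @ proj > 0)
    return codes

def validate_disparities(left, right, disp, max_hamming=6):
    """disp: (H, W) integer candidate disparities for the left image.
    Returns a boolean map of accepted correspondences."""
    cl, cr = patch_codes(left), patch_codes(right)
    h, w, _ = cl.shape
    ok = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xr = x - int(disp[y, x])
            if 0 <= xr < w:
                ok[y, x] = np.count_nonzero(cl[y, x] != cr[y, xr]) <= max_hamming
    return ok
```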
  • Patent number: 10311282
    Abstract: Region of interest detection in raw time of flight images is described. For example, a computing device receives at least one raw image captured for a single frame by a time of flight camera. The raw image depicts one or more objects in an environment of the time of flight camera (such as human hands, bodies or any other objects). The raw image is input to a trained region detector and in response one or more regions of interest in the raw image are received. A received region of interest comprises image elements of the raw image which are predicted to depict at least part of one of the objects. A depth computation logic computes depth from the one or more regions of interest of the raw image.
    Type: Grant
    Filed: September 11, 2017
    Date of Patent: June 4, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jamie Daniel Joseph Shotton, Cem Keskin, Christoph Rhemann, Toby Sharp, Duncan Paul Robertson, Pushmeet Kohli, Andrew William Fitzgibbon, Shahram Izadi
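A hedged sketch of the pipeline above: a stand-in region detector picks out parts of a raw time-of-flight frame likely to contain the object, and per-pixel depth is then computed only inside those regions. A simple amplitude threshold replaces the trained region detector, and a basic four-phase conversion replaces the real depth computation logic; all parameters are illustrative.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def detect_rois(amplitude, thresh=0.2):
    """Return coarse (y0, x0, y1, x1) boxes around high-amplitude pixels."""
    ys, xs = np.nonzero(amplitude > thresh)
    if len(ys) == 0:
        return []
    return [(ys.min(), xs.min(), ys.max() + 1, xs.max() + 1)]  # one box for simplicity

def depth_in_rois(raw_phases, amplitude, mod_freq=20e6):
    """raw_phases: (4, H, W) raw correlation samples at 0/90/180/270 degrees.
    Computes depth only inside detected regions of interest."""
    depth = np.full(amplitude.shape, np.nan)
    for (y0, x0, y1, x1) in detect_rois(amplitude):
        q = raw_phases[:, y0:y1, x0:x1]
        phase = np.arctan2(q[3] - q[1], q[0] - q[2]) % (2 * np.pi)
        depth[y0:y1, x0:x1] = C * phase / (4 * np.pi * mod_freq)
    return depth
```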