Patents by Inventor Tianfan Xue

Tianfan Xue has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11800235
    Abstract: Apparatus and methods related to applying lighting models to images of objects are provided. A neural network can be trained to apply a lighting model to an input image. The training of the neural network can utilize confidence learning that is based on light predictions and prediction confidence values associated with lighting of the input image. A computing device can receive an input image of an object and data about a particular lighting model to be applied to the input image. The computing device can determine an output image of the object by using the trained neural network to apply the particular lighting model to the input image of the object.
    Type: Grant
    Filed: August 19, 2019
    Date of Patent: October 24, 2023
    Assignee: Google LLC
    Inventors: Ryan Geiss, Marc S. Levoy, Samuel William Hasinoff, Tianfan Xue
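The confidence-learning idea in this abstract can be sketched as a confidence-weighted regression loss, where the network's per-prediction confidence scales the error and a log penalty discourages the degenerate answer of zero confidence everywhere. All names here are illustrative, not from the patent:

```python
import numpy as np

def confidence_weighted_loss(pred_light, true_light, confidence, eps=1e-6):
    """Confidence-weighted regression loss (illustrative sketch).

    Squared error is weighted by the predicted confidence; the -log term
    penalizes the trivial solution of predicting zero confidence.
    """
    sq_err = (pred_light - true_light) ** 2
    return float(np.mean(confidence * sq_err - np.log(confidence + eps)))
```

Under this sketch, a confident wrong prediction is penalized more heavily than a confident correct one, which is the behavior confidence learning relies on during training.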
  • Publication number: 20230308769
    Abstract: An example method includes displaying, by a graphical user interface of a computing device, an image comprising a target region. The target region may be smaller than an entirety of the image. The method includes providing, by the graphical user interface, a user-adjustable control to adjust a desired local brightness exposure level for the target region. The method includes receiving, by the user-adjustable control, a user indication of the desired local brightness exposure level for the target region. The method includes adjusting the local brightness exposure level for the target region in the image in response to the user indication.
    Type: Application
    Filed: March 25, 2022
    Publication date: September 28, 2023
    Inventors: Tianfan Xue, Samuel William Hasinoff, Rachit Gupta
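The adjustment step this abstract describes, applying a user-chosen exposure level only inside the target region, can be sketched with a mask and a gain of 2 to the power of the exposure value. Function and parameter names are hypothetical; a production implementation would feather the mask edges:

```python
import numpy as np

def adjust_local_exposure(image, mask, ev):
    """Apply an exposure gain of 2**ev only inside the masked target region.

    image: float array with values in [0, 1]; mask: boolean array of the
    same shape, True inside the target region.
    """
    out = image.copy()
    out[mask] = np.clip(out[mask] * 2.0 ** ev, 0.0, 1.0)
    return out
```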
  • Publication number: 20220375045
    Abstract: A method includes obtaining an input image that contains a particular representation of lens flare, and processing the input image by a machine learning model to generate a de-flared image that includes the input image with at least part of the particular representation of lens flare removed. The machine learning (ML) model may be trained by generating training images that combine respective baseline images with corresponding lens flare images. For each respective training image, a modified image may be determined by processing the respective training image by the ML model, and a loss value may be determined based on a loss function comparing the modified image to a corresponding baseline image used to generate the respective training image. Parameters of the ML model may be adjusted based on the loss value determined for each respective training image and the loss function.
    Type: Application
    Filed: November 9, 2020
    Publication date: November 24, 2022
    Inventors: Yicheng Wu, Qiurui He, Tianfan Xue, Rahul Garg, Jiawen Chen, Jonathan T. Barron
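The training-data synthesis and loss computation described above can be sketched as follows, assuming a simple additive composite and a mean-absolute-error loss (both assumptions of this sketch, not details taken from the patent):

```python
import numpy as np

def make_training_image(baseline, flare):
    """Combine a clean baseline image with a lens-flare image to form a
    synthetic training input (additive composite, clipped to [0, 1])."""
    return np.clip(baseline + flare, 0.0, 1.0)

def deflare_loss(modified, baseline):
    """Loss comparing the model's de-flared output to the clean baseline
    used to generate the training image (mean absolute error here)."""
    return float(np.mean(np.abs(modified - baseline)))
```

A model that perfectly removes the flare recovers the baseline and drives this loss to zero.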
  • Publication number: 20220375042
    Abstract: A method includes obtaining dual-pixel image data that includes a first sub-image and a second sub-image, and generating an in-focus image, a first kernel corresponding to the first sub-image, and a second kernel corresponding to the second sub-image. A loss value may be determined using a loss function that determines a difference between (i) a convolution of the first sub-image with the second kernel and (ii) a convolution of the second sub-image with the first kernel, and/or a sum of (i) a difference between the first sub-image and a convolution of the in-focus image with the first kernel and (ii) a difference between the second sub-image and a convolution of the in-focus image with the second kernel. Based on the loss value and the loss function, the in-focus image, the first kernel, and/or the second kernel, may be updated and displayed.
    Type: Application
    Filed: November 13, 2020
    Publication date: November 24, 2022
    Inventors: Rahul Garg, Neal Wadhwa, Pratul Preeti Srinivasan, Tianfan Xue, Jiawen Chen, Shumian Xin, Jonathan T. Barron
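Both loss terms spelled out in this abstract can be written down directly. The sketch below works on 1-D signals for brevity (the patent concerns 2-D sub-images) and uses names invented for illustration:

```python
import numpy as np

def dual_pixel_loss(sub1, sub2, k1, k2, in_focus):
    """Sketch of the two loss terms from the abstract, on 1-D signals.

    Term A: sub1 convolved with k2 should equal sub2 convolved with k1.
    Term B: each sub-image should equal the in-focus signal convolved
    with its own kernel.
    """
    conv = lambda x, k: np.convolve(x, k, mode="same")
    term_a = np.mean((conv(sub1, k2) - conv(sub2, k1)) ** 2)
    term_b = (np.mean((sub1 - conv(in_focus, k1)) ** 2)
              + np.mean((sub2 - conv(in_focus, k2)) ** 2))
    return float(term_a + term_b)
```

When the in-focus estimate and both kernels are consistent with the two sub-images, both terms vanish, which is what makes the loss usable for jointly updating the image and the kernels.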
  • Patent number: 11490070
    Abstract: Scenes can be imaged under low-light conditions using flash photography. However, the flash can be irritating to individuals being photographed, especially when those individuals' eyes have adapted to the dark. Additionally, portions of images generated using a flash can appear washed-out or otherwise negatively affected by the flash. These issues can be addressed by using a flash at an invisible wavelength, e.g., an infrared and/or ultraviolet flash. At the same time a scene is being imaged, at the invisible wavelength of the invisible flash, the scene can also be imaged at visible wavelengths. This can include simultaneously using both a standard RGB camera and a modified visible-plus-invisible-wavelengths camera (e.g., an “IR-G-UV” camera). The visible and invisible image data can then be combined to generate an improved visible-light image of the scene, e.g., that approximates a visible light image of the scene, had the scene been illuminated during daytime light conditions.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: November 1, 2022
    Assignee: Google LLC
    Inventors: Tianfan Xue, Jian Wang, Jiawen Chen, Jonathan Barron
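The fusion of visible and invisible-flash image data can be illustrated with a toy single-channel decomposition: low frequencies (color and overall brightness) come from the noisy visible capture, high-frequency detail from the clean invisible-flash capture. This only shows the idea; the patent describes a fuller pipeline:

```python
import numpy as np

def fuse_visible_and_flash(visible, ir_flash, detail_weight=1.0):
    """Toy 1-D fusion: low frequencies from the noisy visible image,
    high-frequency detail from the clean invisible-flash image."""
    def lowpass(x):
        pad = np.pad(x, 1, mode="edge")       # 3-tap box filter
        return (pad[:-2] + pad[1:-1] + pad[2:]) / 3.0
    detail = ir_flash - lowpass(ir_flash)
    return np.clip(lowpass(visible) + detail_weight * detail, 0.0, 1.0)
```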
  • Publication number: 20220327769
    Abstract: Examples relate to implementations of a neural light transport. A computing system may obtain data indicative of a plurality of UV texture maps and a geometry of an object. Each UV texture map depicts the object from a perspective of a plurality of perspectives. The computing system may train a neural network to learn a light transport function using the data. The light transport function may be a continuous function that specifies how light interacts with the object when the object is viewed from the plurality of perspectives. The computing system may generate an output UV texture map that depicts the object from a synthesized perspective based on an application of the light transport function by the trained neural network.
    Type: Application
    Filed: May 4, 2020
    Publication date: October 13, 2022
    Inventors: Yun-Ta Tsai, Xiuming Zhang, Jonathan T. Barron, Sean Fanello, Tiancheng Sun, Tianfan Xue
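A non-neural stand-in makes the role of the learned light transport function concrete: given UV texture maps captured from several view directions, return the one whose direction is closest to the query. The trained network described in the abstract replaces this lookup with a continuous, view-interpolating function. Names are hypothetical:

```python
import numpy as np

def query_light_transport(textures, view_dirs, query_dir):
    """Nearest-view lookup as a stand-in for the learned, continuous
    light transport function: pick the UV texture map whose capture
    direction has the highest cosine similarity to the query direction
    (directions assumed to be unit vectors)."""
    sims = np.asarray(view_dirs, dtype=float) @ np.asarray(query_dir, dtype=float)
    return textures[int(np.argmax(sims))]
```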
  • Publication number: 20220256068
    Abstract: Apparatus and methods related to applying lighting models to images of objects are provided. A neural network can be trained to apply a lighting model to an input image. The training of the neural network can utilize confidence learning that is based on light predictions and prediction confidence values associated with lighting of the input image. A computing device can receive an input image of an object and data about a particular lighting model to be applied to the input image. The computing device can determine an output image of the object by using the trained neural network to apply the particular lighting model to the input image of the object.
    Type: Application
    Filed: August 19, 2019
    Publication date: August 11, 2022
    Inventors: Ryan Geiss, Marc S. Levoy, Samuel William Hasinoff, Tianfan Xue
  • Publication number: 20210274151
    Abstract: Scenes can be imaged under low-light conditions using flash photography. However, the flash can be irritating to individuals being photographed, especially when those individuals' eyes have adapted to the dark. Additionally, portions of images generated using a flash can appear washed-out or otherwise negatively affected by the flash. These issues can be addressed by using a flash at an invisible wavelength, e.g., an infrared and/or ultraviolet flash. At the same time a scene is being imaged, at the invisible wavelength of the invisible flash, the scene can also be imaged at visible wavelengths. This can include simultaneously using both a standard RGB camera and a modified visible-plus-invisible-wavelengths camera (e.g., an “IR-G-UV” camera). The visible and invisible image data can then be combined to generate an improved visible-light image of the scene, e.g., that approximates a visible light image of the scene, had the scene been illuminated during daytime light conditions.
    Type: Application
    Filed: May 17, 2021
    Publication date: September 2, 2021
    Inventors: Tianfan Xue, Jian Wang, Jiawen Chen, Jonathan Barron
  • Patent number: 11039122
    Abstract: Scenes can be imaged under low-light conditions using flash photography. However, the flash can be irritating to individuals being photographed, especially when those individuals' eyes have adapted to the dark. Additionally, portions of images generated using a flash can appear washed-out or otherwise negatively affected by the flash. These issues can be addressed by using a flash at an invisible wavelength, e.g., an infrared and/or ultraviolet flash. At the same time a scene is being imaged, at the invisible wavelength of the invisible flash, the scene can also be imaged at visible wavelengths. This can include simultaneously using both a standard RGB camera and a modified visible-plus-invisible-wavelengths camera (e.g., an “IR-G-UV” camera). The visible and invisible image data can then be combined to generate an improved visible-light image of the scene, e.g., that approximates a visible light image of the scene, had the scene been illuminated during daytime light conditions.
    Type: Grant
    Filed: September 4, 2018
    Date of Patent: June 15, 2021
    Assignee: Google LLC
    Inventors: Tianfan Xue, Jian Wang, Jiawen Chen, Jonathan Barron
  • Patent number: 10636149
    Abstract: An apparatus according to an embodiment of the present invention enables measurement and visualization of a refractive field such as a fluid. An embodiment device obtains video captured by a video camera with an imaging plane. Representations of apparent motions in the video are correlated to determine actual motions of the refractive field. A textured background of the scene can be modeled as stationary, with a refractive field translating between background and video camera. This approach offers multiple advantages over conventional fluid flow visualization, including an ability to use ordinary video equipment outside a laboratory without particle injection. Even natural backgrounds can be used, and fluid motion can be distinguished from refraction changes. Embodiments can render refractive flow visualizations for augmented reality, wearable devices, and video microscopes.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: April 28, 2020
    Assignee: Massachusetts Institute of Technology
    Inventors: William T. Freeman, Frederic Durand, Tianfan Xue, Michael Rubinstein, Neal Wadhwa
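The core measurement in this abstract, correlating apparent motions of a stationary textured background to recover the refractive field, can be sketched as a brute-force shift estimate on one background row. The function and its parameters are illustrative only:

```python
import numpy as np

def apparent_shift(reference_row, observed_row, max_shift=5):
    """Estimate the apparent translation of a textured background row by
    brute-force matching. In refractive-flow analysis this apparent
    motion is attributed to the refractive field translating between the
    stationary background and the camera, not to background motion."""
    best_s, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        err = np.mean((np.roll(reference_row, s) - observed_row) ** 2)
        if err < best_err:
            best_s, best_err = s, err
    return best_s
```

Repeating this estimate over patches and frames yields a dense apparent-motion field from which the refractive flow is visualized.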
  • Publication number: 20200077076
    Abstract: Scenes can be imaged under low-light conditions using flash photography. However, the flash can be irritating to individuals being photographed, especially when those individuals' eyes have adapted to the dark. Additionally, portions of images generated using a flash can appear washed-out or otherwise negatively affected by the flash. These issues can be addressed by using a flash at an invisible wavelength, e.g., an infrared and/or ultraviolet flash. At the same time a scene is being imaged, at the invisible wavelength of the invisible flash, the scene can also be imaged at visible wavelengths. This can include simultaneously using both a standard RGB camera and a modified visible-plus-invisible-wavelengths camera (e.g., an “IR-G-UV” camera). The visible and invisible image data can then be combined to generate an improved visible-light image of the scene, e.g., that approximates a visible light image of the scene, had the scene been illuminated during daytime light conditions.
    Type: Application
    Filed: September 4, 2018
    Publication date: March 5, 2020
    Inventors: Tianfan Xue, Jian Wang, Jiawen Chen, Jonathan Barron
  • Publication number: 20180096482
    Abstract: An apparatus according to an embodiment of the present invention enables measurement and visualization of a refractive field such as a fluid. An embodiment device obtains video captured by a video camera with an imaging plane. Representations of apparent motions in the video are correlated to determine actual motions of the refractive field. A textured background of the scene can be modeled as stationary, with a refractive field translating between background and video camera. This approach offers multiple advantages over conventional fluid flow visualization, including an ability to use ordinary video equipment outside a laboratory without particle injection. Even natural backgrounds can be used, and fluid motion can be distinguished from refraction changes. Embodiments can render refractive flow visualizations for augmented reality, wearable devices, and video microscopes.
    Type: Application
    Filed: November 21, 2017
    Publication date: April 5, 2018
    Inventors: William T. Freeman, Frederic Durand, Tianfan Xue, Michael Rubinstein, Neal Wadhwa
  • Patent number: 9842404
    Abstract: An imaging method and corresponding apparatus according to an embodiment of the present invention enables measurement and visualization of fluid flow. An embodiment method includes obtaining video captured by a video camera with an imaging plane. Representations of motions in the video are correlated. A textured background of the scene can be modeled as stationary, with a refractive field translating between background and video camera. This approach offers multiple advantages over conventional fluid flow visualization, including an ability to use ordinary video equipment outside a laboratory without particle injection. Even natural backgrounds can be used, and fluid motion can be distinguished from refraction changes. Depth and three-dimensional information can be recovered using stereo video, and uncertainty methods can enhance measurement robustness where backgrounds are less textured. Example applications can include avionics and hydrocarbon leak detection.
    Type: Grant
    Filed: May 15, 2014
    Date of Patent: December 12, 2017
    Assignee: Massachusetts Institute of Technology
    Inventors: William T. Freeman, Frederic Durand, Tianfan Xue, Michael Rubinstein, Neal Wadhwa
  • Patent number: 9710917
    Abstract: An imaging method and corresponding apparatus according to an embodiment of the present invention enables measurement and visualization of fluid flow. An embodiment method includes obtaining video captured by a video camera with an imaging plane. Representations of motions in the video are correlated. A textured background of the scene can be modeled as stationary, with a refractive field translating between background and video camera. This approach offers multiple advantages over conventional fluid flow visualization, including an ability to use ordinary video equipment outside a laboratory without particle injection. Even natural backgrounds can be used, and fluid motion can be distinguished from refraction changes. Depth and three-dimensional information can be recovered using stereo video, and uncertainty methods can enhance measurement robustness where backgrounds are less textured. Example applications can include avionics and hydrocarbon leak detection.
    Type: Grant
    Filed: May 15, 2014
    Date of Patent: July 18, 2017
    Assignee: Massachusetts Institute of Technology
    Inventors: William T. Freeman, Frederic Durand, Tianfan Xue, Michael Rubinstein, Neal Wadhwa
  • Publication number: 20150016690
    Abstract: An imaging method and corresponding apparatus according to an embodiment of the present invention enables measurement and visualization of fluid flow. An embodiment method includes obtaining video captured by a video camera with an imaging plane. Representations of motions in the video are correlated. A textured background of the scene can be modeled as stationary, with a refractive field translating between background and video camera. This approach offers multiple advantages over conventional fluid flow visualization, including an ability to use ordinary video equipment outside a laboratory without particle injection. Even natural backgrounds can be used, and fluid motion can be distinguished from refraction changes. Depth and three-dimensional information can be recovered using stereo video, and uncertainty methods can enhance measurement robustness where backgrounds are less textured. Example applications can include avionics and hydrocarbon leak detection.
    Type: Application
    Filed: May 15, 2014
    Publication date: January 15, 2015
    Applicant: Massachusetts Institute of Technology
    Inventors: William T. Freeman, Frederic Durand, Tianfan Xue, Michael Rubinstein, Neal Wadhwa
  • Publication number: 20140340502
    Abstract: An imaging method and corresponding apparatus according to an embodiment of the present invention enables measurement and visualization of fluid flow. An embodiment method includes obtaining video captured by a video camera with an imaging plane. Representations of motions in the video are correlated. A textured background of the scene can be modeled as stationary, with a refractive field translating between background and video camera. This approach offers multiple advantages over conventional fluid flow visualization, including an ability to use ordinary video equipment outside a laboratory without particle injection. Even natural backgrounds can be used, and fluid motion can be distinguished from refraction changes. Depth and three-dimensional information can be recovered using stereo video, and uncertainty methods can enhance measurement robustness where backgrounds are less textured. Example applications can include avionics and hydrocarbon leak detection.
    Type: Application
    Filed: May 15, 2014
    Publication date: November 20, 2014
    Applicant: Massachusetts Institute of Technology
    Inventors: William T. Freeman, Frederic Durand, Tianfan Xue, Michael Rubinstein, Neal Wadhwa