Patents by Inventor Ravi Ramamoorthi

Ravi Ramamoorthi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11816779
    Abstract: Methods and systems disclosed herein relate generally to surface-rendering neural networks that represent and render a variety of material appearances (e.g., textured surfaces) at different scales. The system receives image metadata (a query) for a texel that includes its position, incoming and outgoing radiance directions, and a kernel size. The system applies an offset-prediction neural network to the query to identify an offset coordinate for the texel. The system inputs the offset coordinate to a data structure to determine a reflectance feature vector for the texel of the textured surface. The reflectance feature vector is then processed using a decoder neural network to estimate a light-reflectance value for the texel, and the light-reflectance value is used to render the texel of the textured surface.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: November 14, 2023
    Assignees: Adobe Inc., The Regents of the University of California
    Inventors: Krishna Bhargava Mullia Lakshminarayana, Zexiang Xu, Milos Hasan, Ravi Ramamoorthi, Alexandr Kuznetsov
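The pipeline the abstract describes (query → offset-prediction network → feature lookup → decoder) can be sketched in outline. The network sizes, the grid-shaped feature "data structure", and the nearest-neighbour lookup below are illustrative assumptions, not the patented architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, x):
    # Tiny fully connected network with ReLU hidden layers.
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = params[-1]
    return x @ W + b

def init_mlp(sizes):
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

# Query: 2-D texel position, incoming/outgoing directions (3-D each), kernel size.
query = np.concatenate([rng.random(2), rng.random(3), rng.random(3), [0.5]])

offset_net = init_mlp([9, 32, 2])        # predicts a 2-D offset coordinate
feature_grid = rng.random((16, 16, 8))   # stand-in for the learned feature data structure
decoder_net = init_mlp([8 + 6, 32, 3])   # feature + directions -> RGB reflectance

# 1) Offset prediction.
offset = mlp(offset_net, query)
uv = (query[:2] + offset) % 1.0          # offset texel coordinate in [0, 1)

# 2) Feature lookup (nearest neighbour for brevity; a real system would interpolate).
i, j = (uv * feature_grid.shape[0]).astype(int) % feature_grid.shape[0]
feature = feature_grid[i, j]

# 3) Decode a light-reflectance value for the texel.
reflectance = mlp(decoder_net, np.concatenate([feature, query[2:8]]))
print(reflectance.shape)  # (3,)
```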
  • Patent number: 11669986
    Abstract: Enhanced methods and systems for generating both a geometry model and an optical-reflectance model (together, an object reconstruction model) for a physical object, based on a sparse set of images of the object captured from a sparse set of viewpoints. The geometry model is a mesh model that includes a set of vertices representing the object's surface. The reflectance model is a spatially-varying bidirectional reflectance distribution function (SVBRDF) that is parameterized via multiple channels (e.g., diffuse albedo, surface roughness, specular albedo, and surface normals). For each vertex of the geometry model, the reflectance model includes a value for each of the multiple channels. The object reconstruction model is employed to render graphical representations of a virtualized object (a VO based on the physical object) within a computation-based (e.g., a virtual or immersive) environment. Via the reconstruction model, the VO may be rendered from arbitrary viewpoints and under arbitrary lighting conditions.
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: June 6, 2023
    Assignees: Adobe Inc., The Regents of the University of California
    Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, David Jay Kriegman, Ravi Ramamoorthi
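A minimal sketch of the per-vertex representation this abstract describes: mesh vertices paired with multi-channel SVBRDF values, evaluated under an arbitrary light and view. The Blinn-Phong-style shading below is a stand-in chosen for brevity, not the patent's actual reflectance model:

```python
import numpy as np

rng = np.random.default_rng(1)
V = 5                                    # number of mesh vertices

# Geometry model: vertex positions; reflectance model: per-vertex SVBRDF channels.
vertices = rng.random((V, 3))
svbrdf = {
    "diffuse_albedo":  rng.random((V, 3)),
    "roughness":       rng.random((V, 1)),
    "specular_albedo": rng.random((V, 3)),
    "normals":         np.tile([0.0, 0.0, 1.0], (V, 1)),
}

def shade(svbrdf, light_dir, view_dir):
    """Evaluate a simple Blinn-Phong-style stand-in for the SVBRDF at each vertex."""
    n = svbrdf["normals"]
    h = light_dir + view_dir
    h = h / np.linalg.norm(h)                          # half vector
    ndl = np.clip(n @ light_dir, 0.0, None)[:, None]   # diffuse cosine term
    ndh = np.clip(n @ h, 0.0, None)[:, None]           # specular lobe term
    shininess = 2.0 / np.maximum(svbrdf["roughness"] ** 2, 1e-3)
    return svbrdf["diffuse_albedo"] * ndl + svbrdf["specular_albedo"] * ndh ** shininess

# Arbitrary viewpoint and lighting direction, as the abstract promises.
rgb = shade(svbrdf, np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
print(rgb.shape)  # (V, 3)
```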
  • Publication number: 20230169715
    Abstract: Methods and systems disclosed herein relate generally to surface-rendering neural networks that represent and render a variety of material appearances (e.g., textured surfaces) at different scales. The system receives image metadata (a query) for a texel that includes its position, incoming and outgoing radiance directions, and a kernel size. The system applies an offset-prediction neural network to the query to identify an offset coordinate for the texel. The system inputs the offset coordinate to a data structure to determine a reflectance feature vector for the texel of the textured surface. The reflectance feature vector is then processed using a decoder neural network to estimate a light-reflectance value for the texel, and the light-reflectance value is used to render the texel of the textured surface.
    Type: Application
    Filed: November 30, 2021
    Publication date: June 1, 2023
    Inventors: Krishna Bhargava Mullia Lakshminarayana, Zexiang Xu, Milos Hasan, Ravi Ramamoorthi, Alexandr Kuznetsov
  • Publication number: 20220343522
    Abstract: Enhanced methods and systems for generating both a geometry model and an optical-reflectance model (together, an object reconstruction model) for a physical object, based on a sparse set of images of the object captured from a sparse set of viewpoints. The geometry model is a mesh model that includes a set of vertices representing the object's surface. The reflectance model is a spatially-varying bidirectional reflectance distribution function (SVBRDF) that is parameterized via multiple channels (e.g., diffuse albedo, surface roughness, specular albedo, and surface normals). For each vertex of the geometry model, the reflectance model includes a value for each of the multiple channels. The object reconstruction model is employed to render graphical representations of a virtualized object (a VO based on the physical object) within a computation-based (e.g., a virtual or immersive) environment. Via the reconstruction model, the VO may be rendered from arbitrary viewpoints and under arbitrary lighting conditions.
    Type: Application
    Filed: April 16, 2021
    Publication date: October 27, 2022
    Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, David Jay Kriegman, Ravi Ramamoorthi
  • Publication number: 20220335636
    Abstract: A scene reconstruction system renders images of a scene with high-quality geometry and appearance and supports view synthesis, relighting, and scene editing. Given a set of input images of a scene, the scene reconstruction system trains a network to learn a volume representation of the scene that includes separate geometry and reflectance parameters. Using the volume representation, the scene reconstruction system can render images of the scene from arbitrary viewpoints (view synthesis) and under arbitrary lighting (relighting). Additionally, the scene reconstruction system can render images that change the reflectance of objects in the scene (scene editing).
    Type: Application
    Filed: April 15, 2021
    Publication date: October 20, 2022
    Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, Milos Hasan, Yannick Hold-Geoffroy, David Jay Kriegman, Ravi Ramamoorthi
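The factoring of the volume representation into separate geometry and reflectance parameters can be illustrated with a toy ray march. The grid size, step size, and the simple albedo-times-light shading are assumptions for illustration only; they show why relighting and scene editing fall out of the factored representation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Volume representation: per-sample geometry (density) and reflectance (albedo).
N = 32
density = rng.random(N) * 2.0            # geometry parameters along one ray
albedo = rng.random((N, 3))              # reflectance parameters along the ray
dt = 1.0 / N                             # step size along the ray

def render(density, albedo, light_rgb):
    """March along the ray, accumulating radiance weighted by transmittance."""
    alpha = 1.0 - np.exp(-density * dt)                      # per-sample opacity
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    weights = trans * alpha
    return (weights[:, None] * albedo * light_rgb).sum(axis=0)

white = render(density, albedo, np.ones(3))
red = render(density, albedo, np.array([1.0, 0.2, 0.2]))     # relighting
edited = render(density, albedo * 0.5, np.ones(3))           # scene editing
print(white, red, edited)
```

Because geometry and reflectance are stored separately, changing the light or the albedo requires no retraining, only re-rendering.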
  • Patent number: 11094043
    Abstract: Devices, systems and methods for generating high dynamic range images and video from a set of low dynamic range images and video using convolutional neural networks (CNNs) are described. One exemplary method for generating high dynamic range visual media includes generating, using a first CNN to merge a first set of images having a first dynamic range, a final image having a second dynamic range that is greater than the first dynamic range. Another exemplary method for generating training data includes generating sets of static and dynamic images having a first dynamic range, generating, based on a weighted sum of the set of static images, a set of ground truth images having a second dynamic range greater than the first dynamic range, and replacing at least one of the set of dynamic images with an image from the set of static images to generate a set of training images.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: August 17, 2021
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Nima Khademi Kalantari, Ravi Ramamoorthi
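The "weighted sum of the set of static images" used to build ground-truth HDR data can be sketched as a classic exposure merge. The triangle weighting function and the exposure values below are illustrative assumptions, not the patent's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(3)

# Three LDR exposures of a static scene (sensor values clipped to [0, 1]).
exposures = np.array([0.25, 1.0, 4.0])            # relative exposure times
hdr_scene = rng.random((4, 4, 3)) * 4.0           # latent HDR radiance
ldr = np.clip(hdr_scene[None] * exposures[:, None, None, None], 0.0, 1.0)

def merge_weighted(ldr, exposures):
    """Weighted-sum merge: trust well-exposed pixels, down-weight clipped ones."""
    w = 1.0 - np.abs(ldr - 0.5) * 2.0             # triangle weighting around mid-grey
    radiance = ldr / exposures[:, None, None, None]
    return (w * radiance).sum(0) / np.maximum(w.sum(0), 1e-6)

gt_hdr = merge_weighted(ldr, exposures)
print(gt_hdr.shape)  # (4, 4, 3)
```

In the patent's training-data method, merges like this over static images supply the ground truth, while a dynamic image is swapped in as input so the CNN learns to handle motion.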
  • Patent number: 10574905
    Abstract: Systems and methods in accordance with embodiments of this invention perform depth regularization and semiautomatic interactive matting using images. In an embodiment of the invention, the image processing pipeline application directs a processor to receive (i) an image and (ii) an initial depth map corresponding to the depths of pixels within the image, regularize the initial depth map into a dense depth map using depth values of known pixels to compute depth values of unknown pixels, determine an object of interest to be extracted from the image, generate an initial trimap using the dense depth map and the object of interest to be extracted from the image, and apply color image matting to unknown regions of the initial trimap to generate a matte for image matting.
    Type: Grant
    Filed: October 1, 2018
    Date of Patent: February 25, 2020
    Assignee: FotoNation Limited
    Inventors: Manohar Srikanth, Ravi Ramamoorthi, Kartik Venkataraman, Priyam Chatterjee
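The regularization step, turning a sparse initial depth map into a dense one by using known pixels to compute unknown ones, can be approximated by simple diffusion, followed by a depth-thresholded trimap. The iteration count and thresholds below are illustrative assumptions, not the patented method:

```python
import numpy as np

# Sparse initial depth map: NaN marks pixels with unknown depth.
depth = np.full((6, 6), np.nan)
depth[0, 0], depth[5, 5] = 1.0, 3.0              # known near/far depths

def regularize(depth, iters=200):
    """Fill unknown depths by iteratively averaging neighbours (diffusion)."""
    known = ~np.isnan(depth)
    d = np.where(known, depth, np.nanmean(depth))
    for _ in range(iters):
        padded = np.pad(d, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        d = np.where(known, depth, avg)          # known pixels stay fixed
    return d

dense = regularize(depth)

# Initial trimap from the dense depth: foreground (1) / background (0) / unknown (-1).
trimap = np.where(dense < 1.8, 1, np.where(dense > 2.2, 0, -1))
print(trimap)
```

Color image matting would then be applied only in the `-1` (unknown) band of the trimap.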
  • Publication number: 20190096046
    Abstract: Devices, systems and methods for generating high dynamic range images and video from a set of low dynamic range images and video using convolutional neural networks (CNNs) are described. One exemplary method for generating high dynamic range visual media includes generating, using a first CNN to merge a first set of images having a first dynamic range, a final image having a second dynamic range that is greater than the first dynamic range. Another exemplary method for generating training data includes generating sets of static and dynamic images having a first dynamic range, generating, based on a weighted sum of the set of static images, a set of ground truth images having a second dynamic range greater than the first dynamic range, and replacing at least one of the set of dynamic images with an image from the set of static images to generate a set of training images.
    Type: Application
    Filed: September 25, 2018
    Publication date: March 28, 2019
    Inventors: Nima Khademi Kalantari, Ravi Ramamoorthi
  • Publication number: 20190037150
    Abstract: Systems and methods in accordance with embodiments of this invention perform depth regularization and semiautomatic interactive matting using images. In an embodiment of the invention, the image processing pipeline application directs a processor to receive (i) an image and (ii) an initial depth map corresponding to the depths of pixels within the image, regularize the initial depth map into a dense depth map using depth values of known pixels to compute depth values of unknown pixels, determine an object of interest to be extracted from the image, generate an initial trimap using the dense depth map and the object of interest to be extracted from the image, and apply color image matting to unknown regions of the initial trimap to generate a matte for image matting.
    Type: Application
    Filed: October 1, 2018
    Publication date: January 31, 2019
    Inventors: Manohar Srikanth, Ravi Ramamoorthi, Kartik Venkataraman, Priyam Chatterjee
  • Patent number: 10089740
    Abstract: Systems and methods in accordance with embodiments of this invention perform depth regularization and semiautomatic interactive matting using images. In an embodiment of the invention, the image processing pipeline application directs a processor to receive (i) an image and (ii) an initial depth map corresponding to the depths of pixels within the image, regularize the initial depth map into a dense depth map using depth values of known pixels to compute depth values of unknown pixels, determine an object of interest to be extracted from the image, generate an initial trimap using the dense depth map and the object of interest to be extracted from the image, and apply color image matting to unknown regions of the initial trimap to generate a matte for image matting.
    Type: Grant
    Filed: March 9, 2015
    Date of Patent: October 2, 2018
    Assignee: FotoNation Limited
    Inventors: Manohar Srikanth, Ravi Ramamoorthi, Kartik Venkataraman, Priyam Chatterjee
  • Publication number: 20150254868
    Abstract: Systems and methods in accordance with embodiments of this invention perform depth regularization and semiautomatic interactive matting using images. In an embodiment of the invention, the image processing pipeline application directs a processor to receive (i) an image and (ii) an initial depth map corresponding to the depths of pixels within the image, regularize the initial depth map into a dense depth map using depth values of known pixels to compute depth values of unknown pixels, determine an object of interest to be extracted from the image, generate an initial trimap using the dense depth map and the object of interest to be extracted from the image, and apply color image matting to unknown regions of the initial trimap to generate a matte for image matting.
    Type: Application
    Filed: March 9, 2015
    Publication date: September 10, 2015
    Inventors: Manohar Srikanth, Ravi Ramamoorthi, Kartik Venkataraman, Priyam Chatterjee