Patents by Inventor Ravi Ramamoorthi
Ravi Ramamoorthi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240169653
Abstract: A scene modeling system accesses a three-dimensional (3D) scene that includes a 3D object. The scene modeling system applies a silhouette bidirectional texture function (SBTF) model to the 3D object to generate an output image of a textured material rendered as the surface of the 3D object. Applying the SBTF model includes determining a bounding geometry for the surface of the 3D object and determining, for each pixel of the output image, a pixel value based on the bounding geometry. The scene modeling system then displays the output image, based on the determined pixel values, via a user interface.
Type: Application
Filed: November 23, 2022
Publication date: May 23, 2024
Inventors: Krishna Bhargava Mullia Lakshminarayana, Zexiang Xu, Milos Hasan, Fujun Luan, Alexandr Kuznetsov, Xuezheng Wang, Ravi Ramamoorthi
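The per-pixel evaluation described in the abstract can be sketched as follows. This is an illustrative toy, not the patented method: the bounding geometry is hard-coded as a sphere, the camera is a minimal pinhole model, and `toy_sbtf` is a made-up stand-in for a real silhouette bidirectional texture function.

```python
import numpy as np

def intersect_sphere(origin, direction, center, radius):
    # Ray-sphere intersection for a normalized direction;
    # returns the nearest positive hit distance, or None on a miss.
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render_sbtf(width, height, sbtf, center, radius):
    """For each output pixel, cast a camera ray, intersect it with the
    bounding geometry (here a sphere), and query the texture function
    at the hit point; pixels whose rays miss stay black."""
    image = np.zeros((height, width, 3))
    origin = np.array([0.0, 0.0, -3.0])
    for y in range(height):
        for x in range(width):
            u = (x + 0.5) / width * 2 - 1
            v = (y + 0.5) / height * 2 - 1
            d = np.array([u, v, 1.0])
            d /= np.linalg.norm(d)
            t = intersect_sphere(origin, d, center, radius)
            if t is not None:
                p = origin + t * d
                image[y, x] = sbtf(p, -d)  # pixel value from bounding geometry
    return image

# Toy SBTF: shade by surface height, ignoring the view direction.
toy_sbtf = lambda p, wo: np.array([0.5 + 0.5 * p[1]] * 3)
img = render_sbtf(8, 8, toy_sbtf, np.array([0.0, 0.0, 0.0]), 1.0)
```

A real SBTF would also encode silhouette effects at grazing angles; the sketch only shows where the bounding-geometry query slots into the per-pixel loop.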
-
Patent number: 11816779
Abstract: Methods and systems disclosed herein relate generally to surface-rendering neural networks that represent and render a variety of material appearances (e.g., textured surfaces) at different scales. The system receives image metadata for a texel that includes a position, incoming and outgoing radiance directions, and a kernel size. The system applies an offset-prediction neural network to the query to identify an offset coordinate for the texel, then looks up the offset coordinate in a data structure to determine a reflectance feature vector for the texel of the textured surface. The feature vector is then processed by a decoder neural network to estimate a light-reflectance value for the texel, which is used to render the texel of the textured surface.
Type: Grant
Filed: November 30, 2021
Date of Patent: November 14, 2023
Assignees: Adobe Inc., The Regents of the University of California
Inventors: Krishna Bhargava Mullia Lakshminarayana, Zexiang Xu, Milos Hasan, Ravi Ramamoorthi, Alexandr Kuznetsov
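The three-stage pipeline in the abstract (offset prediction, feature lookup, decoding) can be sketched with untrained stand-ins. Everything here is hypothetical: the "networks" are single random linear layers and the feature grid is random, whereas the patented system trains these components; only the data flow matches the description.

```python
import numpy as np

rng = np.random.default_rng(0)

# Untrained stand-ins for the learned components.
W_offset = rng.normal(size=(9, 2)) * 0.01    # offset-prediction "network"
feature_grid = rng.normal(size=(16, 16, 8))  # per-texel latent feature vectors
W_decoder = rng.normal(size=(14, 3)) * 0.1   # decoder "network" to RGB reflectance

def shade_texel(uv, wi, wo, kernel_size):
    """Query = texel position, incoming/outgoing radiance directions,
    and a kernel size. Step 1: predict an offset coordinate for the
    texel. Step 2: fetch a feature vector from the data structure at
    the offset coordinate. Step 3: decode the feature vector (with the
    directions) into a light-reflectance value."""
    query = np.concatenate([uv, wi, wo, [kernel_size]])  # 9-dim query
    offset = query @ W_offset                            # step 1
    coord = np.clip(((uv + offset) * 16).astype(int), 0, 15)
    feature = feature_grid[coord[0], coord[1]]           # step 2
    return np.tanh(np.concatenate([feature, wi, wo]) @ W_decoder)  # step 3

rgb = shade_texel(np.array([0.25, 0.5]),
                  np.array([0.0, 0.0, 1.0]),
                  np.array([0.3, 0.0, 0.95]),
                  kernel_size=0.1)
```

The kernel size in the query is what lets a trained version of this pipeline return prefiltered reflectance at different scales from the same representation.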
-
Patent number: 11669986
Abstract: Enhanced methods and systems for generating both a geometry model and an optical-reflectance model (together, an object reconstruction model) for a physical object, based on a sparse set of images of the object from a sparse set of viewpoints. The geometry model is a mesh model that includes a set of vertices representing the object's surface. The reflectance model is an SVBRDF parameterized via multiple channels (e.g., diffuse albedo, surface roughness, specular albedo, and surface normals); for each vertex of the geometry model, the reflectance model includes a value for each channel. The object reconstruction model is employed to render graphical representations of a virtualized object (a VO based on the physical object) within a computation-based (e.g., a virtual or immersive) environment. Via the reconstruction model, the VO may be rendered from arbitrary viewpoints and under arbitrary lighting conditions.
Type: Grant
Filed: April 16, 2021
Date of Patent: June 6, 2023
Assignees: Adobe Inc., The Regents of the University of California
Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, David Jay Kriegman, Ravi Ramamoorthi
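The structure of such a reconstruction model (mesh vertices plus one value per SVBRDF channel per vertex) can be sketched as a plain data container. The shading function below uses a simple diffuse + Blinn-Phong lobe purely as an illustrative stand-in; the patent does not specify this particular BRDF, and all names here are invented for the sketch.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ObjectReconstructionModel:
    vertices: np.ndarray         # (V, 3) mesh geometry
    diffuse_albedo: np.ndarray   # (V, 3) SVBRDF channel
    roughness: np.ndarray        # (V,)   SVBRDF channel
    specular_albedo: np.ndarray  # (V, 3) SVBRDF channel
    normals: np.ndarray          # (V, 3) SVBRDF channel

def shade_vertex(model, i, light_dir, view_dir):
    """Evaluate a toy diffuse + Blinn-Phong specular lobe from the
    per-vertex channels, for an arbitrary light and view direction."""
    n = model.normals[i]
    h = light_dir + view_dir
    h = h / np.linalg.norm(h)                      # half vector
    n_dot_l = max(float(np.dot(n, light_dir)), 0.0)
    shininess = 2.0 / max(float(model.roughness[i]) ** 2, 1e-4)
    specular = model.specular_albedo[i] * max(float(np.dot(n, h)), 0.0) ** shininess
    return (model.diffuse_albedo[i] + specular) * n_dot_l

model = ObjectReconstructionModel(
    vertices=np.array([[0.0, 0.0, 0.0]]),
    diffuse_albedo=np.array([[0.8, 0.2, 0.2]]),
    roughness=np.array([0.5]),
    specular_albedo=np.array([[0.04, 0.04, 0.04]]),
    normals=np.array([[0.0, 0.0, 1.0]]),
)
color = shade_vertex(model, 0, np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
```

Because the channels are stored per vertex rather than baked into images, any viewpoint and any lighting can be supplied at render time, which is the property the abstract emphasizes.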
-
Publication number: 20230169715
Abstract: Methods and systems disclosed herein relate generally to surface-rendering neural networks that represent and render a variety of material appearances (e.g., textured surfaces) at different scales. The system receives image metadata for a texel that includes a position, incoming and outgoing radiance directions, and a kernel size. The system applies an offset-prediction neural network to the query to identify an offset coordinate for the texel, then looks up the offset coordinate in a data structure to determine a reflectance feature vector for the texel of the textured surface. The feature vector is then processed by a decoder neural network to estimate a light-reflectance value for the texel, which is used to render the texel of the textured surface.
Type: Application
Filed: November 30, 2021
Publication date: June 1, 2023
Inventors: Krishna Bhargava Mullia Lakshminarayana, Zexiang Xu, Milos Hasan, Ravi Ramamoorthi, Alexandr Kuznetsov
-
Publication number: 20220343522
Abstract: Enhanced methods and systems for generating both a geometry model and an optical-reflectance model (together, an object reconstruction model) for a physical object, based on a sparse set of images of the object from a sparse set of viewpoints. The geometry model is a mesh model that includes a set of vertices representing the object's surface. The reflectance model is an SVBRDF parameterized via multiple channels (e.g., diffuse albedo, surface roughness, specular albedo, and surface normals); for each vertex of the geometry model, the reflectance model includes a value for each channel. The object reconstruction model is employed to render graphical representations of a virtualized object (a VO based on the physical object) within a computation-based (e.g., a virtual or immersive) environment. Via the reconstruction model, the VO may be rendered from arbitrary viewpoints and under arbitrary lighting conditions.
Type: Application
Filed: April 16, 2021
Publication date: October 27, 2022
Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, David Jay Kriegman, Ravi Ramamoorthi
-
Publication number: 20220335636
Abstract: A scene reconstruction system renders images of a scene with high-quality geometry and appearance and supports view synthesis, relighting, and scene editing. Given a set of input images of a scene, the scene reconstruction system trains a network to learn a volume representation of the scene with separate geometry and reflectance parameters. Using this volume representation, the system can render images of the scene from arbitrary viewpoints (view synthesis) and under arbitrary lighting (relighting). It can also render images that change the reflectance of objects in the scene (scene editing).
Type: Application
Filed: April 15, 2021
Publication date: October 20, 2022
Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, Milos Hasan, Yannick Hold-Geoffroy, David Jay Kriegman, Ravi Ramamoorthi
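Why separating geometry from reflectance enables relighting can be seen in a standard emission-absorption ray march, sketched below with toy functions. This is generic volume rendering under assumed density and reflectance fields, not the patent's trained representation.

```python
import numpy as np

def volume_render(ray_o, ray_d, density_fn, reflectance_fn, light_dir,
                  t_near=0.0, t_far=6.0, n_steps=64):
    """March a ray through the volume. The geometry parameters
    (density_fn) control opacity; the reflectance parameters
    (reflectance_fn), evaluated under light_dir, control color.
    Because the two are separate, viewpoint, lighting, and material
    can each be changed independently (view synthesis, relighting,
    scene editing)."""
    ts = np.linspace(t_near, t_far, n_steps)
    dt = ts[1] - ts[0]
    color = np.zeros(3)
    transmittance = 1.0
    for t in ts:
        p = ray_o + t * ray_d
        alpha = 1.0 - np.exp(-density_fn(p) * dt)   # opacity from geometry
        color += transmittance * alpha * reflectance_fn(p, light_dir, -ray_d)
        transmittance *= 1.0 - alpha
    return color

# Toy scene: a soft unit sphere with constant reddish reflectance.
density = lambda p: 5.0 if np.dot(p, p) < 1.0 else 0.0
reflectance = lambda p, l, v: np.array([1.0, 0.1, 0.1])
hit = volume_render(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]),
                    density, reflectance, np.array([0.0, 1.0, 0.0]))
miss = volume_render(np.array([0.0, 2.0, -3.0]), np.array([0.0, 0.0, 1.0]),
                     density, reflectance, np.array([0.0, 1.0, 0.0]))
```

Swapping `light_dir` relights the scene without retraining the geometry, and swapping `reflectance_fn` is the scene-editing case.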
-
Patent number: 11094043
Abstract: Devices, systems, and methods for generating high-dynamic-range images and video from a set of low-dynamic-range images and video using convolutional neural networks (CNNs) are described. One exemplary method for generating high-dynamic-range visual media includes generating, using a first CNN to merge a first set of images having a first dynamic range, a final image having a second dynamic range that is greater than the first. Another exemplary method for generating training data includes generating sets of static and dynamic images having a first dynamic range; generating, based on a weighted sum of the static images, a set of ground-truth images having a second, greater dynamic range; and replacing at least one of the dynamic images with an image from the static set to generate a set of training images.
Type: Grant
Filed: September 25, 2018
Date of Patent: August 17, 2021
Assignee: The Regents of the University of California
Inventors: Nima Khademi Kalantari, Ravi Ramamoorthi
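The weighted-sum ground-truth step can be sketched with a standard exposure-stack merge. The specific weighting function and gamma below are illustrative assumptions, not values taken from the patent; the patented pipeline then substitutes dynamic frames for some static ones to form the CNN's training inputs.

```python
import numpy as np

def merge_ldr_to_hdr(static_ldr_stack, exposure_times, gamma=2.2):
    """Weighted-sum merge of a static LDR exposure stack into an HDR
    ground-truth image. Each frame is linearized and divided by its
    exposure time; the triangle-style weight trusts mid-tone pixels
    most and down-weights under/over-exposed ones."""
    num = np.zeros_like(static_ldr_stack[0], dtype=float)
    den = np.zeros_like(static_ldr_stack[0], dtype=float)
    for ldr, t in zip(static_ldr_stack, exposure_times):
        radiance = ldr ** gamma / t            # linearize, normalize by exposure
        weight = 1.0 - (2.0 * ldr - 1.0) ** 2  # peak at 0.5, zero at 0 and 1
        num += weight * radiance
        den += weight
    return num / np.maximum(den, 1e-8)

# Two exposures of the same static scene with true radiance 0.25:
# both frames should merge back to that radiance.
gamma = 2.2
ldr_1x = np.full((2, 2), 0.25) ** (1 / gamma)  # exposure time 1.0
ldr_2x = np.full((2, 2), 0.50) ** (1 / gamma)  # exposure time 2.0
hdr = merge_ldr_to_hdr([ldr_1x, ldr_2x], [1.0, 2.0])
```

Because the merge is only valid for perfectly static frames, swapping in dynamic frames on the input side (while keeping the static-only merge as ground truth) is what teaches the CNN to handle motion.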
-
System and methods for depth regularization and semiautomatic interactive matting using RGB-D images
Patent number: 10574905
Abstract: Systems and methods in accordance with embodiments of this invention perform depth regularization and semiautomatic interactive matting using images. In an embodiment of the invention, the image processing pipeline application directs a processor to: receive (i) an image and (ii) an initial depth map corresponding to the depths of pixels within the image; regularize the initial depth map into a dense depth map, using the depth values of known pixels to compute depth values of unknown pixels; determine an object of interest to be extracted from the image; generate an initial trimap from the dense depth map and the object of interest; and apply color image matting to the unknown regions of the initial trimap to generate a matte for image matting.
Type: Grant
Filed: October 1, 2018
Date of Patent: February 25, 2020
Assignee: FotoNation Limited
Inventors: Manohar Srikanth, Ravi Ramamoorthi, Kartik Venkataraman, Priyam Chatterjee
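The first two stages of that pipeline can be sketched as follows. The diffusion-style fill and the depth-threshold trimap are simple stand-ins chosen for illustration; the patent's actual regularization and trimap generation are more sophisticated, and all thresholds here are invented.

```python
import numpy as np

def regularize_depth(initial_depth, known_mask, iters=300):
    """Densify an initial depth map: unknown pixels are filled by
    repeatedly averaging their 4-neighbors while pixels with known
    depth values stay fixed (a toy diffusion stand-in for the
    regularization step)."""
    d = np.where(known_mask, initial_depth, initial_depth[known_mask].mean())
    for _ in range(iters):
        avg = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
               np.roll(d, 1, 1) + np.roll(d, -1, 1)) / 4.0
        d = np.where(known_mask, initial_depth, avg)
    return d

def initial_trimap(dense_depth, object_depth, tol=0.1):
    """Build a trimap from the dense depth map: pixels near the object
    of interest's depth become foreground (1), pixels far from it
    background (0), and the band in between stays unknown (0.5) for
    the color image matting step."""
    trimap = np.full(dense_depth.shape, 0.5)
    trimap[np.abs(dense_depth - object_depth) < tol] = 1.0
    trimap[np.abs(dense_depth - object_depth) > 3 * tol] = 0.0
    return trimap

# Toy depth map: near plane (1.0) on the left, far plane (3.0) on the
# right, with a vertical band of unknown depths in the middle.
depth = np.where(np.arange(6)[None, :] < 3, 1.0, 3.0) * np.ones((6, 6))
known = np.ones((6, 6), dtype=bool)
known[:, 2:4] = False
dense = regularize_depth(depth, known)
trimap = initial_trimap(np.array([1.0, 1.05, 1.2, 3.0]), object_depth=1.0)
```

Color image matting then only has to resolve the 0.5 (unknown) band, which is the source of the "semiautomatic" interaction: depth does most of the segmentation work up front.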
Publication number: 20190096046
Abstract: Devices, systems, and methods for generating high-dynamic-range images and video from a set of low-dynamic-range images and video using convolutional neural networks (CNNs) are described. One exemplary method for generating high-dynamic-range visual media includes generating, using a first CNN to merge a first set of images having a first dynamic range, a final image having a second dynamic range that is greater than the first. Another exemplary method for generating training data includes generating sets of static and dynamic images having a first dynamic range; generating, based on a weighted sum of the static images, a set of ground-truth images having a second, greater dynamic range; and replacing at least one of the dynamic images with an image from the static set to generate a set of training images.
Type: Application
Filed: September 25, 2018
Publication date: March 28, 2019
Inventors: Nima Khademi Kalantari, Ravi Ramamoorthi
-
System and methods for depth regularization and semiautomatic interactive matting using RGB-D images
Publication number: 20190037150
Abstract: Systems and methods in accordance with embodiments of this invention perform depth regularization and semiautomatic interactive matting using images. In an embodiment of the invention, the image processing pipeline application directs a processor to: receive (i) an image and (ii) an initial depth map corresponding to the depths of pixels within the image; regularize the initial depth map into a dense depth map, using the depth values of known pixels to compute depth values of unknown pixels; determine an object of interest to be extracted from the image; generate an initial trimap from the dense depth map and the object of interest; and apply color image matting to the unknown regions of the initial trimap to generate a matte for image matting.
Type: Application
Filed: October 1, 2018
Publication date: January 31, 2019
Inventors: Manohar Srikanth, Ravi Ramamoorthi, Kartik Venkataraman, Priyam Chatterjee
System and methods for depth regularization and semiautomatic interactive matting using RGB-D images
Patent number: 10089740
Abstract: Systems and methods in accordance with embodiments of this invention perform depth regularization and semiautomatic interactive matting using images. In an embodiment of the invention, the image processing pipeline application directs a processor to: receive (i) an image and (ii) an initial depth map corresponding to the depths of pixels within the image; regularize the initial depth map into a dense depth map, using the depth values of known pixels to compute depth values of unknown pixels; determine an object of interest to be extracted from the image; generate an initial trimap from the dense depth map and the object of interest; and apply color image matting to the unknown regions of the initial trimap to generate a matte for image matting.
Type: Grant
Filed: March 9, 2015
Date of Patent: October 2, 2018
Assignee: FotoNation Limited
Inventors: Manohar Srikanth, Ravi Ramamoorthi, Kartik Venkataraman, Priyam Chatterjee
System and methods for depth regularization and semiautomatic interactive matting using RGB-D images
Publication number: 20150254868
Abstract: Systems and methods in accordance with embodiments of this invention perform depth regularization and semiautomatic interactive matting using images. In an embodiment of the invention, the image processing pipeline application directs a processor to: receive (i) an image and (ii) an initial depth map corresponding to the depths of pixels within the image; regularize the initial depth map into a dense depth map, using the depth values of known pixels to compute depth values of unknown pixels; determine an object of interest to be extracted from the image; generate an initial trimap from the dense depth map and the object of interest; and apply color image matting to the unknown regions of the initial trimap to generate a matte for image matting.
Type: Application
Filed: March 9, 2015
Publication date: September 10, 2015
Inventors: Manohar Srikanth, Ravi Ramamoorthi, Kartik Venkataraman, Priyam Chatterjee