Patents by Inventor Kartik Venkataraman

Kartik Venkataraman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210373349
    Abstract: An optical system includes: a beam splitter system configured to split an input beam into a plurality of output beams including a first output beam, a second output beam, and a third output beam; a first polarizing filter having a first angle of polarization and configured to filter the first output beam to produce a first filtered output beam; a second polarizing filter having a second angle of polarization and configured to filter the second output beam to produce a second filtered output beam; and a third polarizing filter having a third angle of polarization and configured to filter the third output beam to produce a third filtered output beam, the first, second, and third angles of polarization being different from one another.
    Type: Application
    Filed: May 27, 2021
    Publication date: December 2, 2021
    Inventors: Kartik VENKATARAMAN, Agastya KALRA, Achuta KADAMBI
  • Publication number: 20210356572
    Abstract: A multi-modal sensor system includes: an underlying sensor system; a polarization camera system configured to capture polarization raw frames corresponding to a plurality of different polarization states; and a processing system including a processor and memory, the processing system being configured to control the underlying sensor system and the polarization camera system, the memory storing instructions that, when executed by the processor, cause the processor to: control the underlying sensor system to perform sensing on a scene and the polarization camera system to capture a plurality of polarization raw frames of the scene; extract first tensors in polarization representation spaces based on the plurality of polarization raw frames; and compute a characterization output based on an output of the underlying sensor system and the first tensors in polarization representation spaces.
    Type: Application
    Filed: October 7, 2020
    Publication date: November 18, 2021
    Inventors: Achuta KADAMBI, Ramesh RASKAR, Kartik VENKATARAMAN, Supreeth Krishna RAO, Agastya KALRA
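The "first tensors in polarization representation spaces" mentioned in this abstract commonly correspond to quantities such as the degree and angle of linear polarization, derived from the Stokes parameters. As a rough, hypothetical sketch (the patent does not specify this exact formulation), three raw frames captured behind linear polarizers at 0°, 45°, and 90° suffice to recover these quantities per pixel:

```python
import math

def polarization_tensors(i0, i45, i90):
    """Compute the degree of linear polarization (DoLP) and angle of
    linear polarization (AoLP) for one pixel, from intensities measured
    behind linear polarizers at 0, 45, and 90 degrees."""
    s0 = i0 + i90               # total intensity (Stokes S0)
    s1 = i0 - i90               # horizontal vs. vertical preference (S1)
    s2 = 2.0 * i45 - s0         # +45 vs. -45 degree preference (S2)
    dolp = math.sqrt(s1 * s1 + s2 * s2) / s0 if s0 > 0 else 0.0
    aolp = 0.5 * math.atan2(s2, s1)  # radians, in (-pi/2, pi/2]
    return dolp, aolp
```

For fully polarized horizontal light (all intensity passes the 0° filter), this yields DoLP 1.0 and AoLP 0.0; for unpolarized light (equal intensity at every angle), DoLP is 0.0.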
  • Publication number: 20210358154
    Abstract: Systems and methods for depth estimation in accordance with embodiments of the invention are illustrated. One embodiment includes a method for estimating depth from images. The method includes steps for receiving a plurality of source images captured from a plurality of different viewpoints using a processing system configured by an image processing application, generating a target image from a target viewpoint that is different from the viewpoints of the plurality of source images based upon a set of generative model parameters using the processing system configured by the image processing application, and identifying depth information of at least one output image based on the generated target image using the processing system configured by the image processing application.
    Type: Application
    Filed: May 26, 2021
    Publication date: November 18, 2021
    Applicant: FotoNation Limited
    Inventor: Kartik Venkataraman
  • Publication number: 20210350573
    Abstract: A method for characterizing a pose estimation system includes: receiving, from a pose estimation system, first poses of an arrangement of objects in a first scene; receiving, from the pose estimation system, second poses of the arrangement of objects in a second scene, the second scene being a rigid transformation of the arrangement of objects of the first scene with respect to the pose estimation system; computing a coarse scene transformation between the first scene and the second scene; matching corresponding poses between the first poses and the second poses; computing a refined scene transformation between the first scene and the second scene based on the coarse scene transformation, the first poses, and the second poses; transforming the first poses based on the refined scene transformation to compute transformed first poses; and computing an average rotation error and an average translation error of the pose estimation system based on differences between the transformed first poses and the second poses.
    Type: Application
    Filed: December 3, 2020
    Publication date: November 11, 2021
    Inventors: Agastya KALRA, Achuta KADAMBI, Kartik VENKATARAMAN
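The error metrics at the end of this abstract can be sketched directly. In this minimal, hypothetical illustration (the matching and refinement steps are omitted), the rotation error between two matched poses is the geodesic angle between their rotation matrices, and the translation error is the Euclidean distance between their translation vectors:

```python
import math

def rotation_error(r_a, r_b):
    """Geodesic angle (radians) between two 3x3 rotation matrices given
    as nested lists: acos((trace(Ra^T Rb) - 1) / 2)."""
    # The elementwise product sum of Ra and Rb equals trace(Ra^T Rb).
    trace = sum(r_a[i][j] * r_b[i][j] for i in range(3) for j in range(3))
    return math.acos(max(-1.0, min(1.0, (trace - 1.0) / 2.0)))

def translation_error(t_a, t_b):
    """Euclidean distance between two translation vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t_a, t_b)))

def average_errors(poses_a, poses_b):
    """Average rotation and translation errors over matched pose pairs.
    Each pose is an (R, t) tuple of rotation matrix and translation."""
    n = len(poses_a)
    rot = sum(rotation_error(ra, rb)
              for (ra, _), (rb, _) in zip(poses_a, poses_b)) / n
    tra = sum(translation_error(ta, tb)
              for (_, ta), (_, tb) in zip(poses_a, poses_b)) / n
    return rot, tra
```

A 90° rotation about the z-axis against the identity gives a rotation error of pi/2 radians; translations (0, 0, 0) and (3, 4, 0) give a translation error of 5.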
  • Publication number: 20210312207
    Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
    Type: Application
    Filed: April 19, 2021
    Publication date: October 7, 2021
    Applicant: FotoNation Limited
    Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
  • Publication number: 20210264147
    Abstract: A computer-implemented method for surface modeling includes: receiving one or more polarization raw frames of a surface of a physical object, the polarization raw frames being captured with a polarizing filter at different linear polarization angles; extracting one or more first tensors in one or more polarization representation spaces from the polarization raw frames; and detecting a surface characteristic of the surface of the physical object based on the one or more first tensors in the one or more polarization representation spaces.
    Type: Application
    Filed: September 17, 2020
    Publication date: August 26, 2021
    Inventors: Achuta KADAMBI, Agastya KALRA, Supreeth Krishna RAO, Kartik VENKATARAMAN
  • Publication number: 20210264607
    Abstract: A computer-implemented method for computing a prediction on images of a scene includes: receiving one or more polarization raw frames of a scene, the polarization raw frames being captured with a polarizing filter at different linear polarization angles; extracting one or more first tensors in one or more polarization representation spaces from the polarization raw frames; and computing a prediction regarding one or more optically challenging objects in the scene based on the one or more first tensors in the one or more polarization representation spaces.
    Type: Application
    Filed: August 28, 2020
    Publication date: August 26, 2021
    Inventors: Agastya KALRA, Vage TAAMAZYAN, Supreeth Krishna RAO, Kartik VENKATARAMAN, Ramesh RASKAR, Achuta KADAMBI
  • Patent number: 11024046
    Abstract: Systems and methods for depth estimation in accordance with embodiments of the invention are illustrated. One embodiment includes a method for estimating depth from images. The method includes steps for receiving a plurality of source images captured from a plurality of different viewpoints using a processing system configured by an image processing application, generating a target image from a target viewpoint that is different from the viewpoints of the plurality of source images based upon a set of generative model parameters using the processing system configured by the image processing application, and identifying depth information of at least one output image based on the generated target image using the processing system configured by the image processing application.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: June 1, 2021
    Assignee: FotoNation Limited
    Inventor: Kartik Venkataraman
  • Publication number: 20210150748
    Abstract: Systems in accordance with embodiments of the invention can perform parallax detection and correction in images captured using array cameras. Due to the different viewpoints of the cameras, parallax results in variations in the position of objects within the captured images of the scene. Methods in accordance with embodiments of the invention provide an accurate account of the pixel disparity due to parallax between the different cameras in the array, so that appropriate scene-dependent geometric shifts can be applied to the pixels of the captured images when performing super-resolution processing. In a number of embodiments, generating depth estimates considers the similarity of pixels in multiple spectral channels. In certain embodiments, generating depth estimates involves generating a confidence map indicating the reliability of depth estimates.
    Type: Application
    Filed: January 29, 2021
    Publication date: May 20, 2021
    Applicant: FotoNation Limited
    Inventors: Florian Ciurea, Kartik Venkataraman, Gabriel Molina, Dan Lelescu
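The scene-dependent disparity search with a confidence estimate described in this abstract can be sketched in miniature. This hypothetical version uses a simple 1D sum-of-absolute-differences matcher and a ratio test against the runner-up candidate (the patented method is considerably more involved, spanning multiple spectral channels):

```python
def best_disparity(ref, alt, x, window, max_d):
    """Search two 1D scanlines for the disparity that minimizes the sum
    of absolute differences (SAD) over a window around pixel x, and
    report a confidence score from how distinct the best match is."""
    costs = []
    for d in range(max_d + 1):
        if x - d - window < 0:          # matching window would leave the scanline
            break
        cost = sum(abs(ref[x + k] - alt[x - d + k])
                   for k in range(-window, window + 1))
        costs.append(cost)
    order = sorted(range(len(costs)), key=costs.__getitem__)
    best = order[0]
    # Confidence: ratio test against the second-best candidate.
    second = costs[order[1]] if len(order) > 1 else costs[best] + 1.0
    confidence = 1.0 - costs[best] / (second + 1e-9)
    return best, confidence
```

On a pattern shifted by two pixels between the views, the search recovers disparity 2 with confidence near 1.0.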
  • Publication number: 20210133927
    Abstract: Systems and methods in accordance with embodiments of the invention are disclosed that use super-resolution (SR) processes to combine information from a plurality of low resolution (LR) images captured by an array camera into a synthesized higher resolution image. One embodiment includes obtaining input images using the plurality of imagers, using a microprocessor to determine an initial estimate of at least a portion of a high resolution image using a plurality of pixels from the input images, and using a microprocessor to determine a high resolution image that, when mapped through the forward imaging transformation, matches the input images to within at least one predetermined criterion, using the initial estimate of at least a portion of the high resolution image.
    Type: Application
    Filed: November 15, 2020
    Publication date: May 6, 2021
    Applicant: FotoNation Limited
    Inventors: Dan Lelescu, Gabriel Molina, Kartik Venkataraman
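The estimate-and-refine loop described in this abstract resembles classical iterative back-projection. The following is a toy 1D sketch under strong assumptions (the forward imaging transformation is modeled as simple pair-averaging, which is not the patent's model):

```python
def downsample(hr):
    """Forward imaging model (sketch): average adjacent pixel pairs."""
    return [(hr[2 * i] + hr[2 * i + 1]) / 2.0 for i in range(len(hr) // 2)]

def upsample(lr):
    """Transpose of the forward model: replicate each pixel."""
    return [v for v in lr for _ in (0, 1)]

def super_resolve(lr_images, iterations=50, step=0.5):
    """Iterative back-projection: start from an upsampled initial
    estimate, then repeatedly push the high resolution estimate toward
    agreement with every observed LR image under the forward model."""
    hr = upsample(lr_images[0])                      # initial estimate
    for _ in range(iterations):
        for lr in lr_images:
            err = [o - s for o, s in zip(lr, downsample(hr))]
            hr = [h + step * b for h, b in zip(hr, upsample(err))]
    return hr
```

At convergence the high resolution estimate, mapped through the forward model, reproduces the observed LR images, which is the "matches the input images to within at least one predetermined criterion" condition in miniature.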
  • Patent number: 10984276
    Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: April 20, 2021
    Assignee: FotoNation Limited
    Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
  • Publication number: 20210063141
    Abstract: Systems and methods in accordance with embodiments of the invention estimate depth from projected texture using camera arrays. One embodiment of the invention includes: at least one two-dimensional array of cameras comprising a plurality of cameras; an illumination system configured to illuminate a scene with a projected texture; a processor; and memory containing an image processing pipeline application and an illumination system controller application. In addition, the illumination system controller application directs the processor to control the illumination system to illuminate a scene with a projected texture.
    Type: Application
    Filed: September 7, 2020
    Publication date: March 4, 2021
    Applicant: FotoNation Limited
    Inventors: Kartik Venkataraman, Jacques Duparré
  • Publication number: 20210044790
    Abstract: Embodiments of the invention provide a camera array imaging architecture that computes depth maps for objects within a scene captured by the cameras, using a near-field sub-array of cameras to compute depth to near-field objects and a far-field sub-array of cameras to compute depth to far-field objects. In particular, a baseline distance between cameras in the near-field sub-array is less than a baseline distance between cameras in the far-field sub-array in order to increase the accuracy of the depth map. Some embodiments provide a near-IR illumination source for use in computing depth maps.
    Type: Application
    Filed: October 12, 2020
    Publication date: February 11, 2021
    Applicant: FotoNation Limited
    Inventors: Kartik Venkataraman, Dan Lelescu, Jacques Duparré
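The rationale for the two baselines follows from the pinhole stereo relation Z = fB/d: for a fixed disparity error, depth uncertainty grows roughly as Z²/(fB), so a longer baseline improves far-field accuracy. A small illustrative sketch (the symbols and the 0.5-pixel disparity error are assumptions for illustration, not values from the patent):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo relation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_uncertainty(focal_px, baseline_m, depth_m, disparity_err_px=0.5):
    """First-order depth uncertainty for a disparity error dd:
    dZ ~= Z^2 * dd / (f * B). It grows quadratically with depth,
    which is why far-field objects benefit from a larger baseline."""
    return depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)
```

With a 1000 px focal length, doubling the baseline from 0.1 m to 0.2 m halves the depth uncertainty at any given depth.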
  • Publication number: 20210042952
    Abstract: Systems and methods for hybrid depth regularization in accordance with various embodiments of the invention are disclosed. In one embodiment of the invention, a depth sensing system comprises a plurality of cameras; a processor; and a memory containing an image processing application. The image processing application may direct the processor to obtain image data for a plurality of images from multiple viewpoints, the image data comprising a reference image and at least one alternate view image; generate a raw depth map using a first depth estimation process, and a confidence map; and generate a regularized depth map. The regularized depth map may be generated by computing a secondary depth map using a second different depth estimation process; and computing a composite depth map by selecting depth estimates from the raw depth map and the secondary depth map based on the confidence map.
    Type: Application
    Filed: October 23, 2020
    Publication date: February 11, 2021
    Applicant: FotoNation Limited
    Inventors: Ankit Jain, Priyam Chatterjee, Kartik Venkataraman
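The confidence-based selection at the end of this abstract can be sketched per pixel. This is a hypothetical minimal version, with an assumed confidence threshold of 0.5 and flat lists standing in for depth and confidence maps:

```python
def composite_depth(raw_depth, secondary_depth, confidence, threshold=0.5):
    """Per-pixel selection for a composite depth map: keep the raw
    (first-process) depth estimate where confidence is high, otherwise
    fall back to the secondary (second-process) estimate. All inputs
    are flat lists of equal length."""
    return [raw if conf >= threshold else sec
            for raw, sec, conf in zip(raw_depth, secondary_depth, confidence)]
```

For example, with raw depths [1, 2, 3], secondary depths [9, 9, 9], and confidences [0.9, 0.1, 0.8], only the low-confidence middle pixel is replaced.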
  • Patent number: 10909707
    Abstract: Systems in accordance with embodiments of the invention can perform parallax detection and correction in images captured using array cameras. Due to the different viewpoints of the cameras, parallax results in variations in the position of objects within the captured images of the scene. Methods in accordance with embodiments of the invention provide an accurate account of the pixel disparity due to parallax between the different cameras in the array, so that appropriate scene-dependent geometric shifts can be applied to the pixels of the captured images when performing super-resolution processing. In a number of embodiments, generating depth estimates considers the similarity of pixels in multiple spectral channels. In certain embodiments, generating depth estimates involves generating a confidence map indicating the reliability of depth estimates.
    Type: Grant
    Filed: August 9, 2019
    Date of Patent: February 2, 2021
    Assignee: FotoNation Limited
    Inventors: Florian Ciurea, Kartik Venkataraman, Gabriel Molina, Dan Lelescu
  • Publication number: 20200389604
    Abstract: Systems and methods are disclosed for implementing array cameras configured to perform super-resolution processing to generate higher resolution super-resolved images from a plurality of captured images, and for lens stack arrays that can be utilized in array cameras. An imaging device in accordance with one embodiment of the invention includes at least one imager array, and each imager in the array comprises a plurality of light sensing elements and a lens stack including at least one lens surface, where the lens stack is configured to form an image on the light sensing elements, control circuitry configured to capture images formed on the light sensing elements of each of the imagers, and a super-resolution processing module configured to generate at least one higher resolution super-resolved image using a plurality of the captured images.
    Type: Application
    Filed: June 19, 2020
    Publication date: December 10, 2020
    Inventors: Kartik Venkataraman, Amandeep S. Jabbi, Robert H. Mullis, Jacques Duparré, Shane Ching-Feng Hu
  • Patent number: 10839485
    Abstract: Systems and methods in accordance with embodiments of the invention are disclosed that use super-resolution (SR) processes to combine information from a plurality of low resolution (LR) images captured by an array camera into a synthesized higher resolution image. One embodiment includes obtaining input images using the plurality of imagers, using a microprocessor to determine an initial estimate of at least a portion of a high resolution image using a plurality of pixels from the input images, and using a microprocessor to determine a high resolution image that, when mapped through the forward imaging transformation, matches the input images to within at least one predetermined criterion, using the initial estimate of at least a portion of the high resolution image.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: November 17, 2020
    Assignee: FotoNation Limited
    Inventors: Dan Lelescu, Gabriel Molina, Kartik Venkataraman
  • Patent number: 10818026
    Abstract: Systems and methods for hybrid depth regularization in accordance with various embodiments of the invention are disclosed. In one embodiment of the invention, a depth sensing system comprises a plurality of cameras; a processor; and a memory containing an image processing application. The image processing application may direct the processor to obtain image data for a plurality of images from multiple viewpoints, the image data comprising a reference image and at least one alternate view image; generate a raw depth map using a first depth estimation process, and a confidence map; and generate a regularized depth map. The regularized depth map may be generated by computing a secondary depth map using a second different depth estimation process; and computing a composite depth map by selecting depth estimates from the raw depth map and the secondary depth map based on the confidence map.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: October 27, 2020
    Assignee: FotoNation Limited
    Inventors: Ankit Jain, Priyam Chatterjee, Kartik Venkataraman
  • Publication number: 20200334905
    Abstract: In an embodiment, a 3D facial modeling system includes a plurality of cameras configured to capture images from different viewpoints, a processor, and a memory containing a 3D facial modeling application and parameters defining a face detector, wherein the 3D facial modeling application directs the processor to obtain a plurality of images of a face captured from different viewpoints using the plurality of cameras, locate a face within each of the plurality of images using the face detector, wherein the face detector labels key feature points on the located face within each of the plurality of images, determine disparity between corresponding key feature points of located faces within the plurality of images, and generate a 3D model of the face using the depth of the key feature points.
    Type: Application
    Filed: May 4, 2020
    Publication date: October 22, 2020
    Applicant: FotoNation Limited
    Inventor: Kartik Venkataraman
  • Patent number: 10805589
    Abstract: Embodiments of the invention provide a camera array imaging architecture that computes depth maps for objects within a scene captured by the cameras, using a near-field sub-array of cameras to compute depth to near-field objects and a far-field sub-array of cameras to compute depth to far-field objects. In particular, a baseline distance between cameras in the near-field sub-array is less than a baseline distance between cameras in the far-field sub-array in order to increase the accuracy of the depth map. Some embodiments provide a near-IR illumination source for use in computing depth maps.
    Type: Grant
    Filed: April 19, 2016
    Date of Patent: October 13, 2020
    Assignee: FotoNation Limited
    Inventors: Kartik Venkataraman, Dan Lelescu, Jacques Duparré