Patents by Inventor Dan Lelescu

Dan Lelescu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
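
Hedged, illustrative code sketches of the main invention families described in these abstracts (parallax and depth estimation, super-resolution, light field image files, dynamic calibration, near-field/far-field sub-arrays, active lens alignment, and light field panoramas) are collected after the listing below.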

  • Patent number: 11941833
    Abstract: Systems in accordance with embodiments of the invention can perform parallax detection and correction in images captured using array cameras. Due to the different viewpoints of the cameras, parallax results in variations in the position of objects within the captured images of the scene. Methods in accordance with embodiments of the invention provide an accurate account of the pixel disparity due to parallax between the different cameras in the array, so that appropriate scene-dependent geometric shifts can be applied to the pixels of the captured images when performing super-resolution processing. In a number of embodiments, generating depth estimates considers the similarity of pixels in multiple spectral channels. In certain embodiments, generating depth estimates involves generating a confidence map indicating the reliability of depth estimates.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: March 26, 2024
    Assignee: Adeia Imaging LLC
    Inventors: Florian Ciurea, Kartik Venkataraman, Gabriel Molina, Dan Lelescu
  • Patent number: 11875475
    Abstract: Systems and methods in accordance with embodiments of the invention are disclosed that use super-resolution (SR) processes to combine information from a plurality of low resolution (LR) images captured by an array camera to produce a synthesized higher resolution image. One embodiment includes obtaining input images using the plurality of imagers, using a microprocessor to determine an initial estimate of at least a portion of a high resolution image using a plurality of pixels from the input images, and using a microprocessor to determine a high resolution image that, when mapped through the forward imaging transformation, matches the input images to within at least one predetermined criterion using the initial estimate of at least a portion of the high resolution image.
    Type: Grant
    Filed: August 22, 2022
    Date of Patent: January 16, 2024
    Assignee: Adeia Imaging LLC
    Inventors: Dan Lelescu, Gabriel Molina, Kartik Venkataraman
  • Publication number: 20230421742
    Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from a reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post-process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
    Type: Application
    Filed: June 22, 2023
    Publication date: December 28, 2023
    Applicant: Adeia Imaging LLC
    Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
  • Publication number: 20230336707
    Abstract: Systems and methods for dynamically calibrating an array camera to accommodate variations in geometry that can occur throughout its operational life are disclosed. The dynamic calibration processes can include acquiring a set of images of a scene and identifying corresponding features within the images. Geometric calibration data can be used to rectify the images and determine residual vectors for the geometric calibration data at locations where corresponding features are observed. The residual vectors can then be used to determine updated geometric calibration data for the camera array. In several embodiments, the residual vectors are used to generate a residual vector calibration data field that updates the geometric calibration data. In many embodiments, the residual vectors are used to select the set of geometric calibration data, from amongst a number of different sets of geometric calibration data, that is the best fit for the current geometry of the camera array.
    Type: Application
    Filed: December 28, 2022
    Publication date: October 19, 2023
    Applicant: Adeia Imaging LLC
    Inventors: Florian Ciurea, Dan Lelescu, Priyam Chatterjee
  • Patent number: 11729365
    Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from a reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post-process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: August 15, 2023
    Assignee: Adeia Imaging LLC
    Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
  • Publication number: 20230007223
    Abstract: Embodiments of the invention provide a camera array imaging architecture that computes depth maps for objects within a scene captured by the cameras, and uses a near-field sub-array of cameras to compute depth to near-field objects and a far-field sub-array of cameras to compute depth to far-field objects. In particular, the baseline distance between cameras in the near-field sub-array is less than the baseline distance between cameras in the far-field sub-array in order to increase the accuracy of the depth map. Some embodiments provide a near-IR illumination source for use in computing depth maps.
    Type: Application
    Filed: June 17, 2022
    Publication date: January 5, 2023
    Applicant: FotoNation Limited
    Inventors: Kartik Venkataraman, Dan Lelescu, Jacques Duparre
  • Patent number: 11546576
    Abstract: Systems and methods for dynamically calibrating an array camera to accommodate variations in geometry that can occur throughout its operational life are disclosed. The dynamic calibration processes can include acquiring a set of images of a scene and identifying corresponding features within the images. Geometric calibration data can be used to rectify the images and determine residual vectors for the geometric calibration data at locations where corresponding features are observed. The residual vectors can then be used to determine updated geometric calibration data for the camera array. In several embodiments, the residual vectors are used to generate a residual vector calibration data field that updates the geometric calibration data. In many embodiments, the residual vectors are used to select the set of geometric calibration data, from amongst a number of different sets of geometric calibration data, that is the best fit for the current geometry of the camera array.
    Type: Grant
    Filed: March 8, 2021
    Date of Patent: January 3, 2023
    Assignee: Adeia Imaging LLC
    Inventors: Florian Ciurea, Dan Lelescu, Priyam Chatterjee
  • Publication number: 20220414829
    Abstract: Systems and methods in accordance with embodiments of the invention are disclosed that use super-resolution (SR) processes to combine information from a plurality of low resolution (LR) images captured by an array camera to produce a synthesized higher resolution image. One embodiment includes obtaining input images using the plurality of imagers, using a microprocessor to determine an initial estimate of at least a portion of a high resolution image using a plurality of pixels from the input images, and using a microprocessor to determine a high resolution image that, when mapped through the forward imaging transformation, matches the input images to within at least one predetermined criterion using the initial estimate of at least a portion of the high resolution image.
    Type: Application
    Filed: August 22, 2022
    Publication date: December 29, 2022
    Applicant: FotoNation Limited
    Inventors: Dan Lelescu, Gabriel Molina, Kartik Venkataraman
  • Patent number: 11423513
    Abstract: Systems and methods in accordance with embodiments of the invention are disclosed that use super-resolution (SR) processes to combine information from a plurality of low resolution (LR) images captured by an array camera to produce a synthesized higher resolution image. One embodiment includes obtaining input images using the plurality of imagers, using a microprocessor to determine an initial estimate of at least a portion of a high resolution image using a plurality of pixels from the input images, and using a microprocessor to determine a high resolution image that, when mapped through the forward imaging transformation, matches the input images to within at least one predetermined criterion using the initial estimate of at least a portion of the high resolution image.
    Type: Grant
    Filed: November 15, 2020
    Date of Patent: August 23, 2022
    Inventors: Dan Lelescu, Gabriel Molina, Kartik Venkataraman
  • Patent number: 11368662
    Abstract: Embodiments of the invention provide a camera array imaging architecture that computes depth maps for objects within a scene captured by the cameras, and uses a near-field sub-array of cameras to compute depth to near-field objects and a far-field sub-array of cameras to compute depth to far-field objects. In particular, the baseline distance between cameras in the near-field sub-array is less than the baseline distance between cameras in the far-field sub-array in order to increase the accuracy of the depth map. Some embodiments provide a near-IR illumination source for use in computing depth maps.
    Type: Grant
    Filed: October 12, 2020
    Date of Patent: June 21, 2022
    Inventors: Kartik Venkataraman, Dan Lelescu, Jacques Duparre
  • Publication number: 20210312207
    Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from a reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post-process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
    Type: Application
    Filed: April 19, 2021
    Publication date: October 7, 2021
    Applicant: FotoNation Limited
    Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
  • Publication number: 20210281828
    Abstract: Systems and methods for dynamically calibrating an array camera to accommodate variations in geometry that can occur throughout its operational life are disclosed. The dynamic calibration processes can include acquiring a set of images of a scene and identifying corresponding features within the images. Geometric calibration data can be used to rectify the images and determine residual vectors for the geometric calibration data at locations where corresponding features are observed. The residual vectors can then be used to determine updated geometric calibration data for the camera array. In several embodiments, the residual vectors are used to generate a residual vector calibration data field that updates the geometric calibration data. In many embodiments, the residual vectors are used to select the set of geometric calibration data, from amongst a number of different sets of geometric calibration data, that is the best fit for the current geometry of the camera array.
    Type: Application
    Filed: March 8, 2021
    Publication date: September 9, 2021
    Applicant: FotoNation Limited
    Inventors: Florian Ciurea, Dan Lelescu, Priyam Chatterjee
  • Patent number: 11044398
    Abstract: A light field panorama system in which a user holding a mobile device performs a gesture to capture images of a scene from different positions. Additional information, for example position and orientation information, may also be captured. The images and information may be processed to determine metadata including the relative positions of the images and depth information for the images. The images and metadata may be stored as a light field panorama. The light field panorama may be processed by a rendering engine to render different 3D views of the scene to allow a viewer to explore the scene from different positions and angles with six degrees of freedom. Using a rendering and viewing system such as a mobile device or head-mounted display, the viewer may see behind or over objects in the scene, zoom in or out on the scene, or view different parts of the scene.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: June 22, 2021
    Assignee: Apple Inc.
    Inventors: Gabriel D. Molina, Ricardo J. Motta, Gary L. Vondran, Jr., Dan Lelescu, Tobias Rick, Brett Miller
  • Patent number: 11022725
    Abstract: Systems and methods in accordance with embodiments of the invention actively align a lens stack array with an array of focal planes to construct an array camera module. In one embodiment, a method for actively aligning a lens stack array with a sensor that has a focal plane array includes: aligning the lens stack array relative to the sensor in an initial position; varying the spatial relationship between the lens stack array and the sensor; capturing images of a known target that has a region of interest using a plurality of active focal planes at different spatial relationships; scoring the images based on the extent to which the region of interest is focused in the images; selecting a spatial relationship between the lens stack array and the sensor based on a comparison of the scores; and forming an array camera subassembly based on the selected spatial relationship.
    Type: Grant
    Filed: April 11, 2019
    Date of Patent: June 1, 2021
    Assignee: FotoNation Limited
    Inventors: Jacques Duparre, Andrew Kenneth John McMahon, Dan Lelescu
  • Publication number: 20210150748
    Abstract: Systems in accordance with embodiments of the invention can perform parallax detection and correction in images captured using array cameras. Due to the different viewpoints of the cameras, parallax results in variations in the position of objects within the captured images of the scene. Methods in accordance with embodiments of the invention provide an accurate account of the pixel disparity due to parallax between the different cameras in the array, so that appropriate scene-dependent geometric shifts can be applied to the pixels of the captured images when performing super-resolution processing. In a number of embodiments, generating depth estimates considers the similarity of pixels in multiple spectral channels. In certain embodiments, generating depth estimates involves generating a confidence map indicating the reliability of depth estimates.
    Type: Application
    Filed: January 29, 2021
    Publication date: May 20, 2021
    Applicant: FotoNation Limited
    Inventors: Florian Ciurea, Kartik Venkataraman, Gabriel Molina, Dan Lelescu
  • Publication number: 20210133927
    Abstract: Systems and methods in accordance with embodiments of the invention are disclosed that use super-resolution (SR) processes to combine information from a plurality of low resolution (LR) images captured by an array camera to produce a synthesized higher resolution image. One embodiment includes obtaining input images using the plurality of imagers, using a microprocessor to determine an initial estimate of at least a portion of a high resolution image using a plurality of pixels from the input images, and using a microprocessor to determine a high resolution image that, when mapped through the forward imaging transformation, matches the input images to within at least one predetermined criterion using the initial estimate of at least a portion of the high resolution image.
    Type: Application
    Filed: November 15, 2020
    Publication date: May 6, 2021
    Applicant: FotoNation Limited
    Inventors: Dan Lelescu, Gabriel Molina, Kartik Venkataraman
  • Patent number: 10984276
    Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from a reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post-process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: April 20, 2021
    Assignee: FotoNation Limited
    Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
  • Patent number: 10944961
    Abstract: Systems and methods for dynamically calibrating an array camera to accommodate variations in geometry that can occur throughout its operational life are disclosed. The dynamic calibration processes can include acquiring a set of images of a scene and identifying corresponding features within the images. Geometric calibration data can be used to rectify the images and determine residual vectors for the geometric calibration data at locations where corresponding features are observed. The residual vectors can then be used to determine updated geometric calibration data for the camera array. In several embodiments, the residual vectors are used to generate a residual vector calibration data field that updates the geometric calibration data. In many embodiments, the residual vectors are used to select the set of geometric calibration data, from amongst a number of different sets of geometric calibration data, that is the best fit for the current geometry of the camera array.
    Type: Grant
    Filed: April 1, 2019
    Date of Patent: March 9, 2021
    Assignee: FotoNation Limited
    Inventors: Florian Ciurea, Dan Lelescu, Priyam Chatterjee
  • Publication number: 20210044790
    Abstract: Embodiments of the invention provide a camera array imaging architecture that computes depth maps for objects within a scene captured by the cameras, and uses a near-field sub-array of cameras to compute depth to near-field objects and a far-field sub-array of cameras to compute depth to far-field objects. In particular, the baseline distance between cameras in the near-field sub-array is less than the baseline distance between cameras in the far-field sub-array in order to increase the accuracy of the depth map. Some embodiments provide a near-IR illumination source for use in computing depth maps.
    Type: Application
    Filed: October 12, 2020
    Publication date: February 11, 2021
    Applicant: FotoNation Limited
    Inventors: Kartik Venkataraman, Dan Lelescu, Jacques Duparre
  • Patent number: 10909707
    Abstract: Systems in accordance with embodiments of the invention can perform parallax detection and correction in images captured using array cameras. Due to the different viewpoints of the cameras, parallax results in variations in the position of objects within the captured images of the scene. Methods in accordance with embodiments of the invention provide an accurate account of the pixel disparity due to parallax between the different cameras in the array, so that appropriate scene-dependent geometric shifts can be applied to the pixels of the captured images when performing super-resolution processing. In a number of embodiments, generating depth estimates considers the similarity of pixels in multiple spectral channels. In certain embodiments, generating depth estimates involves generating a confidence map indicating the reliability of depth estimates.
    Type: Grant
    Filed: August 9, 2019
    Date of Patent: February 2, 2021
    Assignee: FotoNation Limited
    Inventors: Florian Ciurea, Kartik Venkataraman, Gabriel Molina, Dan Lelescu
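
To make the abstracts above more concrete, the sketches that follow illustrate, in deliberately generic form, the kinds of computations they describe. None of them reproduces the claimed methods, and every invented name, file layout, or parameter is a labeled assumption.

The parallax detection and correction abstracts (for example, patent 11941833 and publication 20210150748) describe estimating per-pixel disparity between cameras in the array and producing a confidence map for the depth estimates. The sketch below is a minimal window-based disparity search over a single horizontal baseline with a simple cost-ratio confidence measure; the two-view setup and the metric are simplifying assumptions, not the patented method.

```python
import numpy as np
from scipy.ndimage import uniform_filter  # box-filter cost aggregation

def estimate_disparity(ref, alt, max_disparity=16, window=5):
    """Estimate per-pixel disparity between a reference view and one
    alternate view captured along a horizontal baseline, plus a simple
    confidence map indicating how distinct the best match was."""
    h, w = ref.shape
    costs = np.empty((max_disparity + 1, h, w), dtype=np.float32)
    for d in range(max_disparity + 1):
        # Shift the alternate view by the candidate disparity.
        # (np.roll wraps at the border, which a real system would mask out.)
        shifted = np.roll(alt, d, axis=1)
        # Aggregate squared differences over a small window.
        costs[d] = uniform_filter((ref - shifted) ** 2, size=window)
    disparity = np.argmin(costs, axis=0).astype(np.float32)
    best = costs.min(axis=0)
    mean = costs.mean(axis=0)
    # Confidence: close to 1 when the best cost is far below the average cost,
    # near 0 in textureless regions where all candidates look alike.
    confidence = 1.0 - best / (mean + 1e-6)
    return disparity, confidence
```

For calibrated cameras, depth then follows from disparity as depth = focal_length_px * baseline / disparity.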
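
The super-resolution abstracts (for example, patent 11875475) describe forming an initial high resolution estimate from the low resolution inputs and refining it until a forward imaging transformation of the estimate matches the inputs to within a predetermined criterion. The sketch below is a generic iterative back-projection loop under two simplifying assumptions: the LR images are already registered, and the forward model is plain block-average downsampling.

```python
import numpy as np

def downsample(img, factor):
    """Forward imaging model used in this sketch: block averaging."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor):
    """Nearest-neighbour upsampling used to back-project residuals."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def super_resolve(lr_images, factor=2, iterations=30, tol=1e-4):
    """Refine a high resolution (HR) estimate until the forward model
    reproduces the low resolution (LR) inputs to within `tol`."""
    # Initial HR estimate: average of the upsampled LR inputs.
    hr = np.mean([upsample(lr, factor) for lr in lr_images], axis=0)
    for _ in range(iterations):
        residual = np.zeros_like(hr)
        for lr in lr_images:
            simulated = downsample(hr, factor)            # forward transformation
            residual += upsample(lr - simulated, factor)  # back-project the error
        residual /= len(lr_images)
        hr += residual
        if np.max(np.abs(residual)) < tol:  # predetermined matching criterion
            break
    return hr
```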
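
Several entries (for example, patent 11729365) describe a light field image file that bundles an encoded image with metadata containing a per-pixel depth map, and a rendering application that decodes the image and post-processes it using those depths. The real file format is not specified in the abstracts; the sketch below invents a toy layout (4-byte header length, JSON header, raw pixel and depth bytes) purely to show the decode-then-depth-based-post-process flow, with a depth-dependent blur standing in for the actual post-processing.

```python
import json
import numpy as np
from scipy.ndimage import gaussian_filter

def read_light_field_file(path):
    """Read a toy 'light field image file': a JSON header followed by raw
    RGB image bytes and float32 depth-map bytes. The layout is invented
    for illustration and is not the actual file format."""
    with open(path, "rb") as f:
        header_len = int.from_bytes(f.read(4), "little")
        header = json.loads(f.read(header_len))
        h, w = header["height"], header["width"]
        image = np.frombuffer(f.read(h * w * 3), dtype=np.uint8).reshape(h, w, 3)
        depth = np.frombuffer(f.read(h * w * 4), dtype=np.float32).reshape(h, w)
    return image, depth, header

def render_with_depth(image, depth, focus_depth, blur_sigma=3.0):
    """Post-process the decoded image using the depth map: pixels whose depth
    is far from the chosen focus depth are blurred, simulating refocusing."""
    sharp = image.astype(np.float32)
    blurred = gaussian_filter(sharp, sigma=(blur_sigma, blur_sigma, 0))
    weight = np.clip(np.abs(depth - focus_depth), 0.0, 1.0)[..., None]
    return ((1.0 - weight) * sharp + weight * blurred).astype(np.uint8)
```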
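
The dynamic-calibration abstracts (for example, patent 11546576) describe measuring residual vectors at matched feature locations after rectification and using them either to update the calibration data or to select the best-fitting set from several stored sets. The sketch below shows only the selection variant, with invented data structures: each candidate set carries a correction function that maps pixel coordinates to per-point correction vectors.

```python
import numpy as np

def residual_vectors(ref_points, alt_points):
    """After ideal rectification, matched features in the two views should
    differ only along the horizontal (disparity) axis; any vertical offset
    is residual geometric error left over from stale calibration data."""
    residuals = alt_points - ref_points   # shape (N, 2): (dx, dy) per feature
    residuals[:, 0] = 0.0                 # horizontal part is legitimate disparity
    return residuals

def select_calibration(candidate_sets, ref_points, alt_points):
    """Pick the stored calibration set whose correction field best cancels
    the observed residuals (smallest mean residual magnitude)."""
    best_set, best_error = None, np.inf
    for calib in candidate_sets:
        # calib["correction"](points) -> per-point correction vectors, shape (N, 2)
        corrected = alt_points + calib["correction"](alt_points)
        error = np.mean(np.linalg.norm(residual_vectors(ref_points, corrected), axis=1))
        if error < best_error:
            best_set, best_error = calib, error
    return best_set, best_error
```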
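
The near-field/far-field abstracts (for example, patent 11368662) hinge on the relation between stereo baseline and depth accuracy: for rectified cameras, disparity is approximately focal_length_px * baseline / depth, so a wide baseline yields larger, more precise disparities for distant objects, while a narrow baseline keeps near objects inside a practical disparity search range. The numbers below are made up purely to illustrate that trade-off.

```python
def disparity_px(depth_m, baseline_m, focal_px):
    """Approximate disparity (in pixels) for a rectified stereo pair."""
    return focal_px * baseline_m / depth_m

FOCAL_PX = 1500.0        # illustrative focal length in pixels
NEAR_BASELINE_M = 0.01   # tightly spaced sub-array aimed at near objects
FAR_BASELINE_M = 0.08    # widely spaced sub-array aimed at far objects

for depth in (0.3, 1.0, 5.0, 20.0):
    near = disparity_px(depth, NEAR_BASELINE_M, FOCAL_PX)
    far = disparity_px(depth, FAR_BASELINE_M, FOCAL_PX)
    print(f"depth {depth:5.1f} m -> near sub-array {near:6.1f} px, "
          f"far sub-array {far:6.1f} px")
```

With these illustrative numbers, at 20 m the narrow sub-array sees under one pixel of disparity while the wide sub-array still sees several pixels, and at 0.3 m the wide baseline produces disparities far outside a practical search range, which is why the architecture assigns near and far objects to different sub-arrays.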
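
The active-alignment abstract (patent 11022725) describes sweeping the spatial relationship between the lens stack array and the sensor, scoring how well a region of interest of a known target is focused at each position, and selecting the best-scoring relationship. The sketch below is a generic score-and-select loop: `capture_at` is a hypothetical stand-in for the alignment rig and camera, and variance of a discrete Laplacian stands in for whatever focus metric is actually used.

```python
import numpy as np

def focus_score(roi):
    """Variance of a discrete Laplacian of the region of interest:
    higher values indicate sharper focus."""
    lap = (-4.0 * roi
           + np.roll(roi, 1, axis=0) + np.roll(roi, -1, axis=0)
           + np.roll(roi, 1, axis=1) + np.roll(roi, -1, axis=1))
    return float(lap.var())

def align(capture_at, positions, roi_box):
    """capture_at(position) -> 2-D image from one active focal plane;
    roi_box = (y0, y1, x0, x1) bounds of the region of interest."""
    y0, y1, x0, x1 = roi_box
    scores = []
    for pos in positions:
        image = capture_at(pos)   # vary the lens-to-sensor spatial relationship
        scores.append(focus_score(image[y0:y1, x0:x1].astype(np.float32)))
    best = int(np.argmax(scores))
    return positions[best], scores[best]   # selected spatial relationship
```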
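
The light field panorama entry (patent 11044398) describes rendering views of the captured scene from new positions using depth information recovered for the captured images. As a rough illustration of that kind of viewpoint shift, the sketch below forward-warps a single image and its depth map to a slightly translated virtual camera; the pinhole intrinsics and the naive splatting (no occlusion handling or hole filling) are simplifying assumptions, not the patented rendering engine.

```python
import numpy as np

def reproject(image, depth, fx, fy, cx, cy, translation):
    """Forward-warp `image` to a viewpoint translated by (tx, ty, tz) metres
    in the camera frame, using the per-pixel depth map `depth` (metres,
    assumed positive everywhere)."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Back-project pixels to 3-D points in the source camera frame.
    z = depth
    x = (xs - cx) * z / fx
    y = (ys - cy) * z / fy
    # Express the points in the translated virtual camera frame and re-project.
    tx, ty, tz = translation
    xp, yp, zp = x - tx, y - ty, z - tz
    us = np.clip(np.round(fx * xp / zp + cx).astype(int), 0, w - 1)
    vs = np.clip(np.round(fy * yp / zp + cy).astype(int), 0, h - 1)
    out = np.zeros_like(image)
    out[vs, us] = image   # naive splat: last write wins, holes left black
    return out
```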