Patents by Inventor Dan Lelescu

Dan Lelescu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250142038
    Abstract: Embodiments of the invention provide a camera array imaging architecture that computes depth maps for objects within a scene captured by the cameras, using a near-field sub-array of cameras to compute depth to near-field objects and a far-field sub-array of cameras to compute depth to far-field objects. In particular, the baseline distance between cameras in the near-field sub-array is less than the baseline distance between cameras in the far-field sub-array in order to increase the accuracy of the depth map. Some embodiments provide a near-IR illumination source for use in computing depth maps.
    Type: Application
    Filed: July 15, 2024
    Publication date: May 1, 2025
    Applicant: Adeia Imaging LLC
    Inventors: Kartik Venkataraman, Dan Lelescu, Jacques Duparre
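    Example (illustrative): a minimal Python sketch, not taken from the patent, of why a longer camera baseline improves far-field depth accuracy while a shorter one suffices for near-field objects; all function names and numeric values below are assumptions.
      # Standard stereo relation: depth z = f * B / d (focal length in pixels,
      # baseline in meters, disparity in pixels).
      def depth_from_disparity(focal_px, baseline_m, disparity_px):
          return focal_px * baseline_m / disparity_px

      # Differentiating z = f*B/d gives |dz| ~ z^2 / (f*B) * |dd|: depth error grows
      # with depth squared and shrinks as the baseline grows, which is why the
      # far-field sub-array uses the larger baseline.
      def depth_error(focal_px, baseline_m, depth_m, disparity_error_px=0.25):
          return (depth_m ** 2) / (focal_px * baseline_m) * disparity_error_px

      FOCAL_PX = 1400.0        # assumed focal length in pixels
      NEAR_BASELINE_M = 0.01   # assumed short baseline of the near-field sub-array
      FAR_BASELINE_M = 0.05    # assumed longer baseline of the far-field sub-array

      print(depth_error(FOCAL_PX, NEAR_BASELINE_M, depth_m=5.0))  # larger error at 5 m
      print(depth_error(FOCAL_PX, FAR_BASELINE_M, depth_m=5.0))   # smaller error at 5 m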
  • Patent number: 12243190
    Abstract: Systems and methods in accordance with embodiments of the invention are disclosed that use super-resolution (SR) processes to combine information from a plurality of low resolution (LR) images captured by an array camera into a synthesized higher resolution image. One embodiment includes obtaining input images using the plurality of imagers, using a microprocessor to determine an initial estimate of at least a portion of a high resolution image using a plurality of pixels from the input images, and using a microprocessor and the initial estimate to determine a high resolution image that, when mapped through the forward imaging transformation, matches the input images to within at least one predetermined criterion.
    Type: Grant
    Filed: November 8, 2023
    Date of Patent: March 4, 2025
    Assignee: Adeia Imaging LLC
    Inventors: Dan Lelescu, Gabriel Molina, Kartik Venkataraman
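    Example (illustrative): a short Python sketch, under assumptions that are not the patented method, of the general idea: start from an initial high-resolution estimate fused from the low-resolution inputs, then refine it until a simple forward imaging model (plain box downsampling here) reproduces the inputs within a tolerance.
      import numpy as np

      def downsample(hr, factor):
          # Stand-in forward imaging transformation: average factor x factor blocks.
          h, w = hr.shape
          return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

      def super_resolve(lr_images, factor=2, iterations=50, tol=1e-4, step=1.0):
          # Initial estimate: nearest-neighbour upsampling of the averaged LR inputs.
          hr = np.kron(np.mean(lr_images, axis=0), np.ones((factor, factor)))
          for _ in range(iterations):
              # Residual between the LR inputs and the forward-mapped HR estimate.
              residual = np.mean([lr - downsample(hr, factor) for lr in lr_images], axis=0)
              if np.abs(residual).max() < tol:   # predetermined matching criterion
                  break
              hr += step * np.kron(residual, np.ones((factor, factor)))
          return hr

      lr_stack = [np.random.rand(8, 8) for _ in range(4)]   # stand-in LR captures
      print(super_resolve(lr_stack).shape)                  # (16, 16)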
  • Publication number: 20240331181
    Abstract: Systems in accordance with embodiments of the invention can perform parallax detection and correction in images captured using array cameras. Due to the different viewpoints of the cameras, parallax results in variations in the position of objects within the captured images of the scene. Methods in accordance with embodiments of the invention provide an accurate account of the pixel disparity due to parallax between the different cameras in the array, so that appropriate scene-dependent geometric shifts can be applied to the pixels of the captured images when performing super-resolution processing. In a number of embodiments, generating depth estimates considers the similarity of pixels in multiple spectral channels. In certain embodiments, generating depth estimates involves generating a confidence map indicating the reliability of depth estimates.
    Type: Application
    Filed: February 6, 2024
    Publication date: October 3, 2024
    Applicant: Adeia Imaging LLC
    Inventors: Florian Ciurea, Kartik Venkataraman, Gabriel Molina, Dan Lelescu
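    Example (illustrative): a simplified one-scanline Python sketch, not the patented procedure, of estimating disparity for one reference pixel by scoring candidate shifts against another camera's image across several spectral channels, with a confidence value taken from how distinct the best match is; all names and sizes are assumptions.
      import numpy as np

      def estimate_disparity(ref_row, other_row, x, max_disparity=8):
          # ref_row, other_row: arrays of shape (channels, width) for one scanline.
          costs = []
          for d in range(max_disparity + 1):
              if x - d < 0:
                  costs.append(np.inf)
                  continue
              # Similarity of the candidate pixel pair across all spectral channels.
              costs.append(np.abs(ref_row[:, x] - other_row[:, x - d]).sum())
          costs = np.array(costs)
          best = int(np.argmin(costs))
          # Confidence: margin between the best and second-best matching costs.
          confidence = float(np.partition(costs, 1)[1] - costs[best])
          return best, confidence

      ref = np.random.rand(3, 32)                  # 3 spectral channels, 32 pixels
      other = np.roll(ref, -3, axis=1)             # other view shifted by 3 pixels
      print(estimate_disparity(ref, other, x=20))  # disparity 3, positive confidence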
  • Publication number: 20240333901
    Abstract: Systems and methods in accordance with embodiments of the invention are configured to decode image files containing an image of a scene and a corresponding depth map. A depth-based effect is applied to the image to generate a synthetic image of the scene. The synthetic image can be encoded into a new image file that contains metadata associated with the depth-based effect. In many embodiments, the original decoded image has a depth-based effect applied to it that differs from the one applied to the synthetic image.
    Type: Application
    Filed: June 13, 2024
    Publication date: October 3, 2024
    Applicant: Adeia Imaging LLC
    Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
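    Example (illustrative): a small Python sketch, with assumed names and a toy blur rather than the patent's effect, of applying a depth-based effect to a decoded image using its depth map and recording the effect parameters as metadata so a different effect could later be applied to the original image.
      import numpy as np

      def box_blur(img, radius):
          # Crude box blur used as a stand-in defocus kernel.
          pad = np.pad(img, radius, mode="edge")
          out = np.zeros_like(img, dtype=float)
          size = 2 * radius + 1
          for dy in range(size):
              for dx in range(size):
                  out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
          return out / size ** 2

      def apply_depth_effect(image, depth_map, focal_depth, max_radius=3):
          # Blur pixels more the farther their depth is from the chosen focal depth.
          result = image.astype(float).copy()
          spread = np.abs(depth_map - focal_depth)
          for r in range(1, max_radius + 1):
              mask = spread >= r / (max_radius + 1) * spread.max()
              result[mask] = box_blur(image.astype(float), r)[mask]
          metadata = {"effect": "synthetic_defocus", "focal_depth": focal_depth,
                      "max_radius": max_radius}
          return result, metadata

      image = np.random.rand(16, 16)
      depth = np.tile(np.linspace(0.5, 5.0, 16), (16, 1))
      synthetic, meta = apply_depth_effect(image, depth, focal_depth=1.0)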
  • Patent number: 12081721
    Abstract: Embodiments of the invention provide a camera array imaging architecture that computes depth maps for objects within a scene captured by the cameras, using a near-field sub-array of cameras to compute depth to near-field objects and a far-field sub-array of cameras to compute depth to far-field objects. In particular, the baseline distance between cameras in the near-field sub-array is less than the baseline distance between cameras in the far-field sub-array in order to increase the accuracy of the depth map. Some embodiments provide a near-IR illumination source for use in computing depth maps.
    Type: Grant
    Filed: June 17, 2022
    Date of Patent: September 3, 2024
    Assignee: Adeia Imaging LLC
    Inventors: Kartik Venkataraman, Dan Lelescu, Jacques Duparre
  • Patent number: 12052409
    Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post-process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
    Type: Grant
    Filed: June 22, 2023
    Date of Patent: July 30, 2024
    Assignee: Adeia Imaging LLC
    Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
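    Example (illustrative): a toy Python sketch of the rendering flow, using an assumed dictionary-based container rather than the actual light field image file format: locate the encoded image, decode it, locate the metadata, and post-process the decoded image with the depth map it carries.
      import numpy as np

      def decode(encoded):
          # Stand-in decoder: the "encoded image" here is just raw float64 bytes.
          data, shape = encoded
          return np.frombuffer(data, dtype=np.float64).reshape(shape)

      def render(light_field_file, focal_depth=1.0):
          encoded = light_field_file["encoded_image"]     # locate the encoded image
          image = decode(encoded)                         # decode it
          metadata = light_field_file["metadata"]         # locate the metadata
          depth_map = metadata["depth_map"]
          # Post-process: attenuate pixels away from the focal plane, a stand-in for
          # a depth-based modification driven by the depth map.
          weight = 1.0 / (1.0 + np.abs(depth_map - focal_depth))
          return image * weight

      image = np.random.rand(8, 8)
      lf_file = {"encoded_image": (image.tobytes(), image.shape),
                 "metadata": {"depth_map": np.tile(np.linspace(0.5, 3.0, 8), (8, 1))}}
      print(render(lf_file).shape)                        # (8, 8)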
  • Patent number: 12002233
    Abstract: Systems in accordance with embodiments of the invention can perform parallax detection and correction in images captured using array cameras. Due to the different viewpoints of the cameras, parallax results in variations in the position of objects within the captured images of the scene. Methods in accordance with embodiments of the invention provide an accurate account of the pixel disparity due to parallax between the different cameras in the array, so that appropriate scene-dependent geometric shifts can be applied to the pixels of the captured images when performing super-resolution processing. In a number of embodiments, generating depth estimates considers the similarity of pixels in multiple spectral channels. In certain embodiments, generating depth estimates involves generating a confidence map indicating the reliability of depth estimates.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: June 4, 2024
    Assignee: Adeia Imaging LLC
    Inventors: Florian Ciurea, Kartik Venkataraman, Gabriel Molina, Dan Lelescu
  • Publication number: 20240169483
    Abstract: Systems and methods in accordance with embodiments of the invention are disclosed that use super-resolution (SR) processes to combine information from a plurality of low resolution (LR) images captured by an array camera into a synthesized higher resolution image. One embodiment includes obtaining input images using the plurality of imagers, using a microprocessor to determine an initial estimate of at least a portion of a high resolution image using a plurality of pixels from the input images, and using a microprocessor and the initial estimate to determine a high resolution image that, when mapped through the forward imaging transformation, matches the input images to within at least one predetermined criterion.
    Type: Application
    Filed: November 8, 2023
    Publication date: May 23, 2024
    Applicant: Adeia Imaging LLC
    Inventors: Dan Lelescu, Gabriel Molina, Kartik Venkataraman
  • Patent number: 11941833
    Abstract: Systems in accordance with embodiments of the invention can perform parallax detection and correction in images captured using array cameras. Due to the different viewpoints of the cameras, parallax results in variations in the position of objects within the captured images of the scene. Methods in accordance with embodiments of the invention provide an accurate account of the pixel disparity due to parallax between the different cameras in the array, so that appropriate scene-dependent geometric shifts can be applied to the pixels of the captured images when performing super-resolution processing. In a number of embodiments, generating depth estimates considers the similarity of pixels in multiple spectral channels. In certain embodiments, generating depth estimates involves generating a confidence map indicating the reliability of depth estimates.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: March 26, 2024
    Assignee: Adeia Imaging LLC
    Inventors: Florian Ciurea, Kartik Venkataraman, Gabriel Molina, Dan Lelescu
  • Patent number: 11875475
    Abstract: Systems and methods in accordance with embodiments of the invention are disclosed that use super-resolution (SR) processes to combine information from a plurality of low resolution (LR) images captured by an array camera into a synthesized higher resolution image. One embodiment includes obtaining input images using the plurality of imagers, using a microprocessor to determine an initial estimate of at least a portion of a high resolution image using a plurality of pixels from the input images, and using a microprocessor and the initial estimate to determine a high resolution image that, when mapped through the forward imaging transformation, matches the input images to within at least one predetermined criterion.
    Type: Grant
    Filed: August 22, 2022
    Date of Patent: January 16, 2024
    Assignee: Adeia Imaging LLC
    Inventors: Dan Lelescu, Gabriel Molina, Kartik Venkataraman
  • Publication number: 20230421742
    Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post-process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
    Type: Application
    Filed: June 22, 2023
    Publication date: December 28, 2023
    Applicant: Adeia Imaging LLC
    Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
  • Publication number: 20230336707
    Abstract: Systems and methods for dynamically calibrating an array camera to accommodate variations in geometry that can occur throughout its operational life are disclosed. The dynamic calibration processes can include acquiring a set of images of a scene and identifying corresponding features within the images. Geometric calibration data can be used to rectify the images and determine residual vectors for the geometric calibration data at locations where corresponding features are observed. The residual vectors can then be used to determine updated geometric calibration data for the camera array. In several embodiments, the residual vectors are used to generate a residual vector calibration data field that updates the geometric calibration data. In many embodiments, the residual vectors are used to select, from amongst a number of different sets of geometric calibration data, the set that is the best fit for the current geometry of the camera array.
    Type: Application
    Filed: December 28, 2022
    Publication date: October 19, 2023
    Applicant: Adeia Imaging LLC
    Inventors: Florian Ciurea, Dan Lelescu, Priyam Chatterjee
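    Example (illustrative): a brief Python sketch, under assumptions that are not the patented procedure, of the selection variant: form residual vectors between observed feature positions and the positions each candidate set of geometric calibration data predicts after rectification, then keep the candidate with the smallest residuals.
      import numpy as np

      def residual_vectors(observed_pts, predicted_pts):
          # One 2-D residual vector per corresponding feature (arrays of shape (N, 2)).
          return observed_pts - predicted_pts

      def best_calibration(observed_pts, candidate_predictions):
          # candidate_predictions: {name: predicted feature positions, shape (N, 2)}
          scores = {name: np.linalg.norm(residual_vectors(observed_pts, pred), axis=1).mean()
                    for name, pred in candidate_predictions.items()}
          return min(scores, key=scores.get), scores

      observed = np.random.rand(100, 2) * 640
      candidates = {"factory": observed + np.random.normal(0, 2.0, observed.shape),
                    "warmed_up": observed + np.random.normal(0, 0.3, observed.shape)}
      print(best_calibration(observed, candidates)[0])    # most likely "warmed_up"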
  • Patent number: 11729365
    Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post-process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: August 15, 2023
    Assignee: Adeia Imaging LLC
    Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
  • Publication number: 20230007223
    Abstract: Embodiments of the invention provide a camera array imaging architecture that computes depth maps for objects within a scene captured by the cameras, using a near-field sub-array of cameras to compute depth to near-field objects and a far-field sub-array of cameras to compute depth to far-field objects. In particular, the baseline distance between cameras in the near-field sub-array is less than the baseline distance between cameras in the far-field sub-array in order to increase the accuracy of the depth map. Some embodiments provide a near-IR illumination source for use in computing depth maps.
    Type: Application
    Filed: June 17, 2022
    Publication date: January 5, 2023
    Applicant: FotoNation Limited
    Inventors: Kartik Venkataraman, Dan Lelescu, Jacques Duparre
  • Patent number: 11546576
    Abstract: Systems and methods for dynamically calibrating an array camera to accommodate variations in geometry that can occur throughout its operational life are disclosed. The dynamic calibration processes can include acquiring a set of images of a scene and identifying corresponding features within the images. Geometric calibration data can be used to rectify the images and determine residual vectors for the geometric calibration data at locations where corresponding features are observed. The residual vectors can then be used to determine updated geometric calibration data for the camera array. In several embodiments, the residual vectors are used to generate a residual vector calibration data field that updates the geometric calibration data. In many embodiments, the residual vectors are used to select, from amongst a number of different sets of geometric calibration data, the set that is the best fit for the current geometry of the camera array.
    Type: Grant
    Filed: March 8, 2021
    Date of Patent: January 3, 2023
    Assignee: Adeia Imaging LLC
    Inventors: Florian Ciurea, Dan Lelescu, Priyam Chatterjee
  • Publication number: 20220414829
    Abstract: Systems and methods in accordance with embodiments of the invention are disclosed that use super-resolution (SR) processes to combine information from a plurality of low resolution (LR) images captured by an array camera into a synthesized higher resolution image. One embodiment includes obtaining input images using the plurality of imagers, using a microprocessor to determine an initial estimate of at least a portion of a high resolution image using a plurality of pixels from the input images, and using a microprocessor and the initial estimate to determine a high resolution image that, when mapped through the forward imaging transformation, matches the input images to within at least one predetermined criterion.
    Type: Application
    Filed: August 22, 2022
    Publication date: December 29, 2022
    Applicant: FotoNation Limited
    Inventors: Dan Lelescu, Gabriel Molina, Kartik Venkataraman
  • Patent number: 11423513
    Abstract: Systems and methods in accordance with embodiments of the invention are disclosed that use super-resolution (SR) processes to combine information from a plurality of low resolution (LR) images captured by an array camera into a synthesized higher resolution image. One embodiment includes obtaining input images using the plurality of imagers, using a microprocessor to determine an initial estimate of at least a portion of a high resolution image using a plurality of pixels from the input images, and using a microprocessor and the initial estimate to determine a high resolution image that, when mapped through the forward imaging transformation, matches the input images to within at least one predetermined criterion.
    Type: Grant
    Filed: November 15, 2020
    Date of Patent: August 23, 2022
    Inventors: Dan Lelescu, Gabriel Molina, Kartik Venkataraman
  • Patent number: 11368662
    Abstract: Embodiments of the invention provide a camera array imaging architecture that computes depth maps for objects within a scene captured by the cameras, using a near-field sub-array of cameras to compute depth to near-field objects and a far-field sub-array of cameras to compute depth to far-field objects. In particular, the baseline distance between cameras in the near-field sub-array is less than the baseline distance between cameras in the far-field sub-array in order to increase the accuracy of the depth map. Some embodiments provide a near-IR illumination source for use in computing depth maps.
    Type: Grant
    Filed: October 12, 2020
    Date of Patent: June 21, 2022
    Inventors: Kartik Venkataraman, Dan Lelescu, Jacques Duparre
  • Publication number: 20210312207
    Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post-process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
    Type: Application
    Filed: April 19, 2021
    Publication date: October 7, 2021
    Applicant: FotoNation Limited
    Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
  • Publication number: 20210281828
    Abstract: Systems and methods for dynamically calibrating an array camera to accommodate variations in geometry that can occur throughout its operational life are disclosed. The dynamic calibration processes can include acquiring a set of images of a scene and identifying corresponding features within the images. Geometric calibration data can be used to rectify the images and determine residual vectors for the geometric calibration data at locations where corresponding features are observed. The residual vectors can then be used to determine updated geometric calibration data for the camera array. In several embodiments, the residual vectors are used to generate a residual vector calibration data field that updates the geometric calibration data. In many embodiments, the residual vectors are used to select, from amongst a number of different sets of geometric calibration data, the set that is the best fit for the current geometry of the camera array.
    Type: Application
    Filed: March 8, 2021
    Publication date: September 9, 2021
    Applicant: FotoNation Limited
    Inventors: Florian Ciurea, Dan Lelescu, Priyam Chatterjee