Patents by Inventor Dan Lelescu
Dan Lelescu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10909707
Abstract: Systems in accordance with embodiments of the invention can perform parallax detection and correction in images captured using array cameras. Due to the different viewpoints of the cameras, parallax results in variations in the position of objects within the captured images of the scene. Methods in accordance with embodiments of the invention provide an accurate account of the pixel disparity due to parallax between the different cameras in the array, so that appropriate scene-dependent geometric shifts can be applied to the pixels of the captured images when performing super-resolution processing. In a number of embodiments, generating depth estimates considers the similarity of pixels in multiple spectral channels. In certain embodiments, generating depth estimates involves generating a confidence map indicating the reliability of depth estimates.
Type: Grant
Filed: August 9, 2019
Date of Patent: February 2, 2021
Assignee: FotoNation Limited
Inventors: Florian Ciurea, Kartik Venkataraman, Gabriel Molina, Dan Lelescu
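The core relationship the abstract relies on — pixel disparity between cameras encoding scene depth, with a confidence measure on each estimate — can be sketched in a deliberately simplified 1-D form. This is a hypothetical illustration (single-pixel matching costs, two cameras, one scanline), not the patented method:

```python
def estimate_depth(ref_row, alt_row, focal_px, baseline, max_disp):
    # For each reference pixel, test disparity hypotheses by comparing
    # against the shifted pixel in the second camera's scanline, pick the
    # lowest-cost disparity, and convert it to depth via z = f * B / d.
    # Confidence is the margin between the best and second-best costs,
    # a simple proxy for the reliability map the abstract mentions.
    depths, confidences = [], []
    for x in range(len(ref_row)):
        costs = []
        for d in range(max_disp + 1):
            if x - d < 0:
                costs.append(float("inf"))
            else:
                costs.append(abs(ref_row[x] - alt_row[x - d]))
        order = sorted(range(len(costs)), key=costs.__getitem__)
        best = order[0]
        depths.append(float("inf") if best == 0 else focal_px * baseline / best)
        confidences.append(costs[order[1]] - costs[best])
    return depths, confidences
```

A real array camera would aggregate costs over patches and over many cameras and spectral channels; the point here is only the disparity-to-depth conversion and the cost-margin confidence.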
-
Patent number: 10839485
Abstract: Systems and methods in accordance with embodiments of the invention are disclosed that use super-resolution (SR) processes to combine information from a plurality of low resolution (LR) images captured by an array camera into a synthesized higher resolution image. One embodiment includes obtaining input images using the plurality of imagers, using a microprocessor to determine an initial estimate of at least a portion of a high resolution image using a plurality of pixels from the input images, and using a microprocessor to determine a high resolution image that when mapped through the forward imaging transformation matches the input images to within at least one predetermined criterion, using the initial estimate of at least a portion of the high resolution image.
Type: Grant
Filed: July 24, 2019
Date of Patent: November 17, 2020
Assignee: FotoNation Limited
Inventors: Dan Lelescu, Gabriel Molina, Kartik Venkataraman
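The "high resolution image that when mapped through the forward imaging transformation matches the input images to within a predetermined criterion" is the classic inverse-problem formulation of super-resolution. A minimal sketch, assuming block averaging as the forward transformation and simple error back-projection as the solver (both assumptions of this illustration, not the patent):

```python
def downsample(hr, factor):
    # Assumed forward imaging transformation: block averaging.
    return [sum(hr[i:i + factor]) / factor for i in range(0, len(hr), factor)]

def super_resolve(lr, factor, iterations=50, tol=1e-9):
    # Start from an initial HR estimate, then iteratively back-project the
    # error between the observed LR image and the forward-mapped HR
    # estimate until the match is within the predetermined criterion (tol).
    hr = [0.0] * (len(lr) * factor)
    for _ in range(iterations):
        errors = [o - s for o, s in zip(lr, downsample(hr, factor))]
        if max(abs(e) for e in errors) < tol:
            break
        for i, e in enumerate(errors):
            for j in range(i * factor, (i + 1) * factor):
                hr[j] += e
    return hr
```

With multiple LR images from different viewpoints, each would have its own forward transformation (shift plus blur plus decimation), and the back-projection step would fuse errors from all of them.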
-
Patent number: 10805589
Abstract: Embodiments of the invention provide a camera array imaging architecture that computes depth maps for objects within a scene captured by the cameras, using a near-field sub-array of cameras to compute depth to near-field objects and a far-field sub-array of cameras to compute depth to far-field objects. In particular, a baseline distance between cameras in the near-field sub-array is less than a baseline distance between cameras in the far-field sub-array, in order to increase the accuracy of the depth map. Some embodiments provide a near-IR illumination source for use in computing depth maps.
Type: Grant
Filed: April 19, 2016
Date of Patent: October 13, 2020
Assignee: FotoNation Limited
Inventors: Kartik Venkataraman, Dan Lelescu, Jacques Duparre
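The reason a longer baseline helps far-field accuracy follows from the first-order stereo error model: depth uncertainty grows with the square of distance and shrinks with baseline. A small sketch of that relationship and of the routing policy the abstract implies (both hypothetical simplifications):

```python
def depth_error(z, baseline, focal_px, disp_err_px=0.25):
    # First-order stereo depth uncertainty: dz ~= z^2 * delta_d / (f * B).
    # A longer baseline B reduces the error at a given depth z, which is
    # why the far-field sub-array uses wider camera spacing.
    return z * z * disp_err_px / (focal_px * baseline)

def choose_subarray(estimated_z, near_field_limit):
    # Route close objects to the short-baseline near-field sub-array and
    # distant objects to the long-baseline far-field sub-array.
    return "near" if estimated_z < near_field_limit else "far"
```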
-
Patent number: 10674138
Abstract: Systems with an array camera augmented with a conventional camera in accordance with embodiments of the invention are disclosed. In some embodiments, the array camera is used to capture a first set of image data of a scene and a conventional camera is used to capture a second set of image data for the scene. An object of interest is identified in the first set of image data. A first depth measurement for the object of interest is determined and compared to a predetermined threshold. If the first depth measurement is above the threshold, a second set of image data captured using the conventional camera is obtained. The object of interest is identified in the second set of image data and a second depth measurement for the object of interest is determined using at least a portion of the first set of image data and at least a portion of the second set of image data.
Type: Grant
Filed: November 2, 2018
Date of Patent: June 2, 2020
Assignee: FotoNation Limited
Inventors: Kartik Venkataraman, Paul Gallagher, Ankit K. Jain, Semyon Nisenzon, Dan Lelescu, Florian Ciurea, Gabriel Molina
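The threshold logic in the abstract is a two-stage measurement: trust the array camera's depth when the object is close, and only invoke the cross-camera measurement (which combines image data from both cameras) when the object is beyond the threshold. A bare control-flow sketch, with the second-stage measurement left as an assumed callable:

```python
def object_depth(array_camera_depth, threshold, cross_camera_depth):
    # First depth measurement comes from the array camera alone. Beyond
    # the threshold, fall back to cross_camera_depth(), a hypothetical
    # stand-in for the second measurement that uses portions of both the
    # array camera's and the conventional camera's image data.
    if array_camera_depth <= threshold:
        return array_camera_depth
    return cross_camera_depth()
```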
-
Patent number: 10638099
Abstract: Systems and methods for extended color processing on Pelican array cameras in accordance with embodiments of the invention are disclosed. In one embodiment, a method of generating a high resolution image includes obtaining input images, where a first set of images includes information in a first band of visible wavelengths and a second set of images includes information in a second band of visible wavelengths and non-visible wavelengths; determining an initial estimate by combining the first set of images into a first fused image; combining the second set of images into a second fused image; spatially registering the fused images; denoising the fused images using bilateral filters; normalizing the second fused image in the photometric reference space of the first fused image; combining the fused images; and determining a high resolution image that when mapped through a forward imaging transformation matches the input images within at least one predetermined criterion.
Type: Grant
Filed: January 11, 2019
Date of Patent: April 28, 2020
Assignee: FotoNation Limited
Inventors: Robert H. Mullis, Dan Lelescu, Kartik Venkataraman
-
Publication number: 20200106959
Abstract: A light field panorama system in which a user holding a mobile device performs a gesture to capture images of a scene from different positions. Additional information, for example position and orientation information, may also be captured. The images and information may be processed to determine metadata including the relative positions of the images and depth information for the images. The images and metadata may be stored as a light field panorama. The light field panorama may be processed by a rendering engine to render different 3D views of the scene to allow a viewer to explore the scene from different positions and angles with six degrees of freedom. Using a rendering and viewing system such as a mobile device or head-mounted display, the viewer may see behind or over objects in the scene, zoom in or out on the scene, or view different parts of the scene.
Type: Application
Filed: September 25, 2019
Publication date: April 2, 2020
Applicant: Apple Inc.
Inventors: Gabriel D. Molina, Ricardo J. Motta, Gary L. Vondran, JR., Dan Lelescu, Tobias Rick, Brett Miller
-
Publication number: 20200026948
Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
Type: Application
Filed: September 27, 2019
Publication date: January 23, 2020
Applicant: FotoNation Limited
Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
-
Patent number: 10542208
Abstract: Systems and methods for synthesizing high resolution images using image deconvolution and depth information in accordance with embodiments of the invention are disclosed. In one embodiment, an array camera includes a processor and a memory, wherein an image deconvolution application configures the processor to obtain light field image data, determine motion data based on metadata contained in the light field image data, generate a depth-dependent point spread function based on the synthesized high resolution image, the depth map, and the motion data, measure the quality of the synthesized high resolution image based on the generated depth-dependent point spread function, and, when the measured quality of the synthesized high resolution image is within a quality threshold, incorporate the synthesized high resolution image into the light field image data.
Type: Grant
Filed: April 23, 2018
Date of Patent: January 21, 2020
Assignee: FotoNation Limited
Inventors: Dan Lelescu, Thang Duong
-
Patent number: 10540806
Abstract: Systems and methods for automatically correcting apparent distortions in close range photographs that are captured using an imaging system capable of capturing images and depth maps are disclosed. In many embodiments, faces are automatically detected and segmented from images using a depth-assisted alpha matting. The detected faces can then be re-rendered from a more distant viewpoint and composited with the background to create a new image in which apparent perspective distortion is reduced.
Type: Grant
Filed: February 19, 2018
Date of Patent: January 21, 2020
Assignee: FotoNation Limited
Inventors: Samuel Yang, Manohar Srikanth, Dan Lelescu, Kartik Venkataraman
-
Publication number: 20190362515
Abstract: Systems in accordance with embodiments of the invention can perform parallax detection and correction in images captured using array cameras. Due to the different viewpoints of the cameras, parallax results in variations in the position of objects within the captured images of the scene. Methods in accordance with embodiments of the invention provide an accurate account of the pixel disparity due to parallax between the different cameras in the array, so that appropriate scene-dependent geometric shifts can be applied to the pixels of the captured images when performing super-resolution processing. In a number of embodiments, generating depth estimates considers the similarity of pixels in multiple spectral channels. In certain embodiments, generating depth estimates involves generating a confidence map indicating the reliability of depth estimates.
Type: Application
Filed: August 9, 2019
Publication date: November 28, 2019
Applicant: FotoNation Limited
Inventors: Florian Ciurea, Kartik Venkataraman, Gabriel Molina, Dan Lelescu
-
Publication number: 20190347768
Abstract: Systems and methods in accordance with embodiments of the invention are disclosed that use super-resolution (SR) processes to combine information from a plurality of low resolution (LR) images captured by an array camera into a synthesized higher resolution image. One embodiment includes obtaining input images using the plurality of imagers, using a microprocessor to determine an initial estimate of at least a portion of a high resolution image using a plurality of pixels from the input images, and using a microprocessor to determine a high resolution image that when mapped through the forward imaging transformation matches the input images to within at least one predetermined criterion, using the initial estimate of at least a portion of the high resolution image.
Type: Application
Filed: July 24, 2019
Publication date: November 14, 2019
Applicant: FotoNation Limited
Inventors: Dan Lelescu, Gabriel Molina, Kartik Venkataraman
-
Patent number: 10462362
Abstract: Systems and methods in accordance with embodiments of the invention enable feature based high resolution motion estimation from low resolution images captured using an array camera. One embodiment includes performing feature detection with respect to a sequence of low resolution images to identify initial locations for a plurality of detected features in the sequence of low resolution images, where the at least one sequence of low resolution images is part of a set of sequences of low resolution images captured from different perspectives. The method also includes synthesizing high resolution image portions, where the synthesized high resolution image portions contain the identified plurality of detected features from the sequence of low resolution images.
Type: Grant
Filed: November 6, 2017
Date of Patent: October 29, 2019
Assignee: FotoNation Limited
Inventors: Dan Lelescu, Ankit K. Jain
-
Patent number: 10430682
Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
Type: Grant
Filed: July 9, 2018
Date of Patent: October 1, 2019
Assignee: FotoNation Limited
Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
-
Patent number: 10412314
Abstract: Systems and methods for performing photometric normalization in an array camera in accordance with embodiments of this invention are disclosed. Image data of a scene from a reference imaging component and alternate imaging components is received. The image data from each of the alternate imaging components is then translated so that pixel information in the image data of each alternate imaging component corresponds to pixel information in the image data of the reference component. The shifted image data of each alternate imaging component is compared to the image data of the reference imaging component to determine gain and offset parameters for each alternate imaging component. The gain and offset parameters of each alternate imaging component are then applied to the image data of the associated imaging component to generate corrected image data for each of the alternate imaging components.
Type: Grant
Filed: October 9, 2017
Date of Patent: September 10, 2019
Assignee: FotoNation Limited
Inventors: Andrew Kenneth John McMahon, Dan Lelescu, Florian Ciurea
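Determining gain and offset parameters by comparing an alternate component's pixels against the reference component's is, at its simplest, a linear fit ref ≈ gain·alt + offset over corresponding pixels. A minimal least-squares sketch of that step (an illustration of the general technique, not the patented procedure):

```python
def gain_offset(ref, alt):
    # Least-squares fit of ref ~= gain * alt + offset over pixel pairs
    # that have already been registered (translated) to correspond.
    n = len(ref)
    mean_alt = sum(alt) / n
    mean_ref = sum(ref) / n
    sxx = sum((a - mean_alt) ** 2 for a in alt)
    sxy = sum((a - mean_alt) * (r - mean_ref) for a, r in zip(alt, ref))
    gain = sxy / sxx
    offset = mean_ref - gain * mean_alt
    return gain, offset

def normalize(alt, gain, offset):
    # Apply the per-component parameters to produce corrected image data.
    return [gain * a + offset for a in alt]
```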
-
Patent number: 10380752
Abstract: Systems in accordance with embodiments of the invention can perform parallax detection and correction in images captured using array cameras. Due to the different viewpoints of the cameras, parallax results in variations in the position of objects within the captured images of the scene. Methods in accordance with embodiments of the invention provide an accurate account of the pixel disparity due to parallax between the different cameras in the array, so that appropriate scene-dependent geometric shifts can be applied to the pixels of the captured images when performing super-resolution processing. In a number of embodiments, generating depth estimates considers the similarity of pixels in multiple spectral channels. In certain embodiments, generating depth estimates involves generating a confidence map indicating the reliability of depth estimates.
Type: Grant
Filed: December 29, 2017
Date of Patent: August 13, 2019
Assignee: FotoNation Limited
Inventors: Florian Ciurea, Kartik Venkataraman, Gabriel Molina, Dan Lelescu
-
Patent number: 10375302
Abstract: Imager arrays, array camera modules, and array cameras in accordance with embodiments of the invention utilize pixel apertures to control the amount of aliasing present in captured images of a scene. One embodiment includes a plurality of focal planes, control circuitry configured to control the capture of image information by the pixels within the focal planes, and sampling circuitry configured to convert pixel outputs into digital pixel data. In addition, the pixels in the plurality of focal planes include a pixel stack including a microlens and an active area, where light incident on the surface of the microlens is focused onto the active area by the microlens and the active area samples the incident light to capture image information, and the pixel stack defines a pixel area and includes a pixel aperture, where the size of the pixel apertures is smaller than the pixel area.
Type: Grant
Filed: October 16, 2017
Date of Patent: August 6, 2019
Assignee: FotoNation Limited
Inventors: Shree Nayar, Kartik Venkataraman, Bedabrata Pain, Dan Lelescu
-
Publication number: 20190235138
Abstract: Systems and methods in accordance with embodiments of the invention actively align a lens stack array with an array of focal planes to construct an array camera module. In one embodiment, a method for actively aligning a lens stack array with a sensor that has a focal plane array includes: aligning the lens stack array relative to the sensor in an initial position; varying the spatial relationship between the lens stack array and the sensor; capturing images of a known target that has a region of interest using a plurality of active focal planes at different spatial relationships; scoring the images based on the extent to which the region of interest is focused in the images; selecting a spatial relationship between the lens stack array and the sensor based on a comparison of the scores; and forming an array camera subassembly based on the selected spatial relationship.
Type: Application
Filed: April 11, 2019
Publication date: August 1, 2019
Applicant: FotoNation Limited
Inventors: Jacques Duparre, Andrew Kenneth John McMahon, Dan Lelescu
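The "score images by how well the region of interest is focused, then pick the best spatial relationship" loop maps onto a standard contrast-based focus metric. A toy sketch, assuming gradient energy as the sharpness score (one common choice; the patent does not specify the metric):

```python
def focus_score(pixels):
    # Sharpness proxy: sum of squared differences between neighboring
    # pixels. A well-focused region of interest has strong local contrast,
    # so higher scores indicate better focus.
    return sum((b - a) ** 2 for a, b in zip(pixels, pixels[1:]))

def best_position(captures):
    # captures maps each tried lens-to-sensor spatial relationship to the
    # region-of-interest pixels captured there; select the relationship
    # whose capture scores highest.
    return max(captures, key=lambda pos: focus_score(captures[pos]))
```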
-
Patent number: 10366472
Abstract: Systems and methods in accordance with embodiments of the invention are disclosed that use super-resolution (SR) processes to combine information from a plurality of low resolution (LR) images captured by an array camera into a synthesized higher resolution image. One embodiment includes obtaining input images using the plurality of imagers, using a microprocessor to determine an initial estimate of at least a portion of a high resolution image using a plurality of pixels from the input images, and using a microprocessor to determine a high resolution image that when mapped through the forward imaging transformation matches the input images to within at least one predetermined criterion, using the initial estimate of at least a portion of the high resolution image.
Type: Grant
Filed: June 1, 2016
Date of Patent: July 30, 2019
Assignee: FotoNation Limited
Inventors: Dan Lelescu, Gabriel Molina, Kartik Venkataraman
-
Publication number: 20190230348
Abstract: Systems and methods for dynamically calibrating an array camera to accommodate variations in geometry that can occur throughout its operational life are disclosed. The dynamic calibration processes can include acquiring a set of images of a scene and identifying corresponding features within the images. Geometric calibration data can be used to rectify the images and determine residual vectors for the geometric calibration data at locations where corresponding features are observed. The residual vectors can then be used to determine updated geometric calibration data for the camera array. In several embodiments, the residual vectors are used to generate a residual vector calibration data field that updates the geometric calibration data. In many embodiments, the residual vectors are used to select, from amongst a number of different sets of geometric calibration data, the set that is the best fit for the current geometry of the camera array.
Type: Application
Filed: April 1, 2019
Publication date: July 25, 2019
Applicant: FotoNation Limited
Inventors: Florian Ciurea, Dan Lelescu, Priyam Chatterjee
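The best-fit selection step amounts to rectifying with each candidate calibration data set and keeping the one that leaves the smallest feature residuals. A minimal sketch; `observe_residuals` is a hypothetical stand-in for "rectify with this calibration and measure residual vectors at matched features":

```python
def residual_norm(residuals):
    # residuals: (dx, dy) vectors at feature locations after rectification;
    # their total squared magnitude measures how poorly a calibration
    # data set fits the array's current geometry.
    return sum(dx * dx + dy * dy for dx, dy in residuals)

def select_calibration(candidate_sets, observe_residuals):
    # Pick the candidate geometric calibration data set whose rectification
    # yields the smallest residuals at the observed corresponding features.
    return min(candidate_sets, key=lambda cal: residual_norm(observe_residuals(cal)))
```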
-
Publication number: 20190215496
Abstract: Systems and methods for extended color processing on Pelican array cameras in accordance with embodiments of the invention are disclosed. In one embodiment, a method of generating a high resolution image includes obtaining input images, where a first set of images includes information in a first band of visible wavelengths and a second set of images includes information in a second band of visible wavelengths and non-visible wavelengths; determining an initial estimate by combining the first set of images into a first fused image; combining the second set of images into a second fused image; spatially registering the fused images; denoising the fused images using bilateral filters; normalizing the second fused image in the photometric reference space of the first fused image; combining the fused images; and determining a high resolution image that when mapped through a forward imaging transformation matches the input images within at least one predetermined criterion.
Type: Application
Filed: January 11, 2019
Publication date: July 11, 2019
Applicant: FotoNation Limited
Inventors: Robert H. Mullis, Dan Lelescu, Kartik Venkataraman