Patents by Inventor Semyon Nisenzon
Semyon Nisenzon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230421742
Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
Type: Application
Filed: June 22, 2023
Publication date: December 28, 2023
Applicant: Adeia Imaging LLC
Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
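The rendering pipeline this abstract describes (locate the encoded image, decode it, locate the metadata, then post-process pixels by depth) can be sketched minimally in Python. The file layout, field names, and the depth-weighted post-process below are illustrative assumptions, not the patented format or algorithm:

```python
# Hypothetical sketch of the claimed rendering flow. A "light field
# image file" is modeled as a dict; the real format is a binary
# container whose layout the patent defines.

def decode(encoded):
    # Stand-in decoder: a real file would carry e.g. a JPEG payload.
    return list(encoded)

def render_light_field(lf_file, focal_depth):
    image = decode(lf_file["encoded_image"])      # locate + decode the image
    depth_map = lf_file["metadata"]["depth_map"]  # locate the metadata
    # Post-process: modify each pixel based on its depth. Here we
    # attenuate pixels far from an assumed focal depth (a toy effect).
    rendered = []
    for px, d in zip(image, depth_map):
        weight = 1.0 / (1.0 + abs(d - focal_depth))
        rendered.append(px * weight)
    return rendered
```

Any depth-aware effect (synthetic refocus, bokeh, perspective shift) could replace the attenuation step; the structural point is that the depth map travels with the image as metadata.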
-
Patent number: 11729365
Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
Type: Grant
Filed: April 19, 2021
Date of Patent: August 15, 2023
Assignee: Adeia Imaging LLC
Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
-
Patent number: 11635767
Abstract: In one example, a method may include capturing two-dimensional (2D) images of scenes using a set of multi-resolution cameras disposed on at least one side of an autonomous vehicle. Further, a low-resolution depth map with relatively small depths may be generated for each scene using the captured 2D images. Furthermore, a high-resolution depth map may be generated for each scene for a wide depth range by iteratively refining the low-resolution depth map for each scene. Also, a 3D video may be generated based on the high-resolution depth maps and the captured 2D images of the central camera. Further, a distance, a velocity, and/or an acceleration of one or more objects relative to the autonomous vehicle is computed by analyzing one or more frames of the 3D video. Then, the autonomous vehicle may be controlled based on the computed distance, velocity, and/or acceleration of the one or more objects.
Type: Grant
Filed: February 12, 2020
Date of Patent: April 25, 2023
Inventor: Semyon Nisenzon
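The final analysis step the abstract names (computing an object's distance, velocity, and acceleration from frames of the 3D video) reduces to finite differences over per-frame distances. The sketch below assumes the object has already been tracked across frames and the frame interval is known; both are assumptions, and the patent's actual frame analysis may differ:

```python
def motion_from_frames(distances, dt):
    """Finite-difference velocity and acceleration of one object from
    its per-frame distance to the vehicle (distance in meters, dt the
    frame interval in seconds). Negative velocity means approaching."""
    velocity = [(distances[i + 1] - distances[i]) / dt
                for i in range(len(distances) - 1)]
    accel = [(velocity[i + 1] - velocity[i]) / dt
             for i in range(len(velocity) - 1)]
    return velocity, accel
```

A control layer would then brake or steer based on these values, e.g. when a negative velocity and short distance indicate an imminent collision.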
-
Publication number: 20210312207
Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
Type: Application
Filed: April 19, 2021
Publication date: October 7, 2021
Applicant: FotoNation Limited
Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
-
Patent number: 10984276
Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
Type: Grant
Filed: September 27, 2019
Date of Patent: April 20, 2021
Assignee: FotoNation Limited
Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
-
Publication number: 20200257306
Abstract: In one example, a method may include capturing two-dimensional (2D) images of scenes using a set of multi-resolution cameras disposed on at least one side of an autonomous vehicle. Further, a low-resolution depth map with relatively small depths may be generated for each scene using the captured 2D images. Furthermore, a high-resolution depth map may be generated for each scene for a wide depth range by iteratively refining the low-resolution depth map for each scene. Also, a 3D video may be generated based on the high-resolution depth maps and the captured 2D images of the central camera. Further, a distance, a velocity, and/or an acceleration of one or more objects relative to the autonomous vehicle is computed by analyzing one or more frames of the 3D video. Then, the autonomous vehicle may be controlled based on the computed distance, velocity, and/or acceleration of the one or more objects.
Type: Application
Filed: February 12, 2020
Publication date: August 13, 2020
Inventor: Semyon Nisenzon
-
Patent number: 10674138
Abstract: Systems with an array camera augmented with a conventional camera in accordance with embodiments of the invention are disclosed. In some embodiments, the array camera is used to capture a first set of image data of a scene and a conventional camera is used to capture a second set of image data for the scene. An object of interest is identified in the first set of image data. A first depth measurement for the object of interest is determined and compared to a predetermined threshold. If the first depth measurement is above the threshold, a second set of image data captured using the conventional camera is obtained. The object of interest is identified in the second set of image data and a second depth measurement for the object of interest is determined using at least a portion of the first set of image data and at least a portion of the second set of image data.
Type: Grant
Filed: November 2, 2018
Date of Patent: June 2, 2020
Assignee: FotoNation Limited
Inventors: Kartik Venkataraman, Paul Gallagher, Ankit K. Jain, Semyon Nisenzon, Dan Lelescu, Florian Ciurea, Gabriel Molina
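The control flow the abstract describes is a threshold-gated fallback: trust the array camera's depth for near objects, and beyond the threshold re-measure using image data from both cameras. A minimal sketch, in which the function names and the callable interface for the fused measurement are illustrative assumptions:

```python
def measure_depth(object_id, array_depth_m, threshold_m, fused_measure):
    # Near objects: the array camera's short baselines already give an
    # accurate depth, so return the first measurement directly.
    # Far objects (depth above the threshold): fall back to a second
    # measurement that combines array and conventional-camera data,
    # exploiting the wider baseline between the two cameras.
    if array_depth_m > threshold_m:
        return fused_measure(object_id)
    return array_depth_m
```

The key design point is that the expensive cross-camera measurement runs only when the cheap one is known to be unreliable.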
-
Publication number: 20200026948
Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
Type: Application
Filed: September 27, 2019
Publication date: January 23, 2020
Applicant: FotoNation Limited
Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
-
Patent number: 10455218
Abstract: Systems and methods for stereo imaging with camera arrays in accordance with embodiments of the invention are disclosed. In one embodiment, a method of generating depth information for an object using two or more array cameras that each include a plurality of imagers includes obtaining a first set of image data captured from a first set of viewpoints, identifying an object in the first set of image data, determining a first depth measurement, determining whether the first depth measurement is above a threshold, and when the depth is above the threshold: obtaining a second set of image data of the same scene from a second set of viewpoints located known distances from one viewpoint in the first set of viewpoints, identifying the object in the second set of image data, and determining a second depth measurement using the first set of image data and the second set of image data.
Type: Grant
Filed: October 23, 2017
Date of Patent: October 22, 2019
Assignee: FotoNation Limited
Inventors: Kartik Venkataraman, Paul Gallagher, Ankit Jain, Semyon Nisenzon
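The reason the second, wider-baseline measurement helps follows from the classic stereo relation Z = f·B/d: depth error grows with depth for a fixed baseline B, so far objects (tiny disparities d) need a larger B to resolve accurately. A sketch of that relation, with units chosen for illustration:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Classic pinhole stereo: depth Z = f * B / d, with focal length f
    # in pixels, baseline B in meters, and disparity d in pixels.
    # The known distance between the two array cameras supplies the
    # wide baseline used for the second measurement.
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or bad match")
    return focal_px * baseline_m / disparity_px
```

With f = 1000 px, a 10 cm baseline resolves a 10 px disparity to 10 m; the same object seen across a 1 m inter-camera baseline yields a 100 px disparity, which is far less sensitive to a one-pixel matching error.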
-
Patent number: 10430682
Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
Type: Grant
Filed: July 9, 2018
Date of Patent: October 1, 2019
Assignee: FotoNation Limited
Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
-
Patent number: 10390005
Abstract: Systems and methods for the synthesis of light field images from virtual viewpoints in accordance with embodiments of the invention are disclosed. In one embodiment of the invention, a system includes a processor and a memory configured to store captured light field image data and an image manipulation application, wherein the captured light field image data includes image data, pixel position data, and a depth map, and wherein the image manipulation application configures the processor to obtain captured light field image data, determine a virtual viewpoint for the captured light field image data, where the virtual viewpoint includes a virtual location and virtual depth information, compute a virtual depth map based on the captured light field image data and the virtual viewpoint, and generate an image from the perspective of the virtual viewpoint based on the captured light field image data and the virtual depth map.
Type: Grant
Filed: October 6, 2015
Date of Patent: August 20, 2019
Assignee: FotoNation Limited
Inventors: Semyon Nisenzon, Ankit K. Jain
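The core of view synthesis from a virtual viewpoint is warping each pixel by a disparity that depends on the viewpoint offset and the pixel's depth. The toy forward-warp below is a stand-in for the patented synthesis, not a reconstruction of it; images are flat row-major lists, and occlusion handling and hole filling are omitted:

```python
def synthesize_view(image, depth_map, width, shift_m, focal_px):
    # Forward-warp: a pixel at depth z shifts horizontally by
    # dx = f * shift / z when the viewpoint moves sideways by shift_m.
    # Near pixels (small z) move more than far ones, which is what
    # produces the parallax of the virtual viewpoint.
    out = [None] * len(image)  # None marks disocclusion holes
    for i, (px, z) in enumerate(zip(image, depth_map)):
        dx = int(round(focal_px * shift_m / z))
        x = i % width + dx
        if 0 <= x < width:
            out[i - i % width + x] = px
    return out
```

A production renderer would warp the virtual depth map first, resolve z-order conflicts, and inpaint the `None` holes from the other captured viewpoints.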
-
Patent number: 10326981
Abstract: Techniques for generating 3D images using a multi-resolution camera set are described. In one example embodiment, the method includes disposing a set of multi-resolution cameras including a central camera, having a first resolution, and one or more multiple camera groups, having one or more resolutions that are different from the first resolution, that are positioned substantially surrounding the central camera. Images are then captured using the multi-resolution camera set. A low-resolution depth map is then generated by down scaling the captured higher resolution image to a lower resolution image. Captured lower resolution images are then up-scaled. A high-resolution depth map is then generated using the captured image of the central camera, the up-scaled captured images of the one or more multiple camera groups, and the generated low-resolution depth map. The 3D image of the captured image is then generated using the generated high-resolution depth map and the captured images.
Type: Grant
Filed: May 15, 2015
Date of Patent: June 18, 2019
Inventor: Semyon Nisenzon
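The coarse-to-fine structure here (estimate depth at low resolution, then use it to seed a high-resolution estimate) can be sketched with a single refinement step. The nearest-neighbour upscaling and the blend rule are illustrative assumptions; the patent's actual refinement uses the multi-camera images rather than a fixed blend:

```python
def upsample(depth, factor):
    # Nearest-neighbour upscale of a 1-D depth profile (a 2-D map
    # would repeat in both axes; 1-D keeps the sketch short).
    return [d for d in depth for _ in range(factor)]

def refine(coarse_depth, fine_observation, blend=0.5):
    # One coarse-to-fine iteration: upscale the low-resolution depth
    # estimate, then pull it toward a higher-resolution observation.
    up = upsample(coarse_depth, 2)
    return [blend * u + (1 - blend) * o
            for u, o in zip(up, fine_observation)]
```

Iterating this step doubles the depth-map resolution each pass while the coarse estimate keeps the per-level correspondence search cheap, which is the practical payoff of the pyramid.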
-
Patent number: 10275676
Abstract: Systems and methods for storing images synthesized from light field image data and metadata describing the images in electronic files in accordance with embodiments of the invention are disclosed. One embodiment includes a processor and memory containing an encoding application and light field image data, where the light field image data comprises a plurality of low resolution images of a scene captured from different viewpoints. In addition, the encoding application configures the processor to synthesize a higher resolution image of the scene from a reference viewpoint using the low resolution images, where synthesizing the higher resolution image involves creating a depth map that specifies depths from the reference viewpoint for pixels in the higher resolution image; encode the higher resolution image; and create a light field image file including the encoded image, the low resolution images, and metadata including the depth map.
Type: Grant
Filed: January 8, 2018
Date of Patent: April 30, 2019
Assignee: FotoNation Limited
Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
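The container this abstract describes bundles one encoded high-resolution image with the low-resolution source images and a depth map as metadata. A JSON/base64 serialization is an illustrative stand-in for whatever binary layout the patent actually claims, but it shows the three-part structure:

```python
import json
import base64

def encode_light_field_file(encoded_image_bytes, low_res_images, depth_map):
    # Sketch of the claimed file: the encoded synthesized image, the
    # low-resolution images it was built from, and metadata carrying
    # the depth map. Field names are assumptions, not the real format.
    return json.dumps({
        "encoded_image": base64.b64encode(encoded_image_bytes).decode("ascii"),
        "low_res_images": low_res_images,
        "metadata": {"depth_map": depth_map},
    })
```

Keeping the low-resolution originals alongside the synthesized image is what lets a renderer later re-synthesize views or correct pixels the depth-based post-processing disturbs.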
-
Publication number: 20190089947
Abstract: Systems with an array camera augmented with a conventional camera in accordance with embodiments of the invention are disclosed. In some embodiments, the array camera is used to capture a first set of image data of a scene and a conventional camera is used to capture a second set of image data for the scene. An object of interest is identified in the first set of image data. A first depth measurement for the object of interest is determined and compared to a predetermined threshold. If the first depth measurement is above the threshold, a second set of image data captured using the conventional camera is obtained. The object of interest is identified in the second set of image data and a second depth measurement for the object of interest is determined using at least a portion of the first set of image data and at least a portion of the second set of image data.
Type: Application
Filed: November 2, 2018
Publication date: March 21, 2019
Applicant: FotoNation Limited
Inventors: Kartik Venkataraman, Paul Gallagher, Ankit K. Jain, Semyon Nisenzon, Dan Lelescu, Florian Ciurea, Gabriel Molina
-
Patent number: 10235590
Abstract: Systems and methods for storing images synthesized from light field image data and metadata describing the images in electronic files in accordance with embodiments of the invention are disclosed. One embodiment includes a processor and memory containing an encoding application and light field image data, where the light field image data comprises a plurality of low resolution images of a scene captured from different viewpoints. In addition, the encoding application configures the processor to synthesize a higher resolution image of the scene from a reference viewpoint using the low resolution images, where synthesizing the higher resolution image involves creating a depth map that specifies depths from the reference viewpoint for pixels in the higher resolution image; encode the higher resolution image; and create a light field image file including the encoded image, the low resolution images, and metadata including the depth map.
Type: Grant
Filed: January 8, 2018
Date of Patent: March 19, 2019
Assignee: FotoNation Limited
Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
-
Publication number: 20180330182
Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
Type: Application
Filed: July 9, 2018
Publication date: November 15, 2018
Applicant: FotoNation Cayman Limited
Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
-
Patent number: 10122993
Abstract: Systems with an array camera augmented with a conventional camera in accordance with embodiments of the invention are disclosed. In some embodiments, the array camera is used to capture a first set of image data of a scene and a conventional camera is used to capture a second set of image data for the scene. An object of interest is identified in the first set of image data. A first depth measurement for the object of interest is determined and compared to a predetermined threshold. If the first depth measurement is above the threshold, a second set of image data captured using the conventional camera is obtained. The object of interest is identified in the second set of image data and a second depth measurement for the object of interest is determined using at least a portion of the first set of image data and at least a portion of the second set of image data.
Type: Grant
Filed: May 28, 2015
Date of Patent: November 6, 2018
Assignee: FotoNation Limited
Inventors: Kartik Venkataraman, Paul Gallagher, Ankit K. Jain, Semyon Nisenzon, Dan Lelescu, Florian Ciurea, Gabriel Molina
-
Publication number: 20180197035
Abstract: Systems and methods for storing images synthesized from light field image data and metadata describing the images in electronic files in accordance with embodiments of the invention are disclosed. One embodiment includes a processor and memory containing an encoding application and light field image data, where the light field image data comprises a plurality of low resolution images of a scene captured from different viewpoints. In addition, the encoding application configures the processor to synthesize a higher resolution image of the scene from a reference viewpoint using the low resolution images, where synthesizing the higher resolution image involves creating a depth map that specifies depths from the reference viewpoint for pixels in the higher resolution image; encode the higher resolution image; and create a light field image file including the encoded image, the low resolution images, and metadata including the depth map.
Type: Application
Filed: January 8, 2018
Publication date: July 12, 2018
Applicant: FotoNation Cayman Limited
Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
-
Patent number: 10019816
Abstract: Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
Type: Grant
Filed: December 30, 2016
Date of Patent: July 10, 2018
Assignee: FotoNation Cayman Limited
Inventors: Kartik Venkataraman, Semyon Nisenzon, Dan Lelescu
-
Patent number: 9900584
Abstract: Techniques for depth map generation using cluster hierarchy and multiple multi-resolution camera clusters are described. In one example embodiment, the method includes capturing images using multiple multi-resolution camera clusters. Multiple low-resolution depth maps are then generated by down scaling the captured high-resolution image and mid-resolution images to lower resolution images. A low-resolution central camera depth map is generated using the refined multiple low-resolution depth maps. Captured lower resolution images are then up-scaled to mid-resolution images. Mid-resolution depth maps are then generated for each cluster using multiple view points and the up-scaled mid-resolution images. A high-resolution depth map is then generated using the refined initial mid-resolution depth map, the low-resolution central camera depth map, and the up-scaled central cluster images.
Type: Grant
Filed: April 27, 2016
Date of Patent: February 20, 2018
Inventor: Semyon Nisenzon