Patents by Inventor Meir Tzur
Meir Tzur has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230216999
Abstract: An imaging system receives depth data (corresponding to an environment) from a depth sensor and first image data (a depiction of the environment) from an image sensor. The imaging system generates, based on the depth data, first motion vectors corresponding to a change in perspective of the depiction of the environment in the first image data. The imaging system generates, using grid inversion based on the first motion vectors, second motion vectors that indicate respective distances moved by respective pixels of the depiction of the environment in the first image data for the change in perspective. The imaging system generates second image data by modifying the first image data according to the second motion vectors. The second image data includes a second depiction of the environment from a different perspective than the first image data. Some image reprojection applications (e.g., frame interpolation) can be performed without the depth data.
Type: Application
Filed: September 9, 2022
Publication date: July 6, 2023
Inventors: Pia ZOBEL, Yuval SCHWARTZ, Tal ZADIK, Itschak MARTSIANO, Roee HARDOON, Meir TZUR, Ron GAIZMAN, Yehuda PASTERNAK
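The grid-inversion step described in the abstract above can be sketched as follows. This is an illustrative reconstruction, not the claimed implementation: the forward motion field here is a toy horizontal disparity proportional to inverse depth, and the inversion uses a simple nearest-hit scatter.

```python
import numpy as np

def invert_motion_grid(forward_mv, shape):
    """Invert a forward motion field: forward_mv[y, x] says where source
    pixel (y, x) lands in the target. Produce, for each target pixel, the
    offset back to its source pixel (nearest-hit scatter; unfilled target
    pixels stay NaN)."""
    h, w = shape
    inverse_mv = np.full((h, w, 2), np.nan)
    for y in range(h):
        for x in range(w):
            dy, dx = forward_mv[y, x]
            ty, tx = int(round(y + dy)), int(round(x + dx))
            if 0 <= ty < h and 0 <= tx < w:
                inverse_mv[ty, tx] = (-dy, -dx)
    return inverse_mv

def reproject(image, depth, baseline):
    """Shift pixels horizontally by a disparity proportional to inverse
    depth (a toy change of perspective), using grid inversion to gather
    source pixels for each target pixel."""
    h, w = image.shape
    disparity = baseline / depth                 # nearer pixels move more
    forward = np.zeros((h, w, 2))
    forward[..., 1] = disparity                  # horizontal motion only
    inverse = invert_motion_grid(forward, (h, w))
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            if not np.isnan(inverse[y, x, 0]):
                sy = int(round(y + inverse[y, x, 0]))
                sx = int(round(x + inverse[y, x, 1]))
                out[y, x] = image[sy, sx]
    return out
```

The gather-via-inverted-grid pattern avoids the holes and collisions that a naive forward scatter of image values would produce.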
-
Patent number: 11277562
Abstract: Techniques and systems are provided for machine-learning based image stabilization. In some examples, a system obtains a sequence of frames captured by an image capture device during a period of time, and collects motion sensor measurements calculated by a motion sensor associated with the image capture device based on movement of the image capture device during the period of time. The system generates, using a deep learning network and the motion sensor measurements, parameters for counteracting motions in one or more frames in the sequence of frames, the motions resulting from the movement of the image capture device during the period of time. The system then adjusts the one or more frames in the sequence of frames according to the parameters to generate one or more adjusted frames having a reduction in at least some of the motions in the one or more frames.
Type: Grant
Filed: August 17, 2020
Date of Patent: March 15, 2022
Assignee: Qualcomm Incorporated
Inventors: Young Hoon Kang, Hee Jun Park, Tauseef Kazi, Ron Gaizman, Eran Pinhasov, Meir Tzur
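The pipeline in the abstract above (motion sensor measurements in, per-frame correction parameters out) can be sketched as follows. This is a simplified stand-in, not the patented method: the deep learning network is replaced by an exponential smoother that plays the role of the learned predictor of the intended camera path.

```python
import numpy as np

def stabilization_params(gyro_rates, dt, smoothing=0.9):
    """Integrate gyro angular-rate samples into a per-frame camera angle,
    estimate the intended (smooth) path, and return the per-frame
    correction: intended minus measured. A trained network would predict
    this correction; here an exponential smoother stands in for it."""
    angles = np.cumsum(gyro_rates) * dt          # measured camera path
    smooth = np.empty_like(angles)
    smooth[0] = angles[0]
    for i in range(1, len(angles)):
        smooth[i] = smoothing * smooth[i - 1] + (1 - smoothing) * angles[i]
    return smooth - angles                       # per-frame correction angle
```

Applying the returned corrections leaves the smoothed path, so frame-to-frame jitter in the corrected sequence is reduced relative to the raw path.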
-
Publication number: 20200412954
Abstract: Examples are described for combining optical image stabilization and electronic image stabilization for capture and processing of frames for video. An example device configured to perform electronic image stabilization in light of the optical image stabilization performed may include a memory and one or more processors. The one or more processors may be configured to obtain optical image stabilization (OIS) information for OIS performed during capture of a sequence of frames by an image sensor and determine an electronic image stabilization (EIS) filter based on the OIS information. The one or more processors may also be configured to obtain camera position information, and the EIS filter may also be based on the camera position information. The one or more processors may also configure an image signal processor to perform EIS based on the EIS filter.
Type: Application
Filed: April 13, 2020
Publication date: December 31, 2020
Inventors: Ron Gaizman, Eliad Tsairi, Meir Tzur, Maksym Aslianskyi
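The key idea in the abstract above, that EIS should only target the motion OIS has not already removed, can be illustrated with a minimal sketch. The function name, the moving-average filter, and the subtraction model are illustrative assumptions, not the claimed EIS filter design.

```python
import numpy as np

def eis_correction(camera_positions, ois_shifts, window=5):
    """Compute per-frame EIS corrections given camera position samples and
    the shifts the OIS module already applied during capture. EIS targets
    only the residual motion: total motion minus what OIS removed."""
    residual = np.asarray(camera_positions) - np.asarray(ois_shifts)
    kernel = np.ones(window) / window
    smooth = np.convolve(residual, kernel, mode="same")  # low-pass intended path
    return smooth - residual
```

If OIS were perfect (residual zero), the EIS stage would correctly apply no further correction; double-compensation is what ignoring the OIS information would cause.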
-
Publication number: 20200382706
Abstract: Techniques and systems are provided for machine-learning based image stabilization. In some examples, a system obtains a sequence of frames captured by an image capture device during a period of time, and collects motion sensor measurements calculated by a motion sensor associated with the image capture device based on movement of the image capture device during the period of time. The system generates, using a deep learning network and the motion sensor measurements, parameters for counteracting motions in one or more frames in the sequence of frames, the motions resulting from the movement of the image capture device during the period of time. The system then adjusts the one or more frames in the sequence of frames according to the parameters to generate one or more adjusted frames having a reduction in at least some of the motions in the one or more frames.
Type: Application
Filed: August 17, 2020
Publication date: December 3, 2020
Inventors: Young Hoon KANG, Hee Jun PARK, Tauseef KAZI, Ron GAIZMAN, Eran PINHASOV, Meir TZUR
-
Patent number: 10771698
Abstract: Techniques and systems are provided for machine-learning based image stabilization. In some examples, a system obtains a sequence of frames captured by an image capture device during a period of time, and collects motion sensor measurements calculated by a motion sensor associated with the image capture device based on movement of the image capture device during the period of time. The system generates, using a deep learning network and the motion sensor measurements, parameters for counteracting motions in one or more frames in the sequence of frames, the motions resulting from the movement of the image capture device during the period of time. The system then adjusts the one or more frames in the sequence of frames according to the parameters to generate one or more adjusted frames having a reduction in at least some of the motions in the one or more frames.
Type: Grant
Filed: August 31, 2018
Date of Patent: September 8, 2020
Assignee: Qualcomm Incorporated
Inventors: Young Hoon Kang, Hee Jun Park, Tauseef Kazi, Ron Gaizman, Eran Pinhasov, Meir Tzur
-
Publication number: 20200077023
Abstract: Techniques and systems are provided for machine-learning based image stabilization. In some examples, a system obtains a sequence of frames captured by an image capture device during a period of time, and collects motion sensor measurements calculated by a motion sensor associated with the image capture device based on movement of the image capture device during the period of time. The system generates, using a deep learning network and the motion sensor measurements, parameters for counteracting motions in one or more frames in the sequence of frames, the motions resulting from the movement of the image capture device during the period of time. The system then adjusts the one or more frames in the sequence of frames according to the parameters to generate one or more adjusted frames having a reduction in at least some of the motions in the one or more frames.
Type: Application
Filed: August 31, 2018
Publication date: March 5, 2020
Inventors: Young Hoon KANG, Hee Jun PARK, Tauseef KAZI, Ron GAIZMAN, Eran PINHASOV, Meir TZUR
-
Patent number: 10237528
Abstract: Embodiments are directed towards enabling digital cameras to create a 3D view, which can be re-rendered onto any object within a scene, so that it is both in focus and a center of perspective, based on capturing a single set of multiple 2D images of the scene. From a single captured set of 2D images of a scene, a depth map of the scene may be generated and used to calculate principal depths, which are then used to capture an image focused at each of the principal depths. A correspondence is determined between coordinates of a 2D image of the scene and the principal depths. For different coordinates of the 2D image, different 3D views of the scene are created, each focused at the principal depth that corresponds to the given coordinate.
Type: Grant
Filed: March 14, 2013
Date of Patent: March 19, 2019
Assignee: QUALCOMM Incorporated
Inventors: Meir Tzur, Noam Levy
-
Patent number: 9501834
Abstract: A system, method, and computer program product for capturing images for later refocusing. Embodiments estimate a distance map for a scene, determine a number of principal depths, capture a set of images, with each image focused at one of the principal depths, and process captured images to produce an output image. The scene is divided into regions, and the depth map represents region depths corresponding to a particular focus step. Entries having a specific focus step value are placed into a histogram, and depths having the most entries are selected as the principal depths. Embodiments may also identify scene areas having important objects and include different important object depths in the principal depths. Captured images may be selected according to user input, aligned, and then combined using blending functions that favor only scene regions that are focused in particular captured images.
Type: Grant
Filed: January 9, 2012
Date of Patent: November 22, 2016
Assignee: QUALCOMM Technologies, Inc.
Inventor: Meir Tzur
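The histogram-based selection of principal depths described above can be sketched in a few lines. This is an illustrative reconstruction under simple assumptions (each region already carries a focus-step value; no important-object weighting), not the claimed implementation.

```python
import numpy as np

def principal_depths(depth_map, num_depths=3):
    """Histogram the per-region focus-step values and return the most
    populated depths: the focus positions at which the largest portions
    of the scene are sharp."""
    steps, counts = np.unique(np.asarray(depth_map).ravel(), return_counts=True)
    order = np.argsort(counts)[::-1]             # most frequent first
    return sorted(steps[order][:num_depths].tolist())
```

A capture pass would then take one image focused at each returned depth, giving a minimal focal stack for later refocusing.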
-
Patent number: 9215357
Abstract: Embodiments are directed towards performing depth estimation within a digital camera system based on interpolation of inverse focus statistics. After an image is captured, various statistics or focus measure may be calculated using, for example, a high pass filter. Depth is estimated by interpolating the inverse of the statistics for three positions of focus for the image. The inverse of the statistics, St(n), may be 1/St(n), or 1/St²(n), or even 1/St^Z(n), where Z ≥ 1. Several approaches to interpolating the inverse values of the statistics to obtain a depth estimate are disclosed, including a general parabolic minimum approach, using a parabolic minimum within a progressive scheme, or within a continuous AF scheme. The depth estimate may then be used for a variety of applications, including automatic focusing, as well as converting 2D images to 3D images.
Type: Grant
Filed: October 24, 2014
Date of Patent: December 15, 2015
Assignee: QUALCOMM TECHNOLOGIES, INC.
Inventor: Meir Tzur
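The general parabolic-minimum approach named in the abstract can be sketched directly: fit a parabola through the inverse focus statistics at three focus positions and take the position of its vertex as the depth estimate. This is an illustrative sketch of that one approach, not the progressive or continuous-AF variants.

```python
def depth_from_inverse_stats(positions, stats, z=2.0):
    """Fit a parabola through the inverse focus statistics 1/St^z at three
    focus positions and return the position of its minimum, which serves
    as the depth estimate (general parabolic-minimum approach)."""
    x1, x2, x3 = positions
    y1, y2, y3 = (1.0 / s ** z for s in stats)
    # Vertex of the parabola through the three (position, inverse-stat) points
    num = y1 * (x2**2 - x3**2) + y2 * (x3**2 - x1**2) + y3 * (x1**2 - x2**2)
    den = 2.0 * (y1 * (x2 - x3) + y2 * (x3 - x1) + y3 * (x1 - x2))
    return num / den
```

Inverting the statistics first is what makes the parabola fit well: focus measures peak sharply at best focus, while their inverses behave approximately quadratically near the minimum.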
-
Publication number: 20150042841
Abstract: Embodiments are directed towards performing depth estimation within a digital camera system based on interpolation of inverse focus statistics. After an image is captured, various statistics or focus measure may be calculated using, for example, a high pass filter. Depth is estimated by interpolating the inverse of the statistics for three positions of focus for the image. The inverse of the statistics, St(n), may be 1/St(n), or 1/St²(n), or even 1/St^Z(n), where Z ≥ 1. Several approaches to interpolating the inverse values of the statistics to obtain a depth estimate are disclosed, including a general parabolic minimum approach, using a parabolic minimum within a progressive scheme, or within a continuous AF scheme. The depth estimate may then be used for a variety of applications, including automatic focusing, as well as converting 2D images to 3D images.
Type: Application
Filed: October 24, 2014
Publication date: February 12, 2015
Inventor: Meir Tzur
-
Patent number: 8896747
Abstract: Embodiments are directed towards performing depth estimation within a digital camera system based on interpolation of inverse focus statistics. After an image is captured, various statistics or focus measure may be calculated using, for example, a high pass filter. Depth is estimated by interpolating the inverse of the statistics for three positions of focus for the image. The inverse of the statistics, St(n), may be 1/St(n), or 1/St²(n), or even 1/St^Z(n), where Z ≥ 1. Several approaches to interpolating the inverse values of the statistics to obtain a depth estimate are disclosed, including a general parabolic minimum approach, using a parabolic minimum within a progressive scheme, or within a continuous AF scheme. The depth estimate may then be used for a variety of applications, including automatic focusing, as well as converting 2D images to 3D images.
Type: Grant
Filed: November 13, 2012
Date of Patent: November 25, 2014
Assignee: QUALCOMM Technologies, Inc.
Inventor: Meir Tzur
-
Publication number: 20140267602
Abstract: Embodiments are directed towards enabling digital cameras to create a 3D view, which can be re-rendered onto any object within a scene, so that it is both in focus and a center of perspective, based on capturing a single set of multiple 2D images of the scene. From a single captured set of 2D images of a scene, a depth map of the scene may be generated and used to calculate principal depths, which are then used to capture an image focused at each of the principal depths. A correspondence is determined between coordinates of a 2D image of the scene and the principal depths. For different coordinates of the 2D image, different 3D views of the scene are created, each focused at the principal depth that corresponds to the given coordinate.
Type: Application
Filed: March 14, 2013
Publication date: September 18, 2014
Applicant: CSR Technology, Inc.
Inventors: Meir Tzur, Noam Levy
-
Publication number: 20140132791
Abstract: Embodiments are directed towards performing depth estimation within a digital camera system based on interpolation of inverse focus statistics. After an image is captured, various statistics or focus measure may be calculated using, for example, a high pass filter. Depth is estimated by interpolating the inverse of the statistics for three positions of focus for the image. The inverse of the statistics, St(n), may be 1/St(n), or 1/St²(n), or even 1/St^Z(n), where Z ≥ 1. Several approaches to interpolating the inverse values of the statistics to obtain a depth estimate are disclosed, including a general parabolic minimum approach, using a parabolic minimum within a progressive scheme, or within a continuous AF scheme. The depth estimate may then be used for a variety of applications, including automatic focusing, as well as converting 2D images to 3D images.
Type: Application
Filed: November 13, 2012
Publication date: May 15, 2014
Applicant: CSR TECHNOLOGY INC.
Inventor: Meir Tzur
-
Patent number: 8711234
Abstract: A system and method for capturing images is provided. In the system and method, preview images are acquired and global motion and local motion are estimated based on at least a portion of the preview images. If the local motion is less than or equal to the global motion, a final image is captured based at least on an exposure time based on the global motion. If the local motion is greater than the global motion, a first image is captured based on at least a first exposure time and at least a second image is captured based on at least one second exposure time less than the first exposure time. After capturing the first and second images, global motion regions are separated from local motion regions in the first and second images, and the final image is reconstructed at least based on the local motion regions.
Type: Grant
Filed: May 28, 2013
Date of Patent: April 29, 2014
Assignee: CSR Technology Inc.
Inventors: Meir Tzur, Artemy Baxansky
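The branching capture logic in the abstract above can be sketched as a small decision function. The exposure scaling rules and thresholds here are illustrative placeholders, not the claimed method of choosing exposure times.

```python
def capture_plan(global_motion, local_motion, base_exposure=1 / 30):
    """Decide the capture strategy from estimated preview motion: a single
    exposure scaled down for global motion, or a long/short exposure pair
    when local motion dominates (illustrative scaling rules)."""
    if local_motion <= global_motion:
        # Shorten a single exposure in proportion to global motion to limit blur
        exposure = base_exposure / max(1.0, global_motion)
        return [("final", exposure)]
    # Local motion dominates: capture a long exposure (low noise) plus a
    # shorter one (low blur) for later region-wise reconstruction
    long_exp = base_exposure
    short_exp = base_exposure / max(2.0, local_motion)
    return [("first", long_exp), ("second", short_exp)]
```

The downstream reconstruction would then take static regions from the long exposure and locally moving regions from the short one.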
-
Patent number: 8698905
Abstract: Methods for estimating the point spread function (PSF) of a motion-blurred image are disclosed and claimed. In certain embodiments, the estimated PSF may be used to compensate for the blur caused by hand-shake without the use of an accelerometer or gyro. Edge spread functions may be extracted along different directions from straight edges in a blurred image and combined to find the PSF that best matches. In other embodiments, the blur response to edges of other forms, such as corners or circles, may similarly be extracted and combined to find the best matching PSF. The PSF may then be represented in a parametric form, where the parameters used are related to low-order polynomial coefficients of the angular velocity vx(t) and vy(t) as a function of time.
Type: Grant
Filed: March 10, 2010
Date of Patent: April 15, 2014
Assignee: CSR Technology Inc.
Inventors: Artemy Baxansky, Meir Tzur
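The first step of the edge-based PSF estimation above, turning an edge spread function into a line spread function, can be shown in a minimal sketch. This covers only that one step, under the standard assumption that differentiating the profile across a straight blurred edge yields the 1-D projection of the PSF along that edge's direction.

```python
import numpy as np

def line_spread_from_edge(edge_profile):
    """Differentiate an edge spread function sampled across a straight
    blurred edge to obtain the line spread function, i.e. the 1-D
    projection of the PSF perpendicular to that edge."""
    lsf = np.diff(np.asarray(edge_profile, dtype=float))
    total = lsf.sum()
    return lsf / total if total else lsf         # normalize to unit area
```

Combining such projections extracted along several edge directions is what constrains the full 2-D PSF in the method described above.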
-
Patent number: 8644697
Abstract: A system, method, and computer program product are provided for automatically and progressively determining focus depth estimates for an imaging device from defocused images. After a depth-from-defocus (DFD) system generates sometimes-noisy estimates for focus depth and optionally a confidence level that the focus depth estimate is correct, embodiments of the present invention process a sequence of such input DFD measures to iteratively decrease the likelihood of focus depth ambiguity and to increase an overall focus depth estimate confidence level. Automatic focus systems for imaging devices may use the outputs of the embodiments to operate more quickly and accurately, either directly or in combination with other focus depth estimation methods, such as calculated sharpness measures. A depth map of a 3D scene may be estimated for creating a pair of images based on a single image.
Type: Grant
Filed: July 14, 2011
Date of Patent: February 4, 2014
Assignee: CSR Technology Inc.
Inventors: Meir Tzur, Guy Rapaport
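The progressive refinement described above, fusing a sequence of noisy DFD estimates while growing an overall confidence, can be illustrated with a simple confidence-weighted running mean. This is a generic sketch of that idea, not the patented iteration.

```python
def fuse_depth_estimates(estimates, confidences):
    """Combine a sequence of noisy depth-from-defocus estimates into one
    progressively refined estimate via confidence weighting; the running
    confidence grows as consistent measurements accumulate."""
    depth, conf = estimates[0], confidences[0]
    for d, c in zip(estimates[1:], confidences[1:]):
        total = conf + c
        depth = (conf * depth + c * d) / total   # confidence-weighted mean
        conf = total
    return depth, conf
```

An autofocus loop could stop early once the running confidence crosses a threshold, which is how such fusion speeds up focusing in practice.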
-
Patent number: 8593541
Abstract: Methods for estimating illumination parameters under flickering lighting conditions are disclosed. Illumination parameters, such as phase and contrast, of an intensity-varying light source may be estimated by capturing a sequence of video images, either prior to or after a desired still image to be processed. The relative average light intensities of the adjacently-captured images are calculated and used to estimate the illumination parameters applicable to the desired still image. The estimated illumination parameters may be used to calculate the point spread function of a still image for image de-blurring processing. The estimated illumination parameters may also be used to synchronize the exposure timing of a still image to the time when there is the most light, as well as for use in motion estimation during view/video modes.
Type: Grant
Filed: March 1, 2012
Date of Patent: November 26, 2013
Assignee: CSR Technology Inc.
Inventors: Victor Pinto, Artemy Baxansky, Meir Tzur
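Estimating flicker phase and contrast from the average intensities of adjacent frames, as in the abstract above, can be sketched as a least-squares sinusoid fit. The fitting approach and the assumption that the flicker frequency is known from the mains supply (e.g., 100 Hz or 120 Hz) are illustrative, not the claimed estimator.

```python
import numpy as np

def flicker_parameters(frame_means, frame_times, freq=100.0):
    """Least-squares fit of a sinusoid at a known flicker frequency to the
    average intensities of a sequence of video frames, recovering the
    contrast (relative amplitude) and phase of the flickering light.
    Model: y(t) = mean * (1 + contrast * cos(2*pi*freq*t + phase))."""
    t = np.asarray(frame_times, dtype=float)
    y = np.asarray(frame_means, dtype=float)
    w = 2 * np.pi * freq
    A = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    (c, s, mean), *_ = np.linalg.lstsq(A, y, rcond=None)
    contrast = np.hypot(c, s) / mean
    phase = np.arctan2(-s, c)        # from c = amp*cos(phase), s = -amp*sin(phase)
    return contrast, phase
```

With phase and contrast in hand, the light-intensity waveform during the still exposure is known, which is what feeds the PSF calculation and the exposure-timing synchronization mentioned in the abstract.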
-
Patent number: 8582001
Abstract: A device and methods for producing a high dynamic range (HDR) image of a scene are disclosed and claimed. In one embodiment, a method includes setting an exposure period of an image sensor of the digital camera and capturing image data based on the exposure period. The method may further include checking the image data to determine whether the number of saturated pixels exceeds a saturation threshold and checking the image data to determine whether the number of cutoff pixels exceeds a cutoff threshold. The method may further include generating a high dynamic range image based on image data captured by the digital camera, wherein the high dynamic range image is generated based on a minimum number of images to capture a full dynamic range of the scene.
Type: Grant
Filed: April 7, 2010
Date of Patent: November 12, 2013
Assignee: CSR Technology Inc.
Inventors: Meir Tzur, Victor Pinto
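The saturation and cutoff checks above, used to keep the HDR image set minimal, can be sketched as a small bracketing decision over a base capture's histogram. The levels and thresholds are illustrative placeholders.

```python
def hdr_bracket(histogram, sat_level, sat_threshold, cutoff_threshold):
    """Decide which extra exposures a minimal HDR set needs by counting
    saturated and cut-off pixels in a base capture's intensity histogram
    (illustrative levels and thresholds)."""
    saturated = sum(histogram[sat_level:])   # pixels at or above saturation
    cutoff = histogram[0]                    # pixels crushed to black
    exposures = ["base"]
    if saturated > sat_threshold:
        exposures.append("short")            # recover blown highlights
    if cutoff > cutoff_threshold:
        exposures.append("long")             # recover crushed shadows
    return exposures
```

A scene whose base capture clips neither end needs no extra frames at all, which is the "minimum number of images" property the abstract emphasizes.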
-
Patent number: 8554014
Abstract: A camera is provided with a panorama mode of operation that employs internal software and internal acceleration hardware to stitch together two or more captured images to create a single panorama image with a wide format. Captured images are projected from rectilinear coordinates into cylindrical coordinates with the aid of image interpolation acceleration hardware. Matches are quickly determined between each pair of images with a block-based search that employs motion estimation acceleration hardware. Transformations are found, utilizing regression and robust statistics techniques, to align the captured images with each other, and are applied to the images using the interpolation acceleration hardware. A determination is made of an optimal seam along which to stitch images together in the overlap region, by finding a path that cuts through relatively non-noticeable regions so that the images can be stitched together into a single image with a wide panoramic effect.
Type: Grant
Filed: August 27, 2009
Date of Patent: October 8, 2013
Assignee: CSR Technology Inc.
Inventors: Noam Levy, Meir Tzur
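The rectilinear-to-cylindrical projection that opens the pipeline above can be sketched with the standard warp formulas; the focal length in pixels and the principal-point parameters are assumptions of this sketch, not values from the patent.

```python
import numpy as np

def to_cylindrical(x, y, f, cx, cy):
    """Map rectilinear image coordinates (x, y) onto a cylinder of radius
    f (focal length in pixels) about the principal point (cx, cy): theta
    is the angle around the cylinder, h the height on its surface."""
    theta = np.arctan2(x - cx, f)
    h = (y - cy) / np.hypot(x - cx, f)
    return f * theta + cx, f * h + cy
```

After this warp, a purely rotational pan becomes (approximately) a pure horizontal translation between frames, which is why the subsequent block-based matching and seam search operate in cylindrical coordinates.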
-
Publication number: 20130258174
Abstract: A system and method for capturing images is provided. In the system and method, preview images are acquired and global motion and local motion are estimated based on at least a portion of the preview images. If the local motion is less than or equal to the global motion, a final image is captured based at least on an exposure time based on the global motion. If the local motion is greater than the global motion, a first image is captured based on at least a first exposure time and at least a second image is captured based on at least one second exposure time less than the first exposure time. After capturing the first and second images, global motion regions are separated from local motion regions in the first and second images, and the final image is reconstructed at least based on the local motion regions.
Type: Application
Filed: May 28, 2013
Publication date: October 3, 2013
Inventors: Meir Tzur, Artemy Baxansky