Patents by Inventor Kalin Atanassov
Kalin Atanassov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11187804
Abstract: Aspects of the present disclosure relate to systems and methods for determining one or more depths. An example system includes a time-of-flight receiver configured to sense pulses from reflections of light from a structured light transmitter. An example method includes sensing, by a time-of-flight receiver, pulses from reflections of light from a structured light transmitter.
Type: Grant
Filed: May 30, 2018
Date of Patent: November 30, 2021
Assignee: QUALCOMM Incorporated
Inventors: Albrecht Johannes Lindner, Htet Naing, Kalin Atanassov
-
Patent number: 10991112
Abstract: Aspects relate to processing captured images from structured light systems. An example device may include one or more processors and a memory. The memory may include instructions that, when executed by the one or more processors, cause the device to receive a captured image of a scene from a structured light receiver, analyze one or more first portions of the captured image at a first scale, and analyze one or more second portions of the captured image at a second scale finer than the first scale. The analysis of the one or more second portions may be based on the analysis of the one or more first portions. The instructions further may cause the device to determine for each of the one or more second portions a codeword from a codeword distribution and determine one or more depths in the scene based on the one or more determined codewords.
Type: Grant
Filed: July 19, 2018
Date of Patent: April 27, 2021
Assignee: QUALCOMM Incorporated
Inventors: Hasib Siddiqui, James Nash, Kalin Atanassov
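The coarse-to-fine analysis this abstract describes can be illustrated with a toy sketch: cheap matching at a coarse scale prunes regions that cannot contain any codeword, and only the survivors are decoded at the fine scale. This is not the patented implementation; the function names, the scoring callables, and the threshold are all assumptions for illustration.

```python
def coarse_to_fine(patches, codewords, coarse_score, fine_score, threshold=0.5):
    """Two-scale codeword decoding: keep only patches whose best coarse
    score clears a threshold, then decode those at the finer scale."""
    results = {}
    for i, patch in enumerate(patches):
        # First pass: cheap comparison at the coarse scale.
        best_coarse = max(coarse_score(patch, c) for c in codewords)
        if best_coarse < threshold:
            continue  # this region cannot match any codeword; skip fine analysis
        # Second pass: precise comparison at the fine scale, survivors only.
        results[i] = max(codewords, key=lambda c: fine_score(patch, c))
    return results
```

The payoff is that the expensive `fine_score` comparison runs only on regions the coarse pass could not rule out.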
-
Patent number: 10812731
Abstract: Aspects of the present disclosure relate to active depth sensing. An example device may include a memory and a processor coupled to the memory. The processor may be configured to determine an amount of ambient light for a scene to be captured by a structured light receiver and adjust an exposure time for frame capture of a sensor of the structured light receiver based on the determined amount of ambient light. The exposure time of the sensor of the structured light receiver is inversely related to the determined amount of ambient light. The processor further may be configured to receive a plurality of captured frames from the structured light receiver based on the adjusted exposure time and generate an aggregated frame, including aggregating values across the plurality of captured frames.
Type: Grant
Filed: August 22, 2018
Date of Patent: October 20, 2020
Assignee: Qualcomm Incorporated
Inventors: Htet Naing, Kalin Atanassov, Stephen Michael Verrall
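A minimal sketch of the two ideas in this abstract, an exposure time inversely related to ambient light and an aggregated frame built from several captures, assuming a simple reciprocal scaling with clamping and aggregation by summation (the constant `k`, the clamp limits, and summation are illustrative choices, not the patented method):

```python
def exposure_time(ambient_light, k=1000.0, t_min=1.0, t_max=100.0):
    """Exposure time (ms) inversely related to the ambient light level,
    clamped to the sensor's supported range."""
    return max(t_min, min(t_max, k / ambient_light))

def aggregate(frames):
    """Aggregate values across captured frames by per-pixel summation."""
    return [sum(px) for px in zip(*frames)]
```

Brighter scenes thus get shorter exposures (less ambient contamination per frame), and summing many short exposures recovers signal strength in the aggregated frame.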
-
Patent number: 10771768
Abstract: The disclosed methods and systems provide improved depth sensing for an imaging device. In some aspects, a plurality of transmitters including a first transmitter are configured to generate a structured light pattern with a first depth of field at a first resolution and a second structured light pattern with a second depth of field at the first resolution, wherein the second depth of field is wider than the first depth of field. A receiver, such as an imaging sensor, is configured to focus within the first depth of field to receive the first structured light pattern and capture a first image representing the first structured light pattern, and to focus within the second depth of field to receive the second structured light pattern and capture a second image representing the second structured light pattern. An electronic hardware processor is configured to generate a depth map of the scene based on the first image and the second image.
Type: Grant
Filed: December 15, 2016
Date of Patent: September 8, 2020
Assignee: QUALCOMM Incorporated
Inventors: Kalin Atanassov, Sergiu Goma, Stephen Michael Verrall
-
Patent number: 10750135
Abstract: A device (e.g., an image sensor, camera, etc.) may identify a camera lens and color filter array (CFA) sensor used to capture an image, and may determine filter parameters (e.g., a convolutional operator) based on the identified camera lens and CFA sensor. For example, a set of kernels (e.g., including a set of horizontal filters and a set of vertical filters) may be determined based on properties of a given lens and/or q-channel CFA sensor. Each kernel or filter may correspond to a row of a convolutional operator (e.g., of a restoration bit matrix) used by an image signal processor (ISP) of the device for non-linear filtering of the captured image. The corresponding outputs from the horizontal and vertical filters (e.g., two outputs of the horizontal and vertical filters corresponding to an input channel associated with the CFA sensor) may then be combined using a non-linear classification operation.
Type: Grant
Filed: October 19, 2018
Date of Patent: August 18, 2020
Assignee: Qualcomm Incorporated
Inventors: Hasib Siddiqui, Kalin Atanassov, Magdi Mohamed
-
Patent number: 10732285
Abstract: Aspects of the present disclosure relate to systems and methods for structured light (SL) depth systems. A depth finding system includes one or more processors and a memory coupled to the one or more processors, the memory including instructions that, when executed by the one or more processors, cause the system to capture a plurality of frames based on transmitted pulses of light, where each of the frames is captured by scanning a sensor array after a respective one of the pulses has been transmitted, and generate an image depicting reflections of the transmitted light by combining the plurality of frames, where each of the frames provides a different portion of the image.
Type: Grant
Filed: September 26, 2018
Date of Patent: August 4, 2020
Assignee: Qualcomm Incorporated
Inventors: Kalin Atanassov, Sergiu Goma, James Nash
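The combining step in this abstract, one frame per pulse with each frame contributing a different portion of the final image, can be sketched as follows. This toy version assumes each frame contributes a contiguous band of rows (a plausible reading of "scanning a sensor array after a respective pulse", not a detail stated in the abstract):

```python
def combine_frames(frames, bands):
    """Assemble one image from per-pulse frames, where frame i
    contributes the rows bands[i] = (start, stop)."""
    height = max(stop for _, stop in bands)
    width = len(frames[0][0])
    image = [[0] * width for _ in range(height)]
    for frame, (start, stop) in zip(frames, bands):
        for r in range(start, stop):
            image[r] = frame[r]  # take this frame's rows for its band
    return image
```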
-
Publication number: 20200128216
Abstract: A device (e.g., an image sensor, camera, etc.) may identify a camera lens and color filter array (CFA) sensor used to capture an image, and may determine filter parameters (e.g., a convolutional operator) based on the identified camera lens and CFA sensor. For example, a set of kernels (e.g., including a set of horizontal filters and a set of vertical filters) may be determined based on properties of a given lens and/or q-channel CFA sensor. Each kernel or filter may correspond to a row of a convolutional operator (e.g., of a restoration bit matrix) used by an image signal processor (ISP) of the device for non-linear filtering of the captured image. The corresponding outputs from the horizontal and vertical filters (e.g., two outputs of the horizontal and vertical filters corresponding to an input channel associated with the CFA sensor) may then be combined using a non-linear classification operation.
Type: Application
Filed: October 19, 2018
Publication date: April 23, 2020
Inventors: Hasib Siddiqui, Kalin Atanassov, Magdi Mohamed
-
Publication number: 20200096640
Abstract: Aspects of the present disclosure relate to systems and methods for structured light (SL) depth systems. A depth finding system includes one or more processors and a memory coupled to the one or more processors, the memory including instructions that, when executed by the one or more processors, cause the system to capture a plurality of frames based on transmitted pulses of light, where each of the frames is captured by scanning a sensor array after a respective one of the pulses has been transmitted, and generate an image depicting reflections of the transmitted light by combining the plurality of frames, where each of the frames provides a different portion of the image.
Type: Application
Filed: September 26, 2018
Publication date: March 26, 2020
Inventors: Kalin Atanassov, Sergiu Goma, James Nash
-
Publication number: 20200077073
Abstract: A stereoscopic imaging device is configured to capture multiple corresponding images of objects from a first camera and a second camera. The stereoscopic imaging device can determine multiple sets of keypoint matches based on the multiple corresponding images of objects, and can accumulate the keypoints. In some examples, the stereoscopic imaging device can determine a vertical disparity between the first camera and the second camera based on the multiple sets of keypoint matches. In some examples, the stereoscopic imaging device can determine yaw errors between the first camera and the second camera based on the sets of keypoint matches, and can determine a yaw disparity between the first camera and the second camera based on the determined yaw errors. The stereoscopic imaging device can generate calibration data to calibrate one or more of the first camera and the second camera based on the determined vertical disparity and/or yaw disparity.
Type: Application
Filed: August 28, 2018
Publication date: March 5, 2020
Inventors: James Nash, Kalin Atanassov, Narayana Karthik Ravirala
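The vertical-disparity estimate in this abstract can be sketched from accumulated keypoint matches: in a well-rectified stereo pair, matched keypoints should sit on the same image row, so a robust statistic of the per-match row offsets estimates the residual miscalibration. Using the median as that statistic is an assumption; the publication does not specify the estimator.

```python
import statistics

def vertical_disparity(matches):
    """Estimate the vertical disparity between two cameras as the median
    row offset over accumulated keypoint matches. Each match is
    ((x_left, y_left), (x_right, y_right))."""
    return statistics.median(yl - yr for (xl, yl), (xr, yr) in matches)
```

A nonzero result can feed the calibration data the abstract mentions, e.g. as a row shift applied to one camera's images.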
-
Publication number: 20200068111
Abstract: Aspects of the present disclosure relate to active depth sensing. An example device may include a memory and a processor coupled to the memory. The processor may be configured to determine an amount of ambient light for a scene to be captured by a structured light receiver and adjust an exposure time for frame capture of a sensor of the structured light receiver based on the determined amount of ambient light. The exposure time of the sensor of the structured light receiver is inversely related to the determined amount of ambient light. The processor further may be configured to receive a plurality of captured frames from the structured light receiver based on the adjusted exposure time and generate an aggregated frame, including aggregating values across the plurality of captured frames.
Type: Application
Filed: August 22, 2018
Publication date: February 27, 2020
Inventors: Htet Naing, Kalin Atanassov, Stephen Michael Verrall
-
Publication number: 20190369247
Abstract: Aspects of the present disclosure relate to systems and methods for determining one or more depths. An example system includes a time-of-flight receiver configured to sense pulses from reflections of light from a structured light transmitter. An example method includes sensing, by a time-of-flight receiver, pulses from reflections of light from a structured light transmitter.
Type: Application
Filed: May 30, 2018
Publication date: December 5, 2019
Inventors: Albrecht Johannes Lindner, Htet Naing, Kalin Atanassov
-
Publication number: 20190340776
Abstract: Aspects of the present disclosure relate to systems and methods for structured light (SL) depth systems. An example method for determining a depth map post-processing filter may include receiving an image including a scene superimposed on a codeword pattern, segmenting the image into a plurality of tiles, estimating a codeword for each tile of the plurality of tiles, estimating a mean scene value for each tile based at least in part on the respective estimated codeword, and determining the depth map post-processing filter based at least in part on the estimated codewords and the mean scene values.
Type: Application
Filed: August 21, 2018
Publication date: November 7, 2019
Inventors: James Nash, Hasib Siddiqui, Kalin Atanassov, Justin Cheng
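The tiling step this abstract starts from can be illustrated in isolation: split the image into fixed-size blocks and compute a per-tile mean. The tile size, the plain arithmetic mean, and row-major ordering are illustrative assumptions; the publication's mean scene value additionally depends on the estimated codeword, which this sketch omits.

```python
def tile_means(image, tile):
    """Segment an image (list of rows) into tile x tile blocks and
    return the mean value of each block, in row-major order."""
    h, w = len(image), len(image[0])
    means = []
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            block = [image[i][j]
                     for i in range(r, min(r + tile, h))
                     for j in range(c, min(c + tile, w))]
            means.append(sum(block) / len(block))
    return means
```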
-
Patent number: 10445861
Abstract: Systems and methods for refining a depth map of a scene based upon a captured image of the scene. A captured depth map of the scene may contain outage areas or other areas of low confidence. The depth map may be aligned with a color image of the scene, and the depth values of the depth map may be adjusted based upon corresponding color values of the color image. An amount of refinement for each depth value of the aligned depth map is based upon the confidence value of the depth value and a smoothing function based upon a corresponding location of the depth value on the color image.
Type: Grant
Filed: February 14, 2017
Date of Patent: October 15, 2019
Assignee: QUALCOMM Incorporated
Inventors: Hasib Siddiqui, Kalin Atanassov, James Nash
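One way to read "an amount of refinement ... based upon the confidence value and a smoothing function" is a per-pixel blend: high-confidence depth values are kept, while low-confidence values move toward a color-guided smoothed estimate. This 1-D toy sketch assumes a linear blend and takes the smoothed values as given; neither choice comes from the patent text.

```python
def refine_depth(depth, smoothed, confidence):
    """Blend raw and smoothed depth per pixel: confidence 1.0 keeps the
    raw value, confidence 0.0 fully adopts the smoothed estimate."""
    return [c * d + (1.0 - c) * s
            for d, s, c in zip(depth, smoothed, confidence)]
```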
-
Publication number: 20190228535
Abstract: Aspects of the present disclosure relate to processing captured images from structured light systems. An example device may include one or more processors and a memory. The memory may include instructions that, when executed by the one or more processors, cause the device to receive a captured image of a scene from a structured light receiver, analyze one or more first portions of the captured image at a first scale, and analyze one or more second portions of the captured image at a second scale finer than the first scale. The analysis of the one or more second portions may be based on the analysis of the one or more first portions. The instructions further may cause the device to determine for each of the one or more second portions a codeword from a codeword distribution and determine one or more depths in the scene based on the one or more determined codewords.
Type: Application
Filed: July 19, 2018
Publication date: July 25, 2019
Inventors: Hasib Siddiqui, James Nash, Kalin Atanassov
-
Patent number: 10337923
Abstract: Systems and methods are disclosed for processing spectral imaging (SI) data. A training operation estimates reconstruction matrices based on a spectral mosaic of an SI sensor and generates directionally interpolated maximum a-priori (MAP) estimations of image data based on the estimated reconstruction matrices. The training operation may determine filter coefficients for each of a number of cross-band interpolation filters based at least in part on the MAP estimations, and may determine edge classification factors based at least in part on the determined filter coefficients. The training operation may configure a cross-band interpolation circuit based at least in part on the determined filter coefficients and the determined edge classification factors. The configured cross-band interpolation circuit captures mosaic data using the SI sensor, and recovers full-resolution spectral data from the captured mosaic data.
Type: Grant
Filed: September 13, 2017
Date of Patent: July 2, 2019
Assignee: Qualcomm Incorporated
Inventors: Hasib Siddiqui, Magdi Mohamed, James Nash, Kalin Atanassov
-
Publication number: 20190162885
Abstract: Various embodiments are directed to an optical filter. The optical filter may include a plurality of regions. The plurality of regions may include a first region transmissive of light within a first wavelength range and a second region transmissive of light within a second wavelength range.
Type: Application
Filed: November 30, 2017
Publication date: May 30, 2019
Inventors: James Nash, Kalin Atanassov, Hasib Siddiqui
-
Patent number: 10242458
Abstract: Systems and methods configured to generate virtual gimbal information for range images produced from 3D depth scans are described. In operation according to embodiments, known and advantageous spatial geometries of features of a scanned volume are exploited to generate virtual gimbal information for a pose. The virtual gimbal information of embodiments may be used to align a range image of the pose with one or more other range images for the scanned volume, such as for combining the range images for use in indoor mapping, gesture recognition, object scanning, etc. Implementations of range image registration using virtual gimbal information provide a real-time, one-shot direct pose estimator by detecting and estimating the normal vectors for surfaces of features between successive scans, which effectively imparts a coordinate system for each scan with an orthogonal set of gimbal axes and defines the relative camera attitude.
Type: Grant
Filed: April 21, 2017
Date of Patent: March 26, 2019
Assignee: QUALCOMM Incorporated
Inventors: James Nash, Kalin Atanassov, Albrecht Johannes Lindner
-
Publication number: 20190078937
Abstract: Systems and methods are disclosed for processing spectral imaging (SI) data. A training operation estimates reconstruction matrices based on a spectral mosaic of an SI sensor and generates directionally interpolated maximum a-priori (MAP) estimations of image data based on the estimated reconstruction matrices. The training operation may determine filter coefficients for each of a number of cross-band interpolation filters based at least in part on the MAP estimations, and may determine edge classification factors based at least in part on the determined filter coefficients. The training operation may configure a cross-band interpolation circuit based at least in part on the determined filter coefficients and the determined edge classification factors. The configured cross-band interpolation circuit captures mosaic data using the SI sensor, and recovers full-resolution spectral data from the captured mosaic data.
Type: Application
Filed: September 13, 2017
Publication date: March 14, 2019
Inventors: Hasib Siddiqui, Magdi Mohamed, James Nash, Kalin Atanassov
-
Publication number: 20180308249
Abstract: Systems and methods configured to generate virtual gimbal information for range images produced from 3D depth scans are described. In operation according to embodiments, known and advantageous spatial geometries of features of a scanned volume are exploited to generate virtual gimbal information for a pose. The virtual gimbal information of embodiments may be used to align a range image of the pose with one or more other range images for the scanned volume, such as for combining the range images for use in indoor mapping, gesture recognition, object scanning, etc. Implementations of range image registration using virtual gimbal information provide a real-time, one-shot direct pose estimator by detecting and estimating the normal vectors for surfaces of features between successive scans, which effectively imparts a coordinate system for each scan with an orthogonal set of gimbal axes and defines the relative camera attitude.
Type: Application
Filed: April 21, 2017
Publication date: October 25, 2018
Inventors: James Nash, Kalin Atanassov, Albrecht Johannes Lindner
-
Publication number: 20180309919
Abstract: An aspect of this disclosure is an apparatus for capturing images. The apparatus comprises a first image sensor, a second image sensor, and at least one controller coupled to the first image sensor and the second image sensor. The controller is configured to determine a first exposure time of the first image sensor and a second exposure time of the second image sensor. The controller is further configured to control an exposure of the first image sensor according to the first exposure time and control an exposure of the second image sensor according to the second exposure time. The controller also determines a difference between the first and second exposure times and generates a signal for synchronizing image capture by the first and second image sensors based on the determined difference between the first and second exposure times.
Type: Application
Filed: April 19, 2017
Publication date: October 25, 2018
Inventors: Htet Naing, Kalin Atanassov, Stephen Michael Verrall
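A common way to use the exposure-time difference this abstract mentions is to center the shorter exposure within the longer one, delaying the start of the shorter exposure by half the difference so both sensors integrate around the same instant. The centering policy is an assumption for illustration; the publication only says synchronization is based on the determined difference.

```python
def sync_start_delays(t1, t2):
    """Return start delays (d1, d2) so that two exposures of lengths
    t1 and t2 are centered on the same instant: the shorter exposure
    starts later by half the difference in exposure times."""
    diff = abs(t1 - t2)
    if t1 >= t2:
        return 0.0, diff / 2.0
    return diff / 2.0, 0.0
```

For example, with exposures of 30 ms and 10 ms, delaying the second sensor by 10 ms places both exposure midpoints at 15 ms.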