Patents by Inventor Hasib Siddiqui
Hasib Siddiqui has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12243347
Abstract: Some methods may involve receiving fingerprint image data and a first set of background image data from a fingerprint sensor and determining first processed fingerprint image data via a subtraction of the first set of background image data from the fingerprint image data. Some methods may involve obtaining force data corresponding to a force applied to the fingerprint sensor when the fingerprint image data were obtained. Some methods may involve obtaining a second set of background image data corresponding to the force data. Some methods may involve determining second processed fingerprint image data based, at least in part, on the first processed fingerprint image data and the second set of background image data, and outputting the second processed fingerprint image data. In some examples, determining the second processed fingerprint image data may involve a machine learning model. Some examples may involve estimating residual noise based on the force data.
Type: Grant
Filed: December 19, 2023
Date of Patent: March 4, 2025
Assignee: QUALCOMM Incorporated
Inventors: Kwokleung Chan, Raj Kumar, Jessica Liu Strohmann, Sandeep Louis D'Souza, Deepak Rajendra Karnik, Advait Prasad Koparkar, Hasib Siddiqui
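The two-stage background subtraction described above can be illustrated with a short sketch. The function names, the per-force background bank, and the nearest-bin lookup below are illustrative assumptions, not the patented implementation (which may instead use a machine learning model for the second stage):

```python
# Hypothetical sketch of the two-stage background subtraction described in the
# abstract of patent 12243347. Names and the lookup scheme are illustrative only.
import numpy as np

def select_background_for_force(background_bank, force_bins, force_value):
    """Pick the pre-captured background frame whose force bin is closest to
    the measured force (an assumed lookup scheme, not the patented method)."""
    idx = int(np.argmin(np.abs(np.asarray(force_bins) - force_value)))
    return background_bank[idx]

def process_fingerprint(raw_image, background_1, background_bank, force_bins, force_value):
    # First stage: subtract the force-independent background estimate.
    first_processed = raw_image.astype(np.float32) - background_1.astype(np.float32)
    # Second stage: subtract a background frame chosen according to the
    # force applied to the sensor when the image was captured.
    background_2 = select_background_for_force(background_bank, force_bins, force_value)
    second_processed = first_processed - background_2.astype(np.float32)
    return np.clip(second_processed, 0, 255).astype(np.uint8)

# Toy usage with random data standing in for sensor frames.
rng = np.random.default_rng(0)
raw = rng.integers(0, 256, (64, 64), dtype=np.uint8)
bg1 = rng.integers(0, 32, (64, 64), dtype=np.uint8)
bank = [rng.integers(0, 16, (64, 64), dtype=np.uint8) for _ in range(3)]
out = process_fingerprint(raw, bg1, bank, force_bins=[1.0, 2.0, 3.0], force_value=2.2)
```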
-
Patent number: 12033422
Abstract: Some disclosed methods involve obtaining current A-line data corresponding to reflections of ultrasonic waves from a target object detected by a single receiver pixel, obtaining current ultrasonic fingerprint image data corresponding to reflections of ultrasonic waves from a target object surface, obtaining A-line data previously obtained from an authorized user, and obtaining ultrasonic fingerprint image data previously obtained from the authorized user. Some disclosed methods involve estimating, based at least in part on the current A-line data, the previously-obtained A-line data, the current ultrasonic fingerprint image data and the previously-obtained ultrasonic fingerprint image data, whether the target object is a finger of the authorized user. The estimation may involve an anti-spoofing process based at least in part on the current A-line data and the previously-obtained A-line data.
Type: Grant
Filed: March 20, 2023
Date of Patent: July 9, 2024
Assignee: QUALCOMM Incorporated
Inventors: Adi Hendel, Nathan Altman, Bence Major, Javier Frydman, Hasib Siddiqui
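As a loose illustration of how A-line and fingerprint-image cues might be fused into a single accept/reject decision, the sketch below correlates each current measurement against its enrolled counterpart and applies assumed thresholds; the actual anti-spoofing process is not disclosed at this level of detail:

```python
# Illustrative sketch (not Qualcomm's implementation) of fusing an A-line
# similarity score with a fingerprint-image similarity score; the thresholds
# and the AND-style fusion rule are assumptions.
import numpy as np

def normalized_correlation(a, b):
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

def is_authorized_finger(current_aline, enrolled_aline,
                         current_image, enrolled_image,
                         aline_threshold=0.7, image_threshold=0.7):
    # Anti-spoofing cue: the depth-resolved A-line echo from a single receiver
    # pixel should resemble the echo recorded at enrollment for a real finger.
    aline_score = normalized_correlation(current_aline, enrolled_aline)
    # Identity cue: the ultrasonic fingerprint image should match the template.
    image_score = normalized_correlation(current_image.ravel(), enrolled_image.ravel())
    return aline_score >= aline_threshold and image_score >= image_threshold
```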
-
Publication number: 20210232801
Abstract: Various aspects of the present disclosure generally relate to fingerprint sensing. In some aspects, a device may receive, from a fingerprint scanner, fingerprint scan data associated with an image that depicts a scanned fingerprint of a user; process, using a model-based iterative reconstruction (MBIR) model, the fingerprint scan data to generate an enhanced image associated with the image; and perform, based at least in part on the enhanced image, a match analysis to authenticate the user. Numerous other aspects are provided.
Type: Application
Filed: August 5, 2020
Publication date: July 29, 2021
Inventors: Hasib Siddiqui, Nathan Felix Altman, Kwokleung Chan
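The MBIR idea, enhancing the scan by iteratively fitting a forward model plus a prior, can be sketched as follows. The Gaussian blur forward model, smoothness prior, and step size are assumptions made only for illustration:

```python
# A minimal model-based iterative reconstruction (MBIR) sketch: gradient
# descent on a quadratic data term plus a smoothness prior. The blur model,
# prior, and step size are illustrative assumptions, not the published method.
import numpy as np
from scipy.ndimage import gaussian_filter

def mbir_enhance(measured, sigma=1.5, lam=0.05, step=0.5, iters=50):
    """Estimate an enhanced image x minimizing ||H x - y||^2 + lam * ||grad x||^2,
    where H is modeled as a Gaussian blur of the fingerprint scan y."""
    y = measured.astype(np.float32)
    x = y.copy()
    for _ in range(iters):
        residual = gaussian_filter(x, sigma) - y          # H x - y
        data_grad = gaussian_filter(residual, sigma)      # H^T (H x - y), H symmetric
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        x -= step * (data_grad - lam * lap)               # gradient step
    return x
```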
-
Patent number: 10991112
Abstract: Aspects relate to processing captured images from structured light systems. An example device may include one or more processors and a memory. The memory may include instructions that, when executed by the one or more processors, cause the device to receive a captured image of a scene from a structured light receiver, analyze one or more first portions of the captured image at a first scale, and analyze one or more second portions of the captured image at a second scale finer than the first scale. The analysis of the one or more second portions may be based on the analysis of the one or more first portions. The instructions further may cause the device to determine for each of the one or more second portions a codeword from a codeword distribution and determine one or more depths in the scene based on the one or more determined codewords.
Type: Grant
Filed: July 19, 2018
Date of Patent: April 27, 2021
Assignee: QUALCOMM Incorporated
Inventors: Hasib Siddiqui, James Nash, Kalin Atanassov
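A coarse-to-fine codeword search of the kind the abstract outlines might look like the following sketch, where a downsampled correlation prunes the candidate codewords before a full-resolution match; the dictionary, tiling, and scoring are placeholders rather than the patented procedure:

```python
# Hedged sketch of a coarse-to-fine codeword search for a structured-light
# image patch; the codeword dictionary and the scoring rule are assumptions.
import numpy as np

def best_codeword_coarse_to_fine(patch, codewords, coarse_factor=4, keep=4):
    # Coarse pass: correlate a downsampled patch against downsampled codewords.
    coarse_patch = patch[::coarse_factor, ::coarse_factor].ravel()
    coarse_scores = [float(np.dot(coarse_patch, cw[::coarse_factor, ::coarse_factor].ravel()))
                     for cw in codewords]
    candidates = np.argsort(coarse_scores)[-keep:]  # only the best few survive
    # Fine pass: full-resolution correlation over the surviving candidates only.
    fine_scores = {int(i): float(np.dot(patch.ravel(), codewords[i].ravel()))
                   for i in candidates}
    return max(fine_scores, key=fine_scores.get)

# Toy usage: the noisy patch is built from codeword 5, which the search should recover.
rng = np.random.default_rng(1)
codewords = [rng.integers(0, 2, (16, 16)).astype(np.float32) for _ in range(8)]
patch = codewords[5] + 0.1 * rng.standard_normal((16, 16)).astype(np.float32)
print(best_codeword_coarse_to_fine(patch, codewords))  # expected: 5
```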
-
Patent number: 10750135
Abstract: A device (e.g., an image sensor, camera, etc.) may identify a camera lens and color filter array (CFA) sensor used to capture an image, and may determine filter parameters (e.g., a convolutional operator) based on the identified camera lens and CFA sensor. For example, a set of kernels (e.g., including a set of horizontal filters and a set of vertical filters) may be determined based on properties of a given lens and/or q-channel CFA sensor. Each kernel or filter may correspond to a row of a convolutional operator (e.g., of a restoration bit matrix) used by an image signal processor (ISP) of the device for non-linear filtering of the captured image. The corresponding outputs from the horizontal and vertical filters (e.g., two outputs of the horizontal and vertical filters corresponding to an input channel associated with the CFA sensor) may then be combined using a non-linear classification operation.
Type: Grant
Filed: October 19, 2018
Date of Patent: August 18, 2020
Assignee: Qualcomm Incorporated
Inventors: Hasib Siddiqui, Kalin Atanassov, Magdi Mohamed
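A much-simplified sketch of horizontal and vertical filter banks combined by a nonlinear, per-pixel decision is shown below for the green channel of a Bayer mosaic; the kernels and the gradient-based selection rule are assumptions for demonstration rather than the patented operator:

```python
# Illustrative sketch: interpolate missing green samples of a Bayer mosaic
# along the direction with the smaller local second difference, i.e. combine
# horizontal- and vertical-filter outputs with a nonlinear per-pixel selection.
import numpy as np
from scipy.ndimage import convolve1d

def demosaic_green(mosaic, green_mask):
    m = mosaic.astype(np.float32)
    # Horizontal and vertical averaging kernels (one row of the operator bank).
    h_est = convolve1d(m, [0.5, 0.0, 0.5], axis=1, mode="reflect")
    v_est = convolve1d(m, [0.5, 0.0, 0.5], axis=0, mode="reflect")
    # Nonlinear classification: pick the direction with the smaller second difference.
    h_grad = np.abs(convolve1d(m, [1.0, -2.0, 1.0], axis=1, mode="reflect"))
    v_grad = np.abs(convolve1d(m, [1.0, -2.0, 1.0], axis=0, mode="reflect"))
    est = np.where(h_grad <= v_grad, h_est, v_est)
    return np.where(green_mask, m, est)  # keep measured green samples as-is
```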
-
Publication number: 20200128216
Abstract: A device (e.g., an image sensor, camera, etc.) may identify a camera lens and color filter array (CFA) sensor used to capture an image, and may determine filter parameters (e.g., a convolutional operator) based on the identified camera lens and CFA sensor. For example, a set of kernels (e.g., including a set of horizontal filters and a set of vertical filters) may be determined based on properties of a given lens and/or q-channel CFA sensor. Each kernel or filter may correspond to a row of a convolutional operator (e.g., of a restoration bit matrix) used by an image signal processor (ISP) of the device for non-linear filtering of the captured image. The corresponding outputs from the horizontal and vertical filters (e.g., two outputs of the horizontal and vertical filters corresponding to an input channel associated with the CFA sensor) may then be combined using a non-linear classification operation.
Type: Application
Filed: October 19, 2018
Publication date: April 23, 2020
Inventors: Hasib Siddiqui, Kalin Atanassov, Magdi Mohamed
-
Patent number: 10565726
Abstract: A device includes a first camera and a processor configured to detect one or more first keypoints within a first image captured by the first camera at a first time, detect one or more second keypoints within a second image captured by a second camera at the first time, and detect the one or more first keypoints within a third image captured by the first camera at a second time. The processor is configured to determine a pose estimation based on coordinates of the one or more first keypoints of the first image relative to a common coordinate system, coordinates of the one or more second keypoints of the second image relative to the common coordinate system, and coordinates of the one or more first keypoints of the third image relative to the common coordinate system. The first coordinate system is different from the common coordinate system.
Type: Grant
Filed: July 3, 2017
Date of Patent: February 18, 2020
Assignee: QUALCOMM Incorporated
Inventors: Albrecht Johannes Lindner, Kalin Mitkov Atanassov, James Wilson Nash, Hasib Siddiqui
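One conventional way to realize this pipeline, triangulating the stereo keypoints at the first time into a common frame and then recovering the first camera's pose at the second time with PnP, is sketched below using OpenCV; the intrinsics, baseline, and synthetic motion are made up for the example and are not taken from the patent:

```python
# Hedged sketch: triangulate keypoints seen by both cameras at time 1 into a
# common (first-camera) frame, then recover the first camera's pose at time 2
# from the reprojections of those keypoints via PnP. All numbers are synthetic.
import numpy as np
import cv2

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
baseline = 0.1  # assumed: second camera 10 cm to the right of the first

rng = np.random.default_rng(2)
points_3d = rng.uniform([-0.5, -0.5, 2.0], [0.5, 0.5, 4.0], (20, 3))  # synthetic keypoints

def project(points, rvec, tvec):
    img, _ = cv2.projectPoints(points, rvec, tvec, K, None)
    return img.reshape(-1, 2)

# Keypoints observed by both cameras at time 1 (common frame = first camera at time 1).
pts_cam1_t1 = project(points_3d, np.zeros(3), np.zeros(3))
pts_cam2_t1 = project(points_3d, np.zeros(3), np.array([-baseline, 0.0, 0.0]))

# Triangulate the time-1 observations into the common coordinate system.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-baseline], [0.0], [0.0]])])
hom = cv2.triangulatePoints(P1, P2,
                            np.ascontiguousarray(pts_cam1_t1.T),
                            np.ascontiguousarray(pts_cam2_t1.T))
tri_3d = np.ascontiguousarray((hom[:3] / hom[3]).T)

# Simulate the first camera having moved by time 2, then recover that pose via PnP.
true_rvec, true_tvec = np.array([0.0, 0.05, 0.0]), np.array([0.02, 0.0, 0.1])
pts_cam1_t2 = project(points_3d, true_rvec, true_tvec)
ok, rvec, tvec = cv2.solvePnP(tri_3d, pts_cam1_t2, K, None)
print(ok, rvec.ravel(), tvec.ravel())  # should approximate the simulated motion
```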
-
Publication number: 20190340776
Abstract: Aspects of the present disclosure relate to systems and methods for structured light (SL) depth systems. An example method for determining a depth map post-processing filter may include receiving an image including a scene superimposed on a codeword pattern, segmenting the image into a plurality of tiles, estimating a codeword for each tile of the plurality of tiles, estimating a mean scene value for each tile based at least in part on the respective estimated codeword, and determining the depth map post-processing filter based at least in part on the estimated codewords and the mean scene values.
Type: Application
Filed: August 21, 2018
Publication date: November 7, 2019
Inventors: James Nash, Hasib Siddiqui, Kalin Atanassov, Justin Cheng
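A speculative sketch of the per-tile statistics named in the abstract (best-matching codeword and mean scene value per tile) follows; how these statistics are turned into the final depth map post-processing filter is not reproduced here:

```python
# Speculative sketch of per-tile statistics for a structured-light image:
# tile the image, estimate the best-matching codeword per tile, and compute
# each tile's mean scene value. Tiling size and scoring are assumptions.
import numpy as np

def tile_statistics(image, codewords, tile=16):
    """Return (mean_scene, best_codeword, match_score) arrays, one entry per tile."""
    h, w = image.shape
    rows, cols = h // tile, w // tile
    mean_scene = np.zeros((rows, cols), np.float32)
    best_codeword = np.zeros((rows, cols), np.int32)
    match_score = np.zeros((rows, cols), np.float32)
    for r in range(rows):
        for c in range(cols):
            patch = image[r*tile:(r+1)*tile, c*tile:(c+1)*tile].astype(np.float32)
            mean_scene[r, c] = patch.mean()
            centered = patch - mean_scene[r, c]
            scores = [float(np.sum(centered * cw)) for cw in codewords]
            best_codeword[r, c] = int(np.argmax(scores))
            match_score[r, c] = max(scores)
    return mean_scene, best_codeword, match_score
```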
-
Patent number: 10445861
Abstract: Systems and methods are disclosed for refining a depth map of a scene based upon a captured image of the scene. A captured depth map of the scene may contain outage areas or other areas of low confidence. The depth map may be aligned with a color image of the scene, and the depth values of the depth map may be adjusted based upon corresponding color values of the color image. An amount of refinement for each depth value of the aligned depth map is based upon the confidence value of the depth value and a smoothing function based upon a corresponding location of the depth value on the color image.
Type: Grant
Filed: February 14, 2017
Date of Patent: October 15, 2019
Assignee: QUALCOMM Incorporated
Inventors: Hasib Siddiqui, Kalin Atanassov, James Nash
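A minimal sketch in this spirit is a joint (cross) bilateral filter guided by the aligned color image, with each pixel's refinement gated by its confidence value (assumed here to lie in [0, 1]); the filter parameters are illustrative choices, not the patent's:

```python
# Minimal joint bilateral refinement sketch: smooth the depth map using color
# (grayscale) similarity weights, and blend with the original depth according
# to per-pixel confidence. Parameters and the blending rule are assumptions.
import numpy as np

def refine_depth(depth, confidence, gray, radius=3, sigma_s=2.0, sigma_r=10.0):
    h, w = depth.shape
    d = depth.astype(np.float32)
    g = gray.astype(np.float32)
    refined = d.copy()
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            gy, gx = np.mgrid[y0:y1, x0:x1]
            spatial = np.exp(-((gy - y) ** 2 + (gx - x) ** 2) / (2 * sigma_s ** 2))
            range_w = np.exp(-((g[y0:y1, x0:x1] - g[y, x]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * range_w
            smoothed = float(np.sum(weights * d[y0:y1, x0:x1]) / np.sum(weights))
            # Low-confidence depth values move toward the color-guided estimate;
            # high-confidence values are largely preserved.
            alpha = 1.0 - float(confidence[y, x])
            refined[y, x] = (1 - alpha) * d[y, x] + alpha * smoothed
    return refined
```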
-
Publication number: 20190228535
Abstract: Aspects of the present disclosure relate to processing captured images from structured light systems. An example device may include one or more processors and a memory. The memory may include instructions that, when executed by the one or more processors, cause the device to receive a captured image of a scene from a structured light receiver, analyze one or more first portions of the captured image at a first scale, and analyze one or more second portions of the captured image at a second scale finer than the first scale. The analysis of the one or more second portions may be based on the analysis of the one or more first portions. The instructions further may cause the device to determine for each of the one or more second portions a codeword from a codeword distribution and determine one or more depths in the scene based on the one or more determined codewords.
Type: Application
Filed: July 19, 2018
Publication date: July 25, 2019
Inventors: Hasib Siddiqui, James Nash, Kalin Atanassov
-
Patent number: 10337923
Abstract: Systems and methods are disclosed for processing spectral imaging (SI) data. A training operation estimates reconstruction matrices based on a spectral mosaic of an SI sensor and generates directionally interpolated maximum a posteriori (MAP) estimations of image data based on the estimated reconstruction matrices. The training operation may determine filter coefficients for each of a number of cross-band interpolation filters based at least in part on the MAP estimations, and may determine edge classification factors based at least in part on the determined filter coefficients. The training operation may configure a cross-band interpolation circuit based at least in part on the determined filter coefficients and the determined edge classification factors. The configured cross-band interpolation circuit captures mosaic data using the SI sensor, and recovers full-resolution spectral data from the captured mosaic data.
Type: Grant
Filed: September 13, 2017
Date of Patent: July 2, 2019
Assignee: Qualcomm Incorporated
Inventors: Hasib Siddiqui, Magdi Mohamed, James Nash, Kalin Atanassov
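The training idea, learning reconstruction coefficients that map a mosaic neighborhood to a missing band value, can be illustrated with a plain least-squares fit; the MAP derivation and edge-class-specific filters of the actual method are not reproduced here:

```python
# Simplified, hypothetical illustration of learning linear interpolation
# coefficients from training patches by least squares; only the
# patch-to-coefficients structure of the training operation is shown.
import numpy as np

def learn_interpolation_filter(mosaic_patches, target_values):
    """mosaic_patches: (N, k*k) flattened training patches from the SI mosaic.
    target_values:  (N,) ground-truth values of the missing band at the center.
    Returns least-squares filter coefficients of length k*k."""
    coeffs, *_ = np.linalg.lstsq(mosaic_patches, target_values, rcond=None)
    return coeffs

# Toy training run: a synthetic "true" filter is recovered from noisy examples.
rng = np.random.default_rng(3)
true_filter = rng.standard_normal(25)
patches = rng.standard_normal((500, 25))
targets = patches @ true_filter + 0.01 * rng.standard_normal(500)
learned = learn_interpolation_filter(patches, targets)
print(np.allclose(learned, true_filter, atol=0.05))  # typically True
```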
-
Publication number: 20190162885
Abstract: Various embodiments are directed to an optical filter. The optical filter may include a plurality of regions. The plurality of regions may include a first region transmissive of light within a first wavelength range and a second region transmissive of light within a second wavelength range.
Type: Application
Filed: November 30, 2017
Publication date: May 30, 2019
Inventors: James Nash, Kalin Atanassov, Hasib Siddiqui
-
Publication number: 20190078937
Abstract: Systems and methods are disclosed for processing spectral imaging (SI) data. A training operation estimates reconstruction matrices based on a spectral mosaic of an SI sensor and generates directionally interpolated maximum a posteriori (MAP) estimations of image data based on the estimated reconstruction matrices. The training operation may determine filter coefficients for each of a number of cross-band interpolation filters based at least in part on the MAP estimations, and may determine edge classification factors based at least in part on the determined filter coefficients. The training operation may configure a cross-band interpolation circuit based at least in part on the determined filter coefficients and the determined edge classification factors. The configured cross-band interpolation circuit captures mosaic data using the SI sensor, and recovers full-resolution spectral data from the captured mosaic data.
Type: Application
Filed: September 13, 2017
Publication date: March 14, 2019
Inventors: Hasib Siddiqui, Magdi Mohamed, James Nash, Kalin Atanassov
-
Publication number: 20190005678
Abstract: A device includes a first camera and a processor configured to detect one or more first keypoints within a first image captured by the first camera at a first time, detect one or more second keypoints within a second image captured by a second camera at the first time, and detect the one or more first keypoints within a third image captured by the first camera at a second time. The processor is configured to determine a pose estimation based on coordinates of the one or more first keypoints of the first image relative to a common coordinate system, coordinates of the one or more second keypoints of the second image relative to the common coordinate system, and coordinates of the one or more first keypoints of the third image relative to the common coordinate system. The first coordinate system is different from the common coordinate system.
Type: Application
Filed: July 3, 2017
Publication date: January 3, 2019
Inventors: Albrecht Johannes Lindner, Kalin Mitkov Atanassov, James Wilson Nash, Hasib Siddiqui
-
Publication number: 20180232859
Abstract: Systems and methods are disclosed for refining a depth map of a scene based upon a captured image of the scene. A captured depth map of the scene may contain outage areas or other areas of low confidence. The depth map may be aligned with a color image of the scene, and the depth values of the depth map may be adjusted based upon corresponding color values of the color image. An amount of refinement for each depth value of the aligned depth map is based upon the confidence value of the depth value and a smoothing function based upon a corresponding location of the depth value on the color image.
Type: Application
Filed: February 14, 2017
Publication date: August 16, 2018
Inventors: Hasib Siddiqui, Kalin Atanassov, James Nash