Patents by Inventor Hasib Siddiqui

Hasib Siddiqui has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210232801
    Abstract: Various aspects of the present disclosure generally relate to fingerprint sensing. In some aspects, a device may receive, from a fingerprint scanner, fingerprint scan data associated with an image that depicts a scanned fingerprint of a user; process, using a model-based iterative reconstruction (MBIR) model, the fingerprint scan data to generate an enhanced image associated with the image; and perform, based at least in part on the enhanced image, a match analysis to authenticate the user. Numerous other aspects are provided.
    Type: Application
    Filed: August 5, 2020
    Publication date: July 29, 2021
    Inventors: Hasib Siddiqui, Nathan Felix Altman, Kwokleung Chan
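    Illustrative sketch: the abstract leaves the reconstruction details open, so the toy Python snippet below only shows the general shape of model-based iterative reconstruction (MBIR). It assumes a box-blur forward model and a quadratic smoothness prior (assumptions, not taken from the patent) and enhances a noisy scan by gradient descent; all names and parameters are hypothetical.

      import numpy as np

      def mbir_enhance(scan, iterations=50, step=0.1, smooth_weight=0.05):
          """Toy MBIR sketch: minimise ||blur(x) - scan||^2 + smooth_weight * ||grad x||^2."""
          def blur(img):
              # 3x3 box blur standing in for an (assumed) sensor forward model
              padded = np.pad(img, 1, mode="edge")
              out = np.zeros_like(img)
              for dy in (-1, 0, 1):
                  for dx in (-1, 0, 1):
                      out += padded[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
              return out / 9.0

          x = scan.astype(float).copy()
          for _ in range(iterations):
              data_grad = blur(blur(x) - scan)                 # gradient of the data-fit term
              lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                     np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
              x -= step * (data_grad - smooth_weight * lap)    # descend on data term plus prior
          return np.clip(x, 0.0, 1.0)

      # Usage with synthetic data standing in for fingerprint scan data.
      rng = np.random.default_rng(0)
      scan = np.clip(0.5 + 0.4 * np.sin(np.arange(64) / 3.0)[None, :]
                     + 0.1 * rng.standard_normal((64, 64)), 0.0, 1.0)
      print(mbir_enhance(scan).shape)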
  • Patent number: 10991112
    Abstract: Aspects relate to processing captured images from structured light systems. An example device may include one or more processors and a memory. The memory may include instructions that, when executed by the one or more processors, cause the device to receive a captured image of a scene from a structured light receiver, analyze one or more first portions of the captured image at a first scale, and analyze one or more second portions of the captured image at a second scale finer than the first scale. The analysis of the one or more second portions may be based on the analysis of the one or more first portions. The instructions further may cause the device to determine for each of the one or more second portions a codeword from a codeword distribution and determine one or more depths in the scene based on the one or more determined codewords.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: April 27, 2021
    Assignee: QUALCOMM Incorporated
    Inventors: Hasib Siddiqui, James Nash, Kalin Atanassov
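    Illustrative sketch: as a rough, generic illustration of the coarse-to-fine idea in the abstract (not the claimed method), the snippet below first compares a downsampled patch against downsampled codewords to prune the codebook, then re-scores the surviving candidates at full resolution. The codebook size, scales, and scoring are illustrative assumptions.

      import numpy as np

      def coarse_to_fine_decode(patch, codebook, coarse_factor=2, top_k=3):
          def downsample(img, f):
              h, w = img.shape
              return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

          def score(a, b):
              a = a - a.mean()
              b = b - b.mean()
              return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

          coarse_patch = downsample(patch, coarse_factor)
          coarse_scores = [score(coarse_patch, downsample(c, coarse_factor)) for c in codebook]
          candidates = np.argsort(coarse_scores)[-top_k:]            # coarse pass prunes the search
          fine_scores = {i: score(patch, codebook[i]) for i in candidates}
          return max(fine_scores, key=fine_scores.get)               # fine pass picks the codeword

      # Usage: decode a noisy observation of codeword 2 from a 4-entry codebook.
      rng = np.random.default_rng(1)
      codebook = [rng.integers(0, 2, (8, 8)).astype(float) for _ in range(4)]
      observed = codebook[2] + 0.2 * rng.standard_normal((8, 8))
      print("decoded codeword index:", coarse_to_fine_decode(observed, codebook))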
  • Patent number: 10750135
    Abstract: A device (e.g., an image sensor, camera, etc.) may identify a camera lens and color filter array (CFA) sensor used to capture an image, and may determine filter parameters (e.g., a convolutional operator) based on the identified camera lens and CFA sensor. For example, a set of kernels (e.g., including a set of horizontal filters and a set of vertical filters) may be determined based on properties of a given lens and/or q-channel CFA sensor. Each kernel or filter may correspond to a row of a convolutional operator (e.g., of a restoration bit matrix) used by an image signal processor (ISP) of the device for non-linear filtering of the captured image. The corresponding outputs from the horizontal and vertical filters (e.g., two outputs of the horizontal and vertical filters corresponding to an input channel associated with the CFA sensor) may then be combined using a non-linear classification operation.
    Type: Grant
    Filed: October 19, 2018
    Date of Patent: August 18, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Hasib Siddiqui, Kalin Atanassov, Magdi Mohamed
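    Illustrative sketch: the snippet below is a loose analogy to the horizontal/vertical filtering plus non-linear combination described above, not the patented ISP pipeline. It interpolates missing samples in a mosaic by computing a horizontal and a vertical estimate and selecting between them with a simple gradient test; the mosaic layout and filters are assumptions for illustration.

      import numpy as np

      def directional_interpolation(mosaic, measured_mask):
          """Fill unmeasured mosaic positions from horizontal/vertical neighbours,
          choosing the direction with the smaller local gradient (a non-linear step)."""
          padded = np.pad(mosaic, 1, mode="reflect")
          out = mosaic.astype(float).copy()
          h, w = mosaic.shape
          for y in range(h):
              for x in range(w):
                  if measured_mask[y, x]:
                      continue                                     # sample already measured here
                  py, px = y + 1, x + 1
                  horiz = 0.5 * (padded[py, px - 1] + padded[py, px + 1])
                  vert = 0.5 * (padded[py - 1, px] + padded[py + 1, px])
                  grad_h = abs(padded[py, px - 1] - padded[py, px + 1])
                  grad_v = abs(padded[py - 1, px] - padded[py + 1, px])
                  out[y, x] = horiz if grad_h <= grad_v else vert  # pick the flatter direction
          return out

      # Usage on a synthetic 6x6 mosaic with a checkerboard of measured samples.
      rng = np.random.default_rng(2)
      mosaic = rng.random((6, 6))
      measured = (np.add.outer(np.arange(6), np.arange(6)) % 2 == 0)
      print(directional_interpolation(mosaic, measured).round(2))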
  • Publication number: 20200128216
    Abstract: A device (e.g., an image sensor, camera, etc.) may identify a camera lens and color filter array (CFA) sensor used to capture an image, and may determine filter parameters (e.g., a convolutional operator) based on the identified camera lens and CFA sensor. For example, a set of kernels (e.g., including a set of horizontal filters and a set of vertical filters) may be determined based on properties of a given lens and/or q-channel CFA sensor. Each kernel or filter may correspond to a row of a convolutional operator (e.g., of a restoration bit matrix) used by an image signal processor (ISP) of the device for non-linear filtering of the captured image. The corresponding outputs from the horizontal and vertical filters (e.g., two outputs of the horizontal and vertical filters corresponding to an input channel associated with the CFA sensor) may then be combined using a non-linear classification operation.
    Type: Application
    Filed: October 19, 2018
    Publication date: April 23, 2020
    Inventors: Hasib Siddiqui, Kalin Atanassov, Magdi Mohamed
  • Patent number: 10565726
    Abstract: A device includes a first camera and a processor configured to detect one or more first keypoints within a first image captured by the first camera at a first time, detect one or more second keypoints within a second image captured by a second camera at the first time, and detect the one or more first keypoints within a third image captured by the first camera at a second time. The processor is configured to determine a pose estimation based on coordinates of the one or more first keypoints of the first image relative to a common coordinate system, coordinates of the one or more second keypoints of the second image relative to the common coordinate system, and coordinates of the one or more first keypoints of the third image relative to the common coordinate system. The coordinate system of the first camera is different from the common coordinate system.
    Type: Grant
    Filed: July 3, 2017
    Date of Patent: February 18, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Albrecht Johannes Lindner, Kalin Mitkov Atanassov, James Wilson Nash, Hasib Siddiqui
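    Illustrative sketch: once keypoints from the two cameras and the two time instants are expressed in the common coordinate system, one standard, generic way to estimate the pose change is a least-squares rigid alignment of the point sets. The Kabsch solver below is a stand-in for that step, not necessarily the claimed method.

      import numpy as np

      def rigid_pose(points_t1, points_t2):
          """Least-squares R, t with R @ p1 + t ~= p2 (Kabsch algorithm)."""
          c1, c2 = points_t1.mean(axis=0), points_t2.mean(axis=0)
          H = (points_t1 - c1).T @ (points_t2 - c2)      # cross-covariance of centred points
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against a reflection solution
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          t = c2 - R @ c1
          return R, t

      # Usage: keypoints at time 1, then rotated about z and translated at time 2.
      rng = np.random.default_rng(3)
      kp_t1 = rng.standard_normal((6, 3))
      a = 0.3
      R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                         [np.sin(a),  np.cos(a), 0.0],
                         [0.0, 0.0, 1.0]])
      kp_t2 = kp_t1 @ R_true.T + np.array([0.5, -0.2, 0.1])
      R_est, t_est = rigid_pose(kp_t1, kp_t2)
      print(np.allclose(R_est, R_true, atol=1e-6), t_est.round(3))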
  • Publication number: 20190340776
    Abstract: Aspects of the present disclosure relate to systems and methods for structured light (SL) depth systems. An example method for determining a depth map post-processing filter may include receiving an image including a scene superimposed on a codeword pattern, segmenting the image into a plurality of tiles, estimating a codeword for each tile of the plurality of tiles, estimating a mean scene value for each tile based at least in part on the respective estimated codeword, and determining the depth map post-processing filter based at least in part on the estimated codewords and the mean scene values.
    Type: Application
    Filed: August 21, 2018
    Publication date: November 7, 2019
    Inventors: James Nash, Hasib Siddiqui, Kalin Atanassov, Justin Cheng
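    Illustrative sketch: the toy snippet below mimics the per-tile flow in the abstract, estimating a codeword match and a mean scene value for each tile and mapping both to a smoothing strength for a post-processing filter. The codebook, tile size, and blend rule are illustrative assumptions, not the patented design.

      import numpy as np

      def tile_filter_strengths(image, codebook, tile=8):
          h, w = image.shape
          strengths = np.zeros((h // tile, w // tile))
          for ty in range(h // tile):
              for tx in range(w // tile):
                  block = image[ty * tile:(ty + 1) * tile, tx * tile:(tx + 1) * tile]
                  centred = block - block.mean()
                  scores = [abs(float((centred * (c - c.mean())).sum())) /
                            (np.linalg.norm(centred) * np.linalg.norm(c - c.mean()) + 1e-9)
                            for c in codebook]
                  match = max(scores)                           # estimated codeword confidence
                  mean_val = block.mean()                       # estimated mean scene value
                  strengths[ty, tx] = (1.0 - match) + (1.0 - mean_val)  # weaker match / darker tile => stronger filter
          return strengths

      # Usage on a synthetic 32x32 image and a 4-entry codebook.
      rng = np.random.default_rng(4)
      codebook = [rng.integers(0, 2, (8, 8)).astype(float) for _ in range(4)]
      image = rng.random((32, 32))
      print(tile_filter_strengths(image, codebook).round(2))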
  • Patent number: 10445861
    Abstract: Systems and methods for refining a depth map of a scene based upon a captured image of the scene. A captured depth map of the scene may contain outage areas or other areas of low confidence. The depth map may be aligned with a color image of the scene, and the depth values of the depth map may be adjusted based upon corresponding color values of the color image. An amount of refinement for each depth value of the aligned depth map is based upon the confidence value of the depth value and a smoothing function based upon a corresponding location of the depth value on the color image.
    Type: Grant
    Filed: February 14, 2017
    Date of Patent: October 15, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Hasib Siddiqui, Kalin Atanassov, James Nash
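    Illustrative sketch: the snippet below is a generic joint-bilateral-style refinement consistent with the description above, not necessarily the patented formulation. Low-confidence depth values are pulled toward a neighbourhood average weighted by colour similarity in the aligned image, while high-confidence values are largely kept; the weighting and the confidence blend are assumptions.

      import numpy as np

      def refine_depth(depth, confidence, color, radius=2, sigma_c=0.1):
          h, w = depth.shape
          out = depth.copy()
          for y in range(h):
              for x in range(w):
                  weights, values = [], []
                  for dy in range(-radius, radius + 1):
                      for dx in range(-radius, radius + 1):
                          ny, nx = y + dy, x + dx
                          if 0 <= ny < h and 0 <= nx < w:
                              cdiff = np.linalg.norm(color[ny, nx] - color[y, x])
                              wgt = np.exp(-(cdiff ** 2) / (2 * sigma_c ** 2))
                              weights.append(wgt * confidence[ny, nx])   # trust similar, confident neighbours
                              values.append(depth[ny, nx])
                  smoothed = np.average(values, weights=weights) if sum(weights) > 1e-9 else depth[y, x]
                  alpha = confidence[y, x]                               # keep confident measurements
                  out[y, x] = alpha * depth[y, x] + (1 - alpha) * smoothed
          return out

      # Usage with synthetic depth, per-pixel confidence, and an aligned colour image.
      rng = np.random.default_rng(5)
      depth, confidence = rng.random((16, 16)), rng.random((16, 16))
      color = rng.random((16, 16, 3))
      print(refine_depth(depth, confidence, color).shape)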
  • Publication number: 20190228535
    Abstract: Aspects of the present disclosure relate to processing captured images from structured light systems. An example device may include one or more processors and a memory. The memory may include instructions that, when executed by the one or more processors, cause the device to receive a captured image of a scene from a structured light receiver, analyze one or more first portions of the captured image at a first scale, and analyze one or more second portions of the captured image at a second scale finer than the first scale. The analysis of the one or more second portions may be based on the analysis of the one or more first portions. The instructions further may cause the device to determine for each of the one or more second portions a codeword from a codeword distribution and determine one or more depths in the scene based on the one or more determined codewords.
    Type: Application
    Filed: July 19, 2018
    Publication date: July 25, 2019
    Inventors: Hasib Siddiqui, James Nash, Kalin Atanassov
  • Patent number: 10337923
    Abstract: Systems and methods are disclosed for processing spectral imaging (SI) data. A training operation estimates reconstruction matrices based on a spectral mosaic of an SI sensor and generates directionally interpolated maximum a posteriori (MAP) estimations of image data based on the estimated reconstruction matrices. The training operation may determine filter coefficients for each of a number of cross-band interpolation filters based at least in part on the MAP estimations, and may determine edge classification factors based at least in part on the determined filter coefficients. The training operation may configure a cross-band interpolation circuit based at least in part on the determined filter coefficients and the determined edge classification factors. The configured cross-band interpolation circuit captures mosaic data using the SI sensor, and recovers full-resolution spectral data from the captured mosaic data.
    Type: Grant
    Filed: September 13, 2017
    Date of Patent: July 2, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Hasib Siddiqui, Magdi Mohamed, James Nash, Kalin Atanassov
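    Illustrative sketch: one simplified reading of the training step is that each cross-band interpolation filter is fit by regularised least squares, with the ridge term playing the role of a Gaussian prior in a MAP estimate. The snippet below learns a single such filter from synthetic mosaic patches; the patch size, prior, and data are illustrative assumptions, not the patented training procedure.

      import numpy as np

      def train_band_filter(mosaic_patches, target_values, ridge=1e-3):
          """Fit one linear interpolation filter by ridge-regularised least squares."""
          A = mosaic_patches                                  # (num_samples, patch_size) design matrix
          reg = ridge * np.eye(A.shape[1])
          return np.linalg.solve(A.T @ A + reg, A.T @ target_values)

      # Usage: learn to predict a missing-band pixel from a flattened 3x3 neighbourhood.
      rng = np.random.default_rng(6)
      true_filter = rng.standard_normal(9)
      patches = rng.standard_normal((500, 9))
      targets = patches @ true_filter + 0.01 * rng.standard_normal(500)
      coeffs = train_band_filter(patches, targets)
      print("filter recovered:", np.allclose(coeffs, true_filter, atol=0.05))
      print("example reconstruction:", float(patches[0] @ coeffs))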
  • Publication number: 20190162885
    Abstract: Various embodiments are directed to an optical filter. The optical filter may include a plurality of regions. The plurality of regions may include a first region transmissive of light within a first wavelength range and a second region transmissive of light within a second wavelength range.
    Type: Application
    Filed: November 30, 2017
    Publication date: May 30, 2019
    Inventors: James Nash, Kalin Atanassov, Hasib Siddiqui
  • Publication number: 20190078937
    Abstract: Systems and methods are disclosed for processing spectral imaging (SI) data. A training operation estimates reconstruction matrices based on a spectral mosaic of an SI sensor and generates directionally interpolated maximum a posteriori (MAP) estimations of image data based on the estimated reconstruction matrices. The training operation may determine filter coefficients for each of a number of cross-band interpolation filters based at least in part on the MAP estimations, and may determine edge classification factors based at least in part on the determined filter coefficients. The training operation may configure a cross-band interpolation circuit based at least in part on the determined filter coefficients and the determined edge classification factors. The configured cross-band interpolation circuit captures mosaic data using the SI sensor, and recovers full-resolution spectral data from the captured mosaic data.
    Type: Application
    Filed: September 13, 2017
    Publication date: March 14, 2019
    Inventors: Hasib Siddiqui, Magdi Mohamed, James Nash, Kalin Atanassov
  • Publication number: 20190005678
    Abstract: A device includes a first camera and a processor configured to detect one or more first keypoints within a first image captured by the first camera at a first time, detect one or more second keypoints within a second image captured by a second camera at the first time, and detect the one or more first keypoints within a third image captured by the first camera at a second time. The processor is configured to determine a pose estimation based on coordinates of the one or more first keypoints of the first image relative to a common coordinate system, coordinates of the one or more second keypoints of the second image relative to the common coordinate system, and coordinates of the one or more first keypoints of the third image relative to the common coordinate system. The coordinate system of the first camera is different from the common coordinate system.
    Type: Application
    Filed: July 3, 2017
    Publication date: January 3, 2019
    Inventors: Albrecht Johannes Lindner, Kalin Mitkov Atanassov, James Wilson Nash, Hasib Siddiqui
  • Publication number: 20180232859
    Abstract: Systems and methods for refining a depth map of a scene based upon a captured image of the scene. A captured depth map of the scene may contain outage areas or other areas of low confidence. The depth map may be aligned with a color image of the scene, and the depth values of the depth map may be adjusted based upon corresponding color values of the color image. An amount of refinement for each depth value of the aligned depth map is based upon the confidence value of the depth value and a smoothing function based upon a corresponding location of the depth value on the color image.
    Type: Application
    Filed: February 14, 2017
    Publication date: August 16, 2018
    Inventors: Hasib Siddiqui, Kalin Atanassov, James Nash
  • Patent number: 9495591
    Abstract: Methods, systems and articles of manufacture for recognizing and locating one or more objects in a scene are disclosed. An image and/or video of the scene are captured. Using audio recorded at the scene, an object search of the captured scene is narrowed down. For example, the direction of arrival (DOA) of a sound can be determined and used to limit the search area in a captured image/video. In another example, keypoint signatures may be selected based on types of sounds identified in the recorded audio. A keypoint signature corresponds to a particular object that the system is configured to recognize. Objects in the scene may then be recognized using a scale-invariant feature transform (SIFT) analysis comparing keypoints identified in the captured scene to the selected keypoint signatures.
    Type: Grant
    Filed: October 30, 2012
    Date of Patent: November 15, 2016
    Assignee: QUALCOMM Incorporated
    Inventors: Erik Visser, Haiyin Wang, Hasib A. Siddiqui, Lae-Hoon Kim
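    Illustrative sketch: the snippet below sketches the audio-narrowed search described above with plain NumPy rather than a real SIFT implementation. A direction of arrival is mapped to a horizontal pixel window, and only keypoints inside that window are matched (ratio test) against a selected keypoint signature; the linear angle-to-column mapping, window margin, and descriptors are illustrative assumptions.

      import numpy as np

      def doa_to_column_window(doa_deg, image_width, horizontal_fov_deg=90.0, margin=0.1):
          """Map a sound direction of arrival (0 deg = camera centre) to a pixel column window."""
          centre = image_width / 2.0 * (1.0 + doa_deg / (horizontal_fov_deg / 2.0))
          half = margin * image_width
          return max(0, int(centre - half)), min(image_width, int(centre + half))

      def match_in_window(keypoints_xy, descriptors, signature_descriptors, window, ratio=0.8):
          """Count ratio-test matches between scene keypoints inside the window and a signature."""
          lo, hi = window
          matches = 0
          for (x, _), desc in zip(keypoints_xy, descriptors):
              if not (lo <= x < hi):
                  continue                                     # outside the audio-narrowed region
              dists = np.linalg.norm(signature_descriptors - desc, axis=1)
              best, second = np.sort(dists)[:2]
              if best < ratio * second:
                  matches += 1
          return matches

      # Usage with synthetic keypoints and descriptors standing in for SIFT output.
      rng = np.random.default_rng(7)
      keypoints_xy = rng.uniform(0, 640, size=(50, 2))
      descriptors = rng.standard_normal((50, 16))
      signature = np.vstack([descriptors[:5] + 0.01 * rng.standard_normal((5, 16)),
                             rng.standard_normal((20, 16))])
      window = doa_to_column_window(doa_deg=20.0, image_width=640)
      print("window:", window, "matches:", match_in_window(keypoints_xy, descriptors, signature, window))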
  • Publication number: 20130272548
    Abstract: Methods, systems and articles of manufacture for recognizing and locating one or more objects in a scene are disclosed. An image and/or video of the scene are captured. Using audio recorded at the scene, an object search of the captured scene is narrowed down. For example, the direction of arrival (DOA) of a sound can be determined and used to limit the search area in a captured image/video. In another example, keypoint signatures may be selected based on types of sounds identified in the recorded audio. A keypoint signature corresponds to a particular object that the system is configured to recognize. Objects in the scene may then be recognized using a scale-invariant feature transform (SIFT) analysis comparing keypoints identified in the captured scene to the selected keypoint signatures.
    Type: Application
    Filed: October 30, 2012
    Publication date: October 17, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Erik Visser, Haiyin Wang, Hasib A. Siddiqui, Lae-Hoon Kim