Patents by Inventor Yingyong Qi

Yingyong Qi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200218878
    Abstract: Methods, systems, and devices for personalized (e.g., user specific) eye openness estimation are described. A network model (e.g., a convolutional neural network) may be trained using a set of synthetic eye openness image data (e.g., synthetic face images with known degrees or percentages of eye openness) and a set of real eye openness image data (e.g., facial images of real persons that are annotated as either open eyed or closed eyed). A device may estimate, using the network model, a multi-stage eye openness level (e.g., a percentage or degree to which an eye is open) of a user based on captured real time eye openness image data. The degree of eye openness estimated by the network model may then be compared to an eye size of the user (e.g., a user specific maximum eye size), and a user specific eye openness level may be estimated based on the comparison.
    Type: Application
    Filed: January 3, 2019
    Publication date: July 9, 2020
    Inventors: Eyasu Zemene Mequanint, Shuai Zhang, Yingyong Qi, Ning Bi
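    Example (illustrative sketch): a minimal Python sketch of the personalization step the abstract describes, in which the network's openness estimate is compared to a user-specific maximum eye size to yield a user-specific openness level. The function name, the pixel-based units, and the calibration source are assumptions for illustration, not the claimed implementation.

        import numpy as np

        def personalized_eye_openness(estimated_openness_px: float,
                                      user_max_eye_size_px: float) -> float:
            """Map a raw eye-openness estimate (e.g., the eyelid gap predicted by the
            network) to a user-specific openness level in [0, 1] by comparing it to
            that user's maximum eye size (the calibration source is an assumption)."""
            if user_max_eye_size_px <= 0:
                raise ValueError("calibrated maximum eye size must be positive")
            return float(np.clip(estimated_openness_px / user_max_eye_size_px, 0.0, 1.0))

        # Example: the model predicts a 6.2 px eyelid gap; calibration found a 9.0 px maximum.
        level = personalized_eye_openness(6.2, 9.0)   # ~0.69, i.e., the eye is ~69% open for this user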
  • Patent number: 10706267
    Abstract: Methods, systems, and devices for object recognition are described. Generally, the described techniques provide for a compact and efficient convolutional neural network (CNN) model for facial recognition. The proposed techniques relate to a light model with a set of layers of convolution and one fully connected layer for feature representation. A new building block for each convolution layer is proposed. A maximum feature map (MFM) operation may be employed to reduce channels (e.g., by combining two or more channels via maximum feature selection within the channels). Depth-wise separable convolution may be employed for computation reduction (e.g., reduction of convolution computation). Batch normalization may be applied to normalize the output of the convolution layers and the fully connected layer (e.g., to prevent overfitting). The described techniques provide a compact and efficient CNN model which can be used for efficient and effective face recognition.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: July 7, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Lei Wang, Ning Bi, Yingyong Qi
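    Example (illustrative sketch): a minimal PyTorch sketch of the two operations named in the abstract, a maximum feature map (MFM) that halves the channel count via an element-wise maximum over channel pairs, and a depth-wise separable convolution used to reduce computation, followed by batch normalization. Layer sizes and the exact block ordering are assumptions, not the patented architecture.

        import torch
        import torch.nn as nn

        class MFM(nn.Module):
            """Maximum feature map: split channels into two halves and keep the
            element-wise maximum, halving the number of channels."""
            def forward(self, x: torch.Tensor) -> torch.Tensor:
                a, b = torch.chunk(x, 2, dim=1)
                return torch.max(a, b)

        class DepthwiseSeparableBlock(nn.Module):
            """Depth-wise separable convolution (per-channel 3x3 plus 1x1 pointwise),
            followed by batch normalization and an MFM activation."""
            def __init__(self, in_ch: int, out_ch: int):
                super().__init__()
                self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
                self.pointwise = nn.Conv2d(in_ch, 2 * out_ch, kernel_size=1)  # 2x channels; MFM halves them
                self.bn = nn.BatchNorm2d(2 * out_ch)
                self.mfm = MFM()

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                return self.mfm(self.bn(self.pointwise(self.depthwise(x))))

        # Example: a 112x112 face crop through one block.
        block = DepthwiseSeparableBlock(in_ch=32, out_ch=48)
        features = block(torch.randn(1, 32, 112, 112))   # shape (1, 48, 112, 112)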
  • Publication number: 20200175260
    Abstract: Methods, systems, and devices for image processing are described. The method may include identifying a face in a first image based on identifying one or more biometric features of the face, determining an angular direction of one or more pixels of the identified face, identifying an anchor point on the identified face, sorting each of one or more pixels of the identified face into one of a set of pixel bins based on a combination of the determined angular direction of the pixel and a distance between the pixel and the identified anchor point, and outputting an indication of authenticity associated with the face based on a number of pixels in each bin.
    Type: Application
    Filed: November 30, 2018
    Publication date: June 4, 2020
    Inventors: Ke-Li Cheng, Kuang-Man Huang, Michel Adib Sarkis, Yingyong Qi, Ning Bi
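    Example (illustrative sketch): a minimal NumPy sketch of the binning step in the abstract, in which each face pixel is assigned to a bin keyed by its angular direction and its distance from an anchor point, producing per-bin pixel counts for the authenticity decision. The gradient-based angle, the bin counts, and the anchor choice are illustrative assumptions.

        import numpy as np

        def angle_distance_histogram(face_gray, anchor_rc, n_angle_bins=8, n_dist_bins=4):
            """Sort face pixels into (angle, distance) bins relative to an anchor point.
            face_gray: 2-D grayscale face crop; anchor_rc: (row, col) of the anchor,
            e.g. the nose tip (an assumption)."""
            gy, gx = np.gradient(face_gray.astype(np.float64))
            angle = np.arctan2(gy, gx)                          # per-pixel angular direction
            rows, cols = np.indices(face_gray.shape)
            dist = np.hypot(rows - anchor_rc[0], cols - anchor_rc[1])

            a_idx = np.clip(((angle + np.pi) / (2 * np.pi) * n_angle_bins).astype(int), 0, n_angle_bins - 1)
            d_idx = np.clip((dist / (dist.max() + 1e-9) * n_dist_bins).astype(int), 0, n_dist_bins - 1)

            hist = np.zeros((n_angle_bins, n_dist_bins), dtype=np.int64)
            np.add.at(hist, (a_idx.ravel(), d_idx.ravel()), 1)
            return hist   # the authenticity indication would be derived from these counts

        hist = angle_distance_histogram(np.random.rand(128, 128), anchor_rc=(64, 64))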
  • Patent number: 10643063
    Abstract: Methods, systems, and devices for object recognition are described. A device may generate a subspace based at least in part on a set of representative feature vectors for an object. The device may obtain an array of pixels representing an image. The device may determine a probe feature vector for the image by applying a convolutional operation to the array of pixels. The device may create a reconstructed feature vector in the subspace based at least in part on the set of representative feature vectors and the probe feature vector. The device may compare the reconstructed feature vector and the probe feature vector and recognize the object in the image based at least in part on the comparison. For example, the described techniques may support pose invariant facial recognition or other such object recognition applications.
    Type: Grant
    Filed: April 9, 2018
    Date of Patent: May 5, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Lei Wang, Yingyong Qi, Ning Bi
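    Example (illustrative sketch): a minimal NumPy sketch of the comparison outlined in the abstract; the probe feature vector is reconstructed in the subspace spanned by the object's representative feature vectors (here via least squares, an assumption) and the object is recognized when the reconstruction is close to the probe. The cosine-similarity metric and threshold are illustrative.

        import numpy as np

        def matches_object(probe, representative_vectors, threshold=0.9):
            """probe: shape (d,) feature vector from a convolutional feature extractor;
            representative_vectors: shape (k, d), one row per representative vector."""
            basis = representative_vectors.T                        # (d, k) subspace basis
            coeffs, *_ = np.linalg.lstsq(basis, probe, rcond=None)  # least-squares coefficients (assumption)
            reconstructed = basis @ coeffs                          # reconstructed feature vector

            cos_sim = reconstructed @ probe / (np.linalg.norm(reconstructed) * np.linalg.norm(probe) + 1e-9)
            return cos_sim >= threshold

        reps = np.random.randn(5, 128)                 # 5 representative vectors for one enrolled face
        probe = reps.mean(axis=0) + 0.05 * np.random.randn(128)
        print(matches_object(probe, reps))             # likely True: the probe lies near the subspace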
  • Patent number: 10636205
    Abstract: A method performed by an electronic device is described. The method includes incrementally adding a current node to a graph. The method also includes incrementally determining a respective adaptive edge threshold for each candidate edge between the current node and one or more candidate neighbor nodes. The method further includes determining whether to accept or reject each candidate edge based on each respective adaptive edge threshold. The method additionally includes performing refining based on the graph to produce refined data. The method also includes producing a three-dimensional (3D) model based on the refined data.
    Type: Grant
    Filed: January 5, 2018
    Date of Patent: April 28, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Yan Deng, Michel Adib Sarkis, Yingyong Qi
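    Example (illustrative sketch): a minimal Python sketch of the incremental step the abstract describes; a node is added and each candidate edge is accepted or rejected against its own adaptive threshold. The abstract does not give the threshold rule, so the rule below (a scaled mean of the edge lengths already incident on the candidate, with a fallback) is purely an assumption.

        import numpy as np

        def add_node_with_adaptive_edges(positions, edges, node_id, node_pos, candidate_ids, scale=1.2):
            """Incrementally add one node; accept or reject each candidate edge against a
            per-edge adaptive threshold (the threshold rule here is an assumption)."""
            positions[node_id] = node_pos
            edges.setdefault(node_id, {})
            if not candidate_ids:
                return
            dists = {c: float(np.linalg.norm(node_pos - positions[c])) for c in candidate_ids}
            fallback = float(np.mean(list(dists.values())))
            for c, d in dists.items():
                incident = list(edges.get(c, {}).values())
                threshold = scale * (float(np.mean(incident)) if incident else fallback)
                if d <= threshold:                               # accept the candidate edge
                    edges[node_id][c] = d
                    edges.setdefault(c, {})[node_id] = d
                # otherwise the candidate edge is rejected

        positions, edges = {}, {}
        add_node_with_adaptive_edges(positions, edges, 0, np.array([0.0, 0.0, 0.0]), [])
        add_node_with_adaptive_edges(positions, edges, 1, np.array([0.1, 0.0, 0.0]), [0])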
  • Patent number: 10628965
    Abstract: A method is described. The method includes determining normalized radiance of an image sequence based on a camera response function (CRF). The method also includes determining one or more reliability images of the image sequence based on a reliability function corresponding to the CRF. The method further includes extracting features based on the normalized radiance of the image sequence. The method additionally includes optimizing a model based on the extracted features and the reliability images.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: April 21, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Yan Deng, Michel Adib Sarkis, Yingyong Qi
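    Example (illustrative sketch): a minimal NumPy sketch of the first two steps in the abstract, normalizing an 8-bit frame to radiance through an inverse camera response function (CRF) and deriving a per-pixel reliability image from the same CRF. Modeling the inverse CRF as a lookup table and tying reliability to the CRF's slope are illustrative assumptions, not the patented reliability function.

        import numpy as np

        def normalized_radiance_and_reliability(frame_u8, inverse_crf):
            """frame_u8: 8-bit image; inverse_crf: length-256 lookup table mapping pixel
            values to relative radiance. Reliability is taken to be high where the
            inverse CRF changes slowly (an illustrative choice)."""
            radiance = inverse_crf[frame_u8]                     # normalized radiance image

            slope = np.gradient(inverse_crf)                     # d(radiance) / d(pixel value)
            reliability_lut = 1.0 / (1.0 + slope / (slope.mean() + 1e-12))
            reliability = reliability_lut[frame_u8]              # per-pixel reliability image
            return radiance, reliability

        inverse_crf = np.linspace(0, 1, 256) ** 2.2              # toy gamma-style inverse CRF
        frame = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
        radiance, reliability = normalized_radiance_and_reliability(frame, inverse_crf)
        # Feature extraction and model optimization would then operate on `radiance`,
        # weighted by `reliability`.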
  • Patent number: 10620826
    Abstract: A method includes receiving a user input (e.g., a one-touch user input), performing segmentation to generate multiple candidate regions of interest (ROIs) in response to the user input, and performing ROI fusion to generate a final ROI (e.g., for a computer vision application). In some cases, the segmentation may include motion-based segmentation, color-based segmentation, or a combination thereof. Further, in some cases, the ROI fusion may include intraframe (or spatial) ROI fusion, temporal ROI fusion, or a combination thereof.
    Type: Grant
    Filed: August 28, 2014
    Date of Patent: April 14, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Xin Zhong, Dashan Gao, Yu Sun, Yingyong Qi, Baozhong Zheng, Marc Bosch Ruiz, Nagendra Kamath
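    Example (illustrative sketch): a minimal Python sketch of intraframe (spatial) ROI fusion; candidate ROIs produced by segmentation around the one-touch point are merged into a single final ROI. The seed selection and the IoU-based merge rule are assumptions for illustration.

        def fuse_rois(candidate_rois, touch_xy, iou_min=0.3):
            """Fuse candidate (x0, y0, x1, y1) ROIs around a one-touch point into a final ROI."""
            def contains(roi, pt):
                return roi[0] <= pt[0] <= roi[2] and roi[1] <= pt[1] <= roi[3]

            def iou(a, b):
                ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
                ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
                inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
                area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
                return inter / (area(a) + area(b) - inter + 1e-9)

            seeds = [r for r in candidate_rois if contains(r, touch_xy)]
            if not seeds:
                return None
            final = seeds[0]
            for roi in candidate_rois:
                if iou(final, roi) >= iou_min:               # merge overlapping candidates
                    final = (min(final[0], roi[0]), min(final[1], roi[1]),
                             max(final[2], roi[2]), max(final[3], roi[3]))
            return final

        final_roi = fuse_rois([(10, 10, 60, 60), (20, 15, 70, 65), (200, 200, 240, 240)], touch_xy=(30, 30))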
  • Publication number: 20200082062
    Abstract: Techniques and systems are provided for authenticating a user of a device. For example, input biometric data associated with a person can be obtained. A similarity score for the input biometric data can be determined by comparing the input biometric data to a set of templates that include reference biometric data associated with the user. The similarity score can be compared to an authentication threshold. The person is authenticated as the user when the similarity score is greater than the authentication threshold. The similarity score can also be compared to a learning threshold that is greater than the authentication threshold. A new template including features of the input biometric data is saved for the user when the similarity score is less than the learning threshold and greater than the authentication threshold.
    Type: Application
    Filed: September 7, 2018
    Publication date: March 12, 2020
    Inventors: Eyasu Zemene MEQUANINT, Shuai ZHANG, Yingyong QI, Ning BI
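    Example (illustrative sketch): the abstract's two-threshold decision translated directly into Python. The person is authenticated when the similarity score exceeds the authentication threshold, and a new template is additionally saved when the score also falls below the higher learning threshold; the data types and template store are assumptions.

        def authenticate_and_maybe_learn(similarity, auth_threshold, learning_threshold,
                                         templates, input_features):
            """Mirror of the decision logic in the abstract."""
            assert learning_threshold > auth_threshold
            authenticated = similarity > auth_threshold
            if authenticated and similarity < learning_threshold:
                templates.append(input_features)     # save a new template for this user
            return authenticated

        templates = []
        ok = authenticate_and_maybe_learn(0.83, auth_threshold=0.80, learning_threshold=0.95,
                                          templates=templates, input_features="features-0")
        # ok is True and a new template was saved, since 0.80 < 0.83 < 0.95.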
  • Publication number: 20200074747
    Abstract: A method performed by an electronic device is described. The method includes receiving a set of frames. The set of frames describes a moving three-dimensional (3D) object. The method also includes registering the set of frames based on a canonical model. The canonical model includes geometric information and optical information. The method additionally includes fusing frame information of each frame to the canonical model based on the registration. The method further includes reconstructing the 3D object based on the canonical model.
    Type: Application
    Filed: August 30, 2018
    Publication date: March 5, 2020
    Inventors: Yan Deng, Michel Adib Sarkis, Yingyong Qi
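    Example (illustrative sketch): a minimal Python sketch of the fusion step in the abstract, in which each registered frame's geometric and optical information is fused into a canonical model; here the fusion is a running weighted average over a fixed set of points, which is purely an assumption, and registration is taken as already done upstream.

        import numpy as np

        class CanonicalModel:
            """Toy canonical model holding geometric (per-point position) and optical
            (per-point color) information, fused as running weighted averages."""
            def __init__(self, n_points):
                self.positions = np.zeros((n_points, 3))
                self.colors = np.zeros((n_points, 3))
                self.weights = np.zeros(n_points)

            def fuse_frame(self, registered_positions, colors, frame_weight=1.0):
                """Fuse one registered frame's information into the canonical model."""
                total = self.weights[:, None] + frame_weight
                self.positions = (self.positions * self.weights[:, None] + registered_positions * frame_weight) / total
                self.colors = (self.colors * self.weights[:, None] + colors * frame_weight) / total
                self.weights += frame_weight

        model = CanonicalModel(n_points=1000)
        for _ in range(5):
            model.fuse_frame(np.random.rand(1000, 3), np.random.rand(1000, 3))
        # The 3D object would then be reconstructed from the fused canonical model.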
  • Patent number: 10552970
    Abstract: A depth based scanning system can be configured to determine whether pixel depth values of a depth map are within a depth range; determine a component of the depth map comprised of connected pixels each with a depth value within the depth range; replace the depth values of any pixels of the depth map that are not connected pixels; determine whether each pixel of the connected pixels of the component has at least a threshold number of neighboring pixels that have a depth value within the depth range; and for each pixel of the connected pixels of the component, if the pixel is determined to have at least the threshold number of neighboring pixels, replace its depth value with a filtered depth value that is based on the depth values of the neighboring pixels that have a depth value within the depth range.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: February 4, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Kuang-Man Huang, Michel Adib Sarkis, Yingyong Qi
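    Example (illustrative sketch): a NumPy/SciPy sketch that follows the steps listed in the abstract, keeping the connected component of in-range depth pixels, replacing the depth of non-connected pixels, and smoothing each well-supported pixel with a filtered value from its in-range neighbors. The choice of the largest component, the replacement value, and the mean filter are assumptions.

        import numpy as np
        from scipy import ndimage

        def filter_depth_map(depth, d_min, d_max, min_neighbors=5, invalid=0.0):
            in_range = (depth >= d_min) & (depth <= d_max)

            labels, n = ndimage.label(in_range)                          # connected components of in-range pixels
            if n == 0:
                return np.full_like(depth, invalid)
            largest = np.argmax(np.bincount(labels.ravel())[1:]) + 1     # keep the largest component (assumption)
            component = labels == largest

            out = np.where(component, depth, invalid)                    # replace non-connected pixels

            kernel = np.ones((3, 3)); kernel[1, 1] = 0                   # 8-neighborhood
            neighbor_count = ndimage.convolve(in_range.astype(float), kernel, mode="constant")
            neighbor_sum = ndimage.convolve(np.where(in_range, depth, 0.0), kernel, mode="constant")

            well_supported = component & (neighbor_count >= min_neighbors)
            out[well_supported] = neighbor_sum[well_supported] / neighbor_count[well_supported]
            return out

        filtered = filter_depth_map(np.random.uniform(0.2, 3.0, (240, 320)), d_min=0.5, d_max=1.5)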
  • Publication number: 20200025877
    Abstract: Techniques and systems are provided for performing object verification using radar images. For example, a first radar image and a second radar image are obtained, and features are extracted from the first radar image and the second radar image. A similarity is determined between an object represented by the first radar image and an object represented by the second radar image based on the features extracted from the first radar image and the features extracted from the second radar image. A determined similarity between these two sets of features is used to determine whether the object represented by the first radar image matches the object represented by the second radar image. Distances between the features in the two radar images can optionally also be compared and used to determine object similarity. The objects in the radar images may optionally be faces.
    Type: Application
    Filed: February 11, 2019
    Publication date: January 23, 2020
    Inventors: Michel Adib SARKIS, Ning BI, Yingyong QI, Amichai SANDEROVICH, Evyatar HEMO
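    Example (illustrative sketch): a minimal NumPy sketch of the verification step in the abstract; features extracted from two radar images are compared and a match is declared when their similarity is high enough. The feature extractor itself is not shown, and the cosine metric and threshold are assumptions.

        import numpy as np

        def radar_objects_match(features_a, features_b, threshold=0.8):
            """Compare feature vectors extracted from two radar images."""
            cos_sim = features_a @ features_b / (
                np.linalg.norm(features_a) * np.linalg.norm(features_b) + 1e-9)
            return bool(cos_sim >= threshold)

        f1, f2 = np.random.randn(256), np.random.randn(256)
        print(radar_objects_match(f1, f1 + 0.05 * f2))   # near-identical features, likely a match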
  • Publication number: 20190349365
    Abstract: Embodiments described herein can address these and other issues by applying machine learning to radio frequency (RF) radar data to perform object identification, including facial recognition. In particular, embodiments may obtain I/Q samples by transmitting and receiving a plurality of data packets with a respective plurality of transmitter antenna elements and receiver antenna elements. I/Q samples indicative of channel impulse responses of an identification region, obtained from the transmission and reception of the plurality of data packets, may then be used to identify, with an autoencoder, a physical object in the identification region.
    Type: Application
    Filed: May 7, 2019
    Publication date: November 14, 2019
    Inventors: Sharad SAMBHWANI, Amichai SANDEROVICH, Evyatar HEMO, Evgeny LEVITAN, Eran HOF, Mohammad Faroq SALAMA, Michel Adib SARKIS, Ning BI, Yingyong QI
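    Example (illustrative sketch): a minimal PyTorch sketch of an autoencoder over flattened I/Q channel-impulse-response samples, with the real and imaginary parts concatenated into one input vector. The layer sizes, the flattening, and the use of the bottleneck embedding for identification are assumptions, not the claimed model.

        import torch
        import torch.nn as nn

        class CIRAutoencoder(nn.Module):
            """Toy autoencoder over flattened I/Q channel impulse responses."""
            def __init__(self, n_taps=128, latent=32):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(2 * n_taps, 256), nn.ReLU(),
                                             nn.Linear(256, latent))
                self.decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                             nn.Linear(256, 2 * n_taps))

            def forward(self, x):
                z = self.encoder(x)              # bottleneck embedding, usable as an identity signature
                return self.decoder(z), z

        # One measured CIR: 128 complex taps, real and imaginary parts concatenated.
        cir = torch.randn(1, 256)
        reconstruction, embedding = CIRAutoencoder()(cir)
        # Identification (assumption): compare `embedding` to enrolled embeddings, or score
        # identities by reconstruction error from per-identity autoencoders.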
  • Publication number: 20190346536
    Abstract: Embodiments described herein can address these and other issues by applying machine learning to radio frequency (RF) radar data to perform object identification, including facial recognition. In particular, embodiments may obtain I/Q samples by transmitting and receiving a plurality of data packets with a respective plurality of transmitter antenna elements and receiver antenna elements, where each data packet of the plurality of data packets comprises one or more complementary pairs of Golay sequences. I/Q samples indicative of channel impulse responses of an identification region, obtained from the transmission and reception of the plurality of data packets, may then be used to identify, with a random forest model, a physical object in the identification region.
    Type: Application
    Filed: May 7, 2019
    Publication date: November 14, 2019
    Inventors: Sharad SAMBHWANI, Amichai SANDEROVICH, Evyatar HEMO, Evgeny LEVITAN, Eran HOF, Mohammad Faroq SALAMA, Michel Adib SARKIS, Ning BI, Yingyong QI
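    Example (illustrative sketch): a short NumPy demonstration of the complementary Golay pair property that makes such sequences attractive for the packets described in the abstract; the autocorrelations of the two sequences sum to an ideal impulse, which yields clean channel-impulse-response estimates. The recursive construction shown is the standard one; the downstream random forest step is only noted in a comment.

        import numpy as np

        def golay_pair(n_iters=5):
            """Standard recursive construction of a complementary Golay pair of length 2**n_iters."""
            a, b = np.array([1.0]), np.array([1.0])
            for _ in range(n_iters):
                a, b = np.concatenate([a, b]), np.concatenate([a, -b])
            return a, b

        a, b = golay_pair(5)                                   # length-32 pair
        auto = lambda s: np.correlate(s, s, mode="full")
        impulse = auto(a) + auto(b)                            # sidelobes cancel
        assert abs(impulse[len(a) - 1] - 2 * len(a)) < 1e-9    # peak of 2N at zero lag
        assert np.allclose(np.delete(impulse, len(a) - 1), 0)  # zero at every other lag

        # The resulting channel-impulse-response estimates would then feed a random
        # forest classifier (e.g., sklearn.ensemble.RandomForestClassifier) for object
        # identification; the feature design is not specified in the abstract.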
  • Patent number: 10474145
    Abstract: An apparatus includes a first sensor configured to generate first sensor data. The first sensor data is related to an occupant of a vehicle. The apparatus further includes a depth sensor and a processor. The depth sensor is configured to generate data corresponding to a volume associated with at least a portion of the occupant. The processor is configured to receive the first sensor data and to activate the depth sensor based on the first sensor data.
    Type: Grant
    Filed: November 8, 2016
    Date of Patent: November 12, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Feng Guo, Yingyong Qi, Ning Bi, Bolan Jiang, Chienchung Chang
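    Example (illustrative sketch): a minimal Python sketch of the gating behavior in the abstract, where data from a lightweight first sensor decides when to activate the depth sensor. The class names, the occupancy score, and the trigger condition are assumptions.

        class OccupantMonitor:
            """Gate a depth sensor on a first sensor's reading."""
            def __init__(self, depth_sensor, presence_threshold=0.5):
                self.depth_sensor = depth_sensor
                self.presence_threshold = presence_threshold

            def on_first_sensor_data(self, occupancy_score):
                if occupancy_score >= self.presence_threshold and not self.depth_sensor.active:
                    self.depth_sensor.activate()     # only power the depth sensor when an occupant is likely

        class FakeDepthSensor:
            active = False
            def activate(self):
                self.active = True

        monitor = OccupantMonitor(FakeDepthSensor())
        monitor.on_first_sensor_data(0.9)            # the depth sensor is activated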
  • Publication number: 20190311183
    Abstract: Methods, systems, and devices for object recognition are described. A device may generate a subspace based at least in part on a set of representative feature vectors for an object. The device may obtain an array of pixels representing an image. The device may determine a probe feature vector for the image by applying a convolutional operation to the array of pixels. The device may create a reconstructed feature vector in the subspace based at least in part on the set of representative feature vectors and the probe feature vector. The device may compare the reconstructed feature vector and the probe feature vector and recognize the object in the image based at least in part on the comparison. For example, the described techniques may support pose invariant facial recognition or other such object recognition applications.
    Type: Application
    Filed: April 9, 2018
    Publication date: October 10, 2019
    Inventors: Lei Wang, Yingyong Qi, Ning Bi
  • Patent number: 10395385
    Abstract: In various implementations, object tracking in a video content analysis system can be augmented with an image-based object re-identification system (e.g., for person re-identification or re-identification of other objects) to improve object tracking results for objects moving in a scene. The object re-identification system can use image recognition principles, which can be enhanced by considering data provided by object trackers that can be output by an object traffic system. In a testing stage, the object re-identification system can selectively test object trackers against object models. For most input video frames, not all object trackers need be tested against all object models. Additionally, different types of object trackers can be tested differently, so that a context provided by each object tracker can be considered. In a training stage, object models can also be selectively updated.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: August 27, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Yang Zhou, Ying Chen, Yingyong Qi, Ning Bi
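    Example (illustrative sketch): a minimal NumPy sketch of the selective testing the abstract describes; only trackers that actually need re-identification are compared against stored object models, and a matched model is updated in the training stage. The selection rule, similarity metric, and model update are illustrative assumptions.

        import numpy as np

        def selective_reidentification(trackers, object_models, similarity_min=0.7):
            """Test only the trackers that need re-identification against the object models."""
            for trk in trackers:
                if trk["state"] not in ("new", "lost"):          # confident trackers skip testing this frame
                    continue
                feats = trk["features"]
                scores = {oid: float(feats @ m / (np.linalg.norm(feats) * np.linalg.norm(m) + 1e-9))
                          for oid, m in object_models.items()}
                if scores:
                    best_id, best = max(scores.items(), key=lambda kv: kv[1])
                    if best >= similarity_min:
                        trk["object_id"] = best_id
                        object_models[best_id] = 0.9 * object_models[best_id] + 0.1 * feats   # selective model update

        models = {"person-1": np.random.randn(64)}
        trackers = [{"state": "new", "features": models["person-1"] + 0.05 * np.random.randn(64)}]
        selective_reidentification(trackers, models)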
  • Publication number: 20190220653
    Abstract: Methods, systems, and devices for object recognition are described. Generally, the described techniques provide for a compact and efficient convolutional neural network (CNN) model for facial recognition. The proposed techniques relate to a light model with a set of layers of convolution and one fully connected layer for feature representation. A new building block for each convolution layer is proposed. A maximum feature map (MFM) operation may be employed to reduce channels (e.g., by combining two or more channels via maximum feature selection within the channels). Depth-wise separable convolution may be employed for computation reduction (e.g., reduction of convolution computation). Batch normalization may be applied to normalize the output of the convolution layers and the fully connected layer (e.g., to prevent overfitting). The described techniques provide a compact and efficient CNN model which can be used for efficient and effective face recognition.
    Type: Application
    Filed: January 12, 2018
    Publication date: July 18, 2019
    Inventors: Lei Wang, Ning Bi, Yingyong Qi
  • Publication number: 20190220987
    Abstract: A depth based scanning system can be configured to determine whether pixel depth values of a depth map are within a depth range; determine a component of the depth map comprised of connected pixels each with a depth value within the depth range; replace the depth values of any pixels of the depth map that are not connected pixels; determine whether each pixel of the connected pixels of the component has at least a threshold number of neighboring pixels that have a depth value within the depth range; and for each pixel of the connected pixels of the component, if the pixel is determined to have at least the threshold number of neighboring pixels, replace its depth value with a filtered depth value that is based on the depth values of the neighboring pixels that have a depth value within the depth range.
    Type: Application
    Filed: January 12, 2018
    Publication date: July 18, 2019
    Inventors: Kuang-Man Huang, Michel Adib Sarkis, Yingyong Qi
  • Publication number: 20190213787
    Abstract: A method performed by an electronic device is described. The method includes incrementally adding a current node to a graph. The method also includes incrementally determining a respective adaptive edge threshold for each candidate edge between the current node and one or more candidate neighbor nodes. The method further includes determining whether to accept or reject each candidate edge based on each respective adaptive edge threshold. The method additionally includes performing refining based on the graph to produce refined data. The method also includes producing a three-dimensional (3D) model based on the refined data.
    Type: Application
    Filed: January 5, 2018
    Publication date: July 11, 2019
    Inventors: Yan Deng, Michel Adib Sarkis, Yingyong Qi
  • Publication number: 20190156515
    Abstract: A method is described. The method includes determining normalized radiance of an image sequence based on a camera response function (CRF). The method also includes determining one or more reliability images of the image sequence based on a reliability function corresponding to the CRF. The method further includes extracting features based on the normalized radiance of the image sequence. The method additionally includes optimizing a model based on the extracted features and the reliability images.
    Type: Application
    Filed: May 25, 2018
    Publication date: May 23, 2019
    Inventors: Yan Deng, Michel Adib Sarkis, Yingyong Qi