Patents by Inventor Yingyong Qi

Yingyong Qi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12236614
    Abstract: Systems and techniques are provided for performing scene segmentation and object tracking. For example, a method for processing one or more frames is provided. The method may include determining first one or more features from a first frame. The first frame includes a target object. The method may include obtaining a first mask associated with the first frame. The first mask includes an indication of the target object. The method may further include generating, based on the first mask and the first one or more features, a representation of a foreground and a background of the first frame. The method may include determining second one or more features from a second frame and determining, based on the representation of the foreground and the background of the first frame and the second one or more features, a location of the target object in the second frame.
    Type: Grant
    Filed: May 31, 2022
    Date of Patent: February 25, 2025
    Assignee: QUALCOMM Incorporated
    Inventors: Jiancheng Lyu, Dashan Gao, Yingyong Qi, Shuai Zhang, Ning Bi
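The foreground/background representation described in this abstract can be illustrated with a minimal prototype-matching sketch: mean feature vectors for the masked and unmasked regions, then nearest-prototype labeling in the next frame. This is an assumed simplification for illustration, not the claimed implementation.

```python
import numpy as np

def fg_bg_prototypes(features, mask):
    """Average the first frame's features inside and outside the target
    mask into foreground/background prototype vectors (one simple form
    of a foreground/background representation)."""
    m = mask.astype(bool)
    return features[m].mean(axis=0), features[~m].mean(axis=0)

def locate_target(features, fg, bg):
    """Label each position of the second frame's feature map as foreground
    when it is closer to the foreground prototype, and return the centroid
    of those positions as the estimated target location."""
    d_fg = np.linalg.norm(features - fg, axis=-1)
    d_bg = np.linalg.norm(features - bg, axis=-1)
    fg_px = np.argwhere(d_fg < d_bg)
    return fg_px.mean(axis=0)  # (row, col) centroid
```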
  • Publication number: 20240394893
    Abstract: Systems, methods, and computer-readable media are provided for performing image segmentation with depth filtering. In some examples, a method can include obtaining a frame capturing a scene; generating, based on the frame, a first segmentation map including a target segmentation mask identifying a target of interest and one or more background masks identifying one or more background regions of the frame; and generating a second segmentation map including the first segmentation map with the one or more background masks filtered out, the one or more background masks being filtered from the first segmentation map based on a depth map associated with the frame.
    Type: Application
    Filed: December 1, 2021
    Publication date: November 28, 2024
    Inventors: Yingyong QI, Xin LI, Xiaowen YING, Shuai ZHANG
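One way depth can filter background masks, per the abstract's general idea, is to drop masks whose depth statistics diverge from the target's. The rule and threshold below are made up for illustration and are not the claimed filtering criterion.

```python
import numpy as np

def depth_filter(target_mask, background_masks, depth_map, max_gap=1.0):
    """Keep only background masks whose median depth is within `max_gap`
    metres of the target's median depth; the rest would be filtered out
    of the second segmentation map (illustrative rule)."""
    target_depth = np.median(depth_map[target_mask.astype(bool)])
    return [m for m in background_masks
            if abs(np.median(depth_map[m.astype(bool)]) - target_depth) <= max_gap]
```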
  • Publication number: 20240378727
    Abstract: Techniques are provided for image processing. For instance, a process can include obtaining an image; extracting a first set of features at a first scale resolution; extracting a second set of features at a second scale resolution (lower than the first scale resolution); performing a self-attention transform to generate similarity scores for the second set of features; adding the similarity scores to the second set of features to generate a first feature extractor output; up-sampling the first feature extractor output to generate a second feature extractor output; adding the second feature extractor output to the first set of features to generate a third feature extractor output; receiving an instance query; performing a cross-attention transform on the instance query and the first feature extractor output to generate a set of weights; and matrix multiplying the set of weights and the third feature extractor output to generate instance masks.
    Type: Application
    Filed: May 12, 2023
    Publication date: November 14, 2024
    Inventors: Xin LI, Jiancheng LYU, Yingyong QI
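The pipeline in this abstract (self-attention on coarse features, up-sampling, fusion with fine features, cross-attention with instance queries, final matrix multiply) can be sketched at toy scale. Single-head attention without learned projections is an assumption for brevity; the shapes and step order follow the abstract.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def instance_masks(fine, coarse, queries):
    """fine: (Nf, C) first-scale features, coarse: (Nc, C) lower-scale
    features (Nf a multiple of Nc), queries: (Q, C) instance queries."""
    c = fine.shape[1]
    # Self-attention over the coarse features; similarity scores added back.
    out1 = coarse + softmax(coarse @ coarse.T / np.sqrt(c)) @ coarse
    # Nearest-neighbour up-sampling, then fuse with the fine features.
    out3 = fine + np.repeat(out1, fine.shape[0] // coarse.shape[0], axis=0)
    # Cross-attention between the queries and the first extractor output.
    attended = softmax(queries @ out1.T / np.sqrt(c)) @ out1  # (Q, C)
    # Matrix-multiply against the fused features to score every position.
    return attended @ out3.T                                  # (Q, Nf)
```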
  • Patent number: 12141981
    Abstract: Systems and techniques are provided for performing semantic image segmentation using a machine learning system (e.g., including one or more cross-attention transformer layers). For instance, a process can include generating one or more input image features for a frame of image data and generating one or more input depth features for a frame of depth data. One or more fused image features can be determined, at least in part, by fusing the one or more input depth features with the one or more input image features, using a first cross-attention transformer network. One or more segmentation masks can be generated for the frame of image data based on the one or more fused image features.
    Type: Grant
    Filed: February 10, 2022
    Date of Patent: November 12, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Shuai Zhang, Xiaowen Ying, Jiancheng Lyu, Yingyong Qi
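The depth-to-image fusion via cross-attention that this abstract describes can be sketched as a single attention head with no learned projections, image features as queries and depth features as keys/values; a residual add stands in for the fusion step. This is an assumed simplification of the claimed transformer network.

```python
import numpy as np

def fuse_depth_into_image(img_feats, depth_feats):
    """Single-head cross-attention: image features (queries) attend over
    depth features (keys/values); the attended depth is added back as a
    residual to produce fused image features."""
    scale = np.sqrt(img_feats.shape[-1])
    logits = img_feats @ depth_feats.T / scale
    attn = np.exp(logits - logits.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    return img_feats + attn @ depth_feats
```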
  • Publication number: 20240319374
    Abstract: Systems and techniques are described herein for determining depth information. For instance, a method for determining depth information is provided. The method may include transmitting electromagnetic (EM) radiation toward a plurality of points in an environment; comparing a phase of the transmitted EM radiation with a phase of received EM radiation to determine a respective time-of-flight estimate of the EM radiation between transmission and reception for each point of the plurality of points in the environment; determining first depth information based on the respective time-of-flight estimates determined for each point of the plurality of points in the environment; obtaining second depth information based on an image of the environment; comparing the first depth information with the second depth information to determine an inconsistency between the first depth information and the second depth information; and adjusting a depth of the first depth information based on the inconsistency.
    Type: Application
    Filed: March 21, 2023
    Publication date: September 26, 2024
    Inventors: Xiaoliang BAI, Yingyong QI, Li HONG, Ning BI, Xiaoyun JIANG
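The phase-to-depth relationship behind the abstract's time-of-flight step is standard continuous-wave ToF math; the `reconcile` rule (average the two estimates when they disagree) is an illustrative stand-in for the claimed adjustment, not the patented method.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_rad, mod_freq_hz):
    """Continuous-wave ToF: the phase shift between transmitted and
    received EM radiation gives the round-trip time, hence depth."""
    t = phase_rad / (2 * math.pi * mod_freq_hz)  # time of flight
    return C * t / 2                             # halve for the round trip

def reconcile(tof_depths, image_depths, rel_tol=0.1):
    """Where ToF and image-based depth estimates disagree by more than
    `rel_tol` (relative), average them (illustrative adjustment only)."""
    return [0.5 * (a + b) if abs(a - b) > rel_tol * b else a
            for a, b in zip(tof_depths, image_depths)]
```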
  • Publication number: 20240281990
    Abstract: Systems and techniques are provided for performing an accurate object count using monocular three-dimensional (3D) perception. In some examples, a computing device can generate a reference depth map based on a reference frame depicting a volume of interest. The computing device can generate a current depth map based on a current frame depicting the volume of interest and one or more objects. The computing device can compare the current depth map to the reference depth map to determine a respective change in depth for each of the one or more objects. The computing device can further compare the respective change in depth for each object to a threshold. The computing device can determine whether each object is located within the volume of interest based on comparing the respective change in depth for each object to the threshold.
    Type: Application
    Filed: February 22, 2023
    Publication date: August 22, 2024
    Inventors: Xiaoliang BAI, Dashan GAO, Yingyong QI, Ning BI
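The counting logic this abstract describes, comparing each object's depth change against an empty-scene reference, reduces to a per-mask threshold test. The threshold value and mean-change statistic below are assumptions for illustration.

```python
import numpy as np

def count_in_volume(ref_depth, cur_depth, object_masks, threshold=0.2):
    """Count an object as inside the volume of interest when its mean
    change in depth versus the empty reference exceeds `threshold`
    (objects closer to the camera than the empty background)."""
    return sum(
        1 for mask in object_masks
        if np.mean(ref_depth[mask] - cur_depth[mask]) > threshold
    )
```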
  • Publication number: 20240233140
    Abstract: Methods and systems of frame based image segmentation are provided. For example, a method for feature object tracking between frames of video data is provided. The method comprises receiving a first frame of video data, extracting a mask feature for each of one or more objects of the first frame, adjusting the first frame by applying each initial mask and corresponding identification to a respective object of the first frame, and outputting the adjusted first frame. The method further comprises tracking the one or more objects in one or more consecutive frames. The tracking comprises extracting a masked feature for each of one or more objects in the consecutive frame, adjusting the consecutive frame by applying each initial mask and corresponding identification for the consecutive frame to the respective object of the one or more objects of the consecutive frame, and outputting the adjusted consecutive frame.
    Type: Application
    Filed: October 25, 2022
    Publication date: July 11, 2024
    Inventors: Xin Li, Jiancheng Lyu, Yingyong Qi
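Carrying each object's mask identification into consecutive frames, as this abstract describes, is often done with overlap matching; the greedy IoU baseline below is a common stand-in, not necessarily the claimed tracking method.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def propagate_ids(prev_masks, cur_masks, iou_thresh=0.3):
    """Greedy matching: each current-frame mask inherits the ID of the
    best-overlapping previous-frame mask, or None if nothing overlaps
    above `iou_thresh`."""
    ids = {}
    for i, cm in enumerate(cur_masks):
        best_id, best = None, iou_thresh
        for pid, pm in prev_masks.items():
            score = iou(pm, cm)
            if score > best:
                best_id, best = pid, score
        ids[i] = best_id
    return ids
```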
  • Publication number: 20240135549
    Abstract: Methods and systems of frame based image segmentation are provided. For example, a method for feature object tracking between frames of video data is provided. The method comprises receiving a first frame of video data, extracting a mask feature for each of one or more objects of the first frame, adjusting the first frame by applying each initial mask and corresponding identification to a respective object of the first frame, and outputting the adjusted first frame. The method further comprises tracking the one or more objects in one or more consecutive frames. The tracking comprises extracting a masked feature for each of one or more objects in the consecutive frame, adjusting the consecutive frame by applying each initial mask and corresponding identification for the consecutive frame to the respective object of the one or more objects of the consecutive frame, and outputting the adjusted consecutive frame.
    Type: Application
    Filed: October 24, 2022
    Publication date: April 25, 2024
    Inventors: Xin Li, Jiancheng Lyu, Yingyong Qi
  • Patent number: 11887404
    Abstract: Techniques and systems are provided for authenticating a user of a device. For example, input biometric data associated with a person can be obtained. A similarity score for the input biometric data can be determined by comparing the input biometric data to a set of templates that include reference biometric data associated with the user. The similarity score can be compared to an authentication threshold. The person is authenticated as the user when the similarity score is greater than the authentication threshold. The similarity score can also be compared to a learning threshold that is greater than the authentication threshold. A new template including features of the input biometric data is saved for the user when the similarity score is less than the learning threshold and greater than the authentication threshold.
    Type: Grant
    Filed: December 2, 2021
    Date of Patent: January 30, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Eyasu Zemene Mequanint, Shuai Zhang, Yingyong Qi, Ning Bi
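The two-threshold rule in this abstract (authenticate above one threshold, enroll a new template only in the band between the two) is simple enough to state directly in code. The threshold values are made up for illustration.

```python
def check_biometric(similarity, auth_thresh=0.6, learn_thresh=0.9):
    """Two-threshold rule from the abstract: authenticate when the
    similarity score exceeds the authentication threshold; additionally
    save a new template when the score falls between the authentication
    threshold and the higher learning threshold."""
    authenticated = similarity > auth_thresh
    save_new_template = auth_thresh < similarity < learn_thresh
    return authenticated, save_new_template
```

Scores above the learning threshold add no new template: the input is already well covered by existing templates, so enrolling it would add redundancy rather than coverage.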
  • Publication number: 20230386052
    Abstract: Systems and techniques are provided for performing scene segmentation and object tracking. For example, a method for processing one or more frames is provided. The method may include determining first one or more features from a first frame. The first frame includes a target object. The method may include obtaining a first mask associated with the first frame. The first mask includes an indication of the target object. The method may further include generating, based on the first mask and the first one or more features, a representation of a foreground and a background of the first frame. The method may include determining second one or more features from a second frame and determining, based on the representation of the foreground and the background of the first frame and the second one or more features, a location of the target object in the second frame.
    Type: Application
    Filed: May 31, 2022
    Publication date: November 30, 2023
    Inventors: Jiancheng LYU, Dashan GAO, Yingyong QI, Shuai ZHANG, Ning BI
  • Patent number: 11776129
    Abstract: Examples are described of segmenting an image into image regions based on depicted categories of objects, and for refining the image regions semantically. For example, a system can determine that a first image region in an image depicts a first category of object. The system can generate a color distance map of the first image region that maps color distance values to each pixel in the first image region. A color distance value quantifies a difference between a color value of a pixel in the first image region and a color value of a sample pixel in a second image region in the image. The system can process the image based on a refined variant of the first image region that is refined based on the color distance map, for instance by removing pixels from the first image region whose color distances fall below a color distance threshold.
    Type: Grant
    Filed: December 16, 2020
    Date of Patent: October 3, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Eyasu Zemene Mequanint, Yingyong Qi, Ning Bi
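The color-distance refinement in this abstract maps directly to a mask operation: compute each region pixel's color distance to a sample pixel of a neighbouring region and drop pixels below the threshold. The threshold value below is an assumption for illustration.

```python
import numpy as np

def refine_region(image, region_mask, sample_pixel, threshold=30.0):
    """Drop pixels from the region whose color distance to a sample
    pixel of a second region falls below `threshold` (i.e. pixels that
    look like they belong to the other region)."""
    diff = image.astype(float) - image[sample_pixel].astype(float)
    dist = np.linalg.norm(diff, axis=-1)  # per-pixel color distance map
    return region_mask & (dist >= threshold)
```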
  • Publication number: 20230306600
    Abstract: Systems and techniques are provided for performing semantic image segmentation using a machine learning system (e.g., including one or more cross-attention transformer layers). For instance, a process can include generating one or more input image features for a frame of image data and generating one or more input depth features for a frame of depth data. One or more fused image features can be determined, at least in part, by fusing the one or more input depth features with the one or more input image features, using a first cross-attention transformer network. One or more segmentation masks can be generated for the frame of image data based on the one or more fused image features.
    Type: Application
    Filed: February 10, 2022
    Publication date: September 28, 2023
    Inventors: Shuai ZHANG, Xiaowen YING, Jiancheng LYU, Yingyong QI
  • Patent number: 11391819
    Abstract: Techniques and systems are provided for performing object verification using radar images. For example, a first radar image and a second radar image are obtained, and features are extracted from the first radar image and the second radar image. A similarity is determined between an object represented by the first radar image and an object represented by the second radar image based on the features extracted from the first radar image and the features extracted from the second radar image. A determined similarity between these two sets of features is used to determine whether the object represented by the first radar image matches the object represented by the second radar image. Distances between the features in the two radar images can optionally also be compared and used to determine object similarity. The objects in the radar images may optionally be faces.
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: July 19, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Michel Adib Sarkis, Ning Bi, Yingyong Qi, Amichai Sanderovich, Evyatar Hemo
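Feature-similarity verification, as this abstract describes for radar images, is commonly scored with cosine similarity and a match threshold; both the metric and the threshold value here are illustrative assumptions, not the claimed comparison.

```python
import numpy as np

def objects_match(feat_a, feat_b, thresh=0.8):
    """Cosine similarity between feature vectors extracted from two
    radar images; declare a match when similarity reaches `thresh`."""
    sim = feat_a @ feat_b / (np.linalg.norm(feat_a) * np.linalg.norm(feat_b))
    return sim >= thresh
```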
  • Patent number: 11391817
    Abstract: Embodiments described herein apply machine learning to radio frequency (RF) radar data to perform object identification, including facial recognition. In particular, embodiments may obtain I/Q samples by transmitting and receiving a plurality of data packets with a respective plurality of transmitter antenna elements and receiver antenna elements, where each data packet of the plurality of data packets comprises one or more complementary pairs of Golay sequences. I/Q samples indicative of the channel impulse responses of an identification region, obtained from the transmission and reception of the plurality of data packets, may then be used to identify, with a random forest model, a physical object in the identification region.
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: July 19, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Sharad Sambhwani, Amichai Sanderovich, Evyatar Hemo, Evgeny Levitan, Eran Hof, Mohammad Faroq Salama, Michel Adib Sarkis, Ning Bi, Yingyong Qi
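The reason Golay sequences suit channel-impulse-response estimation is the complementary-pair property: the autocorrelation sidelobes of the two sequences cancel exactly, leaving a clean delta. The smallest pair demonstrates this:

```python
import numpy as np

def autocorr(x):
    """Full autocorrelation of a sequence."""
    return np.correlate(x, x, mode="full")

# The smallest complementary Golay pair.
a = np.array([1, 1])
b = np.array([1, -1])

# Sidelobes cancel: the summed autocorrelation is a scaled delta,
# which is what makes channel-impulse-response estimates sharp.
combined = autocorr(a) + autocorr(b)
```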
  • Patent number: 11372086
    Abstract: Embodiments described herein apply machine learning to radio frequency (RF) radar data to perform object identification, including facial recognition. In particular, embodiments may obtain I/Q samples by transmitting and receiving a plurality of data packets with a respective plurality of transmitter antenna elements and receiver antenna elements. I/Q samples indicative of the channel impulse responses of an identification region, obtained from the transmission and reception of the plurality of data packets, may then be used to identify, with an autoencoder, a physical object in the identification region.
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: June 28, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Sharad Sambhwani, Amichai Sanderovich, Evyatar Hemo, Evgeny Levitan, Eran Hof, Mohammad Faroq Salama, Michel Adib Sarkis, Ning Bi, Yingyong Qi
  • Publication number: 20220189029
    Abstract: Examples are described of segmenting an image into image regions based on depicted categories of objects, and for refining the image regions semantically. For example, a system can determine that a first image region in an image depicts a first category of object. The system can generate a color distance map of the first image region that maps color distance values to each pixel in the first image region. A color distance value quantifies a difference between a color value of a pixel in the first image region and a color value of a sample pixel in a second image region in the image. The system can process the image based on a refined variant of the first image region that is refined based on the color distance map, for instance by removing pixels from the first image region whose color distances fall below a color distance threshold.
    Type: Application
    Filed: December 16, 2020
    Publication date: June 16, 2022
    Inventors: Eyasu Zemene MEQUANINT, Yingyong QI, Ning BI
  • Publication number: 20220164426
    Abstract: Techniques and systems are provided for authenticating a user of a device. For example, input biometric data associated with a person can be obtained. A similarity score for the input biometric data can be determined by comparing the input biometric data to a set of templates that include reference biometric data associated with the user. The similarity score can be compared to an authentication threshold. The person is authenticated as the user when the similarity score is greater than the authentication threshold. The similarity score can also be compared to a learning threshold that is greater than the authentication threshold. A new template including features of the input biometric data is saved for the user when the similarity score is less than the learning threshold and greater than the authentication threshold.
    Type: Application
    Filed: December 2, 2021
    Publication date: May 26, 2022
    Inventors: Eyasu Zemene MEQUANINT, Shuai ZHANG, Yingyong QI, Ning BI
  • Patent number: 11227156
    Abstract: Methods, systems, and devices for personalized (e.g., user specific) eye openness estimation are described. A network model (e.g., a convolutional neural network) may be trained using a set of synthetic eye openness image data (e.g., synthetic face images with known degrees or percentages of eye openness) and a set of real eye openness image data (e.g., facial images of real persons that are annotated as either open eyed or closed eyed). A device may estimate, using the network model, a multi-stage eye openness level (e.g., a percentage or degree to which an eye is open) of a user based on captured real time eye openness image data. The degree of eye openness estimated by the network model may then be compared to an eye size of the user (e.g., a user specific maximum eye size), and a user specific eye openness level may be estimated based on the comparison.
    Type: Grant
    Filed: January 3, 2019
    Date of Patent: January 18, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Eyasu Zemene Mequanint, Shuai Zhang, Yingyong Qi, Ning Bi
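The personalization step this abstract describes, comparing the network's raw openness estimate to the user's own maximum eye size, amounts to a normalization; the clamp to [0, 1] is an assumed detail.

```python
def personalized_openness(raw_openness, user_max_eye_size):
    """Normalize the model's raw eye-openness estimate by the user's
    maximum eye size to get a user-specific openness level in [0, 1]."""
    return min(1.0, max(0.0, raw_openness / user_max_eye_size))
```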
  • Patent number: 11216541
    Abstract: Techniques and systems are provided for authenticating a user of a device. For example, input biometric data associated with a person can be obtained. A similarity score for the input biometric data can be determined by comparing the input biometric data to a set of templates that include reference biometric data associated with the user. The similarity score can be compared to an authentication threshold. The person is authenticated as the user when the similarity score is greater than the authentication threshold. The similarity score can also be compared to a learning threshold that is greater than the authentication threshold. A new template including features of the input biometric data is saved for the user when the similarity score is less than the learning threshold and greater than the authentication threshold.
    Type: Grant
    Filed: September 7, 2018
    Date of Patent: January 4, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Eyasu Zemene Mequanint, Shuai Zhang, Yingyong Qi, Ning Bi
  • Patent number: 11158119
    Abstract: A method performed by an electronic device is described. The method includes receiving first optical data and first depth data corresponding to a first frame. The method also includes registering the first depth data to a first canonical model. The method further includes fitting a three-dimensional (3D) morphable model to the first optical data. The method additionally includes registering the 3D morphable model to a second canonical model. The method also includes producing a 3D object reconstruction based on the registered first depth data and the registered 3D morphable model.
    Type: Grant
    Filed: January 9, 2020
    Date of Patent: October 26, 2021
    Assignee: QUALCOMM Incorporated
    Inventors: Yan Deng, Michel Adib Sarkis, Yingyong Qi