Patents by Inventor Yingyong Qi
Yingyong Qi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12236614
Abstract: Systems and techniques are provided for performing scene segmentation and object tracking. For example, a method for processing one or more frames is provided. The method may include determining first one or more features from a first frame. The first frame includes a target object. The method may include obtaining a first mask associated with the first frame. The first mask includes an indication of the target object. The method may further include generating, based on the first mask and the first one or more features, a representation of a foreground and a background of the first frame. The method may include determining second one or more features from a second frame and determining, based on the representation of the foreground and the background of the first frame and the second one or more features, a location of the target object in the second frame.
Type: Grant
Filed: May 31, 2022
Date of Patent: February 25, 2025
Assignee: QUALCOMM Incorporated
Inventors: Jiancheng Lyu, Dashan Gao, Yingyong Qi, Shuai Zhang, Ning Bi
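The foreground/background matching described in this abstract can be sketched roughly as follows. This is an illustrative toy, not the patented implementation: feature extraction is stubbed with random vectors (a real system would use a learned network), and the averaging and nearest-representation rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fg_bg_representation(features, mask):
    """Average feature vectors inside (foreground) and outside (background) the mask."""
    fg = features[mask].mean(axis=0)
    bg = features[~mask].mean(axis=0)
    return fg, bg

def locate_target(features2, fg, bg):
    """Label each position in the second frame by whichever representation it is closer to."""
    d_fg = np.linalg.norm(features2 - fg, axis=-1)
    d_bg = np.linalg.norm(features2 - bg, axis=-1)
    score = d_bg - d_fg          # high where a position looks like foreground
    return np.unravel_index(np.argmax(score), score.shape)

h, w, c = 8, 8, 4
feats1 = rng.normal(size=(h, w, c))      # first one or more features
mask1 = np.zeros((h, w), dtype=bool)
mask1[2:4, 2:4] = True                   # first mask: indication of the target object
feats1[mask1] += 3.0                     # make the target distinctive
fg, bg = fg_bg_representation(feats1, mask1)

feats2 = rng.normal(size=(h, w, c))      # second one or more features
feats2[5, 6] += 3.0                      # target has moved in the second frame
print(locate_target(feats2, fg, bg))
```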
-
Publication number: 20240394893
Abstract: Systems, methods, and computer-readable media are provided for performing image segmentation with depth filtering. In some examples, a method can include obtaining a frame capturing a scene; generating, based on the frame, a first segmentation map including a target segmentation mask identifying a target of interest and one or more background masks identifying one or more background regions of the frame; and generating a second segmentation map including the first segmentation map with the one or more background masks filtered out, the one or more background masks being filtered from the first segmentation map based on a depth map associated with the frame.
Type: Application
Filed: December 1, 2021
Publication date: November 28, 2024
Inventors: Yingyong QI, Xin LI, Xiaowen YING, Shuai ZHANG
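A minimal sketch of the depth-filtering idea: background masks whose depth differs too much from the target's depth are dropped from the segmentation map. The median statistic, the `depth_gap` parameter, and the toy data are illustrative assumptions, not taken from the publication.

```python
import numpy as np

def filter_background_masks(target_mask, bg_masks, depth_map, depth_gap=1.0):
    """Keep only background masks whose depth is close to the target's depth."""
    target_depth = np.median(depth_map[target_mask])
    kept = []
    for m in bg_masks:
        if abs(np.median(depth_map[m]) - target_depth) <= depth_gap:
            kept.append(m)       # close in depth: keep in the final map
    return kept

depth = np.array([[1.0, 1.1, 5.0],
                  [1.0, 1.2, 5.2],
                  [0.9, 1.1, 5.1]])
target = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]], dtype=bool)   # target of interest
far_bg = np.array([[0, 0, 1], [0, 0, 1], [0, 0, 1]], dtype=bool)   # distant wall
print(len(filter_background_masks(target, [far_bg], depth)))  # far wall filtered out
```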
-
Publication number: 20240378727
Abstract: Techniques are provided for image processing. For instance, a process can include obtaining an image; extracting a first set of features at a first scale resolution; extracting a second set of features at a second scale resolution (lower than the first scale resolution); performing a self-attention transform to generate similarity scores for the second set of features; adding the similarity scores to the second set of features to generate a first feature extractor output; up-sampling the first feature extractor output to generate a second feature extractor output; adding the second feature extractor output to the first set of features to generate a third feature extractor output; receiving an instance query; performing a cross-attention transform on the instance query and the first feature extractor output to generate a set of weights; and matrix multiplying the set of weights and the third feature extractor output to generate instance masks.
Type: Application
Filed: May 12, 2023
Publication date: November 14, 2024
Inventors: Xin LI, Jiancheng LYU, Yingyong QI
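The pipeline in this abstract can be traced shape-by-shape in a rough numpy sketch: self-attention on the coarse features, upsampling and addition to the fine features, then cross-attention against an instance query followed by a matrix multiply to produce mask logits. All dimensions, the softmax attention, and the nearest-neighbour upsampling are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
n_fine, n_coarse, c = 16, 4, 8          # fine/coarse spatial positions, channels
fine = rng.normal(size=(n_fine, c))      # first set of features (higher resolution)
coarse = rng.normal(size=(n_coarse, c))  # second set of features (lower resolution)

# Self-attention on the coarse features; similarity scores added back in.
sim = softmax(coarse @ coarse.T) @ coarse
out1 = coarse + sim                      # first feature extractor output

# Up-sample (nearest-neighbour repeat) and add to the fine features.
out2 = np.repeat(out1, n_fine // n_coarse, axis=0)   # second output
out3 = fine + out2                                   # third output

# Cross-attention between the instance query and the first output gives weights;
# multiplying through to the third output yields per-instance mask logits.
query = rng.normal(size=(1, c))          # instance query
weights = softmax(query @ out1.T)        # (1, n_coarse)
mask_logits = weights @ out1 @ out3.T    # (1, n_fine)
print(mask_logits.shape)
```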
-
Patent number: 12141981
Abstract: Systems and techniques are provided for performing semantic image segmentation using a machine learning system (e.g., including one or more cross-attention transformer layers). For instance, a process can include generating one or more input image features for a frame of image data and generating one or more input depth features for a frame of depth data. One or more fused image features can be determined, at least in part, by fusing the one or more input depth features with the one or more input image features, using a first cross-attention transformer network. One or more segmentation masks can be generated for the frame of image data based on the one or more fused image features.
Type: Grant
Filed: February 10, 2022
Date of Patent: November 12, 2024
Assignee: QUALCOMM Incorporated
Inventors: Shuai Zhang, Xiaowen Ying, Jiancheng Lyu, Yingyong Qi
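The fusion step can be sketched as single-head cross-attention in which the image features act as queries and the depth features supply keys and values, with a residual connection. The head count, dimensions, and residual form are illustrative assumptions, not claims about the patented network.

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention (single head)."""
    scores = queries @ keys.T / np.sqrt(queries.shape[-1])
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn = e / e.sum(axis=-1, keepdims=True)
    return attn @ values

rng = np.random.default_rng(0)
img_feats = rng.normal(size=(32, 16))    # input image features
depth_feats = rng.normal(size=(32, 16))  # input depth features

# Fused image features: image queries attend over depth keys/values.
fused = img_feats + cross_attention(img_feats, depth_feats, depth_feats)
print(fused.shape)
```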
-
Publication number: 20240319374
Abstract: Systems and techniques are described herein for determining depth information. For instance, a method for determining depth information is provided. The method may include transmitting electromagnetic (EM) radiation toward a plurality of points in an environment; comparing a phase of the transmitted EM radiation with a phase of received EM radiation to determine a respective time-of-flight estimate of the EM radiation between transmission and reception for each point of the plurality of points in the environment; determining first depth information based on the respective time-of-flight estimates determined for each point of the plurality of points in the environment; obtaining second depth information based on an image of the environment; comparing the first depth information with the second depth information to determine an inconsistency between the first depth information and the second depth information; and adjusting a depth of the first depth information based on the inconsistency.
Type: Application
Filed: March 21, 2023
Publication date: September 26, 2024
Inventors: Xiaoliang BAI, Yingyong QI, Li HONG, Ning BI, Xiaoyun JIANG
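The phase-comparison step corresponds to the standard continuous-wave time-of-flight relation: a phase shift of Δφ at modulation frequency f implies a distance d = c·Δφ / (4πf). The publication does not spell out this formula; it is the textbook relation, shown here with made-up values.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_depth(phase_shift_rad, mod_freq_hz):
    """Distance implied by the phase offset between transmitted and received EM radiation."""
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

# A pi/2 phase shift at 20 MHz modulation corresponds to roughly 1.87 m.
print(round(phase_to_depth(math.pi / 2, 20e6), 2))
```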
-
Publication number: 20240281990
Abstract: Systems and techniques are provided for performing an accurate object count using monocular three-dimensional (3D) perception. In some examples, a computing device can generate a reference depth map based on a reference frame depicting a volume of interest. The computing device can generate a current depth map based on a current frame depicting the volume of interest and one or more objects. The computing device can compare the current depth map to the reference depth map to determine a respective change in depth for each of the one or more objects. The computing device can further compare the respective change in depth for each object to a threshold. The computing device can determine whether each object is located within the volume of interest based on comparing the respective change in depth for each object to the threshold.
Type: Application
Filed: February 22, 2023
Publication date: August 22, 2024
Inventors: Xiaoliang BAI, Dashan GAO, Yingyong QI, Ning BI
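A hedged sketch of the counting logic: a current depth map is compared against an empty-scene reference, and an object whose per-mask depth change exceeds a threshold is counted as inside the volume of interest. The mean statistic, the threshold value, and the toy depth maps are assumptions for illustration.

```python
import numpy as np

def objects_in_volume(reference_depth, current_depth, object_masks, threshold=0.5):
    """Count objects whose depth change relative to the reference exceeds the threshold."""
    count = 0
    for mask in object_masks:
        change = np.mean(reference_depth[mask] - current_depth[mask])
        if change > threshold:   # object surface is closer than the empty reference
            count += 1
    return count

reference = np.full((4, 4), 3.0)           # empty volume, 3 m to the bottom
current = reference.copy()
obj_a = np.zeros((4, 4), dtype=bool); obj_a[0:2, 0:2] = True
obj_b = np.zeros((4, 4), dtype=bool); obj_b[2:4, 2:4] = True
current[obj_a] = 1.0                       # object A clearly inside the volume
current[obj_b] = 2.9                       # object B barely changes the depth
print(objects_in_volume(reference, current, [obj_a, obj_b]))  # counts only A
```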
-
Publication number: 20240233140
Abstract: Methods and systems of frame based image segmentation are provided. For example, a method for feature object tracking between frames of video data is provided. The method comprises receiving a first frame of video data, extracting a mask feature for each of one or more objects of the first frame, adjusting the first frame by applying each initial mask and corresponding identification to a respective object of the first frame, and outputting the adjusted first frame. The method further comprises tracking the one or more objects in one or more consecutive frames. The tracking comprises extracting a masked feature for each of one or more objects in the consecutive frame, adjusting the consecutive frame by applying each initial mask and corresponding identification for the consecutive frame to the respective object of the one or more objects of the consecutive frame, and outputting the adjusted consecutive frame.
Type: Application
Filed: October 25, 2022
Publication date: July 11, 2024
Inventors: Xin Li, Jiancheng Lyu, Yingyong Qi
-
Publication number: 20240135549
Abstract: Methods and systems of frame based image segmentation are provided. For example, a method for feature object tracking between frames of video data is provided. The method comprises receiving a first frame of video data, extracting a mask feature for each of one or more objects of the first frame, adjusting the first frame by applying each initial mask and corresponding identification to a respective object of the first frame, and outputting the adjusted first frame. The method further comprises tracking the one or more objects in one or more consecutive frames. The tracking comprises extracting a masked feature for each of one or more objects in the consecutive frame, adjusting the consecutive frame by applying each initial mask and corresponding identification for the consecutive frame to the respective object of the one or more objects of the consecutive frame, and outputting the adjusted consecutive frame.
Type: Application
Filed: October 24, 2022
Publication date: April 25, 2024
Inventors: Xin Li, Jiancheng Lyu, Yingyong Qi
-
Patent number: 11887404
Abstract: Techniques and systems are provided for authenticating a user of a device. For example, input biometric data associated with a person can be obtained. A similarity score for the input biometric data can be determined by comparing the input biometric data to a set of templates that include reference biometric data associated with the user. The similarity score can be compared to an authentication threshold. The person is authenticated as the user when the similarity score is greater than the authentication threshold. The similarity score can also be compared to a learning threshold that is greater than the authentication threshold. A new template including features of the input biometric data is saved for the user when the similarity score is less than the learning threshold and greater than the authentication threshold.
Type: Grant
Filed: December 2, 2021
Date of Patent: January 30, 2024
Assignee: QUALCOMM Incorporated
Inventors: Eyasu Zemene Mequanint, Shuai Zhang, Yingyong Qi, Ning Bi
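The two-threshold logic in this abstract can be sketched directly: authenticate above the authentication threshold, and additionally enroll a new template when the score falls between the authentication and learning thresholds. The threshold values and the string stand-ins for biometric features are made up for illustration.

```python
AUTH_THRESHOLD = 0.70
LEARNING_THRESHOLD = 0.90   # greater than the authentication threshold

def process_attempt(similarity_score, templates, input_features):
    """Authenticate and, in the in-between band, save a new template."""
    authenticated = similarity_score > AUTH_THRESHOLD
    if authenticated and similarity_score < LEARNING_THRESHOLD:
        # Genuine user, but the sample differs from stored templates enough
        # to be worth learning: save its features as a new template.
        templates.append(input_features)
    return authenticated

templates = ["enrolled-template"]
print(process_attempt(0.80, templates, "new-sample"), len(templates))      # True 2
print(process_attempt(0.95, templates, "near-duplicate"), len(templates))  # True 2
print(process_attempt(0.60, templates, "impostor"), len(templates))        # False 2
```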
-
Publication number: 20230386052
Abstract: Systems and techniques are provided for performing scene segmentation and object tracking. For example, a method for processing one or more frames is provided. The method may include determining first one or more features from a first frame. The first frame includes a target object. The method may include obtaining a first mask associated with the first frame. The first mask includes an indication of the target object. The method may further include generating, based on the first mask and the first one or more features, a representation of a foreground and a background of the first frame. The method may include determining second one or more features from a second frame and determining, based on the representation of the foreground and the background of the first frame and the second one or more features, a location of the target object in the second frame.
Type: Application
Filed: May 31, 2022
Publication date: November 30, 2023
Inventors: Jiancheng LYU, Dashan GAO, Yingyong QI, Shuai ZHANG, Ning BI
-
Patent number: 11776129
Abstract: Examples are described of segmenting an image into image regions based on depicted categories of objects, and for refining the image regions semantically. For example, a system can determine that a first image region in an image depicts a first category of object. The system can generate a color distance map of the first image region that maps color distance values to each pixel in the first image region. A color distance value quantifies a difference between a color value of a pixel in the first image region and a color value of a sample pixel in a second image region in the image. The system can process the image based on a refined variant of the first image region that is refined based on the color distance map, for instance by removing pixels from the first image region whose color distances fall below a color distance threshold.
Type: Grant
Filed: December 16, 2020
Date of Patent: October 3, 2023
Assignee: QUALCOMM Incorporated
Inventors: Eyasu Zemene Mequanint, Yingyong Qi, Ning Bi
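A sketch of the color-distance refinement: pixels in the first region whose color distance to a sample pixel from a second region falls below a threshold are removed from the region. Euclidean RGB distance and the threshold value are assumptions; the abstract does not commit to a specific metric.

```python
import numpy as np

def refine_region(image, region_mask, sample_pixel_color, threshold=60.0):
    """Drop region pixels whose color is too close to the other region's sample pixel."""
    # Color distance of every pixel to the sample pixel from the second region.
    dist = np.linalg.norm(image.astype(float) - sample_pixel_color, axis=-1)
    refined = region_mask.copy()
    refined[dist < threshold] = False    # too similar to the other region: remove
    return refined

image = np.zeros((2, 2, 3), dtype=np.uint8)
image[0, 0] = (200, 30, 30)    # clearly belongs to the first region (red)
image[0, 1] = (35, 128, 42)    # nearly matches the background sample (green)
region = np.array([[True, True], [False, False]])
background_sample = np.array([30.0, 130.0, 40.0])
print(refine_region(image, region, background_sample).sum())  # one pixel survives
```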
-
Publication number: 20230306600
Abstract: Systems and techniques are provided for performing semantic image segmentation using a machine learning system (e.g., including one or more cross-attention transformer layers). For instance, a process can include generating one or more input image features for a frame of image data and generating one or more input depth features for a frame of depth data. One or more fused image features can be determined, at least in part, by fusing the one or more input depth features with the one or more input image features, using a first cross-attention transformer network. One or more segmentation masks can be generated for the frame of image data based on the one or more fused image features.
Type: Application
Filed: February 10, 2022
Publication date: September 28, 2023
Inventors: Shuai ZHANG, Xiaowen YING, Jiancheng LYU, Yingyong QI
-
Patent number: 11391819
Abstract: Techniques and systems are provided for performing object verification using radar images. For example, a first radar image and a second radar image are obtained, and features are extracted from the first radar image and the second radar image. A similarity is determined between an object represented by the first radar image and an object represented by the second radar image based on the features extracted from the first radar image and the features extracted from the second radar image. A determined similarity between these two sets of features is used to determine whether the object represented by the first radar image matches the object represented by the second radar image. Distances between the features in the two radar images can optionally also be compared and used to determine object similarity. The objects in the radar images may optionally be faces.
Type: Grant
Filed: February 11, 2019
Date of Patent: July 19, 2022
Assignee: QUALCOMM Incorporated
Inventors: Michel Adib Sarkis, Ning Bi, Yingyong Qi, Amichai Sanderovich, Evyatar Hemo
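A minimal sketch of the verification step: extract feature vectors from two radar images, compare them with a similarity measure, and declare a match above a threshold. Flattening as the "feature extractor", cosine similarity as the measure, and the threshold are all stand-in assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_object(radar_a, radar_b, threshold=0.9):
    """Decide whether two radar images represent the same object."""
    feat_a, feat_b = radar_a.ravel(), radar_b.ravel()
    return cosine_similarity(feat_a, feat_b) >= threshold

rng = np.random.default_rng(3)
img = rng.normal(size=(8, 8))
print(same_object(img, img + 0.01 * rng.normal(size=(8, 8))))  # near-identical image
print(same_object(img, rng.normal(size=(8, 8))))               # unrelated image
```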
-
Patent number: 11391817
Abstract: Embodiments described herein use machine learning on radio frequency (RF) radar data to perform object identification, including facial recognition. In particular, embodiments may obtain I/Q samples by transmitting and receiving a plurality of data packets with a respective plurality of transmitter antenna elements and receiver antenna elements, where each data packet of the plurality of data packets comprises one or more complementary pairs of Golay sequences. I/Q samples indicative of channel impulse responses of an identification region, obtained from the transmission and reception of the plurality of data packets, may then be used to identify, with a random forest model, a physical object in the identification region.
Type: Grant
Filed: May 7, 2019
Date of Patent: July 19, 2022
Assignee: QUALCOMM Incorporated
Inventors: Sharad Sambhwani, Amichai Sanderovich, Evyatar Hemo, Evgeny Levitan, Eran Hof, Mohammad Faroq Salama, Michel Adib Sarkis, Ning Bi, Yingyong Qi
-
Patent number: 11372086
Abstract: Embodiments described herein use machine learning on radio frequency (RF) radar data to perform object identification, including facial recognition. In particular, embodiments may obtain I/Q samples by transmitting and receiving a plurality of data packets with a respective plurality of transmitter antenna elements and receiver antenna elements. I/Q samples indicative of channel impulse responses of an identification region, obtained from the transmission and reception of the plurality of data packets, may then be used to identify, with an autoencoder, a physical object in the identification region.
Type: Grant
Filed: May 7, 2019
Date of Patent: June 28, 2022
Assignee: QUALCOMM Incorporated
Inventors: Sharad Sambhwani, Amichai Sanderovich, Evyatar Hemo, Evgeny Levitan, Eran Hof, Mohammad Faroq Salama, Michel Adib Sarkis, Ning Bi, Yingyong Qi
-
Publication number: 20220189029
Abstract: Examples are described of segmenting an image into image regions based on depicted categories of objects, and for refining the image regions semantically. For example, a system can determine that a first image region in an image depicts a first category of object. The system can generate a color distance map of the first image region that maps color distance values to each pixel in the first image region. A color distance value quantifies a difference between a color value of a pixel in the first image region and a color value of a sample pixel in a second image region in the image. The system can process the image based on a refined variant of the first image region that is refined based on the color distance map, for instance by removing pixels from the first image region whose color distances fall below a color distance threshold.
Type: Application
Filed: December 16, 2020
Publication date: June 16, 2022
Inventors: Eyasu Zemene MEQUANINT, Yingyong QI, Ning BI
-
Publication number: 20220164426
Abstract: Techniques and systems are provided for authenticating a user of a device. For example, input biometric data associated with a person can be obtained. A similarity score for the input biometric data can be determined by comparing the input biometric data to a set of templates that include reference biometric data associated with the user. The similarity score can be compared to an authentication threshold. The person is authenticated as the user when the similarity score is greater than the authentication threshold. The similarity score can also be compared to a learning threshold that is greater than the authentication threshold. A new template including features of the input biometric data is saved for the user when the similarity score is less than the learning threshold and greater than the authentication threshold.
Type: Application
Filed: December 2, 2021
Publication date: May 26, 2022
Inventors: Eyasu Zemene MEQUANINT, Shuai ZHANG, Yingyong QI, Ning BI
-
Patent number: 11227156
Abstract: Methods, systems, and devices for personalized (e.g., user specific) eye openness estimation are described. A network model (e.g., a convolutional neural network) may be trained using a set of synthetic eye openness image data (e.g., synthetic face images with known degrees or percentages of eye openness) and a set of real eye openness image data (e.g., facial images of real persons that are annotated as either open eyed or closed eyed). A device may estimate, using the network model, a multi-stage eye openness level (e.g., a percentage or degree to which an eye is open) of a user based on captured real time eye openness image data. The degree of eye openness estimated by the network model may then be compared to an eye size of the user (e.g., a user specific maximum eye size), and a user specific eye openness level may be estimated based on the comparison.
Type: Grant
Filed: January 3, 2019
Date of Patent: January 18, 2022
Assignee: QUALCOMM Incorporated
Inventors: Eyasu Zemene Mequanint, Shuai Zhang, Yingyong Qi, Ning Bi
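The personalization step can be sketched as rescaling the network's raw openness estimate by the user's own maximum eye size to obtain a user-specific openness level. The linear rescaling and clamping shown here are assumptions about how the comparison might work, with made-up pixel values.

```python
def user_specific_openness(estimated_eye_size, user_max_eye_size):
    """Map a measured eye opening to a 0-1 openness level for this user."""
    level = estimated_eye_size / user_max_eye_size
    return max(0.0, min(1.0, level))   # clamp to a valid openness range

# A 6 px eye opening for a user whose fully open eye measures 12 px -> 0.5.
print(user_specific_openness(6.0, 12.0))
```

The same 6 px opening would mean a different openness level for a user whose fully open eye measures, say, 8 px, which is the point of making the estimate user specific.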
-
Patent number: 11216541
Abstract: Techniques and systems are provided for authenticating a user of a device. For example, input biometric data associated with a person can be obtained. A similarity score for the input biometric data can be determined by comparing the input biometric data to a set of templates that include reference biometric data associated with the user. The similarity score can be compared to an authentication threshold. The person is authenticated as the user when the similarity score is greater than the authentication threshold. The similarity score can also be compared to a learning threshold that is greater than the authentication threshold. A new template including features of the input biometric data is saved for the user when the similarity score is less than the learning threshold and greater than the authentication threshold.
Type: Grant
Filed: September 7, 2018
Date of Patent: January 4, 2022
Assignee: QUALCOMM Incorporated
Inventors: Eyasu Zemene Mequanint, Shuai Zhang, Yingyong Qi, Ning Bi
-
Patent number: 11158119
Abstract: A method performed by an electronic device is described. The method includes receiving first optical data and first depth data corresponding to a first frame. The method also includes registering the first depth data to a first canonical model. The method further includes fitting a three-dimensional (3D) morphable model to the first optical data. The method additionally includes registering the 3D morphable model to a second canonical model. The method also includes producing a 3D object reconstruction based on the registered first depth data and the registered 3D morphable model.
Type: Grant
Filed: January 9, 2020
Date of Patent: October 26, 2021
Assignee: QUALCOMM Incorporated
Inventors: Yan Deng, Michel Adib Sarkis, Yingyong Qi