Patents by Inventor Ning Bi
Ning Bi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12142084
Abstract: Methods, systems, and apparatuses are provided to automatically determine whether an image is spoofed. For example, a computing device may obtain an image, and may execute a trained convolutional neural network to ingest elements of the image. Further, and based on the ingested elements of the image, the executed trained convolutional neural network generates an output map that includes a plurality of intensity values. In some examples, the trained convolutional neural network includes a plurality of down sampling layers, a plurality of up sampling layers, and a plurality of joint spatial and channel attention layers. Further, the computing device may determine whether the image is spoofed based on the plurality of intensity values. The computing device may also generate output data based on the determination of whether the image is spoofed, and may store the output data within a data repository.
Type: Grant
Filed: December 23, 2021
Date of Patent: November 12, 2024
Assignee: QUALCOMM Incorporated
Inventors: Chun-Ting Huang, Lei Wang, Ning Bi
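The final step of the abstract, deciding spoof vs. live from the network's per-pixel intensity map, can be sketched as a simple thresholding rule. This is an illustrative assumption: the patent does not fix a specific decision function, and `is_spoofed`, its mean-based rule, and the 0.5 threshold are hypothetical.

```python
import numpy as np

def is_spoofed(output_map: np.ndarray, threshold: float = 0.5) -> bool:
    """Decide spoof vs. live from a CNN output map of per-pixel intensity
    values by comparing their mean to a threshold (illustrative rule only;
    the patent leaves the exact decision function unspecified)."""
    return bool(output_map.mean() > threshold)

# Toy maps: uniformly low intensities read as live, high as spoofed.
live_map = np.full((8, 8), 0.1)
spoof_map = np.full((8, 8), 0.9)
```

In practice the output map comes from the attention-augmented encoder-decoder network the abstract describes; only the thresholding of its intensity values is shown here.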
-
Publication number: 20240338868
Abstract: Systems and techniques are described herein for generating an image. For instance, a method for generating an image is provided. The method may include obtaining a source image of a face having source attributes and exhibiting a source pose and source gaze; obtaining at least one of a target pose and a target gaze; and generating a modified image of the face having the source attributes and exhibiting at least one of the target pose and the target gaze.
Type: Application
Filed: April 4, 2023
Publication date: October 10, 2024
Inventors: Zhen WANG, Shiwei JIN, Lei WANG, Ning BI
-
Publication number: 20240319374
Abstract: Systems and techniques are described herein for determining depth information. For instance, a method for determining depth information is provided. The method may include transmitting electromagnetic (EM) radiation toward a plurality of points in an environment; comparing a phase of the transmitted EM radiation with a phase of received EM radiation to determine a respective time-of-flight estimate of the EM radiation between transmission and reception for each point of the plurality of points in the environment; determining first depth information based on the respective time-of-flight estimates determined for each point of the plurality of points in the environment; obtaining second depth information based on an image of the environment; comparing the first depth information with the second depth information to determine an inconsistency between the first depth information and the second depth information; and adjusting a depth of the first depth information based on the inconsistency.
Type: Application
Filed: March 21, 2023
Publication date: September 26, 2024
Inventors: Xiaoliang BAI, Yingyong QI, Li HONG, Ning BI, Xiaoyun JIANG
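The phase-comparison step above follows the standard continuous-wave time-of-flight relation: a phase shift Δφ at modulation frequency f gives a round-trip time Δφ/(2πf), so depth is c·Δφ/(4πf). A minimal sketch, where the blend-based adjustment of inconsistent depths is an assumption (the patent leaves the exact adjustment unspecified):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift: np.ndarray, mod_freq_hz: float) -> np.ndarray:
    """Depth from the phase difference between transmitted and received
    EM radiation: round-trip time = phase/(2*pi*f), depth = c*time/2."""
    return C * phase_shift / (4.0 * np.pi * mod_freq_hz)

def adjust_tof(tof: np.ndarray, image_depth: np.ndarray,
               tol: float = 0.05) -> np.ndarray:
    """Where ToF depth and image-based depth disagree by more than `tol`
    metres, blend the ToF estimate toward the image-based one (simple
    averaging; an illustrative stand-in for the patented adjustment)."""
    inconsistent = np.abs(tof - image_depth) > tol
    out = tof.copy()
    out[inconsistent] = 0.5 * (tof[inconsistent] + image_depth[inconsistent])
    return out
```

For example, at a 30 MHz modulation frequency a phase shift of π radians corresponds to a depth of roughly 2.5 m.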
-
Publication number: 20240320909
Abstract: Systems and techniques are described herein for generating one or more three-dimensional models. For instance, a method for generating one or more three-dimensional models is provided. The method may include obtaining a plurality of images of an object; obtaining a plurality of segmentation masks associated with the plurality of images, each segmentation mask of the plurality of segmentation masks including at least one label indicative of at least one segment of the object in a respective image of the plurality of images; training, using the plurality of images and the plurality of segmentation masks, a machine-learning model to generate one or more semantically-labeled three-dimensional models of the object; and generating, using the trained machine-learning model, a semantically-labeled three-dimensional model of the object, the semantically-labeled three-dimensional model of the object including at least one label indicative of the at least one segment of the object.
Type: Application
Filed: March 21, 2023
Publication date: September 26, 2024
Inventors: Yan DENG, Ze ZHANG, Michel Adib SARKIS, Ning BI
-
Patent number: 12100107
Abstract: Techniques are provided for generating three-dimensional models of objects from one or more images or frames. For example, at least one frame of an object in a scene can be obtained. A portion of the object is positioned on a plane in the at least one frame. The plane can be detected in the at least one frame and, based on the detected plane, the object can be segmented from the plane in the at least one frame. A three-dimensional (3D) model of the object can be generated based on segmenting the object from the plane. A refined mesh can be generated for a portion of the 3D model corresponding to the portion of the object positioned on the plane.
Type: Grant
Filed: July 17, 2023
Date of Patent: September 24, 2024
Assignee: QUALCOMM Incorporated
Inventors: Ke-Li Cheng, Kuang-Man Huang, Michel Adib Sarkis, Gerhard Reitmayr, Ning Bi
-
Publication number: 20240289970
Abstract: Techniques and systems are provided for image processing. For instance, a process can include obtaining, from one or more image sensors, a first image of an environment; determining semantic labels for a plurality of pixels of the first image based on whether each pixel of the plurality of pixels is associated with an object in the first image to generate a semantically segmented image; applying a two-dimensional line representation of the first image to the semantically segmented image to generate a labeled 2D line representation; back-projecting the labeled 2D line representation to three dimensions to generate a line representation model; fusing the line representation model and the semantically segmented image to generate a labeled line representation model; and outputting the labeled line representation model.
Type: Application
Filed: February 24, 2023
Publication date: August 29, 2024
Inventors: Yan DENG, Michel Adib SARKIS, Ning BI, Nikolai Konrad LEUNG
-
Publication number: 20240281990
Abstract: Systems and techniques are provided for performing an accurate object count using monocular three-dimensional (3D) perception. In some examples, a computing device can generate a reference depth map based on a reference frame depicting a volume of interest. The computing device can generate a current depth map based on a current frame depicting the volume of interest and one or more objects. The computing device can compare the current depth map to the reference depth map to determine a respective change in depth for each of the one or more objects. The computing device can further compare the respective change in depth for each object to a threshold. The computing device can determine whether each object is located within the volume of interest based on comparing the respective change in depth for each object to the threshold.
Type: Application
Filed: February 22, 2023
Publication date: August 22, 2024
Inventors: Xiaoliang BAI, Dashan GAO, Yingyong QI, Ning BI
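The counting logic above, comparing per-object depth change against a threshold, can be sketched as follows. The boolean-mask region format, the mean-change statistic, and the 0.10 m threshold are illustrative assumptions, not details fixed by the abstract.

```python
import numpy as np

def count_objects_in_volume(reference_depth: np.ndarray,
                            current_depth: np.ndarray,
                            regions: list,
                            threshold: float = 0.10) -> int:
    """Count objects inside the volume of interest: for each object's pixel
    region (a boolean mask), compare the mean change in depth between the
    reference and current depth maps against a threshold. An object closer
    than the reference surface by more than the threshold is counted."""
    count = 0
    for mask in regions:
        change = float(np.mean(reference_depth[mask] - current_depth[mask]))
        if change > threshold:
            count += 1
    return count

# Toy scene: a 2 m-deep reference surface, one object 0.5 m above it
# and one near-flat region within measurement noise.
ref = np.full((4, 4), 2.0)
cur = ref.copy()
mask_a = np.zeros((4, 4), dtype=bool); mask_a[:2, :2] = True
mask_b = np.zeros((4, 4), dtype=bool); mask_b[2:, 2:] = True
cur[mask_a] = 1.5   # 0.5 m change: counted
cur[mask_b] = 1.98  # 0.02 m change: below threshold, not counted
```

Here only the region under `mask_a` clears the threshold, so the count is one.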
-
Patent number: 12062130
Abstract: Systems and techniques are provided for performing video-based activity recognition. For example, a process can include generating a three-dimensional (3D) model of a first portion of an object based on one or more frames depicting the object. The process can also include generating a mask for the one or more frames, the mask including an indication of one or more regions of the object. The process can further include generating a 3D base model based on the 3D model of the first portion of the object and the mask, the 3D base model representing the first portion of the object and a second portion of the object. The process can include generating, based on the mask and the 3D base model, a 3D model of the second portion of the object.
Type: Grant
Filed: August 16, 2021
Date of Patent: August 13, 2024
Assignee: QUALCOMM Incorporated
Inventors: Yan Deng, Michel Adib Sarkis, Ning Bi, Chieh-Ming Kuo
-
Publication number: 20240259529
Abstract: Systems and techniques are described for establishing one or more virtual sessions between users. For instance, a first device can transmit, to a second device, a call establishment request for a virtual representation call for a virtual session and can receive, from the second device, a call acceptance indicating acceptance of the call establishment request. The first device can transmit, to the second device, first mesh information for a first virtual representation of a first user of the first device and first mesh animation parameters for the first virtual representation. The first device can receive, from the second device, second mesh information for a second virtual representation of a second user of the second device and second mesh animation parameters for the second virtual representation. The first device can generate, based on the second mesh information and the second mesh animation parameters, the second virtual representation of the second user.
Type: Application
Filed: September 15, 2023
Publication date: August 1, 2024
Inventors: Michel Adib SARKIS, Imed BOUAZIZI, Thomas STOCKHAMMER, Ning BI, Liangping MA
-
Publication number: 20240257557
Abstract: Systems and techniques are described herein for processing images to detect expressions of a subject. In one illustrative example, a method of recognizing facial expressions in one or more images includes obtaining, by a computing device, a first image of a person; obtaining expression information based on the first image and an anchor image associated with the person; and determining an expression classification associated with the first image based on the expression information.
Type: Application
Filed: January 30, 2023
Publication date: August 1, 2024
Inventors: Peng LIU, Lei WANG, Ning BI, Zhen WANG, Shiwei JIN
-
Publication number: 20240212308
Abstract: Systems and techniques are described herein for processing images to detect objects in the provided images. In one illustrative example, a method of processing image data includes obtaining an image including at least a first object. The method can include generating a feature map based on providing the image to a neural network. The method can further include identifying a plurality of objects based on the feature map, the plurality of objects including a first part of the first object. The method can include identifying a first set of object parts within the plurality of objects corresponding to the first object.
Type: Application
Filed: December 21, 2022
Publication date: June 27, 2024
Inventors: Yuan LI, Lei WANG, Leulseged Tesfaye ALEMU, Dashan GAO, Ning BI
-
Publication number: 20240119627
Abstract: Methods, systems, and apparatuses are provided to fuse a first dataset with a second dataset, and to determine one or more head pose estimations based on the fused first dataset and second dataset. The first dataset may be associated with sensor data generated by a set of sensors of a first device, while the second dataset may be associated with sensor data generated by a first sensor of an apparatus. For example, an apparatus may obtain the first dataset and the second dataset. Additionally, the apparatus may generate a fused dataset based on the first dataset and the second dataset, and determine a head pose estimation of a head of the user based on the fused dataset. Further, the apparatus may output the head pose estimation.
Type: Application
Filed: October 7, 2022
Publication date: April 11, 2024
Inventors: Min GUO, Srinivasa DEEVI, Ning BI
-
Publication number: 20240062467
Abstract: Systems and techniques are described for establishing one or more virtual sessions between users. For instance, a first device can transmit, to a second device, a call establishment request for a virtual representation call for a virtual session and can receive, from the second device, a call acceptance indicating acceptance of the call establishment request. The first device can transmit, to the second device, first mesh information for a first virtual representation of a first user of the first device and first mesh animation parameters for the first virtual representation. The first device can receive, from the second device, second mesh information for a second virtual representation of a second user of the second device and second mesh animation parameters for the second virtual representation. The first device can generate, based on the second mesh information and the second mesh animation parameters, the second virtual representation of the second user.
Type: Application
Filed: July 3, 2023
Publication date: February 22, 2024
Inventors: Michel Adib SARKIS, Chiranjib CHOUDHURI, Ke-Li CHENG, Ajit Deepak GUPTE, Ning BI, Cristina DOBRIN, Ramesh CHANDRASEKHAR, Imed BOUAZIZI, Liangping MA, Thomas STOCKHAMMER, Nikolai Konrad LEUNG
-
Patent number: 11887404
Abstract: Techniques and systems are provided for authenticating a user of a device. For example, input biometric data associated with a person can be obtained. A similarity score for the input biometric data can be determined by comparing the input biometric data to a set of templates that include reference biometric data associated with the user. The similarity score can be compared to an authentication threshold. The person is authenticated as the user when the similarity score is greater than the authentication threshold. The similarity score can also be compared to a learning threshold that is greater than the authentication threshold. A new template including features of the input biometric data is saved for the user when the similarity score is less than the learning threshold and greater than the authentication threshold.
Type: Grant
Filed: December 2, 2021
Date of Patent: January 30, 2024
Assignee: QUALCOMM Incorporated
Inventors: Eyasu Zemene Mequanint, Shuai Zhang, Yingyong Qi, Ning Bi
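The two-threshold scheme in this abstract is concrete enough to sketch directly: authenticate above the authentication threshold, and additionally save a new template when the score falls between the two thresholds. The threshold values below are illustrative assumptions; only the comparison logic comes from the abstract.

```python
def authenticate(similarity: float,
                 auth_threshold: float = 0.70,
                 learn_threshold: float = 0.95) -> tuple:
    """Two-threshold biometric decision: return (authenticated,
    save_new_template). The person is authenticated when the similarity
    score exceeds the authentication threshold; a new template is saved
    when the score is also below the (higher) learning threshold."""
    authenticated = similarity > auth_threshold
    save_template = authenticated and similarity < learn_threshold
    return authenticated, save_template
```

The design intuition: a score far above the learning threshold matches an existing template so well that storing it adds nothing, while a score in the middle band is genuine but different enough (lighting, pose, aging) to be worth enrolling as a new template.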
-
Publication number: 20240029354
Abstract: Systems and techniques are provided for generating a texture for a three-dimensional (3D) facial model. For example, a process can include obtaining a first frame, the first frame including a first portion of a face. In some aspects, the process can include generating a 3D facial model based on the first frame and generating a first facial feature corresponding to the first portion of the face. In some examples, the process includes obtaining a second frame, the second frame including a second portion of the face, and generating a second facial feature corresponding to the second portion of the face. In some cases, the second portion of the face at least partially overlaps the first portion of the face. In some examples, the process includes combining the first facial feature with the second facial feature to generate an enhanced facial feature, wherein the combining is performed to enhance an appearance of select areas of the enhanced facial feature.
Type: Application
Filed: July 19, 2022
Publication date: January 25, 2024
Inventors: Ke-Li CHENG, Anupama S, Kuang-Man HUANG, Chieh-Ming KUO, Avani RAO, Chiranjib CHOUDHURI, Michel Adib SARKIS, Ning BI, Ajit Deepak GUPTE
-
Publication number: 20240005607
Abstract: Techniques are provided for generating three-dimensional models of objects from one or more images or frames. For example, at least one frame of an object in a scene can be obtained. A portion of the object is positioned on a plane in the at least one frame. The plane can be detected in the at least one frame and, based on the detected plane, the object can be segmented from the plane in the at least one frame. A three-dimensional (3D) model of the object can be generated based on segmenting the object from the plane. A refined mesh can be generated for a portion of the 3D model corresponding to the portion of the object positioned on the plane.
Type: Application
Filed: July 17, 2023
Publication date: January 4, 2024
Inventors: Ke-Li CHENG, Kuang-Man HUANG, Michel Adib SARKIS, Gerhard REITMAYR, Ning BI
-
Publication number: 20230410447
Abstract: Systems and techniques are provided for generating a three-dimensional (3D) facial model. For example, a process can include obtaining at least one input image associated with a face. In some aspects, the process can include obtaining a pose for a 3D facial model associated with the face. In some examples, the process can include generating, by a machine learning model, the 3D facial model associated with the face. In some cases, one or more parameters associated with a shape component of the 3D facial model are conditioned on the pose. In some implementations, the 3D facial model is configured to vary in shape based on the pose for the 3D facial model associated with the face.
Type: Application
Filed: June 21, 2022
Publication date: December 21, 2023
Inventors: Ke-Li CHENG, Anupama S, Kuang-Man HUANG, Chieh-Ming KUO, Avani RAO, Chiranjib CHOUDHURI, Michel Adib SARKIS, Ajit Deepak GUPTE, Ning BI
-
Publication number: 20230386052
Abstract: Systems and techniques are provided for performing scene segmentation and object tracking. For example, a method for processing one or more frames is provided. The method may include determining first one or more features from a first frame. The first frame includes a target object. The method may include obtaining a first mask associated with the first frame. The first mask includes an indication of the target object. The method may further include generating, based on the first mask and the first one or more features, a representation of a foreground and a background of the first frame. The method may include determining second one or more features from a second frame and determining, based on the representation of the foreground and the background of the first frame and the second one or more features, a location of the target object in the second frame.
Type: Application
Filed: May 31, 2022
Publication date: November 30, 2023
Inventors: Jiancheng LYU, Dashan GAO, Yingyong QI, Shuai ZHANG, Ning BI
-
Patent number: 11776129
Abstract: Examples are described of segmenting an image into image regions based on depicted categories of objects, and for refining the image regions semantically. For example, a system can determine that a first image region in an image depicts a first category of object. The system can generate a color distance map of the first image region that maps color distance values to each pixel in the first image region. A color distance value quantifies a difference between a color value of a pixel in the first image region and a color value of a sample pixel in a second image region in the image. The system can process the image based on a refined variant of the first image region that is refined based on the color distance map, for instance by removing pixels from the first image region whose color distances fall below a color distance threshold.
Type: Grant
Filed: December 16, 2020
Date of Patent: October 3, 2023
Assignee: QUALCOMM Incorporated
Inventors: Eyasu Zemene Mequanint, Yingyong Qi, Ning Bi
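The color-distance refinement described in this abstract can be sketched as follows. The Euclidean RGB distance and the threshold value are illustrative assumptions; the abstract specifies only that pixels whose color distance to a sample pixel from a second region falls below a threshold are removed from the first region.

```python
import numpy as np

def refine_region(image: np.ndarray, region_mask: np.ndarray,
                  sample_pixel: tuple, color_threshold: float = 30.0):
    """Refine a segmented region using a color distance map: compute each
    pixel's color distance (Euclidean, in RGB, as an assumption) to a
    sample pixel drawn from a second region, then drop region pixels whose
    distance falls below the threshold, since they likely belong to that
    second region."""
    sample = image[sample_pixel].astype(float)
    dist_map = np.linalg.norm(image.astype(float) - sample, axis=-1)
    refined = region_mask & (dist_map >= color_threshold)
    return refined, dist_map

# Toy 2x2 image: top row reddish, bottom row bluish. With the sample
# pixel taken from the blue area, only the red pixels survive refinement.
img = np.array([[[255, 0, 0], [250, 5, 0]],
                [[0, 0, 255], [0, 0, 250]]], dtype=float)
mask = np.ones((2, 2), dtype=bool)
```

The sample pixel anchors the comparison: pixels colored like the neighboring region are assumed to have been mis-assigned by the initial semantic segmentation.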
-
Patent number: 11756334
Abstract: Systems and techniques are provided for facial expression recognition. In some examples, a system receives an image frame corresponding to a face of a person. The system also determines, based on a three-dimensional model of the face, landmark feature information associated with landmark features of the face. The system then inputs, to at least one layer of a neural network trained for facial expression recognition, the image frame and the landmark feature information. The system further determines, using the neural network, a facial expression associated with the face.
Type: Grant
Filed: February 25, 2021
Date of Patent: September 12, 2023
Assignee: QUALCOMM Incorporated
Inventors: Peng Liu, Lei Wang, Kuang-Man Huang, Michel Adib Sarkis, Ning Bi