Patents by Inventor Xiaoou Tang
Xiaoou Tang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10289897
Abstract: Disclosed is an apparatus for face verification. The apparatus may comprise a feature extraction unit and a verification unit. In one embodiment, the feature extraction unit comprises a plurality of convolutional feature extraction systems trained with different face training sets, wherein each of the systems comprises a plurality of cascaded convolutional, pooling, locally-connected, and fully-connected feature extraction units configured to extract facial features for face verification from face regions of face images; wherein an output unit of the unit cascade, which may be a fully-connected unit in one embodiment of the present application, is connected to at least one of the previous convolutional, pooling, locally-connected, or fully-connected units, and is configured to extract facial features (referred to as deep identification-verification features, or DeepID2) for face verification from the facial features in the connected units.
Type: Grant
Filed: December 1, 2016
Date of Patent: May 14, 2019
Assignee: Beijing SenseTime Technology Development Co., Ltd
Inventors: Xiaoou Tang, Yi Sun, Xiaogang Wang
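The extract-concatenate-compare flow the abstract describes can be sketched in plain Python. The `extractors` below are toy stand-ins for the trained convolutional cascades, and the Euclidean-distance threshold is an assumed illustration of the verification step, not the patented method:

```python
import math

def extract_features(face_region, extractors):
    """Concatenate the feature vectors produced by several extractors
    (stand-ins for the trained convolutional systems) into one
    identity signature for the face."""
    signature = []
    for extractor in extractors:
        signature.extend(extractor(face_region))
    return signature

def verify(sig_a, sig_b, threshold=0.5):
    """Declare a match when the Euclidean distance between the two
    signatures falls below a chosen threshold (assumed rule)."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)))
    return dist < threshold

# Toy extractors: each maps a face region (here, a flat pixel list)
# to a tiny feature vector, purely to illustrate the data flow.
extractors = [
    lambda region: [sum(region) / len(region)],   # mean intensity
    lambda region: [max(region) - min(region)],   # intensity range
]
same = verify(extract_features([1, 2, 3], extractors),
              extract_features([1, 2, 3], extractors))
```

Identical regions produce identical signatures and therefore verify as the same identity; sufficiently different regions do not.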
-
Publication number: 20190138798
Abstract: Time domain action detecting methods and systems, electronic devices, and computer storage media are provided. The method includes: obtaining a time domain interval in a video with an action instance and at least one adjacent segment of the time domain interval; separately extracting action features of at least two video segments among candidate segments, where the candidate segments comprise the video segment corresponding to the time domain interval and the adjacent segments thereof; pooling the action features of the at least two video segments in the candidate segments to obtain a global feature of the video segment corresponding to the time domain interval; and determining, based on the global feature, an action integrity score of the video segment corresponding to the time domain interval. The embodiments of the present disclosure help accurately determine whether a time domain interval comprises an integral action instance, and improve the accuracy of action integrity identification.
Type: Application
Filed: December 28, 2018
Publication date: May 9, 2019
Applicant: Beijing SenseTime Technology Development Co., Ltd
Inventors: Xiaoou Tang, Yuanjun Xiong, Yue Zhao, Limin Wang, Zhirong Wu, Dahua Lin
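A minimal sketch of the pooling-and-scoring steps, assuming mean pooling over each candidate segment and a linear scorer in place of the trained classifier (both are illustrative choices; the abstract does not fix either):

```python
def mean_pool(features):
    """Average a list of per-segment feature vectors elementwise."""
    n = len(features)
    return [sum(col) / n for col in zip(*features)]

def global_feature(interval_feats, before_feats, after_feats):
    """Pool the proposal interval and its adjacent segments, then
    concatenate the pooled vectors into one global descriptor."""
    return (mean_pool(before_feats)
            + mean_pool(interval_feats)
            + mean_pool(after_feats))

def integrity_score(global_feat, weights, bias=0.0):
    """Linear scorer standing in for the trained integrity classifier."""
    return sum(w * f for w, f in zip(weights, global_feat)) + bias

# Two-dimensional toy features for the interval and its neighbours.
g = global_feature(interval_feats=[[1, 2], [3, 4]],
                   before_feats=[[0, 0]],
                   after_feats=[[4, 6]])
score = integrity_score(g, weights=[1.0] * len(g))
```

Because the neighbouring segments contribute their own pooled vectors, the score can distinguish a proposal that covers a whole action from one that cuts it off mid-way.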
-
Publication number: 20190138816
Abstract: A method and an apparatus for segmenting a video object, an electronic device, a storage medium, and a program include: performing, among at least some frames of a video, inter-frame transfer of an object segmentation result of a reference frame in sequence from the reference frame, to obtain an object segmentation result of at least one other frame among the at least some frames; determining other frames having lost objects with respect to the object segmentation result of the reference frame among the at least some frames; using the determined other frames as target frames to segment the lost objects, so as to update the object segmentation results of the target frames; and transferring the updated object segmentation results of the target frames to the at least one other frame in the video in sequence. The accuracy of video object segmentation results can therefore be improved.
Type: Application
Filed: December 29, 2018
Publication date: May 9, 2019
Applicant: Beijing SenseTime Technology Development Co., Ltd
Inventors: Xiaoxiao Li, Yuankai Qi, Zhe Wang, Kai Chen, Ziwei Liu, Jianping Shi, Ping Luo, Chen Change Loy, Xiaoou Tang
-
Publication number: 20180300855
Abstract: A method and system for processing an image operates by: filtering a first real image to obtain a first feature map therefor in which the representation of image features is improved; upscaling the obtained first feature map to increase its resolution, the feature map with increased resolution forming a second feature map; and constructing, from the second feature map, a second real image having enhanced quality and a higher resolution than that of the first real image.
Type: Application
Filed: June 20, 2018
Publication date: October 18, 2018
Inventors: Xiaoou Tang, Chao Dong, Tak Wai Hui, Chen Change Loy
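The three-stage pipeline in the abstract (feature extraction, upscaling, reconstruction) can be illustrated with trivial stand-ins for the learned layers: an identity filter and fixed nearest-neighbour upscaling. This shows only the data flow, not the patented network:

```python
def filter_map(image, kernel=lambda v: v):
    """Apply a per-pixel filter to a 2-D map (identity by default,
    standing in for the learned feature-extraction/reconstruction
    layers)."""
    return [[kernel(v) for v in row] for row in image]

def upscale(fmap, factor=2):
    """Nearest-neighbour upscaling of a 2-D feature map; the patent's
    learned upscaling is replaced by this fixed rule."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in range(factor)]
        out.extend(list(wide) for _ in range(factor))
    return out

def super_resolve(image, factor=2):
    """Chain the three stages from the abstract."""
    first = filter_map(image)        # stage 1: feature extraction
    second = upscale(first, factor)  # stage 2: resolution increase
    return filter_map(second)        # stage 3: reconstruction
```

A 1x2 input becomes a 2x4 output at factor 2, each pixel replicated into a 2x2 block.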
-
Patent number: 10037457
Abstract: Disclosed herein are a system and method for verifying face images based on canonical images. The method includes: retrieving, from a plurality of face images of an identity, the face image with the smallest frontal measurement value as a representative image of the identity; determining parameters of an image reconstruction network based on mappings between the retrieved representative image and the plurality of face images of the identity; reconstructing, by the image reconstruction network with the determined parameters, at least two input face images into corresponding canonical images respectively; and comparing the reconstructed canonical images to verify whether they belong to the same identity, where the representative image is a frontal image and the frontal measurement value represents the symmetry and sharpness of each face image. Thus, canonical face images can be reconstructed using only 2D information from face images under arbitrary pose and lighting conditions.
Type: Grant
Filed: September 30, 2016
Date of Patent: July 31, 2018
Assignee: Beijing SenseTime Technology Development Co., Ltd
Inventors: Xiaoou Tang, Zhenyao Zhu, Ping Luo, Xiaogang Wang
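One plausible reading of the representative-image selection step: score each image so that asymmetry raises the value and sharpness lowers it, then keep the minimum. The exact formula and weight are assumptions for illustration; the abstract gives only the two ingredients:

```python
def frontal_measurement(image, sharpness_weight=1.0):
    """Lower is better: horizontal asymmetry minus a sharpness bonus.
    (Assumed formulation; the patent does not publish the formula
    in its abstract.)"""
    width = len(image[0])
    asym = sum(abs(row[i] - row[width - 1 - i])
               for row in image for i in range(width // 2))
    sharp = sum(abs(row[i + 1] - row[i])
                for row in image for i in range(width - 1))
    return asym - sharpness_weight * sharp

def representative(images):
    """Pick the face image with the smallest frontal measurement."""
    return min(images, key=frontal_measurement)

# A symmetric, high-contrast row beats an asymmetric, flat one.
frontal = [[0, 9, 0]]
profile = [[0, 1, 5]]
chosen = representative([profile, frontal])
```

The symmetric image scores 0 - 18 = -18 while the asymmetric one scores 5 - 5 = 0, so the symmetric, sharper image is chosen as representative.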
-
Patent number: 10002296
Abstract: A video classification method and apparatus relate to the field of electronic and information technologies, so that the precision of video classification can be improved. The method includes: segmenting a video in a sample video library according to a time sequence to obtain a segmentation result, and generating a motion atom set; generating, by using the motion atom set and the segmentation result, a motion phrase set that can indicate a complex motion pattern, and generating a descriptive vector, based on the motion phrase set, for the video in the sample video library; and determining, by using the descriptive vector, a to-be-detected video whose category is the same as that of the video in the sample video library. The method is applicable to video classification scenarios.
Type: Grant
Filed: May 27, 2016
Date of Patent: June 19, 2018
Assignee: Huawei Technologies Co., Ltd.
Inventors: Limin Wang, Yu Qiao, Wei Li, Chunjing Xu, Xiaoou Tang
-
Publication number: 20180144193
Abstract: A method for identifying social relations of persons in an image, including: generating face regions for the faces of the persons in the image; determining at least one spatial cue for each of the faces; extracting features related to social relations for each face from the face regions; determining a shared facial feature from the extracted features and the determined spatial cues, the determined feature being shared by multiple social relation inferences; and predicting the social relations of the persons from the shared facial feature.
Type: Application
Filed: December 29, 2017
Publication date: May 24, 2018
Inventors: Xiaoou Tang, Zhanpeng Zhang, Ping Luo, Chen Change Loy
-
Publication number: 20180129919
Abstract: Disclosed is a method for generating a semantic image labeling model, comprising: forming a first CNN and a second CNN, respectively; randomly initializing the first CNN; inputting a raw image and predetermined label ground truth annotations to the first CNN to iteratively update its weights so that a category label probability for the image, which is output from the first CNN, approaches the predetermined label ground truth annotations; randomly initializing the second CNN; inputting the category label probability to the second CNN to correct the input category label probability so as to determine classification errors of the category label probabilities; updating the second CNN by back-propagating the classification errors; concatenating the updated first and second CNNs; classifying each pixel in the raw image into one of the general object categories; and back-propagating classification errors through the concatenated CNN to update its weights until the classification errors are less than a predetermined threshold.
Type: Application
Filed: January 8, 2018
Publication date: May 10, 2018
Inventors: Xiaoou Tang, Ziwei Liu, Xiaoxiao Li, Ping Luo, Chen Change Loy
-
Patent number: 9811718
Abstract: Disclosed are a method and an apparatus for face verification. The apparatus comprises a feature extracting unit configured to extract HIFs (Hidden Identity Features) for different regions of faces by using differently trained ConvNets, wherein the last hidden layer neuron activations of said ConvNets are taken as the HIFs. The apparatus further comprises a verification unit configured to concatenate the extracted HIFs of each of the faces to form a feature vector, and then compare two of the formed feature vectors to determine whether they are from the same identity.
Type: Grant
Filed: April 11, 2014
Date of Patent: November 7, 2017
Assignee: Beijing SenseTime Technology Development Co., Ltd
Inventors: Yi Sun, Xiaogang Wang, Xiaoou Tang
-
Patent number: 9798959
Abstract: A method and a system for recognizing faces are disclosed. The method may comprise: retrieving a pair of face images; segmenting each of the retrieved face images into a plurality of image patches, wherein each patch in one image and the corresponding one in the other image form a pair of patches; determining a first similarity for each pair of patches; determining, from all pairs of patches, a second similarity for the pair of face images; and fusing the first similarity determined for each pair of patches and the second similarity determined for the pair of face images.
Type: Grant
Filed: November 30, 2013
Date of Patent: October 24, 2017
Assignee: Beijing SenseTime Technology Development Co., Ltd
Inventors: Xiaoou Tang, Chaochao Lu, Deli Zhao
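The split-score-fuse structure can be sketched as follows. The images are 1-D pixel lists for brevity, the similarity is a negative mean absolute difference, and `alpha` is an assumed fusion weight; none of these specifics come from the patent:

```python
def split_patches(image, patch_w):
    """Cut a 1-D pixel list into fixed-width patches."""
    return [image[i:i + patch_w] for i in range(0, len(image), patch_w)]

def patch_similarity(p, q):
    """Negative mean absolute difference: 0 for identical inputs,
    more negative the more the inputs differ."""
    return -sum(abs(a - b) for a, b in zip(p, q)) / len(p)

def fused_similarity(img_a, img_b, patch_w=2, alpha=0.5):
    """Fuse the per-patch (first) similarities with the whole-image
    (second) similarity via a weighted sum (assumed fusion rule)."""
    pairs = zip(split_patches(img_a, patch_w), split_patches(img_b, patch_w))
    first = [patch_similarity(p, q) for p, q in pairs]
    second = patch_similarity(img_a, img_b)
    return alpha * (sum(first) / len(first)) + (1 - alpha) * second
```

Identical images fuse to 0; a uniform intensity offset of 2 fuses to -2 regardless of the weight, since both similarity levels agree.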
-
Patent number: 9710697
Abstract: A method and a system for extracting face features from face image data are disclosed. The system may comprise: a first feature extraction unit configured to filter the face image data into a first plurality of channels of feature maps with a first dimension and to down-sample those feature maps into feature maps of a second dimension; a second feature extraction unit configured to filter the second-dimension feature maps into a second plurality of channels of feature maps, and to down-sample the second plurality of channels of feature maps into feature maps of a third dimension; and a third feature extraction unit configured to filter the third-dimension feature maps so as to further reduce high responses outside the face region, thereby reducing intra-identity variances of face images while maintaining discrimination between identities of the face images.
Type: Grant
Filed: November 30, 2013
Date of Patent: July 18, 2017
Assignee: Beijing SenseTime Technology Development Co., Ltd.
Inventors: Xiaoou Tang, Zhenyao Zhu, Ping Luo, Xiaogang Wang
-
Publication number: 20170147868
Abstract: Disclosed are a method and an apparatus for face verification. The apparatus comprises a feature extracting unit configured to extract HIFs (Hidden Identity Features) for different regions of faces by using differently trained ConvNets, wherein the last hidden layer neuron activations of said ConvNets are taken as the HIFs. The apparatus further comprises a verification unit configured to concatenate the extracted HIFs of each of the faces to form a feature vector, and then compare two of the formed feature vectors to determine whether they are from the same identity.
Type: Application
Filed: April 11, 2014
Publication date: May 25, 2017
Applicant: Beijing SenseTime Technology Development Co., Ltd.
Inventors: Yi Sun, Xiaogang Wang, Xiaoou Tang
-
Publication number: 20170083754
Abstract: Disclosed herein are a system and method for verifying face images based on canonical images. The method includes: retrieving, from a plurality of face images of an identity, the face image with the smallest frontal measurement value as a representative image of the identity; determining parameters of an image reconstruction network based on mappings between the retrieved representative image and the plurality of face images of the identity; reconstructing, by the image reconstruction network with the determined parameters, at least two input face images into corresponding canonical images respectively; and comparing the reconstructed canonical images to verify whether they belong to the same identity, where the representative image is a frontal image and the frontal measurement value represents the symmetry and sharpness of each face image. Thus, canonical face images can be reconstructed using only 2D information from face images under arbitrary pose and lighting conditions.
Type: Application
Filed: September 30, 2016
Publication date: March 23, 2017
Inventors: Xiaoou Tang, Zhenyao Zhu, Ping Luo, Xiaogang Wang
-
Publication number: 20170083755
Abstract: Disclosed is an apparatus for face verification. The apparatus may comprise a feature extraction unit and a verification unit. In one embodiment, the feature extraction unit comprises a plurality of convolutional feature extraction systems trained with different face training sets, wherein each of the systems comprises a plurality of cascaded convolutional, pooling, locally-connected, and fully-connected feature extraction units configured to extract facial features for face verification from face regions of face images; wherein an output unit of the unit cascade, which may be a fully-connected unit in one embodiment of the present application, is connected to at least one of the previous convolutional, pooling, locally-connected, or fully-connected units, and is configured to extract facial features (referred to as deep identification-verification features, or DeepID2) for face verification from the facial features in the connected units.
Type: Application
Filed: December 1, 2016
Publication date: March 23, 2017
Inventors: Xiaoou Tang, Yi Sun, Xiaogang Wang
-
Publication number: 20170046427
Abstract: A visual semantic complex network system and a method for generating the system are disclosed. The system may comprise: a collection device configured to retrieve a plurality of images, and a plurality of texts associated with the images, in accordance with given query keywords; a semantic concept determination device configured to determine semantic concepts of the retrieved images and of the retrieved texts, respectively; a descriptor generation device configured to generate, from the retrieved images and texts, text descriptors and visual descriptors for the determined semantic concepts; and a semantic correlation device configured to determine semantic correlations and visual correlations from the generated text and visual descriptors, respectively, and to combine the determined semantic correlations and visual correlations to generate the visual semantic complex network system.
Type: Application
Filed: November 30, 2013
Publication date: February 16, 2017
Inventors: Xiaoou Tang, Shi Qiu, Xiaogang Wang
-
Patent number: 9569699
Abstract: The present invention discloses a system and method for synthesizing a portrait sketch from a photo. The method includes: dividing the photo into a set of photo patches; determining first matching information between each of the photo patches and training photo patches pre-divided from a set of training photos; determining second matching information between each of the photo patches and training sketch patches pre-divided from a set of training sketches; determining a shape prior for the portrait sketch to be synthesized; determining a set of matched training sketch patches for each of the photo patches based on the first and second matching information and the shape prior; and synthesizing the portrait sketch from the determined matched training sketch patches.
Type: Grant
Filed: September 3, 2010
Date of Patent: February 14, 2017
Assignee: Shenzhen SenseTime Technology Co., Ltd.
Inventors: Xiaogang Wang, Xiaoou Tang, Wei Zhang
-
Publication number: 20170031953
Abstract: A method for verifying facial data, and a corresponding system, comprise: retrieving a plurality of source-domain datasets from a first database and a target-domain dataset from a second database different from the first database; determining the latent subspace that best matches the target-domain dataset, and a posterior distribution for the determined latent subspace, from the target-domain dataset and the source-domain datasets; determining the information shared between the target-domain dataset and the source-domain datasets; and establishing a multi-task learning model from the posterior distribution P and the shared information M on the target-domain dataset and the source-domain datasets.
Type: Application
Filed: September 28, 2016
Publication date: February 2, 2017
Inventors: Xiaoou Tang, Chaochao Lu
-
Publication number: 20170004387
Abstract: A method and a system for recognizing faces are disclosed. The method may comprise: retrieving a pair of face images; segmenting each of the retrieved face images into a plurality of image patches, wherein each patch in one image and the corresponding one in the other image form a pair of patches; determining a first similarity for each pair of patches; determining, from all pairs of patches, a second similarity for the pair of face images; and fusing the first similarity determined for each pair of patches and the second similarity determined for the pair of face images.
Type: Application
Filed: November 30, 2013
Publication date: January 5, 2017
Inventors: Xiaoou Tang, Chaochao Lu, Deli Zhao
-
Publication number: 20170004353
Abstract: A method and a system for extracting face features from face image data are disclosed. The system may comprise: a first feature extraction unit configured to filter the face image data into a first plurality of channels of feature maps with a first dimension and to down-sample those feature maps into feature maps of a second dimension; a second feature extraction unit configured to filter the second-dimension feature maps into a second plurality of channels of feature maps, and to down-sample the second plurality of channels of feature maps into feature maps of a third dimension; and a third feature extraction unit configured to filter the third-dimension feature maps so as to further reduce high responses outside the face region, thereby reducing intra-identity variances of face images while maintaining discrimination between identities of the face images.
Type: Application
Filed: November 30, 2013
Publication date: January 5, 2017
Applicant: Beijing SenseTime Technology Development Co., Ltd.
Inventors: Xiaoou Tang, Zhenyao Zhu, Ping Luo, Xiaogang Wang
-
Publication number: 20160379044
Abstract: A method for face image recognition is disclosed. The method comprises: generating one or more face region pairs from face images to be compared and recognized; forming a plurality of feature modes by exchanging the two face regions of each face region pair and horizontally flipping each face region of each face region pair; receiving, by one or more convolutional neural networks, the plurality of feature modes, each of which forms a plurality of input maps in the convolutional neural network; extracting, by the one or more convolutional neural networks, relational features from the input maps, which reflect identity similarities of the face images; and recognizing whether the compared face images belong to the same identity based on the extracted relational features of the face images. In addition, a system for face image recognition is also disclosed.
Type: Application
Filed: November 30, 2013
Publication date: December 29, 2016
Applicant: Beijing SenseTime Technology Development Co., Ltd.
Inventors: Xiaoou Tang, Yi Sun, Xiaogang Wang
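The feature-mode generation step is fully mechanical and easy to sketch: for each ordering of the region pair, optionally flip each region horizontally. Under this reading (an assumption; the abstract only names the two operations), 2 orderings x 2 flips x 2 flips gives eight modes per pair:

```python
def hflip(region):
    """Horizontally flip a face region given as rows of pixels."""
    return [row[::-1] for row in region]

def feature_modes(region_a, region_b):
    """Enumerate feature modes by exchanging the two regions of the
    pair and optionally flipping each region, yielding 8 modes."""
    modes = []
    for first, second in ((region_a, region_b), (region_b, region_a)):
        for f in (first, hflip(first)):
            for s in (second, hflip(second)):
                modes.append((f, s))
    return modes

modes = feature_modes([[1, 2]], [[3, 4]])
```

Each mode would then be stacked into input maps for the convolutional networks; averaging predictions over all modes makes the final decision insensitive to region order and left-right orientation.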