Patents by Inventor Xiaoou Tang

Xiaoou Tang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10289897
    Abstract: Disclosed is an apparatus for face verification. The apparatus may comprise a feature extraction unit and a verification unit. In one embodiment, the feature extraction unit comprises a plurality of convolutional feature extraction systems trained with different face training sets, wherein each of the systems comprises: a plurality of cascaded convolutional, pooling, locally-connected, and fully-connected feature extraction units configured to extract facial features for face verification from face regions of face images; wherein an output unit of the unit cascade, which could be a fully-connected unit in one embodiment of the present application, is connected to at least one of the previous convolutional, pooling, locally-connected, or fully-connected units, and is configured to extract facial features (referred to as deep identification-verification features or DeepID2) for face verification from the facial features in the connected units.
    Type: Grant
    Filed: December 1, 2016
    Date of Patent: May 14, 2019
    Assignee: Beijing SenseTime Technology Development Co., Ltd
    Inventors: Xiaoou Tang, Yi Sun, Xiaogang Wang
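The multi-layer connection this abstract describes (a final fully-connected unit fed by more than one earlier unit in the cascade) can be sketched roughly as follows. This is an illustrative toy, not the patented network: all layer sizes, weights, and function names are made up, and real DeepID2-style models use many learned filters per layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(x, k):
    """Naive single-channel 'valid' 2-D convolution."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool2(x):
    """2x2 max pooling (assumes even dimensions)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def extract_features(face_region, kernel, w_fc):
    """Cascade: conv -> pool -> conv. The final fully-connected unit sees
    BOTH the pooling output and the last conv output, mirroring the
    multi-layer connection the abstract describes."""
    c1 = np.maximum(conv2d_valid(face_region, kernel), 0)  # conv unit + ReLU
    p1 = max_pool2(c1)                                     # pooling unit
    c2 = np.maximum(conv2d_valid(p1, kernel), 0)           # second conv unit
    joint = np.concatenate([p1.ravel(), c2.ravel()])       # connect fc to both units
    return np.tanh(joint @ w_fc)                           # compact feature vector

face = rng.random((12, 12))                 # one face region (toy size)
kernel = rng.random((3, 3))
# fc weights sized for the concatenated (5x5 pool + 3x3 conv) activations
w_fc = rng.random((5 * 5 + 3 * 3, 8))
feat = extract_features(face, kernel, w_fc)
print(feat.shape)  # (8,)
```

Two such feature vectors, one per face, would then be compared by the verification unit (e.g. with a learned distance) to decide whether the faces share an identity.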
  • Publication number: 20190138798
    Abstract: Time domain action detecting methods and systems, electronic devices, and computer storage media are provided. The method includes: obtaining a time domain interval containing an action instance in a video and at least one segment adjacent to the time domain interval; separately extracting action features of at least two video segments among candidate segments, where the candidate segments comprise the video segment corresponding to the time domain interval and the segments adjacent thereto; pooling the action features of the at least two video segments among the candidate segments to obtain a global feature of the video segment corresponding to the time domain interval; and determining, based on the global feature, an action integrity score of the video segment corresponding to the time domain interval. The embodiments of the present disclosure help to accurately determine whether a time domain interval comprises an integral action instance, and improve the accuracy of action integrity identification.
    Type: Application
    Filed: December 28, 2018
    Publication date: May 9, 2019
    Applicant: Beijing SenseTime Technology Development Co., Ltd
    Inventors: Xiaoou TANG, Yuanjun XIONG, Yue ZHAO, Limin WANG, Zhirong WU, Dahua LIN
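The pooling-and-scoring step from this abstract can be sketched as below. Everything here is illustrative: the mean pooling, the linear scorer, and all sizes are stand-ins, not the patent's actual design.

```python
import numpy as np

def integrity_score(segment_feats, w):
    """Pool per-segment action features over the candidate segments (the
    interval plus its adjacent segments; here a simple mean) into one
    global feature, then score action integrity with a linear model +
    sigmoid so the score lands in (0, 1)."""
    global_feat = np.mean(segment_feats, axis=0)      # pooling step
    return 1.0 / (1.0 + np.exp(-(global_feat @ w)))   # integrity score

rng = np.random.default_rng(1)
# features for [preceding segment, interval segment, following segment]
candidate_feats = rng.random((3, 16))
w = rng.random(16)
score = integrity_score(candidate_feats, w)
print(0.0 < score < 1.0)  # True
```

A detector would threshold this score to decide whether the interval covers an integral action instance.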
  • Publication number: 20190138816
    Abstract: A method and an apparatus for segmenting a video object, an electronic device, a storage medium, and a program include: performing, among at least some frames of a video, inter-frame transfer of an object segmentation result of a reference frame in sequence from the reference frame, to obtain an object segmentation result of at least one other frame among the at least some frames; determining other frames having lost objects with respect to the object segmentation result of the reference frame among the at least some frames; using the determined other frames as target frames to segment the lost objects, so as to update the object segmentation results of the target frames; and transferring the updated object segmentation results of the target frames to the at least one other frame in the video in sequence. The accuracy of video object segmentation results can therefore be improved.
    Type: Application
    Filed: December 29, 2018
    Publication date: May 9, 2019
    Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Xiaoxiao LI, Yuankai Qi, Zhe Wang, Kai Chen, Ziwei Liu, Jianping Shi, Ping Luo, Chen Change Loy, Xiaoou Tang
  • Publication number: 20180300855
    Abstract: A method and system for processing an image operate by: filtering a first real image to obtain a first feature map with an improved representation of image features; upscaling the obtained first feature map to increase its resolution, the feature map with increased resolution forming a second feature map; and constructing, from the second feature map, a second real image having enhanced quality and a higher resolution than that of the first real image.
    Type: Application
    Filed: June 20, 2018
    Publication date: October 18, 2018
    Inventors: Xiaoou Tang, Chao Dong, Tak Wai Hui, Chen Change Loy
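The three stages named in the abstract (filter, upscale, reconstruct) can be sketched as a toy pipeline. This is a sketch only: the scalar weights, the nearest-neighbour upscaling, and the ReLU-style filtering are made-up stand-ins for the learned operations.

```python
import numpy as np

def upscale_nearest(x, factor=2):
    """Upscale a feature map by an integer factor (nearest-neighbour)."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def super_resolve(img, w_feat=1.5, w_recon=0.8):
    """Toy version of the abstract's pipeline, with scalars in place of
    learned filters: (1) filter the low-res image into a feature map,
    (2) upscale that map, (3) reconstruct a higher-resolution image."""
    feat = np.maximum(w_feat * img, 0.0)   # (1) filtering stage
    up = upscale_nearest(feat, 2)          # (2) upscaling stage
    return w_recon * up                    # (3) reconstruction stage

low_res = np.arange(16, dtype=float).reshape(4, 4)
high_res = super_resolve(low_res)
print(high_res.shape)  # (8, 8)
```

The key structural point the toy preserves is that upscaling happens in feature space, between the filtering and reconstruction stages, rather than on the raw input image.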
  • Patent number: 10037457
    Abstract: Disclosed herein are a system and method for verifying face images based on canonical images. The method includes: retrieving, from a plurality of face images of an identity, a face image with a smallest frontal measurement value as a representative image of the identity; determining parameters of an image reconstruction network based on mappings between the retrieved representative image and the plurality of face images of the identity; reconstructing, by the image reconstruction network with the determined parameters, at least two input face images into corresponding canonical images respectively; and comparing the reconstructed canonical images to verify whether they belong to a same identity, where the representative image is a frontal image and the frontal measurement value represents symmetry of each face image and sharpness of the image. Thus, canonical face images can be reconstructed using only 2D information from face images under an arbitrary pose and lighting condition.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: July 31, 2018
    Assignee: Beijing SenseTime Technology Development Co., Ltd
    Inventors: Xiaoou Tang, Zhenyao Zhu, Ping Luo, Xiaogang Wang
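The representative-image selection the abstract describes (smallest frontal measurement value, where the measurement reflects symmetry and sharpness) can be sketched as follows. The concrete measure below is an assumption for illustration, not the patent's formula; `lam` is a made-up weight.

```python
import numpy as np

def frontal_measure(img, lam=1.0):
    """Smaller is more frontal in this toy: penalize asymmetry between the
    image and its horizontal mirror, reward sharpness (gradient energy)."""
    asymmetry = np.sum((img - np.fliplr(img)) ** 2)
    sharpness = np.sum(np.diff(img, axis=1) ** 2)
    return asymmetry - lam * sharpness

def representative(images):
    """Return the face image with the smallest frontal measurement value."""
    return min(images, key=frontal_measure)

rng = np.random.default_rng(6)
faces = [rng.random((8, 8)) for _ in range(5)]  # toy face images of one identity
rep = representative(faces)
print(rep.shape)  # (8, 8)
```

The selected image would then anchor the mappings used to train the reconstruction network described in the abstract.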
  • Patent number: 10002296
    Abstract: A video classification method and apparatus relate to the field of electronic and information technologies and improve the precision of video classification. The method includes: segmenting a video in a sample video library according to a time sequence to obtain a segmentation result, and generating a motion atom set; generating, by using the motion atom set and the segmentation result, a motion phrase set that can indicate a complex motion pattern, and generating a descriptive vector, based on the motion phrase set, for the video in the sample video library; and determining, by using the descriptive vector, a to-be-detected video whose category is the same as that of the video in the sample video library. The method is applicable to video classification scenarios.
    Type: Grant
    Filed: May 27, 2016
    Date of Patent: June 19, 2018
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Limin Wang, Yu Qiao, Wei Li, Chunjing Xu, Xiaoou Tang
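The descriptive-vector-and-matching step can be sketched as below. The max-pooled "phrase responses" and the cosine matching are illustrative stand-ins; the patent's motion atoms and phrases are learned structures, not random vectors.

```python
import numpy as np

def descriptive_vector(segment_responses):
    """Max-pool per-segment motion-phrase responses over a video's
    segments into one descriptive vector (toy stand-in for the
    atom/phrase encoding)."""
    return np.max(segment_responses, axis=0)

def classify(query_vec, library):
    """Assign the category of the most similar library video (cosine)."""
    best, best_sim = None, -1.0
    for label, vec in library.items():
        sim = vec @ query_vec / (np.linalg.norm(vec) * np.linalg.norm(query_vec))
        if sim > best_sim:
            best, best_sim = label, sim
    return best

rng = np.random.default_rng(2)
library = {"run": rng.random(8), "jump": rng.random(8)}   # sample-library vectors
query = descriptive_vector(rng.random((5, 8)))            # 5 segments, 8 responses
print(classify(query, library) in library)  # True
```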
  • Publication number: 20180144193
    Abstract: A method for identifying the social relation of persons in an image, including: generating face regions for the faces of the persons in the image; determining at least one spatial cue for each of the faces; extracting features related to social relation for each face from the face regions; determining a shared facial feature from the extracted features and the determined spatial cue, the determined feature being shared by multiple social relation inferences; and predicting the social relation of the persons from the shared facial feature.
    Type: Application
    Filed: December 29, 2017
    Publication date: May 24, 2018
    Inventors: Xiaoou TANG, Zhanpeng ZHANG, Ping LUO, Chen Change LOY
  • Publication number: 20180129919
    Abstract: Disclosed is a method for generating a semantic image labeling model, comprising: forming a first CNN and a second CNN, respectively; randomly initializing the first CNN; inputting a raw image and predetermined label ground truth annotations to the first CNN to iteratively update its weights so that the category label probability for the image, which is output from the first CNN, approaches the predetermined label ground truth annotations; randomly initializing the second CNN; inputting the category label probability to the second CNN to correct the input category label probability so as to determine classification errors of the category label probabilities; updating the second CNN by back-propagating the classification errors; concatenating the updated first and second CNNs; classifying each pixel in the raw image into one of the general object categories; and back-propagating classification errors through the concatenated CNN to update its weights until the classification errors are less than a predetermined threshold.
    Type: Application
    Filed: January 8, 2018
    Publication date: May 10, 2018
    Inventors: Xiaoou Tang, Ziwei Liu, Xiaoxiao Li, Ping Luo, Chen Change Loy
  • Patent number: 9811718
    Abstract: Disclosed are a method and an apparatus for face verification. The apparatus comprises a feature extracting unit configured to extract HIFs (Hidden Identity Features) for different regions of faces by using differently trained ConvNets, wherein last hidden layer neuron activations of said ConvNets are considered as the HIFs. The apparatus further comprises a verification unit configured to concatenate the extracted HIFs of each of the faces to form a feature vector, and then compare two of the formed feature vectors to determine if they are from the same identity or not.
    Type: Grant
    Filed: April 11, 2014
    Date of Patent: November 7, 2017
    Assignee: Beijing Sensetime Technology Development CO., LTD
    Inventors: Yi Sun, Xiaogang Wang, Xiaoou Tang
  • Patent number: 9798959
    Abstract: A method and a system for recognizing faces have been disclosed. The method may comprise: retrieving a pair of face images; segmenting each of the retrieved face images into a plurality of image patches, wherein each patch in one image and the corresponding patch in the other image form a pair of patches; determining a first similarity for each pair of patches; determining, from all pairs of patches, a second similarity for the pair of face images; and fusing the first similarity determined for each pair of patches and the second similarity determined for the pair of face images.
    Type: Grant
    Filed: November 30, 2013
    Date of Patent: October 24, 2017
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Xiaoou Tang, Chaochao Lu, Deli Zhao
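The patch-level and image-level similarity fusion can be sketched as below. The cosine similarity, the mixing weight `alpha`, and the threshold `thresh` are all assumptions made for illustration; the patent does not prescribe these specific choices.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(patches_a, patches_b, global_a, global_b, alpha=0.5, thresh=0.8):
    """Fuse the per-patch (first) similarities with an image-level (second)
    similarity, then threshold to decide same-identity or not."""
    patch_sims = [cosine(pa, pb) for pa, pb in zip(patches_a, patches_b)]
    fused = alpha * np.mean(patch_sims) + (1 - alpha) * cosine(global_a, global_b)
    return bool(fused >= thresh)

rng = np.random.default_rng(3)
pa = rng.random((4, 32))                      # 4 patch descriptors from image A
pb = pa + rng.normal(0, 0.01, (4, 32))        # nearly identical patches from B
same = verify(pa, pb, pa.ravel(), pb.ravel())
print(same)  # True (all similarities are close to 1)
```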
  • Patent number: 9710697
    Abstract: A method and a system for extracting face features from data of face images have been disclosed. The system may comprise: a first feature extraction unit configured to filter the data of face images into a first plurality of channels of feature maps with a first dimension and to down-sample the feature maps into a second dimension of feature maps; a second feature extraction unit configured to filter the second dimension of feature maps into a second plurality of channels of feature maps with a second dimension, and to down-sample the second plurality of channels of feature maps into a third dimension of feature maps; and a third feature extraction unit configured to filter the third dimension of feature maps so as to further reduce high responses outside the face region, thereby reducing intra-identity variances of the face images while maintaining discrimination between identities of the face images.
    Type: Grant
    Filed: November 30, 2013
    Date of Patent: July 18, 2017
    Assignee: Beijing Sensetime Technology Development Co., Ltd.
    Inventors: Xiaoou Tang, Zhenyao Zhu, Ping Luo, Xiaogang Wang
  • Publication number: 20170147868
    Abstract: Disclosed are a method and an apparatus for face verification. The apparatus comprises a feature extracting unit configured to extract HIFs (Hidden Identity Features) for different regions of faces by using differently trained ConvNets, wherein last hidden layer neuron activations of said ConvNets are considered as the HIFs. The apparatus further comprises a verification unit configured to concatenate the extracted HIFs of each of the faces to form a feature vector, and then compare two of the formed feature vectors to determine if they are from the same identity or not.
    Type: Application
    Filed: April 11, 2014
    Publication date: May 25, 2017
    Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Yi SUN, Xiaogang WANG, Xiaoou TANG
  • Publication number: 20170083754
    Abstract: Disclosed herein are a system and method for verifying face images based on canonical images. The method includes: retrieving, from a plurality of face images of an identity, a face image with a smallest frontal measurement value as a representative image of the identity; determining parameters of an image reconstruction network based on mappings between the retrieved representative image and the plurality of face images of the identity; reconstructing, by the image reconstruction network with the determined parameters, at least two input face images into corresponding canonical images respectively; and comparing the reconstructed canonical images to verify whether they belong to a same identity, where the representative image is a frontal image and the frontal measurement value represents symmetry of each face image and sharpness of the image. Thus, canonical face images can be reconstructed using only 2D information from face images under an arbitrary pose and lighting condition.
    Type: Application
    Filed: September 30, 2016
    Publication date: March 23, 2017
    Inventors: Xiaoou Tang, Zhenyao Zhu, Ping Luo, Xiaogang Wang
  • Publication number: 20170083755
    Abstract: Disclosed is an apparatus for face verification. The apparatus may comprise a feature extraction unit and a verification unit. In one embodiment, the feature extraction unit comprises a plurality of convolutional feature extraction systems trained with different face training sets, wherein each of the systems comprises: a plurality of cascaded convolutional, pooling, locally-connected, and fully-connected feature extraction units configured to extract facial features for face verification from face regions of face images; wherein an output unit of the unit cascade, which could be a fully-connected unit in one embodiment of the present application, is connected to at least one of the previous convolutional, pooling, locally-connected, or fully-connected units, and is configured to extract facial features (referred to as deep identification-verification features or DeepID2) for face verification from the facial features in the connected units.
    Type: Application
    Filed: December 1, 2016
    Publication date: March 23, 2017
    Inventors: Xiaoou TANG, Yi SUN, Xiaogang WANG
  • Publication number: 20170046427
    Abstract: A visual semantic complex network system and a method for generating the system have been disclosed. The system may comprise a collection device configured to retrieve a plurality of images and a plurality of texts associated with the images in accordance with given query keywords; a semantic concept determination device configured to determine semantic concepts of the retrieved images and retrieved texts for the retrieved images, respectively; a descriptor generation device configured to, from the retrieved images and texts, generate text descriptors and visual descriptors for the determined semantic concepts; and a semantic correlation device configured to determine semantic correlations and visual correlations from the generated text and visual descriptor, respectively, and to combine the determined semantic correlations and the determined visual correlations to generate the visual semantic complex network system.
    Type: Application
    Filed: November 30, 2013
    Publication date: February 16, 2017
    Inventors: Xiaoou TANG, Shi QIU, Xiaogang WANG
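The final combination step of this abstract (merging semantic and visual correlations into one network) can be sketched as a weighted mix of two correlation matrices. The linear mix and the weight `beta` are assumptions for illustration only.

```python
import numpy as np

def combined_correlation(sem, vis, beta=0.5):
    """Combine semantic (text-descriptor) and visual (visual-descriptor)
    correlation matrices into the edge weights of the visual semantic
    network; beta is a made-up mixing weight."""
    return beta * sem + (1 - beta) * vis

sem = np.array([[1.0, 0.2], [0.2, 1.0]])  # correlations from text descriptors
vis = np.array([[1.0, 0.6], [0.6, 1.0]])  # correlations from visual descriptors
edges = combined_correlation(sem, vis)
print(edges[0, 1])  # 0.4
```

Each matrix entry relates a pair of semantic concepts; the combined matrix is what would define the edges of the resulting complex network.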
  • Patent number: 9569699
    Abstract: The present invention discloses a system and method for synthesizing a portrait sketch from a photo. The method includes: dividing the photo into a set of photo patches; determining first matching information between each of the photo patches and training photo patches pre-divided from a set of training photos; determining second matching information between each of the photo patches and training sketch patches pre-divided from a set of training sketches; determining a shape prior for the portrait sketch to be synthesized; determining a set of matched training sketch patches for each of the photo patches based on the first and the second matching information and the shape prior; and synthesizing the portrait sketch from the determined matched training sketch patches.
    Type: Grant
    Filed: September 3, 2010
    Date of Patent: February 14, 2017
    Assignee: Shenzhen SenseTime Technology Co., Ltd.
    Inventors: Xiaogang Wang, Xiaoou Tang, Wei Zhang
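The core patch-matching idea can be sketched as below, with the shape prior and the second (photo-to-sketch) matching term omitted for brevity. The L2 nearest-neighbour rule and all sizes are illustrative stand-ins, not the patent's full formulation.

```python
import numpy as np

def synthesize_sketch(photo_patches, train_photo_patches, train_sketch_patches):
    """For each patch of the input photo, find the training photo patch at
    the smallest L2 distance and copy its paired training sketch patch."""
    out = []
    for p in photo_patches:
        dists = np.linalg.norm(train_photo_patches - p, axis=1)
        out.append(train_sketch_patches[int(np.argmin(dists))])
    return np.stack(out)

rng = np.random.default_rng(4)
train_photo = rng.random((10, 25))   # 10 training photo patches (5x5, flattened)
train_sketch = rng.random((10, 25))  # the paired training sketch patches
photo = rng.random((4, 25))          # patches of the photo to be synthesized
result = synthesize_sketch(photo, train_photo, train_sketch)
print(result.shape)  # (4, 25)
```

The selected sketch patches would then be composited, with the shape prior and both matching terms jointly constraining which candidates are kept.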
  • Publication number: 20170031953
    Abstract: A method for verifying facial data and a corresponding system comprise: retrieving a plurality of source-domain datasets from a first database and a target-domain dataset from a second database different from the first database; determining a latent subspace best matching the target-domain dataset and a posterior distribution for the determined latent subspace from the target-domain dataset and the source-domain datasets; determining information shared between the target-domain dataset and the source-domain datasets; and establishing a multi-task learning model from the posterior distribution P and the shared information M on the target-domain dataset and the source-domain datasets.
    Type: Application
    Filed: September 28, 2016
    Publication date: February 2, 2017
    Inventors: Xiaoou Tang, Chaochao Lu
  • Publication number: 20170004387
    Abstract: A method and a system for recognizing faces have been disclosed. The method may comprise: retrieving a pair of face images; segmenting each of the retrieved face images into a plurality of image patches, wherein each patch in one image and the corresponding patch in the other image form a pair of patches; determining a first similarity for each pair of patches; determining, from all pairs of patches, a second similarity for the pair of face images; and fusing the first similarity determined for each pair of patches and the second similarity determined for the pair of face images.
    Type: Application
    Filed: November 30, 2013
    Publication date: January 5, 2017
    Inventors: Xiaoou Tang, Chaochao Lu, Deli Zhao
  • Publication number: 20170004353
    Abstract: A method and a system for extracting face features from data of face images have been disclosed. The system may comprise: a first feature extraction unit configured to filter the data of face images into a first plurality of channels of feature maps with a first dimension and to down-sample the feature maps into a second dimension of feature maps; a second feature extraction unit configured to filter the second dimension of feature maps into a second plurality of channels of feature maps with a second dimension, and to down-sample the second plurality of channels of feature maps into a third dimension of feature maps; and a third feature extraction unit configured to filter the third dimension of feature maps so as to further reduce high responses outside the face region, thereby reducing intra-identity variances of the face images while maintaining discrimination between identities of the face images.
    Type: Application
    Filed: November 30, 2013
    Publication date: January 5, 2017
    Applicant: Beijing SenseTime Technology Development Co., Ltd.
    Inventors: Xiaoou TANG, Zhenyao ZHU, Ping LUO, Xiaogang WANG
  • Publication number: 20160379044
    Abstract: A method for face image recognition is disclosed. The method comprises generating one or more face region pairs of face images to be compared and recognized; forming a plurality of feature modes by exchanging the two face regions of each face region pair and horizontally flipping each face region of each face region pair; receiving, by one or more convolutional neural networks, the plurality of feature modes, each of which forms a plurality of input maps in the convolutional neural network; extracting, by the one or more convolutional neural networks, relational features from the input maps, which reflect identity similarities of the face images; and recognizing whether the compared face images belong to the same identity based on the extracted relational features of the face images. In addition, a system for face image recognition is also disclosed.
    Type: Application
    Filed: November 30, 2013
    Publication date: December 29, 2016
    Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Xiaoou Tang, Yi Sun, Xiaogang Wang
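The feature-mode construction in the last abstract (exchanging the two face regions of a pair and horizontally flipping each region) can be sketched directly. The region size and the stacking into input-map pairs are illustrative assumptions.

```python
import numpy as np

def feature_modes(region_a, region_b):
    """Form the input modes the abstract describes: every combination of
    exchanging the two regions and horizontally flipping each one."""
    modes = []
    for x, y in [(region_a, region_b), (region_b, region_a)]:  # exchange
        for fx in (x, np.fliplr(x)):                           # flip region 1
            for fy in (y, np.fliplr(y)):                       # flip region 2
                modes.append(np.stack([fx, fy]))               # one input-map pair
    return modes

rng = np.random.default_rng(5)
a, b = rng.random((8, 8)), rng.random((8, 8))  # one face region pair
modes = feature_modes(a, b)
print(len(modes))  # 8 modes: 2 orders x 2 flips x 2 flips
```

Each mode would feed a convolutional network as a set of input maps, and the per-mode relational features would be aggregated before the final same-identity decision.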