Patents Assigned to BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
  • Publication number: 20190138799
    Abstract: A method and a system for pose estimation are provided. The method includes: extracting a plurality of sets of part-feature maps from an image, each set of the extracted part-feature maps encoding messages for a particular body part and forming a node of a part-feature network; passing a message of each set of the extracted part-feature maps through the part-feature network to update the extracted part-feature maps, so that each set of the extracted part-feature maps incorporates messages from upstream nodes; and estimating, based on the updated part-feature maps, the body part within the image.
    Type: Application
    Filed: January 2, 2019
    Publication date: May 9, 2019
    Applicant: Beijing SenseTime Technology Development Co., Ltd
    Inventors: Xiaogang WANG, Xiao CHU, Wanli OUYANG, Hongsheng LI
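    As a rough illustration (not the patented implementation), the message-passing step in the abstract above can be sketched in NumPy: part-feature maps form nodes of a graph, and each downstream node is updated with a nonlinear message from its upstream node. The additive update, the ReLU message, and all names here are assumptions for the sketch:

    ```python
    import numpy as np

    def pass_messages(part_maps, edges):
        """Propagate messages along a part-feature graph.

        part_maps: dict of part name -> 2D feature map (H, W).
        edges: list of (upstream, downstream) part names in topological order.
        Each downstream map is updated by adding the upstream map passed
        through a ReLU, so it incorporates messages from upstream nodes.
        """
        updated = {name: fmap.copy() for name, fmap in part_maps.items()}
        for src, dst in edges:
            message = np.maximum(updated[src], 0.0)  # simple nonlinear message
            updated[dst] = updated[dst] + message
        return updated

    def estimate_parts(part_maps):
        """Estimate each body part's location as the argmax of its map."""
        return {name: np.unravel_index(np.argmax(fmap), fmap.shape)
                for name, fmap in part_maps.items()}
    ```

    A chain of parts (e.g. head → shoulder → elbow) would be passed as `edges`, so each part's map is refined by evidence from the parts above it before the argmax readout.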
  • Publication number: 20190122035
    Abstract: The disclosure relates to a method and a system for pose estimation. The method comprises: extracting a plurality of sets of part-feature maps from an image, each set of the extracted part-feature maps encoding messages for a particular body part and forming a node of a part-feature network; passing a message of each set of the extracted part-feature maps through the part-feature network to update the extracted part-feature maps, so that each set of the extracted part-feature maps incorporates messages from upstream nodes; and estimating, based on the updated part-feature maps, the body part within the image.
    Type: Application
    Filed: March 28, 2016
    Publication date: April 25, 2019
    Applicant: Beijing SenseTime Technology Development Co., Ltd
    Inventors: Xiaogang WANG, Xiao CHU, Wanli OUYANG, Hongsheng LI
  • Publication number: 20190095780
    Abstract: Embodiments of the present application disclose a method and apparatus for generating a neural network structure, an electronic device, and a storage medium. The method comprises: sampling a neural network structure to generate a network block, the network block comprising at least one network layer; constructing a sampling neural network based on the network block; training the sampling neural network based on sample data, and obtaining an accuracy corresponding to the sampling neural network; and in response to that the accuracy does not meet a preset condition, regenerating a new network block according to the accuracy until a sampling neural network constructed by the new network block meets the preset condition, and using the sampling neural network meeting the preset condition as a target neural network.
    Type: Application
    Filed: November 26, 2018
    Publication date: March 28, 2019
    Applicant: Beijing SenseTime Technology Development Co., Ltd
    Inventors: Zhao ZHONG, Junjie YAN, Chenglin LIU
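    The sample-construct-train-check loop described in the abstract above can be sketched as a plain search loop; everything below (the callables, the toy evaluator, the iteration cap) is a hypothetical stand-in, not the patented method:

    ```python
    import random

    def search_network(sample_block, build_network, evaluate,
                       threshold, max_iters=100):
        """Block-based architecture search loop: sample a network block,
        build a network from it, measure its accuracy, and resample a new
        block until the accuracy meets the preset condition."""
        for _ in range(max_iters):
            block = sample_block()            # sample a candidate network block
            network = build_network(block)    # stack the block into a network
            accuracy = evaluate(network)      # train/evaluate on sample data
            if accuracy >= threshold:         # preset condition met
                return network, accuracy
        return None, 0.0                      # no candidate met the condition
    ```

    In practice `evaluate` would train the sampled network on sample data; a toy evaluator (e.g. scoring by depth) is enough to exercise the control flow.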
  • Publication number: 20190073524
    Abstract: A method for predicting walking behaviors includes: encoding walking behavior information of at least one target object in a target scene within a historical time period M to obtain a first offset matrix for representing the walking behavior information of the at least one target object within the historical time period M; inputting the first offset matrix into a neural network, and outputting by the neural network a second offset matrix for representing walking behavior information of the at least one target object within a future time period M′; and decoding the second offset matrix to obtain the walking behavior prediction information of the at least one target object within the future time period M′.
    Type: Application
    Filed: October 30, 2018
    Publication date: March 7, 2019
    Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Shuai YI, Hongsheng LI, Xiaogang WANG
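    The encode → predict → decode pipeline above can be illustrated with per-step displacement ("offset") vectors; the constant-velocity predictor below is a deliberately simple stand-in for the neural network, and all names are assumptions:

    ```python
    import numpy as np

    def encode_offsets(trajectory):
        """Encode an observed walk as per-step displacements (the 'first
        offset matrix'): offsets[t] = position[t+1] - position[t]."""
        traj = np.asarray(trajectory, dtype=float)
        return np.diff(traj, axis=0)

    def constant_velocity_predictor(offsets, horizon):
        """Stand-in for the neural network: predict that the pedestrian
        keeps its mean observed velocity for `horizon` future steps."""
        mean_step = offsets.mean(axis=0)
        return np.tile(mean_step, (horizon, 1))

    def decode_offsets(last_position, offsets):
        """Decode predicted offsets back into future positions by
        accumulating displacements from the last observed position."""
        return np.asarray(last_position, dtype=float) + np.cumsum(offsets, axis=0)
    ```

    Encoding positions as offsets makes the representation translation-invariant, which is one plausible reason to feed a network displacements rather than raw coordinates.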
  • Publication number: 20190043205
    Abstract: The application relates to a method and system for tracking a target object in a video. The method includes: extracting, from the video, a 3-dimensional (3D) feature block containing the target object; decomposing the extracted 3D feature block into a 2-dimensional (2D) spatial feature map containing spatial information of the target object and a 2D spatial-temporal feature map containing spatial-temporal information of the target object; estimating, in the 2D spatial feature map, a location of the target object; determining, in the 2D spatial-temporal feature map, a speed and an acceleration of the target object; calibrating the estimated location of the target object according to the determined speed and acceleration; and tracking the target object in the video according to the calibrated location.
    Type: Application
    Filed: October 11, 2018
    Publication date: February 7, 2019
    Applicant: Beijing SenseTime Technology Development Co., Ltd
    Inventors: Xiaogang WANG, Jing SHAO, Chen-Change LOY, Kai KANG
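    The calibration step above — correcting an appearance-based location estimate with speed and acceleration — can be sketched as blending the estimate with a constant-acceleration kinematic prediction. The blend weight and all names are assumptions for the sketch, not the patented formula:

    ```python
    import numpy as np

    def kinematic_calibrate(prev_location, estimated_location,
                            speed, acceleration, dt=1.0, alpha=0.5):
        """Blend an appearance-based location estimate with a kinematic
        prediction from a constant-acceleration motion model, as a simple
        stand-in for calibrating the estimate with speed and acceleration."""
        prev = np.asarray(prev_location, dtype=float)
        est = np.asarray(estimated_location, dtype=float)
        v = np.asarray(speed, dtype=float)
        a = np.asarray(acceleration, dtype=float)
        predicted = prev + v * dt + 0.5 * a * dt ** 2  # kinematic prediction
        return alpha * est + (1.0 - alpha) * predicted  # calibrated location
    ```

    With `alpha = 0.5` the calibrated location sits halfway between what the spatial map says and what the motion model expects, damping jitter in the raw estimate.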
  • Patent number: 10037457
    Abstract: Disclosed herein are a system and method for verifying face images based on canonical images. The method includes: retrieving, from a plurality of face images of an identity, a face image with a smallest frontal measurement value as a representative image of the identity; determining parameters of an image reconstruction network based on mappings between the retrieved representative image and the plurality of face images of the identity; reconstructing, by the image reconstruction network with the determined parameters, at least two input face images into corresponding canonical images respectively; and comparing the reconstructed canonical images to verify whether they belong to a same identity, where the representative image is a frontal image and the frontal measurement value represents symmetry of each face image and sharpness of the image. Thus, canonical face images can be reconstructed using only 2D information from face images under an arbitrary pose and lighting condition.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: July 31, 2018
    Assignee: Beijing SenseTime Technology Development Co., Ltd
    Inventors: Xiaoou Tang, Zhenyao Zhu, Ping Luo, Xiaogang Wang
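    The frontal measurement described above combines symmetry and sharpness, with the smallest value selected as the representative image. One plausible form (an assumption, not the claimed formula) scores an image by its difference from its horizontal mirror minus a gradient-energy sharpness term:

    ```python
    import numpy as np

    def frontal_measurement(image, lam=1.0):
        """Score a face image: asymmetry (difference between the image and
        its horizontal mirror) minus sharpness (gradient energy), weighted
        by lam. Lower values indicate a more frontal, sharper image."""
        img = np.asarray(image, dtype=float)
        symmetry = np.abs(img - img[:, ::-1]).sum()  # 0 for a mirror-symmetric face
        gy, gx = np.gradient(img)
        sharpness = np.abs(gx).sum() + np.abs(gy).sum()
        return symmetry - lam * sharpness

    def pick_representative(images):
        """Return the image with the smallest frontal measurement value."""
        return min(images, key=frontal_measurement)
    ```

    Subtracting sharpness means that among equally symmetric candidates, the crisper image wins, matching the abstract's description that the measurement reflects both symmetry and sharpness.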
  • Patent number: 9811718
    Abstract: Disclosed are a method and an apparatus for face verification. The apparatus comprises a feature extracting unit configured to extract HIFs (Hidden Identity Features) for different regions of faces by using differently trained ConvNets, wherein the last hidden layer neuron activations of said ConvNets are taken as the HIFs. The apparatus further comprises a verification unit configured to concatenate the extracted HIFs of each face to form a feature vector, and then compare two of the formed feature vectors to determine whether they are from the same identity.
    Type: Grant
    Filed: April 11, 2014
    Date of Patent: November 7, 2017
    Assignee: Beijing Sensetime Technology Development CO., LTD
    Inventors: Yi Sun, Xiaogang Wang, Xiaoou Tang
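    The concatenate-then-compare step above can be sketched as follows; the cosine-similarity comparison and threshold are assumptions standing in for whatever verifier the patent uses:

    ```python
    import numpy as np

    def concatenate_hifs(region_features):
        """Concatenate per-region hidden-layer activations ('HIFs'),
        one array per region/network, into one feature vector per face."""
        return np.concatenate([np.asarray(f, dtype=float).ravel()
                               for f in region_features])

    def verify(vec_a, vec_b, threshold=0.5):
        """Decide whether two feature vectors belong to the same identity
        by thresholding their cosine similarity."""
        cos = vec_a @ vec_b / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b))
        return bool(cos >= threshold)
    ```

    Because each region is embedded by a differently trained network, concatenation pools complementary views of the same face before a single comparison is made.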
  • Patent number: 9798959
    Abstract: A method and a system for recognizing faces have been disclosed. The method may comprise: retrieving a pair of face images; segmenting each of the retrieved face images into a plurality of image patches, wherein each patch in one image and the corresponding patch in the other image form a pair of patches; determining a first similarity of each pair of patches; determining, from all pairs of patches, a second similarity of the pair of face images; and fusing the first similarity determined for each pair of patches with the second similarity determined for the pair of face images.
    Type: Grant
    Filed: November 30, 2013
    Date of Patent: October 24, 2017
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Xiaoou Tang, Chaochao Lu, Deli Zhao
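    The two-level similarity scheme above can be illustrated with a minimal sketch: normalized correlation as the patch-level (first) similarity, the mean over patches as the image-level (second) similarity, and a weighted average as the fusion. All three choices are assumptions, not the patented definitions:

    ```python
    import numpy as np

    def patch_similarity(patch_a, patch_b):
        """First similarity: normalized correlation between two patches."""
        a = np.asarray(patch_a, dtype=float).ravel()
        b = np.asarray(patch_b, dtype=float).ravel()
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    def fuse_similarities(patch_pairs, weight=0.5):
        """Compute per-pair (first) similarities, take their mean as the
        image-level (second) similarity, and fuse each first similarity
        with the second by a weighted average."""
        firsts = [patch_similarity(a, b) for a, b in patch_pairs]
        second = sum(firsts) / len(firsts)
        fused = [weight * s + (1 - weight) * second for s in firsts]
        return fused, second
    ```

    Fusing the two levels lets a locally weak patch match be supported (or suppressed) by the global agreement across all patches.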
  • Patent number: 9710697
    Abstract: A method and a system for extracting face features from data of face images have been disclosed. The system may comprise: a first feature extraction unit configured to filter the data of face images into a first plurality of channels of feature maps with a first dimension and down-sample the feature maps into a second dimension of feature maps; a second feature extraction unit configured to filter the second dimension of feature maps into a second plurality of channels of feature maps with a second dimension, and to down-sample the second plurality of channels of feature maps into a third dimension of feature maps; and a third feature extraction unit configured to filter the third dimension of feature maps so as to further reduce high responses outside the face region, thereby reducing intra-identity variances of the face images while maintaining discrimination between identities of the face images.
    Type: Grant
    Filed: November 30, 2013
    Date of Patent: July 18, 2017
    Assignee: Beijing Sensetime Technology Development Co., Ltd.
    Inventors: Xiaoou Tang, Zhenyao Zhu, Ping Luo, Xiaogang Wang
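    Each extraction unit above is a filter-then-downsample stage. A minimal single-channel sketch of those two primitives (valid cross-correlation plus max-pooling; kernel sizes, pooling factor, and names are assumptions):

    ```python
    import numpy as np

    def filter_map(feature_map, kernel):
        """'Valid' 2D cross-correlation of a single-channel feature map."""
        fm = np.asarray(feature_map, dtype=float)
        k = np.asarray(kernel, dtype=float)
        kh, kw = k.shape
        oh, ow = fm.shape[0] - kh + 1, fm.shape[1] - kw + 1
        out = np.empty((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = (fm[i:i + kh, j:j + kw] * k).sum()
        return out

    def downsample(feature_map, factor=2):
        """Max-pool the map by `factor`, truncating ragged edges."""
        fm = np.asarray(feature_map, dtype=float)
        h = (fm.shape[0] // factor) * factor
        w = (fm.shape[1] // factor) * factor
        fm = fm[:h, :w]
        return fm.reshape(h // factor, factor, w // factor, factor).max(axis=(1, 3))
    ```

    Chaining `filter_map` and `downsample` three times mimics the three-unit pipeline: each stage shrinks the spatial dimension while (in the full system) growing the number of channels.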
  • Patent number: 9530047
    Abstract: A method for face image recognition is disclosed. The method comprises generating one or more face region pairs of face images to be compared and recognized; forming a plurality of feature modes by exchanging the two face regions of each face region pair and horizontally flipping each face region of each face region pair; receiving, by one or more convolutional neural networks, the plurality of feature modes, each of which forms a plurality of input maps in the convolutional neural network; extracting, by the one or more convolutional neural networks, relational features from the input maps, which reflect identity similarities of the face images; and recognizing whether the compared face images belong to the same identity based on the extracted relational features of the face images. In addition, a system for face image recognition is also disclosed.
    Type: Grant
    Filed: November 30, 2013
    Date of Patent: December 27, 2016
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Xiaoou Tang, Yi Sun, Xiaogang Wang
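    The feature-mode construction above — exchanging the two regions of a pair and horizontally flipping each — can be enumerated directly. The exact set and number of modes used by the patent is not specified here, so the four modes below are an illustrative assumption:

    ```python
    import numpy as np

    def feature_modes(region_a, region_b):
        """Enumerate input modes for a face-region pair: the original order,
        the exchanged order, and the horizontally flipped variant of each.
        Each mode stacks the two regions as a 2-channel input map for a
        convolutional network."""
        a = np.asarray(region_a, dtype=float)
        b = np.asarray(region_b, dtype=float)
        fa, fb = a[:, ::-1], b[:, ::-1]   # horizontal flips
        pairs = [(a, b), (b, a), (fa, fb), (fb, fa)]
        return [np.stack(p) for p in pairs]
    ```

    Feeding several symmetric variants of the same pair gives the network multiple views of one comparison, from which it can extract more stable relational features.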