Patents by Inventor Tieniu Tan

Tieniu Tan has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11954599
    Abstract: A bi-directional interaction network (BINet)-based person search method, system, and apparatus are provided. The method includes: obtaining, as an input image, the t-th frame of an input video; normalizing the input image; and obtaining a search result for the target person to be searched by using a pre-trained person search model, where the person search model is constructed based on a residual network, and a new classification layer is added to the classification and regression layer of the residual network to obtain an identity classification probability of the target person. The method improves the accuracy of person search.
    Type: Grant
    Filed: June 15, 2021
    Date of Patent: April 9, 2024
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Zhaoxiang Zhang, Tieniu Tan, Chunfeng Song, Wenkai Dong
  • Patent number: 11887354
    Abstract: A weakly supervised image semantic segmentation method based on an intra-class discriminator includes: constructing two levels of intra-class discriminators for each image-level class to determine whether the pixels of images bearing that class belong to the target foreground or the background, and training them on weakly supervised data; generating pixel-level class labels from the two levels of intra-class discriminators, and generating and outputting a semantic segmentation result; and further training an image semantic segmentation module or network with these labels to obtain a final semantic segmentation model for unlabeled input images. The method fully mines the intra-class information implicit in the feature encoding, accurately distinguishes foreground from background pixels, and significantly improves the performance of weakly supervised semantic segmentation while relying only on image-level annotations.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: January 30, 2024
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Zhaoxiang Zhang, Tieniu Tan, Chunfeng Song, Junsong Fan
  • Publication number: 20220180622
    Abstract: A weakly supervised image semantic segmentation method based on an intra-class discriminator includes: constructing two levels of intra-class discriminators for each image-level class to determine whether the pixels of images bearing that class belong to the target foreground or the background, and training them on weakly supervised data; generating pixel-level class labels from the two levels of intra-class discriminators, and generating and outputting a semantic segmentation result; and further training an image semantic segmentation module or network with these labels to obtain a final semantic segmentation model for unlabeled input images. The method fully mines the intra-class information implicit in the feature encoding, accurately distinguishes foreground from background pixels, and significantly improves the performance of weakly supervised semantic segmentation while relying only on image-level annotations.
    Type: Application
    Filed: July 2, 2020
    Publication date: June 9, 2022
    Applicant: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Zhaoxiang ZHANG, Tieniu TAN, Chunfeng SONG, Junsong FAN
  • Publication number: 20210397828
    Abstract: A bi-directional interaction network (BINet)-based person search method, system, and apparatus are provided. The method includes: obtaining, as an input image, the t-th frame of an input video; normalizing the input image; and obtaining a search result for the target person to be searched by using a pre-trained person search model, where the person search model is constructed based on a residual network, and a new classification layer is added to the classification and regression layer of the residual network to obtain an identity classification probability of the target person. The method improves the accuracy of person search.
    Type: Application
    Filed: June 15, 2021
    Publication date: December 23, 2021
    Applicant: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Zhaoxiang ZHANG, Tieniu TAN, Chunfeng SONG, Wenkai DONG
  • Patent number: 10685434
    Abstract: The present application discloses a method for assessing the aesthetic quality of a natural image based on multi-task deep learning. The method includes: step 1: automatically learning aesthetic and semantic characteristics of the natural image based on multi-task deep learning; step 2: performing aesthetic categorization and semantic recognition on the results of the automatic learning, thereby assessing the aesthetic quality of the natural image. The present application uses semantic information to assist the learning of aesthetic feature representations so as to assess aesthetic quality more effectively; in addition, it designs various multi-task deep learning network structures that effectively exploit the aesthetic and semantic information to obtain highly accurate image aesthetic categorization.
    Type: Grant
    Filed: March 30, 2016
    Date of Patent: June 16, 2020
    Assignee: Institute of Automation, Chinese Academy of Sciences
    Inventors: Kaiqi Huang, Tieniu Tan, Ran He, Yueying Kao
  • Patent number: 10650260
    Abstract: A perspective distortion characteristic-based facial image authentication method, and a storage and processing device therefor, are proposed. The method includes: S1: recognizing key points and a contour in a 2D facial image; S2: acquiring the corresponding key points in a 3D model; S3: calculating camera parameters from the correspondence between the key points in the 2D image and those in the 3D model; S4: optimizing the camera parameters based on the contour in the 2D image; S5: sampling the key points in the 2D facial image multiple times to obtain a point cloud of camera intrinsic parameter estimates; and S6: calculating the inconsistency between this point cloud and the camera's nominal intrinsic parameters, and determining the authenticity of the facial image accordingly. The present disclosure can effectively authenticate 2D images with relatively high accuracy.
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: May 12, 2020
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Tieniu Tan, Jing Dong, Wei Wang, Bo Peng
  • Patent number: 10600238
    Abstract: An image tampering forensics method includes: labeling observation clues in a to-be-detected image; constructing a three-dimensional morphable model of the category to which the target object belongs; estimating the three-dimensional normal vector of the supporting plane from the observation clues; estimating the three-dimensional attitude of the target object from the observation clues and the three-dimensional morphable model to obtain the normal vector of the plane containing the side of the target object in contact with the supporting plane; computing the parallelism between the target object and the supporting plane, and/or among a plurality of target objects; and judging whether the to-be-detected image is a tampered image according to the parallelism.
    Type: Grant
    Filed: March 9, 2017
    Date of Patent: March 24, 2020
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Tieniu Tan, Jing Dong, Wei Wang, Bo Peng
  • Publication number: 20200026941
    Abstract: A perspective distortion characteristic-based facial image authentication method, and a storage and processing device therefor, are proposed. The method includes: S1: recognizing key points and a contour in a 2D facial image; S2: acquiring the corresponding key points in a 3D model; S3: calculating camera parameters from the correspondence between the key points in the 2D image and those in the 3D model; S4: optimizing the camera parameters based on the contour in the 2D image; S5: sampling the key points in the 2D facial image multiple times to obtain a point cloud of camera intrinsic parameter estimates; and S6: calculating the inconsistency between this point cloud and the camera's nominal intrinsic parameters, and determining the authenticity of the facial image accordingly. The present disclosure can effectively authenticate 2D images with relatively high accuracy.
    Type: Application
    Filed: June 23, 2017
    Publication date: January 23, 2020
    Applicant: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Tieniu TAN, Jing DONG, Wei WANG, Bo PENG
  • Publication number: 20190228564
    Abstract: An image tampering forensics method includes: labeling observation clues in a to-be-detected image; constructing a three-dimensional morphable model of the category to which the target object belongs; estimating the three-dimensional normal vector of the supporting plane from the observation clues; estimating the three-dimensional attitude of the target object from the observation clues and the three-dimensional morphable model to obtain the normal vector of the plane containing the side of the target object in contact with the supporting plane; computing the parallelism between the target object and the supporting plane, and/or among a plurality of target objects; and judging whether the to-be-detected image is a tampered image according to the parallelism.
    Type: Application
    Filed: March 9, 2017
    Publication date: July 25, 2019
    Applicant: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Tieniu TAN, Jing DONG, Wei WANG, Bo PENG
  • Patent number: 10223582
    Abstract: The present disclosure relates to a gait recognition method based on deep learning, which recognizes the identity of a person in a video from his or her gait using dual-channel convolutional neural networks that share weights, exploiting the strong learning capability of deep convolutional neural networks. The method is highly robust to gait changes across large view differences, effectively solving the low precision of cross-view gait recognition in prior-art technology. It can be widely used in scenarios with video monitoring, such as security monitoring in airports and supermarkets, person recognition, and criminal detection.
    Type: Grant
    Filed: October 28, 2014
    Date of Patent: March 5, 2019
    Assignee: Watrix Technology
    Inventors: Tieniu Tan, Liang Wang, Yongzhen Huang, Zifeng Wu
  • Patent number: 10223780
    Abstract: The present invention provides a method for detecting image steganography based on deep learning, which comprises: filtering the images in a training set that carry a steganographic class label or a genuine class label with a high-pass filter to obtain a training set of steganographic-class and genuine-class residual images; training a deep network model on this training set to obtain a trained deep model for steganalysis; filtering the image to be detected with the same high-pass filter to obtain a residual image to be detected; and classifying this residual image with the deep model to determine whether it comes from a steganographic image. The method can build an automatic blind steganalysis model through feature learning and identify steganographic images accurately.
    Type: Grant
    Filed: April 15, 2015
    Date of Patent: March 5, 2019
    Assignee: Institute of Automation Chinese Academy of Sciences
    Inventors: Tieniu Tan, Jing Dong, Wei Wang, Yinlong Qian
  • Publication number: 20190026884
    Abstract: The present application discloses a method for assessing the aesthetic quality of a natural image based on multi-task deep learning. The method includes: step 1: automatically learning aesthetic and semantic characteristics of the natural image based on multi-task deep learning; step 2: performing aesthetic categorization and semantic recognition on the results of the automatic learning, thereby assessing the aesthetic quality of the natural image. The present application uses semantic information to assist the learning of aesthetic feature representations so as to assess aesthetic quality more effectively; in addition, it designs various multi-task deep learning network structures that effectively exploit the aesthetic and semantic information to obtain highly accurate image aesthetic categorization.
    Type: Application
    Filed: March 30, 2016
    Publication date: January 24, 2019
    Applicant: Institute of Automation, Chinese Academy of Sciences
    Inventors: Kaiqi HUANG, Tieniu TAN, Ran HE, Yueying KAO
  • Patent number: 10096121
    Abstract: A human-shape image segmentation method comprising: extracting multi-scale context information for all first pixel points of a training human-shape image; sending the image blocks of all scales of all the first pixel points into the same convolutional neural network to form a multi-channel convolutional neural network group, wherein each channel corresponds to image blocks of one scale; training the network group with a back-propagation algorithm to obtain human-shape image segmentation training model data; extracting multi-scale context information for all second pixel points of a test human-shape image; and sending the image blocks of the different scales of each second pixel point into the corresponding channel of the human-shape image segmentation training model, wherein, if said first probability is larger than said second probability, the second pixel point belongs to the human-shape region; otherwise, it lies outside the human-shape region.
    Type: Grant
    Filed: May 23, 2014
    Date of Patent: October 9, 2018
    Assignee: Watrix Technology
    Inventors: Tieniu Tan, Yongzhen Huang, Liang Wang, Zifeng Wu
  • Publication number: 20180068429
    Abstract: The present invention provides a method for detecting image steganography based on deep learning, which comprises: filtering the images in a training set that carry a steganographic class label or a genuine class label with a high-pass filter to obtain a training set of steganographic-class and genuine-class residual images; training a deep network model on this training set to obtain a trained deep model for steganalysis; filtering the image to be detected with the same high-pass filter to obtain a residual image to be detected; and classifying this residual image with the deep model to determine whether it comes from a steganographic image. The method can build an automatic blind steganalysis model through feature learning and identify steganographic images accurately.
    Type: Application
    Filed: April 15, 2015
    Publication date: March 8, 2018
    Inventors: Tieniu Tan, Jing Dong, Wei Wang, Yinlong Qian
  • Publication number: 20170243058
    Abstract: The present disclosure relates to a gait recognition method based on deep learning, which recognizes the identity of a person in a video from his or her gait using dual-channel convolutional neural networks that share weights, exploiting the strong learning capability of deep convolutional neural networks. The method is highly robust to gait changes across large view differences, effectively solving the low precision of cross-view gait recognition in prior-art technology. It can be widely used in scenarios with video monitoring, such as security monitoring in airports and supermarkets, person recognition, and criminal detection.
    Type: Application
    Filed: October 28, 2014
    Publication date: August 24, 2017
    Inventors: Tieniu TAN, Liang WANG, Yongzhen HUANG, Zifeng WU
  • Publication number: 20170200274
    Abstract: A human-shape image segmentation method comprising: extracting multi-scale context information for all first pixel points of a training human-shape image; sending the image blocks of all scales of all the first pixel points into the same convolutional neural network to form a multi-channel convolutional neural network group, wherein each channel corresponds to image blocks of one scale; training the network group with a back-propagation algorithm to obtain human-shape image segmentation training model data; extracting multi-scale context information for all second pixel points of a test human-shape image; and sending the image blocks of the different scales of each second pixel point into the corresponding channel of the human-shape image segmentation training model, wherein, if said first probability is larger than said second probability, the second pixel point belongs to the human-shape region; otherwise, it lies outside the human-shape region.
    Type: Application
    Filed: May 23, 2014
    Publication date: July 13, 2017
    Inventors: Tieniu TAN, Yongzhen HUANG, Liang WANG, Zifeng WU
  • Patent number: 9064145
    Abstract: A method for identity recognition based on multiple feature fusion for an eye image, which comprises steps of registering and recognizing, wherein the step of registering comprises: obtaining a normalized eye image and a normalized iris image, for a given registered eye image, and extracting a multimode feature of an eye image of a user to be registered, and storing the obtained multimode feature of the eye image as registration information in a registration database; and the step of recognizing comprises: obtaining a normalized eye image and a normalized iris image, for a given recognized eye image, extracting a multimode feature of an eye image of a user to be recognized, comparing the extracted multimode feature with the multimode feature stored in the database to obtain a matching score, and obtaining a fusion score by fusing matching scores at score level, and performing the multiple feature fusion identity recognition on the eye image by a classifier.
    Type: Grant
    Filed: April 20, 2011
    Date of Patent: June 23, 2015
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Tieniu Tan, Zhenan Sun, Xiaobo Zhang, Hui Zhang
  • Patent number: 8732172
    Abstract: A shape classification method based on the topological perceptual organization (TPO) theory, comprising the steps of: extracting boundary points of shapes (S1); constructing a topological space and computing the representation of the extracted boundary points (S2); extracting global shape features from the representation of the boundary points in topological space (S3); extracting local shape features from the representation of the boundary points in Euclidean space (S4); combining the global and local features by adjusting the weight of the local features according to the performance of the global features (S5); and classifying shapes using the combination of global and local features (S6). The invention is applicable to intelligent video surveillance, e.g., object classification and scene understanding, and can also be used in automatic driving systems, where robust recognition of traffic signs plays an important role in enhancing system intelligence.
    Type: Grant
    Filed: May 13, 2010
    Date of Patent: May 20, 2014
    Assignee: Institute of Automation, Chinese Academy of Sciences
    Inventors: Tieniu Tan, Kaiqi Huang, Yongzhen Huang
  • Publication number: 20140037152
    Abstract: A method for identity recognition based on multiple feature fusion for an eye image, which comprises steps of registering and recognizing, wherein the step of registering comprises: obtaining a normalized eye image and a normalized iris image, for a given registered eye image, and extracting a multimode feature of an eye image of a user to be registered, and storing the obtained multimode feature of the eye image as registration information in a registration database; and the step of recognizing comprises: obtaining a normalized eye image and a normalized iris image, for a given recognized eye image, extracting a multimode feature of an eye image of a user to be recognized, comparing the extracted multimode feature with the multimode feature stored in the database to obtain a matching score, and obtaining a fusion score by fusing matching scores at score level, and performing the multiple feature fusion identity recognition on the eye image by a classifier.
    Type: Application
    Filed: April 20, 2011
    Publication date: February 6, 2014
    Applicant: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Tieniu Tan, Zhenan Sun, Xiaobo Zhang, Hui Zhang
  • Publication number: 20130046762
    Abstract: A shape classification method based on the topological perceptual organization (TPO) theory, comprising the steps of: extracting boundary points of shapes (S1); constructing a topological space and computing the representation of the extracted boundary points (S2); extracting global shape features from the representation of the boundary points in topological space (S3); extracting local shape features from the representation of the boundary points in Euclidean space (S4); combining the global and local features by adjusting the weight of the local features according to the performance of the global features (S5); and classifying shapes using the combination of global and local features (S6). The invention is applicable to intelligent video surveillance, e.g., object classification and scene understanding, and can also be used in automatic driving systems, where robust recognition of traffic signs plays an important role in enhancing system intelligence.
    Type: Application
    Filed: May 13, 2010
    Publication date: February 21, 2013
    Applicant: Institute of Automation, Chinese Academy of Sciences
    Inventors: Tieniu Tan, Kaiqi Huang, Yongzhen Huang
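
Patent 11954599 above adds a new classification layer to a residual network's classification and regression head to obtain an identity classification probability per detected person. A minimal sketch of that idea; all dimensions (a 2048-d feature, 2 box classes, 100 identities) and the random weights are purely illustrative assumptions, not values from the patent:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical dimensions: a 2048-d backbone feature per detected box, the
# existing person-vs-background classifier, and the ADDED identity classifier.
rng = np.random.default_rng(0)
feat = rng.standard_normal(2048)           # residual-network feature for one box
W_box = rng.standard_normal((2, 2048))     # existing classification layer
W_id = rng.standard_normal((100, 2048))    # added layer: 100 known identities

box_prob = softmax(W_box @ feat)           # person vs. background
id_prob = softmax(W_id @ feat)             # identity classification probability
best_id = int(np.argmax(id_prob))
```

In a trained model the boxes whose identity probability peaks on the queried person would form the search result.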
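
Patent 11887354 above trains intra-class discriminators that split the pixels of an image carrying a given image-level class into foreground and background, yielding pixel-level pseudo-labels. A toy sketch of that pseudo-labeling step, assuming (purely for illustration) a linear discriminator over per-pixel features:

```python
import numpy as np

# Per-pixel features of one image tagged with class c, and a (hypothetical)
# learned discriminator direction w_c for that class; the sign of the score
# gives the foreground/background pseudo-label used for further training.
rng = np.random.default_rng(1)
pixel_feats = rng.standard_normal((32 * 32, 64))   # 1024 pixels, 64-d features
w_c = rng.standard_normal(64)                      # intra-class discriminator

scores = pixel_feats @ w_c
pseudo_label = (scores > 0).astype(np.int64)       # 1 = foreground, 0 = background
```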
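
Patent 10685434 above learns aesthetic and semantic characteristics jointly. A minimal multi-task sketch with one shared feature and two softmax heads; the head sizes, random weights, and the 0.5 task weight in the joint loss are assumptions, not values from the patent:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(4)
shared = rng.standard_normal(128)          # shared deep feature of the image
W_aes = rng.standard_normal((2, 128))      # aesthetic head: high vs. low quality
W_sem = rng.standard_normal((10, 128))     # semantic head: 10 concepts (assumed)

p_aes = softmax(W_aes @ shared)
p_sem = softmax(W_sem @ shared)
y_aes, y_sem = 1, 3                        # toy ground-truth labels
# joint objective: aesthetic loss plus a weighted semantic loss
joint_loss = -np.log(p_aes[y_aes]) - 0.5 * np.log(p_sem[y_sem])
```

Training on the joint loss is what lets the semantic task shape the shared aesthetic representation.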
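
Patent 10650260 above (steps S5 and S6) resamples key points to build a point cloud of camera intrinsic-parameter estimates and measures its inconsistency with the nominal intrinsics. A sketch of that consistency test, where `estimate_focal()` is a hypothetical stand-in for the real per-sample PnP-based estimation and the z-score threshold is an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

def estimate_focal(noise):
    # stands in for one intrinsic estimate from a resampled set of key points
    return 1200.0 + noise

# S5: point cloud of focal-length estimates from repeated key-point sampling
estimates = np.array([estimate_focal(rng.normal(0.0, 25.0)) for _ in range(200)])

# S6: inconsistency of the nominal focal length with that cloud (z-score)
nominal_f = 1210.0
z = abs(nominal_f - estimates.mean()) / estimates.std()
is_authentic = bool(z < 3.0)               # hypothetical consistency threshold
```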
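
Patent 10600238 above judges tampering from the parallelism between the estimated supporting-plane normal and the object's contact-plane normal. A sketch of that final check; the 5-degree threshold is an illustrative assumption:

```python
import numpy as np

def angle_deg(n1, n2):
    # angle between two plane normals, ignoring orientation (abs of cosine)
    c = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return float(np.degrees(np.arccos(np.clip(abs(c), 0.0, 1.0))))

support_n = np.array([0.0, 0.0, 1.0])      # estimated supporting-plane normal
contact_n = np.array([0.02, 0.0, 1.0])     # object contact-plane normal
tampered = angle_deg(support_n, contact_n) > 5.0   # hypothetical threshold
```

A spliced-in object whose estimated pose is geometrically inconsistent with the scene would produce a large angle and be flagged.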
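
Patent 10223582 above uses dual-channel convolutional neural networks that share weights. The weight-sharing idea can be sketched with a single (hypothetical, linear) embedding applied to both gait inputs; the distance threshold is an assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.standard_normal((16, 64))          # ONE set of weights for both channels

def embed(x):
    # both gait inputs pass through the same (shared-weight) embedding
    return np.tanh(W @ x)

gait_a = rng.standard_normal(64)           # e.g., a flattened gait template
gait_b = rng.standard_normal(64)
dist = np.linalg.norm(embed(gait_a) - embed(gait_b))
same_person = bool(dist < 4.0)             # hypothetical decision threshold
```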
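
Patent 10223780 above first high-pass filters images and trains the deep model on the residuals. The abstract does not specify the filter; the 5x5 "KV" kernel common in steganalysis is used here as an assumption:

```python
import numpy as np

# 5x5 "KV" high-pass kernel: its coefficients sum to zero, so flat regions
# give zero residual and only high-frequency (stego-like) noise survives.
kv = (1.0 / 12.0) * np.array([
    [-1,  2,  -2,  2, -1],
    [ 2, -6,   8, -6,  2],
    [-2,  8, -12,  8, -2],
    [ 2, -6,   8, -6,  2],
    [-1,  2,  -2,  2, -1],
], dtype=np.float64)

def residual(img):
    # valid-mode 2-D correlation with the kernel; the output is what the
    # deep steganalysis model would be trained on
    h, w = img.shape
    out = np.zeros((h - 4, w - 4))
    for i in range(h - 4):
        for j in range(w - 4):
            out[i, j] = np.sum(img[i:i + 5, j:j + 5] * kv)
    return out

flat = np.full((16, 16), 7.0)              # a constant image has zero residual
res = residual(flat)
```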
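
Patent 10096121 above extracts image blocks at several scales around each pixel, one scale per channel of the network group. A sketch of that patch extraction; the scales 15/31/63 and the border-replication padding are illustrative assumptions:

```python
import numpy as np

def multiscale_patches(img, y, x, scales=(15, 31, 63)):
    # crop odd-sized context windows centered on pixel (y, x) at each scale,
    # replicating the border so patches near the edges stay full-sized
    pad = max(scales) // 2
    padded = np.pad(img, pad, mode="edge")
    cy, cx = y + pad, x + pad
    return [padded[cy - s // 2:cy + s // 2 + 1,
                   cx - s // 2:cx + s // 2 + 1] for s in scales]

img = np.arange(100.0).reshape(10, 10)
patches = multiscale_patches(img, 4, 4)    # one patch per network channel
```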
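
Patent 9064145 above fuses per-modality matching scores at score level before the classifier decides. A minimal weighted-sum sketch; the example modalities, weights, and acceptance threshold are illustrative assumptions:

```python
import numpy as np

def fuse(scores, weights):
    # normalized weighted sum of per-modality matching scores
    s = np.asarray(scores, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(s @ (w / w.sum()))

# e.g., an iris match score and a periocular match score, iris weighted higher
fused = fuse([0.82, 0.64], [0.7, 0.3])
accepted = fused > 0.5                     # hypothetical decision threshold
```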
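
Patent 8732172 above (step S5) adjusts the weight of the local features according to how well the global features perform. The abstract does not give the weighting rule; a simple "1 minus global accuracy" form is assumed here purely for illustration:

```python
import numpy as np

def combine(global_feat, local_feat, global_acc):
    # down-weight local features when the global (topological) features
    # already classify well; "1 - accuracy" is an assumed weighting rule
    alpha = 1.0 - global_acc
    return np.concatenate([global_feat, alpha * local_feat])

g = np.array([0.2, 0.9])                   # global shape features
l = np.array([0.5, 0.1, 0.4])              # local shape features
v = combine(g, l, global_acc=0.8)          # local part scaled by 0.2
```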