Patents by Inventor Tieniu Tan
Tieniu Tan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11954599
Abstract: A bi-directional interaction network (BINet)-based person search method, system, and apparatus are provided. The method includes: obtaining, as an input image, the t-th frame of an input video; normalizing the input image; and obtaining a search result for a to-be-searched target person by using a pre-trained person search model, where the person search model is constructed based on a residual network, and a new classification layer is added to the classification and regression layer of the residual network to obtain an identity classification probability of the target person. The method improves the accuracy of person search.
Type: Grant
Filed: June 15, 2021
Date of Patent: April 9, 2024
Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
Inventors: Zhaoxiang Zhang, Tieniu Tan, Chunfeng Song, Wenkai Dong
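The abstract above describes attaching an extra identity-classification layer to a residual-network-based detector. Below is a minimal, hypothetical PyTorch illustration of that general pattern, not the patented BINet; the class name PersonSearchHead, the ResNet-50 backbone, and num_identities=500 are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class PersonSearchHead(nn.Module):
    """Hypothetical sketch: a residual-network backbone whose pooled features feed
    both a detection-style classifier and an added identity-classification layer."""
    def __init__(self, num_identities=500):
        super().__init__()
        backbone = resnet50()                                 # randomly initialized; pretrained weights not needed here
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # conv stages + global average pooling
        self.detect_cls = nn.Linear(2048, 2)                  # stand-in for the person/background head
        self.identity_cls = nn.Linear(2048, num_identities)   # the extra identity-classification layer

    def forward(self, x):
        f = self.features(x).flatten(1)                       # [N, 2048]
        return self.detect_cls(f), torch.softmax(self.identity_cls(f), dim=1)

model = PersonSearchHead()
frame = torch.randn(1, 3, 224, 224)                           # a normalized frame (dummy data)
det_logits, identity_prob = model(frame)
print(det_logits.shape, identity_prob.shape)                  # torch.Size([1, 2]) torch.Size([1, 500])
```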
-
Patent number: 11887354
Abstract: A weakly supervised image semantic segmentation method based on an intra-class discriminator includes: constructing two levels of intra-class discriminators for each image-level class to determine whether pixels belonging to that class belong to the target foreground or the background, and training them with weakly supervised data; generating pixel-level class labels based on the two levels of intra-class discriminators, and generating and outputting a semantic segmentation result; and further training an image semantic segmentation module or network with these labels to obtain a final semantic segmentation model for unlabeled input images. The method fully mines the intra-class image information implicit in the feature encoding, accurately distinguishes foreground pixels from background pixels, and significantly improves the performance of weakly supervised semantic segmentation while relying only on image-level annotations.
Type: Grant
Filed: July 2, 2020
Date of Patent: January 30, 2024
Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
Inventors: Zhaoxiang Zhang, Tieniu Tan, Chunfeng Song, Junsong Fan
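As a rough illustration of the intra-class discriminator idea (a per-class decision over pixel features that separates foreground from background and yields pixel-level pseudo labels from image-level labels), here is a hypothetical PyTorch sketch. It is not the patented two-level design; the linear per-class directions and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

num_classes, feat_dim, H, W = 20, 64, 32, 32                  # assumed sizes for the sketch

class IntraClassDiscriminator(nn.Module):
    """Hypothetical sketch: one linear direction per image-level class scores each
    pixel feature as foreground (positive) or background (non-positive)."""
    def __init__(self):
        super().__init__()
        self.directions = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, pixel_feats, image_classes):
        # pixel_feats: [feat_dim, H, W]; image_classes: class indices present in the image
        pseudo = torch.zeros(H, W, dtype=torch.long)           # 0 = background
        for c in image_classes:
            score = torch.einsum('d,dhw->hw', self.directions[c], pixel_feats)
            pseudo[score > 0] = c + 1                          # foreground pixels receive the class id (offset by 1)
        return pseudo                                          # pixel-level pseudo label for training a segmenter

disc = IntraClassDiscriminator()
feats = torch.randn(feat_dim, H, W)                            # pixel features from any backbone (dummy here)
print(disc(feats, image_classes=[3, 7]).shape)                 # torch.Size([32, 32])
```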
-
Publication number: 20220180622
Abstract: A weakly supervised image semantic segmentation method based on an intra-class discriminator includes: constructing two levels of intra-class discriminators for each image-level class to determine whether pixels belonging to that class belong to the target foreground or the background, and training them with weakly supervised data; generating pixel-level class labels based on the two levels of intra-class discriminators, and generating and outputting a semantic segmentation result; and further training an image semantic segmentation module or network with these labels to obtain a final semantic segmentation model for unlabeled input images. The method fully mines the intra-class image information implicit in the feature encoding, accurately distinguishes foreground pixels from background pixels, and significantly improves the performance of weakly supervised semantic segmentation while relying only on image-level annotations.
Type: Application
Filed: July 2, 2020
Publication date: June 9, 2022
Applicant: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
Inventors: Zhaoxiang ZHANG, Tieniu TAN, Chunfeng SONG, Junsong FAN
-
Publication number: 20210397828
Abstract: A bi-directional interaction network (BINet)-based person search method, system, and apparatus are provided. The method includes: obtaining, as an input image, the t-th frame of an input video; normalizing the input image; and obtaining a search result for a to-be-searched target person by using a pre-trained person search model, where the person search model is constructed based on a residual network, and a new classification layer is added to the classification and regression layer of the residual network to obtain an identity classification probability of the target person. The method improves the accuracy of person search.
Type: Application
Filed: June 15, 2021
Publication date: December 23, 2021
Applicant: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
Inventors: Zhaoxiang ZHANG, Tieniu TAN, Chunfeng SONG, Wenkai DONG
-
Patent number: 10685434
Abstract: The present application discloses a method for assessing the aesthetic quality of a natural image based on multi-task deep learning. The method includes: step 1: automatically learning aesthetic and semantic characteristics of the natural image based on multi-task deep learning; step 2: performing aesthetic categorization and semantic recognition on the results of this automatic learning, thereby assessing the aesthetic quality of the natural image. The present application uses semantic information to assist the learning of aesthetic feature representations so as to assess aesthetic quality more effectively. In addition, it designs several multi-task deep learning network structures that exploit the aesthetic and semantic information to obtain highly accurate image aesthetic categorization.
Type: Grant
Filed: March 30, 2016
Date of Patent: June 16, 2020
Assignee: Institute of Automation, Chinese Academy of Sciences
Inventors: Kaiqi Huang, Tieniu Tan, Ran He, Yueying Kao
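The abstract describes multi-task learning with shared features and separate aesthetic and semantic heads. A minimal PyTorch sketch of such a shared-backbone, two-head network with a weighted joint loss is shown below; the tiny backbone, num_tags, and the 0.1 loss weight are illustrative assumptions, not the network structures designed in the application.

```python
import torch
import torch.nn as nn

class MultiTaskAesthetic(nn.Module):
    """Hypothetical sketch: shared feature extractor with two heads, one for
    aesthetic quality (binary) and one for semantic tags (multi-label)."""
    def __init__(self, num_tags=29):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.aesthetic_head = nn.Linear(16, 2)
        self.semantic_head = nn.Linear(16, num_tags)

    def forward(self, x):
        f = self.shared(x)
        return self.aesthetic_head(f), self.semantic_head(f)

model = MultiTaskAesthetic()
x = torch.randn(4, 3, 224, 224)                               # dummy image batch
aes_label = torch.randint(0, 2, (4,))                         # high / low aesthetic quality
tag_label = torch.randint(0, 2, (4, 29)).float()              # multi-label semantic tags
aes_logits, tag_logits = model(x)
loss = nn.CrossEntropyLoss()(aes_logits, aes_label) \
     + 0.1 * nn.BCEWithLogitsLoss()(tag_logits, tag_label)    # weighted sum of the two task losses
loss.backward()
print(float(loss))
```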
-
Patent number: 10650260
Abstract: A facial image authentication method based on perspective distortion characteristics, together with a storage and processing device, is proposed. The method includes: S1: recognizing key points and a contour in a 2D facial image; S2: acquiring key points in a corresponding 3D model; S3: calculating camera parameters based on the correspondence between the key points in the 2D image and the key points in the 3D model; S4: optimizing the camera parameters based on the contour in the 2D image; S5: sampling the key points in the 2D facial image multiple times to obtain a camera intrinsic parameter estimation point cloud; and S6: calculating the inconsistency between the camera intrinsic parameter estimation point cloud and the camera's nominal intrinsic parameters, and determining the authenticity of the facial image. The present disclosure can effectively authenticate the 2D image and achieves relatively high accuracy.
Type: Grant
Filed: June 23, 2017
Date of Patent: May 12, 2020
Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
Inventors: Tieniu Tan, Jing Dong, Wei Wang, Bo Peng
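Step S6 compares a point cloud of intrinsic-parameter estimates with the camera's nominal intrinsics. A toy NumPy sketch of one possible decision rule (a z-score of the nominal focal length against the estimate cloud) is given below; the patent does not specify this rule, and the threshold and synthetic numbers are assumptions.

```python
import numpy as np

def authenticate(focal_estimates, nominal_focal, z_threshold=3.0):
    """Toy decision rule: compare a cloud of focal-length estimates (obtained by
    repeatedly re-sampling facial key points and re-estimating camera intrinsics)
    against the camera's nominal focal length. A large discrepancy suggests the
    face region is inconsistent with the claimed camera, i.e. possibly not genuine."""
    mu, sigma = focal_estimates.mean(), focal_estimates.std() + 1e-6
    z = abs(nominal_focal - mu) / sigma
    return z <= z_threshold, z                                 # (is_authentic, inconsistency score)

# Dummy point cloud of estimates from 200 hypothetical re-samplings (in pixels).
rng = np.random.default_rng(0)
estimates = rng.normal(loc=1450.0, scale=40.0, size=200)
ok, score = authenticate(estimates, nominal_focal=1500.0)
print(ok, round(score, 2))
```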
-
Patent number: 10600238
Abstract: An image tampering forensics method includes labeling observation clues of a target object and its supporting plane in a to-be-detected image, constructing a three-dimensional morphable model for the category to which the target object belongs, estimating a three-dimensional normal vector to the supporting plane according to the observation clues, estimating the three-dimensional attitude of the target object according to the observation clues and the three-dimensional morphable model to obtain the normal vector of the plane containing the side of the target object in contact with the supporting plane, computing the parallelism between the target object and the supporting plane and/or among a plurality of target objects, and judging whether the to-be-detected image is a tampered image according to this parallelism.
Type: Grant
Filed: March 9, 2017
Date of Patent: March 24, 2020
Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
Inventors: Tieniu Tan, Jing Dong, Wei Wang, Bo Peng
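The key quantity in this abstract is the parallelism between the estimated normal of the object's contact face and the supporting plane's normal. A short NumPy sketch that measures that parallelism as an angle and thresholds it is shown below; the 10-degree threshold and the dummy normals are assumptions, not values from the patent.

```python
import numpy as np

def parallelism(n_object, n_plane):
    """Angle (degrees) between the object's contact-face normal and the
    supporting plane's normal; 0 means perfectly parallel."""
    n1 = n_object / np.linalg.norm(n_object)
    n2 = n_plane / np.linalg.norm(n_plane)
    cos = np.clip(abs(np.dot(n1, n2)), -1.0, 1.0)              # abs(): the sign of a normal is ambiguous
    return np.degrees(np.arccos(cos))

# Dummy normals, as if estimated from the observation clues and a 3D morphable model.
plane_normal  = np.array([0.02, 0.98, 0.20])
object_normal = np.array([0.30, 0.85, 0.43])                   # noticeably tilted
angle = parallelism(object_normal, plane_normal)
print(f"{angle:.1f} deg", "tampering suspected" if angle > 10.0 else "consistent")
```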
-
Publication number: 20200026941
Abstract: A facial image authentication method based on perspective distortion characteristics, together with a storage and processing device, is proposed. The method includes: S1: recognizing key points and a contour in a 2D facial image; S2: acquiring key points in a corresponding 3D model; S3: calculating camera parameters based on the correspondence between the key points in the 2D image and the key points in the 3D model; S4: optimizing the camera parameters based on the contour in the 2D image; S5: sampling the key points in the 2D facial image multiple times to obtain a camera intrinsic parameter estimation point cloud; and S6: calculating the inconsistency between the camera intrinsic parameter estimation point cloud and the camera's nominal intrinsic parameters, and determining the authenticity of the facial image. The present disclosure can effectively authenticate the 2D image and achieves relatively high accuracy.
Type: Application
Filed: June 23, 2017
Publication date: January 23, 2020
Applicant: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
Inventors: Tieniu TAN, Jing DONG, Wei WANG, Bo PENG
-
Publication number: 20190228564
Abstract: An image tampering forensics method includes labeling observation clues of a target object and its supporting plane in a to-be-detected image, constructing a three-dimensional morphable model for the category to which the target object belongs, estimating a three-dimensional normal vector to the supporting plane according to the observation clues, estimating the three-dimensional attitude of the target object according to the observation clues and the three-dimensional morphable model to obtain the normal vector of the plane containing the side of the target object in contact with the supporting plane, computing the parallelism between the target object and the supporting plane and/or among a plurality of target objects, and judging whether the to-be-detected image is a tampered image according to this parallelism.
Type: Application
Filed: March 9, 2017
Publication date: July 25, 2019
Applicant: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
Inventors: Tieniu TAN, Jing DONG, Wei WANG, Bo PENG
-
Patent number: 10223780
Abstract: The present invention provides a method for detecting image steganography based on deep learning, which comprises: filtering the images carrying a steganographic class label or a true class label in a training set with a high-pass filter to obtain a training set comprising steganographic-class residual images and true-class residual images; training a deep network model on said training set to obtain a trained deep model for steganalysis; filtering the image to be detected with said high-pass filter to obtain a residual image to be detected; and detecting said residual image with said deep model so as to determine whether it is a steganographic image. The method can create an automatic blind steganalysis model through feature learning and can identify steganographic images accurately.
Type: Grant
Filed: April 15, 2015
Date of Patent: March 5, 2019
Assignee: Institute of Automation, Chinese Academy of Sciences
Inventors: Tieniu Tan, Jing Dong, Wei Wang, Yinlong Qian
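A minimal PyTorch sketch of the pipeline described here, high-pass filtering an image into a residual and feeding that residual to a small CNN classifier, follows. The specific KV kernel and the tiny classifier are common steganalysis choices used only for illustration; the patent text above does not fix either.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A high-pass kernel (the "KV" kernel often used in steganalysis) that suppresses
# image content and exposes stego noise; an illustrative choice, not mandated above.
KV = torch.tensor([[-1,  2, -2,  2, -1],
                   [ 2, -6,  8, -6,  2],
                   [-2,  8,-12,  8, -2],
                   [ 2, -6,  8, -6,  2],
                   [-1,  2, -2,  2, -1]], dtype=torch.float32) / 12.0

def residual(image):
    """High-pass filter a grayscale image batch [N, 1, H, W] into residual images."""
    return F.conv2d(image, KV.view(1, 1, 5, 5), padding=2)

classifier = nn.Sequential(                                    # tiny stand-in for the deep steganalysis model
    nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2))                                           # cover (true) vs. stego

images = torch.rand(4, 1, 256, 256)                            # dummy batch
logits = classifier(residual(images))
print(logits.argmax(dim=1))                                    # predicted labels
```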
-
Patent number: 10223582
Abstract: The present disclosure relates to a gait recognition method based on deep learning, which recognizes the identity of a person in a video from his or her gait using dual-channel convolutional neural networks with shared weights, exploiting the strong learning capability of deep convolutional neural networks. The method is robust to gait changes across large viewpoint differences and can effectively address the low precision of existing cross-view gait recognition technology. It can be widely used in scenarios with video surveillance, such as security monitoring in airports and supermarkets, person recognition, and criminal detection.
Type: Grant
Filed: October 28, 2014
Date of Patent: March 5, 2019
Assignee: Watrix Technology
Inventors: Tieniu Tan, Liang Wang, Yongzhen Huang, Zifeng Wu
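Below is a hypothetical PyTorch sketch of a weight-sharing dual-channel (Siamese) network that compares two gait inputs and predicts whether they belong to the same person. The architecture, the input size (128 by 88, a typical gait-energy-image size), and the absolute-difference fusion are illustrative assumptions, not the patented design. Training such a network would typically use pairs labeled same person / different person.

```python
import torch
import torch.nn as nn

class SiameseGaitNet(nn.Module):
    """Hypothetical sketch: two inputs (e.g. gait representations from different views)
    pass through the same convolutional branch (shared weights); their embeddings are
    compared to decide whether they come from the same person."""
    def __init__(self):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(1, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.match = nn.Linear(32, 2)                          # same person / different person

    def forward(self, a, b):
        fa, fb = self.branch(a), self.branch(b)                # weight sharing: one branch, two passes
        return self.match(torch.abs(fa - fb))

net = SiameseGaitNet()
probe, gallery = torch.rand(1, 1, 128, 88), torch.rand(1, 1, 128, 88)
print(torch.softmax(net(probe, gallery), dim=1))
```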
-
Publication number: 20190026884
Abstract: The present application discloses a method for assessing the aesthetic quality of a natural image based on multi-task deep learning. The method includes: step 1: automatically learning aesthetic and semantic characteristics of the natural image based on multi-task deep learning; step 2: performing aesthetic categorization and semantic recognition on the results of this automatic learning, thereby assessing the aesthetic quality of the natural image. The present application uses semantic information to assist the learning of aesthetic feature representations so as to assess aesthetic quality more effectively. In addition, it designs several multi-task deep learning network structures that exploit the aesthetic and semantic information to obtain highly accurate image aesthetic categorization.
Type: Application
Filed: March 30, 2016
Publication date: January 24, 2019
Applicant: Institute of Automation, Chinese Academy of Sciences
Inventors: Kaiqi HUANG, Tieniu TAN, Ran HE, Yueying KAO
-
Patent number: 10096121
Abstract: A human-shape image segmentation method comprising: extracting multi-scale context information for all first pixel points of a human-shape training image; sending the image blocks of all scales of all the first pixel points into the same convolutional neural network to form a multi-channel convolutional neural network group, wherein each channel corresponds to image blocks of one scale; training the neural network group with the back-propagation algorithm to obtain human-shape image segmentation training model data; extracting multi-scale context information for all second pixel points of a human-shape test image; and sending the image blocks of the different scales of each second pixel point into the corresponding channel of the trained human-shape image segmentation model, wherein if a first probability is larger than a second probability, the second pixel point belongs to the human-shape region; otherwise, it lies outside the human-shape region.
Type: Grant
Filed: May 23, 2014
Date of Patent: October 9, 2018
Assignee: Watrix Technology
Inventors: Tieniu Tan, Yongzhen Huang, Liang Wang, Zifeng Wu
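As a rough sketch of the multi-scale, multi-channel idea, the following hypothetical PyTorch snippet classifies pixels from image blocks at several scales and compares the two output probabilities. The shared tiny branch, the scale set (16, 32, 64), and all shapes are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class MultiScalePixelClassifier(nn.Module):
    """Hypothetical sketch: a small CNN applied to blocks of several scales around a
    pixel; per-scale features are concatenated and mapped to two probabilities
    (inside vs. outside the human-shape region)."""
    def __init__(self, scales=(16, 32, 64)):
        super().__init__()
        self.scales = scales
        self.branch = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.cls = nn.Linear(8 * len(scales), 2)

    def forward(self, blocks):
        # blocks: list of tensors [N, 3, s, s], one per scale (already cropped and resized)
        feats = [self.branch(b) for b in blocks]
        probs = torch.softmax(self.cls(torch.cat(feats, dim=1)), dim=1)
        return probs[:, 0] > probs[:, 1]                       # True: pixel inside the human-shape region

model = MultiScalePixelClassifier()
blocks = [torch.rand(10, 3, s, s) for s in (16, 32, 64)]       # dummy blocks for 10 pixels
print(model(blocks))
```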
-
Publication number: 20180068429
Abstract: The present invention provides a method for detecting image steganography based on deep learning, which comprises: filtering the images carrying a steganographic class label or a true class label in a training set with a high-pass filter to obtain a training set comprising steganographic-class residual images and true-class residual images; training a deep network model on said training set to obtain a trained deep model for steganalysis; filtering the image to be detected with said high-pass filter to obtain a residual image to be detected; and detecting said residual image with said deep model so as to determine whether it is a steganographic image. The method can create an automatic blind steganalysis model through feature learning and can identify steganographic images accurately.
Type: Application
Filed: April 15, 2015
Publication date: March 8, 2018
Inventors: Tieniu Tan, Jing Dong, Wei Wang, Yinlong Qian
-
Publication number: 20170243058
Abstract: The present disclosure relates to a gait recognition method based on deep learning, which recognizes the identity of a person in a video from his or her gait using dual-channel convolutional neural networks with shared weights, exploiting the strong learning capability of deep convolutional neural networks. The method is robust to gait changes across large viewpoint differences and can effectively address the low precision of existing cross-view gait recognition technology. It can be widely used in scenarios with video surveillance, such as security monitoring in airports and supermarkets, person recognition, and criminal detection.
Type: Application
Filed: October 28, 2014
Publication date: August 24, 2017
Inventors: Tieniu TAN, Liang WANG, Yongzhen HUANG, Zifeng WU
-
Publication number: 20170200274
Abstract: A human-shape image segmentation method comprising: extracting multi-scale context information for all first pixel points of a human-shape training image; sending the image blocks of all scales of all the first pixel points into the same convolutional neural network to form a multi-channel convolutional neural network group, wherein each channel corresponds to image blocks of one scale; training the neural network group with the back-propagation algorithm to obtain human-shape image segmentation training model data; extracting multi-scale context information for all second pixel points of a human-shape test image; and sending the image blocks of the different scales of each second pixel point into the corresponding channel of the trained human-shape image segmentation model, wherein if a first probability is larger than a second probability, the second pixel point belongs to the human-shape region; otherwise, it lies outside the human-shape region.
Type: Application
Filed: May 23, 2014
Publication date: July 13, 2017
Inventors: Tieniu TAN, Yongzhen HUANG, Liang WANG, Zifeng WU
-
Patent number: 9064145
Abstract: A method for identity recognition based on multiple-feature fusion for an eye image, comprising steps of registering and recognizing. The registering step comprises: obtaining a normalized eye image and a normalized iris image for a given registered eye image, extracting a multimode feature of the eye image of the user to be registered, and storing the obtained multimode feature of the eye image as registration information in a registration database. The recognizing step comprises: obtaining a normalized eye image and a normalized iris image for a given eye image to be recognized, extracting a multimode feature of the eye image of the user to be recognized, comparing the extracted multimode feature with the multimode features stored in the database to obtain matching scores, fusing the matching scores at score level to obtain a fusion score, and performing the multiple-feature-fusion identity recognition on the eye image with a classifier.
Type: Grant
Filed: April 20, 2011
Date of Patent: June 23, 2015
Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
Inventors: Tieniu Tan, Zhenan Sun, Xiaobo Zhang, Hui Zhang
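The recognition step fuses per-modality matching scores at score level and hands the fused result to a classifier. A toy Python sketch of such score-level fusion, using scikit-learn's logistic regression on synthetic genuine and impostor scores, is shown below; the two modalities, the score distributions, and the choice of classifier are assumptions rather than the patented scheme.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each modality (e.g. iris texture, periocular appearance) yields a matching score;
# a classifier on the fused score vector makes the accept/reject decision.
# All data below is synthetic.
rng = np.random.default_rng(1)
genuine  = np.column_stack([rng.normal(0.8, 0.10, 200), rng.normal(0.70, 0.15, 200)])
impostor = np.column_stack([rng.normal(0.3, 0.10, 200), rng.normal(0.35, 0.15, 200)])
scores = np.vstack([genuine, impostor])
labels = np.array([1] * 200 + [0] * 200)

fusion = LogisticRegression().fit(scores, labels)              # learned score-level fusion
query = np.array([[0.75, 0.65]])                               # [modality-1 score, modality-2 score]
print("match" if fusion.predict(query)[0] == 1 else "no match",
      fusion.predict_proba(query)[0, 1])                       # fused score
```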
-
Patent number: 8732172
Abstract: A shape classification method based on the topological perceptual organization (TPO) theory, comprising the steps of: extracting boundary points of shapes (S1); constructing a topological space and computing the representation of the extracted boundary points (S2); extracting global features of shapes from the representation of the boundary points in the topological space (S3); extracting local features of shapes from the representation of the boundary points in Euclidean space (S4); combining global and local features by adjusting the weight of the local features according to the performance of the global features (S5); and classifying shapes using the combination of global and local features (S6). The invention is applicable to intelligent video surveillance, e.g., object classification and scene understanding. It can also be used in automatic driving systems, where robust recognition of traffic signs plays an important role in enhancing the intelligence of the system.
Type: Grant
Filed: May 13, 2010
Date of Patent: May 20, 2014
Assignee: Institute of Automation, Chinese Academy of Sciences
Inventors: Tieniu Tan, Kaiqi Huang, Yongzhen Huang
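Step S5 combines global (topological) and local features while adjusting the weight of the local features according to how well the global features perform. A small NumPy sketch of one such weighting rule is given below; the specific rule (local weight = 1 minus global confidence) and the dummy descriptors are assumptions, not the combination defined in the patent.

```python
import numpy as np

def combine_features(global_feat, local_feat, global_confidence):
    """Weighted concatenation: the less reliable the global (topological) features
    are on their own, the more weight the local features receive."""
    w_local = 1.0 - global_confidence                          # simple monotone weighting rule (an assumption)
    return np.concatenate([global_feat, w_local * local_feat])

# Dummy descriptors for one shape.
global_feat = np.array([0.9, 0.1, 0.3])                        # e.g. connectivity / hole statistics
local_feat  = np.random.default_rng(2).random(8)               # e.g. boundary curvature histogram
fused = combine_features(global_feat, local_feat, global_confidence=0.7)
print(fused.shape)                                             # (11,)
```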
-
Publication number: 20140037152
Abstract: A method for identity recognition based on multiple-feature fusion for an eye image, comprising steps of registering and recognizing. The registering step comprises: obtaining a normalized eye image and a normalized iris image for a given registered eye image, extracting a multimode feature of the eye image of the user to be registered, and storing the obtained multimode feature of the eye image as registration information in a registration database. The recognizing step comprises: obtaining a normalized eye image and a normalized iris image for a given eye image to be recognized, extracting a multimode feature of the eye image of the user to be recognized, comparing the extracted multimode feature with the multimode features stored in the database to obtain matching scores, fusing the matching scores at score level to obtain a fusion score, and performing the multiple-feature-fusion identity recognition on the eye image with a classifier.
Type: Application
Filed: April 20, 2011
Publication date: February 6, 2014
Applicant: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
Inventors: Tieniu Tan, Zhenan Sun, Xiaobo Zhang, Hui Zhang
-
Publication number: 20130046762
Abstract: A shape classification method based on the topological perceptual organization (TPO) theory, comprising the steps of: extracting boundary points of shapes (S1); constructing a topological space and computing the representation of the extracted boundary points (S2); extracting global features of shapes from the representation of the boundary points in the topological space (S3); extracting local features of shapes from the representation of the boundary points in Euclidean space (S4); combining global and local features by adjusting the weight of the local features according to the performance of the global features (S5); and classifying shapes using the combination of global and local features (S6). The invention is applicable to intelligent video surveillance, e.g., object classification and scene understanding. It can also be used in automatic driving systems, where robust recognition of traffic signs plays an important role in enhancing the intelligence of the system.
Type: Application
Filed: May 13, 2010
Publication date: February 21, 2013
Applicant: Institute of Automation, Chinese Academy of Sciences
Inventors: Tieniu Tan, Kaiqi Huang, Yongzhen Huang