Patents Assigned to INSTITUTE OF AUTOMATION CHINESE ACADEMY OF SCIENCES
  • Patent number: 11283247
    Abstract: A carrier mechanism for walking on a line includes a carrier platform constituted by a first mounting plate, a second mounting plate and a longitudinal movable plate, a walking apparatus, a clamping apparatus, a driving apparatus, and a self-balance control apparatus configured to adjust a posture of the carrier mechanism. The longitudinal movable plate is slidably arranged on the second mounting plate fixedly connected to the first mounting plate in parallel. The walking apparatus includes at least two sets of walking wheels arranged along a walking direction. The clamping apparatus is slidably arranged on the lower side of the longitudinal movable plate. The driving apparatus includes a first driving apparatus configured to drive the walking wheels to roll, and a second driving apparatus configured to drive the longitudinal movable plate to move. The clamping apparatus is driven by the longitudinal movable plate to clamp or release a to-be-inspected target.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: March 22, 2022
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Guodong Yang, Xiaoyu Long, Weiqing Zhao, En Li, Zize Liang, Fengshui Jing, Han Wang, Hao Wang, Zishu Gao, Yunong Tian, Yuansong Sun, Sixi Lu, Guangyao Xu
  • Patent number: 11281945
    Abstract: A multimodal dimensional emotion recognition method includes: acquiring a frame-level audio feature, a frame-level video feature, and a frame-level text feature from the audio, the video, and the corresponding text of a sample to be tested; performing temporal contextual modeling on the frame-level audio feature, the frame-level video feature, and the frame-level text feature respectively by using a temporal convolutional network to obtain a contextual audio feature, a contextual video feature, and a contextual text feature; performing weighted fusion on these three features by using a gated attention mechanism to obtain a multimodal feature; splicing the multimodal feature and these three features together to obtain a spliced feature, and then performing further temporal contextual modeling on the spliced feature by using a temporal convolutional network to obtain a contextual spliced feature; and performing regression prediction on the contextual spliced feature to obtain a final dimensional emotion prediction result. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: March 22, 2022
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Jianhua Tao, Licai Sun, Bin Liu, Zheng Lian
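    The abstract above describes a pipeline of per-modality temporal convolutional networks (TCNs), gated attention fusion, feature splicing, a second TCN, and a regression head. Below is a minimal, hypothetical PyTorch sketch of that flow; the layer sizes, the single-dilated-convolution stand-in for a full TCN, and the gating formulation are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch of the multimodal dimensional-emotion pipeline described above.
# Dimensions and module choices are illustrative assumptions, not the patented design.
import torch
import torch.nn as nn

class TCNBlock(nn.Module):
    """Stand-in for a temporal convolutional network: one dilated 1-D convolution."""
    def __init__(self, in_dim, out_dim, dilation=1):
        super().__init__()
        self.conv = nn.Conv1d(in_dim, out_dim, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x):                          # x: (batch, time, in_dim)
        y = self.conv(x.transpose(1, 2))           # -> (batch, out_dim, time)
        return self.act(y).transpose(1, 2)         # -> (batch, time, out_dim)

class GatedFusion(nn.Module):
    """Weight each modality per frame with a learned gate, then sum."""
    def __init__(self, dim, n_modalities=3):
        super().__init__()
        self.gate = nn.Linear(dim * n_modalities, n_modalities)

    def forward(self, feats):                      # feats: list of (batch, time, dim)
        w = torch.softmax(self.gate(torch.cat(feats, dim=-1)), dim=-1)
        return sum(w[..., i:i + 1] * f for i, f in enumerate(feats))

class EmotionRegressor(nn.Module):
    def __init__(self, audio_dim=40, video_dim=128, text_dim=300, dim=64):
        super().__init__()
        self.audio_tcn = TCNBlock(audio_dim, dim)
        self.video_tcn = TCNBlock(video_dim, dim)
        self.text_tcn = TCNBlock(text_dim, dim)
        self.fusion = GatedFusion(dim)
        self.post_tcn = TCNBlock(dim * 4, dim)     # spliced = fused + 3 contextual features
        self.head = nn.Linear(dim, 1)              # e.g. valence or arousal per frame

    def forward(self, audio, video, text):
        ctx = [self.audio_tcn(audio), self.video_tcn(video), self.text_tcn(text)]
        fused = self.fusion(ctx)
        spliced = torch.cat([fused] + ctx, dim=-1)
        return self.head(self.post_tcn(spliced))   # (batch, time, 1) emotion trace

# Toy call on random features: two clips of 100 frames each.
pred = EmotionRegressor()(torch.randn(2, 100, 40), torch.randn(2, 100, 128),
                          torch.randn(2, 100, 300))
```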
  • Patent number: 11266338
    Abstract: An automatic depression detection method includes the following steps: inputting audio and video files, wherein the audio and video files contain original data in both audio and video modes; conducting segmentation and feature extraction on the audio and video files to obtain a plurality of audio segment-level features and video segment-level features; combining the segment-level features into an audio-level feature and a video-level feature respectively by utilizing a feature evolution pooling objective function; conducting attentional computation on the segment-level features to obtain a video-attended audio feature and an audio-attended video feature; splicing the audio-level feature, the video-level feature, the video-attended audio feature and the audio-attended video feature to form a multimodal spatio-temporal representation; and inputting the multimodal spatio-temporal representation into support vector regression to predict the depression level of individuals. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: July 30, 2021
    Date of Patent: March 8, 2022
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Jianhua Tao, Mingyue Niu, Bin Liu, Qifei Li
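    The entry above describes pooling segment-level features, cross-modal attention, feature splicing, and support vector regression. The NumPy/scikit-learn sketch below follows that outline; mean pooling stands in for the feature-evolution pooling objective and simple dot-product attention stands in for the attentional computation, both assumptions rather than the patented operations.

```python
# Hypothetical sketch of the multimodal depression-detection pipeline described above.
# Mean pooling and dot-product attention are stand-ins, not the patented operations.
import numpy as np
from sklearn.svm import SVR

def pool_segments(seg_feats):
    """Combine segment-level features (n_segments, dim) into one clip-level feature."""
    return seg_feats.mean(axis=0)

def cross_attention(query_segs, value_segs):
    """Attend over value_segs using query_segs; returns one attended feature vector."""
    scores = query_segs @ value_segs.T / np.sqrt(value_segs.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return (weights @ value_segs).mean(axis=0)

def multimodal_representation(audio_segs, video_segs):
    """audio_segs: (n_a, d); video_segs: (n_v, d) -- equal dims assumed for simplicity."""
    audio_level = pool_segments(audio_segs)
    video_level = pool_segments(video_segs)
    video_attn_audio = cross_attention(video_segs, audio_segs)   # video attends to audio
    audio_attn_video = cross_attention(audio_segs, video_segs)   # audio attends to video
    return np.concatenate([audio_level, video_level, video_attn_audio, audio_attn_video])

# Toy training run: random segment features and random depression scores.
rng = np.random.default_rng(0)
X = np.stack([multimodal_representation(rng.normal(size=(8, 16)),
                                         rng.normal(size=(10, 16))) for _ in range(20)])
y = rng.uniform(0, 24, size=20)          # e.g. questionnaire-style depression scores
model = SVR(kernel="rbf").fit(X, y)
print(model.predict(X[:3]))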
  • Publication number: 20220045489
    Abstract: A carrier mechanism for walking on a line includes a carrier platform constituted by a first mounting plate, a second mounting plate and a longitudinal movable plate, a walking apparatus, a clamping apparatus, a driving apparatus, and a self-balance control apparatus configured to adjust a posture of the carrier mechanism. The longitudinal movable plate is slidably arranged on the second mounting plate fixedly connected to the first mounting plate in parallel. The walking apparatus includes at least two sets of walking wheels arranged along a walking direction. The clamping apparatus is slidably arranged on the lower side of the longitudinal movable plate. The driving apparatus includes a first driving apparatus configured to drive the walking wheels to roll, and a second driving apparatus configured to drive the longitudinal movable plate to move. The clamping apparatus is driven by the longitudinal movable plate to clamp or release a to-be-inspected target.
    Type: Application
    Filed: November 12, 2019
    Publication date: February 10, 2022
    Applicant: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Guodong YANG, Xiaoyu LONG, Weiqing ZHAO, En LI, Zize LIANG, Fengshui JING, Han WANG, Hao WANG, Zishu GAO, Yunong TIAN, Yuansong SUN, Sixi LU, Guangyao XU
  • Patent number: 11244119
    Abstract: A multi-modal lie detection method, an apparatus, and a device are provided to improve the accuracy of automatic lie detection. The multi-modal lie detection method includes: inputting original data of three modalities, namely a to-be-detected audio, a to-be-detected video and a to-be-detected text; performing feature extraction on the input contents to obtain deep features of the three modalities; explicitly depicting first-order, second-order and third-order interactive relationships of the deep features of the three modalities to obtain an integrated multi-modal feature of each word; performing context modeling on the integrated multi-modal feature of each word to obtain a final feature of each word; and pooling the final features of all words to obtain global features, and then obtaining a lie classification result through a fully-connected layer. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: July 30, 2021
    Date of Patent: February 8, 2022
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Jianhua Tao, Licai Sun, Bin Liu, Zheng Lian
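    The entry above walks through word-level multimodal interactions, context modeling, pooling, and a fully-connected classifier. In the hypothetical PyTorch sketch below, outer products stand in for the explicit second- and third-order interactions and a GRU stands in for context modeling; both choices, and all dimensions, are assumptions.

```python
# Hypothetical sketch of the word-level multimodal lie-detection flow described above.
# Outer products and the GRU are illustrative stand-ins, not the patented modules.
import torch
import torch.nn as nn

class LieDetector(nn.Module):
    def __init__(self, dim=8, hidden=64):
        super().__init__()
        inter_dim = 3 * dim + 3 * dim * dim + dim ** 3   # 1st + 2nd + 3rd order terms
        self.project = nn.Linear(inter_dim, hidden)
        self.context = nn.GRU(hidden, hidden, batch_first=True)
        self.classify = nn.Linear(hidden, 2)             # lie / truth

    @staticmethod
    def interactions(a, v, t):
        """Per-word deep features a, v, t of shape (batch, words, dim)."""
        first = torch.cat([a, v, t], dim=-1)
        pair = lambda x, y: torch.einsum("bwi,bwj->bwij", x, y).flatten(2)
        second = torch.cat([pair(a, v), pair(a, t), pair(v, t)], dim=-1)
        third = torch.einsum("bwi,bwj,bwk->bwijk", a, v, t).flatten(2)
        return torch.cat([first, second, third], dim=-1)

    def forward(self, a, v, t):
        fused = self.project(self.interactions(a, v, t))  # integrated feature per word
        ctx, _ = self.context(fused)                       # context-aware word features
        pooled = ctx.mean(dim=1)                           # global feature
        return self.classify(pooled)                       # classification logits

# Toy call: batch of 2 utterances, 12 words, 8-dim deep features per modality.
logits = LieDetector()(torch.randn(2, 12, 8), torch.randn(2, 12, 8), torch.randn(2, 12, 8))
```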
  • Patent number: 11241803
    Abstract: A robot for testing lower limb performance of a spacesuit includes a pressure maintaining box, an air circulation component, an air cooling unit, heat radiating hose components, and two mechanical legs. The air cooling unit is connected with the pressure maintaining box; the air circulation component is arranged in the pressure maintaining box; the mechanical legs are installed on the pressure maintaining box, and the heat radiating hose components are arranged in the mechanical legs; air in the pressure maintaining box is cooled through the air cooling unit and delivered into the heat radiating hose components through the air circulation component; each mechanical leg comprises a thigh, a knee joint component, a shank, an ankle joint component and a foot; the thigh is connected with the shank through the knee joint component; the shank is connected with the foot through the ankle joint component.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: February 8, 2022
    Assignee: SHENYANG INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Jinguo Liu, Haodong Chi, Zheng Li, Tiejun Wang, Keli Chen, Qiang Sun, Lei Xiao, Huaqiang Sun, Xiaoyuan Liu, Cao Tong
  • Patent number: 11243166
    Abstract: An intraoperative near-infrared-I and near-infrared-II multi-spectral fluorescent navigation system, and a method of using the same, include a light source module that emits white light and excitation light to illuminate tissue to be tested and generate emission light. An optical information collection module includes a white light camera for collecting a white light image, and near-infrared-I and near-infrared-II fluorescence cameras for collecting near-infrared-I and near-infrared-II fluorescence images. A central control module is coupled to the light source module and the optical information collection module. An image processing unit pre-processes the white light image and the near-infrared-I and near-infrared-II fluorescence images for de-noising and enhancement.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: February 8, 2022
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Jie Tian, Zhenhua Hu, Caiguang Cao, Zeyu Zhang, Meishan Cai
  • Patent number: 11235821
    Abstract: A reconfigurable joint track compound mobile robot has a main vehicle body, yaw joints and an auxiliary track module. The main vehicle body has a main track, and a clutch brake and a first wheel joint arranged in a main track driving wheel. A second wheel joint is arranged in a main track driven wheel. The main vehicle body is provided with main track driving mechanisms and a wheel joint driving mechanism. The main track driving wheel is driven to rotate by the main track driving mechanisms, which are connected with the clutch brake. The second wheel joint is driven to rotate by the wheel joint driving mechanism. Each wheel joint is correspondingly connected with the yaw joints, which are rotatably connected with the auxiliary track module. A yaw driving mechanism that drives the auxiliary track module to swing is arranged in each yaw joint.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: February 1, 2022
    Assignee: SHENYANG INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Jinguo Liu, Xing Li, Jian Ding, Yuwang Liu
  • Patent number: 11238289
    Abstract: An automatic lie detection method and apparatus for interactive scenarios, a device, and a medium are provided to improve the accuracy of automatic lie detection. The method includes: segmenting three modalities, namely a video, an audio and a text, of a to-be-detected sample; extracting short-term features of the three modalities; integrating the short-term features of the three modalities in the to-be-detected sample to obtain long-term features of the three modalities corresponding to each dialogue; integrating the long-term features of the three modalities by a self-attention mechanism to obtain a multi-modal feature of each dialogue; integrating the multi-modal feature of each dialogue with interactive information by a graph neural network to obtain a multi-modal feature integrated with the interactive information; and predicting a lie level of each dialogue according to the multi-modal feature integrated with the interactive information. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: July 30, 2021
    Date of Patent: February 1, 2022
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Jianhua Tao, Zheng Lian, Bin Liu, Licai Sun
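    The graph-based integration step in the entry above refines each dialogue turn's multimodal feature using features of connected turns. The PyTorch sketch below uses a single normalized message-passing layer over a toy adjacency of neighboring turns; the layer form, the adjacency choice, and all dimensions are assumptions standing in for the patented graph neural network.

```python
# Hypothetical sketch of the interaction-aware graph step described above; the normalized
# message-passing layer and the neighbor adjacency are assumptions, not the patented GNN.
import torch
import torch.nn as nn

class DialogueGraphLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)

    def forward(self, turn_feats, adjacency):
        # turn_feats: (n_turns, dim); adjacency: (n_turns, n_turns) including self-loops
        deg = adjacency.sum(dim=1, keepdim=True).clamp(min=1.0)
        aggregated = (adjacency / deg) @ turn_feats       # mean over connected turns
        return torch.relu(self.transform(aggregated))     # interaction-aware features

class TurnLieClassifier(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.graph = DialogueGraphLayer(dim)
        self.head = nn.Linear(dim, 2)                     # lie / truth level per turn

    def forward(self, turn_feats, adjacency):
        return self.head(self.graph(turn_feats, adjacency))

# Toy dialogue with 4 turns: connect each turn to itself and its neighbors.
feats = torch.randn(4, 64)                                # fused multimodal turn features
adj = torch.eye(4) + torch.diag(torch.ones(3), 1) + torch.diag(torch.ones(3), -1)
logits = TurnLieClassifier()(feats, adj)                  # (4, 2)
```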
  • Patent number: 11232286
    Abstract: A method and an apparatus for generating a face rotation image are provided. The method includes: performing pose encoding on an obtained face image based on two or more landmarks in the face image to obtain pose encoded images; obtaining a plurality of training images each including a face from a training data set, wherein the rotation angles presented by the faces in the plurality of training images are the same; performing pose encoding on a target face image based on two or more landmarks in the target face image in a similar manner to obtain pose encoded images, wherein the target face image is obtained based on the plurality of training images; generating a to-be-input signal based on the face image and the foregoing two types of pose encoded images; and inputting the to-be-input signal into a face rotation image generative model to obtain a face rotation image. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: January 25, 2022
    Assignees: Huawei Technologies Co., Ltd., Institute of Automation, Chinese Academy of Sciences
    Inventors: Qiang Rao, Bing Yu, Bailan Feng, Yibo Hu, Xiang Wu, Ran He, Zhenan Sun
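    The pose-encoding step described above turns facial landmarks into image-like maps that can be stacked with the face image to form the generator's input signal. The NumPy sketch below uses Gaussian heat maps, one per landmark, which is one common realization and only an assumption about what the patented pose encoding looks like.

```python
# Hypothetical sketch of landmark-based pose encoding: each landmark becomes one Gaussian
# heat-map channel, and source/target pose maps are stacked with the face image to form
# the to-be-input signal. The heat-map form is an assumption, not the patented encoding.
import numpy as np

def pose_encode(landmarks, height, width, sigma=3.0):
    """landmarks: iterable of (x, y) pixel coordinates -> (n_landmarks, H, W) maps."""
    ys, xs = np.mgrid[0:height, 0:width]
    maps = [np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
            for x, y in landmarks]
    return np.stack(maps, axis=0)

def build_input_signal(face_image, source_landmarks, target_landmarks):
    """face_image: (H, W, 3) in [0, 1]. Returns a (3 + 2*L, H, W) generator input."""
    h, w, _ = face_image.shape
    source_pose = pose_encode(source_landmarks, h, w)   # pose of the input face
    target_pose = pose_encode(target_landmarks, h, w)   # desired rotated pose
    return np.concatenate([face_image.transpose(2, 0, 1), source_pose, target_pose], axis=0)

signal = build_input_signal(np.random.rand(128, 128, 3),
                            [(40, 60), (88, 60), (64, 90)],   # e.g. eyes and mouth
                            [(52, 60), (96, 62), (72, 92)])
print(signal.shape)   # (3 + 2*3, 128, 128)
```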
  • Patent number: 11227161
    Abstract: A physiological signal prediction method includes: collecting a video file, where the video file contains long-term videos whose content includes the face of a single person together with ground-truth physiological signal data; segmenting a single long-term video into multiple short-term video clips; extracting, from each frame of image in each of the short-term video clips, features of regions of interest for identifying physiological signals so as to form single-frame region-of-interest features; splicing, for each of the short-term video clips, the region-of-interest features of all fixed frames corresponding to the clip into multi-frame region-of-interest features, and converting those features into a spatio-temporal graph; and inputting the spatio-temporal graph into a deep learning model for training, and using the trained deep learning model to predict physiological signal parameters. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: September 10, 2021
    Date of Patent: January 18, 2022
    Assignee: Institute of Automation, Chinese Academy of Sciences
    Inventors: Jianhua Tao, Yu He, Bin Liu, Licai Sun
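    The entry above converts per-frame region-of-interest (ROI) features of a short clip into a spatio-temporal representation and regresses physiological parameters with a deep model. The PyTorch sketch below builds such a map and feeds it to a tiny CNN; using the mean RGB value of each ROI as the per-frame feature, the specific ROI boxes, and heart rate as the predicted parameter are illustrative assumptions.

```python
# Hypothetical sketch: per-frame ROI features of a clip -> spatio-temporal map -> deep
# regressor. The mean-RGB ROI feature and the tiny CNN are assumptions for illustration.
import torch
import torch.nn as nn

def clip_to_spatiotemporal_map(frames, rois):
    """frames: (T, H, W, 3) tensor; rois: list of (top, left, height, width) boxes.
    Returns a (1, T, n_rois * 3) map: one row per frame, mean RGB per ROI."""
    rows = []
    for frame in frames:
        feats = [frame[top:top + h, left:left + w].mean(dim=(0, 1))
                 for (top, left, h, w) in rois]
        rows.append(torch.cat(feats))
    return torch.stack(rows).unsqueeze(0)               # add a channel dimension

class PhysioRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

    def forward(self, st_map):                          # (batch, 1, T, n_features)
        return self.net(st_map)

frames = torch.rand(75, 128, 128, 3)                    # ~3 s clip at 25 fps
rois = [(30, 30, 20, 20), (30, 80, 20, 20), (80, 50, 30, 30)]   # e.g. cheeks and forehead
st_map = clip_to_spatiotemporal_map(frames, rois)       # (1, 75, 9)
predicted = PhysioRegressor()(st_map.unsqueeze(0))      # one predicted value per clip
```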
  • Patent number: 11216652
    Abstract: An expression recognition method under a natural scene comprises: converting an input video into a video frame sequence at a specified frame rate, and performing facial expression labeling on the video frame sequence to obtain a labeled video frame sequence; removing the impact of natural light, non-face areas, and head posture on facial expression from the labeled video frame sequence to obtain an expression video frame sequence; augmenting the expression video frame sequence to obtain a preprocessed video frame sequence; from the preprocessed video frame sequence, extracting HOG features that characterize facial appearance and shape, extracting second-order features that describe the degree of face creasing, and extracting facial pixel-level deep neural network features by using a deep neural network; then performing vector fusion on these three types of features to obtain facial feature fusion vectors for training; and inputting the facial feature fusion vectors into a support vector machine for expression recognition. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: September 9, 2021
    Date of Patent: January 4, 2022
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Jianhua Tao, Mingyuan Xiao, Bin Liu, Zheng Lian
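    The entry above fuses HOG appearance features, a second-order face-creasing descriptor, and deep-network features before an SVM. The scikit-image/scikit-learn sketch below follows that structure; the gradient-magnitude histogram standing in for the creasing descriptor, the placeholder deep embedding, and the HOG parameters are all assumptions.

```python
# Hypothetical sketch of the three-way feature fusion described above. The creasing
# statistic and the placeholder deep embedding are assumptions, not the patented features.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def creasing_feature(gray_face, bins=16):
    """Second-order stand-in: histogram of local gradient magnitudes."""
    gy, gx = np.gradient(gray_face.astype(float))
    magnitude = np.hypot(gx, gy)
    hist, _ = np.histogram(magnitude, bins=bins, range=(0, magnitude.max() + 1e-6))
    return hist / (hist.sum() + 1e-6)

def deep_feature(gray_face):
    """Placeholder for a pretrained CNN embedding (e.g. a face network's penultimate layer)."""
    return gray_face[::8, ::8].astype(float).ravel() / 255.0

def fused_vector(gray_face):
    appearance = hog(gray_face, orientations=9, pixels_per_cell=(16, 16),
                     cells_per_block=(2, 2))
    return np.concatenate([appearance, creasing_feature(gray_face), deep_feature(gray_face)])

# Toy training run on random 64x64 "faces" with two expression labels.
rng = np.random.default_rng(0)
faces = rng.integers(0, 256, size=(12, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, size=12)
X = np.stack([fused_vector(f) for f in faces])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```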
  • Publication number: 20210405660
    Abstract: The control system based on multi-unmanned aerial vehicle (UAV) cooperative strategic confrontation includes a management module, a UAV formation module, a situation assessment module, a decision-making module, and a cooperative mission assigning module of both sides in a confrontation. The management module is configured to store state information acquired by the UAV formation module. The UAV formation module is configured to acquire state information of UAVs and execute a control instruction. The situation assessment module is configured to acquire situation assessment information according to the state information. The decision-making module is configured to acquire a countermeasure based on the situation assessment information. The cooperative mission assigning module is configured to generate control instructions for the UAVs based on the countermeasure and in combination with a confrontation target and an optimal situation assessment value.
    Type: Application
    Filed: August 13, 2020
    Publication date: December 30, 2021
    Applicant: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Zhen LIU, Zhiqiang PU, Tenghai QIU, Jianqiang YI
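    The entry above names five cooperating modules and the flow of state information between them. The plain-Python sketch below wires stand-in classes for those modules through one control cycle; every class name, method name, and data format here is a hypothetical illustration of the described architecture, not the patented system.

```python
# Hypothetical wiring of the five modules named above; all names, methods, and the
# dictionary-based state format are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ManagementModule:
    history: list = field(default_factory=list)
    def store(self, states):                      # keep acquired UAV state information
        self.history.append(states)

class UAVFormationModule:
    def acquire_states(self):                     # e.g. positions and remaining fuel
        return [{"id": i, "pos": (float(i), 0.0), "fuel": 1.0} for i in range(3)]
    def execute(self, instructions):              # execute control instructions
        for cmd in instructions:
            print(f"UAV {cmd['id']} -> waypoint {cmd['waypoint']}")

class SituationAssessmentModule:
    def assess(self, states):                     # toy scalar situation value
        return {"advantage": sum(u["fuel"] for u in states) / len(states)}

class DecisionMakingModule:
    def decide(self, assessment):                 # countermeasure from the assessment
        return "engage" if assessment["advantage"] > 0.5 else "retreat"

class CooperativeMissionAssigner:
    def assign(self, states, countermeasure, target=(10.0, 10.0)):
        goal = target if countermeasure == "engage" else (0.0, 0.0)
        return [{"id": u["id"], "waypoint": goal} for u in states]

# One control cycle: acquire states, assess, decide, assign missions, execute.
formation, manager = UAVFormationModule(), ManagementModule()
states = formation.acquire_states()
manager.store(states)
assessment = SituationAssessmentModule().assess(states)
plan = CooperativeMissionAssigner().assign(states, DecisionMakingModule().decide(assessment))
formation.execute(plan)
```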
  • Patent number: 11208186
    Abstract: A water-air amphibious cross-medium bio-robotic flying fish includes a body, pitching pectoral fins, variable-structure pectoral fins, a caudal propulsion module, a sensor module and a controller. The caudal propulsion module is controlled to achieve underwater fish-like body-caudal fin (BCF) propulsion, and the variable-structure pectoral fins are adjusted to achieve air gliding and fast splash-down diving motions of the bio-robotic flying fish. The coordination between the caudal propulsion module and the pitching pectoral fins is controlled to achieve the motion of leaping out of the water during the water-air cross-medium transition. The ambient environment is detected by the sensor module, and the motion mode of the bio-robotic flying fish is controlled by the controller.
    Type: Grant
    Filed: April 16, 2020
    Date of Patent: December 28, 2021
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Junzhi Yu, Zhengxing Wu, Di Chen, Min Tan
  • Patent number: 11209843
    Abstract: A control method, system and device for a flexible carbon cantilever beam actuated by a smart material are provided. They aim to solve the problems of control overflow and instability that are likely to occur in distributed parameter systems constructed in the prior art. The method includes: acquiring the elastic displacement of the flexible carbon cantilever beam in real time as input data; obtaining, based on the input data, a control torque through a distributed parameter model constructed in advance; and performing vibration control on the flexible carbon cantilever beam. The method, system and device improve the control accuracy and stability of the distributed parameter system. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: July 12, 2021
    Date of Patent: December 28, 2021
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Hongjun Yang, Min Tan, Zengguang Hou, Junzhi Yu, Long Cheng, Zhengxing Wu, Wei He, Zhijie Liu, Tairen Sun
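    The control flow in the entry above is: measure the beam's elastic displacement, compute a control torque from a model, and apply it to suppress vibration. The sketch below simulates that loop in plain Python; the damped-spring beam dynamics and the PD-style torque law are simple stand-ins for the patented distributed parameter model, and every constant is an assumption.

```python
# Hypothetical closed-loop sketch of the vibration-control flow described above.
# The damped-spring beam model and PD-style torque law are assumptions only.
def control_torque(displacement, velocity, kp=50.0, kd=5.0):
    return -kp * displacement - kd * velocity

def simulate(steps=2000, dt=1e-3, stiffness=20.0, damping=0.1, inertia=1.0):
    x, v = 0.05, 0.0                    # initial tip displacement (m) and velocity (m/s)
    for _ in range(steps):
        torque = control_torque(x, v)                    # actuation from the controller
        accel = (torque - stiffness * x - damping * v) / inertia
        v += accel * dt                                  # explicit Euler integration
        x += v * dt
    return x

print(f"residual displacement after 2 s: {simulate():.6f} m")
```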
  • Publication number: 20210397828
    Abstract: A bi-directional interaction network (BINet)-based person search method, system, and apparatus are provided. The method includes: obtaining, as an input image, the t-th frame of an input video; and normalizing the input image and obtaining a search result for a to-be-searched target person by using a pre-trained person search model, where the person search model is constructed based on a residual network, and a new classification layer is added to the classification and regression layer of the residual network to obtain an identity classification probability of the target person. The method improves the accuracy of person search. (A minimal code sketch follows this entry.)
    Type: Application
    Filed: June 15, 2021
    Publication date: December 23, 2021
    Applicant: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Zhaoxiang ZHANG, Tieniu TAN, Chunfeng SONG, Wenkai DONG
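    The entry above builds on a residual network with an added identity-classification layer and normalizes the input frame. The PyTorch sketch below shows that generic pattern with a torchvision ResNet-50 backbone; the backbone choice, head shape, number of identities, and normalization constants are assumptions, not the patented BINet.

```python
# Hypothetical sketch of "residual network + extra identity classification layer" as
# described above; backbone, head, and constants are assumptions, not the patented BINet.
import torch
import torch.nn as nn
from torchvision import models, transforms

class PersonSearchHead(nn.Module):
    def __init__(self, num_identities=1000):
        super().__init__()
        backbone = models.resnet50(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # up to global pool
        self.identity_classifier = nn.Linear(backbone.fc.in_features, num_identities)

    def forward(self, image_batch):
        feats = self.features(image_batch).flatten(1)      # (batch, 2048) person embedding
        return feats, self.identity_classifier(feats)      # embedding + identity logits

# Normalize one cropped frame and run it through the model.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
frame = normalize(torch.rand(3, 256, 128))                 # normalized t-th frame crop
embedding, identity_logits = PersonSearchHead()(frame.unsqueeze(0))
identity_probability = identity_logits.softmax(dim=-1)     # per-identity probability
```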
  • Patent number: 11205098
    Abstract: A single-stage small-sample-object detection method based on decoupled metric is provided to solve the following problems: low detection accuracy of existing small-sample-object detection methods, the mutual interference between classification and regression in a non-decoupled form, and over-fitting during training of a detection network in case of small samples. The method includes: obtaining a to-be-detected image as an input image; and obtaining a class and a regression box corresponding to each to-be-detected object in the input image through a pre-constructed small-sample-object detection network DMNet, where the DMNet includes a multi-scale feature extraction network, a decoupled representation transformation module, an image-level distance metric learning module and a regression box prediction module.
    Type: Grant
    Filed: July 13, 2021
    Date of Patent: December 21, 2021
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Zhengxing Wu, Junzhi Yu, Yue Lu, Xingyu Chen
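    The entry above classifies objects through image-level distance metric learning. The PyTorch sketch below shows the generic form of such a head: embed a pooled feature and score classes by distance to learned class representatives. It is an assumption offered for illustration and does not reproduce DMNet's decoupled representation transformation or regression-box modules.

```python
# Hypothetical sketch of an image-level distance-metric classification head; a generic
# stand-in for DMNet's metric-learning module, with all sizes assumed.
import torch
import torch.nn as nn

class DistanceMetricHead(nn.Module):
    def __init__(self, feat_dim=256, embed_dim=128, num_classes=20):
        super().__init__()
        self.embed = nn.Linear(feat_dim, embed_dim)
        self.class_representatives = nn.Parameter(torch.randn(num_classes, embed_dim))

    def forward(self, features):
        """features: (batch, feat_dim) pooled image-level features."""
        z = self.embed(features)                               # (batch, embed_dim)
        dists = torch.cdist(z, self.class_representatives)     # (batch, num_classes)
        return -dists                                          # logits: closer = higher

head = DistanceMetricHead()
logits = head(torch.randn(4, 256))
probabilities = logits.softmax(dim=-1)   # class probabilities derived from distances
```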
  • Publication number: 20210384995
    Abstract: A method and device for implementing an FPGA-based large-scale radio frequency interference array correlator are provided. The method includes: obtaining the number of data channels of the radio frequency interference array and dividing the channels evenly into groups; calculating the total correlation within each data group and the total correlation between each data group and the other data groups through corresponding correlation calculation modules; and performing an accumulation calculation within an integration period to complete the total correlation operation of the radio frequency interference array. By means of group division and time-division multiplexing, FPGA resources are used effectively and the FPGA calculation process is simplified. The method is suitable for systems with high parallelism and strict real-time requirements, and provides an efficient solution for the real-time processing of the massive data of a large-scale radio frequency interference array. (A minimal code sketch follows this entry.)
    Type: Application
    Filed: May 25, 2020
    Publication date: December 9, 2021
    Applicants: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES, GUANGZHOU ARTIFICIAL INTELLIGENCE AND ADVANCED COMPUTING INSTITUTE OF CASIA
    Inventors: Yafang SONG, Jie HAO, Jun LIANG, Lin SHU, Liangtian ZHAO, Qiuxiang FAN, Hui FENG, Wenqing HU
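    The arithmetic described above is a grouped correlate-and-accumulate: channels are split into groups, each group pair is correlated by its own module, and results are accumulated over an integration period. The NumPy sketch below is a software reference model of that arithmetic, not FPGA code; complex voltage samples and the x_i(t)·conj(x_j(t)) correlation definition are assumptions.

```python
# Hypothetical NumPy reference model of the grouped correlate-and-accumulate arithmetic
# described above (what the FPGA correlation modules compute), not FPGA code.
import numpy as np

def grouped_correlate(samples, n_groups, integration_len):
    """samples: (n_channels, n_samples) complex array, channels divided evenly into
    n_groups. Returns one (n_channels, n_channels) accumulated correlation matrix per
    integration period."""
    n_channels, n_samples = samples.shape
    size = n_channels // n_groups
    periods = []
    for start in range(0, n_samples - integration_len + 1, integration_len):
        corr = np.zeros((n_channels, n_channels), dtype=complex)
        for i in range(n_groups):          # one correlation module per group pair:
            for j in range(n_groups):      # i == j within-group, i != j between-group
                bi = samples[i * size:(i + 1) * size, start:start + integration_len]
                bj = samples[j * size:(j + 1) * size, start:start + integration_len]
                corr[i * size:(i + 1) * size, j * size:(j + 1) * size] = bi @ bj.conj().T
        periods.append(corr)
    return np.stack(periods)

rng = np.random.default_rng(0)
voltages = rng.normal(size=(16, 4096)) + 1j * rng.normal(size=(16, 4096))
correlations = grouped_correlate(voltages, n_groups=4, integration_len=1024)
print(correlations.shape)    # (4 integration periods, 16, 16)
```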
  • Publication number: 20210383239
    Abstract: A feature extraction system, method and apparatus based on neural network optimization by gradient filtering are provided. The feature extraction method includes: acquiring, by an information acquisition device, input information; constructing, by a feature extraction device, different feature extraction networks, performing iterative training on the networks in combination with corresponding training task queues to obtain optimized feature extraction networks for different input information, and calling the corresponding optimized feature extraction network to perform feature extraction according to the class of the input information; performing, by an online updating device, online updating of the networks; and outputting, by a feature output device, a feature of the input information. The system, method and apparatus avoid the problem of catastrophic forgetting in artificial neural networks on continuous tasks, and achieve high accuracy and precision in continuous feature extraction. (A minimal code sketch follows this entry.)
    Type: Application
    Filed: August 25, 2021
    Publication date: December 9, 2021
    Applicant: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Yang CHEN, Guanxiong ZENG, Shan YU
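    The entry above optimizes feature extraction networks "by gradient filtering" to avoid catastrophic forgetting across continuous tasks. The PyTorch sketch below shows one generic form such filtering could take: gradient components of parameters marked as important to earlier tasks are zeroed before the optimizer step. This mechanism is purely an assumption for illustration; the patent's actual filtering rule is not reproduced here.

```python
# Hypothetical sketch of one generic "gradient filtering" step for continual learning:
# gradients of protected parameter components are masked before the update. The masking
# rule is an assumption, not the patented method.
import torch
import torch.nn as nn

def filtered_step(model, optimizer, loss, protect_masks):
    """protect_masks: dict param_name -> bool tensor (True = protect, zero its gradient)."""
    optimizer.zero_grad()
    loss.backward()
    for name, param in model.named_parameters():
        if name in protect_masks and param.grad is not None:
            param.grad[protect_masks[name]] = 0.0     # filter out protected components
    optimizer.step()

# Toy feature extractor trained on a new task while half of each weight is protected.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
masks = {name: torch.rand_like(p) < 0.5 for name, p in model.named_parameters()}
x, target = torch.randn(4, 16), torch.randn(4, 8)
loss = nn.functional.mse_loss(model(x), target)
filtered_step(model, optimizer, loss, masks)
```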
  • Patent number: 11194972
    Abstract: Disclosed is a semantic sentiment analysis method fusing in-depth features and time sequence models, including: converting a text into a uniformly formatted matrix of word vectors; extracting local semantic emotional text features and contextual semantic emotional text features from the matrix of word vectors; weighting the local semantic emotional text features and the contextual semantic emotional text features by using an attention mechanism to generate fused semantic emotional text features; connecting the local semantic emotional text features, the contextual semantic emotional text features and the fused semantic emotional text features to generate global semantic emotional text features; and performing final text emotional semantic analysis and recognition by using a softmax classifier and taking the global semantic emotional text features as input.
    Type: Grant
    Filed: September 1, 2021
    Date of Patent: December 7, 2021
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Jianhua Tao, Ke Xu, Bin Liu, Yongwei Li
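    The entry above extracts local and contextual semantic-emotional features from word vectors, fuses them with attention, concatenates all three, and classifies with softmax. The PyTorch sketch below mirrors that structure; using a 1-D CNN for the local features and a BiLSTM for the contextual features, along with all layer sizes, are assumptions rather than the patented architecture.

```python
# Hypothetical sketch of the local + contextual + attention-fused sentiment pipeline
# described above; the CNN/BiLSTM choices and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SentimentClassifier(nn.Module):
    def __init__(self, embed_dim=300, dim=128, num_classes=3):
        super().__init__()
        self.local = nn.Conv1d(embed_dim, dim, kernel_size=3, padding=1)
        self.contextual = nn.LSTM(embed_dim, dim // 2, batch_first=True, bidirectional=True)
        self.attention = nn.Linear(2 * dim, 2)
        self.classifier = nn.Linear(3 * dim, num_classes)

    def forward(self, word_vectors):                      # (batch, words, embed_dim)
        local = torch.relu(self.local(word_vectors.transpose(1, 2))).max(dim=2).values
        contextual, _ = self.contextual(word_vectors)
        contextual = contextual.mean(dim=1)               # (batch, dim)
        weights = torch.softmax(self.attention(torch.cat([local, contextual], dim=-1)), -1)
        fused = weights[:, :1] * local + weights[:, 1:] * contextual
        global_feat = torch.cat([local, contextual, fused], dim=-1)
        return self.classifier(global_feat)               # softmax is applied in the loss

# Toy call: 4 texts represented by 20 word vectors each.
logits = SentimentClassifier()(torch.randn(4, 20, 300))
probabilities = logits.softmax(dim=-1)
```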