Patents by Inventor Jiyuan Yu

Jiyuan Yu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230401390
    Abstract: An automatic concrete dam defect image description generation method based on a graph attention network, including: 1) extracting the local grid features and whole-image features of the defect image and performing image coding using a multi-layer convolutional neural network; 2) constructing a grid feature interaction graph, and fusing and encoding the grid visual features and global image features of the defect image; 3) updating and optimizing the global and local features through the graph attention network, and fully utilizing the improved visual features for defect description. The invention constructs the grid feature interaction graph, updates node information using the graph attention network, and recasts the feature extraction task as a graph node classification task. The invention can capture both the global information of the defect image and the potential interactions among local grid features, and the generated description text accurately and coherently describes the defect information.
    Type: Application
    Filed: June 1, 2023
    Publication date: December 14, 2023
    Inventors: Hua Zhou, Fudong Chi, Yingchi Mao, Hao Chen, Xu Wan, Huan Zhao, Bohui Pang, Jiyuan Yu, Rui Guo, Guangyao Wu, Shunbo Wang
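    The graph-attention update in step 3 above can be illustrated with a minimal single-head graph attention (GAT) layer over grid-feature nodes plus one global-image node. This is a hedged NumPy sketch for illustration only; the layer structure, dimensions, and the global-node wiring are assumptions, not the patent's specification:

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def gat_layer(h, adj, W, a):
        """One single-head graph-attention update.

        h:   (N, F)   node features -- grid features plus one global-image node
        adj: (N, N)   binary adjacency of the grid feature interaction graph
        W:   (F, Fp)  shared linear projection
        a:   (2*Fp,)  attention scoring vector
        """
        z = h @ W                               # project node features
        Fp = z.shape[1]
        # pairwise attention logits e_ij = LeakyReLU(a^T [z_i || z_j])
        src = z @ a[:Fp]
        dst = z @ a[Fp:]
        e = src[:, None] + dst[None, :]
        e = np.where(e > 0, e, 0.2 * e)         # LeakyReLU
        e = np.where(adj > 0, e, -1e9)          # mask non-edges
        alpha = softmax(e, axis=1)              # attention coefficients per node
        return np.maximum(alpha @ z, 0)         # aggregate neighbors + ReLU

    # toy example: 4 grid nodes plus 1 global-image node linked to every grid node
    rng = np.random.default_rng(0)
    N, F, Fp = 5, 8, 8
    adj = np.eye(N)
    adj[:, -1] = adj[-1, :] = 1                 # global node connects to all grids
    h = rng.standard_normal((N, F))
    out = gat_layer(h, adj,
                    rng.standard_normal((F, Fp)) * 0.1,
                    rng.standard_normal(2 * Fp) * 0.1)
    print(out.shape)                            # (5, 8)
    ```

    Each node's updated feature is an attention-weighted mixture of its neighbors' projected features, which is how local grid features and the global image feature can exchange information in one layer.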
  • Publication number: 20230368500
    Abstract: A time-series image description method for dam defects based on a local self-attention mechanism is provided, including: performing frame sampling on an input time-series image of a dam defect, extracting a feature sequence using a convolutional neural network, and using the sequence as the input to a self-attention encoder, where the encoder includes a Transformer network based on a variable self-attention mechanism that dynamically establishes contextual feature relations for each frame; and generating description text using a long short-term memory (LSTM) network based on a local attention mechanism, so that each predicted word is related to the features of an image frame, improving text generation accuracy by establishing a contextual dependency between image and text. The present application adds a dynamic mechanism for calculating the global self-attention of image frames, and the LSTM network with added local attention directly establishes the correspondence between image-modality and text-modality data.
    Type: Application
    Filed: June 19, 2023
    Publication date: November 16, 2023
    Inventors: Hongqi Ma, Haibin Xiao, Yingchi Mao, Fudong Chi, Rongzhi Qi, Bohui Pang, Xiaofeng Zhou, Hao Chen, Jiyuan Yu, Huan Zhao
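    The local attention described in the abstract above can be illustrated with a Luong-style windowed attention, where the decoder attends only to frames near a predicted alignment position. This NumPy sketch is an assumption-laden illustration (the window size, Gaussian falloff, and function names are not from the patent):

    ```python
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def local_attention(frames, hidden, p_t, D=2):
        """Windowed (local) attention: attend only to frames in [p_t-D, p_t+D]
        around the alignment position p_t, with a Gaussian falloff, so each
        decoded word focuses on nearby image frames.

        frames: (T, F) per-frame features from the encoder
        hidden: (F,)   current LSTM decoder hidden state
        """
        T = frames.shape[0]
        lo, hi = max(0, p_t - D), min(T, p_t + D + 1)
        window = frames[lo:hi]
        weights = softmax(window @ hidden)          # dot-product alignment
        pos = np.arange(lo, hi)
        weights = weights * np.exp(-((pos - p_t) ** 2) / (2 * (D / 2) ** 2))
        weights = weights / weights.sum()
        return weights @ window                     # context vector, shape (F,)

    rng = np.random.default_rng(1)
    frames = rng.standard_normal((10, 16))          # 10 sampled frames
    hidden = rng.standard_normal(16)
    ctx = local_attention(frames, hidden, p_t=4)
    print(ctx.shape)                                # (16,)
    ```

    Restricting attention to a window is one way to make each predicted word depend on a small, relevant span of frames rather than the whole sequence.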