Patents by Inventor Anan LIU

Anan LIU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230400301
    Abstract: The present disclosure discloses a tropical instability wave early warning method based on temporal-spatial cross-scale attention fusion. The method performs cross-scale spatial map fusion on multi-scale feature maps through a bilateral local attention mechanism, calculates a prediction loss from the global feature description map, and combines the prediction loss with a regularization loss for optimization training of the neural networks. To predict the sea surface temperature at a moment T, data at K moments before the moment T are selected and input into the optimally trained neural networks, which output a predicted value of the tropical instability waves; a temporal-spatial image of the tropical instability waves is then drawn by associating the predicted value with coordinates, so as to achieve early warning of the tropical instability waves. The device includes a processor and a memory.
    Type: Application
    Filed: April 12, 2023
    Publication date: December 14, 2023
    Inventors: Dan SONG, Zhenghao FANG, Anan LIU, Wenhui LI, Zhiqiang WEI, Jie NIE, Wensheng ZHANG, Zhengya SUN
  • Publication number: 20230393304
    Abstract: The present invention discloses an El Nino extreme weather warning method based on incremental learning. Through supervised representation learning, a multi-scale feature frequency-domain distillation technique selectively constrains the drift of the low-frequency components of the multi-scale features during incremental training, so that the parallel convolutional neural networks retain the knowledge learned in old tasks. A multi-scale feature adaptive fusion technique adaptively learns different fusion parameters for the different time spans of the input multi-scale data, enhancing the ability to learn new tasks. Finally, fully connected layers output a Nino3.4 index reflecting the change rule of El Nino from the adaptively fused features, a mapping function of an extreme rainfall probability r is established based on the Nino3.4 index, and a warning is issued in response to predicting that the value r goes beyond a threshold value k.
    Type: Application
    Filed: April 12, 2023
    Publication date: December 7, 2023
    Inventors: Anan LIU, Haochun LU, Wenhui LI, Dan SONG, Zhiqiang WEI, Jie NIE, Wensheng ZHANG, Zhengya SUN
  • Patent number: 11361186
    Abstract: The present disclosure discloses a visual relationship detection method based on adaptive clustering learning, including: detecting visual objects in an input image and recognizing them to obtain context representations; embedding the context representations of pair-wise visual objects into a low-dimensional joint subspace to obtain a visual relationship sharing representation; embedding the context representations into a plurality of low-dimensional clustering subspaces, respectively, to obtain a plurality of preliminary visual relationship enhancing representations, which are then regularized by a clustering-driven attention mechanism; and fusing the visual relationship sharing representations and the regularized visual relationship enhancing representations with a prior distribution over the category labels of visual relationship predicates, to predict the visual relationship predicates by synthetic relational reasoning.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: June 14, 2022
    Assignee: TIANJIN UNIVERSITY
    Inventors: Anan Liu, Yanhui Wang, Ning Xu, Weizhi Nie
  • Patent number: 11301725
    Abstract: The present invention discloses a visual relationship detection method based on a region-aware learning mechanism, comprising: acquiring a triplet graph structure, aggregating each node's features with those of its neighboring nodes, and connecting the aggregated features as nodes with equiprobable edges to form a second graph structure; combining the node features of the second graph structure with the features of the corresponding entity object nodes in the triplet, using the combined features as a visual attention mechanism to merge the internal region visual features extracted from the two entity objects, and using the merged region visual features as the visual features of the corresponding triplet entity object nodes in the next message propagation; and, after a certain number of message propagations, combining the output triplet node features with the node features of the second graph structure to infer the predicates between object sets.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: April 12, 2022
    Assignee: TIANJIN UNIVERSITY
    Inventors: Anan Liu, Hongshuo Tian, Ning Xu, Weizhi Nie, Dan Song
  • Publication number: 20210264216
    Abstract: The present invention discloses a visual relationship detection method based on a region-aware learning mechanism, comprising: acquiring a triplet graph structure, aggregating each node's features with those of its neighboring nodes, and connecting the aggregated features as nodes with equiprobable edges to form a second graph structure; combining the node features of the second graph structure with the features of the corresponding entity object nodes in the triplet, using the combined features as a visual attention mechanism to merge the internal region visual features extracted from the two entity objects, and using the merged region visual features as the visual features of the corresponding triplet entity object nodes in the next message propagation; and, after a certain number of message propagations, combining the output triplet node features with the node features of the second graph structure to infer the predicates between object sets.
    Type: Application
    Filed: August 31, 2020
    Publication date: August 26, 2021
    Inventors: Anan LIU, Hongshuo TIAN, Ning XU, Weizhi NIE, Dan SONG
  • Publication number: 20210192274
    Abstract: The present disclosure discloses a visual relationship detection method based on adaptive clustering learning, including: detecting visual objects in an input image and recognizing them to obtain context representations; embedding the context representations of pair-wise visual objects into a low-dimensional joint subspace to obtain a visual relationship sharing representation; embedding the context representations into a plurality of low-dimensional clustering subspaces, respectively, to obtain a plurality of preliminary visual relationship enhancing representations, which are then regularized by a clustering-driven attention mechanism; and fusing the visual relationship sharing representations and the regularized visual relationship enhancing representations with a prior distribution over the category labels of visual relationship predicates, to predict the visual relationship predicates by synthetic relational reasoning.
    Type: Application
    Filed: August 31, 2020
    Publication date: June 24, 2021
    Inventors: Anan LIU, Yanhui WANG, Ning XU, Weizhi NIE
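
The cross-scale spatial map fusion described in publication 20230400301 combines feature maps of different resolutions under an attention mechanism. The filing does not spell out the mechanism here, so the following is a minimal numpy sketch under assumptions: a sigmoid similarity gate stands in for the bilateral local attention, and all function names are hypothetical, not the patented method.

```python
import numpy as np

def upsample2x(f):
    # Nearest-neighbour upsampling of an (H, W, C) feature map to (2H, 2W, C)
    return f.repeat(2, axis=0).repeat(2, axis=1)

def cross_scale_fusion(fine, coarse):
    """Fuse a fine-scale map (2H, 2W, C) with a coarse-scale map (H, W, C).
    A per-location gate derived from feature similarity decides how much of
    each scale contributes to the fused map (a stand-in for attention)."""
    up = upsample2x(coarse)                       # bring coarse map to fine resolution
    score = (fine * up).sum(axis=-1, keepdims=True)  # per-pixel similarity score
    alpha = 1.0 / (1.0 + np.exp(-score))          # sigmoid gate in (0, 1)
    return alpha * fine + (1.0 - alpha) * up      # convex combination of the scales

rng = np.random.default_rng(0)
fine = rng.standard_normal((8, 8, 4))             # fine-scale features
coarse = rng.standard_normal((4, 4, 4))           # coarse-scale features
fused = cross_scale_fusion(fine, coarse)          # fused map at fine resolution
```

In the patented method this fused map would feed the global feature description used for the prediction loss; the sketch only shows the scale-alignment step.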
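
The frequency-domain distillation in publication 20230393304 constrains drift of the low-frequency components of features between the old and incrementally trained networks. As a rough illustration only (the cutoff `keep` and the squared-magnitude penalty are assumptions, not the patented loss), one can penalize disagreement in the lowest Fourier components:

```python
import numpy as np

def low_freq_distill_loss(old_feat, new_feat, keep=4):
    """Distillation penalty on the `keep` lowest frequency components of a
    1-D feature vector, so the new model cannot drift far from the old one
    in the slowly varying part of the representation."""
    old_f = np.fft.rfft(old_feat)                 # real FFT of old-task features
    new_f = np.fft.rfft(new_feat)                 # real FFT of new-task features
    diff = new_f[:keep] - old_f[:keep]            # drift in low-frequency band only
    return float(np.mean(np.abs(diff) ** 2))      # mean squared magnitude of drift

old = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))  # toy feature vector
noisy = old + 0.1 * np.random.default_rng(1).standard_normal(64)
zero_loss = low_freq_distill_loss(old, old)       # identical features: no drift
drift_loss = low_freq_distill_loss(old, noisy)    # perturbed features: positive loss
```

Because only the low band is constrained, high-frequency components remain free to adapt to the new task, which matches the abstract's split between memorizing old knowledge and learning new tasks.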
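
Patent 11361186 embeds pair-wise context into one shared subspace and several clustering subspaces, then fuses them with clustering-driven attention. A minimal numpy sketch, assuming random linear projections and a dot-product attention over clusters (all names and dimensions here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_low, K = 16, 8, 3                             # context dim, subspace dim, clusters

W_shared = 0.1 * rng.standard_normal((d, d_low))   # joint (sharing) subspace projection
W_clusters = 0.1 * rng.standard_normal((K, d, d_low))  # K clustering subspace projections

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def relation_representation(pair_ctx):
    """Project a pair-wise context vector into the sharing subspace and the K
    clustering subspaces, then fuse the enhancing representations into the
    sharing one using attention weights driven by cluster similarity."""
    shared = pair_ctx @ W_shared                   # visual relationship sharing rep.
    enhanced = np.stack([pair_ctx @ W for W in W_clusters])  # (K, d_low) enhancing reps
    attn = softmax(enhanced @ shared)              # clustering-driven attention weights
    return shared + attn @ enhanced                # fused representation for prediction

rep = relation_representation(rng.standard_normal(d))
```

In the patent, the fused representation is further combined with a prior over predicate labels; the sketch stops at the subspace-and-attention step.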
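
Patent 11301725 repeatedly propagates messages over a triplet graph whose second-graph nodes are connected by equiprobable edges. The skeleton of such a propagation, with uniform neighbour averaging standing in for the patent's region-aware attention (the mixing weight 0.5 and the chain-shaped adjacency are illustrative assumptions):

```python
import numpy as np

def propagate(node_feats, adj, steps=2):
    """Iterative message passing: at each step every node averages its
    neighbours' features (equiprobable edges) and mixes the message into
    its own feature vector."""
    feats = node_feats.copy()
    deg = adj.sum(axis=1, keepdims=True)           # neighbour counts per node
    for _ in range(steps):
        msg = (adj @ feats) / np.maximum(deg, 1.0)  # mean over neighbours
        feats = 0.5 * feats + 0.5 * msg             # mix message into node state
    return feats

# toy triplet graph: subject - predicate - object chain
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
out = propagate(rng.standard_normal((3, 4)), adj)  # node features after 2 steps
```

After the propagation rounds, the patent combines the triplet node features with the second-graph node features to infer predicates; this sketch covers only the propagation loop.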