Patents by Inventor Ziyang SONG

Ziyang SONG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240046120
    Abstract: The present disclosure provides a training method and a prediction method for a diagenetic parameter prediction model based on an artificial intelligence algorithm. The training method includes: obtaining a plurality of diagenesis samples, each including diagenetic condition parameters and an actual diagenetic parameter evolved therefrom; constructing an initial diagenetic parameter prediction model based on the diagenesis samples and the total dimension of the diagenetic condition parameters; and training the initial diagenetic parameter prediction model with the diagenesis samples to obtain a trained diagenetic parameter prediction model. By training with existing diagenesis samples, the present disclosure addresses the large amount of calculation, high uncertainty, and large deviations in the prediction of diagenetic parameters, which lead to low evaluation accuracy of reservoirs and limit oil and gas exploration.
    Type: Application
    Filed: July 20, 2023
    Publication date: February 8, 2024
    Applicant: China University of Petroleum-Beijing
    Inventors: Leilei YANG, Keyu LIU, Wei YANG, Hui WANG, Wenhao YANG, Zijie ZHOU, Ke XU, Yinglin CAO, Xiaowei LI, Yi LIU, Dawei WANG, Shu XU, Ziyang SONG
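    Illustrative note: the abstract above describes a supervised training flow in which each diagenesis sample pairs diagenetic condition parameters with the actual diagenetic parameter that evolved from them, and the model is constructed from the total dimension of those condition parameters. The filing does not disclose a specific model architecture, so the sketch below is a minimal illustration assuming a small feed-forward regressor (scikit-learn's MLPRegressor); the sample data, feature count, and network size are placeholders, not details from the patent.

        # Minimal sketch, not the patented method: the feed-forward regressor is an
        # assumption, and all data below is synthetic.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Hypothetical diagenesis samples: rows of diagenetic condition parameters and
        # the actual diagenetic parameter (e.g. a porosity value) evolved from them.
        condition_params = rng.uniform(size=(200, 4))   # 200 samples, 4 condition parameters
        actual_param = condition_params @ np.array([0.5, -0.3, 0.2, 0.1]) \
            + 0.05 * rng.normal(size=200)               # synthetic target values

        # Construct an initial prediction model sized from the total dimension of the
        # diagenetic condition parameters (here, 4 input features).
        model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)

        # Train on the diagenesis samples to obtain the trained prediction model.
        model.fit(condition_params, actual_param)

        # Predict diagenetic parameters for new diagenetic condition parameters.
        print(model.predict(rng.uniform(size=(5, 4))))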
  • Publication number: 20220051061
    Abstract: An artificial intelligence-based action recognition method includes: determining, according to video data comprising an interactive object, node sequence information corresponding to video frames in the video data, where the node sequence information of each video frame includes position information of nodes in a node sequence, and the nodes in the node sequence are nodes of the interactive object that move to implement a corresponding interactive action; determining action categories corresponding to the video frames in the video data, including: determining, according to the node sequence information corresponding to N consecutive video frames in the video data, action categories respectively corresponding to the N consecutive video frames; and determining, according to the action categories corresponding to the video frames in the video data, a target interactive action made by the interactive object in the video data.
    Type: Application
    Filed: November 1, 2021
    Publication date: February 17, 2022
    Inventors: Wanchao CHI, Chong ZHANG, Yonggen LING, Wei LIU, Zhengyou ZHANG, Zejian YUAN, Ziyang SONG, Ziyi YIN
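    Illustrative note: the abstract above describes a skeleton-based pipeline in which per-frame node (keypoint) sequences are extracted from the video, action categories are determined over windows of N consecutive frames, and the target interactive action is then decided from the per-frame categories. The filing does not disclose the classifier or the aggregation rule, so the sketch below assumes an arbitrary per-window classifier and a simple majority vote; all names, array shapes, and the dummy classifier are placeholders, not details from the patent.

        import numpy as np
        from collections import Counter

        def recognize_action(node_sequences, classify_window, N=8):
            """node_sequences: (T, K, 2) array of 2-D positions for K nodes (e.g. skeleton
            joints of the interactive object) in each of T video frames.
            classify_window: callable mapping an (N, K, 2) window to N per-frame action
            category labels; a placeholder, since the filing does not disclose the classifier."""
            T = node_sequences.shape[0]
            frame_categories = []
            # Determine action categories over N consecutive video frames at a time.
            for start in range(0, T - N + 1, N):
                frame_categories.extend(classify_window(node_sequences[start:start + N]))
            # Decide the target interactive action from the per-frame categories;
            # a majority vote is one simple aggregation rule (assumed here).
            return Counter(frame_categories).most_common(1)[0][0]

        # Usage with synthetic node positions and a dummy per-window classifier that
        # labels a frame "wave" when the first node moves upward between frames.
        rng = np.random.default_rng(0)
        frames = rng.uniform(size=(32, 17, 2))   # 32 frames, 17 nodes, (x, y) positions

        def dummy_classifier(window):
            dy = np.diff(window[:, 0, 1], prepend=window[0, 0, 1])
            return ["wave" if d > 0 else "idle" for d in dy]

        print(recognize_action(frames, dummy_classifier, N=8))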