Patents by Inventor Haoqi Li

Haoqi Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240133845
    Abstract: The present disclosure discloses a method for detecting the natural frequency of a blade with a single blade tip timing sensor. In the method, two segments of displacement data vectors are multiplied element-wise (by corresponding sample numbers) to obtain a product data vector; the product data vector is low-pass filtered; the filtered product vector undergoes a Hilbert transform to obtain an instantaneous phase; the instantaneous phase undergoes a discrete Fourier transform; the absolute value of the natural frequency difference between blades is extracted from the amplitude-frequency data; the blades on a bladed disk are permuted and combined to obtain the absolute values of the natural frequency differences between all blade pairs; and the sum of the absolute values of the natural frequency differences between each blade and the other blades is calculated.
    Type: Application
    Filed: October 20, 2022
    Publication date: April 25, 2024
    Inventors: Zhibo YANG, Jiahui CAO, Haoqi LI, Shaohua TIAN, Laihao YANG, Zengkun WANG, Xuefeng CHEN, Shuming WU
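The signal chain described in the abstract (element-wise product, low-pass filtering, Hilbert-transform phase, DFT) can be illustrated with a minimal sketch. This is not the patented implementation; the function name, the cutoff frequency, and the use of uniformly sampled displacement vectors are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def natural_freq_difference(x1, x2, fs, cutoff_hz=50.0):
    """Estimate |f1 - f2| from two blade-tip displacement vectors
    (hypothetical helper; x1 and x2 are equal-length arrays sampled at fs)."""
    # Element-wise product: sin(f1)sin(f2) yields components at f1-f2 and f1+f2
    prod = np.asarray(x1) * np.asarray(x2)
    # Low-pass filtering keeps only the difference-frequency component
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    low = filtfilt(b, a, prod)
    # Instantaneous (wrapped) phase via the analytic signal: a sawtooth
    # whose fundamental frequency is |f1 - f2|
    phase = np.angle(hilbert(low))
    # DFT of the instantaneous phase; the dominant non-DC line is |f1 - f2|
    spec = np.abs(np.fft.rfft(phase))
    freqs = np.fft.rfftfreq(len(phase), d=1 / fs)
    return freqs[np.argmax(spec[1:]) + 1]
```

Repeating this over all blade pairs and summing the absolute differences per blade, as the abstract describes, then reduces to a loop over combinations of such vectors.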
  • Patent number: 11898453
    Abstract: The present disclosure discloses a method for extracting the natural frequency difference between blades with a single blade tip timing sensor or uniformly distributed blade tip timing sensors. The method includes the following steps: acquiring the actual arrival times of a rotating blade using a single blade tip timing sensor or uniformly distributed blade tip timing sensors, and converting the difference between theoretical and actual arrival times into blade-tip displacement data according to the rotational speed and blade length of the rotating blade; selecting the blade-tip displacement data of two rotating blades with the same blade length at the same rotational speed; intercepting the displacement data and performing a discrete Fourier transform on each, with the sampling frequency set approximately equal to the average rotational speed, to obtain spectrum data.
    Type: Grant
    Filed: October 20, 2022
    Date of Patent: February 13, 2024
    Assignee: XI'AN JIAOTONG UNIVERSITY
    Inventors: Zhibo Yang, Jiahui Cao, Xuefeng Chen, Zengkun Wang, Laihao Yang, Shaohua Tian, Haoqi Li, Wenbo Li
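The once-per-revolution sampling in blade tip timing means each blade's vibration line appears aliased in the spectrum, but the difference between two blades' lines survives. A minimal sketch of the DFT step, under the assumption of uniformly resampled displacement vectors with the sampling frequency taken as the average rotational speed (function names are illustrative):

```python
import numpy as np

def dominant_line_hz(x, fs):
    """Dominant spectral line of one blade's tip displacement,
    sampled at fs (approximately the average rotational speed, Hz)."""
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    return freqs[np.argmax(spec[1:]) + 1]  # skip the DC bin

def freq_difference(x1, x2, fs):
    """|difference| between the two blades' spectral lines."""
    return abs(dominant_line_hz(x1, fs) - dominant_line_hz(x2, fs))
```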
  • Patent number: 11829878
    Abstract: In sequence-level prediction of a sequence of frames of high-dimensional data, one or more affective labels are provided at the end of the sequence. Each label pertains to the entire sequence of frames. An action is taken with an agent controlled by a machine learning algorithm for a current frame of the sequence at a current time step. An output of the action represents affective label prediction for the frame at the current time step. A pool of actions taken up until the current time step, including the action taken with the agent, is transformed into a predicted affective history for a subsequent time step. A reward is generated on predicted actions up to the current time step by comparing the predicted actions against corresponding annotated affective labels.
    Type: Grant
    Filed: June 29, 2022
    Date of Patent: November 28, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Ruxin Chen, Naveen Kumar, Haoqi Li
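The loop the abstract describes (per-frame action, pooled actions transformed into a predicted affective history, reward from comparing predictions to annotations) can be sketched minimally. This is a simplified illustration, not the claimed method; `policy`, `run_episode`, and the history transform are hypothetical stand-ins:

```python
import numpy as np

def predicted_history(actions, n_labels):
    """Transform the pool of actions so far into a normalized
    label-frequency vector (one illustrative choice of transform)."""
    counts = np.bincount(np.asarray(actions), minlength=n_labels)
    return counts / len(actions)

def run_episode(frames, labels, policy, n_labels):
    """One pass over a frame sequence. `policy(frame, history)` stands in
    for the learned agent; `labels` are the annotated affective labels."""
    actions, rewards = [], []
    history = np.zeros(n_labels)
    for t, frame in enumerate(frames):
        a = policy(frame, history)  # affective label prediction for frame t
        actions.append(a)
        history = predicted_history(actions, n_labels)  # for step t+1
        # Reward compares the predictions so far against the annotations
        rewards.append(float(np.mean(np.asarray(actions) == labels[: t + 1])))
    return actions, rewards
```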
  • Publication number: 20220327828
    Abstract: In sequence-level prediction of a sequence of frames of high-dimensional data, one or more affective labels are provided at the end of the sequence. Each label pertains to the entire sequence of frames. An action is taken with an agent controlled by a machine learning algorithm for a current frame of the sequence at a current time step. An output of the action represents affective label prediction for the frame at the current time step. A pool of actions taken up until the current time step, including the action taken with the agent, is transformed into a predicted affective history for a subsequent time step. A reward is generated on predicted actions up to the current time step by comparing the predicted actions against corresponding annotated affective labels.
    Type: Application
    Filed: June 29, 2022
    Publication date: October 13, 2022
    Inventors: Ruxin Chen, Naveen Kumar, Haoqi Li
  • Patent number: 11386657
    Abstract: Methods and systems for performing sequence-level prediction of a video scene are described. Video information in a video scene is represented as a sequence of features depicting each frame. One or more scene affective labels are provided at the end of the sequence. Each label pertains to the entire sequence of frames of data. An action is taken with an agent controlled by a machine learning algorithm for a current frame of the sequence at a current time step. An output of the action represents affective label prediction for the frame at the current time step. A pool of actions taken up until the current time step, including the action taken with the agent, is transformed into a predicted affective history for a subsequent time step. A reward is generated on predicted actions up to the current time step by comparing the predicted actions against corresponding annotated scene affective labels.
    Type: Grant
    Filed: January 4, 2021
    Date of Patent: July 12, 2022
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventors: Ruxin Chen, Naveen Kumar, Haoqi Li
  • Publication number: 20210124930
    Abstract: Methods and systems for performing sequence-level prediction of a video scene are described. Video information in a video scene is represented as a sequence of features depicting each frame. One or more scene affective labels are provided at the end of the sequence. Each label pertains to the entire sequence of frames of data. An action is taken with an agent controlled by a machine learning algorithm for a current frame of the sequence at a current time step. An output of the action represents affective label prediction for the frame at the current time step. A pool of actions taken up until the current time step, including the action taken with the agent, is transformed into a predicted affective history for a subsequent time step. A reward is generated on predicted actions up to the current time step by comparing the predicted actions against corresponding annotated scene affective labels.
    Type: Application
    Filed: January 4, 2021
    Publication date: April 29, 2021
    Inventors: Ruxin Chen, Naveen Kumar, Haoqi Li
  • Patent number: 10885341
    Abstract: Methods and systems for performing sequence-level prediction of a video scene are described. Video information in a video scene is represented as a sequence of features depicting each frame. An environment state for each time step t corresponding to each frame is represented by the video information for time step t and predicted affective information from the previous time step t−1. An action A(t) is taken with an agent controlled by a machine learning algorithm for the frame at step t, wherein an output of the action A(t) represents affective label prediction for the frame at the time step t. A pool of predicted actions is transformed to a predicted affective history at the next time step t+1. The predicted affective history is included as part of the environment state for the next time step t+1. A reward R is generated on predicted actions up to the current time step t by comparing them against corresponding annotated movie scene affective labels.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: January 5, 2021
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Ruxin Chen, Naveen Kumar, Haoqi Li
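This formulation makes the environment state explicit: the state at step t combines the frame's video features with the affective history predicted through step t−1. A minimal sketch of that state construction, with illustrative function names and a simple label-frequency transform standing in for the claimed one:

```python
import numpy as np

def env_state(frame_feat, history):
    """State at step t: video features for the frame at t concatenated
    with the predicted affective history carried over from step t-1."""
    return np.concatenate([frame_feat, history])

def update_history(actions, n_labels):
    """Transform the pool of predicted actions into the affective
    history that enters the environment state at step t+1."""
    counts = np.bincount(np.asarray(actions), minlength=n_labels)
    return counts / max(len(actions), 1)
```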
  • Publication number: 20190163977
    Abstract: Methods and systems for performing sequence-level prediction of a video scene are described. Video information in a video scene is represented as a sequence of features depicting each frame. An environment state for each time step t corresponding to each frame is represented by the video information for time step t and predicted affective information from the previous time step t−1. An action A(t) is taken with an agent controlled by a machine learning algorithm for the frame at step t, wherein an output of the action A(t) represents affective label prediction for the frame at the time step t. A pool of predicted actions is transformed to a predicted affective history at the next time step t+1. The predicted affective history is included as part of the environment state for the next time step t+1. A reward R is generated on predicted actions up to the current time step t by comparing them against corresponding annotated movie scene affective labels.
    Type: Application
    Filed: October 25, 2018
    Publication date: May 30, 2019
    Inventors: Ruxin Chen, Naveen Kumar, Haoqi Li