Patents by Inventor Yunyun Ji

Yunyun Ji has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11404055
    Abstract: A method includes extracting, from a frame of the audio sample, spectral features indicative of the cochlear impulse response of an auditory system; obtaining an estimate of a speech signal in the frame from a neural network that is trained, during a training phase, to accept the spectral features as input and output the estimate of the speech signal, where the estimate of the speech signal includes reverberation but excludes noise present in the frame; mapping the estimate of the speech signal to the frequency domain of the frame, using mapping parameters obtained during the training phase, to obtain an estimate of the reverberant speech spectrum in the frame; and obtaining, from a time-distributed neural network, a dereverberated version of the frame, where the estimate of the reverberant speech spectrum in the frame is used as an input to the time-distributed neural network. (See the illustrative sketch after this listing.)
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: August 2, 2022
    Assignee: Agora Lab, Inc.
    Inventors: Yunyun Ji, Ruofei Chen, Zihe Liu, Xiaohan Zhao, Siqiang Yao
  • Publication number: 20220122597
    Abstract: A method includes extracting, from a frame of the audio sample, spectral features indicative of the cochlear impulse response of an auditory system; obtaining an estimate of a speech signal in the frame from a neural network that is trained, during a training phase, to accept the spectral features as input and output the estimate of the speech signal, where the estimate of the speech signal includes reverberation but excludes noise present in the frame; mapping the estimate of the speech signal to the frequency domain of the frame, using mapping parameters obtained during the training phase, to obtain an estimate of the reverberant speech spectrum in the frame; and obtaining, from a time-distributed neural network, a dereverberated version of the frame, where the estimate of the reverberant speech spectrum in the frame is used as an input to the time-distributed neural network.
    Type: Application
    Filed: October 16, 2020
    Publication date: April 21, 2022
    Inventors: Yunyun Ji, Ruofei Chen, Zihe Liu, Xiaohan Zhao, Siqiang Yao
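
Both abstracts above describe the same frame-by-frame dereverberation pipeline: extract spectral features from a frame, estimate the still-reverberant but denoised speech signal with a trained neural network, map that estimate back to the frame's frequency domain with learned mapping parameters, and pass the resulting reverberant speech spectrum to a time-distributed neural network that outputs a dereverberated frame. The sketch below shows how such a pipeline could be wired together. It is a minimal illustration, not the patented method: the layer sizes, the pooled log-spectrum standing in for the cochlear-inspired features, the linear mapping layer, and the phase-reuse reconstruction at the end are all assumptions made for this example.

# Hypothetical sketch of the dereverberation pipeline described in the
# abstracts above. All module names, layer sizes, the stand-in feature
# extractor, and the phase-reuse reconstruction are assumptions for
# illustration only, not the patented design.
import torch
import torch.nn as nn

FRAME_LEN = 512               # samples per frame (assumed)
N_FEATS = 64                  # number of spectral features (assumed)
N_FREQ = FRAME_LEN // 2 + 1   # frequency bins in the frame's spectrum


class SpeechEstimator(nn.Module):
    # Trained (elsewhere) to accept the spectral features and output an
    # estimate of the speech signal that keeps reverberation but drops noise.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_FEATS, 128), nn.ReLU(),
                                 nn.Linear(128, N_FEATS))

    def forward(self, feats):
        return self.net(feats)


class TimeDistributedDereverb(nn.Module):
    # Applied to one frame at a time; takes the estimated reverberant speech
    # spectrum and outputs a dereverberated magnitude spectrum.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_FREQ, 256), nn.ReLU(),
                                 nn.Linear(256, N_FREQ))

    def forward(self, reverberant_spectrum):
        return self.net(reverberant_spectrum)


def extract_spectral_features(frame):
    # Placeholder for features "indicative of the cochlear impulse response":
    # here, simply a log-magnitude spectrum pooled into N_FEATS bands.
    spectrum = torch.fft.rfft(frame).abs()
    bands = spectrum[: N_FEATS * (N_FREQ // N_FEATS)].reshape(N_FEATS, -1)
    return torch.log1p(bands.mean(dim=-1))


def dereverberate_frame(frame, speech_net, mapping, dereverb_net):
    feats = extract_spectral_features(frame)
    speech_estimate = speech_net(feats)              # reverberant, denoised estimate
    reverberant_spectrum = mapping(speech_estimate)  # map to the frame's frequency domain
    dereverb_mag = dereverb_net(reverberant_spectrum)
    # Reusing the frame's own phase for reconstruction is an assumption made
    # here purely so the sketch returns a time-domain frame.
    phase = torch.angle(torch.fft.rfft(frame))
    return torch.fft.irfft(torch.polar(dereverb_mag, phase), n=FRAME_LEN)


if __name__ == "__main__":
    frame = torch.randn(FRAME_LEN)            # stand-in audio frame
    speech_net = SpeechEstimator()
    mapping = nn.Linear(N_FEATS, N_FREQ)      # "mapping parameters" learned during training
    dereverb_net = TimeDistributedDereverb()
    out = dereverberate_frame(frame, speech_net, mapping, dereverb_net)
    print(out.shape)                          # torch.Size([512])

In this sketch the dereverberation network is "time-distributed" in the sense that the same weights are applied to each frame independently, which mirrors the per-frame processing the abstracts describe; how the two networks and the mapping parameters are actually trained is not covered here.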