Patents by Inventor Zejun Ma

Zejun Ma has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12231872
    Abstract: An audio signal playing method and apparatus, and an electronic device are provided. The method comprises: separating, from a first audio signal, a recorded audio signal corresponding to each of at least one sound source; on the basis of the first audio signal, determining a real-time orientation of each of the at least one sound source relative to the head of a user; for each sound source, according to the real-time orientation of the sound source and the recorded audio signal corresponding to the sound source, generating a target direct audio signal corresponding to the sound source, and generating a target reverberated audio signal corresponding to the sound source; and playing a second audio signal that is generated by means of fusing the target direct audio signal and the target reverberated audio signal corresponding to each sound source.
    Type: Grant
    Filed: February 28, 2024
    Date of Patent: February 18, 2025
    Assignee: Beijing Youzhuju Network Technology Co., Ltd.
    Inventors: Zheng Xue, Yangfei Xu, Wenzhi Fan, Zhifei Zhang, Yuzhou Gong, Zejun Ma
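The per-source rendering described in this abstract (a direct path driven by the source's real-time orientation, a reverberant path, and a fused playback signal) can be sketched in Python. This is only an illustrative toy: the constant-power panning, single-tap "reverb", gains, and function names below are assumptions standing in for the HRTF rendering and room simulation a real system would use.

```python
import math

def spatialize(source, azimuth_deg, direct_gain=1.0, reverb_gain=0.3, delay=3):
    """Toy sketch: render one sound source as a direct stereo path based
    on its azimuth, add a diffuse reverberant path, and fuse the two."""
    # map azimuth in [-90, 90] degrees to a pan angle in [0, pi/2]
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2)
    left_g = math.cos(theta) * direct_gain   # constant-power panning
    right_g = math.sin(theta) * direct_gain
    direct = [(s * left_g, s * right_g) for s in source]
    # crude reverberant path: a delayed, attenuated copy fed to both ears
    reverb = [(0.0, 0.0)] * delay + [(s * reverb_gain, s * reverb_gain)
                                     for s in source[:len(source) - delay]]
    # fuse the target direct and target reverberated signals
    return [(d[0] + r[0], d[1] + r[1]) for d, r in zip(direct, reverb)]
```

A centered source (azimuth 0) yields equal left/right gains, while +90 degrees pans fully to the right ear.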
  • Publication number: 20240420678
    Abstract: A speech synthesis method and apparatus, a computer-readable medium, and an electronic device are provided. The method includes: obtaining a phoneme sequence corresponding to text to be synthesized; generating a phoneme-level TOBI representation sequence and a prosodic-acoustic feature corresponding to the text to be synthesized based on the phoneme sequence and the text to be synthesized; generating acoustic feature information corresponding to the text to be synthesized based on the TOBI representation sequence and the prosodic-acoustic feature; and generating first audio information corresponding to the text to be synthesized based on the acoustic feature information. The method enables the synthesized audio to be more natural, cadenced, and aligned with the intended semantics of a speaker.
    Type: Application
    Filed: August 26, 2024
    Publication date: December 19, 2024
    Inventors: Haopeng Lin, Zejun Ma
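The pipeline in this abstract (phonemes, then a TOBI sequence plus prosodic features, then acoustic features) is essentially staged function composition. The sketch below shows only that data flow; every stage body, label, and feature here is a placeholder assumption, not the patented models.

```python
def text_to_phonemes(text):
    # placeholder grapheme-to-phoneme step (real systems use a G2P model)
    return list(text.replace(" ", ""))

def predict_prosody(phonemes, text):
    # stand-in for the model that predicts a phoneme-level TOBI label and
    # a prosodic-acoustic feature (here, a toy per-phoneme duration)
    tobi = ["L" if i % 2 == 0 else "H" for i in range(len(phonemes))]
    prosody = [1.0 for _ in phonemes]
    return tobi, prosody

def predict_acoustics(tobi, prosody):
    # stand-in acoustic model: one toy feature frame per phoneme
    return [(0.0 if t == "L" else 1.0) * d for t, d in zip(tobi, prosody)]

def synthesize(text):
    phonemes = text_to_phonemes(text)
    tobi, prosody = predict_prosody(phonemes, text)
    acoustics = predict_acoustics(tobi, prosody)
    return acoustics  # a vocoder would turn these frames into audio
```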
  • Publication number: 20240379116
    Abstract: The disclosure relates to an audio caption alignment method and apparatus, a medium, and an electronic device. The method includes: obtaining a target audio and a target caption text of the target audio; obtaining a plurality of first target audios by slicing the target audio according to a slicing duration in a case that a duration of the target audio is greater than a first preset duration; determining first audio feature information of each of the first target audios; obtaining target audio feature information of the target audio by concatenating all of the first audio feature information in a case that the duration of the target audio is less than or equal to a second preset duration, where the second preset duration is greater than the first preset duration; and generating caption information corresponding to the target audio according to the target caption text and the target audio feature information.
    Type: Application
    Filed: May 13, 2024
    Publication date: November 14, 2024
    Inventors: Xiusong SUN, Zejun MA
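The slice-then-concatenate step this abstract describes can be sketched directly. The function below assumes a pluggable feature extractor (`extract`) and list-like audio; both are illustrative stand-ins.

```python
def align_features(audio, slice_len, extract):
    """Sketch: if the audio exceeds the slicing duration, cut it into
    slices, extract features per slice, and concatenate all per-slice
    features back into one target feature sequence."""
    if len(audio) <= slice_len:
        return extract(audio)
    slices = [audio[i:i + slice_len] for i in range(0, len(audio), slice_len)]
    features = []
    for s in slices:
        features.extend(extract(s))
    return features
```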
  • Publication number: 20240331706
    Abstract: A method, apparatus, device, and storage medium for speaker change point detection, the method including: acquiring target voice data to be detected; extracting an acoustic feature characterizing acoustic information of the target voice data; encoding the acoustic feature to obtain speaker characterization vectors of the target voice data; integrating and firing the speaker characterization vectors based on a continuous integrate-and-fire (CIF) mechanism, to obtain a sequence of speaker characterizations bounded by speaker change points in the target voice data; and determining the speaker change points according to the sequence of speaker characterizations. This method can effectively improve the accuracy of speaker change point detection in voice data from interactive scenarios.
    Type: Application
    Filed: June 12, 2024
    Publication date: October 3, 2024
    Inventors: Linhao DONG, Zhiyun FAN, Zejun MA
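The "integrate and fire" step of the CIF mechanism can be illustrated with a minimal loop: per-frame weights accumulate, and a fire event (here read as a candidate speaker change point at that frame) is emitted whenever the accumulator crosses a threshold. The threshold value and the carry-over behavior are assumptions drawn from the general CIF literature, not from this patent's text.

```python
def cif_fire(weights, threshold=1.0):
    """Toy continuous integrate-and-fire loop: accumulate per-frame
    weights and emit the frame index each time the threshold is crossed,
    carrying the surplus into the next integration interval."""
    acc = 0.0
    fired = []
    for i, w in enumerate(weights):
        acc += w
        if acc >= threshold:
            fired.append(i)
            acc -= threshold  # carry the remainder forward
    return fired
```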
  • Patent number: 12067987
    Abstract: The present disclosure provides a method and device for generating acoustic features, speech model training, and speech recognition. The accumulated information weight corresponding to the current speech frame is obtained from the acoustic information vector and information weight of the current speech frame, together with the accumulated information weight corresponding to the previous speech frame and the retention rate corresponding to the current speech frame. The retention rate is the difference between 1 and a leakage rate.
    Type: Grant
    Filed: January 30, 2024
    Date of Patent: August 20, 2024
    Assignee: BEIJING YOUZHUJU NETWORK TECHNOLOGY CO., LTD.
    Inventors: Linhao Dong, Zejun Ma
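The recurrence this abstract states (current accumulated weight from the previous accumulated weight, the retention rate, and the current frame's weight, with retention = 1 - leakage) can be written out directly. The leakage value and function names are illustrative.

```python
def accumulate(weights, leakage=0.1):
    """Sketch of the accumulated-information-weight recurrence: each
    frame's accumulated weight is the previous accumulated weight scaled
    by the retention rate (1 minus the leakage rate) plus the current
    frame's information weight."""
    retention = 1.0 - leakage
    acc = 0.0
    out = []
    for w in weights:
        acc = retention * acc + w
        out.append(acc)
    return out
```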
  • Patent number: 12039981
    Abstract: A method, apparatus, device, and storage medium for speaker change point detection, the method including: acquiring target voice data to be detected; and extracting an acoustic feature characterizing acoustic information of the target voice data from the target voice data; encoding the acoustic feature to obtain speaker characterization vectors at a voice frame level of the target voice data; integrating and firing the speaker characterization vectors at the voice frame level of the target voice data based on a continuous integrate-and-fire CIF mechanism, to obtain a sequence of speaker characterizations bounded by speaker change points in the target voice data; and determining a timestamp corresponding to the speaker change points, according to the sequence of the speaker characterizations bounded by the speaker change points in the target voice data.
    Type: Grant
    Filed: December 22, 2023
    Date of Patent: July 16, 2024
    Assignee: BEIJING YOUZHUJU NETWORK TECHNOLOGY CO., LTD.
    Inventors: Linhao Dong, Zhiyun Fan, Zejun Ma
  • Publication number: 20240221729
    Abstract: The present disclosure provides a voice recognition method and apparatus, a medium, and an electronic device. The method includes: encoding received voice data to obtain an acoustic vector sequence corresponding to the voice data; obtaining, according to the acoustic vector sequence and a first prediction model, an information amount sequence corresponding to the voice data and a first probability sequence corresponding to the voice data; obtaining a second probability sequence according to the acoustic vector sequence and a second prediction model; determining a target probability sequence according to the first probability sequence and the second probability sequence; and determining a target text corresponding to the voice data according to the target probability sequence.
    Type: Application
    Filed: May 7, 2022
    Publication date: July 4, 2024
    Inventors: Linhao DONG, Zejun MA
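This abstract combines two models' probability sequences into a target sequence without specifying the fusion rule. A per-frame weighted interpolation followed by greedy decoding is one plausible reading, shown here purely as an assumption.

```python
def fuse_probabilities(p1, p2, alpha=0.5):
    """Hypothetical fusion: interpolate the two models' per-frame label
    probabilities into a target probability sequence."""
    fused = []
    for f1, f2 in zip(p1, p2):
        frame = {k: alpha * f1.get(k, 0.0) + (1 - alpha) * f2.get(k, 0.0)
                 for k in set(f1) | set(f2)}
        fused.append(frame)
    return fused

def decode(fused):
    # greedy decode: pick the highest-probability label per frame
    return [max(frame, key=frame.get) for frame in fused]
```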
  • Publication number: 20240205634
    Abstract: An audio signal playing method and apparatus, and an electronic device are provided. The method comprises: separating, from a first audio signal, a recorded audio signal corresponding to each of at least one sound source; on the basis of the first audio signal, determining a real-time orientation of each of the at least one sound source relative to the head of a user; for each sound source, according to the real-time orientation of the sound source and the recorded audio signal corresponding to the sound source, generating a target direct audio signal corresponding to the sound source, and generating a target reverberated audio signal corresponding to the sound source; and playing a second audio signal that is generated by means of fusing the target direct audio signal and the target reverberated audio signal corresponding to each sound source.
    Type: Application
    Filed: February 28, 2024
    Publication date: June 20, 2024
    Inventors: Zheng XUE, Yangfei XU, Wenzhi FAN, Zhifei ZHANG, Yuzhou GONG, Zejun MA
  • Publication number: 20240185046
    Abstract: The present application relates to an intention recognition method and apparatus, a readable medium, and an electronic device. The method includes: by means of a preset intention recognition quantification model, performing a quantification operation on a dot product of a query vector and a key vector which correspond to each character in a target text, so as to obtain a fixed-point type target vector with a first bit width; according to the fixed-point type target vector, determining, by means of a target mapping relationship, a floating-point type attention weight with a second bit width corresponding to each character; and according to the floating-point type attention weight, determining a target intention corresponding to the target text, the first bit width being smaller than the second bit width.
    Type: Application
    Filed: February 16, 2024
    Publication date: June 6, 2024
    Inventors: Xiaoyang LI, Zilin YU, Xiangyang ZHANG, Xiaogang TIAN, Zejun MA
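The quantize-then-map pipeline in this abstract can be sketched as: quantize each query-key dot product to a low-bit fixed-point integer, map it back to a floating-point score through a precomputed table, then normalize into attention weights. The scale, the exponential table, and the softmax-style normalization are all assumptions, not the patent's mapping.

```python
import math

def quantized_attention(dots, scale=0.05, bits=8):
    """Hypothetical sketch: clamp each dot product to a signed fixed-point
    value of `bits` width (the 'first bit' width), look up a precomputed
    floating-point score (the 'target mapping relationship'), and
    normalize into attention weights."""
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    fixed = [max(lo, min(hi, round(d / scale))) for d in dots]
    table = {v: math.exp(v * scale) for v in set(fixed)}  # mapping table
    scores = [table[v] for v in fixed]
    total = sum(scores)
    return [s / total for s in scores]
```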
  • Publication number: 20240169988
    Abstract: The present disclosure provides a method and device for generating acoustic features, speech model training, and speech recognition. The accumulated information weight corresponding to the current speech frame is obtained from the acoustic information vector and information weight of the current speech frame, together with the accumulated information weight corresponding to the previous speech frame and the retention rate corresponding to the current speech frame. The retention rate is the difference between 1 and a leakage rate.
    Type: Application
    Filed: January 30, 2024
    Publication date: May 23, 2024
    Inventors: Linhao DONG, Zejun MA
  • Publication number: 20240135933
    Abstract: A method, apparatus, device, and storage medium for speaker change point detection, the method including: acquiring target voice data to be detected; and extracting an acoustic feature characterizing acoustic information of the target voice data from the target voice data; encoding the acoustic feature to obtain speaker characterization vectors at a voice frame level of the target voice data; integrating and firing the speaker characterization vectors at the voice frame level of the target voice data based on a continuous integrate-and-fire CIF mechanism, to obtain a sequence of speaker characterizations bounded by speaker change points in the target voice data; and determining a timestamp corresponding to the speaker change points, according to the sequence of the speaker characterizations bounded by the speaker change points in the target voice data.
    Type: Application
    Filed: December 22, 2023
    Publication date: April 25, 2024
    Inventors: Linhao DONG, Zhiyun FAN, Zejun MA
  • Publication number: 20240127795
    Abstract: A model training method, a speech recognition method and apparatus, a medium, and a device are provided. The speech recognition model includes an encoder, a CIF prediction sub-model, and a CTC prediction sub-model. The model training method includes: encoding training speech data based on the encoder to obtain an acoustic vector sequence corresponding to the training speech data; obtaining an information amount sequence corresponding to the training speech data based on the acoustic vector sequence and the CIF prediction sub-model; obtaining a target probability sequence based on the acoustic vector sequence and the CTC prediction sub-model; determining a target loss of the speech recognition model based on the information amount sequence and the target probability sequence; and updating, in response to an updating condition being satisfied, a model parameter of the speech recognition model based on the target loss.
    Type: Application
    Filed: May 7, 2022
    Publication date: April 18, 2024
    Applicant: Beijing Youzhuju Network Technology Co., Ltd.
    Inventors: Linhao DONG, Zejun MA
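A target loss built from an information amount sequence and a target probability sequence could plausibly combine a CIF-style "quantity" term (pushing the summed information amounts toward the target token count) with a CTC-style negative log-likelihood term. This weighting and the simplified terms below are assumptions, not the patented loss.

```python
def target_loss(info_amounts, target_log_probs, target_len, ctc_weight=0.5):
    """Hedged sketch of a joint loss: a quantity term from the CIF
    information amounts plus a weighted negative log-likelihood term
    standing in for the CTC loss."""
    quantity_loss = abs(sum(info_amounts) - target_len)
    nll = -sum(target_log_probs)  # simplified stand-in for CTC loss
    return quantity_loss + ctc_weight * nll
```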
  • Publication number: 20240095451
    Abstract: Provided are a text analysis method, an electronic device, and a computer-readable storage medium. The method includes: acquiring a text to be analyzed; performing token conversion on words in the text to be analyzed to obtain a token sequence to be analyzed, where tokens in the token sequences corresponding to texts in different languages belong to a same type; and performing feature extraction on the token sequence to be analyzed, and processing a target task based on the extracted features, to determine an analysis result for the text to be analyzed.
    Type: Application
    Filed: September 18, 2023
    Publication date: March 21, 2024
    Inventors: Yuxiang ZOU, Zejun MA
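One way tokens from different languages can "belong to a same type" is byte-level tokenization, where every word maps into one shared byte vocabulary regardless of script. The patent does not specify its token scheme; the byte-level choice below is an assumption.

```python
def to_shared_tokens(text):
    """Illustrative token conversion: encode each word as UTF-8 bytes, so
    text in any language yields tokens from the same 0-255 vocabulary."""
    return [b for word in text.split() for b in word.encode("utf-8")]
```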
  • Publication number: 20240046921
    Abstract: Embodiments of the present disclosure provide a method, apparatus, electronic device, and medium for speech processing. The method comprises generating a token-level semantic feature of target speech data based on a frame-level acoustic feature of the target speech data. The method further comprises generating a token-level voiceprint feature of the target speech data based on the frame-level acoustic feature. The method further comprises determining a token in the target speech data where speaker change occurs based on the token-level semantic feature and the token-level voiceprint feature. According to embodiments of the present disclosure, speaker change in speech data is detected at the token level in conjunction with the speaker's acoustic features and speech contents, and speaker-based speech recognition results are output directly without post-processing, simplifying the speech recognition process.
    Type: Application
    Filed: August 4, 2023
    Publication date: February 8, 2024
    Inventors: Linhao DONG, Zhenlin LIANG, Zhiyun FAN, Yi LIU, Zejun MA
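Token-level change detection from token-level voiceprint features can be illustrated with a distance test between consecutive tokens; a real system, per the abstract, would also fold in the token-level semantic feature. The Euclidean distance and threshold here are assumptions.

```python
def detect_change_tokens(voiceprints, threshold=0.5):
    """Toy sketch: flag token i as a speaker-change token when its
    voiceprint vector is far from token i-1's."""
    changes = []
    for i in range(1, len(voiceprints)):
        prev, cur = voiceprints[i - 1], voiceprints[i]
        dist = sum((a - b) ** 2 for a, b in zip(prev, cur)) ** 0.5
        if dist > threshold:
            changes.append(i)
    return changes
```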
  • Publication number: 20230402031
    Abstract: A speech processing method is provided. The method includes: receiving a speech block to be identified as a current speech block, where the speech block includes a past frame, a current frame and a future frame; performing a speech identification process based on the current speech block, where the speech identification process includes: performing speech identification based on the current speech block to obtain a speech identification result of the current frame and a speech identification result of the future frame; determining whether a previous speech block for the current speech block exists; in a case that the previous speech block for the current speech block exists, updating a target identification result based on the speech identification result of the current frame of the current speech block; and outputting the speech identification result of the future frame of the current speech block.
    Type: Application
    Filed: April 6, 2022
    Publication date: December 14, 2023
    Inventors: Linhao DONG, Meng CAI, Zejun MA
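The blocking scheme this abstract relies on (each block carrying past context, fresh current frames, and a future-frame lookahead) can be sketched without the recognizer itself. Block and lookahead sizes below are arbitrary illustrative parameters.

```python
def stream_blocks(frames, cur_len, fut_len):
    """Sketch of streaming block construction: each block holds the
    preceding frames as 'past' context, `cur_len` fresh 'current' frames,
    and up to `fut_len` lookahead 'future' frames. Recognition and the
    update of tentative future-frame results are elided."""
    blocks = []
    for start in range(0, len(frames), cur_len):
        past = frames[max(0, start - cur_len):start]
        cur = frames[start:start + cur_len]
        fut = frames[start + cur_len:start + cur_len + fut_len]
        blocks.append((past, cur, fut))
    return blocks
```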
  • Patent number: 10373613
    Abstract: A dual-mode voice control method is disclosed. The method may comprise determining whether a user has executed an operation of activating an operate-to-speak stop determination mode in a voice input interface. The method may further comprise, in response to determining that the user has executed the operation of activating the operate-to-speak stop determination mode, determining whether a microphone is in a busy state. The method may further comprise, in response to determining that the microphone is in the busy state, switching a voice mode from a directly-speak automatic stop determination mode to the operate-to-speak stop determination mode. Before the user executes the operation of activating the operate-to-speak stop determination mode, the voice mode is in the directly-speak automatic stop determination mode if the microphone is in the busy state.
    Type: Grant
    Filed: November 29, 2016
    Date of Patent: August 6, 2019
    Assignee: Guangzhou Shenma Mobile Information Technology Co., Ltd.
    Inventors: Yajun Wang, Tuwenchang Si, Na Wang, Yi Peng, Sishou Zheng, Xiaoli Fu, Chao Li, Wei Kang, Yining Chen, Zejun Ma
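The mode-switching rule in this abstract reduces to a small decision: switch to the operate-to-speak stop mode only when the user activates it while the microphone is busy. The function and mode names below are illustrative labels, not the patent's terminology for any API.

```python
def choose_voice_mode(user_activated_hold_mode, mic_busy):
    """Sketch of the dual-mode rule: activating the operate-to-speak stop
    determination mode while the microphone is busy switches the voice
    mode; otherwise the directly-speak automatic stop mode remains."""
    if user_activated_hold_mode and mic_busy:
        return "operate-to-speak"
    return "directly-speak"
```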
  • Publication number: 20170162196
    Abstract: A dual-mode voice control method is disclosed. The method may comprise determining whether a user has executed an operation of activating an operate-to-speak stop determination mode in a voice input interface. The method may further comprise, in response to determining that the user has executed the operation of activating the operate-to-speak stop determination mode, determining whether a microphone is in a busy state. The method may further comprise, in response to determining that the microphone is in the busy state, switching a voice mode from a directly-speak automatic stop determination mode to the operate-to-speak stop determination mode. Before the user executes the operation of activating the operate-to-speak stop determination mode, the voice mode is in the directly-speak automatic stop determination mode if the microphone is in the busy state.
    Type: Application
    Filed: November 29, 2016
    Publication date: June 8, 2017
    Inventors: Yajun Wang, Tuwenchang Si, Na Wang, Yi Peng, Sishou Zheng, Xiaoli Fu, Chao Li, Wei Kang, Yining Chen, Zejun Ma