Patents by Inventor Zejun Ma
Zejun Ma has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12231872
Abstract: An audio signal playing method and apparatus, and an electronic device are provided. The method comprises: separating, from a first audio signal, a recorded audio signal corresponding to each of at least one sound source; on the basis of the first audio signal, determining a real-time orientation of each of the at least one sound source relative to the head of a user; for each sound source, according to the real-time orientation of the sound source and the recorded audio signal corresponding to the sound source, generating a target direct audio signal corresponding to the sound source, and generating a target reverberated audio signal corresponding to the sound source; and playing a second audio signal that is generated by means of fusing the target direct audio signal and the target reverberated audio signal corresponding to each sound source.
Type: Grant
Filed: February 28, 2024
Date of Patent: February 18, 2025
Assignee: Beijing Youzhuju Network Technology Co., Ltd.
Inventors: Zheng Xue, Yangfei Xu, Wenzhi Fan, Zhifei Zhang, Yuzhou Gong, Zejun Ma
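As an illustration only, the final fusion-and-play step of this abstract can be sketched as a per-source sum of direct and reverberated signals. All names and the simple sample-wise addition are assumptions for clarity, not the claimed implementation:

```python
def fuse_playback_signal(direct_signals, reverb_signals):
    """Fuse per-source direct and reverberated signals into one playback
    signal (the second audio signal) by sample-wise addition."""
    n = len(direct_signals[0])
    out = [0.0] * n
    for direct, reverb in zip(direct_signals, reverb_signals):
        for i in range(n):
            out[i] += direct[i] + reverb[i]
    return out
```

In practice the per-source direct and reverberated components would first be rendered from the source's real-time orientation (e.g. via HRTF filtering), which this sketch deliberately omits.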
-
Publication number: 20240420678
Abstract: A speech synthesis method, apparatus, computer readable medium, and electronic device are provided. The method includes: obtaining a phoneme sequence corresponding to text to be synthesized; generating a phonemic-level TOBI representation sequence and a prosodic-acoustic feature corresponding to the text to be synthesized based on the phoneme sequence and the text to be synthesized; generating acoustic feature information corresponding to the text to be synthesized based on the TOBI representation sequence and the prosodic-acoustic feature; and generating first audio information corresponding to the text to be synthesized based on the acoustic feature information. The method enables the synthesized audio to be more natural, cadenced, and aligned with the intended semantics of a speaker.
Type: Application
Filed: August 26, 2024
Publication date: December 19, 2024
Inventors: Haopeng Lin, Zejun Ma
-
Publication number: 20240379116
Abstract: The disclosure relates to an audio caption alignment method and apparatus, a medium, and an electronic device. The method includes: obtaining a target audio and a target caption text of the target audio; obtaining a plurality of first target audios by slicing the target audio according to a slicing duration in a case that a duration of the target audio is greater than a first preset duration; determining first audio feature information of each of the first target audios; obtaining target audio feature information of the target audio by concatenating all of the first audio feature information in a case that the duration of the target audio is less than or equal to a second preset duration, where the second preset duration is greater than the first preset duration; and generating caption information corresponding to the target audio according to the target caption text and the target audio feature information.
Type: Application
Filed: May 13, 2024
Publication date: November 14, 2024
Inventors: Xiusong SUN, Zejun MA
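The slice-then-concatenate flow in this abstract reduces to two simple operations; the sketch below (function names and list-based features are illustrative assumptions) shows the shape of the data movement:

```python
def slice_audio(samples, slice_len):
    """Split audio into fixed-length slices; the last slice may be shorter."""
    return [samples[i:i + slice_len] for i in range(0, len(samples), slice_len)]

def concat_features(feature_list):
    """Concatenate per-slice feature sequences into one feature sequence
    for the whole target audio."""
    out = []
    for feats in feature_list:
        out.extend(feats)
    return out
```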
-
Publication number: 20240331706
Abstract: A method, apparatus, device, and storage medium for speaker change point detection, the method including: acquiring target voice data to be detected; extracting, from the target voice data, an acoustic feature characterizing acoustic information of the target voice data; encoding the acoustic feature to obtain speaker characterization vectors of the target voice data; integrating and firing the speaker characterization vectors of the target voice data based on a continuous integrate-and-fire (CIF) mechanism, to obtain a sequence of speaker characterizations in the target voice data; and determining the speaker change points according to the sequence of speaker characterizations bounded by the speaker change points in the target voice data. This method can effectively improve the accuracy of speaker change point detection in interactive target voice data.
Type: Application
Filed: June 12, 2024
Publication date: October 3, 2024
Inventors: Linhao DONG, Zhiyun FAN, Zejun MA
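A minimal sketch of the "integrate and fire" idea named in this abstract: accumulate weighted frame vectors and emit one integrated characterization each time the accumulated weight crosses a threshold. This is a simplified version (real CIF variants split the boundary frame's weight across firings, which is omitted here), and all names are assumptions:

```python
def integrate_and_fire(frame_vecs, weights, threshold=1.0):
    """Accumulate frame-level vectors scaled by their weights; each time the
    accumulated weight reaches the threshold, fire (emit) the integrated,
    weight-normalized vector and reset the accumulators."""
    fired = []
    acc_w = 0.0
    acc_v = [0.0] * len(frame_vecs[0])
    for vec, w in zip(frame_vecs, weights):
        acc_w += w
        acc_v = [a + w * x for a, x in zip(acc_v, vec)]
        if acc_w >= threshold:
            fired.append([a / acc_w for a in acc_v])
            acc_w = 0.0
            acc_v = [0.0] * len(vec)
    return fired
```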
-
Patent number: 12067987
Abstract: The present disclosure discloses a method and device for generating acoustic features, speech model training, and speech recognition. The acoustic information vector and the information weight of the current speech frame are acquired, and the accumulated information weight corresponding to the current speech frame is obtained from the accumulated information weight corresponding to the previous speech frame, the retention rate corresponding to the current speech frame, and the information weight of the current speech frame. The retention rate is the difference between 1 and a leakage rate.
Type: Grant
Filed: January 30, 2024
Date of Patent: August 20, 2024
Assignee: BEIJING YOUZHUJU NETWORK TECHNOLOGY CO., LTD.
Inventors: Linhao Dong, Zejun Ma
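The recurrence described in this abstract is a leaky accumulation. A minimal sketch, assuming (as one plausible reading) that the previous accumulated weight is scaled by the retention rate before the current weight is added:

```python
def accumulated_weights(frame_weights, leakage_rates):
    """Leaky accumulation of per-frame information weights:
    acc[t] = (1 - leakage[t]) * acc[t-1] + weight[t],
    where (1 - leakage) is the retention rate from the abstract."""
    acc = 0.0
    history = []
    for w, leak in zip(frame_weights, leakage_rates):
        retention = 1.0 - leak
        acc = retention * acc + w
        history.append(acc)
    return history
```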
-
Patent number: 12039981
Abstract: A method, apparatus, device, and storage medium for speaker change point detection, the method including: acquiring target voice data to be detected; extracting, from the target voice data, an acoustic feature characterizing acoustic information of the target voice data; encoding the acoustic feature to obtain speaker characterization vectors at a voice frame level of the target voice data; integrating and firing the speaker characterization vectors at the voice frame level of the target voice data based on a continuous integrate-and-fire (CIF) mechanism, to obtain a sequence of speaker characterizations bounded by speaker change points in the target voice data; and determining a timestamp corresponding to the speaker change points, according to the sequence of the speaker characterizations bounded by the speaker change points in the target voice data.
Type: Grant
Filed: December 22, 2023
Date of Patent: July 16, 2024
Assignee: BEIJING YOUZHUJU NETWORK TECHNOLOGY CO., LTD.
Inventors: Linhao Dong, Zhiyun Fan, Zejun Ma
-
Publication number: 20240221729
Abstract: The present disclosure provides a voice recognition method and apparatus, a medium, and an electronic device. The method includes: encoding received voice data to obtain an acoustic vector sequence corresponding to the voice data; obtaining, according to the acoustic vector sequence and a first prediction model, an information amount sequence corresponding to the voice data and a first probability sequence corresponding to the voice data; obtaining a second probability sequence according to the acoustic vector sequence and a second prediction model; determining a target probability sequence according to the first probability sequence and the second probability sequence; and determining a target text corresponding to the voice data according to the target probability sequence.
Type: Application
Filed: May 7, 2022
Publication date: July 4, 2024
Inventors: Linhao DONG, Zejun MA
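The abstract does not say how the target probability sequence is determined from the two model outputs; a common choice, shown here purely as an assumed illustration, is element-wise interpolation with a mixing weight:

```python
def fuse_probabilities(p1, p2, alpha=0.5):
    """Combine two per-frame probability distributions into a target
    probability sequence by element-wise weighted interpolation.
    `alpha` is a hypothetical mixing weight, not taken from the patent."""
    return [
        [alpha * a + (1.0 - alpha) * b for a, b in zip(row1, row2)]
        for row1, row2 in zip(p1, p2)
    ]
```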
-
Publication number: 20240205634
Abstract: An audio signal playing method and apparatus, and an electronic device are provided. The method comprises: separating, from a first audio signal, a recorded audio signal corresponding to each of at least one sound source; on the basis of the first audio signal, determining a real-time orientation of each of the at least one sound source relative to the head of a user; for each sound source, according to the real-time orientation of the sound source and the recorded audio signal corresponding to the sound source, generating a target direct audio signal corresponding to the sound source, and generating a target reverberated audio signal corresponding to the sound source; and playing a second audio signal that is generated by means of fusing the target direct audio signal and the target reverberated audio signal corresponding to each sound source.
Type: Application
Filed: February 28, 2024
Publication date: June 20, 2024
Inventors: Zheng XUE, Yangfei XU, Wenzhi FAN, Zhifei ZHANG, Yuzhou GONG, Zejun MA
-
Publication number: 20240185046
Abstract: The present application relates to an intention recognition method and apparatus, a readable medium, and an electronic device. The method includes: by means of a preset intention recognition quantification model, performing a quantification operation on a dot product of a query vector and a key vector which correspond to each character in a target text, so as to obtain a fixed-point type target vector of a first bit width; according to the fixed-point type target vector, determining, by means of a target mapping relationship, a floating-point type attention weight of a second bit width corresponding to each character; and according to the floating-point type attention weight, determining a target intention corresponding to the target text, the first bit width being smaller than the second bit width.
Type: Application
Filed: February 16, 2024
Publication date: June 6, 2024
Inventors: Xiaoyang LI, Zilin YU, Xiangyang ZHANG, Xiaogang TIAN, Zejun MA
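The quantify-then-map step can be sketched as standard fixed-point quantization with clamping, followed by a scale-based mapping back to floating point. The linear scale mapping is an assumption; the patent's "target mapping relationship" could equally be a lookup table:

```python
def quantize(x, scale, bits=8):
    """Quantize a floating-point value (e.g. a query-key dot product) to a
    signed fixed-point integer of `bits` bits, clamped to the valid range."""
    q = round(x / scale)
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, q))

def dequantize(q, scale):
    """Map the low-bit fixed-point value back to a wider floating-point
    attention weight via the same scale."""
    return q * scale
```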
-
Publication number: 20240169988
Abstract: The present disclosure discloses a method and device for generating acoustic features, speech model training, and speech recognition. The acoustic information vector and the information weight of the current speech frame are acquired, and the accumulated information weight corresponding to the current speech frame is obtained from the accumulated information weight corresponding to the previous speech frame, the retention rate corresponding to the current speech frame, and the information weight of the current speech frame. The retention rate is the difference between 1 and a leakage rate.
Type: Application
Filed: January 30, 2024
Publication date: May 23, 2024
Inventors: Linhao DONG, Zejun MA
-
Publication number: 20240135933
Abstract: A method, apparatus, device, and storage medium for speaker change point detection, the method including: acquiring target voice data to be detected; extracting, from the target voice data, an acoustic feature characterizing acoustic information of the target voice data; encoding the acoustic feature to obtain speaker characterization vectors at a voice frame level of the target voice data; integrating and firing the speaker characterization vectors at the voice frame level of the target voice data based on a continuous integrate-and-fire (CIF) mechanism, to obtain a sequence of speaker characterizations bounded by speaker change points in the target voice data; and determining a timestamp corresponding to the speaker change points, according to the sequence of the speaker characterizations bounded by the speaker change points in the target voice data.
Type: Application
Filed: December 22, 2023
Publication date: April 25, 2024
Inventors: Linhao DONG, Zhiyun FAN, Zejun MA
-
Publication number: 20240127795
Abstract: A model training method, a speech recognition method and apparatus, a medium, and a device are provided. The speech recognition model includes an encoder, a CIF prediction sub-model, and a CTC prediction sub-model. The model training method includes: encoding training speech data based on the encoder to obtain an acoustic vector sequence corresponding to the training speech data; obtaining an information amount sequence corresponding to the training speech data based on the acoustic vector sequence and the CIF prediction sub-model; obtaining a target probability sequence based on the acoustic vector sequence and the CTC prediction sub-model; determining a target loss of the speech recognition model based on the information amount sequence and the target probability sequence; and updating, in response to an updating condition being satisfied, a model parameter of the speech recognition model based on the target loss.
Type: Application
Filed: May 7, 2022
Publication date: April 18, 2024
Applicant: Beijing Youzhuju Network Technology Co., Ltd.
Inventors: Linhao DONG, Zejun MA
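The abstract only says the target loss is determined from the information amount sequence and the target probability sequence. One plausible sketch, borrowing the "quantity loss" idea from the CIF literature (an assumption, not stated here) and treating the CTC loss as already computed:

```python
def quantity_loss(info_weights, target_length):
    """CIF-style quantity loss: absolute difference between the summed
    per-frame information weights and the target token count."""
    return abs(sum(info_weights) - target_length)

def combined_loss(info_weights, target_length, ctc_loss, lam=1.0):
    """Hypothetical target loss: CTC loss plus a weighted quantity loss.
    The weighting scheme `lam` is an illustrative assumption."""
    return ctc_loss + lam * quantity_loss(info_weights, target_length)
```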
-
Publication number: 20240095451
Abstract: Provided are a text analysis method, an electronic device, and a computer readable storage medium. The method includes: acquiring a text to be analyzed; performing token conversion on words in the text to be analyzed to obtain a token sequence to be analyzed, where tokens in token sequences to be analyzed corresponding to texts to be analyzed in different languages belong to a same type; and performing feature extraction on the token sequence to be analyzed, and processing a target task based on the extracted feature, to determine an analysis result for the text to be analyzed.
Type: Application
Filed: September 18, 2023
Publication date: March 21, 2024
Inventors: Yuxiang ZOU, Zejun MA
-
Publication number: 20240046921
Abstract: Embodiments of the present disclosure provide a method, apparatus, electronic device, and medium for speech processing. The method comprises generating a token-level semantic feature of target speech data based on a frame-level acoustic feature of the target speech data. The method further comprises generating a token-level voiceprint feature of the target speech data based on the frame-level acoustic feature. The method further comprises determining a token in the target speech data where speaker change occurs based on the token-level semantic feature and the token-level voiceprint feature. According to embodiments of the present disclosure, speaker change in speech data is detected at the token level in conjunction with the speaker's acoustic features and speech contents, and speaker-based speech recognition results are output directly without post-processing, simplifying the speech recognition process.
Type: Application
Filed: August 4, 2023
Publication date: February 8, 2024
Inventors: Linhao DONG, Zhenlin Liang, Zhiyun Fan, Yi Liu, Zejun Ma
-
Publication number: 20230402031
Abstract: A speech processing method is provided. The method includes: receiving a speech block to be identified as a current speech block, where the speech block includes a past frame, a current frame and a future frame; performing a speech identification process based on the current speech block, where the speech identification process includes: performing speech identification based on the current speech block to obtain a speech identification result of the current frame and a speech identification result of the future frame; determining whether a previous speech block for the current speech block exists; in a case that the previous speech block for the current speech block exists, updating a target identification result based on the speech identification result of the current frame of the current speech block; and outputting the speech identification result of the future frame of the current speech block.
Type: Application
Filed: April 6, 2022
Publication date: December 14, 2023
Inventors: Linhao DONG, Meng CAI, Zejun MA
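One way to read this abstract is that each block emits a provisional result for its future frame, which the next block's current-frame result then revises. A minimal sketch under that reading (the recognizer interface and the overwrite-in-place update are assumptions):

```python
def stream_decode(blocks, recognize):
    """Process speech blocks in order. `recognize(block)` is assumed to
    return (current_frame_result, future_frame_result). When a previous
    block exists, its provisional future-frame output is replaced by the
    current block's current-frame result; the new future-frame result is
    then emitted as the next provisional output."""
    outputs = []
    have_previous = False
    for block in blocks:
        cur, fut = recognize(block)
        if have_previous:
            outputs[-1] = cur  # revise the previous provisional result
        outputs.append(fut)    # provisional result for this block's future frame
        have_previous = True
    return outputs
```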
-
Patent number: 10373613
Abstract: A dual-mode voice control method is disclosed. The method may comprise determining whether a user has executed an operation of activating an operate-to-speak stop determination mode in a voice input interface. The method may further comprise, in response to determining that the user has executed the operation of activating the operate-to-speak stop determination mode, determining whether a microphone is in a busy state. The method may further comprise, in response to determining that the microphone is in the busy state, switching a voice mode from a directly-speak automatic stop determination mode to the operate-to-speak stop determination mode. Before the user executes the operation of activating the operate-to-speak stop determination mode, the voice mode is in the directly-speak automatic stop determination mode if the microphone is in the busy state.
Type: Grant
Filed: November 29, 2016
Date of Patent: August 6, 2019
Assignee: Guangzhou Shenma Mobile Information Technology Co., Ltd.
Inventors: Yajun Wang, Tuwenchang Si, Na Wang, Yi Peng, Sishou Zheng, Xiaoli Fu, Chao Li, Wei Kang, Yining Chen, Zejun Ma
-
Publication number: 20170162196
Abstract: A dual-mode voice control method is disclosed. The method may comprise determining whether a user has executed an operation of activating an operate-to-speak stop determination mode in a voice input interface. The method may further comprise, in response to determining that the user has executed the operation of activating the operate-to-speak stop determination mode, determining whether a microphone is in a busy state. The method may further comprise, in response to determining that the microphone is in the busy state, switching a voice mode from a directly-speak automatic stop determination mode to the operate-to-speak stop determination mode. Before the user executes the operation of activating the operate-to-speak stop determination mode, the voice mode is in the directly-speak automatic stop determination mode if the microphone is in the busy state.
Type: Application
Filed: November 29, 2016
Publication date: June 8, 2017
Inventors: Yajun Wang, Tuwenchang Si, Na Wang, Yi Peng, Sishou Zheng, Xiaoli Fu, Chao Li, Wei Kang, Yining Chen, Zejun Ma