Patents by Inventor Binghua Qian

Binghua Qian has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10909989
    Abstract: An identity vector generation method is provided. The method includes obtaining to-be-processed speech data. Corresponding acoustic features are extracted from the to-be-processed speech data. A posterior probability that each of the acoustic features belongs to each Gaussian distribution component in a speaker background model is calculated to obtain a statistic. The statistic is mapped to a statistic space to obtain a reference statistic, the statistic space being built according to a statistic corresponding to a speech sample exceeding a threshold speech duration. A corrected statistic is determined according to the calculated statistic and the reference statistic, and an identity vector is generated according to the corrected statistic. (A hedged code sketch of this statistic-correction step appears after this listing.)
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: February 2, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Wei Li, Binghua Qian, Xingming Jin, Ke Li, Fuzhang Wu, Yongjian Wu, Feiyue Huang
  • Patent number: 10854207
    Abstract: A method and an apparatus for training a voiceprint recognition system are provided. The method includes obtaining a voice training data set comprising voice segments of users; determining identity vectors of all the voice segments; identifying, among the determined identity vectors, the identity vectors of voice segments that belong to a same user; placing the identified identity vectors of each user into a corresponding user category; and determining an identity vector in the user category as a first identity vector. The method further includes normalizing the first identity vector by using a normalization matrix, a first value being a sum of similarity degrees between the first identity vector in the corresponding category and the other identity vectors in that category; and training the normalization matrix and outputting its training value when the normalization matrix maximizes the sum of the first values of all the user categories. (A hedged sketch of this training objective appears after this listing.)
    Type: Grant
    Filed: December 24, 2018
    Date of Patent: December 1, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Wei Li, Binghua Qian, Xingming Jin, Ke Li, Fuzhang Wu, Yongjian Wu, Feiyue Huang
  • Patent number: 10832652
    Abstract: A method is performed by at least one processor and includes acquiring training speech data by concatenating the speech segments that have the lowest target cost among candidate concatenation solutions, and extracting, from the training speech data, training speech segments of a first annotation type, the first annotation type annotating that the speech continuity of a respective training speech segment is superior to a preset condition. (A hedged sketch of this selection step appears after this listing.)
    Type: Grant
    Filed: August 14, 2017
    Date of Patent: November 10, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Haolei Yuan, Fuzhang Wu, Binghua Qian
  • Patent number: 10699699
    Abstract: A method for constructing a speech decoding network in digital speech recognition is disclosed. The method comprises acquiring training data obtained by digital speech recording, the training data comprising a plurality of speech segments and each speech segment comprising a plurality of digital speeches; performing acoustic feature extraction on the training data to obtain a feature sequence corresponding to each speech segment; performing progressive training starting from a mono-phoneme acoustic model to obtain an acoustic model; and acquiring a language model and constructing the speech decoding network from the language model and the trained acoustic model. (A hedged pipeline sketch of these steps appears after this listing.)
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: June 30, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Fuzhang Wu, Binghua Qian, Wei Li, Ke Li, Yongjian Wu, Feiyue Huang
  • Patent number: 10692503
    Abstract: A voice data processing method and apparatus are provided. The method includes obtaining an I-Vector of each of a set of voice samples and determining a target seed sample among the voice samples. A first cosine distance is calculated between the I-Vector of the target seed sample and the I-Vector of each target remaining voice sample, a target remaining voice sample being a voice sample other than the target seed sample. Target voice samples are then filtered from the voice samples or the target remaining voice samples according to the first cosine distance, to obtain the target voice samples whose first cosine distance is greater than a first threshold. (A hedged sketch of this filtering step appears after this listing.)
    Type: Grant
    Filed: March 3, 2017
    Date of Patent: June 23, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xingming Jin, Wei Li, Fangmai Zheng, Fuzhang Wu, Bilei Zhu, Binghua Qian, Ke Li, Yongjian Wu, Feiyue Huang
  • Patent number: 10650830
    Abstract: Processing circuitry of an information processing apparatus obtains a set of identity vectors that are calculated according to voice samples from speakers. The identity vectors are classified into speaker classes respectively corresponding to the speakers. The processing circuitry selects, from the identity vectors, first subsets of interclass neighboring identity vectors respectively corresponding to the identity vectors and second subsets of intraclass neighboring identity vectors respectively corresponding to the identity vectors. The processing circuitry determines an interclass difference based on the first subsets of interclass neighboring identity vectors and the corresponding identity vectors, and determines an intraclass difference based on the second subsets of intraclass neighboring identity vectors and the corresponding identity vectors. (A hedged sketch of this neighbor selection appears after this listing.)
    Type: Grant
    Filed: April 16, 2018
    Date of Patent: May 12, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Wei Li, Binghua Qian, Xingming Jin, Ke Li, Fuzhang Wu, Yongjian Wu, Feiyue Huang
  • Publication number: 20190189109
    Abstract: A method is performed by at least one processor and includes acquiring training speech data by concatenating the speech segments that have the lowest target cost among candidate concatenation solutions, and extracting, from the training speech data, training speech segments of a first annotation type, the first annotation type annotating that the speech continuity of a respective training speech segment is superior to a preset condition.
    Type: Application
    Filed: August 14, 2017
    Publication date: June 20, 2019
    Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Haolei YUAN, Fuzhang WU, Binghua QIAN
  • Publication number: 20190130920
    Abstract: A method and an apparatus for training a voiceprint recognition system are provided. The method includes obtaining a voice training data set comprising voice segments of users; determining identity vectors of all the voice segments; identifying, among the determined identity vectors, the identity vectors of voice segments that belong to a same user; placing the identified identity vectors of each user into a corresponding user category; and determining an identity vector in the user category as a first identity vector. The method further includes normalizing the first identity vector by using a normalization matrix, a first value being a sum of similarity degrees between the first identity vector in the corresponding category and the other identity vectors in that category; and training the normalization matrix and outputting its training value when the normalization matrix maximizes the sum of the first values of all the user categories.
    Type: Application
    Filed: December 24, 2018
    Publication date: May 2, 2019
    Inventors: Wei LI, Binghua QIAN, Xingming JIN, Ke LI, Fuzhang WU, Yongjian WU, Feiyue HUANG
  • Publication number: 20190115031
    Abstract: An identity vector generation method is provided. The method includes obtaining to-be-processed speech data. Corresponding acoustic features are extracted from the to-be-processed speech data. A posterior probability that each of the acoustic features belongs to each Gaussian distribution component in a speaker background model is calculated to obtain a statistic. The statistic is mapped to a statistic space to obtain a reference statistic, the statistic space being built according to a statistic corresponding to a speech sample exceeding a threshold speech duration. A corrected statistic is determined according to the calculated statistic and the reference statistic, and an identity vector is generated according to the corrected statistic.
    Type: Application
    Filed: December 7, 2018
    Publication date: April 18, 2019
    Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Wei Li, Binghua Qian, Xingming Jin, Ke Li, Fuzhang Wu, Yongjian Wu, Feiyue Huang
  • Publication number: 20180286410
    Abstract: A voice data processing method and apparatus are provided. The method includes obtaining an I-Vector of each of a set of voice samples and determining a target seed sample among the voice samples. A first cosine distance is calculated between the I-Vector of the target seed sample and the I-Vector of each target remaining voice sample, a target remaining voice sample being a voice sample other than the target seed sample. Target voice samples are then filtered from the voice samples or the target remaining voice samples according to the first cosine distance, to obtain the target voice samples whose first cosine distance is greater than a first threshold.
    Type: Application
    Filed: March 3, 2017
    Publication date: October 4, 2018
    Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xingming JIN, Wei LI, Fangmai ZHENG, Fuzhang WU, Bilei ZHU, Binghua QIAN, Ke LI, Yongjian WU, Feiyue HUANG
  • Publication number: 20180277103
    Abstract: A method for constructing a speech decoding network in digital speech recognition is disclosed. The method comprises acquiring training data obtained by digital speech recording, the training data comprising a plurality of speech segments and each speech segment comprising a plurality of digital speeches; performing acoustic feature extraction on the training data to obtain a feature sequence corresponding to each speech segment; performing progressive training starting from a mono-phoneme acoustic model to obtain an acoustic model; and acquiring a language model and constructing the speech decoding network from the language model and the trained acoustic model.
    Type: Application
    Filed: May 30, 2018
    Publication date: September 27, 2018
    Inventors: Fuzhang WU, Binghua QIAN, Wei LI, Ke LI, Yongjian WU, Feiyue HUANG
  • Publication number: 20180233151
    Abstract: Processing circuitry of an information processing apparatus obtains a set of identity vectors that are calculated according to voice samples from speakers. The identity vectors are classified into speaker classes respectively corresponding to the speakers. The processing circuitry selects, from the identity vectors, first subsets of interclass neighboring identity vectors respectively corresponding to the identity vectors and second subsets of intraclass neighboring identity vectors respectively corresponding to the identity vectors. The processing circuitry determines an interclass difference based on the first subsets of interclass neighboring identity vectors and the corresponding identity vectors, and determines an intraclass difference based on the second subsets of intraclass neighboring identity vectors and the corresponding identity vectors.
    Type: Application
    Filed: April 16, 2018
    Publication date: August 16, 2018
    Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Wei Li, Binghua Qian, Xingming Jin, Ke Li, Fuzhang Wu, Yongjian Wu, Feiyue Huang
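
The entries above describe each invention only at the level of its USPTO abstract. The sketches below are minimal, hedged illustrations of the main computational step in each granted patent; all model sizes, parameter values, and helper functions are assumptions introduced for illustration, not details taken from the patents. First, a toy sketch of the statistic correction described in patent 10909989, assuming a diagonal-covariance speaker background model, a statistic space represented by an orthonormal basis, and a simple convex blend for the correction:

```python
# Hedged sketch of the statistic-correction idea in patent 10909989.
# Assumptions: diagonal-covariance UBM, a "statistic space" basis V (random here,
# standing in for one built from utterances longer than a duration threshold),
# and a convex blend of the raw and reference statistics.
import numpy as np

rng = np.random.default_rng(0)
n_comp, feat_dim, ivec_dim = 8, 13, 16            # toy sizes, not from the patent

# Toy speaker background model: weights, means, diagonal covariances.
w = np.full(n_comp, 1.0 / n_comp)
mu = rng.normal(size=(n_comp, feat_dim))
var = np.ones((n_comp, feat_dim))

def posteriors(frames):
    """Posterior probability of each frame under each Gaussian component."""
    diff = frames[:, None, :] - mu[None, :, :]
    log_like = -0.5 * np.sum(diff**2 / var + np.log(2 * np.pi * var), axis=2)
    log_post = np.log(w) + log_like
    log_post -= log_post.max(axis=1, keepdims=True)
    post = np.exp(log_post)
    return post / post.sum(axis=1, keepdims=True)

def first_order_stat(frames):
    """Centered first-order (Baum-Welch) statistic, flattened per component."""
    post = posteriors(frames)                      # (T, C)
    F = post.T @ frames - post.sum(0)[:, None] * mu
    return F.ravel()                               # (C * D,)

# Orthonormal basis standing in for the learned statistic space.
V = np.linalg.qr(rng.normal(size=(n_comp * feat_dim, ivec_dim)))[0]

frames = rng.normal(size=(50, feat_dim))           # short to-be-processed utterance
F = first_order_stat(frames)
F_ref = V @ (V.T @ F)                              # reference statistic from the space
alpha = 0.7                                        # blend weight (assumed, not specified)
F_corrected = alpha * F + (1 - alpha) * F_ref      # corrected statistic

T_mat = rng.normal(size=(n_comp * feat_dim, ivec_dim))   # toy total-variability matrix
ivector = np.linalg.lstsq(T_mat, F_corrected, rcond=None)[0]
print(ivector.shape)
```

In the patent, the statistic space is built from statistics of speech samples exceeding the duration threshold; the random basis above only stands in for that learned space.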
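
Next, a sketch of the training objective in patent 10854207, under the assumptions that "similarity degree" means cosine similarity, that the first identity vector of each user category is simply the first one listed, and that a generic quasi-Newton optimizer stands in for whatever training procedure the patent actually uses:

```python
# Hedged sketch of the objective in patent 10854207: find a normalization matrix A
# that maximizes, summed over user categories, the similarity between each
# category's first identity vector and the other identity vectors of that user.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
dim, n_users, per_user = 8, 5, 6
# Toy identity vectors, grouped per user (one group per "user category").
ivecs = [rng.normal(loc=rng.normal(size=dim), size=(per_user, dim))
         for _ in range(n_users)]

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def neg_objective(a_flat):
    A = a_flat.reshape(dim, dim)
    total = 0.0
    for group in ivecs:
        first = A @ group[0]                 # the category's first identity vector
        total += sum(cos(first, A @ v) for v in group[1:])
    return -total                            # minimize the negative => maximize

A0 = np.eye(dim).ravel()
res = minimize(neg_objective, A0, method="L-BFGS-B")
A_trained = res.x.reshape(dim, dim)          # training value of the normalization matrix
print("objective:", -res.fun)
```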
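
A sketch of the data selection described in patent 10832652, with placeholder target-cost and continuity scores and an assumed continuity threshold; in a real system these would come from a concatenative speech synthesizer:

```python
# Hedged sketch of the selection step in patent 10832652: keep the candidate
# concatenation solution with the lowest total target cost, then keep only the
# segments whose continuity is superior to a preset condition.
from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    target_cost: float        # mismatch against the desired target unit
    continuity: float         # how smoothly it joins its neighbours (higher = better)

candidates = [                 # candidate concatenation solutions (toy data)
    [Segment("ni", 0.2, 0.9), Segment("hao", 0.3, 0.4)],
    [Segment("ni", 0.1, 0.8), Segment("hao", 0.1, 0.7)],
]

# 1. Acquire training speech data: the solution with the lowest summed target cost.
best = min(candidates, key=lambda sol: sum(s.target_cost for s in sol))

# 2. Extract segments of the "first annotation type": continuity superior to a
#    preset condition (threshold value assumed here, not taken from the patent).
PRESET_CONTINUITY = 0.6
training_segments = [s for s in best if s.continuity > PRESET_CONTINUITY]
print([s.text for s in training_segments])
```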
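
A skeleton of the pipeline in patent 10699699. Every function body below is a stand-in, since the patent does not prescribe a particular feature type, training toolkit, or decoder; only the sequence of steps mirrors the abstract:

```python
# Hedged pipeline sketch of patent 10699699: feature extraction per segment,
# progressive acoustic-model training starting from a mono-phoneme model,
# then composing a decoding network from the language model and acoustic model.
from typing import List

def extract_features(segment: List[float]) -> List[List[float]]:
    """Acoustic feature extraction (e.g. MFCCs) for one digit speech segment."""
    return [[sum(segment) / max(len(segment), 1)]]        # stand-in feature

def train_monophone_model(features):
    """Initial mono-phoneme acoustic model."""
    return {"stage": "monophone", "data": len(features)}

def progressive_training(mono_model, features):
    """Refine the mono-phoneme model in stages (e.g. context-dependent phones)."""
    return {"stage": "final", "init": mono_model["stage"], "data": len(features)}

def build_decoding_network(language_model, acoustic_model):
    """Compose the speech decoding network from the two models."""
    return {"lm": language_model, "am": acoustic_model}

segments = [[0.1, 0.2, 0.3], [0.0, -0.1, 0.2]]            # toy digit recordings
features = [extract_features(seg) for seg in segments]
am = progressive_training(train_monophone_model(features), features)
network = build_decoding_network(language_model={"digits": "0-9"}, acoustic_model=am)
print(network["am"]["stage"])
```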
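
A sketch of the filtering step in patent 10692503, assuming the I-Vectors are already extracted and using an arbitrary seed index and threshold value; how the seed sample is actually chosen is outside this sketch:

```python
# Hedged sketch of the filtering step in patent 10692503: compute the cosine
# distance between the seed sample's I-Vector and each remaining sample's
# I-Vector, and keep the samples whose distance exceeds a first threshold.
import numpy as np

rng = np.random.default_rng(0)
ivectors = rng.normal(size=(10, 32))             # toy I-Vectors, one per voice sample
seed_idx = 0                                     # the target seed sample (assumed)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

seed = ivectors[seed_idx]
FIRST_THRESHOLD = 0.0                            # "first threshold" (toy value)

# First cosine distance between the seed and every target remaining voice sample.
kept = [i for i, v in enumerate(ivectors)
        if i != seed_idx and cosine(seed, v) > FIRST_THRESHOLD]
print("target voice samples:", kept)
```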
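
Finally, a sketch of the neighbor selection in patent 10650830, assuming Euclidean nearest neighbors, a fixed neighborhood size k, and outer-product sums as the summary of the two kinds of differences; the patent leaves the metric, k, and the exact form of the differences open:

```python
# Hedged sketch of patent 10650830: for every identity vector, gather its nearest
# neighbours from other speakers (interclass) and from the same speaker
# (intraclass), then summarize the interclass and intraclass differences.
import numpy as np

rng = np.random.default_rng(0)
n_speakers, per_spk, dim, k = 4, 5, 8, 2
ivecs = rng.normal(size=(n_speakers * per_spk, dim))
labels = np.repeat(np.arange(n_speakers), per_spk)        # speaker class per vector

def nearest(idx, candidates):
    """Indices of the k candidates closest (Euclidean) to identity vector idx."""
    d = np.linalg.norm(ivecs[candidates] - ivecs[idx], axis=1)
    return candidates[np.argsort(d)[:k]]

inter_diff = np.zeros((dim, dim))
intra_diff = np.zeros((dim, dim))
for i in range(len(ivecs)):
    others = np.where(labels != labels[i])[0]              # other speakers' vectors
    same = np.where((labels == labels[i]) & (np.arange(len(ivecs)) != i))[0]
    for j in nearest(i, others):                           # interclass neighbours
        d = (ivecs[i] - ivecs[j])[:, None]
        inter_diff += d @ d.T
    for j in nearest(i, same):                             # intraclass neighbours
        d = (ivecs[i] - ivecs[j])[:, None]
        intra_diff += d @ d.T

print(np.trace(inter_diff), np.trace(intra_diff))
```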