Patents by Inventor Shuai Yue

Shuai Yue has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9558741
    Abstract: Systems and methods are provided for speech recognition. For example, audio characteristics are extracted from acquired voice signals; a syllable confusion network is identified based on at least information associated with the audio characteristics; a word lattice is generated based on at least information associated with the syllable confusion network and a predetermined phonetic dictionary; and an optimal character sequence is calculated in the word lattice as a speech recognition result.
    Type: Grant
    Filed: May 30, 2014
    Date of Patent: January 31, 2017
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Lou Li, Li Lu, Xiang Zhang, Feng Rao, Shuai Yue, Bo Chen, Jianxiong Ma, Haibo Liu
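
The final step of the abstract above — calculating an optimal character sequence in a word lattice — can be sketched as a best-path dynamic program over a small toy lattice. The edge format, scores, and node numbering below are illustrative assumptions, not taken from the patent:

```python
from collections import defaultdict

def best_sequence(edges, start, end):
    """Highest-scoring word sequence through a toy word lattice.

    edges: list of (src, dst, word, log_score); nodes are numbered so that
    src < dst, which makes ascending node order a valid topological order.
    """
    out = defaultdict(list)
    nodes = {start, end}
    for src, dst, word, score in edges:
        out[src].append((dst, word, score))
        nodes.update((src, dst))
    best = {start: (0.0, [])}  # node -> (best score, words along best path)
    for node in sorted(nodes):
        if node not in best:
            continue  # unreachable node
        base_score, base_words = best[node]
        for dst, word, score in out[node]:
            cand = (base_score + score, base_words + [word])
            if dst not in best or cand[0] > best[dst][0]:
                best[dst] = cand
    return best[end][1]
```

A real recognizer would run this over a lattice produced from the syllable confusion network and phonetic dictionary; here the lattice is hand-written.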
  • Publication number: 20160358610
    Abstract: A method is performed at a device having one or more processors and memory. The device establishes a first-level Deep Neural Network (DNN) model based on unlabeled speech data, the unlabeled speech data containing no speaker labels and the first-level DNN model specifying a plurality of basic voiceprint features for the unlabeled speech data. The device establishes a second-level DNN model by tuning the first-level DNN model based on labeled speech data, the labeled speech data containing speech samples with respective speaker labels, wherein the second-level DNN model specifies a plurality of high-level voiceprint features. Using the second-level DNN model, the device registers a first high-level voiceprint feature sequence for a user based on a registration speech sample received from the user. The device performs speaker verification for the user based on the first high-level voiceprint feature sequence registered for the user.
    Type: Application
    Filed: August 18, 2016
    Publication date: December 8, 2016
    Inventors: Eryu WANG, Li LU, Xiang ZHANG, Haibo LIU, Lou LI, Feng RAO, Duling LU, Shuai YUE, Bo CHEN
  • Patent number: 9508347
    Abstract: A method and a device for training a DNN model include, at a device including one or more processors and memory: establishing an initial DNN model; dividing a training data corpus into a plurality of disjoint data subsets; for each of the plurality of disjoint data subsets, providing the data subset to a respective training processing unit of a plurality of training processing units operating in parallel, wherein the respective training processing unit applies a Stochastic Gradient Descent (SGD) process to update the initial DNN model to generate a respective DNN sub-model based on the data subset; and merging the respective DNN sub-models generated by the plurality of training processing units to obtain an intermediate DNN model, wherein the intermediate DNN model is established as either the initial DNN model for a next training iteration or a final DNN model in accordance with a preset convergence condition.
    Type: Grant
    Filed: December 16, 2013
    Date of Patent: November 29, 2016
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Eryu Wang, Li Lu, Xiang Zhang, Haibo Liu, Feng Rao, Lou Li, Shuai Yue, Bo Chen
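
The train-per-subset, merge, and iterate loop in the abstract above can be sketched on a one-parameter model. This is a minimal sketch run sequentially rather than in parallel, and merging by parameter averaging is one plausible merge rule — the abstract does not pin the merge down to averaging:

```python
def sgd_shard(data, w0, lr=0.1, epochs=20):
    # One training processing unit: plain SGD for y ~ w * x on its own
    # disjoint data subset, starting from the shared initial model w0.
    w = w0
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of (w*x - y)^2
    return w

def train_parallel(dataset, n_shards=4, rounds=5):
    # Outer loop from the abstract: split the corpus into disjoint subsets,
    # train one sub-model per subset, merge the sub-models, and reuse the
    # merged model as the next iteration's initial model.
    shards = [dataset[i::n_shards] for i in range(n_shards)]
    w = 0.0
    for _ in range(rounds):
        w = sum(sgd_shard(s, w) for s in shards) / n_shards  # merge step
    return w
```

On noiseless data generated with y = 3x, the merged model converges to 3; a fixed round count stands in for the preset convergence condition.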
  • Patent number: 9502038
    Abstract: A method and device for voiceprint recognition include: establishing a first-level Deep Neural Network (DNN) model based on unlabeled speech data, the unlabeled speech data containing no speaker labels and the first-level DNN model specifying a plurality of basic voiceprint features for the unlabeled speech data; obtaining a plurality of high-level voiceprint features by tuning the first-level DNN model based on labeled speech data, the labeled speech data containing speech samples with respective speaker labels, and the tuning producing a second-level DNN model specifying the plurality of high-level voiceprint features; based on the second-level DNN model, registering a respective high-level voiceprint feature sequence for a user based on a registration speech sample received from the user; and performing speaker verification for the user based on the respective high-level voiceprint feature sequence registered for the user.
    Type: Grant
    Filed: December 12, 2013
    Date of Patent: November 22, 2016
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Eryu Wang, Li Lu, Xiang Zhang, Haibo Liu, Lou Li, Feng Rao, Duling Lu, Shuai Yue, Bo Chen
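
The verification step at the end of the abstract above can be sketched as a similarity comparison between the registered voiceprint vector and a test vector. The DNN that would produce these high-level vectors is out of scope here; cosine scoring and the 0.7 threshold are common choices in speaker verification, not values taken from the patent:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(registered, test, threshold=0.7):
    # Accept the speaker when the test utterance's voiceprint vector is
    # close enough to the vector registered at enrollment time.
    return cosine(registered, test) >= threshold
```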
  • Patent number: 9472190
    Abstract: A method of recognizing speech is provided that includes generating a decoding network that includes a primary sub-network and a classification sub-network. The primary sub-network includes a classification node corresponding to the classification sub-network. The classification sub-network corresponds to a group of uncommon words. A speech input is received and decoded by instantiating a token in the primary sub-network and passing the token through the primary network. When the token reaches the classification node, the method includes transferring the token to the classification sub-network and passing the token through the classification sub-network. When the token reaches an accept node of the classification sub-network, the method includes returning a result of the token passing through the classification sub-network to the primary sub-network. The result includes one or more words in the group of uncommon words. A string corresponding to the speech input is output that includes the one or more words.
    Type: Grant
    Filed: April 28, 2014
    Date of Patent: October 18, 2016
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Shuai Yue, Li Lu, Xiang Zhang, Dadong Xie, Bo Chen, Feng Rao
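
The transfer-and-return control flow in the abstract above — a token leaving the primary network at the classification node, passing through the sub-network of uncommon words, and returning its result — can be sketched with heavily simplified structures. Real decoding networks are weighted graphs; here the primary network is flattened to a token sequence, which only illustrates the hand-off:

```python
CLASS = object()  # marker for the classification node in the primary network

def decode(grammar, subnet, tokens):
    # grammar: the primary network flattened to a sequence of expected
    # tokens, with CLASS marking where the token is transferred to the
    # classification sub-network. subnet maps members of the uncommon-word
    # group to output words. Returns the decoded word string, or None on
    # failure. All structures are illustrative stand-ins.
    if len(grammar) != len(tokens):
        return None
    out = []
    for expect, tok in zip(grammar, tokens):
        if expect is CLASS:
            if tok not in subnet:
                return None           # sub-network rejects the token
            out.append(subnet[tok])   # accept node: result returns to primary
        elif expect == tok:
            out.append(tok)           # token stays in the primary network
        else:
            return None
    return out
```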
  • Patent number: 9466289
    Abstract: An electronic device with one or more processors and memory trains an acoustic model with an international phonetic alphabet (IPA) phoneme mapping collection and audio samples in different languages, where the acoustic model includes: a foreground model; and a background model. The device generates a phone decoder based on the trained acoustic model. The device collects keyword audio samples, decodes the keyword audio samples with the phone decoder to generate phoneme sequence candidates, and selects a keyword phoneme sequence from the phoneme sequence candidates. After obtaining the keyword phoneme sequence, the device detects one or more keywords in an input audio signal with the trained acoustic model, including: matching phonemic keyword portions of the input audio signal with phonemes in the keyword phoneme sequence with the foreground model; and filtering out phonemic non-keyword portions of the input audio signal with the background model.
    Type: Grant
    Filed: December 11, 2013
    Date of Patent: October 11, 2016
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Li Lu, Xiang Zhang, Shuai Yue, Feng Rao, Eryu Wang, Lu Li
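
The detection step in the abstract above — matching keyword phonemes with the foreground model while the background model filters non-keyword audio — can be sketched as a log-likelihood-ratio test. One phoneme per frame and a flat filler score are gross simplifications assumed for illustration:

```python
def detect_keyword(frames, keyword, bg_logp=-1.0, margin=0.0):
    # frames: per-frame dicts of phoneme -> log probability, a toy stand-in
    # for foreground-model output. bg_logp: flat per-frame filler score
    # standing in for the background model. The keyword fires only when the
    # foreground path beats the background path by more than the margin.
    if len(frames) != len(keyword):
        return False
    foreground = sum(frame[ph] for frame, ph in zip(frames, keyword))
    background = bg_logp * len(frames)
    return foreground - background > margin
```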
  • Patent number: 9442910
    Abstract: A method and system for adding punctuation to a voice file are disclosed. The method includes: utilizing silence or pause duration detection to divide a voice file into a plurality of speech segments for processing, where the voice file includes a plurality of feature units; identifying all feature units that appear in the voice file according to every term or expression, and the semantic features of every term or expression, that form each of the plurality of speech segments; using a linguistic model to determine a sum of weights of various punctuation modes in the voice file according to all the feature units, where the linguistic model is built upon semantic features of various parsed-out terms or expressions from the body text of a spoken sentence according to a language library; and adding punctuation to the voice file based on the determined sum of weights of the various punctuation modes.
    Type: Grant
    Filed: March 19, 2014
    Date of Patent: September 13, 2016
    Assignee: Tencent Technology (Shenzhen) Co., Ltd.
    Inventors: Haibo Liu, Eryu Wang, Xiang Zhang, Li Lu, Shuai Yue, Bo Chen, Lou Li, Jian Liu
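
The weighted-punctuation-mode selection in the abstract above can be sketched as follows. Using only each segment's first and last word as its feature units, and the weight table contents, are toy assumptions; the real linguistic model is trained from a parsed text corpus:

```python
def punctuate(segments, model):
    # segments: pause-delimited word lists from silence/pause detection.
    # model: feature word -> {punctuation mark: weight}. Each segment's
    # feature units vote, and the mark with the largest summed weight is
    # appended to the segment.
    out = []
    for seg in segments:
        scores = {}
        for feat in (seg[0], seg[-1]):
            for punct, w in model.get(feat, {}).items():
                scores[punct] = scores.get(punct, 0.0) + w
        best = max(scores, key=scores.get) if scores else "."
        out.append(" ".join(seg) + best)
    return " ".join(out)
```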
  • Patent number: 9396723
    Abstract: A method and a device for training an acoustic language model include: conducting word segmentation for training samples in a training corpus using an initial language model containing no word class labels, to obtain initial word segmentation data containing no word class labels; performing word class replacement for the initial word segmentation data containing no word class labels, to obtain first word segmentation data containing word class labels; using the first word segmentation data containing word class labels to train a first language model containing word class labels; using the first language model containing word class labels to conduct word segmentation for the training samples in the training corpus, to obtain second word segmentation data containing word class labels; and in accordance with the second word segmentation data meeting one or more predetermined criteria, using the second word segmentation data containing word class labels to train the acoustic language model.
    Type: Grant
    Filed: December 17, 2013
    Date of Patent: July 19, 2016
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Duling Lu, Lu Li, Feng Rao, Bo Chen, Li Lu, Xiang Zhang, Eryu Wang, Shuai Yue
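
The word-class replacement step in the abstract above can be sketched in a few lines. The class lexicon contents and label format are illustrative assumptions:

```python
def replace_with_classes(tokens, class_lexicon):
    # Segmented tokens found in the class lexicon are replaced by their
    # class label (e.g. a city name becomes "<CITY>"), so that a language
    # model can be trained over classes rather than raw words.
    return [class_lexicon.get(t, t) for t in tokens]
```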
  • Patent number: 9396724
    Abstract: A method includes: acquiring data samples; performing categorized sentence mining in the acquired data samples to obtain categorized training samples for multiple categories; building a text classifier based on the categorized training samples; classifying the data samples using the text classifier to obtain a class vocabulary and a corpus for each category; mining the corpus for each category according to the class vocabulary for the category to obtain a respective set of high-frequency language templates; training on the templates for each category to obtain a template-based language model for the category; training on the corpus for each category to obtain a class-based language model for the category; training on the class vocabulary for each category to obtain a lexicon-based language model for the category; building a speech decoder according to an acoustic model, the class-based language model and the lexicon-based language model for any given field, and the data samples.
    Type: Grant
    Filed: February 14, 2014
    Date of Patent: July 19, 2016
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Feng Rao, Li Lu, Bo Chen, Xiang Zhang, Shuai Yue, Lu Li
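
A decoder built from several per-category models, as in the abstract above, commonly combines their probabilities by linear interpolation. The combination rule and the weights below are assumptions for illustration; the abstract does not specify how the template-based, class-based, and lexicon-based models are combined:

```python
def interpolate(p_template, p_class, p_lexicon, weights=(0.3, 0.4, 0.3)):
    # Linearly interpolate the probabilities assigned by the three
    # per-category language models; weights sum to 1 and are arbitrary.
    wt, wc, wl = weights
    return wt * p_template + wc * p_class + wl * p_lexicon
```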
  • Patent number: 9355637
    Abstract: A method and an apparatus are provided for retrieving keywords. The apparatus configures at least two types of language models in a model file, where each type of language model includes a recognition model and a corresponding decoding model. The apparatus extracts a speech feature from the to-be-processed speech data; performs language matching on the extracted speech feature using the recognition models in the model file one by one, and determines a recognition model based on the language matching rate; determines the decoding model corresponding to that recognition model; decodes the extracted speech feature using the determined decoding model to obtain a word recognition result; and matches keywords in a keyword dictionary against the word recognition result, outputting the matched keywords.
    Type: Grant
    Filed: February 11, 2015
    Date of Patent: May 31, 2016
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Jianxiong Ma, Lu Li, Li Lu, Xiang Zhang, Shuai Yue, Feng Rao, Eryu Wang, Linghui Kong
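
The pipeline in the abstract above — pick the recognition model with the best language-matching rate, decode with its paired decoding model, then filter against the keyword dictionary — can be sketched with hypothetical callables standing in for trained models:

```python
def retrieve_keywords(speech_feature, models, keyword_dict):
    # models: (recognition, decoding) callable pairs mirroring the paired
    # models in the abstract; recognition(feature) returns a language
    # matching rate, decoding(feature) returns the word recognition result.
    recognition, decoding = max(models, key=lambda m: m[0](speech_feature))
    return [w for w in decoding(speech_feature) if w in keyword_dict]
```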
  • Publication number: 20160086609
    Abstract: The present application discloses a method, an electronic system and a non-transitory computer readable storage medium for recognizing audio commands in an electronic device. The electronic device obtains audio data based on an audio signal provided by a user and extracts characteristic audio fingerprint features from the audio data. The electronic device further determines whether the corresponding audio signal is generated by an authorized user by comparing the characteristic audio fingerprint features with an audio fingerprint model for the authorized user and with a universal background model that represents user-independent audio fingerprint features, respectively. When the corresponding audio signal is generated by the authorized user of the electronic device, an audio command is extracted from the audio data, and an operation is performed according to the audio command.
    Type: Application
    Filed: December 3, 2015
    Publication date: March 24, 2016
    Inventors: Shuai Yue, Xiang Zhang, Li Lu, Feng Rao, Eryu Wang, Haibo Liu, Bo Chen, Jian Liu, Lu Li
  • Patent number: 9257118
    Abstract: A method and an apparatus are provided for retrieving keywords. The apparatus configures at least two types of language models in a model file, where each type of language model includes a recognition model and a corresponding decoding model. The apparatus extracts a speech feature from the to-be-processed speech data; performs language matching on the extracted speech feature using the recognition models in the model file one by one, and determines a recognition model based on the language matching rate; determines the decoding model corresponding to that recognition model; decodes the extracted speech feature using the determined decoding model to obtain a word recognition result; and matches keywords in a keyword dictionary against the word recognition result, outputting the matched keywords.
    Type: Grant
    Filed: February 11, 2015
    Date of Patent: February 9, 2016
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Jianxiong Ma, Lu Li, Li Lu, Xiang Zhang, Shuai Yue, Feng Rao, Eryu Wang, Linghui Kong
  • Patent number: 9230541
    Abstract: This application discloses a method of recognizing a keyword in speech that includes a sequence of audio frames, among them a current frame and a subsequent frame. A candidate keyword is determined for the current frame using a decoding network that includes keywords and filler words of multiple languages, and is used to determine a confidence score for the audio frame sequence. A word option is also determined for the subsequent frame based on the decoding network, and when the candidate keyword and the word option are associated with two distinct types of languages, the confidence score of the audio frame sequence is updated at least based on a penalty factor associated with the two distinct types of languages. The audio frame sequence is then determined to include both the candidate keyword and the word option by evaluating the updated confidence score according to a keyword determination criterion.
    Type: Grant
    Filed: December 11, 2014
    Date of Patent: January 5, 2016
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Lu Li, Li Lu, Jianxiong Ma, Linghui Kong, Feng Rao, Shuai Yue, Xiang Zhang, Haibo Liu, Eryu Wang, Bo Chen
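
The penalty update in the abstract above can be sketched as a discount applied when consecutive words come from distinct languages. The multiplicative form and the 0.8 value are assumptions, not taken from the patent:

```python
def update_confidence(confidence, candidate_lang, option_lang, penalty=0.8):
    # Discount the audio-frame-sequence confidence when the candidate
    # keyword and the following word option belong to different languages;
    # leave it unchanged otherwise.
    if candidate_lang != option_lang:
        confidence *= penalty
    return confidence
```

A keyword determination criterion would then compare the updated confidence against a threshold.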
  • Patent number: 9177131
    Abstract: A computer-implemented method is performed at a server having one or more processors and memory storing programs executed by the one or more processors for authenticating a user from video and audio data. The method includes: receiving a login request from a mobile device, the login request including video data and audio data; extracting a group of facial features from the video data; extracting a group of audio features from the audio data and recognizing a sequence of words in the audio data; and identifying a first user account whose respective facial features match the group of facial features and a second user account whose respective audio features match the group of audio features. If the first user account is the same as the second user account, the server retrieves the sequence of words associated with the user account and compares the sequences of words for authentication purposes.
    Type: Grant
    Filed: April 25, 2014
    Date of Patent: November 3, 2015
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiang Zhang, Li Lu, Eryu Wang, Shuai Yue, Feng Rao, Haibo Liu, Lou Li, Duling Lu, Bo Chen
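
The decision logic at the end of the abstract above — both the face lookup and the voice lookup must resolve to the same account, and the recognized words must match that account's stored phrase — can be sketched as follows. The account-store layout is invented for illustration:

```python
def authenticate(face_account, voice_account, spoken_words, accounts):
    # face_account / voice_account: the accounts whose stored facial and
    # audio features matched the login video and audio (None if no match).
    # Login succeeds only when both lookups agree and the recognized word
    # sequence equals the account's stored phrase.
    if face_account is None or face_account != voice_account:
        return False
    return accounts[face_account]["phrase"] == spoken_words
```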
  • Publication number: 20150154955
    Abstract: A method and an apparatus are provided for retrieving keywords. The apparatus configures at least two types of language models in a model file, where each type of language model includes a recognition model and a corresponding decoding model. The apparatus extracts a speech feature from the to-be-processed speech data; performs language matching on the extracted speech feature using the recognition models in the model file one by one, and determines a recognition model based on the language matching rate; determines the decoding model corresponding to that recognition model; decodes the extracted speech feature using the determined decoding model to obtain a word recognition result; and matches keywords in a keyword dictionary against the word recognition result, outputting the matched keywords.
    Type: Application
    Filed: February 11, 2015
    Publication date: June 4, 2015
    Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Jianxiong MA, Lu LI, Li LU, Xiang ZHANG, Shuai YUE, Feng RAO, Eryu WANG, Linghui KONG
  • Publication number: 20150095032
    Abstract: This application discloses a method of recognizing a keyword in speech that includes a sequence of audio frames, among them a current frame and a subsequent frame. A candidate keyword is determined for the current frame using a decoding network that includes keywords and filler words of multiple languages, and is used to determine a confidence score for the audio frame sequence. A word option is also determined for the subsequent frame based on the decoding network, and when the candidate keyword and the word option are associated with two distinct types of languages, the confidence score of the audio frame sequence is updated at least based on a penalty factor associated with the two distinct types of languages. The audio frame sequence is then determined to include both the candidate keyword and the word option by evaluating the updated confidence score according to a keyword determination criterion.
    Type: Application
    Filed: December 11, 2014
    Publication date: April 2, 2015
    Inventors: Lu LI, Li Lu, Jianxiong Ma, Linghui Kong, Feng Rao, Shuai Yue, Xiang Zhang, Haibo Liu, Eryu Wang, Bo Chen
  • Patent number: 8963836
    Abstract: A method and system for gesture-based human-machine interaction and a computer-readable medium are provided. The system includes a capturing module, a positioning module, and a transforming module. The method includes the steps of: capturing images from a user's video streams, positioning coordinates of three or more predetermined color blocks in the foreground, simulating movements of a mouse according to the coordinates of the first color block, and simulating click actions of the mouse according to the coordinates of the other color blocks. The embodiments according to the current disclosure position coordinates of a plurality of color blocks by processing the captured user's video streams, and simulate mouse actions according to the coordinates of the color blocks. Processing apparatuses like computers may be extended to facilitate gesture-based human-machine interaction in a very simple way, and a touch-sensitive interaction effect can be simulated without the presence of a touch screen.
    Type: Grant
    Filed: August 16, 2011
    Date of Patent: February 24, 2015
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Tong Cheng, Shuai Yue
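
The mapping in the abstract above — first color block drives the cursor, other blocks signal clicks — can be sketched once block centroids have been extracted from a video frame. The proximity-based click rule and the radius value are guesses at one plausible reading, not details from the patent:

```python
def interpret_gesture(blocks, frame_size, screen_size, click_radius=20):
    # blocks: (x, y) centroids of the tracked color blocks in a video
    # frame. The first block's position is scaled from frame coordinates
    # to screen coordinates to place the cursor; a second block within
    # click_radius (Manhattan distance) of the first signals a click.
    (fw, fh), (sw, sh) = frame_size, screen_size
    x, y = blocks[0]
    cursor = (x * sw // fw, y * sh // fh)
    click = False
    if len(blocks) > 1:
        bx, by = blocks[1]
        click = abs(bx - x) + abs(by - y) < click_radius
    return cursor, click
```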
  • Publication number: 20150019214
    Abstract: A method and a device for training a DNN model includes: at a device including one or more processors and memory: establishing an initial DNN model; dividing a training data corpus into a plurality of disjoint data subsets; for each of the plurality of disjoint data subsets, providing the data subset to a respective training processing unit of a plurality of training processing units operating in parallel, wherein the respective training processing unit applies a Stochastic Gradient Descent (SGD) process to update the initial DNN model to generate a respective DNN sub-model based on the data subset; and merging the respective DNN sub-models generated by the plurality of training processing units to obtain an intermediate DNN model, wherein the intermediate DNN model is established as either the initial DNN model for a next training iteration or a final DNN model in accordance with a preset convergence condition.
    Type: Application
    Filed: December 16, 2013
    Publication date: January 15, 2015
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Eryu WANG, Li LU, Xiang ZHANG, Haibo LIU, Feng RAO, Lou LI, Shuai YUE, Bo CHEN
  • Publication number: 20140379354
    Abstract: A method, apparatus and system for payment validation are disclosed. The method includes: receiving a payment validation request from a terminal, wherein the payment validation request includes identification information and a current voice signal; detecting whether the identification information is identical to pre-stored identification information; if identical, extracting voice characteristics associated with identity information and a text password from the current voice signal; matching the current voice characteristics to a pre-stored speaker model; and, if successfully matched, sending a validation reply message to the terminal to indicate that the payment request has been authorized. The validation reply message is utilized by the terminal to proceed with a payment transaction. The identity information identifies the owner of the current voice signal, and the text password is indicated by the current voice signal.
    Type: Application
    Filed: December 2, 2013
    Publication date: December 25, 2014
    Applicant: Tencent Technology (Shenzhen) Co., Ltd.
    Inventors: Xiang Zhang, Li Lu, Eryu Wang, Shuai Yue, Feng Rao, Haibo Liu, Bo Chen
  • Publication number: 20140358539
    Abstract: A method includes: acquiring data samples; performing categorized sentence mining in the acquired data samples to obtain categorized training samples for multiple categories; building a text classifier based on the categorized training samples; classifying the data samples using the text classifier to obtain a class vocabulary and a corpus for each category; mining the corpus for each category according to the class vocabulary for the category to obtain a respective set of high-frequency language templates; training on the templates for each category to obtain a template-based language model for the category; training on the corpus for each category to obtain a class-based language model for the category; training on the class vocabulary for each category to obtain a lexicon-based language model for the category; building a speech decoder according to an acoustic model, the class-based language model and the lexicon-based language model for any given field, and the data samples.
    Type: Application
    Filed: February 14, 2014
    Publication date: December 4, 2014
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Feng Rao, Li Lu, Bo Chen, Xiang Zhang, Shuai Yue, Lu Li