Patents by Inventor Dan Su

Dan Su has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240124437
    Abstract: The present disclosure relates to an injectable lurasidone suspension and a preparation method thereof, and in particular to an irregular form of a lurasidone solid and a pharmaceutical composition thereof. The present disclosure also relates to a preparation method for the solid and the pharmaceutical composition thereof, and an application thereof in the treatment of mental diseases. According to the present disclosure, the prepared lurasidone solid has a controllable particle size, with a Dv50 particle size of 6 μm to 110 μm. Good particle size stability can also be maintained in the pharmaceutical composition. The lurasidone suspension preparation obtained by the method is fast-acting, has a long sustained-release period, and can effectively reduce the risk caused by poor patient compliance.
    Type: Application
    Filed: March 21, 2022
    Publication date: April 18, 2024
    Inventors: Ming LI, Xiangyong LIANG, Zhengxing SU, Dan LI, Duo KE, Cong YI, Wei WEI, Guifu DENG, Ya PENG, Dong ZHAO, Jingyi WANG
  • Patent number: 11908483
    Abstract: This application relates to a method of extracting an inter-channel feature from a multi-channel, multi-sound-source mixed audio signal, performed at a computing device.
    Type: Grant
    Filed: August 12, 2021
    Date of Patent: February 20, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Rongzhi Gu, Shixiong Zhang, Lianwu Chen, Yong Xu, Meng Yu, Dan Su, Dong Yu
  • Patent number: 11908455
    Abstract: A speech separation model training method and apparatus, a computer-readable storage medium, and a computer device are provided, the method including: obtaining first audio and second audio, the first audio including target audio and having corresponding labeled audio, and the second audio including noise audio.
    Type: Grant
    Filed: February 15, 2022
    Date of Patent: February 20, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Jun Wang, Wingyip Lam, Dan Su, Dong Yu
  • Patent number: 11900917
    Abstract: A neural network training method is provided. The method includes obtaining an audio data stream, performing, for different audio data of each time frame in the audio data stream, feature extraction in each layer of a neural network, to obtain a depth feature outputted by a corresponding time frame, fusing, for a given label in labeling data, an inter-class confusion measurement index and an intra-class distance penalty value relative to the given label in a set loss function for the audio data stream through the depth feature, and updating a parameter in the neural network by using a loss function value obtained through fusion.
    Type: Grant
    Filed: April 14, 2021
    Date of Patent: February 13, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Dan Su, Jun Wang, Jie Chen, Dong Yu
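The fused loss described in the abstract above can be sketched minimally, assuming the inter-class confusion measure is a softmax cross-entropy and the intra-class penalty is a squared distance to the label's class center (both choices, and all shapes and values below, are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def fused_loss(features, logits, labels, centers, weight=0.5):
    """Toy fused loss: softmax cross-entropy (an inter-class confusion
    measure) plus a squared-distance penalty to each label's class
    center (an intra-class penalty), combined with a fusion weight."""
    # Log-softmax over the class logits, computed stably.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    # Intra-class penalty: distance of each depth feature to its center.
    intra = ((features - centers[labels]) ** 2).sum(axis=1).mean()
    return ce + weight * intra

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))     # depth features, one per time frame
logits = rng.normal(size=(4, 3))    # class scores from the final layer
labels = np.array([0, 1, 2, 1])     # labels from the labeling data
centers = np.zeros((3, 8))          # one running center per class
loss = fused_loss(feats, logits, labels, centers)
```

The single scalar that comes out is what a parameter update would backpropagate through in the actual training loop.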
  • Patent number: 11871176
    Abstract: A far-field pickup device including a device body and a microphone pickup unit is provided. The microphone pickup unit is configured to collect user speech and an echo of a first sound signal output by the device body, and transmit, to the device body, a signal obtained through digital conversion of the collected user speech and the echo. The device body includes a signal playback source, a synchronizing signal generator, a horn, a delay determining unit, and an echo cancellation unit configured to perform echo cancellation on the signal transmitted by the microphone pickup unit to obtain a collected human voice signal.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: January 9, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LTD
    Inventors: Ji Meng Zheng, Meng Yu, Dan Su
  • Patent number: 11848008
    Abstract: This application discloses an artificial intelligence-based (AI-based) wakeup word detection method performed by a computing device. The method includes: constructing, by using a preset pronunciation dictionary, at least one syllable combination sequence for self-defined wakeup word text inputted by a user; obtaining to-be-recognized speech data, and extracting speech features of speech frames in the speech data; inputting the speech features into a pre-constructed deep neural network (DNN) model, to output posterior probability vectors of the speech features corresponding to syllable identifiers; determining a target probability vector from the posterior probability vectors according to the syllable combination sequence; and calculating a confidence according to the target probability vector, and determining that the speech frames include the wakeup word text when the confidence is greater than or equal to a threshold.
    Type: Grant
    Filed: September 23, 2021
    Date of Patent: December 19, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Jie Chen, Dan Su, Mingjie Jin, Zhenling Zhu
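The confidence step described above can be sketched minimally, assuming the per-syllable scores are the best posteriors over the window combined by a geometric mean (that combination rule, the threshold of 0.5, and the toy posteriors are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def wakeup_confidence(posteriors, syllable_seq):
    """Toy confidence: for each syllable identifier in the wakeup word's
    syllable combination sequence, take its best posterior over the
    window, then combine per-syllable scores with a geometric mean."""
    best = posteriors[:, syllable_seq].max(axis=0)  # one score per syllable
    return float(np.exp(np.log(best + 1e-12).mean()))

# 5 speech frames x 4 syllable identifiers of per-frame DNN posteriors.
post = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.2, 0.6, 0.1, 0.1],
    [0.1, 0.2, 0.6, 0.1],
    [0.1, 0.1, 0.2, 0.6],
    [0.2, 0.2, 0.3, 0.3],
])
conf = wakeup_confidence(post, [0, 1, 2, 3])  # syllable combination sequence
detected = conf >= 0.5  # 0.5 is an assumed threshold value
```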
  • Patent number: 11803618
    Abstract: A method and apparatus are provided for analyzing sequence-to-sequence data, such as sequence-to-sequence speech data or sequence-to-sequence machine translation data, by minimum Bayes risk (MBR) training of a sequence-to-sequence model, with softmax smoothing applied to the N-best generation of the MBR training.
    Type: Grant
    Filed: November 17, 2022
    Date of Patent: October 31, 2023
    Assignee: TENCENT AMERICA LLC
    Inventors: Chao Weng, Jia Cui, Guangsen Wang, Jun Wang, Chengzhu Yu, Dan Su, Dong Yu
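Softmax smoothing of the kind referenced above is commonly a temperature-like factor on the logits; a minimal sketch, assuming a scalar smoothing factor beta applied before normalization (beta's values here are illustrative, not taken from the patent):

```python
import numpy as np

def smoothed_softmax(logits, beta=0.8):
    """Softmax with a smoothing factor beta: beta < 1 flattens the
    distribution, which encourages more diverse N-best hypotheses
    during MBR training."""
    z = beta * (logits - logits.max())  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
sharp = smoothed_softmax(logits, beta=1.0)   # plain softmax
smooth = smoothed_softmax(logits, beta=0.5)  # smoothed variant
# The smoothed distribution puts less mass on the top hypothesis,
# so lower-ranked hypotheses survive into the N-best list more often.
```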
  • Patent number: 11798531
    Abstract: A speech recognition method, a speech recognition apparatus, and a method and an apparatus for training a speech recognition model are provided. The speech recognition method includes: recognizing a target word speech from a hybrid speech, and obtaining, as an anchor extraction feature of a target speech, an anchor extraction feature of the target word speech based on the target word speech; obtaining a mask of the target speech according to the anchor extraction feature of the target speech; and recognizing the target speech according to the mask of the target speech.
    Type: Grant
    Filed: October 22, 2020
    Date of Patent: October 24, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Jun Wang, Dan Su, Dong Yu
  • Publication number: 20230287504
    Abstract: Provided are peripheral blood miRNA markers for diagnosis of non-small cell lung cancer, wherein the peripheral blood miRNA markers comprise hsa-miR-1291, hsa-miR-1-3p, hsa-miR-214-3p, hsa-miR-375 or hsa-let-7a-5p. Five specific diagnostic markers suitable for diagnosing non-small cell lung cancer in Asian and Caucasian populations are validated on a large number of samples, and have higher population specificity compared to other miRNA markers previously reported. These five miRNA diagnostic markers are proposed for the first time and are shown to be more reliable than other miRNA molecular markers.
    Type: Application
    Filed: November 11, 2019
    Publication date: September 14, 2023
    Inventors: Ruiyang Zou, Dan Su, He Cheng, Lisha Ying, Lihan Zhou
  • Publication number: 20230092440
    Abstract: A method and apparatus are provided for analyzing sequence-to-sequence data, such as sequence-to-sequence speech data or sequence-to-sequence machine translation data, by minimum Bayes risk (MBR) training of a sequence-to-sequence model, with softmax smoothing applied to the N-best generation of the MBR training.
    Type: Application
    Filed: November 17, 2022
    Publication date: March 23, 2023
    Applicant: TENCENT AMERICA LLC
    Inventors: Chao WENG, Jia CUI, Guangsen WANG, Jun WANG, Chengzhu YU, Dan SU, Dong YU
  • Publication number: 20230075893
    Abstract: A speech recognition method includes obtaining a speech recognition model including a plurality of feature aggregation nodes connected via a first type operation element, where a context-dependent operation of the first type operation element is based on past speech data and is independent of future speech data. The method further includes receiving streaming speech data, the speech data comprising audio data including speech, and processing the streaming speech data via the speech recognition model to obtain a speech recognition text corresponding to the streaming speech data, and outputting the speech recognition text.
    Type: Application
    Filed: November 15, 2022
    Publication date: March 9, 2023
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Dan SU, Liqiang HE
  • Publication number: 20230013740
    Abstract: This application discloses a multi-sound area-based speech detection method and related apparatus, and a storage medium, which is applied to the field of artificial intelligence. The method includes: obtaining sound area information corresponding to each sound area in N sound areas; using the sound area as a target detection sound area, and generating a control signal corresponding to the target detection sound area according to sound area information corresponding to the target detection sound area; processing a speech input signal corresponding to the target detection sound area by using the control signal corresponding to the target detection sound area, to obtain a speech output signal corresponding to the target detection sound area; and generating a speech detection result of the target detection sound area according to the speech output signal corresponding to the target detection sound area.
    Type: Application
    Filed: September 13, 2022
    Publication date: January 19, 2023
    Inventors: Jimeng ZHENG, Lianwu CHEN, Weiwei Li, Zhiyi Duan, Meng YU, Dan Su, Kaiyu Jiang
  • Patent number: 11551136
    Abstract: A method and apparatus are provided for analyzing sequence-to-sequence data, such as sequence-to-sequence speech data or sequence-to-sequence machine translation data, by minimum Bayes risk (MBR) training of a sequence-to-sequence model, with softmax smoothing applied to the N-best generation of the MBR training.
    Type: Grant
    Filed: November 14, 2018
    Date of Patent: January 10, 2023
    Assignee: TENCENT AMERICA LLC
    Inventors: Chao Weng, Jia Cui, Guangsen Wang, Jun Wang, Chengzhu Yu, Dan Su, Dong Yu
  • Patent number: 11450337
    Abstract: A multi-person speech separation method is provided for a terminal. The method includes extracting a hybrid speech feature from a hybrid speech signal requiring separation, N human voices being mixed in the hybrid speech signal, N being a positive integer greater than or equal to 2; extracting a masking coefficient of the hybrid speech feature by using a generative adversarial network (GAN) model, to obtain a masking matrix corresponding to the N human voices, wherein the GAN model comprises a generative network model and an adversarial network model; and performing a speech separation on the masking matrix corresponding to the N human voices and the hybrid speech signal by using the GAN model, and outputting N separated speech signals corresponding to the N human voices.
    Type: Grant
    Filed: September 17, 2020
    Date of Patent: September 20, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Lianwu Chen, Meng Yu, Yanmin Qian, Dan Su, Dong Yu
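The mask-application step in the abstract above can be sketched minimally, assuming time-frequency masks of the kind the generative network might output (the spectrogram shapes and the sum-to-one constraint are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def apply_masks(mixture_spec, masks):
    """Apply per-voice masks to a mixture spectrogram, yielding one
    separated spectrogram per human voice via broadcasting."""
    return masks * mixture_spec[np.newaxis, :, :]

rng = np.random.default_rng(1)
mixture = rng.random((100, 257))              # frames x frequency bins
raw = rng.random((2, 100, 257))               # N = 2 voices
masks = raw / raw.sum(axis=0, keepdims=True)  # masks sum to 1 per bin
separated = apply_masks(mixture, masks)
# With sum-to-one masks, the separated signals add back to the mixture.
```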
  • Publication number: 20220248968
    Abstract: A wearable device includes a substrate, at least one light emitting component mounted on one side of the substrate and configured to emit light on at least one optical wavelength band, and at least one lens disposed on an out-light side of the at least one light emitting component, where each lens corresponds to at least one light emitting component, and each lens is capable of reducing a divergence angle of light emitted by the at least one light emitting component corresponding to the lens.
    Type: Application
    Filed: June 20, 2020
    Publication date: August 11, 2022
    Inventors: Yi Xi, Shiyou Sun, Dan Su
  • Publication number: 20220180882
    Abstract: A method of training an audio separation network is provided. The method includes obtaining a first separation sample set, the first separation sample set including at least two types of audio with dummy labels, obtaining a first sample set by performing interpolation on the first separation sample set based on perturbation data, obtaining a second separation sample set by separating the first sample set using an unsupervised network, determining losses of second separation samples in the second separation sample set, and adjusting network parameters of the unsupervised network based on the losses of the second separation samples, such that a first loss of a first separation result outputted by an adjusted unsupervised network meets a convergence condition.
    Type: Application
    Filed: February 28, 2022
    Publication date: June 9, 2022
    Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Jun WANG, Wing Yip Lam, Dan Su, Dong Yu
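The interpolation step described above can be sketched minimally, assuming a convex, mixup-style blend of each separation sample with perturbation data (that functional form and the mixing coefficient are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def interpolate_samples(samples, perturbation, lam=0.7):
    """Toy interpolation step: blend each separation sample with
    perturbation data using a mixing coefficient lam."""
    return lam * samples + (1.0 - lam) * perturbation

rng = np.random.default_rng(2)
samples = rng.normal(size=(8, 16000))   # separation samples (dummy labels)
perturb = rng.normal(size=(8, 16000))   # perturbation data
mixed = interpolate_samples(samples, perturb, lam=0.7)
# `mixed` is the first sample set that the unsupervised network
# would then separate into the second separation sample set.
```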
  • Publication number: 20220172708
    Abstract: A speech separation model training method and apparatus, a computer-readable storage medium, and a computer device are provided, the method including: obtaining first audio and second audio, the first audio including target audio and having corresponding labeled audio, and the second audio including noise audio.
    Type: Application
    Filed: February 15, 2022
    Publication date: June 2, 2022
    Inventors: Jun WANG, Wingyip LAM, Dan SU, Dong YU
  • Patent number: 11341957
    Abstract: A method for detecting a keyword, applied to a terminal, includes: extracting a speech eigenvector of a speech signal; obtaining, according to the speech eigenvector, a posterior probability of each target character being a key character in any keyword in an acquisition time period of the speech signal; obtaining confidences of at least two target character combinations according to the posterior probability of each target character; and determining that the speech signal includes the keyword upon determining that all the confidences of the at least two target character combinations meet a preset condition. The target character is a character in the speech signal whose pronunciation matches a pronunciation of the key character. Each target character combination includes at least one target character, and a confidence of a target character combination represents a probability of the target character combination being the keyword or a part of the keyword.
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: May 24, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yi Gao, Meng Yu, Dan Su, Jie Chen, Min Luo
  • Publication number: 20220115005
    Abstract: Methods and apparatuses are provided for performing sequence to sequence (Seq2Seq) speech recognition training performed by at least one processor. The method includes acquiring a training set comprising a plurality of pairs of input data and target data corresponding to the input data, encoding the input data into a sequence of hidden states, performing a connectionist temporal classification (CTC) model training based on the sequence of hidden states, performing an attention model training based on the sequence of hidden states, and decoding the sequence of hidden states to generate target labels by independently performing the CTC model training and the attention model training.
    Type: Application
    Filed: December 22, 2021
    Publication date: April 14, 2022
    Applicant: TENCENT AMERICA LLC
    Inventors: Jia CUI, Chao WENG, Guangsen WANG, Jun WANG, Chengzhu YU, Dan SU, Dong YU
  • Patent number: 11257481
    Abstract: Methods and apparatuses are provided for performing sequence to sequence (Seq2Seq) speech recognition training performed by at least one processor. The method includes acquiring a training set comprising a plurality of pairs of input data and target data corresponding to the input data, encoding the input data into a sequence of hidden states, performing a connectionist temporal classification (CTC) model training based on the sequence of hidden states, performing an attention model training based on the sequence of hidden states, and decoding the sequence of hidden states to generate target labels by independently performing the CTC model training and the attention model training.
    Type: Grant
    Filed: October 24, 2018
    Date of Patent: February 22, 2022
    Assignee: TENCENT AMERICA LLC
    Inventors: Jia Cui, Chao Weng, Guangsen Wang, Jun Wang, Chengzhu Yu, Dan Su, Dong Yu
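The independently performed CTC and attention trainings described above are typically combined as a weighted multi-task objective over the shared hidden states; a minimal sketch, assuming an interpolation weight and a cross-entropy attention loss (both are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def attention_ce(att_log_probs, targets):
    """Attention-branch loss: mean cross-entropy of the target labels
    under the decoder's per-step log-probabilities."""
    return -att_log_probs[np.arange(len(targets)), targets].mean()

def joint_loss(ctc_loss, att_log_probs, targets, lam=0.3):
    """Weighted combination of the independently computed CTC loss and
    attention cross-entropy (lam = 0.3 is an assumed example weight)."""
    return lam * ctc_loss + (1.0 - lam) * attention_ce(att_log_probs, targets)

# Toy decoder log-probabilities for two target steps over a 3-symbol vocab;
# the CTC branch's loss is represented by a placeholder scalar.
att_log_probs = np.log(np.array([[0.7, 0.2, 0.1],
                                 [0.1, 0.8, 0.1]]))
loss = joint_loss(ctc_loss=2.0, att_log_probs=att_log_probs, targets=[0, 1])
```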