Patents by Inventor Jinyu Li

Jinyu Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11429860
    Abstract: Systems and methods are provided for generating a DNN classifier by “learning” a “student” DNN model from a larger, more accurate “teacher” DNN model. The student DNN may be trained from unlabeled training data because its supervised signal is obtained by passing the unlabeled training data through the teacher DNN. In one embodiment, an iterative process is applied to train the student DNN by minimizing the divergence of the output distributions from the teacher and student DNN models. For each iteration until convergence, the difference in the output distributions is used to update the student DNN model, and output distributions are determined again, using the unlabeled training data. The resulting trained student model may be suitable for providing accurate signal processing applications on devices having limited computational or storage resources, such as mobile or wearable devices. In an embodiment, the teacher DNN model comprises an ensemble of DNN models.
    Type: Grant
    Filed: September 14, 2015
    Date of Patent: August 30, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jinyu Li, Rui Zhao, Jui-Ting Huang, Yifan Gong
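A minimal sketch of the teacher-student idea described in the abstract above, assuming PyTorch and invented model sizes (40-dimensional frames, 9000 output classes); it is not the patented implementation, only an illustration of training a student on unlabeled frames against the teacher's output distribution.

```python
import torch
import torch.nn.functional as F

# Hypothetical large "teacher" and small "student" models.
teacher = torch.nn.Sequential(
    torch.nn.Linear(40, 2048), torch.nn.ReLU(), torch.nn.Linear(2048, 9000))
student = torch.nn.Sequential(
    torch.nn.Linear(40, 512), torch.nn.ReLU(), torch.nn.Linear(512, 9000))
optimizer = torch.optim.SGD(student.parameters(), lr=0.01)

def distill_step(unlabeled_frames: torch.Tensor) -> float:
    """One iteration: the teacher's posteriors act as the supervision signal."""
    with torch.no_grad():
        teacher_posteriors = F.softmax(teacher(unlabeled_frames), dim=-1)
    student_log_posteriors = F.log_softmax(student(unlabeled_frames), dim=-1)
    # Minimize the divergence between teacher and student output distributions.
    loss = F.kl_div(student_log_posteriors, teacher_posteriors,
                    reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: one update on a batch of 32 random "frames".
print(distill_step(torch.randn(32, 40)))
```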
  • Patent number: 11407173
    Abstract: A light valve panel and a manufacturing method thereof, a three-dimensional printing system, and a three-dimensional printing method are disclosed. The light valve panel includes a first light valve array substrate and at least one second light valve array substrate, the first light valve array substrate and the at least one second light valve array substrate are arranged in a stack; the first light valve array substrate includes a plurality of first pixel units arranged in an array, and the second light valve array substrate includes a plurality of second pixel units arranged in an array; and an orthographic projection of at least one of the second pixel units on the first light valve array substrate partially overlaps with at least one of the first pixel units.
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: August 9, 2022
    Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Jinyu Li, Yanchen Li, Haobo Fang, Yu Zhao, Dawei Feng, Dong Wang, Wang Guo, Hailong Wang
  • Patent number: 11376596
    Abstract: A microfluidic chip configured to move a microdroplet along a predetermined path, includes a plurality of probe electrode groups spaced apart along the predetermined path. Each of the plurality of probe electrode groups includes a first probe electrode and a second probe electrode spaced apart from each other. The first probe electrode and the second probe electrode among a plurality of first probe electrodes and a plurality of second probe electrodes are configured to form an electrical loop with the microdroplet to thereby facilitate determining a position of the microdroplet.
    Type: Grant
    Filed: December 25, 2018
    Date of Patent: July 5, 2022
    Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Mingyang Lv, Yue Li, Jinyu Li, Yanchen Li, Dawei Feng, Dong Wang, Yu Zhao, Shaojun Hou, Wang Guo
  • Publication number: 20220189461
    Abstract: A computer system is provided that includes a processor configured to store a set of audio training data that includes a plurality of audio segments and metadata indicating a word or phrase associated with each audio segment. For a target training statement of a set of structured text data, the processor is configured to generate a concatenated audio signal that matches a word content of a target training statement by comparing the words or phrases of a plurality of text segments of the target training statement to respective words or phrases of audio segments of the stored set of audio training data, selecting a plurality of audio segments from the set of audio training data based on a match in the words or phrases between the plurality of text segments of the target training statement and the selected plurality of audio segments, and concatenating the selected plurality of audio segments.
    Type: Application
    Filed: December 16, 2020
    Publication date: June 16, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Rui ZHAO, Jinyu LI, Yifan GONG
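A hedged, minimal sketch of the concatenation step described above: given stored audio segments keyed by word, stitch together a signal whose word content matches a target training statement. The word keys, sample counts, and zero-valued waveforms are illustrative placeholders, not data from the application.

```python
import numpy as np

# Hypothetical set of audio training data: word -> waveform samples.
audio_segments = {
    "turn": np.zeros(8000, dtype=np.float32),
    "volume": np.zeros(12000, dtype=np.float32),
    "up": np.zeros(6000, dtype=np.float32),
}

def concatenate_for_statement(statement: str) -> np.ndarray:
    """Select a matching stored segment for each word and concatenate them."""
    selected = []
    for word in statement.lower().split():
        if word not in audio_segments:
            raise KeyError(f"no stored segment for word: {word}")
        selected.append(audio_segments[word])
    return np.concatenate(selected)

# The concatenated signal matches the word content of the target statement.
signal = concatenate_for_statement("turn volume up")
print(signal.shape)
```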
  • Publication number: 20220165290
    Abstract: To generate substantially condition-invariant and speaker-discriminative features, embodiments are associated with a feature extractor capable of extracting features from speech frames based on first parameters, a speaker classifier capable of identifying a speaker based on the features and on second parameters, and a condition classifier capable of identifying a noise condition based on the features and on third parameters. The first parameters of the feature extractor and the second parameters of the speaker classifier are trained to minimize a speaker classification loss, the first parameters of the feature extractor are further trained to maximize a condition classification loss, and the third parameters of the condition classifier are trained to minimize the condition classification loss.
    Type: Application
    Filed: November 30, 2021
    Publication date: May 26, 2022
    Inventors: Zhong MENG, Yong ZHAO, Jinyu LI, Yifan GONG
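A minimal sketch, assuming PyTorch, of the adversarial objective described in this entry: the feature extractor and speaker classifier minimize the speaker classification loss, the extractor additionally maximizes the condition classification loss (implemented here with gradient reversal), and the condition classifier minimizes it. Dimensions and class counts are invented for illustration.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

feature_extractor = torch.nn.Linear(40, 128)     # first parameters
speaker_classifier = torch.nn.Linear(128, 100)   # second parameters
condition_classifier = torch.nn.Linear(128, 4)   # third parameters
opt = torch.optim.Adam(
    list(feature_extractor.parameters())
    + list(speaker_classifier.parameters())
    + list(condition_classifier.parameters()), lr=1e-3)

def train_step(frames, speaker_ids, condition_ids):
    features = feature_extractor(frames)
    speaker_loss = F.cross_entropy(speaker_classifier(features), speaker_ids)
    # The condition classifier trains normally; the reversed gradient pushes the
    # extractor toward condition-invariant (speaker-discriminative) features.
    condition_loss = F.cross_entropy(
        condition_classifier(GradReverse.apply(features)), condition_ids)
    loss = speaker_loss + condition_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return speaker_loss.item(), condition_loss.item()

print(train_step(torch.randn(16, 40),
                 torch.randint(0, 100, (16,)),
                 torch.randint(0, 4, (16,))))
```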
  • Publication number: 20220157324
    Abstract: Embodiments may include determination, for each of a plurality of speech frames associated with an acoustic feature, of a phonetic feature based on the associated acoustic feature, generation of one or more two-dimensional feature maps based on the plurality of phonetic features, input of the one or more two-dimensional feature maps to a trained neural network to generate a plurality of speaker embeddings, and aggregation of the plurality of speaker embeddings into a speaker embedding based on respective weights determined for each of the plurality of speaker embeddings, wherein the speaker embedding is associated with an identity of the speaker.
    Type: Application
    Filed: February 7, 2022
    Publication date: May 19, 2022
    Inventors: Yong ZHAO, Tianyan ZHOU, Jinyu LI, Yifan GONG, Jian WU, Zhuo CHEN
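A small sketch of the weighted aggregation step named in the abstract above: per-segment speaker embeddings are combined into a single embedding using learned weights (shown here as a simple softmax attention). The embedding dimension and the linear scoring layer are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

embedding_dim = 256
attention = torch.nn.Linear(embedding_dim, 1)  # scores each speaker embedding

def aggregate(speaker_embeddings: torch.Tensor) -> torch.Tensor:
    """speaker_embeddings: (num_segments, embedding_dim) -> (embedding_dim,)."""
    scores = attention(speaker_embeddings).squeeze(-1)   # (num_segments,)
    weights = F.softmax(scores, dim=0)                   # respective weights
    return (weights.unsqueeze(-1) * speaker_embeddings).sum(dim=0)

utterance_embedding = aggregate(torch.randn(10, embedding_dim))
print(utterance_embedding.shape)  # torch.Size([256])
```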
  • Publication number: 20220139380
    Abstract: A computer device is provided that includes one or more processors configured to receive an end-to-end (E2E) model that has been trained for automatic speech recognition with training data from a source-domain, and receive an external language model that has been trained with training data from a target-domain. The one or more processors are configured to perform an inference of the probability of an output token sequence given a sequence of input speech features. Performing the inference includes computing an E2E model score, computing an external language model score, and computing an estimated internal language model score for the E2E model. The estimated internal language model score is computed by removing a contribution of an intrinsic acoustic model. The processor is further configured to compute an integrated score based at least on the E2E model score, the external language model score, and the estimated internal language model score.
    Type: Application
    Filed: January 21, 2021
    Publication date: May 5, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Zhong MENG, Sarangarajan PARTHASARATHY, Xie SUN, Yashesh GAUR, Naoyuki KANDA, Liang LU, Xie CHEN, Rui ZHAO, Jinyu LI, Yifan GONG
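A minimal sketch of the score integration described above: a log-linear combination in which the E2E score is added to a weighted external language model score while a weighted estimate of the internal language model score is subtracted. The weight values lambda_ext and lambda_ilm are hypothetical tuning parameters, not values from the application.

```python
def integrated_score(e2e_log_prob: float,
                     external_lm_log_prob: float,
                     internal_lm_log_prob: float,
                     lambda_ext: float = 0.6,
                     lambda_ilm: float = 0.4) -> float:
    """Log-linear fusion used to rescore one candidate output token sequence."""
    return (e2e_log_prob
            + lambda_ext * external_lm_log_prob
            - lambda_ilm * internal_lm_log_prob)

# Example: rescoring a single hypothesis with illustrative log-probabilities.
print(integrated_score(-12.3, -20.1, -18.7))
```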
  • Patent number: 11318465
    Abstract: An electrowetting panel includes a base substrate; an electrode array layer, including a plurality of electrodes arranged in an array; an insulating hydrophobic layer; and a microfluidic channel layer located on the base substrate. Each electrode of the plurality of electrodes is connected to a driving circuit, and a droplet can move along a first direction by applying an electric voltage on each electrode. The insulating hydrophobic layer is located on the electrode array layer, and the microfluidic channel layer is located on the insulating hydrophobic layer. The electrodes include a plurality of driving electrodes and a plurality of detecting electrodes. Along the first direction, a number N of the driving electrodes is located between every two adjacent detecting electrodes, where N is a natural number. The electrowetting panel also includes a detecting chip electrically connected to the detecting electrodes.
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: May 3, 2022
    Assignee: Shanghai Tianma Micro-Electronics Co., Ltd.
    Inventors: Baiquan Lin, Kerui Xi, Junting Ouyang, Jinyu Li, Xiaohe Li
  • Publication number: 20220130376
    Abstract: Embodiments are associated with a speaker-independent attention-based encoder-decoder model to classify output tokens based on input speech frames, the speaker-independent attention-based encoder-decoder model associated with a first output distribution, and a speaker-dependent attention-based encoder-decoder model to classify output tokens based on input speech frames, the speaker-dependent attention-based encoder-decoder model associated with a second output distribution. The second attention-based encoder-decoder model is trained to classify output tokens based on input speech frames of a target speaker and simultaneously trained to maintain a similarity between the first output distribution and the second output distribution.
    Type: Application
    Filed: January 5, 2022
    Publication date: April 28, 2022
    Inventors: Zhong MENG, Yashesh GAUR, Jinyu LI, Yifan GONG
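A sketch, assuming PyTorch, of the joint objective described in this entry: a speaker-dependent (SD) model is adapted on a target speaker's frames while a KL term keeps its output distribution close to the frozen speaker-independent (SI) model. Simple linear layers stand in for the attention-based encoder-decoder models, and the weight rho is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

si_model = torch.nn.Linear(80, 5000)             # frozen speaker-independent model
sd_model = torch.nn.Linear(80, 5000)             # speaker-dependent copy to adapt
sd_model.load_state_dict(si_model.state_dict())  # start from the SI weights
opt = torch.optim.SGD(sd_model.parameters(), lr=1e-3)
rho = 0.5                                        # similarity weight (illustrative)

def adapt_step(target_speaker_frames, token_labels) -> float:
    with torch.no_grad():
        si_dist = F.softmax(si_model(target_speaker_frames), dim=-1)
    sd_logits = sd_model(target_speaker_frames)
    token_loss = F.cross_entropy(sd_logits, token_labels)
    # Keep the SD output distribution similar to the SI output distribution.
    similarity_loss = F.kl_div(F.log_softmax(sd_logits, dim=-1), si_dist,
                               reduction="batchmean")
    loss = (1 - rho) * token_loss + rho * similarity_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

print(adapt_step(torch.randn(16, 80), torch.randint(0, 5000, (16,))))
```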
  • Patent number: 11308853
    Abstract: The present disclosure provides a shift register and a driving method thereof, a gate driving circuit, and a display apparatus. The shift register of the present disclosure includes: a forward scanning input sub-circuit for pre-charging a potential of a pull-up node by an operation level signal under control of a forward input signal and a forward scanning signal upon scanning forwards; a backward scanning input sub-circuit for pre-charging the potential of the pull-up node by an operation level signal under control of a backward input signal and a backward scanning signal upon scanning backwards; an output sub-circuit for outputting a clock signal through a signal output terminal under control of the potential of the pull-up node; wherein the pull-up node is a connection node of the forward scanning input sub-circuit, the backward scanning input sub-circuit and the output sub-circuit.
    Type: Grant
    Filed: February 8, 2018
    Date of Patent: April 19, 2022
    Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Wang Guo, Yanchen Li, Yue Li, Jinyu Li, Dawei Feng, Yu Zhao, Shaojun Hou, Dong Wang, Mingyang Lv
  • Patent number: 11276410
    Abstract: Embodiments may include reception of a plurality of speech frames, determination of a multi-dimensional acoustic feature associated with each of the plurality of speech frames, determination of a plurality of multi-dimensional phonetic features, each of the plurality of multi-dimensional phonetic features determined based on a respective one of the plurality of speech frames, generation of a plurality of two-dimensional feature maps based on the phonetic features, input of the feature maps and the plurality of acoustic features to a convolutional neural network, the convolutional neural network to generate a plurality of speaker embeddings based on the plurality of feature maps and the plurality of acoustic features, aggregation of the plurality of speaker embeddings into a first speaker embedding based on respective weights determined for each of the plurality of speaker embeddings, and determination of a speaker associated with the plurality of speech frames based on the first speaker embedding.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: March 15, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Yong Zhao, Tianyan Zhou, Jinyu Li, Yifan Gong, Jian Wu, Zhuo Chen
  • Publication number: 20220058442
    Abstract: Representative embodiments disclose machine learning classifiers used in scenarios such as speech recognition, image captioning, machine translation, or other sequence-to-sequence embodiments. The machine learning classifiers have a plurality of time layers, each layer having a time processing block and a depth processing block. The time processing block is a recurrent neural network such as a Long Short Term Memory (LSTM) network. The depth processing blocks can be an LSTM network, a gated Deep Neural Network (DNN) or a maxout DNN. The depth processing blocks account for the hidden states of each time layer and use summarized layer information for final input signal feature classification. An attention layer can also be used between the top depth processing block and the output layer.
    Type: Application
    Filed: November 3, 2021
    Publication date: February 24, 2022
    Inventors: Jinyu Li, Liang Lu, Changliang Liu, Yifan Gong
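A rough sketch, assuming PyTorch, of the layered structure described above: each layer pairs a recurrent time processing block (an LSTM) with a depth processing block that further transforms that layer's hidden states, and classification is performed from the top depth block's output. The layer sizes and the simple feed-forward depth block are illustrative stand-ins, not the patented architecture.

```python
import torch

class TimeDepthLayer(torch.nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        self.time_block = torch.nn.LSTM(input_dim, hidden_dim, batch_first=True)
        # Stand-in for an LSTM / gated-DNN / maxout-DNN depth processing block.
        self.depth_block = torch.nn.Sequential(
            torch.nn.Linear(hidden_dim, hidden_dim), torch.nn.ReLU())

    def forward(self, x):
        time_out, _ = self.time_block(x)        # (batch, frames, hidden_dim)
        depth_out = self.depth_block(time_out)  # per-frame depth processing
        return time_out, depth_out

layers = torch.nn.ModuleList([TimeDepthLayer(40, 128), TimeDepthLayer(128, 128)])
classifier = torch.nn.Linear(128, 9000)

frames = torch.randn(2, 50, 40)                 # batch of 2 utterances, 50 frames
x = frames
for layer in layers:
    x, depth_summary = layer(x)                 # time output feeds the next layer
logits = classifier(depth_summary)              # classify from the top depth block
print(logits.shape)                             # torch.Size([2, 50, 9000])
```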
  • Patent number: 11244673
    Abstract: Streaming of unidirectional machine learning models is facilitated by the use of embedding vectors. Processing blocks in the models apply embedding vectors as input. The embedding vectors utilize context of future data (e.g., data that is temporally offset into the future within a data stream) to improve the accuracy of the outputs generated by the processing blocks. The embedding vectors cause a temporal shift between the outputs of the processing blocks and the inputs to which the outputs correspond. This temporal shift enables the processing blocks to apply the embedding vector inputs from processing blocks that are associated with future data.
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: February 8, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Jinyu Li, Amit Kumar Agarwal, Yifan Gong, Harini Kesavamoorthy, Changliang Liu, Liang Lu
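A simplified sketch of the temporal-shift idea above: at frame t, a unidirectional model additionally consumes an embedding vector summarizing a few future frames, so its output for frame t is emitted with a small lookahead. The mean-pooled summary and the lookahead of 4 frames are assumptions made for illustration.

```python
import numpy as np

def future_context_embeddings(frames: np.ndarray, lookahead: int = 4) -> np.ndarray:
    """frames: (T, feature_dim). Returns (T, feature_dim) embeddings where row t
    summarizes frames t+1 .. t+lookahead (zero-padded at the end of the stream)."""
    T, dim = frames.shape
    padded = np.vstack([frames, np.zeros((lookahead, dim), dtype=frames.dtype)])
    return np.stack([padded[t + 1:t + 1 + lookahead].mean(axis=0)
                     for t in range(T)])

frames = np.random.randn(100, 40).astype(np.float32)
embeddings = future_context_embeddings(frames)
# A streaming model would consume np.hstack([frames, embeddings]) frame by frame,
# producing its output for frame t only once frame t+4 has arrived.
print(embeddings.shape)  # (100, 40)
```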
  • Patent number: 11243648
    Abstract: The present disclosure provides a touch panel, an array substrate and a display device. The touch panel includes a touch control electrode unit including a first electrode and a second electrode which are insulated from each other, and the second electrode surrounds the first electrode. In the touch panel in which the second electrode surrounds the first electrode in the touch control electrode unit, since the first electrode and the second electrode are provided independently, touch control driving and sensing are performed on the first electrode and the second electrode respectively when the touch panel is being touched.
    Type: Grant
    Filed: February 12, 2018
    Date of Patent: February 8, 2022
    Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Xingyou Luo, Yue Li, Xi Chen, Yanchen Li, Jinyu Li, Dawei Feng, Yu Zhao, Shaojun Hou, Dong Wang, Mingyang Lv, Wang Guo
  • Patent number: 11237381
    Abstract: An electrowetting display panel includes a plurality of subpixels. Each of the plurality of subpixels has a subpixel area and an inter-subpixel area. The electrowetting display panel includes a first substrate, including a first insulating layer, a first electrode layer on the first insulating layer, and a first lyophobic layer on a side of the first electrode layer away from the first insulating layer; a second substrate facing the first substrate, including a second electrode layer, and a second lyophobic layer on the second electrode layer; and a plurality of sealing elements between the first substrate and the second substrate to define a plurality of fluid channels, each of the plurality of sealing elements being in the inter-subpixel area. The electrowetting display panel includes a first fluid reservoir and a respective one of the plurality of fluid channels between the first lyophobic layer and the second lyophobic layer.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: February 1, 2022
    Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE Technology Group Co., Ltd.
    Inventors: Mingyang Lv, Yue Li, Yanchen Li, Jinyu Li, Yu Zhao, Dawei Feng, Wang Guo
  • Publication number: 20220028399
    Abstract: To generate substantially domain-invariant and speaker-discriminative features, embodiments may operate to extract features from input data based on a first set of parameters, generate outputs based on the extracted features and on a second set of parameters, and identify words represented by the input data based on the outputs, wherein the first set of parameters and the second set of parameters have been trained to minimize a network loss associated with the second set of parameters, wherein the first set of parameters has been trained to maximize the domain classification loss of a network comprising 1) an attention network to determine, based on a third set of parameters, relative importances of features extracted based on the first parameters to domain classification and 2) a domain classifier to classify a domain based on the extracted features, the relative importances, and a fourth set of parameters, and wherein the third set of parameters and the fourth set of parameters have been trained to minimize the domain classification loss.
    Type: Application
    Filed: October 5, 2021
    Publication date: January 27, 2022
    Inventors: Zhong MENG, Jinyu LI, Yifan GONG
  • Patent number: 11232782
    Abstract: Embodiments are associated with a speaker-independent attention-based encoder-decoder model to classify output tokens based on input speech frames, the speaker-independent attention-based encoder-decoder model associated with a first output distribution, a speaker-dependent attention-based encoder-decoder model to classify output tokens based on input speech frames, the speaker-dependent attention-based encoder-decoder model associated with a second output distribution, training of the second attention-based encoder-decoder model to classify output tokens based on input speech frames of a target speaker and simultaneously training the speaker-dependent attention-based encoder-decoder model to maintain a similarity between the first output distribution and the second output distribution, and performing automatic speech recognition on speech frames of the target speaker using the trained speaker-dependent attention-based encoder-decoder model.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: January 25, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Zhong Meng, Yashesh Gaur, Jinyu Li, Yifan Gong
  • Patent number: 11217265
    Abstract: To generate substantially condition-invariant and speaker-discriminative features, embodiments are associated with a feature extractor capable of extracting features from speech frames based on first parameters, a speaker classifier capable of identifying a speaker based on the features and on second parameters, and a condition classifier capable of identifying a noise condition based on the features and on third parameters. The first parameters of the feature extractor and the second parameters of the speaker classifier are trained to minimize a speaker classification loss, the first parameters of the feature extractor are further trained to maximize a condition classification loss, and the third parameters of the condition classifier are trained to minimize the condition classification loss.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: January 4, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Zhong Meng, Yong Zhao, Jinyu Li, Yifan Gong
  • Publication number: 20210408754
    Abstract: A terahertz radiator is based on coherent Smith-Purcell radiation amplified by stimulation. The terahertz radiator includes an electron emission source configured to emit electron beams and a pumping source configured to emit pumping signals. The pumping signal interacts with a primary grating structure to obtain preliminarily bunched electrons. The preliminarily bunched electrons interact with the primary grating structure to generate coherent Smith-Purcell radiation. The coherent Smith-Purcell radiation and the pumping signals vertically resonate in a primary resonant cavity structure, so that the electron bunching density is increased, and in turn, the coherent Smith-Purcell radiation is enhanced. A positive feedback process is formed by free electrons and the coherent Smith-Purcell radiation, and the coherent Smith-Purcell radiation amplified by stimulation and periodic bunched electron bunches are obtained.
    Type: Application
    Filed: June 23, 2021
    Publication date: December 30, 2021
    Inventors: Fang Liu, Yuechai Lin, Jinyu Li, Yidong Huang, Kaiyu Cui, Xue Feng, Wei Zhang
  • Patent number: 11210565
    Abstract: Representative embodiments disclose machine learning classifiers used in scenarios such as speech recognition, image captioning, machine translation, or other sequence-to-sequence embodiments. The machine learning classifiers have a plurality of time layers, each layer having a time processing block and a depth processing block. The time processing block is a recurrent neural network such as a Long Short Term Memory (LSTM) network. The depth processing blocks can be an LSTM network, a gated Deep Neural Network (DNN) or a maxout DNN. The depth processing blocks account for the hidden states of each time layer and use summarized layer information for final input signal feature classification. An attention layer can also be used between the top depth processing block and the output layer.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: December 28, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jinyu Li, Liang Lu, Changliang Liu, Yifan Gong