Patents by Inventor Jinyu Li
Jinyu Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20200335122
Abstract: To generate substantially condition-invariant and speaker-discriminative features, embodiments are associated with a feature extractor capable of extracting features from speech frames based on first parameters, a speaker classifier capable of identifying a speaker based on the features and on second parameters, and a condition classifier capable of identifying a noise condition based on the features and on third parameters. The first parameters of the feature extractor and the second parameters of the speaker classifier are trained to minimize a speaker classification loss, the first parameters of the feature extractor are further trained to maximize a condition classification loss, and the third parameters of the condition classifier are trained to minimize the condition classification loss.
Type: Application
Filed: June 7, 2019
Publication date: October 22, 2020
Inventors: Zhong MENG, Yong ZHAO, Jinyu LI, Yifan GONG
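The three-way objective described in this abstract can be sketched numerically. The toy logits, the cross-entropy losses, and the gradient-reversal weight `lam` below are all illustrative assumptions; the patent specifies only that the feature extractor minimizes the speaker loss while maximizing the condition loss.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(logits, label):
    # Negative log-probability of the true class.
    return -np.log(softmax(logits)[label])

# Toy logits a speaker classifier and a condition (noise) classifier might
# emit for one extracted feature vector (illustrative values only).
speaker_logits = np.array([2.0, 0.1, -1.0])   # 3 speakers, true speaker = 0
condition_logits = np.array([0.3, 0.2])       # 2 noise conditions, true = 1

L_speaker = cross_entropy(speaker_logits, 0)
L_condition = cross_entropy(condition_logits, 1)

# Feature extractor objective: minimize the speaker loss while MAXIMIZING
# the condition loss -- in practice realized with a gradient-reversal
# layer whose weight lam is a tunable hyperparameter (assumed value here).
lam = 0.5
L_feature_extractor = L_speaker - lam * L_condition
```

Training alternates (or runs jointly): the condition classifier descends on `L_condition`, while the extractor descends on `L_feature_extractor`, pushing the features toward condition invariance.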
-
Publication number: 20200306750
Abstract: An electrowetting panel includes a base substrate; an electrode array layer, including a plurality of electrodes arranged into an array; an insulating hydrophobic layer; and a microfluidic channel layer located on the base substrate. Each electrode of the plurality of electrodes is connected to a driving circuit, and a droplet can move along a first direction by applying an electric voltage on each electrode. The insulating hydrophobic layer is located on the electrode array layer, and the microfluidic channel layer is located on the insulating hydrophobic layer. The electrodes include a plurality of driving electrodes and a plurality of detecting electrodes. Along the first direction, N driving electrodes are located between every two adjacent detecting electrodes, where N is a natural number. The electrowetting panel also includes a detecting chip electrically connected to the detecting electrodes.
Type: Application
Filed: June 12, 2019
Publication date: October 1, 2020
Inventors: Baiquan LIN, Kerui XI, Junting OUYANG, Jinyu LI, Xiaohe LI
-
Patent number: 10706806
Abstract: A pixel driving circuit includes a pixel unit including a blue sub-pixel connected to a data line to receive a data voltage, and a limit circuit connected between the data line and a reference voltage line configured to transfer a fixed DC voltage, the limit circuit being configured to limit the received data voltage when the received data voltage exceeds a voltage threshold.
Type: Grant
Filed: April 12, 2018
Date of Patent: July 7, 2020
Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
Inventors: Yu Zhao, Yue Li, Yanchen Li, Jinyu Li, Dong Wang, Shaojun Hou, Mingyang Lv, Dawei Feng, Wang Guo
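The limit circuit's externally visible behavior can be modeled in a few lines. Clamping to the threshold itself is an assumption made for illustration; the abstract says only that the data voltage is limited via a reference voltage line carrying a fixed DC voltage.

```python
def limit_data_voltage(v_data: float, v_threshold: float) -> float:
    """Behavioral model of the limit circuit (not the circuit itself):
    pass the data voltage through unchanged unless it exceeds the
    threshold, in which case clamp it. Clamping to the threshold is an
    assumed simplification."""
    return min(v_data, v_threshold)
```

Such a limiter protects the blue sub-pixel from over-drive without affecting data voltages inside the normal operating range.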
-
Publication number: 20200175335
Abstract: Representative embodiments disclose machine learning classifiers used in scenarios such as speech recognition, image captioning, machine translation, or other sequence-to-sequence applications. The machine learning classifiers have a plurality of time layers, each layer having a time processing block and a depth processing block. The time processing block is a recurrent neural network such as a Long Short Term Memory (LSTM) network. The depth processing blocks can be an LSTM network, a gated Deep Neural Network (DNN) or a maxout DNN. The depth processing blocks account for the hidden states of each time layer and use summarized layer information for final input signal feature classification. An attention layer can also be used between the top depth processing block and the output layer.
Type: Application
Filed: November 30, 2018
Publication date: June 4, 2020
Inventors: Jinyu Li, Liang Lu, Changliang Liu, Yifan Gong
-
Publication number: 20200171491
Abstract: A digital microfluidic chip and a digital microfluidic system. The digital microfluidic chip comprises: an upper substrate and a lower substrate arranged opposite to each other; multiple driving circuits and multiple addressing circuits disposed between the lower substrate and the upper substrate; and a control circuit, electrically connected to the driving circuits and the addressing circuits. The control circuit is configured to apply, in a driving stage, a driving voltage to each driving circuit so that a droplet is controlled to move inside a droplet accommodation space along a set path; to measure, in a detection stage, after a bias voltage is applied to each addressing circuit, a charge loss amount of each addressing circuit; and to determine the position of the droplet according to the charge loss amount. The charge loss amount of each addressing circuit is related to the intensity of received external light.
Type: Application
Filed: July 26, 2019
Publication date: June 4, 2020
Inventors: Mingyang LV, Yue LI, Yanchen LI, Jinyu LI, Dawei FENG, Yu ZHAO, Dong WANG, Wang GUO, Hailong WANG, Yue GENG, Peizhi CAI, Fengchun PANG, Le GU, Chuncheng CHE, Haochen CUI, Yingying ZHAO, Nan ZHAO, Yuelei XIAO, Huyi LIAO
-
Patent number: 10649564
Abstract: A touch display panel and a display device are disclosed. The touch display panel includes a plurality of touch signal lines and a plurality of data lines disposed in a display area, and a plurality of lead terminals disposed in a peripheral area. The plurality of lead terminals includes a plurality of first terminals respectively connected to the plurality of data lines and a plurality of second terminals respectively connected to the plurality of touch signal lines. The plurality of lead terminals are arranged in a matrix. The first terminals and the second terminals are provided in a row direction or a column direction so as to be consistent with the sequence in which the data lines connected to the first terminals and the touch signal lines connected to the second terminals are arranged.
Type: Grant
Filed: July 28, 2017
Date of Patent: May 12, 2020
Assignees: BOE Technology Group Co., Ltd., Beijing BOE Optoelectronics Technology Co., Ltd.
Inventors: Jinyu Li, Yue Li, Yanchen Li
-
Patent number: 10650226
Abstract: Systems and methods for identifying a false representation of a human face are provided. In one example, a method for identifying a false representation of a human face includes receiving one or more data streams captured by one or more sensors sensing a candidate face. In a plurality of stages that each comprises a different analysis, one or more of the data streams are analyzed, and the stages comprise determining whether a plurality of candidate face depth points lies on a single flat plane or a curved plane. Based at least in part on determining that the plurality of candidate face depth points lies on the single flat plane, an indication of the false representation of the human face is outputted.
Type: Grant
Filed: June 19, 2018
Date of Patent: May 12, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Chun-Te Chu, Michael J. Conrad, Dijia Wu, Jinyu Li
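The flat-plane test at the heart of this abstract can be sketched with a least-squares plane fit: depth points from a printed photo fit a plane almost exactly, while a real face leaves a large residual. The residual threshold `tol` is an illustrative choice, not from the patent.

```python
import numpy as np

def is_flat_plane(points, tol=1e-3):
    """Fit z = a*x + b*y + c to candidate face depth points by least
    squares; a near-zero mean squared residual means the points lie on a
    single flat plane (e.g. a photo held to the camera), suggesting a
    spoof. `tol` is an assumed threshold."""
    pts = np.asarray(points, dtype=float)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    _, residuals, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    rss = residuals[0] if residuals.size else 0.0
    return rss / len(pts) < tol

# A perfectly planar "face" (photo) versus a curved, face-like surface.
flat = [(x, y, 0.5 * x + 0.2 * y + 1.0) for x in range(4) for y in range(4)]
curved = [(x, y, 0.1 * (x - 1.5) ** 2 + 0.1 * (y - 1.5) ** 2)
          for x in range(4) for y in range(4)]
```

In the patent's framing this would be one stage among several, each applying a different analysis to the sensor streams.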
-
Publication number: 20200143021
Abstract: User-specific data is used to process a biometric print, such that use of the biometric print can be revoked by invalidating the user-specific data. The processed print is generated by performing one-way processing of the biometric print using the user-specific data. The processed print, not the biometric print, is then provided to the authentication system for later authentication of the user. During matching, the user later provides a current biometric, resulting in generation of a current biometric print. For each of multiple users, the user-specific data is obtained for that user, and at least one current processed print is generated for that user based on the current biometric print. The current processed prints are used by the authentication system to match against each of the enrolled processed prints. If a match is found, the user is identified as being the user associated with the matching enrolled print.
Type: Application
Filed: November 1, 2018
Publication date: May 7, 2020
Inventors: Peter Dawoud Shenouda DAWOUD, Rachel PETERS, Jinyu LI
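The revocable one-way processing can be sketched with a keyed hash, where a random key stands in for the user-specific data. This is a deliberate simplification: HMAC requires the current print to match the enrolled print exactly, whereas real biometric prints are noisy and need fuzzy-matching or helper-data schemes; the key-as-user-specific-data choice is also an assumption.

```python
import hmac
import hashlib
import secrets

def enroll(biometric_print: bytes) -> tuple[bytes, bytes]:
    """One-way-process the biometric print with fresh user-specific data
    (a random 32-byte key here, as an illustrative stand-in). Only the
    processed print is stored; revoking = invalidating the key."""
    user_key = secrets.token_bytes(32)
    processed = hmac.new(user_key, biometric_print, hashlib.sha256).digest()
    return user_key, processed

def matches(user_key: bytes, current_print: bytes, enrolled: bytes) -> bool:
    # Re-derive the processed print from the current biometric print and
    # compare in constant time against the enrolled processed print.
    candidate = hmac.new(user_key, current_print, hashlib.sha256).digest()
    return hmac.compare_digest(candidate, enrolled)
```

Because only the processed print leaves the device, a compromised authentication database can be remediated by rotating the user-specific data and re-enrolling, without the raw biometric ever being exposed.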
-
Patent number: 10643602
Abstract: Methods, systems, and computer programs are presented for training, with adversarial constraints, a student model for speech recognition based on a teacher model. One method includes operations for training a teacher model based on teacher speech data, initializing a student model with parameters obtained from the teacher model, and training the student model with adversarial teacher-student learning based on the teacher speech data and student speech data. Training the student model with adversarial teacher-student learning further includes minimizing a teacher-student loss that measures a divergence of outputs between the teacher model and the student model; minimizing a classifier condition loss with respect to parameters of a condition classifier; and maximizing the classifier condition loss with respect to parameters of a feature extractor. The classifier condition loss measures errors caused by acoustic condition classification. Further, speech is recognized with the trained student model.
Type: Grant
Filed: March 16, 2018
Date of Patent: May 5, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jinyu Li, Zhong Meng, Yifan Gong
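The teacher-student loss above, "a divergence of outputs between the teacher model and the student model", is commonly realized as a KL divergence between the two models' posteriors. The sketch below assumes that choice and uses toy logits; the patent does not fix a particular divergence.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def teacher_student_loss(teacher_logits, student_logits, eps=1e-12):
    """KL(teacher || student) over output posteriors: the quantity the
    student minimizes so its outputs track the teacher's. KL is an
    assumed instantiation of the patent's 'divergence of outputs'."""
    p = softmax(np.asarray(teacher_logits, dtype=float))
    q = softmax(np.asarray(student_logits, dtype=float))
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

The full adversarial scheme adds the condition-classifier loss on top of this term, minimized for the classifier and maximized for the feature extractor, exactly mirroring the min/max structure in the abstract.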
-
Patent number: 10629193
Abstract: Non-limiting examples of the present disclosure describe advancements in acoustic-to-word modeling that improve accuracy in speech recognition processing through the replacement of out-of-vocabulary (OOV) tokens. During the decoding of speech signals, better accuracy in speech recognition processing is achieved through training and implementation of multiple different solutions that present enhanced speech recognition models. In one example, a hybrid neural network model for speech recognition processing combines a word-based neural network model as a primary model and a character-based neural network model as an auxiliary model. The primary word-based model emits a word sequence, and the output of the character-based auxiliary model is consulted at a segment where the word-based model emits an OOV token. In another example, a mixed unit speech recognition model is developed and trained to generate a mixed word and character sequence during decoding of a speech signal without requiring generation of OOV tokens.
Type: Grant
Filed: March 9, 2018
Date of Patent: April 21, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Guoli Ye, James Droppo, Jinyu Li, Rui Zhao, Yifan Gong
-
Patent number: 10580432
Abstract: Generally discussed herein are devices, systems, and methods for speech recognition. Processing circuitry can implement a connectionist temporal classification (CTC) neural network (NN) including an encode NN to receive an audio frame and generate a current encoded hidden feature vector, an attend NN to generate, based on a current encoded hidden feature vector and a first context vector from a previous time slice, a weight vector indicating an amount the current encoded hidden feature vector, a previous encoded hidden feature vector, and a future encoded hidden feature vector from a future time slice contribute to a current, second context vector, an annotate NN to generate the current, second context vector based on the weight vector, the current encoded hidden feature vector, the previous encoded hidden feature vector, and the future encoded hidden feature vector, and a normal NN to generate a normalized output vector based on the context vector.
Type: Grant
Filed: February 28, 2018
Date of Patent: March 3, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Amit Das, Jinyu Li, Rui Zhao, Yifan Gong
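The attend/annotate step can be sketched in numpy: score the previous, current, and future encoded hidden vectors against the previous context vector, normalize the scores into the weight vector, and form the new context vector as the weighted sum. Dot-product scoring and the random stand-in vectors are assumptions; the patent does not pin down the scoring function.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Encoded hidden feature vectors from the previous, current, and future
# time slices (random stand-ins for the encode NN's outputs).
h_prev, h_curr, h_next = rng.standard_normal((3, 8))
context_prev = rng.standard_normal(8)  # first context vector (previous slice)

# Attend NN: score each hidden vector against the previous context and
# normalize into the weight vector (dot-product scoring is assumed).
H = np.stack([h_prev, h_curr, h_next])
alpha = softmax(H @ context_prev)

# Annotate NN: the current, second context vector is the weighted sum.
context_curr = alpha @ H
```

A final "normal" network would then map `context_curr` to a normalized output vector over the CTC label set.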
-
Patent number: 10515301
Abstract: Conversion of a large-footprint DNN to a small-footprint DNN is performed using a variety of techniques, including split-vector quantization. The small-footprint DNN may be distributed to a variety of devices, including mobile devices. Further, the small-footprint DNN may aid a digital assistant on a device in interpreting speech input.
Type: Grant
Filed: January 19, 2016
Date of Patent: December 24, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jinyu Li, Yifan Gong, Yongqiang Wang
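Split-vector quantization can be sketched as: chop each weight row into sub-vectors, learn a small codebook per split with a few k-means steps, and store only codebook indices plus centroids. All sizes below (splits, codebook bits, iterations) are toy values chosen for illustration, not the patent's configuration.

```python
import numpy as np

def split_vq(weights, num_splits, codebook_bits=2, iters=10, seed=0):
    """Split-vector quantization sketch: split the weight matrix
    column-wise, run a few naive k-means steps per split, and return
    (centroids, assignments) pairs -- the compressed representation."""
    rng = np.random.default_rng(seed)
    compressed = []
    for sub in np.split(weights, num_splits, axis=1):
        k = 2 ** codebook_bits
        centroids = sub[rng.choice(len(sub), size=k, replace=False)]
        for _ in range(iters):
            # Assign each sub-vector to its nearest centroid.
            d = np.linalg.norm(sub[:, None, :] - centroids[None], axis=2)
            assign = d.argmin(axis=1)
            # Move each centroid to the mean of its assigned sub-vectors.
            for j in range(k):
                if (assign == j).any():
                    centroids[j] = sub[assign == j].mean(axis=0)
        compressed.append((centroids, assign))
    return compressed

def reconstruct(compressed):
    """Rebuild an approximate weight matrix from the compressed form."""
    return np.hstack([centroids[assign] for centroids, assign in compressed])
```

Storage drops from one float per weight to one small index per sub-vector plus a shared codebook, which is what makes the DNN small enough for mobile devices.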
-
Publication number: 20190351418
Abstract: A microfluidic chip configured to move a microdroplet along a predetermined path, includes a plurality of probe electrode groups spaced apart along the predetermined path. Each of the plurality of probe electrode groups includes a first probe electrode and a second probe electrode spaced apart from each other. The first probe electrode and the second probe electrode among a plurality of first probe electrodes and a plurality of second probe electrodes are configured to form an electrical loop with the microdroplet to thereby facilitate determining a position of the microdroplet.
Type: Application
Filed: December 25, 2018
Publication date: November 21, 2019
Applicants: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
Inventors: Mingyang LV, Yue LI, Jinyu LI, Yanchen LI, Dawei FENG, Dong WANG, Yu ZHAO, Shaojun HOU, Wang GUO
-
Patent number: 10452935
Abstract: Examples are disclosed herein that relate to detecting spoofed human faces. One example provides a computing device comprising a processor configured to compute a first feature distance between registered image data of a human face in a first spectral region and test image data of the human face in the first spectral region, compute a second feature distance between the registered image data and test image data of the human face in a second spectral region, compute a test feature distance between the test image data in the first spectral region and the test image data in the second spectral region, determine, based on a predetermined relationship, whether the human face to which the test image data in the first and second spectral regions corresponds is a real human face or a spoofed human face, and modify a behavior of the computing device.
Type: Grant
Filed: October 30, 2015
Date of Patent: October 22, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Jinyu Li, Fang Wen, Yichen Wei, Michael John Conrad, Chun-Te Chu, Aamir Jawaid
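The distance computations can be sketched directly; the "predetermined relationship" is not disclosed, so the decision rule and thresholds below are entirely hypothetical: require a close match to the registration in the first spectral region, plus the cross-spectral gap a real face exhibits between its two embeddings.

```python
import numpy as np

def l2(a, b):
    # Euclidean distance between two feature embeddings.
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def classify_face(reg_first, test_first, test_second,
                  match_tol=0.5, cross_min=0.3):
    """Hypothetical decision rule (thresholds are made up): genuine if
    the test embedding matches the registered one in the first spectral
    region AND the two test embeddings differ across spectral regions,
    which a flat printed photo tends not to reproduce."""
    d_first = l2(reg_first, test_first)    # registered vs test, region 1
    d_cross = l2(test_first, test_second)  # test region 1 vs region 2
    return d_first < match_tol and d_cross > cross_min
```

On a mismatch the device would "modify a behavior", e.g. refuse to unlock.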
-
Publication number: 20190287515
Abstract: Methods, systems, and computer programs are presented for training, with adversarial constraints, a student model for speech recognition based on a teacher model. One method includes operations for training a teacher model based on teacher speech data, initializing a student model with parameters obtained from the teacher model, and training the student model with adversarial teacher-student learning based on the teacher speech data and student speech data. Training the student model with adversarial teacher-student learning further includes minimizing a teacher-student loss that measures a divergence of outputs between the teacher model and the student model; minimizing a classifier condition loss with respect to parameters of a condition classifier; and maximizing the classifier condition loss with respect to parameters of a feature extractor. The classifier condition loss measures errors caused by acoustic condition classification. Further, speech is recognized with the trained student model.
Type: Application
Filed: March 16, 2018
Publication date: September 19, 2019
Inventors: Jinyu Li, Zhong Meng, Yifan Gong
-
Publication number: 20190279614
Abstract: Non-limiting examples of the present disclosure describe advancements in acoustic-to-word modeling that improve accuracy in speech recognition processing through the replacement of out-of-vocabulary (OOV) tokens. During the decoding of speech signals, better accuracy in speech recognition processing is achieved through training and implementation of multiple different solutions that present enhanced speech recognition models. In one example, a hybrid neural network model for speech recognition processing combines a word-based neural network model as a primary model and a character-based neural network model as an auxiliary model. The primary word-based model emits a word sequence, and the output of the character-based auxiliary model is consulted at a segment where the word-based model emits an OOV token. In another example, a mixed unit speech recognition model is developed and trained to generate a mixed word and character sequence during decoding of a speech signal without requiring generation of OOV tokens.
Type: Application
Filed: March 9, 2018
Publication date: September 12, 2019
Inventors: Guoli YE, James DROPPO, Jinyu LI, Rui ZHAO, Yifan GONG
-
Patent number: 10409134
Abstract: An electronic paper display panel, including a first substrate and a second substrate; an electrophoresis layer arranged between the first and second substrates, the electrophoresis layer including black electrophoretic particles, white electrophoretic particles and at least one color electrophoretic particle; a first electrode layer arranged at a side of the first substrate facing the second substrate and including multiple first electrodes; a second electrode layer arranged at a side of the second substrate facing the first substrate and including multiple second electrodes; and a drive circuit. Multiple pixel areas correspond to the multiple second electrodes; each first electrode includes a first sub-electrode and a second sub-electrode placed in the same pixel area, which are insulated from each other, correspond to one second electrode and are connected with the drive circuit; the first sub-electrode receives a voltage signal different from the voltage signal the second sub-electrode receives; the first electrode is a common electrode and the seco…
Type: Grant
Filed: October 12, 2017
Date of Patent: September 10, 2019
Assignee: SHANGHAI TIANMA MICRO-ELECTRONIC CO., LTD.
Inventors: Jinyu Li, Kerui Xi, Zuzhao Xu, Wenqin Xu, Lei Du, Yian Zhou
-
Publication number: 20190267023
Abstract: Generally discussed herein are devices, systems, and methods for speech recognition. Processing circuitry can implement a connectionist temporal classification (CTC) neural network (NN) including an encode NN to receive an audio frame and generate a current encoded hidden feature vector, an attend NN to generate, based on a current encoded hidden feature vector and a first context vector from a previous time slice, a weight vector indicating an amount the current encoded hidden feature vector, a previous encoded hidden feature vector, and a future encoded hidden feature vector from a future time slice contribute to a current, second context vector, an annotate NN to generate the current, second context vector based on the weight vector, the current encoded hidden feature vector, the previous encoded hidden feature vector, and the future encoded hidden feature vector, and a normal NN to generate a normalized output vector based on the context vector.
Type: Application
Filed: February 28, 2018
Publication date: August 29, 2019
Inventors: Amit Das, Jinyu Li, Rui Zhao, Yifan Gong
-
Patent number: 10354656
Abstract: Improvements in speaker identification and verification are provided via an attention model for speaker recognition and the end-to-end training thereof. A speaker discriminative convolutional neural network (CNN) is used to directly extract frame-level speaker features that are weighted and combined to form an utterance-level speaker recognition vector via the attention model. The CNN and attention model are jointly optimized via an end-to-end training algorithm that imitates the speaker recognition process and uses the most-similar utterances from imposters for each speaker.
Type: Grant
Filed: June 23, 2017
Date of Patent: July 16, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Yong Zhao, Jinyu Li, Yifan Gong, Shixiong Zhang, Zhuo Chen
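The attention pooling step, weighting frame-level features into one utterance-level vector, can be sketched in numpy. The linear scoring vector and the random stand-in features are assumptions made for illustration; the patent describes the attention model only at the level of weighting and combining frames.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Frame-level speaker features from a (hypothetical) CNN: T frames x D dims.
frames = rng.standard_normal((20, 16))

# Attention model: a learned scoring vector assigns each frame a weight;
# the utterance-level speaker vector is the weighted sum of the frames.
# A single linear scorer is an assumed, minimal form of the attention.
w_score = rng.standard_normal(16)
alpha = softmax(frames @ w_score)
utterance_vector = alpha @ frames
```

At verification time, utterance vectors are compared (e.g. by cosine similarity) against enrolled speaker vectors; the end-to-end training in the abstract sharpens this comparison using each speaker's most-similar impostor utterances.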
-
Patent number: 10347241
Abstract: Systems and methods can be implemented to conduct speaker-invariant training for speech recognition in a variety of applications. An adversarial multi-task learning scheme for speaker-invariant training can be implemented, aiming at actively curtailing the inter-talker feature variability, while maximizing its senone discriminability to enhance the performance of a deep neural network (DNN) based automatic speech recognition system. In speaker-invariant training, a DNN acoustic model and a speaker classifier network can be jointly optimized to minimize the senone (triphone state) classification loss, and simultaneously mini-maximize the speaker classification loss. A speaker-invariant and senone-discriminative intermediate feature is learned through this adversarial multi-task learning, which can be applied to an automatic speech recognition system. Additional systems and methods are disclosed.
Type: Grant
Filed: March 23, 2018
Date of Patent: July 9, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Zhong Meng, Vadim Aleksandrovich Mazalov, Yifan Gong, Yong Zhao, Zhuo Chen, Jinyu Li