Patents by Inventor Jui-Ting Huang
Jui-Ting Huang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240119283
Abstract: A method of performing automatic tuning on a deep learning model includes: utilizing an instruction-based learned cost model to estimate a first type of operational performance metrics based on a tuned configuration of layer fusion and tensor tiling; utilizing statistical data gathered during a compilation process of the deep learning model to determine a second type of operational performance metrics based on the tuned configuration of layer fusion and tensor tiling; performing an auto-tuning process to obtain a plurality of optimal configurations based on the first and second types of operational performance metrics; and configuring the deep learning model according to one of the plurality of optimal configurations.
Type: Application
Filed: October 6, 2023
Publication date: April 11, 2024
Applicant: MEDIATEK INC.
Inventors: Jui-Yang Hsu, Cheng-Sheng Chan, Jen-Chieh Tsai, Huai-Ting Li, Bo-Yu Kuo, Yen-Hao Chen, Kai-Ling Huang, Ping-Yuan Tseng, Tao Tu, Sheng-Je Hung
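The two-metric tuning loop in this abstract can be sketched in miniature. Everything here is an illustrative assumption, not MediaTek's implementation: the cost functions are toy stand-ins for the learned cost model and the compiler statistics, and the "plurality of optimal configurations" is interpreted as a Pareto front over the two metrics.

```python
# Hypothetical sketch of two-metric auto-tuning over (layer fusion, tensor tiling).
def estimate_latency(config):
    # stand-in for the instruction-based learned cost model (first metric type)
    fusion, tiling = config
    return 10.0 / (fusion + 1) + tiling * 0.5

def compiled_memory(config):
    # stand-in for statistics gathered during compilation (second metric type)
    fusion, tiling = config
    return fusion * 2.0 + 8.0 / (tiling + 1)

def pareto_front(configs):
    """Keep configurations not dominated in (latency, memory); both minimized."""
    scored = [(estimate_latency(c), compiled_memory(c), c) for c in configs]
    front = []
    for lat, mem, c in scored:
        dominated = any(l <= lat and m <= mem and (l, m) != (lat, mem)
                        for l, m, _ in scored)
        if not dominated:
            front.append(c)
    return front

candidates = [(f, t) for f in range(3) for t in range(1, 4)]
optimal = pareto_front(candidates)   # the model would then use one of these
```

Any single configuration on the front can then be applied to the model, trading latency against memory.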
-
Patent number: 11429860
Abstract: Systems and methods are provided for generating a DNN classifier by "learning" a "student" DNN model from a larger, more accurate "teacher" DNN model. The student DNN may be trained from un-labeled training data because its supervised signal is obtained by passing the un-labeled training data through the teacher DNN. In one embodiment, an iterative process is applied to train the student DNN by minimizing the divergence of the output distributions from the teacher and student DNN models. For each iteration until convergence, the difference in the output distributions is used to update the student DNN model, and output distributions are determined again, using the unlabeled training data. The resulting trained student model may be suitable for providing accurate signal processing applications on devices having limited computational or storage resources such as mobile or wearable devices. In an embodiment, the teacher DNN model comprises an ensemble of DNN models.
Type: Grant
Filed: September 14, 2015
Date of Patent: August 30, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jinyu Li, Rui Zhao, Jui-Ting Huang, Yifan Gong
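The iterative teacher-student training described above can be illustrated with a deliberately tiny example. This is a sketch under stated assumptions, not the patented system: the "teacher" is a fixed linear softmax model standing in for a large DNN, the "student" is another linear softmax, and the student is fit to unlabeled inputs by gradient descent on the KL divergence between the two output distributions.

```python
import numpy as np

# Minimal teacher-student distillation sketch on unlabeled data.
rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

X = rng.normal(size=(200, 5))            # unlabeled training data
W_teacher = rng.normal(size=(5, 3))      # stands in for the large teacher DNN
teacher_probs = softmax(X @ W_teacher)   # supervised signal: teacher's outputs

W_student = np.zeros((5, 3))
for _ in range(500):                     # iterate until (approximate) convergence
    student_probs = softmax(X @ W_student)
    # gradient of the KL divergence with respect to the student's logits
    grad = X.T @ (student_probs - teacher_probs) / len(X)
    W_student -= 0.5 * grad

kl = np.mean(np.sum(teacher_probs
                    * np.log(teacher_probs / softmax(X @ W_student)), axis=1))
```

After training, the student's output distribution closely tracks the teacher's, which is the property that lets a small model substitute for a large one on resource-limited devices.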
-
Patent number: 10515423
Abstract: System and methods for generating a shareability score in accordance with some example embodiments are disclosed. A social networking system receives a request to generate a shareability score for a list of content items for an organization. The social networking system identifies a plurality of members associated with the organization and analyzes past share data for the plurality of members to generate an organization sharing profile. The social networking system retrieves early sharing information for each content item in the list of content items. The social networking system generates a shareability score for each particular content item and ranks the list of content items based on the generated shareability scores. The social networking system then transmits the ranked list of content items to a client device, receives a selection of one or more content items, and broadcasts the one or more selected items to a plurality of client devices.
Type: Grant
Filed: August 2, 2016
Date of Patent: December 24, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Abhishek Gupta, Jui-Ting Huang, Siegfried Joseph Bilstein
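The profile-then-score-then-rank flow in this abstract can be sketched as follows. All field names and the scoring formula are illustrative assumptions; the patent does not specify how the organization sharing profile and early sharing information are combined.

```python
from collections import Counter

# Hypothetical shareability scoring: org profile affinity x early sharing signal.
past_shares = ["ai", "cloud", "ai", "security", "ai", "cloud"]  # members' past shares
org_profile = Counter(past_shares)                              # organization sharing profile
total_shares = sum(org_profile.values())

content_items = [
    {"id": 1, "topic": "ai",       "early_shares": 3},
    {"id": 2, "topic": "security", "early_shares": 10},
    {"id": 3, "topic": "travel",   "early_shares": 4},
]

def shareability(item):
    # how well the item's topic matches what this org historically shares,
    # weighted by how quickly it is being shared early on
    affinity = org_profile.get(item["topic"], 0) / total_shares
    return affinity * item["early_shares"]

ranked = sorted(content_items, key=shareability, reverse=True)
```

The ranked list is what would be sent to the client device for selection and broadcast.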
-
Publication number: 20180039631
Abstract: System and methods for generating a shareability score in accordance with some example embodiments are disclosed. A social networking system receives a request to generate a shareability score for a list of content items for an organization. The social networking system identifies a plurality of members associated with the organization and analyzes past share data for the plurality of members to generate an organization sharing profile. The social networking system retrieves early sharing information for each content item in the list of content items. The social networking system generates a shareability score for each particular content item and ranks the list of content items based on the generated shareability scores. The social networking system then transmits the ranked list of content items to a client device, receives a selection of one or more content items, and broadcasts the one or more selected items to a plurality of client devices.
Type: Application
Filed: August 2, 2016
Publication date: February 8, 2018
Inventors: Abhishek Gupta, Jui-Ting Huang, Siegfried Joseph Bilstein
-
Patent number: 9842585
Abstract: Described herein are various technologies pertaining to a multilingual deep neural network (MDNN). The MDNN includes a plurality of hidden layers, wherein values for weight parameters of the plurality of hidden layers are learned during a training phase based upon training data in terms of acoustic raw features for multiple languages. The MDNN further includes softmax layers that are trained for each target language separately, making use of the hidden layer values trained jointly with multiple source languages. The MDNN is adaptable, such that a new softmax layer may be added on top of the existing hidden layers, where the new softmax layer corresponds to a new target language.
Type: Grant
Filed: March 11, 2013
Date of Patent: December 12, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jui-Ting Huang, Jinyu Li, Dong Yu, Li Deng, Yifan Gong
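The shared-hidden-layers / per-language-softmax structure described here can be sketched with a single hidden layer. This is an illustrative toy, not the patented MDNN: the weights are random rather than trained, and the layer sizes are arbitrary. What it shows is the adaptability point, where a new target language gets its own softmax on top of the existing hidden layers.

```python
import numpy as np

# Sketch: one shared hidden layer, one softmax layer per target language.
rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

W_hidden = rng.normal(size=(40, 64))   # shared hidden weights, jointly "trained"
softmax_layers = {                     # language-specific output layers
    "en": rng.normal(size=(64, 30)),
    "fr": rng.normal(size=(64, 28)),
}

def add_language(name, n_outputs):
    # adapt to a new target language: attach a fresh softmax layer on top
    # of the existing hidden layers, which stay shared across languages
    softmax_layers[name] = rng.normal(size=(64, n_outputs)) * 0.01

def forward(x, lang):
    h = np.tanh(x @ W_hidden)                  # shared multilingual representation
    return softmax(h @ softmax_layers[lang])   # language-specific posteriors

add_language("de", 32)
probs = forward(rng.normal(size=40), "de")
```

Only the new softmax layer would need training for the new language; the hidden layers are reused as-is.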
-
Publication number: 20160078339
Abstract: Systems and methods are provided for generating a DNN classifier by "learning" a "student" DNN model from a larger, more accurate "teacher" DNN model. The student DNN may be trained from un-labeled training data because its supervised signal is obtained by passing the un-labeled training data through the teacher DNN. In one embodiment, an iterative process is applied to train the student DNN by minimizing the divergence of the output distributions from the teacher and student DNN models. For each iteration until convergence, the difference in the output distributions is used to update the student DNN model, and output distributions are determined again, using the unlabeled training data. The resulting trained student model may be suitable for providing accurate signal processing applications on devices having limited computational or storage resources such as mobile or wearable devices. In an embodiment, the teacher DNN model comprises an ensemble of DNN models.
Type: Application
Filed: September 14, 2015
Publication date: March 17, 2016
Inventors: Jinyu Li, Rui Zhao, Jui-Ting Huang, Yifan Gong
-
Patent number: 9129591
Abstract: Speech recognition systems may perform the following operations: receiving audio; recognizing the audio using language models for different languages to produce recognition candidates for the audio, where the recognition candidates are associated with corresponding recognition scores; identifying a candidate language for the audio; selecting a recognition candidate based on the recognition scores and the candidate language; and outputting data corresponding to the selected recognition candidate as a recognized version of the audio.
Type: Grant
Filed: December 26, 2012
Date of Patent: September 8, 2015
Assignee: Google Inc.
Inventors: Yun-hsuan Sung, Francoise Beaufays, Brian Strope, Hui Lin, Jui-Ting Huang
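The final selection step, choosing among per-language recognition candidates using both the recognition scores and the identified candidate language, can be sketched as below. The additive boost and its size are illustrative assumptions; the patent does not prescribe a specific combination rule.

```python
# Hypothetical selection among recognition candidates from per-language models.
candidates = [
    {"lang": "en", "text": "hello world", "score": 0.62},
    {"lang": "fr", "text": "allo monde",  "score": 0.58},
    {"lang": "de", "text": "hallo welt",  "score": 0.60},
]

def select(candidates, candidate_language, boost=0.1):
    # favour hypotheses from the language the audio was identified as,
    # while still letting a much stronger score from another model win
    def adjusted(c):
        return c["score"] + (boost if c["lang"] == candidate_language else 0.0)
    return max(candidates, key=adjusted)

best = select(candidates, candidate_language="de")
```

Here the German hypothesis wins once the language-identification signal is folded in, even though the English model's raw score was higher.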
-
Publication number: 20140257805
Abstract: Described herein are various technologies pertaining to a multilingual deep neural network (MDNN). The MDNN includes a plurality of hidden layers, wherein values for weight parameters of the plurality of hidden layers are learned during a training phase based upon training data in terms of acoustic raw features for multiple languages. The MDNN further includes softmax layers that are trained for each target language separately, making use of the hidden layer values trained jointly with multiple source languages. The MDNN is adaptable, such that a new softmax layer may be added on top of the existing hidden layers, where the new softmax layer corresponds to a new target language.
Type: Application
Filed: March 11, 2013
Publication date: September 11, 2014
Applicant: MICROSOFT CORPORATION
Inventors: Jui-Ting Huang, Jinyu Li, Dong Yu, Li Deng, Yifan Gong
-
Publication number: 20130238336
Abstract: Speech recognition systems may perform the following operations: receiving audio; recognizing the audio using language models for different languages to produce recognition candidates for the audio, where the recognition candidates are associated with corresponding recognition scores; identifying a candidate language for the audio; selecting a recognition candidate based on the recognition scores and the candidate language; and outputting data corresponding to the selected recognition candidate as a recognized version of the audio.
Type: Application
Filed: December 26, 2012
Publication date: September 12, 2013
Inventors: Yun-hsuan Sung, Francoise Beaufays, Brian Strope, Hui Lin, Jui-Ting Huang