Patents by Inventor Pengcheng He
Pengcheng He has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12008459
Abstract: This document relates to architectures and training procedures for multi-task machine learning models, such as neural networks. One example method involves providing a multi-task machine learning model having one or more shared layers and two or more task-specific layers. The method can also involve performing a pretraining stage on the one or more shared layers using one or more unsupervised prediction tasks. The method can also involve performing a tuning stage on the one or more shared layers and the two or more task-specific layers using respective task-specific objectives.
Type: Grant
Filed: June 17, 2019
Date of Patent: June 11, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Weizhu Chen, Pengcheng He, Xiaodong Liu, Jianfeng Gao
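As a rough illustration of the architecture this abstract describes, here is a minimal PyTorch sketch of a model with shared layers and task-specific heads. All names, dimensions, and the mean-pooling step are illustrative assumptions, not details from the patent:

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Shared encoder layers plus one task-specific head per task."""
    def __init__(self, vocab_size=30000, hidden=256, num_labels=(3, 2)):
        super().__init__()
        # Shared layers: pretrained with an unsupervised objective (e.g., masked LM).
        self.embed = nn.Embedding(vocab_size, hidden)
        self.shared = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True),
            num_layers=2,
        )
        # Task-specific layers: one classification head per task-specific objective.
        self.heads = nn.ModuleList(nn.Linear(hidden, n) for n in num_labels)

    def forward(self, token_ids, task_id):
        h = self.shared(self.embed(token_ids))  # shared representation
        pooled = h.mean(dim=1)                  # simple mean pooling (assumption)
        return self.heads[task_id](pooled)

model = MultiTaskModel()
logits = model(torch.randint(0, 30000, (8, 16)), task_id=0)
```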
-
Patent number: 11969818
Abstract: A split-type friction stir welding head with an adjustable stirring pin length includes a stirring head housing, in which a clamping handle and a detachable stirring pin are successively mounted toward the welding direction. The clamping handle is provided with external threads on its periphery and is drive-connected through those threads to an adjusting plate, which is retained and fixed in the stirring head housing. A pore-diameter-adjustable aperture shoulder is mounted between the bottom of the stirring head housing and the detachable stirring pin to compensate for the outside gap between the stirring pin channel of the stirring head housing and the detachable stirring pin.
Type: Grant
Filed: September 22, 2023
Date of Patent: April 30, 2024
Assignee: Hefei University of Technology
Inventors: Jingfeng Wang, Beibei Li, Pengcheng He, Ao Liu
-
Patent number: 11958127
Abstract: A shoulder-angle-adjustable friction stir welding head suitable for a fillet joint includes a stirring head body. A front end of the stirring head body is mounted with a movable shoulder, a stirring pin channel is arranged on the movable shoulder, and the stirring pin channel may allow a stirring pin of the stirring head body to pass through. By mounting the movable shoulder on the front end of the stirring head body, arranging the stirring pin channel on the movable shoulder so the stirring pin can pass through it, and adjusting the angle of the movable shoulder, the present disclosure can handle fillet-joint welding tasks at different angles and enlarges the application scope of the friction stir welding head.
Type: Grant
Filed: September 14, 2023
Date of Patent: April 16, 2024
Assignee: Hefei University of Technology
Inventors: Beibei Li, Pengcheng He, Jingfeng Wang, Wenqi Qi, Guoqiang Li
-
Publication number: 20240086619
Abstract: Generally discussed herein are devices, systems, and methods for generating an embedding that is both local string dependent and global string dependent. The generated embedding can improve machine learning (ML) model performance. A method can include converting a string of words to a series of tokens, generating a local string-dependent embedding of each token of the series of tokens, generating a global string-dependent embedding of each token of the series of tokens, combining the local string-dependent embedding and the global string-dependent embedding to generate an n-gram induced embedding of each token of the series of tokens, obtaining a masked language model (MLM) previously trained to generate a masked word prediction, and executing the MLM based on the n-gram induced embedding of each token to generate the masked word prediction.
Type: Application
Filed: October 26, 2023
Publication date: March 14, 2024
Inventors: Pengcheng HE, Xiaodong Liu, Jianfeng Gao, Weizhu Chen
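A minimal sketch of the combination step this abstract describes, assuming the local string-dependent embedding comes from an n-gram convolution over neighboring tokens, the global one from self-attention over the whole series, and that the two are combined by summation (all assumptions; the filing does not commit to these operators):

```python
import torch
import torch.nn as nn

class NgramInducedEmbedding(nn.Module):
    """Combine a local (n-gram) and a global (whole-string) token embedding."""
    def __init__(self, vocab_size=30000, hidden=128, ngram=3):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, hidden)
        # Local string-dependent: convolution over an n-gram window of neighbors.
        self.local = nn.Conv1d(hidden, hidden, kernel_size=ngram, padding=ngram // 2)
        # Global string-dependent: attention over the entire series of tokens.
        self.glob = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)

    def forward(self, token_ids):
        e = self.tok(token_ids)                                # (B, T, H)
        local = self.local(e.transpose(1, 2)).transpose(1, 2)  # n-gram context
        glob, _ = self.glob(e, e, e)                           # sequence-wide context
        return local + glob  # n-gram induced embedding (combination by sum: assumed)

emb = NgramInducedEmbedding()
out = emb(torch.randint(0, 30000, (4, 12)))  # would feed a pretrained MLM head
```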
-
Publication number: 20240062018
Abstract: Systems and methods are provided for training and using a novel unified language foundation model. An encoder-decoder natural language model is obtained, along with various training data used for training. The training process integrates a combination of replaced token detection, corrupted span reconstruction, and disentangled attention methodologies to produce a unified encoder-decoder model trained to perform both natural language understanding (NLU) tasks and natural language generation (NLG) tasks. During processing, attention is applied discretely to segmented chunks of encoded data to improve the efficiency of applying attention in the model.
Type: Application
Filed: October 20, 2022
Publication date: February 22, 2024
Inventors: Pengcheng HE, Jianfeng GAO, Nanshan ZENG, Xuedong HUANG, Wei XIONG, Baolin PENG
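The efficiency claim about applying attention discretely to segmented chunks can be illustrated with a block-local attention sketch. The chunk size, padding scheme, and use of a stock PyTorch attention module are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def chunked_attention(x, attn, chunk=64):
    """Apply self-attention independently within fixed-size chunks.

    Per-chunk cost is O(chunk^2) rather than O(T^2) over the full sequence,
    which is the kind of efficiency gain the abstract alludes to.
    """
    B, T, H = x.shape
    pad = (-T) % chunk                            # pad T up to a multiple of chunk
    x = F.pad(x, (0, 0, 0, pad))
    x = x.reshape(B * ((T + pad) // chunk), chunk, H)
    out, _ = attn(x, x, x)                        # attention within each chunk only
    return out.reshape(B, T + pad, H)[:, :T]      # drop the padding again

attn = nn.MultiheadAttention(embed_dim=128, num_heads=4, batch_first=True)
y = chunked_attention(torch.randn(2, 200, 128), attn, chunk=64)
```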
-
Publication number: 20240062020
Abstract: Systems and methods are provided for training and using a novel unified language foundation model. An encoder-decoder natural language model is obtained, along with various training data used for training. The training process integrates a combination of replaced token detection, corrupted span reconstruction, and disentangled attention methodologies to produce a unified encoder-decoder model trained to perform both natural language understanding (NLU) tasks and natural language generation (NLG) tasks. During processing, attention is applied discretely to segmented chunks of encoded data to improve the efficiency of applying attention in the model.
Type: Application
Filed: October 20, 2022
Publication date: February 22, 2024
Inventors: Pengcheng HE, Jianfeng GAO, Nanshan ZENG, Xuedong HUANG, Wei XIONG, Baolin PENG
-
Publication number: 20240013055
Abstract: This document relates to training of machine learning models. One example method involves providing a machine learning model having one or more mapping layers. The one or more mapping layers can include at least a first mapping layer configured to map components of pretraining examples into first representations in a space. The example method also includes performing a pretraining stage on the one or more mapping layers using the pretraining examples. The pretraining stage can include adding noise to the first representations of the components of the pretraining examples to obtain noise-adjusted first representations. The pretraining stage can also include performing a self-supervised learning process to pretrain the one or more mapping layers using at least the first representations of the training data items and the noise-adjusted first representations of the training data items.
Type: Application
Filed: September 26, 2023
Publication date: January 11, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Xiaodong Liu, Hao Cheng, Yu Wang, Jianfeng Gao, Weizhu Chen, Pengcheng He, Hoifung Poon
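A minimal sketch of the noise-adjustment idea: perturb the first representations and pretrain with a consistency-style self-supervised loss over both views. The Gaussian noise and symmetric KL objective are assumptions drawn from common practice, not from the claims:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMapper(nn.Module):
    """A first mapping layer (embedding) plus encoder and prediction head."""
    def __init__(self, vocab=1000, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)  # maps tokens to first representations
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, reps):
        h, _ = self.encoder(reps)
        return self.head(h)

def noise_adjusted_loss(model, token_ids, sigma=1e-3):
    e = model.embed(token_ids)                 # first representations
    e_noisy = e + sigma * torch.randn_like(e)  # noise-adjusted first representations
    p = F.log_softmax(model(e), dim=-1)
    q = F.log_softmax(model(e_noisy), dim=-1)
    # Pretrain the mapping layers so clean and noisy views agree (symmetric KL).
    return F.kl_div(p, q, log_target=True) + F.kl_div(q, p, log_target=True)

loss = noise_adjusted_loss(TinyMapper(), torch.randint(0, 1000, (2, 10)))
```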
-
Patent number: 11836438
Abstract: Generally discussed herein are devices, systems, and methods for generating an embedding that is both local string dependent and global string dependent. The generated embedding can improve machine learning (ML) model performance. A method can include converting a string of words to a series of tokens, generating a local string-dependent embedding of each token of the series of tokens, generating a global string-dependent embedding of each token of the series of tokens, combining the local string-dependent embedding and the global string-dependent embedding to generate an n-gram induced embedding of each token of the series of tokens, obtaining a masked language model (MLM) previously trained to generate a masked word prediction, and executing the MLM based on the n-gram induced embedding of each token to generate the masked word prediction.
Type: Grant
Filed: April 13, 2021
Date of Patent: December 5, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen
-
Patent number: 11803758
Abstract: This document relates to training of machine learning models. One example method involves providing a machine learning model having one or more mapping layers. The one or more mapping layers can include at least a first mapping layer configured to map components of pretraining examples into first representations in a space. The example method also includes performing a pretraining stage on the one or more mapping layers using the pretraining examples. The pretraining stage can include adding noise to the first representations of the components of the pretraining examples to obtain noise-adjusted first representations. The pretraining stage can also include performing a self-supervised learning process to pretrain the one or more mapping layers using at least the first representations of the training data items and the noise-adjusted first representations of the training data items.
Type: Grant
Filed: May 22, 2020
Date of Patent: October 31, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Xiaodong Liu, Hao Cheng, Yu Wang, Jianfeng Gao, Weizhu Chen, Pengcheng He, Hoifung Poon
-
Patent number: 11720757
Abstract: Methods, systems, apparatuses, and computer program products are provided for extracting an entity value from a sentence. An embedding set that may include one or more sentence embeddings is generated for at least part of a first sentence that is tagged to associate a first named entity in the sentence with an entity type. A plurality of candidate embeddings is also generated for at least part of a second sentence. The one or more sentence embeddings in the embedding set may be compared with each of the plurality of candidate embeddings, and a match score may be assigned to each comparison to generate a match score set. A particular match score of the match score set may be identified that exceeds a similarity threshold, and an entity value of the entity type may be extracted from the second sentence associated with the identified match score.
Type: Grant
Filed: August 19, 2019
Date of Patent: August 8, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Vikas Bahirwani, Jade Huang, Matthew Brigham Hall, Yu Zhao, Pengcheng He, Weizhu Chen, Eslam K. Abdelreheem, Jiayuan Huang, Yuting Sun
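A minimal sketch of the matching logic, assuming cosine similarity as the match score and an illustrative threshold; the helper names are hypothetical:

```python
import numpy as np

def cosine(a, b):
    """Match score between two embeddings (cosine similarity: an assumption)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def extract_entity(embedding_set, candidates, threshold=0.85):
    """Compare each tagged-sentence embedding against each candidate embedding
    and return the candidate span whose best match score clears the threshold.

    `embedding_set`: vectors for the tagged first sentence.
    `candidates`: dict mapping a span of the second sentence to its vector.
    """
    best_span, best_score = None, threshold
    for span, cand_vec in candidates.items():
        for ref_vec in embedding_set:
            score = cosine(ref_vec, cand_vec)  # one entry of the match score set
            if score > best_score:
                best_span, best_score = span, score
    return best_span  # extracted entity value, or None if nothing exceeded it
```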
-
Publication number: 20230222295
Abstract: Systems and methods are provided for facilitating the building and use of natural language understanding models. The systems and methods identify a plurality of tokens and use them to generate one or more pre-trained natural language models using a transformer. The transformer disentangles the content embedding and positional embedding in the computation of its attention matrix. Systems and methods are also provided to facilitate self-training of the pre-trained natural language model by utilizing multi-step decoding to better reconstruct masked tokens and improve pre-training convergence.
Type: Application
Filed: December 9, 2022
Publication date: July 13, 2023
Inventors: Pengcheng HE, Xiaodong LIU, Jianfeng GAO, Weizhu CHEN
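One way to picture "disentangling the content embedding and positional embedding in the computation of the attention matrix" is to compute the attention logits as a sum of content-to-content, content-to-position, and position-to-content terms from separate projections. The sketch below uses absolute positions and a 1/sqrt(3d) scale as simplifying assumptions:

```python
import torch
import torch.nn as nn

class DisentangledAttentionScores(nn.Module):
    """Attention logits built from separate content and position embeddings."""
    def __init__(self, hidden=64, max_len=128):
        super().__init__()
        self.pos = nn.Embedding(max_len, hidden)  # positional embedding table
        self.q_c, self.k_c = nn.Linear(hidden, hidden), nn.Linear(hidden, hidden)
        self.q_p, self.k_p = nn.Linear(hidden, hidden), nn.Linear(hidden, hidden)
        self.scale = (3 * hidden) ** 0.5

    def forward(self, content):                   # content: (B, T, H)
        B, T, _ = content.shape
        pos = self.pos(torch.arange(T)).unsqueeze(0).expand(B, -1, -1)
        c2c = self.q_c(content) @ self.k_c(content).transpose(1, 2)
        c2p = self.q_c(content) @ self.k_p(pos).transpose(1, 2)
        p2c = self.q_p(pos) @ self.k_c(content).transpose(1, 2)
        # Content and position contribute separate, disentangled score terms.
        return (c2c + c2p + p2c) / self.scale

scores = DisentangledAttentionScores()(torch.randn(2, 16, 64))
```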
-
Publication number: 20230153532
Abstract: A method for training a language model comprises (a) receiving vectorized training data as input to a multitask pretraining problem; (b) generating modified vectorized training data based on the vectorized training data, according to an upstream data embedding; (c) emitting pretraining output based on the modified vectorized training data, according to a downstream data embedding equivalent to the upstream data embedding; and (d) adjusting the upstream data embedding and the downstream data embedding by computing, based on the pretraining output, a gradient of the upstream data embedding disentangled from a gradient of the downstream data embedding, thereby advancing the multitask pretraining problem toward a pretrained state.
Type: Application
Filed: May 18, 2022
Publication date: May 18, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Pengcheng HE, Jianfeng GAO, Weizhu CHEN
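One reading of step (d), with the downstream gradient disentangled from the upstream one, is a stop-gradient sharing scheme: the downstream embedding reuses the upstream table without propagating gradients into it and learns its own residual. The detach-plus-delta construction below is an assumption, not the claimed method:

```python
import torch
import torch.nn as nn

class GradientDisentangledEmbedding(nn.Module):
    """Downstream embedding = stop-grad(upstream embedding) + learned residual."""
    def __init__(self, upstream: nn.Embedding):
        super().__init__()
        self.upstream = upstream                  # shared upstream table
        self.delta = nn.Embedding(upstream.num_embeddings, upstream.embedding_dim)
        nn.init.zeros_(self.delta.weight)         # start identical to upstream

    def forward(self, token_ids):
        # detach(): downstream gradients never reach the upstream table, so the
        # two embeddings are adjusted by disentangled gradients.
        return self.upstream(token_ids).detach() + self.delta(token_ids)

upstream = nn.Embedding(1000, 64)
downstream = GradientDisentangledEmbedding(upstream)
vecs = downstream(torch.randint(0, 1000, (2, 8)))
```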
-
Patent number: 11526679
Abstract: Systems and methods are provided for facilitating the building and use of natural language understanding models. The systems and methods identify a plurality of tokens and use them to generate one or more pre-trained natural language models using a transformer. The transformer disentangles the content embedding and positional embedding in the computation of its attention matrix. Systems and methods are also provided to facilitate self-training of the pre-trained natural language model by utilizing multi-step decoding to better reconstruct masked tokens and improve pre-training convergence.
Type: Grant
Filed: June 24, 2020
Date of Patent: December 13, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen
-
Publication number: 20220379322
Abstract: The present disclosure relates to a showerhead including an upper cover, a lower cover, and a middle plate disposed between the upper cover and the lower cover. A bottom portion of the middle plate defines a first water cavity and a second water cavity that is independent from the first water cavity. The middle plate further defines a first water inlet channel in fluid communication with the first water cavity and a second water inlet channel in fluid communication with the second water cavity. The lower cover includes a first water outlet in fluid communication with the first water cavity and a second water outlet in fluid communication with the second water cavity. The second water inlet channel includes a plug that can open and/or close the second water inlet channel responsive to water pressure exerted on the plug within the showerhead.
Type: Application
Filed: April 21, 2022
Publication date: December 1, 2022
Applicant: Nanchang Kohler Ltd.
Inventors: Pengcheng He, Shenghe Chen
-
Publication number: 20210334475
Abstract: Systems and methods are provided for facilitating the building and use of natural language understanding models. The systems and methods identify a plurality of tokens and use them to generate one or more pre-trained natural language models using a transformer. The transformer disentangles the content embedding and positional embedding in the computation of its attention matrix. Systems and methods are also provided to facilitate self-training of the pre-trained natural language model by utilizing multi-step decoding to better reconstruct masked tokens and improve pre-training convergence.
Type: Application
Filed: June 24, 2020
Publication date: October 28, 2021
Inventors: Pengcheng HE, Xiaodong LIU, Jianfeng GAO, Weizhu CHEN
-
Publication number: 20210326751
Abstract: This document relates to training of machine learning models. One example method involves providing a machine learning model having one or more mapping layers. The one or more mapping layers can include at least a first mapping layer configured to map components of pretraining examples into first representations in a space. The example method also includes performing a pretraining stage on the one or more mapping layers using the pretraining examples. The pretraining stage can include adding noise to the first representations of the components of the pretraining examples to obtain noise-adjusted first representations. The pretraining stage can also include performing a self-supervised learning process to pretrain the one or more mapping layers using at least the first representations of the training data items and the noise-adjusted first representations of the training data items.
Type: Application
Filed: May 22, 2020
Publication date: October 21, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Xiaodong Liu, Hao Cheng, Yu Wang, Jianfeng Gao, Weizhu Chen, Pengcheng He, Hoifung Poon
-
Publication number: 20210232753
Abstract: Generally discussed herein are devices, systems, and methods for generating an embedding that is both local string dependent and global string dependent. The generated embedding can improve machine learning (ML) model performance. A method can include converting a string of words to a series of tokens, generating a local string-dependent embedding of each token of the series of tokens, generating a global string-dependent embedding of each token of the series of tokens, combining the local string-dependent embedding and the global string-dependent embedding to generate an n-gram induced embedding of each token of the series of tokens, obtaining a masked language model (MLM) previously trained to generate a masked word prediction, and executing the MLM based on the n-gram induced embedding of each token to generate the masked word prediction.
Type: Application
Filed: April 13, 2021
Publication date: July 29, 2021
Inventors: Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen
-
Publication number: 20210142181
Abstract: This document relates to training of machine learning models such as neural networks. One example method involves providing a machine learning model having one or more layers and associated parameters and performing a pretraining stage on the parameters of the machine learning model to obtain pretrained parameters. The example method also involves performing a tuning stage on the machine learning model by using labeled training examples to tune the pretrained parameters. The tuning stage can include performing noise adjustment of the labeled training examples to obtain noise-adjusted training examples. The tuning stage can also include adjusting the pretrained parameters based at least on the labeled training examples and the noise-adjusted training examples to obtain adapted parameters. The example method can also include outputting a tuned machine learning model having the adapted parameters.
Type: Application
Filed: January 29, 2020
Publication date: May 13, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Xiaodong LIU, Jianfeng GAO, Pengcheng HE, Weizhu CHEN
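A minimal sketch of the tuning stage, where each labeled batch is also seen with noise-adjusted embeddings and both views contribute to the loss. The noise scale, weighting, and tiny model are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyClassifier(nn.Module):
    def __init__(self, vocab=1000, hidden=64, labels=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)  # pretrained parameters in practice
        self.fc = nn.Linear(hidden, labels)

    def classify(self, e):                        # e: embedded tokens (B, T, H)
        return self.fc(e.mean(dim=1))             # mean-pool, then classify

def tune_step(model, token_ids, labels, optimizer, sigma=1e-3, alpha=1.0):
    """One tuning step on labeled examples plus their noise-adjusted versions."""
    e = model.embed(token_ids)
    noisy = e + sigma * torch.randn_like(e)       # noise-adjusted training examples
    loss = (F.cross_entropy(model.classify(e), labels)
            + alpha * F.cross_entropy(model.classify(noisy), labels))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                              # yields the adapted parameters
    return loss.item()

m = TinyClassifier()
opt = torch.optim.Adam(m.parameters(), lr=1e-3)
tune_step(m, torch.randint(0, 1000, (4, 10)), torch.randint(0, 3, (4,)), opt)
```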
-
Patent number: 11003863
Abstract: A system for training and deploying an artificial conversational entity using an artificial intelligence (AI) based communications system is disclosed. The system may comprise a memory storing machine readable instructions. The system may also comprise a processor to execute the machine readable instructions to receive a request via an artificial conversational entity. The processor may also transmit a response to the request based on a dialog tree generated from at least a model-based action generator and a memory-based action generator. The processor may further provide a training option to a user in the event the response is suboptimal. The processor may additionally receive a selection from the user via the training option. The selection may be associated with an optimal response.
Type: Grant
Filed: March 22, 2019
Date of Patent: May 11, 2021
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Matthew Brigham Hall, Weizhu Chen, Junyan Chen, Pengcheng He, Yu Zhao, Yi-Min Wang, Yuting Sun, Zheng Chen, Katherine Winant Osborne
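The request/response/teach loop the abstract outlines might look like the following sketch, with a memory-based generator consulted before a stubbed model-based one and a training option surfaced when confidence is low. All thresholds and data structures are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class ConversationalEntity:
    """Pairs a memory-based and a (stubbed) model-based action generator."""
    memory: dict = field(default_factory=dict)   # request -> user-taught response

    def model_based(self, request):
        # Placeholder for a learned response model; returns (response, confidence).
        return "Sorry, I am not sure how to help with that.", 0.2

    def respond(self, request, threshold=0.5):
        if request in self.memory:               # memory-based action generator
            return self.memory[request]
        response, conf = self.model_based(request)
        if conf < threshold:                     # suboptimal: surface the training option
            return response + " (You can teach me a better reply.)"
        return response

    def teach(self, request, optimal_response):  # user selection of an optimal response
        self.memory[request] = optimal_response

bot = ConversationalEntity()
print(bot.respond("reset my password"))          # falls back and offers training
bot.teach("reset my password", "Open Settings > Account > Reset password.")
print(bot.respond("reset my password"))          # now answered from memory
```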
-
Patent number: D990933
Type: Grant
Filed: April 17, 2023
Date of Patent: July 4, 2023
Inventor: Pengcheng He