Patents by Inventor Qiang Lou

Qiang Lou has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12353998
    Abstract: A tagging system appends supplemental information to an original sequence of items, to produce a supplemented sequence of items. The tagging system includes a transformer-based encoder neural network that maps the supplemented sequence into hidden state information. The tagging system includes a post-processing neural network that transforms the hidden state information into a tagged output sequence of items. That is, each item in the tagged output sequence includes a tag that identifies its entity class or some other characteristic. The tagging system can increase the accuracy of the tags it produces by virtue of the inclusion of the supplemental information added to each original sequence. A training system trains the tagging system to perform plural tasks, which further increases the accuracy of the tags it produces. The training system may commence training of the tagging system using a pre-trained model for the encoder neural network.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: July 8, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Luis Gerardo Mojica De La Vega, Qiang Lou, Jian Jiao, Ruofei Zhang
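    The shape of the pipeline in this abstract can be illustrated with a toy sketch: supplemental context is appended to the original sequence before tagging. The gazetteer, the hint tokens, and the lookup-based tagger below are hypothetical stand-ins for the patented encoder and post-processing neural networks, not the actual method.

    ```python
    # Toy sketch: append supplemental hint tokens to the original
    # sequence, then tag each original token with an entity class.
    # A real system would feed the supplemented sequence through a
    # transformer encoder; here a simple lookup plays that role.
    from typing import List, Tuple

    # Hypothetical gazetteer serving as "supplemental information".
    GAZETTEER = {"seattle": "CITY", "microsoft": "ORG"}

    def supplement(tokens: List[str]) -> List[str]:
        """Produce the supplemented sequence: original tokens plus hints."""
        hints = [f"<hint:{GAZETTEER[t.lower()]}>"
                 for t in tokens if t.lower() in GAZETTEER]
        return tokens + ["<sep>"] + hints

    def tag(tokens: List[str]) -> List[Tuple[str, str]]:
        """Stand-in for encoder + post-processing network: emit a
        (token, tag) pair per original token, defaulting to 'O'."""
        hints = {h for h in supplement(tokens) if h.startswith("<hint:")}
        tagged = []
        for t in tokens:
            cls = GAZETTEER.get(t.lower(), "O")
            # In a real model the hints would bias the tag decision;
            # here they simply confirm the lookup.
            tagged.append((t, cls if f"<hint:{cls}>" in hints else "O"))
        return tagged

    print(tag(["Microsoft", "is", "in", "Seattle"]))
    ```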
  • Publication number: 20250111162
    Abstract: A computing system is disclosed that includes a processor and memory. The memory stores instructions that, when executed by the processor, cause the processor to perform several acts. The acts comprise receiving conversational data indicative of an interaction between a client computing device and a generative model. The conversational data is provided as input into an intent classification module and the intent classification module produces an output indicative of a user intent based upon the conversational data. An anchor generation module generates anchor text indicative of portions of the conversational data correlated with the user intent. A content query based upon the anchor text is generated and content responsive to the content query is obtained and presented at the client computing device.
    Type: Application
    Filed: September 29, 2023
    Publication date: April 3, 2025
    Inventors: Xinyu HU, Pengfei TANG, Simiao ZUO, Qiang LOU, Jian JIAO, Denis Xavier CHARLES, Eren MANAVOGLU
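    The flow this abstract describes (conversational data → intent classification → anchor generation → content query) can be sketched end to end. The keyword-based classifier, intent names, and query template below are illustrative assumptions, not the modules the application discloses.

    ```python
    # Minimal sketch of the pipeline: classify the user intent from
    # conversation turns, extract anchor text correlated with that
    # intent, and build a content query from the anchors.
    INTENT_KEYWORDS = {
        "shopping": {"buy", "price", "order"},
        "travel": {"flight", "hotel", "trip"},
    }

    def _words(turn: str) -> set:
        return {w.lower().strip(".,?") for w in turn.split()}

    def classify_intent(turns: list) -> str:
        """Stand-in intent classification module (keyword matching)."""
        words = set().union(*(_words(t) for t in turns))
        for intent, keys in INTENT_KEYWORDS.items():
            if words & keys:
                return intent
        return "general"

    def generate_anchors(turns: list, intent: str) -> list:
        """Return the conversation portions correlated with the intent."""
        keys = INTENT_KEYWORDS.get(intent, set())
        return [t for t in turns if _words(t) & keys]

    def build_query(anchors: list, intent: str) -> str:
        """Form a content query from the anchor text."""
        return f"{intent}: " + " | ".join(anchors)

    turns = ["I want to book a flight to Oslo", "Something cheap please"]
    intent = classify_intent(turns)
    print(build_query(generate_anchors(turns, intent), intent))
    ```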
  • Publication number: 20240046037
    Abstract: Systems and methods are provided for training a data model based on training data. The training includes pre-training and fine-tuning the data model based on a combination of an autoregressive (AR) model and a non-autoregressive (NAR) model. Training data may be received and encoded into streams of tokens. A pre-trainer during decoding generates a continuum of data structures of the AR and NAR combined model including a main stream and a series of predicting streams. Masked tokens in predicting streams reference or attend to one or more preceding tokens in the main stream or the preceding predicting streams. A fine-tuner selects streams to generate a trained model according to a target data model. The target data model is determined based on balancing an accuracy constraint and an efficiency constraint for predicting tokens. The decoder acts as a bridge between the AR and NAR models in generating a trained data model.
    Type: Application
    Filed: December 25, 2020
    Publication date: February 8, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jian JIAO, Yeyun GONG, Nan DUAN, Weizhu CHEN, Kewen TANG, Qiang LOU, Ruofei ZHANG, Yu YAN, Jiusheng CHEN
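    The stream layout described above can be visualized with a small data-structure sketch: a main stream of tokens plus a series of predicting streams whose masked positions record which preceding main-stream positions they may attend to. The stream count and the exact attention rule here are simplified assumptions, not the patented decoder.

    ```python
    # Illustrative sketch: build a main stream and n predicting streams.
    # In predicting stream k, the masked slot at position i may attend
    # to the preceding main-stream positions 0 .. i-k (a simplification
    # of the attention pattern the abstract describes).
    MASK = "[MASK]"

    def build_streams(tokens: list, n_predict: int = 2):
        main = list(tokens)
        streams = []
        for k in range(1, n_predict + 1):
            stream = [
                # (masked token, list of attendable main-stream positions)
                (MASK, list(range(max(0, i - k + 1))))
                for i in range(len(tokens))
            ]
            streams.append(stream)
        return main, streams

    main, streams = build_streams(["a", "b", "c", "d"])
    # Stream 1 predicts the next token; stream 2 the token after that.
    print(streams[0][2])  # masked slot attending to main positions 0..1
    ```

    A fine-tuner could then keep only the streams matching a target model: all predicting streams for a NAR-leaning target, or just the first for an AR-leaning one.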
  • Patent number: 9477654
    Abstract: Functionality is described herein for transforming first and second symbolic linguistic items into respective first and second continuous-valued concept vectors, using a deep learning model, such as a convolutional latent semantic model. The model is designed to capture both the local and global linguistic contexts of the linguistic items. The functionality then compares the first concept vector with the second concept vector to produce a similarity measure. More specifically, the similarity measure expresses the closeness between the first and second linguistic items in a high-level semantic space. In one case, the first linguistic item corresponds to a query, and the second linguistic item may correspond to a phrase, or a document, or a keyword, or an ad, etc. In one implementation, the convolutional latent semantic model is produced in a training phase based on click-through data.
    Type: Grant
    Filed: April 1, 2014
    Date of Patent: October 25, 2016
    Assignee: Microsoft Corporation
    Inventors: Xiaodong He, Jianfeng Gao, Li Deng, Qiang Lou, Yunhong Zhou, Guowei Liu, Gregory T. Buehrer, Jianchang Mao, Yelong Shen, Ruofei Zhang
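    The comparison step in this abstract (two linguistic items mapped to concept vectors, then scored by closeness in a semantic space) can be sketched with a toy featurizer. The letter-trigram hashing below is a common DSSM-style stand-in; the patented model instead uses a learned convolutional network to produce the concept vectors.

    ```python
    # Toy sketch: map two text items to sparse "concept vectors" via
    # letter trigrams, then compare them with cosine similarity. The
    # real model produces dense vectors with a convolutional network.
    import math
    from collections import Counter

    def trigram_vector(text: str) -> Counter:
        """Sparse letter-trigram featurization with boundary markers."""
        padded = f"#{text.lower()}#"
        return Counter(padded[i:i + 3] for i in range(len(padded) - 2))

    def cosine(u: Counter, v: Counter) -> float:
        """Similarity measure between two concept vectors."""
        dot = sum(u[k] * v[k] for k in u)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    query, doc = "cheap flights", "cheap flight deals"
    print(round(cosine(trigram_vector(query), trigram_vector(doc)), 3))
    ```

    In the ad-matching setting the abstract mentions, the query vector would be scored against phrase, document, keyword, or ad vectors the same way.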
  • Publication number: 20150278200
    Abstract: Functionality is described herein for transforming first and second symbolic linguistic items into respective first and second continuous-valued concept vectors, using a deep learning model, such as a convolutional latent semantic model. The model is designed to capture both the local and global linguistic contexts of the linguistic items. The functionality then compares the first concept vector with the second concept vector to produce a similarity measure. More specifically, the similarity measure expresses the closeness between the first and second linguistic items in a high-level semantic space. In one case, the first linguistic item corresponds to a query, and the second linguistic item may correspond to a phrase, or a document, or a keyword, or an ad, etc. In one implementation, the convolutional latent semantic model is produced in a training phase based on click-through data.
    Type: Application
    Filed: April 1, 2014
    Publication date: October 1, 2015
    Applicant: Microsoft Corporation
    Inventors: Xiaodong He, Jianfeng Gao, Li Deng, Qiang Lou, Yunhong Zhou, Guowei Liu, Gregory T. Buehrer, Jianchang Mao, Yelong Shen, Ruofei Zhang