Patents by Inventor Tao Qian
Tao Qian has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250094138
Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of software code generation by large language models are described herein. In one embodiment, a method accesses a collection of software code samples that intermix sample code and human language description. The method generates prompts to an LLM to write code that performs as described by the human language description of the sample code. The method fine-tunes a large language model to generate software code based on a code generation loss function that evaluates code generated by the LLM in response to the prompts. The method generates an evaluation score for performance of the tuned large language model as a code generator based on code generation loss for second generated code. And, the method automatically signals that fine-tuning of the tuned large language model is complete in response to the evaluation score satisfying a threshold.
Type: Application
Filed: June 14, 2024
Publication date: March 20, 2025
Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
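The tune-evaluate-signal loop this abstract describes can be sketched as follows. This is an illustrative sketch only, not the patented implementation: `train_step` and `evaluation_score` are hypothetical stand-ins for one fine-tuning pass and the loss-derived scorer.

```python
# Sketch of a fine-tuning loop that stops once an evaluation score
# derived from a (code-generation) loss satisfies a threshold.

def train_step(loss: float) -> float:
    """Stand-in for one fine-tuning pass; the loss shrinks each epoch."""
    return loss * 0.8

def evaluation_score(loss: float) -> float:
    """Map a loss value into (0, 1]; lower loss gives a higher score."""
    return 1.0 / (1.0 + loss)

def fine_tune_until_threshold(initial_loss: float, threshold: float,
                              max_epochs: int = 100):
    loss = initial_loss
    for epoch in range(1, max_epochs + 1):
        loss = train_step(loss)
        score = evaluation_score(loss)
        if score >= threshold:
            # Corresponds to "automatically signals that fine-tuning
            # ... is complete" in the abstract.
            return epoch, score
    return max_epochs, evaluation_score(loss)

epochs, score = fine_tune_until_threshold(initial_loss=4.0, threshold=0.9)
print(epochs, score)
```

A real system would replace the stand-ins with gradient updates on the LLM and a scorer that evaluates generated code against held-out samples.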
-
Publication number: 20250094704
Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of text summarization for large language models are described herein. In one embodiment, a method accesses a collection of text samples. The text samples include a body of text and an example summary. The method fine-tunes a large language model (LLM) based on a loss function that compares the example summary and a summary generated by the LLM. The example and generated summaries are compared at sentence, paragraph, and/or article levels. The method generates an evaluation score for performance of the tuned LLM as a text summarizer based on a further comparison of a reference summary and a summary generated by the tuned LLM. The method then automatically determines to deploy the tuned LLM to a text summarization task in response to the evaluation score satisfying a threshold.
Type: Application
Filed: April 5, 2024
Publication date: March 20, 2025
Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
-
Publication number: 20250094687
Abstract: Techniques for generating repetition-free text using a large language model (LLM) are provided. In one technique, textual content that was generated by an LLM is accessed, where the textual content comprises a plurality of sub-components including a first sub-component and a second sub-component. A first embedding that represents the first sub-component is generated and a second embedding that represents the second sub-component is generated. Based on a similarity between the first embedding and the second embedding, it is determined whether the second sub-component is repetitious with respect to the first sub-component. In response to determining that the second sub-component is repetitious with respect to the first sub-component, at least a portion of the second sub-component is removed from the textual content.
Type: Application
Filed: June 28, 2024
Publication date: March 20, 2025
Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod Murli Mamtani
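The embedding-similarity test described above can be illustrated with a minimal sketch. This is not the patented method: a bag-of-words vector stands in for a learned embedding so the example is self-contained, and the cosine threshold is an arbitrary illustrative value.

```python
# Drop a sub-component (here, a sentence) when its embedding is too
# similar to one already kept.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words term-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def remove_repetitions(sentences, threshold=0.8):
    kept, kept_vecs = [], []
    for s in sentences:
        v = embed(s)
        # Keep the sentence only if it is dissimilar to everything kept so far.
        if all(cosine(v, kv) < threshold for kv in kept_vecs):
            kept.append(s)
            kept_vecs.append(v)
    return kept

text = [
    "The model generates a summary.",
    "A summary is produced for each section.",
    "The model generates a summary.",   # near-verbatim repeat, dropped
]
print(remove_repetitions(text))
```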
-
Publication number: 20250094816
Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of text generation for large language models are described herein. In one embodiment, a method accesses a collection of text samples. The text samples include a natural language text prompt that combines content and instructions. The method extracts the instructions from the text prompt. The method fine-tunes a large language model to generate text in natural language based on a text generation loss function that penalizes non-compliance with the extracted instructions by a generated text response to the text prompt. The method generates an evaluation score for performance of the tuned large language model as a text generator based on a value of the text generation loss function for a second generated text response. And, the method automatically signals that the fine-tuning of the tuned large language model is complete in response to the evaluation score satisfying a threshold.
Type: Application
Filed: April 30, 2024
Publication date: March 20, 2025
Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
-
Publication number: 20250094686
Abstract: Techniques for modifying a narrative point of view for content generated by a machine-learned model, such as a large language model (LLM), are provided. In one technique, a first textual content that was generated by an LLM is accessed. A narrative point of view (NPOV) detection operation is performed on a first portion of the first textual content to identify a first NPOV corresponding to the first portion of the first textual content. Based on an output, of the NPOV detection operation, that indicates that the first NPOV does not meet one or more NPOV criteria, the first portion of the first textual content is modified to generate a modified textual content. The modified textual content is submitted to the LLM, causing the LLM to generate a second textual content.
Type: Application
Filed: June 28, 2024
Publication date: March 20, 2025
Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod Murli Mamtani
-
Publication number: 20250094814
Abstract: Techniques are provided for fine-tuning large language models (LLMs) to reduce the instability of LLM outputs to prompts. In one technique, a plurality of prompts is stored. For each prompt of the plurality of prompts, a plurality of variants of that prompt is generated. A prompt-generating LLM is fine-tuned based on that prompt and the plurality of variants. Each variant-prompt association (where the variant is generated based on the prompt and has an identical or similar meaning) is a training sample that is used to train or fine-tune the prompt-generating LLM. The prompt-generating LLM is configured to generate standardized prompts based on input prompts. In another technique, a response-generating LLM is fine-tuned based on sets of training samples, each training sample in a set comprising a different variant of a prompt and a response that the response-generating LLM generated based on the prompt.
Type: Application
Filed: September 4, 2024
Publication date: March 20, 2025
Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod M Mamtani
-
Publication number: 20250094716
Abstract: Techniques for language model (LM) summarization using semantic clustering are provided. In one technique, a plurality of concepts reflected in text data is identified. A plurality of concept clusters is generated based on similarity among the plurality of concepts. Thus, some concept clusters may include multiple concepts. For each concept cluster of the plurality of concept clusters, an LM generates a summary of the text corresponding to that concept cluster. A summary response of the text data is generated by aggregating the summary of each concept cluster of the plurality of concept clusters. In another technique, an LM generates a summary based on text data. A first set of concepts reflected in the summary is identified and a second set of concepts reflected in the text data is identified. A difference between the two sets may indicate that the summary is missing one or more concepts.
Type: Application
Filed: May 7, 2024
Publication date: March 20, 2025
Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod M. Mamtani
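The cluster-then-summarize flow above can be sketched minimally. This is not the patented technique: token-overlap (Jaccard) similarity stands in for semantic similarity, and `summarize_cluster` is a hypothetical stand-in for an LM call.

```python
# Greedily cluster "concepts" by token overlap, summarize each cluster,
# then aggregate the per-cluster summaries into one response.

def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def cluster_concepts(concepts, threshold=0.3):
    clusters = []
    for c in concepts:
        for cl in clusters:
            # Join the first cluster whose seed concept is similar enough.
            if jaccard(c, cl[0]) >= threshold:
                cl.append(c)
                break
        else:
            clusters.append([c])
    return clusters

def summarize_cluster(cluster):
    # Stand-in for an LM call: keep the first concept as the "summary".
    return cluster[0]

def summarize(concepts):
    clusters = cluster_concepts(concepts)
    return " ".join(summarize_cluster(cl) for cl in clusters)

concepts = [
    "pricing changes for the cloud tier",
    "cloud tier pricing changes announced",
    "new security audit process",
]
print(summarize(concepts))
```

The two pricing concepts land in one cluster, so the aggregated response carries one summary per distinct topic rather than one per sentence.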
-
Publication number: 20250094865
Abstract: Techniques for ensuring that language models follow instructions indicated in prompts are provided. In one technique, a first language model generates a response based on a prompt. A set of instructions in the prompt is identified. For each instruction in the set, a second language model determines whether the response indicates that the first language model followed the instruction. In another technique, for each prompt of a plurality of prompts: (1) a first language model generates a response based on the prompt; (2) multiple instructions are identified based on the prompt; (3) a second language model generates, based on the multiple instructions, an output that indicates whether the first language model followed each instruction; and (4) the prompt, the response, and the multiple instructions are stored in a training instance. The first language model is fine-tuned based on the training instances.
Type: Application
Filed: April 8, 2024
Publication date: March 20, 2025
Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod M. Mamtani
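The per-instruction check can be sketched as a loop over instructions with a judge. This is illustrative only, not the patented approach: a simple keyword rule stands in for the second language model, and the two instruction formats are hypothetical.

```python
# Check, instruction by instruction, whether a response complies.

def judge(instruction: str, response: str) -> bool:
    """Stand-in for the second language model: rule-based checks for the
    two toy instruction types used in the demo below."""
    if instruction.startswith("limit to"):
        max_words = int(instruction.split()[2])
        return len(response.split()) <= max_words
    if instruction.startswith("mention"):
        return instruction.split()[1] in response.lower()
    return True  # unknown instruction types pass by default

def followed_instructions(instructions, response):
    """Map each instruction to whether the response followed it."""
    return {ins: judge(ins, response) for ins in instructions}

instructions = ["limit to 8 words", "mention pricing"]
response = "Pricing stays the same this quarter."
print(followed_instructions(instructions, response))
```

In the second technique of the abstract, each (prompt, response, instructions) triple plus these verdicts would become a training instance for fine-tuning the first model.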
-
Publication number: 20250094866
Abstract: Techniques for correcting hallucinations produced by generative large language models (LLMs) are provided. In one technique, a computing system accesses first output generated by an LLM. The computing system identifies, within the first output, a plurality of assertions. The computing system determines that a first assertion in the plurality of assertions is false. The computing system generates a prompt that indicates that the first assertion is false. The computing system submits the prompt as input to the LLM. The computing system accesses second output that is generated by the LLM, where the second output includes a second assertion that is different than the first assertion and corresponds to the first assertion.
Type: Application
Filed: May 30, 2024
Publication date: March 20, 2025
Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod Murli Mamtani
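The extract-check-reprompt cycle can be sketched as follows. This is not the patented system: the fact store, sentence-level assertion splitter, and prompt template are hypothetical stand-ins for a retriever, an assertion extractor, and a live model call.

```python
# Identify assertions in model output, check each against a toy fact
# store, and build a corrective prompt for any assertion judged false.

FACTS = {
    "water boils at 100 c at sea level": True,
    "the moon is made of cheese": False,
}

def extract_assertions(output: str):
    """Toy extractor: treat each sentence as one assertion."""
    return [s.strip() for s in output.split(".") if s.strip()]

def is_false(assertion: str) -> bool:
    # Unknown assertions are conservatively flagged as false here.
    return FACTS.get(assertion.lower(), False) is False

def corrective_prompt(output: str):
    """Return a prompt flagging false assertions, or None if all pass."""
    false_ones = [a for a in extract_assertions(output) if is_false(a)]
    if not false_ones:
        return None
    return ("The following statements in your answer are false; "
            "please revise them: " + "; ".join(false_ones))

out = "Water boils at 100 C at sea level. The moon is made of cheese."
print(corrective_prompt(out))
```

Submitting the returned prompt back to the model would yield the "second output" of the abstract, with a revised assertion in place of the false one.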
-
Publication number: 20250097171
Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of chatbot performance for large language models are described herein. In one embodiment, a method accesses a collection of sample conversations between two entities. An individual sample conversation includes one or more rounds of a natural language example prompt by a querent and an example response by an agent. The method fine-tunes an LLM to generate responses in natural language based on a chatbot loss function that evaluates first responses generated by the LLM to the example prompts by the querent. The method generates an evaluation score for performance of the tuned LLM as a chatbot based on second responses generated by the tuned LLM to test prompts from a test conversation. And, the method automatically signals that the fine-tuning of the tuned LLM is complete in response to the evaluation score satisfying a threshold.
Type: Application
Filed: July 10, 2024
Publication date: March 20, 2025
Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
-
Publication number: 20250086851
Abstract: Disclosed are an image processing method and an electronic device. The image processing method includes: displaying a first screen, where the first screen includes a first control; detecting a first operation on the first control; determining a first mapping in a three-dimensional lookup table in response to the first operation, where the first mapping is corresponding to the first control; converting a to-be-processed image in a first color space into a first image in a second color space, where the to-be-processed image is an image obtained in the first color space; processing the first image according to the first mapping to obtain a second image; and converting the second image in the second color space into a third image in the first color space.
Type: Application
Filed: May 7, 2022
Publication date: March 13, 2025
Inventors: Meng JIN, Tao SHAO, Dongmiao XI, Junqin SU, Yanlin QIAN
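The core lookup step of a three-dimensional LUT can be sketched minimally. This is illustrative only, not the claimed implementation: it snaps each channel to the nearest LUT grid point, whereas production pipelines interpolate (e.g. trilinearly) and perform the color-space conversions before and after the lookup, as the abstract describes.

```python
# Apply a 3D lookup table to an RGB pixel via nearest-grid-point lookup.

def identity_lut(size=2):
    """A size^3 grid mapping each corner of the RGB cube to itself,
    with channel values in [0, 255]."""
    step = 255 / (size - 1)
    return {(r, g, b): (round(r * step), round(g * step), round(b * step))
            for r in range(size)
            for g in range(size)
            for b in range(size)}

def apply_lut(pixel, lut, size=2):
    """Snap each channel to the nearest grid index and look it up."""
    step = 255 / (size - 1)
    key = tuple(min(size - 1, round(c / step)) for c in pixel)
    return lut[key]

lut = identity_lut()
print(apply_lut((200, 30, 255), lut))
```

Loading a different table in place of `identity_lut` (one per "first control" in the abstract) is what changes the look of the processed image.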
-
Publication number: 20250077868
Abstract: Systems, methods, and other embodiments associated with contribution metric-based pruning of a neural network are described. In one embodiment, an example method includes accessing a trained convolutional neural network that has a plurality of channels. The neural network is to be evaluated for pruning of the channels. The example method may also include determining contribution metrics for the channels by measuring changes in error of the convolutional neural network with individual channels removed in turn. The contribution metrics are determined based at least in part on higher-order analysis of the changes. And, the example method may also include pruning out of the convolutional neural network a set of the channels for which the contribution metrics do not satisfy a threshold.
Type: Application
Filed: August 29, 2023
Publication date: March 6, 2025
Inventors: Baopu LI, Tao SHENG, Jun QIAN
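The remove-one-channel-at-a-time measurement can be sketched as follows. This is not the patented method: a toy "network" that sums channel outputs stands in for a real CNN, and the first-order error delta stands in for the higher-order analysis the abstract mentions.

```python
# Score each channel by the error increase when it is removed, then
# prune channels whose contribution falls below a threshold.

def network_error(channels, disabled=None):
    """Stand-in error: distance of the summed channel outputs from a target."""
    target = 10.0
    total = sum(v for i, v in enumerate(channels) if i != disabled)
    return abs(target - total)

def contribution_metrics(channels):
    base = network_error(channels)
    # Contribution of channel i = how much the error grows without it.
    return [network_error(channels, disabled=i) - base
            for i in range(len(channels))]

def prune(channels, threshold=0.5):
    metrics = contribution_metrics(channels)
    return [v for v, m in zip(channels, metrics) if m >= threshold]

channels = [4.0, 3.0, 2.9, 0.1]   # the last channel barely matters
print(prune(channels))
```

With a real network, each "removal" would zero a channel's activations and re-measure validation error, which is far costlier but follows the same loop.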
-
Patent number: 12226784
Abstract: The present invention provides a magnetic bead purification system, including: a housing; a liquid path treatment system provided inside the housing, the liquid path treatment system being connectable to a reagent barrel and a waste liquid barrel; a sample addition needle group connected to the liquid path treatment system, the sample addition needle group being movable within the housing and connected to the liquid path treatment system, so as to receive a reagent from the liquid path treatment system or to discharge a waste liquid to the liquid path treatment system; a purification magnetic separation system, including a magnetic element, the purification magnetic separation system being controllable to apply a lateral magnetic force to a purification treatment position inside the housing or stop the application of the magnetic force by the magnetic element; and a purification station system movable between a purification treatment position inside the housing and a loading position outside the housing.
Type: Grant
Filed: April 26, 2019
Date of Patent: February 18, 2025
Assignee: Nanjing GenScript Biotech Co., Ltd.
Inventors: Jinxin Zhu, Ruina He, Hong Qian, Tao Bai, Deming Li, Cheng Zheng, Guodong Chen
-
Patent number: 12204590
Abstract: An information processing method includes receiving first request information entered by a user, determining a first task engine for the first request information, where a first slot is set in the first task engine, extracting key information from the first request information based on the first slot, and if the key information fails to be extracted from the first request information based on the first slot, or if the key information is extracted from the first request information based on the first slot, but the extracted key information does not meet a condition, obtaining target key information from a shared parameter list of the user.
Type: Grant
Filed: May 7, 2020
Date of Patent: January 21, 2025
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Zhefeng Yan, Lifeng Shang, Tao Cai, Li Qian
-
Patent number: 11979436
Abstract: A communication method comprises that if a policy control device does not support setup of an Internet Protocol (IP) multimedia subsystem (IMS) default bearer during setup of an IMS default bearer for a terminal, the control plane gateway sends second indication information to a user plane gateway, where the second indication information indicates the control plane gateway bypasses the policy control device. When the user plane gateway receives an IMS session request from the terminal and determines that the control plane gateway bypasses the policy control device, the user plane gateway sends first indication information to the control plane gateway, and the first indication information indicates the control plane gateway to send, to the policy control device, a first request to request to establish a mapping relationship between the terminal and the control plane gateway such that an IMS session can be set up between terminals.
Type: Grant
Filed: June 30, 2022
Date of Patent: May 7, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Zexu Huang, Guojun Wu, Fan Yang, Wenge Zhang, Tao Qian, Ridong Xu, Shubing Zhang
-
Publication number: 20220337637
Abstract: A communication method comprises that if a policy control device does not support setup of an Internet Protocol (IP) multimedia subsystem (IMS) default bearer during setup of an IMS default bearer for a terminal, the control plane gateway sends second indication information to a user plane gateway, where the second indication information indicates the control plane gateway bypasses the policy control device. When the user plane gateway receives an IMS session request from the terminal and determines that the control plane gateway bypasses the policy control device, the user plane gateway sends first indication information to the control plane gateway, and the first indication information indicates the control plane gateway to send, to the policy control device, a first request to request to establish a mapping relationship between the terminal and the control plane gateway such that an IMS session can be set up between terminals.
Type: Application
Filed: June 30, 2022
Publication date: October 20, 2022
Inventors: Zexu Huang, Guojun Wu, Fan Yang, Wenge Zhang, Tao Qian, Ridong Xu, Shubing Zhang
-
Patent number: 11163869
Abstract: A method, a system and a computer program product are provided for identity authentication. Personal identity information indicative of an identity is received. A plurality of questions is presented, each of the questions being related to an aspect of features of the password associated with the personal identity information. A responsive answer to the questions, including individual answers to the questions, is received. The identity is authenticated in response to determining that the responsive answer is correct.
Type: Grant
Filed: October 27, 2017
Date of Patent: November 2, 2021
Assignee: International Business Machines Corporation
Inventors: Xin He, Qu Jiang, Tao Qian, Tan Sheng
-
Patent number: 10452422
Abstract: A method, a corresponding apparatus and device for deploying a virtual machine instance in order to lower requirements for a communication capability of a virtualized value-added server (VAS) and improve processing efficiency of a service chain, where the method includes obtaining communication relationships between VAS instances and service switch (SSW) instances from a service template, where the VAS instances and the SSW instances provide services in a service chain, and the service chain and the communication relationships between the VAS instances and the SSW instances are defined in the service template, and deploying, according to the communication relationships, an SSW instance and a VAS instance that need to communicate with each other on the same physical machine.
Type: Grant
Filed: June 29, 2017
Date of Patent: October 22, 2019
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Yueyun Lou, Jidong Zhang, Tao Qian, Meisheng Liu, Chuntao Wang
-
Patent number: D861924
Type: Grant
Filed: June 15, 2018
Date of Patent: October 1, 2019
Assignee: ICAN Inc.
Inventor: Tao Qian
-
Patent number: D864425
Type: Grant
Filed: June 15, 2018
Date of Patent: October 22, 2019
Assignee: ICAN Inc.
Inventor: Tao Qian