Patents by Inventor Tao Hu
Tao Hu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250097171
Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of chatbot performance for large language models (LLMs) are described herein. In one embodiment, a method accesses a collection of sample conversations between two entities. An individual sample conversation includes one or more rounds of a natural language example prompt by a querent and an example response by an agent. The method fine-tunes an LLM to generate responses in natural language based on a chatbot loss function that evaluates first responses generated by the LLM to the example prompts by the querent. The method generates an evaluation score for performance of the tuned LLM as a chatbot based on second responses generated by the tuned LLM to test prompts from a test conversation. The method then automatically signals that the fine-tuning of the tuned LLM is complete in response to the evaluation score satisfying a threshold.
Type: Application
Filed: July 10, 2024
Publication date: March 20, 2025
Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
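The loop this abstract describes (tune on sample conversations, score the tuned model on a held-out test conversation, stop once a threshold is met) can be sketched as follows. This is a minimal illustration, not the patented method: `chatbot_loss`, `evaluate_chatbot`, and the injected `respond`/`update` hooks are toy stand-ins.

```python
# Minimal sketch of the threshold-gated tuning loop described above.
# The loss, scorer, and update hooks are toy stand-ins, not the patented method.
from difflib import SequenceMatcher

def chatbot_loss(generated: str, example: str) -> float:
    """Toy loss: 1 minus string similarity to the example response."""
    return 1.0 - SequenceMatcher(None, generated, example).ratio()

def evaluate_chatbot(respond, test_rounds) -> float:
    """Toy evaluation score: mean similarity of responses to test prompts."""
    sims = [1.0 - chatbot_loss(respond(p), r) for p, r in test_rounds]
    return sum(sims) / len(sims)

def fine_tune(respond, update, sample_rounds, test_rounds,
              threshold=0.9, max_epochs=10) -> float:
    for _ in range(max_epochs):
        for prompt, example in sample_rounds:            # first responses
            update(chatbot_loss(respond(prompt), example))
        score = evaluate_chatbot(respond, test_rounds)   # second responses
        if score >= threshold:
            return score  # signal: fine-tuning is complete
    return score
```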
-
Publication number: 20250094816
Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of text generation for large language models are described herein. In one embodiment, a method accesses a collection of text samples. The text samples include a natural language text prompt that combines content and instructions. The method extracts the instructions from the text prompt. The method fine-tunes a large language model to generate text in natural language based on a text generation loss function that penalizes non-compliance with the extracted instructions by a generated text response to the text prompt. The method generates an evaluation score for performance of the tuned large language model as a text generator based on a value of the text generation loss function for a second generated text response. The method then automatically signals that the fine-tuning of the tuned large language model is complete in response to the evaluation score satisfying a threshold.
Type: Application
Filed: April 30, 2024
Publication date: March 20, 2025
Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
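A rough sketch of the two pieces the abstract names, instruction extraction and a compliance-penalizing loss term, is below. The cue words and the single word-limit check are invented placeholders, not the patented technique.

```python
# Hypothetical sketch: pull instruction-like sentences out of a prompt and
# penalize a generated response for apparent non-compliance. The cue list and
# the single word-limit check are invented placeholders.
import re

def extract_instructions(prompt: str) -> list[str]:
    """Toy rule: a sentence with an imperative cue counts as an instruction."""
    cues = ("must", "do not", "use", "limit", "write")
    sentences = re.split(r"(?<=[.!?])\s+", prompt)
    return [s for s in sentences if any(c in s.lower() for c in cues)]

def compliance_penalty(response: str, instructions: list[str]) -> float:
    """Toy loss term: fraction of instructions the response visibly violates."""
    violations = 0
    for inst in instructions:
        match = re.search(r"limit.*?(\d+)\s+words", inst.lower())
        if match and len(response.split()) > int(match.group(1)):
            violations += 1
    return violations / max(len(instructions), 1)

penalty = compliance_penalty("A reply of nine words exactly for this test.",
                             ["Limit the answer to 5 words."])
print(penalty)  # 1.0: the single instruction was violated
```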
-
Publication number: 20250089910
Abstract: The present disclosure relates to a crib, which includes a body, a bed frame, and a support base. The body has an accommodation space configured to accommodate a child and includes a slide rod. The bed frame is configured to support the body. The support base is fixed to the bed frame; the slide rod extends through the support base and is slidable relative to it. The crib is configured for a child to sleep or play in, can function as a cradle that helps calm the child, and has a simple, easy-to-operate structure.
Type: Application
Filed: January 18, 2023
Publication date: March 20, 2025
Inventors: Junjie Hu, Tao Wang
-
Publication number: 20250094704
Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of text summarization for large language models are described herein. In one embodiment, a method accesses a collection of text samples. The text samples include a body of text and an example summary. The method fine-tunes a large language model (LLM) based on a loss function that compares the example summary and a summary generated by the LLM. The example and generated summaries are compared at sentence, paragraph, and/or article levels. The method generates an evaluation score for performance of the tuned LLM as a text summarizer based on a further comparison of a reference summary and a summary generated by the tuned LLM. The method then automatically determines to deploy the tuned LLM to a text summarization task in response to the evaluation score satisfying a threshold.
Type: Application
Filed: April 5, 2024
Publication date: March 20, 2025
Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
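The multi-level comparison the abstract describes can be illustrated with a toy loss. String-ratio similarity and the level weights here are assumptions for illustration only, not the patented loss function.

```python
# Hypothetical sketch of a loss comparing summaries at sentence, paragraph,
# and article levels. Similarity is a toy string ratio; weights are invented.
from difflib import SequenceMatcher

def _sim(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def summarization_loss(example: str, generated: str,
                       weights=(0.3, 0.3, 0.4)) -> float:
    ex_sents = [s for s in example.split(". ") if s] or [""]
    gen_sents = [s for s in generated.split(". ") if s] or [""]
    # Sentence level: best match for each example sentence.
    sent = sum(max(_sim(e, g) for g in gen_sents) for e in ex_sents) / len(ex_sents)
    # Paragraph level: align paragraphs pairwise.
    ex_paras, gen_paras = example.split("\n\n"), generated.split("\n\n")
    para = sum(_sim(e, g) for e, g in zip(ex_paras, gen_paras)) / len(ex_paras)
    # Article level: compare the whole texts.
    art = _sim(example, generated)
    w_sent, w_para, w_art = weights
    return 1.0 - (w_sent * sent + w_para * para + w_art * art)
```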
-
Publication number: 20250094138
Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of software code generation by large language models are described herein. In one embodiment, a method accesses a collection of software code samples that intermix sample code and human language description. The method generates prompts to an LLM to write code that performs as described by the human language description of the sample code. The method fine-tunes a large language model to generate software code based on a code generation loss function that evaluates code generated by the LLM in response to the prompts. The method generates an evaluation score for performance of the tuned large language model as a code generator based on code generation loss for second generated code. The method then automatically signals that fine-tuning of the tuned large language model is complete in response to the evaluation score satisfying a threshold.
Type: Application
Filed: June 14, 2024
Publication date: March 20, 2025
Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
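A minimal sketch of the prompt construction and a toy code-generation loss follows; the parse check and similarity penalty are invented stand-ins for the patented loss function.

```python
# Hypothetical sketch: build a code-writing prompt from the human-language
# description of a sample, then score generated code with a toy loss that
# fails unparseable code and otherwise penalizes distance from the sample.
import ast
from difflib import SequenceMatcher

def build_prompt(description: str) -> str:
    return f"Write Python code that does the following:\n{description}\n"

def code_generation_loss(generated: str, sample: str) -> float:
    try:
        ast.parse(generated)          # generated code must at least parse
    except SyntaxError:
        return 1.0                    # maximum penalty for invalid code
    return 1.0 - SequenceMatcher(None, generated, sample).ratio()

print(code_generation_loss("def add(a, b): return a + b",
                           "def add(a, b):\n    return a + b"))
```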
-
Publication number: 20250094814
Abstract: Techniques are provided for fine-tuning large language models (LLMs) to reduce the instability of LLM outputs across prompt variations. In one technique, a plurality of prompts is stored. For each prompt of the plurality of prompts, a plurality of variants of that prompt is generated. A prompt-generating LLM is fine-tuned based on that prompt and the plurality of variants. Each variant-prompt association (where the variant is generated based on the prompt and has an identical or similar meaning) is a training sample that is used to train or fine-tune the prompt-generating LLM. The prompt-generating LLM is configured to generate standardized prompts based on input prompts. In another technique, a response-generating LLM is fine-tuned based on sets of training samples, each training sample in a set comprising a different variant of a prompt and a response that the response-generating LLM generated based on the prompt.
Type: Application
Filed: September 4, 2024
Publication date: March 20, 2025
Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod M Mamtani
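The variant-to-prompt training pairs the abstract describes might be assembled as in the sketch below; `make_variants` is a trivial stub where a real system would use an LLM paraphraser, so everything here is a hypothetical illustration.

```python
# Hypothetical sketch: expand each stored prompt into (variant, prompt)
# training pairs for the prompt-standardizing model. A real system would use
# an LLM paraphraser; this stub makes trivial surface rewrites.
def make_variants(prompt: str) -> list[str]:
    return [prompt.lower(), prompt.rstrip("?.") + "?", "Please " + prompt]

def build_training_samples(prompts: list[str]) -> list[tuple[str, str]]:
    samples = []
    for prompt in prompts:
        for variant in make_variants(prompt):
            samples.append((variant, prompt))  # variant maps to canonical prompt
    return samples

for variant, canonical in build_training_samples(["What is the refund policy"]):
    print(f"{variant!r} -> {canonical!r}")
```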
-
Publication number: 20250094716
Abstract: Techniques for language model (LM) summarization using semantical clustering are provided. In one technique, a plurality of concepts reflected in text data is identified. A plurality of concept clusters is generated based on similarity among the plurality of concepts. Thus, some concept clusters may include multiple concepts. For each concept cluster of the plurality of concept clusters, an LM generates a summary of the text corresponding to that concept cluster. A summary response of the text data is generated by aggregating the summary of each concept cluster of the plurality of concept clusters. In another technique, an LM generates a summary based on text data. A first set of concepts reflected in the summary is identified and a second set of concepts reflected in the text data is identified. A difference between the two sets may indicate that the summary is missing one or more concepts.
Type: Application
Filed: May 7, 2024
Publication date: March 20, 2025
Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod M. Mamtani
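A crude sketch of the cluster-then-summarize flow follows. Word-overlap clustering and the first-sentence "summary" stub are stand-ins for the concept clustering and LM summarization steps the abstract describes.

```python
# Hypothetical sketch of cluster-then-summarize: group sentences by word
# overlap as a crude stand-in for concept clustering, "summarize" each
# cluster with a stub, and aggregate. Not the patented pipeline.
def cluster_sentences(sentences: list[str], threshold: float = 0.3) -> list[list[str]]:
    clusters: list[list[str]] = []
    for sentence in sentences:
        words = set(sentence.lower().split())
        for cluster in clusters:
            seed = set(cluster[0].lower().split())
            if len(words & seed) / len(words | seed) >= threshold:
                cluster.append(sentence)   # joins the first similar cluster
                break
        else:
            clusters.append([sentence])    # starts a new concept cluster
    return clusters

def summarize_cluster(sentences: list[str]) -> str:
    return sentences[0]  # stub: first sentence stands in for an LM summary

def cluster_summarize(text: str) -> str:
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    clusters = cluster_sentences(sentences)
    return ". ".join(summarize_cluster(c) for c in clusters) + "."
```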
-
Publication number: 20250094686
Abstract: Techniques for modifying a narrative point of view for content generated by a machine-learned model, such as a large language model (LLM), are provided. In one technique, a first textual content that was generated by an LLM is accessed. A narrative point of view (NPOV) detection operation is performed on a first portion of the first textual content to identify a first NPOV corresponding to the first portion of the first textual content. Based on an output, of the NPOV detection operation, that indicates that the first NPOV does not meet one or more NPOV criteria, the first portion of the first textual content is modified to generate a modified textual content. The modified textual content is submitted to the LLM, causing the LLM to generate a second textual content.
Type: Application
Filed: June 28, 2024
Publication date: March 20, 2025
Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod Murli Mamtani
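The NPOV detection step might be approximated as below; pronoun counting and the third-person target are illustrative assumptions, not the patented detector.

```python
# Hypothetical sketch: classify the narrative point of view of each portion by
# pronoun counts and flag portions that miss a target POV for regeneration.
import re

def detect_pov(text: str) -> str:
    counts = {
        "first": len(re.findall(r"\b(?:I|we|my|our)\b", text, re.I)),
        "second": len(re.findall(r"\b(?:you|your)\b", text, re.I)),
        "third": len(re.findall(r"\b(?:he|she|they|it|their)\b", text, re.I)),
    }
    return max(counts, key=counts.get)

def portions_to_rewrite(portions: list[str], target: str = "third") -> list[str]:
    """Portions failing the POV criterion get modified and resubmitted."""
    return [p for p in portions if detect_pov(p) != target]

print(portions_to_rewrite(["I think we should go.", "She went home."]))
```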
-
Publication number: 20250094687
Abstract: Techniques for generating repetition-free text using a large language model (LLM) are provided. In one technique, textual content that was generated by an LLM is accessed, where the textual content comprises a plurality of sub-components including a first sub-component and a second sub-component. A first embedding that represents the first sub-component is generated and a second embedding that represents the second sub-component is generated. Based on a similarity between the first embedding and the second embedding, it is determined whether the second sub-component is repetitious with respect to the first sub-component. In response to determining that the second sub-component is repetitious with respect to the first sub-component, at least a portion of the second sub-component is removed from the textual content.
Type: Application
Filed: June 28, 2024
Publication date: March 20, 2025
Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod Murli Mamtani
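The embedding-similarity filter described here can be sketched with a toy bag-of-words embedding; a real system would use learned sentence embeddings, so treat this as an illustration of the comparison logic only.

```python
# Hypothetical sketch: represent each sub-component with a toy bag-of-words
# embedding and drop later sub-components too similar to one already kept.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def remove_repetition(subcomponents: list[str], threshold: float = 0.8) -> list[str]:
    kept: list[str] = []
    for sub in subcomponents:
        if all(cosine(embed(sub), embed(k)) < threshold for k in kept):
            kept.append(sub)  # not repetitious relative to anything kept
    return kept

print(remove_repetition(["the cat sat", "the cat sat down", "dogs bark"]))
# -> ['the cat sat', 'dogs bark']
```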
-
Publication number: 20250094866
Abstract: Techniques are provided for correcting hallucinations produced by generative large language models (LLMs). In one technique, a computing system accesses first output generated by an LLM. The computing system identifies, within the first output, a plurality of assertions. The computing system determines that a first assertion in the plurality of assertions is false. The computing system generates a prompt that indicates that the first assertion is false. The computing system submits the prompt as input to the LLM. The computing system accesses second output that is generated by the LLM, where the second output includes a second assertion that is different from the first assertion and corresponds to the first assertion.
Type: Application
Filed: May 30, 2024
Publication date: March 20, 2025
Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod Murli Mamtani
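A minimal sketch of the correct-and-regenerate loop, assuming an injected fact checker (`fact_is_false`) and a callable `llm`, both hypothetical stubs rather than the patented components:

```python
# Hypothetical sketch: split the first output into assertions, check each with
# an injected fact checker, and prompt the model to regenerate false ones.
def correct_hallucinations(llm, fact_is_false, first_output: str) -> str:
    assertions = [s.strip() for s in first_output.split(".") if s.strip()]
    for assertion in assertions:
        if fact_is_false(assertion):
            prompt = (f'The assertion "{assertion}" is false. '
                      "Regenerate the passage with a corrected assertion.")
            return llm(prompt)  # second output replaces the false assertion
    return first_output  # no false assertions found

corrected = correct_hallucinations(
    llm=lambda p: "Water boils at 100 C at sea level.",
    fact_is_false=lambda a: "90" in a,
    first_output="Water boils at 90 C at sea level.",
)
print(corrected)
```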
-
Publication number: 20250094865
Abstract: Techniques for ensuring that language models follow instructions indicated in prompts are provided. In one technique, a first language model generates a response based on a prompt. A set of instructions in the prompt is identified. For each instruction in the set, a second language model determines whether the response indicates that the first language model followed the instruction. In another technique, for each prompt of a plurality of prompts: (1) a first language model generates a response based on the prompt; (2) multiple instructions are identified based on the prompt; (3) a second language model generates, based on the multiple instructions, an output that indicates that the first language model followed each instruction; and (4) the prompt, the response, and the multiple instructions are stored in a training instance. The first language model is fine-tuned based on the training instances.
Type: Application
Filed: April 8, 2024
Publication date: March 20, 2025
Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod M. Mamtani
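The judge-based collection of training instances might look like the following sketch; `generate`, `judge`, and `extract_instructions` are injected stubs, not the patented components.

```python
# Hypothetical sketch: a judge model checks whether a response followed each
# instruction in its prompt; compliant triples become fine-tuning instances.
def collect_training_instances(generate, judge, extract_instructions, prompts):
    instances = []
    for prompt in prompts:
        response = generate(prompt)
        instructions = extract_instructions(prompt)
        # Keep only triples where the judge confirms every instruction held.
        if all(judge(instr, response) for instr in instructions):
            instances.append((prompt, response, instructions))
    return instances
```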
-
Publication number: 20250095637
Abstract: A method includes receiving a textual prompt in a first language and obtaining a fine-tuned prompt embedding configured to guide a large language model (LLM) to generate text in a target language from textual prompts in the first language. The method also includes processing, using the LLM, the textual prompt conditioned on the fine-tuned prompt embedding to generate output text in the target language and concatenating the textual prompt and the generated output text to provide an unspoken textual utterance. The method also includes training a multilingual automatic speech recognition (ASR) model to learn how to recognize speech in the target language by injecting the unspoken textual utterance into a text encoder associated with the multilingual ASR model.
Type: Application
Filed: September 16, 2024
Publication date: March 20, 2025
Applicant: Google LLC
Inventors: Ke Hu, Tara N. Sainath, Bo Li, Yu Zhang, Yong Cheng, Tao Wang, Yujing Zhang, Frederick Liu
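As a data-flow sketch only (no real LLM, embedding, or encoder), the concatenation and injection steps might look like this; both callables are hypothetical stubs and the function names are invented.

```python
# Hypothetical sketch of the data flow: generate target-language text
# conditioned on a tuned prompt embedding, concatenate prompt and output into
# an "unspoken textual utterance", and feed it to the ASR text encoder.
def make_unspoken_utterance(llm, prompt_embedding, textual_prompt: str) -> str:
    output_text = llm(textual_prompt, prompt_embedding)  # target-language text
    return textual_prompt + " " + output_text            # concatenation step

def inject_into_text_encoder(text_encoder, utterances: list[str]) -> list:
    return [text_encoder(u) for u in utterances]  # features for ASR training
```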
-
Patent number: 12252695
Abstract: Disclosed are a peony PoWOX4 gene and applications of its encoded protein, belonging to the field of biotechnology. According to the present disclosure, Arabidopsis thaliana is used as a model plant, and the peony PoWOX4 gene is transformed into Arabidopsis thaliana to promote early bolting, flowering, and vegetative growth.
Type: Grant
Filed: September 17, 2024
Date of Patent: March 18, 2025
Assignee: INTERNATIONAL CENTRE FOR BAMBOO AND RATTAN
Inventors: Wenbo Zhang, Yanting Chang, Yanjun Ma, Tao Hu, Zehui Jiang, Yayun Deng, Yufei Meng, Xue Zhang, Mengsi Xia
-
Patent number: 12252137
Abstract: The present disclosure relates to vehicle positioning methods, apparatus, controllers, intelligent vehicles, and systems. One example vehicle positioning method includes obtaining a first relative pose between a first vehicle and a help-providing object, obtaining a global pose of the help-providing object, and calculating a global pose of the first vehicle based on the first relative pose and the global pose.
Type: Grant
Filed: July 14, 2022
Date of Patent: March 18, 2025
Assignee: Shenzhen Yinwang Intelligent Technologies Co., Ltd.
Inventors: Yangjie Pan, Weilong Hu, Xupeng Li, Tao Ding
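The calculation implied here is a composition of the helper's global pose with the measured relative pose. Below is a 2D (SE(2)) sketch, assuming planar poses as (x, y, heading); a production system would work in SE(3) with uncertainty, so this is an illustration of the geometry only.

```python
# Hypothetical 2D pose-composition sketch: the vehicle's global pose is the
# helper's global pose composed with the measured relative pose.
from math import cos, sin

def compose(global_helper, relative):
    """Each pose is (x, y, heading_radians)."""
    gx, gy, gt = global_helper
    rx, ry, rt = relative
    x = gx + rx * cos(gt) - ry * sin(gt)   # rotate relative offset into the
    y = gy + rx * sin(gt) + ry * cos(gt)   # global frame, then translate
    return (x, y, gt + rt)

# Example: helper at (10, 5) heading 0; vehicle measured 2 m behind it.
vehicle_pose = compose((10.0, 5.0, 0.0), (-2.0, 0.0, 0.0))
print(vehicle_pose)  # -> (8.0, 5.0, 0.0)
```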
-
Publication number: 20250082026
Abstract: A power-supply mode switching circuit and an aerosol generating device are provided. The power-supply mode switching circuit includes an external-power-source power-supply circuit, a protection circuit, and a battery power-supply circuit. A node where the external-power-source power-supply circuit is connected to the protection circuit serves as a power-source input end. A node where the external-power-source power-supply circuit is connected to the battery power-supply circuit serves as a power supply end. The protection circuit is configured to control the battery power-supply circuit to be cut off when the power-source input end is connected to the external power source, to make the external power source supply power to the power supply end. The protection circuit is further configured to control the battery power-supply circuit to be conducted when the power-source input end is not connected to the external power source, to make the battery supply power to the power supply end.
Type: Application
Filed: May 16, 2024
Publication date: March 13, 2025
Applicant: SHENZHEN WOODY VAPES TECHNOLOGY CO., LTD.
Inventors: Hong SHANG, Tao ZHANG, Yong ZHOU, Yanming NIU, Lingzhi XIAO, Shuangliang KUANG, Dan ZHU, Hua FANG, Yalei TIAN, Weinan JIANG, Zhipeng HU
-
Patent number: 12249271
Abstract: Devices and methods are provided to overdrive or underdrive a display panel to account for display pixel hysteresis caused by several frames of pixel history. An electronic device may include an electronic display and processing circuitry. The electronic display includes a number of display pixels. The processing circuitry may generate image data for the display pixels. The processing circuitry may receive a current frame value of the image data targeted for a first display pixel and, based at least in part on the current frame value and a pixel history of the first display pixel (which may indicate a gray level for a number of previous frames), generate a compensated value by which to drive the first display pixel to overcome pixel hysteresis and reach the desired luminance at an initial response.
Type: Grant
Filed: July 24, 2023
Date of Patent: March 11, 2025
Assignee: Apple Inc.
Inventors: Jenny Hu, Alexandre V Gauthier, Tao Jia, Scott R Johnston, Yingying Tang, Chaohao Wang
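A toy illustration of history-aware compensation follows; the weights and gain are invented for illustration and are not Apple's method, which the abstract does not specify at this level of detail.

```python
# Hypothetical sketch: derive an overdrive/underdrive value from the current
# frame value and a short gray-level history, as the abstract describes.
def compensate(current: int, history: list[int]) -> int:
    """Drive harder when the pixel is rising from darker history frames,
    softer when falling, so target luminance is reached on the first frame."""
    weights = [0.5, 0.3, 0.2]                 # most recent frame weighs most
    recent = history[-len(weights):][::-1]    # newest first
    weighted = sum(w * h for w, h in zip(weights, recent))
    correction = 0.25 * (current - weighted)  # toy gain
    return max(0, min(255, round(current + correction)))

# Rising pixel: history darker than target, so the drive value exceeds 128.
print(compensate(128, [20, 30, 40]))  # -> 152
```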
-
Patent number: 12248050
Abstract: Disclosed are a dynamic error measurement apparatus, system, and method for an electricity meter. The measurement apparatus includes a test signal generation unit configured to generate a voltage test signal and two-circuit current test signals, output the voltage test signal to a measurement unit, and output the two-circuit current test signals to the measurement unit and a current summation unit; the measurement unit configured to determine two-circuit electric energy values based on the voltage test signal and the two-circuit current test signals, and output the two-circuit electric energy values to a calculation control unit; the current summation unit configured to determine a combined current signal based on the two-circuit current test signals; and the calculation control unit configured to determine a total electric energy value based on the two-circuit electric energy values. The system includes a standard meter and a measurement apparatus.
Type: Grant
Filed: November 4, 2022
Date of Patent: March 11, 2025
Assignees: Power Supply Service and Management Center, State Grid Jiangxi Electric Power Co., Ltd.; Beijing University of Chemical Technology
Inventors: Jian Ma, Tao Hu, Xuewei Wang, Kexu Chen, Di Wu, Yan Zhao, Gaofeng Deng, Qiang Liu, Aichao Yang, Yanlinzi Huang
-
Publication number: 20250080862
Abstract: This application relates to the field of photographing technologies, and discloses an image processing method and an electronic device, so as to dynamically adjust look-up tables (LUTs) during photographing or video recording and enrich the display effects of photographing or video recording. The electronic device obtains a first image, where the first image is an image captured by a camera of the electronic device and includes a first shot object; the electronic device determines a first scenario corresponding to the first image, where the first scenario identifies a scenario corresponding to the first shot object; the electronic device determines a first LUT based on the first scenario; and the electronic device processes the first image based on the first LUT to obtain a second image and displays the second image, where a display effect of the second image corresponds to the first LUT.
Type: Application
Filed: April 29, 2022
Publication date: March 6, 2025
Inventors: Bin XIAO, Hantao CUI, Yu WANG, Congchao ZHU, Tao SHAO, Shuhong HU
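The scene-to-LUT flow might be sketched as below; the brightness-based scene rule and the LUT contents are invented placeholders, not the disclosed method.

```python
# Hypothetical sketch of the dynamic-LUT flow: classify the captured frame's
# scene, pick a LUT for that scene, and apply it per pixel value.
def classify_scene(mean_brightness: float) -> str:
    return "night" if mean_brightness < 64 else "daylight"

LUTS = {
    "night":    [min(255, round(i * 1.4)) for i in range(256)],  # lift shadows
    "daylight": list(range(256)),                                # identity
}

def apply_lut(pixels: list[int], scene: str) -> list[int]:
    lut = LUTS[scene]
    return [lut[p] for p in pixels]

frame = [10, 40, 200]  # toy grayscale frame
print(apply_lut(frame, classify_scene(sum(frame) / len(frame))))
```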
-
Patent number: D1066475
Type: Grant
Filed: June 6, 2023
Date of Patent: March 11, 2025
Inventor: Tao Hu
-
Patent number: D1066476
Type: Grant
Filed: July 25, 2023
Date of Patent: March 11, 2025
Inventor: Tao Hu