Patents by Inventor Tao Wang

Tao Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12262418
    Abstract: Methods, systems, and devices for wireless communication are described. A first user equipment (UE) may receive, from a second UE, an indication of a first uplink timing advance for communications from the second UE to a base station. The first UE may estimate, based at least in part on the first uplink timing advance received from the second UE, a second uplink timing advance for transmission of a random access message from the first UE to the base station. The first UE may transmit, to the base station, the random access message using the second uplink timing advance.
    Type: Grant
    Filed: January 4, 2022
    Date of Patent: March 25, 2025
    Assignee: QUALCOMM Incorporated
    Inventors: Hua Wang, Sony Akkarakaran, Tao Luo, Junyi Li, Jung Ho Ryu
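The estimation idea in this abstract can be sketched in a few lines. The distance-based model and every name below are illustrative assumptions, not the claimed method: uplink timing advance is treated as round-trip propagation time, and the first UE adjusts the peer's reported advance by the difference in path lengths.

```python
# Hypothetical sketch of peer-assisted timing-advance estimation.
# The proportional-to-distance model is an assumption for illustration.

C = 299_792_458.0  # speed of light, m/s

def ta_seconds(distance_m: float) -> float:
    """Uplink timing advance ~ round-trip propagation time to the base station."""
    return 2 * distance_m / C

def estimate_own_ta(peer_ta_s: float, peer_to_bs_m: float, own_to_bs_m: float) -> float:
    """Adjust the peer's reported TA by the difference in path lengths."""
    return peer_ta_s + 2 * (own_to_bs_m - peer_to_bs_m) / C

peer_ta = ta_seconds(3000.0)                        # peer UE is 3 km from the base station
own_ta = estimate_own_ta(peer_ta, 3000.0, 4500.0)   # this UE is 4.5 km away
```

Under this toy model the estimate equals the advance the first UE would have computed from its own distance directly.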
  • Patent number: 12262533
    Abstract: A three-dimensional (3D) memory device includes a first memory cell, a second memory cell, a control gate between the first and second memory cells, a top contact coupled to the first memory cell, and a bottom contact coupled to the second memory cell. The first memory cell can include a first pillar, a first insulating layer surrounding the first pillar, a first gate contact coupled to a first word line, and a second gate contact coupled to a first plate line. The second memory cell can include a second pillar, a second insulating layer surrounding the second pillar, a third gate contact coupled to a second word line, and a fourth gate contact coupled to a second plate line. The 3D memory device can utilize dynamic flash memory (DFM), increase storage density, provide multi-cell storage, provide a three-state logic, decrease leakage current, increase retention time, and decrease refresh rates.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: March 25, 2025
    Assignee: Yangtze Memory Technologies Co., Ltd.
    Inventors: Tao Yang, Dongxue Zhao, Yuancheng Yang, Lei Liu, Kun Zhang, Di Wang, Wenxi Zhou, Zhiliang Xia, Zongliang Huo
  • Publication number: 20250093000
    Abstract: A portable illumination apparatus, such as a task light, that is lightweight and compact and provides enhanced illumination in an ambient environment when placed in one of a plurality of operating modes. The portable illumination apparatus includes a support base and a light module that includes one or more light sources and a rechargeable power source. The light module may be manually manipulated for movement between a collapsed, stowed position and a deployed position by rotation about a first rotational axis. In the deployed position, the light module is rotatable about a second rotational axis that is perpendicular to the first rotational axis. The support base incorporates a carabiner clip member that facilitates secure attachment of the portable illumination apparatus to an object or surface in a manner that facilitates hands-free use of the portable illumination apparatus.
    Type: Application
    Filed: December 5, 2024
    Publication date: March 20, 2025
    Inventors: Tao WANG, Karen M. BAUMEISTER, Jonathan KIRKPATRICK, Xiaotian ZENG, Ling YANG, Timothy J. PAYNE
  • Publication number: 20250093215
    Abstract: The invention discloses a charging gun whose color changes with temperature, and a method for preparing it, relating to the field of charging guns. The gun includes a gun head with a silicone waterproof pad at one end; a terminal cover is connected to the rear side of the pad, terminals are mounted inside the cover with rubber coatings on their outer sides, and one end of the terminals is connected to a terminal wiring harness. A color-changing block, injection-molded from a mixture of polycarbonate (PC) and a temperature-sensitive material, differs from the material used elsewhere in the shell. If heat is generated during charging because of improper use or poor contact, the temperature change is perceived through a color change of the gun shell, so the operating state of the charging gun can be confirmed.
    Type: Application
    Filed: September 23, 2022
    Publication date: March 20, 2025
    Inventors: Wei YANG, Ning YE, Tao YANG, Jihua WANG
  • Publication number: 20250092572
    Abstract: The present disclosure provides a method for preparing a gallium nitride (GaN) single-crystal substrate with an edge metal mask technology. The method includes: preparing a metal mask ring on a composite epitaxial substrate, epitaxially growing a GaN single-crystal sacrificial layer in a confined manner, performing separation with interlayer decoupling of single-crystal graphene through an in-situ temperature gradient method to obtain a self-supporting GaN single-crystal sacrificial layer, epitaxially growing a GaN single-crystal thick film in a diameter-expanded manner, and performing chemico-mechanical trimming on the GaN single-crystal thick film to obtain a stress-free self-supporting GaN single-crystal substrate. The metal mask ring is compatible with the GaN single-crystal preparation process (hydride vapor phase epitaxy (HVPE)), and efficiently catalyzes decomposition reaction of the nitrogen source.
    Type: Application
    Filed: September 19, 2023
    Publication date: March 20, 2025
    Inventors: Xinqiang WANG, Fang LIU, Qiang LIU, Yucheng GUO, Tao WANG, Jiejun WU, Bo SHEN, Guoyi ZHANG
  • Publication number: 20250094816
    Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of text generation for large language models are described herein. In one embodiment, a method accesses a collection of text samples. The text samples include a natural language text prompt that combines content and instructions. The method extracts the instructions from the text prompt. The method fine-tunes a large language model to generate text in natural language based on a text generation loss function that penalizes non-compliance with the extracted instructions by a generated text response to the text prompt. The method generates an evaluation score for performance of the tuned large language model as a text generator based on a value of the text generation loss function for a second generated text response. And, the method automatically signals that the fine-tuning of the tuned large language model is complete in response to the evaluation score satisfying a threshold.
    Type: Application
    Filed: April 30, 2024
    Publication date: March 20, 2025
    Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
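The loss described in this abstract can be sketched as a base generation loss plus penalties for extracted instructions the response does not satisfy. The `Instruction:` line format, the 'mention X' rule, and the penalty weight are toy assumptions, not the patented loss function.

```python
# Toy sketch of a text-generation loss that penalizes non-compliance with
# instructions extracted from a prompt. All formats and weights are assumed.
import re

def extract_instructions(prompt: str) -> list[str]:
    """Pull lines of the form 'Instruction: ...' out of a prompt."""
    return re.findall(r"Instruction:\s*(.+)", prompt)

def compliance_penalty(instruction: str, response: str) -> float:
    """1.0 if the response violates a simple 'mention <word>' instruction."""
    m = re.match(r"mention (\w+)", instruction)
    if m and m.group(1).lower() not in response.lower():
        return 1.0
    return 0.0

def generation_loss(base_loss: float, prompt: str, response: str, weight: float = 0.5) -> float:
    penalties = [compliance_penalty(i, response) for i in extract_instructions(prompt)]
    return base_loss + weight * sum(penalties)

prompt = "Summarize the report.\nInstruction: mention revenue\nInstruction: mention risks"
loss = generation_loss(0.8, prompt, "Revenue grew 5% this quarter.")
```

Here the response mentions revenue but not risks, so one penalty is added to the base loss.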
  • Publication number: 20250094704
    Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of text summarization for large language models are described herein. In one embodiment, a method accesses a collection of text samples. The text samples include a body of text and an example summary. The method fine-tunes a large language model (LLM) based on a loss function that compares the example summary and a generated summary generated by the LLM. The example and generated summaries are compared at sentence, paragraph, and/or article levels. The method generates an evaluation score for performance of the tuned LLM as a text summarizer based on a further comparison of a reference summary and a summary generated by the tuned LLM. The method then automatically determines to deploy the tuned LLM to a text summarization task in response to the evaluation score satisfying a threshold.
    Type: Application
    Filed: April 5, 2024
    Publication date: March 20, 2025
    Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
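The multi-level comparison in this abstract can be sketched with a similarity measure applied both per sentence and over the whole summary. Token-overlap (Jaccard) similarity stands in for whatever metric the actual loss uses; all names and the 0.5/0.5 weighting are assumptions.

```python
# Minimal sketch: compare an example summary and a generated summary at the
# sentence level and at the whole-summary level, as the abstract describes.

def tokens(text: str) -> set[str]:
    return set(text.lower().split())

def jaccard(a: str, b: str) -> float:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def summary_loss(example: str, generated: str) -> float:
    """Average sentence-pair dissimilarity plus whole-summary dissimilarity."""
    ex_sents = [s for s in example.split(". ") if s]
    gen_sents = [s for s in generated.split(". ") if s]
    sent_term = sum(1 - jaccard(a, b) for a, b in zip(ex_sents, gen_sents)) / max(len(ex_sents), 1)
    whole_term = 1 - jaccard(example, generated)
    return 0.5 * sent_term + 0.5 * whole_term

loss_same = summary_loss("sales rose fast", "sales rose fast")
loss_diff = summary_loss("sales rose", "costs fell")
```

An identical summary pair scores zero; disjoint summaries score the maximum.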
  • Publication number: 20250092761
    Abstract: A method of treating a subterranean formation to protect against solid deposition and corrosion attack includes flowing a pre-flush fluid to a specified downhole location within the subterranean formation. A main treatment fluid is flowed to the specified downhole location. The main treatment fluid includes a scale inhibitor configured to inhibit formation of scale within the subterranean formation. At a subsurface level, an overflush foam is generated. The overflush foam includes an aqueous fluid and a gas. The overflush foam is shear thinning and non-Newtonian. The overflush foam is flowed to the specified downhole location.
    Type: Application
    Filed: September 18, 2023
    Publication date: March 20, 2025
    Inventors: Qiwei Wang, Tao Chen, Hemant K. Sharma
  • Publication number: 20250094814
    Abstract: Techniques are provided for fine-tuning large language models (LLMs) to reduce the instability of LLM outputs across prompt variations. In one technique, a plurality of prompts is stored. For each prompt of the plurality of prompts, a plurality of variants of that prompt is generated. A prompt-generating LLM is fine-tuned based on that prompt and the plurality of variants. Each variant-prompt association (where the variant is generated based on the prompt and has an identical or similar meaning) is a training sample that is used to train or fine-tune the prompt-generating LLM. The prompt-generating LLM is configured to generate standardized prompts based on input prompts. In another technique, a response-generating LLM is fine-tuned based on sets of training samples, each training sample in a set comprising a different variant of a prompt and a response that the response-generating LLM generated based on the prompt.
    Type: Application
    Filed: September 4, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod M Mamtani
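The variant-to-canonical training pairs described in this abstract can be sketched as follows. The string-rewriting rules below are toy stand-ins for LLM-generated paraphrases; all names are assumptions for illustration.

```python
# Illustrative sketch: build (variant -> standardized prompt) training pairs.
# Real variants would come from a paraphrasing LLM, not these toy rewrites.

def make_variants(prompt: str) -> list[str]:
    """Produce surface variants intended to carry the same meaning."""
    return [
        prompt.lower(),
        prompt.replace("Summarize", "Please summarize"),
        prompt + " Thanks!",
    ]

def build_training_samples(prompts: list[str]) -> list[tuple[str, str]]:
    """Each sample maps a variant back to its canonical (standardized) prompt."""
    samples = []
    for p in prompts:
        for v in make_variants(p):
            samples.append((v, p))
    return samples

samples = build_training_samples(["Summarize the meeting notes."])
```

Each pair teaches the prompt-generating model to map a noisy variant onto one standardized prompt, which is the stabilizing step the abstract describes.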
  • Publication number: 20250097905
    Abstract: Various aspects of the present disclosure generally relate to wireless communication. In some aspects, a user equipment (UE) may receive a sidelink configuration that indicates a set of sidelink slots comprising a first subset of sidelink slots to be used for first transmissions of sidelink communications and a second subset of sidelink slots to be used for second transmissions of the sidelink communications. The UE may communicate based at least in part on the sidelink configuration. Numerous other aspects are described.
    Type: Application
    Filed: December 4, 2024
    Publication date: March 20, 2025
    Inventors: Hua WANG, Sony AKKARAKARAN, Tao LUO, Junyi LI, Yan ZHOU, Hong CHENG, Jelena DAMNJANOVIC, Peter GAAL, Jung Ho RYU
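The slot partition in this abstract can be sketched as a configuration that splits a set of sidelink slots into one subset for first transmissions and another for second transmissions. The even/odd split below is an illustrative assumption, not the configured pattern.

```python
# Toy sketch of a sidelink configuration that partitions slots into a subset
# for first transmissions and a subset for second transmissions.

def split_sidelink_slots(slots: list[int]) -> tuple[list[int], list[int]]:
    first_tx = [s for s in slots if s % 2 == 0]   # assumed: even slots -> first Tx
    second_tx = [s for s in slots if s % 2 == 1]  # assumed: odd slots -> second Tx
    return first_tx, second_tx

first_tx, second_tx = split_sidelink_slots(list(range(10)))
```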
  • Publication number: 20250094887
    Abstract: The present disclosure provides a method for optimizing the parameters of a ladder-type carbon trading mechanism based on an improved particle swarm optimization (IPSO) algorithm. The method first obtains information and operating data of a park-level integrated energy system, establishes equipment models and constraints for the system, and establishes a ladder-type carbon trading model. It then encapsulates the optimized low-carbon dispatching of the system as a fitness function whose input is the parameters of the carbon trading mechanism and whose output is the carbon emission of the system. Finally, it applies the IPSO algorithm to optimize the fitness function and outputs the optimization results. Example analysis verifies the effectiveness and rationality of the model and method, which give full play to the role of the ladder-type carbon trading mechanism in the park-level integrated energy system.
    Type: Application
    Filed: October 20, 2022
    Publication date: March 20, 2025
    Inventors: Quan Chen, Xuanjun Zong, Sheng Zou, Hongwei Zhou, Tao Peng, Weiliang Wang, Wenjia Zhang, Chen Wu, Qun Zhang, Yuan Shen, Wei Feng, Gaofeng Shen, Min Zhang, Kai Yang, Xinyue Kong
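The optimization loop in this abstract can be sketched with a plain particle swarm. The quadratic fitness below is a stand-in for "run low-carbon dispatch, return the system's carbon emission", and the swarm hyperparameters are conventional choices, not those of the improved (IPSO) variant.

```python
# Compact particle swarm optimization sketch. The fitness function is an
# illustrative stand-in for the dispatch model described in the abstract.
import random

random.seed(0)

def fitness(params):
    # Stand-in for: carbon emission as a function of trading-mechanism parameters.
    x, y = params
    return (x - 2.0) ** 2 + (y + 1.0) ** 2

def pso(n_particles=20, iters=100, lo=-5.0, hi=5.0):
    pos = [[random.uniform(lo, hi), random.uniform(lo, hi)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    gbest = min(pbest, key=fitness)[:]          # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso()
```

The swarm converges near the fitness minimum at (2, -1), standing in for the parameter set that minimizes system emissions.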
  • Publication number: 20250095637
    Abstract: A method includes receiving a textual prompt in a first language and obtaining a fine-tuned prompt embedding configured to guide a large language model (LLM) to generate text in a target language from textual prompts in the first language. The method also includes processing, using the LLM, the textual prompt conditioned on the fine-tuned prompt embedding to generate output text in the target language and concatenating the textual prompt and the generated output text to provide an unspoken textual utterance. The method also includes training a multilingual automatic speech recognition (ASR) model to learn how to recognize speech in the target language by injecting the unspoken textual utterance into a text encoder associated with the multilingual ASR model.
    Type: Application
    Filed: September 16, 2024
    Publication date: March 20, 2025
    Applicant: Google LLC
    Inventors: Ke Hu, Tara N. Sainath, Bo Li, Yu Zhang, Yong Cheng, Tao Wang, Yujing Zhang, Frederick Liu
  • Publication number: 20250094866
    Abstract: Techniques for correcting hallucinations produced by generative large language models (LLMs). In one technique, a computing system accesses first output generated by an LLM. The computing system identifies, within the first output, a plurality of assertions. The computing system determines that a first assertion in the plurality of assertions is false. The computing system generates a prompt that indicates that the first assertion is false. The computing system submits the prompt as input to the LLM. The computing system accesses second output that is generated by the LLM, where the second output includes a second assertion that is different than the first assertion and corresponds to the first assertion.
    Type: Application
    Filed: May 30, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod Murli Mamtani
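The correction loop in this abstract can be sketched schematically. The stub "LLM" and the small truth table below are placeholders for a real model and fact checker; every name here is an assumption for illustration.

```python
# Schematic sketch of the hallucination-correction loop: check assertions in
# the output, and re-prompt the model when an assertion is found to be false.

KNOWN_FACTS = {"water boils at 100 C at sea level": True,
               "the moon is made of cheese": False}

def check_assertion(assertion: str) -> bool:
    return KNOWN_FACTS.get(assertion, True)  # unknown claims pass by default

def stub_llm(prompt: str) -> list[str]:
    """Stand-in model: when told an assertion is false, it replaces it."""
    if "is false" in prompt:
        return ["water boils at 100 C at sea level",
                "the moon is covered in regolith"]
    return ["water boils at 100 C at sea level",
            "the moon is made of cheese"]

def correct_hallucinations(prompt: str) -> list[str]:
    assertions = stub_llm(prompt)
    for a in assertions:
        if not check_assertion(a):
            followup = f"{prompt}\nThe claim '{a}' is false; please correct it."
            return stub_llm(followup)  # second output replaces the false claim
    return assertions

output = correct_hallucinations("List two facts.")
```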
  • Publication number: 20250096810
    Abstract: A training signal generator for forming an input signal for an ADC-under-test includes a one-bit DAC and an analog low-pass filter. The one-bit DAC converts a binary sequence into a DAC output signal that is then filtered by the analog low-pass filter to form an ADC input signal. The ADC-under-test converts the ADC input signal into an ADC output signal. A digital low-pass filter converts the binary sequence into a plurality of samples. A digital signal processing system processes the plurality of samples and the ADC output signal to form an estimate of the ADC input signal. An ADC linearizer may then be trained to characterize a non-linear impairment of the ADC-under-test responsive to a comparison of the estimate of the ADC input signal and the ADC output signal.
    Type: Application
    Filed: September 19, 2023
    Publication date: March 20, 2025
    Inventors: Igor GUTMAN, Elias DAGHER, Hua WANG, Behnam SEDIGHI, Seyed Arash MIRHAJ, Tao LUO
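The signal chain in this abstract can be sketched numerically: a +/-1 binary sequence (the one-bit DAC output) is low-pass filtered to form the ADC input, a toy ADC quantizes it, and the same sequence filtered digitally yields the reference estimate. The moving-average filter and the uniform quantizer are illustrative assumptions, not the circuit described.

```python
# Sketch of the training-signal chain: one-bit DAC -> low-pass filter -> ADC,
# with a parallel digital filter producing an estimate of the ADC input.
import random

random.seed(1)
bits = [random.choice([-1.0, 1.0]) for _ in range(256)]  # one-bit DAC levels

def moving_average(x, taps=8):
    """Crude low-pass filter used for both the analog and digital paths."""
    return [sum(x[max(0, i - taps + 1): i + 1]) / min(taps, i + 1)
            for i in range(len(x))]

analog_in = moving_average(bits)                  # models the analog LPF output
adc_out = [round(v * 8) / 8 for v in analog_in]   # toy ADC: 1/8-step quantizer
reference = moving_average(bits)                  # digital LPF gives the estimate

# The estimate tracks the ADC input, so the residual isolates ADC error only;
# a linearizer could then be trained on this residual.
residual = [a - r for a, r in zip(adc_out, reference)]
max_err = max(abs(e) for e in residual)
```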
  • Publication number: 20250089910
    Abstract: The present disclosure relates to a crib, which includes a body, a bed frame and a support base. The body has an accommodation space configured to accommodate a child and includes a slide rod. The bed frame is configured to support the body. The support base is fixed to the bed frame; the slide rod extends through the support base and is slidable relative to it. The crib is configured for a child to sleep or play in, can also function as a cradle that helps calm the child, has a simple structure, and is easy to operate.
    Type: Application
    Filed: January 18, 2023
    Publication date: March 20, 2025
    Inventors: Junjie Hu, Tao Wang
  • Publication number: 20250094138
    Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of software code generation by large language models are described herein. In one embodiment, a method accesses a collection of software code samples that intermix sample code and human language description. The method generates prompts to an LLM to write code that performs as described by the human language description of the sample code. The method fine-tunes a large language model to generate software code based on a code generation loss function that evaluates code generated by the LLM in response to the prompts. The method generates an evaluation score for performance of the tuned large language model as a code generator based on code generation loss for second generated code. And, the method automatically signals that fine-tuning of the tuned large language model is complete in response to the evaluation score satisfying a threshold.
    Type: Application
    Filed: June 14, 2024
    Publication date: March 20, 2025
    Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
  • Publication number: 20250097171
    Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of chatbot performance for large language models are described herein. In one embodiment, a method accesses a collection of sample conversations between two entities. An individual sample conversation includes one or more rounds of natural language example prompt by a querent and example response by an agent. The method fine-tunes an LLM to generate responses in natural language based on a chatbot loss function that evaluates first responses generated by the LLM to the example prompts by the querent. The method generates an evaluation score for performance of the tuned LLM as a chatbot based on second responses generated by the tuned LLM to test prompts from a test conversation. And, the method automatically signals that the fine-tuning of the tuned LLM is complete in response to the evaluation score satisfying a threshold.
    Type: Application
    Filed: July 10, 2024
    Publication date: March 20, 2025
    Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
  • Publication number: 20250094687
    Abstract: Techniques for generating repetition-free text using a large language model (LLM) are provided. In one technique, textual content that was generated by an LLM is accessed, where the textual content comprises a plurality of sub-components including a first sub-component and a second sub-component. A first embedding that represents the first sub-component is generated and a second embedding that represents the second sub-component is generated. Based on a similarity between the first embedding and the second embedding, it is determined whether the second sub-component is repetitious with respect to the first sub-component. In response to determining that the second sub-component is repetitious with respect to the first sub-component, at least a portion of the second sub-component is removed from the textual content.
    Type: Application
    Filed: June 28, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod Murli Mamtani
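The repetition-removal idea in this abstract can be sketched directly: embed each sub-component, compare embeddings pairwise, and drop later components too similar to an earlier one. Bag-of-words vectors stand in for real embeddings, and the 0.8 threshold is an illustrative assumption.

```python
# Sketch of embedding-similarity deduplication over generated sub-components.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def dedupe(sentences: list[str], threshold: float = 0.8) -> list[str]:
    """Keep a sentence only if it is dissimilar to everything kept so far."""
    kept: list[str] = []
    for s in sentences:
        if all(cosine(embed(s), embed(k)) < threshold for k in kept):
            kept.append(s)
    return kept

clean = dedupe(["the model is fast",
                "the model is fast",       # exact repeat, removed
                "training uses new data"])
```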
  • Publication number: 20250094716
    Abstract: Techniques for language model (LM) summarization using semantical clustering are provided. In one technique, a plurality of concepts reflected in text data is identified. A plurality of concept clusters is generated based on similarity among the plurality of concepts. Thus, some concept clusters may include multiple concepts. For each concept cluster of the plurality of concept clusters, an LM generates a summary of the text corresponding to that concept cluster. A summary response of the text data is generated by aggregating the summary of each concept cluster of the plurality of concept clusters. In another technique, an LM generates a summary based on text data. A first set of concepts reflected in the summary is identified and a second set of concepts reflected in the text data is identified. A difference between the two sets may indicate that the summary is missing one or more concepts.
    Type: Application
    Filed: May 7, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod M. Mamtani
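The first technique in this abstract can be sketched as: group sentences by concept, "summarize" each cluster, then aggregate the cluster summaries. Keyword matching and taking each cluster's first sentence are toy stand-ins for real concept extraction and an LM summarizer.

```python
# Minimal sketch of summarization by semantic clustering of concepts.

def concept_of(sentence: str, concepts: list[str]) -> str:
    """Assign a sentence to the first concept it mentions (toy clustering)."""
    for c in concepts:
        if c in sentence.lower():
            return c
    return "other"

def cluster_summarize(sentences: list[str], concepts: list[str]) -> str:
    clusters: dict[str, list[str]] = {}
    for s in sentences:
        clusters.setdefault(concept_of(s, concepts), []).append(s)
    # Stand-in "LM summary" of a cluster: its first sentence.
    summaries = [members[0] for members in clusters.values()]
    return " ".join(summaries)

text = ["Revenue rose 5%.", "Revenue beat forecasts.", "Hiring slowed in Q3."]
summary = cluster_summarize(text, ["revenue", "hiring"])
```

Each concept contributes one line to the aggregate, so redundant revenue sentences collapse into a single cluster summary.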
  • Publication number: 20250094865
    Abstract: Techniques for ensuring that language models follow instructions indicated in prompts are provided. In one technique, a first language model generates a response based on a prompt. A set of instructions in the prompt is identified. For each instruction in the set, a second language model determines whether the response indicates that the first language model followed the instruction. In another technique, for each prompt of a plurality of prompts: (1) a first language model generates a response based on the prompt; (2) multiple instructions are identified based on the prompt; (3) a second language model generates, based on the multiple instructions, an output that indicates whether the first language model followed each instruction; and (4) the prompt, the response, and the multiple instructions are stored in a training instance. The first language model is fine-tuned based on the training instances.
    Type: Application
    Filed: April 8, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod M. Mamtani
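The per-instruction verification step in this abstract can be sketched as below. The rule-based checker stands in for the second language model, and the `- instruction` line format plus the 'include the word X' grammar are illustrative assumptions.

```python
# Sketch of instruction-following verification: for each instruction in the
# prompt, a checker decides whether the response followed it.
import re

def extract_instructions(prompt: str) -> list[str]:
    """Pull '- ...' bullet lines out of a prompt (assumed format)."""
    return re.findall(r"- (.+)", prompt)

def followed(instruction: str, response: str) -> bool:
    """Stand-in for the second LM: handles 'include the word X' rules."""
    m = re.match(r"include the word (\w+)", instruction)
    if m:
        return m.group(1).lower() in response.lower()
    return True  # unrecognized instructions pass in this toy checker

prompt = "Write a greeting.\n- include the word hello\n- include the word friend"
response = "Hello there, my friend!"
report = {i: followed(i, response) for i in extract_instructions(prompt)}
```

A fully compliant (prompt, response, instructions) triple like this one would then be stored as a training instance for fine-tuning the first model.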