Patents by Inventor Zheng Wang

Zheng Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250145730
    Abstract: Provided is a molecule comprising an Fc variant. Compared with a molecule comprising an IgG wild-type Fc, the molecule is resistant to degradation by immunoglobulin-degrading enzymes. The molecule comprises a human IgG Fc region having a mutation, the mutation being one or more substitutions or deletions at the G237 position (EU numbering), the substitution preferably being G237A. Further provided are a nucleic acid encoding the molecule, a vector comprising the nucleic acid, a host cell, and a method for preparing the molecule.
    Type: Application
    Filed: December 16, 2022
    Publication date: May 8, 2025
    Applicant: SHANGHAI BAO PHARMACEUTICALS CO., LTD.
    Inventors: Zheng WANG, Yunxia XU, Yanjun LIU, Baojie LV
  • Publication number: 20250135336
    Abstract: Embodiments of the disclosure provide a method, an apparatus, an electronic device, a storage medium and a product for cloud game control. The method includes: obtaining predetermined device information of a corresponding handheld device each time a handle connection event is obtained; interacting with a target cloud game server to register the handheld device based on the predetermined device information, and determining a device management number for the handheld device that is successfully registered; and obtaining a handle operation event and forwarding it to the target cloud game server based on a device management number associated with the handle operation event, to implement corresponding cloud game operation control.
    Type: Application
    Filed: October 25, 2024
    Publication date: May 1, 2025
    Inventor: Zheng WANG
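The register-then-route flow in this abstract can be sketched as a small router. This is an illustrative sketch only: `HandleRouter`, `FakeServer`, and their method names are hypothetical stand-ins, not API names from the patent.

```python
class HandleRouter:
    """On each handle connection event, register the handheld device with the
    cloud game server and store its device management number; later handle
    operation events are forwarded under that number."""

    def __init__(self, server):
        self.server = server
        self.numbers = {}                      # device id -> management number

    def on_connect(self, device_info):
        number = self.server.register(device_info)   # assumed server call
        self.numbers[device_info["id"]] = number
        return number

    def on_operation(self, device_id, op):
        # Forward the operation event keyed by the device management number.
        self.server.forward(self.numbers[device_id], op)


class FakeServer:
    """Toy cloud game server that assigns sequential management numbers."""

    def __init__(self):
        self.forwarded = []
        self._next = 0

    def register(self, info):
        self._next += 1
        return self._next

    def forward(self, number, op):
        self.forwarded.append((number, op))


server = FakeServer()
router = HandleRouter(server)
router.on_connect({"id": "pad-1"})
router.on_operation("pad-1", "button_a_down")
```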
  • Patent number: 12272282
    Abstract: A display panel, a driving method therefor, and a display device. The display panel includes a scan drive module and multiple rows of sub-pixel units arranged in sequence. The multiple rows of sub-pixel units are divided into n sub-pixel unit groups, and two adjacent rows of the sub-pixel units in each sub-pixel unit group are spaced by (n-1) rows of the sub-pixel units in other sub-pixel unit groups. The scan drive module is configured to sequentially scan and drive each row of the sub-pixel units in each sub-pixel unit group and scan and drive the n sub-pixel unit groups in divided periods within a scan cycle.
    Type: Grant
    Filed: December 29, 2023
    Date of Patent: April 8, 2025
    Assignee: KUNSHAN GO-VISIONOX OPTO-ELECTRONICS CO., LTD
    Inventors: Zhaomin Lin, Zheng Wang, Tao Tang
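The grouping described above (group g holds every n-th row, so adjacent rows within a group are separated by n-1 rows from the other groups) implies a simple interleaved scan order. The function below is a sketch of that ordering, assuming 0-indexed rows and one "divided period" per group; it is not code from the patent.

```python
def interleaved_scan_order(total_rows: int, n: int) -> list[int]:
    """Return the row-scan sequence for n interleaved sub-pixel unit groups.

    Group g holds rows g, g + n, g + 2n, ..., so two adjacent rows within a
    group are spaced by (n - 1) rows belonging to the other groups. Each
    group is scanned to completion in its own period of the scan cycle.
    """
    order = []
    for group in range(n):                  # one divided period per group
        order.extend(range(group, total_rows, n))
    return order


# Example: 8 rows, n = 2 groups -> even rows first, then odd rows.
print(interleaved_scan_order(8, 2))  # [0, 2, 4, 6, 1, 3, 5, 7]
```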
  • Publication number: 20250109047
    Abstract: Biosynthetic melanin can capture fentanyl from aqueous environments. The captured fentanyl can be released by introducing the fentanyl-bound melanin to an acidic environment. In this way, melanin can capture fentanyl for subsequent detection and analysis, in addition to removing fentanyl for decontamination.
    Type: Application
    Filed: September 30, 2024
    Publication date: April 3, 2025
    Applicant: The Government of the United States of America, as represented by the Secretary of the Navy
    Inventors: Zheng Wang, Dagmar Leary, Jaimee Compton, Christopher Katilie, Gregory Ellis, Gary Vora
  • Patent number: 12266287
    Abstract: A Gamma debugging method, apparatus and device. The method includes: determining a target display parameter of a display module at an ith gray scale binding point according to a target display parameter of the display module at a first gray scale binding point and a preset Gamma mapping relationship, controlling the display module to display an initial gray scale picture according to the target display parameter at the ith gray scale binding point, and collecting an actual display parameter of the display module at the ith gray scale binding point; and adjusting, under a condition that a difference between the actual display parameter at the ith gray scale binding point and the target display parameter at the ith gray scale binding point goes beyond a preset deviation threshold range, a data signal parameter corresponding to the ith gray scale binding point.
    Type: Grant
    Filed: June 23, 2023
    Date of Patent: April 1, 2025
    Assignee: KunShan Go Visionox Opto Electronics Co., Ltd
    Inventors: Yuqing Wang, Tao Tang, Zheng Wang
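The debugging loop in this abstract (derive a target from the first binding point via the Gamma mapping, measure the actual display parameter, and adjust the data signal while the deviation exceeds the threshold) can be sketched as below. Every callable here is a hypothetical stand-in: `gamma_map`, `measure`, and `adjust` model the mapping relationship, the optical measurement, and the signal correction, and the 2.2 power curve is only an illustrative Gamma mapping.

```python
def tune_binding_point(target_first: float, gamma_map, measure, adjust,
                       i: int, tolerance: float, max_iters: int = 20) -> float:
    """Iteratively tune the data signal at the i-th gray-scale binding point.

    target_first: target display parameter (e.g. luminance) at the first point.
    gamma_map:    maps (first-point target, index i) -> target at point i.
    measure:      returns the actual display parameter for a data signal.
    adjust:       returns a corrected data signal given (signal, error).
    """
    target_i = gamma_map(target_first, i)
    signal = target_i                      # initial data signal guess
    for _ in range(max_iters):
        actual = measure(signal)
        error = actual - target_i
        if abs(error) <= tolerance:        # within the preset deviation range
            break
        signal = adjust(signal, error)
    return signal


# Toy setup: the module displays 10% brighter than commanded, and the
# correction simply subtracts the observed error.
gamma = lambda first, i: first * (i / 255) ** 2.2
result = tune_binding_point(
    target_first=500.0, gamma_map=gamma,
    measure=lambda s: s * 1.1,
    adjust=lambda s, e: s - e,
    i=128, tolerance=0.01)
```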
  • Publication number: 20250103980
    Abstract: A system, device and method are provided for at least in part automating remediation in an enterprise system. An illustrative method includes receiving a request to perform an action, the action requiring at least one of a plurality of subsystems to be completed. The plurality of subsystems include employee and customer subsystems. The method includes transmitting an event, based on the request, to the at least one of a plurality of subsystems via an event subsystem. The method includes with an automated remediation platform: monitoring events sent from the at least one of a plurality of subsystems to the event subsystem to detect a failure; and in response to detecting the failure, generating a trigger event for consumption by the event subsystem. The method includes in response to receiving the trigger event, transmitting a remediation event for consumption by the at least one of a plurality of subsystems.
    Type: Application
    Filed: May 22, 2024
    Publication date: March 27, 2025
    Applicant: The Toronto-Dominion Bank
    Inventors: Arash DELJAVAN FARSHI, Adam COWIN, Arthur BYDON, Gilbert CHANG, Zheng WANG
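The monitor/trigger/remediate flow above can be sketched as a pass over an event stream: normal events pass through, a detected failure produces a trigger event, and consuming the trigger produces a remediation event. The dictionary shapes and field names here are illustrative assumptions, not taken from the patent claims.

```python
def monitor_and_remediate(events, is_failure, make_remediation):
    """Watch events flowing to the event subsystem; on a detected failure,
    append a trigger event and the remediation event its consumption yields.

    is_failure:       callable(event) -> bool, the monitoring check.
    make_remediation: callable(trigger) -> remediation event for subsystems.
    """
    out = []
    for event in events:
        out.append(event)                              # normal pass-through
        if is_failure(event):
            trigger = {"type": "trigger", "cause": event}
            out.append(trigger)
            out.append(make_remediation(trigger))      # consumed downstream
    return out


stream = [{"type": "update", "ok": True}, {"type": "update", "ok": False}]
log = monitor_and_remediate(
    stream,
    is_failure=lambda e: not e.get("ok", True),
    make_remediation=lambda t: {"type": "remediation",
                                "target": t["cause"]["type"]})
```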
  • Publication number: 20250104029
    Abstract: A system, device and method are provided for accelerating transfers via intermediaries. The illustrative method includes receiving a first request to perform a transfer of funds and routing the first request through an account management service to a payment service. The method can include populating a second request to the interbank intermediary to initiate the transfer, and receiving, at the payment service, confirmation of transfer from the interbank intermediary, the received funds being routed by the payment service to a multi-tenant account. The method can include generating an event for the account management service that represents that the confirmation has been received and initiating, via the account management service, completion of the transfer to the first account by request to the multi-tenant account. The method can include generating, by the account management service, an event to update a distribution subsystem of the completed transfer.
    Type: Application
    Filed: September 26, 2023
    Publication date: March 27, 2025
    Applicant: The Toronto-Dominion Bank
    Inventors: Arash DELJAVAN FARSHI, Adam COWIN, Arthur Bydon, Gilbert Chan, Zheng Wang
  • Publication number: 20250099557
    Abstract: An antibiotic pharmaceutical composition for intradermal or subcutaneous administration is provided, which includes an antibiotic and hyaluronidase (HAase). In the composition, the content of the antibiotic is 10 mg/mL to 5 g/mL, and the content of the HAase is 45 units/mL to 4,500,000 units/mL. Also provided are a kit that includes the pharmaceutical composition, a method for preparing the kit, and use of the pharmaceutical composition.
    Type: Application
    Filed: July 22, 2022
    Publication date: March 27, 2025
    Applicant: SHANGHAI BAO PHARMACEUTICALS CO., LTD.
    Inventors: Yanjun LIU, Zheng WANG, Lin LU, Zhen ZHU
  • Publication number: 20250094686
    Abstract: Techniques for modifying a narrative point of view for content generated by a machine-learned model, such as a large language model (LLM), are provided. In one technique, a first textual content that was generated by an LLM is accessed. A narrative point of view (NPOV) detection operation is performed on a first portion of the first textual content to identify a first NPOV corresponding to the first portion of the first textual content. Based on an output, of the NPOV detection operation, that indicates that the first NPOV does not meet one or more NPOV criteria, the first portion of the first textual content is modified to generate a modified textual content. The modified textual content is submitted to the LLM, causing the LLM to generate a second textual content.
    Type: Application
    Filed: June 28, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod Murli Mamtani
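The detect-modify-resubmit loop in this abstract can be sketched as follows. All three callables are hypothetical stand-ins: `generate` plays the LLM, `detect_npov` the narrative-point-of-view detector, and `npov_ok` the NPOV criteria; the rewrite prompt wording is invented for illustration.

```python
def enforce_npov(generate, detect_npov, npov_ok, max_rounds: int = 3) -> str:
    """Regenerate LLM text until its narrative point of view passes a check.

    generate:    callable(prompt) -> text, standing in for the LLM.
    detect_npov: callable(text) -> detected NPOV label, e.g. "first-person".
    npov_ok:     callable(label) -> True when the NPOV criteria are met.
    """
    text = generate("original prompt")
    for _ in range(max_rounds):
        if npov_ok(detect_npov(text)):
            break
        # Modify the offending text and resubmit it to the model.
        text = generate(f"Rewrite in third person: {text}")
    return text


# Toy "model" that switches to third person on the second call.
responses = iter(["I think the device works.", "The device works."])
gen = lambda prompt: next(responses)
detect = lambda t: "first-person" if t.startswith("I ") else "third-person"
result = enforce_npov(gen, detect, lambda label: label == "third-person")
```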
  • Publication number: 20250094687
    Abstract: Techniques for generating repetition-free text using a large language model (LLM) are provided. In one technique, textual content that was generated by an LLM is accessed, where the textual content comprises a plurality of sub-components including a first sub-component and a second sub-component. A first embedding that represents the first sub-component is generated and a second embedding that represents the second sub-component is generated. Based on a similarity between the first embedding and the second embedding, it is determined whether the second sub-component is repetitious with respect to the first sub-component. In response to determining that the second sub-component is repetitious with respect to the first sub-component, at least a portion of the second sub-component is removed from the textual content.
    Type: Application
    Filed: June 28, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod Murli Mamtani
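The embed-compare-remove idea above can be sketched with a toy bag-of-words embedding and cosine similarity. A real system would use a neural encoder for the embeddings and might remove only part of a repetitious sub-component; the 0.8 threshold is an illustrative assumption.

```python
from collections import Counter
from math import sqrt


def embed(text: str) -> Counter:
    """Toy bag-of-words embedding; stands in for a neural sentence encoder."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def drop_repetitions(sentences: list[str], threshold: float = 0.8) -> list[str]:
    """Keep a sentence only if it is not a near-duplicate of an earlier one."""
    kept, vecs = [], []
    for s in sentences:
        v = embed(s)
        if all(cosine(v, prev) < threshold for prev in vecs):
            kept.append(s)
            vecs.append(v)
    return kept


text = ["The model is fast.", "The model is fast.", "It also saves memory."]
deduped = drop_repetitions(text)
print(deduped)  # ['The model is fast.', 'It also saves memory.']
```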
  • Publication number: 20250094816
    Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of text generation for large language models are described herein. In one embodiment, a method accesses a collection of text samples. The text samples include a natural language text prompt that combines content and instructions. The method extracts the instructions from the text prompt. The method fine-tunes a large language model to generate text in natural language based on a text generation loss function that penalizes non-compliance with the extracted instructions by a generated text response to the text prompt. The method generates an evaluation score for performance of the tuned large language model as a text generator based on a value of the text generation loss function for a second generated text response. And, the method automatically signals that the fine tuning of the tuned large language model is complete in response to the evaluation score satisfying a threshold.
    Type: Application
    Filed: April 30, 2024
    Publication date: March 20, 2025
    Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
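This abstract, like several of the fine-tuning abstracts in this listing, follows one loop shape: train against the task loss, score the tuned model, and signal completion once the evaluation score satisfies a threshold. A minimal sketch of that shape, with toy stand-ins for the training step and the evaluator (a real run would compute the instruction-compliance loss on generated text):

```python
def fine_tune_until_ready(train_step, evaluate, threshold: float,
                          max_epochs: int = 10) -> dict:
    """Run the fine-tune / evaluate / signal-complete loop.

    train_step: callable() that performs one round of fine-tuning.
    evaluate:   callable() -> evaluation score of the tuned model.
    """
    score = evaluate()
    for epoch in range(1, max_epochs + 1):
        train_step()
        score = evaluate()
        if score >= threshold:
            # Automatically signal that fine-tuning is complete.
            return {"complete": True, "epochs": epoch, "score": score}
    return {"complete": False, "epochs": max_epochs, "score": score}


# Toy model whose score improves by 0.1 per training step.
state = {"tenths": 5}
def step():
    state["tenths"] += 1

result = fine_tune_until_ready(step, lambda: state["tenths"] / 10,
                               threshold=0.9)
```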
  • Publication number: 20250094865
    Abstract: Techniques for ensuring that language models follow instructions indicated in prompts are provided. In one technique, a first language model generates a response based on a prompt. A set of instructions in the prompt is identified. For each instruction in the set, a second language model determines whether the response indicates that the first language model followed the instruction. In another technique, for each prompt of a plurality of prompts: (1) a first language model generates a response based on the prompt; (2) multiple instructions are identified based on the prompt; (3) a second language model generates, based on the plurality of instructions, an output that indicates that the first language model followed each instruction; and (4) the prompt, the response, and the multiple instructions are stored in a training instance. The first language model is finetuned based on the training instances.
    Type: Application
    Filed: April 8, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod M. Mamtani
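The first technique above (a second model judging, per instruction, whether the first model's response complied) can be sketched as a per-instruction report. The keyword-based `toy_judge` here is a hypothetical placeholder for the second language model, not the patent's method of judging.

```python
def check_instructions(response: str, instructions: list[str], judge) -> dict:
    """Ask a judge, per instruction, whether the response followed it.

    judge: callable(instruction, response) -> bool, standing in for the
    second language model described in the abstract.
    """
    return {inst: judge(inst, response) for inst in instructions}


def toy_judge(instruction: str, response: str) -> bool:
    # Crude heuristics in place of a real judge model.
    if "bullet" in instruction:
        return response.lstrip().startswith("-")
    if "French" in instruction:
        return "bonjour" in response.lower()
    return True


report = check_instructions(
    "- Bonjour, voici la liste.",
    ["answer in French", "use bullet points"],
    toy_judge)
```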
  • Publication number: 20250094814
    Abstract: Techniques are provided for fine-tuning large language models (LLMs) to reduce the instability of LLM outputs to prompts. In one technique, a plurality of prompts is stored. For each prompt of the plurality of prompts, a plurality of variants of that prompt is generated. A prompt generating LLM is fine-tuned based on that prompt and the plurality of variants. Each variant-prompt association (where the variant is generated based on the prompt and has an identical or similar meaning) is a training sample that is used to train or fine-tune the prompt generating LLM. The prompt generating LLM is configured to generate standardized prompts based on input prompts. In another technique, a response generating LLM is fine-tuned based on sets of training samples, each training sample in a set comprising a different variant of a prompt and a response that the response generating LLM generated based on the prompt.
    Type: Application
    Filed: September 4, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod M Mamtani
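The variant-prompt training samples described above can be sketched as pairs mapping each generated variant back to its canonical prompt. The `make_variants` callable is a hypothetical stand-in for the LLM-based paraphraser, and the `input`/`target` field names are illustrative.

```python
def build_variant_pairs(prompts: list[str], make_variants) -> list[dict]:
    """Build (variant -> canonical prompt) samples for fine-tuning a
    prompt-standardizing model.

    make_variants: callable(prompt) -> paraphrases with the same meaning.
    """
    samples = []
    for prompt in prompts:
        for variant in make_variants(prompt):
            samples.append({"input": variant, "target": prompt})
    return samples


# Toy paraphraser standing in for an LLM variant generator.
variants = lambda p: [p.lower(), p + ", please"]
pairs = build_variant_pairs(["Summarize the report"], variants)
```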
  • Publication number: 20250096272
    Abstract: A positive electrode material, and a secondary battery, battery module, battery pack, and electric apparatus including the same are disclosed. The positive electrode material includes an active substance and a binder, and the binder has the following formula: The fluorine-free binder in the positive electrode material not only has good flexibility and cohesiveness but also good NMP solubility and thus is a good substitute for PVDF.
    Type: Application
    Filed: December 2, 2024
    Publication date: March 20, 2025
    Applicant: CONTEMPORARY AMPEREX TECHNOLOGY (HONG KONG) LIMITED
    Inventors: Lei LU, Changyuan HU, Zheng WANG, Yalong WANG, Shisong LI, Shunhao DAI
  • Publication number: 20250094704
    Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of text summarization for large language models are described herein. In one embodiment, a method accesses a collection of text samples. The text samples include a body of text and an example summary. The method fine-tunes a large language model (LLM) based on a loss function that compares the example summary and a generated summary generated by the LLM. The example and generated summaries are compared at sentence, paragraph, and/or article levels. The method generates an evaluation score for performance of the tuned LLM as a text summarizer based on a further comparison of a reference summary and a summary generated by the tuned LLM. The method then automatically determines to deploy the tuned LLM to a text summarization task in response to the evaluation score satisfying a threshold.
    Type: Application
    Filed: April 5, 2024
    Publication date: March 20, 2025
    Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
  • Publication number: 20250097171
    Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of chatbot performance for large language models are described herein. In one embodiment, a method accesses a collection of sample conversations between two entities. An individual sample conversation includes one or more rounds of natural language example prompt by a querent and example response by an agent. The method fine-tunes an LLM to generate responses in natural language based on a chatbot loss function that evaluates first responses generated by the LLM to the example prompts by the querent. The method generates an evaluation score for performance of the tuned LLM as a chatbot based on second responses generated by the tuned LLM to test prompts from a test conversation. And, the method automatically signals that the fine-tuning of the tuned LLM is complete in response to the evaluation score satisfying a threshold.
    Type: Application
    Filed: July 10, 2024
    Publication date: March 20, 2025
    Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
  • Publication number: 20250094716
    Abstract: Techniques for language model (LM) summarization using semantical clustering are provided. In one technique, a plurality of concepts reflected in text data is identified. A plurality of concept clusters is generated based on similarity among the plurality of concepts. Thus, some concept clusters may include multiple concepts. For each concept cluster of the plurality of concept clusters, an LM generates a summary of the text corresponding to that concept cluster. A summary response of the text data is generated by aggregating the summary of each concept cluster of the plurality of concept clusters. In another technique, an LM generates a summary based on text data. A first set of concepts reflected in the summary is identified and a second set of concepts reflected in the text data is identified. A difference between the two sets may indicate that the summary is missing one or more concepts.
    Type: Application
    Filed: May 7, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod M. Mamtani
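The first technique above (cluster by concept, summarize per cluster, aggregate) can be sketched as below. Both callables are hypothetical stand-ins: a real system would cluster embedding vectors by similarity rather than use a keyword rule, and `summarize` would be the language model.

```python
def cluster_and_summarize(passages: list[str], concept_of, summarize) -> str:
    """Group passages by concept, summarize each concept cluster with the
    model, then aggregate the per-cluster summaries.

    concept_of: callable(passage) -> cluster key (stands in for embedding-
                similarity clustering).
    summarize:  callable(list of passages) -> summary text (stands in for
                the language model).
    """
    clusters = {}
    for p in passages:
        clusters.setdefault(concept_of(p), []).append(p)
    parts = [summarize(texts) for texts in clusters.values()]
    return " ".join(parts)


docs = ["GPU price rose.", "GPU supply fell.", "New CPU launched."]
summary = cluster_and_summarize(
    docs,
    concept_of=lambda p: "gpu" if "GPU" in p else "cpu",
    summarize=lambda texts: f"{len(texts)} notes.")
```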
  • Publication number: 20250094138
    Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of software code generation by large language models are described herein. In one embodiment, a method accesses a collection of software code samples that intermix sample code and human language description. The method generates prompts to an LLM to write code that performs as described by the human language description of the sample code. The method fine-tunes a large language model to generate software code based on a code generation loss function that evaluates code generated by the LLM in response to the prompts. The method generates an evaluation score for performance of the tuned large language model as a code generator based on code generation loss for second generated code. And, the method automatically signals that fine-tuning of the tuned large language model is complete in response to the evaluation score satisfying a threshold.
    Type: Application
    Filed: June 14, 2024
    Publication date: March 20, 2025
    Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
  • Publication number: 20250094866
    Abstract: Techniques are provided for correcting hallucinations produced by generative large language models (LLMs). In one technique, a computing system accesses first output generated by an LLM. The computing system identifies, within the first output, a plurality of assertions. The computing system determines that a first assertion in the plurality of assertions is false. The computing system generates a prompt that indicates that the first assertion is false. The computing system submits the prompt as input to the LLM. The computing system accesses second output that is generated by the LLM, where the second output includes a second assertion that is different than the first assertion and corresponds to the first assertion.
    Type: Application
    Filed: May 30, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod Murli Mamtani
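The identify-check-reprompt flow above can be sketched as a pass over the model's assertions. Every callable is a hypothetical stand-in: `split_assertions` for assertion extraction, `is_false` for the fact check, and `regenerate` for the LLM answering the correction prompt; the prompt wording is invented for illustration.

```python
def correct_hallucinations(output: str, split_assertions, is_false,
                           regenerate) -> str:
    """Find false assertions in model output and prompt for replacements.

    For each assertion judged false, build a prompt stating that it is
    false, submit it to the model, and splice in the corrected assertion.
    """
    for assertion in split_assertions(output):
        if is_false(assertion):
            prompt = f"The claim '{assertion}' is false; restate it correctly."
            output = output.replace(assertion, regenerate(prompt))
    return output


facts = {"The moon is made of cheese": False, "The moon orbits Earth": True}
fixed = correct_hallucinations(
    "The moon is made of cheese. The moon orbits Earth.",
    split_assertions=lambda t: [s.strip() for s in t.split(".") if s.strip()],
    is_false=lambda a: facts.get(a) is False,
    regenerate=lambda p: "The moon is made of rock")
```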
  • Patent number: 12253713
    Abstract: Provided is a display module. The display module includes: a backplane, a middle frame, a backlight module, and a display panel; wherein the backlight module and the display panel are sequentially laminated on the backplane; the middle frame includes a first frame body and a bearing structure, wherein the first frame body surrounds the backlight module, the bearing structure is disposed on the first frame body and extends in a direction towards a center of the backlight module, and the backlight module and the display panel are respectively disposed on two faces of the bearing structure; and the backlight module includes a light guide plate, and the display module has a view angle greater than or equal to 45° on at least one side of the display module.
    Type: Grant
    Filed: July 1, 2022
    Date of Patent: March 18, 2025
    Assignees: BEIJING BOE DISPLAY TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Tengfei Wang, Zheng Wang, Hetao Wang, Rui Guo, Shixin Geng, Jin Han, Tianyang Han