Patents by Inventor Jun Hu

Jun Hu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250119956
    Abstract: Systems and methods for signal transmission can include a wireless communication device receiving, from a wireless communication node, first information in a signaling indicative of a pattern of at least one physical random access channel (PRACH) or downlink (DL) signal transmission. The wireless communication device may transmit, to the wireless communication node, a PRACH signal corresponding to a DL signal index, in a random access (RA) occasion determined by the pattern.
    Type: Application
    Filed: December 17, 2024
    Publication date: April 10, 2025
    Inventors: Qiujin GUO, Mengzhu CHEN, Bo DAI, Jun XU, Xuan MA, Hong TANG, Xiaoying MA, Youjun HU, Jianqiang DAI
  • Publication number: 20250110857
    Abstract: A technique is directed to ascertaining application test coverage. The technique involves obtaining a source code identifier that identifies a source code section of unmarked source code for an application to be tested. The technique further involves providing a relation that includes the source code identifier and a feature identifier that identifies a feature provided by the source code section of unmarked source code. The technique further involves forming marked source code for the application to be tested from the relation and the unmarked source code. The marked source code for the application to be tested is testable to generate application test coverage results describing test coverage of the source code section.
    Type: Application
    Filed: September 29, 2023
    Publication date: April 3, 2025
    Inventors: Ishay Zekri, Yair Fodor, Jun Hu, Xiaojian Wang, Dianming Yuan
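The marking step described in the entry above (forming "marked" source code from a relation between source code identifiers and feature identifiers) can be sketched roughly as follows. This is an illustrative toy, not the patented implementation; the function name `mark_source` and the line-number-to-feature relation format are hypothetical.

```python
def mark_source(unmarked_lines, relation):
    """Form marked source code by inserting a feature-marker comment
    above each identified source section.

    unmarked_lines: list of source lines for the application under test.
    relation: dict mapping a 1-based line number (standing in for the
    "source code identifier") to a feature identifier string.
    """
    marked = []
    for lineno, line in enumerate(unmarked_lines, start=1):
        if lineno in relation:
            # The marker makes the section attributable to a feature,
            # so test runs can report per-feature coverage.
            marked.append(f"# FEATURE: {relation[lineno]}")
        marked.append(line)
    return marked

src = ["def login(user):", "    return check(user)"]
print(mark_source(src, {1: "auth.login"}))
```

A coverage tool could then count which `# FEATURE:` sections were exercised by a test run to produce feature-level coverage results.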
  • Publication number: 20250109730
    Abstract: A method and system, in the technical field of hydraulic turbines, for monitoring the operation of a hydraulic turbine under an extremely low water head.
    Type: Application
    Filed: September 27, 2024
    Publication date: April 3, 2025
    Inventors: Zhile Jiang, Kunlong Song, Ruiji Yi, Yufeng Hu, Pan Liu, Jun Lu, Yingli Wu, Qiming Zhong, Denghua Li, Jiang Yu, Wanli Guo, Honglei Ren
  • Patent number: 12267269
    Abstract: Methods, systems, and devices for configurations for reference signaling in mobile communication technology are described. An example method for wireless communication includes transmitting, by a network node to a wireless device, a first signaling comprising information associated with a first reference signal, the information comprising at least one of a configuration of the first reference signal, an update information of the first reference signal, or a valid period of the first reference signal.
    Type: Grant
    Filed: August 12, 2022
    Date of Patent: April 1, 2025
    Assignee: ZTE Corporation
    Inventors: Mengzhu Chen, Yuzhou Hu, Jun Xu, Qiujin Guo, Xiaoying Ma
  • Patent number: 12266092
    Abstract: The disclosure provides a marking method and apparatus for a continuous composite strip. The method includes: collecting a first sequence of images of the continuous composite strip; splicing multiple images of the first sequence of images according to a collection sequence to obtain a to-be-detected image with at least one electrode sheet structure; and, in a case where two electrode sheet edges are identified in the to-be-detected image, marking the position of the second electrode sheet edge in the collection sequence as an electrode sheet division position of the continuous composite strip. The method determines the electrode sheet division position by identifying the electrode sheet edges in the continuous composite strip, obtains the specific position information of the electrode sheet, and accurately performs the sheet division marking of the continuous composite strip.
    Type: Grant
    Filed: May 21, 2024
    Date of Patent: April 1, 2025
    Assignee: CONTEMPORARY AMPEREX TECHNOLOGY (HONG KONG) LIMITED
    Inventors: Baiquan Zhao, Xianfeng Xie, Hongyuan Li, Jun Hu, Shiping Feng, Qian Wu
  • Patent number: 12264146
    Abstract: Provided are a Rho kinase inhibitor, a method for preparing the same, and uses thereof. The Rho kinase inhibitor is a compound of Formula I, a stereoisomer thereof, or a pharmaceutically acceptable salt thereof. The Rho kinase inhibitor promotes endothelial cell expression of endothelin and prostenin and the synthesis and secretion of the vascular factor NO, has a dose-independent promoting effect on prostenin expression, and shows lower toxicity while being safer.
    Type: Grant
    Filed: May 6, 2019
    Date of Patent: April 1, 2025
    Assignee: Beijing Increase Innovative Drug Co., LTD
    Inventors: Baoxian Zhang, Hongwu Zhang, Jie Hu, Zhiyun Kang, Chunmei Xue, Wenhui Li, Yanwei Song, Zhenzhen Wu, Anping Chen, Fang Wang, Hengchun Ren, Jun Li
  • Publication number: 20250107362
    Abstract: A display substrate has a fan-out area and includes a plurality of conductive layers. The plurality of conductive layers include data lines, connection lines and fan-out lines. A connection line is electrically connected to a first data line, and crosses at least one data line and is insulated from the crossed data line. A first fan-out line is electrically connected to the connection line. A second fan-out line is electrically connected to a second data line. The first fan-out line includes a transfer line, is located in a different conductive layer from the second fan-out lines, and crosses at least one second fan-out line. The order of arrangement of the ends of the fan-out lines away from the display area in a first direction is the same as the order of arrangement of the data lines in the first direction.
    Type: Application
    Filed: March 30, 2023
    Publication date: March 27, 2025
    Inventors: Mengqi Wang, Ziyang Yu, Zhiliang Jiang, Rong Wang, Ming Hu, Haijun Qiu, Xiangdan Dong, Jun Yan, Fan He
  • Publication number: 20250102234
    Abstract: A vapor chamber includes a housing, a capillary structure, and a working fluid. Both the capillary structure and the working fluid are located in the housing. An area of a surface of the capillary structure in contact with an inner surface of the housing is less than an area of the inner surface. The housing has a cavity. The working fluid in a liquid state is located in the capillary structure. The working fluid in a gaseous state is located in the cavity. The housing includes a first region and a second region. The capillary structure includes a first part located in the first region and a second part located in the second region. The second part includes a trunk structure and a branch structure.
    Type: Application
    Filed: December 9, 2024
    Publication date: March 27, 2025
    Inventors: Jun Zhang, Qiang Hu, Liujun Zou, Jian Shi, Chenxi Feng
  • Patent number: 12261132
    Abstract: Chip sealing structures and methods of manufacture are described. In an embodiment, a chip structure includes a main body area formed of a substrate, a back-end-of-the-line (BEOL) build-up structure spanning over the substrate, and chip edge sidewalls extending from a back surface of the substrate to a top surface of the BEOL build-up structure and laterally surrounding the substrate and the BEOL build-up structure. In accordance with embodiments, the chip structure may further include a conformal sealing layer covering at least a first chip edge sidewall of the chip edge sidewalls and a portion of the top surface of the BEOL build-up structure, and forming a lip around the top surface of the BEOL build-up structure.
    Type: Grant
    Filed: October 12, 2023
    Date of Patent: March 25, 2025
    Assignee: Apple Inc.
    Inventors: Vidhya Ramachandran, Sanjay Dabral, SivaChandra Jangam, Jun Zhai, Kunzhong Hu
  • Patent number: 12262074
    Abstract: A method for transmitting data streams, applicable to a stream distributing device including a plurality of transmitters, includes: acquiring a data stream to be transmitted; acquiring, from the data stream, a plurality of data segments corresponding to the plurality of transmitters, wherein each of the transmitters corresponds to at least one data segment, and different transmitters correspond to different data segments; and transmitting the corresponding data segments to a stream combining device using the plurality of transmitters, such that the stream combining device, upon acquiring the data stream by combining the plurality of data segments, transmits the data stream to a playing device.
    Type: Grant
    Filed: June 16, 2021
    Date of Patent: March 25, 2025
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Tao Li, Chaofeng Dong, Dongbo Cao, Kejun Hu, Jun Yang
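The segmentation-and-recombination scheme in the entry above can be sketched as follows. This is a minimal toy under assumed details (fixed-size segments, round-robin assignment of indexed segments to transmitters); the names `split_stream` and `combine_stream` are hypothetical, not from the patent.

```python
def split_stream(data, num_tx, seg_size):
    """Cut the stream into indexed segments and assign them round-robin,
    so each transmitter gets at least one segment and no two transmitters
    share a segment."""
    segments = [(i, data[off:off + seg_size])
                for i, off in enumerate(range(0, len(data), seg_size))]
    return [segments[t::num_tx] for t in range(num_tx)]

def combine_stream(per_transmitter):
    """Stream combining device: reorder received segments by index and
    join them back into the original data stream."""
    received = sorted(seg for tx in per_transmitter for seg in tx)
    return b"".join(chunk for _, chunk in received)

per_tx = split_stream(b"abcdefghij", num_tx=3, seg_size=2)
print(combine_stream(per_tx))  # b'abcdefghij'
```

Carrying an explicit segment index is one simple way for the combining device to reconstruct the stream regardless of the order in which transmitters deliver their segments.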
  • Patent number: 12260585
    Abstract: The embodiments of the present disclosure provide a visual positioning method, the method may include obtaining a positioning image collected by an imaging device; obtaining a three-dimensional (3D) point cloud map associated with an area where the imaging device is located; determining a target area associated with the positioning image from the 3D point cloud map based on the positioning image; and determining positioning information of the imaging device based on the positioning image and the target area.
    Type: Grant
    Filed: June 18, 2022
    Date of Patent: March 25, 2025
    Assignee: BEIJING DIDI INFINITY TECHNOLOGY AND DEVELOPMENT CO., LTD.
    Inventors: Ling Bao, Bin Xu, Xiance Du, Jun Zhang, Xiaoqiang Teng, Zhiwei Ruan, Huanqing Zhou, Pengfei Xu, Ronghao Li, Runbo Hu, Hua Chai
  • Publication number: 20250092206
    Abstract: The invention belongs to the field of polymers, and relates to a biaxially oriented polypropylene dielectric film, a modified polypropylene material, and uses thereof. The raw materials for preparing the biaxially oriented polypropylene dielectric film comprise a modified polypropylene grafted with an alkenyl-containing functional monomer. This modified polypropylene comprises a structural unit derived from polypropylene as a matrix phase and a structural unit derived from the alkenyl-containing functional monomer as a dispersion phase, and its ash content is less than 50 ppm.
    Type: Application
    Filed: November 22, 2022
    Publication date: March 20, 2025
    Inventors: Yaru ZHANG, Qi LI, Qi ZHANG, Hao YUAN, Jinliang HE, Qing SHAO, Junluo LI, Hui QUAN, Mingti WANG, Jun HU, Juan LI, Dali GAO, Longgui ZHANG, Shixun HU, Hongchao LU, Lidong XIA
  • Publication number: 20250094686
    Abstract: Techniques for modifying a narrative point of view for content generated by a machine-learned model, such as a large language model (LLM), are provided. In one technique, a first textual content that was generated by an LLM is accessed. A narrative point of view (NPOV) detection operation is performed on a first portion of the first textual content to identify a first NPOV corresponding to the first portion of the first textual content. Based on an output, of the NPOV detection operation, that indicates that the first NPOV does not meet one or more NPOV criteria, the first portion of the first textual content is modified to generate a modified textual content. The modified textual content is submitted to the LLM, causing the LLM to generate a second textual content.
    Type: Application
    Filed: June 28, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod Murli Mamtani
  • Publication number: 20250094810
    Abstract: Method and apparatus for processing input information using an adaptable and continually learning neural network architecture comprising an encoder, at least one adaptor and at least one reconfigurator. The encoder, at least one reconfigurator and at least one adaptor determine whether the input information is out-of-distribution or in-distribution. If the input information is in distribution, the architecture extracts features from the input information, creates hyperdimensional vectors representing the features and classifies the hyperdimensional vectors. If the input information is out of distribution, the architecture creates at least one adaptor to operate with the encoder and the at least one reconfigurator to extract features from the input information, create hyperdimensional vectors representing the features and classify the hyperdimensional vectors.
    Type: Application
    Filed: September 3, 2024
    Publication date: March 20, 2025
    Inventors: Zachary A. DANIELS, Jun HU, Michael R. LOMNITZ, Philip MILLER, Aswin NADAMUNI RAGHAVAN, Yuzheng ZHANG, Michael PIACENTINO, David C. ZHANG, Michael ISNARDI, Saurabh FARKYA
  • Publication number: 20250094138
    Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of software code generation by large language models are described herein. In one embodiment, a method accesses a collection of software code samples that intermix sample code and human language description. The method generates prompts to an LLM to write code that performs as described by the human language description of the sample code. The method fine-tunes a large language model to generate software code based on a code generation loss function that evaluates code generated by the LLM in response to the prompts. The method generates an evaluation score for the performance of the tuned large language model as a code generator based on the code generation loss for second generated code. And, the method automatically signals that fine-tuning of the tuned large language model is complete in response to the evaluation score satisfying a threshold.
    Type: Application
    Filed: June 14, 2024
    Publication date: March 20, 2025
    Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
  • Publication number: 20250094816
    Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of text generation for large language models are described herein. In one embodiment, a method accesses a collection of text samples. The text samples include a natural language text prompt that combines content and instructions. The method extracts the instructions from the text prompt. The method fine-tunes a large language model to generate text in natural language based on a text generation loss function that penalizes non-compliance with the extracted instructions by a generated text response to the text prompt. The method generates an evaluation score for performance of the tuned large language model as a text generator based on a value of the text generation loss function for a second generated text response. And, the method automatically signals that the fine-tuning of the tuned large language model is complete in response to the evaluation score satisfying a threshold.
    Type: Application
    Filed: April 30, 2024
    Publication date: March 20, 2025
    Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
  • Publication number: 20250094687
    Abstract: Techniques for generating repetition-free text using a large language model (LLM) are provided. In one technique, textual content that was generated by an LLM is accessed, where the textual content comprises a plurality of sub-components including a first sub-component and a second sub-component. A first embedding that represents the first sub-component is generated and a second embedding that represents the second sub-component is generated. Based on a similarity between the first embedding and the second embedding, it is determined whether the second sub-component is repetitious with respect to the first sub-component. In response to determining that the second sub-component is repetitious with respect to the first sub-component, at least a portion of the second sub-component is removed from the textual content.
    Type: Application
    Filed: June 28, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod Murli Mamtani
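The embedding-similarity filter described in the entry above can be sketched as follows. This is an illustrative toy, not the patented implementation: a bag-of-words count stands in for a real sentence embedding, and the names `drop_repetitions` and the 0.8 threshold are assumptions for the example.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a learned sentence embedding: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def drop_repetitions(sub_components, threshold=0.8):
    """Keep a sub-component only if its embedding is not too similar
    to the embedding of any sub-component kept so far."""
    kept, kept_vecs = [], []
    for s in sub_components:
        v = embed(s)
        if all(cosine(v, kv) < threshold for kv in kept_vecs):
            kept.append(s)
            kept_vecs.append(v)
    return kept

text = ["The model is fast.", "The model is fast.", "It scales well."]
print(drop_repetitions(text))
```

A production system would use the LLM's own embedding model rather than word counts, but the keep-or-drop logic driven by pairwise similarity is the same shape.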
  • Publication number: 20250094814
    Abstract: Techniques are provided for fine-tuning large language models (LLMs) to reduce the instability of LLM outputs to prompts. In one technique, a plurality of prompts is stored. For each prompt of the plurality of prompts, a plurality of variants of that prompt is generated. A prompt generating LLM is fine-tuned based on that prompt and the plurality of variants. Each variant-prompt association (where the variant is generated based on the prompt and has an identical or similar meaning) is a training sample that is used to train or fine-tune the prompt generating LLM. The prompt generating LLM is configured to generate standardized prompts based on input prompts. In another technique, a response generating LLM is fine-tuned based on sets of training samples, each training sample in a set comprising a different variant of a prompt and a response that the response generating LLM generated based on the prompt.
    Type: Application
    Filed: September 4, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod M Mamtani
  • Publication number: 20250094866
    Abstract: Techniques for correcting hallucinations produced by generative large language models (LLMs). In one technique, a computing system accesses first output generated by an LLM. The computing system identifies, within the first output, a plurality of assertions. The computing system determines that a first assertion in the plurality of assertions is false. The computing system generates a prompt that indicates that the first assertion is false. The computing system submits the prompt as input to the LLM. The computing system accesses second output that is generated by the LLM, where the second output includes a second assertion that is different than the first assertion and corresponds to the first assertion.
    Type: Application
    Filed: May 30, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod Murli Mamtani
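The correction loop in the entry above (identify assertions, flag a false one, prompt the LLM to regenerate) can be sketched as follows. Everything here is a hypothetical stand-in: `llm`, `check_assertion`, and `split_assertions` are injected stubs, not a real model API or the patented fact-checking method.

```python
def correct_output(llm, check_assertion, first_output, split_assertions):
    """Identify assertions in the first output; if one is false, build a
    prompt saying so and return the LLM's regenerated second output."""
    for assertion in split_assertions(first_output):
        if not check_assertion(assertion):
            prompt = f"The following assertion is false: {assertion!r}. Please correct it."
            return llm(prompt)
    return first_output

# Toy fact table standing in for a real verification step.
facts = {"Paris is in France.": True, "Paris is in Spain.": False}
fixed = correct_output(
    llm=lambda prompt: "Paris is in France.",
    check_assertion=facts.get,
    first_output="Paris is in Spain.",
    split_assertions=lambda text: [text],
)
print(fixed)  # Paris is in France.
```

The key structural point is that the second output is produced by the same LLM, steered by a prompt that names the false assertion, rather than by editing the text directly.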
  • Publication number: 20250094704
    Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of text summarization for large language models are described herein. In one embodiment, a method accesses a collection of text samples. The text samples include a body of text and an example summary. The method fine-tunes a large language model (LLM) based on a loss function that compares the example summary and a generated summary generated by the LLM. The example and generated summaries are compared at sentence, paragraph, and/or article levels. The method generates an evaluation score for performance of the tuned LLM as a text summarizer based on a further comparison of a reference summary and a summary generated by the tuned LLM. The method then automatically determines to deploy the tuned LLM to a text summarization task in response to the evaluation score satisfying a threshold.
    Type: Application
    Filed: April 5, 2024
    Publication date: March 20, 2025
    Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI