Patents by Inventor Jun Qian

Jun Qian has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250124930
    Abstract: In an approach to improve the privacy of chat groups within a virtual world, embodiments of the present invention expand a chat area of a chat group to form an experience annulus according to a predetermined distance that the voice of a private-chat-group member can propagate. Responsive to identifying that an external user is interested in the chat group, embodiments generate and set a current topic representing a conversation in the chat group as an externally hearable topic that is perceivable by the external user, and generate a faux multi-person conversation, associated with the externally hearable topic, that corresponds to a real conversation among members of the chat group. Further, embodiments assign the faux utterances to chat group members based on a corresponding speaker index and utilize one or more corresponding avatars of the chat group members to present the faux utterances to the external user.
    Type: Application
    Filed: October 13, 2023
    Publication date: April 17, 2025
    Inventors: Jun Qian Zhou, Dan Zhang, Yuan Jie Song, Meng Chai, Xiao Feng Ji
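The annulus geometry and the speaker-index assignment described in this abstract can be sketched in a few lines of Python. This is a minimal illustration, not the patented method: `in_experience_annulus` and `assign_faux_utterances` are hypothetical names, and the plain Euclidean distance check stands in for whatever voice-propagation model the embodiments actually use.

```python
import math

def in_experience_annulus(center, user_pos, chat_radius, hear_radius):
    """True if an external user stands inside the annulus between the
    private chat area and the maximum voice-propagation distance."""
    d = math.dist(center, user_pos)
    return chat_radius < d <= hear_radius

def assign_faux_utterances(utterances, members):
    """Assign each faux utterance to a chat-group member by a
    round-robin speaker index, so their avatars can present it."""
    return [(members[i % len(members)], u) for i, u in enumerate(utterances)]
```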
  • Patent number: 12272047
    Abstract: A neural network is trained for use in a substrate residue classification system by obtaining ground truth residue level measurements of a top layer of a calibration substrate at a plurality of locations, each location at a defined position for a die being fabricated on the substrate. A plurality of color images of the calibration substrate are obtained, each color image corresponding to a region for a die being fabricated on the substrate. A neural network is trained to convert color images of die regions from an in-line substrate imager to residue level measurements for the top layer in the die region.
    Type: Grant
    Filed: October 27, 2023
    Date of Patent: April 8, 2025
    Assignee: Applied Materials, Inc.
    Inventors: Sivakumar Dhandapani, Arash Alahgholipouromrani, Dominic J. Benvegnu, Jun Qian, Kiran Lall Shrestha
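The calibration workflow in this abstract — ground-truth residue levels per die location, one color image per die region, train, then predict — can be sketched with a deliberately simplified model. The sketch below assumes (hypothetically) that residue level is a linear function of a die region's mean RGB color and fits it by least squares; the actual system trains a neural network on in-line imager data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data: one 8x8 RGB image per die region and
# one ground-truth residue measurement per die location.
n_dies = 50
images = rng.random((n_dies, 8, 8, 3))
true_w = np.array([0.2, 0.5, 0.3])          # assumed color-to-residue mix
levels = images.mean(axis=(1, 2)) @ true_w  # ground-truth residue levels

# Stand-in for the trained network: a linear model on mean-RGB features.
features = images.mean(axis=(1, 2))                    # shape (n_dies, 3)
w, *_ = np.linalg.lstsq(features, levels, rcond=None)  # fit the weights

def predict_residue(image):
    """Convert a die-region color image to a residue-level estimate."""
    return float(image.mean(axis=(0, 1)) @ w)
```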
  • Publication number: 20250107251
    Abstract: The present application discloses a method for making an image sensor, wherein an additional supplementary oxide layer is added in a PD area of a pixel cell before the formation of a gate oxide layer, a layer of a first photoresist is added and photoetching is used to define a PD area of a non-pixel cell, a supplementary oxide layer outside the PD area is removed by etching, retaining the supplementary oxide layer in the PD area. Thus, a relatively thick oxide layer can be formed in the PD area before polysilicon generation, blanket etching can be performed on the surface of the PD area during subsequent DG-ET (double-gate etching) and poly etch, and surface damage can be avoided during etching, reducing the plasma interference, and ultimately, the pixel dark current to improve pixel performance.
    Type: Application
    Filed: May 14, 2024
    Publication date: March 27, 2025
    Applicant: Shanghai Huali Microelectronics Corporation
    Inventors: Xing Fang, Chenchen Qiu, Jun Qian, Chang Sun, Zhengying Wei
  • Patent number: 12257665
    Abstract: During chemical mechanical polishing of a substrate, a signal value that depends on a thickness of a layer in a measurement spot on a substrate undergoing polishing is determined by a first in-situ monitoring system. An image of at least the measurement spot of the substrate is generated by a second in-situ imaging system. Machine vision processing, e.g., a convolutional neural network, is used to determine a characterizing value for the measurement spot based on the image. Then a measurement value is calculated based on both the characterizing value and the signal value.
    Type: Grant
    Filed: February 2, 2023
    Date of Patent: March 25, 2025
    Assignee: Applied Materials, Inc.
    Inventors: Benjamin Cherian, Jun Qian, Nicholas A. Wiswell, Dominic J. Benvegnu, Boguslaw A. Swedek, Thomas H. Osterheld
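The fusion step this abstract describes — a signal value from one monitoring system corrected by an image-derived characterizing value — can be illustrated with stand-ins. Here the machine-vision step (a convolutional network in the patent) is replaced by a simple bright-pixel fraction, and the fusion rule and the `correction` factor are purely illustrative.

```python
def characterizing_value(image_patch):
    """Stand-in for the machine-vision step: the fraction of pixels in
    the measurement-spot image brighter than a threshold, taken here as
    a proxy for exposed underlying layer."""
    bright = sum(1 for px in image_patch if px > 0.8)
    return bright / len(image_patch)

def measurement_value(signal_value, char_value, correction=0.5):
    """Fuse the in-situ monitor's signal value with the image-derived
    characterizing value: discount the apparent thickness where the
    image indicates the underlying layer is breaking through."""
    return signal_value * (1.0 - correction * char_value)
```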
  • Patent number: 12261038
    Abstract: Provided herein are methods and apparatus for filling one or more gaps on a semiconductor substrate. The disclosed embodiments are especially useful for forming seam-free, void-free fill in both narrow and wide features. The methods may be performed without any intervening etching operations to achieve a single step deposition. In various implementations, a first operation is performed using a novel PEALD fill mechanism to fill narrow gaps and line wide gaps. A second operation may be performed using PECVD methods to continue filling the wide gaps.
    Type: Grant
    Filed: September 2, 2021
    Date of Patent: March 25, 2025
    Assignee: Lam Research Corporation
    Inventors: Hu Kang, Shankar Swaminathan, Jun Qian, Wanki Kim, Dennis M. Hausmann, Bart J. van Schravendijk, Adrien LaVoie
  • Publication number: 20250094716
    Abstract: Techniques for language model (LM) summarization using semantical clustering are provided. In one technique, a plurality of concepts reflected in text data is identified. A plurality of concept clusters is generated based on similarity among the plurality of concepts. Thus, some concept clusters may include multiple concepts. For each concept cluster of the plurality of concept clusters, an LM generates a summary of the text corresponding to that concept cluster. A summary response of the text data is generated by aggregating the summary of each concept cluster of the plurality of concept clusters. In another technique, an LM generates a summary based on text data. A first set of concepts reflected in the summary is identified and a second set of concepts reflected in the text data is identified. A difference between the two sets may indicate that the summary is missing one or more concepts.
    Type: Application
    Filed: May 7, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod M. Mamtani
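The cluster-then-summarize pipeline in this abstract can be sketched end to end. In this sketch, word-overlap (Jaccard) similarity stands in for the semantic similarity the patent computes over concepts, greedy seeding stands in for the clustering method, and `lm_summarize` is a caller-supplied stub in place of a real language model.

```python
def jaccard(a, b):
    """Word-overlap similarity; a stand-in for semantic similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def cluster_concepts(sentences, threshold=0.3):
    """Greedy clustering: attach each sentence to the first cluster
    whose seed sentence is similar enough, else start a new cluster."""
    clusters = []
    for s in sentences:
        for c in clusters:
            if jaccard(c[0], s) >= threshold:
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

def summarize(text_data, lm_summarize):
    """Summarize each concept cluster with the LM, then aggregate the
    per-cluster summaries into one summary response."""
    sentences = [s.strip() for s in text_data.split(".") if s.strip()]
    clusters = cluster_concepts(sentences)
    return " ".join(lm_summarize(" . ".join(c)) for c in clusters)
```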
  • Publication number: 20250094686
    Abstract: Techniques for modifying a narrative point of view for content generated by a machine-learned model, such as a large language model (LLM), are provided. In one technique, a first textual content that was generated by an LLM is accessed. A narrative point of view (NPOV) detection operation is performed on a first portion of the first textual content to identify a first NPOV corresponding to the first portion of the first textual content. Based on an output, of the NPOV detection operation, that indicates that the first NPOV does not meet one or more NPOV criteria, the first portion of the first textual content is modified to generate a modified textual content. The modified textual content is submitted to the LLM, causing the LLM to generate a second textual content.
    Type: Application
    Filed: June 28, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod Murli Mamtani
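The detect-and-resubmit loop in this abstract can be sketched with stubs. The NPOV criterion here is a toy (flag first-person sentences — the patent leaves the detector unspecified), and `llm_rewrite` is a caller-supplied stand-in for resubmitting the failing portion to the LLM.

```python
FIRST_PERSON = {"i", "we", "my", "our", "me", "us"}

def npov_ok(sentence):
    """Toy NPOV criterion: reject sentences written in first person."""
    return not FIRST_PERSON & set(sentence.lower().split())

def enforce_npov(text, llm_rewrite, max_rounds=3):
    """Detect portions that fail the NPOV check and resubmit them to
    the (stubbed) LLM until all portions pass or rounds run out."""
    for _ in range(max_rounds):
        parts = text.split(". ")
        if all(npov_ok(p) for p in parts):
            return text
        text = ". ".join(llm_rewrite(p) if not npov_ok(p) else p
                         for p in parts)
    return text
```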
  • Publication number: 20250094814
    Abstract: Techniques are provided for fine-tuning large language models (LLMs) to reduce the instability of LLM outputs across prompt variations. In one technique, a plurality of prompts is stored. For each prompt of the plurality of prompts, a plurality of variants of that prompt is generated. A prompt generating LLM is fine-tuned based on that prompt and the plurality of variants. Each variant-prompt association (where the variant is generated based on the prompt and has an identical or similar meaning) is a training sample that is used to train or fine-tune the prompt generating LLM. The prompt generating LLM is configured to generate standardized prompts based on input prompts. In another technique, a response generating LLM is fine-tuned based on sets of training samples, each training sample in a set comprising a different variant of a prompt and a response that the response generating LLM generated based on the prompt.
    Type: Application
    Filed: September 4, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod M Mamtani
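The variant-to-prompt training-data construction in this abstract is easy to sketch. The variant generator below is a toy (surface rewrites with the same meaning); the patent generates variants with an LLM. Function names are hypothetical.

```python
def make_variants(prompt):
    """Stand-in variant generator: simple surface rewrites that keep
    the prompt's meaning."""
    return [prompt.lower(), prompt.upper(), prompt + ", please"]

def build_training_samples(prompts):
    """Each (variant, prompt) pair trains a prompt-generating model to
    map a noisy input prompt back to its standardized form."""
    samples = []
    for p in prompts:
        samples += [(v, p) for v in make_variants(p)]
    return samples
```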
  • Publication number: 20250094865
    Abstract: Techniques for ensuring that language models follow instructions indicated in prompts are provided. In one technique, a first language model generates a response based on a prompt. A set of instructions in the prompt is identified. For each instruction in the set, a second language model determines whether the response indicates that the first language model followed the instruction. In another technique, for each prompt of a plurality of prompts: (1) a first language model generates a response based on the prompt; (2) multiple instructions are identified based on the prompt; (3) a second language model generates, based on the multiple instructions, an output that indicates whether the first language model followed each instruction; and (4) the prompt, the response, and the multiple instructions are stored in a training instance. The first language model is fine-tuned based on the training instances.
    Type: Application
    Filed: April 8, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod M. Mamtani
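The per-instruction audit in this abstract can be sketched with two stand-ins: a toy instruction extractor (lines beginning with `-`, since the patent leaves the extraction method open) and a keyword-containment check in place of the judging language model.

```python
def extract_instructions(prompt):
    """Toy instruction extractor: each line beginning with '-' is
    treated as one instruction."""
    return [l[1:].strip() for l in prompt.splitlines() if l.startswith("-")]

def followed(instruction, response):
    """Stand-in for the second (judging) language model: a crude
    keyword-containment check on the instruction's last word."""
    return instruction.split()[-1].lower() in response.lower()

def audit(prompt, response):
    """Report, per instruction, whether the response followed it."""
    return {ins: followed(ins, response)
            for ins in extract_instructions(prompt)}
```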
  • Publication number: 20250094704
    Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of text summarization for large language models are described herein. In one embodiment, a method accesses a collection of text samples. The text samples include a body of text and an example summary. The method fine-tunes a large language model (LLM) based on a loss function that compares the example summary and a generated summary generated by the LLM. The example and generated summaries are compared at sentence, paragraph, and/or article levels. The method generates an evaluation score for performance of the tuned LLM as a text summarizer based on a further comparison of a reference summary and a summary generated by the tuned LLM. The method then automatically determines to deploy the tuned LLM to a text summarization task in response to the evaluation score satisfying a threshold.
    Type: Application
    Filed: April 5, 2024
    Publication date: March 20, 2025
    Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
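The multi-granularity comparison in this abstract — example summary versus generated summary at more than one level — can be sketched with word overlap in place of a learned comparison. This toy loss covers only the sentence and article levels, and the weighting is an arbitrary choice, not the patent's.

```python
def overlap(a, b):
    """Word-overlap score between two texts (0 to 1)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def summary_loss(example, generated):
    """Loss that falls as the generated summary agrees with the example
    summary at both the sentence level and the whole-article level."""
    ex_sents = [s for s in example.split(".") if s.strip()]
    gen_sents = [s for s in generated.split(".") if s.strip()]
    # Sentence level: best-matching generated sentence per example sentence.
    sent_score = sum(max((overlap(e, g) for g in gen_sents), default=0.0)
                     for e in ex_sents) / max(len(ex_sents), 1)
    # Article level: overall word overlap of the two summaries.
    art_score = overlap(example, generated)
    return 1.0 - 0.5 * (sent_score + art_score)
```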
  • Publication number: 20250094816
    Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of text generation for large language models are described herein. In one embodiment, a method accesses a collection of text samples. The text samples include a natural language text prompt that combines content and instructions. The method extracts the instructions from the text prompt. The method fine-tunes a large language model to generate text in natural language based on a text generation loss function that penalizes non-compliance with the extracted instructions by a generated text response to the text prompt. The method generates an evaluation score for performance of the tuned large language model as a text generator based on a value of the text generation loss function for a second generated text response. And, the method automatically signals that the fine tuning of the tuned large language model is complete in response to the evaluation score satisfying a threshold.
    Type: Application
    Filed: April 30, 2024
    Publication date: March 20, 2025
    Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
  • Publication number: 20250094138
    Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of software code generation by large language models are described herein. In one embodiment, a method accesses a collection of software code samples that intermix sample code and human language description. The method generates prompts to an LLM to write code that performs as described by the human language description of the sample code. The method fine-tunes a large language model to generate software code based on a code generation loss function that evaluates code generated by the LLM in response to the prompts. The method generates an evaluation score for performance of the tuned large language model as a code generator based on code generation loss for second generated code. And, the method automatically signals that fine-tuning of the tuned large language model is complete in response to the evaluation score satisfying a threshold.
    Type: Application
    Filed: June 14, 2024
    Publication date: March 20, 2025
    Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
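One concrete way to realize a code-generation loss like the one this abstract names is to execute the candidate code and score it against checks derived from the sample's description. The sketch below is an assumption about how such a loss could be built, not the patent's definition: the loss is simply the fraction of checks the generated code fails.

```python
def code_generation_loss(generated_code, checks):
    """Score generated code by executing it, then evaluating each check
    expression in its namespace; loss = fraction of failed checks."""
    namespace = {}
    try:
        exec(generated_code, namespace)  # run the candidate code
    except Exception:
        return 1.0                       # unrunnable code fails everything
    failed = 0
    for check in checks:
        try:
            if not eval(check, namespace):
                failed += 1
        except Exception:
            failed += 1
    return failed / max(len(checks), 1)
```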
  • Publication number: 20250094687
    Abstract: Techniques for generating repetition-free text using a large language model (LLM) are provided. In one technique, textual content that was generated by an LLM is accessed, where the textual content comprises a plurality of sub-components including a first sub-component and a second sub-component. A first embedding that represents the first sub-component is generated and a second embedding that represents the second sub-component is generated. Based on a similarity between the first embedding and the second embedding, it is determined whether the second sub-component is repetitious with respect to the first sub-component. In response to determining that the second sub-component is repetitious with respect to the first sub-component, at least a portion of the second sub-component is removed from the textual content.
    Type: Application
    Filed: June 28, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod Murli Mamtani
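The embed-compare-remove loop in this abstract can be sketched directly. A bag-of-words counter stands in for the learned embedding the patent would use, and cosine similarity with a fixed threshold decides repetitiousness; the threshold value is illustrative.

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words embedding; a stand-in for a learned one."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def drop_repetitions(subcomponents, threshold=0.8):
    """Keep a sub-component only if its embedding is not too similar to
    any sub-component already kept."""
    kept = []
    for s in subcomponents:
        if all(cosine(embed(s), embed(k)) < threshold for k in kept):
            kept.append(s)
    return kept
```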
  • Publication number: 20250097171
    Abstract: Systems, methods, and other embodiments associated with automated fine-tuning of chatbot performance for large language models are described herein. In one embodiment, a method accesses a collection of sample conversations between two entities. An individual sample conversation includes one or more rounds of natural language example prompt by a querent and example response by an agent. The method fine-tunes an LLM to generate responses in natural language based on a chatbot loss function that evaluates first responses generated by the LLM to the example prompts by the querent. The method generates an evaluation score for performance of the tuned LLM as a chatbot based on second responses generated by the tuned LLM to test prompts from a test conversation. And, the method automatically signals that the fine-tuning of the tuned LLM is complete in response to the evaluation score satisfying a threshold.
    Type: Application
    Filed: July 10, 2024
    Publication date: March 20, 2025
    Inventors: Yazhe HU, Mengqing GUO, Zheng WANG, Tao SHENG, Jun QIAN, Vinod MAMTANI
  • Publication number: 20250095096
    Abstract: The present disclosure relates to utilizing large language models (LLMs) to facilitate generation of incident reports or similar documents. One or more initial inputs may be received from a user, and one or more example incident reports may be identified. The one or more example incident reports and the one or more initial inputs may be sent to an LLM. A reviewable version of an incident report may be accessed that is based on output that the LLM generated based on the example incident reports and the one or more initial inputs. The reviewable version of the incident report may be presented in a human readable format via a graphical user interface (GUI). A modification corresponding to the reviewable version of the incident report may be received via the GUI. The modification and the reviewable version of the incident report may be sent to the LLM to cause the LLM to generate an updated version of the incident report.
    Type: Application
    Filed: September 13, 2024
    Publication date: March 20, 2025
    Inventors: Iman Zadeh, Christophe J. Gerard, Qiu Qin, Ziqun Ye, Aditya Banerjee, Jun Qian, Nicole E. Hess
  • Publication number: 20250094866
    Abstract: Techniques are provided for correcting hallucinations produced by generative large language models (LLMs). In one technique, a computing system accesses first output generated by an LLM. The computing system identifies, within the first output, a plurality of assertions. The computing system determines that a first assertion in the plurality of assertions is false. The computing system generates a prompt that indicates that the first assertion is false. The computing system submits the prompt as input to the LLM. The computing system accesses second output that is generated by the LLM, where the second output includes a second assertion that is different from the first assertion and corresponds to the first assertion.
    Type: Application
    Filed: May 30, 2024
    Publication date: March 20, 2025
    Inventors: Zheng Wang, Yazhe Hu, Mengqing Guo, Tao Sheng, Jun Qian, Vinod Murli Mamtani
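The identify-flag-regenerate loop in this abstract can be sketched with stubs for the two unspecified pieces: a truth oracle (here, a caller-supplied predicate) and the LLM itself (a caller-supplied callable). Splitting output into one assertion per sentence is a toy simplification.

```python
def split_assertions(output):
    """Toy assertion splitter: one assertion per sentence."""
    return [s.strip() for s in output.split(".") if s.strip()]

def correct_hallucinations(output, is_true, llm):
    """Identify false assertions, flag them in a follow-up prompt, and
    return the LLM's regenerated output (oracle and LLM are stubs)."""
    false_ones = [a for a in split_assertions(output) if not is_true(a)]
    if not false_ones:
        return output
    prompt = "These assertions are false: " + "; ".join(false_ones)
    return llm(prompt)
```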
  • Patent number: 12249170
    Abstract: The present embodiments relate to a language identification system for predicting a language and text content of text lines in an image-based document. The language identification system uses a trainable neural network model that integrates multiple neural network models in a single unified end-to-end trainable architecture. A CNN and an RNN of the model can process text lines and derive visual and contextual features of the text lines. The derived features can be used to predict a language and text content for the text line. The CNN and the RNN can be jointly trained by determining losses based on the predicted language and content and corresponding language labels and text labels for each text line.
    Type: Grant
    Filed: August 26, 2022
    Date of Patent: March 11, 2025
    Assignee: Oracle International Corporation
    Inventors: Liyu Gong, Yuying Wang, Zhonghai Deng, Iman Zadeh, Jun Qian
  • Publication number: 20250077868
    Abstract: Systems, methods, and other embodiments associated with contribution metric-based pruning of a neural network are described. In one embodiment, an example method includes accessing a trained neural network that has a plurality of channels. The neural network is to be evaluated for pruning of the channels. The example method may also include determining contribution metrics for the channels by measuring changes in error of the neural network with individual channels removed in turn. The contribution metrics are determined based at least in part on higher order analysis of the changes. And, the example method may also include pruning out of the neural network a set of the channels for which the contribution metrics do not satisfy a threshold.
    Type: Application
    Filed: August 29, 2023
    Publication date: March 6, 2025
    Inventors: Baopu LI, Tao SHENG, Jun QIAN
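The remove-each-channel-in-turn procedure in this abstract can be sketched on a toy "network" — a single linear layer whose columns play the role of channels. This sketch uses only the first-order change in error (the patent additionally applies higher-order analysis), and the threshold is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "network": a linear layer; its input columns act as channels.
X = rng.normal(size=(200, 6))
w = np.array([2.0, -1.5, 0.01, 1.0, 0.0, 0.02])  # channels 2, 4, 5 near-dead
y = X @ w

def error_with_channels(mask):
    """Refit on the surviving channels and return mean-squared error."""
    Xm = X[:, mask]
    coef, *_ = np.linalg.lstsq(Xm, y, rcond=None)
    resid = y - Xm @ coef
    return float(resid @ resid / len(y))

base = error_with_channels(np.ones(6, dtype=bool))
contribution = []
for ch in range(6):                       # remove each channel in turn
    mask = np.ones(6, dtype=bool)
    mask[ch] = False
    contribution.append(error_with_channels(mask) - base)

# Prune channels whose contribution metric misses the threshold.
pruned = [ch for ch, c in enumerate(contribution) if c < 1e-3]
```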
  • Publication number: 20250062163
    Abstract: Disclosed herein is a chemical mechanical polishing apparatus, comprising a platen to support a polishing pad; a carrier head to hold a surface of a substrate against the polishing pad; a motor to generate relative motion between the platen and the carrier head so as to polish an overlying layer on the substrate; an in-situ acoustic monitoring system including an acoustic sensor that receives acoustic energy from the substrate and the polishing pad; and a controller configured to detect an abnormal acoustic event based on measurements from the in-situ acoustic monitoring system, and determine a type of anomaly based on signals measured by the in-situ acoustic monitoring system during the abnormal acoustic event.
    Type: Application
    Filed: August 18, 2023
    Publication date: February 20, 2025
    Inventors: Nicholas A. Wiswell, Benjamin Cherian, Haoquan Fang, Jun Qian, Thomas H. Osterheld, Sohrab Pourmand
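The detect-then-classify flow in this abstract can be sketched as a windowed energy monitor. Everything here is an assumption for illustration: the RMS-versus-baseline rule stands in for the controller's detection logic, and the anomaly labels (`"pad_defect"`, `"slurry_event"`) are hypothetical names, not the patent's taxonomy.

```python
def detect_acoustic_events(signal, window=5, sigma=4.0):
    """Flag windows whose RMS acoustic energy spikes far above a slowly
    adapting baseline, and crudely type each anomaly by spike size."""
    events = []
    baseline = None
    for i in range(0, len(signal) - window + 1, window):
        chunk = signal[i:i + window]
        rms = (sum(x * x for x in chunk) / window) ** 0.5
        if baseline is None:
            baseline = rms                  # first window sets the baseline
            continue
        if rms > sigma * baseline:
            kind = "pad_defect" if rms > 2 * sigma * baseline else "slurry_event"
            events.append((i, kind))
        baseline = 0.9 * baseline + 0.1 * rms  # slow-moving baseline update
    return events
```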
  • Patent number: D1067472
    Type: Grant
    Filed: October 31, 2024
    Date of Patent: March 18, 2025
    Assignee: Huzhou Peace Technology Co., Ltd.
    Inventor: Jun Qian