Patents by Inventor Yukun LI
Yukun LI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11946349
Abstract: A downhole throttling device based on wireless control includes an inlet nozzle, a throttling assembly, an electrical sealing cylinder, a gas guide cylinder, a lower adapter sleeve, an end socket, a female sleeve, and electrical components. The inlet nozzle is connected to the throttling assembly, the throttling assembly is connected to the electrical sealing cylinder and the gas guide cylinder, the electrical sealing cylinder and the gas guide cylinder are both connected to the lower adapter sleeve, the lower adapter sleeve is respectively connected to the end socket and the female sleeve, and the electrical components are arranged in the electrical sealing cylinder. A throttling effect is achieved by detecting the temperature and pressure in a tube with a temperature/pressure sensor in the electrical components and by using a circuit control assembly to drive a motor that rotates a movable valve in the throttling assembly, thereby achieving wireless control over downhole throttling.
Type: Grant
Filed: September 15, 2020
Date of Patent: April 2, 2024
Assignees: PetroChina Company Limited; Sichuan Shengnuo Oil and Gas Engineering Technology Service Co., Ltd.
Inventors: Jun Xie, Huiyun Ma, Jian Yang, Chenggang Yu, Yukun Fu, Qiang Yin, Kui Li, Yuan Jiang, Dezheng Yi, Yanyan Liu, Haifeng Zhong, Xiaodong Liu
-
Publication number: 20240072071
Abstract: Provided are a display panel, a detection device therefor, and a display device. In an embodiment, the display panel includes an array layer, along a thickness direction of the display panel, the array layer including first and second conductive layers, and at least an insulating layer being located between the first and second conductive layers; a light-emitting element located at a side of the array layer facing a light-exiting surface of the display panel; and a first through-hole. In an embodiment, the first and second conductive layers are connected to each other through the first through-hole, and at least one first through-hole is reused as an alignment connection hole; and/or, along the thickness direction of the display panel, orthographic projections of at least two first through-holes onto a plane of the light-exiting surface of the display panel have different shapes.
Type: Application
Filed: November 3, 2023
Publication date: February 29, 2024
Applicant: Tianma Advanced Display Technology Institute (Xiamen) Co., Ltd.
Inventors: Zhenyu JIA, Chenglong YANG, Kerui XI, Tianyi WU, Xiaoxiang HE, Ping AN, Yingteng ZHAI, Liwei ZHANG, Yukun HUANG, Aowen LI
-
Publication number: 20230301069
Abstract: A preparation method for a semiconductor device includes: providing a semiconductor substrate, the semiconductor substrate having shallow trenches and active regions defined by the shallow trenches, the active regions extending in a first direction; forming isolation layers in the first direction at interfaces between the shallow trenches and the active regions, the isolation layers and the active regions being inverse types to each other; forming shallow trench isolation structures in the shallow trenches; and forming word-line structures, the word-line structures extending in a second direction and sequentially passing through the shallow trench isolation structures and the active regions.
Type: Application
Filed: May 28, 2021
Publication date: September 21, 2023
Applicant: CHANGXIN MEMORY TECHNOLOGIES, INC.
Inventor: Yukun LI
-
Publication number: 20230187448
Abstract: A semiconductor structure includes a semiconductor substrate, a first isolation dam, a plurality of switching transistors and a second isolation dam. The semiconductor substrate includes a trench, an isolation region formed by a region where the trench is located, a plurality of active regions defined by the isolation region, and an electrical isolation layer, the electrical isolation layer being located on one side of the trench away from an opening of the trench; the first isolation dam fills the trench; each switching transistor is at least partially embedded in an active region of the semiconductor substrate; and the second isolation dam is at least partially located between the first isolation dam and the electrical isolation layer.
Type: Application
Filed: June 2, 2021
Publication date: June 15, 2023
Applicant: CHANGXIN MEMORY TECHNOLOGIES, INC.
Inventors: Yukun LI, Tao CHEN
-
Publication number: 20230053536
Abstract: The present disclosure provides an integrated circuit memory and a method of forming the same. The memory includes: a substrate, in which a plurality of active areas arranged in an array are provided; a conducting line group, formed in the substrate, and including a plurality of conducting lines sequentially arranged along a first direction, each conducting line extending in a second direction and being connected to the corresponding active area, and ends of two adjacent conducting lines on a same side being staggered from each other in the second direction; and a plurality of contact pads, formed on the substrate, one of the contact pads being connected to an end of one conducting line, and two adjacent contact pads located on the same side being staggered in the second direction.
Type: Application
Filed: June 30, 2021
Publication date: February 23, 2023
Inventor: Yukun LI
-
Patent number: 11562150
Abstract: The present disclosure proposes a language generation method and apparatus. The method includes: performing encoding processing on an input sequence by using a preset encoder to generate a hidden state vector corresponding to the input sequence; in response to a granularity category of a second target segment being a phrase, decoding a first target segment vector, the hidden state vector, and a position vector corresponding to the second target segment by using N decoders to generate N second target segments; determining a loss value based on differences between the respective N second target segments and a second target annotated segment; and performing parameter updating on the preset encoder, a preset classifier, and the N decoders based on the loss value to generate an updated language generation model for performing language generation.
Type: Grant
Filed: September 24, 2020
Date of Patent: January 24, 2023
Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
Inventors: Han Zhang, Dongling Xiao, Yukun Li, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
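The loss step in the abstract above, comparing N decoder outputs against one annotated target segment, can be sketched roughly as follows. This is a minimal illustration only: `multi_decoder_loss`, the plain-softmax decoder outputs, and the averaged cross-entropy are assumptions for exposition, not the patented implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_decoder_loss(decoder_logits, target_ids):
    """Average cross-entropy of N decoder outputs against one annotated
    target segment: each decoder contributes the mean negative
    log-likelihood of the annotated tokens."""
    losses = []
    for logits in decoder_logits:  # one (seq_len, vocab) array per decoder
        probs = softmax(logits)
        # negative log-likelihood of the annotated token at each position
        nll = -np.log(probs[np.arange(len(target_ids)), target_ids] + 1e-12)
        losses.append(nll.mean())
    return float(np.mean(losses))
```

A single scalar like this is what a parameter update for the encoder, classifier, and all N decoders would be driven by.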
-
Patent number: 11556715
Abstract: A method for training a language model based on various word vectors, a device and a medium, which relate to the field of natural language processing technologies in artificial intelligence, are disclosed. An implementation includes inputting a first sample text language material including a first word mask into the language model, and outputting a context vector of the first word mask via the language model; acquiring a first probability distribution matrix of the first word mask based on the context vector of the first word mask and a first word vector parameter matrix, and a second probability distribution matrix of the first word mask based on the context vector of the first word mask and a second word vector parameter matrix; and training the language model based on a word vector corresponding to the first word mask.
Type: Grant
Filed: November 18, 2020
Date of Patent: January 17, 2023
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Zhen Li, Yukun Li, Yu Sun
-
Publication number: 20230004753
Abstract: The present disclosure provides a method, apparatus, electronic device and storage medium for training a semantic similarity model, which relates to the field of artificial intelligence. A specific implementation solution is as follows: obtaining a target field to be used by a semantic similarity model to be trained; calculating respective correlations between the target field and the application fields corresponding to each training dataset among multiple known training datasets; and training the semantic similarity model with the training datasets in turn, according to those correlations.
Type: Application
Filed: March 22, 2021
Publication date: January 5, 2023
Inventors: Zhen Li, Yukun Li, Yu Sun
-
Patent number: 11526668
Abstract: A method and apparatus for obtaining word vectors based on a language model, a device and a storage medium are disclosed, which relates to the field of natural language processing technologies in artificial intelligence. An implementation includes inputting each of at least two first sample text language materials into the language model, and outputting a context vector of a first word mask in each first sample text language material via the language model; determining the word vector corresponding to each first word mask based on a first word vector parameter matrix, a second word vector parameter matrix and a fully connected matrix respectively; and training the language model and the fully connected matrix based on the word vectors corresponding to the first word masks in the at least two first sample text language materials, so as to obtain the word vectors.
Type: Grant
Filed: November 12, 2020
Date of Patent: December 13, 2022
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Zhen Li, Yukun Li, Yu Sun
-
Patent number: 11520991
Abstract: The present disclosure provides a method, apparatus, electronic device and storage medium for processing a semantic representation model, and relates to the field of artificial intelligence technologies. A specific implementation solution is: collecting a training corpus set including a plurality of training corpuses; training the semantic representation model using the training corpus set based on at least one of lexicon, grammar and semantics. In the present disclosure, by building the unsupervised or weakly-supervised training task at three different levels, namely, lexicon, grammar and semantics, the semantic representation model is enabled to learn knowledge at the levels of lexicon, grammar and semantics from massive data, enhance the capability of universal semantic representation and improve the processing effect of the NLP task.
Type: Grant
Filed: May 28, 2020
Date of Patent: December 6, 2022
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Yu Sun, Haifeng Wang, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu
-
Patent number: 11463476
Abstract: A character string classification method, a character string classification system, a character string classification device, and a computer readable storage medium are provided. The method includes: acquiring a to-be-classified character string, inputting the to-be-classified character string to a feature extractor to obtain a feature vector of the to-be-classified character string, and inputting the feature vector to a classifier to obtain a classification result of the to-be-classified character string. With the character string classification method, only the features of the character string itself are used in the character string classification process. That is, the to-be-classified character string is directly inputted to the feature extractor to obtain the feature vector, and the classifier classifies the to-be-classified character string based on the feature vector, thereby eliminating the requirement for other information associated with the character string.
Type: Grant
Filed: January 17, 2018
Date of Patent: October 4, 2022
Assignee: GUANGDONG UNIVERSITY OF TECHNOLOGY
Inventors: Wenyin Liu, Zhenguo Yang, Huaping Yuan, Xu Chen, Yukun Li
-
Patent number: 11461549
Abstract: The present disclosure discloses a method and an apparatus for generating a text based on a semantic representation and relates to a field of natural language processing (NLP) technologies. The method for generating the text includes: obtaining an input text, the input text comprising a source text; obtaining a placeholder of an ith word to be predicted in a target text; obtaining a vector representation of the ith word to be predicted, in which the vector representation of the ith word to be predicted is obtained by calculating the placeholder of the ith word to be predicted, the source text and the 1st to (i-1)th predicted words by employing a self-attention mechanism; and generating an ith predicted word based on the vector representation of the ith word to be predicted, to obtain the target text.
Type: Grant
Filed: August 10, 2020
Date of Patent: October 4, 2022
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Han Zhang, Dongling Xiao, Yukun Li, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
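The word-by-word prediction loop described above (score a placeholder for the ith word against the source text and the words already generated, then emit the best word) can be sketched as a generic greedy decode. The self-attention computation from the abstract is hidden inside `step_fn`; `generate` and its arguments are illustrative names, not the patented interface.

```python
def generate(source_ids, step_fn, max_len, eos_id):
    """Greedy autoregressive loop: at step i, step_fn maps
    (source, words 1..i-1) to scores for the placeholder of word i,
    and the highest-scoring word is appended to the output."""
    prefix = []
    for _ in range(max_len):
        scores = step_fn(source_ids, prefix)  # one score per vocabulary item
        next_id = max(range(len(scores)), key=scores.__getitem__)
        if next_id == eos_id:
            break
        prefix.append(next_id)
    return prefix
```

Any model exposing a `(source, prefix) -> scores` callable fits this loop, which is what makes the placeholder formulation easy to test with toy scorers.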
-
Publication number: 20220300763
Abstract: The present disclosure provides a method, apparatus, electronic device and storage medium for training a semantic similarity model, which relates to the field of artificial intelligence. A specific implementation solution is as follows: obtaining a target field to be used by a semantic similarity model to be trained; calculating respective correlations between the target field and the application fields corresponding to each training dataset among multiple known training datasets; and training the semantic similarity model with the training datasets in turn, according to those correlations.
Type: Application
Filed: March 22, 2021
Publication date: September 22, 2022
Inventors: Zhen Li, Yukun Li, Yu Sun
-
Publication number: 20220300697
Abstract: A method for generating a target object is provided. A first discrete encoded sequence corresponding to an original object is generated by performing discrete encoding on the original object. The original object is of an image type, a text type, or a text-image-combined type. A second discrete encoded sequence is obtained by inputting the first discrete encoded sequence into a generative model. A target object is generated based on the second discrete encoded sequence. The target object is of an image type or a text type. When the original object is of the image type, the target object is of the text type. When the original object is of the text type, the target object is of the image type.
Type: Application
Filed: June 8, 2022
Publication date: September 22, 2022
Inventors: Yukun LI, Han ZHANG, Weichong YIN, Dongling XIAO, Yu SUN, Hao TIAN
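The three-stage pipeline in the abstract (discrete-encode the original object, transform the code sequence with a generative model, decode into the other modality) reduces to a simple composition. The sketch below is an assumption-laden illustration: `cross_modal_generate` and the toy encode/model/decode callables stand in for a real tokenizer, seq-to-seq model, and detokenizer.

```python
def cross_modal_generate(obj, encode, model, decode):
    """Cross-modal pipeline: original object -> first discrete encoded
    sequence -> generative model -> second discrete encoded sequence
    -> target object in the other modality."""
    first_seq = encode(obj)        # e.g. text -> token ids, or image -> VQ codes
    second_seq = model(first_seq)  # sequence-to-sequence generative model
    return decode(second_seq)      # ids -> object of the target modality
```

Keeping both modalities in one discrete-sequence space is what lets a single generative model handle image-to-text and text-to-image with the same interface.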
-
Publication number: 20220129768
Abstract: The present disclosure provides a method and apparatus for training a model. The method can include: acquiring at least one paragraph text, each paragraph text comprising a plurality of fine-grained samples; processing a fine-grained sample in each paragraph text to obtain a coarse-grained sample; annotating the coarse-grained sample in each paragraph text and obscuring one coarse-grained sample using a mask of one fine-grained sample to obtain a training sample set, wherein the training sample set comprises a plurality of annotated texts, and each annotated text comprises at least one of a fine-grained sample or an annotated coarse-grained sample; and training a fine-grained model using the training sample set to obtain a trained fine-grained model, the fine-grained model being used to learn content of a previous fine grain size and predict content of an adjacent coarse grain size.
Type: Application
Filed: January 3, 2022
Publication date: April 28, 2022
Inventors: Dongling XIAO, Yukun LI, Han ZHANG, Yu SUN, Hao TIAN, Hua WU, Haifeng WANG
-
Publication number: 20220077005
Abstract: A data analysis method includes: a target yield problem stacked graph corresponding to a wafer list is obtained, and measurement data stacked graphs of the wafer list under different types of tests are obtained; graph matching is performed on the target yield problem stacked graph and each of the measurement data stacked graphs to obtain matching degree data corresponding to the target yield problem stacked graph and each of the measurement data stacked graphs; correlation data corresponding to each of the measurement data stacked graphs and the target yield problem stacked graph is calculated; and weighted calculation is performed on the matching degree data and the correlation data, and a target measurement parameter causing a target yield problem is determined according to a result of the weighted calculation.
Type: Application
Filed: September 10, 2021
Publication date: March 10, 2022
Applicant: CHANGXIN MEMORY TECHNOLOGIES, INC.
Inventor: Yukun LI
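The final weighted-calculation step above combines, per measurement-data stacked graph, its matching degree with the target yield stacked graph and its correlation, then selects the parameter with the highest combined score. A minimal sketch follows; `rank_parameters`, the equal default weights, and the linear combination are assumptions for illustration, not the disclosed weighting.

```python
def rank_parameters(match_scores, corr_scores, w_match=0.5, w_corr=0.5):
    """Weighted combination of matching-degree data and correlation data,
    keyed by measurement parameter; returns the best candidate for the
    parameter causing the target yield problem, plus all combined scores."""
    combined = {k: w_match * match_scores[k] + w_corr * corr_scores[k]
                for k in match_scores}
    return max(combined, key=combined.get), combined
```

With both inputs normalized to the same range, the weights simply express how much graph-shape matching is trusted relative to statistical correlation.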
-
Publication number: 20220059695
Abstract: The application provides a method for manufacturing a semiconductor device. The method includes the following operations. A semiconductor substrate is provided, a plurality of separate trenches being formed in the semiconductor substrate. Plasma injection is performed to form a barrier layer between adjacent trenches. A respective gate structure is formed in each of the plurality of trenches. A plurality of channel regions are formed in the semiconductor substrate, each of the plurality of trenches corresponding to a respective one of the plurality of channel regions. A source/drain region is formed between each of the plurality of trenches and the barrier layer, the source/drain region being electrically connected to the respective one of the plurality of channel regions, and a conductive type of the barrier layer is opposite to a conductive type of the source/drain region.
Type: Application
Filed: August 17, 2021
Publication date: February 24, 2022
Inventors: Yukun LI, Tao CHEN
-
Publication number: 20210374334
Abstract: A method for training a language model, an electronic device and a readable storage medium, which relate to the field of natural language processing technologies in artificial intelligence, are disclosed. The method may include pre-training the language model using preset text language materials in a corpus; replacing at least one word in a sample text language material with a word mask respectively to obtain a sample text language material including at least one word mask; inputting the sample text language material including the at least one word mask into the language model, and outputting a context vector of each of the at least one word mask via the language model; determining a word vector corresponding to each word mask based on the context vector of the word mask and a word vector parameter matrix; and training the language model based on the word vector corresponding to each word mask.
Type: Application
Filed: December 10, 2020
Publication date: December 2, 2021
Applicant: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Yukun LI, Zhen LI, Yu SUN
-
Publication number: 20210374343
Abstract: A method and apparatus for obtaining word vectors based on a language model, a device and a storage medium are disclosed, which relates to the field of natural language processing technologies in artificial intelligence. An implementation includes inputting each of at least two first sample text language materials into the language model, and outputting a context vector of a first word mask in each first sample text language material via the language model; determining the word vector corresponding to each first word mask based on a first word vector parameter matrix, a second word vector parameter matrix and a fully connected matrix respectively; and training the language model and the fully connected matrix based on the word vectors corresponding to the first word masks in the at least two first sample text language materials, so as to obtain the word vectors.
Type: Application
Filed: November 12, 2020
Publication date: December 2, 2021
Applicant: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Zhen LI, Yukun LI, Yu SUN
-
Publication number: 20210374352
Abstract: A method for training a language model based on various word vectors, a device and a medium, which relate to the field of natural language processing technologies in artificial intelligence, are disclosed. An implementation includes inputting a first sample text language material including a first word mask into the language model, and outputting a context vector of the first word mask via the language model; acquiring a first probability distribution matrix of the first word mask based on the context vector of the first word mask and a first word vector parameter matrix, and a second probability distribution matrix of the first word mask based on the context vector of the first word mask and a second word vector parameter matrix; and training the language model based on a word vector corresponding to the first word mask.
Type: Application
Filed: November 18, 2020
Publication date: December 2, 2021
Applicant: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Zhen LI, Yukun LI, Yu SUN
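The core computation in this abstract (and in the granted counterpart, 11556715) is projecting one word-mask context vector through two different word vector parameter matrices to obtain two probability distributions. A minimal sketch, assuming plain matrix products and softmax; `mask_distributions` and its arguments are hypothetical names, and the training objective that would combine the two distributions is omitted.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mask_distributions(context_vec, W1, W2):
    """One context vector, two word vector parameter matrices
    (e.g. vocabularies of different granularities or sources),
    yielding the first and second probability distributions
    for the word mask."""
    p1 = softmax(context_vec @ W1)  # first probability distribution
    p2 = softmax(context_vec @ W2)  # second probability distribution
    return p1, p2
```

Sharing the context vector across both projections is the point: the mask prediction is supervised against more than one word-vector space at once.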