Patents by Inventor Wenhao JIANG
Wenhao JIANG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240174648
Abstract: Provided is a small molecule compound, characterized by the structure represented by molecular formula (I), wherein X1 and X2 are selected from carbon or nitrogen; G1 is a carbocyclic or heterocyclic ring having aromaticity; any one or more hydrogen atoms on the ring of G1 are substituted by R1; and R1 is selected from nitrogen-containing groups. The small molecule compound of the present invention can be used as a highly effective and specific JAK kinase inhibitor, specifically a Tyk2 inhibitor and/or a JAK1 inhibitor, and/or a Tyk2/JAK1 or Tyk2/JAK2 dual inhibitor.
Type: Application
Filed: October 10, 2020
Publication date: May 30, 2024
Inventors: Li Xing, Guanqun Li, Xiaolei Wang, Yuting Cai, Xiang Jiang, Xiang Pan, Wenhao Zhu, Yang Wang, Zengquan Wang
-
Patent number: 11936183
Abstract: An energy Internet system, an energy routing conversion device, and an energy control method, relating to the field of energy information. An alternating-current (AC) side energy routing conversion device of the energy Internet system includes a plurality of first route ports, and a direct-current (DC) side energy routing conversion device includes a plurality of second route ports, where each second route port is connected to a corresponding first route port by means of a corresponding DC busbar. A plurality of energy devices are connected to a DC busbar by means of corresponding first AC/DC converters or first DC converters. The AC side energy routing conversion device and the DC side energy routing conversion device collect energy information of the energy devices and adjust energy of the energy devices on the basis of energy balance constraint conditions.
Type: Grant
Filed: December 13, 2018
Date of Patent: March 19, 2024
Assignee: GREE ELECTRIC APPLIANCES, INC. OF ZHUHAI
Inventors: Mingzhu Dong, Zhigang Zhao, Meng Huang, Xuefen Zhang, Shugong Nan, Shiyong Jiang, Meng Li, Wenqiang Tang, Peng Ren, Wu Wen, Lingjun Wang, Xiao Luo, Wenhao Wu, Jianjun Huang, Weijin Li, Yunhong Zeng, Bei Chen
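The abstract above adjusts device energy on the basis of energy balance constraint conditions. A minimal sketch of such a check (the tolerance value and kW units are assumptions for illustration, not from the patent):

```python
def energy_balanced(generation_kw, load_kw, tolerance_kw=0.5):
    """Return True when total generation matches total load on a busbar
    to within a tolerance. The tolerance value is an assumption."""
    return abs(sum(generation_kw) - sum(load_kw)) <= tolerance_kw

print(energy_balanced([5.0, 3.2], [8.0]))   # True: difference is 0.2 kW
print(energy_balanced([5.0, 3.2], [10.0]))  # False: difference is 1.8 kW
```

A real routing conversion device would act on this signal, e.g. by redispatching converters until the constraint holds.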
-
Publication number: 20240079651
Abstract: Provided are a non-aqueous electrolyte and a preparation method thereof, and a secondary battery and an electric apparatus containing the same. The non-aqueous electrolyte contains a non-aqueous solvent and lithium ions, first cations, and first anions dissolved therein, where the first cation is a metal cation Men+ other than the lithium ion, n representing a chemical valence of the metal cation; the first anion is a difluoroxalate borate anion DFOB−; mass concentration of the first cations in the non-aqueous electrolyte is D1 ppm, and mass concentration of the first anions in the non-aqueous electrolyte is D2 ppm, both based on total mass of the non-aqueous electrolyte; and the non-aqueous electrolyte satisfies that D1 is 0.5 to 870 and that D1/D2 is 0.02 to 2. The non-aqueous electrolyte in this application enables the secondary battery to have good cycling performance, safety performance, and kinetic performance.
Type: Application
Filed: November 7, 2023
Publication date: March 7, 2024
Inventors: Zeli Wu, Bin Jiang, Changlong Han, Huiling Chen, Lei Huang, Cuiping Zhang, Jie Guo, Wenhao Liu
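The claimed composition window above reduces to two numeric conditions, which can be checked directly (a sketch of the stated ranges only; the function name is hypothetical):

```python
def electrolyte_in_claimed_range(d1_ppm: float, d2_ppm: float) -> bool:
    """Check the abstract's two conditions: D1 (first-cation concentration)
    in [0.5, 870] ppm, and the ratio D1/D2 in [0.02, 2]."""
    if d2_ppm <= 0:
        return False
    ratio = d1_ppm / d2_ppm
    return 0.5 <= d1_ppm <= 870 and 0.02 <= ratio <= 2

# Example: D1 = 100 ppm of Me(n+), D2 = 200 ppm of DFOB- gives ratio 0.5
print(electrolyte_in_claimed_range(100, 200))   # True
print(electrolyte_in_claimed_range(1000, 200))  # False: D1 exceeds 870 ppm
```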
-
Patent number: 11907851
Abstract: Embodiments of this application disclose an image description generation method performed at a computing device. The method includes: obtaining a target image; generating a first global feature vector and a first label vector set of the target image; generating a first multi-mode feature vector of the target image through a matching model, the matching model being a model obtained through training according to a training image and reference image description information of the training image; and applying the first multi-mode feature vector, the first global feature vector, and the first label vector set to a computing model, to obtain the target image description information, the computing model being a model obtained through training according to image description information of the training image and the reference image description information.
Type: Grant
Filed: January 31, 2022
Date of Patent: February 20, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Wenhao Jiang, Lin Ma, Wei Liu
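The method above feeds three inputs (multi-mode vector, global vector, label vector set) into one computing model. A toy sketch of that fusion step, assuming simple concatenation (the patent does not specify how the inputs are combined):

```python
def build_decoder_input(multi_mode_vec, global_vec, label_vecs):
    """Hypothetical fusion: concatenate the multi-mode feature vector, the
    global feature vector, and every vector in the label vector set into a
    single input for the description-generating computing model."""
    fused = list(multi_mode_vec) + list(global_vec)
    for label_vec in label_vecs:
        fused.extend(label_vec)
    return fused

x = build_decoder_input([0.1, 0.2], [0.3], [[0.4], [0.5]])
print(x)  # [0.1, 0.2, 0.3, 0.4, 0.5]
```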
-
Publication number: 20240007211
Abstract: A method implemented by a communication system, including at least a first electronic device and a second electronic device, includes that the first electronic device plays multimedia content. The first electronic device sends first synchronization information through broadcast, so that the second electronic device receives the first synchronization information, where the first synchronization information includes at least a first address, and the first address is an obtaining address of the multimedia content. In response to a preset operation, the first electronic device establishes a near field wireless communication connection to the second electronic device, and the second electronic device caches the multimedia content based on at least the first address. The first electronic device sends a control instruction to the second electronic device by using the near field wireless communication connection. The second electronic device plays the multimedia content based on the control instruction.
Type: Application
Filed: September 1, 2021
Publication date: January 4, 2024
Inventors: Chong Chen, Shuo Zhang, Hao Wang, Songping Yao, Wenhao Jiang
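The abstract only requires that the broadcast synchronization information carry the obtaining address of the content. A minimal sketch of such a payload (the JSON encoding and the playback-position field are assumptions, not from the patent):

```python
import json

def make_sync_message(content_url: str, position_ms: int) -> bytes:
    """Hypothetical first-synchronization-information payload: the required
    obtaining address plus an assumed playback position for the receiver."""
    return json.dumps({"address": content_url,
                       "position_ms": position_ms}).encode()

msg = make_sync_message("https://example.com/track.mp3", 42000)
print(json.loads(msg)["address"])  # https://example.com/track.mp3
```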
-
Publication number: 20230335081
Abstract: A method includes: obtaining, by a processor, several lines of data from to-be-displayed display data to generate a data block; generating a synchronization flag corresponding to the data block; encapsulating the data block and the synchronization flag corresponding to the data block to obtain a data packet corresponding to the data block; and sending all data packets corresponding to the display data to a display system. The display system sequentially parses all the data packets sent by the processor to obtain the synchronization flag associated with each data packet, and determines a display location of each data block on a display panel based on the synchronization flag to display the display data.
Type: Application
Filed: June 12, 2023
Publication date: October 19, 2023
Inventors: Weiwei Fan, Ming Chang, Hongli Wang, Xiaowen Cao, Wenhao Jiang, Siqing Du
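The encapsulate/parse round trip above can be sketched with a simple binary layout (the 4-byte flag and length fields are assumptions; the patent does not specify a wire format):

```python
import struct

def encapsulate_block(block_index: int, payload: bytes) -> bytes:
    """Hypothetical packet layout: a 4-byte synchronization flag carrying
    the block index, a 4-byte payload length, then the block's pixel data."""
    return struct.pack(">II", block_index, len(payload)) + payload

def parse_packet(packet: bytes):
    """Recover the synchronization flag (block index) and the data block,
    so the display side can place the block at the right panel location."""
    block_index, length = struct.unpack(">II", packet[:8])
    return block_index, packet[8:8 + length]

pkt = encapsulate_block(3, b"\x00\x01\x02")
print(parse_packet(pkt))  # (3, b'\x00\x01\x02')
```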
-
Patent number: 11699298
Abstract: This application relates to the field of artificial intelligence technologies, and in particular, to a training method of an image-text matching model, a bi-directional search method, and a relevant apparatus. The training method includes extracting a global feature and a local feature of an image sample; extracting a global feature and a local feature of a text sample; training a matching model according to the extracted global feature and local feature of the image sample and the extracted global feature and local feature of the text sample, to determine model parameters of the matching model; and determining, by the matching model, according to a global feature and a local feature of an inputted image and a global feature and a local feature of an inputted text, a matching degree between the image and the text.
Type: Grant
Filed: June 16, 2021
Date of Patent: July 11, 2023
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Lin Ma, Wenhao Jiang, Wei Liu
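One plausible way to combine global and local features into a single matching degree, as the abstract describes, is a weighted mix of similarities (the cosine metric, the max over local pairs, and the mixing weight `alpha` are all assumptions for illustration):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def matching_degree(img_global, img_locals, txt_global, txt_locals, alpha=0.5):
    """Hypothetical fusion: weight the global-feature similarity against the
    best-matching local-feature pair."""
    g = cosine(img_global, txt_global)
    l = max(cosine(i, t) for i in img_locals for t in txt_locals)
    return alpha * g + (1 - alpha) * l

score = matching_degree([1, 0], [[0, 1]], [1, 0], [[0, 1]])
print(score)  # 1.0: global and best local pairs both align perfectly
```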
-
Method and apparatus for training neural network model used for image processing, and storage medium
Patent number: 11610082
Abstract: A method, apparatus, and storage medium for training a neural network model used for image processing are described. The method includes: obtaining a plurality of video frames; inputting the plurality of video frames through a neural network model so that the neural network model outputs intermediate images; obtaining optical flow information between an early video frame and a later video frame; modifying an intermediate image corresponding to the early video frame according to the optical flow information to obtain an expected-intermediate image; determining a time loss between an intermediate image corresponding to the later video frame and the expected-intermediate image; determining a feature loss between the intermediate images and a target feature image; and training the neural network model according to the time loss and the feature loss, returning to the operation of obtaining a plurality of video frames to continue training until the neural network model satisfies a training finishing condition.
Type: Grant
Filed: February 26, 2021
Date of Patent: March 21, 2023
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Haozhi Huang, Hao Wang, Wenhan Luo, Lin Ma, Peng Yang, Wenhao Jiang, Xiaolong Zhu, Wei Liu
-
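The training objective in patent 11610082 above combines a time loss and a feature loss. A toy sketch over flattened image vectors (the MSE distance and the loss weights are assumptions; the patent does not fix either):

```python
def mse(a, b):
    """Mean squared error between two equal-length flattened images."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def total_loss(later_intermediate, expected_intermediate,
               intermediate, target_feature, w_time=1.0, w_feat=1.0):
    """Hypothetical combination of the two losses from the abstract: a time
    loss between the later frame's intermediate image and the flow-warped
    expected-intermediate image, plus a feature loss against the target
    feature image. The weights w_time and w_feat are assumptions."""
    time_loss = mse(later_intermediate, expected_intermediate)
    feature_loss = mse(intermediate, target_feature)
    return w_time * time_loss + w_feat * feature_loss

print(total_loss([1.0, 2.0], [1.0, 2.0], [0.0], [1.0]))  # 1.0
```

Penalizing the gap to the flow-warped image is what enforces temporal stability across stylized frames.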
Patent number: 11494658
Abstract: This application relates to an abstract description generating method, an abstract description generation model training method, a computer device, and a storage medium. The abstract description generating method includes: inputting a labeled training sample into an abstract description generation model; performing first-phase training on an encoding network and a decoding network of the abstract description generation model based on supervision of a first loss function; obtaining a backward-derived hidden state of a previous moment through backward derivation according to a hidden state of each moment outputted by the decoding network; obtaining a value of a second loss function according to the backward-derived hidden state of the previous moment and an actual hidden state of the previous moment outputted by the decoding network; and obtaining final model parameters of the abstract description generation model determined based on supervision of the second loss function to reach a preset threshold value.
Type: Grant
Filed: November 15, 2019
Date of Patent: November 8, 2022
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventors: Xinpeng Chen, Lin Ma, Wenhao Jiang, Wei Liu
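The second loss above compares a backward-derived previous hidden state with the decoder's actual previous hidden state. A toy sketch, assuming squared distance and leaving the backward derivation as a pluggable function (both assumptions):

```python
def second_loss(hidden_states, backward_derive):
    """Hypothetical second loss: for each moment t, derive the previous
    hidden state backward from h_t and penalize its squared distance to
    the actual h_{t-1} the decoding network produced."""
    loss = 0.0
    for t in range(1, len(hidden_states)):
        derived_prev = backward_derive(hidden_states[t])
        actual_prev = hidden_states[t - 1]
        loss += sum((d - a) ** 2 for d, a in zip(derived_prev, actual_prev))
    return loss / (len(hidden_states) - 1)

# Toy backward derivation that assumes h_{t-1} = h_t (identity); a real
# model would learn this mapping.
states = [[0.0, 0.0], [1.0, 1.0]]
print(second_loss(states, lambda h: h))  # 2.0
```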
-
Publication number: 20220163652
Abstract: A frequency modulated continuous wave (FMCW)-based virtual reality (VR) environment interaction system and method are provided. Signal generators (S1, S2, S3) are provided to transmit FMCW signals; a glove is worn on a hand by a user; and multiple signal receiving nodes (H) are provided on the glove and configured to receive the FMCW signals. When the signal receiving nodes (H) receive the FMCW signals, one-dimensional distances are measured by means of FMCW technique; after the distances are measured, positions of the signal receiving nodes (H) in a coordinate system of the signal generators (S1, S2, S3) are calculated; a change in a position of the hand that wears the glove is tracked by means of changes in the positions of the signal receiving nodes (H); and a VR interaction is performed by outputting a change in a coordinate point matrix formed by the signal receiving nodes (H).
Type: Application
Filed: February 6, 2022
Publication date: May 26, 2022
Inventors: Yanchao ZHAO, Wenhao JIANG, Si LI
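Recovering a node's position from its one-dimensional FMCW distances to the generators is a trilateration problem. A 2-D sketch (a real glove system would solve this in 3-D; the generator layout below is invented for illustration):

```python
import math

def trilaterate_2d(anchors, dists):
    """Given distances from one glove node to three signal generators at
    known 2-D positions, solve the two linear equations obtained by
    subtracting the circle equations pairwise to get the node's position."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Node truly at (1, 1); generators at (0, 0), (4, 0), (0, 4).
d = [math.hypot(1, 1), math.hypot(3, 1), math.hypot(1, 3)]
print(trilaterate_2d([(0, 0), (4, 0), (0, 4)], d))  # (1.0, 1.0)
```

Tracking then reduces to re-solving this for every node on each FMCW sweep and reporting the change in the resulting coordinate matrix.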
-
Publication number: 20220156518
Abstract: Embodiments of this application disclose an image description generation method performed at a computing device. The method includes: obtaining a target image; generating a first global feature vector and a first label vector set of the target image; generating a first multi-mode feature vector of the target image through a matching model, the matching model being a model obtained through training according to a training image and reference image description information of the training image; and applying the first multi-mode feature vector, the first global feature vector, and the first label vector set to a computing model, to obtain the target image description information, the computing model being a model obtained through training according to image description information of the training image and the reference image description information.
Type: Application
Filed: January 31, 2022
Publication date: May 19, 2022
Inventors: Wenhao JIANG, Lin MA, Wei LIU
-
Patent number: 11301641
Abstract: A terminal for generating music may identify, based on execution of scenario recognition, scenarios for images previously received by the terminal. The terminal may generate respective description texts for the scenarios. The terminal may execute keyword-based rhyme matching based on the respective description texts. The terminal may generate respective rhyming lyrics corresponding to the images. The terminal may convert the respective rhyming lyrics corresponding to the images into a speech. The terminal may synthesize the speech with preset background music to obtain image music.
Type: Grant
Filed: October 22, 2019
Date of Patent: April 12, 2022
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventors: Nan Wang, Wei Liu, Lin Ma, Wenhao Jiang, Guangzhi Li, Shiyin Kang, Deyi Tuo, Xiaolong Zhu, Youyi Zhang, Shaobin Lin, Yongsen Zheng, Zixin Zou, Jing He, Zaizhen Chen, Pinyi Li
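The keyword-based rhyme matching step above can be caricatured with a suffix test (a crude English stand-in; a production lyric system would compare phonetic finals, and every name here is hypothetical):

```python
def rhymes(word_a: str, word_b: str, tail: int = 2) -> bool:
    """Hypothetical rhyme test: two words rhyme when their last few
    letters match. Real systems compare phonemes, not spelling."""
    return word_a[-tail:].lower() == word_b[-tail:].lower()

def pick_rhyming_line(keyword: str, candidates):
    """Return the first candidate lyric line whose final word rhymes with
    the scene keyword, mimicking keyword-based rhyme matching."""
    for line in candidates:
        if rhymes(keyword, line.split()[-1]):
            return line
    return None

print(pick_rhyming_line("night", ["sunny day", "stars so bright"]))
# stars so bright
```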
-
Patent number: 11270160
Abstract: Embodiments of this application disclose an image description generation method performed at a computing device. The method includes: obtaining a target image; generating a first global feature vector and a first label vector set of the target image; applying the target image to a matching model and generating a first multi-mode feature vector of the target image through the matching model, the matching model being a model obtained through training according to a training image and reference image description information of the training image; and generating target image description information of the target image according to the first multi-mode feature vector, the first global feature vector, and the first label vector set.
Type: Grant
Filed: August 22, 2019
Date of Patent: March 8, 2022
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Wenhao Jiang, Lin Ma, Wei Liu
-
Publication number: 20210312211
Abstract: This application relates to the field of artificial intelligence technologies, and in particular, to a training method of an image-text matching model, a bi-directional search method, and a relevant apparatus. The training method includes extracting a global feature and a local feature of an image sample; extracting a global feature and a local feature of a text sample; training a matching model according to the extracted global feature and local feature of the image sample and the extracted global feature and local feature of the text sample, to determine model parameters of the matching model; and determining, by the matching model, according to a global feature and a local feature of an inputted image and a global feature and a local feature of an inputted text, a matching degree between the image and the text.
Type: Application
Filed: June 16, 2021
Publication date: October 7, 2021
Inventors: Lin MA, Wenhao JIANG, Wei LIU
-
Patent number: 11087166
Abstract: This application relates to the field of artificial intelligence technologies, and in particular, to a training method of an image-text matching model, a bi-directional search method, and a relevant apparatus. The training method includes extracting a global feature and a local feature of an image sample; extracting a global feature and a local feature of a text sample; training a matching model according to the extracted global feature and local feature of the image sample and the extracted global feature and local feature of the text sample, to determine model parameters of the matching model; and determining, by the matching model, according to a global feature and a local feature of an inputted image and a global feature and a local feature of an inputted text, a matching degree between the image and the text.
Type: Grant
Filed: September 23, 2019
Date of Patent: August 10, 2021
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Lin Ma, Wenhao Jiang, Wei Liu
-
Method and apparatus for training neural network model used for image processing, and storage medium
Publication number: 20210182616
Abstract: A method, apparatus, and storage medium for training a neural network model used for image processing are described. The method includes: obtaining a plurality of video frames; inputting the plurality of video frames through a neural network model so that the neural network model outputs intermediate images; obtaining optical flow information between an early video frame and a later video frame; modifying an intermediate image corresponding to the early video frame according to the optical flow information to obtain an expected-intermediate image; determining a time loss between an intermediate image corresponding to the later video frame and the expected-intermediate image; determining a feature loss between the intermediate images and a target feature image; and training the neural network model according to the time loss and the feature loss, returning to the operation of obtaining a plurality of video frames to continue training until the neural network model satisfies a training finishing condition.
Type: Application
Filed: February 26, 2021
Publication date: June 17, 2021
Applicant: Tencent Technology (Shenzhen) Company Limited
Inventors: Haozhi HUANG, Hao WANG, Wenhan LUO, Lin MA, Peng YANG, Wenhao JIANG, Xiaolong ZHU, Wei LIU
-
Method and apparatus for training neural network model used for image processing, and storage medium
Patent number: 10970600
Abstract: A method, apparatus, and storage medium for training a neural network model used for image processing are described. The method includes: obtaining a plurality of video frames; inputting the plurality of video frames through a neural network model so that the neural network model outputs intermediate images; obtaining optical flow information between an early video frame and a later video frame; modifying an intermediate image corresponding to the early video frame according to the optical flow information to obtain an expected-intermediate image; determining a time loss between an intermediate image corresponding to the later video frame and the expected-intermediate image; determining a feature loss between the intermediate images and a target feature image; and training the neural network model according to the time loss and the feature loss, returning to the operation of obtaining a plurality of video frames to continue training until the neural network model satisfies a training finishing condition.
Type: Grant
Filed: April 2, 2019
Date of Patent: April 6, 2021
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Haozhi Huang, Hao Wang, Wenhan Luo, Lin Ma, Peng Yang, Wenhao Jiang, Xiaolong Zhu, Wei Liu
-
Patent number: 10956771
Abstract: An image recognition method, a terminal, and a storage medium are provided. The method includes: performing feature extraction on a to-be-recognized image by using an encoder, to obtain a feature vector and a first annotation vector set; performing initialization processing on the feature vector, to obtain first initial input data; and generating first guiding information based on the first annotation vector set by using a first guiding network model. The first guiding network model is configured to generate guiding information according to an annotation vector set of any image. The method also includes determining a descriptive statement of the image based on the first guiding information, the first annotation vector set, and the first initial input data by using a decoder.
Type: Grant
Filed: August 27, 2019
Date of Patent: March 23, 2021
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Wenhao Jiang, Lin Ma, Wei Liu
-
Publication number: 20200082271
Abstract: This application relates to an abstract description generating method, an abstract description generation model training method, a computer device, and a storage medium. The abstract description generating method includes: inputting a labeled training sample into an abstract description generation model; performing first-phase training on an encoding network and a decoding network of the abstract description generation model based on supervision of a first loss function; obtaining a backward-derived hidden state of a previous moment through backward derivation according to a hidden state of each moment outputted by the decoding network; obtaining a value of a second loss function according to the backward-derived hidden state of the previous moment and an actual hidden state of the previous moment outputted by the decoding network; and obtaining final model parameters of the abstract description generation model determined based on supervision of the second loss function to reach a preset threshold value.
Type: Application
Filed: November 15, 2019
Publication date: March 12, 2020
Applicant: Tencent Technology (Shenzhen) Company Limited
Inventors: Xinpeng CHEN, Lin MA, Wenhao JIANG, Wei LIU
-
Publication number: 20200051536
Abstract: A terminal for generating music may identify, based on execution of scenario recognition, scenarios for images previously received by the terminal. The terminal may generate respective description texts for the scenarios. The terminal may execute keyword-based rhyme matching based on the respective description texts. The terminal may generate respective rhyming lyrics corresponding to the images. The terminal may convert the respective rhyming lyrics corresponding to the images into a speech. The terminal may synthesize the speech with preset background music to obtain image music.
Type: Application
Filed: October 22, 2019
Publication date: February 13, 2020
Applicant: Tencent Technology (Shenzhen) Company Limited
Inventors: Nan WANG, Wei LIU, Lin MA, Wenhao JIANG, Guangzhi LI, Shiyin KANG, Deyi TUO, Xiaolong ZHU, Youyi ZHANG, Shaobin LIN, Yongsen ZHENG, Zixin ZOU, Jing HE, Zaizhen CHEN, Pinyi LI