Patents by Inventor Wenhao JIANG
Wenhao JIANG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12272335
Abstract: A method includes: a processor obtains several lines of data in to-be-displayed display data to generate a data block; generates a synchronization flag corresponding to the data block; encapsulates the data block and the synchronization flag corresponding to the data block to obtain a data packet corresponding to the data block; and sends all data packets corresponding to the display data to the display system. The display system sequentially parses all the data packets sent by the processor to obtain a synchronization flag associated with each data packet, and determines a display location of each data block on a display panel based on the synchronization flag to display the display data.
Type: Grant
Filed: June 12, 2023
Date of Patent: April 8, 2025
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Weiwei Fan, Ming Chang, Hongli Wang, Xiaowen Cao, Wenhao Jiang, Siqing Du
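As a rough illustration of the packetization scheme this abstract describes, the following sketch groups lines of display data into blocks, tags each block with a synchronization flag, and reassembles the frame on the receiving side. It is a minimal sketch, not the patented implementation: the function names, the block size, and the choice of the starting row index as the synchronization flag are all assumptions for illustration.

```python
# Hypothetical sketch of block-based display-data packetization with
# synchronization flags. All names and the flag encoding are invented.

LINES_PER_BLOCK = 4  # assumed block size; the patent only says "several lines"

def encapsulate(display_data):
    """Processor side: split rows into blocks, tag each with a sync flag
    (here, the starting row), and wrap each block as a packet."""
    packets = []
    for start in range(0, len(display_data), LINES_PER_BLOCK):
        block = display_data[start:start + LINES_PER_BLOCK]
        packets.append({"sync_flag": start, "block": block})
    return packets

def reassemble(packets, height):
    """Display side: parse each packet and place its block on the panel
    at the location indicated by the sync flag."""
    panel = [None] * height
    for pkt in packets:
        for offset, row in enumerate(pkt["block"]):
            panel[pkt["sync_flag"] + offset] = row
    return panel

frame = [f"row{i}" for i in range(10)]
assert reassemble(encapsulate(frame), 10) == frame  # lossless round trip
```

Because each packet carries its own placement flag, the display side does not depend on packets arriving in frame order.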
-
Patent number: 11907851
Abstract: Embodiments of this application disclose an image description generation method performed at a computing device. The method includes: obtaining a target image; generating a first global feature vector and a first label vector set of the target image; generating a first multi-mode feature vector of the target image through a matching model, the matching model being a model obtained through training according to a training image and reference image description information of the training image; and applying the first multi-mode feature vector, the first global feature vector, and the first label vector set to a computing model, to obtain the target image description information, the computing model being a model obtained through training according to image description information of the training image and the reference image description information.
Type: Grant
Filed: January 31, 2022
Date of Patent: February 20, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Wenhao Jiang, Lin Ma, Wei Liu
-
Publication number: 20240007211
Abstract: A method implemented by a communication system, including at least a first electronic device and a second electronic device, includes that the first electronic device plays multimedia content. The first electronic device sends first synchronization information through broadcast, so that the second electronic device receives the first synchronization information, where the first synchronization information includes at least a first address, and the first address is an obtaining address of the multimedia content. In response to a preset operation, the first electronic device establishes a near field wireless communication connection to the second electronic device, and the second electronic device caches the multimedia content based on at least the first address. The first electronic device sends a control instruction to the second electronic device by using the near field wireless communication connection. The second electronic device plays the multimedia content based on the control instruction.
Type: Application
Filed: September 1, 2021
Publication date: January 4, 2024
Inventors: Chong Chen, Shuo Zhang, Hao Wang, Songping Yao, Wenhao Jiang
-
Publication number: 20230335081
Abstract: A method includes: a processor obtains several lines of data in to-be-displayed display data to generate a data block; generates a synchronization flag corresponding to the data block; encapsulates the data block and the synchronization flag corresponding to the data block to obtain a data packet corresponding to the data block; and sends all data packets corresponding to the display data to the display system. The display system sequentially parses all the data packets sent by the processor to obtain a synchronization flag associated with each data packet, and determines a display location of each data block on a display panel based on the synchronization flag to display the display data.
Type: Application
Filed: June 12, 2023
Publication date: October 19, 2023
Inventors: Weiwei Fan, Ming Chang, Hongli Wang, Xiaowen Cao, Wenhao Jiang, Siqing Du
-
Patent number: 11699298
Abstract: This application relates to the field of artificial intelligence technologies, and in particular, to a training method of an image-text matching model, a bi-directional search method, and a relevant apparatus. The training method includes extracting a global feature and a local feature of an image sample; extracting a global feature and a local feature of a text sample; training a matching model according to the extracted global feature and local feature of the image sample and the extracted global feature and local feature of the text sample, to determine model parameters of the matching model; and determining, by the matching model, according to a global feature and a local feature of an inputted image and a global feature and a local feature of an inputted text, a matching degree between the image and the text.
Type: Grant
Filed: June 16, 2021
Date of Patent: July 11, 2023
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Lin Ma, Wenhao Jiang, Wei Liu
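The global-plus-local matching idea in this abstract can be sketched as a score that combines a global-feature similarity with an aggregated local-feature similarity. This is a hypothetical illustration, not the patented model: the cosine measure, the per-region max-over-fragments aggregation, and the weight `alpha` are all invented here, and in the actual method the combination is learned from data.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors (plain lists)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_score(img_global, img_locals, txt_global, txt_locals, alpha=0.5):
    """Toy image-text matching degree from global and local features."""
    # global term: whole-image vs. whole-text similarity
    g = cosine(img_global, txt_global)
    # local term: best-matching text fragment for each image region, averaged
    l = sum(max(cosine(r, w) for w in txt_locals)
            for r in img_locals) / len(img_locals)
    return alpha * g + (1 - alpha) * l
```

With both levels in the score, an image and a caption must agree globally and region-by-fragment to rank highly, which is what enables bi-directional (image-to-text and text-to-image) search.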
-
Method and apparatus for training neural network model used for image processing, and storage medium
Patent number: 11610082
Abstract: A method, apparatus, and storage medium for training a neural network model used for image processing are described. The method includes: obtaining a plurality of video frames; inputting the plurality of video frames into a neural network model so that the neural network model outputs intermediate images; obtaining optical flow information between an early video frame and a later video frame; modifying an intermediate image corresponding to the early video frame according to the optical flow information to obtain an expected-intermediate image; determining a time loss between an intermediate image corresponding to the later video frame and the expected-intermediate image; determining a feature loss between the intermediate images and a target feature image; and training the neural network model according to the time loss and the feature loss, returning to the step of obtaining a plurality of video frames to continue training until the neural network model satisfies a training finishing condition.
Type: Grant
Filed: February 26, 2021
Date of Patent: March 21, 2023
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Haozhi Huang, Hao Wang, Wenhan Luo, Lin Ma, Peng Yang, Wenhao Jiang, Xiaolong Zhu, Wei Liu
-
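The time-loss computation that patent 11610082 above describes (warp the earlier output frame by the optical flow to get an expected later output, then penalize the difference from the actual later output) can be sketched as follows. This is a minimal illustration with an invented nearest-neighbor backward warp and a mean-squared penalty; the patented implementation is not specified at this level of detail.

```python
import numpy as np

def temporal_loss(prev_out, curr_out, flow):
    """Toy time loss: warp prev_out by the optical flow to form the
    expected later output, then take the mean squared difference.

    prev_out, curr_out: (H, W) grayscale frames produced by the model.
    flow: (H, W, 2) per-pixel (dx, dy) motion from early to later frame.
    """
    H, W = prev_out.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # backward-map each pixel of the later frame to its source in the
    # earlier frame (nearest-neighbor, clamped at the borders)
    src_y = np.clip(ys - flow[..., 1], 0, H - 1).round().astype(int)
    src_x = np.clip(xs - flow[..., 0], 0, W - 1).round().astype(int)
    expected = prev_out[src_y, src_x]  # the "expected-intermediate image"
    return float(np.mean((curr_out - expected) ** 2))
```

Training against this loss (alongside the feature loss) pushes the network to produce outputs that move consistently with the scene, suppressing frame-to-frame flicker.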
Patent number: 11494658
Abstract: This application relates to an abstract description generating method, an abstract description generation model training method, a computer device, and a storage medium. The abstract description generating method includes: inputting a labeled training sample into an abstract description generation model; performing first-phase training on an encoding network and a decoding network of the abstract description generation model based on supervision of a first loss function; obtaining a backward-derived hidden state of a previous moment through backward derivation according to a hidden state of each moment outputted by the decoding network; obtaining a value of a second loss function according to the backward-derived hidden state of the previous moment and an actual hidden state of the previous moment outputted by the decoding network; and obtaining final model parameters of the abstract description generation model, determined based on supervision of the second loss function, once a preset threshold value is reached.
Type: Grant
Filed: November 15, 2019
Date of Patent: November 8, 2022
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventors: Xinpeng Chen, Lin Ma, Wenhao Jiang, Wei Liu
-
Publication number: 20220163652
Abstract: A frequency modulated continuous wave (FMCW)-based virtual reality (VR) environment interaction system and method are provided. Signal generators (S1, S2, S3) are provided to transmit FMCW signals; a glove is worn on a hand by a user; and multiple signal receiving nodes (H) are provided on the glove and configured to receive the FMCW signals. When the signal receiving nodes (H) receive the FMCW signals, one-dimensional distances are measured by means of the FMCW technique; after the distances are measured, positions of the signal receiving nodes (H) in a coordinate system of the signal generators (S1, S2, S3) are calculated; a change in a position of the hand that wears the glove is tracked by means of changes in the positions of the signal receiving nodes (H); and a VR interaction is performed by outputting a change in a coordinate point matrix formed by the signal receiving nodes (H).
Type: Application
Filed: February 6, 2022
Publication date: May 26, 2022
Inventors: Yanchao ZHAO, Wenhao JIANG, Si LI
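Recovering a receiving node's position from its one-dimensional FMCW distances to the known generators S1, S2, S3 is a trilateration problem. One standard way to solve it, assumed here for illustration and not taken from the publication, is to subtract one range equation from the others to linearize the system and solve it by least squares:

```python
import numpy as np

def trilaterate(anchors, dists):
    """Estimate a receiver position from distances to known transmitters.

    From |x - p_i|^2 = d_i^2, subtracting the i = 0 equation gives the
    linear system 2 (p_i - p_0) . x = |p_i|^2 - |p_0|^2 - d_i^2 + d_0^2,
    solved here by least squares (needs >= 4 non-coplanar anchors in 3D).
    """
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    p0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - p0)
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(p0 ** 2)
         - dists[1:] ** 2 + d0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

Tracking the hand then amounts to re-running this estimate for every glove node as new distance measurements arrive and reporting the change in the resulting coordinate matrix.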
-
Publication number: 20220156518
Abstract: Embodiments of this application disclose an image description generation method performed at a computing device. The method includes: obtaining a target image; generating a first global feature vector and a first label vector set of the target image; generating a first multi-mode feature vector of the target image through a matching model, the matching model being a model obtained through training according to a training image and reference image description information of the training image; and applying the first multi-mode feature vector, the first global feature vector, and the first label vector set to a computing model, to obtain the target image description information, the computing model being a model obtained through training according to image description information of the training image and the reference image description information.
Type: Application
Filed: January 31, 2022
Publication date: May 19, 2022
Inventors: Wenhao JIANG, Lin MA, Wei LIU
-
Patent number: 11301641
Abstract: A terminal for generating music may identify, based on execution of scenario recognition, scenarios for images previously received by the terminal. The terminal may generate respective description texts for the scenarios. The terminal may execute keyword-based rhyme matching based on the respective description texts. The terminal may generate respective rhyming lyrics corresponding to the images. The terminal may convert the respective rhyming lyrics corresponding to the images into a speech. The terminal may synthesize the speech with preset background music to obtain image music.
Type: Grant
Filed: October 22, 2019
Date of Patent: April 12, 2022
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventors: Nan Wang, Wei Liu, Lin Ma, Wenhao Jiang, Guangzhi Li, Shiyin Kang, Deyi Tuo, Xiaolong Zhu, Youyi Zhang, Shaobin Lin, Yongsen Zheng, Zixin Zou, Jing He, Zaizhen Chen, Pinyi Li
-
Patent number: 11270160
Abstract: Embodiments of this application disclose an image description generation method performed at a computing device. The method includes: obtaining a target image; generating a first global feature vector and a first label vector set of the target image; applying the target image to a matching model and generating a first multi-mode feature vector of the target image through the matching model, the matching model being a model obtained through training according to a training image and reference image description information of the training image; and generating target image description information of the target image according to the first multi-mode feature vector, the first global feature vector, and the first label vector set.
Type: Grant
Filed: August 22, 2019
Date of Patent: March 8, 2022
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Wenhao Jiang, Lin Ma, Wei Liu
-
Publication number: 20210312211
Abstract: This application relates to the field of artificial intelligence technologies, and in particular, to a training method of an image-text matching model, a bi-directional search method, and a relevant apparatus. The training method includes extracting a global feature and a local feature of an image sample; extracting a global feature and a local feature of a text sample; training a matching model according to the extracted global feature and local feature of the image sample and the extracted global feature and local feature of the text sample, to determine model parameters of the matching model; and determining, by the matching model, according to a global feature and a local feature of an inputted image and a global feature and a local feature of an inputted text, a matching degree between the image and the text.
Type: Application
Filed: June 16, 2021
Publication date: October 7, 2021
Inventors: Lin MA, Wenhao JIANG, Wei LIU
-
Patent number: 11087166
Abstract: This application relates to the field of artificial intelligence technologies, and in particular, to a training method of an image-text matching model, a bi-directional search method, and a relevant apparatus. The training method includes extracting a global feature and a local feature of an image sample; extracting a global feature and a local feature of a text sample; training a matching model according to the extracted global feature and local feature of the image sample and the extracted global feature and local feature of the text sample, to determine model parameters of the matching model; and determining, by the matching model, according to a global feature and a local feature of an inputted image and a global feature and a local feature of an inputted text, a matching degree between the image and the text.
Type: Grant
Filed: September 23, 2019
Date of Patent: August 10, 2021
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Lin Ma, Wenhao Jiang, Wei Liu
-
METHOD AND APPARATUS FOR TRAINING NEURAL NETWORK MODEL USED FOR IMAGE PROCESSING, AND STORAGE MEDIUM
Publication number: 20210182616
Abstract: A method, apparatus, and storage medium for training a neural network model used for image processing are described. The method includes: obtaining a plurality of video frames; inputting the plurality of video frames into a neural network model so that the neural network model outputs intermediate images; obtaining optical flow information between an early video frame and a later video frame; modifying an intermediate image corresponding to the early video frame according to the optical flow information to obtain an expected-intermediate image; determining a time loss between an intermediate image corresponding to the later video frame and the expected-intermediate image; determining a feature loss between the intermediate images and a target feature image; and training the neural network model according to the time loss and the feature loss, returning to the step of obtaining a plurality of video frames to continue training until the neural network model satisfies a training finishing condition.
Type: Application
Filed: February 26, 2021
Publication date: June 17, 2021
Applicant: Tencent Technology (Shenzhen) Company Limited
Inventors: Haozhi HUANG, Hao WANG, Wenhan LUO, Lin MA, Peng YANG, Wenhao JIANG, Xiaolong ZHU, Wei LIU
-
Method and apparatus for training neural network model used for image processing, and storage medium
Patent number: 10970600
Abstract: A method, apparatus, and storage medium for training a neural network model used for image processing are described. The method includes: obtaining a plurality of video frames; inputting the plurality of video frames into a neural network model so that the neural network model outputs intermediate images; obtaining optical flow information between an early video frame and a later video frame; modifying an intermediate image corresponding to the early video frame according to the optical flow information to obtain an expected-intermediate image; determining a time loss between an intermediate image corresponding to the later video frame and the expected-intermediate image; determining a feature loss between the intermediate images and a target feature image; and training the neural network model according to the time loss and the feature loss, returning to the step of obtaining a plurality of video frames to continue training until the neural network model satisfies a training finishing condition.
Type: Grant
Filed: April 2, 2019
Date of Patent: April 6, 2021
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Haozhi Huang, Hao Wang, Wenhan Luo, Lin Ma, Peng Yang, Wenhao Jiang, Xiaolong Zhu, Wei Liu
-
Patent number: 10956771
Abstract: An image recognition method, a terminal, and a storage medium are provided. The method includes: performing feature extraction on a to-be-recognized image by using an encoder, to obtain a feature vector and a first annotation vector set; performing initialization processing on the feature vector, to obtain first initial input data; and generating first guiding information based on the first annotation vector set by using a first guiding network model. The first guiding network model is configured to generate guiding information according to an annotation vector set of any image. The method also includes determining a descriptive statement of the image based on the first guiding information, the first annotation vector set, and the first initial input data by using a decoder.
Type: Grant
Filed: August 27, 2019
Date of Patent: March 23, 2021
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Wenhao Jiang, Lin Ma, Wei Liu
-
Publication number: 20200082271
Abstract: This application relates to an abstract description generating method, an abstract description generation model training method, a computer device, and a storage medium. The abstract description generating method includes: inputting a labeled training sample into an abstract description generation model; performing first-phase training on an encoding network and a decoding network of the abstract description generation model based on supervision of a first loss function; obtaining a backward-derived hidden state of a previous moment through backward derivation according to a hidden state of each moment outputted by the decoding network; obtaining a value of a second loss function according to the backward-derived hidden state of the previous moment and an actual hidden state of the previous moment outputted by the decoding network; and obtaining final model parameters of the abstract description generation model, determined based on supervision of the second loss function, once a preset threshold value is reached.
Type: Application
Filed: November 15, 2019
Publication date: March 12, 2020
Applicant: Tencent Technology (Shenzhen) Company Limited
Inventors: Xinpeng CHEN, Lin MA, Wenhao JIANG, Wei LIU
-
Publication number: 20200051536
Abstract: A terminal for generating music may identify, based on execution of scenario recognition, scenarios for images previously received by the terminal. The terminal may generate respective description texts for the scenarios. The terminal may execute keyword-based rhyme matching based on the respective description texts. The terminal may generate respective rhyming lyrics corresponding to the images. The terminal may convert the respective rhyming lyrics corresponding to the images into a speech. The terminal may synthesize the speech with preset background music to obtain image music.
Type: Application
Filed: October 22, 2019
Publication date: February 13, 2020
Applicant: Tencent Technology (Shenzhen) Company Limited
Inventors: Nan WANG, Wei LIU, Lin MA, Wenhao JIANG, Guangzhi LI, Shiyin KANG, Deyi TUO, Xiaolong ZHU, Youyi ZHANG, Shaobin LIN, Yongsen ZHENG, Zixin ZOU, Jing HE, Zaizhen CHEN, Pinyi LI
-
Publication number: 20200019807
Abstract: This application relates to the field of artificial intelligence technologies, and in particular, to a training method of an image-text matching model, a bi-directional search method, and a relevant apparatus. The training method includes extracting a global feature and a local feature of an image sample; extracting a global feature and a local feature of a text sample; training a matching model according to the extracted global feature and local feature of the image sample and the extracted global feature and local feature of the text sample, to determine model parameters of the matching model; and determining, by the matching model, according to a global feature and a local feature of an inputted image and a global feature and a local feature of an inputted text, a matching degree between the image and the text.
Type: Application
Filed: September 23, 2019
Publication date: January 16, 2020
Inventors: Lin MA, Wenhao JIANG, Wei LIU
-
Publication number: 20190385004
Abstract: An image recognition method, a terminal, and a storage medium are provided. The method includes: performing feature extraction on a to-be-recognized image by using an encoder, to obtain a feature vector and a first annotation vector set; performing initialization processing on the feature vector, to obtain first initial input data; and generating first guiding information based on the first annotation vector set by using a first guiding network model. The first guiding network model is configured to generate guiding information according to an annotation vector set of any image. The method also includes determining a descriptive statement of the image based on the first guiding information, the first annotation vector set, and the first initial input data by using a decoder.
Type: Application
Filed: August 27, 2019
Publication date: December 19, 2019
Inventors: Wenhao JIANG, Lin MA, Wei LIU