Patents by Inventor Wenhao JIANG

Wenhao JIANG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12272335
    Abstract: A method includes: a processor obtains several lines of data from to-be-displayed display data to generate a data block; generates a synchronization flag corresponding to the data block; encapsulates the data block and its synchronization flag to obtain a data packet corresponding to the data block; and sends all data packets corresponding to the display data to a display system. The display system sequentially parses all the data packets sent by the processor to obtain the synchronization flag associated with each data packet, and determines a display location of each data block on a display panel based on that flag, so as to display the display data.
    Type: Grant
    Filed: June 12, 2023
    Date of Patent: April 8, 2025
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Weiwei Fan, Ming Chang, Hongli Wang, Xiaowen Cao, Wenhao Jiang, Siqing Du
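
As a rough illustration of the scheme in this entry's abstract, the sketch below packetizes a toy frame and reassembles it from the synchronization flags. The packet layout, the 4-byte flag encoding, and the `LINES_PER_BLOCK`/`LINE_BYTES` sizes are assumptions made for this sketch, not details taken from the patent.

```python
import struct

LINES_PER_BLOCK = 4          # assumed block size: display lines per data block
LINE_BYTES = 8               # assumed bytes per display line (toy resolution)

def packetize(display_data: bytes) -> list[bytes]:
    """Split display data into blocks and prepend a sync flag to each.

    The sync flag here is simply the index of the block's first line,
    packed as a 4-byte big-endian header (an assumption for this sketch).
    """
    packets = []
    block_bytes = LINES_PER_BLOCK * LINE_BYTES
    for i, offset in enumerate(range(0, len(display_data), block_bytes)):
        block = display_data[offset:offset + block_bytes]
        sync_flag = i * LINES_PER_BLOCK            # first line covered by this block
        packets.append(struct.pack(">I", sync_flag) + block)
    return packets

def display(packets: list[bytes]) -> dict[int, bytes]:
    """Parse packets in order and place each block at the line its flag names."""
    panel = {}
    for packet in packets:
        (sync_flag,) = struct.unpack(">I", packet[:4])
        panel[sync_flag] = packet[4:]              # display location = line index
    return panel

if __name__ == "__main__":
    frame = bytes(range(64))                       # 8 toy lines of 8 bytes each
    panel = display(packetize(frame))
    assert b"".join(panel[k] for k in sorted(panel)) == frame
    print(f"{len(panel)} blocks placed correctly")
```
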
  • Patent number: 11907851
    Abstract: Embodiments of this application disclose an image description generation method performed at a computing device. The method includes: obtaining a target image; generating a first global feature vector and a first label vector set of the target image; generating a first multi-mode feature vector of the target image through a matching model, the matching model being a model obtained through training according to a training image and reference image description information of the training image; and applying the first multi-mode feature vector, the first global feature vector, and the first label vector set to a computing model, to obtain the target image description information, the computing model being a model obtained through training according to image description information of the training image and the reference image description information.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: February 20, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Wenhao Jiang, Lin Ma, Wei Liu
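
The fusion step described in the abstract above can be pictured with a toy numpy sketch: a stand-in matching model produces the multi-mode feature vector, and a stand-in computing model consumes it together with the global feature vector and the label vector set. All dimensions and the random projections are illustrative assumptions; the real models are obtained through training as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)
W_match = rng.standard_normal((128, 256)) * 0.1   # stand-in matching model weights
W_vocab = rng.standard_normal((10, 448))          # stand-in computing model weights

def matching_model(global_feat: np.ndarray) -> np.ndarray:
    """Stand-in for the trained matching model: maps the image's global
    feature to a first multi-mode feature vector."""
    return np.tanh(W_match @ global_feat)

def computing_model(multi_mode, global_feat, label_vecs) -> int:
    """Stand-in for the computing model: fuses the multi-mode vector, the
    global feature vector, and the (pooled) label vector set, then scores a
    toy 10-word vocabulary; a real model would decode a full description."""
    fused = np.concatenate([multi_mode, global_feat, label_vecs.mean(axis=0)])
    return int((W_vocab @ fused).argmax())

global_feat = rng.standard_normal(256)     # first global feature vector
label_vecs = rng.standard_normal((5, 64))  # first label vector set (5 labels)
multi_mode = matching_model(global_feat)   # first multi-mode feature vector
print("top word id:", computing_model(multi_mode, global_feat, label_vecs))
```
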
  • Publication number: 20240007211
    Abstract: A method implemented by a communication system, including at least a first electronic device and a second electronic device, includes that the first electronic device plays multimedia content. The first electronic device sends first synchronization information through broadcast, so that the second electronic device receives the first synchronization information, where the first synchronization information includes at least a first address, and the first address is an obtaining address of the multimedia content. In response to a preset operation, the first electronic device establishes a near field wireless communication connection to the second electronic device, and the second electronic device caches the multimedia content based on at least the first address. The first electronic device sends a control instruction to the second electronic device by using the near field wireless communication connection. The second electronic device plays the multimedia content based on the control instruction.
    Type: Application
    Filed: September 1, 2021
    Publication date: January 4, 2024
    Inventors: Chong Chen, Shuo Zhang, Hao Wang, Songping Yao, Wenhao Jiang
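
A minimal in-process simulation of the message flow in the abstract above: the first device broadcasts synchronization information carrying the content's obtaining address, the second device caches the content from that address, and a later control instruction triggers playback. The class and field names (`SyncInfo`, `on_control`) are invented for illustration; real devices would use actual broadcast and near field wireless transports.

```python
from dataclasses import dataclass, field

@dataclass
class SyncInfo:
    """First synchronization information: carries at least the first address,
    i.e. where the multimedia content can be obtained (illustrative fields)."""
    address: str
    position_ms: int = 0

@dataclass
class SecondDevice:
    cache: dict = field(default_factory=dict)

    def on_broadcast(self, info: SyncInfo, fetch):
        # Cache the multimedia content based on the first address.
        self.cache[info.address] = fetch(info.address)

    def on_control(self, command: str, address: str):
        # Control instruction arriving over the near field wireless connection.
        if command == "play":
            print(f"playing cached content from {address}:", self.cache[address])

# Toy run: broadcast, cache, then a control instruction over a simulated link.
CONTENT = {"http://example.com/song.mp3": b"\x00\x01fake-audio"}
second = SecondDevice()
second.on_broadcast(SyncInfo("http://example.com/song.mp3"),
                    fetch=lambda url: CONTENT[url])
second.on_control("play", "http://example.com/song.mp3")
```
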
  • Publication number: 20230335081
    Abstract: A method includes: a processor obtains several lines of data from to-be-displayed display data to generate a data block; generates a synchronization flag corresponding to the data block; encapsulates the data block and its synchronization flag to obtain a data packet corresponding to the data block; and sends all data packets corresponding to the display data to a display system. The display system sequentially parses all the data packets sent by the processor to obtain the synchronization flag associated with each data packet, and determines a display location of each data block on a display panel based on that flag, so as to display the display data.
    Type: Application
    Filed: June 12, 2023
    Publication date: October 19, 2023
    Inventors: Weiwei Fan, Ming Chang, Hongli Wang, Xiaowen Cao, Wenhao Jiang, Siqing Du
  • Patent number: 11699298
    Abstract: This application relates to the field of artificial intelligence technologies, and in particular, to a training method of an image-text matching model, a bi-directional search method, and a relevant apparatus. The training method includes extracting a global feature and a local feature of an image sample; extracting a global feature and a local feature of a text sample; training a matching model according to the extracted global feature and local feature of the image sample and the extracted global feature and local feature of the text sample, to determine model parameters of the matching model; and determining, by the matching model, according to a global feature and a local feature of an inputted image and a global feature and a local feature of an inputted text, a matching degree between the image and the text.
    Type: Grant
    Filed: June 16, 2021
    Date of Patent: July 11, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Lin Ma, Wenhao Jiang, Wei Liu
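
One way to picture the matching degree computed by the model in the abstract above is a toy score that fuses a global cosine similarity with a best-match local alignment, sketched below. The fusion weight `alpha` and all feature dimensions are assumptions; the patented model learns how global and local features combine from image and text samples.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def matching_degree(img_global, img_local, txt_global, txt_local, alpha=0.5):
    """Combine a global similarity with a local alignment score.

    `alpha` is an assumed fusion weight; the trained matching model would
    determine how the two feature levels are weighted.
    """
    global_score = cosine(img_global, txt_global)
    # For each text fragment, take its best-matching image region, then average.
    local_score = np.mean([max(cosine(r, w) for r in img_local) for w in txt_local])
    return alpha * global_score + (1 - alpha) * local_score

rng = np.random.default_rng(1)
img_g, txt_g = rng.standard_normal(64), rng.standard_normal(64)
img_l = rng.standard_normal((6, 32))    # 6 image-region (local) features
txt_l = rng.standard_normal((4, 32))    # 4 text-fragment (local) features
print(f"matching degree: {matching_degree(img_g, img_l, txt_g, txt_l):.3f}")
```
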
  • Patent number: 11610082
    Abstract: A method, apparatus, and storage medium for training a neural network model used for image processing are described. The method includes: obtaining a plurality of video frames; inputting the plurality of video frames into a neural network model so that the neural network model outputs intermediate images; obtaining optical flow information between an early video frame and a later video frame; modifying an intermediate image corresponding to the early video frame according to the optical flow information to obtain an expected-intermediate image; determining a time loss between an intermediate image corresponding to the later video frame and the expected-intermediate image; determining a feature loss between the intermediate images and a target feature image; and training the neural network model according to the time loss and the feature loss, returning to the operation of obtaining a plurality of video frames to continue training until the neural network model satisfies a training finishing condition.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: March 21, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Haozhi Huang, Hao Wang, Wenhan Luo, Lin Ma, Peng Yang, Wenhao Jiang, Xiaolong Zhu, Wei Liu
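
The time loss in the abstract above can be sketched in numpy: warp the early frame's intermediate image by the optical flow to get the expected-intermediate image, then measure its distance to the later frame's intermediate image; the feature loss is computed similarly against a target feature image. Integer-pixel flow and MSE distances are simplifying assumptions for this sketch.

```python
import numpy as np

def warp(image: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Warp an H x W image by a per-pixel integer flow (dy, dx).
    Real systems interpolate sub-pixel flow; integer flow keeps this short."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ys = np.clip(ys + flow[..., 0], 0, h - 1)
    xs = np.clip(xs + flow[..., 1], 0, w - 1)
    return image[ys, xs]

def time_loss(early_out, later_out, flow) -> float:
    """MSE between the later output and the flow-warped early output."""
    expected = warp(early_out, flow)          # expected-intermediate image
    return float(np.mean((later_out - expected) ** 2))

def feature_loss(intermediate, target_feature) -> float:
    """MSE between an intermediate image and the target feature image."""
    return float(np.mean((intermediate - target_feature) ** 2))

rng = np.random.default_rng(2)
early, later = rng.random((8, 8)), rng.random((8, 8))  # toy intermediate images
flow = rng.integers(-1, 2, size=(8, 8, 2))             # toy optical flow field
total = time_loss(early, later, flow) + feature_loss(later, rng.random((8, 8)))
print(f"combined training loss: {total:.4f}")
```
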
  • Patent number: 11494658
    Abstract: This application relates to an abstract description generating method, an abstract description generation model training method, a computer device, and a storage medium. The abstract description generating method includes: inputting a labeled training sample into an abstract description generation model; performing first-phase training on an encoding network and a decoding network of the abstract description generation model based on supervision of a first loss function; obtaining a backward-derived hidden state of a previous moment through backward derivation according to a hidden state of each moment outputted by the decoding network; obtaining a value of a second loss function according to the backward-derived hidden state of the previous moment and an actual hidden state of the previous moment outputted by the decoding network; and obtaining final model parameters of the abstract description generation model when training under supervision of the second loss function reaches a preset threshold value.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: November 8, 2022
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Xinpeng Chen, Lin Ma, Wenhao Jiang, Wei Liu
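
A toy version of the second loss function from the abstract above: backward-derive an estimate of the previous hidden state from each decoder hidden state (here through an assumed fixed linear map) and accumulate its mismatch with the actual previous state. In the patent the backward derivation is part of the trained model; the map `W_back` below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
HIDDEN = 16
W_back = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1  # assumed backward-derivation map

def second_loss(hidden_states: np.ndarray) -> float:
    """Given decoder hidden states h_1..h_T (one per row), backward-derive an
    estimate of h_{t-1} from each h_t and average the squared mismatch."""
    loss = 0.0
    for t in range(1, len(hidden_states)):
        derived_prev = np.tanh(W_back @ hidden_states[t])   # backward-derived h_{t-1}
        loss += np.mean((derived_prev - hidden_states[t - 1]) ** 2)
    return loss / (len(hidden_states) - 1)

states = rng.standard_normal((5, HIDDEN))   # hidden states from the decoding network
print(f"second-phase loss: {second_loss(states):.4f}")
```
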
  • Publication number: 20220163652
    Abstract: A frequency modulated continuous wave (FMCW)-based virtual reality (VR) environment interaction system and method are provided. Signal generators (S1, S2, S3) are provided to transmit FMCW signals; a glove is worn on a user's hand; and multiple signal receiving nodes (H) are provided on the glove and configured to receive the FMCW signals. When the signal receiving nodes (H) receive the FMCW signals, one-dimensional distances are measured by means of the FMCW technique; after the distances are measured, positions of the signal receiving nodes (H) in a coordinate system of the signal generators (S1, S2, S3) are calculated; a change in a position of the hand that wears the glove is tracked by means of changes in the positions of the signal receiving nodes (H); and a VR interaction is performed by outputting a change in a coordinate point matrix formed by the signal receiving nodes (H).
    Type: Application
    Filed: February 6, 2022
    Publication date: May 26, 2022
    Inventors: Yanchao ZHAO, Wenhao JIANG, Si LI
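
Once a receiving node has an FMCW range to each of the three generators, its position in the generators' coordinate system follows from trilateration. The least-squares sketch below assumes known generator coordinates; note that a coplanar generator layout leaves a mirror ambiguity out of the plane, so this toy keeps everything in the z = 0 plane.

```python
import numpy as np

def locate(generators: np.ndarray, distances: np.ndarray) -> np.ndarray:
    """Trilaterate a receiving node from ranges to fixed signal generators.

    Subtracting the first sphere equation |x - p_i|^2 = d_i^2 from the rest
    linearizes the system, which is then solved by least squares.
    """
    p0, d0 = generators[0], distances[0]
    A = 2 * (generators[1:] - p0)
    b = (d0**2 - distances[1:]**2
         + np.sum(generators[1:]**2, axis=1) - np.sum(p0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Toy setup: generators S1, S2, S3 at known positions, one receiving node H.
S = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
H_true = np.array([0.7, 0.9, 0.0])
ranges = np.linalg.norm(S - H_true, axis=1)      # distances measured via FMCW
print("recovered node position:", np.round(locate(S, ranges), 3))
```
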
  • Publication number: 20220156518
    Abstract: Embodiments of this application disclose an image description generation method performed at a computing device. The method includes: obtaining a target image; generating a first global feature vector and a first label vector set of the target image; generating a first multi-mode feature vector of the target image through a matching model, the matching model being a model obtained through training according to a training image and reference image description information of the training image; and applying the first multi-mode feature vector, the first global feature vector, and the first label vector set to a computing model, to obtain the target image description information, the computing model being a model obtained through training according to image description information of the training image and the reference image description information.
    Type: Application
    Filed: January 31, 2022
    Publication date: May 19, 2022
    Inventors: Wenhao JIANG, Lin MA, Wei LIU
  • Patent number: 11301641
    Abstract: A terminal for generating music may identify, based on execution of scenario recognition, scenarios for images previously received by the terminal. The terminal may generate respective description texts for the scenarios. The terminal may execute keyword-based rhyme matching based on the respective description texts. The terminal may generate respective rhyming lyrics corresponding to the images. The terminal may convert the respective rhyming lyrics corresponding to the images into a speech. The terminal may synthesize the speech with preset background music to obtain image music.
    Type: Grant
    Filed: October 22, 2019
    Date of Patent: April 12, 2022
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Nan Wang, Wei Liu, Lin Ma, Wenhao Jiang, Guangzhi Li, Shiyin Kang, Deyi Tuo, Xiaolong Zhu, Youyi Zhang, Shaobin Lin, Yongsen Zheng, Zixin Zou, Jing He, Zaizhen Chen, Pinyi Li
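
The keyword-based rhyme matching step from the abstract above can be pictured with a toy suffix-based rhyme key, sketched below. A real system would match phonemes or pinyin finals rather than letter suffixes, and the surrounding stages (scenario recognition, speech synthesis, mixing with background music) appear only as comments.

```python
def rhyme_key(word: str, k: int = 2) -> str:
    """Toy rhyme key: the last k letters of the word (a real system would
    compare phonemes or pinyin finals instead)."""
    return word.lower().strip(".,!?")[-k:]

def match_rhyming_lyrics(description: str, candidates: list[str]) -> list[str]:
    """Keyword-based rhyme matching: keep candidate lines whose final word
    rhymes with the description's final word."""
    key = rhyme_key(description.split()[-1])
    return [line for line in candidates if rhyme_key(line.split()[-1]) == key]

# Stub pipeline: scenario -> description text -> rhyming lyrics -> speech + music.
description = "a quiet beach at night"                  # from scenario recognition
candidates = ["stars shining bright", "sand out of sight", "waves on the shore"]
lyrics = match_rhyming_lyrics(description, candidates)
print("rhyming lyrics:", lyrics)  # downstream: TTS, then mix with background music
```
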
  • Patent number: 11270160
    Abstract: Embodiments of this application disclose an image description generation method performed at a computing device. The method includes: obtaining a target image; generating a first global feature vector and a first label vector set of the target image; applying the target image to a matching model and generating a first multi-mode feature vector of the target image through the matching model, the matching model being a model obtained through training according to a training image and reference image description information of the training image; and generating target image description information of the target image according to the first multi-mode feature vector, the first global feature vector, and the first label vector set.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: March 8, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Wenhao Jiang, Lin Ma, Wei Liu
  • Publication number: 20210312211
    Abstract: This application relates to the field of artificial intelligence technologies, and in particular, to a training method of an image-text matching model, a bi-directional search method, and a relevant apparatus. The training method includes extracting a global feature and a local feature of an image sample; extracting a global feature and a local feature of a text sample; training a matching model according to the extracted global feature and local feature of the image sample and the extracted global feature and local feature of the text sample, to determine model parameters of the matching model; and determining, by the matching model, according to a global feature and a local feature of an inputted image and a global feature and a local feature of an inputted text, a matching degree between the image and the text.
    Type: Application
    Filed: June 16, 2021
    Publication date: October 7, 2021
    Inventors: Lin MA, Wenhao JIANG, Wei LIU
  • Patent number: 11087166
    Abstract: This application relates to the field of artificial intelligence technologies, and in particular, to a training method of an image-text matching model, a bi-directional search method, and a relevant apparatus. The training method includes extracting a global feature and a local feature of an image sample; extracting a global feature and a local feature of a text sample; training a matching model according to the extracted global feature and local feature of the image sample and the extracted global feature and local feature of the text sample, to determine model parameters of the matching model; and determining, by the matching model, according to a global feature and a local feature of an inputted image and a global feature and a local feature of an inputted text, a matching degree between the image and the text.
    Type: Grant
    Filed: September 23, 2019
    Date of Patent: August 10, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Lin Ma, Wenhao Jiang, Wei Liu
  • Publication number: 20210182616
    Abstract: A method, apparatus, and storage medium for training a neural network model used for image processing are described. The method includes: obtaining a plurality of video frames; inputting the plurality of video frames into a neural network model so that the neural network model outputs intermediate images; obtaining optical flow information between an early video frame and a later video frame; modifying an intermediate image corresponding to the early video frame according to the optical flow information to obtain an expected-intermediate image; determining a time loss between an intermediate image corresponding to the later video frame and the expected-intermediate image; determining a feature loss between the intermediate images and a target feature image; and training the neural network model according to the time loss and the feature loss, returning to the operation of obtaining a plurality of video frames to continue training until the neural network model satisfies a training finishing condition.
    Type: Application
    Filed: February 26, 2021
    Publication date: June 17, 2021
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Haozhi HUANG, Hao WANG, Wenhan LUO, Lin MA, Peng YANG, Wenhao JIANG, Xiaolong ZHU, Wei LIU
  • Patent number: 10970600
    Abstract: A method, apparatus, and storage medium for training a neural network model used for image processing are described. The method includes: obtaining a plurality of video frames; inputting the plurality of video frames into a neural network model so that the neural network model outputs intermediate images; obtaining optical flow information between an early video frame and a later video frame; modifying an intermediate image corresponding to the early video frame according to the optical flow information to obtain an expected-intermediate image; determining a time loss between an intermediate image corresponding to the later video frame and the expected-intermediate image; determining a feature loss between the intermediate images and a target feature image; and training the neural network model according to the time loss and the feature loss, returning to the operation of obtaining a plurality of video frames to continue training until the neural network model satisfies a training finishing condition.
    Type: Grant
    Filed: April 2, 2019
    Date of Patent: April 6, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Haozhi Huang, Hao Wang, Wenhan Luo, Lin Ma, Peng Yang, Wenhao Jiang, Xiaolong Zhu, Wei Liu
  • Patent number: 10956771
    Abstract: An image recognition method, a terminal, and a storage medium are provided. The method includes: performing feature extraction on a to-be-recognized image by using an encoder, to obtain a feature vector and a first annotation vector set; performing initialization processing on the feature vector, to obtain first initial input data; and generating first guiding information based on the first annotation vector set by using a first guiding network model. The first guiding network model is configured to generate guiding information according to an annotation vector set of any image. The method also includes determining a descriptive statement of the image based on the first guiding information, the first annotation vector set, and the first initial input data by using a decoder.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: March 23, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Wenhao Jiang, Lin Ma, Wei Liu
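
The data flow in the abstract above, sketched with stub numpy components: an encoder yields the feature vector and annotation vector set, initialization processing produces the initial input data, the guiding network pools the annotation set into guiding information, and a decoder consumes all three. Every projection, pooling choice, and dimension here is an illustrative assumption, not the patented architecture.

```python
import numpy as np

rng = np.random.default_rng(4)

def encoder(image):
    """Stub encoder: returns a global feature vector plus an annotation
    vector set (one vector per image region)."""
    return rng.standard_normal(128), rng.standard_normal((10, 64))

def guiding_network(annotations: np.ndarray) -> np.ndarray:
    """Stub first guiding network model: pools the annotation vector set
    into guiding information (a real model learns this mapping)."""
    return np.tanh(annotations.mean(axis=0))

def decoder(guiding, annotations, init_data, steps=3):
    """Stub decoder: emits one toy word id per step, conditioned on the
    guiding information, annotation set, and initial input data."""
    state, words = init_data, []
    for _ in range(steps):
        context = annotations.mean(axis=0) + guiding      # attention stand-in
        state = np.tanh(state + rng.standard_normal(state.size) * 0.1)
        words.append(int((state.sum() + context.sum()) * 100) % 1000)
    return words

feature, annotations = encoder(image=None)     # feature extraction
init_data = np.tanh(feature[:64])              # initialization processing (assumed)
guiding = guiding_network(annotations)         # first guiding information
print("descriptive statement (toy word ids):",
      decoder(guiding, annotations, init_data))
```
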
  • Publication number: 20200082271
    Abstract: This application relates to an abstract description generating method, an abstract description generation model training method, a computer device, and a storage medium. The abstract description generating method includes: inputting a labeled training sample into an abstract description generation model; performing first-phase training on an encoding network and a decoding network of the abstract description generation model based on supervision of a first loss function; obtaining a backward-derived hidden state of a previous moment through backward derivation according to a hidden state of each moment outputted by the decoding network; obtaining a value of a second loss function according to the backward-derived hidden state of the previous moment and an actual hidden state of the previous moment outputted by the decoding network; and obtaining final model parameters of the abstract description generation model when training under supervision of the second loss function reaches a preset threshold value.
    Type: Application
    Filed: November 15, 2019
    Publication date: March 12, 2020
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Xinpeng CHEN, Lin MA, Wenhao JIANG, Wei LIU
  • Publication number: 20200051536
    Abstract: A terminal for generating music may identify, based on execution of scenario recognition, scenarios for images previously received by the terminal. The terminal may generate respective description texts for the scenarios. The terminal may execute keyword-based rhyme matching based on the respective description texts. The terminal may generate respective rhyming lyrics corresponding to the images. The terminal may convert the respective rhyming lyrics corresponding to the images into a speech. The terminal may synthesize the speech with preset background music to obtain image music.
    Type: Application
    Filed: October 22, 2019
    Publication date: February 13, 2020
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Nan WANG, Wei LIU, Lin MA, Wenhao JIANG, Guangzhi LI, Shiyin KANG, Deyi TUO, Xiaolong ZHU, Youyi ZHANG, Shaobin LIN, Yongsen ZHENG, Zixin ZOU, Jing HE, Zaizhen CHEN, Pinyi LI
  • Publication number: 20200019807
    Abstract: This application relates to the field of artificial intelligence technologies, and in particular, to a training method of an image-text matching model, a bi-directional search method, and a relevant apparatus. The training method includes extracting a global feature and a local feature of an image sample; extracting a global feature and a local feature of a text sample; training a matching model according to the extracted global feature and local feature of the image sample and the extracted global feature and local feature of the text sample, to determine model parameters of the matching model; and determining, by the matching model, according to a global feature and a local feature of an inputted image and a global feature and a local feature of an inputted text, a matching degree between the image and the text.
    Type: Application
    Filed: September 23, 2019
    Publication date: January 16, 2020
    Inventors: Lin MA, Wenhao JIANG, Wei LIU
  • Publication number: 20190385004
    Abstract: An image recognition method, a terminal, and a storage medium are provided. The method includes: performing feature extraction on a to-be-recognized image by using an encoder, to obtain a feature vector and a first annotation vector set; performing initialization processing on the feature vector, to obtain first initial input data; and generating first guiding information based on the first annotation vector set by using a first guiding network model. The first guiding network model is configured to generate guiding information according to an annotation vector set of any image. The method also includes determining a descriptive statement of the image based on the first guiding information, the first annotation vector set, and the first initial input data by using a decoder.
    Type: Application
    Filed: August 27, 2019
    Publication date: December 19, 2019
    Inventors: Wenhao JIANG, Lin MA, Wei LIU