Patents by Inventor Yanggang DAI

Yanggang DAI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11140436
    Abstract: A method of performing video synthesis by a terminal is described. Circuitry of the terminal receives a first operation to trigger capture of first media information and detects at least one of a facial expression change of a user based on a first preset condition or a gesture of the user based on a second preset condition during the capture of the first media information. The circuitry of the terminal sends the detected at least one of the facial expression change or the gesture of the user to a server as key information. The circuitry of the terminal receives second media information that corresponds to the key information from the server and performs video synthesis on the first media information and the second media information. (See the illustrative code sketch following this listing.)
    Type: Grant
    Filed: April 25, 2018
    Date of Patent: October 5, 2021
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Qianyi Wang, Yanggang Dai, Lei Ying, Faqiang Wu, Lingrui Cui, Zhenhai Wu, Yu Gao
  • Patent number: 10880598
    Abstract: Aspects of the disclosure provide methods and apparatuses for generating video data. In some examples, an apparatus for generating the video data includes processing circuitry. The processing circuitry obtains tempo information from audio data that is inserted in a target video. The processing circuitry determines a plurality of target tempo points in the tempo information according to video effect time description information included in a video effect description file. The video effect description file includes video effect data that is used to adjust one or more video frames of an original video. The processing circuitry obtains the one or more video frames from the original video according to the plurality of target tempo points. The processing circuitry adjusts the one or more video frames with the video effect data. The processing circuitry generates the target video including the adjusted one or more video frames and the audio data. (See the illustrative code sketch following this listing.)
    Type: Grant
    Filed: July 9, 2019
    Date of Patent: December 29, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Qianyi Wang, Yanggang Dai, Yu Gao, Bin Fu
  • Patent number: 10679675
    Abstract: This application discloses a multimedia file joining method performed by an apparatus. After obtaining a first video clip and a second video clip to be joined, the apparatus obtains an audio file corresponding to the first video clip and the second video clip. The audio file records the first start and end time points of the first video clip and the second start and end time points of the second video clip. The apparatus adjusts the first video clip to play in a first time period indicated by the first start and end time points, and adjusts the second video clip to play in a second time period indicated by the second start and end time points, with the first time period not overlapping the second time period. Finally, the apparatus performs a joining operation on the adjusted first video clip and the adjusted second video clip to obtain a joined video file. (See the illustrative code sketch following this listing.)
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: June 9, 2020
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Tao Xiong, Lingrui Cui, Lei Ying, Faqiang Wu, Bin Fu, Qianyi Wang, Yanggang Dai
  • Patent number: 10652613
    Abstract: This disclosure describes a media information processing method performed by a media information processing apparatus. The apparatus determines media information clips of target media information and their characteristics, and generates a first media information clip of a first user based on the determined characteristics. Next, the apparatus determines the media information clips other than the target media information clip in the target media information, and obtains a second media information clip corresponding to the characteristics of the determined media information clips. The apparatus then determines a splicing manner of the media information clips in the target media information, and splices the first media information clip and the second media information clip based on the determined splicing manner. (See the illustrative code sketch following this listing.)
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: May 12, 2020
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Zhenhai Wu, Bin Fu, Lingrui Cui, Qianyi Wang, Yanggang Dai, Feng Shi, Faqiang Wu
  • Patent number: 10628677
    Abstract: A method for selecting a matching partner for a costarring video is performed by a terminal. The terminal obtains a first video, recorded by a first user, in which a first role is played, and a plurality of second videos in which a second role matching the first role is played by a respective second user. After obtaining matching scores between the first video and the second videos in each user type, the terminal ranks the second videos in which the second role is played by the second users for each user type and displays a ranking result of the second videos in which the second role is played for each user type. After obtaining a user selection of a second video according to the ranking result, the terminal synthesizes a complete video from the first video and the user-selected second video and plays the complete video. (See the illustrative code sketch following this listing.)
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: April 21, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Qianyi Wang, Yanggang Dai, Feng Shi, Faqiang Wu, Lingrui Cui, Tao Xiong, Yu Gao, Yunsheng Wu
  • Publication number: 20190335229
    Abstract: Aspects of the disclosure provide methods and apparatuses for generating video data. In some examples, an apparatus for generating the video data includes processing circuitry. The processing circuitry obtains tempo information from audio data that is inserted in a target video. The processing circuitry determines a plurality of target tempo points in the tempo information according to video effect time description information included in a video effect description file. The video effect description file includes video effect data that is used to adjust one or more video frames of an original video. The processing circuitry obtains the one or more video frames from the original video according to the plurality of target tempo points. The processing circuitry adjusts the one or more video frames with the video effect data. The processing circuitry generates the target video including the adjusted one or more video frames and the audio data.
    Type: Application
    Filed: July 9, 2019
    Publication date: October 31, 2019
    Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Qianyi WANG, Yanggang DAI, Yu GAO, Bin FU
  • Publication number: 20190303679
    Abstract: A method for selecting a matching partner for a costarring video is performed by a terminal. The terminal obtains a first video, recorded by a first user, in which a first role is played, and a plurality of second videos in which a second role matching the first role is played by a respective second user. After obtaining matching scores between the first video and the second videos in each user type, the terminal ranks the second videos in which the second role is played by the second users for each user type and displays a ranking result of the second videos in which the second role is played for each user type. After obtaining a user selection of a second video according to the ranking result, the terminal synthesizes a complete video from the first video and the user-selected second video and plays the complete video.
    Type: Application
    Filed: June 17, 2019
    Publication date: October 3, 2019
    Inventors: Qianyi WANG, Yanggang DAI, Feng SHI, Faqiang WU, Lingrui CUI, Tao XIONG, Yu GAO, Yunsheng WU
  • Patent number: 10380427
    Abstract: A partner matching method in a costarring video is performed by a terminal. The terminal obtains a video that is recorded by using a first user identifier and in which a first role is played, and a video in which a second role that matches the first role is played, together with an associated second user identifier. After obtaining a total score of the videos in which the second role is played by each second user identifier in each user type, the terminal ranks the videos in which the second role is played by the second user identifiers for each user type and displays a ranking result of the videos in which the second role is played for each user type. After obtaining a video selected from the ranking result, the terminal synthesizes a complete video from the selected video in which the second role is played and the video in which the first role is played.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: August 13, 2019
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Qianyi Wang, Yanggang Dai, Feng Shi, Faqiang Wu, Lingrui Cui, Tao Xiong, Yu Gao, Yunsheng Wu
  • Publication number: 20180357485
    Abstract: A partner matching method in a costarring video is performed by a terminal. The terminal obtains a video that is recorded by using a first user identifier and in which a first role is played, and a video in which a second role that matches the first role is played, together with an associated second user identifier. After obtaining a total score of the videos in which the second role is played by each second user identifier in each user type, the terminal ranks the videos in which the second role is played by the second user identifiers for each user type and displays a ranking result of the videos in which the second role is played for each user type. After obtaining a video selected from the ranking result, the terminal synthesizes a complete video from the selected video in which the second role is played and the video in which the first role is played.
    Type: Application
    Filed: August 21, 2018
    Publication date: December 13, 2018
    Inventors: Qianyi WANG, Yanggang DAI, Feng SHI, Faqiang WU, Lingrui CUI, Tao XIONG, Yu GAO, Yunsheng WU
  • Publication number: 20180352293
    Abstract: This disclosure describes a media information processing method performed by a media information processing apparatus. The apparatus determines media information clips of target media information and their characteristics, and generates a first media information clip of a first user based on the determined characteristics. Next, the apparatus determines the media information clips other than the target media information clip in the target media information, and obtains a second media information clip corresponding to the characteristics of the determined media information clips. The apparatus then determines a splicing manner of the media information clips in the target media information, and splices the first media information clip and the second media information clip based on the determined splicing manner.
    Type: Application
    Filed: July 20, 2018
    Publication date: December 6, 2018
    Inventors: Zhenhai WU, Bin FU, Lingrui CUI, Qianyi WANG, Yanggang DAI, Feng SHI, Faqiang WU
  • Publication number: 20180330757
    Abstract: This application discloses a multimedia file joining method performed by an apparatus. After obtaining a first video clip and a second video clip to be joined, the apparatus obtains an audio file corresponding to the first video clip and the second video clip. The audio file records the first start and end time points of the first video clip and the second start and end time points of the second video clip. The apparatus adjusts the first video clip to play in a first time period indicated by the first start and end time points, and adjusts the second video clip to play in a second time period indicated by the second start and end time points, with the first time period not overlapping the second time period. Finally, the apparatus performs a joining operation on the adjusted first video clip and the adjusted second video clip to obtain a joined video file.
    Type: Application
    Filed: June 29, 2018
    Publication date: November 15, 2018
    Inventors: Tao XIONG, Lingrui CUI, Lei YING, Faqiang WU, Bin FU, Qianyi WANG, Yanggang DAI
  • Publication number: 20180249200
    Abstract: A method of performing video synthesis by a terminal is described. Circuitry of the terminal receives a first operation to trigger capture of first media information and detects at least one of a facial expression change of a user based on a first preset condition or a gesture of the user based on a second preset condition during the capture of the first media information. The circuitry of the terminal sends the detected at least one of the facial expression change or the gesture of the user to a server as key information. The circuitry of the terminal receives second media information that corresponds to the key information from the server and performs video synthesis on the first media information and the second media information.
    Type: Application
    Filed: April 25, 2018
    Publication date: August 30, 2018
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Qianyi WANG, Yanggang DAI, Lei YING, Faqiang WU, Lingrui CUI, Zhenhai WU, Yu GAO
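
Illustrative code sketches

The method of patent 11140436 (also published as 20180249200) reads as a small client-side loop: detect an expression change or gesture during capture, send it to the server as key information, receive the corresponding second media information, and synthesize it with the captured first media information. The following is a minimal sketch under assumed shapes: detect_key_event, lookup_on_server, and the string stand-ins for media buffers are all hypothetical, and the real detection conditions and server protocol are not given in the listing.

```python
from dataclasses import dataclass

# Hypothetical key-information record sent to the server.
@dataclass
class KeyInfo:
    kind: str   # "expression_change" or "gesture"
    label: str  # e.g. "smile", "wave"

def detect_key_event(frame: str) -> KeyInfo | None:
    """Stand-in detector: the patent presumes preset conditions for expression
    changes and gestures; here a match is faked on a marker in the frame name."""
    if "smile" in frame:
        return KeyInfo("expression_change", "smile")
    if "wave" in frame:
        return KeyInfo("gesture", "wave")
    return None

def lookup_on_server(key: KeyInfo) -> str:
    """Stub for the server round trip that returns second media information."""
    catalog = {"smile": "sticker_overlay.mp4", "wave": "greeting_clip.mp4"}
    return catalog.get(key.label, "default_clip.mp4")

def synthesize(first_media: list[str], second_media: str) -> str:
    """Toy 'video synthesis': concatenate identifiers of both media sources."""
    return "+".join(first_media + [second_media])

if __name__ == "__main__":
    captured = ["frame_001", "frame_002_smile", "frame_003"]  # first media information
    key = None
    for frame in captured:
        key = detect_key_event(frame)
        if key is not None:
            break
    if key is not None:
        second = lookup_on_server(key)  # second media information from the server
        print(synthesize(captured, second))
```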
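
Patent 10880598 (also published as 20190335229) selects target tempo points from the audio's tempo information using timing data in a video effect description file, then adjusts the frames at those points. A minimal sketch under assumed formats: tempo points as timestamps in seconds, the description file as a small dict, and frames as plain records; none of these data shapes appear in the listing.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Frame:
    timestamp: float           # seconds into the original video
    effect: str | None = None  # applied video effect, if any

def select_target_tempo_points(tempo_points, effect_times):
    """Keep only the tempo points that fall inside the windows named by the
    (assumed) video effect time description information."""
    return [t for t in tempo_points
            if any(start <= t <= end for start, end in effect_times)]

def apply_effect(frames, targets, effect, tolerance=0.05):
    """Adjust the frames that sit on (or near) a target tempo point."""
    adjusted = []
    for frame in frames:
        hit = any(abs(frame.timestamp - t) <= tolerance for t in targets)
        adjusted.append(replace(frame, effect=effect) if hit else frame)
    return adjusted

if __name__ == "__main__":
    original = [Frame(t / 10) for t in range(0, 50)]          # 5 s of video at 10 fps
    tempo = [0.5, 1.5, 2.5, 3.5, 4.5]                         # tempo info from the audio
    description = {"effect": "flash", "times": [(1.0, 3.0)]}  # assumed description file
    targets = select_target_tempo_points(tempo, description["times"])
    target_video = apply_effect(original, targets, description["effect"])
    print([f for f in target_video if f.effect])              # frames adjusted at 1.5 s and 2.5 s
```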
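
Patent 10679675 (also published as 20180330757) fits each of two clips into a non-overlapping time window recorded in an accompanying audio file and then joins them. The sketch below assumes the "adjustment" is a playback-speed change so each clip's duration matches its window; the listing does not say how clips are adjusted, and the Window structure here is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    name: str
    duration: float  # seconds

@dataclass
class Window:
    start: float
    end: float

    @property
    def length(self) -> float:
        return self.end - self.start

def adjust_to_window(clip: Clip, window: Window) -> dict:
    """Fit a clip into its window by changing playback speed (one possible
    adjustment; trimming or padding would be alternatives)."""
    return {
        "clip": clip.name,
        "start": window.start,
        "end": window.end,
        "speed": clip.duration / window.length,  # >1 means play faster
    }

def join(clips_with_windows):
    """Join adjusted clips in timeline order after checking that the windows do not overlap."""
    ordered = sorted(clips_with_windows, key=lambda cw: cw[1].start)
    for (_, w1), (_, w2) in zip(ordered, ordered[1:]):
        if w1.end > w2.start:
            raise ValueError("time periods must not overlap")
    return [adjust_to_window(c, w) for c, w in ordered]

if __name__ == "__main__":
    first = (Clip("clip_a.mp4", 12.0), Window(0.0, 10.0))
    second = (Clip("clip_b.mp4", 8.0), Window(10.0, 20.0))
    for entry in join([first, second]):
        print(entry)
```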
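
Patent 10652613 (also published as 20180352293) generates a clip for the first user from the determined characteristics, obtains clips for the remaining characteristics, and splices them in a determined splicing manner. A minimal sketch assuming characteristics are plain strings, a lookup table stands in for obtaining the second clips, and the splicing manner is simply the order of the target characteristics; the listing defines none of these.

```python
from dataclasses import dataclass

@dataclass
class MediaClip:
    source: str          # "first_user" or "library"
    characteristic: str  # e.g. "opening", "chorus", "ending"

# Stand-in for obtaining second media information clips by characteristic.
CLIP_LIBRARY = {
    "opening": MediaClip("library", "opening"),
    "ending": MediaClip("library", "ending"),
}

def generate_first_clip(characteristic: str) -> MediaClip:
    """The first user's clip, generated from the determined characteristic."""
    return MediaClip("first_user", characteristic)

def splice(target_characteristics, user_characteristic, manner="target_order"):
    """Splice the first-user clip with library clips for the other characteristics."""
    clips = []
    for ch in target_characteristics:
        if ch == user_characteristic:
            clips.append(generate_first_clip(ch))
        else:
            clips.append(CLIP_LIBRARY[ch])
    if manner == "target_order":  # one possible splicing manner
        return clips
    raise ValueError(f"unknown splicing manner: {manner}")

if __name__ == "__main__":
    result = splice(["opening", "chorus", "ending"], user_characteristic="chorus")
    print([f"{c.source}:{c.characteristic}" for c in result])
```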
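
Patents 10628677 and 10380427 (with publications 20190303679 and 20180357485) rank candidate second videos per user type by matching score, display the ranking, and synthesize the complete costarring video from the first video and the selected second video. The sketch below assumes precomputed scores and string user types; how the matching scores or total scores are actually computed is not described in the listing.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class SecondVideo:
    video_id: str
    user_type: str  # e.g. "friend", "celebrity", "nearby"
    score: float    # matching score against the first video (assumed precomputed)

def rank_by_user_type(candidates):
    """Group candidate second videos by user type and rank each group by score."""
    groups = defaultdict(list)
    for video in candidates:
        groups[video.user_type].append(video)
    return {utype: sorted(vs, key=lambda v: v.score, reverse=True)
            for utype, vs in groups.items()}

def synthesize(first_video: str, chosen: SecondVideo) -> str:
    """Toy stand-in for synthesizing the complete costarring video."""
    return f"{first_video}+{chosen.video_id}"

if __name__ == "__main__":
    candidates = [
        SecondVideo("v1", "friend", 0.72),
        SecondVideo("v2", "friend", 0.91),
        SecondVideo("v3", "celebrity", 0.64),
    ]
    ranking = rank_by_user_type(candidates)
    for utype, videos in ranking.items():
        print(utype, [v.video_id for v in videos])  # the displayed ranking result
    user_pick = ranking["friend"][0]                # e.g. the user selects the top-ranked friend video
    print(synthesize("first_role.mp4", user_pick))
```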