Patents by Inventor Yufan XUE

Yufan XUE is named as an inventor on the following patent filings. This listing includes pending patent applications as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250069585
    Abstract: The present disclosure relates to a music generation method, apparatus and system, and storage medium. In an embodiment of the present disclosure, text information is obtained and converted into a corresponding voice audio; an initial music audio is obtained, wherein the initial music audio comprises a music key point where the music characteristics of the initial music audio change abruptly; and on the basis of the position of the music key point, the voice audio and the initial music audio are synthesized to obtain a target music audio. In the target music audio, the voice audio appears at the position of the music key point of the initial music audio. Thus, a music audio is generated from text information, and the user can customize both the content of the text information and the initial music audio.
    Type: Application
    Filed: April 27, 2023
    Publication date: February 27, 2025
    Inventors: Andrew SHAW, Yilin ZHANG, Jitong CHEN, Vibert THIO, Shawn Chan Zhen YI, Liangqin XU, Yufan XUE
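The abstract above describes placing a synthesized voice at a "music key point" where the music changes abruptly. A minimal sketch of that idea in Python with NumPy follows; the key-point heuristic (largest jump in frame energy), the mixing gain and all function names are illustrative assumptions, not the patented implementation, and text-to-speech is assumed to have already produced the voice waveform.
```python
# Minimal sketch (illustrative only, not the patented implementation):
# locate a "music key point" as the largest sudden change in frame energy,
# then mix a voice clip into the music starting at that point.
import numpy as np

def find_key_point(music: np.ndarray, frame: int = 2048) -> int:
    """Return the sample index where frame-level energy changes most abruptly."""
    n_frames = len(music) // frame
    energy = np.array([np.sum(music[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n_frames)])
    jump = np.abs(np.diff(energy))              # size of the change between frames
    return int((np.argmax(jump) + 1) * frame)

def place_voice_at_key_point(music: np.ndarray, voice: np.ndarray) -> np.ndarray:
    """Overlay the voice audio onto the music at the detected key point."""
    start = find_key_point(music)
    end = min(start + len(voice), len(music))
    mixed = music.copy()
    mixed[start:end] += 0.8 * voice[:end - start]   # simple additive mix
    return np.clip(mixed, -1.0, 1.0)

# Toy usage: a quiet passage followed by a loud one gives an obvious key point.
sr = 22050
music = 0.05 * np.random.randn(sr * 10).astype(np.float32)
music[sr * 4:] *= 6.0
voice = 0.3 * np.sin(2 * np.pi * 220 * np.arange(sr * 2) / sr).astype(np.float32)
target = place_voice_at_key_point(music, voice)
```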
  • Patent number: 12236925
    Abstract: Embodiments of the present disclosure provide a method and a device for music play. The method comprises: receiving a first operation instruction in a target application for playing music; in response to the first operation instruction, presenting a first interface of the target application, the first interface including an operation control for enhancing the play effect of the music through at least one processing operation, the processing operation representing the music content in a way beyond sound alone; receiving a second operation instruction for the operation control; and processing the music based on the second operation instruction while the music is playing.
    Type: Grant
    Filed: December 20, 2023
    Date of Patent: February 25, 2025
    Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
    Inventors: Mengfei Xie, Yufan Xue, Wei Hua, Xiaoyu Zhu, Dailong Chen, Jia Ding, Zoujie He, Jie Weng, Chaopeng Liu, Bowen Yang
  • Publication number: 20240371345
    Abstract: Embodiments of the present disclosure relate to a music generation method, apparatus, system and storage medium. In at least some embodiments of the present disclosure, a music generation interface including a text input box, a music generation control and a music configuration item is displayed in response to a triggering operation by a user, so that the user can input a custom text in the text input box and configure a music melody through the music configuration item. Then, in response to an operation by the user triggering the music generation control, a voice is generated based on the custom text input by the user, and a music including the voice corresponding to the custom text is generated based on the generated voice and the user-configured music melody.
    Type: Application
    Filed: April 27, 2023
    Publication date: November 7, 2024
    Inventors: Yufan XUE, Qiang ZHENG, Dong NIU, Liangqin XU, Xiaochan WANG, Jitong CHEN, Bochen LI, Naihan LI
  • Publication number: 20240348846
    Abstract: A video generating method includes acquiring video materials from an initial collection which comprises user-related videos, acquiring a target audio material serving as background music, performing image feature extraction on the video frames of each video material, performing segmentation processing according to the image feature information corresponding to each video frame to acquire a target video segment for that video material, and merging the target video segments with the target audio material to generate a target video. The target video includes video segments obtained from the respective target video segments; these video segments are played in order of posting time, and the time length of each video segment is matched with the time length of the corresponding musical phrase in the target audio material.
    Type: Application
    Filed: June 27, 2024
    Publication date: October 17, 2024
    Inventors: Yufan XUE, Jie HE, Ye YUAN, Xiaojie LI, Yue GAO
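The segmentation and merging described above pairs each selected video segment with a musical phrase of the background audio. A minimal sketch of that pairing is given below under assumed data structures; the Segment fields and function name are hypothetical, and the patented feature-based segmentation itself is not reproduced.
```python
# Minimal sketch (assumptions, not the patented method): order user video
# segments by posting time and trim each one so its length matches the
# corresponding musical phrase in the background audio.
from dataclasses import dataclass

@dataclass
class Segment:
    post_time: float      # when the source video was posted
    duration: float       # seconds of usable footage in the target segment

def fit_segments_to_phrases(segments: list[Segment],
                            phrase_lengths: list[float]) -> list[float]:
    """Return the playback length chosen for each segment, one per musical phrase."""
    ordered = sorted(segments, key=lambda s: s.post_time)
    fitted = []
    for seg, phrase_len in zip(ordered, phrase_lengths):
        # Use the phrase length when enough footage exists, otherwise all footage.
        fitted.append(min(seg.duration, phrase_len))
    return fitted

# Example: three segments and three phrases of 2.0 s, 3.5 s and 4.0 s.
segments = [Segment(3.0, 5.0), Segment(1.0, 2.5), Segment(2.0, 4.0)]
print(fit_segments_to_phrases(segments, [2.0, 3.5, 4.0]))  # [2.0, 3.5, 4.0]
```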
  • Patent number: 12112731
    Abstract: The present application relates to the technical field of computers, and discloses a method and apparatus for generating a music file, and an electronic device and a storage medium. The method for generating a music file comprises: obtaining a first image; performing feature extraction on the first image to obtain a salient feature of the first image; mapping the salient feature to a musical instrument digital interface (MIDI) information coordinate system on the basis of the position of the salient feature in the first image, so as to determine MIDI information corresponding to the salient feature, the MIDI information coordinate system being used for indicating a correspondence between MIDI information and time; and generating a music file on the basis of the correspondence between the MIDI information and the time.
    Type: Grant
    Filed: December 19, 2023
    Date of Patent: October 8, 2024
    Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
    Inventors: Yufan Xue, Guanjun Guo, Xin Yuan, Yuezhao Chen, Hao Huang, Na Li, Xubin Zhou
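The core of this abstract is a coordinate mapping: a salient feature's horizontal position becomes a time and its vertical position becomes a pitch in a MIDI-style coordinate system. The sketch below illustrates that mapping under simplifying assumptions; brightness thresholding stands in for real feature extraction, and all parameter values are invented for the example.
```python
# Minimal sketch of a position-to-MIDI mapping (illustrative only): treat
# bright pixels as "salient features", map the x coordinate to time and the
# y coordinate to a MIDI pitch.
import numpy as np

def image_to_notes(image: np.ndarray, total_seconds: float = 8.0,
                   low_pitch: int = 48, high_pitch: int = 84,
                   threshold: float = 0.8) -> list[tuple[float, int]]:
    """Return (start_time, midi_pitch) pairs for salient pixels of a grayscale image."""
    h, w = image.shape
    ys, xs = np.nonzero(image >= threshold)            # crude saliency: bright pixels
    notes = []
    for x, y in zip(xs, ys):
        start = (x / max(w - 1, 1)) * total_seconds    # horizontal position -> time
        pitch = high_pitch - round((y / max(h - 1, 1)) * (high_pitch - low_pitch))
        notes.append((float(start), int(pitch)))
    return sorted(notes)

# Toy example: a diagonal line of bright pixels becomes an ascending run of notes.
img = np.zeros((32, 32))
for i in range(32):
    img[31 - i, i] = 1.0                               # bottom-left to top-right
print(image_to_notes(img)[:3])
```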
  • Publication number: 20240281200
    Abstract: The embodiments of the present disclosure relate to the technical field of computer processing. Provided are a method and device for playing a sound effect of music, an electronic device, a computer-readable storage medium, a computer program product and a computer program. The method comprises: receiving a first operation instruction from a first interface, wherein the first interface comprises an interface for playing music in a music player; and in response to the first operation instruction, playing a sound effect of a piece of target music, wherein the sound effect comprises an associated audio sound effect and visual sound effect. By means of the embodiments of the present disclosure, when a user plays music, an audio sound effect and a visual sound effect associated with the music can be provided to the user, thereby increasing the diversity of music playback.
    Type: Application
    Filed: August 22, 2022
    Publication date: August 22, 2024
    Inventors: Yipeng HUANG, Chaopeng LIU, Xiaoyu ZHU, Dailong CHEN, Yufan XUE, Hao HUANG, Xuzhou YE
  • Publication number: 20240127777
    Abstract: The present application relates to the technical field of computers, and discloses a method and apparatus for generating a music file, and an electronic device and a storage medium. The method for generating a music file comprises: obtaining a first image; performing feature extraction on the first image to obtain a salient feature of the first image; mapping the salient feature to a musical instrument digital interface (MIDI) information coordinate system on the basis of the position of the salient feature in the first image, so as to determine MIDI information corresponding to the salient feature, the MIDI information coordinate system being used for indicating a correspondence between MIDI information and time; and generating a music file on the basis of the correspondence between the MIDI information and the time.
    Type: Application
    Filed: December 19, 2023
    Publication date: April 18, 2024
    Inventors: Yufan XUE, Guanjun GUO, Xin YUAN, Yuezhao CHEN, Hao HUANG, Na LI, Xubin ZHOU
  • Publication number: 20240119919
    Abstract: Embodiments of the present disclosure provide a method and a device for music play. The method comprises: receiving a first operation instruction in a target application for playing music; in response to the first operation instruction, presenting a first interface of the target application, the first interface including an operation control for enhancing the play effect of the music through at least one processing operation, the processing operation representing the music content in a way beyond sound alone; receiving a second operation instruction for the operation control; and processing the music based on the second operation instruction while the music is playing.
    Type: Application
    Filed: December 20, 2023
    Publication date: April 11, 2024
    Inventors: Mengfei Xie, Yufan Xue, Wei Hua, Xiaoyu Zhu, Dailong Chen, Jia Ding, Zoujie He, Jie Weng, Chaopeng Liu, Bowen Yang
  • Patent number: 11436481
    Abstract: A method for natural language processing includes receiving, by one or more processors, an unstructured text input. An entity classifier is used to identify entities in the unstructured text input. Identifying the entities includes generating, using a plurality of sub-classifiers of a hierarchical neural network classifier of the entity classifier, a plurality of lower-level entity identifications associated with the unstructured text input. Identifying the entities further includes generating, using a combiner of the hierarchical neural network classifier, a plurality of higher-level entity identifications associated with the unstructured text input based on the plurality of lower-level entity identifications. Identified entities are provided based on the plurality of higher-level entity identifications.
    Type: Grant
    Filed: September 18, 2018
    Date of Patent: September 6, 2022
    Assignee: SALESFORCE.COM, INC.
    Inventors: Govardana Sachithanandam Ramachandran, Michael Machado, Shashank Harinath, Linwei Zhu, Yufan Xue, Abhishek Sharma, Jean-Marc Soumet, Bryan McCann
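The hierarchical classifier described above combines lower-level entity scores from several sub-classifiers into higher-level entity identifications. A minimal PyTorch sketch of that structure is given below; the layer sizes, the number of sub-classifiers and the use of plain linear layers are assumptions for illustration, not the patented architecture.
```python
# Minimal sketch (hypothetical shapes and names): several sub-classifiers emit
# lower-level entity scores over a text encoding, and a combiner turns those
# scores into higher-level entity identifications.
import torch
import torch.nn as nn

class HierarchicalEntityClassifier(nn.Module):
    def __init__(self, encoding_dim=128, n_low=6, n_high=3, n_sub=4):
        super().__init__()
        # Each sub-classifier scores every token for the lower-level entity types.
        self.sub_classifiers = nn.ModuleList(
            [nn.Linear(encoding_dim, n_low) for _ in range(n_sub)]
        )
        # The combiner maps concatenated lower-level scores to higher-level types.
        self.combiner = nn.Linear(n_low * n_sub, n_high)

    def forward(self, token_encodings: torch.Tensor) -> torch.Tensor:
        # token_encodings: (batch, seq_len, encoding_dim) from any text encoder.
        low = torch.cat([sub(token_encodings) for sub in self.sub_classifiers], dim=-1)
        high = self.combiner(low)                    # higher-level entity logits
        return high.softmax(dim=-1)

# Usage with random encodings standing in for an encoded unstructured text input:
model = HierarchicalEntityClassifier()
probs = model(torch.randn(2, 10, 128))               # -> shape (2, 10, 3)
```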
  • Patent number: 11232267
    Abstract: A method and apparatus include receiving a first sentence including a first set of words, and a second sentence including a second set of words. A first set of vectors corresponding to the first set of words of the first sentence, and a second set of vectors corresponding to the second set of words of the second sentence are generated using a word embedding model. A similarity matrix based on the first set of vectors and the second set of vectors is generated. An alignment score associated with the first set of vectors and the second set of vectors is determined using the similarity matrix. The alignment score is transmitted to permit information retrieval based on a similarity between the first sentence and the second sentence.
    Type: Grant
    Filed: May 24, 2019
    Date of Patent: January 25, 2022
    Assignee: TENCENT AMERICA LLC
    Inventors: Lianyi Han, Yaliang Li, Zhen Qian, Yusheng Xie, Yufan Xue, Tao Yang, Wei Fan
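This abstract describes a word-level similarity matrix reduced to a single alignment score. A minimal NumPy sketch of one such reduction is shown below; the choice of cosine similarity and of a best-match average is an assumption for illustration, as the abstract does not specify how the score is computed.
```python
# Minimal sketch (illustrative only): build a cosine similarity matrix between
# the two sentences' word vectors and reduce it to one alignment score by
# averaging each word's best match.
import numpy as np

def similarity_matrix(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Cosine similarity between every word vector in sentence A and sentence B."""
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_norm @ b_norm.T                         # shape: (len_a, len_b)

def alignment_score(a: np.ndarray, b: np.ndarray) -> float:
    """Average of each word's best alignment, symmetrised over both sentences."""
    sim = similarity_matrix(a, b)
    return float((sim.max(axis=1).mean() + sim.max(axis=0).mean()) / 2)

# Word vectors would come from a word embedding model; random vectors stand in here.
rng = np.random.default_rng(0)
sent_a, sent_b = rng.normal(size=(5, 50)), rng.normal(size=(7, 50))
print(alignment_score(sent_a, sent_b))
```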
  • Publication number: 20200372117
    Abstract: A method and apparatus include receiving a first sentence including a first set of words, and a second sentence including a second set of words. A first set of vectors corresponding to the first set of words of the first sentence, and a second set of vectors corresponding to the second set of words of the second sentence are generated using a word embedding model. A similarity matrix based on the first set of vectors and the second set of vectors is generated. An alignment score associated with the first set of vectors and the second set of vectors is determined using the similarity matrix. The alignment score is transmitted to permit information retrieval based on a similarity between the first sentence and the second sentence.
    Type: Application
    Filed: May 24, 2019
    Publication date: November 26, 2020
    Applicant: TENCENT AMERICA LLC
    Inventors: Lianyi Han, Yaliang Li, Zhen Qian, Yusheng Xie, Yufan Xue, Tao Yang, Wei Fan
  • Publication number: 20200090033
    Abstract: A method for natural language processing includes receiving, by one or more processors, an unstructured text input. An entity classifier is used to identify entities in the unstructured text input. Identifying the entities includes generating, using a plurality of sub-classifiers of a hierarchical neural network classifier of the entity classifier, a plurality of lower-level entity identifications associated with the unstructured text input. Identifying the entities further includes generating, using a combiner of the hierarchical neural network classifier, a plurality of higher-level entity identifications associated with the unstructured text input based on the plurality of lower-level entity identifications. Identified entities are provided based on the plurality of higher-level entity identifications.
    Type: Application
    Filed: September 18, 2018
    Publication date: March 19, 2020
    Inventors: Govardana Sachithanandam RAMACHANDRAN, Michael MACHADO, Shashank HARINATH, Linwei ZHU, Yufan XUE, Abhishek SHARMA, Jean-Marc SOUMET, Bryan MCCANN