Patents by Inventor SHUANGSHUANG QIAO

SHUANGSHUANG QIAO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240086484
    Abstract: Provided are a content search method, apparatus, and device, and a storage medium. The present disclosure enables: receiving search content; and displaying a plurality of answer viewpoints and first contents in a search result interface, wherein each answer viewpoint corresponds to one category of search results, the search results are obtained by searching the search content, the first contents comprise keywords, the keywords indicate reasons for displaying a target answer viewpoint among the plurality of answer viewpoints, and the keywords are extracted from the target category of search results corresponding to the target answer viewpoint.
    Type: Application
    Filed: April 21, 2022
    Publication date: March 14, 2024
    Inventors: Yating LIN, Feng ZHAO, Yanli WANG, Shuangshuang JIANG, Chao QIAO, Fan WU
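The abstract above describes grouping search results into per-category "answer viewpoints" and surfacing keywords that explain why each viewpoint is shown. A minimal sketch of that idea, with hypothetical category labels and a naive frequency-based keyword extractor (the patent does not specify the extraction method):

```python
from collections import Counter
import re

def extract_keywords(texts, top_k=3):
    """Naive keyword extraction: most frequent non-trivial words in one result category."""
    words = re.findall(r"[a-z]+", " ".join(texts).lower())
    stop = {"the", "a", "an", "is", "are", "and", "or", "of", "to", "in", "for"}
    counts = Counter(w for w in words if w not in stop and len(w) > 2)
    return [w for w, _ in counts.most_common(top_k)]

def build_viewpoints(categorized_results):
    """Map each category of search results to an answer viewpoint plus its display-reason keywords."""
    return [
        {"viewpoint": category, "keywords": extract_keywords(texts)}
        for category, texts in categorized_results.items()
    ]

# Hypothetical pre-categorized results for the query "is coffee good for you?"
results = {
    "yes": ["coffee is good for focus", "coffee improves focus and mood"],
    "no":  ["coffee disrupts sleep", "too much coffee disrupts sleep badly"],
}
views = build_viewpoints(results)
```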
  • Publication number: 20230401484
    Abstract: Provided are a data processing method and apparatus, an electronic device, and a storage medium. The data processing method includes acquiring a target directed acyclic graph (DAG) corresponding to the service processing logic of a model self-taught learning service, where the service processing logic includes execution logic for acquiring service data generated by an online released service model, execution logic for training a to-be-trained service model based on the service data, and execution logic for releasing the trained service model online; and performing self-taught learning on the to-be-trained service model according to the target DAG.
    Type: Application
    Filed: December 7, 2022
    Publication date: December 14, 2023
    Inventors: Chao WANG, Xiangyue LIN, Yang LIANG, En SHI, Shuangshuang QIAO
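The abstract describes representing the self-learning service's three execution steps (acquire service data, retrain the model, release it online) as a directed acyclic graph and running them in dependency order. A minimal sketch of such a DAG runner, using Python's standard `graphlib` and hypothetical stand-in tasks:

```python
from graphlib import TopologicalSorter

def run_pipeline(dag, tasks, state=None):
    """Execute pipeline steps in topological (dependency) order over a shared state dict."""
    state = state or {}
    for node in TopologicalSorter(dag).static_order():
        state = tasks[node](state)
    return state

# Hypothetical three-step loop from the abstract: each key lists its predecessors.
dag = {"train": {"acquire"}, "release": {"train"}}
tasks = {
    "acquire": lambda s: {**s, "data": [1, 2, 3]},       # collect data generated by the online model
    "train":   lambda s: {**s, "model": sum(s["data"])}, # retrain on the collected data (toy "model")
    "release": lambda s: {**s, "released": s["model"]},  # put the retrained model back online
}
state = run_pipeline(dag, tasks)
```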
  • Patent number: 11310559
    Abstract: Embodiments of the present disclosure disclose a method and apparatus for recommending a video. A specific implementation of the method includes: finding a recommended video corresponding to a target video from all candidate videos based on similarities of content characteristics of the videos, the target video being a video to be played on a terminal of a user; and sending play information of the recommended video corresponding to the target video to the terminal of the user. Based on content characteristics, the method finds a video whose content is similar to that of the target video the user desires to view, and recommends that video to the user.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: April 19, 2022
    Assignee: BEIJING BAIDU NETCOM SCIENCE & TECHNOLOGY CO., LTD.
    Inventors: Hang Jiang, Minghao Liu, Yang Liang, Shuangshuang Qiao, Siyu An, Kaihua Song, Xiangyue Lin, Hua Chai, Faen Zhang, Jiangliang Guo, Jingbo Huang, Xu Li, Jin Tang, Shiming Yin
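The recommendation described above ranks candidate videos by similarity of content characteristics to the target video. A minimal sketch, assuming videos are already represented as content-feature vectors (the vector names and values below are hypothetical) and using cosine similarity as the measure:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def recommend(target_vec, candidates, top_k=2):
    """Rank candidate videos by content-feature similarity to the target video."""
    ranked = sorted(candidates.items(), key=lambda kv: cosine(target_vec, kv[1]), reverse=True)
    return [vid for vid, _ in ranked[:top_k]]

target = [1.0, 0.0, 1.0]
candidates = {
    "cooking_101": [0.9, 0.1, 0.8],
    "cat_video":   [0.0, 1.0, 0.1],
    "baking_tips": [1.0, 0.0, 0.9],
}
recs = recommend(target, candidates)
```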
  • Patent number: 11282516
    Abstract: Embodiments of the present disclosure provide a human-machine interaction processing method, an apparatus thereof, a user terminal, a processing server and a system. On the user terminal side, the method includes: receiving an interaction request voice input by a user, and collecting video data of the user while the interaction request voice is input; obtaining an interaction response voice corresponding to the interaction request voice, where the interaction response voice is obtained according to expression information of the user captured in the video data while the interaction request voice is input; and outputting the interaction response voice to the user. The method imbues the interaction response voice with an emotional tone that matches the current emotion of the user, so that the human-machine interaction process is no longer monotonous, greatly enhancing the user experience.
    Type: Grant
    Filed: February 18, 2019
    Date of Patent: March 22, 2022
    Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Shuangshuang Qiao, Kun Liu, Yang Liang, Xiangyue Lin, Chao Han, Mingfa Zhu, Jiangliang Guo, Xu Li, Jun Liu, Shuo Li, Shiming Yin
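The method above selects a response voice whose emotional tone matches the expression detected in the user's video. A minimal sketch of the tone-selection step, with hypothetical expression labels and tone names (the patent does not enumerate them):

```python
def respond(request_text, expression):
    """Pick a response tone matching the user's detected facial expression."""
    tones = {"happy": "cheerful", "sad": "soothing", "neutral": "calm"}
    tone = tones.get(expression, "calm")  # fall back to a neutral tone for unknown expressions
    return {"tone": tone, "text": f"[{tone}] Here is the answer to: {request_text}"}

# Expression detection itself (from the collected video data) is out of scope here;
# we assume an upstream classifier produced the label "sad".
reply = respond("what's the weather?", "sad")
```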
  • Patent number: 11138903
    Abstract: The present disclosure provides a method, an apparatus, a device and a system for sign language translation, where a server receives video information sent by a terminal device, and preprocesses the video information to obtain at least one sign language action; the at least one sign language action is input into a sign language model for classification and prediction to obtain a word corresponding to the at least one sign language action; each word is input into a language model to determine whether the intention expression is complete; and each word is sent to the terminal device when the intention expression is complete, so that the terminal device displays each word, thereby realizing the translation of sign language actions into text, enabling hearing persons to better understand the intentions of the hearing impaired and improving the efficiency of communication.
    Type: Grant
    Filed: February 18, 2019
    Date of Patent: October 5, 2021
    Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Xiangyue Lin, Kun Liu, Shuangshuang Qiao, Yang Liang, Chao Han, Mingfa Zhu, Jiangliang Guo, Xu Li, Jun Liu, Shuo Li, Shiming Yin
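The pipeline above classifies each sign-language action into a word, then asks a language model whether the accumulated words form a complete intention before sending them for display. A minimal sketch of that control flow, with hypothetical stand-ins for both models:

```python
def translate_stream(actions, classify, is_complete):
    """Accumulate a word per sign-language action; emit the sentence once the intent is complete."""
    words = []
    for action in actions:
        words.append(classify(action))
        if is_complete(words):
            return " ".join(words)
    return None  # intention never completed within this stream

# Hypothetical stand-ins: a lookup table for the sign-language classifier,
# and a toy completeness check in place of the language model.
classify = {"a1": "I", "a2": "need", "a3": "help"}.get
is_complete = lambda ws: ws[-1] == "help"
sentence = translate_stream(["a1", "a2", "a3"], classify, is_complete)
```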
  • Publication number: 20210012777
    Abstract: Embodiments of the present disclosure provide a context acquiring method based on voice interaction and a device, the method comprising: acquiring a scene image collected by an image collection device at a voice start point of a current conversation, and extracting a face feature of each user in the scene image; if it is determined, according to the face feature of each user and a face database, that there is a second face feature matching a first face feature, acquiring a first user identifier corresponding to the second face feature from the face database; if it is determined that a stored conversation corresponding to the first user identifier exists in a voice database, determining a context of the voice interaction according to the current conversation and the stored conversation; and, after the voice end point of the current conversation is obtained, storing the current conversation into the voice database.
    Type: Application
    Filed: July 23, 2020
    Publication date: January 14, 2021
    Applicant: Beijing Baidu Netcom Science Technology Co., Ltd.
    Inventors: Yang LIANG, Kun LIU, Shuangshuang QIAO, Xiangyue LIN, Chao HAN, Mingfa ZHU, Jiangliang GUO, Xu LI, Jun LIU, Shuo LI, Shiming YIN
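The abstract describes identifying the speaker by matching a face feature against a face database, then joining that user's stored conversation with the current one to form the context. A minimal sketch under those assumptions, with a toy feature-similarity function and hypothetical database contents:

```python
def similarity(a, b):
    """Toy face-feature similarity: 1 minus the mean absolute difference."""
    return 1 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def get_context(face_feature, face_db, voice_db, current, threshold=0.9):
    """Look up the speaker by face; if known, build the context from stored turns plus the current one."""
    for user_id, stored_feature in face_db.items():
        if similarity(face_feature, stored_feature) >= threshold:
            context = voice_db.get(user_id, []) + [current]
            voice_db.setdefault(user_id, []).append(current)  # persist after the voice end point
            return user_id, context
    return None, [current]  # unknown face: no stored context

face_db = {"user42": [0.1, 0.2, 0.3]}
voice_db = {"user42": ["what's on my calendar?"]}
uid, ctx = get_context([0.1, 0.2, 0.31], face_db, voice_db, "move it to 3pm")
```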
  • Publication number: 20200005781
    Abstract: Embodiments of the present disclosure provide a human-machine interaction processing method, an apparatus thereof, a user terminal, a processing server and a system. On the user terminal side, the method includes: receiving an interaction request voice input by a user, and collecting video data of the user while the interaction request voice is input; obtaining an interaction response voice corresponding to the interaction request voice, where the interaction response voice is obtained according to expression information of the user captured in the video data while the interaction request voice is input; and outputting the interaction response voice to the user. The method imbues the interaction response voice with an emotional tone that matches the current emotion of the user, so that the human-machine interaction process is no longer monotonous, greatly enhancing the user experience.
    Type: Application
    Filed: February 18, 2019
    Publication date: January 2, 2020
    Inventors: SHUANGSHUANG QIAO, KUN LIU, YANG LIANG, XIANGYUE LIN, CHAO HAN, MINGFA ZHU, JIANGLIANG GUO, XU LI, JUN LIU, SHUO LI, SHIMING YIN
  • Publication number: 20200005673
    Abstract: The present disclosure provides a method, an apparatus, a device and a system for sign language translation, where a server receives video information sent by a terminal device, and preprocesses the video information to obtain at least one sign language action; the at least one sign language action is input into a sign language model for classification and prediction to obtain a word corresponding to the at least one sign language action; each word is input into a language model to determine whether the intention expression is complete; and each word is sent to the terminal device when the intention expression is complete, so that the terminal device displays each word, thereby realizing the translation of sign language actions into text, enabling hearing persons to better understand the intentions of the hearing impaired and improving the efficiency of communication.
    Type: Application
    Filed: February 18, 2019
    Publication date: January 2, 2020
    Inventors: XIANGYUE LIN, KUN LIU, SHUANGSHUANG QIAO, YANG LIANG, CHAO HAN, MINGFA ZHU, JIANGLIANG GUO, XU LI, JUN LIU, SHUO LI, SHIMING YIN
  • Publication number: 20190253760
    Abstract: Embodiments of the present disclosure disclose a method and apparatus for recommending a video. A specific implementation of the method includes: finding a recommended video corresponding to a target video from all candidate videos based on similarities of content characteristics of the videos, the target video being a video to be played on a terminal of a user; and sending play information of the recommended video corresponding to the target video to the terminal of the user. Based on content characteristics, the method finds a video whose content is similar to that of the target video the user desires to view, and recommends that video to the user.
    Type: Application
    Filed: January 23, 2019
    Publication date: August 15, 2019
    Inventors: Hang JIANG, Minghao LIU, Yang LIANG, Shuangshuang QIAO, Siyu AN, Kaihua SONG, Xiangyue LIN, Hua CHAI, Faen ZHANG, Jiangliang GUO, Jingbo HUANG, Xu LI, Jin TANG, Shiming YIN