Patents by Inventor Siwei Fu

Siwei Fu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11282508
    Abstract: A computer-implemented method and system for processing an audio signal. The method includes the steps of extracting prosodic features from the audio signal, aligning the extracted prosodic features with a script derived from or associated with the audio signal, and segmenting the script with the aligned extracted prosodic features into structural blocks of a first type. The method may further include determining a distance measure between a structural block of a first type derived from the script and another structural block of the first type using, for example, the Damerau-Levenshtein distance.
    Type: Grant
    Filed: December 9, 2019
    Date of Patent: March 22, 2022
    Assignee: Blue Planet Training, Inc.
    Inventors: Huamin Qu, Yuanzhe Chen, Siwei Fu, Linping Yuan, Aoyu Wu
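    As background to the abstract above: the Damerau-Levenshtein distance it cites is a standard edit distance that also counts transpositions of adjacent elements. A minimal sketch of the optimal-string-alignment variant, operating on token sequences (the function name and sequence representation are illustrative, not taken from the patent):

    ```python
    def damerau_levenshtein(a: list, b: list) -> int:
        """Optimal-string-alignment Damerau-Levenshtein distance between
        two sequences (e.g. structural blocks treated as token lists)."""
        m, n = len(a), len(b)
        # d[i][j] = distance between the prefixes a[:i] and b[:j]
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i][j] = min(
                    d[i - 1][j] + 1,         # deletion
                    d[i][j - 1] + 1,         # insertion
                    d[i - 1][j - 1] + cost,  # substitution
                )
                if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                        and a[i - 2] == b[j - 1]):
                    # transposition of two adjacent elements
                    d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
        return d[m][n]
    ```

    For example, `damerau_levenshtein(list("ab"), list("ba"))` is 1, since the swap counts as a single transposition rather than two substitutions.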
  • Patent number: 11086916
    Abstract: A method of analyzing conversational messages may be provided. The method may include receiving a query defining a timespan of messages, retrieving at least two conversational messages associated with the defined timespan from a plurality of interleaved messages, de-threading the at least two conversational messages to identify at least one conversational thread, and generating a visualization of conversational threads based on the defined timespan, the at least one conversational thread, the visualization organized into time intervals based on the defined timespan.
    Type: Grant
    Filed: December 29, 2017
    Date of Patent: August 10, 2021
    Assignee: FUJIFILM BUSINESS INNOVATION CORP.
    Inventors: Jian Zhao, Siwei Fu
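    The de-threading step described above can be sketched under the simplifying assumption that each message carries an explicit reply-to reference; the field names (`id`, `reply_to`) are hypothetical and the patent's actual method may infer thread membership differently:

    ```python
    from collections import defaultdict

    def dethread(messages):
        """Group interleaved messages into conversational threads by
        following reply-to links back to each thread's root message.
        Messages are assumed to arrive in chronological order."""
        root_of = {}  # message id -> root id of its thread

        def find_root(mid):
            while mid in root_of:
                mid = root_of[mid]
            return mid

        threads = defaultdict(list)
        for msg in messages:
            parent = msg.get("reply_to")
            if parent is not None:
                root_of[msg["id"]] = find_root(parent)
            threads[find_root(msg["id"])].append(msg)
        return list(threads.values())
    ```

    Running this on an interleaved sequence separates replies into their parent threads, e.g. messages replying (directly or transitively) to message 1 land in one thread while unrelated messages start their own.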
  • Patent number: 10762116
    Abstract: A method of analyzing conversational messages may be provided. The method may include receiving a query defining a timespan of messages, retrieving at least one conversational message associated with the defined timespan from a plurality of interleaved messages, retrieving at least one message author associated with the defined timespan from a plurality of authors associated with the plurality of interleaved messages, and generating a visualization of conversational threads based on the defined timespan, the at least one conversational message and the at least one message author, the visualization organized into time intervals based on the defined timespan.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: September 1, 2020
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Jian Zhao, Siwei Fu
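    The two data-handling steps in the abstract above, retrieving messages for a defined timespan and organizing them into time intervals, can be sketched minimally as follows; the data layout and names are assumptions for illustration, not the patented implementation:

    ```python
    from datetime import datetime, timedelta

    def messages_in_timespan(messages, start, end):
        """Retrieve messages whose timestamp falls in [start, end)."""
        return [m for m in messages if start <= m["time"] < end]

    def bin_by_interval(messages, start, interval):
        """Organize messages into fixed-width time intervals
        (e.g. one bin per hour) for a timeline visualization."""
        bins = {}
        for m in messages:
            idx = int((m["time"] - start) / interval)
            bins.setdefault(idx, []).append(m)
        return bins
    ```

    A visualization layer could then render each bin as one column of the timeline, with authors distinguished within a column.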
  • Publication number: 20200273450
    Abstract: A computer-implemented method and system for processing an audio signal. The method includes the steps of extracting prosodic features from the audio signal, aligning the extracted prosodic features with a script derived from or associated with the audio signal, and segmenting the script with the aligned extracted prosodic features into structural blocks of a first type. The method may further include determining a distance measure between a structural block of a first type derived from the script and another structural block of the first type using, for example, the Damerau-Levenshtein distance.
    Type: Application
    Filed: December 9, 2019
    Publication date: August 27, 2020
    Inventors: Huamin Qu, Yuanzhe Chen, Siwei Fu, Linping Yuan, Aoyu Wu
  • Patent number: 10616626
    Abstract: The present teaching relates to analyzing user activities related to a video. The video is provided to a plurality of users. The plurality of users is monitored to detect one or more types of user activities performed in time with respect to different portions of the video. One or more visual representations of the monitored one or more types of user activities are generated. The one or more visual representations capture a level of attention paid by the plurality of users to the different portions of the video at any time instance. Interests of at least some of the plurality of users are determined with respect to the different portions of the video based on the one or more visual representations.
    Type: Grant
    Filed: March 7, 2016
    Date of Patent: April 7, 2020
    Assignee: THE HONG KONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Huamin Qu, Conglei Shi, Siwei Fu, Qing Chen
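    The "level of attention paid ... to the different portions of the video" described above can be illustrated as a simple event-count profile over equal-length video segments; this is a sketch under assumed inputs (playback positions of detected activities), not the patented visualization:

    ```python
    def attention_profile(events, video_length, num_segments):
        """Count user-activity events per equal-length video segment,
        yielding a per-segment attention level suitable for plotting
        as a histogram or heat strip along the video timeline."""
        seg_len = video_length / num_segments
        counts = [0] * num_segments
        for t in events:  # t: playback position (seconds) of an activity
            # clamp t == video_length into the final segment
            idx = min(int(t / seg_len), num_segments - 1)
            counts[idx] += 1
        return counts
    ```

    Segments with high counts mark the portions of the video that drew the most user activity; comparing profiles across activity types (pauses, rewinds, comments) would distinguish different kinds of interest.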
  • Publication number: 20190205462
    Abstract: A method of analyzing conversational messages may be provided. The method may include receiving a query defining a timespan of messages, retrieving at least one conversational message associated with the defined timespan from a plurality of interleaved messages, retrieving at least one message author associated with the defined timespan from a plurality of authors associated with the plurality of interleaved messages, and generating a visualization of conversational threads based on the defined timespan, the at least one conversational message and the at least one message author, the visualization organized into time intervals based on the defined timespan.
    Type: Application
    Filed: December 28, 2017
    Publication date: July 4, 2019
    Inventors: Jian Zhao, Siwei Fu
  • Publication number: 20190205464
    Abstract: A method of analyzing conversational messages may be provided. The method may include receiving a query defining a timespan of messages, retrieving at least two conversational messages associated with the defined timespan from a plurality of interleaved messages, de-threading the at least two conversational messages to identify at least one conversational thread, and generating a visualization of conversational threads based on the defined timespan, the at least one conversational thread, the visualization organized into time intervals based on the defined timespan.
    Type: Application
    Filed: December 29, 2017
    Publication date: July 4, 2019
    Inventors: Jian Zhao, Siwei Fu
  • Publication number: 20160295260
    Abstract: The present teaching relates to analyzing user activities related to a video. The video is provided to a plurality of users. The plurality of users is monitored to detect one or more types of user activities performed in time with respect to different portions of the video. One or more visual representations of the monitored one or more types of user activities are generated. The one or more visual representations capture a level of attention paid by the plurality of users to the different portions of the video at any time instance. Interests of at least some of the plurality of users are determined with respect to the different portions of the video based on the one or more visual representations.
    Type: Application
    Filed: March 7, 2016
    Publication date: October 6, 2016
    Inventors: Huamin Qu, Conglei Shi, Siwei Fu, Qing Chen