Patents by Inventor Changsong Liu

Changsong Liu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220254343
    Abstract: The present teaching relates to method, system, medium, and implementations for enabling communication with a user. Information representing the surroundings of a user to be engaged in a new dialogue is received via the communication platform, wherein the information is acquired from a scene in which the user is present and captures characteristics of the user and the scene. Relevant features are extracted from the information. A state of the user is estimated based on the relevant features, and a dialogue context surrounding the scene is determined based on the relevant features. A topic for the new dialogue is determined based on the user, and feedback is generated to initiate the new dialogue with the user based on the topic, the state of the user, and the dialogue context.
    Type: Application
    Filed: January 10, 2022
    Publication date: August 11, 2022
    Inventors: Changsong Liu, Rui Fang
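
The pipeline this abstract describes (scene capture, feature extraction, user-state estimation, topic selection, opening feedback) can be sketched roughly as follows. All class and function names here are illustrative assumptions, not the patent's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class SceneInfo:
    # Raw observations of the user and scene (hypothetical fields).
    user_expression: str   # e.g. "smiling", "neutral"
    scene_objects: list    # e.g. ["ball", "book"]

def extract_features(info: SceneInfo) -> dict:
    # Pull out the features relevant to starting a dialogue.
    return {"expression": info.user_expression, "objects": info.scene_objects}

def estimate_user_state(features: dict) -> str:
    # Very crude stand-in for the claimed user-state estimator.
    return "engaged" if features["expression"] == "smiling" else "idle"

def pick_topic(features: dict) -> str:
    # Choose a topic grounded in the observed scene context.
    return features["objects"][0] if features["objects"] else "greeting"

def initiate_dialogue(info: SceneInfo) -> str:
    features = extract_features(info)
    state = estimate_user_state(features)
    topic = pick_topic(features)
    # Feedback that opens the new dialogue, conditioned on topic and state.
    return f"[user {state}] Let's talk about the {topic}!"

print(initiate_dialogue(SceneInfo("smiling", ["ball", "book"])))
# → [user engaged] Let's talk about the ball!
```

The point of the sketch is the data flow: the opening utterance is conditioned jointly on the topic, the estimated user state, and the scene-derived context, rather than on any one of them alone.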
  • Patent number: 11266913
    Abstract: The present application discloses a method for synchronously displaying game content at a terminal device. The method includes: detecting a first operation instruction on a first client of a game application while a round of the game application is run on the first client, the accounts participating in the round including a first account and a second account, and determining a first operation that corresponds to the first operation instruction and is performed by a first operation object corresponding to the first account in the round; determining first content that needs to be displayed on the first client when the first operation object performs the first operation, and obtaining second content to be displayed simultaneously with the first content, the second content being content that needs to be displayed on a second client corresponding to the second account; and simultaneously displaying the first content and the second content on the first client.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: March 8, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Ronghua Kong, Changsong Liu
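
As a rough illustration of the claimed flow (detect an operation on the first client, derive the first account's content, fetch the second account's concurrently displayed content, render both together), here is a minimal sketch; the content tables and function names are invented for demonstration only.

```python
# Hypothetical content tables keyed by operation; not from the patent.
FIRST_CLIENT_CONTENT = {"attack": "attack animation"}
SECOND_CLIENT_CONTENT = {"attack": "defend animation"}

def handle_operation(op: str) -> tuple:
    # Content the first client shows for its own account's operation.
    first = FIRST_CLIENT_CONTENT[op]
    # Content the second client would show, fetched for simultaneous display.
    second = SECOND_CLIENT_CONTENT[op]
    return first, second

def display_simultaneously(op: str) -> str:
    first, second = handle_operation(op)
    # Both views are rendered on the first client in the same frame.
    return f"first client shows: {first} | {second}"

print(display_simultaneously("attack"))
# → first client shows: attack animation | defend animation
```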
  • Publication number: 20220020360
    Abstract: The present teaching relates to method, system, medium, and implementations for user machine dialogue. An instruction is received by an agent device for rendering a communication directed to a user involved in a dialogue in a dialogue scene and is used to render the communication. A first representation of the agent's mindset is updated accordingly after the rendering. Input data are received in one or more media types, capturing a response from the user and information surrounding the dialogue scene, and a second representation of the user's mindset is updated based on the response from the user and the information surrounding the dialogue scene. A next communication to the user is then determined based on the first representation of the agent's mindset and the second representation of the user's mindset.
    Type: Application
    Filed: May 28, 2021
    Publication date: January 20, 2022
    Inventors: Rui Fang, Changsong Liu
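
The turn loop described in this abstract (render the agent's communication, update the agent-mindset representation, observe the user's response and scene, update the user-mindset representation, then decide the next communication) could look something like the following sketch. The mindset representations and the decision rule are invented placeholders.

```python
def update_agent_mindset(agent_mindset: dict, rendered: str) -> dict:
    # After rendering, record what the agent has already communicated.
    agent_mindset = dict(agent_mindset)
    agent_mindset["said"] = agent_mindset.get("said", []) + [rendered]
    return agent_mindset

def update_user_mindset(user_mindset: dict, response: str, scene: dict) -> dict:
    # Fold the user's response and scene information into the user model.
    user_mindset = dict(user_mindset)
    user_mindset["last_response"] = response
    user_mindset["attention"] = scene.get("gaze", "unknown")
    return user_mindset

def next_communication(agent_mindset: dict, user_mindset: dict) -> str:
    # Toy decision rule conditioned on both mindset representations.
    if user_mindset.get("attention") == "away":
        return "Are you still with me?"
    return "Great, let's continue."

agent = update_agent_mindset({}, "Hello!")
user = update_user_mindset({}, "hi", {"gaze": "toward agent"})
print(next_communication(agent, user))
# → Great, let's continue.
```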
  • Patent number: 11222632
    Abstract: The present teaching relates to method, system, medium, and implementations for enabling communication with a user. Information representing the surroundings of a user to be engaged in a new dialogue is received via the communication platform, wherein the information is acquired from a scene in which the user is present and captures characteristics of the user and the scene. Relevant features are extracted from the information. A state of the user is estimated based on the relevant features, and a dialogue context surrounding the scene is determined based on the relevant features. A topic for the new dialogue is determined based on the user, and feedback is generated to initiate the new dialogue with the user based on the topic, the state of the user, and the dialogue context.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: January 11, 2022
    Assignee: DMAI, INC.
    Inventors: Changsong Liu, Rui Fang
  • Patent number: 11024294
    Abstract: The present teaching relates to method, system, medium, and implementations for user machine dialogue. An instruction is received by an agent device for rendering a communication directed to a user involved in a dialogue in a dialogue scene and is used to render the communication. A first representation of the agent's mindset is updated accordingly after the rendering. Input data are received in one or more media types, capturing a response from the user and information surrounding the dialogue scene, and a second representation of the user's mindset is updated based on the response from the user and the information surrounding the dialogue scene. A next communication to the user is then determined based on the first representation of the agent's mindset and the second representation of the user's mindset.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: June 1, 2021
    Assignee: DMAI, INC.
    Inventors: Rui Fang, Changsong Liu
  • Patent number: 11003860
    Abstract: The present teaching relates to method, system, medium, and implementations for user machine dialogue. Historic dialogue data related to past dialogues are accessed and used to learn, via machine learning, expected utilities. During a dialogue involving a user and a machine agent, a representation of a shared mindset between the user and the agent is obtained to characterize the current state of the dialogue, which is then used to update the expected utilities. Continuous expected utility functions are then generated based on the updated expected utilities, wherein the continuous expected utility functions are to be used in determining how to conduct a dialogue with a user.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: May 11, 2021
    Assignee: DMAI, INC.
    Inventors: Rui Fang, Changsong Liu
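
The learning-and-selection loop in this abstract (learn expected utilities from historic dialogues, update them with evidence from the ongoing dialogue's shared-mindset state, then act on the result) might be sketched as below. The tabular averaging and the greedy selection rule are simplifying assumptions; the patent's "continuous expected utility functions" are not reproduced here.

```python
def learn_expected_utilities(history):
    # Average observed reward per (state, action) pair from historic dialogues.
    totals, counts = {}, {}
    for state, action, reward in history:
        key = (state, action)
        totals[key] = totals.get(key, 0.0) + reward
        counts[key] = counts.get(key, 0) + 1
    return {k: totals[k] / counts[k] for k in totals}

def update_with_current_state(utilities, state, action, reward, lr=0.5):
    # Blend in evidence observed during the current dialogue.
    utilities = dict(utilities)
    key = (state, action)
    old = utilities.get(key, 0.0)
    utilities[key] = old + lr * (reward - old)
    return utilities

def best_action(utilities, state, actions):
    # Greedy selection under the (updated) expected utilities.
    return max(actions, key=lambda a: utilities.get((state, a), 0.0))

history = [("confused", "rephrase", 1.0), ("confused", "repeat", 0.2),
           ("confused", "rephrase", 0.8)]
u = learn_expected_utilities(history)
u = update_with_current_state(u, "confused", "repeat", 0.1)
print(best_action(u, "confused", ["rephrase", "repeat"]))  # → rephrase
```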
  • Publication number: 20200155946
    Abstract: The present application discloses a method for synchronously displaying game content at a terminal device. The method includes: detecting a first operation instruction on a first client of a game application while a round of the game application is run on the first client, the accounts participating in the round including a first account and a second account, and determining a first operation that corresponds to the first operation instruction and is performed by a first operation object corresponding to the first account in the round; determining first content that needs to be displayed on the first client when the first operation object performs the first operation, and obtaining second content to be displayed simultaneously with the first content, the second content being content that needs to be displayed on a second client corresponding to the second account; and simultaneously displaying the first content and the second content on the first client.
    Type: Application
    Filed: January 22, 2020
    Publication date: May 21, 2020
    Inventors: Ronghua KONG, Changsong LIU
  • Publication number: 20190205390
    Abstract: The present teaching relates to method, system, medium, and implementations for user machine dialogue. Historic dialogue data related to past dialogues are accessed and used to learn, via machine learning, expected utilities. During a dialogue involving a user and a machine agent, a representation of a shared mindset between the user and the agent is obtained to characterize the current state of the dialogue, which is then used to update the expected utilities. Continuous expected utility functions are then generated based on the updated expected utilities, wherein the continuous expected utility functions are to be used in determining how to conduct a dialogue with a user.
    Type: Application
    Filed: December 27, 2018
    Publication date: July 4, 2019
    Inventors: Rui Fang, Changsong Liu
  • Publication number: 20190206393
    Abstract: The present teaching relates to method, system, medium, and implementations for user machine dialogue. An instruction is received by an agent device for rendering a communication directed to a user involved in a dialogue in a dialogue scene and is used to render the communication. A first representation of the agent's mindset is updated accordingly after the rendering. Input data are received in one or more media types, capturing a response from the user and information surrounding the dialogue scene, and a second representation of the user's mindset is updated based on the response from the user and the information surrounding the dialogue scene. A next communication to the user is then determined based on the first representation of the agent's mindset and the second representation of the user's mindset.
    Type: Application
    Filed: December 27, 2018
    Publication date: July 4, 2019
    Inventors: Rui Fang, Changsong Liu
  • Publication number: 20190206402
    Abstract: The present teaching relates to method, system, medium, and implementations for an automated dialogue companion. Multimodal input data associated with a user engaged in a dialogue of a certain topic in a dialogue scene are first received and used to extract features representing a state of the user and relevant information associated with the dialogue scene. A current state of the dialogue characterizing the context of the dialogue is generated based on the state of the user and the relevant information associated with the dialogue scene. A response communication for the user is determined based on a dialogue tree corresponding to the dialogue of the certain topic, the current state of the dialogue, and utilities learned based on historic dialogue data and the current state of the dialogue.
    Type: Application
    Filed: December 27, 2018
    Publication date: July 4, 2019
    Inventors: Nishant Shukla, Rui Fang, Changsong Liu
  • Publication number: 20190206401
    Abstract: The present teaching relates to method, system, medium, and implementations for enabling communication with a user. Information representing the surroundings of a user to be engaged in a new dialogue is received via the communication platform, wherein the information is acquired from a scene in which the user is present and captures characteristics of the user and the scene. Relevant features are extracted from the information. A state of the user is estimated based on the relevant features, and a dialogue context surrounding the scene is determined based on the relevant features. A topic for the new dialogue is determined based on the user, and feedback is generated to initiate the new dialogue with the user based on the topic, the state of the user, and the dialogue context.
    Type: Application
    Filed: December 27, 2018
    Publication date: July 4, 2019
    Inventors: Changsong Liu, Rui Fang
  • Publication number: 20120299701
    Abstract: An apparatus and method for receiving a first user input comprising a first set of strokes; causing a representation of the first set of strokes to be displayed; whilst the representation of the first set of strokes is displayed, receiving a second user input comprising a second set of strokes; causing a representation of each of the second set of strokes to be displayed as it is received, the representation of the second set of strokes at least partially overlapping the representation of the first set of strokes; resolving the first user input into a first character; and resolving the second user input into a second character.
    Type: Application
    Filed: December 30, 2009
    Publication date: November 29, 2012
    Applicant: NOKIA CORPORATION
    Inventors: Yanming Zou, Xiaohui Xie, Changsong Liu, Yan Chen
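
The core idea of this entry, two stroke sets drawn on top of each other in the same screen region but each resolved into its own character, can be illustrated with a toy recognizer. The stroke-count lookup below is purely an assumption for demonstration; a real system would classify stroke geometry.

```python
# Toy recognizer: maps a stroke count to a character.
# Purely illustrative; not the patent's recognition method.
STROKE_COUNT_TO_CHAR = {1: "一", 2: "二", 3: "三"}

def resolve(strokes: list) -> str:
    return STROKE_COUNT_TO_CHAR.get(len(strokes), "?")

def handle_overlapping_input(first_strokes: list, second_strokes: list):
    # Both sets share the same screen region, so their representations
    # overlap visually, but each set is resolved independently.
    display = first_strokes + second_strokes   # what is drawn on screen
    first_char = resolve(first_strokes)
    second_char = resolve(second_strokes)
    return display, first_char, second_char

_, a, b = handle_overlapping_input([("s1",)], [("s1",), ("s2",)])
print(a, b)  # → 一 二
```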
  • Publication number: 20090135188
    Abstract: A method and a system of live detection based on physiological motion of a human face are provided. The method has the following steps. In step a, a motion area and at least one motion direction within the visual angle of a system camera are detected and a detected facial region is found. In step b, it is determined whether a valid facial motion exists in the detected facial region. If no valid facial motion exists, the object is considered a photo of a human face; otherwise, the method proceeds to step c to determine whether the facial motion is a physiological motion. If it is not, the object is considered a photo of a human face; otherwise, it is considered a real human face. The present invention distinguishes a real human face from a photo of a human face so as to increase the reliability of a face recognition system.
    Type: Application
    Filed: May 30, 2008
    Publication date: May 28, 2009
    Applicant: TSINGHUA UNIVERSITY
    Inventors: Xiaoqing Ding, Liting Wang, Chi Fang, Changsong Liu, Liangrui Peng
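
A minimal version of the motion test in steps a-b (does the detected facial region contain enough frame-to-frame change to be a live face rather than a static photo?) could be written with plain frame differencing. The threshold, the frame representation, and the omission of step c's physiological-motion check are all simplifications for illustration.

```python
def frame_difference(prev, curr):
    # Sum of absolute pixel differences between two grayscale frames
    # (frames are lists of rows of intensities).
    return sum(abs(a - b) for row_p, row_c in zip(prev, curr)
               for a, b in zip(row_p, row_c))

def is_live_face(frames, motion_threshold=10):
    # A printed photo held in front of the camera produces almost no
    # independent motion inside the facial region across frames.
    total_motion = sum(frame_difference(f0, f1)
                       for f0, f1 in zip(frames, frames[1:]))
    return total_motion >= motion_threshold

photo = [[[100, 100], [100, 100]]] * 3             # identical frames
live = [[[100, 100], [100, 100]],
        [[100, 120], [100, 100]],                  # eye-blink-like change
        [[100, 100], [100, 100]]]
print(is_live_face(photo), is_live_face(live))  # → False True
```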
  • Patent number: 7174044
    Abstract: In a method for character recognition based on a Gabor filter group, the Gabor filters' joint spatial/spatial-frequency localization and their capability to efficiently extract characters' local structural features are employed to extract, from the character image, information on the stroke directions of characters as the recognition information, so as to improve robustness against noise, backgrounds, brightness variations in images, and character deformation. Using this information, a simple and effective parameter design method is put forward to optimally design the Gabor filters, ensuring preferable recognition performance; a corrected Sigmoid function is used to non-linearly and adaptively process the stroke-direction information output from the Gabor filter group. When extracting features from blocks, a Gaussian filter array is used to process the positive and negative values output from the Gabor filter group, to enhance the discrimination ability of the extracted features.
    Type: Grant
    Filed: May 23, 2003
    Date of Patent: February 6, 2007
    Assignee: Tsinghua University
    Inventors: Xiaoqing Ding, Xuewen Wang, Changsong Liu, Liangrui Peng, Chi Fang
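
The feature pipeline described here (a Gabor filter bank tuned to stroke directions, a sigmoid-style nonlinearity, and separate handling of positive and negative responses) can be sketched in a few lines. The kernel size, the filter parameters, and the nonlinearity constants are all assumptions, and the Gaussian block pooling is omitted.

```python
import math

def gabor_kernel(theta, size=5, sigma=2.0, freq=0.5):
    # Real part of a Gabor filter tuned to stroke direction `theta`.
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)
            env = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            row.append(env * math.cos(2 * math.pi * freq * xr))
        kernel.append(row)
    return kernel

def convolve_at(image, kernel, cy, cx):
    # Filter response at a single pixel, with zero padding at the border.
    half = len(kernel) // 2
    total = 0.0
    for ky, row in enumerate(kernel):
        for kx, w in enumerate(row):
            y, x = cy + ky - half, cx + kx - half
            if 0 <= y < len(image) and 0 <= x < len(image[0]):
                total += w * image[y][x]
    return total

def sigmoid(v, gain=0.1):
    # Stand-in for the "corrected Sigmoid": squashes responses into (-1, 1).
    return 2.0 / (1.0 + math.exp(-gain * v)) - 1.0

def direction_features(image, cy, cx):
    # One squashed response per stroke direction (0°, 45°, 90°, 135°);
    # positive and negative parts are kept separately, as in the abstract.
    feats = []
    for k in range(4):
        r = sigmoid(convolve_at(image, gabor_kernel(k * math.pi / 4), cy, cx))
        feats.append((max(r, 0.0), max(-r, 0.0)))
    return feats

# A vertical stroke in a 7x7 image excites the direction-tuned filters unevenly.
img = [[255 if x == 3 else 0 for x in range(7)] for _ in range(7)]
feats = direction_features(img, 3, 3)
print(len(feats))  # → 4
```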
  • Publication number: 20040017944
    Abstract: In a method for character recognition based on a Gabor filter group, the Gabor filters' joint spatial/spatial-frequency localization and their capability to efficiently extract characters' local structural features are employed to extract, from the character image, information on the stroke directions of characters as the recognition information, so as to improve robustness against noise, backgrounds, brightness variations in images, and character deformation. Using this information, a simple and effective parameter design method is put forward to optimally design the Gabor filters, ensuring preferable recognition performance; a corrected Sigmoid function is used to non-linearly and adaptively process the stroke-direction information output from the Gabor filter group. When extracting features from blocks, a Gaussian filter array is used to process the positive and negative values output from the Gabor filter group, to enhance the discrimination ability of the extracted features.
    Type: Application
    Filed: May 23, 2003
    Publication date: January 29, 2004
    Inventors: Xiaoqing Ding, Xuewen Wang, Changsong Liu, Liangrui Peng, Chi Fang
  • Patent number: D914765
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: March 30, 2021
    Assignee: ZHONGSHAN HENG YI SPORTS EQUIPMENTS CO., LTD
    Inventor: Changsong Liu