Patents by Inventor Changsong Liu
Changsong Liu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220254343
Abstract: The present teaching relates to a method, system, medium, and implementations for enabling communication with a user. Information representing the surroundings of a user to be engaged in a new dialogue is received via the communication platform, wherein the information is acquired from a scene in which the user is present and captures characteristics of the user and the scene. Relevant features are extracted from the information. A state of the user is estimated based on the relevant features, and a dialogue context surrounding the scene is determined based on the relevant features. A topic for the new dialogue is determined based on the user, and feedback is generated to initiate the new dialogue with the user based on the topic, the state of the user, and the dialogue context.
Type: Application
Filed: January 10, 2022
Publication date: August 11, 2022
Inventors: Changsong Liu, Rui Fang
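For illustration, the sketch below walks through the pipeline this abstract describes: extract relevant features from scene information, estimate the user's state, determine the dialogue context, pick a topic, and generate the feedback that opens the dialogue. All class names, feature choices, and rules here are hypothetical; the patent does not prescribe a specific representation.

```python
from dataclasses import dataclass

# Hypothetical scene observation; in practice this would come from the
# cameras/microphones of the communication platform named in the abstract.
@dataclass
class SceneInfo:
    facial_expression: str   # e.g. "smiling", "neutral", "frowning"
    noise_level: float       # ambient noise, 0.0 (quiet) to 1.0 (loud)
    objects_in_scene: list   # salient objects detected around the user

def extract_features(info: SceneInfo) -> dict:
    """Reduce raw scene information to the relevant features."""
    return {
        "expression": info.facial_expression,
        "noisy": info.noise_level > 0.5,
        "objects": info.objects_in_scene,
    }

def estimate_user_state(features: dict) -> str:
    """Estimate a coarse user state from the extracted features."""
    return {"smiling": "engaged", "frowning": "frustrated"}.get(
        features["expression"], "neutral")

def determine_context(features: dict) -> str:
    """Characterize the dialogue context surrounding the scene."""
    return "busy environment" if features["noisy"] else "quiet environment"

def choose_topic(features: dict) -> str:
    """Pick a topic for the new dialogue based on what surrounds the user."""
    return features["objects"][0] if features["objects"] else "general chat"

def initiate_dialogue(info: SceneInfo) -> str:
    """Generate the feedback that initiates the new dialogue."""
    features = extract_features(info)
    state = estimate_user_state(features)
    context = determine_context(features)
    topic = choose_topic(features)
    return f"(user is {state}, {context}) Hi! Shall we talk about {topic}?"

print(initiate_dialogue(SceneInfo("smiling", 0.2, ["a book"])))
```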
-
Patent number: 11266913
Abstract: The present application discloses a method for synchronously displaying game content at a terminal device. The method includes: detecting a first operation instruction on a first client of a game application when a round of the game application is run on the first client, the accounts participating in the round of the game including a first account and a second account, and determining a first operation that corresponds to the first operation instruction and is performed by a first operation object corresponding to the first account in the round of the game; determining first content that needs to be displayed on the first client when the first operation object performs the first operation, and obtaining second content to be simultaneously displayed with the first content, the second content being content that needs to be displayed on the second client; and simultaneously displaying the first content and the second content on the first client.
Type: Grant
Filed: January 22, 2020
Date of Patent: March 8, 2022
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Ronghua Kong, Changsong Liu
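The heavily simplified sketch below illustrates the flow the abstract describes on the first client: resolve an operation instruction into the first content, obtain the second content that the second client would display, and render both together. The content tables and function names are illustrative assumptions, not part of the patent.

```python
# Hypothetical content lookup tables; the patent does not specify how content
# is derived from operations, so these mappings are illustrative only.
FIRST_CLIENT_CONTENT = {"play_card": "card animation for player 1"}
SECOND_CLIENT_CONTENT = {"play_card": "reaction view shown on player 2's client"}

def handle_operation(operation_id: str) -> list:
    """On the first client: resolve an operation instruction into the content
    both accounts' clients would show, and display the two together."""
    first_content = FIRST_CLIENT_CONTENT.get(operation_id, "default view")
    second_content = SECOND_CLIENT_CONTENT.get(operation_id, "default view")
    # Simultaneous display is simulated by emitting both pieces of content
    # in a single frame update.
    frame = [first_content, second_content]
    print("rendering on first client:", frame)
    return frame

handle_operation("play_card")
```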
-
Publication number: 20220020360
Abstract: The present teaching relates to a method, system, medium, and implementations for user-machine dialogue. An instruction is received by an agent device for rendering a communication directed to a user involved in a dialogue in a dialogue scene and is used to render the communication. A first representation of a mindset of the agent is updated accordingly after the rendering. Input data are received in one or more media types capturing a response from the user and information surrounding the dialogue scene, and a second representation of a mindset of the user is updated based on the response from the user and the information surrounding the dialogue scene. A next communication to the user is then determined based on the first representation of the mindset of the agent and the second representation of the mindset of the user.
Type: Application
Filed: May 28, 2021
Publication date: January 20, 2022
Inventors: Rui Fang, Changsong Liu
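A minimal sketch of the two-mindset loop described here, with hypothetical state fields: the agent renders a communication, updates its own mindset representation, folds multimodal observations into a representation of the user's mindset, and derives the next communication from both.

```python
class DialogueAgent:
    """Toy agent keeping two mindset representations, as in the abstract:
    one for the agent itself and one inferred for the user."""

    def __init__(self):
        self.agent_mindset = {"last_utterance": None, "goal": "keep user engaged"}
        self.user_mindset = {"sentiment": "unknown", "attention": "unknown"}

    def render(self, instruction: str) -> None:
        """Render a communication to the user, then update the agent mindset."""
        print("agent says:", instruction)
        self.agent_mindset["last_utterance"] = instruction

    def observe(self, speech: str, visual_attention: str) -> None:
        """Fold multimodal input (speech plus scene information) into the
        representation of the user's mindset."""
        self.user_mindset["sentiment"] = (
            "positive" if "yes" in speech.lower() else "negative")
        self.user_mindset["attention"] = visual_attention

    def next_communication(self) -> str:
        """Decide the next communication from both mindset representations."""
        if self.user_mindset["attention"] == "away":
            return "Let me know when you are ready to continue."
        if self.user_mindset["sentiment"] == "positive":
            return "Great, let's keep going!"
        return "Would you like to try something different?"

agent = DialogueAgent()
agent.render("Do you want to practice counting?")
agent.observe("yes please", visual_attention="on_screen")
print("agent says:", agent.next_communication())
```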
-
Patent number: 11222632
Abstract: The present teaching relates to a method, system, medium, and implementations for enabling communication with a user. Information representing the surroundings of a user to be engaged in a new dialogue is received via the communication platform, wherein the information is acquired from a scene in which the user is present and captures characteristics of the user and the scene. Relevant features are extracted from the information. A state of the user is estimated based on the relevant features, and a dialogue context surrounding the scene is determined based on the relevant features. A topic for the new dialogue is determined based on the user, and feedback is generated to initiate the new dialogue with the user based on the topic, the state of the user, and the dialogue context.
Type: Grant
Filed: December 27, 2018
Date of Patent: January 11, 2022
Assignee: DMAI, INC.
Inventors: Changsong Liu, Rui Fang
-
Patent number: 11024294
Abstract: The present teaching relates to a method, system, medium, and implementations for user-machine dialogue. An instruction is received by an agent device for rendering a communication directed to a user involved in a dialogue in a dialogue scene and is used to render the communication. A first representation of a mindset of the agent is updated accordingly after the rendering. Input data are received in one or more media types capturing a response from the user and information surrounding the dialogue scene, and a second representation of a mindset of the user is updated based on the response from the user and the information surrounding the dialogue scene. A next communication to the user is then determined based on the first representation of the mindset of the agent and the second representation of the mindset of the user.
Type: Grant
Filed: December 27, 2018
Date of Patent: June 1, 2021
Assignee: DMAI, INC.
Inventors: Rui Fang, Changsong Liu
-
Patent number: 11003860
Abstract: The present teaching relates to a method, system, medium, and implementations for user-machine dialogue. Historic dialogue data related to past dialogues are accessed and used to learn, via machine learning, expected utilities. During a dialogue involving a user and a machine agent, a representation of a shared mindset between the user and the agent is obtained to characterize the current state of the dialogue, which is then used to update the expected utilities. Continuous expected utility functions are then generated based on the updated expected utilities, wherein the continuous expected utility functions are to be used in determining how to conduct a dialogue with a user.
Type: Grant
Filed: December 27, 2018
Date of Patent: May 11, 2021
Assignee: DMAI, INC.
Inventors: Rui Fang, Changsong Liu
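A toy sketch of the utility pipeline in this abstract: expected utilities estimated from historic dialogue data, updated using a current shared-mindset score, and turned into a continuous expected-utility function (here via a least-squares polynomial fit, one of many possible choices). The one-dimensional state and the blending rule are assumptions made for brevity.

```python
import numpy as np

# Hypothetical historic dialogue data: (dialogue-state value, observed utility).
# The patent does not specify the representation; a 1-D state is used for brevity.
historic = np.array([[0.0, 0.1], [0.25, 0.4], [0.5, 0.7], [0.75, 0.6], [1.0, 0.3]])

# Take the historic utilities as the learned expected-utility estimates
# (a stand-in for whatever machine-learning step the patent applies).
states, utilities = historic[:, 0], historic[:, 1]

def update_with_mindset(utilities: np.ndarray, shared_mindset_score: float) -> np.ndarray:
    """Nudge the expected utilities toward the current shared-mindset estimate."""
    return 0.8 * utilities + 0.2 * shared_mindset_score

updated = update_with_mindset(utilities, shared_mindset_score=0.9)

# Generate a continuous expected-utility function from the discrete estimates
# via polynomial least-squares fitting; interpolation would work equally well.
coeffs = np.polyfit(states, updated, deg=2)
expected_utility = np.poly1d(coeffs)

# Use the continuous function to score candidate next dialogue states.
for candidate in (0.3, 0.6, 0.9):
    print(f"state {candidate:.1f} -> expected utility {expected_utility(candidate):.2f}")
```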
-
Publication number: 20200155946
Abstract: The present application discloses a method for synchronously displaying game content at a terminal device. The method includes: detecting a first operation instruction on a first client of a game application when a round of the game application is run on the first client, the accounts participating in the round of the game including a first account and a second account, and determining a first operation that corresponds to the first operation instruction and is performed by a first operation object corresponding to the first account in the round of the game; determining first content that needs to be displayed on the first client when the first operation object performs the first operation, and obtaining second content to be simultaneously displayed with the first content, the second content being content that needs to be displayed on the second client; and simultaneously displaying the first content and the second content on the first client.
Type: Application
Filed: January 22, 2020
Publication date: May 21, 2020
Inventors: Ronghua Kong, Changsong Liu
-
Publication number: 20190206393
Abstract: The present teaching relates to a method, system, medium, and implementations for user-machine dialogue. An instruction is received by an agent device for rendering a communication directed to a user involved in a dialogue in a dialogue scene and is used to render the communication. A first representation of a mindset of the agent is updated accordingly after the rendering. Input data are received in one or more media types capturing a response from the user and information surrounding the dialogue scene, and a second representation of a mindset of the user is updated based on the response from the user and the information surrounding the dialogue scene. A next communication to the user is then determined based on the first representation of the mindset of the agent and the second representation of the mindset of the user.
Type: Application
Filed: December 27, 2018
Publication date: July 4, 2019
Inventors: Rui Fang, Changsong Liu
-
Publication number: 20190205390
Abstract: The present teaching relates to a method, system, medium, and implementations for user-machine dialogue. Historic dialogue data related to past dialogues are accessed and used to learn, via machine learning, expected utilities. During a dialogue involving a user and a machine agent, a representation of a shared mindset between the user and the agent is obtained to characterize the current state of the dialogue, which is then used to update the expected utilities. Continuous expected utility functions are then generated based on the updated expected utilities, wherein the continuous expected utility functions are to be used in determining how to conduct a dialogue with a user.
Type: Application
Filed: December 27, 2018
Publication date: July 4, 2019
Inventors: Rui Fang, Changsong Liu
-
Publication number: 20190206402
Abstract: The present teaching relates to a method, system, medium, and implementations for an automated dialogue companion. Multimodal input data associated with a user engaged in a dialogue on a certain topic in a dialogue scene are first received and used to extract features representing a state of the user and relevant information associated with the dialogue scene. A current state of the dialogue characterizing the context of the dialogue is generated based on the state of the user and the relevant information associated with the dialogue scene. A response communication for the user is determined based on a dialogue tree corresponding to the dialogue on the certain topic, the current state of the dialogue, and utilities learned from historic dialogue data and the current state of the dialogue.
Type: Application
Filed: December 27, 2018
Publication date: July 4, 2019
Inventors: Nishant Shukla, Rui Fang, Changsong Liu
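A compact sketch of the selection step this abstract describes: a dialogue-tree node holds candidate responses, the current dialogue state is derived from user and scene features, and the response with the highest utility for that state is chosen. The tree, features, and utility values are hypothetical; the patent learns utilities from historic dialogue data rather than hard-coding them.

```python
# Hypothetical dialogue tree for a counting-practice topic; each node lists
# candidate responses with per-state utilities that would, per the abstract,
# be learned from historic dialogue data rather than hard-coded as here.
DIALOGUE_TREE = {
    "greeting": [
        ("Want to count apples together?", {"engaged": 0.9, "distracted": 0.4}),
        ("Let's take a short break first.", {"engaged": 0.2, "distracted": 0.8}),
    ],
}

def current_dialogue_state(user_features: dict, scene_features: dict) -> str:
    """Combine the user state and scene information into a dialogue-state label."""
    if user_features.get("gaze") == "away" or scene_features.get("noisy"):
        return "distracted"
    return "engaged"

def choose_response(node: str, state: str) -> str:
    """Pick the candidate response with the highest utility for this state."""
    candidates = DIALOGUE_TREE[node]
    return max(candidates, key=lambda c: c[1].get(state, 0.0))[0]

state = current_dialogue_state({"gaze": "on_screen"}, {"noisy": False})
print(choose_response("greeting", state))   # -> counting prompt for an engaged user
```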
-
Publication number: 20190206401
Abstract: The present teaching relates to a method, system, medium, and implementations for enabling communication with a user. Information representing the surroundings of a user to be engaged in a new dialogue is received via the communication platform, wherein the information is acquired from a scene in which the user is present and captures characteristics of the user and the scene. Relevant features are extracted from the information. A state of the user is estimated based on the relevant features, and a dialogue context surrounding the scene is determined based on the relevant features. A topic for the new dialogue is determined based on the user, and feedback is generated to initiate the new dialogue with the user based on the topic, the state of the user, and the dialogue context.
Type: Application
Filed: December 27, 2018
Publication date: July 4, 2019
Inventors: Changsong Liu, Rui Fang
-
Publication number: 20120299701
Abstract: An apparatus and method for receiving a first user input comprising a first set of strokes; causing a representation of the first set of strokes to be displayed; whilst the representation of the first set of strokes is displayed, receiving a second user input comprising a second set of strokes; causing a representation of each of the second set of strokes to be displayed as it is received, the representation of the second set of strokes at least partially overlapping the representation of the first set of strokes; resolving the first user input into a first character; and resolving the second user input into a second character.
Type: Application
Filed: December 30, 2009
Publication date: November 29, 2012
Applicant: NOKIA CORPORATION
Inventors: Yanming Zou, Xiaohui Xie, Changsong Liu, Yan Chen
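The sketch below illustrates only the input flow: buffering a first set of strokes, accepting a second set while the first is still displayed, and resolving each set into a character. It deliberately sidesteps the hard parts (rendering the overlapped strokes and real character recognition); the recognizer is a stub and all names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class StrokeBuffer:
    """Collects the strokes of one user input while they remain displayed;
    a later input may be drawn on top of (overlapping) the earlier one."""
    strokes: List[List[Point]] = field(default_factory=list)

    def add_stroke(self, stroke: List[Point]) -> None:
        self.strokes.append(stroke)

def resolve_character(buffer: StrokeBuffer) -> str:
    """Stand-in recognizer: a real implementation would classify the stroke
    set; here the stroke count is mapped to a placeholder label."""
    return f"<char:{len(buffer.strokes)} strokes>"

# First user input: two strokes, kept on screen while the next input begins.
first = StrokeBuffer()
first.add_stroke([(0, 0), (10, 0)])
first.add_stroke([(5, -5), (5, 5)])

# Second user input overlaps the first on the display, but is buffered
# separately so each input can be resolved into its own character.
second = StrokeBuffer()
second.add_stroke([(2, 2), (8, 8)])

print(resolve_character(first), resolve_character(second))
```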
-
Publication number: 20090135188
Abstract: A method and a system for live detection based on physiological motion of a human face are provided. The method has the following steps: in step a, a motion area and at least one motion direction within the visual angle of the system camera are detected and a facial region is found. In step b, it is determined whether a valid facial motion exists in the detected facial region. If no valid facial motion exists, the object is considered a photo of a human face; otherwise, the method proceeds to step c to determine whether the facial motion is a physiological motion. If it is not, the object is considered a photo of a human face; otherwise, it is considered a real human face. By distinguishing a real human face from a photo of a human face, the present invention increases the reliability of the face recognition system.
Type: Application
Filed: May 30, 2008
Publication date: May 28, 2009
Applicant: TSINGHUA UNIVERSITY
Inventors: Xiaoqing Ding, Liting Wang, Chi Fang, Changsong Liu, Liangrui Peng
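A NumPy-only sketch of the three steps a-c, assuming the facial region is supplied by a separate face detector: frame differencing for motion, a check for sufficient motion inside the facial region, and a blink-count stand-in for the physiological-motion test. The thresholds and the blink heuristic are illustrative assumptions.

```python
import numpy as np

def motion_mask(prev_frame: np.ndarray, frame: np.ndarray, thresh: float = 25.0) -> np.ndarray:
    """Step a: frame differencing to find where motion occurs in the camera view."""
    return np.abs(frame.astype(float) - prev_frame.astype(float)) > thresh

def has_valid_facial_motion(mask: np.ndarray, face_box: tuple, min_ratio: float = 0.02) -> bool:
    """Step b: is there enough motion inside the detected facial region?
    A printed photo held still produces almost none."""
    y0, y1, x0, x1 = face_box
    return mask[y0:y1, x0:x1].mean() > min_ratio

def is_physiological_motion(eye_region_motion_over_time: list, min_blinks: int = 1) -> bool:
    """Step c: check whether the facial motion looks physiological, here by
    counting blink-like bursts of motion in the eye region over time."""
    bursts = sum(1 for m in eye_region_motion_over_time if m > 0.1)
    return bursts >= min_blinks

# Toy frames: random noise stands in for camera images; face_box is assumed to
# come from a separate face detector, which step a of the method would provide.
rng = np.random.default_rng(0)
prev, curr = rng.integers(0, 255, (120, 160)), rng.integers(0, 255, (120, 160))
mask = motion_mask(prev, curr)
live = has_valid_facial_motion(mask, (20, 80, 40, 120)) and \
       is_physiological_motion([0.0, 0.15, 0.0, 0.12])
print("real face" if live else "photo of a face")
```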
-
Patent number: 7174044
Abstract: In a method for character recognition based on a Gabor filter group, the Gabor filter's joint spatial/spatial-frequency localization and its capability to efficiently extract characters' local structural features are employed to extract, from the character image, stroke-direction information of the characters as the recognition information, so as to improve robustness to noise, backgrounds, and brightness variations in images, as well as to the deformation of characters. Using this information, a simple and effective parameter design method is put forward to optimally design the Gabor filter, ensuring preferable recognition performance; a corrected Sigmoid function is used to non-linearly and adaptively process the stroke-direction information output from the Gabor filter group. When extracting features from blocks, a Gaussian filter array is used to process the positive and negative values output from the Gabor filter group to enhance the discrimination ability of the extracted features.
Type: Grant
Filed: May 23, 2003
Date of Patent: February 6, 2007
Assignee: Tsinghua University
Inventors: Xiaoqing Ding, Xuewen Wang, Changsong Liu, Liangrui Peng, Chi Fang
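A rough sketch of the feature-extraction chain named in this abstract: a direction-tuned Gabor filter group, a Sigmoid-style nonlinearity, and block pooling of the positive and negative responses. Plain block averaging stands in for the patent's Gaussian filter array, and the filter parameters are arbitrary rather than products of the patent's parameter-design method.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta: float, sigma: float = 2.0, lam: float = 4.0, size: int = 9) -> np.ndarray:
    """Real-valued Gabor kernel tuned to stroke direction `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def sigmoid(x: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Sigmoid-style squashing of the filter responses (tanh used for brevity)."""
    return np.tanh(alpha * x)

def extract_features(char_image: np.ndarray, directions: int = 4, blocks: int = 4) -> np.ndarray:
    """Gabor filter group -> nonlinearity -> block pooling of the positive and
    negative responses, yielding a direction-sensitive feature vector."""
    feats = []
    h, w = char_image.shape
    bh, bw = h // blocks, w // blocks
    for k in range(directions):
        resp = sigmoid(convolve2d(char_image, gabor_kernel(k * np.pi / directions), mode="same"))
        pos, neg = np.clip(resp, 0, None), np.clip(-resp, 0, None)
        for by in range(blocks):
            for bx in range(blocks):
                sl = (slice(by * bh, (by + 1) * bh), slice(bx * bw, (bx + 1) * bw))
                feats.extend([pos[sl].mean(), neg[sl].mean()])
    return np.array(feats)

# Toy 32x32 "character" containing a single vertical stroke.
img = np.zeros((32, 32))
img[4:28, 15:17] = 1.0
print(extract_features(img).shape)   # -> (128,) = 4 directions x 16 blocks x 2 signs
```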
-
Publication number: 20040017944
Abstract: In a method for character recognition based on a Gabor filter group, the Gabor filter's joint spatial/spatial-frequency localization and its capability to efficiently extract characters' local structural features are employed to extract, from the character image, stroke-direction information of the characters as the recognition information, so as to improve robustness to noise, backgrounds, and brightness variations in images, as well as to the deformation of characters. Using this information, a simple and effective parameter design method is put forward to optimally design the Gabor filter, ensuring preferable recognition performance; a corrected Sigmoid function is used to non-linearly and adaptively process the stroke-direction information output from the Gabor filter group. When extracting features from blocks, a Gaussian filter array is used to process the positive and negative values output from the Gabor filter group to enhance the discrimination ability of the extracted features.
Type: Application
Filed: May 23, 2003
Publication date: January 29, 2004
Inventors: Xiaoqing Ding, Xuewen Wang, Changsong Liu, Liangrui Peng, Chi Fang
-
Patent number: D914765
Type: Grant
Filed: November 13, 2019
Date of Patent: March 30, 2021
Assignee: ZHONGSHAN HENG YI SPORTS EQUIPMENTS CO., LTD
Inventor: Changsong Liu
-
Patent number: D1054455
Type: Grant
Filed: June 16, 2023
Date of Patent: December 17, 2024
Inventor: Changsong Liu