Patents by Inventor Haohong Wang

Haohong Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240096329
    Abstract: A method and device for interaction are provided. The method includes: in response to a user starting a conversation, detecting a current program watched by the user, obtaining an input from the user and identifying a character that the user talks to based on the input, retrieving script information of the detected program and a cloned character voice model corresponding to the identified character, generating a response based on the script information corresponding to the identified character, and presenting the generated response to the user using the cloned character voice model corresponding to the identified character.
    Type: Application
    Filed: November 28, 2022
    Publication date: March 21, 2024
    Inventor: Haohong WANG
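    A minimal sketch of the interaction flow described in 20240096329, assuming a toy script database, a keyword-based character match, and a placeholder voice model; all class and function names here are hypothetical and only illustrate the retrieve-then-voice idea.

```python
# Hypothetical sketch: identify the addressed character, pull that character's
# script context, and voice the reply with a (stubbed) cloned voice model.

SCRIPT_DB = {  # toy stand-in for retrieved script information
    ("ShowX", "Alice"): "Alice is a detective investigating the harbor case.",
}

class ClonedVoiceModel:                      # placeholder for a real voice-cloning TTS model
    def __init__(self, character):
        self.character = character
    def synthesize(self, text):
        return f"[{self.character} voice] {text}"

def identify_character(user_input, program):
    # naive keyword match; a real system would use ASR + NLU
    for (prog, name) in SCRIPT_DB:
        if prog == program and name.lower() in user_input.lower():
            return name
    return None

def respond(user_input, program="ShowX"):
    character = identify_character(user_input, program)
    script_info = SCRIPT_DB[(program, character)]
    reply = f"As described in my story: {script_info}"   # stand-in for response generation
    return ClonedVoiceModel(character).synthesize(reply)

print(respond("Alice, what are you working on?"))
```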
  • Publication number: 20240015259
    Abstract: A script-to-movie generation method for a computing device includes: obtaining a movie script; generating a list of actions according to the movie script; generating a stage performance based on each action in the list of actions; extracting observation information from the stage performance; using a camera agent trained with a reinforcement learning algorithm to select a camera based on the observation information, where the camera includes a camera setting that defines a position of the camera with respect to a character that the camera shoots; using the selected camera to capture a video of the stage performance; and outputting the video.
    Type: Application
    Filed: July 6, 2022
    Publication date: January 11, 2024
    Inventors: Zixiao YU, Haohong WANG
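    A minimal sketch of the camera-selection step in 20240015259, assuming a small discrete set of candidate camera settings and a stand-in scoring policy in place of the trained reinforcement-learning agent; names and numbers are illustrative only.

```python
import random
from dataclasses import dataclass

@dataclass
class Camera:                 # a camera setting: position relative to the tracked character
    name: str
    distance: float           # metres from the character
    height: float
    azimuth_deg: float

CANDIDATES = [
    Camera("close-up", 1.5, 1.6, 0.0),
    Camera("medium", 3.0, 1.7, 30.0),
    Camera("long", 8.0, 2.5, 60.0),
]

def policy(observation):
    """Stand-in for the trained RL policy: score each candidate camera."""
    return [random.random() for _ in CANDIDATES]

def select_camera(observation):
    scores = policy(observation)
    return CANDIDATES[max(range(len(scores)), key=scores.__getitem__)]

obs = {"action": "walk", "characters_on_stage": 2}   # observation extracted from the stage performance
print(select_camera(obs).name)
```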
  • Publication number: 20230237268
    Abstract: A method and device for one-click filmmaking are provided. The method includes: obtaining a script from a user, detecting a single user operation, in response to the single user operation, obtaining a plurality of shots and estimating information of the plurality of shots based on the script, and automatically generating a film based on an auto-cinematography algorithm and the estimated information of the plurality of shots. The estimated information of one of the plurality of shots comprises at least one of a character of the shot, a scene of the shot, one or more positions of the character in the shot, a duration of the shot, or a shot type.
    Type: Application
    Filed: April 28, 2022
    Publication date: July 27, 2023
    Inventor: Haohong Wang
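    A minimal sketch of the per-shot information enumerated in 20230237268 (character, scene, positions, duration, shot type), with a trivial line-by-line estimator standing in for the real script analysis; the Shot fields mirror the abstract, everything else is assumed.

```python
from dataclasses import dataclass, field

@dataclass
class Shot:                           # the estimated per-shot information named in the abstract
    character: str
    scene: str
    positions: list = field(default_factory=list)   # character positions within the shot
    duration_s: float = 0.0
    shot_type: str = "medium"         # e.g. close-up / medium / wide

def estimate_shots(script_lines):
    """Toy estimator: one shot per script line; a real pipeline would parse the script."""
    shots = []
    for i, line in enumerate(script_lines):
        character = line.split(":", 1)[0].strip()
        shots.append(Shot(character=character, scene="INT. ROOM",
                          positions=[(i, 0.0)], duration_s=3.0,
                          shot_type="close-up" if "!" in line else "medium"))
    return shots

for s in estimate_shots(["ALICE: We have to go!", "BOB: Where?"]):
    print(s)
```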
  • Patent number: 11711573
    Abstract: A method and device for a reversible story are provided. The method includes: when presenting a media stream of a current story, detecting a request performed by a user for generating an alternative story corresponding to the current story; in response to the request for generating the alternative story, determining a target path in a hyperstory, the hyperstory including multiple paths corresponding to multiple stories that describe different status change trends of one or more story characters, and the target path sharing a same initial segment with a path of the current story in the hyperstory and including a branch point where a story trend change occurs; determining a media stream of the alternative story according to the target path; and presenting the media stream of the alternative story.
    Type: Grant
    Filed: April 22, 2022
    Date of Patent: July 25, 2023
    Assignee: TCL RESEARCH AMERICA INC.
    Inventor: Haohong Wang
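    A minimal sketch of the target-path search in 11711573, assuming the hyperstory is a small directed graph of story segments; the function returns a path that shares the current story's initial segment and diverges at a branch point. The graph contents are invented for illustration.

```python
# Hypothetical hyperstory: each key is a story segment, values are possible next segments.
HYPERSTORY = {
    "intro": ["conflict"],
    "conflict": ["hero_wins", "hero_falls"],   # branch point: the story trend diverges here
    "hero_wins": [], "hero_falls": [],
}
CURRENT_PATH = ["intro", "conflict", "hero_wins"]

def alternative_path(graph, current_path):
    """Walk the current path and return the first path that shares its prefix
    but diverges at a branch point."""
    for i, node in enumerate(current_path):
        for nxt in graph.get(node, []):
            if i + 1 < len(current_path) and nxt != current_path[i + 1]:
                return current_path[: i + 1] + [nxt]      # shared prefix + diverging branch
    return None

print(alternative_path(HYPERSTORY, CURRENT_PATH))   # ['intro', 'conflict', 'hero_falls']
```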
  • Publication number: 20230201715
    Abstract: An interactable video playback method includes: obtaining an interactable video sequence including a plurality of interactable data regions and a plurality of non-interactable data regions, wherein each interactable data region stores video data and non-video data associated with interaction, and each non-interactable data region stores a two-dimensional (2D) video clip; playing the interactable video sequence; detecting a join request from a user; and in response to the join request occurring when playing one of the plurality of interactable data regions, allowing an avatar of the user to interact with an object in a scene corresponding to the interactable data region being played.
    Type: Application
    Filed: May 9, 2022
    Publication date: June 29, 2023
    Inventor: Haohong WANG
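    A minimal sketch of the sequence layout in 20230201715, assuming each region is flagged as interactable (video plus non-video interaction data) or a plain 2D clip, and that a join request is honored only while an interactable region is playing; the Region fields and playback loop are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Region:
    interactable: bool
    video: str                 # clip identifier
    scene_data: dict = None    # non-video data (objects the avatar can interact with)

SEQUENCE = [
    Region(False, "clip_01"),
    Region(True, "clip_02", {"objects": ["door", "lamp"]}),
    Region(False, "clip_03"),
]

def play(sequence, join_request_at):
    for i, region in enumerate(sequence):
        print(f"playing {region.video}")
        if i == join_request_at:                      # user asked to join during this region
            if region.interactable:
                print("avatar joins; interactable objects:", region.scene_data["objects"])
            else:
                print("join ignored: region is a plain 2D clip")

play(SEQUENCE, join_request_at=1)
```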
  • Publication number: 20230005201
    Abstract: A method and device for harmony-aware audio-driven motion synthesis are provided. The method includes determining a plurality of testing meter units according to an input audio, each testing meter unit corresponding to an input audio sequence of the input audio, obtaining an auditory input corresponding to each testing meter unit, obtaining an initial pose of each testing meter unit as a visual input based on a visual motion sequence synthesized for a previous testing meter unit, and automatically generating a harmony-aware motion sequence corresponding to the input audio using a generator of a generative adversarial network (GAN) model. The GAN model is trained by incorporating a hybrid loss function. The hybrid loss function includes a multi-space pose loss, a harmony loss, and a GAN loss. The harmony loss is determined according to beat consistencies of audio-visual beat pairs.
    Type: Application
    Filed: June 28, 2021
    Publication date: January 5, 2023
    Inventors: Xinyi WU, Haohong WANG
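    A minimal sketch of the hybrid loss composition in 20230005201 (multi-space pose loss, harmony loss, and GAN loss). Each term below is a toy stand-in: the pose term is a plain mean squared error, the harmony term only counts audio beats without a nearby motion beat, and the weights are invented.

```python
def pose_loss(pred_poses, true_poses):
    """Toy multi-space pose term: mean squared error over joint coordinates."""
    return sum((p - t) ** 2 for p, t in zip(pred_poses, true_poses)) / len(true_poses)

def harmony_loss(audio_beats, motion_beats, tol=0.1):
    """Penalize audio beats that have no visual beat within `tol` seconds."""
    misses = sum(1 for a in audio_beats
                 if not any(abs(a - m) <= tol for m in motion_beats))
    return misses / max(len(audio_beats), 1)

def gan_loss(discriminator_score):
    """Non-saturating generator term: low when the discriminator is fooled."""
    import math
    return -math.log(max(discriminator_score, 1e-8))

def hybrid_loss(pred_poses, true_poses, audio_beats, motion_beats, d_score,
                w_pose=1.0, w_harmony=0.5, w_gan=0.1):       # weights are assumptions
    return (w_pose * pose_loss(pred_poses, true_poses)
            + w_harmony * harmony_loss(audio_beats, motion_beats)
            + w_gan * gan_loss(d_score))

print(hybrid_loss([0.1, 0.2], [0.0, 0.25], [0.5, 1.0], [0.52, 1.4], d_score=0.7))
```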
  • Patent number: 11507848
    Abstract: An experience-aware anomaly processing system and a method for an experience-aware anomaly processing system are provided. The experience-aware anomaly processing system comprises an anomaly detection module configured to receive geographic location data with corresponding time information of a target object, and analyze target object behavior based on the geographic location data with corresponding time information of the target object; a user feedback module configured to receive user feedback from a user and model user feedback behavior when the user receives an alarm message indicating the target object is abnormal; and a decision module configured to receive a user setting from the user, and make a detection decision by fusing target object behavior information corresponding to the target object behavior, user feedback behavior information corresponding to the user feedback behavior, and the user setting.
    Type: Grant
    Filed: August 8, 2016
    Date of Patent: November 22, 2022
    Assignee: TCL RESEARCH AMERICA INC.
    Inventors: Haohong Wang, Xiaobo Ren, Wenqiang Bo, Guanghan Ning, Lifan Guo
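    A minimal sketch of the decision fusion in 11507848, assuming each module contributes a score in [0, 1] and the user setting shifts an alarm threshold; the weights and threshold rule are illustrative, not the patented model.

```python
def fuse_decision(anomaly_score, feedback_trust, user_sensitivity,
                  w_anomaly=0.6, w_feedback=0.4):
    """Combine behavior analysis with modeled user feedback.

    anomaly_score    : 0..1 from the anomaly detection module (location/time analysis)
    feedback_trust   : 0..1 likelihood the user confirms alarms for this pattern
    user_sensitivity : 0..1 user setting; higher -> more alarms
    """
    fused = w_anomaly * anomaly_score + w_feedback * feedback_trust
    threshold = 1.0 - user_sensitivity          # sensitive users get a lower bar
    return fused >= threshold

# A wandering target, a user who usually confirms alarms, and a moderate sensitivity:
print(fuse_decision(anomaly_score=0.8, feedback_trust=0.7, user_sensitivity=0.5))  # True
```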
  • Patent number: 11463652
    Abstract: A script-to-movie generation method for a computing device includes obtaining a movie script, generating a video according to the movie script, optimizing the generated video until a pass condition is satisfied, and outputting the optimized video.
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: October 4, 2022
    Assignee: TCL RESEARCH AMERICA INC.
    Inventors: Zixiao Yu, Haohong Wang
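    A minimal sketch of the generate-and-refine loop in 11463652, with stub generation, evaluation, and refinement steps; the pass condition here is just a quality threshold with an iteration cap.

```python
def generate_video(script):
    return {"script": script, "quality": 0.2}        # stand-in for initial rendering

def evaluate(video):
    return video["quality"]                          # stand-in for the pass metric

def refine(video):
    video = dict(video)
    video["quality"] = min(1.0, video["quality"] + 0.3)   # stand-in optimization step
    return video

def script_to_movie(script, pass_threshold=0.8, max_iters=10):
    video = generate_video(script)
    for _ in range(max_iters):                       # optimize until the pass condition holds
        if evaluate(video) >= pass_threshold:
            break
        video = refine(video)
    return video

print(script_to_movie("INT. LAB - NIGHT ..."))
```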
  • Patent number: 11445244
    Abstract: A context-aware method for answering a question about a video includes: receiving the question about the video that is paused at a pausing position; obtaining and analyzing context information at the pausing position of the video, the context information including supplementary materials of the video; and automatically searching for an answer to the question based on the context information at the pausing position of the video.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: September 13, 2022
    Assignee: TCL RESEARCH AMERICA INC.
    Inventors: Kyle Otto Jorgensen, Zhiqun Zhao, Haohong Wang, Mea Wang
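    A minimal sketch of the pause-position context lookup in 11445244, assuming supplementary materials are indexed by time ranges and the answer search is a naive keyword match; both assumptions are illustrative.

```python
# Supplementary materials indexed by (start_s, end_s) time ranges -- an assumption
# about how pause-position context might be organized.
SUPPLEMENTS = [
    ((0, 300), "Episode recap: the crew lands on the station."),
    ((300, 900), "Character bios: Dr. Ruiz is the station engineer."),
]

def context_at(pause_s):
    return [text for (start, end), text in SUPPLEMENTS if start <= pause_s < end]

def answer(question, pause_s):
    context = context_at(pause_s)
    # toy "search": return the first context snippet mentioning a question word
    for snippet in context:
        if any(word.lower().strip("?") in snippet.lower() for word in question.split()):
            return snippet
    return "No answer found in the current context."

print(answer("Who is Dr. Ruiz?", pause_s=450))
```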
  • Patent number: 11423941
    Abstract: A method and device for implementing Write-A-Movie technology are provided. The method includes: obtaining a screenplay of a movie; generating, according to the screenplay, an action list by performing natural language processing (NLP) on the screenplay, the action list comprising a plurality of actions with attributes, the attributes of each action including a subject, a predicate, and a location of the action; rendering, according to the action list, three-dimensional (3D) data in 3D scenes of the movie, the 3D data reflecting, for each action, the subject performing the action at the location in a corresponding 3D scene; determining a camera sequence of cameras for shooting two-dimensional (2D) frames in the 3D scenes by performing an auto-cinematography optimization process; and generating a 2D video of the movie by combining the 2D frames shot by the cameras based on the determined camera sequence.
    Type: Grant
    Filed: September 28, 2020
    Date of Patent: August 23, 2022
    Assignee: TCL RESEARCH AMERICA INC.
    Inventor: Haohong Wang
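    A minimal sketch of the pipeline shape in 11423941 (screenplay to action list to 3D staging to camera selection to 2D output), with every stage replaced by a trivial stand-in; the line format and camera rule are invented for illustration.

```python
def parse_actions(screenplay):
    """Stand-in NLP: one action per line, formatted 'Subject predicate @ location'."""
    actions = []
    for line in screenplay.splitlines():
        subject, rest = line.split(" ", 1)
        predicate, location = rest.split(" @ ")
        actions.append({"subject": subject, "predicate": predicate, "location": location})
    return actions

def stage(actions):
    return [f"3D: {a['subject']} {a['predicate']} at {a['location']}" for a in actions]

def choose_cameras(scenes):
    # stand-in for auto-cinematography optimization: alternate two fixed cameras
    return ["cam_wide" if i % 2 == 0 else "cam_close" for i in range(len(scenes))]

def render_2d(scenes, cameras):
    return [f"{cam} -> {scene}" for cam, scene in zip(cameras, scenes)]

screenplay = "Alice walks @ park\nBob waves @ park"
scenes = stage(parse_actions(screenplay))
print(render_2d(scenes, choose_cameras(scenes)))
```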
  • Patent number: 11418848
    Abstract: A method for interactive video presentation includes: obtaining, by an electronic device, video data corresponding to a story; presenting, through a display interface, a portion of the video data corresponding to a selected storyline path of the story; receiving, by the input interface, a user request for switching between a two-dimensional (2D) video streaming mode and a three-dimensional (3D) exploration mode; and in response to the user request being a switch from the 2D video streaming mode to the 3D exploration mode: acquiring, by the processor, 3D video scenes with exploration options for an avatar, the 3D video scenes matched to a current story status and currently presented video data; and presenting, through the display interface, the 3D video scenes with the exploration options.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: August 16, 2022
    Assignee: TCL RESEARCH AMERICA INC.
    Inventor: Haohong Wang
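    A minimal sketch of the 2D/3D mode switch in 11418848 as a small state machine, with the 3D scene lookup stubbed out by story status; class and scene names are hypothetical.

```python
class Player:
    """Toy player that switches between 2D streaming and 3D exploration."""

    def __init__(self):
        self.mode = "2D"
        self.story_status = "chapter_1"

    def scenes_for(self, status):
        # stand-in for acquiring 3D scenes matched to the current story status
        return {"chapter_1": ["lobby", "rooftop"]}.get(status, [])

    def request_switch(self):
        if self.mode == "2D":
            self.mode = "3D"
            options = self.scenes_for(self.story_status)
            return f"3D exploration: avatar may visit {options}"
        self.mode = "2D"
        return "back to 2D streaming"

p = Player()
print(p.request_switch())   # enters 3D exploration with scene options
print(p.request_switch())   # returns to 2D streaming
```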
  • Publication number: 20220210366
    Abstract: A script-to-movie generation method for a computing device includes obtaining a movie script, generating a video according to the movie script, optimizing the generated video until a pass condition is satisfied, and outputting the optimized video.
    Type: Application
    Filed: December 29, 2020
    Publication date: June 30, 2022
    Inventors: Zixiao YU, Haohong WANG
  • Publication number: 20220101880
    Abstract: A method and device for implementing Write-A-Movie technology are provided. The method includes: obtaining a screenplay of a movie; generating, according to the screenplay, an action list by performing natural language processing (NLP) on the screenplay, the action list comprising a plurality of actions with attributes, the attributes of each action including a subject, a predicate, and a location of the action; rendering, according to the action list, three-dimensional (3D) data in 3D scenes of the movie, the 3D data reflecting, for each action, the subject performing the action at the location in a corresponding 3D scene; determining a camera sequence of cameras for shooting two-dimensional (2D) frames in the 3D scenes by performing an auto-cinematography optimization process; and generating a 2D video of the movie by combining the 2D frames shot by the cameras based on the determined camera sequence.
    Type: Application
    Filed: September 28, 2020
    Publication date: March 31, 2022
    Inventor: Haohong WANG
  • Publication number: 20220070541
    Abstract: A method for interactive video presentation includes: obtaining, by an electronic device, video data corresponding to a story; presenting, through a display interface, a portion of the video data corresponding to a selected storyline path of the story; receiving, by the input interface, a user request for switching between a two-dimensional (2D) video streaming mode and a three-dimensional (3D) exploration mode; and in response to the user request being a switch from the 2D video streaming mode to the 3D exploration mode: acquiring, by the processor, 3D video scenes with exploration options for an avatar, the 3D video scenes matched to a current story status and currently presented video data; and presenting, through the display interface, the 3D video scenes with the exploration options.
    Type: Application
    Filed: August 31, 2020
    Publication date: March 3, 2022
    Inventor: Haohong WANG
  • Patent number: 11244668
    Abstract: A method for generating speech animation from an audio signal includes: receiving the audio signal; transforming the received audio signal into frequency-domain audio features; performing neural-network processing on the frequency-domain audio features to recognize phonemes, wherein the neural-network processing is performed using a neural network trained with a phoneme dataset comprising audio signals with corresponding ground-truth phoneme labels; and generating the speech animation from the recognized phonemes.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: February 8, 2022
    Assignee: TCL RESEARCH AMERICA INC.
    Inventors: Zixiao Yu, Haohong Wang
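    A minimal sketch of the audio-to-phoneme-to-animation flow in 11244668; the frequency front end is reduced to per-frame magnitude, the "network" is a threshold stub rather than the trained classifier, and the phoneme-to-viseme map is illustrative.

```python
def frequency_features(audio_frames):
    """Stand-in for an STFT/MFCC front end: average magnitude per frame."""
    return [sum(abs(s) for s in frame) / len(frame) for frame in audio_frames]

def recognize_phonemes(features):
    """Stub 'network': threshold the feature instead of a trained classifier."""
    return ["AA" if f > 0.5 else "M" for f in features]

VISEMES = {"AA": "open_mouth", "M": "closed_lips"}   # illustrative phoneme->viseme map

def speech_animation(audio_frames):
    phonemes = recognize_phonemes(frequency_features(audio_frames))
    return [VISEMES[p] for p in phonemes]

frames = [[0.9, 0.8, 0.7], [0.1, 0.0, 0.05]]         # two toy audio frames
print(speech_animation(frames))                      # ['open_mouth', 'closed_lips']
```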
  • Publication number: 20210375260
    Abstract: A method for generating speech animation from an audio signal includes: receiving the audio signal; transforming the received audio signal into frequency-domain audio features; performing neural-network processing on the frequency-domain audio features to recognize phonemes, wherein the neural-network processing is performed using a neural network trained with a phoneme dataset comprising audio signals with corresponding ground-truth phoneme labels; and generating the speech animation from the recognized phonemes.
    Type: Application
    Filed: May 29, 2020
    Publication date: December 2, 2021
    Inventors: Zixiao YU, Haohong WANG
  • Patent number: 11120638
    Abstract: A method of generating video in a three-dimensional animation environment is provided. The method includes: obtaining and translating directorial hints for making a 3D animated movie based on user input; determining camera configurations in a 3D environment according to the directorial hints; establishing a camera search space that includes multiple candidate cameras to be used at different timestamps to shoot one or more scenes of the movie based on the camera configurations; and performing editing optimization based on the camera search space and the directorial hints to obtain an edited video. The editing optimization is formalized into a process of finding a path with minimum cost in a graph model, each path describing a candidate camera sequence for producing the movie, and at least some of the directorial hints are translated into cost functions of the graph model. The edited video is output as the produced 3D animated movie.
    Type: Grant
    Filed: December 26, 2019
    Date of Patent: September 14, 2021
    Assignee: TCL RESEARCH AMERICA INC.
    Inventors: Lin Sun, Haohong Wang
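    A minimal sketch of the editing optimization in 11120638 as a minimum-cost path over per-timestamp camera candidates, solved with a small dynamic program; the per-shot costs and the switching penalty stand in for the cost functions translated from directorial hints.

```python
# Candidate cameras per timestamp with an illustrative per-shot cost
# (lower is better); switching between different cameras adds a penalty.
CANDIDATES = [
    {"wide": 1.0, "close": 2.0},      # t = 0
    {"wide": 2.0, "close": 0.5},      # t = 1
    {"wide": 1.5, "close": 1.5},      # t = 2
]
SWITCH_PENALTY = 0.8

def best_camera_sequence(candidates, switch_penalty):
    """Dynamic program over the camera graph: minimum total cost path."""
    # cost[c] = (total cost of the best sequence ending in camera c, that sequence)
    cost = {c: (s, [c]) for c, s in candidates[0].items()}
    for frame in candidates[1:]:
        new_cost = {}
        for cam, shot_cost in frame.items():
            prev_cam = min(cost, key=lambda p: cost[p][0] + (p != cam) * switch_penalty)
            total = cost[prev_cam][0] + (prev_cam != cam) * switch_penalty + shot_cost
            new_cost[cam] = (total, cost[prev_cam][1] + [cam])
        cost = new_cost
    return min(cost.values())

print(best_camera_sequence(CANDIDATES, SWITCH_PENALTY))   # (3.8, ['wide', 'close', 'close'])
```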
  • Patent number: 11122335
    Abstract: An interaction method includes receiving an interaction indicating a user's wish, interpreting the interaction to obtain an interpreted wish, identifying a realization story in a hyperstory according to the interpreted wish, sending a feedback message indicating a time the user's wish will be realized and a response-to-wish confidence level, generating a realization video according to the realization story, and outputting the realization video.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: September 14, 2021
    Assignee: TCL RESEARCH AMERICA INC.
    Inventor: Haohong Wang
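    A minimal sketch of the wish-handling flow in 11122335, assuming each hyperstory node carries an outcome tag, a time-to-realization, and a confidence score; the keyword interpreter and all values are invented for illustration.

```python
# Hypothetical hyperstory nodes: outcome tag plus seconds until that outcome plays out.
HYPERSTORY = [
    {"story": "hero_escapes", "outcome": "escape", "realized_in_s": 420, "confidence": 0.9},
    {"story": "hero_captured", "outcome": "capture", "realized_in_s": 300, "confidence": 0.6},
]

def interpret_wish(utterance):
    """Stand-in NLU: map a few keywords to an outcome tag."""
    return "escape" if "escape" in utterance.lower() else "capture"

def respond_to_wish(utterance):
    wish = interpret_wish(utterance)
    for node in HYPERSTORY:                       # find a realization story matching the wish
        if node["outcome"] == wish:
            feedback = (f"Your wish will be realized in {node['realized_in_s']} s "
                        f"(confidence {node['confidence']:.0%}).")
            return feedback, node["story"]        # feedback message + story used for the video
    return "No matching realization found.", None

print(respond_to_wish("I hope she escapes!"))
```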
  • Publication number: 20210201595
    Abstract: A method of generating video in a three-dimensional animation environment is provided. The method includes: obtaining and translating directorial hints for making a 3D animated movie based on user input; determining camera configurations in a 3D environment according to the directorial hints; establishing a camera search space that includes multiple candidate cameras to be used at different timestamps to shoot one or more scenes of the movie based on the camera configurations; and performing editing optimization based on the camera search space and the directorial hints to obtain an edited video. The editing optimization is formalized into a process of finding a path with minimum cost in a graph model, each path describing a candidate camera sequence for producing the movie, and at least some of the directorial hints are translated into cost functions of the graph model. The edited video is output as the produced 3D animated movie.
    Type: Application
    Filed: December 26, 2019
    Publication date: July 1, 2021
    Inventors: Lin SUN, Haohong WANG
  • Publication number: 20210160578
    Abstract: An interaction method includes receiving an interaction indicating a user's wish, interpreting the interaction to obtain an interpreted wish, identifying a realization story in a hyperstory according to the interpreted wish, sending a feedback message indicating a time the user's wish will be realized and a response-to-wish confidence level, generating a realization video according to the realization story, and outputting the realization video.
    Type: Application
    Filed: November 22, 2019
    Publication date: May 27, 2021
    Inventor: Haohong WANG