Patents by Inventor Chichen Fu
Chichen Fu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20250252691
  Abstract: Systems and methods for generating virtual characters for video conferencing with feature enhancement are provided. In an example, a computing device accesses a source human face model, a target human face model, and a source virtual character face model. The device further accesses a first virtual feature triangle marked on the source human face model and a second virtual feature triangle marked on the source virtual character face model. The second virtual feature triangle corresponds to the first virtual feature triangle. The device deforms the source virtual character face model based on the source human face model and the target human face model to generate a target virtual character face model. The deforming includes minimizing a loss function comprising a term defined based on a difference between the second virtual feature triangle and the first virtual feature triangle. The device renders the target virtual character face model.
  Type: Application
  Filed: February 6, 2024
  Publication date: August 7, 2025
  Applicant: Zoom Video Communications, Inc.
  Inventors: Wenyu Chen, Chichen Fu, Zhongyuan Hu, Qiang Li, Wenchong Lin, Bo Ling, Gengdai Liu
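One way to picture the triangle-based term in this abstract is as a penalty on how far a marked feature triangle on the deformed character mesh drifts from its counterpart on the human face mesh. The sketch below is only an illustration of that idea, not the patented formulation; the array layout, index triples, and the edge-vector comparison are all assumptions.

```python
import numpy as np

def feature_triangle_term(deformed_char_verts, char_tri, human_verts, human_tri):
    """Hypothetical loss term: compare a feature triangle on the deformed
    character mesh with the corresponding triangle on the human face mesh.
    Comparing edge vectors (rather than raw positions) makes the term
    insensitive to a global translation between the two meshes."""
    c = deformed_char_verts[list(char_tri)]   # (3, 3) triangle vertex positions
    h = human_verts[list(human_tri)]          # (3, 3) corresponding triangle
    c_edges = c[[1, 2, 0]] - c                # edge vectors of each triangle
    h_edges = h[[1, 2, 0]] - h
    return float(np.sum((c_edges - h_edges) ** 2))
```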
- Publication number: 20250252692
  Abstract: Systems and methods for generating virtual characters for video conferencing via a two-stage process are provided. In an example, a computing device accesses a source human face model, a target human face model, and a source virtual character face model. The computing device deforms the source virtual character face model based on the source human face model and the target human face model to generate a target virtual character face model. The source virtual character face model includes a face region and a non-face region. Deforming the source virtual character face model includes deforming the face region while fixing the non-face region and then deforming the non-face region while fixing the deformed face region. The computing device renders the target virtual character face model.
  Type: Application
  Filed: February 6, 2024
  Publication date: August 7, 2025
  Applicant: Zoom Video Communications, Inc.
  Inventors: Wenyu Chen, Chichen Fu, Zhongyuan Hu, Qiang Li, Wenchong Lin, Bo Ling, Gengdai Liu
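Read literally, the two-stage process amounts to running a region deformation twice with complementary fixed sets. A minimal sketch of that staging follows, assuming a hypothetical `deform_region` solver that only moves the vertices selected by a boolean mask; everything here is illustrative rather than the claimed method.

```python
import numpy as np

def deform_two_stage(char_verts, face_mask, deform_region):
    """Hypothetical two-stage deformation: stage 1 moves only the face-region
    vertices while the non-face region stays fixed; stage 2 moves only the
    non-face region while the already deformed face region stays fixed."""
    verts = np.array(char_verts, copy=True)
    verts = deform_region(verts, movable=face_mask)    # stage 1: face region
    verts = deform_region(verts, movable=~face_mask)   # stage 2: non-face region
    return verts
```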
- Publication number: 20250252647
  Abstract: Systems and methods for generating virtual characters for video conferencing are provided. In an example, a computing device accesses a source human face model, a target human face model, and a source virtual character face model as well as feature curves marked on the source human face model and the source virtual character face model. The computing device deforms the source virtual character face model based on the source human face model and the target human face model to generate a target virtual character face model. Deforming the source virtual character face model includes minimizing a loss function that includes terms defined based on the feature curves to preserve features of the source virtual character face model on the target virtual character face model. The computing device further renders the target virtual character face model.
  Type: Application
  Filed: February 6, 2024
  Publication date: August 7, 2025
  Applicant: Zoom Video Communications, Inc.
  Inventors: Wenyu Chen, Chichen Fu, Zhongyuan Hu, Qiang Li, Wenchong Lin, Bo Ling, Gengdai Liu
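One plausible reading of the feature-curve terms is that each marked curve, sampled as an ordered list of mesh vertices, should keep its shape after deformation. The sketch below expresses that as a penalty on changes in the curve's segment vectors; the index array and the specific penalty are assumptions for illustration only.

```python
import numpy as np

def feature_curve_term(deformed_verts, source_verts, curve_idx):
    """Hypothetical feature-preservation term: penalize changes in the
    segment vectors along a marked feature curve so the curve keeps its
    shape on the deformed character mesh."""
    src = source_verts[curve_idx]        # (k, 3) ordered curve samples (source)
    out = deformed_verts[curve_idx]      # (k, 3) same samples after deformation
    src_seg = np.diff(src, axis=0)       # segment vectors along the curve
    out_seg = np.diff(out, axis=0)
    return float(np.sum((out_seg - src_seg) ** 2))
```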
- Publication number: 20250252639
  Abstract: Systems and methods for generating virtual characters for video conferencing are provided. In an example, a computing device joins a video conference and determines an expression parameter vector from a video of a participant during the video conference. The device further generates a virtual character customized for the participant from a virtual character face model. The virtual character is generated by at least applying the expression parameter vector to a set of virtual character expressions customized for the participant and by incorporating a virtual character neutral face model customized for the participant. Each of the virtual character expressions describes a facial expression of the virtual character customized for the participant, and the virtual character neutral face model describes a neutral face of the virtual character customized for the participant. The computing device renders the virtual character in a video stream of the participant.
  Type: Application
  Filed: February 6, 2024
  Publication date: August 7, 2025
  Applicant: Zoom Video Communications, Inc.
  Inventors: Wenyu Chen, Chichen Fu, Zhongyuan Hu, Qiang Li, Wenchong Lin, Bo Ling, Gengdai Liu
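The combination of a customized neutral face with a weighted set of customized expressions reads like a blendshape-style model. The sketch below shows that common construction under assumed names and an assumed linear model; it is illustrative only, not the claimed formulation.

```python
import numpy as np

def drive_avatar_face(neutral_verts, expression_deltas, expr_params):
    """Hypothetical blendshape-style combination: the expression parameter
    vector estimated from the participant's video weights a set of
    per-expression vertex offsets added to the customized neutral face.

    neutral_verts:      (V, 3) neutral face vertices
    expression_deltas:  (E, V, 3) per-expression offsets from the neutral face
    expr_params:        (E,) expression parameter vector for the current frame
    """
    return neutral_verts + np.tensordot(expr_params, expression_deltas, axes=1)
```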
- Publication number: 20250252673
  Abstract: Systems and methods for generating virtual characters for video conferencing are provided. In an example, a computing device accesses a source human face model, a target human face model, and a source virtual character model. The computing device deforms the source virtual character model based on the source human face model and the target human face model to generate a target virtual character model. The source virtual character model includes a face region and a non-face region. Deforming the source virtual character model includes generating, for the source virtual character model, virtual triangles connecting the face region and the non-face region of the source virtual character model, deforming the source virtual character model with minimized deformation to the virtual triangles to generate the target virtual character model, and removing the virtual triangles from the target virtual character model. The computing device renders the target virtual character model.
  Type: Application
  Filed: February 6, 2024
  Publication date: August 7, 2025
  Applicant: Zoom Video Communications, Inc.
  Inventors: Wenyu Chen, Chichen Fu, Zhongyuan Hu, Qiang Li, Wenchong Lin, Bo Ling, Gengdai Liu
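The virtual-triangle idea can be pictured as temporarily stitching the face and non-face regions together so a distortion-minimizing solver treats them as one surface, then discarding the stitches. The sketch below shows only that bookkeeping; the `deform` solver and the data layout are assumptions.

```python
def deform_with_virtual_triangles(verts, faces, virtual_tris, deform):
    """Hypothetical flow: append temporary 'virtual' triangles that bridge the
    face and non-face regions, run a solver that minimizes distortion of every
    triangle it is given (including the bridges), then drop the bridges so
    only the original topology remains in the result."""
    augmented_faces = list(faces) + list(virtual_tris)  # add bridging triangles
    new_verts = deform(verts, augmented_faces)          # solver sees the bridges
    return new_verts, list(faces)                       # virtual triangles removed
```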
- Publication number: 20240323309
  Abstract: Methods and systems provide for applying a video effect to a video corresponding to a participant within a video communication session. The system displays a video for each of at least a subset of the participants and a user interface including a selectable video effects UI element. The system receives a selection by a participant of the video effects UI element. In response to receiving the selection, the system displays a variety of video effects options for modifying the appearance of the video and/or modifying a visual representation of the participant. The system then receives a selection by the participant of a video effects option, and further receives a subselection for customizing the amount of the video effect to be applied. The system then applies, in real time or substantially real time, the selected video effect in the selected amount to the video corresponding to the participant.
  Type: Application
  Filed: June 4, 2024
  Publication date: September 26, 2024
  Inventors: Abhishek Balaji, Anna Deng, Chichen Fu, Pei Li, Bo Ling, Juliana Park, Qiang Li, Wenchong Lin
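The "subselection for customizing the amount" in this abstract (and in the related grant and publication below) suggests an adjustable effect strength. One common way to implement that, sketched here purely as an illustration with assumed names and blend scheme, is to mix the effected frame with the original frame by the chosen amount.

```python
import numpy as np

def apply_effect_with_amount(frame, effect_fn, amount):
    """Hypothetical strength control: blend the effected frame with the
    original frame according to the participant's chosen amount
    (0.0 = no effect, 1.0 = full effect)."""
    a = float(np.clip(amount, 0.0, 1.0))
    effected = effect_fn(frame).astype(np.float32)
    blended = (1.0 - a) * frame.astype(np.float32) + a * effected
    return blended.astype(frame.dtype)
```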
- Patent number: 12041373
  Abstract: Methods and systems provide for applying a video effect to a video corresponding to a participant within a video communication session. The system displays a video for each of at least a subset of the participants and a user interface including a selectable video effects UI element. The system receives a selection by a participant of the video effects UI element. In response to receiving the selection, the system displays a variety of video effects options for modifying the appearance of the video and/or modifying a visual representation of the participant. The system then receives a selection by the participant of a video effects option, and further receives a subselection for customizing the amount of the video effect to be applied. The system then applies, in real time or substantially real time, the selected video effect in the selected amount to the video corresponding to the participant.
  Type: Grant
  Filed: July 31, 2021
  Date of Patent: July 16, 2024
  Assignee: Zoom Video Communications, Inc.
  Inventors: Abhishek Balaji, Anna Deng, Chichen Fu, Pei Li, Bo Ling, Juliana Park, Qiang Li, Wenchong Lin
- Publication number: 20230260184
  Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, relate to a method for training a machine learning network to generate facial expressions for rendering an avatar within a video communication platform representing a video conference participant. Video images may be processed by the machine learning network to generate facial expression values. The generated facial expression values may be modified or adjusted to change the facial expression values. The modified or adjusted facial expression values may then be used to render a digital representation of the video conference participant in the form of an avatar.
  Type: Application
  Filed: March 17, 2022
  Publication date: August 17, 2023
  Inventors: Wenyu Chen, Chichen Fu, Qiang Li, Wenchong Lin, Bo Ling, Gengdai Liu
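The modification/adjustment step can be pictured as simple post-processing of the network's outputs before rendering, for instance scaling and clamping individual expression values. The per-expression gain and limits below are assumptions used only to illustrate that step, not the patented adjustment.

```python
import numpy as np

def adjust_expression_values(expr_values, gains, low=0.0, high=1.0):
    """Hypothetical adjustment of network-predicted facial expression values:
    amplify or damp each value with a per-expression gain, then clamp to a
    valid range before the avatar is rendered."""
    adjusted = np.asarray(expr_values, dtype=np.float32) * np.asarray(gains, dtype=np.float32)
    return np.clip(adjusted, low, high)
```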
- Publication number: 20230222721
  Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, relate to a method for generating an avatar within a video communication platform. The system may receive a selection of an avatar model from a group of one or more avatar models. The system receives a first video stream and audio data of a first video conference participant. The system analyzes image frames of the first video stream to determine a group of pixels representing the first video conference participant. The system determines a plurality of facial expression parameters associated with the determined group of pixels. Based on the determined plurality of facial expression parameter values, the system generates a first modified video stream depicting a digital representation of the first video conference participant in an avatar form.
  Type: Application
  Filed: January 31, 2022
  Publication date: July 13, 2023
  Inventors: Wenyu Chen, Chichen Fu, Guozhu Hu, Qiang Li, Wenhao Li, Wenchong Lin, Bo Ling, Gengdai Liu, Geng Wang, Kai Wei, Yian Zhu
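At a per-frame level the described flow is a short pipeline: isolate the participant's pixels, estimate facial expression parameter values from them, and render the selected avatar with those values. The helper functions below are hypothetical stand-ins for those stages, shown only to make the flow concrete.

```python
def avatar_frame(frame, segment_participant, estimate_expression_params, render_avatar):
    """Hypothetical per-frame pipeline: segment the pixels representing the
    participant, estimate facial expression parameter values from that region,
    and render the selected avatar model driven by those values."""
    participant_pixels = segment_participant(frame)
    expr_params = estimate_expression_params(participant_pixels)
    return render_avatar(expr_params)
```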
- Publication number: 20230007189
  Abstract: Methods and systems provide for applying a video effect to a video corresponding to a participant within a video communication session. The system displays a video for each of at least a subset of the participants and a user interface including a selectable video effects UI element. The system receives a selection by a participant of the video effects UI element. In response to receiving the selection, the system displays a variety of video effects options for modifying the appearance of the video and/or modifying a visual representation of the participant. The system then receives a selection by the participant of a video effects option, and further receives a subselection for customizing the amount of the video effect to be applied. The system then applies, in real time or substantially real time, the selected video effect in the selected amount to the video corresponding to the participant.
  Type: Application
  Filed: July 31, 2021
  Publication date: January 5, 2023
  Inventors: Abhishek Balaji, Anna Deng, Chichen Fu, Pei Li, Bo Ling, Juliana Park, Qiang Li, Wenchong Lin