Patents by Inventor Yunzhu Li
Yunzhu Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12190558
Abstract: A computer-implemented method for transforming a neural radiance field model is described. A plurality of inputs are provided to a neural radiance field (NeRF) model that represents a 3-dimensional space having a subject, wherein each input of the plurality of inputs includes a location and a view direction and corresponds to respective colors of voxels that represent the 3-dimensional space. A spectral analysis is performed on a plurality of outputs of the NeRF model based on the plurality of inputs, wherein the plurality of outputs include the respective colors of the voxels. Frequency components of the spectral analysis that represent colors for at least some of the voxels are extracted. A sparse volume data structure that represents the 3-dimensional space and the respective colors for the at least some of the voxels is generated.
Type: Grant
Filed: June 10, 2022
Date of Patent: January 7, 2025
Assignee: Lemon Inc.
Inventors: Celong Liu, Lelin Zhang, Qingyu Chen, Yunzhu Li, Haoze Li, Xing Mei
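The pipeline in the abstract above can be pictured with a toy sketch: a stand-in radiance function is queried on a voxel grid, the outputs are analysed spectrally with an FFT, only the dominant frequency components are retained, and the surviving voxel colors go into a sparse structure. Everything here (the `toy_nerf` function, the grid size, both thresholds) is a hypothetical illustration, not the patented implementation.

```python
import numpy as np

# Toy stand-in for a NeRF: maps (x, y, z) locations to a scalar "color".
# A real NeRF also takes a view direction; it is omitted for brevity.
def toy_nerf(coords):
    x, y, z = coords[..., 0], coords[..., 1], coords[..., 2]
    return np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y) + 0.1 * z

N = 16  # voxels per axis (illustrative resolution)
axis = np.linspace(0.0, 1.0, N, endpoint=False)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)

colors = toy_nerf(grid)                      # query the model on every voxel
spectrum = np.fft.fftn(colors)               # spectral analysis of the outputs

# Extract only the strongest frequency components (here: top 1% by magnitude).
threshold = np.quantile(np.abs(spectrum), 0.99)
sparse_spectrum = np.where(np.abs(spectrum) >= threshold, spectrum, 0)
recon = np.fft.ifftn(sparse_spectrum).real   # colors from retained components

# Sparse volume data structure: store only voxels whose color is significant.
sparse_volume = {
    tuple(idx): recon[tuple(idx)]
    for idx in np.argwhere(np.abs(recon) > 0.05)
}
```

The sparse dictionary keeps far fewer entries than the dense N³ grid while preserving the voxels that carry visible color, which is the point of the frequency-domain pruning.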
-
Publication number: 20240278138
Abstract: Embodiments of the disclosure provide a role information method, a device, a storage medium, and a program product. The method includes: collecting a video picture, performing portrait recognition on the video picture, and obtaining portrait information of a user; acquiring an effect prop corresponding to a target role selected by the user, obtaining a first role picture of the user by loading an effect onto the portrait information according to the effect prop, and transmitting the first role picture to a server; and pulling at least one second role picture transmitted by another user from the server, synthesizing the second role picture and the first role picture, and pushing the synthesized picture to a display interface for display. The embodiments can provide richer information to the user, enhancing the enjoyment of a game and improving the user experience.
Type: Application
Filed: October 28, 2022
Publication date: August 22, 2024
Inventors: Chenyu Sun, Yunzhu Li, Zihan Wang, Bonong Bai, Hui Xu, Tao Xiong, Xuye Cai, Yehua Lyu
-
Publication number: 20240249426
Abstract: A method for dynamic modeling and manipulation of multi-object scenes is described. The method includes using object-centric neural implicit scattering functions (OSFs) as object representations in a model-predictive control (MPC) framework for the multi-object scenes. The method also includes modeling a per-object light transport to enable compositional scene re-rendering under object rearrangement and varying lighting conditions. The method further includes applying inverse parameter estimation and graph neural network (GNN) dynamics models to estimate initial object poses and a light position in the multi-object scene. The method also includes manipulating an object perceived in the multi-object scene according to the applying of the inverse parameter estimation and the GNN dynamics models.
Type: Application
Filed: December 13, 2023
Publication date: July 25, 2024
Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA, THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY
Inventors: Stephen TIAN, Yancheng CAI, Hong-Xing YU, Sergey ZAKHAROV, Katherine LIU, Adrien David GAIDON, Yunzhu LI, Jiajun WU
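The MPC loop described above can be sketched in miniature. The real system uses OSF object representations and a learned GNN dynamics model; the sketch below swaps both for a trivial additive dynamics function and uses random-shooting planning, so every function and constant in it is a hypothetical stand-in rather than the patented method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder for the learned dynamics model: the patent uses a GNN over an
# object graph; here the "object state" just moves by a fraction of the action.
def dynamics_model(state, action):
    return state + 0.1 * action

def mpc_plan(state, goal, horizon=5, n_candidates=256):
    """Random-shooting MPC: sample candidate action sequences, roll each one
    through the dynamics model, and return the FIRST action of the sequence
    with the lowest accumulated distance to the goal."""
    best_cost, best_action = np.inf, None
    for _ in range(n_candidates):
        seq = rng.uniform(-1.0, 1.0, size=(horizon, state.shape[0]))
        s, cost = state.copy(), 0.0
        for a in seq:                       # roll the model over the horizon
            s = dynamics_model(s, a)
            cost += np.linalg.norm(s - goal)
        if cost < best_cost:
            best_cost, best_action = cost, seq[0]
    return best_action

# Drive a 2-D object state toward a goal pose, replanning at every step.
state, goal = np.zeros(2), np.array([0.3, -0.2])
for _ in range(20):
    state = dynamics_model(state, mpc_plan(state, goal))
```

Replanning from the current state at every step is what makes this closed-loop: modeling error at one step is corrected by the next plan.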
-
Patent number: 11978143
Abstract: The present disclosure describes techniques for creating videos using virtual characters. Creation of a video may be initiated by a user. Camera input comprising a human body of the user may be received. The camera input may be split into a first stream for removing the human body and a second stream for animating a virtual character in the video. An inpainting filter may be applied to remove the human body in real time from the camera input. The inpainting filter may be configured to accelerate texture sampling. Output of the inpainting filter may be blended with images comprised in the camera input to generate camera input backgrounds.
Type: Grant
Filed: May 23, 2022
Date of Patent: May 7, 2024
Assignee: LEMON INC.
Inventors: Zeng Dai, Yunzhu Li, Nite Luo
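As a rough illustration of the first stream (body removal) and the blending step, the sketch below substitutes naive diffusion inpainting for the patent's accelerated texture-sampling filter, then blends the inpainted output back with the camera input to produce a background. The frame, the body mask, and the filter are all synthetic assumptions.

```python
import numpy as np

def remove_body(frame, body_mask, iters=50):
    """Naive diffusion inpainting: repeatedly replace masked pixels with the
    average of their 4-neighbours. A toy stand-in for the patent's
    accelerated texture-sampling inpainting filter."""
    out = frame.astype(float).copy()
    out[body_mask] = 0.0
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
               + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[body_mask] = avg[body_mask]     # only masked pixels are updated
    return out

# Synthetic camera frame: a smooth background with a bright "body" region.
h, w = 32, 32
frame = np.tile(np.linspace(0.0, 1.0, w), (h, 1))
mask = np.zeros((h, w), dtype=bool)
mask[8:24, 12:20] = True
frame_with_body = frame.copy()
frame_with_body[mask] = 5.0                      # the person to remove

background = remove_body(frame_with_body, mask)  # stream 1: body removed
alpha = (~mask).astype(float)                    # keep original outside mask
blended = alpha * frame_with_body + (1 - alpha) * background
```

Outside the mask the blended frame is the untouched camera input; inside, it is filled from the surrounding background, which is the "camera input background" the second stream would composite a virtual character over.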
-
Publication number: 20230401815
Abstract: A computer-implemented method for transforming a neural radiance field model is described. A plurality of inputs are provided to a neural radiance field (NeRF) model that represents a 3-dimensional space having a subject, wherein each input of the plurality of inputs includes a location and a view direction and corresponds to respective colors of voxels that represent the 3-dimensional space. A spectral analysis is performed on a plurality of outputs of the NeRF model based on the plurality of inputs, wherein the plurality of outputs include the respective colors of the voxels. Frequency components of the spectral analysis that represent colors for at least some of the voxels are extracted. A sparse volume data structure that represents the 3-dimensional space and the respective colors for the at least some of the voxels is generated.
Type: Application
Filed: June 10, 2022
Publication date: December 14, 2023
Inventors: Celong Liu, Lelin Zhang, Qingyu Chen, Yunzhu Li, Haoze Li, Xing Mei
-
Patent number: 11836437
Abstract: A text display method, a text display apparatus, an electronic device and a storage medium are disclosed. A real scene image and a to-be-displayed text are acquired, motion track data for texts is invoked, the to-be-displayed text is processed with a dynamic special effect, and the text that has been subjected to the dynamic special effect processing is displayed on the real scene image, thus realizing the display of text with dynamic special effects in augmented reality and making the text display more vivid. The display method can be widely used in various application scenarios to bring users a better visual and sensory experience.
Type: Grant
Filed: November 30, 2022
Date of Patent: December 5, 2023
Assignee: LEMON INC.
Inventors: Yunzhu Li, Liyou Xu, Zhili Chen, Yiheng Zhu, Shihkuang Chu
-
Publication number: 20230377236
Abstract: The present disclosure describes techniques for creating videos using virtual characters. Creation of a video may be initiated by a user. Camera input comprising a human body of the user may be received. The camera input may be split into a first stream for removing the human body and a second stream for animating a virtual character in the video. An inpainting filter may be applied to remove the human body in real time from the camera input. The inpainting filter may be configured to accelerate texture sampling. Output of the inpainting filter may be blended with images comprised in the camera input to generate camera input backgrounds.
Type: Application
Filed: May 23, 2022
Publication date: November 23, 2023
Inventors: Zeng Dai, Yunzhu Li, Nite Luo
-
Publication number: 20230306694
Abstract: A list information display method and apparatus, an electronic device, and a storage medium are provided. The method includes: displaying list information in response to an information display operation triggered by a user; obtaining a real scene shooting image; and loading a plurality of types of information related to the list information into the real scene shooting image and displaying the real scene shooting image. Due to the use of augmented reality display technology, while the list information is displayed in the real scene shooting image, the plurality of types of information related to the list information are also displayed. On the one hand, the user can obtain more information related to the list information while obtaining the list information, which improves the efficiency of information acquisition; on the other hand, the user can obtain information in the real scene shooting image, which improves the visual effect and interactive experience.
Type: Application
Filed: August 26, 2021
Publication date: September 28, 2023
Inventors: Jingcong ZHANG, Weikai LI, Zihao CHEN, Guohui WANG, Xiao YANG, Haiying CHENG, Anda LI, Ray MCCLURE, Zhili CHEN, Yiheng ZHU, Shihkuang CHU, Liyou XU, Yunzhu LI, Jianchao YANG
-
Patent number: 11769289
Abstract: Systems and methods for generating a virtual article of clothing at a display are described. Some examples may include obtaining video data and audio data, and analyzing the video data to determine one or more body joints of a target object appearing in the video data. A mesh based on the determined one or more body joints may be generated. The audio data may be analyzed to determine audio characteristics associated with the audio data. Texture rendering information associated with a virtual article of clothing may be determined based on the audio characteristics. A rendered video may be generated by rendering the virtual article of clothing to the generated mesh using the texture rendering information.
Type: Grant
Filed: June 21, 2021
Date of Patent: September 26, 2023
Assignee: Lemon Inc.
Inventors: Yunzhu Li, Haiying Cheng, Chen Sun
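One hedged way to picture the audio-to-texture step above: compute a short-time energy envelope of the audio and map it to per-frame texture rendering parameters. The specific parameters (`brightness`, `scale`) and the mapping are illustrative assumptions, not the patented method.

```python
import numpy as np

def audio_texture_params(samples, frame_len=512):
    """Map short-time audio energy to texture rendering parameters
    (an illustrative mapping): louder frames yield brighter, larger
    texture patterns on the virtual garment."""
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.sqrt((frames ** 2).mean(axis=1))      # RMS per frame
    energy = energy / (energy.max() + 1e-8)           # normalise to [0, 1]
    return [{"brightness": 0.5 + 0.5 * e, "scale": 1.0 + e} for e in energy]

# One second of synthetic 16 kHz audio: quiet first half, loud second half.
t = np.linspace(0, 1, 16000, endpoint=False)
samples = np.sin(2 * np.pi * 440 * t)
samples[:8000] *= 0.1

params = audio_texture_params(samples)   # one parameter dict per audio frame
```

Each entry in `params` would drive the texture applied to the body-joint mesh for the matching video frame, so the garment visibly reacts to the soundtrack.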
-
Patent number: 11756276
Abstract: An image processing method and apparatus for augmented reality, an electronic device and a storage medium, including: acquiring a target image in response to an image acquiring instruction triggered by a user, where the target image includes a target object; acquiring an augmented reality model of the target object, and outputting the augmented reality model in combination with the target object; acquiring target audio data selected by the user, and determining an audio feature with temporal regularity according to the target audio data; and driving the augmented reality model according to the audio feature and a playing progress of the target audio data when outputting the target audio data.
Type: Grant
Filed: November 8, 2022
Date of Patent: September 12, 2023
Assignees: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD., BYTEDANCE INC.
Inventors: Yunzhu Li, Jingcong Zhang, Xuchen Song, Jianchao Yang, Guohui Wang, Zhili Chen, Linjie Luo, Xiao Yang, Haoze Li, Jing Liu
-
Publication number: 20230245398
Abstract: The embodiments of the present disclosure disclose an image effect implementing method and apparatus, an electronic device, a storage medium, a computer program product and a computer program. The method includes: acquiring a first image, recognizing a set object in the first image, and acquiring an augmented reality model corresponding to the set object; superimposing, according to coordinate information of pixels of the set object, the augmented reality model onto the first image to obtain a second image; and upon detection of a preset deformation event, controlling, according to a set deformation policy, at least one sub-model of the augmented reality model in the second image to deform, and displaying the deformed second image.
Type: Application
Filed: June 22, 2021
Publication date: August 3, 2023
Inventors: Jingcong ZHANG, Yunzhu LI, Haoze LI, Zhili CHEN, Linjie LUO, Jing LIU, Xiao YANG, Guohui WANG, Jianchao YANG, Xuchen SONG
-
Publication number: 20230177253
Abstract: A text display method, a text display apparatus, an electronic device and a storage medium are disclosed. A real scene image and a to-be-displayed text are acquired, motion track data for texts is invoked, the to-be-displayed text is processed with a dynamic special effect, and the text that has been subjected to the dynamic special effect processing is displayed on the real scene image, thus realizing the display of text with dynamic special effects in augmented reality and making the text display more vivid. The display method can be widely used in various application scenarios to bring users a better visual and sensory experience.
Type: Application
Filed: November 30, 2022
Publication date: June 8, 2023
Inventors: Yunzhu LI, Liyou XU, Zhili CHEN, Yiheng ZHU, Shihkuang CHU
-
Publication number: 20230061012
Abstract: An image processing method and apparatus for augmented reality, an electronic device and a storage medium, including: acquiring a target image in response to an image acquiring instruction triggered by a user, where the target image includes a target object; acquiring an augmented reality model of the target object, and outputting the augmented reality model in combination with the target object; acquiring target audio data selected by the user, and determining an audio feature with temporal regularity according to the target audio data; and driving the augmented reality model according to the audio feature and a playing progress of the target audio data when outputting the target audio data.
Type: Application
Filed: November 8, 2022
Publication date: March 2, 2023
Inventors: Yunzhu LI, Jingcong ZHANG, Xuchen SONG, Jianchao YANG, Guohui WANG, Zhili CHEN, Linjie LUO, Xiao YANG, Haoze LI, Jing LIU
-
Publication number: 20220406001
Abstract: Systems and methods for generating a virtual article of clothing at a display are described. Some examples may include obtaining video data and audio data, and analyzing the video data to determine one or more body joints of a target object appearing in the video data. A mesh based on the determined one or more body joints may be generated. The audio data may be analyzed to determine audio characteristics associated with the audio data. Texture rendering information associated with a virtual article of clothing may be determined based on the audio characteristics. A rendered video may be generated by rendering the virtual article of clothing to the generated mesh using the texture rendering information.
Type: Application
Filed: June 21, 2021
Publication date: December 22, 2022
Inventors: Yunzhu Li, Haiying Cheng, Chen Sun
-
Publication number: 20220406337
Abstract: Systems and methods for rendering a segmentation contour effect are described. More specifically, video data including one or more video frames and audio data are obtained. Based on the video data, one or more segments in each of the one or more video frames are determined. The audio data is analyzed to determine beat characteristics of each beat. A segmentation contour effect to be applied to the one or more segments in the video data is determined based on the beat characteristics. A rendered video is generated by synchronizing the segmentation contour effect to the audio data.
Type: Application
Filed: June 21, 2021
Publication date: December 22, 2022
Inventors: Yunzhu Li, Zihao Chen, Chen Sun
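A minimal sketch of the beat-analysis step, assuming a crude energy-peak picker rather than a production beat tracker; the frame length and the 1.5x threshold are arbitrary choices made for this illustration.

```python
import numpy as np

def detect_beats(samples, frame_len=512):
    """Crude beat detection by energy-peak picking (an illustrative stand-in
    for a real beat tracker): a frame counts as a beat when its energy exceeds
    1.5x the average energy and the energy of both neighbouring frames."""
    n = len(samples) // frame_len
    energy = (samples[: n * frame_len].reshape(n, frame_len) ** 2).mean(axis=1)
    threshold = 1.5 * energy.mean()
    return [i for i in range(1, n - 1)
            if energy[i] > threshold
            and energy[i] >= energy[i - 1] and energy[i] > energy[i + 1]]

# Two seconds of synthetic 16 kHz audio: silence with a short tone burst
# every 8 frames. (The burst in frame 0 is skipped because peak picking
# ignores the endpoint frames.)
sr, frame_len = 16000, 512
samples = np.zeros(sr * 2)
for k in range(8):
    start = k * 8 * frame_len
    samples[start : start + frame_len] = np.sin(
        2 * np.pi * 200 * np.arange(frame_len) / sr)

beats = detect_beats(samples, frame_len)
# Each beat frame would trigger a restyling of the segmentation contour,
# e.g. changing its colour or thickness on the matching video frame.
```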
-
Publication number: 20220405982
Abstract: Systems and methods for rendering motion-audio visualizations to a display are described. More specifically, video data and audio data are obtained. A position of a target object in each of one or more video frames of the video data is determined. Additionally, a frequency spectrum of the audio data over a predetermined time period is determined. Audio visualizations for the predetermined time period are determined based on the frequency spectrum. A rendered video is generated by applying the audio visualizations at the position of the target object in the one or more video frames for the predetermined time period.
Type: Application
Filed: June 21, 2021
Publication date: December 22, 2022
Inventors: Kexin Lin, Yunzhu Li
-
Patent number: 11521341
Abstract: Systems and methods for rendering a video effect to a display are described. More specifically, video data and audio data are obtained. The video data is analyzed to determine one or more attachment points of a target object that appears in the video data. The audio data is analyzed to determine audio characteristics. A video effect associated with an animation to be added to the one or more attachment points is determined based on the audio characteristics. A rendered video is generated by applying the video effect to the video data.
Type: Grant
Filed: June 21, 2021
Date of Patent: December 6, 2022
Assignee: LEMON INC.
Inventors: Yunzhu Li, Chen Sun, Gamze Inanc
-
Patent number: 11481945
Abstract: Systems and methods for rendering a video effect to a display are described. More specifically, video data and audio data are obtained. The video data is analyzed to determine one or more attachment points of a target object that appears in the video data. The audio data is analyzed to determine audio characteristics. A video effect associated with an animation to be added to the one or more attachment points is determined based on the audio characteristics. A rendered video is generated by applying the video effect to the video data.
Type: Grant
Filed: June 21, 2021
Date of Patent: October 25, 2022
Assignee: LEMON INC.
Inventors: Yunzhu Li, Chen Sun, Gamze Inanc
-
Publication number: 20210315485
Abstract: Systems and methods are provided for estimating 3D poses of a subject based on tactile interactions with the ground. Test subject interactions with the ground are recorded using a sensor system along with reference information (e.g., synchronized video information) for use in correlating tactile information with specific 3D poses, e.g., by training a neural network based on the reference information. Then, tactile information received in response to a given subject interacting with the ground can be used to estimate the 3D pose of the given subject directly, i.e., without reference to corresponding reference information. Certain exemplary embodiments use a sensor system in the form of a pressure sensing carpet or mat, although other types of sensor systems using pressure or other sensors can be used in various alternative embodiments.
Type: Application
Filed: April 9, 2021
Publication date: October 14, 2021
Inventors: Wojciech Matusik, Antonio Torralba, Michael J. Foshey, Wan Shou, Yiyue Luo, Pratyusha Sharma, Yunzhu Li
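To make the train-then-infer flow above concrete, the sketch below fits a linear least-squares regressor from flattened pressure maps to 3-D joint positions on synthetic data, standing in for the neural network trained against synchronized video reference. All shapes, sizes, and data here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data: each "pressure map" from the mat is a flattened
# 8x8 grid; each "pose" is 5 joints in 3-D. A hidden linear map stands in
# for the true pressure-to-pose relationship captured by the reference video.
n_train, grid, joints = 200, 8 * 8, 5
true_map = rng.normal(size=(grid, joints * 3))
pressure = rng.uniform(0, 1, size=(n_train, grid))
poses = pressure @ true_map                      # reference 3-D poses

# "Training": fit the tactile-to-pose regressor by least squares
# (a linear stand-in for the neural network described in the abstract).
W, *_ = np.linalg.lstsq(pressure, poses, rcond=None)

# Inference: estimate the 3-D pose directly from a new pressure reading,
# with no synchronized video reference needed at this stage.
new_pressure = rng.uniform(0, 1, size=(1, grid))
estimated_pose = (new_pressure @ W).reshape(joints, 3)
```

The key property mirrored from the abstract is the split between a supervised training phase (pressure paired with reference poses) and a reference-free inference phase (pressure alone).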