Patents by Inventor Yunzhu Li

Yunzhu Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240158480
    Abstract: Disclosed in the present invention is an anti-Nipah virus monoclonal antibody having neutralization activity. The antibody consists of a monkey-derived variable region and a human constant region, and both the light and heavy chains of the monkey-derived variable region have unique CDR regions. The antibody provided by the present invention has excellent antigen-binding capability and binds well to the glycoprotein G of both the Bangladesh and Malaysia strains of Nipah virus. The antibody can effectively neutralize Nipah pseudovirus, its neutralization activity increases with antibody concentration, and nearly 100% neutralization of the Nipah pseudovirus can be achieved at a concentration of 1 μg/mL. Also disclosed in the present invention is a use of the monoclonal antibody against Nipah virus glycoprotein G in the preparation of a drug for treating Nipah virus.
    Type: Application
    Filed: June 26, 2021
    Publication date: May 16, 2024
    Applicant: ACADEMY OF MILITARY MEDICAL SCIENCE, PLA
    Inventors: Wei Chen, Changming Yu, Yujiao Liu, Pengfei Fan, Guanying Zhang, Yaohui Li, Jianmin Li, Xiangyang Chi, Meng Hao, Ting Fang, Yunzhu Dong, Xiaohong Song, Yi Chen, Shuling Liu
  • Patent number: 11978143
    Abstract: The present disclosure describes techniques for creating videos using virtual characters. Creation of a video may be initiated by a user. Camera input comprising a human body of the user may be received. The camera input may be split into a first stream for removing the human body and a second stream for animating a virtual character in the video. An inpainting filter may be applied to remove the human body in real time from the camera input. The inpainting filter may be configured to accelerate texture sampling. Output of the inpainting filter may be blended with images comprised in the camera input to generate camera input backgrounds.
    Type: Grant
    Filed: May 23, 2022
    Date of Patent: May 7, 2024
    Assignee: LEMON INC.
    Inventors: Zeng Dai, Yunzhu Li, Nite Luo
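
The abstract above (and its companion application 20230377236 below) outlines a two-stream pipeline: one stream removes the user's body via an inpainting filter, the other animates a virtual character, and the results are blended over the camera background. A minimal NumPy sketch of that frame-level flow follows; `inpaint_body`, `render_virtual_character`, and the synthetic mask are hypothetical placeholders for the patented filter and renderer, not the actual implementation.

```python
import numpy as np

def inpaint_body(frame, body_mask, background):
    """Placeholder for the patent's inpainting filter: fill masked
    body pixels from a running background estimate."""
    out = frame.copy()
    out[body_mask] = background[body_mask]
    return out

def render_virtual_character(frame, pose):
    """Placeholder for the character-animation stream; a real system
    would drive a rigged character from the estimated pose."""
    return frame  # no-op in this sketch

def process_frame(frame, body_mask, background, pose):
    # Stream 1: remove the human body and blend with the camera background.
    clean_bg = inpaint_body(frame, body_mask, background)
    # Stream 2: animate the virtual character over the cleaned background.
    return render_virtual_character(clean_bg, pose)

# Tiny synthetic example: 4x4 RGB frame with a 2x2 "body" region.
frame = np.random.randint(0, 255, (4, 4, 3), dtype=np.uint8)
background = np.zeros_like(frame)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
result = process_frame(frame, mask, background, pose=None)
print(result.shape)
```
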
  • Publication number: 20230401815
    Abstract: A computer-implemented method for transforming a neural radiance field model is described. A plurality of inputs are provided to a neural radiance field (NeRF) model that represents a 3-dimensional space having a subject, wherein each input of the plurality of inputs includes a location and a view direction and corresponds to respective colors of voxels that represent the 3-dimensional space. A spectral analysis is performed on a plurality of outputs of the NeRF model based on the plurality of inputs, wherein the plurality of outputs include the respective colors of the voxels. Frequency components of the spectral analysis that represent colors for at least some of the voxels are extracted. A sparse volume data structure that represents the 3-dimensional space and the respective colors for the at least some of the voxels is generated.
    Type: Application
    Filed: June 10, 2022
    Publication date: December 14, 2023
    Inventors: Celong Liu, Lelin Zhang, Qingyu Chen, Yunzhu Li, Haoze Li, Xing Mei
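
The abstract above describes querying a NeRF over a grid of locations and view directions, running a spectral analysis on the sampled colors, and packing the dominant components into a sparse volume. The sketch below illustrates that idea with NumPy only; `dummy_nerf`, the FFT-based spectral step, and the dict-based sparse structure are illustrative assumptions, not the patent's actual model or data structure.

```python
import numpy as np

def dummy_nerf(xyz, view_dir):
    """Stand-in for a trained NeRF: returns an RGB color per query point.
    (A real model would be a neural network; this is just a smooth field.)"""
    r = 0.5 + 0.5 * np.sin(xyz[..., 0])
    g = 0.5 + 0.5 * np.cos(xyz[..., 1])
    b = 0.5 + 0.5 * np.sin(xyz[..., 2] + view_dir[..., 2])
    return np.stack([r, g, b], axis=-1)

# 1. Query the model on a regular grid of locations with a fixed view direction.
n = 16
grid = np.stack(np.meshgrid(*[np.linspace(0, np.pi, n)] * 3, indexing="ij"), axis=-1)
view = np.broadcast_to(np.array([0.0, 0.0, 1.0]), grid.shape)
colors = dummy_nerf(grid, view)                      # (n, n, n, 3)

# 2. Spectral analysis of the sampled color field (3-D FFT per channel).
spectrum = np.fft.fftn(colors, axes=(0, 1, 2))

# 3. Keep only dominant frequency components (simple magnitude threshold).
keep = np.abs(spectrum) >= 0.05 * np.abs(spectrum).max()
compact = np.where(keep, spectrum, 0)
recon = np.fft.ifftn(compact, axes=(0, 1, 2)).real

# 4. Store non-trivial voxels in a sparse structure (dict keyed by index).
sparse_volume = {
    tuple(idx): recon[tuple(idx)]
    for idx in np.argwhere(np.linalg.norm(recon, axis=-1) > 0.1)
}
print(len(sparse_volume), "occupied voxels")
```
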
  • Patent number: 11836437
    Abstract: A text display method, a text display apparatus, an electronic device and a storage medium are disclosed. A real scene image and a to-be-displayed text are acquired, motion track data for text is invoked, the to-be-displayed text is processed with a dynamic special effect, and the processed text is displayed on the real scene image. Text with a dynamic special effect can thus be shown in an augmented reality display, making the text display more vivid. The method can be used in a wide range of application scenarios to give users a better visual experience.
    Type: Grant
    Filed: November 30, 2022
    Date of Patent: December 5, 2023
    Assignee: LEMON INC.
    Inventors: Yunzhu Li, Liyou Xu, Zhili Chen, Yiheng Zhu, Shihkuang Chu
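
As a rough illustration of the "motion track data" idea in the abstract above, the sketch below interpolates a hypothetical keyframe track and derives per-character draw parameters for an animated text overlay; the track format, timing offsets, and function names are all assumptions, and the actual compositing onto the camera image is omitted.

```python
import numpy as np

# Hypothetical "motion track data": keyframes of (time, x, y, opacity).
track = np.array([
    [0.0,  20.0, 200.0, 0.0],
    [0.5, 120.0, 150.0, 1.0],
    [1.0, 220.0, 100.0, 1.0],
])

def sample_track(t):
    """Linearly interpolate the motion track at time t (seconds)."""
    x = np.interp(t, track[:, 0], track[:, 1])
    y = np.interp(t, track[:, 0], track[:, 2])
    alpha = np.interp(t, track[:, 0], track[:, 3])
    return x, y, alpha

def animate_text(text, fps=30, duration=1.0):
    """Yield per-frame draw parameters for each character of the text,
    offsetting characters so they trail along the track."""
    for frame in range(int(fps * duration) + 1):
        t = frame / fps
        params = []
        for i, ch in enumerate(text):
            x, y, alpha = sample_track(max(t - 0.05 * i, 0.0))
            params.append((ch, x, y, alpha))
        yield frame, params

# A real implementation would draw each character onto the live camera
# (real scene) image; here we just print the first frame's parameters.
print(next(iter(animate_text("AR!"))))
```
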
  • Publication number: 20230377236
    Abstract: The present disclosure describes techniques for creating videos using virtual characters. Creation of a video may be initiated by a user. Camera input comprising a human body of the user may be received. The camera input may be split into a first stream for removing the human body and a second stream for animating a virtual character in the video. An inpainting filter may be applied to remove the human body in real time from the camera input. The inpainting filter may be configured to accelerate texture sampling. Output of the inpainting filter may be blended with images comprised in the camera input to generate camera input backgrounds.
    Type: Application
    Filed: May 23, 2022
    Publication date: November 23, 2023
    Inventors: Zeng Dai, Yunzhu Li, Nite Luo
  • Publication number: 20230306694
    Abstract: A list information display method and apparatus, an electronic device, and a storage medium are provided. The method includes: displaying list information in response to an information display operation triggered by a user; obtaining a real scene shooting image; and loading a plurality of types of information related to the list information into the real scene shooting image and displaying the real scene shooting image. Because augmented reality display technology is used, the plurality of types of information related to the list information are displayed together with the list information in the real scene shooting image. On one hand, the user obtains more information related to the list information while obtaining the list information itself, which improves the efficiency of information acquisition; on the other hand, the user obtains the information within the real scene shooting image, which improves the visual effect and interactive experience.
    Type: Application
    Filed: August 26, 2021
    Publication date: September 28, 2023
    Inventors: Jingcong ZHANG, Weikai LI, Zihao CHEN, Guohui WANG, Xiao YANG, Haiying CHENG, Anda LI, Ray MCCLURE, Zhili CHEN, Yiheng ZHU, Shihkuang CHU, Liyou XU, Yunzhu LI, Jianchao YANG
  • Patent number: 11769289
    Abstract: Systems and methods for generating a virtual article of clothing at a display are described. Some examples may include obtaining video data and audio data, and analyzing the video data to determine one or more body joints of a target object appearing in the video data. A mesh based on the determined one or more body joints may be generated. The audio data may be analyzed to determine audio characteristics associated with the audio data. Texture rendering information associated with a virtual article of clothing may be determined based on the audio characteristics. A rendered video may be generated by rendering the virtual article of clothing to the generated mesh using the texture rendering information.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: September 26, 2023
    Assignee: Lemon Inc.
    Inventors: Yunzhu Li, Haiying Cheng, Chen Sun
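
The abstract above (and the companion publication 20220406001 below) couples a body-joint-derived mesh with audio-driven texture parameters. The NumPy sketch below mimics that coupling at a toy scale; the triangle-fan mesh, the RMS-energy feature, and the brightness mapping are placeholder choices, not the patented method.

```python
import numpy as np

def joints_to_mesh(joints):
    """Build a trivial triangle fan over detected 2-D body joints.
    (A real system would fit a garment mesh to the skeleton.)"""
    center = joints.mean(axis=0)
    verts = np.vstack([center, joints])
    tris = [(0, i, i % len(joints) + 1) for i in range(1, len(joints) + 1)]
    return verts, np.array(tris)

def audio_characteristics(samples, rate=44100, hop=1024):
    """Per-hop RMS energy: the audio characteristic driving the texture."""
    frames = samples[: len(samples) // hop * hop].reshape(-1, hop)
    return np.sqrt((frames ** 2).mean(axis=1))

def texture_params(energy):
    """Map audio energy to texture rendering information (here: brightness)."""
    e = energy / (energy.max() + 1e-8)
    return {"brightness": e, "scroll_speed": 0.5 + e}

# Synthetic inputs: 5 joints and one second of a 440 Hz tone.
joints = np.array([[0, 0], [1, 0], [1, 2], [0, 2], [0.5, 3]], dtype=float)
audio = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)

verts, tris = joints_to_mesh(joints)
params = texture_params(audio_characteristics(audio))
print(verts.shape, tris.shape, params["brightness"][:3])
```
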
  • Patent number: 11756276
    Abstract: An image processing method and apparatus for augmented reality, an electronic device and a storage medium are provided, the method including: acquiring a target image in response to an image acquiring instruction triggered by a user, where the target image includes a target object; acquiring an augmented reality model of the target object, and outputting the augmented reality model in combination with the target object; acquiring target audio data selected by the user, and determining an audio feature with temporal regularity according to the target audio data; and driving the augmented reality model according to the audio feature and a playing progress of the target audio data when outputting the target audio data.
    Type: Grant
    Filed: November 8, 2022
    Date of Patent: September 12, 2023
    Assignees: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD., BYTEDANCE INC.
    Inventors: Yunzhu Li, Jingcong Zhang, Xuchen Song, Jianchao Yang, Guohui Wang, Zhili Chen, Linjie Luo, Xiao Yang, Haoze Li, Jing Liu
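
The abstract above (and its companion publication 20230061012 below) drives an augmented reality model from an audio feature with temporal regularity and the playback progress of the audio. A minimal sketch of that control loop follows; the energy-envelope feature and the scale mapping are assumptions standing in for whatever feature and drive policy the patent actually uses.

```python
import numpy as np

def temporal_audio_feature(samples, rate=44100, hop=2048):
    """Energy envelope sampled on a regular time grid, i.e. an audio
    feature with temporal regularity in the sense of the abstract."""
    frames = samples[: len(samples) // hop * hop].reshape(-1, hop)
    energy = np.sqrt((frames ** 2).mean(axis=1))
    times = np.arange(len(energy)) * hop / rate
    return times, energy / (energy.max() + 1e-8)

def drive_model(times, feature, playback_time):
    """Map playback progress to a model parameter (uniform scale here)."""
    value = np.interp(playback_time, times, feature)
    return 1.0 + 0.3 * value   # model pulses between 1.0x and 1.3x

# Synthetic audio: amplitude-modulated tone standing in for user-selected music.
rate = 44100
t = np.arange(rate * 2) / rate
audio = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 2 * t))

times, feat = temporal_audio_feature(audio, rate)
for progress in (0.0, 0.5, 1.0, 1.5):
    print(progress, round(drive_model(times, feat, progress), 3))
```
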
  • Publication number: 20230245398
    Abstract: The embodiments of the present disclosure disclose an image effect implementing method and apparatus, an electronic device, a storage medium, a computer program product and a computer program. The method includes: acquiring a first image, recognizing a set object in the first image, and acquiring an augmented reality model corresponding to the set object; superimposing, according to coordinate information of pixels of the set object, the augmented reality model onto the first image to obtain a second image; and upon detection of a preset deformation event, controlling, according to a set deformation policy, at least one sub-model of the augmented reality model in the second image to deform, and displaying the deformed second image.
    Type: Application
    Filed: June 22, 2021
    Publication date: August 3, 2023
    Inventors: Jingcong ZHANG, Yunzhu LI, Haoze LI, Zhili CHEN, Linjie LUO, Jing LIU, Xiao YANG, Guohui WANG, Jianchao YANG, Xuchen SONG
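
To make the superimpose-then-deform flow in the abstract above concrete, the sketch below anchors a two-part model at a recognized object's pixel centroid and squashes one sub-model when a deformation event fires; the centroid anchor, the squash policy, and all names are hypothetical stand-ins for the patent's recognition and deformation steps.

```python
import numpy as np

def object_anchor(mask):
    """Coordinate information of the recognized object: its pixel centroid."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def place_model(sub_models, anchor):
    """Superimpose the AR model by translating every sub-model to the anchor."""
    return {name: verts + anchor for name, verts in sub_models.items()}

def deform(sub_models, name, squash=0.5):
    """Set deformation policy for one sub-model: squash it vertically
    about its own centroid when a deformation event is detected."""
    verts = sub_models[name]
    center_y = verts[:, 1].mean()
    out = dict(sub_models)
    out[name] = np.column_stack([verts[:, 0],
                                 center_y + squash * (verts[:, 1] - center_y)])
    return out

# Synthetic recognized object and a two-part AR model (vertices in pixels).
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 30:70] = True
model = {"body": np.array([[-5.0, -5.0], [5.0, -5.0], [0.0, 5.0]]),
         "hat":  np.array([[-3.0, 6.0], [3.0, 6.0], [0.0, 12.0]])}

placed = place_model(model, object_anchor(mask))
deformed = deform(placed, "hat")          # e.g. triggered by a user tap
print(deformed["hat"])
```
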
  • Publication number: 20230177253
    Abstract: A text display method, a text display apparatus, an electronic device and a storage medium are disclosed. A real scene image and a to-be-displayed text are acquired, motion track data for text is invoked, the to-be-displayed text is processed with a dynamic special effect, and the processed text is displayed on the real scene image. Text with a dynamic special effect can thus be shown in an augmented reality display, making the text display more vivid. The method can be used in a wide range of application scenarios to give users a better visual experience.
    Type: Application
    Filed: November 30, 2022
    Publication date: June 8, 2023
    Inventors: Yunzhu LI, Liyou XU, Zhili CHEN, Yiheng ZHU, Shihkuang CHU
  • Publication number: 20230061012
    Abstract: An image processing method and apparatus for augmented reality, an electronic device and a storage medium are provided, the method including: acquiring a target image in response to an image acquiring instruction triggered by a user, where the target image includes a target object; acquiring an augmented reality model of the target object, and outputting the augmented reality model in combination with the target object; acquiring target audio data selected by the user, and determining an audio feature with temporal regularity according to the target audio data; and driving the augmented reality model according to the audio feature and a playing progress of the target audio data when outputting the target audio data.
    Type: Application
    Filed: November 8, 2022
    Publication date: March 2, 2023
    Inventors: Yunzhu LI, Jingcong ZHANG, Xuchen SONG, Jianchao YANG, Guohui WANG, Zhili CHEN, Linjie LUO, Xiao YANG, Haoze LI, Jing LIU
  • Publication number: 20220406337
    Abstract: Systems and methods for rendering a segmentation contour effect are described. More specifically, video data including one or more video frames and audio data are obtained. Based on the video data, one or more segments in each of the one or more video frames are determined. The audio data is analyzed to determine beat characteristics of each beat. A segmentation contour effect to be applied to the one or more segments in the video data is determined based on the beat characteristics. A rendered video is generated by synchronizing the segmentation contour effect to the audio data.
    Type: Application
    Filed: June 21, 2021
    Publication date: December 22, 2022
    Inventors: Yunzhu Li, Zihao Chen, Chen Sun
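
The abstract above ties a segmentation contour effect to beat characteristics of the audio. The sketch below extracts a mask contour with NumPy and scales its glow with a crude energy envelope sampled at each video frame; the envelope stands in for real beat detection and the synthetic mask for a real segmentation model.

```python
import numpy as np

def mask_contour(mask):
    """Contour of a binary segment mask: mask pixels whose 4-neighbourhood
    is not fully inside the mask."""
    p = np.pad(mask, 1)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    return mask & ~interior

def beat_strength(samples, rate, frame_times, hop=2048):
    """Very rough beat characteristic: normalised energy envelope sampled
    at each video frame's timestamp."""
    frames = samples[: len(samples) // hop * hop].reshape(-1, hop)
    energy = np.sqrt((frames ** 2).mean(axis=1))
    env_times = np.arange(len(energy)) * hop / rate
    return np.interp(frame_times, env_times, energy / (energy.max() + 1e-8))

def render_contour_effect(mask, strength):
    """Glow intensity of the contour scales with the beat strength."""
    return mask_contour(mask).astype(float) * (0.3 + 0.7 * strength)

# Synthetic segment mask plus two seconds of pulsing audio, rendered at 30 fps.
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 20:44] = True
rate = 44100
t = np.arange(rate * 2) / rate
audio = np.sin(2 * np.pi * 330 * t) * (np.sin(2 * np.pi * 2 * t) > 0)

frame_times = np.arange(0, 2, 1 / 30)
strengths = beat_strength(audio, rate, frame_times)
frames = [render_contour_effect(mask, s) for s in strengths]
print(len(frames), frames[0].max())
```
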
  • Publication number: 20220405982
    Abstract: Systems and methods for rendering motion-audio visualizations to a display are described. More specifically, video data and audio data are obtained. A position of a target object in each of one or more video frames of the video data is determined. Additionally, a frequency spectrum of the audio data for a predetermined time period is determined. Audio visualizations for the predetermined time period are determined based on the frequency spectrum. A rendered video is generated by applying the audio visualizations at the position of the target object in the one or more video frames for the predetermined time period.
    Type: Application
    Filed: June 21, 2021
    Publication date: December 22, 2022
    Inventors: Kexin Lin, Yunzhu Li
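
The abstract above anchors frequency-spectrum visualizations at a tracked object's position in each frame. The sketch below computes per-frame FFT bar heights and pairs them with a mocked object position; the tracker, bar count, and windowing are illustrative assumptions, not the patented pipeline.

```python
import numpy as np

def object_position(frame_idx):
    """Stand-in for per-frame target-object tracking (e.g. a face box centre)."""
    return np.array([320 + 40 * np.sin(frame_idx / 10), 240.0])

def spectrum_bars(samples, rate, t, window=2048, n_bars=8):
    """Audio visualisation: n_bars magnitudes from the FFT of the audio
    window centred on video time t."""
    start = max(int(t * rate) - window // 2, 0)
    chunk = samples[start:start + window]
    mags = np.abs(np.fft.rfft(chunk, n=window))[1:]
    bars = mags[: len(mags) // n_bars * n_bars].reshape(n_bars, -1).mean(axis=1)
    return bars / (bars.max() + 1e-8)

# Synthetic audio track; visualisation parameters per video frame at 30 fps.
rate = 44100
t = np.arange(rate * 2) / rate
audio = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1760 * t)

for frame_idx in range(3):
    ts = frame_idx / 30
    pos = object_position(frame_idx)
    bars = spectrum_bars(audio, rate, ts)
    # A real renderer would draw the bars anchored at `pos` in the frame.
    print(frame_idx, pos.round(1), bars.round(2))
```
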
  • Publication number: 20220406001
    Abstract: Systems and methods for generating a virtual article of clothing at a display are described. Some examples may include obtaining video data and audio data, and analyzing the video data to determine one or more body joints of a target object appearing in the video data. A mesh based on the determined one or more body joints may be generated. The audio data may be analyzed to determine audio characteristics associated with the audio data. Texture rendering information associated with a virtual article of clothing may be determined based on the audio characteristics. A rendered video may be generated by rendering the virtual article of clothing to the generated mesh using the texture rendering information.
    Type: Application
    Filed: June 21, 2021
    Publication date: December 22, 2022
    Inventors: Yunzhu Li, Haiying Cheng, Chen Sun
  • Patent number: 11521341
    Abstract: Systems and methods for rendering a video effect to a display are described. More specifically, video data and audio data are obtained. The video data is analyzed to determine one or more attachment points of a target object that appears in the video data. The audio data is analyzed to determine audio characteristics. A video effect associated with an animation to be added to the one or more attachment points is determined based on the audio characteristics. A rendered video is generated by applying the video effect to the video data.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: December 6, 2022
    Assignee: LEMON INC.
    Inventors: Yunzhu Li, Chen Sun, Gamze Inanc
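
The abstract above (shared with patent 11481945 below) adds an audio-driven animation at attachment points of a target object. The sketch below flags crude energy-jump onsets in the audio and reports where a burst effect would be spawned; the onset rule and the mocked attachment points are assumptions, not the patented analysis.

```python
import numpy as np

def attachment_points(frame_idx):
    """Stand-in for body tracking: two attachment points (e.g. the wrists)."""
    swing = 30 * np.sin(frame_idx / 5)
    return np.array([[200 + swing, 300.0], [440 - swing, 300.0]])

def onset_flags(samples, rate, fps=30, hop=2048, jump=1.5):
    """Audio characteristic: video frames where short-time energy jumps sharply."""
    frames = samples[: len(samples) // hop * hop].reshape(-1, hop)
    energy = np.sqrt((frames ** 2).mean(axis=1)) + 1e-8
    onsets = np.zeros(len(energy), dtype=bool)
    onsets[1:] = energy[1:] / energy[:-1] > jump
    env_times = np.arange(len(energy)) * hop / rate
    frame_times = np.arange(0, len(samples) / rate, 1 / fps)
    idx = np.searchsorted(env_times, frame_times, side="right") - 1
    return frame_times, onsets[np.clip(idx, 0, len(onsets) - 1)]

rate = 44100
t = np.arange(rate * 2) / rate
audio = np.sin(2 * np.pi * 440 * t) * (np.sin(2 * np.pi * 2 * t) > 0)  # pulsing

frame_times, flags = onset_flags(audio, rate)
for i in np.nonzero(flags)[0][:3]:
    # A real renderer would spawn the burst animation at these points.
    print(f"burst at t={frame_times[i]:.2f}s, points={attachment_points(i).round(1).tolist()}")
```
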
  • Patent number: 11481945
    Abstract: Systems and methods for rendering a video effect to a display are described. More specifically, video data and audio data are obtained. The video data is analyzed to determine one or more attachment points of a target object that appears in the video data. The audio data is analyzed to determine audio characteristics. A video effect associated with an animation to be added to the one or more attachment points is determined based on the audio characteristics. A rendered video is generated by applying the video effect to the video data.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: October 25, 2022
    Assignee: LEMON INC.
    Inventors: Yunzhu Li, Chen Sun, Gamze Inanc
  • Publication number: 20210315485
    Abstract: Systems and methods are provided for estimating 3D poses of a subject based on tactile interactions with the ground. Test subject interactions with the ground are recorded using a sensor system along with reference information (e.g., synchronized video information) for use in correlating tactile information with specific 3D poses, e.g., by training a neural network based on the reference information. Then, tactile information received in response to a given subject interacting with the ground can be used to estimate the 3D pose of the given subject directly, i.e., without reference to corresponding reference information. Certain exemplary embodiments use a sensor system in the form of a pressure sensing carpet or mat, although other types of sensor systems using pressure or other sensors can be used in various alternative embodiments.
    Type: Application
    Filed: April 9, 2021
    Publication date: October 14, 2021
    Inventors: Wojciech Matusik, Antonio Torralba, Michael J. Foshey, Wan Shou, Yiyue Luo, Pratyusha Sharma, Yunzhu Li
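
The abstract above estimates 3D poses from a pressure-sensing carpet or mat using a model trained against synchronized video. The sketch below shows only the shape of the inference step, with random weights standing in for the trained network and a synthetic tactile frame in place of real sensor data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pressure-sensing mat reading: a 2-D grid of pressure values (e.g. 32x32 taxels).
MAT_H, MAT_W, N_JOINTS = 32, 32, 17

# Stand-in for a trained regressor (the patent trains it against
# synchronized video-derived reference poses); random weights here.
W1 = rng.normal(scale=0.05, size=(MAT_H * MAT_W, 128))
W2 = rng.normal(scale=0.05, size=(128, N_JOINTS * 3))

def estimate_pose(pressure_map):
    """Map one tactile frame to 3-D joint positions (metres, mat frame)."""
    x = pressure_map.reshape(-1)
    h = np.maximum(W1.T @ x, 0.0)          # hidden layer with ReLU
    return (W2.T @ h).reshape(N_JOINTS, 3)

# Synthetic tactile frame: two footprint-like pressure blobs.
mat = np.zeros((MAT_H, MAT_W))
mat[10:14, 8:11] = 1.0
mat[10:14, 21:24] = 1.0

pose = estimate_pose(mat)
print(pose.shape)        # (17, 3) joint estimates per tactile frame
```
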