Patents by Inventor Jimei Yang

Jimei Yang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12260530
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portrays a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
    Type: Grant
    Filed: March 27, 2023
    Date of Patent: March 25, 2025
    Assignee: Adobe Inc.
    Inventors: Krishna Kumar Singh, Yijun Li, Jingwan Lu, Duygu Ceylan Aksit, Yangtuanfeng Wang, Jimei Yang, Tobias Hinz, Qing Liu, Jianming Zhang, Zhe Lin
  • Publication number: 20250088650
    Abstract: In one aspect, a processor determines a first set of video frames of a video based on a target video frame. The first set includes the target video frame, one or more frames of the video preceding it, and one or more frames subsequent to it, forming a contiguous sequence of frames of the video. An encoder neural network executing on the processor encodes each video frame in the first set to generate a respective feature vector. A decoder neural network executing on the processor decodes the feature vectors to generate a mask for the target video frame.
    Type: Application
    Filed: September 12, 2023
    Publication date: March 13, 2025
    Applicant: Adobe Inc.
    Inventors: Nikhil Kalra, Seoung Wug Oh, Nico Alexander Becherer, Joon-Young Lee, Jimei Yang
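The windowed encode-then-decode flow in the abstract above can be sketched as a toy (the windowing helper and the stand-in "encoder"/"decoder" below are illustrative, not the patent's neural networks):

```python
import numpy as np

def frame_window(num_frames, target, radius):
    """Pick the target frame plus up to `radius` frames before and after it."""
    start = max(0, target - radius)
    end = min(num_frames, target + radius + 1)
    return list(range(start, end))

def encode(frames):
    """Stand-in encoder: one feature vector (mean intensity per row) per frame."""
    return [f.mean(axis=1) for f in frames]

def decode(features):
    """Stand-in decoder: pool features over time, threshold into a binary mask."""
    pooled = np.stack(features).mean(axis=0)
    return (pooled > pooled.mean()).astype(np.uint8)

video = [np.full((4, 4), float(i)) for i in range(10)]   # 10 dummy frames
window = frame_window(len(video), target=5, radius=2)    # frames 3..7
mask = decode(encode([video[i] for i in window]))        # mask for frame 5
```

The clamping in `frame_window` matters near the start and end of the video, where fewer preceding or subsequent frames exist.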
  • Publication number: 20240428491
    Abstract: The present disclosure relates to a system that utilizes neural networks to generate looping animations from still images. The system fits a 3D model to a pose of a person in a digital image. The system receives a 3D animation sequence that transitions between a starting pose and an ending pose. The system generates, utilizing an animation transition neural network, first and second 3D animation transition sequences that respectively transition between the pose of the person and the starting pose and between the ending pose and the pose of the person. The system modifies each of the 3D animation sequence, the first 3D animation transition sequence, and the second 3D animation transition sequence by applying a texture map. The system generates a looping 3D animation by combining the modified 3D animation sequence, the modified first 3D animation transition sequence, and the modified second 3D animation transition sequence.
    Type: Application
    Filed: June 23, 2023
    Publication date: December 26, 2024
    Inventors: Jae Shin Yoon, Duygu Ceylan Aksit, Yangtuanfeng Wang, Jingwan Lu, Jimei Yang, Zhixin Shu, Chengan He, Yi Zhou, Jun Saito, James Zachary
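The loop construction the abstract describes (animation plus two generated transition sequences back to the original pose) can be illustrated with scalar "poses" and a linear-interpolation stand-in for the transition network:

```python
def lerp_transition(a, b, steps=3):
    """Stand-in for the animation transition neural network: linear interpolation."""
    return [a + (b - a) * t / (steps - 1) for t in range(steps)]

def make_loop(person_pose, start_pose, end_pose, animation, transition):
    """person's pose -> animation start ... animation end -> person's pose."""
    intro = transition(person_pose, start_pose)   # first transition sequence
    outro = transition(end_pose, person_pose)     # second transition sequence
    return intro + animation + outro

loop = make_loop(0.0, 1.0, 2.0, animation=[1.0, 1.5, 2.0],
                 transition=lerp_transition)
```

Because the sequence starts and ends at the person's original pose, it can repeat seamlessly.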
  • Publication number: 20240378809
    Abstract: Decal application techniques as implemented by a computing device are described to perform decaling of a digital image. In one example, features of a digital image learned using machine learning are used by a computing device as a basis to predict the surface geometry of an object in the digital image. Once the surface geometry of the object is predicted, machine learning techniques are then used by the computing device to configure an overlay object to be applied onto the digital image in conformance with the predicted surface geometry of the object in the digital image.
    Type: Application
    Filed: May 12, 2023
    Publication date: November 14, 2024
    Applicant: Adobe Inc.
    Inventors: Yangtuanfeng Wang, Yi Zhou, Yasamin Jafarian, Nathan Aaron Carr, Jimei Yang, Duygu Ceylan Aksit
  • Publication number: 20240281978
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating segmentation masks for a digital visual media item. In particular, in one or more embodiments, the disclosed systems generate, utilizing a neural network encoder, high-level features of a digital visual media item. Further, the disclosed systems generate, utilizing the neural network encoder, low-level features of the digital visual media item. In some implementations, the disclosed systems generate, utilizing a neural network decoder, an initial segmentation mask of the digital visual media item from the low-level features. Moreover, the disclosed systems generate, utilizing the neural network decoder, a refined segmentation mask of the digital visual media item from the initial segmentation mask and the high-level features.
    Type: Application
    Filed: February 16, 2023
    Publication date: August 22, 2024
    Inventors: Jingyuan Liu, Qing Liu, Jimei Yang, Yuhong Wu, Su Chen
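The coarse-to-fine flow in the abstract above (initial mask from low-level features, refined mask from the initial mask plus high-level features) can be sketched with toy thresholding in place of the neural encoder and decoder; the feature arrays are invented for illustration:

```python
import numpy as np

def initial_mask(low_level):
    """Coarse mask from low-level features (toy: threshold against the mean)."""
    return (low_level > low_level.mean()).astype(float)

def refine_mask(mask, high_level):
    """Refine the coarse mask by mixing in high-level (semantic) evidence."""
    return ((mask + high_level) / 2.0 > 0.5).astype(float)

low = np.array([[0.1, 0.9], [0.2, 0.8]])    # e.g. edge/texture responses
high = np.array([[0.0, 1.0], [0.0, 1.0]])   # e.g. semantic confidence
coarse = initial_mask(low)
refined = refine_mask(coarse, high)
```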
  • Patent number: 12067680
    Abstract: Systems and methods for mesh generation are described. One aspect of the systems and methods includes receiving an image depicting a visible portion of a body; generating an intermediate mesh representing the body based on the image; generating visibility features indicating whether parts of the body are visible based on the image; generating parameters for a morphable model of the body based on the intermediate mesh and the visibility features; and generating an output mesh representing the body based on the parameters for the morphable model, wherein the output mesh includes a non-visible portion of the body that is not depicted by the image.
    Type: Grant
    Filed: August 2, 2022
    Date of Patent: August 20, 2024
    Assignee: Adobe Inc.
    Inventors: Jimei Yang, Chun-han Yao, Duygu Ceylan Aksit, Yi Zhou
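A minimal sketch of the role visibility features play above: visible parts anchor the morphable-model parameters to the intermediate mesh, while occluded parts fall back to the model's prior so the output mesh still covers the non-visible portion. The depth threshold and "parameters" here are toy stand-ins, not the patent's formulation:

```python
import numpy as np

def visibility_features(joint_depths, far_plane=10.0):
    """1.0 where a body part is closer than the far plane (i.e. visible)."""
    return (np.asarray(joint_depths, dtype=float) < far_plane).astype(float)

def fit_morphable_params(intermediate_mesh, visibility, prior=0.0):
    """Visible vertices pull parameters toward the intermediate mesh;
    occluded vertices fall back to the morphable model's prior."""
    mesh = np.asarray(intermediate_mesh, dtype=float)
    return visibility * mesh + (1.0 - visibility) * prior

vis = visibility_features([1.0, 50.0])          # second joint is occluded
params = fit_morphable_params([2.0, 4.0], vis)  # -> [2.0, 0.0]
```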
  • Patent number: 12033261
    Abstract: One example method involves a processing device that performs operations that include receiving a request to retarget a source motion into a target object. Operations further include providing the target object to a contact-aware motion retargeting neural network trained to retarget the source motion into the target object. The contact-aware motion retargeting neural network is trained by accessing training data that includes a source object performing the source motion. The contact-aware motion retargeting neural network generates retargeted motion for the target object, based on a self-contact having a pair of input vertices. The retargeted motion is subject to motion constraints that: (i) preserve a relative location of the self-contact and (ii) prevent self-penetration of the target object.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: July 9, 2024
    Assignee: Adobe Inc.
    Inventors: Ruben Villegas, Jun Saito, Jimei Yang, Duygu Ceylan Aksit, Aaron Hertzmann
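The two motion constraints named in the abstract (preserve the self-contact, prevent self-penetration) can be expressed as simple checks on a retargeted pose; the vertex coordinates and signed distances below are invented for illustration:

```python
import numpy as np

def self_contact_preserved(tgt_a, tgt_b, contact_tol=1e-3):
    """A self-contact vertex pair must stay (nearly) touching after retargeting."""
    gap = np.linalg.norm(np.asarray(tgt_a, dtype=float)
                         - np.asarray(tgt_b, dtype=float))
    return gap < contact_tol

def no_self_penetration(signed_distances):
    """No vertex may end up inside the body: all signed distances >= 0."""
    return all(d >= 0.0 for d in signed_distances)

# A hand-on-hip contact that survives retargeting, with no penetration:
ok = (self_contact_preserved([0.2, 1.0, 0.0], [0.2, 1.0, 0.0005])
      and no_self_penetration([0.0, 0.01, 0.3]))
```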
  • Publication number: 20240169553
    Abstract: Techniques for modeling secondary motion based on three-dimensional models are described as implemented by a secondary motion modeling system, which is configured to receive a plurality of three-dimensional object models representing an object. Based on the three-dimensional object models, the secondary motion modeling system determines three-dimensional motion descriptors of a particular three-dimensional object model using one or more machine learning models. Based on the three-dimensional motion descriptors, the secondary motion modeling system models at least one feature subjected to secondary motion using the one or more machine learning models. The particular three-dimensional object model having the at least one feature is rendered by the secondary motion modeling system.
    Type: Application
    Filed: November 21, 2022
    Publication date: May 23, 2024
    Applicant: Adobe Inc.
    Inventors: Jae Shin Yoon, Zhixin Shu, Yangtuanfeng Wang, Jingwan Lu, Jimei Yang, Duygu Ceylan Aksit
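The abstract does not spell out the dynamics, but a classic way to illustrate secondary motion (hair or cloth lagging the body's primary motion) is a damped spring follower; this stand-in is purely illustrative and is not the patent's machine learning model:

```python
def secondary_motion(primary, stiffness=0.3, damping=0.7):
    """A secondary feature lags the primary trajectory, overshoots, then settles."""
    pos, vel, out = primary[0], 0.0, []
    for target in primary:
        vel = damping * vel + stiffness * (target - pos)
        pos += vel
        out.append(pos)
    return out

trace = secondary_motion([0.0, 1.0, 1.0, 1.0, 1.0])  # primary motion steps 0 -> 1
```

The lag and overshoot in `trace` are exactly the visual cues a learned secondary-motion model aims to reproduce.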
  • Publication number: 20240161335
    Abstract: Embodiments are disclosed for generating a gesture reenactment video sequence corresponding to a target audio sequence using a trained network based on a video motion graph generated from a reference speech video. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a first input including a reference speech video and generating a video motion graph representing the reference speech video, where each node is associated with a frame of the reference video sequence and reference audio features of the reference audio sequence. The disclosed systems and methods further comprise receiving a second input including a target audio sequence, generating target audio features, identifying a node path through the video motion graph based on the target audio features and the reference audio features, and generating an output media sequence based on the identified node path through the video motion graph paired with the target audio sequence.
    Type: Application
    Filed: November 14, 2022
    Publication date: May 16, 2024
    Applicant: Adobe Inc.
    Inventors: Yang Zhou, Jimei Yang, Jun Saito, Dingzeyu Li, Deepali Aneja
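The node-path step the abstract describes (walking the video motion graph so that each node's reference audio features match the target audio features) can be sketched as a greedy search over a tiny graph; the graph, features, and matching rule are toy assumptions, not the patent's trained network:

```python
def node_path(graph, node_features, target_features, start=0):
    """Greedy walk: from the current node, follow the outgoing edge to the
    node whose reference audio feature best matches the next target feature."""
    path = [start]
    for tf in target_features:
        path.append(min(graph[path[-1]],
                        key=lambda n: abs(node_features[n] - tf)))
    return path

# Tiny motion graph: node -> reachable next nodes, one scalar feature per node
graph = {0: [1, 2], 1: [2, 3], 2: [3, 0], 3: [0, 1]}
features = {0: 0.0, 1: 0.5, 2: 1.0, 3: 1.5}
path = node_path(graph, features, target_features=[0.4, 1.1, 1.6])  # [0, 1, 2, 3]
```

The frames at the chosen nodes, paired with the target audio, would form the output media sequence.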
  • Publication number: 20240144574
    Abstract: Digital object animation techniques are described. In a first example, translation-based animation of the digital object operates using control points of the digital object. In another example, the animation system is configured to minimize the number of feature positions used to generate the animation. In a further example, an input pose is normalized through use of a global scale factor to address changes in the z-position of a subject across different digital images. Yet further, a body tracking module is used to compute initial feature positions, which are then used to initialize a face tracker module that generates feature positions of the face. The animation system also supports a plurality of modes used to generate the digital object, techniques to define a base of the digital object, and a friction term limiting movement of feature positions based on contact with a ground plane.
    Type: Application
    Filed: December 27, 2023
    Publication date: May 2, 2024
    Applicant: Adobe Inc.
    Inventors: Jun Saito, Jimei Yang, Duygu Ceylan Aksit
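The global-scale normalization mentioned in the abstract above can be sketched as follows: scaling every pose by one factor derived from its apparent height makes the same subject comparable whether they stand near or far from the camera. The height-based scale rule is an illustrative assumption, not the patent's exact factor:

```python
import numpy as np

def normalize_pose(points_2d, ref_height=1.0, eps=1e-8):
    """Scale a 2-D pose by one global factor so a subject moving toward or
    away from the camera (changing z-position) keeps a constant apparent height."""
    pts = np.asarray(points_2d, dtype=float)
    height = pts[:, 1].max() - pts[:, 1].min()
    return pts * (ref_height / max(height, eps))

far = normalize_pose([[0.0, 0.0], [0.0, 2.0]])    # subject far from camera
near = normalize_pose([[0.0, 0.0], [0.0, 4.0]])   # same subject, closer
```

After normalization the two poses coincide, so downstream feature tracking sees a stable subject size.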
  • Publication number: 20240135511
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portrays a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
    Type: Application
    Filed: March 27, 2023
    Publication date: April 25, 2024
    Inventors: Krishna Kumar Singh, Yijun Li, Jingwan Lu, Duygu Ceylan Aksit, Yangtuanfeng Wang, Jimei Yang, Tobias Hinz, Qing Liu, Jianming Zhang, Zhe Lin
  • Publication number: 20240135512
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portrays a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
    Type: Application
    Filed: March 27, 2023
    Publication date: April 25, 2024
    Inventors: Krishna Kumar Singh, Yijun Li, Jingwan Lu, Duygu Ceylan Aksit, Yangtuanfeng Wang, Jimei Yang, Tobias Hinz, Qing Liu, Jianming Zhang, Zhe Lin
  • Publication number: 20240135572
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portrays a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
    Type: Application
    Filed: March 27, 2023
    Publication date: April 25, 2024
    Inventors: Krishna Kumar Singh, Yijun Li, Jingwan Lu, Duygu Ceylan Aksit, Yangtuanfeng Wang, Jimei Yang, Tobias Hinz
  • Publication number: 20240135513
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portrays a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
    Type: Application
    Filed: March 27, 2023
    Publication date: April 25, 2024
    Inventors: Krishna Kumar Singh, Yijun Li, Jingwan Lu, Duygu Ceylan Aksit, Yangtuanfeng Wang, Jimei Yang, Tobias Hinz
  • Patent number: 11948281
    Abstract: Methods and systems are provided for accurately filling holes, regions, and/or portions of high-resolution images using guided upsampling during image inpainting. For instance, an image inpainting system can apply guided upsampling to an inpainted image result to generate a high-resolution inpainting result from a lower-resolution image that has undergone inpainting. To allow for guided upsampling during image inpainting, one or more neural networks can be used: for instance, a low-resolution result neural network and a high-resolution input neural network, each comprising an encoder and a decoder. The image inpainting system can use such networks to generate a high-resolution inpainting image result that fills the hole, region, and/or portion of the image.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: April 2, 2024
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Yu Zeng, Jimei Yang, Jianming Zhang, Elya Shechtman
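The compositing idea behind guided upsampling (inpaint at low resolution, then produce a high-resolution result guided by the original high-resolution input) can be sketched without the neural networks: keep the original pixels outside the hole and upsample the low-resolution fill into the hole. Nearest-neighbor upsampling here stands in for the learned networks:

```python
import numpy as np

def guided_upsample(low_res_inpainted, high_res_input, hole_mask):
    """Keep the original high-res pixels outside the hole; fill the hole with
    a nearest-neighbor upsampling of the low-res inpainted result."""
    h, w = high_res_input.shape
    lh, lw = low_res_inpainted.shape
    up = low_res_inpainted[(np.arange(h) * lh // h)[:, None],
                           (np.arange(w) * lw // w)[None, :]]
    return np.where(hole_mask, up, high_res_input)

high = np.ones((4, 4))                 # original high-res image
low = np.zeros((2, 2))                 # low-res image after inpainting
hole = np.zeros((4, 4), dtype=bool)
hole[:2, :2] = True                    # the region that was filled
result = guided_upsample(low, high, hole)
```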
  • Publication number: 20240046566
    Abstract: Systems and methods for mesh generation are described. One aspect of the systems and methods includes receiving an image depicting a visible portion of a body; generating an intermediate mesh representing the body based on the image; generating visibility features indicating whether parts of the body are visible based on the image; generating parameters for a morphable model of the body based on the intermediate mesh and the visibility features; and generating an output mesh representing the body based on the parameters for the morphable model, wherein the output mesh includes a non-visible portion of the body that is not depicted by the image.
    Type: Application
    Filed: August 2, 2022
    Publication date: February 8, 2024
    Inventors: Jimei Yang, Chun-han Yao, Duygu Ceylan Aksit, Yi Zhou
  • Patent number: 11861779
    Abstract: Digital object animation techniques are described. In a first example, translation-based animation of the digital object operates using control points of the digital object. In another example, the animation system is configured to minimize the number of feature positions used to generate the animation. In a further example, an input pose is normalized through use of a global scale factor to address changes in the z-position of a subject across different digital images. Yet further, a body tracking module is used to compute initial feature positions, which are then used to initialize a face tracker module that generates feature positions of the face. The animation system also supports a plurality of modes used to generate the digital object, techniques to define a base of the digital object, and a friction term limiting movement of feature positions based on contact with a ground plane.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: January 2, 2024
    Assignee: Adobe Inc.
    Inventors: Jun Saito, Jimei Yang, Duygu Ceylan Aksit
  • Publication number: 20230360320
    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate realistic shading for three-dimensional objects inserted into digital images. The disclosed system utilizes a light encoder neural network to generate a representation embedding of lighting in a digital image. Additionally, the disclosed system determines points of the three-dimensional object visible within a camera view. The disclosed system generates a self-occlusion map for the digital three-dimensional object by determining whether fixed sets of rays uniformly sampled from the points intersect with the digital three-dimensional object. The disclosed system utilizes a generator neural network to determine a shading map for the digital three-dimensional object based on the representation embedding of lighting in the digital image and the self-occlusion map.
    Type: Application
    Filed: July 18, 2023
    Publication date: November 9, 2023
    Inventors: Ruben Villegas, Yunseok Jang, Duygu Ceylan Aksit, Jimei Yang, Xin Sun
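The self-occlusion map described above (for each visible point, test a fixed set of sampled rays against the object) can be sketched for the special case of a unit sphere, where a ray re-enters the surface exactly when it points inward; the sampling scheme is an illustrative stand-in:

```python
import numpy as np

def self_occlusion_map(points, num_rays=64, seed=0):
    """Toy self-occlusion for points on a unit sphere: cast a fixed set of
    randomly sampled ray directions from each point and record the fraction
    that re-enter the object. For a sphere, a ray re-enters iff it points
    inward, i.e. dot(direction, surface normal) < 0."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(num_rays, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions
    occ = []
    for p in points:
        normal = np.asarray(p, dtype=float)   # on a unit sphere, normal == point
        occ.append(float(((dirs @ normal) < 0).mean()))
    return np.array(occ)

occ = self_occlusion_map([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
```

For general meshes the inward test would be replaced by actual ray-mesh intersection, but the per-point "fraction of blocked rays" output is the same shape of signal the generator network consumes.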
  • Patent number: 11721056
    Abstract: In some embodiments, a model training system obtains a set of animation models. For each of the animation models, the model training system renders the animation model to generate a sequence of video frames containing a character using a set of rendering parameters and extracts joint points of the character from each frame of the sequence of video frames. The model training system further determines, for each frame of the sequence of video frames, whether a subset of the joint points are in contact with a ground plane in a three-dimensional space and generates contact labels for the subset of the joint points. The model training system trains a contact estimation model using training data containing the joint points extracted from the sequences of video frames and the generated contact labels. The contact estimation model can be used to refine a motion model for a character.
    Type: Grant
    Filed: January 12, 2022
    Date of Patent: August 8, 2023
    Assignee: Adobe Inc.
    Inventors: Jimei Yang, Davis Rempe, Bryan Russell, Aaron Hertzmann
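The ground-contact labeling step above (decide, per frame, whether a joint touches the ground plane) is commonly approximated by a height-plus-velocity test; the thresholds below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def contact_labels(joint_heights, joint_speeds, height_tol=0.05, speed_tol=0.01):
    """Label a joint as in contact with the ground plane (y = 0) when it is
    both near the plane and nearly stationary in that frame."""
    h = np.asarray(joint_heights, dtype=float)
    s = np.asarray(joint_speeds, dtype=float)
    return ((h < height_tol) & (s < speed_tol)).astype(int)

# Heel planted, toe lifting off, hand swinging well above the ground:
labels = contact_labels([0.0, 0.02, 0.9], [0.0, 0.2, 0.5])  # -> [1, 0, 0]
```

Labels generated this way from rendered animation models form the training targets for the contact estimation model.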
  • Patent number: 11704865
    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate realistic shading for three-dimensional objects inserted into digital images. The disclosed system utilizes a light encoder neural network to generate a representation embedding of lighting in a digital image. Additionally, the disclosed system determines points of the three-dimensional object visible within a camera view. The disclosed system generates a self-occlusion map for the digital three-dimensional object by determining whether fixed sets of rays uniformly sampled from the points intersect with the digital three-dimensional object. The disclosed system utilizes a generator neural network to determine a shading map for the digital three-dimensional object based on the representation embedding of lighting in the digital image and the self-occlusion map.
    Type: Grant
    Filed: July 22, 2021
    Date of Patent: July 18, 2023
    Assignee: Adobe Inc.
    Inventors: Ruben Villegas, Yunseok Jang, Duygu Ceylan Aksit, Jimei Yang, Xin Sun