Patents by Inventor Jimei Yang
Jimei Yang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240144574
Abstract: Digital object animation techniques are described. In a first example, translation-based animation of the digital object operates using control points of the digital object. In another example, the animation system is configured to minimize the number of feature positions used to generate the animation. In a further example, an input pose is normalized through use of a global scale factor to address changes in a z-position of a subject in different digital images. Additionally, a body tracking module is used to compute initial feature positions. The initial feature positions are then used to initialize a face tracker module to generate feature positions of the face. The animation system also supports a plurality of modes used to generate the digital object, techniques to define a base of the digital object, and a friction term limiting movement of feature positions based on contact with a ground plane.
Type: Application
Filed: December 27, 2023
Publication date: May 2, 2024
Applicant: Adobe Inc.
Inventors: Jun Saito, Jimei Yang, Duygu Ceylan Aksit
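The friction-term idea in this abstract can be illustrated with a minimal sketch. Everything here (function name, 2-D point tuples, the `contact_eps` and `friction` parameters) is an illustrative assumption, not the patented implementation: feature points whose height is near the ground plane have their horizontal displacement damped.

```python
def apply_friction(prev_pts, curr_pts, ground_y=0.0, contact_eps=0.02, friction=0.9):
    """Damp horizontal motion of feature points near the ground plane.

    A point whose height is within `contact_eps` of the ground is treated
    as in contact, and its horizontal displacement is scaled down by
    `friction` (1.0 = fully pinned, 0.0 = no friction at all).
    """
    out = []
    for (px, py), (cx, cy) in zip(prev_pts, curr_pts):
        if abs(cy - ground_y) <= contact_eps:
            # Pull the contacting point back toward its previous x-position.
            cx = px + (cx - px) * (1.0 - friction)
        out.append((cx, cy))
    return out
```

A grounded point sliding one unit sideways would keep only a tenth of that motion, while an airborne point moves freely.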
-
Publication number: 20240135512
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portrays a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
Type: Application
Filed: March 27, 2023
Publication date: April 25, 2024
Inventors: Krishna Kumar Singh, Yijun Li, Jingwan Lu, Duygu Ceylan Aksit, Yangtuanfeng Wang, Jimei Yang, Tobias Hinz, Qing Liu, Jianming Zhang, Zhe Lin
-
Publication number: 20240135513
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portrays a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
Type: Application
Filed: March 27, 2023
Publication date: April 25, 2024
Inventors: Krishna Kumar Singh, Yijun Li, Jingwan Lu, Duygu Ceylan Aksit, Yangtuanfeng Wang, Jimei Yang, Tobias Hinz
-
Publication number: 20240135572
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portrays a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
Type: Application
Filed: March 27, 2023
Publication date: April 25, 2024
Inventors: Krishna Kumar Singh, Yijun Li, Jingwan Lu, Duygu Ceylan Aksit, Yangtuanfeng Wang, Jimei Yang, Tobias Hinz
-
Publication number: 20240135511
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portrays a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
Type: Application
Filed: March 27, 2023
Publication date: April 25, 2024
Inventors: Krishna Kumar Singh, Yijun Li, Jingwan Lu, Duygu Ceylan Aksit, Yangtuanfeng Wang, Jimei Yang, Tobias Hinz, Qing Liu, Jianming Zhang, Zhe Lin
-
Patent number: 11948281
Abstract: Methods and systems are provided for accurately filling holes, regions, and/or portions of high-resolution images using guided upsampling during image inpainting. For instance, an image inpainting system can apply guided upsampling to an inpainted image result to enable generation of a high-resolution inpainting result from a lower-resolution image that has undergone inpainting. To allow for guided upsampling during image inpainting, one or more neural networks can be used, such as a low-resolution result neural network and a high-resolution input neural network, each comprising an encoder and a decoder. The image inpainting system can use such networks to generate a high-resolution inpainting image result that fills the hole, region, and/or portion of the image.
Type: Grant
Filed: May 1, 2020
Date of Patent: April 2, 2024
Assignee: Adobe Inc.
Inventors: Zhe Lin, Yu Zeng, Jimei Yang, Jianming Zhang, Elya Shechtman
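The basic flow of guided upsampling can be sketched without any neural networks: inpaint at low resolution, upsample the result, and keep the original high-resolution pixels everywhere outside the hole. The function name, nearest-neighbour upsampling, and nested-list image representation are simplifying assumptions for illustration, not the patented networks:

```python
def guided_upsample(lowres, highres, hole_mask, scale):
    """Combine a low-resolution inpainted result with a high-resolution input.

    Inside the hole we use the nearest-neighbour upsampled low-res
    inpainting; outside the hole we keep the original high-res pixels.
    """
    h, w = len(highres), len(highres[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if hole_mask[y][x]:
                # Hole pixel: sample the corresponding low-res value.
                out[y][x] = lowres[y // scale][x // scale]
            else:
                # Known pixel: preserve the high-res input exactly.
                out[y][x] = highres[y][x]
    return out
```

In the patented system a learned network replaces the nearest-neighbour step, but the hole/known split is the same.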
-
Publication number: 20240046566
Abstract: Systems and methods for mesh generation are described. One aspect of the systems and methods includes receiving an image depicting a visible portion of a body; generating an intermediate mesh representing the body based on the image; generating visibility features indicating whether parts of the body are visible based on the image; generating parameters for a morphable model of the body based on the intermediate mesh and the visibility features; and generating an output mesh representing the body based on the parameters for the morphable model, wherein the output mesh includes a non-visible portion of the body that is not depicted by the image.
Type: Application
Filed: August 2, 2022
Publication date: February 8, 2024
Inventors: Jimei Yang, Chun-han Yao, Duygu Ceylan Aksit, Yi Zhou
-
Patent number: 11861779
Abstract: Digital object animation techniques are described. In a first example, translation-based animation of the digital object operates using control points of the digital object. In another example, the animation system is configured to minimize the number of feature positions used to generate the animation. In a further example, an input pose is normalized through use of a global scale factor to address changes in a z-position of a subject in different digital images. Additionally, a body tracking module is used to compute initial feature positions. The initial feature positions are then used to initialize a face tracker module to generate feature positions of the face. The animation system also supports a plurality of modes used to generate the digital object, techniques to define a base of the digital object, and a friction term limiting movement of feature positions based on contact with a ground plane.
Type: Grant
Filed: December 14, 2021
Date of Patent: January 2, 2024
Assignee: Adobe Inc.
Inventors: Jun Saito, Jimei Yang, Duygu Ceylan Aksit
-
Publication number: 20230360320
Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate realistic shading for three-dimensional objects inserted into digital images. The disclosed system utilizes a light encoder neural network to generate a representation embedding of lighting in a digital image. Additionally, the disclosed system determines points of the three-dimensional object visible within a camera view. The disclosed system generates a self-occlusion map for the digital three-dimensional object by determining whether fixed sets of rays uniformly sampled from the points intersect with the digital three-dimensional object. The disclosed system utilizes a generator neural network to determine a shading map for the digital three-dimensional object based on the representation embedding of lighting in the digital image and the self-occlusion map.
Type: Application
Filed: July 18, 2023
Publication date: November 9, 2023
Inventors: Ruben Villegas, Yunseok Jang, Duygu Ceylan Aksit, Jimei Yang, Xin Sun
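The self-occlusion test in this abstract (casting a fixed set of rays from each surface point and checking whether they hit the object) can be sketched with simple ray marching. The function name, step size, and the `inside_fn` predicate are illustrative assumptions; a real system would use proper ray-mesh intersection:

```python
import math

def self_occlusion(point, directions, inside_fn, step=0.05, max_t=2.0):
    """Fraction of rays from `point` that re-enter the object.

    `inside_fn(p)` returns True when p lies inside the object. Each ray
    direction is normalized and marched in small steps; if any sample
    along the ray falls inside the object, that ray counts as occluded.
    """
    hits = 0
    for d in directions:
        n = math.sqrt(sum(c * c for c in d))
        d = tuple(c / n for c in d)
        t = step
        while t <= max_t:
            p = tuple(point[i] + t * d[i] for i in range(3))
            if inside_fn(p):
                hits += 1
                break
            t += step
    return hits / len(directions)
```

For a point on a unit sphere, an inward ray is occluded and an outward ray is not, giving an occlusion value of 0.5 over those two directions.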
-
Patent number: 11721056
Abstract: In some embodiments, a model training system obtains a set of animation models. For each of the animation models, the model training system renders the animation model to generate a sequence of video frames containing a character using a set of rendering parameters and extracts joint points of the character from each frame of the sequence of video frames. The model training system further determines, for each frame of the sequence of video frames, whether a subset of the joint points are in contact with a ground plane in a three-dimensional space and generates contact labels for the subset of the joint points. The model training system trains a contact estimation model using training data containing the joint points extracted from the sequences of video frames and the generated contact labels. The contact estimation model can be used to refine a motion model for a character.
Type: Grant
Filed: January 12, 2022
Date of Patent: August 8, 2023
Assignee: Adobe Inc.
Inventors: Jimei Yang, Davis Rempe, Bryan Russell, Aaron Hertzmann
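A simple heuristic for generating the per-frame contact labels described above: a joint is labelled "in contact" when it is both close to the ground plane and nearly stationary. The thresholds and function name are illustrative assumptions, not values from the patent:

```python
def contact_labels(joint_heights, ground_y=0.0, height_eps=0.05, vel_eps=0.02):
    """Label, per frame, whether one joint touches the ground plane.

    joint_heights: per-frame heights of a single joint. A frame gets a
    True label when the joint is within `height_eps` of the ground and
    its height changed by at most `vel_eps` since the previous frame.
    """
    labels = []
    for i, h in enumerate(joint_heights):
        near = abs(h - ground_y) <= height_eps
        prev = joint_heights[i - 1] if i > 0 else h
        still = abs(h - prev) <= vel_eps
        labels.append(near and still)
    return labels
```

Labels produced this way over rendered sequences form the supervision signal for training a contact estimation model.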
-
Patent number: 11704865
Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate realistic shading for three-dimensional objects inserted into digital images. The disclosed system utilizes a light encoder neural network to generate a representation embedding of lighting in a digital image. Additionally, the disclosed system determines points of the three-dimensional object visible within a camera view. The disclosed system generates a self-occlusion map for the digital three-dimensional object by determining whether fixed sets of rays uniformly sampled from the points intersect with the digital three-dimensional object. The disclosed system utilizes a generator neural network to determine a shading map for the digital three-dimensional object based on the representation embedding of lighting in the digital image and the self-occlusion map.
Type: Grant
Filed: July 22, 2021
Date of Patent: July 18, 2023
Assignee: Adobe Inc.
Inventors: Ruben Villegas, Yunseok Jang, Duygu Ceylan Aksit, Jimei Yang, Xin Sun
-
Patent number: 11682238
Abstract: Embodiments are disclosed for re-timing a video sequence to an audio sequence based on the detection of motion beats in the video sequence and audio beats in the audio sequence. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a first input, the first input including a video sequence, detecting motion beats in the video sequence, receiving a second input, the second input including an audio sequence, detecting audio beats in the audio sequence, modifying the video sequence by matching the detected motion beats in the video sequence to the detected audio beats in the audio sequence, and outputting the modified video sequence.
Type: Grant
Filed: February 12, 2021
Date of Patent: June 20, 2023
Assignee: Adobe Inc.
Inventors: Jimei Yang, Deepali Aneja, Dingzeyu Li, Jun Saito, Yang Zhou
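The beat-matching step above can be sketched as pairing each motion beat with its nearest audio beat and then warping video time piecewise-linearly between the matched pairs. The function names and the nearest-neighbour matching rule are illustrative assumptions, not the patented matching procedure:

```python
def match_beats(motion_beats, audio_beats):
    """Map each detected motion beat to the closest audio beat (seconds).

    Returns (motion_time, audio_time) pairs that a time warp of the
    video can then interpolate between.
    """
    return [(m, min(audio_beats, key=lambda t: abs(t - m)))
            for m in motion_beats]

def retime(t, pairs):
    """Piecewise-linear warp of a video timestamp using matched pairs."""
    pairs = sorted(pairs)
    for (m0, a0), (m1, a1) in zip(pairs, pairs[1:]):
        if m0 <= t <= m1:
            w = (t - m0) / (m1 - m0)
            return a0 + w * (a1 - a0)
    return t  # outside the matched range: leave the timestamp unchanged
```

A frame halfway between two motion beats lands halfway between the corresponding audio beats in the retimed video.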
-
Publication number: 20230186544
Abstract: Digital object animation techniques are described. In a first example, translation-based animation of the digital object operates using control points of the digital object. In another example, the animation system is configured to minimize the number of feature positions used to generate the animation. In a further example, an input pose is normalized through use of a global scale factor to address changes in a z-position of a subject in different digital images. Additionally, a body tracking module is used to compute initial feature positions. The initial feature positions are then used to initialize a face tracker module to generate feature positions of the face. The animation system also supports a plurality of modes used to generate the digital object, techniques to define a base of the digital object, and a friction term limiting movement of feature positions based on contact with a ground plane.
Type: Application
Filed: December 14, 2021
Publication date: June 15, 2023
Applicant: Adobe Inc.
Inventors: Jun Saito, Jimei Yang, Duygu Ceylan Aksit
-
Patent number: 11657546
Abstract: Introduced here are techniques for relighting an image by automatically segmenting a human object in an image. The segmented image is input to an encoder that transforms it into a feature space. The feature space is concatenated with coefficients of a target illumination for the image and input to an albedo decoder and a light transport detector to predict an albedo map and a light transport matrix, respectively. In addition, the output of the encoder is concatenated with outputs of residual parts of each decoder and fed to a light coefficients block, which predicts coefficients of the illumination for the image. The light transport matrix and predicted illumination coefficients are multiplied to obtain a shading map that can sharpen details of the image. The resulting image is scaled by the albedo map to produce the relit image, which can then be refined to remove noise.
Type: Grant
Filed: May 24, 2022
Date of Patent: May 23, 2023
Assignee: Adobe Inc.
Inventors: Xin Sun, Ruben Villegas, Manuel Lagunas Arto, Jimei Yang, Jianming Zhang
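The final composition step described above is a simple linear-algebra identity: shading is the light transport matrix applied to the illumination coefficients, and the relit value is albedo times shading. A minimal per-pixel sketch, with illustrative function names and flat-list image layout (the patented system predicts these quantities with neural networks):

```python
def relight_pixel(albedo, transport_row, light_coeffs):
    """One pixel: shading = <transport row, light coefficients>,
    relit value = albedo * shading."""
    shading = sum(t * l for t, l in zip(transport_row, light_coeffs))
    return albedo * shading

def relight_image(albedo_map, transport, light_coeffs):
    """Apply the per-pixel relighting over a flat list of pixels, where
    `transport` holds one light-transport row per pixel."""
    return [relight_pixel(a, row, light_coeffs)
            for a, row in zip(albedo_map, transport)]
```

Changing only `light_coeffs` re-renders the same subject under a new illumination, which is what makes the factorization useful.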
-
Patent number: 11625881
Abstract: Motion retargeting with kinematic constraints is implemented in a digital medium environment. Generally, the described techniques provide for retargeting motion data from a source motion sequence to a target visual object. Accordingly, the described techniques position a target visual object in a defined visual environment to identify kinematic constraints of the target object relative to the visual environment. Further, the described techniques utilize an iterative optimization process that fine-tunes the conformance of retargeted motion of a target object to the identified kinematic constraints.
Type: Grant
Filed: September 27, 2021
Date of Patent: April 11, 2023
Assignee: Adobe Inc.
Inventors: Ruben Eduardo Villegas, Jun Saito, Jimei Yang, Duygu Ceylan Aksit
-
Patent number: 11605156
Abstract: Methods and systems are provided for accurately filling holes, regions, and/or portions of images using iterative image inpainting. In particular, iterative inpainting utilizes a confidence analysis of predicted pixels determined during the iterations of inpainting. For instance, a confidence analysis can provide information that can be used as feedback to progressively fill undefined pixels that comprise the holes, regions, and/or portions of an image where information for those respective pixels is not known. To allow for accurate image inpainting, one or more neural networks can be used, such as a coarse result neural network (a GAN comprising a generator and a discriminator) and a fine result neural network (a GAN comprising a generator and two discriminators).
Type: Grant
Filed: July 14, 2022
Date of Patent: March 14, 2023
Assignee: ADOBE INC.
Inventors: Zhe Lin, Yu Zeng, Jimei Yang, Jianming Zhang, Elya Shechtman
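The progressive-fill idea can be illustrated on a 1-D "image": each round, undefined pixels adjacent to trusted pixels take the mean of their trusted neighbours and become trusted themselves, so the hole closes from its boundary inward. The neighbour-averaging rule stands in for the patented GAN prediction and confidence analysis; it is a sketch of the iteration structure only:

```python
def iterative_inpaint(values, known, rounds=10):
    """Fill unknown entries by repeatedly averaging trusted neighbours.

    `known[i]` marks pixels whose value is trusted. Each round, an
    unknown pixel with at least one trusted neighbour takes the mean of
    those neighbours and is marked trusted for the following round.
    """
    vals, filled = list(values), list(known)
    for _ in range(rounds):
        if all(filled):
            break
        new_vals, new_filled = list(vals), list(filled)
        for i in range(len(vals)):
            if filled[i]:
                continue
            nbrs = [vals[j] for j in (i - 1, i + 1)
                    if 0 <= j < len(vals) and filled[j]]
            if nbrs:
                new_vals[i] = sum(nbrs) / len(nbrs)
                new_filled[i] = True
        vals, filled = new_vals, new_filled
    return vals
```

In the patented system, a network predicts every hole pixel each iteration, but only high-confidence predictions are kept as feedback for the next pass, which is the same boundary-inward progression shown here.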
-
Publication number: 20230037339
Abstract: One example method involves a processing device that performs operations that include receiving a request to retarget a source motion into a target object. Operations further include providing the target object to a contact-aware motion retargeting neural network trained to retarget the source motion into the target object. The contact-aware motion retargeting neural network is trained by accessing training data that includes a source object performing the source motion. The contact-aware motion retargeting neural network generates retargeted motion for the target object, based on a self-contact having a pair of input vertices. The retargeted motion is subject to motion constraints that: (i) preserve a relative location of the self-contact and (ii) prevent self-penetration of the target object.
Type: Application
Filed: July 26, 2021
Publication date: February 9, 2023
Inventors: Ruben Villegas, Jun Saito, Jimei Yang, Duygu Ceylan Aksit, Aaron Hertzmann
-
Publication number: 20230037591
Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate realistic shading for three-dimensional objects inserted into digital images. The disclosed system utilizes a light encoder neural network to generate a representation embedding of lighting in a digital image. Additionally, the disclosed system determines points of the three-dimensional object visible within a camera view. The disclosed system generates a self-occlusion map for the digital three-dimensional object by determining whether fixed sets of rays uniformly sampled from the points intersect with the digital three-dimensional object. The disclosed system utilizes a generator neural network to determine a shading map for the digital three-dimensional object based on the representation embedding of lighting in the digital image and the self-occlusion map.
Type: Application
Filed: July 22, 2021
Publication date: February 9, 2023
Inventors: Ruben Villegas, Yunseok Jang, Duygu Ceylan Aksit, Jimei Yang, Xin Sun
-
Patent number: 11531697
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly identifying and providing digital images of human figures in poses corresponding to a query pose. In particular, the disclosed systems can provide multiple approaches to searching for and providing pose images, including identifying a digital image depicting a human figure in a particular pose based on a query digital image that depicts the pose or identifying a digital image depicting a human figure in a particular pose based on a virtual mannequin. Indeed, the disclosed systems can provide a manipulable virtual mannequin that defines a query pose for searching a repository of digital images. Additionally, the disclosed systems can generate and provide digital pose image groups by clustering digital images together according to poses of human figures within a pose feature space.
Type: Grant
Filed: November 3, 2020
Date of Patent: December 20, 2022
Assignee: Adobe Inc.
Inventors: Jinrong Xie, Shabnam Ghadar, Jun Saito, Jimei Yang, Elnaz Morad, Duygu Ceylan Aksit, Baldo Faieta, Alex Filipkowski
-
Publication number: 20220366546
Abstract: Methods and systems are provided for accurately filling holes, regions, and/or portions of images using iterative image inpainting. In particular, iterative inpainting utilizes a confidence analysis of predicted pixels determined during the iterations of inpainting. For instance, a confidence analysis can provide information that can be used as feedback to progressively fill undefined pixels that comprise the holes, regions, and/or portions of an image where information for those respective pixels is not known. To allow for accurate image inpainting, one or more neural networks can be used, such as a coarse result neural network (a GAN comprising a generator and a discriminator) and a fine result neural network (a GAN comprising a generator and two discriminators).
Type: Application
Filed: July 14, 2022
Publication date: November 17, 2022
Inventors: Zhe Lin, Yu Zeng, Jimei Yang, Jianming Zhang, Elya Shechtman