Patents by Inventor Jimei Yang

Jimei Yang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11334971
    Abstract: Digital image completion by learning generation and patch matching jointly is described. Initially, a digital image having at least one hole is received. This holey digital image is provided as input to an image completer formed with a dual-stage framework that combines a coarse image neural network and an image refinement network. The coarse image neural network generates a coarse prediction of imagery for filling the holes of the holey digital image. The image refinement network receives the coarse prediction as input, refines the coarse prediction, and outputs a filled digital image having refined imagery that fills these holes. The image refinement network generates refined imagery using a patch matching technique, which includes leveraging information corresponding to patches of known pixels for filtering patches generated based on the coarse prediction. Based on this, the image completer outputs the filled digital image with the refined imagery.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: May 17, 2022
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xin Lu, Xiaohui Shen, Jimei Yang, Jiahui Yu
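The coarse-then-refine pipeline above can be illustrated with a toy patch-matching step: patches generated for the hole are filtered against the most similar patch of known pixels. This is a minimal 1-D sketch of the general idea, not the patented method; the function name, blending scheme, and all parameters are hypothetical.

```python
def refine_with_patch_matching(signal, hole, coarse, patch_size=3, blend=0.5):
    """Replace coarse predictions in `hole` with a blend of the coarse
    value and the centre of the closest-matching fully-known patch."""
    n = len(signal)
    known_patches = []
    for i in range(n - patch_size + 1):
        idx = range(i, i + patch_size)
        if not any(j in hole for j in idx):          # patch contains no hole pixels
            known_patches.append([signal[j] for j in idx])

    refined = list(signal)
    half = patch_size // 2
    for h in hole:
        # candidate patch centred on the coarse prediction for this pixel
        cand = [coarse.get(j, signal[j])
                for j in range(h - half, h + half + 1) if 0 <= j < n]
        # nearest known patch by sum of squared differences
        best = min(known_patches,
                   key=lambda p: sum((a - b) ** 2 for a, b in zip(p, cand)))
        refined[h] = blend * coarse[h] + (1 - blend) * best[half]
    return refined

signal = [0.0, 1.0, 2.0, 0.0, 0.0, 1.0, 2.0, 0.0]   # holes at indices 3 and 4
print(refine_with_patch_matching(signal, {3, 4}, {3: 0.2, 4: 0.1}))
```

Here the coarse guesses (0.2, 0.1) are pulled toward the repeating 0-1-2 pattern visible in the known pixels, which is the role the abstract assigns to the refinement stage.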
  • Publication number: 20220138249
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly identifying and providing digital images of human figures in poses corresponding to a query pose. In particular, the disclosed systems can provide multiple approaches to searching for and providing pose images, including identifying a digital image depicting a human figure in a particular pose based on a query digital image that depicts the pose or identifying a digital image depicting a human figure in a particular pose based on a virtual mannequin. Indeed, the disclosed systems can provide a manipulable virtual mannequin that defines a query pose for searching a repository of digital images. Additionally, the disclosed systems can generate and provide digital pose image groups by clustering digital images together according to poses of human figures within a pose feature space.
    Type: Application
    Filed: November 3, 2020
    Publication date: May 5, 2022
    Inventors: Jinrong Xie, Shabnam Ghadar, Jun Saito, Jimei Yang, Elnaz Morad, Duygu Ceylan Aksit, Baldo Faieta, Alex Filipkowski
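The pose-search and pose-grouping ideas in the abstract can be sketched with plain vectors standing in for the learned pose feature space: a query pose (from a query image or a posed virtual mannequin) is matched by nearest-neighbour distance, and the repository is grouped by clustering nearby poses. All names and the greedy clustering rule are hypothetical.

```python
import math

def pose_distance(a, b):
    return math.dist(a, b)

def search_poses(query, gallery, k=2):
    """Return indices of the k gallery poses closest to the query pose."""
    ranked = sorted(range(len(gallery)),
                    key=lambda i: pose_distance(query, gallery[i]))
    return ranked[:k]

def cluster_poses(gallery, radius):
    """Greedy clustering: each pose joins the first cluster whose
    representative is within `radius`, else it starts a new cluster."""
    clusters = []
    for i, pose in enumerate(gallery):
        for rep, members in clusters:
            if pose_distance(pose, rep) <= radius:
                members.append(i)
                break
        else:
            clusters.append((pose, [i]))
    return [members for _, members in clusters]

gallery = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (1.1, 0.9)]
print(search_poses((0.05, 0.0), gallery))   # the two "standing" poses rank first
print(cluster_poses(gallery, radius=0.5))   # two pose groups
```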
  • Publication number: 20220139019
    Abstract: In some embodiments, a model training system obtains a set of animation models. For each of the animation models, the model training system renders the animation model to generate a sequence of video frames containing a character using a set of rendering parameters and extracts joint points of the character from each frame of the sequence of video frames. The model training system further determines, for each frame of the sequence of video frames, whether a subset of the joint points are in contact with a ground plane in a three-dimensional space and generates contact labels for the subset of the joint points. The model training system trains a contact estimation model using training data containing the joint points extracted from the sequences of video frames and the generated contact labels. The contact estimation model can be used to refine a motion model for a character.
    Type: Application
    Filed: January 12, 2022
    Publication date: May 5, 2022
    Inventors: Jimei Yang, Davis Rempe, Bryan Russell, Aaron Hertzmann
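The contact-labeling step the abstract describes (a joint is in contact when it sits on the ground plane) can be sketched with a simple heuristic on rendered joint heights. The thresholds, velocity test, and names below are hypothetical illustrations, not the patented procedure.

```python
def label_contacts(foot_heights, ground_y=0.0, height_eps=0.02, vel_eps=0.01):
    """Label a foot joint as in contact on frames where it is close to the
    ground plane and nearly stationary relative to the previous frame."""
    labels = []
    for t, y in enumerate(foot_heights):
        near_ground = abs(y - ground_y) <= height_eps
        speed = abs(y - foot_heights[t - 1]) if t > 0 else 0.0
        labels.append(near_ground and speed <= vel_eps)
    return labels

# toe height over seven frames of a step: planted, lifting, swinging, planted
heights = [0.00, 0.01, 0.15, 0.30, 0.12, 0.015, 0.01]
print(label_contacts(heights))
```

Labels like these, paired with the extracted joint points, form the training data for the contact estimation model.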
  • Patent number: 11321847
    Abstract: In some embodiments, an image manipulation application receives an incomplete image that includes a hole area lacking image content. The image manipulation application applies a contour detection operation to the incomplete image to detect an incomplete contour of a foreground object in the incomplete image. The hole area prevents the contour detection operation from detecting a completed contour of the foreground object. The image manipulation application further applies a contour completion model to the incomplete contour and the incomplete image to generate the completed contour for the foreground object. Based on the completed contour and the incomplete image, the image manipulation application generates image content for the hole area to generate a completed image.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: May 3, 2022
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Wei Xiong, Connelly Barnes, Jimei Yang, Xin Lu
  • Patent number: 11250548
    Abstract: Digital image completion using deep learning is described. Initially, a digital image having at least one hole is received. This holey digital image is provided as input to an image completer formed with a framework that combines generative and discriminative neural networks based on learning architecture of the generative adversarial networks. From the holey digital image, the generative neural network generates a filled digital image having hole-filling content in place of holes. The discriminative neural networks detect whether the filled digital image and the hole-filling digital content correspond to or include computer-generated content or are photo-realistic. The generating and detecting are iteratively continued until the discriminative neural networks fail to detect computer-generated content for the filled digital image and hole-filling content or until detection surpasses a threshold difficulty.
    Type: Grant
    Filed: February 14, 2020
    Date of Patent: February 15, 2022
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xin Lu, Xiaohui Shen, Jimei Yang, Jiahui Yu
  • Patent number: 11238634
    Abstract: In some embodiments, a motion model refinement system receives an input video depicting a human character and an initial motion model describing motions of individual joint points of the human character in a three-dimensional space. The motion model refinement system identifies foot joint points of the human character that are in contact with a ground plane using a trained contact estimation model. The motion model refinement system determines the ground plane based on the foot joint points and the initial motion model and constructs an optimization problem for refining the initial motion model. The optimization problem minimizes the difference between the refined motion model and the initial motion model under a set of plausibility constraints including constraints on the contact foot joint points and a time-dependent inertia tensor-based constraint. The motion model refinement system obtains the refined motion model by solving the optimization problem.
    Type: Grant
    Filed: April 28, 2020
    Date of Patent: February 1, 2022
    Assignee: Adobe Inc.
    Inventors: Jimei Yang, Davis Rempe, Bryan Russell, Aaron Hertzmann
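A toy version of the pipeline above: estimate the ground plane from the frames where the contact estimation model says the foot is planted, then refine the motion so contact feet sit on that plane. The patent solves a constrained optimization with plausibility constraints; this sketch substitutes a simple projection, with hypothetical names.

```python
from statistics import median

def refine_motion(foot_y, contact):
    """Estimate the ground plane height as the median of the labelled
    contact frames, then snap contact feet onto it while leaving
    non-contact frames unchanged."""
    plane = median(y for y, c in zip(foot_y, contact) if c)
    refined = [plane if c else y for y, c in zip(foot_y, contact)]
    return plane, refined

foot_y = [0.02, -0.01, 0.20, 0.35, 0.01]       # noisy estimated foot heights
contact = [True, True, False, False, True]      # contact labels per frame
plane, refined = refine_motion(foot_y, contact)
print(plane, refined)
```

The snap removes the foot-skate and ground-penetration artifacts that the plausibility constraints in the full optimization are designed to eliminate.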
  • Patent number: 11232547
    Abstract: A style of a digital image is transferred to another digital image of arbitrary resolution. A high-resolution (HR) content image is segmented into several low-resolution (LR) patches. The resolution of a style image is matched to have the same resolution as the LR content image patches. Style transfer is then performed on a patch-by-patch basis using, for example, a pair of feature transforms—whitening and coloring. The patch-by-patch style transfer process is then repeated at several increasing resolutions, or scale levels, of both the content and style images. The results of the style transfer at each scale level are incorporated into successive scale levels up to and including the original HR scale. As a result, style transfer can be performed with images having arbitrary resolutions to produce visually pleasing results with good spatial consistency.
    Type: Grant
    Filed: July 16, 2020
    Date of Patent: January 25, 2022
    Assignee: Adobe Inc.
    Inventors: Chen Fang, Zhe Lin, Zhaowen Wang, Yulun Zhang, Yilin Wang, Jimei Yang
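The whitening and coloring transforms mentioned above have a simple per-channel special case: standardize the content features (whitening), then rescale them to the style's statistics (coloring). The sketch below shows only that diagonal case on one feature channel; the patented method uses full feature covariances and repeats the transfer patch-by-patch across increasing scale levels. Names are hypothetical.

```python
from statistics import mean, pstdev

def whiten_color(content, style):
    """Per-channel whitening/coloring: standardize the content features,
    then rescale them to match the style's mean and standard deviation."""
    mc, sc = mean(content), pstdev(content)
    ms, ss = mean(style), pstdev(style)
    return [(x - mc) / sc * ss + ms for x in content]

content = [1.0, 2.0, 3.0]        # one feature channel of an LR content patch
style = [10.0, 20.0, 30.0]       # same channel of the resolution-matched style
print(whiten_color(content, style))
```

The output keeps the content's structure (its relative variation) but carries the style's first- and second-order statistics, which is the essence of the feature-transform pair.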
  • Publication number: 20220020199
    Abstract: Motion retargeting with kinematic constraints is implemented in a digital medium environment. Generally, the described techniques provide for retargeting motion data from a source motion sequence to a target visual object. Accordingly, the described techniques position a target visual object in a defined visual environment to identify kinematic constraints of the target object relative to the visual environment. Further, the described techniques utilize an iterative optimization process that fine tunes the conformance of retargeted motion of a target object to the identified kinematic constraints.
    Type: Application
    Filed: September 27, 2021
    Publication date: January 20, 2022
    Applicant: Adobe Inc.
    Inventors: Ruben Eduardo Villegas, Jun Saito, Jimei Yang, Duygu Ceylan Aksit
  • Patent number: 11170551
    Abstract: Motion retargeting with kinematic constraints is implemented in a digital medium environment. Generally, the described techniques provide for retargeting motion data from a source motion sequence to a target visual object. Accordingly, the described techniques position a target visual object in a defined visual environment to identify kinematic constraints of the target object relative to the visual environment. Further, the described techniques utilize an iterative optimization process that fine tunes the conformance of retargeted motion of a target object to the identified kinematic constraints.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: November 9, 2021
    Assignee: Adobe Inc.
    Inventors: Ruben Eduardo Villegas, Jun Saito, Jimei Yang, Duygu Ceylan Aksit
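The iterative fine-tuning the abstract describes can be sketched as repeated small steps that balance two pressures: stay close to the source motion, and satisfy a kinematic constraint identified from the target's environment (here, contact feet on the floor). This is a toy stand-in with hypothetical names and weights, not the patented optimizer.

```python
def retarget(source, contact, floor=0.0, steps=50, rate=0.5):
    """Iteratively fine-tune retargeted foot heights: every frame is pulled
    toward the source motion, and contact frames are additionally pulled
    onto the floor, until the two pressures balance."""
    target = list(source)
    for _ in range(steps):
        for t in range(len(target)):
            grad = target[t] - source[t]            # stay near source motion
            if contact[t]:
                grad += 2.0 * (target[t] - floor)   # weighted kinematic constraint
            target[t] -= rate * grad
    return target

source = [0.05, 0.30, 0.05]          # source foot heights: planted, swing, planted
contact = [True, False, True]        # constraint active on planted frames
print(retarget(source, contact))
```

Contact frames settle at the weighted compromise between the source height and the floor, while unconstrained frames reproduce the source motion exactly.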
  • Publication number: 20210342984
    Abstract: Methods and systems are provided for accurately filling holes, regions, and/or portions of high-resolution images using guided upsampling during image inpainting. For instance, an image inpainting system can apply guided upsampling to an inpainted image result to enable generation of a high-resolution inpainting result from a lower-resolution image that has undergone inpainting. To allow for guided upsampling during image inpainting, one or more neural networks can be used, for instance a low-resolution result neural network and a high-resolution input neural network, each comprised of an encoder and a decoder. The image inpainting system can use such networks to generate a high-resolution inpainting image result that fills the hole, region, and/or portion of the image.
    Type: Application
    Filed: May 1, 2020
    Publication date: November 4, 2021
    Inventors: Zhe Lin, Yu Zeng, Jimei Yang, Jianming Zhang, Elya Shechtman
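A minimal sketch of the guided-upsampling idea: upsample the low-resolution inpainting result and use it only inside the hole, keeping the known high-resolution pixels untouched. Names are hypothetical, and the patent uses learned encoder/decoder networks rather than the nearest-neighbour upsampling shown here.

```python
def upsample_inpaint(lr_filled, hr_image, hr_mask, scale=2):
    """Guided upsampling sketch: nearest-neighbour upsample the
    low-resolution inpainting result and use it only where the
    high-resolution image has a hole (mask True)."""
    h, w = len(hr_image), len(hr_image[0])
    out = []
    for i in range(h):
        row = []
        for j in range(w):
            if hr_mask[i][j]:
                row.append(lr_filled[i // scale][j // scale])   # filled content
            else:
                row.append(hr_image[i][j])                      # known pixel
        out.append(row)
    return out

lr_filled = [[5, 5], [5, 5]]                     # 2x2 inpainted low-res result
hr_image = [[1, 1, 1, 1],
            [1, 0, 0, 1],
            [1, 0, 0, 1],
            [1, 1, 1, 1]]                        # 4x4 image with a centre hole
hr_mask = [[False, False, False, False],
           [False, True,  True,  False],
           [False, True,  True,  False],
           [False, False, False, False]]
print(upsample_inpaint(lr_filled, hr_image, hr_mask))
```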
  • Publication number: 20210342983
    Abstract: Methods and systems are provided for accurately filling holes, regions, and/or portions of images using iterative image inpainting. In particular, iterative inpainting utilizes a confidence analysis of predicted pixels determined during the iterations of inpainting. For instance, a confidence analysis can provide information that can be used as feedback to progressively fill undefined pixels that comprise the holes, regions, and/or portions of an image where information for those respective pixels is not known. To allow for accurate image inpainting, one or more neural networks can be used, for instance a coarse result neural network (e.g., a GAN comprised of a generator and a discriminator) and a fine result neural network (e.g., a GAN comprised of a generator and two discriminators).
    Type: Application
    Filed: April 29, 2020
    Publication date: November 4, 2021
    Inventors: Zhe Lin, Yu Zeng, Jimei Yang, Jianming Zhang, Elya Shechtman
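The confidence-feedback loop can be sketched with a simple stand-in for the networks: a pixel's confidence is the fraction of its neighbours that are already known, and only sufficiently confident pixels are committed each pass, so the hole fills progressively from its border inward. All names and thresholds are hypothetical.

```python
def iterative_inpaint(image, known):
    """Iteratively fill unknown pixels: each pass fills a pixel with the
    mean of its known 4-neighbours, with confidence equal to the fraction
    of neighbours that were known; only confident pixels are committed."""
    h, w = len(image), len(image[0])
    while not all(all(row) for row in known):
        updates = {}
        for i in range(h):
            for j in range(w):
                if known[i][j]:
                    continue
                nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
                vals = [image[a][b] for a, b in nbrs
                        if 0 <= a < h and 0 <= b < w and known[a][b]]
                if len(vals) / 4 >= 0.5:            # confidence threshold
                    updates[(i, j)] = sum(vals) / len(vals)
        if not updates:                             # nothing confident; stop
            break
        for (i, j), v in updates.items():           # feedback for next pass
            image[i][j] = v
            known[i][j] = True
    return image

# 5x5 image of ones with a 3x3 hole: fills corners, then edges, then centre
image = [[1.0 if (i in (0, 4) or j in (0, 4)) else 0.0 for j in range(5)] for i in range(5)]
known = [[i in (0, 4) or j in (0, 4) for j in range(5)] for i in range(5)]
print(iterative_inpaint(image, known)[2][2])
```

Note that the centre pixel is only filled on the third pass, once its neighbours have been confidently predicted, which mirrors the progressive-fill behaviour described in the abstract.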
  • Publication number: 20210343059
    Abstract: Motion retargeting with kinematic constraints is implemented in a digital medium environment. Generally, the described techniques provide for retargeting motion data from a source motion sequence to a target visual object. Accordingly, the described techniques position a target visual object in a defined visual environment to identify kinematic constraints of the target object relative to the visual environment. Further, the described techniques utilize an iterative optimization process that fine tunes the conformance of retargeted motion of a target object to the identified kinematic constraints.
    Type: Application
    Filed: May 1, 2020
    Publication date: November 4, 2021
    Applicant: Adobe Inc.
    Inventors: Ruben Eduardo Villegas, Jun Saito, Jimei Yang, Duygu Ceylan Aksit
  • Publication number: 20210335028
    Abstract: In some embodiments, a motion model refinement system receives an input video depicting a human character and an initial motion model describing motions of individual joint points of the human character in a three-dimensional space. The motion model refinement system identifies foot joint points of the human character that are in contact with a ground plane using a trained contact estimation model. The motion model refinement system determines the ground plane based on the foot joint points and the initial motion model and constructs an optimization problem for refining the initial motion model. The optimization problem minimizes the difference between the refined motion model and the initial motion model under a set of plausibility constraints including constraints on the contact foot joint points and a time-dependent inertia tensor-based constraint. The motion model refinement system obtains the refined motion model by solving the optimization problem.
    Type: Application
    Filed: April 28, 2020
    Publication date: October 28, 2021
    Inventors: Jimei Yang, Davis Rempe, Bryan Russell, Aaron Hertzmann
  • Publication number: 20210295571
    Abstract: Introduced here are techniques for relighting an image by automatically segmenting a human object in an image. The segmented image is input to an encoder that transforms it into a feature space. The feature space is concatenated with coefficients of a target illumination for the image and input to an albedo decoder and a light transport decoder to predict an albedo map and a light transport matrix, respectively. In addition, the output of the encoder is concatenated with outputs of residual parts of each decoder and fed to a light coefficients block, which predicts coefficients of the illumination for the image. The light transport matrix and predicted illumination coefficients are multiplied to obtain a shading map that can sharpen details of the image. The resulting image is scaled by the albedo map to produce the relit image, which can then be refined to remove noise.
    Type: Application
    Filed: March 18, 2020
    Publication date: September 23, 2021
    Inventors: Xin Sun, Ruben Villegas, Manuel Lagunas Arto, Jimei Yang, Jianming Zhang
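The arithmetic at the heart of the abstract (shading map = light transport matrix x illumination coefficients; relit pixel = albedo x shading) can be shown directly on toy numbers. Names and the tiny dimensions are hypothetical; in the patent these quantities are predicted by the decoders described above.

```python
def relight(albedo, transport, light):
    """Shading map = light transport matrix times illumination coefficients;
    relit pixel = albedo times shading, per pixel."""
    shading = [sum(t * l for t, l in zip(row, light)) for row in transport]
    return [a * s for a, s in zip(albedo, shading)]

albedo = [0.5, 0.8]                    # albedo map for two pixels
transport = [[1.0, 0.2], [0.4, 1.0]]   # per-pixel transport over 2 light coeffs
light = [1.0, 0.5]                     # target illumination coefficients
print(relight(albedo, transport, light))
```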
  • Patent number: 11115645
    Abstract: Embodiments are directed towards providing a target view, from a target viewpoint, of a 3D object. A source image, from a source viewpoint and including a common portion of the object, is encoded in 2D data. An intermediate image that includes an intermediate view of the object is generated based on the data. The intermediate view is from the target viewpoint and includes the common portion of the object and a disoccluded portion of the object not visible in the source image. The intermediate image includes a common region and a disoccluded region corresponding to the disoccluded portion of the object. The disoccluded region is updated to include a visual representation of a prediction of the disoccluded portion of the object. The prediction is based on a trained image completion model. The target view is based on the common region and the updated disoccluded region of the intermediate image.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: September 7, 2021
    Assignee: Adobe Inc.
    Inventors: Jimei Yang, Duygu Ceylan Aksit, Mehmet Ersin Yumer, Eunbyung Park
  • Patent number: 11017586
    Abstract: Systems and methods are described for generating a three dimensional (3D) effect from a two dimensional (2D) image. The methods may include generating a depth map based on a 2D image, identifying a camera path, generating one or more extremal views based on the 2D image and the camera path, generating a global point cloud by inpainting occlusion gaps in the one or more extremal views, generating one or more intermediate views based on the global point cloud and the camera path, and combining the one or more extremal views and the one or more intermediate views to produce a 3D motion effect.
    Type: Grant
    Filed: April 18, 2019
    Date of Patent: May 25, 2021
    Assignee: Adobe Inc.
    Inventors: Mai Long, Simon Niklaus, Jimei Yang
  • Patent number: 10997464
    Abstract: Digital image layout training is described using wireframe rendering within a generative adversarial network (GAN) system. A GAN system is employed to train the generator module to refine digital image layouts. To do so, a wireframe rendering discriminator module rasterizes a refined digital training digital image layout received from a generator module into a wireframe digital image layout. The wireframe digital image layout is then compared with at least one ground truth digital image layout using a loss function as part of machine learning by the wireframe discriminator module. The generator module is then trained by backpropagating a result of the comparison.
    Type: Grant
    Filed: November 9, 2018
    Date of Patent: May 4, 2021
    Assignee: Adobe Inc.
    Inventors: Jimei Yang, Jianming Zhang, Aaron Phillip Hertzmann, Jianan Li
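The wireframe rasterization step can be sketched on toy layout boxes: only the box outlines are drawn, not filled interiors, which is the form in which the discriminator compares refined layouts against ground truth. This is a minimal, non-differentiable illustration with hypothetical names.

```python
def rasterize_wireframe(boxes, h, w):
    """Rasterize layout boxes (x0, y0, x1, y1) into a binary wireframe
    image: the four edges of each box are set to 1, interiors stay 0."""
    img = [[0] * w for _ in range(h)]
    for x0, y0, x1, y1 in boxes:
        for x in range(x0, x1 + 1):
            img[y0][x] = img[y1][x] = 1      # top and bottom edges
        for y in range(y0, y1 + 1):
            img[y][x0] = img[y][x1] = 1      # left and right edges
    return img

for row in rasterize_wireframe([(1, 1, 4, 3)], 5, 6):
    print(row)
```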
  • Patent number: 10964100
    Abstract: According to one general aspect, systems and techniques for rendering a painting stroke of a three-dimensional digital painting include receiving a painting stroke input on a canvas, where the painting stroke includes a plurality of pixels. For each of the pixels in the plurality of pixels, a neighborhood patch of pixels is selected and input into a neural network and a shading function is output from the neural network. The painting stroke is rendered on the canvas using the shading function.
    Type: Grant
    Filed: September 10, 2018
    Date of Patent: March 30, 2021
    Assignee: Adobe Inc.
    Inventors: Xin Sun, Zhili Chen, Nathan Carr, Julio Marco Murria, Jimei Yang
  • Patent number: 10964084
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a digital animation of a digital animation character by utilizing a generative adversarial network and a hip motion prediction network. For example, the disclosed systems can utilize an unconditional generative adversarial network to generate a sequence of local poses of a digital animation character based on an input of a random code vector. The disclosed systems can also utilize a conditional generative adversarial network to generate a sequence of local poses based on an input of a set of keyframes. Based on the sequence of local poses, the disclosed systems can utilize a hip motion prediction network to generate a sequence of global poses based on hip velocities. In addition, the disclosed systems can generate an animation of a digital animation character based on the sequence of global poses.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: March 30, 2021
    Assignee: Adobe Inc.
    Inventors: Jingwan Lu, Yi Zhou, Connelly Barnes, Jimei Yang
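The hip motion prediction step turns local (hip-relative) poses into global poses by integrating hip velocities over time. A minimal 1-D sketch with hypothetical names; the patented network predicts the velocities, whereas here they are given.

```python
def globalize(local_poses, hip_velocities, start=0.0):
    """Integrate per-frame hip velocities into a global hip trajectory,
    then offset each local pose (joint coordinates relative to the hip)
    by the hip position to obtain global poses."""
    hip = start
    global_poses = []
    for pose, v in zip(local_poses, hip_velocities):
        hip += v                                       # advance hip position
        global_poses.append([hip + joint for joint in pose])
    return global_poses

local = [[0.0, 0.5], [0.0, 0.5], [0.0, 0.5]]   # hip-relative joint x-coords
vel = [1.0, 1.0, 0.5]                          # hip x-velocity per frame
print(globalize(local, vel))
```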
  • Publication number: 20210082124
    Abstract: In some embodiments, an image manipulation application receives an incomplete image that includes a hole area lacking image content. The image manipulation application applies a contour detection operation to the incomplete image to detect an incomplete contour of a foreground object in the incomplete image. The hole area prevents the contour detection operation from detecting a completed contour of the foreground object. The image manipulation application further applies a contour completion model to the incomplete contour and the incomplete image to generate the completed contour for the foreground object. Based on the completed contour and the incomplete image, the image manipulation application generates image content for the hole area to generate a completed image.
    Type: Application
    Filed: November 24, 2020
    Publication date: March 18, 2021
    Inventors: Zhe Lin, Wei Xiong, Connelly Barnes, Jimei Yang, Xin Lu