Patents by Inventor Jimei Yang

Jimei Yang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11017586
    Abstract: Systems and methods are described for generating a three dimensional (3D) effect from a two dimensional (2D) image. The methods may include generating a depth map based on a 2D image, identifying a camera path, generating one or more extremal views based on the 2D image and the camera path, generating a global point cloud by inpainting occlusion gaps in the one or more extremal views, generating one or more intermediate views based on the global point cloud and the camera path, and combining the one or more extremal views and the one or more intermediate views to produce a 3D motion effect.
    Type: Grant
    Filed: April 18, 2019
    Date of Patent: May 25, 2021
    Assignee: Adobe Inc.
    Inventors: Mai Long, Simon Niklaus, Jimei Yang
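The abstract above lays out a concrete pipeline (depth map, camera path, extremal views, point-cloud inpainting, intermediate views). The following is a minimal NumPy sketch of that general flow, not the patented implementation: the depth map is assumed to come from some monocular depth estimator, the inpainting of occlusion gaps is omitted, and all function names and parameters are illustrative.

```python
import numpy as np

def backproject(image, depth, focal):
    """Lift every pixel of a 2D image to a 3D point (a global point cloud)."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = (xs - w / 2) * depth / focal
    y = (ys - h / 2) * depth / focal
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points, image.reshape(-1, 3)

def render_view(points, colors, pose, focal, size):
    """Project the point cloud into a camera at `pose` (a 3x4 [R|t] matrix)."""
    h, w = size
    cam = points @ pose[:, :3].T + pose[:, 3]
    z = np.clip(cam[:, 2], 1e-6, None)
    u = (cam[:, 0] * focal / z + w / 2).astype(int)
    v = (cam[:, 1] * focal / z + h / 2).astype(int)
    out = np.zeros((h, w, 3))
    zbuf = np.full((h, w), np.inf)
    for i in np.flatnonzero((u >= 0) & (u < w) & (v >= 0) & (v < h)):
        if z[i] < zbuf[v[i], u[i]]:          # naive z-buffered point splatting
            zbuf[v[i], u[i]] = z[i]
            out[v[i], u[i]] = colors[i]
    return out   # unfilled pixels are the occlusion gaps the method would inpaint

# Toy usage: render one view along a (hypothetical) camera path.
img = np.random.rand(48, 64, 3)
depth = 1.0 + np.tile(np.linspace(0, 1, 64), (48, 1))        # stand-in depth map
pts, cols = backproject(img, depth, focal=80.0)
pose = np.hstack([np.eye(3), [[0.05], [0.0], [0.0]]])         # small camera shift
frame = render_view(pts, cols, pose, focal=80.0, size=(48, 64))
```

Interpolating several poses between the identity camera and the extremal shift, rendering each, and assembling the frames gives the kind of 3D motion effect the abstract describes.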
  • Patent number: 10997464
    Abstract: Digital image layout training is described using wireframe rendering within a generative adversarial network (GAN) system. The GAN system is employed to train a generator module to refine digital image layouts. To do so, a wireframe rendering discriminator module rasterizes a refined training digital image layout received from the generator module into a wireframe digital image layout. The wireframe digital image layout is then compared with at least one ground truth digital image layout using a loss function as part of machine learning by the wireframe rendering discriminator module. The generator module is then trained by backpropagating a result of the comparison.
    Type: Grant
    Filed: November 9, 2018
    Date of Patent: May 4, 2021
    Assignee: Adobe Inc.
    Inventors: Jimei Yang, Jianming Zhang, Aaron Phillip Hertzmann, Jianan Li
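As a concrete illustration of the wireframe-rendering idea in this entry (and only an illustration: the module sizes, loss, and soft rasterizer below are assumptions, not the disclosed system), here is a PyTorch sketch in which a generator refines layout boxes, a differentiable rasterizer turns boxes into a wireframe image, and a CNN discriminator's score is backpropagated to the generator.

```python
import torch
import torch.nn as nn

def soft_wireframe(boxes, size=64, sharpness=200.0):
    """Differentiably rasterize (N, 4) boxes [x0, y0, x1, y1] in [0, 1] into a
    (1, size, size) wireframe image (bright pixels near box edges)."""
    ys, xs = torch.meshgrid(torch.linspace(0, 1, size),
                            torch.linspace(0, 1, size), indexing="ij")
    img = torch.zeros(size, size)
    for x0, y0, x1, y1 in boxes:
        cx = torch.minimum(torch.maximum(xs, x0), x1)   # nearest points on the
        cy = torch.minimum(torch.maximum(ys, y0), y1)   # horizontal/vertical edges
        d = torch.min(torch.stack([(xs - cx) ** 2 + (ys - y0) ** 2,
                                   (xs - cx) ** 2 + (ys - y1) ** 2,
                                   (ys - cy) ** 2 + (xs - x0) ** 2,
                                   (ys - cy) ** 2 + (xs - x1) ** 2]), dim=0).values
        img = torch.max(img, torch.exp(-sharpness * d))
    return img.unsqueeze(0)

generator = nn.Sequential(nn.Linear(4, 64), nn.ReLU(),
                          nn.Linear(64, 4), nn.Sigmoid())
discriminator = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 1))
bce = nn.BCEWithLogitsLoss()

# One hypothetical training step: the wireframe of the refined layout is scored
# and the result of the comparison is backpropagated into the generator.
refined_boxes = generator(torch.rand(5, 4))            # refine 5 rough layout boxes
fake = soft_wireframe(refined_boxes).unsqueeze(0)      # (1, 1, 64, 64)
g_loss = bce(discriminator(fake), torch.ones(1, 1))
g_loss.backward()                                      # gradients reach the generator

# The discriminator itself is trained against rasterized ground-truth layouts.
real = soft_wireframe(torch.rand(5, 4)).unsqueeze(0)
d_loss = (bce(discriminator(real), torch.ones(1, 1)) +
          bce(discriminator(fake.detach()), torch.zeros(1, 1)))
d_loss.backward()
```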
  • Patent number: 10964100
    Abstract: According to one general aspect, systems and techniques for rendering a painting stroke of a three-dimensional digital painting include receiving a painting stroke input on a canvas, where the painting stroke includes a plurality of pixels. For each of the pixels in the plurality of pixels, a neighborhood patch of pixels is selected and input into a neural network and a shading function is output from the neural network. The painting stroke is rendered on the canvas using the shading function.
    Type: Grant
    Filed: September 10, 2018
    Date of Patent: March 30, 2021
    Assignee: Adobe Inc.
    Inventors: Xin Sun, Zhili Chen, Nathan Carr, Julio Marco Murria, Jimei Yang
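A hedged PyTorch sketch of the per-pixel idea described above: a small stand-in network maps the neighborhood patch around each stroke pixel to shading coefficients that are then used to shade that pixel on the canvas. The patch size, network shape, and final shading formula are illustrative assumptions, not the trained renderer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

PATCH = 9  # neighborhood size around each stroke pixel (an assumption)

shading_net = nn.Sequential(               # neighborhood patch -> shading parameters
    nn.Linear(PATCH * PATCH, 64), nn.ReLU(),
    nn.Linear(64, 3))

def render_stroke(stroke_height, base_color):
    """Shade every pixel of a stroke height map (H, W) using the network."""
    h, w = stroke_height.shape
    pad = PATCH // 2
    padded = F.pad(stroke_height, (pad, pad, pad, pad))
    patches = padded.unfold(0, PATCH, 1).unfold(1, PATCH, 1)      # (H, W, P, P)
    coeffs = shading_net(patches.reshape(h * w, PATCH * PATCH))   # per-pixel output
    shade = coeffs.softmax(dim=-1)[:, 0].reshape(h, w)            # toy shading term
    return shade.unsqueeze(-1) * base_color                       # (H, W, 3) pixels

rendered = render_stroke(torch.rand(32, 32), torch.tensor([0.8, 0.2, 0.1]))
```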
  • Patent number: 10964084
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a digital animation of a digital animation character by utilizing a generative adversarial network and a hip motion prediction network. For example, the disclosed systems can utilize an unconditional generative adversarial network to generate a sequence of local poses of a digital animation character based on an input of a random code vector. The disclosed systems can also utilize a conditional generative adversarial network to generate a sequence of local poses based on an input of a set of keyframes. Based on the sequence of local poses, the disclosed systems can utilize a hip motion prediction network to generate a sequence of global poses based on hip velocities. In addition, the disclosed systems can generate an animation of a digital animation character based on the sequence of global poses.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: March 30, 2021
    Assignee: Adobe Inc.
    Inventors: Jingwan Lu, Yi Zhou, Connelly Barnes, Jimei Yang
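The two stages described above (generating a sequence of local poses from a random code vector, then predicting hip velocities to recover global poses) can be pictured with the toy PyTorch sketch below. The layer sizes, the GRU-based hip network, and the sequence length are assumptions, and the conditional keyframe variant is omitted.

```python
import torch
import torch.nn as nn

SEQ_LEN, JOINTS, CODE = 60, 24, 128

# Unconditional generator: random code vector -> sequence of local poses.
pose_generator = nn.Sequential(
    nn.Linear(CODE, 512), nn.ReLU(),
    nn.Linear(512, SEQ_LEN * JOINTS * 3))

# Hip motion prediction: local pose sequence -> per-frame hip velocity.
hip_net = nn.GRU(input_size=JOINTS * 3, hidden_size=128, batch_first=True)
hip_head = nn.Linear(128, 3)

z = torch.randn(1, CODE)                               # random code vector
local = pose_generator(z).view(1, SEQ_LEN, JOINTS * 3)
hidden, _ = hip_net(local)
hip_velocity = hip_head(hidden)                        # (1, SEQ_LEN, 3)
hip_position = torch.cumsum(hip_velocity, dim=1)       # integrate the velocities
# Global poses = local poses translated by the predicted hip trajectory.
global_poses = local.view(1, SEQ_LEN, JOINTS, 3) + hip_position.unsqueeze(2)
```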
  • Publication number: 20210082124
    Abstract: In some embodiments, an image manipulation application receives an incomplete image that includes a hole area lacking image content. The image manipulation application applies a contour detection operation to the incomplete image to detect an incomplete contour of a foreground object in the incomplete image. The hole area prevents the contour detection operation from detecting a completed contour of the foreground object. The image manipulation application further applies a contour completion model to the incomplete contour and the incomplete image to generate the completed contour for the foreground object. Based on the completed contour and the incomplete image, the image manipulation application generates image content for the hole area to generate a completed image.
    Type: Application
    Filed: November 24, 2020
    Publication date: March 18, 2021
    Inventors: Zhe Lin, Wei Xiong, Connelly Barnes, Jimei Yang, Xin Lu
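A rough PyTorch sketch of the three-step flow this abstract describes. The single convolutional layers stand in for the trained contour detection operation, contour completion model, and image completion network, and the shapes are illustrative.

```python
import torch
import torch.nn as nn

contour_detector = nn.Conv2d(3, 1, 3, padding=1)    # stand-in for the trained detector
contour_completer = nn.Conv2d(2, 1, 3, padding=1)   # stand-in contour completion model
inpainter = nn.Conv2d(5, 3, 3, padding=1)           # stand-in image completion network

def complete(image, hole_mask):
    """image: (1, 3, H, W); hole_mask: (1, 1, H, W) with 1 inside the hole."""
    visible = image * (1 - hole_mask)
    # 1. The detector only sees visible pixels, so the contour it finds is incomplete.
    incomplete_contour = torch.sigmoid(contour_detector(visible)) * (1 - hole_mask)
    # 2. The contour completion model predicts the contour across the hole area.
    completed_contour = torch.sigmoid(
        contour_completer(torch.cat([incomplete_contour, hole_mask], dim=1)))
    # 3. Image content for the hole is generated conditioned on the completed contour.
    filled = inpainter(torch.cat([visible, hole_mask, completed_contour], dim=1))
    return visible + filled * hole_mask              # known pixels stay untouched

mask = torch.zeros(1, 1, 64, 64)
mask[..., 20:44, 20:44] = 1                          # the hole area lacking content
completed = complete(torch.rand(1, 3, 64, 64), mask)
```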
  • Patent number: 10922852
    Abstract: Oil painting simulation techniques are disclosed which simulate painting brush strokes using a trained neural network. In some examples, a method may include inferring a new height map of existing paint on a canvas after a new painting brush stroke is applied based on a bristle trajectory map that represents the new painting brush stroke and a height map of existing paint on the canvas prior to the application of the new painting brush stroke, and generating a rendering of the new painting brush stroke based on the new height map of existing paint on the canvas after the new painting brush stroke is applied to the canvas and a color map.
    Type: Grant
    Filed: August 13, 2019
    Date of Patent: February 16, 2021
    Assignee: Adobe Inc.
    Inventors: Zhili Chen, Zhaowen Wang, Rundong Wu, Jimei Yang
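Schematically, the inference step described above can be sketched as follows in PyTorch. The small convolutional network, the gradient-based shading, and all parameters are illustrative stand-ins rather than the trained simulator.

```python
import torch
import torch.nn as nn

height_net = nn.Sequential(                # (bristle trajectory, old height) -> new height
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1))

def apply_stroke(trajectory, old_height, color_map, light=(0.5, 0.5)):
    """Infer the post-stroke height map, then shade the stroke from its gradient."""
    new_height = height_net(torch.cat([trajectory, old_height], dim=1))
    dy = new_height[..., 1:, :] - new_height[..., :-1, :]     # height-field gradients
    dx = new_height[..., :, 1:] - new_height[..., :, :-1]
    shade = 1.0 + light[0] * dx[..., :-1, :] + light[1] * dy[..., :, :-1]
    return new_height, color_map[..., :-1, :-1] * shade       # toy shaded rendering

trajectory = torch.zeros(1, 1, 64, 64)
trajectory[..., 30:34, 10:50] = 1.0                           # one brush stroke
new_height, rendering = apply_stroke(trajectory, torch.zeros(1, 1, 64, 64),
                                     torch.ones(1, 3, 64, 64))
```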
  • Patent number: 10922860
    Abstract: Computing systems and computer-implemented methods can be used for automatically generating a digital line drawing of the contents of a photograph. In various examples, these techniques include use of a neural network, referred to as a generator network, that is trained on a dataset of photographs and human-generated line drawings of the photographs. The training data set teaches the neural network to trace the edges and features of objects in the photographs, as well as which edges or features can be ignored. The output of the generator network is a two-tone digital image, where the background of the image is one tone, and the contents in the input photographs are represented by lines drawn in the second tone. In some examples, a second neural network, referred to as a restorer network, can further process the output of the generator network, and remove visual artifacts and clean up the lines.
    Type: Grant
    Filed: May 13, 2019
    Date of Patent: February 16, 2021
    Assignee: Adobe Inc.
    Inventors: Brian Price, Ning Xu, Naoto Inoue, Jimei Yang, Daichi Ito
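The generator/restorer pipeline described above reduces to two image-to-image networks applied in sequence. The minimal PyTorch sketch below uses placeholder convolutional stacks and a simple threshold to produce the two-tone output; it illustrates the data flow only, not the trained models.

```python
import torch
import torch.nn as nn

generator = nn.Sequential(               # photograph -> rough line drawing
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())

restorer = nn.Sequential(                # rough drawing -> cleaned drawing
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())

photo = torch.rand(1, 3, 256, 256)
rough = generator(photo)                 # continuous [0, 1] line strengths
clean = restorer(rough)                  # artifacts removed, lines cleaned up
two_tone = (clean > 0.5).float()         # one tone for background, one for lines
```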
  • Publication number: 20200410736
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a digital animation of a digital animation character by utilizing a generative adversarial network and a hip motion prediction network. For example, the disclosed systems can utilize an unconditional generative adversarial network to generate a sequence of local poses of a digital animation character based on an input of a random code vector. The disclosed systems can also utilize a conditional generative adversarial network to generate a sequence of local poses based on an input of a set of keyframes. Based on the sequence of local poses, the disclosed systems can utilize a hip motion prediction network to generate a sequence of global poses based on hip velocities. In addition, the disclosed systems can generate an animation of a digital animation character based on the sequence of global poses.
    Type: Application
    Filed: June 25, 2019
    Publication date: December 31, 2020
    Inventors: Jingwan Lu, Yi Zhou, Connelly Barnes, Jimei Yang
  • Patent number: 10878575
    Abstract: In some embodiments, an image manipulation application receives an incomplete image that includes a hole area lacking image content. The image manipulation application applies a contour detection operation to the incomplete image to detect an incomplete contour of a foreground object in the incomplete image. The hole area prevents the contour detection operation from detecting a completed contour of the foreground object. The image manipulation application further applies a contour completion model to the incomplete contour and the incomplete image to generate the completed contour for the foreground object. Based on the completed contour and the incomplete image, the image manipulation application generates image content for the hole area to generate a completed image.
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: December 29, 2020
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Wei Xiong, Connelly Barnes, Jimei Yang, Xin Lu
  • Publication number: 20200364910
    Abstract: Computing systems and computer-implemented methods can be used for automatically generating a digital line drawing of the contents of a photograph. In various examples, these techniques include use of a neural network, referred to as a generator network, that is trained on a dataset of photographs and human-generated line drawings of the photographs. The training data set teaches the neural network to trace the edges and features of objects in the photographs, as well as which edges or features can be ignored. The output of the generator network is a two-tone digital image, where the background of the image is one tone, and the contents in the input photographs are represented by lines drawn in the second tone. In some examples, a second neural network, referred to as a restorer network, can further process the output of the generator network, and remove visual artifacts and clean up the lines.
    Type: Application
    Filed: May 13, 2019
    Publication date: November 19, 2020
    Inventors: Brian Price, Ning Xu, Naoto Inoue, Jimei Yang, Daichi Ito
  • Patent number: 10839575
    Abstract: Certain embodiments involve using an image completion neural network to perform user-guided image completion. For example, an image editing application accesses an input image having a completion region to be replaced with new image content. The image editing application also receives a guidance input that is applied to a portion of a completion region. The image editing application provides the input image and the guidance input to an image completion neural network that is trained to perform image-completion operations using guidance input. The image editing application produces a modified image by replacing the completion region of the input image with the new image content generated with the image completion network. The image editing application outputs the modified image having the new image content.
    Type: Grant
    Filed: March 15, 2018
    Date of Patent: November 17, 2020
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xin Lu, Xiaohui Shen, Jimei Yang, Jiahui Yu
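The guided-completion interface described above can be sketched as a network that consumes the masked image, the completion-region mask, and a user-drawn guidance channel. Everything below (the single-layer stand-in network, the tensor shapes, the stroke pattern) is an illustrative assumption rather than the trained image completion neural network.

```python
import torch
import torch.nn as nn

completion_net = nn.Conv2d(3 + 1 + 1, 3, 3, padding=1)   # image + mask + guidance

def guided_complete(image, hole_mask, guidance):
    """guidance: user strokes (e.g. sketched edges) drawn inside the completion region."""
    masked = image * (1 - hole_mask)
    pred = completion_net(torch.cat([masked, hole_mask, guidance], dim=1))
    return masked + pred * hole_mask      # only the completion region is replaced

img = torch.rand(1, 3, 128, 128)
mask = torch.zeros(1, 1, 128, 128)
mask[..., 40:90, 40:90] = 1               # completion region
strokes = torch.zeros(1, 1, 128, 128)
strokes[..., 60:62, 40:90] = 1            # guidance input inside the region
result = guided_complete(img, mask, strokes)
```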
  • Patent number: 10839493
    Abstract: In implementations of transferring image style to content of a digital image, an image editing system includes an encoder that extracts features from a content image and features from a style image. A whitening and color transform generates coarse features from the content and style features extracted by the encoder for one pass of encoding and decoding. Hence, the processing delay and memory requirements are low. A feature transfer module iteratively transfers style features to the coarse feature map and generates a fine feature map. The image editing system fuses the fine features with the coarse features, and a decoder generates an output image with content of the content image in a style of the style image from the fused features. Accordingly, the image editing system efficiently transfers an image style to image content in real-time, without undesirable artifacts in the output image.
    Type: Grant
    Filed: January 11, 2019
    Date of Patent: November 17, 2020
    Assignee: Adobe Inc.
    Inventors: Chen Fang, Zhe Lin, Zhaowen Wang, Yulun Zhang, Yilin Wang, Jimei Yang
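The whitening and color transform named in this abstract is the one step compact enough to sketch numerically: whiten the content features to remove their covariance, then color them with the style covariance. The sketch below uses random tensors in place of real encoder features and omits the iterative feature-transfer and fusion stages described in the entry.

```python
import torch

def whiten_color(content_feat, style_feat, eps=1e-5):
    """Match the covariance of content features (C, H*W) to the style features."""
    def center_and_cov(f):
        f = f - f.mean(dim=1, keepdim=True)
        return f, f @ f.t() / (f.shape[1] - 1) + eps * torch.eye(f.shape[0])
    fc, cov_c = center_and_cov(content_feat)
    fs, cov_s = center_and_cov(style_feat)
    ec, vc = torch.linalg.eigh(cov_c)     # eigendecomposition of each covariance
    es, vs = torch.linalg.eigh(cov_s)
    # Whitening removes the content covariance; coloring applies the style covariance.
    whitened = vc @ torch.diag(ec.clamp(min=eps).rsqrt()) @ vc.t() @ fc
    colored = vs @ torch.diag(es.clamp(min=eps).sqrt()) @ vs.t() @ whitened
    return colored + style_feat.mean(dim=1, keepdim=True)

# Stand-in encoder features: 64 channels over a 32x32 feature map.
coarse = whiten_color(torch.randn(64, 32 * 32), torch.randn(64, 32 * 32))
```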
  • Publication number: 20200349688
    Abstract: A style of a digital image is transferred to another digital image of arbitrary resolution. A high-resolution (HR) content image is segmented into several low-resolution (LR) patches. The resolution of a style image is matched to have the same resolution as the LR content image patches. Style transfer is then performed on a patch-by-patch basis using, for example, a pair of feature transforms—whitening and coloring. The patch-by-patch style transfer process is then repeated at several increasing resolutions, or scale levels, of both the content and style images. The results of the style transfer at each scale level are incorporated into successive scale levels up to and including the original HR scale. As a result, style transfer can be performed with images having arbitrary resolutions to produce visually pleasing results with good spatial consistency.
    Type: Application
    Filed: July 16, 2020
    Publication date: November 5, 2020
    Applicant: Adobe Inc.
    Inventors: Chen Fang, Zhe Lin, Zhaowen Wang, Yulun Zhang, Yilin Wang, Jimei Yang
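A schematic PyTorch sketch of the coarse-to-fine, patch-by-patch procedure described above. `stylize_patch` is a deliberately simple stand-in (a mean/variance match) for the actual per-patch feature transforms, and the scale levels and patch size are assumptions.

```python
import torch
import torch.nn.functional as F

def stylize_patch(content_patch, style_patch):
    """Placeholder per-patch transfer: match the patch statistics to the style."""
    c_mean, c_std = content_patch.mean(), content_patch.std() + 1e-5
    s_mean, s_std = style_patch.mean(), style_patch.std() + 1e-5
    return (content_patch - c_mean) / c_std * s_std + s_mean

def multiscale_style_transfer(content, style, patch=64, levels=(0.25, 0.5, 1.0)):
    result = content.clone()
    for scale in levels:                                  # coarse -> fine scale levels
        h = int(content.shape[-2] * scale)
        w = int(content.shape[-1] * scale)
        level = F.interpolate(result, size=(h, w), mode="bilinear",
                              align_corners=False)
        style_lr = F.interpolate(style, size=(patch, patch), mode="bilinear",
                                 align_corners=False)     # style matched to patch size
        for y in range(0, h, patch):                      # patch-by-patch transfer
            for x in range(0, w, patch):
                tile = level[..., y:y + patch, x:x + patch]
                level[..., y:y + patch, x:x + patch] = stylize_patch(tile, style_lr)
        # Carry this level's result forward into the next, finer scale level.
        result = F.interpolate(level, size=content.shape[-2:], mode="bilinear",
                               align_corners=False)
    return result

out = multiscale_style_transfer(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))
```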
  • Publication number: 20200342576
    Abstract: Digital image completion by learning generation and patch matching jointly is described. Initially, a digital image having at least one hole is received. This holey digital image is provided as input to an image completer formed with a dual-stage framework that combines a coarse image neural network and an image refinement network. The coarse image neural network generates a coarse prediction of imagery for filling the holes of the holey digital image. The image refinement network receives the coarse prediction as input, refines the coarse prediction, and outputs a filled digital image having refined imagery that fills these holes. The image refinement network generates refined imagery using a patch matching technique, which includes leveraging information corresponding to patches of known pixels for filtering patches generated based on the coarse prediction. Based on this, the image completer outputs the filled digital image with the refined imagery.
    Type: Application
    Filed: July 14, 2020
    Publication date: October 29, 2020
    Applicant: Adobe Inc.
    Inventors: Zhe Lin, Xin Lu, Xiaohui Shen, Jimei Yang, Jiahui Yu
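The dual-stage framework described above pairs a coarse fill with a patch-matching refinement. Below is a simplified PyTorch sketch: a placeholder network produces the coarse prediction, and the refinement step swaps each coarse patch for its most similar fully-known patch by cosine similarity, a crude stand-in for the described patch-matching technique.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

coarse_net = nn.Conv2d(4, 3, 3, padding=1)      # masked image + mask -> coarse fill

def patch_match_refine(coarse, known_mask, patch=8):
    """Replace each hole patch with its closest fully-known patch."""
    unfold = nn.Unfold(kernel_size=patch, stride=patch)
    fold = nn.Fold(output_size=coarse.shape[-2:], kernel_size=patch, stride=patch)
    patches = unfold(coarse)                        # (1, 3*patch*patch, L)
    keep = unfold(known_mask).mean(dim=1) > 0.99    # (1, L): fully-known patches
    known = patches[:, :, keep[0]]
    sim = F.normalize(patches, dim=1).transpose(1, 2) @ F.normalize(known, dim=1)
    best = sim.argmax(dim=-1)                       # closest known patch per location
    refined = known[:, :, best[0]]                  # copy the matched known patches
    out = torch.where(keep.unsqueeze(1), patches, refined)
    return fold(out)

image = torch.rand(1, 3, 64, 64)
mask = torch.ones(1, 1, 64, 64)
mask[..., 16:48, 16:48] = 0                         # 0 marks the hole
coarse = coarse_net(torch.cat([image * mask, mask], dim=1))
refined = patch_match_refine(coarse, mask)
```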
  • Publication number: 20200334894
    Abstract: Systems and methods are described for generating a three dimensional (3D) effect from a two dimensional (2D) image. The methods may include generating a depth map based on a 2D image, identifying a camera path, generating one or more extremal views based on the 2D image and the camera path, generating a global point cloud by inpainting occlusion gaps in the one or more extremal views, generating one or more intermediate views based on the global point cloud and the camera path, and combining the one or more extremal views and the one or more intermediate views to produce a 3D motion effect.
    Type: Application
    Filed: April 18, 2019
    Publication date: October 22, 2020
    Inventors: Mai Long, Simon Niklaus, Jimei Yang
  • Publication number: 20200327675
    Abstract: In some embodiments, an image manipulation application receives an incomplete image that includes a hole area lacking image content. The image manipulation application applies a contour detection operation to the incomplete image to detect an incomplete contour of a foreground object in the incomplete image. The hole area prevents the contour detection operation from detecting a completed contour of the foreground object. The image manipulation application further applies a contour completion model to the incomplete contour and the incomplete image to generate the completed contour for the foreground object. Based on the completed contour and the incomplete image, the image manipulation application generates image content for the hole area to generate a completed image.
    Type: Application
    Filed: April 15, 2019
    Publication date: October 15, 2020
    Inventors: Zhe Lin, Wei Xiong, Connelly Barnes, Jimei Yang, Xin Lu
  • Patent number: 10769764
    Abstract: A style of a digital image is transferred to another digital image of arbitrary resolution. A high-resolution (HR) content image is segmented into several low-resolution (LR) patches. The resolution of a style image is matched to have the same resolution as the LR content image patches. Style transfer is then performed on a patch-by-patch basis using, for example, a pair of feature transforms—whitening and coloring. The patch-by-patch style transfer process is then repeated at several increasing resolutions, or scale levels, of both the content and style images. The results of the style transfer at each scale level are incorporated into successive scale levels up to and including the original HR scale. As a result, style transfer can be performed with images having arbitrary resolutions to produce visually pleasing results with good spatial consistency.
    Type: Grant
    Filed: February 8, 2019
    Date of Patent: September 8, 2020
    Assignee: Adobe Inc.
    Inventors: Chen Fang, Zhe Lin, Zhaowen Wang, Yulun Zhang, Yilin Wang, Jimei Yang
  • Patent number: 10755391
    Abstract: Digital image completion by learning generation and patch matching jointly is described. Initially, a digital image having at least one hole is received. This holey digital image is provided as input to an image completer formed with a dual-stage framework that combines a coarse image neural network and an image refinement network. The coarse image neural network generates a coarse prediction of imagery for filling the holes of the holey digital image. The image refinement network receives the coarse prediction as input, refines the coarse prediction, and outputs a filled digital image having refined imagery that fills these holes. The image refinement network generates refined imagery using a patch matching technique, which includes leveraging information corresponding to patches of known pixels for filtering patches generated based on the coarse prediction. Based on this, the image completer outputs the filled digital image with the refined imagery.
    Type: Grant
    Filed: May 15, 2018
    Date of Patent: August 25, 2020
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xin Lu, Xiaohui Shen, Jimei Yang, Jiahui Yu
  • Publication number: 20200258204
    Abstract: A style of a digital image is transferred to another digital image of arbitrary resolution. A high-resolution (HR) content image is segmented into several low-resolution (LR) patches. The resolution of a style image is matched to have the same resolution as the LR content image patches. Style transfer is then performed on a patch-by-patch basis using, for example, a pair of feature transforms—whitening and coloring. The patch-by-patch style transfer process is then repeated at several increasing resolutions, or scale levels, of both the content and style images. The results of the style transfer at each scale level are incorporated into successive scale levels up to and including the original HR scale. As a result, style transfer can be performed with images having arbitrary resolutions to produce visually pleasing results with good spatial consistency.
    Type: Application
    Filed: February 8, 2019
    Publication date: August 13, 2020
    Applicant: Adobe Inc.
    Inventors: Chen Fang, Zhe Lin, Zhaowen Wang, Yulun Zhang, Yilin Wang, Jimei Yang
  • Publication number: 20200226724
    Abstract: In implementations of transferring image style to content of a digital image, an image editing system includes an encoder that extracts features from a content image and features from a style image. A whitening and color transform generates coarse features from the content and style features extracted by the encoder for one pass of encoding and decoding. Hence, the processing delay and memory requirements are low. A feature transfer module iteratively transfers style features to the coarse feature map and generates a fine feature map. The image editing system fuses the fine features with the coarse features, and a decoder generates an output image with content of the content image in a style of the style image from the fused features. Accordingly, the image editing system efficiently transfers an image style to image content in real-time, without undesirable artifacts in the output image.
    Type: Application
    Filed: January 11, 2019
    Publication date: July 16, 2020
    Applicant: Adobe Inc.
    Inventors: Chen Fang, Zhe Lin, Zhaowen Wang, Yulun Zhang, Yilin Wang, Jimei Yang