Patents by Inventor Jingwan Lu

Jingwan Lu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11636570
    Abstract: This disclosure describes one or more implementations of a digital image semantic layout manipulation system that generates refined digital images resembling the style of one or more input images while following the structure of an edited semantic layout. For example, in various implementations, the digital image semantic layout manipulation system builds and utilizes a sparse attention warped image neural network to generate high-resolution warped images and a digital image layout neural network to enhance and refine the high-resolution warped digital image into a realistic and accurate refined digital image.
    Type: Grant
    Filed: April 1, 2021
    Date of Patent: April 25, 2023
    Assignee: Adobe Inc.
    Inventors: Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Jianming Zhang, Ning Xu
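
A minimal sketch of the attention-based warping idea in the entry above, using dense rather than sparse attention for brevity; the function name, tensor sizes, and the absence of a refinement network are illustrative assumptions, not the patented architecture:

```python
# Dense cross-attention warp: features of the edited layout attend over features
# of the source image, and the warped image is an attention-weighted mix of
# source pixels. A refinement network would further clean this output.
import torch

def attention_warp(src_img, src_feat, tgt_feat):
    """src_img: (B, 3, H, W); src_feat, tgt_feat: (B, C, H, W)."""
    B, C, H, W = src_feat.shape
    q = tgt_feat.flatten(2).transpose(1, 2)        # (B, HW, C) queries from edited layout
    k = src_feat.flatten(2).transpose(1, 2)        # (B, HW, C) keys from source layout
    v = src_img.flatten(2).transpose(1, 2)         # (B, HW, 3) values are source pixels
    attn = torch.softmax(q @ k.transpose(1, 2) / C ** 0.5, dim=-1)   # (B, HW, HW)
    return (attn @ v).transpose(1, 2).reshape(B, 3, H, W)

src_img = torch.rand(1, 3, 32, 32)
src_feat, tgt_feat = torch.rand(1, 64, 32, 32), torch.rand(1, 64, 32, 32)
print(attention_warp(src_img, src_feat, tgt_feat).shape)   # torch.Size([1, 3, 32, 32])
```
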
  • Publication number: 20230102055
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a modified digital image from extracted spatial and global codes. For example, the disclosed systems can utilize a global and spatial autoencoder to extract spatial codes and global codes from digital images. The disclosed systems can further utilize the global and spatial autoencoder to generate a modified digital image by combining extracted spatial and global codes in various ways for various applications such as style swapping, style blending, and attribute editing.
    Type: Application
    Filed: November 22, 2022
    Publication date: March 30, 2023
    Inventors: Taesung Park, Richard Zhang, Oliver Wang, Junyan Zhu, Jingwan Lu, Elya Shechtman, Alexei A. Efros
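
A minimal sketch of the code-swapping idea in the entry above: each image yields a spatial (structure) code and a global (style) code, and recombining codes from different images produces a style swap. Layer sizes and names are assumptions, not the patented network:

```python
import torch
import torch.nn as nn

class SpatialGlobalAE(nn.Module):
    def __init__(self, ch=32, zdim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        self.to_spatial = nn.Conv2d(ch, ch, 1)      # structure: a feature map
        self.to_global = nn.Linear(ch, zdim)        # style: a single vector
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(ch + zdim, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1))

    def encode(self, x):
        h = self.backbone(x)
        return self.to_spatial(h), self.to_global(h.mean(dim=(2, 3)))

    def forward(self, spatial, glob):
        g = glob[:, :, None, None].expand(-1, -1, *spatial.shape[2:])
        return self.decode(torch.cat([spatial, g], dim=1))   # combine the two codes

ae = SpatialGlobalAE()
a, b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
spatial_a, _ = ae.encode(a)
_, global_b = ae.encode(b)
print(ae(spatial_a, global_b).shape)   # structure of `a`, style of `b`: (1, 3, 64, 64)
```
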
  • Publication number: 20230053588
    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate synthesized digital images via multi-resolution generator neural networks. The disclosed system extracts multi-resolution features from a scene representation to condition a spatial feature tensor and a latent code to modulate an output of a generator neural network. For example, the disclosed system utilizes a base encoder of the generator neural network to generate a feature set from a semantic label map of a scene. The disclosed system then utilizes a bottom-up encoder to extract multi-resolution features and generate a latent code from the feature set. Furthermore, the disclosed system determines a spatial feature tensor by utilizing a top-down encoder to up-sample and aggregate the multi-resolution features. The disclosed system then utilizes a decoder to generate a synthesized digital image based on the spatial feature tensor and the latent code.
    Type: Application
    Filed: August 12, 2021
    Publication date: February 23, 2023
    Inventors: Yuheng Li, Yijun Li, Jingwan Lu, Elya Shechtman, Krishna Kumar Singh
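
A minimal sketch of the bottom-up / top-down conditioning described in the entry above: features from a semantic label map are collected at several scales, reduced to a latent code, and aggregated back into a spatial tensor. Channel counts and the stand-in decoder are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiResConditioner(nn.Module):
    def __init__(self, n_classes=20, ch=32, zdim=64):
        super().__init__()
        self.base = nn.Conv2d(n_classes, ch, 3, padding=1)        # base encoder
        self.down1 = nn.Conv2d(ch, ch, 3, stride=2, padding=1)    # bottom-up encoder
        self.down2 = nn.Conv2d(ch, ch, 3, stride=2, padding=1)
        self.to_latent = nn.Linear(ch, zdim)
        self.fuse = nn.Conv2d(3 * ch, ch, 1)                      # top-down aggregation
        self.decoder = nn.Conv2d(ch, 3, 3, padding=1)             # stand-in decoder

    def forward(self, label_map):
        f0 = F.relu(self.base(label_map))                 # full resolution
        f1 = F.relu(self.down1(f0))                       # 1/2 resolution
        f2 = F.relu(self.down2(f1))                       # 1/4 resolution
        latent = self.to_latent(f2.mean(dim=(2, 3)))      # latent code from deepest features
        size = f0.shape[2:]
        spatial = self.fuse(torch.cat([                   # up-sample and aggregate
            f0,
            F.interpolate(f1, size=size, mode='bilinear', align_corners=False),
            F.interpolate(f2, size=size, mode='bilinear', align_corners=False)], dim=1))
        # A real decoder would be modulated by `latent`; here both are simply returned.
        return self.decoder(spatial), latent

labels = F.one_hot(torch.randint(0, 20, (1, 64, 64)), 20).permute(0, 3, 1, 2).float()
img, z = MultiResConditioner()(labels)
print(img.shape, z.shape)   # torch.Size([1, 3, 64, 64]) torch.Size([1, 64])
```
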
  • Publication number: 20230058793
    Abstract: The present disclosure relates to an image retouching system that automatically retouches digital images by accurately correcting face imperfections such as skin blemishes and redness. For instance, the image retouching system automatically retouches a digital image through separating digital images into multiple frequency layers, utilizing a separate corresponding neural network to apply frequency-specific corrections at various frequency layers, and combining the retouched frequency layers into a retouched digital image. As described herein, the image retouching system efficiently utilizes different neural networks to target and correct skin features specific to each frequency layer.
    Type: Application
    Filed: October 11, 2022
    Publication date: February 23, 2023
    Inventors: Federico Perazzi, Jingwan Lu
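
A minimal sketch of the frequency-layer idea in the entry above: split the image into a low- and a high-frequency layer, run a separate (here untrained, toy) network on each, and recombine. The two-band split and the tiny conv nets are assumptions; the patented system uses trained, layer-specific networks:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def split_frequencies(img, kernel=9):
    low = F.avg_pool2d(img, kernel, stride=1, padding=kernel // 2)   # blurred = low frequencies
    high = img - low                                                 # residual = high frequencies
    return low, high

low_net = nn.Conv2d(3, 3, 3, padding=1)    # would correct color/redness on the low band
high_net = nn.Conv2d(3, 3, 3, padding=1)   # would correct blemishes/texture on the high band

img = torch.rand(1, 3, 128, 128)
low, high = split_frequencies(img)
retouched = low_net(low) + high_net(high)  # recombine the retouched layers
print(retouched.shape)                     # torch.Size([1, 3, 128, 128])
```
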
  • Publication number: 20230051749
    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate synthesized digital images using class-specific generators for objects of different classes. The disclosed system modifies a synthesized digital image by utilizing a plurality of class-specific generator neural networks to generate a plurality of synthesized objects according to object classes identified in the synthesized digital image. The disclosed system determines object classes in the synthesized digital image, such as via a semantic label map corresponding to the synthesized digital image. The disclosed system selects class-specific generator neural networks corresponding to the classes of objects in the synthesized digital image. The disclosed system also generates a plurality of synthesized objects utilizing the class-specific generator neural networks based on contextual data associated with the identified objects.
    Type: Application
    Filed: August 12, 2021
    Publication date: February 16, 2023
    Inventors: Yuheng Li, Yijun Li, Jingwan Lu, Elya Shechtman, Krishna Kumar Singh
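
A minimal sketch of compositing per-class generator outputs, as in the entry above: each object class in a semantic label map is re-synthesized by its own generator and pasted back using the class mask. The per-class generators are untrained placeholders, and the contextual conditioning is omitted:

```python
import torch
import torch.nn as nn

class_ids = {1: "person", 2: "car"}
generators = nn.ModuleDict({str(c): nn.Conv2d(3, 3, 3, padding=1) for c in class_ids})

def refine_by_class(image, label_map):
    out = image.clone()
    for c in class_ids:
        mask = (label_map == c).unsqueeze(1).float()   # (B, 1, H, W) class mask
        synthesized = generators[str(c)](image)        # class-specific generator
        out = mask * synthesized + (1 - mask) * out    # paste the object back
    return out

image = torch.rand(1, 3, 64, 64)
label_map = torch.randint(0, 3, (1, 64, 64))
print(refine_by_class(image, label_map).shape)         # torch.Size([1, 3, 64, 64])
```
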
  • Patent number: 11544880
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a modified digital image from extracted spatial and global codes. For example, the disclosed systems can utilize a global and spatial autoencoder to extract spatial codes and global codes from digital images. The disclosed systems can further utilize the global and spatial autoencoder to generate a modified digital image by combining extracted spatial and global codes in various ways for various applications such as style swapping, style blending, and attribute editing.
    Type: Grant
    Filed: May 14, 2020
    Date of Patent: January 3, 2023
    Assignee: Adobe Inc.
    Inventors: Taesung Park, Richard Zhang, Oliver Wang, Junyan Zhu, Jingwan Lu, Elya Shechtman, Alexei A. Efros
  • Patent number: 11521299
    Abstract: The present disclosure relates to an image retouching system that automatically retouches digital images by accurately correcting face imperfections such as skin blemishes and redness. For instance, the image retouching system automatically retouches a digital image through separating digital images into multiple frequency layers, utilizing a separate corresponding neural network to apply frequency-specific corrections at various frequency layers, and combining the retouched frequency layers into a retouched digital image. As described herein, the image retouching system efficiently utilizes different neural networks to target and correct skin features specific to each frequency layer.
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: December 6, 2022
    Assignee: Adobe Inc.
    Inventors: Federico Perazzi, Jingwan Lu
  • Patent number: 11508148
    Abstract: The present disclosure relates to systems, computer-implemented methods, and non-transitory computer readable media for automatically transferring makeup from a reference face image to a target face image using a neural network trained using semi-supervised learning. For example, the disclosed systems can receive, at a neural network, a target face image and a reference face image, where the target face image is selected by a user via a graphical user interface (GUI) and the reference face image has makeup. The systems transfer, by the neural network, the makeup from the reference face image to the target face image, where the neural network is trained to transfer the makeup from the reference face image to the target face image using semi-supervised learning. The systems output for display the makeup on the target face image.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: November 22, 2022
    Assignee: Adobe Inc.
    Inventors: Yijun Li, Zhifei Zhang, Richard Zhang, Jingwan Lu
  • Publication number: 20220327657
    Abstract: This disclosure describes one or more implementations of a digital image semantic layout manipulation system that generates refined digital images resembling the style of one or more input images while following the structure of an edited semantic layout. For example, in various implementations, the digital image semantic layout manipulation system builds and utilizes a sparse attention warped image neural network to generate high-resolution warped images and a digital image layout neural network to enhance and refine the high-resolution warped digital image into a realistic and accurate refined digital image.
    Type: Application
    Filed: April 1, 2021
    Publication date: October 13, 2022
    Inventors: Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Jianming Zhang, Ning Xu
  • Publication number: 20220270310
    Abstract: The present disclosure describes systems, methods, and non-transitory computer readable media for detecting user interactions to edit a digital image from a client device and modify the digital image for the client device by using a web-based intermediary that modifies a latent vector of the digital image and an image modification neural network to generate a modified digital image from the modified latent vector. In response to user interaction to modify a digital image, for instance, the disclosed systems modify a latent vector extracted from the digital image to reflect the requested modification. The disclosed systems further use a latent vector stream renderer (as an intermediary device) to generate an image delta that indicates a difference between the digital image and the modified digital image. The disclosed systems then provide the image delta as part of a digital stream to a client device to quickly render the modified digital image.
    Type: Application
    Filed: February 23, 2021
    Publication date: August 25, 2022
    Inventors: Akhilesh Kumar, Baldo Faieta, Piotr Walczyszyn, Ratheesh Kalarot, Archie Bagnall, Shabnam Ghadar, Wei-An Lin, Cameron Smith, Christian Cantrell, Patrick Hebron, Wilson Chan, Jingwan Lu, Holger Winnemoeller, Sven Olsen
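
A minimal sketch of the image-delta idea in the entry above: the intermediary renders the modified image, sends only the signed difference from the previous frame, and the client adds the delta to the image it already has. Plain NumPy; the transport and encoding details are assumptions:

```python
import numpy as np

def make_delta(previous, modified):
    """Intermediary side: signed per-pixel difference between frames."""
    return modified.astype(np.int16) - previous.astype(np.int16)

def apply_delta(previous, delta):
    """Client side: reconstruct the modified image from the delta."""
    return np.clip(previous.astype(np.int16) + delta, 0, 255).astype(np.uint8)

previous = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
modified = previous.copy()
modified[100:150, 100:150] = 0                    # e.g. an edit changed only this region
delta = make_delta(previous, modified)            # mostly zeros, so it streams compactly
assert np.array_equal(apply_delta(previous, delta), modified)
```
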
  • Publication number: 20220261972
    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that utilize image-guided model inversion of an image classifier with a discriminator. The disclosed systems utilize a neural network image classifier to encode features of an initial image and a target image. The disclosed system also reduces a feature distance between the features of the initial image and the features of the target image at a plurality of layers of the neural network image classifier by utilizing a feature distance regularizer. Additionally, the disclosed system reduces a patch difference between image patches of the initial image and image patches of the target image by utilizing a patch-based discriminator with a patch consistency regularizer. The disclosed system then generates a synthesized digital image based on the constrained feature set and constrained image patches of the initial image.
    Type: Application
    Filed: February 18, 2021
    Publication date: August 18, 2022
    Inventors: Pei Wang, Yijun Li, Jingwan Lu, Krishna Kumar Singh
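
A minimal sketch of the two regularizers described in the entry above: a feature-distance term computed at several classifier layers, and a patch-difference term on local patches. The tiny stand-in backbone, the patch size, and the use of a direct patch MSE in place of a patch-based discriminator are all assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

layers = nn.ModuleList([                          # stand-in classifier backbone
    nn.Conv2d(3, 16, 3, stride=2, padding=1),
    nn.Conv2d(16, 32, 3, stride=2, padding=1),
    nn.Conv2d(32, 64, 3, stride=2, padding=1)])

def multi_layer_features(x):
    feats = []
    for layer in layers:
        x = F.relu(layer(x))
        feats.append(x)
    return feats

def feature_distance_loss(img, target):
    """Pull features of `img` toward features of `target` at every layer."""
    return sum(F.mse_loss(a, b) for a, b in
               zip(multi_layer_features(img), multi_layer_features(target)))

def patch_difference_loss(img, target, patch=8):
    """Compare local patches (a stand-in for the patch-based discriminator term)."""
    pa = F.unfold(img, patch, stride=patch)        # (B, 3*patch*patch, n_patches)
    pb = F.unfold(target, patch, stride=patch)
    return F.mse_loss(pa, pb)

img = torch.rand(1, 3, 64, 64, requires_grad=True)    # image being optimized
target = torch.rand(1, 3, 64, 64)
loss = feature_distance_loss(img, target) + patch_difference_loss(img, target)
loss.backward()                                        # gradients flow into `img`
print(float(loss))
```
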
  • Publication number: 20220254071
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and efficiently modifying a generative adversarial neural network using few-shot adaptation to generate digital images corresponding to a target domain while maintaining diversity of a source domain and realism of the target domain. In particular, the disclosed systems utilize a generative adversarial neural network with parameters learned from a large source domain. The disclosed systems preserve relative similarities and differences between digital images in the source domain using a cross-domain distance consistency loss. In addition, the disclosed systems utilize an anchor-based strategy to encourage different levels or measures of realism over digital images generated from latent vectors in different regions of a latent space.
    Type: Application
    Filed: January 29, 2021
    Publication date: August 11, 2022
    Inventors: Utkarsh Ojha, Yijun Li, Richard Zhang, Jingwan Lu, Elya Shechtman, Alexei A. Efros
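
A minimal sketch of a cross-domain distance-consistency loss, as described in the entry above: the pairwise similarity structure of images from the adapted (target-domain) generator is encouraged to match that of the frozen source generator for the same batch of latents. The image-level cosine similarity used here is a simplifying assumption:

```python
import torch
import torch.nn.functional as F

def pairwise_similarity(images):
    """Row-wise softmax over cosine similarities between every pair in the batch."""
    f = F.normalize(images.flatten(1), dim=1)
    sim = f @ f.t()                                     # (B, B)
    mask = ~torch.eye(len(f), dtype=torch.bool)         # ignore self-similarity
    return F.softmax(sim[mask].view(len(f), -1), dim=1)

def distance_consistency_loss(source_images, target_images):
    p_src = pairwise_similarity(source_images)
    p_tgt = pairwise_similarity(target_images)
    return F.kl_div(p_tgt.log(), p_src, reduction='batchmean')

# Toy usage: the same latents pushed through a frozen source G and the adapted G.
source_images = torch.rand(8, 3, 32, 32)
target_images = torch.rand(8, 3, 32, 32, requires_grad=True)
loss = distance_consistency_loss(source_images, target_images)
loss.backward()
print(float(loss))
```
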
  • Patent number: 11354792
    Abstract: Technologies for image processing based on a creation workflow for creating a type of images are provided. Both multi-stage image generation as well as multi-stage image editing of an existing image are supported. To accomplish this, one system models the sequential creation stages of the creation workflow. In the backward direction, inference networks can backward transform an image into various intermediate stages. In the forward direction, generation networks can forward transform an earlier-stage image into a later-stage image based on stage-specific operations. Advantageously, this technical solution overcomes the limitations of the single-stage generation strategy with a multi-stage framework to model different types of variation at various creation stages. As a result, both novices and seasoned artists can use these technologies to efficiently perform complex artwork creation or editing tasks.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: June 7, 2022
    Assignee: Adobe Inc.
    Inventors: Matthew David Fisher, Hung-Yu Tseng, Yijun Li, Jingwan Lu
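
A minimal sketch of the staged workflow in the entry above: forward "generation" networks map an earlier stage to the next (e.g. sketch, flat color, final art), and backward "inference" networks recover earlier stages from a finished image so it can be edited mid-pipeline. All networks here are untrained placeholders; the stage names are assumptions:

```python
import torch
import torch.nn as nn

# Stages (illustrative): 0 = sketch, 1 = flat color, 2 = final image.
forward_nets = nn.ModuleList([nn.Conv2d(3, 3, 3, padding=1) for _ in range(2)])    # stage i -> i+1
inference_nets = nn.ModuleList([nn.Conv2d(3, 3, 3, padding=1) for _ in range(2)])  # stage i+1 -> i

def edit_at_stage(final_image, stage_idx, edit_fn):
    """Invert back to `stage_idx`, apply an edit, then regenerate forward."""
    x = final_image
    for net in reversed(list(inference_nets[stage_idx:])):   # backward: final -> stage_idx
        x = net(x)
    x = edit_fn(x)                                            # user edits the intermediate stage
    for net in forward_nets[stage_idx:]:                      # forward: stage_idx -> final
        x = net(x)
    return x

final = torch.rand(1, 3, 64, 64)
edited = edit_at_stage(final, stage_idx=0, edit_fn=lambda s: s * 0.5)
print(edited.shape)   # torch.Size([1, 3, 64, 64])
```
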
  • Publication number: 20220148243
    Abstract: Face anonymization techniques are described that overcome conventional challenges to generate an anonymized face. In one example, a digital object editing system is configured to generate an anonymized face based on a target face and a reference face. As part of this, the digital object editing system employs an encoder as part of machine learning to extract a target encoding of the target face image and a reference encoding of the reference face. The digital object editing system then generates a mixed encoding from the target and reference encodings. The mixed encoding is employed by a machine-learning model of the digital object editing system to generate a mixed face. An object replacement module is used by the digital object editing system to replace the target face in the target digital image with the mixed face.
    Type: Application
    Filed: November 10, 2020
    Publication date: May 12, 2022
    Applicant: Adobe Inc.
    Inventors: Yang Yang, Zhixin Shu, Shabnam Ghadar, Jingwan Lu, Jakub Fiser, Elya Shechtman, Cameron Y. Smith, Baldo Antonio Faieta, Alex Charles Filipkowski
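
A minimal sketch of the mixed-encoding idea in the entry above: encode the target and reference faces, blend the two codes, decode the blend, and paste the result back over the target face region. The toy encoder/decoder, the blend weight, and the mask are assumptions:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                        nn.Flatten(), nn.Linear(16 * 32 * 32, 128))
decoder = nn.Sequential(nn.Linear(128, 3 * 64 * 64), nn.Unflatten(1, (3, 64, 64)))

def anonymize(target_img, reference_img, face_mask, alpha=0.5):
    z_target = encoder(target_img)
    z_reference = encoder(reference_img)
    z_mixed = alpha * z_target + (1 - alpha) * z_reference        # mixed encoding
    mixed_face = decoder(z_mixed)
    return face_mask * mixed_face + (1 - face_mask) * target_img  # replace the face region

target = torch.rand(1, 3, 64, 64)
reference = torch.rand(1, 3, 64, 64)
mask = torch.zeros(1, 1, 64, 64)
mask[..., 16:48, 16:48] = 1.0                                     # toy face mask
print(anonymize(target, reference, mask).shape)                   # torch.Size([1, 3, 64, 64])
```
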
  • Patent number: 11328385
    Abstract: Techniques and systems are provided for configuring neural networks to perform warping of an object represented in an image to create a caricature of the object. For instance, in response to obtaining an image of an object, a warped image generator generates a warping field using the image as input. The warping field is generated using a model trained with pairings of training images and known warped images using supervised learning techniques and one or more losses. The warped image generator determines, based on the warping field, a set of displacements associated with pixels of the input image. These displacements indicate, for each pixel of the input image, the direction in which that pixel should move; applying them to the digital image generates a warped image of the object.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: May 10, 2022
    Assignee: Adobe Inc.
    Inventors: Julia Gong, Yannick Hold-Geoffroy, Jingwan Lu
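
A minimal sketch of applying a predicted warping field, as in the entry above: per-pixel displacements are added to an identity sampling grid and the image is resampled. The random displacement field below stands in for the generator's predicted field:

```python
import torch
import torch.nn.functional as F

def apply_warp(img, displacement):
    """img: (B, 3, H, W); displacement: (B, H, W, 2) in normalized [-1, 1] coordinates."""
    B = img.shape[0]
    theta = torch.tensor([[1., 0., 0.], [0., 1., 0.]]).expand(B, 2, 3)
    identity_grid = F.affine_grid(theta, img.shape, align_corners=False)   # (B, H, W, 2)
    return F.grid_sample(img, identity_grid + displacement, align_corners=False)

img = torch.rand(1, 3, 64, 64)
displacement = 0.05 * torch.randn(1, 64, 64, 2)    # small random "caricature" warp
print(apply_warp(img, displacement).shape)         # torch.Size([1, 3, 64, 64])
```
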
  • Publication number: 20220122222
    Abstract: An improved system architecture uses a Generative Adversarial Network (GAN) including a specialized generator neural network to generate multiple resolution output images. The system produces a latent space representation of an input image. The system generates a first output image at a first resolution by providing the latent space representation of the input image as input to a generator neural network comprising an input layer, an output layer, and a plurality of intermediate layers, and taking the first output image from an intermediate layer of the plurality of intermediate layers of the generator neural network. The system generates a second output image at a second resolution different from the first resolution by providing the latent space representation of the input image as input to the generator neural network and taking the second output image from the output layer of the generator neural network.
    Type: Application
    Filed: July 23, 2021
    Publication date: April 21, 2022
    Inventors: Cameron Smith, Ratheesh Kalarot, Wei-An Lin, Richard Zhang, Niloy Mitra, Elya Shechtman, Shabnam Ghadar, Zhixin Shu, Yannick Hold-Geoffroy, Nathan Carr, Jingwan Lu, Oliver Wang, Jun-Yan Zhu
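
A minimal sketch of reading output images at more than one resolution from a single generator, as in the entry above: each up-sampling block has a small "to RGB" head, and the caller chooses which block's output to take. Layer count and sizes are assumptions:

```python
import torch
import torch.nn as nn

class MultiResGenerator(nn.Module):
    def __init__(self, zdim=64, ch=32):
        super().__init__()
        self.input = nn.Linear(zdim, ch * 8 * 8)
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Upsample(scale_factor=2),
                          nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
            for _ in range(3)])                                    # 8 -> 16 -> 32 -> 64
        self.to_rgb = nn.ModuleList([nn.Conv2d(ch, 3, 1) for _ in range(3)])

    def forward(self, z, output_block=-1):
        x = self.input(z).view(len(z), -1, 8, 8)
        outputs = []
        for block, head in zip(self.blocks, self.to_rgb):
            x = block(x)
            outputs.append(head(x))       # an image is available at every resolution
        return outputs[output_block]

g = MultiResGenerator()
z = torch.randn(1, 64)
print(g(z, output_block=0).shape)    # torch.Size([1, 3, 16, 16])  intermediate layer
print(g(z, output_block=-1).shape)   # torch.Size([1, 3, 64, 64])  output layer
```
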
  • Publication number: 20220122221
    Abstract: An improved system architecture uses a pipeline including a Generative Adversarial Network (GAN) including a generator neural network and a discriminator neural network to generate an image. An input image in a first domain and information about a target domain are obtained. The domains correspond to image styles. An initial latent space representation of the input image is produced by encoding the input image. An initial output image is generated by processing the initial latent space representation with the generator neural network. Using the discriminator neural network, a score is computed indicating whether the initial output image is in the target domain. A loss is computed based on the computed score. The loss is minimized to compute an updated latent space representation. The updated latent space representation is processed with the generator neural network to generate an output image in the target domain.
    Type: Application
    Filed: July 23, 2021
    Publication date: April 21, 2022
    Inventors: Cameron Smith, Ratheesh Kalarot, Wei-An Lin, Richard Zhang, Niloy Mitra, Elya Shechtman, Shabnam Ghadar, Zhixin Shu, Yannick Hold-Geoffroy, Nathan Carr, Jingwan Lu, Oliver Wang, Jun-Yan Zhu
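
A minimal sketch of the latent optimization loop in the entry above: a discriminator scores whether the generated image is in the target domain, the score is turned into a loss, and the loss is minimized over the latent code. G and D below are untrained placeholders and the loss choice is an assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Linear(64, 3 * 32 * 32), nn.Unflatten(1, (3, 32, 32)))   # generator
D = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))                    # target-domain critic

w = torch.randn(1, 64, requires_grad=True)      # initial latent from encoding the input image
optimizer = torch.optim.Adam([w], lr=0.05)

for step in range(20):
    image = G(w)
    score = D(image)                            # high score = "looks like the target domain"
    loss = F.softplus(-score).mean()            # non-saturating GAN loss, minimized over w
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

output = G(w).detach()                          # image pulled toward the target domain
print(output.shape)                             # torch.Size([1, 3, 32, 32])
```
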
  • Publication number: 20220122224
    Abstract: The present disclosure relates to an image retouching system that automatically retouches digital images by accurately correcting face imperfections such as skin blemishes and redness. For instance, the image retouching system automatically retouches a digital image through separating digital images into multiple frequency layers, utilizing a separate corresponding neural network to apply frequency-specific corrections at various frequency layers, and combining the retouched frequency layers into a retouched digital image. As described herein, the image retouching system efficiently utilizes different neural networks to target and correct skin features specific to each frequency layer.
    Type: Application
    Filed: October 16, 2020
    Publication date: April 21, 2022
    Inventors: Federico Perazzi, Jingwan Lu
  • Publication number: 20220122232
    Abstract: Systems and methods generate a filtering function for editing an image with reduced attribute correlation. An image editing system groups training data into bins according to a distribution of a target attribute. For each bin, the system samples a subset of the training data based on a pre-determined target distribution of a set of additional attributes in the training data. The system identifies a direction in the sampled training data corresponding to the distribution of the target attribute to generate a filtering vector for modifying the target attribute in an input image, obtains a latent space representation of an input image, applies the filtering vector to the latent space representation of the input image to generate a filtered latent space representation of the input image, and provides the filtered latent space representation as input to a neural network to generate an output image with a modification to the target attribute.
    Type: Application
    Filed: September 7, 2021
    Publication date: April 21, 2022
    Inventors: Wei-An Lin, Baldo Faieta, Cameron Smith, Elya Shechtman, Jingwan Lu, Jun-Yan Zhu, Niloy Mitra, Ratheesh Kalarot, Richard Zhang, Shabnam Ghadar, Zhixin Shu
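
A minimal sketch of building a filtering vector with reduced attribute correlation, as in the entry above: bin latents by the target attribute, resample each bin so an additional attribute is balanced, and take the difference of bin means as the editing direction. The synthetic data, bin thresholds, and edit strength are assumptions, and real systems fit the direction more carefully:

```python
import numpy as np

rng = np.random.default_rng(0)
latents = rng.normal(size=(1000, 64))          # latent codes of training images
target_attr = rng.random(1000)                 # e.g. "age", the attribute to edit
other_attr = rng.integers(0, 2, 1000)          # e.g. "glasses", to be decorrelated

def balanced_indices(mask):
    """Sample equally from both values of `other_attr` inside a bin."""
    idx0 = np.flatnonzero(mask & (other_attr == 0))
    idx1 = np.flatnonzero(mask & (other_attr == 1))
    n = min(len(idx0), len(idx1))
    return np.concatenate([rng.choice(idx0, n, replace=False),
                           rng.choice(idx1, n, replace=False)])

low_bin = balanced_indices(target_attr < 0.3)
high_bin = balanced_indices(target_attr > 0.7)
filtering_vector = latents[high_bin].mean(0) - latents[low_bin].mean(0)

# Editing: move an input image's latent along the direction, then decode with the GAN.
w_input = rng.normal(size=64)
w_filtered = w_input + 1.5 * filtering_vector   # the strength 1.5 is arbitrary
print(w_filtered.shape)                         # (64,)
```
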
  • Publication number: 20220121932
    Abstract: Systems and methods train an encoder neural network for fast and accurate projection into the latent space of a Generative Adversarial Network (GAN). The encoder is trained by providing an input training image to the encoder and producing, by the encoder, a latent space representation of the input training image. The latent space representation is provided as input to the GAN to generate a generated training image. A latent code is sampled from a latent space associated with the GAN and the sampled latent code is provided as input to the GAN. The GAN generates a synthetic training image based on the sampled latent code. The sampled latent code is provided as input to the encoder to produce a synthetic training code. The encoder is updated by minimizing a loss between the generated training image and the input training image, and the synthetic training code and the sampled latent code.
    Type: Application
    Filed: July 23, 2021
    Publication date: April 21, 2022
    Inventors: Ratheesh Kalarot, Wei-An Lin, Cameron Smith, Zhixin Shu, Baldo Faieta, Shabnam Ghadar, Jingwan Lu, Aliakbar Darabi, Jun-Yan Zhu, Niloy Mitra, Richard Zhang, Elya Shechtman
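
A minimal sketch of the two-term encoder training described in the entry above: an image reconstruction loss through the frozen GAN generator, plus a latent reconstruction loss on synthetic images generated from sampled codes. E and G below are toy placeholders and the loss weighting is an assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Linear(64, 3 * 32 * 32), nn.Unflatten(1, (3, 32, 32)))   # frozen generator
E = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))                   # encoder to train
for p in G.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(E.parameters(), lr=1e-3)

for step in range(10):
    real = torch.rand(4, 3, 32, 32)                 # stand-in for a batch of real images
    image_loss = F.mse_loss(G(E(real)), real)       # generated image should match the input

    z = torch.randn(4, 64)                          # sampled latent code
    synthetic = G(z)                                # synthetic training image
    code_loss = F.mse_loss(E(synthetic), z)         # encoder should recover the sampled code

    loss = image_loss + code_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(float(loss))
```
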