Patents by Inventor Jingwan Lu

Jingwan Lu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12272031
    Abstract: An image inpainting system is described that receives an input image that includes a masked region. From the input image, the image inpainting system generates a synthesized image that depicts an object in the masked region by selecting a first code that represents a known factor characterizing a visual appearance of the object and a second code that represents an unknown factor characterizing the visual appearance of the object apart from the known factor in latent space. The input image, the first code, and the second code are provided as input to a generative adversarial network that is trained to generate the synthesized image using contrastive losses. Different synthesized images are generated from the same input image using different combinations of first and second codes, and the synthesized images are output for display.
    Type: Grant
    Filed: April 21, 2022
    Date of Patent: April 8, 2025
    Assignee: Adobe Inc.
    Inventors: Krishna Kumar Singh, Yuheng Li, Yijun Li, Jingwan Lu, Elya Shechtman
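As a rough illustration of the two-code idea in the abstract above, the sketch below pairs distinct combinations of a "known" and an "unknown" code with a single input image to produce multiple synthesized outputs. All names are hypothetical and the generator is a trivial stand-in, not the patented GAN:

```python
import random

def synthesize(input_image, known_code, unknown_code):
    # Toy stand-in for the trained generator: a real system would inpaint
    # the masked region conditioned on both latent codes.
    return {"image": input_image, "known": known_code, "unknown": unknown_code}

def sample_variations(input_image, known_codes, unknown_codes, n=4, seed=0):
    # Produce n synthesized images from the same input image by pairing
    # distinct combinations of known and unknown codes.
    assert n <= len(known_codes) * len(unknown_codes)
    rng = random.Random(seed)
    seen, outputs = set(), []
    while len(outputs) < n:
        pair = (rng.choice(known_codes), rng.choice(unknown_codes))
        if pair not in seen:
            seen.add(pair)
            outputs.append(synthesize(input_image, *pair))
    return outputs
```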
  • Patent number: 12260530
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments, the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portray a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, in some embodiments, the disclosed systems perform facial expression transfer and facial expression animations to generate modified digital images or animations.
    Type: Grant
    Filed: March 27, 2023
    Date of Patent: March 25, 2025
    Assignee: Adobe Inc.
    Inventors: Krishna Kumar Singh, Yijun Li, Jingwan Lu, Duygu Ceylan Aksit, Yangtuanfeng Wang, Jimei Yang, Tobias Hinz, Qing Liu, Jianming Zhang, Zhe Lin
  • Patent number: 12254597
    Abstract: An item recommendation system receives a set of recommendable items and a request to select, from the set of recommendable items, a contrast group. The item recommendation system selects a contrast group from the set of recommendable items by applying an image modification model to the set of recommendable items. The image modification model includes an item selection model configured to determine an unbiased conversion rate for each item of the set of recommendable items and select a recommended item from the set of recommendable items having the greatest unbiased conversion rate. The image modification model includes a contrast group selection model configured to select, for the recommended item, a contrast group comprising the recommended item and one or more contrast items. The item recommendation system transmits the contrast group responsive to the request.
    Type: Grant
    Filed: March 30, 2022
    Date of Patent: March 18, 2025
    Assignee: Adobe Inc.
    Inventors: Cameron Smith, Wei-An Lin, Timothy M. Converse, Shabnam Ghadar, Ratheesh Kalarot, John Nack, Jingwan Lu, Hui Qu, Elya Shechtman, Baldo Faieta
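The selection logic described above (pick the item with the greatest unbiased conversion rate, then form a contrast group around it) can be sketched as follows. Treating the next-ranked items as the contrast items is an illustrative assumption, not something the abstract specifies:

```python
def select_contrast_group(items, unbiased_cvr, num_contrast=2):
    # Rank items by unbiased conversion rate, highest first.
    ranked = sorted(items, key=lambda it: unbiased_cvr[it], reverse=True)
    recommended = ranked[0]
    # Assumption for illustration: contrast items are the next-best items.
    contrast_items = ranked[1:1 + num_contrast]
    return [recommended] + contrast_items
```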
  • Patent number: 12254594
    Abstract: Methods, systems, and non-transitory computer readable media are disclosed for intelligently enhancing details in edited images. The disclosed system iteratively updates residual detail latent code for segments in edited images where detail has been lost through the editing process. More particularly, the disclosed system enhances an edited segment in an edited image based on details in a detailed segment of an image. Additionally, the disclosed system may utilize a detail neural network encoder to project the detailed segment and a corresponding segment of the edited image into a residual detail latent code. In some embodiments, the disclosed system generates a refined edited image based on the residual detail latent code and a latent vector of the edited image.
    Type: Grant
    Filed: April 1, 2022
    Date of Patent: March 18, 2025
    Assignee: Adobe Inc.
    Inventors: Hui Qu, Jingwan Lu, Saeid Motiian, Shabnam Ghadar, Wei-An Lin, Elya Shechtman
  • Patent number: 12249132
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for adapting generative neural networks to target domains utilizing an image translation neural network. In particular, in one or more embodiments, the disclosed systems utilize an image translation neural network to translate target results to a source domain for input in target neural network adaptation. For instance, in some embodiments, the disclosed systems compare a translated target result with a source result from a pretrained source generative neural network to adjust parameters of a target generative neural network to produce results corresponding in features to source results and corresponding in style to the target domain.
    Type: Grant
    Filed: July 27, 2022
    Date of Patent: March 11, 2025
    Assignee: Adobe Inc.
    Inventors: Yijun Li, Nicholas Kolkin, Jingwan Lu, Elya Shechtman
  • Publication number: 20250078349
    Abstract: A method, apparatus, and non-transitory computer readable medium for image generation are described. Embodiments of the present disclosure obtain a content input and a style input via a user interface or from a database. The content input includes a target spatial layout and the style input includes a target style. A content encoder of an image processing apparatus encodes the content input to obtain a spatial layout mask representing the target spatial layout. A style encoder of the image processing apparatus encodes the style input to obtain a style embedding representing the target style. An image generation model of the image processing apparatus generates an image based on the spatial layout mask and the style embedding, where the image includes the target spatial layout and the target style.
    Type: Application
    Filed: September 1, 2023
    Publication date: March 6, 2025
    Inventors: Wonwoong Cho, Hareesh Ravi, Midhun Harikumar, Vinh Ngoc Khuc, Krishna Kumar Singh, Jingwan Lu, Ajinkya Gorakhnath Kale
  • Publication number: 20250078406
    Abstract: A modeling system accesses a two-dimensional (2D) input image displayed via a user interface, the 2D input image depicting, at a first view, a first object. At least one region of the first object is not represented by pixel values of the 2D input image. The modeling system generates, by applying a 3D representation generation model to the 2D input image, a three-dimensional (3D) representation of the first object that depicts an entirety of the first object, including the at least one region. The modeling system displays, via the user interface, the 3D representation, wherein the 3D representation is viewable via the user interface from a plurality of views including the first view.
    Type: Application
    Filed: September 5, 2023
    Publication date: March 6, 2025
    Inventors: Jae Shin Yoon, Yangtuanfeng Wang, Krishna Kumar Singh, Junying Wang, Jingwan Lu
  • Publication number: 20250069203
    Abstract: A method, non-transitory computer readable medium, apparatus, and system for image generation are described. An embodiment of the present disclosure includes obtaining an input image, an inpainting mask, and a plurality of content preservation values corresponding to different regions of the inpainting mask, and identifying a plurality of mask bands of the inpainting mask based on the plurality of content preservation values. An image generation model generates an output image based on the input image and the inpainting mask. The output image is generated in a plurality of phases. Each of the plurality of phases uses a corresponding mask band of the plurality of mask bands as an input.
    Type: Application
    Filed: August 24, 2023
    Publication date: February 27, 2025
    Inventors: Yuqian Zhou, Krishna Kumar Singh, Benjamin Delarre, Zhe Lin, Jingwan Lu, Taesung Park, Sohrab Amirghodsi, Elya Shechtman
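A minimal sketch of the banding step described above: the inpainting mask is partitioned into bands according to per-pixel content preservation values, and each band can then drive one generation phase. The three-band split and the threshold values are illustrative assumptions:

```python
def mask_bands(mask, preservation, thresholds=(0.25, 0.75)):
    # Split a binary inpainting mask into "low", "mid", and "high" bands
    # based on per-pixel content preservation values.
    lo, hi = thresholds
    bands = {"low": [], "mid": [], "high": []}
    for row_m, row_p in zip(mask, preservation):
        rows = {k: [] for k in bands}
        for m, p in zip(row_m, row_p):
            band = "low" if p < lo else ("mid" if p < hi else "high")
            for k in bands:
                rows[k].append(m if k == band else 0)
        for k in bands:
            bands[k].append(rows[k])
    return bands
```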
  • Publication number: 20250069299
    Abstract: One or more aspects of a method, apparatus, and non-transitory computer readable medium include obtaining an input latent vector for an image generation network and a target lighting representation. A modified latent vector is generated based on the input latent vector and the target lighting representation, and the image generation network generates an image based on the modified latent vector.
    Type: Application
    Filed: August 21, 2023
    Publication date: February 27, 2025
    Inventors: Kevin Duarte, Wei-An Lin, Ratheesh Kalarot, Shabnam Ghadar, Jingwan Lu, Elya Shechtman
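The latent modification described above can be pictured as a simple offset in latent space toward a lighting direction. The linear form and the `strength` parameter are illustrative assumptions, not the patented method:

```python
def relight_latent(latent, lighting_direction, strength=1.0):
    # Shift a latent vector toward a target lighting representation via a
    # linear offset; the shifted vector is then fed to the image generator.
    return [z + strength * d for z, d in zip(latent, lighting_direction)]
```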
  • Patent number: 12230014
    Abstract: An image generation system enables user input during the process of training a generative model to influence the model's ability to generate new images with desired visual features. A source generative model for a source domain is fine-tuned using training images in a target domain to provide an adapted generative model for the target domain. Interpretable factors are determined for the source generative model and the adapted generative model. A user interface is provided that enables a user to select one or more interpretable factors. The user-selected interpretable factor(s) are used to generate a user-adapted generative model, for instance, by using a loss function based on the user-selected interpretable factor(s). The user-adapted generative model can be used to create new images in the target domain.
    Type: Grant
    Filed: February 25, 2022
    Date of Patent: February 18, 2025
    Assignee: Adobe Inc.
    Inventors: Yijun Li, Utkarsh Ojha, Richard Zhang, Jingwan Lu, Elya Shechtman, Alexei A. Efros
  • Publication number: 20250054116
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate inpainted digital images utilizing a cascaded modulation inpainting neural network. For example, the disclosed systems utilize a cascaded modulation inpainting neural network that includes cascaded modulation decoder layers. For example, in one or more decoder layers, the disclosed systems start with global code modulation that captures the global-range image structures, followed by an additional modulation that refines the global predictions. Accordingly, in one or more implementations, the image inpainting system provides a mechanism to correct distorted local details. Furthermore, in one or more implementations, the image inpainting system leverages fast Fourier convolution blocks within different resolution layers of the encoder architecture to expand the receptive field of the encoder and to allow the network encoder to better capture global structure.
    Type: Application
    Filed: October 28, 2024
    Publication date: February 13, 2025
    Inventors: Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Elya Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi
  • Publication number: 20250037431
    Abstract: Systems and methods for training a Generative Adversarial Network (GAN) using feature regularization are described herein. Embodiments are configured to generate a candidate image using a generator network of a GAN, classify the candidate image as real or generated using a discriminator network of the GAN, and train the GAN to generate realistic images based on the classifying of the candidate image. The training process includes regularizing a gradient with respect to features extracted using a discriminator network of the GAN.
    Type: Application
    Filed: July 24, 2023
    Publication date: January 30, 2025
    Inventors: Min Jin Chong, Krishna Kumar Singh, Yijun Li, Jingwan Lu
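The feature-gradient regularization described above can be illustrated on a toy linear-sigmoid discriminator, where the gradient with respect to the input features has a closed form. This is a pedagogical stand-in, not the patented training procedure:

```python
import math

def feature_gradient_penalty(features, weights, bias=0.0):
    # Penalty on the gradient of a toy discriminator D(f) = sigmoid(w.f + b)
    # with respect to its input features: ||dD/df||^2.
    # For this D, the gradient is s * (1 - s) * w, with s = D(f).
    logit = sum(w * f for w, f in zip(weights, features)) + bias
    s = 1.0 / (1.0 + math.exp(-logit))
    grad = [s * (1.0 - s) * w for w in weights]
    return sum(g * g for g in grad)
```

In a real GAN this penalty term would be added to the discriminator loss during training; here it simply shows what "regularizing a gradient with respect to features" computes.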
  • Patent number: 12211178
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for combining digital images. In particular, in one or more embodiments, the disclosed systems combine latent codes of a source digital image and a target digital image utilizing a blending network to determine a combined latent encoding and generate a combined digital image from the combined latent encoding utilizing a generative neural network. In some embodiments, the disclosed systems determine an intersection face mask between the source digital image and the combined digital image utilizing a face segmentation network and combine the source digital image and the combined digital image utilizing the intersection face mask to generate a blended digital image.
    Type: Grant
    Filed: April 21, 2022
    Date of Patent: January 28, 2025
    Assignee: Adobe Inc.
    Inventors: Tobias Hinz, Shabnam Ghadar, Richard Zhang, Ratheesh Kalarot, Jingwan Lu, Elya Shechtman
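The blend-then-composite flow in the abstract above can be sketched with 1-D stand-ins for latent codes and images; a real system would use the trained blending network, generative neural network, and face segmentation network:

```python
def blend_latents(source, target, alpha=0.5):
    # Linearly combine source and target latent codes (a stand-in for the
    # blending network that produces the combined latent encoding).
    return [(1 - alpha) * s + alpha * t for s, t in zip(source, target)]

def composite_with_mask(source_img, combined_img, face_mask):
    # Take combined-image pixels inside the face mask, source pixels outside.
    return [c if m else s for s, c, m in zip(source_img, combined_img, face_mask)]
```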
  • Patent number: 12204610
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for training a generative inpainting neural network to accurately generate inpainted digital images via object-aware training and/or masked regularization. For example, the disclosed systems utilize an object-aware training technique to learn parameters for a generative inpainting neural network based on masking individual object instances depicted within sample digital images of a training dataset. In some embodiments, the disclosed systems also (or alternatively) utilize a masked regularization technique as part of training to prevent overfitting by penalizing a discriminator neural network utilizing a regularization term that is based on an object mask. In certain cases, the disclosed systems further generate an inpainted digital image utilizing a trained generative inpainting model with parameters learned via the object-aware training and/or the masked regularization.
    Type: Grant
    Filed: February 14, 2022
    Date of Patent: January 21, 2025
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Haitian Zheng, Jingwan Lu, Scott Cohen, Jianming Zhang, Ning Xu, Elya Shechtman, Connelly Barnes, Sohrab Amirghodsi
  • Publication number: 20250005812
    Abstract: In implementations of systems for human reposing based on multiple input views, a computing device implements a reposing system to receive input data describing: input digital images; pluralities of keypoints corresponding to the input digital images, the pluralities of keypoints representing poses of a person depicted in the input digital images; and a plurality of keypoints representing a target pose. The reposing system generates selection masks corresponding to the input digital images by processing the input data using a machine learning model. The selection masks represent likelihoods of spatial correspondence between pixels of an output digital image and portions of the input digital images. The reposing system generates the output digital image depicting the person in the target pose for display in a user interface based on the selection masks and the input data.
    Type: Application
    Filed: June 28, 2023
    Publication date: January 2, 2025
    Applicant: Adobe Inc.
    Inventors: Rishabh Jain, Mayur Hemani, Mausoom Sarkar, Krishna Kumar Singh, Jingwan Lu, Duygu Ceylan Aksit, Balaji Krishnamurthy
  • Publication number: 20250005824
    Abstract: Systems and methods for image processing are described. One aspect of the systems and methods includes receiving a plurality of images comprising a first image depicting a first body part and a second image depicting a second body part and encoding, using a texture encoder, the first image and the second image to obtain a first texture embedding and a second texture embedding, respectively. Then, a composite image is generated using a generative decoder, the composite image depicting the first body part and the second body part based on the first texture embedding and the second texture embedding.
    Type: Application
    Filed: June 27, 2023
    Publication date: January 2, 2025
    Inventors: Rishabh Jain, Mayur Hemani, Duygu Ceylan Aksit, Krishna Kumar Singh, Jingwan Lu, Mausoom Sarkar, Balaji Krishnamurthy
  • Publication number: 20240428564
    Abstract: In implementations of systems for generating images for human reposing, a computing device implements a reposing system to receive input data describing an input digital image depicting a person in a first pose, a first plurality of keypoints representing the first pose, and a second plurality of keypoints representing a second pose. The reposing system generates a mapping by processing the input data using a first machine learning model. The mapping indicates a plurality of first portions of the person in the second pose that are visible in the input digital image and a plurality of second portions of the person in the second pose that are invisible in the input digital image. The reposing system generates an output digital image depicting the person in the second pose by processing the mapping, the first plurality of keypoints, and the second plurality of keypoints using a second machine learning model.
    Type: Application
    Filed: June 22, 2023
    Publication date: December 26, 2024
    Applicant: Adobe Inc.
    Inventors: Rishabh Jain, Mayur Hemani, Mausoom Sarkar, Krishna Kumar Singh, Jingwan Lu, Duygu Ceylan Aksit, Balaji Krishnamurthy
  • Publication number: 20240428491
    Abstract: The present disclosure relates to a system that utilizes neural networks to generate looping animations from still images. The system fits a 3D model to a pose of a person in a digital image. The system receives a 3D animation sequence that transitions between a starting pose and an ending pose. The system generates, utilizing an animation transition neural network, first and second 3D animation transition sequences that respectively transition between the pose of the person and the starting pose and between the ending pose and the pose of the person. The system modifies each of the 3D animation sequence, the first 3D animation transition sequence, and the second 3D animation transition sequence by applying a texture map. The system generates a looping 3D animation by combining the modified 3D animation sequence, the modified first 3D animation transition sequence, and the modified second 3D animation transition sequence.
    Type: Application
    Filed: June 23, 2023
    Publication date: December 26, 2024
    Inventors: Jae Shin Yoon, Duygu Ceylan Aksit, Yangtuanfeng Wang, Jingwan Lu, Jimei Yang, Zhixin Shu, Chengan He, Yi Zhou, Jun Saito, James Zachary
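The looping construction described above amounts to concatenating the two generated transitions around the given animation so the result starts and ends on the same pose. The frame handling below (dropping duplicated boundary frames) is an illustrative sketch:

```python
def build_loop(transition_in, animation, transition_out):
    # transition_in:  image pose -> animation start pose
    # animation:      start pose -> end pose
    # transition_out: end pose -> image pose
    # Drop the duplicated boundary frame at each seam when concatenating.
    loop = list(transition_in) + list(animation)[1:] + list(transition_out)[1:]
    assert loop[0] == loop[-1], "loop must return to the original pose"
    return loop
```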
  • Patent number: 12165295
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate inpainted digital images utilizing a cascaded modulation inpainting neural network. For example, the disclosed systems utilize a cascaded modulation inpainting neural network that includes cascaded modulation decoder layers. For example, in one or more decoder layers, the disclosed systems start with global code modulation that captures the global-range image structures, followed by an additional modulation that refines the global predictions. Accordingly, in one or more implementations, the image inpainting system provides a mechanism to correct distorted local details. Furthermore, in one or more implementations, the image inpainting system leverages fast Fourier convolution blocks within different resolution layers of the encoder architecture to expand the receptive field of the encoder and to allow the network encoder to better capture global structure.
    Type: Grant
    Filed: May 4, 2022
    Date of Patent: December 10, 2024
    Assignee: Adobe Inc.
    Inventors: Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Elya Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi
  • Publication number: 20240404013
    Abstract: Embodiments include systems and methods for generative image filling based on text and a reference image. In one aspect, the system obtains an input image, a reference image, and a text prompt. Then, the system encodes the reference image to obtain an image embedding and encodes the text prompt to obtain a text embedding. Subsequently, a composite image is generated based on the input image, the image embedding, and the text embedding.
    Type: Application
    Filed: November 21, 2023
    Publication date: December 5, 2024
    Inventors: Yuqian Zhou, Krishna Kumar Singh, Zhe Lin, Qing Liu, Zhifei Zhang, Sohrab Amirghodsi, Elya Shechtman, Jingwan Lu