Patents by Inventor Jingwan Lu

Jingwan Lu has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220122307
    Abstract: Systems and methods combine an input image with an edited image generated using a generator neural network to preserve detail from the original image. A computing system provides an input image to a machine learning model to generate a latent space representation of the input image. The system provides the latent space representation to a generator neural network to generate a generated image. The system generates multiple scale representations of the input image, as well as multiple scale representations of the generated image. The system generates a first combined image based on first scale representations of the images and a first value. The system generates a second combined image based on second scale representations of the images and a second value. The system blends the first combined image with the second combined image to generate an output image.
    Type: Application
    Filed: September 7, 2021
    Publication date: April 21, 2022
    Inventors: Ratheesh Kalarot, Kevin Wampler, Jingwan Lu, Jakub Fiser, Elya Shechtman, Aliakbar Darabi, Alexandru Vasile Costin
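The multi-scale combination described in the abstract above lends itself to a short illustration. The following is a hedged sketch, not the patented implementation: `scale_repr`, the two fixed scales, and the blend weights are hypothetical stand-ins for the claimed scale representations and values.

```python
# Hypothetical sketch: preserve detail from an input image when compositing
# it with a GAN-edited image, by combining the two at multiple scales.
import numpy as np
from PIL import Image

def scale_repr(img: np.ndarray, scale: float) -> np.ndarray:
    """Down- then up-sample so only structure at the given scale survives."""
    h, w = img.shape[:2]
    small = Image.fromarray(img).resize((max(1, int(w * scale)),
                                         max(1, int(h * scale))))
    return np.asarray(small.resize((w, h)), dtype=np.float32)

def blend_preserving_detail(input_img, generated_img,
                            first_value=0.7, second_value=0.3):
    # First combined image: coarse-scale representations, weighted by first_value.
    coarse = (first_value * scale_repr(input_img, 0.25)
              + (1 - first_value) * scale_repr(generated_img, 0.25))
    # Second combined image: finer-scale representations, weighted by second_value.
    fine = (second_value * scale_repr(input_img, 0.5)
            + (1 - second_value) * scale_repr(generated_img, 0.5))
    # Blend the two combined images into the output image.
    return np.clip(0.5 * coarse + 0.5 * fine, 0, 255).astype(np.uint8)
```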
  • Publication number: 20220122305
    Abstract: An improved system architecture uses a pipeline including an encoder and a Generative Adversarial Network (GAN) including a generator neural network to generate edited images with improved speed, realism, and identity preservation. The encoder produces an initial latent space representation of an input image by encoding the input image. The generator neural network generates an initial output image by processing the initial latent space representation of the input image. The system generates an optimized latent space representation of the input image using a loss minimization technique that minimizes a loss between the input image and the initial output image. The loss is based on target perceptual features extracted from the input image and initial perceptual features extracted from the initial output image. The system outputs the optimized latent space representation of the input image for downstream use.
    Type: Application
    Filed: July 23, 2021
    Publication date: April 21, 2022
    Inventors: Cameron Smith, Ratheesh Kalarot, Wei-An Lin, Richard Zhang, Niloy Mitra, Elya Shechtman, Shabnam Ghadar, Zhixin Shu, Yannick Hold-Geoffroy, Nathan Carr, Jingwan Lu, Oliver Wang, Jun-Yan Zhu
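The two-stage projection in the abstract above (encoder initialization followed by loss minimization) is a recognizable GAN-inversion pattern; a minimal PyTorch sketch follows. `encoder`, `generator`, and `perceptual_features` are placeholders for the real networks, and MSE over features is only one plausible form of perceptual loss.

```python
# Hedged sketch: refine an encoder's latent code by gradient descent on a
# perceptual loss between the input image and the generator's output.
import torch

def invert(input_img, encoder, generator, perceptual_features,
           steps=200, lr=0.01):
    with torch.no_grad():
        latent = encoder(input_img)            # initial latent representation
        target_feats = perceptual_features(input_img)
    latent = latent.clone().requires_grad_(True)
    opt = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        output = generator(latent)             # current reconstruction
        loss = torch.nn.functional.mse_loss(
            perceptual_features(output), target_feats)
        loss.backward()
        opt.step()
    return latent.detach()                     # optimized latent for downstream use
```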
  • Publication number: 20220122306
    Abstract: Systems and methods dynamically adjust an available range for editing an attribute in an image. An image editing system computes a metric for an attribute in an input image as a function of a latent space representation of the input image and a filtering vector for editing the input image. The image editing system compares the metric to a threshold. If the metric exceeds the threshold, then the image editing system selects a first range for editing the attribute in the input image. If the metric does not exceed the threshold, a second range is selected. The image editing system causes display of a user interface for editing the input image comprising an interface element for editing the attribute within the selected range.
    Type: Application
    Filed: September 7, 2021
    Publication date: April 21, 2022
    Inventors: Wei-An Lin, Baldo Faieta, Cameron Smith, Elya Shechtman, Jingwan Lu, Jun-Yan Zhu, Niloy Mitra, Ratheesh Kalarot, Richard Zhang, Shabnam Ghadar, Zhixin Shu
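A compact way to picture the metric-and-threshold logic in the entry above: if the attribute metric is taken to be the latent code's projection onto the filtering vector (an assumption, not a claim from the abstract), range selection reduces to a single comparison.

```python
# Hypothetical sketch: choose a slider range for an attribute edit based on
# how strongly the input image already expresses that attribute.
import numpy as np

def editing_range(latent: np.ndarray, filter_vec: np.ndarray,
                  threshold: float = 0.5):
    # Metric: projection of the latent code onto the attribute direction.
    metric = float(latent @ filter_vec) / (np.linalg.norm(filter_vec) ** 2 + 1e-8)
    # A pronounced attribute gets the narrower first range; otherwise the second.
    return (-1.0, 1.0) if metric > threshold else (-3.0, 3.0)
```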
  • Publication number: 20220122308
    Abstract: Systems and methods seamlessly blend edited and unedited regions of an image. A computing system crops an input image around a region to be edited. The system applies an affine transformation to rotate the cropped input image. The system provides the rotated cropped input image as input to a machine learning model to generate a latent space representation of the rotated cropped input image. The system edits the latent space representation and provides the edited latent space representation to a generator neural network to generate a generated edited image. The system applies an inverse affine transformation to rotate the generated edited image and aligns an identified segment of the rotated generated edited image with an identified corresponding segment of the input image to produce an aligned rotated generated edited image. The system blends the aligned rotated generated edited image with the input image to generate an edited output image.
    Type: Application
    Filed: September 7, 2021
    Publication date: April 21, 2022
    Inventors: Ratheesh Kalarot, Kevin Wampler, Jingwan Lu, Jakub Fiser, Elya Shechtman, Aliakbar Darabi, Alexandru Vasile Costin
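The crop-rotate-edit-blend sequence in the entry above reads naturally as a pipeline. Below is a shape-only sketch with the model stages (`encode`, `edit_latent`, `generate`, `blend`) passed in as placeholder callables; only the geometric bookkeeping is concrete.

```python
# Hedged pipeline sketch using Pillow for the affine steps.
import numpy as np
from PIL import Image

def edit_region(image: Image.Image, box, angle, encode, edit_latent,
                generate, blend):
    crop = image.crop(box).rotate(angle, expand=True)   # align region for the model
    latent = encode(np.asarray(crop))                   # latent space representation
    edited = generate(edit_latent(latent))              # generated edited image
    edited = Image.fromarray(edited).rotate(-angle, expand=True)  # inverse transform
    return blend(image, edited, box)                    # align and blend back in
```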
  • Publication number: 20220121931
    Abstract: Systems and methods train and apply a specialized encoder neural network for fast and accurate projection into the latent space of a Generative Adversarial Network (GAN). The specialized encoder neural network includes an input layer, a feature extraction layer, and a bottleneck layer positioned after the feature extraction layer. The projection process includes providing an input image to the encoder and producing, by the encoder, a latent space representation of the input image. Producing the latent space representation includes extracting a feature vector from the feature extraction layer, providing the feature vector to the bottleneck layer as input, and producing the latent space representation as output. The latent space representation produced by the encoder is provided as input to the GAN, which generates an output image based upon the latent space representation. The encoder is trained using specialized loss functions including a segmentation loss and a mean latent loss.
    Type: Application
    Filed: July 23, 2021
    Publication date: April 21, 2022
    Inventors: Ratheesh Kalarot, Wei-An Lin, Cameron Smith, Zhixin Shu, Baldo Faieta, Shabnam Ghadar, Jingwan Lu, Aliakbar Darabi, Jun-Yan Zhu, Niloy Mitra, Richard Zhang, Elya Shechtman
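The architecture and losses named in the abstract above can be sketched concretely. The layer sizes below, and the reading of "mean latent loss" as a penalty toward the GAN's average latent code, are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class BottleneckEncoder(nn.Module):
    """Hypothetical encoder: a feature extraction stage followed by a
    bottleneck layer that maps features to a GAN latent code."""
    def __init__(self, latent_dim: int = 512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.bottleneck = nn.Linear(128, latent_dim)

    def forward(self, x):
        return self.bottleneck(self.features(x))   # latent space representation

def training_loss(latent, mean_latent, seg_pred, seg_target, w_mean=0.1):
    # Segmentation loss keeps semantics aligned; the mean latent term keeps
    # codes near the GAN's average latent (one plausible interpretation).
    seg_loss = nn.functional.cross_entropy(seg_pred, seg_target)
    mean_loss = (latent - mean_latent).pow(2).mean()
    return seg_loss + w_mean * mean_loss
```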
  • Publication number: 20220101578
    Abstract: Methods, systems, and non-transitory computer readable media are disclosed for generating a composite image comprising objects in positions from two or more different digital images. In one or more embodiments, the disclosed system receives a sequence of images and identifies objects within the sequence of images. In one example, the disclosed system determines a target position for a first object based on detecting user selection of the first object in the target position from a first image. The disclosed system can generate a fixed object image comprising the first object in the target position. The disclosed system can generate preview images comprising the fixed object image with a second object sequencing through a plurality of positions as seen in the sequence of images. Based on a second user selection of a desired preview image, the disclosed system can generate the composite image.
    Type: Application
    Filed: September 30, 2020
    Publication date: March 31, 2022
    Inventors: Ajay Bedi, Ajay Jain, Jingwan Lu, Anugrah Prakash, Prasenjit Mondal, Sachin Soni, Sanjeev Tagra
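The preview step in the entry above can be pictured as masked compositing over the fixed-object image. The mask-based overlay below is an assumption; the abstract does not specify how object pixels are isolated.

```python
import numpy as np

def composite_previews(fixed_img: np.ndarray, frames, obj_masks):
    """For each frame in the sequence, paste the second object's pixels
    (given by a boolean mask) onto the fixed-object image."""
    previews = []
    for frame, mask in zip(frames, obj_masks):
        preview = fixed_img.copy()
        preview[mask] = frame[mask]       # overlay only the moving object
        previews.append(preview)
    return previews                       # the user's pick becomes the composite
```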
  • Publication number: 20220076374
    Abstract: One example method involves operations for receiving a request to transform an input image into a target image. Operations further include providing the input image to a machine learning model trained to adapt images. Training the machine learning model includes accessing training data having a source domain of images and a target domain of images with a target style. Training further includes using a pre-trained generative model to generate an adapted source domain of adapted images having the target style. The adapted source domain is generated by determining a rate of change for parameters of the target style, generating weighted parameters by applying a weight to each of the parameters based on their respective rate of change, and applying the weighted parameters to the source domain. Additionally, operations include using the machine learning model to generate the target image by modifying parameters of the input image using the target style.
    Type: Application
    Filed: September 4, 2020
    Publication date: March 10, 2022
    Inventors: Yijun Li, Richard Zhang, Jingwan Lu, Elya Shechtman
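How the "rate of change" weighting above might look in code is necessarily speculative; the sketch below simply scales each parameter's gradient step by a supplied per-parameter rate, which is one possible reading of the weighted-parameter idea rather than the patented procedure.

```python
import torch

def weighted_style_update(generator: torch.nn.Module,
                          rate_of_change: dict, lr: float = 1e-4):
    """Hypothetical update: emphasize parameters that move quickly toward
    the target style by weighting their gradient steps."""
    with torch.no_grad():
        for name, p in generator.named_parameters():
            if p.grad is not None:
                weight = rate_of_change.get(name, 1.0)  # assumed per-name rates
                p -= lr * weight * p.grad
```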
  • Patent number: 11232607
    Abstract: In implementations of adding color to digital images, an image colorization system can display a digital image to be color adjusted in an image editing interface and convert pixel content of the digital image to a LAB color space. The image colorization system can determine a lightness value (L) in the LAB color space of the pixel content of the digital image at a specified point on the digital image, and determine colors representable in an RGB color space based on combinations of A,B value pairs with the lightness value (L) in the LAB color space. The image colorization system can then determine a range of the colors for display in a color gamut in the image editing interface, the range of the colors corresponding to the A,B value pairs with the lightness value (L) of the pixel content at the specified point on the digital image.
    Type: Grant
    Filed: January 24, 2020
    Date of Patent: January 25, 2022
    Assignee: Adobe Inc.
    Inventors: Nishant Kumar, Vikas Sharma, Shantanu Agarwal, Sameer Bhatt, Rupali Arora, Richard Zhang, Anuradha Yadav, Jingwan Lu, Matthew David Fisher
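The abstract above describes a concrete computation that can be reproduced directly: fix the lightness value (L), sweep A,B pairs, convert LAB to sRGB, and keep the pairs that land inside the displayable gamut. The sketch below uses the standard D65 conversion; it illustrates the computation, not Adobe's implementation.

```python
def lab_to_linear_rgb(L, a, b):
    """CIELAB (D65 white point) to linear sRGB; components outside [0, 1]
    indicate an out-of-gamut color."""
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    def finv(t):
        return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)
    X, Y, Z = 0.95047 * finv(fx), finv(fy), 1.08883 * finv(fz)
    return (3.2406 * X - 1.5372 * Y - 0.4986 * Z,
            -0.9689 * X + 1.8758 * Y + 0.0415 * Z,
            0.0557 * X - 0.2040 * Y + 1.0570 * Z)

def representable_ab_pairs(lightness, step=5):
    """All (A, B) pairs whose LAB color at the given lightness fits in sRGB.
    Gamma correction is monotonic, so testing linear values suffices."""
    pairs = []
    for a in range(-128, 128, step):
        for b in range(-128, 128, step):
            if all(0.0 <= c <= 1.0 for c in lab_to_linear_rgb(lightness, a, b)):
                pairs.append((a, b))
    return pairs
```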
  • Publication number: 20210358177
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a modified digital image from extracted spatial and global codes. For example, the disclosed systems can utilize a global and spatial autoencoder to extract spatial codes and global codes from digital images. The disclosed systems can further utilize the global and spatial autoencoder to generate a modified digital image by combining extracted spatial and global codes in various ways for various applications such as style swapping, style blending, and attribute editing.
    Type: Application
    Filed: May 14, 2020
    Publication date: November 18, 2021
    Inventors: Taesung Park, Richard Zhang, Oliver Wang, Jun-Yan Zhu, Jingwan Lu, Elya Shechtman, Alexei A. Efros
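The spatial/global factorization above lends itself to a compact PyTorch sketch. Layer shapes are arbitrary; the point is the interface: encode an image into a spatial map plus a global vector, then decode any pairing of the two. Style swapping is then `decode(spatial_a, global_b)`.

```python
import torch
import torch.nn as nn

class SpatialGlobalAutoencoder(nn.Module):
    """Hedged sketch of a spatial/global autoencoder."""
    def __init__(self, ch=64, global_dim=256):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, ch, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, 2, 1), nn.ReLU())
        self.to_global = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, global_dim))
        self.from_global = nn.Linear(global_dim, ch)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 3, 4, 2, 1))

    def encode(self, x):
        spatial = self.enc(x)                      # structure (a feature map)
        return spatial, self.to_global(spatial)    # plus style (a vector)

    def decode(self, spatial, global_code):
        style = self.from_global(global_code)[..., None, None]
        return self.dec(spatial * style)           # modulate structure by style
```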
  • Publication number: 20210319532
    Abstract: Techniques and systems are provided for configuring neural networks to perform warping of an object represented in an image to create a caricature of the object. For instance, in response to obtaining an image of an object, a warped image generator generates a warping field using the image as input. The warping field is generated using a model trained on pairings of training images and known warped images with supervised learning techniques and one or more losses. The warped image generator determines, based on the warping field, a set of displacements associated with pixels of the input image. These displacements indicate pixel displacement directions for the pixels of the input image. These displacements are applied to the input image to generate a warped image of the object.
    Type: Application
    Filed: April 14, 2020
    Publication date: October 14, 2021
    Inventors: Julia Gong, Yannick Hold-Geoffroy, Jingwan Lu
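Applying a predicted warping field, as described above, is a standard resampling step; the sketch below shows it with `torch.nn.functional.grid_sample`. Generating the field itself would require the trained model, so only the application step is shown.

```python
import torch
import torch.nn.functional as F

def apply_warp(image: torch.Tensor, warp_field: torch.Tensor) -> torch.Tensor:
    """image: (N, C, H, W) float tensor; warp_field: (N, H, W, 2) per-pixel
    displacements expressed in normalized [-1, 1] coordinates."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, dtype=image.dtype),
                            torch.linspace(-1, 1, w, dtype=image.dtype),
                            indexing="ij")
    base = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)  # identity grid
    return F.grid_sample(image, base + warp_field, align_corners=True)
```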
  • Patent number: 11138776
    Abstract: Various methods and systems are provided for image-management operations that include generating adaptive image armatures based on an alignment between composition lines of a reference armature and a position of an object in an image. In operation, a reference armature for an image is accessed. The reference armature includes a plurality of composition lines that define a frame of reference for image composition. An alignment map is determined using the reference armature. The alignment map includes alignment information that indicates alignment between the composition lines of the reference armature and the position of the object in the image. Based on the alignment map, an adaptive image armature is determined. The adaptive image armature includes a subset of the composition lines of the reference armature. The adaptive image armature is displayed.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: October 5, 2021
    Assignee: Adobe Inc.
    Inventors: Radomir Mech, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Jianming Zhang, Jane Little E
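One speculative way to render the alignment idea above in code: score each composition line by its perpendicular distance to the object's position and keep the best-aligned subset. The scoring rule and thresholds are illustrative assumptions, not the patented alignment map.

```python
import numpy as np

def adaptive_armature(lines, obj_center, keep=3, tol=0.1):
    """lines: (anchor_point, unit_direction) pairs in normalized image
    coordinates; returns the subset best aligned with the object."""
    def dist(line):
        p, d = line
        v = np.asarray(obj_center, float) - np.asarray(p, float)
        return abs(v[0] * d[1] - v[1] * d[0])   # perpendicular distance
    return [ln for ln in sorted(lines, key=dist)[:keep] if dist(ln) < tol]
```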
  • Publication number: 20210295045
    Abstract: The present disclosure relates to systems, computer-implemented methods, and non-transitory computer readable medium for automatically transferring makeup from a reference face image to a target face image using a neural network trained using semi-supervised learning. For example, the disclosed systems can receive, at a neural network, a target face image and a reference face image, where the target face image is selected by a user via a graphical user interface (GUI) and the reference face image has makeup. The systems transfer, by the neural network, the makeup from the reference face image to the target face image, where the neural network is trained to transfer the makeup from the reference face image to the target face image using semi-supervised learning. The systems output for display the makeup on the target face image.
    Type: Application
    Filed: March 18, 2020
    Publication date: September 23, 2021
    Inventors: Yijun Li, Zhifei Zhang, Richard Zhang, Jingwan Lu
  • Patent number: 11107257
    Abstract: Disclosed herein are embodiments of systems and computer-implemented methods for extracting a set of discrete colors from an input image. A playful palette may be automatically generated from the set of discrete colors, where the playful palette contains a gamut limited to a blend of the set of discrete colors. A representation of the playful palette may be displayed on a graphical user interface of an electronic device. In a first method, an optimization may be performed using a bidirectional objective function comparing the color gamut of the input image with a rendering of a candidate playful palette. Initial blobs may be generated by clustering. In a second method, color subsampling may be performed from the image, and a self-organizing map (SOM) may be generated. Clustering the SOM colors may be performed, and each pixel of the SOM may be replaced with an average color value to generate a cluster map.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: August 31, 2021
    Assignee: Adobe Inc.
    Inventors: Stephen Diverdi, Jose Ignacio Echevarria Vallespi, Jingwan Lu
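The clustering step common to both methods described above can be illustrated with plain k-means over subsampled pixels; this stands in for the SOM-based pipeline rather than reproducing it.

```python
import numpy as np

def extract_palette(image: np.ndarray, k=5, samples=2000, iters=20, seed=0):
    """K-means stand-in for palette extraction. Assumes the image is an
    (H, W, 3) uint8 array with at least `samples` pixels."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 3).astype(np.float32)
    pts = pixels[rng.choice(len(pixels), samples, replace=False)]
    centers = pts[rng.choice(samples, k, replace=False)]
    for _ in range(iters):
        labels = ((pts[:, None] - centers) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return centers.astype(np.uint8)      # k discrete palette colors
```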
  • Patent number: 11093660
    Abstract: Methods and systems for aiding users in generating object pattern designs with increased speed. In particular, one or more embodiments train a sequence-based machine-learning model using training objects, each training object including a plurality of regions with a plurality of design elements. One or more embodiments identify a plurality of regions of an object with a first region adjacent a second region. One or more embodiments receive a user selection of a first design element, from a plurality of design elements, for populating the first region. One or more embodiments identify a second design element from the plurality of design elements based on the first design element using the trained sequence-based machine-learning model. One or more embodiments also populate the second region with one or more instances of the second design element.
    Type: Grant
    Filed: December 14, 2015
    Date of Patent: August 17, 2021
    Assignee: Adobe Inc.
    Inventors: Paul Asente, Jingwan Lu, Huy Phan
  • Publication number: 20210248727
    Abstract: This disclosure includes technologies for image processing based on a creation workflow for creating a particular type of image. The disclosed technologies can support both multi-stage image generation and multi-stage image editing of an existing image. To accomplish this, the disclosed system models the sequential creation stages of the creation workflow. In the backward direction, inference networks can backward transform an image into various intermediate stages. In the forward direction, generation networks can forward transform an earlier-stage image into a later-stage image based on stage-specific operations. Advantageously, the disclosed technical solution overcomes the limitations of the single-stage generation strategy with a multi-stage framework to model different types of variation at various creation stages. As a result, both novices and seasoned artists can use the disclosed technologies to efficiently perform complex artwork creation or editing tasks.
    Type: Application
    Filed: February 7, 2020
    Publication date: August 12, 2021
    Inventors: Matthew David Fisher, Hung-Yu Tseng, Yijun Li, Jingwan Lu
  • Patent number: 11087503
    Abstract: An interactive palette interface includes a color picker for digital paint applications. A user can create, modify and select colors for creating digital artwork using the interactive palette interface. The interactive palette interface includes a mixing dish in which colors can be added, removed and rearranged to blend together to create gradients and gamuts. The mixing dish is a digital simulation of a physical palette on which an artist adds and mixes various colors of paint before applying the paint to the artwork. Color blobs, which are logical groups of pixels in the mixing dish, can be spatially rearranged and scaled by a user to create and explore different combinations of colors. The color, position and size of each blob influences the color of other pixels in the mixing dish. Edits to the mixing dish are non-destructive, and an infinite history of color combinations is preserved.
    Type: Grant
    Filed: June 21, 2019
    Date of Patent: August 10, 2021
    Assignee: Adobe Inc.
    Inventors: Maria Shugrina, Stephen J. DiVerdi, Jingwan Lu
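The way blob color, position, and size influence surrounding pixels in the mixing dish above can be approximated with a distance-weighted blend; the Gaussian falloff here is an assumption chosen for simplicity, not the product's blending model.

```python
import numpy as np

def render_mixing_dish(blobs, size=256):
    """blobs: (x, y, radius, rgb) tuples. Each pixel's color is the blobs'
    colors weighted by Gaussian falloff from their centers."""
    ys, xs = np.mgrid[0:size, 0:size]
    acc = np.zeros((size, size, 3))
    wsum = np.zeros((size, size, 1))
    for bx, by, r, rgb in blobs:
        w = np.exp(-((xs - bx) ** 2 + (ys - by) ** 2) / (2 * r ** 2))[..., None]
        acc += w * np.asarray(rgb, dtype=float)
        wsum += w
    return (acc / np.maximum(wsum, 1e-8)).astype(np.uint8)
```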
  • Publication number: 20210233287
    Abstract: In implementations of adding color to digital images, an image colorization system can display a digital image to be color adjusted in an image editing interface and convert pixel content of the digital image to a LAB color space. The image colorization system can determine a lightness value (L) in the LAB color space of the pixel content of the digital image at a specified point on the digital image, and determine colors representable in an RGB color space based on combinations of A,B value pairs with the lightness value (L) in the LAB color space. The image colorization system can then determine a range of the colors for display in a color gamut in the image editing interface, the range of the colors corresponding to the A,B value pairs with the lightness value (L) of the pixel content at the specified point on the digital image.
    Type: Application
    Filed: January 24, 2020
    Publication date: July 29, 2021
    Applicant: Adobe Inc.
    Inventors: Nishant Kumar, Vikas Sharma, Shantanu Agarwal, Sameer Bhatt, Rupali Arora, Richard Zhang, Anuradha Yadav, Jingwan Lu, Matthew David Fisher
  • Patent number: 11048335
    Abstract: Stroke operation prediction techniques and systems for three-dimensional digital content are described. In one example, stroke operation data is received that describes a stroke operation input via a user interface as part of the three-dimensional digital content. A cycle is generated that defines a closed path within the three-dimensional digital content based on the input stroke operation and at least one other stroke operation in the user interface. A surface is constructed based on the generated cycle. A predicted stroke operation is generated based at least in part on the constructed surface. The predicted stroke operation is then output in real time in the user interface as part of the three-dimensional digital content as the stroke operation data is received.
    Type: Grant
    Filed: April 13, 2020
    Date of Patent: June 29, 2021
    Assignee: Adobe Inc.
    Inventors: Jingwan Lu, Stephen J. DiVerdi, Byungmoon Kim, Jun Xing
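The cycle-and-surface construction above is too involved for a snippet, but the flavor of stroke prediction for repetitive input can be shown with naive extrapolation: propose the next stroke by repeating the offset between the two most recent ones. This is a drastic simplification of the patented method.

```python
import numpy as np

def predict_next_stroke(prev_stroke, curr_stroke):
    """Each stroke is an (n, 3) array of 3D points with matching point
    counts; the prediction repeats the latest stroke-to-stroke offset."""
    prev, curr = np.asarray(prev_stroke), np.asarray(curr_stroke)
    return curr + (curr - prev)
```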
  • Patent number: 11043015
    Abstract: Techniques for propagating a reflection of an object. In an example, a method includes receiving an input image comprising a first reflection of a first object on a reflective surface. The method further includes generating a second reflection for a second object in the input image. The second reflection is a reflection of the second object on the reflective surface. The method includes adding the second reflection to the input image. The method includes outputting a modified image comprising the first object, the first reflection, the second object, and the second reflection.
    Type: Grant
    Filed: August 12, 2019
    Date of Patent: June 22, 2021
    Assignee: Adobe Inc.
    Inventors: Sanjeev Tagra, Sachin Soni, Ajay Jain, Prasenjit Mondal, Jingwan Lu
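For a planar, horizontal reflective surface, generating the second reflection described above reduces to mirroring the object's pixels across the surface line and fading them in; the single `surface_y` row and fixed opacity below are simplifying assumptions.

```python
import numpy as np

def add_reflection(image: np.ndarray, obj_mask: np.ndarray,
                   surface_y: int, opacity: float = 0.4) -> np.ndarray:
    """Mirror masked object pixels across row `surface_y` and blend them
    into the image to mimic the existing reflection."""
    out = image.astype(np.float32).copy()
    ys, xs = np.nonzero(obj_mask)
    ry = 2 * surface_y - ys                       # mirrored row per pixel
    ok = (ry >= 0) & (ry < image.shape[0])        # keep in-bounds pixels
    out[ry[ok], xs[ok]] = ((1 - opacity) * out[ry[ok], xs[ok]]
                           + opacity * image[ys[ok], xs[ok]])
    return out.astype(np.uint8)
```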
  • Patent number: 11024060
    Abstract: Techniques are provided for converting a self-portrait image into a neutral-pose portrait image, including receiving a self-portrait input image, which contains at least one person who is the subject of the self-portrait. A nearest pose search selects a target neutral-pose image that closely matches or approximates the pose of the upper torso region of the subject in the self-portrait input image. Coordinate-based inpainting maps pixels from the upper torso region in the self-portrait input image to corresponding regions in the selected target neutral-pose image to produce a coarse result image. A neutral-pose composition refines the coarse result image by synthesizing details in the body region of the subject (which in some cases includes the subject's head, arms, and torso), and inpainting pixels into missing portions of the background. The refined image is composited with the original self-portrait input image to produce a neutral-pose result image.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: June 1, 2021
    Assignee: Adobe Inc.
    Inventors: Liqian Ma, Jingwan Lu, Zhe Lin, Connelly Barnes, Alexei A. Efros
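The three stages named above compose into a simple pipeline; the sketch below captures only its shape, with each stage passed in as a hypothetical callable standing in for the corresponding trained component.

```python
def neutralize_selfie(selfie, nearest_pose_search, coordinate_inpaint,
                      refine_and_composite):
    """Pipeline shape of the described method; all three stages are
    placeholders for trained models."""
    target_pose = nearest_pose_search(selfie)         # closest neutral pose
    coarse = coordinate_inpaint(selfie, target_pose)  # map torso pixels over
    return refine_and_composite(coarse, selfie)       # synthesize detail, blend
```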