Patents by Inventor Jingwan Lu

Jingwan Lu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180322661
    Abstract: An interactive palette interface includes a color picker for digital paint applications. A user can create, modify and select colors for creating digital artwork using the interactive palette interface. The interactive palette interface includes a mixing dish in which colors can be added, removed and rearranged to blend together to create gradients and gamuts. The mixing dish is a digital simulation of a physical palette on which an artist adds and mixes various colors of paint before applying the paint to the artwork. Color blobs, which are logical groups of pixels in the mixing dish, can be spatially rearranged and scaled by a user to create and explore different combinations of colors. The color, position and size of each blob influences the color of other pixels in the mixing dish. Edits to the mixing dish are non-destructive, and an infinite history of color combinations is preserved.
    Type: Application
    Filed: May 8, 2017
    Publication date: November 8, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Maria Shugrina, Stephen J. DiVerdi, Jingwan Lu
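    A minimal Python sketch of the kind of blob-based blending the abstract describes, in which each blob's color, position and size influences nearby pixels. The Gaussian distance weighting and the function name render_mixing_dish are illustrative assumptions, not the blending model claimed in the patent.

        import numpy as np

        def render_mixing_dish(blobs, size=256):
            """Blend color blobs over a square mixing dish.

            blobs: list of (x, y, radius, (r, g, b)) with coordinates in [0, 1].
            The Gaussian weighting below is an assumption for illustration.
            """
            ys, xs = np.mgrid[0:size, 0:size] / float(size)
            dish = np.zeros((size, size, 3))
            weights = np.zeros((size, size))
            for bx, by, radius, color in blobs:
                d2 = (xs - bx) ** 2 + (ys - by) ** 2
                w = np.exp(-d2 / (2.0 * radius ** 2))   # larger blobs influence more pixels
                dish += w[..., None] * np.asarray(color, dtype=float)
                weights += w
            return dish / np.maximum(weights, 1e-8)[..., None]

        # Two blobs blend into a smooth gradient between them; moving or scaling a
        # blob and re-rendering is non-destructive, since the blob list is kept.
        canvas = render_mixing_dish([(0.3, 0.5, 0.15, (1.0, 0.2, 0.1)),
                                     (0.7, 0.5, 0.20, (0.1, 0.3, 1.0))])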
  • Patent number: 10109083
    Abstract: Systems and methods provide for on-the-fly creation of curvy, digital brush strokes using incremental, local optimization. Samples from a user's input stroke are detected and matched with exemplar brush stroke segments as the user proceeds to provide input. For each set of samples, a temporary segment is generated and displayed for the user, and the temporary segment is later replaced by a permanent segment as subsequent sample sets are matched. Additionally, this optimization allows for updated parameterization in corner regions to provide a more realistic curve in the digital brush stroke. Specifically, intersecting ribs in the corners may be collapsed to prevent the rendering of artifacts. Additionally, corner structures may be inserted in a break in a corner structure. These corner structures may be extensions of samples around the break and may correct distortion that results from the rib collapsing.
    Type: Grant
    Filed: August 12, 2016
    Date of Patent: October 23, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Stephen Joseph DiVerdi, Jingwan Lu
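    A rough Python sketch of the incremental matching loop the abstract outlines: each new window of input samples is matched to an exemplar segment, shown as a temporary segment, and committed once the next window arrives. The window size, the direction-based matching criterion, and all names here are assumptions for illustration.

        import numpy as np

        def match_exemplar(window, exemplars):
            # Stand-in matching criterion: pick the exemplar whose overall
            # direction is closest to the direction of the sample window.
            d = window[-1] - window[0]
            angle = np.arctan2(d[1], d[0])
            return min(exemplars,
                       key=lambda ex: abs(np.angle(np.exp(1j * (ex["angle"] - angle)))))

        def incremental_stroke(samples, exemplars, window=4):
            committed, temporary = [], None
            for i in range(window, len(samples) + 1, window):
                if temporary is not None:
                    committed.append(temporary)   # permanent once the next set is matched
                temporary = match_exemplar(samples[i - window:i], exemplars)
            return committed, temporary

        exemplars = [{"name": "flat", "angle": 0.0}, {"name": "rising", "angle": np.pi / 4}]
        samples = np.cumsum(np.random.rand(16, 2), axis=0)   # simulated tablet samples
        permanent, pending = incremental_stroke(samples, exemplars)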
  • Publication number: 20180239434
    Abstract: Stroke operation prediction techniques and systems for three-dimensional digital content are described. In one example, stroke operation data is received that describes a stroke operation input via a user interface as part of the three-dimensional digital content. A cycle is generated that defines a closed path within the three-dimensional digital content based on the input stroke operation and at least one other stroke operation in the user interface. A surface is constructed based on the generated cycle. A predicted stroke operation is generated based at least in part on the constructed surface. The predicted stroke operation is then output in real time in the user interface as part of the three-dimensional digital content as the stroke operation data is received.
    Type: Application
    Filed: February 21, 2017
    Publication date: August 23, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Jingwan Lu, Stephen J. DiVerdi, Byungmoon Kim, Jun Xing
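    The abstract describes predicting a stroke from the input stroke, a cycle closed with a neighboring stroke, and a surface built on that cycle. The cycle and surface construction are beyond a short sketch; the Python below only illustrates the real-time prediction idea with a much simpler heuristic (extrapolating by the offset between two consecutive strokes), which is an assumption, not the patented method.

        import numpy as np

        def predict_next_stroke(prev_stroke, new_stroke):
            # Assume the user is repeating a pattern: shift the newest stroke
            # by the average offset between it and the previous stroke.
            prev = np.asarray(prev_stroke, dtype=float)
            new = np.asarray(new_stroke, dtype=float)
            n = min(len(prev), len(new))
            offset = (new[:n] - prev[:n]).mean(axis=0)
            return new + offset   # predicted 3D polyline, drawn in the UI as the user works

        stroke_a = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
        stroke_b = [(0, 1, 0), (1, 1, 0), (2, 1, 0)]
        print(predict_next_stroke(stroke_a, stroke_b))   # a stroke near y = 2 is suggested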
  • Patent number: 10019817
    Abstract: Example-based edge-aware directional texture painting techniques are described. Inputs are received that define a target direction field and a plurality of edges as part of a target shape mask. A texture is synthesized from a source image by the computing device to be applied to the set of pixels of the target mask using a source shape mask and a source direction field. The source shape mask defines a plurality of edges of the source mask such that the synthesized texture applied to the plurality of edges of the target shape mask corresponds to respective ones of the plurality of edges of the source shape mask. The source direction field is taken from the source image such that the synthesized texture applied to the target direction field corresponds to the source direction field. The pixels in the user interface are painted by the computing device using the synthesized texture.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: July 10, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Paul J. Asente, Jingwan Lu, Michal Lukáč, Elya Schechtman
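    A toy Python sketch of guidance-driven synthesis in the spirit of the abstract: each target pixel copies color from the source pixel whose guidance values (distance to the nearest edge and local direction) are closest. The real method is patch-based; this per-pixel nearest-neighbor lookup and all names are simplifying assumptions.

        import numpy as np

        def synthesize(source_rgb, src_edge_dist, src_dir, tgt_edge_dist, tgt_dir):
            # Guidance features: distance-to-edge plus the direction field
            # encoded as (cos, sin) so that angles compare smoothly.
            src_feats = np.stack([src_edge_dist.ravel(),
                                  np.cos(src_dir).ravel(), np.sin(src_dir).ravel()], axis=1)
            tgt_feats = np.stack([tgt_edge_dist.ravel(),
                                  np.cos(tgt_dir).ravel(), np.sin(tgt_dir).ravel()], axis=1)
            src_colors = source_rgb.reshape(-1, 3)
            out = np.empty((tgt_feats.shape[0], 3))
            for i, f in enumerate(tgt_feats):
                out[i] = src_colors[np.argmin(((src_feats - f) ** 2).sum(axis=1))]
            return out.reshape(tgt_edge_dist.shape + (3,))

        h = w = 16
        painted = synthesize(np.random.rand(h, w, 3),
                             np.random.rand(h, w), np.random.rand(h, w) * np.pi,
                             np.random.rand(h, w), np.random.rand(h, w) * np.pi)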
  • Publication number: 20180150947
    Abstract: Methods and systems are provided for transforming sketches into stylized electronic paintings. A neural network system is trained, where the training includes training a first neural network that converts input sketches into output images and training a second neural network that converts images into output paintings. Similarity for the first neural network is evaluated between the output image and a reference image, and similarity for the second neural network is evaluated between the output painting, the output image, and a reference painting. The neural network system is modified based on the evaluated similarity. The trained neural network is used to generate an output painting from an input sketch, where the output painting maintains features from the input sketch, utilizing an extrapolated intermediate image, and reflects a designated style from the reference painting.
    Type: Application
    Filed: March 13, 2017
    Publication date: May 31, 2018
    Inventors: Jingwan Lu, Patsorn Sangkloy, Chen Fang
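    A minimal PyTorch sketch of the two-stage training step the abstract describes: one network maps sketches to images and is compared against a reference image, and a second maps images to paintings and is compared against both its input image and a reference painting. The tiny one-layer networks, the plain L1 similarity terms, and the random tensors below are placeholders, not the patent's architecture or losses.

        import torch
        import torch.nn as nn

        sketch_to_image = nn.Sequential(nn.Conv2d(1, 3, 3, padding=1))    # stand-in for network 1
        image_to_painting = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))  # stand-in for network 2
        l1 = nn.L1Loss()
        opt = torch.optim.Adam(list(sketch_to_image.parameters()) +
                               list(image_to_painting.parameters()), lr=1e-3)

        sketch = torch.rand(1, 1, 64, 64)
        reference_image = torch.rand(1, 3, 64, 64)
        reference_painting = torch.rand(1, 3, 64, 64)

        image = sketch_to_image(sketch)
        painting = image_to_painting(image)
        loss = (l1(image, reference_image)            # first network vs. reference image
                + l1(painting, image.detach())        # second network vs. its input image
                + l1(painting, reference_painting))   # second network vs. reference painting
        opt.zero_grad()
        loss.backward()
        opt.step()   # the system is modified based on the evaluated similarity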
  • Publication number: 20180122131
    Abstract: Techniques for illumination-guided example-based stylization of 3D renderings are described. In implementations, a source image and a target image are obtained, where each image includes a multi-channel image having at least a style channel and multiple light path expression (LPE) channels having light propagation information. Then, the style channel of the target image is synthesized to mimic a stylization of individual illumination effects from the style channel of the source image. As part of the synthesizing, the light propagation information is applied as guidance for synthesis of the style channel of the target image. Based on the guidance, the stylization of individual illumination effects from the style channel of the source image is transferred to the style channel of the target image. Based on the transfer, the style channel of the target image is then generated for display of the target image via a display device.
    Type: Application
    Filed: December 22, 2017
    Publication date: May 3, 2018
    Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
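    A toy Python sketch of using the light path expression (LPE) channels as guidance, as the abstract outlines: each target pixel takes its style value from the source pixel whose LPE channels match best, so individual illumination effects are stylized consistently. The per-pixel lookup stands in for the patent's patch-based synthesis, and the channel layout is an assumption.

        import numpy as np

        def transfer_style(src_style, src_lpe, tgt_lpe):
            # src_style: H x W x 3 style channel of the source exemplar.
            # src_lpe / tgt_lpe: H x W x C stacks of LPE guidance channels.
            h, w, _ = tgt_lpe.shape
            src_feats = src_lpe.reshape(-1, src_lpe.shape[-1])
            src_vals = src_style.reshape(-1, src_style.shape[-1])
            tgt_feats = tgt_lpe.reshape(-1, tgt_lpe.shape[-1])
            out = np.empty((h * w, src_style.shape[-1]))
            for i, f in enumerate(tgt_feats):
                out[i] = src_vals[np.argmin(((src_feats - f) ** 2).sum(axis=1))]
            return out.reshape(h, w, -1)

        # e.g. four LPE channels (diffuse, specular, shadow, ambient) guiding an RGB style channel
        src_lpe, tgt_lpe = np.random.rand(32, 32, 4), np.random.rand(32, 32, 4)
        stylized = transfer_style(np.random.rand(32, 32, 3), src_lpe, tgt_lpe)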
  • Publication number: 20180101972
    Abstract: A procedural model enables a user to configure a global space organization function for the generation of decorative ornaments. The user provides data to seed the generation of the ornaments, as well as localized interactive edits to the generated ornaments. The procedural model iteratively places decorative elements at a subset of locations within an ornament area (or domain) based on generalized placement functions employed by the global space organization function. As such, the user is enabled to interactively generate and edit decorative ornaments via configuring the global space organization function and employing editing tools. Such functionality significantly decreases the effort typically required to generate ornate ornaments, while retaining control of the aesthetic organization and structure of the ornament. The generalized placement functions and heuristics of the global space organization function enable such control.
    Type: Application
    Filed: October 7, 2016
    Publication date: April 12, 2018
    Inventors: Paul John Asente, Jingwan Lu, Lena Edith Elfriede Gieseke
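    A small Python sketch of iterative element placement driven by a user-configurable placement function, in the spirit of the global space organization function described above. The rejection-sampling loop, the spacing rule, and the example density are assumptions chosen for brevity.

        import numpy as np

        def grow_ornament(density, n_elements=50, min_dist=0.05, max_tries=10000, seed=0):
            # Place points in the unit-square domain: accept a candidate when the
            # placement function favors it and it keeps a minimum spacing.
            rng = np.random.default_rng(seed)
            placed = []
            for _ in range(max_tries):
                if len(placed) >= n_elements:
                    break
                p = rng.random(2)
                if rng.random() > density(p):
                    continue
                if all(np.linalg.norm(p - q) >= min_dist for q in placed):
                    placed.append(p)
            return np.array(placed)

        # Denser ornamentation toward the center of the domain; changing the
        # density function and re-running corresponds to an interactive edit.
        elements = grow_ornament(lambda p: 1.0 - np.linalg.norm(p - 0.5))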
  • Patent number: 9905054
    Abstract: Techniques for controlling patch-usage in image synthesis are described. In implementations, a curve is fitted to a set of sorted matching errors that correspond to potential source-to-target patch assignments between a source image and a target image. Then, an error budget is determined using the curve. In an example, the error budget is usable to identify feasible patch assignments from the potential source-to-target patch assignments. Using the error budget along with uniform patch-usage enforcement, source patches from the source image are assigned to target patches in the target image. Then, at least one of the assigned source patches is assigned to an additional target patch based on the error budget. Subsequently, an image is synthesized based on the source patches assigned to the target patches.
    Type: Grant
    Filed: June 9, 2016
    Date of Patent: February 27, 2018
    Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
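    A compact Python sketch of the error-budget idea in the abstract: sort the candidate matching errors, locate a knee, treat the error there as the budget, and assign source patches to target patches while keeping usage as uniform as possible. The knee heuristic used here (the point farthest below the chord) replaces the patent's curve fit and is an assumption, as are all names.

        import numpy as np

        def error_budget(matching_errors):
            # Knee of the sorted error curve: the point farthest below the
            # straight line joining the smallest and largest error.
            e = np.sort(np.asarray(matching_errors, dtype=float))
            x = np.linspace(0.0, 1.0, len(e))
            chord = e[0] + x * (e[-1] - e[0])
            return e[int(np.argmax(chord - e))]

        def assign_patches(errors):
            # errors[s, t]: matching error of source patch s against target patch t.
            budget = error_budget(errors.ravel())
            order = np.argsort(errors, axis=0)
            used = np.zeros(errors.shape[0], dtype=int)
            assignment = np.empty(errors.shape[1], dtype=int)
            for t in range(errors.shape[1]):
                # Among sources within the budget, prefer the least-used one, so a
                # source is reused for an additional target only when the budget allows.
                within = [s for s in order[:, t] if errors[s, t] <= budget] or [order[0, t]]
                s = min(within, key=lambda k: used[k])
                assignment[t] = s
                used[s] += 1
            return assignment

        rng = np.random.default_rng(1)
        print(assign_patches(rng.random((8, 12))))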
  • Publication number: 20180047189
    Abstract: Systems and methods provide for on-the-fly creation of curvy, digital brush strokes using incremental, local optimization. Samples from a user's input stroke are detected and matched with exemplar brush stroke segments as the user proceeds to provide input. For each set of samples, a temporary segment is generated and displayed for the user, and the temporary segment is later replaced by a permanent segment as subsequent sample sets are matched. Additionally, this optimization allows for updated parameterization in corner regions to provide a more realistic curve in the digital brush stroke. Specifically, intersecting ribs in the corners may be collapsed to prevent the rendering of artifacts. Additionally, corner structures may be inserted in a break in a corner structure. These corner structures may be extensions of samples around the break and may correct distortion that results from the rib collapsing.
    Type: Application
    Filed: August 12, 2016
    Publication date: February 15, 2018
    Inventors: Stephen Joseph DiVerdi, Jingwan Lu
  • Patent number: 9881413
    Abstract: Techniques for illumination-guided example-based stylization of 3D renderings are described. In implementations, a source image and a target image are obtained, where each image includes a multi-channel image having at least a style channel and multiple light path expression (LPE) channels having light propagation information. Then, the style channel of the target image is synthesized to mimic a stylization of individual illumination effects from the style channel of the source image. As part of the synthesizing, the light propagation information is applied as guidance for synthesis of the style channel of the target image. Based on the guidance, the stylization of individual illumination effects from the style channel of the source image is transferred to the style channel of the target image. Based on the transfer, the style channel of the target image is then generated for display of the target image via a display device.
    Type: Grant
    Filed: June 9, 2016
    Date of Patent: January 30, 2018
    Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
  • Patent number: 9870638
    Abstract: Appearance transfer techniques are described in the following. In one example, a search and vote process is configured to select patches from the image exemplar and then search for a location in the target image that is a best fit for the patches. As part of this selection, a patch usage counter may also be employed in an example to ensure that selection of each of the patches from the image exemplar does not vary by more than one from one patch to another. In another example, transfer of the appearance of boundary and interior regions from the image exemplar to a target image is preserved.
    Type: Grant
    Filed: February 24, 2016
    Date of Patent: January 16, 2018
    Inventors: Ondrej Jamriška, Jakub Fiser, Paul J. Asente, Jingwan Lu, Elya Shechtman, Daniel Sýkora
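    A short Python sketch of the uniform-usage property mentioned in the abstract, in which a patch usage counter keeps the number of times each exemplar patch is selected within one of every other patch. The greedy target ordering and the cost matrix are assumptions; the full search-and-vote procedure is not reproduced here.

        import numpy as np

        def balanced_assignment(errors):
            # errors[s, t]: cost of using exemplar patch s at target patch t.
            n_src, n_tgt = errors.shape
            used = np.zeros(n_src, dtype=int)
            assignment = np.full(n_tgt, -1, dtype=int)
            for t in np.argsort(errors.min(axis=0)):           # easiest targets first
                eligible = np.flatnonzero(used == used.min())  # keeps counts within one
                s = eligible[np.argmin(errors[eligible, t])]
                assignment[t] = s
                used[s] += 1
            return assignment

        rng = np.random.default_rng(0)
        a = balanced_assignment(rng.random((4, 10)))
        # With 4 exemplar patches and 10 targets, every patch is used 2 or 3 times.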
  • Patent number: 9852523
    Abstract: Appearance transfer techniques are described in the following that maintain temporal coherence between frames. In one example, a previous frame of a target video, which occurs in the sequence of the target video before a particular frame being synthesized, is warped. Color of the particular frame is transferred from the appearance of a corresponding frame of a video exemplar. In a further example, emitter portions are identified and addressed to preserve temporal coherence. This is performed to reduce the influence of the emitter portion of the target region in the selection of patches.
    Type: Grant
    Filed: February 24, 2016
    Date of Patent: December 26, 2017
    Inventors: Ondrej Jamriška, Jakub Fiser, Paul J. Asente, Jingwan Lu, Elya Shechtman, Daniel Sýkora
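    A minimal Python sketch of the warping step in the abstract: the previously synthesized frame is warped toward the current frame and combined with the color of the corresponding exemplar frame. The nearest-neighbor warp, the fixed blend weight, and the absence of emitter handling are simplifications, not the patented procedure.

        import numpy as np

        def warp(frame, flow):
            # Backward-warp a frame with a dense flow field (nearest-neighbor sampling).
            h, w = frame.shape[:2]
            ys, xs = np.mgrid[0:h, 0:w]
            src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
            src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
            return frame[src_y, src_x]

        def coherent_frame(prev_output, flow, exemplar_frame, blend=0.5):
            # Blend the warped previous result with the exemplar frame's color
            # so consecutive synthesized frames stay temporally coherent.
            return blend * warp(prev_output, flow) + (1.0 - blend) * exemplar_frame

        prev = np.random.rand(24, 24, 3)
        flow = np.zeros((24, 24, 2))           # e.g. from an optical-flow estimator
        frame = coherent_frame(prev, flow, np.random.rand(24, 24, 3))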
  • Publication number: 20170358128
    Abstract: Techniques for illumination-guided example-based stylization of 3D renderings are described. In implementations, a source image and a target image are obtained, where each image includes a multi-channel image having at least a style channel and multiple light path expression (LPE) channels having light propagation information. Then, the style channel of the target image is synthesized to mimic a stylization of individual illumination effects from the style channel of the source image. As part of the synthesizing, the light propagation information is applied as guidance for synthesis of the style channel of the target image. Based on the guidance, the stylization of individual illumination effects from the style channel of the source image is transferred to the style channel of the target image. Based on the transfer, the style channel of the target image is then generated for display of the target image via a display device.
    Type: Application
    Filed: June 9, 2016
    Publication date: December 14, 2017
    Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
  • Publication number: 20170358143
    Abstract: Techniques for controlling patch-usage in image synthesis are described. In implementations, a curve is fitted to a set of sorted matching errors that correspond to potential source-to-target patch assignments between a source image and a target image. Then, an error budget is determined using the curve. In an example, the error budget is usable to identify feasible patch assignments from the potential source-to-target patch assignments. Using the error budget along with uniform patch-usage enforcement, source patches from the source image are assigned to target patches in the target image. Then, at least one of the assigned source patches is assigned to an additional target patch based on the error budget. Subsequently, an image is synthesized based on the source patches assigned to the target patches.
    Type: Application
    Filed: June 9, 2016
    Publication date: December 14, 2017
    Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
  • Publication number: 20170243376
    Abstract: Appearance transfer techniques are described in the following that maintain temporal coherence between frames. In one example, a previous frame of a target video, which occurs in the sequence of the target video before a particular frame being synthesized, is warped. Color of the particular frame is transferred from the appearance of a corresponding frame of a video exemplar. In a further example, emitter portions are identified and addressed to preserve temporal coherence. This is performed to reduce the influence of the emitter portion of the target region in the selection of patches.
    Type: Application
    Filed: February 24, 2016
    Publication date: August 24, 2017
    Inventors: Ondrej Jamriška, Jakub Fiser, Paul J. Asente, Jingwan Lu, Elya Shechtman, Daniel Sýkora
  • Publication number: 20170243388
    Abstract: Appearance transfer techniques are described in the following. In one example, a search and vote process is configured to select patches from the image exemplar and then search for a location in the target image that is a best fit for the patches. As part of this selection, a patch usage counter may also be employed in an example to ensure that selection of each of the patches from the image exemplar does not vary by more than one from one patch to another. In another example, transfer of the appearance of boundary and interior regions from the image exemplar to a target image is preserved.
    Type: Application
    Filed: February 24, 2016
    Publication date: August 24, 2017
    Inventors: Ondrej Jamriška, Jakub Fiser, Paul J. Asente, Jingwan Lu, Elya Shechtman, Daniel Sýkora
  • Publication number: 20170169340
    Abstract: Methods and systems are provided for aiding users in generating object pattern designs with increased speed. In particular, one or more embodiments train a sequence-based machine-learning model using training objects, each training object including a plurality of regions with a plurality of design elements. One or more embodiments identify a plurality of regions of an object with a first region adjacent to a second region. One or more embodiments receive a user selection of a first design element, from a plurality of design elements, for populating the first region. One or more embodiments identify a second design element from the plurality of design elements based on the first design element using the trained sequence-based machine-learning model. One or more embodiments also populate the second region with one or more instances of the second design element.
    Type: Application
    Filed: December 14, 2015
    Publication date: June 15, 2017
    Inventors: Paul Asente, Jingwan Lu, Huy Phan
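    A small Python sketch of the suggestion step described above: after training on example objects, the element chosen for one region is used to look up a likely element for the adjacent region. A bigram frequency table stands in for the patent's trained sequence-based machine-learning model; the class and data below are illustrative.

        from collections import Counter, defaultdict

        class ElementSuggester:
            def __init__(self):
                self.followers = defaultdict(Counter)

            def train(self, training_objects):
                # Each training object is an ordered list of design elements,
                # one per region, so adjacent pairs form the training sequences.
                for regions in training_objects:
                    for a, b in zip(regions, regions[1:]):
                        self.followers[a][b] += 1

            def suggest(self, chosen_element):
                counts = self.followers.get(chosen_element)
                return counts.most_common(1)[0][0] if counts else None

        model = ElementSuggester()
        model.train([["rose", "leaf", "rose", "leaf"], ["rose", "leaf", "bud"]])
        print(model.suggest("rose"))   # -> "leaf" is proposed for the adjacent region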
  • Publication number: 20170109900
    Abstract: Example-based edge-aware directional texture painting techniques are described. Inputs are received that define a target direction field and a plurality of edges as part of a target shape mask. A texture is synthesized from a source image by the computing device to be applied to the set of pixels of the target mask using a source shape mask and a source direction field. The source shape mask defines a plurality of edges of the source mask such that the synthesized texture applied to the plurality of edges of the target shape mask corresponds to respective ones of the plurality of edges of the source shape mask. The source direction field is taken from the source image such that the synthesized texture applied to the target direction field corresponds to the source direction field. The pixels in the user interface are painted by the computing device using the synthesized texture.
    Type: Application
    Filed: December 29, 2016
    Publication date: April 20, 2017
    Applicant: Adobe Systems Incorporated
    Inventors: Paul J. Asente, Jingwan Lu, Michal Lukáč, Elya Schechtman
  • Patent number: 9536327
    Abstract: Example-based edge-aware directional texture painting techniques are described. Inputs are received that define a target direction field and a plurality of edges as part of a target shape mask. A texture is synthesized from a source image by the computing device to be applied to the set of pixels of the target mask using a source shape mask and a source direction field. The source shape mask defines a plurality of edges of the source mask such that the synthesized texture applied to the plurality of edges of the target shape mask corresponds to respective ones of the plurality of edges of the source shape mask. The source direction field is taken from the source image such that the synthesized texture applied to the target direction field corresponds to the source direction field. The pixels in the user interface are painted by the computing device using the synthesized texture.
    Type: Grant
    Filed: May 28, 2015
    Date of Patent: January 3, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Paul J. Asente, Jingwan Lu, Michal Lukáč, Elya Schechtman
  • Publication number: 20160350942
    Abstract: Example-based edge-aware directional texture painting techniques are described. Inputs are received that define a target direction field and a plurality of edges as part of a target shape mask. A texture is synthesized from a source image by the computing device to be applied to the set of pixels of the target mask using a source shape mask and a source direction field. The source shape mask defines a plurality of edges of the source mask such that the synthesized texture applied to the plurality of edges of the target shape mask corresponds to respective ones of the plurality of edges of the source shape mask. The source direction field is taken from the source image such that the synthesized texture applied to the target direction field corresponds to the source direction field. The pixels in the user interface are painted by the computing device using the synthesized texture.
    Type: Application
    Filed: May 28, 2015
    Publication date: December 1, 2016
    Inventors: Paul J. Asente, Jingwan Lu, Michal Lukáč, Elya Schechtman