Patents by Inventor Daniel Sýkora

Daniel Sýkora has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230086807
    Abstract: Embodiments are disclosed for segmented image generation. The method may include receiving an input image and a segmentation mask, projecting, using a differentiable machine learning pipeline, a plurality of segments of the input image into a plurality of latent spaces associated with a plurality of generators to obtain a plurality of projected segments, and compositing the plurality of projected segments into an output image.
    Type: Application
    Filed: April 19, 2022
    Publication date: March 23, 2023
    Inventors: Michal LUKÁC, Elya SHECHTMAN, Daniel SÝKORA, David FUTSCHIK
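The final compositing step described in the abstract above can be sketched as a mask-weighted blend of the per-segment reconstructions. This is only an illustrative stand-in: the patented pipeline projects segments through learned generators in latent space, which are represented here by precomputed "projected" segment images; all names are assumptions.

```python
def composite(projected_segments, masks):
    """Blend per-segment images into one output using soft masks.

    projected_segments: list of HxW grayscale images (lists of lists of floats)
    masks: matching list of HxW soft masks in [0, 1] that sum to ~1 per pixel
    """
    h, w = len(projected_segments[0]), len(projected_segments[0][0])
    out = [[0.0] * w for _ in range(h)]
    for seg, mask in zip(projected_segments, masks):
        for y in range(h):
            for x in range(w):
                out[y][x] += seg[y][x] * mask[y][x]
    return out

# Two 2x2 "projected segments": a flat bright region and a flat dark region,
# selected by complementary hard masks from a segmentation.
bright = [[1.0, 1.0], [1.0, 1.0]]
dark = [[0.0, 0.0], [0.0, 0.0]]
left = [[1.0, 0.0], [1.0, 0.0]]
right = [[0.0, 1.0], [0.0, 1.0]]
print(composite([bright, dark], [left, right]))  # [[1.0, 0.0], [1.0, 0.0]]
```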
  • Publication number: 20230070666
    Abstract: Embodiments are disclosed for translating an image from a source visual domain to a target visual domain. In particular, in one or more embodiments, the disclosed systems and methods comprise a training process that includes receiving a training input including a pair of keyframes and an unpaired image. The pair of keyframes represent a visual translation from a first version of an image in a source visual domain to a second version of the image in a target visual domain. The one or more embodiments further include sending the pair of keyframes and the unpaired image to an image translation network to generate a first training image and a second training image. The one or more embodiments further include training the image translation network to translate images from the source visual domain to the target visual domain based on a calculated loss using the first and second training images.
    Type: Application
    Filed: September 3, 2021
    Publication date: March 9, 2023
    Inventors: Michal LUKÁC, Daniel SÝKORA, David FUTSCHIK, Zhaowen WANG, Elya SHECHTMAN
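The shape of the training objective above can be sketched as a supervised term on the keyframe pair plus an unsupervised term on the unpaired image. The `translate` callable, the idempotence-style consistency term, and the 0.5 weight are hypothetical stand-ins for the patented network and its calculated loss, not the actual method.

```python
def l1_loss(a, b):
    """Mean absolute difference between two flat 'images'."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def training_loss(translate, keyframe_src, keyframe_dst, unpaired):
    # Supervised term: translating the source keyframe should reproduce
    # the artist-made target keyframe.
    first_training_image = translate(keyframe_src)
    supervised = l1_loss(first_training_image, keyframe_dst)
    # Unsupervised term on the unpaired image: translating twice should be
    # stable (a simple proxy for the consistency/adversarial losses
    # typically used in practice).
    second_training_image = translate(unpaired)
    consistency = l1_loss(translate(second_training_image), second_training_image)
    return supervised + 0.5 * consistency

# An identity "network" on flat 1-D images: both terms vanish.
identity = lambda img: list(img)
print(training_loss(identity, [0.2, 0.4], [0.2, 0.4], [0.9, 0.1]))  # 0.0
```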
  • Patent number: 10789754
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use style-aware puppets patterned after a source-character-animation sequence to generate a target-character-animation sequence. In particular, the disclosed systems can generate style-aware puppets based on an animation character drawn or otherwise created (e.g., by an artist) for the source-character-animation sequence. The style-aware puppets can include, for instance, a character-deformational model, a skeletal-difference map, and a visual-texture representation of an animation character from a source-character-animation sequence. By using style-aware puppets, the disclosed systems can both preserve and transfer a detailed visual appearance and stylized motion of an animation character from a source-character-animation sequence to a target-character-animation sequence.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: September 29, 2020
    Assignee: ADOBE INC.
    Inventors: Vladimir Kim, Wilmot Li, Marek Dvorožňák, Daniel Sýkora
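One ingredient of a style-aware puppet named above, the skeletal-difference map, can be illustrated as per-joint offsets between a reference skeleton and the artist's drawn pose, which can then be transferred to a new pose. The joint names and the plain 2-D offset representation are illustrative assumptions, not the patented character-deformational model.

```python
def skeletal_difference_map(reference_pose, drawn_pose):
    """Map joint name -> (dx, dy) offset the artist applied to each joint."""
    return {
        joint: (drawn_pose[joint][0] - x, drawn_pose[joint][1] - y)
        for joint, (x, y) in reference_pose.items()
    }

def apply_difference(target_pose, diff_map):
    """Transfer the stylized per-joint offsets onto a new target pose."""
    return {
        joint: (x + diff_map[joint][0], y + diff_map[joint][1])
        for joint, (x, y) in target_pose.items()
    }

ref = {"head": (0.0, 2.0), "hand": (1.0, 1.0)}
drawn = {"head": (0.0, 2.5), "hand": (1.5, 1.0)}  # exaggerated, stylized pose
diff = skeletal_difference_map(ref, drawn)
print(diff)  # {'head': (0.0, 0.5), 'hand': (0.5, 0.0)}

# Reapply the style to a different target pose:
target = {"head": (5.0, 2.0), "hand": (6.0, 1.0)}
print(apply_difference(target, diff))  # {'head': (5.0, 2.5), 'hand': (6.5, 1.0)}
```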
  • Patent number: 10783691
    Abstract: Certain embodiments involve generating one or more of an appearance guide and a positional guide and using one or more of the guides to synthesize a stylized image or animation. For example, a system obtains data indicating a target image and a style exemplar image. The system generates an appearance guide, a positional guide, or both from the target image and the style exemplar image. The system uses one or more of the guides to transfer a texture or style from the style exemplar image to the target image.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: September 22, 2020
    Assignees: ADOBE INC., CZECH TECHNICAL UNIVERSITY IN PRAGUE
    Inventors: David Simons, Michal Lukac, Daniel Sykora, Elya Shechtman, Paul Asente, Jingwan Lu, Jakub Fiser, Ondrej Jamriska
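The appearance guide described above can be sketched as remapping the target's intensity levels and contrast to match the style exemplar's global statistics. Real guides in guided synthesis are richer (often per-region and blurred); this global mean/std match is only an illustrative simplification.

```python
def mean_std(values):
    """Population mean and standard deviation of a flat list of luminances."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return m, var ** 0.5

def appearance_guide(target_lum, style_lum):
    """Shift/scale target luminances so their mean and contrast (std)
    match those of the style exemplar."""
    tm, ts = mean_std(target_lum)
    sm, ss = mean_std(style_lum)
    scale = ss / ts if ts > 0 else 1.0
    return [sm + (v - tm) * scale for v in target_lum]

# A high-contrast target is remapped to the exemplar's soft value range.
guide = appearance_guide([0.0, 0.5, 1.0], [0.4, 0.5, 0.6])
print([round(g, 3) for g in guide])  # [0.4, 0.5, 0.6]
```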
  • Publication number: 20200082591
    Abstract: Certain embodiments involve generating one or more of an appearance guide and a positional guide and using one or more of the guides to synthesize a stylized image or animation. For example, a system obtains data indicating a target image and a style exemplar image. The system generates an appearance guide, a positional guide, or both from the target image and the style exemplar image. The system uses one or more of the guides to transfer a texture or style from the style exemplar image to the target image.
    Type: Application
    Filed: November 12, 2019
    Publication date: March 12, 2020
    Inventors: David Simons, Michal Lukac, Daniel Sykora, Elya Shechtman, Paul Asente, Jingwan Lu, Jakub Fiser, Ondrej Jamriska
  • Publication number: 20200035010
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use style-aware puppets patterned after a source-character-animation sequence to generate a target-character-animation sequence. In particular, the disclosed systems can generate style-aware puppets based on an animation character drawn or otherwise created (e.g., by an artist) for the source-character-animation sequence. The style-aware puppets can include, for instance, a character-deformational model, a skeletal-difference map, and a visual-texture representation of an animation character from a source-character-animation sequence. By using style-aware puppets, the disclosed systems can both preserve and transfer a detailed visual appearance and stylized motion of an animation character from a source-character-animation sequence to a target-character-animation sequence.
    Type: Application
    Filed: July 27, 2018
    Publication date: January 30, 2020
    Inventors: Vladimir Kim, Wilmot Li, Marek Dvorožňák, Daniel Sýkora
  • Patent number: 10504267
    Abstract: Certain embodiments involve generating an appearance guide, a segmentation guide, and a positional guide and using one or more of the guides to synthesize a stylized image or animation. For example, a system obtains data indicating a target image and a style exemplar image and generates a segmentation guide for segmenting the target image and the style exemplar image and identifying a feature of the target image and a corresponding feature of the style exemplar image. The system generates a positional guide for determining positions of the target feature and style feature relative to a common grid system. The system generates an appearance guide for modifying intensity levels and contrast values in the target image based on the style exemplar image. The system uses one or more of the guides to transfer a texture of the style feature to the corresponding target feature.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: December 10, 2019
    Assignee: Adobe Inc.
    Inventors: David Simons, Michal Lukac, Daniel Sykora, Elya Shechtman, Paul Asente, Jingwan Lu, Jakub Fiser, Ondrej Jamriska
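The positional guide named in the abstract above can be sketched as encoding each pixel by its normalized position on a common grid, so corresponding features in the target and exemplar land at similar guide values. This construction is a common one in guided synthesis; the details here are illustrative assumptions, not the patented method.

```python
def positional_guide(height, width):
    """Return an HxW grid of (u, v) coordinates normalized to [0, 1],
    serving as a common coordinate system for target and exemplar."""
    return [
        [(x / (width - 1), y / (height - 1)) for x in range(width)]
        for y in range(height)
    ]

# Corners of any image map to the same guide values, regardless of size,
# which is what lets features be matched by relative position.
guide = positional_guide(2, 3)
print(guide[0][0], guide[1][2])  # (0.0, 0.0) (1.0, 1.0)
```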
  • Patent number: 10176624
    Abstract: Techniques for illumination-guided example-based stylization of 3D renderings are described. In implementations, a source image and a target image are obtained, where each is a multi-channel image having at least a style channel and multiple light path expression (LPE) channels having light propagation information. Then, the style channel of the target image is synthesized to mimic a stylization of individual illumination effects from the style channel of the source image. As part of the synthesizing, the light propagation information is applied as guidance for synthesis of the style channel of the target image. Based on the guidance, the stylization of individual illumination effects from the style channel of the source image is transferred to the style channel of the target image. Based on the transfer, the style channel of the target image is then generated for display of the target image via a display device.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: January 8, 2019
    Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
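The guidance idea in the abstract above can be sketched as follows: when matching a target position to the source, the distance is measured on the light-path-expression (LPE) guide channels, so regions with similar illumination are matched together and inherit the corresponding style. The 1x1 "patches", single guide channel, and weight are illustrative simplifications of guided patch-based synthesis, not the patented method.

```python
def guided_match(target_guides, source_guides, source_style, weight=2.0):
    """For each target sample, pick the source index with the closest
    LPE guide vector and return its style value."""
    out = []
    for tg in target_guides:
        best_i = min(
            range(len(source_guides)),
            key=lambda i: weight * sum((a - b) ** 2
                                       for a, b in zip(source_guides[i], tg)),
        )
        out.append(source_style[best_i])
    return out

# Source: one "lit" sample (guide 1.0) painted bright and one "shadow"
# sample (guide 0.0) painted dark. Target guides pick up matching style.
src_guides = [(1.0,), (0.0,)]
src_style = [0.9, 0.1]
print(guided_match([(0.9,), (0.1,)], src_guides, src_style))  # [0.9, 0.1]
```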
  • Publication number: 20180350030
    Abstract: Certain embodiments involve generating an appearance guide, a segmentation guide, and a positional guide and using one or more of the guides to synthesize a stylized image or animation. For example, a system obtains data indicating a target and a style exemplar image and generates a segmentation guide for segmenting the target image and the style exemplar image and identifying a feature of the target image and a corresponding feature of the style exemplar image. The system generates a positional guide for determining positions of the target feature and style feature relative to a common grid system. The system generates an appearance guide for modifying intensity levels and contrast values in the target image based on the style exemplar image. The system uses one or more of the guides to transfer a texture of the style feature to the corresponding target feature.
    Type: Application
    Filed: October 16, 2017
    Publication date: December 6, 2018
    Inventors: David Simons, Michal Lukac, Daniel Sykora, Elya Shechtman, Paul Asente, Jingwan Lu, Jakub Fiser, Ondrej Jamriska
  • Publication number: 20180122131
    Abstract: Techniques for illumination-guided example-based stylization of 3D renderings are described. In implementations, a source image and a target image are obtained, where each is a multi-channel image having at least a style channel and multiple light path expression (LPE) channels having light propagation information. Then, the style channel of the target image is synthesized to mimic a stylization of individual illumination effects from the style channel of the source image. As part of the synthesizing, the light propagation information is applied as guidance for synthesis of the style channel of the target image. Based on the guidance, the stylization of individual illumination effects from the style channel of the source image is transferred to the style channel of the target image. Based on the transfer, the style channel of the target image is then generated for display of the target image via a display device.
    Type: Application
    Filed: December 22, 2017
    Publication date: May 3, 2018
    Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
  • Patent number: 9905054
    Abstract: Techniques for controlling patch-usage in image synthesis are described. In implementations, a curve is fitted to a set of sorted matching errors that correspond to potential source-to-target patch assignments between a source image and a target image. Then, an error budget is determined using the curve. In an example, the error budget is usable to identify feasible patch assignments from the potential source-to-target patch assignments. Using the error budget along with uniform patch-usage enforcement, source patches from the source image are assigned to target patches in the target image. Then, at least one of the assigned source patches is assigned to an additional target patch based on the error budget. Subsequently, an image is synthesized based on the source patches assigned to the target patches.
    Type: Grant
    Filed: June 9, 2016
    Date of Patent: February 27, 2018
    Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
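The error-budget step described above can be sketched by sorting the matching errors and locating the bend where errors start to explode. The patent fits a curve to the sorted errors; here a simple "knee" heuristic (the point farthest from the chord between the endpoints) stands in for that fit, and that heuristic choice is an assumption.

```python
def error_budget(errors):
    """Sort matching errors and return the error value at the knee of the
    curve; assignments above this budget are treated as infeasible."""
    e = sorted(errors)
    n = len(e)
    x0, y0, x1, y1 = 0.0, e[0], float(n - 1), e[-1]

    def dist(i):
        # Unnormalized perpendicular distance of point (i, e[i]) from the
        # chord connecting the first and last sorted errors.
        return abs((y1 - y0) * i - (x1 - x0) * e[i] + x1 * y0 - y1 * x0)

    knee = max(range(n), key=dist)
    return e[knee]

# Mostly-small errors with a sharp tail: the budget lands near the bend,
# so the few very bad patch assignments are excluded as infeasible.
errs = [0.1, 0.12, 0.11, 0.13, 0.12, 5.0, 9.0]
budget = error_budget(errs)
print(budget)                            # 0.13
print([e <= budget for e in sorted(errs)])
```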
  • Patent number: 9881413
    Abstract: Techniques for illumination-guided example-based stylization of 3D renderings are described. In implementations, a source image and a target image are obtained, where each is a multi-channel image having at least a style channel and multiple light path expression (LPE) channels having light propagation information. Then, the style channel of the target image is synthesized to mimic a stylization of individual illumination effects from the style channel of the source image. As part of the synthesizing, the light propagation information is applied as guidance for synthesis of the style channel of the target image. Based on the guidance, the stylization of individual illumination effects from the style channel of the source image is transferred to the style channel of the target image. Based on the transfer, the style channel of the target image is then generated for display of the target image via a display device.
    Type: Grant
    Filed: June 9, 2016
    Date of Patent: January 30, 2018
    Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
  • Patent number: 9870638
    Abstract: Appearance transfer techniques are described in the following. In one example, a search-and-vote process is configured to select patches from the image exemplar and then search for a location in the target image that is a best fit for the patches. As part of this selection, a patch usage counter may also be employed to ensure that the usage of the patches from the image exemplar does not vary by more than one from one patch to another. In another example, transfer of the appearance of boundary and interior regions from the image exemplar to a target image is preserved.
    Type: Grant
    Filed: February 24, 2016
    Date of Patent: January 16, 2018
    Inventors: Ondrej Jamriška, Jakub Fiser, Paul J. Asente, Jingwan Lu, Elya Shechtman, Daniel Sýkora
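The patch-usage counter described in the abstract above can be sketched as a constrained assignment: each target patch takes its best-matching source patch, but only from the sources whose usage count is currently lowest, so counts never differ by more than one. The greedy order and cost function are illustrative assumptions, not the patented search-and-vote process.

```python
def assign_uniform(targets, sources, cost):
    """Assign each target a source index, enforcing near-uniform usage:
    candidates are restricted to the currently least-used sources."""
    counts = [0] * len(sources)
    assignment = []
    for t in targets:
        low = min(counts)
        candidates = [i for i, c in enumerate(counts) if c == low]
        best = min(candidates, key=lambda i: cost(sources[i], t))
        counts[best] += 1
        assignment.append(best)
    return assignment, counts

# Four targets, two sources: even though source 0 matches everything best,
# usage stays balanced at 2/2.
targets = [0.1, 0.2, 0.15, 0.12]
sources = [0.1, 0.9]
assignment, counts = assign_uniform(targets, sources,
                                    cost=lambda s, t: abs(s - t))
print(assignment, counts)  # [0, 1, 0, 1] [2, 2]
```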
  • Patent number: 9852523
    Abstract: Appearance transfer techniques that maintain temporal coherence between frames are described in the following. In one example, a previous frame of a target video, one that occurs in the sequence of the target video before the particular frame being synthesized, is warped. Color of the particular frame is transferred from the appearance of a corresponding frame of a video exemplar. In a further example, emitter portions are identified and addressed to preserve temporal coherence; this reduces the influence of the emitter portion of the target region on the selection of patches.
    Type: Grant
    Filed: February 24, 2016
    Date of Patent: December 26, 2017
    Inventors: Ondrej Jamriška, Jakub Fiser, Paul J. Asente, Jingwan Lu, Elya Shechtman, Daniel Sýkora
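The temporal-coherence step above can be sketched as warping the previously synthesized frame by the target's motion and blending it with the color transferred from the video exemplar. A 1-D integer-shift warp and a fixed blend weight stand in for real optical-flow warping; these are illustrative assumptions.

```python
def warp(frame, shift):
    """Shift a 1-D frame by an integer offset, clamping at the borders
    (a toy stand-in for optical-flow warping)."""
    n = len(frame)
    return [frame[min(max(i - shift, 0), n - 1)] for i in range(n)]

def coherent_frame(prev_frame, shift, exemplar_color, blend=0.5):
    """Blend the motion-warped previous frame with the exemplar-transferred
    color so the new frame stays temporally coherent."""
    warped = warp(prev_frame, shift)
    return [blend * w + (1 - blend) * e
            for w, e in zip(warped, exemplar_color)]

prev = [0.0, 1.0, 0.0, 0.0]      # a bright blob at index 1
exemplar = [0.0, 0.0, 1.0, 0.0]  # the exemplar places the blob at index 2
# Warping prev by the motion (+1) agrees with the exemplar, so the blend
# keeps the blob crisp instead of ghosting it.
print(coherent_frame(prev, 1, exemplar))  # [0.0, 0.0, 1.0, 0.0]
```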
  • Publication number: 20170358143
    Abstract: Techniques for controlling patch-usage in image synthesis are described. In implementations, a curve is fitted to a set of sorted matching errors that correspond to potential source-to-target patch assignments between a source image and a target image. Then, an error budget is determined using the curve. In an example, the error budget is usable to identify feasible patch assignments from the potential source-to-target patch assignments. Using the error budget along with uniform patch-usage enforcement, source patches from the source image are assigned to target patches in the target image. Then, at least one of the assigned source patches is assigned to an additional target patch based on the error budget. Subsequently, an image is synthesized based on the source patches assigned to the target patches.
    Type: Application
    Filed: June 9, 2016
    Publication date: December 14, 2017
    Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
  • Publication number: 20170358128
    Abstract: Techniques for illumination-guided example-based stylization of 3D renderings are described. In implementations, a source image and a target image are obtained, where each is a multi-channel image having at least a style channel and multiple light path expression (LPE) channels having light propagation information. Then, the style channel of the target image is synthesized to mimic a stylization of individual illumination effects from the style channel of the source image. As part of the synthesizing, the light propagation information is applied as guidance for synthesis of the style channel of the target image. Based on the guidance, the stylization of individual illumination effects from the style channel of the source image is transferred to the style channel of the target image. Based on the transfer, the style channel of the target image is then generated for display of the target image via a display device.
    Type: Application
    Filed: June 9, 2016
    Publication date: December 14, 2017
    Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
  • Publication number: 20170243376
    Abstract: Appearance transfer techniques that maintain temporal coherence between frames are described in the following. In one example, a previous frame of a target video, one that occurs in the sequence of the target video before the particular frame being synthesized, is warped. Color of the particular frame is transferred from the appearance of a corresponding frame of a video exemplar. In a further example, emitter portions are identified and addressed to preserve temporal coherence; this reduces the influence of the emitter portion of the target region on the selection of patches.
    Type: Application
    Filed: February 24, 2016
    Publication date: August 24, 2017
    Inventors: Ondrej Jamriska, Jakub Fiser, Paul J. Asente, Jingwan Lu, Elya Shechtman, Daniel Sýkora
  • Publication number: 20170243388
    Abstract: Appearance transfer techniques are described in the following. In one example, a search-and-vote process is configured to select patches from the image exemplar and then search for a location in the target image that is a best fit for the patches. As part of this selection, a patch usage counter may also be employed to ensure that the usage of the patches from the image exemplar does not vary by more than one from one patch to another. In another example, transfer of the appearance of boundary and interior regions from the image exemplar to a target image is preserved.
    Type: Application
    Filed: February 24, 2016
    Publication date: August 24, 2017
    Inventors: Ondrej Jamriska, Jakub Fiser, Paul J. Asente, Jingwan Lu, Elya Shechtman, Daniel Sýkora
  • Patent number: 9123145
    Abstract: Techniques are presented for controlling the amount of temporal noise in certain animation sequences. Sketchy animation sequences are received as input in digital form and used to create an altered version of the same animation with temporal coherence enforced down to the stroke level, reducing the perceived noise. The amount of reduction is variable and can be controlled via a single parameter to achieve a desired artistic effect.
    Type: Grant
    Filed: June 15, 2012
    Date of Patent: September 1, 2015
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Gioacchino Noris, Daniel Sykora, Stelian Coros, Alexander Hornung, Brian Whited, Maryann Simmons, Markus Gross, Robert Sumner
  • Patent number: 9082005
    Abstract: A method is provided for sketch segmentation via smart scribbles, the results of which are especially suitable for interactive real-time graphics editing applications. A vector-based drawing may be segmented into labels based on input scribbles provided by a user. By organizing the labeling as an energy minimization problem, an approximate solution can be found using a sequence of binary graph cuts for an equivalent graph, providing an optimized implementation in polynomial time suitable for real-time drawing applications. The energy function may include time, proximity, direction, and curvature between strokes as smoothness terms, and proximity, direction, and oriented curvature between strokes and scribbles as data terms. Additionally, the energy function may be modified to provide for user control over locality, allowing the selection of appropriately sized labeling regions by scribble input speed or scribble input pressure.
    Type: Grant
    Filed: March 19, 2012
    Date of Patent: July 14, 2015
    Assignee: Disney Enterprises, Inc.
    Inventors: Gioacchino Noris, Daniel Sykora, Ariel Shamir, Stelian Coros, Alexander Hornung, Robert Sumner, Maryann Simmons, Brian Whited, Markus Gross
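The data term described in the abstract above can be sketched by scoring each stroke against each scribble by proximity and taking the cheapest label. The full method also uses direction and curvature terms and minimizes the labeling jointly with smoothness terms via binary graph cuts; this nearest-scribble shortcut only illustrates the data term, and all names here are illustrative.

```python
def label_strokes(stroke_centers, scribbles):
    """Assign each stroke the label of its cheapest scribble by a
    proximity-only data term.

    stroke_centers: list of (x, y) stroke centroids
    scribbles: dict mapping label -> scribble center (x, y)
    """
    def proximity(p, q):
        # Squared Euclidean distance as the data-term cost.
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    return [
        min(scribbles, key=lambda lab: proximity(c, scribbles[lab]))
        for c in stroke_centers
    ]

# Two scribbles label a drawing's regions; strokes pick the closer label.
scribbles = {"hat": (0.0, 5.0), "face": (0.0, 0.0)}
strokes = [(0.2, 4.5), (0.1, 0.3), (0.0, 1.0)]
print(label_strokes(strokes, scribbles))  # ['hat', 'face', 'face']
```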