Patents by Inventor Elya Shechtman

Elya Shechtman has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200151938
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that integrate (or embed) a non-photorealistic rendering (“NPR”) generator with a style-transfer-neural network to generate stylized images that both correspond to a source image and resemble a stroke style. By integrating an NPR generator with a style-transfer-neural network, the disclosed methods, non-transitory computer readable media, and systems can accurately capture a stroke style resembling one or both of stylized edges or stylized shadings. When training such a style-transfer-neural network, the integrated NPR generator can enable the disclosed methods, non-transitory computer readable media, and systems to use real-stroke drawings (instead of conventional paired-ground-truth drawings) for training the network to accurately portray a stroke style.
    Type: Application
    Filed: November 8, 2018
    Publication date: May 14, 2020
    Inventors: Elya Shechtman, Yijun Li, Chen Fang, Aaron Hertzmann
  • Patent number: 10650490
    Abstract: Environmental map generation techniques and systems are described. A digital image is scaled to achieve a target aspect ratio using a content aware scaling technique. A canvas is generated that is dimensionally larger than the scaled digital image and the scaled digital image is inserted within the canvas thereby resulting in an unfilled portion of the canvas. An initially filled canvas is then generated by filling the unfilled portion using a content aware fill technique based on the inserted digital image. A plurality of polar coordinate canvases is formed by transforming original coordinates of the canvas into polar coordinates. The unfilled portions of the polar coordinate canvases are filled using a content-aware fill technique that is initialized based on the initially filled canvas. An environmental map of the digital image is generated by combining a plurality of original coordinate canvas portions formed from the polar coordinate canvases.
    Type: Grant
    Filed: April 9, 2019
    Date of Patent: May 12, 2020
    Assignee: Adobe Inc.
    Inventors: Xue Bai, Elya Shechtman, Sylvain Philippe Paris
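The coordinate transformation at the heart of this entry can be illustrated with a minimal sketch (plain Python; `to_polar` and `from_polar` are hypothetical names, and the patented pipeline applies such a mapping to whole canvases rather than single points):

```python
import math

def to_polar(x, y, cx, cy):
    """Map a Cartesian canvas coordinate to polar coordinates
    (radius, angle) about a chosen center (cx, cy)."""
    r = math.hypot(x - cx, y - cy)
    theta = math.atan2(y - cy, x - cx)  # angle in (-pi, pi]
    return r, theta

def from_polar(r, theta, cx, cy):
    """Inverse mapping: polar coordinates back to Cartesian."""
    return cx + r * math.cos(theta), cy + r * math.sin(theta)
```

Under such a mapping, radial structure around the chosen center becomes roughly horizontal structure, which suggests why a fill computed in one representation can initialize a fill in the other.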
  • Publication number: 20200118254
    Abstract: Certain aspects involve video inpainting via confidence-weighted motion estimation. For instance, a video editor accesses video content having a target region to be modified in one or more video frames. The video editor computes a motion for a boundary of the target region. The video editor interpolates, from the boundary motion, a target motion of a target pixel within the target region. In the interpolation, confidence values assigned to boundary pixels control how the motion of these pixels contributes to the interpolated target motion. A confidence value is computed based on a difference between forward and reverse motion with respect to a particular boundary pixel, a texture in a region that includes the particular boundary pixel, or a combination thereof. The video editor modifies the target region in the video by updating color data of the target pixel to correspond to the target motion interpolated from the boundary motion.
    Type: Application
    Filed: April 9, 2019
    Publication date: April 16, 2020
    Inventors: Geoffrey Oxholm, Oliver Wang, Elya Shechtman, Michal Lukac, Ramiz Sheikh
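The confidence computation and confidence-weighted interpolation described in this abstract can be sketched as follows (an illustrative simplification, not the patented implementation; the texture term is omitted, and both function names are hypothetical):

```python
def flow_confidence(forward, backward):
    """Confidence from forward/backward motion consistency: the
    reverse motion sampled at the displaced location should cancel
    the forward motion for reliable boundary pixels."""
    fx, fy = forward
    bx, by = backward
    err = ((fx + bx) ** 2 + (fy + by) ** 2) ** 0.5
    return 1.0 / (1.0 + err)  # 1.0 when the motions cancel exactly

def interpolate_motion(boundary):
    """Confidence-weighted average of boundary motions; `boundary`
    is a list of ((mx, my), confidence) pairs."""
    total = sum(c for _, c in boundary)
    mx = sum(m[0] * c for m, c in boundary) / total
    my = sum(m[1] * c for m, c in boundary) / total
    return mx, my
```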
  • Patent number: 10621760
    Abstract: Techniques are disclosed for the synthesis of a full set of slotted content, based upon only partial observations of the slotted content. With respect to a font, the slots may comprise particular letters or symbols or glyphs in an alphabet. Based upon partial observations of a subset of glyphs from a font, a full set of the glyphs corresponding to the font may be synthesized and may further be ornamented.
    Type: Grant
    Filed: June 15, 2018
    Date of Patent: April 14, 2020
    Assignee: Adobe Inc.
    Inventors: Matthew David Fisher, Samaneh Azadi, Vladimir Kim, Elya Shechtman, Zhaowen Wang
  • Patent number: 10607065
    Abstract: Generation of parameterized avatars is described. An avatar generation system uses a trained machine-learning model to generate a parameterized avatar, from which digital visual content (e.g., images, videos, augmented and/or virtual reality (AR/VR) content) can be generated. The machine-learning model is trained to identify cartoon features of a particular style—from a library of these cartoon features—that correspond to features of a person depicted in a digital photograph. The parameterized avatar is data (e.g., a feature vector) that indicates the cartoon features identified from the library by the trained machine-learning model for the depicted person. This parameterization enables the avatar to be animated. The parameterization also enables the avatar generation system to generate avatars in non-photorealistic (relatively cartoony) styles such that, despite the style, the avatars preserve identities and expressions of persons depicted in input digital photographs.
    Type: Grant
    Filed: May 3, 2018
    Date of Patent: March 31, 2020
    Assignee: Adobe Inc.
    Inventors: Rebecca Ilene Milman, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Elya Shechtman, Duygu Ceylan Aksit, David P. Simons
  • Publication number: 20200090389
    Abstract: Techniques are disclosed for performing manipulation of facial images using an artificial neural network. A facial rendering and generation network and method learns one or more compact, meaningful manifolds of facial appearance, by disentanglement of a facial image into intrinsic facial properties, and enables facial edits by traversing paths of such manifold(s). The facial rendering and generation network is able to handle a much wider range of manipulations including changes to, for example, viewpoint, lighting, expression, and even higher-level attributes like facial hair and age—aspects that cannot be represented using previous models.
    Type: Application
    Filed: November 7, 2019
    Publication date: March 19, 2020
    Applicant: Adobe Inc.
    Inventors: Sunil Hadap, Elya Shechtman, Zhixin Shu, Kalyan Sunkavalli, Mehmet Yumer
  • Publication number: 20200082591
    Abstract: Certain embodiments involve generating one or both of an appearance guide and a positional guide and using one or more of the guides to synthesize a stylized image or animation. For example, a system obtains data indicating a target image and a style exemplar image. The system generates an appearance guide, a positional guide, or both from the target image and the style exemplar image. The system uses one or more of the guides to transfer a texture or style from the style exemplar image to the target image.
    Type: Application
    Filed: November 12, 2019
    Publication date: March 12, 2020
    Inventors: David Simons, Michal Lukac, Daniel Sykora, Elya Shechtman, Paul Asente, Jingwan Lu, Jakub Fiser, Ondrej Jamriska
  • Patent number: 10586311
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for improved patch validity testing for patch-based synthesis applications using similarity transforms. The improved patch validity tests are used to validate (or invalidate) candidate patches as valid patches falling within a sampling region of a source image. The improved patch validity tests include a hole dilation test for patch validity, a no-dilation test for patch invalidity, and a comprehensive pixel test for patch invalidity. A fringe test for range invalidity can be used to identify pixels with an invalid range and invalidate corresponding candidate patches. The fringe test for range invalidity can be performed as a precursor to any or all of the improved patch validity tests. In this manner, validated candidate patches are used to automatically reconstruct a target image.
    Type: Grant
    Filed: March 14, 2018
    Date of Patent: March 10, 2020
    Assignee: Adobe Inc.
    Inventors: Sohrab Amirghodsi, Kevin Wampler, Elya Shechtman, Aliakbar Darabi
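The hole-dilation test named in this abstract can be sketched in a few lines (an illustrative set-based version with hypothetical names; the patent covers similarity transforms and several additional tests not shown here):

```python
def patch_valid(patch_pixels, hole, dilation=1):
    """Hole-dilation validity test: a candidate source patch is valid
    only if none of its pixels falls inside the hole after the hole
    has been dilated by `dilation` pixels in each direction."""
    dilated = {(x + dx, y + dy)
               for (x, y) in hole
               for dx in range(-dilation, dilation + 1)
               for dy in range(-dilation, dilation + 1)}
    return all(p not in dilated for p in patch_pixels)
```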
  • Patent number: 10573040
    Abstract: Image modification using detected symmetry is described. In example implementations, an image modification module detects multiple local symmetries in an original image by discovering repeated correspondences that are each related by a transformation. The transformation can include a translation, a rotation, a reflection, a scaling, or a combination thereof. Each repeated correspondence includes three patches that are similar to one another and are respectively defined by three pixels of the original image. The image modification module generates a global symmetry of the original image by analyzing an applicability to the multiple local symmetries of multiple candidate homographies contributed by the multiple local symmetries. The image modification module associates individual pixels of the original image with a global symmetry indicator to produce a global symmetry association map.
    Type: Grant
    Filed: November 8, 2016
    Date of Patent: February 25, 2020
    Assignee: Adobe Inc.
    Inventors: Kalyan Krishna Sunkavalli, Nathan Aaron Carr, Michal Lukac, Elya Shechtman
  • Patent number: 10565758
    Abstract: Techniques are disclosed for performing manipulation of facial images using an artificial neural network. A facial rendering and generation network and method learns one or more compact, meaningful manifolds of facial appearance, by disentanglement of a facial image into intrinsic facial properties, and enables facial edits by traversing paths of such manifold(s). The facial rendering and generation network is able to handle a much wider range of manipulations including changes to, for example, viewpoint, lighting, expression, and even higher-level attributes like facial hair and age—aspects that cannot be represented using previous models.
    Type: Grant
    Filed: June 14, 2017
    Date of Patent: February 18, 2020
    Assignee: Adobe Inc.
    Inventors: Sunil Hadap, Elya Shechtman, Zhixin Shu, Kalyan Sunkavalli, Mehmet Yumer
  • Patent number: 10546212
    Abstract: The present disclosure is directed toward systems and methods for image patch matching. In particular, the systems and methods described herein sample image patches to identify those image patches that match a target image patch. The systems and methods described herein probabilistically accept image patch proposals as potential matches based on an oracle. The oracle is computationally inexpensive to evaluate but more approximate than similarity heuristics. The systems and methods use the oracle to quickly guide the search to areas of the search space more likely to have a match. Once areas are identified that likely include a match, the systems and methods use a more accurate similarity function to identify patch matches.
    Type: Grant
    Filed: October 1, 2018
    Date of Patent: January 28, 2020
    Assignee: Adobe Inc.
    Inventors: Nathan Carr, Kalyan Sunkavalli, Michal Lukac, Elya Shechtman
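The two-stage search this abstract describes can be approximated as below (a sketch with hypothetical names; the patent accepts proposals probabilistically, whereas this simplification uses a fixed oracle threshold, and a lower similarity score is treated as a better match):

```python
def match_patch(target, candidates, oracle, similarity, threshold=0.5):
    """Two-stage patch search: a cheap oracle screens proposals, then
    an accurate (more expensive) similarity function ranks only the
    surviving candidates."""
    accepted = [c for c in candidates if oracle(target, c) >= threshold]
    if not accepted:          # oracle rejected everything; fall back
        accepted = candidates
    return min(accepted, key=lambda c: similarity(target, c))
```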
  • Patent number: 10521892
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed at relighting a target image based on a lighting effect from a reference image. In one embodiment, a target image and a reference image are received, the reference image includes a lighting effect desired to be applied to the target image. A lighting transfer is performed using color data and geometrical data associated with the reference image and color data and geometrical data associated with the target image. The lighting transfer causes generation of a relit image that corresponds with the target image having a lighting effect of the reference image. The relit image is provided for display to a user via one or more output devices. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: August 31, 2016
    Date of Patent: December 31, 2019
    Assignee: Adobe Inc.
    Inventors: Kalyan K. Sunkavalli, Sunil Hadap, Elya Shechtman, Zhixin Shu
  • Publication number: 20190385346
    Abstract: Techniques are disclosed for the synthesis of a full set of slotted content, based upon only partial observations of the slotted content. With respect to a font, the slots may comprise particular letters or symbols or glyphs in an alphabet. Based upon partial observations of a subset of glyphs from a font, a full set of the glyphs corresponding to the font may be synthesized and may further be ornamented.
    Type: Application
    Filed: June 15, 2018
    Publication date: December 19, 2019
    Applicant: Adobe Inc.
    Inventors: Matthew David Fisher, Samaneh Azadi, Vladimir Kim, Elya Shechtman, Zhaowen Wang
  • Patent number: 10504267
    Abstract: Certain embodiments involve generating an appearance guide, a segmentation guide, and a positional guide and using one or more of the guides to synthesize a stylized image or animation. For example, a system obtains data indicating a target image and a style exemplar image and generates a segmentation guide for segmenting the target image and the style exemplar image and identifying a feature of the target image and a corresponding feature of the style exemplar image. The system generates a positional guide for determining positions of the target feature and style feature relative to a common grid system. The system generates an appearance guide for modifying intensity levels and contrast values in the target image based on the style exemplar image. The system uses one or more of the guides to transfer a texture of the style feature to the corresponding target feature.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: December 10, 2019
    Assignee: Adobe Inc.
    Inventors: David Simons, Michal Lukac, Daniel Sykora, Elya Shechtman, Paul Asente, Jingwan Lu, Jakub Fiser, Ondrej Jamriska
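The appearance guide's role of matching intensity and contrast can be sketched as a simple moment-matching step (an illustrative 1-D grayscale version with a hypothetical name, not the patented guide computation):

```python
def appearance_guide(target, style_mean, style_std):
    """Remap target grayscale intensities so their mean and contrast
    (standard deviation) match a style exemplar's statistics."""
    n = len(target)
    mean_t = sum(target) / n
    std_t = (sum((v - mean_t) ** 2 for v in target) / n) ** 0.5 or 1.0
    return [style_mean + (v - mean_t) * style_std / std_t
            for v in target]
```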
  • Patent number: 10489676
    Abstract: The present disclosure is directed toward systems and methods for image patch matching. In particular, the systems and methods described herein sample image patches to identify those image patches that match a target image patch. The systems and methods described herein probabilistically accept image patch proposals as potential matches based on an oracle. The oracle is computationally inexpensive to evaluate but more approximate than similarity heuristics. The systems and methods use the oracle to quickly guide the search to areas of the search space more likely to have a match. Once areas are identified that likely include a match, the systems and methods use a more accurate similarity function to identify patch matches.
    Type: Grant
    Filed: November 3, 2016
    Date of Patent: November 26, 2019
    Assignee: Adobe Inc.
    Inventors: Nathan Carr, Kalyan Sunkavalli, Michal Lukac, Elya Shechtman
  • Publication number: 20190340419
    Abstract: Generation of parameterized avatars is described. An avatar generation system uses a trained machine-learning model to generate a parameterized avatar, from which digital visual content (e.g., images, videos, augmented and/or virtual reality (AR/VR) content) can be generated. The machine-learning model is trained to identify cartoon features of a particular style—from a library of these cartoon features—that correspond to features of a person depicted in a digital photograph. The parameterized avatar is data (e.g., a feature vector) that indicates the cartoon features identified from the library by the trained machine-learning model for the depicted person. This parameterization enables the avatar to be animated. The parameterization also enables the avatar generation system to generate avatars in non-photorealistic (relatively cartoony) styles such that, despite the style, the avatars preserve identities and expressions of persons depicted in input digital photographs.
    Type: Application
    Filed: May 3, 2018
    Publication date: November 7, 2019
    Applicant: Adobe Inc.
    Inventors: Rebecca Ilene Milman, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Elya Shechtman, Duygu Ceylan Aksit, David P. Simons
  • Patent number: 10467777
    Abstract: Texture modeling techniques for image data are described. In one or more implementations, texels in image data are discovered by one or more computing devices, each texel representing an element that repeats to form a texture pattern in the image data. Regularity of the texels in the image data is modeled by the one or more computing devices to define translations and at least one other transformation of texels in relation to each other.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: November 5, 2019
    Assignee: Adobe Inc.
    Inventors: Siying Liu, Kalyan Sunkavalli, Nathan A. Carr, Elya Shechtman
  • Publication number: 20190333194
    Abstract: Embodiments described herein are directed to methods and systems for facilitating control of smoothness of transitions between images. In embodiments, a difference of color values of pixels between a foreground image and the background image are identified along a boundary associated with a location at which to paste the foreground image relative to the background image. Thereafter, recursive down sampling of a region of pixels within the boundary by a sampling factor is performed to produce a plurality of down sampled images having color difference indicators associated with each pixel of the down sampled images. Such color difference indicators indicate whether a difference of color value exists for the corresponding pixel. To effectuate a seamless transition, the color difference indicators are normalized in association with each recursively down sampled image.
    Type: Application
    Filed: July 11, 2019
    Publication date: October 31, 2019
    Inventors: Sylvain Paris, Sohrab Amirghodsi, Aliakbar Darabi, Elya Shechtman
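The recursive downsampling step this abstract describes can be sketched on a 1-D row of boundary color differences (an illustrative reduction with a hypothetical name; the patent operates on 2-D pixel regions and also tracks per-pixel difference indicators):

```python
def downsample_pyramid(diff, factor=2):
    """Recursively downsample a 1-D row of boundary color differences
    by averaging `factor`-sized blocks, collecting every level as in
    a coarse-to-fine blending pyramid."""
    levels = [diff]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        levels.append([sum(cur[i:i + factor]) / len(cur[i:i + factor])
                       for i in range(0, len(cur), factor)])
    return levels
```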
  • Patent number: 10453491
    Abstract: Provided are video processing architectures and techniques configured to generate looping video. The video processing architectures and techniques automatically produce a looping video from a fixed-length video clip. Embodiments of the video processing architectures and techniques determine a lower-resolution version of the fixed-length video clip, and detect a presence of edges within image frames in the lower-resolution version. A pair of image frames having similar edges is identified as a pair of candidates for a transition point (i.e., a start frame and an end frame) at which the looping video can repeat. Using start and end frames having similar edges mitigates teleporting of objects displayed in the looping video. In some cases, teleporting during repeating is eliminated.
    Type: Grant
    Filed: February 12, 2019
    Date of Patent: October 22, 2019
    Assignee: Adobe Inc.
    Inventors: Geoffrey Oxholm, Elya Shechtman, Oliver Wang
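The loop-point selection described above reduces to comparing edge maps between candidate start and end frames; a minimal sketch (hypothetical name, brute-force search, binary edge maps assumed already computed at low resolution):

```python
def best_loop(edge_maps, min_gap=1):
    """Pick the (start, end) frame pair whose binary edge maps differ
    least; `edge_maps` holds one flat tuple of 0/1 edge flags per
    frame. A minimum gap keeps the loop from being trivially short."""
    best, best_cost = None, float("inf")
    for s in range(len(edge_maps)):
        for e in range(s + min_gap, len(edge_maps)):
            cost = sum(a != b for a, b in zip(edge_maps[s], edge_maps[e]))
            if cost < best_cost:
                best, best_cost = (s, e), cost
    return best
```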
  • Publication number: 20190295227
    Abstract: Techniques for using deep learning to facilitate patch-based image inpainting are described. In an example, a computer system hosts a neural network trained to generate, from an image, code vectors including features learned by the neural network and descriptive of patches. The image is received and contains a region of interest (e.g., a hole missing content). The computer system inputs it to the network and, in response, receives the code vectors. Each code vector is associated with a pixel in the image. Rather than comparing RGB values between patches, the computer system compares the code vector of a pixel inside the region to code vectors of pixels outside the region to find the best match based on a feature similarity measure (e.g., a cosine similarity). The pixel value of the pixel inside the region is set based on the pixel value of the matched pixel outside this region.
    Type: Application
    Filed: March 26, 2018
    Publication date: September 26, 2019
    Inventors: Oliver Wang, Michal Lukac, Elya Shechtman, Mahyar Najibikohnehshahri
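The matching step in this last abstract, comparing learned code vectors by cosine similarity instead of RGB values, can be sketched as follows (an illustrative version with hypothetical names; the code vectors would come from the trained network described in the abstract):

```python
def cosine(u, v):
    """Cosine similarity between two feature (code) vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def fill_pixel(hole_code, outside):
    """Copy the color of the outside pixel whose learned code vector
    is most similar to the hole pixel's code; `outside` maps a pixel
    id to a (code_vector, color) pair."""
    best = max(outside, key=lambda p: cosine(hole_code, outside[p][0]))
    return outside[best][1]
```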