Patents by Inventor Elya Shechtman

Elya Shechtman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200372710
Abstract: Techniques are disclosed for 3D object reconstruction using photometric mesh representations. A decoder is pretrained to transform points sampled from 2D patches of representative objects into 3D polygonal meshes. An image frame of the object is fed into an encoder to get an initial latent code vector. For each frame and camera pair from the sequence, a polygonal mesh is rendered at the given viewpoints. The mesh is optimized by creating a virtual viewpoint that is rasterized to obtain a depth map. The 3D mesh projections are aligned by projecting the coordinates corresponding to the polygonal face vertices of the rasterized mesh to both selected viewpoints. The photometric error is determined from RGB pixel intensities sampled from both frames. Gradients from the photometric error are backpropagated into the vertices of the assigned polygonal indices by relating the barycentric coordinates of each image to update the latent code vector.
    Type: Application
    Filed: August 5, 2020
    Publication date: November 26, 2020
Applicant: Adobe Inc.
    Inventors: Oliver Wang, Vladimir Kim, Matthew Fisher, Elya Shechtman, Chen-Hsuan Lin, Bryan Russell
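The photometric-error step in this abstract lends itself to a short illustration. Below is a minimal sketch, assuming the mesh vertices have already been projected into two frames as normalized 2D coordinates; the function names and tensor shapes are hypothetical, not taken from the patent.

```python
import torch
import torch.nn.functional as F

def sample_colors(frame, uv):
    """Bilinearly sample RGB at normalized coords uv in [-1, 1].

    frame: (3, H, W) image tensor; uv: (N, 2) projected vertex coordinates.
    """
    grid = uv.view(1, -1, 1, 2)                 # (1, N, 1, 2) for grid_sample
    colors = F.grid_sample(frame.unsqueeze(0), grid, align_corners=True)
    return colors.view(3, -1).t()               # (N, 3) sampled intensities

def photometric_loss(frame_a, frame_b, uv_a, uv_b):
    """Compare RGB intensities sampled at corresponding projections."""
    return (sample_colors(frame_a, uv_a) - sample_colors(frame_b, uv_b)).abs().mean()

# Gradients flow back through uv_a/uv_b to the mesh vertices (and the latent
# code that produced them) whenever uv_* are differentiable functions of the mesh.
frame_a, frame_b = torch.rand(3, 64, 64), torch.rand(3, 64, 64)
uv_a = torch.rand(100, 2, requires_grad=True) * 2 - 1
uv_b = torch.rand(100, 2, requires_grad=True) * 2 - 1
photometric_loss(frame_a, frame_b, uv_a, uv_b).backward()
```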
  • Publication number: 20200342634
    Abstract: Techniques are disclosed for neural network based interpolation of image textures. A methodology implementing the techniques according to an embodiment includes training a global encoder network to generate global latent vectors based on training texture images, and training a local encoder network to generate local latent tensors based on the training texture images. The method further includes interpolating between the global latent vectors associated with each set of training images, and interpolating between the local latent tensors associated with each set of training images. The method further includes training a decoder network to generate reconstructions of the training texture images and to generate an interpolated texture based on the interpolated global latent vectors and the interpolated local latent tensors. The training of the encoder and decoder networks is based on a minimization of a loss function of the reconstructions and a minimization of a loss function of the interpolated texture.
    Type: Application
    Filed: April 24, 2019
    Publication date: October 29, 2020
    Applicant: Adobe Inc.
    Inventors: Connelly Barnes, Sohrab Amirghodsi, Michal Lukac, Elya Shechtman, Ning Yu
  • Patent number: 10818043
    Abstract: An example method for neural network based interpolation of image textures includes training a global encoder network to generate global latent vectors based on training texture images, and training a local encoder network to generate local latent tensors based on the training texture images. The example method further includes interpolating between the global latent vectors associated with each set of training images, and interpolating between the local latent tensors associated with each set of training images. The example method further includes training a decoder network to generate reconstructions of the training texture images and to generate an interpolated texture based on the interpolated global latent vectors and the interpolated local latent tensors. The training of the encoder and decoder networks is based on a minimization of a loss function of the reconstructions and a minimization of a loss function of the interpolated texture.
    Type: Grant
    Filed: April 24, 2019
    Date of Patent: October 27, 2020
    Assignee: Adobe Inc.
    Inventors: Connelly Barnes, Sohrab Amirghodsi, Michal Lukac, Elya Shechtman, Ning Yu
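The interpolation step shared by this application/grant pair reduces to blending two kinds of latents, a global vector and a local spatial tensor, before decoding. A minimal sketch follows, assuming the encoders have already produced both latents per texture; all shapes are illustrative.

```python
import torch

def interpolate_latents(zg_a, zg_b, zl_a, zl_b, alpha):
    """Blend global vectors and local tensors; the decoder consumes both."""
    z_global = torch.lerp(zg_a, zg_b, alpha)   # (D,) global latent vector
    z_local = torch.lerp(zl_a, zl_b, alpha)    # (C, H, W) local latent tensor
    return z_global, z_local

zg_a, zg_b = torch.randn(8), torch.randn(8)
zl_a, zl_b = torch.randn(4, 16, 16), torch.randn(4, 16, 16)
# alpha=0.5 yields the halfway texture once the trained decoder is applied.
z_global, z_local = interpolate_latents(zg_a, zg_b, zl_a, zl_b, alpha=0.5)
```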
  • Publication number: 20200302251
    Abstract: The present disclosure relates to an image composite system that employs a generative adversarial network to generate realistic composite images. For example, in one or more embodiments, the image composite system trains a geometric prediction neural network using an adversarial discrimination neural network to learn warp parameters that provide correct geometric alignment of foreground objects with respect to a background image. Once trained, the determined warp parameters provide realistic geometric corrections to foreground objects such that the warped foreground objects appear to blend into background images naturally when composited together.
    Type: Application
    Filed: June 9, 2020
    Publication date: September 24, 2020
    Inventors: Elya Shechtman, Oliver Wang, Mehmet Yumer, Chen-Hsuan Lin
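One way to picture the learned warp parameters is as an affine transform applied to the foreground before compositing. The sketch below assumes an affine parameterization with hypothetical tensor shapes; the patent's geometric prediction network may use a different warp family.

```python
import torch
import torch.nn.functional as F

def composite(foreground, fg_mask, background, theta):
    """Warp foreground and its mask by affine params theta, then alpha-composite."""
    grid = F.affine_grid(theta, foreground.shape, align_corners=False)
    fg = F.grid_sample(foreground, grid, align_corners=False)
    mask = F.grid_sample(fg_mask, grid, align_corners=False)
    return mask * fg + (1 - mask) * background

theta = torch.tensor([[[1.0, 0.0, 0.10],       # in training, theta would be
                       [0.0, 1.0, -0.05]]])    # predicted by the geometric network
fg = torch.rand(1, 3, 64, 64)
mask = torch.ones(1, 1, 64, 64)
bg = torch.rand(1, 3, 64, 64)
img = composite(fg, mask, bg, theta)
# An adversarial discriminator scores `img`; its gradient updates the network
# that predicts theta so composites become geometrically plausible.
```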
  • Patent number: 10783691
Abstract: Certain embodiments involve generating one or more of an appearance guide and a positional guide, and using one or more of the guides to synthesize a stylized image or animation. For example, a system obtains data indicating a target image and a style exemplar image. The system generates an appearance guide, a positional guide, or both from the target image and the style exemplar image. The system uses one or more of the guides to transfer a texture or style from the style exemplar image to the target image.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: September 22, 2020
Assignees: Adobe Inc., Czech Technical University in Prague
    Inventors: David Simons, Michal Lukac, Daniel Sykora, Elya Shechtman, Paul Asente, Jingwan Lu, Jakub Fiser, Ondrej Jamriska
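The guides named in this abstract can be approximated cheaply. The sketch below assumes an appearance guide is a blurred luminance channel and a positional guide is a normalized coordinate map; this is one plausible reading for illustration, not the patented construction.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def appearance_guide(img):
    """Blurred grayscale: matches coarse tonal appearance across images."""
    gray = img.mean(axis=2)
    return gaussian_filter(gray, sigma=4.0)

def positional_guide(h, w):
    """Per-pixel normalized coordinates: encourages spatially coherent transfer."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.stack([xs / (w - 1), ys / (h - 1)], axis=2)

target = np.random.rand(128, 128, 3)
guides = (appearance_guide(target), positional_guide(*target.shape[:2]))
# A patch-based synthesizer then matches patches between the exemplar's and
# target's guide channels to transfer the exemplar's texture onto the target.
```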
  • Patent number: 10769848
Abstract: Techniques are disclosed for 3D object reconstruction using photometric mesh representations. A decoder is pretrained to transform points sampled from 2D patches of representative objects into 3D polygonal meshes. An image frame of the object is fed into an encoder to get an initial latent code vector. For each frame and camera pair from the sequence, a polygonal mesh is rendered at the given viewpoints. The mesh is optimized by creating a virtual viewpoint that is rasterized to obtain a depth map. The 3D mesh projections are aligned by projecting the coordinates corresponding to the polygonal face vertices of the rasterized mesh to both selected viewpoints. The photometric error is determined from RGB pixel intensities sampled from both frames. Gradients from the photometric error are backpropagated into the vertices of the assigned polygonal indices by relating the barycentric coordinates of each image to update the latent code vector.
    Type: Grant
    Filed: May 24, 2019
    Date of Patent: September 8, 2020
Assignee: Adobe Inc.
    Inventors: Oliver Wang, Vladimir Kim, Matthew Fisher, Elya Shechtman, Chen-Hsuan Lin, Bryan Russell
  • Publication number: 20200279355
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for automatically synthesizing a content-aware fill using similarity transformed patches. A user interface receives a user-specified hole and a user-specified sampling region, both of which may be stored in a constraint mask. A brush tool can be used to interactively brush the sampling region and modify the constraint mask. The mask is passed to a patch-based synthesizer configured to synthesize the fill using similarity transformed patches sampled from the sampling region. Fill properties such as similarity transform parameters can be set to control the manner in which the fill is synthesized. A live preview can be provided with gradual updates of the synthesized fill prior to completion. Once a fill has been synthesized, the user interface presents the original image, replacing the hole with the synthesized fill.
    Type: Application
    Filed: May 19, 2020
    Publication date: September 3, 2020
Inventors: Sohrab Amirghodsi, Sarah Jane Stuckey, Elya Shechtman
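The constraint mask described in this abstract can be modeled as a single label image that encodes both the hole and the brushable sampling region. A minimal sketch, with hypothetical label values and a simplified brush tool:

```python
import numpy as np

HOLE, SAMPLE, IGNORE = 0, 1, 2   # illustrative labels for the constraint mask

def make_constraint_mask(shape, hole_rect):
    mask = np.full(shape, SAMPLE, dtype=np.uint8)   # default: sample anywhere
    y0, y1, x0, x1 = hole_rect
    mask[y0:y1, x0:x1] = HOLE                       # user-specified hole
    return mask

def brush_exclude(mask, cy, cx, radius):
    """Brush tool: remove a disk around (cy, cx) from the sampling region."""
    yy, xx = np.ogrid[:mask.shape[0], :mask.shape[1]]
    disk = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    mask[disk & (mask == SAMPLE)] = IGNORE
    return mask

mask = brush_exclude(make_constraint_mask((256, 256), (100, 140, 100, 140)), 60, 60, 20)
# The mask is handed to the patch-based synthesizer, which samples only from
# SAMPLE pixels (optionally under similarity transforms) to fill HOLE pixels.
```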
  • Publication number: 20200265294
    Abstract: In implementations of object animation using generative neural networks, one or more computing devices of a system implement an animation system for reproducing animation of an object in a digital video. A mesh of the object is obtained from a first frame of the digital video and a second frame of the digital video having the object is selected. Features of the object from the second frame are mapped to vertices of the mesh, and the mesh is warped based on the mapping. The warped mesh is rendered as an image by a neural renderer and compared to the object from the second frame to train a neural network. The rendered image is then refined by a generator of a generative adversarial network which includes a discriminator. The discriminator trains the generator to reproduce the object from the second frame as the refined image.
    Type: Application
    Filed: February 14, 2019
    Publication date: August 20, 2020
    Applicant: Adobe Inc.
    Inventors: Vladimir Kim, Omid Poursaeed, Jun Saito, Elya Shechtman
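The mesh-warping step in this abstract can be illustrated directly. The sketch below assumes tracked 2D features in the second frame have already been mapped to mesh vertices; shapes and names are hypothetical.

```python
import torch

def warp_mesh(vertices, vertex_ids, targets, strength=1.0):
    """Move the vertices matched to second-frame features toward them.

    vertices: (V, 2) mesh vertex positions from frame one.
    vertex_ids: (K,) indices of vertices mapped to tracked features.
    targets: (K, 2) feature locations observed in frame two.
    """
    warped = vertices.clone()
    warped[vertex_ids] += strength * (targets - vertices[vertex_ids])
    return warped

verts = torch.rand(50, 2)
ids = torch.tensor([3, 10, 42])
warped = warp_mesh(verts, ids, torch.rand(3, 2))
# A neural renderer rasterizes the warped mesh to an image, and a GAN
# generator/discriminator pair refines that rendering toward frame two.
```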
  • Patent number: 10748324
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that integrate (or embed) a non-photorealistic rendering (“NPR”) generator with a style-transfer-neural network to generate stylized images that both correspond to a source image and resemble a stroke style. By integrating an NPR generator with a style-transfer-neural network, the disclosed methods, non-transitory computer readable media, and systems can accurately capture a stroke style resembling one or both of stylized edges or stylized shadings. When training such a style-transfer-neural network, the integrated NPR generator can enable the disclosed methods, non-transitory computer readable media, and systems to use real-stroke drawings (instead of conventional paired-ground-truth drawings) for training the network to accurately portray a stroke style.
    Type: Grant
    Filed: November 8, 2018
    Date of Patent: August 18, 2020
Assignee: Adobe Inc.
    Inventors: Elya Shechtman, Yijun Li, Chen Fang, Aaron Hertzmann
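The abstract does not specify the NPR generator, so the sketch below uses extended difference-of-Gaussians (XDoG) edges as a common stand-in that produces the kind of stylized edge rendering a style-transfer network could then map to an exemplar stroke style; all parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def xdog_edges(gray, sigma=1.0, k=1.6, tau=0.98, eps=0.1, phi=10.0):
    """Extended difference-of-Gaussians: a soft-thresholded edge drawing."""
    d = gaussian_filter(gray, sigma) - tau * gaussian_filter(gray, k * sigma)
    return np.where(d >= eps, 1.0, 1.0 + np.tanh(phi * (d - eps)))

gray = np.random.rand(128, 128)
edges = xdog_edges(gray)
# Pairing NPR renderings like `edges` with real stroke drawings lets the
# style network train without conventional paired ground truth.
```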
  • Patent number: 10740881
    Abstract: Techniques for using deep learning to facilitate patch-based image inpainting are described. In an example, a computer system hosts a neural network trained to generate, from an image, code vectors including features learned by the neural network and descriptive of patches. The image is received and contains a region of interest (e.g., a hole missing content). The computer system inputs it to the network and, in response, receives the code vectors. Each code vector is associated with a pixel in the image. Rather than comparing RGB values between patches, the computer system compares the code vector of a pixel inside the region to code vectors of pixels outside the region to find the best match based on a feature similarity measure (e.g., a cosine similarity). The pixel value of the pixel inside the region is set based on the pixel value of the matched pixel outside this region.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: August 11, 2020
    Assignee: Adobe Inc.
    Inventors: Oliver Wang, Michal Lukac, Elya Shechtman, Mahyar Najibikohnehshahri
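The cosine-similarity matching step in this abstract is concrete enough for a short sketch. Below, the per-pixel code vectors are assumed to come from the trained network; tensor shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def best_match(codes, hole_mask):
    """For each pixel inside the hole, find the most similar pixel outside it.

    codes: (H*W, D) per-pixel feature vectors from the trained network.
    hole_mask: (H*W,) boolean, True inside the region to fill.
    """
    inside = F.normalize(codes[hole_mask], dim=1)
    outside = F.normalize(codes[~hole_mask], dim=1)
    sim = inside @ outside.t()                          # cosine similarity matrix
    match = sim.argmax(dim=1)                           # best source per hole pixel
    return torch.nonzero(~hole_mask).squeeze(1)[match]  # map back to image indices

codes = torch.randn(64 * 64, 32)
hole = torch.zeros(64 * 64, dtype=torch.bool)
hole[2000:2100] = True
src = best_match(codes, hole)  # copy each matched pixel's color into the hole
```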
  • Publication number: 20200242804
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
    Type: Application
    Filed: January 25, 2019
    Publication date: July 30, 2020
    Inventors: Jonathan Eisenmann, Wenqi Xian, Matthew Fisher, Geoffrey Oxholm, Elya Shechtman
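A vanishing edge map constrains camera parameters through the geometry of vanishing points. The sketch below shows only the least-squares intersection of vanishing lines, a simplified stand-in for the patent's full geometric model.

```python
import numpy as np

def vanishing_point(lines):
    """Least-squares intersection of lines given as (a, b, c) with ax + by + c = 0."""
    A = lines[:, :2]
    b = -lines[:, 2]
    vp, *_ = np.linalg.lstsq(A, b, rcond=None)
    return vp

# Two nearly vertical edges converging slightly: their intersection sits far
# above the image (negative y), which constrains camera pitch and focal length.
lines = np.array([[1.0,  0.02, -100.0],
                  [1.0, -0.02, -120.0]])
print(vanishing_point(lines))   # approximately (110, -500)
```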
  • Patent number: 10719913
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed at image synthesis utilizing sampling of patch correspondence information between iterations at different scales. A patch synthesis technique can be performed to synthesize a target region at a first image scale based on portions of a source region that are identified by the patch synthesis technique. The image can then be sampled to generate an image at a second image scale. The sampling can include generating patch correspondence information for the image at the second image scale. Invalid patch assignments in the patch correspondence information at the second image scale can then be identified, and valid patches can be assigned to the pixels having invalid patch assignments. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: October 15, 2018
    Date of Patent: July 21, 2020
    Assignee: Adobe Inc.
    Inventors: Sohrab Amirghodsi, Aliakbar Darabi, Elya Shechtman
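The between-scale sampling this abstract describes amounts to upscaling a nearest-neighbor field and repairing assignments that become invalid at the new scale. A minimal sketch, with an illustrative validity check and random reassignment:

```python
import numpy as np

def upsample_nnf(nnf, new_h, new_w, valid):
    """Scale patch correspondences to the next pyramid level."""
    old_h, old_w = nnf.shape[:2]
    up = np.zeros((new_h, new_w, 2), dtype=np.int64)
    for y in range(new_h):
        for x in range(new_w):
            sy, sx = nnf[y * old_h // new_h, x * old_w // new_w]
            up[y, x] = (min(sy * 2, new_h - 1), min(sx * 2, new_w - 1))
            if not valid[tuple(up[y, x])]:      # invalid patch assignment:
                ys, xs = np.nonzero(valid)      # assign a valid source instead
                j = np.random.randint(len(ys))
                up[y, x] = (ys[j], xs[j])
    return up

nnf = np.random.randint(0, 16, size=(16, 16, 2))
valid = np.ones((32, 32), dtype=bool)
valid[8:24, 8:24] = False                       # the hole is never a valid source
up = upsample_nnf(nnf, 32, 32, valid)
```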
  • Patent number: 10719742
    Abstract: The present disclosure relates to an image composite system that employs a generative adversarial network to generate realistic composite images. For example, in one or more embodiments, the image composite system trains a geometric prediction neural network using an adversarial discrimination neural network to learn warp parameters that provide correct geometric alignment of foreground objects with respect to a background image. Once trained, the determined warp parameters provide realistic geometric corrections to foreground objects such that the warped foreground objects appear to blend into background images naturally when composited together.
    Type: Grant
    Filed: February 15, 2018
    Date of Patent: July 21, 2020
Assignee: Adobe Inc.
    Inventors: Elya Shechtman, Oliver Wang, Mehmet Yumer, Chen-Hsuan Lin
  • Patent number: 10706509
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for automatically synthesizing a content-aware fill using similarity transformed patches. A user interface receives a user-specified hole and a user-specified sampling region, both of which may be stored in a constraint mask. A brush tool can be used to interactively brush the sampling region and modify the constraint mask. The mask is passed to a patch-based synthesizer configured to synthesize the fill using similarity transformed patches sampled from the sampling region. Fill properties such as similarity transform parameters can be set to control the manner in which the fill is synthesized. A live preview can be provided with gradual updates of the synthesized fill prior to completion. Once a fill has been synthesized, the user interface presents the original image, replacing the hole with the synthesized fill.
    Type: Grant
    Filed: March 14, 2018
    Date of Patent: July 7, 2020
Assignee: Adobe Inc.
    Inventors: Sohrab Amirghodsi, Sarah Jane Stuckey, Elya Shechtman
  • Patent number: 10692265
Abstract: Techniques are disclosed for performing manipulation of facial images using an artificial neural network. A facial rendering and generation network and method learns one or more compact, meaningful manifolds of facial appearance by disentangling a facial image into intrinsic facial properties, and enables facial edits by traversing paths of such manifold(s). The facial rendering and generation network is able to handle a much wider range of manipulations, including changes to, for example, viewpoint, lighting, expression, and even higher-level attributes like facial hair and age, aspects that cannot be represented using previous models.
    Type: Grant
    Filed: November 7, 2019
    Date of Patent: June 23, 2020
    Assignee: Adobe Inc.
    Inventors: Sunil Hadap, Elya Shechtman, Zhixin Shu, Kalyan Sunkavalli, Mehmet Yumer
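Manifold traversal of the kind described can be pictured as moving a latent code along a learned attribute direction before decoding. In the sketch below, the latent dimensionality and the attribute direction are hypothetical stand-ins.

```python
import torch

def edit_attribute(z, direction, amount):
    """Traverse the appearance manifold along a semantic direction."""
    return z + amount * direction / direction.norm()

z = torch.randn(128)          # latent code of the input face
age_dir = torch.randn(128)    # direction learned from labeled examples
z_older = edit_attribute(z, age_dir, amount=2.0)
# Decoding z_older re-renders the face with the attribute changed while the
# disentangled lighting, viewpoint, and expression factors stay fixed.
```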
  • Publication number: 20200184697
    Abstract: Image modification using detected symmetry is described. In example implementations, an image modification module detects multiple local symmetries in an original image by discovering repeated correspondences that are each related by a transformation. The transformation can include a translation, a rotation, a reflection, a scaling, or a combination thereof. Each repeated correspondence includes three patches that are similar to one another and are respectively defined by three pixels of the original image. The image modification module generates a global symmetry of the original image by analyzing an applicability to the multiple local symmetries of multiple candidate homographies contributed by the multiple local symmetries. The image modification module associates individual pixels of the original image with a global symmetry indicator to produce a global symmetry association map.
    Type: Application
    Filed: February 19, 2020
    Publication date: June 11, 2020
    Applicant: Adobe Inc.
    Inventors: Kalyan Krishna Sunkavalli, Nathan Aaron Carr, Michal Lukác, Elya Shechtman
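A repeated correspondence as defined in this abstract is three mutually similar patches related by one transformation. The sketch below simplifies to pure translation (the patent also covers rotation, reflection, and scaling), with an illustrative similarity tolerance.

```python
import numpy as np

def repeats_at_offset(img, p, offset, size=8, tol=10.0):
    """Check whether patches at p, p + offset, p + 2*offset are mutually similar."""
    def patch(q):
        y, x = q
        return img[y:y + size, x:x + size]
    a, b, c = patch(p), patch(p + offset), patch(p + 2 * offset)
    return (np.abs(a - b).mean() < tol) and (np.abs(b - c).mean() < tol)

img = np.tile(np.random.rand(8, 8) * 255, (4, 4))  # perfectly repeating texture
print(repeats_at_offset(img, np.array([0, 0]), np.array([0, 8])))  # True
# Aggregating many such correspondences yields candidate homographies from
# which a global symmetry association map can be derived.
```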
  • Publication number: 20200151938
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that integrate (or embed) a non-photorealistic rendering (“NPR”) generator with a style-transfer-neural network to generate stylized images that both correspond to a source image and resemble a stroke style. By integrating an NPR generator with a style-transfer-neural network, the disclosed methods, non-transitory computer readable media, and systems can accurately capture a stroke style resembling one or both of stylized edges or stylized shadings. When training such a style-transfer-neural network, the integrated NPR generator can enable the disclosed methods, non-transitory computer readable media, and systems to use real-stroke drawings (instead of conventional paired-ground-truth drawings) for training the network to accurately portray a stroke style.
    Type: Application
    Filed: November 8, 2018
    Publication date: May 14, 2020
    Inventors: Elya Shechtman, Yijun Li, Chen Fang, Aaron Hertzmann
  • Patent number: 10650490
    Abstract: Environmental map generation techniques and systems are described. A digital image is scaled to achieve a target aspect ratio using a content aware scaling technique. A canvas is generated that is dimensionally larger than the scaled digital image and the scaled digital image is inserted within the canvas thereby resulting in an unfilled portion of the canvas. An initially filled canvas is then generated by filling the unfilled portion using a content aware fill technique based on the inserted digital image. A plurality of polar coordinate canvases is formed by transforming original coordinates of the canvas into polar coordinates. The unfilled portions of the polar coordinate canvases are filled using a content-aware fill technique that is initialized based on the initially filled canvas. An environmental map of the digital image is generated by combining a plurality of original coordinate canvas portions formed from the polar coordinate canvases.
    Type: Grant
    Filed: April 9, 2019
    Date of Patent: May 12, 2020
    Assignee: Adobe Inc.
    Inventors: Xue Bai, Elya Shechtman, Sylvain Philippe Paris
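The polar-coordinate step in this abstract can be illustrated in isolation. The sketch below resamples a canvas about its center with nearest-neighbor lookups, a simplification of the patent's transform; the mapping conventions are assumptions for illustration.

```python
import numpy as np

def to_polar(canvas):
    """Resample canvas (H, W) into polar coordinates about its center."""
    h, w = canvas.shape
    cy, cx = h / 2, w / 2
    rr, tt = np.mgrid[0:h, 0:w]
    radius = rr / h * min(cy, cx)        # rows map to radius
    theta = tt / w * 2 * np.pi           # columns map to angle
    ys = np.clip((cy + radius * np.sin(theta)).astype(int), 0, h - 1)
    xs = np.clip((cx + radius * np.cos(theta)).astype(int), 0, w - 1)
    return canvas[ys, xs]

polar = to_polar(np.random.rand(256, 512))
# Content-aware fill runs on such polar canvases, and the filled results are
# transformed back and combined into the final environment map.
```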
  • Publication number: 20200118254
    Abstract: Certain aspects involve video inpainting via confidence-weighted motion estimation. For instance, a video editor accesses video content having a target region to be modified in one or more video frames. The video editor computes a motion for a boundary of the target region. The video editor interpolates, from the boundary motion, a target motion of a target pixel within the target region. In the interpolation, confidence values assigned to boundary pixels control how the motion of these pixels contributes to the interpolated target motion. A confidence value is computed based on a difference between forward and reverse motion with respect to a particular boundary pixel, a texture in a region that includes the particular boundary pixel, or a combination thereof. The video editor modifies the target region in the video by updating color data of the target pixel to correspond to the target motion interpolated from the boundary motion.
    Type: Application
    Filed: April 9, 2019
    Publication date: April 16, 2020
    Inventors: Geoffrey Oxholm, Oliver Wang, Elya Shechtman, Michal Lukac, Ramiz Sheikh
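The confidence-weighted interpolation in this abstract can be sketched compactly: boundary pixels whose forward and backward motions agree get more weight when inferring the motion of an interior target pixel. The specific weighting below is illustrative, not the patented formula.

```python
import numpy as np

def confidence(fwd, bwd):
    """High when forward and (negated) backward flows agree at a boundary pixel."""
    return 1.0 / (1.0 + np.linalg.norm(fwd + bwd, axis=-1))

def interpolate_motion(target_xy, boundary_xy, boundary_flow, conf):
    d = np.linalg.norm(boundary_xy - target_xy, axis=1)
    w = conf / (d + 1e-6)                # nearer and more confident = heavier
    return (w[:, None] * boundary_flow).sum(0) / w.sum()

boundary_xy = np.array([[10.0, 10.0], [30.0, 10.0], [20.0, 30.0]])
fwd = np.array([[1.0, 0.0], [1.2, 0.1], [0.9, -0.1]])
bwd = -fwd + np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.0]])  # middle pixel inconsistent
flow = interpolate_motion(np.array([20.0, 15.0]), boundary_xy, fwd, confidence(fwd, bwd))
# Color data for the target pixel is then updated along the interpolated motion.
```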
  • Patent number: 10621760
Abstract: Techniques are disclosed for the synthesis of a full set of slotted content based upon only partial observations of the slotted content. With respect to a font, the slots may comprise particular letters, symbols, or glyphs in an alphabet. Based upon partial observations of a subset of glyphs from a font, a full set of the glyphs corresponding to the font may be synthesized and may further be ornamented.
    Type: Grant
    Filed: June 15, 2018
    Date of Patent: April 14, 2020
    Assignee: Adobe Inc.
    Inventors: Matthew David Fisher, Samaneh Azadi, Vladimir Kim, Elya Shechtman, Zhaowen Wang
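The slotted-synthesis idea can be sketched as encoding the observed subset of glyphs into a single style code and decoding every slot in the alphabet from it. The module below is a deliberately tiny hypothetical stand-in for the patent's network.

```python
import torch
import torch.nn as nn

N_GLYPHS, GLYPH = 26, 32 * 32   # 26 slots, each a flattened 32x32 glyph image

class FontCompleter(nn.Module):
    def __init__(self, style_dim=64):
        super().__init__()
        self.encode = nn.Linear(N_GLYPHS * GLYPH, style_dim)  # partial set -> style
        self.decode = nn.Linear(style_dim, N_GLYPHS * GLYPH)  # style -> full set

    def forward(self, glyphs, observed_mask):
        partial = glyphs * observed_mask        # zero out unobserved slots
        style = self.encode(partial.flatten(1))
        return self.decode(style).view(-1, N_GLYPHS, GLYPH)

glyphs = torch.rand(1, N_GLYPHS, GLYPH)
mask = torch.zeros(1, N_GLYPHS, 1)
mask[:, :5] = 1.0                               # observe only "A".."E"
full_set = FontCompleter()(glyphs, mask)        # synthesize all 26 slots
```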