Patents by Inventor Elya Shechtman

Elya Shechtman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210287007
    Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference frame to other video frames depicting a scene. For example, a computing system accesses a set of video frames with annotations identifying a target region to be modified. The computing system determines a motion of the target region's boundary across the set of video frames, and also interpolates pixel motion within the target region across the set of video frames. The computing system also inserts, responsive to user input, a reference frame into the set of video frames. The reference frame can include reference color data from a user-specified modification to the target region. The computing system can use the reference color data and the interpolated motion to update color data in the target region across the set of video frames.
    Type: Application
    Filed: March 12, 2020
    Publication date: September 16, 2021
    Inventors: Oliver Wang, Matthew Fisher, John Nelson, Geoffrey Oxholm, Elya Shechtman, Wenqi Xian
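The core propagation step of the entry above can be pictured with a short sketch. A minimal numpy illustration, assuming the per-pixel motion into the reference frame has already been interpolated; the function and argument names are hypothetical, not from the patent:

```python
import numpy as np

def propagate_reference_colors(frames, mask, flows_to_ref, reference):
    """Fill each frame's target region by pulling colors from the
    user-provided reference frame along interpolated pixel motion.

    frames:       list of HxWx3 float arrays (video frames to modify)
    mask:         HxW bool array, True inside the target region
    flows_to_ref: list of HxWx2 arrays; flows_to_ref[t][y, x] gives the
                  (dy, dx) offset from frame t into the reference frame
    reference:    HxWx3 float array holding the user-specified colors
    """
    h, w = mask.shape
    out = []
    for frame, flow in zip(frames, flows_to_ref):
        filled = frame.copy()
        ys, xs = np.nonzero(mask)
        # Follow the interpolated motion into the reference frame and
        # copy its color data (nearest-neighbor sampling for brevity).
        ry = np.clip(np.round(ys + flow[ys, xs, 0]).astype(int), 0, h - 1)
        rx = np.clip(np.round(xs + flow[ys, xs, 1]).astype(int), 0, w - 1)
        filled[ys, xs] = reference[ry, rx]
        out.append(filled)
    return out
```

The boundary-motion estimation and interior interpolation that produce `flows_to_ref` are the substance of the claimed method; this sketch only shows the final color-transfer step.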
  • Patent number: 11094083
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: August 17, 2021
    Assignee: Adobe Inc.
    Inventors: Jonathan Eisenmann, Wenqi Xian, Matthew Fisher, Geoffrey Oxholm, Elya Shechtman
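The geometric-model stage of the entry above reduces to intersecting the detected vanishing lines; camera parameters such as focal length can then be derived from orthogonal vanishing points. A minimal sketch of the line-intersection step, assuming lines are already given in homogeneous form (the neural edge-detection stage is out of scope; names are illustrative):

```python
import numpy as np

def vanishing_point(lines):
    """Least-squares intersection of 2D lines given in homogeneous
    form ax + by + c = 0; rows of `lines` are (a, b, c).
    Returns the vanishing point (x, y)."""
    A = lines[:, :2]
    b = -lines[:, 2]
    # Solve A @ p ~= b in the least-squares sense.
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Example: three vanishing lines that meet at (100, 50).
lines = np.array([
    [1.0, 0.0, -100.0],   # x = 100
    [0.0, 1.0, -50.0],    # y = 50
    [1.0, -1.0, -50.0],   # x - y = 50
])
print(vanishing_point(lines))  # [100.  50.]
```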
  • Publication number: 20210248801
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for generating an animation of a talking head from an input audio signal of speech and a representation (such as a static image) of a head to animate. Generally, a neural network can learn to predict a set of 3D facial landmarks that can be used to drive the animation. In some embodiments, the neural network can learn to detect different speaking styles in the input speech and account for the different speaking styles when predicting the 3D facial landmarks. Generally, template 3D facial landmarks can be identified or extracted from the input image or other representation of the head, and the template 3D facial landmarks can be used with successive windows of audio from the input speech to predict 3D facial landmarks and generate a corresponding animation with plausible 3D effects.
    Type: Application
    Filed: February 12, 2020
    Publication date: August 12, 2021
    Inventors: Dingzeyu Li, Yang Zhou, Jose Ignacio Echevarria Vallespi, Elya Shechtman
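A minimal sketch of the windowed inference loop described in the entry above, with a toy stand-in for the trained landmark predictor; the window size, hop, and names are assumptions, not values from the patent:

```python
import numpy as np

def animate(audio, template_landmarks, predict, win=0.5, hop=1/30, sr=16000):
    """Slide a window over the input speech and predict 3D facial
    landmarks for each output video frame.

    audio:              1-D waveform array
    template_landmarks: (68, 3) landmarks extracted from the input image
    predict:            stand-in for the trained network; maps an audio
                        window plus the template landmarks to (68, 3)
    """
    win_n, hop_n = int(win * sr), int(hop * sr)
    frames = []
    for start in range(0, max(len(audio) - win_n, 1), hop_n):
        window = audio[start:start + win_n]
        frames.append(predict(window, template_landmarks))
    return frames  # one landmark set per video frame, drives the animation

# Toy stand-in predictor: jitter the template by the window's audio energy.
rng = np.random.default_rng(0)
toy_predict = lambda w, t: t + 0.01 * np.sqrt(np.mean(w**2))
print(len(animate(rng.standard_normal(16000), np.zeros((68, 3)), toy_predict)))
```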
  • Patent number: 11080833
    Abstract: A method for manipulating a target image includes generating a query of the target image and keys and values of a first reference image. The method also includes generating matching costs by comparing the query of the target image with each key of the reference image and generating a set of weights from the matching costs. Further, the method includes generating a set of weighted values by applying each weight of the set of weights to a corresponding value of the values of the reference image and generating a weighted patch by adding each weighted value of the set of weighted values together. Additionally, the method includes generating a combined weighted patch by combining the weighted patch with additional weighted patches associated with additional queries of the target image and generating a manipulated image by applying the combined weighted patch to an image processing algorithm.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: August 3, 2021
    Assignee: Adobe Inc.
    Inventors: Connelly Barnes, Utkarsh Singhal, Elya Shechtman, Michael Gharbi
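The query/key/value blending in the entry above resembles attention over patches. A minimal numpy sketch, assuming squared-distance matching costs turned into softmax weights (the cost function and names are assumptions):

```python
import numpy as np

def weighted_patch(query, keys, values):
    """Attention-style patch synthesis: compare one query (from the
    target image) against every key (from the reference image), turn
    the matching costs into softmax weights, and blend the reference
    values into a single weighted patch.

    query:  (d,) feature vector for one target-image patch
    keys:   (n, d) feature vectors for reference-image patches
    values: (n, p) pixel content of the same reference patches
    """
    costs = np.sum((keys - query) ** 2, axis=1)    # matching costs
    w = np.exp(-costs - np.max(-costs))            # stable softmax over -cost
    w /= w.sum()
    return w @ values                              # weighted blend

q = np.array([1.0, 0.0])
K = np.array([[1.0, 0.0], [0.0, 1.0]])
V = np.array([[255.0, 0.0], [0.0, 255.0]])
print(weighted_patch(q, K, V))  # dominated by the first reference patch
```

The combined weighted patch in the claims comes from repeating this over many queries of the target image and merging the results.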
  • Patent number: 11081139
    Abstract: Certain aspects involve video inpainting via confidence-weighted motion estimation. For instance, a video editor accesses video content having a target region to be modified in one or more video frames. The video editor computes a motion for a boundary of the target region. The video editor interpolates, from the boundary motion, a target motion of a target pixel within the target region. In the interpolation, confidence values assigned to boundary pixels control how the motion of these pixels contributes to the interpolated target motion. A confidence value is computed based on a difference between forward and reverse motion with respect to a particular boundary pixel, a texture in a region that includes the particular boundary pixel, or a combination thereof. The video editor modifies the target region in the video by updating color data of the target pixel to correspond to the target motion interpolated from the boundary motion.
    Type: Grant
    Filed: April 9, 2019
    Date of Patent: August 3, 2021
    Assignee: Adobe Inc.
    Inventors: Geoffrey Oxholm, Oliver Wang, Elya Shechtman, Michal Lukac, Ramiz Sheikh
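A minimal sketch of the forward/reverse-consistency term of the confidence computation in the entry above (the patent also mentions a texture term, omitted here; the Gaussian mapping is an assumption):

```python
import numpy as np

def boundary_confidence(fwd, bwd, sigma=1.0):
    """Confidence for boundary-pixel motion from forward/backward flow
    consistency: where following the forward flow and then the reverse
    flow fails to return to the start, the motion is unreliable and its
    weight in the target-motion interpolation drops.

    fwd, bwd: (n, 2) forward and reverse motion vectors at n boundary
              pixels (bwd sampled at the forward-displaced location).
    """
    # Round-trip error: fwd + bwd should be ~0 for consistent motion.
    err = np.linalg.norm(fwd + bwd, axis=1)
    return np.exp(-(err ** 2) / (2 * sigma ** 2))

fwd = np.array([[1.0, 0.0], [3.0, -2.0]])
bwd = np.array([[-1.0, 0.0], [0.0, 0.0]])   # second pixel is inconsistent
print(boundary_confidence(fwd, bwd))         # high, then near-zero confidence
```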
  • Publication number: 20210233213
    Abstract: Techniques for adjusting the salience of an image include generating values of photographic development parameters for the foreground and background of the image so as to adjust the salience of the foreground. These parameters are global over the image rather than local. The salience is optimized over such global parameters through two sets of them produced by an encoder: one set corresponding to the foreground, in which the salience is to be either increased or decreased, and the other set corresponding to the background. Once the set of development parameters corresponding to the foreground region and the set corresponding to the background region have been determined, a decoder generates an adjusted image with the adjusted salience based on these sets of development parameters.
    Type: Application
    Filed: January 24, 2020
    Publication date: July 29, 2021
    Inventors: Youssef Alami Mejjati, Zoya Bylinskii, Elya Shechtman
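A minimal sketch of applying two global parameter sets and compositing with the foreground mask. In the patent the parameter sets come from a trained encoder and the adjusted image from a decoder; here toy exposure/contrast operators and fixed parameters stand in:

```python
import numpy as np

def apply_params(img, exposure, contrast):
    """Toy 'development' operators: exposure gain, contrast about mid-gray."""
    out = img * exposure
    return np.clip((out - 0.5) * contrast + 0.5, 0.0, 1.0)

def adjust_salience(img, fg_mask, fg_params, bg_params):
    """Render the image once with the foreground parameter set and once
    with the background set, then composite using the mask."""
    fg = apply_params(img, *fg_params)
    bg = apply_params(img, *bg_params)
    m = fg_mask[..., None].astype(float)
    return m * fg + (1.0 - m) * bg

img = np.full((4, 4, 3), 0.5)
mask = np.zeros((4, 4), dtype=bool); mask[1:3, 1:3] = True
# Brighten and punch the foreground, mute the background.
out = adjust_salience(img, mask, fg_params=(1.2, 1.3), bg_params=(0.8, 0.9))
print(out[2, 2, 0], out[0, 0, 0])   # 0.63 (foreground) vs 0.41 (background)
```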
  • Patent number: 11042969
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for automatically synthesizing a content-aware sampling region for a hole-filling algorithm such as content-aware fill. Given a source image and a hole (or other target region to fill), a sampling region can be synthesized by identifying a band of pixels surrounding the hole, clustering these pixels based on one or more characteristics (e.g., color, x/y coordinates, depth, focus, etc.), passing each of the resulting clusters as foreground pixels to a segmentation algorithm, and unioning the resulting pixels to form the sampling region. The sampling region can be stored in a constraint mask and passed to a hole-filling algorithm such as content-aware fill to synthesize a fill for the hole (or other target region) from patches sampled from the synthesized sampling region.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: June 22, 2021
    Assignee: Adobe Inc.
    Inventors: Sohrab Amirghodsi, Elya Shechtman, Derek Novo
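A minimal sketch of the sampling-region synthesis described above: dilate to obtain the band around the hole, cluster its colors with a tiny k-means, grow each cluster into a mask, and union the results. The threshold-based "segmentation" is a stand-in for the real segmentation algorithm; all names and constants are illustrative:

```python
import numpy as np

def dilate(mask, it=3):
    """Binary dilation with a 3x3 structuring element via padded shifts."""
    m = mask.copy()
    for _ in range(it):
        p = np.pad(m, 1)
        m = (p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:]
             | p[1:-1, 1:-1] | p[:-2, :-2] | p[:-2, 2:] | p[2:, :-2] | p[2:, 2:])
    return m

def sampling_region(img, hole, k=2, it=5):
    """Band around the hole -> k-means color clusters -> per-cluster
    masks -> union, stored as a constraint mask for content-aware fill."""
    band = dilate(hole) & ~hole
    colors = img[band]
    centers = colors[np.linspace(0, len(colors) - 1, k).astype(int)]
    for _ in range(it):  # tiny k-means on the band colors
        d = np.linalg.norm(colors[:, None] - centers[None], axis=2)
        lab = d.argmin(1)
        centers = np.array([colors[lab == j].mean(0) if (lab == j).any()
                            else centers[j] for j in range(k)])
    region = np.zeros(hole.shape, dtype=bool)
    for c in centers:  # stand-in for passing each cluster to segmentation
        region |= np.linalg.norm(img - c, axis=2) < 0.15
    return region & ~hole

img = np.random.default_rng(0).random((32, 32, 3))
hole = np.zeros((32, 32), dtype=bool); hole[12:20, 12:20] = True
print(sampling_region(img, hole).sum(), "pixels available to sample")
```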
  • Publication number: 20210158495
    Abstract: A method for manipulating a target image includes generating a query of the target image and keys and values of a first reference image. The method also includes generating matching costs by comparing the query of the target image with each key of the reference image and generating a set of weights from the matching costs. Further, the method includes generating a set of weighted values by applying each weight of the set of weights to a corresponding value of the values of the reference image and generating a weighted patch by adding each weighted value of the set of weighted values together. Additionally, the method includes generating a combined weighted patch by combining the weighted patch with additional weighted patches associated with additional queries of the target image and generating a manipulated image by applying the combined weighted patch to an image processing algorithm.
    Type: Application
    Filed: November 22, 2019
    Publication date: May 27, 2021
    Inventors: Connelly Barnes, Utkarsh Singhal, Elya Shechtman, Michael Gharbi
  • Publication number: 20210158570
    Abstract: This disclosure involves training generative adversarial networks to shot-match two unmatched images in a context-sensitive manner. For example, aspects of the present disclosure include accessing a trained generative adversarial network including a trained generator model and a trained discriminator model. A source image and a reference image may be inputted into the generator model to generate a modified source image. The modified source image and the reference image may be inputted into the discriminator model to determine a likelihood that the modified source image is color-matched with the reference image. The modified source image may be outputted as a shot-match with the reference image in response to determining, using the discriminator model, that the modified source image and the reference image are color-matched.
    Type: Application
    Filed: November 22, 2019
    Publication date: May 27, 2021
    Inventors: Tharun Mohandoss, Pulkit Gera, Oliver Wang, Kartik Sethi, Kalyan Sunkavalli, Elya Shechtman, Chetan Nanda
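A minimal inference-time sketch of the shot-matching flow above, with callable stand-ins for the trained generator and discriminator (the toy models and threshold are assumptions, not the patent's architecture):

```python
import numpy as np

def shot_match(source, reference, generator, discriminator, thresh=0.5):
    """Inference with a trained shot-matching GAN: the generator maps
    (source, reference) to a color-modified source; the discriminator
    scores how likely the pair is color-matched."""
    modified = generator(source, reference)
    score = discriminator(modified, reference)   # likelihood of color match
    return (modified, score) if score >= thresh else (source, score)

# Toy stand-ins: 'generator' shifts the source mean toward the reference;
# 'discriminator' scores by mean-color agreement.
gen = lambda s, r: s + (r.mean((0, 1)) - s.mean((0, 1)))
disc = lambda m, r: float(np.exp(-np.abs(m.mean() - r.mean())))
src, ref = np.zeros((8, 8, 3)), np.full((8, 8, 3), 0.7)
out, p = shot_match(src, ref, gen, disc)
print(round(p, 3), round(out.mean(), 3))  # 1.0 0.7
```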
  • Publication number: 20210160466
    Abstract: Systems and techniques for automatic digital parameter adjustment are described that leverage insights learned from an image set to automatically predict parameter values for an input item of digital visual content. To do so, the automatic digital parameter adjustment techniques described herein capture visual and contextual features of digital visual content to determine balanced visual output in a range of visual scenes and settings. The visual and contextual features of digital visual content are used to train, through machine learning techniques, a parameter adjustment model that captures feature patterns and interactions. The parameter adjustment model exploits these feature interactions to determine visually pleasing parameter values for an input item of digital visual content. The predicted parameter values are output, allowing further adjustment to the parameter values.
    Type: Application
    Filed: November 26, 2019
    Publication date: May 27, 2021
    Applicant: Adobe Inc.
    Inventors: Pulkit Gera, Oliver Wang, Kalyan Krishna Sunkavalli, Elya Shechtman, Chetan Nanda
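A minimal sketch of the predict-parameters interface described above, with ridge regression standing in for the learned parameter adjustment model (the features, targets, and names are illustrative, not from the patent):

```python
import numpy as np

def fit_parameter_model(features, params):
    """Ridge-regression stand-in for the learned model: map per-item
    visual/contextual features to editing-parameter values.
    features: (n, d) feature vectors; params: (n, k) parameter targets."""
    X = np.hstack([features, np.ones((len(features), 1))])  # bias column
    lam = 1e-3 * np.eye(X.shape[1])
    W = np.linalg.solve(X.T @ X + lam, X.T @ params)
    return lambda f: np.append(f, 1.0) @ W   # predicted parameter values

rng = np.random.default_rng(1)
feats = rng.standard_normal((100, 8))        # e.g. luminance stats, scene tags
sliders = feats @ rng.standard_normal((8, 3))  # exposure/contrast/saturation
predict = fit_parameter_model(feats, sliders)
print(np.round(predict(feats[0]) - sliders[0], 3))  # near-zero residual
```

The predicted values are a starting point; as the abstract notes, the user can still adjust them afterward.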
  • Publication number: 20210142042
    Abstract: In implementations of skin tone assisted digital image color matching, a device implements a color editing system, which includes a facial detection module to detect faces in an input image and in a reference image, and includes a skin tone model to determine a skin tone value reflective of a skin tone of each of the faces. A color matching module can be implemented to group the faces into one or more face groups based on the skin tone value of each of the faces, match a face group pair as an input image face group paired with a reference image face group, and generate a modified image from the input image based on color features of the reference image, the color features including face skin tones of the respective faces in the face group pair as part of the color features applied to modify the input image.
    Type: Application
    Filed: January 21, 2021
    Publication date: May 13, 2021
    Applicant: Adobe Inc.
    Inventors: Kartik Sethi, Oliver Wang, Tharun Mohandoss, Elya Shechtman, Chetan Nanda
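A minimal sketch of the grouping and group-pairing steps above, operating on scalar skin-tone values (the tolerance and nearest-mean pairing rule are assumptions; the color-transfer step itself is omitted):

```python
import numpy as np

def group_faces(skin_tones, tol=0.1):
    """Group faces whose skin-tone values fall within `tol` of a group's
    running mean; returns (list of index lists, list of group means)."""
    groups, means = [], []
    for t in sorted(range(len(skin_tones)), key=lambda j: skin_tones[j]):
        if means and abs(skin_tones[t] - means[-1]) <= tol:
            groups[-1].append(t)
            means[-1] = float(np.mean([skin_tones[j] for j in groups[-1]]))
        else:
            groups.append([t]); means.append(skin_tones[t])
    return groups, means

def match_group_pairs(in_means, ref_means):
    """Pair each input-image face group with the reference-image group of
    closest skin-tone value; the pairs then steer the color matching."""
    return [(i, int(np.argmin([abs(m - r) for r in ref_means])))
            for i, m in enumerate(in_means)]

_, in_means = group_faces([0.32, 0.35, 0.70])
_, ref_means = group_faces([0.30, 0.72])
print(match_group_pairs(in_means, ref_means))  # [(0, 0), (1, 1)]
```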
  • Publication number: 20210142463
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating modified digital images by utilizing a patch match algorithm to generate nearest neighbor fields for a second digital image based on a nearest neighbor field associated with a first digital image. For example, the disclosed systems can identify a nearest neighbor field associated with a first digital image of a first resolution. Based on the nearest neighbor field of the first digital image, the disclosed systems can utilize a patch match algorithm to generate a nearest neighbor field for a second digital image of a second resolution larger than the first resolution. The disclosed systems can further generate a modified digital image by filling a target region of the second digital image utilizing the generated nearest neighbor field.
    Type: Application
    Filed: November 8, 2019
    Publication date: May 13, 2021
    Inventors: Sohrab Amirghodsi, Aliakbar Darabi, Elya Shechtman
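A minimal sketch of initializing the higher-resolution nearest-neighbor field from the lower-resolution one, as described above; regular patch-match iterations (propagation and random search) would then refine it. Names are illustrative:

```python
import numpy as np

def upscale_nnf(nnf, scale=2):
    """Warm-start a higher-resolution nearest-neighbor field from a
    lower-resolution one: replicate each offset onto the scaled grid
    and scale the offsets themselves.

    nnf: (h, w, 2) integer offsets mapping target patches to source
         patches at the coarse resolution."""
    fine = np.repeat(np.repeat(nnf, scale, axis=0), scale, axis=1)
    return fine * scale   # offsets grow with the resolution

coarse = np.zeros((2, 2, 2), dtype=int)
coarse[1, 1] = (3, -1)
fine = upscale_nnf(coarse)
print(fine.shape, fine[2, 2])   # (4, 4, 2) [ 6 -2]
```

Starting the fine-resolution search near the coarse solution is what makes filling the target region at the larger second resolution cheap.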
  • Patent number: 10977549
    Abstract: In implementations of object animation using generative neural networks, one or more computing devices of a system implement an animation system for reproducing animation of an object in a digital video. A mesh of the object is obtained from a first frame of the digital video and a second frame of the digital video having the object is selected. Features of the object from the second frame are mapped to vertices of the mesh, and the mesh is warped based on the mapping. The warped mesh is rendered as an image by a neural renderer and compared to the object from the second frame to train a neural network. The rendered image is then refined by a generator of a generative adversarial network which includes a discriminator. The discriminator trains the generator to reproduce the object from the second frame as the refined image.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: April 13, 2021
    Assignee: Adobe Inc.
    Inventors: Vladimir Kim, Omid Poursaeed, Jun Saito, Elya Shechtman
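A minimal sketch of the feature-to-vertex mapping and mesh warp described above; the neural rendering and GAN refinement stages are out of scope, and the nearest-vertex rule and names are assumptions:

```python
import numpy as np

def warp_mesh(vertices, feature_matches):
    """Warp a mesh obtained from the first frame toward the second frame:
    each matched feature is mapped to its nearest mesh vertex, and that
    vertex moves to the feature's position in the second frame.

    vertices:        (v, 2) mesh vertex positions from the first frame
    feature_matches: list of (frame1_xy, frame2_xy) corresponding points
    """
    warped = vertices.astype(float).copy()
    for p1, p2 in feature_matches:
        i = np.argmin(np.linalg.norm(vertices - np.asarray(p1), axis=1))
        warped[i] = p2   # vertex follows its mapped feature
    return warped

verts = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
matches = [((0.1, 0.2), (1.0, 1.0)), ((5.0, 8.1), (6.0, 9.0))]
print(warp_mesh(verts, matches))  # vertices 0 and 2 move, vertex 1 stays
```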
  • Patent number: 10936853
    Abstract: In implementations of skin tone assisted digital image color matching, a device implements a color editing system, which includes a facial detection module to detect faces in an input image and in a reference image, and includes a skin tone model to determine a skin tone value reflective of a skin tone of each of the faces. A color matching module can be implemented to group the faces into one or more face groups based on the skin tone value of each of the faces, match a face group pair as an input image face group paired with a reference image face group, and generate a modified image from the input image based on color features of the reference image, the color features including face skin tones of the respective faces in the face group pair as part of the color features applied to modify the input image.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: March 2, 2021
    Assignee: Adobe Inc.
    Inventors: Kartik Sethi, Oliver Wang, Tharun Mohandoss, Elya Shechtman, Chetan Nanda
  • Publication number: 20210056668
    Abstract: Techniques are disclosed for filling or otherwise replacing a target region of a primary image with a corresponding region of an auxiliary image. The filling or replacing can be done with an overlay (no subtractive process need be run on the primary image). Because the primary and auxiliary images may not be aligned, both geometric and photometric transformations are applied to the primary and/or auxiliary images. For instance, a geometric transformation of the auxiliary image is performed, to better align features of the auxiliary image with corresponding features of the primary image. Also, a photometric transformation of the auxiliary image is performed, to better match color of one or more pixels of the auxiliary image with color of corresponding one or more pixels of the primary image. The corresponding region of the transformed auxiliary image is then copied and overlaid on the target region of the primary image.
    Type: Application
    Filed: August 22, 2019
    Publication date: February 25, 2021
    Applicant: Adobe Inc.
    Inventors: Connelly Barnes, Sohrab Amirghodsi, Elya Shechtman
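A minimal sketch of the photometric transformation plus overlay described above, assuming the geometric alignment (e.g. a homography from matched keypoints) has already been applied to the auxiliary image; the per-channel gain/offset model is an assumption:

```python
import numpy as np

def photometric_match(aux, primary, mask):
    """Per-channel gain/offset so the auxiliary image's colors match the
    primary image where both are valid (outside the target region)."""
    out = aux.astype(float).copy()
    for c in range(3):
        a, p = aux[~mask, c], primary[~mask, c]
        gain = p.std() / (a.std() + 1e-8)
        out[..., c] = (aux[..., c] - a.mean()) * gain + p.mean()
    return np.clip(out, 0, 1)

def fill_from_auxiliary(primary, aux_aligned, hole):
    """Overlay the aligned, color-matched auxiliary content onto the
    target region; the primary image itself is never subtracted from."""
    aux = photometric_match(aux_aligned, primary, hole)
    out = primary.copy()
    out[hole] = aux[hole]
    return out

prim = np.full((6, 6, 3), 0.6); prim[2:4, 2:4] = 0.0   # damaged target region
aux = np.full((6, 6, 3), 0.3)                           # darker exposure
hole = np.zeros((6, 6), dtype=bool); hole[2:4, 2:4] = True
print(fill_from_auxiliary(prim, aux, hole)[2, 2])       # ~[0.6 0.6 0.6]
```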
  • Patent number: 10915991
    Abstract: Embodiments described herein are directed to methods and systems for facilitating control of smoothness of transitions between images. In embodiments, a difference of color values of pixels between a foreground image and the background image are identified along a boundary associated with a location at which to paste the foreground image relative to the background image. Thereafter, recursive down sampling of a region of pixels within the boundary by a sampling factor is performed to produce a plurality of down sampled images having color difference indicators associated with each pixel of the down sampled images. Such color difference indicators indicate whether a difference of color value exists for the corresponding pixel. To effectuate a seamless transition, the color difference indicators are normalized in association with each recursively down sampled image.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: February 9, 2021
    Assignee: Adobe Inc.
    Inventors: Sylvain Paris, Sohrab Amirghodsi, Aliakbar Darabi, Elya Shechtman
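A minimal sketch of the normalize-while-downsampling idea above: boundary color differences are spread into the pasted region's interior through a small pyramid with indicator normalization. The pooling and normalization details here are assumptions, not the patented scheme:

```python
import numpy as np

def down2(a):
    """Average-pool by 2 (odd sizes padded by edge replication)."""
    h, w = a.shape[:2]
    a = np.pad(a, [(0, h % 2), (0, w % 2)] + [(0, 0)] * (a.ndim - 2),
               mode="edge")
    return 0.25 * (a[::2, ::2] + a[1::2, ::2] + a[::2, 1::2] + a[1::2, 1::2])

def up2(a, shape):
    return np.repeat(np.repeat(a, 2, 0), 2, 1)[:shape[0], :shape[1]]

def smooth_membrane(diff, indicator, levels=6):
    """Spread boundary color differences into the interior by recursive
    down sampling with indicator normalization, then upsampling back.

    diff:      HxWx3 boundary color differences (zero elsewhere)
    indicator: HxW, 1.0 where a difference is defined, else 0.0"""
    if levels == 0 or min(indicator.shape) <= 2:
        return diff / np.maximum(indicator, 1e-8)[..., None]
    coarse = smooth_membrane(down2(diff), down2(indicator), levels - 1)
    up = up2(coarse, indicator.shape)
    norm = indicator[..., None]
    # Keep exact values where the difference is known; interpolate elsewhere.
    return norm * (diff / np.maximum(norm, 1e-8)) + (1 - norm) * up

# Usage: given foreground 'fg' pasted over background 'bg' with mask 'm',
# put (bg - fg) on the boundary ring into 'diff', run smooth_membrane,
# and add the membrane to fg inside 'm' for a seamless transition.
```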
  • Publication number: 20210012189
    Abstract: Techniques for incorporating a black-box function into a neural network are described. For example, an image editing function may be the black-box function and may be wrapped into a layer of the neural network. A set of parameters and a source image are provided to the black-box function, and the output image that represents the source image with the set of parameters applied to the source image is output from the black-box function. To address the issue that the black-box function may not be differentiable, a loss optimization may calculate the gradients of the function using, for example, a finite differences calculation, and the gradients are used to train the neural network to ensure the output image is representative of an expected ground truth image.
    Type: Application
    Filed: July 10, 2019
    Publication date: January 14, 2021
    Inventors: Oliver Wang, Kevin Wampler, Kalyan Krishna Sunkavalli, Elya Shechtman, Siddhant Jain
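A minimal sketch of the finite-differences gradient that lets a black-box edit participate in training, per the entry above (the scalar-loss setup and names are assumptions):

```python
import numpy as np

def finite_diff_grad(f, params, eps=1e-4):
    """Central finite differences: approximate d f(params) / d params for
    a black-box (non-differentiable) function f returning a scalar loss,
    so gradients can flow to the surrounding neural network."""
    grads = np.zeros_like(params, dtype=float)
    for i in range(params.size):
        bump = np.zeros_like(params, dtype=float)
        bump.flat[i] = eps
        grads.flat[i] = (f(params + bump) - f(params - bump)) / (2 * eps)
    return grads

# Black-box example: scalar loss of an 'edited image' under parameters.
target = 0.8
f = lambda p: float((p[0] * 0.5 + p[1] - target) ** 2)  # opaque to autodiff
p = np.array([1.0, 0.1])
print(np.round(finite_diff_grad(f, p), 4))  # ~[-0.2 -0.4], matches analytic
```

In a full setup these approximate gradients would be returned from the custom layer's backward pass so the rest of the network trains normally.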
  • Publication number: 20200372619
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for automatically synthesizing a content-aware sampling region for a hole-filling algorithm such as content-aware fill. Given a source image and a hole (or other target region to fill), a sampling region can be synthesized by identifying a band of pixels surrounding the hole, clustering these pixels based on one or more characteristics (e.g., color, x/y coordinates, depth, focus, etc.), passing each of the resulting clusters as foreground pixels to a segmentation algorithm, and unioning the resulting pixels to form the sampling region. The sampling region can be stored in a constraint mask and passed to a hole-filling algorithm such as content-aware fill to synthesize a fill for the hole (or other target region) from patches sampled from the synthesized sampling region.
    Type: Application
    Filed: May 23, 2019
    Publication date: November 26, 2020
    Inventors: Sohrab Amirghodsi, Elya Shechtman, Derek Novo
  • Publication number: 20200372710
    Abstract: Techniques are disclosed for 3D object reconstruction using photometric mesh representations. A decoder is pretrained to transform points sampled from 2D patches of representative objects into 3D polygonal meshes. An image frame of the object is fed into an encoder to get an initial latent code vector. For each frame and camera pair from the sequence, a polygonal mesh is rendered at the given viewpoints. The mesh is optimized by creating a virtual viewpoint that is rasterized to obtain a depth map. The 3D mesh projections are aligned by projecting the coordinates corresponding to the polygonal face vertices of the rasterized mesh to both selected viewpoints. The photometric error is determined from RGB pixel intensities sampled from both frames. Gradients from the photometric error are backpropagated into the vertices of the assigned polygonal indices by relating the barycentric coordinates of each image to update the latent code vector.
    Type: Application
    Filed: August 5, 2020
    Publication date: November 26, 2020
    Applicant: Adobe Inc.
    Inventors: Oliver Wang, Vladimir Kim, Matthew Fisher, Elya Shechtman, Chen-Hsuan Lin, Bryan Russell
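A minimal sketch of the photometric error at the heart of the entry above: project mesh vertices into two frames and compare the RGB intensities sampled at both projections. Pinhole 3x4 cameras and bilinear sampling are assumptions; visibility handling and rasterization are omitted:

```python
import numpy as np

def bilinear(img, pts):
    """Sample an HxWx3 image at float (x, y) points with bilinear weights."""
    x, y = pts[:, 0], pts[:, 1]
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x0 = np.clip(x0, 0, img.shape[1] - 2); y0 = np.clip(y0, 0, img.shape[0] - 2)
    fx, fy = (x - x0)[:, None], (y - y0)[:, None]
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x0 + 1]
            + (1 - fx) * fy * img[y0 + 1, x0] + fx * fy * img[y0 + 1, x0 + 1])

def photometric_error(verts3d, cam_a, cam_b, frame_a, frame_b):
    """Project mesh vertices into two frames and compare the sampled RGB
    intensities; gradients of this error are what drive the latent-code
    updates in the reconstruction loop."""
    def project(P, X):
        Xh = np.hstack([X, np.ones((len(X), 1))]) @ P.T   # 3x4 camera
        return Xh[:, :2] / Xh[:, 2:3]
    ca = bilinear(frame_a, project(cam_a, verts3d))
    cb = bilinear(frame_b, project(cam_b, verts3d))
    return float(np.mean((ca - cb) ** 2))

cam = np.hstack([np.eye(3), np.zeros((3, 1))])
verts = np.array([[2.0, 2.0, 1.0], [3.0, 4.0, 1.0]])
img = np.random.default_rng(0).random((8, 8, 3))
print(photometric_error(verts, cam, cam, img, img))  # 0.0: same view, same frame
```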
  • Publication number: 20200342634
    Abstract: Techniques are disclosed for neural network based interpolation of image textures. A methodology implementing the techniques according to an embodiment includes training a global encoder network to generate global latent vectors based on training texture images, and training a local encoder network to generate local latent tensors based on the training texture images. The method further includes interpolating between the global latent vectors associated with each set of training images, and interpolating between the local latent tensors associated with each set of training images. The method further includes training a decoder network to generate reconstructions of the training texture images and to generate an interpolated texture based on the interpolated global latent vectors and the interpolated local latent tensors. The training of the encoder and decoder networks is based on a minimization of a loss function of the reconstructions and a minimization of a loss function of the interpolated texture.
    Type: Application
    Filed: April 24, 2019
    Publication date: October 29, 2020
    Applicant: Adobe Inc.
    Inventors: Connelly Barnes, Sohrab Amirghodsi, Michal Lukac, Elya Shechtman, Ning Yu
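A minimal sketch of the latent interpolation at the heart of the method above; the trained encoders and the decoder that turns blended codes into an interpolated texture are out of scope, and the shapes and names are illustrative:

```python
import numpy as np

def interpolate_latents(g1, g2, l1, l2, alpha):
    """Blend two textures in latent space: linear interpolation of the
    global latent vectors and of the local latent tensors; a decoder
    would then map the blended codes to the interpolated texture.

    g1, g2: (dg,) global latent vectors      (overall texture statistics)
    l1, l2: (h, w, dl) local latent tensors  (spatial detail)
    """
    g = (1 - alpha) * g1 + alpha * g2
    l = (1 - alpha) * l1 + alpha * l2
    return g, l

rng = np.random.default_rng(0)
g, l = interpolate_latents(rng.standard_normal(64), rng.standard_normal(64),
                           rng.standard_normal((8, 8, 16)),
                           rng.standard_normal((8, 8, 16)), alpha=0.5)
print(g.shape, l.shape)   # (64,) (8, 8, 16)
```

Varying `alpha` spatially rather than using one scalar would produce a gradual transition from one texture to the other across the output.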