Patents by Inventor Elya Shechtman

Elya Shechtman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11178368
    Abstract: Systems and techniques for automatic digital parameter adjustment are described that leverage insights learned from an image set to automatically predict parameter values for an input item of digital visual content. To do so, the automatic digital parameter adjustment techniques described herein capture visual and contextual features of digital visual content to determine balanced visual output in a range of visual scenes and settings. The visual and contextual features of digital visual content are used to train, through machine learning techniques, a parameter adjustment model that captures feature patterns and interactions. The parameter adjustment model exploits these feature interactions to determine visually pleasing parameter values for an input item of digital visual content. The predicted parameter values are output, allowing further adjustment to the parameter values.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: November 16, 2021
    Assignee: Adobe Inc.
    Inventors: Pulkit Gera, Oliver Wang, Kalyan Krishna Sunkavalli, Elya Shechtman, Chetan Nanda
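
A minimal sketch of the idea in the entry above, assuming a regression model that maps extracted image features to slider-style parameter values; the model shape, feature dimension, and parameter names are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch: predict editing-parameter values from image features.
# The MLP stands in for the learned parameter adjustment model.
import torch
import torch.nn as nn

PARAMS = ["exposure", "contrast", "saturation", "temperature"]

class ParameterAdjustmentModel(nn.Module):
    def __init__(self, feature_dim=512, num_params=len(PARAMS)):
        super().__init__()
        # A small MLP standing in for the model that captures feature
        # patterns and interactions learned from an image set.
        self.head = nn.Sequential(
            nn.Linear(feature_dim, 256), nn.ReLU(),
            nn.Linear(256, num_params), nn.Tanh(),  # values in [-1, 1]
        )

    def forward(self, features):
        return self.head(features)

model = ParameterAdjustmentModel()
features = torch.randn(1, 512)          # visual + contextual features
predicted = model(features).squeeze(0)  # one value per parameter
for name, value in zip(PARAMS, predicted.tolist()):
    print(f"{name}: {value:+.3f}")      # user can still adjust these
```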
  • Publication number: 20210342983
    Abstract: Methods and systems are provided for accurately filling holes, regions, and/or portions of images using iterative image inpainting. In particular, iterative inpainting utilizes a confidence analysis of the pixels predicted during the iterations of inpainting. For instance, a confidence analysis can provide information that can be used as feedback to progressively fill undefined pixels that comprise the holes, regions, and/or portions of an image where information for those respective pixels is not known. To allow for accurate image inpainting, one or more neural networks can be used, for instance, a coarse result neural network (e.g., a GAN comprised of a generator and a discriminator) and a fine result neural network (e.g., a GAN comprised of a generator and two discriminators).
    Type: Application
    Filed: April 29, 2020
    Publication date: November 4, 2021
    Inventors: Zhe Lin, Yu Zeng, Jimei Yang, Jianming Zhang, Elya Shechtman
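
A toy sketch of the confidence-driven iteration described above, assuming a grayscale image and a neighbor-averaging "predictor" that stands in for the coarse/fine GANs; the window size and confidence threshold are illustrative assumptions.

```python
# Hypothetical sketch of confidence-based iterative inpainting: predict
# undefined pixels, keep only confident predictions, and feed them back
# as known pixels for the next iteration.
import numpy as np
from scipy.ndimage import uniform_filter

def iterative_inpaint(image, known, iters=10, threshold=0.5):
    image = image.copy()
    known = known.astype(float)
    for _ in range(iters):
        # Predict every pixel from the known pixels in its neighborhood.
        weighted = uniform_filter(image * known, size=5)
        coverage = uniform_filter(known, size=5)
        prediction = np.divide(weighted, coverage, out=np.zeros_like(weighted),
                               where=coverage > 0)
        # Confidence: how much known support each predicted pixel had.
        confident = (coverage >= threshold) & (known < 1)
        image[confident] = prediction[confident]
        known[confident] = 1.0          # feedback: treat them as known now
        if known.all():
            break
    return image

img = np.random.rand(64, 64)
mask = np.ones((64, 64)); mask[20:44, 20:44] = 0   # the hole to fill
print(iterative_inpaint(img, mask).shape)
```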
  • Publication number: 20210342984
    Abstract: Methods and systems are provided for accurately filling holes, regions, and/or portions of high-resolution images using guided upsampling during image inpainting. For instance, an image inpainting system can apply guided upsampling to an inpainted image result to enable generation of a high-resolution inpainting result from a lower-resolution image that has undergone inpainting. To allow for guided upsampling during image inpainting, one or more neural networks can be used, for instance, a low-resolution result neural network (e.g., comprised of an encoder and a decoder) and a high-resolution input neural network (e.g., comprised of an encoder and a decoder). The image inpainting system can use such networks to generate a high-resolution inpainting image result that fills the hole, region, and/or portion of the image.
    Type: Application
    Filed: May 1, 2020
    Publication date: November 4, 2021
    Inventors: Zhe Lin, Yu Zeng, Jimei Yang, Jianming Zhang, Elya Shechtman
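
A very loose sketch of the cross-resolution flow described above, with plain bilinear zoom standing in for the patent's guided-upsampling networks; the function name and scale handling are illustrative assumptions only.

```python
# Hypothetical sketch: inpaint at low resolution, then upsample the filled
# region and composite it back into the full-resolution image.
import numpy as np
from scipy.ndimage import zoom

def guided_upsample_fill(hi_image, hi_hole_mask, lo_inpainted, scale):
    # Upsample the low-resolution inpainting result to full resolution.
    up = zoom(lo_inpainted, scale, order=1)          # bilinear
    up = up[:hi_image.shape[0], :hi_image.shape[1]]
    # Keep original pixels outside the hole; use the upsampled fill inside.
    out = hi_image.copy()
    out[hi_hole_mask] = up[hi_hole_mask]
    return out

hi = np.random.rand(256, 256)
hole = np.zeros((256, 256), bool); hole[100:160, 100:160] = True
lo_filled = np.random.rand(64, 64)                   # pretend this was inpainted
print(guided_upsample_fill(hi, hole, lo_filled, scale=4).shape)
```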
  • Patent number: 11158090
    Abstract: This disclosure involves training generative adversarial networks to shot-match two unmatched images in a context-sensitive manner. For example, aspects of the present disclosure include accessing a trained generative adversarial network including a trained generator model and a trained discriminator model. A source image and a reference image may be inputted into the generator model to generate a modified source image. The modified source image and the reference image may be inputted into the discriminator model to determine a likelihood that the modified source image is color-matched with the reference image. The modified source image may be outputted as a shot-match with the reference image in response to determining, using the discriminator model, that the modified source image and the reference image are color-matched.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: October 26, 2021
    Assignee: Adobe Inc.
    Inventors: Tharun Mohandoss, Pulkit Gera, Oliver Wang, Kartik Sethi, Kalyan Sunkavalli, Elya Shechtman, Chetan Nanda
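
A minimal sketch of the shot-matching inference step from the entry above: a trained generator recolors the source toward the reference, and the discriminator scores whether the pair looks color-matched. The network bodies and the 0.5 acceptance threshold are placeholder assumptions, not the trained models from the patent.

```python
# Hypothetical sketch of GAN-based shot matching at inference time.
import torch
import torch.nn as nn

generator = nn.Sequential(              # takes source+reference, 6 channels in
    nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
)
discriminator = nn.Sequential(          # scores a candidate/reference pair
    nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1), nn.Sigmoid(),
)

source = torch.rand(1, 3, 64, 64)
reference = torch.rand(1, 3, 64, 64)
modified = generator(torch.cat([source, reference], dim=1))
match_prob = discriminator(torch.cat([modified, reference], dim=1))
if match_prob.item() > 0.5:             # accept as a shot-match
    print("color-matched:", match_prob.item())
```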
  • Publication number: 20210312599
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for automatically synthesizing a content-aware sampling region for a hole-filling algorithm such as content-aware fill. Given a source image and a hole (or other target region to fill), a sampling region can be synthesized by identifying a band of pixels surrounding the hole, clustering these pixels based on one or more characteristics (e.g., color, x/y coordinates, depth, focus, etc.), passing each of the resulting clusters as foreground pixels to a segmentation algorithm, and taking the union of the resulting pixels to form the sampling region. The sampling region can be stored in a constraint mask and passed to a hole-filling algorithm such as content-aware fill to synthesize a fill for the hole (or other target region) from patches sampled from the synthesized sampling region.
    Type: Application
    Filed: June 17, 2021
    Publication date: October 7, 2021
    Inventors: Sohrab Amirghodsi, Elya Shechtman, Derek Novo
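
A rough sketch of the sampling-region synthesis described above, assuming k-means clustering over color plus normalized coordinates and a simple color-distance threshold standing in for the segmentation algorithm; the band width, k, and tolerance are illustrative assumptions.

```python
# Hypothetical sketch: cluster a band of pixels around the hole, "segment"
# per cluster, and union the results into a constraint mask.
import numpy as np
from scipy.ndimage import binary_dilation
from sklearn.cluster import KMeans

def sampling_region(image, hole, band_width=5, k=3, tol=0.15):
    band = binary_dilation(hole, iterations=band_width) & ~hole
    ys, xs = np.nonzero(band)
    # Features per band pixel: color plus normalized x/y coordinates.
    feats = np.column_stack([image[ys, xs],
                             ys / image.shape[0], xs / image.shape[1]])
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
    region = np.zeros_like(hole)
    for c in range(k):
        center = image[ys[labels == c], xs[labels == c]].mean(axis=0)
        # "Segment": all non-hole pixels close in color to this cluster.
        close = np.linalg.norm(image - center, axis=-1) < tol
        region |= close & ~hole          # union of per-cluster segments
    return region                        # constraint mask for content-aware fill

img = np.random.rand(64, 64, 3)
hole = np.zeros((64, 64), bool); hole[24:40, 24:40] = True
print(sampling_region(img, hole).sum(), "pixels available for sampling")
```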
  • Patent number: 11138693
    Abstract: Techniques for adjusting the salience of an image include generating values of photographic development parameters for the foreground and background of an image to adjust the salience of the image in the foreground. These parameters are global over the image rather than local. Moreover, an encoder provides the two sets of global parameters over which the salience is optimized: one set corresponding to the foreground, in which the salience is to be either increased or decreased, and the other corresponding to the background. Once the set of development parameters corresponding to the foreground region and the set corresponding to the background region have been determined, a decoder generates an adjusted image with increased salience based on these sets of development parameters.
    Type: Grant
    Filed: January 24, 2020
    Date of Patent: October 5, 2021
    Assignee: Adobe Inc.
    Inventors: Youssef Alami Mejjati, Zoya Bylinskii, Elya Shechtman
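
A simplified sketch of applying two global parameter sets as described above; the encoder that predicts the sets is stubbed out with fixed values, and the exposure/contrast formulas are generic stand-ins for whatever development parameters the patent covers.

```python
# Hypothetical sketch: one set of global development parameters for the
# foreground, another for the background, applied through a mask.
import numpy as np

def develop(region, exposure, contrast):
    # Global photographic-style adjustment: exposure then contrast.
    out = region * (2.0 ** exposure)
    return np.clip((out - 0.5) * contrast + 0.5, 0.0, 1.0)

def adjust_salience(image, fg_mask, fg_params, bg_params):
    return np.where(fg_mask[..., None],
                    develop(image, *fg_params),
                    develop(image, *bg_params))

img = np.random.rand(64, 64, 3)
fg = np.zeros((64, 64), bool); fg[16:48, 16:48] = True
# Brighten/punch the foreground, flatten the background -> higher salience.
result = adjust_salience(img, fg, fg_params=(0.5, 1.2), bg_params=(-0.3, 0.8))
print(result.shape)
```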
  • Publication number: 20210287007
    Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference frame to other video frames depicting a scene. For example, a computing system accesses a set of video frames with annotations identifying a target region to be modified. The computing system determines a motion of the target region's boundary across the set of video frames, and also interpolates pixel motion within the target region across the set of video frames. The computing system also inserts, responsive to user input, a reference frame into the set of video frames. The reference frame can include reference color data from a user-specified modification to the target region. The computing system can use the reference color data and the interpolated motion to update color data in the target region across the set of video frames.
    Type: Application
    Filed: March 12, 2020
    Publication date: September 16, 2021
    Inventors: Oliver Wang, Matthew Fisher, John Nelson, Geoffrey Oxholm, Elya Shechtman, Wenqi Xian
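
A toy sketch of the propagation step described above: reference colors painted into one frame are pushed to later frames along per-pixel motion interpolated inside the target region. The flows here are synthetic, and propagation runs forward only; both are simplifying assumptions.

```python
# Hypothetical sketch of reference-frame color propagation along motion.
import numpy as np

def propagate(frames, masks, ref_idx, ref_frame, flows):
    """flows[t] maps pixel (y, x) of frame t back to frame t-1 as (dy, dx)."""
    frames = [f.copy() for f in frames]
    frames[ref_idx][masks[ref_idx]] = ref_frame[masks[ref_idx]]
    for t in range(ref_idx + 1, len(frames)):
        ys, xs = np.nonzero(masks[t])
        src_y = np.clip((ys + flows[t][ys, xs, 0]).round().astype(int),
                        0, frames[t].shape[0] - 1)
        src_x = np.clip((xs + flows[t][ys, xs, 1]).round().astype(int),
                        0, frames[t].shape[1] - 1)
        frames[t][ys, xs] = frames[t - 1][src_y, src_x]  # pull color along motion
    return frames

T, H, W = 4, 32, 32
frames = [np.random.rand(H, W, 3) for _ in range(T)]
masks = [np.zeros((H, W), bool) for _ in range(T)]
for m in masks: m[10:20, 10:20] = True
flows = [np.zeros((H, W, 2)) for _ in range(T)]          # static scene
out = propagate(frames, masks, ref_idx=0, ref_frame=np.ones((H, W, 3)), flows=flows)
print(out[-1][15, 15])   # reference white has propagated to the last frame
```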
  • Patent number: 11094083
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: August 17, 2021
    Assignee: Adobe Inc.
    Inventors: Jonathan Eisenmann, Wenqi Xian, Matthew Fisher, Geoffrey Oxholm, Elya Shechtman
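
A small sketch of the geometric step described above. It assumes the vanishing points of two orthogonal scene directions are already known (here they are hard-coded rather than derived from the critical edge detection network's vanishing edge map); the focal length then follows from the standard orthogonality constraint on the back-projected rays with the principal point at the image center.

```python
# Hypothetical sketch: focal length from two orthogonal vanishing points,
# using the constraint (v1 - c) . (v2 - c) + f^2 = 0.
import math

def focal_from_vanishing_points(v1, v2, principal_point):
    cx, cy = principal_point
    d = (v1[0] - cx) * (v2[0] - cx) + (v1[1] - cy) * (v2[1] - cy)
    if d >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return math.sqrt(-d)

# Two vanishing points in pixel coordinates (illustrative values).
f = focal_from_vanishing_points((1500.0, 400.0), (-300.0, 420.0), (640.0, 360.0))
print(f"estimated focal length: {f:.1f} px")
```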
  • Publication number: 20210248801
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for generating an animation of a talking head from an input audio signal of speech and a representation (such as a static image) of a head to animate. Generally, a neural network can learn to predict a set of 3D facial landmarks that can be used to drive the animation. In some embodiments, the neural network can learn to detect different speaking styles in the input speech and account for the different speaking styles when predicting the 3D facial landmarks. Generally, template 3D facial landmarks can be identified or extracted from the input image or other representation of the head, and the template 3D facial landmarks can be used with successive windows of audio from the input speech to predict 3D facial landmarks and generate a corresponding animation with plausible 3D effects.
    Type: Application
    Filed: February 12, 2020
    Publication date: August 12, 2021
    Inventors: Dingzeyu Li, Yang Zhou, Jose Ignacio Echevarria Vallespi, Elya Shechtman
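
A minimal sketch of the prediction interface described above: a model maps successive windows of audio features to displacements of template 3D facial landmarks, yielding per-window landmarks that would drive the animation. The GRU, feature dimension, and 68-landmark count are illustrative assumptions, not the patent's architecture.

```python
# Hypothetical sketch: audio windows + template landmarks -> per-window
# 3D facial landmarks for driving a talking-head animation.
import torch
import torch.nn as nn

NUM_LANDMARKS = 68

class AudioToLandmarks(nn.Module):
    def __init__(self, audio_dim=128):
        super().__init__()
        self.rnn = nn.GRU(audio_dim, 256, batch_first=True)
        self.out = nn.Linear(256, NUM_LANDMARKS * 3)

    def forward(self, audio_windows, template):
        h, _ = self.rnn(audio_windows)               # (B, T, 256)
        offsets = self.out(h).view(*h.shape[:2], NUM_LANDMARKS, 3)
        return template[:, None] + offsets           # landmarks per window

model = AudioToLandmarks()
audio = torch.randn(1, 40, 128)                      # 40 windows of audio features
template = torch.randn(1, NUM_LANDMARKS, 3)          # from the input image
landmarks = model(audio, template)
print(landmarks.shape)                               # (1, 40, 68, 3)
```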
  • Patent number: 11081139
    Abstract: Certain aspects involve video inpainting via confidence-weighted motion estimation. For instance, a video editor accesses video content having a target region to be modified in one or more video frames. The video editor computes a motion for a boundary of the target region. The video editor interpolates, from the boundary motion, a target motion of a target pixel within the target region. In the interpolation, confidence values assigned to boundary pixels control how the motion of these pixels contributes to the interpolated target motion. A confidence value is computed based on a difference between forward and reverse motion with respect to a particular boundary pixel, a texture in a region that includes the particular boundary pixel, or a combination thereof. The video editor modifies the target region in the video by updating color data of the target pixel to correspond to the target motion interpolated from the boundary motion.
    Type: Grant
    Filed: April 9, 2019
    Date of Patent: August 3, 2021
    Assignee: Adobe Inc.
    Inventors: Geoffrey Oxholm, Oliver Wang, Elya Shechtman, Michal Lukac, Ramiz Sheikh
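
A compact sketch of the confidence-weighted interpolation described above: each boundary pixel's motion votes for the target pixel's motion, weighted by a confidence derived from forward/backward flow agreement and by proximity. The inverse-distance weighting and confidence formula are illustrative assumptions (the texture term from the abstract is omitted).

```python
# Hypothetical sketch of confidence-weighted motion interpolation.
import numpy as np

def boundary_confidence(fwd, bwd):
    # High confidence when forward and reverse motion cancel out.
    fb_error = np.linalg.norm(fwd + bwd, axis=-1)
    return 1.0 / (1.0 + fb_error)

def interpolate_motion(target_xy, boundary_xy, boundary_flow, confidence):
    dist = np.linalg.norm(boundary_xy - target_xy, axis=-1)
    w = confidence / (dist + 1e-6)                  # confident + near = loud vote
    return (w[:, None] * boundary_flow).sum(0) / w.sum()

boundary_xy = np.array([[10., 10.], [10., 30.], [30., 10.], [30., 30.]])
fwd = np.array([[1., 0.], [1., 0.], [0., 1.], [5., 5.]])   # last one is noisy
bwd = -fwd.copy(); bwd[3] = [0., 0.]                       # ...and inconsistent
conf = boundary_confidence(fwd, bwd)
print(interpolate_motion(np.array([20., 20.]), boundary_xy, fwd, conf))
```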
  • Patent number: 11080833
    Abstract: A method for manipulating a target image includes generating a query of the target image and keys and values of a first reference image. The method also includes generating matching costs by comparing the query of the target image with each key of the reference image and generating a set of weights from the matching costs. Further, the method includes generating a set of weighted values by applying each weight of the set of weights to a corresponding value of the values of the reference image and generating a weighted patch by adding each weighted value of the set of weighted values together. Additionally, the method includes generating a combined weighted patch by combining the weighted patch with additional weighted patches associated with additional queries of the target image and generating a manipulated image by applying the combined weighted patch to an image processing algorithm.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: August 3, 2021
    Assignee: Adobe Inc.
    Inventors: Connelly Barnes, Utkarsh Singhal, Elya Shechtman, Michael Gharbi
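
A minimal sketch of the query/key/value patch matching described above: matching costs between one target-patch query and every reference-patch key become softmax weights over the reference values, which are summed into one weighted patch. Dot-product costs and the flattened 64-dimensional patches are illustrative assumptions.

```python
# Hypothetical sketch: one weighted patch from a query over reference
# keys and values, attention-style.
import numpy as np

def weighted_patch(query, keys, values):
    costs = keys @ query                       # matching cost per reference key
    w = np.exp(costs - costs.max())
    w /= w.sum()                               # weights from the matching costs
    return (w[:, None] * values).sum(axis=0)   # weighted values, added together

rng = np.random.default_rng(0)
query = rng.standard_normal(64)                # one query from the target image
keys = rng.standard_normal((100, 64))          # keys of the reference image
values = rng.standard_normal((100, 64))        # values of the reference image
patch = weighted_patch(query, keys, values)
print(patch.shape)   # in practice, combined across all of the target's queries
```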
  • Publication number: 20210233213
    Abstract: Techniques for adjusting the salience of an image include generating values of photographic development parameters for the foreground and background of an image to adjust the salience of the image in the foreground. These parameters are global over the image rather than local. Moreover, an encoder provides the two sets of global parameters over which the salience is optimized: one set corresponding to the foreground, in which the salience is to be either increased or decreased, and the other corresponding to the background. Once the set of development parameters corresponding to the foreground region and the set corresponding to the background region have been determined, a decoder generates an adjusted image with increased salience based on these sets of development parameters.
    Type: Application
    Filed: January 24, 2020
    Publication date: July 29, 2021
    Inventors: Youssef Alami Mejjati, Zoya Bylinskii, Elya Shechtman
  • Patent number: 11042969
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for automatically synthesizing a content-aware sampling region for a hole-filling algorithm such as content-aware fill. Given a source image and a hole (or other target region to fill), a sampling region can be synthesized by identifying a band of pixels surrounding the hole, clustering these pixels based on one or more characteristics (e.g., color, x/y coordinates, depth, focus, etc.), passing each of the resulting clusters as foreground pixels to a segmentation algorithm, and taking the union of the resulting pixels to form the sampling region. The sampling region can be stored in a constraint mask and passed to a hole-filling algorithm such as content-aware fill to synthesize a fill for the hole (or other target region) from patches sampled from the synthesized sampling region.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: June 22, 2021
    Assignee: Adobe Inc.
    Inventors: Sohrab Amirghodsi, Elya Shechtman, Derek Novo
  • Publication number: 20210158570
    Abstract: This disclosure involves training generative adversarial networks to shot-match two unmatched images in a context-sensitive manner. For example, aspects of the present disclosure include accessing a trained generative adversarial network including a trained generator model and a trained discriminator model. A source image and a reference image may be inputted into the generator model to generate a modified source image. The modified source image and the reference image may be inputted into the discriminator model to determine a likelihood that the modified source image is color-matched with the reference image. The modified source image may be outputted as a shot-match with the reference image in response to determining, using the discriminator model, that the modified source image and the reference image are color-matched.
    Type: Application
    Filed: November 22, 2019
    Publication date: May 27, 2021
    Inventors: Tharun Mohandoss, Pulkit Gera, Oliver Wang, Kartik Sethi, Kalyan Sunkavalli, Elya Shechtman, Chetan Nanda
  • Publication number: 20210158495
    Abstract: A method for manipulating a target image includes generating a query of the target image and keys and values of a first reference image. The method also includes generating matching costs by comparing the query of the target image with each key of the reference image and generating a set of weights from the matching costs. Further, the method includes generating a set of weighted values by applying each weight of the set of weights to a corresponding value of the values of the reference image and generating a weighted patch by adding each weighted value of the set of weighted values together. Additionally, the method includes generating a combined weighted patch by combining the weighted patch with additional weighted patches associated with additional queries of the target image and generating a manipulated image by applying the combined weighted patch to an image processing algorithm.
    Type: Application
    Filed: November 22, 2019
    Publication date: May 27, 2021
    Inventors: Connelly Barnes, Utkarsh Singhal, Elya Shechtman, Michael Gharbi
  • Publication number: 20210160466
    Abstract: Systems and techniques for automatic digital parameter adjustment are described that leverage insights learned from an image set to automatically predict parameter values for an input item of digital visual content. To do so, the automatic digital parameter adjustment techniques described herein capture visual and contextual features of digital visual content to determine balanced visual output in a range of visual scenes and settings. The visual and contextual features of digital visual content are used to train, through machine learning techniques, a parameter adjustment model that captures feature patterns and interactions. The parameter adjustment model exploits these feature interactions to determine visually pleasing parameter values for an input item of digital visual content. The predicted parameter values are output, allowing further adjustment to the parameter values.
    Type: Application
    Filed: November 26, 2019
    Publication date: May 27, 2021
    Applicant: Adobe Inc.
    Inventors: Pulkit Gera, Oliver Wang, Kalyan Krishna Sunkavalli, Elya Shechtman, Chetan Nanda
  • Publication number: 20210142042
    Abstract: In implementations of skin tone assisted digital image color matching, a device implements a color editing system, which includes a facial detection module to detect faces in an input image and in a reference image, and includes a skin tone model to determine a skin tone value reflective of the skin tone of each of the faces. A color matching module can be implemented to group the faces into one or more face groups based on the skin tone value of each of the faces, match a face group pair as an input image face group paired with a reference image face group, and generate a modified image from the input image based on color features of the reference image, including the face skin tones of the respective faces in the matched face group pair.
    Type: Application
    Filed: January 21, 2021
    Publication date: May 13, 2021
    Applicant: Adobe Inc.
    Inventors: Kartik Sethi, Oliver Wang, Tharun Mohandoss, Elya Shechtman, Chetan Nanda
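
A toy sketch of the grouping-and-matching step described above: faces are reduced to scalar skin-tone values, grouped by similarity, and each input-image group is paired with the closest reference-image group. Face detection, the skin tone model, and the full color-transfer pipeline are out of scope; the tolerance and greedy grouping are illustrative assumptions.

```python
# Hypothetical sketch: group faces by skin-tone value and pair groups
# between the input and reference images.
import numpy as np

def group_faces(tones, tol=0.08):
    groups = []
    for t in sorted(tones):
        if groups and t - np.mean(groups[-1]) < tol:
            groups[-1].append(t)      # close enough: join the current group
        else:
            groups.append([t])        # otherwise start a new group
    return [float(np.mean(g)) for g in groups]

def match_groups(input_groups, ref_groups):
    return [(g, min(ref_groups, key=lambda r: abs(r - g))) for g in input_groups]

input_tones = [0.31, 0.34, 0.62]      # skin-tone values from the input image
ref_tones = [0.29, 0.65, 0.66]        # ...and from the reference image
pairs = match_groups(group_faces(input_tones), group_faces(ref_tones))
for src, dst in pairs:                # these pairings steer the color transfer
    print(f"input group {src:.2f} -> reference group {dst:.2f}")
```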
  • Publication number: 20210142463
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating modified digital images by utilizing a patch match algorithm to generate nearest neighbor fields for a second digital image based on a nearest neighbor field associated with a first digital image. For example, the disclosed systems can identify a nearest neighbor field associated with a first digital image of a first resolution. Based on the nearest neighbor field of the first digital image, the disclosed systems can utilize a patch match algorithm to generate a nearest neighbor field for a second digital image of a second resolution larger than the first resolution. The disclosed systems can further generate a modified digital image by filling a target region of the second digital image utilizing the generated nearest neighbor field.
    Type: Application
    Filed: November 8, 2019
    Publication date: May 13, 2021
    Inventors: Sohrab Amirghodsi, Aliakbar Darabi, Elya Shechtman
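
A small sketch of the cross-resolution step described above: the low-resolution nearest neighbor field (NNF) is rescaled to initialize the high-resolution NNF, which a patch match pass would then refine. The refinement itself is omitted, and the sub-cell offset scheme is an illustrative assumption.

```python
# Hypothetical sketch: seed a high-resolution NNF from a low-resolution one.
import numpy as np

def upsample_nnf(nnf_lo, scale):
    h, w = nnf_lo.shape[:2]
    nnf_hi = np.zeros((h * scale, w * scale, 2), dtype=nnf_lo.dtype)
    for y in range(h * scale):
        for x in range(w * scale):
            # Inherit the coarse correspondence, rescaled, plus the
            # sub-cell offset so neighboring pixels stay coherent.
            cy, cx = y // scale, x // scale
            nnf_hi[y, x] = nnf_lo[cy, cx] * scale + (y % scale, x % scale)
    return nnf_hi

nnf_lo = np.random.randint(0, 64, size=(64, 64, 2))   # low-res correspondences
nnf_hi = upsample_nnf(nnf_lo, scale=2)                # init for the larger image
print(nnf_hi.shape)                                   # (128, 128, 2)
```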
  • Patent number: 10977549
    Abstract: In implementations of object animation using generative neural networks, one or more computing devices of a system implement an animation system for reproducing animation of an object in a digital video. A mesh of the object is obtained from a first frame of the digital video and a second frame of the digital video having the object is selected. Features of the object from the second frame are mapped to vertices of the mesh, and the mesh is warped based on the mapping. The warped mesh is rendered as an image by a neural renderer and compared to the object from the second frame to train a neural network. The rendered image is then refined by a generator of a generative adversarial network which includes a discriminator. The discriminator trains the generator to reproduce the object from the second frame as the refined image.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: April 13, 2021
    Assignee: Adobe Inc.
    Inventors: Vladimir Kim, Omid Poursaeed, Jun Saito, Elya Shechtman
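
A loose sketch of the warping step described above: features tracked in the second frame are mapped to their nearest mesh vertices, and each vertex moves by the mean offset of its assigned features. The neural renderer and the GAN refinement stage are not shown, and the nearest-vertex assignment is an illustrative assumption.

```python
# Hypothetical sketch: warp mesh vertices by the motion of mapped features.
import numpy as np

def warp_mesh(vertices, feats_frame1, feats_frame2):
    warped = vertices.copy()
    # Assign each feature to its nearest vertex in the first frame.
    owner = np.argmin(np.linalg.norm(
        vertices[:, None] - feats_frame1[None], axis=-1), axis=0)
    for v in range(len(vertices)):
        mine = owner == v
        if mine.any():
            warped[v] += (feats_frame2[mine] - feats_frame1[mine]).mean(axis=0)
    return warped

rng = np.random.default_rng(1)
mesh = rng.uniform(0, 100, size=(20, 2))        # mesh vertices from frame 1
f1 = rng.uniform(0, 100, size=(50, 2))          # features in frame 1
f2 = f1 + np.array([3.0, -1.0])                 # same features moved in frame 2
v = np.argmin(np.linalg.norm(mesh - f1[0], axis=-1))  # vertex owning feature 0
print(warp_mesh(mesh, f1, f2)[v] - mesh[v])           # ~ [3, -1]
```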
  • Patent number: 10936853
    Abstract: In implementations of skin tone assisted digital image color matching, a device implements a color editing system, which includes a facial detection module to detect faces in an input image and in a reference image, and includes a skin tone model to determine a skin tone value reflective of the skin tone of each of the faces. A color matching module can be implemented to group the faces into one or more face groups based on the skin tone value of each of the faces, match a face group pair as an input image face group paired with a reference image face group, and generate a modified image from the input image based on color features of the reference image, including the face skin tones of the respective faces in the matched face group pair.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: March 2, 2021
    Assignee: Adobe Inc.
    Inventors: Kartik Sethi, Oliver Wang, Tharun Mohandoss, Elya Shechtman, Chetan Nanda