Patents by Inventor Elya Shechtman

Elya Shechtman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11880957
    Abstract: One example method involves operations for receiving a request to transform an input image into a target image. Operations further include providing the input image to a machine learning model trained to adapt images. Training the machine learning model includes accessing training data having a source domain of images and a target domain of images with a target style. Training further includes using a pre-trained generative model to generate an adapted source domain of adapted images having the target style. The adapted source domain is generated by determining a rate of change for parameters of the target style, generating weighted parameters by applying a weight to each of the parameters based on their respective rate of change, and applying the weighted parameters to the source domain. Additionally, operations include using the machine learning model to generate the target image by modifying parameters of the input image using the target style.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: January 23, 2024
    Assignee: Adobe Inc.
    Inventors: Yijun Li, Richard Zhang, Jingwan Lu, Elya Shechtman
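The weighting step this abstract describes (a rate of change per style parameter, turned into weights that steer the source domain toward the target style) can be sketched as a toy scalar calculation. The function name, the flat dict of "parameters", and the normalization are illustrative assumptions, not details from the patent:

```python
def adapt_style(source_params, prev_params, curr_params, step=1.0):
    """Toy sketch: weight each target-style parameter by its rate of
    change, then shift the source parameters toward the target style."""
    adapted = {}
    total = sum(abs(curr_params[k] - prev_params[k]) for k in curr_params)
    for name, src in source_params.items():
        # Rate of change of this style parameter between two checkpoints.
        rate = abs(curr_params[name] - prev_params[name])
        weight = rate / total if total else 0.0  # normalized weight
        # Apply the weighted target parameter to the source-domain value.
        adapted[name] = src + step * weight * (curr_params[name] - src)
    return adapted
```

A parameter whose style value is changing quickly gets a large weight and pulls the source strongly; a static parameter is left alone.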
  • Patent number: 11880766
    Abstract: An improved system architecture uses a pipeline including a Generative Adversarial Network (GAN) including a generator neural network and a discriminator neural network to generate an image. An input image in a first domain and information about a target domain are obtained. The domains correspond to image styles. An initial latent space representation of the input image is produced by encoding the input image. An initial output image is generated by processing the initial latent space representation with the generator neural network. Using the discriminator neural network, a score is computed indicating whether the initial output image is in the target domain. A loss is computed based on the computed score. The loss is minimized to compute an updated latent space representation. The updated latent space representation is processed with the generator neural network to generate an output image in the target domain.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: January 23, 2024
    Assignee: Adobe Inc.
    Inventors: Cameron Smith, Ratheesh Kalarot, Wei-An Lin, Richard Zhang, Niloy Mitra, Elya Shechtman, Shabnam Ghadar, Zhixin Shu, Yannick Hold-Geoffroy, Nathan Carr, Jingwan Lu, Oliver Wang, Jun-Yan Zhu
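The optimization loop in this abstract (score the current output with a discriminator, turn the score into a loss, minimize the loss over the latent code) can be sketched with a stand-in differentiable score; the quadratic "discriminator" and analytic gradient below are assumptions for illustration only:

```python
def optimize_latent(z0, grad_fn, steps=300, lr=0.1):
    """Toy latent-space optimization: minimize a loss derived from a
    discriminator-style score by gradient descent on the latent code."""
    z = list(z0)
    for _ in range(steps):
        g = grad_fn(z)  # gradient of the loss w.r.t. the latent code
        z = [zi - lr * gi for zi, gi in zip(z, g)]
    return z

# Stand-in "discriminator": the score is higher the closer z lies to the
# target-domain center (here [1.0, -2.0]); the loss is the negated score.
target = [1.0, -2.0]
grad = lambda z: [2.0 * (zi - ti) for zi, ti in zip(z, target)]
z_star = optimize_latent([0.0, 0.0], grad)
```

In the patented pipeline the gradient would come from backpropagating through the discriminator and generator networks; here it is closed-form so the loop is runnable.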
  • Patent number: 11875221
    Abstract: Systems and methods generate a filtering function for editing an image with reduced attribute correlation. An image editing system groups training data into bins according to a distribution of a target attribute. For each bin, the system samples a subset of the training data based on a pre-determined target distribution of a set of additional attributes in the training data. The system identifies a direction in the sampled training data corresponding to the distribution of the target attribute to generate a filtering vector for modifying the target attribute in an input image, obtains a latent space representation of an input image, applies the filtering vector to the latent space representation of the input image to generate a filtered latent space representation of the input image, and provides the filtered latent space representation as input to a neural network to generate an output image with a modification to the target attribute.
    Type: Grant
    Filed: September 7, 2021
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Wei-An Lin, Baldo Faieta, Cameron Smith, Elya Shechtman, Jingwan Lu, Jun-Yan Zhu, Niloy Mitra, Ratheesh Kalarot, Richard Zhang, Shabnam Ghadar, Zhixin Shu
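The bin-then-direction procedure this abstract describes can be sketched on toy latent vectors: group samples into bins along the target attribute, then take the difference of the extreme bins' centroids as the filtering vector. The dict layout, centroid-difference direction estimate, and additive application are illustrative assumptions:

```python
from collections import defaultdict

def attribute_direction(samples, n_bins=2):
    """Toy sketch: bin latent samples by a target attribute, then estimate
    an editing direction from the high-bin vs. low-bin centroids."""
    lo = min(s["attr"] for s in samples)
    hi = max(s["attr"] for s in samples)
    width = (hi - lo) / n_bins or 1.0
    bins = defaultdict(list)
    for s in samples:
        idx = min(int((s["attr"] - lo) / width), n_bins - 1)
        bins[idx].append(s["latent"])
    def centroid(vs):
        return [sum(col) / len(vs) for col in zip(*vs)]
    low_c, high_c = centroid(bins[0]), centroid(bins[n_bins - 1])
    return [h - l for h, l in zip(high_c, low_c)]

def apply_filter(latent, direction, strength=1.0):
    """Move a latent code along the filtering vector."""
    return [z + strength * d for z, d in zip(latent, direction)]
```

The patent's per-bin resampling step (balancing the other attributes before estimating the direction) is omitted here for brevity.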
  • Patent number: 11869173
    Abstract: Various disclosed embodiments are directed to inpainting one or more portions of a target image based on merging (or selecting) one or more portions of a warped image with (or from) one or more portions of an inpainting candidate (e.g., via a learning model). This, among other functionality described herein, resolves the inaccuracies of existing image inpainting technologies.
    Type: Grant
    Filed: December 27, 2022
    Date of Patent: January 9, 2024
    Assignee: Adobe Inc.
    Inventors: Yuqian Zhou, Elya Shechtman, Connelly Stuart Barnes, Sohrab Amirghodsi
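The merge-or-select step in this abstract can be sketched per pixel: inside the hole, prefer the warped content where a confidence value says it is reliable, otherwise fall back to the inpainting candidate. In the patent this decision is made by a learning model; the fixed 0.5 threshold and 1-D pixel lists below are illustrative assumptions:

```python
def merge_inpaint(original, warped, candidate, confidence, hole_mask):
    """Toy per-pixel merge of a warped image with an inpainting candidate."""
    out = []
    for o, w, c, conf, hole in zip(original, warped, candidate,
                                   confidence, hole_mask):
        if not hole:
            out.append(o)      # keep known pixels untouched
        elif conf >= 0.5:
            out.append(w)      # trust the warped content where reliable
        else:
            out.append(c)      # otherwise use the inpainting candidate
    return out
```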
  • Patent number: 11861762
    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate synthesized digital images using class-specific generators for objects of different classes. The disclosed system modifies a synthesized digital image by utilizing a plurality of class-specific generator neural networks to generate a plurality of synthesized objects according to object classes identified in the synthesized digital image. The disclosed system determines object classes in the synthesized digital image such as via a semantic label map corresponding to the synthesized digital image. The disclosed system selects class-specific generator neural networks corresponding to the classes of objects in the synthesized digital image. The disclosed system also generates a plurality of synthesized objects utilizing the class-specific generator neural networks based on contextual data associated with the identified objects.
    Type: Grant
    Filed: August 12, 2021
    Date of Patent: January 2, 2024
    Assignee: Adobe Inc.
    Inventors: Yuheng Li, Yijun Li, Jingwan Lu, Elya Shechtman, Krishna Kumar Singh
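The class-specific dispatch this abstract describes can be sketched as a lookup from semantic label to generator. The lambdas below are stand-ins for the class-specific generator neural networks, and the (label, pixel) representation of a semantic label map is an illustrative assumption:

```python
def regenerate_objects(semantic_map, generators):
    """Toy dispatch: for each labeled region, apply the generator registered
    for that object class; unknown classes are passed through unchanged."""
    out = []
    for label, pixel in semantic_map:
        gen = generators.get(label)
        out.append(gen(pixel) if gen else pixel)
    return out

generators = {
    "car": lambda p: p + 100,     # stand-in for a car-specific generator
    "person": lambda p: p + 200,  # stand-in for a person-specific generator
}
```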
  • Patent number: 11854244
    Abstract: A panoptic labeling system includes a modified panoptic labeling neural network (“modified PLNN”) that is trained to generate labels for pixels in an input image. The panoptic labeling system generates modified training images by combining training images with mask instances from annotated images. The modified PLNN determines a set of labels representing categories of objects depicted in the modified training images. The modified PLNN also determines a subset of the labels representing categories of objects depicted in the input image. For each mask pixel in a modified training image, the modified PLNN calculates a probability indicating whether the mask pixel has the same label as an object pixel. The modified PLNN generates a mask label for each mask pixel, based on the probability. The panoptic labeling system provides the mask label to, for example, a digital graphics editing system that uses the labels to complete an infill operation.
    Type: Grant
    Filed: October 20, 2022
    Date of Patent: December 26, 2023
    Assignee: Adobe Inc.
    Inventors: Sohrab Amirghodsi, Zhe Lin, Yilin Wang, Tianshu Yu, Connelly Barnes, Elya Shechtman
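The label-assignment step this abstract describes (for each mask pixel, a probability that it shares a label with an object pixel, then a label from that probability) can be sketched with a similarity-based stand-in probability. The feature-distance formula below is an assumption; in the patent the probability comes from the modified PLNN:

```python
def label_mask_pixels(mask_pixels, object_pixels):
    """Toy sketch: give each mask pixel the label of the object pixel it is
    most likely to share a label with."""
    def prob(m, o):
        # Stand-in probability: closer feature values -> higher probability.
        return 1.0 / (1.0 + abs(m - o["feat"]))
    labels = []
    for m in mask_pixels:
        best = max(object_pixels, key=lambda o: prob(m, o))
        labels.append(best["label"])
    return labels
```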
  • Publication number: 20230385992
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that implement an inpainting framework having computer-implemented machine learning models to generate high-resolution inpainting results. For instance, in one or more embodiments, the disclosed systems generate an inpainted digital image utilizing a deep inpainting neural network from a digital image having a replacement region. The disclosed systems further generate, utilizing a visual guide algorithm, at least one deep visual guide from the inpainted digital image. Using a patch match model and the at least one deep visual guide, the disclosed systems generate a plurality of modified digital images from the digital image by replacing the region of pixels of the digital image with replacement pixels. Additionally, the disclosed systems select, utilizing an inpainting curation model, a modified digital image from the plurality of modified digital images to provide to a client device.
    Type: Application
    Filed: May 25, 2022
    Publication date: November 30, 2023
    Inventors: Connelly Barnes, Elya Shechtman, Sohrab Amirghodsi, Zhe Lin
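The four-stage pipeline this abstract walks through (deep inpainting, visual guide, patch-match candidates, curation) can be sketched as plain function composition. All of the callables below are toy stand-ins for the named models, passed in as parameters; the seed-per-candidate scheme is an assumption:

```python
def inpaint_pipeline(image, inpaint_fn, guide_fn, patch_match_fn,
                     score_fn, n_candidates=3):
    """Toy orchestration: deep inpainting produces a draft, a visual guide
    is derived from it, patch match produces several candidate images, and
    a curation score picks the one to return."""
    draft = inpaint_fn(image)
    guide = guide_fn(draft)
    candidates = [patch_match_fn(image, guide, seed)
                  for seed in range(n_candidates)]
    return max(candidates, key=score_fn)
```

With scalar stand-ins the flow is easy to trace: each stage consumes the previous stage's output, and only the curation score decides which candidate reaches the client.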
  • Patent number: 11823357
    Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference video frame to other video frames depicting a scene. One example method includes one or more processing devices that perform operations that include accessing a scene depicting a reference object that includes an annotation identifying a target region to be modified in one or more video frames. The operations also include computing a target motion of a target pixel that is subject to a motion constraint. The motion constraint is based on a three-dimensional model of the reference object. Further, operations include determining color data of the target pixel to correspond to the target motion. The color data includes a color value and a gradient. Operations also include determining gradient constraints using gradient values of neighbor pixels. Additionally, the processing devices update the color data of the target pixel subject to the gradient constraints.
    Type: Grant
    Filed: March 9, 2021
    Date of Patent: November 21, 2023
    Assignee: Adobe Inc.
    Inventors: Oliver Wang, John Nelson, Geoffrey Oxholm, Elya Shechtman
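The gradient-constrained color update this abstract describes can be sketched in one dimension: each neighbor "votes" for the pixel value that would satisfy its gradient constraint, and the pixel is updated toward the average vote. The 1-D setup and the averaging rule are illustrative assumptions (the patent operates on 2-D video frames with motion constraints as well):

```python
def update_color(colors, grads, target_idx, iters=100):
    """Toy 1-D gradient-constrained update: adjust the target pixel so the
    finite differences to its neighbors match the prescribed gradients."""
    c = list(colors)
    i = target_idx
    for _ in range(iters):
        # c[i] - c[i-1] should equal grads[i-1]; c[i+1] - c[i] should
        # equal grads[i]. Each neighbor votes for the satisfying value.
        left_vote = c[i - 1] + grads[i - 1]
        right_vote = c[i + 1] - grads[i]
        c[i] = 0.5 * (left_vote + right_vote)
    return c
```

When the two constraints disagree, the update settles on the least-squares compromise between them, which is the usual behavior of Poisson-style color blending.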
  • Publication number: 20230368339
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate inpainted digital images utilizing a class-specific cascaded modulation inpainting neural network. For example, the disclosed systems utilize a class-specific cascaded modulation inpainting neural network that includes cascaded modulation decoder layers to generate replacement pixels portraying a particular target object class. To illustrate, in response to user selection of a replacement region and target object class, the disclosed systems utilize a class-specific cascaded modulation inpainting neural network corresponding to the target object class to generate an inpainted digital image that portrays an instance of the target object class within the replacement region.
    Type: Application
    Filed: May 13, 2022
    Publication date: November 16, 2023
    Inventors: Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Elya Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi
  • Publication number: 20230360180
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate inpainted digital images utilizing a cascaded modulation inpainting neural network. For example, the disclosed systems utilize a cascaded modulation inpainting neural network that includes cascaded modulation decoder layers. For example, in one or more decoder layers, the disclosed systems start with global code modulation that captures the global-range image structures, followed by an additional modulation that refines the global predictions. Accordingly, in one or more implementations, the image inpainting system provides a mechanism to correct distorted local details. Furthermore, in one or more implementations, the image inpainting system leverages fast Fourier convolution blocks within different resolution layers of the encoder architecture to expand the receptive field of the encoder and to allow the network encoder to better capture global structure.
    Type: Application
    Filed: May 4, 2022
    Publication date: November 9, 2023
    Inventors: Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Elya Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi
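The cascade in these two abstracts (a global modulation first, then a refining modulation) can be sketched on a flat feature vector. Scalar global scaling and additive per-position refinement are illustrative assumptions; the real decoder layers modulate convolutional feature maps:

```python
def cascaded_modulation(features, global_code, refine_code):
    """Toy decoder layer: a global modulation scales every feature, then a
    second, spatially varying modulation refines the result."""
    globally = [f * global_code for f in features]         # global structure
    return [g + r for g, r in zip(globally, refine_code)]  # local refinement
```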
  • Patent number: 11810326
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
    Type: Grant
    Filed: July 28, 2021
    Date of Patent: November 7, 2023
    Assignee: Adobe Inc.
    Inventors: Jonathan Eisenmann, Wenqi Xian, Matthew Fisher, Geoffrey Oxholm, Elya Shechtman
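The geometric-model half of this abstract rests on a standard construction: vanishing lines from the edge map intersect at vanishing points, which constrain the camera parameters. The line-intersection step is sketched below in homogeneous form; the `(a, b, c)` line representation is a conventional choice, not something specified in the patent:

```python
def vanishing_point(l1, l2):
    """Intersect two vanishing lines, each given as (a, b, c) for the line
    a*x + b*y + c = 0, to estimate a vanishing point."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel lines: no finite vanishing point
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return (x, y)
```

In practice the network would output many vanishing-line candidates and the geometric model would fit the point robustly (e.g., by least squares or RANSAC) rather than from a single pair.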
  • Publication number: 20230342893
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for combining digital images. In particular, in one or more embodiments, the disclosed systems combine latent codes of a source digital image and a target digital image utilizing a blending network to determine a combined latent encoding and generate a combined digital image from the combined latent encoding utilizing a generative neural network. In some embodiments, the disclosed systems determine an intersection face mask between the source digital image and the combined digital image utilizing a face segmentation network and combine the source digital image and the combined digital image utilizing the intersection face mask to generate a blended digital image.
    Type: Application
    Filed: April 21, 2022
    Publication date: October 26, 2023
    Inventors: Tobias Hinz, Shabnam Ghadar, Richard Zhang, Ratheesh Kalarot, Jingwan Lu, Elya Shechtman
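The two combination steps in this abstract can be sketched independently: a latent-code blend (here plain linear interpolation as a stand-in for the blending network) and a mask-guided pixel composite. Both simplifications are assumptions for illustration:

```python
def blend_latents(src, tgt, alpha=0.5):
    """Toy stand-in for the blending network: interpolate latent codes."""
    return [(1 - alpha) * s + alpha * t for s, t in zip(src, tgt)]

def composite_with_mask(src_img, gen_img, face_mask):
    """Keep generated pixels inside the face mask, source pixels elsewhere."""
    return [g if m else s for s, g, m in zip(src_img, gen_img, face_mask)]
```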
  • Publication number: 20230342884
    Abstract: An image inpainting system is described that receives an input image that includes a masked region. From the input image, the image inpainting system generates a synthesized image that depicts an object in the masked region by selecting a first code that represents a known factor characterizing a visual appearance of the object and a second code that represents an unknown factor characterizing the visual appearance of the object apart from the known factor in latent space. The input image, the first code, and the second code are provided as input to a generative adversarial network that is trained to generate the synthesized image using contrastive losses. Different synthesized images are generated from the same input image using different combinations of first and second codes, and the synthesized images are output for display.
    Type: Application
    Filed: April 21, 2022
    Publication date: October 26, 2023
    Applicant: Adobe Inc.
    Inventors: Krishna Kumar Singh, Yuheng Li, Yijun Li, Jingwan Lu, Elya Shechtman
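The variant-generation scheme this abstract describes (different combinations of a known-factor code and an unknown-factor code, each fed through the generator with the same input image) can be sketched as a product over code lists. The scalar "images" and the arithmetic stand-in generator are illustrative assumptions:

```python
def synthesize_variants(input_image, known_codes, unknown_codes, generator):
    """Toy sketch: pair each known-factor code with each unknown-factor
    code and run every pair through a stand-in generator, yielding a
    distinct synthesized image per combination."""
    return [generator(input_image, k, u)
            for k in known_codes for u in unknown_codes]
```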
  • Publication number: 20230316475
    Abstract: An item recommendation system receives a set of recommendable items and a request to select, from the set of recommendable items, a contrast group. The item recommendation system selects a contrast group from the set of recommendable items by applying an image modification model to the set of recommendable items. The image modification model includes an item selection model configured to determine an unbiased conversion rate for each item of the set of recommendable items and select a recommended item from the set of recommendable items having a greatest unbiased conversion rate. The image modification model includes a contrast group selection model configured to select, for the recommended item, a contrast group comprising the recommended item and one or more contrast items. The item recommendation system transmits the contrast group responsive to the request.
    Type: Application
    Filed: March 30, 2022
    Publication date: October 5, 2023
    Inventors: Cameron Smith, Wei-An Lin, Timothy M. Converse, Shabnam Ghadar, Ratheesh Kalarot, John Nack, Jingwan Lu, Hui Qu, Elya Shechtman, Baldo Faieta
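The two-model selection this abstract describes can be sketched as an argmax over unbiased conversion rates followed by a contrast pick. The single `feature` scalar and the "most distant wins" contrast rule are illustrative assumptions standing in for the contrast group selection model:

```python
def select_contrast_group(items, n_contrast=2):
    """Toy sketch: pick the item with the greatest unbiased conversion
    rate, then add the items most dissimilar to it as the contrast set."""
    best = max(items, key=lambda it: it["unbiased_cvr"])
    others = [it for it in items if it is not best]
    others.sort(key=lambda it: abs(it["feature"] - best["feature"]),
                reverse=True)
    return [best] + others[:n_contrast]
```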
  • Publication number: 20230316606
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for latent-based editing of digital images using a generative neural network. In particular, in one or more embodiments, the disclosed systems perform latent-based editing of a digital image by mapping a feature tensor and a set of style vectors for the digital image into a joint feature style space. In one or more implementations, the disclosed systems apply a joint feature style perturbation and/or modification vectors within the joint feature style space to determine modified style vectors and a modified feature tensor. Moreover, in one or more embodiments the disclosed systems generate a modified digital image utilizing a generative neural network from the modified style vectors and the modified feature tensor.
    Type: Application
    Filed: March 21, 2022
    Publication date: October 5, 2023
    Inventors: Hui Qu, Baldo Faieta, Cameron Smith, Elya Shechtman, Jingwan Lu, Ratheesh Kalarot, Richard Zhang, Saeid Motiian, Shabnam Ghadar, Wei-An Lin
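The joint-space edit this abstract describes (map the feature tensor and style vectors into one joint space, apply a modification vector, split back out) can be sketched on flat lists. The concatenate/split mapping is an illustrative assumption standing in for the learned mapping into the joint feature style space:

```python
def edit_in_joint_space(feature_tensor, style_vectors, edit_vector,
                        strength=1.0):
    """Toy sketch: concatenate the feature tensor and style vectors into a
    joint vector, apply an edit direction, then split back into parts."""
    joint = list(feature_tensor) + list(style_vectors)
    edited = [j + strength * e for j, e in zip(joint, edit_vector)]
    k = len(feature_tensor)
    return edited[:k], edited[k:]  # (modified tensor, modified styles)
```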
  • Publication number: 20230316474
    Abstract: Methods, systems, and non-transitory computer readable media are disclosed for intelligently enhancing details in edited images. The disclosed system iteratively updates residual detail latent code for segments in edited images where detail has been lost through the editing process. More particularly, the disclosed system enhances an edited segment in an edited image based on details in a detailed segment of an image. Additionally, the disclosed system may utilize a detail neural network encoder to project the detailed segment and a corresponding segment of the edited image into a residual detail latent code. In some embodiments, the disclosed system generates a refined edited image based on the residual detail latent code and a latent vector of the edited image.
    Type: Application
    Filed: April 1, 2022
    Publication date: October 5, 2023
    Inventors: Hui Qu, Jingwan Lu, Saeid Motiian, Shabnam Ghadar, Wei-An Lin, Elya Shechtman
  • Patent number: 11776188
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for generating an animation of a talking head from an input audio signal of speech and a representation (such as a static image) of a head to animate. Generally, a neural network can learn to predict a set of 3D facial landmarks that can be used to drive the animation. In some embodiments, the neural network can learn to detect different speaking styles in the input speech and account for the different speaking styles when predicting the 3D facial landmarks. Generally, template 3D facial landmarks can be identified or extracted from the input image or other representation of the head, and the template 3D facial landmarks can be used with successive windows of audio from the input speech to predict 3D facial landmarks and generate a corresponding animation with plausible 3D effects.
    Type: Grant
    Filed: August 15, 2022
    Date of Patent: October 3, 2023
    Assignee: Adobe Inc.
    Inventors: Dingzeyu Li, Yang Zhou, Jose Ignacio Echevarria Vallespi, Elya Shechtman
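The windowing scheme this abstract describes (successive windows of audio drive per-frame predictions relative to the template landmarks) can be sketched with a sliding window. The window/hop sizes and the energy-based stand-in for the landmark predictor are illustrative assumptions:

```python
def landmark_sequence(audio, template, window=4, hop=2):
    """Toy sketch: slide a window over audio samples and predict per-frame
    landmark offsets relative to the template 3D facial landmarks."""
    frames = []
    for start in range(0, len(audio) - window + 1, hop):
        chunk = audio[start:start + window]
        energy = sum(abs(a) for a in chunk) / window  # network stand-in
        # Offset each template landmark by a value driven by this window.
        frames.append([(x + energy, y) for x, y in template])
    return frames
```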
  • Patent number: 11769227
    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate synthesized digital images via multi-resolution generator neural networks. The disclosed system extracts multi-resolution features from a scene representation to condition a spatial feature tensor and a latent code to modulate an output of a generator neural network. For example, the disclosed system utilizes a base encoder of the generator neural network to generate a feature set from a semantic label map of a scene. The disclosed system then utilizes a bottom-up encoder to extract multi-resolution features and generate a latent code from the feature set. Furthermore, the disclosed system determines a spatial feature tensor by utilizing a top-down encoder to up-sample and aggregate the multi-resolution features. The disclosed system then utilizes a decoder to generate a synthesized digital image based on the spatial feature tensor and the latent code.
    Type: Grant
    Filed: August 12, 2021
    Date of Patent: September 26, 2023
    Assignee: Adobe Inc.
    Inventors: Yuheng Li, Yijun Li, Jingwan Lu, Elya Shechtman, Krishna Kumar Singh
  • Publication number: 20230298148
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that implement a dual-branched neural network architecture to harmonize composite images. For example, in one or more implementations, the transformer-based harmonization system uses a convolutional branch and a transformer branch to generate a harmonized composite image based on an input composite image and a corresponding segmentation mask. More particularly, the convolutional branch comprises a series of convolutional neural network layers followed by a style normalization layer to extract localized information from the input composite image. Further, the transformer branch comprises a series of transformer neural network layers to extract global information based on different resolutions of the input composite image.
    Type: Application
    Filed: March 21, 2022
    Publication date: September 21, 2023
    Inventors: He Zhang, Jianming Zhang, Jose Ignacio Echevarria Vallespi, Kalyan Sunkavalli, Meredith Payne Stotzner, Yinglan Ma, Zhe Lin, Elya Shechtman, Frederick Mandia
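The dual-branch fusion this abstract describes can be sketched as two stand-in branches whose outputs are merged inside the pasted (masked) region. Averaging the two branches, and the 1-D pixel lists, are illustrative assumptions; the patent fuses convolutional (local) and transformer (global) features inside the network rather than at the pixel level:

```python
def harmonize(composite, mask, conv_branch, transformer_branch):
    """Toy dual-branch fusion: a local branch and a global branch each
    process the composite; their outputs are merged inside the mask."""
    local = conv_branch(composite)          # localized information
    global_ = transformer_branch(composite)  # global information
    return [0.5 * (l + g) if m else c
            for c, l, g, m in zip(composite, local, global_, mask)]
```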
  • Patent number: 11762951
    Abstract: Embodiments are disclosed for generative image congealing, which provides an unsupervised learning technique that learns transformations of real data to improve the image quality of GANs trained using that image data. In particular, in one or more embodiments, the disclosed systems and methods comprise generating, by a spatial transformer network, an aligned real image for a real image from an unaligned real dataset, providing, by the spatial transformer network, the aligned real image to an adversarial discrimination network to determine if the aligned real image resembles aligned synthetic images generated by a generator network, and training, by a training manager, the spatial transformer network to learn updated transformations based on the determination of the adversarial discrimination network.
    Type: Grant
    Filed: November 18, 2020
    Date of Patent: September 19, 2023
    Assignee: Adobe Inc.
    Inventors: Elya Shechtman, William Peebles, Richard Zhang, Jun-Yan Zhu, Alyosha Efros