Patents by Inventor Elya Shechtman

Elya Shechtman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11893763
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a modified digital image from extracted spatial and global codes. For example, the disclosed systems can utilize a global and spatial autoencoder to extract spatial codes and global codes from digital images. The disclosed systems can further utilize the global and spatial autoencoder to generate a modified digital image by combining extracted spatial and global codes in various ways for various applications such as style swapping, style blending, and attribute editing.
    Type: Grant
    Filed: November 22, 2022
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Taesung Park, Richard Zhang, Oliver Wang, Junyan Zhu, Jingwan Lu, Elya Shechtman, Alexei A Efros
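
A minimal PyTorch sketch of the swapping idea in the abstract above: an encoder yields a spatial code (a feature map) and a global code (a vector), and a decoder recombines any pairing of the two, so taking the spatial code of one image and the global code of another approximates a style swap. Every module, size, and the modulation form here is an illustrative assumption, not the patented architecture.

```python
import torch
import torch.nn as nn

class GlobalSpatialEncoder(nn.Module):
    def __init__(self, ch=32, global_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.to_spatial = nn.Conv2d(ch, ch, 1)      # structure code: H/4 x W/4 map
        self.to_global = nn.Linear(ch, global_dim)  # style code: one vector

    def forward(self, x):
        h = self.backbone(x)
        spatial = self.to_spatial(h)
        global_code = self.to_global(h.mean(dim=(2, 3)))  # pool away spatial dims
        return spatial, global_code

class Decoder(nn.Module):
    def __init__(self, ch=32, global_dim=64):
        super().__init__()
        self.film = nn.Linear(global_dim, ch * 2)   # global code modulates features
        self.up = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1),
        )

    def forward(self, spatial, global_code):
        scale, shift = self.film(global_code).chunk(2, dim=1)
        h = spatial * scale[:, :, None, None] + shift[:, :, None, None]
        return self.up(h)

enc, dec = GlobalSpatialEncoder(), Decoder()
a, b = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
spatial_a, _ = enc(a)
_, global_b = enc(b)
swapped = dec(spatial_a, global_b)  # structure of `a`, style of `b`
print(swapped.shape)                # torch.Size([1, 3, 64, 64])
```

Blending rather than swapping would interpolate the two global codes before decoding.
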
  • Publication number: 20240037717
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating neural network based perceptual artifact segmentations in synthetic digital image content. The disclosed system utilizes neural networks to detect perceptual artifacts in digital images in connection with generating or modifying digital images. The disclosed system determines a digital image including one or more synthetically modified portions. The disclosed system utilizes an artifact segmentation machine-learning model to detect perceptual artifacts in the synthetically modified portion(s). The artifact segmentation machine-learning model is trained to detect perceptual artifacts based on labeled artifact regions of synthetic training digital images. Additionally, the disclosed system utilizes the artifact segmentation machine-learning model in an iterative inpainting process, using one or more digital image inpainting models to inpaint regions of a digital image.
    Type: Application
    Filed: July 27, 2022
    Publication date: February 1, 2024
    Inventors: Sohrab Amirghodsi, Lingzhi Zhang, Zhe Lin, Elya Shechtman, Yuqian Zhou, Connelly Barnes
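
A minimal sketch of the iterative inpainting loop this abstract describes: inpaint the hole, run an artifact segmenter on the result, and re-inpaint whatever it flags until the flagged area is small. Both `inpaint_model` and `artifact_segmenter` are hypothetical stand-ins for the trained models.

```python
import numpy as np

def inpaint_model(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Stand-in inpainter: fill masked pixels with the mean of the rest."""
    out = image.copy()
    out[mask] = image[~mask].mean()
    return out

def artifact_segmenter(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Stand-in artifact detector: flag filled pixels far from the mean."""
    return mask & (np.abs(image - image[~mask].mean()) > 0.5)

def iterative_inpaint(image, hole_mask, max_rounds=3, min_area=10):
    result, region = image, hole_mask
    for _ in range(max_rounds):
        result = inpaint_model(result, region)
        region = artifact_segmenter(result, region)  # perceptual artifacts
        if region.sum() < min_area:                  # clean enough: stop
            break
    return result

rng = np.random.default_rng(0)
img = rng.random((64, 64))
hole = np.zeros((64, 64), dtype=bool)
hole[20:40, 20:40] = True
print(iterative_inpaint(img, hole).shape)  # (64, 64)
```
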
  • Publication number: 20240037922
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for adapting generative neural networks to target domains utilizing an image translation neural network. In particular, in one or more embodiments, the disclosed systems utilize an image translation neural network to translate target results to a source domain for input in target neural network adaptation. For instance, in some embodiments, the disclosed systems compare a translated target result with a source result from a pretrained source generative neural network to adjust parameters of a target generative neural network to produce results corresponding in features to source results and corresponding in style to the target domain.
    Type: Application
    Filed: July 27, 2022
    Publication date: February 1, 2024
    Inventors: Yijun Li, Nicholas Kolkin, Jingwan Lu, Elya Shechtman
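
A minimal sketch of the adaptation signal described in this abstract: translate the target generator's output back to the source domain, then penalize its distance from a frozen source generator's output for the same latent. All three networks here are linear stand-ins chosen for brevity, not the disclosed models.

```python
import torch
import torch.nn as nn

latent_dim = 16
source_G = nn.Linear(latent_dim, 3 * 8 * 8)  # frozen, pretrained source generator
target_G = nn.Linear(latent_dim, 3 * 8 * 8)  # generator being adapted
to_source = nn.Linear(3 * 8 * 8, 3 * 8 * 8)  # image translation network stand-in
for p in source_G.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(target_G.parameters(), lr=1e-3)
for step in range(10):
    z = torch.randn(4, latent_dim)
    translated = to_source(target_G(z))      # target result, source style
    with torch.no_grad():
        source_out = source_G(z)
    loss = nn.functional.mse_loss(translated, source_out)  # feature correspondence
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```

The style of the target domain enters through whatever loss trains `target_G` on target data; the translated comparison above only anchors the features to the source.
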
  • Patent number: 11887216
    Abstract: The present disclosure describes systems and methods for image processing. Embodiments of the present disclosure include an image processing apparatus configured to generate modified images (e.g., synthetic faces) by conditionally changing attributes or landmarks of an input image. A machine learning model of the image processing apparatus encodes the input image to obtain a joint conditional vector that represents attributes and landmarks of the input image in a vector space. The joint conditional vector is then modified, according to the techniques described herein, to form a latent vector used to generate a modified image. In some cases, the machine learning model is trained using a generative adversarial network (GAN) with a normalization technique, followed by joint training of a landmark embedding and attribute embedding (e.g., to reduce inference time).
    Type: Grant
    Filed: November 19, 2021
    Date of Patent: January 30, 2024
    Assignee: ADOBE, INC.
    Inventors: Ratheesh Kalarot, Timothy M. Converse, Shabnam Ghadar, John Thomas Nack, Jingwan Lu, Elya Shechtman, Baldo Faieta, Akhilesh Kumar
  • Publication number: 20240028871
    Abstract: Embodiments are disclosed for performing wire segmentation of images using machine learning. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input image; generating, by a first trained neural network model, a global probability map representation of the input image indicating, for each pixel, the probability that it includes a representation of wires; and identifying regions of the input image indicated as including the representation of wires. The disclosed systems and methods further comprise, for each region from the identified regions, concatenating the region and information from the global probability map to create a concatenated input, and generating, by a second trained neural network model, a local probability map representation of the region based on the concatenated input, indicating which pixels of the region include representations of wires. The disclosed systems and methods further comprise aggregating the local probability maps for each region.
    Type: Application
    Filed: July 21, 2022
    Publication date: January 25, 2024
    Applicant: Adobe Inc.
    Inventors: Mang Tik CHIU, Connelly BARNES, Zijun WEI, Zhe LIN, Yuqian ZHOU, Xuaner ZHANG, Sohrab AMIRGHODSI, Florian KAINZ, Elya SHECHTMAN
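
A minimal sketch of the coarse-to-fine scheme in this abstract: a global model produces a whole-image wire-probability map, regions the map flags are cropped, each crop is concatenated with its slice of the global map, and a local model refines it before the local maps are aggregated. Both models and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

global_model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(8, 1, 1), nn.Sigmoid())
local_model = nn.Sequential(nn.Conv2d(4, 8, 3, padding=1), nn.ReLU(),
                            nn.Conv2d(8, 1, 1), nn.Sigmoid())  # RGB + global prob

image = torch.rand(1, 3, 128, 128)
tile = 64
with torch.no_grad():
    global_prob = global_model(image)               # whole-image wire probability
    local_probs = torch.zeros_like(global_prob)
    for y in range(0, 128, tile):
        for x in range(0, 128, tile):
            prob = global_prob[:, :, y:y+tile, x:x+tile]
            if prob.max() < 0.5:                    # region shows no wires: skip
                continue
            region = image[:, :, y:y+tile, x:x+tile]
            concat = torch.cat([region, prob], dim=1)  # concatenated input
            local_probs[:, :, y:y+tile, x:x+tile] = local_model(concat)
print(local_probs.shape)  # aggregated local maps, torch.Size([1, 1, 128, 128])
```
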
  • Patent number: 11880957
    Abstract: One example method involves operations for receiving a request to transform an input image into a target image. Operations further include providing the input image to a machine learning model trained to adapt images. Training the machine learning model includes accessing training data having a source domain of images and a target domain of images with a target style. Training further includes using a pre-trained generative model to generate an adapted source domain of adapted images having the target style. The adapted source domain is generated by determining a rate of change for parameters of the target style, generating weighted parameters by applying a weight to each of the parameters based on their respective rate of change, and applying the weighted parameters to the source domain. Additionally, operations include using the machine learning model to generate the target image by modifying parameters of the input image using the target style.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: January 23, 2024
    Assignee: Adobe Inc.
    Inventors: Yijun Li, Richard Zhang, Jingwan Lu, Elya Shechtman
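
A minimal numpy sketch of the weighting idea in this abstract: measure how fast each parameter moves when a pretrained generator is briefly tuned toward the target style, then weight each parameter's update by that rate of change, so slow-moving (content-bearing) parameters stay near the source. The four-parameter vectors are purely illustrative.

```python
import numpy as np

source_params = np.array([0.9, -0.2, 0.5, 1.3])   # pretrained generator parameters
adapted_params = np.array([0.8, -1.1, 0.6, 2.4])  # after brief tuning on the target style

rate_of_change = np.abs(adapted_params - source_params)
weights = rate_of_change / rate_of_change.max()   # 1.0 = most style-relevant parameter
weighted_params = source_params + weights * (adapted_params - source_params)
print(weighted_params)  # content parameters barely move; style parameters follow the target
```
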
  • Patent number: 11880766
    Abstract: An improved system architecture uses a pipeline including a Generative Adversarial Network (GAN), composed of a generator neural network and a discriminator neural network, to generate an image. An input image in a first domain and information about a target domain are obtained. The domains correspond to image styles. An initial latent space representation of the input image is produced by encoding the input image. An initial output image is generated by processing the initial latent space representation with the generator neural network. Using the discriminator neural network, a score is computed indicating whether the initial output image is in the target domain. A loss is computed based on the computed score. The loss is minimized to compute an updated latent space representation. The updated latent space representation is processed with the generator neural network to generate an output image in the target domain.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: January 23, 2024
    Assignee: Adobe Inc.
    Inventors: Cameron Smith, Ratheesh Kalarot, Wei-An Lin, Richard Zhang, Niloy Mitra, Elya Shechtman, Shabnam Ghadar, Zhixin Shu, Yannick Hold-Geoffroy, Nathan Carr, Jingwan Lu, Oliver Wang, Jun-Yan Zhu
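
A minimal sketch of the latent-optimization loop this abstract describes: starting from an initial latent (in practice produced by the encoder), repeatedly generate, score the result with the target-domain discriminator, and descend on the latent itself. `G` and `D` are untrained linear stand-ins; a real pipeline would use the trained GAN.

```python
import torch
import torch.nn as nn

G = nn.Linear(8, 3 * 16 * 16)                   # generator stand-in
D = nn.Linear(3 * 16 * 16, 1)                   # target-domain discriminator stand-in

latent = torch.randn(1, 8, requires_grad=True)  # initial latent space representation
opt = torch.optim.Adam([latent], lr=0.05)

for step in range(50):
    opt.zero_grad()
    score = D(G(latent))                          # high score = looks in-domain
    loss = nn.functional.softplus(-score).mean()  # non-saturating GAN loss
    loss.backward()                               # gradients flow to the latent only
    opt.step()

output_image = G(latent)                        # decoded from the updated latent
print(output_image.shape)                       # torch.Size([1, 768])
```
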
  • Patent number: 11875221
    Abstract: Systems and methods generate a filtering function for editing an image with reduced attribute correlation. An image editing system groups training data into bins according to a distribution of a target attribute. For each bin, the system samples a subset of the training data based on a pre-determined target distribution of a set of additional attributes in the training data. The system identifies a direction in the sampled training data corresponding to the distribution of the target attribute to generate a filtering vector for modifying the target attribute in an input image. The system then obtains a latent space representation of an input image, applies the filtering vector to the latent space representation to generate a filtered latent space representation, and provides the filtered latent space representation as input to a neural network to generate an output image with a modification to the target attribute.
    Type: Grant
    Filed: September 7, 2021
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Wei-An Lin, Baldo Faieta, Cameron Smith, Elya Shechtman, Jingwan Lu, Jun-Yan Zhu, Niloy Mitra, Ratheesh Kalarot, Richard Zhang, Shabnam Ghadar, Zhixin Shu
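
A minimal numpy sketch of the filtering-vector construction described in this abstract: bin latents by a target attribute, resample each bin so a correlated additional attribute matches a fixed 50/50 target distribution, fit a direction for the target attribute in the resampled data, and add that direction to a latent. The synthetic attributes and the least-squares direction fit are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim = 2000, 16
latents = rng.normal(size=(n, dim))
target_attr = latents[:, 0] + 0.1 * rng.normal(size=n)              # e.g. "smile"
other_attr = (0.5 * latents[:, 0] + latents[:, 1] > 0).astype(int)  # correlated attribute

# 1) Bin by the target attribute; 2) resample each bin so the additional
# attribute matches a fixed 50/50 target distribution.
edges = np.quantile(target_attr, [0.25, 0.5, 0.75])
bins = np.digitize(target_attr, edges)
keep = []
for b in range(4):
    idx = np.flatnonzero(bins == b)
    for v in (0, 1):
        group = idx[other_attr[idx] == v]
        if len(group):
            keep.extend(rng.choice(group, size=min(50, len(group)), replace=False))
keep = np.asarray(keep)

# 3) Fit a direction for the target attribute in the resampled data.
w, *_ = np.linalg.lstsq(latents[keep], target_attr[keep], rcond=None)
filtering_vector = w / np.linalg.norm(w)

edited = latents[0] + 2.0 * filtering_vector  # filtered latent representation
print(edited.shape)  # feed to the generator to obtain the edited image
```
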
  • Patent number: 11869173
    Abstract: Various disclosed embodiments are directed to inpainting one or more portions of a target image based on merging (or selecting) one or more portions of a warped image with (or from) one or more portions of an inpainting candidate (e.g., via a learning model). This, among other functionality described herein, resolves the inaccuracies of existing image inpainting technologies.
    Type: Grant
    Filed: December 27, 2022
    Date of Patent: January 9, 2024
    Assignee: Adobe Inc.
    Inventors: Yuqian Zhou, Elya Shechtman, Connelly Stuart Barnes, Sohrab Amirghodsi
  • Patent number: 11861762
    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate synthesized digital images using class-specific generators for objects of different classes. The disclosed system modifies a synthesized digital image by utilizing a plurality of class-specific generator neural networks to generate a plurality of synthesized objects according to object classes identified in the synthesized digital image. The disclosed system determines object classes in the synthesized digital image, for example via a semantic label map corresponding to the synthesized digital image. The disclosed system selects class-specific generator neural networks corresponding to the classes of objects in the synthesized digital image. The disclosed system also generates a plurality of synthesized objects utilizing the class-specific generator neural networks based on contextual data associated with the identified objects.
    Type: Grant
    Filed: August 12, 2021
    Date of Patent: January 2, 2024
    Assignee: Adobe Inc.
    Inventors: Yuheng Li, Yijun Li, Jingwan Lu, Elya Shechtman, Krishna Kumar Singh
  • Patent number: 11854244
    Abstract: A panoptic labeling system includes a modified panoptic labeling neural network (“modified PLNN”) that is trained to generate labels for pixels in an input image. The panoptic labeling system generates modified training images by combining training images with mask instances from annotated images. The modified PLNN determines a set of labels representing categories of objects depicted in the modified training images. The modified PLNN also determines a subset of the labels representing categories of objects depicted in the input image. For each mask pixel in a modified training image, the modified PLNN calculates a probability indicating whether the mask pixel has the same label as an object pixel. The modified PLNN generates a mask label for each mask pixel, based on the probability. The panoptic labeling system provides the mask label to, for example, a digital graphics editing system that uses the labels to complete an infill operation.
    Type: Grant
    Filed: October 20, 2022
    Date of Patent: December 26, 2023
    Assignee: ADOBE INC.
    Inventors: Sohrab Amirghodsi, Zhe Lin, Yilin Wang, Tianshu Yu, Connelly Barnes, Elya Shechtman
  • Publication number: 20230385992
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that implement an inpainting framework having computer-implemented machine learning models to generate high-resolution inpainting results. For instance, in one or more embodiments, the disclosed systems generate an inpainted digital image utilizing a deep inpainting neural network from a digital image having a replacement region. The disclosed systems further generate, utilizing a visual guide algorithm, at least one deep visual guide from the inpainted digital image. Using a patch match model and the at least one deep visual guide, the disclosed systems generate a plurality of modified digital images from the digital image by replacing the region of pixels of the digital image with replacement pixels. Additionally, the disclosed systems select, utilizing an inpainting curation model, a modified digital image from the plurality of modified digital images to provide to a client device.
    Type: Application
    Filed: May 25, 2022
    Publication date: November 30, 2023
    Inventors: Connelly Barnes, Elya Shechtman, Sohrab Amirghodsi, Zhe Lin
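
A minimal sketch of the candidate-and-curation flow in this abstract: a deep inpainter produces a rough fill, a visual guide is derived from it, several guided patch-based candidates are generated, and a curation score selects one. Every function here is a hypothetical stand-in for the corresponding trained component.

```python
import numpy as np

def deep_inpaint(image, mask):
    """Stand-in deep inpainter: fill the hole with the unmasked mean."""
    out = image.copy()
    out[mask] = image[~mask].mean()
    return out

def visual_guide(inpainted):
    """Stand-in deep visual guide: a coarse two-level structure map."""
    return inpainted > inpainted.mean()

def guided_fill(image, mask, guide, seed):
    """Stand-in patch-match step: copy each hole pixel from a donor pixel
    outside the hole whose guide label matches."""
    rng = np.random.default_rng(seed)
    out = image.ravel().copy()
    g, m = guide.ravel(), mask.ravel()
    for idx in np.flatnonzero(m):
        donors = np.flatnonzero(~m & (g == g[idx]))
        out[idx] = image.ravel()[rng.choice(donors)]
    return out.reshape(image.shape)

def curation_score(candidate, mask):
    """Stand-in curation model: prefer hole statistics that match outside."""
    return -abs(candidate[mask].mean() - candidate[~mask].mean())

rng = np.random.default_rng(1)
image = rng.random((32, 32))
mask = np.zeros((32, 32), dtype=bool)
mask[8:16, 8:16] = True

guide = visual_guide(deep_inpaint(image, mask))
candidates = [guided_fill(image, mask, guide, seed=s) for s in range(4)]
best = max(candidates, key=lambda c: curation_score(c, mask))
print(curation_score(best, mask))
```
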
  • Patent number: 11823357
    Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference video frame to other video frames depicting a scene. One example method includes one or more processing devices that perform operations that include accessing a scene depicting a reference object that includes an annotation identifying a target region to be modified in one or more video frames. The operations also include computing a target motion of a target pixel that is subject to a motion constraint. The motion constraint is based on a three-dimensional model of the reference object. Further, operations include determining color data of the target pixel to correspond to the target motion. The color data includes a color value and a gradient. Operations also include determining gradient constraints using gradient values of neighbor pixels. Additionally, the processing devices update the color data of the target pixel subject to the gradient constraints.
    Type: Grant
    Filed: March 9, 2021
    Date of Patent: November 21, 2023
    Assignee: Adobe Inc.
    Inventors: Oliver Wang, John Nelson, Geoffrey Oxholm, Elya Shechtman
  • Publication number: 20230368339
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate inpainted digital images utilizing a class-specific cascaded modulation inpainting neural network. For example, the disclosed systems utilize a class-specific cascaded modulation inpainting neural network that includes cascaded modulation decoder layers to generate replacement pixels portraying a particular target object class. To illustrate, in response to user selection of a replacement region and target object class, the disclosed systems utilize a class-specific cascaded modulation inpainting neural network corresponding to the target object class to generate an inpainted digital image that portrays an instance of the target object class within the replacement region.
    Type: Application
    Filed: May 13, 2022
    Publication date: November 16, 2023
    Inventors: Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Elya Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi
  • Publication number: 20230360180
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate inpainted digital images utilizing a cascaded modulation inpainting neural network that includes cascaded modulation decoder layers. For example, in one or more decoder layers, the disclosed systems start with a global code modulation that captures global-range image structures, followed by an additional modulation that refines the global predictions. Accordingly, in one or more implementations, the image inpainting system provides a mechanism to correct distorted local details. Furthermore, in one or more implementations, the image inpainting system leverages fast Fourier convolution blocks within different resolution layers of the encoder architecture to expand the receptive field of the encoder and to allow the network encoder to better capture global structure.
    Type: Application
    Filed: May 4, 2022
    Publication date: November 9, 2023
    Inventors: Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Elya Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi
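
A minimal PyTorch sketch of one cascaded-modulation decoder layer as this abstract describes it: a global modulation driven by a global code sets coarse structure, and a second, spatially varying modulation refines the prediction. The exact modulation form and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CascadedModulationLayer(nn.Module):
    def __init__(self, ch=32, code_dim=64):
        super().__init__()
        self.global_affine = nn.Linear(code_dim, ch * 2)   # global modulation
        self.refine = nn.Conv2d(ch, ch * 2, 3, padding=1)  # spatial modulation
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, feat, global_code):
        # Stage 1: the global code modulates every location identically.
        scale, shift = self.global_affine(global_code).chunk(2, dim=1)
        g = feat * scale[:, :, None, None] + shift[:, :, None, None]
        # Stage 2: a spatially varying modulation refines the global prediction.
        s_scale, s_shift = self.refine(g).chunk(2, dim=1)
        return self.conv(g) * torch.sigmoid(s_scale) + s_shift

layer = CascadedModulationLayer()
out = layer(torch.randn(1, 32, 16, 16), torch.randn(1, 64))
print(out.shape)  # torch.Size([1, 32, 16, 16])
```
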
  • Patent number: 11810326
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
    Type: Grant
    Filed: July 28, 2021
    Date of Patent: November 7, 2023
    Assignee: Adobe Inc.
    Inventors: Jonathan Eisenmann, Wenqi Xian, Matthew Fisher, Geoffrey Oxholm, Elya Shechtman
  • Publication number: 20230342893
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for combining digital images. In particular, in one or more embodiments, the disclosed systems combine latent codes of a source digital image and a target digital image utilizing a blending network to determine a combined latent encoding and generate a combined digital image from the combined latent encoding utilizing a generative neural network. In some embodiments, the disclosed systems determine an intersection face mask between the source digital image and the combined digital image utilizing a face segmentation network and combine the source digital image and the combined digital image utilizing the intersection face mask to generate a blended digital image.
    Type: Application
    Filed: April 21, 2022
    Publication date: October 26, 2023
    Inventors: Tobias Hinz, Shabnam Ghadar, Richard Zhang, Ratheesh Kalarot, Jingwan Lu, Elya Shechtman
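
A minimal sketch of the combine-then-composite flow described in this abstract: blend two latent codes, decode the blend, compute the intersection of the face masks of the source and combined images, and composite the combined face onto the source. The generator and segmenter are toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(latent):
    """Stand-in generative network: decode a latent to a tiny 8x8 image."""
    return np.outer(np.tanh(latent), np.tanh(latent))

def face_mask(image):
    """Stand-in face segmentation network: top half of the intensity range."""
    return image > np.quantile(image, 0.5)

source_latent = rng.normal(size=8)
target_latent = rng.normal(size=8)

combined_latent = 0.5 * source_latent + 0.5 * target_latent  # blending-network stand-in
combined_image = generator(combined_latent)
source_image = generator(source_latent)

# Intersection face mask, then composite: combined face over the source image.
mask = face_mask(source_image) & face_mask(combined_image)
blended = np.where(mask, combined_image, source_image)
print(blended.shape)  # (8, 8)
```
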
  • Publication number: 20230342884
    Abstract: An image inpainting system is described that receives an input image that includes a masked region. From the input image, the image inpainting system generates a synthesized image that depicts an object in the masked region by selecting a first code that represents a known factor characterizing a visual appearance of the object and a second code that represents an unknown factor characterizing the visual appearance of the object apart from the known factor in latent space. The input image, the first code, and the second code are provided as input to a generative adversarial network that is trained to generate the synthesized image using contrastive losses. Different synthesized images are generated from the same input image using different combinations of first and second codes, and the synthesized images are output for display.
    Type: Application
    Filed: April 21, 2022
    Publication date: October 26, 2023
    Applicant: Adobe Inc.
    Inventors: Krishna Kumar Singh, Yuheng Li, Yijun Li, Jingwan Lu, Elya Shechtman
  • Publication number: 20230316475
    Abstract: An item recommendation system receives a set of recommendable items and a request to select, from the set of recommendable items, a contrast group. The item recommendation system selects a contrast group from the set of recommendable items by applying an image modification model to the set of recommendable items. The image modification model includes an item selection model configured to determine an unbiased conversion rate for each item of the set of recommendable items and select a recommended item from the set of recommendable items having a greatest unbiased conversion rate. The image modification model includes a contrast group selection model configured to select, for the recommended item, a contrast group comprising the recommended item and one or more contrast items. The item recommendation system transmits the contrast group responsive to the request.
    Type: Application
    Filed: March 30, 2022
    Publication date: October 5, 2023
    Inventors: Cameron Smith, Wei-An Lin, Timothy M. Converse, Shabnam Ghadar, Ratheesh Kalarot, John Nack, Jingwan Lu, Hui Qu, Elya Shechtman, Baldo Faieta
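
A minimal sketch of the selection logic in this abstract: estimate an unbiased conversion rate per item, recommend the item with the greatest rate, and assemble a contrast group around it. The smoothed clicks-per-impression estimator below is an illustrative stand-in for the model's unbiased estimator.

```python
items = [
    {"id": "a", "clicks": 30, "impressions": 400},
    {"id": "b", "clicks": 12, "impressions": 90},
    {"id": "c", "clicks": 5, "impressions": 50},
]

def unbiased_conversion_rate(item, prior_clicks=1, prior_impressions=20):
    # Smoothed estimate; the prior keeps low-traffic items from dominating.
    return (item["clicks"] + prior_clicks) / (item["impressions"] + prior_impressions)

recommended = max(items, key=unbiased_conversion_rate)           # item selection model
contrast_items = [item for item in items if item is not recommended][:2]
contrast_group = [recommended, *contrast_items]                  # contrast group selection
print([item["id"] for item in contrast_group])                   # recommended item first
```
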
  • Publication number: 20230316606
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for latent-based editing of digital images using a generative neural network. In particular, in one or more embodiments, the disclosed systems perform latent-based editing of a digital image by mapping a feature tensor and a set of style vectors for the digital image into a joint feature style space. In one or more implementations, the disclosed systems apply a joint feature style perturbation and/or modification vectors within the joint feature style space to determine modified style vectors and a modified feature tensor. Moreover, in one or more embodiments, the disclosed systems generate a modified digital image utilizing a generative neural network from the modified style vectors and the modified feature tensor.
    Type: Application
    Filed: March 21, 2022
    Publication date: October 5, 2023
    Inventors: Hui Qu, Baldo Faieta, Cameron Smith, Elya Shechtman, Jingwan Lu, Ratheesh Kalarot, Richard Zhang, Saeid Motiian, Shabnam Ghadar, Wei-An Lin