Patents by Inventor Elya Shechtman

Elya Shechtman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240135610
    Abstract: Systems and methods for image generation are provided. An aspect of the systems and methods for image generation includes obtaining an original image depicting an element and a target prompt describing a modification to the element. The system may then compute a first output and a second output using a diffusion model. The first output is based on a description of the element and the second output is based on the target prompt. The system then computes a difference between the first output and the second output, and generates a modified image including the modification to the element of the original image based on the difference.
    Type: Application
    Filed: February 15, 2023
    Publication date: April 25, 2024
    Inventors: Nicholas Isaac Kolkin, Elya Shechtman
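The core idea in this abstract — steering generation with the difference between two conditioned diffusion outputs — can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented method: the `predict_noise` stub and the guidance form are hypothetical.

```python
# Minimal sketch of difference-based diffusion editing. `predict_noise` is a
# stand-in for a text-conditioned diffusion model's noise predictor.
import numpy as np

def predict_noise(x_t: np.ndarray, t: int, prompt: str) -> np.ndarray:
    """Stand-in for a text-conditioned diffusion model's noise prediction."""
    rng = np.random.default_rng(abs(hash((t, prompt))) % (2**32))
    return rng.standard_normal(x_t.shape).astype(np.float32)

def edit_step(x_t, t, source_desc, target_prompt, guidance=3.0):
    # First output: conditioned on a description of the existing element.
    eps_src = predict_noise(x_t, t, source_desc)
    # Second output: conditioned on the target prompt for the modification.
    eps_tgt = predict_noise(x_t, t, target_prompt)
    # The difference isolates the direction of the requested change and is
    # added on top of the source prediction to steer denoising.
    return eps_src + guidance * (eps_tgt - eps_src)

x = np.zeros((8, 8, 3), dtype=np.float32)
guided = edit_step(x, t=10, source_desc="a cat",
                   target_prompt="a cat wearing a hat")
```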
  • Publication number: 20240127411
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for panoptically guiding digital image inpainting utilizing a panoptic inpainting neural network. In some embodiments, the disclosed systems utilize a panoptic inpainting neural network to generate an inpainted digital image according to a panoptic segmentation map that defines pixel regions corresponding to different panoptic labels. In some cases, the disclosed systems train a neural network utilizing a semantic discriminator that facilitates generation of digital images that are realistic while also conforming to a semantic segmentation. The disclosed systems generate and provide a panoptic inpainting interface to facilitate user interaction for inpainting digital images. In certain embodiments, the disclosed systems iteratively update an inpainted digital image based on changes to a panoptic segmentation map.
    Type: Application
    Filed: October 3, 2022
    Publication date: April 18, 2024
    Inventors: Zhe Lin, Haitian Zheng, Elya Shechtman, Jianming Zhang, Jingwan Lu, Ning Xu, Qing Liu, Scott Cohen, Sohrab Amirghodsi
  • Publication number: 20240127410
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for panoptically guiding digital image inpainting utilizing a panoptic inpainting neural network. In some embodiments, the disclosed systems utilize a panoptic inpainting neural network to generate an inpainted digital image according to a panoptic segmentation map that defines pixel regions corresponding to different panoptic labels. In some cases, the disclosed systems train a neural network utilizing a semantic discriminator that facilitates generation of digital images that are realistic while also conforming to a semantic segmentation. The disclosed systems generate and provide a panoptic inpainting interface to facilitate user interaction for inpainting digital images. In certain embodiments, the disclosed systems iteratively update an inpainted digital image based on changes to a panoptic segmentation map.
    Type: Application
    Filed: October 3, 2022
    Publication date: April 18, 2024
    Inventors: Zhe Lin, Haitian Zheng, Elya Shechtman, Jianming Zhang, Jingwan Lu, Ning Xu, Qing Liu, Scott Cohen, Sohrab Amirghodsi
  • Publication number: 20240127412
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for panoptically guiding digital image inpainting utilizing a panoptic inpainting neural network. In some embodiments, the disclosed systems utilize a panoptic inpainting neural network to generate an inpainted digital image according to a panoptic segmentation map that defines pixel regions corresponding to different panoptic labels. In some cases, the disclosed systems train a neural network utilizing a semantic discriminator that facilitates generation of digital images that are realistic while also conforming to a semantic segmentation. The disclosed systems generate and provide a panoptic inpainting interface to facilitate user interaction for inpainting digital images. In certain embodiments, the disclosed systems iteratively update an inpainted digital image based on changes to a panoptic segmentation map.
    Type: Application
    Filed: October 3, 2022
    Publication date: April 18, 2024
    Inventors: Zhe Lin, Haitian Zheng, Elya Shechtman, Jianming Zhang, Jingwan Lu, Ning Xu, Qing Liu, Scott Cohen, Sohrab Amirghodsi
  • Publication number: 20240127452
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for panoptically guiding digital image inpainting utilizing a panoptic inpainting neural network. In some embodiments, the disclosed systems utilize a panoptic inpainting neural network to generate an inpainted digital image according to a panoptic segmentation map that defines pixel regions corresponding to different panoptic labels. In some cases, the disclosed systems train a neural network utilizing a semantic discriminator that facilitates generation of digital images that are realistic while also conforming to a semantic segmentation. The disclosed systems generate and provide a panoptic inpainting interface to facilitate user interaction for inpainting digital images. In certain embodiments, the disclosed systems iteratively update an inpainted digital image based on changes to a panoptic segmentation map.
    Type: Application
    Filed: October 3, 2022
    Publication date: April 18, 2024
    Inventors: Zhe Lin, Haitian Zheng, Elya Shechtman, Jianming Zhang, Jingwan Lu, Ning Xu, Qing Liu, Scott Cohen, Sohrab Amirghodsi
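The four related publications above share one abstract; a single PyTorch sketch of the described data flow follows. The channel counts, the one-hot label encoding, and the image-plus-segmentation pairing in the semantic discriminator are illustrative assumptions, not the disclosed architecture.

```python
# Sketch of panoptic-guided inpainting: a generator conditioned on a panoptic
# segmentation map, plus a semantic discriminator that judges image/map pairs.
import torch
import torch.nn as nn

NUM_PANOPTIC_LABELS = 16  # assumed label count

class PanopticInpainter(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: masked RGB (3) + hole mask (1) + one-hot panoptic map.
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1 + NUM_PANOPTIC_LABELS, 32, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, masked_rgb, mask, panoptic_onehot):
        x = torch.cat([masked_rgb, mask, panoptic_onehot], dim=1)
        filled = self.net(x)
        # Keep known pixels; only the hole region is replaced.
        return masked_rgb * (1 - mask) + filled * mask

class SemanticDiscriminator(nn.Module):
    """Judges realism of an image *paired with* its segmentation, pushing the
    generator to respect the panoptic layout, per the abstract."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + NUM_PANOPTIC_LABELS, 32, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, rgb, panoptic_onehot):
        return self.net(torch.cat([rgb, panoptic_onehot], dim=1))

rgb = torch.rand(1, 3, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.8).float()
labels = torch.eye(NUM_PANOPTIC_LABELS)[
    torch.zeros(1, 64, 64, dtype=torch.long)].permute(0, 3, 1, 2)
out = PanopticInpainter()(rgb * (1 - mask), mask, labels)
```

Iteratively updating the panoptic map and re-running the generator gives the interactive editing loop the abstract describes.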
  • Patent number: 11948281
    Abstract: Methods and systems are provided for accurately filling holes, regions, and/or portions of high-resolution images using guided upsampling during image inpainting. For instance, an image inpainting system can apply guided upsampling to an inpainted image result to enable generation of a high-resolution inpainting result from a lower-resolution image that has undergone inpainting. To allow for guided upsampling during image inpainting, one or more neural networks can be used, such as a low-resolution result neural network and a high-resolution input neural network, each comprising an encoder and a decoder. The image inpainting system can use such networks to generate a high-resolution inpainting image result that fills the hole, region, and/or portion of the image.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: April 2, 2024
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Yu Zeng, Jimei Yang, Jianming Zhang, Elya Shechtman
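A rough sketch of the guided-upsampling idea: the low-resolution inpainted result, upsampled, fills the hole of the high-resolution image while known high-resolution pixels are kept. The patent uses encoder/decoder networks for this step; plain bilinear interpolation here is a stand-in.

```python
# Guided upsampling sketch: fill the high-res hole from an upsampled
# low-res inpainting result, preserving known high-res content.
import torch
import torch.nn.functional as F

def guided_upsample(hr_image, hr_mask, lr_inpainted):
    """hr_image: (1,3,H,W); hr_mask: (1,1,H,W) with 1 inside the hole;
    lr_inpainted: (1,3,h,w) low-resolution inpainting result."""
    up = F.interpolate(lr_inpainted, size=hr_image.shape[-2:],
                       mode="bilinear", align_corners=False)
    # Known pixels come from the high-res input; the hole comes from the
    # upsampled low-res result. The patented system refines this with
    # learned encoder/decoder networks rather than plain interpolation.
    return hr_image * (1 - hr_mask) + up * hr_mask

hr = torch.rand(1, 3, 512, 512)
mask = (torch.rand(1, 1, 512, 512) > 0.9).float()
result = guided_upsample(hr, mask, torch.rand(1, 3, 128, 128))
```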
  • Patent number: 11930303
    Abstract: Systems and techniques for automatic digital parameter adjustment are described that leverage insights learned from an image set to automatically predict parameter values for an input item of digital visual content. To do so, the automatic digital parameter adjustment techniques described herein capture visual and contextual features of digital visual content to determine balanced visual output in a range of visual scenes and settings. The visual and contextual features of digital visual content are used to train a parameter adjustment model through machine learning techniques that capture feature patterns and interactions. The parameter adjustment model exploits these feature interactions to determine visually pleasing parameter values for an input item of digital visual content. The predicted parameter values are output, allowing further adjustment to the parameter values.
    Type: Grant
    Filed: November 15, 2021
    Date of Patent: March 12, 2024
    Assignee: Adobe Inc.
    Inventors: Pulkit Gera, Oliver Wang, Kalyan Krishna Sunkavalli, Elya Shechtman, Chetan Nanda
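A hedged sketch of the parameter-prediction setup: a learned feature extractor feeds a regression head that outputs one value per editing parameter. The backbone architecture and the `PARAMS` parameter set are hypothetical.

```python
# Sketch: regress editing-parameter values from image features.
import torch
import torch.nn as nn

PARAMS = ["exposure", "contrast", "saturation"]  # hypothetical parameter set

class ParameterAdjustmentModel(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        # Stand-in for a learned visual/contextual feature extractor.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim), nn.ReLU(),
        )
        self.head = nn.Linear(feat_dim, len(PARAMS))

    def forward(self, image):
        return self.head(self.backbone(image))  # predicted parameter values

preds = ParameterAdjustmentModel()(torch.rand(1, 3, 224, 224))
print(dict(zip(PARAMS, preds.squeeze(0).tolist())))  # user can adjust further
```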
  • Publication number: 20240070884
    Abstract: An image processing system uses a depth-conditioned autoencoder to generate a modified image from an input image such that the modified image maintains an overall structure from the input image while modifying textural features. An encoder of the depth-conditioned autoencoder extracts a structure latent code from an input image and depth information for the input image. A generator of the depth-conditioned autoencoder generates a modified image using the structure latent code and a texture latent code. The modified image generated by the depth-conditioned autoencoder includes the structural features from the input image while incorporating textural features of the texture latent code. In some aspects, the autoencoder is depth-conditioned during training by augmenting training images with depth information. The autoencoder is trained to preserve the depth information when generating images.
    Type: Application
    Filed: August 26, 2022
    Publication date: February 29, 2024
    Inventors: Taesung Park, Sylvain Paris, Richard Zhang, Elya Shechtman
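An interface-level sketch, under assumed layer sizes, of the encoder/generator split described above: the encoder sees RGB plus depth and emits a spatial structure code, and the generator combines that code with a global texture code.

```python
# Depth-conditioned autoencoder sketch: structure from image + depth,
# texture from a separate global code.
import torch
import torch.nn as nn

class DepthConditionedEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # RGB (3) concatenated with a depth map (1), per the abstract.
        self.conv = nn.Conv2d(4, 8, 3, padding=1)

    def forward(self, rgb, depth):
        return self.conv(torch.cat([rgb, depth], dim=1))  # structure code

class Generator(nn.Module):
    def __init__(self, texture_dim=16):
        super().__init__()
        self.conv = nn.Conv2d(8 + texture_dim, 3, 3, padding=1)

    def forward(self, structure_code, texture_code):
        # Broadcast the global texture vector over the spatial structure map.
        b, _, h, w = structure_code.shape
        tex = texture_code.view(b, -1, 1, 1).expand(b, texture_code.shape[1], h, w)
        return self.conv(torch.cat([structure_code, tex], dim=1))

enc, gen = DepthConditionedEncoder(), Generator()
rgb, depth = torch.rand(1, 3, 32, 32), torch.rand(1, 1, 32, 32)
out = gen(enc(rgb, depth), torch.rand(1, 16))  # same structure, new texture
```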
  • Patent number: 11915133
    Abstract: Systems and methods seamlessly blend edited and unedited regions of an image. A computing system crops an input image around a region to be edited. The system applies an affine transformation to rotate the cropped input image. The system provides the rotated cropped input image as input to a machine learning model to generate a latent space representation of the rotated cropped input image. The system edits the latent space representation and provides the edited latent space representation to a generator neural network to generate a generated edited image. The system applies an inverse affine transformation to rotate the generated edited image and aligns an identified segment of the rotated generated edited image with an identified corresponding segment of the input image to produce an aligned rotated generated edited image. The system blends the aligned rotated generated edited image with the input image to generate an edited output image.
    Type: Grant
    Filed: September 7, 2021
    Date of Patent: February 27, 2024
    Assignee: Adobe Inc.
    Inventors: Ratheesh Kalarot, Kevin Wampler, Jingwan Lu, Jakub Fiser, Elya Shechtman, Aliakbar Darabi, Alexandru Vasile Costin
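The pipeline in this abstract — crop, rotate, invert to latent space, edit, rotate back, blend — can be sketched with the learned components passed in as callables; `feather_mask` is a hypothetical helper standing in for the patent's segment alignment and blending.

```python
# Edit-and-blend pipeline sketch. encoder/edit_fn/generator are stand-ins
# for GAN inversion, latent editing, and the generator network.
import numpy as np
from scipy.ndimage import rotate  # affine rotation stand-in

def feather_mask(shape, border=8):
    """Soft mask that fades to zero near crop borders for seamless blending."""
    h, w = shape[:2]
    yy = np.minimum(np.arange(h), np.arange(h)[::-1])
    xx = np.minimum(np.arange(w), np.arange(w)[::-1])
    d = np.minimum(yy[:, None], xx[None, :]).astype(np.float32)
    return np.clip(d / border, 0.0, 1.0)[..., None]

def edit_region(image, box, angle, encoder, edit_fn, generator):
    y0, y1, x0, x1 = box
    crop = image[y0:y1, x0:x1]
    aligned = rotate(crop, angle, reshape=False)      # affine transform
    edited = generator(edit_fn(encoder(aligned)))     # invert, edit, regenerate
    restored = rotate(edited, -angle, reshape=False)  # inverse affine transform
    mask = feather_mask(crop.shape)
    out = image.copy()
    out[y0:y1, x0:x1] = mask * restored + (1 - mask) * crop  # blend
    return out

# Identity stand-ins for the learned components:
img = np.random.rand(64, 64, 3).astype(np.float32)
result = edit_region(img, (8, 40, 8, 40), angle=15.0,
                     encoder=lambda x: x, edit_fn=lambda z: z,
                     generator=lambda z: z)
```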
  • Publication number: 20240062495
    Abstract: A scene modeling system receives a video including a plurality of frames corresponding to views of an object and a request to display an editable three-dimensional (3D) scene that corresponds to a particular frame of the plurality of frames. The scene modeling system applies a scene representation model to the particular frame. The scene representation model includes a deformation model configured to generate, for each pixel of the particular frame and based on a pose and an expression of the object, a deformation point using a 3D morphable model (3DMM) guided deformation field. The scene representation model also includes a color model configured to determine, for the deformation point, color and volume density values. The scene modeling system receives a modification to one or more of the pose or the expression of the object, including a modification to a location of the deformation point, and renders an updated video based on the received modification.
    Type: Application
    Filed: August 21, 2022
    Publication date: February 22, 2024
    Inventors: Zhixin Shu, Zexiang Xu, Shahrukh Athar, Kalyan Sunkavalli, Elya Shechtman
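A schematic sketch of the two-stage representation described above: a deformation model maps 3D points toward a canonical space conditioned on 3DMM pose and expression, and a NeRF-style color model returns color and volume density at the deformed points. All dimensions are assumptions.

```python
# Two-stage scene representation sketch: 3DMM-conditioned deformation
# followed by a color/density model.
import torch
import torch.nn as nn

class DeformationModel(nn.Module):
    def __init__(self, pose_dim=6, expr_dim=10):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 + pose_dim + expr_dim, 64),
                                 nn.ReLU(), nn.Linear(64, 3))

    def forward(self, points, pose, expr):
        # Offset toward a canonical space, conditioned on pose/expression.
        cond = torch.cat([points, pose, expr], dim=-1)
        return points + self.mlp(cond)

class ColorModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, deformed_points):
        out = self.mlp(deformed_points)
        rgb = torch.sigmoid(out[..., :3])   # color
        density = torch.relu(out[..., 3:])  # volume density
        return rgb, density

pts = torch.rand(1024, 3)
pose, expr = torch.zeros(1024, 6), torch.zeros(1024, 10)
rgb, sigma = ColorModel()(DeformationModel()(pts, pose, expr))
```

Editing the pose or expression inputs moves the deformation points, and re-rendering through the color model yields the updated video.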
  • Patent number: 11907839
    Abstract: Systems and methods combine an input image with an edited image generated using a generator neural network to preserve detail from the original image. A computing system provides an input image to a machine learning model to generate a latent space representation of the input image. The system provides the latent space representation to a generator neural network to generate a generated image. The system generates multiple scale representations of the input image, as well as multiple scale representations of the generated image. The system generates a first combined image based on first scale representations of the images and a first value. The system generates a second combined image based on second scale representations of the images and a second value. The system blends the first combined image with the second combined image to generate an output image.
    Type: Grant
    Filed: September 7, 2021
    Date of Patent: February 20, 2024
    Assignee: Adobe Inc.
    Inventors: Ratheesh Kalarot, Kevin Wampler, Jingwan Lu, Jakub Fiser, Elya Shechtman, Aliakbar Darabi, Alexandru Vasile Costin
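A minimal sketch of the multi-scale combination: the input and generated images are mixed at two scales with two different weights, and the two combined images are then blended. The scales and weights here are placeholders for the patent's first and second values.

```python
# Multi-scale combine-and-blend sketch for detail preservation.
import torch
import torch.nn.functional as F

def combined_at_scale(a, b, scale, weight):
    """Downsample both images by `scale`, mix with `weight`, upsample back."""
    size = a.shape[-2:]
    a_s = F.interpolate(a, scale_factor=1 / scale, mode="bilinear")
    b_s = F.interpolate(b, scale_factor=1 / scale, mode="bilinear")
    mix = weight * a_s + (1 - weight) * b_s
    return F.interpolate(mix, size=size, mode="bilinear")

def blend(input_img, generated_img):
    # First combined image: fine scale, first value.
    first = combined_at_scale(input_img, generated_img, scale=1, weight=0.7)
    # Second combined image: coarse scale, second value.
    second = combined_at_scale(input_img, generated_img, scale=4, weight=0.3)
    return 0.5 * (first + second)  # blended output image

out = blend(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```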
  • Patent number: 11900519
    Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure encode features of a source image to obtain a source appearance encoding that represents inherent attributes of a face in the source image; encode features of a target image to obtain a target non-appearance encoding that represents contextual attributes of the target image; combine the source appearance encoding and the target non-appearance encoding to obtain combined image features; and generate a modified target image based on the combined image features, wherein the modified target image includes the inherent attributes of the face in the source image together with the contextual attributes of the target image.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: February 13, 2024
    Assignee: Adobe Inc.
    Inventors: Kevin Duarte, Wei-An Lin, Ratheesh Kalarot, Shabnam Ghadar, Jingwan Lu, Elya Shechtman, John Thomas Nack
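An interface-level sketch of the encode-combine-generate flow: one encoder captures inherent appearance attributes from the source face, another captures contextual (non-appearance) attributes from the target, and a decoder renders the combination. Layer shapes are assumptions.

```python
# Face appearance/context recombination sketch.
import torch
import torch.nn as nn

class FaceSwapModel(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.appearance_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim))
        self.context_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim))
        self.decoder = nn.Linear(2 * dim, 3 * 32 * 32)

    def forward(self, source, target):
        appearance = self.appearance_enc(source)  # inherent face attributes
        context = self.context_enc(target)        # pose, lighting, background
        combined = torch.cat([appearance, context], dim=-1)
        return self.decoder(combined).view(-1, 3, 32, 32)  # modified target

out = FaceSwapModel()(torch.rand(1, 3, 32, 32), torch.rand(1, 3, 32, 32))
```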
  • Publication number: 20240046429
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating neural network based perceptual artifact segmentations in synthetic digital image content. The disclosed system utilizes neural networks to detect perceptual artifacts in digital images in connection with generating or modifying digital images. The disclosed system determines a digital image including one or more synthetically modified portions. The disclosed system utilizes an artifact segmentation machine-learning model to detect perceptual artifacts in the synthetically modified portion(s). The artifact segmentation machine-learning model is trained to detect perceptual artifacts based on labeled artifact regions of synthetic training digital images. Additionally, the disclosed system utilizes the artifact segmentation machine-learning model in an iterative inpainting process. The disclosed system utilizes one or more digital image inpainting models to inpaint the digital image.
    Type: Application
    Filed: July 27, 2022
    Publication date: February 8, 2024
    Inventors: Sohrab Amirghodsi, Lingzhi Zhang, Zhe Lin, Elya Shechtman, Yuqian Zhou, Connelly Barnes
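This publication shares its abstract with 20240037717 below; one sketch of the iterative inpaint-and-verify loop covers both. Both function bodies are stand-ins for the learned artifact-segmentation and inpainting networks.

```python
# Iterative inpainting loop: segment perceptual artifacts, re-inpaint the
# flagged regions, repeat until few artifacts remain.
import numpy as np

def segment_artifacts(image: np.ndarray) -> np.ndarray:
    """Stand-in for the artifact segmentation model: returns a binary mask
    of perceptual artifacts (here, a random placeholder)."""
    return (np.random.rand(*image.shape[:2]) > 0.99).astype(np.float32)

def inpaint(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Stand-in for an inpainting model: pushes masked pixels to the mean."""
    out = image.copy()
    out[mask > 0] = image.mean(axis=(0, 1))
    return out

def iterative_inpaint(image, max_iters=5, min_area=10):
    for _ in range(max_iters):
        artifact_mask = segment_artifacts(image)
        if artifact_mask.sum() < min_area:      # few artifacts left: stop
            break
        image = inpaint(image, artifact_mask)   # re-inpaint flagged regions
    return image

result = iterative_inpaint(np.random.rand(64, 64, 3))
```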
  • Publication number: 20240046430
    Abstract: One or more processing devices access a scene depicting a reference object that includes an annotation identifying a target region to be modified in one or more video frames. The one or more processing devices determine that a target pixel corresponds to a sub-region within the target region that includes hallucinated content. The one or more processing devices determine gradient constraints using gradient values of neighboring pixels in the hallucinated content, the neighboring pixels being adjacent to the target pixel and corresponding to four cardinal directions. The one or more processing devices update color data of the target pixel subject to the determined gradient constraints.
    Type: Application
    Filed: September 29, 2023
    Publication date: February 8, 2024
    Inventors: Oliver Wang, John Nelson, Geoffrey Oxholm, Elya Shechtman
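The gradient-constraint update resembles a Poisson-style solve; a single-pixel sketch follows, where the target pixel's color is set so its gradients toward the four cardinal neighbors match those measured in the hallucinated content.

```python
# One gradient-constrained color update for a target pixel, in the spirit
# of Poisson blending. A full solver sweeps all hole pixels repeatedly.
import numpy as np

def update_pixel(color, hallucinated, y, x):
    """Set pixel (y, x) of `color` so its gradients toward the four cardinal
    neighbors match the gradients measured in `hallucinated`."""
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # four cardinal directions
    total = np.zeros_like(color[y, x])
    for dy, dx in offsets:
        ny, nx = y + dy, x + dx
        grad = hallucinated[y, x] - hallucinated[ny, nx]  # gradient constraint
        total += color[ny, nx] + grad
    color[y, x] = total / len(offsets)
    return color

img = np.random.rand(8, 8, 3)
hall = np.random.rand(8, 8, 3)
img = update_pixel(img, hall, y=4, x=4)  # interior pixel (needs 4 neighbors)
```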
  • Patent number: 11893763
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a modified digital image from extracted spatial and global codes. For example, the disclosed systems can utilize a global and spatial autoencoder to extract spatial codes and global codes from digital images. The disclosed systems can further utilize the global and spatial autoencoder to generate a modified digital image by combining extracted spatial and global codes in various ways for various applications such as style swapping, style blending, and attribute editing.
    Type: Grant
    Filed: November 22, 2022
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Taesung Park, Richard Zhang, Oliver Wang, Junyan Zhu, Jingwan Lu, Elya Shechtman, Alexei A. Efros
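A compact sketch of the spatial/global autoencoder interface: style swapping takes the spatial code from one image and the global code from another. Sizes are assumptions; style blending would interpolate two global codes before decoding.

```python
# Spatial/global code extraction and swapping sketch.
import torch
import torch.nn as nn

class SwappingAutoencoder(nn.Module):
    def __init__(self, global_dim=32):
        super().__init__()
        self.spatial_enc = nn.Conv2d(3, 8, 3, stride=2, padding=1)
        self.global_enc = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, global_dim))
        self.decoder = nn.Conv2d(8 + global_dim, 3, 3, padding=1)

    def generate(self, spatial_code, global_code):
        b, _, h, w = spatial_code.shape
        g = global_code.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.decoder(torch.cat([spatial_code, g], dim=1))

# Style swap: structure (spatial code) from image A, global style from B.
model = SwappingAutoencoder()
a, b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
swapped = model.generate(model.spatial_enc(a), model.global_enc(b))
# Style blending would decode with, e.g.,
# 0.5 * model.global_enc(a) + 0.5 * model.global_enc(b).
```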
  • Publication number: 20240037717
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating neural network based perceptual artifact segmentations in synthetic digital image content. The disclosed system utilizes neural networks to detect perceptual artifacts in digital images in connection with generating or modifying digital images. The disclosed system determines a digital image including one or more synthetically modified portions. The disclosed system utilizes an artifact segmentation machine-learning model to detect perceptual artifacts in the synthetically modified portion(s). The artifact segmentation machine-learning model is trained to detect perceptual artifacts based on labeled artifact regions of synthetic training digital images. Additionally, the disclosed system utilizes the artifact segmentation machine-learning model in an iterative inpainting process. The disclosed system utilizes one or more digital image inpainting models to inpaint the digital image.
    Type: Application
    Filed: July 27, 2022
    Publication date: February 1, 2024
    Inventors: Sohrab Amirghodsi, Lingzhi Zhang, Zhe Lin, Elya Shechtman, Yuqian Zhou, Connelly Barnes
  • Publication number: 20240037922
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for adapting generative neural networks to target domains utilizing an image translation neural network. In particular, in one or more embodiments, the disclosed systems utilize an image translation neural network to translate target results to a source domain for use in adapting the target neural network. For instance, in some embodiments, the disclosed systems compare a translated target result with a source result from a pretrained source generative neural network to adjust parameters of a target generative neural network to produce results corresponding in features to source results and corresponding in style to the target domain.
    Type: Application
    Filed: July 27, 2022
    Publication date: February 1, 2024
    Inventors: Yijun Li, Nicholas Kolkin, Jingwan Lu, Elya Shechtman
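A schematic training step, with linear stand-ins for all three networks: the target generator's output is translated to the source domain and matched against the frozen pretrained source generator, and only the target generator is updated. The loss choice is an assumption.

```python
# Domain adaptation via translation sketch: translate target results back
# to the source domain and compare against the frozen source generator.
import torch
import torch.nn as nn

z = torch.randn(4, 16)                       # shared latent samples
source_G = nn.Linear(16, 3 * 8 * 8)          # frozen pretrained source generator
target_G = nn.Linear(16, 3 * 8 * 8)          # generator being adapted
translate = nn.Linear(3 * 8 * 8, 3 * 8 * 8)  # target-to-source image translator

for p in list(source_G.parameters()) + list(translate.parameters()):
    p.requires_grad_(False)                  # only target_G is adjusted

opt = torch.optim.Adam(target_G.parameters(), lr=1e-4)
# Compare the translated target result with the source result.
loss = nn.functional.mse_loss(translate(target_G(z)), source_G(z))
opt.zero_grad()
loss.backward()
opt.step()
```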
  • Patent number: 11887216
    Abstract: The present disclosure describes systems and methods for image processing. Embodiments of the present disclosure include an image processing apparatus configured to generate modified images (e.g., synthetic faces) by conditionally changing attributes or landmarks of an input image. A machine learning model of the image processing apparatus encodes the input image to obtain a joint conditional vector that represents attributes and landmarks of the input image in a vector space. The joint conditional vector is then modified, according to the techniques described herein, to form a latent vector used to generate a modified image. In some cases, the machine learning model is trained using a generative adversarial network (GAN) with a normalization technique, followed by joint training of a landmark embedding and attribute embedding (e.g., to reduce inference time).
    Type: Grant
    Filed: November 19, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Ratheesh Kalarot, Timothy M. Converse, Shabnam Ghadar, John Thomas Nack, Jingwan Lu, Elya Shechtman, Baldo Faieta, Akhilesh Kumar
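A sketch of the joint conditional vector: attribute and landmark embeddings are concatenated and mapped to a latent vector for the generator; editing shifts the attribute input before embedding. The dimensions (10 attributes, 68 2-D landmarks) are assumptions.

```python
# Joint attribute/landmark conditional editing sketch.
import torch
import torch.nn as nn

attr_embed = nn.Linear(10, 32)      # attribute embedding (e.g., smile, age)
lmk_embed = nn.Linear(68 * 2, 32)   # landmark embedding (68 2-D points assumed)
to_latent = nn.Linear(64, 128)      # maps joint conditional vector to latent

def edit(attrs, landmarks, attr_delta):
    # Modify attributes, re-embed, and form the joint conditional vector.
    joint = torch.cat([attr_embed(attrs + attr_delta), lmk_embed(landmarks)], -1)
    return to_latent(joint)  # latent vector fed to the generator

w = edit(torch.rand(1, 10), torch.rand(1, 136), attr_delta=torch.zeros(1, 10))
```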
  • Publication number: 20240028871
    Abstract: Embodiments are disclosed for performing wire segmentation of images using machine learning. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input image, generating, by a first trained neural network model, a global probability map representation of the input image indicating, for each pixel, a probability that the pixel includes a representation of wires, and identifying regions of the input image indicated as including the representation of wires. The disclosed systems and methods further comprise, for each region from the identified regions, concatenating the region and information from the global probability map to create a concatenated input, and generating, by a second trained neural network model, a local probability map representation of the region based on the concatenated input, indicating which pixels of the region include representations of wires. The disclosed systems and methods further comprise aggregating local probability maps for each region.
    Type: Application
    Filed: July 21, 2022
    Publication date: January 25, 2024
    Applicant: Adobe Inc.
    Inventors: Mang Tik Chiu, Connelly Barnes, Zijun Wei, Zhe Lin, Yuqian Zhou, Xuaner Zhang, Sohrab Amirghodsi, Florian Kainz, Elya Shechtman
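A sketch of the coarse-to-fine flow: a global model produces a per-pixel wire probability map, regions whose probabilities exceed a threshold are re-run through a local model together with the corresponding slice of the global map, and the local maps are aggregated into the output. Tile size and threshold are assumptions.

```python
# Two-stage wire segmentation sketch: global probability map, then local
# refinement over identified regions, aggregated back.
import torch
import torch.nn as nn

global_model = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1), nn.Sigmoid())
local_model = nn.Sequential(nn.Conv2d(4, 1, 3, padding=1), nn.Sigmoid())

def segment_wires(image, tile=64, thresh=0.5):
    global_map = global_model(image)
    out = torch.zeros_like(global_map)
    _, _, H, W = image.shape
    for y in range(0, H, tile):
        for x in range(0, W, tile):
            g = global_map[:, :, y:y+tile, x:x+tile]
            if g.max() < thresh:                    # skip wire-free regions
                continue
            region = image[:, :, y:y+tile, x:x+tile]
            concat = torch.cat([region, g], dim=1)  # region + global info
            out[:, :, y:y+tile, x:x+tile] = local_model(concat)  # aggregate
    return out

mask = segment_wires(torch.rand(1, 3, 256, 256))
```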
  • Patent number: 11880957
    Abstract: One example method involves operations for receiving a request to transform an input image into a target image. Operations further include providing the input image to a machine learning model trained to adapt images. Training the machine learning model includes accessing training data having a source domain of images and a target domain of images with a target style. Training further includes using a pre-trained generative model to generate an adapted source domain of adapted images having the target style. The adapted source domain is generated by determining a rate of change for parameters of the target style, generating weighted parameters by applying a weight to each of the parameters based on their respective rate of change, and applying the weighted parameters to the source domain. Additionally, operations include using the machine learning model to generate the target image by modifying parameters of the input image using the target style.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: January 23, 2024
    Assignee: Adobe Inc.
    Inventors: Yijun Li, Richard Zhang, Jingwan Lu, Elya Shechtman
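A sketch of the rate-of-change weighting described above: parameters of the target style that change fastest receive the largest weights when adapting the source domain. Estimating the rate by differencing two training snapshots is an illustrative assumption.

```python
# Rate-of-change weighting sketch for style-parameter adaptation.
import torch

def weighted_style_params(params_t0, params_t1):
    """Given style parameters at two training snapshots, weight each by its
    normalized rate of change (fast-changing parameters dominate)."""
    rate = (params_t1 - params_t0).abs()         # per-parameter rate of change
    weights = rate / rate.sum().clamp_min(1e-8)  # normalize into weights
    return weights * params_t1                   # weighted parameters

weighted = weighted_style_params(torch.rand(8), torch.rand(8))
```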