Patents by Inventor Sohrab Amirghodsi

Sohrab Amirghodsi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240127411
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for panoptically guiding digital image inpainting utilizing a panoptic inpainting neural network. In some embodiments, the disclosed systems utilize a panoptic inpainting neural network to generate an inpainted digital image according to a panoptic segmentation map that defines pixel regions corresponding to different panoptic labels. In some cases, the disclosed systems train a neural network utilizing a semantic discriminator that facilitates generation of digital images that are realistic while also conforming to a semantic segmentation. The disclosed systems generate and provide a panoptic inpainting interface to facilitate user interaction for inpainting digital images. In certain embodiments, the disclosed systems iteratively update an inpainted digital image based on changes to a panoptic segmentation map. (An illustrative sketch of panoptic-guided inpainting appears after this listing.)
    Type: Application
    Filed: October 3, 2022
    Publication date: April 18, 2024
    Inventors: Zhe Lin, Haitian Zheng, Elya Shechtman, Jianming Zhang, Jingwan Lu, Ning Xu, Qing Liu, Scott Cohen, Sohrab Amirghodsi
  • Publication number: 20240127410
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for panoptically guiding digital image inpainting utilizing a panoptic inpainting neural network. In some embodiments, the disclosed systems utilize a panoptic inpainting neural network to generate an inpainted digital image according to a panoptic segmentation map that defines pixel regions corresponding to different panoptic labels. In some cases, the disclosed systems train a neural network utilizing a semantic discriminator that facilitates generation of digital images that are realistic while also conforming to a semantic segmentation. The disclosed systems generate and provide a panoptic inpainting interface to facilitate user interaction for inpainting digital images. In certain embodiments, the disclosed systems iteratively update an inpainted digital image based on changes to a panoptic segmentation map.
    Type: Application
    Filed: October 3, 2022
    Publication date: April 18, 2024
    Inventors: Zhe Lin, Haitian Zheng, Elya Shechtman, Jianming Zhang, Jingwan Lu, Ning Xu, Qing Liu, Scott Cohen, Sohrab Amirghodsi
  • Publication number: 20240127412
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for panoptically guiding digital image inpainting utilizing a panoptic inpainting neural network. In some embodiments, the disclosed systems utilize a panoptic inpainting neural network to generate an inpainted digital image according to a panoptic segmentation map that defines pixel regions corresponding to different panoptic labels. In some cases, the disclosed systems train a neural network utilizing a semantic discriminator that facilitates generation of digital images that are realistic while also conforming to a semantic segmentation. The disclosed systems generate and provide a panoptic inpainting interface to facilitate user interaction for inpainting digital images. In certain embodiments, the disclosed systems iteratively update an inpainted digital image based on changes to a panoptic segmentation map.
    Type: Application
    Filed: October 3, 2022
    Publication date: April 18, 2024
    Inventors: Zhe Lin, Haitian Zheng, Elya Shechtman, Jianming Zhang, Jingwan Lu, Ning Xu, Qing Liu, Scott Cohen, Sohrab Amirghodsi
  • Publication number: 20240127452
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for panoptically guiding digital image inpainting utilizing a panoptic inpainting neural network. In some embodiments, the disclosed systems utilize a panoptic inpainting neural network to generate an inpainted digital image according to a panoptic segmentation map that defines pixel regions corresponding to different panoptic labels. In some cases, the disclosed systems train a neural network utilizing a semantic discriminator that facilitates generation of digital images that are realistic while also conforming to a semantic segmentation. The disclosed systems generate and provide a panoptic inpainting interface to facilitate user interaction for inpainting digital images. In certain embodiments, the disclosed systems iteratively update an inpainted digital image based on changes to a panoptic segmentation map.
    Type: Application
    Filed: October 3, 2022
    Publication date: April 18, 2024
    Inventors: Zhe Lin, Haitian Zheng, Elya Shechtman, Jianming Zhang, Jingwan Lu, Ning Xu, Qing Liu, Scott Cohen, Sohrab Amirghodsi
  • Patent number: 11935217
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating harmonized digital images utilizing a self-supervised image harmonization neural network. In particular, the disclosed systems can implement, and learn parameters for, a self-supervised image harmonization neural network to extract content from one digital image (disentangled from its appearance) and appearance from another digital image (disentangled from its content). For example, the disclosed systems can utilize a dual data augmentation method to generate diverse triplets for parameter learning (including input digital images, reference digital images, and pseudo ground truth digital images), via cropping a digital image with perturbations and using three-dimensional color lookup tables (“LUTs”). (An illustrative sketch of this triplet construction appears after this listing.)
    Type: Grant
    Filed: March 12, 2021
    Date of Patent: March 19, 2024
    Assignee: Adobe Inc.
    Inventors: He Zhang, Yifan Jiang, Yilin Wang, Jianming Zhang, Kalyan Sunkavalli, Sarah Kong, Su Chen, Sohrab Amirghodsi, Zhe Lin
  • Publication number: 20240046429
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating neural network based perceptual artifact segmentations in synthetic digital image content. The disclosed system utilizes neural networks to detect perceptual artifacts in digital images in connection with generating or modifying digital images. The disclosed system determines a digital image including one or more synthetically modified portions. The disclosed system utilizes an artifact segmentation machine-learning model to detect perceptual artifacts in the synthetically modified portion(s). The artifact segmentation machine-learning model is trained to detect perceptual artifacts based on labeled artifact regions of synthetic training digital images. Additionally, the disclosed system utilizes the artifact segmentation machine-learning model in an iterative inpainting process. The disclosed system utilizes one or more digital image inpainting models to inpaint a digital image. (An illustrative sketch of this detect-and-reinpaint loop appears after this listing.)
    Type: Application
    Filed: July 27, 2022
    Publication date: February 8, 2024
    Inventors: Sohrab Amirghodsi, Lingzhi Zhang, Zhe Lin, Elya Shechtman, Yuqian Zhou, Connelly Barnes
  • Publication number: 20240037717
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating neural network based perceptual artifact segmentations in synthetic digital image content. The disclosed system utilizes neural networks to detect perceptual artifacts in digital images in connection with generating or modifying digital images. The disclosed system determines a digital image including one or more synthetically modified portions. The disclosed system utilizes an artifact segmentation machine-learning model to detect perceptual artifacts in the synthetically modified portion(s). The artifact segmentation machine-learning model is trained to detect perceptual artifacts based on labeled artifact regions of synthetic training digital images. Additionally, the disclosed system utilizes the artifact segmentation machine-learning model in an iterative inpainting process. The disclosed system utilizes one or more digital image inpainting models to inpaint a digital image.
    Type: Application
    Filed: July 27, 2022
    Publication date: February 1, 2024
    Inventors: Sohrab Amirghodsi, Lingzhi Zhang, Zhe Lin, Elya Shechtman, Yuqian Zhou, Connelly Barnes
  • Publication number: 20240028871
    Abstract: Embodiments are disclosed for performing wire segmentation of images using machine learning. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input image, generating, by a first trained neural network model, a global probability map representation of the input image indicating, for each pixel, the probability that it includes a representation of wires, and identifying regions of the input image indicated as including the representation of wires. The disclosed systems and methods further comprise, for each region from the identified regions, concatenating the region and information from the global probability map to create a concatenated input, and generating, by a second trained neural network model, a local probability map representation of the region based on the concatenated input, indicating which pixels of the region include representations of wires. The disclosed systems and methods further comprise aggregating the local probability maps for each region. (An illustrative global-to-local sketch appears after this listing.)
    Type: Application
    Filed: July 21, 2022
    Publication date: January 25, 2024
    Applicant: Adobe Inc.
    Inventors: Mang Tik Chiu, Connelly Barnes, Zijun Wei, Zhe Lin, Yuqian Zhou, Xuaner Zhang, Sohrab Amirghodsi, Florian Kainz, Elya Shechtman
  • Patent number: 11869173
    Abstract: Various disclosed embodiments are directed to inpainting one or more portions of a target image based on merging (or selecting) one or more portions of a warped image with (or from) one or more portions of an inpainting candidate (e.g., via a learning model). This, among other functionality described herein, addresses inaccuracies in existing image inpainting technologies.
    Type: Grant
    Filed: December 27, 2022
    Date of Patent: January 9, 2024
    Assignee: Adobe Inc.
    Inventors: Yuqian Zhou, Elya Shechtman, Connelly Stuart Barnes, Sohrab Amirghodsi
  • Patent number: 11854244
    Abstract: A panoptic labeling system includes a modified panoptic labeling neural network (“modified PLNN”) that is trained to generate labels for pixels in an input image. The panoptic labeling system generates modified training images by combining training images with mask instances from annotated images. The modified PLNN determines a set of labels representing categories of objects depicted in the modified training images. The modified PLNN also determines a subset of the labels representing categories of objects depicted in the input image. For each mask pixel in a modified training image, the modified PLNN calculates a probability indicating whether the mask pixel has the same label as an object pixel. The modified PLNN generates a mask label for each mask pixel, based on the probability. The panoptic labeling system provides the mask label to, for example, a digital graphics editing system that uses the labels to complete an infill operation.
    Type: Grant
    Filed: October 20, 2022
    Date of Patent: December 26, 2023
    Assignee: Adobe Inc.
    Inventors: Sohrab Amirghodsi, Zhe Lin, Yilin Wang, Tianshu Yu, Connelly Barnes, Elya Shechtman
  • Publication number: 20230385992
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that implement an inpainting framework having computer-implemented machine learning models to generate high-resolution inpainting results. For instance, in one or more embodiments, the disclosed systems generate an inpainted digital image utilizing a deep inpainting neural network from a digital image having a replacement region. The disclosed systems further generate, utilizing a visual guide algorithm, at least one deep visual guide from the inpainted digital image. Using a patch match model and the at least one deep visual guide, the disclosed systems generate a plurality of modified digital images from the digital image by replacing the replacement region of the digital image with replacement pixels. Additionally, the disclosed systems select, utilizing an inpainting curation model, a modified digital image from the plurality of modified digital images to provide to a client device. (An illustrative sketch of this guide-then-curate pipeline appears after this listing.)
    Type: Application
    Filed: May 25, 2022
    Publication date: November 30, 2023
    Inventors: Connelly Barnes, Elya Shechtman, Sohrab Amirghodsi, Zhe Lin
  • Patent number: 11823313
    Abstract: The present disclosure is directed toward systems, methods, and non-transitory computer readable media for generating a modified digital image by identifying patch matches within a digital image utilizing a Gaussian mixture model. For example, the systems described herein can identify sample patches and corresponding matching portions within a digital image. The systems can also identify transformations between the sample patches and the corresponding matching portions. Based on the transformations, the systems can generate a Gaussian mixture model, and the systems can modify a digital image by replacing a target region with target matching portions identified in accordance with the Gaussian mixture model. (An illustrative sketch of modeling patch transformations with a Gaussian mixture appears after this listing.)
    Type: Grant
    Filed: May 27, 2021
    Date of Patent: November 21, 2023
    Assignee: Adobe Inc.
    Inventors: Xin Sun, Sohrab Amirghodsi, Nathan Carr, Michal Lukac
  • Publication number: 20230368339
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate inpainted digital images utilizing a class-specific cascaded modulation inpainting neural network. For example, the disclosed systems utilize a class-specific cascaded modulation inpainting neural network that includes cascaded modulation decoder layers to generate replacement pixels portraying a particular target object class. To illustrate, in response to user selection of a replacement region and target object class, the disclosed systems utilize a class-specific cascaded modulation inpainting neural network corresponding to the target object class to generate an inpainted digital image that portrays an instance of the target object class within the replacement region.
    Type: Application
    Filed: May 13, 2022
    Publication date: November 16, 2023
    Inventors: Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Elya Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi
  • Publication number: 20230360180
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate inpainted digital images utilizing a cascaded modulation inpainting neural network. For example, the disclosed systems utilize a cascaded modulation inpainting neural network that includes cascaded modulation decoder layers. In one or more decoder layers, the disclosed systems start with a global code modulation that captures global-range image structures, followed by an additional modulation that refines the global predictions. Accordingly, in one or more implementations, the image inpainting system provides a mechanism to correct distorted local details. Furthermore, in one or more implementations, the image inpainting system leverages fast Fourier convolution blocks within different resolution layers of the encoder architecture to expand the receptive field of the encoder and to allow the network encoder to better capture global structure. (An illustrative sketch of cascaded modulation and a Fourier-domain block appears after this listing.)
    Type: Application
    Filed: May 4, 2022
    Publication date: November 9, 2023
    Inventors: Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Elya Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi
  • Publication number: 20230259587
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for training a generative inpainting neural network to accurately generate inpainted digital images via object-aware training and/or masked regularization. For example, the disclosed systems utilize an object-aware training technique to learn parameters for a generative inpainting neural network based on masking individual object instances depicted within sample digital images of a training dataset. In some embodiments, the disclosed systems also (or alternatively) utilize a masked regularization technique as part of training to prevent overfitting by penalizing a discriminator neural network utilizing a regularization term that is based on an object mask. (An illustrative sketch of object-aware masking and masked regularization appears after this listing.)
    Type: Application
    Filed: February 14, 2022
    Publication date: August 17, 2023
    Inventors: Zhe Lin, Haitian Zheng, Jingwan Lu, Scott Cohen, Jianming Zhang, Ning Xu, Elya Shechtman, Connelly Barnes, Sohrab Amirghodsi
  • Publication number: 20230214967
    Abstract: Various disclosed embodiments are directed to inpainting one or more portions of a target image based on merging (or selecting) one or more portions of a warped image with (or from) one or more portions of an inpainting candidate (e.g., via a learning model). This, among other functionality described herein, addresses inaccuracies in existing image inpainting technologies.
    Type: Application
    Filed: December 27, 2022
    Publication date: July 6, 2023
    Inventors: Yuqian Zhou, Elya Shechtman, Connelly Stuart Barnes, Sohrab Amirghodsi
  • Publication number: 20230145498
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for accurately restoring missing pixels within a hole region of a target image utilizing multi-image inpainting techniques based on incorporating geometric depth information. For example, in various implementations, the disclosed systems utilize a depth prediction of a source image as well as camera relative pose parameters. Additionally, in some implementations, the disclosed systems jointly optimize the depth rescaling and camera pose parameters before generating the reprojected image to further increase the accuracy of the reprojected image. Further, in various implementations, the disclosed systems utilize the reprojected image in connection with a content-aware fill model to generate a refined composite image that includes the target image having a hole, where the hole is filled in based on the reprojected image of the source image. (An illustrative sketch of depth-based reprojection appears after this listing.)
    Type: Application
    Filed: November 5, 2021
    Publication date: May 11, 2023
    Inventors: Yunhan Zhao, Connelly Barnes, Yuqian Zhou, Sohrab Amirghodsi, Elya Shechtman
  • Publication number: 20230141734
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately generating inpainted digital images utilizing a guided inpainting model guided by both plane panoptic segmentation and plane grouping. For example, the disclosed systems utilize a guided inpainting model to fill holes of missing pixels of a digital image as informed or guided by an appearance guide and a geometric guide. Specifically, the disclosed systems generate an appearance guide utilizing plane panoptic segmentation and generate a geometric guide by grouping plane panoptic segments. In some embodiments, the disclosed systems generate a modified digital image by implementing an inpainting model guided by both the appearance guide (e.g., a plane panoptic segmentation map) and the geometric guide (e.g., a plane grouping map).
    Type: Application
    Filed: November 5, 2021
    Publication date: May 11, 2023
    Inventors: Yuqian Zhou, Connelly Barnes, Sohrab Amirghodsi, Elya Shechtman
  • Publication number: 20230079886
    Abstract: A panoptic labeling system includes a modified panoptic labeling neural network (“modified PLNN”) that is trained to generate labels for pixels in an input image. The panoptic labeling system generates modified training images by combining training images with mask instances from annotated images. The modified PLNN determines a set of labels representing categories of objects depicted in the modified training images. The modified PLNN also determines a subset of the labels representing categories of objects depicted in the input image. For each mask pixel in a modified training image, the modified PLNN calculates a probability indicating whether the mask pixel has the same label as an object pixel. The modified PLNN generates a mask label for each mask pixel, based on the probability. The panoptic labeling system provides the mask label to, for example, a digital graphics editing system that uses the labels to complete an infill operation.
    Type: Application
    Filed: October 20, 2022
    Publication date: March 16, 2023
    Inventors: Sohrab Amirghodsi, Zhe Lin, Yilin Wang, Tianshu Yu, Connelly Barnes, Elya Shechtman
  • Patent number: 11551390
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating deterministic enhanced digital images based on parallel determinations of pixel group offsets arranged in pixel waves. For example, the disclosed systems can utilize a parallel wave analysis to propagate through pixel groups in a pixel wave of a target region within a digital image to determine matching patch offsets for the pixel groups. The disclosed systems can further utilize the matching patch offsets to generate a deterministic enhanced digital image by filling or replacing pixels of the target region with matching pixels indicated by the matching patch offsets. (An illustrative sketch of wave-ordered offset propagation appears after this listing.)
    Type: Grant
    Filed: August 5, 2020
    Date of Patent: January 10, 2023
    Assignee: Adobe Inc.
    Inventors: Sohrab Amirghodsi, Connelly Barnes, Eric L. Palmer
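
Illustrative Sketches

The first sketch relates to the panoptic inpainting publications above (20240127411, 20240127410, 20240127412, 20240127452). It is a minimal illustration, under assumptions, of the core conditioning idea only: a generator that receives the masked image together with a one-hot panoptic segmentation map, and a discriminator that scores realism conditioned on that segmentation. The class names, channel counts, and layer choices are illustrative, not the patented architecture.

```python
# Minimal sketch of panoptic-guided inpainting conditioning (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_PANOPTIC_LABELS = 16  # assumption: a small label vocabulary for the sketch

class PanopticInpainter(nn.Module):
    def __init__(self):
        super().__init__()
        # input channels: RGB with hole (3) + hole mask (1) + one-hot panoptic map
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1 + NUM_PANOPTIC_LABELS, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, hole_mask, panoptic_map):
        onehot = F.one_hot(panoptic_map, NUM_PANOPTIC_LABELS).permute(0, 3, 1, 2).float()
        masked = image * (1.0 - hole_mask)              # drop pixels inside the hole
        pred = self.net(torch.cat([masked, hole_mask, onehot], dim=1))
        return masked + pred * hole_mask                # fill only the hole region

class SemanticDiscriminator(nn.Module):
    """Scores realism conditioned on the segmentation the image should follow."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + NUM_PANOPTIC_LABELS, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, image, panoptic_map):
        onehot = F.one_hot(panoptic_map, NUM_PANOPTIC_LABELS).permute(0, 3, 1, 2).float()
        return self.net(torch.cat([image, onehot], dim=1))

if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64)
    mask = torch.zeros(1, 1, 64, 64); mask[:, :, 16:48, 16:48] = 1.0
    pan = torch.randint(0, NUM_PANOPTIC_LABELS, (1, 64, 64))
    out = PanopticInpainter()(img, mask, pan)
    print(out.shape, SemanticDiscriminator()(out, pan).shape)
```

Iterative updates as described in the abstracts would simply re-run the generator whenever the user edits the panoptic map.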
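The next sketch relates to patent 11935217 (self-supervised image harmonization). It illustrates, under assumptions, how (input, reference, pseudo ground truth) triplets could be built by cropping a photo with a small spatial perturbation and re-coloring one crop with a random 3D color lookup table. The LUT construction, crop sizes, and function names are hypothetical stand-ins, not Adobe's pipeline.

```python
# Minimal sketch of triplet construction for self-supervised harmonization.
import numpy as np

rng = np.random.default_rng(0)
LUT_SIZE = 9  # assumption: a coarse 9x9x9 RGB lookup table

def random_lut(size=LUT_SIZE):
    # identity LUT plus small random color shifts
    grid = np.stack(np.meshgrid(*[np.linspace(0, 1, size)] * 3, indexing="ij"), axis=-1)
    return np.clip(grid + rng.normal(0, 0.08, grid.shape), 0, 1)

def apply_lut(image, lut):
    # nearest-neighbor lookup: map each RGB value through the 3D table
    idx = np.clip((image * (lut.shape[0] - 1)).round().astype(int), 0, lut.shape[0] - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

def make_triplet(photo, crop=96, jitter=8):
    h, w, _ = photo.shape
    y, x = rng.integers(0, h - crop - jitter), rng.integers(0, w - crop - jitter)
    dy, dx = rng.integers(0, jitter + 1), rng.integers(0, jitter + 1)
    pseudo_gt = photo[y:y + crop, x:x + crop]                       # target content + appearance
    reference = photo[y + dy:y + dy + crop, x + dx:x + dx + crop]   # same appearance, perturbed crop
    input_img = apply_lut(pseudo_gt, random_lut())                  # same content, altered appearance
    return input_img, reference, pseudo_gt

if __name__ == "__main__":
    photo = rng.random((256, 256, 3))
    inp, ref, gt = make_triplet(photo)
    print(inp.shape, ref.shape, gt.shape)
```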
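The next sketch relates to publications 20240046429 and 20240037717 (perceptual artifact segmentation). It shows, under assumptions, the shape of an iterative loop: inpaint, detect perceptual artifacts, re-inpaint the detected regions, and stop when the artifact area is small. `segment_artifacts` and `inpaint` are hypothetical placeholders for the trained models, and the stopping rule is assumed.

```python
# Minimal sketch of an iterative "detect artifacts, then re-inpaint" loop.
import numpy as np

def segment_artifacts(image):
    # hypothetical model call: returns a binary mask of perceptual artifacts
    return (image.mean(axis=-1, keepdims=True) > 0.95).astype(np.float32)

def inpaint(image, mask):
    # hypothetical inpainting model: here it just fills masked pixels with the image mean
    fill = np.full_like(image, image.mean())
    return image * (1 - mask) + fill * mask

def iterative_inpaint(image, hole_mask, max_rounds=3, area_thresh=0.001):
    result = inpaint(image, hole_mask)
    for _ in range(max_rounds):
        artifact_mask = segment_artifacts(result)
        if artifact_mask.mean() < area_thresh:    # few enough artifacts: stop
            break
        result = inpaint(result, artifact_mask)   # re-inpaint only the artifact regions
    return result

if __name__ == "__main__":
    img = np.random.rand(128, 128, 3)
    hole = np.zeros((128, 128, 1)); hole[32:96, 32:96] = 1.0
    print(iterative_inpaint(img, hole).shape)
```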
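The next sketch relates to publication 20240028871 (wire segmentation). It illustrates, under assumptions, the two-stage flow: a global model produces a coarse wire-probability map, candidate regions are refined by a local model that sees the image crop concatenated with the corresponding crop of the global map, and the local maps are aggregated back into a full-resolution result. Both models here are hypothetical placeholders, and the tiling and thresholds are assumptions.

```python
# Minimal sketch of global-then-local wire segmentation.
import numpy as np

def global_model(image):
    # placeholder for the first trained network: pretend bright pixels are wires
    return image.mean(axis=-1)

def local_model(crop_with_guide):
    # placeholder for the second trained network: refine by thresholding the guide channel
    return (crop_with_guide[..., -1] > 0.5).astype(np.float32)

def segment_wires(image, tile=64):
    h, w, _ = image.shape
    global_prob = global_model(image)
    out = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            g = global_prob[y:y + tile, x:x + tile]
            if g.max() < 0.5:                  # region unlikely to contain wires: skip
                continue
            crop = image[y:y + tile, x:x + tile]
            concatenated = np.concatenate([crop, g[..., None]], axis=-1)
            out[y:y + tile, x:x + tile] = local_model(concatenated)  # aggregate local maps
    return out

if __name__ == "__main__":
    img = np.random.rand(256, 256, 3)
    print(segment_wires(img).shape)
```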
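The next sketch relates to publication 20230385992 (deep visual guides, patch match, and curation). It is a toy pipeline, under assumptions: a rough learned inpaint, a derived structure guide, several guided fills with different guide weights, and a simple curation rule that picks one. The guide, the fill, and the scoring rule are all hypothetical simplifications of the abstract's components.

```python
# Minimal sketch of guide-driven candidate generation and curation.
import numpy as np

rng = np.random.default_rng(0)

def deep_inpaint(image, mask):
    # hypothetical neural inpainter: fill the hole with the mean of known pixels
    fill = image[mask[..., 0] == 0].mean(axis=0)
    return image * (1 - mask) + fill * mask

def visual_guide(image):
    # hypothetical guide: horizontal gradient magnitude as a structure cue
    g = np.abs(np.diff(image.mean(axis=-1), axis=1, prepend=0))
    return g / (g.max() + 1e-8)

def guided_fill(image, mask, guide, guide_weight):
    # toy patch-match stand-in: for each hole pixel, copy the sampled known pixel
    # whose (color, guide) pair is closest under the given guide weight
    out = image.copy()
    known = np.argwhere(mask[..., 0] == 0)
    sample = known[rng.choice(len(known), size=256, replace=False)]
    for y, x in np.argwhere(mask[..., 0] == 1):
        cost = (np.linalg.norm(image[sample[:, 0], sample[:, 1]] - image[y, x], axis=1)
                + guide_weight * np.abs(guide[sample[:, 0], sample[:, 1]] - guide[y, x]))
        sy, sx = sample[np.argmin(cost)]
        out[y, x] = image[sy, sx]
    return out

def curate(candidates, mask):
    # hypothetical curation rule: prefer the candidate whose guide statistics inside
    # the hole best match those outside the hole
    def score(c):
        g = visual_guide(c)
        return abs(g[mask[..., 0] == 1].mean() - g[mask[..., 0] == 0].mean())
    return min(candidates, key=score)

if __name__ == "__main__":
    img = rng.random((96, 96, 3))
    mask = np.zeros((96, 96, 1)); mask[32:64, 32:64] = 1.0
    rough = deep_inpaint(img, mask)
    guide = visual_guide(rough)
    best = curate([guided_fill(rough, mask, guide, w) for w in (0.0, 0.5, 2.0)], mask)
    print(best.shape)
```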
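The next sketch relates to patent 11823313 (patch matches via a Gaussian mixture model). It shows, under assumptions, the core statistical idea: fit a mixture to observed patch-match transformations and draw candidate offsets from it when filling a target region. Real transformations could include rotation and scale; this sketch keeps translations only and uses scikit-learn's GaussianMixture as a stand-in.

```python
# Minimal sketch of modeling patch-match translations with a Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# pretend these (dy, dx) offsets came from matching sample patches within an image
observed_offsets = np.concatenate([
    rng.normal([40, 0], 2.0, size=(200, 2)),    # a dominant repeating structure
    rng.normal([0, -35], 3.0, size=(120, 2)),   # a second repetition direction
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(observed_offsets)

def candidate_sources(target_pixels, n_candidates=4):
    """For each target pixel, propose source coordinates via offsets drawn from the GMM."""
    samples, _ = gmm.sample(len(target_pixels) * n_candidates)
    samples = samples.round().astype(int).reshape(len(target_pixels), n_candidates, 2)
    return target_pixels[:, None, :] + samples

if __name__ == "__main__":
    targets = np.array([[64, 64], [65, 64], [64, 65]])
    print(candidate_sources(targets).shape)  # (3, 4, 2) candidate source coordinates
```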
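The next sketch relates to publication 20230360180 (and, by extension, its class-specific variant 20230368339). It illustrates two ideas named in the abstract: a decoder block that applies a global-code modulation followed by a second, refining modulation, and an encoder block that mixes in a Fourier-domain branch in the spirit of fast Fourier convolutions to enlarge the receptive field. Layer sizes and the exact modulation form are assumptions, not the patented architecture.

```python
# Minimal sketch of cascaded modulation and a Fourier-domain mixing block.
import torch
import torch.nn as nn

class CascadedModulationBlock(nn.Module):
    def __init__(self, channels, code_dim):
        super().__init__()
        self.global_affine = nn.Linear(code_dim, 2 * channels)    # scale/shift from the global code
        self.refine = nn.Conv2d(channels, 2 * channels, 3, padding=1)
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feat, global_code):
        scale, shift = self.global_affine(global_code).chunk(2, dim=1)
        g = feat * (1 + scale[:, :, None, None]) + shift[:, :, None, None]  # global modulation
        r_scale, r_shift = self.refine(g).chunk(2, dim=1)
        refined = g * (1 + r_scale) + r_shift                               # refining modulation
        return torch.relu(self.conv(refined))

class FourierMixBlock(nn.Module):
    """Spatial convolution plus a global, frequency-domain branch."""
    def __init__(self, channels):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, 3, padding=1)
        self.freq = nn.Conv2d(2 * channels, 2 * channels, 1)   # acts on real/imag parts

    def forward(self, x):
        spec = torch.fft.rfft2(x, norm="ortho")
        spec = self.freq(torch.cat([spec.real, spec.imag], dim=1))
        real, imag = spec.chunk(2, dim=1)
        global_branch = torch.fft.irfft2(torch.complex(real, imag),
                                         s=x.shape[-2:], norm="ortho")
        return torch.relu(self.local(x) + global_branch)

if __name__ == "__main__":
    feat, code = torch.rand(1, 32, 16, 16), torch.rand(1, 64)
    print(CascadedModulationBlock(32, 64)(feat, code).shape,
          FourierMixBlock(32)(feat).shape)
```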
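The next sketch relates to publication 20230259587 (object-aware training and masked regularization). It shows, under assumptions, the two training ideas the abstract names: hole masks built from individual object instance masks, and a discriminator penalty computed only inside the mask. The penalty form (an R1-style squared-gradient term) is an assumption chosen for illustration.

```python
# Minimal sketch of object-aware hole masks and a mask-restricted gradient penalty.
import torch
import torch.nn as nn

def object_aware_mask(instance_masks, p_keep=0.5):
    """Union of a random subset of object instance masks, used as the training hole."""
    keep = torch.rand(instance_masks.shape[0]) < p_keep
    if not keep.any():
        keep[torch.randint(instance_masks.shape[0], (1,))] = True
    return instance_masks[keep].amax(dim=0, keepdim=True)

def masked_gradient_penalty(discriminator, real_images, hole_mask):
    real_images = real_images.clone().requires_grad_(True)
    scores = discriminator(real_images).sum()
    (grads,) = torch.autograd.grad(scores, real_images, create_graph=True)
    # penalize the discriminator's gradients only inside the object mask
    return ((grads * hole_mask) ** 2).sum(dim=(1, 2, 3)).mean()

if __name__ == "__main__":
    disc = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.Flatten(), nn.LazyLinear(1))
    instances = (torch.rand(4, 64, 64) > 0.8).float()   # four toy instance masks
    hole = object_aware_mask(instances)                 # (1, 64, 64)
    imgs = torch.rand(2, 3, 64, 64)
    print(masked_gradient_penalty(disc, imgs, hole.unsqueeze(0)))
```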
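The next sketch relates to publication 20230145498 (multi-image inpainting with depth). It illustrates, under assumptions, the geometric step only: back-project source pixels with a predicted depth map, move them through a relative pose (R, t), project into the target view, and use the reprojected pixels to fill a hole. The joint depth-rescaling and pose refinement the abstract mentions is omitted, and all camera parameters here are made up.

```python
# Minimal sketch of depth-based reprojection for hole filling (pinhole model).
import numpy as np

def reproject(source, source_depth, K, R, t):
    h, w, _ = source.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pixels = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).astype(np.float64)
    # back-project source pixels to 3D, then move them into the target camera frame
    points = (np.linalg.inv(K) @ pixels.T) * source_depth.reshape(1, -1)
    points_tgt = R @ points + t[:, None]
    proj = K @ points_tgt
    uv = (proj[:2] / proj[2:]).round().astype(int)
    # splat source colors where they land inside the target image bounds
    target = np.zeros_like(source)
    valid = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h) & (points_tgt[2] > 0)
    target[uv[1, valid], uv[0, valid]] = source.reshape(-1, 3)[valid]
    return target

if __name__ == "__main__":
    src = np.random.rand(120, 160, 3)
    depth = np.full((120, 160), 2.0)
    K = np.array([[100.0, 0, 80], [0, 100.0, 60], [0, 0, 1]])
    R, t = np.eye(3), np.array([0.1, 0.0, 0.0])
    reprojected = reproject(src, depth, K, R, t)
    hole = np.zeros((120, 160, 1)); hole[40:80, 60:100] = 1.0
    filled = src * (1 - hole) + reprojected * hole   # fill the hole from the other view
    print(filled.shape)
```

A content-aware fill step, as in the abstract, would then refine any pixels the reprojection leaves empty.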
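The final sketch relates to patent 11551390 (deterministic pixel-wave patch offsets). It illustrates, under assumptions, why wave ordering makes the result deterministic and parallelizable: the hole is traversed in anti-diagonal waves, and every pixel in a wave depends only on neighbors from earlier waves, so pixels within a wave could be computed independently. The offset cost here is a toy color-difference score, not the patented matching criterion, and the algorithm operates per pixel rather than per pixel group.

```python
# Minimal sketch of deterministic wave-ordered offset propagation.
import numpy as np

def wave_fill_offsets(image, hole_mask, default_offset=(-8, 0)):
    """Fill hole pixels wave by wave (y + x = const) with propagated offsets."""
    h, w, _ = image.shape
    hole = hole_mask.astype(bool)
    offsets = np.zeros((h, w, 2), dtype=int)

    def cost(y, x, off):
        sy, sx = y + int(off[0]), x + int(off[1])
        if not (0 <= sy < h and 0 <= sx < w) or hole[sy, sx]:
            return np.inf
        # compare the candidate source pixel to already-known neighboring pixels
        refs = ([image[y - 1, x]] if y > 0 else []) + ([image[y, x - 1]] if x > 0 else [])
        return float(np.abs(image[sy, sx] - np.mean(refs, axis=0)).sum()) if refs else 0.0

    hole_pixels = list(zip(*np.nonzero(hole)))
    for d in range(h + w - 1):
        wave = [(y, x) for y, x in hole_pixels if y + x == d]
        for y, x in wave:  # pixels in a wave depend only on earlier waves: parallelizable
            candidates = [np.array(default_offset)]
            if y > 0 and hole[y - 1, x]:
                candidates.append(offsets[y - 1, x])   # propagate offset from the row above
            if x > 0 and hole[y, x - 1]:
                candidates.append(offsets[y, x - 1])   # propagate offset from the left
            best = min(candidates, key=lambda o: cost(y, x, o))
            offsets[y, x] = best
            sy, sx = y + best[0], x + best[1]
            if 0 <= sy < h and 0 <= sx < w and not hole[sy, sx]:
                image[y, x] = image[sy, sx]            # fill from the matched source pixel
    return offsets

if __name__ == "__main__":
    img = np.random.rand(64, 64, 3)
    hole = np.zeros((64, 64)); hole[20:40, 20:40] = 1.0
    print(wave_fill_offsets(img.copy(), hole).shape)
```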