Patents by Inventor Sohrab Amirghodsi
Sohrab Amirghodsi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12373915
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating modified digital images by utilizing a patch match algorithm to generate nearest neighbor fields for a second digital image based on a nearest neighbor field associated with a first digital image. For example, the disclosed systems can identify a nearest neighbor field associated with a first digital image of a first resolution. Based on the nearest neighbor field of the first digital image, the disclosed systems can utilize a patch match algorithm to generate a nearest neighbor field for a second digital image of a second resolution larger than the first resolution. The disclosed systems can further generate a modified digital image by filling a target region of the second digital image utilizing the generated nearest neighbor field.
Type: Grant
Filed: August 18, 2022
Date of Patent: July 29, 2025
Assignee: Adobe Inc.
Inventors: Sohrab Amirghodsi, Aliakbar Darabi, Elya Shechtman
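To illustrate the general idea of seeding a higher-resolution patch match search with a lower-resolution nearest neighbor field (NNF), here is a minimal sketch, assuming a hypothetical `nnf_low` array of (dy, dx) patch offsets and a simple nearest-neighbor upscale; it is not the patented method.

```python
import numpy as np

def upscale_nnf(nnf_low: np.ndarray, scale: int = 2) -> np.ndarray:
    """Upscale a low-resolution nearest neighbor field (H, W, 2) of (dy, dx)
    patch offsets by repeating entries and scaling the offsets, so a patch
    match pass at the higher resolution can start from a good initial guess
    instead of a random one."""
    nnf_high = np.repeat(np.repeat(nnf_low, scale, axis=0), scale, axis=1)
    return nnf_high * scale

# Toy example: a 4x4 NNF upscaled to 8x8.
rng = np.random.default_rng(0)
nnf_low = rng.integers(-3, 4, size=(4, 4, 2))
nnf_high = upscale_nnf(nnf_low, scale=2)
print(nnf_high.shape)  # (8, 8, 2)
```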
-
Patent number: 12367561
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for panoptically guiding digital image inpainting utilizing a panoptic inpainting neural network. In some embodiments, the disclosed systems utilize a panoptic inpainting neural network to generate an inpainted digital image according to a panoptic segmentation map that defines pixel regions corresponding to different panoptic labels. In some cases, the disclosed systems train a neural network utilizing a semantic discriminator that facilitates generation of digital images that are realistic while also conforming to a semantic segmentation. The disclosed systems generate and provide a panoptic inpainting interface to facilitate user interaction for inpainting digital images. In certain embodiments, the disclosed systems iteratively update an inpainted digital image based on changes to a panoptic segmentation map.
Type: Grant
Filed: October 3, 2022
Date of Patent: July 22, 2025
Assignee: Adobe Inc.
Inventors: Zhe Lin, Haitian Zheng, Elya Shechtman, Jianming Zhang, Jingwan Lu, Ning Xu, Qing Liu, Scott Cohen, Sohrab Amirghodsi
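As a rough illustration of how a panoptic segmentation map could condition an inpainting network, here is a minimal sketch that one-hot encodes per-pixel panoptic labels and stacks them with the image and hole mask; the function name and channel layout are assumptions, not the disclosed architecture.

```python
import numpy as np

def build_panoptic_inpainting_input(image, hole_mask, panoptic_map, num_labels):
    """Stack an RGB image, a binary hole mask, and a one-hot encoding of a
    panoptic segmentation map into a single conditioning tensor that a
    panoptic inpainting network could consume (channel layout assumed)."""
    one_hot = np.eye(num_labels, dtype=np.float32)[panoptic_map]   # (H, W, L)
    return np.concatenate([image.astype(np.float32),
                           hole_mask[..., None].astype(np.float32),
                           one_hot], axis=-1)

h, w, labels = 64, 64, 5
image = np.zeros((h, w, 3), dtype=np.float32)
hole = np.zeros((h, w), dtype=np.float32); hole[16:48, 16:48] = 1.0
panoptic = np.random.default_rng(1).integers(0, labels, size=(h, w))
x = build_panoptic_inpainting_input(image, hole, panoptic, labels)
print(x.shape)  # (64, 64, 9)
```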
-
Patent number: 12367586
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for panoptically guiding digital image inpainting utilizing a panoptic inpainting neural network. In some embodiments, the disclosed systems utilize a panoptic inpainting neural network to generate an inpainted digital image according to a panoptic segmentation map that defines pixel regions corresponding to different panoptic labels. In some cases, the disclosed systems train a neural network utilizing a semantic discriminator that facilitates generation of digital images that are realistic while also conforming to a semantic segmentation. The disclosed systems generate and provide a panoptic inpainting interface to facilitate user interaction for inpainting digital images. In certain embodiments, the disclosed systems iteratively update an inpainted digital image based on changes to a panoptic segmentation map.
Type: Grant
Filed: October 3, 2022
Date of Patent: July 22, 2025
Assignee: Adobe Inc.
Inventors: Zhe Lin, Haitian Zheng, Elya Shechtman, Jianming Zhang, Jingwan Lu, Ning Xu, Qing Liu, Scott Cohen, Sohrab Amirghodsi
-
Patent number: 12367562
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for panoptically guiding digital image inpainting utilizing a panoptic inpainting neural network. In some embodiments, the disclosed systems utilize a panoptic inpainting neural network to generate an inpainted digital image according to a panoptic segmentation map that defines pixel regions corresponding to different panoptic labels. In some cases, the disclosed systems train a neural network utilizing a semantic discriminator that facilitates generation of digital images that are realistic while also conforming to a semantic segmentation. The disclosed systems generate and provide a panoptic inpainting interface to facilitate user interaction for inpainting digital images. In certain embodiments, the disclosed systems iteratively update an inpainted digital image based on changes to a panoptic segmentation map.
Type: Grant
Filed: October 3, 2022
Date of Patent: July 22, 2025
Assignee: Adobe Inc.
Inventors: Zhe Lin, Haitian Zheng, Elya Shechtman, Jianming Zhang, Jingwan Lu, Ning Xu, Qing Liu, Scott Cohen, Sohrab Amirghodsi
-
Publication number: 20250182355
Abstract: Repeated distractor detection techniques for digital images are described. In an implementation, an input is received by a distractor detection system specifying a location within a digital image, e.g., a single input specifying a single set of coordinates with respect to a digital image. An input distractor is identified by the distractor detection system based on the location, e.g., using a machine-learning model. At least one candidate distractor is detected by the distractor detection system based on the input distractor, e.g., using a patch-matching technique. The distractor detection system is then configurable to verify that the at least one candidate distractor corresponds to the input distractor. The verification is performed by comparing candidate distractor image features extracted from the at least one candidate distractor with input distractor image features extracted from the input distractor.
Type: Application
Filed: December 4, 2023
Publication date: June 5, 2025
Applicant: Adobe Inc.
Inventors: Yuqian Zhou, Zhe Lin, Sohrab Amirghodsi, Elya Schechtman, Connelly Stuart Barnes, Chuong Minh Huynh
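The verification step described above compares features extracted from candidate distractors against features of the user-selected distractor. A minimal sketch of that kind of check, assuming precomputed feature vectors and a cosine-similarity threshold (both assumptions for illustration):

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-8):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def verify_candidates(input_feat, candidate_feats, threshold=0.8):
    """Keep only candidate distractors whose feature vectors are close enough
    to the user-selected input distractor's features."""
    return [i for i, f in enumerate(candidate_feats)
            if cosine_similarity(input_feat, f) >= threshold]

rng = np.random.default_rng(2)
input_feat = rng.normal(size=128)
candidates = [input_feat + rng.normal(scale=0.1, size=128),  # similar
              rng.normal(size=128)]                          # unrelated
print(verify_candidates(input_feat, candidates))  # likely [0]
```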
-
Patent number: 12299844
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating harmonized digital images utilizing a self-supervised image harmonization neural network. In particular, the disclosed systems can implement, and learn parameters for, a self-supervised image harmonization neural network to extract content from one digital image (disentangled from its appearance) and appearance from another digital image (disentangled from its content). For example, the disclosed systems can utilize a dual data augmentation method to generate diverse triplets for parameter learning (including input digital images, reference digital images, and pseudo ground truth digital images), via cropping a digital image with perturbations using three-dimensional color lookup tables ("LUTs").
Type: Grant
Filed: February 13, 2024
Date of Patent: May 13, 2025
Assignee: Adobe Inc.
Inventors: He Zhang, Yifan Jiang, Yilin Wang, Jianming Zhang, Kalyan Sunkavalli, Sarah Kong, Su Chen, Sohrab Amirghodsi, Zhe Lin
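As a toy illustration of perturbing a crop's appearance with a 3D color LUT to build a training triplet (original crop as pseudo ground truth, LUT-shifted crop as input), here is a minimal sketch; the nearest-bin lookup and random LUT perturbation are simplifications, not the disclosed augmentation.

```python
import numpy as np

def apply_color_lut(image, lut):
    """Apply a 3D color lookup table (N x N x N x 3, values in [0, 1]) to an
    RGB image in [0, 1] using nearest-bin lookup (trilinear interpolation
    omitted for brevity)."""
    n = lut.shape[0]
    idx = np.clip((image * (n - 1)).round().astype(int), 0, n - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

rng = np.random.default_rng(3)
crop = rng.random((32, 32, 3))                       # pseudo ground truth crop
identity = np.stack(np.meshgrid(*([np.linspace(0, 1, 8)] * 3),
                                indexing="ij"), axis=-1)
perturbed_lut = np.clip(identity + rng.normal(scale=0.05, size=identity.shape), 0, 1)
augmented = apply_color_lut(crop, perturbed_lut)     # appearance-shifted input
print(augmented.shape)  # (32, 32, 3)
```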
-
Publication number: 20250139748
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating modified digital images utilizing a guided inpainting approach that implements a patch match model informed by a deep visual guide. In particular, the disclosed systems can utilize a visual guide algorithm to automatically generate guidance maps to help identify replacement pixels for inpainting regions of digital images utilizing a patch match model. For example, the disclosed systems can generate guidance maps in the form of structure maps, depth maps, or segmentation maps that respectively indicate the structure, depth, or segmentation of different portions of digital images. Additionally, the disclosed systems can implement a patch match model to identify replacement pixels for filling regions of digital images according to the structure, depth, and/or segmentation of the digital images.
Type: Application
Filed: January 6, 2025
Publication date: May 1, 2025
Inventors: Sohrab Amirghodsi, Lingzhi Zhang, Zhe Lin, Connelly Barnes, Elya Shechtman
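One way a guidance map can inform a patch-match-style search is by adding a guide term to the patch distance. A minimal sketch of that idea, with the weighting and the guide itself (e.g., a depth or segmentation crop) as assumptions:

```python
import numpy as np

def guided_patch_distance(src_patch, dst_patch, src_guide, dst_guide, guide_weight=0.5):
    """Score a candidate patch by combining plain color distance with a
    distance over a guidance map (structure, depth, or segmentation), so
    replacement pixels are chosen to respect scene structure."""
    color_d = np.mean((src_patch - dst_patch) ** 2)
    guide_d = np.mean((src_guide - dst_guide) ** 2)
    return (1.0 - guide_weight) * color_d + guide_weight * guide_d

rng = np.random.default_rng(4)
p, q = rng.random((7, 7, 3)), rng.random((7, 7, 3))
gp, gq = rng.random((7, 7)), rng.random((7, 7))
print(guided_patch_distance(p, q, gp, gq))
```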
-
Publication number: 20250124544
Abstract: Systems and methods for upsampling low-resolution content within a high-resolution image include obtaining a composite image and a mask. The composite image includes a high-resolution region and a low-resolution region. An upsampling network identifies the low-resolution region of the composite image based on the mask and generates an upsampled composite image based on the composite image and the mask. The upsampled composite image comprises higher frequency details in the low-resolution region than the composite image.
Type: Application
Filed: October 16, 2023
Publication date: April 17, 2025
Inventors: Taesung Park, Qing Liu, Zhe Lin, Sohrab Amirghodsi, Elya Shechtman
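A minimal sketch of the mask-guided compositing step this implies: resynthesized detail is blended back only into the low-resolution region indicated by the mask, leaving sharp pixels untouched. The `upsampled` array stands in for the output of an upsampling network; this is an illustration, not the disclosed model.

```python
import numpy as np

def blend_upsampled(composite, upsampled, mask):
    """Blend re-synthesized detail into only the low-resolution region
    indicated by the mask, keeping the already sharp region of the
    composite unchanged."""
    m = mask[..., None].astype(np.float32)    # 1 inside the blurry region
    return m * upsampled + (1.0 - m) * composite

rng = np.random.default_rng(5)
composite = rng.random((64, 64, 3))
upsampled = rng.random((64, 64, 3))           # stand-in for network output
mask = np.zeros((64, 64)); mask[8:40, 8:40] = 1.0
print(blend_upsampled(composite, upsampled, mask).shape)
```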
-
Patent number: 12271804
Abstract: Embodiments are disclosed for performing wire segmentation of images using machine learning. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input image, generating, by a first trained neural network model, a global probability map representation of the input image indicating, for each pixel, a probability that the pixel includes a representation of wires, and identifying regions of the input image indicated as including the representation of wires. The disclosed systems and methods further comprise, for each region from the identified regions, concatenating the region and information from the global probability map to create a concatenated input, and generating, by a second trained neural network model, a local probability map representation of the region based on the concatenated input, indicating pixels of the region including representations of wires. The disclosed systems and methods further comprise aggregating local probability maps for each region.
Type: Grant
Filed: July 21, 2022
Date of Patent: April 8, 2025
Assignee: Adobe Inc.
Inventors: Mang Tik Chiu, Connelly Barnes, Zijun Wei, Zhe Lin, Yuqian Zhou, Xuaner Zhang, Sohrab Amirghodsi, Florian Kainz, Elya Shechtman
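The coarse-to-fine flow described above (global probability map, then per-region refinement on a concatenated input) can be sketched as a simple tiling loop. The tile size, threshold, and the stand-in `local_model` below are assumptions for illustration only.

```python
import numpy as np

def refine_wire_regions(image, global_prob, local_model, tile=64, thresh=0.5):
    """Coarse-to-fine wire segmentation sketch: tiles whose global wire
    probability exceeds a threshold are re-scored by a local model that sees
    the image tile concatenated with the corresponding global-map crop, and
    the local results are aggregated into one map."""
    h, w = global_prob.shape
    out = np.zeros_like(global_prob)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            g = global_prob[y:y + tile, x:x + tile]
            if g.max() < thresh:
                continue                          # no likely wires in this tile
            region = image[y:y + tile, x:x + tile]
            concat = np.concatenate([region, g[..., None]], axis=-1)
            out[y:y + tile, x:x + tile] = local_model(concat)
    return out

# Toy "local model": just echoes the global-map channel it was given.
local_model = lambda x: x[..., -1]
img = np.zeros((128, 128, 3), dtype=np.float32)
gp = np.zeros((128, 128), dtype=np.float32); gp[10:20, :] = 0.9
print(refine_wire_regions(img, gp, local_model).max())  # 0.9
```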
-
Patent number: 12249051
Abstract: Techniques are disclosed for filling or otherwise replacing a target region of a primary image with a corresponding region of an auxiliary image. The filling or replacing can be done with an overlay (no subtractive process need be run on the primary image). Because the primary and auxiliary images may not be aligned, both geometric and photometric transformations are applied to the primary and/or auxiliary images. For instance, a geometric transformation of the auxiliary image is performed, to better align features of the auxiliary image with corresponding features of the primary image. Also, a photometric transformation of the auxiliary image is performed, to better match color of one or more pixels of the auxiliary image with color of corresponding one or more pixels of the primary image. The corresponding region of the transformed auxiliary image is then copied and overlaid on the target region of the primary image.
Type: Grant
Filed: February 17, 2022
Date of Patent: March 11, 2025
Assignee: Adobe Inc.
Inventors: Connelly Barnes, Sohrab Amirghodsi, Elya Shechtman
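A minimal sketch of the photometric-match-then-overlay part of this flow, assuming the auxiliary image has already been geometrically aligned (the geometric warp is omitted here) and using a simple per-channel gain/bias match as a stand-in for a full photometric transformation:

```python
import numpy as np

def photometric_match(aux, primary, mask):
    """Per-channel gain/bias so the auxiliary image's colors match the primary
    image's colors, estimated from the pixels outside the target region."""
    out = aux.astype(np.float32).copy()
    ring = ~mask                                  # statistics from outside the hole
    for c in range(3):
        a, p = aux[..., c][ring], primary[..., c][ring]
        gain = p.std() / (a.std() + 1e-8)
        out[..., c] = (aux[..., c] - a.mean()) * gain + p.mean()
    return np.clip(out, 0.0, 1.0)

def overlay_fill(primary, aux_aligned, mask):
    """Copy the (aligned, color-matched) auxiliary pixels over the target
    region of the primary image as an overlay."""
    return np.where(mask[..., None], aux_aligned, primary)

rng = np.random.default_rng(6)
primary = rng.random((64, 64, 3)); aux = rng.random((64, 64, 3)) * 0.5
mask = np.zeros((64, 64), dtype=bool); mask[20:40, 20:40] = True
filled = overlay_fill(primary, photometric_match(aux, primary, mask), mask)
print(filled.shape)
```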
-
Publication number: 20250069203
Abstract: A method, non-transitory computer readable medium, apparatus, and system for image generation are described. An embodiment of the present disclosure includes obtaining an input image, an inpainting mask, and a plurality of content preservation values corresponding to different regions of the inpainting mask, and identifying a plurality of mask bands of the inpainting mask based on the plurality of content preservation values. An image generation model generates an output image based on the input image and the inpainting mask. The output image is generated in a plurality of phases. Each of the plurality of phases uses a corresponding mask band of the plurality of mask bands as an input.
Type: Application
Filed: August 24, 2023
Publication date: February 27, 2025
Inventors: Yuqian Zhou, Krishna Kumar Singh, Benjamin Delarre, Zhe Lin, Jingwan Lu, Taesung Park, Sohrab Amirghodsi, Elya Shechtman
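A minimal sketch of deriving mask bands from per-pixel content preservation values, one band per generation phase; the thresholds and band semantics are assumptions for illustration.

```python
import numpy as np

def mask_bands(inpaint_mask, preservation, thresholds=(0.33, 0.66)):
    """Split an inpainting mask into bands according to per-pixel content
    preservation values: low-preservation pixels can be resynthesized freely
    in an early phase, high-preservation pixels are only lightly altered in a
    later phase."""
    bands, lo = [], 0.0
    for hi in list(thresholds) + [1.0 + 1e-9]:
        bands.append(inpaint_mask & (preservation >= lo) & (preservation < hi))
        lo = hi
    return bands

rng = np.random.default_rng(7)
mask = np.zeros((32, 32), dtype=bool); mask[4:28, 4:28] = True
preservation = rng.random((32, 32))
for i, band in enumerate(mask_bands(mask, preservation)):
    print(f"band {i}: {int(band.sum())} pixels")
```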
-
Publication number: 20250061626
Abstract: Techniques for performing a digital operation on a digital image are described along with methods and systems employing such techniques. According to the techniques, an input (e.g., an input stroke) is received by, for example, a processing system. Based upon the input, an area of the digital image upon which a digital operation (e.g., for removal of a distractor within the area) is to be performed is determined. In an implementation, one or more metrics of an input stroke are analyzed, typically in real time, to at least partially determine the area upon which the digital operation is to be performed. In an additional or alternative implementation, the input includes a first point, a second point and a connector, and the area is at least partially determined by a location of the first point relative to a location of the second point and/or by locations of the first point and/or second point relative to one or more edges of the digital image.
Type: Application
Filed: May 24, 2024
Publication date: February 20, 2025
Applicant: Adobe Inc.
Inventors: Xiaoyang Liu, Zhe Lin, Yuqian Zhou, Sohrab Amirghodsi, Sarah Jane Stuckey, Sakshi Gupta, Guotong Feng, Elya Schechtman, Connelly Stuart Barnes, Betty Leong
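One simple way to turn stroke metrics into an operation area is to pad the stroke's bounding box in proportion to its length and clamp to the image edges. The padding rule below is purely an assumption used to illustrate the idea of deriving an area from stroke metrics.

```python
import numpy as np

def stroke_to_area(points, image_shape, base_pad=8):
    """Turn an input stroke (list of (x, y) points) into a padded bounding box
    for the removal operation; longer strokes get more padding, and the box is
    clamped to the image edges (padding rule is an assumption)."""
    pts = np.asarray(points, dtype=np.float32)
    length = float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))
    pad = base_pad + 0.05 * length
    h, w = image_shape[:2]
    x0 = max(0, int(pts[:, 0].min() - pad)); x1 = min(w, int(pts[:, 0].max() + pad))
    y0 = max(0, int(pts[:, 1].min() - pad)); y1 = min(h, int(pts[:, 1].max() + pad))
    return x0, y0, x1, y1

stroke = [(40, 50), (60, 52), (85, 58)]
print(stroke_to_area(stroke, (128, 128)))
```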
-
Publication number: 20250054115
Abstract: Various disclosed embodiments are directed to resizing, via down-sampling and up-sampling, a high-resolution input image in order to meet machine learning model low-resolution processing requirements, while also producing a high-resolution output image for image inpainting via a machine learning model. Some embodiments use a refinement model to refine the low-resolution inpainting result from the machine learning model such that there will be clear content with high resolution both inside and outside of the mask region in the output. Some embodiments employ a new model architecture for the machine learning model that produces the inpainting result: an advanced Cascaded Modulated Generative Adversarial Network (CM-GAN) that includes Fast Fourier Convolution (FFC) layers at the skip connections between the encoder and decoder.
Type: Application
Filed: August 9, 2023
Publication date: February 13, 2025
Inventors: Zhe LIN, Yuqian ZHOU, Sohrab AMIRGHODSI, Qing LIU, Elya SHECHTMAN, Connelly BARNES, Haitian ZHENG
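A minimal sketch of the resize-then-composite flow: downsample to a model's working resolution, run a stand-in inpainting function, upsample its result, and paste it back only inside the mask so content outside the hole keeps its original resolution. The nearest-neighbor resize and the fixed working size are simplifications; the refinement model and CM-GAN are not shown.

```python
import numpy as np

def resize_nearest(img, size):
    """Nearest-neighbor resize (stand-in for a proper resampler)."""
    h, w = img.shape[:2]
    ys = (np.arange(size[0]) * h / size[0]).astype(int)
    xs = (np.arange(size[1]) * w / size[1]).astype(int)
    return img[ys][:, xs]

def highres_inpaint(image, mask, lowres_inpaint_fn, work=256):
    """Run a low-resolution inpainting model on a downsampled image, then
    upsample its result and paste it back only inside the mask, keeping the
    original full-resolution pixels everywhere else."""
    small = lowres_inpaint_fn(resize_nearest(image, (work, work)),
                              resize_nearest(mask, (work, work)))
    big = resize_nearest(small, image.shape[:2])
    return np.where(mask[..., None] > 0.5, big, image)

fake_model = lambda img, m: img * 0 + 0.5          # stand-in inpainting model
image = np.random.default_rng(8).random((512, 512, 3))
mask = np.zeros((512, 512)); mask[100:200, 100:200] = 1.0
print(highres_inpaint(image, mask, fake_model).shape)  # (512, 512, 3)
```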
-
Publication number: 20250054116
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate inpainted digital images utilizing a cascaded modulation inpainting neural network. For example, the disclosed systems utilize a cascaded modulation inpainting neural network that includes cascaded modulation decoder layers. For example, in one or more decoder layers, the disclosed systems start with global code modulation that captures the global-range image structures, followed by an additional modulation that refines the global predictions. Accordingly, in one or more implementations, the image inpainting system provides a mechanism to correct distorted local details. Furthermore, in one or more implementations, the image inpainting system leverages fast Fourier convolution blocks within different resolution layers of the encoder architecture to expand the receptive field of the encoder and to allow the network encoder to better capture global structure.
Type: Application
Filed: October 28, 2024
Publication date: February 13, 2025
Inventors: Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Elya Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi
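To illustrate why fast Fourier convolutions expand the receptive field, here is a toy version of the global (spectral) branch only: features are transformed to the frequency domain, reweighted per frequency, and transformed back, so every output location depends on the whole feature map. The random weights stand in for learned parameters; this is not the disclosed network.

```python
import numpy as np

def spectral_mix(features, freq_weights):
    """Toy global branch of a fast Fourier convolution (FFC): go to the
    frequency domain, reweight every frequency, and come back, giving each
    output pixel an image-wide receptive field."""
    spec = np.fft.rfft2(features, axes=(0, 1))       # (H, W//2+1, C) complex
    spec = spec * freq_weights                        # per-frequency reweighting
    return np.fft.irfft2(spec, s=features.shape[:2], axes=(0, 1))

rng = np.random.default_rng(9)
feat = rng.random((32, 32, 8))
weights = rng.random((32, 17, 8))                     # matches rfft2 output shape
print(spectral_mix(feat, weights).shape)  # (32, 32, 8)
```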
-
Patent number: 12204610
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for training a generative inpainting neural network to accurately generate inpainted digital images via object-aware training and/or masked regularization. For example, the disclosed systems utilize an object-aware training technique to learn parameters for a generative inpainting neural network based on masking individual object instances depicted within sample digital images of a training dataset. In some embodiments, the disclosed systems also (or alternatively) utilize a masked regularization technique as part of training to prevent overfitting by penalizing a discriminator neural network utilizing a regularization term that is based on an object mask. In certain cases, the disclosed systems further generate an inpainted digital image utilizing a trained generative inpainting model with parameters learned via the object-aware training and/or the masked regularization.
Type: Grant
Filed: February 14, 2022
Date of Patent: January 21, 2025
Assignee: Adobe Inc.
Inventors: Zhe Lin, Haitian Zheng, Jingwan Lu, Scott Cohen, Jianming Zhang, Ning Xu, Elya Shechtman, Connelly Barnes, Sohrab Amirghodsi
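A minimal sketch of the object-aware masking idea: pick one instance from an instance segmentation map and use its (slightly dilated) footprint as the training hole, so the generator practices completing whole objects. The one-pixel dilation and instance selection below are assumptions; the masked regularization term is not shown.

```python
import numpy as np

def object_aware_mask(instance_map, rng):
    """Pick one object instance from an instance segmentation map, dilate it
    slightly, and return it as the hole mask for a training sample."""
    ids = [i for i in np.unique(instance_map) if i != 0]   # 0 = background
    chosen = rng.choice(ids)
    mask = (instance_map == chosen)
    # Crude one-pixel dilation in the four axis directions.
    padded = np.pad(mask, 1)
    dilated = (padded[:-2, 1:-1] | padded[2:, 1:-1] |
               padded[1:-1, :-2] | padded[1:-1, 2:] | mask)
    return dilated

rng = np.random.default_rng(10)
inst = np.zeros((32, 32), dtype=int); inst[5:12, 5:12] = 1; inst[20:30, 18:28] = 2
hole = object_aware_mask(inst, rng)
print(int(hole.sum()))
```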
-
Patent number: 12190484
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating modified digital images utilizing a guided inpainting approach that implements a patch match model informed by a deep visual guide. In particular, the disclosed systems can utilize a visual guide algorithm to automatically generate guidance maps to help identify replacement pixels for inpainting regions of digital images utilizing a patch match model. For example, the disclosed systems can generate guidance maps in the form of structure maps, depth maps, or segmentation maps that respectively indicate the structure, depth, or segmentation of different portions of digital images. Additionally, the disclosed systems can implement a patch match model to identify replacement pixels for filling regions of digital images according to the structure, depth, and/or segmentation of the digital images.
Type: Grant
Filed: March 15, 2021
Date of Patent: January 7, 2025
Assignee: Adobe Inc.
Inventors: Sohrab Amirghodsi, Lingzhi Zhang, Zhe Lin, Connelly Barnes, Elya Shechtman
-
Publication number: 20240428384
Abstract: Inpainting dispatch techniques for digital images are described. In one or more examples, an inpainting system includes a plurality of inpainting modules. The inpainting modules are configured to employ a variety of different techniques, respectively, as part of performing an inpainting operation. An inpainting dispatch module is also included as part of the inpainting system that is configured to select which of the plurality of inpainting modules are to be used to perform an inpainting operation for one or more regions in a digital image, automatically and without user intervention.
Type: Application
Filed: June 22, 2023
Publication date: December 26, 2024
Applicant: Adobe Inc.
Inventors: Yuqian Zhou, Zhe Lin, Xiaoyang Liu, Sohrab Amirghodsi, Qing Liu, Lingzhi Zhang, Elya Schechtman, Connelly Stuart Barnes
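A minimal sketch of a dispatcher that routes a hole to one of several inpainting modules based on a simple property of the region; the routing rule, threshold, and module names are assumptions, not the disclosed selection logic.

```python
import numpy as np

def dispatch_inpainting(mask, modules, large_thresh=0.10):
    """Pick an inpainting module for a hole automatically: small holes go to a
    fast patch-based module, large holes go to a generative module."""
    hole_fraction = float(mask.mean())
    name = "generative" if hole_fraction > large_thresh else "patch_based"
    return name, modules[name]

modules = {
    "patch_based": lambda img, m: img,          # stand-in fast fill
    "generative": lambda img, m: img * 0.5,     # stand-in neural fill
}
mask = np.zeros((100, 100)); mask[0:50, 0:50] = 1.0
name, fill = dispatch_inpainting(mask, modules)
print(name)  # generative (25% of the pixels are masked)
```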
-
Patent number: 12165295
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate inpainted digital images utilizing a cascaded modulation inpainting neural network. For example, the disclosed systems utilize a cascaded modulation inpainting neural network that includes cascaded modulation decoder layers. For example, in one or more decoder layers, the disclosed systems start with global code modulation that captures the global-range image structures, followed by an additional modulation that refines the global predictions. Accordingly, in one or more implementations, the image inpainting system provides a mechanism to correct distorted local details. Furthermore, in one or more implementations, the image inpainting system leverages fast Fourier convolution blocks within different resolution layers of the encoder architecture to expand the receptive field of the encoder and to allow the network encoder to better capture global structure.
Type: Grant
Filed: May 4, 2022
Date of Patent: December 10, 2024
Assignee: Adobe Inc.
Inventors: Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Elya Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi
-
Publication number: 20240404013
Abstract: Embodiments include systems and methods for generative image filling based on text and a reference image. In one aspect, the system obtains an input image, a reference image, and a text prompt. Then, the system encodes the reference image to obtain an image embedding and encodes the text prompt to obtain a text embedding. Subsequently, a composite image is generated based on the input image, the image embedding, and the text embedding.
Type: Application
Filed: November 21, 2023
Publication date: December 5, 2024
Inventors: Yuqian Zhou, Krishna Kumar Singh, Zhe Lin, Qing Liu, Zhifei Zhang, Sohrab Amirghodsi, Elya Shechtman, Jingwan Lu
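A minimal control-flow sketch of combining the two conditioning signals: encode the reference image and the text prompt, concatenate the embeddings, and condition a generator on them together with the input image and mask. Every component below is a stand-in; none of the encoders or the generator reflect the disclosed model.

```python
import numpy as np

def generative_fill(input_image, mask, reference_image, prompt,
                    image_encoder, text_encoder, generator):
    """Generative fill conditioned on both a reference image and a text
    prompt: encode each, concatenate the embeddings, and condition the
    generator on them (all components here are stand-ins)."""
    cond = np.concatenate([image_encoder(reference_image), text_encoder(prompt)])
    return generator(input_image, mask, cond)

rng = np.random.default_rng(11)
image_encoder = lambda img: img.mean(axis=(0, 1))               # 3-dim "embedding"
text_encoder = lambda s: np.array([len(s), s.count(" "), 1.0])  # toy text features
generator = lambda img, m, c: np.where(m[..., None] > 0, c[:3], img)

img = rng.random((64, 64, 3)); ref = rng.random((64, 64, 3))
mask = np.zeros((64, 64)); mask[10:30, 10:30] = 1.0
out = generative_fill(img, mask, ref, "a red barn in a field",
                      image_encoder, text_encoder, generator)
print(out.shape)
```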
-
Patent number: 12159380
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that implement an inpainting framework having computer-implemented machine learning models to generate high-resolution inpainting results. For instance, in one or more embodiments, the disclosed systems generate an inpainted digital image utilizing a deep inpainting neural network from a digital image having a replacement region. The disclosed systems further generate, utilizing a visual guide algorithm, at least one deep visual guide from the inpainted digital image. Using a patch match model and the at least one deep visual guide, the disclosed systems generate a plurality of modified digital images from the digital image by replacing the region of pixels of the digital image with replacement pixels. Additionally, the disclosed systems select, utilizing an inpainting curation model, a modified digital image from the plurality of modified digital images to provide to a client device.
Type: Grant
Filed: May 25, 2022
Date of Patent: December 3, 2024
Assignee: Adobe Inc.
Inventors: Connelly Barnes, Elya Shechtman, Sohrab Amirghodsi, Zhe Lin
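A control-flow sketch of the framework's stages: a deep inpainting pass produces an initial fill, visual guides are derived from it, several patch-match variants produce candidate results, and a curation scorer picks one to return. All components below are stand-ins used only to show how the stages chain together.

```python
import numpy as np

def guided_inpainting_pipeline(image, mask, deep_inpaint, make_guides,
                               patch_match_variants, curation_score):
    """Chain the stages: deep inpainting pass -> derive visual guides ->
    produce candidates with several patch-match variants -> return the
    candidate that scores best under a curation model."""
    initial = deep_inpaint(image, mask)
    guides = make_guides(initial)
    candidates = [pm(image, mask, guides) for pm in patch_match_variants]
    return max(candidates, key=curation_score)

rng = np.random.default_rng(12)
image = rng.random((64, 64, 3)); mask = np.zeros((64, 64)); mask[20:40, 20:40] = 1
deep_inpaint = lambda img, m: img * 0.9                        # stand-in network
make_guides = lambda img: {"structure": img.mean(axis=-1)}     # stand-in guide
variants = [lambda img, m, g, s=s: np.clip(img + s, 0, 1) for s in (0.0, 0.05)]
curation_score = lambda cand: -abs(cand.mean() - image.mean())  # toy scorer
print(guided_inpainting_pipeline(image, mask, deep_inpaint, make_guides,
                                 variants, curation_score).shape)
```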