Patents by Inventor Connelly Barnes

Connelly Barnes has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240046429
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating neural network based perceptual artifact segmentations in synthetic digital image content. The disclosed system utilizes neural networks to detect perceptual artifacts in digital images in connection with generating or modifying digital images. The disclosed system determines a digital image including one or more synthetically modified portions. The disclosed system utilizes an artifact segmentation machine-learning model to detect perceptual artifacts in the synthetically modified portion(s). The artifact segmentation machine-learning model is trained to detect perceptual artifacts based on labeled artifact regions of synthetic training digital images. Additionally, the disclosed system utilizes the artifact segmentation machine-learning model in an iterative inpainting process. The disclosed system utilizes one or more digital image inpainting models to inpaint a digital image.
    Type: Application
    Filed: July 27, 2022
    Publication date: February 8, 2024
    Inventors: Sohrab Amirghodsi, Lingzhi Zhang, Zhe Lin, Elya Shechtman, Yuqian Zhou, Connelly Barnes
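The iterative process the abstract describes (inpaint, segment perceptual artifacts in the result, re-inpaint only the flagged region) can be sketched as a simple loop. This is a minimal illustration, not the patented implementation; `inpaint` and `find_artifacts` are hypothetical stand-ins for the inpainting model and the artifact segmentation model.

```python
import numpy as np

def iterative_inpaint(image, hole, inpaint, find_artifacts, max_iters=3):
    """Inpaint the hole, segment perceptual artifacts inside the filled
    region, then re-inpaint only the artifact mask until none remain
    (or an iteration budget is exhausted)."""
    mask = hole.copy()
    for _ in range(max_iters):
        image = inpaint(image, mask)          # fill the current mask
        mask = find_artifacts(image, mask)    # flag artifact pixels inside it
        if not mask.any():                    # stop when no artifacts remain
            break
    return image
```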
  • Publication number: 20240037717
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating neural network based perceptual artifact segmentations in synthetic digital image content. The disclosed system utilizes neural networks to detect perceptual artifacts in digital images in connection with generating or modifying digital images. The disclosed system determines a digital image including one or more synthetically modified portions. The disclosed system utilizes an artifact segmentation machine-learning model to detect perceptual artifacts in the synthetically modified portion(s). The artifact segmentation machine-learning model is trained to detect perceptual artifacts based on labeled artifact regions of synthetic training digital images. Additionally, the disclosed system utilizes the artifact segmentation machine-learning model in an iterative inpainting process. The disclosed system utilizes one or more digital image inpainting models to inpaint a digital image.
    Type: Application
    Filed: July 27, 2022
    Publication date: February 1, 2024
    Inventors: Sohrab Amirghodsi, Lingzhi Zhang, Zhe Lin, Elya Shechtman, Yuqian Zhou, Connelly Barnes
  • Publication number: 20240028871
    Abstract: Embodiments are disclosed for performing wire segmentation of images using machine learning. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input image, generating, by a first trained neural network model, a global probability map representation of the input image indicating, for each pixel, the probability that it includes a representation of wires, and identifying regions of the input image indicated as including the representation of wires. The disclosed systems and methods further comprise, for each region from the identified regions, concatenating the region and information from the global probability map to create a concatenated input, and generating, by a second trained neural network model, a local probability map representation of the region based on the concatenated input, indicating the pixels of the region that include representations of wires. The disclosed systems and methods further comprise aggregating the local probability maps for each region.
    Type: Application
    Filed: July 21, 2022
    Publication date: January 25, 2024
    Applicant: Adobe Inc.
    Inventors: Mang Tik CHIU, Connelly BARNES, Zijun WEI, Zhe LIN, Yuqian ZHOU, Xuaner ZHANG, Sohrab AMIRGHODSI, Florian KAINZ, Elya SHECHTMAN
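The two-stage global/local pipeline described above can be sketched as follows. This is a toy illustration under stated assumptions: `global_model` and `local_model` are hypothetical stand-ins for the first and second trained neural networks, and fixed square tiles stand in for the identified regions.

```python
import numpy as np

def global_model(image):
    # Stand-in for the first network: a coarse per-pixel wire probability map.
    return (image > 0.5).astype(float) * 0.9

def local_model(concat_input):
    # Stand-in for the second network: refines the local probability map from
    # the concatenated (region crop, global-map crop) input.
    region, global_crop = concat_input[..., 0], concat_input[..., 1]
    return np.clip(0.5 * region + 0.5 * global_crop, 0.0, 1.0)

def segment_wires(image, tile=4, threshold=0.5):
    h, w = image.shape
    global_map = global_model(image)
    out = np.zeros_like(global_map)
    # Only tiles the global map flags as containing wires are refined
    # locally; the per-tile results are aggregated into one map.
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            g = global_map[y:y + tile, x:x + tile]
            if g.max() < threshold:
                continue
            region = image[y:y + tile, x:x + tile]
            concat = np.stack([region, g], axis=-1)  # concatenated input
            out[y:y + tile, x:x + tile] = local_model(concat)
    return out
```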
  • Patent number: 11854244
    Abstract: A panoptic labeling system includes a modified panoptic labeling neural network (“modified PLNN”) that is trained to generate labels for pixels in an input image. The panoptic labeling system generates modified training images by combining training images with mask instances from annotated images. The modified PLNN determines a set of labels representing categories of objects depicted in the modified training images. The modified PLNN also determines a subset of the labels representing categories of objects depicted in the input image. For each mask pixel in a modified training image, the modified PLNN calculates a probability indicating whether the mask pixel has the same label as an object pixel. The modified PLNN generates a mask label for each mask pixel, based on the probability. The panoptic labeling system provides the mask label to, for example, a digital graphics editing system that uses the labels to complete an infill operation.
    Type: Grant
    Filed: October 20, 2022
    Date of Patent: December 26, 2023
    Assignee: ADOBE INC.
    Inventors: Sohrab Amirghodsi, Zhe Lin, Yilin Wang, Tianshu Yu, Connelly Barnes, Elya Shechtman
  • Publication number: 20230385992
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that implement an inpainting framework having computer-implemented machine learning models to generate high-resolution inpainting results. For instance, in one or more embodiments, the disclosed systems generate an inpainted digital image utilizing a deep inpainting neural network from a digital image having a replacement region. The disclosed systems further generate, utilizing a visual guide algorithm, at least one deep visual guide from the inpainted digital image. Using a patch match model and the at least one deep visual guide, the disclosed systems generate a plurality of modified digital images from the digital image by replacing the region of pixels of the digital image with replacement pixels. Additionally, the disclosed systems select, utilizing an inpainting curation model, a modified digital image from the plurality of modified digital images to provide to a client device.
    Type: Application
    Filed: May 25, 2022
    Publication date: November 30, 2023
    Inventors: Connelly Barnes, Elya Shechtman, Sohrab Amirghodsi, Zhe Lin
  • Publication number: 20230368339
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate inpainted digital images utilizing a class-specific cascaded modulation inpainting neural network. For example, the disclosed systems utilize a class-specific cascaded modulation inpainting neural network that includes cascaded modulation decoder layers to generate replacement pixels portraying a particular target object class. To illustrate, in response to user selection of a replacement region and target object class, the disclosed systems utilize a class-specific cascaded modulation inpainting neural network corresponding to the target object class to generate an inpainted digital image that portrays an instance of the target object class within the replacement region.
    Type: Application
    Filed: May 13, 2022
    Publication date: November 16, 2023
    Inventors: Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Elya Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi
  • Publication number: 20230360180
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate inpainted digital images utilizing a cascaded modulation inpainting neural network. For example, the disclosed systems utilize a cascaded modulation inpainting neural network that includes cascaded modulation decoder layers. For example, in one or more decoder layers, the disclosed systems start with global code modulation that captures the global-range image structures followed by an additional modulation that refines the global predictions. Accordingly, in one or more implementations, the image inpainting system provides a mechanism to correct distorted local details. Furthermore, in one or more implementations, the image inpainting system leverages fast Fourier convolution blocks within different resolution layers of the encoder architecture to expand the receptive field of the encoder and to allow the network encoder to better capture global structure.
    Type: Application
    Filed: May 4, 2022
    Publication date: November 9, 2023
    Inventors: Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Elya Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi
  • Publication number: 20230259587
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for training a generative inpainting neural network to accurately generate inpainted digital images via object-aware training and/or masked regularization. For example, the disclosed systems utilize an object-aware training technique to learn parameters for a generative inpainting neural network based on masking individual object instances depicted within sample digital images of a training dataset. In some embodiments, the disclosed systems also (or alternatively) utilize a masked regularization technique as part of training to prevent overfitting by penalizing a discriminator neural network utilizing a regularization term that is based on an object mask.
    Type: Application
    Filed: February 14, 2022
    Publication date: August 17, 2023
    Inventors: Zhe Lin, Haitian Zheng, Jingwan Lu, Scott Cohen, Jianming Zhang, Ning Xu, Elya Shechtman, Connelly Barnes, Sohrab Amirghodsi
  • Publication number: 20230145498
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for accurately restoring missing pixels within a hole region of a target image utilizing multi-image inpainting techniques based on incorporating geometric depth information. For example, in various implementations, the disclosed systems utilize a depth prediction of a source image as well as camera relative pose parameters. Additionally, in some implementations, the disclosed systems jointly optimize the depth rescaling and camera pose parameters before generating the reprojected image to further increase the accuracy of the reprojected image. Further, in various implementations, the disclosed systems utilize the reprojected image in connection with a content-aware fill model to generate a refined composite image that includes the target image having a hole, where the hole is filled in based on the reprojected image of the source image.
    Type: Application
    Filed: November 5, 2021
    Publication date: May 11, 2023
    Inventors: Yunhan Zhao, Connelly Barnes, Yuqian Zhou, Sohrab Amirghodsi, Elya Shechtman
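The reprojection step at the core of this abstract (back-project each source pixel with its predicted depth, apply the relative camera pose, and project into the target view) can be sketched with a pinhole camera model. This is a minimal forward-warping sketch, assuming a shared intrinsic matrix `K` and a relative pose `(R, t)`; it omits the joint depth-rescaling/pose optimization and the content-aware fill refinement the abstract describes.

```python
import numpy as np

def reproject(src, depth, K, R, t):
    """Forward-warp a source image into the target view using per-pixel
    depth and the relative camera pose (R, t)."""
    h, w = src.shape
    out = np.full((h, w), np.nan)   # NaN marks pixels nothing projects to
    Kinv = np.linalg.inv(K)
    for y in range(h):
        for x in range(w):
            ray = Kinv @ np.array([x, y, 1.0])      # back-project the pixel
            p = R @ (ray * depth[y, x]) + t         # 3-D point, target frame
            if p[2] <= 0:                           # behind the camera
                continue
            u, v = (K @ (p / p[2]))[:2]             # project into target view
            ui, vi = int(round(u)), int(round(v))
            if 0 <= ui < w and 0 <= vi < h:
                out[vi, ui] = src[y, x]
    return out
```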
  • Publication number: 20230141734
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately generating inpainted digital images utilizing a guided inpainting model guided by both plane panoptic segmentation and plane grouping. For example, the disclosed systems utilize a guided inpainting model to fill holes of missing pixels of a digital image as informed or guided by an appearance guide and a geometric guide. Specifically, the disclosed systems generate an appearance guide utilizing plane panoptic segmentation and generate a geometric guide by grouping plane panoptic segments. In some embodiments, the disclosed systems generate a modified digital image by implementing an inpainting model guided by both the appearance guide (e.g., a plane panoptic segmentation map) and the geometric guide (e.g., a plane grouping map).
    Type: Application
    Filed: November 5, 2021
    Publication date: May 11, 2023
    Inventors: Yuqian Zhou, Connelly Barnes, Sohrab Amirghodsi, Elya Shechtman
  • Publication number: 20230079886
    Abstract: A panoptic labeling system includes a modified panoptic labeling neural network (“modified PLNN”) that is trained to generate labels for pixels in an input image. The panoptic labeling system generates modified training images by combining training images with mask instances from annotated images. The modified PLNN determines a set of labels representing categories of objects depicted in the modified training images. The modified PLNN also determines a subset of the labels representing categories of objects depicted in the input image. For each mask pixel in a modified training image, the modified PLNN calculates a probability indicating whether the mask pixel has the same label as an object pixel. The modified PLNN generates a mask label for each mask pixel, based on the probability. The panoptic labeling system provides the mask label to, for example, a digital graphics editing system that uses the labels to complete an infill operation.
    Type: Application
    Filed: October 20, 2022
    Publication date: March 16, 2023
    Inventors: Sohrab AMIRGHODSI, Zhe LIN, Yilin WANG, Tianshu YU, Connelly BARNES, Elya SHECHTMAN
  • Patent number: 11551390
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating deterministic enhanced digital images based on parallel determinations of pixel group offsets arranged in pixel waves. For example, the disclosed systems can utilize a parallel wave analysis to propagate through pixel groups in a pixel wave of a target region within a digital image to determine matching patch offsets for the pixel groups. The disclosed systems can further utilize the matching patch offsets to generate a deterministic enhanced digital image by filling or replacing pixels of the target region with matching pixels indicated by the matching patch offsets.
    Type: Grant
    Filed: August 5, 2020
    Date of Patent: January 10, 2023
    Assignee: Adobe Inc.
    Inventors: Sohrab Amirghodsi, Connelly Barnes, Eric L. Palmer
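The wave-based propagation described above can be illustrated with a toy fill: pixels on one anti-diagonal form a wave, each depending only on left and top neighbors from earlier waves, so every pixel in a wave could be processed in parallel with a deterministic result. This is a simplified sketch of the idea, not the patented implementation; the cost function and candidate set are illustrative stand-ins.

```python
import numpy as np

def wave_fill(image, hole_mask):
    h, w = image.shape
    out = image.copy()
    offsets = np.zeros((h, w, 2), dtype=int)   # per-pixel matching offset
    for wave in range(h + w - 1):              # anti-diagonal waves
        for y in range(max(0, wave - w + 1), min(h, wave + 1)):
            x = wave - y
            if not hole_mask[y, x]:
                continue
            # Default candidate: nearest known pixel straight above.
            d = 1
            while y - d >= 0 and hole_mask[y - d, x]:
                d += 1
            cands = [(-d, 0)] if y - d >= 0 else []
            # Propagated candidates: reuse left and top neighbors' offsets
            # (both were resolved in earlier waves).
            if x > 0:
                cands.append(tuple(offsets[y, x - 1]))
            if y > 0:
                cands.append(tuple(offsets[y - 1, x]))
            best, best_cost = None, float("inf")
            for dy, dx in cands:
                sy, sx = y + dy, x + dx
                if not (0 <= sy < h and 0 <= sx < w) or hole_mask[sy, sx]:
                    continue
                # Toy cost: disagreement with the already-filled left neighbor.
                ref = out[y, x - 1] if x > 0 else out[sy, sx]
                cost = abs(out[sy, sx] - ref)
                if cost < best_cost:
                    best, best_cost = (dy, dx), cost
            if best is not None:
                offsets[y, x] = best
                out[y, x] = out[y + best[0], x + best[1]]
    return out
```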
  • Patent number: 11507777
    Abstract: A panoptic labeling system includes a modified panoptic labeling neural network (“modified PLNN”) that is trained to generate labels for pixels in an input image. The panoptic labeling system generates modified training images by combining training images with mask instances from annotated images. The modified PLNN determines a set of labels representing categories of objects depicted in the modified training images. The modified PLNN also determines a subset of the labels representing categories of objects depicted in the input image. For each mask pixel in a modified training image, the modified PLNN calculates a probability indicating whether the mask pixel has the same label as an object pixel. The modified PLNN generates a mask label for each mask pixel, based on the probability. The panoptic labeling system provides the mask label to, for example, a digital graphics editing system that uses the labels to complete an infill operation.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: November 22, 2022
    Assignee: ADOBE INC.
    Inventors: Sohrab Amirghodsi, Zhe Lin, Yilin Wang, Tianshu Yu, Connelly Barnes, Elya Shechtman
  • Publication number: 20220292650
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating modified digital images utilizing a guided inpainting approach that implements a patch match model informed by a deep visual guide. In particular, the disclosed systems can utilize a visual guide algorithm to automatically generate guidance maps to help identify replacement pixels for inpainting regions of digital images utilizing a patch match model. For example, the disclosed systems can generate guidance maps in the form of structure maps, depth maps, or segmentation maps that respectively indicate the structure, depth, or segmentation of different portions of digital images. Additionally, the disclosed systems can implement a patch match model to identify replacement pixels for filling regions of digital images according to the structure, depth, and/or segmentation of the digital images.
    Type: Application
    Filed: March 15, 2021
    Publication date: September 15, 2022
    Inventors: Sohrab Amirghodsi, Lingzhi Zhang, Zhe Lin, Connelly Barnes, Elya Shechtman
  • Publication number: 20220292341
    Abstract: Systems and methods for signal processing are described. Embodiments receive a digital signal comprising original signal values corresponding to a discrete set of original sample locations, generate modulation parameters based on the digital signal using a modulator network, wherein each of a plurality of modulator layers of the modulator network outputs a set of the modulation parameters, and generate a predicted signal value of the digital signal at an additional location using a synthesizer network, wherein each of a plurality of synthesizer layers of the synthesizer network operates based on the set of the modulation parameters from a corresponding modulator layer of the modulator network.
    Type: Application
    Filed: March 11, 2021
    Publication date: September 15, 2022
    Inventors: Ishit Bhadresh Mehta, Michaël Gharbi, Connelly Barnes, Elya Shechtman
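The modulator/synthesizer split described above can be sketched as a modulated implicit network: the modulator consumes the observed signal and emits one modulation vector per layer, and the synthesizer, queried at any continuous location, has each hidden layer scaled by the matching modulation. This is a toy sketch with random untrained weights and a crude pooled embedding; all layer sizes and the sine nonlinearity are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class ModulatedImplicitNet:
    def __init__(self, hidden=16, layers=2):
        self.hidden = hidden
        self.syn_w = [rng.normal(size=(1 if i == 0 else hidden, hidden))
                      for i in range(layers)]
        self.mod_w = [rng.normal(size=(hidden, hidden)) for _ in range(layers)]
        self.out_w = rng.normal(size=(hidden, 1))

    def modulations(self, samples):
        # Modulator: pools the observed signal values and emits one
        # modulation vector per synthesizer layer.
        h = np.tile(samples.mean(), self.hidden)
        mods = []
        for w in self.mod_w:
            h = np.tanh(h @ w)
            mods.append(1.0 + h)           # modulations centered around 1
        return mods

    def synthesize(self, x, mods):
        # Synthesizer: a sine MLP on the query location, with each hidden
        # layer scaled by its corresponding modulation vector.
        h = np.atleast_2d(x)
        for w, m in zip(self.syn_w, mods):
            h = np.sin(h @ w) * m
        return (h @ self.out_w).ravel()

signal = np.sin(np.linspace(0, np.pi, 8))      # original sample values
net = ModulatedImplicitNet()
mods = net.modulations(signal)
pred = net.synthesize([[0.37]], mods)          # value at a new location
```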
  • Publication number: 20220172331
    Abstract: Techniques are disclosed for filling or otherwise replacing a target region of a primary image with a corresponding region of an auxiliary image. The filling or replacing can be done with an overlay (no subtractive process need be run on the primary image). Because the primary and auxiliary images may not be aligned, both geometric and photometric transformations are applied to the primary and/or auxiliary images. For instance, a geometric transformation of the auxiliary image is performed, to better align features of the auxiliary image with corresponding features of the primary image. Also, a photometric transformation of the auxiliary image is performed, to better match color of one or more pixels of the auxiliary image with color of corresponding one or more pixels of the primary image. The corresponding region of the transformed auxiliary image is then copied and overlaid on the target region of the primary image.
    Type: Application
    Filed: February 17, 2022
    Publication date: June 2, 2022
    Applicant: Adobe Inc.
    Inventors: Connelly Barnes, Sohrab Amirghodsi, Elya Shechtman
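The two corrections this abstract pairs (a geometric transformation aligning the auxiliary image to the primary, and a photometric transformation matching its colors) can be sketched on grayscale images. This is a deliberately reduced sketch: the geometric transform is restricted to an integer translation found by brute force, and the photometric transform to a single least-squares gain/bias, rather than the general transformations the patent covers.

```python
import numpy as np

def align_and_fill(primary, aux, hole_mask, max_shift=2):
    h, w = primary.shape
    known = ~hole_mask
    # Geometric step: brute-force the integer translation minimizing the
    # error against the primary on the known pixels.
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(aux, (dy, dx), axis=(0, 1))
            err = np.mean((shifted[known] - primary[known]) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    aligned = np.roll(aux, best, axis=(0, 1))
    # Photometric step: least-squares gain/bias fit on the known overlap.
    A = np.stack([aligned[known], np.ones(known.sum())], axis=1)
    gain, bias = np.linalg.lstsq(A, primary[known], rcond=None)[0]
    corrected = gain * aligned + bias
    # Overlay: copy only the target region; the rest of the primary is
    # untouched (no subtractive process on the primary).
    out = primary.copy()
    out[hole_mask] = corrected[hole_mask]
    return out
```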
  • Patent number: 11321847
    Abstract: In some embodiments, an image manipulation application receives an incomplete image that includes a hole area lacking image content. The image manipulation application applies a contour detection operation to the incomplete image to detect an incomplete contour of a foreground object in the incomplete image. The hole area prevents the contour detection operation from detecting a completed contour of the foreground object. The image manipulation application further applies a contour completion model to the incomplete contour and the incomplete image to generate the completed contour for the foreground object. Based on the completed contour and the incomplete image, the image manipulation application generates image content for the hole area to generate a completed image.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: May 3, 2022
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Wei Xiong, Connelly Barnes, Jimei Yang, Xin Lu
  • Patent number: 11270415
    Abstract: Techniques are disclosed for filling or otherwise replacing a target region of a primary image with a corresponding region of an auxiliary image. The filling or replacing can be done with an overlay (no subtractive process need be run on the primary image). Because the primary and auxiliary images may not be aligned, both geometric and photometric transformations are applied to the primary and/or auxiliary images. For instance, a geometric transformation of the auxiliary image is performed, to better align features of the auxiliary image with corresponding features of the primary image. Also, a photometric transformation of the auxiliary image is performed, to better match color of one or more pixels of the auxiliary image with color of corresponding one or more pixels of the primary image. The corresponding region of the transformed auxiliary image is then copied and overlaid on the target region of the primary image.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: March 8, 2022
    Assignee: Adobe Inc.
    Inventors: Connelly Barnes, Sohrab Amirghodsi, Elya Shechtman
  • Publication number: 20210357684
    Abstract: A panoptic labeling system includes a modified panoptic labeling neural network (“modified PLNN”) that is trained to generate labels for pixels in an input image. The panoptic labeling system generates modified training images by combining training images with mask instances from annotated images. The modified PLNN determines a set of labels representing categories of objects depicted in the modified training images. The modified PLNN also determines a subset of the labels representing categories of objects depicted in the input image. For each mask pixel in a modified training image, the modified PLNN calculates a probability indicating whether the mask pixel has the same label as an object pixel. The modified PLNN generates a mask label for each mask pixel, based on the probability. The panoptic labeling system provides the mask label to, for example, a digital graphics editing system that uses the labels to complete an infill operation.
    Type: Application
    Filed: May 13, 2020
    Publication date: November 18, 2021
    Inventors: Sohrab Amirghodsi, Zhe Lin, Yilin Wang, Tianshu Yu, Connelly Barnes, Elya Shechtman
  • Patent number: 11080833
    Abstract: A method for manipulating a target image includes generating a query of the target image and keys and values of a first reference image. The method also includes generating matching costs by comparing the query of the target image with each key of the reference image and generating a set of weights from the matching costs. Further, the method includes generating a set of weighted values by applying each weight of the set of weights to a corresponding value of the values of the reference image and generating a weighted patch by adding each weighted value of the set of weighted values together. Additionally, the method includes generating a combined weighted patch by combining the weighted patch with additional weighted patches associated with additional queries of the target image and generating a manipulated image by applying the combined weighted patch to an image processing algorithm.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: August 3, 2021
    Assignee: Adobe Inc.
    Inventors: Connelly Barnes, Utkarsh Singhal, Elya Shechtman, Michael Gharbi
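The query/key/value mechanism this abstract describes (matching costs between a target query and each reference key, weights derived from the costs, and a weighted sum of reference values) is cross-attention in structure, and can be sketched directly. This is a minimal sketch under stated assumptions: the negated-squared-distance cost, the softmax weighting, and the temperature `tau` are illustrative choices, not the patented formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def patch_attention(queries, keys, values, tau=0.1):
    """queries: (Q, D) target features; keys: (K, D) and values: (K, V)
    reference features. Returns one aggregated value row per query."""
    # Matching cost: negated squared distance, so closer keys score higher;
    # tau sharpens the resulting weights.
    costs = -((queries[:, None, :] - keys[None, :, :]) ** 2).sum(-1) / tau
    weights = softmax(costs)       # one weight per (query, key) pair
    return weights @ values        # weighted sum of reference values
```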