Patents by Inventor Brian Lynn Price

Brian Lynn Price has filed patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210158139
    Abstract: Embodiments of the present invention provide systems, methods, and non-transitory computer storage media for generating an ambient occlusion (AO) map for a 2D image that can be combined with the 2D image to adjust the contrast of the 2D image based on the geometric information in the 2D image. In embodiments, using a trained neural network, an AO map for a 2D image is automatically generated without any predefined 3D scene information. Optimizing the neural network to generate an estimated AO map for a 2D image requires training, testing, and validating the neural network using a synthetic dataset composed of pairs of images and ground-truth AO maps rendered from 3D scenes. Using the estimated AO map to adjust the contrast of a 2D image makes the image appear more lifelike by modifying its shadows and shading based on the ambient lighting present in the image.
    Type: Application
    Filed: November 21, 2019
    Publication date: May 27, 2021
    Inventors: Long MAI, Yannick HOLD-GEOFFROY, Naoto INOUE, Daichi ITO, Brian Lynn PRICE
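The entry above describes estimating an ambient occlusion (AO) map directly from a 2D image and blending it back into the image to adjust contrast. As a rough illustration only, the minimal sketch below pairs a tiny encoder-decoder with a simple AO-based contrast adjustment; the layer sizes and the blending formula are assumptions, not the patented network.

```python
# Hypothetical sketch (not the patented architecture): a small encoder-decoder
# maps an RGB image to a single-channel AO map, which is then blended into the
# image to darken occluded regions and adjust local contrast.
import torch
import torch.nn as nn

class TinyAONet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))  # AO in [0, 1], 1 = unoccluded

def apply_ao(rgb, ao, strength=1.0):
    """Blend the estimated AO map into the image to adjust local contrast."""
    return rgb * (1.0 - strength * (1.0 - ao))

image = torch.rand(1, 3, 256, 256)            # placeholder 2D image
ao_map = TinyAONet()(image)                   # estimated AO map
shaded = apply_ao(image, ao_map, strength=0.7)
```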
  • Patent number: 11004208
    Abstract: Techniques are disclosed for deep neural network (DNN) based interactive image matting. A methodology implementing the techniques according to an embodiment includes generating, by the DNN, an alpha matte associated with an image, based on user-specified foreground region locations in the image. The method further includes applying a first DNN subnetwork to the image, the first subnetwork trained to generate a binary mask based on the user input, the binary mask designating pixels of the image as background or foreground. The method further includes applying a second DNN subnetwork to the generated binary mask, the second subnetwork trained to generate a trimap based on the user input, the trimap designating pixels of the image as background, foreground, or uncertain status. The method further includes applying a third DNN subnetwork to the generated trimap, the third subnetwork trained to generate the alpha matte based on the user input.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: May 11, 2021
    Assignee: Adobe Inc.
    Inventors: Brian Lynn Price, Scott Cohen, Marco Forte, Ning Xu
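The abstract above walks through a three-stage cascade: a mask subnetwork, a trimap subnetwork, and a matting subnetwork. The sketch below shows one way such a cascade could be wired together; the layer sizes and the single-channel click encoding are illustrative assumptions, not the patented design.

```python
# Hedged sketch of the three-stage cascade described in the abstract: a mask
# subnetwork, a trimap subnetwork, and a matting subnetwork, chained together.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class InteractiveMattingCascade(nn.Module):
    def __init__(self):
        super().__init__()
        # Stage 1: image + user click map -> foreground/background mask.
        self.mask_net = nn.Sequential(conv_block(4, 16), nn.Conv2d(16, 1, 1), nn.Sigmoid())
        # Stage 2: image + mask -> trimap (background / foreground / uncertain).
        self.trimap_net = nn.Sequential(conv_block(4, 16), nn.Conv2d(16, 3, 1))
        # Stage 3: image + trimap -> alpha matte.
        self.matte_net = nn.Sequential(conv_block(6, 16), nn.Conv2d(16, 1, 1), nn.Sigmoid())

    def forward(self, image, clicks):
        mask = self.mask_net(torch.cat([image, clicks], dim=1))
        trimap = torch.softmax(self.trimap_net(torch.cat([image, mask], dim=1)), dim=1)
        alpha = self.matte_net(torch.cat([image, trimap], dim=1))
        return mask, trimap, alpha

image = torch.rand(1, 3, 128, 128)
clicks = torch.zeros(1, 1, 128, 128)   # user-specified foreground locations
clicks[0, 0, 64, 64] = 1.0
mask, trimap, alpha = InteractiveMattingCascade()(image, clicks)
```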
  • Patent number: 10846524
    Abstract: A table layout determination system implemented on a computing device obtains an image of a table having multiple cells. The table layout determination system includes a row prediction machine learning system that generates, for each of multiple rows of pixels in the image of the table, a probability of the row being a row separator, and a column prediction machine learning system that generates, for each of multiple columns of pixels in the image of the table, a probability of the column being a column separator. An inference system uses these probabilities of the rows being row separators and the columns being column separators to identify the row separators and column separators for the table. Together, these row separators and column separators define the layout of the table.
    Type: Grant
    Filed: November 14, 2018
    Date of Patent: November 24, 2020
    Assignee: Adobe Inc.
    Inventors: Brian Lynn Price, Vlad Ion Morariu, Scott David Cohen, Christopher Alan Tensmeyer
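The table-layout abstract above splits the problem into per-row and per-column separator probabilities followed by an inference step. The sketch below illustrates only the inference step, with random placeholder probabilities standing in for the two machine learning systems; the run-collapsing heuristic is an assumption, not the patented inference system.

```python
# Illustrative sketch: threshold per-row and per-column separator probabilities
# and collapse each contiguous run of high-probability lines into one separator.
import numpy as np

def separators_from_probs(probs, threshold=0.5):
    """Return the midpoint of each contiguous run of probabilities >= threshold."""
    above = probs >= threshold
    separators, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            separators.append((start + i - 1) // 2)
            start = None
    if start is not None:
        separators.append((start + len(probs) - 1) // 2)
    return separators

rng = np.random.default_rng(0)
row_probs = rng.random(200)   # placeholder for the row-prediction system's output
col_probs = rng.random(300)   # placeholder for the column-prediction system's output
row_separators = separators_from_probs(row_probs, threshold=0.9)
col_separators = separators_from_probs(col_probs, threshold=0.9)
print(len(row_separators), len(col_separators))
```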
  • Publication number: 20200349189
    Abstract: Compositing aware digital image search techniques and systems are described that leverage machine learning. In one example, a compositing aware image search system employs a two-stream convolutional neural network (CNN) to jointly learn feature embeddings from foreground digital images that capture a foreground object and background digital images that capture a background scene. In order to train models of the convolutional neural networks, triplets of training digital images are used. Each triplet may include a positive foreground digital image and a positive background digital image taken from the same digital image. The triplet also contains a negative foreground or background digital image that is dissimilar to the positive foreground or background digital image that is also included as part of the triplet.
    Type: Application
    Filed: July 15, 2020
    Publication date: November 5, 2020
    Applicant: Adobe Inc.
    Inventors: Xiaohui Shen, Zhe Lin, Kalyan Krishna Sunkavalli, Hengshuang Zhao, Brian Lynn Price
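The compositing-aware search abstract above describes a two-stream CNN trained on triplets of foreground and background images. The sketch below shows a generic two-stream embedding trained with a standard triplet margin loss; the network sizes and loss margin are placeholders rather than the patented model.

```python
# Hedged sketch of a two-stream embedding trained with a triplet loss: one CNN
# embeds foreground crops, the other embeds background scenes, and the loss
# pulls the matching pair together while pushing away a mismatched foreground.
import torch
import torch.nn as nn

def stream():
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64),
    )

fg_stream, bg_stream = stream(), stream()
triplet = nn.TripletMarginLoss(margin=0.5)

bg = torch.rand(8, 3, 64, 64)        # positive backgrounds
fg_pos = torch.rand(8, 3, 64, 64)    # foregrounds cut from the same images
fg_neg = torch.rand(8, 3, 64, 64)    # dissimilar foregrounds

loss = triplet(bg_stream(bg), fg_stream(fg_pos), fg_stream(fg_neg))
loss.backward()   # gradients flow into both streams jointly
```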
  • Publication number: 20200311946
    Abstract: Techniques are disclosed for deep neural network (DNN) based interactive image matting. A methodology implementing the techniques according to an embodiment includes generating, by the DNN, an alpha matte associated with an image, based on user-specified foreground region locations in the image. The method further includes applying a first DNN subnetwork to the image, the first subnetwork trained to generate a binary mask based on the user input, the binary mask designating pixels of the image as background or foreground. The method further includes applying a second DNN subnetwork to the generated binary mask, the second subnetwork trained to generate a trimap based on the user input, the trimap designating pixels of the image as background, foreground, or uncertain status. The method further includes applying a third DNN subnetwork to the generated trimap, the third subnetwork trained to generate the alpha matte based on the user input.
    Type: Application
    Filed: March 26, 2019
    Publication date: October 1, 2020
    Applicant: Adobe Inc.
    Inventors: Brian Lynn Price, Scott Cohen, Marco Forte, Ning Xu
  • Patent number: 10747811
    Abstract: Compositing aware digital image search techniques and systems are described that leverage machine learning. In one example, a compositing aware image search system employs a two-stream convolutional neural network (CNN) to jointly learn feature embeddings from foreground digital images that capture a foreground object and background digital images that capture a background scene. In order to train models of the convolutional neural networks, triplets of training digital images are used. Each triplet may include a positive foreground digital image and a positive background digital image taken from the same digital image. The triplet also contains a negative foreground or background digital image that is dissimilar to the positive foreground or background digital image that is also included as part of the triplet.
    Type: Grant
    Filed: May 22, 2018
    Date of Patent: August 18, 2020
    Assignee: Adobe Inc.
    Inventors: Xiaohui Shen, Zhe Lin, Kalyan Krishna Sunkavalli, Hengshuang Zhao, Brian Lynn Price
  • Publication number: 20200242822
    Abstract: Techniques and systems are described for style-aware patching of a digital image in a digital medium environment. For example, a digital image creation system generates style data for a portion to be filled of a digital image, indicating a style of an area surrounding the portion. The digital image creation system also generates content data for the portion indicating content of the digital image of the area surrounding the portion. The digital image creation system selects a source digital image based on similarity of both style and content of the source digital image at a location of the patch to the style data and content data. The digital image creation system transforms the style of the source digital image based on the style data and generates the patch from the source digital image in the transformed style for incorporation into the portion to be filled of the digital image.
    Type: Application
    Filed: April 6, 2020
    Publication date: July 30, 2020
    Applicant: Adobe Inc.
    Inventors: Hailin Jin, John Philip Collomosse, Brian Lynn Price
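The style-aware patching abstract above selects a source image by comparing both style and content around the portion to be filled. As a hedged illustration, the sketch below scores candidates with a crude Gram-matrix style distance plus a mean-feature content distance; both descriptors and the weighting are assumptions, not the patented selection criterion.

```python
# Illustrative sketch (not the patented method): score candidate source images by
# a weighted combination of style distance and content distance to the region
# around the hole, then pick the closest candidate to cut the patch from.
import numpy as np

def gram(features):
    """Crude style descriptor: Gram matrix of a (C, H, W) feature map."""
    c = features.reshape(features.shape[0], -1)
    return c @ c.T / c.shape[1]

def score(candidate_feats, target_feats, style_weight=1.0, content_weight=1.0):
    style_dist = np.linalg.norm(gram(candidate_feats) - gram(target_feats))
    content_dist = np.linalg.norm(candidate_feats.mean(axis=(1, 2)) - target_feats.mean(axis=(1, 2)))
    return style_weight * style_dist + content_weight * content_dist

rng = np.random.default_rng(1)
target = rng.random((32, 16, 16))                   # features of the area around the hole
candidates = [rng.random((32, 16, 16)) for _ in range(5)]
best = min(range(len(candidates)), key=lambda i: score(candidates[i], target))
print("best source image index:", best)
```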
  • Publication number: 20200226725
    Abstract: Fill techniques as implemented by a computing device are described to perform hole filling of a digital image. In one example, deeply learned features of a digital image, obtained using machine learning, are used by a computing device as a basis to search a digital image repository to locate a guidance digital image. Once located, machine learning techniques are then used to align the guidance digital image with the hole to be filled in the digital image. Once aligned, the guidance digital image is then used to guide generation of fill for the hole in the digital image. Machine learning techniques are used to determine which parts of the guidance digital image are to be blended to fill the hole in the digital image and which parts of the hole are to receive new content that is synthesized by the computing device.
    Type: Application
    Filed: March 25, 2020
    Publication date: July 16, 2020
    Applicant: Adobe Inc.
    Inventors: Brian Lynn Price, Yinan Zhao, Scott David Cohen
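The hole-filling abstract above retrieves a guidance image by deep-feature similarity, aligns it to the hole, and blends guidance content with synthesized content. The sketch below stubs out the retrieval and blending steps with placeholder arrays; the feature dimensionality, alignment, and synthesis are assumptions, not the patented pipeline.

```python
# Hedged sketch of the overall fill pipeline: retrieve a guidance image by
# deep-feature similarity, then composite the hole from the (already aligned)
# guidance and newly synthesized content using a per-pixel blend mask.
import numpy as np

rng = np.random.default_rng(2)

def retrieve_guidance(query_feat, repo_feats):
    """Nearest-neighbour search over deeply learned features (placeholder vectors)."""
    dists = np.linalg.norm(repo_feats - query_feat, axis=1)
    return int(np.argmin(dists))

query_feat = rng.random(128)                  # features of the image with the hole
repo_feats = rng.random((1000, 128))          # features of the repository images
guidance_idx = retrieve_guidance(query_feat, repo_feats)

hole = np.zeros((64, 64), dtype=bool)
hole[20:40, 20:40] = True
aligned_guidance = rng.random((64, 64, 3))    # stand-in for the aligned guidance image
synthesized = rng.random((64, 64, 3))         # stand-in for newly synthesized content
blend_mask = rng.random((64, 64, 1))          # learned per-pixel blend weights

fill = blend_mask * aligned_guidance + (1 - blend_mask) * synthesized
image = rng.random((64, 64, 3))
image[hole] = fill[hole]                      # write the blended fill into the hole
```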
  • Patent number: 10699388
    Abstract: Fill techniques as implemented by a computing device are described to perform hole filling of a digital image. In one example, deeply learned features of a digital image, obtained using machine learning, are used by a computing device as a basis to search a digital image repository to locate a guidance digital image. Once located, machine learning techniques are then used to align the guidance digital image with the hole to be filled in the digital image. Once aligned, the guidance digital image is then used to guide generation of fill for the hole in the digital image. Machine learning techniques are used to determine which parts of the guidance digital image are to be blended to fill the hole in the digital image and which parts of the hole are to receive new content that is synthesized by the computing device.
    Type: Grant
    Filed: January 24, 2018
    Date of Patent: June 30, 2020
    Assignee: Adobe Inc.
    Inventors: Brian Lynn Price, Yinan Zhao, Scott David Cohen
  • Patent number: 10699111
    Abstract: Disclosed systems and methods generate page-segmented documents from unstructured vector graphics documents. The page segmentation application executing on a computing device receives as input an unstructured vector graphics document. The application generates an element proposal for each of many areas on a page of the input document tentatively identified as being page elements. The page segmentation application classifies each of the element proposals into one of a plurality of defined categories of page elements. The page segmentation application may further refine at least one of the element proposals and select a final element proposal for each element within the unstructured vector document. One or more of the page segmentation steps may be performed using a neural network.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: June 30, 2020
    Assignee: Adobe Inc.
    Inventors: Scott Cohen, Brian Lynn Price, Dafang He, Michael F. Kraley, Paul Asente
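The page segmentation abstract above generates element proposals, classifies them into categories, and selects a final proposal per element. The sketch below illustrates that proposal-selection step with a greedy non-maximum suppression over made-up boxes and categories; the label set and overlap threshold are assumptions, not the patented method.

```python
# Illustrative sketch of the proposal -> classify -> select flow: each proposal
# is a page-element bounding box with a category and a score, and overlapping
# proposals of the same category are reduced to a single final element.
CATEGORIES = ["text", "figure", "table", "heading"]   # assumed label set

def iou(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    ix0, iy0 = max(ax0, bx0), max(ay0, by0)
    ix1, iy1 = min(ax1, bx1), min(ay1, by1)
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union else 0.0

def select_final(proposals, iou_thresh=0.5):
    """Keep the highest-scoring proposal per overlapping group (greedy NMS)."""
    kept = []
    for box, label, score in sorted(proposals, key=lambda p: -p[2]):
        if all(l != label or iou(box, b) < iou_thresh for b, l, _ in kept):
            kept.append((box, label, score))
    return kept

proposals = [((10, 10, 200, 50), "heading", 0.9),
             ((12, 12, 198, 52), "heading", 0.7),   # duplicate proposal, suppressed
             ((10, 60, 200, 300), "text", 0.8)]
print(select_final(proposals))
```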
  • Patent number: 10657652
    Abstract: Methods and systems are provided for generating mattes for input images. A neural network system can be trained where the training includes training a first neural network that generates mattes for input images where the input images are synthetic composite images. Such a neural network system can further be trained where the training includes training a second neural network that generates refined mattes from the mattes produced by the first neural network. Such a trained neural network system can then take an image and trimap pair as input and output a matte. Such a matte can be used to extract an object from the input image. Upon extracting the object, a user can manipulate the object, for example, to composite the object onto a new background.
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: May 19, 2020
    Assignee: Adobe Inc.
    Inventors: Brian Lynn Price, Stephen Schiller, Scott Cohen, Ning Xu
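The matting abstract above trains a first network to predict a matte and a second network to refine it. The sketch below wires two small convolutional networks in that arrangement and composites the result onto a new background; the layer sizes and trimap encoding are illustrative assumptions, not the trained system.

```python
# Hedged sketch of the two-stage arrangement described in the abstract: a first
# network predicts a coarse matte from an image/trimap pair, and a second
# network refines that matte.
import torch
import torch.nn as nn

first_net = nn.Sequential(               # image (3) + trimap (1) -> coarse matte
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)
refine_net = nn.Sequential(              # image (3) + coarse matte (1) -> refined matte
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)

image = torch.rand(1, 3, 128, 128)
trimap = torch.rand(1, 1, 128, 128)      # background / unknown / foreground encoded in one channel
coarse = first_net(torch.cat([image, trimap], dim=1))
matte = refine_net(torch.cat([image, coarse], dim=1))

# The matte can then be used to pull the object onto a new background:
background = torch.rand(1, 3, 128, 128)
composite = matte * image + (1 - matte) * background
```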
  • Publication number: 20200151444
    Abstract: A table layout determination system implemented on a computing device obtains an image of a table having multiple cells. The table layout determination system includes a row prediction machine learning system that generates, for each of multiple rows of pixels in the image of the table, a probability of the row being a row separator, and a column prediction machine learning system that generates, for each of multiple columns of pixels in the image of the table, a probability of the column being a column separator. An inference system uses these probabilities of the rows being row separators and the columns being column separators to identify the row separators and column separators for the table. Together, these row separators and column separators define the layout of the table.
    Type: Application
    Filed: November 14, 2018
    Publication date: May 14, 2020
    Applicant: Adobe Inc.
    Inventors: Brian Lynn Price, Vlad Ion Morariu, Scott David Cohen, Christopher Alan Tensmeyer
  • Patent number: 10613726
    Abstract: Systems and techniques are described herein for directing a user conversation to obtain an editing query, and removing and replacing objects in an image based on the editing query. Pixels corresponding to an object in the image indicated by the editing query are ascertained. The editing query is processed to determine whether it includes a remove request or a replace request. A search query is constructed to obtain images, such as from a database of stock images, including fill material or replacement material to fulfill the remove request or replace request, respectively. Composite images are generated from the fill material or the replacement material and the image to be edited. Composite images are harmonized to remove editing artifacts and make the images look natural. A user interface exposes images, and the user interface accepts multi-modal user input during the directed user conversation.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: April 7, 2020
    Assignee: Adobe Inc.
    Inventors: Scott David Cohen, Brian Lynn Price, Abhinav Gupta
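The abstract above classifies an editing query as a remove or replace request and constructs a search query for fill or replacement material. The sketch below illustrates only that query-handling step with simple regular expressions; the phrase patterns and the fallback search term are assumptions, not the patented conversation system.

```python
# Illustrative sketch of the query-handling step only: classify an editing query
# as a remove or replace request and build a stock-image search query from it.
import re

def parse_editing_query(query: str):
    replace = re.search(r"replace (the )?(?P<target>[\w ]+?) with (a |an )?(?P<new>[\w ]+)", query, re.I)
    if replace:
        return {"action": "replace",
                "target": replace.group("target").strip(),
                "search_query": replace.group("new").strip()}
    remove = re.search(r"remove (the )?(?P<target>[\w ]+)", query, re.I)
    if remove:
        # For removal, search for fill material matching the scene background.
        return {"action": "remove",
                "target": remove.group("target").strip(),
                "search_query": "background fill"}
    return {"action": "unknown"}

print(parse_editing_query("Please remove the trash can"))
print(parse_editing_query("Replace the dog with a cat"))
```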
  • Publication number: 20190377987
    Abstract: A discriminative captioning system generates captions for digital images that can be used to tell two digital images apart. The discriminative captioning system includes a machine learning system that is trained by a discriminative captioning training system that includes a retrieval machine learning system. For training, a digital image is input to the caption generation machine learning system, which generates a caption for the digital image. The digital image and the generated caption, as well as a set of additional images, are input to the retrieval machine learning system. The retrieval machine learning system generates a discriminability loss that indicates how well the retrieval machine learning system is able to use the caption to discriminate between the digital image and each image in the set of additional digital images. This discriminability loss is used to train the caption generation machine learning system.
    Type: Application
    Filed: June 10, 2018
    Publication date: December 12, 2019
    Applicant: Adobe Inc.
    Inventors: Brian Lynn Price, Ruotian Luo, Scott David Cohen
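The discriminative captioning abstract above feeds a generated caption and a set of images to a retrieval model and uses the resulting discriminability loss to train the caption generator. The sketch below expresses that loss as a cross-entropy over retrieval scores; the embeddings and dot-product scoring are placeholders for the two machine learning systems.

```python
# Hedged sketch of the discriminability loss idea: score a generated caption
# against the target image and a set of distractor images, and use the
# cross-entropy of picking the target as an extra training signal.
import torch
import torch.nn.functional as F

caption_embedding = torch.rand(1, 256, requires_grad=True)   # from the caption generator
target_image_embedding = torch.rand(1, 256)                  # image the caption describes
distractor_embeddings = torch.rand(9, 256)                   # additional images in the set

all_images = torch.cat([target_image_embedding, distractor_embeddings], dim=0)
scores = caption_embedding @ all_images.T                    # retrieval scores, shape (1, 10)
target_index = torch.tensor([0])                             # the true image is first

discriminability_loss = F.cross_entropy(scores, target_index)
discriminability_loss.backward()   # gradient flows back toward the caption generator
print(float(discriminability_loss))
```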
  • Publication number: 20190361994
    Abstract: Compositing aware digital image search techniques and systems are described that leverage machine learning. In one example, a compositing aware image search system employs a two-stream convolutional neural network (CNN) to jointly learn feature embeddings from foreground digital images that capture a foreground object and background digital images that capture a background scene. In order to train models of the convolutional neural networks, triplets of training digital images are used. Each triplet may include a positive foreground digital image and a positive background digital image taken from the same digital image. The triplet also contains a negative foreground or background digital image that is dissimilar to the positive foreground or background digital image that is also included as part of the triplet.
    Type: Application
    Filed: May 22, 2018
    Publication date: November 28, 2019
    Applicant: Adobe Inc.
    Inventors: Xiaohui Shen, Zhe Lin, Kalyan Krishna Sunkavalli, Hengshuang Zhao, Brian Lynn Price
  • Publication number: 20190228508
    Abstract: Fill techniques as implemented by a computing device are described to perform hole filling of a digital image. In one example, deeply learned features of a digital image, obtained using machine learning, are used by a computing device as a basis to search a digital image repository to locate a guidance digital image. Once located, machine learning techniques are then used to align the guidance digital image with the hole to be filled in the digital image. Once aligned, the guidance digital image is then used to guide generation of fill for the hole in the digital image. Machine learning techniques are used to determine which parts of the guidance digital image are to be blended to fill the hole in the digital image and which parts of the hole are to receive new content that is synthesized by the computing device.
    Type: Application
    Filed: January 24, 2018
    Publication date: July 25, 2019
    Applicant: Adobe Inc.
    Inventors: Brian Lynn Price, Yinan Zhao, Scott David Cohen
  • Publication number: 20190220983
    Abstract: Methods and systems are provided for generating mattes for input images. A neural network system can be trained where the training includes training a first neural network that generates mattes for input images where the input images are synthetic composite images. Such a neural network system can further be trained where the training includes training a second neural network that generates refined mattes from the mattes produced by the first neural network. Such a trained neural network system can then take an image and trimap pair as input and output a matte. Such a matte can be used to extract an object from the input image. Upon extracting the object, a user can manipulate the object, for example, to composite the object onto a new background.
    Type: Application
    Filed: March 20, 2019
    Publication date: July 18, 2019
    Inventors: Brian Lynn Price, Stephen Schiller, Scott Cohen, Ning Xu
  • Publication number: 20190196698
    Abstract: Systems and techniques are described herein for directing a user conversation to obtain an editing query, and removing and replacing objects in an image based on the editing query. Pixels corresponding to an object in the image indicated by the editing query are ascertained. The editing query is processed to determine whether it includes a remove request or a replace request. A search query is constructed to obtain images, such as from a database of stock images, including fill material or replacement material to fulfill the remove request or replace request, respectively. Composite images are generated from the fill material or the replacement material and the image to be edited. Composite images are harmonized to remove editing artifacts and make the images look natural. A user interface exposes images, and the user interface accepts multi-modal user input during the directed user conversation.
    Type: Application
    Filed: December 22, 2017
    Publication date: June 27, 2019
    Applicant: Adobe Inc.
    Inventors: Scott David Cohen, Brian Lynn Price, Abhinav Gupta
  • Publication number: 20190156115
    Abstract: Disclosed systems and methods generate page-segmented documents from unstructured vector graphics documents. The page segmentation application executing on a computing device receives as input an unstructured vector graphics document. The application generates an element proposal for each of many areas on a page of the input document tentatively identified as being page elements. The page segmentation application classifies each of the element proposals into one of a plurality of defined categories of page elements. The page segmentation application may further refine at least one of the element proposals and select a final element proposal for each element within the unstructured vector document. One or more of the page segmentation steps may be performed using a neural network.
    Type: Application
    Filed: January 18, 2019
    Publication date: May 23, 2019
    Inventors: Scott Cohen, Brian Lynn Price, Dafang He, Michael F. Kraley, Paul Asente
  • Patent number: 10255681
    Abstract: Methods and systems are provided for generating mattes for input images. A neural network system can be trained where the training includes training a first neural network that generates mattes for input images where the input images are synthetic composite images. Such a neural network system can further be trained where the training includes training a second neural network that generates refined mattes from the mattes produced by the first neural network. Such a trained neural network system can then take an image and trimap pair as input and output a matte. Such a matte can be used to extract an object from the input image. Upon extracting the object, a user can manipulate the object, for example, to composite the object onto a new background.
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: April 9, 2019
    Assignee: Adobe Inc.
    Inventors: Brian Lynn Price, Stephen Schiller, Scott Cohen, Ning Xu