Patents by Inventor Brian Lynn Price

Brian Lynn Price has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12147896
    Abstract: Embodiments of the present invention provide systems, methods, and non-transitory computer storage media for generating an ambient occlusion (AO) map for a 2D image that can be combined with the 2D image to adjust the contrast of the 2D image based on the geometric information in the 2D image. In embodiments, using a trained neural network, an AO map for a 2D image is automatically generated without any predefined 3D scene information. Optimizing the neural network to generate an estimated AO map for a 2D image requires training, testing, and validating the neural network using a synthetic dataset composed of pairs of images and ground truth AO maps rendered from 3D scenes. By using an estimated AO map to adjust the contrast of a 2D image, the contrast of the image can be adjusted to make the image appear lifelike by modifying the shadows and shading in the image based on the ambient lighting present in the image.
    Type: Grant
    Filed: April 6, 2023
    Date of Patent: November 19, 2024
    Assignee: Adobe Inc.
    Inventors: Long Mai, Yannick Hold-Geoffroy, Naoto Inoue, Daichi Ito, Brian Lynn Price
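    The AO-map technique described in the abstract above can be illustrated with a minimal sketch (not Adobe's patented implementation; the function name, array shapes, and `strength` parameter are hypothetical): once a network has estimated a per-pixel AO map, combining it with the image is essentially a per-pixel shading modulation that darkens occluded regions.

    ```python
    import numpy as np

    def apply_ao_map(image, ao_map, strength=1.0):
        """Adjust an image's shading with an ambient-occlusion map.

        image:    HxWx3 float array in [0, 1].
        ao_map:   HxW float array in [0, 1]; 1 = fully lit, 0 = fully occluded.
        strength: how strongly the AO map modulates the shading.
        """
        # Blend per-pixel shading toward the AO values, then darken the image.
        shading = 1.0 - strength * (1.0 - ao_map)
        return np.clip(image * shading[..., None], 0.0, 1.0)

    # A flat gray image darkened in one corner by a synthetic AO map.
    img = np.full((4, 4, 3), 0.8)
    ao = np.ones((4, 4))
    ao[0, 0] = 0.5          # simulated occluded corner
    out = apply_ao_map(img, ao)
    ```

    In the sketch, the occluded corner pixel is halved in brightness while fully lit pixels are left unchanged; the patented system's contribution is estimating `ao_map` from a single 2D image with a trained network rather than from 3D scene geometry.
    
    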
  • Publication number: 20240249413
    Abstract: In implementations of systems for performing multiple segmentation tasks, a computing device implements a segment system to receive input data describing a digital image depicting an object. The segment system computes per-pixel embeddings for the digital image using a pixel decoder of a machine learning model. Output embeddings are generated using a transformer decoder of the machine learning model based on the per-pixel embeddings for the digital image, input embeddings for a first segmentation task and input embeddings for a second segmentation task. The segment system outputs a first digital image and a second digital image. The first digital image depicts the object segmented based on the first segmentation task and the second digital image depicts the object segmented based on the second segmentation task.
    Type: Application
    Filed: January 23, 2023
    Publication date: July 25, 2024
    Applicant: Adobe Inc.
    Inventors: Jason Wen Yong Kuen, Zhe Lin, Sukjun Hwang, Jianming Zhang, Brian Lynn Price
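    The multi-task segmentation design above pairs per-pixel embeddings from a pixel decoder with per-task output embeddings from a transformer decoder. A minimal sketch of the final step (hypothetical function name and toy shapes; the real model learns both sets of embeddings): each task's output embedding is dotted against every pixel embedding to produce that task's mask.

    ```python
    import numpy as np

    def masks_from_queries(pixel_embeddings, task_queries):
        """Produce one segmentation mask per task embedding.

        pixel_embeddings: HxWxC per-pixel features (pixel decoder output).
        task_queries:     TxC output embeddings, one per segmentation task.
        Returns a TxHxW array of per-pixel mask probabilities.
        """
        # Dot product between each task embedding and each pixel embedding.
        logits = np.einsum('hwc,tc->thw', pixel_embeddings, task_queries)
        return 1.0 / (1.0 + np.exp(-logits))   # sigmoid per pixel

    # Toy example: one pixel carries a strong feature; two tasks respond
    # to it with opposite sign, yielding two different segmentations.
    pe = np.zeros((2, 2, 3))
    pe[0, 0] = [5.0, 0.0, 0.0]
    queries = np.array([[1.0, 0.0, 0.0],
                        [-1.0, 0.0, 0.0]])
    masks = masks_from_queries(pe, queries)
    ```

    The same per-pixel embeddings are computed once and reused for every task, which is what lets a single model output multiple segmentations of the same object.
    
    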
  • Publication number: 20240168625
    Abstract: Techniques are provided for generating a digital image of simulated handwriting using an encoder-decoder neural network trained on images of natural handwriting samples. The simulated handwriting image can be generated based on a style of a handwriting sample and a variable length coded text input. The style represents visually distinctive characteristics of the handwriting sample, such as the shape, size, slope, and spacing of the letters, characters, or other markings in the handwriting sample. The resulting simulated handwriting image can include the text input rendered in the style of the handwriting sample. The distinctive visual appearance of the letters or words in the simulated handwriting image mimics the visual appearance of the letters or words in the handwriting sample image, whether the letters or words in the simulated handwriting image are the same as in the handwriting sample image or different from those in the handwriting sample image.
    Type: Application
    Filed: January 23, 2024
    Publication date: May 23, 2024
    Applicant: Adobe Inc.
    Inventors: Christopher Alan Tensmeyer, Rajiv Jain, Curtis Michael Wigington, Brian Lynn Price, Brian Lafayette Davis
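    The handwriting-generation abstract above conditions a decoder on two inputs: a style code extracted from a handwriting sample and a variable-length coded text input. A minimal sketch of how those two inputs can be paired (the function, alphabet, and concatenation layout are hypothetical illustrations, not the patented encoder-decoder): each character code is concatenated with the shared style code.

    ```python
    import numpy as np

    def build_decoder_input(style_vector, text,
                            alphabet="abcdefghijklmnopqrstuvwxyz "):
        """Pair a handwriting style code with a variable-length text encoding.

        style_vector: 1-D array summarizing a writer's style
                      (stand-in for a style encoder's output).
        text:         the string to render in that style.
        Returns an (len(text), len(alphabet) + len(style_vector)) array:
        each row is a one-hot character code plus the shared style code.
        """
        onehot = np.zeros((len(text), len(alphabet)))
        for i, ch in enumerate(text):
            onehot[i, alphabet.index(ch)] = 1.0
        # The same style code conditions every character position.
        style = np.tile(style_vector, (len(text), 1))
        return np.concatenate([onehot, style], axis=1)

    style = np.array([0.1, 0.2])
    decoder_input = build_decoder_input(style, "hi")
    ```

    Because the style code is independent of the text, the same style can render text that never appeared in the original handwriting sample, which is the property the abstract emphasizes.
    
    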
  • Patent number: 11899927
    Abstract: Techniques are provided for generating a digital image of simulated handwriting using an encoder-decoder neural network trained on images of natural handwriting samples. The simulated handwriting image can be generated based on a style of a handwriting sample and a variable length coded text input. The style represents visually distinctive characteristics of the handwriting sample, such as the shape, size, slope, and spacing of the letters, characters, or other markings in the handwriting sample. The resulting simulated handwriting image can include the text input rendered in the style of the handwriting sample. The distinctive visual appearance of the letters or words in the simulated handwriting image mimics the visual appearance of the letters or words in the handwriting sample image, whether the letters or words in the simulated handwriting image are the same as in the handwriting sample image or different from those in the handwriting sample image.
    Type: Grant
    Filed: January 24, 2022
    Date of Patent: February 13, 2024
    Assignee: Adobe Inc.
    Inventors: Christopher Alan Tensmeyer, Rajiv Jain, Curtis Michael Wigington, Brian Lynn Price, Brian Lafayette Davis
  • Patent number: 11880977
    Abstract: Techniques are disclosed for deep neural network (DNN) based interactive image matting. A methodology implementing the techniques according to an embodiment includes generating, by the DNN, an alpha matte associated with an image, based on user-specified foreground region locations in the image. The method further includes applying a first DNN subnetwork to the image, the first subnetwork trained to generate a binary mask based on the user input, the binary mask designating pixels of the image as background or foreground. The method further includes applying a second DNN subnetwork to the generated binary mask, the second subnetwork trained to generate a trimap based on the user input, the trimap designating pixels of the image as background, foreground, or uncertain status. The method further includes applying a third DNN subnetwork to the generated trimap, the third subnetwork trained to generate the alpha matte based on the user input.
    Type: Grant
    Filed: May 6, 2021
    Date of Patent: January 23, 2024
    Assignee: Adobe Inc.
    Inventors: Brian Lynn Price, Scott Cohen, Marco Forte, Ning Xu
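    The matting cascade above runs mask, then trimap, then alpha matte through three trained subnetworks. A classical stand-in for the second stage (hypothetical function; the patent uses a learned DNN, not this rule) shows what a trimap adds over a binary mask: an explicit uncertain band around the mask edge for the alpha stage to resolve.

    ```python
    import numpy as np

    def mask_to_trimap(mask, band=1):
        """Derive a trimap from a binary mask: pixels within `band` of the
        mask edge become uncertain (0.5); the rest stay definite
        foreground (1.0) or background (0.0)."""
        h, w = mask.shape
        trimap = mask.astype(float)
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - band), min(h, y + band + 1)
                x0, x1 = max(0, x - band), min(w, x + band + 1)
                window = mask[y0:y1, x0:x1]
                if window.min() != window.max():   # edge nearby -> uncertain
                    trimap[y, x] = 0.5
        return trimap

    # Binary mask with a horizontal edge between rows 1 and 2.
    mask = np.zeros((5, 5), dtype=int)
    mask[2:] = 1
    trimap = mask_to_trimap(mask)
    ```

    The learned version in the patent conditions this step on the user's foreground clicks instead of a fixed band width.
    
    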
  • Publication number: 20230360177
    Abstract: In implementations of systems for joint trimap estimation and alpha matte prediction, a computing device implements a matting system to estimate a trimap for a frame of a digital video using a first stage of a machine learning model. An alpha matte is predicted for the frame based on the trimap and the frame using a second stage of the machine learning model. The matting system generates a refined trimap and a refined alpha matte for the frame based on the alpha matte, the trimap, and the frame using a third stage of the machine learning model. An additional trimap is estimated for an additional frame of the digital video based on the refined trimap and the refined alpha matte using the first stage of the machine learning model.
    Type: Application
    Filed: May 4, 2022
    Publication date: November 9, 2023
    Applicant: Adobe Inc.
    Inventors: Joon-Young Lee, Seoungwug Oh, Brian Lynn Price, Hongje Seong
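    The video-matting pipeline above feeds a frame's refined alpha matte back into trimap estimation for the next frame. A minimal sketch of that hand-off (hypothetical function and thresholds; the patent's first stage is a learned model, not this rule): confident alpha values seed definite regions, everything else stays uncertain.

    ```python
    import numpy as np

    def alpha_to_trimap(alpha, fg_thresh=0.9, bg_thresh=0.1):
        """Seed the next frame's trimap from the previous frame's refined
        alpha matte: confident alpha values become definite foreground or
        background; the rest is left uncertain for later stages to resolve."""
        trimap = np.full(alpha.shape, 0.5)
        trimap[alpha >= fg_thresh] = 1.0
        trimap[alpha <= bg_thresh] = 0.0
        return trimap

    prev_alpha = np.array([[0.95, 0.50, 0.05]])
    next_trimap = alpha_to_trimap(prev_alpha)
    ```

    Propagating the refined trimap and alpha forward is what keeps the per-frame mattes temporally consistent across the video.
    
    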
  • Patent number: 11756208
    Abstract: In implementations of object boundary generation, a computing device implements a boundary system to receive a mask defining a contour of an object depicted in a digital image, the mask having a lower resolution than the digital image. The boundary system maps a curve to the contour of the object and extracts strips of pixels from the digital image which are normal to points of the curve. A sample of the digital image is generated using the extracted strips of pixels which is input to a machine learning model. The machine learning model outputs a representation of a boundary of the object by processing the sample of the digital image.
    Type: Grant
    Filed: December 7, 2021
    Date of Patent: September 12, 2023
    Assignee: Adobe Inc.
    Inventors: Brian Lynn Price, Peng Zhou, Scott David Cohen, Gregg Darryl Wilensky
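    The boundary-generation abstract above samples strips of pixels normal to points on the contour curve. A minimal sketch of one strip extraction (hypothetical function, nearest-neighbor sampling, toy strip length): step along the contour normal from a curve point and gather pixels.

    ```python
    import numpy as np

    def extract_normal_strip(image, point, normal, half_len=2):
        """Sample a strip of pixels from `image` centered at `point`,
        stepping along the contour's unit `normal` vector
        (nearest-neighbor sampling, clamped to the image bounds)."""
        h, w = image.shape[:2]
        strip = []
        for t in range(-half_len, half_len + 1):
            y = int(round(point[0] + t * normal[0]))
            x = int(round(point[1] + t * normal[1]))
            y, x = min(max(y, 0), h - 1), min(max(x, 0), w - 1)
            strip.append(image[y, x])
        return np.array(strip)

    # A 5x5 gradient image; the normal points along the x-axis, so the
    # strip at (2, 2) reads out row 2.
    grad = np.arange(25).reshape(5, 5)
    strip = extract_normal_strip(grad, (2, 2), (0.0, 1.0), half_len=2)
    ```

    Concatenating such strips into a sample lets the machine learning model refine the boundary at full image resolution even though the input mask is lower resolution.
    
    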
  • Publication number: 20230244940
    Abstract: Embodiments of the present invention provide systems, methods, and non-transitory computer storage media for generating an ambient occlusion (AO) map for a 2D image that can be combined with the 2D image to adjust the contrast of the 2D image based on the geometric information in the 2D image. In embodiments, using a trained neural network, an AO map for a 2D image is automatically generated without any predefined 3D scene information. Optimizing the neural network to generate an estimated AO map for a 2D image requires training, testing, and validating the neural network using a synthetic dataset composed of pairs of images and ground truth AO maps rendered from 3D scenes. By using an estimated AO map to adjust the contrast of a 2D image, the contrast of the image can be adjusted to make the image appear lifelike by modifying the shadows and shading in the image based on the ambient lighting present in the image.
    Type: Application
    Filed: April 6, 2023
    Publication date: August 3, 2023
    Inventors: Long Mai, Yannick Hold-Geoffroy, Naoto Inoue, Daichi Ito, Brian Lynn Price
  • Patent number: 11663467
    Abstract: Embodiments of the present invention provide systems, methods, and non-transitory computer storage media for generating an ambient occlusion (AO) map for a 2D image that can be combined with the 2D image to adjust the contrast of the 2D image based on the geometric information in the 2D image. In embodiments, using a trained neural network, an AO map for a 2D image is automatically generated without any predefined 3D scene information. Optimizing the neural network to generate an estimated AO map for a 2D image requires training, testing, and validating the neural network using a synthetic dataset composed of pairs of images and ground truth AO maps rendered from 3D scenes. By using an estimated AO map to adjust the contrast of a 2D image, the contrast of the image can be adjusted to make the image appear lifelike by modifying the shadows and shading in the image based on the ambient lighting present in the image.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: May 30, 2023
    Assignee: Adobe Inc.
    Inventors: Long Mai, Yannick Hold-Geoffroy, Naoto Inoue, Daichi Ito, Brian Lynn Price
  • Patent number: 11631162
    Abstract: Fill techniques as implemented by a computing device are described to perform hole filling of a digital image. In one example, deeply learned features of a digital image, extracted using machine learning, are used by a computing device as a basis to search a digital image repository to locate a guidance digital image. Once located, machine learning techniques are then used to align the guidance digital image with the hole to be filled in the digital image. Once aligned, the guidance digital image is then used to guide generation of fill for the hole in the digital image. Machine learning techniques are used to determine which parts of the guidance digital image are to be blended to fill the hole in the digital image and which parts of the hole are to receive new content that is synthesized by the computing device.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: April 18, 2023
    Assignee: Adobe Inc.
    Inventors: Brian Lynn Price, Yinan Zhao, Scott David Cohen
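    The hole-filling abstract above blends guidance-image pixels with newly synthesized content inside the hole. A minimal sketch of that final composite (hypothetical function; in the patent the blend mask and synthesized content come from machine learning models, and the guidance image is already aligned to the hole):

    ```python
    import numpy as np

    def fill_hole(image, hole_mask, guidance, blend_mask, synthesized):
        """Fill the hole region of `image`: inside the hole, take guidance
        pixels where `blend_mask` selects them and synthesized content
        elsewhere; outside the hole, keep the original image."""
        fill = blend_mask * guidance + (1.0 - blend_mask) * synthesized
        return np.where(hole_mask.astype(bool), fill, image)

    # One hole pixel, fully sourced from the guidance image.
    image = np.zeros((2, 2))
    hole = np.array([[1, 0], [0, 0]])
    guidance = np.full((2, 2), 5.0)
    blend = np.ones((2, 2))
    synthesized = np.full((2, 2), 9.0)
    filled = fill_hole(image, hole, guidance, blend, synthesized)
    ```

    Letting the model decide per pixel between borrowed guidance content and synthesized content is what distinguishes this approach from purely generative inpainting.
    
    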
  • Patent number: 11538170
    Abstract: Methods and systems are provided for optimal segmentation of an image based on multiple segmentations. In particular, multiple segmentation methods can be combined by taking into account previous segmentations. For instance, an optimal segmentation can be generated by iteratively integrating a previous segmentation (e.g., using an image segmentation method) with a current segmentation (e.g., using the same or different image segmentation method). To allow for optimal segmentation of an image based on multiple segmentations, one or more neural networks can be used. For instance, a convolutional RNN can be used to maintain information related to one or more previous segmentations when transitioning from one segmentation method to the next. The convolutional RNN can combine the previous segmentation(s) with the current segmentation without requiring any information about the image segmentation method(s) used to generate the segmentations.
    Type: Grant
    Filed: April 3, 2020
    Date of Patent: December 27, 2022
    Assignee: Adobe Inc.
    Inventors: Brian Lynn Price, Scott David Cohen, Henghui Ding
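    The abstract above uses a convolutional RNN to integrate each new segmentation with the state carried over from previous ones. A minimal stand-in (hypothetical function; the patent learns a convolutional gate rather than using a fixed scalar) shows the key property: later segmentations update, rather than overwrite, the accumulated result.

    ```python
    import numpy as np

    def fuse_segmentations(segmentations, gate=0.5):
        """Iteratively fold each new segmentation into a running state,
        a scalar-gated stand-in for a convolutional RNN update. The fusion
        needs no knowledge of which method produced each segmentation."""
        state = segmentations[0].astype(float)
        for seg in segmentations[1:]:
            state = (1.0 - gate) * state + gate * seg   # gated per-pixel update
        return state

    # Two disagreeing segmentations of the same pixel are blended.
    seg_a = np.array([[1.0]])
    seg_b = np.array([[0.0]])
    fused = fuse_segmentations([seg_a, seg_b])
    ```

    As the abstract notes, this fusion is agnostic to the segmentation methods involved, which is what allows mixing different methods in one iterative loop.
    
    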
  • Patent number: 11514252
    Abstract: A discriminative captioning system generates captions for digital images that can be used to tell two digital images apart. The discriminative captioning system includes a machine learning system that is trained by a discriminative captioning training system that includes a retrieval machine learning system. For training, a digital image is input to the caption generation machine learning system, which generates a caption for the digital image. The digital image and the generated caption, as well as a set of additional images, are input to the retrieval machine learning system. The retrieval machine learning system generates a discriminability loss that indicates how well the retrieval machine learning system is able to use the caption to discriminate between the digital image and each image in the set of additional digital images. This discriminability loss is used to train the caption generation machine learning system.
    Type: Grant
    Filed: June 10, 2018
    Date of Patent: November 29, 2022
    Assignee: Adobe Inc.
    Inventors: Brian Lynn Price, Ruotian Luo, Scott David Cohen
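    The discriminability loss in the abstract above measures how well a retrieval system can pick the captioned image out of a set of distractors. A minimal sketch (hypothetical function and embedding scheme; the patent's retrieval system is itself a trained model): score the caption against every image and penalize low probability on the target.

    ```python
    import numpy as np

    def discriminability_loss(caption_emb, image_embs, target_index):
        """Negative log-probability that retrieval picks the target image
        given a caption embedding and a stack of image embeddings, so
        captions that fail to tell the images apart are penalized."""
        scores = image_embs @ caption_emb      # similarity per image
        scores -= scores.max()                 # numerical stability
        probs = np.exp(scores) / np.exp(scores).sum()
        return -np.log(probs[target_index])

    # A caption aligned with image 0 is cheap to retrieve against image 0
    # and expensive against image 1.
    caption = np.array([1.0, 0.0])
    images = np.array([[1.0, 0.0],
                       [0.0, 1.0]])
    good = discriminability_loss(caption, images, 0)
    bad = discriminability_loss(caption, images, 1)
    ```

    Backpropagating this loss into the caption generator is what pushes it toward captions that mention distinguishing details.
    
    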
  • Publication number: 20220148326
    Abstract: Techniques are provided for generating a digital image of simulated handwriting using an encoder-decoder neural network trained on images of natural handwriting samples. The simulated handwriting image can be generated based on a style of a handwriting sample and a variable length coded text input. The style represents visually distinctive characteristics of the handwriting sample, such as the shape, size, slope, and spacing of the letters, characters, or other markings in the handwriting sample. The resulting simulated handwriting image can include the text input rendered in the style of the handwriting sample. The distinctive visual appearance of the letters or words in the simulated handwriting image mimics the visual appearance of the letters or words in the handwriting sample image, whether the letters or words in the simulated handwriting image are the same as in the handwriting sample image or different from those in the handwriting sample image.
    Type: Application
    Filed: January 24, 2022
    Publication date: May 12, 2022
    Applicant: Adobe Inc.
    Inventors: Christopher Alan Tensmeyer, Rajiv Jain, Curtis Michael Wigington, Brian Lynn Price, Brian Lafayette Davis
  • Publication number: 20220114705
    Abstract: Fill techniques as implemented by a computing device are described to perform hole filling of a digital image. In one example, deeply learned features of a digital image, extracted using machine learning, are used by a computing device as a basis to search a digital image repository to locate a guidance digital image. Once located, machine learning techniques are then used to align the guidance digital image with the hole to be filled in the digital image. Once aligned, the guidance digital image is then used to guide generation of fill for the hole in the digital image. Machine learning techniques are used to determine which parts of the guidance digital image are to be blended to fill the hole in the digital image and which parts of the hole are to receive new content that is synthesized by the computing device.
    Type: Application
    Filed: December 21, 2021
    Publication date: April 14, 2022
    Applicant: Adobe Inc.
    Inventors: Brian Lynn Price, Yinan Zhao, Scott David Cohen
  • Publication number: 20220092790
    Abstract: In implementations of object boundary generation, a computing device implements a boundary system to receive a mask defining a contour of an object depicted in a digital image, the mask having a lower resolution than the digital image. The boundary system maps a curve to the contour of the object and extracts strips of pixels from the digital image which are normal to points of the curve. A sample of the digital image is generated using the extracted strips of pixels which is input to a machine learning model. The machine learning model outputs a representation of a boundary of the object by processing the sample of the digital image.
    Type: Application
    Filed: December 7, 2021
    Publication date: March 24, 2022
    Applicant: Adobe Inc.
    Inventors: Brian Lynn Price, Peng Zhou, Scott David Cohen, Gregg Darryl Wilensky
  • Patent number: 11263259
    Abstract: Compositing aware digital image search techniques and systems are described that leverage machine learning. In one example, a compositing aware image search system employs a two-stream convolutional neural network (CNN) to jointly learn feature embeddings from foreground digital images that capture a foreground object and background digital images that capture a background scene. In order to train models of the convolutional neural networks, triplets of training digital images are used. Each triplet may include a positive foreground digital image and a positive background digital image taken from the same digital image. The triplet also contains a negative foreground or background digital image that is dissimilar to the positive foreground or background digital image that is also included as part of the triplet.
    Type: Grant
    Filed: July 15, 2020
    Date of Patent: March 1, 2022
    Assignee: Adobe Inc.
    Inventors: Xiaohui Shen, Zhe Lin, Kalyan Krishna Sunkavalli, Hengshuang Zhao, Brian Lynn Price
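    The triplet training described above pulls matching foreground/background embeddings together and pushes dissimilar ones apart. A minimal sketch of the standard hinge-style triplet loss that this kind of training typically uses (hypothetical function and margin; the patent's contribution is the two-stream CNN and how triplets are mined from composite images):

    ```python
    import numpy as np

    def triplet_loss(anchor, positive, negative, margin=0.2):
        """Hinge triplet loss on embedding vectors: the positive pair's
        squared distance must beat the negative pair's by `margin`."""
        d_pos = np.sum((anchor - positive) ** 2)
        d_neg = np.sum((anchor - negative) ** 2)
        return max(0.0, d_pos - d_neg + margin)

    # A well-separated triplet incurs no loss; swapping the roles does.
    anchor = np.array([0.0, 0.0])
    positive = np.array([0.1, 0.0])
    negative = np.array([1.0, 0.0])
    ok = triplet_loss(anchor, positive, negative)
    violated = triplet_loss(anchor, negative, positive)
    ```

    After training, a background query embedding can be compared against foreground embeddings (or vice versa) to rank compositing-compatible search results.
    
    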
  • Patent number: 11250252
    Abstract: Techniques are provided for generating a digital image of simulated handwriting using an encoder-decoder neural network trained on images of natural handwriting samples. The simulated handwriting image can be generated based on a style of a handwriting sample and a variable length coded text input. The style represents visually distinctive characteristics of the handwriting sample, such as the shape, size, slope, and spacing of the letters, characters, or other markings in the handwriting sample. The resulting simulated handwriting image can include the text input rendered in the style of the handwriting sample. The distinctive visual appearance of the letters or words in the simulated handwriting image mimics the visual appearance of the letters or words in the handwriting sample image, whether the letters or words in the simulated handwriting image are the same as in the handwriting sample image or different from those in the handwriting sample image.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: February 15, 2022
    Assignee: Adobe Inc.
    Inventors: Christopher Alan Tensmeyer, Rajiv Jain, Curtis Michael Wigington, Brian Lynn Price, Brian Lafayette Davis
  • Patent number: 11244460
    Abstract: In implementations of object boundary generation, a computing device implements a boundary system to receive a mask defining a contour of an object depicted in a digital image, the mask having a lower resolution than the digital image. The boundary system maps a curve to the contour of the object and extracts strips of pixels from the digital image which are normal to points of the curve. A sample of the digital image is generated using the extracted strips of pixels which is input to a machine learning model. The machine learning model outputs a representation of a boundary of the object by processing the sample of the digital image.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: February 8, 2022
    Assignee: Adobe Inc.
    Inventors: Brian Lynn Price, Peng Zhou, Scott David Cohen, Gregg Darryl Wilensky
  • Patent number: 11244430
    Abstract: Fill techniques as implemented by a computing device are described to perform hole filling of a digital image. In one example, deeply learned features of a digital image, extracted using machine learning, are used by a computing device as a basis to search a digital image repository to locate a guidance digital image. Once located, machine learning techniques are then used to align the guidance digital image with the hole to be filled in the digital image. Once aligned, the guidance digital image is then used to guide generation of fill for the hole in the digital image. Machine learning techniques are used to determine which parts of the guidance digital image are to be blended to fill the hole in the digital image and which parts of the hole are to receive new content that is synthesized by the computing device.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: February 8, 2022
    Assignee: Adobe Inc.
    Inventors: Brian Lynn Price, Yinan Zhao, Scott David Cohen
  • Publication number: 20210312635
    Abstract: Methods and systems are provided for optimal segmentation of an image based on multiple segmentations. In particular, multiple segmentation methods can be combined by taking into account previous segmentations. For instance, an optimal segmentation can be generated by iteratively integrating a previous segmentation (e.g., using an image segmentation method) with a current segmentation (e.g., using the same or different image segmentation method). To allow for optimal segmentation of an image based on multiple segmentations, one or more neural networks can be used. For instance, a convolutional RNN can be used to maintain information related to one or more previous segmentations when transitioning from one segmentation method to the next. The convolutional RNN can combine the previous segmentation(s) with the current segmentation without requiring any information about the image segmentation method(s) used to generate the segmentations.
    Type: Application
    Filed: April 3, 2020
    Publication date: October 7, 2021
    Inventors: Brian Lynn Price, Scott David Cohen, Henghui Ding