Patents by Inventor Long Mai

Long Mai is named as an inventor on the following patent filings. The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11871145
    Abstract: Embodiments are disclosed for video image interpolation. In some embodiments, video image interpolation includes receiving a pair of input images from a digital video, determining, using a neural network, a plurality of spatially varying kernels each corresponding to a pixel of an output image, convolving a first set of spatially varying kernels with a first input image from the pair of input images and a second set of spatially varying kernels with a second input image from the pair of input images to generate filtered images, and generating the output image by performing kernel normalization on the filtered images.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: January 9, 2024
    Assignee: Adobe Inc.
    Inventors: Simon Niklaus, Oliver Wang, Long Mai
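The pipeline claimed above (per-pixel spatially varying kernels, per-frame convolution, then kernel normalization) can be sketched for a single output pixel. This is a minimal illustration, not the patented implementation: the kernel values below are hypothetical stand-ins for what the neural network would predict.

```python
def interpolate_pixel(patch1, kernel1, patch2, kernel2):
    """Interpolate one output pixel from co-located patches of two frames.

    Each patch is filtered with its own spatially varying kernel, the
    filtered values are summed, and the sum is divided by the total
    kernel weight (kernel normalization), so unnormalized kernels still
    yield a properly scaled color.
    """
    filtered = sum(w * p for w, p in zip(kernel1, patch1))
    filtered += sum(w * p for w, p in zip(kernel2, patch2))
    total_weight = sum(kernel1) + sum(kernel2)
    return filtered / total_weight

# Midpoint of a flat region: both 3-pixel patches weighted equally.
mid = interpolate_pixel([1.0, 1.0, 1.0], [0.5, 0.5, 0.5],
                        [3.0, 3.0, 3.0], [0.5, 0.5, 0.5])
```

Dividing by the summed kernel weights is what lets the network emit unconstrained kernels while keeping the output color in range.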
  • Publication number: 20230244940
    Abstract: Embodiments of the present invention provide systems, methods, and non-transitory computer storage media for generating an ambient occlusion (AO) map for a 2D image that can be combined with the 2D image to adjust the contrast of the 2D image based on the geometric information in the 2D image. In embodiments, using a trained neural network, an AO map for a 2D image is automatically generated without any predefined 3D scene information. Optimizing the neural network to generate an estimated AO map for a 2D image requires training, testing, and validating the neural network using a synthetic dataset comprised of pairs of images and ground truth AO maps rendered from 3D scenes. By using an estimated AO map to adjust the contrast of a 2D image, the contrast of the image can be adjusted to make the image appear lifelike by modifying the shadows and shading in the image based on the ambient lighting present in the image.
    Type: Application
    Filed: April 6, 2023
    Publication date: August 3, 2023
    Inventors: Long Mai, Yannick Hold-Geoffroy, Naoto Inoue, Daichi Ito, Brian Lynn Price
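The final compositing step described above (combining the estimated AO map with the 2D image to adjust its contrast) can be sketched as a per-pixel darkening. This assumes the AO map is already predicted, uses the convention that 1.0 means fully unoccluded and 0.0 fully occluded, and adds a hypothetical `strength` control not taken from the filing.

```python
def apply_ao(image, ao_map, strength=1.0):
    """Darken each pixel by its ambient-occlusion value to deepen
    shadows in creases and contact regions.

    image   : rows of grayscale intensities
    ao_map  : rows of AO values in [0, 1]; 1.0 = fully unoccluded
    strength: 0.0 leaves the image unchanged, 1.0 applies the full map
    """
    return [
        [px * (1.0 - strength * (1.0 - ao)) for px, ao in zip(img_row, ao_row)]
        for img_row, ao_row in zip(image, ao_map)
    ]

# A half-occluded pixel is darkened to half its intensity at full strength.
shaded = apply_ao([[100.0, 100.0]], [[1.0, 0.5]], strength=1.0)
```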
  • Patent number: 11663467
    Abstract: Embodiments of the present invention provide systems, methods, and non-transitory computer storage media for generating an ambient occlusion (AO) map for a 2D image that can be combined with the 2D image to adjust the contrast of the 2D image based on the geometric information in the 2D image. In embodiments, using a trained neural network, an AO map for a 2D image is automatically generated without any predefined 3D scene information. Optimizing the neural network to generate an estimated AO map for a 2D image requires training, testing, and validating the neural network using a synthetic dataset comprised of pairs of images and ground truth AO maps rendered from 3D scenes. By using an estimated AO map to adjust the contrast of a 2D image, the contrast of the image can be adjusted to make the image appear lifelike by modifying the shadows and shading in the image based on the ambient lighting present in the image.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: May 30, 2023
    Assignee: Adobe Inc.
    Inventors: Long Mai, Yannick Hold-Geoffroy, Naoto Inoue, Daichi Ito, Brian Lynn Price
  • Patent number: 11645328
    Abstract: Systems and methods for performing image search are described. An image search method may include generating a feature vector for each of a plurality of stored images using a machine learning model trained using a rotation loss term, receiving a search query comprising a search image with an object having an orientation, generating a query feature vector for the search image using the machine learning model, wherein the query feature vector is based at least in part on the orientation, comparing the query feature vector to the feature vector for each of the plurality of stored images, and selecting at least one stored image of the plurality of stored images based on the comparison, wherein the at least one stored image comprises a similar orientation to the orientation of the object in the search image.
    Type: Grant
    Filed: March 17, 2020
    Date of Patent: May 9, 2023
    Assignee: Adobe Inc.
    Inventors: Long Mai, Michael Alcorn, Baldo Faieta, Vladimir Kim
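The retrieval step described above (comparing a query feature vector against stored feature vectors and selecting the best match) can be sketched with cosine similarity. The similarity measure and the toy vectors are assumptions for illustration; the rotation loss that makes the vectors orientation-aware belongs to model training, which is not shown.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_vec, index):
    """Return the id of the stored image whose feature vector is most
    similar to the query's. Because the (hypothetical) model was trained
    with a rotation loss term, orientation is encoded in the vectors, so
    the best match tends to share the query object's orientation."""
    return max(index, key=lambda item: cosine(query_vec, item[1]))[0]

# The stored image whose vector points the same way as the query wins.
best = search([1.0, 0.1], [("rotated", [0.0, 1.0]), ("upright", [0.9, 0.2])])
```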
  • Patent number: 11468318
    Abstract: Systems, methods, and computer-readable media for context-aware synthesis for video frame interpolation are provided. A convolutional neural network (ConvNet) may, given two input video or image frames, interpolate a frame temporally in the middle of the two input frames by combining motion estimation and pixel synthesis into a single step and formulating pixel interpolation as a local convolution over patches in the input images. The ConvNet may estimate a convolution kernel based on a first receptive field patch of a first input image frame and a second receptive field patch of a second input image frame. The ConvNet may then convolve the convolution kernel over a first pixel patch of the first input image frame and a second pixel patch of the second input image frame to obtain color data of an output pixel of the interpolation frame. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: October 11, 2022
    Assignee: Portland State University
    Inventors: Feng Liu, Simon Niklaus, Long Mai
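The single-step formulation above (motion estimation and pixel synthesis fused into one local convolution) reduces, for one output pixel, to a weighted sum over the two co-located patches. In this sketch the kernel is a hypothetical stand-in for the ConvNet's estimate, and the patches are flattened to 1D for brevity.

```python
def output_pixel(kernel, patch1, patch2):
    """Convolve one estimated kernel over the co-located pixel patches
    from the two input frames; the weighted sum over both patches is
    the color of one pixel of the interpolated frame."""
    stacked = list(patch1) + list(patch2)
    return sum(k * p for k, p in zip(kernel, stacked))

# A kernel that averages all four samples yields the temporal midpoint.
color = output_pixel([0.25, 0.25, 0.25, 0.25], [1.0, 1.0], [3.0, 3.0])
```

Because the kernel is estimated per output pixel from both receptive fields, it can encode the local motion and the resampling weights at once, which is what lets the method skip an explicit optical-flow stage.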
  • Publication number: 20220321830
    Abstract: Embodiments are disclosed for video image interpolation. In some embodiments, video image interpolation includes receiving a pair of input images from a digital video, determining, using a neural network, a plurality of spatially varying kernels each corresponding to a pixel of an output image, convolving a first set of spatially varying kernels with a first input image from the pair of input images and a second set of spatially varying kernels with a second input image from the pair of input images to generate filtered images, and generating the output image by performing kernel normalization on the filtered images.
    Type: Application
    Filed: April 6, 2021
    Publication date: October 6, 2022
    Inventors: Simon Niklaus, Oliver Wang, Long Mai
  • Publication number: 20220207745
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for training and utilizing scale-diverse segmentation neural networks to analyze digital images at different scales and identify different target objects portrayed in the digital images. For example, in one or more embodiments, the disclosed systems analyze a digital image and corresponding user indicators (e.g., foreground indicators, background indicators, edge indicators, boundary region indicators, and/or voice indicators) at different scales utilizing a scale-diverse segmentation neural network. In particular, the disclosed systems can utilize the scale-diverse segmentation neural network to generate a plurality of semantically meaningful object segmentation outputs. Furthermore, the disclosed systems can provide the plurality of object segmentation outputs for display and selection to improve the efficiency and accuracy of identifying target objects and modifying the digital image.
    Type: Application
    Filed: March 18, 2022
    Publication date: June 30, 2022
    Inventors: Scott Cohen, Long Mai, Jun Hao Liew, Brian Price
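The selection stage described above (several semantically meaningful segmentation outputs offered for one set of user indicators) can be sketched as ranking candidate masks that contain a foreground click, smallest scale first. The mask format and the area-based ordering are assumptions for illustration, not the claimed scale-diverse network.

```python
def rank_candidates(masks, click):
    """Keep the candidate masks containing the foreground click and
    order them by area, so nested hypotheses at different scales
    (part, object, group) appear from smallest to largest.

    masks: list of (label, 2D grid of 0/1); click: (x, y)
    """
    x, y = click
    hits = [(label, grid) for label, grid in masks if grid[y][x]]
    return [label for label, grid in
            sorted(hits, key=lambda m: sum(sum(row) for row in m[1]))]

# A click at (0, 0) hits the nested "part" and "group" masks only.
order = rank_candidates(
    [("group", [[1, 1], [1, 1]]),
     ("part",  [[1, 0], [0, 0]]),
     ("other", [[0, 0], [0, 1]])],
    click=(0, 0),
)
```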
  • Patent number: 11282208
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for training and utilizing scale-diverse segmentation neural networks to analyze digital images at different scales and identify different target objects portrayed in the digital images. For example, in one or more embodiments, the disclosed systems analyze a digital image and corresponding user indicators (e.g., foreground indicators, background indicators, edge indicators, boundary region indicators, and/or voice indicators) at different scales utilizing a scale-diverse segmentation neural network. In particular, the disclosed systems can utilize the scale-diverse segmentation neural network to generate a plurality of semantically meaningful object segmentation outputs. Furthermore, the disclosed systems can provide the plurality of object segmentation outputs for display and selection to improve the efficiency and accuracy of identifying target objects and modifying the digital image.
    Type: Grant
    Filed: December 24, 2018
    Date of Patent: March 22, 2022
    Assignee: Adobe Inc.
    Inventors: Scott Cohen, Long Mai, Jun Hao Liew, Brian Price
  • Publication number: 20210294834
    Abstract: Systems and methods for performing image search are described. An image search method may include generating a feature vector for each of a plurality of stored images using a machine learning model trained using a rotation loss term, receiving a search query comprising a search image with an object having an orientation, generating a query feature vector for the search image using the machine learning model, wherein the query feature vector is based at least in part on the orientation, comparing the query feature vector to the feature vector for each of the plurality of stored images, and selecting at least one stored image of the plurality of stored images based on the comparison, wherein the at least one stored image comprises a similar orientation to the orientation of the object in the search image.
    Type: Application
    Filed: March 17, 2020
    Publication date: September 23, 2021
    Inventors: Long Mai, Michael Alcorn, Baldo Faieta, Vladimir Kim
  • Publication number: 20210158139
    Abstract: Embodiments of the present invention provide systems, methods, and non-transitory computer storage media for generating an ambient occlusion (AO) map for a 2D image that can be combined with the 2D image to adjust the contrast of the 2D image based on the geometric information in the 2D image. In embodiments, using a trained neural network, an AO map for a 2D image is automatically generated without any predefined 3D scene information. Optimizing the neural network to generate an estimated AO map for a 2D image requires training, testing, and validating the neural network using a synthetic dataset comprised of pairs of images and ground truth AO maps rendered from 3D scenes. By using an estimated AO map to adjust the contrast of a 2D image, the contrast of the image can be adjusted to make the image appear lifelike by modifying the shadows and shading in the image based on the ambient lighting present in the image.
    Type: Application
    Filed: November 21, 2019
    Publication date: May 27, 2021
    Inventors: Long Mai, Yannick Hold-Geoffroy, Naoto Inoue, Daichi Ito, Brian Lynn Price
  • Publication number: 20200202533
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for training and utilizing scale-diverse segmentation neural networks to analyze digital images at different scales and identify different target objects portrayed in the digital images. For example, in one or more embodiments, the disclosed systems analyze a digital image and corresponding user indicators (e.g., foreground indicators, background indicators, edge indicators, boundary region indicators, and/or voice indicators) at different scales utilizing a scale-diverse segmentation neural network. In particular, the disclosed systems can utilize the scale-diverse segmentation neural network to generate a plurality of semantically meaningful object segmentation outputs. Furthermore, the disclosed systems can provide the plurality of object segmentation outputs for display and selection to improve the efficiency and accuracy of identifying target objects and modifying the digital image.
    Type: Application
    Filed: December 24, 2018
    Publication date: June 25, 2020
    Inventors: Scott Cohen, Long Mai, Jun Hao Liew, Brian Price
  • Publication number: 20200012940
    Abstract: Systems, methods, and computer-readable media for context-aware synthesis for video frame interpolation are provided. A convolutional neural network (ConvNet) may, given two input video or image frames, interpolate a frame temporally in the middle of the two input frames by combining motion estimation and pixel synthesis into a single step and formulating pixel interpolation as a local convolution over patches in the input images. The ConvNet may estimate a convolution kernel based on a first receptive field patch of a first input image frame and a second receptive field patch of a second input image frame. The ConvNet may then convolve the convolution kernel over a first pixel patch of the first input image frame and a second pixel patch of the second input image frame to obtain color data of an output pixel of the interpolation frame. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: March 16, 2018
    Publication date: January 9, 2020
    Applicant: Portland State University
    Inventors: Feng Liu, Simon Niklaus, Long Mai
  • Patent number: D853529
    Type: Grant
    Filed: May 7, 2018
    Date of Patent: July 9, 2019
    Inventor: Long Mai