Patents by Inventor Mai Long

Mai Long has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11798180
    Abstract: This disclosure describes one or more implementations of a depth prediction system that generates accurate depth images from single input digital images. In one or more implementations, the depth prediction system enforces different sets of loss functions across mixed data sources to generate a multi-branch architecture depth prediction model. For instance, in one or more implementations, the depth prediction model utilizes different data sources having different granularities of ground truth depth data to robustly train the model. Further, given the different ground truth depth data granularities from the different data sources, the depth prediction model enforces different combinations of loss functions, including an image-level normalized regression loss function and/or a pair-wise normal loss, among other loss functions.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: October 24, 2023
    Assignee: Adobe Inc.
    Inventors: Wei Yin, Jianming Zhang, Oliver Wang, Simon Niklaus, Mai Long, Su Chen
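The image-level normalized regression loss named in the abstract above is the kind of term that lets depth data with source-dependent scale and shift be mixed in one training run. Below is a minimal sketch of such a loss, assuming a simple per-image standardization; the function name, the normalization choice, and the tensor shapes are illustrative assumptions, not Adobe's implementation.

```python
# Minimal sketch, NOT the patented implementation: an image-level
# normalized regression loss. Standardizing each depth map per image
# factors out source-dependent scale and shift, so data sources with
# different ground-truth granularities can be mixed during training.
import torch

def image_level_normalized_regression_loss(pred, gt, eps=1e-6):
    """L1 loss between per-image standardized (B, H, W) depth maps."""
    def normalize(d):
        mu = d.mean(dim=(1, 2), keepdim=True)
        sigma = d.std(dim=(1, 2), keepdim=True)
        return (d - mu) / (sigma + eps)
    return (normalize(pred) - normalize(gt)).abs().mean()

# Hypothetical usage with random stand-in tensors.
pred = torch.rand(4, 64, 64)
gt = 5.0 * torch.rand(4, 64, 64) + 2.0  # different scale/shift than pred
loss = image_level_normalized_regression_loss(pred, gt)
```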
  • Publication number: 20230177824
    Abstract: Systems and methods are disclosed for selecting target objects within digital images utilizing a multi-modal object selection neural network trained to accommodate multiple input modalities. In particular, in one or more embodiments, the disclosed systems and methods generate a trained neural network based on training digital images and training indicators corresponding to various input modalities. Moreover, one or more embodiments of the disclosed systems and methods utilize a trained neural network and iterative user inputs corresponding to different input modalities to select target objects in digital images. Specifically, the disclosed systems and methods can transform user inputs into distance maps that can be utilized in conjunction with color channels and a trained neural network to identify pixels that reflect the target object.
    Type: Application
    Filed: January 30, 2023
    Publication date: June 8, 2023
    Inventors: Brian Price, Scott Cohen, Mai Long, Jun Hao Liew
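The distance-map transform described in the entry above (whose abstract the granted patent below shares) can be illustrated compactly: each set of user clicks becomes a channel whose pixels encode distance to the nearest click, stacked with the RGB channels as network input. The sketch below is a hedged approximation; the truncation value, click format, and channel layout are my assumptions.

```python
# Minimal sketch, not the patented input pipeline: convert user clicks
# into truncated Euclidean distance maps and stack them with the color
# channels, yielding the multi-channel input the abstract describes.
import numpy as np

def click_to_distance_map(h, w, clicks, truncate=255.0):
    """Per-pixel distance to the nearest click (truncated)."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.full((h, w), np.inf)
    for cy, cx in clicks:
        dist = np.minimum(dist, np.hypot(ys - cy, xs - cx))
    return np.minimum(dist, truncate)

rgb = np.random.rand(64, 64, 3)                   # stand-in digital image
pos = click_to_distance_map(64, 64, [(20, 30)])   # positive (object) click
neg = click_to_distance_map(64, 64, [(50, 10)])   # negative (background) click
net_input = np.dstack([rgb, pos, neg])            # (64, 64, 5) network input
```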
  • Patent number: 11568627
    Abstract: Systems and methods are disclosed for selecting target objects within digital images utilizing a multi-modal object selection neural network trained to accommodate multiple input modalities. In particular, in one or more embodiments, the disclosed systems and methods generate a trained neural network based on training digital images and training indicators corresponding to various input modalities. Moreover, one or more embodiments of the disclosed systems and methods utilize a trained neural network and iterative user inputs corresponding to different input modalities to select target objects in digital images. Specifically, the disclosed systems and methods can transform user inputs into distance maps that can be utilized in conjunction with color channels and a trained neural network to identify pixels that reflect the target object.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: January 31, 2023
    Assignee: Adobe Inc.
    Inventors: Brian Price, Scott Cohen, Mai Long, Jun Hao Liew
  • Patent number: 11443481
    Abstract: This disclosure describes implementations of a three-dimensional (3D) scene recovery system that reconstructs a 3D scene representation of a scene portrayed in a single digital image. For instance, the 3D scene recovery system trains and utilizes a 3D point cloud model to recover accurate intrinsic camera parameters from a depth map of the digital image. Additionally, the 3D point cloud model may include multiple neural networks that target specific intrinsic camera parameters. For example, the 3D point cloud model may include a depth 3D point cloud neural network that recovers the depth shift as well as a focal length 3D point cloud neural network that recovers the camera focal length. Further, the 3D scene recovery system may utilize the recovered intrinsic camera parameters to transform the single digital image into an accurate and realistic 3D scene representation, such as a 3D point cloud.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: September 13, 2022
    Assignee: Adobe Inc.
    Inventors: Wei Yin, Jianming Zhang, Oliver Wang, Simon Niklaus, Mai Long, Su Chen
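The point cloud representation central to the abstract above can be sketched as a plain pinhole unprojection: given a depth map, a candidate focal length, and a candidate depth shift, back-project every pixel into 3D. This covers only the geometric scaffolding, under assumed conventions (principal point at the image center); the patented networks that evaluate such clouds are not reproduced here.

```python
# Minimal sketch of the geometry only: unproject a depth map into a 3D
# point cloud for a candidate focal length and depth shift. The 3D point
# cloud networks described in the patent operate on representations
# like this one.
import numpy as np

def depth_to_point_cloud(depth, focal_length, depth_shift=0.0):
    """Back-project an (H, W) depth map into an (H*W, 3) point cloud."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)
    z = depth + depth_shift                      # unresolved offset term
    x = (u - w / 2.0) * z / focal_length         # principal point assumed
    y = (v - h / 2.0) * z / focal_length         # at the image center
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

cloud = depth_to_point_cloud(np.random.rand(48, 64) + 1.0, focal_length=60.0)
```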
  • Publication number: 20220284613
    Abstract: This disclosure describes one or more implementations of a depth prediction system that generates accurate depth images from single input digital images. In one or more implementations, the depth prediction system enforces different sets of loss functions across mixed data sources to generate a multi-branch architecture depth prediction model. For instance, in one or more implementations, the depth prediction model utilizes different data sources having different granularities of ground truth depth data to robustly train the model. Further, given the different ground truth depth data granularities from the different data sources, the depth prediction model enforces different combinations of loss functions, including an image-level normalized regression loss function and/or a pair-wise normal loss, among other loss functions.
    Type: Application
    Filed: February 26, 2021
    Publication date: September 8, 2022
    Inventors: Wei Yin, Jianming Zhang, Oliver Wang, Simon Niklaus, Mai Long, Su Chen
  • Publication number: 20220277514
    Abstract: This disclosure describes implementations of a three-dimensional (3D) scene recovery system that reconstructs a 3D scene representation of a scene portrayed in a single digital image. For instance, the 3D scene recovery system trains and utilizes a 3D point cloud model to recover accurate intrinsic camera parameters from a depth map of the digital image. Additionally, the 3D point cloud model may include multiple neural networks that target specific intrinsic camera parameters. For example, the 3D point cloud model may include a depth 3D point cloud neural network that recovers the depth shift as well as a focal length 3D point cloud neural network that recovers the camera focal length. Further, the 3D scene recovery system may utilize the recovered intrinsic camera parameters to transform the single digital image into an accurate and realistic 3D scene representation, such as a 3D point cloud.
    Type: Application
    Filed: February 26, 2021
    Publication date: September 1, 2022
    Inventors: Wei Yin, Jianming Zhang, Oliver Wang, Simon Niklaus, Mai Long, Su Chen
  • Patent number: 11367206
    Abstract: In order to provide monocular depth prediction, a trained neural network may be used. To train the neural network, edge detection on a digital image may be performed to determine at least one edge of the digital image, and then a first point and a second point of the digital image may be sampled, based on the at least one edge. A relative depth between the first point and the second point may be predicted, and the neural network may be trained to perform monocular depth prediction using a loss function that compares the predicted relative depth with a ground truth relative depth between the first point and the second point.
    Type: Grant
    Filed: February 13, 2020
    Date of Patent: June 21, 2022
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Oliver Wang, Mai Long, Ke Xian, Jianming Zhang
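The training signal in the abstract above pairs points sampled around detected edges and supervises only their relative depth. A hedged sketch of one such pairwise term follows; the log-ranking form, the equality threshold, and all names are my assumptions rather than the patent's exact loss.

```python
# Minimal sketch, not the patented loss: an ordinal (ranking) penalty on
# a single point pair. The network is pushed to order the two points'
# depths the same way the ground truth does; near-equal ground-truth
# depths instead pull the predicted difference toward zero.
import torch

def pairwise_ranking_loss(pred_depth, gt_depth, p1, p2, tau=0.02):
    d_pred = pred_depth[p1] - pred_depth[p2]
    d_gt = gt_depth[p1] - gt_depth[p2]
    if abs(d_gt) < tau:                          # roughly equal depths
        return d_pred ** 2
    sign = 1.0 if d_gt > 0 else -1.0
    return torch.log1p(torch.exp(-sign * d_pred))

pred = torch.rand(32, 32, requires_grad=True)    # stand-in prediction
gt = torch.rand(32, 32)                          # stand-in ground truth
loss = pairwise_ranking_loss(pred, gt, (5, 10), (5, 12))  # pair across an edge
loss.backward()
```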
  • Publication number: 20210256717
    Abstract: In order to provide monocular depth prediction, a trained neural network may be used. To train the neural network, edge detection on a digital image may be performed to determine at least one edge of the digital image, and then a first point and a second point of the digital image may be sampled, based on the at least one edge. A relative depth between the first point and the second point may be predicted, and the neural network may be trained to perform monocular depth prediction using a loss function that compares the predicted relative depth with a ground truth relative depth between the first point and the second point.
    Type: Application
    Filed: February 13, 2020
    Publication date: August 19, 2021
    Inventors: Zhe Lin, Oliver Wang, Mai Long, Ke Xian, Jianming Zhang
  • Patent number: 11055828
    Abstract: Techniques of inpainting video content include training a neural network to perform an inpainting operation on a video using only content from that video. For example, upon receiving video content including a sequence of initial frames, a computer generates a sequence of inputs corresponding to at least some of the sequence of initial frames, each input including, for example, a uniform noise map. The computer then generates a convolutional neural network (CNN) using the sequence of inputs as the initial layer. The parameters of the CNN are adjusted according to a cost function, which has components including a flow generation loss component and a consistency loss component. The CNN then outputs, on a final layer, estimated image values in a sequence of final frames.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: July 6, 2021
    Assignee: Adobe Inc.
    Inventors: Mai Long, Zhaowen Wang, Ning Xu, John Philip Collomosse, Haotian Zhang, Hailin Jin
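The abstract above trains a network only on the video being inpainted, with a fixed noise map as each frame's input. The following is a heavily simplified sketch of that internal-learning loop; in particular, the flow generation loss is replaced by a crude frame-difference stand-in, and the network, mask, and loss weights are all illustrative.

```python
# Minimal sketch of internal learning for video inpainting: fit one small
# CNN to the video itself, feeding fixed uniform noise maps as inputs.
# The consistency term here is a crude stand-in for the patent's flow
# generation and consistency loss components.
import torch
import torch.nn as nn

frames = torch.rand(8, 3, 32, 32)                  # sequence of initial frames
mask = (torch.rand(8, 1, 32, 32) > 0.2).float()    # 1 = known pixel
noise = torch.rand(8, 3, 32, 32)                   # fixed per-frame noise input

net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    out = net(noise)
    recon = ((out - frames) * mask).abs().mean()     # fit known content
    consistency = (out[1:] - out[:-1]).abs().mean()  # crude temporal term
    loss = recon + 0.1 * consistency
    opt.zero_grad()
    loss.backward()
    opt.step()
# `out` now holds estimated image values for all frames, holes included.
```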
  • Patent number: 11017586
    Abstract: Systems and methods are described for generating a three dimensional (3D) effect from a two dimensional (2D) image. The methods may include generating a depth map based on a 2D image, identifying a camera path, generating one or more extremal views based on the 2D image and the camera path, generating a global point cloud by inpainting occlusion gaps in the one or more extremal views, generating one or more intermediate views based on the global point cloud and the camera path, and combining the one or more extremal views and the one or more intermediate views to produce a 3D motion effect.
    Type: Grant
    Filed: April 18, 2019
    Date of Patent: May 25, 2021
    Assignee: Adobe Inc.
    Inventors: Mai Long, Simon Niklaus, Jimei Yang
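The pipeline in the abstract above interpolates a camera along a path and renders the global point cloud from each intermediate pose. The sketch below covers only that geometric step, under pinhole assumptions; extremal-view inpainting and actual rasterization are omitted, and every name here is illustrative.

```python
# Minimal sketch of the view-generation geometry only: step a camera
# along a simple linear path and project the global point cloud into
# each intermediate view.
import numpy as np

def project(points, cam_pos, focal=60.0, size=(48, 64)):
    """Pinhole projection of an (N, 3) cloud seen from a translated camera."""
    p = points - cam_pos
    z = np.clip(p[:, 2], 1e-3, None)                # avoid division by zero
    u = focal * p[:, 0] / z + size[1] / 2.0
    v = focal * p[:, 1] / z + size[0] / 2.0
    return np.stack([u, v], axis=1)

cloud = np.random.rand(500, 3) + [0.0, 0.0, 2.0]    # stand-in global point cloud
path = np.linspace([0, 0, 0], [0.2, 0.0, -0.4], 16) # simple dolly camera path
views = [project(cloud, cam) for cam in path]       # one projection per view
```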
  • Patent number: 10963759
    Abstract: The present disclosure includes methods and systems for searching for digital visual media based on semantic and spatial information. In particular, one or more embodiments of the disclosed systems and methods identify digital visual media displaying targeted visual content in a targeted region based on a query term and a query area provided via a digital canvas. Specifically, the disclosed systems and methods can receive user input of a query term and a query area and provide the query term and query area to a query neural network to generate a query feature set. Moreover, the disclosed systems and methods can compare the query feature set to digital visual media feature sets. Further, based on the comparison, the disclosed systems and methods can identify digital visual media portraying targeted visual content corresponding to the query term within a targeted region corresponding to the query area.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: March 30, 2021
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Mai Long, Jonathan Brandt, Hailin Jin, Chen Fang
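The query construction in the abstract above combines what to find (a term) with where it should appear (an area on a canvas). A hedged sketch: paint a term embedding into the grid cells covered by the query area, then rank media by similarity between this spatial-semantic query and precomputed media feature sets. The grid size, embedding, and scoring here are my simplifications, not the patented query neural network.

```python
# Minimal sketch, not the patented query network: build a spatial-semantic
# query grid from a term embedding and a query area, then rank media
# feature sets by cosine similarity.
import numpy as np

GRID, DIM = 8, 16
term_embedding = np.random.rand(DIM)          # stand-in embedding of the term

query = np.zeros((GRID, GRID, DIM))
query[2:5, 3:7] = term_embedding              # query area drawn on the canvas
q = query.ravel()

media_features = np.random.rand(100, GRID * GRID * DIM)  # indexed corpus
scores = media_features @ q / (
    np.linalg.norm(media_features, axis=1) * np.linalg.norm(q) + 1e-9)
top5 = np.argsort(scores)[::-1][:5]           # best-matching media items
```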
  • Publication number: 20200357099
    Abstract: Techniques of inpainting video content include training a neural network to perform an inpainting operation on a video using only content from that video. For example, upon receiving video content including a sequence of initial frames, a computer generates a sequence of inputs corresponding to at least some of the sequence of initial frames, each input including, for example, a uniform noise map. The computer then generates a convolutional neural network (CNN) using the sequence of inputs as the initial layer. The parameters of the CNN are adjusted according to a cost function, which has components including a flow generation loss component and a consistency loss component. The CNN then outputs, on a final layer, estimated image values in a sequence of final frames.
    Type: Application
    Filed: May 9, 2019
    Publication date: November 12, 2020
    Inventors: Mai Long, Zhaowen Wang, Ning Xu, John Philip Collomosse, Haotian Zhang, Hailin Jin
  • Publication number: 20200334894
    Abstract: Systems and methods are described for generating a three dimensional (3D) effect from a two dimensional (2D) image. The methods may include generating a depth map based on a 2D image, identifying a camera path, generating one or more extremal views based on the 2D image and the camera path, generating a global point cloud by inpainting occlusion gaps in the one or more extremal views, generating one or more intermediate views based on the global point cloud and the camera path, and combining the one or more extremal views and the one or more intermediate views to produce a 3D motion effect.
    Type: Application
    Filed: April 18, 2019
    Publication date: October 22, 2020
    Inventors: Mai Long, Simon Niklaus, Jimei Yang
  • Publication number: 20190272451
    Abstract: The present disclosure includes methods and systems for searching for digital visual media based on semantic and spatial information. In particular, one or more embodiments of the disclosed systems and methods identify digital visual media displaying targeted visual content in a targeted region based on a query term and a query area provided via a digital canvas. Specifically, the disclosed systems and methods can receive user input of a query term and a query area and provide the query term and query area to a query neural network to generate a query feature set. Moreover, the disclosed systems and methods can compare the query feature set to digital visual media feature sets. Further, based on the comparison, the disclosed systems and methods can identify digital visual media portraying targeted visual content corresponding to the query term within a targeted region corresponding to the query area.
    Type: Application
    Filed: May 20, 2019
    Publication date: September 5, 2019
    Inventors: Zhe Lin, Mai Long, Jonathan Brandt, Hailin Jin, Chen Fang
  • Publication number: 20190236394
    Abstract: Systems and methods are disclosed for selecting target objects within digital images utilizing a multi-modal object selection neural network trained to accommodate multiple input modalities. In particular, in one or more embodiments, the disclosed systems and methods generate a trained neural network based on training digital images and training indicators corresponding to various input modalities. Moreover, one or more embodiments of the disclosed systems and methods utilize a trained neural network and iterative user inputs corresponding to different input modalities to select target objects in digital images. Specifically, the disclosed systems and methods can transform user inputs into distance maps that can be utilized in conjunction with color channels and a trained neural network to identify pixels that reflect the target object.
    Type: Application
    Filed: April 5, 2019
    Publication date: August 1, 2019
    Inventors: Brian Price, Scott Cohen, Mai Long, Jun Hao Liew
  • Patent number: 10346727
    Abstract: The present disclosure includes methods and systems for searching for digital visual media based on semantic and spatial information. In particular, one or more embodiments of the disclosed systems and methods identify digital visual media displaying targeted visual content in a targeted region based on a query term and a query area provided via a digital canvas. Specifically, the disclosed systems and methods can receive user input of a query term and a query area and provide the query term and query area to a query neural network to generate a query feature set. Moreover, the disclosed systems and methods can compare the query feature set to digital visual media feature sets. Further, based on the comparison, the disclosed systems and methods can identify digital visual media portraying targeted visual content corresponding to the query term within a targeted region corresponding to the query area.
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: July 9, 2019
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Mai Long, Jonathan Brandt, Hailin Jin, Chen Fang
  • Publication number: 20180121768
    Abstract: The present disclosure includes methods and systems for searching for digital visual media based on semantic and spatial information. In particular, one or more embodiments of the disclosed systems and methods identify digital visual media displaying targeted visual content in a targeted region based on a query term and a query area provided via a digital canvas. Specifically, the disclosed systems and methods can receive user input of a query term and a query area and provide the query term and query area to a query neural network to generate a query feature set. Moreover, the disclosed systems and methods can compare the query feature set to digital visual media feature sets. Further, based on the comparison, the disclosed systems and methods can identify digital visual media portraying targeted visual content corresponding to the query term within a targeted region corresponding to the query area.
    Type: Application
    Filed: February 10, 2017
    Publication date: May 3, 2018
    Inventors: Zhe Lin, Mai Long, Jonathan Brandt, Hailin Jin, Chen Fang