Patents by Inventor Seoungwug Oh

Seoungwug Oh has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240013809
    Abstract: Systems and methods for video processing are described. Embodiments of the present disclosure identify an image that depicts an expression of a face; encode the image to obtain a latent code representing the image; edit the latent code to obtain an edited latent code that represents the face with a target attribute that is different from an original attribute of the face and with an edited expression that is different from the expression of the face; modify the edited latent code to obtain a modified latent code that represents the face with the target attribute and a modified expression, wherein a difference between the expression and the modified expression is less than a difference between the expression and the edited expression; and generate a modified image based on the modified latent code, wherein the modified image depicts the face with the target attribute and the modified expression.
    Type: Application
    Filed: July 5, 2022
    Publication date: January 11, 2024
    Inventors: Seoungwug Oh, Joon-Young Lee, Jingwan Lu, Kwanggyoon Seo
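The edit-then-modify step this abstract describes amounts to moving a latent code along an attribute direction and then pulling the result back toward the original code so the expression change shrinks while the attribute edit survives. A minimal numpy sketch; the latent dimensionality, `attribute_direction`, and the blend factor are illustrative assumptions, not details from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent code for the input face and a hypothetical attribute
# direction (e.g. "add glasses"); neither value comes from the patent.
latent = rng.standard_normal(512)
attribute_direction = rng.standard_normal(512)

# Edit: apply the target attribute. This can also disturb the expression.
edited = latent + 1.5 * attribute_direction

# Modify: interpolate back toward the original latent so the expression
# difference is smaller than for the fully edited code.
beta = 0.6  # 0 = original code, 1 = fully edited code (assumed blend)
modified = latent + beta * (edited - latent)

# The modified code is strictly closer to the original than the edited one,
# matching the "less than" condition in the abstract.
assert np.linalg.norm(modified - latent) < np.linalg.norm(edited - latent)
```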
  • Publication number: 20230360177
    Abstract: In implementations of systems for joint trimap estimation and alpha matte prediction, a computing device implements a matting system to estimate a trimap for a frame of a digital video using a first stage of a machine learning model. An alpha matte is predicted for the frame based on the trimap and the frame using a second stage of the machine learning model. The matting system generates a refined trimap and a refined alpha matte for the frame based on the alpha matte, the trimap, and the frame using a third stage of the machine learning model. An additional trimap is estimated for an additional frame of the digital video based on the refined trimap and the refined alpha matte using the first stage of the machine learning model.
    Type: Application
    Filed: May 4, 2022
    Publication date: November 9, 2023
    Applicant: Adobe Inc.
    Inventors: Joon-Young Lee, Seoungwug Oh, Brian Lynn Price, Hongje Seong
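The three-stage loop described here can be sketched as a per-frame pipeline in which the refined outputs of frame t seed the trimap estimate for frame t+1. Each stage below is a toy stand-in (the patent's stages are learned networks); the thresholds and the trimap encoding (0 = background, 0.5 = unknown, 1 = foreground) are assumptions for illustration:

```python
import numpy as np

def alpha_to_trimap(alpha, lo=0.1, hi=0.9):
    """Stand-in for stage 1: derive a trimap from an alpha matte by
    thresholding, leaving an 'unknown' band between lo and hi."""
    trimap = np.full_like(alpha, 0.5)
    trimap[alpha < lo] = 0.0
    trimap[alpha > hi] = 1.0
    return trimap

def predict_alpha(frame, trimap):
    """Stand-in for stage 2: copy known regions from the trimap and fill the
    unknown band from the frame's intensity."""
    alpha = trimap.copy()
    unknown = trimap == 0.5
    alpha[unknown] = frame[unknown]
    return alpha

def refine(frame, trimap, alpha):
    """Stand-in for stage 3: jointly refine both the trimap and the alpha."""
    refined_alpha = np.clip(alpha, 0.0, 1.0)
    return alpha_to_trimap(refined_alpha), refined_alpha

# Per-frame loop mirroring the data flow in the abstract.
frames = np.random.default_rng(1).random((3, 8, 8))  # dummy grayscale video
trimap = alpha_to_trimap(frames[0])
for frame in frames:
    alpha = predict_alpha(frame, trimap)
    trimap, alpha = refine(frame, trimap, alpha)

assert set(np.unique(trimap)) <= {0.0, 0.5, 1.0}
```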
  • Publication number: 20230316536
    Abstract: Systems and methods for object tracking are described. One or more aspects of the systems and methods include receiving a video depicting an object; generating object tracking information for the object using a student network, wherein the student network is trained in a second training phase based on a teacher network using an object tracking training set and a knowledge distillation loss that is based on an output of the student network and the teacher network, and wherein the teacher network is trained in a first training phase using an object detection training set that is augmented with object tracking supervision data; and transmitting the object tracking information in response to receiving the video.
    Type: Application
    Filed: March 31, 2022
    Publication date: October 5, 2023
    Inventors: Joon-Young Lee, Seoungwug Oh, Sanghyun Woo, Kwanyong Park
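A knowledge distillation loss "based on an output of the student network and the teacher network" is commonly a temperature-softened divergence between the two output distributions. A generic numpy sketch of that standard form; the exact loss and temperature used in the application are not specified there, so everything below is an assumption:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = (logits - logits.max()) / temperature
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened outputs, a standard
    knowledge-distillation objective (assumed form, not the patent's)."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s))))

teacher = np.array([2.0, 0.5, -1.0])
student = np.array([1.0, 1.0, 0.0])

loss = distillation_loss(student, teacher)
assert loss > 0.0
# A student that matches the teacher exactly incurs zero loss.
assert abs(distillation_loss(teacher, teacher)) < 1e-12
```

Minimizing this term pushes the student's output distribution toward the teacher's, which is how the second training phase transfers what the teacher learned from the augmented detection set.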
  • Patent number: 11200424
    Abstract: Certain aspects involve using a space-time memory network to locate one or more target objects in video content for segmentation or other object classification. In one example, a video editor generates a query key map and a query value map by applying a space-time memory network to features of a query frame from video content. The video editor retrieves a memory key map and a memory value map that are computed, with the space-time memory network, from a set of memory frames from the video content. The video editor computes memory weights by applying a similarity function to the memory key map and the query key map. The video editor classifies content in the query frame as depicting the target feature using a weighted summation that includes the memory weights applied to memory locations in the memory value map.
    Type: Grant
    Filed: March 5, 2019
    Date of Patent: December 14, 2021
    Assignee: Adobe Inc.
    Inventors: Joon-Young Lee, Ning Xu, Seoungwug Oh
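The read operation this abstract walks through — a similarity function applied to the memory key map and the query key map, softmaxed into memory weights, then a weighted summation over the memory value map — can be sketched for a single query location. This is a simplified per-vector version; the patent operates on dense key/value maps over space and time, and all shapes below are illustrative:

```python
import numpy as np

def memory_read(query_key, memory_keys, memory_values):
    """query_key: (C,); memory_keys: (N, C); memory_values: (N, D).
    Returns the value read from memory for this query location."""
    similarity = memory_keys @ query_key          # dot-product similarity
    w = np.exp(similarity - similarity.max())
    weights = w / w.sum()                         # softmax memory weights
    return weights @ memory_values                # weighted summation

rng = np.random.default_rng(2)
memory_keys = rng.standard_normal((10, 16))    # 10 memory locations, key dim 16
memory_values = rng.standard_normal((10, 32))  # value dim 32
query_key = memory_keys[3] * 5.0               # query strongly matching location 3

read = memory_read(query_key, memory_keys, memory_values)
assert read.shape == (32,)
```

Because the weights are a softmax over similarities, locations whose keys match the query dominate the summation, which is what lets the query frame retrieve the target object's appearance from past frames.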
  • Patent number: 11176381
    Abstract: Various embodiments describe video object segmentation using a neural network and the training of the neural network. The neural network both detects a target object in the current frame based on a reference frame and a reference mask that define the target object and propagates the segmentation mask of the target object for a previous frame to the current frame to generate a segmentation mask for the current frame. In some embodiments, the neural network is pre-trained using synthetically generated static training images and is then fine-tuned using training videos.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: November 16, 2021
    Assignee: Adobe Inc.
    Inventors: Joon-Young Lee, Seoungwug Oh, Kalyan Krishna Sunkavalli
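The two roles the network plays — detecting the target from a reference frame and mask, and propagating the previous frame's mask forward — can be caricatured as two probability-like maps fused into the current frame's mask. In the patent the fusion happens inside a learned network; the averaging and thresholding below are assumptions for illustration only:

```python
import numpy as np

def segment_frame(detection_score, propagated_mask, threshold=0.5):
    """Fuse a reference-based detection score with a mask propagated from
    the previous frame; both are probability-like maps in [0, 1]."""
    fused = 0.5 * (detection_score + propagated_mask)
    return (fused > threshold).astype(np.uint8)

rng = np.random.default_rng(3)
frames = rng.random((4, 8, 8))             # dummy grayscale video
mask = (frames[0] > 0.5).astype(np.uint8)  # reference mask for frame 0

for frame in frames[1:]:
    detection_score = frame                # stand-in for the detection branch
    propagated_mask = mask.astype(float)   # stand-in for the propagation branch
    mask = segment_frame(detection_score, propagated_mask)

assert mask.shape == (8, 8)
assert set(np.unique(mask)) <= {0, 1}
```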
  • Patent number: 10810435
    Abstract: In implementations of segmenting objects in video sequences, user annotations designate an object in any image frame of a video sequence, without requiring user annotations for all image frames. An interaction network generates a mask for an object in an image frame annotated by a user, and is coupled both internally and externally to a propagation network that propagates the mask to other image frames of the video sequence. Feature maps are aggregated for each round of user annotations and couple the interaction network and the propagation network internally. The interaction network and the propagation network are trained jointly using synthetic annotations in a multi-round training scenario, in which weights of the interaction network and the propagation network are adjusted after multiple synthetic annotations are processed, resulting in a trained object segmentation system that can reliably generate realistic object masks.
    Type: Grant
    Filed: November 7, 2018
    Date of Patent: October 20, 2020
    Assignee: Adobe Inc.
    Inventors: Joon-Young Lee, Seoungwug Oh, Ning Xu
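The multi-round interaction described here — a user annotates one frame, the interaction network produces a mask, the propagation network spreads it to the other frames, and feature maps are aggregated across rounds to couple the two networks — can be sketched with stub functions. The stand-in networks, the feature aggregation rule, and the annotation format below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
frames = rng.random((5, 8, 8))  # dummy 5-frame video

def interaction_net(frame, annotation, feature_agg):
    """Stand-in interaction network: turns a user annotation on one frame
    into a mask plus a feature map (the real networks are learned)."""
    mask = (frame > 0.5).astype(float)
    feature = frame * annotation
    return mask, feature

def propagation_net(frames, source_idx, source_mask, feature_agg):
    """Stand-in propagation network: copies the source mask to all frames."""
    return np.repeat(source_mask[None], len(frames), axis=0)

feature_agg = np.zeros((8, 8))
masks = None
for round_idx in range(3):                   # three rounds of user annotation
    annotated_idx = round_idx % len(frames)  # stand-in for the user's choice
    annotation = rng.random((8, 8))          # stand-in user scribbles
    mask, feature = interaction_net(frames[annotated_idx], annotation, feature_agg)
    feature_agg = 0.5 * (feature_agg + feature)  # aggregate across rounds
    masks = propagation_net(frames, annotated_idx, mask, feature_agg)

assert masks.shape == (5, 8, 8)
```

Each round refines every frame's mask without the user having to annotate all frames, which is the workflow the abstract claims.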
  • Publication number: 20200250436
    Abstract: Various embodiments describe video object segmentation using a neural network and the training of the neural network. The neural network both detects a target object in the current frame based on a reference frame and a reference mask that define the target object and propagates the segmentation mask of the target object for a previous frame to the current frame to generate a segmentation mask for the current frame. In some embodiments, the neural network is pre-trained using synthetically generated static training images and is then fine-tuned using training videos.
    Type: Application
    Filed: April 23, 2020
    Publication date: August 6, 2020
    Inventors: Joon-Young Lee, Seoungwug Oh, Kalyan Krishna Sunkavalli
  • Patent number: 10671855
    Abstract: Various embodiments describe video object segmentation using a neural network and the training of the neural network. The neural network both detects a target object in the current frame based on a reference frame and a reference mask that define the target object and propagates the segmentation mask of the target object for a previous frame to the current frame to generate a segmentation mask for the current frame. In some embodiments, the neural network is pre-trained using synthetically generated static training images and is then fine-tuned using training videos.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: June 2, 2020
    Assignee: Adobe Inc.
    Inventors: Joon-Young Lee, Seoungwug Oh, Kalyan Krishna Sunkavalli
  • Publication number: 20200143171
    Abstract: In implementations of segmenting objects in video sequences, user annotations designate an object in any image frame of a video sequence, without requiring user annotations for all image frames. An interaction network generates a mask for an object in an image frame annotated by a user, and is coupled both internally and externally to a propagation network that propagates the mask to other image frames of the video sequence. Feature maps are aggregated for each round of user annotations and couple the interaction network and the propagation network internally. The interaction network and the propagation network are trained jointly using synthetic annotations in a multi-round training scenario, in which weights of the interaction network and the propagation network are adjusted after multiple synthetic annotations are processed, resulting in a trained object segmentation system that can reliably generate realistic object masks.
    Type: Application
    Filed: November 7, 2018
    Publication date: May 7, 2020
    Applicant: Adobe Inc.
    Inventors: Joon-Young Lee, Seoungwug Oh, Ning Xu
  • Publication number: 20200117906
    Abstract: Certain aspects involve using a space-time memory network to locate one or more target objects in video content for segmentation or other object classification. In one example, a video editor generates a query key map and a query value map by applying a space-time memory network to features of a query frame from video content. The video editor retrieves a memory key map and a memory value map that are computed, with the space-time memory network, from a set of memory frames from the video content. The video editor computes memory weights by applying a similarity function to the memory key map and the query key map. The video editor classifies content in the query frame as depicting the target feature using a weighted summation that includes the memory weights applied to memory locations in the memory value map.
    Type: Application
    Filed: March 5, 2019
    Publication date: April 16, 2020
    Inventors: Joon-Young Lee, Ning Xu, Seoungwug Oh
  • Publication number: 20190311202
    Abstract: Various embodiments describe video object segmentation using a neural network and the training of the neural network. The neural network both detects a target object in the current frame based on a reference frame and a reference mask that define the target object and propagates the segmentation mask of the target object for a previous frame to the current frame to generate a segmentation mask for the current frame. In some embodiments, the neural network is pre-trained using synthetically generated static training images and is then fine-tuned using training videos.
    Type: Application
    Filed: April 10, 2018
    Publication date: October 10, 2019
    Inventors: Joon-Young Lee, Seoungwug Oh, Kalyan Krishna Sunkavalli