Patents by Inventor Jun Hao Liew

Jun Hao Liew has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12340565
    Abstract: Embodiments of the present disclosure relate to validation of unsupervised adaptive models. According to example embodiments of the present disclosure, unlike methods that validate on the seen target data, the present disclosure synthesizes new samples by mixing the target samples and pseudo labels. The accuracy between model predictions on the mixed samples and the mixed labels is measured for model selection, and the accuracy score may be called PseudoMix. PseudoMix enjoys the combined inductive bias of previous methods. Experiments demonstrate that PseudoMix maintains state-of-the-art performance across different validation settings.
    Type: Grant
    Filed: November 28, 2022
    Date of Patent: June 24, 2025
    Assignee: LEMON INC.
    Inventors: Song Bai, Dapeng Hu, Jun Hao Liew, Chuhui Xue
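The mixing-based validation described in the abstract above can be sketched in a few lines. This is an illustrative reconstruction from the abstract, not the patented implementation: `model` is a hypothetical classifier returning per-class scores, and the mixup-style interpolation of samples and one-hot pseudo labels is an assumption about how the mixing is performed.

```python
import numpy as np

def pseudomix_score(model, targets, pseudo_labels, lam=0.5, rng=None):
    """Validation score for an unsupervised adaptive model (a sketch of the
    PseudoMix idea): mix pairs of target samples and their pseudo labels,
    then measure how often model predictions on the mixed samples agree
    with the argmax of the mixed labels."""
    rng = np.random.default_rng(rng)
    perm = rng.permutation(len(targets))          # random pairing of samples
    mixed_x = lam * targets + (1 - lam) * targets[perm]
    mixed_y = lam * pseudo_labels + (1 - lam) * pseudo_labels[perm]
    preds = model(mixed_x)                        # (N, C) class scores
    return float(np.mean(preds.argmax(1) == mixed_y.argmax(1)))
```

In use, the checkpoint with the highest PseudoMix score would be selected, since no labeled target data is available for conventional validation.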
  • Patent number: 12333431
    Abstract: Generating a multi-dimensional video using a multi-dimensional video generative model for applications including, but not limited to, static portrait animation, video reconstruction, and motion editing. The method includes providing data to the multi-dimensionally aware generator of the multi-dimensional video generative model and generating the multi-dimensional video from the data with the multi-dimensionally aware generator.
    Type: Grant
    Filed: December 9, 2022
    Date of Patent: June 17, 2025
    Assignee: Lemon Inc.
    Inventors: Song Bai, Zhongcong Xu, Jiashi Feng, Jun Hao Liew, Wenqing Zhang
  • Publication number: 20250173838
    Abstract: A computing system is described herein that implements a diffusion-based framework for animating reference images. The computing system includes a video diffusion model that is utilized to encode temporal information. The computing system further includes a novel appearance encoder that is utilized to retain the intricate details of the reference image and maintain appearance coherence across frames. The computing system further employs a video fusion technique to smooth transitions between animated segments in long video animation. Potential benefits of the computing system include enhanced temporal consistency, faithful preservation of reference images, and improved animation fidelity in the generated animation sequences.
    Type: Application
    Filed: October 22, 2024
    Publication date: May 29, 2025
    Inventors: Zhongcong Xu, Jianfeng Zhang, Jun Hao Liew, Hanshu Yan, Chenxu Zhang, Jiashi Feng
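The video fusion step mentioned in the abstract above, smoothing transitions between animated segments in long videos, is commonly realized as a cross-fade over frames shared by consecutive segments. The sketch below illustrates that general technique; the patented fusion method may differ, and the linear ramp weights are an assumption.

```python
import numpy as np

def fuse_segments(seg_a, seg_b, overlap):
    """Cross-fade two consecutively generated animation segments that share
    `overlap` frames, yielding one continuous clip with a smooth transition.
    seg_a, seg_b: frame arrays of shape (T, H, W, C)."""
    # Ramp weights from 0 to 1 across the overlapping frames.
    w = np.linspace(0.0, 1.0, overlap)[:, None, None, None]
    blended = (1 - w) * seg_a[-overlap:] + w * seg_b[:overlap]
    return np.concatenate([seg_a[:-overlap], blended, seg_b[overlap:]], axis=0)
```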
  • Patent number: 12254633
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for training and utilizing scale-diverse segmentation neural networks to analyze digital images at different scales and identify different target objects portrayed in the digital images. For example, in one or more embodiments, the disclosed systems analyze a digital image and corresponding user indicators (e.g., foreground indicators, background indicators, edge indicators, boundary region indicators, and/or voice indicators) at different scales utilizing a scale-diverse segmentation neural network. In particular, the disclosed systems can utilize the scale-diverse segmentation neural network to generate a plurality of semantically meaningful object segmentation outputs. Furthermore, the disclosed systems can provide the plurality of object segmentation outputs for display and selection to improve the efficiency and accuracy of identifying target objects and modifying the digital image.
    Type: Grant
    Filed: March 18, 2022
    Date of Patent: March 18, 2025
    Assignee: Adobe Inc.
    Inventors: Scott Cohen, Long Mai, Jun Hao Liew, Brian Price
  • Publication number: 20250014233
    Abstract: Methods of customizing generation of objects using diffusion models are provided. One or more parameters (e.g., a conditioning signal, network weights, or an initial or starting noise) of the diffusion model can be optimized by a backpropagation process, which can be performed by solving an augmented adjoint ordinary differential equation (ODE) based on an adjoint sensitivity method. The customized diffusion model can generate stylized objects, generate objects with specific visual effect(s), and provide adversary examples to audit security of an object generation system.
    Type: Application
    Filed: July 5, 2023
    Publication date: January 9, 2025
    Inventors: Jiachun Pan, Hanshu Yan, Jiashi Feng, Jun Hao Liew
  • Publication number: 20240193412
    Abstract: Generating a multi-dimensional video using a multi-dimensional video generative model for applications including, but not limited to, static portrait animation, video reconstruction, and motion editing. The method includes providing data to the multi-dimensionally aware generator of the multi-dimensional video generative model and generating the multi-dimensional video from the data with the multi-dimensionally aware generator.
    Type: Application
    Filed: December 9, 2022
    Publication date: June 13, 2024
    Inventors: Song Bai, Zhongcong Xu, Jiashi Feng, Jun Hao Liew, Wenqing Zhang
  • Publication number: 20240177460
    Abstract: Embodiments of the present disclosure relate to validation of unsupervised adaptive models. According to example embodiments of the present disclosure, unlike methods that validate on the seen target data, the present disclosure synthesizes new samples by mixing the target samples and pseudo labels. The accuracy between model predictions on the mixed samples and the mixed labels is measured for model selection, and the accuracy score may be called PseudoMix. PseudoMix enjoys the combined inductive bias of previous methods. Experiments demonstrate that PseudoMix maintains state-of-the-art performance across different validation settings.
    Type: Application
    Filed: November 28, 2022
    Publication date: May 30, 2024
    Inventors: Song Bai, Dapeng Hu, Jun Hao Liew, Chuhui Xue
  • Publication number: 20240144544
    Abstract: Generating an object using a diffusion model includes obtaining a first input and a second input, and synthesizing an output object from the first input and the second input. The synthesizing of the output object includes generating a layout of the output object from the first input, injecting the second input as a content conditioner to the layout of the output object, and de-noising the layout of the output object injected with the content conditioner to generate a content of the output object.
    Type: Application
    Filed: October 27, 2022
    Publication date: May 2, 2024
    Inventors: Jun Hao Liew, Hanshu Yan, Daquan Zhou, Jiashi Feng
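The two-stage synthesis in the abstract above, first generating a layout from one input and then injecting a second input as the content conditioner during de-noising, can be illustrated with a toy numerical sketch. Real diffusion models condition a learned denoiser network; here the "denoiser" is a simple interpolation toward a reference, and the step count, switch point, and step size are all illustrative assumptions, not the patented method.

```python
import numpy as np

def synthesize(layout_ref, content_ref, steps=50, switch=0.5, rng=0):
    """Toy sketch of the two-stage idea: early de-noising steps pull a noisy
    sample toward the layout reference (coarse structure), while later steps
    inject the content reference as the conditioner (fine detail)."""
    x = np.random.default_rng(rng).normal(size=layout_ref.shape)  # initial noise
    for t in range(steps):
        # Switch the conditioning target partway through de-noising.
        target = layout_ref if t < switch * steps else content_ref
        x = x + 0.2 * (target - x)   # move a fraction of the way to the target
    return x
```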
  • Publication number: 20230177824
    Abstract: Systems and methods are disclosed for selecting target objects within digital images utilizing a multi-modal object selection neural network trained to accommodate multiple input modalities. In particular, in one or more embodiments, the disclosed systems and methods generate a trained neural network based on training digital images and training indicators corresponding to various input modalities. Moreover, one or more embodiments of the disclosed systems and methods utilize a trained neural network and iterative user inputs corresponding to different input modalities to select target objects in digital images. Specifically, the disclosed systems and methods can transform user inputs into distance maps that can be utilized in conjunction with color channels and a trained neural network to identify pixels that reflect the target object.
    Type: Application
    Filed: January 30, 2023
    Publication date: June 8, 2023
    Inventors: Brian Price, Scott Cohen, Mai Long, Jun Hao Liew
  • Patent number: 11568627
    Abstract: Systems and methods are disclosed for selecting target objects within digital images utilizing a multi-modal object selection neural network trained to accommodate multiple input modalities. In particular, in one or more embodiments, the disclosed systems and methods generate a trained neural network based on training digital images and training indicators corresponding to various input modalities. Moreover, one or more embodiments of the disclosed systems and methods utilize a trained neural network and iterative user inputs corresponding to different input modalities to select target objects in digital images. Specifically, the disclosed systems and methods can transform user inputs into distance maps that can be utilized in conjunction with color channels and a trained neural network to identify pixels that reflect the target object.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: January 31, 2023
    Assignee: Adobe Inc.
    Inventors: Brian Price, Scott Cohen, Mai Long, Jun Hao Liew
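The transformation of user inputs into distance maps described in the two abstracts above is a standard move in interactive segmentation: each sparse click becomes a dense channel holding the per-pixel distance to the nearest click, stacked with the color channels as network input. The sketch below illustrates that general technique; the truncation value and 5-channel layout are common conventions, not details confirmed by the patent.

```python
import numpy as np

def click_distance_map(shape, clicks, truncate=255.0):
    """Per-pixel Euclidean distance to the nearest user click, truncated so
    far-away pixels do not dominate. shape: (H, W); clicks: [(row, col), ...]."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    dists = [np.sqrt((ys - r) ** 2 + (xs - c) ** 2) for r, c in clicks]
    return np.minimum(np.min(dists, axis=0), truncate)

def network_input(rgb, pos_clicks, neg_clicks):
    """Stack the RGB channels with positive- and negative-click distance
    maps, giving a 5-channel tensor for the segmentation network."""
    h, w, _ = rgb.shape
    pos = click_distance_map((h, w), pos_clicks)
    neg = click_distance_map((h, w), neg_clicks)
    return np.dstack([rgb, pos, neg])
```

Iterative refinement then amounts to recomputing these maps as the user adds clicks and running the network again.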
  • Publication number: 20220207745
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for training and utilizing scale-diverse segmentation neural networks to analyze digital images at different scales and identify different target objects portrayed in the digital images. For example, in one or more embodiments, the disclosed systems analyze a digital image and corresponding user indicators (e.g., foreground indicators, background indicators, edge indicators, boundary region indicators, and/or voice indicators) at different scales utilizing a scale-diverse segmentation neural network. In particular, the disclosed systems can utilize the scale-diverse segmentation neural network to generate a plurality of semantically meaningful object segmentation outputs. Furthermore, the disclosed systems can provide the plurality of object segmentation outputs for display and selection to improve the efficiency and accuracy of identifying target objects and modifying the digital image.
    Type: Application
    Filed: March 18, 2022
    Publication date: June 30, 2022
    Inventors: Scott Cohen, Long Mai, Jun Hao Liew, Brian Price
  • Patent number: 11282208
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for training and utilizing scale-diverse segmentation neural networks to analyze digital images at different scales and identify different target objects portrayed in the digital images. For example, in one or more embodiments, the disclosed systems analyze a digital image and corresponding user indicators (e.g., foreground indicators, background indicators, edge indicators, boundary region indicators, and/or voice indicators) at different scales utilizing a scale-diverse segmentation neural network. In particular, the disclosed systems can utilize the scale-diverse segmentation neural network to generate a plurality of semantically meaningful object segmentation outputs. Furthermore, the disclosed systems can provide the plurality of object segmentation outputs for display and selection to improve the efficiency and accuracy of identifying target objects and modifying the digital image.
    Type: Grant
    Filed: December 24, 2018
    Date of Patent: March 22, 2022
    Assignee: Adobe Inc.
    Inventors: Scott Cohen, Long Mai, Jun Hao Liew, Brian Price
  • Publication number: 20200202533
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for training and utilizing scale-diverse segmentation neural networks to analyze digital images at different scales and identify different target objects portrayed in the digital images. For example, in one or more embodiments, the disclosed systems analyze a digital image and corresponding user indicators (e.g., foreground indicators, background indicators, edge indicators, boundary region indicators, and/or voice indicators) at different scales utilizing a scale-diverse segmentation neural network. In particular, the disclosed systems can utilize the scale-diverse segmentation neural network to generate a plurality of semantically meaningful object segmentation outputs. Furthermore, the disclosed systems can provide the plurality of object segmentation outputs for display and selection to improve the efficiency and accuracy of identifying target objects and modifying the digital image.
    Type: Application
    Filed: December 24, 2018
    Publication date: June 25, 2020
    Inventors: Scott Cohen, Long Mai, Jun Hao Liew, Brian Price
  • Publication number: 20190236394
    Abstract: Systems and methods are disclosed for selecting target objects within digital images utilizing a multi-modal object selection neural network trained to accommodate multiple input modalities. In particular, in one or more embodiments, the disclosed systems and methods generate a trained neural network based on training digital images and training indicators corresponding to various input modalities. Moreover, one or more embodiments of the disclosed systems and methods utilize a trained neural network and iterative user inputs corresponding to different input modalities to select target objects in digital images. Specifically, the disclosed systems and methods can transform user inputs into distance maps that can be utilized in conjunction with color channels and a trained neural network to identify pixels that reflect the target object.
    Type: Application
    Filed: April 5, 2019
    Publication date: August 1, 2019
    Inventors: Brian Price, Scott Cohen, Mai Long, Jun Hao Liew