Patents by Inventor Jun Hao Liew
Jun Hao Liew has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12340565
Abstract: Embodiments of the present disclosure relate to validation of unsupervised adaptive models. According to example embodiments of the present disclosure, unlike methods that validate with seen target data, the present disclosure synthesizes new samples by mixing target samples and their pseudo labels. The accuracy between model predictions on the mixed samples and the mixed labels is measured for model selection; this accuracy score may be called PseudoMix. PseudoMix enjoys the combined inductive biases of previous methods. Experiments demonstrate that PseudoMix maintains state-of-the-art performance across different validation settings.
Type: Grant
Filed: November 28, 2022
Date of Patent: June 24, 2025
Assignee: Lemon Inc.
Inventors: Song Bai, Dapeng Hu, Jun Hao Liew, Chuhui Xue
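The validation scheme this abstract describes resembles mixup applied at model-selection time. The sketch below is an illustrative toy, not the patented method: the function name, the fixed mixing coefficient, and the use of a dominant mixed label for scoring are all assumptions. It mixes random pairs of unlabeled target samples, mixes their one-hot pseudo labels the same way, and scores how often the model's prediction on a mixed sample agrees with the dominant mixed label:

```python
import numpy as np

def pseudo_mix_score(model, X, rng=None, lam=0.7):
    """Hypothetical sketch of a PseudoMix-style validation score.

    Mixes pairs of unlabeled target samples and their pseudo labels, then
    measures how often the model's prediction on a mixed sample agrees
    with the dominant mixed label. lam > 0.5 avoids argmax ties.
    """
    rng = np.random.default_rng(rng)
    logits = model(X)
    pseudo = logits.argmax(axis=1)              # pseudo labels from the model itself
    n_cls = logits.shape[1]
    perm = rng.permutation(len(X))              # random pairing of target samples
    X_mix = lam * X + (1 - lam) * X[perm]       # mixed target samples
    # mix the one-hot pseudo labels the same way; argmax gives the dominant label
    y_mix = lam * np.eye(n_cls)[pseudo] + (1 - lam) * np.eye(n_cls)[pseudo[perm]]
    pred = model(X_mix).argmax(axis=1)
    return float((pred == y_mix.argmax(axis=1)).mean())
```

A model that is stable under such mixing (predictions on mixed samples agree with mixed labels) scores high, which is the kind of inductive bias the abstract attributes to the score.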
-
Patent number: 12333431
Abstract: Generating a multi-dimensional video using a multi-dimensional video generative model for applications including, but not limited to, at least one of static portrait animation, video reconstruction, or motion editing. The method includes providing data to the multi-dimensionally aware generator of the multi-dimensional video generative model and generating the multi-dimensional video from the data with the multi-dimensionally aware generator.
Type: Grant
Filed: December 9, 2022
Date of Patent: June 17, 2025
Assignee: Lemon Inc.
Inventors: Song Bai, Zhongcong Xu, Jiashi Feng, Jun Hao Liew, Wenqing Zhang
-
Publication number: 20250173838
Abstract: A computing system is described herein that implements a diffusion-based framework for animating reference images. The computing system includes a video diffusion model that is utilized to encode temporal information. The computing system further includes a novel appearance encoder that is utilized to retain the intricate details of the reference image and maintain appearance coherence across frames. The computing system further employs a video fusion technique to smooth transitions between animated segments in long video animation. Potential benefits of the computing system include enhanced temporal consistency, faithful preservation of reference images, and improved animation fidelity in the generated animation sequences.
Type: Application
Filed: October 22, 2024
Publication date: May 29, 2025
Inventors: Zhongcong Xu, Jianfeng Zhang, Jun Hao Liew, Hanshu Yan, Chenxu Zhang, Jiashi Feng
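The "video fusion" step above smooths transitions between independently animated segments of a long video. One simple way to realize such a step, offered here only as an illustrative stand-in for whatever the application actually claims, is to generate segments with a few overlapping frames and cross-fade within the overlap:

```python
import numpy as np

def fuse_segments(seg_a, seg_b, overlap):
    """Cross-fade two animation segments (arrays of frames) that share
    `overlap` frames: seg_a's last `overlap` frames blend linearly into
    seg_b's first `overlap` frames. Function name and linear weighting
    are illustrative choices, not taken from the application."""
    w = np.linspace(0.0, 1.0, overlap)[:, None, None]  # per-frame blend weights
    blended = (1 - w) * seg_a[-overlap:] + w * seg_b[:overlap]
    return np.concatenate([seg_a[:-overlap], blended, seg_b[overlap:]], axis=0)
```

Overlap-and-blend schemes like this trade a little extra generation cost (the duplicated frames) for the removal of hard cuts at segment boundaries.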
-
Patent number: 12254633
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for training and utilizing scale-diverse segmentation neural networks to analyze digital images at different scales and identify different target objects portrayed in the digital images. For example, in one or more embodiments, the disclosed systems analyze a digital image and corresponding user indicators (e.g., foreground indicators, background indicators, edge indicators, boundary region indicators, and/or voice indicators) at different scales utilizing a scale-diverse segmentation neural network. In particular, the disclosed systems can utilize the scale-diverse segmentation neural network to generate a plurality of semantically meaningful object segmentation outputs. Furthermore, the disclosed systems can provide the plurality of object segmentation outputs for display and selection to improve the efficiency and accuracy of identifying target objects and modifying the digital image.
Type: Grant
Filed: March 18, 2022
Date of Patent: March 18, 2025
Assignee: Adobe Inc.
Inventors: Scott Cohen, Long Mai, Jun Hao Liew, Brian Price
-
Publication number: 20250014233
Abstract: Methods of customizing generation of objects using diffusion models are provided. One or more parameters (e.g., a conditioning signal, network weights, or an initial or starting noise) of the diffusion model can be optimized by a backpropagation process, which can be performed by solving an augmented adjoint ordinary differential equation (ODE) based on an adjoint sensitivity method. The customized diffusion model can generate stylized objects, generate objects with specific visual effect(s), and provide adversarial examples to audit security of an object generation system.
Type: Application
Filed: July 5, 2023
Publication date: January 9, 2025
Inventors: Jiachun Pan, Hanshu Yan, Jiashi Feng, Jun Hao Liew
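The core mechanism here, the adjoint sensitivity method, obtains the gradient of a loss on an ODE's endpoint with respect to its initial state by solving a second ODE backward in time. The toy below uses linear dynamics purely for illustration (the application concerns the probability-flow ODE of a diffusion model, and its augmented adjoint also carries parameter gradients, which this sketch omits); for a fixed Euler discretization the discrete adjoint reproduces the true gradient of the discretized system:

```python
import numpy as np

def euler_forward(A, x0, h, steps):
    """Integrate dx/dt = A x with explicit Euler; return the endpoint x(T)."""
    x = x0.copy()
    for _ in range(steps):
        x = x + h * (A @ x)
    return x

def adjoint_grad(A, h, steps, dL_dxT):
    """Discrete adjoint of the Euler solver above: integrate da/dt = -A^T a
    backward in time (equivalently, apply (I + h A)^T once per forward step)
    to map the loss gradient at the endpoint to the gradient at x0."""
    a = dL_dxT.copy()
    for _ in range(steps):
        a = a + h * (A.T @ a)
    return a
```

Optimizing the starting noise of a sampler then reduces to repeating this forward/backward pair inside a gradient-descent loop on x0.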
-
Publication number: 20240193412
Abstract: Generating a multi-dimensional video using a multi-dimensional video generative model for applications including, but not limited to, at least one of static portrait animation, video reconstruction, or motion editing. The method includes providing data to the multi-dimensionally aware generator of the multi-dimensional video generative model and generating the multi-dimensional video from the data with the multi-dimensionally aware generator.
Type: Application
Filed: December 9, 2022
Publication date: June 13, 2024
Inventors: Song Bai, Zhongcong Xu, Jiashi Feng, Jun Hao Liew, Wenqing Zhang
-
Publication number: 20240177460
Abstract: Embodiments of the present disclosure relate to validation of unsupervised adaptive models. According to example embodiments of the present disclosure, unlike methods that validate with seen target data, the present disclosure synthesizes new samples by mixing target samples and their pseudo labels. The accuracy between model predictions on the mixed samples and the mixed labels is measured for model selection; this accuracy score may be called PseudoMix. PseudoMix enjoys the combined inductive biases of previous methods. Experiments demonstrate that PseudoMix maintains state-of-the-art performance across different validation settings.
Type: Application
Filed: November 28, 2022
Publication date: May 30, 2024
Inventors: Song Bai, Dapeng Hu, Jun Hao Liew, Chuhui Xue
-
Publication number: 20240144544
Abstract: Generating an object using a diffusion model includes obtaining a first input and a second input, and synthesizing an output object from the first input and the second input. The synthesizing of the output object includes generating a layout of the output object from the first input, injecting the second input as a content conditioner to the layout of the output object, and de-noising the layout of the output object injected with the content conditioner to generate a content of the output object.
Type: Application
Filed: October 27, 2022
Publication date: May 2, 2024
Inventors: Jun Hao Liew, Hanshu Yan, Daquan Zhou, Jiashi Feng
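The abstract describes a layout-then-content pipeline: a layout comes from one input, a content conditioner is injected into it, and de-noising produces the final content. The fragment below is a deliberately crude numpy caricature of that flow (no real diffusion model; the function name, update rule, and scalar "content" are all invented for illustration): starting from noise, each step injects the content value inside the layout mask and de-noises the region outside it toward a blank background:

```python
import numpy as np

def synthesize(layout_mask, content_value, steps=200, strength=0.1, seed=0):
    """Toy illustration only: iterative refinement of a noisy canvas in which
    a layout mask gates where the content conditioner is injected and where
    background de-noising applies."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(layout_mask.shape)          # start from pure noise
    for _ in range(steps):
        x = x + strength * layout_mask * (content_value - x)   # content injection
        x = x + strength * (1 - layout_mask) * (0.0 - x)       # background de-noising
    return x
```

The point of the caricature is the gating: the same iterative de-noising loop behaves differently inside and outside the layout because the conditioner is injected only where the layout says the object should be.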
-
Publication number: 20230177824
Abstract: Systems and methods are disclosed for selecting target objects within digital images utilizing a multi-modal object selection neural network trained to accommodate multiple input modalities. In particular, in one or more embodiments, the disclosed systems and methods generate a trained neural network based on training digital images and training indicators corresponding to various input modalities. Moreover, one or more embodiments of the disclosed systems and methods utilize a trained neural network and iterative user inputs corresponding to different input modalities to select target objects in digital images. Specifically, the disclosed systems and methods can transform user inputs into distance maps that can be utilized in conjunction with color channels and a trained neural network to identify pixels that reflect the target object.
Type: Application
Filed: January 30, 2023
Publication date: June 8, 2023
Inventors: Brian Price, Scott Cohen, Mai Long, Jun Hao Liew
-
Patent number: 11568627
Abstract: Systems and methods are disclosed for selecting target objects within digital images utilizing a multi-modal object selection neural network trained to accommodate multiple input modalities. In particular, in one or more embodiments, the disclosed systems and methods generate a trained neural network based on training digital images and training indicators corresponding to various input modalities. Moreover, one or more embodiments of the disclosed systems and methods utilize a trained neural network and iterative user inputs corresponding to different input modalities to select target objects in digital images. Specifically, the disclosed systems and methods can transform user inputs into distance maps that can be utilized in conjunction with color channels and a trained neural network to identify pixels that reflect the target object.
Type: Grant
Filed: April 5, 2019
Date of Patent: January 31, 2023
Assignee: Adobe Inc.
Inventors: Brian Price, Scott Cohen, Mai Long, Jun Hao Liew
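Transforming user inputs into distance maps, as the abstract describes, is a standard encoding in interactive segmentation: each pixel stores its (often truncated) distance to the nearest user click, and the resulting map is stacked with the RGB channels as extra network input. A minimal sketch, with function name and truncation value chosen for illustration:

```python
import numpy as np

def click_distance_map(shape, clicks, truncate=255.0):
    """Truncated Euclidean distance from every pixel to the nearest click.
    `clicks` is a list of (row, col) coordinates; name and defaults are
    illustrative, not taken from the patent."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    dmap = np.full(shape, truncate, dtype=np.float64)
    for (cy, cx) in clicks:
        d = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
        dmap = np.minimum(dmap, d)
    return dmap

# Positive and negative clicks typically get separate maps; the network input
# is then the RGB image concatenated with both maps along the channel axis:
#   net_input = np.concatenate([rgb, pos_map[..., None], neg_map[..., None]], axis=-1)
```

The same encoding extends to the other indicator types the abstract mentions (edges, boundary regions) by rasterizing them before taking distances.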
-
Publication number: 20220207745
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for training and utilizing scale-diverse segmentation neural networks to analyze digital images at different scales and identify different target objects portrayed in the digital images. For example, in one or more embodiments, the disclosed systems analyze a digital image and corresponding user indicators (e.g., foreground indicators, background indicators, edge indicators, boundary region indicators, and/or voice indicators) at different scales utilizing a scale-diverse segmentation neural network. In particular, the disclosed systems can utilize the scale-diverse segmentation neural network to generate a plurality of semantically meaningful object segmentation outputs. Furthermore, the disclosed systems can provide the plurality of object segmentation outputs for display and selection to improve the efficiency and accuracy of identifying target objects and modifying the digital image.
Type: Application
Filed: March 18, 2022
Publication date: June 30, 2022
Inventors: Scott Cohen, Long Mai, Jun Hao Liew, Brian Price
-
Patent number: 11282208
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for training and utilizing scale-diverse segmentation neural networks to analyze digital images at different scales and identify different target objects portrayed in the digital images. For example, in one or more embodiments, the disclosed systems analyze a digital image and corresponding user indicators (e.g., foreground indicators, background indicators, edge indicators, boundary region indicators, and/or voice indicators) at different scales utilizing a scale-diverse segmentation neural network. In particular, the disclosed systems can utilize the scale-diverse segmentation neural network to generate a plurality of semantically meaningful object segmentation outputs. Furthermore, the disclosed systems can provide the plurality of object segmentation outputs for display and selection to improve the efficiency and accuracy of identifying target objects and modifying the digital image.
Type: Grant
Filed: December 24, 2018
Date of Patent: March 22, 2022
Assignee: Adobe Inc.
Inventors: Scott Cohen, Long Mai, Jun Hao Liew, Brian Price
-
Publication number: 20200202533
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for training and utilizing scale-diverse segmentation neural networks to analyze digital images at different scales and identify different target objects portrayed in the digital images. For example, in one or more embodiments, the disclosed systems analyze a digital image and corresponding user indicators (e.g., foreground indicators, background indicators, edge indicators, boundary region indicators, and/or voice indicators) at different scales utilizing a scale-diverse segmentation neural network. In particular, the disclosed systems can utilize the scale-diverse segmentation neural network to generate a plurality of semantically meaningful object segmentation outputs. Furthermore, the disclosed systems can provide the plurality of object segmentation outputs for display and selection to improve the efficiency and accuracy of identifying target objects and modifying the digital image.
Type: Application
Filed: December 24, 2018
Publication date: June 25, 2020
Inventors: Scott Cohen, Long Mai, Jun Hao Liew, Brian Price
-
Publication number: 20190236394
Abstract: Systems and methods are disclosed for selecting target objects within digital images utilizing a multi-modal object selection neural network trained to accommodate multiple input modalities. In particular, in one or more embodiments, the disclosed systems and methods generate a trained neural network based on training digital images and training indicators corresponding to various input modalities. Moreover, one or more embodiments of the disclosed systems and methods utilize a trained neural network and iterative user inputs corresponding to different input modalities to select target objects in digital images. Specifically, the disclosed systems and methods can transform user inputs into distance maps that can be utilized in conjunction with color channels and a trained neural network to identify pixels that reflect the target object.
Type: Application
Filed: April 5, 2019
Publication date: August 1, 2019
Inventors: Brian Price, Scott Cohen, Mai Long, Jun Hao Liew