Patents by Inventor Wentian Zhao

Wentian Zhao has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230274400
    Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for accurately and efficiently removing objects from digital images taken from a camera viewfinder stream. For example, the disclosed systems access digital images from a camera viewfinder stream in connection with an undesired moving object depicted in the digital images. The disclosed systems generate a temporal window of the digital images concatenated with binary masks indicating the undesired moving object in each digital image. The disclosed systems further utilize a generator as part of a 3D to 2D generative adversarial neural network in connection with the temporal window to generate a target digital image with the region associated with the undesired moving object in-painted. In at least one embodiment, the disclosed systems provide the target digital image to a camera viewfinder display to show a user how a future digital photograph will look without the undesired moving object.
    Type: Application
    Filed: April 10, 2023
    Publication date: August 31, 2023
    Inventors: Sheng-Wei Huang, Wentian Zhao, Kun Wan, Zichuan Liu, Xin Lu, Jen-Chan Jeff Chien
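    A minimal, hypothetical sketch of the idea described in this entry (publication 20230274400): a temporal window of viewfinder frames, each concatenated with a binary mask of the unwanted moving object, is fed to a 3D-to-2D generator that outputs a single in-painted target frame. This is not the patented implementation; the module names, layer sizes, and window length below are illustrative assumptions (PyTorch).

    ```python
    # Illustrative sketch only -- not the patented 3D-to-2D GAN.
    # All hyperparameters (channel counts, window length) are assumptions.
    import torch
    import torch.nn as nn

    class TemporalToImageGenerator(nn.Module):
        """Encodes a (frames + masks) temporal window with 3D convolutions and
        decodes a single in-painted 2D frame."""
        def __init__(self):
            super().__init__()
            # Input channels per frame: 3 RGB channels + 1 binary mask channel.
            self.encoder = nn.Sequential(
                nn.Conv3d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            )
            # Collapse the temporal axis so the decoder works on a 2D feature map.
            self.collapse = nn.AdaptiveAvgPool3d((1, None, None))
            self.decoder = nn.Sequential(
                nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Sigmoid(),
            )

        def forward(self, frames: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
            # frames: (B, 3, T, H, W); masks: (B, 1, T, H, W), 1 marks the unwanted object.
            x = torch.cat([frames, masks], dim=1)       # (B, 4, T, H, W)
            feats = self.encoder(x)                     # (B, 64, T, H, W)
            feats_2d = self.collapse(feats).squeeze(2)  # (B, 64, H, W)
            return self.decoder(feats_2d)               # (B, 3, H, W) in-painted frame

    if __name__ == "__main__":
        generator = TemporalToImageGenerator()
        frames = torch.rand(1, 3, 5, 128, 128)                 # 5-frame viewfinder window
        masks = (torch.rand(1, 1, 5, 128, 128) > 0.9).float()  # binary object masks
        target = generator(frames, masks)
        print(target.shape)                                    # torch.Size([1, 3, 128, 128])
    ```

    In the described system, the generator is trained adversarially and the resulting target image is shown on the camera viewfinder as a preview of how the future photograph will look without the moving object.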
  • Publication number: 20230262189
    Abstract: Methods, systems, and non-transitory computer readable media are disclosed for generating artistic images by applying an artistic-effect to one or more frames of a video stream or digital images. In one or more embodiments, the disclosed system captures a video stream utilizing a camera of a computing device. The disclosed system deploys a distilled artistic-effect neural network on the computing device to generate an artistic version of the captured video stream at a first resolution in real time. The disclosed system can provide the artistic video stream for display via the computing device. Based on an indication of a capture event, the disclosed system utilizes the distilled artistic-effect neural network to generate an artistic image at a higher resolution than the artistic video stream. Furthermore, the disclosed system tunes and utilizes an artistic-effect patch generative adversarial neural network to modify parameters for the distilled artistic-effect neural network.
    Type: Application
    Filed: April 28, 2023
    Publication date: August 17, 2023
    Inventors: Wentian Zhao, Kun Wan, Xin Lu, Jen-Chan Jeff Chien
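    A minimal, hypothetical sketch of the workflow described in this entry (publication 20230262189): a distilled artistic-effect network stylizes the live camera stream at a lower preview resolution in real time, and the same network renders a higher-resolution artistic image when a capture event occurs. This is not Adobe's implementation; the network, resolutions, and function names are illustrative assumptions (PyTorch).

    ```python
    # Illustrative sketch only -- not the patented distilled artistic-effect network.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DistilledArtisticEffectNet(nn.Module):
        """A tiny fully convolutional stylizer standing in for the distilled network."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 3, kernel_size=3, padding=1), nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.body(x)

    def stylize(net: nn.Module, frame: torch.Tensor, height: int, width: int) -> torch.Tensor:
        """Resize a camera frame to the requested resolution and apply the artistic effect."""
        resized = F.interpolate(frame, size=(height, width), mode="bilinear", align_corners=False)
        with torch.no_grad():
            return net(resized)

    if __name__ == "__main__":
        net = DistilledArtisticEffectNet().eval()
        frame = torch.rand(1, 3, 1080, 1920)        # one frame from the camera stream
        preview = stylize(net, frame, 270, 480)     # low-resolution, real-time preview path
        captured = stylize(net, frame, 1080, 1920)  # higher-resolution path on a capture event
        print(preview.shape, captured.shape)
    ```

    The described system additionally tunes an artistic-effect patch generative adversarial neural network to modify the distilled network's parameters; that training step is omitted from this sketch.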
  • Patent number: 11676283
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate refined segmentation masks for digital visual media items. For example, in one or more embodiments, the disclosed systems utilize a segmentation refinement neural network to generate an initial segmentation mask for a digital visual media item. The disclosed systems further utilize the segmentation refinement neural network to generate one or more refined segmentation masks based on uncertainly classified pixels identified from the initial segmentation mask. To illustrate, in some implementations, the disclosed systems utilize the segmentation refinement neural network to redetermine whether a set of uncertain pixels corresponds to one or more objects depicted in the digital visual media item based on low-level (e.g., local) feature values extracted from feature maps generated for the digital visual media item.
    Type: Grant
    Filed: April 22, 2022
    Date of Patent: June 13, 2023
    Assignee: Adobe Inc.
    Inventors: Zichuan Liu, Wentian Zhao, Shitong Wang, He Qin, Yumin Jia, Yeojin Kim, Xin Lu, Jen-Chan Chien
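    A minimal, hypothetical sketch of the refinement idea in this entry (patent 11676283): start from an initial soft segmentation mask, flag pixels whose foreground probability is near 0.5 as uncertainly classified, and re-decide only those pixels from low-level (local) feature values. This is not the patented segmentation refinement neural network; the uncertainty band and the small refinement head are illustrative assumptions (PyTorch).

    ```python
    # Illustrative sketch only -- not the patented segmentation refinement network.
    # The 0.2 uncertainty band and the tiny refinement head are assumptions.
    import torch
    import torch.nn as nn

    class UncertainPixelRefiner(nn.Module):
        def __init__(self, feat_channels: int = 8, band: float = 0.2):
            super().__init__()
            self.band = band  # |p - 0.5| < band marks a pixel as uncertainly classified
            # A small head that re-predicts foreground probability from local features.
            self.head = nn.Sequential(
                nn.Conv2d(feat_channels + 1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, kernel_size=1), nn.Sigmoid(),
            )

        def forward(self, initial_mask: torch.Tensor,
                    low_level_feats: torch.Tensor) -> torch.Tensor:
            # initial_mask: (B, 1, H, W) soft probabilities from the initial pass.
            # low_level_feats: (B, C, H, W) local feature map at the same resolution.
            uncertain = (initial_mask - 0.5).abs() < self.band
            repredicted = self.head(torch.cat([low_level_feats, initial_mask], dim=1))
            # Keep confident pixels, replace uncertain ones with the re-prediction.
            return torch.where(uncertain, repredicted, initial_mask)

    if __name__ == "__main__":
        refiner = UncertainPixelRefiner()
        mask = torch.rand(1, 1, 64, 64)     # initial segmentation mask
        feats = torch.rand(1, 8, 64, 64)    # low-level feature map
        refined = refiner(mask, feats)
        print(refined.shape)                # torch.Size([1, 1, 64, 64])
    ```

    In the described system, both the uncertain-pixel set and the local features come from the same segmentation refinement neural network; here they are passed in as separate tensors purely for illustration.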
  • Patent number: 11677897
    Abstract: Methods, systems, and non-transitory computer readable media are disclosed for generating artistic images by applying an artistic-effect to one or more frames of a video stream or digital images. In one or more embodiments, the disclosed system captures a video stream utilizing a camera of a computing device. The disclosed system deploys a distilled artistic-effect neural network on the computing device to generate an artistic version of the captured video stream at a first resolution in real time. The disclosed system can provide the artistic video stream for display via the computing device. Based on an indication of a capture event, the disclosed system utilizes the distilled artistic-effect neural network to generate an artistic image at a higher resolution than the artistic video stream. Furthermore, the disclosed system tunes and utilizes an artistic-effect patch generative adversarial neural network to modify parameters for the distilled artistic-effect neural network.
    Type: Grant
    Filed: October 19, 2020
    Date of Patent: June 13, 2023
    Assignee: Adobe Inc.
    Inventors: Wentian Zhao, Kun Wan, Xin Lu, Jen-Chan Jeff Chien
  • Patent number: 11625813
    Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for accurately and efficiently removing objects from digital images taken from a camera viewfinder stream. For example, the disclosed systems access digital images from a camera viewfinder stream in connection with an undesired moving object depicted in the digital images. The disclosed systems generate a temporal window of the digital images concatenated with binary masks indicating the undesired moving object in each digital image. The disclosed systems further utilize a 3D to 2D generator as part of a 3D to 2D generative adversarial neural network in connection with the temporal window to generate a target digital image with the region associated with the undesired moving object in-painted. In at least one embodiment, the disclosed systems provide the target digital image to a camera viewfinder display to show a user how a future digital photograph will look without the undesired moving object.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: April 11, 2023
    Assignee: Adobe Inc.
    Inventors: Sheng-Wei Huang, Wentian Zhao, Kun Wan, Zichuan Liu, Xin Lu, Jen-Chan Jeff Chien
  • Patent number: 11615308
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for generating a response to a question received from a user during display or playback of a video segment by utilizing a query-response-neural network. The disclosed systems can extract a query vector from a question corresponding to the video segment using the query-response-neural network. The disclosed systems further generate context vectors representing both visual cues and transcript cues corresponding to the video segment using context encoders or other layers from the query-response-neural network. By utilizing additional layers from the query-response-neural network, the disclosed systems generate (i) a query-context vector based on the query vector and the context vectors, and (ii) candidate-response vectors representing candidate responses to the question from a domain-knowledge base or other source.
    Type: Grant
    Filed: December 28, 2021
    Date of Patent: March 28, 2023
    Assignee: Adobe Inc.
    Inventors: Wentian Zhao, Seokhwan Kim, Ning Xu, Hailin Jin
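    A minimal, hypothetical sketch of the scoring flow in this entry (patent 11615308): encode the question as a query vector, encode visual and transcript cues as context vectors, fuse them into a query-context vector, and rank candidate-response vectors against it. The stand-in linear encoders, dimensions, and dot-product ranking below are illustrative assumptions, not the patented query-response-neural network (PyTorch).

    ```python
    # Illustrative sketch only -- not the patented query-response-neural network.
    # Encoders are stand-in linear layers; dimensions and scoring are assumptions.
    import torch
    import torch.nn as nn

    class QueryResponseScorer(nn.Module):
        def __init__(self, dim: int = 64):
            super().__init__()
            self.visual_encoder = nn.Linear(dim, dim)      # stand-in visual context encoder
            self.transcript_encoder = nn.Linear(dim, dim)  # stand-in transcript context encoder
            self.fuse = nn.Linear(3 * dim, dim)            # builds the query-context vector

        def forward(self, query_vec, visual_cues, transcript_cues, candidate_vecs):
            # query_vec, visual_cues, transcript_cues: (B, D); candidate_vecs: (B, N, D)
            visual_ctx = torch.tanh(self.visual_encoder(visual_cues))
            transcript_ctx = torch.tanh(self.transcript_encoder(transcript_cues))
            query_context = self.fuse(
                torch.cat([query_vec, visual_ctx, transcript_ctx], dim=-1))
            # Score each candidate response by similarity to the query-context vector.
            scores = torch.einsum("bd,bnd->bn", query_context, candidate_vecs)
            return scores.softmax(dim=-1)                  # (B, N) distribution over candidates

    if __name__ == "__main__":
        scorer = QueryResponseScorer()
        q = torch.rand(2, 64)         # query vectors for 2 questions
        v = torch.rand(2, 64)         # pooled visual cues per video segment
        t = torch.rand(2, 64)         # pooled transcript cues per video segment
        cands = torch.rand(2, 5, 64)  # 5 candidate responses from a knowledge base
        probs = scorer(q, v, t, cands)
        print(probs.shape)            # torch.Size([2, 5])
    ```

    The abstract stops at generating candidate-response vectors from a domain-knowledge base or other source; returning the top-scoring candidate as the answer during playback is an assumption of this sketch.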
  • Publication number: 20220245824
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate refined segmentation masks for digital visual media items. For example, in one or more embodiments, the disclosed systems utilize a segmentation refinement neural network to generate an initial segmentation mask for a digital visual media item. The disclosed systems further utilize the segmentation refinement neural network to generate one or more refined segmentation masks based on uncertainly classified pixels identified from the initial segmentation mask. To illustrate, in some implementations, the disclosed systems utilize the segmentation refinement neural network to redetermine whether a set of uncertain pixels corresponds to one or more objects depicted in the digital visual media item based on low-level (e.g., local) feature values extracted from feature maps generated for the digital visual media item.
    Type: Application
    Filed: April 22, 2022
    Publication date: August 4, 2022
    Inventors: Zichuan Liu, Wentian Zhao, Shitong Wang, He Qin, Yumin Jia, Yeojin Kim, Xin Lu, Jen-Chan Chien
  • Patent number: 11335004
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate refined segmentation masks for digital visual media items. For example, in one or more embodiments, the disclosed systems utilize a segmentation refinement neural network to generate an initial segmentation mask for a digital visual media item. The disclosed systems further utilize the segmentation refinement neural network to generate one or more refined segmentation masks based on uncertainly classified pixels identified from the initial segmentation mask. To illustrate, in some implementations, the disclosed systems utilize the segmentation refinement neural network to redetermine whether a set of uncertain pixels corresponds to one or more objects depicted in the digital visual media item based on low-level (e.g., local) feature values extracted from feature maps generated for the digital visual media item.
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: May 17, 2022
    Assignee: Adobe Inc.
    Inventors: Zichuan Liu, Wentian Zhao, Shitong Wang, He Qin, Yumin Jia, Yeojin Kim, Xin Lu, Jen-Chan Chien
  • Publication number: 20220138913
    Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for accurately and efficiently removing objects from digital images taken from a camera viewfinder stream. For example, the disclosed systems access digital images from a camera viewfinder stream in connection with an undesired moving object depicted in the digital images. The disclosed systems generate a temporal window of the digital images concatenated with binary masks indicating the undesired moving object in each digital image. The disclosed systems further utilize a 3D to 2D generator as part of a 3D to 2D generative adversarial neural network in connection with the temporal window to generate a target digital image with the region associated with the undesired moving object in-painted. In at least one embodiment, the disclosed systems provide the target digital image to a camera viewfinder display to show a user how a future digital photograph will look without the undesired moving object.
    Type: Application
    Filed: October 30, 2020
    Publication date: May 5, 2022
    Inventors: Sheng-Wei Huang, Wentian Zhao, Kun Wan, Zichuan Liu, Xin Lu, Jen-Chan Jeff Chien
  • Publication number: 20220122357
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for generating a response to a question received from a user during display or playback of a video segment by utilizing a query-response-neural network. The disclosed systems can extract a query vector from a question corresponding to the video segment using the query-response-neural network. The disclosed systems further generate context vectors representing both visual cues and transcript cues corresponding to the video segment using context encoders or other layers from the query-response-neural network. By utilizing additional layers from the query-response-neural network, the disclosed systems generate (i) a query-context vector based on the query vector and the context vectors, and (ii) candidate-response vectors representing candidate responses to the question from a domain-knowledge base or other source.
    Type: Application
    Filed: December 28, 2021
    Publication date: April 21, 2022
    Inventors: Wentian Zhao, Seokhwan Kim, Ning Xu, Hailin Jin
  • Publication number: 20220124257
    Abstract: Methods, systems, and non-transitory computer readable media are disclosed for generating artistic images by applying an artistic-effect to one or more frames of a video stream or digital images. In one or more embodiments, the disclosed system captures a video stream utilizing a camera of a computing device. The disclosed system deploys a distilled artistic-effect neural network on the computing device to generate an artistic version of the captured video stream at a first resolution in real time. The disclosed system can provide the artistic video stream for display via the computing device. Based on an indication of a capture event, the disclosed system utilizes the distilled artistic-effect neural network to generate an artistic image at a higher resolution than the artistic video stream. Furthermore, the disclosed system tunes and utilizes an artistic-effect patch generative adversarial neural network to modify parameters for the distilled artistic-effect neural network.
    Type: Application
    Filed: October 19, 2020
    Publication date: April 21, 2022
    Inventors: Wentian Zhao, Kun Wan, Xin Lu, Jen-Chan Jeff Chien
  • Publication number: 20220044407
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate refined segmentation masks for digital visual media items. For example, in one or more embodiments, the disclosed systems utilize a segmentation refinement neural network to generate an initial segmentation mask for a digital visual media item. The disclosed systems further utilize the segmentation refinement neural network to generate one or more refined segmentation masks based on uncertainly classified pixels identified from the initial segmentation mask. To illustrate, in some implementations, the disclosed systems utilize the segmentation refinement neural network to redetermine whether a set of uncertain pixels corresponds to one or more objects depicted in the digital visual media item based on low-level (e.g., local) feature values extracted from feature maps generated for the digital visual media item.
    Type: Application
    Filed: August 7, 2020
    Publication date: February 10, 2022
    Inventors: Zichuan Liu, Wentian Zhao, Shitong Wang, He Qin, Yumin Jia, Yeojin Kim, Xin Lu, Jen-Chan Chien
  • Patent number: 11244167
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for generating a response to a question received from a user during display or playback of a video segment by utilizing a query-response-neural network. The disclosed systems can extract a query vector from a question corresponding to the video segment using the query-response-neural network. The disclosed systems further generate context vectors representing both visual cues and transcript cues corresponding to the video segment using context encoders or other layers from the query-response-neural network. By utilizing additional layers from the query-response-neural network, the disclosed systems generate (i) a query-context vector based on the query vector and the context vectors, and (ii) candidate-response vectors representing candidate responses to the question from a domain-knowledge base or other source.
    Type: Grant
    Filed: February 6, 2020
    Date of Patent: February 8, 2022
    Assignee: Adobe Inc.
    Inventors: Wentian Zhao, Seokhwan Kim, Ning Xu, Hailin Jin
  • Publication number: 20210248376
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for generating a response to a question received from a user during display or playback of a video segment by utilizing a query-response-neural network. The disclosed systems can extract a query vector from a question corresponding to the video segment using the query-response-neural network. The disclosed systems further generate context vectors representing both visual cues and transcript cues corresponding to the video segment using context encoders or other layers from the query-response-neural network. By utilizing additional layers from the query-response-neural network, the disclosed systems generate (i) a query-context vector based on the query vector and the context vectors, and (ii) candidate-response vectors representing candidate responses to the question from a domain-knowledge base or other source.
    Type: Application
    Filed: February 6, 2020
    Publication date: August 12, 2021
    Inventors: Wentian Zhao, Seokhwan Kim, Ning Xu, Hailin Jin