Patents by Inventor Jen-Chan Jeff Chien

Jen-Chan Jeff Chien has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11625813
    Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for accurately and efficiently removing objects from digital images taken from a camera viewfinder stream. For example, the disclosed systems access digital images from a camera viewfinder stream in connection with an undesired moving object depicted in the digital images. The disclosed systems generate a temporal window of the digital images concatenated with binary masks indicating the undesired moving object in each digital image. The disclosed systems further utilize a 3D to 2D generator as part of a 3D to 2D generative adversarial neural network in connection with the temporal window to generate a target digital image with the region associated with the undesired moving object in-painted. In at least one embodiment, the disclosed systems provide the target digital image to a camera viewfinder display to show a user how a future digital photograph will look without the undesired moving object.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: April 11, 2023
    Assignee: Adobe Inc.
    Inventors: Sheng-Wei Huang, Wentian Zhao, Kun Wan, Zichuan Liu, Xin Lu, Jen-Chan Jeff Chien
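The input construction this abstract describes — a temporal window of viewfinder frames, each concatenated with a binary mask of the unwanted object — can be sketched in a few lines. This is an illustrative reconstruction only; the array shapes and the `build_temporal_window` helper are assumptions, not details taken from the patent:

```python
import numpy as np

def build_temporal_window(frames, masks):
    """Concatenate each RGB frame with its binary object mask along the
    channel axis, then stack the results along a new time axis.

    frames: list of (H, W, 3) float arrays from the viewfinder stream
    masks:  list of (H, W) binary arrays marking the undesired object
    Returns a (T, H, W, 4) array -- the kind of temporal window a
    3D-to-2D generator could consume to in-paint the masked region.
    """
    window = []
    for frame, mask in zip(frames, masks):
        frame_with_mask = np.concatenate(
            [frame, mask[..., None].astype(frame.dtype)], axis=-1)
        window.append(frame_with_mask)
    return np.stack(window, axis=0)
```

Under this sketch, the generator described in the abstract would map such a `(T, H, W, 4)` tensor down to a single 2D target image with the masked region filled in.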
  • Patent number: 11544743
    Abstract: Application personalization techniques and systems are described that leverage an embedded machine learning module to preserve a user's privacy while still supporting rich personalization with improved accuracy and efficiency of use of computational resources over conventional techniques and systems. The machine learning module, for instance, may be embedded as part of an application to execute within a context of the application to learn user preferences to train a model using machine learning. This model is then used within the context of execution of the application to personalize the application, such as control access to digital content, make recommendations, control which items of digital marketing content are exposed to a user via the application, and so on.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: January 3, 2023
    Assignee: Adobe Inc.
    Inventors: Thomas William Randall Jacobs, Peter Raymond Fransen, Kevin Gary Smith, Kent Andrew Edmonds, Jen-Chan Jeff Chien, Gavin Stuart Peter Miller
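The embedded-personalization idea above — train and apply a preference model entirely inside the application, so raw interaction data never leaves the device — can be illustrated with a deliberately tiny sketch. The `EmbeddedPersonalizer` class and its counting "model" are hypothetical stand-ins for the machine learning module the patent describes, not its actual design:

```python
from collections import Counter

class EmbeddedPersonalizer:
    """Toy on-device preference model: it learns from interactions
    locally and ranks content without sending user data anywhere."""

    def __init__(self):
        self._clicks = Counter()

    def observe(self, category):
        # "Training" happens inside the application context only.
        self._clicks[category] += 1

    def rank(self, items):
        """items: list of (item_id, category) pairs.
        Returns them most-preferred first, by observed interest."""
        return sorted(items, key=lambda item: -self._clicks[item[1]])
```

A real embedded module would train an actual learned model rather than count clicks, but the privacy property is the same: both observation and inference stay within the app.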
  • Publication number: 20220138913
    Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for accurately and efficiently removing objects from digital images taken from a camera viewfinder stream. For example, the disclosed systems access digital images from a camera viewfinder stream in connection with an undesired moving object depicted in the digital images. The disclosed systems generate a temporal window of the digital images concatenated with binary masks indicating the undesired moving object in each digital image. The disclosed systems further utilize a 3D to 2D generator as part of a 3D to 2D generative adversarial neural network in connection with the temporal window to generate a target digital image with the region associated with the undesired moving object in-painted. In at least one embodiment, the disclosed systems provide the target digital image to a camera viewfinder display to show a user how a future digital photograph will look without the undesired moving object.
    Type: Application
    Filed: October 30, 2020
    Publication date: May 5, 2022
    Inventors: Sheng-Wei Huang, Wentian Zhao, Kun Wan, Zichuan Liu, Xin Lu, Jen-Chan Jeff Chien
  • Publication number: 20220121841
    Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for utilizing a machine learning model trained to determine subtle pose differentiations to analyze a repository of captured digital images of a particular user to automatically capture digital images portraying the user. For example, the disclosed systems can utilize a convolutional neural network to determine a pose/facial expression similarity metric between a sample digital image from a camera viewfinder stream of a client device and one or more previously captured digital images portraying the user. The disclosed systems can determine that the similarity metric satisfies a similarity threshold, and automatically capture a digital image utilizing a camera device of the client device. Thus, the disclosed systems can automatically and efficiently capture digital images, such as selfies, that accurately match previous digital images portraying a variety of unique facial expressions specific to individual users.
    Type: Application
    Filed: October 20, 2020
    Publication date: April 21, 2022
    Inventors: Jinoh Oh, Xin Lu, Gahye Park, Jen-Chan Jeff Chien, Yumin Jia
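The capture decision this abstract describes — compare the current viewfinder frame against previously captured images of the user and fire the shutter when a similarity threshold is met — reduces to a small check once embeddings exist. The cosine metric and the `should_capture` helper below are illustrative assumptions; the patent's convolutional network would produce the embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def should_capture(frame_embedding, reference_embeddings, threshold=0.9):
    """Return True when the current viewfinder frame's pose/expression
    embedding matches any previously captured image closely enough to
    trigger an automatic capture."""
    return any(cosine_similarity(frame_embedding, ref) >= threshold
               for ref in reference_embeddings)
```

In use, `reference_embeddings` would come from the user's repository of earlier photos, so the threshold is personal: it matches the user's own characteristic expressions rather than a generic smile detector.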
  • Publication number: 20220124257
    Abstract: Methods, systems, and non-transitory computer readable media are disclosed for generating artistic images by applying an artistic-effect to one or more frames of a video stream or digital images. In one or more embodiments, the disclosed system captures a video stream utilizing a camera of a computing device. The disclosed system deploys a distilled artistic-effect neural network on the computing device to generate an artistic version of the captured video stream at a first resolution in real time. The disclosed system can provide the artistic video stream for display via the computing device. Based on an indication of a capture event, the disclosed system utilizes the distilled artistic-effect neural network to generate an artistic image at a higher resolution than the artistic video stream. Furthermore, the disclosed system tunes and utilizes an artistic-effect patch generative adversarial neural network to modify parameters for the distilled artistic-effect neural network.
    Type: Application
    Filed: October 19, 2020
    Publication date: April 21, 2022
    Inventors: Wentian Zhao, Kun Wan, Xin Lu, Jen-Chan Jeff Chien
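The two-resolution flow in this abstract — run the distilled stylization network per preview frame at low resolution, then rerun it at a higher resolution when the user captures — is mainly a control-flow pattern. The sketch below assumes a `stylize(frame, size)` callable standing in for the distilled artistic-effect network; the function name and resolutions are illustrative, not from the patent:

```python
def run_artistic_pipeline(stream, stylize, capture_at=None,
                          stream_size=(256, 256), capture_size=(1024, 1024)):
    """Apply the same stylization model at two resolutions: a low-res
    pass for every preview frame (the real-time path) and a high-res
    pass on the frame where a capture event occurs.

    stream:     iterable of raw camera frames
    stylize:    callable (frame, target_size) -> stylized output
    capture_at: index of the frame on which the capture event fires
    """
    preview, still = [], None
    for i, frame in enumerate(stream):
        preview.append(stylize(frame, stream_size))   # real-time path
        if i == capture_at:
            still = stylize(frame, capture_size)      # capture path
    return preview, still
```

Reusing one distilled network for both paths is what keeps the preview responsive while still producing a higher-resolution artistic image at capture time.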
  • Patent number: 11243747
    Abstract: Application personalization techniques and systems are described that leverage an embedded machine learning module to preserve a user's privacy while still supporting rich personalization with improved accuracy and efficiency of use of computational resources over conventional techniques and systems. The machine learning module, for instance, may be embedded as part of an application to execute within a context of the application to learn user preferences to train a model using machine learning. This model is then used within the context of execution of the application to personalize the application, such as control access to digital content, make recommendations, control which items of digital marketing content are exposed to a user via the application, and so on.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: February 8, 2022
    Assignee: Adobe Inc.
    Inventors: Thomas William Randall Jacobs, Peter Raymond Fransen, Kevin Gary Smith, Kent Andrew Edmonds, Jen-Chan Jeff Chien, Gavin Stuart Peter Miller
  • Publication number: 20220019412
    Abstract: Application personalization techniques and systems are described that leverage an embedded machine learning module to preserve a user's privacy while still supporting rich personalization with improved accuracy and efficiency of use of computational resources over conventional techniques and systems. The machine learning module, for instance, may be embedded as part of an application to execute within a context of the application to learn user preferences to train a model using machine learning. This model is then used within the context of execution of the application to personalize the application, such as control access to digital content, make recommendations, control which items of digital marketing content are exposed to a user via the application, and so on.
    Type: Application
    Filed: September 30, 2021
    Publication date: January 20, 2022
    Applicant: Adobe Inc.
    Inventors: Thomas William Randall Jacobs, Peter Raymond Fransen, Kevin Gary Smith, Kent Andrew Edmonds, Jen-Chan Jeff Chien, Gavin Stuart Peter Miller
  • Patent number: 11222399
    Abstract: Image cropping suggestion using multiple saliency maps is described. In one or more implementations, component scores, indicative of visual characteristics established for visually-pleasing croppings, are computed for candidate image croppings using multiple different saliency maps. The visual characteristics on which a candidate image cropping is scored may be indicative of its composition quality, an extent to which it preserves content appearing in the scene, and a simplicity of its boundary. Based on the component scores, the croppings may be ranked with regard to each of the visual characteristics. The rankings may be used to cluster the candidate croppings into groups of similar croppings, such that croppings in a group are different by less than a threshold amount and croppings in different groups are different by at least the threshold amount. Based on the clustering, croppings may then be chosen, e.g., to present them to a user for selection.
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: January 11, 2022
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Radomir Mech, Xiaohui Shen, Brian L. Price, Jianming Zhang, Anant Gilra, Jen-Chan Jeff Chien
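The clustering step in this abstract — group candidate croppings so that croppings within a group differ by less than a threshold and croppings across groups differ by at least it — can be sketched as a greedy threshold clustering. The `cluster_croppings` helper and the greedy strategy are assumptions for illustration; the patent does not specify this exact procedure:

```python
def cluster_croppings(croppings, distance, threshold):
    """Greedy threshold clustering: a cropping joins the first group
    whose representative (first member) is within `threshold` of it;
    otherwise it starts a new group.

    croppings: list of candidate croppings (any comparable objects)
    distance:  callable (a, b) -> non-negative difference
    threshold: difference below which two croppings count as similar
    """
    groups = []
    for crop in croppings:
        for group in groups:
            if distance(crop, group[0]) < threshold:
                group.append(crop)
                break
        else:
            groups.append([crop])
    return groups
```

Choosing one representative per group (e.g., the highest-ranked member) then yields a diverse shortlist of croppings to present to the user.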
  • Patent number: 10878021
    Abstract: Content search and geographical consideration techniques and systems employed as part of a digital environment are described. In one or more implementations, a digital medium environment is described for configuring image searches by one or more computing devices. Data is received by the one or more computing devices that identifies images obtained by users and used as part of content creation, indicates geographical locations of the respective users that obtained the images or associated with the content that includes the images, and indicates times associated with the users as obtaining the images or use of the images as part of the content. A map is built by the one or more computing devices that describes how use of the images as part of the content creation is diffused over the geographical locations over the indicated times. An image search is controlled by the one or more computing devices based on the map and a geographic location associated with the image search.
    Type: Grant
    Filed: August 17, 2015
    Date of Patent: December 29, 2020
    Assignee: Adobe Inc.
    Inventors: Zeke Koch, Baldo Faieta, Jen-Chan Jeff Chien, Mark M. Randall, Olivier Sirven, Philipp Koch, Dennis G. Nicholson
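The "map" this abstract builds — how image use diffuses over locations and times — amounts to aggregating usage events by place and period, then consulting those counts at search time. The record format and both helper functions below are illustrative assumptions, not the patent's data model:

```python
from collections import defaultdict

def build_diffusion_map(usage_records):
    """Aggregate image-use events into counts per (location, period),
    describing how use of each image diffuses over place and time.

    usage_records: iterable of (image_id, location, period) tuples.
    Returns {image_id: {(location, period): count}}.
    """
    diffusion = defaultdict(lambda: defaultdict(int))
    for image_id, location, period in usage_records:
        diffusion[image_id][(location, period)] += 1
    return diffusion

def rank_for_location(diffusion, location, period):
    """Order image ids by how heavily each is used at the searcher's
    location and time -- one way a search could consult the map."""
    return sorted(diffusion,
                  key=lambda img: -diffusion[img][(location, period)])
```

A search issued from a given location could then boost images whose use is spreading toward that location, per the abstract's description of geography-aware search control.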
  • Publication number: 20200401380
    Abstract: Application personalization techniques and systems are described that leverage an embedded machine learning module to preserve a user's privacy while still supporting rich personalization with improved accuracy and efficiency of use of computational resources over conventional techniques and systems. The machine learning module, for instance, may be embedded as part of an application to execute within a context of the application to learn user preferences to train a model using machine learning. This model is then used within the context of execution of the application to personalize the application, such as control access to digital content, make recommendations, control which items of digital marketing content are exposed to a user via the application, and so on.
    Type: Application
    Filed: August 31, 2020
    Publication date: December 24, 2020
    Applicant: Adobe Inc.
    Inventors: Thomas William Randall Jacobs, Peter Raymond Fransen, Kevin Gary Smith, Kent Andrew Edmonds, Jen-Chan Jeff Chien, Gavin Stuart Peter Miller
  • Patent number: 10795647
    Abstract: Application personalization techniques and systems are described that leverage an embedded machine learning module to preserve a user's privacy while still supporting rich personalization with improved accuracy and efficiency of use of computational resources over conventional techniques and systems. The machine learning module, for instance, may be embedded as part of an application to execute within a context of the application to learn user preferences to train a model using machine learning. This model is then used within the context of execution of the application to personalize the application, such as control access to digital content, make recommendations, control which items of digital marketing content are exposed to a user via the application, and so on.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: October 6, 2020
    Assignee: Adobe Inc.
    Inventors: Thomas William Randall Jacobs, Peter Raymond Fransen, Kevin Gary Smith, Kent Andrew Edmonds, Jen-Chan Jeff Chien, Gavin Stuart Peter Miller
  • Patent number: 10789456
    Abstract: Techniques are disclosed for facial expression classification. In an embodiment, a multi-class classifier is trained using labelled training images, each training image including a facial expression. The trained classifier is then used to predict expressions for unlabelled video frames, whereby each frame is effectively labelled with a predicted expression. In addition, each predicted expression can be associated with a confidence score. Anchor frames can then be identified in the labelled video frames, based on the confidence scores of those frames (anchor frames are frames having a confidence score above an established threshold). Then, for each labelled video frame between two anchor frames, the predicted expression is refined or otherwise updated using interpolation, thereby providing a set of video frames having calibrated expression labels.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: September 29, 2020
    Assignee: Adobe Inc.
    Inventors: Yu Luo, Xin Lu, Jen-Chan Jeff Chien
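The anchor-and-interpolate calibration this abstract describes can be sketched directly: keep high-confidence frames as anchors, then blend the per-class score vectors of the surrounding anchors for each in-between frame. Interpolating score vectors (rather than labels) is one plausible reading of "refined using interpolation"; the `calibrate_labels` helper and its defaults are assumptions:

```python
def calibrate_labels(scores, confidences, threshold=0.8):
    """Calibrate per-frame expression labels using anchor frames.

    scores:      list of per-class score vectors, one per video frame
    confidences: list of classifier confidence values, one per frame
    threshold:   confidence above which a frame counts as an anchor

    Frames between two consecutive anchors are relabelled with the
    argmax of a linear blend of the two anchors' score vectors.
    """
    anchors = [i for i, c in enumerate(confidences) if c >= threshold]
    # Start from each frame's own predicted label (argmax of scores).
    calibrated = [max(range(len(s)), key=s.__getitem__) for s in scores]
    for a, b in zip(anchors, anchors[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)
            blend = [(1 - t) * sa + t * sb
                     for sa, sb in zip(scores[a], scores[b])]
            calibrated[i] = max(range(len(blend)), key=blend.__getitem__)
    return calibrated
```

The effect is that noisy low-confidence predictions get pulled toward the nearest reliable frames, smoothing label flicker across a video.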
  • Patent number: 10521705
    Abstract: The present disclosure is directed toward systems, methods, and non-transitory computer readable media that automatically select an image from a plurality of images based on the multi-context aware rating of the image. In particular, systems described herein can generate a plurality of probability context scores for an image. Moreover, the disclosed systems can generate a plurality of context-specific scores for an image. Utilizing each of the probability context scores and each of the corresponding context-specific scores for an image, the disclosed systems can generate a multi-context aware rating for the image. Thereafter, the disclosed systems can select an image from the plurality of images with the highest multi-context aware rating for delivery to the user. The disclosed system can utilize one or more neural networks to both generate the probability context scores for an image and to generate the context-specific scores for an image.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: December 31, 2019
    Assignee: Adobe Inc.
    Inventors: Xin Lu, Zejun Huang, Jen-Chan Jeff Chien
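The rating combination this abstract describes — pair each context's probability with a context-specific quality score, then select the highest-rated image — reads naturally as a probability-weighted sum. That weighting and the helper names below are illustrative assumptions; the patent leaves the exact combination to its neural networks:

```python
def multi_context_rating(context_probs, context_scores):
    """Combine per-context probabilities with context-specific quality
    scores into one rating: a probability-weighted sum of the scores."""
    return sum(p * s for p, s in zip(context_probs, context_scores))

def select_best_image(images):
    """images: list of (image_id, context_probs, context_scores).
    Returns the id of the image with the highest combined rating."""
    return max(images,
               key=lambda im: multi_context_rating(im[1], im[2]))[0]
```

In the patent's pipeline, one network would emit `context_probs` (how likely each context is for the image) and others would emit `context_scores` (how good the image is within each context); the selection step itself is this simple comparison.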
  • Patent number: 10460214
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for segmenting objects in digital visual media utilizing one or more salient content neural networks. In particular, in one or more embodiments, the disclosed systems and methods train one or more salient content neural networks to efficiently identify foreground pixels in digital visual media. Moreover, in one or more embodiments, the disclosed systems and methods provide a trained salient content neural network to a mobile device, allowing the mobile device to directly select salient objects in digital visual media utilizing a trained neural network. Furthermore, in one or more embodiments, the disclosed systems and methods train and provide multiple salient content neural networks, such that mobile devices can identify objects in real-time digital visual media feeds (utilizing a first salient content neural network) and identify objects in static digital images (utilizing a second salient content neural network).
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: October 29, 2019
    Assignee: Adobe Inc.
    Inventors: Xin Lu, Zhe Lin, Xiaohui Shen, Jimei Yang, Jianming Zhang, Jen-Chan Jeff Chien, Chenxi Liu
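The dual-network arrangement this abstract describes — one salient-content network for real-time feeds, a second for static images — is at heart a dispatch decision on the device. The stand-in "network" below is just a pixel threshold; it, and the function names, are assumptions made so the routing logic can be shown runnably:

```python
def make_threshold_net(threshold):
    """Stand-in for a salient-content network: marks a pixel as
    foreground (1) when its intensity exceeds `threshold`."""
    def net(frame):
        return [[1 if px > threshold else 0 for px in row]
                for row in frame]
    return net

def select_salient_pixels(frames, is_live_feed, fast_net, accurate_net):
    """On-device dispatch: use a lightweight network for real-time
    digital visual media feeds and a heavier, more accurate one for
    static digital images. Returns one foreground mask per frame."""
    net = fast_net if is_live_feed else accurate_net
    return [net(frame) for frame in frames]
```

Shipping both trained networks to the mobile device, as the abstract describes, lets the same selection feature work under the latency budget of a live feed and the quality budget of a still photo.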
  • Publication number: 20190244327
    Abstract: Image cropping suggestion using multiple saliency maps is described. In one or more implementations, component scores, indicative of visual characteristics established for visually-pleasing croppings, are computed for candidate image croppings using multiple different saliency maps. The visual characteristics on which a candidate image cropping is scored may be indicative of its composition quality, an extent to which it preserves content appearing in the scene, and a simplicity of its boundary. Based on the component scores, the croppings may be ranked with regard to each of the visual characteristics. The rankings may be used to cluster the candidate croppings into groups of similar croppings, such that croppings in a group are different by less than a threshold amount and croppings in different groups are different by at least the threshold amount. Based on the clustering, croppings may then be chosen, e.g., to present them to a user for selection.
    Type: Application
    Filed: April 15, 2019
    Publication date: August 8, 2019
    Applicant: Adobe Inc.
    Inventors: Zhe Lin, Radomir Mech, Xiaohui Shen, Brian L. Price, Jianming Zhang, Anant Gilra, Jen-Chan Jeff Chien
  • Patent number: 10346951
    Abstract: Image cropping suggestion using multiple saliency maps is described. In one or more implementations, component scores, indicative of visual characteristics established for visually-pleasing croppings, are computed for candidate image croppings using multiple different saliency maps. The visual characteristics on which a candidate image cropping is scored may be indicative of its composition quality, an extent to which it preserves content appearing in the scene, and a simplicity of its boundary. Based on the component scores, the croppings may be ranked with regard to each of the visual characteristics. The rankings may be used to cluster the candidate croppings into groups of similar croppings, such that croppings in a group are different by less than a threshold amount and croppings in different groups are different by at least the threshold amount. Based on the clustering, croppings may then be chosen, e.g., to present them to a user for selection.
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: July 9, 2019
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Radomir Mech, Xiaohui Shen, Brian L. Price, Jianming Zhang, Anant Gilra, Jen-Chan Jeff Chien
  • Publication number: 20190205625
    Abstract: Techniques are disclosed for facial expression classification. In an embodiment, a multi-class classifier is trained using labelled training images, each training image including a facial expression. The trained classifier is then used to predict expressions for unlabelled video frames, whereby each frame is effectively labelled with a predicted expression. In addition, each predicted expression can be associated with a confidence score. Anchor frames can then be identified in the labelled video frames, based on the confidence scores of those frames (anchor frames are frames having a confidence score above an established threshold). Then, for each labelled video frame between two anchor frames, the predicted expression is refined or otherwise updated using interpolation, thereby providing a set of video frames having calibrated expression labels.
    Type: Application
    Filed: December 28, 2017
    Publication date: July 4, 2019
    Applicant: Adobe Inc.
    Inventors: Yu Luo, Xin Lu, Jen-Chan Jeff Chien
  • Publication number: 20190147305
    Abstract: The present disclosure is directed toward systems, methods, and non-transitory computer readable media that automatically select an image from a plurality of images based on the multi-context aware rating of the image. In particular, systems described herein can generate a plurality of probability context scores for an image. Moreover, the disclosed systems can generate a plurality of context-specific scores for an image. Utilizing each of the probability context scores and each of the corresponding context-specific scores for an image, the disclosed systems can generate a multi-context aware rating for the image. Thereafter, the disclosed systems can select an image from the plurality of images with the highest multi-context aware rating for delivery to the user. The disclosed system can utilize one or more neural networks to both generate the probability context scores for an image and to generate the context-specific scores for an image.
    Type: Application
    Filed: November 14, 2017
    Publication date: May 16, 2019
    Inventors: Xin Lu, Zejun Huang, Jen-Chan Jeff Chien
  • Publication number: 20190130229
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for segmenting objects in digital visual media utilizing one or more salient content neural networks. In particular, in one or more embodiments, the disclosed systems and methods train one or more salient content neural networks to efficiently identify foreground pixels in digital visual media. Moreover, in one or more embodiments, the disclosed systems and methods provide a trained salient content neural network to a mobile device, allowing the mobile device to directly select salient objects in digital visual media utilizing a trained neural network. Furthermore, in one or more embodiments, the disclosed systems and methods train and provide multiple salient content neural networks, such that mobile devices can identify objects in real-time digital visual media feeds (utilizing a first salient content neural network) and identify objects in static digital images (utilizing a second salient content neural network).
    Type: Application
    Filed: October 31, 2017
    Publication date: May 2, 2019
    Inventors: Xin Lu, Zhe Lin, Xiaohui Shen, Jimei Yang, Jianming Zhang, Jen-Chan Jeff Chien, Chenxi Liu
  • Publication number: 20190114680
    Abstract: Techniques and systems are described to control the output of digital marketing content with respect to a digital video, addressing the added complexities of digital video over other types of digital content, such as webpages. In one example, the techniques and systems are configured to control the time at which digital marketing content is output with respect to the digital video, e.g., by selecting a commercial break or by presenting it as a banner ad in conjunction with the video.
    Type: Application
    Filed: October 13, 2017
    Publication date: April 18, 2019
    Applicant: Adobe Systems Incorporated
    Inventors: Jen-Chan Jeff Chien, Thomas William Randall Jacobs, Kent Andrew Edmonds, Kevin Gary Smith, Peter Raymond Fransen, Gavin Stuart Peter Miller, Ashley Manning Still
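The timing decision this last abstract describes — choose when marketing content appears in a video, either at a commercial break or as a banner — can be sketched as a simple placement rule. The scene-boundary input, the minimum-offset parameter, and the banner fallback are all illustrative assumptions, not the patent's claimed method:

```python
def pick_ad_break(scene_boundaries, min_offset):
    """Choose when to show marketing content in a digital video:
    the first scene boundary at or after `min_offset` seconds becomes
    a commercial break; if none exists, fall back to a banner shown
    alongside playback.

    scene_boundaries: timestamps (seconds) of candidate break points
    min_offset:       earliest acceptable interruption time
    Returns ("commercial_break", t) or ("banner", None).
    """
    for t in sorted(scene_boundaries):
        if t >= min_offset:
            return ("commercial_break", t)
    return ("banner", None)
```

The point of the sketch is the branching the abstract names: timed interruption when a suitable moment exists in the video, and a non-interrupting banner placement otherwise.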