Patents by Inventor Zhaowen Wang

Zhaowen Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10528649
    Abstract: Font recognition and similarity determination techniques and systems are described. For example, a computing device receives an image including a font and extracts font features corresponding to the font. The computing device computes font feature distances between the font and fonts from a set of training fonts. The computing device calculates, based on the font feature distances, similarity scores for the font and the training fonts used in calculating the feature distances. The computing device determines, based on the similarity scores, final similarity scores for the font relative to the training fonts.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: January 7, 2020
    Assignee: Adobe Inc.
    Inventors: Zhaowen Wang, Hailin Jin
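
A minimal NumPy sketch of the scoring step this abstract describes: distances from a query font's features to each training font's features are turned into similarity scores. The softmax-of-negative-distance mapping is an illustrative assumption; the patent does not fix the exact formula.

```python
import numpy as np

def similarity_scores(query_feat, train_feats):
    """Score training fonts by feature distance to a query font.

    query_feat: (D,) feature vector extracted from the input image's font.
    train_feats: (N, D) feature vectors for the training fonts.
    Returns similarity scores in (0, 1), higher meaning more similar.
    """
    dists = np.linalg.norm(train_feats - query_feat, axis=1)  # font feature distances
    weights = np.exp(-dists)
    return weights / weights.sum()  # one simple distance-to-similarity mapping
```
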
  • Patent number: 10515295
    Abstract: The present disclosure relates to a font recognition system that employs a multi-task learning framework to jointly improve font classification and remove negative side effects caused by intra-class variances of glyph content. For example, in one or more embodiments, the font recognition system can jointly train a font recognition neural network using a font classification loss model and a triplet loss model to generate a deep learning neural network that provides improved font classifications. In addition, the font recognition system can employ the trained font recognition neural network to efficiently recognize fonts within input images as well as provide other suggested fonts.
    Type: Grant
    Filed: October 27, 2017
    Date of Patent: December 24, 2019
    Assignee: Adobe Inc.
    Inventors: Yang Liu, Zhaowen Wang, Hailin Jin
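
A hedged PyTorch sketch of the joint objective named in the abstract: a font classification loss plus a triplet loss over the same embedding. The toy backbone, the 1,000-class head, and the weight `alpha` are illustrative assumptions.

```python
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128))  # toy backbone (assumed)
classify = nn.Linear(128, 1000)                               # assumed 1,000 font classes

ce_loss = nn.CrossEntropyLoss()
triplet_loss = nn.TripletMarginLoss(margin=1.0)
alpha = 0.5  # relative weight of the triplet term (assumed)

def joint_loss(anchor, positive, negative, labels):
    # anchor and positive show the same font; negative shows a different font.
    za, zp, zn = embed(anchor), embed(positive), embed(negative)
    return ce_loss(classify(za), labels) + alpha * triplet_loss(za, zp, zn)
```
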
  • Patent number: 10515296
    Abstract: The present disclosure relates to a font recognition system that employs a multi-task learning framework and training to improve font classification and remove negative side effects caused by intra-class variances of glyph content. For example, in one or more embodiments, the font recognition system trains a hybrid font recognition neural network that includes two or more font recognition neural networks and a weight prediction neural network. The hybrid font recognition neural network determines and generates classification weights based on which font recognition neural network within the hybrid font recognition neural network is best suited to classify the font in an input text image. By employing a trained hybrid font classification neural network, the font recognition system can improve overall font recognition as well as remove the negative side effects caused by diverse glyph content.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: December 24, 2019
    Assignee: Adobe Inc.
    Inventors: Yang Liu, Zhaowen Wang, I-Ming Pao, Hailin Jin
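
A sketch of the weighting idea: a small prediction head scores which expert classifier suits the input image, and the experts' font scores are mixed accordingly. All layer sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class HybridFontNet(nn.Module):
    """Two expert font classifiers mixed by a learned weight predictor."""

    def __init__(self, n_fonts=1000, d=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, d))
        self.experts = nn.ModuleList([nn.Linear(d, n_fonts) for _ in range(2)])
        self.weigher = nn.Linear(d, 2)  # predicts which expert suits the image

    def forward(self, x):
        h = self.backbone(x)
        w = torch.softmax(self.weigher(h), dim=1)                  # (B, 2) weights
        logits = torch.stack([e(h) for e in self.experts], dim=1)  # (B, 2, n_fonts)
        return (w.unsqueeze(-1) * logits).sum(dim=1)               # mixed font scores
```
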
  • Publication number: 20190385346
    Abstract: Techniques are disclosed for the synthesis of a full set of slotted content, based upon only partial observations of the slotted content. With respect to a font, the slots may comprise particular letters, symbols, or glyphs in an alphabet. Based upon partial observations of a subset of glyphs from a font, a full set of the glyphs corresponding to the font may be synthesized and may further be ornamented.
    Type: Application
    Filed: June 15, 2018
    Publication date: December 19, 2019
    Applicant: Adobe Inc.
    Inventors: Matthew David Fisher, Samaneh Azadi, Vladimir Kim, Elya Shechtman, Zhaowen Wang
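
One way to read "slots" concretely, sketched below under assumptions the publication does not spell out: observed glyphs are stacked as input channels (unobserved slots zeroed), and a generator emits one channel per glyph of the full set.

```python
import torch
import torch.nn as nn

class GlyphSetGenerator(nn.Module):
    """Sketch: synthesize all 26 glyph slots from a partially observed stack."""

    def __init__(self, n_glyphs=26):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_glyphs, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_glyphs, 3, padding=1), nn.Tanh(),
        )

    def forward(self, partial_glyphs):   # (B, 26, H, W), unobserved slots zeroed
        return self.net(partial_glyphs)  # full synthesized glyph set
```
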
  • Publication number: 20190378242
    Abstract: In implementations of super-resolution with reference images, a super-resolution image is generated based on reference images. Reference images are not constrained to have the same or similar content as a low-resolution image being super-resolved. Texture features indicating high-frequency content are extracted into texture feature maps, and patches of texture feature maps of reference images are determined based on texture feature similarity. A content feature map indicating the low-frequency content of an image is adaptively fused by a neural network with a swapped texture feature map that includes patches of reference images, based on similarity of texture features. A user interface allows a user to select regions of multiple reference images to use for super-resolution. Hence, a super-resolution image can be generated with rich texture details incorporated from multiple reference images, even in the absence of reference images having similar content to an image being upscaled.
    Type: Application
    Filed: June 6, 2018
    Publication date: December 12, 2019
    Applicant: Adobe Inc.
    Inventors: Zhifei Zhang, Zhe Lin, Zhaowen Wang
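
A simplified PyTorch sketch of the texture-swapping step: each patch of the low-resolution image's texture feature map is replaced by its most similar reference patch. The patch size and the cosine-similarity matching are assumptions; the adaptive fusion network is omitted.

```python
import torch
import torch.nn.functional as F

def swap_texture_features(lr_tex, ref_tex, patch=3):
    """Swap LR texture patches for their nearest reference patches.

    lr_tex, ref_tex: (1, C, H, W) texture feature maps (e.g. from a CNN layer).
    """
    ref_p = F.unfold(ref_tex, patch, padding=patch // 2)   # (1, C*p*p, N)
    lr_p = F.unfold(lr_tex, patch, padding=patch // 2)     # (1, C*p*p, M)
    sim = F.normalize(ref_p, dim=1).transpose(1, 2) @ F.normalize(lr_p, dim=1)
    best = sim.argmax(dim=1)                               # best ref patch per LR patch
    swapped = ref_p[0][:, best[0]].unsqueeze(0)            # gather matched patches
    out = F.fold(swapped, lr_tex.shape[-2:], patch, padding=patch // 2)
    ones = torch.ones_like(lr_tex)
    overlap = F.fold(F.unfold(ones, patch, padding=patch // 2),
                     lr_tex.shape[-2:], patch, padding=patch // 2)
    return out / overlap                                   # average overlapping patches
```
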
  • Publication number: 20190370936
    Abstract: High resolution style transfer techniques and systems are described that overcome the challenges of transferring high resolution style features from one image to another image, and of the limited availability of training data to perform high resolution style transfer. In an example, a neural network is trained using high resolution style features which are extracted from a style image and are used in conjunction with an input image to apply the style features to the input image to generate a version of the input image transformed using the high resolution style features.
    Type: Application
    Filed: June 4, 2018
    Publication date: December 5, 2019
    Applicant: Adobe Inc.
    Inventors: Zhifei Zhang, Zhe Lin, Zhaowen Wang
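
The abstract does not name the style objective, so the sketch below uses the common Gram-matrix style loss as a stand-in: the generated image's features are pushed to match the channel co-occurrence statistics of the high-resolution style features.

```python
import torch

def gram(feat):
    """Gram matrix of a (B, C, H, W) feature map: channel co-occurrence statistics."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(gen_feats, style_feats):
    """Match Gram statistics across a list of feature maps from both images."""
    return sum(((gram(g) - gram(s)) ** 2).mean()
               for g, s in zip(gen_feats, style_feats))
```
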
  • Publication number: 20190362524
    Abstract: Oil painting simulation techniques are disclosed which simulate painting brush strokes using a trained neural network. In some examples, a method may include inferring a new height map of existing paint on a canvas after a new painting brush stroke is applied based on a bristle trajectory map that represents the new painting brush stroke and a height map of existing paint on the canvas prior to the application of the new painting brush stroke, and generating a rendering of the new painting brush stroke based on the new height map of existing paint on the canvas after the new painting brush stroke is applied to the canvas and a color map.
    Type: Application
    Filed: August 13, 2019
    Publication date: November 28, 2019
    Applicant: Adobe Inc.
    Inventors: Zhili Chen, Zhaowen Wang, Rundong Wu, Jimei Yang
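
A minimal sketch of the inference step: the bristle trajectory map of the new stroke and the height map of existing paint are stacked as channels, and a small convolutional network (layer sizes assumed) predicts the new height map. Rendering would then shade a color map using this inferred height map.

```python
import torch
import torch.nn as nn

class HeightMapNet(nn.Module):
    """Infer the canvas height map after a new brush stroke (sketch)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, trajectory, height_before):
        x = torch.cat([trajectory, height_before], dim=1)  # (B, 2, H, W)
        return self.net(x)                                 # new height map
```
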
  • Patent number: 10482639
    Abstract: In some embodiments, techniques for synthesizing an image style based on a plurality of neural networks are described. A computer system selects a style image based on user input that identifies the style image. The computer system generates an image based on a generator neural network and a loss neural network. The generator neural network outputs the synthesized image based on a noise vector and the style image and is trained based on style features generated from the loss neural network. The loss neural network outputs the style features based on a training image. The training image and the style image have a same resolution. The style features are generated at different resolutions of the training image. The computer system provides the synthesized image to a user device in response to the user input.
    Type: Grant
    Filed: February 21, 2017
    Date of Patent: November 19, 2019
    Assignee: Adobe Inc.
    Inventors: Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu
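
A hedged sketch of the multi-resolution training signal: a fixed loss network (any feature extractor, assumed here) sees the synthesized and style images at several scales, and simple channel statistics (an assumption; the patent only says "style features") are matched between them.

```python
import torch.nn.functional as F

def style_stats(f):
    # channel-wise mean and std as simple style statistics (assumed)
    return f.mean(dim=(2, 3)), f.std(dim=(2, 3))

def multiscale_style_loss(loss_net, synthesized, style_img, scales=(1.0, 0.5, 0.25)):
    total = 0.0
    for s in scales:
        fa = loss_net(F.interpolate(synthesized, scale_factor=s,
                                    mode="bilinear", align_corners=False))
        fb = loss_net(F.interpolate(style_img, scale_factor=s,
                                    mode="bilinear", align_corners=False))
        (ma, sa), (mb, sb) = style_stats(fa), style_stats(fb)
        total = total + ((ma - mb) ** 2).mean() + ((sa - sb) ** 2).mean()
    return total
```
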
  • Patent number: 10467508
    Abstract: Font recognition and similarity determination techniques and systems are described. In a first example, localization techniques are described to train a model using machine learning (e.g., a convolutional neural network) using training images. The model is then used to localize text in a subsequently received image, and may do so automatically and without user intervention, e.g., without specifying any of the edges of a bounding box. In a second example, a deep neural network is directly learned as an embedding function of a model that is usable to determine font similarity. In a third example, techniques are described that leverage attributes described in metadata associated with fonts as part of font recognition and similarity determinations.
    Type: Grant
    Filed: April 25, 2018
    Date of Patent: November 5, 2019
    Assignee: Adobe Inc.
    Inventors: Zhaowen Wang, Luoqi Liu, Hailin Jin
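
For the second example (a deep network learned as an embedding function), similarity at query time reduces to nearest neighbors in the embedding space; a NumPy sketch with cosine similarity (one common choice, assumed here):

```python
import numpy as np

def most_similar_fonts(query_emb, font_embs, k=5):
    """Return indices of the k fonts nearest to the query in embedding space."""
    q = query_emb / np.linalg.norm(query_emb)
    f = font_embs / np.linalg.norm(font_embs, axis=1, keepdims=True)
    return np.argsort(-(f @ q))[:k]  # highest cosine similarity first
```
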
  • Publication number: 20190333198
    Abstract: The present disclosure relates to training and utilizing an image exposure transformation network to generate a long-exposure image from a single short-exposure image (e.g., still image). In various embodiments, the image exposure transformation network is trained using adversarial learning, long-exposure ground truth images, and a multi-term loss function. In some embodiments, the image exposure transformation network includes an optical flow prediction network and/or an appearance guided attention network. Trained embodiments of the image exposure transformation network generate realistic long-exposure images from single short-exposure images without additional information.
    Type: Application
    Filed: April 25, 2018
    Publication date: October 31, 2019
    Inventors: Yilin Wang, Zhe Lin, Zhaowen Wang, Xin Lu, Xiaohui Shen, Chih-Yao Hsieh
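
A hedged sketch of a multi-term loss of the kind the abstract mentions: a pixel term against the long-exposure ground truth plus an adversarial term. The L1 pixel term, the BCE adversarial term, and the weights are illustrative assumptions; the optical-flow and attention terms are omitted.

```python
import torch
import torch.nn.functional as F

def generator_loss(pred_long, gt_long, disc_score, w_pix=1.0, w_adv=0.01):
    pixel_term = F.l1_loss(pred_long, gt_long)       # match long-exposure ground truth
    adv_term = F.binary_cross_entropy_with_logits(
        disc_score, torch.ones_like(disc_score))     # fool the discriminator
    return w_pix * pixel_term + w_adv * adv_term
```
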
  • Publication number: 20190325277
    Abstract: Font replacement based on visual similarity is described. In one or more embodiments, a font descriptor includes multiple font features derived from a visual appearance of a font by a font visual similarity model. The font visual similarity model can be trained using a machine learning system that recognizes similarity between visual appearances of two different fonts. A source computing device embeds a font descriptor in a document, which is transmitted to a destination computing device. The destination computing device compares the embedded font descriptor to font descriptors corresponding to local fonts. Based on distances between the embedded and the local font descriptors, at least one matching font descriptor is determined. The local font corresponding to the matching font descriptor is deemed similar to the original font. The destination computing device controls presentations of the document using the similar local font. Computation of font descriptors can be outsourced to a remote location.
    Type: Application
    Filed: July 3, 2019
    Publication date: October 24, 2019
    Applicant: Adobe Inc.
    Inventors: Hailin Jin, Zhaowen Wang, Gavin Stuart Peter Miller
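
The matching step on the destination device reduces to a nearest-descriptor lookup, sketched below; the 64-dimensional descriptors and font names are placeholders, not values from the publication.

```python
import numpy as np

local_fonts = {  # placeholder descriptors for fonts installed on the destination
    "LocalSerif": np.random.rand(64),
    "LocalSans": np.random.rand(64),
}

def resolve_font(embedded_descriptor, local_fonts):
    """Pick the local font whose descriptor is nearest the document's descriptor."""
    return min(local_fonts,
               key=lambda name: np.linalg.norm(local_fonts[name] - embedded_descriptor))
```
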
  • Patent number: 10453204
    Abstract: The present disclosure is directed towards systems and methods for generating a new aligned image from a plurality of burst images. The systems and methods subdivide a reference image into a plurality of local regions and a subsequent image into a plurality of corresponding local regions. Additionally, the systems and methods detect a plurality of feature points in each of the reference image and the subsequent image and determine matching feature point pairs between the reference image and the subsequent image. Based on the matching feature point pairs, the systems and methods determine at least one homography of the reference image to the subsequent image. Based on the homography, the systems and methods generate a new aligned image that is pixel-wise aligned to the reference image. Furthermore, the systems and methods refine the boundaries between local regions of the new aligned image.
    Type: Grant
    Filed: August 14, 2017
    Date of Patent: October 22, 2019
    Assignee: Adobe Inc.
    Inventors: Zhaowen Wang, Hailin Jin
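
An OpenCV sketch of the core alignment steps: feature detection, matching, homography estimation, and warping. For brevity it fits a single global homography rather than the per-region homographies and boundary refinement the patent describes.

```python
import cv2
import numpy as np

def align_to_reference(reference, subsequent):
    """Warp a burst frame so it is pixel-wise aligned to the reference frame."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(reference, None)
    kp2, des2 = orb.detectAndCompute(subsequent, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC)  # maps subsequent -> reference
    h, w = reference.shape[:2]
    return cv2.warpPerspective(subsequent, H, (w, h))
```
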
  • Patent number: 10445921
    Abstract: Transferring motion from consecutive frames of a digital video to a digital image is leveraged in a digital medium environment. A digital image and at least a portion of the digital video are exposed to a motion transfer model. The portion of the digital video includes at least a first digital video frame and a second digital video frame that is consecutive to the first digital video frame. Flow data between the first digital video frame and the second digital video frame is extracted, and the flow data is then processed to generate motion features representing motion between the first digital video frame and the second digital video frame. The digital image is processed to generate image features of the digital image. Motion of the digital video is then transferred to the digital image by combining the motion features with the image features to generate a next digital image frame for the digital image.
    Type: Grant
    Filed: June 13, 2018
    Date of Patent: October 15, 2019
    Assignee: Adobe Inc.
    Inventors: Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu
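
A minimal sketch of the combination step: motion features encoded from flow between the two video frames are concatenated with image features and decoded into the next frame. All layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class MotionTransfer(nn.Module):
    """Fuse motion features from video flow with features of a still image."""

    def __init__(self):
        super().__init__()
        self.motion_enc = nn.Conv2d(2, 32, 3, padding=1)  # flow (dx, dy) -> motion features
        self.image_enc = nn.Conv2d(3, 32, 3, padding=1)   # RGB image -> image features
        self.decoder = nn.Conv2d(64, 3, 3, padding=1)     # fused features -> next frame

    def forward(self, flow, image):
        fused = torch.cat([self.motion_enc(flow), self.image_enc(image)], dim=1)
        return self.decoder(torch.relu(fused))            # next frame for the image
```
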
  • Patent number: 10430661
    Abstract: Techniques and systems are described to generate a compact video feature representation for sequences of frames in a video. In one example, values of features are extracted from each frame of a plurality of frames of a video using machine learning, e.g., through use of a convolutional neural network. A video feature representation of the temporal order dynamics of the video is generated, e.g., through use of a recurrent neural network. For example, a maximum value is maintained for each feature of the plurality of features that has been reached for the plurality of frames in the video. A timestamp is also maintained as indicative of when the maximum value is reached for each feature of the plurality of features. The video feature representation is then output as a basis to determine similarity of the video with at least one other video.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: October 1, 2019
    Assignee: Adobe Inc.
    Inventors: Hao Hu, Zhaowen Wang, Joon-Young Lee, Zhe Lin
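
The max-and-timestamp bookkeeping the abstract describes is easy to state directly; a NumPy sketch, assuming per-frame features have already been extracted:

```python
import numpy as np

def video_feature_representation(frame_features):
    """Compact representation: per-feature maximum and when it was reached.

    frame_features: (T, D) feature values for T frames.
    Returns (max_values, timestamps), each of shape (D,).
    """
    max_values = frame_features.max(axis=0)
    timestamps = frame_features.argmax(axis=0)  # frame index where each max occurred
    return max_values, timestamps
```
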
  • Patent number: 10424086
    Abstract: Oil painting simulation techniques are disclosed which simulate painting brush strokes using a trained neural network. In some examples, a method may include inferring a new height map of existing paint on a canvas after a new painting brush stroke is applied based on a bristle trajectory map that represents the new painting brush stroke and a height map of existing paint on the canvas prior to the application of the new painting brush stroke, and generating a rendering of the new painting brush stroke based on the new height map of existing paint on the canvas after the new painting brush stroke is applied to the canvas and a color map.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: September 24, 2019
    Assignee: Adobe Inc.
    Inventors: Zhili Chen, Zhaowen Wang, Rundong Wu, Jimei Yang
  • Publication number: 20190251612
    Abstract: The present disclosure relates to a personalized fashion generation system that synthesizes user-customized images using deep learning techniques based on visually-aware user preferences. In particular, the personalized fashion generation system employs an image generative adversarial neural network and a personalized preference network to synthesize new fashion items that are individually customized for a user. Additionally, the personalized fashion generation system can modify existing fashion items to tailor the fashion items to a user's tastes and preferences.
    Type: Application
    Filed: February 15, 2018
    Publication date: August 15, 2019
    Inventors: Chen Fang, Zhaowen Wang, Wangcheng Kang, Julian McAuley
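
One speculative reading of the customization step, sketched under heavy assumptions: given a trained generator `G` and a preference scorer `P` (both hypothetical here), search the generator's latent space for an item the user is predicted to like.

```python
import torch

def personalize(G, P, user_vec, steps=100, lr=0.05):
    """Optimize a latent code so the generated item maximizes predicted preference."""
    z = torch.randn(1, 128, requires_grad=True)  # latent size is an assumption
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -P(G(z), user_vec).mean()         # maximize the preference score
        loss.backward()
        opt.step()
    return G(z).detach()                         # user-customized item image
```
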
  • Publication number: 20190251446
    Abstract: The present disclosure relates to a fashion recommendation system that employs a task-guided learning framework to jointly train a visually-aware personalized preference ranking network. In addition, the fashion recommendation system employs implicit feedback and generated user-based triplets to learn variances in the user's fashion preferences for items with which the user has not yet interacted. In particular, the fashion recommendation system uses triplets generated from implicit user data to jointly train a Siamese convolutional neural network and a personalized ranking model, which together produce a user preference predictor that determines personalized fashion recommendations for a user.
    Type: Application
    Filed: February 15, 2018
    Publication date: August 15, 2019
    Inventors: Chen Fang, Zhaowen Wang, Wangcheng Kang, Julian McAuley
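
A hedged sketch of the triplet objective: for each (user, interacted item, non-interacted item) triplet, the predicted preference for the positive item is pushed above the negative one via a BPR-style ranking loss. The shared encoder and all sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

item_cnn = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 64))  # shared Siamese encoder (toy)

def ranking_loss(user_vec, pos_img, neg_img):
    pos_score = (user_vec * item_cnn(pos_img)).sum(dim=1)  # preference: interacted item
    neg_score = (user_vec * item_cnn(neg_img)).sum(dim=1)  # preference: non-interacted item
    return -F.logsigmoid(pos_score - neg_score).mean()     # rank positive above negative
```
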
  • Patent number: 10380462
    Abstract: Font replacement based on visual similarity is described. In one or more embodiments, a font descriptor includes multiple font features derived from a visual appearance of a font by a font visual similarity model. The font visual similarity model can be trained using a machine learning system that recognizes similarity between visual appearances of two different fonts. A source computing device embeds a font descriptor in a document, which is transmitted to a destination computing device. The destination computing device compares the embedded font descriptor to font descriptors corresponding to local fonts. Based on distances between the embedded and the local font descriptors, at least one matching font descriptor is determined. The local font corresponding to the matching font descriptor is deemed similar to the original font. The destination computing device controls presentations of the document using the similar local font. Computation of font descriptors can be outsourced to a remote location.
    Type: Grant
    Filed: June 20, 2018
    Date of Patent: August 13, 2019
    Assignee: Adobe Inc.
    Inventors: Hailin Jin, Zhaowen Wang, Gavin Stuart Peter Miller
  • Patent number: 10334202
    Abstract: Techniques are disclosed for generating audio based on visual information. In some examples, an audio generation system is trained using supervised learning using a training set generated from videos. The trained audio generation system is able to infer audio for provided silent video based on the visual contents of the silent video, and generate raw waveform samples that represent the inferred audio.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: June 25, 2019
    Assignee: Adobe Inc.
    Inventors: Yipin Zhou, Zhaowen Wang, Chen Fang, Trung Huu Bui
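
A minimal sketch of the inference path: per-frame visual features (assumed precomputed by a CNN) drive a recurrent network that emits a chunk of raw waveform samples per frame. All sizes, including the 735 samples per frame (44.1 kHz at 60 fps), are assumptions.

```python
import torch
import torch.nn as nn

class VideoToAudio(nn.Module):
    """Infer raw waveform samples from visual features of a silent video."""

    def __init__(self, feat_dim=512, samples_per_frame=735):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, 256, batch_first=True)
        self.to_wave = nn.Linear(256, samples_per_frame)

    def forward(self, frame_feats):                 # (B, T, feat_dim)
        h, _ = self.rnn(frame_feats)
        chunks = torch.tanh(self.to_wave(h))        # (B, T, samples_per_frame)
        return chunks.reshape(chunks.shape[0], -1)  # waveform samples in [-1, 1]
```
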
  • Publication number: 20190147304
    Abstract: The present disclosure relates to a font recognition system that employs a multi-task learning framework and training to improve font classification and remove negative side effects caused by intra-class variances of glyph content. For example, in one or more embodiments, the font recognition system trains a hybrid font recognition neural network that includes two or more font recognition neural networks and a weight prediction neural network. The hybrid font recognition neural network determines and generates classification weights based on which font recognition neural network within the hybrid font recognition neural network is best suited to classify the font in an input text image. By employing a trained hybrid font classification neural network, the font recognition system can improve overall font recognition as well as remove the negative side effects caused by diverse glyph content.
    Type: Application
    Filed: November 14, 2017
    Publication date: May 16, 2019
    Inventors: Yang Liu, Zhaowen Wang, I-Ming Pao, Hailin Jin