Patents by Inventor Zhaowen Wang

Zhaowen Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210264236
    Abstract: Embodiments of the present disclosure are directed towards improved models trained using unsupervised domain adaptation. In particular, a style-content adaptation system provides improved translation during unsupervised domain adaptation by controlling the alignment of conditional distributions of a model during training such that content (e.g., a class) from a target domain is correctly mapped to content (e.g., the same class) in a source domain. The style-content adaptation system improves unsupervised domain adaptation using independent control over content (e.g., related to a class) as well as style (e.g., related to a domain) to control alignment when translating between the source and target domain. This independent control over content and style can also allow for images to be generated using the style-content adaptation system that contain desired content and/or style.
    Type: Application
    Filed: February 26, 2020
    Publication date: August 26, 2021
    Inventors: Ning XU, Bayram Safa CICEK, Hailin JIN, Zhaowen WANG
  • Patent number: 11100400
    Abstract: The present disclosure relates to a fashion recommendation system that employs a task-guided learning framework to jointly train a visually-aware personalized preference ranking network. In addition, the fashion recommendation system employs implicit feedback and generated user-based triplets to learn variances in the user's fashion preferences for items with which the user has not yet interacted. In particular, the fashion recommendation system uses triplets generated from implicit user data to jointly train a Siamese convolutional neural network and a personalized ranking model, which together produce a user preference predictor that determines personalized fashion recommendations for a user.
    Type: Grant
    Filed: February 15, 2018
    Date of Patent: August 24, 2021
    Assignees: Adobe Inc., The Regents of the University of California
    Inventors: Chen Fang, Zhaowen Wang, Wangcheng Kang, Julian McAuley
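The triplet-based ranking objective described in this entry is a Bayesian Personalized Ranking (BPR) loss over implicit-feedback triplets. A minimal NumPy sketch of that loss follows; the vectors below are hand-made stand-ins for the user embedding and the item features that the jointly trained Siamese CNN would produce, not values from the patented system:

```python
import numpy as np

def bpr_triplet_loss(user_emb, pos_item_feat, neg_item_feat):
    """BPR loss for one (user, positive item, negative item) triplet.

    Scores are inner products between the user embedding and item
    features; the loss pushes the score of the item the user interacted
    with above the score of the item the user has not interacted with.
    """
    x_ui = float(np.dot(user_emb, pos_item_feat))   # score of interacted item
    x_uj = float(np.dot(user_emb, neg_item_feat))   # score of non-interacted item
    # -log sigmoid(x_ui - x_uj): small when the positive item already ranks higher
    return float(-np.log(1.0 / (1.0 + np.exp(-(x_ui - x_uj)))))

u = np.array([0.5, 1.0, -0.2])      # hypothetical user embedding
pos = np.array([0.6, 0.8, 0.1])     # hypothetical positive-item features
neg = np.array([-0.3, 0.2, 0.9])    # hypothetical negative-item features
loss = bpr_triplet_loss(u, pos, neg)
```

When the two items tie, the loss equals -log(0.5); as the positive item's margin grows, the loss decays toward zero, which is what drives the personalized preference predictor during training.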
  • Publication number: 20210248432
    Abstract: Systems and methods provide for generating glyph variations using a generative font system. A glyph variant may be generated based on an input vector glyph. A plurality of line segments may be approximated using a differentiable rasterizer with the plurality of line segments representing the contours of the glyph variant. A bitmap of the glyph variant may then be generated based on the line segments. The image loss between the bitmap and a rasterized representation of a vector glyph may be calculated and provided to the generative font system. Based on the image loss, a refined glyph variant may be provided to a user.
    Type: Application
    Filed: February 12, 2020
    Publication date: August 12, 2021
    Inventors: Zhaowen Wang, Zhifei Zhang, Xuan Li, Matthew Fisher, Hailin Jin
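The pipeline in this entry (approximate contours with line segments, rasterize differentiably, compare bitmaps) can be illustrated with a toy soft rasterizer. Everything here is a hypothetical stand-in: pixel intensity is a smooth function of distance to the nearest segment, which is the property that makes the bitmap differentiable with respect to the segment endpoints:

```python
import numpy as np

def point_segment_dist(p, a, b):
    """Distance from point p to the line segment a-b."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / max(float(np.dot(ab, ab)), 1e-12), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def soft_rasterize(segments, size=16, sharpness=4.0):
    """Render line segments into a soft bitmap on a size x size canvas.

    Intensity decays smoothly with distance to the nearest segment, so
    the result is differentiable in the segment endpoints (a toy version
    of the differentiable rasterizer the abstract relies on).
    """
    img = np.zeros((size, size))
    for y in range(size):
        for x in range(size):
            p = np.array([x + 0.5, y + 0.5])
            d = min(point_segment_dist(p, a, b) for a, b in segments)
            img[y, x] = np.exp(-sharpness * d / size)  # soft coverage in (0, 1]
    return img

def image_loss(rendered, target):
    """Per-pixel L2 image loss between two bitmaps."""
    return float(np.mean((rendered - target) ** 2))

# A single diagonal stroke standing in for a glyph-variant contour.
segs = [(np.array([2.0, 2.0]), np.array([13.0, 13.0]))]
bitmap = soft_rasterize(segs)
loss = image_loss(bitmap, np.zeros_like(bitmap))
```

In the patented flow, this image loss (against a rasterized target glyph) is fed back to the generative font system to refine the variant.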
  • Publication number: 20210241032
    Abstract: In implementations of recognizing text in images, text recognition systems are trained using noisy images that have nuisance factors applied, and corresponding clean images (e.g., without nuisance factors). Clean images serve as supervision at both feature and pixel levels, so that text recognition systems are trained to be feature invariant (e.g., by requiring features extracted from a noisy image to match features extracted from a clean image), and feature complete (e.g., by requiring that features extracted from a noisy image be sufficient to generate a clean image). Accordingly, text recognition systems generalize to text not included in training images, and are robust to nuisance factors. Furthermore, since clean images are provided as supervision at feature and pixel levels, training requires fewer training images than text recognition systems that are not trained with a supervisory clean image, thus saving time and resources.
    Type: Application
    Filed: April 26, 2021
    Publication date: August 5, 2021
    Applicant: Adobe Inc.
    Inventors: Zhaowen Wang, Hailin Jin, Yang Liu
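The two supervision signals this abstract names (feature invariance and feature completeness) can be written as a two-term loss. In the sketch below the encoder and generator are untrained random linear maps, purely to show the shape of the objective; the patented system uses deep networks:

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(8, 64)) * 0.1   # toy stand-in for the feature encoder
W_dec = rng.normal(size=(64, 8)) * 0.1   # toy stand-in for the clean-image generator

def features(img):
    """Encode an 8x8 image into an 8-dim feature vector."""
    return W_enc @ img.ravel()

def generate(feat):
    """Generate an 8x8 image back from features."""
    return (W_dec @ feat).reshape(8, 8)

def clean_supervised_loss(noisy, clean, w_feat=1.0, w_pixel=1.0):
    """Clean-image supervision at two levels:
    - feature invariance: noisy and clean images should map to the
      same features;
    - feature completeness: features of the noisy image should suffice
      to regenerate the clean image (pixel-level loss).
    """
    f_noisy, f_clean = features(noisy), features(clean)
    feat_loss = float(np.mean((f_noisy - f_clean) ** 2))
    pixel_loss = float(np.mean((generate(f_noisy) - clean) ** 2))
    return w_feat * feat_loss + w_pixel * pixel_loss

clean = rng.random((8, 8))
noisy = clean + 0.1 * rng.normal(size=(8, 8))   # nuisance factors modeled as noise
loss = clean_supervised_loss(noisy, clean)
```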
  • Patent number: 11055828
    Abstract: Techniques of inpainting video content include training a neural network to perform an inpainting operation on a video using only content from that video. For example, upon receiving video content including a sequence of initial frames, a computer generates a sequence of inputs corresponding to at least some of the sequence of initial frames, each input including, for example, a uniform noise map. The computer then generates a convolutional neural network (CNN) using the sequence of inputs as the initial layer. The parameters of the CNN are adjusted according to a cost function, which has components including a flow generation loss component and a consistency loss component. The CNN then outputs, on a final layer, estimated image values in a sequence of final frames.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: July 6, 2021
    Assignee: ADOBE INC.
    Inventors: Mai Long, Zhaowen Wang, Ning Xu, John Philip Collomosse, Haotian Zhang, Hailin Jin
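The cost function's two named components, a flow generation loss and a consistency loss, can be sketched with integer flows and a known/hole pixel mask. The CNN, the uniform-noise input layer, and the learned flows are all replaced by toys here:

```python
import numpy as np

def warp(frame, flow_dx, flow_dy):
    """Warp a frame by an integer flow (toy stand-in for bilinear warping)."""
    return np.roll(np.roll(frame, flow_dy, axis=0), flow_dx, axis=1)

def inpainting_cost(frames, flows, mask, lam=1.0):
    """Cost with the two components named in the abstract:
    - flow generation loss: warping frame t by the flow should match
      frame t+1 on known (unmasked) pixels;
    - consistency loss: inpainted (masked) pixels should also be
      consistent across frames under the same flow.
    """
    flow_loss, consist_loss = 0.0, 0.0
    for t in range(len(frames) - 1):
        dx, dy = flows[t]
        diff2 = (warp(frames[t], dx, dy) - frames[t + 1]) ** 2
        flow_loss += float(np.mean(diff2[mask]))      # known region
        consist_loss += float(np.mean(diff2[~mask]))  # hole region
    return flow_loss + lam * consist_loss

rng = np.random.default_rng(1)
f0 = rng.random((8, 8))
f1 = warp(f0, 1, 0)                 # frame 1 is frame 0 shifted one pixel right
mask = np.ones((8, 8), dtype=bool)
mask[3:5, 3:5] = False              # a small hole to inpaint
cost = inpainting_cost([f0, f1], [(1, 0)], mask)
```

With the correct flow the cost is zero; a wrong flow incurs loss in both terms, which is the gradient signal that trains the network.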
  • Publication number: 20210192594
    Abstract: The present disclosure relates to a personalized fashion generation system that synthesizes user-customized images using deep learning techniques based on visually-aware user preferences. In particular, the personalized fashion generation system employs an image generative adversarial neural network and a personalized preference network to synthesize new fashion items that are individually customized for a user. Additionally, the personalized fashion generation system can modify existing fashion items to tailor the fashion items to a user's tastes and preferences.
    Type: Application
    Filed: March 4, 2021
    Publication date: June 24, 2021
    Inventors: Chen Fang, Zhaowen Wang, Wangcheng Kang, Julian McAuley
  • Patent number: 11036915
    Abstract: Embodiments of the present invention are directed at providing a font similarity system. In one embodiment, a new font is detected on a computing device. In response to the detection of the new font, a pre-computed font list is checked to determine whether the new font is included therein. The pre-computed font list includes feature representations, generated independently of the computing device, for corresponding fonts. In response to a determination that the new font is absent from the pre-computed font list, a feature representation for the new font is generated. The generated feature representation is capable of being utilized for a similarity analysis of the new font. The feature representation is then stored in a supplemental font list to enable identification of one or more fonts installed on the computing device that are similar to the new font. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: March 10, 2016
    Date of Patent: June 15, 2021
    Assignee: Adobe Inc.
    Inventors: I-Ming Pao, Zhaowen Wang, Hailin Jin, Alan Lee Erickson
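The pre-computed/supplemental list flow in this entry can be sketched as a small index: features are computed on-device only for fonts absent from the pre-computed list, then both lists are searched for similar fonts. The font names, feature vectors, and cosine metric below are all illustrative assumptions, not the patent's actual representation:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class FontSimilarityIndex:
    """Toy version of the pre-computed / supplemental font lists."""

    def __init__(self, precomputed):
        self.precomputed = dict(precomputed)   # font name -> feature vector
        self.supplemental = {}                 # features computed on-device

    def register(self, name, extract_features):
        # Compute a feature representation only when the new font is
        # absent from the pre-computed list, as the abstract describes.
        if name not in self.precomputed and name not in self.supplemental:
            self.supplemental[name] = extract_features(name)

    def most_similar(self, name, k=1):
        fonts = {**self.precomputed, **self.supplemental}
        query = fonts[name]
        ranked = sorted((f for f in fonts if f != name),
                        key=lambda f: cosine_sim(query, fonts[f]),
                        reverse=True)
        return ranked[:k]

index = FontSimilarityIndex({
    "Serif A": np.array([1.0, 0.0]),
    "Slab B": np.array([0.7, 0.7]),
    "Script C": np.array([0.0, 1.0]),
})
index.register("New Serif", lambda name: np.array([0.95, 0.05]))
```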
  • Patent number: 11003831
    Abstract: The present disclosure relates to an asymmetric font pairing system that efficiently pairs digital fonts. For example, in one or more embodiments, the asymmetric font pairing system automatically identifies and provides users with visually aesthetic font pairs for use in different sections of an electronic document. In particular, the asymmetric font pairing system learns visually aesthetic font pairs using joint symmetric and asymmetric compatibility metric learning. In addition, the asymmetric font pairing system provides compact compatibility spaces (e.g., a symmetric compatibility space and an asymmetric compatibility space) to computing devices (e.g., client devices and server devices), which enable the computing devices to quickly and efficiently provide font pairs to users.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: May 11, 2021
    Assignee: ADOBE INC.
    Inventors: Zhaowen Wang, Hailin Jin, Aaron Phillip Hertzmann, Shuhui Jiang
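The key property of this entry, compatibility that is asymmetric between header and body roles, can be shown with a bilinear score under a non-symmetric metric. The metric matrix and font features below are random stand-ins, not learned values from the patented compatibility spaces:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4)) * 0.5   # stand-in for a learned asymmetric metric

def pair_score(header_feat, body_feat):
    """Bilinear compatibility score. Because A is not symmetric,
    score(header, body) need not equal score(body, header): a font that
    works as a header over a given body font may not work as the body."""
    return float(header_feat @ A @ body_feat)

h = rng.normal(size=4)   # hypothetical header-font features
b = rng.normal(size=4)   # hypothetical body-font features
forward = pair_score(h, b)
backward = pair_score(b, h)   # generally differs: pairing is order-dependent
```

Symmetrizing the metric recovers an order-independent score, which is why the system keeps a symmetric and an asymmetric compatibility space separately.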
  • Patent number: 10997463
    Abstract: In implementations of recognizing text in images, text recognition systems are trained using noisy images that have nuisance factors applied, and corresponding clean images (e.g., without nuisance factors). Clean images serve as supervision at both feature and pixel levels, so that text recognition systems are trained to be feature invariant (e.g., by requiring features extracted from a noisy image to match features extracted from a clean image), and feature complete (e.g., by requiring that features extracted from a noisy image be sufficient to generate a clean image). Accordingly, text recognition systems generalize to text not included in training images, and are robust to nuisance factors. Furthermore, since clean images are provided as supervision at feature and pixel levels, training requires fewer training images than text recognition systems that are not trained with a supervisory clean image, thus saving time and resources.
    Type: Grant
    Filed: November 8, 2018
    Date of Patent: May 4, 2021
    Assignee: Adobe Inc.
    Inventors: Zhaowen Wang, Hailin Jin, Yang Liu
  • Publication number: 20210118207
    Abstract: Automatic font synthesis for modifying a local font to have an appearance that is visually similar to a source font is described. A font modification system receives an electronic document including the source font together with an indication of a font descriptor for the source font. The font descriptor includes information describing various font attributes for the source font, which define a visual appearance of the source font. Using the source font descriptor, the font modification system identifies a local font that is visually similar in appearance to the source font by comparing local font descriptors to the source font descriptor. A visually similar font is then synthesized by modifying glyph outlines of the local font to achieve the visual appearance defined by the source font descriptor. The synthesized font is then used to replace the source font and output in the electronic document at the computing device.
    Type: Application
    Filed: October 17, 2019
    Publication date: April 22, 2021
    Applicant: Adobe Inc.
    Inventors: Nirmal Kumawat, Zhaowen Wang
  • Patent number: 10984295
    Abstract: Font recognition and similarity determination techniques and systems are described. In a first example, localization techniques are described to train a model using machine learning (e.g., a convolutional neural network) using training images. The model is then used to localize text in a subsequently received image, and may do so automatically and without user intervention, e.g., without specifying any of the edges of a bounding box. In a second example, a deep neural network is directly learned as an embedding function of a model that is usable to determine font similarity. In a third example, techniques are described that leverage attributes described in metadata associated with fonts as part of font recognition and similarity determinations.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: April 20, 2021
    Assignee: Adobe Inc.
    Inventors: Zhaowen Wang, Luoqi Liu, Hailin Jin
  • Publication number: 20210103783
    Abstract: The present disclosure relates to a tag-based font recognition system that utilizes a multi-learning framework to develop and improve tag-based font recognition using deep learning neural networks. In particular, the tag-based font recognition system jointly trains a font tag recognition neural network with an implicit font classification attention model to generate font tag probability vectors that are enhanced by implicit font classification information. Indeed, the font recognition system weights the hidden layers of the font tag recognition neural network with implicit font information to improve the accuracy and predictability of the font tag recognition neural network, which results in improved retrieval of fonts in response to a font tag query. Accordingly, using the enhanced tag probability vectors, the tag-based font recognition system can accurately identify and recommend one or more fonts in response to a font tag query.
    Type: Application
    Filed: November 23, 2020
    Publication date: April 8, 2021
    Inventors: Zhaowen Wang, Tianlang Chen, Ning Xu, Hailin Jin
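The attention mechanism this abstract describes, weighting the tag network's hidden layer with implicit font-classification information, can be sketched in a few lines. All weight matrices here are random stand-ins for the trained networks, and the dimensions are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
W_hidden = rng.normal(size=(6, 10)) * 0.3   # toy tag-recognition hidden layer
W_attn = rng.normal(size=(6, 3)) * 0.3      # maps font-class probs to attention
W_tags = rng.normal(size=(5, 6)) * 0.3      # hidden activations -> tag logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def tag_probabilities(glyph_feat, font_class_probs):
    """Tag prediction with implicit font-classification attention: the
    hidden activations are re-weighted by attention computed from the
    font classifier's output before producing tag probabilities."""
    h = np.tanh(W_hidden @ glyph_feat)
    attn = softmax(W_attn @ font_class_probs)   # implicit font information
    return softmax(W_tags @ (h * attn))         # weighted hidden layer -> tags

glyph = rng.normal(size=10)                     # hypothetical glyph features
font_probs = np.array([0.8, 0.15, 0.05])        # implicit font classifier output
tag_probs = tag_probabilities(glyph, font_probs)
```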
  • Patent number: 10970765
    Abstract: The present disclosure relates to a personalized fashion generation system that synthesizes user-customized images using deep learning techniques based on visually-aware user preferences. In particular, the personalized fashion generation system employs an image generative adversarial neural network and a personalized preference network to synthesize new fashion items that are individually customized for a user. Additionally, the personalized fashion generation system can modify existing fashion items to tailor the fashion items to a user's tastes and preferences.
    Type: Grant
    Filed: February 15, 2018
    Date of Patent: April 6, 2021
    Assignees: ADOBE INC., THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Chen Fang, Zhaowen Wang, Wangcheng Kang, Julian McAuley
  • Patent number: 10922852
    Abstract: Oil painting simulation techniques are disclosed which simulate painting brush strokes using a trained neural network. In some examples, a method may include inferring a new height map of existing paint on a canvas after a new painting brush stroke is applied based on a bristle trajectory map that represents the new painting brush stroke and a height map of existing paint on the canvas prior to the application of the new painting brush stroke, and generating a rendering of the new painting brush stroke based on the new height map of existing paint on the canvas after the new painting brush stroke is applied to the canvas and a color map.
    Type: Grant
    Filed: August 13, 2019
    Date of Patent: February 16, 2021
    Assignee: Adobe Inc.
    Inventors: Zhili Chen, Zhaowen Wang, Rundong Wu, Jimei Yang
  • Patent number: 10885608
    Abstract: In implementations of super-resolution with reference images, a super-resolution image is generated based on reference images. Reference images are not constrained to have same or similar content as a low-resolution image being super-resolved. Texture features indicating high-frequency content are extracted into texture feature maps, and patches of texture feature maps of reference images are determined based on texture feature similarity. A content feature map indicating low-frequency content of an image is adaptively fused with a swapped texture feature map including patches of reference images with a neural network based on similarity of texture features. A user interface allows a user to select regions of multiple reference images to use for super-resolution. Hence, a super-resolution image can be generated with rich texture details incorporated from multiple reference images, even in the absence of reference images having similar content to an image being upscaled.
    Type: Grant
    Filed: June 6, 2018
    Date of Patent: January 5, 2021
    Assignee: Adobe Inc.
    Inventors: Zhifei Zhang, Zhe Lin, Zhaowen Wang
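The "swapped texture feature map" this entry builds on replaces each patch of the content feature map with its best-matching reference texture patch. A toy version over small feature maps, using normalized inner products for patch similarity (the real system matches in a learned feature space and fuses the result with a neural network):

```python
import numpy as np

def best_patch(content_patch, ref_patches):
    """Index of the reference patch most similar to a content patch
    (cosine similarity over flattened patches)."""
    q = content_patch.ravel()
    q = q / (np.linalg.norm(q) + 1e-12)
    scores = [float(np.dot(q, p.ravel() / (np.linalg.norm(p.ravel()) + 1e-12)))
              for p in ref_patches]
    return int(np.argmax(scores))

def swap_texture(content_feat, ref_feat, patch=2):
    """Replace each non-overlapping content patch with its best-matching
    patch from the reference map (assumes dimensions divisible by patch)."""
    H, W = content_feat.shape
    ref_patches = [ref_feat[i:i + patch, j:j + patch]
                   for i in range(0, H - patch + 1)
                   for j in range(0, W - patch + 1)]
    out = np.zeros_like(content_feat)
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            k = best_patch(content_feat[i:i + patch, j:j + patch], ref_patches)
            out[i:i + patch, j:j + patch] = ref_patches[k]
    return out

rng = np.random.default_rng(5)
content = rng.normal(size=(4, 4))   # toy low-frequency content features
ref = rng.normal(size=(4, 4))       # toy reference texture features
swapped = swap_texture(content, ref, patch=2)
```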
  • Patent number: 10878298
    Abstract: The present disclosure relates to a tag-based font recognition system that utilizes a multi-learning framework to develop and improve tag-based font recognition using deep learning neural networks. In particular, the tag-based font recognition system jointly trains a font tag recognition neural network with an implicit font classification attention model to generate font tag probability vectors that are enhanced by implicit font classification information. Indeed, the font recognition system weights the hidden layers of the font tag recognition neural network with implicit font information to improve the accuracy and predictability of the font tag recognition neural network, which results in improved retrieval of fonts in response to a font tag query. Accordingly, using the enhanced tag probability vectors, the tag-based font recognition system can accurately identify and recommend one or more fonts in response to a font tag query.
    Type: Grant
    Filed: March 6, 2019
    Date of Patent: December 29, 2020
    Assignee: ADOBE INC.
    Inventors: Zhaowen Wang, Tianlang Chen, Ning Xu, Hailin Jin
  • Publication number: 20200382612
    Abstract: Methods and systems are provided for generating an interpretable user modeling system. The interpretable user modeling system can use an intent neural network to implement one or more tasks. The intent neural network can bridge a semantic gap between log data and human language by leveraging tutorial data to understand user logs in a semantically meaningful way. A memory unit of the intent neural network can capture information from the tutorial data. Such a memory unit can be queried to identify human readable sentences related to actions received by the intent neural network. The human readable sentences can be used to interpret the user log data in a semantically meaningful way.
    Type: Application
    Filed: May 29, 2019
    Publication date: December 3, 2020
    Inventors: Handong Zhao, Zhiqiang Tao, Zhaowen Wang, Sheng Li, Chen Fang
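The memory-query step described here, mapping a low-level log action to a human-readable tutorial sentence, amounts to attention over stored sentence keys. The embeddings and sentences below are invented examples, not data from the patented system:

```python
import numpy as np

def query_memory(action_emb, memory_keys, memory_sentences):
    """Return the tutorial sentence whose memory key is most related to
    the action embedding (dot-product scoring with hard selection)."""
    scores = memory_keys @ action_emb      # one relatedness score per sentence
    return memory_sentences[int(np.argmax(scores))]

# Hypothetical 2-D embedding space: axis 0 ~ cropping, axis 1 ~ filtering.
memory_keys = np.array([[1.0, 0.0],
                        [0.0, 1.0]])
memory_sentences = ["Crop the image to the selection.",
                    "Apply a blur filter to the layer."]
sentence = query_memory(np.array([0.9, 0.1]), memory_keys, memory_sentences)
```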
  • Publication number: 20200372622
    Abstract: The present disclosure relates to training and utilizing an image exposure transformation network to generate a long-exposure image from a single short-exposure image (e.g., still image). In various embodiments, the image exposure transformation network is trained using adversarial learning, long-exposure ground truth images, and a multi-term loss function. In some embodiments, the image exposure transformation network includes an optical flow prediction network and/or an appearance guided attention network. Trained embodiments of the image exposure transformation network generate realistic long-exposure images from single short-exposure images without additional information.
    Type: Application
    Filed: August 4, 2020
    Publication date: November 26, 2020
    Inventors: Yilin Wang, Zhe Lin, Zhaowen Wang, Xin Lu, Xiaohui Shen, Chih-Yao Hsieh
  • Patent number: 10839493
    Abstract: In implementations of transferring image style to content of a digital image, an image editing system includes an encoder that extracts features from a content image and features from a style image. A whitening and color transform generates coarse features from the content and style features extracted by the encoder for one pass of encoding and decoding. Hence, the processing delay and memory requirements are low. A feature transfer module iteratively transfers style features to the coarse feature map and generates a fine feature map. The image editing system fuses the fine features with the coarse features, and a decoder generates an output image with content of the content image in a style of the style image from the fused features. Accordingly, the image editing system efficiently transfers an image style to image content in real-time, without undesirable artifacts in the output image.
    Type: Grant
    Filed: January 11, 2019
    Date of Patent: November 17, 2020
    Assignee: Adobe Inc.
    Inventors: Chen Fang, Zhe Lin, Zhaowen Wang, Yulun Zhang, Yilin Wang, Jimei Yang
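The whitening and color transform at the core of this entry has a compact closed form over flattened feature maps. This sketch applies it to random feature matrices; in the patented pipeline the features come from the encoder, and the iterative feature-transfer and fusion steps that follow are not modeled here:

```python
import numpy as np

def whiten_color_transform(content_feat, style_feat, eps=1e-5):
    """Whitening and coloring transform over (C, N) feature maps
    flattened across spatial positions.

    Content features are decorrelated (whitened), then re-colored with
    the style features' covariance and mean, so the output keeps the
    content layout but carries the style's feature statistics.
    """
    fc = content_feat - content_feat.mean(axis=1, keepdims=True)
    mu_s = style_feat.mean(axis=1, keepdims=True)
    fs = style_feat - mu_s

    # Whitening: remove the content covariance via eigendecomposition.
    wc, Ec = np.linalg.eigh(fc @ fc.T / (fc.shape[1] - 1) + eps * np.eye(fc.shape[0]))
    f_white = Ec @ np.diag(wc ** -0.5) @ Ec.T @ fc

    # Coloring: impose the style covariance and restore the style mean.
    ws, Es = np.linalg.eigh(fs @ fs.T / (fs.shape[1] - 1) + eps * np.eye(fs.shape[0]))
    return Es @ np.diag(ws ** 0.5) @ Es.T @ f_white + mu_s

rng = np.random.default_rng(2)
content = rng.normal(size=(4, 256))             # toy content features
style = 2.0 * rng.normal(size=(4, 256)) + 1.0   # toy style features
out = whiten_color_transform(content, style)
```

After the transform, the output's per-channel mean and covariance match the style features', which is the coarse one-pass alignment the abstract's fine feature-transfer module then refines.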
  • Publication number: 20200357099
    Abstract: Techniques of inpainting video content include training a neural network to perform an inpainting operation on a video using only content from that video. For example, upon receiving video content including a sequence of initial frames, a computer generates a sequence of inputs corresponding to at least some of the sequence of initial frames, each input including, for example, a uniform noise map. The computer then generates a convolutional neural network (CNN) using the sequence of inputs as the initial layer. The parameters of the CNN are adjusted according to a cost function, which has components including a flow generation loss component and a consistency loss component. The CNN then outputs, on a final layer, estimated image values in a sequence of final frames.
    Type: Application
    Filed: May 9, 2019
    Publication date: November 12, 2020
    Inventors: Mai Long, Zhaowen Wang, Ning Xu, John Philip Collomosse, Haotian Zhang, Hailin Jin