Patents by Inventor Zhaowen Wang
Zhaowen Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11823059
Abstract: The present disclosure relates to a fashion recommendation system that employs a task-guided learning framework to jointly train a visually-aware personalized preference ranking network. In addition, the fashion recommendation system employs implicit feedback and generated user-based triplets to learn variances in the user's fashion preferences for items with which the user has not yet interacted. In particular, the fashion recommendation system uses triplets generated from implicit user data to jointly train a Siamese convolutional neural network and a personalized ranking model, which together produce a user preference predictor that determines personalized fashion recommendations for a user.
Type: Grant
Filed: July 15, 2021
Date of Patent: November 21, 2023
Assignees: Adobe Inc., The Regents of the University of California
Inventors: Chen Fang, Zhaowen Wang, Wangcheng Kang, Julian McAuley
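As a rough illustration only (not the patented system), the triplet-based personalized ranking objective the abstract describes can be sketched as follows: a user should score an item they interacted with above one they have not. The vectors and values here are invented for the example.

```python
import numpy as np

def bpr_triplet_loss(user_vec, pos_item_vec, neg_item_vec):
    """Bayesian-personalized-ranking-style loss for one (user, positive, negative) triplet."""
    x_pos = float(np.dot(user_vec, pos_item_vec))   # predicted preference for interacted item
    x_neg = float(np.dot(user_vec, neg_item_vec))   # predicted preference for other item
    # Loss shrinks as the positive item is ranked further above the negative one.
    return -np.log(1.0 / (1.0 + np.exp(-(x_pos - x_neg))))

u = np.array([0.5, 1.0])     # hypothetical user embedding
pos = np.array([0.6, 0.9])   # item the user interacted with
neg = np.array([-0.4, 0.1])  # item the user has not interacted with
loss = bpr_triplet_loss(u, pos, neg)
```

In the patented system the item embeddings would come from a trained Siamese convolutional network rather than being fixed vectors.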
-
Publication number: 20230359682
Abstract: Digital content layout encoding techniques for search are described. In these techniques, a layout representation is generated (automatically, using machine learning and without user intervention) that describes a layout of elements included within the digital content. In an implementation, the layout representation includes a description of both spatial and structural aspects of the elements in relation to each other. To do so, a two-pathway pipeline is employed that models layout from both spatial and structural aspects using a spatial pathway and a structural pathway, respectively. In one example, this is also performed through use of multi-level encoding and fusion to generate a layout representation.
Type: Application
Filed: May 3, 2022
Publication date: November 9, 2023
Applicant: Adobe Inc.
Inventors: Zhaowen Wang, Yue Bai, John Philip Collomosse
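As a toy illustration of the two-pathway-plus-fusion idea only (the patented pipeline uses learned encoders; the hand-written summaries and field names below are invented), one pathway can summarize spatial layout from element bounding boxes while another summarizes structure from tree depths, with the two outputs fused into one representation:

```python
def spatial_encoding(boxes):
    # boxes: list of (x, y, w, h); summarize average center and element area.
    n = len(boxes)
    cx = sum(x + w / 2 for x, y, w, h in boxes) / n
    cy = sum(y + h / 2 for x, y, w, h in boxes) / n
    area = sum(w * h for x, y, w, h in boxes) / n
    return [cx, cy, area]

def structural_encoding(depths):
    # depths: tree depth of each element in the document structure.
    return [sum(depths) / len(depths), max(depths)]

def layout_representation(boxes, depths):
    # Fusion step: concatenate the two pathway outputs.
    return spatial_encoding(boxes) + structural_encoding(depths)

rep = layout_representation([(0, 0, 10, 5), (10, 0, 10, 5)], [1, 2])
```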
-
Patent number: 11810374
Abstract: In implementations of recognizing text in images, text recognition systems are trained using noisy images that have nuisance factors applied, and corresponding clean images (e.g., without nuisance factors). Clean images serve as supervision at both feature and pixel levels, so that text recognition systems are trained to be feature invariant (e.g., by requiring features extracted from a noisy image to match features extracted from a clean image), and feature complete (e.g., by requiring that features extracted from a noisy image be sufficient to generate a clean image). Accordingly, text recognition systems generalize to text not included in training images, and are robust to nuisance factors. Furthermore, since clean images are provided as supervision at feature and pixel levels, training requires fewer training images than text recognition systems that are not trained with a supervisory clean image, thus saving time and resources.
Type: Grant
Filed: April 26, 2021
Date of Patent: November 7, 2023
Assignee: Adobe Inc.
Inventors: Zhaowen Wang, Hailin Jin, Yang Liu
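The two supervision signals the abstract names can be sketched numerically (this is a hedged toy, with made-up stand-in "features"; the real system uses a CNN encoder and a generator): a feature-level loss pulls noisy-image features toward clean-image features (feature invariance), and a pixel-level loss asks that an image reconstructed from noisy features match the clean image (feature completeness).

```python
import numpy as np

def feature_loss(feat_noisy, feat_clean):
    # Feature invariance: noisy features should match clean features.
    return float(np.mean((feat_noisy - feat_clean) ** 2))

def pixel_loss(reconstructed, clean):
    # Feature completeness: image generated from noisy features should match clean image.
    return float(np.mean(np.abs(reconstructed - clean)))

clean = np.ones((4, 4))
noisy = clean + 0.1                                  # nuisance factor applied
f_clean, f_noisy = clean.mean(axis=1), noisy.mean(axis=1)  # stand-in "features"
total = feature_loss(f_noisy, f_clean) + pixel_loss(noisy, clean)
```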
-
Publication number: 20230326104
Abstract: Automatic font synthesis for modifying a local font to have an appearance that is visually similar to a source font is described. A font modification system receives an electronic document including the source font together with an indication of a font descriptor for the source font. The font descriptor includes information describing various font attributes for the source font, which define a visual appearance of the source font. Using the source font descriptor, the font modification system identifies a local font that is visually similar in appearance to the source font by comparing local font descriptors to the source font descriptor. A visually similar font is then synthesized by modifying glyph outlines of the local font to achieve the visual appearance defined by the source font descriptor. The synthesized font is then used to replace the source font and is output in the electronic document at the computing device.
Type: Application
Filed: June 13, 2023
Publication date: October 12, 2023
Applicant: Adobe Inc.
Inventors: Nirmal Kumawat, Zhaowen Wang
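The descriptor-matching step can be illustrated with a hypothetical sketch (the attribute dimensions, font names, and values below are all invented): each font is described by a vector of attribute values, and the local font whose descriptor is closest to the source font's descriptor is selected for synthesis.

```python
def nearest_font(source_desc, local_fonts):
    """Return the name of the local font whose descriptor is closest to the source's."""
    def dist(desc):
        # Squared Euclidean distance between descriptors.
        return sum((a - b) ** 2 for a, b in zip(desc, source_desc))
    return min(local_fonts, key=lambda name: dist(local_fonts[name]))

# Assumed descriptor dimensions: [weight, width, slant]
source = [0.7, 0.5, 0.0]
local = {
    "LocalSans-Bold":   [0.8, 0.5, 0.0],
    "LocalSerif":       [0.4, 0.6, 0.0],
    "LocalSans-Italic": [0.5, 0.5, 0.3],
}
match = nearest_font(source, local)
```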
-
Patent number: 11776180
Abstract: Embodiments of the present disclosure are directed towards improved models trained using unsupervised domain adaptation. In particular, a style-content adaptation system provides improved translation during unsupervised domain adaptation by controlling the alignment of conditional distributions of a model during training such that content (e.g., a class) from a target domain is correctly mapped to content (e.g., the same class) in a source domain. The style-content adaptation system improves unsupervised domain adaptation using independent control over content (e.g., related to a class) as well as style (e.g., related to a domain) to control alignment when translating between the source and target domain. This independent control over content and style can also allow for images to be generated using the style-content adaptation system that contain desired content and/or style.
Type: Grant
Filed: February 26, 2020
Date of Patent: October 3, 2023
Assignee: Adobe Inc.
Inventors: Ning Xu, Bayram Safa Cicek, Hailin Jin, Zhaowen Wang
-
Patent number: 11776168
Abstract: This disclosure describes one or more implementations of systems, non-transitory computer-readable media, and methods that extract a texture from embedded text within a digital image utilizing kerning-adjusted glyphs. For example, the disclosed systems utilize text recognition and text segmentation to identify and segment glyphs from embedded text depicted in a digital image. Subsequently, in some implementations, the disclosed systems determine optimistic kerning values between consecutive glyphs and utilize the kerning values to reduce gaps between the consecutive glyphs. Furthermore, in one or more implementations, the disclosed systems generate a synthesized texture utilizing the kerning-value-adjusted glyphs by utilizing image inpainting on the textures corresponding to the kerning-value-adjusted glyphs. Moreover, in certain instances, the disclosed systems apply a target texture to a target digital text based on the generated synthesized texture.
Type: Grant
Filed: March 31, 2021
Date of Patent: October 3, 2023
Assignee: Adobe Inc.
Inventors: Nirmal Kumawat, Zhaowen Wang, Zhifei Zhang
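The gap-reduction step can be sketched with invented coordinates (this is a one-dimensional toy, not the patented pipeline): given the horizontal extents of consecutive glyphs, each glyph is shifted left so that the gap to its predecessor shrinks to a chosen kerning value before texture synthesis.

```python
def apply_kerning(extents, kerning=1):
    """extents: list of (left, right) per glyph; returns shifted extents
    so consecutive glyphs are separated by exactly `kerning` units."""
    out = [extents[0]]
    for left, right in extents[1:]:
        prev_right = out[-1][1]
        shift = left - prev_right - kerning  # excess gap beyond the target kerning
        out.append((left - shift, right - shift))
    return out

adjusted = apply_kerning([(0, 10), (15, 25), (30, 38)], kerning=1)
```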
-
Publication number: 20230246649
Abstract: Phase interpolators are provided, the phase interpolators including: a first phase interpolator having a first output that outputs a first interpolated clock signal based on quadrature clock signals and a first phase interpolator control signal; a second phase interpolator having a second output that outputs a second interpolated clock signal based on the quadrature clock signals and a second phase interpolator control signal that is shifted from the first phase interpolator control signal by half of an integral nonlinearity (INL) period of the first phase interpolator; and a phase combiner that outputs a third interpolated clock signal based on the first interpolated clock signal and the second interpolated clock signal. In some of these embodiments, the phase interpolators further comprise a first amplitude limiter that receives the first interpolated clock signal and outputs a first amplitude-limited interpolated clock signal that is provided to the phase combiner.
Type: Application
Filed: January 30, 2023
Publication date: August 3, 2023
Inventors: Zhaowen Wang, Peter Kinget
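A numeric sketch (an idealized model, not the circuit) helps show the nonlinearity being addressed: mixing two quadrature clocks, cos(t) and sin(t), with weights (1-a, a) yields a sinusoid whose phase advances nonlinearly in the control a; this deviation from a straight line is the integral nonlinearity (INL), and averaging two interpolators with offset controls is the combining idea the abstract describes.

```python
import math

def interpolated_phase(a):
    """Phase (radians) of (1-a)*cos(t) + a*sin(t) for control weight a."""
    return math.atan2(a, 1.0 - a)

def combined_phase(a, offset):
    # Phase combiner idea: average two interpolators whose controls are
    # offset from each other, which cancels part of the nonlinearity.
    return 0.5 * (interpolated_phase(a) + interpolated_phase(a + offset))

mid = interpolated_phase(0.5)   # halfway control lands exactly at 45 degrees
```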
-
Patent number: 11711581
Abstract: A multimodal recommendation identification system analyzes data describing a sequence of past content item interactions to generate a recommendation for a content item for a user. An indication of the recommended content item is provided to a website hosting system or recommendation system so that the recommended content item is displayed or otherwise presented to the user. The multimodal recommendation identification system identifies a content item to recommend to the user by generating an encoding that encodes identifiers of the sequence of content items the user has interacted with and generating encodings that encode multimodal information for content items in the sequence of content items the user has interacted with. An aggregated information encoding is generated for the user based on these encodings: the system analyzes the content item sequence encoding and the interaction between the content item sequence encoding and the multiple modality encodings to generate the aggregated information encoding.
Type: Grant
Filed: March 12, 2021
Date of Patent: July 25, 2023
Assignee: Adobe Inc.
Inventors: Handong Zhao, Zhankui He, Zhe Lin, Zhaowen Wang, Ajinkya Gorakhnath Kale
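One hedged way to picture the aggregation step (the vectors and the attention scheme below are invented for illustration): combine per-modality encodings, such as image and text, using softmax weights derived from their interaction with the item-ID sequence encoding.

```python
import numpy as np

def aggregate(seq_enc, modality_encs):
    # Score each modality by its interaction with the sequence encoding...
    scores = np.array([float(np.dot(seq_enc, m)) for m in modality_encs])
    # ...then form softmax weights and a weighted combination.
    weights = np.exp(scores) / np.exp(scores).sum()
    return sum(w * m for w, m in zip(weights, modality_encs))

seq = np.array([1.0, 0.0])                            # hypothetical sequence encoding
mods = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # e.g. image and text encodings
agg = aggregate(seq, mods)
```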
-
Patent number: 11710262
Abstract: Automatic font synthesis for modifying a local font to have an appearance that is visually similar to a source font is described. A font modification system receives an electronic document including the source font together with an indication of a font descriptor for the source font. The font descriptor includes information describing various font attributes for the source font, which define a visual appearance of the source font. Using the source font descriptor, the font modification system identifies a local font that is visually similar in appearance to the source font by comparing local font descriptors to the source font descriptor. A visually similar font is then synthesized by modifying glyph outlines of the local font to achieve the visual appearance defined by the source font descriptor. The synthesized font is then used to replace the source font and is output in the electronic document at the computing device.
Type: Grant
Filed: February 18, 2022
Date of Patent: July 25, 2023
Assignee: Adobe Inc.
Inventors: Nirmal Kumawat, Zhaowen Wang
-
Patent number: 11694248
Abstract: The present disclosure relates to a personalized fashion generation system that synthesizes user-customized images using deep learning techniques based on visually-aware user preferences. In particular, the personalized fashion generation system employs an image generative adversarial neural network and a personalized preference network to synthesize new fashion items that are individually customized for a user. Additionally, the personalized fashion generation system can modify existing fashion items to tailor the fashion items to a user's tastes and preferences.
Type: Grant
Filed: March 4, 2021
Date of Patent: July 4, 2023
Assignees: Adobe Inc., The Regents of the University of California
Inventors: Chen Fang, Zhaowen Wang, Wangcheng Kang, Julian McAuley
-
Patent number: 11688190
Abstract: Systems and methods for text segmentation are described. Embodiments of the inventive concept are configured to receive an image including a foreground text portion and a background portion, classify each pixel of the image as foreground text or background using a neural network that refines a segmentation prediction using a key vector representing features of the foreground text portion, wherein the key vector is based on the segmentation prediction, and identify the foreground text portion based on the classification.
Type: Grant
Filed: November 5, 2020
Date of Patent: June 27, 2023
Assignee: ADOBE INC.
Inventors: Zhifei Zhang, Xingqian Xu, Zhaowen Wang, Brian Price
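A toy version of the refinement loop (the "features" here are just scalar pixel intensities, a deliberate simplification of the network's learned features): make an initial foreground/background prediction, form a key from pixels currently predicted as text, then re-score every pixel by its closeness to that key.

```python
import numpy as np

def refine(features, init_threshold=0.5, rounds=2):
    pred = features > init_threshold        # initial segmentation prediction
    for _ in range(rounds):
        key = features[pred].mean()         # "key vector" built from predicted foreground
        bg = features[~pred].mean()         # background summary for comparison
        # Refined prediction: pixels closer to the key than to the background.
        pred = np.abs(features - key) < np.abs(features - bg)
    return pred

feats = np.array([0.9, 0.8, 0.2, 0.1, 0.85])
mask = refine(feats)
```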
-
Publication number: 20230133522
Abstract: Digital content search techniques are described that overcome the challenges found in conventional sequence-based techniques through use of a query-aware sequential search. In one example, a search query is received and sequence input data is obtained based on the search query. The sequence input data describes a sequence of digital content and respective search queries. Embedding data is generated based on the sequence input data using an embedding module of a machine-learning model. The embedding module includes a query-aware embedding layer that generates embeddings of the sequence of digital content and respective search queries. A search result is generated referencing at least one item of digital content by processing the embedding data using at least one layer of the machine-learning model.
Type: Application
Filed: October 28, 2021
Publication date: May 4, 2023
Applicant: Adobe Inc.
Inventors: Handong Zhao, Zhe Lin, Zhaowen Wang, Zhankui He, Ajinkya Gorakhnath Kale
-
Patent number: 11636147
Abstract: The present disclosure relates to a tag-based font recognition system that utilizes a multi-learning framework to develop and improve tag-based font recognition using deep learning neural networks. In particular, the tag-based font recognition system jointly trains a font tag recognition neural network with an implicit font classification attention model to generate font tag probability vectors that are enhanced by implicit font classification information. Indeed, the font recognition system weights the hidden layers of the font tag recognition neural network with implicit font information to improve the accuracy and predictability of the font tag recognition neural network, which results in improved retrieval of fonts in response to a font tag query. Accordingly, using the enhanced tag probability vectors, the tag-based font recognition system can accurately identify and recommend one or more fonts in response to a font tag query.
Type: Grant
Filed: January 26, 2022
Date of Patent: April 25, 2023
Assignee: Adobe Inc.
Inventors: Zhaowen Wang, Tianlang Chen, Ning Xu, Hailin Jin
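As a loose sketch of the enhancement idea (all matrices and the specific weighting rule are invented, not the patented architecture): a font-tag probability vector is reweighted by implicit font-classification information, so tags favored by the likely font classes get boosted.

```python
import numpy as np

def enhanced_tag_probs(tag_logits, font_probs, font_tag_affinity):
    # Attention weight per tag: how strongly the likely font classes favor it.
    attn = font_probs @ font_tag_affinity          # shape: (num_tags,)
    weighted = np.exp(tag_logits) * (1.0 + attn)   # boost favored tags
    return weighted / weighted.sum()               # renormalize to probabilities

tag_logits = np.array([0.0, 0.0])                  # two tags, equal a priori
font_probs = np.array([0.9, 0.1])                  # implicit font-class probabilities
affinity = np.array([[1.0, 0.0],                   # font class 0 favors tag 0
                     [0.0, 1.0]])                  # font class 1 favors tag 1
probs = enhanced_tag_probs(tag_logits, font_probs, affinity)
```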
-
Publication number: 20230116969
Abstract: Digital content search techniques are described. In one example, the techniques are incorporated as part of a multi-head self-attention module of a transformer using machine learning. A localized self-attention module, for instance, is incorporated as part of the multi-head self-attention module that applies local constraints to the sequence. This is performable in a variety of ways. In a first instance, a model-based local encoder is used, examples of which include a fixed-depth recurrent neural network (RNN) and a convolutional network. In a second instance, a masking-based local encoder is used, examples of which include use of a fixed window, Gaussian initialization, and an adaptive predictor.
Type: Application
Filed: October 14, 2021
Publication date: April 20, 2023
Applicant: Adobe Inc.
Inventors: Handong Zhao, Zhankui He, Zhaowen Wang, Ajinkya Gorakhnath Kale, Zhe Lin
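A minimal sketch of the fixed-window, masking-based variant (single head, no learned projections, invented inputs): standard scaled dot-product self-attention, but masked so each position only attends to neighbors within `window` steps.

```python
import numpy as np

def local_self_attention(x, window=1):
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)                  # scaled dot-product scores
    idx = np.arange(n)
    mask = np.abs(idx[:, None] - idx[None, :]) <= window  # fixed local window
    scores = np.where(mask, scores, -np.inf)       # block attention outside the window
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax per position
    return weights @ x

x = np.eye(4)                                      # 4 positions, one-hot features
out = local_self_attention(x, window=1)
```

With a window of 1, position 0 receives no contribution from positions 2 and 3, which is the local constraint the abstract refers to.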
-
Publication number: 20230110114
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for accurately and flexibly generating scalable fonts utilizing multi-implicit neural font representations. For instance, the disclosed systems combine deep learning with differentiable rasterization to generate a multi-implicit neural font representation of a glyph. For example, the disclosed systems utilize an implicit differentiable font neural network to determine a font style code for an input glyph as well as distance values for locations of the glyph to be rendered based on a glyph label and the font style code. Further, the disclosed systems rasterize the distance values utilizing a differentiable rasterization model and combine the rasterized distance values to generate a permutation-invariant version of the glyph's corresponding glyph set.
Type: Application
Filed: October 12, 2021
Publication date: April 13, 2023
Inventors: Chinthala Pradyumna Reddy, Zhifei Zhang, Matthew Fisher, Hailin Jin, Zhaowen Wang, Niloy J Mitra
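A toy version of the differentiable rasterization step (the circle "glyph" and sharpness value are stand-ins for a real predicted outline): signed distance values, negative inside the shape, are mapped to pixel coverage through a smooth sigmoid, keeping rasterization differentiable in the distances.

```python
import numpy as np

def rasterize(sdf, sharpness=10.0):
    # Coverage near 1 inside the shape (sdf < 0), near 0 outside,
    # with a smooth differentiable transition at the edge.
    return 1.0 / (1.0 + np.exp(sharpness * sdf))

# Signed distance field of a circle of radius 0.5 on an 8x8 grid over [-1, 1]^2.
ys, xs = np.mgrid[-1:1:8j, -1:1:8j]
sdf = np.sqrt(xs**2 + ys**2) - 0.5
img = rasterize(sdf)
```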
-
Patent number: 11625932
Abstract: Utilizing a visual-feature-classification model to generate font maps that efficiently and accurately organize fonts based on visual similarities. For example, features are extracted from fonts of varying styles, and a self-organizing map (or other visual-feature-classification model) is utilized to map the extracted font features to positions within font maps. Further, areas of font maps are magnified by mapping some fonts within a bounded area to positions within a higher-resolution font map. Additionally, the font map is navigated to identify visually similar fonts (e.g., fonts within a threshold similarity).
Type: Grant
Filed: August 31, 2020
Date of Patent: April 11, 2023
Assignee: Adobe Inc.
Inventors: Spyridon Ampanavos, Paul Asente, Jose Ignacio Echevarria Vallespi, Zhaowen Wang
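A miniature self-organizing map in the spirit of the abstract (the font features, grid size, and learning schedule here are invented): feature vectors are mapped to cells of a small 2-D grid, and each training step pulls the best-matching cell and its neighbors toward the presented font, so visually similar fonts end up in nearby cells.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = rng.normal(size=(3, 3, 2))          # 3x3 map of 2-D cell weights

def best_cell(grid, feat):
    # Best-matching unit: the cell whose weight is closest to the feature.
    d = np.linalg.norm(grid - feat, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

def train_step(grid, feat, lr=0.5, radius=1):
    bi, bj = best_cell(grid, feat)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            if max(abs(i - bi), abs(j - bj)) <= radius:   # neighborhood update
                grid[i, j] += lr * (feat - grid[i, j])
    return grid

font_feat = np.array([5.0, 5.0])           # hypothetical font feature vector
for _ in range(20):
    grid = train_step(grid, font_feat)
cell = best_cell(grid, font_feat)          # map position for this font
```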
-
Publication number: 20230070666
Abstract: Embodiments are disclosed for translating an image from a source visual domain to a target visual domain. In particular, in one or more embodiments, the disclosed systems and methods comprise a training process that includes receiving a training input including a pair of keyframes and an unpaired image. The pair of keyframes represent a visual translation from a first version of an image in a source visual domain to a second version of the image in a target visual domain. The one or more embodiments further include sending the pair of keyframes and the unpaired image to an image translation network to generate a first training image and a second training image. The one or more embodiments further include training the image translation network to translate images from the source visual domain to the target visual domain based on a calculated loss using the first and second training images.
Type: Application
Filed: September 3, 2021
Publication date: March 9, 2023
Inventors: Michal Lukáč, Daniel Sýkora, David Futschik, Zhaowen Wang, Elya Shechtman
-
Patent number: 11566336
Abstract: A method for transforming a crystal form of an electrolyte containing lithium for aluminum electrolysis includes the following steps: S1, pulverizing the electrolyte containing lithium; S2, uniformly mixing an additive with the electrolyte powder to obtain a mixture, wherein the additive is one or more selected from the group consisting of an oxide of an alkali metal other than lithium, an oxo acid salt of an alkali metal other than lithium, and a halide of an alkali metal other than lithium, and wherein, in the mixture, the molar ratio of the sum of the alkali metal fluoride contained in the electrolyte, the alkali metal fluoride directly added from the additive, and the alkali metal fluoride to which the additive is converted under the high-temperature calcination condition, to aluminum fluoride, is greater than 3; and S3, calcining the mixture at a high temperature.
Type: Grant
Filed: May 17, 2018
Date of Patent: January 31, 2023
Assignee: NORTHEASTERN UNIVERSITY
Inventors: Zhaowen Wang, Wenju Tao, Youjian Yang, Bingliang Gao, Fengguo Liu
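The ratio criterion in step S2 is simple arithmetic and can be sketched with invented quantities (the mole values below are illustrative, not from the patent): the combined moles of alkali-metal fluoride, from all three sources named in the claim, divided by the moles of aluminum fluoride must exceed 3.

```python
def ratio_ok(mol_mf_in_electrolyte, mol_mf_added, mol_mf_from_additive, mol_alf3):
    """Check the S2 criterion: total alkali-metal fluoride (already in the
    electrolyte, added directly, and produced from the additive on
    calcination) to AlF3, as a molar ratio, must be greater than 3."""
    total_mf = mol_mf_in_electrolyte + mol_mf_added + mol_mf_from_additive
    return total_mf / mol_alf3 > 3

# Hypothetical batch: ratio = (2.5 + 0.5 + 0.4) / 1.0 = 3.4 > 3
ok = ratio_ok(mol_mf_in_electrolyte=2.5, mol_mf_added=0.5,
              mol_mf_from_additive=0.4, mol_alf3=1.0)
```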
-
Patent number: 11544831
Abstract: The present disclosure relates to training and utilizing an image exposure transformation network to generate a long-exposure image from a single short-exposure image (e.g., still image). In various embodiments, the image exposure transformation network is trained using adversarial learning, long-exposure ground truth images, and a multi-term loss function. In some embodiments, the image exposure transformation network includes an optical flow prediction network and/or an appearance guided attention network. Trained embodiments of the image exposure transformation network generate realistic long-exposure images from single short-exposure images without additional information.
Type: Grant
Filed: August 4, 2020
Date of Patent: January 3, 2023
Assignee: Adobe Inc.
Inventors: Yilin Wang, Zhe Lin, Zhaowen Wang, Xin Lu, Xiaohui Shen, Chih-Yao Hsieh
-
Publication number: 20220414314
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly generating scalable and semantically editable font representations utilizing a machine learning approach. For example, the disclosed systems generate a font representation code from a glyph utilizing a particular neural network architecture. In particular, the disclosed systems utilize a glyph appearance propagation model and perform an iterative process to generate a font representation code from an initial glyph. Additionally, using the glyph appearance propagation model, the disclosed systems automatically propagate the appearance of the initial glyph from the font representation code to generate additional glyphs corresponding to respective glyph labels. In some embodiments, the disclosed systems propagate edits or other changes in appearance of a glyph to other glyphs within a glyph set (e.g., to match the appearance of the edited glyph).
Type: Application
Filed: June 29, 2021
Publication date: December 29, 2022
Inventors: Zhifei Zhang, Zhaowen Wang, Hailin Jin, Matthew Fisher