Patents by Inventor Hailin Jin

Hailin Jin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10762135
    Abstract: A digital medium environment includes an asset processing application that performs editing of assets. A projection function is trained on pairs of actions pertaining to software edits and the assets resulting from those actions, learning a joint embedding between actions and assets. The projection function is used in the asset processing application to recommend software actions to create an asset, and also to recommend assets that demonstrate the effects of software actions. Recommendations are based on ranking distance measures between action representations and asset representations in a vector space. (See the sketch following this entry.)
    Type: Grant
    Filed: November 21, 2016
    Date of Patent: September 1, 2020
    Assignee: Adobe Inc.
    Inventors: Matthew Douglas Hoffman, Longqi Yang, Hailin Jin, Chen Fang
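
A minimal sketch of the recommendation step this abstract describes: ranking candidates by distance to a query in a shared embedding space. The embeddings here are random stand-ins for the output of the trained projection function, and all names are assumptions, not the patent's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these came from a trained projection function that maps
# software actions and assets into the same vector space.
action_embeddings = rng.normal(size=(5, 16))   # 5 candidate actions
asset_embeddings = rng.normal(size=(8, 16))    # 8 candidate assets

def recommend(query_vec, candidates, top_k=3):
    """Rank candidates by Euclidean distance to the query embedding."""
    dists = np.linalg.norm(candidates - query_vec, axis=1)
    return np.argsort(dists)[:top_k]

# Recommend actions that could produce an asset like asset 0 ...
print(recommend(asset_embeddings[0], action_embeddings))
# ... and assets that demonstrate the effect of action 2.
print(recommend(action_embeddings[2], asset_embeddings))
```
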
  • Patent number: 10755139
    Abstract: In one embodiment, a computer accessible storage medium stores a plurality of instructions which, when executed: group a set of reconstructed three-dimensional (3D) points derived from image data into a plurality of groups based on one or more attributes of the 3D points; select one or more groups from the plurality of groups; and sample data from the selected groups, wherein the sampled data is input to a consensus estimator to generate a model that describes a 3D model of a scene captured by the image data. Other embodiments may bias sampling into a consensus estimator for any data set, based on the relative quality of the data set. (See the sketch following this entry.)
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: August 25, 2020
    Assignee: Adobe Inc.
    Inventors: Hailin Jin, Kai Ni
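
A minimal sketch of quality-biased sampling feeding a RANSAC-style consensus estimator, as in the abstract above. 2D line fitting stands in for the 3D reconstruction case, and the grouping attribute and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
points = rng.normal(size=(200, 2))
points[:120, 1] = 0.5 * points[:120, 0] + rng.normal(0, 0.05, 120)  # inliers
quality = np.r_[np.full(120, 0.9), np.full(80, 0.2)]  # per-point quality attribute

# Group points by quality and restrict minimal-sample draws to the
# high-quality group; consensus is still evaluated over all points.
high_quality = points[quality > 0.5]

best_inliers, best_model = 0, None
for _ in range(100):
    p1, p2 = high_quality[rng.choice(len(high_quality), 2, replace=False)]
    if abs(p2[0] - p1[0]) < 1e-9:
        continue  # skip near-vertical pairs for this toy slope/intercept model
    slope = (p2[1] - p1[1]) / (p2[0] - p1[0])
    intercept = p1[1] - slope * p1[0]
    residuals = np.abs(points[:, 1] - (slope * points[:, 0] + intercept))
    inliers = int((residuals < 0.1).sum())
    if inliers > best_inliers:
        best_inliers, best_model = inliers, (slope, intercept)

print(best_model, best_inliers)
```
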
  • Publication number: 20200258241
    Abstract: Technology is disclosed herein for learning motion in video. In an implementation, an artificial neural network extracts features from a video. A correspondence proposal (CP) module performs, for at least some of the features, a search for corresponding features in the video based on a semantic similarity of a given feature to others of the features. The CP module then generates a joint semantic vector for each of the features based at least on the semantic similarity of the given feature to one or more of the corresponding features and a spatiotemporal distance of the given feature to the one or more of the corresponding features. The artificial neural network is able to identify motion in the video using the joint semantic vectors generated for the features extracted from the video. (See the sketch following this entry.)
    Type: Application
    Filed: February 13, 2019
    Publication date: August 13, 2020
    Inventors: Xingyu Liu, Hailin Jin, Joonyoung Lee
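
A minimal sketch of one correspondence-proposal step from the abstract above: for a single feature, find its most semantically similar features elsewhere in the video and fuse similarity with spatiotemporal offset into a joint vector. The shapes and the mean-pooling fusion rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
num_feats, dim = 50, 32
feats = rng.normal(size=(num_feats, dim))   # semantic features from the video
pos = rng.uniform(size=(num_feats, 3))      # (t, x, y) location of each feature

def joint_semantic_vector(i, k=4):
    # Cosine similarity of feature i to every other feature.
    sims = feats @ feats[i] / (
        np.linalg.norm(feats, axis=1) * np.linalg.norm(feats[i]) + 1e-9)
    sims[i] = -np.inf                        # exclude self-matches
    nbrs = np.argsort(sims)[-k:]             # k most similar features
    offsets = pos[nbrs] - pos[i]             # spatiotemporal displacement
    # Concatenate per-neighbour (similarity, offset) and pool by mean.
    pairs = np.hstack([sims[nbrs, None], offsets])
    return pairs.mean(axis=0)

print(joint_semantic_vector(0))
```
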
  • Patent number: 10733228
    Abstract: Sketch and style based image retrieval in a digital medium environment is described. Initially, a user sketches an object to be searched in connection with an image search. Styled images are selected to indicate a desired style of images to be returned by the search. A search request is generated based on the sketch and selected images. Responsive to the request, an image repository is searched to identify images having the desired object and styling. To search the image repository, a neural network is utilized that is capable of recognizing the desired object in images based on visual characteristics of the sketch and independently recognizing the desired styling in images based on visual characteristics of the selected images. This independent recognition allows desired styling to be specified by selecting images having the style but not the desired object. Images having the desired object and styling are returned. (See the sketch following this entry.)
    Type: Grant
    Filed: June 5, 2019
    Date of Patent: August 4, 2020
    Assignee: Adobe Inc.
    Inventors: Hailin Jin, John Philip Collomosse
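
A minimal sketch of retrieval that scores object and style independently, as the abstract above describes, and combines the two distances. Both encoders are faked with random vectors; the equal weighting and all names are assumptions rather than the patent's network.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
object_index = rng.normal(size=(n, 64))   # object embeddings of repository images
style_index = rng.normal(size=(n, 64))    # style embeddings of the same images

query_object = rng.normal(size=64)        # embedding of the user's sketch
query_style = style_index[[3, 7]].mean(0) # mean style of user-selected images

d_obj = np.linalg.norm(object_index - query_object, axis=1)
d_sty = np.linalg.norm(style_index - query_style, axis=1)
score = 0.5 * d_obj + 0.5 * d_sty         # lower is better; weights assumed

print(np.argsort(score)[:5])              # top-5 images matching object AND style
```
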
  • Publication number: 20200242822
    Abstract: Techniques and systems are described for style-aware patching of a digital image in a digital medium environment. For example, a digital image creation system generates style data for a portion of a digital image to be filled, indicating the style of the area surrounding that portion. The digital image creation system also generates content data indicating the content of the digital image in the area surrounding the portion. The digital image creation system selects a source digital image based on how similar the style and content of the source digital image, at the location of the patch, are to the style data and content data. The digital image creation system transforms the style of the source digital image based on the style data and generates the patch from the source digital image in the transformed style for incorporation into the portion to be filled. (See the sketch following this entry.)
    Type: Application
    Filed: April 6, 2020
    Publication date: July 30, 2020
    Applicant: Adobe Inc.
    Inventors: Hailin Jin, John Philip Collomosse, Brian Lynn Price
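
A minimal sketch of the source-selection step described above: choose the candidate whose style and content around the patch location best match the style data and content data of the hole. Feature extraction is faked with random vectors and the unweighted sum is an assumption; the subsequent style transform is not modeled.

```python
import numpy as np

rng = np.random.default_rng(4)
n_candidates = 20
cand_style = rng.normal(size=(n_candidates, 32))    # style features at patch location
cand_content = rng.normal(size=(n_candidates, 32))  # content features at patch location

hole_style = rng.normal(size=32)     # style data of the area around the hole
hole_content = rng.normal(size=32)   # content data of the area around the hole

# Combined style + content distance; the best source image minimizes it.
d = (np.linalg.norm(cand_style - hole_style, axis=1)
     + np.linalg.norm(cand_content - hole_content, axis=1))
print(int(np.argmin(d)))             # index of the selected source image
```
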
  • Patent number: 10726313
    Abstract: Various embodiments describe active learning methods for training temporal action localization models used to localize actions in untrimmed videos. A trainable active learning selection function is used to select the unlabeled samples that can improve the temporal action localization model the most. The selected unlabeled samples are then annotated and used to retrain the temporal action localization model. In some embodiments, the trainable active learning selection function includes a trainable performance prediction model that maps a video sample and a temporal action localization model to a predicted performance improvement for the temporal action localization model. (See the sketch following this entry.)
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: July 28, 2020
    Assignee: Adobe Inc.
    Inventors: Joon-Young Lee, Hailin Jin, Fabian David Caba Heilbron
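
A minimal sketch of one active-learning selection round, as the abstract above describes: score each unlabeled video by its predicted performance improvement and send the top candidates for annotation. The predict_gain stub is a hypothetical stand-in for the patent's trainable performance prediction model.

```python
import numpy as np

rng = np.random.default_rng(5)
unlabeled = [f"video_{i:03d}" for i in range(10)]

def predict_gain(video_id, model_state):
    # Stand-in for the trainable performance prediction model: it would
    # map (video sample, localization model) to a predicted improvement.
    return rng.uniform(0.0, 1.0)

model_state = {"round": 0}
gains = {v: predict_gain(v, model_state) for v in unlabeled}
to_annotate = sorted(unlabeled, key=gains.get, reverse=True)[:3]
print(to_annotate)   # these samples go to annotators, then into retraining
```
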
  • Patent number: 10699166
    Abstract: Font recognition and similarity determination techniques and systems are described. In a first example, localization techniques are described to train a model using machine learning (e.g., a convolutional neural network) using training images. The model is then used to localize text in a subsequently received image, and may do so automatically and without user intervention, e.g., without specifying any of the edges of a bounding box. In a second example, a deep neural network is directly learned as an embedding function of a model that is usable to determine font similarity. In a third example, techniques are described that leverage attributes described in metadata associated with fonts as part of font recognition and similarity determinations. (See the sketch following this entry.)
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: June 30, 2020
    Assignee: Adobe Inc.
    Inventors: Zhaowen Wang, Luoqi Liu, Hailin Jin
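
A minimal sketch of the second example above: a learned embedding function maps font images to vectors, and similarity becomes nearest-neighbour search in that space. The embed() stub and font names are hypothetical placeholders for the patent's deep network.

```python
import numpy as np

rng = np.random.default_rng(6)

def embed(font_image):
    # Placeholder for the learned embedding function.
    return rng.normal(size=128)

library = {name: embed(None) for name in ["Garamond", "Futura", "Lato", "Menlo"]}
query = embed(None)   # embedding of a font cropped from an input image

ranked = sorted(library, key=lambda n: np.linalg.norm(library[n] - query))
print(ranked)         # fonts ordered from most to least similar
```
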
  • Patent number: 10699453
    Abstract: Techniques and systems are described for style-aware patching of a digital image in a digital medium environment. For example, a digital image creation system generates style data for a portion of a digital image to be filled, indicating the style of the area surrounding that portion. The digital image creation system also generates content data indicating the content of the digital image in the area surrounding the portion. The digital image creation system selects a source digital image based on how similar the style and content of the source digital image, at the location of the patch, are to the style data and content data. The digital image creation system transforms the style of the source digital image based on the style data and generates the patch from the source digital image in the transformed style for incorporation into the portion to be filled.
    Type: Grant
    Filed: August 17, 2017
    Date of Patent: June 30, 2020
    Assignee: Adobe Inc.
    Inventors: Hailin Jin, John Philip Collomosse, Brian L. Price
  • Publication number: 20200151503
    Abstract: In implementations of recognizing text in images, text recognition systems are trained using noisy images that have nuisance factors applied, and corresponding clean images (e.g., without nuisance factors). Clean images serve as supervision at both feature and pixel levels, so that text recognition systems are trained to be feature invariant (e.g., by requiring features extracted from a noisy image to match features extracted from a clean image), and feature complete (e.g., by requiring that features extracted from a noisy image be sufficient to generate a clean image). Accordingly, text recognition systems generalize to text not included in training images, and are robust to nuisance factors. Furthermore, since clean images are provided as supervision at feature and pixel levels, training requires fewer training images than text recognition systems that are not trained with a supervisory clean image, thus saving time and resources. (See the sketch following this entry.)
    Type: Application
    Filed: November 8, 2018
    Publication date: May 14, 2020
    Applicant: Adobe Inc.
    Inventors: Zhaowen Wang, Hailin Jin, Yang Liu
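
A minimal sketch of the two supervision signals above: features from a noisy image must match features from the clean image (feature invariance) and must suffice to reconstruct the clean image (feature completeness). The toy linear encoder/decoder and the loss weighting are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
W_enc = rng.normal(size=(64, 100)) * 0.1   # toy linear encoder
W_dec = rng.normal(size=(100, 64)) * 0.1   # toy linear decoder

clean = rng.uniform(size=100)              # flattened clean text image
noisy = clean + rng.normal(0, 0.1, 100)    # same image with nuisance factors

f_clean, f_noisy = W_enc @ clean, W_enc @ noisy
recon = W_dec @ f_noisy

feature_loss = np.mean((f_noisy - f_clean) ** 2)   # feature-level supervision
pixel_loss = np.mean((recon - clean) ** 2)         # pixel-level supervision
total = feature_loss + 1.0 * pixel_loss            # weighting assumed
print(feature_loss, pixel_loss, total)
```
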
  • Publication number: 20200142994
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for guided visual search. A visual search query can be represented as a sketch sequence that includes ordering information of the constituent strokes in the sketch. The visual search query can be encoded into a structural search encoding in a common search space by a structural neural network. Indexed visual search results can be identified in the common search space and clustered in an auxiliary semantic space. Sketch suggestions can be identified from a plurality of indexed sketches in the common search space. A sketch suggestion can be identified for each semantic cluster of visual search results and presented with the cluster to guide a user towards relevant content through an iterative search process. Selecting a sketch suggestion as a target sketch can automatically transform the visual search query to the target sketch via adversarial images. (See the sketch following this entry.)
    Type: Application
    Filed: November 7, 2018
    Publication date: May 7, 2020
    Inventors: Hailin Jin, John Collomosse
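
A minimal sketch of one guidance round from the abstract above: cluster the retrieved results and offer the indexed sketch nearest each cluster centre as a suggestion. The encodings are random stand-ins and the tiny k-means is an assumed substitute for the patent's clustering.

```python
import numpy as np

rng = np.random.default_rng(8)
results = rng.normal(size=(30, 16))     # encodings of retrieved images
sketches = rng.normal(size=(10, 16))    # encodings of indexed sketches

k = 3
centers = results[rng.choice(30, k, replace=False)]
for _ in range(10):                     # a few k-means iterations
    labels = np.argmin(
        np.linalg.norm(results[:, None] - centers[None], axis=2), axis=1)
    centers = np.array([
        results[labels == j].mean(0) if np.any(labels == j) else centers[j]
        for j in range(k)])

for j, c in enumerate(centers):
    suggestion = int(np.argmin(np.linalg.norm(sketches - c, axis=1)))
    print(f"cluster {j}: suggest sketch {suggestion}")
```
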
  • Patent number: 10614381
    Abstract: This disclosure involves personalizing user experiences with electronic content based on application usage data. For example, a user representation model that facilitates content recommendations is iteratively trained with action histories from a content manipulation application. Each iteration involves selecting, from an action history for a particular user, an action sequence including a target action. An initial output is computed in each iteration by applying a probability function to the selected action sequence and a user representation vector for the particular user. The user representation vector is adjusted to maximize the output that is generated by applying the probability function to the action sequence and the user representation vector. This iterative training process generates a user representation model, which includes a set of adjusted user representation vectors, that facilitates content recommendations corresponding to users' usage patterns in the content manipulation application. (See the sketch following this entry.)
    Type: Grant
    Filed: December 16, 2016
    Date of Patent: April 7, 2020
    Assignee: Adobe Inc.
    Inventors: Matthew Hoffman, Longqi Yang, Hailin Jin, Chen Fang
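
A minimal sketch of the adjustment step above: nudge a user representation vector by gradient ascent so that an observed target action becomes more probable given the preceding action sequence. The softmax scoring rule is an assumed stand-in for the patent's probability function.

```python
import numpy as np

rng = np.random.default_rng(9)
num_actions, dim = 20, 8
action_vecs = rng.normal(size=(num_actions, dim))
user_vec = np.zeros(dim)

context = [3, 7, 7, 12]      # recent action ids from the user's history
target = 5                   # the next (target) action

for _ in range(50):          # gradient ascent on log p(target | context, user)
    ctx = action_vecs[context].mean(axis=0)
    logits = action_vecs @ (ctx + user_vec)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    grad = action_vecs[target] - probs @ action_vecs   # d log p / d user_vec
    user_vec += 0.1 * grad

print(float(probs[target]))  # probability of the target action has increased
```
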
  • Patent number: 10592787
    Abstract: The present disclosure relates to a font recognition system that employs a multi-task learning framework and adversarial training to improve font classification and remove negative side effects caused by intra-class variances of glyph content. For example, in one or more embodiments, the font recognition system adversarially trains a font recognition neural network by minimizing font classification loss while at the same time maximizing glyph classification loss. By employing an adversarially trained font classification neural network, the font recognition system can improve overall font recognition by removing the negative side effects from diverse glyph content. (See the sketch following this entry.)
    Type: Grant
    Filed: November 8, 2017
    Date of Patent: March 17, 2020
    Assignee: Adobe Inc.
    Inventors: Yang Liu, Zhaowen Wang, Hailin Jin
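
A minimal sketch of the adversarial objective above: the shared encoder is updated to minimize the font-classification loss while maximizing the glyph-classification loss (the gradient-reversal idea). The loss values and the lambda weight are toy assumptions.

```python
import numpy as np

def cross_entropy(probs, label):
    return -np.log(probs[label] + 1e-12)

font_probs = np.array([0.7, 0.2, 0.1])    # encoder + font head output
glyph_probs = np.array([0.5, 0.3, 0.2])   # encoder + glyph head output

lam = 0.5                                  # adversarial weight, assumed
font_loss = cross_entropy(font_probs, 0)
glyph_loss = cross_entropy(glyph_probs, 1)

# Objective for the shared encoder: be good at fonts, bad at glyphs.
encoder_objective = font_loss - lam * glyph_loss
# The glyph head itself still minimizes its own classification loss.
glyph_head_objective = glyph_loss
print(encoder_objective, glyph_head_objective)
```
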
  • Patent number: 10592590
    Abstract: Embodiments of the present invention are directed at providing a font similarity preview for non-resident fonts. In one embodiment, a font is selected on a computing device. In response to the selection of the font, a pre-computed font list is checked to determine which fonts are similar to the selected font. In response to a determination that similar fonts are not local to the computing device, a non-resident font list is sent to a font vendor. The font vendor sends back previews of the non-resident fonts based on entitlement information of a user. Further, a full non-resident font can be synced to the computing device. Other embodiments may be described and/or claimed. (See the sketch following this entry.)
    Type: Grant
    Filed: August 4, 2016
    Date of Patent: March 17, 2020
    Assignee: Adobe Inc.
    Inventors: I-Ming Pao, Alan Lee Erickson, Yuyan Song, Seth Shaw, Hailin Jin, Zhaowen Wang
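
A minimal sketch of the preview flow above: look up similar fonts in a pre-computed list, split them into resident and non-resident sets, and request previews for the non-resident ones. The font names and the vendor call are hypothetical, not the actual service API.

```python
local_fonts = {"Lato", "Menlo", "Futura"}
similar_fonts = {"Garamond": ["Baskerville", "Lato", "Caslon"]}  # pre-computed

def request_previews(font_names, user_entitlements):
    # Stand-in for the vendor round-trip; returns preview handles
    # subject to the user's entitlement information.
    return {name: f"preview:{name}" for name in font_names}

selected = "Garamond"
candidates = similar_fonts[selected]
non_resident = [f for f in candidates if f not in local_fonts]
previews = request_previews(non_resident, user_entitlements={"tier": "paid"})
print(previews)   # a full font could then be synced on demand
```
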
  • Patent number: 10565518
    Abstract: The present disclosure is directed to collaborative feature learning using social media data. For example, a machine learning system may identify social media data that includes user behavioral data, which indicates user interactions with content items. Using the identified user behavioral data, the machine learning system may determine latent representations of the content items. In some embodiments, the machine learning system may train a machine-learning model based on the latent representations. Further, the machine learning system may extract features of the content items from the trained machine-learning model. (See the sketch following this entry.)
    Type: Grant
    Filed: June 23, 2015
    Date of Patent: February 18, 2020
    Assignee: Adobe Inc.
    Inventors: Hailin Jin, Chen Fang, Jianchao Yang, Zhe Lin
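
A minimal sketch of learning latent item representations from behavioral data via matrix factorization, a common instance of collaborative feature learning; the patent's exact model is not specified here, so this factorization and its hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)
interactions = (rng.uniform(size=(30, 12)) > 0.8).astype(float)  # users x items

dim = 4
U = rng.normal(0, 0.1, (30, dim))     # user factors
V = rng.normal(0, 0.1, (12, dim))     # latent item representations

lr, reg = 0.05, 0.01
for _ in range(200):                   # gradient descent on squared error
    err = interactions - U @ V.T
    U += lr * (err @ V - reg * U)
    V += lr * (err.T @ U - reg * V)

print(V[0])   # learned features for item 0, usable downstream
```
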
  • Publication number: 20200034671
    Abstract: Font recognition and similarity determination techniques and systems are described. In a first example, localization techniques are described to train a model using machine learning (e.g., a convolutional neural network) using training images. The model is then used to localize text in a subsequently received image, and may do so automatically and without user intervention, e.g., without specifying any of the edges of a bounding box. In a second example, a deep neural network is directly learned as an embedding function of a model that is usable to determine font similarity. In a third example, techniques are described that leverage attributes described in metadata associated with fonts as part of font recognition and similarity determinations.
    Type: Application
    Filed: October 1, 2019
    Publication date: January 30, 2020
    Applicant: Adobe Inc.
    Inventors: Zhaowen Wang, Luoqi Liu, Hailin Jin
  • Patent number: 10528649
    Abstract: Font recognition and similarity determination techniques and systems are described. For example, a computing device receives an image including a font and extracts font features corresponding to the font. The computing device computes font feature distances between the font and fonts from a set of training fonts. The computing device calculates, based on the font feature distances, similarity scores between the font and the training fonts used to calculate the feature distances. The computing device determines, based on the similarity scores, final similarity scores for the font relative to the training fonts. (See the sketch following this entry.)
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: January 7, 2020
    Assignee: Adobe Inc.
    Inventors: Zhaowen Wang, Hailin Jin
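
A minimal sketch of turning feature distances into similarity scores, per the abstract above: smaller distance means higher score, normalized over the training fonts. The exp(-d) mapping is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(11)
train_feats = rng.normal(size=(6, 32))     # features of training fonts
query_feat = rng.normal(size=32)           # features of the extracted font

dists = np.linalg.norm(train_feats - query_feat, axis=1)
scores = np.exp(-dists)                    # smaller distance => higher score
final_scores = scores / scores.sum()       # final similarity per training font
print(final_scores)
```
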
  • Patent number: 10515295
    Abstract: The present disclosure relates to a font recognition system that employs a multi-task learning framework to jointly improve font classification and remove negative side effects caused by intra-class variances of glyph content. For example, in one or more embodiments, the font recognition system can jointly train a font recognition neural network using a font classification loss model and a triplet loss model to generate a deep learning neural network that provides improved font classifications. In addition, the font recognition system can employ the trained font recognition neural network to efficiently recognize fonts within input images as well as provide other suggested fonts. (See the sketch following this entry.)
    Type: Grant
    Filed: October 27, 2017
    Date of Patent: December 24, 2019
    Assignee: Adobe Inc.
    Inventors: Yang Liu, Zhaowen Wang, Hailin Jin
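
A minimal sketch of the joint objective above: a font-classification loss plus a triplet loss that pulls same-font embeddings together and pushes different-font embeddings apart. The embeddings are random stand-ins and the margin and weighting are assumptions.

```python
import numpy as np

rng = np.random.default_rng(12)
anchor = rng.normal(size=16)                  # embedding of a text image
positive = anchor + rng.normal(0, 0.1, 16)    # same font, different glyphs
negative = rng.normal(size=16)                # a different font

margin = 0.5
d_ap = np.linalg.norm(anchor - positive)
d_an = np.linalg.norm(anchor - negative)
triplet_loss = max(0.0, d_ap - d_an + margin)

probs = np.array([0.6, 0.3, 0.1])             # font classifier output
classification_loss = -np.log(probs[0])       # true font is class 0

total_loss = classification_loss + 1.0 * triplet_loss
print(total_loss)
```
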
  • Patent number: 10515296
    Abstract: The present disclosure relates to a font recognition system that employs a multi-task learning framework and training to improve font classification and remove negative side effects caused by intra-class variances of glyph content. For example, in one or more embodiments, the font recognition system trains a hybrid font recognition neural network that includes two or more font recognition neural networks and a weight prediction neural network. The hybrid font recognition neural network determines and generates classification weights based on which font recognition neural network within the hybrid font recognition neural network is best suited to classify the font in an input text image. By employing a hybrid trained font classification neural network, the font recognition system can improve overall font recognition as well as remove the negative side effects from diverse glyph content. (See the sketch following this entry.)
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: December 24, 2019
    Assignee: Adobe Inc.
    Inventors: Yang Liu, Zhaowen Wang, I-Ming Pao, Hailin Jin
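
A minimal sketch of the hybrid idea above: a weight-prediction network scores which expert font classifier suits the input, and the final prediction is the weighted mix of the experts' outputs. All values here are toy stand-ins for the patent's networks.

```python
import numpy as np

rng = np.random.default_rng(13)
num_fonts = 5

expert_a = rng.dirichlet(np.ones(num_fonts))   # expert 1's class probabilities
expert_b = rng.dirichlet(np.ones(num_fonts))   # expert 2's class probabilities

logits = np.array([1.2, -0.4])                 # weight-prediction net output
w = np.exp(logits) / np.exp(logits).sum()      # softmax classification weights

mixed = w[0] * expert_a + w[1] * expert_b      # weighted expert combination
print(int(np.argmax(mixed)))                   # recognized font class
```
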
  • Patent number: 10496699
    Abstract: A framework is provided for associating dense images with topics. The framework is trained utilizing images, each having multiple regions, multiple visual characteristics, and multiple keyword tags associated therewith. For each region of each image, visual features are computed from the visual characteristics utilizing a convolutional neural network, and an image feature vector is generated from the visual features. The keyword tags are utilized to generate a weighted word vector for each image by calculating a weighted average of word vector representations representing keyword tags associated with the image. The image feature vector and the weighted word vector are aligned in a common embedding space and a heat map is computed for the image. Once trained, the framework can be utilized to automatically tag images and rank the relevance of images with respect to queried keywords based upon associated heat maps. (See the sketch following this entry.)
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: December 3, 2019
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xiaohui Shen, Jianming Zhang, Hailin Jin, Yingwei Li
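
A minimal sketch of the alignment step above: average the tag word vectors into a weighted word vector, then score each image region against it in the shared space to form a relevance heat map. The vectors and tag weights are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(14)
word_vecs = {"dog": rng.normal(size=32), "park": rng.normal(size=32)}
tag_weights = {"dog": 0.7, "park": 0.3}        # per-tag weights, assumed

weighted_word = sum(w * word_vecs[t] for t, w in tag_weights.items())
weighted_word /= np.linalg.norm(weighted_word)

regions = rng.normal(size=(7, 7, 32))          # CNN features per image region
regions /= np.linalg.norm(regions, axis=2, keepdims=True)

heat_map = regions @ weighted_word             # cosine similarity per region
print(heat_map.shape, float(heat_map.max()))   # (7, 7) relevance map
```
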
  • Patent number: 10467508
    Abstract: Font recognition and similarity determination techniques and systems are described. In a first example, localization techniques are described to train a model using machine learning (e.g., a convolutional neural network) using training images. The model is then used to localize text in a subsequently received image, and may do so automatically and without user intervention, e.g., without specifying any of the edges of a bounding box. In a second example, a deep neural network is directly learned as an embedding function of a model that is usable to determine font similarity. In a third example, techniques are described that leverage attributes described in metadata associated with fonts as part of font recognition and similarity determinations.
    Type: Grant
    Filed: April 25, 2018
    Date of Patent: November 5, 2019
    Assignee: Adobe Inc.
    Inventors: Zhaowen Wang, Luoqi Liu, Hailin Jin