Patents by Inventor Hailin Jin

Hailin Jin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11062460
    Abstract: Technology is disclosed herein for learning motion in video. In an implementation, an artificial neural network extracts features from a video. A correspondence proposal (CP) module performs, for at least some of the features, a search for corresponding features in the video based on a semantic similarity of a given feature to others of the features. The CP module then generates a joint semantic vector for each of the features based at least on the semantic similarity of the given feature to one or more of the corresponding features and a spatiotemporal distance of the given feature to the one or more of the corresponding features. The artificial neural network is able to identify motion in the video using the joint semantic vectors generated for the features extracted from the video.
    Type: Grant
    Filed: February 13, 2019
    Date of Patent: July 13, 2021
    Assignee: Adobe Inc.
    Inventors: Xingyu Liu, Hailin Jin, Joonyoung Lee
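The correspondence-proposal idea above can be illustrated with a minimal sketch: for each feature, find the most semantically similar features elsewhere in the video, then build a joint vector from the similarity scores and spatiotemporal distances. Function names, the cosine-similarity metric, and the concatenation layout are illustrative assumptions, not the patent's exact construction.

```python
import numpy as np

def joint_semantic_vectors(features, positions, k=2):
    """Toy correspondence-proposal step: for each feature, search for its k
    most semantically similar features, then form a joint vector from the
    (similarity, spatiotemporal distance) pairs. Names are illustrative."""
    # Cosine similarity between all pairs of feature vectors.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-matches

    joint = []
    for i in range(len(features)):
        top = np.argsort(sim[i])[-k:]  # indices of the top-k correspondences
        # Spatiotemporal distance of feature i to each correspondence.
        dists = np.linalg.norm(positions[top] - positions[i], axis=1)
        joint.append(np.concatenate([sim[i, top], dists]))
    return np.array(joint)
```

A downstream network could then identify motion from these joint vectors, since they encode both what a feature matches and how far (in space and time) the match moved.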
  • Patent number: 11055828
    Abstract: Techniques of inpainting video content include training a neural network to perform an inpainting operation on a video using only content from that video. For example, upon receiving video content including a sequence of initial frames, a computer generates a sequence of inputs corresponding to at least some of the sequence of initial frames, with each input including, for example, a uniform noise map. The computer then generates a convolutional neural network (CNN) using the sequence of inputs as the initial layer. The parameters of the CNN are adjusted according to a cost function, which has components including a flow generation loss component and a consistency loss component. The CNN then outputs, on a final layer, estimated image values in a sequence of final frames.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: July 6, 2021
    Assignee: ADOBE INC.
    Inventors: Mai Long, Zhaowen Wang, Ning Xu, John Philip Collomosse, Haotian Zhang, Hailin Jin
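The cost function named in the abstract has two components; a minimal numeric sketch of how they might combine is below. The photometric form of the flow loss, the frame-difference form of the consistency loss, and the `warp` callback are all assumptions standing in for the patent's actual losses.

```python
import numpy as np

def inpainting_cost(pred_frames, flows, warp, flow_weight=1.0, consistency_weight=1.0):
    """Illustrative cost with the two components the abstract names: a flow
    generation loss (the predicted flow should warp frame t onto frame t+1)
    and a consistency loss (consecutive predictions should stay stable).
    `warp` is a user-supplied warping function; all names are assumptions."""
    flow_loss = 0.0
    consistency_loss = 0.0
    for t in range(len(pred_frames) - 1):
        warped = warp(pred_frames[t], flows[t])
        # Flow generation loss: photometric error of the flow-based warp.
        flow_loss += np.mean((warped - pred_frames[t + 1]) ** 2)
        # Consistency loss: temporal stability of consecutive predictions.
        consistency_loss += np.mean((pred_frames[t + 1] - pred_frames[t]) ** 2)
    return flow_weight * flow_loss + consistency_weight * consistency_loss
```

Adjusting the CNN's parameters then amounts to minimizing this scalar over the network's outputs.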
  • Patent number: 11042798
    Abstract: Certain embodiments involve learning features of content items (e.g., images) based on web data and user behavior data. For example, a system determines latent factors from the content items based on data including a user's text query or keyword query for a content item and the user's interaction with the content items based on the query (e.g., a user's click on a content item resulting from a search using the text query). The system uses the latent factors to learn features of the content items. The system uses a previously learned feature of the content items for iterating the process of learning features of the content items to learn additional features of the content items, which improves the accuracy with which the system learns other features of the content items.
    Type: Grant
    Filed: March 28, 2016
    Date of Patent: June 22, 2021
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Jianchao Yang, Hailin Jin, Chen Fang
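One way to picture the latent-factor step is to factor a query-by-image click matrix, so images clicked for similar text queries get similar latent vectors. The SVD below is a toy stand-in; the patent does not prescribe this factorization.

```python
import numpy as np

def latent_factors(click_matrix, rank=2):
    """Toy latent-factor extraction: truncated SVD of a (query x image) click
    matrix. Images that attract clicks for similar queries end up with nearby
    latent vectors. The SVD choice and `rank` are assumptions."""
    U, S, Vt = np.linalg.svd(click_matrix, full_matrices=False)
    # One latent vector per image (one row per column of the click matrix).
    return Vt[:rank].T * S[:rank]
```

Iterating, as the abstract describes, would feed features learned from these factors back in to learn additional features.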
  • Patent number: 11036915
    Abstract: Embodiments of the present invention are directed at providing a font similarity system. In one embodiment, a new font is detected on a computing device. In response to the detection of the new font, a pre-computed font list is checked to determine whether the new font is included therein. The pre-computed font list includes feature representations, generated independently of the computing device, for corresponding fonts. In response to a determination that the new font is absent from the pre-computed font list, a feature representation for the new font is generated. The generated feature representation is capable of being utilized for a similarity analysis of the new font. The feature representation is then stored in a supplemental font list to enable identification of one or more fonts installed on the computing device that are similar to the new font. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: March 10, 2016
    Date of Patent: June 15, 2021
    Assignee: Adobe Inc.
    Inventors: I-Ming Pao, Zhaowen Wang, Hailin Jin, Alan Lee Erickson
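The lookup flow in the abstract (check the pre-computed list, compute and cache in a supplemental list on a miss) can be sketched in a few lines. The dictionaries and `compute_feature` callback are illustrative stand-ins for the real lists and feature extractor.

```python
def feature_for_font(font_name, precomputed, supplemental, compute_feature):
    """Sketch of the abstract's flow: consult the pre-computed font list first;
    if the new font is absent, generate a feature representation locally and
    store it in the supplemental font list. Names are illustrative."""
    if font_name in precomputed:
        return precomputed[font_name]
    if font_name not in supplemental:
        supplemental[font_name] = compute_feature(font_name)  # cache on first miss
    return supplemental[font_name]
```

Similarity analysis then runs against the union of both lists, so locally installed fonts similar to the new font can be identified.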
  • Patent number: 11003831
    Abstract: The present disclosure relates to an asymmetric font pairing system that efficiently pairs digital fonts. For example, in one or more embodiments, the asymmetric font pairing system automatically identifies and provides users with visually aesthetic font pairs for use in different sections of an electronic document. In particular, the asymmetric font pairing system learns visually aesthetic font pairs using joint symmetric and asymmetric compatibility metric learning. In addition, the asymmetric font pairing system provides compact compatibility spaces (e.g., a symmetric compatibility space and an asymmetric compatibility space) to computing devices (e.g., client devices and server devices), which enable the computing devices to quickly and efficiently provide font pairs to users.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: May 11, 2021
    Assignee: ADOBE INC.
    Inventors: Zhaowen Wang, Hailin Jin, Aaron Phillip Hertzmann, Shuhui Jiang
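A toy scoring function can show why the pairing is asymmetric: a symmetric term is order-independent, while an asymmetric term depends on which font is the header and which is the body. The embedding dictionaries and the mixing weight `alpha` are assumptions, not the patent's learned spaces.

```python
import numpy as np

def pair_score(header_font, body_font, sym_embed, asym_embed, alpha=0.5):
    """Toy font-pair score combining a symmetric term (order-independent
    distance in a symmetric compatibility space) with an asymmetric term
    (directional: the header font's asymmetric embedding should land near the
    body font). All names and the weighting are illustrative assumptions."""
    h_sym, b_sym = sym_embed[header_font], sym_embed[body_font]
    symmetric = -np.linalg.norm(h_sym - b_sym)   # same for (a, b) and (b, a)
    h_asym = asym_embed[header_font]             # direction-dependent term
    asymmetric = -np.linalg.norm(h_asym - b_sym)
    return alpha * symmetric + (1 - alpha) * asymmetric
```

Because the asymmetric embedding enters only for the header font, swapping the roles of the two fonts generally changes the score, which is the behavior the abstract's asymmetric compatibility space captures.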
  • Patent number: 10997463
    Abstract: In implementations of recognizing text in images, text recognition systems are trained using noisy images that have nuisance factors applied, and corresponding clean images (e.g., without nuisance factors). Clean images serve as supervision at both feature and pixel levels, so that text recognition systems are trained to be feature invariant (e.g., by requiring features extracted from a noisy image to match features extracted from a clean image), and feature complete (e.g., by requiring that features extracted from a noisy image be sufficient to generate a clean image). Accordingly, text recognition systems generalize to text not included in training images, and are robust to nuisance factors. Furthermore, since clean images are provided as supervision at feature and pixel levels, training requires fewer training images than text recognition systems that are not trained with a supervisory clean image, thus saving time and resources.
    Type: Grant
    Filed: November 8, 2018
    Date of Patent: May 4, 2021
    Assignee: Adobe Inc.
    Inventors: Zhaowen Wang, Hailin Jin, Yang Liu
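The two supervision signals described above (feature invariance and feature completeness) can be written as a pair of losses. The L2 form of each term and the array arguments are assumptions; the patent only specifies that clean images supervise at both the feature and pixel levels.

```python
import numpy as np

def supervision_losses(noisy_feat, clean_feat, generated_clean, clean_image):
    """Sketch of the abstract's two signals: a feature-level loss pushing
    noisy-image features to match clean-image features (feature invariance),
    and a pixel-level loss requiring noisy features to reconstruct the clean
    image (feature completeness). The L2 form of each loss is an assumption."""
    feature_loss = np.mean((noisy_feat - clean_feat) ** 2)       # feature level
    pixel_loss = np.mean((generated_clean - clean_image) ** 2)   # pixel level
    return feature_loss + pixel_loss
```

Both terms vanish only when the noisy image's features match the clean image's features and suffice to regenerate the clean image, which is the stated training goal.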
  • Patent number: 10984295
    Abstract: Font recognition and similarity determination techniques and systems are described. In a first example, localization techniques are described to train a model using machine learning (e.g., a convolutional neural network) using training images. The model is then used to localize text in a subsequently received image, and may do so automatically and without user intervention, e.g., without specifying any of the edges of a bounding box. In a second example, a deep neural network is directly learned as an embedding function of a model that is usable to determine font similarity. In a third example, techniques are described that leverage attributes described in metadata associated with fonts as part of font recognition and similarity determinations.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: April 20, 2021
    Assignee: Adobe Inc.
    Inventors: Zhaowen Wang, Luoqi Liu, Hailin Jin
  • Publication number: 20210103783
    Abstract: The present disclosure relates to a tag-based font recognition system that utilizes a multi-learning framework to develop and improve tag-based font recognition using deep learning neural networks. In particular, the tag-based font recognition system jointly trains a font tag recognition neural network with an implicit font classification attention model to generate font tag probability vectors that are enhanced by implicit font classification information. Indeed, the font recognition system weights the hidden layers of the font tag recognition neural network with implicit font information to improve the accuracy and predictability of the font tag recognition neural network, which results in improved retrieval of fonts in response to a font tag query. Accordingly, using the enhanced tag probability vectors, the tag-based font recognition system can accurately identify and recommend one or more fonts in response to a font tag query.
    Type: Application
    Filed: November 23, 2020
    Publication date: April 8, 2021
    Inventors: Zhaowen Wang, Tianlang Chen, Ning Xu, Hailin Jin
  • Patent number: 10963759
    Abstract: The present disclosure includes methods and systems for searching for digital visual media based on semantic and spatial information. In particular, one or more embodiments of the disclosed systems and methods identify digital visual media displaying targeted visual content in a targeted region based on a query term and a query area provided via a digital canvas. Specifically, the disclosed systems and methods can receive user input of a query term and a query area and provide the query term and query area to a query neural network to generate a query feature set. Moreover, the disclosed systems and methods can compare the query feature set to digital visual media feature sets. Further, based on the comparison, the disclosed systems and methods can identify digital visual media portraying targeted visual content corresponding to the query term within a targeted region corresponding to the query area.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: March 30, 2021
    Assignee: ADOBE INC.
    Inventors: Zhe Lin, Mai Long, Jonathan Brandt, Hailin Jin, Chen Fang
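The comparison step (query feature set versus per-image feature sets) reduces to a ranking problem; a minimal sketch follows. Cosine similarity and the single-vector feature sets are simplifying assumptions.

```python
import numpy as np

def rank_media(query_feat, media_feats):
    """Toy ranking step: compare a query feature set (derived from a query
    term and query area) against per-image feature sets by cosine similarity
    and return image ids, best match first. Names are illustrative."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {name: cosine(query_feat, feat) for name, feat in media_feats.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

In the described system the query feature set would come from a query neural network fed both the query term and the query area drawn on the canvas.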
  • Patent number: 10878298
    Abstract: The present disclosure relates to a tag-based font recognition system that utilizes a multi-learning framework to develop and improve tag-based font recognition using deep learning neural networks. In particular, the tag-based font recognition system jointly trains a font tag recognition neural network with an implicit font classification attention model to generate font tag probability vectors that are enhanced by implicit font classification information. Indeed, the font recognition system weights the hidden layers of the font tag recognition neural network with implicit font information to improve the accuracy and predictability of the font tag recognition neural network, which results in improved retrieval of fonts in response to a font tag query. Accordingly, using the enhanced tag probability vectors, the tag-based font recognition system can accurately identify and recommend one or more fonts in response to a font tag query.
    Type: Grant
    Filed: March 6, 2019
    Date of Patent: December 29, 2020
    Assignee: ADOBE INC.
    Inventors: Zhaowen Wang, Tianlang Chen, Ning Xu, Hailin Jin
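The weighting the abstract describes (hidden layers of the tag network modulated by implicit font-classification information) can be sketched as elementwise attention gating before the final tag prediction. The shapes, the gating form, and the softmax output are assumptions.

```python
import numpy as np

def attended_tag_probs(hidden, attention, tag_weights):
    """Sketch of the described weighting: hidden-layer activations of the font
    tag network are modulated elementwise by attention derived from an implicit
    font classifier, then mapped to a font tag probability vector. The
    elementwise gating and softmax are illustrative assumptions."""
    weighted = hidden * attention          # implicit-font attention gating
    logits = tag_weights @ weighted
    exp = np.exp(logits - logits.max())    # numerically stable softmax
    return exp / exp.sum()
```

Retrieval then ranks fonts by these enhanced tag probability vectors in response to a font tag query.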
  • Publication number: 20200357099
    Abstract: Techniques of inpainting video content include training a neural network to perform an inpainting operation on a video using only content from that video. For example, upon receiving video content including a sequence of initial frames, a computer generates a sequence of inputs corresponding to at least some of the sequence of initial frames, with each input including, for example, a uniform noise map. The computer then generates a convolutional neural network (CNN) using the sequence of inputs as the initial layer. The parameters of the CNN are adjusted according to a cost function, which has components including a flow generation loss component and a consistency loss component. The CNN then outputs, on a final layer, estimated image values in a sequence of final frames.
    Type: Application
    Filed: May 9, 2019
    Publication date: November 12, 2020
    Inventors: Mai Long, Zhaowen Wang, Ning Xu, John Philip Collomosse, Haotian Zhang, Hailin Jin
  • Publication number: 20200336802
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for automatic tagging of videos. In particular, in one or more embodiments, the disclosed systems generate a set of tagged feature vectors (e.g., tagged feature vectors based on action-rich digital videos) to utilize to generate tags for an input digital video. For instance, the disclosed systems can extract a set of frames for the input digital video and generate feature vectors from the set of frames. In some embodiments, the disclosed systems generate aggregated feature vectors from the feature vectors. Furthermore, the disclosed systems can utilize the feature vectors (or aggregated feature vectors) to identify similar tagged feature vectors from the set of tagged feature vectors. Additionally, the disclosed systems can generate a set of tags for the input digital videos by aggregating one or more tags corresponding to identified similar tagged feature vectors.
    Type: Application
    Filed: April 16, 2019
    Publication date: October 22, 2020
    Inventors: Bryan Russell, Ruppesh Nalwaya, Markus Woodson, Joon-Young Lee, Hailin Jin
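The retrieval-and-aggregation step above is essentially a nearest-neighbor lookup over tagged feature vectors; a minimal sketch is below. The Euclidean distance and set-union pooling are assumptions standing in for the system's actual similarity and aggregation.

```python
import numpy as np

def tags_for_video(video_feat, tagged_feats, k=2):
    """Toy version of the retrieval step: find the k nearest tagged feature
    vectors to the input video's (aggregated) feature vector and pool their
    tags. The distance metric and pooling rule are assumptions."""
    dists = [(np.linalg.norm(video_feat - feat), tags) for feat, tags in tagged_feats]
    dists.sort(key=lambda pair: pair[0])
    pooled = set()
    for _, tags in dists[:k]:
        pooled.update(tags)  # aggregate tags from the nearest neighbors
    return sorted(pooled)
```

In the described system, `tagged_feats` would be built from action-rich digital videos, and `video_feat` from frames extracted from the input video.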
  • Patent number: 10803377
    Abstract: Techniques for predictively selecting a content presentation in a client-server computing environment are described. In an example, a content management system detects an interaction of a client with a server and accesses client features. Responses of the client to potential content presentations are predicted based on a multi-task neural network. The client features are mapped to input nodes and the potential content presentations are associated with tasks mapped to output nodes of the multi-task neural network. The tasks specify usages of the potential content presentations in response to the interaction with the server. In an example, the content management system selects the content presentation from the potential content presentations based on the predicted responses. For instance, the content presentation is selected based on having the highest likelihood. The content management system provides the content presentation to the client based on the task corresponding to the content presentation.
    Type: Grant
    Filed: February 25, 2016
    Date of Patent: October 13, 2020
    Assignee: Adobe Inc.
    Inventors: Anirban Roychowdhury, Trung Bui, John Kucera, Hung Bui, Hailin Jin
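The selection rule in the abstract (each task scores one candidate presentation; the highest predicted likelihood wins) can be sketched directly. The `task_models` mapping is an illustrative stand-in for the multi-task network's output nodes.

```python
def select_presentation(client_features, task_models):
    """Sketch of the selection rule: each task (one per candidate content
    presentation) predicts the client's response, and the presentation with
    the highest likelihood is chosen. `task_models` maps presentation names
    to scoring functions and is an illustrative assumption."""
    scores = {name: model(client_features) for name, model in task_models.items()}
    return max(scores, key=scores.get)  # highest predicted likelihood wins
```

The chosen presentation is then served to the client in response to its interaction with the server.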
  • Patent number: 10803231
    Abstract: The present disclosure describes a font retrieval system that utilizes a multi-learning framework to develop and improve tag-based font recognition using deep learning neural networks. In particular, the font retrieval system jointly utilizes a combined recognition/retrieval model to generate font affinity scores corresponding to a list of font tags. Further, based on the font affinity scores, the font retrieval system identifies one or more fonts to recommend in response to the list of font tags such that the one or more provided fonts fairly reflect each of the font tags. Indeed, the font retrieval system utilizes a trained font retrieval neural network to efficiently and accurately identify and retrieve fonts in response to a text font tag query.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: October 13, 2020
    Assignee: ADOBE INC.
    Inventors: Zhaowen Wang, Tianlang Chen, Ning Xu, Hailin Jin
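The abstract's requirement that returned fonts "fairly reflect each of the font tags" can be illustrated with min-pooling over per-tag affinity scores, so a font strong on one tag but weak on another ranks low. The min-pooling rule is an assumption standing in for the learned retrieval network.

```python
def retrieve_fonts(tag_query, affinity, top_n=1):
    """Toy retrieval step: given per-font affinity scores for each tag, rank
    fonts by their minimum affinity over the queried tags, so every returned
    font reflects every tag rather than excelling at just one. The min-pooling
    rule is an illustrative assumption."""
    scores = {font: min(tags[t] for t in tag_query) for font, tags in affinity.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

In the described system the affinity scores would come from the combined recognition/retrieval model rather than a lookup table.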
  • Publication number: 20200311186
    Abstract: The present disclosure relates to a font retrieval system that utilizes a multi-learning framework to develop and improve tag-based font recognition using deep learning neural networks. In particular, the font retrieval system jointly utilizes a combined recognition/retrieval model to generate font affinity scores corresponding to a list of font tags. Further, based on the font affinity scores, the font retrieval system identifies one or more fonts to recommend in response to the list of font tags such that the one or more provided fonts fairly reflect each of the font tags. Indeed, the font retrieval system utilizes a trained font retrieval neural network to efficiently and accurately identify and retrieve fonts in response to a text font tag query.
    Type: Application
    Filed: March 29, 2019
    Publication date: October 1, 2020
    Inventors: Zhaowen Wang, Tianlang Chen, Ning Xu, Hailin Jin
  • Patent number: 10783408
    Abstract: Systems and techniques for identification of fonts include receiving a selection of an area of an image including text, where the selection is received from within an application. The selected area of the image is input to a font matching module within the application. The font matching module identifies one or more fonts similar to the text in the selected area using a convolutional neural network. The one or more fonts similar to the text are displayed within the application and the selection and use of the one or more fonts is enabled within the application.
    Type: Grant
    Filed: June 19, 2017
    Date of Patent: September 22, 2020
    Assignee: ADOBE INC.
    Inventors: Zhaowen Wang, Sarah Aye Kong, I-Ming Pao, Hailin Jin, Alan Lee Erickson
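The in-application flow (crop the selected area, pass it to the font matching module, surface the top matches) is simple glue; a sketch follows. `crop` and `match_fonts` are hypothetical stand-ins for the application's hooks and the CNN-based matcher.

```python
def fonts_for_selection(image, region, crop, match_fonts, top_n=3):
    """Sketch of the described flow: crop the user-selected area, hand the
    patch to a font matching module (the CNN in the abstract), and return the
    top matches for display and use. `crop` and `match_fonts` are hypothetical
    stand-ins for the real application hooks."""
    patch = crop(image, region)
    ranked = match_fonts(patch)  # list of (font_name, score), best first
    return [name for name, _ in ranked[:top_n]]
```

The application would then display these fonts and enable their selection and use in place.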
  • Patent number: 10783409
    Abstract: Font replacement based on visual similarity is described. In one or more embodiments, a font descriptor includes multiple font features derived from a visual appearance of a font by a font visual similarity model. The font visual similarity model can be trained using a machine learning system that recognizes similarity between visual appearances of two different fonts. A source computing device embeds a font descriptor in a document, which is transmitted to a destination computing device. The destination compares the embedded font descriptor to font descriptors corresponding to local fonts. Based on distances between the embedded and the local font descriptors, at least one matching font descriptor is determined. The local font corresponding to the matching font descriptor is deemed similar to the original font. The destination computing device controls presentations of the document using the similar local font. Computation of font descriptors can be outsourced to a remote location.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: September 22, 2020
    Assignee: Adobe Inc.
    Inventors: Hailin Jin, Zhaowen Wang, Gavin Stuart Peter Miller
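The destination-side matching described above (compare the embedded descriptor against descriptors of local fonts, pick the nearest) can be sketched in a few lines. Euclidean distance is an assumption standing in for the learned visual-similarity metric.

```python
import numpy as np

def closest_local_font(embedded_descriptor, local_descriptors):
    """Sketch of the destination-side step: compare the document's embedded
    font descriptor against descriptors of locally installed fonts and return
    the nearest. Euclidean distance is an assumption for the learned metric."""
    return min(local_descriptors,
               key=lambda name: np.linalg.norm(local_descriptors[name] - embedded_descriptor))
```

The destination device then presents the document using this nearest local font in place of the original.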
  • Patent number: 10783431
    Abstract: Image search techniques and systems involving emotions are described. In one or more implementations, a digital medium environment of a content sharing service is described for image search result configuration and control based on a search request that indicates an emotion. The search request is received that includes one or more keywords and specifies an emotion. Images are located that are available for licensing by matching one or more tags associated with the image with the one or more keywords and as corresponding to the emotion. The emotion of the images is identified using one or more models that are trained using machine learning based at least in part on training images having tagged emotions. Output is controlled of a search result having one or more representations of the images that are selectable to license respective images from the content sharing service.
    Type: Grant
    Filed: November 11, 2015
    Date of Patent: September 22, 2020
    Assignee: Adobe Inc.
    Inventors: Zeke Koch, Gavin Stuart Peter Miller, Jonathan W. Brandt, Nathan A. Carr, Radomir Mech, Walter Wei-Tuh Chang, Scott D. Cohen, Hailin Jin
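The matching rule in the abstract (tags intersect the query keywords, and the model-predicted emotion matches the requested one) can be sketched as a filter. The catalog structure is an illustrative assumption.

```python
def search_images(keywords, emotion, catalog):
    """Toy filtering step for the described search: an image matches when its
    tags intersect the query keywords and its model-predicted emotion equals
    the requested emotion. The catalog structure is an assumption."""
    return [name for name, meta in catalog.items()
            if set(keywords) & set(meta["tags"]) and meta["emotion"] == emotion]
```

In the described service, the per-image emotion would be predicted by models trained on images with tagged emotions, and the results would be rendered as licensable search results.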
  • Patent number: 10778949
    Abstract: A robust system and method for estimating camera rotation in image sequences. A rotation-based reconstruction technique is described that is directed to performing reconstruction for image sequences with a zero or near-zero translation component. The technique may estimate only the rotation component of the camera motion in an image sequence, and may also estimate the camera intrinsic parameters if not known. Input to the technique may include an image sequence, and output may include the camera intrinsic parameters and the rotation parameters for all the images in the sequence. By estimating only the rotation component of camera motion, the technique assumes that the camera does not translate throughout the sequence. However, the camera is allowed to rotate and zoom arbitrarily. The technique may support both the case where the camera intrinsic parameters are known and the case where the camera intrinsic parameters are not known.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: September 15, 2020
    Assignee: Adobe Inc.
    Inventor: Hailin Jin
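The zero-translation setting this patent targets has a classical structure: two views from a purely rotating camera are related by the homography H = K R K⁻¹, so with known intrinsics K the rotation follows directly. The sketch below illustrates that relation; it is not the patent's estimation procedure itself.

```python
import numpy as np

def rotation_from_homography(H, K):
    """For a camera that only rotates (zero translation), two views are
    related by H = K @ R @ inv(K); with known intrinsics K the rotation R
    follows directly. This classical relation illustrates the setting the
    patent addresses, not its actual estimation procedure."""
    R = np.linalg.inv(K) @ H @ K
    # Project onto the rotation group to clean up scale and numerical noise.
    U, _, Vt = np.linalg.svd(R)
    return U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
```

When the intrinsics are unknown, as the abstract allows, they must be estimated jointly with the rotations from the image sequence.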
  • Publication number: 20200285916
    Abstract: The present disclosure relates to a tag-based font recognition system that utilizes a multi-learning framework to develop and improve tag-based font recognition using deep learning neural networks. In particular, the tag-based font recognition system jointly trains a font tag recognition neural network with an implicit font classification attention model to generate font tag probability vectors that are enhanced by implicit font classification information. Indeed, the font recognition system weights the hidden layers of the font tag recognition neural network with implicit font information to improve the accuracy and predictability of the font tag recognition neural network, which results in improved retrieval of fonts in response to a font tag query. Accordingly, using the enhanced tag probability vectors, the tag-based font recognition system can accurately identify and recommend one or more fonts in response to a font tag query.
    Type: Application
    Filed: March 6, 2019
    Publication date: September 10, 2020
    Inventors: Zhaowen Wang, Tianlang Chen, Ning Xu, Hailin Jin