Patents by Inventor Hailin Jin

Hailin Jin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190325275
    Abstract: Various embodiments describe active learning methods for training temporal action localization models used to localize actions in untrimmed videos. A trainable active learning selection function is used to select unlabeled samples that can improve the temporal action localization model the most. The selected unlabeled samples are then annotated and used to retrain the temporal action localization model. In some embodiments, the trainable active learning selection function includes a trainable performance prediction model that maps a video sample and a temporal action localization model to a predicted performance improvement for the temporal action localization model.
    Type: Application
    Filed: April 19, 2018
    Publication date: October 24, 2019
    Inventors: Joon-Young Lee, Hailin Jin, Fabian David Caba Heilbron
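
The selection loop this abstract describes can be sketched in a few lines. This is a minimal sketch, assuming a hypothetical predict_improvement function standing in for the trainable performance prediction model; none of these names come from the patent.

    # Active-learning selection for temporal action localization (sketch).
    # `predict_improvement` stands in for the patent's trainable
    # performance prediction model; all names here are illustrative.
    def select_samples(unlabeled_videos, localization_model,
                       predict_improvement, budget=10):
        """Return the `budget` unlabeled videos predicted to improve
        the localization model the most, for annotation and retraining."""
        scored = [(predict_improvement(v, localization_model), v)
                  for v in unlabeled_videos]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [video for _, video in scored[:budget]]

In the overall loop, the returned samples would be annotated, added to the labeled set, and the model retrained before the next selection round.
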
  • Publication number: 20190325277
    Abstract: Font replacement based on visual similarity is described. In one or more embodiments, a font descriptor includes multiple font features derived from a visual appearance of a font by a font visual similarity model. The font visual similarity model can be trained using a machine learning system that recognizes similarity between visual appearances of two different fonts. A source computing device embeds a font descriptor in a document, which is transmitted to a destination computing device. The destination computing device compares the embedded font descriptor to font descriptors corresponding to local fonts. Based on distances between the embedded and the local font descriptors, at least one matching font descriptor is determined. The local font corresponding to the matching font descriptor is deemed similar to the original font. The destination computing device controls presentations of the document using the similar local font. Computation of font descriptors can be outsourced to a remote location.
    Type: Application
    Filed: July 3, 2019
    Publication date: October 24, 2019
    Applicant: Adobe Inc.
    Inventors: Hailin Jin, Zhaowen Wang, Gavin Stuart Peter Miller
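
The matching step in the abstract above reduces to a nearest-neighbor search over font descriptors. A minimal NumPy sketch, assuming descriptors are fixed-length vectors compared by Euclidean distance; shapes and values are illustrative only.

    import numpy as np

    def match_font(embedded_descriptor, local_descriptors):
        """Index of the local font whose descriptor is closest to the
        descriptor embedded in the document (Euclidean distance)."""
        distances = np.linalg.norm(local_descriptors - embedded_descriptor,
                                   axis=1)
        return int(np.argmin(distances))

    # Three local fonts with 4-dimensional descriptors (made-up values).
    local = np.array([[0.1, 0.9, 0.2, 0.4],
                      [0.8, 0.1, 0.7, 0.3],
                      [0.2, 0.8, 0.3, 0.5]])
    embedded = np.array([0.12, 0.88, 0.22, 0.42])
    print(match_font(embedded, local))  # -> 0: the first font is closest
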
  • Patent number: 10453204
    Abstract: The present disclosure is directed towards systems and methods for generating a new aligned image from a plurality of burst images. The systems and methods subdivide a reference image into a plurality of local regions and a subsequent image into a plurality of corresponding local regions. Additionally, the systems and methods detect a plurality of feature points in each of the reference image and the subsequent image and determine matching feature point pairs between the reference image and the subsequent image. Based on the matching feature point pairs, the systems and methods determine at least one homography of the reference image to the subsequent image. Based on the homography, the systems and methods generate a new aligned image that is pixel-wise aligned to the reference image. Furthermore, the systems and methods refine boundaries between local regions of the new aligned image.
    Type: Grant
    Filed: August 14, 2017
    Date of Patent: October 22, 2019
    Assignee: Adobe Inc.
    Inventors: Zhaowen Wang, Hailin Jin
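
The pipeline in patent 10453204 (feature detection, matching, homography estimation, warping) maps closely onto standard OpenCV calls. The single-homography sketch below omits the patent's per-region homographies and boundary refinement, so treat it as a simplified illustration, not the patented method.

    import cv2
    import numpy as np

    def align_to_reference(reference, subsequent):
        """Warp `subsequent` so it is pixel-wise aligned to `reference`
        using one global homography."""
        orb = cv2.ORB_create()
        kp1, des1 = orb.detectAndCompute(reference, None)
        kp2, des2 = orb.detectAndCompute(subsequent, None)

        # Match feature points between the two burst frames.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)

        src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

        # Robustly estimate the homography and warp the subsequent frame.
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = reference.shape[:2]
        return cv2.warpPerspective(subsequent, H, (w, h))
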
  • Patent number: 10430455
    Abstract: Sketch and style based image retrieval in a digital medium environment is described. Initially, a user sketches an object (e.g., with a stylus) to be searched in connection with an image search. Styled images are selected to indicate a desired style of images to be returned by the search. A search request is generated based on the sketch and selected images. Responsive to the request, an image repository is searched to identify images having the desired object and styling. To search the image repository, a neural network is utilized that is capable of recognizing the desired object in images based on visual characteristics of the sketch and independently recognizing the desired styling in images based on visual characteristics of the selected images. This independent recognition allows desired styling to be specified by selecting images having the style but not the desired object. Images having the desired object and styling are returned.
    Type: Grant
    Filed: June 9, 2017
    Date of Patent: October 1, 2019
    Assignee: Adobe Inc.
    Inventors: Hailin Jin, John Philip Collomosse
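
The independent structure/style recognition described above can be summarized as two embedding distances scored jointly. A hedged NumPy sketch, with the sketch and style embeddings assumed to come from the patent's neural networks (stand-in arrays here):

    import numpy as np

    def retrieve(sketch_vec, style_vecs, repo_struct, repo_style, top_k=5):
        """Rank repository images by combined structure + style distance.
        sketch_vec: embedding of the user's sketch.
        style_vecs: embeddings of the selected style images.
        repo_struct / repo_style: per-image repository embeddings."""
        style_query = style_vecs.mean(axis=0)  # desired style, object-agnostic
        struct_dist = np.linalg.norm(repo_struct - sketch_vec, axis=1)
        style_dist = np.linalg.norm(repo_style - style_query, axis=1)
        return np.argsort(struct_dist + style_dist)[:top_k]  # lower is better

Averaging the style embeddings is one plausible way to keep styling independent of the sketched object, which is what lets style be specified with images that do not contain that object.
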
  • Publication number: 20190286647
    Abstract: Sketch and style based image retrieval in a digital medium environment is described. Initially, a user sketches an object to be searched in connection with an image search. Styled images are selected to indicate a desired style of images to be returned by the search. A search request is generated based on the sketch and selected images. Responsive to the request, an image repository is searched to identify images having the desired object and styling. To search the image repository, a neural network is utilized that is capable of recognizing the desired object in images based on visual characteristics of the sketch and independently recognizing the desired styling in images based on visual characteristics of the selected images. This independent recognition allows desired styling to be specified by selecting images having the style but not the desired object. Images having the desired object and styling are returned.
    Type: Application
    Filed: June 5, 2019
    Publication date: September 19, 2019
    Applicant: Adobe Inc.
    Inventors: Hailin Jin, John Philip Collomosse
  • Publication number: 20190272451
    Abstract: The present disclosure includes methods and systems for searching for digital visual media based on semantic and spatial information. In particular, one or more embodiments of the disclosed systems and methods identify digital visual media displaying targeted visual content in a targeted region based on a query term and a query area provided via a digital canvas. Specifically, the disclosed systems and methods can receive user input of a query term and a query area and provide the query term and query area to a query neural network to generate a query feature set. Moreover, the disclosed systems and methods can compare the query feature set to digital visual media feature sets. Further, based on the comparison, the disclosed systems and methods can identify digital visual media portraying targeted visual content corresponding to the query term within a targeted region corresponding to the query area.
    Type: Application
    Filed: May 20, 2019
    Publication date: September 5, 2019
    Inventors: Zhe Lin, Mai Long, Jonathan Brandt, Hailin Jin, Chen Fang
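
Once the query neural network has produced a query feature set, retrieval is a similarity ranking against precomputed media feature sets. A minimal sketch using cosine similarity; the feature shapes and the similarity choice are assumptions, not taken from the publication.

    import numpy as np

    def rank_media(query_features, media_features):
        """Rank media items by cosine similarity between the query
        feature set (from query term + query area) and each media
        feature set (one row per item)."""
        q = query_features / np.linalg.norm(query_features)
        m = media_features / np.linalg.norm(media_features, axis=1,
                                            keepdims=True)
        return np.argsort(-(m @ q))  # best match first
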
  • Patent number: 10389804
    Abstract: Content creation and sharing integration techniques and systems are described. In one or more implementations, techniques are described in which modifiable versions of content (e.g., images) are created and shared via a content sharing service such that image creation functionality used to create the images is preserved to permit continued creation using this functionality. In one or more additional implementations, image creation functionality employed by a creative professional to create content is leveraged to locate similar images from a content sharing service.
    Type: Grant
    Filed: November 11, 2015
    Date of Patent: August 20, 2019
    Assignee: Adobe Inc.
    Inventors: Zeke Koch, Gavin Stuart Peter Miller, Jonathan W. Brandt, Nathan A. Carr, Radomir Mech, Walter Wei-Tuh Chang, Scott D. Cohen, Hailin Jin
  • Patent number: 10380462
    Abstract: Font replacement based on visual similarity is described. In one or more embodiments, a font descriptor includes multiple font features derived from a visual appearance of a font by a font visual similarity model. The font visual similarity model can be trained using a machine learning system that recognizes similarity between visual appearances of two different fonts. A source computing device embeds a font descriptor in a document, which is transmitted to a destination computing device. The destination computing device compares the embedded font descriptor to font descriptors corresponding to local fonts. Based on distances between the embedded and the local font descriptors, at least one matching font descriptor is determined. The local font corresponding to the matching font descriptor is deemed similar to the original font. The destination computing device controls presentations of the document using the similar local font. Computation of font descriptors can be outsourced to a remote location.
    Type: Grant
    Filed: June 20, 2018
    Date of Patent: August 13, 2019
    Assignee: Adobe Inc.
    Inventors: Hailin Jin, Zhaowen Wang, Gavin Stuart Peter Miller
  • Patent number: 10368047
    Abstract: A stereoscopic six-degree-of-freedom viewing experience with a monoscopic 360-degree video is provided. A monoscopic 360-degree video of a subject scene can be processed by analyzing each frame to recover a three-dimensional geometric representation, and recover a camera motion path. Utilizing the recovered three-dimensional geometric representation and camera motion path, a dense three-dimensional geometric representation of the subject scene is generated. The processed video can be provided for stereoscopic display via a device. As motion of the device is detected, novel viewpoints can be stereoscopically synthesized for presentation in real time, so as to provide an immersive virtual reality experience based on the original monoscopic 360-degree video and the detected motion of the device.
    Type: Grant
    Filed: February 15, 2017
    Date of Patent: July 30, 2019
    Assignee: Adobe Inc.
    Inventors: Zhili Chen, Duygu Ceylan Aksit, Jingwei Huang, Hailin Jin
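
The real-time synthesis step amounts to rendering a novel viewpoint per eye from the recovered dense geometry. A heavily simplified depth-based reprojection sketch for one eye follows; real systems must handle occlusions and hole filling, and every name and constant here is illustrative, not from the patent.

    import numpy as np

    def reproject_eye(image, depth, K, baseline=0.032):
        """Forward-warp `image` to a horizontally offset eye position
        using per-pixel depth: disparity = focal * baseline / depth.
        No occlusion handling or hole filling (sketch only)."""
        h, w = depth.shape
        out = np.zeros_like(image)
        fx = K[0, 0]                 # focal length in pixels
        xs = np.arange(w)
        for y in range(h):
            disparity = fx * baseline / depth[y]
            new_x = np.clip((xs + disparity).astype(int), 0, w - 1)
            out[y, new_x] = image[y, xs]
        return out
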
  • Patent number: 10346727
    Abstract: The present disclosure includes methods and systems for searching for digital visual media based on semantic and spatial information. In particular, one or more embodiments of the disclosed systems and methods identify digital visual media displaying targeted visual content in a targeted region based on a query term and a query area provided via a digital canvas. Specifically, the disclosed systems and methods can receive user input of a query term and a query area and provide the query term and query area to a query neural network to generate a query feature set. Moreover, the disclosed systems and methods can compare the query feature set to digital visual media feature sets. Further, based on the comparison, the disclosed systems and methods can identify digital visual media portraying targeted visual content corresponding to the query term within a targeted region corresponding to the query area.
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: July 9, 2019
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Mai Long, Jonathan Brandt, Hailin Jin, Chen Fang
  • Publication number: 20190147304
    Abstract: The present disclosure relates to a font recognition system that employs a multi-task learning framework and training to improve font classification and remove negative side effects caused by intra-class variances of glyph content. For example, in one or more embodiments, the font recognition system trains a hybrid font recognition neural network that includes two or more font recognition neural networks and a weight prediction neural network. The hybrid font recognition neural network determines and generates classification weights based on which font recognition neural network within the hybrid font recognition neural network is best suited to classify the font in an input text image. By employing a hybrid trained font classification neural network, the font recognition system can improve overall font recognition as well as remove the negative side effects from diverse glyph content.
    Type: Application
    Filed: November 14, 2017
    Publication date: May 16, 2019
    Inventors: Yang Liu, Zhaowen Wang, I-Ming Pao, Hailin Jin
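
The weight prediction network described above acts like a mixture-of-experts gate: it scores how much each constituent recognizer should be trusted for a given image, and the final class distribution is the weighted sum. A minimal sketch with stand-in models; nothing here is the patent's actual architecture.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def hybrid_classify(features, experts, weight_net):
        """`experts`: functions mapping features to font-class logits.
        `weight_net`: function mapping features to one logit per expert.
        Both are placeholders for the patent's neural networks."""
        weights = softmax(weight_net(features))        # trust per expert
        probs = [softmax(e(features)) for e in experts]
        return sum(w * p for w, p in zip(weights, probs))
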
  • Publication number: 20190138860
    Abstract: The present disclosure relates to a font recognition system that employs a multi-task learning framework and adversarial training to improve font classification and remove negative side effects caused by intra-class variances of glyph content. For example, in one or more embodiments, the font recognition system adversarial trains a font recognition neural network by minimizing font classification loss while at the same time maximizing glyph classification loss. By employing an adversarially trained font classification neural network, the font recognition system can improve overall font recognition by removing the negative side effects from diverse glyph content.
    Type: Application
    Filed: November 8, 2017
    Publication date: May 9, 2019
    Inventors: Yang Liu, Zhaowen Wang, Hailin Jin
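
Minimizing one classification loss while maximizing another over shared features is commonly implemented with a gradient reversal layer. The patent does not name that construction, so the PyTorch sketch below is one plausible realization, not the patented training procedure.

    import torch

    class GradReverse(torch.autograd.Function):
        """Identity on the forward pass; negates the gradient on the
        backward pass, so the shared features are pushed to *hurt*
        glyph classification."""
        @staticmethod
        def forward(ctx, x):
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -grad_output

    def adversarial_loss(features, font_head, glyph_head,
                         font_labels, glyph_labels):
        ce = torch.nn.functional.cross_entropy
        font_loss = ce(font_head(features), font_labels)
        glyph_loss = ce(glyph_head(GradReverse.apply(features)), glyph_labels)
        return font_loss + glyph_loss  # one scalar to minimize
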
  • Publication number: 20190130231
    Abstract: The present disclosure relates to a font recognition system that employs a multi-task learning framework to jointly improve font classification and remove negative side effects caused by intra-class variances of glyph content. For example, in one or more embodiments, the font recognition system can jointly train a font recognition neural network using a font classification loss model and triplet loss model to generate a deep learning neural network that provides improved font classifications. In addition, the font recognition system can employ the trained font recognition neural network to efficiently recognize fonts within input images as well as provide other suggested fonts.
    Type: Application
    Filed: October 27, 2017
    Publication date: May 2, 2019
    Inventors: Yang Liu, Zhaowen Wang, Hailin Jin
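
The joint objective above (a font classification loss plus a triplet loss over font embeddings) can be written directly with PyTorch's built-in losses; the margin and the weighting factor are illustrative choices, not values from the publication.

    import torch

    cross_entropy = torch.nn.CrossEntropyLoss()
    triplet = torch.nn.TripletMarginLoss(margin=0.2)

    def joint_loss(logits, labels, anchor, positive, negative, alpha=1.0):
        """Classify the font correctly while pulling same-font embeddings
        together and pushing different-font embeddings apart."""
        return cross_entropy(logits, labels) + \
               alpha * triplet(anchor, positive, negative)
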
  • Patent number: 10268928
    Abstract: A combined structure and style network is described. Initially, a large set of training images, having a variety of different styles, is obtained. Each of these training images is associated with one of multiple different predetermined style categories indicating the image's style and one of multiple different predetermined semantic categories indicating objects depicted in the image. Groups of these images are formed, such that each group includes an anchor image having one of the styles, a positive-style example image having the same style as the anchor image, and a negative-style example image having a different style. Based on those groups, an image style network is generated to identify images having desired styling by recognizing visual characteristics of the different styles. The image style network is further combined, according to a unifying training technique, with an image structure network configured to recognize desired objects in images irrespective of image style.
    Type: Grant
    Filed: June 7, 2017
    Date of Patent: April 23, 2019
    Assignee: Adobe Inc.
    Inventors: Hailin Jin, John Philip Collomosse
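
The anchor / positive-style / negative-style grouping above is standard triplet sampling keyed on style labels. A small sketch of that sampling step, using uniform random choice since the patent does not specify a mining strategy:

    import random

    def sample_style_triplet(images_by_style):
        """images_by_style: dict mapping style label -> list of images.
        Assumes at least two styles and two images per style.
        Returns (anchor, positive, negative)."""
        style, other = random.sample(list(images_by_style), 2)
        anchor, positive = random.sample(images_by_style[style], 2)
        negative = random.choice(images_by_style[other])
        return anchor, positive, negative
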
  • Publication number: 20190108203
    Abstract: The present disclosure relates to an asymmetric font pairing system that efficiently pairs digital fonts. For example, in one or more embodiments, the asymmetric font pairing system automatically identifies and provides users with visually aesthetic font pairs for use in different sections of an electronic document. In particular, the asymmetric font pairing system learns visually aesthetic font pairs using joint symmetric and asymmetric compatibility metric learning. In addition, the asymmetric font pairing system provides compact compatibility spaces (e.g., a symmetric compatibility space and an asymmetric compatibility space) to computing devices (e.g., client devices and server devices), which enable the computing devices to quickly and efficiently provide font pairs to users.
    Type: Application
    Filed: October 11, 2017
    Publication date: April 11, 2019
    Inventors: Zhaowen Wang, Hailin Jin, Aaron Phillip Hertzmann, Shuhui Jiang
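
Asymmetry here means compatibility(header font, body font) need not equal compatibility(body font, header font). One common way to realize that is to project each font through two different learned maps, one per role; the projections below are stand-ins for the publication's learned compatibility spaces.

    import numpy as np

    def asymmetric_score(font_a, font_b, P_first, P_second):
        """Compatibility of `font_a` as the header font and `font_b`
        as the body font. Because P_first != P_second, swapping the
        fonts changes the score, making the metric asymmetric."""
        return float((P_first @ font_a) @ (P_second @ font_b))
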
  • Patent number: 10249061
    Abstract: Content creation and sharing integration techniques and systems are described. In one or more implementations, techniques are described in which modifiable versions of content (e.g., images) are created and shared via a content sharing service such that image creation functionality used to create the images is preserved to permit continued creation using this functionality. In one or more additional implementations, image creation functionality employed by a creative professional to create content is leveraged to locate similar images from a content sharing service.
    Type: Grant
    Filed: November 11, 2015
    Date of Patent: April 2, 2019
    Assignee: Adobe Inc.
    Inventors: Zeke Koch, Gavin Stuart Peter Miller, Jonathan W. Brandt, Nathan A. Carr, Radomir Mech, Walter Wei-Tuh Chang, Scott D. Cohen, Hailin Jin
  • Patent number: 10216766
    Abstract: A framework is provided for associating images with topics utilizing embedding learning. The framework is trained utilizing images, each having multiple visual characteristics and multiple keyword tags associated therewith. Visual features are computed from the visual characteristics utilizing a convolutional neural network and an image feature vector is generated therefrom. The keyword tags are utilized to generate a weighted word vector (or “soft topic feature vector”) for each image by calculating a weighted average of word vector representations that represent the keyword tags associated with the image. The image feature vector and the soft topic feature vector are aligned in a common embedding space and a relevancy score is computed for each of the keyword tags. Once trained, the framework can automatically tag images and a text-based search engine can rank image relevance with respect to queried keywords based upon predicted relevancy scores.
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: February 26, 2019
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xiaohui Shen, Jianming Zhang, Hailin Jin, Yingwei Li
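
The "soft topic feature vector" above is a weighted average of the word vectors of an image's keyword tags, and the relevancy score is a similarity in the common embedding space. A minimal sketch; the embedding table, weights, and the choice of cosine similarity are illustrative assumptions.

    import numpy as np

    def soft_topic_vector(tags, weights, word_vectors):
        """Weighted average of word vectors for an image's keyword tags.
        word_vectors: dict tag -> vector; weights: one weight per tag."""
        vecs = np.stack([word_vectors[t] for t in tags])
        w = np.asarray(weights, dtype=float)
        return (w[:, None] * vecs).sum(axis=0) / w.sum()

    def relevancy(image_vec, topic_vec):
        """Cosine similarity of the image feature vector and the soft
        topic vector in the common embedding space."""
        return float(image_vec @ topic_vec /
                     (np.linalg.norm(image_vec) * np.linalg.norm(topic_vec)))
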
  • Publication number: 20190057527
    Abstract: Techniques and systems are described for style-aware patching of a digital image in a digital medium environment. For example, a digital image creation system generates style data for a portion to be filled of a digital image, indicating a style of an area surrounding the portion. The digital image creation system also generates content data for the portion indicating content of the digital image of the area surrounding the portion. The digital image creation system selects a source digital image based on similarity of both style and content of the source digital image at a location of the patch to the style data and content data. The digital image creation system transforms the style of the source digital image based on the style data and generates the patch from the source digital image in the transformed style for incorporation into the portion to be filled of the digital image.
    Type: Application
    Filed: August 17, 2017
    Publication date: February 21, 2019
    Applicant: Adobe Systems Incorporated
    Inventors: Hailin Jin, John Philip Collomosse, Brian L. Price
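
Selecting the source image by joint style and content similarity, as this abstract describes, can be sketched as a combined-distance ranking over candidate images; the embeddings and the weighting are stand-ins, not the patented selection criterion.

    import numpy as np

    def pick_source(style_q, content_q, cand_style, cand_content,
                    w_style=0.5):
        """Choose the candidate whose style and content around the hole
        are jointly closest to the target's style data and content data."""
        d_style = np.linalg.norm(cand_style - style_q, axis=1)
        d_content = np.linalg.norm(cand_content - content_q, axis=1)
        return int(np.argmin(w_style * d_style + (1 - w_style) * d_content))
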
  • Patent number: 10198590
    Abstract: Content creation collection and navigation techniques and systems are described. In one example, a representative image is used by a content sharing service to interact with a collection of images provided as part of a search result. In another example, a user interface image navigation control is configured to support user navigation through images based on one or more metrics. In a further example, a user interface image navigation control is configured to support user navigation through images based on one or more metrics identified for an object selected from the image. In yet another example, collections of images are leveraged as part of content creation. In another example, data obtained from a content sharing service is leveraged to indicate suitability of images of a user for licensing as part of the service.
    Type: Grant
    Filed: November 11, 2015
    Date of Patent: February 5, 2019
    Assignee: Adobe Inc.
    Inventors: Zeke Koch, Gavin Stuart Peter Miller, Jonathan W. Brandt, Nathan A. Carr, Radomir Mech, Walter Wei-Tuh Chang, Scott D. Cohen, Hailin Jin
  • Publication number: 20180373979
    Abstract: The present disclosure includes methods and systems for generating captions for digital images. In particular, the disclosed systems and methods can train an image encoder neural network and a sentence decoder neural network to generate a caption from an input digital image. For instance, in one or more embodiments, the disclosed systems and methods train an image encoder neural network (e.g., a character-level convolutional neural network) utilizing a semantic similarity constraint, training images, and training captions. Moreover, the disclosed systems and methods can train a sentence decoder neural network (e.g., a character-level recurrent neural network) utilizing training sentences and an adversarial classifier.
    Type: Application
    Filed: June 22, 2017
    Publication date: December 27, 2018
    Inventors: Zhaowen Wang, Shuai Tang, Hailin Jin, Chen Fang
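
The encoder/decoder split above (a convolutional image encoder feeding a character-level recurrent decoder) can be outlined in a few lines of PyTorch. This skeleton omits the semantic similarity constraint and the adversarial classifier the abstract mentions, and all dimensions are illustrative.

    import torch
    import torch.nn as nn

    class CharCaptioner(nn.Module):
        """Image features -> initial hidden state -> character-level GRU."""
        def __init__(self, feat_dim=2048, hidden=512, vocab=128):
            super().__init__()
            self.encode = nn.Linear(feat_dim, hidden)  # stand-in encoder head
            self.embed = nn.Embedding(vocab, hidden)
            self.gru = nn.GRU(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab)

        def forward(self, image_features, char_ids):
            h0 = torch.tanh(self.encode(image_features)).unsqueeze(0)
            x = self.embed(char_ids)        # (batch, seq, hidden)
            y, _ = self.gru(x, h0)
            return self.out(y)              # per-step character logits
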