Patents by Inventor Hailin Jin

Hailin Jin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180365536
    Abstract: Systems and techniques for identification of fonts include receiving a selection of an area of an image including text, where the selection is received from within an application. The selected area of the image is input to a font matching module within the application. The font matching module identifies one or more fonts similar to the text in the selected area using a convolutional neural network. The one or more fonts similar to the text are displayed within the application and the selection and use of the one or more fonts is enabled within the application.
    Type: Application
    Filed: June 19, 2017
    Publication date: December 20, 2018
    Inventors: Zhaowen Wang, Sarah Aye Kong, I-Ming Pao, Hailin Jin, Alan Lee Erickson
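The abstract above describes ranking known fonts by their similarity to text in a user-selected image region. A minimal sketch of that ranking step, assuming the CNN has already produced fixed-length embedding vectors (the `match_fonts` helper and the example font vectors are illustrative, not from the patent):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_fonts(region_embedding, font_embeddings, top_k=2):
    """Rank known fonts by similarity to the embedding of the selected
    image region (the CNN producing the embeddings is elided here)."""
    scored = [(name, cosine_similarity(region_embedding, vec))
              for name, vec in font_embeddings.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in scored[:top_k]]

fonts = {
    "Garamond":  [0.9, 0.1, 0.0],
    "Helvetica": [0.1, 0.9, 0.2],
    "Courier":   [0.0, 0.2, 0.9],
}
print(match_fonts([0.8, 0.2, 0.1], fonts))  # Garamond ranks first
```

The application would then display the top-ranked fonts and let the user apply one directly, as the abstract describes.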
  • Publication number: 20180357519
    Abstract: A combined structure and style network is described. Initially, a large set of training images, having a variety of different styles, is obtained. Each of these training images is associated with one of multiple different predetermined style categories indicating the image's style and one of multiple different predetermined semantic categories indicating objects depicted in the image. Groups of these images are formed, such that each group includes an anchor image having one of the styles, a positive-style example image having the same style as the anchor image, and a negative-style example image having a different style. Based on those groups, an image style network is generated to identify images having desired styling by recognizing visual characteristics of the different styles. The image style network is further combined, according to a unifying training technique, with an image structure network configured to recognize desired objects in images irrespective of image style.
    Type: Application
    Filed: June 7, 2017
    Publication date: December 13, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Hailin Jin, John Philip Collomosse
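The anchor/positive/negative grouping described above is the standard setup for a triplet ranking loss. A minimal sketch of that loss on style embeddings, assuming squared Euclidean distance and a unit margin (both choices are illustrative; the patent does not fix them):

```python
def squared_distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet ranking loss: pull the same-style positive
    toward the anchor, push the different-style negative away."""
    return max(0.0, squared_distance(anchor, positive)
                    - squared_distance(anchor, negative) + margin)

# A well-separated triplet incurs zero loss ...
print(triplet_loss([0, 0], [0.1, 0], [3, 3]))   # 0.0
# ... while a violating triplet is penalized.
print(triplet_loss([0, 0], [2, 2], [0.5, 0]))   # 8.75
```

Training the style network against this loss over many such groups teaches it to cluster images by visual style regardless of the objects they depict.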
  • Publication number: 20180357259
    Abstract: Sketch and style based image retrieval in a digital medium environment is described. Initially, a user sketches an object (e.g., with a stylus) to be searched in connection with an image search. Styled images are selected to indicate a desired style of images to be returned by the search. A search request is generated based on the sketch and selected images. Responsive to the request, an image repository is searched to identify images having the desired object and styling. To search the image repository, a neural network is utilized that is capable of recognizing the desired object in images based on visual characteristics of the sketch and independently recognizing the desired styling in images based on visual characteristics of the selected images. This independent recognition allows desired styling to be specified by selecting images having the style but not the desired object. Images having the desired object and styling are returned.
    Type: Application
    Filed: June 9, 2017
    Publication date: December 13, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Hailin Jin, John Philip Collomosse
  • Patent number: 10140261
    Abstract: Font graphs are defined having a finite set of nodes representing fonts and a finite set of undirected edges denoting similarities between fonts. The font graphs enable users to browse and identify similar fonts. Indications corresponding to a degree of similarity between connected nodes may be provided. A selection of a desired font or characteristics associated with one or more attributes of the desired font is received from a user interacting with the font graph. The font graph is dynamically redefined based on the selection.
    Type: Grant
    Filed: May 23, 2014
    Date of Patent: November 27, 2018
    Assignee: ADOBE SYSTEMS INCORPORATED
    Inventors: Jianchao Yang, Hailin Jin, Jonathan Brandt
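The font graph described above, with fonts as nodes and similarity-weighted undirected edges, can be sketched as a small adjacency structure (the `FontGraph` class and the example fonts and weights are illustrative, not from the patent):

```python
class FontGraph:
    """Undirected graph whose nodes are fonts and whose weighted edges
    denote pairwise similarity (higher weight = more similar)."""

    def __init__(self):
        self.edges = {}  # font -> {neighbor: similarity}

    def add_edge(self, font_a, font_b, similarity):
        # Edges are undirected, so record both directions.
        self.edges.setdefault(font_a, {})[font_b] = similarity
        self.edges.setdefault(font_b, {})[font_a] = similarity

    def similar_fonts(self, font):
        """Neighbors of `font`, most similar first."""
        neighbors = self.edges.get(font, {})
        return sorted(neighbors, key=neighbors.get, reverse=True)

graph = FontGraph()
graph.add_edge("Garamond", "Caslon", 0.92)
graph.add_edge("Garamond", "Helvetica", 0.35)
graph.add_edge("Helvetica", "Arial", 0.97)
print(graph.similar_fonts("Garamond"))  # ['Caslon', 'Helvetica']
```

Browsing then amounts to walking from a selected node to its highest-weight neighbors, and the patent's dynamic redefinition would rebuild this structure around the user's selection.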
  • Publication number: 20180300592
    Abstract: Font replacement based on visual similarity is described. In one or more embodiments, a font descriptor includes multiple font features derived from a visual appearance of a font by a font visual similarity model. The font visual similarity model can be trained using a machine learning system that recognizes similarity between visual appearances of two different fonts. A source computing device embeds a font descriptor in a document, which is transmitted to a destination computing device. The destination compares the embedded font descriptor to font descriptors corresponding to local fonts. Based on distances between the embedded and the local font descriptors, at least one matching font descriptor is determined. The local font corresponding to the matching font descriptor is deemed similar to the original font. The destination computing device controls presentations of the document using the similar local font. Computation of font descriptors can be outsourced to a remote location.
    Type: Application
    Filed: June 20, 2018
    Publication date: October 18, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Hailin Jin, Zhaowen Wang, Gavin Stuart Peter Miller
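The matching step on the destination device, choosing the local font whose descriptor lies nearest the descriptor embedded in the document, can be sketched as follows (Euclidean distance and the example descriptors are assumptions for illustration):

```python
import math

def descriptor_distance(d1, d2):
    """Euclidean distance between two font descriptors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

def closest_local_font(embedded_descriptor, local_fonts):
    """Pick the installed font whose descriptor is nearest to the
    descriptor embedded in the document."""
    return min(local_fonts,
               key=lambda name: descriptor_distance(embedded_descriptor,
                                                    local_fonts[name]))

local = {
    "Arial":   [0.2, 0.8, 0.1],
    "Georgia": [0.9, 0.1, 0.4],
}
# The document was authored with a font visually resembling Georgia.
print(closest_local_font([0.85, 0.15, 0.35], local))  # Georgia
```

The destination device would then render the document with the selected substitute font, as the abstract describes.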
  • Publication number: 20180267997
    Abstract: A framework is provided for associating images with topics utilizing embedding learning. The framework is trained utilizing images, each having multiple visual characteristics and multiple keyword tags associated therewith. Visual features are computed from the visual characteristics utilizing a convolutional neural network and an image feature vector is generated therefrom. The keyword tags are utilized to generate a weighted word vector (or “soft topic feature vector”) for each image by calculating a weighted average of word vector representations that represent the keyword tags associated with the image. The image feature vector and the soft topic feature vector are aligned in a common embedding space and a relevancy score is computed for each of the keyword tags. Once trained, the framework can automatically tag images and a text-based search engine can rank image relevance with respect to queried keywords based upon predicted relevancy scores.
    Type: Application
    Filed: March 20, 2017
    Publication date: September 20, 2018
    Inventors: Zhe Lin, Xiaohui Shen, Jianming Zhang, Hailin Jin, Yingwei Li
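The soft topic feature vector described above is a weighted average of the word vectors of an image's keyword tags. A minimal sketch (the two-dimensional word vectors and tag weights are made up for illustration):

```python
def soft_topic_vector(tags, word_vectors, tag_weights):
    """Weighted average of the word-vector representations of an
    image's keyword tags (the 'soft topic' feature vector)."""
    dim = len(next(iter(word_vectors.values())))
    total_weight = sum(tag_weights[t] for t in tags)
    vec = [0.0] * dim
    for t in tags:
        w = tag_weights[t] / total_weight  # normalized tag weight
        vec = [v + w * x for v, x in zip(vec, word_vectors[t])]
    return vec

word_vectors = {"beach": [1.0, 0.0], "sunset": [0.0, 1.0]}
weights = {"beach": 3.0, "sunset": 1.0}
print(soft_topic_vector(["beach", "sunset"], word_vectors, weights))
# [0.75, 0.25]
```

Training then aligns this vector with the CNN image feature vector in a common embedding space, so relevance scores for tags can be read off as similarities in that space.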
  • Publication number: 20180267996
    Abstract: A framework is provided for associating dense images with topics. The framework is trained utilizing images, each having multiple regions, multiple visual characteristics and multiple keyword tags associated therewith. For each region of each image, visual features are computed from the visual characteristics utilizing a convolutional neural network, and an image feature vector is generated from the visual features. The keyword tags are utilized to generate a weighted word vector for each image by calculating a weighted average of word vector representations representing keyword tags associated with the image. The image feature vector and the weighted word vector are aligned in a common embedding space and a heat map is computed for the image. Once trained, the framework can be utilized to automatically tag images and rank the relevance of images with respect to queried keywords based upon associated heat maps.
    Type: Application
    Filed: March 20, 2017
    Publication date: September 20, 2018
    Inventors: Zhe Lin, Xiaohui Shen, Jianming Zhang, Hailin Jin, Yingwei Li
  • Patent number: 10074042
    Abstract: Font recognition and similarity determination techniques and systems are described. In a first example, localization techniques are described to train a model using machine learning (e.g., a convolutional neural network) using training images. The model is then used to localize text in a subsequently received image, and may do so automatically and without user intervention, e.g., without specifying any of the edges of a bounding box. In a second example, a deep neural network is directly learned as an embedding function of a model that is usable to determine font similarity. In a third example, techniques are described that leverage attributes described in metadata associated with fonts as part of font recognition and similarity determinations.
    Type: Grant
    Filed: October 6, 2015
    Date of Patent: September 11, 2018
    Assignee: ADOBE SYSTEMS INCORPORATED
    Inventors: Zhaowen Wang, Luoqi Liu, Hailin Jin
  • Patent number: 10068317
    Abstract: Methods and apparatus for constraining solution space in image processing techniques may use the metadata for a set of images to constrain an image processing solution to a smaller solution space. In one embodiment, a process may require N parameters for processing an image. A determination may be made from metadata that multiple images were captured with the same camera/lens and with the same settings. A set of values may be estimated for the N parameters from data in one or more of the images. The process may then be applied to each of images using the set of values. In one embodiment, a value for a parameter of a process may be estimated for an image. If the estimated value deviates substantially from a value for the parameter in the metadata, the metadata value is used in the process instead of the estimated value.
    Type: Grant
    Filed: November 21, 2012
    Date of Patent: September 4, 2018
    Assignee: ADOBE SYSTEMS INCORPORATED
    Inventors: Simon Chen, Jen-Chan Chien, Hailin Jin
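The fallback rule in the last part of the abstract, using the metadata value when the image-based estimate deviates substantially from it, can be sketched directly (the function name, tolerance, and focal-length example are assumptions for illustration):

```python
def choose_parameter(estimated, metadata_value, tolerance):
    """Fall back to the metadata value when the image-based estimate
    deviates substantially from it."""
    if metadata_value is not None and abs(estimated - metadata_value) > tolerance:
        return metadata_value
    return estimated

# Estimated focal length close to the EXIF value: keep the estimate.
print(choose_parameter(50.4, 50.0, tolerance=2.0))  # 50.4
# Estimate far off: trust the metadata instead.
print(choose_parameter(85.0, 50.0, tolerance=2.0))  # 50.0
```

The same idea scales up to the batch case in the abstract: once metadata shows several images share a camera, lens, and settings, one estimated parameter set can be reused across all of them.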
  • Publication number: 20180239995
    Abstract: Font recognition and similarity determination techniques and systems are described. In a first example, localization techniques are described to train a model using machine learning (e.g., a convolutional neural network) using training images. The model is then used to localize text in a subsequently received image, and may do so automatically and without user intervention, e.g., without specifying any of the edges of a bounding box. In a second example, a deep neural network is directly learned as an embedding function of a model that is usable to determine font similarity. In a third example, techniques are described that leverage attributes described in metadata associated with fonts as part of font recognition and similarity determinations.
    Type: Application
    Filed: April 25, 2018
    Publication date: August 23, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Zhaowen Wang, Luoqi Liu, Hailin Jin
  • Publication number: 20180234669
    Abstract: Systems and methods are provided for delivering a stereoscopic six-degree-of-freedom viewing experience from a monoscopic 360-degree video. A monoscopic 360-degree video of a subject scene can be preprocessed by analyzing each frame to recover a three-dimensional geometric representation of the subject scene, and further recover a camera motion path that includes various parameters associated with the camera, such as orientation, translational movement, and the like, as evidenced by the recording. Utilizing the recovered three-dimensional geometric representation of the subject scene and recovered camera motion path, a dense three-dimensional geometric representation of the subject scene is generated utilizing random assignment and propagation operations. Once preprocessing is complete, the processed video can be provided for stereoscopic display via a device, such as a head-mounted display.
    Type: Application
    Filed: February 15, 2017
    Publication date: August 16, 2018
    Inventors: Zhili Chen, Duygu Ceylan Aksit, Jingwei Huang, Hailin Jin
  • Patent number: 10026020
    Abstract: Embedding space for images with multiple text labels is described. In the embedding space both text labels and image regions are embedded. The text labels embedded describe semantic concepts that can be exhibited in image content. The embedding space is trained to semantically relate the embedded text labels so that labels like “sun” and “sunset” are more closely related than “sun” and “bird”. Training the embedding space also includes mapping representative images, having image content which exemplifies the semantic concepts, to respective text labels. Unlike conventional techniques that embed an entire training image into the embedding space for each text label associated with the training image, the techniques described herein process a training image to generate regions that correspond to the multiple text labels. The regions of the training image are then embedded into the training space in a manner that maps the regions to the corresponding text labels.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: July 17, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Hailin Jin, Zhou Ren, Zhe Lin, Chen Fang
  • Patent number: 10007868
    Abstract: Font replacement based on visual similarity is described. In one or more embodiments, a font descriptor includes multiple font features derived from a visual appearance of a font by a font visual similarity model. The font visual similarity model can be trained using a machine learning system that recognizes similarity between visual appearances of two different fonts. A source computing device embeds a font descriptor in a document, which is transmitted to a destination computing device. The destination compares the embedded font descriptor to font descriptors corresponding to local fonts. Based on distances between the embedded and the local font descriptors, at least one matching font descriptor is determined. The local font corresponding to the matching font descriptor is deemed similar to the original font. The destination computing device controls presentations of the document using the similar local font. Computation of font descriptors can be outsourced to a remote location.
    Type: Grant
    Filed: September 19, 2016
    Date of Patent: June 26, 2018
    Assignee: ADOBE SYSTEMS INCORPORATED
    Inventors: Hailin Jin, Zhaowen Wang, Gavin Stuart Peter Miller
  • Publication number: 20180174070
    Abstract: This disclosure involves personalizing user experiences with electronic content based on application usage data. For example, a user representation model that facilitates content recommendations is iteratively trained with action histories from a content manipulation application. Each iteration involves selecting, from an action history for a particular user, an action sequence including a target action. An initial output is computed in each iteration by applying a probability function to the selected action sequence and a user representation vector for the particular user. The user representation vector is adjusted to maximize an output that is generated by applying the probability function to the action sequence and the user representation vector. This iterative training process generates a user representation model, which includes a set of adjusted user representation vectors, that facilitates content recommendations corresponding to users' usage pattern in the content manipulation application.
    Type: Application
    Filed: December 16, 2016
    Publication date: June 21, 2018
    Inventors: Matthew Hoffman, Longqi Yang, Hailin Jin, Chen Fang
  • Publication number: 20180158199
    Abstract: The present disclosure is directed towards systems and methods for generating a new aligned image from a plurality of burst images. The systems and methods subdivide a reference image into a plurality of local regions and a subsequent image into a plurality of corresponding local regions. Additionally, the systems and methods detect a plurality of feature points in each of the reference image and the subsequent image and determine matching feature point pairs between the reference image and the subsequent image. Based on the matching feature point pairs, the systems and methods determine at least one homography of the reference image to the subsequent image. Based on the homography, the systems and methods generate a new aligned image that is pixel-wise aligned to the reference image. Furthermore, the systems and methods refine boundaries between local regions of the new aligned image.
    Type: Application
    Filed: August 14, 2017
    Publication date: June 7, 2018
    Inventors: Zhaowen Wang, Hailin Jin
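The alignment described above estimates a homography from matched feature point pairs. A much simpler translation-only stand-in illustrates the idea of recovering a frame-to-frame motion model from matches and using it to map points back into the reference frame (a real homography has eight degrees of freedom and would typically be fit robustly, e.g. with RANSAC):

```python
def estimate_translation(matched_pairs):
    """Average displacement of matched feature points between the
    reference image and a subsequent burst frame (a translation-only
    stand-in for the homography the patent estimates)."""
    n = len(matched_pairs)
    dx = sum(q[0] - p[0] for p, q in matched_pairs) / n
    dy = sum(q[1] - p[1] for p, q in matched_pairs) / n
    return dx, dy

def align_point(point, translation):
    """Map a point from the subsequent frame back into the reference frame."""
    return (point[0] - translation[0], point[1] - translation[1])

# Three feature matches: (reference point, subsequent-frame point).
pairs = [((10, 10), (13, 12)), ((40, 25), (43, 27)), ((5, 60), (8, 62))]
shift = estimate_translation(pairs)
print(shift)                       # (3.0, 2.0)
print(align_point((13, 12), shift))  # (10.0, 10.0)
```

Estimating one such model per local region, then blending at region boundaries, gives the per-region alignment and boundary refinement the abstract describes.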
  • Patent number: 9992387
    Abstract: In techniques for video denoising using optical flow, image frames of video content include noise that corrupts the video content. A reference frame is selected, and matching patches to an image patch in the reference frame are determined from within the reference frame. A noise estimate is computed for previous and subsequent image frames relative to the reference frame. The noise estimate for an image frame is computed based on optical flow, and is usable to determine a contribution of similar motion patches to denoise the image patch in the reference frame. The similar motion patches from the previous and subsequent image frames that correspond to the image patch in the reference frame are determined based on the optical flow computations. The image patch is denoised based on an average of the matching patches from the reference frame and the similar motion patches determined from the previous and subsequent image frames.
    Type: Grant
    Filed: March 7, 2016
    Date of Patent: June 5, 2018
    Assignee: ADOBE SYSTEMS INCORPORATED
    Inventors: Hailin Jin, Zhuoyuan Chen, Scott D. Cohen, Jianchao Yang, Zhe Lin
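The final averaging step above, combining the reference patch with its matches from the reference frame and the motion-compensated patches from neighboring frames, reduces to an element-wise mean. A minimal sketch on flattened patches (a plain unweighted mean is assumed here; the patent weights contributions by per-frame noise estimates):

```python
def denoise_patch(reference_patch, matching_patches):
    """Denoise an image patch by averaging it with the self-similar
    patches found in the reference frame and the motion-compensated
    patches from neighboring frames."""
    patches = [reference_patch] + matching_patches
    n = len(patches)
    return [sum(p[i] for p in patches) / n
            for i in range(len(reference_patch))]

# Three noisy observations of the same (flattened) 2x2 patch.
noisy = [[100, 50, 50, 100], [104, 46, 54, 96]]
print(denoise_patch([96, 54, 46, 104], noisy))  # [100.0, 50.0, 50.0, 100.0]
```

Because the zero-mean noise is independent across observations, averaging k aligned patches reduces its variance by roughly a factor of k, which is why finding well-aligned motion patches via optical flow matters.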
  • Publication number: 20180143988
    Abstract: A digital medium environment includes an asset processing application that performs editing of assets. A projection function is trained using pairs of actions pertaining to software edits, and assets resulting from the actions, to learn a joint embedding between the actions and the assets. The projection function is used in the asset processing application to recommend software actions to create an asset, and also to recommend assets to demonstrate the effects of software actions. Recommendations are based on ranking distance measures that measure distances between action representations and asset representations in a vector space.
    Type: Application
    Filed: November 21, 2016
    Publication date: May 24, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Matthew Douglas Hoffman, Longqi Yang, Hailin Jin, Chen Fang
  • Patent number: 9978129
    Abstract: Patch partition and image processing techniques are described. In one or more implementations, a system includes one or more modules implemented at least partially in hardware. The one or more modules are configured to perform operations including grouping a plurality of patches taken from a plurality of training samples of images into respective ones of a plurality of partitions, calculating an image processing operator for each of the partitions, determining distances between the plurality of partitions that describe image similarity of patches of the plurality of partitions, one to another, and configuring a database to provide the determined distance and the image processing operator to process an image in response to identification of a respective partition that corresponds to a patch taken from the image.
    Type: Grant
    Filed: September 18, 2017
    Date of Patent: May 22, 2018
    Assignee: ADOBE SYSTEMS INCORPORATED
    Inventors: Zhe Lin, Jianchao Yang, Hailin Jin, Xin Lu
  • Patent number: 9965717
    Abstract: Embodiments of the present invention relate to learning image representation by distilling from multi-task networks. In implementation, more than one single-task network is trained with heterogeneous labels. In some embodiments, each of the single-task networks is transformed into a Siamese structure with three branches of sub-networks so that a common triplet ranking loss can be applied to each branch. A distilling network is trained that approximates the single-task networks on a common ranking task. In some embodiments, the distilling network is a Siamese network whose ranking function is optimized to approximate an ensemble ranking of each of the single-task networks. The distilling network can be utilized to predict tags to associate with a test image or identify similar images to the test image.
    Type: Grant
    Filed: November 13, 2015
    Date of Patent: May 8, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Zhaowen Wang, Xianming Liu, Hailin Jin, Chen Fang
  • Publication number: 20180121768
    Abstract: The present disclosure includes methods and systems for searching for digital visual media based on semantic and spatial information. In particular, one or more embodiments of the disclosed systems and methods identify digital visual media displaying targeted visual content in a targeted region based on a query term and a query area provided via a digital canvas. Specifically, the disclosed systems and methods can receive user input of a query term and a query area and provide the query term and query area to a query neural network to generate a query feature set. Moreover, the disclosed systems and methods can compare the query feature set to digital visual media feature sets. Further, based on the comparison, the disclosed systems and methods can identify digital visual media portraying targeted visual content corresponding to the query term within a targeted region corresponding to the query area.
    Type: Application
    Filed: February 10, 2017
    Publication date: May 3, 2018
    Inventors: Zhe Lin, Mai Long, Jonathan Brandt, Hailin Jin, Chen Fang