Patents by Inventor Jonathan Brandt

Jonathan Brandt has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9767386
    Abstract: This disclosure relates to training a classifier algorithm that can be used for automatically selecting tags to be applied to a received image. For example, a computing device can group training images together based on the training images having similar tags. The computing device trains a classifier algorithm to identify the training images as semantically similar to one another based on the training images being grouped together. The trained classifier algorithm is used to determine that an input image is semantically similar to an example tagged image. A tag is generated for the input image using tag content from the example tagged image based on determining that the input image is semantically similar to the tagged image.
    Type: Grant
    Filed: June 23, 2015
    Date of Patent: September 19, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Zhe Lin, Stefan Guggisberg, Jonathan Brandt, Michael Marth
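The grouping step this abstract describes — collecting training images whose tag sets overlap so a classifier can learn to treat group members as semantically similar — can be sketched as follows. The Jaccard threshold and the greedy first-fit grouping are illustrative assumptions, not the patent's exact method.

```python
# Group training images by tag-set overlap so that members of a group can
# later serve as positive (semantically similar) pairs for classifier training.
# The greedy strategy freezes each group's representative tags at its first
# member; a real system would likely use a more robust clustering.

def jaccard(a, b):
    """Overlap between two tag sets."""
    return len(a & b) / len(a | b)

def group_by_tags(images, threshold=0.5):
    """images: list of (image_id, tag_set). Assign each image to the first
    group whose representative tags are similar enough, else start a group."""
    groups = []  # each entry: (representative_tags, [image_ids])
    for image_id, tags in images:
        for rep_tags, members in groups:
            if jaccard(tags, rep_tags) >= threshold:
                members.append(image_id)
                break
        else:
            groups.append((set(tags), [image_id]))
    return [members for _, members in groups]

images = [
    ("img1", {"beach", "sunset", "ocean"}),
    ("img2", {"beach", "ocean", "sand"}),
    ("img3", {"city", "night", "skyline"}),
]
print(group_by_tags(images))  # img1 and img2 share enough tags to be grouped
```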
  • Patent number: 9737105
    Abstract: A light-emitting system is provided which is removably attachable to headgear for personal illumination, enhancing the visibility of the user to others. The light-emitting system includes an annular housing that defines a receiving aperture and is configured to surround a portion of the headgear when the light-emitting system is removably attached to the headgear for use. The light-emitting system further includes at least one lens and a plurality of lighting elements coupled to the annular housing which are configured to selectively generate a halo or at least a partial halo of light that radiates outwardly away from the annular housing through the at least one lens to provide enhanced personal illumination.
    Type: Grant
    Filed: July 8, 2015
    Date of Patent: August 22, 2017
    Assignee: Illumagear, Inc.
    Inventors: John Maxwell Baker, Andrew Royal, Raymond Walter Riley, Mark John Ramberg, Chad Austin Brinckerhoff, John R. Murkowski, Trent Robert Wetherbee, Alexander Michael Diener, Kristin Marie Will, Kyle S. Johnston, Clint Timothy Schneider, Evan William Mattingly, Keith W. Kirkwood, Jonathan Brandt Hadley
  • Publication number: 20170236032
    Abstract: Embodiments of the present invention provide an automated image tagging system that can predict a set of tags, along with relevance scores, that can be used for keyword-based image retrieval, image tag proposal, and image tag auto-completion based on user input. Initially, during training, a clustering technique is utilized to reduce cluster imbalance in the data that is input into a convolutional neural network (CNN) for feature training. In embodiments, the clustering technique can also be utilized to compute data point similarity, which can be utilized for tag propagation (to tag untagged images). During testing, a diversity-based voting framework is utilized to overcome user tagging biases. In some embodiments, bigram re-weighting can down-weight a keyword that is likely to be part of a bigram, based on a predicted tag set.
    Type: Application
    Filed: February 12, 2016
    Publication date: August 17, 2017
    Inventors: Zhe Lin, Xiaohui Shen, Jonathan Brandt, Jianming Zhang, Chen Fang
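The bigram re-weighting idea mentioned at the end of the abstract — down-weighting a standalone keyword when it is likely part of a predicted bigram — can be sketched as below. The tag scores and the damping factor are hypothetical stand-ins for the system's learned values.

```python
# If a predicted two-word tag (e.g. "new york") is present, its component
# words are probably less useful as standalone tags, so their relevance
# scores are damped. The 0.5 damping factor is an illustrative assumption.

def reweight_bigrams(scores, damping=0.5):
    """scores: dict mapping predicted tag -> relevance score."""
    bigram_parts = set()
    for tag in scores:
        words = tag.split()
        if len(words) == 2:
            bigram_parts.update(words)
    return {
        tag: score * (damping if tag in bigram_parts else 1.0)
        for tag, score in scores.items()
    }

scores = {"new york": 0.9, "new": 0.6, "york": 0.5, "skyline": 0.7}
print(reweight_bigrams(scores))
# "new" and "york" are damped because "new york" is already predicted
```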
  • Publication number: 20170236055
    Abstract: Embodiments of the present invention provide an automated image tagging system that can predict a set of tags, along with relevance scores, that can be used for keyword-based image retrieval, image tag proposal, and image tag auto-completion based on user input. Initially, during training, a clustering technique is utilized to reduce cluster imbalance in the data that is input into a convolutional neural network (CNN) for feature training. In embodiments, the clustering technique can also be utilized to compute data point similarity, which can be utilized for tag propagation (to tag untagged images). During testing, a diversity-based voting framework is utilized to overcome user tagging biases. In some embodiments, bigram re-weighting can down-weight a keyword that is likely to be part of a bigram, based on a predicted tag set.
    Type: Application
    Filed: April 8, 2016
    Publication date: August 17, 2017
    Inventors: Zhe Lin, Xiaohui Shen, Jonathan Brandt, Jianming Zhang, Chen Fang
  • Publication number: 20170140213
    Abstract: Methods and systems for recognizing people in images with increased accuracy are disclosed. In particular, the methods and systems divide images into a plurality of clusters based on common characteristics of the images. The methods and systems also determine an image cluster to which an image with an unknown person instance most corresponds. One or more embodiments determine a probability that the unknown person instance is each known person instance in the image cluster using a trained cluster classifier of the image cluster. Optionally, the methods and systems determine context weights for each combination of an unknown person instance and each known person instance using a conditional random field algorithm based on a plurality of context cues associated with the unknown person instance and the known person instances. The methods and systems calculate a contextual probability based on the cluster-based probabilities and context weights to identify the unknown person instance.
    Type: Application
    Filed: November 18, 2015
    Publication date: May 18, 2017
    Inventors: Jonathan Brandt, Zhe Lin, Xiaohui Shen, Haoxiang Li
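The final combination step in this abstract — blending a cluster classifier's per-identity probabilities with context weights to obtain a contextual probability — can be sketched as follows. The multiplicative blend and renormalization are illustrative assumptions, not the patent's exact conditional-random-field formulation.

```python
# Blend cluster-based identity probabilities with context weights (e.g. from
# co-occurrence cues) and renormalize, so that context can override a weak
# appearance-based decision. All numbers below are toy values.

def contextual_probability(cluster_probs, context_weights):
    """cluster_probs, context_weights: dict identity -> score."""
    combined = {
        name: cluster_probs[name] * context_weights.get(name, 1.0)
        for name in cluster_probs
    }
    total = sum(combined.values())
    return {name: score / total for name, score in combined.items()}

cluster_probs = {"alice": 0.6, "bob": 0.4}     # appearance favors Alice
context_weights = {"alice": 0.2, "bob": 1.0}   # context favors Bob
result = contextual_probability(cluster_probs, context_weights)
print(max(result, key=result.get))  # context flips the decision to "bob"
```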
  • Patent number: 9607014
    Abstract: A system is configured to annotate an image with tags. As configured, the system accesses an image and generates a set of vectors for the image. The set of vectors may be generated by mathematically transforming the image, such as by applying a mathematical transform to predetermined regions of the image. The system may then query a database of tagged images by submitting the set of vectors as search criteria to a search engine. The querying of the database may obtain a set of tagged images. Next, the system may rank the obtained set of tagged images according to similarity scores that quantify degrees of similarity between the image and each tagged image obtained. Tags from a top-ranked subset of the tagged images may be extracted by the system, which may then annotate the image with these extracted tags.
    Type: Grant
    Filed: October 31, 2013
    Date of Patent: March 28, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Zhaowen Wang, Jianchao Yang, Zhe Lin, Jonathan Brandt
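The annotate-by-neighbors flow this abstract describes — rank tagged images by similarity to the query image's vectors, then extract tags from a top-ranked subset — can be sketched as below. Cosine similarity and the `top_k` cutoff are illustrative choices standing in for the system's actual similarity scores and ranking.

```python
# Rank a database of tagged images by cosine similarity to the query image's
# feature vector, then annotate the query with tags pooled from the
# top-ranked subset. Feature vectors here are toy 2-D values.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def annotate(query_vec, tagged_images, top_k=2):
    """tagged_images: list of (feature_vector, tag_set)."""
    ranked = sorted(tagged_images,
                    key=lambda item: cosine(query_vec, item[0]),
                    reverse=True)
    tags = set()
    for _, image_tags in ranked[:top_k]:
        tags |= image_tags
    return tags

tagged = [
    ([1.0, 0.1], {"dog"}),
    ([0.9, 0.2], {"dog", "park"}),
    ([0.0, 1.0], {"skyscraper"}),
]
print(annotate([1.0, 0.0], tagged))  # tags from the two nearest images
```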
  • Publication number: 20170060580
    Abstract: In various implementations, an abstraction is generated from an asset associated with an asset-modifying workflow. The abstraction can be embedded into an activity stream generated from an asset-modification application and communicated to a remote server device for collection and analysis. The remote server device, upon receiving at least the abstraction, can determine a contextual identifier for association with the abstraction and the asset associated with the asset-modifying workflow. The remote server device can conduct usage analysis on data received from the activity stream in association with the contextual identifier, and further send a signal to the asset-modification application to customize the workflow based on the contextual identifier determined to be associated with the abstraction and asset.
    Type: Application
    Filed: August 25, 2015
    Publication date: March 2, 2017
    Inventor: Jonathan Brandt
  • Publication number: 20170061257
    Abstract: Example systems and methods for classifying visual patterns into a plurality of classes are presented. Using reference visual patterns of known classification, at least one image or visual pattern classifier is generated, which is then employed to classify a plurality of candidate visual patterns of unknown classification. The classification scheme employed may be hierarchical or nonhierarchical. The types of visual patterns may be fonts, human faces, or any other type of visual patterns or images subject to classification.
    Type: Application
    Filed: November 11, 2016
    Publication date: March 2, 2017
    Inventors: Jianchao Yang, Guang Chen, Hailin Jin, Jonathan Brandt, Elya Shechtman, Aseem Omprakash Agarwala
  • Patent number: 9569213
    Abstract: In various implementations, an abstraction is generated from an asset associated with an asset-modifying workflow. The abstraction can be embedded into an activity stream generated from an asset-modification application and communicated to a remote server device for collection and analysis. The remote server device, upon receiving at least the abstraction, can determine a contextual identifier for association with the abstraction and the asset associated with the asset-modifying workflow. The remote server device can conduct usage analysis on data received from the activity stream in association with the contextual identifier, and further send a signal to the asset-modification application to customize the workflow based on the contextual identifier determined to be associated with the abstraction and asset.
    Type: Grant
    Filed: August 25, 2015
    Date of Patent: February 14, 2017
    Assignee: Adobe Systems Incorporated
    Inventor: Jonathan Brandt
  • Publication number: 20170004383
    Abstract: In various implementations, a personal asset management application is configured to perform operations that facilitate searching multiple images with a simple text-based query, irrespective of whether the images have characterizing tags associated with them. A first search is conducted by processing the text-based query to produce a first set of result images, which is then used to generate a visually-based query. A second search is conducted employing the visually-based query derived from the first set of result images returned by the text-based first search. The second search can generate a second set of result images, each having visual similarity to at least one of the images in the first set of result images.
    Type: Application
    Filed: June 30, 2015
    Publication date: January 5, 2017
    Inventors: Zhe Lin, Jonathan Brandt, Xiaohui Shen, Jae-Pil Heo, Jianchao Yang
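The two-stage search this abstract describes — a text query returns an initial result set, whose images then seed a visual-similarity query over the whole collection — can be sketched as below. The one-dimensional toy "feature" and the centroid-based visual query are illustrative assumptions.

```python
# Stage 1 matches the text query against existing tags; stage 2 ranks the
# whole collection by visual similarity to the first result set, so untagged
# images can still be retrieved. Visual similarity is faked here as distance
# on a 1-D feature; a real system would compare image descriptors.

def two_stage_search(text_query, collection, top_k=3):
    """collection: list of dicts with 'id', 'tags', 'feature'."""
    # Stage 1: keyword match against whatever tags exist.
    first = [img for img in collection if text_query in img["tags"]]
    if not first:
        return []
    # Stage 2: visually-based query seeded by the first result set.
    centroid = sum(img["feature"] for img in first) / len(first)
    ranked = sorted(collection, key=lambda img: abs(img["feature"] - centroid))
    return [img["id"] for img in ranked[:top_k]]

collection = [
    {"id": "a", "tags": {"beach"}, "feature": 0.0},
    {"id": "b", "tags": set(), "feature": 0.2},   # untagged but similar
    {"id": "c", "tags": {"beach"}, "feature": 0.4},
    {"id": "d", "tags": set(), "feature": 2.0},
]
print(two_stage_search("beach", collection))  # untagged "b" is still found
```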
  • Publication number: 20160379091
    Abstract: This disclosure relates to training a classifier algorithm that can be used for automatically selecting tags to be applied to a received image. For example, a computing device can group training images together based on the training images having similar tags. The computing device trains a classifier algorithm to identify the training images as semantically similar to one another based on the training images being grouped together. The trained classifier algorithm is used to determine that an input image is semantically similar to an example tagged image. A tag is generated for the input image using tag content from the example tagged image based on determining that the input image is semantically similar to the tagged image.
    Type: Application
    Filed: June 23, 2015
    Publication date: December 29, 2016
    Inventors: Zhe Lin, Stefan Guggisberg, Jonathan Brandt, Michael Marth
  • Patent number: 9524449
    Abstract: Example systems and methods for classifying visual patterns into a plurality of classes are presented. Using reference visual patterns of known classification, at least one image or visual pattern classifier is generated, which is then employed to classify a plurality of candidate visual patterns of unknown classification. The classification scheme employed may be hierarchical or nonhierarchical. The types of visual patterns may be fonts, human faces, or any other type of visual patterns or images subject to classification.
    Type: Grant
    Filed: December 16, 2013
    Date of Patent: December 20, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Jianchao Yang, Guang Chen, Hailin Jin, Jonathan Brandt, Elya Shechtman, Aseem Omprakash Agarwala
  • Publication number: 20160364633
    Abstract: A convolutional neural network (CNN) is trained for font recognition and font similarity learning. In a training phase, text images with font labels are synthesized by introducing variances to minimize the gap between the training images and real-world text images. Training images are generated and input into the CNN. The output is fed into an N-way softmax function dependent on the number of fonts the CNN is being trained on, producing a distribution of classified text images over N class labels. In a testing phase, each test image is normalized in height and squeezed in aspect ratio resulting in a plurality of test patches. The CNN averages the probabilities of each test patch belonging to a set of fonts to obtain a classification. Feature representations may be extracted and utilized to define font similarity between fonts, which may be utilized in font suggestion, font browsing, or font recognition applications.
    Type: Application
    Filed: June 9, 2015
    Publication date: December 15, 2016
    Inventors: Jianchao Yang, Zhangyang Wang, Jonathan Brandt, Hailin Jin, Elya Shechtman, Aseem Omprakash Agarwala
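The test-phase step this abstract describes — averaging the CNN's per-font probabilities over several patches cut from one normalized text image — can be sketched as follows. The per-patch probability vectors below are hypothetical stand-ins for real CNN softmax outputs.

```python
# Average per-patch probability vectors over the candidate fonts and pick the
# font with the highest mean probability, so one ambiguous patch cannot flip
# the classification. All scores here are toy values.

def classify_by_patch_average(patch_probs, fonts):
    """patch_probs: list of per-patch probability vectors over the fonts."""
    n = len(patch_probs)
    averaged = [sum(p[i] for p in patch_probs) / n for i in range(len(fonts))]
    best = max(range(len(fonts)), key=lambda i: averaged[i])
    return fonts[best], averaged

fonts = ["Garamond", "Helvetica", "Courier"]
patch_probs = [
    [0.5, 0.3, 0.2],
    [0.7, 0.2, 0.1],
    [0.4, 0.5, 0.1],  # one ambiguous patch does not flip the decision
]
label, avg = classify_by_patch_average(patch_probs, fonts)
print(label)
```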
  • Patent number: 9501724
    Abstract: A convolutional neural network (CNN) is trained for font recognition and font similarity learning. In a training phase, text images with font labels are synthesized by introducing variances to minimize the gap between the training images and real-world text images. Training images are generated and input into the CNN. The output is fed into an N-way softmax function dependent on the number of fonts the CNN is being trained on, producing a distribution of classified text images over N class labels. In a testing phase, each test image is normalized in height and squeezed in aspect ratio resulting in a plurality of test patches. The CNN averages the probabilities of each test patch belonging to a set of fonts to obtain a classification. Feature representations may be extracted and utilized to define font similarity between fonts, which may be utilized in font suggestion, font browsing, or font recognition applications.
    Type: Grant
    Filed: June 9, 2015
    Date of Patent: November 22, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Jianchao Yang, Zhangyang Wang, Jonathan Brandt, Hailin Jin, Elya Shechtman, Aseem Omprakash Agarwala
  • Patent number: 9436893
    Abstract: A system and method for distributed similarity learning for high-dimensional image features are described. A set of data features is accessed. Subspaces from a space formed by the set of data features are determined using a set of projection matrices. Each subspace has a dimension lower than a dimension of the set of data features. Similarity functions are computed for the subspaces. Each similarity function is based on the dimension of the corresponding subspace. A linear combination of the similarity functions is performed to determine a similarity function for the set of data features.
    Type: Grant
    Filed: November 27, 2013
    Date of Patent: September 6, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Jianchao Yang, Zhaowen Wang, Zhe Lin, Jonathan Brandt
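The combination step this abstract describes — computing a similarity function per low-dimensional subspace and linearly combining them into one similarity function for the full feature space — can be sketched as below. The axis-aligned projection matrices and fixed weights are illustrative; in the patented approach both would be learned.

```python
# Project two feature vectors into each subspace via a projection matrix,
# score each subspace with a dot product, and take a weighted linear
# combination of the per-subspace scores.

def project(vec, matrix):
    """Multiply a row vector by a (dim x subdim) projection matrix."""
    return [sum(v * matrix[i][j] for i, v in enumerate(vec))
            for j in range(len(matrix[0]))]

def combined_similarity(u, v, projections, weights):
    """Dot product in each subspace, then a weighted linear combination."""
    total = 0.0
    for matrix, w in zip(projections, weights):
        pu, pv = project(u, matrix), project(v, matrix)
        total += w * sum(a * b for a, b in zip(pu, pv))
    return total

# Two toy 1-D subspaces of a 2-D feature space: plain axis projections.
projections = [[[1.0], [0.0]], [[0.0], [1.0]]]
weights = [0.7, 0.3]
print(combined_similarity([1.0, 2.0], [3.0, 4.0], projections, weights))
```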
  • Publication number: 20160132750
    Abstract: Techniques are disclosed for image feature representation. The techniques exhibit discriminative power that can be used in any number of classification tasks, and are particularly effective with respect to fine-grained image classification tasks. In an embodiment, a given image to be classified is divided into image patches. A vector is generated for each image patch. Each image patch vector is compared to the Gaussian mixture components (each mixture component is also a vector) of a Gaussian Mixture Model (GMM). Each such comparison generates a similarity score for each image patch vector. For each Gaussian mixture component, the image patch vectors associated with a similarity score that is too low are eliminated. The selectively pooled vectors from all the Gaussian mixture components are then concatenated to form the final image feature vector, which can be provided to a classifier so the given input image can be properly categorized.
    Type: Application
    Filed: November 7, 2014
    Publication date: May 12, 2016
    Applicant: Adobe Systems Incorporated
    Inventors: Jianchao Yang, Jonathan Brandt
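The selective pooling this abstract describes — per Gaussian mixture component, drop patch vectors whose similarity score is too low, pool the survivors, and concatenate the pooled vectors into the final feature — can be sketched as follows. The Gaussian-kernel similarity, max-pooling, and threshold are illustrative assumptions.

```python
# For each mixture component, keep only patch vectors similar enough to that
# component, element-wise max-pool the survivors, and concatenate the pooled
# vectors across components into one image feature vector.
import math

def gaussian_similarity(patch, mean):
    """Unnormalized Gaussian kernel as a stand-in similarity score."""
    dist2 = sum((p - m) ** 2 for p, m in zip(patch, mean))
    return math.exp(-dist2)

def selective_pool(patches, component_means, threshold=0.5):
    feature = []
    for mean in component_means:
        kept = [p for p in patches
                if gaussian_similarity(p, mean) >= threshold]
        if kept:  # element-wise max pool over surviving patch vectors
            pooled = [max(vals) for vals in zip(*kept)]
        else:
            pooled = [0.0] * len(mean)
        feature.extend(pooled)
    return feature

patches = [[0.1, 0.2], [0.9, 1.0], [5.0, 5.0]]  # last patch is far from both
means = [[0.0, 0.0], [1.0, 1.0]]
print(selective_pool(patches, means))  # length = 2 components x 2 dims
```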
  • Publication number: 20160128412
    Abstract: A light-emitting system is provided which is removably attachable to headgear for personal illumination, enhancing the visibility of the user to others. The light-emitting system includes an annular housing that defines a receiving aperture and is configured to surround a portion of the headgear when the light-emitting system is removably attached to the headgear for use. The light-emitting system further includes at least one lens and a plurality of lighting elements coupled to the annular housing which are configured to selectively generate a halo or at least a partial halo of light that radiates outwardly away from the annular housing through the at least one lens to provide enhanced personal illumination.
    Type: Application
    Filed: July 8, 2015
    Publication date: May 12, 2016
    Inventors: John Maxwell Baker, Andrew Royal, Raymond Walter Riley, Mark John Ramberg, Chad Austin Brinckerhoff, John R. Murkowski, Trent Robert Wetherbee, Alexander Michael Diener, Kristin Marie Will, Kyle S. Johnston, Clint Timothy Schneider, Evan William Mattingly, Keith W. Kirkwood, Jonathan Brandt Hadley
  • Patent number: 9317534
    Abstract: An image search method includes receiving a first query, the first query providing a first image constraint. A first search of a plurality of images is performed, responsive to the first query, to identify a first set of images satisfying the first constraint. A first search result, which includes the first set of images identified as satisfying the first constraint, is presented. A second query is received, the second query providing a second image constraint with reference to a first image of the first set of images. A second search of the plurality of images is performed, responsive to the second query, to identify a second set of images that satisfy the second constraint. A second search result, which includes the second set of images identified as satisfying the second constraint, is presented.
    Type: Grant
    Filed: June 17, 2013
    Date of Patent: April 19, 2016
    Assignee: Adobe Systems Incorporated
    Inventor: Jonathan Brandt
  • Publication number: 20160062731
    Abstract: Techniques are disclosed for indexing and searching high-dimensional data using inverted file structures and product quantization encoding. An image descriptor is quantized using a form of product quantization to determine which of several inverted lists the image descriptor is to be stored in. The image descriptor is appended to the corresponding inverted list with a compact coding using a product quantization encoding scheme. When processing a query, a shortlist is computed that includes a set of candidate search results; it exploits the near-orthogonality of random vectors in high-dimensional spaces. The inverted lists are traversed in the order of the distance between the query and the centroid of a coarse quantizer corresponding to each inverted list. The shortlist is ranked according to the distance estimated by a form of product quantization, and the top images referred to by the ranked shortlist are reported as the search results.
    Type: Application
    Filed: August 29, 2014
    Publication date: March 3, 2016
    Applicant: Adobe Systems Incorporated
    Inventors: Zhe Lin, Jonathan Brandt, Xiaohui Shen, Jae-Pil Heo
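The product quantization encoding this abstract relies on — split a descriptor into subvectors, quantize each to its nearest codebook centroid, and store only the centroid indices as a compact code — can be sketched as below. The toy codebooks are assumptions; real codebooks come from k-means on training descriptors.

```python
# Encode a vector as a tuple of centroid indices, one per subvector, so each
# descriptor in an inverted list costs only a few bytes instead of a full
# float vector.

def nearest(subvec, codebook):
    """Index of the centroid closest to subvec (squared Euclidean)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(subvec, codebook[i])))

def pq_encode(vec, codebooks):
    """codebooks: one list of centroids per subvector."""
    sub_len = len(vec) // len(codebooks)
    return tuple(
        nearest(vec[i * sub_len:(i + 1) * sub_len], codebooks[i])
        for i in range(len(codebooks))
    )

# A 4-D descriptor split into two 2-D subvectors, two centroids each.
codebooks = [
    [[0.0, 0.0], [1.0, 1.0]],
    [[0.0, 1.0], [1.0, 0.0]],
]
print(pq_encode([0.9, 1.1, 0.1, 0.8], codebooks))  # compact code (1, 0)
```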
  • Patent number: 9224066
    Abstract: One exemplary embodiment involves receiving, at a computing device comprising a processor, a test image having a candidate object and a set of object images detected to depict an object similar to that in the test image. The embodiment involves localizing the object depicted in each of the object images based on the candidate object in the test image to determine a location of the object in each respective object image, and then generating a validation score for the candidate object in the test image based at least in part on the determined location of the object in the respective object image and the known location of the object in that same object image. The embodiment also involves computing a final detection score for the candidate object based on the validation score, which indicates a confidence level that the object in the test image is located as indicated by the candidate object.
    Type: Grant
    Filed: March 1, 2013
    Date of Patent: December 29, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Zhe Lin, Jonathan Brandt, Xiaohui Shen