Patents by Inventor Jonathan W. Brandt

Jonathan W. Brandt has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160371538
    Abstract: Accelerating object detection techniques are described. In one or more implementations, adaptive sampling techniques are used to extract features from an image. Coarse features are extracted from the image and used to generate an object probability map. Then, dense features are extracted from high-probability object regions of the image identified in the object probability map to enable detection of an object in the image. In one or more implementations, cascade object detection techniques are used to detect an object in an image. In a first stage, exemplars in a first subset of exemplars are applied to features extracted from multiple regions of the image to detect object candidate regions. Then, in one or more validation stages, the object candidate regions are validated by applying exemplars from the first subset of exemplars and one or more additional subsets of exemplars.
    Type: Application
    Filed: September 1, 2016
    Publication date: December 22, 2016
    Applicant: Adobe Systems Incorporated
    Inventors: Xiaohui Shen, Zhe Lin, Jonathan W. Brandt
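The adaptive sampling idea in the abstract above can be sketched in a few lines: a cheap coarse scorer builds a probability map, and the expensive dense detector runs only where that map is high. This is an illustrative toy, not Adobe's implementation; `coarse_score` and `dense_detect` are hypothetical stand-ins for the real feature extractors.

```python
def coarse_score(region):
    # stand-in for a cheap coarse-feature scorer:
    # here, just the mean value of the region
    return sum(region) / len(region)

def dense_detect(region):
    # stand-in for an expensive dense-feature detector
    return max(region) > 0.9

def detect(regions, threshold=0.5):
    """Score every region cheaply, then run the costly detector
    only on high-probability regions of the probability map."""
    prob_map = [coarse_score(r) for r in regions]
    detections = []
    for i, (region, prob) in enumerate(zip(regions, prob_map)):
        if prob >= threshold and dense_detect(region):
            detections.append(i)
    return detections

regions = [[0.1, 0.2], [0.8, 0.95], [0.6, 0.4]]
print(detect(regions))  # [1] -- only region 1 passes both stages
```

The saving comes from skipping `dense_detect` entirely on low-probability regions, which in the patented setting is where most of the image lies.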
  • Publication number: 20160307074
    Abstract: Different candidate windows in an image are identified, such as by sliding a rectangular or other geometric shape of different sizes over an image to identify portions of the image (groups of pixels in the image). The candidate windows are analyzed by a set of convolutional neural networks, which are cascaded so that the input of one convolutional neural network layer is based on the output of another convolutional neural network layer. Each convolutional neural network layer drops or rejects one or more candidate windows that the convolutional neural network layer determines do not include an object (e.g., a face). The candidate windows that are identified as including an object (e.g., a face) are analyzed by another one of the convolutional neural network layers. The candidate windows identified by the last of the convolutional neural network layers are the indications of the objects (e.g., faces) in the image.
    Type: Application
    Filed: June 29, 2016
    Publication date: October 20, 2016
    Applicant: Adobe Systems Incorporated
    Inventors: Xiaohui Shen, Haoxiang Li, Zhe Lin, Jonathan W. Brandt
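The cascade structure described above, where each stage rejects windows and passes survivors to a stricter stage, can be sketched as follows. The "stages" here are hypothetical score functions standing in for the trained convolutional networks; only the control flow reflects the abstract.

```python
def cascade_filter(windows, stages):
    """stages: list of (classifier, threshold) pairs; each
    classifier maps a window to an object score in [0, 1].
    Windows scoring below a stage's threshold are rejected."""
    surviving = windows
    for classify, threshold in stages:
        surviving = [w for w in surviving if classify(w) >= threshold]
        if not surviving:
            break  # everything rejected; later stages never run
    return surviving

# toy "networks": progressively stricter thresholds on a score
stages = [
    (lambda w: w["score"], 0.3),  # fast, permissive first stage
    (lambda w: w["score"], 0.6),  # stricter intermediate stage
    (lambda w: w["score"], 0.9),  # final, most selective stage
]
windows = [{"id": i, "score": s} for i, s in enumerate([0.2, 0.5, 0.95])]
print([w["id"] for w in cascade_filter(windows, stages)])  # [2]
```

In the real system the early stages are small, cheap networks, so most candidate windows are discarded before the expensive final network ever sees them.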
  • Patent number: 9471828
    Abstract: Accelerating object detection techniques are described. In one or more implementations, adaptive sampling techniques are used to extract features from an image. Coarse features are extracted from the image and used to generate an object probability map. Then, dense features are extracted from high-probability object regions of the image identified in the object probability map to enable detection of an object in the image. In one or more implementations, cascade object detection techniques are used to detect an object in an image. In a first stage, exemplars in a first subset of exemplars are applied to features extracted from multiple regions of the image to detect object candidate regions. Then, in one or more validation stages, the object candidate regions are validated by applying exemplars from the first subset of exemplars and one or more additional subsets of exemplars.
    Type: Grant
    Filed: July 28, 2014
    Date of Patent: October 18, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Xiaohui Shen, Zhe Lin, Jonathan W. Brandt
  • Publication number: 20160292537
    Abstract: Feature interpolation techniques are described. In a training stage, features are extracted from a collection of training images and quantized into visual words. Spatial configurations of the visual words in the training images are determined and stored in a spatial configuration database. In an object detection stage, a portion of the features of an image is extracted and quantized into visual words. Then, the remaining portion of the features of the image is interpolated using the visual words and the spatial configurations of visual words stored in the spatial configuration database.
    Type: Application
    Filed: June 15, 2016
    Publication date: October 6, 2016
    Applicant: Adobe Systems Incorporated
    Inventors: Xiaohui Shen, Zhe Lin, Jonathan W. Brandt
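The interpolation step above can be illustrated with a toy one-dimensional version: sampled features are quantized to the nearest visual word, and unsampled positions are filled from a stored spatial configuration. The codebook, the words, and the "word at the next position" table are all hypothetical stand-ins for the learned database.

```python
def quantize(feature, codebook):
    # nearest visual word by absolute difference (toy 1-D features)
    return min(codebook, key=lambda word: abs(codebook[word] - feature))

# toy spatial configuration "database": the word seen at one
# position implies a word at the next position to the right
spatial_config = {"sky": "sky", "grass": "grass", "edge": "grass"}

def interpolate(sampled, length, codebook):
    """sampled: {position: raw_feature} for a sparse subset of
    positions 0..length-1; returns a visual word per position,
    filling gaps from the spatial configuration table."""
    words = {p: quantize(f, codebook) for p, f in sampled.items()}
    for p in range(length):
        if p not in words and p - 1 in words:
            words[p] = spatial_config[words[p - 1]]  # interpolated
    return [words.get(p) for p in range(length)]

codebook = {"sky": 0.9, "grass": 0.2, "edge": 0.5}
print(interpolate({0: 0.85, 2: 0.25}, 4, codebook))
# ['sky', 'sky', 'grass', 'grass']
```

Only half the positions were actually quantized; the other half were predicted, which is the source of the speed-up the patent targets.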
  • Publication number: 20160275341
    Abstract: Techniques for facial expression capture for character animation are described. In one or more implementations, facial key points are identified in a series of images. Each image in the series is normalized from the identified facial key points. Facial features are determined from each of the normalized images. A facial expression is then classified for each normalized image based on the determined facial features. In additional implementations, a series of images is captured that includes performances of one or more facial expressions. The facial expressions in each image of the series are classified by a facial expression classifier. The facial expression classifications are then used by a character animator system to produce a series of animated images of an animated character, with animated facial expressions that are associated with the facial expression classification of the corresponding image in the series.
    Type: Application
    Filed: March 18, 2015
    Publication date: September 22, 2016
    Inventors: Wilmot Wei-Mau Li, Jianchao Yang, Linjie Luo, Jonathan W. Brandt, Xiang Yu
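The pipeline described in the abstract (key points → normalization → features → expression label, one label per frame for the animator) can be sketched with toy stand-ins; every function body below is a hypothetical placeholder for the trained components.

```python
def normalize(image, key_points):
    # stand-in normalization: shift values by the mean key point
    anchor = sum(key_points) / len(key_points)
    return [v - anchor for v in image]

def extract_features(norm_image):
    # toy facial feature: spread of the normalized values
    return {"spread": max(norm_image) - min(norm_image)}

def classify_expression(features):
    # toy rule standing in for a trained expression classifier
    return "smile" if features["spread"] > 1.0 else "neutral"

def animate(frames):
    """frames: list of (image, key_points); returns one expression
    label per frame for the character animator to consume."""
    labels = []
    for image, key_points in frames:
        feats = extract_features(normalize(image, key_points))
        labels.append(classify_expression(feats))
    return labels

frames = [([0.0, 2.0], [1.0, 1.0]), ([0.4, 0.6], [0.5, 0.5])]
print(animate(frames))  # ['smile', 'neutral']
```

The animator system then maps each per-frame label to the matching pose of the animated character, producing the synchronized expression sequence the abstract describes.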
  • Patent number: 9424484
    Abstract: Feature interpolation techniques are described. In a training stage, features are extracted from a collection of training images and quantized into visual words. Spatial configurations of the visual words in the training images are determined and stored in a spatial configuration database. In an object detection stage, a portion of the features of an image is extracted and quantized into visual words. Then, the remaining portion of the features of the image is interpolated using the visual words and the spatial configurations of visual words stored in the spatial configuration database.
    Type: Grant
    Filed: July 18, 2014
    Date of Patent: August 23, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Xiaohui Shen, Zhe Lin, Jonathan W. Brandt
  • Patent number: 9418319
    Abstract: Different candidate windows in an image are identified, such as by sliding a rectangular or other geometric shape of different sizes over an image to identify portions of the image (groups of pixels in the image). The candidate windows are analyzed by a set of convolutional neural networks, which are cascaded so that the input of one convolutional neural network layer is based on the output of another convolutional neural network layer. Each convolutional neural network layer drops or rejects one or more candidate windows that the convolutional neural network layer determines do not include an object (e.g., a face). The candidate windows that are identified as including an object (e.g., a face) are analyzed by another one of the convolutional neural network layers. The candidate windows identified by the last of the convolutional neural network layers are the indications of the objects (e.g., faces) in the image.
    Type: Grant
    Filed: November 21, 2014
    Date of Patent: August 16, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Xiaohui Shen, Haoxiang Li, Zhe Lin, Jonathan W. Brandt
  • Publication number: 20160148079
    Abstract: Different candidate windows in an image are identified, such as by sliding a rectangular or other geometric shape of different sizes over an image to identify portions of the image (groups of pixels in the image). The candidate windows are analyzed by a set of convolutional neural networks, which are cascaded so that the input of one convolutional neural network layer is based on the output of another convolutional neural network layer. Each convolutional neural network layer drops or rejects one or more candidate windows that the convolutional neural network layer determines do not include an object (e.g., a face). The candidate windows that are identified as including an object (e.g., a face) are analyzed by another one of the convolutional neural network layers. The candidate windows identified by the last of the convolutional neural network layers are the indications of the objects (e.g., faces) in the image.
    Type: Application
    Filed: November 21, 2014
    Publication date: May 26, 2016
    Inventors: Xiaohui Shen, Haoxiang Li, Zhe Lin, Jonathan W. Brandt
  • Publication number: 20160148078
    Abstract: A convolutional neural network is trained to analyze input data in various different manners. The convolutional neural network includes multiple layers, one of which is a convolution layer that performs a convolution, for each of one or more filters in the convolution layer, of the filter over the input data. The convolution includes generation of an inner product based on the filter and the input data. Both the filter of the convolution layer and the input data are binarized, allowing the inner product to be computed using bitwise operations that are typically faster than multiplication of floating point values. The possible results for the convolution layer can optionally be pre-computed and stored in a look-up table.
    Type: Application
    Filed: November 20, 2014
    Publication date: May 26, 2016
    Inventors: Xiaohui Shen, Haoxiang Li, Zhe Lin, Jonathan W. Brandt
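The binarized inner product has a standard bit-trick form: with weights and inputs restricted to {-1, +1} and packed one bit per element, the dot product of two length-n vectors equals n - 2·popcount(x XOR w), since XOR marks exactly the disagreeing positions. The look-up table mentioned in the abstract can then be precomputed over every possible packed input pattern. A minimal sketch (function names are illustrative, not from the patent):

```python
def pack(values):
    """Pack a {-1, +1} vector into an int, one bit per element
    (+1 -> bit 1, -1 -> bit 0), most significant bit first."""
    word = 0
    for v in values:
        word = (word << 1) | (1 if v > 0 else 0)
    return word

def binary_dot(x_bits, w_bits, n):
    # each disagreeing bit contributes -2 relative to the
    # all-agree maximum of n, hence n - 2 * popcount(xor)
    return n - 2 * bin(x_bits ^ w_bits).count("1")

def make_lut(w_bits, n):
    """Precompute the convolution result for every possible
    packed input pattern, as the abstract suggests."""
    return [binary_dot(x, w_bits, n) for x in range(1 << n)]

w = pack([1, -1, 1, 1])
lut = make_lut(w, 4)
x = pack([1, 1, 1, -1])
print(binary_dot(x, w, 4), lut[x])  # 0 0 -- same value, both paths
```

Checking directly: [1, 1, 1, -1] · [1, -1, 1, 1] = 1 - 1 + 1 - 1 = 0, matching both the popcount formula and the table lookup. For small filter widths the table fits in cache, turning each convolution window into a single indexed load.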
  • Patent number: 9269017
    Abstract: Cascaded object detection techniques are described. In one or more implementations, cascaded coarse-to-dense object detection techniques are utilized to detect objects in images. In a first stage, coarse features are extracted from an image, and non-object regions are rejected. Then, in one or more subsequent stages, dense features are extracted from the remaining non-rejected regions of the image to detect one or more objects in the image.
    Type: Grant
    Filed: November 15, 2013
    Date of Patent: February 23, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Zhe Lin, Jonathan W. Brandt, Xiaohui Shen, Haoxiang Li
  • Publication number: 20160027181
    Abstract: Accelerating object detection techniques are described. In one or more implementations, adaptive sampling techniques are used to extract features from an image. Coarse features are extracted from the image and used to generate an object probability map. Then, dense features are extracted from high-probability object regions of the image identified in the object probability map to enable detection of an object in the image. In one or more implementations, cascade object detection techniques are used to detect an object in an image. In a first stage, exemplars in a first subset of exemplars are applied to features extracted from multiple regions of the image to detect object candidate regions. Then, in one or more validation stages, the object candidate regions are validated by applying exemplars from the first subset of exemplars and one or more additional subsets of exemplars.
    Type: Application
    Filed: July 28, 2014
    Publication date: January 28, 2016
    Inventors: Xiaohui Shen, Zhe Lin, Jonathan W. Brandt
  • Publication number: 20160019440
    Abstract: Feature interpolation techniques are described. In a training stage, features are extracted from a collection of training images and quantized into visual words. Spatial configurations of the visual words in the training images are determined and stored in a spatial configuration database. In an object detection stage, a portion of the features of an image is extracted and quantized into visual words. Then, the remaining portion of the features of the image is interpolated using the visual words and the spatial configurations of visual words stored in the spatial configuration database.
    Type: Application
    Filed: July 18, 2014
    Publication date: January 21, 2016
    Inventors: Xiaohui Shen, Zhe Lin, Jonathan W. Brandt
  • Patent number: 9230192
    Abstract: Image classification techniques using images with separate grayscale and color channels are described. In one or more implementations, an image classification network includes grayscale filters and color filters which are separate from the grayscale filters. The grayscale filters are configured to extract grayscale features from a grayscale channel of an image, and the color filters are configured to extract color features from a color channel of the image. The extracted grayscale features and color features are used to identify an object in the image, and the image is classified based on the identified object.
    Type: Grant
    Filed: November 15, 2013
    Date of Patent: January 5, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Hailin Jin, Thomas Le Paine, Jianchao Yang, Zhe Lin, Jonathan W. Brandt
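The two-branch design above, with separate filter banks for the grayscale and color channels whose features are combined before classification, can be sketched as follows. The filters and the final decision rule are toy stand-ins, not the trained network.

```python
def apply_filters(channel, filters):
    # toy "filters": each is a weight vector; each feature is the
    # dot product of one filter with the channel values
    return [sum(w * v for w, v in zip(f, channel)) for f in filters]

def classify(gray, color, gray_filters, color_filters):
    """Extract grayscale and color features with separate filter
    banks, concatenate, then apply a stand-in classifier."""
    features = (apply_filters(gray, gray_filters)
                + apply_filters(color, color_filters))
    return "object" if sum(features) > 0 else "background"

gray_filters = [[1.0, -1.0]]   # edge-like grayscale filter
color_filters = [[0.5, 0.5]]   # averaging color filter
print(classify([0.9, 0.1], [0.6, 0.2], gray_filters, color_filters))
# object
```

Keeping the two filter banks separate lets each branch specialize: the grayscale branch in luminance structure such as edges, the color branch in chromatic cues.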
  • Patent number: 9208404
    Abstract: In techniques for object detection with boosted exemplars, weak classifiers of a Real AdaBoost technique can be learned as exemplars that are collected from example images. The exemplars are examples of an object that is detectable in image patches of an image, such as faces that are detectable in images. The weak classifiers of the Real AdaBoost technique can be applied to the image patches of the image, and a confidence score is determined for each of the weak classifiers as applied to an image patch of the image. The confidence score of a weak classifier is an indication of whether the object is detected in the image patch of the image based on the weak classifier. All of the confidence scores of the weak classifiers can then be summed to generate an overall object detection score that indicates whether the image patch of the image includes the object.
    Type: Grant
    Filed: November 15, 2013
    Date of Patent: December 8, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Zhe Lin, Jonathan W. Brandt, Xiaohui Shen, Haoxiang Li
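The scoring step in the abstract above reduces to a sum: each exemplar acts as a weak classifier producing a confidence for an image patch, and the overall detection score is the sum of those confidences. A minimal sketch, where the similarity-based confidence is a toy stand-in for the learned weak classifiers:

```python
def confidence(exemplar, patch):
    # toy weak classifier: negative squared distance, shifted so
    # that close matches yield positive confidence
    dist = sum((e - p) ** 2 for e, p in zip(exemplar, patch))
    return 0.5 - dist

def detection_score(exemplars, patch):
    """Overall score = sum of weak-classifier confidences over
    all exemplars, as in the boosted-exemplar formulation."""
    return sum(confidence(e, patch) for e in exemplars)

exemplars = [[0.2, 0.4], [0.3, 0.5]]     # toy face exemplars
face_patch = [0.25, 0.45]
sky_patch = [0.9, 0.9]
print(detection_score(exemplars, face_patch)
      > detection_score(exemplars, sky_patch))  # True
```

A patch resembling the exemplars accumulates positive confidences and a high total, while an unrelated patch accumulates negative ones, so thresholding the sum decides whether the patch contains the object.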
  • Patent number: 9202138
    Abstract: Various embodiments of methods and apparatus for feature point localization are disclosed. A profile model and a shape model may be applied to an object in an image to determine locations of feature points for each object component. Input may be received to move one of the feature points to a fixed location. Other ones of the feature points may be automatically adjusted to different locations based on the moved feature point.
    Type: Grant
    Filed: October 4, 2012
    Date of Patent: December 1, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Jonathan W. Brandt, Zhe Lin, Vuong Le, Lubomir D. Bourdev
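The constrained adjustment described above (the user pins one feature point, and the others move to stay consistent) can be illustrated with the simplest possible shape model, a rigid translation that preserves the points' relative geometry. The real patent uses a learned shape model; this is only a hypothetical sketch of the interaction.

```python
def adjust_points(points, pinned_index, pinned_location):
    """Translate all feature points so the pinned point lands at
    the user-chosen location, preserving relative offsets."""
    dx = pinned_location[0] - points[pinned_index][0]
    dy = pinned_location[1] - points[pinned_index][1]
    return [(x + dx, y + dy) for x, y in points]

points = [(10, 10), (20, 10), (15, 20)]   # e.g. eyes and mouth
moved = adjust_points(points, 0, (12, 11))
print(moved)  # [(12, 11), (22, 11), (17, 21)]
```

A trained shape model would additionally allow non-rigid deformation within the learned space of plausible face shapes, rather than moving every point by the same offset.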
  • Patent number: 9158963
    Abstract: Various embodiments of methods and apparatus for feature point localization are disclosed. An object in an input image may be detected. A profile model may be applied to determine feature point locations for each object component of the detected object. Applying the profile model may include globally optimizing the feature points for each object component to find a global energy minimum. A component-based shape model may be applied to update the respective feature point locations for each object component.
    Type: Grant
    Filed: October 4, 2012
    Date of Patent: October 13, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Jonathan W. Brandt, Zhe Lin, Lubomir D. Bourdev, Vuong Le
  • Patent number: 9098930
    Abstract: Embodiments of methods and systems for stereo-aware image editing are described. A three-dimensional model of a stereo scene is built from one or more input images. Camera parameters for the input images are computed. The three-dimensional model is modified. In some embodiments, the modifying the three-dimensional model includes modifying one or more of the images and applying results of the modifying one or more of the images to corresponding model vertices. The scene is re-rendered from the camera parameters to produce an edited stereo pair that is consistent with the three-dimensional model.
    Type: Grant
    Filed: September 27, 2012
    Date of Patent: August 4, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Scott D. Cohen, Brian L. Price, Chenxi Zhang, Jonathan W. Brandt
  • Publication number: 20150139536
    Abstract: Image classification techniques using images with separate grayscale and color channels are described. In one or more implementations, an image classification network includes grayscale filters and color filters which are separate from the grayscale filters. The grayscale filters are configured to extract grayscale features from a grayscale channel of an image, and the color filters are configured to extract color features from a color channel of the image. The extracted grayscale features and color features are used to identify an object in the image, and the image is classified based on the identified object.
    Type: Application
    Filed: November 15, 2013
    Publication date: May 21, 2015
    Applicant: Adobe Systems Incorporated
    Inventors: Hailin Jin, Thomas Le Paine, Jianchao Yang, Zhe Lin, Jonathan W. Brandt
  • Publication number: 20150139551
    Abstract: Cascaded object detection techniques are described. In one or more implementations, cascaded coarse-to-dense object detection techniques are utilized to detect objects in images. In a first stage, coarse features are extracted from an image, and non-object regions are rejected. Then, in one or more subsequent stages, dense features are extracted from the remaining non-rejected regions of the image to detect one or more objects in the image.
    Type: Application
    Filed: November 15, 2013
    Publication date: May 21, 2015
    Applicant: Adobe Systems Incorporated
    Inventors: Zhe Lin, Jonathan W. Brandt, Xiaohui Shen, Haoxiang Li
  • Publication number: 20150139538
    Abstract: In techniques for object detection with boosted exemplars, weak classifiers of a Real AdaBoost technique can be learned as exemplars that are collected from example images. The exemplars are examples of an object that is detectable in image patches of an image, such as faces that are detectable in images. The weak classifiers of the Real AdaBoost technique can be applied to the image patches of the image, and a confidence score is determined for each of the weak classifiers as applied to an image patch of the image. The confidence score of a weak classifier is an indication of whether the object is detected in the image patch of the image based on the weak classifier. All of the confidence scores of the weak classifiers can then be summed to generate an overall object detection score that indicates whether the image patch of the image includes the object.
    Type: Application
    Filed: November 15, 2013
    Publication date: May 21, 2015
    Applicant: Adobe Systems Incorporated
    Inventors: Zhe Lin, Jonathan W. Brandt, Xiaohui Shen, Haoxiang Li