Patents by Inventor Jonathan W. Brandt
Jonathan W. Brandt has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20160371538
Abstract: Accelerating object detection techniques are described. In one or more implementations, adaptive sampling techniques are used to extract features from an image. Coarse features are extracted from the image and used to generate an object probability map. Then, dense features are extracted from high-probability object regions of the image identified in the object probability map to enable detection of an object in the image. In one or more implementations, cascade object detection techniques are used to detect an object in an image. In a first stage, exemplars in a first subset of exemplars are applied to features extracted from multiple regions of the image to detect object candidate regions. Then, in one or more validation stages, the object candidate regions are validated by applying exemplars from the first subset of exemplars and one or more additional subsets of exemplars.
Type: Application
Filed: September 1, 2016
Publication date: December 22, 2016
Applicant: Adobe Systems Incorporated
Inventors: Xiaohui Shen, Zhe Lin, Jonathan W. Brandt
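The adaptive-sampling flow described in this abstract (a coarse probability map first, dense features only where the map is high) can be sketched as a toy two-stage detector. The `coarse_score` and `dense_score` callables, the grid stride, and the threshold are illustrative stand-ins, not details from the patent:

```python
import numpy as np

def adaptive_sampling_detect(image, coarse_score, dense_score, stride=16, threshold=0.5):
    """Score the image coarsely on a sparse grid, then re-score densely
    only in the cells the coarse probability map marks as promising."""
    h, w = image.shape[:2]
    # Stage 1: coarse object probability map over grid cells.
    prob_map = {}
    for y in range(0, h - stride + 1, stride):
        for x in range(0, w - stride + 1, stride):
            prob_map[(y, x)] = coarse_score(image[y:y + stride, x:x + stride])
    # Stage 2: dense scoring restricted to high-probability cells.
    detections = []
    for (y, x), p in prob_map.items():
        if p >= threshold:
            score = dense_score(image[y:y + stride, x:x + stride])
            if score >= threshold:
                detections.append((y, x, score))
    return detections
```

Because the expensive dense scorer only visits cells that survive the coarse pass, most of the image is skipped.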
-
Publication number: 20160307074
Abstract: Different candidate windows in an image are identified, such as by sliding a rectangular or other geometric shape of different sizes over an image to identify portions of the image (groups of pixels in the image). The candidate windows are analyzed by a set of convolutional neural networks, which are cascaded so that the input of one convolutional neural network layer is based on the output of another convolutional neural network layer. Each convolutional neural network layer drops or rejects one or more candidate windows that the convolutional neural network layer determines do not include an object (e.g., a face). The candidate windows that are identified as including an object (e.g., a face) are analyzed by another one of the convolutional neural network layers. The candidate windows identified by the last of the convolutional neural network layers are the indications of the objects (e.g., faces) in the image.
Type: Application
Filed: June 29, 2016
Publication date: October 20, 2016
Applicant: Adobe Systems Incorporated
Inventors: Xiaohui Shen, Haoxiang Li, Zhe Lin, Jonathan W. Brandt
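The cascade structure in this abstract (each stage prunes candidate windows before the next, more expensive stage runs) can be sketched with plain callables standing in for the convolutional neural network stages; the window generator and thresholds are illustrative assumptions:

```python
def sliding_windows(width, height, size, stride):
    """Enumerate square candidate windows as (x, y, size) tuples."""
    return [(x, y, size)
            for y in range(0, height - size + 1, stride)
            for x in range(0, width - size + 1, stride)]

def cascade_detect(windows, stages):
    """Run candidate windows through a cascade of (score_fn, threshold) stages;
    a window rejected at any stage is dropped and never reaches later stages."""
    survivors = list(windows)
    for score_fn, threshold in stages:
        survivors = [w for w in survivors if score_fn(w) >= threshold]
    return survivors
```

The survivors of the final stage are the reported detections; in the patent each `score_fn` would be a convolutional network, with cheaper networks placed earlier in the cascade.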
-
Patent number: 9471828
Abstract: Accelerating object detection techniques are described. In one or more implementations, adaptive sampling techniques are used to extract features from an image. Coarse features are extracted from the image and used to generate an object probability map. Then, dense features are extracted from high-probability object regions of the image identified in the object probability map to enable detection of an object in the image. In one or more implementations, cascade object detection techniques are used to detect an object in an image. In a first stage, exemplars in a first subset of exemplars are applied to features extracted from multiple regions of the image to detect object candidate regions. Then, in one or more validation stages, the object candidate regions are validated by applying exemplars from the first subset of exemplars and one or more additional subsets of exemplars.
Type: Grant
Filed: July 28, 2014
Date of Patent: October 18, 2016
Assignee: Adobe Systems Incorporated
Inventors: Xiaohui Shen, Zhe Lin, Jonathan W. Brandt
-
Publication number: 20160292537
Abstract: Feature interpolation techniques are described. In a training stage, features are extracted from a collection of training images and quantized into visual words. Spatial configurations of the visual words in the training images are determined and stored in a spatial configuration database. In an object detection stage, a portion of the features of an image is extracted from the image and quantized into visual words. Then, a remaining portion of the features of the image is interpolated using the visual words and the spatial configurations of visual words stored in the spatial configuration database.
Type: Application
Filed: June 15, 2016
Publication date: October 6, 2016
Applicant: Adobe Systems Incorporated
Inventors: Xiaohui Shen, Zhe Lin, Jonathan W. Brandt
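A minimal sketch of the quantization step, plus a toy stand-in for the spatial-configuration lookup. The real database stores learned spatial co-occurrences of visual words; the `spatial_db` dict here (previous word → most likely next word) is a hypothetical simplification:

```python
import numpy as np

def quantize_to_words(features, codebook):
    """Assign each feature vector to the index of its nearest codebook
    entry (visual word) by Euclidean distance."""
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

def interpolate_missing(words, spatial_db):
    """Fill None entries in a word sequence using the stored most-likely
    successor of the preceding word (a toy spatial-configuration lookup)."""
    filled = list(words)
    for i, w in enumerate(filled):
        if w is None and i > 0 and filled[i - 1] in spatial_db:
            filled[i] = spatial_db[filled[i - 1]]
    return filled
```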
-
Publication number: 20160275341
Abstract: Techniques for facial expression capture for character animation are described. In one or more implementations, facial key points are identified in a series of images. Each image, in the series of images, is normalized from the identified facial key points. Facial features are determined from each of the normalized images. Then a facial expression is classified, based on the determined facial features, for each of the normalized images. In additional implementations, a series of images is captured that includes performances of one or more facial expressions. The facial expressions in each image of the series of images are classified by a facial expression classifier. Then the facial expression classifications are used by a character animator system to produce a series of animated images of an animated character that include animated facial expressions associated with the facial expression classification of the corresponding image in the series of images.
Type: Application
Filed: March 18, 2015
Publication date: September 22, 2016
Inventors: Wilmot Wei-Mau Li, Jianchao Yang, Linjie Luo, Jonathan W. Brandt, Xiang Yu
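The key-point normalization step can be illustrated with a simple similarity normalization (translate to the centroid, scale to unit RMS radius). This particular transform is an assumption for illustration; the abstract does not commit to one formula:

```python
import numpy as np

def normalize_keypoints(points):
    """Similarity-normalize facial key points: subtract the centroid and
    rescale so the root-mean-square distance from the origin is 1."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    return centered / scale
```

Normalizing this way removes translation and scale differences between faces before features are computed for the expression classifier.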
-
Patent number: 9424484
Abstract: Feature interpolation techniques are described. In a training stage, features are extracted from a collection of training images and quantized into visual words. Spatial configurations of the visual words in the training images are determined and stored in a spatial configuration database. In an object detection stage, a portion of the features of an image is extracted from the image and quantized into visual words. Then, a remaining portion of the features of the image is interpolated using the visual words and the spatial configurations of visual words stored in the spatial configuration database.
Type: Grant
Filed: July 18, 2014
Date of Patent: August 23, 2016
Assignee: Adobe Systems Incorporated
Inventors: Xiaohui Shen, Zhe Lin, Jonathan W. Brandt
-
Patent number: 9418319
Abstract: Different candidate windows in an image are identified, such as by sliding a rectangular or other geometric shape of different sizes over an image to identify portions of the image (groups of pixels in the image). The candidate windows are analyzed by a set of convolutional neural networks, which are cascaded so that the input of one convolutional neural network layer is based on the output of another convolutional neural network layer. Each convolutional neural network layer drops or rejects one or more candidate windows that the convolutional neural network layer determines do not include an object (e.g., a face). The candidate windows that are identified as including an object (e.g., a face) are analyzed by another one of the convolutional neural network layers. The candidate windows identified by the last of the convolutional neural network layers are the indications of the objects (e.g., faces) in the image.
Type: Grant
Filed: November 21, 2014
Date of Patent: August 16, 2016
Assignee: Adobe Systems Incorporated
Inventors: Xiaohui Shen, Haoxiang Li, Zhe Lin, Jonathan W. Brandt
-
Publication number: 20160148079
Abstract: Different candidate windows in an image are identified, such as by sliding a rectangular or other geometric shape of different sizes over an image to identify portions of the image (groups of pixels in the image). The candidate windows are analyzed by a set of convolutional neural networks, which are cascaded so that the input of one convolutional neural network layer is based on the output of another convolutional neural network layer. Each convolutional neural network layer drops or rejects one or more candidate windows that the convolutional neural network layer determines do not include an object (e.g., a face). The candidate windows that are identified as including an object (e.g., a face) are analyzed by another one of the convolutional neural network layers. The candidate windows identified by the last of the convolutional neural network layers are the indications of the objects (e.g., faces) in the image.
Type: Application
Filed: November 21, 2014
Publication date: May 26, 2016
Inventors: Xiaohui Shen, Haoxiang Li, Zhe Lin, Jonathan W. Brandt
-
Publication number: 20160148078
Abstract: A convolutional neural network is trained to analyze input data in various different manners. The convolutional neural network includes multiple layers, one of which is a convolution layer that performs a convolution, for each of one or more filters in the convolution layer, of the filter over the input data. The convolution includes generation of an inner product based on the filter and the input data. Both the filter of the convolution layer and the input data are binarized, allowing the inner product to be computed using particular operations that are typically faster than multiplication of floating point values. The possible results for the convolution layer can optionally be pre-computed and stored in a look-up table.
Type: Application
Filed: November 20, 2014
Publication date: May 26, 2016
Inventors: Xiaohui Shen, Haoxiang Li, Zhe Lin, Jonathan W. Brandt
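For ±1-valued (binarized) vectors packed into machine words, the inner product reduces to bit operations, which is the kind of speedup this abstract alludes to. A minimal sketch (the bit-packing convention, bit set = +1 and bit clear = −1, is an assumption):

```python
def binary_inner_product(a_bits, b_bits, n):
    """Inner product of two n-dimensional ±1 vectors packed as bit masks.
    Matching bits contribute +1 and differing bits -1, so the dot product
    equals 2 * popcount(XNOR(a, b)) - n."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # mask back to n bits
    return 2 * bin(xnor).count("1") - n
```

Because a binarized filter over an n-bit input can produce only a small set of results, the optional look-up table mentioned in the abstract amounts to precomputing this value for each possible input pattern.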
-
Patent number: 9269017
Abstract: Cascaded object detection techniques are described. In one or more implementations, cascaded coarse-to-dense object detection techniques are utilized to detect objects in images. In a first stage, coarse features are extracted from an image, and non-object regions are rejected. Then, in one or more subsequent stages, dense features are extracted from the remaining non-rejected regions of the image to detect one or more objects in the image.
Type: Grant
Filed: November 15, 2013
Date of Patent: February 23, 2016
Assignee: Adobe Systems Incorporated
Inventors: Zhe Lin, Jonathan W. Brandt, Xiaohui Shen, Haoxiang Li
-
Publication number: 20160027181
Abstract: Accelerating object detection techniques are described. In one or more implementations, adaptive sampling techniques are used to extract features from an image. Coarse features are extracted from the image and used to generate an object probability map. Then, dense features are extracted from high-probability object regions of the image identified in the object probability map to enable detection of an object in the image. In one or more implementations, cascade object detection techniques are used to detect an object in an image. In a first stage, exemplars in a first subset of exemplars are applied to features extracted from multiple regions of the image to detect object candidate regions. Then, in one or more validation stages, the object candidate regions are validated by applying exemplars from the first subset of exemplars and one or more additional subsets of exemplars.
Type: Application
Filed: July 28, 2014
Publication date: January 28, 2016
Inventors: Xiaohui Shen, Zhe Lin, Jonathan W. Brandt
-
Publication number: 20160019440
Abstract: Feature interpolation techniques are described. In a training stage, features are extracted from a collection of training images and quantized into visual words. Spatial configurations of the visual words in the training images are determined and stored in a spatial configuration database. In an object detection stage, a portion of the features of an image is extracted from the image and quantized into visual words. Then, a remaining portion of the features of the image is interpolated using the visual words and the spatial configurations of visual words stored in the spatial configuration database.
Type: Application
Filed: July 18, 2014
Publication date: January 21, 2016
Inventors: Xiaohui Shen, Zhe Lin, Jonathan W. Brandt
-
Patent number: 9230192
Abstract: Image classification techniques using images with separate grayscale and color channels are described. In one or more implementations, an image classification network includes grayscale filters and color filters which are separate from the grayscale filters. The grayscale filters are configured to extract grayscale features from a grayscale channel of an image, and the color filters are configured to extract color features from a color channel of the image. The extracted grayscale features and color features are used to identify an object in the image, and the image is classified based on the identified object.
Type: Grant
Filed: November 15, 2013
Date of Patent: January 5, 2016
Assignee: Adobe Systems Incorporated
Inventors: Hailin Jin, Thomas Le Paine, Jianchao Yang, Zhe Lin, Jonathan W. Brandt
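The channel split that feeds the two separate filter banks can be sketched as follows. The BT.601 luma weights and the choice of chroma offsets are assumptions for illustration, since the abstract only specifies separate grayscale and color channels:

```python
import numpy as np

def split_gray_color(rgb):
    """Split an H x W x 3 RGB image into a grayscale (luma) channel and two
    chroma channels, so separate filter banks can be applied to each."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    gray = 0.299 * r + 0.587 * g + 0.114 * b  # ITU-R BT.601 luma (an assumption)
    chroma = np.stack([r - gray, b - gray], axis=-1)  # color offsets from luma
    return gray, chroma
```

The grayscale filters would then see only `gray`, the color filters only `chroma`, and their feature maps are combined downstream for classification.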
-
Patent number: 9208404
Abstract: In techniques for object detection with boosted exemplars, weak classifiers of a real-adaboost technique can be learned as exemplars that are collected from example images. The exemplars are examples of an object that is detectable in image patches of an image, such as faces that are detectable in images. The weak classifiers of the real-adaboost technique can be applied to the image patches of the image, and a confidence score is determined for each of the weak classifiers as applied to an image patch of the image. The confidence score of a weak classifier is an indication of whether the object is detected in the image patch of the image based on the weak classifier. All of the confidence scores of the weak classifiers can then be summed to generate an overall object detection score that indicates whether the image patch of the image includes the object.
Type: Grant
Filed: November 15, 2013
Date of Patent: December 8, 2015
Assignee: Adobe Systems Incorporated
Inventors: Zhe Lin, Jonathan W. Brandt, Xiaohui Shen, Haoxiang Li
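The scoring rule in this abstract (sum every weak classifier's confidence into one overall detection score) is directly expressible; the exemplar classifiers here are hypothetical callables returning signed confidences:

```python
def overall_detection_score(patch, exemplar_classifiers):
    """Sum the per-exemplar confidence scores for one image patch; the sum
    is the overall detection score compared against a decision threshold."""
    return sum(clf(patch) for clf in exemplar_classifiers)
```

A patch is then declared to contain the object when the summed score exceeds a learned threshold.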
-
Patent number: 9202138
Abstract: Various embodiments of methods and apparatus for feature point localization are disclosed. A profile model and a shape model may be applied to an object in an image to determine locations of feature points for each object component. Input may be received to move one of the feature points to a fixed location. Other ones of the feature points may be automatically adjusted to different locations based on the moved feature point.
Type: Grant
Filed: October 4, 2012
Date of Patent: December 1, 2015
Assignee: Adobe Systems Incorporated
Inventors: Jonathan W. Brandt, Zhe Lin, Vuong Le, Lubomir D. Bourdev
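A heavily simplified sketch of the interactive adjustment: the patent propagates a user's pinned point through a learned shape model's correlations, for which a single scalar coupling coefficient is a crude stand-in here:

```python
import numpy as np

def adjust_points(points, idx, new_pos, coupling=0.5):
    """Move point idx to new_pos and shift the remaining feature points by
    a fraction of the same displacement. The scalar coupling stands in for
    the shape model's learned inter-point correlations (a toy assumption)."""
    pts = np.asarray(points, dtype=float).copy()
    delta = np.asarray(new_pos, dtype=float) - pts[idx]
    pts += coupling * delta   # soft update for every point
    pts[idx] = new_pos        # hard constraint for the pinned point
    return pts
```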
-
Patent number: 9158963
Abstract: Various embodiments of methods and apparatus for feature point localization are disclosed. An object in an input image may be detected. A profile model may be applied to determine feature point locations for each object component of the detected object. Applying the profile model may include globally optimizing the feature points for each object component to find a global energy minimum. A component-based shape model may be applied to update the respective feature point locations for each object component.
Type: Grant
Filed: October 4, 2012
Date of Patent: October 13, 2015
Assignee: Adobe Systems Incorporated
Inventors: Jonathan W. Brandt, Zhe Lin, Lubomir D. Bourdev, Vuong Le
-
Patent number: 9098930
Abstract: Embodiments of methods and systems for stereo-aware image editing are described. A three-dimensional model of a stereo scene is built from one or more input images. Camera parameters for the input images are computed. The three-dimensional model is modified. In some embodiments, modifying the three-dimensional model includes modifying one or more of the images and applying the results to corresponding model vertices. The scene is re-rendered from the camera parameters to produce an edited stereo pair that is consistent with the three-dimensional model.
Type: Grant
Filed: September 27, 2012
Date of Patent: August 4, 2015
Assignee: Adobe Systems Incorporated
Inventors: Scott D. Cohen, Brian L. Price, Chenxi Zhang, Jonathan W. Brandt
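Keeping an edit consistent across both views requires knowing where a scene point projects in each image; under standard rectified-stereo assumptions (not spelled out in the abstract, which re-renders from a full 3-D model) this is the disparity relation d = f·B/Z:

```python
def disparity_px(focal_px, baseline_m, depth_m):
    """Horizontal pixel offset between the left and right views of a point
    at the given depth, for a rectified stereo pair: d = f * B / Z."""
    return focal_px * baseline_m / depth_m
```

An edit painted at pixel (x, y) in the left image then lands near (x − d, y) in the right image for a point at depth Z, which is why nearer objects (smaller Z) shift more between the two views.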
-
Publication number: 20150139536
Abstract: Image classification techniques using images with separate grayscale and color channels are described. In one or more implementations, an image classification network includes grayscale filters and color filters which are separate from the grayscale filters. The grayscale filters are configured to extract grayscale features from a grayscale channel of an image, and the color filters are configured to extract color features from a color channel of the image. The extracted grayscale features and color features are used to identify an object in the image, and the image is classified based on the identified object.
Type: Application
Filed: November 15, 2013
Publication date: May 21, 2015
Applicant: Adobe Systems Incorporated
Inventors: Hailin Jin, Thomas Le Paine, Jianchao Yang, Zhe Lin, Jonathan W. Brandt
-
Publication number: 20150139551
Abstract: Cascaded object detection techniques are described. In one or more implementations, cascaded coarse-to-dense object detection techniques are utilized to detect objects in images. In a first stage, coarse features are extracted from an image, and non-object regions are rejected. Then, in one or more subsequent stages, dense features are extracted from the remaining non-rejected regions of the image to detect one or more objects in the image.
Type: Application
Filed: November 15, 2013
Publication date: May 21, 2015
Applicant: Adobe Systems Incorporated
Inventors: Zhe Lin, Jonathan W. Brandt, Xiaohui Shen, Haoxiang Li
-
Publication number: 20150139538
Abstract: In techniques for object detection with boosted exemplars, weak classifiers of a real-adaboost technique can be learned as exemplars that are collected from example images. The exemplars are examples of an object that is detectable in image patches of an image, such as faces that are detectable in images. The weak classifiers of the real-adaboost technique can be applied to the image patches of the image, and a confidence score is determined for each of the weak classifiers as applied to an image patch of the image. The confidence score of a weak classifier is an indication of whether the object is detected in the image patch of the image based on the weak classifier. All of the confidence scores of the weak classifiers can then be summed to generate an overall object detection score that indicates whether the image patch of the image includes the object.
Type: Application
Filed: November 15, 2013
Publication date: May 21, 2015
Applicant: Adobe Systems Incorporated
Inventors: Zhe Lin, Jonathan W. Brandt, Xiaohui Shen, Haoxiang Li