Patents by Inventor Jonathan W. Brandt

Jonathan W. Brandt has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10783431
    Abstract: Image search techniques and systems involving emotions are described. In one or more implementations, a digital medium environment of a content sharing service is described for image search result configuration and control based on a search request that indicates an emotion. The search request is received that includes one or more keywords and specifies an emotion. Images that are available for licensing are located by matching one or more tags associated with each image with the one or more keywords and by determining that the images correspond to the specified emotion. The emotion of the images is identified using one or more models that are trained using machine learning based at least in part on training images having tagged emotions. Output of a search result is then controlled, the search result having one or more representations of the images that are selectable to license the respective images from the content sharing service.
    Type: Grant
    Filed: November 11, 2015
    Date of Patent: September 22, 2020
    Assignee: Adobe Inc.
    Inventors: Zeke Koch, Gavin Stuart Peter Miller, Jonathan W. Brandt, Nathan A. Carr, Radomir Mech, Walter Wei-Tuh Chang, Scott D. Cohen, Hailin Jin
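    Illustrative sketch: the abstract above combines keyword/tag matching with an emotion model trained via machine learning. The Python sketch below is a rough, hypothetical illustration of that flow and not the patented implementation; the catalog layout, the stub predict_emotion_scores model, and the 0.5 score threshold are all assumptions.
      # Illustrative only: keyword-plus-emotion search over a toy catalog of licensable images.
      from dataclasses import dataclass
      from typing import Dict, List

      @dataclass
      class StockImage:
          image_id: str
          tags: List[str]

      def predict_emotion_scores(image: StockImage) -> Dict[str, float]:
          # Stand-in for a model trained on images with tagged emotions;
          # a real system would run a trained classifier on the pixels.
          canned = {"img-1": {"joy": 0.9, "calm": 0.2},
                    "img-2": {"joy": 0.1, "calm": 0.8}}
          return canned.get(image.image_id, {})

      def search(catalog: List[StockImage], keywords: List[str], emotion: str,
                 min_score: float = 0.5) -> List[str]:
          """Return ids of licensable images that match a keyword and the requested emotion."""
          results = []
          for image in catalog:
              if not set(keywords) & set(image.tags):
                  continue                               # no tag/keyword match
              score = predict_emotion_scores(image).get(emotion, 0.0)
              if score >= min_score:                     # image corresponds to the emotion
                  results.append(image.image_id)
          return results

      catalog = [StockImage("img-1", ["beach", "family"]),
                 StockImage("img-2", ["beach", "sunset"])]
      print(search(catalog, ["beach"], "joy"))           # ['img-1']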
  • Patent number: 10389804
    Abstract: Content creation and sharing integration techniques and systems are described. In one or more implementations, techniques are described in which modifiable versions of content (e.g., images) are created and shared via a content sharing service such that image creation functionality used to create the images is preserved to permit continued creation using this functionality. In one or more additional implementations, image creation functionality employed by a creative professional to create content is leveraged to locate similar images from a content sharing service.
    Type: Grant
    Filed: November 11, 2015
    Date of Patent: August 20, 2019
    Assignee: Adobe Inc.
    Inventors: Zeke Koch, Gavin Stuart Peter Miller, Jonathan W. Brandt, Nathan A. Carr, Radomir Mech, Walter Wei-Tuh Chang, Scott D. Cohen, Hailin Jin
  • Patent number: 10249061
    Abstract: Content creation and sharing integration techniques and systems are described. In one or more implementations, techniques are described in which modifiable versions of content (e.g., images) are created and shared via a content sharing service such that image creation functionality used to create the images is preserved to permit continued creation using this functionality. In one or more additional implementations, image creation functionality employed by a creative professional to create content is leveraged to locate similar images from a content sharing service.
    Type: Grant
    Filed: November 11, 2015
    Date of Patent: April 2, 2019
    Assignee: Adobe Inc.
    Inventors: Zeke Koch, Gavin Stuart Peter Miller, Jonathan W. Brandt, Nathan A. Carr, Radomir Mech, Walter Wei-Tuh Chang, Scott D. Cohen, Hailin Jin
  • Patent number: 10198590
    Abstract: Content creation collection and navigation techniques and systems are described. In one example, a representative image is used by a content sharing service to interact with a collection of images provided as part of a search result. In another example, a user interface image navigation control is configured to support user navigation through images based on one or more metrics. In a further example, a user interface image navigation control is configured to support user navigation through images based on one or more metrics identified for an object selected from the image. In yet another example, collections of images are leveraged as part of content creation. In another example, data obtained from a content sharing service is leveraged to indicate suitability of images of a user for licensing as part of the service.
    Type: Grant
    Filed: November 11, 2015
    Date of Patent: February 5, 2019
    Assignee: Adobe Inc.
    Inventors: Zeke Koch, Gavin Stuart Peter Miller, Jonathan W. Brandt, Nathan A. Carr, Radomir Mech, Walter Wei-Tuh Chang, Scott D. Cohen, Hailin Jin
  • Patent number: 10043057
    Abstract: Accelerating object detection techniques are described. In one or more implementations, adaptive sampling techniques are used to extract features from an image. Coarse features are extracted from the image and used to generate an object probability map. Then, dense features are extracted from high-probability object regions of the image identified in the object probability map to enable detection of an object in the image. In one or more implementations, cascade object detection techniques are used to detect an object in an image. In a first stage, exemplars in a first subset of exemplars are applied to features extracted from multiple regions of the image to detect object candidate regions. Then, in one or more validation stages, the object candidate regions are validated by applying exemplars from the first subset of exemplars and one or more additional subsets of exemplars.
    Type: Grant
    Filed: September 1, 2016
    Date of Patent: August 7, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Xiaohui Shen, Zhe Lin, Jonathan W. Brandt
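    Illustrative sketch: the adaptive sampling idea in the abstract above (coarse features, an object probability map, then dense features only for high-probability regions) can be pictured with the small NumPy sketch below. It is a hypothetical illustration, not the patented method; the cell size, the variance-based stand-in for coarse features, the histogram stand-in for dense features, and the 0.5 threshold are assumptions.
      # Coarse-to-dense adaptive sampling sketch (stand-in features; see assumptions above).
      import numpy as np

      def coarse_probability_map(image: np.ndarray, cell: int = 32) -> np.ndarray:
          """Cheap 'coarse feature' pass: one score per cell (here, local variance)."""
          h, w = image.shape
          gh, gw = h // cell, w // cell
          probs = np.zeros((gh, gw))
          for i in range(gh):
              for j in range(gw):
                  patch = image[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
                  probs[i, j] = patch.var()
          return probs / (probs.max() + 1e-8)            # normalize to [0, 1]

      def dense_features(patch: np.ndarray) -> np.ndarray:
          """Expensive 'dense feature' stand-in: a small intensity histogram."""
          hist, _ = np.histogram(patch, bins=16, range=(0.0, 1.0))
          return hist / max(hist.sum(), 1)

      def extract_for_detection(image: np.ndarray, threshold: float = 0.5, cell: int = 32):
          probs = coarse_probability_map(image, cell)
          candidates = []
          for i, j in zip(*np.where(probs >= threshold)):  # only high-probability cells
              patch = image[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
              candidates.append(((i, j), dense_features(patch)))
          return candidates                               # dense features would feed the detector

      image = np.random.rand(128, 128)
      print(len(extract_for_detection(image)), "high-probability cells received dense features")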
  • Patent number: 9852326
    Abstract: Techniques for facial expression capture for character animation are described. In one or more implementations, facial key points are identified in a series of images. Each image in the series is normalized based on the identified facial key points. Facial features are determined from each of the normalized images. Then a facial expression is classified, based on the determined facial features, for each of the normalized images. In additional implementations, a series of images is captured that includes performances of one or more facial expressions. The facial expressions in each image of the series are classified by a facial expression classifier. Then the facial expression classifications are used by a character animator system to produce a series of animated images of an animated character that include animated facial expressions associated with the facial expression classification of the corresponding image in the series.
    Type: Grant
    Filed: January 5, 2017
    Date of Patent: December 26, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Wilmot Wei-Mau Li, Jianchao Yang, Linjie Luo, Jonathan W. Brandt, Xiang Yu
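    Illustrative sketch: the pipeline in the abstract above (identify key points, normalize each image from them, compute features, classify the expression) is illustrated below with a deliberately tiny NumPy example. The three-point "face", the inter-ocular normalization, the flattened-coordinate features, and the nearest-prototype classifier are all assumptions standing in for the trained components.
      # Keypoints -> normalize -> features -> classify, on toy data (see assumptions above).
      import numpy as np

      def normalize_keypoints(keypoints: np.ndarray) -> np.ndarray:
          """Center the key points and scale by the distance between the first two (the 'eyes')."""
          centered = keypoints - keypoints.mean(axis=0)
          eye_dist = np.linalg.norm(keypoints[0] - keypoints[1]) + 1e-8
          return centered / eye_dist

      def expression_features(norm_keypoints: np.ndarray) -> np.ndarray:
          """Stand-in facial features: the flattened normalized coordinates."""
          return norm_keypoints.ravel()

      def classify(features: np.ndarray, prototypes: dict) -> str:
          """Nearest-prototype classifier; a trained expression classifier would replace this."""
          return min(prototypes, key=lambda name: np.linalg.norm(features - prototypes[name]))

      # Toy 3-point faces: two eye corners plus one mouth point whose height differs per expression.
      neutral = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
      smile   = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])
      prototypes = {"neutral": expression_features(normalize_keypoints(neutral)),
                    "smile":   expression_features(normalize_keypoints(smile))}

      frame = np.array([[0.1, 0.1], [1.1, 0.1], [0.6, 0.9]])   # key points from one captured frame
      label = classify(expression_features(normalize_keypoints(frame)), prototypes)
      print(label)   # 'smile' -- the label would select the matching animated facial expression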
  • Patent number: 9846840
    Abstract: Semantic class localization techniques and systems are described. In one or more implementations, a technique is employed to communicate relevancies of aggregations back through layers of a neural network. Through use of these relevancies, activation relevancy maps are created that describe the relevancy of portions of the image to the classification of the image as corresponding to a semantic class. In this way, the semantic class is localized to portions of the image. This may be performed through communication of positive rather than negative relevancies, through use of contrastive attention maps to differentiate between semantic classes, and even within the same semantic class through use of a self-contrastive technique.
    Type: Grant
    Filed: May 25, 2016
    Date of Patent: December 19, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Zhe Lin, Xiaohui Shen, Jonathan W. Brandt, Jianming Zhang
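    Illustrative sketch: the abstract above describes propagating positive relevancies back through network layers to build an activation relevancy map for a chosen semantic class. The NumPy sketch below shows that general flavour on a tiny two-layer network; the layer sizes, the random weights, and the positive-only redistribution rule are assumptions and this is not the patented algorithm.
      # Toy positive-relevance backpropagation through two dense layers (illustrative only).
      import numpy as np

      rng = np.random.default_rng(0)

      def relu(x):
          return np.maximum(x, 0.0)

      def backprop_positive_relevance(a_prev: np.ndarray, w: np.ndarray, r_out: np.ndarray) -> np.ndarray:
          """Redistribute r_out over the inputs using only positive contributions a_prev * max(w, 0)."""
          w_pos = np.maximum(w, 0.0)
          z = a_prev @ w_pos + 1e-9          # positive pre-activations (denominator)
          s = r_out / z                       # per-output scaling factors
          return a_prev * (s @ w_pos.T)       # relevance assigned to each input unit

      # Tiny forward pass: 16 "image regions" -> 8 hidden units -> 3 semantic classes.
      x  = rng.random(16)
      w1 = rng.normal(size=(16, 8))
      w2 = rng.normal(size=(8, 3))
      h  = relu(x @ w1)
      scores = h @ w2

      r_class = np.zeros(3)
      r_class[int(np.argmax(scores))] = 1.0   # start all relevance at the chosen semantic class
      r_hidden = backprop_positive_relevance(h, w2, r_class)
      r_input  = backprop_positive_relevance(x, w1, r_hidden)
      print(np.round(r_input.reshape(4, 4), 3))   # 4x4 activation relevancy map over the regions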
  • Publication number: 20170344884
    Abstract: Semantic class localization techniques and systems are described. In one or more implementations, a technique is employed to communicate relevancies of aggregations back through layers of a neural network. Through use of these relevancies, activation relevancy maps are created that describe the relevancy of portions of the image to the classification of the image as corresponding to a semantic class. In this way, the semantic class is localized to portions of the image. This may be performed through communication of positive rather than negative relevancies, through use of contrastive attention maps to differentiate between semantic classes, and even within the same semantic class through use of a self-contrastive technique.
    Type: Application
    Filed: May 25, 2016
    Publication date: November 30, 2017
    Applicant: Adobe Systems Incorporated
    Inventors: Zhe Lin, Xiaohui Shen, Jonathan W. Brandt, Jianming Zhang
  • Patent number: 9818044
    Abstract: Content update and suggestion techniques are described. In one or more implementations, techniques are implemented to generate suggestions that are usable to guide creative professionals in updating content such as images, video, sound, multimedia, and so forth. A variety of techniques are usable to generate suggestions for these creative professionals. In one example, suggestions are based on shared characteristics of images licensed by users of a content sharing service. In another example, suggestions are based on metadata of the images licensed by the users, the metadata describing characteristics of how the images are created. These suggestions are then used to guide transformation of a user's image such that the image exhibits these characteristics and thus has an increased likelihood of being desired for licensing by customers of the service.
    Type: Grant
    Filed: November 11, 2015
    Date of Patent: November 14, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Zeke Koch, Gavin Stuart Peter Miller, Jonathan W. Brandt, Nathan A. Carr, Walter Wei-Tuh Chang, Scott D. Cohen, Hailin Jin
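    Illustrative sketch: the abstract above derives suggestions from shared characteristics and creation metadata of licensed images. The Python sketch below is a rough, hypothetical illustration only; the metadata fields (brightness, contrast), the per-field averaging, and the suggestion tolerance are assumptions, not details from the patent.
      # Toy metadata-driven suggestions: compare a user's image settings to licensed-image averages.
      from statistics import mean

      # Hypothetical creation metadata for images that were licensed through the service.
      licensed_metadata = [
          {"brightness": 0.72, "contrast": 0.61},
          {"brightness": 0.68, "contrast": 0.58},
          {"brightness": 0.75, "contrast": 0.64},
      ]

      def shared_characteristics(metadata_list):
          """Per-field averages as a stand-in for the 'shared characteristics' of licensed images."""
          return {key: mean(m[key] for m in metadata_list) for key in metadata_list[0]}

      def suggest_updates(user_metadata, targets, tolerance=0.05):
          """Suggest adjusting any setting that is far from the licensed-image average."""
          suggestions = []
          for key, target in targets.items():
              delta = target - user_metadata.get(key, 0.0)
              if abs(delta) > tolerance:
                  direction = "increase" if delta > 0 else "decrease"
                  suggestions.append(f"{direction} {key} toward {target:.2f}")
          return suggestions

      targets = shared_characteristics(licensed_metadata)
      print(suggest_updates({"brightness": 0.45, "contrast": 0.60}, targets))
      # ['increase brightness toward 0.72']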
  • Patent number: 9734434
    Abstract: Feature interpolation techniques are described. In a training stage, features are extracted from a collection of training images and quantized into visual words. Spatial configurations of the visual words in the training images are determined and stored in a spatial configuration database. In an object detection stage, a portion of the features of an image is extracted from the image and quantized into visual words. Then, the remaining portion of the features of the image is interpolated using the visual words and the spatial configurations of visual words stored in the spatial configuration database.
    Type: Grant
    Filed: June 15, 2016
    Date of Patent: August 15, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Xiaohui Shen, Zhe Lin, Jonathan W. Brandt
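    Illustrative sketch: the abstract above quantizes extracted features into visual words and interpolates the unsampled ones from stored spatial configurations. The sketch below builds a toy spatial-configuration table and fills in missing grid cells by voting; the codebook size, the offset-count table, and the voting rule are assumptions, not the patented data structures.
      # Visual-word quantization plus spatial-configuration interpolation on a toy grid.
      import numpy as np
      from collections import Counter, defaultdict

      rng = np.random.default_rng(1)
      codebook = rng.normal(size=(4, 8))                 # 4 visual words, 8-D descriptors

      def quantize(descriptor: np.ndarray) -> int:
          """Assign a descriptor to its nearest visual word in the codebook."""
          return int(np.argmin(np.linalg.norm(codebook - descriptor, axis=1)))

      # Training: record which word tends to appear at which offset from which other word.
      spatial_config = defaultdict(Counter)              # (word, offset) -> counts of neighbour words
      training_words = {(0, 0): 2, (0, 1): 2, (1, 0): 3, (1, 1): 3}
      for (y, x), w in training_words.items():
          for (ny, nx), nw in training_words.items():
              if (ny, nx) != (y, x):
                  spatial_config[(w, (ny - y, nx - x))][nw] += 1

      # Detection: only some grid cells get real descriptors; the rest are interpolated.
      sampled = {(0, 0): quantize(codebook[2] + 0.01 * rng.normal(size=8)),
                 (1, 1): quantize(codebook[3] + 0.01 * rng.normal(size=8))}

      def interpolate(cell):
          votes = Counter()
          for (y, x), w in sampled.items():
              votes.update(spatial_config.get((w, (cell[0] - y, cell[1] - x)), Counter()))
          return votes.most_common(1)[0][0] if votes else None

      for cell in [(0, 1), (1, 0)]:
          print(cell, "interpolated visual word:", interpolate(cell))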
  • Patent number: 9697416
    Abstract: Different candidate windows in an image are identified, such as by sliding a rectangular or other geometric shape of different sizes over an image to identify portions of the image (groups of pixels in the image). The candidate windows are analyzed by a set of convolutional neural networks, which are cascaded so that the input of one convolutional neural network layer is based on the output of another convolutional neural network layer. Each convolutional neural network layer drops or rejects one or more candidate windows that the convolutional neural network layer determines do not include an object (e.g., a face). The candidate windows that are identified as including an object (e.g., a face) are analyzed by another one of the convolutional neural network layers. The candidate windows identified by the last of the convolutional neural network layers are the indications of the objects (e.g., faces) in the image.
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: July 4, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Xiaohui Shen, Haoxiang Li, Zhe Lin, Jonathan W. Brandt
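    Illustrative sketch: the abstract above slides windows over the image and passes the surviving candidates through a cascade in which each stage rejects windows before the next, more expensive stage runs. The sketch below keeps that control flow but replaces the convolutional neural networks with trivial stand-in tests; the window size, stride, and stage thresholds are assumptions.
      # Cascaded rejection over sliding candidate windows (stage "networks" are stubs).
      import numpy as np

      def candidate_windows(image: np.ndarray, size: int = 24, stride: int = 12):
          h, w = image.shape
          for y in range(0, h - size + 1, stride):
              for x in range(0, w - size + 1, stride):
                  yield (y, x, size), image[y:y + size, x:x + size]

      def stage1(window): return window.mean() > 0.4     # cheap test: most windows rejected here
      def stage2(window): return window.std() > 0.1      # mid-cost test: runs on survivors only
      def stage3(window): return window.max() > 0.9      # final, most expensive test

      def detect_objects(image: np.ndarray):
          survivors = list(candidate_windows(image))
          for stage in (stage1, stage2, stage3):          # each stage sees only prior survivors
              survivors = [(box, win) for box, win in survivors if stage(win)]
          return [box for box, _ in survivors]            # boxes accepted by the last stage

      image = np.random.rand(96, 96)
      print(detect_objects(image))                        # (y, x, size) boxes on random test data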
  • Publication number: 20170132290
    Abstract: Image search techniques and systems involving emotions are described. In one or more implementations, a digital medium environment of a content sharing service is described for image search result configuration and control based on a search request that indicates an emotion. The search request is received that includes one or more keywords and specifies an emotion. Images that are available for licensing are located by matching one or more tags associated with each image with the one or more keywords and by determining that the images correspond to the specified emotion. The emotion of the images is identified using one or more models that are trained using machine learning based at least in part on training images having tagged emotions. Output of a search result is then controlled, the search result having one or more representations of the images that are selectable to license the respective images from the content sharing service.
    Type: Application
    Filed: November 11, 2015
    Publication date: May 11, 2017
    Inventors: Zeke Koch, Gavin Stuart Peter Miller, Jonathan W. Brandt, Nathan A. Carr, Radomir Mech, Walter Wei-Tuh Chang, Scott D. Cohen, Hailin Jin
  • Publication number: 20170131877
    Abstract: Content creation and sharing integration techniques and systems are described. In one or more implementations, techniques are described in which modifiable versions of content (e.g., images) are created and shared via a content sharing service such that image creation functionality used to create the images is preserved to permit continued creation using this functionality. In one or more additional implementations, image creation functionality employed by a creative professional to create content is leveraged to locate similar images from a content sharing service.
    Type: Application
    Filed: November 11, 2015
    Publication date: May 11, 2017
    Inventors: Zeke Koch, Gavin Stuart Peter Miller, Jonathan W. Brandt, Nathan A. Carr, Radomir Mech, Walter Wei-Tuh Chang, Scott D. Cohen, Hailin Jin
  • Publication number: 20170132252
    Abstract: Content creation collection and navigation techniques and systems are described. In one example, a representative image is used by a content sharing service to interact with a collection of images provided as part of a search result. In another example, a user interface image navigation control is configured to support user navigation through images based on one or more metrics. In a further example, a user interface image navigation control is configured to support user navigation through images based on one or more metrics identified for an object selected from the image. In yet another example, collections of images are leveraged as part of content creation. In another example, data obtained from a content sharing service is leveraged to indicate suitability of images of a user for licensing as part of the service.
    Type: Application
    Filed: November 11, 2015
    Publication date: May 11, 2017
    Inventors: Zeke Koch, Gavin Stuart Peter Miller, Jonathan W. Brandt, Nathan A. Carr, Radomir Mech, Walter Wei-Tuh Chang, Scott D. Cohen, Hailin Jin
  • Publication number: 20170131876
    Abstract: Content creation and sharing integration techniques and systems are described. In one or more implementations, techniques are described in which modifiable versions of content (e.g., images) are created and shared via a content sharing service such that image creation functionality used to create the images is preserved to permit continued creation using this functionality. In one or more additional implementations, image creation functionality employed by a creative professional to create content is leveraged to locate similar images from a content sharing service.
    Type: Application
    Filed: November 11, 2015
    Publication date: May 11, 2017
    Inventors: Zeke Koch, Gavin Stuart Peter Miller, Jonathan W. Brandt, Nathan A. Carr, Radomir Mech, Walter Wei-Tuh Chang, Scott D. Cohen, Hailin Jin
  • Publication number: 20170132425
    Abstract: Content creation collection and navigation techniques and systems are described. In one example, a representative image is used by a content sharing service to interact with a collection of images provided as part of a search result. In another example, a user interface image navigation control is configured to support user navigation through images based on one or more metrics. In a further example, a user interface image navigation control is configured to support user navigation through images based on one or more metrics identified for an object selected from the image. In yet another example, collections of images are leveraged as part of content creation. In another example, data obtained from a content sharing service is leveraged to indicate suitability of images of a user for licensing as part of the service.
    Type: Application
    Filed: November 11, 2015
    Publication date: May 11, 2017
    Inventors: Zeke Koch, Gavin Stuart Peter Miller, Jonathan W. Brandt, Nathan A. Carr, Radomir Mech, Walter Wei-Tuh Chang, Scott D. Cohen, Hailin Jin
  • Publication number: 20170132490
    Abstract: Content update and suggestion techniques are described. In one or more implementations, techniques are implemented to generate suggestions that are usable to guide creative professionals in updating content such as images, video, sound, multimedia, and so forth. A variety of techniques are usable to generate suggestions for these creative professionals. In one example, suggestions are based on shared characteristics of images licensed by users of a content sharing service. In another example, suggestions are based on metadata of the images licensed by the users, the metadata describing characteristics of how the images are created. These suggestions are then used to guide transformation of a user's image such that the image exhibits these characteristics and thus has an increased likelihood of being desired for licensing by customers of the service.
    Type: Application
    Filed: November 11, 2015
    Publication date: May 11, 2017
    Inventors: Zeke Koch, Gavin Stuart Peter Miller, Jonathan W. Brandt, Nathan A. Carr, Walter Wei-Tuh Chang, Scott D. Cohen, Hailin Jin
  • Publication number: 20170116467
    Abstract: Techniques for facial expression capture for character animation are described. In one or more implementations, facial key points are identified in a series of images. Each image in the series is normalized based on the identified facial key points. Facial features are determined from each of the normalized images. Then a facial expression is classified, based on the determined facial features, for each of the normalized images. In additional implementations, a series of images is captured that includes performances of one or more facial expressions. The facial expressions in each image of the series are classified by a facial expression classifier. Then the facial expression classifications are used by a character animator system to produce a series of animated images of an animated character that include animated facial expressions associated with the facial expression classification of the corresponding image in the series.
    Type: Application
    Filed: January 5, 2017
    Publication date: April 27, 2017
    Applicant: Adobe Systems Incorporated
    Inventors: Wilmot Wei-Mau Li, Jianchao Yang, Linjie Luo, Jonathan W. Brandt, Xiang Yu
  • Patent number: 9563825
    Abstract: A convolutional neural network is trained to analyze input data in various different manners. The convolutional neural network includes multiple layers, one of which is a convolution layer that performs a convolution, for each of one or more filters in the convolution layer, of the filter over the input data. The convolution includes generation of an inner product based on the filter and the input data. Both the filter of the convolution layer and the input data are binarized, allowing the inner product to be computed using particular operations that are typically faster than multiplication of floating point values. The possible results for the convolution layer can optionally be pre-computed and stored in a look-up table. Thus, during operation of the convolutional neural network, rather than performing the convolution on the input data, the pre-computed result can be obtained from the look-up table.
    Type: Grant
    Filed: November 20, 2014
    Date of Patent: February 7, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Xiaohui Shen, Haoxiang Li, Zhe Lin, Jonathan W. Brandt
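    Illustrative sketch: the abstract above binarizes both the filter and the input so the inner product reduces to bit operations, and optionally precomputes the results in a look-up table. The sketch below demonstrates both ideas for a single 3x3 filter; the sign-around-the-mean binarization, the bit packing, and the 3x3 size are assumptions made for the example.
      # Binarized inner product via XOR/popcount, plus a precomputed look-up table for one filter.
      import numpy as np

      def binarize(x: np.ndarray) -> np.ndarray:
          """Map real values to +1/-1 (here: sign relative to the mean)."""
          return np.where(x >= x.mean(), 1, -1).astype(np.int8)

      def pack(v: np.ndarray) -> int:
          """Pack a +/-1 vector into an integer bit mask (+1 -> bit 1, -1 -> bit 0)."""
          bits = 0
          for value in v.ravel():
              bits = (bits << 1) | (1 if value > 0 else 0)
          return bits

      def binary_inner_product(a_bits: int, b_bits: int, n: int) -> int:
          """Inner product of two packed +/-1 vectors of length n: n - 2 * popcount(a XOR b)."""
          return n - 2 * bin(a_bits ^ b_bits).count("1")

      # Precompute the response for every possible 3x3 binary patch against one binarized filter.
      n = 9
      filter_bits = pack(binarize(np.random.randn(3, 3)))
      lookup = {patch_bits: binary_inner_product(patch_bits, filter_bits, n)
                for patch_bits in range(2 ** n)}

      # At run time the convolution result is a table lookup instead of a floating-point dot product.
      patch = np.random.rand(3, 3)
      print(lookup[pack(binarize(patch))])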
  • Patent number: 9552510
    Abstract: Techniques for facial expression capture for character animation are described. In one or more implementations, facial key points are identified in a series of images. Each image in the series is normalized based on the identified facial key points. Facial features are determined from each of the normalized images. Then a facial expression is classified, based on the determined facial features, for each of the normalized images. In additional implementations, a series of images is captured that includes performances of one or more facial expressions. The facial expressions in each image of the series are classified by a facial expression classifier. Then the facial expression classifications are used by a character animator system to produce a series of animated images of an animated character that include animated facial expressions associated with the facial expression classification of the corresponding image in the series.
    Type: Grant
    Filed: March 18, 2015
    Date of Patent: January 24, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Wilmot Wei-Mau Li, Jianchao Yang, Linjie Luo, Jonathan W. Brandt, Xiang Yu