Patents by Inventor David P. Simons

David P. Simons has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10825224
    Abstract: Certain embodiments involve automatically detecting video frames that depict visemes and that are usable for generating an animatable puppet. For example, a computing device accesses video frames depicting a person performing gestures usable for generating a layered puppet, including a viseme gesture corresponding to a target sound or phoneme. The computing device determines that audio data including the target sound or phoneme aligns with a particular video frame from the video frames that depicts the person performing the viseme gesture. The computing device creates, from the video frames, a puppet animation of the gestures, including an animation of the viseme corresponding to the target sound or phoneme that is generated from the particular video frame. The computing device outputs the puppet animation to a presentation device.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: November 3, 2020
    Assignee: Adobe Inc.
    Inventors: Geoffrey Heller, Jakub Fiser, David P. Simons
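
The core of the approach above is aligning phoneme occurrences in the audio with the video frames that depict the corresponding viseme. A minimal Python sketch of that alignment step follows; the function name, tolerance parameter, and nearest-frame matching rule are illustrative assumptions, not details from the patent.

```python
import numpy as np

def frames_for_visemes(frame_times, phoneme_times, tolerance=0.05):
    """Map each detected phoneme occurrence to the video frame whose
    timestamp is closest to it, within a tolerance in seconds.

    frame_times   : 1-D array of frame timestamps (seconds)
    phoneme_times : times at which the target sound/phoneme occurs in the audio
    Returns a list of (phoneme_time, frame_index) pairs; frame_index is None
    when no frame falls within the tolerance.
    """
    frame_times = np.asarray(frame_times, dtype=float)
    matches = []
    for t in phoneme_times:
        idx = int(np.argmin(np.abs(frame_times - t)))   # nearest frame
        if abs(frame_times[idx] - t) <= tolerance:
            matches.append((t, idx))
        else:
            matches.append((t, None))
    return matches

# Example: 24 fps video, a target phoneme detected at 1.30 s and 2.05 s.
frame_times = np.arange(0, 3, 1 / 24.0)
print(frames_for_visemes(frame_times, [1.30, 2.05]))
```
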
  • Publication number: 20200160581
    Abstract: Certain embodiments involve automatically detecting video frames that depict visemes and that are usable for generating an animatable puppet. For example, a computing device accesses video frames depicting a person performing gestures usable for generating a layered puppet, including a viseme gesture corresponding to a target sound or phoneme. The computing device determines that audio data including the target sound or phoneme aligns with a particular video frame from the video frames that depicts the person performing the viseme gesture. The computing device creates, from the video frames, a puppet animation of the gestures, including an animation of the viseme corresponding to the target sound or phoneme that is generated from the particular video frame. The computing device outputs the puppet animation to a presentation device.
    Type: Application
    Filed: November 20, 2018
    Publication date: May 21, 2020
    Inventors: Geoffrey Heller, Jakub Fiser, David P. Simons
  • Patent number: 10607065
    Abstract: Generation of parameterized avatars is described. An avatar generation system uses a trained machine-learning model to generate a parameterized avatar, from which digital visual content (e.g., images, videos, augmented and/or virtual reality (AR/VR) content) can be generated. The machine-learning model is trained to identify cartoon features of a particular style—from a library of these cartoon features—that correspond to features of a person depicted in a digital photograph. The parameterized avatar is data (e.g., a feature vector) that indicates the cartoon features identified from the library by the trained machine-learning model for the depicted person. This parameterization enables the avatar to be animated. The parameterization also enables the avatar generation system to generate avatars in non-photorealistic (relatively cartoony) styles such that, despite the style, the avatars preserve identities and expressions of persons depicted in input digital photographs.
    Type: Grant
    Filed: May 3, 2018
    Date of Patent: March 31, 2020
    Assignee: Adobe Inc.
    Inventors: Rebecca Ilene Milman, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Elya Shechtman, Duygu Ceylan Aksit, David P. Simons
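
The key idea in the entry above is that the avatar is stored as parameters (indices into a library of cartoon features) rather than as pixels, which is what makes it animatable and restylable. The toy sketch below shows what such a parameterization might look like; the library contents, class names, and score-based selection stand in for the trained machine-learning model and are not from the patent.

```python
from dataclasses import dataclass

# Hypothetical cartoon-feature library: each slot offers a few styled choices.
FEATURE_LIBRARY = {
    "hair":  ["short", "long", "curly", "bald"],
    "eyes":  ["round", "narrow", "wide"],
    "mouth": ["smile", "neutral", "open"],
}

@dataclass
class ParameterizedAvatar:
    """An avatar stored as indices into the library rather than as pixels."""
    params: dict  # slot name -> index into FEATURE_LIBRARY[slot]

    def describe(self):
        return {slot: FEATURE_LIBRARY[slot][i] for slot, i in self.params.items()}

def avatar_from_scores(scores):
    """Pick, per slot, the library entry with the highest score.
    `scores` stands in for the trained model's output: slot -> list of scores."""
    params = {slot: max(range(len(s)), key=s.__getitem__) for slot, s in scores.items()}
    return ParameterizedAvatar(params)

# Example scores a classifier might produce for one photograph.
scores = {"hair": [0.1, 0.7, 0.15, 0.05], "eyes": [0.2, 0.1, 0.7], "mouth": [0.6, 0.3, 0.1]}
print(avatar_from_scores(scores).describe())
```
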
  • Publication number: 20190340419
    Abstract: Generation of parameterized avatars is described. An avatar generation system uses a trained machine-learning model to generate a parameterized avatar, from which digital visual content (e.g., images, videos, augmented and/or virtual reality (AR/VR) content) can be generated. The machine-learning model is trained to identify cartoon features of a particular style—from a library of these cartoon features—that correspond to features of a person depicted in a digital photograph. The parameterized avatar is data (e.g., a feature vector) that indicates the cartoon features identified from the library by the trained machine-learning model for the depicted person. This parameterization enables the avatar to be animated. The parameterization also enables the avatar generation system to generate avatars in non-photorealistic (relatively cartoony) styles such that, despite the style, the avatars preserve identities and expressions of persons depicted in input digital photographs.
    Type: Application
    Filed: May 3, 2018
    Publication date: November 7, 2019
    Applicant: Adobe Inc.
    Inventors: Rebecca Ilene Milman, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Elya Shechtman, Duygu Ceylan Aksit, David P. Simons
  • Patent number: 10402481
    Abstract: Systems and methods for switching to different states of electronic content being developed in a content creation application. This involves storing different states of the electronic content using a content-addressable data store, where individual states are represented by identifiers that identify items of respective states stored in the content-addressable data store. Identical items that are included in multiple states are stored once in the content-addressable data store and referenced by common identifiers. Input is received to change the electronic content to a selected state of the different states and the electronic content is displayed in the selected state based on identifiers for the selected state. In this way, undo, redo, and other commands to switch to different states of electronic content being developed are provided.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: September 3, 2019
    Assignee: Adobe Inc.
    Inventors: David P. Simons, James Acquavella, Gregory Scott Evans, Joel Brandt
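
The abstract above describes representing each document state as a set of identifiers into a content-addressable store, so identical items are stored once and undo/redo become moves through the state history. A minimal sketch of that idea follows, using SHA-1 content hashes as identifiers; the class and method names are hypothetical and the design is a simplification, not the patented implementation.

```python
import hashlib
import json

class ContentAddressableHistory:
    """States are lists of items; each item is stored once, keyed by its hash."""

    def __init__(self):
        self.items = {}     # content hash -> serialized item
        self.states = []    # history: each state is a list of item hashes
        self.cursor = -1    # index of the current state (for undo/redo)

    def _put(self, item):
        data = json.dumps(item, sort_keys=True).encode()
        key = hashlib.sha1(data).hexdigest()
        self.items.setdefault(key, data)     # identical items stored once
        return key

    def commit(self, items):
        """Record a new state; anything after the cursor is discarded."""
        self.states = self.states[: self.cursor + 1]
        self.states.append([self._put(it) for it in items])
        self.cursor = len(self.states) - 1

    def _load(self):
        return [json.loads(self.items[k]) for k in self.states[self.cursor]]

    def undo(self):
        self.cursor = max(self.cursor - 1, 0)
        return self._load()

    def redo(self):
        self.cursor = min(self.cursor + 1, len(self.states) - 1)
        return self._load()

h = ContentAddressableHistory()
h.commit([{"layer": "bg"}, {"layer": "text", "value": "Hello"}])
h.commit([{"layer": "bg"}, {"layer": "text", "value": "Hello world"}])
print(h.undo())   # back to the first state; the unchanged "bg" item was stored once
```
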
  • Patent number: 9842094
    Abstract: Systems and methods for switching to different states of electronic content being developed in a content creation application. This involves storing different states of the electronic content using a content-addressable data store, where individual states are represented by identifiers that identify items of respective states stored in the content-addressable data store. Identical items that are included in multiple states are stored once in the content-addressable data store and referenced by common identifiers. Input is received to change the electronic content to a selected state of the different states and the electronic content is displayed in the selected state based on identifiers for the selected state. In this way, undo, redo, and other commands to switch to different states of electronic content being developed are provided.
    Type: Grant
    Filed: February 12, 2016
    Date of Patent: December 12, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: David P. Simons, James Acquavella, Gregory Scott Evans, Joel Brandt
  • Publication number: 20170235710
    Abstract: Systems and methods for switching to different states of electronic content being developed in a content creation application. This involves storing different states of the electronic content using a content-addressable data store, where individual states are represented by identifiers that identify items of respective states stored in the content-addressable data store. Identical items that are included in multiple states are stored once in the content-addressable data store and referenced by common identifiers. Input is received to change the electronic content to a selected state of the different states and the electronic content is displayed in the selected state based on identifiers for the selected state. In this way, undo, redo, and other commands to switch to different states of electronic content being developed are provided.
    Type: Application
    Filed: February 12, 2016
    Publication date: August 17, 2017
    Inventors: David P. Simons, James Acquavella, Gregory Scott Evans, Joel Brandt
  • Patent number: 9697229
    Abstract: One embodiment of the present disclosure is a method of creating metadata during object development. The method comprises receiving a change to an object during its development that results in a changed version of the object, identifying information about the change, and creating metadata comprising the information about the change. The information about the change may include a unique instance identifier identifying and unique to the changed version of the object. As an object is changed multiple times during development, the created metadata may include a series of information segments each relating to a particular change and each uniquely identified by its unique instance identifier. The information about the change may also include, as examples, an identification of a unique instance identifier of a prior version of the object, the time of the change to the object, and/or identification of the software used to make the change.
    Type: Grant
    Filed: May 29, 2008
    Date of Patent: July 4, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Larry Melvin Masinter, Stephen Arnulf Deach, David P. Simons
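
The entry above describes attaching, for each change to an object, a metadata segment that carries a unique instance identifier, a reference to the prior version's identifier, a timestamp, and the software used to make the change. The short sketch below illustrates one way such a segment chain could be recorded; the field names and the use of UUIDs are assumptions for illustration.

```python
import time
import uuid

def record_change(history, software, description):
    """Append a metadata segment describing one change to an object.

    Each segment carries a unique instance identifier, a reference to the
    previous version's identifier (if any), the time of the change, and the
    software used to make it.
    """
    segment = {
        "instance_id": str(uuid.uuid4()),
        "previous_id": history[-1]["instance_id"] if history else None,
        "timestamp": time.time(),
        "software": software,
        "description": description,
    }
    history.append(segment)
    return segment

history = []
record_change(history, "editor 1.0", "created object")
record_change(history, "editor 1.1", "adjusted color")
print([(s["instance_id"][:8], s["previous_id"][:8] if s["previous_id"] else None)
       for s in history])
```
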
  • Patent number: 8971584
    Abstract: Systems, methods, and computer-readable storage media for chatter reduction in video object segmentation using a variable bandwidth search region. A variable bandwidth search region generation method may be applied to a uniform search region to generate a variable bandwidth search region that reduces the search range for segmentation methods such as a graph cut method. The method may identify parts of the contour that are moving slowly, and reduce the search region bandwidth in those places to stabilize the segmentation. This method may determine a bandwidth for each of a plurality of local windows of an image according to an estimate of how much an object in the image has moved from a previous image. The method may blend the bandwidths for the plurality of local windows to generate a blended map. The method may then generate a variable bandwidth search region for an object according to the blended map.
    Type: Grant
    Filed: January 15, 2013
    Date of Patent: March 3, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Jue Wang, David P. Simons, Daniel M. Wilk, Xue Bai
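
The method above narrows the segmentation search band where the object contour is moving slowly and widens it where motion is fast. The sketch below turns a per-window motion estimate into a blended, spatially varying bandwidth map; the linear scaling and Gaussian blending are illustrative choices, not values from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def variable_bandwidth_map(motion, min_bw=2.0, max_bw=20.0, blend_sigma=15.0):
    """Turn a per-window motion estimate into a spatially varying search bandwidth.

    motion : 2-D array, one value per local window, estimating how far the
             object contour moved in that window since the previous frame.
    Returns a smoothly blended bandwidth map: slow-moving regions get a
    narrow search band, fast-moving regions a wide one.
    """
    motion = np.asarray(motion, dtype=float)
    norm = motion / (motion.max() + 1e-8)                  # 0 = static, 1 = fastest
    bandwidth = min_bw + norm * (max_bw - min_bw)          # per-window bandwidth
    return gaussian_filter(bandwidth, sigma=blend_sigma)   # blend across windows

# Example: a 64x64 grid of window motions, mostly static with one fast corner.
motion = np.zeros((64, 64))
motion[48:, 48:] = 8.0
bw = variable_bandwidth_map(motion)
print(bw.min().round(2), bw.max().round(2))
```
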
  • Patent number: 8897562
    Abstract: Methods and apparatus for adaptive trimap propagation. Methods are described that allow a trimap to be propagated from one frame to the next in a temporally coherent way. A radius-based method propagates automatically computed local trimap radii from frame to frame. A mesh-based method employs pins on the binary segmentation boundary and a mesh generated for the unknown region; the pins are tracked from one frame to the next according to an optical flow technique, the mesh is deformed from one frame to the next according to the movement of the pins, and the adaptive trimap is then warped according to the deformed mesh. These methods can be used separately, or the first method can be used to propagate some regions of the adaptive trimap, and the second method can be used to propagate other regions of the adaptive trimap.
    Type: Grant
    Filed: June 29, 2012
    Date of Patent: November 25, 2014
    Assignee: Adobe Systems Incorporated
    Inventors: Xue Bai, Jue Wang, David P. Simons
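
The radius-based variant described above rebuilds the unknown band of the trimap around the new frame's segmentation using locally varying radii carried over from the previous frame. The sketch below shows only the rebuilding step given a binary mask and a radius map; how the radii are computed and propagated is omitted, and the function name and the 0/1/2 encoding for background/unknown/foreground are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def trimap_from_radii(mask, radius_map):
    """Rebuild a trimap (0 = background, 1 = unknown, 2 = foreground) around a
    binary mask, using a spatially varying band radius.

    mask       : 2-D boolean foreground mask for the current frame
    radius_map : 2-D array of band half-widths (propagated from the previous
                 frame in the radius-based method)
    """
    mask = mask.astype(bool)
    # Distance of every pixel to the segmentation boundary, on both sides.
    dist_out = distance_transform_edt(~mask)   # distance outside the object
    dist_in = distance_transform_edt(mask)     # distance inside the object
    dist_to_boundary = np.where(mask, dist_in, dist_out)

    trimap = np.where(mask, 2, 0).astype(np.uint8)
    trimap[dist_to_boundary <= radius_map] = 1  # unknown band, width varies locally
    return trimap

# Example: a square object with a wider unknown band on its right half.
mask = np.zeros((100, 100), dtype=bool)
mask[30:70, 30:70] = True
radius = np.full((100, 100), 3.0)
radius[:, 50:] = 8.0
print(np.bincount(trimap_from_radii(mask, radius).ravel()))
```
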
  • Publication number: 20140304215
    Abstract: One embodiment of the present disclosure is a method of creating metadata during object development. The method comprises receiving a change to an object during its development that results in a changed version of the object, identifying information about the change, and creating metadata comprising the information about the change. The information about the change may include a unique instance identifier identifying and unique to the changed version of the object. As an object is changed multiple times during development, the created metadata may include a series of information segments each relating to a particular change and each uniquely identified by its unique instance identifier. The information about the change may also include, as examples, an identification of a unique instance identifier of a prior version of the object, the time of the change to the object, and/or identification of the software used to make the change.
    Type: Application
    Filed: May 29, 2008
    Publication date: October 9, 2014
    Applicant: Adobe Systems Incorporated
    Inventors: Larry Melvin Masinter, Stephen Arnulf Deach, David P. Simons
  • Patent number: 8792718
    Abstract: Methods and apparatus for temporal matte filtering. Temporal matte filtering methods are described that improve the temporal coherence of alpha mattes for a video sequence while maintaining the matte structures on individual frames. The temporal matte filter may implement a level-set-based matte averaging method. In the level-set-based matte averaging method, two or more input alpha mattes are obtained. Level set curves are generated for the two or more alpha mattes. An averaged level set is computed from the two or more level sets according to a distance-transform-based technique. A temporally smoothed alpha matte may then be reconstructed by interpolating pixel alpha values between the inner and outer level set curves of the averaged level set. The alpha mattes can be optionally warped towards a center frame according to an optical flow technique before the averaging operation performed by the temporal matte filter.
    Type: Grant
    Filed: June 29, 2012
    Date of Patent: July 29, 2014
    Assignee: Adobe Systems Incorporated
    Inventors: Xue Bai, Jue Wang, David P. Simons
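
The level-set-based averaging above can be approximated by converting each alpha matte's iso-contour to a signed distance function, averaging those functions, and ramping alpha back across a band around the averaged zero level set. The sketch below implements that simplified version; the linear ramp and band width are assumptions, not the interpolation used in the patent, and the optical-flow pre-warping step is omitted.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(matte, level=0.5):
    """Signed distance to the `level` iso-contour of an alpha matte
    (positive inside the matte, negative outside)."""
    inside = matte >= level
    return distance_transform_edt(inside) - distance_transform_edt(~inside)

def average_mattes(mattes, band=4.0):
    """Average alpha mattes via their level sets: average the signed distance
    functions, then rebuild alpha by ramping across a band around the zero set."""
    sdf = np.mean([signed_distance(m) for m in mattes], axis=0)
    return np.clip(0.5 + sdf / (2.0 * band), 0.0, 1.0)

# Example: two slightly offset circular mattes from neighboring frames.
yy, xx = np.mgrid[0:100, 0:100]
m1 = ((xx - 48) ** 2 + (yy - 50) ** 2 < 20 ** 2).astype(float)
m2 = ((xx - 52) ** 2 + (yy - 50) ** 2 < 20 ** 2).astype(float)
avg = average_mattes([m1, m2])
print(avg.shape, round(float(avg.min()), 2), round(float(avg.max()), 2))
```
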
  • Patent number: 8731329
    Abstract: Systems and methods for rolling shutter artifact repair are disclosed.
    Type: Grant
    Filed: July 16, 2012
    Date of Patent: May 20, 2014
    Assignee: Adobe Systems Incorporated
    Inventors: David P. Simons, Daniel Wilk, Xue Bai
  • Publication number: 20140016877
    Abstract: Systems and methods for rolling shutter artifact repair are disclosed.
    Type: Application
    Filed: July 16, 2012
    Publication date: January 16, 2014
    Applicant: Adobe Systems Incorporated
    Inventors: David P. Simons, Daniel Wilk, Xue Bai
  • Publication number: 20140002746
    Abstract: Methods and apparatus for temporal matte filtering. Temporal matte filtering methods are described that improve the temporal coherence of alpha mattes for a video sequence while maintaining the matte structures on individual frames. The temporal matte filter may implement a level-set-based matte averaging method. In the level-set-based matte averaging method, two or more input alpha mattes are obtained. Level set curves are generated for the two or more alpha mattes. An averaged level set is computed from the two or more level sets according to a distance-transform-based technique. A temporally smoothed alpha matte may then be reconstructed by interpolating pixel alpha values between the inner and outer level set curves of the averaged level set. The alpha mattes can be optionally warped towards a center frame according to an optical flow technique before the averaging operation performed by the temporal matte filter.
    Type: Application
    Filed: June 29, 2012
    Publication date: January 2, 2014
    Inventors: Xue Bai, Jue Wang, David P. Simons
  • Publication number: 20140003719
    Abstract: Methods and apparatus for adaptive trimap propagation. Methods are described that allow a trimap to be propagated from one frame to the next in a temporally coherent way. A radius-based method propagates automatically computed local trimap radii from frame to frame. A mesh-based method employs pins on the binary segmentation boundary and a mesh generated for the unknown region; the pins are tracked from one frame to the next according to an optical flow technique, the mesh is deformed from one frame to the next according to the movement of the pins, and the adaptive trimap is then warped according to the deformed mesh. These methods can be used separately, or the first method can be used to propagate some regions of the adaptive trimap, and the second method can be used to propagate other regions of the adaptive trimap.
    Type: Application
    Filed: June 29, 2012
    Publication date: January 2, 2014
    Inventors: Xue Bai, Jue Wang, David P. Simons
  • Patent number: 8532421
    Abstract: A sharp frame and a blurred frame are detected from among a plurality of frames. A blur kernel is estimated. The blur kernel represents a motion-transform between the sharp frame and the blurred frame. Using the blur kernel, a static region measure for the sharp frame and the blurred frame is estimated. A de-blurred frame is generated by replacing one or more pixels of the blurred frame as indicated by the static region measure.
    Type: Grant
    Filed: November 30, 2010
    Date of Patent: September 10, 2013
    Assignee: Adobe Systems Incorporated
    Inventors: Jue Wang, David P. Simons, Seungyong Lee, Sunghyun Cho
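
The deblurring approach above replaces pixels of the blurred frame with pixels from a neighboring sharp frame wherever a static-region measure indicates the scene has not moved. The sketch below uses a simplified static measure (a small residual between the blurred frame and the sharp frame re-blurred with the estimated kernel); the threshold and the measure itself are stand-ins, not the patent's formulation.

```python
import numpy as np
from scipy.ndimage import convolve

def deblur_static_regions(sharp, blurred, kernel, threshold=0.02):
    """Replace pixels of a blurred frame with pixels from a neighboring sharp
    frame wherever the region appears static.

    kernel : estimated blur kernel representing the motion transform between
             the sharp frame and the blurred frame.
    """
    sharp = sharp.astype(float)
    blurred = blurred.astype(float)
    predicted = convolve(sharp, kernel, mode="nearest")  # sharp frame pushed through the blur
    static = np.abs(predicted - blurred) < threshold     # per-pixel static-region mask
    out = blurred.copy()
    out[static] = sharp[static]                          # borrow sharp pixels where static
    return out, static

# Example: a synthetic sharp frame, blurred with a horizontal motion kernel.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
kernel = np.zeros((1, 7))
kernel[0, :] = 1.0 / 7.0                                 # horizontal box blur
blurred = convolve(sharp, kernel, mode="nearest")
restored, static = deblur_static_regions(sharp, blurred, kernel)
print(static.mean().round(2))  # fraction of pixels treated as static
```
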
  • Patent number: 8520975
    Abstract: Systems, methods, and computer-readable storage media for chatter reduction in video object segmentation using optical flow assisted gaussholding. An optical flow assisted gaussholding method may be applied to segmentation masks generated for a video sequence. For each frame of at least some frames in a video sequence, for each of one or more other frames prior to and one or more other frames after the current frame, optical flow is computed for the other frame in relation to the current frame and used to warp the contour of the segmentation mask of the other frame, generating a warped segmentation mask for that frame. The weighted average of the warped segmentation masks and the segmentation mask of the current frame is then computed; this weighted average may be blurred spatially, for example using a Gaussian filter. The initial smoothed mask may be thresholded to produce a binary smoothed mask.
    Type: Grant
    Filed: August 30, 2010
    Date of Patent: August 27, 2013
    Assignee: Adobe Systems Incorporated
    Inventors: Jue Wang, David P. Simons, Daniel M. Wilk, Xue Bai
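
Optical-flow-assisted gaussholding, as summarized above, warps neighboring segmentation masks to the current frame, averages them with the current mask, blurs the average spatially, and thresholds back to a binary mask. The sketch below implements that pipeline on precomputed dense flow fields; the weights, blur sigma, and 0.5 threshold are illustrative choices, not values from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def warp_mask(mask, flow):
    """Warp a binary mask toward the current frame using a dense flow field.
    flow[..., 0] / flow[..., 1] are per-pixel x / y displacements."""
    h, w = mask.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    coords = [yy + flow[..., 1], xx + flow[..., 0]]      # backward-warp sample points
    return map_coordinates(mask.astype(float), coords, order=1, mode="nearest")

def gausshold(current_mask, other_masks, flows, weights, sigma=2.0):
    """Warp neighboring masks to the current frame, take a weighted average
    with the current mask, blur it spatially, and threshold back to binary."""
    warped = [warp_mask(m, f) for m, f in zip(other_masks, flows)]
    stack = np.stack([current_mask.astype(float)] + warped)
    w = np.asarray(weights, dtype=float)[:, None, None]
    averaged = (stack * w).sum(axis=0) / w.sum()
    smoothed = gaussian_filter(averaged, sigma=sigma)
    return smoothed > 0.5                                # binary smoothed mask

# Example: the current mask plus one neighboring mask whose object sits 2 px lower.
current = np.zeros((64, 64)); current[20:44, 20:44] = 1
neighbor = np.zeros((64, 64)); neighbor[22:46, 20:44] = 1
flow = np.zeros((64, 64, 2)); flow[..., 1] = 2.0         # sampling 2 px lower aligns it
result = gausshold(current, [neighbor], [flow], weights=[0.6, 0.4])
print(int(result.sum()))
```
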
  • Publication number: 20130121577
    Abstract: Systems, methods, and computer-readable storage media for chatter reduction in video object segmentation using optical flow assisted gaussholding. An optical flow assisted gaussholding method may be applied to segmentation masks generated for a video sequence. For each frame of at least some frames in a video sequence, for each of one or more other frames prior to and one or more other frames after the current frame, optical flow is computed for the other frame in relation to the current frame and used to warp the contour of the segmentation mask of the other frame, generating a warped segmentation mask for that frame. The weighted average of the warped segmentation masks and the segmentation mask of the current frame is then computed; this weighted average may be blurred spatially, for example using a Gaussian filter. The initial smoothed mask may be thresholded to produce a binary smoothed mask.
    Type: Application
    Filed: August 30, 2010
    Publication date: May 16, 2013
    Inventors: Jue Wang, David P. Simons, Daniel M. Wilk, Xue Bai
  • Patent number: 8358691
    Abstract: Systems, methods, and computer-readable storage media for chatter reduction in video object segmentation using a variable bandwidth search region. A variable bandwidth search region generation method may be applied to a uniform search region to generate a variable bandwidth search region that reduces the search range for segmentation methods such as a graph cut method. The method may identify parts of the contour that are moving slowly, and reduce the search region bandwidth in those places to stabilize the segmentation. This method may determine a bandwidth for each of a plurality of local windows of an image according to an estimate of how much an object in the image has moved from a previous image. The method may blend the bandwidths for the plurality of local windows to generate a blended map. The method may then generate a variable bandwidth search region for an object according to the blended map.
    Type: Grant
    Filed: August 30, 2010
    Date of Patent: January 22, 2013
    Assignee: Adobe Systems Incorporated
    Inventors: Jue Wang, David P. Simons, Daniel M. Wilk, Xue Bai