Patents by Inventor David P. Simons
David P. Simons has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10825224
Abstract: Certain embodiments involve automatically detecting video frames that depict visemes and that are usable for generating an animatable puppet. For example, a computing device accesses video frames depicting a person performing gestures usable for generating a layered puppet, including a viseme gesture corresponding to a target sound or phoneme. The computing device determines that audio data including the target sound or phoneme aligns with a particular video frame from the video frames that depicts the person performing the viseme gesture. The computing device creates, from the video frames, a puppet animation of the gestures, including an animation of the viseme corresponding to the target sound or phoneme that is generated from the particular video frame. The computing device outputs the puppet animation to a presentation device.
Type: Grant
Filed: November 20, 2018
Date of Patent: November 3, 2020
Assignee: Adobe Inc.
Inventors: Geoffrey Heller, Jakub Fiser, David P. Simons
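The core alignment step described in the abstract above can be sketched as a timestamp-to-frame-index mapping. This is a hypothetical simplification, not the patent's implementation; the function name and parameters are illustrative.

```python
# Hypothetical sketch of the frame-alignment idea: given the time at which
# a target phoneme occurs in the audio track, pick the video frame whose
# timestamp is closest, clamped to the valid frame range.

def frame_for_phoneme(phoneme_time_s, fps, num_frames):
    """Return the index of the video frame aligned with a phoneme time."""
    idx = round(phoneme_time_s * fps)
    return max(0, min(num_frames - 1, idx))

# A phoneme detected at t = 1.52 s in 24 fps footage maps to frame 36;
# that frame would then supply the viseme image for the puppet animation.
```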
-
Publication number: 20200160581
Abstract: Certain embodiments involve automatically detecting video frames that depict visemes and that are usable for generating an animatable puppet. For example, a computing device accesses video frames depicting a person performing gestures usable for generating a layered puppet, including a viseme gesture corresponding to a target sound or phoneme. The computing device determines that audio data including the target sound or phoneme aligns with a particular video frame from the video frames that depicts the person performing the viseme gesture. The computing device creates, from the video frames, a puppet animation of the gestures, including an animation of the viseme corresponding to the target sound or phoneme that is generated from the particular video frame. The computing device outputs the puppet animation to a presentation device.
Type: Application
Filed: November 20, 2018
Publication date: May 21, 2020
Inventors: Geoffrey Heller, Jakub Fiser, David P. Simons
-
Patent number: 10607065
Abstract: Generation of parameterized avatars is described. An avatar generation system uses a trained machine-learning model to generate a parameterized avatar, from which digital visual content (e.g., images, videos, augmented and/or virtual reality (AR/VR) content) can be generated. The machine-learning model is trained to identify cartoon features of a particular style, from a library of these cartoon features, that correspond to features of a person depicted in a digital photograph. The parameterized avatar is data (e.g., a feature vector) that indicates the cartoon features identified from the library by the trained machine-learning model for the depicted person. This parameterization enables the avatar to be animated. The parameterization also enables the avatar generation system to generate avatars in non-photorealistic (relatively cartoony) styles such that, despite the style, the avatars preserve identities and expressions of persons depicted in input digital photographs.
Type: Grant
Filed: May 3, 2018
Date of Patent: March 31, 2020
Assignee: Adobe Inc.
Inventors: Rebecca Ilene Milman, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Elya Shechtman, Duygu Ceylan Aksit, David P. Simons
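The "parameterized avatar" representation the abstract describes can be illustrated as a vector of indices into a feature library, from which the avatar can be re-rendered or animated. The library contents and hard-coded indices below are placeholder assumptions; in the patent, a trained model would produce the indices from a photograph.

```python
# Illustrative sketch: the avatar is just indices into a library of cartoon
# features, so it is data that can be stored, animated, or re-rendered.

FEATURE_LIBRARY = {
    "eyes":  ["round", "narrow", "wide"],
    "mouth": ["smile", "neutral", "open"],
}

def render(avatar_params):
    """Turn the parameter vector back into named cartoon features."""
    return {slot: FEATURE_LIBRARY[slot][i] for slot, i in avatar_params.items()}

# Hard-coded here for illustration; a trained model would emit these.
avatar = {"eyes": 2, "mouth": 0}
assert render(avatar) == {"eyes": "wide", "mouth": "smile"}
```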
-
Publication number: 20190340419
Abstract: Generation of parameterized avatars is described. An avatar generation system uses a trained machine-learning model to generate a parameterized avatar, from which digital visual content (e.g., images, videos, augmented and/or virtual reality (AR/VR) content) can be generated. The machine-learning model is trained to identify cartoon features of a particular style, from a library of these cartoon features, that correspond to features of a person depicted in a digital photograph. The parameterized avatar is data (e.g., a feature vector) that indicates the cartoon features identified from the library by the trained machine-learning model for the depicted person. This parameterization enables the avatar to be animated. The parameterization also enables the avatar generation system to generate avatars in non-photorealistic (relatively cartoony) styles such that, despite the style, the avatars preserve identities and expressions of persons depicted in input digital photographs.
Type: Application
Filed: May 3, 2018
Publication date: November 7, 2019
Applicant: Adobe Inc.
Inventors: Rebecca Ilene Milman, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Elya Shechtman, Duygu Ceylan Aksit, David P. Simons
-
Patent number: 10402481
Abstract: Systems and methods for switching to different states of electronic content being developed in a content creation application. This involves storing different states of the electronic content using a content-addressable data store, where individual states are represented by identifiers that identify items of respective states stored in the content-addressable data store. Identical items that are included in multiple states are stored once in the content-addressable data store and referenced by common identifiers. Input is received to change the electronic content to a selected state of the different states and the electronic content is displayed in the selected state based on identifiers for the selected state. In this way, undo, redo, and other commands to switch to different states of electronic content being developed are provided.
Type: Grant
Filed: November 22, 2017
Date of Patent: September 3, 2019
Assignee: Adobe Inc.
Inventors: David P. Simons, James Acquavella, Gregory Scott Evans, Joel Brandt
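The content-addressable idea in the abstract above can be sketched in a few lines: each item is stored once under the hash of its content, and a document "state" is just a list of item hashes, so items shared across states are stored a single time. Class and method names are illustrative, not from the patent.

```python
# Minimal sketch of a content-addressable store for undo/redo states:
# identical items collapse to one stored copy, referenced by a common key.
import hashlib

class ContentAddressableStore:
    def __init__(self):
        self._items = {}

    def put(self, content: bytes) -> str:
        key = hashlib.sha256(content).hexdigest()
        self._items[key] = content  # idempotent: duplicates collapse
        return key

    def get(self, key: str) -> bytes:
        return self._items[key]

store = ContentAddressableStore()
state_a = [store.put(b"layer-1"), store.put(b"layer-2")]
state_b = [store.put(b"layer-1"), store.put(b"layer-2-edited")]
# "layer-1" appears in both states but is stored only once,
# so switching between states (undo/redo) only swaps identifier lists.
assert len(store._items) == 3
```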
-
Patent number: 9842094
Abstract: Systems and methods for switching to different states of electronic content being developed in a content creation application. This involves storing different states of the electronic content using a content-addressable data store, where individual states are represented by identifiers that identify items of respective states stored in the content-addressable data store. Identical items that are included in multiple states are stored once in the content-addressable data store and referenced by common identifiers. Input is received to change the electronic content to a selected state of the different states and the electronic content is displayed in the selected state based on identifiers for the selected state. In this way, undo, redo, and other commands to switch to different states of electronic content being developed are provided.
Type: Grant
Filed: February 12, 2016
Date of Patent: December 12, 2017
Assignee: Adobe Systems Incorporated
Inventors: David P. Simons, James Acquavella, Gregory Scott Evans, Joel Brandt
-
Publication number: 20170235710
Abstract: Systems and methods for switching to different states of electronic content being developed in a content creation application. This involves storing different states of the electronic content using a content-addressable data store, where individual states are represented by identifiers that identify items of respective states stored in the content-addressable data store. Identical items that are included in multiple states are stored once in the content-addressable data store and referenced by common identifiers. Input is received to change the electronic content to a selected state of the different states and the electronic content is displayed in the selected state based on identifiers for the selected state. In this way, undo, redo, and other commands to switch to different states of electronic content being developed are provided.
Type: Application
Filed: February 12, 2016
Publication date: August 17, 2017
Inventors: David P. Simons, James Acquavella, Gregory Scott Evans, Joel Brandt
-
Patent number: 9697229
Abstract: One embodiment of the present disclosure is a method of creating metadata during object development. The method comprises receiving a change to an object during its development that results in a changed version of the object, identifying information about the change, and creating metadata comprising the information about the change. The information about the change may include a unique instance identifier identifying and unique to the changed version of the object. As an object is changed multiple times during development, the created metadata may include a series of information segments each relating to a particular change and each uniquely identified by its unique instance identifier. The information about the change may also include, as examples, an identification of a unique instance identifier of a prior version of the object, the time of the change to the object, and/or identification of the software used to make the change.
Type: Grant
Filed: May 29, 2008
Date of Patent: July 4, 2017
Assignee: Adobe Systems Incorporated
Inventors: Larry Melvin Masinter, Stephen Arnulf Deach, David P. Simons
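The change-metadata structure the abstract describes (unique instance identifier, prior-version identifier, time, software) can be sketched as a simple record, with each change linking back to the previous version's identifier. The field names below are assumptions for illustration.

```python
# Illustrative sketch of per-change metadata with unique instance
# identifiers: each record identifies one changed version and links to the
# identifier of the version it was derived from, forming a history chain.
import time
import uuid

def record_change(prior_id, software):
    return {
        "instance_id": uuid.uuid4().hex,  # unique to this changed version
        "prior_id": prior_id,             # identifier of the prior version
        "timestamp": time.time(),         # time of the change
        "software": software,             # software used to make the change
    }

v1 = record_change(None, "editor 1.0")
v2 = record_change(v1["instance_id"], "editor 1.1")
# v2's metadata points back at v1, so the series of changes is recoverable.
assert v2["prior_id"] == v1["instance_id"]
```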
-
Patent number: 8971584
Abstract: Systems, methods, and computer-readable storage media for chatter reduction in video object segmentation using a variable bandwidth search region. A variable bandwidth search region generation method may be applied to a uniform search region to generate a variable bandwidth search region that reduces the search range for segmentation methods such as a graph cut method. The method may identify parts of the contour that are moving slowly, and reduce the search region bandwidth in those places to stabilize the segmentation. This method may determine a bandwidth for each of a plurality of local windows of an image according to an estimate of how much an object in the image has moved from a previous image. The method may blend the bandwidths for the plurality of local windows to generate a blended map. The method may then generate a variable bandwidth search region for an object according to the blended map.
Type: Grant
Filed: January 15, 2013
Date of Patent: March 3, 2015
Assignee: Adobe Systems Incorporated
Inventors: Jue Wang, David P. Simons, Daniel M. Wilk, Xue Bai
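The two steps the abstract names (a per-window bandwidth from a motion estimate, then blending into a map) can be sketched as follows. The mapping function, constants, and the simple neighbor-averaging blend are assumptions standing in for the patent's actual computations.

```python
# Hedged sketch: slow-moving windows get a narrow search band (stabilizing
# the segmentation), fast-moving windows a wide one; neighboring bandwidths
# are then blended to form a smooth map.

def window_bandwidths(motions, b_min=2.0, b_max=20.0, scale=4.0):
    """Map per-window motion estimates to search-region bandwidths."""
    return [min(b_max, b_min + scale * m) for m in motions]

def blend(bands):
    """Average each bandwidth with its immediate neighbors."""
    out = []
    for i in range(len(bands)):
        lo, hi = max(0, i - 1), min(len(bands), i + 2)
        out.append(sum(bands[lo:hi]) / (hi - lo))
    return out

bands = window_bandwidths([0.0, 0.5, 5.0])  # two slow windows, one fast
# bands == [2.0, 4.0, 20.0]; blending spreads the wide band into neighbors
blended = blend(bands)
```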
-
Patent number: 8897562
Abstract: Methods and apparatus for adaptive trimap propagation. Methods are described that allow a trimap to be propagated from one frame to the next in a temporally coherent way. A radius-based method propagates automatically computed local trimap radii from frame to frame. A mesh-based method employs pins on the binary segmentation boundary and a mesh generated for the unknown region; the pins are tracked from one frame to the next according to an optical flow technique, the mesh is deformed from one frame to the next according to the movement of the pins, and the adaptive trimap is then warped according to the deformed mesh. These methods can be used separately, or the first method can be used to propagate some regions of the adaptive trimap, and the second method can be used to propagate other regions of the adaptive trimap.
Type: Grant
Filed: June 29, 2012
Date of Patent: November 25, 2014
Assignee: Adobe Systems Incorporated
Inventors: Xue Bai, Jue Wang, David P. Simons
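The radius-based method above can be illustrated with a toy propagation step: per-window trimap radii computed on one frame are carried to the next and relaxed toward a default width so the unknown band stays temporally coherent. The relaxation-factor scheme is an assumption for illustration, not the patent's update rule.

```python
# Toy sketch of radius-based trimap propagation: carry each local radius
# to the next frame, blended toward a default so radii change smoothly.

def propagate_radii(prev_radii, default=5.0, blend=0.7):
    """Blend last frame's per-window trimap radii toward a default width."""
    return [blend * r + (1 - blend) * default for r in prev_radii]

radii = propagate_radii([10.0, 5.0, 2.0])
# approximately [8.5, 5.0, 2.9]: wide and narrow bands both drift
# toward the default instead of jumping frame to frame
```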
-
Publication number: 20140304215
Abstract: One embodiment of the present disclosure is a method of creating metadata during object development. The method comprises receiving a change to an object during its development that results in a changed version of the object, identifying information about the change, and creating metadata comprising the information about the change. The information about the change may include a unique instance identifier identifying and unique to the changed version of the object. As an object is changed multiple times during development, the created metadata may include a series of information segments each relating to a particular change and each uniquely identified by its unique instance identifier. The information about the change may also include, as examples, an identification of a unique instance identifier of a prior version of the object, the time of the change to the object, and/or identification of the software used to make the change.
Type: Application
Filed: May 29, 2008
Publication date: October 9, 2014
Applicant: Adobe Systems Incorporated
Inventors: Larry Melvin Masinter, Stephen Arnulf Deach, David P. Simons
-
Patent number: 8792718
Abstract: Methods and apparatus for temporal matte filtering. Temporal matte filtering methods are described that improve the temporal coherence of alpha mattes for a video sequence while maintaining the matte structures on individual frames. The temporal matte filter may implement a level-set-based matte averaging method. In the level-set-based matte averaging method, two or more input alpha mattes are obtained. Level set curves are generated for the two or more alpha mattes. An averaged level set is computed from the two or more level sets according to a distance-transform-based technique. A temporally smoothed alpha matte may then be reconstructed by interpolating pixel alpha values between the inner and outer level set curves of the averaged level set. The alpha mattes can be optionally warped towards a center frame according to an optical flow technique before the averaging operation performed by the temporal matte filter.
Type: Grant
Filed: June 29, 2012
Date of Patent: July 29, 2014
Assignee: Adobe Systems Incorporated
Inventors: Xue Bai, Jue Wang, David P. Simons
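The level-set averaging idea can be shown in a heavily simplified one-dimensional form: each matte's boundary is encoded as a signed distance along a scanline, the distances are averaged, and an alpha value is reconstructed from the averaged distance. Real mattes are two-dimensional and use a distance transform; everything below, including the softness parameter, is an assumption-laden toy.

```python
# Toy 1-D sketch of level-set matte averaging: averaging signed distances
# from two boundaries yields an effective boundary between them, and alpha
# is reconstructed by clamping a linear ramp around that boundary.

def signed_distance(boundary, x):
    """Distance of position x from the matte boundary (positive = inside)."""
    return boundary - x

def averaged_alpha(b1, b2, x, softness=2.0):
    d = (signed_distance(b1, x) + signed_distance(b2, x)) / 2.0
    return max(0.0, min(1.0, 0.5 + d / (2.0 * softness)))

# Boundaries at x=10 and x=12 average to an effective boundary near x=11,
# so averaged_alpha(10, 12, 11) sits exactly at 0.5.
```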
-
Patent number: 8731329
Abstract: Systems and methods for rolling shutter artifact repair are disclosed.
Type: Grant
Filed: July 16, 2012
Date of Patent: May 20, 2014
Assignee: Adobe Systems Incorporated
Inventors: David P. Simons, Daniel Wilk, Xue Bai
-
Publication number: 20140016877
Abstract: Systems and methods for rolling shutter artifact repair are disclosed.
Type: Application
Filed: July 16, 2012
Publication date: January 16, 2014
Applicant: Adobe Systems Incorporated
Inventors: David P. Simons, Daniel Wilk, Xue Bai
-
Publication number: 20140002746
Abstract: Methods and apparatus for temporal matte filtering. Temporal matte filtering methods are described that improve the temporal coherence of alpha mattes for a video sequence while maintaining the matte structures on individual frames. The temporal matte filter may implement a level-set-based matte averaging method. In the level-set-based matte averaging method, two or more input alpha mattes are obtained. Level set curves are generated for the two or more alpha mattes. An averaged level set is computed from the two or more level sets according to a distance-transform-based technique. A temporally smoothed alpha matte may then be reconstructed by interpolating pixel alpha values between the inner and outer level set curves of the averaged level set. The alpha mattes can be optionally warped towards a center frame according to an optical flow technique before the averaging operation performed by the temporal matte filter.
Type: Application
Filed: June 29, 2012
Publication date: January 2, 2014
Inventors: Xue Bai, Jue Wang, David P. Simons
-
Publication number: 20140003719
Abstract: Methods and apparatus for adaptive trimap propagation. Methods are described that allow a trimap to be propagated from one frame to the next in a temporally coherent way. A radius-based method propagates automatically computed local trimap radii from frame to frame. A mesh-based method employs pins on the binary segmentation boundary and a mesh generated for the unknown region; the pins are tracked from one frame to the next according to an optical flow technique, the mesh is deformed from one frame to the next according to the movement of the pins, and the adaptive trimap is then warped according to the deformed mesh. These methods can be used separately, or the first method can be used to propagate some regions of the adaptive trimap, and the second method can be used to propagate other regions of the adaptive trimap.
Type: Application
Filed: June 29, 2012
Publication date: January 2, 2014
Inventors: Xue Bai, Jue Wang, David P. Simons
-
Patent number: 8532421
Abstract: A sharp frame and a blurred frame are detected from among a plurality of frames. A blur kernel is estimated. The blur kernel represents a motion-transform between the sharp frame and the blurred frame. Using the blur kernel, a static region measure for the sharp frame and the blurred frame is estimated. A de-blurred frame is generated by replacing one or more pixels of the blurred frame as indicated by the static region measure.
Type: Grant
Filed: November 30, 2010
Date of Patent: September 10, 2013
Assignee: Adobe Systems Incorporated
Inventors: Jue Wang, David P. Simons, Seungyong Lee, Sunghyun Cho
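The final replacement step in the abstract can be sketched directly: where a static-region measure indicates a pixel did not move between the sharp and blurred frames, the blurred pixel is replaced by the sharp one. The mask here is supplied as input; in the patent it is estimated from the blur kernel.

```python
# Simplified sketch of static-region repair: copy sharp-frame pixels into
# the blurred frame wherever the static mask marks the pixel as unmoving.

def repair_blurred(blurred, sharp, static_mask):
    """Replace blurred pixels with sharp ones where the region is static."""
    return [s if m else b for b, s, m in zip(blurred, sharp, static_mask)]

# Pixels 0 and 1 are static and receive the sharp values; pixel 2 moved,
# so it keeps its blurred value.
out = repair_blurred([120, 130, 90], [128, 140, 200], [1, 1, 0])
```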
-
Patent number: 8520975
Abstract: Systems, methods, and computer-readable storage media for chatter reduction in video object segmentation using optical flow assisted gaussholding. An optical flow assisted gaussholding method may be applied to segmentation masks generated for a video sequence. For each frame of at least some frames in a video sequence, for each of one or more other frames prior to and one or more other frames after the current frame, optical flow is computed for the other frame in relation to the current frame and used to warp the contour of the segmentation mask of the other frame to generate a warped segmentation mask for the other frame. The weighted average of the warped segmentation masks and the segmentation mask of the current frame is then computed; this weighted average may be blurred spatially, for example using a Gaussian filter. The initial smoothed mask may be thresholded to produce a binary smoothed mask.
Type: Grant
Filed: August 30, 2010
Date of Patent: August 27, 2013
Assignee: Adobe Systems Incorporated
Inventors: Jue Wang, David P. Simons, Daniel M. Wilk, Xue Bai
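The averaging-and-thresholding step can be sketched on toy one-dimensional masks. The neighbor masks below are assumed to be already warped by optical flow to the current frame, and the spatial Gaussian blur is omitted; weights and threshold are illustrative choices, not the patent's.

```python
# Rough stand-in for the gaussholding combine step: weighted-average the
# (pre-warped) neighbor masks with the current frame's mask per pixel,
# then threshold to a binary smoothed mask.

def gausshold(warped_masks, center_mask, weights, threshold=0.5):
    """Weighted-average the masks per pixel, then binarize."""
    masks = warped_masks + [center_mask]
    out = []
    for px in zip(*masks):
        avg = sum(w * v for w, v in zip(weights, px)) / sum(weights)
        out.append(1 if avg >= threshold else 0)
    return out

prev_mask = [1, 1, 0, 0]
next_mask = [1, 0, 0, 0]
cur_mask  = [1, 1, 1, 0]
# The isolated 1 in the current frame's third pixel disagrees with both
# neighbors, so averaging suppresses it: result is [1, 1, 0, 0].
smoothed = gausshold([prev_mask, next_mask], cur_mask, [1, 1, 1])
```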
-
Publication number: 20130121577
Abstract: Systems, methods, and computer-readable storage media for chatter reduction in video object segmentation using optical flow assisted gaussholding. An optical flow assisted gaussholding method may be applied to segmentation masks generated for a video sequence. For each frame of at least some frames in a video sequence, for each of one or more other frames prior to and one or more other frames after the current frame, optical flow is computed for the other frame in relation to the current frame and used to warp the contour of the segmentation mask of the other frame to generate a warped segmentation mask for the other frame. The weighted average of the warped segmentation masks and the segmentation mask of the current frame is then computed; this weighted average may be blurred spatially, for example using a Gaussian filter. The initial smoothed mask may be thresholded to produce a binary smoothed mask.
Type: Application
Filed: August 30, 2010
Publication date: May 16, 2013
Inventors: Jue Wang, David P. Simons, Daniel M. Wilk, Xue Bai
-
Patent number: 8358691
Abstract: Systems, methods, and computer-readable storage media for chatter reduction in video object segmentation using a variable bandwidth search region. A variable bandwidth search region generation method may be applied to a uniform search region to generate a variable bandwidth search region that reduces the search range for segmentation methods such as a graph cut method. The method may identify parts of the contour that are moving slowly, and reduce the search region bandwidth in those places to stabilize the segmentation. This method may determine a bandwidth for each of a plurality of local windows of an image according to an estimate of how much an object in the image has moved from a previous image. The method may blend the bandwidths for the plurality of local windows to generate a blended map. The method may then generate a variable bandwidth search region for an object according to the blended map.
Type: Grant
Filed: August 30, 2010
Date of Patent: January 22, 2013
Assignee: Adobe Systems Incorporated
Inventors: Jue Wang, David P. Simons, Daniel M. Wilk, Xue Bai