Patents by Inventor David A. Kuspa
David A. Kuspa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9817829
Abstract: A priority for one or more source components can be determined for use in providing metadata for a composite media presentation. For example, an audio component containing or associated with a text transcript may be prioritized based on a gain value, gain differential, and/or frequency range associated with the audio component, with data indicating the priority stored in a computer-readable medium. When transcript or other metadata is provided for the composite media presentation, the priority information can be used to select how (or whether) data or metadata associated with each component will be used in providing transcript metadata for the presentation as a whole.
Type: Grant
Filed: October 28, 2008
Date of Patent: November 14, 2017
Assignee: Adobe Systems Incorporated
Inventor: David Kuspa
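The prioritization described in this abstract can be sketched roughly as follows. All names, the data model, and the scoring rule (favor components that carry a transcript, then rank by gain) are illustrative assumptions, not the patented method itself:

```python
# Hypothetical sketch: rank a composite presentation's audio components
# so transcript metadata is drawn from the most prominent source first.
from dataclasses import dataclass

@dataclass
class AudioComponent:
    name: str
    gain_db: float          # average gain of the component (assumed metric)
    has_transcript: bool    # whether a text transcript is attached

def prioritize(components):
    """Return components sorted from highest to lowest priority."""
    # Components carrying a transcript are favored; ties broken by gain.
    return sorted(components,
                  key=lambda c: (c.has_transcript, c.gain_db),
                  reverse=True)

tracks = [
    AudioComponent("music", -18.0, False),
    AudioComponent("dialogue", -6.0, True),
    AudioComponent("effects", -12.0, False),
]
print([c.name for c in prioritize(tracks)])  # ['dialogue', 'effects', 'music']
```

The priority order then decides whether (and how) each component's metadata contributes to the presentation-level transcript.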
-
Patent number: 9191639
Abstract: Provided in some embodiments is a computer implemented method that includes receiving time-aligned script data including dialogue words of a script and timecodes corresponding to the dialogue words, identifying gaps between dialogue words for the insertion of video description content, wherein the gaps are identified based on the duration of pauses between timecodes of adjacent dialogue words, aligning segments of video description content with corresponding gaps in dialogue, wherein the video description content for the segments is derived from corresponding script elements of the script, and generating a script document including the aligned segments of video description content.
Type: Grant
Filed: May 28, 2010
Date of Patent: November 17, 2015
Assignee: Adobe Systems Incorporated
Inventor: David A. Kuspa
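The gap-identification step can be sketched as below; the 1.5-second minimum pause and the (word, start, end) tuple layout are assumed parameters, not values from the patent:

```python
# Illustrative sketch: given timecoded dialogue words, find pauses long
# enough to hold a segment of video description content.
def find_gaps(words, min_pause=1.5):
    """Return (gap_start, gap_end) pairs between adjacent timecoded words."""
    gaps = []
    for (_, _, end), (_, nxt_start, _) in zip(words, words[1:]):
        if nxt_start - end >= min_pause:
            gaps.append((end, nxt_start))
    return gaps

# Times in seconds: a long pause separates the two exchanges.
dialogue = [("Hello", 0.0, 0.4), ("there.", 0.5, 0.9),
            ("Anyway,", 3.2, 3.6), ("goodbye.", 3.7, 4.1)]
print(find_gaps(dialogue))  # [(0.9, 3.2)]
```

Each returned gap is then a candidate slot for a description segment derived from the script's non-dialogue elements.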
-
Patent number: 9066049
Abstract: Provided in some embodiments is a computer implemented method that includes providing script data including script words indicative of dialogue words to be spoken, providing recorded dialogue audio data corresponding to at least a portion of the dialogue words to be spoken, wherein the recorded dialogue audio data includes timecodes associated with recorded audio dialogue words, matching at least some of the script words to corresponding recorded audio dialogue words to determine alignment points, determining that a set of unmatched script words are accurate based on the matching of at least some of the script words matched to corresponding recorded audio dialogue words, and generating time-aligned script data including the script words and their corresponding timecodes and the set of unmatched script words determined to be accurate based on the matching.
Type: Grant
Filed: May 28, 2010
Date of Patent: June 23, 2015
Assignee: Adobe Systems Incorporated
Inventors: Jerry R. Scoggins, II, Walter W. Chang, David A. Kuspa
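The core matching step can be sketched as follows. The index-by-index comparison is a deliberate simplification (real alignment must tolerate insertions and deletions), and all names are hypothetical:

```python
# Hedged sketch: compare script words against speech-recognized dialogue
# words; exact matches become alignment points.
def alignment_points(script_words, audio_words):
    """Return indices where the script and recognized words agree."""
    points = []
    for i, (s, a) in enumerate(zip(script_words, audio_words)):
        if s.lower() == a.lower():
            points.append(i)
    return points

script = ["we", "ride", "at", "dawn"]
recognized = ["we", "rise", "at", "dawn"]   # ASR mishears "ride"
print(alignment_points(script, recognized))  # [0, 2, 3]
```

In the spirit of the abstract, the unmatched word "ride" sits between matched neighbors at indices 0 and 2, so it can still be deemed accurate and carried into the time-aligned script.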
-
Publication number: 20140250056
Abstract: A priority for one or more source components can be determined for use in providing metadata for a composite media presentation. For example, an audio component containing or associated with a text transcript may be prioritized based on a gain value, gain differential, and/or frequency range associated with the audio component, with data indicating the priority stored in a computer-readable medium. When transcript or other metadata is provided for the composite media presentation, the priority information can be used to select how (or whether) data or metadata associated with each component will be used in providing transcript metadata for the presentation as a whole.
Type: Application
Filed: October 28, 2008
Publication date: September 4, 2014
Applicant: Adobe Systems Incorporated
Inventor: David Kuspa
-
Publication number: 20140250055
Abstract: Certain embodiments described herein provide methods and systems that use metadata placeholders to facilitate the association of metadata with recorded media content. Metadata placeholders, for example, may be created prior to recording content and then used at the time of the recording and editing of the actual content. Metadata placeholders can be used to make useful information, including a director's shot plan and other shot attribute information, available on-location to be used and edited by those present at recording and to facilitate the association of the information with the actual recorded content. One exemplary method involves creating a metadata placeholder for a shot, including information about the shot in the metadata fields of the metadata placeholder, and then storing the placeholder's metadata with the content that is recorded for the shot.
Type: Application
Filed: July 7, 2008
Publication date: September 4, 2014
Inventors: David Kuspa, Mark Mapes, Benoit Ambry
-
Patent number: 8825488
Abstract: A method includes receiving script data including script words for dialogue, receiving audio data corresponding to at least a portion of the dialogue, wherein the audio data includes timecodes associated with dialogue words, generating a sequential alignment of the script words to the dialogue words, matching at least some of the script words to corresponding dialogue words to determine hard alignment points, partitioning the sequential alignment of script words into alignment sub-sets, wherein the bounds of the alignment sub-sets are defined by adjacent hard alignment points, and wherein each alignment sub-set includes a sub-set of the script words and a corresponding sub-set of dialogue words that occur between the hard alignment points, determining corresponding timecodes for a sub-set of script words in a sub-set based on the timecodes associated with the sub-set of dialogue words, and generating time-aligned script data including the sub-set of script words and their corresponding timecodes.
Type: Grant
Filed: May 28, 2010
Date of Patent: September 2, 2014
Assignee: Adobe Systems Incorporated
Inventors: Jerry R. Scoggins, II, Walter W. Chang, David A. Kuspa, Charles E. Van Winkle, Simon R. Hayhurst
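The idea of deriving timecodes for words between hard alignment points can be sketched with linear interpolation. The interpolation rule and data shapes are assumptions for illustration; the patent itself does not specify this exact scheme:

```python
# Sketch: words matched to audio carry known timecodes (hard alignment
# points); words between two matched neighbors get interpolated ones.
def interpolate_timecodes(words, known):
    """known maps word index -> timecode; fill the rest by interpolation."""
    anchors = sorted(known)
    times = dict(known)
    for lo, hi in zip(anchors, anchors[1:]):
        step = (known[hi] - known[lo]) / (hi - lo)
        for i in range(lo + 1, hi):
            times[i] = known[lo] + step * (i - lo)
    return [times.get(i) for i in range(len(words))]

words = ["to", "be", "or", "not", "to", "be"]
# Hard alignment points at indices 0, 3, and 5:
print(interpolate_timecodes(words, {0: 0.0, 3: 1.5, 5: 2.5}))
# [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
```

Each pair of adjacent hard alignment points bounds one alignment sub-set, and only the words inside that sub-set are timed from its endpoints.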
-
Publication number: 20130124212
Abstract: A method includes receiving script data including script words for dialogue, receiving audio data corresponding to at least a portion of the dialogue, wherein the audio data includes timecodes associated with dialogue words, generating a sequential alignment of the script words to the dialogue words, matching at least some of the script words to corresponding dialogue words to determine hard alignment points, partitioning the sequential alignment of script words into alignment sub-sets, wherein the bounds of the alignment sub-sets are defined by adjacent hard alignment points, and wherein each alignment sub-set includes a sub-set of the script words and a corresponding sub-set of dialogue words that occur between the hard alignment points, determining corresponding timecodes for a sub-set of script words in a sub-set based on the timecodes associated with the sub-set of dialogue words, and generating time-aligned script data including the sub-set of script words and their corresponding timecodes.
Type: Application
Filed: May 28, 2010
Publication date: May 16, 2013
Inventors: Jerry R. Scoggins, II, Walter W. Chang, David A. Kuspa, Charles E. Van Winkle, Simon R. Hayhurst
-
Publication number: 20130120654
Abstract: Provided in some embodiments is a computer implemented method that includes receiving time-aligned script data including dialogue words of a script and timecodes corresponding to the dialogue words, identifying gaps between dialogue words for the insertion of video description content, wherein the gaps are identified based on the duration of pauses between timecodes of adjacent dialogue words, aligning segments of video description content with corresponding gaps in dialogue, wherein the video description content for the segments is derived from corresponding script elements of the script, and generating a script document including the aligned segments of video description content.
Type: Application
Filed: May 28, 2010
Publication date: May 16, 2013
Inventor: David A. Kuspa
-
Publication number: 20130124203
Abstract: Provided in some embodiments is a computer implemented method that includes providing script data including script words indicative of dialogue words to be spoken, providing recorded dialogue audio data corresponding to at least a portion of the dialogue words to be spoken, wherein the recorded dialogue audio data includes timecodes associated with recorded audio dialogue words, matching at least some of the script words to corresponding recorded audio dialogue words to determine alignment points, determining that a set of unmatched script words are accurate based on the matching of at least some of the script words matched to corresponding recorded audio dialogue words, and generating time-aligned script data including the script words and their corresponding timecodes and the set of unmatched script words determined to be accurate based on the matching.
Type: Application
Filed: May 28, 2010
Publication date: May 16, 2013
Inventors: Jerry R. Scoggins, II, Walter W. Chang, David A. Kuspa
-
Publication number: 20130124984
Abstract: A method includes receiving script metadata extracted from a script for a program, wherein the script metadata includes clip metadata associated with a particular portion of the program, associating the clip metadata with a clip corresponding to the particular portion of the program, receiving a request to revise the clip metadata, revising the clip metadata in accordance with the request to generate revised clip metadata associated with the clip, and generating a revised script using the revised clip metadata.
Type: Application
Filed: May 28, 2010
Publication date: May 16, 2013
Inventor: David A. Kuspa
-
Patent number: 8295687
Abstract: The present disclosure includes systems and techniques relating to indicating different video playback rates. In general, one aspect of the subject matter described in this specification can be embodied in a method that includes providing a user interface for a digital video editing system, the user interface including a graphical representation of playback time for a sequence of digital video; receiving input specifying a change in playback rate for the sequence of digital video; and showing the change in playback rate, the showing including providing marks along the graphical representation of playback time for the sequence of digital video; wherein the marks include different shapes to represent at least two different playback rates.
Type: Grant
Filed: April 16, 2007
Date of Patent: October 23, 2012
Assignee: Adobe Systems Incorporated
Inventor: David Kuspa
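The shape-per-rate idea can be sketched with a trivial mapping. The particular glyph assignments below are invented for illustration and are not taken from the patent:

```python
# Hypothetical sketch: distinct timeline mark shapes indicate distinct
# playback rates along the graphical representation of playback time.
def mark_for_rate(rate):
    """Pick a timeline mark glyph for a playback rate (1.0 = normal)."""
    if rate > 1.0:
        return ">"   # fast motion
    if rate < 1.0:
        return "<"   # slow motion
    return "|"       # normal speed

# Three consecutive segments: half speed, normal, double speed.
segments = [0.5, 1.0, 2.0]
print("".join(mark_for_rate(r) for r in segments))  # <|>
```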
-
Patent number: 8170396
Abstract: The present disclosure includes systems and techniques relating to changing video playback rate. In general, one aspect of the subject matter described in this specification can be embodied in a method that includes providing a user interface for a digital video editing system, the user interface including a graphical representation of playback rate and playback duration for a sequence of digital video, and the user interface including defined points that reference respective frames in the sequence of digital video; receiving input specifying a change in playback rate for the sequence of digital video; and showing the change in playback rate and a corresponding change in playback duration for the sequence of digital video, the showing including moving one of the defined points in accordance with a new temporal position of a corresponding frame referenced by the one of the defined points.
Type: Grant
Filed: April 16, 2007
Date of Patent: May 1, 2012
Assignee: Adobe Systems Incorporated
Inventors: David Kuspa, Matthew Davey, Steven Warner, Paul E. Young
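The arithmetic behind moving a defined point when the rate changes can be sketched in one line. The function name and units are illustrative assumptions:

```python
# Minimal sketch: when playback rate changes, a frame referenced by a
# defined point lands at a new temporal position on the timeline.
def new_position(frame_index, fps, rate):
    """Seconds at which frame_index plays back at the given rate."""
    return frame_index / fps / rate

# A point pinned to frame 120 of 24 fps footage:
print(new_position(120, 24, 1.0))  # 5.0 s at normal speed
print(new_position(120, 24, 2.0))  # 2.5 s at double speed
```

Showing the point slide from 5.0 s to 2.5 s conveys both the rate change and the corresponding change in playback duration.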
-
Patent number: 7823056
Abstract: Techniques and systems for recording video and audio in timeline sequences. In some embodiments, a method involves playing video from a timeline sequence that can include footage from portions of video tracks including at least one pre-recorded edit. In response to a selection of footage from the timeline sequence, the method can involve overwriting at least one pre-recorded edit in the timeline sequence. Each video track can correspond to a respective camera in a multiple-camera source. The footage can include one or more video clips, still images, frames, and moving images. The overwriting can occur while playing the video at a play rate faster than realtime, a play rate slower than realtime, a user-selected playing rate, during video scrubbing, or during realtime playback. The recording may involve a jump back input, and punch in and punch out locations associated with the timeline sequence.
Type: Grant
Filed: May 4, 2006
Date of Patent: October 26, 2010
Assignee: Adobe Systems Incorporated
Inventors: Matthew Davey, David Kuspa, Steven Warner, Michael Gregory Jennings
-
Patent number: 7623755
Abstract: Techniques and systems for positioning video and audio clips in timeline sequences. In some embodiments, a computer program product, encoded on a computer-readable medium, is operable to cause data processing apparatus to perform operations that include, in response to selection of a first clip in a first track, moving the selected first clip in a timeline sequence. The timeline sequence includes multiple matched audio and video tracks configured to serve as containers for clips, and the first clip is linked with a second clip. In response to selection of the second clip, the operations include moving the selected second clip into a track that is a non-matching track in the timeline sequence, in which the non-matching track includes a track that is not associated with a track where the first clip is located. The drag-and-drop techniques can allow independent placement of linked audio and video clips into non-matching tracks.
Type: Grant
Filed: August 17, 2006
Date of Patent: November 24, 2009
Assignee: Adobe Systems Incorporated
Inventor: David Kuspa
-
Publication number: 20080253735
Abstract: The present disclosure includes systems and techniques relating to changing video playback rate. In general, one aspect of the subject matter described in this specification can be embodied in a method that includes providing a user interface for a digital video editing system, the user interface including a graphical representation of playback rate and playback duration for a sequence of digital video, and the user interface including defined points that reference respective frames in the sequence of digital video; receiving input specifying a change in playback rate for the sequence of digital video; and showing the change in playback rate and a corresponding change in playback duration for the sequence of digital video, the showing including moving one of the defined points in accordance with a new temporal position of a corresponding frame referenced by the one of the defined points.
Type: Application
Filed: April 16, 2007
Publication date: October 16, 2008
Inventors: David Kuspa, Matthew Davey, Steven Warner, Paul E. Young
-
Publication number: 20080044155
Abstract: Techniques and systems for positioning video and audio clips in timeline sequences. In some embodiments, a computer program product, encoded on a computer-readable medium, is operable to cause data processing apparatus to perform operations that include, in response to selection of a first clip in a first track, moving the selected first clip in a timeline sequence. The timeline sequence includes multiple matched audio and video tracks configured to serve as containers for clips, and the first clip is linked with a second clip. In response to selection of the second clip, the operations include moving the selected second clip into a track that is a non-matching track in the timeline sequence, in which the non-matching track includes a track that is not associated with a track where the first clip is located. The drag-and-drop techniques can allow independent placement of linked audio and video clips into non-matching tracks.
Type: Application
Filed: August 17, 2006
Publication date: February 21, 2008
Inventor: David Kuspa