Patents by Inventor Lubomira Assenova Dontcheva
Lubomira Assenova Dontcheva has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12125501
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for face-aware speaker diarization. In an example embodiment, an audio-only speaker diarization technique is applied to generate an audio-only speaker diarization of a video, an audio-visual speaker diarization technique is applied to generate a face-aware speaker diarization of the video, and the audio-only speaker diarization is refined using the face-aware speaker diarization to generate a hybrid speaker diarization that links detected faces to detected voices. In some embodiments, to accommodate videos with small faces that appear pixelated, a cropped image of any given face is extracted from each frame of the video, and the size of the cropped image is used to select a corresponding active speaker detection model to predict an active speaker score for the face in the cropped image.
Type: Grant
Filed: October 17, 2022
Date of Patent: October 22, 2024
Assignee: Adobe Inc.
Inventors: Fabian David Caba Heilbron, Xue Bai, Aseem Omprakash Agarwala, Haoran Cai, Lubomira Assenova Dontcheva
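As a rough illustration of the crop-size-based model selection the abstract describes, here is a minimal Python sketch; the 64-pixel threshold and the model identifiers are illustrative assumptions, not details from the patent.

```python
from dataclasses import dataclass

@dataclass
class FaceCrop:
    frame_index: int
    width: int   # crop width in pixels
    height: int  # crop height in pixels

def select_asd_model(crop: FaceCrop, small_face_threshold: int = 64) -> str:
    """Route small (pixelated) crops to an active-speaker-detection model
    suited to low-resolution faces; larger crops go to a standard model.
    Threshold and model ids are hypothetical."""
    if min(crop.width, crop.height) < small_face_threshold:
        return "asd_model_low_res"   # hypothetical model id
    return "asd_model_standard"      # hypothetical model id
```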
-
Patent number: 12119028
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for identifying candidate boundaries for video segments, video segment selection using those boundaries, and text-based video editing of video segments selected via transcript interactions. In an example implementation, boundaries of detected sentences and words are extracted from a transcript, the boundaries are retimed into an adjacent speech gap to a location where voice or audio activity is a minimum, and the resulting boundaries are stored as candidate boundaries for video segments. As such, a transcript interface presents the transcript, interprets input selecting transcript text as an instruction to select a video segment with corresponding boundaries selected from the candidate boundaries, and interprets commands that are traditionally thought of as text-based operations (e.g., cut, copy, paste) as an instruction to perform a corresponding video editing operation using the selected video segment.
Type: Grant
Filed: October 17, 2022
Date of Patent: October 15, 2024
Assignee: Adobe Inc.
Inventors: Xue Bai, Justin Jonathan Salamon, Aseem Omprakash Agarwala, Hijung Shin, Haoran Cai, Joel Richard Brandt, Lubomira Assenova Dontcheva, Cristin Ailidh Fraser
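The retiming step the abstract describes can be sketched in a few lines of Python. This is a minimal illustration assuming voice activity is given as one score per 10 ms frame; the frame size and function shape are assumptions, not the patent's implementation.

```python
from typing import Sequence

def retime_boundary(boundary_s: float, gap_start_s: float, gap_end_s: float,
                    voice_activity: Sequence[float],
                    frame_s: float = 0.01) -> float:
    """Move a sentence/word boundary into the adjacent speech gap, to the
    time where voice activity is lowest; fall back to the original boundary
    if the gap contains no frames."""
    lo = int(gap_start_s / frame_s)
    hi = int(gap_end_s / frame_s)
    if hi <= lo:
        return boundary_s
    window = voice_activity[lo:hi]
    best = min(range(len(window)), key=window.__getitem__)
    return (lo + best) * frame_s
```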
-
Publication number: 20240244287
Abstract: Embodiments of the present disclosure provide a method, a system, and computer storage media that provide mechanisms for multimedia effect addition and editing support for text-based video editing tools. The method includes generating a user interface (UI) displaying a transcript of an audio track of a video and receiving, via the UI, input identifying selection of a text segment from the transcript. The method also includes receiving, via the UI, input identifying selection of a particular type of text stylization or layout for application to the text segment. The method further includes identifying a video effect corresponding to the particular type of text stylization or layout, applying the video effect to a video segment corresponding to the text segment, and applying the particular type of text stylization or layout to the text segment to visually represent the video effect in the transcript.
Type: Application
Filed: January 13, 2023
Publication date: July 18, 2024
Inventors: Kim Pascal Pimmel, Stephen Joseph DiVerdi, Jiaju Ma, Rubaiat Habib, Li-Yi Wei, Hijung Shin, Deepali Aneja, John G. Nelson, Wilmot Li, Dingzeyu Li, Lubomira Assenova Dontcheva, Joel Richard Brandt
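The core idea, mapping a text stylization in the transcript to a video effect on the matching segment, can be sketched as a simple lookup. The stylization/effect pairs and data shapes below are hypothetical; the abstract does not specify them.

```python
from dataclasses import dataclass

@dataclass
class VideoEffect:
    name: str
    start_s: float
    end_s: float

# Hypothetical stylization -> effect pairs; illustrative only.
STYLE_TO_EFFECT = {
    "bold": "zoom_in",
    "italic": "slow_motion",
    "header": "title_card",
}

def apply_style(style: str, seg_start_s: float, seg_end_s: float) -> VideoEffect:
    """Resolve the effect for a stylization and bind it to the video segment
    whose boundaries correspond to the selected text segment."""
    return VideoEffect(STYLE_TO_EFFECT[style], seg_start_s, seg_end_s)
```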
-
Publication number: 20240233769
Abstract: Embodiments of the present disclosure provide systems, methods, and computer storage media for visualizations and mechanisms used when performing video edits with wrapped timelines (e.g., effect bars/effect tracks) interspersed between text lines to represent video effects applied to text segments in a transcript. An example embodiment generates a transcript from the audio track of a video. A transcript interface presents the transcript and accepts input selecting sentences or words from the transcript. The boundaries corresponding to the selected text segment are used as boundaries for a selected video segment. The user then selects a video effect to apply to the corresponding video segment, and a wrapped timeline is placed in the transcript along the selected text segment to indicate that the video effect is applied to the corresponding video segment.
Type: Application
Filed: January 10, 2023
Publication date: July 11, 2024
Inventors: David Tamas Kutas, Lubomira Assenova Dontcheva, Kim Pascal Pimmel, Hijung Shin
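A minimal sketch of the data an effect bar (wrapped timeline) would need, plus the per-line overlap query a renderer could use to wrap the bar across transcript lines. The field names and schema are assumptions; the abstract gives no data model.

```python
from dataclasses import dataclass

@dataclass
class EffectBar:
    effect: str           # e.g. "zoom_in" (hypothetical effect name)
    char_start: int       # start of the selected text span in the transcript
    char_end: int         # end of the selected text span
    video_start_s: float  # segment boundaries derived from the selected words
    video_end_s: float

def bars_for_line(bars: list[EffectBar],
                  line_start: int, line_end: int) -> list[EffectBar]:
    """Return the bars overlapping one rendered transcript line, so the UI
    can draw a wrapped timeline segment under that line."""
    return [b for b in bars if b.char_start < line_end and b.char_end > line_start]
```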
-
Publication number: 20240134597
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for a question search that surfaces meaningful questions appearing in a video. In an example embodiment, an audio track from a video is transcribed, and the transcript is parsed to identify sentences that end with a question mark. Depending on the embodiment, one or more types of questions are filtered out, such as short questions less than a designated length or duration, logistical questions, and/or rhetorical questions. As such, in response to a command to perform a question search, the questions are identified, and search result tiles representing video segments of the questions are presented. Selecting (e.g., clicking or tapping on) a search result tile navigates a transcript interface to a corresponding portion of the transcript.
Type: Application
Filed: October 17, 2022
Publication date: April 25, 2024
Inventors: Lubomira Assenova Dontcheva, Anh Lan Truong, Hanieh Deilamsalehy, Kim Pascal Pimmel, Aseem Omprakash Agarwala, Dingzeyu Li, Joel Richard Brandt, Joy Oakyung Kim
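The parse-and-filter pipeline can be illustrated with a short Python sketch. The five-word minimum and the logistical-phrase list are placeholder assumptions; the abstract names the filter types but not their exact criteria.

```python
import re

# Hypothetical phrases for the "logistical question" filter.
LOGISTICAL = ("can you hear me", "are we recording", "is my mic on")

def find_questions(transcript: str, min_words: int = 5) -> list[str]:
    """Keep sentences ending in '?', dropping short and logistical ones."""
    sentences = re.split(r"(?<=[.?!])\s+", transcript)
    questions = []
    for s in sentences:
        s = s.strip()
        if not s.endswith("?"):
            continue
        if len(s.split()) < min_words:               # filter short questions
            continue
        if any(p in s.lower() for p in LOGISTICAL):  # filter logistical ones
            continue
        questions.append(s)
    return questions
```

Rhetorical-question filtering would plausibly need a learned classifier rather than a rule, so it is omitted from this sketch.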
-
Publication number: 20240134909
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for a visual and text search interface used to navigate a video transcript. In an example embodiment, a freeform text query triggers a visual search for frames of a loaded video that match the freeform text query (e.g., frame embeddings that match a corresponding embedding of the freeform query), and triggers a text search for matching words from a corresponding transcript or from tags of detected features from the loaded video. Visual search results are displayed (e.g., in a row of tiles that can be scrolled to the left and right), and textual search results are displayed (e.g., in a row of tiles that can be scrolled up and down). Selecting (e.g., clicking or tapping on) a search result tile navigates a transcript interface to a corresponding portion of the transcript.
Type: Application
Filed: October 17, 2022
Publication date: April 25, 2024
Inventors: Lubomira Assenova Dontcheva, Dingzeyu Li, Kim Pascal Pimmel, Hijung Shin, Hanieh Deilamsalehy, Aseem Omprakash Agarwala, Joy Oakyung Kim, Joel Richard Brandt, Cristin Ailidh Fraser
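The visual half of the search reduces to ranking frame embeddings against a text-query embedding. A minimal sketch using cosine similarity follows; the embeddings are assumed to come from a joint text-image model (e.g., a CLIP-style encoder), which the abstract implies but does not name.

```python
import numpy as np

def visual_search(query_emb: np.ndarray, frame_embs: np.ndarray,
                  top_k: int = 10) -> np.ndarray:
    """Rank frames by cosine similarity to the query embedding.
    frame_embs: (n_frames, dim); query_emb: (dim,).
    Returns indices of the top_k best-matching frames."""
    q = query_emb / np.linalg.norm(query_emb)
    f = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    scores = f @ q
    return np.argsort(-scores)[:top_k]
```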
-
Publication number: 20240135973
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for identifying candidate boundaries for video segments, video segment selection using those boundaries, and text-based video editing of video segments selected via transcript interactions. In an example implementation, boundaries of detected sentences and words are extracted from a transcript, the boundaries are retimed into an adjacent speech gap to a location where voice or audio activity is a minimum, and the resulting boundaries are stored as candidate boundaries for video segments. As such, a transcript interface presents the transcript, interprets input selecting transcript text as an instruction to select a video segment with corresponding boundaries selected from the candidate boundaries, and interprets commands that are traditionally thought of as text-based operations (e.g., cut, copy, paste) as an instruction to perform a corresponding video editing operation using the selected video segment.
Type: Application
Filed: October 17, 2022
Publication date: April 25, 2024
Inventors: Xue Bai, Justin Jonathan Salamon, Aseem Omprakash Agarwala, Hijung Shin, Haoran Cai, Joel Richard Brandt, Lubomira Assenova Dontcheva, Cristin Ailidh Fraser
-
Publication number: 20240127855
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for selection of the best image of a particular speaker's face in a video, and visualization in a diarized transcript. In an example embodiment, candidate images of a face of a detected speaker are extracted from frames of a video identified by a detected face track for the face, and a representative image of the detected speaker's face is selected from the candidate images based on image quality, facial emotion (e.g., using an emotion classifier that generates a happiness score), a size factor (e.g., favoring larger images), and/or penalizing images that appear towards the beginning or end of a face track. As such, each segment of the transcript is presented with the representative image of the speaker who spoke that segment and/or input is accepted changing the representative image associated with each speaker.
Type: Application
Filed: October 17, 2022
Publication date: April 18, 2024
Inventors: Lubomira Assenova Dontcheva, Xue Bai, Aseem Omprakash Agarwala, Joel Richard Brandt
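The selection criteria combine naturally into a weighted score. Below is a minimal sketch; the weights, the 0.5 edge penalty, and the 10% margin are illustrative assumptions, since the abstract lists the factors but not how they are combined.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    quality: float    # 0..1, e.g. sharpness/exposure score
    happiness: float  # 0..1, from an emotion classifier
    area: float       # face-crop area in pixels
    position: float   # 0..1, relative position within the face track

def score(c: Candidate, max_area: float, edge_margin: float = 0.1) -> float:
    """Weighted combination of quality, emotion, and size, with a penalty
    for images near the beginning or end of the face track."""
    s = 0.4 * c.quality + 0.3 * c.happiness + 0.3 * (c.area / max_area)
    if c.position < edge_margin or c.position > 1 - edge_margin:
        s *= 0.5
    return s

def pick_representative(cands: list[Candidate]) -> Candidate:
    max_area = max(c.area for c in cands)
    return max(cands, key=lambda c: score(c, max_area))
```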
-
Publication number: 20240127857
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for face-aware speaker diarization. In an example embodiment, an audio-only speaker diarization technique is applied to generate an audio-only speaker diarization of a video, an audio-visual speaker diarization technique is applied to generate a face-aware speaker diarization of the video, and the audio-only speaker diarization is refined using the face-aware speaker diarization to generate a hybrid speaker diarization that links detected faces to detected voices. In some embodiments, to accommodate videos with small faces that appear pixelated, a cropped image of any given face is extracted from each frame of the video, and the size of the cropped image is used to select a corresponding active speaker detection model to predict an active speaker score for the face in the cropped image.
Type: Application
Filed: October 17, 2022
Publication date: April 18, 2024
Inventors: Fabian David Caba Heilbron, Xue Bai, Aseem Omprakash Agarwala, Haoran Cai, Lubomira Assenova Dontcheva
-
Publication number: 20240127858
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for annotating transcript text with video metadata, and including thumbnail bars in the transcript to help users select a desired portion of a video through transcript interactions. In an example embodiment, a video editing interface includes a transcript interface that presents a transcript with transcript text that is annotated to indicate corresponding portions of the video where various features were detected (e.g., annotating via text stylization of transcript text and/or labeling the transcript text with a textual representation of a corresponding detected feature class). In some embodiments, the transcript interface displays a visual representation of detected non-speech audio or pauses (e.g., a sound bar) and/or video thumbnails corresponding to each line of transcript text (e.g., a thumbnail bar).
Type: Application
Filed: October 17, 2022
Publication date: April 18, 2024
Inventors: Lubomira Assenova Dontcheva, Hijung Shin, Joel Richard Brandt, Joy Oakyung Kim
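One plausible way to build a thumbnail bar is to grab one frame per rendered transcript line, e.g. at the midpoint of that line's time span. The midpoint heuristic and data shape below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class TranscriptLine:
    text: str
    start_s: float
    end_s: float

def thumbnail_times(lines: list[TranscriptLine]) -> list[float]:
    """One representative timestamp per transcript line; feed these to any
    frame grabber (e.g., ffmpeg -ss <t>) to render a thumbnail bar."""
    return [(ln.start_s + ln.end_s) / 2.0 for ln in lines]
```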
-
Publication number: 20240126994
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for segmenting a transcript into paragraphs. In an example embodiment, a transcript is segmented to start a new paragraph whenever there is a change in speaker and/or a long pause in speech. If any remaining paragraphs are longer than a designated length or duration (e.g., 50 or 100 words), each of those paragraphs is segmented using dynamic programming to minimize a cost function that penalizes candidate paragraphs based on divergence from a target paragraph length and/or that rewards candidate paragraphs that group semantically similar sentences. As such, the transcript is visualized, segmented at the identified paragraph boundaries.
Type: Application
Filed: October 17, 2022
Publication date: April 18, 2024
Inventors: Hanieh Deilamsalehy, Aseem Omprakash Agarwala, Haoran Cai, Hijung Shin, Joel Richard Brandt, Lubomira Assenova Dontcheva
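The dynamic program the abstract describes can be sketched directly: choose paragraph cuts over a sequence of sentences to minimize total cost. The 75-word target, the weights alpha and beta, and the pairwise-similarity input are assumptions; the abstract specifies the cost terms but not their exact form.

```python
import numpy as np

def segment(sentence_lens: list[float], sim: np.ndarray,
            target: float = 75.0, alpha: float = 1.0, beta: float = 1.0):
    """sentence_lens[i]: word count of sentence i.
    sim[i][j]: semantic similarity of sentences i and j (0..1).
    Returns sorted cut indices; a cut at k starts a new paragraph there."""
    n = len(sentence_lens)
    prefix = np.concatenate([[0.0], np.cumsum(sentence_lens)])

    def cost(i: int, j: int) -> float:  # candidate paragraph = sentences [i, j)
        length_pen = alpha * abs(prefix[j] - prefix[i] - target) / target
        pairs = [(a, b) for a in range(i, j) for b in range(a + 1, j)]
        coherence = np.mean([sim[a][b] for a, b in pairs]) if pairs else 1.0
        return length_pen - beta * coherence  # reward similar sentences

    best = [0.0] + [float("inf")] * n   # best[j]: min cost to segment [0, j)
    back = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + cost(i, j)
            if c < best[j]:
                best[j], back[j] = c, i
    cuts, j = [], n                     # recover cuts by backtracking
    while j > 0:
        cuts.append(back[j])
        j = back[j]
    return sorted(cuts)
```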
-
Publication number: 20240127820
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for music-aware speaker diarization. In an example embodiment, one or more audio classifiers detect speech and music independently of each other, which facilitates detecting regions in an audio track that contain music but do not contain speech. These music-only regions are compared to the transcript, and any transcription and speakers that overlap in time with the music-only regions are removed from the transcript. In some embodiments, rather than having the transcript display the text from this detected music, a visual representation of the audio waveform is included in the corresponding regions of the transcript.
Type: Application
Filed: October 17, 2022
Publication date: April 18, 2024
Inventors: Justin Jonathan Salamon, Fabian David Caba Heilbron, Xue Bai, Aseem Omprakash Agarwala, Hijung Shin, Lubomira Assenova Dontcheva
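The interval logic can be sketched compactly: find music regions with no speech overlap, then drop transcript segments that fall inside them. The (start, end)-tuple representation is an assumption; a fuller version would also subtract partial speech overlaps rather than discard whole music intervals.

```python
def _overlaps(a: tuple, b: tuple) -> bool:
    """True if intervals (start_s, end_s, ...) a and b overlap in time."""
    return a[0] < b[1] and b[0] < a[1]

def music_only(music: list[tuple], speech: list[tuple]) -> list[tuple]:
    """Keep music intervals overlapping no speech interval (simplified)."""
    return [m for m in music if not any(_overlaps(m, s) for s in speech)]

def scrub_transcript(segments: list[tuple], regions: list[tuple]) -> list[tuple]:
    """segments: (start_s, end_s, text, speaker) tuples. Remove any segment
    that overlaps a music-only region, as the abstract describes."""
    return [seg for seg in segments
            if not any(_overlaps(seg, m) for m in regions)]
```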
-
Publication number: 20200051302
Abstract: Techniques described herein relate to a streamlined animation production workflow that integrates script drafting, performance, and editing. A script including animation events is parsed to encode the animation events into nodes of a story model. The animation events are automatically triggered by a performance as a playhead advances through the story model and identifies active node(s). A command interface accepts various commands that allow a performer to act as a director by controlling recording and playback. Recording binds a generated animation event to each active node. Playback triggers generated animation events for active nodes. An animated movie is assembled from the generated animation events in the story model. The animated movie can be presented as a live preview to provide feedback to the performer, and a teleprompter interface can guide a performer by presenting and advancing the script to follow the performance.
Type: Application
Filed: August 7, 2018
Publication date: February 13, 2020
Inventors: Hariharan Subramonyam, Eytan Adar, Lubomira Assenova Dontcheva, Wilmot Wei-Mau Li
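A minimal sketch of the story model and playhead the abstract describes: script lines become nodes carrying animation events, and advancing the playhead triggers the active node's events. The node fields and the print-based trigger are placeholders, not the patent's engine.

```python
from dataclasses import dataclass, field

@dataclass
class StoryNode:
    line: str                                        # script text for this beat
    events: list[str] = field(default_factory=list)  # e.g. ["enter:cat"] (hypothetical)
    recording: object = None                         # animation event bound during recording

class Playhead:
    def __init__(self, nodes: list[StoryNode]):
        self.nodes, self.pos = nodes, 0

    def active(self) -> StoryNode:
        return self.nodes[self.pos]

    def advance(self) -> None:
        """Move to the next node and trigger its animation events."""
        self.pos = min(self.pos + 1, len(self.nodes) - 1)
        for event in self.active().events:
            print(f"trigger {event}")  # stand-in for the animation engine
```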
-
Patent number: 10546409
Abstract: Techniques described herein relate to a streamlined animation production workflow that integrates script drafting, performance, and editing. A script including animation events is parsed to encode the animation events into nodes of a story model. The animation events are automatically triggered by a performance as a playhead advances through the story model and identifies active node(s). A command interface accepts various commands that allow a performer to act as a director by controlling recording and playback. Recording binds a generated animation event to each active node. Playback triggers generated animation events for active nodes. An animated movie is assembled from the generated animation events in the story model. The animated movie can be presented as a live preview to provide feedback to the performer, and a teleprompter interface can guide a performer by presenting and advancing the script to follow the performance.
Type: Grant
Filed: August 7, 2018
Date of Patent: January 28, 2020
Assignee: Adobe Inc.
Inventors: Hariharan Subramonyam, Eytan Adar, Lubomira Assenova Dontcheva, Wilmot Wei-Mau Li
-
Patent number: 10360473
Abstract: User interface creation from screenshots is described. Initially, a user captures a screenshot of an existing graphical user interface (GUI). In one or more implementations, the screenshot is processed to generate different types of templates that are modifiable by users to create new GUIs. These different types of templates can include a snapping template, a wireframe template, and a stylized template. The described templates may aid GUI development in different ways depending on the type selected. To generate a template, the screenshot serving as the basis for the template is segmented into groups of pixels corresponding to components of the existing GUI. A type of component is identified for each group of pixels and locations in the screenshot are determined. Based on the identified types of GUI components and determined locations, the user-modifiable template for creating a new GUI is generated.
Type: Grant
Filed: May 30, 2017
Date of Patent: July 23, 2019
Assignee: Adobe Inc.
Inventors: Morgan Emory Dixon, Lubomira Assenova Dontcheva, Joel Richard Brandt, Amanda Marie Swearngin
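The segmentation step, grouping screenshot pixels into candidate GUI components, can be approximated with connected-component labeling. This is a sketch under assumed conditions (grayscale input, near-white background); the patent's actual segmentation method is not specified in the abstract.

```python
import numpy as np
from scipy import ndimage

def component_boxes(screenshot: np.ndarray, bg_threshold: int = 245):
    """screenshot: (H, W) grayscale array. Returns bounding boxes
    (top, left, bottom, right) for each connected group of non-background
    pixels, each a candidate GUI component."""
    foreground = screenshot < bg_threshold   # assume near-white background
    labels, _ = ndimage.label(foreground)    # connected-component labeling
    boxes = []
    for region in ndimage.find_objects(labels):
        ys, xs = region
        boxes.append((ys.start, xs.start, ys.stop, xs.stop))
    return boxes
```

Each box would then be passed to a component-type classifier and positioned in the generated template, per the abstract.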
-
Publication number: 20180349730
Abstract: User interface creation from screenshots is described. Initially, a user captures a screenshot of an existing graphical user interface (GUI). In one or more implementations, the screenshot is processed to generate different types of templates that are modifiable by users to create new GUIs. These different types of templates can include a snapping template, a wireframe template, and a stylized template. The described templates may aid GUI development in different ways depending on the type selected. To generate a template, the screenshot serving as the basis for the template is segmented into groups of pixels corresponding to components of the existing GUI. A type of component is identified for each group of pixels and locations in the screenshot are determined. Based on the identified types of GUI components and determined locations, the user-modifiable template for creating a new GUI is generated.
Type: Application
Filed: May 30, 2017
Publication date: December 6, 2018
Applicant: Adobe Systems Incorporated
Inventors: Morgan Emory Dixon, Lubomira Assenova Dontcheva, Joel Richard Brandt, Amanda Marie Swearngin