Patents by Inventor Lubomira Assenova Dontcheva

Lubomira Assenova Dontcheva has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250139161
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for cutting down a user's larger input video into an edited video comprising the most important video segments and applying corresponding video effects. Some embodiments of the present invention are directed to adding captioning video effects to the trimmed video (e.g., applying face-aware and non-face-aware captioning to emphasize extracted video segment headings, important sentences, quotes, words of interest, extracted lists, etc.). For example, a prompt is provided to a generative language model to identify portions of a transcript (e.g., extracted scene summaries, important sentences, lists of items discussed in the video, etc.) to apply to corresponding video segments as captions depending on the type of caption (e.g., an extracted heading may be captioned at the start of a corresponding video segment, important sentences and/or extracted list items may be captioned when they are spoken).
    Type: Application
    Filed: February 2, 2024
    Publication date: May 1, 2025
    Inventors: Deepali ANEJA, Zeyu JIN, Hijung SHIN, Anh Lan TRUONG, Dingzeyu LI, Hanieh DEILAMSALEHY, Rubaiat HABIB, Matthew David FISHER, Kim Pascal PIMMEL, Wilmot LI, Lubomira Assenova DONTCHEVA
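The caption-timing behavior this abstract describes (an extracted heading shown at the start of its segment, important sentences shown as they are spoken) can be sketched as a small scheduling function. This is a minimal illustration; the data shapes and the function name are assumptions, not taken from the patent:

```python
def place_captions(segment, captions):
    """Schedule captioning effects for one video segment.

    The extracted heading is shown at the segment start; important
    sentences and list items appear at the moment they are spoken.
    Returns a time-sorted list of (time_seconds, caption_text) events.
    """
    events = []
    heading = captions.get("heading")
    if heading:
        events.append((segment["start"], heading))
    for sentence in captions.get("sentences", []):
        events.append((sentence["time"], sentence["text"]))
    return sorted(events)
```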
  • Publication number: 20250140292
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for cutting down a user's larger input video into an edited video comprising the most important video segments and applying corresponding video effects. Some embodiments of the present invention are directed to adding face-aware scale magnification to the trimmed video (e.g., applying scale magnification to simulate a camera zoom effect that hides shot cuts with respect to the subject's face). For example, as the trimmed video transitions from one video segment to the next video segment, a scale magnification may be applied that zooms in on a detected face at a boundary between the video segments to smooth the transition between video segments.
    Type: Application
    Filed: February 2, 2024
    Publication date: May 1, 2025
    Inventors: Anh Lan TRUONG, Deepali ANEJA, Hijung SHIN, Rubaiat HABIB, Jakub FISER, Kishore RADHAKRISHNA, Joel Richard BRANDT, Matthew David FISHER, Zeyu JIN, Kim Pascal PIMMEL, Wilmot LI, Lubomira Assenova DONTCHEVA
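The face-aware magnification in this abstract amounts to cropping around the detected face at the segment boundary and scaling the crop back to full frame, so the shot cut reads as a camera zoom. A minimal geometric sketch, with illustrative names and a simple clamp to keep the crop inside the frame (assumptions, not the patent's method):

```python
def zoom_transform(face_center, frame_size, scale=1.12):
    """Compute a crop rectangle that magnifies around a detected face.

    face_center: (cx, cy) face center in pixels.
    frame_size:  (w, h) of the full frame.
    Returns (x, y, crop_w, crop_h); rescaling this crop to frame_size
    produces a zoom of the given factor centered on the face.
    """
    w, h = frame_size
    crop_w, crop_h = w / scale, h / scale
    cx, cy = face_center
    # center the crop on the face, clamped so it stays inside the frame
    x = min(max(cx - crop_w / 2, 0.0), w - crop_w)
    y = min(max(cy - crop_h / 2, 0.0), h - crop_h)
    return (x, y, crop_w, crop_h)
```

Interpolating `scale` from 1.0 up to the target across a few frames on either side of the cut would make the transition read as a continuous zoom rather than an instant jump.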
  • Publication number: 20250140291
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for identifying the relevant segments that effectively summarize the larger input video and/or form a rough cut, and assembling them into one or more smaller trimmed videos. For example, visual scenes and corresponding scene captions are extracted from the input video and associated with an extracted diarized and timestamped transcript to generate an augmented transcript. The augmented transcript is applied to a large language model to extract sentences that characterize a trimmed version of the input video (e.g., a natural language summary, a representation of identified sentences from the transcript). As such, corresponding video segments are identified (e.g., using similarity to match each sentence in a generated summary with a corresponding transcript sentence) and assembled into one or more trimmed videos. In some embodiments, the trimmed video is generated based on a user's query and/or desired length.
    Type: Application
    Filed: February 2, 2024
    Publication date: May 1, 2025
    Inventors: Hanieh DEILAMSALEHY, Jui-Hsien WANG, Zhengyang MA, Dingzeyu LI, Hijung SHIN, Aseem Omprakash AGARWALA, Kim Pascal PIMMEL, Lubomira Assenova DONTCHEVA
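The matching step this abstract mentions (pairing each sentence of the generated summary with its most similar transcript sentence to recover video segments) can be sketched with any sentence-similarity function. The sketch below uses a bag-of-words cosine as a stand-in for the embedding similarity a real system would use; names and data shapes are illustrative assumptions:

```python
import math
from collections import Counter

def cosine(a, b):
    """Bag-of-words cosine similarity between two sentences."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * \
           math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def match_summary(summary_sentences, transcript):
    """Map each summary sentence to the most similar transcript
    sentence and return the matched (start, end) video segments."""
    segments = []
    for sentence in summary_sentences:
        best = max(transcript, key=lambda t: cosine(sentence, t["text"]))
        segments.append((best["start"], best["end"]))
    return segments
```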
  • Patent number: 12223962
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for music-aware speaker diarization. In an example embodiment, one or more audio classifiers detect speech and music independently of each other, which facilitates detecting regions in an audio track that contain music but do not contain speech. These music-only regions are compared to the transcript, and any transcription and speakers that overlap in time with the music-only regions are removed from the transcript. In some embodiments, rather than having the transcript display the text from this detected music, a visual representation of the audio waveform is included in the corresponding regions of the transcript.
    Type: Grant
    Filed: October 17, 2022
    Date of Patent: February 11, 2025
    Assignee: Adobe Inc.
    Inventors: Justin Jonathan Salamon, Fabian David Caba Heilbron, Xue Bai, Aseem Omprakash Agarwala, Hijung Shin, Lubomira Assenova Dontcheva
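The core of this abstract is interval arithmetic: subtract detected speech intervals from detected music intervals to get music-only regions, then drop transcript segments overlapping those regions. A minimal sketch under assumed data shapes (sorted, non-overlapping `(start, end)` pairs in seconds; none of the names come from the patent):

```python
def music_only_regions(music, speech):
    """Subtract speech intervals from music intervals.

    Returns the portions of each music interval not covered by speech,
    i.e. the regions that contain music but no speech.
    """
    regions = []
    for m_start, m_end in music:
        cursor = m_start
        for s_start, s_end in speech:
            if s_end <= cursor or s_start >= m_end:
                continue  # this speech interval misses the music interval
            if s_start > cursor:
                regions.append((cursor, s_start))
            cursor = max(cursor, s_end)
        if cursor < m_end:
            regions.append((cursor, m_end))
    return regions

def filter_transcript(segments, music_only):
    """Drop transcript segments that overlap any music-only region."""
    def overlaps(seg):
        return any(seg["start"] < end and seg["end"] > start
                   for start, end in music_only)
    return [seg for seg in segments if not overlaps(seg)]
```

The retained regions are where the abstract's waveform visualization would be inserted into the transcript in place of the erroneous transcription.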
  • Patent number: 12206930
    Abstract: Embodiments of the present disclosure provide a method, a system, and computer storage media that provide mechanisms for multimedia effect addition and editing support for text-based video editing tools. The method includes generating a user interface (UI) displaying a transcript of an audio track of a video and receiving, via the UI, input identifying selection of a text segment from the transcript. The method also includes receiving, via the UI, input identifying selection of a particular type of text stylization or layout for application to the text segment. The method further includes identifying a video effect corresponding to the particular type of text stylization or layout, applying the video effect to a video segment corresponding to the text segment, and applying the particular type of text stylization or layout to the text segment to visually represent the video effect in the transcript.
    Type: Grant
    Filed: January 13, 2023
    Date of Patent: January 21, 2025
    Assignee: Adobe Inc.
    Inventors: Kim Pascal Pimmel, Stephen Joseph Diverdi, Jiaju MA, Rubaiat Habib, Li-Yi Wei, Hijung Shin, Deepali Aneja, John G. Nelson, Wilmot Li, Dingzeyu Li, Lubomira Assenova Dontcheva, Joel Richard Brandt
  • Patent number: 12125501
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for face-aware speaker diarization. In an example embodiment, an audio-only speaker diarization technique is applied to generate an audio-only speaker diarization of a video, an audio-visual speaker diarization technique is applied to generate a face-aware speaker diarization of the video, and the audio-only speaker diarization is refined using the face-aware speaker diarization to generate a hybrid speaker diarization that links detected faces to detected voices. In some embodiments, to accommodate videos with small faces that appear pixelated, a cropped image of any given face is extracted from each frame of the video, and the size of the cropped image is used to select a corresponding active speaker detection model to predict an active speaker score for the face in the cropped image.
    Type: Grant
    Filed: October 17, 2022
    Date of Patent: October 22, 2024
    Assignee: Adobe Inc.
    Inventors: Fabian David Caba Heilbron, Xue Bai, Aseem Omprakash Agarwala, Haoran Cai, Lubomira Assenova Dontcheva
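The refinement step in this abstract, linking audio-only speaker clusters to detected faces, can be sketched as overlap voting: each audio speaker is assigned the face whose on-screen segments co-occur with that speaker's speech the most. This is an illustrative reduction of the hybrid-diarization idea, not the patented procedure, and the segment shapes are assumptions:

```python
from collections import Counter, defaultdict

def link_voices_to_faces(audio_segments, face_segments):
    """Produce a voice -> face mapping by temporal-overlap voting.

    audio_segments: dicts with 'speaker', 'start', 'end' (audio-only
    diarization); face_segments: dicts with 'face', 'start', 'end'
    (face-aware diarization). Overlap duration weights each vote.
    """
    votes = defaultdict(Counter)
    for a in audio_segments:
        for f in face_segments:
            overlap = min(a["end"], f["end"]) - max(a["start"], f["start"])
            if overlap > 0:
                votes[a["speaker"]][f["face"]] += overlap
    return {spk: faces.most_common(1)[0][0] for spk, faces in votes.items()}
```

The abstract's small-face handling (choosing an active speaker detection model based on the pixel size of the face crop) would sit upstream of this step, improving the face-aware segments before the vote.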
  • Patent number: 12119028
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for identifying candidate boundaries for video segments, video segment selection using those boundaries, and text-based video editing of video segments selected via transcript interactions. In an example implementation, boundaries of detected sentences and words are extracted from a transcript, the boundaries are retimed into an adjacent speech gap to a location where voice or audio activity is a minimum, and the resulting boundaries are stored as candidate boundaries for video segments. As such, a transcript interface presents the transcript, interprets input selecting transcript text as an instruction to select a video segment with corresponding boundaries selected from the candidate boundaries, and interprets commands that are traditionally thought of as text-based operations (e.g., cut, copy, paste) as an instruction to perform a corresponding video editing operation using the selected video segment.
    Type: Grant
    Filed: October 17, 2022
    Date of Patent: October 15, 2024
    Assignee: Adobe Inc.
    Inventors: Xue Bai, Justin Jonathan Salamon, Aseem Omprakash Agarwala, Hijung Shin, Haoran Cai, Joel Richard Brandt, Lubomira Assenova Dontcheva, Cristin Ailidh Fraser
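The retiming step this abstract describes, moving each sentence or word boundary into the adjacent speech gap at the point of minimum voice activity, can be sketched as a search over activity frames. A minimal sketch; the gap-selection heuristic, frame hop, and names are assumptions for illustration:

```python
def retime_boundary(boundary, gaps, activity, hop=0.01):
    """Snap a word/sentence boundary (seconds) into the adjacent
    speech gap at the frame of minimum voice/audio activity.

    gaps:     list of (start, end) silence intervals in seconds.
    activity: per-frame activity score sampled every `hop` seconds.
    """
    # pick the gap containing the boundary, or the nearest one to it
    gap = min(gaps, key=lambda g: max(g[0] - boundary, boundary - g[1], 0.0))
    lo = int(gap[0] / hop)
    hi = max(lo + 1, int(gap[1] / hop))
    # frame with the lowest activity inside the gap
    best = min(range(lo, hi), key=lambda i: activity[i])
    return best * hop
```

The resulting times would be stored as the candidate boundaries that transcript selections are snapped to during cut/copy/paste editing.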
  • Publication number: 20240244287
    Abstract: Embodiments of the present disclosure provide a method, a system, and computer storage media that provide mechanisms for multimedia effect addition and editing support for text-based video editing tools. The method includes generating a user interface (UI) displaying a transcript of an audio track of a video and receiving, via the UI, input identifying selection of a text segment from the transcript. The method also includes receiving, via the UI, input identifying selection of a particular type of text stylization or layout for application to the text segment. The method further includes identifying a video effect corresponding to the particular type of text stylization or layout, applying the video effect to a video segment corresponding to the text segment, and applying the particular type of text stylization or layout to the text segment to visually represent the video effect in the transcript.
    Type: Application
    Filed: January 13, 2023
    Publication date: July 18, 2024
    Inventors: Kim Pascal PIMMEL, Stephen Joseph DIVERDI, Jiaju MA, Rubaiat HABIB, Li-Yi WEI, Hijung SHIN, Deepali ANEJA, John G. NELSON, Wilmot LI, Dingzeyu LI, Lubomira Assenova DONTCHEVA, Joel Richard BRANDT
  • Publication number: 20240233769
    Abstract: Embodiments of the present disclosure provide systems, methods, and computer storage media providing visualizations and mechanisms for performing video edits using wrapped timelines (e.g., effect bars/effect tracks) interspersed between text lines to represent video effects applied to text segments in a transcript. An example embodiment generates a transcript from the audio track of a video. A transcript interface presents the transcript and accepts input selecting sentences or words from it. The identified boundaries corresponding to the selected text segment are used as boundaries for a selected video segment. Using the selected text segment, a user selects a video effect to apply to the corresponding video segment, and within the transcript interface, a wrapped timeline is placed in the transcript along the selected text segment to indicate that the video effect is applied to the corresponding video segment.
    Type: Application
    Filed: January 10, 2023
    Publication date: July 11, 2024
    Inventors: David Tamas Kutas, Lubomira Assenova Dontcheva, Kim Pascal Pimmel, Hijung Shin
  • Publication number: 20240135973
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for identifying candidate boundaries for video segments, video segment selection using those boundaries, and text-based video editing of video segments selected via transcript interactions. In an example implementation, boundaries of detected sentences and words are extracted from a transcript, the boundaries are retimed into an adjacent speech gap to a location where voice or audio activity is a minimum, and the resulting boundaries are stored as candidate boundaries for video segments. As such, a transcript interface presents the transcript, interprets input selecting transcript text as an instruction to select a video segment with corresponding boundaries selected from the candidate boundaries, and interprets commands that are traditionally thought of as text-based operations (e.g., cut, copy, paste) as an instruction to perform a corresponding video editing operation using the selected video segment.
    Type: Application
    Filed: October 17, 2022
    Publication date: April 25, 2024
    Inventors: Xue BAI, Justin Jonathan SALAMON, Aseem Omprakash AGARWALA, Hijung SHIN, Haoran CAI, Joel Richard BRANDT, Lubomira Assenova DONTCHEVA, Cristin Ailidh FRASER
  • Publication number: 20240134909
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for a visual and text search interface used to navigate a video transcript. In an example embodiment, a freeform text query triggers a visual search for frames of a loaded video that match the freeform text query (e.g., frame embeddings that match a corresponding embedding of the freeform query), and triggers a text search for matching words from a corresponding transcript or from tags of detected features from the loaded video. Visual search results are displayed (e.g., in a row of tiles that can be scrolled to the left and right), and textual search results are displayed (e.g., in a row of tiles that can be scrolled up and down). Selecting (e.g., clicking or tapping on) a search result tile navigates a transcript interface to a corresponding portion of the transcript.
    Type: Application
    Filed: October 17, 2022
    Publication date: April 25, 2024
    Inventors: Lubomira Assenova DONTCHEVA, Dingzeyu LI, Kim Pascal PIMMEL, Hijung SHIN, Hanieh DEILAMSALEHY, Aseem Omprakash AGARWALA, Joy Oakyung KIM, Joel Richard BRANDT, Cristin Ailidh FRASER
  • Publication number: 20240134597
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for a question search for meaningful questions that appear in a video. In an example embodiment, an audio track from a video is transcribed, and the transcript is parsed to identify sentences that end with a question mark. Depending on the embodiment, one or more types of questions are filtered out, such as short questions less than a designated length or duration, logistical questions, and/or rhetorical questions. As such, in response to a command to perform a question search, the questions are identified, and search result tiles representing video segments of the questions are presented. Selecting (e.g., clicking or tapping on) a search result tile navigates a transcript interface to a corresponding portion of the transcript.
    Type: Application
    Filed: October 17, 2022
    Publication date: April 25, 2024
    Inventors: Lubomira Assenova DONTCHEVA, Anh Lan TRUONG, Hanieh DEILAMSALEHY, Kim Pascal PIMMEL, Aseem Omprakash AGARWALA, Dingzeyu LI, Joel Richard BRANDT, Joy Oakyung KIM
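The question-search pipeline in this abstract (keep sentences ending in a question mark, then filter out short and logistical questions) reduces to a simple filter. A minimal sketch; the word-count threshold and the logistical-phrase pattern are illustrative stand-ins for whatever classifiers the embodiments actually use:

```python
import re

# illustrative examples of "logistical" phrasing, not the patent's list
LOGISTICAL = re.compile(r"\b(can you hear me|is this (on|working)|"
                        r"can everyone see)\b", re.IGNORECASE)

def find_questions(sentences, min_words=4):
    """Return transcript sentences that end in '?' and survive
    simple length and logistics filters."""
    questions = []
    for s in sentences:
        text = s["text"].strip()
        if not text.endswith("?"):
            continue
        if len(text.split()) < min_words:
            continue  # drop short questions
        if LOGISTICAL.search(text):
            continue  # drop logistical questions
        questions.append(s)
    return questions
```

Each surviving question would then be rendered as a search-result tile linked to its position in the transcript.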
  • Publication number: 20240127858
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for annotating transcript text with video metadata, and including thumbnail bars in the transcript to help users select a desired portion of a video through transcript interactions. In an example embodiment, a video editing interface includes a transcript interface that presents a transcript with transcript text that is annotated to indicate corresponding portions of the video where various features were detected (e.g., annotating via text stylization of transcript text and/or labeling the transcript text with a textual representation of a corresponding detected feature class). In some embodiments, the transcript interface displays a visual representation of detected non-speech audio or pauses (e.g., a sound bar) and/or video thumbnails corresponding to each line of transcript text (e.g., a thumbnail bar).
    Type: Application
    Filed: October 17, 2022
    Publication date: April 18, 2024
    Inventors: Lubomira Assenova DONTCHEVA, Hijung SHIN, Joel Richard BRANDT, Joy Oakyung KIM
  • Publication number: 20240127820
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for music-aware speaker diarization. In an example embodiment, one or more audio classifiers detect speech and music independently of each other, which facilitates detecting regions in an audio track that contain music but do not contain speech. These music-only regions are compared to the transcript, and any transcription and speakers that overlap in time with the music-only regions are removed from the transcript. In some embodiments, rather than having the transcript display the text from this detected music, a visual representation of the audio waveform is included in the corresponding regions of the transcript.
    Type: Application
    Filed: October 17, 2022
    Publication date: April 18, 2024
    Inventors: Justin Jonathan SALAMON, Fabian David CABA HEILBRON, Xue BAI, Aseem Omprakash AGARWALA, Hijung SHIN, Lubomira Assenova DONTCHEVA
  • Publication number: 20240127855
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for selection of the best image of a particular speaker's face in a video, and visualization in a diarized transcript. In an example embodiment, candidate images of a face of a detected speaker are extracted from frames of a video identified by a detected face track for the face, and a representative image of the detected speaker's face is selected from the candidate images based on image quality, facial emotion (e.g., using an emotion classifier that generates a happiness score), a size factor (e.g., favoring larger images), and/or penalizing images that appear towards the beginning or end of a face track. As such, each segment of the transcript is presented with the representative image of the speaker who spoke that segment and/or input is accepted changing the representative image associated with each speaker.
    Type: Application
    Filed: October 17, 2022
    Publication date: April 18, 2024
    Inventors: Lubomira Assenova DONTCHEVA, Xue BAI, Aseem Omprakash AGARWALA, Joel Richard BRANDT
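The selection criteria in this abstract (image quality, facial emotion such as a happiness score, a size factor favoring larger crops, and a penalty for images near the ends of a face track) suggest a weighted scoring function. The weights, thresholds, and field names below are illustrative assumptions, not values from the patent:

```python
def score_candidate(img, track_len):
    """Score one candidate face image; higher is better.

    img: dict with 'quality' and 'happiness' in [0, 1], 'size'
    (pixel height of the crop), and 'frame' (index within the track).
    """
    score = 0.5 * img["quality"] + 0.3 * img["happiness"]
    score += 0.2 * min(img["size"] / 256.0, 1.0)  # favor larger crops, capped
    # penalize images that appear near the beginning or end of the track
    edge = min(img["frame"], track_len - 1 - img["frame"]) / max(track_len - 1, 1)
    if edge < 0.1:
        score -= 0.25
    return score

def pick_representative(candidates, track_len):
    """Choose the highest-scoring candidate image for a speaker."""
    return max(candidates, key=lambda c: score_candidate(c, track_len))
```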
  • Publication number: 20240127857
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for face-aware speaker diarization. In an example embodiment, an audio-only speaker diarization technique is applied to generate an audio-only speaker diarization of a video, an audio-visual speaker diarization technique is applied to generate a face-aware speaker diarization of the video, and the audio-only speaker diarization is refined using the face-aware speaker diarization to generate a hybrid speaker diarization that links detected faces to detected voices. In some embodiments, to accommodate videos with small faces that appear pixelated, a cropped image of any given face is extracted from each frame of the video, and the size of the cropped image is used to select a corresponding active speaker detection model to predict an active speaker score for the face in the cropped image.
    Type: Application
    Filed: October 17, 2022
    Publication date: April 18, 2024
    Inventors: Fabian David CABA HEILBRON, Xue BAI, Aseem Omprakash AGARWALA, Haoran CAI, Lubomira Assenova DONTCHEVA
  • Patent number: D1056928
    Type: Grant
    Filed: October 17, 2022
    Date of Patent: January 7, 2025
    Assignee: Adobe Inc.
    Inventors: Kim Pascal Pimmel, Hijung Shin, Dingzeyu Li, Anh Lan Truong, Joy Oakyung Kim, Pankaj Kumar Nathani, Xuecong Xu, Cristin Ailidh Fraser, Kyratso George Karahalios, Zhengyang Ma, Joel Richard Brandt, Lubomira Assenova Dontcheva, Seth John Walker
  • Patent number: D1056942
    Type: Grant
    Filed: February 29, 2024
    Date of Patent: January 7, 2025
    Assignee: Adobe Inc.
    Inventors: Kim Pascal Pimmel, Hijung Shin, Dingzeyu Li, Anh Lan Truong, Joy Oakyung Kim, Pankaj Kumar Nathani, Xuecong Xu, Cristin Ailidh Fraser, Kyratso George Karahalios, Zhengyang Ma, Joel Richard Brandt, Lubomira Assenova Dontcheva, Seth John Walker
  • Patent number: D1056943
    Type: Grant
    Filed: February 29, 2024
    Date of Patent: January 7, 2025
    Assignee: Adobe Inc.
    Inventors: Kim Pascal Pimmel, Hijung Shin, Dingzeyu Li, Anh Lan Truong, Joy Oakyung Kim, Pankaj Kumar Nathani, Xuecong Xu, Cristin Ailidh Fraser, Kyratso George Karahalios, Zhengyang Ma, Joel Richard Brandt, Lubomira Assenova Dontcheva, Seth John Walker
  • Patent number: D1056944
    Type: Grant
    Filed: February 29, 2024
    Date of Patent: January 7, 2025
    Assignee: Adobe Inc.
    Inventors: Kim Pascal Pimmel, Hijung Shin, Dingzeyu Li, Anh Lan Truong, Joy Oakyung Kim, Pankaj Kumar Nathani, Xuecong Xu, Cristin Ailidh Fraser, Kyratso George Karahalios, Zhengyang Ma, Joel Richard Brandt, Lubomira Assenova Dontcheva, Seth John Walker