Patents by Inventor Lubomira Dontcheva

Lubomira Dontcheva has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220292831
    Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments.
    Type: Application
    Filed: June 2, 2022
    Publication date: September 15, 2022
    Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popovic, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
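
The ingestion pipeline the abstract describes — detecting speech and scene boundaries, merging them into clip atoms, and clustering atoms into coarser levels — can be sketched roughly as follows. The function names and the single greedy coarsening pass are illustrative assumptions, not the patented method:

```python
def clip_atoms(speech_boundaries, scene_boundaries, duration):
    """Merge detected boundary timestamps (in seconds) into a sorted,
    deduplicated cut list, then emit consecutive (start, end) atoms."""
    cuts = sorted({0.0, duration, *speech_boundaries, *scene_boundaries})
    return list(zip(cuts, cuts[1:]))

def coarsen(atoms, max_len):
    """One level of hierarchical clustering: greedily merge adjacent
    atoms while the merged segment stays within max_len seconds."""
    merged = [atoms[0]]
    for start, end in atoms[1:]:
        seg_start, _ = merged[-1]
        if end - seg_start <= max_len:
            merged[-1] = (seg_start, end)
        else:
            merged.append((start, end))
    return merged

atoms = clip_atoms([2.5, 7.0], [4.0, 7.0], 10.0)
# atoms → [(0.0, 2.5), (2.5, 4.0), (4.0, 7.0), (7.0, 10.0)]
```

Applying `coarsen` repeatedly with growing `max_len` yields the multi-level hierarchy: each level is a complete, disjoint cover of the video built from the level below.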
  • Publication number: 20220076025
    Abstract: Embodiments are directed to video segmentation based on a query. Initially, a first segmentation such as a default segmentation is displayed (e.g., as interactive tiles in a finder interface, as a video timeline in an editor interface), and the default segmentation is re-segmented in response to a user query. The query can take the form of a keyword and one or more selected facets in a category of detected features. Keywords are searched for detected transcript words, detected object or action tags, or detected audio event tags that match the keywords. Selected facets are searched for detected instances of the selected facets. Each video segment that matches the query is re-segmented by solving a shortest path problem through a graph that models different segmentation options.
    Type: Application
    Filed: May 26, 2021
    Publication date: March 10, 2022
    Inventors: Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popovic
  • Publication number: 20220076707
    Abstract: Embodiments are directed to a snap point segmentation that defines the locations of selection snap points for a selection of video segments. Candidate snap points are determined from boundaries of feature ranges of the video indicating when instances of detected features are present in the video. In some embodiments, candidate snap point separations are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation between consecutive snap points on a video timeline. The snap point segmentation is computed by solving a shortest path problem through a graph that models different snap point locations and separations. When a user clicks or taps on the video timeline and drags, a selection snaps to the snap points defined by the snap point segmentation. In some embodiments, the snap points are displayed during a drag operation and disappear when the drag operation is released.
    Type: Application
    Filed: May 26, 2021
    Publication date: March 10, 2022
    Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popovic, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
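
The minimum-separation constraint in this abstract — consecutive snap points must map to timeline pixels at least some width apart — can be illustrated with a simplified greedy filter. The patent computes the snap segmentation by solving a shortest path problem; this left-to-right pass is only a sketch of the separation rule itself:

```python
def snap_candidates(candidates, duration, timeline_px, min_px=8):
    """Keep candidate snap times (in seconds) whose positions on a
    timeline_px-wide timeline are at least min_px pixels apart."""
    min_sec = min_px * duration / timeline_px
    kept = []
    for t in sorted(candidates):
        if not kept or t - kept[-1] >= min_sec:
            kept.append(t)
    return kept

# 10 s video on a 100 px timeline with an 8 px minimum → 0.8 s spacing:
# snap_candidates([0.0, 0.5, 3.0, 3.2, 9.0], 10.0, 100) → [0.0, 3.0, 9.0]
```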
  • Publication number: 20220075820
    Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation by performing a metadata search. Generally, various types of metadata can be extracted from a video, such as a transcript of audio, keywords from the transcript, content or action tags visually extracted from video frames, and log event tags extracted from an associated temporal log. The extracted metadata is segmented into metadata segments and associated with corresponding video segments defined by a hierarchical video segmentation. As such, a metadata search can be performed to identify matching metadata segments and corresponding matching video segments defined by a particular level of the hierarchical segmentation. Matching metadata segments are emphasized in a composite list of the extracted metadata, and matching video segments are emphasized on the video timeline. Navigating to a different level of the hierarchy transforms the search results into corresponding coarser or finer segments defined by the level.
    Type: Application
    Filed: September 10, 2020
    Publication date: March 10, 2022
    Inventors: Seth Walker, Joy Oakyung Kim, Morgan Nicole Evans, Najika Skyler Halsema Yoo, Aseem Agarwala, Joel R. Brandt, Jovan Popovic, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai
  • Publication number: 20220076026
    Abstract: Embodiments are directed to a thumbnail segmentation that defines the locations on a video timeline where thumbnails are displayed. Candidate thumbnail locations are determined from boundaries of feature ranges of the video indicating when instances of detected features are present in the video. In some embodiments, candidate thumbnail separations are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation (e.g., the width of a thumbnail) between consecutive thumbnail locations on a video timeline. The thumbnail segmentation is computed by solving a shortest path problem through a graph that models different thumbnail locations and separations. As such, a video timeline is displayed with thumbnails at locations on the timeline defined by the thumbnail segmentation, with each thumbnail depicting a portion of the video associated with the thumbnail location.
    Type: Application
    Filed: May 26, 2021
    Publication date: March 10, 2022
    Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popovic, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
  • Publication number: 20220076024
    Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a metadata panel with a composite list of video metadata. The composite list is segmented into selectable metadata segments at locations corresponding to boundaries of video segments defined by a hierarchical segmentation. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. One or more metadata segments can be selected in various ways, such as by clicking or tapping on a metadata segment or by performing a metadata search. When a metadata segment is selected, a corresponding video segment is emphasized on the video timeline, a playback cursor is moved to the first video frame of the video segment, and the first video frame is presented.
    Type: Application
    Filed: September 10, 2020
    Publication date: March 10, 2022
    Inventors: Seth Walker, Joy Oakyung Kim, Hijung Shin, Aseem Agarwala, Joel R. Brandt, Jovan Popovic, Lubomira Dontcheva, Dingzeyu Li, Xue Bai
  • Publication number: 20220076424
    Abstract: Embodiments are directed to video segmentation based on detected video features. More specifically, a segmentation of a video is computed by determining candidate boundaries from detected feature boundaries from one or more feature tracks; modeling different segmentation options by constructing a graph with nodes that represent candidate boundaries, edges that represent candidate segments, and edge weights that represent cut costs; and computing the video segmentation by solving a shortest path problem to find the path through the edges (segmentation) that minimizes the sum of edge weights along the path (cut costs). A representation of the video segmentation is presented, for example, using interactive tiles or a video timeline that represent(s) the video segments in the segmentation.
    Type: Application
    Filed: May 26, 2021
    Publication date: March 10, 2022
    Inventors: Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popovic
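
The graph formulation above — nodes are candidate boundaries, edges are candidate segments, edge weights are cut costs, and the segmentation is the minimum-cost path — maps directly onto dynamic programming over a DAG, since edges only run from earlier to later boundaries. A minimal sketch (the `cut_cost` function here is a made-up stand-in for the patent's feature-derived costs):

```python
import math

def segment(boundaries, cut_cost):
    """Pick the segmentation (subset of candidate boundaries) that
    minimizes the summed cut cost of its segments, via shortest path
    through the boundary DAG."""
    n = len(boundaries)
    best = [math.inf] * n   # cheapest path cost reaching each node
    back = [0] * n          # back-pointer for path recovery
    best[0] = 0.0
    for j in range(1, n):
        for i in range(j):
            w = best[i] + cut_cost(boundaries[i], boundaries[j])
            if w < best[j]:
                best[j], back[j] = w, i
    path, j = [n - 1], n - 1
    while j != 0:
        j = back[j]
        path.append(j)
    path.reverse()
    return [boundaries[k] for k in path]

# Toy cost preferring ~5-second segments:
# segment([0, 2, 5, 7, 10, 12, 15], lambda a, b: abs((b - a) - 5.0))
# → [0, 5, 10, 15]
```

The same machinery underlies several entries in this listing (query-based re-segmentation, snap points, thumbnails): only the candidate boundaries and the cost function change.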
  • Publication number: 20220076705
    Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation. In some embodiments, the finest level of the hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms. Each level of the hierarchical segmentation clusters the clip atoms with a corresponding degree of granularity into a corresponding set of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline (e.g., clicks, drags), by performing a metadata search, or through selection of corresponding metadata segments from a metadata panel. Navigating to a different level of the hierarchy transforms the selection into corresponding coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
    Type: Application
    Filed: September 10, 2020
    Publication date: March 10, 2022
    Inventors: Seth Walker, Joy Oakyung Kim, Aseem Agarwala, Joel R. Brandt, Jovan Popovic, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai
  • Publication number: 20220076023
    Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments.
    Type: Application
    Filed: September 10, 2020
    Publication date: March 10, 2022
    Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popovic, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
  • Publication number: 20220076706
    Abstract: Embodiments are directed to interactive tiles that represent video segments of a segmentation of a video. In some embodiments, each interactive tile represents a different video segment from a particular video segmentation (e.g., a default video segmentation). Each interactive tile includes a thumbnail (e.g., the first frame of the video segment represented by the tile), some transcript from the beginning of the video segment, a visualization of detected faces in the video segment, and one or more faceted timelines that visualize a category of detected features (e.g., a visualization of detected visual scenes, audio classifications, visual artifacts). In some embodiments, interacting with a particular interactive tile navigates to a corresponding portion of the video, adds a corresponding video segment to a selection, and/or scrubs through tile thumbnails.
    Type: Application
    Filed: May 26, 2021
    Publication date: March 10, 2022
    Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popovic, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
  • Publication number: 20220075513
    Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a video timeline. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline. For example, a click or tap on a video segment or a drag operation dragging along the timeline snaps selection boundaries to corresponding segment boundaries defined by the level. Navigating to a different level of the hierarchy transforms the selection into coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
    Type: Application
    Filed: September 10, 2020
    Publication date: March 10, 2022
    Inventors: Seth Walker, Joy Oakyung Kim, Aseem Agarwala, Joel R. Brandt, Jovan Popovic, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai
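
The snapping interaction in this entry — a click or drag on the timeline resolves to the nearest segment boundary at the displayed hierarchy level — reduces to a nearest-neighbor lookup. A minimal sketch, with made-up boundary levels:

```python
def snap(t, boundaries):
    """Snap a timeline position t (seconds) to the nearest segment
    boundary defined by the currently displayed hierarchy level."""
    return min(boundaries, key=lambda b: abs(b - t))

coarse = [0.0, 4.0, 7.0, 10.0]           # a coarse hierarchy level
fine = [0.0, 2.5, 4.0, 7.0, 8.5, 10.0]   # a finer level
# snap(3.1, coarse) → 4.0;  snap(3.1, fine) → 2.5
```

Switching levels re-runs the same lookup against a different boundary list, which is how a selection "transforms into coarser or finer video segments" when the user navigates the hierarchy.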
  • Publication number: 20210405964
    Abstract: A framework for generating and presenting verbal command suggestions to facilitate discoverability of commands capable of being understood and support users exploring available commands. A target associated with a direct-manipulation input is received from a user via a multimodal user interface. A set of operations relevant to the target is selected and verbal command suggestions relevant to the selected set of operations and the determined target are generated. At least a portion of the generated verbal command suggestions is provided for presentation in association with the multimodal user interface in one of three interface variants: one that presents command suggestions as a list, one that presents command suggestions using contextual overlay windows, and one that presents command suggestions embedded within the interface. Each of the proposed interface variants facilitates user awareness of verbal commands that are capable of being executed and teaches users how available verbal commands can be invoked.
    Type: Application
    Filed: September 8, 2021
    Publication date: December 30, 2021
    Inventors: Lubomira Dontcheva, Arjun Srinivasan, Seth John Walker, Eytan Adar
  • Patent number: 11145333
    Abstract: Systems and methods provide for capturing and presenting content creation tools of an application used in a video. Application data from the application for the duration of the video is received. The application data includes data identifiers and time markers corresponding to user interaction with an application in a video. The application data is processed to detect tool identifiers identifying tools used in the video based on the data identifiers. For each tool identifier, a tool label and a corresponding time in the timeline is determined. A tool record storing the tool labels and the corresponding times in association with the video is generated. When a viewer requests to watch the video, the tool record is presented to the viewer in conjunction with the video.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: October 12, 2021
    Assignee: ADOBE INC.
    Inventors: William Hayes Allen, Lubomira Dontcheva, Haiqing Lu, Zachary Platt McCullough, David R. Stein, Christopher Nuuja, Benoit Ambry, Joel Richard Brandt, Cristin Ailidh Fraser, Joy Oakyung Kim, Hijung Shin
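
The tool record this abstract describes — mapping each detected tool label to the times it appears in the video — can be sketched as a simple fold over (data identifier, time marker) events. All names and the event format here are illustrative assumptions, not Adobe's actual data model:

```python
def build_tool_record(events, tool_labels):
    """Fold application events (data_id, time_in_seconds) into a tool
    record: one entry per detected tool, listing its usage times."""
    record = {}
    for data_id, t in events:
        label = tool_labels.get(data_id)
        if label is None:
            continue  # event does not correspond to a known tool
        record.setdefault(label, []).append(t)
    return record

events = [("T_BRUSH", 12.0), ("MENU_OPEN", 13.5),
          ("T_CROP", 40.2), ("T_BRUSH", 75.0)]
labels = {"T_BRUSH": "Brush", "T_CROP": "Crop"}
# build_tool_record(events, labels)
# → {"Brush": [12.0, 75.0], "Crop": [40.2]}
```

Presenting the record alongside playback then amounts to rendering each label with its timestamps as clickable jump points.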
  • Patent number: 11132174
    Abstract: A framework for generating and presenting verbal command suggestions to facilitate discoverability of commands capable of being understood and support users exploring available commands. A target associated with a direct-manipulation input is received from a user via a multimodal user interface. A set of operations relevant to the target is selected and verbal command suggestions relevant to the selected set of operations and the determined target are generated. At least a portion of the generated verbal command suggestions is provided for presentation in association with the multimodal user interface in one of three interface variants: one that presents command suggestions as a list, one that presents command suggestions using contextual overlay windows, and one that presents command suggestions embedded within the interface. Each of the proposed interface variants facilitates user awareness of verbal commands that are capable of being executed and teaches users how available verbal commands can be invoked.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: September 28, 2021
    Assignee: ADOBE INC.
    Inventors: Lubomira Dontcheva, Arjun Srinivasan, Seth John Walker, Eytan Adar
  • Publication number: 20210142827
    Abstract: Systems and methods provide for capturing and presenting content creation tools of an application used in a video. Application data from the application for the duration of the video is received. The application data includes data identifiers and time markers corresponding to user interaction with an application in a video. The application data is processed to detect tool identifiers identifying tools used in the video based on the data identifiers. For each tool identifier, a tool label and a corresponding time in the timeline is determined. A tool record storing the tool labels and the corresponding times in association with the video is generated. When a viewer requests to watch the video, the tool record is presented to the viewer in conjunction with the video.
    Type: Application
    Filed: November 8, 2019
    Publication date: May 13, 2021
    Inventors: William Hayes Allen, Lubomira Dontcheva, Haiqing Lu, Zachary Platt McCullough, David R. Stein, Christopher Nuuja, Benoit Ambry, Joel Richard Brandt, Cristin Ailidh Fraser, Joy Oakyung Kim, Hijung Shin
  • Patent number: 10896161
    Abstract: Techniques of managing design iterations include generating data linking selected snapshot histories with contextual notes within a single presentation environment. A designer may generate a design iteration in a design environment. Once the design iteration is complete, the designer may show a snapshot of the design iteration to a stakeholder. The stakeholder then may provide written contextual notes within the design environment. The computer links the contextual notes to the snapshot and stores the snapshot and contextual notes in a database. When the designer generates a new design iteration from the previous design iteration and the contextual notes, the computer generates a new snapshot and a link to the previous snapshot to form a timeline of snapshots. The designer may then present the snapshots, the timeline, and the contextual notes to the stakeholder as a coherent history of how the design of the mobile app evolved to its present state.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: January 19, 2021
    Assignee: ADOBE INC.
    Inventors: Lubomira A. Dontcheva, Wilmot Li, Morgan Dixon, Jasper O'Leary, Holger Winnemoeller
  • Publication number: 20200334290
    Abstract: A method includes detecting control of an active content creation tool of an interactive computing system in response to a user input received at a user interface of the interactive computing system. The method also includes automatically updating a video search query based on the detected control of the active content creation tool to include context information about the active content creation tool. Further, the method includes performing a video search of video captions from a video database using the video search query and providing search results of the video search to the user interface of the interactive computing system.
    Type: Application
    Filed: April 19, 2019
    Publication date: October 22, 2020
    Inventors: Lubomira Dontcheva, Kent Andrew Edmonds, Cristin Fraser, Scott Klemmer
  • Publication number: 20200293274
    Abstract: A framework for generating and presenting verbal command suggestions to facilitate discoverability of commands capable of being understood and support users exploring available commands. A target associated with a direct-manipulation input is received from a user via a multimodal user interface. A set of operations relevant to the target is selected and verbal command suggestions relevant to the selected set of operations and the determined target are generated. At least a portion of the generated verbal command suggestions is provided for presentation in association with the multimodal user interface in one of three interface variants: one that presents command suggestions as a list, one that presents command suggestions using contextual overlay windows, and one that presents command suggestions embedded within the interface. Each of the proposed interface variants facilitates user awareness of verbal commands that are capable of being executed and teaches users how available verbal commands can be invoked.
    Type: Application
    Filed: March 15, 2019
    Publication date: September 17, 2020
    Inventors: Lubomira Dontcheva, Arjun Srinivasan, Seth John Walker, Eytan Adar
  • Patent number: 10769738
    Abstract: A tutorial for a given application may be leveraged to generate executable code that can then be executed within a native instruction service of the application. In this way, a software application may thus provide an integrated, interactive learning experience for a user, in a manner that extends beyond the instructional content included in the native instruction service, i.e., that includes at least a portion of the instructional content of the tutorial.
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: September 8, 2020
    Assignee: ADOBE INC.
    Inventors: Walter W. Chang, Zhihong Ding, Lubomira A. Dontcheva, Gregg D. Wilensky, Darshan D. Prasad, Claudia Veronica Roberts
  • Patent number: 10656808
    Abstract: Natural language and user interface control techniques are described. In one or more implementations, a natural language input is received that is indicative of an operation to be performed by one or more modules of a computing device. Responsive to determining that the operation is associated with a degree to which the operation is performable, a user interface control is output that is manipulable by a user to control the degree to which the operation is to be performed.
    Type: Grant
    Filed: November 21, 2012
    Date of Patent: May 19, 2020
    Assignee: Adobe Inc.
    Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala