Patents by Inventor Cristin Ailidh FRASER

Cristin Ailidh FRASER has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240134909
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for a visual and text search interface used to navigate a video transcript. In an example embodiment, a freeform text query triggers a visual search for frames of a loaded video that match the freeform text query (e.g., frame embeddings that match a corresponding embedding of the freeform query), and triggers a text search for matching words from a corresponding transcript or from tags of detected features from the loaded video. Visual search results are displayed (e.g., in a row of tiles that can be scrolled to the left and right), and textual search results are displayed (e.g., in a row of tiles that can be scrolled up and down). Selecting (e.g., clicking or tapping on) a search result tile navigates a transcript interface to a corresponding portion of the transcript.
    Type: Application
    Filed: October 17, 2022
    Publication date: April 25, 2024
    Inventors: Lubomira Assenova DONTCHEVA, Dingzeyu LI, Kim Pascal PIMMEL, Hijung SHIN, Hanieh DEILAMSALEHY, Aseem Omprakash AGARWALA, Joy Oakyung KIM, Joel Richard BRANDT, Cristin Ailidh FRASER
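The visual search described in this abstract can be illustrated with a minimal sketch. This is not the patented implementation; it assumes a hypothetical joint text-image embedding model (e.g., a CLIP-style encoder) has already produced a query embedding and per-frame embeddings, and simply ranks frames by cosine similarity:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def visual_search(query_embedding, frame_embeddings, top_k=5):
    """Return (frame_index, score) pairs for the frames whose embeddings
    best match the freeform query embedding, highest similarity first."""
    scored = [(i, cosine(query_embedding, e))
              for i, e in enumerate(frame_embeddings)]
    scored.sort(key=lambda p: p[1], reverse=True)
    return scored[:top_k]
```

Each returned frame index would then back one tile in the row of visual search results, while the text search over the transcript and feature tags runs separately.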
  • Publication number: 20240135973
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for identifying candidate boundaries for video segments, video segment selection using those boundaries, and text-based video editing of video segments selected via transcript interactions. In an example implementation, boundaries of detected sentences and words are extracted from a transcript, the boundaries are retimed into an adjacent speech gap to a location where voice or audio activity is a minimum, and the resulting boundaries are stored as candidate boundaries for video segments. As such, a transcript interface presents the transcript, interprets input selecting transcript text as an instruction to select a video segment with corresponding boundaries selected from the candidate boundaries, and interprets commands that are traditionally thought of as text-based operations (e.g., cut, copy, paste) as an instruction to perform a corresponding video editing operation using the selected video segment.
    Type: Application
    Filed: October 17, 2022
    Publication date: April 25, 2024
    Inventors: Xue BAI, Justin Jonathan SALAMON, Aseem Omprakash AGARWALA, Hijung SHIN, Haoran CAI, Joel Richard BRANDT, Lubomira Assenova DONTCHEVA, Cristin Ailidh FRASER
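The boundary retiming step in this abstract can be sketched as follows. This is an illustrative simplification, not the patented method: frames are integer indices, `gaps` is an assumed list of (start, end) speech-free ranges, and `activity` is a hypothetical per-frame audio activity level. The boundary is moved into its gap (or the nearest gap) at the quietest frame:

```python
def retime_boundary(boundary, gaps, activity):
    """Move a detected word/sentence boundary (a frame index) into the
    adjacent speech gap, at the frame where audio activity is lowest."""
    for start, end in gaps:
        if start <= boundary <= end:
            # Boundary already falls in a gap: pick its quietest frame.
            return min(range(start, end + 1), key=lambda f: activity[f])
    # Otherwise snap to the nearest gap, then to its quietest frame.
    start, end = min(gaps, key=lambda g: min(abs(g[0] - boundary),
                                             abs(g[1] - boundary)))
    return min(range(start, end + 1), key=lambda f: activity[f])
```

The retimed boundaries become the candidate cut points, so that selecting transcript text maps cleanly onto video segments that start and end in silence.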
  • Patent number: 11887371
    Abstract: Embodiments are directed to a thumbnail segmentation that defines the locations on a video timeline where thumbnails are displayed. Candidate thumbnail locations are determined from boundaries of feature ranges of the video indicating when instances of detected features are present in the video. In some embodiments, candidate thumbnail separations are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation (e.g., the width of a thumbnail) between consecutive thumbnail locations on a video timeline. The thumbnail segmentation is computed by solving a shortest path problem through a graph that models different thumbnail locations and separations. As such, a video timeline is displayed with thumbnails at locations on the timeline defined by the thumbnail segmentation, with each thumbnail depicting a portion of the video associated with the thumbnail location.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
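The shortest-path formulation in this abstract can be sketched with a small dynamic program. This is a hedged illustration, not the patented algorithm: `candidates` are sorted candidate timestamps, `cost` is a stand-in per-segment cost function, and separations below `min_sep` incur an assumed fixed `penalty`. The cheapest path from the first candidate to the last selects the thumbnail locations:

```python
def best_thumbnail_locations(candidates, cost, min_sep, penalty):
    """Solve a shortest path over a DAG whose nodes are candidate
    thumbnail locations and whose edge weights are segment costs,
    penalized when consecutive thumbnails are closer than min_sep."""
    n = len(candidates)
    best = [float("inf")] * n   # best[i]: cheapest path ending at candidate i
    prev = [None] * n
    best[0] = 0.0
    for j in range(1, n):
        for i in range(j):
            w = cost(candidates[i], candidates[j])
            if candidates[j] - candidates[i] < min_sep:
                w += penalty    # discourage thumbnails closer than min_sep
            if best[i] + w < best[j]:
                best[j] = best[i] + w
                prev[j] = i
    # Recover the chosen locations by walking predecessors back from the end.
    path, j = [], n - 1
    while j is not None:
        path.append(candidates[j])
        j = prev[j]
    return path[::-1]
```

With a cost that favors even spacing, closely packed candidates are skipped, which matches the abstract's goal of keeping consecutive thumbnails at least one thumbnail-width apart on the timeline.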
  • Patent number: 11887629
    Abstract: Embodiments are directed to interactive tiles that represent video segments of a segmentation of a video. In some embodiments, each interactive tile represents a different video segment from a particular video segmentation (e.g., a default video segmentation). Each interactive tile includes a thumbnail (e.g., the first frame of the video segment represented by the tile), some transcript from the beginning of the video segment, a visualization of detected faces in the video segment, and one or more faceted timelines that visualize a category of detected features (e.g., a visualization of detected visual scenes, audio classifications, visual artifacts). In some embodiments, interacting with a particular interactive tile navigates to a corresponding portion of the video, adds a corresponding video segment to a selection, and/or scrubs through tile thumbnails.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
  • Patent number: 11810358
    Abstract: Embodiments are directed to video segmentation based on a query. Initially, a first segmentation such as a default segmentation is displayed (e.g., as interactive tiles in a finder interface, as a video timeline in an editor interface), and the default segmentation is re-segmented in response to a user query. The query can take the form of a keyword and one or more selected facets in a category of detected features. Keywords are searched for detected transcript words, detected object or action tags, or detected audio event tags that match the keywords. Selected facets are searched for detected instances of the selected facets. Each video segment that matches the query is re-segmented by solving a shortest path problem through a graph that models different segmentation options.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: November 7, 2023
    Assignee: ADOBE INC.
    Inventors: Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović
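The query-matching step described here can be sketched in a few lines. This is an assumed simplification, not the patented implementation: `segment` is a hypothetical dict holding a segment's transcript words, detected object/action/audio tags, and detected facet instances. A segment matches when every keyword appears among its transcript words or tags and every selected facet has a detected instance:

```python
def match_query(segment, keywords, facets):
    """Return True if a video segment matches the user query, i.e. all
    keywords hit the transcript or detected tags, and all selected
    facets have detected instances in the segment."""
    searchable = ({w.lower() for w in segment["transcript"]}
                  | {t.lower() for t in segment["tags"]})
    if not all(k.lower() in searchable for k in keywords):
        return False
    return all(f in segment["facets"] for f in facets)
```

Segments that match would then be re-segmented via the shortest-path graph described in the abstract; non-matching segments are left as-is.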
  • Patent number: 11455731
    Abstract: Embodiments are directed to video segmentation based on detected video features. More specifically, a segmentation of a video is computed by determining candidate boundaries from detected feature boundaries from one or more feature tracks; modeling different segmentation options by constructing a graph with nodes that represent candidate boundaries, edges that represent candidate segments, and edge weights that represent cut costs; and computing the video segmentation by solving a shortest path problem to find the path through the edges (segmentation) that minimizes the sum of edge weights along the path (cut costs). A representation of the video segmentation is presented, for example, using interactive tiles or a video timeline that represent(s) the video segments in the segmentation.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: September 27, 2022
    Assignee: Adobe Inc.
    Inventors: Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović
  • Publication number: 20220301179
    Abstract: Embodiments are directed to video segmentation based on detected video features. More specifically, a segmentation of a video is computed by determining candidate boundaries from detected feature boundaries from one or more feature tracks; modeling different segmentation options by constructing a graph with nodes that represent candidate boundaries, edges that represent candidate segments, and edge weights that represent cut costs; and computing the video segmentation by solving a shortest path problem to find the path through the edges (segmentation) that minimizes the sum of edge weights along the path (cut costs). A representation of the video segmentation is presented, for example, using interactive tiles or a video timeline that represent(s) the video segments in the segmentation.
    Type: Application
    Filed: June 8, 2022
    Publication date: September 22, 2022
    Inventors: Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popovic
  • Publication number: 20220076707
    Abstract: Embodiments are directed to a snap point segmentation that defines the locations of selection snap points for a selection of video segments. Candidate snap points are determined from boundaries of feature ranges of the video indicating when instances of detected features are present in the video. In some embodiments, candidate snap point separations are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation between consecutive snap points on a video timeline. The snap point segmentation is computed by solving a shortest path problem through a graph that models different snap point locations and separations. When a user clicks or taps on the video timeline and drags, a selection snaps to the snap points defined by the snap point segmentation. In some embodiments, the snap points are displayed during a drag operation and disappear when the drag operation is released.
    Type: Application
    Filed: May 26, 2021
    Publication date: March 10, 2022
    Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popovic, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
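The drag-to-snap behavior in this abstract reduces to finding the nearest snap point. A minimal sketch, assuming the snap point segmentation has already produced a sorted list of timeline positions:

```python
import bisect

def snap(position, snap_points):
    """Snap a drag position on the video timeline to the nearest
    snap point. `snap_points` must be sorted ascending."""
    i = bisect.bisect_left(snap_points, position)
    if i == 0:
        return snap_points[0]
    if i == len(snap_points):
        return snap_points[-1]
    before, after = snap_points[i - 1], snap_points[i]
    return before if position - before <= after - position else after
```

During a drag, the selection edge would be replaced by `snap(position, snap_points)`; when the drag is released, the snap point markers disappear as described.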
  • Publication number: 20220076025
    Abstract: Embodiments are directed to video segmentation based on a query. Initially, a first segmentation such as a default segmentation is displayed (e.g., as interactive tiles in a finder interface, as a video timeline in an editor interface), and the default segmentation is re-segmented in response to a user query. The query can take the form of a keyword and one or more selected facets in a category of detected features. Keywords are searched for detected transcript words, detected object or action tags, or detected audio event tags that match the keywords. Selected facets are searched for detected instances of the selected facets. Each video segment that matches the query is re-segmented by solving a shortest path problem through a graph that models different segmentation options.
    Type: Application
    Filed: May 26, 2021
    Publication date: March 10, 2022
    Inventors: Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popovic
  • Publication number: 20220076706
    Abstract: Embodiments are directed to interactive tiles that represent video segments of a segmentation of a video. In some embodiments, each interactive tile represents a different video segment from a particular video segmentation (e.g., a default video segmentation). Each interactive tile includes a thumbnail (e.g., the first frame of the video segment represented by the tile), some transcript from the beginning of the video segment, a visualization of detected faces in the video segment, and one or more faceted timelines that visualize a category of detected features (e.g., a visualization of detected visual scenes, audio classifications, visual artifacts). In some embodiments, interacting with a particular interactive tile navigates to a corresponding portion of the video, adds a corresponding video segment to a selection, and/or scrubs through tile thumbnails.
    Type: Application
    Filed: May 26, 2021
    Publication date: March 10, 2022
    Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popovic, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
  • Publication number: 20220076026
    Abstract: Embodiments are directed to a thumbnail segmentation that defines the locations on a video timeline where thumbnails are displayed. Candidate thumbnail locations are determined from boundaries of feature ranges of the video indicating when instances of detected features are present in the video. In some embodiments, candidate thumbnail separations are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation (e.g., the width of a thumbnail) between consecutive thumbnail locations on a video timeline. The thumbnail segmentation is computed by solving a shortest path problem through a graph that models different thumbnail locations and separations. As such, a video timeline is displayed with thumbnails at locations on the timeline defined by the thumbnail segmentation, with each thumbnail depicting a portion of the video associated with the thumbnail location.
    Type: Application
    Filed: May 26, 2021
    Publication date: March 10, 2022
    Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popovic, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
  • Publication number: 20220076424
    Abstract: Embodiments are directed to video segmentation based on detected video features. More specifically, a segmentation of a video is computed by determining candidate boundaries from detected feature boundaries from one or more feature tracks; modeling different segmentation options by constructing a graph with nodes that represent candidate boundaries, edges that represent candidate segments, and edge weights that represent cut costs; and computing the video segmentation by solving a shortest path problem to find the path through the edges (segmentation) that minimizes the sum of edge weights along the path (cut costs). A representation of the video segmentation is presented, for example, using interactive tiles or a video timeline that represent(s) the video segments in the segmentation.
    Type: Application
    Filed: May 26, 2021
    Publication date: March 10, 2022
    Inventors: Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popovic
  • Patent number: 11145333
    Abstract: Systems and methods provide for capturing and presenting content creation tools of an application used in a video. Application data from the application for the duration of the video is received. The application data includes data identifiers and time markers corresponding to user interaction with an application in a video. The application data is processed to detect tool identifiers identifying tools used in the video based on the data identifiers. For each tool identifier, a tool label and a corresponding time in the timeline are determined. A tool record storing the tool labels and the corresponding times in association with the video is generated. When a viewer requests to watch the video, the tool record is presented to the viewer in conjunction with the video.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: October 12, 2021
    Assignee: ADOBE INC.
    Inventors: William Hayes Allen, Lubomira Dontcheva, Haiqing Lu, Zachary Platt McCullough, David R. Stein, Christopher Nuuja, Benoit Ambry, Joel Richard Brandt, Cristin Ailidh Fraser, Joy Oakyung Kim, Hijung Shin
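The tool record generation described here can be sketched simply. This is an illustrative assumption, not the patented system: `application_data` is a hypothetical list of (data_identifier, time_marker) events captured from the application, and `tool_labels` maps known tool identifiers to human-readable labels:

```python
def build_tool_record(application_data, tool_labels):
    """Turn captured (data_identifier, time_marker) events into a tool
    record of (tool_label, time) pairs, in timeline order. Events whose
    identifier is not a known tool are skipped."""
    record = []
    for data_id, time in application_data:
        if data_id in tool_labels:
            record.append((tool_labels[data_id], time))
    record.sort(key=lambda e: e[1])   # present tools in timeline order
    return record
```

The resulting record could then be rendered alongside the video player so a viewer sees which tool was used at each point in the tutorial.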
  • Publication number: 20210142827
    Abstract: Systems and methods provide for capturing and presenting content creation tools of an application used in a video. Application data from the application for the duration of the video is received. The application data includes data identifiers and time markers corresponding to user interaction with an application in a video. The application data is processed to detect tool identifiers identifying tools used in the video based on the data identifiers. For each tool identifier, a tool label and a corresponding time in the timeline are determined. A tool record storing the tool labels and the corresponding times in association with the video is generated. When a viewer requests to watch the video, the tool record is presented to the viewer in conjunction with the video.
    Type: Application
    Filed: November 8, 2019
    Publication date: May 13, 2021
    Inventors: William Hayes ALLEN, Lubomira DONTCHEVA, Haiqing LU, Zachary Platt MCCULLOUGH, David R. STEIN, Christopher NUUJA, Benoit AMBRY, Joel Richard BRANDT, Cristin Ailidh FRASER, Joy Oakyung KIM, Hijung SHIN