Patents by Inventor Joel R. Brandt
Joel R. Brandt has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11922695
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e.
Type: Grant
Filed: June 2, 2022
Date of Patent: March 5, 2024
Assignee: Adobe Inc.
Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
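The ingestion flow this patent family describes lends itself to a short illustration. Below is a minimal sketch, assuming segments are simple (start, end) time ranges in seconds: detected speech and scene boundaries are unioned into clip-atom cuts, and adjacent atoms are greedily merged into coarser levels so that every level remains a complete, disjoint partition of the video. The function names, the min_gap deduplication, and the shortest-pair merge heuristic are illustrative assumptions, not the patented method.

```python
# Sketch of the described ingestion flow: boundary detection results are merged
# into clip atoms, then clustered into a multi-level hierarchy. Illustrative only.

def make_clip_atoms(speech_bounds, scene_bounds, duration, min_gap=0.25):
    """Union the boundary sets, drop near-duplicate cuts, return (start, end) atoms."""
    cuts = sorted(set(speech_bounds) | set(scene_bounds) | {0.0, duration})
    merged = [cuts[0]]
    for t in cuts[1:]:
        if t - merged[-1] >= min_gap:  # collapse boundaries closer than min_gap
            merged.append(t)
    return list(zip(merged, merged[1:]))

def cluster_level(segments, target_count):
    """Greedily merge the shortest adjacent pair until target_count segments remain."""
    segs = list(segments)
    while len(segs) > target_count:
        i = min(range(len(segs) - 1),
                key=lambda j: (segs[j][1] - segs[j][0]) + (segs[j + 1][1] - segs[j + 1][0]))
        segs[i:i + 2] = [(segs[i][0], segs[i + 1][1])]  # merge pair i, i+1
    return segs

def build_hierarchy(atoms, level_sizes):
    """Each level is a complete, disjoint partition covering the whole video."""
    return [cluster_level(atoms, n) for n in level_sizes]

# Example: boundaries in seconds for a 60-second video.
atoms = make_clip_atoms([3.1, 9.8, 21.0, 44.5], [10.0, 30.0], 60.0)
hierarchy = [atoms] + build_hierarchy(atoms, [4, 2])
for level, segs in enumerate(hierarchy):
    print(f"level {level}: {segs}")
```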
-
Patent number: 11893794
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e.
Type: Grant
Filed: June 2, 2022
Date of Patent: February 6, 2024
Assignee: Adobe Inc.
Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
-
Patent number: 11880408
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation by performing a metadata search. Generally, various types of metadata can be extracted from a video, such as a transcript of audio, keywords from the transcript, content or action tags visually extracted from video frames, and log event tags extracted from an associated temporal log. The extracted metadata is segmented into metadata segments and associated with corresponding video segments defined by a hierarchical video segmentation. As such, a metadata search can be performed to identify matching metadata segments and corresponding matching video segments defined by a particular level of the hierarchical segmentation. Matching metadata segments are emphasized in a composite list of the extracted metadata, and matching video segments are emphasized on the video timeline. Navigating to a different level of the hierarchy transforms the search results into corresponding coarser or finer segments defined by the level.
Type: Grant
Filed: September 10, 2020
Date of Patent: January 23, 2024
Assignee: Adobe Inc.
Inventors: Seth Walker, Joy Oakyung Kim, Morgan Nicole Evans, Najika Skyler Halsema Yoo, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai
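As a rough illustration of the search behavior this abstract describes, the sketch below assumes metadata segments are dicts with a text field and a (start, end) time range, and maps query hits onto whichever hierarchy level is active by simple interval overlap. The data shapes and the overlap rule are assumptions for illustration, not the claimed technique.

```python
# Sketch: a metadata search selects matching metadata segments, which are then
# mapped onto the video segments of one hierarchy level by time-range overlap.

def search_metadata(metadata_segments, query):
    """Return metadata segments whose text contains the query (case-insensitive)."""
    q = query.lower()
    return [m for m in metadata_segments if q in m["text"].lower()]

def overlapping_video_segments(level_segments, time_range):
    """Video segments of a level that overlap a (start, end) time range."""
    start, end = time_range
    return [(s, e) for s, e in level_segments if s < end and e > start]

metadata = [
    {"text": "intro and welcome", "range": (0.0, 8.0)},
    {"text": "demo of the editing tool", "range": (8.0, 25.0)},
    {"text": "questions from the audience", "range": (25.0, 60.0)},
]
coarse_level = [(0.0, 30.0), (30.0, 60.0)]
fine_level = [(0.0, 8.0), (8.0, 15.0), (15.0, 25.0), (25.0, 40.0), (40.0, 60.0)]

hits = search_metadata(metadata, "demo")
for hit in hits:
    print("match:", hit["text"])
    print("  coarse:", overlapping_video_segments(coarse_level, hit["range"]))
    print("  fine:  ", overlapping_video_segments(fine_level, hit["range"]))
```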
-
Patent number: 11875568
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e.
Type: Grant
Filed: June 2, 2022
Date of Patent: January 16, 2024
Assignee: Adobe Inc.
Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
-
Patent number: 11822602
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation by performing a metadata search. Generally, various types of metadata can be extracted from a video, such as a transcript of audio, keywords from the transcript, content or action tags visually extracted from video frames, and log event tags extracted from an associated temporal log. The extracted metadata is segmented into metadata segments and associated with corresponding video segments defined by a hierarchical video segmentation. As such, a metadata search can be performed to identify matching metadata segments and corresponding matching video segments defined by a particular level of the hierarchical segmentation. Matching metadata segments are emphasized in a composite list of the extracted metadata, and matching video segments are emphasized on the video timeline. Navigating to a different level of the hierarchy transforms the search results into corresponding coarser or finer segments defined by the level.
Type: Grant
Filed: September 10, 2020
Date of Patent: November 21, 2023
Assignee: Adobe Inc.
Inventors: Seth Walker, Joy Oakyung Kim, Morgan Nicole Evans, Najika Skyler Halsema Yoo, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai
-
Patent number: 11631434
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation. In some embodiments, the finest level of the hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms. Each level of the hierarchical segmentation clusters the clip atoms with a corresponding degree of granularity into a corresponding set of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline (e.g., clicks, drags), by performing a metadata search, or through selection of corresponding metadata segments from a metadata panel. Navigating to a different level of the hierarchy transforms the selection into corresponding coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
Type: Grant
Filed: September 10, 2020
Date of Patent: April 18, 2023
Assignee: Adobe Inc.
Inventors: Seth Walker, Joy Oakyung Kim, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai
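The level-navigation behavior described here (a selection transforming into coarser or finer segments) can be illustrated with a small sketch. It assumes a selection is a list of (start, end) segments and that the transformed selection is every segment of the target level overlapping the selection's time span; that covering rule is an assumption, not the claimed behavior.

```python
# Sketch: re-express a selection made at one hierarchy level as the set of
# segments at another level that cover the same span of the video.

def selection_span(selected_segments):
    """Collapse selected (start, end) segments into one covering span."""
    return (min(s for s, _ in selected_segments),
            max(e for _, e in selected_segments))

def transform_selection(selected_segments, target_level):
    """Return the target level's segments that overlap the selection span."""
    start, end = selection_span(selected_segments)
    return [(s, e) for s, e in target_level if s < end and e > start]

fine = [(0.0, 5.0), (5.0, 12.0), (12.0, 20.0), (20.0, 30.0)]
coarse = [(0.0, 12.0), (12.0, 30.0)]

selected = [(5.0, 12.0), (12.0, 20.0)]        # two fine segments selected
print(transform_selection(selected, coarse))  # -> [(0.0, 12.0), (12.0, 30.0)]
```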
-
Patent number: 11630562
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a video timeline. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline. For example, a click or tap on a video segment or a drag operation dragging along the timeline snaps selection boundaries to corresponding segment boundaries defined by the level. Navigating to a different level of the hierarchy transforms the selection into coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
Type: Grant
Filed: September 10, 2020
Date of Patent: April 18, 2023
Assignee: Adobe Inc.
Inventors: Seth Walker, Joy Oakyung Kim, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai
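The snapping interaction described in this abstract is easy to sketch. The toy function below assumes a drag yields raw (start, end) times and snaps the selection outward to the boundaries of every segment the drag touches; snapping outward to cover whole segments, rather than to the single nearest boundary, is an assumed policy.

```python
# Sketch: a drag along the timeline snaps to the segment boundaries of the
# active hierarchy level. level_segments is assumed sorted and disjoint.

def snap_selection(level_segments, drag_start, drag_end):
    """Snap a raw drag range to the boundaries of the segments it touches."""
    touched = [(s, e) for s, e in level_segments
               if s < drag_end and e > drag_start]
    if not touched:
        return None
    return (touched[0][0], touched[-1][1])

level = [(0.0, 4.0), (4.0, 11.0), (11.0, 18.0), (18.0, 25.0)]
print(snap_selection(level, 5.2, 12.9))  # -> (4.0, 18.0)
```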
-
Publication number: 20220301313
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e.
Type: Application
Filed: June 2, 2022
Publication date: September 22, 2022
Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
-
Patent number: 11450112
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e.
Type: Grant
Filed: September 10, 2020
Date of Patent: September 20, 2022
Assignee: Adobe Inc.
Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
-
Publication number: 20220292830
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e.
Type: Application
Filed: June 2, 2022
Publication date: September 15, 2022
Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
-
Publication number: 20220292831
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e.
Type: Application
Filed: June 2, 2022
Publication date: September 15, 2022
Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
-
Publication number: 20220076024
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a metadata panel with a composite list of video metadata. The composite list is segmented into selectable metadata segments at locations corresponding to boundaries of video segments defined by a hierarchical segmentation. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. One or more metadata segments can be selected in various ways, such as by clicking or tapping on a metadata segment or by performing a metadata search. When a metadata segment is selected, a corresponding video segment is emphasized on the video timeline, a playback cursor is moved to the first video frame of the video segment, and the first video frame is presented.
Type: Application
Filed: September 10, 2020
Publication date: March 10, 2022
Inventors: Seth Walker, Joy Oakyung Kim, Hijung Shin, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Xue Bai
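To illustrate the panel interaction this application describes, here is a hedged sketch in which selecting a metadata segment emphasizes the first overlapping video segment and moves a playback cursor to that segment's first frame. The Player class, its fields, and the frame-rate conversion are hypothetical stand-ins, not the application's implementation.

```python
# Sketch: clicking a metadata segment selects the corresponding video segment
# and seeks playback to its first frame. All names here are illustrative.

class Player:
    def __init__(self, fps=30.0):
        self.fps = fps
        self.cursor_frame = 0
        self.emphasized = None

    def select_metadata_segment(self, metadata_segment, level_segments):
        """Map a metadata segment's time range to a video segment and seek to it."""
        start, end = metadata_segment["range"]
        for s, e in level_segments:
            if s < end and e > start:                  # first overlapping segment
                self.emphasized = (s, e)
                self.cursor_frame = int(s * self.fps)  # first frame of the segment
                break

player = Player()
player.select_metadata_segment({"text": "demo", "range": (8.0, 25.0)},
                               [(0.0, 8.0), (8.0, 25.0), (25.0, 60.0)])
print(player.emphasized, player.cursor_frame)  # (8.0, 25.0) 240
```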
-
Publication number: 20220075820
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation by performing a metadata search. Generally, various types of metadata can be extracted from a video, such as a transcript of audio, keywords from the transcript, content or action tags visually extracted from video frames, and log event tags extracted from an associated temporal log. The extracted metadata is segmented into metadata segments and associated with corresponding video segments defined by a hierarchical video segmentation. As such, a metadata search can be performed to identify matching metadata segments and corresponding matching video segments defined by a particular level of the hierarchical segmentation. Matching metadata segments are emphasized in a composite list of the extracted metadata, and matching video segments are emphasized on the video timeline. Navigating to a different level of the hierarchy transforms the search results into corresponding coarser or finer segments defined by the level.
Type: Application
Filed: September 10, 2020
Publication date: March 10, 2022
Inventors: Seth Walker, Joy Oakyung Kim, Morgan Nicole Evans, Najika Skyler Halsema Yoo, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai
-
Publication number: 20220076023
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e.
Type: Application
Filed: September 10, 2020
Publication date: March 10, 2022
Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
-
Publication number: 20220075513
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a video timeline. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline. For example, a click or tap on a video segment or a drag operation dragging along the timeline snaps selection boundaries to corresponding segment boundaries defined by the level. Navigating to a different level of the hierarchy transforms the selection into coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
Type: Application
Filed: September 10, 2020
Publication date: March 10, 2022
Inventors: Seth Walker, Joy Oakyung Kim, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai
-
Publication number: 20220076705
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation. In some embodiments, the finest level of the hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms. Each level of the hierarchical segmentation clusters the clip atoms with a corresponding degree of granularity into a corresponding set of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline (e.g., clicks, drags), by performing a metadata search, or through selection of corresponding metadata segments from a metadata panel. Navigating to a different level of the hierarchy transforms the selection into corresponding coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
Type: Application
Filed: September 10, 2020
Publication date: March 10, 2022
Inventors: Seth Walker, Joy Oakyung Kim, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai
-
Patent number: 10908764
Abstract: Inter-context coordination to facilitate synchronized presentation of image content is described. In example embodiments, an application includes multiple execution contexts that coordinate handling user interaction with a coordination policy established using an inter-context communication mechanism. The application produces first and second execution contexts that are responsible for user interaction with first and second image content, respectively. Generally, the second execution context provides a stipulation for the coordination policy to indicate which execution context is to handle a response to a given user input event. With an event routing policy, an event routing rule informs the first execution context if a user input event should be routed to the second execution context.
Type: Grant
Filed: August 22, 2018
Date of Patent: February 2, 2021
Assignee: Adobe Inc.
Inventors: Ian A. Wehrman, John N. Fitzgerald, Joel R. Brandt, Jesper Storm Bache, David A. Tristram, Barkin Aygun
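The event-routing idea in this patent family can be sketched compactly: the second execution context supplies a stipulation (modeled here as a predicate over events) to the routing policy, and each user input event is dispatched to whichever context the rule selects. All class and field names below are illustrative assumptions, not Adobe's implementation.

```python
# Sketch: two execution contexts share a routing policy; the second context's
# stipulation decides which context handles each user input event.

class ExecutionContext:
    def __init__(self, name):
        self.name = name

    def handle(self, event):
        print(f"{self.name} handled {event['type']} at {event['pos']}")

class RoutingPolicy:
    """Routes an event to the second context when its stipulation matches."""
    def __init__(self, first, second, stipulation):
        self.first, self.second = first, second
        self.stipulation = stipulation  # predicate supplied by the second context

    def route(self, event):
        target = self.second if self.stipulation(event) else self.first
        target.handle(event)

host = ExecutionContext("host-context")
surface = ExecutionContext("image-surface-context")
# Stipulation: the surface context claims events landing inside its bounds.
policy = RoutingPolicy(host, surface,
                       lambda e: 0 <= e["pos"][0] < 800 and 0 <= e["pos"][1] < 600)

policy.route({"type": "pointerdown", "pos": (120, 80)})  # -> image-surface-context
policy.route({"type": "pointerdown", "pos": (900, 80)})  # -> host-context
```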
-
Publication number: 20180364873
Abstract: Inter-context coordination to facilitate synchronized presentation of image content is described. In example embodiments, an application includes multiple execution contexts that coordinate handling user interaction with a coordination policy established using an inter-context communication mechanism. The application produces first and second execution contexts that are responsible for user interaction with first and second image content, respectively. Generally, the second execution context provides a stipulation for the coordination policy to indicate which execution context is to handle a response to a given user input event. With an event routing policy, an event routing rule informs the first execution context if a user input event should be routed to the second execution context.
Type: Application
Filed: August 22, 2018
Publication date: December 20, 2018
Applicant: Adobe Systems Incorporated
Inventors: Ian A. Wehrman, John N. Fitzgerald, Joel R. Brandt, Jesper Storm Bache, David A. Tristram, Barkin Aygun
-
Patent number: 10073583
Abstract: Inter-context coordination to facilitate synchronized presentation of image content is described. In example embodiments, an application includes multiple execution contexts that coordinate handling user interaction with a coordination policy established using an inter-context communication mechanism. The application produces first and second execution contexts that are responsible for user interaction with first and second image content, respectively. Generally, the second execution context provides a stipulation for the coordination policy to indicate which execution context is to handle a response to a given user input event. With an event routing policy, an event routing rule informs the first execution context if a user input event should be routed to the second execution context.
Type: Grant
Filed: October 8, 2015
Date of Patent: September 11, 2018
Assignee: Adobe Systems Incorporated
Inventors: Ian A. Wehrman, John N. Fitzgerald, Joel R. Brandt, Jesper Storm Bache, David A. Tristram, Barkin Aygun
-
Publication number: 20170102830
Abstract: Inter-context coordination to facilitate synchronized presentation of image content is described. In example embodiments, an application includes multiple execution contexts that coordinate handling user interaction with a coordination policy established using an inter-context communication mechanism. The application produces first and second execution contexts that are responsible for user interaction with first and second image content, respectively. Generally, the second execution context provides a stipulation for the coordination policy to indicate which execution context is to handle a response to a given user input event. With an event routing policy, an event routing rule informs the first execution context if a user input event should be routed to the second execution context.
Type: Application
Filed: October 8, 2015
Publication date: April 13, 2017
Inventors: Ian A. Wehrman, John N. Fitzgerald, Joel R. Brandt, Jesper Storm Bache, David A. Tristram, Barkin Aygun