Patents by Inventor Dingzeyu Li
Dingzeyu Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250139161
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for cutting down a user's larger input video into an edited video comprising the most important video segments and applying corresponding video effects. Some embodiments of the present invention are directed to adding captioning video effects to the trimmed video (e.g., applying face-aware and non-face-aware captioning to emphasize extracted video segment headings, important sentences, quotes, words of interest, extracted lists, etc.). For example, a prompt is provided to a generative language model to identify portions of a transcript (e.g., extracted scene summaries, important sentences, lists of items discussed in the video, etc.) to apply to corresponding video segments as captions depending on the type of caption (e.g., an extracted heading may be captioned at the start of a corresponding video segment, important sentences and/or extracted list items may be captioned when they are spoken).
Type: Application
Filed: February 2, 2024
Publication date: May 1, 2025
Inventors: Deepali ANEJA, Zeyu JIN, Hijung SHIN, Anh Lan TRUONG, Dingzeyu LI, Hanieh DEILAMSALEHY, Rubaiat HABIB, Matthew David FISHER, Kim Pascal PIMMEL, Wilmot LI, Lubomira Assenova DONTCHEVA
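The captioning flow this abstract describes can be pictured with a short sketch: a prompt asks a generative language model to label transcript portions (headings, important sentences, list items) with timestamps, and captions are then placed on the matching video segments. This is a minimal illustration only; the prompt wording, the JSON response format, and the injected `llm` callable are assumptions invented for the example, not taken from the patent.

```python
import json
from typing import Callable

PROMPT = (
    "For each part of the transcript below, return a JSON list of objects with "
    "fields 'kind' (heading | important_sentence | list_item), 'text', and "
    "'start_time' in seconds.\n\nTranscript:\n{transcript}"
)

def extract_captions(transcript: str, llm: Callable[[str], str]) -> list[dict]:
    """Ask a generative model which transcript portions to caption, and when."""
    raw = llm(PROMPT.format(transcript=transcript))
    captions = json.loads(raw)
    # Headings anchor to the start of their video segment; important sentences
    # and list items are shown at the moment they are spoken.
    return sorted(captions, key=lambda c: c["start_time"])
```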
-
Publication number: 20250140291
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for identifying the relevant segments that effectively summarize the larger input video and/or form a rough cut, and assembling them into one or more smaller trimmed videos. For example, visual scenes and corresponding scene captions are extracted from the input video and associated with an extracted diarized and timestamped transcript to generate an augmented transcript. The augmented transcript is applied to a large language model to extract sentences that characterize a trimmed version of the input video (e.g., a natural language summary, a representation of identified sentences from the transcript). As such, corresponding video segments are identified (e.g., using similarity to match each sentence in a generated summary with a corresponding transcript sentence) and assembled into one or more trimmed videos. In some embodiments, the trimmed video is generated based on a user's query and/or desired length.
Type: Application
Filed: February 2, 2024
Publication date: May 1, 2025
Inventors: Hanieh DEILAMSALEHY, Jui-Hsien WANG, Zhengyang MA, Dingzeyu LI, Hijung SHIN, Aseem Omprakash AGARWALA, Kim Pascal PIMMEL, Lubomira Assenova DONTCHEVA
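The sentence-matching step lends itself to a brief sketch: each sentence of the model-generated summary is matched to the most similar transcript sentence, whose timestamps then select a video segment for the trimmed cut. Below is a minimal version using cosine similarity over precomputed sentence embeddings; the embedding model and the helper name `match_summary_to_transcript` are assumptions, as the abstract only says "using similarity".

```python
import numpy as np

def match_summary_to_transcript(summary_vecs, transcript_vecs) -> list[int]:
    """Return, for each summary sentence, the index of the closest transcript sentence."""
    s = np.asarray(summary_vecs, dtype=float)
    t = np.asarray(transcript_vecs, dtype=float)
    s /= np.linalg.norm(s, axis=1, keepdims=True)
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    sims = s @ t.T                       # cosine similarity matrix (summary x transcript)
    return sims.argmax(axis=1).tolist()  # best transcript sentence per summary sentence
```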
-
Patent number: 12206930
Abstract: Embodiments of the present disclosure provide a method, a system, and computer storage media that provide mechanisms for multimedia effect addition and editing support for text-based video editing tools. The method includes generating a user interface (UI) displaying a transcript of an audio track of a video and receiving, via the UI, input identifying selection of a text segment from the transcript. The method also includes receiving, via the UI, input identifying selection of a particular type of text stylization or layout for application to the text segment. The method further includes identifying a video effect corresponding to the particular type of text stylization or layout, applying the video effect to a video segment corresponding to the text segment, and applying the particular type of text stylization or layout to the text segment to visually represent the video effect in the transcript.
Type: Grant
Filed: January 13, 2023
Date of Patent: January 21, 2025
Assignee: Adobe Inc.
Inventors: Kim Pascal Pimmel, Stephen Joseph Diverdi, Jiaju Ma, Rubaiat Habib, Li-Yi Wei, Hijung Shin, Deepali Aneja, John G. Nelson, Wilmot Li, Dingzeyu Li, Lubomira Assenova Dontcheva, Joel Richard Brandt
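The core mapping in this abstract, from a chosen text stylization in the transcript view to a video effect on the matching video segment, can be sketched as a lookup plus a time-range application step. The style names, effect names, and `TextSegment` type below are invented for illustration; the patent does not specify them.

```python
from dataclasses import dataclass

# Hypothetical mapping from text stylizations/layouts to video effects.
STYLE_TO_EFFECT = {
    "bold": "emphasis_zoom",
    "highlight": "color_pop",
    "heading_layout": "title_card",
}

@dataclass
class TextSegment:
    start: float  # seconds into the video where the spoken text begins
    end: float    # seconds where it ends
    text: str

def apply_stylization(segment: TextSegment, style: str) -> dict:
    """Resolve the video effect for a text stylization and target its time range."""
    effect = STYLE_TO_EFFECT[style]
    return {"effect": effect, "start": segment.start, "end": segment.end}
```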
-
Patent number: 12125317
Abstract: A method for detecting a cue (e.g., a visual cue or a visual cue combined with an audible cue) occurring together in an input video includes: presenting a user interface to record an example video of a user performing an act including the cue; determining a part of the example video where the cue occurs; applying a feature of the part to a neural network to generate a positive embedding; dividing the input video into a plurality of chunks and applying a feature of each chunk to the neural network to generate a plurality of negative embeddings; applying a feature of a given one of the chunks to the neural network to output a query embedding; and determining whether the cue occurs in the input video from the query embedding, the positive embedding, and the negative embeddings.
Type: Grant
Filed: December 1, 2021
Date of Patent: October 22, 2024
Assignee: Adobe Inc.
Inventors: Jiyoung Lee, Justin Jonathan Salamon, Dingzeyu Li
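The final determination step can be sketched as an embedding comparison: a chunk is judged to contain the cue when its query embedding is closer to the positive (example) embedding than to the typical negative (background) embedding. This decision rule is one plausible reading of the abstract, not the patent's exact test, and `margin` is an assumed hyperparameter.

```python
import numpy as np

def cue_present(query, positive, negatives, margin: float = 0.0) -> bool:
    """Decide whether a chunk's query embedding matches the user's example cue."""
    q = query / np.linalg.norm(query)
    p = positive / np.linalg.norm(positive)
    n = np.asarray(negatives, dtype=float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    pos_sim = float(q @ p)           # similarity to the recorded example
    neg_sim = float((n @ q).mean())  # average similarity to background chunks
    return pos_sim - neg_sim > margin
```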
-
Publication number: 20240244287
Abstract: Embodiments of the present disclosure provide a method, a system, and computer storage media that provide mechanisms for multimedia effect addition and editing support for text-based video editing tools. The method includes generating a user interface (UI) displaying a transcript of an audio track of a video and receiving, via the UI, input identifying selection of a text segment from the transcript. The method also includes receiving, via the UI, input identifying selection of a particular type of text stylization or layout for application to the text segment. The method further includes identifying a video effect corresponding to the particular type of text stylization or layout, applying the video effect to a video segment corresponding to the text segment, and applying the particular type of text stylization or layout to the text segment to visually represent the video effect in the transcript.
Type: Application
Filed: January 13, 2023
Publication date: July 18, 2024
Inventors: Kim Pascal PIMMEL, Stephen Joseph DIVERDI, Jiaju MA, Rubaiat HABIB, Li-Yi WEI, Hijung SHIN, Deepali ANEJA, John G. NELSON, Wilmot LI, Dingzeyu LI, Lubomira Assenova DONTCHEVA, Joel Richard BRANDT
-
Patent number: 12033669
Abstract: Embodiments are directed to a snap point segmentation that defines the locations of selection snap points for a selection of video segments. Candidate snap points are determined from boundaries of feature ranges of the video indicating when instances of detected features are present in the video. In some embodiments, candidate snap point separations are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation between consecutive snap points on a video timeline. The snap point segmentation is computed by solving a shortest path problem through a graph that models different snap point locations and separations. When a user clicks or taps on the video timeline and drags, a selection snaps to the snap points defined by the snap point segmentation. In some embodiments, the snap points are displayed during a drag operation and disappear when the drag operation is released.
Type: Grant
Filed: May 26, 2021
Date of Patent: July 9, 2024
Assignee: Adobe Inc.
Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
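The shortest-path formulation can be approximated with a small dynamic program: nodes are the time-ordered candidate snap points, skipping a candidate incurs a small cost, and placing two chosen points closer than the minimum separation incurs a large penalty. The cost values and the simplification of always keeping the last candidate are illustrative assumptions, not the patent's exact graph model.

```python
def choose_snap_points(candidates, min_sep, close_penalty=10.0, skip_cost=1.0):
    """candidates: sorted times in seconds; returns a low-cost subset of snap points."""
    n = len(candidates)
    best = [0.0] + [float("inf")] * n  # best[j]: min cost of a path ending at candidate j-1
    back = [None] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):  # i = 0 means candidate j-1 is the first chosen point
            too_close = i > 0 and candidates[j - 1] - candidates[i - 1] < min_sep
            gap_pen = close_penalty if too_close else 0.0
            cost = best[i] + gap_pen + skip_cost * (j - i - 1)  # pay for skipped candidates
            if cost < best[j]:
                best[j], back[j] = cost, i
    # Backtrack from the last candidate to recover the chosen snap points.
    path, j = [], n
    while j and back[j] is not None:
        path.append(candidates[j - 1])
        j = back[j]
    return path[::-1]
```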
-
Patent number: 12014548
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments.
Type: Grant
Filed: June 2, 2022
Date of Patent: June 18, 2024
Assignee: Adobe Inc.
Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
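Clip-atom construction can be sketched in a few lines: boundaries detected independently from the audio (speech) and the frames (scenes) are merged, near-duplicates are collapsed within a small tolerance, and the gaps between consecutive boundaries become the finest-level segments. The tolerance value and the helper name `clip_atoms` are assumptions for the sketch.

```python
def clip_atoms(speech_bounds, scene_bounds, duration, tol=0.1):
    """Return (start, end) pairs covering [0, duration] with no overlaps."""
    bounds = sorted(set([0.0, duration] + list(speech_bounds) + list(scene_bounds)))
    merged = [bounds[0]]
    for b in bounds[1:]:
        if b - merged[-1] > tol:  # collapse near-duplicate boundaries
            merged.append(b)
    merged[-1] = duration         # keep the final boundary pinned to the video end
    return list(zip(merged[:-1], merged[1:]))

# e.g., clip_atoms([1.2, 3.4], [3.5, 7.0], 10.0) merges 3.4/3.5 into one boundary
# and yields atoms (0, 1.2), (1.2, 3.4), (3.4, 7.0), (7.0, 10.0).
```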
-
Patent number: 11995894
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a metadata panel with a composite list of video metadata. The composite list is segmented into selectable metadata segments at locations corresponding to boundaries of video segments defined by a hierarchical segmentation. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. One or more metadata segments can be selected in various ways, such as by clicking or tapping on a metadata segment or by performing a metadata search. When a metadata segment is selected, a corresponding video segment is emphasized on the video timeline, a playback cursor is moved to the first video frame of the video segment, and the first video frame is presented.
Type: Grant
Filed: September 10, 2020
Date of Patent: May 28, 2024
Assignee: Adobe Inc.
Inventors: Seth Walker, Joy Oakyung Kim, Hijung Shin, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Xue Bai
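One way to picture the metadata-to-video mapping: if the composite list is pre-split at character offsets aligned with video-segment boundaries, resolving a selection to its video segment is a sorted-offset lookup. The offset representation and helper below are hypothetical; the patent does not describe this data structure.

```python
from bisect import bisect_right

def video_segment_for(selection_offset: int, segment_offsets: list[int]) -> int:
    """segment_offsets: sorted starting character offset of each metadata segment."""
    return bisect_right(segment_offsets, selection_offset) - 1

# e.g., with offsets [0, 120, 310], a click at offset 200 resolves to segment 1,
# so the timeline emphasizes segment 1 and the cursor jumps to its first frame.
```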
-
Publication number: 20240161335
Abstract: Embodiments are disclosed for generating a gesture reenactment video sequence corresponding to a target audio sequence using a trained network based on a video motion graph generated from a reference speech video. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a first input including a reference speech video and generating a video motion graph representing the reference speech video, where each node is associated with a frame of the reference video sequence and reference audio features of the reference audio sequence. The disclosed systems and methods further comprise receiving a second input including a target audio sequence, generating target audio features, identifying a node path through the video motion graph based on the target audio features and the reference audio features, and generating an output media sequence based on the identified node path through the video motion graph paired with the target audio sequence.
Type: Application
Filed: November 14, 2022
Publication date: May 16, 2024
Applicant: Adobe Inc.
Inventors: Yang ZHOU, Jimei YANG, Jun SAITO, Dingzeyu LI, Deepali ANEJA
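The node-path identification can be sketched with a simple walk: each node carries the audio feature of its source frame, and for each step of the target audio we move to the graph neighbor whose reference feature best matches the next target feature. A greedy walk is a stand-in here; the patent's actual path search is not specified in the abstract, and all names below are illustrative.

```python
import numpy as np

def greedy_node_path(target_feats, node_feats, neighbors, start=0):
    """target_feats: (T, d); node_feats: (N, d); neighbors[i]: successor node ids."""
    path = [start]
    for t in range(1, len(target_feats)):
        cands = neighbors[path[-1]]
        dists = [np.linalg.norm(node_feats[c] - target_feats[t]) for c in cands]
        path.append(cands[int(np.argmin(dists))])
    return path  # one reference frame per target audio frame
```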
-
Publication number: 20240134597
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for a question search for meaningful questions that appear in a video. In an example embodiment, an audio track from a video is transcribed, and the transcript is parsed to identify sentences that end with a question mark. Depending on the embodiment, one or more types of questions are filtered out, such as short questions less than a designated length or duration, logistical questions, and/or rhetorical questions. As such, in response to a command to perform a question search, the questions are identified, and search result tiles representing video segments of the questions are presented. Selecting (e.g., clicking or tapping on) a search result tile navigates a transcript interface to a corresponding portion of the transcript.
Type: Application
Filed: October 17, 2022
Publication date: April 25, 2024
Inventors: Lubomira Assenova DONTCHEVA, Anh Lan TRUONG, Hanieh DEILAMSALEHY, Kim Pascal PIMMEL, Aseem Omprakash AGARWALA, Dingzeyu LI, Joel Richard BRANDT, Joy Oakyung KIM
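The parsing and length-filtering steps described here reduce to a few lines: split the transcript into sentences, keep those ending in a question mark, and drop questions shorter than a minimum length. The logistical and rhetorical filters are omitted, and the word-count threshold is an assumed stand-in for the "designated length or duration".

```python
import re

def find_questions(transcript: str, min_words: int = 4) -> list[str]:
    """Return sentences from the transcript that look like meaningful questions."""
    sentences = re.split(r"(?<=[.!?])\s+", transcript)
    return [
        s.strip() for s in sentences
        if s.strip().endswith("?") and len(s.split()) >= min_words
    ]
```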
-
Publication number: 20240134909
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for a visual and text search interface used to navigate a video transcript. In an example embodiment, a freeform text query triggers a visual search for frames of a loaded video that match the freeform text query (e.g., frame embeddings that match a corresponding embedding of the freeform query), and triggers a text search for matching words from a corresponding transcript or from tags of detected features from the loaded video. Visual search results are displayed (e.g., in a row of tiles that can be scrolled to the left and right), and textual search results are displayed (e.g., in a row of tiles that can be scrolled up and down). Selecting (e.g., clicking or tapping on) a search result tile navigates a transcript interface to a corresponding portion of the transcript.
Type: Application
Filed: October 17, 2022
Publication date: April 25, 2024
Inventors: Lubomira Assenova DONTCHEVA, Dingzeyu LI, Kim Pascal PIMMEL, Hijung SHIN, Hanieh DEILAMSALEHY, Aseem Omprakash AGARWALA, Joy Oakyung KIM, Joel Richard BRANDT, Cristin Ailidh FRASER
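The two-pronged search can be sketched as one query driving both branches: a visual search over precomputed frame embeddings and a plain-text search over transcript words. The sketch assumes an embedding model that maps text and frames into a shared space (CLIP-style), which the abstract implies but does not name; all function and parameter names are illustrative.

```python
import numpy as np

def search(query_vec, frame_vecs, frame_times, transcript_words, query_text, k=5):
    """Return (visual_hits, text_hits) for one freeform query.

    frame_vecs: (N, d) frame embeddings; frame_times: timestamp per frame;
    transcript_words: list of (timestamp, word) pairs.
    """
    # Visual branch: top-k frames by cosine similarity to the query embedding.
    f = frame_vecs / np.linalg.norm(frame_vecs, axis=1, keepdims=True)
    q = query_vec / np.linalg.norm(query_vec)
    top = np.argsort(f @ q)[::-1][:k]
    visual_hits = [frame_times[i] for i in top]
    # Text branch: transcript words containing the query string.
    text_hits = [(t, w) for t, w in transcript_words if query_text.lower() in w.lower()]
    return visual_hits, text_hits
```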
-
Patent number: 11922695
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments.
Type: Grant
Filed: June 2, 2022
Date of Patent: March 5, 2024
Assignee: Adobe Inc.
Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
-
Patent number: 11908036
Abstract: The technology described herein is directed to a cross-domain training framework that iteratively trains a domain adaptive refinement agent to refine low quality real-world image acquisition data, e.g., depth maps, when accompanied by corresponding conditional data from other modalities, such as the underlying images or video from which the image acquisition data is computed. The cross-domain training framework includes a shared cross-domain encoder and two conditional decoder branch networks, e.g., a synthetic conditional depth prediction branch network and a real conditional depth prediction branch network. The shared cross-domain encoder converts synthetic and real-world image acquisition data into synthetic and real compact feature representations, respectively.
Type: Grant
Filed: September 28, 2020
Date of Patent: February 20, 2024
Assignee: Adobe Inc.
Inventors: Oliver Wang, Jianming Zhang, Dingzeyu Li, Zekun Hao
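The shared-encoder, two-branch layout can be rendered as a skeletal PyTorch module: one encoder compresses either domain's input (e.g., a depth map plus RGB conditioning) into a compact feature, and a synthetic or real decoder branch reconstructs refined depth. Channel counts, layer depths, and the routing by a `domain` string are placeholders, not the patent's architecture.

```python
import torch
import torch.nn as nn

class CrossDomainRefiner(nn.Module):
    def __init__(self, in_ch: int = 4, feat: int = 64):  # e.g., depth + RGB input
        super().__init__()
        # Shared encoder: both domains pass through the same weights.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )

        def make_branch() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
                nn.Conv2d(feat, 1, 3, padding=1),  # one-channel refined depth
            )

        self.synthetic_branch = make_branch()  # synthetic conditional decoder
        self.real_branch = make_branch()       # real conditional decoder

    def forward(self, x: torch.Tensor, domain: str) -> torch.Tensor:
        z = self.encoder(x)  # compact cross-domain feature representation
        branch = self.synthetic_branch if domain == "synthetic" else self.real_branch
        return branch(z)
```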
-
Patent number: 11899917
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a video timeline. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline. For example, a click or tap on a video segment or a drag operation dragging along the timeline snaps selection boundaries to corresponding segment boundaries defined by the level. Navigating to a different level of the hierarchy transforms the selection into coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
Type: Grant
Filed: October 19, 2022
Date of Patent: February 13, 2024
Assignee: Adobe Inc.
Inventors: Seth Walker, Joy O Kim, Aseem Agarwala, Joel Richard Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai
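The snapping behavior reduces to a one-line helper: during a drag, the raw cursor time is replaced by the nearest segment boundary at the currently active hierarchy level, so selections always align with whole segments. The helper name is an assumption for the sketch.

```python
def snap(cursor_time: float, boundaries: list[float]) -> float:
    """Return the segment boundary (from the active hierarchy level) nearest the cursor."""
    return min(boundaries, key=lambda b: abs(b - cursor_time))

# e.g., snap(4.7, [0.0, 2.1, 5.0, 9.3]) -> 5.0, so the selection edge lands on
# a clip-atom boundary rather than mid-segment.
```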
-
Patent number: 11893794
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments.
Type: Grant
Filed: June 2, 2022
Date of Patent: February 6, 2024
Assignee: Adobe Inc.
Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
-
Patent number: 11887371
Abstract: Embodiments are directed to a thumbnail segmentation that defines the locations on a video timeline where thumbnails are displayed. Candidate thumbnail locations are determined from boundaries of feature ranges of the video indicating when instances of detected features are present in the video. In some embodiments, candidate thumbnail separations are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation (e.g., the width of a thumbnail) between consecutive thumbnail locations on a video timeline. The thumbnail segmentation is computed by solving a shortest path problem through a graph that models different thumbnail locations and separations, in the same manner as the snap point segmentation sketched above. As such, a video timeline is displayed with thumbnails at locations on the timeline defined by the thumbnail segmentation, with each thumbnail depicting a portion of the video associated with the thumbnail location.
Type: Grant
Filed: May 26, 2021
Date of Patent: January 30, 2024
Assignee: Adobe Inc.
Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
-
Patent number: D1056928
Type: Grant
Filed: October 17, 2022
Date of Patent: January 7, 2025
Assignee: Adobe Inc.
Inventors: Kim Pascal Pimmel, Hijung Shin, Dingzeyu Li, Anh Lan Truong, Joy Oakyung Kim, Pankaj Kumar Nathani, Xuecong Xu, Cristin Ailidh Fraser, Kyratso George Karahalios, Zhengyang Ma, Joel Richard Brandt, Lubomira Assenova Dontcheva, Seth John Walker
-
Patent number: D1056942
Type: Grant
Filed: February 29, 2024
Date of Patent: January 7, 2025
Assignee: Adobe Inc.
Inventors: Kim Pascal Pimmel, Hijung Shin, Dingzeyu Li, Anh Lan Truong, Joy Oakyung Kim, Pankaj Kumar Nathani, Xuecong Xu, Cristin Ailidh Fraser, Kyratso George Karahalios, Zhengyang Ma, Joel Richard Brandt, Lubomira Assenova Dontcheva, Seth John Walker
-
Patent number: D1056943
Type: Grant
Filed: February 29, 2024
Date of Patent: January 7, 2025
Assignee: Adobe Inc.
Inventors: Kim Pascal Pimmel, Hijung Shin, Dingzeyu Li, Anh Lan Truong, Joy Oakyung Kim, Pankaj Kumar Nathani, Xuecong Xu, Cristin Ailidh Fraser, Kyratso George Karahalios, Zhengyang Ma, Joel Richard Brandt, Lubomira Assenova Dontcheva, Seth John Walker
-
Patent number: D1056944
Type: Grant
Filed: February 29, 2024
Date of Patent: January 7, 2025
Assignee: Adobe Inc.
Inventors: Kim Pascal Pimmel, Hijung Shin, Dingzeyu Li, Anh Lan Truong, Joy Oakyung Kim, Pankaj Kumar Nathani, Xuecong Xu, Cristin Ailidh Fraser, Kyratso George Karahalios, Zhengyang Ma, Joel Richard Brandt, Lubomira Assenova Dontcheva, Seth John Walker