Patents by Inventor Richard Brandt
Richard Brandt has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

Publication number: 20240134597
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for a question search for meaningful questions that appear in a video. In an example embodiment, an audio track from a video is transcribed, and the transcript is parsed to identify sentences that end with a question mark. Depending on the embodiment, one or more types of questions are filtered out, such as short questions less than a designated length or duration, logistical questions, and/or rhetorical questions. As such, in response to a command to perform a question search, the questions are identified, and search result tiles representing video segments of the questions are presented. Selecting (e.g., clicking or tapping on) a search result tile navigates a transcript interface to a corresponding portion of the transcript.
Type: Application
Filed: October 17, 2022
Publication date: April 25, 2024
Inventors: Lubomira Assenova DONTCHEVA, Anh Lan TRUONG, Hanieh DEILAMSALEHY, Kim Pascal PIMMEL, Aseem Omprakash AGARWALA, Dingzeyu Li, Joel Richard BRANDT, Joy Oakyung KIM

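The filtering pipeline described in this abstract can be sketched in a few lines. The logistical-question cues and the minimum-length threshold below are illustrative placeholders, not values from the application:

```python
import re

# Hypothetical logistical-question cues; the application does not enumerate them.
LOGISTICAL_CUES = ("can everyone hear", "is this on", "are we recording")

def find_questions(transcript: str, min_words: int = 4) -> list[str]:
    """Return sentences ending in '?' after dropping short and logistical ones."""
    # Split the transcript into sentences on terminal punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", transcript.strip())
    questions = []
    for s in sentences:
        if not s.endswith("?"):
            continue
        if len(s.split()) < min_words:  # drop short questions
            continue
        lowered = s.lower()
        if any(cue in lowered for cue in LOGISTICAL_CUES):  # drop logistical questions
            continue
        questions.append(s)
    return questions

transcript = ("Welcome back. Can everyone hear me? "
              "What makes a neural network generalize to unseen data? Right?")
print(find_questions(transcript))
```

Each surviving question would then be mapped back to its timestamps to build the search result tiles.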
Publication number: 20240134909
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for a visual and text search interface used to navigate a video transcript. In an example embodiment, a freeform text query triggers a visual search for frames of a loaded video that match the freeform text query (e.g., frame embeddings that match a corresponding embedding of the freeform query), and triggers a text search for matching words from a corresponding transcript or from tags of detected features from the loaded video. Visual search results are displayed (e.g., in a row of tiles that can be scrolled to the left and right), and textual search results are displayed (e.g., in a row of tiles that can be scrolled up and down). Selecting (e.g., clicking or tapping on) a search result tile navigates a transcript interface to a corresponding portion of the transcript.
Type: Application
Filed: October 17, 2022
Publication date: April 25, 2024
Inventors: Lubomira Assenova DONTCHEVA, Dingzeyu LI, Kim Pascal PIMMEL, Hijung SHIN, Hanieh DEILAMSALEHY, Aseem Omprakash AGARWALA, Joy Oakyung KIM, Joel Richard BRANDT, Cristin Ailidh Fraser

Publication number: 20240135973
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for identifying candidate boundaries for video segments, video segment selection using those boundaries, and text-based video editing of video segments selected via transcript interactions. In an example implementation, boundaries of detected sentences and words are extracted from a transcript, the boundaries are retimed into an adjacent speech gap to a location where voice or audio activity is a minimum, and the resulting boundaries are stored as candidate boundaries for video segments. As such, a transcript interface presents the transcript, interprets input selecting transcript text as an instruction to select a video segment with corresponding boundaries selected from the candidate boundaries, and interprets commands that are traditionally thought of as text-based operations (e.g., cut, copy, paste) as an instruction to perform a corresponding video editing operation using the selected video segment.
Type: Application
Filed: October 17, 2022
Publication date: April 25, 2024
Inventors: Xue BAI, Justin Jonathan SALAMON, Aseem Omprakash AGARWALA, Hijung SHIN, Haoran CAI, Joel Richard BRANDT, Lubomira Assenova DONTCHEVA, Cristin Ailidh Fraser

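The boundary-retiming step above — moving a candidate cut into the adjacent speech gap where voice activity is lowest — reduces to an argmin over activity samples. A minimal sketch, assuming activity is sampled at a fixed hop across the gap:

```python
def retime_boundary(gap_start: float, activity: list[float], hop: float = 0.02) -> float:
    """Given voice/audio activity sampled every `hop` seconds across a speech
    gap beginning at `gap_start`, return the time of minimum activity --
    the retimed candidate boundary."""
    i = min(range(len(activity)), key=activity.__getitem__)
    return gap_start + i * hop

# Activity dips at the third sample, so the boundary moves to 3.10 + 2 * 0.02 s.
print(retime_boundary(3.10, [0.8, 0.4, 0.05, 0.3, 0.7]))
```

Cutting inside the quietest part of the gap avoids clipping word onsets when the segment is later edited.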
Publication number: 20240126994
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for segmenting a transcript into paragraphs. In an example embodiment, a transcript is segmented to start a new paragraph whenever there is a change in speaker and/or a long pause in speech. If any remaining paragraphs are longer than a designated length or duration (e.g., 50 or 100 words), each of those paragraphs is segmented using dynamic programming to minimize a cost function that penalizes candidate paragraphs based on divergence from a target paragraph length and/or that rewards candidate paragraphs that group semantically similar sentences. As such, the transcript is visualized, segmented at the identified paragraphs.
Type: Application
Filed: October 17, 2022
Publication date: April 18, 2024
Inventors: Hanieh DEILAMSALEHY, Aseem Omprakash AGARWALA, Haoran CAI, Hijung SHIN, Joel Richard BRANDT, Lubomira Assenova DONTCHEVA

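The dynamic-programming step can be illustrated with a squared-divergence length penalty. The cost function below is an assumption for illustration; the application also mentions rewarding semantically similar groupings, which is omitted here for brevity:

```python
def segment_paragraphs(sent_lens: list[int], target: int = 40, max_len: int = 100) -> list[int]:
    """Segment a sequence of sentence word counts into paragraphs.
    Returns the sentence indices where each paragraph ends; the cost of a
    paragraph is its squared divergence from the target word count."""
    n = len(sent_lens)
    prefix = [0]
    for length in sent_lens:          # prefix sums of word counts
        prefix.append(prefix[-1] + length)
    INF = float("inf")
    best = [INF] * (n + 1)            # best[j]: min cost to segment sentences 0..j-1
    back = [0] * (n + 1)
    best[0] = 0.0
    for j in range(1, n + 1):
        for i in range(j):
            words = prefix[j] - prefix[i]
            if words > max_len:       # paragraph too long, not a candidate
                continue
            cost = best[i] + (words - target) ** 2
            if cost < best[j]:
                best[j], back[j] = cost, i
    cuts, j = [], n
    while j > 0:                      # recover the chosen paragraph ends
        cuts.append(j)
        j = back[j]
    return sorted(cuts)

# Eight 10-word sentences split cleanly into two 40-word paragraphs.
print(segment_paragraphs([10] * 8))
```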
Publication number: 20240127855
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for selection of the best image of a particular speaker's face in a video, and visualization in a diarized transcript. In an example embodiment, candidate images of a face of a detected speaker are extracted from frames of a video identified by a detected face track for the face, and a representative image of the detected speaker's face is selected from the candidate images based on image quality, facial emotion (e.g., using an emotion classifier that generates a happiness score), a size factor (e.g., favoring larger images), and/or penalizing images that appear towards the beginning or end of a face track. As such, each segment of the transcript is presented with the representative image of the speaker who spoke that segment and/or input is accepted changing the representative image associated with each speaker.
Type: Application
Filed: October 17, 2022
Publication date: April 18, 2024
Inventors: Lubomira Assenova DONTCHEVA, Xue BAI, Aseem Omprakash AGARWALA, Joel Richard BRANDT

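A rough sketch of the candidate-scoring idea. The weights, the edge-penalty factor, and the input features are illustrative assumptions; the application does not specify the weighting or the emotion classifier:

```python
def score_candidate(quality: float, happiness: float, area: float,
                    frame_idx: int, track_len: int,
                    w_q: float = 1.0, w_h: float = 0.5, w_s: float = 0.3,
                    edge_frac: float = 0.1) -> float:
    """Score one candidate face image: reward image quality, a happiness
    score from an emotion classifier, and larger face area; halve the score
    for frames near either end of the face track."""
    score = w_q * quality + w_h * happiness + w_s * area
    edge = edge_frac * track_len
    if frame_idx < edge or frame_idx > track_len - edge:
        score *= 0.5
    return score

def pick_representative(candidates: list[dict]) -> dict:
    """Return the candidate image with the highest score."""
    return max(candidates, key=lambda c: score_candidate(**c))

mid_track = {"quality": 0.9, "happiness": 0.8, "area": 0.5, "frame_idx": 50, "track_len": 100}
early_frame = {"quality": 0.95, "happiness": 0.9, "area": 0.6, "frame_idx": 2, "track_len": 100}
# The early frame scores higher on raw features but is penalized for its position.
print(pick_representative([mid_track, early_frame])["frame_idx"])
```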
Publication number: 20240127858
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for annotating transcript text with video metadata, and including thumbnail bars in the transcript to help users select a desired portion of a video through transcript interactions. In an example embodiment, a video editing interface includes a transcript interface that presents a transcript with transcript text that is annotated to indicate corresponding portions of the video where various features were detected (e.g., annotating via text stylization of transcript text and/or labeling the transcript text with a textual representation of a corresponding detected feature class). In some embodiments, the transcript interface displays a visual representation of detected non-speech audio or pauses (e.g., a sound bar) and/or video thumbnails corresponding to each line of transcript text (e.g., a thumbnail bar).
Type: Application
Filed: October 17, 2022
Publication date: April 18, 2024
Inventors: Lubomira Assenova DONTCHEVA, Hijung SHIN, Joel Richard BRANDT, Joy Oakyung KIM

Patent number: 11899917
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a video timeline. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline. For example, a click or tap on a video segment or a drag operation dragging along the timeline snaps selection boundaries to corresponding segment boundaries defined by the level. Navigating to a different level of the hierarchy transforms the selection into coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
Type: Grant
Filed: October 19, 2022
Date of Patent: February 13, 2024
Assignee: ADOBE INC.
Inventors: Seth Walker, Joy O Kim, Aseem Agarwala, Joel Richard Brandt, Jovan Popovic, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai

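The boundary-snapping behavior described above, independent of how the hierarchy itself is built, can be sketched as a nearest-boundary lookup against the segment boundaries of the active level:

```python
import bisect

def snap_selection(drag_start: float, drag_end: float,
                   boundaries: list[float]) -> tuple[float, float]:
    """Snap a dragged selection to the nearest segment boundaries at the
    current hierarchy level (`boundaries` must be sorted)."""
    def nearest(t: float) -> float:
        i = bisect.bisect_left(boundaries, t)
        if i == 0:
            return boundaries[0]
        if i == len(boundaries):
            return boundaries[-1]
        before, after = boundaries[i - 1], boundaries[i]
        return before if t - before <= after - t else after
    return nearest(drag_start), nearest(drag_end)

# Clip-atom boundaries (in seconds) at the finest level of the hierarchy:
print(snap_selection(1.3, 6.8, [0.0, 1.5, 4.2, 7.0, 9.9]))  # → (1.5, 7.0)
```

Switching hierarchy levels simply swaps in a coarser or finer `boundaries` list and re-snaps the same selection.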
Patent number: 11887371
Abstract: Embodiments are directed to a thumbnail segmentation that defines the locations on a video timeline where thumbnails are displayed. Candidate thumbnail locations are determined from boundaries of feature ranges of the video indicating when instances of detected features are present in the video. In some embodiments, candidate thumbnail separations are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation (e.g., the width of a thumbnail) between consecutive thumbnail locations on a video timeline. The thumbnail segmentation is computed by solving a shortest path problem through a graph that models different thumbnail locations and separations. As such, a video timeline is displayed with thumbnails at locations on the timeline defined by the thumbnail segmentation, with each thumbnail depicting a portion of the video associated with the thumbnail location.
Type: Grant
Filed: May 26, 2021
Date of Patent: January 30, 2024
Assignee: Adobe Inc.
Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li

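The shortest-path formulation can be illustrated with a toy cost model: an edge between candidate locations pays a large penalty when the gap is under the minimum separation, plus one unit per skipped candidate, so the cheapest path from the first to the last candidate keeps as many well-separated locations as possible. These costs are illustrative, not the patented ones:

```python
def thumbnail_locations(times: list[float], min_sep: float) -> list[float]:
    """Pick thumbnail locations from candidate times via a shortest-path DP
    from the first candidate to the last."""
    times = sorted(times)
    n = len(times)
    INF = float("inf")
    dist = [INF] * n   # cheapest cost to reach candidate i
    prev = [-1] * n
    dist[0] = 0.0
    for j in range(1, n):
        for i in range(j):
            gap = times[j] - times[i]
            penalty = 1e6 if gap < min_sep else 0.0   # too close on the timeline
            cost = dist[i] + penalty + (j - i - 1)    # skipped candidates
            if cost < dist[j]:
                dist[j], prev[j] = cost, i
    # Walk back from the last candidate to recover the chosen locations.
    path, j = [], n - 1
    while j != -1:
        path.append(times[j])
        j = prev[j]
    return path[::-1]

# The candidate at t=1.0 is dropped because it sits too close to t=0.0.
print(thumbnail_locations([0.0, 1.0, 4.0, 8.0], 3.0))  # → [0.0, 4.0, 8.0]
```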
Patent number: 11887629
Abstract: Embodiments are directed to interactive tiles that represent video segments of a segmentation of a video. In some embodiments, each interactive tile represents a different video segment from a particular video segmentation (e.g., a default video segmentation). Each interactive tile includes a thumbnail (e.g., the first frame of the video segment represented by the tile), some transcript from the beginning of the video segment, a visualization of detected faces in the video segment, and one or more faceted timelines that visualize a category of detected features (e.g., a visualization of detected visual scenes, audio classifications, visual artifacts). In some embodiments, interacting with a particular interactive tile navigates to a corresponding portion of the video, adds a corresponding video segment to a selection, and/or scrubs through tile thumbnails.
Type: Grant
Filed: May 26, 2021
Date of Patent: January 30, 2024
Assignee: Adobe Inc.
Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li

Publication number: 20230422436
Abstract: Disclosed is an immersion cooling unit including an immersion cell defining an internal cavity. An electrical component is positioned in the internal cavity. A dielectric working fluid partially fills the internal cavity and at least partially immerses the electrical component. A condensing coil is positioned above the dielectric working fluid. The dielectric working fluid comprises at least one of 1,1,1,2,2,5,5,6,6,6-decafluoro-3-hexene (HFO-153-10mczz) or 1,1,1,4,5,5,5-heptafluoro-4-trifluoromethyl-2-pentene (HFO-153-10mzzy).
Type: Application
Filed: August 30, 2023
Publication date: December 28, 2023
Applicant: THE CHEMOURS COMPANY FC, LLC
Inventors: JASON R. JUHASZ, DREW RICHARD BRANDT, LUKE DAVID SIMONI, JONATHAN P. STEHMAN, VIACHESLAV A. PETROV, GUSTAVO POTTKER

Patent number: 11810358
Abstract: Embodiments are directed to video segmentation based on a query. Initially, a first segmentation such as a default segmentation is displayed (e.g., as interactive tiles in a finder interface, as a video timeline in an editor interface), and the default segmentation is re-segmented in response to a user query. The query can take the form of a keyword and one or more selected facets in a category of detected features. Keywords are searched for detected transcript words, detected object or action tags, or detected audio event tags that match the keywords. Selected facets are searched for detected instances of the selected facets. Each video segment that matches the query is re-segmented by solving a shortest path problem through a graph that models different segmentation options.
Type: Grant
Filed: May 26, 2021
Date of Patent: November 7, 2023
Assignee: ADOBE INC.
Inventors: Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović

Patent number: 11765859
Abstract: Disclosed is an immersion cooling unit including an immersion cell defining an internal cavity. An electrical component is positioned in the internal cavity. A dielectric working fluid partially fills the internal cavity and at least partially immerses the electrical component. A condensing coil is positioned above the dielectric working fluid. The dielectric working fluid comprises at least one of 1,1,1,2,2,5,5,6,6,6-decafluoro-3-hexene (HFO-153-10mczz) or 1,1,1,4,5,5,5-heptafluoro-4-trifluoromethyl-2-pentene (HFO-153-10mzzy). Also disclosed is a method of cooling an electrical component, comprising partially immersing an electrical component in a working fluid; and transferring heat from the electrical component using the working fluid.
Type: Grant
Filed: September 29, 2022
Date of Patent: September 19, 2023
Assignee: THE CHEMOURS COMPANY FC, LLC
Inventors: Jason R. Juhasz, Drew Richard Brandt, Luke David Simoni, Jonathan P. Stehman, Viacheslav A. Petrov, Gustavo Pottker

Publication number: 20230112841
Abstract: Disclosed is an immersion cooling unit including an immersion cell defining an internal cavity. An electrical component is positioned in the internal cavity. A dielectric working fluid partially fills the internal cavity and at least partially immerses the electrical component. A condensing coil is positioned above the dielectric working fluid. The dielectric working fluid comprises at least one of 1,1,1,2,2,5,5,6,6,6-decafluoro-3-hexene (HFO-153-10mczz) or 1,1,1,4,5,5,5-heptafluoro-4-trifluoromethyl-2-pentene (HFO-153-10mzzy).
Type: Application
Filed: September 29, 2022
Publication date: April 13, 2023
Applicant: THE CHEMOURS COMPANY FC, LLC
Inventors: JASON R. JUHASZ, DREW RICHARD BRANDT, LUKE DAVID SIMONI, JONATHAN P. STEHMAN, VIACHESLAV A. PETROV, GUSTAVO POTTKER

Publication number: 20230043769
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a video timeline. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline. For example, a click or tap on a video segment or a drag operation dragging along the timeline snaps selection boundaries to corresponding segment boundaries defined by the level. Navigating to a different level of the hierarchy transforms the selection into coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
Type: Application
Filed: October 19, 2022
Publication date: February 9, 2023
Inventors: Seth WALKER, Joy O KIM, Aseem AGARWALA, Joel Richard Brandt, Jovan POPOVIC, Lubomira DONTCHEVA, Dingzeyu LI, Hijung SHIN, Xue Bai

Publication number: 20220388929
Abstract: A method of producing a fluoroolefin includes contacting a compound of formula (1), RfCX1=CHCF3, with a fluorinated ethylene compound of formula (2), CX2X3=CX4X5, in the presence of a catalyst. In the compound of formula (1), Rf is a linear or branched C1-C10 perfluorinated alkyl group and X1 is H, Br, Cl, or F. In the compound of formula (2), X2, X3, X4, and X5 are each independently H, Br, Cl, or F and at least three of X2, X3, X4, and X5 are F. The resulting composition comprises a compound of formula (3), Rf(CF2)nCX6=CH(CF2)mCX7X8CFX9X10. In the compound of formula (3), X6, X7, X8, X9, and X10 are each independently H, Br, Cl, or F, and the total number of each of H, Br, Cl, and F corresponds to the total number of each of H, Br, Cl, and F provided by the compounds of formulae (1) and (2).
Type: Application
Filed: December 9, 2020
Publication date: December 8, 2022
Applicant: THE CHEMOURS COMPANY FC, LLC
Inventors: VIACHESLAV A. PETROV, JASON R. JUHASZ, LUKE DAVID SIMONI, DREW RICHARD BRANDT, JONATHAN P. STEHMAN

Patent number: 11455731
Abstract: Embodiments are directed to video segmentation based on detected video features. More specifically, a segmentation of a video is computed by determining candidate boundaries from detected feature boundaries from one or more feature tracks; modeling different segmentation options by constructing a graph with nodes that represent candidate boundaries, edges that represent candidate segments, and edge weights that represent cut costs; and computing the video segmentation by solving a shortest path problem to find the path through the edges (segmentation) that minimizes the sum of edge weights along the path (cut costs). A representation of the video segmentation is presented, for example, using interactive tiles or a video timeline that represent(s) the video segments in the segmentation.
Type: Grant
Filed: May 26, 2021
Date of Patent: September 27, 2022
Assignee: Adobe Inc.
Inventors: Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović

Publication number: 20220301179
Abstract: Embodiments are directed to video segmentation based on detected video features. More specifically, a segmentation of a video is computed by determining candidate boundaries from detected feature boundaries from one or more feature tracks; modeling different segmentation options by constructing a graph with nodes that represent candidate boundaries, edges that represent candidate segments, and edge weights that represent cut costs; and computing the video segmentation by solving a shortest path problem to find the path through the edges (segmentation) that minimizes the sum of edge weights along the path (cut costs). A representation of the video segmentation is presented, for example, using interactive tiles or a video timeline that represent(s) the video segments in the segmentation.
Type: Application
Filed: June 8, 2022
Publication date: September 22, 2022
Inventors: Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popovic

Patent number: 11287339
Abstract: This patent describes devices and methods to evaluate and compare the effectiveness of protective equipment in providing protection to players of contact sports, and to determine if a given protective product (pad) is compliant with a specified performance standard. To simulate the impacts experienced by these players, a pad-protected, specially modified and instrumented manikin is impacted with solid loads of various weights at various speeds. The impacts are designed to model the impact forces and impact times encountered in typical game collisions. For each impact, measurements are made of the force exerted onto the pad, and the parts of this force that are transmitted through the pad onto various locations on the manikin, as a function of time.
Type: Grant
Filed: July 25, 2018
Date of Patent: March 29, 2022
Inventor: Richard A. Brandt

Publication number: 20220076707
Abstract: Embodiments are directed to a snap point segmentation that defines the locations of selection snap points for a selection of video segments. Candidate snap points are determined from boundaries of feature ranges of the video indicating when instances of detected features are present in the video. In some embodiments, candidate snap point separations are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation between consecutive snap points on a video timeline. The snap point segmentation is computed by solving a shortest path problem through a graph that models different snap point locations and separations. When a user clicks or taps on the video timeline and drags, a selection snaps to the snap points defined by the snap point segmentation. In some embodiments, the snap points are displayed during a drag operation and disappear when the drag operation is released.
Type: Application
Filed: May 26, 2021
Publication date: March 10, 2022
Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popovic, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li

Publication number: 20220076025
Abstract: Embodiments are directed to video segmentation based on a query. Initially, a first segmentation such as a default segmentation is displayed (e.g., as interactive tiles in a finder interface, as a video timeline in an editor interface), and the default segmentation is re-segmented in response to a user query. The query can take the form of a keyword and one or more selected facets in a category of detected features. Keywords are searched for detected transcript words, detected object or action tags, or detected audio event tags that match the keywords. Selected facets are searched for detected instances of the selected facets. Each video segment that matches the query is re-segmented by solving a shortest path problem through a graph that models different segmentation options.
Type: Application
Filed: May 26, 2021
Publication date: March 10, 2022
Inventors: Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popovic