Patents by Inventor Kim Pascal Pimmel

Kim Pascal Pimmel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250139161
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for cutting down a user's larger input video into an edited video comprising the most important video segments and applying corresponding video effects. Some embodiments of the present invention are directed to adding captioning video effects to the trimmed video (e.g., applying face-aware and non-face-aware captioning to emphasize extracted video segment headings, important sentences, quotes, words of interest, extracted lists, etc.). For example, a prompt is provided to a generative language model to identify portions of a transcript (e.g., extracted scene summaries, important sentences, lists of items discussed in the video, etc.) to apply to corresponding video segments as captions depending on the type of caption (e.g., an extracted heading may be captioned at the start of a corresponding video segment, important sentences and/or extracted list items may be captioned when they are spoken).
    Type: Application
    Filed: February 2, 2024
    Publication date: May 1, 2025
    Inventors: Deepali ANEJA, Zeyu JIN, Hijung SHIN, Anh Lan TRUONG, Dingzeyu LI, Hanieh DEILAMSALEHY, Rubaiat HABIB, Matthew David FISHER, Kim Pascal PIMMEL, Wilmot LI, Lubomira Assenova DONTCHEVA
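A minimal sketch of the prompt-then-map flow this abstract describes: a timestamped transcript is handed to a generative language model, and each extracted caption is attached to the video segment it came from, with headings anchored at segment start. The `generate` function, its return shape, and the prompt wording are hypothetical placeholders, not the filing's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float   # seconds
    end: float
    text: str      # transcript text spoken during this video segment

def generate(prompt: str) -> list[dict]:
    """Hypothetical stand-in for a generative language model call.

    Assumed to return items shaped like
    {"kind": "heading" | "sentence", "text": "...",
     "source": "<verbatim transcript sentence the caption came from>"}.
    """
    raise NotImplementedError

def plan_captions(segments: list[Segment]) -> list[dict]:
    transcript = "\n".join(f"[{s.start:.1f}-{s.end:.1f}] {s.text}" for s in segments)
    prompt = ("From the transcript below, extract segment headings, important "
              "sentences, and list items worth showing as on-screen captions:\n"
              + transcript)
    plan = []
    for item in generate(prompt):
        # Attach each caption to the segment whose transcript contains it.
        seg = next((s for s in segments if item["source"] in s.text), None)
        if seg is None:
            continue
        # Headings are anchored to the start of their segment; important
        # sentences and list items appear while they are being spoken.
        show_at = seg.start if item["kind"] == "heading" else None
        plan.append({"text": item["text"], "segment": seg, "show_at": show_at})
    return plan
```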
  • Publication number: 20250140291
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for identifying the relevant segments that effectively summarize the larger input video and/or form a rough cut, and assembling them into one or more smaller trimmed videos. For example, visual scenes and corresponding scene captions are extracted from the input video and associated with an extracted diarized and timestamped transcript to generate an augmented transcript. The augmented transcript is applied to a large language model to extract sentences that characterize a trimmed version of the input video (e.g., a natural language summary, a representation of identified sentences from the transcript). As such, corresponding video segments are identified (e.g., using similarity to match each sentence in a generated summary with a corresponding transcript sentence) and assembled into one or more trimmed videos. In some embodiments, the trimmed video is generated based on a user's query and/or desired length.
    Type: Application
    Filed: February 2, 2024
    Publication date: May 1, 2025
    Inventors: Hanieh DEILAMSALEHY, Jui-Hsien WANG, Zhengyang MA, Dingzeyu LI, Hijung SHIN, Aseem Omprakash AGARWALA, Kim Pascal PIMMEL, Lubomira Assenova DONTCHEVA
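A rough sketch of the similarity-matching step described in this abstract: each sentence of a model-generated summary is paired with the most similar transcript sentence, and those sentences' time ranges become the trimmed video. The `embed` encoder is an assumed placeholder; any sentence-embedding model could stand in.

```python
import numpy as np

def embed(sentence: str) -> np.ndarray:
    """Hypothetical sentence encoder; returns a fixed-length vector."""
    raise NotImplementedError

def select_segments(summary_sentences, transcript_sentences):
    """Match summary sentences to transcript sentences by cosine similarity.

    transcript_sentences: list of (text, start_sec, end_sec).
    Returns the matched time ranges in playback order, ready to be
    assembled into a trimmed video.
    """
    t_vecs = np.stack([embed(text) for text, _, _ in transcript_sentences])
    t_vecs /= np.linalg.norm(t_vecs, axis=1, keepdims=True)
    ranges = set()
    for sentence in summary_sentences:
        q = embed(sentence)
        q /= np.linalg.norm(q)
        best = int(np.argmax(t_vecs @ q))          # highest cosine similarity
        _, start, end = transcript_sentences[best]
        ranges.add((start, end))
    return sorted(ranges)
```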
  • Publication number: 20250140292
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for cutting down a user's larger input video into an edited video comprising the most important video segments and applying corresponding video effects. Some embodiments of the present invention are directed to adding face-aware scale magnification to the trimmed video (e.g., applying scale magnification to simulate a camera zoom effect that hides shot cuts with respect to the subject's face). For example, as the trimmed video transitions from one video segment to the next video segment, a scale magnification may be applied that zooms in on a detected face at a boundary between the video segments to smooth the transition between video segments.
    Type: Application
    Filed: February 2, 2024
    Publication date: May 1, 2025
    Inventors: Anh Lan TRUONG, Deepali ANEJA, Hijung SHIN, Rubaiat HABIB, Jakub FISER, Kishore RADHAKRISHNA, Joel Richard BRANDT, Matthew David FISHER, Zeyu JIN, Kim Pascal PIMMEL, Wilmot LI, Lubomira Assenova DONTCHEVA
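A small sketch of the face-aware zoom described in this entry, assuming a face has already been detected at the boundary frame: magnification ramps up approaching the cut and back down after it, scaling about the face center rather than the frame center. The ramp length and zoom amount are illustrative values, not parameters from the filing.

```python
def face_aware_zoom(cut_time, face_center, frame_center, ramp=0.5, max_zoom=1.2):
    """Return a function mapping a timestamp to (scale, zoom_center).

    cut_time: time of the boundary between two video segments (seconds).
    face_center: (x, y) of the detected face at the boundary frame.
    Within `ramp` seconds of the cut the frame is scaled toward `max_zoom`
    about the face, which visually smooths the shot change; outside that
    window no magnification is applied.
    """
    def params(t):
        d = abs(t - cut_time)
        if d >= ramp:
            return 1.0, frame_center
        scale = 1.0 + (max_zoom - 1.0) * (1.0 - d / ramp)
        return scale, face_center
    return params
```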
  • Patent number: 12206930
    Abstract: Embodiments of the present disclosure provide a method, a system, and computer storage media that provide mechanisms for multimedia effect addition and editing support for text-based video editing tools. The method includes generating a user interface (UI) displaying a transcript of an audio track of a video and receiving, via the UI, input identifying selection of a text segment from the transcript. The method also includes receiving, via the UI, input identifying selection of a particular type of text stylization or layout for application to the text segment. The method further includes identifying a video effect corresponding to the particular type of text stylization or layout, applying the video effect to a video segment corresponding to the text segment, and applying the particular type of text stylization or layout to the text segment to visually represent the video effect in the transcript.
    Type: Grant
    Filed: January 13, 2023
    Date of Patent: January 21, 2025
    Assignee: Adobe Inc.
    Inventors: Kim Pascal Pimmel, Stephen Joseph Diverdi, Jiaju MA, Rubaiat Habib, Li-Yi Wei, Hijung Shin, Deepali Aneja, John G. Nelson, Wilmot Li, Dingzeyu Li, Lubomira Assenova Dontcheva, Joel Richard Brandt
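A minimal sketch of the text-to-effect mapping this patent describes, assuming a word-aligned transcript: the selected text segment yields a time range, the chosen stylization is looked up to find a video effect, and both are recorded against the clip. The `STYLE_TO_EFFECT` table and the data classes are illustrative, not the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Word:
    text: str
    start: float     # seconds into the video
    end: float

@dataclass
class Clip:
    words: list[Word]                                   # word-aligned transcript
    effects: list[tuple[str, float, float]] = field(default_factory=list)

# Assumed lookup from a text stylization chosen in the transcript UI to the
# video effect applied to the corresponding video segment.
STYLE_TO_EFFECT = {"bold-title": "title-card", "highlight": "punch-in-zoom"}

def apply_stylization(clip: Clip, first_word: int, last_word: int, style: str):
    """Apply the effect implied by `style` to the selected word range."""
    words = clip.words[first_word:last_word + 1]
    start, end = words[0].start, words[-1].end
    clip.effects.append((STYLE_TO_EFFECT[style], start, end))
    # The same word range would also be rendered with `style` in the
    # transcript so the effect is visible in the text itself.
    return start, end
```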
  • Publication number: 20240244287
    Abstract: Embodiments of the present disclosure provide a method, a system, and computer storage media that provide mechanisms for multimedia effect addition and editing support for text-based video editing tools. The method includes generating a user interface (UI) displaying a transcript of an audio track of a video and receiving, via the UI, input identifying selection of a text segment from the transcript. The method also includes receiving, via the UI, input identifying selection of a particular type of text stylization or layout for application to the text segment. The method further includes identifying a video effect corresponding to the particular type of text stylization or layout, applying the video effect to a video segment corresponding to the text segment, and applying the particular type of text stylization or layout to the text segment to visually represent the video effect in the transcript.
    Type: Application
    Filed: January 13, 2023
    Publication date: July 18, 2024
    Inventors: Kim Pascal PIMMEL, Stephen Joseph DIVERDI, Jiaju MA, Rubaiat HABIB, Li-Yi WEI, Hijung SHIN, Deepali ANEJA, John G. NELSON, Wilmot LI, Dingzeyu LI, Lubomira Assenova DONTCHEVA, Joel Richard BRANDT
  • Publication number: 20240233769
    Abstract: Embodiments of the present disclosure provide systems, methods, and computer storage media providing visualizations and mechanisms used when performing video edits with wrapped timelines (e.g., effect bars/effect tracks) interspersed between text lines to represent video effects applied to text segments in a transcript. An example embodiment generates a transcript from the audio track of a video. A transcript interface presents the transcript and accepts an input selecting sentences or words from the transcript. The boundaries identified for the selected text segment are used as the boundaries of the selected video segment. Using the selected text segment, a user selects a video effect to apply to the corresponding video segment, and within the transcript interface a wrapped timeline is placed along the selected text segment to indicate that the video effect is applied to the corresponding video segment.
    Type: Application
    Filed: January 10, 2023
    Publication date: July 11, 2024
    Inventors: David Tamas Kutas, Lubomira Assenova Dontcheva, Kim Pascal Pimmel, Hijung Shin
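A sketch of how a wrapped timeline might be computed, assuming the transcript view already knows which words sit on which displayed line: the effect's time range is intersected with each line, and the UI draws one bar segment per covered line, so a single effect reads as a timeline wrapped across the text. The data layout is an assumption for illustration.

```python
def wrapped_timeline(lines, effect_start, effect_end):
    """Per-line spans for an effect bar drawn beneath transcript lines.

    lines: list of displayed lines, each a list of (word, start_sec, end_sec)
    in reading order. Returns one entry per line: the (first, last) word
    indices touched by the effect, or None if the effect skips that line.
    """
    spans = []
    for line in lines:
        covered = [i for i, (_, start, end) in enumerate(line)
                   if start < effect_end and end > effect_start]
        spans.append((covered[0], covered[-1]) if covered else None)
    return spans
```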
  • Publication number: 20240134597
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for a question search for meaningful questions that appear in a video. In an example embodiment, an audio track from a video is transcribed, and the transcript is parsed to identify sentences that end with a question mark. Depending on the embodiment, one or more types of questions are filtered out, such as short questions less than a designated length or duration, logistical questions, and/or rhetorical questions. As such, in response to a command to perform a question search, the questions are identified, and search result tiles representing video segments of the questions are presented. Selecting (e.g., clicking or tapping on) a search result tile navigates a transcript interface to a corresponding portion of the transcript.
    Type: Application
    Filed: October 17, 2022
    Publication date: April 25, 2024
    Inventors: Lubomira Assenova DONTCHEVA, Anh Lan TRUONG, Hanieh DEILAMSALEHY, Kim Pascal PIMMEL, Aseem Omprakash AGARWALA, Dingzeyu Li, Joel Richard BRANDT, Joy Oakyung KIM
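A compact sketch of the question search described in this entry, assuming a sentence-level, timestamped transcript: sentences ending in a question mark are kept, very short ones are dropped, and a keyword filter stands in for the logistical/rhetorical filtering (which could equally be a learned classifier). The patterns and threshold are illustrative only.

```python
import re

# Illustrative patterns for logistical questions; a real filter might be a
# trained classifier rather than a keyword list.
LOGISTICAL = re.compile(r"\b(can you hear me|is this working|any questions)\b", re.I)

def find_questions(sentences, min_words=4):
    """Return candidate questions from a list of (text, start_sec, end_sec)."""
    hits = []
    for text, start, end in sentences:
        text = text.strip()
        if not text.endswith("?"):
            continue                       # only sentences ending with "?"
        if len(text.split()) < min_words:
            continue                       # drop very short questions
        if LOGISTICAL.search(text):
            continue                       # drop logistical questions
        hits.append({"question": text, "start": start, "end": end})
    return hits
```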
  • Publication number: 20240134909
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for a visual and text search interface used to navigate a video transcript. In an example embodiment, a freeform text query triggers a visual search for frames of a loaded video that match the freeform text query (e.g., frame embeddings that match a corresponding embedding of the freeform query), and triggers a text search for matching words from a corresponding transcript or from tags of detected features from the loaded video. Visual search results are displayed (e.g., in a row of tiles that can be scrolled to the left and right), and textual search results are displayed (e.g., in a row of tiles that can be scrolled up and down). Selecting (e.g., clicking or tapping on) a search result tile navigates a transcript interface to a corresponding portion of the transcript.
    Type: Application
    Filed: October 17, 2022
    Publication date: April 25, 2024
    Inventors: Lubomira Assenova DONTCHEVA, Dingzeyu LI, Kim Pascal PIMMEL, Hijung SHIN, Hanieh DEILAMSALEHY, Aseem Omprakash AGARWALA, Joy Oakyung KIM, Joel Richard BRANDT, Cristin Ailidh Fraser
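A sketch of the dual search described above, assuming frame embeddings have been precomputed and L2-normalized and that a hypothetical `embed_text` maps the freeform query into the same space (as a CLIP-style encoder would): visual results are the top-scoring frames, and textual results are transcript words containing the query, each expressed as timestamps the transcript interface can scroll to.

```python
import numpy as np

def embed_text(query: str) -> np.ndarray:
    """Hypothetical text encoder sharing an embedding space with video frames."""
    raise NotImplementedError

def search(query, frame_embeddings, frame_times, transcript_words, top_k=5):
    """Run a combined visual and text search over a loaded video.

    frame_embeddings: (N, D) array of L2-normalized frame embeddings.
    frame_times: timestamp (seconds) of each embedded frame.
    transcript_words: list of (word, start_sec).
    Returns (visual_hits, text_hits) as timestamps for result tiles.
    """
    q = embed_text(query)
    q = q / np.linalg.norm(q)
    scores = frame_embeddings @ q
    top = np.argsort(scores)[::-1][:top_k]
    visual_hits = [(float(frame_times[i]), float(scores[i])) for i in top]

    needle = query.lower()
    text_hits = [start for word, start in transcript_words if needle in word.lower()]
    return visual_hits, text_hits
```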
  • Patent number: 10248192
    Abstract: A gaze target is recognized via an eye tracking camera. An application launcher is displayed, via a display, at the gaze target based on a user trigger. The application launcher presents a plurality of applications selectable for launching. A user selection of one of the plurality of applications is recognized. The application launcher is replaced with the selected one of the plurality of applications at the gaze target via the display.
    Type: Grant
    Filed: December 3, 2014
    Date of Patent: April 2, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Taylor Lehman, Kim Pascal Pimmel, Xerxes Beharry, Ranjib Badh, Jiashi Zhang
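A toy sketch of the launch flow in this patent's claims, with an assumed `display` layer exposing `show(widget_name, x, y)` and `replace(old_widget, widget_name, x, y)`; the eye-tracking camera supplies the gaze target, and the trigger and selection events stand in for whatever gestures or commands the system recognizes.

```python
from dataclasses import dataclass

@dataclass
class GazeTarget:
    x: float
    y: float

class GazeLauncher:
    """Show an application launcher at the gaze target, then swap in the app."""

    def __init__(self, display):
        self.display = display          # assumed UI layer (see lead-in above)
        self.launcher = None
        self.anchor = None

    def on_trigger(self, gaze: GazeTarget):
        # Display the launcher where the user is looking.
        self.anchor = gaze
        self.launcher = self.display.show("launcher", gaze.x, gaze.y)

    def on_select(self, app_name: str):
        # Replace the launcher with the selected application at the same spot.
        self.display.replace(self.launcher, app_name, self.anchor.x, self.anchor.y)
        self.launcher = None
```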
  • Patent number: 10222981
    Abstract: Embodiments that relate to displaying holographic keyboard and hand images in a holographic environment are provided. In one embodiment depth information of an actual position of a user's hand is received from a capture device. The user's hand is spaced by an initial actual distance from the capture device, and a holographic keyboard image is displayed spatially separated by a virtual distance from a holographic hand image. The user's hand is determined to move to an updated actual distance from the capture device. In response, the holographic keyboard image is maintained spatially separated by substantially the virtual distance from the holographic hand image.
    Type: Grant
    Filed: September 6, 2017
    Date of Patent: March 5, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Rotem Bennet, Lewey Geselowitz, Wei Zhang, Adam G. Poulos, John Bevis, Kim Pascal Pimmel, Nicholas Gervase Fajt
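A one-function sketch of the distance-keeping behavior in this patent's claims: the virtual separation established when the keyboard was first shown is preserved as the hand moves toward or away from the capture device, so the keyboard image follows the hand image rather than staying pinned in space. The depth-only simplification and units are assumptions for illustration.

```python
def keyboard_depth(hand_depth_m, initial_hand_depth_m, initial_keyboard_depth_m):
    """Depth at which to render the holographic keyboard for the current frame.

    All values are distances from the capture device in meters. The
    separation between hand image and keyboard image at the moment the
    keyboard appeared is held substantially constant thereafter.
    """
    separation = initial_keyboard_depth_m - initial_hand_depth_m
    return hand_depth_m + separation
```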
  • Publication number: 20170364261
    Abstract: Embodiments that relate to displaying holographic keyboard and hand images in a holographic environment are provided. In one embodiment depth information of an actual position of a user's hand is received from a capture device. The user's hand is spaced by an initial actual distance from the capture device, and a holographic keyboard image is displayed spatially separated by a virtual distance from a holographic hand image. The user's hand is determined to move to an updated actual distance from the capture device. In response, the holographic keyboard image is maintained spatially separated by substantially the virtual distance from the holographic hand image.
    Type: Application
    Filed: September 6, 2017
    Publication date: December 21, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Rotem Bennet, Lewey Geselowitz, Wei Zhang, Adam G. Poulos, John Bevis, Kim Pascal Pimmel, Nicholas Gervase Fajt
  • Patent number: 9766806
    Abstract: Embodiments that relate to displaying holographic keyboard and hand images in a holographic environment are provided. In one embodiment depth information of an actual position of a user's hand is received. Using the depth information, a holographic hand image representing the user's hand is displayed in a virtual hand plane in the holographic environment. In response to receiving a keyboard activation input from the user and using the depth information, the holographic keyboard image is adaptively displayed in a virtual keyboard plane in the holographic environment at a virtual distance under the holographic hand image representing the user's hand.
    Type: Grant
    Filed: July 15, 2014
    Date of Patent: September 19, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Rotem Bennet, Lewey Geselowitz, Wei Zhang, Adam G. Poulos, John Bevis, Kim Pascal Pimmel, Nicholas Gervase Fajt
  • Publication number: 20160162020
    Abstract: Embodiments are disclosed for a method of controlling a displayed position of an application launcher. An example method includes recognizing, via an eye-tracking camera, a gaze target, and responsive to a user trigger, displaying, via a display, an application launcher at the gaze target, the application launcher presenting a plurality of applications selectable for launching. The example method further includes recognizing a user selection of one of the plurality of applications, and replacing, via the display, the application launcher with the selected one of the plurality of applications at the gaze target.
    Type: Application
    Filed: December 3, 2014
    Publication date: June 9, 2016
    Inventors: Taylor Lehman, Kim Pascal Pimmel, Xerxes Beharry, Ranjib Badh, Jiashi Zhang
  • Publication number: 20160018985
    Abstract: Embodiments that relate to displaying holographic keyboard and hand images in a holographic environment are provided. In one embodiment depth information of an actual position of a user's hand is received. Using the depth information, a holographic hand image representing the user's hand is displayed in a virtual hand plane in the holographic environment. In response to receiving a keyboard activation input from the user and using the depth information, the holographic keyboard image is adaptively displayed in a virtual keyboard plane in the holographic environment at a virtual distance under the holographic hand image representing the user's hand.
    Type: Application
    Filed: July 15, 2014
    Publication date: January 21, 2016
    Inventors: Rotem Bennet, Lewey Geselowitz, Wei Zhang, Adam G. Poulos, John Bevis, Kim Pascal Pimmel, Nicholas Gervase Fajt
  • Publication number: 20130165180
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, include structures and techniques for integrating operations of consumer electronic devices. In one aspect, a method includes identifying a program operating on a second device; selecting a code set, from among multiple code sets, based on the identified program operating on the second device; modifying, at a first device, operation of an application installed on the first device by running the selected code set at the first device; and controlling a function of the program operating on the second device using the modified application on the first device.
    Type: Application
    Filed: September 27, 2010
    Publication date: June 27, 2013
    Applicant: Adobe Systems Incorporated
    Inventors: Yohko Aurora Fukuda Kelley, Kim Pascal Pimmel, Matthew Soper Snow
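A rough sketch of the code-set selection step in this abstract, using hypothetical module names: the program identified on the second device determines which code set the first device loads, and the loaded module then drives that program. The `control(...)` entry point is an assumed interface, not one described in the filing.

```python
import importlib

# Hypothetical mapping from the program identified on the second device to
# the code set (a Python module here) that teaches the controller app on the
# first device how to drive it.
CODE_SETS = {
    "media_player": "codesets.media_player_remote",
    "photo_viewer": "codesets.photo_viewer_remote",
}

def load_code_set(identified_program: str):
    """Select and load the code set matching the identified program."""
    return importlib.import_module(CODE_SETS[identified_program])

# Usage sketch: load_code_set("media_player").control("pause")
```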
  • Patent number: D1056928
    Type: Grant
    Filed: October 17, 2022
    Date of Patent: January 7, 2025
    Assignee: Adobe Inc.
    Inventors: Kim Pascal Pimmel, Hijung Shin, Dingzeyu Li, Anh Lan Truong, Joy Oakyung Kim, Pankaj Kumar Nathani, Xuecong Xu, Cristin Ailidh Fraser, Kyratso George Karahalios, Zhengyang Ma, Joel Richard Brandt, Lubomira Assenova Dontcheva, Seth John Walker
  • Patent number: D1056942
    Type: Grant
    Filed: February 29, 2024
    Date of Patent: January 7, 2025
    Assignee: Adobe Inc.
    Inventors: Kim Pascal Pimmel, Hijung Shin, Dingzeyu Li, Anh Lan Truong, Joy Oakyung Kim, Pankaj Kumar Nathani, Xuecong Xu, Cristin Ailidh Fraser, Kyratso George Karahalios, Zhengyang Ma, Joel Richard Brandt, Lubomira Assenova Dontcheva, Seth John Walker
  • Patent number: D1056943
    Type: Grant
    Filed: February 29, 2024
    Date of Patent: January 7, 2025
    Assignee: Adobe Inc.
    Inventors: Kim Pascal Pimmel, Hijung Shin, Dingzeyu Li, Anh Lan Truong, Joy Oakyung Kim, Pankaj Kumar Nathani, Xuecong Xu, Cristin Ailidh Fraser, Kyratso George Karahalios, Zhengyang Ma, Joel Richard Brandt, Lubomira Assenova Dontcheva, Seth John Walker
  • Patent number: D1056944
    Type: Grant
    Filed: February 29, 2024
    Date of Patent: January 7, 2025
    Assignee: Adobe Inc.
    Inventors: Kim Pascal Pimmel, Hijung Shin, Dingzeyu Li, Anh Lan Truong, Joy Oakyung Kim, Pankaj Kumar Nathani, Xuecong Xu, Cristin Ailidh Fraser, Kyratso George Karahalios, Zhengyang Ma, Joel Richard Brandt, Lubomira Assenova Dontcheva, Seth John Walker