Patents by Inventor Markus Woodson

Markus Woodson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11949964
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for automatic tagging of videos. In particular, in one or more embodiments, the disclosed systems generate a set of tagged feature vectors (e.g., tagged feature vectors based on action-rich digital videos) to utilize to generate tags for an input digital video. For instance, the disclosed systems can extract a set of frames for the input digital video and generate feature vectors from the set of frames. In some embodiments, the disclosed systems generate aggregated feature vectors from the feature vectors. Furthermore, the disclosed systems can utilize the feature vectors (or aggregated feature vectors) to identify similar tagged feature vectors from the set of tagged feature vectors. Additionally, the disclosed systems can generate a set of tags for the input digital video by aggregating one or more tags corresponding to identified similar tagged feature vectors.
    Type: Grant
    Filed: September 9, 2021
    Date of Patent: April 2, 2024
    Assignee: Adobe Inc.
    Inventors: Bryan Russell, Ruppesh Nalwaya, Markus Woodson, Joon-Young Lee, Hailin Jin
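The tagging pipeline the abstract above outlines (per-frame feature vectors, aggregation, lookup of similar tagged vectors, tag aggregation) can be illustrated with a minimal sketch. This is not the patented implementation: the feature vectors here are toy inputs standing in for the outputs of a learned model trained on action-rich videos, and the function names (`aggregate`, `tag_video`) and the cosine-similarity lookup are hypothetical choices for illustration only.

```python
# Hedged sketch of nearest-neighbor video tagging. Real feature vectors would
# come from a learned model; here they are plain lists of floats.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def aggregate(frame_vectors):
    """Mean-pool per-frame feature vectors into one aggregated vector."""
    n = len(frame_vectors)
    dim = len(frame_vectors[0])
    return [sum(v[i] for v in frame_vectors) / n for i in range(dim)]

def tag_video(frame_vectors, tagged_db, k=3, top_tags=2):
    """tagged_db: list of (feature_vector, tags) pairs.

    Finds the k most similar tagged vectors to the aggregated query
    vector, then aggregates their tags by frequency.
    """
    query = aggregate(frame_vectors)
    neighbors = sorted(tagged_db, key=lambda item: cosine(query, item[0]),
                       reverse=True)[:k]
    counts = Counter(t for _, tags in neighbors for t in tags)
    return [tag for tag, _ in counts.most_common(top_tags)]

# Toy usage: two frames of an "action-like" video against a tiny tagged set.
db = [([1.0, 0.0], ["running"]),
      ([0.9, 0.1], ["running", "outdoor"]),
      ([0.0, 1.0], ["cooking"])]
print(tag_video([[1.0, 0.0], [0.8, 0.2]], db, k=2, top_tags=1))
```

In this toy run the aggregated query vector lands nearest the two "running" entries, so frequency aggregation over their tags yields `["running"]`.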
  • Publication number: 20230276084
    Abstract: This disclosure describes one or more implementations of systems, non-transitory computer-readable media, and methods that generate a temporally remapped video that satisfies a desired target duration while preserving natural video dynamics. In certain instances, the disclosed systems utilize a playback speed prediction machine-learning model that recognizes and localizes temporally varying changes in video playback speed to re-time a digital video with varying frame-change speeds. For instance, to re-time the digital video, the disclosed systems utilize the playback speed prediction machine-learning model to infer the slowness of individual video frames. Subsequently, in certain embodiments, the disclosed systems determine, from frames of a digital video, a temporal frame sub-sampling that is consistent with the slowness predictions and fits within a target video duration.
    Type: Application
    Filed: March 16, 2023
    Publication date: August 31, 2023
    Inventors: Simon Jenni, Markus Woodson, Fabian David Caba Heilbron
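One plausible reading of the re-timing step described above can be sketched as follows. This is an assumption-laden illustration, not the patented method: the per-frame speeds are stand-ins for the playback speed prediction model's outputs, and `remap_frames` is a hypothetical name for a simple accumulator that samples fast segments sparsely and slow segments densely while producing exactly the target number of frames.

```python
# Hedged sketch: sub-sample frame indices consistent with per-frame predicted
# playback speeds (higher speed = skip more), fitting a target frame count.
def remap_frames(speeds, target_len):
    """speeds: predicted playback speed per input frame (model output stub).

    Selects an index whenever the accumulated speed "budget" crosses the
    next multiple of total_speed / target_len, so slow (low-speed) regions
    contribute more output frames than fast regions.
    """
    total = sum(speeds)
    step = total / target_len
    out, acc, threshold = [], 0.0, 0.0
    for i, s in enumerate(speeds):
        if acc >= threshold and len(out) < target_len:
            out.append(i)
            threshold += step
        acc += s
    while len(out) < target_len:  # pad if rounding left the output short
        out.append(len(speeds) - 1)
    return out

# Uniform speeds give uniform sampling; a fast opening gets sampled sparsely.
print(remap_frames([1, 1, 1, 1], 2))        # uniform case
print(remap_frames([2, 2, 2, 1, 1, 1], 3))  # fast first half, slow second
```

With uniform speeds the result is evenly spaced; with a fast first half, the sketch spaces indices further apart there and closer together in the slow half, which is the qualitative behavior the abstract describes.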
  • Patent number: 11610606
    Abstract: This disclosure describes one or more implementations of systems, non-transitory computer-readable media, and methods that generate a temporally remapped video that satisfies a desired target duration while preserving natural video dynamics. In certain instances, the disclosed systems utilize a playback speed prediction machine-learning model that recognizes and localizes temporally varying changes in video playback speed to re-time a digital video with varying frame-change speeds. For instance, to re-time the digital video, the disclosed systems utilize the playback speed prediction machine-learning model to infer the slowness of individual video frames. Subsequently, in certain embodiments, the disclosed systems determine, from frames of a digital video, a temporal frame sub-sampling that is consistent with the slowness predictions and fits within a target video duration.
    Type: Grant
    Filed: February 25, 2022
    Date of Patent: March 21, 2023
    Assignee: Adobe Inc.
    Inventors: Simon Jenni, Markus Woodson, Fabian David Caba Heilbron
  • Patent number: 11244204
    Abstract: In implementations of determining video cuts in video clips, a video cut detection system can receive a video clip that includes a sequence of digital video frames that depict one or more scenes. The video cut detection system can determine scene characteristics for the digital video frames. The video cut detection system can determine, from the scene characteristics, a probability of a video cut between two adjacent digital video frames having a boundary between the two adjacent digital video frames that is centered in the sequence of digital video frames. The video cut detection system can then compare the probability of the video cut to a cut threshold to determine whether the video cut exists between the two adjacent digital video frames.
    Type: Grant
    Filed: May 20, 2020
    Date of Patent: February 8, 2022
    Assignee: Adobe Inc.
    Inventors: Oliver Wang, Nico Alexander Becherer, Markus Woodson, Federico Perazzi, Nikhil Kalra
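The cut-detection flow in the abstract above (score a candidate boundary between adjacent frames, compare the probability to a cut threshold) can be sketched in outline. This is an illustrative stand-in only: the patent describes a system that derives scene characteristics for the frames, whereas here a simple mean pixel difference, squashed to a pseudo-probability, plays that role, and `detect_cuts` and `frame_difference` are hypothetical names.

```python
# Hedged sketch of threshold-based cut detection between adjacent frames.
def frame_difference(f1, f2):
    """Stand-in for the learned scene-characteristic model: mean absolute
    pixel difference between two frames, squashed into [0, 1]."""
    diff = sum(abs(a - b) for a, b in zip(f1, f2)) / len(f1)
    return min(diff / 255.0, 1.0)

def detect_cuts(frames, cut_threshold=0.3):
    """frames: list of flat pixel lists. Returns boundary indices i where a
    cut probably lies between frames[i] and frames[i + 1], i.e. where the
    pseudo-probability exceeds the cut threshold."""
    cuts = []
    for i in range(len(frames) - 1):
        if frame_difference(frames[i], frames[i + 1]) > cut_threshold:
            cuts.append(i)
    return cuts

# Toy usage: two dark frames followed by two bright frames -> one cut
# at the boundary between frame 1 and frame 2.
print(detect_cuts([[0, 0], [0, 0], [255, 255], [255, 255]]))
```

The thresholding step mirrors the abstract's final comparison; everything upstream of it (the probability itself) would come from the learned model in the described system.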
  • Publication number: 20210409836
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for automatic tagging of videos. In particular, in one or more embodiments, the disclosed systems generate a set of tagged feature vectors (e.g., tagged feature vectors based on action-rich digital videos) to utilize to generate tags for an input digital video. For instance, the disclosed systems can extract a set of frames for the input digital video and generate feature vectors from the set of frames. In some embodiments, the disclosed systems generate aggregated feature vectors from the feature vectors. Furthermore, the disclosed systems can utilize the feature vectors (or aggregated feature vectors) to identify similar tagged feature vectors from the set of tagged feature vectors. Additionally, the disclosed systems can generate a set of tags for the input digital video by aggregating one or more tags corresponding to identified similar tagged feature vectors.
    Type: Application
    Filed: September 9, 2021
    Publication date: December 30, 2021
    Inventors: Bryan Russell, Ruppesh Nalwaya, Markus Woodson, Joon-Young Lee, Hailin Jin
  • Publication number: 20210365742
    Abstract: In implementations of determining video cuts in video clips, a video cut detection system can receive a video clip that includes a sequence of digital video frames that depict one or more scenes. The video cut detection system can determine scene characteristics for the digital video frames. The video cut detection system can determine, from the scene characteristics, a probability of a video cut between two adjacent digital video frames having a boundary between the two adjacent digital video frames that is centered in the sequence of digital video frames. The video cut detection system can then compare the probability of the video cut to a cut threshold to determine whether the video cut exists between the two adjacent digital video frames.
    Type: Application
    Filed: May 20, 2020
    Publication date: November 25, 2021
    Applicant: Adobe Inc.
    Inventors: Oliver Wang, Nico Alexander Becherer, Markus Woodson, Federico Perazzi, Nikhil Kalra
  • Patent number: 11146862
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for automatic tagging of videos. In particular, in one or more embodiments, the disclosed systems generate a set of tagged feature vectors (e.g., tagged feature vectors based on action-rich digital videos) to utilize to generate tags for an input digital video. For instance, the disclosed systems can extract a set of frames for the input digital video and generate feature vectors from the set of frames. In some embodiments, the disclosed systems generate aggregated feature vectors from the feature vectors. Furthermore, the disclosed systems can utilize the feature vectors (or aggregated feature vectors) to identify similar tagged feature vectors from the set of tagged feature vectors. Additionally, the disclosed systems can generate a set of tags for the input digital video by aggregating one or more tags corresponding to identified similar tagged feature vectors.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: October 12, 2021
    Assignee: Adobe Inc.
    Inventors: Bryan Russell, Ruppesh Nalwaya, Markus Woodson, Joon-Young Lee, Hailin Jin
  • Publication number: 20200336802
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for automatic tagging of videos. In particular, in one or more embodiments, the disclosed systems generate a set of tagged feature vectors (e.g., tagged feature vectors based on action-rich digital videos) to utilize to generate tags for an input digital video. For instance, the disclosed systems can extract a set of frames for the input digital video and generate feature vectors from the set of frames. In some embodiments, the disclosed systems generate aggregated feature vectors from the feature vectors. Furthermore, the disclosed systems can utilize the feature vectors (or aggregated feature vectors) to identify similar tagged feature vectors from the set of tagged feature vectors. Additionally, the disclosed systems can generate a set of tags for the input digital video by aggregating one or more tags corresponding to identified similar tagged feature vectors.
    Type: Application
    Filed: April 16, 2019
    Publication date: October 22, 2020
    Inventors: Bryan Russell, Ruppesh Nalwaya, Markus Woodson, Joon-Young Lee, Hailin Jin