Patents by Inventor Cees G. M. Snoek

Cees G. M. Snoek has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11960576
    Abstract: Videos captured in low-light conditions can be processed in order to identify an activity being performed in the video. The processing may use both the video and audio streams for identifying the activity in the low-light video. The video portion is processed to generate a darkness-aware feature, which may be used to modulate the features generated from the audio and video streams. The audio features may be used to generate a video attention feature, and the video features may be used to generate an audio attention feature. The audio and video attention features may also be used in modulating the audio and video features. The modulated audio and video features may be used to predict an activity occurring in the video. A sketch of such a pipeline appears after this listing.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: April 16, 2024
    Assignee: Inception Institute of Artificial Intelligence Ltd
    Inventors: Yunhua Zhang, Xiantong Zhen, Ling Shao, Cees G. M. Snoek
  • Patent number: 11694442
    Abstract: Repetitive activities can be captured in audio-video (AV) content. The AV content can be processed in order to predict the number of repetitions of an activity present in the AV content. The accuracy of the predicted count may be improved, especially for AV content captured under challenging conditions, by basing the prediction on both the audio and video portions of the AV content. A sketch of such a counting model appears after this listing.
    Type: Grant
    Filed: June 18, 2021
    Date of Patent: July 4, 2023
    Assignee: Inception Institute of Artificial Intelligence Ltd
    Inventors: Yunhua Zhang, Cees G. M. Snoek, Ling Shao
  • Publication number: 20230039641
    Abstract: Videos captured in low-light conditions can be processed in order to identify an activity being performed in the video. The processing may use both the video and audio streams for identifying the activity in the low-light video. The video portion is processed to generate a darkness-aware feature, which may be used to modulate the features generated from the audio and video streams. The audio features may be used to generate a video attention feature, and the video features may be used to generate an audio attention feature. The audio and video attention features may also be used in modulating the audio and video features. The modulated audio and video features may be used to predict an activity occurring in the video.
    Type: Application
    Filed: July 20, 2021
    Publication date: February 9, 2023
    Inventors: Yunhua Zhang, Xiantong Zhen, Ling Shao, Cees G. M. Snoek
  • Publication number: 20220156501
    Abstract: Repetitive activities can be captured in audio-video (AV) content. The AV content can be processed in order to predict the number of repetitions of an activity present in the AV content. The accuracy of the predicted count may be improved, especially for AV content captured under challenging conditions, by basing the prediction on both the audio and video portions of the AV content.
    Type: Application
    Filed: June 18, 2021
    Publication date: May 19, 2022
    Inventors: Yunhua Zhang, Cees G. M. Snoek, Ling Shao
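
The low-light activity-recognition entries above (patent 11960576 and publication 20230039641) describe a pipeline in which a darkness-aware feature and cross-modal attention features modulate the audio and video streams before an activity is predicted. The snippet below is a minimal PyTorch sketch of that kind of pipeline, not the patented implementation; the module name DarknessAwareFusion, the layer sizes, and the gating choices are all illustrative assumptions.

```python
# Minimal sketch of the cross-modal pipeline described in the abstracts of
# patent 11960576 / publication 20230039641. This is NOT the patented
# implementation; dimensions and module names are illustrative assumptions.
import torch
import torch.nn as nn


class DarknessAwareFusion(nn.Module):
    """Fuses video and audio features, modulated by a darkness-aware signal
    and by cross-modal attention, then predicts an activity class."""

    def __init__(self, video_dim=512, audio_dim=128, num_classes=10):
        super().__init__()
        # Darkness-aware feature: a per-clip gate estimated from the video
        # features themselves (assumed design choice).
        self.darkness_gate = nn.Sequential(nn.Linear(video_dim, 1), nn.Sigmoid())
        # Cross-modal attention: audio produces an attention over the video
        # features and vice versa.
        self.audio_to_video_attn = nn.Sequential(nn.Linear(audio_dim, video_dim), nn.Sigmoid())
        self.video_to_audio_attn = nn.Sequential(nn.Linear(video_dim, audio_dim), nn.Sigmoid())
        self.classifier = nn.Linear(video_dim + audio_dim, num_classes)

    def forward(self, video_feat, audio_feat):
        # video_feat: (batch, video_dim), audio_feat: (batch, audio_dim)
        darkness = self.darkness_gate(video_feat)          # (batch, 1)
        video_attn = self.audio_to_video_attn(audio_feat)  # attention from audio
        audio_attn = self.video_to_audio_attn(video_feat)  # attention from video
        # Modulate each stream by the other modality's attention and by the
        # darkness-aware gate, then classify the fused representation.
        video_mod = video_feat * video_attn * darkness
        audio_mod = audio_feat * audio_attn * darkness
        return self.classifier(torch.cat([video_mod, audio_mod], dim=-1))


if __name__ == "__main__":
    model = DarknessAwareFusion()
    logits = model(torch.randn(2, 512), torch.randn(2, 128))
    print(logits.shape)  # torch.Size([2, 10])
```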
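
The repetition-counting entries above (patent 11694442 and publication 20220156501) describe predicting a repetition count from both the audio and video portions of AV content, so that one modality can compensate when the other is unreliable. The sketch below shows one plausible fusion-and-regression arrangement under those assumptions; AVRepetitionCounter, the feature dimensions, and the Softplus count head are hypothetical, not the patented method.

```python
# Minimal sketch of audio-visual repetition counting as described in the
# abstracts of patent 11694442 / publication 20220156501. This is NOT the
# patented implementation; dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn


class AVRepetitionCounter(nn.Module):
    """Predicts a repetition count from both the audio and video portions of
    AV content, fusing the two modalities before regressing the count."""

    def __init__(self, video_dim=512, audio_dim=128, hidden=256):
        super().__init__()
        self.video_head = nn.Linear(video_dim, hidden)
        self.audio_head = nn.Linear(audio_dim, hidden)
        # Regress a non-negative count from the fused representation.
        self.count_head = nn.Sequential(nn.ReLU(), nn.Linear(2 * hidden, 1), nn.Softplus())

    def forward(self, video_feat, audio_feat):
        # video_feat: (batch, video_dim), audio_feat: (batch, audio_dim)
        fused = torch.cat([self.video_head(video_feat), self.audio_head(audio_feat)], dim=-1)
        return self.count_head(fused).squeeze(-1)  # (batch,) predicted counts


if __name__ == "__main__":
    counter = AVRepetitionCounter()
    counts = counter(torch.randn(4, 512), torch.randn(4, 128))
    print(counts.shape)  # torch.Size([4])
```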