Patents by Inventor Noureldien Mahmoud Elsayed HUSSEIN

Noureldien Mahmoud Elsayed HUSSEIN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240135708
    Abstract: A method for recognizing long-range activities in videos includes segmenting an input video stream to generate multiple frame sets. For each frame set, the frame with the highest likelihood of depicting one or more actions from a set of predefined actions is identified, regardless of its position in the frame set. A global representation of the input video stream is generated based on pooled representations of the identified frames. A long-range activity in the video stream is then classified based on the global representation.
    Type: Application
    Filed: November 13, 2020
    Publication date: April 25, 2024
    Inventors: Noureldien Mahmoud Elsayed HUSSEIN, Efstratios GAVVES, Arnold Wilhelmus Maria SMEULDERS
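    The pipeline described in this abstract can be sketched roughly as follows. This is an illustrative reading only, not the claimed implementation: the dot-product scorer, the mean pooling, and the threshold classifier are stand-ins invented here for demonstration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def classify_long_range(video, n_sets, action_scorer, classifier):
        """Segment `video` (frames x feat_dim) into `n_sets` frame sets,
        pick the frame with the highest action likelihood in each set
        (its order within the set is ignored), mean-pool the picks into
        a global representation, and classify that representation."""
        frame_sets = np.array_split(video, n_sets)      # segment the stream
        picks = []
        for frame_set in frame_sets:
            scores = action_scorer(frame_set)           # likelihood per frame
            picks.append(frame_set[np.argmax(scores)])  # best frame, any position
        global_rep = np.mean(picks, axis=0)             # pooled representation
        return classifier(global_rep)

    # Toy stand-ins: frames are 8-d features, the "scorer" is a dot product
    # with a random template, and the "classifier" thresholds one coordinate.
    video = rng.standard_normal((64, 8))
    template = rng.standard_normal(8)
    label = classify_long_range(
        video, n_sets=4,
        action_scorer=lambda fs: fs @ template,
        classifier=lambda g: int(g[0] > 0),
    )
    ```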
  • Publication number: 20240135712
    Abstract: A method for classifying a human-object interaction includes identifying a human-object interaction in an input. Context features of the input are identified, and each identified context feature is compared with the identified human-object interaction to determine its importance for that interaction. A context feature is fused with the identified human-object interaction when its importance exceeds a threshold.
    Type: Application
    Filed: November 14, 2020
    Publication date: April 25, 2024
    Inventors: Mert KILICKAYA, Noureldien Mahmoud Elsayed HUSSEIN, Efstratios GAVVES, Arnold Wilhelmus Maria SMEULDERS
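    The importance-gated fusion in this abstract can be sketched as below. Cosine similarity as the importance measure and additive fusion are assumptions made here for illustration; the patent does not specify either.

    ```python
    import numpy as np

    def fuse_context(hoi_feat, context_feats, threshold=0.5):
        """Compare each context feature with the human-object interaction
        (HOI) feature, score its importance (cosine similarity here, as a
        simple stand-in), and fuse (add) only features above `threshold`."""
        fused = hoi_feat.copy()
        for ctx in context_feats:
            importance = ctx @ hoi_feat / (
                np.linalg.norm(ctx) * np.linalg.norm(hoi_feat) + 1e-8)
            if importance > threshold:       # gate on importance
                fused += ctx                 # fuse the relevant context
        return fused
    ```

    With this gating, an aligned context vector is folded in while orthogonal or opposing ones are discarded.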
  • Patent number: 11443514
    Abstract: A method for classifying subject activities in videos includes learning latent (previously generated) concepts that are analogous to nodes of a graph to be generated for an activity in a video. The method also includes receiving video segments of the video. A similarity between the video segments and the previously generated concepts is measured to obtain segment representations as a weighted set of latent concepts. The method further includes determining a relationship between the segment representations and their transitioning pattern over time to determine a reduced set of nodes and/or edges for the graph. The graph of the activity in the video represented by the video segments is generated based on the reduced set of nodes and/or edges. The nodes of the graph are represented by the latent concepts. Subject activities in the video are classified based on the graph.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: September 13, 2022
    Assignee: Qualcomm Technologies, Inc.
    Inventors: Noureldien Mahmoud Elsayed Hussein, Efstratios Gavves, Arnold Wilhelmus Maria Smeulders
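    A minimal sketch of the graph construction described above, under assumptions not stated in the abstract: softmax over similarities as the weighted concept assignment, co-activation across consecutive segments as the transition pattern, and a top-k edge cutoff as the reduction step.

    ```python
    import numpy as np

    def build_activity_graph(segments, concepts, keep_top=3):
        """Represent each video segment as a weighted mix of latent concepts
        (softmax over similarities), estimate concept-to-concept transitions
        over time, and keep only the strongest edges, yielding a reduced
        graph whose nodes are the latent concepts."""
        sims = segments @ concepts.T                      # segment-concept similarity
        w = np.exp(sims - sims.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)                 # soft assignment per segment
        # Transition pattern: concept co-activation in consecutive segments.
        trans = w[:-1].T @ w[1:]
        # Reduction: keep only the top-k strongest edges.
        flat = np.argsort(trans, axis=None)[::-1][:keep_top]
        edges = [tuple(np.unravel_index(i, trans.shape)) for i in flat]
        return edges, trans
    ```

    A downstream classifier would then operate on the reduced edge set rather than on raw segments.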
  • Publication number: 20200302185
    Abstract: A method for classifying subject activities in videos includes learning latent (previously generated) concepts that are analogous to nodes of a graph to be generated for an activity in a video. The method also includes receiving video segments of the video. A similarity between the video segments and the previously generated concepts is measured to obtain segment representations as a weighted set of latent concepts. The method further includes determining a relationship between the segment representations and their transitioning pattern over time to determine a reduced set of nodes and/or edges for the graph. The graph of the activity in the video represented by the video segments is generated based on the reduced set of nodes and/or edges. The nodes of the graph are represented by the latent concepts. Subject activities in the video are classified based on the graph.
    Type: Application
    Filed: March 23, 2020
    Publication date: September 24, 2020
    Inventors: Noureldien Mahmoud Elsayed HUSSEIN, Efstratios GAVVES, Arnold Wilhelmus Maria SMEULDERS
  • Patent number: 10496885
    Abstract: A method, a computer-readable medium, and an apparatus for zero-exemplar event detection are provided. The apparatus may receive a plurality of text blocks, each of which may describe one of a plurality of pre-defined events. The apparatus may receive a plurality of training videos, each of which may be associated with one of the plurality of text blocks. The apparatus may propagate each text block through a neural network to obtain a textual representation in a joint space of textual and video representations. The apparatus may propagate each training video through the neural network to obtain a visual representation in the joint space. The apparatus may adjust parameters of the neural network to reduce, for each pair of associated text block and training video, the distance in the joint space between the textual representation of the associated text block and the visual representation of the associated training video.
    Type: Grant
    Filed: June 21, 2017
    Date of Patent: December 3, 2019
    Assignee: Qualcomm Incorporated
    Inventors: Noureldien Mahmoud Elsayed Hussein, Efstratios Gavves, Arnold Wilhelmus Maria Smeulders
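    The joint-embedding training loop in this abstract can be sketched with linear "encoders" standing in for the neural network; the projection matrices, learning rate, and squared-distance loss below are illustrative assumptions, not the patented architecture.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical linear encoders projecting text and video features into
    # a shared 4-d joint space.
    W_text = rng.standard_normal((16, 4)) * 0.1
    W_video = rng.standard_normal((32, 4)) * 0.1

    def joint_distance(text_feat, video_feat):
        """Distance in the joint space between a text block and a video."""
        return np.linalg.norm(text_feat @ W_text - video_feat @ W_video)

    def train_pair(text_feat, video_feat, lr=0.01, steps=200):
        """Gradient steps on 0.5 * ||distance||^2 that pull an associated
        (text block, training video) pair together in the joint space."""
        global W_text, W_video
        for _ in range(steps):
            diff = text_feat @ W_text - video_feat @ W_video  # joint-space gap
            W_text -= lr * np.outer(text_feat, diff)          # gradient wrt W_text
            W_video += lr * np.outer(video_feat, diff)        # gradient wrt W_video
    ```

    After training on associated pairs, an unseen event can be detected by embedding its text description and ranking videos by joint-space distance, which is what makes the method zero-exemplar.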
  • Publication number: 20180137360
    Abstract: A method, a computer-readable medium, and an apparatus for zero-exemplar event detection are provided. The apparatus may receive a plurality of text blocks, each of which may describe one of a plurality of pre-defined events. The apparatus may receive a plurality of training videos, each of which may be associated with one of the plurality of text blocks. The apparatus may propagate each text block through a neural network to obtain a textual representation in a joint space of textual and video representations. The apparatus may propagate each training video through the neural network to obtain a visual representation in the joint space. The apparatus may adjust parameters of the neural network to reduce, for each pair of associated text block and training video, the distance in the joint space between the textual representation of the associated text block and the visual representation of the associated training video.
    Type: Application
    Filed: June 21, 2017
    Publication date: May 17, 2018
    Inventors: Noureldien Mahmoud Elsayed HUSSEIN, Efstratios GAVVES, Arnold Wilhelmus Maria SMEULDERS