Patents by Inventor Ala Addin Sidig

Ala Addin Sidig has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO). The records below belong to a single sign language recognition patent family and share a common abstract; an illustrative sketch of the pipeline that abstract describes follows the listing.

  • Patent number: 10268879
    Abstract: A sign language recognizer is configured to detect interest points in an extracted sign language feature, wherein the interest points are localized in space and time in each image acquired from a plurality of frames of a sign language video; apply a filter to determine one or more extrema of a central region of the interest points; associate features with each interest point using a neighboring pixel function; cluster a group of extracted sign language features from the images based on a similarity between the extracted sign language features; represent each image by a histogram of visual words corresponding to the respective image to generate a code book; train a classifier to classify each extracted sign language feature using the code book; detect a posture in each frame of the sign language video using the trained classifier; and construct a sign gesture based on the detected postures.
    Type: Grant
    Filed: September 20, 2018
    Date of Patent: April 23, 2019
    Assignee: King Fahd University of Petroleum and Minerals
    Inventors: Sabri A. Mahmoud, Ala Addin Sidig
  • Patent number: 10262198
    Abstract: A sign language recognizer is configured to detect interest points in an extracted sign language feature, wherein the interest points are localized in space and time in each image acquired from a plurality of frames of a sign language video; apply a filter to determine one or more extrema of a central region of the interest points; associate features with each interest point using a neighboring pixel function; cluster a group of extracted sign language features from the images based on a similarity between the extracted sign language features; represent each image by a histogram of visual words corresponding to the respective image to generate a code book; train a classifier to classify each extracted sign language feature using the code book; detect a posture in each frame of the sign language video using the trained classifier; and construct a sign gesture based on the detected postures.
    Type: Grant
    Filed: September 21, 2018
    Date of Patent: April 16, 2019
    Assignee: King Fahd University of Petroleum and Minerals
    Inventors: Sabri A. Mahmoud, Ala Addin Sidig
  • Publication number: 20190050637
    Abstract: A sign language recognizer is configured to detect interest points in an extracted sign language feature, wherein the interest points are localized in space and time in each image acquired from a plurality of frames of a sign language video; apply a filter to determine one or more extrema of a central region of the interest points; associate features with each interest point using a neighboring pixel function; cluster a group of extracted sign language features from the images based on a similarity between the extracted sign language features; represent each image by a histogram of visual words corresponding to the respective image to generate a code book; train a classifier to classify each extracted sign language feature using the code book; detect a posture in each frame of the sign language video using the trained classifier; and construct a sign gesture based on the detected postures.
    Type: Application
    Filed: June 27, 2018
    Publication date: February 14, 2019
    Applicant: King Fahd University of Petroleum and Minerals
    Inventors: Sabri A. Mahmoud, Ala Addin Sidig
  • Patent number: 10192105
    Abstract: A sign language recognizer is configured to detect interest points in an extracted sign language feature, wherein the interest points are localized in space and time in each image acquired from a plurality of frames of a sign language video; apply a filter to determine one or more extrema of a central region of the interest points; associate features with each interest point using a neighboring pixel function; cluster a group of extracted sign language features from the images based on a similarity between the extracted sign language features; represent each image by a histogram of visual words corresponding to the respective image to generate a code book; train a classifier to classify each extracted sign language feature using the code book; detect a posture in each frame of the sign language video using the trained classifier; and construct a sign gesture based on the detected postures.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: January 29, 2019
    Assignee: King Fahd University of Petroleum and Minerals
    Inventors: Sabri A. Mahmoud, Ala Addin Sidig
  • Publication number: 20190026546
    Abstract: A sign language recognizer is configured to detect interest points in an extracted sign language feature, wherein the interest points are localized in space and time in each image acquired from a plurality of frames of a sign language video; apply a filter to determine one or more extrema of a central region of the interest points; associate features with each interest point using a neighboring pixel function; cluster a group of extracted sign language features from the images based on a similarity between the extracted sign language features; represent each image by a histogram of visual words corresponding to the respective image to generate a code book; train a classifier to classify each extracted sign language feature using the code book; detect a posture in each frame of the sign language video using the trained classifier; and construct a sign gesture based on the detected postures.
    Type: Application
    Filed: September 21, 2018
    Publication date: January 24, 2019
    Applicant: King Fahd University of Petroleum and Minerals
    Inventors: Sabri A. Mahmoud, Ala Addin Sidig
  • Publication number: 20190019018
    Abstract: A sign language recognizer is configured to detect interest points in an extracted sign language feature, wherein the interest points are localized in space and time in each image acquired from a plurality of frames of a sign language video; apply a filter to determine one or more extrema of a central region of the interest points; associate features with each interest point using a neighboring pixel function; cluster a group of extracted sign language features from the images based on a similarity between the extracted sign language features; represent each image by a histogram of visual words corresponding to the respective image to generate a code book; train a classifier to classify each extracted sign language feature using the code book; detect a posture in each frame of the sign language video using the trained classifier; and construct a sign gesture based on the detected postures.
    Type: Application
    Filed: September 20, 2018
    Publication date: January 17, 2019
    Applicant: King Fahd University of Petroleum and Minerals
    Inventors: Sabri A. Mahmoud, Ala Addin Sidig
  • Patent number: 10176367
    Abstract: A sign language recognizer is configured to detect interest points in an extracted sign language feature, wherein the interest points are localized in space and time in each image acquired from a plurality of frames of a sign language video; apply a filter to determine one or more extrema of a central region of the interest points; associate features with each interest point using a neighboring pixel function; cluster a group of extracted sign language features from the images based on a similarity between the extracted sign language features; represent each image by a histogram of visual words corresponding to the respective image to generate a code book; train a classifier to classify each extracted sign language feature using the code book; detect a posture in each frame of the sign language video using the trained classifier; and construct a sign gesture based on the detected postures.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: January 8, 2019
    Assignee: King Fahd University of Petroleum and Minerals
    Inventors: Sabri A. Mahmoud, Ala Addin Sidig
  • Publication number: 20180322338
    Abstract: A sign language recognizer is configured to detect interest points in an extracted sign language feature, wherein the interest points are localized in space and time in each image acquired from a plurality of frames of a sign language video; apply a filter to determine one or more extrema of a central region of the interest points; associate features with each interest point using a neighboring pixel function; cluster a group of extracted sign language features from the images based on a similarity between the extracted sign language features; represent each image by a histogram of visual words corresponding to the respective image to generate a code book; train a classifier to classify each extracted sign language feature using the code book; detect a posture in each frame of the sign language video using the trained classifier; and construct a sign gesture based on the detected postures.
    Type: Application
    Filed: June 29, 2018
    Publication date: November 8, 2018
    Applicant: King Fahd University of Petroleum and Minerals
    Inventors: Sabri A. Mahmoud, Ala Addin Sidig
  • Patent number: 10037458
    Abstract: A sign language recognizer is configured to detect interest points in an extracted sign language feature, wherein the interest points are localized in space and time in each image acquired from a plurality of frames of a sign language video; apply a filter to determine one or more extrema of a central region of the interest points; associate features with each interest point using a neighboring pixel function; cluster a group of extracted sign language features from the images based on a similarity between the extracted sign language features; represent each image by a histogram of visual words corresponding to the respective image to generate a code book; train a classifier to classify each extracted sign language feature using the code book; detect a posture in each frame of the sign language video using the trained classifier; and construct a sign gesture based on the detected postures.
    Type: Grant
    Filed: May 2, 2017
    Date of Patent: July 31, 2018
    Assignee: King Fahd University of Petroleum and Minerals
    Inventors: Sabri A. Mahmoud, Ala Addin Sidig
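
Each abstract above describes the same bag-of-visual-words pipeline: detect spatio-temporal interest points, describe the pixel neighborhood of each point, cluster the descriptors into a code book of visual words, represent each frame as a histogram over that code book, classify the posture in each frame, and assemble the detected postures into a sign gesture. The sketch below (Python, assuming NumPy and scikit-learn) is an illustrative reading of those steps only, not the patented implementation; the dense-grid detector, raw-patch descriptor, k-means code book, and linear SVM classifier are stand-in assumptions for details the abstracts leave unspecified.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def detect_interest_points(frame, step=8):
    # Stand-in detector: a dense grid of points. The abstracts call for interest
    # points localized in space and time; a real system would use a
    # spatio-temporal detector here.
    h, w = frame.shape[:2]
    return [(y, x) for y in range(step, h - step, step)
                   for x in range(step, w - step, step)]

def describe_neighborhood(frame, point, size=8):
    # Stand-in "neighboring pixel function": the flattened patch around the point.
    y, x = point
    patch = frame[y - size // 2:y + size // 2, x - size // 2:x + size // 2]
    return patch.astype(float).ravel()

def extract_descriptors(frames):
    # One descriptor matrix per frame of the sign language video.
    per_frame = []
    for frame in frames:
        points = detect_interest_points(frame)
        per_frame.append(np.array([describe_neighborhood(frame, p) for p in points]))
    return per_frame

def build_codebook(descriptor_matrices, n_words=64):
    # Cluster all extracted descriptors into visual words (the "code book").
    kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=0)
    kmeans.fit(np.vstack(descriptor_matrices))
    return kmeans

def frame_histogram(descriptors, codebook):
    # Represent a frame as a normalized histogram of visual words.
    words = codebook.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / max(hist.sum(), 1)

def train_posture_classifier(histograms, posture_labels):
    # A linear SVM is an illustrative choice; the abstracts only say "a classifier".
    clf = SVC(kernel="linear")
    clf.fit(np.vstack(histograms), posture_labels)
    return clf

def recognize_gesture(frames, codebook, classifier):
    # Detect a posture in each frame, then construct the sign gesture as the
    # sequence of distinct consecutive postures.
    postures = [classifier.predict([frame_histogram(d, codebook)])[0]
                for d in extract_descriptors(frames)]
    return [p for i, p in enumerate(postures) if i == 0 or p != postures[i - 1]]

Given grayscale video frames as NumPy arrays and posture-labeled training frames, build_codebook and train_posture_classifier would be fit once offline, and recognize_gesture would then map a new sign language video to a posture sequence.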