Patents by Inventor SAMIR MAHDAD

SAMIR MAHDAD has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10691735
Abstract: Aspects detect or recognize shifts in topic in computer-implemented speech recognition processes as a function of mapping keywords to non-verbal cues. An initial topic is mapped to one or more keywords extracted from a first spoken query within a user keyword ontology mapping. A second query, spoken after the first, is identified and distinguished by recognizing one or more non-verbal cues associated with the audio data input, including the time elapsed between the queries and, in some aspects, the user's facial expression or motion activity. Aspects determine whether the second spoken query is directed to the initial topic or to a new topic, different from the initial topic, as a function of mappings of the keyword(s) extracted from the first query to one or more keywords extracted from the second query and to the non-verbal cue(s) within the user ontology mapping.
    Type: Grant
    Filed: September 11, 2018
    Date of Patent: June 23, 2020
    Assignee: International Business Machines Corporation
    Inventors: Donna K. Byron, Barton W. Emanuel, Samir Mahdad, Sarbajit K. Rakshit, Craig M. Trim
  • Publication number: 20190005123
    Type: Application
    Filed: September 11, 2018
    Publication date: January 3, 2019
Inventors: Donna K. Byron, Barton W. Emanuel, Samir Mahdad, Sarbajit K. Rakshit, Craig M. Trim
  • Patent number: 10108702
    Type: Grant
    Filed: August 24, 2015
    Date of Patent: October 23, 2018
    Assignee: International Business Machines Corporation
    Inventors: Donna K. Byron, Barton W. Emanuel, Samir Mahdad, Sarbajit K. Rakshit, Craig M. Trim
  • Publication number: 20170060994
    Type: Application
    Filed: August 24, 2015
    Publication date: March 2, 2017
Inventors: Donna K. Byron, Barton W. Emanuel, Samir Mahdad, Sarbajit K. Rakshit, Craig M. Trim
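The abstract above describes classifying a follow-up spoken query as continuing the initial topic or starting a new one, using keyword mappings together with a non-verbal cue (the time elapsed between queries). The following is a minimal illustrative sketch of that idea, not the patented method: the stopword list, overlap metric, function names, and thresholds are all assumptions introduced for illustration.

```python
# Illustrative heuristic: decide whether a second query shifts topic,
# combining keyword overlap with one non-verbal cue (elapsed time).
# Thresholds and names are hypothetical, chosen only for this sketch.

STOPWORDS = {"the", "a", "an", "is", "in", "of", "to", "what", "how", "for"}

def extract_keywords(query: str) -> set:
    """Lowercase the query and drop common stopwords."""
    return {w for w in query.lower().split() if w not in STOPWORDS}

def is_new_topic(first_query: str, second_query: str,
                 elapsed_seconds: float,
                 overlap_threshold: float = 0.2,
                 pause_threshold: float = 30.0) -> bool:
    """Return True if the second query likely starts a new topic.

    A long pause between queries (the non-verbal cue) or low keyword
    overlap with the first query both point toward a topic shift.
    """
    kw1 = extract_keywords(first_query)
    kw2 = extract_keywords(second_query)
    if not kw1 or not kw2:
        return True  # nothing to compare against; treat as a fresh topic
    overlap = len(kw1 & kw2) / min(len(kw1), len(kw2))
    long_pause = elapsed_seconds > pause_threshold
    return long_pause or overlap < overlap_threshold
```

For example, a quick follow-up sharing keywords ("weather in Boston today" then "weather in Boston tomorrow", 5 seconds later) would be classified as the same topic, while an unrelated query after a long pause would be flagged as a topic shift. The patented approach additionally consults a per-user ontology mapping and cues such as facial expression or motion activity, which this sketch omits.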