Patents by Inventor Navroz Jehangir Daroga

Navroz Jehangir Daroga has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180129647
    Abstract: Aspects of the present disclosure are directed to systems and methods for evaluating an individual's affect or emotional state by extracting emotional meaning from audio, visual and/or textual input into a handset, mobile communication device or other peripheral device. The audio, visual and/or textual input may be collected, gathered or obtained using one or more data modules which may include, but are not limited to, a microphone, a camera, an accelerometer and a peripheral device. The data modules collect one or more sets of potentially imprecise characteristics which may then be analyzed and/or evaluated. During this analysis and/or evaluation, the imprecise characteristics may be assigned one or more weighted descriptive values and a weighted time value. The weighted descriptive values and the weighted time value are then compiled or fused to create one or more precise characteristics which may define the emotional state of an individual. (A minimal, hypothetical sketch of this weighted-fusion step appears after the listing below.)
    Type: Application
    Filed: June 9, 2017
    Publication date: May 10, 2018
    Inventors: Thomas W. Meyer, Mark Stephen Meadows, Navroz Jehangir Daroga
  • Publication number: 20160004299
    Abstract: Aspects of the present disclosure are directed to systems, devices and methods for assessing, verifying and adjusting the affective state of a user. An electronic communication, which may be a verbal, visual and/or biometric communication, is received at a computer terminal from a user. The electronic communication may be assigned at least one weighted descriptive value and a weighted time value, which are used to calculate a current affective state of the user. Optionally, the computer terminal may be triggered to interact with the user to verify the current affective state if the current affective state is ambiguous; this optional interaction may continue until verification of the current affective state is achieved. The computer terminal may then be triggered to interact with the user to adjust the current affective state upon a determination that the current affective state is outside an acceptable range from a pre-defined affective state. (A hypothetical sketch of this assess/verify/adjust flow appears after the listing below.)
    Type: Application
    Filed: July 4, 2015
    Publication date: January 7, 2016
    Inventors: Thomas W. Meyer, Mark Stephen Meadows, Navroz Jehangir Daroga
  • Publication number: 20150324352
    Abstract: Aspects of the present disclosure are directed to systems and methods for evaluating an individual's affect or emotional state by extracting emotional meaning from audio, visual and/or textual input into a handset, mobile communication device or other peripheral device. The audio, visual and/or textual input may be collected, gathered or obtained using one or more data modules which may include, but are not limited to, a microphone, a camera, an accelerometer and a peripheral device. The data modules collect one or more sets of potentially imprecise characteristics which may then be analyzed and/or evaluated. During this analysis and/or evaluation, the imprecise characteristics may be assigned one or more weighted descriptive values and a weighted time value. The weighted descriptive values and the weighted time value are then compiled or fused to create one or more precise characteristics which may define the emotional state of an individual. (The first sketch after the listing below illustrates this same weighted-fusion step.)
    Type: Application
    Filed: May 12, 2015
    Publication date: November 12, 2015
    Inventors: Thomas W. Meyer, Mark Stephen Meadows, Navroz Jehangir Daroga
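
The abstracts of publications 20180129647 and 20150324352 describe the same weighted-fusion step: imprecise characteristics gathered by data modules are assigned weighted descriptive values and a weighted time value, then compiled into precise characteristics that define an emotional state. The sketch below is only a minimal illustration of that idea; the per-module weights, the exponential recency decay, and the score-per-emotion output are assumptions made for this example and are not specified in the filings.

```python
# Minimal, hypothetical sketch of the weighted-fusion step in publications
# 20180129647 and 20150324352. All names, weights, and the decay formula are
# illustrative assumptions, not the patented method itself.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class ImpreciseCharacteristic:
    source: str               # e.g. "microphone", "camera", "accelerometer"
    emotion: str              # tentative label, e.g. "joy", "anger"
    descriptive_value: float  # module confidence in [0, 1]
    age_seconds: float        # how long ago the observation was made

# Assumed per-module weights; the disclosure only says the values are "weighted".
MODULE_WEIGHTS = {"microphone": 0.4, "camera": 0.4, "accelerometer": 0.2}

def time_weight(age_seconds: float, half_life: float = 30.0) -> float:
    """Exponential recency decay standing in for the 'weighted time value'."""
    return 0.5 ** (age_seconds / half_life)

def fuse(observations: list[ImpreciseCharacteristic]) -> dict[str, float]:
    """Compile the weighted values into one score per emotion label."""
    scores: dict[str, float] = defaultdict(float)
    total = 0.0
    for obs in observations:
        w = MODULE_WEIGHTS.get(obs.source, 0.1) * time_weight(obs.age_seconds)
        scores[obs.emotion] += w * obs.descriptive_value
        total += w
    return {emotion: s / total for emotion, s in scores.items()} if total else {}

if __name__ == "__main__":
    observations = [
        ImpreciseCharacteristic("microphone", "anger", 0.7, age_seconds=5),
        ImpreciseCharacteristic("camera", "anger", 0.6, age_seconds=20),
        ImpreciseCharacteristic("camera", "joy", 0.3, age_seconds=60),
    ]
    print(fuse(observations))  # highest score suggests the dominant state
```

In this toy example the fused output is a normalized score per emotion label; the filings leave the exact form of the "precise characteristic" open.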
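
Publication 20160004299 describes assessing an affective state from weighted values, verifying it through interaction when it is ambiguous, and adjusting it when it falls outside an acceptable range of a pre-defined state. The following sketch is a hypothetical rendering of that flow; the scoring formula, the ambiguity threshold, and the interaction hooks are assumptions for illustration only.

```python
# Hypothetical sketch of the assess/verify/adjust flow in publication
# 20160004299. Scoring, ambiguity test, and interaction hooks are assumed.
from dataclasses import dataclass

@dataclass
class Communication:
    descriptive_value: float  # weighted descriptive value in [-1, 1]
    time_weight: float        # weighted time value in [0, 1]; older -> smaller

def current_affective_state(history: list[Communication]) -> float:
    """Recency-weighted average of descriptive values (assumed formula)."""
    total = sum(c.time_weight for c in history)
    return sum(c.descriptive_value * c.time_weight for c in history) / total if total else 0.0

def is_ambiguous(state: float, threshold: float = 0.15) -> bool:
    # Treat near-neutral scores as ambiguous; the disclosure leaves the test open.
    return abs(state) < threshold

def assess_and_adjust(history: list[Communication], target: float = 0.5,
                      tolerance: float = 0.3, ask_user=input, respond=print) -> float:
    state = current_affective_state(history)
    # Optional verification loop while the estimate is ambiguous.
    while is_ambiguous(state):
        answer = ask_user("How are you feeling right now (-1 bad .. 1 good)? ")
        history.append(Communication(float(answer), time_weight=1.0))
        state = current_affective_state(history)
    # Adjustment interaction if the state falls outside the acceptable range.
    if abs(state - target) > tolerance:
        respond("Detected a low mood; offering supportive content.")
    return state
```

The `ask_user` and `respond` hooks stand in for whatever terminal interaction the disclosure contemplates; they are placeholders, not part of the described system.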