Patents by Inventor Gauri Deshpande

Gauri Deshpande has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12002291
    Abstract: State-of-the-art techniques attempt to extract insights from eye features, specifically the pupil, with a focus on behavioral analysis rather than on confidence level detection. Embodiments of the present disclosure provide a method and system for confidence level detection from eye features using an ML-based approach. The method generates an overall confidence level label based on the subject's performance during an interaction, wherein the interaction being analyzed is captured as a video sequence focusing on the face of the subject. For each frame, facial features comprising an Eye-Aspect Ratio, a mouth movement, Horizontal Displacements (HDs), Vertical Displacements (VDs), Horizontal Squeezes (HSs) and Vertical Peaks (VPs) are computed, wherein the HDs, VDs, HSs and VPs are features derived from points on the eyebrow with reference to the nose tip of the detected face. This is repeated for all frames in the window. A Bi-LSTM model is trained using the facial features to derive the confidence level of the subject.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: June 4, 2024
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Sushovan Chanda, Gauri Deshpande, Sachin Patel
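    The abstract above names an Eye-Aspect Ratio and eyebrow displacements relative to the nose tip as per-frame facial features. A minimal sketch of how such features can be computed from 2D facial landmarks follows; the patent does not publish its exact formulas, so the function names are illustrative and the EAR expression is the commonly used six-landmark variant, not necessarily the one claimed:

    ```python
    import numpy as np

    def eye_aspect_ratio(eye):
        """Eye-Aspect Ratio from six (x, y) landmarks p1..p6 around the eye:
        (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|). Small values suggest a
        closed eye; larger values an open one."""
        eye = np.asarray(eye, dtype=float)
        vert = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
        horiz = np.linalg.norm(eye[0] - eye[3])
        return vert / (2.0 * horiz)

    def eyebrow_displacements(brow_points, nose_tip):
        """Per-frame horizontal and vertical displacements of eyebrow
        landmarks, measured relative to the nose tip (the HDs and VDs of
        the abstract)."""
        diff = np.asarray(brow_points, dtype=float) - np.asarray(nose_tip, dtype=float)
        return diff[:, 0], diff[:, 1]  # HDs, VDs
    ```

    Per the abstract, such features would be stacked across every frame in a window and fed to a Bi-LSTM sequence classifier that outputs the confidence level label.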
  • Patent number: 11996118
    Abstract: An important task in several wellness applications is the detection of emotional valence from speech. Two types of features of speech signals are used to detect valence: acoustic features and text features. Acoustic features are derived from short frames of speech, while text features are derived from the text transcription. The present disclosure provides systems and methods that determine the effect of text on acoustic features. Acoustic features of speech segments carrying emotion words are treated differently from segments that do not carry such words. Only specific speech segments of the input speech signal, selected using a dictionary specific to a language, are considered when assessing emotional valence. A trained model (or trained classifier) for the specific language, built either by including the acoustic features of the emotion-related words or by omitting them, is used by the system to determine emotional valence in an input speech signal.
    Type: Grant
    Filed: October 19, 2021
    Date of Patent: May 28, 2024
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Ramesh Kumar Ramakrishnan, Venkata Subramanian Viraraghavan, Rahul Dasharath Gavas, Sachin Patel, Gauri Deshpande
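    The abstract above selects speech segments for valence analysis based on a language-specific dictionary of emotion words. A minimal sketch of that selection step is shown below; the dictionary contents, function name, and timing format are all hypothetical, since the patent does not publish them:

    ```python
    # Hypothetical emotion-word dictionary for English; a real system would
    # load a curated, language-specific lexicon.
    EMOTION_WORDS = {"happy", "sad", "angry", "wonderful", "terrible"}

    def select_segments(word_timings, dictionary=EMOTION_WORDS, include=True):
        """Given (word, start_sec, end_sec) triples from a transcription,
        keep the time spans whose word is in the emotion dictionary
        (include=True), or the spans without emotion words (include=False).
        Acoustic features would then be computed only over the kept spans."""
        return [
            (start, end)
            for word, start, end in word_timings
            if (word.lower() in dictionary) == include
        ]
    ```

    The `include` flag mirrors the abstract's two training regimes: a model trained with the acoustic features of emotion-word segments included, or with those segments omitted.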
  • Publication number: 20230018693
    Abstract: State-of-the-art techniques attempt to extract insights from eye features, specifically the pupil, with a focus on behavioral analysis rather than on confidence level detection. Embodiments of the present disclosure provide a method and system for confidence level detection from eye features using an ML-based approach. The method generates an overall confidence level label based on the subject's performance during an interaction, wherein the interaction being analyzed is captured as a video sequence focusing on the face of the subject. For each frame, facial features comprising an Eye-Aspect Ratio, a mouth movement, Horizontal Displacements (HDs), Vertical Displacements (VDs), Horizontal Squeezes (HSs) and Vertical Peaks (VPs) are computed, wherein the HDs, VDs, HSs and VPs are features derived from points on the eyebrow with reference to the nose tip of the detected face. This is repeated for all frames in the window. A Bi-LSTM model is trained using the facial features to derive the confidence level of the subject.
    Type: Application
    Filed: November 4, 2021
    Publication date: January 19, 2023
    Applicant: Tata Consultancy Services Limited
    Inventors: Sushovan Chanda, Gauri Deshpande, Sachin Patel
  • Publication number: 20220130414
    Abstract: An important task in several wellness applications is the detection of emotional valence from speech. Two types of features of speech signals are used to detect valence: acoustic features and text features. Acoustic features are derived from short frames of speech, while text features are derived from the text transcription. The present disclosure provides systems and methods that determine the effect of text on acoustic features. Acoustic features of speech segments carrying emotion words are treated differently from segments that do not carry such words. Only specific speech segments of the input speech signal, selected using a dictionary specific to a language, are considered when assessing emotional valence. A trained model (or trained classifier) for the specific language, built either by including the acoustic features of the emotion-related words or by omitting them, is used by the system to determine emotional valence in an input speech signal.
    Type: Application
    Filed: October 19, 2021
    Publication date: April 28, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Ramesh Kumar Ramakrishnan, Venkata Subramanian Viraraghavan, Rahul Dasharath Gavas, Sachin Patel, Gauri Deshpande
  • Publication number: 20160295129
    Abstract: Various disclosed embodiments include methods and systems for capturing images and creating time-lapse videos. A method includes receiving image data representative of an image that is to be used as part of an image sequence, and processing the image data using edge detection to assist a user to capture a subsequent image for the image sequence at substantially a same geographical location and substantially a same device orientation as that for other images in the image sequence.
    Type: Application
    Filed: March 30, 2015
    Publication date: October 6, 2016
    Inventor: Gauri Deshpande
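    The time-lapse abstract above uses edge detection on a previously captured image to help the user line up the next shot at the same location and orientation. A minimal sketch of that idea, assuming a simple gradient-magnitude edge detector and a coverage score (the patent does not specify the detector or the guidance UI; `edge_map` and `alignment_score` are illustrative names):

    ```python
    import numpy as np

    def edge_map(gray, threshold=0.25):
        """Mark pixels whose finite-difference gradient magnitude exceeds
        `threshold` times the image's maximum gradient as edges."""
        g = np.asarray(gray, dtype=float)
        gx = np.zeros_like(g)
        gy = np.zeros_like(g)
        gx[:, :-1] = g[:, 1:] - g[:, :-1]   # horizontal gradient
        gy[:-1, :] = g[1:, :] - g[:-1, :]   # vertical gradient
        mag = np.hypot(gx, gy)
        return mag > threshold * (mag.max() or 1.0)

    def alignment_score(prev_gray, live_gray):
        """Fraction of the previous frame's edge pixels that coincide with
        edges in the live preview; values near 1.0 suggest the camera has
        returned to the same position and orientation."""
        e_prev, e_live = edge_map(prev_gray), edge_map(live_gray)
        n = e_prev.sum()
        return float((e_prev & e_live).sum() / n) if n else 1.0
    ```

    In an interactive capture app, the edge map of the last image would be overlaid on the live viewfinder, and a score like this could drive an "aligned" indicator before the next frame of the sequence is taken.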