Patents by Inventor Bilva Bhalachandra Navathe

Bilva Bhalachandra Navathe has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9110501
    Abstract: A method and system for detecting temporal segments of talking faces in a video sequence using visual cues. The system detects talking segments by classifying talking and non-talking segments in a sequence of image frames, first localizing the face, the eyes, and hence the mouth region. The localized mouth regions across the video frames are then encoded as an integrated gradient histogram (IGH) of visual features and quantified by the entropy of the IGH. The time series of per-frame entropy values is further clustered using an online temporal-segmentation (K-Means clustering) algorithm to distinguish talking mouth patterns from other mouth movements. The segmented time-series data is then used to enhance an emotion recognition system (see the sketch after this listing).
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: August 18, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sudha Velusamy, Viswanath Gopalakrishnan, Bilva Bhalachandra Navathe, Anshul Sharma
  • Publication number: 20130271361
    Abstract: A method and system for detecting temporal segments of talking faces in a video sequence using visual cues. The system detects talking segments by classifying talking and non-talking segments in a sequence of image frames, first localizing the face, the eyes, and hence the mouth region. The localized mouth regions across the video frames are then encoded as an integrated gradient histogram (IGH) of visual features and quantified by the entropy of the IGH. The time series of per-frame entropy values is further clustered using an online temporal-segmentation (K-Means clustering) algorithm to distinguish talking mouth patterns from other mouth movements. The segmented time-series data is then used to enhance an emotion recognition system.
    Type: Application
    Filed: March 13, 2013
    Publication date: October 17, 2013
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Sudha Velusamy, Viswanath Gopalakrishnan, Bilva Bhalachandra Navathe, Anshul Sharma
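The abstracts above outline a per-frame pipeline: mouth localization, a gradient-histogram encoding, entropy quantification, and two-way temporal clustering. The following is a minimal Python sketch of that general flow, not the patented method itself: the IGH is simplified to a plain gradient-magnitude histogram, the clustering is a basic batch two-cluster K-Means loop rather than the online variant the abstract names, and every function name here (gradient_histogram, histogram_entropy, segment_talking) is hypothetical.

```python
import numpy as np

def gradient_histogram(mouth_roi, bins=32):
    """Histogram of gradient magnitudes over a grayscale mouth region.

    Simplified stand-in for the patent's integrated gradient
    histogram (IGH); the exact IGH construction is not given here.
    """
    gy, gx = np.gradient(mouth_roi.astype(np.float64))
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=bins)
    return hist.astype(np.float64)

def histogram_entropy(hist, eps=1e-12):
    """Shannon entropy of a histogram, normalized to a distribution."""
    p = hist / (hist.sum() + eps)
    return -np.sum(p * np.log2(p + eps))

def segment_talking(entropy_series, iters=10):
    """Two-cluster K-Means over the per-frame entropy values.

    Assumes talking mouths produce more varied gradients, so the
    higher-entropy cluster is labeled 'talking'.
    """
    x = np.asarray(entropy_series, dtype=np.float64)
    centers = np.array([x.min(), x.max()])  # initial cluster centers
    for _ in range(iters):
        # Assign each frame to its nearest center, then recenter.
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = x[labels == k].mean()
    talking = int(centers.argmax())  # higher-entropy cluster
    return labels == talking  # boolean talking/non-talking mask per frame

# Usage, given mouth_rois: a list of 2-D grayscale mouth crops, one per frame.
# entropies = [histogram_entropy(gradient_histogram(roi)) for roi in mouth_rois]
# talking_mask = segment_talking(entropies)
```

The boolean mask that segment_talking returns corresponds to the segmented time-series data the abstract describes, which downstream components (such as an emotion recognizer) could then consume.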