Patents by Inventor Seyedmohammad Mavadati

Seyedmohammad Mavadati has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170098122
    Abstract: Image content is analyzed in order to present an associated representative expression. Images of one or more individuals are obtained, and processors are used to identify the faces of the one or more individuals in the images. Facial features are extracted from the identified faces and facial landmark detection is performed. Classifiers are used to map the facial landmarks to various emotional content. The identified facial landmarks are translated into a representative icon, where the translation is based on the classifiers. A set of emoji can be imported, and the representative icon is selected from the set of emoji. The emoji selection is based on emotion content analysis of the face. The selected emoji can be a static, animated, or cartoon representation of emotion. The individuals can share the selected emoji through insertion into email, texts, and social sharing websites.
    Type: Application
    Filed: December 9, 2016
    Publication date: April 6, 2017
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, May Amr Fouad, Abdelrahman Mahmoud, Seyedmohammad Mavadati, Daniel McDuff
  • Publication number: 20170011258
    Abstract: Facial expressions are evaluated for control of robots. One or more images of a face are captured. The images are analyzed for mental state data. The images are analyzed to determine a facial expression of the face within an identified region of interest. Mental state information is generated. A context for the robot operation is determined. A context for the individual is determined. The actions of a robot are then controlled based on the facial expressions and the mental state information that was generated. Displays, color, sound, motion, and voice response for the robot are controlled based on the facial expressions of one or more people.
    Type: Application
    Filed: September 23, 2016
    Publication date: January 12, 2017
    Inventors: Boisy G. Pitre, Rana el Kaliouby, Abdelrahman Mahmoud, Seyedmohammad Mavadati, Daniel McDuff, Panu James Turcot, Gabriele Zijderveld
  • Publication number: 20160191995
    Abstract: Facial evaluation is performed on one or more videos captured from an individual viewing a display. The images are evaluated to determine whether the display was viewed by the individual. The individual views a media presentation that includes incorporated tags and is rendered on the display. Based on the tags, video of the individual is captured and evaluated using a classifier. The evaluating includes determining whether the individual is in front of the screen, facing the screen, and gazing at the screen. An engagement score and emotional responses are determined for media and images provided on the display.
    Type: Application
    Filed: March 4, 2016
    Publication date: June 30, 2016
    Inventors: Rana el Kaliouby, Nicholas Langeveld, Daniel McDuff, Seyedmohammad Mavadati
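The abstracts above share a common pipeline: capture an image of a face, extract facial landmarks, run classifiers to infer emotional content, and map the result to an output such as an emoji, a robot action, or an engagement score. The following minimal Python sketch illustrates that flow only; the landmark schema, the rule thresholds, and all function names are invented for illustration and are not taken from the patents, which describe trained classifiers rather than hand-written rules.

```python
# Illustrative landmark -> emotion -> emoji pipeline (toy rules, hypothetical schema).
# Coordinates follow image convention: y increases downward.

EMOJI = {"joy": "😀", "sadness": "😢", "surprise": "😮", "neutral": "😐"}

def extract_features(landmarks):
    """Derive simple geometric features from (x, y) mouth landmark points.

    `landmarks` is a dict with 'mouth_left', 'mouth_right', 'mouth_center',
    'mouth_top', and 'mouth_bottom' keys (a hypothetical schema).
    """
    corner_y = (landmarks["mouth_left"][1] + landmarks["mouth_right"][1]) / 2
    # Positive curve: mouth corners sit above the mouth center (a smile shape).
    mouth_curve = landmarks["mouth_center"][1] - corner_y
    mouth_open = landmarks["mouth_bottom"][1] - landmarks["mouth_top"][1]
    return {"mouth_curve": mouth_curve, "mouth_open": mouth_open}

def classify_emotion(features):
    """Rule-based stand-in for the trained classifiers the abstracts mention."""
    if features["mouth_open"] > 10:
        return "surprise"
    if features["mouth_curve"] > 2:
        return "joy"
    if features["mouth_curve"] < -2:
        return "sadness"
    return "neutral"

def select_emoji(landmarks):
    """Translate detected landmarks into a representative icon."""
    return EMOJI[classify_emotion(extract_features(landmarks))]
```

In a real system the rule-based classifier would be replaced by a model trained on labeled facial-expression data, and the landmarks would come from a face detector and landmark predictor rather than being supplied by hand.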