Patents by Inventor Javier Movellan

Javier Movellan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20140314310
    Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
    Type: Application
    Filed: February 20, 2014
    Publication date: October 23, 2014
    Applicant: Emotient
    Inventors: Javier MOVELLAN, Marian Stewart BARTLETT, Ian FASEL, Gwen Ford LITTLEWORT, Joshua SUSSKIND, Jacob WHITEHILL
  • Publication number: 20140314284
    Abstract: Apparatus, methods, and articles of manufacture for obtaining examples that break a visual expression classifier at user devices such as tablets, smartphones, personal computers, and cameras. The examples are sent from the user devices to a server. The server may use the examples to update the classifier, and then distribute the updated classifier code and/or updated classifier parameters to the user devices. The users of the devices may be incentivized to provide the examples that break the classifier, for example, by monetary reward, access to updated versions of the classifier, public ranking or recognition of the user, or a self-rewarding game. The examples may be evaluated using a pipeline of untrained crowdsourcing providers and trained experts. The examples may contain user images and/or depersonalized information extracted from the user images.
    Type: Application
    Filed: February 11, 2014
    Publication date: October 23, 2014
    Applicant: Emotient
    Inventors: Javier MOVELLAN, Marian Stewart BARTLETT, Ian FASEL, Gwen Ford LITTLEWORT, Joshua SUSSKIND, Jacob WHITEHILL
  • Publication number: 20140315168
    Abstract: Apparatus, methods, and articles of manufacture facilitate diagnosis of affective mental and neurological disorders. Extended facial expression responses to various stimuli are evoked or spontaneously collected, and automatically evaluated using machine learning techniques and automatic facial expression measurement (AFEM) techniques. The stimuli may include pictures, videos, and tasks of various emotion-eliciting paradigms, such as a reward-punishment paradigm, an anger-eliciting paradigm, a fear-eliciting paradigm, and a structured interview paradigm. The extended facial expression responses, which may include facial expression responses as well as head pose responses and gesture responses, are analyzed using machine learning techniques to diagnose the subject, to estimate the likelihood that the subject suffers from a specific disorder, and/or to evaluate treatment efficacy.
    Type: Application
    Filed: February 12, 2014
    Publication date: October 23, 2014
    Applicant: Emotient
    Inventors: Javier MOVELLAN, Marian Stewart BARTLETT, Ian FASEL, Gwen Ford LITTLEWORT, Joshua SUSSKIND, Jacob WHITEHILL
  • Publication number: 20140301636
    Abstract: An automatic facial action coding system and method can include processing an image to identify a face in the image, to detect and align one or more facial features shown in the image, and to define one or more windows on the image. One or more distributions of pixels and color intensities can be quantified in each of the one or more windows to derive one or more two-dimensional intensity distributions of one or more colors within the window. The one or more two-dimensional intensity distributions can be processed to select image features appearing in the one or more windows and to classify one or more predefined facial actions on the face in the image. A facial action code score that includes a value indicating a relative amount of the predefined facial action occurring in the face in the image can be determined for the face in the image for each of the one or more predefined facial actions.
    Type: Application
    Filed: June 23, 2014
    Publication date: October 9, 2014
    Inventors: Marian Stewart Bartlett, Gwen Littlewort-Ford, Javier Movellan, Ian Fasel, Mark Frank
  • Publication number: 20140242560
    Abstract: A machine learning classifier is trained to compute a quality measure of a facial expression with respect to a predetermined emotion, affective state, or situation. The expression may be of a person or an animated character. The quality measure may be provided to a person. The quality measure may also be used to tune the appearance parameters of the animated character, including texture parameters. People may be trained to improve their expressiveness based on the feedback of the quality measure provided by the machine learning classifier, for example, to improve the quality of customer interactions, and to mitigate the symptoms of various affective and neurological disorders. The classifier may be built into a variety of mobile devices, including wearable devices such as Google Glass and smart watches.
    Type: Application
    Filed: February 17, 2014
    Publication date: August 28, 2014
    Applicant: Emotient
    Inventors: Javier MOVELLAN, Marian Stewart BARTLETT, Ian FASEL, Gwen Ford LITTLEWORT, Joshua SUSSKIND, Ken DENMAN, Jacob WHITEHILL
  • Patent number: 8798374
    Abstract: An automatic facial action coding system and method can include processing an image to identify a face in the image, to detect and align one or more facial features shown in the image, and to define one or more windows on the image. One or more distributions of pixels and color intensities can be quantified in each of the one or more windows to derive one or more two-dimensional intensity distributions of one or more colors within the window. The one or more two-dimensional intensity distributions can be processed to select image features appearing in the one or more windows and to classify one or more predefined facial actions on the face in the image. A facial action code score that includes a value indicating a relative amount of the predefined facial action occurring in the face in the image can be determined for the face in the image for each of the one or more predefined facial actions.
    Type: Grant
    Filed: August 26, 2009
    Date of Patent: August 5, 2014
    Assignees: The Regents of the University of California, The Research Foundation of State University of New York
    Inventors: Marian Stewart Bartlett, Gwen Littlewort-Ford, Javier Movellan, Ian Fasel, Mark Frank
  • Publication number: 20100086215
    Abstract: An automatic facial action coding system and method can include processing an image to identify a face in the image, to detect and align one or more facial features shown in the image, and to define one or more windows on the image. One or more distributions of pixels and color intensities can be quantified in each of the one or more windows to derive one or more two-dimensional intensity distributions of one or more colors within the window. The one or more two-dimensional intensity distributions can be processed to select image features appearing in the one or more windows and to classify one or more predefined facial actions on the face in the image. A facial action code score that includes a value indicating a relative amount of the predefined facial action occurring in the face in the image can be determined for the face in the image for each of the one or more predefined facial actions.
    Type: Application
    Filed: August 26, 2009
    Publication date: April 8, 2010
    Inventors: Marian Stewart Bartlett, Gwen Littlewort-Ford, Javier Movellan, Ian Fasel, Mark Frank
  • Publication number: 20070198444
    Abstract: The present invention provides an interaction device that sets its own controller so as to maximize the expected information between a hypothesis about an interaction object and the device's own inputs and outputs. A social robot can thereby judge, using only simple input/output sensors, whether a human being is present in the outside world.
    Type: Application
    Filed: January 17, 2007
    Publication date: August 23, 2007
    Inventors: Javier Movellan, Fumihide Tanaka
  • Publication number: 20050102246
    Abstract: A facial expression recognition system, and a learning method for the system, are provided. The system uses a face detection apparatus that achieves efficient learning and high-speed detection processing based on ensemble learning when detecting an area representing a detection target, is robust against shifts of face position in images, and is capable of highly accurate expression recognition. When the face detection apparatus learns with AdaBoost, the following processing is repeated to sequentially generate weak hypotheses and thereby acquire a final hypothesis: select high-performance weak hypotheses from all weak hypotheses, generate new weak hypotheses from these high-performance weak hypotheses on the basis of statistical characteristics, and select the one weak hypothesis having the highest discrimination performance.
    Type: Application
    Filed: June 17, 2004
    Publication date: May 12, 2005
    Inventors: Javier Movellan, Marian Bartlett, Gwendolen Littlewort, John Hershey, Ian Fasel, Eric Carlson, Josh Susskind, Kohtaro Sabe, Kenta Kawamoto, Kenichi Hidai
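The AdaBoost-style training loop described in the last abstract — repeatedly selecting the weak hypothesis with the highest discrimination performance and re-weighting the training examples to build a final hypothesis — can be sketched with simple decision stumps. This is a minimal illustration of the underlying boosting technique, not the patented method: the step that generates new weak hypotheses from high-performing ones using statistical characteristics is omitted, and the function names are hypothetical.

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """Minimal AdaBoost sketch with decision stumps.

    X: (n, d) feature array; y: labels in {-1, +1}.
    Returns a list of weighted weak hypotheses (alpha, feature, threshold, sign).
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)  # example weights, updated each round
    ensemble = []
    for _ in range(n_rounds):
        best = None
        # Scan all candidate stumps and keep the one with the lowest weighted
        # error, i.e. the weak hypothesis with the best discrimination.
        for j in range(d):
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = np.where(X[:, j] >= t, s, -s)
                    err = np.sum(w[pred != y])
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)      # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)      # weak-hypothesis weight
        pred = np.where(X[:, j] >= t, s, -s)
        w *= np.exp(-alpha * y * pred)             # up-weight mistakes
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    """Final hypothesis: sign of the weighted vote of the weak hypotheses."""
    score = np.zeros(len(X))
    for alpha, j, t, s in ensemble:
        score += alpha * np.where(X[:, j] >= t, s, -s)
    return np.sign(score)
```

On a one-dimensional, linearly separable toy set (e.g. `X = [[0],[1],[2],[3],[4],[5]]`, `y = [-1,-1,-1,1,1,1]`), the first round already finds a perfect stump at threshold 3 and the final vote reproduces the labels; the patented system applies the same boosting principle to face-detection features.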