Patents by Inventor Javier Movellan

Javier Movellan has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Brief, hypothetical code sketches illustrating several of the listed methods follow the listing below.

  • Patent number: 10248851
    Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by the same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by one or more experts. The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
    Type: Grant
    Filed: September 25, 2017
    Date of Patent: April 2, 2019
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
  • Patent number: 10185869
    Abstract: A computer-implemented method for image filtering (including methods implemented using laptop, desktop, mobile, and wearable devices). The method includes analyzing each image to generate a content vector for the image; applying an interest operator to the content vector, the interest operator being based on a plurality of pictures with desirable characteristics, thereby obtaining an interest index for the image; comparing the interest index for the image to an interest threshold; and taking or abstaining from one or more actions based on a result of the comparison. Related systems and articles of manufacture are also described.
    Type: Grant
    Filed: August 11, 2016
    Date of Patent: January 22, 2019
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Ken Denman, Joshua Susskind
  • Patent number: 10032091
    Abstract: A collection of photos is organized by arranging a limited number of clusters of the photos on a predefined topology, so that similar photos are placed in the same cluster or a nearby cluster. Similarity is measured in attribute space. The attributes may include automatically recognized facial expression attributes.
    Type: Grant
    Filed: June 5, 2014
    Date of Patent: July 24, 2018
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Joshua Susskind, Ken Denman
  • Patent number: 10007921
    Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
    Type: Grant
    Filed: June 18, 2015
    Date of Patent: June 26, 2018
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
  • Publication number: 20180012067
    Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by the same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by one or more experts. The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
    Type: Application
    Filed: September 25, 2017
    Publication date: January 11, 2018
    Applicant: Emotient, Inc.
    Inventors: Javier MOVELLAN, Marian Stewart BARTLETT, Ian FASEL, Gwen Ford LITTLEWORT, Joshua SUSSKIND, Jacob WHITEHILL
  • Patent number: 9779289
    Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by the same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by one or more experts. The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
    Type: Grant
    Filed: March 12, 2015
    Date of Patent: October 3, 2017
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
  • Patent number: 9552535
    Abstract: Apparatus, methods, and articles of manufacture for obtaining examples that break a visual expression classifier at user devices such as tablets, smartphones, personal computers, and cameras. The examples are sent from the user devices to a server. The server may use the examples to update the classifier, and then distribute the updated classifier code and/or updated classifier parameters to the user devices. The users of the devices may be incentivized to provide the examples that break the classifier, for example, by monetary reward, access to updated versions of the classifier, public ranking or recognition of the user, or a self-rewarding game. The examples may be evaluated using a pipeline of untrained crowdsourcing providers and trained experts. The examples may contain user images and/or depersonalized information extracted from the user images.
    Type: Grant
    Filed: February 11, 2014
    Date of Patent: January 24, 2017
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
  • Patent number: 9530048
    Abstract: An automatic facial action coding system and method can include processing an image to identify a face in the image, to detect and align one or more facial features shown in the image, and to define one or more windows on the image. One or more distributions of pixels and color intensities can be quantified in each of the one or more windows to derive one or more two-dimensional intensity distributions of one or more colors within the window. The one or more two-dimensional intensity distributions can be processed to select image features appearing in the one or more windows and to classify one or more predefined facial actions on the face in the image. A facial action code score that includes a value indicating a relative amount of the predefined facial action occurring in the face in the image can be determined for the face in the image for each of the one or more predefined facial actions.
    Type: Grant
    Filed: June 23, 2014
    Date of Patent: December 27, 2016
    Assignees: The Regents of the University of California, The Research Foundation of State University of New York
    Inventors: Marian Stewart Bartlett, Gwen Littlewort-Ford, Javier Movellan, Ian Fasel, Mark Frank
  • Publication number: 20160350588
    Abstract: A computer-implemented method for image filtering (including methods implemented using laptop, desktop, mobile, and wearable devices). The method includes analyzing each image to generate a content vector for the image; applying an interest operator to the content vector, the interest operator being based on a plurality of pictures with desirable characteristics, thereby obtaining an interest index for the image; comparing the interest index for the image to an interest threshold; and taking or abstaining from one or more actions based on a result of the comparison. Related systems and articles of manufacture are also described.
    Type: Application
    Filed: August 11, 2016
    Publication date: December 1, 2016
    Inventors: Javier MOVELLAN, Ken DENMAN, Joshua SUSSKIND
  • Patent number: 9443167
    Abstract: A computer-implemented method for image filtering (including methods implemented using laptop, desktop, mobile, and wearable devices). The method includes analyzing each image to generate a content vector for the image; applying an interest operator to the content vector, the interest operator being based on a plurality of pictures with desirable characteristics, thereby obtaining an interest index for the image; comparing the interest index for the image to an interest threshold; and taking or abstaining from one or more actions based on a result of the comparison. Related systems and articles of manufacture are also described.
    Type: Grant
    Filed: August 4, 2014
    Date of Patent: September 13, 2016
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Ken Denman, Joshua Susskind
  • Patent number: 9202110
    Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
    Type: Grant
    Filed: February 20, 2014
    Date of Patent: December 1, 2015
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
  • Publication number: 20150287054
    Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
    Type: Application
    Filed: June 18, 2015
    Publication date: October 8, 2015
    Applicant: Emotient
    Inventors: Javier MOVELLAN, Marian Stewart BARTLETT, Ian FASEL, Gwen Ford LITTLEWORT, Joshua SUSSKIND, Jacob WHITEHILL
  • Publication number: 20150186712
    Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by the same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by one or more experts. The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
    Type: Application
    Filed: March 12, 2015
    Publication date: July 2, 2015
    Applicant: EMOTIENT
    Inventors: Javier MOVELLAN, Marian Stewart BARTLETT, Ian FASEL, Gwen Ford LITTLEWORT, Joshua SUSSKIND, Jacob WHITEHILL
  • Patent number: 9008416
    Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by the same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by one or more experts. The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
    Type: Grant
    Filed: February 10, 2014
    Date of Patent: April 14, 2015
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
  • Publication number: 20150071557
    Abstract: A collection of photos is organized by arranging a limited number of clusters of the photos on a predefined topology, so that similar photos are placed in the same cluster or a nearby cluster. Similarity is measured in attribute space. The attributes may include automatically recognized facial expression attributes.
    Type: Application
    Filed: June 5, 2014
    Publication date: March 12, 2015
    Applicant: EMOTIENT
    Inventors: Javier MOVELLAN, Joshua SUSSKIND, Ken DENMAN
  • Publication number: 20150049953
    Abstract: A computer-implemented method of mapping. The method includes analyzing images of faces in a plurality of pictures associated with a location to generate content vectors, obtaining information regarding one or more vector dimensions of interest, at least some of the one or more dimensions of interest corresponding to facial expressions of emotion, and generating a representation of the location. The appearance of regions in the map varies in accordance with values of the content vectors for the one or more vector dimensions of interest. The method also includes using the representation, the step of using comprising at least one of storing, transmitting, and displaying.
    Type: Application
    Filed: August 15, 2014
    Publication date: February 19, 2015
    Inventors: Javier MOVELLAN, Joshua SUSSKIND
  • Publication number: 20150036934
    Abstract: A computer-implemented method for image filtering (including methods implemented using laptop, desktop, mobile, and wearable devices). The method includes analyzing each image to generate a content vector for the image; applying an interest operator to the content vector, the interest operator being based on a plurality of pictures with desirable characteristics, thereby obtaining an interest index for the image; comparing the interest index for the image to an interest threshold; and taking or abstaining from one or more actions based on a result of the comparison. Related systems and articles of manufacture are also described.
    Type: Application
    Filed: August 4, 2014
    Publication date: February 5, 2015
    Inventors: Javier MOVELLAN, Ken DENMAN, Joshua SUSSKIND
  • Publication number: 20140365310
    Abstract: A computer system obtains an image or video of a person, such as a shopper. The image or video includes the face of the person. The system extracts low level features from the image or video. The low level features may be Gabor features. The system examines the low level features to designate stimuli that are likely to result in preferred behaviors associated with the person. The system analyzes the plurality of designated stimuli based on predetermined criteria to select one or more selected stimuli, and then causes the selected stimuli to be rendered to the person. The predetermined criteria may be economic criteria, such as a requirement to select the stimulus with the highest expected economic benefit from among the various designated stimuli.
    Type: Application
    Filed: June 5, 2013
    Publication date: December 11, 2014
    Applicant: MACHINE PERCEPTION TECHNOLOGIES, INC.
    Inventor: Javier MOVELLAN
  • Publication number: 20140321737
    Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by the same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by one or more experts. The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
    Type: Application
    Filed: February 10, 2014
    Publication date: October 30, 2014
    Applicant: EMOTIENT
    Inventors: Javier MOVELLAN, Marian Stewart BARTLETT, Ian FASEL, Gwen Ford LITTLEWORT, Joshua SUSSKIND, Jacob WHITEHILL
  • Publication number: 20140316881
    Abstract: Apparatus, methods, and articles of manufacture facilitate analysis of a person's affective valence and arousal. A machine learning classifier is trained using training data created by (1) exposing individuals to eliciting stimuli, (2) recording extended facial expression appearances of the individuals when the individuals are exposed to the eliciting stimuli, and (3) obtaining ground truth of valence and arousal evoked from the individuals by the eliciting stimuli. The classifier is thus trained to analyze images with extended facial expressions (such as facial expressions, head poses, and/or gestures) evoked by various stimuli or spontaneously obtained, to estimate the valence and arousal of the persons in the images. The classifier may be deployed in sales kiosks, online through mobile and other devices, and in other settings.
    Type: Application
    Filed: February 13, 2014
    Publication date: October 23, 2014
    Applicant: EMOTIENT
    Inventors: Javier MOVELLAN, Marian Stewart BARTLETT, Ian FASEL, Gwen Ford LITTLEWORT, Joshua SUSSKIND, Jacob WHITEHILL
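
The crowdsourcing pipeline recited in patents 10248851, 9779289, and 9008416 above can be read as a two-gate filter over candidate images: a crowd-rating gate followed by an expert-vetting gate, with the survivors labeled as positive or negative training examples. The sketch below is a minimal, hypothetical reading of that flow; the data structures, the rating threshold, and the expert_approves callback are assumptions of this sketch, not details taken from the patents.

    # Hypothetical sketch of a crowdsourced training-example pipeline.
    from dataclasses import dataclass, field
    from typing import Callable, List, Optional, Tuple

    @dataclass
    class CandidateImage:
        path: str                     # location of the submitted image
        target_expression: str        # cue or goal the provider was asked to produce
        crowd_ratings: List[float] = field(default_factory=list)  # ratings in [0, 1]

    def crowd_score(img: CandidateImage) -> float:
        """Average rating assigned by the (same or different) crowdsourcing providers."""
        return sum(img.crowd_ratings) / len(img.crowd_ratings) if img.crowd_ratings else 0.0

    def build_training_set(candidates: List[CandidateImage],
                           quality_threshold: float,
                           expert_approves: Callable[[CandidateImage], Optional[bool]]
                           ) -> List[Tuple[str, str, int]]:
        """Return (path, expression, label) triples for images that pass both gates."""
        examples = []
        for img in candidates:
            if crowd_score(img) < quality_threshold:
                continue                    # fails the first quality criterion
            verdict = expert_approves(img)  # expert vetting: True, False, or None (discard)
            if verdict is None:
                continue
            label = 1 if verdict else 0     # positive or negative training example
            examples.append((img.path, img.target_expression, label))
        return examples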
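
Patents 10185869 and 9443167 filter images by applying an interest operator to each image's content vector and comparing the resulting interest index to a threshold. One hedged way to sketch that flow follows; treating the operator as the normalized mean vector of the desirable pictures and the index as a cosine similarity are choices made for this sketch, and the code that extracts content vectors is assumed to exist elsewhere.

    # Hypothetical sketch of the interest-operator filtering flow.
    import numpy as np

    def interest_operator(desirable_vectors: np.ndarray) -> np.ndarray:
        """One simple choice: the unit-normalized mean content vector of desirable pictures."""
        prototype = desirable_vectors.mean(axis=0)
        return prototype / np.linalg.norm(prototype)

    def interest_index(content_vector: np.ndarray, operator: np.ndarray) -> float:
        """Cosine similarity between an image's content vector and the operator."""
        v = content_vector / np.linalg.norm(content_vector)
        return float(v @ operator)

    def filter_images(content_vectors, operator, threshold=0.8):
        """Yield indices of images whose interest index meets the threshold."""
        for i, vec in enumerate(content_vectors):
            if interest_index(vec, operator) >= threshold:
                yield i   # take an action (e.g., keep or upload the image)
            # otherwise abstain from the action (e.g., skip the image)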
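
Patent 10032091 arranges a limited number of photo clusters on a predefined topology so that similar photos land in the same or a nearby cluster, with similarity measured in attribute space. A small self-organizing map is one plausible way to obtain that behavior; the grid size, learning schedule, and attribute vectors in the sketch below are illustrative assumptions rather than the patented method.

    # Hypothetical sketch: cluster photos onto a predefined 2-D grid by attribute similarity.
    import numpy as np

    def train_som(attributes: np.ndarray, grid=(4, 4), epochs=20,
                  lr=0.5, sigma=1.0, seed=0) -> np.ndarray:
        """Fit grid-arranged cluster prototypes to the photos' attribute vectors."""
        rng = np.random.default_rng(seed)
        rows, cols = grid
        protos = rng.normal(size=(rows, cols, attributes.shape[1]))   # cluster prototypes
        coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), -1)
        for _ in range(epochs):
            for x in rng.permutation(attributes):
                d = np.linalg.norm(protos - x, axis=-1)
                best = np.unravel_index(d.argmin(), d.shape)          # winning cluster
                g = np.exp(-np.sum((coords - best) ** 2, -1) / (2 * sigma ** 2))
                protos += lr * g[..., None] * (x - protos)            # pull neighbors too
        return protos

    def assign_photos(attributes: np.ndarray, protos: np.ndarray):
        """Map each photo to the grid coordinates of its most similar cluster."""
        d = np.linalg.norm(protos[None] - attributes[:, None, None], axis=-1)
        return [np.unravel_index(d[i].argmin(), d[i].shape) for i in range(len(attributes))]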
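
Patents 10007921 and 9202110 estimate interaction quality by passing per-frame expression-classifier outputs through temporal filters and a machine learning function approximator. The sketch below stands in a moving-average filter and a linear approximator for those components; the window length and weights are hypothetical and would normally be learned from labeled interactions.

    # Hypothetical sketch of interaction-quality estimation from expression channels.
    import numpy as np

    def temporal_filter(channel_outputs: np.ndarray, window: int = 30) -> np.ndarray:
        """Moving average over time for each expression channel (frames x channels)."""
        kernel = np.ones(window) / window
        return np.column_stack([np.convolve(c, kernel, mode="same")
                                for c in channel_outputs.T])

    def interaction_quality(channel_outputs: np.ndarray, weights: np.ndarray,
                            bias: float = 0.0) -> float:
        """Collapse the filtered channels over time and apply a linear approximator."""
        filtered = temporal_filter(channel_outputs)
        summary = filtered.mean(axis=0)          # one value per expression channel
        return float(summary @ weights + bias)   # estimated quality of the interaction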
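
Patent 9530048 scores predefined facial actions from intensity distributions computed in windows defined on a detected, aligned face. In the sketch below, a plain 1-D intensity histogram per window stands in for the two-dimensional distributions described in the abstract, and the window list and per-action classifiers are hypothetical inputs supplied by earlier detection, alignment, and training steps.

    # Hypothetical sketch of window-based facial action scoring.
    import numpy as np

    def window_histogram(gray: np.ndarray, window, bins: int = 16) -> np.ndarray:
        """Intensity distribution of one window (top, left, height, width), normalized."""
        top, left, height, width = window
        patch = gray[top:top + height, left:left + width]
        hist, _ = np.histogram(patch, bins=bins, range=(0, 255), density=True)
        return hist

    def facial_action_scores(gray: np.ndarray, windows, classifiers) -> dict:
        """Return {action_name: score}, where each score reflects how strongly that action appears."""
        features = np.concatenate([window_histogram(gray, w) for w in windows])
        return {name: float(clf(features)) for name, clf in classifiers.items()}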