Patents Assigned to Emotient, Inc.
  • Patent number: 10319130
    Abstract: A method facilitates the use of facial images through anonymization of facial images, thereby allowing people to submit their own facial images without divulging their identities. Original facial images are accessed and perturbed to generate synthesized facial images. Personal identities contained in the original facial images are no longer discernible from the synthesized facial images. At the same time, each synthesized facial image preserves at least some of the original attributes of the corresponding original facial image.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: June 11, 2019
    Assignee: Emotient, Inc.
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
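    The anonymization idea above can be sketched in a few lines, assuming faces are represented as numeric feature vectors. The blending scheme and all names below are illustrative stand-ins, not the patented perturbation method:

    ```python
    import random

    def anonymize(face_vec, donor_vecs, blend=0.5, rng=None):
        """Perturb a facial feature vector so the original identity is
        obscured while coarse attributes are partly preserved. Here the
        perturbation is a simple blend with a randomly chosen donor face;
        the patent covers perturbation more generally."""
        rng = rng or random.Random(0)
        donor = rng.choice(donor_vecs)
        return [(1.0 - blend) * a + blend * b for a, b in zip(face_vec, donor)]

    original = [0.9, 0.1, 0.5, 0.7]                        # hypothetical features
    donors = [[0.2, 0.8, 0.4, 0.6], [0.4, 0.6, 0.5, 0.3]]  # other submitted faces
    synthesized = anonymize(original, donors)
    ```

    Because `blend` keeps half of the original vector, attribute-level information survives in the synthesized vector even though it no longer equals the original.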
  • Patent number: 10248851
    Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by the same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
    Type: Grant
    Filed: September 25, 2017
    Date of Patent: April 2, 2019
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
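    The pipeline in the abstract reduces to two filtering stages: a crowd rating threshold, then expert vetting. A minimal sketch, with illustrative data shapes (the tuple layout and `expert_ok` callable are assumptions):

    ```python
    def run_pipeline(submissions, rating_threshold, expert_ok):
        """Crowd-generated images arrive as (image_id, crowd_rating, label)
        tuples. Images meeting the first quality criterion (the crowd
        rating) are vetted by an expert (here a stand-in callable), and
        the survivors become labeled training examples."""
        passed = [s for s in submissions if s[1] >= rating_threshold]
        return [(img, label) for img, _, label in passed if expert_ok(img)]

    subs = [("img1", 4.5, "smile"), ("img2", 2.0, "smile"), ("img3", 4.8, "not_smile")]
    examples = run_pipeline(subs, rating_threshold=4.0, expert_ok=lambda i: i != "img3")
    # examples == [("img1", "smile")]: img2 failed the rating, img3 failed vetting
    ```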
  • Patent number: 10185869
    Abstract: A computer-implemented method (implementable on laptop, desktop, mobile, and wearable devices) for image filtering. The method includes analyzing each image to generate a content vector for the image; applying an interest operator to the content vector, the interest operator being based on a plurality of pictures with desirable characteristics, thereby obtaining an interest index for the image; comparing the interest index for the image to an interest threshold; and taking one or more actions, or abstaining from one or more actions, based on a result of the comparing step. Also disclosed are related systems and articles of manufacture.
    Type: Grant
    Filed: August 11, 2016
    Date of Patent: January 22, 2019
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Ken Denman, Joshua Susskind
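    The filtering steps above can be sketched directly: build an interest operator from desirable pictures, score each image's content vector against it, and compare to a threshold. The mean-vector construction and function names are one plausible reading, not the patented formulation:

    ```python
    def make_interest_operator(desirable_vecs):
        """Derive an interest operator from the content vectors of
        pictures with desirable characteristics; the mean vector is one
        simple choice."""
        n = len(desirable_vecs)
        return [sum(col) / n for col in zip(*desirable_vecs)]

    def interest_index(content_vec, operator):
        # inner product of the image's content vector with the operator
        return sum(a * b for a, b in zip(content_vec, operator))

    def filter_image(content_vec, operator, threshold):
        # take or abstain from an action based on the comparison
        return "keep" if interest_index(content_vec, operator) >= threshold else "discard"

    op = make_interest_operator([[1.0, 0.0], [0.8, 0.2]])
    decision = filter_image([1.0, 0.0], op, threshold=0.5)   # index 0.9 -> "keep"
    ```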
  • Patent number: 10032091
    Abstract: A collection of photos is organized by arranging a limited number of clusters of the photos on a predefined topology, so that similar photos are placed in the same cluster or a nearby cluster. Similarity is measured in attribute space. The attributes may include automatically recognized facial expression attributes.
    Type: Grant
    Filed: June 5, 2014
    Date of Patent: July 24, 2018
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Joshua Susskind, Ken Denman
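    A minimal sketch of placing photos on a predefined topology: assign each photo to the cluster with the nearest centroid in attribute space, so similar photos land in the same or a nearby cluster. A self-organizing-map-style layout is one plausible realization; the helper names and data below are illustrative:

    ```python
    def place_on_grid(photo_vecs, centroids):
        """Assign each photo (an attribute vector, e.g. automatically
        recognized expression scores) to the index of the nearest
        centroid on a 1-D grid of clusters."""
        def sqdist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return [min(range(len(centroids)), key=lambda i: sqdist(v, centroids[i]))
                for v in photo_vecs]

    photos = [[0.1, 0.9], [0.15, 0.85], [0.9, 0.1]]
    clusters = place_on_grid(photos, centroids=[[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
    # clusters == [0, 0, 2]: the two similar photos share a cluster
    ```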
  • Patent number: 10007921
    Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
    Type: Grant
    Filed: June 18, 2015
    Date of Patent: June 26, 2018
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
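    The estimation chain in the abstract (expression classifier outputs, temporal filters, function approximator, recommendation selector) can be sketched end to end. Every weight, window size, and suggestion string below is an illustrative assumption:

    ```python
    def temporal_filter(signal, window=3):
        """Moving average standing in for the temporal filters."""
        return [sum(signal[i:i + window]) / window
                for i in range(len(signal) - window + 1)]

    def estimate_quality(smile_frames, weights=(1.0, -0.5), bias=0.0):
        """A linear function approximator over filtered per-frame
        expression scores: positive weight on the smile level, negative
        weight on its variability."""
        smoothed = temporal_filter(smile_frames)
        mean = sum(smoothed) / len(smoothed)
        spread = max(smoothed) - min(smoothed)
        return weights[0] * mean + weights[1] * spread + bias

    def recommend(quality, threshold=0.5):
        # recommendation selector: suggest an improvement when quality is low
        return "keep it up" if quality >= threshold else "acknowledge the customer"

    q = estimate_quality([0.2, 0.4, 0.9, 0.8, 0.7])   # per-frame smile scores
    advice = recommend(q)
    ```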
  • Publication number: 20180012067
    Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by the same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
    Type: Application
    Filed: September 25, 2017
    Publication date: January 11, 2018
    Applicant: Emotient, Inc.
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
  • Patent number: 9852327
    Abstract: A system facilitates automatic recognition of facial expressions or other facial attributes. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
    Type: Grant
    Filed: January 13, 2017
    Date of Patent: December 26, 2017
    Assignee: Emotient, Inc.
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
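    The combiner module described above can be read as a probability-weighted sum: each specialized engine scores the expression assuming one head orientation, and the pose detector supplies the probability of each orientation. A minimal sketch under that assumed weighting scheme:

    ```python
    def combined_expression_metric(specialized_metrics, pose_posterior):
        """Combine pose-specific expression metrics, weighting each by
        the detected probability of its assumed head orientation, to get
        a metric that is substantially invariant to pose."""
        return sum(m * p for m, p in zip(specialized_metrics, pose_posterior))

    # smile scores from engines tuned to frontal / left / right poses
    metrics = [0.9, 0.2, 0.1]
    # pose detector output: the head is almost certainly frontal
    posterior = [0.8, 0.1, 0.1]
    score = combined_expression_metric(metrics, posterior)
    # score ~ 0.75, dominated by the frontal engine's estimate
    ```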
  • Patent number: 9779289
    Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by the same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
    Type: Grant
    Filed: March 12, 2015
    Date of Patent: October 3, 2017
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
  • Patent number: 9639743
    Abstract: A method facilitates the use of facial images through anonymization of facial images, thereby allowing people to submit their own facial images without divulging their identities. Original facial images are accessed and perturbed to generate synthesized facial images. Personal identities contained in the original facial images are no longer discernible from the synthesized facial images. At the same time, each synthesized facial image preserves at least some of the original attributes of the corresponding original facial image.
    Type: Grant
    Filed: July 17, 2015
    Date of Patent: May 2, 2017
    Assignee: Emotient, Inc.
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
  • Patent number: 9552535
    Abstract: Apparatus, methods, and articles of manufacture for obtaining examples that break a visual expression classifier at user devices such as tablets, smartphones, personal computers, and cameras. The examples are sent from the user devices to a server. The server may use the examples to update the classifier, and then distribute the updated classifier code and/or updated classifier parameters to the user devices. The users of the devices may be incentivized to provide the examples that break the classifier, for example, by monetary reward, access to updated versions of the classifier, public ranking or recognition of the user, or a self-rewarding game. The examples may be evaluated using a pipeline of untrained crowdsourcing providers and trained experts. The examples may contain user images and/or depersonalized information extracted from the user images.
    Type: Grant
    Filed: February 11, 2014
    Date of Patent: January 24, 2017
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
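    The device-to-server loop above can be sketched with a toy threshold classifier: the device collects examples the current classifier gets wrong, and the server nudges its parameters to fix them. The 1-D "smile intensity" feature and the update rule are illustrative assumptions:

    ```python
    def find_breaking_examples(classifier, labeled_images):
        """On-device step: collect (example, label) pairs the current
        classifier misclassifies, to be sent to the server."""
        return [(x, y) for x, y in labeled_images if classifier(x) != y]

    def update_threshold(breaking, old_threshold, step=0.05):
        """Toy server-side update for a threshold classifier: lower the
        threshold for false negatives, raise it for false positives."""
        shift = sum(step if y == 1 else -step for _, y in breaking)
        return old_threshold - shift

    threshold = 0.6
    clf = lambda x: 1 if x >= threshold else 0
    data = [(0.55, 1), (0.9, 1), (0.2, 0)]       # 0.55 is a false negative
    broken = find_breaking_examples(clf, data)    # [(0.55, 1)]
    new_threshold = update_threshold(broken, threshold)   # ~0.55
    ```

    In the patented scheme the server would redistribute the updated parameters to the devices; here that is just the new threshold value.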
  • Patent number: 9547808
    Abstract: A system facilitates automatic recognition of facial expressions or other facial attributes. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
    Type: Grant
    Filed: July 17, 2015
    Date of Patent: January 17, 2017
    Assignee: Emotient, Inc.
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
  • Patent number: 9443167
    Abstract: A computer-implemented method (implementable on laptop, desktop, mobile, and wearable devices) for image filtering. The method includes analyzing each image to generate a content vector for the image; applying an interest operator to the content vector, the interest operator being based on a plurality of pictures with desirable characteristics, thereby obtaining an interest index for the image; comparing the interest index for the image to an interest threshold; and taking one or more actions, or abstaining from one or more actions, based on a result of the comparing step. Also disclosed are related systems and articles of manufacture.
    Type: Grant
    Filed: August 4, 2014
    Date of Patent: September 13, 2016
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Ken Denman, Joshua Susskind
  • Patent number: 9202110
    Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
    Type: Grant
    Filed: February 20, 2014
    Date of Patent: December 1, 2015
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
  • Patent number: 9104907
    Abstract: A system facilitates automatic recognition of facial expressions. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
    Type: Grant
    Filed: July 17, 2013
    Date of Patent: August 11, 2015
    Assignee: Emotient, Inc.
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
  • Patent number: 9104905
    Abstract: A method facilitates selection of candidate matches for an individual from a database of potential applicants. A filter is calculated for the individual by processing images of people in conjunction with the individual's preferences with respect to those images. Feature sets are calculated for the potential applicants by processing images of the potential applicants. The filter is then applied to the feature sets to select candidate matches for the individual.
    Type: Grant
    Filed: May 2, 2013
    Date of Patent: August 11, 2015
    Assignee: Emotient, Inc.
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
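    The matching method above learns a per-individual filter from their reactions to images, then applies it to applicants' feature sets. A perceptron is one simple stand-in for the filter computation; all data and names below are hypothetical:

    ```python
    def train_filter(rated_images, epochs=20, lr=0.1):
        """Learn a linear filter (weights, bias) from the individual's
        like (1) / dislike (0) reactions to image feature vectors."""
        dim = len(rated_images[0][0])
        w, b = [0.0] * dim, 0.0
        for _ in range(epochs):
            for features, liked in rated_images:
                pred = 1 if sum(wi * f for wi, f in zip(w, features)) + b > 0 else 0
                err = liked - pred
                if err:
                    w = [wi + lr * err * f for wi, f in zip(w, features)]
                    b += lr * err
        return w, b

    def select_candidates(applicants, w, b):
        """Apply the filter to the applicants' feature sets and keep
        those it scores positively."""
        return [name for name, f in applicants
                if sum(wi * x for wi, x in zip(w, f)) + b > 0]

    likes = [([1.0, 0.0], 1), ([0.0, 1.0], 0)]   # reactions to sample images
    w, b = train_filter(likes)
    matches = select_candidates([("alice", [0.9, 0.1]), ("bob", [0.1, 0.9])], w, b)
    # matches == ["alice"]: her features resemble the liked image
    ```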
  • Patent number: 9105119
    Abstract: A method facilitates training of an automatic facial expression recognition system through distributed anonymization of facial images, thereby allowing people to submit their own facial images without divulging their identities. Original facial images are accessed and perturbed to generate synthesized facial images. Personal identities contained in the original facial images are no longer discernible from the synthesized facial images. At the same time, each synthesized facial image preserves at least part of the emotional expression contained in the corresponding original facial image.
    Type: Grant
    Filed: May 2, 2013
    Date of Patent: August 11, 2015
    Assignee: Emotient, Inc.
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
  • Patent number: 9008416
    Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by the same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
    Type: Grant
    Filed: February 10, 2014
    Date of Patent: April 14, 2015
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill