Patents by Inventor Ian Fasel

Ian Fasel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10319130
    Abstract: A method facilitates the use of facial images through anonymization of facial images, thereby allowing people to submit their own facial images without divulging their identities. Original facial images are accessed and perturbed to generate synthesized facial images. Personal identities contained in the original facial images are no longer discernable from the synthesized facial images. At the same time, each synthesized facial image preserves at least some of the original attributes of the corresponding original facial image.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: June 11, 2019
    Assignee: Emotient, Inc.
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
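The abstract above describes anonymization only at a high level: perturb an original facial image so identity is obscured while some attributes survive. As a purely illustrative toy sketch (not the patented method — real systems operate on images, and the blend-with-mean-plus-noise scheme, the `anonymize` name, and all parameters here are assumptions), the idea might look like:

```python
import random

def anonymize(original, donors, alpha=0.5, noise=0.05, rng=None):
    """Perturb an 'original' face (a toy feature vector, standing in for
    an image) by blending it with the mean of donor faces and adding
    small random noise. The result no longer matches the original
    exactly, while coarse attribute values are partly preserved."""
    rng = rng or random.Random(0)
    mean = [sum(col) / len(donors) for col in zip(*donors)]
    return [
        alpha * x + (1 - alpha) * m + rng.uniform(-noise, noise)
        for x, m in zip(original, mean)
    ]
```

Each output component stays within the noise bound of the identity-blurring blend, which is one simple way to make the "preserves at least some of the original attributes" property concrete.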
  • Patent number: 10248851
    Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
    Type: Grant
    Filed: September 25, 2017
    Date of Patent: April 2, 2019
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
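The pipeline in this abstract has a clear two-stage shape: crowd-rated submissions that clear a first quality threshold are vetted by experts before becoming training examples. A minimal sketch of that flow (the dict keys, `pipeline` name, and threshold semantics are assumptions for illustration, not the claimed implementation):

```python
def pipeline(submissions, quality_threshold, expert_vet):
    """Toy crowdsourcing pipeline: keep crowd-rated submissions that meet
    a first quality criterion, pass them to an expert vetting function,
    and emit the vetted ones as (image, label) training examples."""
    screened = [s for s in submissions if s["crowd_rating"] >= quality_threshold]
    return [(s["image"], s["label"]) for s in screened if expert_vet(s)]
```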
  • Patent number: 10007921
    Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
    Type: Grant
    Filed: June 18, 2015
    Date of Patent: June 26, 2018
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
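This abstract names three processing stages: expression classifiers produce per-frame scores, temporal filters smooth them, and a function approximator maps the result to a quality estimate. A toy sketch of that chain, using a moving average as the temporal filter and a linear model as the approximator (both are illustrative assumptions, as is the `interaction_quality` name):

```python
def interaction_quality(frame_scores, weights, window=3):
    """Sketch: smooth each expression channel's per-frame classifier
    outputs with a moving average (temporal filter), pool over time,
    then combine channels with a linear function approximator."""
    pooled = []
    for ch in frame_scores:  # one list of per-frame scores per channel
        smoothed = [
            sum(ch[max(0, i - window + 1): i + 1]) / (i - max(0, i - window + 1) + 1)
            for i in range(len(ch))
        ]
        pooled.append(sum(smoothed) / len(smoothed))
    return sum(w * v for w, v in zip(weights, pooled))
```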
  • Publication number: 20180012067
    Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
    Type: Application
    Filed: September 25, 2017
    Publication date: January 11, 2018
    Applicant: Emotient, Inc.
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
  • Patent number: 9852327
    Abstract: A system facilitates automatic recognition of facial expressions or other facial attributes. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
    Type: Grant
    Filed: January 13, 2017
    Date of Patent: December 26, 2017
    Assignee: Emotient, Inc.
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
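The combiner module described above has a natural probabilistic reading: each specialized engine scores the expression assuming one head pose, and the pose detector's output weights those scores. As an illustrative sketch only (the expectation-over-poses form and the `combine_expression` name are assumptions, not the claimed combiner):

```python
def combine_expression(specialized_metrics, pose_probs):
    """Combine pose-specialized expression metrics by weighting each
    one with the pose detector's probability for its assumed head
    orientation, yielding a roughly pose-invariant metric."""
    assert abs(sum(pose_probs.values()) - 1.0) < 1e-9
    return sum(specialized_metrics[pose] * p for pose, p in pose_probs.items())
```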
  • Publication number: 20170301121
    Abstract: A method facilitates the use of facial images through anonymization of facial images, thereby allowing people to submit their own facial images without divulging their identities. Original facial images are accessed and perturbed to generate synthesized facial images. Personal identities contained in the original facial images are no longer discernable from the synthesized facial images. At the same time, each synthesized facial image preserves at least some of the original attributes of the corresponding original facial image.
    Type: Application
    Filed: May 1, 2017
    Publication date: October 19, 2017
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
  • Patent number: 9779289
    Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
    Type: Grant
    Filed: March 12, 2015
    Date of Patent: October 3, 2017
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
  • Publication number: 20170213075
    Abstract: A system facilitates automatic recognition of facial expressions or other facial attributes. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
    Type: Application
    Filed: January 13, 2017
    Publication date: July 27, 2017
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
  • Patent number: 9639743
    Abstract: A method facilitates the use of facial images through anonymization of facial images, thereby allowing people to submit their own facial images without divulging their identities. Original facial images are accessed and perturbed to generate synthesized facial images. Personal identities contained in the original facial images are no longer discernable from the synthesized facial images. At the same time, each synthesized facial image preserves at least some of the original attributes of the corresponding original facial image.
    Type: Grant
    Filed: July 17, 2015
    Date of Patent: May 2, 2017
    Assignee: Emotient, Inc.
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
  • Patent number: 9552535
    Abstract: Apparatus, methods, and articles of manufacture for obtaining examples that break a visual expression classifier at user devices such as tablets, smartphones, personal computers, and cameras. The examples are sent from the user devices to a server. The server may use the examples to update the classifier, and then distribute the updated classifier code and/or updated classifier parameters to the user devices. The users of the devices may be incentivized to provide the examples that break the classifier, for example, by monetary reward, access to updated versions of the classifier, public ranking or recognition of the user, a self-rewarding game. The examples may be evaluated using a pipeline of untrained crowdsourcing providers and trained experts. The examples may contain user images and/or depersonalized information extracted from the user images.
    Type: Grant
    Filed: February 11, 2014
    Date of Patent: January 24, 2017
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
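The device-side step in this abstract — finding examples that "break" the current classifier so they can be sent to the server for retraining — reduces to collecting misclassified inputs. A minimal sketch (function name and data shapes are assumed for illustration):

```python
def find_breaking_examples(classifier, labeled_images):
    """Return the (image, label) pairs the current classifier gets
    wrong; these are the candidates a device would submit to the
    server to drive a classifier update."""
    return [(img, label) for img, label in labeled_images
            if classifier(img) != label]
```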
  • Patent number: 9547808
    Abstract: A system facilitates automatic recognition of facial expressions or other facial attributes. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
    Type: Grant
    Filed: July 17, 2015
    Date of Patent: January 17, 2017
    Assignee: Emotient, Inc.
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
  • Patent number: 9530048
    Abstract: An automatic facial action coding system and method can include processing an image to identify a face in the image, to detect and align one or more facial features shown in the image, and to define one or more windows on the image. One or more distributions of pixels and color intensities can be quantified in each of the one or more windows to derive one or more two-dimensional intensity distributions of one or more colors within the window. The one or more two-dimensional intensity distributions can be processed to select image features appearing in the one or more windows and to classify one or more predefined facial actions on the face in the image. A facial action code score that includes a value indicating a relative amount of the predefined facial action occurring in the face in the image can be determined for the face in the image for each of the one or more predefined facial actions.
    Type: Grant
    Filed: June 23, 2014
    Date of Patent: December 27, 2016
    Assignees: The Regents Of The University Of California, The Research Foundation of State University of New York
    Inventors: Marian Stewart Bartlett, Gwen Littlewort-Ford, Javier Movellan, Ian Fasel, Mark Frank
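One concrete step this abstract describes is quantifying the distribution of pixel intensities inside a window to derive a feature for the facial-action classifier. A toy sketch of that step as a normalized intensity histogram (bin count, range, and the `window_histogram` name are illustrative assumptions, not the claimed feature):

```python
def window_histogram(pixels, bins=4, lo=0, hi=256):
    """Quantify the intensity distribution in one image window as a
    normalized histogram; such per-window distributions are the kind
    of feature a facial action classifier could consume."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for p in pixels:
        counts[min(bins - 1, int((p - lo) / width))] += 1
    return [c / len(pixels) for c in counts]
```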
  • Patent number: 9202110
    Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
    Type: Grant
    Filed: February 20, 2014
    Date of Patent: December 1, 2015
    Assignee: Emotient, Inc.
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
  • Publication number: 20150324633
    Abstract: A method facilitates the use of facial images through anonymization of facial images, thereby allowing people to submit their own facial images without divulging their identities. Original facial images are accessed and perturbed to generate synthesized facial images. Personal identities contained in the original facial images are no longer discernable from the synthesized facial images. At the same time, each synthesized facial image preserves at least some of the original attributes of the corresponding original facial image.
    Type: Application
    Filed: July 17, 2015
    Publication date: November 12, 2015
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
  • Publication number: 20150324632
    Abstract: A system facilitates automatic recognition of facial expressions or other facial attributes. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
    Type: Application
    Filed: July 17, 2015
    Publication date: November 12, 2015
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
  • Publication number: 20150287054
    Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
    Type: Application
    Filed: June 18, 2015
    Publication date: October 8, 2015
    Applicant: Emotient
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
  • Patent number: 9104905
    Abstract: A method facilitates selection of candidate matches for an individual from a database of potential applicants. A filter is calculated for the individual by processing images of people in conjunction with the individual's preferences with respect to those images. Feature sets are calculated for the potential applicants by processing images of the potential applicants. The filter is then applied to the feature sets to select candidate matches for the individual.
    Type: Grant
    Filed: May 2, 2013
    Date of Patent: August 11, 2015
    Assignee: Emotient, Inc.
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
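This abstract describes learning a per-individual filter from rated images and applying it to candidates' feature sets. As a purely illustrative sketch (averaging liked features into a prototype and ranking by dot product is an assumption for clarity, not the patented filter):

```python
def learn_filter(rated):
    """Toy preference filter: average the feature vectors of the images
    the individual liked, yielding a prototype vector."""
    liked = [features for features, liked_it in rated if liked_it]
    return [sum(col) / len(liked) for col in zip(*liked)]

def rank_candidates(filter_vec, candidates):
    """Score each candidate's feature set against the filter (dot
    product) and return candidates best-first."""
    score = lambda f: sum(a * b for a, b in zip(filter_vec, f))
    return sorted(candidates, key=lambda c: score(c[1]), reverse=True)
```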
  • Patent number: 9105119
    Abstract: A method facilitates training of an automatic facial expression recognition system through distributed anonymization of facial images, thereby allowing people to submit their own facial images without divulging their identities. Original facial images are accessed and perturbed to generate synthesized facial images. Personal identities contained in the original facial images are no longer discernable from the synthesized facial images. At the same time, each synthesized facial image preserves at least part of the emotional expression contained in the corresponding original facial image.
    Type: Grant
    Filed: May 2, 2013
    Date of Patent: August 11, 2015
    Assignee: Emotient, Inc.
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
  • Patent number: 9104907
    Abstract: A system facilitates automatic recognition of facial expressions. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
    Type: Grant
    Filed: July 17, 2013
    Date of Patent: August 11, 2015
    Assignee: Emotient, Inc.
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
  • Publication number: 20150186712
    Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
    Type: Application
    Filed: March 12, 2015
    Publication date: July 2, 2015
    Applicant: Emotient
    Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill