Patents by Inventor Ian Fasel
Ian Fasel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10319130
Abstract: A method facilitates the use of facial images through anonymization of facial images, thereby allowing people to submit their own facial images without divulging their identities. Original facial images are accessed and perturbed to generate synthesized facial images. Personal identities contained in the original facial images are no longer discernable from the synthesized facial images. At the same time, each synthesized facial image preserves at least some of the original attributes of the corresponding original facial image.
Type: Grant
Filed: May 1, 2017
Date of Patent: June 11, 2019
Assignee: Emotient, Inc.
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
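The perturbation scheme the abstract describes can be illustrated with a toy sketch: treat each face as a numeric feature vector, blend it toward the population mean, and add noise, so identity-specific deviations are attenuated while coarse attributes survive in the blend. Everything below (vector representation, blend/noise parameters, function names) is an illustrative assumption, not taken from the patent.

```python
import random

def anonymize(face, population_mean, blend=0.6, noise=0.05, rng=None):
    """Perturb one face vector: pull it toward the population mean and
    add small random noise. Identity-specific detail shrinks, while the
    mean-ward blend keeps coarse attributes recoverable."""
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    return [
        blend * x + (1.0 - blend) * m + rng.uniform(-noise, noise)
        for x, m in zip(face, population_mean)
    ]

# Two hypothetical 3-dimensional face vectors and their column-wise mean.
faces = [[0.9, 0.1, 0.4], [0.2, 0.8, 0.5]]
mean = [sum(col) / len(faces) for col in zip(*faces)]
synthesized = [anonymize(f, mean) for f in faces]
```

Each synthesized vector lies closer to the population mean than its original did, which is the sense in which individual identity is blurred while shared attributes persist.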
-
Patent number: 10248851
Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by the same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
Type: Grant
Filed: September 25, 2017
Date of Patent: April 2, 2019
Assignee: Emotient, Inc.
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
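The two-stage vetting the abstract describes — a crowd-rating screen followed by expert review, with survivors labeled positive or negative — can be sketched as a simple filter chain. The field names, threshold, and goal labels below are hypothetical stand-ins for whatever the real pipeline records.

```python
def pipeline(candidates, min_rating, expert_ok):
    """Two-stage vetting: keep crowd-rated images above a first quality
    threshold, then let an expert accept or reject each survivor.
    Returns (positives, negatives) for classifier training."""
    screened = [c for c in candidates if c["rating"] >= min_rating]
    vetted = [c for c in screened if expert_ok(c)]
    positives = [c["image"] for c in vetted if c["goal"] == "mimic"]
    negatives = [c["image"] for c in vetted if c["goal"] == "break"]
    return positives, negatives

candidates = [
    {"image": "img_a", "rating": 4.5, "goal": "mimic"},
    {"image": "img_b", "rating": 2.0, "goal": "mimic"},  # fails crowd screen
    {"image": "img_c", "rating": 4.8, "goal": "break"},
]
# Expert approves everything here; in practice expert_ok encodes judgment.
pos, neg = pipeline(candidates, min_rating=3.0, expert_ok=lambda c: True)
```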
-
Patent number: 10007921
Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
Type: Grant
Filed: June 18, 2015
Date of Patent: June 26, 2018
Assignee: Emotient, Inc.
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
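The classifier-then-filter-then-approximator chain in the abstract can be sketched with toy numbers: per-frame expression scores are smoothed by a trailing moving average (one possible temporal filter), summarized, and fed to a weighted sum standing in for the function approximator. Channel names, window size, and weights are all illustrative assumptions.

```python
def moving_average(xs, w):
    """Temporal filter sketch: average each score over a trailing window."""
    return [sum(xs[max(0, i - w + 1): i + 1]) / min(w, i + 1)
            for i in range(len(xs))]

def interaction_quality(frame_scores, weights, window=3):
    """Filter per-frame classifier outputs, summarize each channel,
    then apply a linear function approximator (weighted sum)."""
    filtered = {k: moving_average(v, window) for k, v in frame_scores.items()}
    summaries = {k: sum(v) / len(v) for k, v in filtered.items()}
    return sum(weights[k] * summaries[k] for k in weights)

# Hypothetical per-frame outputs of two expression classifiers.
scores = {"smile": [0.2, 0.8, 0.9, 0.7], "frown": [0.1, 0.0, 0.1, 0.0]}
q = interaction_quality(scores, weights={"smile": 1.0, "frown": -1.0})
```

A positive `q` here reflects a smile-dominated interaction; the patent's recommendation selector would consume such estimates together with the approximator's weights.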
-
Publication number: 20180012067
Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by the same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
Type: Application
Filed: September 25, 2017
Publication date: January 11, 2018
Applicant: Emotient, Inc.
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
-
Patent number: 9852327
Abstract: A system facilitates automatic recognition of facial expressions or other facial attributes. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
Type: Grant
Filed: January 13, 2017
Date of Patent: December 26, 2017
Assignee: Emotient, Inc.
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
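One natural reading of the combiner module described above is a posterior-weighted mixture: the pose detector yields a probability for each candidate head orientation, and each pose-specialized metric is weighted by that probability. The sketch below assumes that reading; the pose labels, scores, and probabilities are invented for illustration.

```python
def combine(specialized_metrics, pose_probs):
    """Combiner-module sketch: weight each pose-specialized expression
    metric by the detected probability of that head orientation,
    yielding a metric approximately invariant to pose."""
    assert abs(sum(pose_probs.values()) - 1.0) < 1e-9  # valid posterior
    return sum(pose_probs[p] * specialized_metrics[p] for p in pose_probs)

# Smile scores from engines specialized for three head orientations,
# and a hypothetical pose detector's posterior over those orientations.
smile_by_pose = {"frontal": 0.90, "left": 0.60, "right": 0.55}
pose_posterior = {"frontal": 0.7, "left": 0.2, "right": 0.1}
metric = combine(smile_by_pose, pose_posterior)
```

Because the frontal engine dominates the posterior here, the combined metric stays close to the frontal engine's score while still hedging against the detector being wrong.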
-
Publication number: 20170301121
Abstract: A method facilitates the use of facial images through anonymization of facial images, thereby allowing people to submit their own facial images without divulging their identities. Original facial images are accessed and perturbed to generate synthesized facial images. Personal identities contained in the original facial images are no longer discernable from the synthesized facial images. At the same time, each synthesized facial image preserves at least some of the original attributes of the corresponding original facial image.
Type: Application
Filed: May 1, 2017
Publication date: October 19, 2017
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
-
Patent number: 9779289
Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by the same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
Type: Grant
Filed: March 12, 2015
Date of Patent: October 3, 2017
Assignee: Emotient, Inc.
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
-
Publication number: 20170213075
Abstract: A system facilitates automatic recognition of facial expressions or other facial attributes. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
Type: Application
Filed: January 13, 2017
Publication date: July 27, 2017
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
-
Patent number: 9639743
Abstract: A method facilitates the use of facial images through anonymization of facial images, thereby allowing people to submit their own facial images without divulging their identities. Original facial images are accessed and perturbed to generate synthesized facial images. Personal identities contained in the original facial images are no longer discernable from the synthesized facial images. At the same time, each synthesized facial image preserves at least some of the original attributes of the corresponding original facial image.
Type: Grant
Filed: July 17, 2015
Date of Patent: May 2, 2017
Assignee: Emotient, Inc.
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
-
Patent number: 9552535
Abstract: Apparatus, methods, and articles of manufacture for obtaining examples that break a visual expression classifier at user devices such as tablets, smartphones, personal computers, and cameras. The examples are sent from the user devices to a server. The server may use the examples to update the classifier, and then distribute the updated classifier code and/or updated classifier parameters to the user devices. The users of the devices may be incentivized to provide the examples that break the classifier, for example, by monetary reward, access to updated versions of the classifier, public ranking or recognition of the user, or a self-rewarding game. The examples may be evaluated using a pipeline of untrained crowdsourcing providers and trained experts. The examples may contain user images and/or depersonalized information extracted from the user images.Type: Grant
Filed: February 11, 2014
Date of Patent: January 24, 2017
Assignee: Emotient, Inc.
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
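The device-side half of this loop amounts to keeping only the submissions the current classifier misclassifies — those are the "breaking" examples worth sending to the server for retraining. A minimal sketch under that assumption, with an invented toy classifier and invented labels:

```python
def find_breaking_examples(classifier, submissions):
    """Keep only (input, true_label) pairs the current classifier gets
    wrong; these are the 'breaking' examples to upload to the server."""
    return [(x, label) for x, label in submissions
            if classifier(x) != label]

# Toy stand-in classifier: calls anything with score > 0.5 a "smile".
clf = lambda score: "smile" if score > 0.5 else "neutral"

# (score, user-asserted true label) pairs from a hypothetical device.
subs = [(0.9, "smile"), (0.6, "neutral"), (0.2, "neutral")]
breaking = find_breaking_examples(clf, subs)
```

Only the middle submission disagrees with the classifier, so only it would be uploaded; the server would then vet such examples (per the pipeline of crowd providers and experts) before updating and redistributing the classifier.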
-
Patent number: 9547808
Abstract: A system facilitates automatic recognition of facial expressions or other facial attributes. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
Type: Grant
Filed: July 17, 2015
Date of Patent: January 17, 2017
Assignee: Emotient, Inc.
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
-
Patent number: 9530048
Abstract: An automatic facial action coding system and method can include processing an image to identify a face in the image, to detect and align one or more facial features shown in the image, and to define one or more windows on the image. One or more distributions of pixels and color intensities can be quantified in each of the one or more windows to derive one or more two-dimensional intensity distributions of one or more colors within the window. The one or more two-dimensional intensity distributions can be processed to select image features appearing in the one or more windows and to classify one or more predefined facial actions on the face in the image. A facial action code score that includes a value indicating a relative amount of the predefined facial action occurring in the face in the image can be determined for the face in the image for each of the one or more predefined facial actions.
Type: Grant
Filed: June 23, 2014
Date of Patent: December 27, 2016
Assignees: The Regents of the University of California; The Research Foundation of State University of New York
Inventors: Marian Stewart Bartlett, Gwen Littlewort-Ford, Javier Movellan, Ian Fasel, Mark Frank
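The window-and-distribution step described above can be reduced to a toy one-dimensional version: quantize each window's pixel intensities into a normalized histogram (a stand-in for the patent's two-dimensional intensity distributions), concatenate the histograms, and score a predefined facial action with a linear classifier over those features. Pixel values, bin count, and weights below are all invented for illustration.

```python
def window_histogram(pixels, bins=4):
    """Quantize one window's intensities (values in [0, 1]) into a
    coarse histogram, normalized to sum to 1."""
    hist = [0] * bins
    for p in pixels:
        hist[min(int(p * bins), bins - 1)] += 1
    return [h / len(pixels) for h in hist]

def action_score(windows, weights):
    """Score one predefined facial action as a weighted sum over the
    concatenated per-window histograms (a linear-classifier sketch)."""
    features = [v for w in windows for v in window_histogram(w)]
    return sum(f * wt for f, wt in zip(features, weights))

mouth = [0.9, 0.8, 0.85, 0.7]   # bright mouth window, e.g. teeth showing
brow = [0.2, 0.3, 0.25, 0.1]    # dark brow window
score = action_score([mouth, brow], weights=[0, 0, 0, 1, 1, 0, 0, 0])
```

The returned score plays the role of the facial action code score: its magnitude indicates the relative amount of the action present, given the chosen weights.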
-
Patent number: 9202110
Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
Type: Grant
Filed: February 20, 2014
Date of Patent: December 1, 2015
Assignee: Emotient, Inc.
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
-
Publication number: 20150324633
Abstract: A method facilitates the use of facial images through anonymization of facial images, thereby allowing people to submit their own facial images without divulging their identities. Original facial images are accessed and perturbed to generate synthesized facial images. Personal identities contained in the original facial images are no longer discernable from the synthesized facial images. At the same time, each synthesized facial image preserves at least some of the original attributes of the corresponding original facial image.
Type: Application
Filed: July 17, 2015
Publication date: November 12, 2015
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
-
Publication number: 20150324632
Abstract: A system facilitates automatic recognition of facial expressions or other facial attributes. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
Type: Application
Filed: July 17, 2015
Publication date: November 12, 2015
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
-
Publication number: 20150287054
Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
Type: Application
Filed: June 18, 2015
Publication date: October 8, 2015
Applicant: Emotient
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
-
Patent number: 9104905
Abstract: A method facilitates selection of candidate matches for an individual from a database of potential applicants. A filter is calculated for the individual by processing images of people in conjunction with the individual's preferences with respect to those images. Feature sets are calculated for the potential applicants by processing images of the potential applicants. The filter is then applied to the feature sets to select candidate matches for the individual.
Type: Grant
Filed: May 2, 2013
Date of Patent: August 11, 2015
Assignee: Emotient, Inc.
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
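One simple instantiation of the filter-then-apply scheme above: learn the filter as the mean feature vector of images the individual liked, then score each applicant's feature set against it with a dot product and rank. The feature vectors, names, and the mean-of-liked construction are all illustrative assumptions, not the patent's actual method.

```python
def learn_filter(rated_images):
    """Fit a per-feature preference filter as the mean feature vector
    of the images the individual rated positively (a toy choice)."""
    liked = [feats for feats, liked_it in rated_images if liked_it]
    return [sum(col) / len(liked) for col in zip(*liked)]

def rank_candidates(filt, candidates):
    """Apply the filter to each applicant's feature set (dot product)
    and return applicant names sorted best match first."""
    scored = [(sum(a * b for a, b in zip(filt, feats)), name)
              for name, feats in candidates]
    return [name for _, name in sorted(scored, reverse=True)]

# (feature vector, liked?) pairs from the individual's ratings.
ratings = [([1.0, 0.0], True), ([0.9, 0.1], True), ([0.0, 1.0], False)]
filt = learn_filter(ratings)    # roughly [0.95, 0.05]
order = rank_candidates(filt, [("a", [0.1, 0.9]), ("b", [0.8, 0.2])])
```

Applicant "b" resembles the liked examples along the first feature, so it ranks first.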
-
Patent number: 9105119
Abstract: A method facilitates training of an automatic facial expression recognition system through distributed anonymization of facial images, thereby allowing people to submit their own facial images without divulging their identities. Original facial images are accessed and perturbed to generate synthesized facial images. Personal identities contained in the original facial images are no longer discernable from the synthesized facial images. At the same time, each synthesized facial image preserves at least part of the emotional expression contained in the corresponding original facial image.
Type: Grant
Filed: May 2, 2013
Date of Patent: August 11, 2015
Assignee: Emotient, Inc.
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
-
Patent number: 9104907
Abstract: A system facilitates automatic recognition of facial expressions. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
Type: Grant
Filed: July 17, 2013
Date of Patent: August 11, 2015
Assignee: Emotient, Inc.
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
-
Publication number: 20150186712
Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by the same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
Type: Application
Filed: March 12, 2015
Publication date: July 2, 2015
Applicant: Emotient
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill