Patents by Inventor Jacob Whitehill
Jacob Whitehill has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11436714
Abstract: Embodiments of the innovation relate to an emotional quality estimation device comprising a controller having a memory and a processor, the controller configured to execute a training engine with labelled training data to train a neural network and generate a classroom analysis machine, the labelled training data including historical video data and an associated classroom quality score table; receive a classroom observation video from a classroom environment; execute the classroom analysis machine relative to the classroom observation video from the classroom environment to generate an emotional quality score relating to the emotional quality of the classroom environment; and output the emotional quality score for the classroom environment.
Type: Grant
Filed: August 21, 2020
Date of Patent: September 6, 2022
Assignees: Worcester Polytechnic Institute, University of Virginia Patent Foundation
Inventors: Jacob Whitehill, Anand Ramakrishnan, Erin Ottmar, Jennifer LoCasale-Crouch
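The pipeline this abstract describes — train a model on labelled historical classroom videos, then score a new observation video — can be sketched roughly as follows. This is an illustrative toy, not the patented implementation: feature extraction is stubbed out, and all function names and data are hypothetical.

```python
# Hypothetical sketch: train a "classroom analysis machine" on labelled
# historical videos, then use it to score a new classroom observation video.

def extract_features(video_frames):
    # Stand-in for a real video feature extractor (e.g. a neural network):
    # here we simply average per-frame brightness values.
    return sum(video_frames) / len(video_frames)

def train_analysis_machine(labelled_data):
    # labelled_data: list of (video_frames, quality_score) pairs, playing the
    # role of the historical videos plus classroom quality score table.
    examples = [(extract_features(v), score) for v, score in labelled_data]
    def score_video(video_frames):
        # 1-nearest-neighbour lookup over extracted features.
        f = extract_features(video_frames)
        nearest = min(examples, key=lambda e: abs(e[0] - f))
        return nearest[1]
    return score_video

# Historical "videos" reduced to lists of per-frame brightness values.
training = [([0.1, 0.2, 0.3], 2.0),   # low emotional-quality classroom
            ([0.8, 0.9, 1.0], 6.5)]   # high emotional-quality classroom
machine = train_analysis_machine(training)
print(machine([0.85, 0.9, 0.95]))     # → 6.5
```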
-
Publication number: 20210056676
Abstract: Embodiments of the innovation relate to an emotional quality estimation device comprising a controller having a memory and a processor, the controller configured to execute a training engine with labelled training data to train a neural network and generate a classroom analysis machine, the labelled training data including historical video data and an associated classroom quality score table; receive a classroom observation video from a classroom environment; execute the classroom analysis machine relative to the classroom observation video from the classroom environment to generate an emotional quality score relating to the emotional quality of the classroom environment; and output the emotional quality score for the classroom environment.
Type: Application
Filed: August 21, 2020
Publication date: February 25, 2021
Applicants: Worcester Polytechnic Institute, University of Virginia Patent Foundation
Inventors: Jacob Whitehill, Anand Ramakrishnan, Erin Ottmar, Jennifer LoCasale-Crouch
-
Patent number: 10319130
Abstract: A method facilitates the use of facial images through anonymization of facial images, thereby allowing people to submit their own facial images without divulging their identities. Original facial images are accessed and perturbed to generate synthesized facial images. Personal identities contained in the original facial images are no longer discernable from the synthesized facial images. At the same time, each synthesized facial image preserves at least some of the original attributes of the corresponding original facial image.
Type: Grant
Filed: May 1, 2017
Date of Patent: June 11, 2019
Assignee: Emotient, Inc.
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
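The core idea of this abstract — perturb an image so that identity-bearing detail is destroyed while a chosen attribute is preserved — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the patented method: images are one-dimensional lists, and mean intensity stands in for an expression attribute.

```python
import random

# Illustrative anonymization-by-perturbation sketch: the synthesized "image"
# must differ from the original while preserving a chosen attribute (here,
# mean intensity). All names and data are hypothetical.

def synthesize(image, noise=0.3, seed=0):
    rng = random.Random(seed)
    perturbed = list(image)
    # Add zero-sum pairwise noise so the mean intensity is preserved exactly.
    for i in range(0, len(perturbed) - 1, 2):
        d = rng.uniform(-noise, noise)
        perturbed[i] += d
        perturbed[i + 1] -= d
    return perturbed

original = [0.2, 0.4, 0.6, 0.8]
synth = synthesize(original)
mean = lambda xs: sum(xs) / len(xs)
assert synth != original                         # pixel detail changed
assert abs(mean(synth) - mean(original)) < 1e-9  # attribute preserved
```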
-
Patent number: 10248851
Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
Type: Grant
Filed: September 25, 2017
Date of Patent: April 2, 2019
Assignee: Emotient, Inc.
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
-
Patent number: 10007921
Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
Type: Grant
Filed: June 18, 2015
Date of Patent: June 26, 2018
Assignee: Emotient, Inc.
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
-
Publication number: 20180012067
Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
Type: Application
Filed: September 25, 2017
Publication date: January 11, 2018
Applicant: Emotient, Inc.
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
-
Patent number: 9852327
Abstract: A system facilitates automatic recognition of facial expressions or other facial attributes. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
Type: Grant
Filed: January 13, 2017
Date of Patent: December 26, 2017
Assignee: Emotient, Inc.
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
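The combiner described in this abstract can be sketched as a pose-weighted average of specialized expression metrics: each specialized engine scores the expression assuming one head pose, and the detected pose weights their outputs. This is a hedged illustration; the engines, keys, and numbers below are all hypothetical, not the patented implementation.

```python
# Toy "specialized expression engines": each assumes one head orientation.
# Here an engine just reads a precomputed metric out of a dict standing in
# for a facial image.
SPECIALIZED = {
    "frontal": lambda img: img["smile_frontal"],
    "profile": lambda img: img["smile_profile"],
}

def combine(image, pose_probs):
    # pose_probs plays the role of the pose detection module's output,
    # e.g. {"frontal": 0.75, "profile": 0.25}. The combined metric is the
    # pose-probability-weighted sum, roughly invariant to head orientation.
    return sum(pose_probs[p] * engine(image) for p, engine in SPECIALIZED.items())

image = {"smile_frontal": 0.9, "smile_profile": 0.5}
print(round(combine(image, {"frontal": 0.75, "profile": 0.25}), 3))  # → 0.8
```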
-
Publication number: 20170301121
Abstract: A method facilitates the use of facial images through anonymization of facial images, thereby allowing people to submit their own facial images without divulging their identities. Original facial images are accessed and perturbed to generate synthesized facial images. Personal identities contained in the original facial images are no longer discernable from the synthesized facial images. At the same time, each synthesized facial image preserves at least some of the original attributes of the corresponding original facial image.
Type: Application
Filed: May 1, 2017
Publication date: October 19, 2017
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
-
Patent number: 9779289
Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
Type: Grant
Filed: March 12, 2015
Date of Patent: October 3, 2017
Assignee: Emotient, Inc.
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
-
Publication number: 20170213075
Abstract: A system facilitates automatic recognition of facial expressions or other facial attributes. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
Type: Application
Filed: January 13, 2017
Publication date: July 27, 2017
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
-
Patent number: 9639743
Abstract: A method facilitates the use of facial images through anonymization of facial images, thereby allowing people to submit their own facial images without divulging their identities. Original facial images are accessed and perturbed to generate synthesized facial images. Personal identities contained in the original facial images are no longer discernable from the synthesized facial images. At the same time, each synthesized facial image preserves at least some of the original attributes of the corresponding original facial image.
Type: Grant
Filed: July 17, 2015
Date of Patent: May 2, 2017
Assignee: Emotient, Inc.
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
-
Patent number: 9552535
Abstract: Apparatus, methods, and articles of manufacture for obtaining examples that break a visual expression classifier at user devices such as tablets, smartphones, personal computers, and cameras. The examples are sent from the user devices to a server. The server may use the examples to update the classifier, and then distribute the updated classifier code and/or updated classifier parameters to the user devices. The users of the devices may be incentivized to provide the examples that break the classifier, for example, by monetary reward, access to updated versions of the classifier, public ranking or recognition of the user, or a self-rewarding game. The examples may be evaluated using a pipeline of untrained crowdsourcing providers and trained experts. The examples may contain user images and/or depersonalized information extracted from the user images.
Type: Grant
Filed: February 11, 2014
Date of Patent: January 24, 2017
Assignee: Emotient, Inc.
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
-
Patent number: 9547808
Abstract: A system facilitates automatic recognition of facial expressions or other facial attributes. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
Type: Grant
Filed: July 17, 2015
Date of Patent: January 17, 2017
Assignee: Emotient, Inc.
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
-
Patent number: 9202110
Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
Type: Grant
Filed: February 20, 2014
Date of Patent: December 1, 2015
Assignee: Emotient, Inc.
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
-
Publication number: 20150324633
Abstract: A method facilitates the use of facial images through anonymization of facial images, thereby allowing people to submit their own facial images without divulging their identities. Original facial images are accessed and perturbed to generate synthesized facial images. Personal identities contained in the original facial images are no longer discernable from the synthesized facial images. At the same time, each synthesized facial image preserves at least some of the original attributes of the corresponding original facial image.
Type: Application
Filed: July 17, 2015
Publication date: November 12, 2015
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
-
Publication number: 20150324632
Abstract: A system facilitates automatic recognition of facial expressions or other facial attributes. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
Type: Application
Filed: July 17, 2015
Publication date: November 12, 2015
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
-
Publication number: 20150287054
Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
Type: Application
Filed: June 18, 2015
Publication date: October 8, 2015
Applicant: Emotient
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
-
Patent number: 9104905
Abstract: A method facilitates selection of candidate matches for an individual from a database of potential applicants. A filter is calculated for the individual by processing images of people in conjunction with the individual's preferences with respect to those images. Feature sets are calculated for the potential applicants by processing images of the potential applicants. The filter is then applied to the feature sets to select candidate matches for the individual.
Type: Grant
Filed: May 2, 2013
Date of Patent: August 11, 2015
Assignee: Emotient, Inc.
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
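The matching method this abstract describes — learn a preference "filter" from an individual's ratings of images, then apply it to applicants' feature sets — can be sketched as a simple linear model. This is a hypothetical illustration under strong simplifying assumptions (two-dimensional features, rating-weighted averaging), not the patented method.

```python
# Hypothetical sketch: learn a linear preference filter from rated images,
# then rank potential applicants by applying it to their feature sets.

def learn_filter(rated_examples):
    # rated_examples: list of (feature_vector, rating) pairs from the
    # individual's reactions to images. The filter is the rating-weighted
    # average of the features (a crude linear preference model).
    n = len(rated_examples)
    dim = len(rated_examples[0][0])
    return [sum(r * f[i] for f, r in rated_examples) / n for i in range(dim)]

def rank_candidates(flt, candidates):
    # candidates: list of (name, feature_vector); higher dot product = better.
    score = lambda f: sum(w * x for w, x in zip(flt, f))
    return sorted(candidates, key=lambda c: score(c[1]), reverse=True)

ratings = [([1.0, 0.0], +1.0),   # liked an image strong in feature A
           ([0.0, 1.0], -1.0)]   # disliked an image strong in feature B
flt = learn_filter(ratings)       # → [0.5, -0.5]
ranked = rank_candidates(flt, [("bob", [0.2, 0.9]), ("ann", [0.9, 0.1])])
print([name for name, _ in ranked])  # → ['ann', 'bob']
```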
-
Patent number: 9104907
Abstract: A system facilitates automatic recognition of facial expressions. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
Type: Grant
Filed: July 17, 2013
Date of Patent: August 11, 2015
Assignee: Emotient, Inc.
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
-
Patent number: 9105119
Abstract: A method facilitates training of an automatic facial expression recognition system through distributed anonymization of facial images, thereby allowing people to submit their own facial images without divulging their identities. Original facial images are accessed and perturbed to generate synthesized facial images. Personal identities contained in the original facial images are no longer discernable from the synthesized facial images. At the same time, each synthesized facial image preserves at least part of the emotional expression contained in the corresponding original facial image.
Type: Grant
Filed: May 2, 2013
Date of Patent: August 11, 2015
Assignee: Emotient, Inc.
Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel