Patents by Inventor Joshua SUSSKIND
Joshua SUSSKIND has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11967015
Abstract: The subject technology provides a framework for learning neural scene representations directly from images, without three-dimensional (3D) supervision, by a machine-learning model. In the disclosed systems and methods, 3D structure can be imposed by ensuring that the learned representation transforms like a real 3D scene. For example, a loss function can be provided which enforces equivariance of the scene representation with respect to 3D rotations. Because naive tensor rotations may not be used to define models that are equivariant with respect to 3D rotations, a new operation called an invertible shear rotation is disclosed, which has the desired equivariance property. In some implementations, the model can be used to generate a 3D representation, such as a mesh, of an object from an image of the object.
Type: Grant
Filed: January 8, 2021
Date of Patent: April 23, 2024
Assignee: Apple Inc.
Inventors: Qi Shan, Joshua Susskind, Aditya Sankar, Robert Alex Colburn, Emilien Dupont, Miguel Angel Bautista Martin
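The core idea of the abstract above can be sketched as a loss that penalizes any mismatch between rotating the latent scene and encoding the rotated view. This is only an illustrative sketch: the encoder and the use of a 90-degree `np.rot90` in place of the patent's invertible shear rotation are assumptions for demonstration.

```python
import numpy as np

def rotate_latent(volume, k):
    """Rotate a cubic latent scene volume by k * 90 degrees about one axis.
    Stand-in for the patent's invertible shear rotation (which supports
    arbitrary angles); axis-aligned 90-degree rotations are also invertible."""
    return np.rot90(volume, k=k, axes=(0, 1))

def equivariance_loss(encode, image, rotated_image, k):
    """L2 penalty enforcing encode(rotated image) == rotate(encode(image))."""
    z_rotated = rotate_latent(encode(image), k)
    z_target = encode(rotated_image)
    return float(np.mean((z_rotated - z_target) ** 2))

# Toy "encoder" (hypothetical): reshape a flat image into an 8x8x8 volume.
encode = lambda img: np.asarray(img, dtype=float).reshape(8, 8, 8)
image = np.arange(512.0)
# With no rotation applied on either side, a consistent encoder gives zero loss.
loss = equivariance_loss(encode, image, image, k=0)
```

In training, the loss would be evaluated on pairs of renderings of the same scene from rotated viewpoints, pushing the encoder toward the equivariance property the abstract describes.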
-
Publication number: 20210248811
Abstract: The subject technology provides a framework for learning neural scene representations directly from images, without three-dimensional (3D) supervision, by a machine-learning model. In the disclosed systems and methods, 3D structure can be imposed by ensuring that the learned representation transforms like a real 3D scene. For example, a loss function can be provided which enforces equivariance of the scene representation with respect to 3D rotations. Because naive tensor rotations may not be used to define models that are equivariant with respect to 3D rotations, a new operation called an invertible shear rotation is disclosed, which has the desired equivariance property. In some implementations, the model can be used to generate a 3D representation, such as a mesh, of an object from an image of the object.
Type: Application
Filed: January 8, 2021
Publication date: August 12, 2021
Inventors: Qi SHAN, Joshua SUSSKIND, Aditya SANKAR, Robert Alex COLBURN, Emilien DUPONT, Miguel Angel BAUTISTA MARTIN
-
Patent number: 10248851
Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
Type: Grant
Filed: September 25, 2017
Date of Patent: April 2, 2019
Assignee: Emotient, Inc.
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
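The two-stage pipeline in the abstract (crowd quality criterion, then expert vetting) can be sketched as a pair of filters. All names here are hypothetical illustrations, not the patented implementation.

```python
def build_training_set(images, crowd_ratings, expert_approves, quality_threshold=0.7):
    """Two-stage sketch of the pipeline: images rated by crowdsourcing
    providers must first clear a quality criterion, then survive expert
    vetting, before being used as training examples."""
    # Stage 1: crowd ratings against the first quality criterion.
    candidates = [img for img, rating in zip(images, crowd_ratings)
                  if rating >= quality_threshold]
    # Stage 2: expert vetting of the surviving candidates.
    return [img for img in candidates if expert_approves(img)]

# Usage: the crowd rating filters out "b"; the expert then rejects "c".
vetted = build_training_set(
    ["a", "b", "c"], [0.9, 0.5, 0.8], expert_approves=lambda img: img != "c")
```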
-
Patent number: 10185869
Abstract: A computer-implemented method for image filtering (including implementations using laptop, desktop, mobile, and wearable devices). The method includes analyzing each image to generate a content vector for the image; applying an interest operator to the content vector, the interest operator being based on a plurality of pictures with desirable characteristics, thereby obtaining an interest index for the image; comparing the interest index for the image to an interest threshold; and taking one or more actions, or abstaining from one or more actions, based on the result of the comparison. Also, related systems and articles of manufacture.
Type: Grant
Filed: August 11, 2016
Date of Patent: January 22, 2019
Assignee: Emotient, Inc.
Inventors: Javier Movellan, Ken Denman, Joshua Susskind
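The steps of the method above (content vector, interest operator, threshold comparison) can be sketched as follows. The cosine-similarity interest operator is an assumption for illustration; the patent leaves the operator's exact form open.

```python
import numpy as np

def interest_index(content_vector, desirable_vectors):
    """Hypothetical interest operator: cosine similarity between an image's
    content vector and the mean content vector of pictures with desirable
    characteristics."""
    prototype = np.mean(np.asarray(desirable_vectors, dtype=float), axis=0)
    v = np.asarray(content_vector, dtype=float)
    denom = np.linalg.norm(v) * np.linalg.norm(prototype)
    return float(v @ prototype / denom) if denom else 0.0

def filter_images(content_vectors, desirable_vectors, threshold=0.5):
    """Keep only the images whose interest index clears the threshold."""
    return [i for i, v in enumerate(content_vectors)
            if interest_index(v, desirable_vectors) >= threshold]

# The first image aligns with the "desirable" prototype; the second does not.
kept = filter_images([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.1], [1.0, -0.1]])
```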
-
Patent number: 10032091
Abstract: A collection of photos is organized by arranging a limited number of clusters of the photos on a predefined topology, so that similar photos are placed in the same cluster or a nearby cluster. Similarity is measured in attribute space. The attributes may include automatically recognized facial expression attributes.
Type: Grant
Filed: June 5, 2014
Date of Patent: July 24, 2018
Assignee: Emotient, Inc.
Inventors: Javier Movellan, Joshua Susskind, Ken Denman
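One way to realize "clusters on a predefined topology with attribute-space similarity" is a self-organizing-map-style update; that algorithm choice is an assumption for illustration, since the abstract does not name one.

```python
import numpy as np

def arrange_photos(attributes, n_clusters=3, iters=100, seed=0):
    """Sketch: place a limited number of cluster slots on a fixed 1-D
    topology and assign each photo by attribute-space distance, so that
    similar photos land in the same or a neighboring slot."""
    rng = np.random.default_rng(seed)
    attrs = np.asarray(attributes, dtype=float)
    centers = attrs[rng.choice(len(attrs), n_clusters, replace=False)].copy()
    for step in range(iters):
        lr = 0.5 * (1.0 - step / iters)  # decaying learning rate
        for a in attrs:
            winner = int(np.argmin(np.linalg.norm(centers - a, axis=1)))
            for j in range(n_clusters):
                # Slots near the winner on the topology are pulled along,
                # more weakly the farther they sit from it.
                centers[j] += lr * np.exp(-abs(j - winner)) * (a - centers[j])
    return [int(np.argmin(np.linalg.norm(centers - a, axis=1))) for a in attrs]

# Four photos, two tight attribute-space groups, two cluster slots.
labels = arrange_photos([[0, 0], [0.2, 0], [5, 5], [5.2, 5]], n_clusters=2)
```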
-
Patent number: 10007921
Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
Type: Grant
Filed: June 18, 2015
Date of Patent: June 26, 2018
Assignee: Emotient, Inc.
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
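The classifier-filter-approximator chain described above can be sketched as follows. The moving-average filter and linear approximator are stand-in assumptions; the patent's exact filters and function approximator are not specified in the abstract.

```python
import numpy as np

def interaction_quality(frame_scores, weights, window=3):
    """Sketch of one plausible pipeline: smooth per-frame expression-
    classifier scores with a moving-average temporal filter, pool over
    time, then combine channels with learned linear weights."""
    scores = np.asarray(frame_scores, dtype=float)       # (frames, channels)
    kernel = np.ones(window) / window
    smoothed = np.apply_along_axis(
        lambda channel: np.convolve(channel, kernel, mode="valid"), 0, scores)
    pooled = smoothed.mean(axis=0)                       # one value per channel
    return float(pooled @ np.asarray(weights, dtype=float))

# Five frames of (smile, frown) scores; smile-positive, frown-negative weights.
quality = interaction_quality([[1.0, 0.0]] * 5, weights=[2.0, -1.0])
```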
-
Publication number: 20180012067
Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
Type: Application
Filed: September 25, 2017
Publication date: January 11, 2018
Applicant: Emotient, Inc.
Inventors: Javier MOVELLAN, Marian Stewart BARTLETT, Ian FASEL, Gwen Ford LITTLEWORT, Joshua SUSSKIND, Jacob WHITEHILL
-
Patent number: 9779289
Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
Type: Grant
Filed: March 12, 2015
Date of Patent: October 3, 2017
Assignee: Emotient, Inc.
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
-
Patent number: 9552535
Abstract: Apparatus, methods, and articles of manufacture for obtaining examples that break a visual expression classifier at user devices such as tablets, smartphones, personal computers, and cameras. The examples are sent from the user devices to a server. The server may use the examples to update the classifier, and then distribute the updated classifier code and/or updated classifier parameters to the user devices. The users of the devices may be incentivized to provide the examples that break the classifier, for example, by monetary reward, access to updated versions of the classifier, public ranking or recognition of the user, or a self-rewarding game. The examples may be evaluated using a pipeline of untrained crowdsourcing providers and trained experts. The examples may contain user images and/or depersonalized information extracted from the user images.
Type: Grant
Filed: February 11, 2014
Date of Patent: January 24, 2017
Assignee: Emotient, Inc.
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
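The server-side collection step above can be sketched as keeping only the submissions whose trusted label disagrees with the current classifier. The function and data names are hypothetical illustrations.

```python
def collect_breaking_examples(classifier, submissions):
    """Keep the user submissions whose trusted label disagrees with the
    current classifier's prediction — these are the examples that 'break'
    it and can drive the next classifier update."""
    return [(example, label) for example, label in submissions
            if classifier(example) != label]

# A toy classifier that predicts "smiling" whenever a score is positive;
# the second and third submissions break it.
breaking = collect_breaking_examples(
    lambda score: score > 0, [(0.9, True), (-0.4, True), (0.7, False)])
```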
-
Publication number: 20160350588
Abstract: A computer-implemented method for image filtering (including implementations using laptop, desktop, mobile, and wearable devices). The method includes analyzing each image to generate a content vector for the image; applying an interest operator to the content vector, the interest operator being based on a plurality of pictures with desirable characteristics, thereby obtaining an interest index for the image; comparing the interest index for the image to an interest threshold; and taking one or more actions, or abstaining from one or more actions, based on the result of the comparison. Also, related systems and articles of manufacture.
Type: Application
Filed: August 11, 2016
Publication date: December 1, 2016
Inventors: Javier MOVELLAN, Ken DENMAN, Joshua SUSSKIND
-
Patent number: 9443167
Abstract: A computer-implemented method for image filtering (including implementations using laptop, desktop, mobile, and wearable devices). The method includes analyzing each image to generate a content vector for the image; applying an interest operator to the content vector, the interest operator being based on a plurality of pictures with desirable characteristics, thereby obtaining an interest index for the image; comparing the interest index for the image to an interest threshold; and taking one or more actions, or abstaining from one or more actions, based on the result of the comparison. Also, related systems and articles of manufacture.
Type: Grant
Filed: August 4, 2014
Date of Patent: September 13, 2016
Assignee: Emotient, Inc.
Inventors: Javier Movellan, Ken Denman, Joshua Susskind
-
Patent number: 9202110
Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
Type: Grant
Filed: February 20, 2014
Date of Patent: December 1, 2015
Assignee: Emotient, Inc.
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
-
Publication number: 20150287054
Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
Type: Application
Filed: June 18, 2015
Publication date: October 8, 2015
Applicant: Emotient
Inventors: Javier MOVELLAN, Marian Stewart BARTLETT, Ian FASEL, Gwen Ford LITTLEWORT, Joshua SUSSKIND, Jacob WHITEHILL
-
Publication number: 20150186712
Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
Type: Application
Filed: March 12, 2015
Publication date: July 2, 2015
Applicant: EMOTIENT
Inventors: Javier MOVELLAN, Marian Stewart BARTLETT, Ian FASEL, Gwen Ford LITTLEWORT, Joshua SUSSKIND, Jacob WHITEHILL
-
Patent number: 9008416
Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
Type: Grant
Filed: February 10, 2014
Date of Patent: April 14, 2015
Assignee: Emotient, Inc.
Inventors: Javier Movellan, Marian Stewart Bartlett, Ian Fasel, Gwen Ford Littlewort, Joshua Susskind, Jacob Whitehill
-
Publication number: 20150071557
Abstract: A collection of photos is organized by arranging a limited number of clusters of the photos on a predefined topology, so that similar photos are placed in the same cluster or a nearby cluster. Similarity is measured in attribute space. The attributes may include automatically recognized facial expression attributes.
Type: Application
Filed: June 5, 2014
Publication date: March 12, 2015
Applicant: EMOTIENT
Inventors: Javier MOVELLAN, Joshua SUSSKIND, Ken DENMAN
-
Publication number: 20150036934
Abstract: A computer-implemented method for image filtering (including implementations using laptop, desktop, mobile, and wearable devices). The method includes analyzing each image to generate a content vector for the image; applying an interest operator to the content vector, the interest operator being based on a plurality of pictures with desirable characteristics, thereby obtaining an interest index for the image; comparing the interest index for the image to an interest threshold; and taking one or more actions, or abstaining from one or more actions, based on the result of the comparison. Also, related systems and articles of manufacture.
Type: Application
Filed: August 4, 2014
Publication date: February 5, 2015
Inventors: Javier MOVELLAN, Ken DENMAN, Joshua SUSSKIND
-
Publication number: 20140321737
Abstract: Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
Type: Application
Filed: February 10, 2014
Publication date: October 30, 2014
Applicant: EMOTIENT
Inventors: Javier MOVELLAN, Marian Stewart BARTLETT, Ian FASEL, Gwen Ford LITTLEWORT, Joshua SUSSKIND, Jacob WHITEHILL
-
Publication number: 20140314310
Abstract: In selected embodiments, one or more wearable mobile devices provide videos and other sensor data of one or more participants in an interaction, such as a customer service or a sales interaction between a company employee and a customer. A computerized system uses machine learning expression classifiers, temporal filters, and a machine learning function approximator to estimate the quality of the interaction. The computerized system may include a recommendation selector configured to select suggestions for improving the current interaction and/or future interactions, based on the quality estimates and the weights of the machine learning approximator.
Type: Application
Filed: February 20, 2014
Publication date: October 23, 2014
Applicant: Emotient
Inventors: Javier MOVELLAN, Marian Stewart BARTLETT, Ian FASEL, Gwen Ford LITTLEWORT, Joshua SUSSKIND, Jacob WHITEHILL
-
Publication number: 20140316881
Abstract: Apparatus, methods, and articles of manufacture facilitate analysis of a person's affective valence and arousal. A machine learning classifier is trained using training data created by (1) exposing individuals to eliciting stimuli, (2) recording extended facial expression appearances of the individuals when the individuals are exposed to the eliciting stimuli, and (3) obtaining ground truth of valence and arousal evoked from the individuals by the eliciting stimuli. The classifier is thus trained to analyze images with extended facial expressions (such as facial expressions, head poses, and/or gestures) evoked by various stimuli or spontaneously obtained, to estimate the valence and arousal of the persons in the images. The classifier may be deployed in sales kiosks, online through mobile and other devices, and in other settings.
Type: Application
Filed: February 13, 2014
Publication date: October 23, 2014
Applicant: EMOTIENT
Inventors: Javier MOVELLAN, Marian Stewart BARTLETT, Ian FASEL, Gwen Ford LITTLEWORT, Joshua SUSSKIND, Jacob WHITEHILL
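The training step described above, mapping recorded expression features to ground-truth valence and arousal, can be sketched with a simple least-squares fit. The linear model is a stand-in assumption, not the patented classifier.

```python
import numpy as np

def train_valence_arousal(features, valence, arousal):
    """Fit a linear map from extended-facial-expression feature vectors to
    the ground-truth valence and arousal evoked by eliciting stimuli."""
    X = np.asarray(features, dtype=float)
    Y = np.column_stack([valence, arousal])       # targets: (valence, arousal)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)     # least-squares weights
    return lambda f: np.asarray(f, dtype=float) @ W

# Toy feature vectors with exactly-linear ground truth, so the fit is exact.
predict = train_valence_arousal(
    [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
    valence=[1.0, 0.0, 1.0], arousal=[0.0, 2.0, 2.0])
v, a = predict([1.0, 0.0])
```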