Patents by Inventor Boisy G. Pitre
Boisy G. Pitre has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11410438
Abstract: Analysis for convolutional processing is performed using logic encoded in a semiconductor processor. The semiconductor chip evaluates pixels within an image of a person in a vehicle, where the analysis identifies a facial portion of the person. The facial portion of the person can include facial landmarks or regions. The semiconductor chip identifies one or more facial expressions based on the facial portion. The facial expressions can include a smile, frown, smirk, or grimace. The semiconductor chip classifies the one or more facial expressions for cognitive response content. The semiconductor chip evaluates the cognitive response content to produce cognitive state information for the person. The semiconductor chip enables manipulation of the vehicle based on communication of the cognitive state information to a component of the vehicle.
Type: Grant
Filed: November 8, 2019
Date of Patent: August 9, 2022
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Boisy G. Pitre, Panu James Turcot, Andrew Todd Zeilman
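The pipeline this abstract describes (facial landmarks → expression classification → cognitive-state evaluation → signal to a vehicle component) can be sketched as follows. This is a minimal illustration, not the patented implementation: the landmark rule, score aggregation, and function names are all hypothetical stand-ins for the on-chip convolutional logic.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Expression labels taken from the abstract.
EXPRESSIONS = ["smile", "frown", "smirk", "grimace"]

@dataclass
class CognitiveState:
    """Cognitive-state information aggregated from classified expressions."""
    scores: dict

def classify_expressions(landmarks: List[Tuple[float, float]]) -> List[str]:
    """Toy stand-in for the on-chip classifier: a real implementation
    would run convolutional logic over the facial region."""
    left, right, center = landmarks  # mouth corners and lower-lip center
    lift = center[1] - (left[1] + right[1]) / 2
    return ["smile"] if lift > 0 else ["frown"]

def evaluate_cognitive_state(expressions: List[str]) -> CognitiveState:
    """Tally classified expressions into simple cognitive-state scores."""
    scores = {e: expressions.count(e) / len(expressions) for e in EXPRESSIONS}
    return CognitiveState(scores=scores)

def signal_vehicle(state: CognitiveState) -> str:
    """Communicate the cognitive state to a vehicle component (stub)."""
    dominant = max(state.scores, key=state.scores.get)
    return f"vehicle: adjust for occupant state '{dominant}'"

# Mouth landmarks as (x, y): left corner, right corner, lower-lip center.
landmarks = [(0.30, 0.60), (0.70, 0.60), (0.50, 0.65)]
print(signal_vehicle(evaluate_cognitive_state(classify_expressions(landmarks))))
```

The separation into classify / evaluate / signal mirrors the three steps the claim language walks through; in the patent all three run in silicon rather than software.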
-
Publication number: 20200226012
Abstract: File system manipulation using machine learning is described. Access to a machine learning system is obtained. A connection between a file system and an application is structured. The connection is managed through an application programming interface (API). The connection provides two-way data transfer through the API between the application and the file system. The connection provides distribution of one or more data files through the API. The connection provides enablement of processing of the one or more data files. The processing uses classifiers running on the machine learning system. Data files are retrieved from the file system connected through the interface. The file system is network-connected to the application through the interface. The data files comprise image data of one or more people. Cognitive state analysis is performed by the machine learning system. The application programming interface is generated by a software development kit (SDK).
Type: Application
Filed: March 24, 2020
Publication date: July 16, 2020
Applicant: Affectiva, Inc.
Inventors: Boisy G. Pitre, Rana el Kaliouby, Youssef Kashef
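The structure described here (an API mediating two-way transfer between an application and a file system, with classifiers run over the distributed data files) might look like the following sketch. The class, the in-memory store, and the size-based classifier are illustrative assumptions, not the actual SDK-generated API.

```python
from typing import Callable, Dict, List

class FileSystemAPI:
    """Hypothetical API mediating two-way data transfer between an
    application and a (here, in-memory) file system."""

    def __init__(self, classifier: Callable[[bytes], str]):
        self._store: Dict[str, bytes] = {}
        self._classifier = classifier  # stands in for the ML system

    def put(self, path: str, data: bytes) -> None:
        """Application -> file system direction of the two-way transfer."""
        self._store[path] = data

    def get(self, path: str) -> bytes:
        """File system -> application direction of the two-way transfer."""
        return self._store[path]

    def process(self, paths: List[str]) -> Dict[str, str]:
        """Run the classifier over the named data files."""
        return {p: self._classifier(self._store[p]) for p in paths}

# Toy classifier keyed on payload size, standing in for cognitive state
# analysis of image data.
api = FileSystemAPI(classifier=lambda b: "large" if len(b) > 4 else "small")
api.put("frames/001.jpg", b"\x00" * 10)
api.put("frames/002.jpg", b"\x00" * 2)
print(api.process(["frames/001.jpg", "frames/002.jpg"]))
```

In the publication, the classifier runs on a remote machine learning system and the file system is network-connected; the in-process dictionary above just makes the API shape concrete.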
-
Publication number: 20200175262
Abstract: Techniques for performing robotic assistance are disclosed. A plurality of images of an individual is obtained by an imagery module associated with an autonomous mobile robot. Cognitive state data including facial data for the individual in the plurality of images is identified by an analysis module associated with the autonomous mobile robot. A facial expression metric, based on the facial data for the individual in the plurality of images, is calculated. A cognitive state metric for the individual is generated by the analysis module based on the cognitive state data. The autonomous mobile robot initiates one or more responses based on the cognitive state metric. The one or more responses include one or more electromechanical responses. The one or more electromechanical responses cause the robot to change locations.
Type: Application
Filed: February 4, 2020
Publication date: June 4, 2020
Applicant: Affectiva, Inc.
Inventors: Boisy G. Pitre, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Daniel McDuff, Panu James Turcot, Gabriele Zijderveld
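The decision flow above (per-frame facial expression metrics → one cognitive state metric → a response, possibly electromechanical) can be sketched in a few lines. The averaging, the threshold, and the "relocate" label are assumptions made for illustration; the publication does not specify how the metric is computed or thresholded.

```python
from statistics import mean
from typing import List

def cognitive_state_metric(expression_metrics: List[float]) -> float:
    """Collapse per-frame facial-expression metrics into one
    cognitive state metric (here, a simple mean)."""
    return mean(expression_metrics)

def choose_response(metric: float, threshold: float = 0.5) -> str:
    """Map the metric to a response; 'relocate' stands in for the
    electromechanical response that makes the robot change locations."""
    return "relocate" if metric < threshold else "stay"

# Per-frame metrics from a hypothetical imagery/analysis module pair.
frames = [0.2, 0.3, 0.4]
print(choose_response(cognitive_state_metric(frames)))
```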
-
Publication number: 20200074154
Abstract: Analysis for convolutional processing is performed using logic encoded in a semiconductor processor. The semiconductor chip evaluates pixels within an image of a person in a vehicle, where the analysis identifies a facial portion of the person. The facial portion of the person can include facial landmarks or regions. The semiconductor chip identifies one or more facial expressions based on the facial portion. The facial expressions can include a smile, frown, smirk, or grimace. The semiconductor chip classifies the one or more facial expressions for cognitive response content. The semiconductor chip evaluates the cognitive response content to produce cognitive state information for the person. The semiconductor chip enables manipulation of the vehicle based on communication of the cognitive state information to a component of the vehicle.
Type: Application
Filed: November 8, 2019
Publication date: March 5, 2020
Applicant: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Boisy G. Pitre, Panu James Turcot, Andrew Todd Zeilman
-
Patent number: 10474875
Abstract: Image analysis for facial evaluation is performed using logic encoded in a semiconductor processor. The semiconductor chip analyzes video images that are captured using one or more cameras and evaluates the videos to identify one or more persons in the videos. When a person is identified, the semiconductor chip locates the face of the evaluated person in the video. Facial regions of interest are extracted and differences in the regions of interest in the face are identified. The semiconductor chip uses classifiers to map facial regions for emotional response content and evaluate the emotional response content to produce an emotion score. The classifiers provide gender, age, or ethnicity with an associated probability. Localization logic within the chip is used to localize a second face when one is evaluated in the video. The one or more faces are tracked, and identifiers for the faces are provided.
Type: Grant
Filed: November 20, 2015
Date of Patent: November 12, 2019
Assignee: Affectiva, Inc.
Inventors: Boisy G Pitre, Rana el Kaliouby, Panu James Turcot
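One way to picture the emotion-score step (per-region emotional response content combined into a single score, with each tracked face carrying a stable identifier) is the sketch below. The region names, weights, and dataclass are hypothetical; the patent's classifiers run in silicon and also emit gender, age, or ethnicity probabilities, which this sketch omits.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class FaceRecord:
    """A tracked face with a stable identifier, as the abstract describes."""
    face_id: int
    region_scores: Dict[str, float] = field(default_factory=dict)

def emotion_score(region_scores: Dict[str, float],
                  weights: Dict[str, float]) -> float:
    """Weighted sum of per-region emotional-response content.
    Missing regions contribute zero; the weights are illustrative."""
    return sum(region_scores.get(r, 0.0) * w for r, w in weights.items())

weights = {"eyes": 0.4, "mouth": 0.6}
face = FaceRecord(face_id=1, region_scores={"eyes": 0.5, "mouth": 0.8})
print(round(emotion_score(face.region_scores, weights), 2))
```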
-
Publication number: 20170011258
Abstract: Facial expressions are evaluated for control of robots. One or more images of a face are captured. The images are analyzed for mental state data. The images are analyzed to determine a facial expression of the face within an identified region of interest. Mental state information is generated. A context for the robot operation is determined. A context for the individual is determined. The actions of a robot are then controlled based on the facial expressions and the mental state information that was generated. Displays, color, sound, motion, and voice response for the robot are controlled based on the facial expressions of one or more people.
Type: Application
Filed: September 23, 2016
Publication date: January 12, 2017
Inventors: Boisy G. Pitre, Rana el Kaliouby, Abdelrahman Mahmoud, Seyedmohammad Mavadati, Daniel McDuff, Panu James Turcot, Gabriele Zijderveld
-
Publication number: 20160078279
Abstract: Image analysis for facial evaluation is performed using logic encoded in a semiconductor processor. The semiconductor chip analyzes video images that are captured using one or more cameras and evaluates the videos to identify one or more persons in the videos. When a person is identified, the semiconductor chip locates the face of the evaluated person in the video. Facial regions of interest are extracted and differences in the regions of interest in the face are identified. The semiconductor chip uses classifiers to map facial regions for emotional response content and evaluate the emotional response content to produce an emotion score. The classifiers provide gender, age, or ethnicity with an associated probability. Localization logic within the chip is used to localize a second face when one is evaluated in the video. The one or more faces are tracked, and identifiers for the faces are provided.
Type: Application
Filed: November 20, 2015
Publication date: March 17, 2016
Applicant: AFFECTIVA, INC.
Inventors: Boisy G Pitre, Rana el Kaliouby, Panu James Turcot
-
Publication number: 20140357976
Abstract: A mobile device is emotionally enabled using an application programming interface (API) in order to infer a user's emotions and make the emotions available for sharing. Images of an individual or individuals are captured and sent through the API. The images are evaluated to determine the individual's mental state. Mental state analysis is output to an app running on the device on which the API resides for further sharing, analysis, or transmission. A software development kit (SDK) can be used to generate the API or to otherwise facilitate the emotional enablement of a mobile device and the apps that run on the device.
Type: Application
Filed: August 15, 2014
Publication date: December 4, 2014
Inventors: Boisy G. Pitre, Rana el Kaliouby, Youssef Kashef
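The SDK-generates-an-API pattern described above can be sketched as a factory function: the SDK supplies an analyzer, and the generated API accepts captured images and returns mental-state analysis for the app to share. The factory, the analyzer, and the averaging scheme are all illustrative assumptions, not the actual SDK.

```python
from typing import Callable, Dict, List

def make_emotion_api(analyzer: Callable[[bytes], Dict[str, float]]):
    """Hypothetical SDK factory: generates an API the app calls with
    captured images to receive mental-state analysis for sharing."""
    def api(images: List[bytes]) -> Dict[str, float]:
        # Average per-image analysis into one mental-state summary.
        totals: Dict[str, float] = {}
        for img in images:
            for state, score in analyzer(img).items():
                totals[state] = totals.get(state, 0.0) + score
        return {s: v / len(images) for s, v in totals.items()}
    return api

# Toy analyzer standing in for on-device mental-state inference.
analyzer = lambda img: {"joy": len(img) / 10.0}
api = make_emotion_api(analyzer)
print(api([b"abc", b"abcde"]))
```

The closure keeps the analyzer private to the generated API, which matches the abstract's framing: the app sees only the API, while the SDK wires up the inference behind it.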