Patents by Inventor Boisy G. Pitre

Boisy G. Pitre has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11410438
    Abstract: Analysis for convolutional processing is performed using logic encoded in a semiconductor processor. The semiconductor chip evaluates pixels within an image of a person in a vehicle, where the analysis identifies a facial portion of the person. The facial portion of the person can include facial landmarks or regions. The semiconductor chip identifies one or more facial expressions based on the facial portion. The facial expressions can include a smile, frown, smirk, or grimace. The semiconductor chip classifies the one or more facial expressions for cognitive response content. The semiconductor chip evaluates the cognitive response content to produce cognitive state information for the person. The semiconductor chip enables manipulation of the vehicle based on communication of the cognitive state information to a component of the vehicle.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: August 9, 2022
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Boisy G. Pitre, Panu James Turcot, Andrew Todd Zeilman
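The pipeline this abstract claims (pixel evaluation → facial landmarking → expression classification → cognitive state → vehicle control) can be illustrated with a minimal sketch. Every name below (`CognitiveState`, `classify_expression`, `vehicle_response`, the threshold) is a hypothetical stand-in, not anything disclosed in the patent:

```python
from dataclasses import dataclass

# Expression labels drawn from the abstract's own examples.
EXPRESSIONS = ("smile", "frown", "smirk", "grimace")

@dataclass
class CognitiveState:
    """Illustrative container for per-person cognitive state information."""
    dominant_expression: str
    confidence: float

def classify_expression(expression_scores: dict) -> str:
    """Pick the highest-scoring expression from per-expression scores."""
    return max(expression_scores, key=expression_scores.get)

def evaluate_cognitive_state(expression_scores: dict) -> CognitiveState:
    """Turn classified expression content into cognitive state information."""
    expr = classify_expression(expression_scores)
    return CognitiveState(dominant_expression=expr,
                          confidence=expression_scores[expr])

def vehicle_response(state: CognitiveState) -> str:
    """Map cognitive state to a (hypothetical) vehicle component action."""
    if state.dominant_expression == "grimace" and state.confidence > 0.8:
        return "alert_driver_assist"
    return "no_action"

scores = {"smile": 0.1, "frown": 0.05, "smirk": 0.02, "grimace": 0.9}
state = evaluate_cognitive_state(scores)
print(state.dominant_expression, vehicle_response(state))  # grimace alert_driver_assist
```

In the claimed invention this logic is encoded in a semiconductor processor rather than software; the sketch only mirrors the data flow of the abstract.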
  • Publication number: 20200226012
    Abstract: File system manipulation using machine learning is described. Access to a machine learning system is obtained. A connection between a file system and an application is structured. The connection is managed through an application programming interface (API). The connection provides two-way data transfer through the API between the application and the file system. The connection provides distribution of one or more data files through the API. The connection provides enablement of processing of the one or more data files. The processing uses classifiers running on the machine learning system. Data files are retrieved from the file system connected through the interface. The file system is network-connected to the application through the interface. The data files comprise image data of one or more people. Cognitive state analysis is performed by the machine learning system. The application programming interface is generated by a software development kit (SDK).
    Type: Application
    Filed: March 24, 2020
    Publication date: July 16, 2020
    Applicant: Affectiva, Inc.
    Inventors: Boisy G. Pitre, Rana el Kaliouby, Youssef Kashef
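The connection this abstract describes — a two-way, API-mediated link between a file system and an application, with classifier-based processing of stored files — can be sketched as a toy API surface. The class, method names, and toy classifier are illustrative assumptions, not the patent's actual interface:

```python
class MLFileSystemAPI:
    """Hypothetical API surface for the file-system/application connection
    described in the abstract; names are illustrative, not from the patent."""

    def __init__(self, classifier):
        self._files = {}              # stands in for a network-connected file system
        self._classifier = classifier # stands in for the machine learning system

    def put(self, path: str, data: bytes) -> None:
        """Application -> file system direction of the two-way transfer."""
        self._files[path] = data

    def get(self, path: str) -> bytes:
        """File system -> application direction of the two-way transfer."""
        return self._files[path]

    def analyze(self, path: str):
        """Run a classifier over a stored data file (e.g. image data of people)."""
        return self._classifier(self._files[path])

# Toy classifier: labels by payload size, standing in for cognitive state analysis.
api = MLFileSystemAPI(classifier=lambda data: "face" if len(data) > 3 else "none")
api.put("frames/001.jpg", b"\x00" * 10)
print(api.analyze("frames/001.jpg"))  # face
```

Per the abstract, an interface of this kind would itself be generated by an SDK rather than written by hand.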
  • Publication number: 20200175262
    Abstract: Techniques for performing robotic assistance are disclosed. A plurality of images of an individual is obtained by an imagery module associated with an autonomous mobile robot. Cognitive state data including facial data for the individual in the plurality of images is identified by an analysis module associated with the autonomous mobile robot. A facial expression metric, based on the facial data for the individual in the plurality of images, is calculated. A cognitive state metric for the individual is generated by the analysis module based on the cognitive state data. The autonomous mobile robot initiates one or more responses based on the cognitive state metric. The one or more responses include one or more electromechanical responses. The one or more electromechanical responses cause the robot to change locations.
    Type: Application
    Filed: February 4, 2020
    Publication date: June 4, 2020
    Applicant: Affectiva, Inc.
    Inventors: Boisy G. Pitre, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Daniel McDuff, Panu James Turcot, Gabriele Zijderveld
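The flow in this abstract — per-image facial data aggregated into a facial expression metric, a cognitive state metric derived from it, and an electromechanical response (a change of location) initiated when warranted — can be sketched as follows. The aggregation rule, threshold, and response names are all hypothetical:

```python
from statistics import mean

def facial_expression_metric(per_frame_scores: list) -> float:
    """Aggregate facial data across the plurality of images into one
    metric; a simple mean stands in for the claimed calculation."""
    return mean(per_frame_scores)

def robot_responses(cognitive_state_metric: float, threshold: float = 0.5) -> list:
    """Return the robot's responses; the electromechanical response
    (a change of location) fires when the metric crosses a threshold."""
    if cognitive_state_metric > threshold:
        return ["move_toward_individual"]  # electromechanical: changes location
    return ["hold_position"]

metric = facial_expression_metric([0.2, 0.8, 0.9])
print(robot_responses(metric))  # ['move_toward_individual']
```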
  • Publication number: 20200074154
    Abstract: Analysis for convolutional processing is performed using logic encoded in a semiconductor processor. The semiconductor chip evaluates pixels within an image of a person in a vehicle, where the analysis identifies a facial portion of the person. The facial portion of the person can include facial landmarks or regions. The semiconductor chip identifies one or more facial expressions based on the facial portion. The facial expressions can include a smile, frown, smirk, or grimace. The semiconductor chip classifies the one or more facial expressions for cognitive response content. The semiconductor chip evaluates the cognitive response content to produce cognitive state information for the person. The semiconductor chip enables manipulation of the vehicle based on communication of the cognitive state information to a component of the vehicle.
    Type: Application
    Filed: November 8, 2019
    Publication date: March 5, 2020
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Boisy G. Pitre, Panu James Turcot, Andrew Todd Zeilman
  • Patent number: 10474875
    Abstract: Image analysis for facial evaluation is performed using logic encoded in a semiconductor processor. The semiconductor chip analyzes video images that are captured using one or more cameras and evaluates the videos to identify one or more persons in the videos. When a person is identified, the semiconductor chip locates the face of the evaluated person in the video. Facial regions of interest are extracted and differences in the regions of interest in the face are identified. The semiconductor chip uses classifiers to map facial regions for emotional response content and evaluate the emotional response content to produce an emotion score. The classifiers provide gender, age, or ethnicity with an associated probability. Localization logic within the chip is used to localize a second face when one is evaluated in the video. The one or more faces are tracked, and identifiers for the faces are provided.
    Type: Grant
    Filed: November 20, 2015
    Date of Patent: November 12, 2019
    Assignee: Affectiva, Inc.
    Inventors: Boisy G. Pitre, Rana el Kaliouby, Panu James Turcot
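Two pieces of this abstract lend themselves to a small sketch: combining per-region emotional response content into an emotion score, and tracking multiple localized faces under stable identifiers. The weights, region names, and tracker design are illustrative assumptions only:

```python
import itertools

class FaceTracker:
    """Illustrative tracker that assigns a stable identifier to each face
    localized in the video; in the patent this logic lives on the chip."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.active = {}  # face_id -> last known bounding box

    def register(self, bbox) -> int:
        """Register a newly localized face and return its identifier."""
        face_id = next(self._ids)
        self.active[face_id] = bbox
        return face_id

def emotion_score(region_responses: dict) -> float:
    """Combine per-region emotional response content into a single score;
    a simple weighted average stands in for the chip's classifier logic."""
    weights = {"eyes": 0.4, "mouth": 0.6}  # hypothetical region weights
    return sum(weights.get(region, 0.0) * value
               for region, value in region_responses.items())

tracker = FaceTracker()
fid = tracker.register((10, 20, 64, 64))
print(fid, round(emotion_score({"eyes": 0.5, "mouth": 1.0}), 2))  # 1 0.8
```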
  • Publication number: 20170011258
    Abstract: Facial expressions are evaluated for control of robots. One or more images of a face are captured. The images are analyzed for mental state data. The images are analyzed to determine a facial expression of the face within an identified region of interest. Mental state information is generated. A context for the robot operation is determined. A context for the individual is determined. The actions of a robot are then controlled based on the facial expressions and the mental state information that was generated. Displays, color, sound, motion, and voice response for the robot are controlled based on the facial expressions of one or more people.
    Type: Application
    Filed: September 23, 2016
    Publication date: January 12, 2017
    Inventors: Boisy G. Pitre, Rana el Kaliouby, Abdelrahman Mahmoud, Seyedmohammad Mavadati, Daniel McDuff, Panu James Turcot, Gabriele Zijderveld
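The control step in this abstract — mapping a detected expression plus an operating context to the robot outputs named there (displays, color, sound, motion, voice) — can be sketched as a lookup table. The specific expression/context pairs and output values below are invented for illustration:

```python
def robot_controls(expression: str, context: str) -> dict:
    """Hypothetical mapping from a detected facial expression and an
    operating context to the robot outputs named in the abstract."""
    table = {
        ("smile", "greeting"): {"display": "happy_face", "color": "green",
                                "sound": "chime", "motion": "wave",
                                "voice": "hello"},
        ("frown", "greeting"): {"display": "concern", "color": "blue",
                                "sound": "soft_tone", "motion": "approach",
                                "voice": "are you ok?"},
    }
    # Fall back to neutral behavior for unrecognized combinations.
    return table.get((expression, context),
                     {"display": "neutral", "color": "white",
                      "sound": None, "motion": "idle", "voice": None})

print(robot_controls("smile", "greeting")["motion"])  # wave
```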
  • Publication number: 20160078279
    Abstract: Image analysis for facial evaluation is performed using logic encoded in a semiconductor processor. The semiconductor chip analyzes video images that are captured using one or more cameras and evaluates the videos to identify one or more persons in the videos. When a person is identified, the semiconductor chip locates the face of the evaluated person in the video. Facial regions of interest are extracted and differences in the regions of interest in the face are identified. The semiconductor chip uses classifiers to map facial regions for emotional response content and evaluate the emotional response content to produce an emotion score. The classifiers provide gender, age, or ethnicity with an associated probability. Localization logic within the chip is used to localize a second face when one is evaluated in the video. The one or more faces are tracked, and identifiers for the faces are provided.
    Type: Application
    Filed: November 20, 2015
    Publication date: March 17, 2016
    Applicant: Affectiva, Inc.
    Inventors: Boisy G. Pitre, Rana el Kaliouby, Panu James Turcot
  • Publication number: 20140357976
    Abstract: A mobile device is emotionally enabled using an application programming interface (API) in order to infer a user's emotions and make the emotions available for sharing. Images of an individual or individuals are captured and sent through the API. The images are evaluated to determine the individual's mental state. Mental state analysis is output to an app running on the device on which the API resides for further sharing, analysis, or transmission. A software development kit (SDK) can be used to generate the API or to otherwise facilitate the emotional enablement of a mobile device and the apps that run on the device.
    Type: Application
    Filed: August 15, 2014
    Publication date: December 4, 2014
    Inventors: Boisy G. Pitre, Rana el Kaliouby, Youssef Kashef
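The API described in this abstract — an app submits captured images and receives mental state analysis back for sharing or transmission — might look something like the sketch below. The class name, method signature, and result fields are hypothetical, not the SDK's actual interface:

```python
class EmotionAPI:
    """Hypothetical API of the kind an SDK might generate to
    'emotionally enable' an app; names are illustrative only."""

    def __init__(self, analyzer):
        self._analyzer = analyzer  # stands in for the mental state evaluator

    def submit_image(self, image: bytes) -> dict:
        """Send a captured image through the API and get mental state
        analysis back for sharing, further analysis, or transmission."""
        return {"mental_state": self._analyzer(image), "shareable": True}

# Toy analyzer standing in for the actual image evaluation.
api = EmotionAPI(analyzer=lambda img: "engaged" if img else "unknown")
result = api.submit_image(b"\xff\xd8...")
print(result["mental_state"])  # engaged
```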