Patents by Inventor Graham John Page

Graham John Page is a named inventor on the following patents and patent applications. This listing includes applications that are still pending as well as patents that have been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11887352
    Abstract: Analytics are used for live streaming based on analysis within a shared digital environment. An interactive digital environment is accessed, where the interactive digital environment is a shared digital environment for a plurality of participants. The participants include presenters and viewers. A plurality of images is obtained from a first set of participants within the plurality of participants involved in the interactive digital environment. Cognitive state content is analyzed within the plurality of images for the first set of participants within the plurality of participants. Results of the analyzing cognitive state content are provided to a second set of participants within the plurality of participants. The obtaining and the analyzing are accomplished on a device local to a participant such that images of the first set of participants are not transmitted to a non-local device. The analyzing cognitive state content is augmented with evaluation of audio information.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: January 30, 2024
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Graham John Page, Gabriele Zijderveld
  • Patent number: 11700420
    Abstract: Data on a user interacting with a media presentation is collected at a client device. The data includes facial image data of the user. The facial image data is analyzed to extract cognitive state content of the user. One or more emotional intensity metrics are generated. The metrics are based on the cognitive state content. The media presentation is manipulated, based on the emotional intensity metrics and the cognitive state content. An engagement score for the media presentation is provided. The engagement score is based on the emotional intensity metric. A facial expression metric and a cognitive state metric are generated for the user. The manipulating includes optimization of the previously viewed media presentation. The optimization changes various aspects of the media presentation, including the length of different portions of the media presentation, the overall length of the media presentation, character selection, music selection, advertisement placement, and brand reveal time.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: July 11, 2023
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Melissa Sue Burke, Andrew Edwin Dreisch, Graham John Page, Panu James Turcot, Evan Kodra
  • Publication number: 20200314490
    Abstract: Data on a user interacting with a media presentation is collected at a client device. The data includes facial image data of the user. The facial image data is analyzed to extract cognitive state content of the user. One or more emotional intensity metrics are generated. The metrics are based on the cognitive state content. The media presentation is manipulated, based on the emotional intensity metrics and the cognitive state content. An engagement score for the media presentation is provided. The engagement score is based on the emotional intensity metric. A facial expression metric and a cognitive state metric are generated for the user. The manipulating includes optimization of the previously viewed media presentation. The optimization changes various aspects of the media presentation, including the length of different portions of the media presentation, the overall length of the media presentation, character selection, music selection, advertisement placement, and brand reveal time.
    Type: Application
    Filed: June 12, 2020
    Publication date: October 1, 2020
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Melissa Sue Burke, Andrew Edwin Dreisch, Graham John Page, Panu James Turcot, Evan Kodra
  • Publication number: 20200228359
    Abstract: Analytics are used for live streaming based on analysis within a shared digital environment. An interactive digital environment is accessed, where the interactive digital environment is a shared digital environment for a plurality of participants. The participants include presenters and viewers. A plurality of images is obtained from a first set of participants within the plurality of participants involved in the interactive digital environment. Cognitive state content is analyzed within the plurality of images for the first set of participants within the plurality of participants. Results of the analyzing cognitive state content are provided to a second set of participants within the plurality of participants. The obtaining and the analyzing are accomplished on a device local to a participant such that images of the first set of participants are not transmitted to a non-local device. The analyzing cognitive state content is augmented with evaluation of audio information.
    Type: Application
    Filed: March 25, 2020
    Publication date: July 16, 2020
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Graham John Page, Gabriele Zijderveld
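Both patent families above share a privacy-preserving pattern: facial images are analyzed on a device local to the participant, and only derived cognitive-state metrics (such as an engagement score), never the images themselves, are transmitted onward. The following is a minimal illustrative sketch of that pattern, not the patented implementation; the function names, the toy per-frame "analyzer", and the intensity-to-engagement aggregation are all assumptions made for illustration.

```python
# Illustrative sketch of the local-analysis pattern described in the
# abstracts above: frames are analyzed on-device, and only aggregate
# metrics leave the device. The analyzer below is a toy stand-in, not
# a real facial-expression model.

from dataclasses import dataclass
from statistics import mean


@dataclass
class FrameAnalysis:
    smile: float       # facial expression metric, 0..1 (hypothetical)
    attention: float   # cognitive state metric, 0..1 (hypothetical)


def analyze_frame_locally(frame: bytes) -> FrameAnalysis:
    # Stand-in for an on-device classifier; a real system would run a
    # trained model here. This toy heuristic just derives a stable
    # score from the frame bytes so the sketch is runnable.
    score = (sum(frame) % 100) / 100.0
    return FrameAnalysis(smile=score, attention=1.0 - score / 2)


def engagement_score(analyses: list[FrameAnalysis]) -> float:
    # Per-frame emotional intensity, averaged into a single engagement
    # score for the media presentation (cf. patent 11700420's metrics).
    intensities = [max(a.smile, a.attention) for a in analyses]
    return mean(intensities)


def share_results(frames: list[bytes]) -> dict:
    # Only derived metrics cross the device boundary; the raw frames
    # are consumed locally and never transmitted (cf. patent 11887352).
    analyses = [analyze_frame_locally(f) for f in frames]
    return {
        "engagement": engagement_score(analyses),
        "frames_analyzed": len(analyses),
    }
```

In a real deployment the return value of `share_results` is what a presenter or second set of participants would receive; the plurality of images stays on the participant's device.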