Patents by Inventor Brock David Moore

Brock David Moore has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220101858
    Abstract: A synchronised soundtrack for an audiobook. The soundtrack has a soundtrack timeline with one or more audio regions that are configured for synchronised playback with corresponding narration regions in the audiobook playback timeline. Each audio region has a position along the soundtrack timeline that is dynamically adjustable to maintain synchronisation of the audio regions of the soundtrack with their respective narration regions in the audiobook, based on a narration speed variable indicative of the playback narration speed of the audiobook.
    Type: Application
    Filed: December 10, 2021
    Publication date: March 31, 2022
    Applicant: Booktrack Holdings Limited
    Inventors: Paul Charles Cameron, Craig Andrew Wilson, Brock David Moore, Mark Anthony Buer
  • Patent number: 11244683
    Abstract: A synchronised soundtrack for an audiobook. The soundtrack has a soundtrack timeline with one or more audio regions that are configured for synchronised playback with corresponding narration regions in the audiobook playback timeline. Each audio region has a position along the soundtrack timeline that is dynamically adjustable to maintain synchronisation of the audio regions of the soundtrack with their respective narration regions in the audiobook, based on a narration speed variable indicative of the playback narration speed of the audiobook.
    Type: Grant
    Filed: December 23, 2016
    Date of Patent: February 8, 2022
    Assignee: Booktrack Holdings Limited
    Inventors: Paul Charles Cameron, Craig Andrew Wilson, Brock David Moore, Mark Anthony Buer
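The dynamic position adjustment described in the two abstracts above can be illustrated with a minimal sketch. This is not the patented implementation; it only shows the core idea that an audio region's position, defined at a nominal 1.0x narration speed, rescales with the narration speed variable so the region stays aligned with its narration region. The `AudioRegion` class and `adjusted_position` function are hypothetical names for illustration.

```python
from dataclasses import dataclass

@dataclass
class AudioRegion:
    # Region position along the soundtrack timeline at the nominal (1.0x)
    # narration speed, in seconds.
    nominal_start_s: float
    nominal_end_s: float

def adjusted_position(region: AudioRegion, narration_speed: float) -> tuple[float, float]:
    """Rescale a region's position to track the current narration speed.

    At 1.5x speed the narrator reaches each narration region sooner, so the
    corresponding audio region's playback position moves earlier by the same
    factor, keeping soundtrack and narration synchronised.
    """
    return (region.nominal_start_s / narration_speed,
            region.nominal_end_s / narration_speed)

region = AudioRegion(nominal_start_s=120.0, nominal_end_s=180.0)
print(adjusted_position(region, 1.0))  # (120.0, 180.0)
print(adjusted_position(region, 1.5))  # (80.0, 120.0)
```

In a real player this recomputation would presumably run whenever the listener changes playback speed, which is what makes the position "dynamically adjustable" rather than fixed at authoring time.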
  • Patent number: 10698951
    Abstract: A method of automatically generating a digital soundtrack intended for synchronised playback with associated speech audio, the method executed by a processing device or devices having associated memory. The method comprises syntactically and/or semantically analysing text representing or corresponding to the speech audio at a text segment level to generate an emotional profile for each text segment in the context of a continuous emotion model. The method further comprises generating a soundtrack for the speech audio comprising one or more audio regions that are configured or selected for playback during corresponding speech regions of the speech audio, wherein the audio configured for playback in the audio regions is based on, or is a function of, the emotional profile of one or more of the text segments within the respective speech regions.
    Type: Grant
    Filed: July 28, 2017
    Date of Patent: June 30, 2020
    Assignee: Booktrack Holdings Limited
    Inventors: Paul Charles Cameron, Craig Andrew Wilson, Petrus Matheus Godefridus De Vocht, Brock David Moore
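The emotion-driven selection described in the abstract above can be sketched in a few lines. This is a toy illustration, not the patented method: it assumes a valence/arousal space as one common example of a continuous emotion model, and the `LEXICON` and `TRACKS` tables, `emotional_profile`, and `select_track` are all hypothetical stand-ins for the syntactic/semantic analysis and audio library a real system would use.

```python
import math

# Toy word-level emotion lexicon mapping words to (valence, arousal) points.
LEXICON = {
    "joy": (0.8, 0.6),
    "storm": (-0.4, 0.7),
    "calm": (0.5, -0.6),
    "grief": (-0.8, -0.3),
}

def emotional_profile(segment: str) -> tuple[float, float]:
    """Profile a text segment as the mean (valence, arousal) of known words."""
    hits = [LEXICON[w] for w in segment.lower().split() if w in LEXICON]
    if not hits:
        return (0.0, 0.0)  # neutral when no emotion-bearing words are found
    return (sum(v for v, _ in hits) / len(hits),
            sum(a for _, a in hits) / len(hits))

# Candidate music tracks, each tagged with a point in the same emotion space.
TRACKS = {
    "uplifting": (0.7, 0.5),
    "tense": (-0.5, 0.8),
    "ambient": (0.3, -0.5),
}

def select_track(profile: tuple[float, float]) -> str:
    """Pick the track whose emotion tag is nearest the segment's profile."""
    return min(TRACKS, key=lambda name: math.dist(TRACKS[name], profile))

print(select_track(emotional_profile("a storm was coming")))  # tense
```

The key design point the abstract describes is that audio selection is a function of the segment-level emotional profile, so a tense passage and a calm passage in the same speech audio receive different music automatically.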
  • Publication number: 20180032611
    Abstract: A method of automatically generating a digital soundtrack for playback in an environment comprising live speech audio generated by one or more persons speaking in the environment, the method executed by a processing device or devices having associated memory. The method comprises syntactically and/or semantically analysing an incoming text data stream or streams representing or corresponding to the live speech audio in portions to generate an emotional profile for each text portion of the text data stream(s) in the context of a continuous emotion model. The method further comprises generating in real-time a customised soundtrack for the live speech audio comprising music tracks that are played back in the environment in real-time with the live speech audio. Each music track is selected for playback in the soundtrack based at least partly on the determined emotional profile or profiles associated with the most recently processed portion or portions of text from the text data stream(s).
    Type: Application
    Filed: July 28, 2017
    Publication date: February 1, 2018
    Inventors: Paul Charles Cameron, Mark Steven Cameron, Craig Andrew Wilson, Petrus Matheus Godefridus De Vocht, Brock David Moore
  • Publication number: 20180032305
    Abstract: A method of automatically generating a digital soundtrack intended for synchronised playback with the reading of an associated text, the method executed by a processing device or devices having associated memory. The method comprises syntactically and/or semantically analysing the text at a text segment level to generate an emotional profile for each text segment in the context of a continuous emotion model. The method further comprises generating a soundtrack for the text comprising one or more audio regions that are configured or selected for playback during corresponding text regions of the text, wherein the audio configured for playback in the audio regions is based on, or is a function of, the emotional profile of one or more of the text segments within the respective text regions.
    Type: Application
    Filed: July 28, 2017
    Publication date: February 1, 2018
    Inventors: Paul Charles Cameron, Craig Andrew Wilson, Petrus Matheus Godefridus De Vocht, Brock David Moore
  • Publication number: 20180032610
    Abstract: A method of automatically generating a digital soundtrack intended for synchronised playback with associated speech audio, the method executed by a processing device or devices having associated memory. The method comprises syntactically and/or semantically analysing text representing or corresponding to the speech audio at a text segment level to generate an emotional profile for each text segment in the context of a continuous emotion model. The method further comprises generating a soundtrack for the speech audio comprising one or more audio regions that are configured or selected for playback during corresponding speech regions of the speech audio, wherein the audio configured for playback in the audio regions is based on, or is a function of, the emotional profile of one or more of the text segments within the respective speech regions.
    Type: Application
    Filed: July 28, 2017
    Publication date: February 1, 2018
    Inventors: Paul Charles Cameron, Craig Andrew Wilson, Petrus Matheus Godefridus De Vocht, Brock David Moore