Patents by Inventor Simon Andrew St John Brislin

Simon Andrew St John Brislin has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents that have been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240082707
    Abstract: A method for identifying a cutscene in gameplay footage, the method comprising: receiving a first video signal and a second video signal each comprising a plurality of images; creating a first video fingerprint comprising a plurality of signatures, each signature of the plurality of signatures based on at least one image of the plurality of images in the first video signal; creating a second video fingerprint comprising a plurality of signatures, each signature of the plurality of signatures based on at least one image of the plurality of images in the second video signal; comparing the first video fingerprint with the second video fingerprint; and identifying a cutscene when at least a portion of the first video fingerprint has at least a threshold level of similarity with at least a portion of the second video fingerprint.
    Type: Application
    Filed: September 11, 2023
    Publication date: March 14, 2024
    Applicant: Sony Interactive Entertainment Europe Limited
    Inventors: Simon Andrew St John Brislin, Nicholas Anthony Edward Ryan
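    A minimal sketch of how the fingerprint comparison above could look, assuming an 8x8 average-luminance hash as the per-frame signature, a fixed 90-frame window and a 95% similarity threshold; the publication leaves the signature type, window and threshold open:

      import numpy as np

      def frame_signature(frame, grid=8):
          """Reduce a greyscale frame (2-D array) to a 64-bit average-luminance hash."""
          h, w = frame.shape
          frame = frame[: h - h % grid, : w - w % grid]
          blocks = frame.reshape(grid, frame.shape[0] // grid,
                                 grid, frame.shape[1] // grid).mean(axis=(1, 3))
          bits = (blocks > blocks.mean()).flatten()
          return int("".join("1" if b else "0" for b in bits), 2)

      def fingerprint(frames, grid=8):
          """One signature per frame: the video fingerprint."""
          return [frame_signature(f, grid) for f in frames]

      def similarity(sig_a, sig_b, bits=64):
          """Fraction of matching bits between two signatures."""
          return 1.0 - bin(sig_a ^ sig_b).count("1") / bits

      def shared_segments(fp_a, fp_b, window=90, threshold=0.95):
          """Windows of the first fingerprint that match some window of the second."""
          hits = []
          for i in range(0, len(fp_a) - window + 1, window):
              for j in range(0, len(fp_b) - window + 1, window):
                  score = sum(similarity(a, b)
                              for a, b in zip(fp_a[i:i + window], fp_b[j:j + window])) / window
                  if score >= threshold:
                      hits.append((i, j, score))  # candidate cutscene shared by both videos
          return hits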
  • Patent number: 11766618
    Abstract: Computer-implemented systems and methods for providing contextual game guidance are described herein. An example method includes determining, based on contextual information regarding an application, an objective of the user; automatically deriving, based on the contextual information and the objective, contextual guidance to assist the user; generating a user interface having the contextual guidance; and transmitting the user interface to a client device.
    Type: Grant
    Filed: May 23, 2022
    Date of Patent: September 26, 2023
    Assignee: Sony Interactive Entertainment LLC
    Inventors: Warren Benedetto, Landon Noss, Adil Sherwani, Nitin Mohan, Matthew Ito, Xifan Chen, Hugh Alexander Dinsdale Spencer, Paul Edridge, Andrew John Nicholas Jones, Simon Andrew St. John Brislin, Nicholas Anthony Edward Ryan, Charles Wayne Denison, II, Matthew Stewart Bloom-Carlin, Derek Andrew Parker
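    A minimal sketch of the pipeline this abstract describes, from context to transmitted UI payload; the rule table, guidance strings and JSON field names below are purely illustrative assumptions:

      import json
      from typing import Optional

      # Hypothetical rules mapping observed game context to a likely player objective.
      OBJECTIVE_RULES = [
          (lambda ctx: ctx.get("health", 100) < 20, "restore_health"),
          (lambda ctx: ctx.get("active_quest") == "rescue_villager", "complete_rescue_quest"),
      ]

      # Hypothetical guidance text keyed by objective.
      GUIDANCE = {
          "restore_health": "Use a healing item from the inventory or rest at a campfire.",
          "complete_rescue_quest": "The cell key is carried by the guard captain in the north cave.",
      }

      def determine_objective(context: dict) -> Optional[str]:
          """Infer the user's objective from contextual information about the application."""
          for rule, objective in OBJECTIVE_RULES:
              if rule(context):
                  return objective
          return None

      def guidance_ui(context: dict) -> str:
          """Derive contextual guidance and wrap it in a UI payload for the client device."""
          objective = determine_objective(context)
          text = GUIDANCE.get(objective, "No guidance available for the current situation.")
          return json.dumps({"widget": "guidance_overlay", "objective": objective, "text": text})

      # Example: a context reporting low health yields the health-restoration hint.
      print(guidance_ui({"health": 12, "active_quest": None}))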
  • Publication number: 20220280875
    Abstract: Computer-implemented systems and methods for providing contextual game guidance are described herein. An example method includes determining, based on contextual information regarding an application, an objective of the user; automatically deriving, based on the contextual information and the objective, contextual guidance to assist the user; generating a user interface having the contextual guidance; and transmitting the user interface to a client device.
    Type: Application
    Filed: May 23, 2022
    Publication date: September 8, 2022
    Inventors: Warren Benedetto, Landon Noss, Adil Sherwani, Nitin Mohan, Matthew Ito, Xifan Chen, Hugh Alexander Dinsdale Spencer, Paul Edridge, Andrew John Nicholas Jones, Simon Andrew St. John Brislin, Nicholas Anthony Edward Ryan, Charles Wayne Denison, II, Matthew Stewart Bloom-Carlin, Derek Andrew Parker
  • Patent number: 11423944
    Abstract: A method of generating audio-visual content from video game footage is provided. The method comprises obtaining a user-selected audio track and obtaining video game footage. Statistical analysis is performed on the audio track so as to determine an excitement level associated with respective portions of the audio track. Statistical analysis is performed on the video game footage so as to determine an excitement level associated with respective portions of the video game footage. Portions of the video game footage are matched with portions of the audio track, based on a correspondence in determined excitement level. Based on said matching, a combined audio-visual content comprising the portions of the video game footage matched to corresponding portions of the audio track is generated. In this way, calm and exciting moments within the video footage are matched to corresponding moments in the audio track. A corresponding system is also provided.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: August 23, 2022
    Inventor: Simon Andrew St. John Brislin
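    A minimal sketch of the matching step, assuming RMS energy as the audio excitement measure and mean inter-frame difference as the footage excitement measure; the patent leaves the statistical analysis unspecified:

      import numpy as np

      def audio_excitement(samples, portions):
          """RMS energy per equal-length portion of a mono signal, scaled to [0, 1]."""
          rms = np.array([np.sqrt(np.mean(c.astype(float) ** 2))
                          for c in np.array_split(samples, portions)])
          return (rms - rms.min()) / (rms.max() - rms.min() + 1e-9)

      def video_excitement(frames, portions):
          """Mean inter-frame difference per portion of footage, scaled to [0, 1]."""
          diffs = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))
          motion = np.array([c.mean() for c in np.array_split(diffs, portions)])
          return (motion - motion.min()) / (motion.max() - motion.min() + 1e-9)

      def match_portions(audio_scores, video_scores):
          """Pair each audio portion with the unused footage portion of closest excitement."""
          unused = list(range(len(video_scores)))  # assumes at least as many footage portions
          pairs = []
          for a_idx, a in enumerate(audio_scores):
              v_idx = min(unused, key=lambda v: abs(video_scores[v] - a))
              unused.remove(v_idx)
              pairs.append((a_idx, v_idx))
          return pairs  # assemble the matched footage portions in audio order for the final edit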
  • Patent number: 11338210
    Abstract: Computer-implemented systems and methods for providing contextual game guidance are described herein. An example method includes determining, based on contextual information regarding an application, an objective of the user; automatically deriving, based on the contextual information and the objective, contextual guidance to assist the user; generating a user interface having the contextual guidance; and transmitting the user interface to a client device.
    Type: Grant
    Filed: October 15, 2019
    Date of Patent: May 24, 2022
    Inventors: Warren Benedetto, Landon Noss, Adil Sherwani, Nitin Mohan, Matthew Ito, Xifan Chen, Hugh Alexander Dinsdale Spencer, Paul Edridge, Andrew John Nicholas Jones, Simon Andrew St. John Brislin, Nicholas Anthony Edward Ryan, Charles Wayne Denison, II, Matthew Stewart Bloom-Carlin, Derek Andrew Parker
  • Patent number: 11325037
    Abstract: A method of mapping a virtual environment comprises obtaining a first sequence of video images output by a videogame title; obtaining a corresponding sequence of in-game virtual camera positions at which the video images were created; obtaining a corresponding sequence of depth buffer values for a depth buffer used by the videogame whilst creating the video images; and for each of a plurality of video images and corresponding depth buffer values of the obtained sequences, obtain mapping points corresponding to a sampling distribution of points over the area of a respective video image and their associated depth values; wherein respective mapping points are obtained by projecting co-ordinates derived from the sample points from the video image and associated depth values back into a 3D game world co-ordinate system of the videogame title; thereby obtaining a point cloud dataset of mapping points corresponding to the first sequence of video images.
    Type: Grant
    Filed: February 20, 2019
    Date of Patent: May 10, 2022
    Inventors: Andrew Swann, Pritpal Singh Panesar, Simon Andrew St. John Brislin, Hugh Alexander Dinsdale Spencer
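    A minimal sketch of the back-projection step, assuming a simple pinhole camera with a known vertical field of view and a 3x3 camera-to-world rotation recorded per frame; the patent fixes neither the camera model nor the sampling distribution:

      import numpy as np

      def unproject(u, v, depth, width, height, fov_y_deg, cam_pos, cam_rot):
          """Map a pixel (u, v) with linear depth to a world-space point for one camera pose."""
          fy = (height / 2.0) / np.tan(np.radians(fov_y_deg) / 2.0)
          fx = fy  # square pixels assumed
          # Camera-space ray through the pixel (x right, y up, z forward).
          x = (u - width / 2.0) / fx * depth
          y = -(v - height / 2.0) / fy * depth
          point_cam = np.array([x, y, depth])
          return cam_pos + cam_rot @ point_cam  # cam_rot: camera-to-world rotation matrix

      def frame_to_points(depth_buffer, cam_pos, cam_rot, fov_y_deg=70.0, samples=64):
          """Back-project a sampled distribution of depth-buffer positions for one video image."""
          h, w = depth_buffer.shape
          rng = np.random.default_rng(0)
          us = rng.integers(0, w, samples)
          vs = rng.integers(0, h, samples)
          return np.array([unproject(u, v, depth_buffer[v, u], w, h, fov_y_deg, cam_pos, cam_rot)
                           for u, v in zip(us, vs)])

      # Accumulating frame_to_points() over the whole sequence yields the point cloud dataset.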
  • Publication number: 20210178266
    Abstract: A method of mapping a virtual environment comprises obtaining a first sequence of video images output by a videogame title; obtaining a corresponding sequence of in-game virtual camera positions at which the video images were created; obtaining a corresponding sequence of depth buffer values for a depth buffer used by the videogame whilst creating the video images; and for each of a plurality of video images and corresponding depth buffer values of the obtained sequences, obtain mapping points corresponding to a sampling distribution of points over the area of a respective video image and their associated depth values; wherein respective mapping points are obtained by projecting co-ordinates derived from the sample points from the video image and associated depth values back into a 3D game world co-ordinate system of the videogame title; thereby obtaining a point cloud dataset of mapping points corresponding to the first sequence of video images.
    Type: Application
    Filed: February 20, 2019
    Publication date: June 17, 2021
    Applicant: Sony Interactive Entertainment Europe Limited
    Inventors: Andrew Swann, Pritpal Singh Panesar, Simon Andrew St. John Brislin, Hugh Alexander Dinsdale Spencer
  • Patent number: 11007436
    Abstract: A method of detecting significant footage for recording from a videogame includes obtaining position information for a target object within a virtual environment of the videogame, obtaining depth buffer information for a current position of a virtual camera used to generate a current image of the virtual environment for display by the videogame, calculating a first distance along a line between the current position of the virtual camera and the obtained position of the target object, detecting whether a depth buffer value along the line corresponds to at least a threshold distance from the virtual camera, the threshold distance being based upon the calculated first distance, and if so, associating the current image with an indicator that the image is significant for the purposes of recording footage from the videogame.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: May 18, 2021
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Hugh Alexander Dinsdale Spencer, Andrew Swann, Simon Andrew St John Brislin, Nicholas Anthony Edward Ryan
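    A minimal sketch of the occlusion test, assuming the depth buffer stores linear distance from the camera and using a hypothetical 95% threshold factor on the calculated camera-to-target distance:

      import numpy as np

      def target_is_visible(cam_pos, cam_rot, target_pos, depth_buffer, width, height,
                            fov_y_deg=70.0, threshold_factor=0.95):
          """True when the depth along the camera-to-target line is at least the threshold."""
          first_distance = np.linalg.norm(target_pos - cam_pos)
          # Project the target into the image (camera space: x right, y up, z forward).
          p_cam = cam_rot.T @ (target_pos - cam_pos)  # cam_rot: camera-to-world rotation
          if p_cam[2] <= 0:
              return False  # target is behind the camera
          fy = (height / 2.0) / np.tan(np.radians(fov_y_deg) / 2.0)
          u = int(width / 2.0 + p_cam[0] / p_cam[2] * fy)
          v = int(height / 2.0 - p_cam[1] / p_cam[2] * fy)
          if not (0 <= u < width and 0 <= v < height):
              return False  # target is off screen
          threshold = threshold_factor * first_distance
          return depth_buffer[v, u] >= threshold  # nothing closer than the target along the line

      # Frames where target_is_visible() holds would be tagged as significant for recording.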
  • Patent number: 10874948
    Abstract: A method of mapping a virtual environment includes: obtaining a first sequence of video images output by a videogame title; obtaining a corresponding sequence of in-game virtual camera positions at which the video images were created; obtaining a corresponding sequence of depth buffer values for a depth buffer used by the videogame whilst creating the video images; and, for each of a plurality of video images and corresponding depth buffer values of the obtained sequences, obtain mapping points corresponding to a selected predetermined set of depth values corresponding to a predetermined set of positions within a respective video image; where for each pair of depth values and video image positions, a mapping point has a distance from the virtual camera position based upon the depth value, and a position based upon the relative positions of the virtual camera and the respective video image position, thereby obtaining a map dataset of mapping points corresponding to the first sequence of video images.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: December 29, 2020
    Assignee: Sony Interactive Entertainment Europe Limited
    Inventors: Nicholas Anthony Edward Ryan, Hugh Alexander Dinsdale Spencer, Andrew Swann, Simon Andrew St John Brislin, Pritpal Singh Panesar
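    A shorter companion sketch for this variant, which reads depth at a predetermined set of image positions rather than a sampled distribution; the centre-scanline choice and the yaw-only camera below are assumptions:

      import numpy as np

      def scanline_map_points(depth_buffer, cam_xz, cam_yaw, fov_x_deg=90.0, columns=32):
          """Sweep centre-scanline depths into top-down (x, z) mapping points for one frame."""
          h, w = depth_buffer.shape
          row = depth_buffer[h // 2]                        # predetermined image positions
          us = np.linspace(0, w - 1, columns).astype(int)   # fixed set of columns
          # Angle of each column relative to the camera's facing direction.
          angles = cam_yaw + np.radians((us - w / 2.0) / w * fov_x_deg)
          dists = row[us]
          return np.stack([cam_xz[0] + dists * np.sin(angles),
                           cam_xz[1] + dists * np.cos(angles)], axis=1)

      # Accumulated over the sequence of frames and camera poses, these points form the map dataset.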
  • Publication number: 20200251146
    Abstract: A method of generating audio-visual content from video game footage is provided. The method comprises obtaining a user-selected audio track and obtaining video game footage. Statistical analysis is performed on the audio track so as to determine an excitement level associated with respective portions of the audio track. Statistical analysis is performed on the video game footage so as to determine an excitement level associated with respective portions of the video game footage. Portions of the video game footage are matched with portions of the audio track, based on a correspondence in determined excitement level. Based on said matching, a combined audio-visual content comprising the portions of the video game footage matched to corresponding portions of the audio track is generated. In this way, calm and exciting moments within the video footage are matched to corresponding moments in the audio track. A corresponding system is also provided.
    Type: Application
    Filed: January 23, 2020
    Publication date: August 6, 2020
    Applicant: Sony Interactive Entertainment Europe Limited
    Inventor: Simon Andrew St. John Brislin
  • Publication number: 20200122043
    Abstract: Computer-implemented systems and methods for providing contextual game guidance are described herein. An example method includes determining, based on contextual information regarding an application, an objective of the user; automatically deriving, based on the contextual information and the objective, contextual guidance to assist the user; generating a user interface having the contextual guidance; and transmitting the user interface to a client device.
    Type: Application
    Filed: October 15, 2019
    Publication date: April 23, 2020
    Inventors: Warren Benedetto, Landon Noss, Adil Sherwani, Nitin Mohan, Matthew Ito, Xifan Chen, Hugh Alexander Dinsdale Spencer, Paul Edridge, Andrew John Nicholas Jones, Simon Andrew St. John Brislin, Nicholas Anthony Edward Ryan, Charles Wayne Denison, II, Matthew Stewart Bloom-Carlin, Derek Andrew Parker
  • Publication number: 20200016499
    Abstract: A method of mapping a virtual environment includes: obtaining a first sequence of video images output by a videogame title; obtaining a corresponding sequence of in-game virtual camera positions at which the video images were created; obtaining a corresponding sequence of depth buffer values for a depth buffer used by the videogame whilst creating the video images; and, for each of a plurality of video images and corresponding depth buffer values of the obtained sequences, obtain mapping points corresponding to a selected predetermined set of depth values corresponding to a predetermined set of positions within a respective video image; where for each pair of depth values and video image positions, a mapping point has a distance from the virtual camera position based upon the depth value, and a position based upon the relative positions of the virtual camera and the respective video image position, thereby obtaining a map dataset of mapping points corresponding to the first sequence of video images.
    Type: Application
    Filed: July 2, 2019
    Publication date: January 16, 2020
    Applicant: Sony Interactive Entertainment Europe Limited
    Inventors: Nicholas Anthony Edward Ryan, Hugh Alexander Dinsdale Spencer, Andrew Swann, Simon Andrew St John Brislin, Pritpal Singh Panesar
  • Publication number: 20190392221
    Abstract: A method of detecting significant footage for recording from a videogame comprises obtaining position information for a target object within a virtual environment of the videogame, obtaining depth buffer information for a current position of a virtual camera used to generate a current image of the virtual environment for display by the videogame, calculating a first distance along a line between the current position of the virtual camera and the obtained position of the target object, detecting whether a depth buffer value along the line corresponds to at least a threshold distance from the virtual camera, the threshold distance being based upon the calculated first distance, and if so, associating the current image with an indicator that the image is significant for the purposes of recording footage from the videogame.
    Type: Application
    Filed: February 6, 2019
    Publication date: December 26, 2019
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Hugh Alexander Dinsdale Spencer, Andrew Swann, Simon Andrew St John Brislin, Nicholas Anthony Edward Ryan