Patents by Inventor Oliver Hume

Oliver Hume has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11281014
    Abstract: A head-mountable display device includes: a display element observable by a user; an image generator operable to generate an image for display by the display element; a plurality of ultrasound transducers, the ultrasound transducers being operable and arranged to emit ultrasound signals towards at least a first eye of the user when the head-mountable display is being worn by the user; one or more sensors operable and arranged to detect reflections of the emitted ultrasound signals; an eye imaging unit operable to generate a representation of the user's eye based on the ultrasound signals received at the one or more sensors; and an eye position detector configured to detect the position of the eye relative to the position of the head-mountable display device based on the representation, the eye position detector being configured to detect whether the position of the eye is offset from a desired position.
    Type: Grant
    Filed: June 5, 2019
    Date of Patent: March 22, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Colin Jonathan Hughes, Oliver Hume, Patrick John Connor
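    A minimal sketch of how the echo measurements described above could be turned into an eye-position estimate. The abstract does not prescribe a reconstruction algorithm; the least-squares multilateration step and all names here (estimate_eye_position, SPEED_OF_SOUND) are illustrative assumptions.
    ```python
    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s in air (assumed constant)

    def estimate_eye_position(transducer_positions, echo_delays):
        """Multilaterate an eye position from ultrasound round-trip delays."""
        p = np.asarray(transducer_positions, dtype=float)   # (N, 3) positions
        r = SPEED_OF_SOUND * np.asarray(echo_delays) / 2.0  # one-way ranges
        # Subtracting the first sphere equation from the rest linearises the
        # sphere-intersection problem into A @ x = b.
        A = 2.0 * (p[1:] - p[0])
        b = r[0]**2 - r[1:]**2 + np.sum(p[1:]**2, axis=1) - np.sum(p[0]**2)
        eye, *_ = np.linalg.lstsq(A, b, rcond=None)
        return eye

    def eye_is_offset(eye_position, desired_position, tolerance=0.002):
        """True if the detected eye is more than `tolerance` metres off-centre."""
        return np.linalg.norm(np.asarray(eye_position) - desired_position) > tolerance
    ```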
  • Publication number: 20220062771
    Abstract: A content modification system comprising a content receiving unit operable to receive content comprising a virtual environment and one or more active elements, an input receiving unit operable to receive inputs from one or more users, an element addition unit operable to generate one or more virtual elements within the virtual environment in response to the received inputs, the virtual elements being unable to be interacted with by the one or more active elements, and a content generation unit operable to generate modified content comprising the virtual environment, the one or more active elements, and the one or more generated virtual elements.
    Type: Application
    Filed: August 23, 2021
    Publication date: March 3, 2022
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Maria Chiara Monti, Fabio Cappello, Matthew Sanders, Timothy Bradley, Oliver Hume, Jason Craig Millson
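    Illustration only: one way to realise "virtual elements that the active elements cannot interact with" is a per-element flag consulted by the interaction check. The class and method names below are hypothetical, not from the patent.
    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Element:
        name: str
        interactable: bool = True   # may active elements act on this element?

    @dataclass
    class ModifiedContent:
        active_elements: list = field(default_factory=list)
        virtual_elements: list = field(default_factory=list)

        def add_virtual_element(self, name: str) -> None:
            # Elements generated from user inputs are flagged so that the
            # active elements cannot interact with them, per the abstract.
            self.virtual_elements.append(Element(name, interactable=False))

        def can_interact(self, target: Element) -> bool:
            return target.interactable
    ```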
  • Publication number: 20220062770
    Abstract: A content generation system, the system comprising an input obtaining unit operable to obtain one or more samples of input text and/or audio relating to a first content, an input analysis unit operable to generate n-grams representing one or more elements of the obtained inputs, a representation generating unit operable to generate a visual representation of one or more of the generated n-grams, and a display generation unit operable to generate second content comprising one or more elements of the visual representation in association with the first content.
    Type: Application
    Filed: August 23, 2021
    Publication date: March 3, 2022
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Fabio Cappello, Maria Chiara Monti, Matthew Sanders, Timothy Bradley, Oliver Hume, Jason Craig Millson
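    A minimal sketch of the n-gram step, assuming whitespace tokenisation; the resulting frequency weights could size each n-gram in a word-cloud style visual representation (the representation itself is not sketched).
    ```python
    from collections import Counter

    def ngrams(tokens, n=2):
        """All length-n runs of consecutive tokens."""
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    def ngram_weights(text, n=2):
        """Frequency of each n-gram in the input text or transcribed audio."""
        return Counter(ngrams(text.lower().split(), n))

    # Example: bigram weights for a transcript accompanying the first content.
    print(ngram_weights("nice save nice save what a save", n=2))
    ```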
  • Patent number: 11244489
    Abstract: A method of determining identifiers for tagging frames of animation is provided. The method comprises obtaining data indicating motion of an animated object in a plurality of frames and detecting the object as performing a pre-determined motion in at least some of the plurality of frames. For a given frame, it is determined, based on the detected pre-determined motion, whether to associate an identifier with the pre-determined motion, the identifier indicating an event that is to be triggered in response to the pre-determined motion. In response to a determination of an identifier, the frames of the animation comprising the detected pre-determined motion are tagged. The pre-determined motion and corresponding identifier are determined by inputting the obtained data to a machine learning model. A corresponding system is also provided.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: February 8, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Fabio Cappello, Oliver Hume
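    A sketch of the tagging loop, assuming a hypothetical trained classifier `classify_window` that maps a window of motion data to an event identifier (or None):
    ```python
    def tag_frames(motion_frames, classify_window, window=8):
        """Associate event identifiers with frames whose motion the model
        recognises as a pre-determined motion (e.g. a footstep)."""
        tags = {}
        for i in range(len(motion_frames) - window + 1):
            identifier = classify_window(motion_frames[i:i + window])
            if identifier is not None:
                tags[i] = identifier   # event to trigger when frame i plays
        return tags
    ```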
  • Patent number: 11146907
    Abstract: A system for identifying the contribution of a given sound source to a composite audio track, the system comprising an audio input unit operable to receive an input composite audio track comprising two or more sound sources, including the given sound source, an audio generation unit operable to generate, using a model of a sound source, an approximation of the contribution of the given sound source to the composite audio track, an audio comparison unit operable to compare the generated audio to at least a portion of the composite audio track to determine whether the generated audio provides an approximation of the composite audio track that meets a threshold degree of similarity, and an audio identification unit operable to identify, when the threshold is met, the generated audio as a suitable representation of the contribution of the sound source to the composite audio track.
    Type: Grant
    Filed: April 3, 2020
    Date of Patent: October 12, 2021
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Fabio Cappello, Oliver Hume
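    A sketch of the accept/reject comparison, assuming the sound-source model is a callable returning candidate audio, and using cosine similarity as a stand-in for the unspecified similarity measure:
    ```python
    import numpy as np

    def identify_contribution(composite, source_model, threshold=0.8):
        """Return the model's output if it approximates the composite track
        closely enough, else None."""
        candidate = np.asarray(source_model(composite))
        n = min(len(candidate), len(composite))
        a, b = candidate[:n], np.asarray(composite)[:n]
        similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        return candidate if similarity >= threshold else None
    ```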
  • Patent number: 11065542
    Abstract: A method of determining user engagement in a game includes: receiving data from a plurality of remote entertainment devices at a server, the data from a respective entertainment device associating at least a first feature state of the game with an action by a user of that respective entertainment device indicative of a predetermined degree of engagement by the user with the game, aggregating the data received from the plurality of entertainment devices, and determining a level of correspondence between one or more feature states and user actions indicative of the predetermined degree of engagement.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: July 20, 2021
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Hogarth Andall, Oliver Hume
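    A sketch of the server-side aggregation, reading "level of correspondence" as the fraction of reports for a feature state that carried the engagement action; the data layout is an assumption.
    ```python
    from collections import Counter

    def engagement_correspondence(reports):
        """reports: iterable of (feature_state, engaged) pairs received from
        the remote entertainment devices."""
        seen, engaged = Counter(), Counter()
        for feature_state, was_engaged in reports:
            seen[feature_state] += 1
            engaged[feature_state] += was_engaged
        return {state: engaged[state] / seen[state] for state in seen}

    # Example: yields {'boss_fight': 1.0, 'loading_screen': 0.0}
    print(engagement_correspondence([("boss_fight", True),
                                     ("boss_fight", True),
                                     ("loading_screen", False)]))
    ```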
  • Publication number: 20210124996
    Abstract: An encoding apparatus is provided. The apparatus comprises an input unit operable to obtain a plurality of training images, said training images being for use in training a machine learning model. The apparatus also comprises a label unit operable to obtain a class label associated with the training images; and a key unit operable to obtain a secret key for use in encoding the training images. The apparatus further comprises an image noise generator operable to generate, based on the obtained secret key, noise for introducing into the training images. The image noise generator is configured to generate noise that correlates with the class label associated with the training images such that a machine learning model subsequently trained with the modified training images learns to associate the introduced noise with the class label for those images. A corresponding decoding apparatus is also provided.
    Type: Application
    Filed: October 20, 2020
    Publication date: April 29, 2021
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Mark Jacobus Breugelmans, Oliver Hume, Fabio Cappello, Nigel John Williams
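    A minimal sketch of the noise encoding, assuming the per-class noise is derived by seeding a PRNG with the secret key and class label so it is reproducible at decode time; the strength and pixel range are illustrative.
    ```python
    import hashlib
    import numpy as np

    def encode_training_image(image, class_label, secret_key, strength=4.0):
        """Add key-derived noise that correlates with the class label, so a
        model trained on the encoded images learns to associate the noise
        pattern with that label."""
        digest = hashlib.sha256(f"{secret_key}:{class_label}".encode()).digest()
        rng = np.random.default_rng(int.from_bytes(digest[:8], "little"))
        noise = rng.standard_normal(np.asarray(image, dtype=float).shape) * strength
        return np.clip(image + noise, 0, 255)
    ```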
  • Publication number: 20210050023
    Abstract: A system for determining prioritisation values for two or more sounds within an audio clip includes: a feature extraction unit operable to extract characteristic features from the two or more sounds, a feature combination unit operable to generate a combined mix comprising extracted features from the two or more sounds, an audio assessment unit operable to identify the contribution of one or more of the features to the combined mix, a feature classification unit operable to assign a saliency score to each of the features in the combined mix, and an audio prioritisation unit operable to determine relative priority values for the two or more sounds in dependence upon the assigned saliency scores for each of one or more features of the sounds.
    Type: Application
    Filed: August 5, 2020
    Publication date: February 18, 2021
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Oliver Hume, Fabio Cappello, Marina Villanueva-Barreiro, Michael Lee Jones
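    A sketch of the prioritisation pipeline, with the feature extraction and saliency classification units reduced to hypothetical callables:
    ```python
    import numpy as np

    def prioritise_sounds(sounds, extract_features, saliency_score):
        """Relative priority values for two or more sounds, in dependence upon
        the saliency of each sound's extracted features in the combined mix."""
        features = [extract_features(s) for s in sounds]          # per sound
        scores = np.array([saliency_score(f, features) for f in features],
                          dtype=float)
        return scores / scores.sum()       # normalised relative priorities
    ```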
  • Publication number: 20200374647
    Abstract: A method of obtaining a head-related transfer function for a user is provided. The method comprises generating an audio signal for output by a handheld device and outputting the generated audio signal at a plurality of locations by moving the handheld device to those locations. The audio output by the handheld device is detected at left-ear and right-ear microphones. A pose of the handheld device relative to the user's head is determined for at least some of the locations. One or more personalised HRTF features are then determined based on the detected audio and corresponding determined poses of the handheld device. The one or more personalised HRTF features are then mapped to a higher-quality HRTF for the user, wherein the higher-quality HRTF corresponds to an HRTF measured in an anechoic environment. This mapping may be learned using machine learning, for example. A corresponding system is also provided.
    Type: Application
    Filed: May 15, 2020
    Publication date: November 26, 2020
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Fabio Cappello, Marina Villanueva-Barreiro, Oliver Hume
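    A sketch of one personalised feature the method could extract per pose: the interaural time difference, estimated by cross-correlating the left-ear and right-ear recordings (assumed time-aligned and of equal length).
    ```python
    import numpy as np

    def itd_for_pose(left, right, sample_rate):
        """Interaural time difference (seconds) for one handheld-device pose."""
        corr = np.correlate(left, right, mode="full")
        lag = int(np.argmax(corr)) - (len(right) - 1)   # delay in samples
        return lag / sample_rate

    # The resulting {pose: ITD} pairs form sparse personalised HRTF features,
    # which the abstract maps (e.g. with a learned model) to a higher-quality,
    # anechoic-equivalent HRTF.
    ```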
  • Publication number: 20200327871
    Abstract: A system for identifying the contribution of a given sound source to a composite audio track, the system comprising an audio input unit operable to receive an input composite audio track comprising two or more sound sources, including the given sound source, an audio generation unit operable to generate, using a model of a sound source, an approximation of the contribution of the given sound source to the composite audio track, an audio comparison unit operable to compare the generated audio to at least a portion of the composite audio track to determine whether the generated audio provides an approximation of the composite audio track that meets a threshold degree of similarity, and an audio identification unit operable to identify, when the threshold is met, the generated audio as a suitable representation of the contribution of the sound source to the composite audio track.
    Type: Application
    Filed: April 3, 2020
    Publication date: October 15, 2020
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Fabio Cappello, Oliver Hume
  • Publication number: 20200329331
    Abstract: A system for generating audio content in dependence upon an input audio track comprising audio corresponding to one or more sound sources, the system comprising an audio input unit operable to input the input audio track to one or more models, each representing one or more of the sound sources, and an audio generation unit operable to generate, using the one or more models, one or more audio tracks each comprising a representation of the audio contribution of the corresponding sound sources of the input audio track, wherein the generated audio tracks comprise one or more variations relative to the corresponding portion of the input audio track.
    Type: Application
    Filed: April 7, 2020
    Publication date: October 15, 2020
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Fabio Cappello, Marina Villanueva-Barreiro, Oliver Hume
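    Illustration only: with each model reduced to a callable that returns its source's contribution, a "variation" can be as simple as a jittered gain; in practice the variations would come from the generative models themselves.
    ```python
    import numpy as np

    def generate_varied_tracks(input_track, source_models, seed=0):
        """One output track per source model, each a varied take on that
        source's contribution to the input track."""
        rng = np.random.default_rng(seed)
        return [model(input_track) * rng.uniform(0.8, 1.2)
                for model in source_models]
    ```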
  • Publication number: 20200222804
    Abstract: A method of determining blending coefficients for respective animations includes: obtaining animation data, the animation data defining at least two different animations that are at least in part to be simultaneously applied to an animated object, each animation comprising a plurality of frames; obtaining corresponding video game data, the video game data comprising an in-game state of the object; inputting the animation data and video game data into a machine learning model, the machine learning model being trained to determine, based on the animation data and corresponding video game data, a blending coefficient for each of the animations in the animation data; determining, based on the output of the machine learning model, one or more blending coefficients for at least one of the animations, the or each blending coefficient defining a relative weighting with which each animation is to be applied to the animated object; and blending the at least in part simultaneously applied animations using the one or more blending coefficients.
    Type: Application
    Filed: January 13, 2020
    Publication date: July 16, 2020
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Fabio Cappello, Oliver Hume
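    A sketch of applying the predicted coefficients, assuming poses are comparable numeric arrays and a hypothetical `model` that returns one weight per animation; a production blender would operate on rotations (e.g. quaternion slerp) rather than raw arrays.
    ```python
    import numpy as np

    def blend_frame(pose_a, pose_b, model, game_state):
        """Blend the simultaneously applied parts of two animations using
        model-determined blending coefficients."""
        w_a, w_b = model(pose_a, pose_b, game_state)   # blending coefficients
        total = w_a + w_b
        w_a, w_b = w_a / total, w_b / total            # relative weightings
        return w_a * np.asarray(pose_a) + w_b * np.asarray(pose_b)
    ```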
  • Publication number: 20200179806
    Abstract: A method of determining user engagement in a game includes: receiving data from a plurality of remote entertainment devices at a server, the data from a respective entertainment device associating at least a first feature state of the game with an action by a user of that respective entertainment device indicative of a predetermined degree of engagement by the user with the game, aggregating the data received from the plurality of entertainment devices, and determining a level of correspondence between one or more feature states and user actions indicative of the predetermined degree of engagement.
    Type: Application
    Filed: December 3, 2019
    Publication date: June 11, 2020
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Hogarth Andall, Oliver Hume
  • Publication number: 20200167984
    Abstract: A method of determining identifiers for tagging frames of animation is provided. The method comprises obtaining data indicating motion of an animated object in a plurality of frames and detecting the object as performing a pre-determined motion in at least some of the plurality of frames. For a given frame, it is determined, based on the detected pre-determined motion, whether to associate an identifier with the pre-determined motion, the identifier indicating an event that is to be triggered in response to the pre-determined motion. In response to a determination of an identifier, the frames of the animation comprising the detected pre-determined motion are tagged. The pre-determined motion and corresponding identifier are determined by inputting the obtained data to a machine learning model. A corresponding system is also provided.
    Type: Application
    Filed: November 13, 2019
    Publication date: May 28, 2020
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Fabio Cappello, Oliver Hume
  • Publication number: 20190377191
    Abstract: A head-mountable display device includes: a display element observable by a user; an image generator operable to generate an image for display by the display element; a plurality of ultrasound transducers, the ultrasound transducers being operable and arranged to emit ultrasound signals towards at least a first eye of the user when the head-mountable display is being worn by the user; one or more sensors operable and arranged to detect reflections of the emitted ultrasound signals; an eye imaging unit operable to generate a representation of the user's eye based on the ultrasound signals received at the one or more sensors; and an eye position detector configured to detect the position of the eye relative to the position of the head-mountable display device based on the representation, the eye position detector being configured to detect whether the position of the eye is offset from a desired position.
    Type: Application
    Filed: June 5, 2019
    Publication date: December 12, 2019
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Colin Jonathan Hughes, Oliver Hume, Patrick John Connor
  • Patent number: 10462598
    Abstract: A system for generating a head-related transfer function, HRTF, for a given position with respect to a listener, the system comprising a dividing unit operable to divide each of a plurality of existing HRTFs, each corresponding to a respective one of a plurality of positions, into first and second components, an interaural time difference determination unit operable to determine an interaural time difference expected by a user for a sound source located at the given position in dependence upon the respective first components, an interpolation unit operable to generate an interpolated second component by interpolating the second components using a weighting dependent upon the respective positions for the corresponding HRTFs and the given position, and a generation unit operable to generate an HRTF for the given position in dependence upon the interaural time difference and the interpolated second component.
    Type: Grant
    Filed: February 22, 2019
    Date of Patent: October 29, 2019
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Marina Villanueva-Barreiro, Oliver Hume, Scott Wardle
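    A sketch of the two-component scheme, assuming the first components have already been reduced to an ITD model and using inverse-distance weights for the interpolation (the patent leaves the weighting to the positions):
    ```python
    import numpy as np

    def interpolate_hrtf(second_components, positions, given_position, itd_model):
        """HRTF for `given_position`: an expected ITD from the first components
        plus a position-weighted interpolation of the second components."""
        d = np.linalg.norm(np.asarray(positions) - given_position, axis=1)
        w = 1.0 / (d + 1e-9)
        w /= w.sum()                                    # inverse-distance weights
        second = sum(wi * np.asarray(h) for wi, h in zip(w, second_components))
        return itd_model(given_position), second        # recombine ITD + spectrum
    ```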
  • Publication number: 20060274902
    Abstract: An audio processing apparatus operable to determine, for each loudspeaker of a plurality of loudspeakers, the respective volume at which an audio signal is to be output through that loudspeaker, the volume being determined in dependence on a desired characteristic of a simulated source for the audio signal, the position of a listening location for listening to the audio signal and the position of the loudspeaker.
    Type: Application
    Filed: May 5, 2006
    Publication date: December 7, 2006
    Inventors: Oliver Hume, Jason Page
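    Illustration only: a simple direction-based panning law consistent with the abstract, weighting each loudspeaker by its alignment with the simulated source as seen from the listening location:
    ```python
    import numpy as np

    def speaker_volumes(speaker_positions, source_position, listener_position):
        """Per-loudspeaker volume for simulating a source at source_position."""
        to_source = np.asarray(source_position, float) - listener_position
        to_source /= np.linalg.norm(to_source)
        gains = []
        for sp in speaker_positions:
            to_speaker = np.asarray(sp, float) - listener_position
            to_speaker /= np.linalg.norm(to_speaker)
            gains.append(max(0.0, float(np.dot(to_speaker, to_source))))
        total = sum(gains) or 1.0
        return [g / total for g in gains]    # normalised per-speaker volumes
    ```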
  • Publication number: 20060269086
    Abstract: An audio processing apparatus operable to mix a plurality of input audio streams to form an output audio stream, the apparatus comprising: a mixer operable to receive the input audio streams and to output a mixed frequency-based audio stream in a frequency-based representation; and a frequency-to-time converter operable to convert the mixed frequency-based audio stream from the frequency-based representation to a time-based representation to form the output audio stream.
    Type: Application
    Filed: May 8, 2006
    Publication date: November 30, 2006
    Inventors: Jason Page, Oliver Hume, Nicholas Kennedy, Paul Scargill
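    A minimal sketch of the mixer plus frequency-to-time converter, assuming equal-length streams already available in (or transformed to) a frequency-based representation:
    ```python
    import numpy as np

    def mix_to_time_domain(streams):
        """Mix input streams in the frequency domain, then convert the mixed
        stream back to a time-based representation (the output audio stream)."""
        spectra = [np.fft.rfft(s) for s in streams]     # frequency-based streams
        mixed = np.sum(spectra, axis=0)                 # the mixer's output
        return np.fft.irfft(mixed, n=len(streams[0]))   # frequency-to-time converter

    # Example: mixing two sine tones sampled at 8 kHz.
    t = np.arange(8000) / 8000.0
    out = mix_to_time_domain([np.sin(2 * np.pi * 440 * t),
                              np.sin(2 * np.pi * 660 * t)])
    ```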