Patents by Inventor Pierre Yves Oudeyer

Pierre Yves Oudeyer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7672913
    Abstract: In order to promote efficient learning of relationships inherent in a system or setup S described by system-state and context parameters, the next action to take, affecting the setup, is determined based on the knowledge gain expected to result from this action. Knowledge-gain is assessed “locally” by comparing the value of a knowledge-indicator parameter after the action with the value of this indicator on one or more previous occasions when the system-state/context parameter(s) and action variable(s) had similar values to the current ones. Preferably the “level of knowledge” is assessed based on the accuracy of predictions made by a prediction module. This technique can be applied to train a prediction machine by causing it to participate in the selection of a sequence of actions. This technique can also be applied for managing development of a self-developing device or system, the self-developing device or system performing a sequence of actions selected according to the action-selection technique.
    Type: Grant
    Filed: July 26, 2005
    Date of Patent: March 2, 2010
    Assignee: Sony France S.A.
    Inventors: Frederic Kaplan, Pierre-Yves Oudeyer
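
To make the action-selection loop concrete, here is a minimal Python sketch that scores each candidate action by the recent decrease in prediction error observed in similar past situations, with an optimistic bonus for unexplored regions. The class name, the context-bucketing scheme, and the toy sin() system are illustrative assumptions, not details taken from the patent.

```python
import math
import random
from collections import defaultdict

class CuriosityDrivenSelector:
    """Score each action by the recent drop in prediction error seen in
    similar past situations, used here as the expected knowledge gain."""

    def __init__(self, actions, bucket=0.25):
        self.actions = actions
        self.bucket = bucket              # width used to group similar contexts
        self.history = defaultdict(list)  # (context bucket, action) -> errors

    def _key(self, context, action):
        return (round(context / self.bucket), action)

    def expected_gain(self, context, action):
        errors = self.history[self._key(context, action)]
        if len(errors) < 2:
            return float("inf")           # unexplored: optimistic bonus
        return errors[-2] - errors[-1]    # error decrease = learning progress

    def select(self, context):
        return max(self.actions, key=lambda a: self.expected_gain(context, a))

    def record(self, context, action, prediction_error):
        self.history[self._key(context, action)].append(prediction_error)

# Toy setup S: the unknown relationship is outcome = sin(context + action).
selector = CuriosityDrivenSelector(actions=[-1.0, 0.0, 1.0])
predictor = defaultdict(float)            # crude per-region prediction module
for step in range(100):
    context = random.uniform(0.0, 3.0)
    action = selector.select(context)
    outcome = math.sin(context + action)
    key = selector._key(context, action)
    error = abs(outcome - predictor[key])               # knowledge indicator
    predictor[key] += 0.5 * (outcome - predictor[key])  # update the predictor
    selector.record(context, action, error)
```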
  • Patent number: 7478073
    Abstract: A self-developing device (1) capable of open-ended development makes use of a special motivational system for selecting which action should be taken on the environment by an associated sensory-motor apparatus (2). For a given candidate action, a motivational module (11) calculates a reward associated with the corresponding values that would be taken by one or more motivational variables that are independent of the nature of the associated sensory-motor apparatus. Preferred motivational variables are dependent on the developmental history of the device (1), and include variables quantifying the predictability, familiarity and stability of sensory-motor variables serving as the inputs to the device (1). The sensory-motor variables represent the status of the external environment and/or the internal resources (3) of the sensory-motor apparatus (2) whose behavior is controlled by the self-developing device (1).
    Type: Grant
    Filed: June 4, 2004
    Date of Patent: January 13, 2009
    Assignee: Sony France
    Inventors: Frederic Kaplan, Pierre-Yves Oudeyer
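
The reward computation can be pictured with a short sketch that combines three motivational variables (predictability, familiarity, and stability of sensory-motor inputs) into a scalar reward and picks the candidate action that maximises it. The particular formulas and weights below are assumptions made for illustration; the patent does not fix them.

```python
import statistics

def predictability(errors):
    """High when recent prediction errors in this situation are low."""
    return 1.0 / (1.0 + statistics.fmean(errors))

def familiarity(visit_count):
    """Grows with how often this sensory-motor situation has been met."""
    return visit_count / (1.0 + visit_count)

def stability(values):
    """High when the sensory-motor variable barely fluctuates."""
    return 1.0 / (1.0 + statistics.pstdev(values))

def motivational_reward(errors, visit_count, values, weights=(1.0, 0.5, 0.5)):
    """Scalar reward for a candidate action, combining the three
    motivational variables; the weights are free parameters of this sketch."""
    w_p, w_f, w_s = weights
    return (w_p * predictability(errors)
            + w_f * familiarity(visit_count)
            + w_s * stability(values))

# Evaluate two candidate actions and pick the more motivating one.
candidates = {
    "approach_toy": ([0.1, 0.2, 0.1], 8, [0.50, 0.52, 0.49]),
    "random_flail": ([0.9, 1.1, 1.0], 1, [0.10, 0.90, 0.30]),
}
best = max(candidates, key=lambda a: motivational_reward(*candidates[a]))
print(best)  # -> approach_toy
```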
  • Publication number: 20080319929
    Abstract: In order to promote efficient learning of relationships inherent in a system or setup S described by system-state and context parameters, the next action to take, affecting the setup, is determined based on the knowledge gain expected to result from this action. Knowledge-gain is assessed “locally” by comparing the value of a knowledge-indicator parameter after the action with the value of this indicator on one or more previous occasions when the system-state/context parameter(s) and action variable(s) had similar values to the current ones. Preferably the “level of knowledge” is assessed based on the accuracy of predictions made by a prediction module. This technique can be applied to train a prediction machine by causing it to participate in the selection of a sequence of actions. This technique can also be applied for managing development of a self-developing device or system, the self-developing device or system performing a sequence of actions selected according to the action-selection technique.
    Type: Application
    Filed: July 26, 2005
    Publication date: December 25, 2008
    Inventors: Frederic Kaplan, Pierre-Yves Oudeyer
  • Patent number: 7457752
    Abstract: Method and apparatus for controlling the operation of an emotion synthesizing device, notably of the type where the emotion is conveyed by a sound, having at least one input parameter whose value is used to set a type of emotion to be conveyed. At least one parameter is made a variable parameter over a determined control range, thereby conferring variability in the amount of the type of emotion to be conveyed. The variable parameter can be made variable according to a variation model over the control range, the model relating a quantity-of-emotion control variable to the variable parameter, whereby this control variable is used to variably establish a value of the variable parameter. Preferably the variation obeys a linear model, the variable parameter being made to vary linearly with a variation in the quantity-of-emotion control variable.
    Type: Grant
    Filed: August 12, 2002
    Date of Patent: November 25, 2008
    Assignee: Sony France S.A.
    Inventor: Pierre Yves Oudeyer
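
The core mechanism, a synthesis parameter that varies linearly with a quantity-of-emotion control variable over a control range, reduces to a clamped linear interpolation. The pitch values in this sketch are hypothetical.

```python
def emotion_parameter(neutral, full_emotion, control):
    """Linear variation model: interpolate a synthesis parameter between
    its neutral value and its full-emotion value as control goes 0 -> 1."""
    control = min(max(control, 0.0), 1.0)  # clamp to the control range
    return neutral + control * (full_emotion - neutral)

# Example: pitch rises from 200 Hz (neutral) to 320 Hz (full 'happy').
for amount in (0.0, 0.5, 1.0):
    print(amount, emotion_parameter(200.0, 320.0, amount))
```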
  • Patent number: 7451079
    Abstract: Emotion recognition is performed by extracting a set comprising at least one feature derived from a voice signal, and processing the set of extracted feature(s) to detect an emotion therefrom. The voice signal is low-pass filtered prior to extracting at least one feature of the set; the cut-off frequency for the low-pass filtering is typically centered around 250 Hz. The features are, for example, statistical quantities extracted from sampling a signal of the intensity or pitch of the voice signal.
    Type: Grant
    Filed: July 12, 2002
    Date of Patent: November 11, 2008
    Assignee: Sony France S.A.
    Inventor: Pierre-Yves Oudeyer
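
A rough Python sketch of the front end might look as follows. The first-order IIR filter is a crude stand-in for whatever low-pass filter an implementation would actually use, and the frame-based RMS intensity statistics are an illustrative feature set, not the patent's exact one.

```python
import numpy as np

def lowpass(signal, fs, cutoff=250.0):
    """Crude first-order IIR low-pass with the given cut-off frequency."""
    rc = 1.0 / (2.0 * np.pi * cutoff)
    alpha = (1.0 / fs) / (rc + 1.0 / fs)
    out = np.empty_like(signal)
    out[0] = signal[0]
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + alpha * (signal[i] - out[i - 1])
    return out

def intensity_features(signal, fs, frame_s=0.02):
    """Statistical quantities of a sampled intensity (RMS) contour."""
    hop = int(frame_s * fs)
    frames = signal[: len(signal) // hop * hop].reshape(-1, hop)
    contour = np.sqrt((frames ** 2).mean(axis=1))  # RMS intensity per frame
    return {"mean": contour.mean(), "std": contour.std(),
            "min": contour.min(), "max": contour.max(),
            "range": contour.max() - contour.min()}

fs = 16000
t = np.arange(fs) / fs  # one second of a synthetic 'voice'
voice = np.sin(2 * np.pi * 180 * t) * (1 + 0.3 * np.sin(2 * np.pi * 3 * t))
print(intensity_features(lowpass(voice, fs), fs))
```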
  • Patent number: 7412390
    Abstract: Emotion is added to synthesized speech while the prosodic features of the language are maintained. In a speech synthesis device 200, a language processor 201 generates a string of pronunciation marks from the text, and a prosodic data generating unit 202 creates prosodic data expressing parameters of phonemes such as time duration, pitch and sound volume, based on the string of pronunciation marks. A constraint information generating unit 203 is fed the prosodic data and the string of pronunciation marks, generates constraint information that limits changes in the parameters, and adds this constraint information to the prosodic data. An emotion filter 204, fed the prosodic data to which the constraint information has been added, changes the parameters of the prosodic data, within the constraints, responsive to the emotional state information supplied to it.
    Type: Grant
    Filed: March 13, 2003
    Date of Patent: August 12, 2008
    Assignees: Sony France S.A., Sony Corporation
    Inventors: Erika Kobayashi, Toshiyuki Kumakura, Makoto Akabane, Kenichiro Kobayashi, Nobuhide Yamazaki, Tomoaki Nitta, Pierre Yves Oudeyer
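
One way to picture the constraint mechanism: derive an allowed band around each prosodic parameter, then let the emotion filter modify the parameters and clamp the result back into that band, so emotional colouring cannot destroy the prosodic shape of the language. The tolerance band and the scale-based emotion edit are assumptions of this sketch, not the patent's method.

```python
from dataclasses import dataclass

@dataclass
class Phoneme:
    symbol: str
    duration_ms: float
    pitch_hz: float

def make_constraints(prosody, tolerance=0.2):
    """Constraint information: an allowed band around each parameter."""
    return [{"duration": (p.duration_ms * (1 - tolerance),
                          p.duration_ms * (1 + tolerance)),
             "pitch": (p.pitch_hz * (1 - tolerance),
                       p.pitch_hz * (1 + tolerance))}
            for p in prosody]

def emotion_filter(prosody, constraints, pitch_scale, tempo_scale):
    """Apply an emotional change, then clamp back into the constraints."""
    def clamp(v, band):
        lo, hi = band
        return min(max(v, lo), hi)
    return [Phoneme(p.symbol,
                    clamp(p.duration_ms * tempo_scale, c["duration"]),
                    clamp(p.pitch_hz * pitch_scale, c["pitch"]))
            for p, c in zip(prosody, constraints)]

neutral = [Phoneme("h", 80, 120), Phoneme("e", 120, 140), Phoneme("l", 90, 135)]
happy = emotion_filter(neutral, make_constraints(neutral),
                       pitch_scale=1.5, tempo_scale=0.8)
print(happy)  # pitch raised but capped at +20%; durations shortened
```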
  • Publication number: 20050021483
    Abstract: A self-developing device (1) capable of open-ended development makes use of a special motivational system for selecting which action should be taken on the environment by an associated sensory-motor apparatus (2). For a given candidate action, a motivational module (11) calculates a reward associated with the corresponding values that would be taken by one or more motivational variables that are independent of the nature of the associated sensory-motor apparatus. Preferred motivational variables are dependent on the developmental history of the device (1), and include variables quantifying the predictability, familiarity and stability of sensory-motor variables serving as the inputs to the device (1). The sensory-motor variables represent the status of the external environment and/or the internal resources (3) of the sensory-motor apparatus (2) whose behaviour is controlled by the self-developing device (1).
    Type: Application
    Filed: June 4, 2004
    Publication date: January 27, 2005
    Inventors: Frederic Kaplan, Pierre-Yves Oudeyer
  • Patent number: 6760645
    Abstract: A clicker-training technique developed for animal training is adapted for training robots, notably autonomous animal-like robots. In this robot-training method, a behaviour (for example, (DIG)) is broken down into smaller achievable responses ((SIT)-(HELLO)-(DIG)) that will eventually lead to the desired final behaviour. The robot is guided progressively to the correct behaviour through the use, normally the repeated use, of a secondary reinforcer. When the correct behaviour has been achieved, a primary reinforcer is applied so that the desired behaviour can be “captured”. This method can be used for training a robot to perform, on command, rare behaviours or a sequence of behaviours (typically actions). This method can also be used to ensure that a robot is focusing its attention upon a desired object.
    Type: Grant
    Filed: April 29, 2002
    Date of Patent: July 6, 2004
    Assignee: Sony France S.A.
    Inventors: Frédéric Kaplan, Pierre-Yves Oudeyer
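
The shaping loop can be sketched as below: each correct intermediate response is marked with the secondary reinforcer, which makes that behaviour more likely, and the primary reinforcer captures the final behaviour. The weighted-choice robot and the doubling update in click() are toy assumptions standing in for a real robot's learning rule.

```python
import random

class ClickerRobot:
    """Toy learner: each 'click' makes the just-produced behaviour more
    likely; the primary reinforcer ('treat') captures a behaviour."""

    def __init__(self, repertoire):
        self.weights = {b: 1.0 for b in repertoire}
        self.captured = None

    def act(self):
        total = sum(self.weights.values())
        r, acc = random.uniform(0.0, total), 0.0
        for behaviour, w in self.weights.items():
            acc += w
            if r <= acc:
                return behaviour

    def click(self, behaviour):   # secondary reinforcer
        self.weights[behaviour] *= 2.0

    def treat(self, behaviour):   # primary reinforcer
        self.captured = behaviour

robot = ClickerRobot(["SIT", "HELLO", "DIG", "BARK"])
for target in ["SIT", "HELLO", "DIG"]:  # successive achievable responses
    while robot.act() != target:
        pass                            # wait for the correct response
    robot.click(target)                 # reinforce it when it occurs
robot.treat("DIG")                      # capture the final behaviour
print("captured:", robot.captured)
```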
  • Publication number: 20040019484
    Abstract: Emotion is added to synthesized speech while the prosodic features of the language are maintained. In a speech synthesis device 200, a language processor 201 generates a string of pronunciation marks from the text, and a prosodic data generating unit 202 creates prosodic data expressing parameters of phonemes such as time duration, pitch and sound volume, based on the string of pronunciation marks. A constraint information generating unit 203 is fed the prosodic data and the string of pronunciation marks, generates constraint information that limits changes in the parameters, and adds this constraint information to the prosodic data. An emotion filter 204, fed the prosodic data to which the constraint information has been added, changes the parameters of the prosodic data, within the constraints, responsive to the emotional state information supplied to it.
    Type: Application
    Filed: March 13, 2003
    Publication date: January 29, 2004
    Inventors: Erika Kobayashi, Toshiyuki Kumakura, Makoto Akabane, Kenichiro Kobayashi, Nobuhide Yamazaki, Tomoaki Nitta, Pierre Yves Oudeyer
  • Publication number: 20030093280
    Abstract: An emotion conveyed on a sound is synthesised by selectively modifying elementary sound portions (S) thereof, prior to delivering the sound, through an operator application step (S10, S16, S20) in which at least one operator (OP, OD, OI) is selectively applied to the elementary sound portions (S) to impose a specific modification of a characteristic, such as pitch and/or duration, in accordance with an emotion to be synthesised.
    Type: Application
    Filed: July 11, 2002
    Publication date: May 15, 2003
    Inventor: Pierre-Yves Oudeyer
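
A minimal reading of the operator idea, assuming OP and OD denote pitch and duration operators (that mapping is a guess from context): each operator imposes one specific modification on an elementary sound portion, and an emotion becomes a recipe of operators applied selectively to the portions.

```python
def op_pitch(portion, factor):
    """Pitch operator: scale the portion's fundamental frequency."""
    portion = dict(portion)
    portion["pitch_hz"] *= factor
    return portion

def op_duration(portion, factor):
    """Duration operator: stretch or compress the portion in time."""
    portion = dict(portion)
    portion["duration_ms"] *= factor
    return portion

# Hypothetical recipes: which operators an emotion applies, and how hard.
EMOTION_OPERATORS = {
    "sad":   [(op_pitch, 0.85), (op_duration, 1.30)],
    "happy": [(op_pitch, 1.20), (op_duration, 0.85)],
}

def synthesise_emotion(portions, emotion):
    """Operator application step: run the emotion's operators over each
    elementary sound portion before the sound is delivered."""
    out = []
    for portion in portions:
        for op, factor in EMOTION_OPERATORS[emotion]:
            portion = op(portion, factor)
        out.append(portion)
    return out

sound = [{"pitch_hz": 220.0, "duration_ms": 150.0},
         {"pitch_hz": 240.0, "duration_ms": 120.0}]
print(synthesise_emotion(sound, "sad"))
```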
  • Publication number: 20030055654
    Abstract: The emotion recognition is performed by extracting a set comprising at least one feature derived from a voice signal, and processing the set of extracted feature(s) to detect an emotion therefrom (this application was granted as patent 7451079, listed above).
    Type: Application
    Filed: July 12, 2002
    Publication date: March 20, 2003
    Inventor: Pierre Yves Oudeyer
  • Publication number: 20030040911
    Abstract: The invention controls the operation of an emotion synthesising device (12), notably of the type where the emotion is conveyed on a sound, having at least one input parameter (Pi) whose value (Ei) is used to set a type of emotion to be conveyed, by making at least one parameter a variable parameter (VPi) over a determined control range, thereby conferring variability in the amount of the type of emotion to be conveyed.
    Type: Application
    Filed: August 12, 2002
    Publication date: February 27, 2003
    Inventor: Pierre Yves Oudeyer
  • Publication number: 20020198717
    Abstract: A robot apparatus (1) is capable of audibly expressing an emotion in a manner similar to a living animal. The robot apparatus (1) utters a sentence by means of voice synthesis by performing a process including the steps of: an emotional state discrimination step (S1) for discriminating an emotional state of an emotion model (73); a sentence output step (S2) for outputting a sentence representing the content to be uttered in the form of a voice; a parameter control step (S3) for controlling a parameter for use in voice synthesis, depending upon the emotional state discriminated in the emotional state discrimination step (S1); and a voice synthesis step (S4) for inputting, to a voice synthesis unit, the sentence output in the sentence output step (S2) and synthesizing a voice in accordance with the controlled parameter.
    Type: Application
    Filed: May 9, 2002
    Publication date: December 26, 2002
    Inventors: Pierre Yves Oudeyer, Kotaro Sabe
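
The four steps map onto a short pipeline like the sketch below. The parameter table, the canned sentences, and the print-based synthesize() stand-in are hypothetical; the publication does not fix them.

```python
# Hypothetical table of voice-synthesis parameters per emotional state (S3).
VOICE_PARAMS = {
    "joy":     {"pitch_shift": +30, "rate": 1.15, "volume": 1.2},
    "sadness": {"pitch_shift": -20, "rate": 0.85, "volume": 0.8},
    "neutral": {"pitch_shift":   0, "rate": 1.00, "volume": 1.0},
}

def discriminate_state(emotion_model):
    """S1: read the dominant emotion out of the emotion model."""
    return max(emotion_model, key=emotion_model.get)

def output_sentence(state):
    """S2: choose the sentence to utter (canned per state in this sketch)."""
    sentences = {"joy": "I found the ball!", "sadness": "My battery is low."}
    return sentences.get(state, "Hello.")

def synthesize(sentence, params):
    """S4: stand-in for the call into a real voice synthesis unit."""
    print(f"[TTS pitch{params['pitch_shift']:+d} rate={params['rate']} "
          f"vol={params['volume']}] {sentence}")

emotion_model = {"joy": 0.7, "sadness": 0.2, "neutral": 0.1}
state = discriminate_state(emotion_model)       # S1
sentence = output_sentence(state)               # S2
synthesize(sentence, VOICE_PARAMS[state])       # S3 parameters fed into S4
```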
  • Publication number: 20020183895
    Abstract: A clicker-training technique developed for animal training is adapted for training robots, notably autonomous animal-like robots. In this robot-training method, a behaviour (for example, [DIG]) is broken down into smaller achievable responses ([SIT]—[HELLO]—[DIG]) that will eventually lead to the desired final behaviour. The robot is guided progressively to the correct behaviour through the use, normally the repeated use, of a secondary reinforcer. When the correct behaviour has been achieved, a primary reinforcer is applied so that the desired behaviour can be “captured”. This method can be used for training a robot to perform, on command, rare behaviours or a sequence of behaviours (typically actions). This method can also be used to ensure that a robot is focusing its attention upon a desired object.
    Type: Application
    Filed: April 29, 2002
    Publication date: December 5, 2002
    Inventors: Frederic Kaplan, Pierre-Yves Oudeyer