Patents by Inventor Erika Kobayashi

Erika Kobayashi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7984076
    Abstract: The text format of input data is checked, and is converted into a system-manipulated format. It is further determined if the input data is in an HTML or e-mail format using tags, heading information, and the like. The converted data is divided into blocks in a simple manner such that elements in the blocks can be checked based on repetition of predetermined character patterns. Each block section is tagged with a tag indicating a block. The data divided into blocks is parsed based on tags, character patterns, etc., and is structured. A table in text is also parsed, and is segmented into cells. Finally, tree-structured data having a hierarchical structure is generated based on the sentence-structured data. A sentence-extraction template paired with the tree-structured data is used to extract sentences.
    Type: Grant
    Filed: December 28, 2007
    Date of Patent: July 19, 2011
    Assignee: Sony Corporation
    Inventors: Kenichiro Kobayashi, Makoto Akabane, Tomoaki Nitta, Nobuhide Yamazaki, Erika Kobayashi
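
The block-division step described in this abstract can be illustrated with a short sketch. This is not the patented method: the separator pattern, the divide_into_blocks helper, and the <block> tag below are all invented to show how repetition of a predetermined character pattern can drive block division and tagging.

```python
import re

# Invented separator pattern: a run of four or more repeated '-', '=',
# '*' or '_' characters is treated as a block boundary.
SEPARATOR = re.compile(r"^\s*([-=*_])\1{3,}\s*$")

def divide_into_blocks(text: str) -> list[str]:
    """Split text into blocks at blank lines or separator runs."""
    blocks, current = [], []
    for line in text.splitlines():
        if not line.strip() or SEPARATOR.match(line):
            if current:
                blocks.append("\n".join(current))
                current = []
        else:
            current.append(line)
    if current:
        blocks.append("\n".join(current))
    return blocks

def tag_blocks(blocks: list[str]) -> str:
    """Tag each block section with a tag indicating a block."""
    return "\n".join(f"<block>\n{b}\n</block>" for b in blocks)

print(tag_blocks(divide_into_blocks("Title\n=====\nFirst paragraph.\n\nSecond one.")))
```
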
  • Patent number: 7912696
    Abstract: A natural language processing apparatus includes an input section for inputting natural language, a representation converting section for converting the representation of the natural language, a display section for displaying, for confirmation, the sentence converted at the representation converting section, a machine translation section for carrying out machine translation of the confirmed sentence, and a control section for controlling these respective sections, thereby providing natural language processing in which the user's confirmation operations are reduced.
    Type: Grant
    Filed: August 31, 1999
    Date of Patent: March 22, 2011
    Assignee: Sony Corporation
    Inventors: Yasuharu Asano, Atsuo Hiroe, Masato Shimakawa, Tetsuya Kagami, Erika Kobayashi
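
The convert-confirm-translate flow can be sketched in a few lines. The helpers normalize_representation and machine_translate below are hypothetical stand-ins for the representation converting and machine translation sections; the single confirmation prompt is what reduces the user's confirmation operations.

```python
def normalize_representation(sentence: str) -> str:
    # Stand-in conversion: trim whitespace and normalize casing.
    return sentence.strip().capitalize()

def machine_translate(sentence: str) -> str:
    # Placeholder for a real machine translation engine.
    return f"[translated] {sentence}"

def process(sentence: str) -> str | None:
    converted = normalize_representation(sentence)
    # One confirmation of the converted sentence, rather than a prompt
    # at every processing step.
    if input(f'Translate "{converted}"? [y/n] ').lower().startswith("y"):
        return machine_translate(converted)
    return None
```
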
  • Publication number: 20080256120
    Abstract: The text format of input data is checked, and is converted into a system-manipulated format. It is further determined if the input data is in an HTML or e-mail format using tags, heading information, and the like. The converted data is divided into blocks in a simple manner such that elements in the blocks can be checked based on repetition of predetermined character patterns. Each block section is tagged with a tag indicating a block. The data divided into blocks is parsed based on tags, character patterns, etc., and is structured. A table in text is also parsed, and is segmented into cells. Finally, tree-structured data having a hierarchical structure is generated based on the sentence-structured data. A sentence-extraction template paired with the tree-structured data is used to extract sentences.
    Type: Application
    Filed: December 28, 2007
    Publication date: October 16, 2008
    Applicant: Sony Corporation
    Inventors: Kenichiro Kobayashi, Makoto Akabane, Tomoaki Nitta, Nobuhide Yamazaki, Erika Kobayashi
  • Patent number: 7412390
    Abstract: Emotion is added to the synthesized speech while the prosodic features of the language are maintained. In a speech synthesis device 200, a language processor 201 generates a string of pronunciation marks from the text, and a prosodic data generating unit 202 creates prosodic data, expressing parameters of phonemes such as time duration, pitch, and sound volume, based on the string of pronunciation marks. A constraint information generating unit 203 is fed with the prosodic data and with the string of pronunciation marks, generates constraint information that limits changes in the parameters, and adds the generated constraint information to the prosodic data. An emotion filter 204, fed with the prosodic data to which the constraint information has been added, changes the parameters of the prosodic data, within the constraint, in response to the feeling state information imparted to it.
    Type: Grant
    Filed: March 13, 2003
    Date of Patent: August 12, 2008
    Assignees: Sony France S.A., Sony Corporation
    Inventors: Erika Kobayashi, Toshiyuki Kumakura, Makoto Akabane, Kenichiro Kobayashi, Nobuhide Yamazaki, Tomoaki Nitta, Pierre Yves Oudeyer
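
How constraint information can bound an emotion filter's changes is sketched below. The emotion-to-scale table and the single min/max scale constraint are invented placeholders, not the patent's actual parameterization.

```python
from dataclasses import dataclass

@dataclass
class Phoneme:
    duration_ms: float
    pitch_hz: float
    volume: float

@dataclass
class Constraint:
    min_scale: float  # lower bound on relative change to a parameter
    max_scale: float  # upper bound on relative change to a parameter

# Invented emotion-to-scaling table for illustration only.
EMOTION_SCALES = {"happy": 1.15, "sad": 0.85, "neutral": 1.0}

def emotion_filter(phonemes: list[Phoneme], c: Constraint, emotion: str) -> list[Phoneme]:
    # Clamp the requested change within the constraint so the language's
    # prosodic features are preserved.
    scale = max(c.min_scale, min(c.max_scale, EMOTION_SCALES.get(emotion, 1.0)))
    return [Phoneme(p.duration_ms * scale, p.pitch_hz * scale, p.volume) for p in phonemes]
```
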
  • Patent number: 7379871
    Abstract: Various sensors detect conditions outside a robot and operations applied to the robot, and output the detection results to a robot-motion-system control section. The robot-motion-system control section determines a behavior state according to a behavior model. A robot-thinking-system control section determines an emotion state according to an emotion model. A speech-synthesizing-control-information selection section determines a field in a speech-synthesizing-control-information table according to the behavior state and the emotion state. A language processing section grammatically analyzes a text for speech synthesis sent from the robot-thinking-system control section, converts a predetermined portion of it according to the speech-synthesizing control information, and outputs the result to a rule-based speech synthesizing section. The rule-based speech synthesizing section synthesizes a speech signal corresponding to the text.
    Type: Grant
    Filed: December 27, 2000
    Date of Patent: May 27, 2008
    Assignee: Sony Corporation
    Inventors: Masato Shimakawa, Nobuhide Yamazaki, Erika Kobayashi, Makoto Akabane, Kenichiro Kobayashi, Keiichi Yamada, Tomoaki Nitta
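
The lookup of speech-synthesizing control information by behavior state and emotion state might look like the following; the states, table contents, and control fields are invented for illustration.

```python
# Invented table of speech-synthesizing control information, indexed by
# (behavior state, emotion state).
CONTROL_TABLE = {
    ("walking", "joy"):   {"rate": 1.2, "pitch": 1.1, "suffix": "!"},
    ("resting", "sad"):   {"rate": 0.8, "pitch": 0.9, "suffix": "..."},
    ("walking", "anger"): {"rate": 1.3, "pitch": 1.2, "suffix": "!!"},
}
DEFAULT_CONTROL = {"rate": 1.0, "pitch": 1.0, "suffix": "."}

def prepare_utterance(text: str, behavior: str, emotion: str):
    control = CONTROL_TABLE.get((behavior, emotion), DEFAULT_CONTROL)
    # Convert a predetermined portion of the text (here, its ending)
    # before passing it to the rule-based synthesizer.
    return text.rstrip(".!") + control["suffix"], control

print(prepare_utterance("I found a ball.", "walking", "joy"))
```
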
  • Patent number: 7315867
    Abstract: The text format of input data is checked, and is converted into a system-manipulated format. It is further determined if the input data is in an HTML or e-mail format using tags, heading information, and the like. The converted data is divided into blocks in a simple manner such that elements in the blocks can be checked based on repetition of predetermined character patterns. Each block section is tagged with a tag indicating a block. The data divided into blocks is parsed based on tags, character patterns, etc., and is structured. A table in text is also parsed, and is segmented into cells. Finally, tree-structured data having a hierarchical structure is generated based on the sentence-structured data. A sentence-extraction template paired with the tree-structured data is used to extract sentences.
    Type: Grant
    Filed: July 20, 2005
    Date of Patent: January 1, 2008
    Assignee: Sony Corporation
    Inventors: Kenichiro Kobayashi, Makoto Akabane, Tomoaki Nitta, Nobuhide Yamazaki, Erika Kobayashi
  • Patent number: 7222076
    Abstract: The present invention relates to a voice output apparatus capable of, in response to a particular stimulus, stopping the output of a voice and outputting a reaction, so that the voice is output in a natural manner. A rule-based synthesizer 24 produces a synthesized voice and outputs it. For example, suppose a synthesized voice “Where is an exit?” has been produced and output has proceeded as far as “Where is an e”. If a user then taps the robot, a reaction generator 30 determines, by referring to a reaction database 31, that a reaction voice “Ouch!” should be output in response to being tapped. The reaction generator 30 then controls an output controller 27 so as to stop outputting the synthesized voice “Where is an exit?” and output the reaction voice “Ouch!” instead.
    Type: Grant
    Filed: March 22, 2002
    Date of Patent: May 22, 2007
    Assignee: Sony Corporation
    Inventors: Erika Kobayashi, Makoto Akabane, Tomoaki Nitta, Hideki Kishi, Rika Horinaka, Masashi Takeda
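
The interrupt-and-react behavior can be modeled with an event flag. A minimal sketch, assuming a REACTIONS dictionary in place of the reaction database 31 and a timer in place of the tap sensor:

```python
import threading
import time

REACTIONS = {"tap": "Ouch!"}   # stand-in for the reaction database
stimulus = threading.Event()   # set by a sensor callback when tapped
stimulus_kind = "tap"

def speak(text: str) -> None:
    for ch in text:
        if stimulus.is_set():
            # Stop the current utterance and output the reaction instead.
            print("\n" + REACTIONS.get(stimulus_kind, "!"))
            stimulus.clear()
            return
        print(ch, end="", flush=True)
        time.sleep(0.05)  # simulate the pacing of audio output
    print()

threading.Timer(0.5, stimulus.set).start()  # simulate a tap mid-utterance
speak("Where is an exit?")
```
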
  • Patent number: 7111011
    Abstract: The text format of input data is checked, and is converted into a system-manipulated format. It is further determined if the input data is in an HTML or e-mail format using tags, heading information, and the like. The converted data is divided into blocks in a simple manner such that elements in the blocks can be checked based on repetition of predetermined character patterns. Each block section is tagged with a tag indicating a block. The data divided into blocks is parsed based on tags, character patterns, etc., and is structured. A table in text is also parsed, and is segmented into cells. Finally, tree-structured data having a hierarchical structure is generated based on the sentence-structured data. A sentence-extraction template paired with the tree-structured data is used to extract sentences.
    Type: Grant
    Filed: May 10, 2002
    Date of Patent: September 19, 2006
    Assignee: Sony Corporation
    Inventors: Kenichiro Kobayashi, Makoto Akabane, Tomoaki Nitta, Nobuhide Yamazaki, Erika Kobayashi
  • Patent number: 7080015
    Abstract: In a synchronization control apparatus, a voice-language-information generating section generates the voice language information of a word which a robot utters. A voice synthesizing section calculates phoneme information and a phoneme continuation duration according to the voice language information, and also generates synthesized-voice data according to an adjusted phoneme continuation duration. An articulation-operation generating section calculates an articulation-operation period according to the phoneme information. A voice-operation adjusting section adjusts the phoneme continuation duration and the articulation-operation period. An articulation-operation executing section operates an organ of articulation according to the adjusted articulation-operation period.
    Type: Grant
    Filed: August 26, 2004
    Date of Patent: July 18, 2006
    Assignee: Sony Corporation
    Inventors: Keiichi Yamada, Kenichiro Kobayashi, Tomoaki Nitta, Makoto Akabane, Masato Shimakawa, Nobuhide Yamazaki, Erika Kobayashi
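
The voice-operation adjustment can be illustrated by stretching phoneme continuation durations to cover the articulation-operation period. A minimal sketch under that one-sided assumption; the apparatus itself adjusts both the durations and the operation period.

```python
def adjust_durations(phoneme_ms: list[float], articulation_ms: float) -> list[float]:
    """Stretch phoneme continuation durations so the utterance lasts at
    least as long as the articulation-operation period."""
    total = sum(phoneme_ms)
    if total >= articulation_ms:
        return phoneme_ms  # speech already covers the motion period
    scale = articulation_ms / total
    return [d * scale for d in phoneme_ms]

# Three phonemes totalling 300 ms stretched to a 450 ms mouth motion.
print(adjust_durations([100.0, 120.0, 80.0], 450.0))
```
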
  • Publication number: 20050251737
    Abstract: The text format of input data is checked, and is converted into a system-manipulated format. It is further determined if the input data is in an HTML or e-mail format using tags, heading information, and the like. The converted data is divided into blocks in a simple manner such that elements in the blocks can be checked based on repetition of predetermined character patterns. Each block section is tagged with a tag indicating a block. The data divided into blocks is parsed based on tags, character patterns, etc., and is structured. A table in text is also parsed, and is segmented into cells. Finally, tree-structured data having a hierarchical structure is generated based on the sentence-structured data. A sentence-extraction template paired with the tree-structured data is used to extract sentences.
    Type: Application
    Filed: July 20, 2005
    Publication date: November 10, 2005
    Applicant: Sony Corporation
    Inventors: Kenichiro Kobayashi, Makoto Akabane, Tomoaki Nitta, Nobuhide Yamazaki, Erika Kobayashi
  • Patent number: 6865535
    Abstract: In a synchronization control apparatus, a voice-language-information generating section generates the voice language information of a word which a robot utters. A voice synthesizing section calculates phoneme information and a phoneme continuation duration according to the voice language information, and also generates synthesized-voice data according to an adjusted phoneme continuation duration. An articulation-operation generating section calculates an articulation-operation period according to the phoneme information. A voice-operation adjusting section adjusts the phoneme continuation duration and the articulation-operation period. An articulation-operation executing section operates an organ of articulation according to the adjusted articulation-operation period.
    Type: Grant
    Filed: December 27, 2000
    Date of Patent: March 8, 2005
    Assignee: Sony Corporation
    Inventors: Keiichi Yamada, Kenichiro Kobayashi, Tomoaki Nitta, Makoto Akabane, Masato Shimakawa, Nobuhide Yamazaki, Erika Kobayashi
  • Publication number: 20050027540
    Abstract: In a synchronization control apparatus, a voice-language-information generating section generates the voice language information of a word which a robot utters. A voice synthesizing section calculates phoneme information and a phoneme continuation duration according to the voice language information, and also generates synthesized-voice data according to an adjusted phoneme continuation duration. An articulation-operation generating section calculates an articulation-operation period according to the phoneme information. A voice-operation adjusting section adjusts the phoneme continuation duration and the articulation-operation period. An articulation-operation executing section operates an organ of articulation according to the adjusted articulation-operation period.
    Type: Application
    Filed: August 26, 2004
    Publication date: February 3, 2005
    Inventors: Keiichi Yamada, Kenichiro Kobayashi, Tomoaki Nitta, Makoto Akabane, Masato Shimakawa, Nobuhide Yamazaki, Erika Kobayashi
  • Patent number: 6845247
    Abstract: When a seal-like recording medium with a built-in memory that stores information specifying a communicatee is placed in close proximity to a medium read/write unit of a communication terminal apparatus and a button is operated, the information stored in the seal-like medium (for example, an operator identifier or a telephone number, depending on the type of medium) is read out.
    Type: Grant
    Filed: December 8, 1999
    Date of Patent: January 18, 2005
    Assignee: Sony Corporation
    Inventors: Takashi Sasai, Hiroaki Ogawa, Shuji Yonekura, Tomoaki Nitta, Erika Kobayashi
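
The read-on-button-press flow might be modeled as below; the record fields and the sample value are hypothetical, standing in for whatever the medium's built-in memory actually stores.

```python
from dataclasses import dataclass

@dataclass
class SealRecord:
    medium_type: str   # hypothetical: e.g. "operator" or "telephone-directory"
    communicatee: str  # hypothetical: an operator ID or a telephone number

def on_button_press(medium_present: bool, record: SealRecord) -> str | None:
    if not medium_present:
        return None  # no medium in proximity of the read/write unit
    return record.communicatee  # read the stored information out

print(on_button_press(True, SealRecord("telephone-directory", "+81-3-0000-0000")))
```
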
  • Publication number: 20040054519
    Abstract: The present invention relates to a language processing apparatus capable of generating an effective synthesized sound by performing language processing taking into account an onomatopoeia or a mimetic word. An effective synthesized voice is produced from a given text such that the synthesized voice includes a “sound” representing the meaning of an onomatopoeia or a mimetic word included in the given text. An onomatopoeic/mimetic word analyzer 21 extracts the onomatopoeia or the mimetic word from the text, and an onomatopoeic/mimetic word processing unit 27 produces acoustic data of a sound effect corresponding to the extracted onomatopoeia or mimetic word. A voice mixer 26 superimposes the acoustic data produced by the onomatopoeic/mimetic word processing unit 27 on the whole or a part of the synthesized voice data, corresponding to the text, produced by a rule-based synthesizer 24. The present invention may be applied to a robot having a voice synthesizer.
    Type: Application
    Filed: May 23, 2003
    Publication date: March 18, 2004
    Inventors: Erika Kobayashi, Makoto Akabane, Tomoaki Nitta, Hideki Kishi, Rika Horinaka, Masashi Takeda
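
The onomatopoeia extraction and alignment step might look like the following; the word list, sound-effect file names, and word-index alignment scheme are invented for illustration.

```python
# Invented onomatopoeia-to-sound-effect table for illustration only.
ONOMATOPOEIA = {"meow": "cat.wav", "bang": "impact.wav", "drip": "water.wav"}

def plan_effects(text: str) -> list[tuple[int, str]]:
    plan = []
    for i, word in enumerate(text.lower().split()):
        effect = ONOMATOPOEIA.get(word.strip(".,!?"))
        if effect:
            # Record the word index so a mixer could superimpose the
            # effect on that span of the synthesized voice.
            plan.append((i, effect))
    return plan

print(plan_effects("The cat said meow and ran."))  # [(3, 'cat.wav')]
```
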
  • Publication number: 20040019484
    Abstract: Emotion is added to the synthesized speech while the prosodic features of the language are maintained. In a speech synthesis device 200, a language processor 201 generates a string of pronunciation marks from the text, and a prosodic data generating unit 202 creates prosodic data, expressing parameters of phonemes such as time duration, pitch, and sound volume, based on the string of pronunciation marks. A constraint information generating unit 203 is fed with the prosodic data and with the string of pronunciation marks, generates constraint information that limits changes in the parameters, and adds the generated constraint information to the prosodic data. An emotion filter 204, fed with the prosodic data to which the constraint information has been added, changes the parameters of the prosodic data, within the constraint, in response to the feeling state information imparted to it.
    Type: Application
    Filed: March 13, 2003
    Publication date: January 29, 2004
    Inventors: Erika Kobayashi, Toshiyuki Kumakura, Makoto Akabane, Kenichiro Kobayashi, Nobuhide Yamazaki, Tomoaki Nitta, Pierre Yves Oudeyer
  • Publication number: 20030171850
    Abstract: The present invention relates to a voice output apparatus capable of, in response to a particular stimulus, stopping the output of a voice and outputting a reaction, so that the voice is output in a natural manner. A rule-based synthesizer 24 produces a synthesized voice and outputs it. For example, suppose a synthesized voice “Where is an exit?” has been produced and output has proceeded as far as “Where is an e”. If a user then taps the robot, a reaction generator 30 determines, by referring to a reaction database 31, that a reaction voice “Ouch!” should be output in response to being tapped. The reaction generator 30 then controls an output controller 27 so as to stop outputting the synthesized voice “Where is an exit?” and output the reaction voice “Ouch!” instead.
    Type: Application
    Filed: May 9, 2003
    Publication date: September 11, 2003
    Inventors: Erika Kobayashi, Makoto Akabane, Tomoaki Nitta, Hideki Kishi, Rika Horinaka, Masashi Takeda
  • Publication number: 20030007397
    Abstract: The text format of input data is checked, and is converted into a system-manipulated format. It is further determined if the input data is in an HTML or e-mail format using tags, heading information, and the like. The converted data is divided into blocks in a simple manner such that elements in the blocks can be checked based on repetition of predetermined character patterns. Each block section is tagged with a tag indicating a block. The data divided into blocks is parsed based on tags, character patterns, etc., and is structured. A table in text is also parsed, and is segmented into cells. Finally, tree-structured data having a hierarchical structure is generated based on the sentence-structured data. A sentence-extraction template paired with the tree-structured data is used to extract sentences.
    Type: Application
    Filed: May 10, 2002
    Publication date: January 9, 2003
    Inventors: Kenichiro Kobayashi, Makoto Akabane, Tomoaki Nitta, Nobuhide Yamazaki, Erika Kobayashi
  • Publication number: 20010021907
    Abstract: Various sensors detect conditions outside a robot and operations applied to the robot, and output the detection results to a robot-motion-system control section. The robot-motion-system control section determines a behavior state according to a behavior model. A robot-thinking-system control section determines an emotion state according to an emotion model. A speech-synthesizing-control-information selection section determines a field in a speech-synthesizing-control-information table according to the behavior state and the emotion state. A language processing section grammatically analyzes a text for speech synthesis sent from the robot-thinking-system control section, converts a predetermined portion of it according to the speech-synthesizing control information, and outputs the result to a rule-based speech synthesizing section. The rule-based speech synthesizing section synthesizes a speech signal corresponding to the text.
    Type: Application
    Filed: December 27, 2000
    Publication date: September 13, 2001
    Inventors: Masato Shimakawa, Nobuhide Yamazaki, Erika Kobayashi, Makoto Akabane, Kenichiro Kobayashi, Keiichi Yamada, Tomoaki Nitta
  • Publication number: 20010007096
    Abstract: In a synchronization control apparatus, a voice-language-information generating section generates the voice language information of a word which a robot utters. A voice synthesizing section calculates phoneme information and a phoneme continuation duration according to the voice language information, and also generates synthesized-voice data according to an adjusted phoneme continuation duration. An articulation-operation generating section calculates an articulation-operation period according to the phoneme information. A voice-operation adjusting section adjusts the phoneme continuation duration and the articulation-operation period. An articulation-operation executing section operates an organ of articulation according to the adjusted articulation-operation period.
    Type: Application
    Filed: December 27, 2000
    Publication date: July 5, 2001
    Inventors: Keiichi Yamada, Kenichiro Kobayashi, Tomoaki Nitta, Makoto Akabane, Masato Shimakawa, Nobuhide Yamazaki, Erika Kobayashi