Patents by Inventor Erika Kobayashi
Erika Kobayashi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 7984076
Abstract: The text format of input data is checked, and is converted into a system-manipulated format. It is further determined if the input data is in an HTML or e-mail format using tags, heading information, and the like. The converted data is divided into blocks in a simple manner such that elements in the blocks can be checked based on repetition of predetermined character patterns. Each block section is tagged with a tag indicating a block. The data divided into blocks is parsed based on tags, character patterns, etc., and is structured. A table in text is also parsed, and is segmented into cells. Finally, tree-structured data having a hierarchical structure is generated based on the sentence-structured data. A sentence-extraction template paired with the tree-structured data is used to extract sentences.
Type: Grant
Filed: December 28, 2007
Date of Patent: July 19, 2011
Assignee: Sony Corporation
Inventors: Kenichiro Kobayashi, Makoto Akabane, Tomoaki Nitta, Nobuhide Yamazaki, Erika Kobayashi
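The block-division and tree-structuring steps in this abstract can be illustrated with a minimal sketch. Everything here is hypothetical: the function names, the blank-line block splitter, and the heading heuristic (a line ending in ":") stand in for the patent's character-pattern rules, which are far richer.

```python
import re

def split_into_blocks(text):
    """Split plain text into blocks at runs of blank lines, mimicking the
    simple block division described in the abstract (hypothetical rule)."""
    return [b.strip() for b in re.split(r"\n\s*\n", text) if b.strip()]

def build_tree(blocks):
    """Arrange blocks into a shallow tree: a block ending with ':' is treated
    as a heading that owns the following blocks (illustrative heuristic)."""
    root = {"tag": "root", "children": []}
    current = root
    for block in blocks:
        if block.endswith(":"):
            node = {"tag": "heading", "text": block, "children": []}
            root["children"].append(node)
            current = node
        else:
            current["children"].append({"tag": "block", "text": block})
    return root

text = "Schedule:\n\nMeet at noon.\n\nBring the draft."
tree = build_tree(split_into_blocks(text))
```

A sentence-extraction template, in this picture, would be a second structure walked in parallel with the tree to pick out the blocks of interest.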
-
Patent number: 7912696
Abstract: A natural language processing apparatus includes an input section for inputting natural language, a representation converting section for converting the representation of the natural language, a display section for displaying, for confirmation, sentences converted by the representation converting section, a machine translation section for carrying out machine translation of the confirmed sentences, and a control section for controlling these respective sections, thereby providing natural language processing in which the user's confirmation operations are reduced.
Type: Grant
Filed: August 31, 1999
Date of Patent: March 22, 2011
Assignee: Sony Corporation
Inventors: Yasuharu Asano, Atsuo Hiroe, Masato Shimakawa, Tetsuya Kagami, Erika Kobayashi
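The convert-confirm-translate pipeline described above can be sketched in a few lines. This is a toy illustration, not the patented apparatus: the normalization rules and the `[translated]` stand-in are invented for the example, and a real system would call an actual machine translation engine after confirmation.

```python
def convert_representation(sentence):
    """Normalize colloquial forms before translation (toy rules, hypothetical)."""
    rules = {"wanna": "want to", "gonna": "going to"}
    return " ".join(rules.get(w, w) for w in sentence.split())

def translate_after_confirmation(sentence, confirm):
    """Show the converted sentence for confirmation and translate only if the
    user accepts it, so misconverted input is caught before translation."""
    converted = convert_representation(sentence)
    if not confirm(converted):
        return None
    return f"[translated] {converted}"  # stand-in for a real MT engine

# A confirm callback that always accepts, for demonstration.
result = translate_after_confirmation("I wanna go", lambda s: True)
```

The point of the design is that the user confirms the *converted* sentence once, rather than correcting a bad translation afterward.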
-
Publication number: 20080256120
Abstract: The text format of input data is checked, and is converted into a system-manipulated format. It is further determined if the input data is in an HTML or e-mail format using tags, heading information, and the like. The converted data is divided into blocks in a simple manner such that elements in the blocks can be checked based on repetition of predetermined character patterns. Each block section is tagged with a tag indicating a block. The data divided into blocks is parsed based on tags, character patterns, etc., and is structured. A table in text is also parsed, and is segmented into cells. Finally, tree-structured data having a hierarchical structure is generated based on the sentence-structured data. A sentence-extraction template paired with the tree-structured data is used to extract sentences.
Type: Application
Filed: December 28, 2007
Publication date: October 16, 2008
Applicant: Sony Corporation
Inventors: Kenichiro Kobayashi, Makoto Akabane, Tomoaki Nitta, Nobuhide Yamazaki, Erika Kobayashi
-
Patent number: 7412390
Abstract: Emotion is added to synthesized speech while the prosodic features of the language are maintained. In a speech synthesis device 200, a language processor 201 generates a string of pronunciation marks from the text, and a prosodic data generating unit 202 creates prosodic data expressing parameters of phonemes such as time duration, pitch, and sound volume, based on the string of pronunciation marks. A constraint information generating unit 203 is fed with the prosodic data and the string of pronunciation marks to generate constraint information that limits changes in the parameters, and adds the generated constraint information to the prosodic data. An emotion filter 204, fed with the prosodic data to which the constraint information has been added, changes the parameters of the prosodic data, within the constraint, in response to the feeling state information supplied to it.
Type: Grant
Filed: March 13, 2003
Date of Patent: August 12, 2008
Assignees: Sony France S.A., Sony Corporation
Inventors: Erika Kobayashi, Toshiyuki Kumakura, Makoto Akabane, Kenichiro Kobayashi, Nobuhide Yamazaki, Tomoaki Nitta, Pierre Yves Oudeyer
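The core idea, emotion-driven parameter changes clamped inside constraint ranges so the language's prosodic pattern survives, can be sketched as below. The parameter names, scale factors, and constraint ranges are all invented for illustration; the patent's constraint generation is considerably more involved.

```python
def apply_emotion(prosody, emotion_scale, constraints):
    """Scale per-phoneme prosodic parameters for an emotion, but clamp each
    value inside its constraint range so prosodic features are preserved."""
    adjusted = []
    for phoneme in prosody:
        out = dict(phoneme)
        for key, scale in emotion_scale.items():
            lo, hi = constraints[key]
            out[key] = min(max(phoneme[key] * scale, lo), hi)
        adjusted.append(out)
    return adjusted

# Hypothetical values: pitch in Hz, duration in ms.
prosody = [{"pitch": 120.0, "duration": 80.0}]
emotion = {"pitch": 1.5, "duration": 0.9}           # e.g. an "excited" setting
limits = {"pitch": (60.0, 160.0), "duration": (50.0, 200.0)}
result = apply_emotion(prosody, emotion, limits)
```

Without the clamp, a strong emotion setting could push pitch past the range that keeps the utterance sounding like the source language; the constraint is what distinguishes this scheme from naive global scaling.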
-
Patent number: 7379871
Abstract: Various sensors detect conditions outside a robot and operations applied to the robot, and output the results of detection to a robot-motion-system control section. The robot-motion-system control section determines a behavior state according to a behavior model. A robot-thinking-system control section determines an emotion state according to an emotion model. A speech-synthesizing-control-information selection section determines a field in a speech-synthesizing-control-information table according to the behavior state and the emotion state. A language processing section grammatically analyzes text for speech synthesis sent from the robot-thinking-system control section, converts a predetermined portion according to the speech-synthesizing control information, and outputs the result to a rule-based speech synthesizing section. The rule-based speech synthesizing section synthesizes a speech signal corresponding to the text for speech synthesis.
Type: Grant
Filed: December 27, 2000
Date of Patent: May 27, 2008
Assignee: Sony Corporation
Inventors: Masato Shimakawa, Nobuhide Yamazaki, Erika Kobayashi, Makoto Akabane, Kenichiro Kobayashi, Keiichi Yamada, Tomoaki Nitta
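The table lookup at the heart of this abstract, selecting speech-synthesis control information from the current behavior and emotion states, is easy to sketch. The states, table entries, and control fields below are all hypothetical placeholders for whatever the robot's models actually produce.

```python
# Hypothetical speech-synthesizing-control-information table keyed by
# (behavior state, emotion state) pairs.
CONTROL_TABLE = {
    ("walking", "happy"): {"rate": 1.1, "pitch_shift": 2},
    ("walking", "angry"): {"rate": 1.3, "pitch_shift": -1},
    ("resting", "happy"): {"rate": 0.9, "pitch_shift": 1},
}
NEUTRAL = {"rate": 1.0, "pitch_shift": 0}

def select_control_info(behavior_state, emotion_state):
    """Pick the table field matching the robot's current behavior and
    emotion states, falling back to neutral settings when no field exists."""
    return CONTROL_TABLE.get((behavior_state, emotion_state), NEUTRAL)

info = select_control_info("walking", "angry")
```

The selected field then parameterizes the rule-based synthesizer, so the same text is spoken differently depending on what the robot is doing and feeling.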
-
Patent number: 7315867
Abstract: The text format of input data is checked, and is converted into a system-manipulated format. It is further determined if the input data is in an HTML or e-mail format using tags, heading information, and the like. The converted data is divided into blocks in a simple manner such that elements in the blocks can be checked based on repetition of predetermined character patterns. Each block section is tagged with a tag indicating a block. The data divided into blocks is parsed based on tags, character patterns, etc., and is structured. A table in text is also parsed, and is segmented into cells. Finally, tree-structured data having a hierarchical structure is generated based on the sentence-structured data. A sentence-extraction template paired with the tree-structured data is used to extract sentences.
Type: Grant
Filed: July 20, 2005
Date of Patent: January 1, 2008
Assignee: Sony Corporation
Inventors: Kenichiro Kobayashi, Makoto Akabane, Tomoaki Nitta, Nobuhide Yamazaki, Erika Kobayashi
-
Patent number: 7222076
Abstract: The present invention relates to a voice output apparatus capable of, in response to a particular stimulus, stopping a voice that is being output and outputting a reaction, so that the apparatus outputs voice in a natural manner. A rule-based synthesizer 24 produces a synthesized voice and outputs it. For example, when a synthesized voice “Where is an exit?” is being produced and output has proceeded as far as “Where is an e”, if a user taps the robot, a reaction generator 30 determines, by referring to a reaction database 31, that the reaction voice “Ouch!” should be output in response to the tap. The reaction generator 30 then controls an output controller 27 so as to stop outputting the synthesized voice “Where is an exit?” and output the reaction voice “Ouch!”.
Type: Grant
Filed: March 22, 2002
Date of Patent: May 22, 2007
Assignee: Sony Corporation
Inventors: Erika Kobayashi, Makoto Akabane, Tomoaki Nitta, Hideki Kishi, Rika Horinaka, Masashi Takeda
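The interrupt-and-react behavior in this abstract can be modeled as an output loop that checks for stimuli between chunks. The reaction database entries and the character-by-character output model below are simplifications invented for the sketch; a real synthesizer would stream audio frames rather than characters.

```python
# Hypothetical reaction database mapping stimulus kinds to reaction voices.
REACTION_DB = {"tap": "Ouch!", "stroke": "That tickles!"}

def speak(text, stimuli):
    """Emit the synthesized text piece by piece; if a stimulus arrives
    mid-utterance, stop the utterance and return the reaction voice instead.
    `stimuli` maps output positions to stimulus kinds (for the simulation)."""
    emitted = []
    for i, ch in enumerate(text):
        if i in stimuli:                    # stimulus arrived at position i
            return "".join(emitted), REACTION_DB.get(stimuli[i])
        emitted.append(ch)
    return "".join(emitted), None

# Reproduce the abstract's example: a tap lands after "Where is an e".
partial, reaction = speak("Where is an exit?", {13: "tap"})
```

The interrupted utterance is simply abandoned, which is what makes the behavior feel natural: the apparatus reacts immediately instead of finishing the sentence first.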
-
Patent number: 7111011
Abstract: The text format of input data is checked, and is converted into a system-manipulated format. It is further determined if the input data is in an HTML or e-mail format using tags, heading information, and the like. The converted data is divided into blocks in a simple manner such that elements in the blocks can be checked based on repetition of predetermined character patterns. Each block section is tagged with a tag indicating a block. The data divided into blocks is parsed based on tags, character patterns, etc., and is structured. A table in text is also parsed, and is segmented into cells. Finally, tree-structured data having a hierarchical structure is generated based on the sentence-structured data. A sentence-extraction template paired with the tree-structured data is used to extract sentences.
Type: Grant
Filed: May 10, 2002
Date of Patent: September 19, 2006
Assignee: Sony Corporation
Inventors: Kenichiro Kobayashi, Makoto Akabane, Tomoaki Nitta, Nobuhide Yamazaki, Erika Kobayashi
-
Patent number: 7080015
Abstract: In a synchronization control apparatus, a voice-language-information generating section generates the voice language information of a word which a robot utters. A voice synthesizing section calculates phoneme information and a phoneme continuation duration according to the voice language information, and also generates synthesized-voice data according to an adjusted phoneme continuation duration. An articulation-operation generating section calculates an articulation-operation period according to the phoneme information. A voice-operation adjusting section adjusts the phoneme continuation duration and the articulation-operation period. An articulation-operation executing section operates an organ of articulation according to the adjusted articulation-operation period.
Type: Grant
Filed: August 26, 2004
Date of Patent: July 18, 2006
Assignee: Sony Corporation
Inventors: Keiichi Yamada, Kenichiro Kobayashi, Tomoaki Nitta, Makoto Akabane, Masato Shimakawa, Nobuhide Yamazaki, Erika Kobayashi
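The adjustment between phoneme continuation durations and the articulation-operation period amounts to making speech and mouth motion end at the same time. A minimal sketch, under the assumed (and simplest possible) policy of uniform scaling, looks like this; the patent's adjusting section may well use a more nuanced per-phoneme rule.

```python
def synchronize(phoneme_durations, articulation_period):
    """Stretch or shrink phoneme continuation durations uniformly so the
    utterance ends exactly when the articulation (mouth) motion ends."""
    total = sum(phoneme_durations)
    scale = articulation_period / total
    return [d * scale for d in phoneme_durations]

# Hypothetical numbers: phoneme durations in ms summing to 400, while the
# articulation operation takes 500 ms, so speech is slowed to match.
adjusted = synchronize([100, 150, 150], 500)
```

The same function handles the opposite case (mouth motion shorter than the utterance) by shrinking durations, since `scale` then falls below 1.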
-
Publication number: 20050251737
Abstract: The text format of input data is checked, and is converted into a system-manipulated format. It is further determined if the input data is in an HTML or e-mail format using tags, heading information, and the like. The converted data is divided into blocks in a simple manner such that elements in the blocks can be checked based on repetition of predetermined character patterns. Each block section is tagged with a tag indicating a block. The data divided into blocks is parsed based on tags, character patterns, etc., and is structured. A table in text is also parsed, and is segmented into cells. Finally, tree-structured data having a hierarchical structure is generated based on the sentence-structured data. A sentence-extraction template paired with the tree-structured data is used to extract sentences.
Type: Application
Filed: July 20, 2005
Publication date: November 10, 2005
Applicant: Sony Corporation
Inventors: Kenichiro Kobayashi, Makoto Akabane, Tomoaki Nitta, Nobuhide Yamazaki, Erika Kobayashi
-
Patent number: 6865535
Abstract: In a synchronization control apparatus, a voice-language-information generating section generates the voice language information of a word which a robot utters. A voice synthesizing section calculates phoneme information and a phoneme continuation duration according to the voice language information, and also generates synthesized-voice data according to an adjusted phoneme continuation duration. An articulation-operation generating section calculates an articulation-operation period according to the phoneme information. A voice-operation adjusting section adjusts the phoneme continuation duration and the articulation-operation period. An articulation-operation executing section operates an organ of articulation according to the adjusted articulation-operation period.
Type: Grant
Filed: December 27, 2000
Date of Patent: March 8, 2005
Assignee: Sony Corporation
Inventors: Keiichi Yamada, Kenichiro Kobayashi, Tomoaki Nitta, Makoto Akabane, Masato Shimakawa, Nobuhide Yamazaki, Erika Kobayashi
-
Publication number: 20050027540
Abstract: In a synchronization control apparatus, a voice-language-information generating section generates the voice language information of a word which a robot utters. A voice synthesizing section calculates phoneme information and a phoneme continuation duration according to the voice language information, and also generates synthesized-voice data according to an adjusted phoneme continuation duration. An articulation-operation generating section calculates an articulation-operation period according to the phoneme information. A voice-operation adjusting section adjusts the phoneme continuation duration and the articulation-operation period. An articulation-operation executing section operates an organ of articulation according to the adjusted articulation-operation period.
Type: Application
Filed: August 26, 2004
Publication date: February 3, 2005
Inventors: Keiichi Yamada, Kenichiro Kobayashi, Tomoaki Nitta, Makoto Akabane, Masato Shimakawa, Nobuhide Yamazaki, Erika Kobayashi
-
Patent number: 6845247
Abstract: When a seal-like recording medium with a built-in memory that stores information specifying a communication party is placed in close proximity to a medium read/write unit of a communication terminal apparatus and a button is operated, the information stored in the seal-like medium (such as an operator or a telephone number, depending on the medium) is read out.
Type: Grant
Filed: December 8, 1999
Date of Patent: January 18, 2005
Assignee: Sony Corporation
Inventors: Takashi Sasai, Hiroaki Ogawa, Shuji Yonekura, Tomoaki Nitta, Erika Kobayashi
-
Publication number: 20040054519
Abstract: The present invention relates to a language processing apparatus capable of generating an effective synthesized sound by performing language processing taking into account an onomatopoeia or a mimetic word. An effective synthesized voice is produced from a given text such that the synthesized voice includes a “sound” representing the meaning of an onomatopoeia or a mimetic word included in the given text. An onomatopoeic/mimetic word analyzer 21 extracts the onomatopoeia or the mimetic word from the text, and an onomatopoeic/mimetic word processing unit 27 produces acoustic data of a sound effect corresponding to the extracted onomatopoeia or mimetic word. A voice mixer 26 superimposes the acoustic data produced by the onomatopoeic/mimetic word processing unit 27 on the whole or a part of the synthesized voice data, corresponding to the text, produced by a rule-based synthesizer 24. The present invention may be applied to a robot having a voice synthesizer.
Type: Application
Filed: May 23, 2003
Publication date: March 18, 2004
Inventors: Erika Kobayashi, Makoto Akabane, Tomoaki Nitta, Hideki Kishi, Rika Horinaka, Masashi Takeda
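The extract-then-pair step of this scheme can be sketched as a lookup from onomatopoeic words to sound-effect data that is returned alongside the synthesized utterance. The word list, file names, and the `<voice:...>` stand-in for real synthesis are all invented for the example; the actual mixing of acoustic data onto the voice waveform is not modeled here.

```python
# Hypothetical onomatopoeia-to-sound-effect table.
ONOMATOPOEIA_FX = {"meow": "cat_meow.wav", "boom": "explosion.wav"}

def synthesize_with_effects(text):
    """Extract onomatopoeic words from the text and pair the synthesized
    utterance with matching sound-effect data, as the abstract outlines."""
    words = text.lower().strip(".!?").split()
    effects = [ONOMATOPOEIA_FX[w] for w in words if w in ONOMATOPOEIA_FX]
    synthesized = f"<voice:{text}>"          # stand-in for rule-based synthesis
    return synthesized, effects

voice, fx = synthesize_with_effects("The cat said meow")
```

A voice mixer would then superimpose each returned effect on the whole or part of the synthesized voice, which is what makes the output "effective" in the abstract's sense: the listener hears the cat, not just the word.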
-
Publication number: 20040019484
Abstract: Emotion is added to synthesized speech while the prosodic features of the language are maintained. In a speech synthesis device 200, a language processor 201 generates a string of pronunciation marks from the text, and a prosodic data generating unit 202 creates prosodic data expressing parameters of phonemes such as time duration, pitch, and sound volume, based on the string of pronunciation marks. A constraint information generating unit 203 is fed with the prosodic data and the string of pronunciation marks to generate constraint information that limits changes in the parameters, and adds the generated constraint information to the prosodic data. An emotion filter 204, fed with the prosodic data to which the constraint information has been added, changes the parameters of the prosodic data, within the constraint, in response to the feeling state information supplied to it.
Type: Application
Filed: March 13, 2003
Publication date: January 29, 2004
Inventors: Erika Kobayashi, Toshiyuki Kumakura, Makoto Akabane, Kenichiro Kobayashi, Nobuhide Yamazaki, Tomoaki Nitta, Pierre Yves Oudeyer
-
Publication number: 20030171850
Abstract: The present invention relates to a voice output apparatus capable of, in response to a particular stimulus, stopping a voice that is being output and outputting a reaction, so that the apparatus outputs voice in a natural manner. A rule-based synthesizer 24 produces a synthesized voice and outputs it. For example, when a synthesized voice “Where is an exit?” is being produced and output has proceeded as far as “Where is an e”, if a user taps the robot, a reaction generator 30 determines, by referring to a reaction database 31, that the reaction voice “Ouch!” should be output in response to the tap. The reaction generator 30 then controls an output controller 27 so as to stop outputting the synthesized voice “Where is an exit?” and output the reaction voice “Ouch!”.
Type: Application
Filed: May 9, 2003
Publication date: September 11, 2003
Inventors: Erika Kobayashi, Makoto Akabane, Tomoaki Nitta, Hideki Kishi, Rika Horinaka, Masashi Takeda
-
Publication number: 20030007397
Abstract: The text format of input data is checked, and is converted into a system-manipulated format. It is further determined if the input data is in an HTML or e-mail format using tags, heading information, and the like. The converted data is divided into blocks in a simple manner such that elements in the blocks can be checked based on repetition of predetermined character patterns. Each block section is tagged with a tag indicating a block. The data divided into blocks is parsed based on tags, character patterns, etc., and is structured. A table in text is also parsed, and is segmented into cells. Finally, tree-structured data having a hierarchical structure is generated based on the sentence-structured data. A sentence-extraction template paired with the tree-structured data is used to extract sentences.
Type: Application
Filed: May 10, 2002
Publication date: January 9, 2003
Inventors: Kenichiro Kobayashi, Makoto Akabane, Tomoaki Nitta, Nobuhide Yamazaki, Erika Kobayashi
-
Publication number: 20010021907
Abstract: Various sensors detect conditions outside a robot and operations applied to the robot, and output the results of detection to a robot-motion-system control section. The robot-motion-system control section determines a behavior state according to a behavior model. A robot-thinking-system control section determines an emotion state according to an emotion model. A speech-synthesizing-control-information selection section determines a field in a speech-synthesizing-control-information table according to the behavior state and the emotion state. A language processing section grammatically analyzes text for speech synthesis sent from the robot-thinking-system control section, converts a predetermined portion according to the speech-synthesizing control information, and outputs the result to a rule-based speech synthesizing section. The rule-based speech synthesizing section synthesizes a speech signal corresponding to the text for speech synthesis.
Type: Application
Filed: December 27, 2000
Publication date: September 13, 2001
Inventors: Masato Shimakawa, Nobuhide Yamazaki, Erika Kobayashi, Makoto Akabane, Kenichiro Kobayashi, Keiichi Yamada, Tomoaki Nitta
-
Publication number: 20010007096
Abstract: In a synchronization control apparatus, a voice-language-information generating section generates the voice language information of a word which a robot utters. A voice synthesizing section calculates phoneme information and a phoneme continuation duration according to the voice language information, and also generates synthesized-voice data according to an adjusted phoneme continuation duration. An articulation-operation generating section calculates an articulation-operation period according to the phoneme information. A voice-operation adjusting section adjusts the phoneme continuation duration and the articulation-operation period. An articulation-operation executing section operates an organ of articulation according to the adjusted articulation-operation period.
Type: Application
Filed: December 27, 2000
Publication date: July 5, 2001
Inventors: Keiichi Yamada, Kenichiro Kobayashi, Tomoaki Nitta, Makoto Akabane, Masato Shimakawa, Nobuhide Yamazaki, Erika Kobayashi