Patents by Inventor Makoto Akabane
Makoto Akabane has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 7197455
Abstract: A client 2 includes a transmission unit 2d for transmitting the input speech information over a network to a server system 3, and an output unit 2b for receiving the contents selection information from the server system 3 over the network and outputting the received information. The server system 3 includes a prepared information storage unit 9b for storing one or more pieces of preparation information pertinent to each contents, from one contents to another, and an information preparing server 7 for preparing the contents selection information based on the speech information received from the client 2 over the network and on the preparation information, and for sending the so-prepared contents selection information to the client 2 over the network.
Type: Grant
Filed: March 3, 2000
Date of Patent: March 27, 2007
Assignee: Sony Corporation
Inventors: Fukuharu Sudo, Makoto Akabane, Toshitada Doi
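The client/server exchange described in this abstract can be sketched roughly as follows. The class, the keyword-matching policy, and all names are illustrative assumptions for this sketch, not the patented implementation:

```python
# Hypothetical sketch of the exchange: the client sends recognized speech
# information; the server builds contents selection information from the
# stored per-content preparation information and returns it.

class InfoPreparingServer:
    """Holds per-content preparation information and builds selection info."""

    def __init__(self, preparation_info):
        # preparation_info: {content_id: set of keywords describing the content}
        self.preparation_info = preparation_info

    def prepare_selection(self, speech_keywords):
        """Rank contents by how many recognized speech keywords they match."""
        scores = {
            content_id: len(keywords & set(speech_keywords))
            for content_id, keywords in self.preparation_info.items()
        }
        # Return content ids with at least one match, best match first.
        return [cid for cid, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0]


server = InfoPreparingServer({
    "news-01": {"news", "weather"},
    "music-07": {"music", "jazz"},
})
# Client side: speech has been recognized into keywords and sent over the network.
selection = server.prepare_selection(["jazz", "music"])
# → ["music-07"]
```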
-
Patent number: 7162152
Abstract: A position detecting device for detecting a position of a movable body which is moved by a drive system having a rotation body includes a magnetic part having an N-pole and an S-pole on a face of the rotation body, and a sensor chip having at least a magneto-resistive element. The face of the rotation body is perpendicular to the rotation shaft of the rotation body, and the sensor chip is provided so as to face the magnetic part for detecting a variation of the magnetic field by the magneto-resistive element when the magnetic part is rotated with the rotation body. Only a little space is required for mounting the sensor chip having the magneto-resistive element, and therefore downsizing can be attained.
Type: Grant
Filed: November 18, 2004
Date of Patent: January 9, 2007
Assignee: Nidec Sankyo Corporation
Inventors: Makoto Akabane, Yukio Furuya
-
Publication number: 20060253286
Abstract: The present invention is intended to provide a text-to-speech synthesis apparatus, including a storage for storing phoneme data of a plurality of speakers; a selector for selecting one of the plurality of speakers in accordance with an operation performed by a user; a searcher for searching the storage for phoneme data of the speaker selected by the selector; a text-to-speech synthesis processor for linking the phoneme data of the speaker retrieved by the searcher to convert input data into a synthetic speech; and a fee-charge controller for controlling a fee-charge operation for the user in accordance with the phoneme data selected by the selector. Consequently, the user can perform text-to-speech synthesis on the desired input data such as drama data by use of the obtained phoneme data.
Type: Application
Filed: July 11, 2006
Publication date: November 9, 2006
Applicant: Sony Corporation
Inventors: Makoto Akabane, Hajime Yano, Keiichi Yamada, Goro Shiraishi, Junichi Kudo, Akira Tange
-
Patent number: 7111011
Abstract: The text format of input data is checked, and is converted into a system-manipulated format. It is further determined if the input data is in an HTML or e-mail format using tags, heading information, and the like. The converted data is divided into blocks in a simple manner such that elements in the blocks can be checked based on repetition of predetermined character patterns. Each block section is tagged with a tag indicating a block. The data divided into blocks is parsed based on tags, character patterns, etc., and is structured. A table in text is also parsed, and is segmented into cells. Finally, tree-structured data having a hierarchical structure is generated based on the sentence-structured data. A sentence-extraction template paired with the tree-structured data is used to extract sentences.
Type: Grant
Filed: May 10, 2002
Date of Patent: September 19, 2006
Assignee: Sony Corporation
Inventors: Kenichiro Kobayashi, Makoto Akabane, Tomoaki Nitta, Nobuhide Yamazaki, Erika Kobayashi
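The block-division and tree-building steps in this pipeline can be illustrated with a deliberately simplified sketch. Splitting on blank lines, the `block` tag, and the trivial one-level tree are assumptions made for illustration; the patent's actual segmentation uses repeated character patterns and richer tagging:

```python
# Illustrative sketch of two pipeline stages: divide raw text into blocks,
# tag each block, then assemble the tagged blocks into tree-structured data.

def divide_into_blocks(text):
    """Split text on blank lines and tag each chunk as a block."""
    chunks = [c.strip() for c in text.split("\n\n") if c.strip()]
    return [{"tag": "block", "text": c} for c in chunks]

def build_tree(blocks):
    """Wrap the tagged blocks under a document root, mimicking the final
    tree-structured data with a hierarchical structure."""
    return {"tag": "document", "children": blocks}

tree = build_tree(divide_into_blocks("Title line\n\nFirst paragraph.\n\nSecond paragraph."))
```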
-
Patent number: 7099826
Abstract: The present invention is intended to provide a text-to-speech synthesis apparatus, including a storage for storing phoneme data of a plurality of speakers; a selector for selecting one of the plurality of speakers in accordance with an operation performed by a user; a searcher for searching the storage for phoneme data of the speaker selected by the selector; a text-to-speech synthesis processor for linking the phoneme data of the speaker retrieved by the searcher to convert input data into a synthetic speech; and a fee-charge controller for controlling a fee-charge operation for the user in accordance with the phoneme data selected by the selector. Consequently, the user can perform text-to-speech synthesis on the desired input data such as drama data by use of the obtained phoneme data.
Type: Grant
Filed: May 31, 2002
Date of Patent: August 29, 2006
Assignee: Sony Corporation
Inventors: Makoto Akabane, Hajime Yano, Keiichi Yamada, Goro Shiraishi, Junichi Kudo, Akira Tange
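The searcher/processor/fee-charge flow of this apparatus can be sketched as below. The storage layout, the fee amounts, and the stubbed "linking" step are assumptions for this sketch only:

```python
# Minimal sketch of the flow: select a speaker, search the storage for that
# speaker's phoneme data, synthesize (stubbed), and charge the matching fee.

PHONEME_STORAGE = {
    "speaker_a": {"phonemes": ["a1", "a2"], "fee": 100},
    "speaker_b": {"phonemes": ["b1", "b2"], "fee": 150},
}

def synthesize(speaker_id, text):
    """Look up the selected speaker's phoneme data, link it to the input
    text (stubbed), and return the result plus the fee to charge."""
    entry = PHONEME_STORAGE[speaker_id]                              # searcher
    speech = f"{text} [voiced with {'+'.join(entry['phonemes'])}]"   # processor (stub)
    return speech, entry["fee"]                                      # fee-charge controller

speech, fee = synthesize("speaker_b", "Hello")
# fee → 150
```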
-
Publication number: 20060158054
Abstract: A motor includes a circumferential wall part of a case provided with an opening part on one end side in an axial direction and at least a magnet fixed to the inner peripheral face of the circumferential wall part. At least a part of the outer peripheral face of the magnet is fixed to the inner peripheral face of the circumferential wall part with a first adhesive, and further, the outer peripheral edge part of a first end face of the magnet that is located on a side of the opening part is fixed to the inner peripheral face of the circumferential wall part with a second adhesive.
Type: Application
Filed: December 14, 2005
Publication date: July 20, 2006
Inventor: Makoto Akabane
-
Publication number: 20060161437
Abstract: The present invention is intended to provide a text-to-speech synthesis apparatus, including a storage for storing phoneme data of a plurality of speakers; a selector for selecting one of the plurality of speakers in accordance with an operation performed by a user; a searcher for searching the storage for phoneme data of the speaker selected by the selector; a text-to-speech synthesis processor for linking the phoneme data of the speaker retrieved by the searcher to convert input data into a synthetic speech; and a fee-charge controller for controlling a fee-charge operation for the user in accordance with the phoneme data selected by the selector. Consequently, the user can perform text-to-speech synthesis on the desired input data such as drama data by use of the obtained phoneme data.
Type: Application
Filed: March 21, 2006
Publication date: July 20, 2006
Applicant: Sony Corporation
Inventors: Makoto Akabane, Hajime Yano, Keiichi Yamada, Goro Shiraishi, Junichi Kudo, Akira Tange
-
Patent number: 7080015
Abstract: In a synchronization control apparatus, a voice-language-information generating section generates the voice language information of a word which a robot utters. A voice synthesizing section calculates phoneme information and a phoneme continuation duration according to the voice language information, and also generates synthesized-voice data according to an adjusted phoneme continuation duration. An articulation-operation generating section calculates an articulation-operation period according to the phoneme information. A voice-operation adjusting section adjusts the phoneme continuation duration and the articulation-operation period. An articulation-operation executing section operates an organ of articulation according to the adjusted articulation-operation period.
Type: Grant
Filed: August 26, 2004
Date of Patent: July 18, 2006
Assignee: Sony Corporation
Inventors: Keiichi Yamada, Kenichiro Kobayashi, Tomoaki Nitta, Makoto Akabane, Masato Shimakawa, Nobuhide Yamazaki, Erika Kobayashi
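The voice-operation adjusting step above must make the phoneme continuation duration and the articulation-operation period agree so that mouth movement and audio stay in sync. One simple adjustment policy, shown here purely as an assumption and not as the patented method, is to stretch both to the longer of the two:

```python
# Hedged sketch: reconcile the synthesizer's phoneme continuation duration
# with the articulation mechanism's operation period by adopting the longer
# value as the common, adjusted duration for both sections.

def adjust(phoneme_duration_ms, articulation_period_ms):
    """Return a common duration usable by both the voice synthesizing
    section and the articulation-operation executing section."""
    return max(phoneme_duration_ms, articulation_period_ms)

common = adjust(180, 240)
# → 240 (both sections now run for 240 ms, keeping lips and audio aligned)
```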
-
Publication number: 20060138877
Abstract: A rotor core for a motor includes a core having a plurality of core sheets which are laminated, a rotation shaft which is press-fitted to the core, a press-fitting part and at least a space part which are provided at a joining portion of a through-hole of each of the core sheets to the rotation shaft, and a resin film which is formed in the space part for joining the rotation shaft with the core.
Type: Application
Filed: November 30, 2005
Publication date: June 29, 2006
Inventor: Makoto Akabane
-
Patent number: 7062438
Abstract: A sentence or a song is to be synthesized with natural speech close to the human voice. To this end, singing metrical data are formed in a tag processing unit 211 in a singing synthesis unit 212 in a speech synthesis apparatus 200, based on singing data and an analyzed text portion. A language analysis unit 213 performs language processing on text portions other than the singing data. For a text portion registered in a natural metrical dictionary, as determined by this language processing, corresponding natural metrical data is selected, and its parameters are adjusted in a metrical data adjustment unit 222 based on phonemic segment data of a phonemic segment storage unit 223. For a text portion not registered in the natural metrical dictionary, a phonemic symbol string is generated in a natural metrical dictionary storage unit 214, after which metrical data are generated in a metrical generating unit 221.
Type: Grant
Filed: March 13, 2003
Date of Patent: June 13, 2006
Assignee: Sony Corporation
Inventors: Kenichiro Kobayashi, Nobuhide Yamazaki, Makoto Akabane
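The two paths described in this abstract, reusing stored natural metrical data for registered text and generating metrical data for everything else, can be sketched as follows. The dictionary contents and the flat-pitch generator are hypothetical stand-ins:

```python
# Illustrative sketch of the dictionary-first path: text found in the natural
# metrical dictionary reuses stored metrical data; other text falls back to
# rule-based metrical generation (stubbed here).

NATURAL_METRICAL_DICT = {"hello": {"pitch": [60, 62], "source": "natural"}}

def generate_metrical_data(text):
    """Fallback: derive metrical data by rule (stubbed as one flat pitch
    value per word)."""
    return {"pitch": [60] * len(text.split()), "source": "generated"}

def metrical_data_for(text):
    return NATURAL_METRICAL_DICT.get(text) or generate_metrical_data(text)

natural = metrical_data_for("hello")        # registered → stored natural data
generated = metrical_data_for("good morning")  # unregistered → generated data
```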
-
Publication number: 20050251737
Abstract: The text format of input data is checked, and is converted into a system-manipulated format. It is further determined if the input data is in an HTML or e-mail format using tags, heading information, and the like. The converted data is divided into blocks in a simple manner such that elements in the blocks can be checked based on repetition of predetermined character patterns. Each block section is tagged with a tag indicating a block. The data divided into blocks is parsed based on tags, character patterns, etc., and is structured. A table in text is also parsed, and is segmented into cells. Finally, tree-structured data having a hierarchical structure is generated based on the sentence-structured data. A sentence-extraction template paired with the tree-structured data is used to extract sentences.
Type: Application
Filed: July 20, 2005
Publication date: November 10, 2005
Applicant: Sony Corporation
Inventors: Kenichiro Kobayashi, Makoto Akabane, Tomoaki Nitta, Nobuhide Yamazaki, Erika Kobayashi
-
Patent number: 6936940
Abstract: A motor includes a stator and a rotor having drive permanent magnets and a rotary shaft. The stator supports a core with coils wound thereon and includes a base plate. The base plate is formed with a resin, and is composed of a core supporting section that supports the core, a shaft supporting section that rotatably supports the rotary shaft, and attachment sections for attaching the motor to an apparatus. The core supporting section, the shaft supporting section and the attachment sections of the base plate are formed in one piece.
Type: Grant
Filed: April 29, 2003
Date of Patent: August 30, 2005
Assignee: Sankyo Seiki Mfg. Co., Ltd.
Inventors: Kazutaka Kobayashi, Makoto Akabane
-
Publication number: 20050152689
Abstract: A position detecting device for detecting a position of a movable body which is moved by a drive system having a rotation body includes a magnetic part having an N-pole and an S-pole on a face of the rotation body, and a sensor chip having at least a magneto-resistive element. The face of the rotation body is perpendicular to the rotation shaft of the rotation body, and the sensor chip is provided so as to face the magnetic part for detecting a variation of the magnetic field by the magneto-resistive element when the magnetic part is rotated with the rotation body. Only a little space is required for mounting the sensor chip having the magneto-resistive element, and therefore downsizing can be attained.
Type: Application
Filed: November 18, 2004
Publication date: July 14, 2005
Inventors: Makoto Akabane, Yukio Furuya
-
Patent number: 6865535
Abstract: In a synchronization control apparatus, a voice-language-information generating section generates the voice language information of a word which a robot utters. A voice synthesizing section calculates phoneme information and a phoneme continuation duration according to the voice language information, and also generates synthesized-voice data according to an adjusted phoneme continuation duration. An articulation-operation generating section calculates an articulation-operation period according to the phoneme information. A voice-operation adjusting section adjusts the phoneme continuation duration and the articulation-operation period. An articulation-operation executing section operates an organ of articulation according to the adjusted articulation-operation period.
Type: Grant
Filed: December 27, 2000
Date of Patent: March 8, 2005
Assignee: Sony Corporation
Inventors: Keiichi Yamada, Kenichiro Kobayashi, Tomoaki Nitta, Makoto Akabane, Masato Shimakawa, Nobuhide Yamazaki, Erika Kobayashi
-
Publication number: 20050027540
Abstract: In a synchronization control apparatus, a voice-language-information generating section generates the voice language information of a word which a robot utters. A voice synthesizing section calculates phoneme information and a phoneme continuation duration according to the voice language information, and also generates synthesized-voice data according to an adjusted phoneme continuation duration. An articulation-operation generating section calculates an articulation-operation period according to the phoneme information. A voice-operation adjusting section adjusts the phoneme continuation duration and the articulation-operation period. An articulation-operation executing section operates an organ of articulation according to the adjusted articulation-operation period.
Type: Application
Filed: August 26, 2004
Publication date: February 3, 2005
Inventors: Keiichi Yamada, Kenichiro Kobayashi, Tomoaki Nitta, Makoto Akabane, Masato Shimakawa, Nobuhide Yamazaki, Erika Kobayashi
-
Patent number: 6836050
Abstract: A motor with brush includes a rotor having a rotary shaft and a commutator retained at the rotary shaft, two brush sections each having a plane surface in an axial direction of the rotor and in sliding contact with the commutator, and two brush terminals that are integral with the brush sections, respectively. Each of the brush terminals includes a brush connecting section formed in the axial direction of the rotary shaft and having a plane surface that connects to the plane surface of each of the corresponding brush sections, and a bent section that is bent in a direction generally orthogonal to the axial direction of the rotary shaft. The bent section restricts and fixedly positions the brush terminal and the brush section, and the brush terminals connect to external connection members that are provided outside the motor and inside an outer circumference of the motor.
Type: Grant
Filed: November 4, 2002
Date of Patent: December 28, 2004
Assignee: Sankyo Seiki Mfg. Co., Ltd.
Inventors: Makoto Akabane, Masayuki Katagiri
-
Patent number: 6778963
Abstract: An in-vehicle device which is capable of performing control by means of speech recognition includes a monitor for displaying map and other information and a speech input microphone connected to the monitor.
Type: Grant
Filed: May 17, 2001
Date of Patent: August 17, 2004
Assignee: Sony Corporation
Inventors: Toru Yamamoto, Makoto Akabane, Yoshikazu Takahashi, Masashi Ohkubo, Eiji Yamamoto, Satoko Ikezawa
-
Publication number: 20040054519
Abstract: The present invention relates to a language processing apparatus capable of generating an effective synthesized sound by performing language processing taking into account an onomatopoeia or a mimetic word. An effective synthesized voice is produced from a given text such that the synthesized voice includes a "sound" representing the meaning of an onomatopoeia or a mimetic word included in the given text. An onomatopoeic/mimetic word analyzer 21 extracts the onomatopoeia or the mimetic word from the text, and an onomatopoeic/mimetic word processing unit 27 produces acoustic data of a sound effect corresponding to the extracted onomatopoeia or mimetic word. A voice mixer 26 superimposes the acoustic data produced by the onomatopoeic/mimetic word processing unit 27 on the whole or a part of the synthesized voice data, corresponding to the text, produced by a rule-based synthesizer 24. The present invention may be applied to a robot having a voice synthesizer.
Type: Application
Filed: May 23, 2003
Publication date: March 18, 2004
Inventors: Erika Kobayashi, Makoto Akabane, Tomoaki Nitta, Hideki Kishi, Rika Horinaka, Masashi Takeda
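The extract-then-superimpose path in this abstract can be sketched in miniature. The word list, the effect table, and mixing by sample-wise addition are illustrative assumptions rather than the patented processing:

```python
# Minimal sketch: find known onomatopoeic words in the text, map each to a
# sound-effect sample array, and superimpose (mix) the effect onto the
# synthesized-voice samples.

ONOMATOPOEIA_EFFECTS = {"bang": [0.5, -0.5], "meow": [0.2, 0.2]}

def extract_onomatopoeia(text):
    """Analyzer stand-in: pick out words present in the effect table."""
    return [w for w in text.lower().split() if w in ONOMATOPOEIA_EFFECTS]

def mix(voice, effect):
    """Voice-mixer stand-in: superimpose the effect on the start of the
    voice samples by addition."""
    mixed = list(voice)
    for i, sample in enumerate(effect):
        mixed[i] += sample
    return mixed

words = extract_onomatopoeia("the door shut with a bang")
voice = [0.0, 0.0, 0.1]                       # stub synthesized-voice data
out = mix(voice, ONOMATOPOEIA_EFFECTS[words[0]])
# → [0.5, -0.5, 0.1]
```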
-
Publication number: 20040019485
Abstract: A sentence or a song is to be synthesized with natural speech close to the human voice. To this end, singing metrical data are formed in a tag processing unit 211 in a singing synthesis unit 212 in a speech synthesis apparatus 200, based on singing data and an analyzed text portion. A language analysis unit 213 performs language processing on text portions other than the singing data. For a text portion registered in a natural metrical dictionary, as determined by this language processing, corresponding natural metrical data is selected, and its parameters are adjusted in a metrical data adjustment unit 222 based on phonemic segment data of a phonemic segment storage unit 223. For a text portion not registered in the natural metrical dictionary, a phonemic symbol string is generated in a natural metrical dictionary storage unit 214, after which metrical data are generated in a metrical generating unit 221.
Type: Application
Filed: March 13, 2003
Publication date: January 29, 2004
Inventors: Kenichiro Kobayashi, Nobuhide Yamazaki, Makoto Akabane
-
Publication number: 20040019484
Abstract: Emotion is to be added to the synthesized speech while the prosodic features of the language are maintained. In a speech synthesis device 200, a language processor 201 generates a string of pronunciation marks from the text, and a prosodic data generating unit 202 creates prosodic data expressing parameters of phonemes such as time duration, pitch, and sound volume, based on the string of pronunciation marks. A constraint information generating unit 203 is fed with the prosodic data and the string of pronunciation marks to generate constraint information which limits the changes in the parameters, and adds the so-generated constraint information to the prosodic data. An emotion filter 204, fed with the prosodic data to which the constraint information has been added, changes the parameters of the prosodic data within the constraint, responsive to the feeling-state information imparted to it.
Type: Application
Filed: March 13, 2003
Publication date: January 29, 2004
Inventors: Erika Kobayashi, Toshiyuki Kumakura, Makoto Akabane, Kenichiro Kobayashi, Nobuhide Yamazaki, Tomoaki Nitta, Pierre Yves Oudeyer
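The constraint idea in this abstract, letting an emotion filter alter prosodic parameters only within bounds attached to the data, can be sketched as a clamp. The parameter bounds and the uniform emotional scaling below are illustrative assumptions:

```python
# Hedged sketch: the emotion filter scales prosodic parameters (duration,
# pitch, volume) for emotional effect, but each change is clamped to the
# constraint bounds so the language's prosodic features are preserved.

def apply_emotion(prosody, scale, constraints):
    """Scale each parameter by the emotion intensity, then clamp it to the
    (low, high) bounds carried as constraint information."""
    out = {}
    for name, value in prosody.items():
        lo, hi = constraints[name]
        out[name] = min(max(value * scale, lo), hi)
    return out

prosody = {"pitch": 200.0, "duration": 100.0}
constraints = {"pitch": (150.0, 260.0), "duration": (80.0, 130.0)}
excited = apply_emotion(prosody, 1.4, constraints)
# pitch would scale to 280 but is clamped to 260; duration to 130
```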