Patents by Inventor Yoshifumi Hirose

Yoshifumi Hirose has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20090313019
    Abstract: An emotion recognition apparatus is capable of performing accurate and stable speech-based emotion recognition, irrespective of individual, regional, and language differences of prosodic information.
    Type: Application
    Filed: May 21, 2007
    Publication date: December 17, 2009
    Inventors: Yumiko Kato, Takahiro Kamai, Yoshihisa Nakatoh, Yoshifumi Hirose
  • Publication number: 20090281807
    Abstract: A voice quality conversion device converts the voice quality of an input speech using information contained in the speech itself.
    Type: Application
    Filed: May 8, 2008
    Publication date: November 12, 2009
    Inventors: Yoshifumi Hirose, Takahiro Kamai, Yumiko Kato
  • Publication number: 20090254349
    Abstract: A speech synthesizer executes speech content editing at high speed and generates speech content easily. The speech synthesizer includes a small speech element DB (101), a small speech element selection unit (102), a small speech element concatenation unit (103), a prosody modification unit (104), a large speech element DB (105), a correspondence DB (106) that associates the small speech element DB (101) with the large speech element DB (105), a speech element candidate obtainment unit (107), a large speech element selection unit (108), and a large speech element concatenation unit (109). By editing synthetic speech using the small speech element DB (101) and then enhancing the quality of the editing result using the large speech element DB (105), speech content can be generated easily on a mobile terminal (a simplified sketch of this two-tier arrangement appears after this listing).
    Type: Application
    Filed: May 11, 2007
    Publication date: October 8, 2009
    Inventors: Yoshifumi Hirose, Yumiko Kato, Takahiro Kamai
  • Patent number: 7454343
    Abstract: A speech synthesizer that provides high-quality sound along with stable sound quality includes: a target parameter generation unit; a speech element DB; an element selection unit; a mixed parameter judgment unit which determines an optimum combination of target parameters and speech element parameters; a parameter integration unit which integrates the parameters; and a waveform generation unit which generates synthetic speech. High-quality and stable synthetic speech is generated by combining, per parameter dimension, the stable parameters generated by the target parameter generation unit with the high-quality, natural-sounding speech elements selected by the element selection unit (a simplified sketch of this per-dimension combination appears after this listing).
    Type: Grant
    Filed: April 12, 2007
    Date of Patent: November 18, 2008
    Assignee: Panasonic Corporation
    Inventors: Yoshifumi Hirose, Takahiro Kamai, Yumiko Kato, Natsuki Saito
  • Patent number: 7349847
    Abstract: A speech synthesis apparatus appropriately transforms a voice characteristic of speech. The speech synthesis apparatus includes an element storing unit in which speech elements are stored, a function storing unit in which transformation functions are stored, and an adaptability judging unit which derives a degree of similarity by comparing a speech element stored in the element storing unit with an acoustic characteristic of the speech element used for generating a transformation function stored in the function storing unit. The speech synthesis apparatus also includes a selecting unit and a voice characteristic transforming unit which transforms, for each speech element stored in the element storing unit, based on the degree of similarity derived by the adaptability judging unit, a voice characteristic of the speech element by applying one of the transformation functions stored in the function storing unit (a simplified sketch of this similarity-based selection appears after this listing).
    Type: Grant
    Filed: February 13, 2006
    Date of Patent: March 25, 2008
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Yoshifumi Hirose, Natsuki Saito, Takahiro Kamai
  • Publication number: 20070233489
    Abstract: A speech synthesis device, in which the sound quality is not significantly degraded when generating a synthesized sound, includes a target element information generation unit (102), an element database (103), an element selection unit (104), a voice characteristics designation unit (105), a voice characteristics transformation unit (106), a distortion determination unit (108), and a target element information correction unit (109). When the speech element sequence transformed by the voice characteristics transformation unit (106) is determined to be distorted by the distortion determination unit (108), the target element information correction unit (109) corrects the speech element information generated by the target element information generation unit (102) to the speech element information of the transformed voice characteristic, and the element selection unit (104) reselects a speech element sequence.
    Type: Application
    Filed: April 1, 2005
    Publication date: October 4, 2007
    Inventors: Yoshifumi Hirose, Hiromori Nii
  • Publication number: 20070203702
    Abstract: A speech synthesizer that provides high-quality sound along with stable sound quality includes: a target parameter generation unit; a speech element DB; an element selection unit; a mixed parameter judgment unit which determines an optimum combination of target parameters and speech element parameters; a parameter integration unit which integrates the parameters; and a waveform generation unit which generates synthetic speech. High-quality and stable synthetic speech is generated by combining, per parameter dimension, the stable parameters generated by the target parameter generation unit with the high-quality, natural-sounding speech elements selected by the element selection unit.
    Type: Application
    Filed: April 12, 2007
    Publication date: August 30, 2007
    Inventors: Yoshifumi Hirose, Takahiro Kamai, Yumiko Kato, Natsuki Saito
  • Patent number: 7251559
    Abstract: When traveling to a destination reported in a television broadcast or the like, the user has had to make a memo while watching the television and, when actually traveling by car, the user has had to enter necessary information into an in-vehicle terminal by looking at the memo. A broadcast provider broadcasts a travel program together with travel destination information related to it, and a television terminal presents the received broadcast and travel destination information to the user. If the user desires to travel to the destination thus presented, the television terminal downloads the received travel destination information into a memory card.
    Type: Grant
    Filed: November 7, 2001
    Date of Patent: July 31, 2007
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Hidetsugu Maekawa, Yoshifumi Hirose, Shinichi Yoshizawa, Kenji Mizutani, Yumi Wakita
  • Publication number: 20070094029
    Abstract: To provide a speech synthesis method of reading out units of synthesized speech without fail and in an easy-to-understand manner, even when playback of the units of synthesized speech is requested simultaneously. The duration prediction unit predicts the playback duration of synthesized speech to be synthesized based on text. The time constraint satisfaction judgment unit judges, based on the predicted playback duration, whether a constraint condition concerning the playback timing of the synthesized speech is satisfied. If it is judged that the constraint condition is not satisfied, the content modification unit shifts the playback starting timing of the synthesized speech of the text forward or backward, and modifies the contents of the text indicating time and distance in accordance with the shifted time. The synthesized speech generation unit generates synthesized speech based on the text having the modified contents and plays it back (a simplified sketch of this scheduling decision appears after this listing).
    Type: Application
    Filed: May 16, 2006
    Publication date: April 26, 2007
    Inventors: Natsuki Saito, Takahiro Kamai, Yumiko Kato, Yoshifumi Hirose
  • Publication number: 20060259299
    Abstract: A broadcast receiving system includes a broadcast receiving part for receiving a broadcast in which additional information, which corresponds to an object appearing in the broadcast contents and contains keyword information for specifying the object, is broadcast simultaneously with the broadcast contents; a recognition vocabulary generating section for generating a recognition vocabulary set corresponding to the additional information by using a synonym dictionary; a speech recognition section for performing speech recognition of a voice uttered by a viewer, thereby specifying the keyword information corresponding to a recognition vocabulary set when a word recognized as the speech recognition result is contained in that set; and a displaying section for displaying the additional information corresponding to the specified keyword information (a simplified sketch of this keyword expansion and lookup appears after this listing).
    Type: Application
    Filed: December 26, 2003
    Publication date: November 16, 2006
    Inventors: Yumiko Kato, Takahiro Kamai, Hideyuki Yoshida, Yoshifumi Hirose
  • Publication number: 20060136213
    Abstract: A speech synthesis apparatus which can appropriately transform a voice characteristic of a speech is provided. The speech synthesis apparatus includes an element storing unit in which speech elements are stored, a function storing unit in which transformation functions are stored, an adaptability judging unit which derives a degree of similarity by comparing a speech element stored in the element storing unit with an acoustic characteristic of the speech element used for generating a transformation function stored in the function storing unit, and a selecting unit and voice characteristic transforming unit which transforms, for each speech element stored in the element storing unit, based on the degree of similarity derived by the adaptability judging unit, a voice characteristic of the speech element by applying one of the transformation functions stored in the function storing unit.
    Type: Application
    Filed: February 13, 2006
    Publication date: June 22, 2006
    Inventors: Yoshifumi Hirose, Natsuki Saito, Takahiro Kamai
  • Patent number: 7050979
    Abstract: A speech interpreting device. Speech is input in a first language. The input speech is recognized. One or more word strings of the first language are extracted and displayed. The word strings correspond to a result of the speech recognition. A word string is selected from the displayed word strings. The selected word string is expected to become an object of conversion into a second language. When the whole or a part of the selected word string is specified, candidates for a term which corresponds to the contents of the specified whole or part of the selected word string are extracted and displayed. One of the displayed candidates is selected. The selected word string is converted into the second language.
    Type: Grant
    Filed: January 24, 2002
    Date of Patent: May 23, 2006
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Kenji Mizutani, Yoshifumi Hirose, Hidetsugu Maekawa, Yumi Wakita, Shinichi Yoshizawa
  • Publication number: 20050119890
    Abstract: The present invention includes: a characteristic parameter DB 106 that holds, with respect to each speech-unit, speech-unit data indicating a loan word attribute and acoustic characteristics; a language analysis unit 104 and a prosody prediction unit 109 that obtain text data and respectively predict a loan word attribute and acoustic characteristics of each of a plurality of speech-units that form text indicated by the text data; a speech-unit selection unit 108 that selects, from the characteristic parameter DB 106, speech-unit data that represents the loan word attribute and the acoustic characteristics similar to the predicted loan word attribute and acoustic characteristics of each speech-unit; and a speech synthesis unit 110 that generates synthesized speech using a plurality of the selected speech-units and outputs the synthesized speech.
    Type: Application
    Filed: November 29, 2004
    Publication date: June 2, 2005
    Inventor: Yoshifumi Hirose
  • Publication number: 20040068406
    Abstract: Even with an apparatus structure of relatively small size, the possibility of misrecognizing a viewer's speech can be reduced and a conversation can be smoothly sustained, so that the viewer readily has the impression that the conversation proceeds almost naturally. These effects are achieved by the following features. An image output section displays images which transition in a non-interactive manner for a viewer, such as broadcast images, on a display section. A conversation processing section outputs apparatus speech data for commencing a conversation based on conversation data which is stored in a conversation database and which is determined according to the transition of the images. When the viewer speaks, the conversation processing section outputs apparatus speech data for replying to the viewer's speech based on viewer speech data output from a speech recognition section and the conversation data.
    Type: Application
    Filed: July 16, 2003
    Publication date: April 8, 2004
    Inventors: Hidetsugu Maekawa, Yumi Wakita, Kenji Mizutani, Shinichi Yoshizawa, Yoshifumi Hirose, Kenji Matsui
  • Publication number: 20040068362
    Abstract: When traveling to a destination reported in a television broadcast or the like, the user has had to make a memo while watching the television and, when actually traveling by car, the user has had to enter necessary information into an in-vehicle terminal by looking at the memo.
    Type: Application
    Filed: September 22, 2003
    Publication date: April 8, 2004
    Inventors: Hidetsugu Maekawa, Yoshifumi Hirose, Shinichi Yoshizawa, Kenji Mizutani, Yumi Wakita
  • Publication number: 20030001865
    Abstract: An information display device includes a display capable of displaying first information belonging to a first hierarchy and second information belonging to a second hierarchy, and an operation means capable of executing a first scrolling operation for scrolling the display on which the first information belonging to the first hierarchy is displayed, a second scrolling operation for scrolling the display on which the second information belonging to the second hierarchy is displayed, and a hierarchy switching operation for switching between the first information belonging to the first hierarchy and the second information belonging to the second hierarchy to be displayed. The first scrolling operation, the second scrolling operation and the hierarchy switching operation, which are executed by the operation means, are the same kind of operation.
    Type: Application
    Filed: June 27, 2002
    Publication date: January 2, 2003
    Applicant: Matsushita Electric Industrial Co., Ltd.
    Inventors: Yoshifumi Hirose, Shinichi Yoshizawa
  • Publication number: 20020120436
    Abstract: A speech interpreting device which can be easily carried and operated is configured from: a speech inputting/outputting device 102; an image outputting device 103; one or more buttons 106; an image instructing device 105; a computation controlling device 101 which phonetically and linguistically converts source-language data input by the user and supplies the converted data to the speech inputting/outputting device 102 and the image outputting device 103; an external large-scale nonvolatile memory device 104 which holds data and the programs that instruct the computation controlling device 101 on the procedures of the processes; an external data input/output terminal 107 through which the computation controlling device 101 exchanges programs and data with external apparatuses; and a power source device 108 which supplies the necessary electric power.
    Type: Application
    Filed: January 24, 2002
    Publication date: August 29, 2002
    Inventors: Kenji Mizutani, Yoshifumi Hirose, Hidetsugu Maekawa, Yumi Wakita, Shinichi Yoshizawa
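
Illustrative code sketches

Application 20090254349 edits synthetic speech quickly against a small speech element DB and then upgrades the result through a correspondence DB that maps each small element to a large, high-quality element. The Python fragment below is an editorial illustration only, under assumed data structures; the identifiers and element names are hypothetical and do not come from the patent text.

    # Hypothetical element inventories: the small DB is compact enough for fast
    # editing on a mobile terminal; the large DB holds high-quality elements.
    small_db = {"a01": "a_lowres", "b02": "b_lowres"}
    large_db = {"A-001": "a_hires", "B-002": "b_hires"}
    correspondence = {"a01": "A-001", "b02": "B-002"}   # small ID -> large ID

    def edit_with_small_db(element_ids):
        """Fast editing pass: concatenate low-resolution elements for preview."""
        return [small_db[i] for i in element_ids]

    def enhance_with_large_db(element_ids):
        """Quality enhancement: swap each small element for its large counterpart."""
        return [large_db[correspondence[i]] for i in element_ids]

    draft = edit_with_small_db(["a01", "b02"])
    final = enhance_with_large_db(["a01", "b02"])
    print(draft, final)   # ['a_lowres', 'b_lowres'] ['a_hires', 'b_hires']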
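Patent 7454343 and application 20070203702 describe combining, per parameter dimension, stable target parameters generated by a model with parameters taken from selected speech elements. The sketch below only illustrates that per-dimension choice; the cost arrays, the selection rule, and all parameter names are assumptions, not the patented judgment method.

    import numpy as np

    def mix_parameters(target_params, element_params, stability_cost, fidelity_cost):
        """Per parameter dimension, keep either the stable generated target value
        or the value taken from a selected speech element.

        target_params, element_params: arrays of shape (num_frames, num_dims).
        stability_cost, fidelity_cost: assumed per-dimension costs (lower wins);
        this stands in for the patent's mixed parameter judgment unit."""
        prefer_target = (stability_cost <= fidelity_cost)[np.newaxis, :]
        return np.where(prefer_target, target_params, element_params)

    # Toy usage: 5 frames, 3 parameter dimensions.
    target = np.zeros((5, 3))                 # smooth, stable trajectory
    element = np.random.randn(5, 3)           # natural but noisier element data
    mixed = mix_parameters(target, element,
                           stability_cost=np.array([0.1, 0.9, 0.5]),
                           fidelity_cost=np.array([0.4, 0.2, 0.5]))
    print(mixed.shape)  # (5, 3): dimension 0 from target, dimension 1 from element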
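Patent 7349847 and application 20060136213 select a voice-characteristic transformation function by comparing each stored speech element with the acoustic characteristics of the element used to generate each function. The following sketch shows that selection-by-similarity idea under assumptions: the Euclidean distance measure, the feature layout, and the toy transformation callables are illustrative, not the patent's adaptability judgment.

    import numpy as np

    def select_transformation(element_features, functions):
        """Pick the transformation whose training-element features are most
        similar to the element being transformed (assumed Euclidean distance)."""
        return min(functions,
                   key=lambda f: np.linalg.norm(element_features - f["trained_on"]))

    def transform_element(element_features, functions):
        """Apply the best-matching transformation to the element's features."""
        chosen = select_transformation(element_features, functions)
        return chosen["apply"](element_features)

    # Hypothetical transformation functions: each records the features of the
    # element it was generated from, plus a callable that alters the features.
    functions = [
        {"trained_on": np.array([1.0, 0.0]), "apply": lambda x: x * 1.1},
        {"trained_on": np.array([0.0, 1.0]), "apply": lambda x: x + 0.2},
    ]

    element = np.array([0.9, 0.1])                # resembles the first training element
    print(transform_element(element, functions))  # first function is applied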
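Application 20070094029 predicts the playback duration of synthesized speech, checks it against a timing constraint, and, when the constraint cannot be met, shifts the start time and rewrites time or distance expressions in the text. The sketch below covers only the scheduling decision; the characters-per-second duration model and the fallback text edit are placeholders, not the patented prediction or modification units.

    def schedule_utterance(text, requested_start, deadline, chars_per_second=12.0):
        """Decide when to start playback so the utterance finishes by the deadline.

        A crude duration predictor (characters / speaking rate) stands in for a
        real prosody-based duration model. Returns (start_time, possibly_modified_text)."""
        predicted_duration = len(text) / chars_per_second
        if requested_start + predicted_duration <= deadline:
            return requested_start, text              # constraint already satisfied
        shifted_start = deadline - predicted_duration  # shift playback earlier
        if shifted_start < 0:
            # Cannot finish in time at all: a real system would also rewrite
            # "in 300 m" style expressions to match the delayed announcement.
            return 0.0, text + " (announcement delayed)"
        return shifted_start, text

    start, text = schedule_utterance("Turn right in 300 meters.",
                                     requested_start=4.0, deadline=5.0)
    print(start, text)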
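Application 20060259299 expands keyword information attached to a broadcast into a recognition vocabulary set using a synonym dictionary, then maps a recognized word back to its keyword to display the matching additional information. The synonym table, keywords, and lookup logic below are illustrative assumptions only.

    # Assumed synonym dictionary and additional information carried with a broadcast.
    synonyms = {"car": ["automobile", "vehicle"], "song": ["track", "tune"]}
    additional_info = {"car": "Model X, on sale now", "song": "Theme by A. Composer"}

    def build_vocabulary(keywords):
        """Expand each keyword into a recognition vocabulary set via synonyms."""
        return {kw: {kw, *synonyms.get(kw, [])} for kw in keywords}

    def lookup(recognized_word, vocabulary):
        """Return the additional information whose vocabulary set contains the word."""
        for keyword, vocab in vocabulary.items():
            if recognized_word in vocab:
                return additional_info[keyword]
        return None

    vocab = build_vocabulary(additional_info.keys())
    print(lookup("vehicle", vocab))   # -> "Model X, on sale now"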