Patents by Inventor Masakatsu Hoshimi

Masakatsu Hoshimi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190243902
    Abstract: A translation device includes an input unit and a controller. The input unit obtains first text data in a first language. The controller generates second text data in a second language that is a translation of the first text data. The controller further generates first replacement data by replacing a first term, of a predetermined type, contained in the first text data by a parameter, obtains second replacement data, in the second language, corresponding to the first replacement data, and generates the second text data by replacing the parameter contained in the second replacement data by a second term in the second language that is a translation of the first term.
    Type: Application
    Filed: January 25, 2019
    Publication date: August 8, 2019
    Inventors: Natsuki Saeki, Tomokazu Ishikawa, Masakatsu Hoshimi
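The parameter-replacement translation described in this abstract can be sketched in a few lines. Everything below is an illustrative assumption, not the patent's implementation: the glossary, the template-translation table, and the `{TERM}` parameter syntax are all toy stand-ins for a real term dictionary and machine-translation engine.

```python
# Toy glossary mapping first-language terms of a "predetermined type"
# (here: station names) to their second-language translations. Assumed data.
TERM_GLOSSARY = {"Shibuya": "渋谷"}

# Stub template translator: a real system would run an MT engine on text in
# which the term has been replaced by a parameter. Assumed data.
TEMPLATE_TRANSLATIONS = {
    "The next stop is {TERM}.": "次の停車駅は{TERM}です。",
}

def translate_with_parameter(first_text: str) -> str:
    """Replace a known term with a parameter, translate the resulting
    template, then substitute the translated term back into the result."""
    for term, translated_term in TERM_GLOSSARY.items():
        if term in first_text:
            # First replacement data: term -> parameter
            template = first_text.replace(term, "{TERM}")
            # Second replacement data: translated template
            translated_template = TEMPLATE_TRANSLATIONS[template]
            # Second text data: parameter -> translated term
            return translated_template.replace("{TERM}", translated_term)
    raise KeyError("no glossary term found in input")
```

The point of the design is that the MT step never sees the proper noun, so the term's translation is guaranteed to come from the glossary rather than from the translator's guess.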
  • Patent number: 10216732
    Abstract: An information presentation method, a non-transitory recording medium storing thereon a computer program, and an information presentation system relate to speech recognition. A speech recognition unit performs speech recognition on speech pertaining to a dialogue and thereby generates dialogue text, a translation unit translates the dialogue text and thereby generates translated dialogue text, and a speech waveform synthesis unit performs speech synthesis on the translated dialogue text and thereby generates translated dialogue speech. An intention understanding unit then determines whether supplementary information exists, based on the dialogue text. If supplementary information exists, a communication unit transmits the supplementary information and the translated dialogue speech to a terminal to present the existence of the supplementary information to at least one person from among a plurality of people, according to the usage situation of the information presentation system of the at least one person.
    Type: Grant
    Filed: July 5, 2017
    Date of Patent: February 26, 2019
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Koji Miura, Masakatsu Hoshimi
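The pipeline in this abstract (recognition, translation, synthesis, plus an intention-understanding check for supplementary information) can be sketched as below. The recognizer, translator, synthesizer, and supplementary-info table are stubs with assumed toy data; only the control flow mirrors the abstract.

```python
# Assumed toy mapping from a dialogue keyword to supplementary information.
SUPPLEMENTARY_INFO = {
    "duty-free": "Passport required for duty-free purchases.",
}

def speech_recognition(speech: str) -> str:
    return speech  # stub: pretend the waveform is already text

def translate(text: str) -> str:
    return f"<translated:{text}>"  # stub machine translation

def synthesize(text: str) -> str:
    return f"<speech:{text}>"  # stub speech-waveform synthesis

def intention_understanding(dialogue_text: str):
    """Return supplementary information if any keyword matches, else None."""
    for keyword, info in SUPPLEMENTARY_INFO.items():
        if keyword in dialogue_text:
            return info
    return None

def present(speech: str) -> dict:
    dialogue_text = speech_recognition(speech)
    translated_speech = synthesize(translate(dialogue_text))
    payload = {"translated_speech": translated_speech}
    info = intention_understanding(dialogue_text)
    if info is not None:
        payload["supplementary_info"] = info  # transmitted to the terminal
    return payload
```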
  • Publication number: 20180067928
    Abstract: Provided are an information presentation method, a non-transitory recording medium storing thereon a computer program, and an information presentation system. A speech recognition unit performs speech recognition on speech pertaining to a dialogue and thereby generates dialogue text, a translation unit translates the dialogue text and thereby generates translated dialogue text, and a speech waveform synthesis unit performs speech synthesis on the translated dialogue text and thereby generates translated dialogue speech. An intention understanding unit then determines whether supplementary information exists, based on the dialogue text. If supplementary information exists, a communication unit transmits the supplementary information and the translated dialogue speech to a terminal to present the existence of the supplementary information to at least one person from among a plurality of people, according to the usage situation of the information presentation system of the at least one person.
    Type: Application
    Filed: July 5, 2017
    Publication date: March 8, 2018
    Inventors: Koji Miura, Masakatsu Hoshimi
  • Publication number: 20160210961
    Abstract: A speech interaction device includes: an obtainment unit that obtains utterance data indicating an utterance made by a user; a memory that holds a plurality of keywords; a word determination unit that extracts a plurality of words from the utterance data and determines, for each of the plurality of words, whether or not it matches any of the plurality of keywords; a response sentence generation unit that, when the plurality of words include a first word that is determined not to match any of the plurality of keywords, generates a response sentence that includes a second word, which is among the plurality of words and determined to match any one of the plurality of keywords, and asks for re-input of a part corresponding to the first word; and a speech generation unit that generates speech data of the response sentence.
    Type: Application
    Filed: November 12, 2014
    Publication date: July 21, 2016
    Inventors: Masahiro Nakanishi, Takahiro Kamai, Masakatsu Hoshimi
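The re-input strategy above (keep the matched keyword, ask again only for the unmatched part) can be sketched as follows. The keyword vocabulary and the response wording are assumptions for illustration.

```python
KEYWORDS = {"tokyo", "osaka", "tomorrow", "today"}  # assumed vocabulary

def generate_response(utterance: str):
    """If some words match keywords and others do not, build a response
    that echoes a matched word and asks to re-input the unmatched part."""
    words = utterance.lower().rstrip(".?!").split()
    matched = [w for w in words if w in KEYWORDS]
    unmatched = [w for w in words if w not in KEYWORDS]
    if unmatched and matched:
        return f"I understood '{matched[0]}'. Could you repeat the rest?"
    return None  # nothing to re-ask: either all matched or nothing did
```

Echoing the recognized keyword back to the user both confirms what was understood and narrows the re-input to the misrecognized fragment.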
  • Patent number: 8311831
    Abstract: A voice emphasizing device emphasizes in a speech a “strained rough voice” at a position where a speaker or user of the speech intends to generate emphasis or musical expression. Thereby, the voice emphasizing device can provide the position with emphasis of anger, excitement, tension, or an animated way of speaking, or musical expression of Enka (Japanese ballad), blues, rock, or the like. As a result, rich vocal expression can be achieved. The voice emphasizing device includes: an emphasis utterance section detection unit (12) detecting, from an input speech waveform, an emphasis section that is a time duration having a waveform intended by the speaker or user to be converted; and a voice emphasizing unit (13) increasing fluctuation of an amplitude envelope of the waveform in the detected emphasis section.
    Type: Grant
    Filed: September 29, 2008
    Date of Patent: November 13, 2012
    Assignee: Panasonic Corporation
    Inventors: Yumiko Kato, Takahiro Kamai, Masakatsu Hoshimi
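The core operation in this abstract, increasing fluctuation of the amplitude envelope inside a detected emphasis section, can be sketched as low-frequency amplitude modulation. The sample rate, modulation frequency, and modulation depth below are illustrative assumptions, not values from the patent.

```python
import math

def emphasize(samples, start, end, rate=16000, mod_hz=80.0, depth=0.4):
    """Increase amplitude-envelope fluctuation inside [start, end) by
    amplitude-modulating the waveform with a low-frequency sinusoid;
    samples outside the emphasis section are left unchanged."""
    out = list(samples)
    for n in range(start, end):
        mod = 1.0 + depth * math.sin(2 * math.pi * mod_hz * n / rate)
        out[n] = samples[n] * mod
    return out
```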
  • Publication number: 20110178680
    Abstract: A vehicle control device (10) is provided that can predict a driving operation of a driver earlier to respond to the driving operation quickly. The vehicle control device (10) includes: a posture measuring unit (11) to measure a posture indicating a state of at least one of the buttock region, the upper pelvic region, and the driver's leg opposite to the leg with which the driver operates a brake or an accelerator; a posture change detection unit (12) to detect a change in the measured posture; a preparatory movement identification unit (13) to identify whether the posture change is caused by the driver's preparatory movement spontaneously made before the brake or accelerator operation, based on whether the detected posture change satisfies a predetermined condition; and a vehicle control unit (14) to control the vehicle when it is identified that the posture change has been caused by the preparatory movement.
    Type: Application
    Filed: March 30, 2011
    Publication date: July 21, 2011
    Inventors: Yumiko Kato, Masakatsu Hoshimi
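The decision chain in this abstract (measure posture, detect a change, test a predetermined condition, control the vehicle) reduces to a small pipeline. The scalar posture value, the threshold, and the control action are all illustrative assumptions.

```python
THRESHOLD = 0.5  # assumed magnitude threshold for the predetermined condition

def detect_posture_change(prev: float, curr: float) -> float:
    """Posture change as the magnitude of the measured difference."""
    return abs(curr - prev)

def is_preparatory_movement(change: float) -> bool:
    """Predetermined condition: the change magnitude exceeds a threshold."""
    return change > THRESHOLD

def control_vehicle(prev_posture: float, curr_posture: float) -> str:
    change = detect_posture_change(prev_posture, curr_posture)
    if is_preparatory_movement(change):
        return "precharge-brakes"  # illustrative control action
    return "no-action"
```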
  • Publication number: 20100070283
    Abstract: A voice emphasizing device emphasizes in a speech a “strained rough voice” at a position where a speaker or user of the speech intends to generate emphasis or musical expression. Thereby, the voice emphasizing device can provide the position with emphasis of anger, excitement, tension, or an animated way of speaking, or musical expression of Enka (Japanese ballad), blues, rock, or the like. As a result, rich vocal expression can be achieved. The voice emphasizing device includes: an emphasis utterance section detection unit (12) detecting, from an input speech waveform, an emphasis section that is a time duration having a waveform intended by the speaker or user to be converted; and a voice emphasizing unit (13) increasing fluctuation of an amplitude envelope of the waveform in the detected emphasis section.
    Type: Application
    Filed: September 29, 2008
    Publication date: March 18, 2010
    Inventors: Yumiko Kato, Takahiro Kamai, Masakatsu Hoshimi
  • Patent number: 6842734
    Abstract: In an acoustic model producing apparatus, a plurality of noise samples are categorized into clusters so that the number of clusters is smaller than the number of noise samples. A noise sample is selected in each of the clusters, and the selected noise samples are set as second noise samples for training. Untrained acoustic models stored on a storage unit are then trained by using the second noise samples, thereby producing trained acoustic models for speech recognition.
    Type: Grant
    Filed: June 14, 2001
    Date of Patent: January 11, 2005
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Maki Yamada, Masakatsu Hoshimi
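The clustering-and-selection step in this abstract can be sketched with a toy one-dimensional k-means: cluster the noise samples, then pick the sample nearest each centroid as a representative training noise. The 1-D features and the naive initialization are assumptions for illustration; the patent does not specify this particular clustering method.

```python
def cluster_and_select(noise_features, k, iters=10):
    """Toy 1-D k-means over noise-sample features; returns one
    representative sample (nearest to the centroid) per non-empty cluster."""
    centroids = noise_features[:k]  # naive initialization (assumed)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in noise_features:
            nearest = min(range(k), key=lambda i: abs(x - centroids[i]))
            groups[nearest].append(x)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return [min(g, key=lambda x: abs(x - c))
            for g, c in zip(groups, centroids) if g]
```

Training on one representative per cluster keeps the training set small while still covering the variety of noise environments.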
  • Patent number: 6728673
    Abstract: A video retrieval data generation apparatus includes an extractor that is configured to extract a characteristic pattern from a voice signal synchronous with a video signal. The video retrieval data generation apparatus also includes an index generator that is configured to set the voice signal for a voice period as a processing target. The index generator is further configured to prepare standard voice patterns corresponding to a plurality of subwords, detect, for each subword, a characteristic pattern similar to a standard voice pattern at each of the voice periods, and generate, for each subword, an index containing time synchronization information corresponding to a position where the similar characteristic pattern is detected. The video retrieval data generation apparatus also includes a multiplexer that is configured to multiplex video signals, voice signals and indexes to output in a data stream format.
    Type: Grant
    Filed: May 9, 2003
    Date of Patent: April 27, 2004
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Hiroshi Furuyama, Hitoshi Yashio, Ikuo Inoue, Mitsuru Endo, Masakatsu Hoshimi
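The index-generation step can be sketched as follows: each voice period's characteristic pattern is compared against every subword's standard pattern, and a time entry is recorded wherever the similarity clears a threshold. The two-dimensional feature vectors, the negative-distance similarity, and the threshold are toy assumptions.

```python
# Assumed standard voice patterns for two subwords, as 2-D feature vectors.
STANDARD_PATTERNS = {"ka": [1.0, 0.0], "to": [0.0, 1.0]}

def similarity(a, b):
    """Toy similarity: negative Euclidean distance (higher = more similar)."""
    return -(sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5)

def build_index(voice_periods, threshold=-0.3):
    """Return {subword: [times]}, where time is the voice period's position
    in the stream (stand-in for time synchronization information)."""
    index = {}
    for t, feats in enumerate(voice_periods):
        for subword, pattern in STANDARD_PATTERNS.items():
            if similarity(feats, pattern) >= threshold:
                index.setdefault(subword, []).append(t)
    return index
```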
  • Publication number: 20030200091
    Abstract: A video retrieval data generation apparatus includes an extractor that is configured to extract a characteristic pattern from a voice signal synchronous with a video signal. The video retrieval data generation apparatus also includes an index generator that is configured to set the voice signal for a voice period as a processing target. The index generator is further configured to prepare standard voice patterns of a subword corresponding to a plurality of subwords, detect, for each subword, a characteristic pattern similar to a standard voice pattern at each of the voice periods, and generate, for each subword, an index containing time synchronization information corresponding to a position where the similar characteristic pattern is detected. The video retrieval data generation apparatus also includes a multiplexer that is configured to multiplex video signals, voice signals and indexes to output in a data stream format.
    Type: Application
    Filed: May 9, 2003
    Publication date: October 23, 2003
    Applicant: MATSUSHITA ELECTRIC INDUSTRIAL CO., LTD.
    Inventors: Hiroshi Furuyama, Hitoshi Yashio, Ikuo Inoue, Mitsuru Endo, Masakatsu Hoshimi
  • Patent number: 6611803
    Abstract: A video retrieval apparatus includes a retrieval data generator that is configured to extract a characteristic pattern from a voice signal synchronous with a video signal to generate an index for video retrieval. The video retrieval apparatus also includes a retrieval processor that is configured to input a key word from a retriever and collate the key word with the index to retrieve a desired video. The retrieval data generator includes a multiplexor that is configured to multiplex video signals, voice signals and indexes to output in data stream format. The retrieval processor includes a demultiplexor that is configured to demultiplex the multiplexed data stream into the video signals, the voice signals and the indexes. A video reproduction apparatus may collate a visual pattern of the key word with visual pattern data of the video signal at the time a person vocalizes a sound as the index for retrieval.
    Type: Grant
    Filed: August 14, 2000
    Date of Patent: August 26, 2003
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Hiroshi Furuyama, Hitoshi Yashio, Ikuo Inoue, Mitsuru Endo, Masakatsu Hoshimi
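The collation step, matching a keyword against a subword index, can be sketched as below. The keyword is assumed to arrive already decomposed into subwords, and subwords are assumed to occupy consecutive time positions; both simplifications are assumptions for illustration (a real system would allow timing tolerance).

```python
def retrieve(keyword_subwords, index):
    """Return start times where the keyword's subwords occur in order at
    consecutive positions of the index ({subword: [times]})."""
    hits = []
    for start in index.get(keyword_subwords[0], []):
        if all(start + i in index.get(sw, [])
               for i, sw in enumerate(keyword_subwords)):
            hits.append(start)
    return hits
```

Because the index stores subwords rather than whole words, any keyword can be searched without re-processing the audio, which is the point of generating the index at multiplexing time.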
  • Publication number: 20020055840
    Abstract: In an acoustic model producing apparatus, a plurality of noise samples are categorized into clusters so that the number of clusters is smaller than the number of noise samples. A noise sample is selected in each of the clusters, and the selected noise samples are set as second noise samples for training. Untrained acoustic models stored on a storage unit are then trained by using the second noise samples, thereby producing trained acoustic models for speech recognition.
    Type: Application
    Filed: June 14, 2001
    Publication date: May 9, 2002
    Applicant: Matsushita Electric Industrial Co., Ltd.
    Inventors: Maki Yamada, Masakatsu Hoshimi
  • Patent number: 5692097
    Abstract: An inter-frame similarity between an input voice and a standard patterned word is calculated for each of frames and for each of standard patterned words, and a posterior probability similarity is produced by subtracting a constant value from each of the inter-frame similarities. The constant value is determined by analyzing voice data obtained from specified persons to set the posterior probability similarities to positive values when a word existing in the input voice matches with the standard patterned word and to set the posterior probability similarities to negative values when a word existing in the input voice does not match with the standard patterned word. Thereafter, an accumulated similarity having an accumulated value obtained by accumulating values of the posterior probability similarities according to a continuous dynamic programming matching operation for the frames of the input voice is calculated for each of the standard patterned words.
    Type: Grant
    Filed: November 23, 1994
    Date of Patent: November 25, 1997
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Maki Yamada, Masakatsu Hoshimi, Taisuke Watanabe, Katsuyuki Niyada
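The posterior-probability trick in this abstract, subtracting a trained constant so that in-word frames score positive and out-of-word frames score negative, can be sketched with a best-run accumulator. The constant's value and the use of a plain contiguous run (no time warping, unlike the continuous DP matching in the patent) are simplifying assumptions.

```python
CONSTANT = 0.6  # assumed constant, derived from training data in the patent

def accumulate(frame_similarities):
    """Subtract the constant from each inter-frame similarity, then return
    the best accumulated score over any contiguous run of frames. Because
    out-of-word frames go negative, the run naturally resets between words."""
    best = run = 0.0
    for s in frame_similarities:
        run = max(0.0, run + (s - CONSTANT))
        best = max(best, run)
    return best
```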
  • Patent number: 5345536
    Abstract: A set of "m" feature parameters is generated every frame from reference speech which is spoken by at least one speaker and which represents recognition-object words, where "m" denotes a preset integer. A set of "n" types of standard patterns is previously generated on the basis of speech data of a plurality of speakers, where "n" denotes a preset integer. Matching between the feature parameters of the reference speech and each of the standard patterns is executed to generate a vector of "n" reference similarities between the feature parameters of the reference speech and each of the standard patterns every frame. The reference similarity vectors of respective frames are arranged into temporal sequences corresponding to the recognition-object words respectively. The reference similarity vector sequences are previously registered as dictionary similarity vector sequences. Input speech to be recognized is analyzed to generate "m" feature parameters from the input speech.
    Type: Grant
    Filed: December 17, 1991
    Date of Patent: September 6, 1994
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Masakatsu Hoshimi, Maki Miyata, Shoji Hiraoka, Katsuyuki Niyada
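The similarity-vector representation in this abstract can be sketched as follows: each frame's feature parameters are scored against the n standard patterns, giving an n-dimensional similarity vector per frame, and words are stored and matched as sequences of such vectors. The 2-D features, two standard patterns, and negative-squared-distance similarity are toy assumptions.

```python
STANDARD_PATTERNS = [[1.0, 0.0], [0.0, 1.0]]  # n = 2 assumed patterns

def similarity_vector(features):
    """n similarities of one frame's features against the standard patterns."""
    return [-sum((f - p) ** 2 for f, p in zip(features, pat))
            for pat in STANDARD_PATTERNS]

def register(frames):
    """Dictionary entry: temporal sequence of per-frame similarity vectors."""
    return [similarity_vector(f) for f in frames]

def match(input_frames, dictionary):
    """Pick the word whose similarity-vector sequence is closest to the
    input's (element-wise squared distance; no time warping here)."""
    seq = register(input_frames)
    def dist(entry):
        return sum((a - b) ** 2
                   for va, vb in zip(seq, entry) for a, b in zip(va, vb))
    return min(dictionary, key=lambda w: dist(dictionary[w]))
```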
  • Patent number: 5309547
    Abstract: A method of speech recognition includes the steps of analyzing input speech every frame and deriving feature parameters from the input speech, generating an input vector from the feature parameters of a plurality of frames, and periodically calculating partial distances between the input vector and partial standard patterns while shifting the frame one by one. Standard patterns correspond to recognition-object words respectively, and each of the standard patterns is composed of the partial standard patterns which represent parts of the corresponding recognition-object word respectively. The partial distances are accumulated into distances between the input speech and the standard patterns. The distances correspond to the recognition-object words respectively. The distances are compared with each other, and a minimum distance of the distances is selected when the input speech ends. One of the recognition-object words which corresponds to the minimum distance is decided to be a recognition result.
    Type: Grant
    Filed: June 11, 1992
    Date of Patent: May 3, 1994
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Katsuyuki Niyada, Masakatsu Hoshimi, Shoji Hiraoka, Tatsuya Kimura
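The frame-synchronous accumulation in this abstract can be sketched as below. Each word is a sequence of partial standard patterns; partial distances are accumulated per word as frames arrive, and the minimum total decides the result. The 1-D patterns, one-pattern-per-frame alignment (no shifting or warping), and absolute-difference distance are simplifying assumptions.

```python
WORDS = {  # assumed 1-D partial standard patterns per recognition-object word
    "left": [0.0, 0.0, 1.0],
    "right": [1.0, 1.0, 0.0],
}

def recognize(input_frames):
    """Accumulate partial distances frame by frame; at end of input,
    the word with the minimum accumulated distance is the result."""
    totals = {w: 0.0 for w in WORDS}
    for i, x in enumerate(input_frames):
        for word, pattern in WORDS.items():
            totals[word] += abs(x - pattern[i])  # partial distance
    return min(totals, key=totals.get)
```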
  • Patent number: 4817159
    Abstract: Speech parameters (P_h and P_l) are derived for consonant classification and recognition by separating a speech signal into low- and high-frequency bands, then in each band obtaining the time first-derivative, from which the min-max differences (power dips) P_h and P_l are obtained. The distribution of P_h and P_l in a two-dimensional plot on a discriminant diagram classifies the consonant phoneme.
    Type: Grant
    Filed: June 4, 1984
    Date of Patent: March 28, 1989
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Masakatsu Hoshimi, Katsuyuki Niyada
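The parameter extraction in this abstract can be sketched as below: given low- and high-band power contours, take the time first-derivative and measure its min-max difference (the power-dip depth) in each band. The discriminant regions are an illustrative assumption; the patent's actual discriminant diagram is not reproduced here.

```python
def power_dip(band_power):
    """Min-max difference of the time first-derivative of a band's power."""
    deriv = [b - a for a, b in zip(band_power, band_power[1:])]
    return max(deriv) - min(deriv)

def classify(low_band, high_band):
    """Classify via the (P_l, P_h) point on a toy discriminant diagram."""
    p_l, p_h = power_dip(low_band), power_dip(high_band)
    if p_l > 1.0 and p_h > 1.0:  # assumed region: deep dips in both bands
        return "stop"
    return "other"
```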