Patents by Inventor Hiroyasu Kuwano

Hiroyasu Kuwano has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20090103901
    Abstract: There is provided a content tag attachment support device that enables one person to perform both the tag attachment work and the tag correction work while suppressing an increase in the work time. In this device, audio recognition means (104) recognizes the input audio. Tag generation means (103) attaches data obtained by the audio recognition as a tag to the content reproduced by content reproducing means (101). Tag correction means (108) sends tag correction information to the tag generation means (103) and sends tag correction start/completion reports to content reproduction control means (109). The content reproduction control means (109) controls the content reproducing means (101) so as to temporarily stop the content reproduction in synchronization with the start of the tag correction work and resume it in synchronization with the end of the tag correction work. (A short illustrative sketch of this pause/resume flow follows this entry.)
    Type: Application
    Filed: June 12, 2006
    Publication date: April 23, 2009
    Applicant: MATSUSHITA ELECTRIC INDUSTRIAL CO., LTD.
    Inventors: Mitsuru Endo, Hiroyasu Kuwano, Akira Ishida
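    Illustrative sketch (not from the patent): the abstract describes pausing playback while a tag is being corrected and resuming it when the correction ends. The minimal Python sketch below models only that coordination; the class and method names (ContentPlayer, TagCorrectionSession, begin_correction, end_correction) are hypothetical, and recognition and playback are stubbed out.

      class ContentPlayer:
          """Stand-in for the content reproducing means (101)."""
          def __init__(self):
              self.playing = False

          def play(self):
              self.playing = True

          def pause(self):
              self.playing = False


      class TagCorrectionSession:
          """Stand-in for tag correction means (108) plus reproduction control (109):
          pause playback when a correction starts, resume when it ends."""
          def __init__(self, player, tags):
              self.player = player
              self.tags = tags              # dict: playback position (s) -> tag text

          def begin_correction(self, position_s):
              self.player.pause()           # stop reproduction at correction start
              self._editing = position_s

          def end_correction(self, corrected_text):
              self.tags[self._editing] = corrected_text
              self.player.play()            # resume reproduction at correction end


      player = ContentPlayer()
      tags = {12.5: "gool"}                 # tag produced by speech recognition, needs fixing
      session = TagCorrectionSession(player, tags)

      player.play()
      session.begin_correction(12.5)        # playback pauses here
      session.end_correction("goal")        # playback resumes here
      print(tags, player.playing)           # {12.5: 'goal'} True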
  • Publication number: 20050228665
    Abstract: A metadata preparing device comprising a content reproducing unit (1) for reproducing and outputting content, a monitor (3) for monitoring the content reproduced by the content reproducing unit, a voice input unit (4), a voice recognition unit (5) for recognizing a voice signal input from the voice input unit, a metadata generation unit (6) for converting information recognized by the voice recognition unit into metadata, and an identification information imparting unit (7) for acquiring, from the reproduced content supplied by the content reproducing unit, identification information that identifies respective parts of the content and imparting it to the metadata, so that the generated metadata is associated with the respective parts of the content. (A short illustrative sketch of this association step follows this entry.)
    Type: Application
    Filed: June 23, 2003
    Publication date: October 13, 2005
    Applicant: MATSUSHITA ELECTRIC INDUSTRIAL CO., LTD.
    Inventors: Massaki Kobayashi, Hiroyuki Sakai, Kenji Matsui, Hiroyasu Kuwano, Masafumi Shimotashiro, Mitsuru Yasukata, Mitsuru Endoh
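    Illustrative sketch (not from the patent): one way to picture the identification-information step is to stamp each recognized utterance with the playback timecode read from the content at the moment of dictation, so the metadata can be tied back to that part of the content. In the Python sketch below, recognize() is a hypothetical stand-in for the voice recognition unit (5), and a plain timecode is used as the identification information; neither detail comes from the patent.

      def recognize(audio_chunk):
          # Placeholder: a real system would run a speech recognizer here.
          return audio_chunk["transcript"]

      def build_metadata(dictation_events):
          """dictation_events: iterable of (audio_chunk, timecode_at_dictation)."""
          metadata = []
          for audio_chunk, timecode in dictation_events:
              metadata.append({
                  "text": recognize(audio_chunk),   # metadata generation unit (6)
                  "timecode": timecode,             # identification information (7)
              })
          return metadata

      events = [
          ({"transcript": "kickoff"}, 0.0),
          ({"transcript": "first goal"}, 754.2),
      ]
      for entry in build_metadata(events):
          print(entry)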
  • Publication number: 20040117181
    Abstract: An input speech utterance is segmented into frames of a prefixed time length, and an acoustic feature parameter is extracted from each frame. The acoustic feature parameter is frequency-converted using plural frequency conversion coefficients defined in advance. Using all combinations of the plural post-conversion feature parameters obtained by the frequency conversion and at least one standard phonemic model, plural similarities or distances between the post-conversion feature parameters of each frame and the standard phonemic model are computed. A frequency converting condition for normalizing the input utterance is decided using the plural similarities or distances, and the input utterance is normalized under that condition. With this method, even when the speaker making the speech utterance changes, individual differences in the input utterance can be corrected, thereby improving speech recognition performance. (A short illustrative sketch of this warp-factor selection follows this entry.)
    Type: Application
    Filed: September 24, 2003
    Publication date: June 17, 2004
    Inventors: Keiko Morii, Yoshihisa Nakatoh, Hiroyasu Kuwano
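    Illustrative sketch (not from the patent): the abstract tries several predefined frequency conversion coefficients and keeps the one whose converted features best match a standard phonemic model, which resembles warp-factor search in vocal tract length normalization. The Python/NumPy sketch below is a deliberately crude illustration: it "warps" features by simply scaling them and scores each factor by mean Euclidean distance to a single model mean vector, which is far simpler than the patent's procedure.

      import numpy as np

      rng = np.random.default_rng(0)
      frames = rng.normal(loc=1.2, scale=0.1, size=(50, 13))   # feature parameters, one row per frame
      standard_model_mean = np.ones(13)                        # toy "standard phonemic model"

      warp_factors = [0.85, 0.90, 0.95, 1.00, 1.05, 1.10]      # predefined conversion coefficients

      def warp(features, alpha):
          # Toy frequency-conversion stand-in: scale the features by the warp factor.
          return features * alpha

      def distance_to_model(features, model_mean):
          # Mean Euclidean distance of all frames to the model mean.
          return float(np.mean(np.linalg.norm(features - model_mean, axis=1)))

      # Decide the frequency converting condition that best fits the standard model.
      scores = {a: distance_to_model(warp(frames, a), standard_model_mean) for a in warp_factors}
      best_alpha = min(scores, key=scores.get)

      normalized = warp(frames, best_alpha)                    # normalize the input utterance
      print(best_alpha, round(scores[best_alpha], 3))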
  • Patent number: 6308152
    Abstract: A string of acoustic feature parameters for each recognition-desired word and a string of acoustic feature parameters for each reception word are registered in advance. When an uttered word is received, a string of acoustic feature parameters is extracted from the uttered word, the acoustic feature parameters of the uttered word are compared with the string of acoustic feature parameters of each recognition-desired word, and a recognition-desired word recognition score indicating the degree of similarity between the uttered word and each recognition-desired word is calculated. A reception word recognition score indicating the degree of similarity between the uttered word and each reception word is also calculated. (A short illustrative sketch of this two-score comparison follows this entry.)
    Type: Grant
    Filed: June 22, 1999
    Date of Patent: October 23, 2001
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Tomohiro Konuma, Hiroyasu Kuwano
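    Illustrative sketch (not from the patent): the abstract computes one score against each recognition-desired word and one against each reception word, but does not say how the two are used. The Python sketch below assumes one common arrangement in which the utterance is accepted only if the best recognition-desired word outscores the best reception word; it uses plain dynamic time warping over toy feature strings, which is an assumption rather than the patent's stated method.

      import numpy as np

      def dtw_distance(a, b):
          # Plain dynamic time warping distance between two (T, D) feature strings.
          n, m = len(a), len(b)
          cost = np.full((n + 1, m + 1), np.inf)
          cost[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  d = np.linalg.norm(a[i - 1] - b[j - 1])
                  cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
          return cost[n, m]

      rng = np.random.default_rng(1)
      uttered = rng.normal(0.0, 1.0, size=(20, 12))            # feature string of the uttered word

      # Registered templates (toy data): recognition-desired words and reception words.
      desired = {"hello": rng.normal(0.0, 1.0, (22, 12)),
                 "goodbye": rng.normal(2.0, 1.0, (18, 12))}
      reception = {"<reception>": rng.normal(4.0, 1.0, (20, 12))}

      def best_score(templates):
          # Higher score = more similar (negative DTW distance).
          scores = {w: -dtw_distance(uttered, t) for w, t in templates.items()}
          word = max(scores, key=scores.get)
          return word, scores[word]

      best_word, desired_score = best_score(desired)
      _, reception_score = best_score(reception)

      if desired_score > reception_score:
          print("recognized:", best_word)
      else:
          print("rejected: reception word scored higher")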