Patents by Inventor Ken-ichi KAINUMA

Ken-ichi KAINUMA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11145222
    Abstract: A language learning system including: a learner terminal having a display that displays an image, and a sound recording and reproduction device; a learning support server having a memory storing computer-readable instructions and a processor executing the computer-readable instructions so as to: cause the memory to store a model voice of a word with a model pronunciation for a learning language; send the model voice and an image to the learner terminal; acquire a learner's spoken voice of the word; identify the learner by analyzing the acquired learner voice and evaluate the pronunciation correctness of the acquired learner voice; and send an image indicating an evaluation result of the pronunciation correctness to the learner terminal; and a network that communicably connects the learner terminal and the learning support server. The image does not have character information regarding select portions of the learning language. (A minimal sketch of this client-server flow appears after the listing.)
    Type: Grant
    Filed: January 16, 2018
    Date of Patent: October 12, 2021
    Inventor: Ken-ichi Kainuma
  • Patent number: 10964308
    Abstract: A speech processing apparatus is provided in which face feature points are extracted, for each frame, from moving image data obtained by imaging a speaker's face. A first generation network is generated that produces the face feature points of the corresponding frame from speech feature data extracted, for each frame, from the speaker's uttered speech, and whether the first generation network is appropriate is evaluated using an identification network. A second generation network is then generated that produces the uttered speech from a plurality of uncertain settings, including at least text representing the utterance content of the uttered speech and information indicating the emotions included in the uttered speech, from a plurality of types of fixed settings that define speech quality, and from the face feature points generated by the first generation network evaluated as appropriate; whether the second generation network is appropriate is likewise evaluated using the identification network. (A minimal sketch of this two-stage arrangement appears after the listing.)
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: March 30, 2021
    Inventor: Ken-ichi Kainuma
  • Publication number: 20210027760
    Abstract: A speech processing apparatus is provided in which face feature points are extracted, for each frame, from moving image data obtained by imaging a speaker's face. A first generation network is generated that produces the face feature points of the corresponding frame from speech feature data extracted, for each frame, from the speaker's uttered speech, and whether the first generation network is appropriate is evaluated using an identification network. A second generation network is then generated that produces the uttered speech from a plurality of uncertain settings, including at least text representing the utterance content of the uttered speech and information indicating the emotions included in the uttered speech, from a plurality of types of fixed settings that define speech quality, and from the face feature points generated by the first generation network evaluated as appropriate; whether the second generation network is appropriate is likewise evaluated using the identification network.
    Type: Application
    Filed: October 29, 2018
    Publication date: January 28, 2021
    Inventor: Ken-ichi KAINUMA
  • Publication number: 20180137778
    Abstract: A language learning system including: a learner terminal having a display that displays an image, and a sound recording and reproduction device; a learning support server having a memory storing computer-readable instructions and a processor executing the computer-readable instructions so as to: cause the memory to store a model voice of a word with a model pronunciation for a learning language; send the model voice and an image to the learner terminal; acquire a learner's spoken voice of the word; identify the learner by analyzing the acquired learner voice and evaluate the pronunciation correctness of the acquired learner voice; and send an image indicating an evaluation result of the pronunciation correctness to the learner terminal; and a network that communicably connects the learner terminal and the learning support server. The image does not have character information regarding select portions of the learning language.
    Type: Application
    Filed: January 16, 2018
    Publication date: May 17, 2018
    Inventor: Ken-ichi KAINUMA
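
The architecture recited in the abstract of patent 11145222 (and its published application 20180137778) amounts to a simple client-server exchange: the learning support server stores a model voice, sends the model voice and a text-free image to the learner terminal, receives the learner's recorded utterance, identifies the learner from the voice, scores pronunciation correctness, and returns an image indicating the result. The Python sketch below is a minimal illustration only, not the patented implementation: every class, method, and variable name (LearningSupportServer, send_lesson, _identify_speaker, _score_pronunciation, and so on) is an assumption invented for this example, and the speaker identification and pronunciation scoring steps are stubbed out.

    # Hypothetical sketch of the learner-terminal / learning-support-server flow
    # described in patent 11145222. Names and thresholds are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Lesson:
        word: str
        model_voice: bytes   # recorded model pronunciation of the word
        picture: bytes       # image of the word's meaning, deliberately free of the word's text

    class LearningSupportServer:
        def __init__(self) -> None:
            self.lessons: dict[str, Lesson] = {}   # memory storing model voices and images

        def store_model_voice(self, word: str, model_voice: bytes, picture: bytes) -> None:
            """Cause the memory to store a model voice (and image) for a word."""
            self.lessons[word] = Lesson(word, model_voice, picture)

        def send_lesson(self, word: str) -> tuple[bytes, bytes]:
            """Send the model voice and the text-free image to the learner terminal."""
            lesson = self.lessons[word]
            return lesson.model_voice, lesson.picture

        def evaluate(self, word: str, learner_voice: bytes) -> tuple[str, bytes]:
            """Identify the learner from the voice, score pronunciation, return a result image."""
            learner_id = self._identify_speaker(learner_voice)       # speaker recognition (stub)
            score = self._score_pronunciation(word, learner_voice)   # pronunciation scoring (stub)
            result_image = b"OK" if score >= 0.8 else b"TRY_AGAIN"   # placeholder result image
            return learner_id, result_image

        # --- stubs standing in for the server's analysis steps ---
        def _identify_speaker(self, learner_voice: bytes) -> str:
            return "learner-001"   # a real system would match the voice against enrolled learners

        def _score_pronunciation(self, word: str, learner_voice: bytes) -> float:
            return 0.9             # a real system would compare the voice with the stored model voice

    if __name__ == "__main__":
        server = LearningSupportServer()
        server.store_model_voice("apple", model_voice=b"...", picture=b"...")
        voice, picture = server.send_lesson("apple")   # terminal plays the voice, shows the picture
        learner_id, result = server.evaluate("apple", learner_voice=b"...")
        print(learner_id, result)

The detail the abstract singles out is preserved here only as a comment: the image sent to the terminal carries no character information for the portions of the learning language being practiced, so the learner must rely on the model voice rather than on written text.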
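
The abstract of patent 10964308 (and its published application 20210027760) describes a two-stage generative arrangement judged by an identification network: a first generation network maps per-frame speech features to face feature points, a second generation network synthesizes speech from uncertain settings (at least text and emotion information), fixed settings defining speech quality, and the face feature points produced by the first network, and an identification network evaluates whether each generator is appropriate. The PyTorch sketch below is a hypothetical, heavily simplified rendering of that arrangement, not the patented method: the class names (LandmarkGenerator, SpeechGenerator, IdentificationNetwork), feature dimensions, and layer sizes are all assumptions, and training, loss functions, and per-frame sequence handling are omitted.

    # Hypothetical two-stage generator / identification-network sketch, loosely following
    # the arrangement described in patent 10964308. All dimensions below are assumptions.
    import torch
    import torch.nn as nn

    SPEECH_FEAT_DIM = 40    # per-frame speech feature vector (assumed, e.g. MFCC-like)
    LANDMARK_DIM = 68 * 2   # 68 (x, y) face feature points per frame (assumed)
    TEXT_EMB_DIM = 32       # embedded text of the utterance content (assumed)
    EMOTION_DIM = 8         # information indicating emotions (assumed)
    QUALITY_DIM = 4         # fixed settings defining speech quality (assumed)

    class LandmarkGenerator(nn.Module):
        """First generation network: speech features -> face feature points of the frame."""
        def __init__(self) -> None:
            super().__init__()
            self.net = nn.Sequential(nn.Linear(SPEECH_FEAT_DIM, 128), nn.ReLU(),
                                     nn.Linear(128, LANDMARK_DIM))

        def forward(self, speech_feats: torch.Tensor) -> torch.Tensor:
            return self.net(speech_feats)

    class SpeechGenerator(nn.Module):
        """Second generation network: text + emotion + fixed quality + landmarks -> speech features."""
        def __init__(self) -> None:
            super().__init__()
            in_dim = TEXT_EMB_DIM + EMOTION_DIM + QUALITY_DIM + LANDMARK_DIM
            self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, SPEECH_FEAT_DIM))

        def forward(self, text_emb, emotion, quality, landmarks):
            x = torch.cat([text_emb, emotion, quality, landmarks], dim=-1)
            return self.net(x)

    class IdentificationNetwork(nn.Module):
        """Discriminator-style network used to judge whether a generated sample is appropriate."""
        def __init__(self, in_dim: int) -> None:
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, 1), nn.Sigmoid())

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)   # probability that the sample looks real

    if __name__ == "__main__":
        batch = 16
        speech_feats = torch.randn(batch, SPEECH_FEAT_DIM)

        # Stage 1: generate face feature points from speech and have them judged.
        g1 = LandmarkGenerator()
        d_landmarks = IdentificationNetwork(LANDMARK_DIM)
        fake_landmarks = g1(speech_feats)
        print("landmark realism:", d_landmarks(fake_landmarks).mean().item())

        # Stage 2: generate speech conditioned on text, emotion, fixed quality settings and
        # the face feature points produced by the (assumed-appropriate) first generator.
        g2 = SpeechGenerator()
        d_speech = IdentificationNetwork(SPEECH_FEAT_DIM)
        text_emb = torch.randn(batch, TEXT_EMB_DIM)
        emotion = torch.randn(batch, EMOTION_DIM)
        quality = torch.randn(batch, QUALITY_DIM)
        fake_speech = g2(text_emb, emotion, quality, fake_landmarks.detach())
        print("speech realism:", d_speech(fake_speech).mean().item())

In GAN terms, the identification network plays the discriminator's role for both stages; the abstract's notion of a generation network being "evaluated as appropriate" corresponds here to the discriminator scoring its outputs, with the actual adversarial training loop left out of the sketch.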