Patents by Inventor Kentaro Murase

Kentaro Murase has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11941498
    Abstract: An image processing method executed by a computer, the method includes detecting a plurality of feature points of a face from an input image, referring to importance information that indicates an importance of a region within an image in a process of detecting a predetermined facial motion from the image, selecting, from the plurality of feature points detected by the detecting, one or more points that correspond to an image region including an importance indicated by the importance information equal to or smaller than a first threshold value, correcting the input image by using the one or more points selected by the selecting, to generate a corrected image; and determining whether or not the predetermined facial motion is occurring in the input image, based on an output obtained by inputting the corrected image to a recognition model.
    Type: Grant
    Filed: January 25, 2021
    Date of Patent: March 26, 2024
    Assignee: FUJITSU LIMITED
    Inventors: Ryosuke Kawamura, Kentaro Murase
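    A minimal Python sketch of the flow in the abstract above, under the assumption that a similarity transform estimated from landmarks in low-importance regions is used to correct the image before recognition; the function names, importance map, and values are illustrative, not the patented implementation.
    ```python
    import numpy as np

    def select_low_importance_points(points, importance_map, threshold):
        """Keep only the landmarks that fall in regions whose importance for
        detecting the target facial motion is at or below the threshold."""
        kept = [(x, y) for (x, y) in points
                if importance_map[int(y), int(x)] <= threshold]
        return np.array(kept, dtype=float)

    def estimate_similarity(src, dst):
        """Least-squares similarity transform mapping src points onto dst points
        (used here to normalize the face before recognition)."""
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
        scale = np.linalg.norm(dst_c) / np.linalg.norm(src_c)   # rough scale from point spreads
        u, _, vt = np.linalg.svd(dst_c.T @ src_c)
        rot = u @ vt
        t = dst.mean(0) - scale * src.mean(0) @ rot.T
        return scale, rot, t

    # Toy example: landmarks, a flat importance map, and reference positions.
    landmarks = [(10, 12), (30, 12), (20, 5), (20, 28)]
    importance = np.full((40, 40), 0.2)
    importance[25:, :] = 0.9                        # mouth area: important, so excluded
    reference = [(12, 10), (28, 10), (20, 2)]
    stable = select_low_importance_points(landmarks, importance, threshold=0.5)
    print(estimate_similarity(stable, reference))    # transform used to correct the input image
    ```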
  • Publication number: 20240037986
    Abstract: A recording medium stores a program for causing a computer to execute processing including: acquiring images; classifying the images, based on a combination of whether an action unit related to a motion of a portion occurs and whether occlusion is included in an image in which the action unit occurs; calculating a feature amount of the image by inputting each classified image into a model; and training the model so as to decrease a first distance between feature amounts of an image in which the action unit occurs and an image with an occlusion with respect to the image in which the action unit occurs and to increase a second distance between feature amounts of the image with the occlusion with respect to the image in which the action unit occurs and an image with an occlusion with respect to an image in which the action unit does not occur.
    Type: Application
    Filed: April 27, 2023
    Publication date: February 1, 2024
    Applicant: Fujitsu Limited
    Inventors: Ryosuke Kawamura, Kentaro Murase
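    A rough sketch of the kind of loss the abstract describes, assuming Euclidean distances between feature vectors and a hinge margin for the distance to be increased; the names, margin, and feature vectors are illustrative assumptions.
    ```python
    import numpy as np

    def occlusion_contrastive_loss(feat_au, feat_au_occ, feat_noau_occ, margin=1.0):
        """Pull the feature of an action-unit image toward the feature of its
        occluded counterpart, and push the occluded-AU feature away from the
        feature of an occluded image in which the AU does not occur."""
        d_pos = np.linalg.norm(feat_au - feat_au_occ)        # first distance: to be decreased
        d_neg = np.linalg.norm(feat_au_occ - feat_noau_occ)  # second distance: to be increased
        return d_pos + max(0.0, margin - d_neg)

    # Toy feature vectors (in practice these come from the model being trained).
    f_au, f_au_occ, f_noau_occ = np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([0.2, 0.8])
    print(occlusion_contrastive_loss(f_au, f_au_occ, f_noau_occ))
    ```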
  • Publication number: 20230237845
    Abstract: A computer-readable recording medium has stored a program that causes a computer to execute a process including: generating a trained model that includes performing machine learning of a 1st_model based on a 1st_output value that is obtained when a 1st_image is input to the 1st_model in response to input of training data containing a pair of a 1st_image and a 2nd_image and containing a 1st_label indicating which of the 1st_image and the 2nd_image has captured greater movement of muscles of facial expression of a photographic subject, a 2nd_output value obtained when the 2nd_image is input to a 2nd_model that has common parameters with the 1st_model, and the 1st_label; and generating a 3rd_model that includes performing machine learning based on a 3rd_output value obtained when a 3rd_image is input to the trained model, and a 2nd_label indicating movement of muscles of facial expression of a photographic subject captured in the 3rd_image.
    Type: Application
    Filed: March 9, 2023
    Publication date: July 27, 2023
    Applicant: FUJITSU LIMITED
    Inventors: Junya Saito, Akiyoshi Uchida, Kentaro Murase
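    The first stage in the abstract above resembles pairwise ranking with weight-sharing models; below is a minimal, assumed formulation (a RankNet-style logistic loss), not the published one. The second stage, fine-tuning on images with intensity labels, is not shown.
    ```python
    import numpy as np

    def pairwise_ranking_loss(score_1st, score_2nd, first_shows_greater_movement):
        """Two weight-sharing copies of the model score the paired images; the
        image with greater facial-muscle movement should receive the higher score."""
        y = 1.0 if first_shows_greater_movement else 0.0
        p = 1.0 / (1.0 + np.exp(score_2nd - score_1st))   # P(first ranked above second)
        eps = 1e-12
        return -(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))

    print(pairwise_ranking_loss(2.3, 0.7, True))   # small loss: ranking agrees with the label
    print(pairwise_ranking_loss(0.7, 2.3, True))   # large loss: ranking contradicts the label
    ```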
  • Publication number: 20230046705
    Abstract: A non-transitory computer-readable storage medium storing a determination program that causes at least one computer to execute a process, the process includes acquiring a group of captured images that includes images including a face to which markers are attached; selecting, from a plurality of patterns that indicates a transition of positions of the markers, a first pattern that corresponds to a time-series change in the positions of the markers included in consecutive images among the group of captured images; and determining occurrence intensity of an action based on a determination criterion of the action determined based on the first pattern and the positions of the markers included in a captured image included after the consecutive images among the group of captured images.
    Type: Application
    Filed: October 28, 2022
    Publication date: February 16, 2023
    Applicant: FUJITSU LIMITED
    Inventors: Junya Saito, Akiyoshi Uchida, Akihito Yoshii, Kiyonori Morioka, Kentaro Murase
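    A simplified sketch of the two steps in the abstract above, assuming nearest-template matching for the marker-motion pattern and a displacement-based criterion for intensity; the data, scale, and thresholds are illustrative.
    ```python
    import numpy as np

    def match_pattern(trajectory, patterns):
        """Pick the template marker-motion pattern closest (in Euclidean distance)
        to the observed time series of marker positions."""
        dists = [np.linalg.norm(np.asarray(trajectory) - np.asarray(p)) for p in patterns]
        return int(np.argmin(dists))

    def occurrence_intensity(marker_pos, neutral_pos, max_displacement):
        """Map the marker displacement in a later frame to a 0-5 intensity, using a
        criterion (max_displacement) associated with the matched pattern."""
        disp = np.linalg.norm(np.asarray(marker_pos) - np.asarray(neutral_pos))
        return 5.0 * min(disp / max_displacement, 1.0)

    trajectory = [[0, 0], [1, 2], [2, 4]]              # marker positions over consecutive frames
    patterns = [[[0, 0], [0, 0], [0, 0]],              # pattern 0: no motion
                [[0, 0], [1, 2], [2, 4]]]              # pattern 1: upward pull
    print(match_pattern(trajectory, patterns))          # -> 1
    print(occurrence_intensity([3, 6], [0, 0], max_displacement=10.0))
    ```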
  • Publication number: 20220415085
    Abstract: A non-transitory computer-readable recording medium stores a program that causes a computer to execute a process, the process includes inputting each of first images that includes a face of a subject to a first machine learning model to obtain a recognition result that includes information indicating first occurrence probability of each of facial expressions in each first image, generating training data that includes the recognition result and second images that are respectively generated based on the first images and in which at least a part of the face of the subject is concealed, and performing training of a second machine learning model, based on the training data, by using a loss function that represents an error that relates to a second occurrence probability of each facial expression in each second image and relates to magnitude relationship in the second occurrence probability among the second images.
    Type: Application
    Filed: April 12, 2022
    Publication date: December 29, 2022
    Applicant: FUJITSU LIMITED
    Inventors: Ryosuke Kawamura, Kentaro Murase
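    A hedged sketch of a loss that combines a per-image error against the teacher's occurrence probabilities with a pairwise term preserving their ordering across the batch, as the abstract suggests; the exact loss in the publication may differ, and the numbers are made up.
    ```python
    import numpy as np

    def distillation_ranking_loss(student_probs, teacher_probs):
        """Error of the student (trained on occluded images) against the teacher's
        probabilities, plus a hinge term penalizing pairs whose order is flipped."""
        s = np.asarray(student_probs, float)
        t = np.asarray(teacher_probs, float)
        per_image = np.mean((s - t) ** 2)
        rank, pairs = 0.0, 0
        for i in range(len(t)):
            for j in range(len(t)):
                if t[i] > t[j]:
                    rank += max(0.0, s[j] - s[i])   # penalize order violations
                    pairs += 1
        return per_image + (rank / pairs if pairs else 0.0)

    teacher = [0.9, 0.6, 0.1]        # teacher probabilities on the original images
    student = [0.7, 0.8, 0.2]        # student probabilities on the occluded versions
    print(distillation_ranking_loss(student, teacher))
    ```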
  • Publication number: 20210248357
    Abstract: An image processing method executed by a computer, the method includes detecting a plurality of feature points of a face from an input image, referring to importance information that indicates an importance of a region within an image in a process of detecting a predetermined facial motion from the image, selecting, from the plurality of feature points detected by the detecting, one or more points that correspond to an image region including an importance indicated by the importance information equal to or smaller than a first threshold value, correcting the input image by using the one or more points selected by the selecting, to generate a corrected image; and determining whether or not the predetermined facial motion is occurring in the input image, based on an output obtained by inputting the corrected image to a recognition model.
    Type: Application
    Filed: January 25, 2021
    Publication date: August 12, 2021
    Applicant: FUJITSU LIMITED
    Inventors: Ryosuke Kawamura, Kentaro Murase
  • Patent number: 11023474
    Abstract: An apparatus receives, via an input device, query input data including a word or a phrase, and acquires search result set data using the query input data. The apparatus acquires a value indicating a strength of a relationship between each impression word included in an impression word group and each word included in the query input data, and extracts a first feature word group from the impression word group according to the value indicating the strength of the relationship with each word. The apparatus displays the search result set data using the first feature word group as an item.
    Type: Grant
    Filed: October 11, 2018
    Date of Patent: June 1, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Jun Takahashi, Junya Saito, Akinori Taguchi, Takuya Kamimura, Kentaro Murase, Seiji Okura, Shinji Kikuchi, Akira Nakagawa, Toshiyuki Fukuoka
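    An illustrative sketch of the extraction step, assuming the feature word group is chosen by thresholding the relation strength to every query word; the example data, threshold, and function name are made up for illustration.
    ```python
    def extract_feature_words(query_words, strength, min_strength=0.5):
        """strength maps impression_word -> {query_word: relation strength}.
        Keep impression words strongly related to every word in the query."""
        return [imp for imp, rel in strength.items()
                if all(rel.get(q, 0.0) >= min_strength for q in query_words)]

    # Illustrative relation strengths (not taken from the patent).
    strength = {
        "cozy":   {"cafe": 0.8, "quiet": 0.7},
        "lively": {"cafe": 0.9, "quiet": 0.1},
    }
    items = extract_feature_words(["cafe", "quiet"], strength)
    print(items)   # ['cozy'] -- used as display items for the search result set
    ```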
  • Publication number: 20190114328
    Abstract: An apparatus receives, via an input device, query input data including a word or a phrase, and acquires search result set data using the query input data. The apparatus acquires a value indicating a strength of a relationship between each impression word included in an impression word group and each word included in the query input data, and extracts a first feature word group from the impression word group according to the value indicating the strength of the relationship with each word. The apparatus displays the search result set data using the first feature word group as an item.
    Type: Application
    Filed: October 11, 2018
    Publication date: April 18, 2019
    Applicant: FUJITSU LIMITED
    Inventors: Jun Takahashi, Junya Saito, Akinori Taguchi, Takuya Kamimura, Kentaro Murase, Seiji Okura, Shinji Kikuchi, Akira Nakagawa, Toshiyuki Fukuoka
  • Patent number: 10241570
    Abstract: A pointing support device detects a user's gaze position on a screen, sets the gaze position as an initial position of a pointer, displays a path starting from the initial position on the screen on the basis of path definition information, which defines the path along which the pointer moves and a movement pattern of the path, and moves the pointer along the path.
    Type: Grant
    Filed: October 24, 2016
    Date of Patent: March 26, 2019
    Assignee: FUJITSU LIMITED
    Inventors: Hiroshi Nakayama, Junichi Odagiri, Satoshi Nakashima, Kentaro Murase, Masakiyo Tanaka
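    A small sketch of one possible path definition (an outward spiral from the gaze position); the patent does not prescribe this particular path, so the shape and parameters are assumptions.
    ```python
    import math

    def spiral_path(start_x, start_y, steps=200, growth=0.5, turn=0.2):
        """One possible path definition: a spiral that starts at the detected gaze
        position and sweeps outward; the pointer is stepped along these points."""
        for i in range(steps):
            r, a = growth * i, turn * i
            yield start_x + r * math.cos(a), start_y + r * math.sin(a)

    # Gaze lands near (640, 360); the pointer then moves along the defined path.
    for x, y in spiral_path(640, 360, steps=5):
        print(round(x, 1), round(y, 1))
    ```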
  • Patent number: 10228905
    Abstract: A pointing support apparatus includes a memory, and a processor coupled to the memory and configured to detect a line-of-sight position of a user on a screen, extract a command included in a search range of the screen with reference to the line-of-sight position, generate a table in which the command and speech information of the command are associated with each other, and decide, when speech information from outside is accepted, a command based on comparison of the recognized speech information and the speech information of the table.
    Type: Grant
    Filed: February 16, 2017
    Date of Patent: March 12, 2019
    Assignee: FUJITSU LIMITED
    Inventors: Hiroshi Nakayama, Junichi Odagiri, Satoshi Nakashima, Kentaro Murase, Masakiyo Tanaka
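    A minimal sketch of the gaze-plus-speech selection described above, assuming the command's speech information is simply its spoken label; the names, data, and search-radius criterion are illustrative.
    ```python
    def commands_near_gaze(commands, gaze, radius=150):
        """commands maps a label to its on-screen (x, y). Keep commands inside the
        search range around the gaze and pair each with its speech information."""
        gx, gy = gaze
        return {label: label.lower()                   # speech info: spoken form of the label
                for label, (x, y) in commands.items()
                if (x - gx) ** 2 + (y - gy) ** 2 <= radius ** 2}

    def decide_command(recognized_speech, table):
        """Pick the command whose speech information matches the recognized speech."""
        for label, spoken in table.items():
            if recognized_speech.lower() == spoken:
                return label
        return None

    table = commands_near_gaze({"Save": (100, 90), "Print": (500, 400)}, gaze=(120, 100))
    print(decide_command("save", table))   # -> 'Save'
    ```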
  • Patent number: 10002610
    Abstract: A presentation supporting device extracts a first word from a character string included in each region divided from a page of a document file, and calculates a score, for each region in a currently displayed page, based on the first word and a second word acquired as a result of a sound recognition. When the highest of the scores is equal to or higher than a first threshold, the device calculates a distance between a first region in which a highlight display is currently executed and a second region in which the highest score is equal to or higher than the first threshold, executes a highlight display in the second region when a frequency corresponding to the distance between the first region and the second region is equal to or higher than a second threshold, and executes a highlight display in the first region when the second threshold is not reached.
    Type: Grant
    Filed: March 1, 2016
    Date of Patent: June 19, 2018
    Assignee: FUJITSU LIMITED
    Inventors: Masakiyo Tanaka, Jun Takahashi, Kentaro Murase
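    A compact sketch of the highlight decision, assuming word-overlap scores and a lookup table of jump-distance frequencies; both are stand-ins for the scoring and frequency model implied by the abstract, and the example regions are invented.
    ```python
    def region_score(region_words, speech_words):
        """Score a region by how many recognized speech words appear in it."""
        return len(set(region_words) & set(speech_words))

    def next_highlight(regions, speech_words, current, jump_freq, score_th=2, freq_th=0.3):
        """regions maps a region name to (words, vertical position). Move the highlight
        to the best-scoring region only if a jump of that distance is frequent enough;
        otherwise keep highlighting the current region."""
        best = max(regions, key=lambda r: region_score(regions[r][0], speech_words))
        if region_score(regions[best][0], speech_words) < score_th:
            return current
        distance = abs(regions[best][1] - regions[current][1])
        return best if jump_freq.get(distance, 0.0) >= freq_th else current

    regions = {"intro": (["goal", "scope"], 0), "results": (["accuracy", "latency"], 2)}
    print(next_highlight(regions, ["accuracy", "latency"], "intro", jump_freq={2: 0.6}))
    ```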
  • Publication number: 20170249124
    Abstract: A pointing support apparatus includes a memory, and a processor coupled to the memory and configured to detect a line-of-sight position of a user on a screen, extract a command included in a search range of the screen with reference to the line-of-sight position, generate a table in which the command and speech information of the command are associated with each other, and decide, when speech information from outside is accepted, a command based on comparison of the recognized speech information and the speech information of the table.
    Type: Application
    Filed: February 16, 2017
    Publication date: August 31, 2017
    Applicant: FUJITSU LIMITED
    Inventors: Hiroshi Nakayama, Junichi Odagiri, Satoshi Nakashima, Kentaro Murase, Masakiyo Tanaka
  • Publication number: 20170139477
    Abstract: A pointing support device detects a user's gaze position on a screen, sets the gaze position as an initial position of a pointer, displays a path starting from the initial position on the screen on the basis of path definition information, which defines the path along which the pointer moves and a movement pattern of the path, and moves the pointer along the path.
    Type: Application
    Filed: October 24, 2016
    Publication date: May 18, 2017
    Applicant: FUJITSU LIMITED
    Inventors: Hiroshi Nakayama, Junichi Odagiri, Satoshi Nakashima, Kentaro Murase, Masakiyo Tanaka
  • Publication number: 20160275050
    Abstract: A presentation supporting device extracts a first word from a character string included in each region divided from a page of a document file, and calculates a score, for each region in a currently displayed page, based on the first word and a second word acquired as a result of a sound recognition. When the highest of the scores is equal to or higher than a first threshold, the device calculates a distance between a first region in which a highlight display is currently executed and a second region in which the highest score is equal to or higher than the first threshold, executes a highlight display in the second region when a frequency corresponding to the distance between the first region and the second region is equal to or higher than a second threshold, and executes a highlight display in the first region when the second threshold is not reached.
    Type: Application
    Filed: March 1, 2016
    Publication date: September 22, 2016
    Applicant: FUJITSU LIMITED
    Inventors: Masakiyo Tanaka, Jun Takahashi, Kentaro Murase
  • Patent number: 8504368
    Abstract: A synthetic speech text-input device is provided that allows a user to intuitively know an amount of an input text that can be fit in a desired duration. A synthetic speech text-input device 1 includes: an input unit that receives a set duration in which a speech to be synthesized is to be fit, and a text for a synthetic speech; a text amount calculation unit that calculates an acceptable text amount based on the set duration received by the input unit, the acceptable text amount being an amount of a text acceptable as a synthetic speech of the set duration; and a text amount output unit that outputs the acceptable text amount calculated by the text amount calculation unit, when the input unit receives the text.
    Type: Grant
    Filed: September 10, 2010
    Date of Patent: August 6, 2013
    Assignee: Fujitsu Limited
    Inventors: Nobuyuki Katae, Kentaro Murase
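    The core calculation reduces to the set duration times an assumed synthesis rate; a toy sketch (the rate of 7.5 characters per second and the sample text are illustrative, not from the patent):
    ```python
    def acceptable_text_amount(set_duration_sec, chars_per_sec=7.5):
        """Amount of text (in characters) that fits a synthetic speech of the set
        duration; chars_per_sec is an assumed average synthesis rate."""
        return int(set_duration_sec * chars_per_sec)

    text = "This announcement plays while the doors are closing."
    limit = acceptable_text_amount(5)               # user sets a 5-second slot
    print(limit, len(text), "fits" if len(text) <= limit else "too long")
    ```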
  • Patent number: 8433573
    Abstract: A prosody modification device includes: a real voice prosody input part that receives real voice prosody information extracted from an utterance of a human; a regular prosody generating part that generates regular prosody information having a regular phoneme boundary that determines a boundary between phonemes and a regular phoneme length of a phoneme by using data representing a regular or statistical phoneme length in an utterance of a human with respect to a section including at least a phoneme or a phoneme string to be modified in the real voice prosody information; and a real voice prosody modification part that resets a real voice phoneme boundary by using the generated regular prosody information so that the real voice phoneme boundary and a real voice phoneme length of the phoneme or the phoneme string to be modified in the real voice prosody information are approximate to an actual phoneme boundary and an actual phoneme length of the utterance of the human, thereby modifying the real voice prosody information.
    Type: Grant
    Filed: February 11, 2008
    Date of Patent: April 30, 2013
    Assignee: Fujitsu Limited
    Inventors: Kentaro Murase, Nobuyuki Katae
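    A simplified sketch of blending real-voice phoneme lengths toward regular (statistical) lengths and rebuilding the phoneme boundaries; the blend factor, target indices, and data are assumptions, not the patented procedure.
    ```python
    def modify_phoneme_boundaries(real_lengths, regular_lengths, targets, alpha=0.5):
        """Blend the lengths of the target phonemes toward their regular (statistical)
        lengths, then rebuild the phoneme boundaries from the modified lengths."""
        lengths = list(real_lengths)
        for i in targets:
            lengths[i] = (1.0 - alpha) * lengths[i] + alpha * regular_lengths[i]
        boundaries, t = [], 0.0
        for d in lengths:
            t += d
            boundaries.append(t)
        return lengths, boundaries

    real = [80.0, 40.0, 150.0]       # phoneme lengths (ms) extracted from the utterance
    regular = [80.0, 90.0, 100.0]    # regular/statistical lengths for the same phonemes
    print(modify_phoneme_boundaries(real, regular, targets=[1, 2]))
    ```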
  • Patent number: 7925142
    Abstract: An information presenting apparatus, based on display status on a screen, estimates text blocks that a user is likely to classify as poor-legibility blocks and is likely to wish to have read aloud. A simple device that allows selecting from the text blocks is provided. A poor-legibility-block extractor divides text to be displayed on the screen into blocks corresponding to individual text blocks, classifies blocks including characters of sizes less than or equal to a predetermined size as poor-legibility blocks, and assigns block numbers to the poor-legibility blocks. A document display unit displays areas of the respective poor-legibility blocks as distinguished from other areas, with block numbers assigned to the respective poor-legibility blocks. When the user presses a numeric key corresponding to a block number, text in the corresponding block is read aloud.
    Type: Grant
    Filed: July 20, 2005
    Date of Patent: April 12, 2011
    Assignee: Fujitsu Limited
    Inventors: Kentaro Murase, Kazuhiro Watanabe
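    A toy sketch of numbering poor-legibility blocks by character size and picking the block to read aloud with a numeric key; the size threshold and sample blocks are illustrative.
    ```python
    def number_poor_legibility_blocks(blocks, max_font_px=12):
        """blocks is a list of (text, font size in px). Assign numbers to the blocks
        whose characters are at or below the size threshold so the user can pick one
        with a numeric key and have it read aloud."""
        numbered, n = {}, 1
        for text, font_px in blocks:
            if font_px <= max_font_px:
                numbered[n] = text
                n += 1
        return numbered

    blocks = [("Terms and conditions ...", 8), ("Welcome!", 24), ("Copyright notice", 9)]
    numbered = number_poor_legibility_blocks(blocks)
    print(numbered[2])   # pressing the "2" key -> this text would be read aloud
    ```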
  • Publication number: 20110060590
    Abstract: A synthetic speech text-input device is provided that allows a user to intuitively know an amount of an input text that can be fit in a desired duration. A synthetic speech text-input device 1 includes: an input unit that receives a set duration in which a speech to be synthesized is to be fit, and a text for a synthetic speech; a text amount calculation unit that calculates an acceptable text amount based on the set duration received by the input unit, the acceptable text amount being an amount of a text acceptable as a synthetic speech of the set duration; and a text amount output unit that outputs the acceptable text amount calculated by the text amount calculation unit, when the input unit receives the text.
    Type: Application
    Filed: September 10, 2010
    Publication date: March 10, 2011
    Applicant: FUJITSU LIMITED
    Inventors: Nobuyuki Katae, Kentaro Murase
  • Publication number: 20080319755
    Abstract: According to an aspect of an embodiment, an apparatus for converting text data into a sound signal comprises: a phoneme determiner for determining phoneme data corresponding to a plurality of phonemes and pause data corresponding to a plurality of pauses to be inserted among a series of phonemes in the text data to be converted into a sound signal; a phoneme length adjuster for modifying the phoneme data and the pause data by determining lengths of the phonemes, respectively, in accordance with a speed of the sound signal and selectively adjusting the length of at least one of the phonemes which is placed immediately after one of the pauses so that the at least one of the phonemes is relatively extended timewise as compared to other phonemes; and an output unit for outputting a sound signal on the basis of the phoneme data and pause data adjusted by the phoneme length adjuster.
    Type: Application
    Filed: June 24, 2008
    Publication date: December 25, 2008
    Applicant: FUJITSU LIMITED
    Inventors: Rika Nishiike, Hitoshi Sasaki, Nobuyuki Katae, Kentaro Murase, Takuya Noda
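    A minimal sketch of the length adjustment described above, assuming a fixed stretch factor for the phoneme immediately after a pause; the factor, unit format, and numbers are assumptions, not the published method.
    ```python
    def adjust_phoneme_lengths(units, speed=1.0, post_pause_stretch=1.3):
        """units is a list of ('phoneme', ms) or ('pause', ms). Phoneme lengths are
        scaled by the speaking speed, and the phoneme immediately after each pause
        is extended relative to the others."""
        out, after_pause = [], False
        for kind, ms in units:
            if kind == "pause":
                out.append((kind, ms))
                after_pause = True
            else:
                length = ms / speed
                if after_pause:
                    length *= post_pause_stretch
                    after_pause = False
                out.append((kind, round(length, 1)))
        return out

    print(adjust_phoneme_lengths([("phoneme", 80), ("pause", 200), ("phoneme", 80)], speed=1.25))
    ```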
  • Publication number: 20080235025
    Abstract: A prosody modification device includes: a real voice prosody input part that receives real voice prosody information extracted from an utterance of a human; a regular prosody generating part that generates regular prosody information having a regular phoneme boundary that determines a boundary between phonemes and a regular phoneme length of a phoneme by using data representing a regular or statistical phoneme length in an utterance of a human with respect to a section including at least a phoneme or a phoneme string to be modified in the real voice prosody information; and a real voice prosody modification part that resets a real voice phoneme boundary by using the generated regular prosody information so that the real voice phoneme boundary and a real voice phoneme length of the phoneme or the phoneme string to be modified in the real voice prosody information are approximate to an actual phoneme boundary and an actual phoneme length of the utterance of the human, thereby modifying the real voice prosody information.
    Type: Application
    Filed: February 11, 2008
    Publication date: September 25, 2008
    Applicant: FUJITSU LIMITED
    Inventors: Kentaro Murase, Nobuyuki Katae