Patents by Inventor Makoto Terao

Makoto Terao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220254136
    Abstract: An image acquisition unit 110 acquires a plurality of images. The plurality of images include an object to be inferred. An image cut-out unit 120 cuts out an object region including the object from each of the plurality of images acquired by the image acquisition unit 110. An importance generation unit 130 generates importance information by processing the object region cut out by the image cut-out unit 120. The importance information indicates the importance of the object region when an object inference model is generated, and is generated for each object region, that is, for each image acquired by the image acquisition unit 110. A learning data generation unit 140 stores a plurality of object regions cut out by the image cut-out unit 120 and a plurality of pieces of importance information generated by the importance generation unit 130 in a learning data storage unit 150 as at least a part of the learning data.
    Type: Application
    Filed: January 28, 2022
    Publication date: August 11, 2022
    Applicant: NEC Corporation
    Inventors: Tomokazu KANEKO, Katsuhiko TAKAHASHI, Makoto TERAO, Soma SHIRAISHI, Takami SATO, Yu NABETO, Ryosuke SAKAI
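The abstract above describes cutting object regions out of images and attaching a per-region importance score that controls how much each region contributes when the inference model is trained. As a rough illustration of that idea only (this is not the patented implementation; the function names, the cropping convention, and the normalized-weighting scheme are all assumptions), an importance score can simply scale each region's contribution to a training loss:

```python
import numpy as np

def cut_out_region(image, bbox):
    """Crop an object region (x0, y0, x1, y1) out of an image array."""
    x0, y0, x1, y1 = bbox
    return image[y0:y1, x0:x1]

def importance_weighted_loss(per_region_losses, importances):
    """Weight each cut-out region's training loss by its importance score."""
    w = np.asarray(importances, dtype=float)
    w = w / w.sum()  # normalize so the weights sum to 1
    return float(np.dot(w, per_region_losses))

# Two cropped regions with per-region losses and importance scores
losses = [0.8, 0.2]
importances = [3.0, 1.0]
print(importance_weighted_loss(losses, importances))  # 0.65
```

In this sketch a region deemed three times as important pulls the aggregate loss toward its own value, which is the effect the importance information in the abstract would have on model training.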
  • Patent number: 11402836
    Abstract: A server device efficiently acquires information about the surrounding area of a movable body for updating map information while suppressing communication load. Condition information indicating a condition of autonomous driving is received from a first movable body capable of autonomous driving based on a state of its surrounding area and a map. A request for state information is transmitted to a second movable body capable of transmitting state information indicating the state of a place where the first movable body has moved; when the received condition information indicates that the autonomous driving has been possible, transmission of the request is prevented.

    Type: Grant
    Filed: February 26, 2018
    Date of Patent: August 2, 2022
    Assignee: PIONEER CORPORATION
    Inventors: Itaru Takemura, Hiroshi Nagata, Makoto Matsumaru, Kyoichi Terao, Akira Shimizu
  • Publication number: 20220198783
    Abstract: The learning device 10D is trained to extract a moving image feature amount Fm, which is a feature amount relating to the moving image data Dm, when the moving image data Dm is inputted thereto, and is trained to extract a still image feature amount Fs, which is a feature amount relating to the still image data Ds, when the still image data Ds is inputted thereto. The first inference unit 32D performs a first inference regarding the moving image data Dm based on the moving image feature amount Fm. The second inference unit 34D performs a second inference regarding the still image data Ds based on the still image feature amount Fs. The learning unit 36D performs learning of the feature extraction unit 31D based on the results of the first inference and the second inference.
    Type: Application
    Filed: May 29, 2019
    Publication date: June 23, 2022
    Applicant: NEC Corporation
    Inventors: Shuhei YOSHIDA, Makoto TERAO
  • Publication number: 20200342215
    Abstract: A model learning device provided with: an error-added movement locus generation unit that generates error-added movement locus data by adding an error to movement locus data for action learning, which represents the movement locus of a subject and is assigned an action label (information representing the action of the subject); and an action recognition model learning unit that learns, using at least the error-added movement locus data and learning data created on the basis of the action label, a model by which the action of a subject can be recognized from the movement locus of the subject. This makes it possible to provide a model that recognizes the action of a subject with high accuracy on the basis of a movement locus estimated using a camera image.
    Type: Application
    Filed: December 5, 2018
    Publication date: October 29, 2020
    Applicant: NEC Corporation
    Inventor: Makoto TERAO
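The entry above builds robustness by deliberately adding error to clean movement loci before training, so the model sees trajectories that look like noisy camera-based estimates. A minimal sketch of that augmentation step (the function name, the Gaussian noise model, and the parameters are assumptions for illustration, not the patent's method):

```python
import numpy as np

def add_locus_error(locus, sigma=0.1, seed=0):
    """Add Gaussian positional error to a movement locus (T x 2 array),
    simulating the noise of camera-based trajectory estimation."""
    rng = np.random.default_rng(seed)
    locus = np.asarray(locus, dtype=float)
    return locus + rng.normal(0.0, sigma, size=locus.shape)

clean = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])  # a straight walk
noisy = add_locus_error(clean, sigma=0.05)
# (noisy locus, original action label) pairs would then extend the training set
print(noisy.shape)  # (3, 2)
```

Pairing each noisy locus with the original action label gives the error-added learning data the abstract describes.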
  • Patent number: 10083686
    Abstract: An analysis object determination device includes a detection unit which detects a plurality of specific utterance sections using data related to a voice in a conversation, the specific utterance sections representing a plurality of specific events originating from one or a plurality of participants in the conversation, or a specific event originating from one of the conversation participants, and an object determination unit which determines, on the basis of the plurality of specific utterance sections detected by the detection unit, one or more cause analysis sections for the specific event originating from the conversation participant, the number of the cause analysis sections being fewer than the number of the plurality of specific utterance sections.
    Type: Grant
    Filed: September 19, 2013
    Date of Patent: September 25, 2018
    Assignee: NEC CORPORATION
    Inventors: Koji Okabe, Yoshifumi Onishi, Makoto Terao, Masahiro Tani
  • Patent number: 9875236
    Abstract: An analysis subject determination device includes: a demand period detection unit which detects, from data corresponding to audio of a dissatisfaction conversation, a demand utterance period which represents a demand utterance of a first conversation party among a plurality of conversation parties which are carrying out the dissatisfaction conversation; a negation period detection unit which detects, from the data, a negation utterance period which represents a negation utterance of a second conversation party which differs from the first conversation party; and a subject determination unit which, from the data, determines a period with a time obtained from the demand utterance period as a start point and a time obtained from the negation utterance period after the demand utterance period as an end point to be an analysis subject period of a cause of dissatisfaction of the first conversation party in the dissatisfaction conversation.
    Type: Grant
    Filed: March 27, 2014
    Date of Patent: January 23, 2018
    Assignee: NEC CORPORATION
    Inventors: Koji Okabe, Yoshifumi Onishi, Makoto Terao, Masahiro Tani
  • Publication number: 20170364854
    Abstract: The purpose of the present invention is to provide a technology which is capable of appropriately evaluating a person's conduct with respect to another person. Provided is an information processing device, comprising a recognition unit 11, a detection unit 12, and an evaluation unit 13. The recognition unit 11 recognizes an evaluation subject's conduct. The detection unit 12 detects a trigger, which is a state of a person other than the evaluation subject that triggers the evaluation subject's conduct. Using the detected trigger and the result of recognition by the recognition unit 11 relating to the evaluation subject's conduct, the evaluation unit 13 evaluates the evaluation subject's conduct.
    Type: Application
    Filed: December 2, 2015
    Publication date: December 21, 2017
    Inventors: Terumi UMEMATSU, Ryosuke ISOTANI, Yoshifumi ONISHI, Masanori TSUJIKAWA, Makoto TERAO, Tasuku KITADE, Shuji KOMEIJI
  • Publication number: 20160275968
    Abstract: A speech detection device according to the present invention acquires an acoustic signal, calculates a feature value representing a spectrum shape for a plurality of first frames from the acoustic signal, calculates a ratio of a likelihood of a voice model to a likelihood of a non-voice model for the first frames using the feature value, determines a candidate target voice section that is a section including target voice by use of the likelihood ratio, calculates a posterior probability of a plurality of phonemes using the feature value, calculates at least one of entropy and time difference of posterior probabilities of the plurality of phonemes for the first frames, and specifies, out of the candidate target voice sections, a section to be changed to a section not including the target voice, by use of at least one of the entropy and the time difference of the posterior probabilities.
    Type: Application
    Filed: May 8, 2014
    Publication date: September 22, 2016
    Inventors: Makoto TERAO, Masanori TSUJIKAWA
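The entropy cue in the abstract above exploits the fact that during real speech one phoneme tends to dominate the posterior distribution, while in noise the distribution is flat. A small sketch of that check (illustrative only; the example distributions and threshold logic are assumptions, not the patent's models):

```python
import math

def posterior_entropy(posteriors):
    """Shannon entropy of a phoneme posterior distribution for one frame.
    Low entropy suggests one clearly recognized phoneme (likely speech);
    high entropy suggests noise with no dominant phoneme."""
    return -sum(p * math.log(p) for p in posteriors if p > 0.0)

speech_frame = [0.9, 0.05, 0.03, 0.02]   # one phoneme dominates
noise_frame  = [0.25, 0.25, 0.25, 0.25]  # uniform: maximally uncertain
print(posterior_entropy(speech_frame) < posterior_entropy(noise_frame))  # True
```

Frames inside a candidate section whose entropy stays high could then be reclassified as not containing the target voice, which is the pruning step the abstract describes.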
  • Publication number: 20160267924
    Abstract: A speech detection device according to the present invention acquires an acoustic signal, calculates a sound level for first frames in the acoustic signal, determines the first frame having the sound level greater than or equal to a first threshold value as a first target frame, calculates a feature value representing a spectrum shape for second frames in the acoustic signal, calculates a ratio of a likelihood of a voice model to a likelihood of a non-voice model for the second frames with the feature value as an input, determines the second frame having the likelihood ratio greater than or equal to a second threshold value as a second target frame, and determines a section included in both a first target section corresponding to the first target frame and a second target section corresponding to the second target frame as a target voice section including the target voice.
    Type: Application
    Filed: May 8, 2014
    Publication date: September 15, 2016
    Applicant: NEC Corporation
    Inventors: Makoto TERAO, Masanori TSUJIKAWA
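This second speech-detection entry combines two independent per-frame tests, a sound-level threshold and a voice/non-voice likelihood-ratio threshold, and keeps only the frames that pass both. A minimal sketch of that intersection (the frame scores, thresholds, and set-based representation are assumptions for illustration):

```python
def frames_above(values, threshold):
    """Indices of frames whose score meets the threshold."""
    return {i for i, v in enumerate(values) if v >= threshold}

# Per-frame sound levels and voice/non-voice log-likelihood ratios
levels = [0.1, 0.6, 0.7, 0.8, 0.2]
llrs   = [2.0, 1.5, 2.5, 0.1, 0.3]

first_target = frames_above(levels, 0.5)    # loud enough (first target section)
second_target = frames_above(llrs, 1.0)     # voice-like spectrum (second target section)
target_voice = sorted(first_target & second_target)
print(target_voice)  # [1, 2]
```

Frame 3 is loud but not voice-like, and frame 0 is voice-like but quiet; only frames passing both tests form the target voice section, mirroring the abstract's two-stage decision.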
  • Publication number: 20160203121
    Abstract: An analysis subject determination device includes: a demand period detection unit which detects, from data corresponding to audio of a dissatisfaction conversation, a demand utterance period which represents a demand utterance of a first conversation party among a plurality of conversation parties which are carrying out the dissatisfaction conversation; a negation period detection unit which detects, from the data, a negation utterance period which represents a negation utterance of a second conversation party which differs from the first conversation party; and a subject determination unit which, from the data, determines a period with a time obtained from the demand utterance period as a start point and a time obtained from the negation utterance period after the demand utterance period as an end point to be an analysis subject period of a cause of dissatisfaction of the first conversation party in the dissatisfaction conversation.
    Type: Application
    Filed: March 27, 2014
    Publication date: July 14, 2016
    Applicant: NEC Corporation
    Inventors: Koji OKABE, Yoshifumi ONISHI, Makoto TERAO, Masahiro TANI
  • Patent number: 9336769
    Abstract: An apparatus that calculates a confidence measure of a target word string specified in a recognition result includes: an alternative candidate generator which generates an alternative candidate word string in the position of the target word string; a classifier training unit which trains a classifier which is configured to discriminate between the target word string and the alternative candidate word string; a feature extractor which extracts a feature value representing an adjacent context in the position of the target word string; and a confidence measure calculator which determines whether the true word string in the position of the target word string is the target word string or the alternative candidate word string by using the classifier and the feature value, and calculates a confidence measure of the target word string on the basis of the determination result.
    Type: Grant
    Filed: March 1, 2012
    Date of Patent: May 10, 2016
    Assignees: NEC CORPORATION, THE UNIVERSITY OF WASHINGTON
    Inventors: Makoto Terao, Mari Ostendorf
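The invention above scores a recognized word string by asking a trained classifier whether the adjacent context looks more like the target or like an alternative candidate. The patent trains a real classifier; the sketch below substitutes a much simpler count-based stand-in (all names, the add-one smoothing, and the example word pair are assumptions) just to show how context evidence can be turned into a confidence score:

```python
from collections import Counter

def train_context_counts(contexts):
    """Count adjacent-context words observed with a candidate word string."""
    return Counter(w for ctx in contexts for w in ctx)

def confidence(target_counts, alt_counts, context):
    """Score in (0, 1): how strongly the observed context favors the target
    word string over the alternative candidate (add-one smoothing)."""
    t = sum(target_counts[w] + 1 for w in context)
    a = sum(alt_counts[w] + 1 for w in context)
    return t / (t + a)

# Contexts where "their" (target) vs. "there" (alternative) was the true word
target_ctx = [["raised", "hands"], ["in", "hands"]]
alt_ctx = [["over", "is"], ["is", "a"]]
tc, ac = train_context_counts(target_ctx), train_context_counts(alt_ctx)
print(confidence(tc, ac, ["raised", "hands"]) > 0.5)  # True
```

A context that resembles those seen with the target word string pushes the score above 0.5; a discriminatively trained classifier, as in the patent, plays the same role with richer features.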
  • Publication number: 20150310877
    Abstract: This conversation analysis device comprises: a change detection unit that detects, for each of a plurality of conversation participants, each of a plurality of prescribed change patterns for emotional states, on the basis of data corresponding to voices in a target conversation; an identification unit that identifies, from among the plurality of prescribed change patterns detected by the change detection unit, a beginning combination and an ending combination, which are prescribed combinations of the prescribed change patterns that satisfy prescribed position conditions between the plurality of conversation participants; and an interval determination unit that determines specific emotional intervals, which have a start time and an end time and represent specific emotions of the conversation participants of the target conversation, by determining a start time and an end time on the basis of each time position in the target conversation pertaining to the beginning combination and the ending combination identified by the identification unit.
    Type: Application
    Filed: August 21, 2013
    Publication date: October 29, 2015
    Applicant: NEC Corporation
    Inventors: Yoshifumi ONISHI, Makoto TERAO, Masahiro TANI, Koji OKABE
  • Publication number: 20150287402
    Abstract: An analysis object determination device includes a detection unit which detects a plurality of specific utterance sections using data related to a voice in a conversation, the specific utterance sections representing a plurality of specific events originating from one or a plurality of participants in the conversation, or a specific event originating from one of the conversation participants, and an object determination unit which determines, on the basis of the plurality of specific utterance sections detected by the detection unit, one or more cause analysis sections for the specific event originating from the conversation participant, the number of the cause analysis sections being fewer than the number of the plurality of specific utterance sections.
    Type: Application
    Filed: September 19, 2013
    Publication date: October 8, 2015
    Inventors: Koji Okabe, Yoshifumi Onishi, Makoto Terao, Masahiro Tani
  • Publication number: 20150278194
    Abstract: An information processing device according to the present invention includes: a global context extraction unit which identifies a word, a character, or a word string included in data as a specific word, and extracts a set of words included in at least a predetermined range extending from the specific word as a global context; a context classification unit which classifies the global context based on a predetermined viewpoint, and outputs a result of classification; and a language model generation unit which generates a language model for calculating a generation probability of the specific word by using the result of the classification.
    Type: Application
    Filed: November 7, 2013
    Publication date: October 1, 2015
    Applicant: NEC Corporation
    Inventors: Makoto Terao, Takafumi Koshinaka
  • Publication number: 20150279391
    Abstract: This dissatisfying conversation determination device includes: a data acquisition unit that acquires a plurality of word data and a plurality of phonation time data for the target conversation participants; an extraction unit that extracts, from the plurality of word data, a plurality of specific word data constituting polite expressions and impolite expressions; a change detection unit that detects a point of change from polite expression to impolite expression by the target conversation participants based on the plurality of specific word data and the plurality of phonation time data; and a dissatisfaction determination unit that determines whether the target conversation is a dissatisfying conversation for the target conversation participants based on the point of change detected by the change detection unit.
    Type: Application
    Filed: August 21, 2013
    Publication date: October 1, 2015
    Applicant: NEC Corporation
    Inventors: Yoshifumi Onishi, Makoto Terao, Masahiro Tani, Koji Okabe
  • Publication number: 20150262574
    Abstract: An expression classification device includes: a segment detection unit that detects a specific expression segment that includes a specific expression that can be used in a plurality of nuances from data corresponding to a voice of a conversation; a feature extraction unit that extracts feature information that includes at least one of a prosody feature and an utterance timing feature with regard to the specific expression segment that is detected by the segment detection unit; and a classification unit that classifies the specific expression included in the specific expression segment based on a nuance corresponding to a use situation in the conversation by using the feature information extracted by the feature extraction unit.
    Type: Application
    Filed: September 19, 2013
    Publication date: September 17, 2015
    Applicant: NEC Corporation
    Inventors: Makoto TERAO, Yoshifumi ONISHI, Koji OKABE, Masahiro TANI
  • Patent number: 9053751
    Abstract: A sound segment sorting unit (103) sorts the sound segments of a video. An image segment sorting unit (104) sorts the image segments of the video. A multiple sorting result generation unit (105) generates a plurality of sound segment sorting results and/or a plurality of image segment sorting results. A sorting result pair generation unit (106) generates a plurality of sorting result pairs of the sorting results as the candidates of the optimum segment sorting result of the video. A sorting result output unit (108) compares the sorting result comparative scores of the sorting result pairs calculated by a sorting result comparative score calculation unit (107) and thus outputs a sound segment sorting result and an image segment sorting result having good correspondence. This makes it possible to accurately sort, for each object, a plurality of sound segments and a plurality of image segments contained in the video without adjusting parameters in advance.
    Type: Grant
    Filed: November 5, 2010
    Date of Patent: June 9, 2015
    Assignee: NEC CORPORATION
    Inventors: Makoto Terao, Takafumi Koshinaka
  • Publication number: 20140195238
    Abstract: An apparatus that calculates a confidence measure of a target word string specified in a recognition result includes: an alternative candidate generator which generates an alternative candidate word string in the position of the target word string; a classifier training unit which trains a classifier which is configured to discriminate between the target word string and the alternative candidate word string; a feature extractor which extracts a feature value representing an adjacent context in the position of the target word string; and a confidence measure calculator which determines whether the true word string in the position of the target word string is the target word string or the alternative candidate word string by using the classifier and the feature value, and calculates a confidence measure of the target word string on the basis of the determination result.
    Type: Application
    Filed: March 1, 2012
    Publication date: July 10, 2014
    Applicants: UNIVERSITY OF WASHINGTON THROUGH ITS CENTER FOR COMMERCIALIZATION, NEC CORPORATION
    Inventors: Makoto Terao, Mari Ostendorf
  • Patent number: 8422787
    Abstract: There is provided an apparatus including a model-based topic segmentation section that segments a text using a topic model representing semantic coherence, a parameter estimation section that estimates a control parameter used in segmenting the text based on detection of a change point of word distribution in the text, using the result of segmentation by the model-based topic segmentation section as training data, and a change point detection topic segmentation section that segments the text, based on detection of the change point of word distribution in the text, using the parameter estimated by the parameter estimation section.
    Type: Grant
    Filed: December 25, 2008
    Date of Patent: April 16, 2013
    Assignee: NEC Corporation
    Inventors: Makoto Terao, Takafumi Koshinaka
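The change-point detection in the abstract above rests on the observation that the word distribution shifts sharply at a topic boundary. As a rough illustration (not the patent's detector; the divergence measure, smoothing constant, and toy word windows are assumptions), a symmetric divergence between adjacent word-count windows peaks where the topic changes:

```python
from collections import Counter
import math

def word_dist(words):
    """Relative-frequency distribution of a window of words."""
    c = Counter(words)
    n = len(words)
    return {w: c[w] / n for w in c}

def js_like_divergence(p, q):
    """Symmetric Jensen-Shannon-style divergence between two word
    distributions, with a small epsilon for unseen words."""
    vocab = set(p) | set(q)
    eps = 1e-9
    d = 0.0
    for w in vocab:
        pw, qw = p.get(w, eps), q.get(w, eps)
        m = 0.5 * (pw + qw)
        d += 0.5 * pw * math.log(pw / m) + 0.5 * qw * math.log(qw / m)
    return d

# Divergence between adjacent windows peaks at a topic boundary
left = ["game", "score", "team", "score"]   # sports topic
right = ["rain", "cloud", "rain", "wind"]   # weather topic
same = ["game", "team", "score", "game"]    # sports again
print(js_like_divergence(word_dist(left), word_dist(right)) >
      js_like_divergence(word_dist(left), word_dist(same)))  # True
```

The threshold at which such a divergence counts as a boundary is exactly the kind of control parameter the patent tunes from the topic-model segmentation used as training data.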
  • Publication number: 20120233168
    Abstract: A sound segment sorting unit (103) sorts the sound segments of a video. An image segment sorting unit (104) sorts the image segments of the video. A multiple sorting result generation unit (105) generates a plurality of sound segment sorting results and/or a plurality of image segment sorting results. A sorting result pair generation unit (106) generates a plurality of sorting result pairs of the sorting results as the candidates of the optimum segment sorting result of the video. A sorting result output unit (108) compares the sorting result comparative scores of the sorting result pairs calculated by a sorting result comparative score calculation unit (107) and thus outputs a sound segment sorting result and an image segment sorting result having good correspondence. This makes it possible to accurately sort, for each object, a plurality of sound segments and a plurality of image segments contained in the video without adjusting parameters in advance.
    Type: Application
    Filed: November 5, 2010
    Publication date: September 13, 2012
    Applicant: NEC CORPORATION
    Inventors: Makoto Terao, Takafumi Koshinaka