Patents by Inventor Makoto Terao

Makoto Terao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240067763
    Abstract: The solid titanium catalyst component (I) of the present invention contains titanium, magnesium, halogen, and a cyclic multiple-ester-group-containing compound (a) represented by formula (1).
    Type: Application
    Filed: August 26, 2021
    Publication date: February 29, 2024
    Applicant: MITSUI CHEMICALS, INC.
    Inventors: Takashi KIMURA, Makoto ISOGAI, Yasushi NAKAYAMA, Kenji MICHIUE, Takashi JINNAI, Wataru YAMADA, Shotaro TAKANO, Hiroshi TERAO, Takaaki YANO, Yoshiyuki TOTANI, Sunil Krzysztof MOORTHI, Takashi NAKANO
  • Publication number: 20240067764
    Abstract: A solid titanium catalyst component (I) for olefin polymer production contains titanium, magnesium, halogen, and a cyclic multiple-ester-group-containing compound (a) represented by formula (1). Preferably, a propylene polymer obtained by the olefin polymerization method has specific thermal properties as determined primarily by differential scanning calorimetry (DSC).
    Type: Application
    Filed: December 21, 2021
    Publication date: February 29, 2024
    Applicant: MITSUI CHEMICALS, INC.
    Inventors: Takashi KIMURA, Makoto ISOGAI, Yasushi NAKAYAMA, Kenji MICHIUE, Takashi JINNAI, Wataru YAMADA, Shotaro TAKANO, Hiroshi TERAO, Takaaki YANO, Yoshiyuki TOTANI, Sunil Krzysztof MOORTHI, Takashi NAKANO
  • Patent number: 11908177
    Abstract: The learning device 10D is trained to extract a moving-image feature amount Fm, a feature amount relating to the moving image data Dm, when the moving image data Dm is input, and to extract a still-image feature amount Fs, a feature amount relating to the still image data Ds, when the still image data Ds is input. The first inference unit 32D performs a first inference regarding the moving image data Dm based on the moving-image feature amount Fm. The second inference unit 34D performs a second inference regarding the still image data Ds based on the still-image feature amount Fs. The learning unit 36D trains the feature extraction unit 31D based on the results of the first inference and the second inference (see the sketch after this entry).
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: February 20, 2024
    Assignee: NEC CORPORATION
    Inventors: Shuhei Yoshida, Makoto Terao
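    A minimal PyTorch sketch of the joint training loop this abstract describes, with the unit numbers from the text kept as comments. The encoder architecture, mean-pooling over frames, classification heads, and cross-entropy losses are all assumptions; the patent only requires that one shared feature extractor be updated from both inferences.
    ```python
    import torch
    import torch.nn as nn

    class FeatureExtractor(nn.Module):
        """Shared feature extraction unit (31D)."""
        def __init__(self, in_dim=512, feat_dim=128):
            super().__init__()
            self.encoder = nn.Linear(in_dim, feat_dim)

        def forward(self, x):
            if x.dim() == 3:                     # video: (batch, frames, in_dim)
                return self.encoder(x).mean(1)   # pooled moving-image feature Fm
            return self.encoder(x)               # still image: feature Fs

    extractor = FeatureExtractor()
    video_head = nn.Linear(128, 10)   # first inference unit (32D)
    image_head = nn.Linear(128, 10)   # second inference unit (34D)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(
        list(extractor.parameters()) + list(video_head.parameters())
        + list(image_head.parameters()))

    def training_step(video_x, video_y, image_x, image_y):
        fm = extractor(video_x)    # moving-image feature amount Fm
        fs = extractor(image_x)    # still-image feature amount Fs
        # Learning unit (36D): one loss per inference, backpropagated into 31D.
        loss = criterion(video_head(fm), video_y) + criterion(image_head(fs), image_y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
    ```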
  • Publication number: 20230215152
    Abstract: In a learning device, a feature extraction means extracts image features from an input image. A class discrimination means discriminates a class of the input image based on the image features and generates a class discriminative result. A class discriminative loss calculation means calculates a class discriminative loss based on the class discriminative result. A normal/abnormal discrimination means discriminates whether the class is a normal class or an abnormal class, based on the image features, and generates a normal/abnormal discriminative result. An AUC loss calculation means calculates an AUC loss based on the normal/abnormal discriminative result. A first learning means updates the parameters of the feature extraction means, the class discrimination means, and the normal/abnormal discrimination means based on the class discriminative loss and the AUC loss (see the sketch after this entry).
    Type: Application
    Filed: June 3, 2020
    Publication date: July 6, 2023
    Applicant: NEC Corporation
    Inventors: Tomokazu Kaneko, Makoto Terao
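    A short PyTorch sketch of how the two losses could be combined. The pairwise margin surrogate for the AUC loss and the equal weighting are assumptions; the abstract specifies only that a class discriminative loss and an AUC loss jointly drive the parameter update.
    ```python
    import torch
    import torch.nn.functional as F

    def auc_loss(abnormal_scores, normal_scores, margin=1.0):
        # Pairwise surrogate for 1 - AUC: every abnormal score should exceed
        # every normal score by at least `margin`.
        diff = abnormal_scores.unsqueeze(1) - normal_scores.unsqueeze(0)
        return F.relu(margin - diff).mean()

    def combined_loss(class_logits, class_labels, scores, is_abnormal):
        # Class discriminative loss plus AUC loss; assumes the batch holds
        # both normal and abnormal samples (is_abnormal is a bool tensor).
        ce = F.cross_entropy(class_logits, class_labels)
        auc = auc_loss(scores[is_abnormal], scores[~is_abnormal])
        return ce + auc
    ```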
  • Publication number: 20230177389
    Abstract: A recognition loss calculation unit of a learning device calculates a recognition loss using: a recognition result for recognition object data in a learning data set, which is a set of pairs of recognition object data and weak labels; a mixing matrix calculated based on the learning data set; and the weak label attached to the recognition object data. The recognition loss calculation unit includes: a difference calculation unit that calculates the difference between the mixing matrix and the recognition result; and a sum-of-squares calculation unit that calculates the recognition loss as the sum of the squares of that difference (see the sketch after this entry).
    Type: Application
    Filed: March 13, 2020
    Publication date: June 8, 2023
    Applicant: NEC Corporation
    Inventors: Shuhei YOSHIDA, Makoto TERAO
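    A NumPy sketch of the sum-of-squares loss the abstract names. Using the mixing-matrix row associated with the instance's weak label as the comparison target is an assumption about how the matrix enters the difference; the matrix values below are toy data.
    ```python
    import numpy as np

    def recognition_loss(prediction, mixing_matrix, weak_label):
        # Difference between the recognizer's output and the mixing-matrix
        # entries for this weak label, squared and summed.
        diff = mixing_matrix[weak_label] - prediction
        return float(np.sum(diff ** 2))

    # Example: 3 weak-label classes over 4 true classes.
    M = np.array([[0.7, 0.3, 0.0, 0.0],
                  [0.0, 0.6, 0.4, 0.0],
                  [0.0, 0.0, 0.2, 0.8]])
    pred = np.array([0.6, 0.3, 0.1, 0.0])
    print(recognition_loss(pred, M, weak_label=0))  # -> 0.02
    ```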
  • Patent number: 11580784
    Abstract: A model learning device provided with: an error-added movement locus generation unit that adds an error to movement locus data for action learning, which represents the movement locus of a subject and is assigned an action label representing the subject's action, thereby generating error-added movement locus data; and an action recognition model learning unit that, using at least the error-added movement locus data and learning data created on the basis of the action label, learns a model by which the action of a subject can be recognized from the subject's movement locus. This makes it possible to provide a model that recognizes a subject's action with high accuracy from a movement locus estimated using a camera image (see the sketch after this entry).
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: February 14, 2023
    Assignee: NEC CORPORATION
    Inventor: Makoto Terao
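    A minimal sketch of the error-adding augmentation step. Gaussian positional noise and its scale are assumptions; the patent requires only that an error be added to the locus while the action label is kept.
    ```python
    import numpy as np

    def add_locus_error(locus, sigma=0.05, rng=None):
        # Generate error-added movement locus data: add positional noise to
        # a (T, 2) array of x/y coordinates, so the model trains on loci as
        # imperfect as those estimated from camera images.
        rng = np.random.default_rng() if rng is None else rng
        return locus + rng.normal(scale=sigma, size=locus.shape)

    # Usage: each clean (locus, label) pair yields extra noisy training pairs.
    clean = np.cumsum(np.full((50, 2), 0.1), axis=0)   # a straight-line walk
    noisy = add_locus_error(clean, sigma=0.1, rng=np.random.default_rng(0))
    ```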
  • Publication number: 20220335712
    Abstract: The dataset supply unit supplies a learning dataset. The recognition unit outputs the recognition result for the recognition object data in the supplied learning dataset. Further, the intersection matrix computation unit computes the intersection matrix based on the learning dataset. The recognition loss computation unit computes the recognition loss using the recognition result, the intersection matrix, and the correct answer data given to the recognition object data. The updating unit then updates the parameters of the recognition unit based on the recognition loss (see the sketch after this entry).
    Type: Application
    Filed: September 27, 2019
    Publication date: October 20, 2022
    Applicant: NEC Corporation
    Inventors: Shuhei YOSHIDA, Makoto TERAO
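    A toy sketch of the update cycle in the abstract: recognize, compute a loss coupling the result, the intersection matrix, and the correct answer, then update. The linear softmax recognizer, the random row-stochastic stand-in for the intersection matrix, and the cross-entropy-style loss are all assumptions; this abstract does not fix their actual forms.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    W = np.zeros((4, 3))                     # recognizer: 4 features, 3 classes
    C = rng.dirichlet(np.ones(3), size=3)    # stand-in intersection matrix
    dataset = [(rng.normal(size=4), int(rng.integers(0, 3))) for _ in range(100)]

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    for _ in range(5):                        # updating unit: repeat over epochs
        for x, answer in dataset:
            p = softmax(W.T @ x)              # recognition unit's output
            target = C[answer]                # loss couples C with the answer
            W -= 0.1 * np.outer(x, p - target)   # cross-entropy-style gradient step
    ```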
  • Publication number: 20220254136
    Abstract: An image acquisition unit 110 acquires a plurality of images, each including an object to be inferred. An image cut-out unit 120 cuts out an object region including the object from each of the images acquired by the image acquisition unit 110. An importance generation unit 130 generates importance information by processing the object region cut out by the image cut-out unit 120. The importance information indicates the importance of the object region when an object inference model is generated, and is generated for each object region, that is, for each image acquired by the image acquisition unit 110. A learning data generation unit 140 stores the object regions cut out by the image cut-out unit 120 and the importance information generated by the importance generation unit 130 in a learning data storage unit 150 as at least a part of the learning data (see the sketch after this entry).
    Type: Application
    Filed: January 28, 2022
    Publication date: August 11, 2022
    Applicant: NEC Corporation
    Inventors: Tomokazu KANEKO, Katsuhiko TAKAHASHI, Makoto TERAO, Soma SHIRAISHI, Takami SATO, Yu NABETO, Ryosuke SAKAI
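    A minimal sketch of the cut-out and importance-generation steps, with unit numbers from the abstract as comments. The area-based importance heuristic is purely illustrative; the abstract does not specify how importance is computed.
    ```python
    import numpy as np

    def build_learning_data(images, boxes, importance_fn):
        # Cut out the object region from each image (unit 120) and attach
        # importance information computed from the crop (unit 130); the
        # pairs form part of the learning data (units 140/150).
        records = []
        for img, (x0, y0, x1, y1) in zip(images, boxes):
            region = img[y0:y1, x0:x1]
            records.append({"region": region, "importance": importance_fn(region)})
        return records

    # Hypothetical heuristic: weight larger object regions more heavily.
    area_importance = lambda r: float(r.shape[0] * r.shape[1])

    images = [np.zeros((100, 100, 3))] * 2
    boxes = [(10, 10, 50, 60), (0, 0, 30, 30)]
    learning_data = build_learning_data(images, boxes, area_importance)
    ```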
  • Publication number: 20220198783
    Abstract: The learning device 10D is trained to extract a moving-image feature amount Fm, a feature amount relating to the moving image data Dm, when the moving image data Dm is input, and to extract a still-image feature amount Fs, a feature amount relating to the still image data Ds, when the still image data Ds is input. The first inference unit 32D performs a first inference regarding the moving image data Dm based on the moving-image feature amount Fm. The second inference unit 34D performs a second inference regarding the still image data Ds based on the still-image feature amount Fs. The learning unit 36D trains the feature extraction unit 31D based on the results of the first inference and the second inference.
    Type: Application
    Filed: May 29, 2019
    Publication date: June 23, 2022
    Applicant: NEC Corporation
    Inventors: Shuhei YOSHIDA, Makoto TERAO
  • Publication number: 20200342215
    Abstract: A model learning device provided with: an error-added movement locus generation unit that adds an error to movement locus data for action learning, which represents the movement locus of a subject and is assigned an action label representing the subject's action, thereby generating error-added movement locus data; and an action recognition model learning unit that, using at least the error-added movement locus data and learning data created on the basis of the action label, learns a model by which the action of a subject can be recognized from the subject's movement locus. This makes it possible to provide a model that recognizes a subject's action with high accuracy from a movement locus estimated using a camera image.
    Type: Application
    Filed: December 5, 2018
    Publication date: October 29, 2020
    Applicant: NEC Corporation
    Inventor: Makoto TERAO
  • Patent number: 10083686
    Abstract: An analysis object determination device includes a detection unit which detects, using data related to a voice in a conversation, a plurality of specific utterance sections representing a plurality of specific events originating from one or more participants in the conversation, or a specific event originating from one of the conversation participants; and an object determination unit which determines, on the basis of the specific utterance sections detected by the detection unit, one or more cause analysis sections for the specific event originating from the conversation participant, the number of cause analysis sections being fewer than the number of specific utterance sections (see the sketch after this entry).
    Type: Grant
    Filed: September 19, 2013
    Date of Patent: September 25, 2018
    Assignee: NEC CORPORATION
    Inventors: Koji Okabe, Yoshifumi Onishi, Makoto Terao, Masahiro Tani
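    A minimal sketch of the reduction step: many detected specific-utterance sections in, fewer cause-analysis sections out. Merging sections separated by less than a gap is one plausible reduction; the patent claims the determination step generally, and the times and gap below are illustrative.
    ```python
    def determine_analysis_sections(utterance_sections, max_gap=5.0):
        # Collapse (start, end) sections, in seconds, by merging any section
        # that starts within `max_gap` of the previous merged section's end.
        merged = []
        for start, end in sorted(utterance_sections):
            if merged and start - merged[-1][1] <= max_gap:
                merged[-1][1] = max(merged[-1][1], end)
            else:
                merged.append([start, end])
        return [tuple(sec) for sec in merged]

    print(determine_analysis_sections([(0, 4), (6, 9), (30, 33)]))
    # -> [(0, 9), (30, 33)]: three specific utterances, two analysis sections
    ```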
  • Patent number: 9875236
    Abstract: An analysis subject determination device includes: a demand period detection unit which detects, from data corresponding to audio of a dissatisfaction conversation, a demand utterance period which represents a demand utterance of a first conversation party among a plurality of conversation parties carrying out the dissatisfaction conversation; a negation period detection unit which detects, from the data, a negation utterance period which represents a negation utterance of a second conversation party which differs from the first conversation party; and a subject determination unit which, from the data, determines a period with a time obtained from the demand utterance period as a start point and a time obtained from the negation utterance period after the demand utterance period as an end point to be the analysis subject period of the cause of dissatisfaction of the first conversation party in the dissatisfaction conversation (see the sketch after this entry).
    Type: Grant
    Filed: March 27, 2014
    Date of Patent: January 23, 2018
    Assignee: NEC CORPORATION
    Inventors: Koji Okabe, Yoshifumi Onishi, Makoto Terao, Masahiro Tani
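    A minimal sketch of the period determination. Taking the demand start as the start point and the end of the first subsequent negation as the end point is one plausible choice of the "times obtained from" each period; the abstract leaves those exact times open.
    ```python
    def analysis_subject_period(demand_periods, negation_periods):
        # Find the first negation utterance period that follows a demand
        # utterance period and span from the demand to the negation.
        for d_start, d_end in sorted(demand_periods):
            following = [n for n in sorted(negation_periods) if n[0] >= d_end]
            if following:
                return (d_start, following[0][1])
        return None

    # Customer demands at 12-15 s; agent negates at 18-21 s.
    print(analysis_subject_period([(12, 15)], [(18, 21)]))  # -> (12, 21)
    ```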
  • Publication number: 20170364854
    Abstract: The purpose of the present invention is to provide a technology capable of appropriately evaluating a person's conduct with respect to another person. Provided is an information processing device comprising a recognition unit 11, a detection unit 12, and an evaluation unit 13. The recognition unit 11 recognizes an evaluation subject's conduct. The detection unit 12 detects a trigger, which is a state of a person other than the evaluation subject that prompts the evaluation subject's conduct. Using the detected trigger and the result of recognition by the recognition unit 11 relating to the evaluation subject's conduct, the evaluation unit 13 evaluates the evaluation subject's conduct (see the sketch after this entry).
    Type: Application
    Filed: December 2, 2015
    Publication date: December 21, 2017
    Inventors: Terumi UMEMATSU, Ryosuke ISOTANI, Yoshifumi ONISHI, Masanori TSUJIKAWA, Makoto TERAO, Tasuku KITADE, Shuji KOMEIJI
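    A minimal sketch of one way the evaluation could work: score the fraction of triggers that the evaluation subject responds to in time. The response-window rule and the values are assumptions; the abstract claims the evaluation step generally.
    ```python
    def evaluate_conduct(trigger_times, conduct_times, window=10.0):
        # Fraction of detected triggers (states of another person) followed
        # by a recognized conduct of the evaluation subject within `window`
        # seconds.
        if not trigger_times:
            return 0.0
        responded = sum(
            any(t <= c <= t + window for c in conduct_times)
            for t in trigger_times
        )
        return responded / len(trigger_times)

    # A guest raises a hand at t=5 and t=40; staff responds at t=8 only.
    print(evaluate_conduct([5.0, 40.0], [8.0]))  # -> 0.5
    ```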
  • Publication number: 20160275968
    Abstract: A speech detection device according to the present invention acquires an acoustic signal and calculates, for a plurality of first frames of the signal, a feature value representing the spectrum shape. Using the feature value, it calculates the ratio of the likelihood of a voice model to the likelihood of a non-voice model for the first frames and determines candidate target voice sections, i.e., sections including target voice, by use of the likelihood ratio. It also calculates posterior probabilities of a plurality of phonemes using the feature value, calculates at least one of the entropy and the time difference of those posterior probabilities for the first frames, and excludes, from the candidate target voice sections, sections not including the target voice by use of at least one of the entropy and the time difference (see the sketch after this entry).
    Type: Application
    Filed: May 8, 2014
    Publication date: September 22, 2016
    Inventors: Makoto TERAO, Masanori TSUJIKAWA
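    A minimal NumPy sketch of the entropy-based refinement step. Using entropy alone (rather than also the time difference of the posteriors) and the threshold value are assumptions.
    ```python
    import numpy as np

    def posterior_entropy(phoneme_posteriors):
        # Entropy of per-frame phoneme posteriors: frames of real speech put
        # their mass on few phonemes (low entropy); noise spreads it (high).
        p = np.clip(phoneme_posteriors, 1e-12, 1.0)
        return -(p * np.log(p)).sum(axis=-1)

    def refine_candidates(candidate_mask, phoneme_posteriors, max_entropy=0.8):
        # Keep only candidate target-voice frames (selected earlier by the
        # likelihood ratio) whose phoneme-posterior entropy stays below the
        # threshold.
        return candidate_mask & (posterior_entropy(phoneme_posteriors) <= max_entropy)

    posteriors = np.array([[0.90, 0.05, 0.05],   # confident frame: kept
                           [0.34, 0.33, 0.33]])  # diffuse frame: dropped
    print(refine_candidates(np.array([True, True]), posteriors))  # [ True False]
    ```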
  • Publication number: 20160267924
    Abstract: A speech detection device according to the present invention acquires an acoustic signal, calculates a sound level for first frames in the acoustic signal, and determines a first frame whose sound level is greater than or equal to a first threshold value to be a first target frame. It also calculates a feature value representing the spectrum shape for second frames in the acoustic signal, calculates the ratio of the likelihood of a voice model to the likelihood of a non-voice model for the second frames with the feature value as input, and determines a second frame whose likelihood ratio is greater than or equal to a second threshold value to be a second target frame. A section included in both a first target section corresponding to the first target frame and a second target section corresponding to the second target frame is then determined to be a target voice section including the target voice (see the sketch after this entry).
    Type: Application
    Filed: May 8, 2014
    Publication date: September 15, 2016
    Applicant: NEC Corporation
    Inventors: Makoto TERAO, Masanori TSUJIKAWA
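    A minimal sketch of the two-detector combination. It assumes both measures have been aligned to one common frame rate (the patent computes them over first and second frames separately); the thresholds and values are illustrative.
    ```python
    import numpy as np

    def detect_target_voice(sound_levels, likelihood_ratios, level_thresh, lr_thresh):
        # A frame is target voice only if its sound level reaches the first
        # threshold AND its voice/non-voice likelihood ratio reaches the second.
        first = np.asarray(sound_levels) >= level_thresh
        second = np.asarray(likelihood_ratios) >= lr_thresh
        return first & second

    mask = detect_target_voice([0.2, 0.8, 0.9], [0.5, 3.0, 0.7],
                               level_thresh=0.5, lr_thresh=1.0)
    print(mask)  # [False  True False]
    ```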
  • Publication number: 20160203121
    Abstract: An analysis subject determination device includes: a demand period detection unit which detects, from data corresponding to audio of a dissatisfaction conversation, a demand utterance period which represents a demand utterance of a first conversation party among a plurality of conversation parties carrying out the dissatisfaction conversation; a negation period detection unit which detects, from the data, a negation utterance period which represents a negation utterance of a second conversation party which differs from the first conversation party; and a subject determination unit which, from the data, determines a period with a time obtained from the demand utterance period as a start point and a time obtained from the negation utterance period after the demand utterance period as an end point to be the analysis subject period of the cause of dissatisfaction of the first conversation party in the dissatisfaction conversation.
    Type: Application
    Filed: March 27, 2014
    Publication date: July 14, 2016
    Applicant: NEC Corporation
    Inventors: Koji OKABE, Yoshifumi ONISHI, Makoto TERAO, Masahiro TANI
  • Patent number: 9336769
    Abstract: An apparatus that calculates a confidence measure of a target word string specified in a recognition result includes: an alternative candidate generator which generates an alternative candidate word string in the position of the target word string; a classifier training unit which trains a classifier to discriminate between the target word string and the alternative candidate word string; a feature extractor which extracts a feature value representing the adjacent context in the position of the target word string; and a confidence measure calculator which determines whether the true word string in the position of the target word string is the target word string or the alternative candidate word string by using the classifier and the feature value, and calculates a confidence measure of the target word string on the basis of the determination result (see the sketch after this entry).
    Type: Grant
    Filed: March 1, 2012
    Date of Patent: May 10, 2016
    Assignees: NEC CORPORATION, THE UNIVERSITY OF WASHINGTON
    Inventors: Makoto Terao, Mari Ostendorf
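    A minimal scikit-learn sketch of the classifier-based confidence measure. Logistic regression, the random toy features, and using the classifier's probability directly as the confidence are all assumptions; the patent requires only a trainable discriminative classifier over adjacent-context features.
    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy adjacent-context features for positions where the target word
    # string was the truth (label 1) vs. where an alternative was (label 0).
    X_train = rng.normal(size=(200, 4))
    y_train = rng.integers(0, 2, size=200)

    # Classifier trained to discriminate target vs. alternative candidate.
    clf = LogisticRegression().fit(X_train, y_train)

    def confidence_measure(context_features):
        # Confidence of the target word string: probability the classifier
        # assigns to "the true string in this position is the target".
        return clf.predict_proba([context_features])[0][1]

    print(confidence_measure(rng.normal(size=4)))
    ```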
  • Publication number: 20150310877
    Abstract: This conversation analysis device comprises: a change detection unit that detects, for each of a plurality of conversation participants, each of a plurality of prescribed change patterns for emotional states, on the basis of data corresponding to voices in a target conversation; an identification unit that identifies, from among the prescribed change patterns detected by the change detection unit, a beginning combination and an ending combination, which are prescribed combinations of the change patterns that satisfy prescribed position conditions between the conversation participants; and an interval determination unit that determines specific emotional intervals, which have a start time and an end time and represent specific emotions of the conversation participants of the target conversation, by determining the start time and the end time on the basis of the time positions in the target conversation pertaining to the beginning combination and ending combination identified by the identification unit (see the sketch after this entry).
    Type: Application
    Filed: August 21, 2013
    Publication date: October 29, 2015
    Applicant: NEC Corporation
    Inventors: Yoshifumi ONISHI, Makoto TERAO, Masahiro TANI, Koji OKABE
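    A minimal sketch of the interval determination step. Taking the earliest beginning-combination time as the start and the first later ending-combination time as the end is one plausible reading; the abstract leaves the exact rule open, and the times below are illustrative.
    ```python
    def specific_emotion_interval(beginning_positions, ending_positions):
        # Map the time positions of an identified beginning combination and
        # ending combination of change patterns to one emotion interval.
        start = min(beginning_positions)
        later_endings = [t for t in ending_positions if t > start]
        return (start, min(later_endings)) if later_endings else None

    # Both participants shift tone near t=42; the closing shift is near t=95.
    print(specific_emotion_interval([42.0, 44.5], [95.0, 97.0]))  # -> (42.0, 95.0)
    ```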
  • Publication number: 20150287402
    Abstract: An analysis object determination device includes a detection unit which detects a plurality of specific utterance sections using data related to a voice in a conversation, the specific utterance sections representing a plurality of specific events originating from one or a plurality of participants in the conversation, or a specific event originating from one of the conversation participants, and an object determination unit which determines, on the basis of the plurality of specific utterance sections detected by the detection unit, one or more cause analysis sections for the specific event originating from the conversation participant, the number of the cause analysis sections being fewer than the number of the plurality of specific utterance sections.
    Type: Application
    Filed: September 19, 2013
    Publication date: October 8, 2015
    Inventors: Koji Okabe, Yoshifumi Onishi, Makoto Terao, Masahiro Tani
  • Publication number: 20150279391
    Abstract: This dissatisfying conversation determination device includes: a data acquisition unit that acquires a plurality of word data and a plurality of phonation time data for target conversation participants; an extraction unit that extracts, from the word data, a plurality of specific word data constituting polite expressions and impolite expressions; a change detection unit that detects a point of change from polite expression to impolite expression by the target conversation participants based on the specific word data and the phonation time data; and a dissatisfaction determination unit that determines whether the target conversation is a dissatisfying conversation for the target conversation participants based on the point of change detected by the change detection unit (see the sketch after this entry).
    Type: Application
    Filed: August 21, 2013
    Publication date: October 1, 2015
    Applicant: NEC Corporation
    Inventors: Yoshifumi Onishi, Makoto Terao, Masahiro Tani, Koji Okabe
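    A minimal sketch of the change-point detection. The polite/impolite word sets are assumed given (the device derives them from the extracted specific word data), and the first polite-then-impolite transition is one plausible reading of "point of change"; the words and times are illustrative.
    ```python
    def detect_change_point(timed_words, polite_words, impolite_words):
        # Scan one participant's (phonation time, word) pairs in order and
        # return the time of the first impolite word preceded by polite usage.
        seen_polite = False
        for time, word in sorted(timed_words):
            if word in polite_words:
                seen_polite = True
            elif word in impolite_words and seen_polite:
                return time
        return None

    words = [(3.0, "thanks"), (40.0, "whatever"), (40.5, "ugh")]
    print(detect_change_point(words, {"thanks"}, {"whatever", "ugh"}))  # -> 40.0
    ```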