Patents by Inventor Daniel Georg Andrade Silva
Daniel Georg Andrade Silva has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240086492
Abstract: An information processing apparatus (100) is disclosed. The information processing apparatus (100) includes an input means (102), a statistic calculation means (104), and an optimization means (106). The input means (102) receives input samples including responses and covariates. The statistic calculation means (104) transforms the responses into transformed samples using a function that depends on the covariates and an unbiased parameter. A distribution of the transformed samples depends only on a dispersion parameter. The optimization means (106) maximizes the distribution of the transformed samples to determine an estimate of the dispersion parameter.
Type: Application
Filed: January 21, 2021
Publication date: March 14, 2024
Applicant: NEC Corporation
Inventors: Daniel Georg Andrade Silva, Yuzuru Okajima
-
Publication number: 20230334297
Abstract: An object of the present disclosure is to provide an information processing apparatus, an information processing method, and a non-transitory computer readable medium capable of producing an accurate output to detect outlier(s). An information processing apparatus according to the present disclosure includes at least one memory configured to store instructions and at least one processor configured to execute the instructions to: calculate the probability of each data point being an outlier by using a temperature parameter t, wherein t > 0; lower the temperature parameter t towards 0 in a plurality of steps; and output the probability.
Type: Application
Filed: August 28, 2020
Publication date: October 19, 2023
Applicant: NEC Corporation
Inventors: Daniel Georg Andrade Silva, Yuzuru Okajima
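The abstract above fixes only the annealing schedule (t > 0 lowered towards 0 in steps), not how a point's outlier probability is computed. The following is a minimal sketch of that idea, assuming a sigmoid mapping from a point's residual magnitude to an outlier probability; the function name, the residual input, and the geometric decay schedule are all hypothetical, not taken from the patent.

```python
import math

def outlier_probabilities(residuals, t_start=1.0, n_steps=5, decay=0.5):
    # Assumed sketch: recompute each point's outlier probability at every
    # step while the temperature parameter t > 0 is lowered towards 0.
    t = t_start
    probs = None
    for _ in range(n_steps):
        # Sigmoid sharpened by 1/t: as t -> 0, the probabilities approach
        # hard 0/1 outlier decisions.
        probs = [1.0 / (1.0 + math.exp(-abs(r) / t)) for r in residuals]
        t *= decay  # lower t towards 0 in a plurality of steps
    return probs
```

With this assumed mapping, a point with zero residual stays at probability 0.5 while a large residual is pushed towards 1 as the temperature falls.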
-
Publication number: 20230104117
Abstract: An information processing apparatus for determining a threshold on classification scores includes: a score ranking component that sorts all classification scores from samples of an evaluation data set that was not used for training the classifier, and removes scores for which the class label is false; and an iteration component that iterates the threshold from the highest score returned by the score ranking component down until the number of samples with a score not lower than the current threshold is larger than a user-specified recall value times the number of true labels in the evaluation data set.
Type: Application
Filed: February 13, 2020
Publication date: April 6, 2023
Applicant: NEC Corporation
Inventors: Daniel Georg Andrade Silva, Yuzuru Okajima, Kunihiko Sadamasa
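The two components above describe a concrete procedure, which can be sketched as follows. The function name and tie-free handling are hypothetical; the logic follows the abstract: rank the true-labeled scores, then lower the threshold until enough of them lie at or above it.

```python
def choose_threshold(scores, labels, target_recall):
    # Score ranking component: keep only scores whose class label is true,
    # sorted from highest to lowest.
    true_scores = sorted((s for s, y in zip(scores, labels) if y), reverse=True)
    n_true = len(true_scores)
    # Iteration component: move the threshold down from the highest score
    # until the count of true samples scoring >= threshold exceeds
    # target_recall times the number of true labels.
    for i, threshold in enumerate(true_scores):
        if i + 1 > target_recall * n_true:
            return threshold
    return true_scores[-1]
```

For example, with scores [0.9, 0.8, 0.7, 0.6, 0.5], true labels on the first, second, and fourth samples, and a recall target of 0.5, the threshold settles at 0.8 (two of three true samples score at or above it).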
-
Patent number: 11521092
Abstract: An inference method according to the present invention, in an inference system inferring a probability that an ending state holds based on a starting state and a rule set, includes: when a rule set derived by excluding one rule from the rules constituting a first rule set is set as a second rule set, a probability that the ending state holds based on the starting state and the first rule set is set as a first inference result, and a probability that the ending state holds based on the starting state and the second rule set is set as a second inference result, calculating an importance, being an indicator indicating the magnitude of a difference between the first inference result and the second inference result; and outputting the rule and the importance of the rule, associated with each other, for each excluded rule.
Type: Grant
Filed: March 9, 2017
Date of Patent: December 6, 2022
Assignee: NEC Corporation
Inventors: Kentarou Sasaki, Daniel Georg Andrade Silva, Yotaro Watanabe, Kunihiko Sadamasa
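The leave-one-out importance computation described above can be sketched generically. Here `infer` stands in for the system's probabilistic inference routine (its internals are not specified in the abstract), and the function name is hypothetical.

```python
def rule_importances(infer, start, rules):
    # First inference result: probability that the ending state holds
    # under the full (first) rule set.
    baseline = infer(start, rules)
    importances = {}
    for rule in rules:
        # Second rule set: the first rule set with this one rule excluded.
        reduced = [r for r in rules if r != rule]
        # Importance = magnitude of the difference between the first and
        # second inference results.
        importances[rule] = abs(baseline - infer(start, reduced))
    return importances
```

The output pairs each excluded rule with its importance, matching the "outputting the rule and the importance of the rule, associated with each other" step.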
-
Publication number: 20220138232
Abstract: A visualization device visualizes a plurality of clustering results. A clustering result ordering unit orders the clustering results based on quality criteria. Each of the clustering results includes covariate clusters. A hierarchical arrangement unit creates a hierarchical tree structure that includes the covariate clusters as nodes. The created hierarchical structure is displayed.
Type: Application
Filed: February 28, 2019
Publication date: May 5, 2022
Applicant: NEC Corporation
Inventors: Daniel Georg Andrade Silva, Yuzuru Okajima
-
Patent number: 11200453
Abstract: An information processing system for improving detection of a relation between events is provided. A learning system (100) includes a training data storage (120) and a learning module (130). The training data storage (120) stores a training pair of a first and second event, and a relation between the training pair of the first and second events. The relation is a first or second relation. The learning module (130) learns a neural network for classifying a relation between a pair of the first and second events to be classified as the first or second relation, by using the training pair. The neural network includes a first layer to extract a feature of the first relation from features of the first and second events, a second layer to extract a feature of the second relation from the features of the first and second events, and a joint layer to extract a joint feature of the first and second relations from the features of the first and second relations.
Type: Grant
Filed: February 29, 2016
Date of Patent: December 14, 2021
Assignee: NEC Corporation
Inventors: Daniel Georg Andrade Silva, Yotaro Watanabe
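The abstract specifies the layer topology (two relation-specific layers over the event features, then a joint layer) but not the layer type or activation. The following is only a structural sketch assuming plain fully connected tanh layers; all function names and weight shapes are hypothetical.

```python
import math

def layer(W, x):
    # One fully connected tanh layer; W is a list of weight rows.
    return [math.tanh(sum(w * v for w, v in zip(row, x))) for row in W]

def joint_feature(e1, e2, W1, W2, Wj):
    x = e1 + e2                # concatenated features of the first and second events
    f1 = layer(W1, x)          # first layer: feature of the first relation
    f2 = layer(W2, x)          # second layer: feature of the second relation
    return layer(Wj, f1 + f2)  # joint layer: joint feature of both relations
```

A classifier head over the joint feature (not shown) would then decide between the first and second relation.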
-
Publication number: 20210192277
Abstract: An information processing system for improving detection of a relation between events is provided. A learning system (100) includes a training data storage (120) and a learning module (130). The training data storage (120) stores a training pair of a first and second event, and a relation between the training pair of the first and second events. The relation is a first or second relation. The learning module (130) learns a neural network for classifying a relation between a pair of the first and second events to be classified as the first or second relation, by using the training pair. The neural network includes a first layer to extract a feature of the first relation from features of the first and second events, a second layer to extract a feature of the second relation from the features of the first and second events, and a joint layer to extract a joint feature of the first and second relations from the features of the first and second relations.
Type: Application
Filed: February 29, 2016
Publication date: June 24, 2021
Applicant: NEC Corporation
Inventors: Daniel Georg Andrade Silva, Yotaro Watanabe
-
Publication number: 20200311574
Abstract: A regression apparatus 10 that optimizes a joint regression and clustering criterion includes a train classifier unit and an acquire clustering result unit. The train classifier unit trains a classifier with a weight vector or a weight matrix, using labeled training data, a similarity of features, a loss function characterizing regression quality, and a penalty encouraging the similarity of features, wherein the strength of the penalty is proportional to the similarity of features. The acquire clustering result unit uses the trained classifier to identify feature clusters by grouping the features whose regression weights are equal.
Type: Application
Filed: September 29, 2017
Publication date: October 1, 2020
Applicant: NEC Corporation
Inventor: Daniel Georg Andrade Silva
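The final clustering step, grouping features whose regression weights are equal, can be sketched as below. In practice trained weights are only approximately equal, so this sketch assumes a numerical tolerance; the function name and tolerance are hypothetical.

```python
def feature_clusters(weights, tol=1e-8):
    # Group feature indices whose regression weights are equal
    # (up to an assumed numerical tolerance).
    clusters = []
    for i, w in enumerate(weights):
        for cluster in clusters:
            if abs(weights[cluster[0]] - w) <= tol:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```

For example, weights [0.5, 0.5, 1.2, 0.5] yield two feature clusters: {0, 1, 3} and {2}.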
-
Publication number: 20200293929
Abstract: An inference method according to the present invention, in an inference system inferring a probability that an ending state holds based on a starting state and a rule set, includes: when a rule set derived by excluding one rule from the rules constituting a first rule set is set as a second rule set, a probability that the ending state holds based on the starting state and the first rule set is set as a first inference result, and a probability that the ending state holds based on the starting state and the second rule set is set as a second inference result, calculating an importance, being an indicator indicating the magnitude of a difference between the first inference result and the second inference result; and outputting the rule and the importance of the rule, associated with each other, for each excluded rule.
Type: Application
Filed: March 9, 2017
Publication date: September 17, 2020
Applicant: NEC Corporation
Inventors: Kentarou Sasaki, Daniel Georg Andrade Silva, Yotaro Watanabe, Kunihiko Sadamasa
-
Patent number: 10354010
Abstract: An information processing system to increase weights of words that are related to a text, but that do not explicitly occur in the text, in a weight vector representing the text, is provided. An adjusting system (100) includes a distance storing unit (110) and an adjusting unit (120). The distance storing unit (110) stores distances between any two terms of a plurality of terms. The distance between two terms becomes smaller as the two terms are semantically more similar. The adjusting unit (120) adjusts the weight of each term in a weight vector that includes weights of the plurality of terms and represents a text, on the basis of the distance between each term and the other terms in the weight vector and the weights of the other terms.
Type: Grant
Filed: April 24, 2015
Date of Patent: July 16, 2019
Assignee: NEC Corporation
Inventors: Daniel Georg Andrade Silva, Akihiro Tamura, Masaaki Tsuchida
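One way to realize the adjusting unit is sketched below. The abstract only says that adjustment depends on pairwise distances and the other terms' weights; the exponential decay `exp(-distance)` used here as a similarity kernel is an assumption, not taken from the patent.

```python
import math

def adjust_weights(weights, distance):
    # Raise each term's weight by the weights of semantically close terms,
    # discounted by the stored pairwise distance (smaller distance = more
    # similar), so related words absent from the text gain weight.
    n = len(weights)
    return [
        weights[i]
        + sum(weights[j] * math.exp(-distance[i][j]) for j in range(n) if j != i)
        for i in range(n)
    ]
```

With this kernel, a term with weight 0 (not occurring in the text) that is at distance 0 from a term with weight 1 is lifted to weight 1, as the abstract's motivation suggests.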
-
Patent number: 10324971
Abstract: A method for classifying a new instance including a text document by using training instances with class labels (labeled data) and zero or more training instances without class labels (unlabeled data), comprising: estimating a word distribution for each class by using the labeled data and the unlabeled data; estimating a background distribution and a degree of interpolation between the background distribution and the word distribution by using the labeled data and the unlabeled data; calculating, for each word, two probabilities: that the word is generated from the word distribution, and that the word is generated from the background distribution; combining the two probabilities by using the degree of interpolation; combining the resulting probabilities of all words to estimate, for each class, a document probability indicating that the document is generated from the class; and classifying the new instance as the class for which the document probability is the highest.
Type: Grant
Filed: June 20, 2014
Date of Patent: June 18, 2019
Assignee: NEC Corporation
Inventors: Daniel Georg Andrade Silva, Hironori Mizuguchi, Kai Ishikawa
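The classification rule above can be sketched as an interpolated naive-Bayes-style scorer. This assumes word distributions are given as dictionaries and uses a single interpolation weight `lam`; how the distributions and `lam` are estimated from labeled and unlabeled data is not shown, and all names are hypothetical.

```python
import math

def document_log_prob(words, class_dist, background_dist, lam):
    # Per word: combine the class-specific word probability with the shared
    # background probability using the degree of interpolation, then combine
    # all words by summing log-probabilities.
    total = 0.0
    for w in words:
        p = lam * class_dist.get(w, 0.0) + (1.0 - lam) * background_dist.get(w, 0.0)
        total += math.log(p)
    return total

def classify(words, class_dists, background_dist, lam):
    # Classify as the class with the highest document probability.
    return max(class_dists,
               key=lambda c: document_log_prob(words, class_dists[c],
                                               background_dist, lam))
```

The background interpolation keeps rare words from zeroing out a class's document probability, which is the usual motivation for this kind of smoothing.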
-
Publication number: 20190180192
Abstract: An information processing system for learning new probabilistic rules, even if only one training sample is given, is provided. A learning system (100) includes a KB (knowledge base) storage (110), a rule generator (130), and a weight calculator (140). The KB storage (110) stores a KB including a knowledge storage for storing rules between events among a plurality of events. The rule generator (130) generates one or more new rules based on the rules and an implication score between the events. The weight calculator (140) calculates a weight of the one or more new rules for probabilistic reasoning based on the implication score.
Type: Application
Filed: August 18, 2016
Publication date: June 13, 2019
Applicant: NEC Corporation
Inventors: Daniel Georg Andrade Silva, Yotaro Watanabe, Satoshi Morinaga, Kunihiko Sadamasa
-
Publication number: 20190164072
Abstract: An inference system according to the present invention relates to inference from a starting state and a first rule set to an ending state. The inference system includes a memory and at least one processor coupled to the memory. The processor performs operations that include: receiving a parameter for use in selecting a second rule set from the first rule set; and visualizing the second rule set associated with the parameter.
Type: Application
Filed: August 2, 2016
Publication date: May 30, 2019
Applicant: NEC Corporation
Inventors: Kentarou Sasaki, Daniel Georg Andrade Silva, Yotaro Watanabe, Kunihiko Sadamasa
-
Publication number: 20190164078
Abstract: Provided is an information processing system to accurately predict the performance of a classifier with respect to the number of samples of labeled data. A training system 100 includes an extraction unit 120 and an estimation unit 130. The extraction unit 120 extracts a reference data set that is similar to a target data set from one or more reference data sets. The estimation unit 130 estimates the performance of a classifier, assuming that the classifier is trained with labeled data in the target data set, by using the extracted reference data set, and outputs the estimated performance.
Type: Application
Filed: April 13, 2017
Publication date: May 30, 2019
Applicant: NEC Corporation
Inventors: Daniel Georg Andrade Silva, Itaru Hosomi
-
Patent number: 10140361
Abstract: A text mining device (2) is used in which data composed of a set of records, each including an attribute value and text data, is used as analysis target data. The text mining device (2) includes an analysis perspective candidate generation unit (20) that extracts an attribute value from the analysis target data and generates an analysis perspective candidate using the extracted attribute value, and a characteristic degree calculation unit (21) that compares text data in a record including the attribute value extracted as the analysis perspective candidate with text data in a record set that includes at least a record other than the record including the attribute value in the analysis target data, and calculates a characteristic degree indicating a relationship between the analysis perspective candidate and the analysis target data based on a result of the comparison.
Type: Grant
Filed: August 23, 2013
Date of Patent: November 27, 2018
Assignee: NEC Corporation
Inventors: Masaaki Tsuchida, Kai Ishikawa, Takashi Onishi, Daniel Georg Andrade Silva
-
Publication number: 20180137100
Abstract: An information processing system to increase weights of words that are related to a text, but that do not explicitly occur in the text, in a weight vector representing the text, is provided. An adjusting system (100) includes a distance storing unit (110) and an adjusting unit (120). The distance storing unit (110) stores distances between any two terms of a plurality of terms. The distance between two terms becomes smaller as the two terms are semantically more similar. The adjusting unit (120) adjusts the weight of each term in a weight vector that includes weights of the plurality of terms and represents a text, on the basis of the distance between each term and the other terms in the weight vector and the weights of the other terms.
Type: Application
Filed: April 24, 2015
Publication date: May 17, 2018
Applicant: NEC Corporation
Inventors: Daniel Georg Andrade Silva, Akihiro Tamura, Masaaki Tsuchida
-
Publication number: 20170169105
Abstract: A document classification method includes a first step of calculating smoothing weights for each word and a fixed class, a second step of calculating a smoothed second-order word probability, and a third step of classifying a document, including calculating the probability that the document belongs to the fixed class.
Type: Application
Filed: November 27, 2013
Publication date: June 15, 2017
Applicant: NEC Corporation
Inventors: Daniel Georg Andrade Silva, Hironori Mizuguchi, Kai Ishikawa
-
Publication number: 20170116332
Abstract: A method for classifying a new instance including a text document by using training instances with class labels (labeled data) and zero or more training instances without class labels (unlabeled data), comprising: estimating a word distribution for each class by using the labeled data and the unlabeled data; estimating a background distribution and a degree of interpolation between the background distribution and the word distribution by using the labeled data and the unlabeled data; calculating, for each word, two probabilities: that the word is generated from the word distribution, and that the word is generated from the background distribution; combining the two probabilities by using the degree of interpolation; combining the resulting probabilities of all words to estimate, for each class, a document probability indicating that the document is generated from the class; and classifying the new instance as the class for which the document probability is the highest.
Type: Application
Filed: June 20, 2014
Publication date: April 27, 2017
Applicant: NEC Corporation
Inventors: Daniel Georg Andrade Silva, Hironori Mizuguchi, Kai Ishikawa
-
Patent number: 9542386
Abstract: An entailment evaluation device includes: a generation unit which generates first information indicating at least the order of occurrence of events of first and second simple sentences included in a hypothesis text, and generates second information indicating at least the order of occurrence of events of third and fourth simple sentences included in a target text, the third simple sentence being related to the first simple sentence and the fourth simple sentence being related to the second simple sentence; a calculation unit which obtains a calculation result by comparing, based on the first and second information, the order of occurrence of events of the first and second simple sentences with the order of occurrence of events of the third and fourth simple sentences; and a determination unit which determines, based on at least the calculation result, whether or not the target text entails the hypothesis text.
Type: Grant
Filed: February 28, 2014
Date of Patent: January 10, 2017
Assignee: NEC Corporation
Inventors: Daniel Georg Andrade Silva, Kai Ishikawa, Masaaki Tsuchida, Takashi Onishi
-
Publication number: 20160012034
Abstract: An entailment evaluation device includes: a generation unit which generates first information indicating at least the order of occurrence of events of first and second simple sentences included in a hypothesis text, and generates second information indicating at least the order of occurrence of events of third and fourth simple sentences included in a target text, the third simple sentence being related to the first simple sentence and the fourth simple sentence being related to the second simple sentence; a calculation unit which obtains a calculation result by comparing, based on the first and second information, the order of occurrence of events of the first and second simple sentences with the order of occurrence of events of the third and fourth simple sentences; and a determination unit which determines, based on at least the calculation result, whether or not the target text entails the hypothesis text.
Type: Application
Filed: February 28, 2014
Publication date: January 14, 2016
Inventors: Daniel Georg Andrade Silva, Kai Ishikawa, Masaaki Tsuchida, Takashi Onishi