Patents by Inventor André Elisseeff
André Elisseeff has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20110119213
Abstract: Identification of a determinative subset of features from within a group of features is performed by training a support vector machine using training samples with class labels to determine a value for each feature; features are then removed based on that value. One or more features having the smallest values are removed and an updated kernel matrix is generated using the remaining features. The process is repeated until a predetermined number of features remain that are capable of accurately separating the data into different classes.
Type: Application
Filed: December 1, 2010
Publication date: May 19, 2011
Applicant: HEALTH DISCOVERY CORPORATION
Inventors: André Elisseeff, Bernhard Schölkopf, Fernando Perez-Cruz
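The recursive elimination loop described in this abstract can be summarized in a few lines. The sketch below is a hedged illustration only, assuming a linear SVM whose weight magnitudes serve as the per-feature values and using scikit-learn rather than the patented implementation; the function name and removal schedule are hypothetical.

```python
# Illustrative SVM-RFE loop: train, score features by |w|, drop the weakest,
# retrain on what remains. Assumes binary labels and a linear kernel.
import numpy as np
from sklearn.svm import SVC

def svm_rfe(X, y, n_features_to_keep, n_remove_per_step=1):
    """Return indices of the features that survive recursive elimination."""
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_features_to_keep:
        clf = SVC(kernel="linear").fit(X[:, remaining], y)
        scores = np.abs(clf.coef_).sum(axis=0)      # per-feature weight magnitude
        n_drop = min(n_remove_per_step, len(remaining) - n_features_to_keep)
        drop = set(np.argsort(scores)[:n_drop].tolist())  # smallest values go first
        remaining = [f for i, f in enumerate(remaining) if i not in drop]
    return remaining
```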
-
Publication number: 20110106735
Abstract: Identification of a determinative subset of features from within a group of features is performed by training a support vector machine using training samples with class labels to determine a value for each feature; features are then removed based on that value. One or more features having the smallest values are removed and an updated kernel matrix is generated using the remaining features. The process is repeated until a predetermined number of features remain that are capable of accurately separating the data into different classes. In some embodiments, features are eliminated by a ranking criterion based on a Lagrange multiplier corresponding to each training sample.
Type: Application
Filed: November 11, 2010
Publication date: May 5, 2011
Applicant: HEALTH DISCOVERY CORPORATION
Inventors: Jason Weston, André Elisseeff, Bernhard Schölkopf, Fernando Perez-Cruz, Isabelle Guyon
-
Publication number: 20110078099
Abstract: A group of features that has been identified as “significant” in being able to separate data into classes is evaluated using a support vector machine, which separates the dataset into classes one feature at a time. After separation, an extremal margin value is assigned to each feature based on the distance between the lowest feature value in the first class and the highest feature value in the second class. Separately, extremal margin values are calculated for a normal distribution within a large number of randomly drawn example sets for the two classes to determine the number of examples within the normal distribution that would have a specified extremal margin value. Using p-values calculated for the normal distribution, a desired p-value is selected. The specified extremal margin value corresponding to the selected p-value is compared to the calculated extremal margin values for the group of features.
Type: Application
Filed: September 26, 2010
Publication date: March 31, 2011
Applicant: HEALTH DISCOVERY CORPORATION
Inventors: Jason Aaron Edward Weston, André Elisseeff, Bernhard Schöelkopf, Fernando Perez-Cruz, Isabelle Guyon
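As a rough illustration of the extremal-margin test sketched above, the following assumes a single standardized feature, binary 0/1 labels, and a simple Monte Carlo null built from standard-normal draws; all names and the standardization step are assumptions, not the patent's procedure.

```python
# Per-feature extremal margin compared against margins of random normal data.
import numpy as np

def extremal_margin(x, y):
    """Gap between the lowest value in class 1 and the highest value in class 0."""
    return x[y == 1].min() - x[y == 0].max()

def null_margins(n0, n1, n_draws=10000, seed=0):
    """Extremal margins of standard-normal samples with the same class sizes."""
    rng = np.random.default_rng(seed)
    return np.array([rng.standard_normal(n1).min() - rng.standard_normal(n0).max()
                     for _ in range(n_draws)])

def margin_p_value(x, y, null):
    """Fraction of random draws whose margin meets or exceeds the observed one."""
    obs = extremal_margin((x - x.mean()) / x.std(), y)
    return float((null >= obs).mean())
```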
-
Patent number: 7890445
Abstract: A model selection method is provided for choosing the number of clusters, or more generally the parameters of a clustering algorithm. The algorithm is based on comparing the similarity between pairs of clustering runs on sub-samples or other perturbations of the data. High pairwise similarities show that the clustering represents a stable pattern in the data. The method is applicable to any clustering algorithm, and can also detect lack of structure. We show results on artificial and real data using a hierarchical clustering algorithm.
Type: Grant
Filed: October 30, 2007
Date of Patent: February 15, 2011
Assignee: Health Discovery Corporation
Inventors: Asa Ben Hur, André Elisseeff, Isabelle Guyon
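A compact sketch of the stability idea, assuming k-means and the adjusted Rand index as the pairwise similarity; the patent's method is not tied to either choice, and the sub-sampling fraction here is arbitrary.

```python
# Stability-based model selection: cluster two sub-samples, compare their labels
# on the overlapping points, and prefer the k with the highest average agreement.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def stability(X, k, n_pairs=20, frac=0.8, seed=0):
    """Average similarity of clusterings computed on overlapping sub-samples."""
    rng = np.random.default_rng(seed)
    n, scores = X.shape[0], []
    for _ in range(n_pairs):
        idx1 = rng.choice(n, int(frac * n), replace=False)
        idx2 = rng.choice(n, int(frac * n), replace=False)
        common = np.intersect1d(idx1, idx2)
        l1 = KMeans(n_clusters=k, n_init=10).fit(X[idx1]).predict(X[common])
        l2 = KMeans(n_clusters=k, n_init=10).fit(X[idx2]).predict(X[common])
        scores.append(adjusted_rand_score(l1, l2))
    return float(np.mean(scores))

# best_k = max(range(2, 10), key=lambda k: stability(X, k))
```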
-
Publication number: 20100318482
Abstract: Learning machines, such as support vector machines, are used to analyze datasets to recognize patterns within the dataset using kernels that are selected according to the nature of the data to be analyzed. Where the datasets include an invariance transformation or noise, tangent vectors are defined to identify relationships between the invariance or noise and the training data points. A covariance matrix is formed using the tangent vectors, then used in generation of the kernel, which may be based on a kernel PCA map.
Type: Application
Filed: August 25, 2010
Publication date: December 16, 2010
Applicant: HEALTH DISCOVERY CORPORATION
Inventors: Peter L. Bartlett, André Elisseeff, Bernhard Schoelkopf, Olivier Chapelle
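To make the tangent-vector idea concrete, here is a hedged sketch: tangents are estimated by finite differences of a user-supplied invariance transform, collected into a covariance matrix, and folded into a linear kernel. The transform, the ridge term, and the kernel form are illustrative assumptions rather than the claimed construction.

```python
# Build a kernel that down-weights directions spanned by tangent vectors of an
# invariance transformation (e.g. a small shift applied to each sample).
import numpy as np

def tangent_vectors(X, transform, eps=1.0):
    """Finite-difference tangents t_i ~ (T(x_i) - x_i) / eps."""
    return (np.stack([transform(x) for x in X]) - X) / eps

def tangent_covariance_kernel(X, tangents, ridge=1e-3):
    """Gram matrix k(x, x') = x^T C^{-1} x', with C the tangent covariance."""
    d = X.shape[1]
    C = tangents.T @ tangents / len(X) + ridge * np.eye(d)
    return X @ np.linalg.inv(C) @ X.T

# Example transform for 1-D signals: a one-sample circular shift.
# K = tangent_covariance_kernel(X, tangent_vectors(X, lambda x: np.roll(x, 1)))
```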
-
Patent number: 7805388
Abstract: In a pre-processing step prior to training a learning machine, the quantity of features to be processed is reduced using feature selection methods selected from the group consisting of recursive feature elimination (RFE), minimizing the number of non-zero parameters of the system (l0-norm minimization), evaluation of a cost function to identify a subset of features that are compatible with constraints imposed by the learning set, unbalanced correlation score, transductive feature selection, and single-feature margin-based ranking. The features remaining after feature selection are then used to train a learning machine for purposes of pattern classification, regression, clustering and/or novelty detection.
Type: Grant
Filed: October 30, 2007
Date of Patent: September 28, 2010
Assignee: Health Discovery Corporation
Inventors: Jason Weston, André Elisseeff, Bernhard Schölkopf, Fernando Perez-Cruz, Isabelle Guyon
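Of the listed criteria, the unbalanced correlation score is the simplest to write down. The sketch below is an assumed form (score each feature by its activity in the positive class minus a heavy penalty for activity in the negative class); the penalty weight K and the ±1 labeling are arbitrary choices for illustration.

```python
# One assumed form of an unbalanced correlation score for feature ranking.
import numpy as np

def unbalanced_correlation_scores(X, y, K=5.0):
    """Per-feature score: sum over positive examples minus K times the sum
    over negative examples (labels assumed to be +1 / -1)."""
    return X[y == 1].sum(axis=0) - K * X[y == -1].sum(axis=0)

# keep = np.argsort(-unbalanced_correlation_scores(X, y))[:200]  # top 200 features
```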
-
Patent number: 7788193
Abstract: Learning machines, such as support vector machines, are used to analyze datasets to recognize patterns within the dataset using kernels that are selected according to the nature of the data to be analyzed. Where the dataset possesses structural characteristics, locational kernels can be utilized to provide measures of similarity among data points within the dataset. The locational kernels are then combined to generate a decision function, or kernel, that can be used to analyze the dataset. Where an invariance transformation or noise is present, tangent vectors are defined to identify relationships between the invariance or noise and the data points. A covariance matrix is formed using the tangent vectors, then used in generation of the kernel.
Type: Grant
Filed: October 30, 2007
Date of Patent: August 31, 2010
Assignee: Health Discovery Corporation
Inventors: Peter L. Bartlett, André Elisseeff, Bernhard Schoelkopf, Olivier Chapelle
-
Publication number: 20100205124
Abstract: Support vector machines are used to classify data contained within a structured dataset, such as a plurality of signals generated by a spectral analyzer. The signals are pre-processed to ensure alignment of peaks across the spectra. Similarity measures are constructed to provide a basis for comparison of pairs of samples of the signal. A support vector machine is trained to discriminate between different classes of the samples and to identify the most predictive features within the spectra. In a preferred embodiment, feature selection is performed to reduce the number of features that must be considered.
Type: Application
Filed: February 4, 2010
Publication date: August 12, 2010
Applicant: HEALTH DISCOVERY CORPORATION
Inventors: Asa Ben-Hur, Andre Elisseeff, Olivier Chapelle, Jason Aaron Edward Weston
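The pre-processing step mentioned here (peak alignment across spectra) can be approximated very simply. The cross-correlation alignment below is an assumption for illustration, not the patent's method, and the variable names are hypothetical.

```python
# Align each spectrum to a reference by its cross-correlation peak, then train
# an SVM on the aligned spectra.
import numpy as np
from sklearn.svm import SVC

def align_to_reference(spectra, reference):
    """Circularly shift each spectrum so its peaks line up with the reference."""
    aligned = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        xcorr = np.correlate(s - s.mean(), reference - reference.mean(), mode="full")
        shift = int(xcorr.argmax()) - (len(reference) - 1)
        aligned[i] = np.roll(s, -shift)
    return aligned

# reference = spectra.mean(axis=0)
# clf = SVC(kernel="linear").fit(align_to_reference(spectra, reference), labels)
```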
-
Patent number: 7676442
Abstract: Support vector machines are used to classify data contained within a structured dataset, such as a plurality of signals generated by a spectral analyzer. The signals are pre-processed to ensure alignment of peaks across the spectra. Similarity measures are constructed to provide a basis for comparison of pairs of samples of the signal. A support vector machine is trained to discriminate between different classes of the samples and to identify the most predictive features within the spectra. In a preferred embodiment, feature selection is performed to reduce the number of features that must be considered.
Type: Grant
Filed: October 30, 2007
Date of Patent: March 9, 2010
Assignee: Health Discovery Corporation
Inventors: Asa Ben-Hur, André Elisseeff, Olivier Chapelle, Jason Aaron Edward Weston
-
Patent number: 7624074
Abstract: In a pre-processing step prior to training a learning machine, the quantity of features to be processed is reduced using feature selection methods selected from the group consisting of recursive feature elimination (RFE), minimizing the number of non-zero parameters of the system (l0-norm minimization), evaluation of a cost function to identify a subset of features that are compatible with constraints imposed by the learning set, unbalanced correlation score, and transductive feature selection. The features remaining after feature selection are then used to train a learning machine for purposes of pattern classification, regression, clustering and/or novelty detection.
Type: Grant
Filed: October 30, 2007
Date of Patent: November 24, 2009
Assignee: Health Discovery Corporation
Inventors: Jason Aaron Edward Weston, André Elisseeff, Bernard Schoelkopf, Fernando Pérez-Cruz
-
Patent number: 7617163
Abstract: Support vector machines are used to classify data contained within a structured dataset, such as a plurality of signals generated by a spectral analyzer. The signals are pre-processed to ensure alignment of peaks across the spectra. Similarity measures are constructed to provide a basis for comparison of pairs of samples of the signal. A support vector machine is trained to discriminate between different classes of the samples and to identify the most predictive features within the spectra. In a preferred embodiment, feature selection is performed to reduce the number of features that must be considered.
Type: Grant
Filed: October 9, 2002
Date of Patent: November 10, 2009
Assignee: Health Discovery Corporation
Inventors: Asa Ben-Hur, André Elisseeff, Olivier Chapelle, Jason Aaron Edward Weston
-
Publication number: 20090083231
Abstract: A system and method for analyzing electronic data records. An annotation unit receives a set of electronic data records and computes concept vectors for them, where the coordinates of the concept vectors represent scores of the concepts in the respective electronic data record and the concepts are part of an ontology. A similarity network unit computes a similarity network from the concept vectors and from at least one relationship between the concepts of the ontology. The similarity network represents similarities between the electronic data records: its vertices represent the electronic data records and its edges represent similarity values indicating a degree of similarity between the vertices. Corresponding method steps for executing the system are also provided.
Type: Application
Filed: September 18, 2008
Publication date: March 26, 2009
Inventors: Frey Aagaard Eberholst, Andre Elisseeff, Peter Lundkvist, Ulf H. Nielsen, Erich M. Ruetsche
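A minimal sketch of the similarity network described above: records are represented by concept-score vectors and edges connect sufficiently similar records. The cosine measure and the threshold are assumptions, and the ontology relationships the abstract also folds in are omitted here.

```python
# Build a similarity network over records described by ontology concept scores.
import numpy as np

def similarity_network(concept_vectors, threshold=0.5):
    """Return {(i, j): similarity} for record pairs whose cosine similarity
    exceeds the threshold; vertices are record indices."""
    V = np.asarray(concept_vectors, dtype=float)
    norms = np.maximum(np.linalg.norm(V, axis=1, keepdims=True), 1e-12)
    sims = (V / norms) @ (V / norms).T
    return {(i, j): float(sims[i, j])
            for i in range(len(V)) for j in range(i + 1, len(V))
            if sims[i, j] > threshold}
```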
-
Patent number: 7475048
Abstract: A computer-implemented method is provided for ranking features within a large dataset containing a large number of features according to each feature's ability to separate data into classes. For each feature, a support vector machine separates the dataset into two classes and determines the margins between extremal points in the two classes. The margins for all of the features are compared and the features are ranked based upon the size of the margin, with the highest-ranked features corresponding to the largest margins. A subset of features for classifying the dataset is selected from a group of the highest-ranked features. In one embodiment, the method is used to identify the best genes for disease prediction and diagnosis using gene expression data from micro-arrays.
Type: Grant
Filed: November 7, 2002
Date of Patent: January 6, 2009
Assignee: Health Discovery Corporation
Inventors: Jason Weston, André Elisseeff, Bernhard Schölkopf, Fernando Perez-Cruz, Isabelle Guyon
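A hedged sketch of the per-feature ranking: fit an SVM on one feature at a time and measure the gap between the extremal decision values of the two classes. scikit-learn names and 0/1 labels are assumptions, not the patented procedure.

```python
# Rank features by the extremal margin a single-feature SVM achieves.
import numpy as np
from sklearn.svm import SVC

def rank_features_by_extremal_margin(X, y):
    """Return feature indices sorted so the largest extremal margins come first."""
    margins = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        clf = SVC(kernel="linear").fit(X[:, [j]], y)
        d = clf.decision_function(X[:, [j]])
        margins[j] = d[y == 1].min() - d[y == 0].max()  # gap between extremal points
    return np.argsort(-margins)
```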
-
Publication number: 20080301070
Abstract: Learning machines, such as support vector machines, are used to analyze datasets to recognize patterns within the dataset using kernels that are selected according to the nature of the data to be analyzed. Where the dataset possesses structural characteristics, locational kernels can be utilized to provide measures of similarity among data points within the dataset. The locational kernels are then combined to generate a decision function, or kernel, that can be used to analyze the dataset. Where an invariance transformation or noise is present, tangent vectors are defined to identify relationships between the invariance or noise and the data points. A covariance matrix is formed using the tangent vectors, then used in generation of the kernel.
Type: Application
Filed: October 30, 2007
Publication date: December 4, 2008
Inventors: Peter L. Bartlett, Andre Elisseeff, Bernhard Schoelkopf, Olivier Chapelle
-
Publication number: 20080215513
Abstract: In a pre-processing step prior to training a learning machine, the quantity of features to be processed is reduced using feature selection methods selected from the group consisting of recursive feature elimination (RFE), minimizing the number of non-zero parameters of the system (l0-norm minimization), evaluation of a cost function to identify a subset of features that are compatible with constraints imposed by the learning set, unbalanced correlation score, and transductive feature selection. The features remaining after feature selection are then used to train a learning machine for purposes of pattern classification, regression, clustering and/or novelty detection.
Type: Application
Filed: October 30, 2007
Publication date: September 4, 2008
Inventors: Jason Aaron Edward Weston, André Elisseeff, Bernard Schoelkopf, Fernando Perez-Cruz
-
Publication number: 20080140592
Abstract: A model selection method is provided for choosing the number of clusters, or more generally the parameters of a clustering algorithm. The algorithm is based on comparing the similarity between pairs of clustering runs on sub-samples or other perturbations of the data. High pairwise similarities show that the clustering represents a stable pattern in the data. The method is applicable to any clustering algorithm, and can also detect lack of structure. We show results on artificial and real data using a hierarchical clustering algorithm.
Type: Application
Filed: October 30, 2007
Publication date: June 12, 2008
Inventors: Asa Ben-Hur, Andre Elisseeff, Isabelle Guyon
-
Publication number: 20080097940
Abstract: Support vector machines are used to classify data contained within a structured dataset, such as a plurality of signals generated by a spectral analyzer. The signals are pre-processed to ensure alignment of peaks across the spectra. Similarity measures are constructed to provide a basis for comparison of pairs of samples of the signal. A support vector machine is trained to discriminate between different classes of the samples and to identify the most predictive features within the spectra. In a preferred embodiment, feature selection is performed to reduce the number of features that must be considered.
Type: Application
Filed: October 30, 2007
Publication date: April 24, 2008
Inventors: Asa Ben-Hur, Andre Elisseeff, Olivier Chapelle, Jason Weston
-
Patent number: 7353215
Abstract: Learning machines, such as support vector machines, are used to analyze datasets to recognize patterns within the dataset using kernels that are selected according to the nature of the data to be analyzed. Where the dataset possesses structural characteristics, locational kernels can be utilized to provide measures of similarity among data points within the dataset. The locational kernels are then combined to generate a decision function, or kernel, that can be used to analyze the dataset. Where invariance transformations or noise are present, tangent vectors are defined to identify relationships between the invariance or noise and the data points. A covariance matrix is formed using the tangent vectors, then used in generation of the kernel for recognizing patterns in the dataset.
Type: Grant
Filed: May 7, 2002
Date of Patent: April 1, 2008
Assignee: Health Discovery Corporation
Inventors: Peter L. Bartlett, André Elisseeff, Bernhard Schoelkopf
-
Patent number: 7318051
Abstract: In a pre-processing step prior to training a learning machine, the quantity of features to be processed is reduced using feature selection methods selected from the group consisting of recursive feature elimination (RFE), minimizing the number of non-zero parameters of the system (l0-norm minimization), evaluation of a cost function to identify a subset of features that are compatible with constraints imposed by the learning set, unbalanced correlation score, and transductive feature selection. The features remaining after feature selection are then used to train a learning machine for purposes of pattern classification, regression, clustering and/or novelty detection.
Type: Grant
Filed: May 20, 2002
Date of Patent: January 8, 2008
Assignee: Health Discovery Corporation
Inventors: Jason Aaron Edward Weston, André Elisseeff, Bernhard Schoelkopf, Fernando Pérez-Cruz
-
Publication number: 20050216426
Abstract: In a pre-processing step prior to training a learning machine, the quantity of features to be processed is reduced using feature selection methods selected from the group consisting of recursive feature elimination (RFE), minimizing the number of non-zero parameters of the system (l0-norm minimization), evaluation of a cost function to identify a subset of features that are compatible with constraints imposed by the learning set, unbalanced correlation score, and transductive feature selection. The features remaining after feature selection are then used to train a learning machine for purposes of pattern classification, regression, clustering and/or novelty detection.
Type: Application
Filed: May 20, 2002
Publication date: September 29, 2005
Inventors: Jason Aaron Weston, Andre Elisseeff, Bernhard Schoelkopf, Fernando Perez-Cruz