Patents by Inventor Vladimir Vapnik

Vladimir Vapnik has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8315956
    Abstract: A method and system for describing a phenomenon of interest. The method and system compute a decision rule for describing the phenomenon of interest using, as inputs, training data relating to the phenomenon, labels for the training data, and hidden information about the training data or directed distances obtained from that hidden information.
    Type: Grant
    Filed: November 24, 2008
    Date of Patent: November 20, 2012
    Assignee: NEC Laboratories America, Inc.
    Inventors: Akshay Vashist, Vladimir Vapnik
  • Patent number: 7979367
    Abstract: A system and method for support vector machine plus (SVM+) computations include selecting a set of indexes for a target function to create a quadratic function depending on a number of variables, and reducing the number of variables to two in the quadratic function using linear constraints. An extreme point is computed for the quadratic function in closed form. A two-dimensional set is defined where the indexes determine whether a data point is in the two-dimensional set or not. A determination is made of whether the extreme point belongs to the two-dimensional set. If the extreme point belongs to the two-dimensional set, the extreme point defines a maximum and defines a new set of parameters for a next iteration. Otherwise, the quadratic function is restricted on at least one boundary of the two-dimensional set to create a one-dimensional quadratic function. The steps are repeated until the maximum is determined.
    Type: Grant
    Filed: March 11, 2008
    Date of Patent: July 12, 2011
    Assignee: NEC Laboratories America, Inc.
    Inventors: Rauf Izmailov, Akshay Vashist, Vladimir Vapnik
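The two-variable maximization step described in the abstract above can be sketched in a few lines. This is an illustrative reconstruction, not the patented implementation: it maximizes a generic concave two-dimensional quadratic over a box, computing the unconstrained extreme point in closed form and, when that point falls outside the feasible set, restricting the function to each boundary and maximizing the resulting one-dimensional quadratics.

```python
import numpy as np

def max_quadratic_2d(c, Q, lo, hi):
    """Maximize f(x) = c.x - 0.5 x.Q.x over the box [lo, hi]^2, where Q is
    symmetric positive definite (so f is concave).

    Mirrors the step in the abstract: compute the unconstrained extreme
    point in closed form; if it lies in the 2-D feasible set it is the
    maximum, otherwise restrict f to each boundary and maximize the
    resulting 1-D quadratics."""
    # Closed-form extreme point: gradient c - Q x = 0  =>  x = Q^{-1} c.
    x_star = np.linalg.solve(Q, c)
    if np.all(x_star >= lo) and np.all(x_star <= hi):
        return x_star

    def f(x):
        return c @ x - 0.5 * x @ Q @ x

    def max_1d(fixed, val):
        # Restrict f to one boundary: a 1-D concave quadratic in x[free].
        free = 1 - fixed
        # d/dx_free f = c[free] - Q[free, fixed]*val - Q[free, free]*x_free = 0
        x_free = (c[free] - Q[free, fixed] * val) / Q[free, free]
        x_free = min(max(x_free, lo), hi)   # clip to the boundary segment
        x = np.empty(2)
        x[fixed], x[free] = val, x_free
        return x

    candidates = [max_1d(i, v) for i in (0, 1) for v in (lo, hi)]
    return max(candidates, key=f)
```

Iterating this step over successive pairs of variables, as the abstract describes, is what drives the overall SVM+ optimization.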
  • Publication number: 20090204555
    Abstract: A method and system for describing a phenomenon of interest. The method and system compute a decision rule for describing the phenomenon of interest using, as inputs, training data relating to the phenomenon, labels for the training data, and hidden information about the training data or directed distances obtained from that hidden information.
    Type: Application
    Filed: November 24, 2008
    Publication date: August 13, 2009
    Applicant: NEC Laboratories America, Inc.
    Inventors: Akshay Vashist, Vladimir Vapnik
  • Publication number: 20080243731
    Abstract: A system and method for support vector machine plus (SVM+) computations include selecting a set of indexes for a target function to create a quadratic function depending on a number of variables, and reducing the number of variables to two in the quadratic function using linear constraints. An extreme point is computed for the quadratic function in closed form. A two-dimensional set is defined where the indexes determine whether a data point is in the two-dimensional set or not. A determination is made of whether the extreme point belongs to the two-dimensional set. If the extreme point belongs to the two-dimensional set, the extreme point defines a maximum and defines a new set of parameters for a next iteration. Otherwise, the quadratic function is restricted on at least one boundary of the two-dimensional set to create a one-dimensional quadratic function. The steps are repeated until the maximum is determined.
    Type: Application
    Filed: March 11, 2008
    Publication date: October 2, 2008
    Applicant: NEC Laboratories America, Inc.
    Inventors: Rauf Izmailov, Akshay Vashist, Vladimir Vapnik
  • Patent number: 7406450
    Abstract: Disclosed is a parallel support vector machine technique for solving problems with a large set of training data where the kernel computation, as well as the kernel cache and the training data, are spread over a number of distributed machines or processors. A plurality of processing nodes are used to train a support vector machine based on a set of training data. Each of the processing nodes selects a local working set of training data based on data local to the processing node, for example a local subset of gradients. Each node transmits selected data related to the working set (e.g., gradients having a maximum value) and receives an identification of a global working set of training data. The processing node optimizes the global working set of training data and updates a portion of the gradients of the global working set of training data. The updating of a portion of the gradients may include generating a portion of a kernel matrix. These steps are repeated until a convergence condition is met.
    Type: Grant
    Filed: February 20, 2006
    Date of Patent: July 29, 2008
    Assignee: NEC Laboratories America, Inc.
    Inventors: Hans Peter Graf, Igor Durdanovic, Eric Cosatto, Vladimir Vapnik
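The local/global working-set exchange in the abstract above can be sketched as a single-process simulation. The node layout, top-k selection rule, and index bookkeeping here are illustrative assumptions, not the patented protocol:

```python
import numpy as np

def local_working_set(grads, idx, k):
    """Each node proposes its k locally largest gradients as (index, value)
    pairs, using only data held on that node."""
    order = np.argsort(grads)[::-1][:k]
    return [(idx[i], grads[i]) for i in order]

def global_working_set(proposals, k):
    """The coordinator merges every node's proposals and keeps the indices
    of the k globally largest gradients as the global working set."""
    merged = sorted((p for node in proposals for p in node),
                    key=lambda t: t[1], reverse=True)
    return [i for i, _ in merged[:k]]
```

Each node ships only k (index, gradient) pairs rather than its full shard, which is what keeps the per-iteration communication small in the distributed setting the abstract describes.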
  • Publication number: 20070094170
    Abstract: Disclosed is a parallel support vector machine technique for solving problems with a large set of training data where the kernel computation, as well as the kernel cache and the training data, are spread over a number of distributed machines or processors. A plurality of processing nodes are used to train a support vector machine based on a set of training data. Each of the processing nodes selects a local working set of training data based on data local to the processing node, for example a local subset of gradients. Each node transmits selected data related to the working set (e.g., gradients having a maximum value) and receives an identification of a global working set of training data. The processing node optimizes the global working set of training data and updates a portion of the gradients of the global working set of training data. The updating of a portion of the gradients may include generating a portion of a kernel matrix. These steps are repeated until a convergence condition is met.
    Type: Application
    Filed: February 20, 2006
    Publication date: April 26, 2007
    Applicant: NEC Laboratories America, Inc.
    Inventors: Hans Graf, Igor Durdanovic, Eric Cosatto, Vladimir Vapnik
  • Publication number: 20060112026
    Abstract: Disclosed is an improved technique for training a support vector machine using a distributed architecture. A training data set is divided into subsets, and the subsets are optimized in a first level of optimizations, with each optimization generating a support vector set. The support vector sets output from the first level optimizations are then combined and used as input to a second level of optimizations. This hierarchical processing continues for multiple levels, with the output of each prior level being fed into the next level of optimizations. In order to guarantee a global optimal solution, a final set of support vectors from a final level of optimization processing may be fed back into the first level of the optimization cascade so that the results may be processed along with each of the training data subsets.
    Type: Application
    Filed: October 29, 2004
    Publication date: May 25, 2006
    Applicant: NEC Laboratories America, Inc.
    Inventors: Hans Graf, Eric Cosatto, Leon Bottou, Vladimir Vapnik
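The hierarchical (cascade) scheme in the abstract above can be sketched as follows. This is a toy reconstruction that leans on scikit-learn's SVC for the per-subset optimizations (an assumed stand-in, not the patent's solver) and omits the feedback pass that re-checks the final support vectors against the original subsets:

```python
import numpy as np
from sklearn.svm import SVC  # assumed stand-in for the per-subset SVM solver

def cascade_svm(X, y, n_subsets=4, kernel="linear", C=1.0):
    """Cascade sketch: optimize disjoint subsets independently, keep only
    each subset's support vectors, merge the survivor sets pairwise, and
    repeat until a single optimization remains."""
    def support_vectors(Xs, ys):
        clf = SVC(kernel=kernel, C=C).fit(Xs, ys)
        return Xs[clf.support_], ys[clf.support_]

    # First level: independent optimizations on disjoint subsets.
    parts = [support_vectors(Xs, ys)
             for Xs, ys in zip(np.array_split(X, n_subsets),
                               np.array_split(y, n_subsets))]
    # Higher levels: merge pairs of support vector sets and re-optimize.
    while len(parts) > 1:
        parts = [support_vectors(np.vstack([p[0] for p in pair]),
                                 np.concatenate([p[1] for p in pair]))
                 for pair in (parts[i:i + 2]
                              for i in range(0, len(parts), 2))]
    # Final model trained on the surviving support vectors.  (The patent
    # additionally feeds these back into the first level to verify that
    # the solution is globally optimal.)
    return SVC(kernel=kernel, C=C).fit(*parts[0])
```

Each level works only with the support vectors surviving the level below, so the problem size shrinks as the cascade ascends.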
  • Patent number: 5950146
    Abstract: A method for estimating a real function that describes a phenomenon occurring in a space of any dimensionality. The function is estimated by taking a series of measurements of the phenomenon being described and using those measurements to construct an expansion that has a manageable number of terms. A reduction in the number of terms is achieved by using an approximation that is defined as an expansion on kernel functions, the kernel functions forming an inner product in Hilbert space. By finding the support vectors for the measurements one specifies the expansion functions. The number of terms in an estimation according to the present invention is generally much less than the number of observations of the real world phenomenon that is being estimated.
    Type: Grant
    Filed: October 4, 1996
    Date of Patent: September 7, 1999
    Assignee: AT&T Corp.
    Inventor: Vladimir Vapnik
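The expansion-on-kernels form of the estimator in the abstract above can be illustrated with regularized least squares in place of the patent's support-vector selection. This sketch therefore yields a dense rather than sparse expansion, and the kernel choice and parameters are assumptions:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian kernel: an inner product in a Hilbert space."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def fit_kernel_expansion(X, y, lam=1e-3, gamma=1.0):
    """Fit f(x) = sum_i beta_i * K(x_i, x) by regularized least squares.
    This shows the expansion-on-kernels form only; the patented method
    instead finds support vectors so that most coefficients vanish and
    the expansion has far fewer terms than observations."""
    K = np.array([[rbf(a, b, gamma) for b in X] for a in X])
    beta = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda x: sum(b_i * rbf(x_i, x, gamma)
                         for b_i, x_i in zip(beta, X))
```

The fitted function interpolates the measurements up to the regularization error, while the expansion itself is exactly the kernel form named in the abstract.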
  • Patent number: 5649068
    Abstract: A method is described wherein the dual-representation mathematical principle is used for the design of decision systems. This principle permits some decision functions that are weighted sums of predefined functions to be represented as memory-based decision functions. Using this principle, a memory-based decision system with an optimum margin is designed, wherein the weights and prototypes of training patterns of a memory-based decision function are determined such that the corresponding dual decision function satisfies the criterion of margin optimality.
    Type: Grant
    Filed: May 16, 1996
    Date of Patent: July 15, 1997
    Assignee: Lucent Technologies Inc.
    Inventors: Bernard Boser, Isabelle Guyon, Vladimir Vapnik
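The dual-representation principle in the abstract above, a decision function expressed as a weighted sum over stored training prototypes, can be illustrated with a kernel perceptron. This sketch sets the weights by mistake counting and does not perform the patent's margin optimization:

```python
import numpy as np

def dual_perceptron(X, y, K, epochs=10):
    """Dual ("memory-based") form of a linear-threshold learner: the
    decision function is a weighted sum over stored prototypes,
        f(x) = sign(sum_i alpha_i * y_i * K(x_i, x)).
    Each alpha_i counts how often prototype i was misclassified."""
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for j, (xj, yj) in enumerate(zip(X, y)):
            score = sum(a * yi * K(xi, xj)
                        for a, xi, yi in zip(alpha, X, y))
            if yj * score <= 0:
                alpha[j] += 1.0   # upweight the misclassified prototype
    return lambda x: np.sign(sum(a * yi * K(xi, x)
                                 for a, xi, yi in zip(alpha, X, y)))
```

Because the learner only touches the data through K, the same code runs with any kernel; the patent's contribution is choosing the weights so the dual decision function has optimal margin.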
  • Patent number: 5640492
    Abstract: A soft margin classifier and method are disclosed for processing input data of a training set into classes separated by soft margins adjacent to optimal hyperplanes. Slack variables are provided, allowing erroneous or difficult data in the training set to be taken into account in determining the optimal hyperplane. Inseparable data in the training set are separated, without removal of the data obstructing separation, by determining the optimal hyperplane having the minimal number of erroneous classifications of the obstructing data. The parameters of the optimal hyperplane generated from the training set determine decision functions, or separators, for classifying empirical data.
    Type: Grant
    Filed: June 30, 1994
    Date of Patent: June 17, 1997
    Assignee: Lucent Technologies Inc.
    Inventors: Corinna Cortes, Vladimir Vapnik
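The slack-variable formulation in the abstract above is equivalent to minimizing a hinge-loss objective, which a short subgradient-descent sketch can illustrate. The learning rate, epoch count, and solver choice here are assumptions; the patent poses the problem as a quadratic program:

```python
import numpy as np

def train_soft_margin(X, y, C=1.0, epochs=200, lr=0.01):
    """Soft-margin linear SVM via subgradient descent on
        0.5 * ||w||^2 + C * sum_i max(0, 1 - y_i * (w.x_i + b)),
    where each slack variable of the QP corresponds to the hinge term
    xi_i = max(0, 1 - y_i * (w.x_i + b))."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                 # points with nonzero slack
        grad_w = w - C * (y[viol][:, None] * X[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

Erroneous or inseparable points simply contribute slack instead of blocking the optimization, which is the behavior the abstract describes.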