Patents by Inventor Igor Durdanovic

Igor Durdanovic has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7406450
    Abstract: Disclosed is a parallel support vector machine technique for solving problems with a large set of training data where the kernel computation, as well as the kernel cache and the training data, are spread over a number of distributed machines or processors. A plurality of processing nodes are used to train a support vector machine based on a set of training data. Each of the processing nodes selects a local working set of training data based on data local to the processing node, for example a local subset of gradients. Each node transmits selected data related to the working set (e.g., gradients having a maximum value) and receives an identification of a global working set of training data. The processing node optimizes the global working set of training data and updates a portion of the gradients of the global working set of training data. The updating of a portion of the gradients may include generating a portion of a kernel matrix. These steps are repeated until a convergence condition is met.
    Type: Grant
    Filed: February 20, 2006
    Date of Patent: July 29, 2008
    Assignee: NEC Laboratories America, Inc.
    Inventors: Hans Peter Graf, Igor Durdanovic, Eric Cosatto, Vladimir Vapnik
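    The distributed training loop described in the abstract above can be sketched in simplified form: each node picks working-set candidates from its local gradients, the candidates are merged into a global working set, that set is optimized, and every node updates its own slice of the gradient using only the kernel rows the step touched. This is a minimal single-process simulation, not the patented implementation; the partitioning scheme, the crude projected-gradient step (standing in for the actual working-set solver), and all variable names are illustrative assumptions.

    ```python
    import numpy as np

    # Toy setup: n training points whose gradient entries are partitioned
    # across simulated "nodes" (all in one process here for illustration).
    rng = np.random.default_rng(0)
    n, n_nodes, ws_size = 12, 3, 2
    X = rng.normal(size=(n, 2))              # training inputs
    y = rng.choice([-1.0, 1.0], size=n)      # labels
    grad = -np.ones(n)                       # dual gradient Q @ alpha - 1, init at alpha = 0
    alpha = np.zeros(n)
    C, tol, step = 1.0, 1e-3, 0.05

    def q_rows(idx):
        """Compute only the kernel-matrix rows needed this iteration
        (linear kernel, with label signs folded in: Q_ij = y_i y_j x_i.x_j)."""
        return (y[idx, None] * X[idx]) @ (y[:, None] * X).T

    parts = np.array_split(np.arange(n), n_nodes)  # each node's local indices

    for it in range(100):
        # 1) Each node selects local candidates: the largest-magnitude
        #    gradients among its own data (a stand-in for the local rule).
        candidates = []
        for local in parts:
            order = local[np.argsort(-np.abs(grad[local]))]
            candidates.extend(order[:ws_size])
        # 2) "Transmit" candidates and form the global working set from them.
        cand = np.array(candidates)
        ws = cand[np.argsort(-np.abs(grad[cand]))][:ws_size]
        if np.max(np.abs(grad[ws])) < tol:     # convergence condition
            break
        # 3) Optimize over the working set (here: one projected-gradient
        #    step, clipped to the box constraint 0 <= alpha <= C).
        delta = np.clip(alpha[ws] - step * grad[ws], 0.0, C) - alpha[ws]
        alpha[ws] += delta
        # 4) Each node updates its portion of the gradient, generating only
        #    the needed portion of the kernel matrix.
        K_ws = q_rows(ws)
        for local in parts:
            grad[local] += delta @ K_ws[:, local]
    ```

    In the real distributed setting, step 2 would be a reduction over the network and step 4 would run concurrently on each machine against its locally cached kernel rows; the sketch only mirrors the data flow.
    
    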
  • Publication number: 20070094170
    Abstract: Identical to the abstract of patent 7406450 above.
    Type: Application
    Filed: February 20, 2006
    Publication date: April 26, 2007
    Applicant: NEC LABORATORIES AMERICA, INC.
    Inventors: Hans Graf, Igor Durdanovic, Eric Cosatto, Vladimir Vapnik