Patents by Inventor Jeremy Zieg Kolter

Jeremy Zieg Kolter has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210165391
    Abstract: A computer-implemented method is described for training a classifier, particularly a binary classifier, to classify input signals while optimizing performance according to a non-decomposable metric that measures the alignment between the classifications corresponding to input signals of a set of training data and the corresponding predicted classifications obtained from the classifier. The method includes providing weighting factors that characterize how the non-decomposable metric depends on a plurality of terms from a confusion matrix of the classifications and the predicted classifications, and training the classifier depending on the provided weighting factors.
    Type: Application
    Filed: November 17, 2020
    Publication date: June 3, 2021
    Inventors: Rizal Fathony, Frank Schmidt, Jeremy Zieg Kolter
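The confusion-matrix weighting idea above can be sketched in a few lines of Python. The hinge surrogate, function names, and example weights below are illustrative assumptions, not taken from the patent; the sketch only shows how a non-decomposable metric (here F1) is built from confusion-matrix terms and how per-class weighting factors enter a trainable per-example loss:

```python
import numpy as np

def confusion_terms(y_true, y_pred):
    """Counts of the four confusion-matrix cells for binary labels in {0, 1}."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    return tp, fp, fn, tn

def f1_from_terms(tp, fp, fn, tn):
    """F1 depends jointly on several confusion-matrix terms, so it cannot be
    written as a sum of per-example losses: it is non-decomposable."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def weighted_surrogate_loss(scores, y_true, w_pos, w_neg):
    """Trainable surrogate: a per-example hinge loss in which the weighting
    factors (w_pos, w_neg) encode how the target metric depends on the
    confusion-matrix terms."""
    margins = np.where(y_true == 1, scores, -scores)
    weights = np.where(y_true == 1, w_pos, w_neg)
    return float(np.mean(weights * np.maximum(0.0, 1.0 - margins)))
```

Training then amounts to minimizing this weighted surrogate with respect to the classifier parameters that produce the scores.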
  • Publication number: 20210089879
    Abstract: Performing an adversarial attack on a neural network classifier is described. A dataset of input-output pairs is constructed, each input element of the input-output pairs randomly chosen from a search space, each output element of the input-output pairs indicating a prediction output of the neural network classifier for the corresponding input element. A Gaussian process is utilized on the dataset of input-output pairs to optimize an acquisition function to find a best perturbation input element from the dataset. The best perturbation input element is upsampled to generate an upsampled best input element. The upsampled best input element is added to an original input to generate a candidate input. The neural network classifier is queried to determine a classifier prediction for the candidate input. A score for the classifier prediction is computed. The candidate input is accepted as a successful adversarial attack responsive to the classifier prediction being incorrect.
    Type: Application
    Filed: September 24, 2019
    Publication date: March 25, 2021
    Inventors: Satya Narayan SHUKLA, Anit Kumar SAHU, Devin WILLMOTT, Jeremy Zieg KOLTER
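A drastically simplified sketch of the query loop described in this abstract. Plain random search stands in for the patent's Gaussian-process surrogate and acquisition function, and a toy mean-threshold model stands in for the black-box classifier; all names here are hypothetical. What survives is the structure: search a low-dimensional perturbation space, upsample the candidate, add it to the original input, query the model, and accept on misclassification:

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample(delta_small, factor):
    """Nearest-neighbour upsampling of a low-dimensional perturbation."""
    return np.kron(delta_small, np.ones((factor, factor)))

def toy_classifier(x):
    """Stand-in black-box classifier: label 1 iff the mean pixel is positive."""
    return int(x.mean() > 0)

def black_box_attack(x, true_label, eps=0.5, dim=2, factor=2, queries=50):
    """Search a low-dimensional space of sign perturbations, upsample the
    candidate, and query the model (random search replaces the patent's
    Gaussian-process acquisition step for brevity)."""
    for _ in range(queries):
        delta = eps * rng.choice([-1.0, 1.0], size=(dim, dim))
        candidate = x + upsample(delta, factor)
        if toy_classifier(candidate) != true_label:
            return candidate  # successful adversarial example
    return None

image = 0.1 * np.ones((4, 4))
label = toy_classifier(image)
adv = black_box_attack(image, label)
```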
  • Publication number: 20210089894
    Abstract: A system and computer-implemented method for learning rules from a database comprising entities and relations between the entities, wherein an entity is either a constant or a numerical value, a relation between a constant and a numerical value is a numerical relation, and a relation between two constants is a non-numerical relation. The method includes: deriving aggregate values from said numerical and/or non-numerical relations; deriving non-numerical relations from said aggregate values; adding said derived non-numerical relations to the database; constructing differentiable operators, wherein a differentiable operator refers to a non-numerical or a derived non-numerical relation of the database; and extracting rules from said differentiable operators.
    Type: Application
    Filed: August 14, 2020
    Publication date: March 25, 2021
    Inventors: Csaba Domokos, Daria Stepanova, Jeremy Zieg Kolter, Po-wei Wang
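The first three steps of the method above (aggregate values from numerical relations, new non-numerical relations from the aggregates, added back into the database) can be sketched with a tiny triple store. The facts, relation names, and threshold rule are invented for illustration; the differentiable-operator and rule-extraction steps are omitted:

```python
# Hypothetical tiny knowledge base of (subject, relation, object) triples.
facts = [
    ("alice", "hasIncome", 60000),       # numerical relation
    ("bob", "hasIncome", 20000),
    ("alice", "livesIn", "springfield"),  # non-numerical relation
    ("bob", "livesIn", "springfield"),
]

def aggregate_avg(facts, relation):
    """Derive an aggregate value from a numerical relation."""
    vals = [o for (_, r, o) in facts if r == relation]
    return sum(vals) / len(vals)

def derive_relation(facts, relation, threshold, new_relation):
    """Turn the aggregate into derived non-numerical relations and add
    them to the database."""
    derived = [(s, new_relation, "true")
               for (s, r, o) in facts
               if r == relation and o > threshold]
    return facts + derived

avg_income = aggregate_avg(facts, "hasIncome")
facts = derive_relation(facts, "hasIncome", avg_income, "aboveAvgIncome")
```

Rules such as "aboveAvgIncome(X) ← hasIncome(X, V), V > avg" would then be extracted from differentiable operators built over relations like the derived one.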
  • Publication number: 20210089866
    Abstract: Markov random field parameters are identified for use in covariance modeling of the correlation between gradient terms of a loss function of the classifier. A subset of images is sampled, from a dataset of images, according to a normal distribution to estimate the gradient terms. Black-box gradient estimation is used to infer values of the parameters of the Markov random field according to the sampling. Fourier basis vectors are generated from the inferred values. An original image is perturbed using the Fourier basis vectors to obtain loss function values. An estimate of the gradient is obtained from the loss function values. An image perturbation is created using the estimated gradient. The image perturbation is added to an original input to generate a candidate adversarial input that maximizes the loss of the classifier in identifying the image. The neural network classifier is queried to determine a classifier prediction for the candidate adversarial input.
    Type: Application
    Filed: September 24, 2019
    Publication date: March 25, 2021
    Inventors: Jeremy Zieg KOLTER, Anit Kumar SAHU
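The gradient-estimation step above can be sketched as follows. The Markov-random-field fitting that selects which frequencies matter is omitted; a fixed set of low-frequency cosine basis vectors, a toy linear black-box loss, and the function names are all assumptions for illustration:

```python
import numpy as np

def fourier_basis(n, k):
    """First k normalized cosine basis vectors of length n (low-frequency
    directions; the patent derives the relevant frequencies from a fitted
    Markov random field, which is omitted here)."""
    t = np.arange(n)
    basis = [np.cos(np.pi * (f + 0.5) * t / n) for f in range(k)]
    return [b / np.linalg.norm(b) for b in basis]

def estimate_gradient(loss, x, basis, h=1e-4):
    """Perturb x along each basis direction, read off loss values, and
    recombine the finite differences into a full gradient estimate."""
    g = np.zeros_like(x)
    for b in basis:
        d = (loss(x + h * b) - loss(x - h * b)) / (2 * h)
        g += d * b
    return g

# Toy black-box loss with a known gradient: L(x) = x . c
c = np.linspace(0.0, 1.0, 16)
loss = lambda x: float(x @ c)
x0 = np.zeros(16)
g_hat = estimate_gradient(loss, x0, fourier_basis(16, 8))
perturbation = 0.1 * np.sign(g_hat)  # sign step from the estimated gradient
```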
  • Publication number: 20210089842
    Abstract: A method to classify sensor data with improved robustness against label noise. A predicted label for a novel input is computed by estimating the label which is most likely under repeated application of a base training function to the training labels, with noise incorporated according to a noise level, and subsequent application of a base classifier, configured according to the resulting base prediction function, to the novel input.
    Type: Application
    Filed: September 18, 2020
    Publication date: March 25, 2021
    Inventors: Elan Kennar Rosenfeld, Ezra Maurice Winston, Frank Schmidt, Jeremy Zieg Kolter
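The smoothing idea above can be sketched directly: repeatedly inject label noise into the training set, retrain a base classifier each time, and return the label that is most likely across the repetitions. The nearest-centroid base classifier, toy data, and noise level below are illustrative assumptions, not the patent's:

```python
import numpy as np

rng = np.random.default_rng(1)

def base_train_and_predict(X, y, x_new):
    """Base classifier: predict the label of the nearest class centroid."""
    centroids = {c: X[y == c].mean(axis=0) for c in (0, 1) if np.any(y == c)}
    return min(centroids, key=lambda c: np.linalg.norm(x_new - centroids[c]))

def smoothed_predict(X, y, x_new, noise=0.2, rounds=101):
    """Flip each training label with probability `noise`, retrain the base
    classifier, and return the majority prediction over all rounds."""
    votes = 0
    for _ in range(rounds):
        flips = rng.random(len(y)) < noise
        y_noisy = np.where(flips, 1 - y, y)
        votes += base_train_and_predict(X, y_noisy, x_new)
    return int(votes > rounds / 2)

X = np.array([[0.0], [0.1], [0.9], [1.0]])
y = np.array([0, 0, 1, 1])
pred_low = smoothed_predict(X, y, np.array([0.05]))
pred_high = smoothed_predict(X, y, np.array([0.95]))
```

The majority vote makes the final prediction stable under a bounded amount of label corruption, which is the source of the robustness claim.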
  • Publication number: 20210081505
    Abstract: A simulation using a graph transformer force field (GTFF) includes converting a molecular dynamics snapshot of elements within a multi-element system into a graph with atoms as nodes of the graph; defining a matrix such that each column of the matrix represents a node in the graph; defining a distance matrix according to a set of relative positions of each of the atoms; iterating through the GTFF using an attention mechanism, operating on the matrix and augmented by incorporating the distance matrix, to pass hidden state from a current layer of the GTFF to the next layer; performing a combination over the columns of the matrix to produce a scalar molecular energy; making a backward pass through the GTFF, iteratively calculating derivatives at each of the layers of the GTFF to compute a prediction of the force acting on each atom; and returning the prediction of the force acting on each atom.
    Type: Application
    Filed: September 12, 2019
    Publication date: March 18, 2021
    Inventors: Shaojie BAI, Jeremy Zieg KOLTER, Mordechai KORNBLUTH, Jonathan MAILOA, Devin WILLMOTT
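The energy-then-forces structure of the method above can be sketched with a toy system: a scalar energy is computed from atom positions, and the force on each atom is the negative derivative of that energy. A harmonic pair potential and finite differences stand in for the patent's graph transformer and its backward pass; both substitutions are assumptions for illustration:

```python
import numpy as np

def energy(positions, r0=1.0):
    """Toy scalar molecular energy: harmonic terms over all atom pairs.
    (The patent instead produces this scalar with a graph transformer whose
    attention mechanism is augmented by the distance matrix.)"""
    e = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(positions[i] - positions[j])
            e += 0.5 * (d - r0) ** 2
    return e

def forces(positions, h=1e-5):
    """Force on each atom = -dE/dposition, here by central finite differences
    (the patent's backward pass obtains the same derivatives by iterating
    backward through the network layers)."""
    f = np.zeros_like(positions)
    for idx in np.ndindex(positions.shape):
        p_plus, p_minus = positions.copy(), positions.copy()
        p_plus[idx] += h
        p_minus[idx] -= h
        f[idx] = -(energy(p_plus) - energy(p_minus)) / (2 * h)
    return f

atoms = np.array([[0.0, 0.0], [1.5, 0.0]])  # one stretched bond (r0 = 1.0)
f = forces(atoms)  # each atom is pulled toward the other
```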
  • Publication number: 20210042606
    Abstract: Some embodiments are directed to a neural network comprising an iterative function z[i+1] = f(z[i], θ, c(x)). Such an iterative function is known in the field of machine learning to be representable by a stack of layers which have mutually shared weights. According to some embodiments, the stack of layers may during training be replaced by a numerical root-finding algorithm that finds an equilibrium of the iterative function, i.e., a point at which a further execution of the iterative function would not substantially change its output. Effectively, the stack of layers is replaced by a numerical equilibrium solver. The use of the numerical root-finding algorithm is demonstrated to greatly reduce the memory footprint during training while achieving accuracy similar to state-of-the-art models.
    Type: Application
    Filed: August 5, 2020
    Publication date: February 11, 2021
    Inventors: Shaojie BAI, Jeremy Zieg KOLTER, Michael SCHOBER
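The equilibrium idea above can be sketched concretely: instead of applying a shared-weight layer many times, solve for the fixed point z* = f(z*, θ, c(x)) directly. The tiny tanh layer and fixed weights below are illustrative (chosen so the iteration is a contraction), and plain fixed-point iteration stands in for more sophisticated root finders:

```python
import numpy as np

# Small fixed weights; ||W|| < 1 so the iteration provably converges.
W = np.array([[0.2, -0.1, 0.0],
              [0.1, 0.3, -0.2],
              [0.0, 0.1, 0.2]])
U = np.eye(3)

def f(z, x):
    """One application of the shared-weight layer f(z, theta, c(x))."""
    return np.tanh(W @ z + U @ x)

def equilibrium(x, tol=1e-10, max_iter=500):
    """Find z* with z* = f(z*, x) by fixed-point iteration instead of
    stacking many identical layers (Newton-style root finders on
    g(z) = f(z, x) - z would also work)."""
    z = np.zeros(3)
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

x = np.array([0.5, -0.2, 0.1])
z_star = equilibrium(x)  # applying f once more barely changes z_star
```

Because only the equilibrium (not every intermediate layer activation) must be stored, the memory footprint during training drops, which is the advantage the abstract claims.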
  • Publication number: 20210042457
    Abstract: A system and computer-implemented method are provided for training a dynamics model to learn the dynamics of a physical system. The dynamics model is trained to infer a future state of the physical system and/or its environment from a current state. Instead of learning a dynamics model and attempting to verify its stability separately, the learnable dynamics model comprises a learnable Lyapunov function which is jointly learned together with the nominal dynamics of the physical system, so that the learned model is inherently globally stable. This makes the learned dynamics model highly suitable for real-life applications in which a physical system may assume a state that was unseen during training.
    Type: Application
    Filed: July 20, 2020
    Publication date: February 11, 2021
    Inventors: Gaurav Manek, Jeremy Zieg Kolter, Julia Vinogradska
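The stability-by-construction idea above can be sketched as follows: pair nominal dynamics with a Lyapunov function V and project the dynamics so V always decreases along trajectories. Here V is a fixed quadratic and the nominal dynamics are an arbitrary smooth map, both illustrative stand-ins for the jointly learned components in the patent:

```python
import numpy as np

def V(x):
    """Simple fixed Lyapunov function (the patent learns it jointly)."""
    return 0.5 * float(x @ x)

def grad_V(x):
    return x

def f_nominal(x):
    """Stand-in for learned nominal dynamics: an arbitrary smooth map."""
    return np.array([-x[0] + x[1] ** 2, x[0] * x[1]])

def f_stable(x, alpha=0.5):
    """Project the nominal dynamics so that V decreases along trajectories:
    whenever grad V . f_nom > -alpha * V, remove the violating component.
    The resulting model is globally stable by construction."""
    g = grad_V(x)
    viol = float(g @ f_nominal(x)) + alpha * V(x)
    if viol > 0 and float(g @ g) > 0:
        return f_nominal(x) - (viol / float(g @ g)) * g
    return f_nominal(x)
```

After the projection, grad V . f_stable(x) <= -alpha * V(x) holds at every state, including states never seen during training, which is the sense in which the model is inherently globally stable.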
  • Publication number: 20200372364
    Abstract: A system for applying a neural network to an input instance. The neural network includes an optimization layer for determining values of one or more output neurons from values of one or more input neurons by a joint optimization parametrized by one or more parameters. An input instance is obtained. The values of the one or more input neurons to the optimization layer are obtained and input vectors for the one or more input neurons are determined therefrom. Output vectors for the one or more output neurons are computed from the determined input vectors by jointly optimizing at least the output vectors with respect to the input vectors to solve a semidefinite program defined by the one or more parameters. The values of the one or more output neurons are determined from the respective computed output vectors.
    Type: Application
    Filed: May 12, 2020
    Publication date: November 26, 2020
    Inventors: Csaba Domokos, Jeremy Zieg Kolter, Po-wei Wang, Priya L. Donti
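The joint optimization over output vectors described above can be sketched with a low-rank coordinate-descent scheme for a small semidefinite program. The coupling matrix, rank, and update rule below are illustrative assumptions; in the patent, input neurons would additionally fix some of the vectors, which is omitted here:

```python
import numpy as np

def optimize_layer(S, k=3, iters=100, seed=0):
    """Jointly optimize unit vectors v_i to minimize sum_ij S_ij v_i . v_j,
    a low-rank coordinate-descent scheme for the semidefinite program
    parameterized by S."""
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    V = rng.standard_normal((n, k))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    for _ in range(iters):
        for i in range(n):
            g = S[i] @ V - S[i, i] * V[i]  # sum over j != i of S_ij v_j
            norm = np.linalg.norm(g)
            if norm > 1e-12:
                V[i] = -g / norm           # exact minimizer for coordinate i
    return V

# Two neurons with positive coupling: their vectors should end up opposed.
S = np.array([[0.0, 1.0],
              [1.0, 0.0]])
V = optimize_layer(S)
value = float(V[0] @ V[1])  # close to -1 at the optimum
```

Output-neuron values would then be read off from the optimized vectors, e.g. from their alignment with a designated reference vector.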
  • Publication number: 20200364553
    Abstract: Some embodiments are directed to a neural network training device for training a neural network. At least one of the neural network layers is a projection layer, which projects a layer input vector (x) to a layer output vector (y) whose elements sum to a summing parameter (k).
    Type: Application
    Filed: May 17, 2019
    Publication date: November 19, 2020
    Inventors: Brandon David Amos, Vladlen Koltun, Jeremy Zieg Kolter, Frank Rüdiger Schmidt
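One way such a projection layer can be realized (an illustrative construction, not necessarily the patent's) is y = sigmoid(x + nu) with a single scalar shift nu chosen so that the outputs sum exactly to k; the shift can be found by bisection because the sum is monotone in nu:

```python
import numpy as np

def project_sum_k(x, k, iters=60):
    """Map layer input x to y = sigmoid(x + nu), with the scalar nu chosen
    by bisection so that the entries of y sum to k (each y_i stays in (0, 1),
    and the sum of sigmoids is strictly increasing in nu)."""
    lo, hi = -20.0 - float(x.max()), 20.0 - float(x.min())
    for _ in range(iters):
        nu = 0.5 * (lo + hi)
        s = 1.0 / (1.0 + np.exp(-(x + nu)))
        if s.sum() > k:
            hi = nu
        else:
            lo = nu
    return 1.0 / (1.0 + np.exp(-(x + 0.5 * (lo + hi))))

x = np.array([3.0, 1.0, -1.0, -3.0])
y = project_sum_k(x, 2.0)  # y sums to 2 and preserves the ordering of x
```

Because the map is smooth in x, gradients can flow through the layer during training.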
  • Publication number: 20200364616
    Abstract: A system for training a classification model to be robust against perturbations of multiple perturbation types. A perturbation type defines a set of allowed perturbations. The classification model is trained by, in an outer iteration, selecting a set of training instances of a training dataset; selecting, among perturbations allowed by the multiple perturbation types, one or more perturbations for perturbing the selected training instances to maximize a loss function; and updating the set of parameters of the classification model to decrease the loss for the perturbed instances. A perturbation is determined by, in an inner iteration, determining updated perturbations allowed by respective perturbation types of the multiple perturbation types and selecting an updated perturbation that most increases the loss of the classification model.
    Type: Application
    Filed: April 24, 2020
    Publication date: November 19, 2020
    Inventors: Eric Wong, Frank Schmidt, Jeremy Zieg Kolter, Pratyush Maini
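The inner iteration described above (try an allowed step per perturbation type, keep the one that most increases the loss) can be sketched with a toy linear model and single-step updates. The margin loss, step sizes, and restriction to L-infinity and L-2 balls are illustrative assumptions; the patent iterates this selection and then trains against the chosen perturbations:

```python
import numpy as np

def loss(x, w, y):
    """Toy margin loss of a linear classifier; larger means worse."""
    return max(0.0, 1.0 - y * float(w @ x))

def grad_x(x, w, y):
    """Gradient of the loss with respect to the input x."""
    return -y * w if loss(x, w, y) > 0 else np.zeros_like(w)

def worst_case_perturbation(x, w, y, eps_inf=0.1, eps_2=0.3):
    """One inner iteration: take the allowed steepest-ascent step for each
    perturbation type (L-inf and L-2 balls here) and keep whichever
    increases the loss the most."""
    g = grad_x(x, w, y)
    candidates = [eps_inf * np.sign(g)]
    n = np.linalg.norm(g)
    candidates.append(eps_2 * g / n if n > 0 else np.zeros_like(g))
    return max(candidates, key=lambda d: loss(x + d, w, y))

w = np.array([1.0, 2.0])
y = 1
x = np.array([0.3, 0.2])
delta = worst_case_perturbation(x, w, y)  # the L-2 step wins for this x
```

The outer loop of the method would then update the classifier parameters to decrease the loss on x + delta across a batch of training instances.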