Patents by Inventor Marc Law

Marc Law has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250077624
    Abstract: In various examples, systems and methods are disclosed relating to graph generation. One system includes one or more processing circuits configured to receive a first data structure including one or more relationships between a plurality of components. The one or more processing circuits are further configured to encode, using a predefined function, a second data structure determined based on the first data structure to generate a noisy representation of the second data structure. The one or more processing circuits are further configured to decode, using one or more models, the first data structure based on feature extraction and pattern analysis of the noisy representation.
    Type: Application
    Filed: November 20, 2023
    Publication date: March 6, 2025
    Applicant: NVIDIA Corporation
    Inventors: Marc Law, Karsten Julian Kreis, Haggai Maron
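The encode-then-decode pattern described in publication 20250077624 (perturb a derived data structure with a predefined function, then recover the original with a learned model) can be illustrated with a toy numpy sketch. The noise level, threshold, and function names here are illustrative choices, not details from the filing, and a simple threshold stands in for the learned decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# First data structure: adjacency matrix of a small undirected graph,
# encoding relationships between three components.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

def encode(A, sigma=0.1):
    """Predefined (fixed, not learned) function: perturb with Gaussian noise
    to produce a noisy representation of the structure."""
    return A + sigma * rng.standard_normal(A.shape)

def decode(Z):
    """Stand-in for a learned model: symmetrize (pattern analysis that
    enforces undirectedness), then threshold to recover edges."""
    Z = 0.5 * (Z + Z.T)
    return (Z > 0.5).astype(float)

noisy = encode(A)
A_hat = decode(noisy)
```

With small noise the original relationships are recovered exactly; a real decoder would be a trained network rather than a threshold.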
  • Publication number: 20240296205
    Abstract: Approaches presented herein provide for unsupervised domain transfer learning. In particular, three neural networks can be trained together using at least labeled data from a first domain and unlabeled data from a second domain. Features of the data are extracted using a feature extraction network. A first classifier network uses these features to classify the data, while a second classifier network uses these features to determine the relevant domain. A combined loss function is used to optimize the networks, with the goal that the feature extraction network extracts features that the first classifier network can use to accurately classify the data, while preventing the second classifier from determining the domain of the data. Such optimization enables object classification to be performed with high accuracy for either domain, even though there may have been little to no labeled training data for the second domain.
    Type: Application
    Filed: May 6, 2024
    Publication date: September 5, 2024
    Inventors: David Acuna Marrero, Guojun Zhang, Marc Law, Sanja Fidler
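The combined loss in publication 20240296205 is the domain-adversarial setup: the feature extractor minimizes the task loss while maximizing the domain classifier's confusion. A common implementation device for this is a gradient-reversal step between the feature extractor and the domain head; the abstract does not name it, so this numpy sketch (with made-up gradient values and a made-up lambda) is only one plausible realization:

```python
import numpy as np

def grad_reverse(grad, lam=1.0):
    """Backward pass of a gradient-reversal layer: identity in the forward
    pass, flipped and scaled gradient in the backward pass."""
    return -lam * grad

# Suppose backprop has produced these gradients w.r.t. the shared features:
g_task = np.array([0.1, 0.4])    # from the first (task) classifier
g_dom = np.array([0.3, -0.2])    # from the second (domain) classifier

# Gradient reaching the feature extractor: follow the task gradient, but
# ascend the domain loss, pushing features toward domain invariance.
g_features = g_task + grad_reverse(g_dom, lam=0.5)
```

In a full implementation the two heads and the extractor are optimized jointly; the sign flip is the only non-standard piece.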
  • Patent number: 11989262
    Abstract: Approaches presented herein provide for unsupervised domain transfer learning. In particular, three neural networks can be trained together using at least labeled data from a first domain and unlabeled data from a second domain. Features of the data are extracted using a feature extraction network. A first classifier network uses these features to classify the data, while a second classifier network uses these features to determine the relevant domain. A combined loss function is used to optimize the networks, with the goal that the feature extraction network extracts features that the first classifier network can use to accurately classify the data, while preventing the second classifier from determining the domain of the data. Such optimization enables object classification to be performed with high accuracy for either domain, even though there may have been little to no labeled training data for the second domain.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: May 21, 2024
    Assignee: NVIDIA Corporation
    Inventors: David Acuna Marrero, Guojun Zhang, Marc Law, Sanja Fidler
  • Publication number: 20230385687
    Abstract: Approaches for training data set size estimation for machine learning model systems and applications are described. Examples include a machine learning model training system that estimates target data requirements for training a machine learning model, given an approximate relationship between training data set size and model performance using one or more validation score estimation functions. To derive a validation score estimation function, a regression data set is generated from training data, and subsets of the regression data set are used to train the machine learning model. A validation score is computed for the subsets and used to compute regression function parameters that fit the selected regression function to the training data set. The validation score estimation function is then solved to provide an estimate of the number of additional training samples needed for the validation score estimation function to meet or exceed a target validation score.
    Type: Application
    Filed: May 31, 2022
    Publication date: November 30, 2023
    Inventors: Rafid Reza Mahmood, James Robert Lucas, David Jesus Acuna Marrero, Daiqing Li, Jonah Philion, Jose Manuel Alvarez Lopez, Zhiding Yu, Sanja Fidler, Marc Law
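The curve-fit-and-solve procedure in publication 20230385687 can be sketched with a power-law learning curve, a common choice for relating training set size to validation score (the filing does not commit to this functional form, and the synthetic scores below are made up to fit it exactly):

```python
import numpy as np

# Observed (training set size, validation score) pairs from training on
# subsets of the regression data set. Synthetic: they follow
# v(n) = 1 - a * n**(-b) with a=2, b=0.5 exactly.
sizes = np.array([100.0, 400.0, 1600.0])
scores = np.array([0.80, 0.90, 0.95])

# Fit the regression function in log space: log(1 - v) = log(a) - b*log(n).
slope, intercept = np.polyfit(np.log(sizes), np.log(1.0 - scores), 1)
a, b = np.exp(intercept), -slope

# Solve the fitted validation score estimation function for the size
# needed to meet or exceed a target validation score.
target = 0.98
n_star = (a / (1.0 - target)) ** (1.0 / b)
```

The estimate of additional samples needed is then `n_star` minus the data already collected.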
  • Publication number: 20230376849
    Abstract: In various examples, optimal training data set sizes are estimated for machine learning model systems and applications. Systems and methods are disclosed that estimate an amount of data to include in a training data set, where the training data set is then used to train one or more machine learning models to reach a target validation performance. To estimate the amount of training data, subsets of an initial training data set may be used to train the machine learning model(s) in order to determine estimates for the minimum amount of training data needed to train the machine learning model(s) to reach the target validation performance. The estimates may then be used to generate one or more functions, such as a cumulative density function and/or a probability density function, wherein the function(s) is then used to estimate the amount of training data needed to train the machine learning model(s).
    Type: Application
    Filed: May 16, 2023
    Publication date: November 23, 2023
    Inventors: Rafid Reza Mahmood, Marc Law, James Robert Lucas, Zhiding Yu, Jose Manuel Alvarez Lopez, Sanja Fidler
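The distributional step in publication 20230376849 (turning repeated size estimates into a CDF and reading off a recommendation) can be sketched as follows; the estimates and the 90% confidence level are invented for illustration:

```python
import numpy as np

# Suppose repeated fits (e.g., over different training subsets) each
# produced an estimate of the minimum training-set size for the target score.
estimates = np.array([8000.0, 9000.0, 10000.0, 11000.0, 12000.0])

def cdf(n):
    """Empirical CDF: fraction of estimates at or below a candidate size."""
    return float(np.mean(estimates <= n))

# A conservative recommendation: the size that 90% of estimates fall under.
n_recommend = float(np.quantile(estimates, 0.9))
```

Collecting `n_recommend` samples then suffices under 90% of the fitted scenarios, by construction of the empirical CDF.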
  • Publication number: 20230244985
    Abstract: In various examples, a representative subset of data points are queried or selected using integer programming to minimize the Wasserstein distance between the selected data points and the data set from which they were selected. A Generalized Benders Decomposition (GBD) may be used to decompose and iteratively solve the minimization problem, providing a globally optimal solution (an identified subset of data points that match the distribution of their data set) within a threshold tolerance. Data selection may be accelerated by applying one or more constraints while iterating, such as optimality cuts that leverage properties of the Wasserstein distance and/or pruning constraints that reduce the search space of candidate data points. In an active learning implementation, a representative subset of unlabeled data points may be selected using GBD, labeled, and used to train machine learning model(s) over one or more cycles of active learning.
    Type: Application
    Filed: February 2, 2022
    Publication date: August 3, 2023
    Inventors: Rafid Reza Mahmood, Sanja Fidler, Marc Law
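Publication 20230244985 solves the subset-selection problem exactly with Generalized Benders Decomposition, which needs an integer-programming solver. As a much simpler surrogate that conveys the same objective, here is a greedy numpy sketch that picks points to minimize a 1-D Wasserstein-1 distance (approximated as the mean gap between quantile functions) between the subset and the full data set; the greedy strategy and the quantile approximation are illustrative substitutes, not the patented method:

```python
import numpy as np

def w1(sub, full, grid=100):
    """Approximate 1-D Wasserstein-1 distance between two empirical
    distributions: mean absolute gap between their quantile functions."""
    q = np.linspace(0.0, 1.0, grid)
    return float(np.abs(np.quantile(sub, q) - np.quantile(full, q)).mean())

def greedy_select(data, k):
    """Greedy surrogate for the integer program: at each step add the point
    that most reduces W1 between the selected subset and the full set."""
    chosen = []
    rest = list(range(len(data)))
    for _ in range(k):
        best = min(rest, key=lambda i: w1(data[chosen + [i]], data))
        chosen.append(best)
        rest.remove(best)
    return sorted(chosen)

# Bimodal toy data: a good 2-point subset takes one point from each cluster.
data = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
idx = greedy_select(data, 2)
```

In the active-learning setting of the abstract, the selected points would then be labeled and used for training; GBD replaces the greedy loop with a globally optimal (within tolerance) solve.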
  • Publication number: 20220391667
    Abstract: Approaches presented herein use ultrahyperbolic representations (e.g., non-Riemannian manifolds) in inferencing tasks—such as classification—performed by machine learning models (e.g., neural networks). For example, a machine learning model may receive, as input, a graph including data on which to perform an inferencing task. This input can be in the form of, for example, a set of nodes and an adjacency matrix, where the nodes can each correspond to a vector in the graph. The neural network can take this input and perform mapping in order to generate a representation of this graph using an ultrahyperbolic (e.g., non-parametric, pseudo- or semi-Riemannian) manifold. This manifold can be of constant non-zero curvature, generalizing to at least hyperbolic and elliptical geometries. Once such a manifold-based representation is obtained, the neural network can perform one or more inferencing tasks using this representation, such as for classification or animation.
    Type: Application
    Filed: May 27, 2022
    Publication date: December 8, 2022
    Inventor: Marc Law
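The ultrahyperbolic representations of publication 20220391667 live on constant nonzero-curvature quadrics of a pseudo-Riemannian (indefinite-signature) space. A minimal numpy sketch of the underlying bilinear form and a rescaling onto such a quadric, with hypothetical helper names and a specific signature chosen only for illustration:

```python
import numpy as np

def pseudo_inner(x, y, q=1):
    """Indefinite inner product with q negative ("time") dimensions: the
    bilinear form behind hyperbolic (q=1) and ultrahyperbolic (q>1) quadrics."""
    s = np.ones(len(x))
    s[:q] = -1.0
    return float(np.sum(s * x * y))

def project(x, q=1, curvature=-1.0):
    """Rescale x onto the quadric {z : <z, z> = 1/curvature}, when the sign
    of <x, x> is compatible with the target curvature."""
    v = pseudo_inner(x, x, q)
    return x * np.sqrt((1.0 / curvature) / v)

# Map an embedding onto the curvature -1 quadric (the hyperboloid when q=1).
x = np.array([2.0, 1.0, 1.0])
y = project(x, q=1, curvature=-1.0)
```

Choosing q > 1 generalizes beyond hyperbolic and elliptical geometry, which is the regime the abstract targets; a network would learn embeddings whose pairwise structure is measured with this form.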
  • Publication number: 20220383073
    Abstract: In various examples, machine learning models (MLMs) may be updated using multi-order gradients in order to train the MLMs, such as at least a first order gradient and any number of higher-order gradients. At least a first of the MLMs may be trained to generate a representation of features that is invariant to a first domain corresponding to a first dataset and a second domain corresponding to a second dataset. At least a second of the MLMs may be trained to classify whether the representation corresponds to the first domain or the second domain. At least a third of the MLMs may be trained to perform a task. The first dataset may correspond to a labeled source domain and the second dataset may correspond to an unlabeled target domain. The training may include transferring knowledge from the first domain to the second domain in a representation space.
    Type: Application
    Filed: May 27, 2022
    Publication date: December 1, 2022
    Inventors: David Jesus Acuna Marrero, Sanja Fidler, Marc Law, Guojun Zhang
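Publication 20220383073's abstract does not give the exact update rule that combines first- and higher-order gradients. As a purely illustrative sketch of what such a multi-order update can look like, here is a toy quadratic loss where the second-order term (a Hessian-vector product) is available in closed form; all matrices, vectors, and step sizes are made up:

```python
import numpy as np

# Toy quadratic loss L(w) = 0.5 * w @ A @ w, so gradients are analytic.
A = np.array([[2.0, 0.0],
              [0.0, 0.5]])   # Hessian of the toy loss
w = np.array([1.0, 1.0])

grad1 = A @ w        # first-order gradient of L at w
grad2 = A @ grad1    # Hessian-vector product: a second-order term

# Multi-order update: mix the first-order step with a higher-order correction.
eta, beta = 0.1, 0.05
w_new = w - eta * grad1 - beta * grad2
```

In the domain-invariance setting of the abstract, such higher-order terms would enter the adversarial objective coupling the three models rather than a single quadratic loss.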
  • Publication number: 20220108134
    Abstract: Approaches presented herein provide for unsupervised domain transfer learning. In particular, three neural networks can be trained together using at least labeled data from a first domain and unlabeled data from a second domain. Features of the data are extracted using a feature extraction network. A first classifier network uses these features to classify the data, while a second classifier network uses these features to determine the relevant domain. A combined loss function is used to optimize the networks, with the goal that the feature extraction network extracts features that the first classifier network can use to accurately classify the data, while preventing the second classifier from determining the domain of the data. Such optimization enables object classification to be performed with high accuracy for either domain, even though there may have been little to no labeled training data for the second domain.
    Type: Application
    Filed: April 9, 2021
    Publication date: April 7, 2022
    Inventors: David Acuna Marrero, Guojun Zhang, Marc Law, Sanja Fidler