Patents by Inventor Marc Law

Marc Law has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11989262
    Abstract: Approaches presented herein provide for unsupervised domain transfer learning. In particular, three neural networks can be trained together using at least labeled data from a first domain and unlabeled data from a second domain. Features of the data are extracted using a feature extraction network. A first classifier network uses these features to classify the data, while a second classifier network uses these features to determine the relevant domain. A combined loss function is used to optimize the networks, with the goal that the feature extraction network extract features that the first classifier network can use to classify the data accurately, while preventing the second classifier network from determining the domain of the data. Such optimization enables object classification to be performed with high accuracy for either domain, even though there may have been little to no labeled training data for the second domain.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: May 21, 2024
    Assignee: Nvidia Corporation
    Inventors: David Acuna Marrero, Guojun Zhang, Marc Law, Sanja Fidler
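The adversarial objective above hinges on reversing the domain classifier's gradient before it reaches the feature extractor. The following is a minimal numpy sketch of that mechanism on a hypothetical one-weight toy model (the function names and the scalar model are illustrative, not the patented system, which combines this with a task-classification loss over full networks):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def domain_grads(w, u, x, d, lam):
    """One backprop pass: feature z = w*x -> domain classifier p = sigmoid(u*z)."""
    z = w * x                 # feature extractor (forward)
    p = sigmoid(u * z)        # domain classifier (forward)
    g_logit = p - d           # d(BCE)/d(logit) for domain label d in {0, 1}
    g_u = g_logit * z         # ordinary gradient for the domain classifier
    g_z = g_logit * u         # gradient arriving back at the features
    g_w = g_z * x             # ordinary backprop into the feature weight ...
    g_w_rev = -lam * g_w      # ... reversed (and scaled) before the update,
    return g_u, g_w, g_w_rev  # so the features are pushed to fool the classifier
```

Updating `u` with `g_u` trains the domain classifier, while updating `w` with `g_w_rev` instead of `g_w` pushes the features toward domain invariance, matching the min-max structure of the combined loss.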
  • Publication number: 20240094741
    Abstract: Disclosed herein are a method and apparatus for automated following behind a lead vehicle. The lead vehicle navigates a path from a starting point to a destination. The lead vehicle and the following vehicle are connected via V2V communication, allowing one or more following vehicles to detect the path taken by the lead vehicle. A computerized control system on the following vehicle (a Follow-the-Leader, or FTL, system) allows the following vehicle to mimic the behavior of the lead vehicle, with the FTL system controlling steering to guide the following vehicle along the path previously navigated by the lead vehicle. In some embodiments, the lead vehicle and following vehicle may both use Global Navigation Satellite System (GNSS) position coordinates. In some embodiments, the following vehicle may also have a system of sensors to maintain a gap between the following and lead vehicles.
    Type: Application
    Filed: April 25, 2023
    Publication date: March 21, 2024
    Applicant: Peloton Technology, Inc.
    Inventors: Shad Laws, Joshua Switkes, Art Gavrysh, Marc Tange, Mark Herbert, Colleen Twitty, Dean Hogle, Andrew Tamoney, Eric Monsler, Carlos Rosario, Oliver Bayley, Richard Pallo, Louis Donayre, Laurenz Laubinger, Brian Smartt, Joyce Tam, Brian Silverman, Tabitha Jarvis, Murad Bharwani, Steven Erlein, Austin Schuh, Mark Luckevich
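The follow-the-leader behavior described above reduces to two loops: steer toward the recorded lead-vehicle path, and regulate speed to hold the gap. A minimal numpy sketch of both, with entirely hypothetical function names and gains (not Peloton's FTL controller):

```python
import numpy as np

def pick_target(path, pos, lookahead):
    """First recorded lead-vehicle waypoint at least `lookahead` ahead of us;
    the steering loop would aim the following vehicle at this point."""
    dists = np.linalg.norm(path - pos, axis=1)
    ahead = np.nonzero(dists >= lookahead)[0]
    return path[ahead[0]] if ahead.size else path[-1]

def speed_command(lead_speed, gap, desired_gap, k=0.5):
    """Proportional gap regulation around the lead vehicle's speed:
    too far behind -> speed up; too close -> slow down."""
    return lead_speed + k * (gap - desired_gap)
```

In the patented system the path comes over V2V (e.g., as GNSS coordinates) and the gap from on-board sensors; here both are plain arrays and scalars for illustration.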
  • Publication number: 20230385687
    Abstract: Approaches for training data set size estimation for machine learning model systems and applications are described. Examples include a machine learning model training system that estimates target data requirements for training a machine learning model, given an approximate relationship between training data set size and model performance using one or more validation score estimation functions. To derive a validation score estimation function, a regression data set is generated from training data, and subsets of the regression data set are used to train the machine learning model. A validation score is computed for each subset and used to compute the regression function parameters that curve-fit the selected regression function to the training data set. The validation score estimation function is then solved to provide an estimate of the number of additional training samples needed for the validation score to meet or exceed a target validation score.
    Type: Application
    Filed: May 31, 2022
    Publication date: November 30, 2023
    Inventors: Rafid Reza Mahmood, James Robert Lucas, David Jesus Acuna Marrero, Daiqing Li, Jonah Philion, Jose Manuel Alvarez Lopez, Zhiding Yu, Sanja Fidler, Marc Law
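The fit-then-invert procedure described above can be sketched in a few lines of numpy. This assumes one common choice of regression function, a power-law decay of validation error with training-set size; the numbers and the function names are illustrative, and the patent covers other regression functions:

```python
import numpy as np

# Observed (subset size, validation score) pairs -- illustrative numbers.
sizes  = np.array([100.0, 200.0, 400.0, 800.0])
scores = np.array([0.70, 0.78, 0.84, 0.885])

# Assume error decays as a power law: 1 - score = b * n**(-c).
# Taking logs makes this a linear regression solvable with polyfit.
slope, intercept = np.polyfit(np.log(sizes), np.log(1.0 - scores), 1)
c, b = -slope, np.exp(intercept)

def score_at(n):
    """Validation score estimation function: predicted score at size n."""
    return 1.0 - b * n ** (-c)

def samples_for(target):
    """Invert the fit: smallest n with score_at(n) >= target."""
    return int(np.ceil((b / (1.0 - target)) ** (1.0 / c)))
```

Subtracting the current training-set size from `samples_for(target)` gives the estimated number of additional samples to collect.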
  • Publication number: 20230376849
    Abstract: In various examples, optimal training data set sizes are estimated for machine learning model systems and applications. Systems and methods are disclosed that estimate an amount of data to include in a training data set, where the training data set is then used to train one or more machine learning models to reach a target validation performance. To estimate the amount of training data, subsets of an initial training data set may be used to train the machine learning model(s) in order to determine estimates for the minimum amount of training data needed to train the machine learning model(s) to reach the target validation performance. The estimates may then be used to generate one or more functions, such as a cumulative density function and/or a probability density function, which is then used to estimate the amount of training data needed to train the machine learning model(s).
    Type: Application
    Filed: May 16, 2023
    Publication date: November 23, 2023
    Inventors: Rafid Reza Mahmood, Marc Law, James Robert Lucas, Zhiding Yu, Jose Manuel Alvarez Lopez, Sanja Fidler
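The cumulative-density step above can be illustrated with numpy: given several estimates of the required training-set size (e.g., one per repeated subset fit), read the size needed at a chosen confidence level off the empirical CDF. The numbers and the function name are hypothetical:

```python
import numpy as np

# Hypothetical per-run estimates of the training-set size needed to
# reach a target validation score (e.g., from repeated curve fits).
estimates = np.array([900, 1100, 950, 1300, 1050, 1200, 1000, 1150])

def required_size(confidence):
    """Smallest estimate n with empirical P(needed <= n) >= confidence."""
    s = np.sort(estimates)
    cdf = np.arange(1, len(s) + 1) / len(s)   # empirical CDF heights
    return int(s[np.searchsorted(cdf, confidence)])
```

Higher confidence levels walk further up the CDF and so recommend collecting more data, which is the conservative behavior one wants from a data-requirement estimate.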
  • Publication number: 20230244985
    Abstract: In various examples, a representative subset of data points are queried or selected using integer programming to minimize the Wasserstein distance between the selected data points and the data set from which they were selected. A Generalized Benders Decomposition (GBD) may be used to decompose and iteratively solve the minimization problem, providing a globally optimal solution (an identified subset of data points that match the distribution of their data set) within a threshold tolerance. Data selection may be accelerated by applying one or more constraints while iterating, such as optimality cuts that leverage properties of the Wasserstein distance and/or pruning constraints that reduce the search space of candidate data points. In an active learning implementation, a representative subset of unlabeled data points may be selected using GBD, labeled, and used to train machine learning model(s) over one or more cycles of active learning.
    Type: Application
    Filed: February 2, 2022
    Publication date: August 3, 2023
    Inventors: Rafid Reza Mahmood, Sanja Fidler, Marc Law
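The objective above, picking a subset whose distribution matches the full data set under the Wasserstein distance, can be illustrated in one dimension with a greedy loop. This is a deliberate simplification: the patent solves the selection as an integer program via Generalized Benders Decomposition with optimality cuts and pruning, whereas the sketch below just adds the single best point at a time:

```python
import numpy as np

def w1(a, b):
    """1-D Wasserstein-1 distance between two empirical distributions,
    computed as the area between their empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.sum(np.abs(cdf_a - cdf_b)[:-1] * np.diff(grid)))

def greedy_subset(data, k):
    """Grow a k-point subset, each step minimizing W1 to the full set."""
    chosen = []
    for _ in range(k):
        scores = [w1(np.array(chosen + [x]), data) for x in data]
        chosen.append(data[int(np.argmin(scores))])
    return np.array(chosen)
```

On a data set of half zeros and half ones, the greedy loop picks one of each, driving the distance to zero; the patented GBD approach gives a globally optimal subset (within tolerance) rather than a greedy one.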
  • Publication number: 20220391667
    Abstract: Approaches presented herein use ultrahyperbolic representations (e.g., non-Riemannian manifolds) in inferencing tasks—such as classification—performed by machine learning models (e.g., neural networks). For example, a machine learning model may receive, as input, a graph including data on which to perform an inferencing task. This input can be in the form of, for example, a set of nodes and an adjacency matrix, where the nodes can each correspond to a vector in the graph. The neural network can take this input and perform mapping in order to generate a representation of this graph using an ultrahyperbolic (e.g., non-parametric, pseudo- or semi-Riemannian) manifold. This manifold can be of constant non-zero curvature, generalizing to at least hyperbolic and elliptical geometries. Once such a manifold-based representation is obtained, the neural network can perform one or more inferencing tasks using this representation, such as for classification or animation.
    Type: Application
    Filed: May 27, 2022
    Publication date: December 8, 2022
    Inventor: Marc Law
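The defining object above is a bilinear form of signature (p, q) and the constant-curvature set it carves out. A minimal numpy sketch of that inner product and of rescaling a vector onto the ultrahyperbolic manifold (function names are illustrative, and this shows only the embedding constraint, not the network mapping):

```python
import numpy as np

def pseudo_inner(x, y, q):
    """Pseudo-Euclidean bilinear form of signature (p, q):
    the last q coordinates contribute with a negative sign."""
    sign = np.ones_like(x)
    sign[len(x) - q:] = -1.0
    return float(np.sum(sign * x * y))

def to_manifold(x, q, alpha=-1.0):
    """Rescale x onto {x : <x, x> = alpha}, a constant nonzero-curvature
    pseudo-Riemannian manifold; requires <x, x> to share alpha's sign."""
    return x * np.sqrt(alpha / pseudo_inner(x, x, q))
```

With q = 0 the form is Euclidean (elliptical geometry); with q = 1 and a timelike normalization it recovers the hyperboloid model, which is the sense in which the ultrahyperbolic family generalizes both.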
  • Publication number: 20220383073
    Abstract: In various examples, machine learning models (MLMs) may be updated using multi-order gradients in order to train the MLMs, such as at least a first order gradient and any number of higher-order gradients. At least a first of the MLMs may be trained to generate a representation of features that is invariant to a first domain corresponding to a first dataset and a second domain corresponding to a second dataset. At least a second of the MLMs may be trained to classify whether the representation corresponds to the first domain or the second domain. At least a third of the MLMs may be trained to perform a task. The first dataset may correspond to a labeled source domain and the second dataset may correspond to an unlabeled target domain. The training may include transferring knowledge from the first domain to the second domain in a representation space.
    Type: Application
    Filed: May 27, 2022
    Publication date: December 1, 2022
    Inventors: David Jesus Acuna Marrero, Sanja Fidler, Marc Law, Guojun Zhang
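Why higher-order gradients help in adversarial training can be seen on the classic bilinear min-max game f(x, y) = x*y: plain simultaneous first-order updates spiral away from the equilibrium, while adding a second-order consensus term (the gradient of the squared gradient norm) damps the rotation. This sketch uses consensus optimization as a stand-in illustration, not the patent's specific update rule:

```python
import numpy as np

def vector_field(z):
    """Simultaneous-gradient field v for min_x max_y f(x, y) = x * y."""
    x, y = z
    return np.array([y, -x])          # (df/dx, -df/dy)

def run(steps, lr, gamma):
    """gamma = 0: plain first-order play; gamma > 0: add the second-order
    consensus term J^T v = grad(||v||^2 / 2), which for this bilinear
    game equals z itself. Returns the final distance to equilibrium."""
    z = np.array([1.0, 1.0])
    for _ in range(steps):
        v = vector_field(z)
        jtv = z.copy()                # J^T v, computed analytically here
        z = z - lr * (v + gamma * jtv)
    return float(np.linalg.norm(z))
```

With gamma = 0 the iterates diverge; with gamma = 1 they contract to the (0, 0) equilibrium, illustrating how mixing first- and second-order gradient information stabilizes the adversarial representation training described above.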
  • Publication number: 20220108134
    Abstract: Approaches presented herein provide for unsupervised domain transfer learning. In particular, three neural networks can be trained together using at least labeled data from a first domain and unlabeled data from a second domain. Features of the data are extracted using a feature extraction network. A first classifier network uses these features to classify the data, while a second classifier network uses these features to determine the relevant domain. A combined loss function is used to optimize the networks, with the goal that the feature extraction network extract features that the first classifier network can use to classify the data accurately, while preventing the second classifier network from determining the domain of the data. Such optimization enables object classification to be performed with high accuracy for either domain, even though there may have been little to no labeled training data for the second domain.
    Type: Application
    Filed: April 9, 2021
    Publication date: April 7, 2022
    Inventors: David Acuna Marrero, Guojun Zhang, Marc Law, Sanja Fidler