Patents by Inventor Jose Manuel Alvarez Lopez

Jose Manuel Alvarez Lopez has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240127067
    Abstract: Systems and methods are disclosed for improving the natural robustness of sparse neural networks. Pruning a dense neural network may improve inference speed and reduce the memory footprint and energy consumption of the resulting sparse neural network while maintaining a desired level of accuracy. In real-world scenarios in which sparse neural networks deployed in autonomous vehicles perform tasks such as object detection and classification on acquired inputs (images), the neural networks need to be robust to new environments, weather conditions, camera effects, etc. Applying sharpness-aware minimization (SAM) optimization during training of the sparse neural network improves performance on out-of-distribution (OOD) images compared with conventional stochastic gradient descent (SGD) optimization. SAM steers the neural network toward a flat minimum: a point that not only has a small loss value but also lies within a wider region of low loss.
    Type: Application
    Filed: August 31, 2023
    Publication date: April 18, 2024
    Inventors: Annamarie Bair, Hongxu Yin, Pavlo Molchanov, Maying Shen, Jose Manuel Alvarez Lopez
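The SAM optimization this abstract refers to can be illustrated with a minimal NumPy sketch on a toy quadratic loss; the function name `sam_step`, the learning rate, and the perturbation radius `rho` are hypothetical choices, not values from the patent.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    # Step 1: ascend to the (approximate) worst-case weights within an
    # L2 ball of radius rho around w.
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Step 2: descend from w using the gradient measured at the
    # perturbed point, which penalizes sharp minima.
    return w - lr * grad_fn(w + eps)

# Toy loss L(w) = ||w||^2, whose gradient is 2w.
grad = lambda w: 2.0 * w
w = np.array([3.0, -4.0])
for _ in range(100):
    w = sam_step(w, grad)
print(np.linalg.norm(w))  # converges near the (flat) minimum at 0
```

In a real training loop the two gradient evaluations per step come from backpropagation on a minibatch; the toy loss only shows the two-phase perturb-then-descend structure.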
  • Publication number: 20240087222
    Abstract: An artificial intelligence framework is described that incorporates a number of neural networks and a number of transformers for converting a two-dimensional image into three-dimensional semantic information. Neural networks convert one or more images into a set of image feature maps, depth information associated with the one or more images, and query proposals based on the depth information. A first transformer implements a cross-attention mechanism to process the set of image feature maps in accordance with the query proposals. The output of the first transformer is combined with a mask token to generate initial voxel features of the scene. A second transformer implements a self-attention mechanism to convert the initial voxel features into refined voxel features, which are up-sampled and processed by a lightweight neural network to generate the three-dimensional semantic information, which may be used by, e.g., an autonomous vehicle for various advanced driver assistance system (ADAS) functions.
    Type: Application
    Filed: November 20, 2023
    Publication date: March 14, 2024
    Inventors: Yiming Li, Zhiding Yu, Christopher B. Choy, Chaowei Xiao, Jose Manuel Alvarez Lopez, Sanja Fidler, Animashree Anandkumar
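The first transformer's cross-attention between query proposals and image feature maps can be sketched as follows; the dimensions, the function name `cross_attention`, and the random inputs are all hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, features):
    # Each query proposal attends over all image-feature vectors and
    # returns a weighted combination of them.
    attn = softmax(queries @ features.T / np.sqrt(queries.shape[1]))
    return attn @ features

rng = np.random.default_rng(1)
q = rng.standard_normal((5, 16))    # 5 query proposals, 16-dim
f = rng.standard_normal((50, 16))   # 50 image-feature vectors, 16-dim
out = cross_attention(q, f)
print(out.shape)  # (5, 16)
```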
  • Publication number: 20230385687
    Abstract: Approaches for training data set size estimation for machine learning model systems and applications are described. Examples include a machine learning model training system that estimates the target data requirements for training a machine learning model, given an approximate relationship between training data set size and model performance captured by one or more validation score estimation functions. To derive a validation score estimation function, a regression data set is generated from training data, and subsets of the regression data set are used to train the machine learning model. A validation score is computed for each subset and used to compute the regression function parameters that curve-fit the selected regression function to the computed validation scores. The validation score estimation function is then solved to estimate the number of additional training samples needed to meet or exceed a target validation score.
    Type: Application
    Filed: May 31, 2022
    Publication date: November 30, 2023
    Inventors: Rafid Reza Mahmood, James Robert Lucas, David Jesus Acuna Marrero, Daiqing Li, Jonah Philion, Jose Manuel Alvarez Lopez, Zhiding Yu, Sanja Fidler, Marc Law
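The curve-fit-then-solve procedure in this abstract can be sketched with a common power-law error model; the sample sizes, scores, and the target value below are fabricated for illustration and the specific regression function is an assumption, not the one claimed in the patent.

```python
import numpy as np

# Hypothetical (subset size, validation score) pairs obtained by
# training on progressively larger subsets of the regression data set.
sizes  = np.array([100.0, 200.0, 400.0, 800.0])
scores = np.array([0.60, 0.68, 0.744, 0.7952])

# Assumed power-law error model: 1 - v(n) = b * n**(-c).
# Taking logs makes it a linear fit for (log b, -c).
slope, logb = np.polyfit(np.log(sizes), np.log(1.0 - scores), 1)
c = -slope

# Solve v(n) >= target for n: n >= (b / (1 - target)) ** (1 / c).
target = 0.85
n_needed = (np.exp(logb) / (1.0 - target)) ** (1.0 / c)
extra = max(0.0, n_needed - sizes[-1])
print(round(n_needed), round(extra))
```

The solved value is the estimated total data requirement; subtracting the largest subset already used gives the number of additional samples to collect.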
  • Publication number: 20230376849
    Abstract: In various examples, systems and methods estimate optimal training data set sizes for machine learning model systems and applications. The disclosed systems and methods estimate an amount of data to include in a training data set, where the training data set is then used to train one or more machine learning models to reach a target validation performance. To estimate the amount of training data, subsets of an initial training data set may be used to train the machine learning model(s) in order to determine estimates for the minimum amount of training data needed to reach the target validation performance. The estimates may then be used to generate one or more functions, such as a cumulative distribution function and/or a probability density function, which are then used to estimate the amount of training data needed to train the machine learning model(s).
    Type: Application
    Filed: May 16, 2023
    Publication date: November 23, 2023
    Inventors: Rafid Reza Mahmood, Marc Law, James Robert Lucas, Zhiding Yu, Jose Manuel Alvarez Lopez, Sanja Fidler
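The distribution-based step this abstract describes can be illustrated with an empirical CDF over several per-trial size estimates; the numbers and the 80% coverage level are hypothetical.

```python
import numpy as np

# Hypothetical per-trial estimates of the minimum training-set size
# needed to hit the target validation performance (one estimate per
# training subset / random seed).
estimates = np.array([1800.0, 2100.0, 1950.0, 2400.0, 2000.0])

# Empirical CDF: the fraction of trials satisfied at each size.
sizes = np.sort(estimates)
cdf = np.arange(1, len(sizes) + 1) / len(sizes)

# Recommend the smallest size that suffices in at least 80% of trials.
recommended = sizes[np.searchsorted(cdf, 0.8)]
print(recommended)  # 2100.0
```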
  • Publication number: 20230325656
    Abstract: Apparatuses, systems, and techniques to cause one or more portions of one or more neural networks to be trained. In at least one embodiment, one or more portions of one or more neural networks are caused to be trained by, for example, iteratively adjusting precision of weight parameters associated with the one or more portions based, at least in part, on one or more performance metrics of the one or more portions.
    Type: Application
    Filed: May 3, 2022
    Publication date: October 12, 2023
    Inventors: Rundong Li, Yichun Shen, Abel Brown, Jose Manuel Alvarez Lopez, Siyi Li
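One way to picture the iterative precision adjustment in this abstract is a search over bit widths gated by a performance metric; the function name, bit widths, accuracy table, and tolerance below are all hypothetical.

```python
def choose_bit_width(eval_fn, bit_widths=(32, 16, 8, 4), max_drop=0.01):
    """Iteratively lower weight precision, keeping the smallest bit
    width whose metric stays within max_drop of full precision."""
    baseline = eval_fn(bit_widths[0])
    best = bit_widths[0]
    for bits in bit_widths[1:]:
        if baseline - eval_fn(bits) <= max_drop:
            best = bits   # precision can be reduced further
        else:
            break         # metric degraded too much; stop lowering
    return best

# Hypothetical accuracy of one network portion at each precision.
accuracy = {32: 0.900, 16: 0.899, 8: 0.896, 4: 0.810}
print(choose_bit_width(accuracy.get))  # 8
```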
  • Publication number: 20230325670
    Abstract: A technique for dynamically configuring and executing an augmented neural network in real-time according to performance constraints also maintains the legacy neural network execution path. A neural network model that has been trained for a task is augmented with low-compute "shallow" phases paired with each legacy phase, and the legacy phases of the neural network model are held constant (i.e., unchanged) while the shallow phases are trained. During inference, one or more of the shallow phases can be selectively executed in place of the corresponding legacy phase. Compared with the legacy phases, the shallow phases are typically less accurate, but have reduced latency and consume less power. Therefore, processing using one or more of the shallow phases in place of one or more of the legacy phases enables the augmented neural network to dynamically adapt to changes in the execution environment (e.g., processing load or performance requirement).
    Type: Application
    Filed: August 18, 2022
    Publication date: October 12, 2023
    Inventors: Jason Lavar Clemons, Stephen W. Keckler, Iuri Frosio, Jose Manuel Alvarez Lopez, Maying Shen
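The runtime selection between a legacy phase and its paired shallow phase can be reduced to a budget check; the function and the latency numbers below are illustrative assumptions.

```python
def run_phase(x, legacy_fn, shallow_fn, budget_ms, legacy_cost_ms):
    """Execute the legacy (accurate) phase when the remaining latency
    budget allows it, otherwise fall back to the paired shallow phase."""
    if budget_ms >= legacy_cost_ms:
        return legacy_fn(x)
    return shallow_fn(x)

legacy = lambda x: ("legacy", x * 2)
shallow = lambda x: ("shallow", x * 2)  # same task, cheaper/less accurate
print(run_phase(3, legacy, shallow, budget_ms=10, legacy_cost_ms=4))
print(run_phase(3, legacy, shallow, budget_ms=2, legacy_cost_ms=4))
```

Because both paths exist in the augmented model, the decision can be revisited per phase and per input as the execution environment changes.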
  • Publication number: 20230290135
    Abstract: Apparatuses, systems, and techniques to generate a robust representation of an image. In at least one embodiment, input tokens of an input image are received, and an inference about the input image is generated based on a vision transformer (ViT) system comprising at least one self-attention module to perform token mixing and a channel self-attention module to perform channel processing.
    Type: Application
    Filed: March 9, 2023
    Publication date: September 14, 2023
    Inventors: Daquan Zhou, Zhiding Yu, Enze Xie, Anima Anandkumar, Chaowei Xiao, Jose Manuel Alvarez Lopez
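The channel self-attention this abstract mentions can be sketched by transposing the usual token-attention layout so channels attend to each other; the shapes and the function name are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_self_attention(x):
    """Self-attention across channels instead of tokens: transpose to
    (channels, tokens), attend, and transpose back."""
    xc = x.T                                          # (channels, tokens)
    attn = softmax(xc @ xc.T / np.sqrt(xc.shape[1]))  # (channels, channels)
    return (attn @ xc).T                              # (tokens, channels)

x = np.random.default_rng(0).standard_normal((4, 8))  # 4 tokens, 8 channels
out = channel_self_attention(x)
print(out.shape)  # (4, 8)
```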
  • Publication number: 20230186077
    Abstract: One embodiment of the present invention sets forth a technique for executing a transformer neural network. The technique includes computing a first set of halting scores for a first set of tokens that has been input into a first layer of the transformer neural network. The technique also includes determining that a first halting score included in the first set of halting scores exceeds a threshold value. The technique further includes in response to the first halting score exceeding the threshold value, causing a first token that is included in the first set of tokens and is associated with the first halting score not to be processed by one or more layers within the transformer neural network that are subsequent to the first layer.
    Type: Application
    Filed: June 15, 2022
    Publication date: June 15, 2023
    Inventors: Hongxu Yin, Jan Kautz, Jose Manuel Alvarez Lopez, Arun Mallya, Pavlo Molchanov, Arash Vahdat
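The halting mechanism this abstract describes, dropping tokens whose halting score crosses a threshold so later layers skip them, can be sketched with a boolean mask; the function name, scores, and threshold are hypothetical.

```python
import numpy as np

def prune_halted_tokens(tokens, halting_scores, threshold=0.9):
    """Keep only the tokens whose halting score has not exceeded the
    threshold; halted tokens are not processed by subsequent layers."""
    keep = halting_scores <= threshold
    return tokens[keep], keep

# Four tokens with 3-dim embeddings and per-token halting scores.
tokens = np.arange(12, dtype=float).reshape(4, 3)
scores = np.array([0.2, 0.95, 0.5, 0.99])
kept, mask = prune_halted_tokens(tokens, scores)
print(kept.shape)  # (2, 3): only the two unhalted tokens continue
```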
  • Publication number: 20230077258
    Abstract: Apparatuses, systems, and techniques are presented to simplify neural networks. In at least one embodiment, one or more portions of one or more neural networks are caused to be removed based, at least in part, on one or more performance metrics of the one or more neural networks.
    Type: Application
    Filed: August 10, 2021
    Publication date: March 9, 2023
    Inventors: Maying Shen, Pavlo Molchanov, Hongxu Yin, Lei Mao, Jianna Liu, Jose Manuel Alvarez Lopez
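A minimal sketch of metric-driven pruning like that described above, using the L2 norm of each neuron's weights as a stand-in importance metric; the function name, keep ratio, and weight values are hypothetical.

```python
import numpy as np

def prune_neurons(weights, keep_ratio=0.5):
    """Remove the rows (neurons) with the smallest L2 norm, keeping
    keep_ratio of them; the norm serves as a proxy performance metric."""
    saliency = np.linalg.norm(weights, axis=1)
    k = max(1, int(round(len(saliency) * keep_ratio)))
    keep = np.sort(np.argsort(-saliency)[:k])
    return weights[keep], keep

w = np.array([[3.0, 4.0],   # norm 5.0   -> kept
              [0.1, 0.1],   # norm ~0.14 -> pruned
              [1.0, 2.0],   # norm ~2.24 -> kept
              [0.0, 0.2]])  # norm 0.2   -> pruned
pruned, kept_idx = prune_neurons(w)
print(kept_idx.tolist())  # [0, 2]
```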
  • Publication number: 20220292360
    Abstract: Apparatuses, systems, and techniques to remove one or more nodes of a neural network. In at least one embodiment, one or more nodes of a neural network are removed based, for example, on whether the one or more nodes are likely to affect performance of the neural network.
    Type: Application
    Filed: March 15, 2021
    Publication date: September 15, 2022
    Inventors: Maying Shen, Pavlo Molchanov, Hongxu Yin, Jose Manuel Alvarez Lopez
  • Publication number: 20220284283
    Abstract: Apparatuses, systems, and techniques to invert a neural network. In at least one embodiment, one or more neural network layers are inverted and, in at least one embodiment, loaded in reverse order.
    Type: Application
    Filed: March 8, 2021
    Publication date: September 8, 2022
    Inventors: Hongxu Yin, Pavlo Molchanov, Jose Manuel Alvarez Lopez, Xin Dong
  • Publication number: 20220284232
    Abstract: Apparatuses, systems, and techniques to identify one or more images used to train one or more neural networks. In at least one embodiment, one or more images used to train one or more neural networks are identified based, for example, on one or more labels of one or more objects within the one or more images.
    Type: Application
    Filed: March 1, 2021
    Publication date: September 8, 2022
    Inventors: Hongxu Yin, Arun Mallya, Arash Vahdat, Jose Manuel Alvarez Lopez, Jan Kautz, Pavlo Molchanov
  • Publication number: 20220156982
    Abstract: Apparatuses, systems, and techniques for calculating data compression parameters using codebook entry values. In at least one embodiment, one or more circuits calculate one or more data compression parameters based, at least in part, on one or more values of the data to be compressed in relation to at least two codebook entry values.
    Type: Application
    Filed: November 19, 2020
    Publication date: May 19, 2022
    Inventors: Yerlan Idelbayev, Pavlo Molchanov, Hongxu Danny Yin, Maying Shen, Jose Manuel Alvarez Lopez
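The core codebook relation in this abstract, mapping each value to its nearest codebook entry, can be sketched directly; the function name, data, and codebook values are hypothetical.

```python
import numpy as np

def quantize_to_codebook(values, codebook):
    """Map each value to its nearest codebook entry; the indices are
    the compressed representation, the gathered entries the decode."""
    codebook = np.asarray(codebook)
    idx = np.argmin(np.abs(values[:, None] - codebook[None, :]), axis=1)
    return idx, codebook[idx]

data = np.array([0.11, -0.48, 0.52, 0.02])
codebook = np.array([-0.5, 0.0, 0.5])
idx, recon = quantize_to_codebook(data, codebook)
print(idx.tolist(), recon.tolist())  # [1, 0, 2, 1] [0.0, -0.5, 0.5, 0.0]
```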
  • Publication number: 20220147743
    Abstract: Approaches presented herein provide for semantic data matching, as may be useful for selecting data from a large unlabeled dataset to train a neural network. For an object detection use case, such a process can identify images within an unlabeled set even when an object of interest represents a relatively small portion of an image or there are many other objects in the image. A query image can be processed to extract image features or feature maps from only one or more regions of interest in that image, as may correspond to objects of interest. These features are compared with images in an unlabeled dataset, with similarity scores being calculated between the features of the region(s) of interest and individual images in the unlabeled set. One or more highest scored images can be selected as training images showing objects that are semantically similar to the object in the query image.
    Type: Application
    Filed: April 9, 2021
    Publication date: May 12, 2022
    Inventors: Donna Roy, Suraj Kothawade, Elmar Haussmann, Jose Manuel Alvarez Lopez, Michele Fenzi, Christoph Angerer
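The similarity-scoring step in this abstract can be sketched with cosine similarity between a query region-of-interest feature and candidate image features; the function name, feature dimensions, and vectors below are hypothetical.

```python
import numpy as np

def top_k_similar(query_feat, candidate_feats, k=2):
    """Rank unlabeled images by cosine similarity to the query
    region-of-interest feature and return the top-k indices."""
    q = query_feat / np.linalg.norm(query_feat)
    c = candidate_feats / np.linalg.norm(candidate_feats, axis=1, keepdims=True)
    sims = c @ q
    order = np.argsort(-sims)[:k]
    return order, sims[order]

query = np.array([1.0, 0.0, 1.0])
candidates = np.array([
    [0.9, 0.1, 1.1],  # very similar
    [0.0, 1.0, 0.0],  # orthogonal
    [1.0, 0.5, 1.0],  # somewhat similar
])
idx, sims = top_k_similar(query, candidates)
print(idx.tolist())  # [0, 2]: the two most similar candidates
```

The highest-scoring unlabeled images are then selected as training images, as the abstract describes.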
  • Publication number: 20220051017
    Abstract: Apparatuses, systems, and techniques to identify one or more objects in one or more images. In at least one embodiment, one or more objects are identified in one or more images based, at least in part, on a likelihood that the one or more objects are different from other objects in the one or more images.
    Type: Application
    Filed: August 11, 2020
    Publication date: February 17, 2022
    Inventors: Jiwoong Choi, Jose Manuel Alvarez Lopez