Patents by Inventor Mayank Daga

Mayank Daga has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240054332
    Abstract: Methods, devices, systems, and instructions for adaptive quantization in an artificial neural network (ANN) calculate a distribution of ANN information; select a quantization function from a set of quantization functions based on the distribution; apply the quantization function to the ANN information to generate quantized ANN information; load the quantized ANN information into the ANN; and generate an output based on the quantized ANN information. Some examples recalculate the distribution of ANN information and reselect the quantization function from the set of quantization functions based on the resampled distribution if the output does not sufficiently correlate with a known correct output. In some examples, the ANN information includes a set of training data. In some examples, the ANN information includes a plurality of link weights.
    Type: Application
    Filed: October 27, 2023
    Publication date: February 15, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Daniel I. Lowell, Sergey Voronov, Mayank Daga
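
The adaptive-quantization flow described in this abstract (the same abstract appears on the granted patent and earlier publication below) could look roughly like the following minimal sketch. The quantizer names, the spread heuristic, the correlation threshold, the `evaluate` callback, and the assumption of a 1-D array of link weights are all illustrative choices, not details taken from the claims.

```python
import numpy as np

def uniform_quant(x, bits=8):
    """Uniform quantization over the observed value range."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / (2 ** bits - 1) or 1.0
    return np.round((x - lo) / scale) * scale + lo

def log_quant(x, bits=8):
    """Logarithmic quantization, better suited to long-tailed distributions."""
    sign = np.sign(x)
    mag = np.abs(x) + 1e-12
    return sign * np.exp(uniform_quant(np.log(mag), bits))

QUANT_FUNCS = {"uniform": uniform_quant, "log": log_quant}

def select_quant_func(values):
    """Choose a quantizer from the distribution of the ANN information
    (a simple spread heuristic here; the abstract does not specify one)."""
    spread = np.std(values) / (np.mean(np.abs(values)) + 1e-12)
    return "log" if spread > 2.0 else "uniform"

def adaptive_quantize(weights, evaluate, threshold=0.95, max_tries=3):
    """Quantize the ANN information, run the network via `evaluate`, and
    reselect the quantizer if the output does not correlate well enough
    with a known correct output."""
    values = np.asarray(weights, dtype=float).ravel()
    for _ in range(max_tries):
        quantized = QUANT_FUNCS[select_quant_func(values)](values)
        if evaluate(quantized) >= threshold:   # evaluate returns a correlation score
            break
        values = np.random.choice(values, size=values.size)  # resample and retry
    return quantized
```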
  • Patent number: 11803734
    Abstract: Methods, devices, systems, and instructions for adaptive quantization in an artificial neural network (ANN) calculate a distribution of ANN information; select a quantization function from a set of quantization functions based on the distribution; apply the quantization function to the ANN information to generate quantized ANN information; load the quantized ANN information into the ANN; and generate an output based on the quantized ANN information. Some examples recalculate the distribution of ANN information and reselect the quantization function from the set of quantization functions based on the resampled distribution if the output does not sufficiently correlate with a known correct output. In some examples, the ANN information includes a set of training data. In some examples, the ANN information includes a plurality of link weights.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: October 31, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Daniel I. Lowell, Sergey Voronov, Mayank Daga
  • Patent number: 10558466
    Abstract: Systems, apparatuses, and methods for adjusting group sizes to match a processor lane width are described. In early iterations of an algorithm, a processor partitions a dataset into groups of data points whose sizes are integer multiples of the processor's processing lane width. For example, when performing a K-means clustering algorithm, the processor determines that a first plurality of data points belongs to a first group during a given iteration. If the size of the first plurality of data points is not an integer multiple of the number of processing lanes, then the processor reassigns a first number of data points from the first plurality to one or more other groups. The processor then performs the next iteration with this first number of data points assigned to the other groups, even though those data points actually meet the algorithmic criteria for belonging to the first group.
    Type: Grant
    Filed: June 23, 2016
    Date of Patent: February 11, 2020
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Mauricio Breternitz, Mayank Daga
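
A rough sketch of the grouping idea in the abstract above, assuming a plain NumPy K-means assignment step and a fixed lane width of 64. The reassignment policy (moving each group's farthest "excess" points to their next-nearest group in a single greedy pass) is an illustrative simplification, not necessarily the claimed mechanism.

```python
import numpy as np

LANE_WIDTH = 64  # assumed processing-lane width

def assign_groups(points, centroids):
    """Standard K-means assignment: nearest centroid for every point."""
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return np.argmin(dists, axis=1), dists

def align_to_lane_width(assignments, dists, num_groups):
    """Reassign each group's remainder points to their next-best group so the
    group size becomes an integer multiple of LANE_WIDTH, even though those
    points met the criteria for their original group."""
    for g in range(num_groups):
        members = np.where(assignments == g)[0]
        excess = len(members) % LANE_WIDTH
        if excess == 0:
            continue
        # Move the `excess` members that fit this group least well.
        farthest = members[np.argsort(dists[members, g])[-excess:]]
        for idx in farthest:
            order = np.argsort(dists[idx])          # candidate groups by distance
            assignments[idx] = next(c for c in order if c != g)
    return assignments

# One early iteration: assign points, then pad group sizes to the lane width.
points = np.random.rand(1000, 2)
centroids = np.random.rand(4, 2)
labels, dists = assign_groups(points, centroids)
labels = align_to_lane_width(labels, dists, num_groups=4)
```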
  • Publication number: 20190188557
    Abstract: Methods, devices, systems, and instructions for adaptive quantization in an artificial neural network (ANN) calculate a distribution of ANN information; select a quantization function from a set of quantization functions based on the distribution; apply the quantization function to the ANN information to generate quantized ANN information; load the quantized ANN information into the ANN; and generate an output based on the quantized ANN information. Some examples recalculate the distribution of ANN information and reselect the quantization function from the set of quantization functions based on the resampled distribution if the output does not sufficiently correlate with a known correct output. In some examples, the ANN information includes a set of training data. In some examples, the ANN information includes a plurality of link weights.
    Type: Application
    Filed: December 20, 2017
    Publication date: June 20, 2019
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Daniel I. Lowell, Sergey Voronov, Mayank Daga
  • Publication number: 20180314945
    Abstract: Systems, apparatuses, and methods for enhanced resolution video and security via machine learning are disclosed. A system is configured to receive a source code representation of a neural network. In one embodiment, the source code representation is a directed acyclic graph (DAG). The system determines if the source code representation includes any of one or more patterns, with each pattern including two or more adjacent layers. The system also identifies, for each pattern, a combined layer with which to replace the detected pattern. If any occurrences of the one or more patterns are detected in the source code representation, the system replaces each pattern with a corresponding combined layer. Additionally, the system generates an optimized representation of the neural network, wherein the optimized representation includes replacements for any detected patterns. The optimized representation can be utilized to generate an executable version of the neural network.
    Type: Application
    Filed: April 27, 2017
    Publication date: November 1, 2018
    Inventors: Mauricio Breternitz, Mayank Daga
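
The pattern-replacement pass described above might look like the following sketch. It uses a flat list of layer type names rather than the directed acyclic graph the application describes, and the fusion patterns (conv+batchnorm, conv+relu, batchnorm+relu) are assumed examples, not patterns taken from the filing.

```python
# Each pattern maps a tuple of adjacent layer types to a single combined layer type.
FUSION_PATTERNS = {
    ("conv", "batchnorm"): "conv_bn",
    ("conv", "relu"): "conv_relu",
    ("batchnorm", "relu"): "bn_relu",
}

def fuse_layers(layers):
    """Return an optimized layer list with every detected pattern replaced."""
    optimized = []
    i = 0
    while i < len(layers):
        # Try the longest patterns first so larger fusions win over smaller ones.
        for pattern, combined in sorted(FUSION_PATTERNS.items(),
                                        key=lambda kv: -len(kv[0])):
            if tuple(layers[i:i + len(pattern)]) == pattern:
                optimized.append(combined)
                i += len(pattern)
                break
        else:
            optimized.append(layers[i])
            i += 1
    return optimized

# Example: a conv/batchnorm/relu stack collapses into two fused layers.
print(fuse_layers(["conv", "batchnorm", "relu", "pool", "conv", "relu"]))
# -> ['conv_bn', 'relu', 'pool', 'conv_relu']
```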
  • Patent number: 10031947
    Abstract: A method and apparatus for performing a top-down Breadth-First Search (BFS) includes performing a first determination whether to convert to a bottom-up BFS. A second determination whether to convert to the bottom-up BFS is performed if the first determination is positive. The bottom-up BFS is performed if both the first determination and the second determination are positive. A third determination is then made whether to convert from the bottom-up BFS back to the top-down BFS, and the top-down BFS is resumed if the third determination is positive.
    Type: Grant
    Filed: June 24, 2015
    Date of Patent: July 24, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Mayank Daga
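
A minimal sketch of a BFS that switches between top-down and bottom-up traversal is shown below. The switching heuristics (edge-count and frontier-size thresholds in the style of direction-optimizing BFS) are assumptions standing in for the patent's first, second, and third determinations, and the graph is assumed undirected.

```python
ALPHA, BETA = 14.0, 24.0  # assumed switching thresholds

def hybrid_bfs(adj, source):
    """adj: dict mapping node -> list of neighbours (undirected). Returns BFS levels."""
    level = {source: 0}
    frontier = [source]
    top_down = True
    depth = 0
    n = len(adj)
    while frontier:
        depth += 1
        frontier_edges = sum(len(adj[v]) for v in frontier)
        unvisited_edges = sum(len(adj[v]) for v in adj if v not in level)
        # Decide whether to convert to bottom-up: the frontier touches many edges.
        if top_down and frontier_edges * ALPHA > unvisited_edges:
            top_down = False
        # Decide whether to convert back to top-down: the frontier has shrunk.
        elif not top_down and len(frontier) * BETA < n:
            top_down = True
        nxt = []
        if top_down:
            # Top-down step: expand every frontier vertex's neighbours.
            for v in frontier:
                for w in adj[v]:
                    if w not in level:
                        level[w] = depth
                        nxt.append(w)
        else:
            # Bottom-up step: every unvisited vertex looks for a frontier parent.
            frontier_set = set(frontier)
            for w in adj:
                if w not in level and any(u in frontier_set for u in adj[w]):
                    level[w] = depth
                    nxt.append(w)
        frontier = nxt
    return level
```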
  • Publication number: 20170371665
    Abstract: Systems, apparatuses, and methods for adjusting group sizes to match a processor lane width are described. In early iterations of an algorithm, a processor partitions a dataset into groups of data points whose sizes are integer multiples of the processor's processing lane width. For example, when performing a K-means clustering algorithm, the processor determines that a first plurality of data points belongs to a first group during a given iteration. If the size of the first plurality of data points is not an integer multiple of the number of processing lanes, then the processor reassigns a first number of data points from the first plurality to one or more other groups. The processor then performs the next iteration with this first number of data points assigned to the other groups, even though those data points actually meet the algorithmic criteria for belonging to the first group.
    Type: Application
    Filed: June 23, 2016
    Publication date: December 28, 2017
    Inventors: Mauricio Breternitz, Mayank Daga
  • Patent number: 9697176
    Abstract: A method of multiplication of a sparse matrix and a vector to obtain a new vector and a system for implementing the method are claimed. Embodiments of the method are intended to optimize the performance of sparse matrix-vector multiplication in highly parallel processors, such as GPUs. The sparse matrix is stored in compressed sparse row (CSR) format.
    Type: Grant
    Filed: November 14, 2014
    Date of Patent: July 4, 2017
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Mayank Daga, Joseph L. Greathouse
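
For reference, a plain CSR sparse matrix-vector multiply is sketched below. Per the abstract, the patented method optimizes how this computation runs on highly parallel processors such as GPUs; this scalar sketch does not attempt to reproduce that optimization, only the CSR format it operates on.

```python
import numpy as np

def csr_spmv(values, col_idx, row_ptr, x):
    """Compute y = A @ x with A stored in compressed sparse row (CSR) format."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for row in range(n_rows):
        start, end = row_ptr[row], row_ptr[row + 1]
        # Dot product of this row's nonzeros with the matching entries of x.
        y[row] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# Example: the 3x3 matrix [[1,0,2],[0,3,0],[4,0,5]] times the vector [1,1,1].
values  = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
print(csr_spmv(values, col_idx, row_ptr, np.ones(3)))  # -> [3. 3. 9.]
```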
  • Publication number: 20160378791
    Abstract: A method and apparatus for performing a top-down Breadth-First Search (BFS) includes performing a first determination whether to convert to a bottom-up BFS. A second determination whether to convert to the bottom-up BFS is performed if the first determination is positive. The bottom-up BFS is performed if both the first determination and the second determination are positive. A third determination is then made whether to convert from the bottom-up BFS back to the top-down BFS, and the top-down BFS is resumed if the third determination is positive.
    Type: Application
    Filed: June 24, 2015
    Publication date: December 29, 2016
    Applicant: Advanced Micro Devices, Inc.
    Inventor: Mayank Daga
  • Publication number: 20160140084
    Abstract: A method of multiplication of a sparse matrix and a vector to obtain a new vector and a system for implementing the method are claimed. Embodiments of the method are intended to optimize the performance of sparse matrix-vector multiplication in highly parallel processors, such as GPUs. The sparse matrix is stored in compressed sparse row (CSR) format.
    Type: Application
    Filed: November 14, 2014
    Publication date: May 19, 2016
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Mayank Daga, Joseph L. Greathouse