Patents by Inventor Nitin Naresh GAREGRAT

Nitin Naresh GAREGRAT has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220261456
    Abstract: A computing device, including a hardware accelerator configured to receive a first matrix and a second matrix. The hardware accelerator may, for a plurality of partial matrix regions, in a first iteration, read a first submatrix of the first matrix and a second submatrix of the second matrix into a front-end processing area. The hardware accelerator may multiply the first submatrix by the second submatrix to compute a first intermediate partial matrix. In each of one or more subsequent iterations, the hardware accelerator may read an additional submatrix into the front-end processing area. The hardware accelerator may compute an additional intermediate partial matrix as a product of the additional submatrix and a submatrix reused from an immediately prior iteration. The hardware accelerator may compute each partial matrix as a sum of two or more of the intermediate partial matrices and may output the plurality of partial matrices.
    Type: Application
    Filed: January 14, 2021
    Publication date: August 18, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Derek Edward Davout GLADDING, Nitin Naresh GAREGRAT, Timothy Hume HEIL, Balamurugan KULANTHIVELU VELUCHAMY
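The abstract above describes a tiled matrix multiply in which each output tile is assembled from intermediate partial products, and consecutive iterations reuse one operand tile so that only one new tile must be fetched per step. A minimal software sketch of that idea follows; the traversal order, tile size, and the `loads` counter are illustrative assumptions, not details from the patent:

```python
import numpy as np

def tiled_matmul_with_reuse(A, B, t=2):
    """Compute C = A @ B tile by tile.

    For each (row-block i, depth-block k) pair, the A tile is fetched
    once and reused across the sweep over j; only the B tile is newly
    fetched on each inner iteration. Each C tile accumulates a sum of
    intermediate partial tiles. `loads` counts tile fetches to show
    the reuse benefit versus fetching both operands every iteration.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2 and M % t == 0 and K % t == 0 and N % t == 0
    C = np.zeros((M, N))
    loads = 0
    for i in range(0, M, t):
        for k in range(0, K, t):
            a_tile = A[i:i + t, k:k + t]   # fetched once per sweep
            loads += 1
            for j in range(0, N, t):
                b_tile = B[k:k + t, j:j + t]  # one new tile per step
                loads += 1
                # a_tile is reused from the immediately prior iteration
                C[i:i + t, j:j + t] += a_tile @ b_tile  # partial sum
    return C, loads
```

With 4×4 inputs and 2×2 tiles this performs 12 tile fetches instead of the 16 a naive scheme (two fetches per inner iteration) would need.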
  • Publication number: 20220222319
    Abstract: A computing device is provided, including one or more processing devices configured to receive a first matrix including a plurality of first matrix elements arranged in a plurality of submatrices. The one or more processing devices may be further configured to generate first matrix sparsity metadata indicating one or more zero submatrices and one or more nonzero submatrices of the plurality of submatrices. Each of the first matrix elements included in the one or more zero submatrices may be equal to zero. The one or more processing devices may be further configured to store, in memory, a compressed first matrix including the first matrix sparsity metadata and the one or more nonzero submatrices and not including the one or more zero submatrices.
    Type: Application
    Filed: January 14, 2021
    Publication date: July 14, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Derek Edward Davout GLADDING, Nitin Naresh GAREGRAT
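The compression scheme above stores sparsity metadata marking which fixed-size submatrices are all-zero, plus only the nonzero submatrices. A minimal round-trip sketch of that idea, with a boolean mask standing in for the metadata (function names and the mask encoding are illustrative assumptions, not the patented format):

```python
import numpy as np

def compress_blocks(M, t=2):
    """Split M into t×t submatrices; return (mask, blocks) where mask
    marks nonzero blocks in row-major order and blocks holds only the
    nonzero submatrices. Zero blocks are not stored."""
    rows, cols = M.shape
    assert rows % t == 0 and cols % t == 0
    mask, blocks = [], []
    for i in range(0, rows, t):
        for j in range(0, cols, t):
            blk = M[i:i + t, j:j + t]
            nonzero = bool(np.any(blk))
            mask.append(nonzero)
            if nonzero:
                blocks.append(blk.copy())
    return mask, blocks

def decompress_blocks(mask, blocks, shape, t=2):
    """Rebuild the dense matrix: zero blocks come back as zeros,
    stored blocks are placed by walking the mask in the same order."""
    M = np.zeros(shape)
    stored = iter(blocks)
    idx = 0
    for i in range(0, shape[0], t):
        for j in range(0, shape[1], t):
            if mask[idx]:
                M[i:i + t, j:j + t] = next(stored)
            idx += 1
    return M
```

A matrix with a single nonzero 2×2 block then stores one block plus a four-entry mask instead of the full dense array.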
  • Publication number: 20220222575
    Abstract: A computing device, including a hardware accelerator configured to train a machine learning model by computing a first product matrix including a plurality of first dot products. Computing the first product matrix may include receiving a first matrix including a plurality of first vectors and a second matrix including a plurality of second vectors. Each first vector may include a first shared exponent and a plurality of first vector elements. Each second vector may include a second shared exponent and a plurality of second vector elements. For each first vector, computing the first product matrix may further include computing the first dot product of the first vector and a second vector. The first dot product may include a first dot product exponent, a first dot product sign, and a first dot product mantissa. Training the machine learning model may further include storing the first product matrix in memory.
    Type: Application
    Filed: January 14, 2021
    Publication date: July 14, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Derek Edward Davout GLADDING, Nitin Naresh GAREGRAT, Viraj Sunil KHADYE, Yuxuan ZHANG
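The last abstract describes vectors stored in a block floating-point style: one shared exponent per vector plus per-element mantissas, so a dot product reduces to an integer multiply-accumulate followed by a single exponent adjustment. A minimal sketch of that general technique follows; the mantissa width, rounding, and encoding choices here are assumptions for illustration, not the patented format:

```python
import numpy as np

def quantize_shared_exp(v, mant_bits=8):
    """Encode vector v as (shared_exponent, integer mantissas) such
    that v ≈ mantissas * 2**shared_exponent, with the exponent chosen
    so the largest mantissa fits in mant_bits signed bits."""
    max_abs = float(np.max(np.abs(v)))
    if max_abs == 0.0:
        return 0, np.zeros(len(v), dtype=np.int32)
    # leave headroom so rounding cannot overflow the mantissa range
    exp = int(np.floor(np.log2(max_abs))) - (mant_bits - 2)
    mant = np.round(np.asarray(v) / (2.0 ** exp)).astype(np.int32)
    return exp, mant

def bfp_dot(x, y, mant_bits=8):
    """Dot product of two shared-exponent vectors: an integer
    multiply-accumulate, then one scale by the summed exponents."""
    ex, mx = quantize_shared_exp(x, mant_bits)
    ey, my = quantize_shared_exp(y, mant_bits)
    acc = int(np.dot(mx.astype(np.int64), my.astype(np.int64)))
    return acc * (2.0 ** (ex + ey))
```

The hardware appeal is that only the final scaling touches the exponents; the accumulation itself is plain integer arithmetic.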