Patents by Inventor Slawomir KIERAT

Slawomir KIERAT has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12299577
    Abstract: Aspects of the present invention are directed to computer-implemented techniques for improving the training of artificial neural networks using a reduced precision (e.g., float16) data format. Embodiments of the present invention rescale tensor values prior to performing matrix operations (such as matrix multiplication or matrix addition) to prevent overflow and underflow. To preserve accuracy throughout the performance of the matrix operations, the scale factors are defined using a novel data format to represent tensors, in which a matrix is represented by a tuple X=(a, v[.]), where a is a float scale factor and v[.] are the scaled values stored in the float16 format. The value of any element X[i] under this data format is equal to a*v[i]. (An illustrative sketch of this representation follows this entry.)
    Type: Grant
    Filed: June 15, 2017
    Date of Patent: May 13, 2025
    Assignee: NVIDIA Corporation
    Inventors: Boris Ginsburg, Sergei Nikolaev, Ahmad Kiswani, Hao Wu, Amir Gholaminejad, Slawomir Kierat, Michael Houston, Alex Fit-Florea
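
The sketch below is a minimal, hypothetical illustration of the scaled-tensor representation described in the abstract of patent 12299577 above (the same abstract appears under publication 20170372202 further down this list): a tensor is stored as a float scale factor a plus float16 values v[.], so element i has the value a*v[i], and values are rescaled before matrix operations to avoid float16 overflow and underflow. The class name, the scale-selection heuristic, and the float32 accumulation are assumptions made for illustration only, not the patented implementation.

    import numpy as np

    class ScaledTensor:
        """Hypothetical container: a float32 scale factor 'a' plus float16 values 'v[.]'."""

        def __init__(self, scale, values):
            self.scale = np.float32(scale)            # a
            self.values = values.astype(np.float16)   # v[.], stored in reduced precision

        @classmethod
        def from_float32(cls, x):
            # Pick a scale that keeps the largest magnitude well inside the
            # float16 range (max ~65504), guarding against overflow/underflow.
            max_abs = float(np.max(np.abs(x))) or 1.0
            scale = max_abs / 1024.0
            return cls(scale, x / scale)

        def to_float32(self):
            # Logical value of element i is a * v[i].
            return self.scale * self.values.astype(np.float32)

    def scaled_matmul(x, y):
        # Multiply the scaled values (accumulating in float32 for accuracy),
        # then fold both scale factors into the result's scale.
        acc = x.values.astype(np.float32) @ y.values.astype(np.float32)
        return ScaledTensor.from_float32(x.scale * y.scale * acc)

    if __name__ == "__main__":
        a = ScaledTensor.from_float32(np.random.randn(4, 8).astype(np.float32) * 1e-5)
        b = ScaledTensor.from_float32(np.random.randn(8, 3).astype(np.float32) * 1e5)
        c = scaled_matmul(a, b)
        print(np.allclose(c.to_float32(), a.to_float32() @ b.to_float32(),
                          rtol=1e-2, atol=1e-2))
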
  • Publication number: 20240119267
    Abstract: Apparatuses, systems, and techniques to selectively use one or more neural network layers. In at least one embodiment, one or more neural network layers are selectively used based on, for example, one or more iteratively increasing neural network performance metrics. (An illustrative sketch of one such selection loop follows this entry.)
    Type: Application
    Filed: September 21, 2022
    Publication date: April 11, 2024
    Inventors: Slawomir Kierat, Piotr Karpinski, Mateusz Sieniawski, Pawel Morkisz, Szymon Migacz, Linnan Wang, Chen-Han Yu, Satish Salian, Ashwath Aithal, Alexandru Fit-Florea
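
Publication 20240119267 above gives little implementation detail, so the following is only a loose, hypothetical sketch of one way "selectively using layers based on an iteratively increasing performance metric" could look: layers are kept only while an evaluation metric keeps increasing from iteration to iteration. The greedy loop, the toy metric, and the layer names are assumptions for illustration, not the claimed technique.

    from typing import Callable, List, Sequence

    def select_layers(candidate_layers: Sequence[str],
                      evaluate: Callable[[List[str]], float]) -> List[str]:
        """Greedily enable layers, keeping each one only if the performance
        metric increases relative to the previous iteration."""
        selected: List[str] = []
        best_metric = evaluate(selected)          # metric of the current subnetwork
        for layer in candidate_layers:
            trial = selected + [layer]
            metric = evaluate(trial)
            if metric > best_metric:              # keep the layer only on improvement
                selected, best_metric = trial, metric
        return selected

    if __name__ == "__main__":
        # Toy metric: each layer contributes a fixed, made-up accuracy gain.
        gains = {"conv1": 0.40, "conv2": 0.15, "conv3": 0.00, "head": 0.25}
        score = lambda layers: sum(gains[name] for name in layers)
        print(select_layers(["conv1", "conv2", "conv3", "head"], score))
        # -> ['conv1', 'conv2', 'head']  (conv3 adds nothing, so it is skipped)
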
  • Publication number: 20210256348
    Abstract: Aspects of the present invention are directed to computer-implemented techniques for performing data compression and conversion between data formats of varying degrees of precision, and more particularly for improving the inferencing (application) of artificial neural networks using a reduced precision (e.g., INT8) data format. Embodiments of the present invention generate candidate conversions of data output, then employ a relative measure of quality to identify the candidate conversion with the greatest accuracy (i.e., least divergence from the original higher-precision values). The representation can then be used during inference to perform computations on the resulting output data. (An illustrative sketch of this calibration idea follows this entry.)
    Type: Application
    Filed: May 3, 2021
    Publication date: August 19, 2021
    Inventors: Szymon Migacz, Hao Wu, Dilip Sequeira, Ujval Kapasi, Maxim Milakov, Slawomir Kierat, Zacky Zhou, Yilin Zhang, Alex Fit-Florea
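
The abstract of publication 20210256348 above (shared by patent 10997492 and publication 20180211152 further down this list) describes generating candidate conversions to a reduced-precision format such as INT8 and keeping the candidate that diverges least from the original higher-precision values. The sketch below is a simplified, hypothetical illustration of that idea: candidate saturation thresholds are drawn from an activation histogram and scored with a KL-divergence measure, and the least-divergent threshold defines the INT8 scale. The histogram sizes, the choice of KL divergence, and the quantization step are assumptions for illustration, not the claimed procedure.

    import numpy as np

    def kl_divergence(p, q):
        # Relative measure of quality: how much the quantized distribution q
        # diverges from the reference distribution p (zero bins are skipped).
        p = p / p.sum()
        q = q / q.sum()
        mask = (p > 0) & (q > 0)
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

    def quantize_histogram(hist, num_levels=128):
        # Collapse the fine histogram into num_levels bins (mimicking INT8
        # resolution), then expand it back so it is comparable bin-for-bin.
        chunks = np.array_split(hist, num_levels)
        return np.concatenate(
            [np.full(len(c), c.sum() / max(np.count_nonzero(c), 1)) * (c > 0)
             for c in chunks])

    def choose_threshold(activations, num_bins=2048):
        # Each candidate conversion clips |activations| at a different threshold;
        # the candidate whose quantized distribution diverges least wins.
        hist, edges = np.histogram(np.abs(activations), bins=num_bins)
        best_threshold, best_div = float(edges[-1]), np.inf
        for i in range(128, num_bins + 1, 128):
            reference = hist[:i].astype(np.float64)
            reference[-1] += hist[i:].sum()        # clipped outliers fold into the last bin
            quantized = quantize_histogram(hist[:i].astype(np.float64))
            div = kl_divergence(reference, quantized)
            if div < best_div:
                best_threshold, best_div = float(edges[i]), div
        return best_threshold

    if __name__ == "__main__":
        acts = np.random.laplace(scale=1.0, size=100_000).astype(np.float32)
        threshold = choose_threshold(acts)
        print(f"saturation threshold ~ {threshold:.3f}, INT8 scale ~ {threshold / 127.0:.5f}")
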
  • Patent number: 10997492
    Abstract: Aspects of the present invention are directed to computer-implemented techniques for performing data compression and conversion between data formats of varying degrees of precision, and more particularly for improving the inferencing (application) of artificial neural networks using a reduced precision (e.g., INT8) data format. Embodiments of the present invention generate candidate conversions of data output, then employ a relative measure of quality to identify the candidate conversion with the greatest accuracy (i.e., least divergence from the original higher-precision values). The representation can then be used during inference to perform computations on the resulting output data.
    Type: Grant
    Filed: December 11, 2017
    Date of Patent: May 4, 2021
    Assignee: NVIDIA Corporation
    Inventors: Szymon Migacz, Hao Wu, Dilip Sequeira, Ujval Kapasi, Maxim Milakov, Slawomir Kierat, Zacky Zhou, Yilin Zhang, Alex Fit-Florea
  • Publication number: 20180211152
    Abstract: Aspects of the present invention are directed to computer-implemented techniques for performing data compression and conversion between data formats of varying degrees of precision, and more particularly for improving the inferencing (application) of artificial neural networks using a reduced precision (e.g., INT8) data format. Embodiments of the present invention generate candidate conversions of data output, then employ a relative measure of quality to identify the candidate conversion with the greatest accuracy (i.e., least divergence from the original higher-precision values). The representation can then be used during inference to perform computations on the resulting output data.
    Type: Application
    Filed: December 11, 2017
    Publication date: July 26, 2018
    Inventors: Szymon Migacz, Hao Wu, Dilip Sequeira, Ujval Kapasi, Maxim Milakov, Slawomir Kierat, Zacky Zhou, Yilin Zhang, Alex Fit-Florea
  • Publication number: 20170372202
    Abstract: Aspects of the present invention are directed to computer-implemented techniques for improving the training of artificial neural networks using a reduced precision (e.g., float16) data format. Embodiments of the present invention rescale tensor values prior to performing matrix operations (such as matrix multiplication or matrix addition) to prevent overflow and underflow. To preserve accuracy throughout the performance of the matrix operations, the scale factors are defined using a novel data format to represent tensors, wherein a matrix is represented by the tuple X, where X=(a, v[.]), wherein a is a float scale factor and v[.] are scaled values stored in the float16 format. The value of any element X[i] according to this data format would be equal to a*v[i].
    Type: Application
    Filed: June 15, 2017
    Publication date: December 28, 2017
    Inventors: Boris Ginsburg, Sergei Nikolaev, Ahmad Kiswani, Hao Wu, Amir Gholaminejad, Slawomir Kierat, Michael Houston, Alex Fit-Florea