Patents by Inventor Michael ROTZIN

Michael ROTZIN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230333855
    Abstract: In one embodiment, a matrix processor comprises a memory to store a matrix operand and a strided read sequence, wherein: the matrix operand is stored out of order in the memory; and the strided read sequence comprises a sequence of read operations to read the matrix operand in a correct order from the memory. The matrix processor further comprises circuitry to: receive a first instruction to be executed by the matrix processor, wherein the first instruction is to instruct the matrix processor to perform a first operation on the matrix operand; read the matrix operand from the memory based on the strided read sequence; and execute the first instruction by performing the first operation on the matrix operand.
    Type: Application
    Filed: May 19, 2023
    Publication date: October 19, 2023
    Applicant: Intel Corporation
    Inventors: Nitin N. Garegrat, Tony L. Werner, Jeff DelChiaro, Michael Rotzin, Robert T. Rhoades, Ujwal Basavaraj Sajjanar, Anne Q. Ye
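
The abstract above (shared by granted patent 11687341 and publication 20190391811 further down this list) describes reading a matrix operand that sits out of order in memory by following a stored strided read sequence. The Python sketch below models that flow in software under loose assumptions; the StridedRead descriptor, the MatrixProcessor class, and the two-read example layout are illustrative names invented for this sketch, not structures taken from the filing.

```python
# Minimal sketch of the strided-read idea in publication 20230333855 /
# patent 11687341: a matrix operand sits out of order in a flat memory,
# and a "strided read sequence" recovers it in the correct order before
# the requested operation runs. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class StridedRead:
    base: int     # starting address of this read
    stride: int   # distance between consecutive elements
    count: int    # number of elements to read

class MatrixProcessor:
    def __init__(self, memory):
        self.memory = memory  # flat memory holding the out-of-order operand

    def read_operand(self, sequence, rows, cols):
        """Follow the strided read sequence to rebuild the operand in order."""
        flat = []
        for rd in sequence:
            flat.extend(self.memory[rd.base + i * rd.stride] for i in range(rd.count))
        return [flat[r * cols:(r + 1) * cols] for r in range(rows)]

    def execute(self, op, sequence, rows, cols):
        """Model 'receive instruction, read via strided sequence, execute'."""
        operand = self.read_operand(sequence, rows, cols)
        return op(operand)

# Example: a 2x2 matrix stored column-major ("out of order" for a
# row-major consumer); two stride-2 reads restore row order.
mem = [1, 3, 2, 4]                      # column-major layout of [[1, 2], [3, 4]]
seq = [StridedRead(0, 2, 2), StridedRead(1, 2, 2)]
mp = MatrixProcessor(mem)
print(mp.execute(lambda m: [[2 * x for x in row] for row in m], seq, 2, 2))
# -> [[2, 4], [6, 8]]
```
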
  • Patent number: 11687341
    Abstract: In one embodiment, a matrix processor comprises a memory to store a matrix operand and a strided read sequence, wherein: the matrix operand is stored out of order in the memory; and the strided read sequence comprises a sequence of read operations to read the matrix operand in a correct order from the memory. The matrix processor further comprises circuitry to: receive a first instruction to be executed by the matrix processor, wherein the first instruction is to instruct the matrix processor to perform a first operation on the matrix operand; read the matrix operand from the memory based on the strided read sequence; and execute the first instruction by performing the first operation on the matrix operand.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: June 27, 2023
    Assignee: Intel Corporation
    Inventors: Nitin N. Garegrat, Tony L. Werner, Jeff DelChiaro, Michael Rotzin, Robert T. Rhoades, Ujwal Basavaraj Sajjanar, Anne Q. Ye
  • Patent number: 11520562
    Abstract: A method comprising storing a plurality of entries, each entry of the plurality of entries associated with a portion of a range of input values, each entry of the plurality of entries comprising a set of coefficients defining a power series approximation; selecting a first entry of the plurality of entries based on a determination that a floating point input value is within a portion of the range of input values that is associated with the first entry; and calculating an output value by evaluating the power series approximation defined by the set of coefficients of the first entry at the floating point input value.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: December 6, 2022
    Assignee: Intel Corporation
    Inventors: Brian J. Hickmann, Nitin N. Garegrat, Maciej Urbanski, Michael Rotzin
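
Patent 11520562 above (the same abstract appears under publication 20190384575 below) describes a table of range-indexed coefficient sets, each defining a power series that approximates a function over its slice of the input range. A minimal Python sketch of that lookup-and-evaluate flow follows; the Entry layout, the choice of exp(x) as the approximated function, and the Horner evaluation are assumptions of the sketch rather than details from the patent.

```python
import math
from dataclasses import dataclass

# Table-driven power-series approximation in the spirit of patent
# 11520562: each entry covers a slice of the input range and stores the
# coefficients of a polynomial; the output is that polynomial evaluated
# at the input. The entry layout (center + Horner) is this sketch's
# assumption, not the patent's exact format.

@dataclass
class Entry:
    lo: float            # inclusive lower bound of the covered range
    hi: float            # exclusive upper bound
    center: float        # expansion point of the power series
    coeffs: list         # c0, c1, c2, ... of sum c_i * (x - center)^i

def build_exp_table(segments=8, lo=0.0, hi=1.0, order=3):
    """Example table: piecewise Taylor expansions of exp(x) on [lo, hi)."""
    entries, width = [], (hi - lo) / segments
    for s in range(segments):
        a, b = lo + s * width, lo + (s + 1) * width
        c = 0.5 * (a + b)
        coeffs = [math.exp(c) / math.factorial(i) for i in range(order + 1)]
        entries.append(Entry(a, b, c, coeffs))
    return entries

def approximate(table, x):
    """Select the entry whose range contains x and evaluate it (Horner)."""
    entry = next(e for e in table if e.lo <= x < e.hi)
    acc = 0.0
    for c in reversed(entry.coeffs):
        acc = acc * (x - entry.center) + c
    return acc

table = build_exp_table()
print(approximate(table, 0.3), math.exp(0.3))   # the two values agree closely
```
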
  • Patent number: 11169776
    Abstract: Systems, apparatuses and methods may provide for technology that, in response to an identification that one or more hardware units are to execute on a first type of data format, decomposes a first original floating point number into a plurality of first segmented floating point numbers that are to be equivalent to the first original floating point number. The technology may further, in response to the identification, decompose a second original floating point number into a plurality of second segmented floating point numbers that are to be equivalent to the second original floating point number. The technology may further execute a multiplication operation on the first and second segmented floating point numbers to multiply the first segmented floating point numbers with the second segmented floating point numbers.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: November 9, 2021
    Assignee: Intel Corporation
    Inventors: Nitin N. Garegrat, Maciej Urbanski, Michael Rotzin, Brian J. Hickmann, Valentina Popescu
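
The abstract of patent 11169776 above (repeated under publication 20190324723 below) describes decomposing each floating point operand into several narrower segments that together equal the original value, then multiplying by combining the segments' cross products. The sketch below illustrates the same idea with a classic Dekker-style two-way split of a float64; the split constant and the two-segment decomposition are stand-ins for whatever narrower hardware format the patent targets.

```python
# Segmented floating point multiplication in the spirit of patent
# 11169776: split each operand into segments whose sum equals the
# original value, then multiply by accumulating the cross products of
# the segments. A Dekker-style split of a float64 into two halves
# stands in for the hardware's narrower target format; this choice is
# the sketch's assumption, not the patent's.

def split(x, bits=27):
    """Split x into (hi, lo) with hi + lo == x and hi holding the top bits."""
    c = x * (2.0 ** bits + 1.0)
    hi = c - (c - x)
    lo = x - hi
    return hi, lo

def segmented_multiply(a, b):
    """Multiply a and b via the cross products of their segments."""
    a_hi, a_lo = split(a)
    b_hi, b_lo = split(b)
    # Accumulate the four partial products, largest first.
    return a_hi * b_hi + a_hi * b_lo + a_lo * b_hi + a_lo * b_lo

a, b = 1.0 / 3.0, 7.0 / 11.0
print(segmented_multiply(a, b), a * b)   # compare with the direct product
```
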
  • Publication number: 20210263993
    Abstract: Methods and apparatuses relating to performing vector multiplication are described. Hardware accelerators to perform vector multiplication are also described.
    Type: Application
    Filed: September 27, 2018
    Publication date: August 26, 2021
    Inventors: Maciej Urbanski, Brian J. Hickmann, Michael Rotzin, Krishnakumar Nair, Andrew Yang, Brian S. Morris, Dennis Bradford
  • Patent number: 10761757
    Abstract: An apparatus and method for converting tensor data. For example, one embodiment of a method comprises: fetching source tensor blocks of a source tensor data structure, each source tensor block comprising a plurality of source tensor data elements having a first numeric representation, wherein the source tensor data structure comprises a predefined structural arrangement of source tensor blocks; converting the one or more source tensor blocks into one or more destination tensor blocks comprising a plurality of destination tensor data elements having a second numeric representation different from the first numeric representation, wherein the sets of one or more source tensor blocks are converted to one or more corresponding destination tensor blocks in a specified order based on the first and second numeric representations; and storing each individual destination tensor block in a designated memory region to maintain coherency with the predefined structural arrangement of the source tensor blocks.
    Type: Grant
    Filed: June 30, 2018
    Date of Patent: September 1, 2020
    Assignee: Intel Corporation
    Inventors: Krishnakumar Nair, Andrew Yang, Michael Rotzin, Nitin Garegrat, Tom Schebye, Tony Werner
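
Patent 10761757 above describes fetching a tensor block by block, converting each block's elements from one numeric representation to another, and storing the converted blocks so that the original block arrangement is preserved. A rough Python sketch of that block-wise conversion follows; the float32-to-bfloat16 truncation, the block size, and the helper names are assumptions of the sketch, not the patent's specified formats.

```python
import struct

# Block-wise tensor conversion in the spirit of patent 10761757: walk a
# tensor block by block, convert each block's elements from one numeric
# representation (float32 here) to another (a truncated, bfloat16-like
# format here), and write each converted block into the destination
# slot that mirrors the source arrangement. Formats and rounding mode
# are this sketch's assumptions.

def f32_to_bf16_bits(x):
    """Truncate a float32 to its 16 high bits (bfloat16-style)."""
    return struct.unpack(">I", struct.pack(">f", x))[0] >> 16

def bf16_bits_to_f32(bits):
    """Widen bfloat16 bits back to a float32 value."""
    return struct.unpack(">f", struct.pack(">I", bits << 16))[0]

def convert_tensor(source_blocks):
    """Convert every block, preserving the block arrangement."""
    destination_blocks = []
    for block in source_blocks:              # specified order: source order
        destination_blocks.append([f32_to_bf16_bits(v) for v in block])
    return destination_blocks

# A tiny "tensor" of two 4-element blocks.
src = [[1.0, 2.5, -3.14159, 1e-3], [65504.0, 0.1, -0.1, 7.0]]
dst = convert_tensor(src)
print([[round(bf16_bits_to_f32(b), 4) for b in blk] for blk in dst])
```
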
  • Patent number: 10620951
    Abstract: Disclosed embodiments relate to sparse matrix multiplication (SMM) acceleration using column folding and squeezing. In one example, a processor, in response to an SMM instruction having fields to specify locations of first, second, and output matrices, the second matrix being a sparse matrix, uses execution circuitry to pack the second matrix by replacing one or more zero-valued elements with non-zero elements yet to be processed, each of the replaced elements further including a field to identify its logical position within the second matrix. The execution circuitry is further to, for each non-zero element at row M and column K of the specified first matrix, generate a product of the element and each corresponding non-zero element at row K, column N of the packed second matrix, and accumulate each generated product with a previous value of a corresponding element at row M and column N of the specified output matrix.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: April 14, 2020
    Assignee: Intel Corporation
    Inventors: Omid Azizi, Guy Boudoukh, Tony Werner, Andrew Yang, Michael Rotzin, Chen Koren, Eriko Nurvitadhi
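
Patent 10620951 above (the same abstract appears under publication 20190042237 at the end of this list) packs the sparse second matrix so that only non-zero elements are processed, with each kept element carrying a field that records its logical position. The Python sketch below captures that squeeze-and-tag idea in simplified form; the per-row list of (value, column) pairs stands in for the hardware's fixed-width folded columns.

```python
# Simplified view of the packing idea in patent 10620951: zero elements
# of the sparse second matrix are squeezed out, and every element that
# is kept carries its logical column so products still accumulate into
# the right output slot. The hardware's column folding into fixed-width
# slots is reduced here to per-row (value, logical_column) pairs.

def pack_sparse(b):
    """Pack matrix B: keep only non-zeros, tagged with their logical column."""
    return [[(v, n) for n, v in enumerate(row) if v != 0] for row in b]

def smm(a, b_packed, cols):
    """C[m][n] += A[m][k] * B[k][n] over the non-zero packed elements."""
    c = [[0] * cols for _ in a]
    for m, a_row in enumerate(a):
        for k, a_val in enumerate(a_row):
            if a_val == 0:
                continue
            for b_val, n in b_packed[k]:
                c[m][n] += a_val * b_val
    return c

a = [[1, 0, 2],
     [0, 3, 0]]
b = [[0, 4],
     [5, 0],
     [0, 6]]
print(smm(a, pack_sparse(b), cols=2))   # [[0, 16], [15, 0]]
```
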
  • Publication number: 20190391811
    Abstract: In one embodiment, a matrix processor comprises a memory to store a matrix operand and a strided read sequence, wherein: the matrix operand is stored out of order in the memory; and the strided read sequence comprises a sequence of read operations to read the matrix operand in a correct order from the memory. The matrix processor further comprises circuitry to: receive a first instruction to be executed by the matrix processor, wherein the first instruction is to instruct the matrix processor to perform a first operation on the matrix operand; read the matrix operand from the memory based on the strided read sequence; and execute the first instruction by performing the first operation on the matrix operand.
    Type: Application
    Filed: August 29, 2019
    Publication date: December 26, 2019
    Applicant: Intel Corporation
    Inventors: Nitin N. Garegrat, Tony L. Werner, Jeff DelChiaro, Michael Rotzin, Robert T. Rhoades, Ujwal Basavaraj Sajjanar, Anne Q. Ye
  • Publication number: 20190384575
    Abstract: A method comprising storing a plurality of entries, each entry of the plurality of entries associated with a portion of a range of input values, each entry of the plurality of entries comprising a set of coefficients defining a power series approximation; selecting a first entry of the plurality of entries based on a determination that a floating point input value is within a portion of the range of input values that is associated with the first entry; and calculating an output value by evaluating the power series approximation defined by the set of coefficients of the first entry at the floating point input value.
    Type: Application
    Filed: August 30, 2019
    Publication date: December 19, 2019
    Applicant: Intel Corporation
    Inventors: Brian J. Hickmann, Nitin N. Garegrat, Maciej Urbanski, Michael Rotzin
  • Publication number: 20190324723
    Abstract: Systems, apparatuses and methods may provide for technology that, in response to an identification that one or more hardware units are to execute on a first type of data format, decomposes a first original floating point number into a plurality of first segmented floating point numbers that are to be equivalent to the first original floating point number. The technology may further, in response to the identification, decompose a second original floating point number into a plurality of second segmented floating point numbers that are to be equivalent to the second original floating point number. The technology may further execute a multiplication operation on the first and second segmented floating point numbers to multiply the first segmented floating point numbers with the second segmented floating point numbers.
    Type: Application
    Filed: June 28, 2019
    Publication date: October 24, 2019
    Applicant: Intel Corporation
    Inventors: Nitin N. Garegrat, Maciej Urbanski, Michael Rotzin, Brian J. Hickmann, Valentina Popescu
  • Publication number: 20190042237
    Abstract: Disclosed embodiments relate to sparse matrix multiplication (SMM) acceleration using column folding and squeezing. In one example, a processor, in response to an SMM instruction having fields to specify locations of first, second, and output matrices, the second matrix being a sparse matrix, uses execution circuitry to pack the second matrix by replacing one or more zero-valued elements with non-zero elements yet to be processed, each of the replaced elements further including a field to identify its logical position within the second matrix. The execution circuitry is further to, for each non-zero element at row M and column K of the specified first matrix, generate a product of the element and each corresponding non-zero element at row K, column N of the packed second matrix, and accumulate each generated product with a previous value of a corresponding element at row M and column N of the specified output matrix.
    Type: Application
    Filed: June 22, 2018
    Publication date: February 7, 2019
    Inventors: Omid Azizi, Guy Boudoukh, Tony Werner, Andrew Yang, Michael Rotzin, Chen Koren, Eriko Nurvitadhi