Patents by Inventor Nihat Tunali

Nihat Tunali has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO). Illustrative code sketches of the techniques described in these abstracts follow the listing.

  • Publication number: 20230368030
    Abstract: Weights can be pruned during training of a deep neural network (DNN) to increase sparsity in the weights and reduce the amount of computation required for performing the deep learning operations in DNNs. A DNN layer may have one or more weight tensors corresponding to one or more output channels of the layer. A weight tensor has weights whose values are determined by training the DNN. A weight tensor may have a dimension corresponding to the input channels of the layer. The weight tensor may be partitioned into subtensors, each of which has a subset of the input channels. The subtensors may have the same number of input channels. One or more subtensors may be selected, e.g., based on the weights in the one or more subtensors. The weights in a selected subtensor are pruned, e.g., changed to zeros. The weights in an unselected subtensor may be modified by further training the DNN.
    Type: Application
    Filed: July 25, 2023
    Publication date: November 16, 2023
    Inventors: Arnab Raha, Michael Wu, Deepak Abraham Mathaikutty, Martin Langhammer, Nihat Tunali
  • Publication number: 20230229917
    Abstract: A compute block can perform hybrid multiply-accumulate (MAC) operations. The compute block may include a weight compression module and a processing element (PE) array. The weight compression module may select a first group of one or more weights and a second group of one or more weights from a weight tensor of a DNN layer. A weight in the first group is quantized to a power of two value. A weight in the second group is quantized to an integer. The integer and the exponent of the power of two value may be stored in a memory in lieu of the original values of the weights. A PE in the PE array includes a shifter configured to shift an activation of the layer by the exponent of the power of two value and a multiplier configured to multiply the integer by another activation of the layer.
    Type: Application
    Filed: March 15, 2023
    Publication date: July 20, 2023
    Applicant: Intel Corporation
    Inventors: Michael Wu, Arnab Raha, Deepak Abraham Mathaikutty, Nihat Tunali, Martin Langhammer
  • Publication number: 20230021396
    Abstract: A method for implementing an artificial neural network in a computing system comprises performing a compute operation using an input activation and a weight to generate an output activation, and modifying the output activation using a noise value to increase activation sparsity.
    Type: Application
    Filed: September 27, 2022
    Publication date: January 26, 2023
    Applicant: Intel Corporation
    Inventors: Nihat Tunali, Arnab Raha, Bogdan Pasca, Martin Langhammer, Michael Wu, Deepak Mathaikutty
  • Publication number: 20220292366
    Abstract: Methods, apparatus, systems, and articles of manufacture to implement low-overhead sparsity acceleration logic for multi-precision dataflow in deep neural network accelerators are disclosed. An example apparatus includes a first buffer to store data corresponding to a first precision; a second buffer to store data corresponding to a second precision; and hardware control circuitry to: process a first multibit bitmap to determine an activation precision of an activation value, the first multibit bitmap including values corresponding to different precisions; process a second multibit bitmap to determine a weight precision of a weight value, the second multibit bitmap including values corresponding to different precisions; and store the activation value and the weight value in the second buffer when at least one of the activation precision or the weight precision corresponds to the second precision.
    Type: Application
    Filed: March 30, 2022
    Publication date: September 15, 2022
    Inventors: Arnab Raha, Martin Langhammer, Debabrata Mohapatra, Nihat Tunali, Michael Wu
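
Illustrative code sketches

Publication 20230368030 describes partitioning a layer's weight tensor along the input-channel dimension into equally sized subtensors, selecting some of them based on their weights, and zeroing the selected ones. The sketch below follows that outline in NumPy; the L1-norm scoring, the prune_ratio parameter, and all names are illustrative assumptions, not the claimed method.

```python
import numpy as np

def prune_subtensors(weight, group_size, prune_ratio=0.5):
    """Structured pruning sketch: zero whole input-channel subtensors.

    weight      -- array of shape (out_channels, in_channels, kh, kw)
    group_size  -- number of input channels per subtensor
    prune_ratio -- fraction of subtensors to zero per output channel (assumed)
    """
    out_c, in_c = weight.shape[:2]
    assert in_c % group_size == 0, "input channels must split evenly into subtensors"
    num_groups = in_c // group_size
    pruned = weight.copy()

    for oc in range(out_c):
        # Partition this output channel's weights into equal subtensors.
        groups = pruned[oc].reshape(num_groups, group_size, *weight.shape[2:])
        # Score each subtensor by its L1 norm (an assumed selection criterion).
        scores = np.abs(groups).reshape(num_groups, -1).sum(axis=1)
        num_prune = int(prune_ratio * num_groups)
        # Select the lowest-scoring subtensors and change their weights to zeros.
        groups[np.argsort(scores)[:num_prune]] = 0.0
        pruned[oc] = groups.reshape(in_c, *weight.shape[2:])
    return pruned

# Example: 8 output channels, 16 input channels split into subtensors of 4.
w = np.random.randn(8, 16, 3, 3).astype(np.float32)
w_pruned = prune_subtensors(w, group_size=4, prune_ratio=0.5)
```

Per the abstract, the weights left untouched here would then be refined by further training rather than frozen.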
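
Publication 20230229917 quantizes one group of weights to powers of two (kept as exponents and applied with a shifter) and another group to integers (applied with a multiplier). The Python sketch below emulates that hybrid MAC in floating point; the closeness-to-a-power-of-two selection rule, the threshold, and the function names are assumptions, and the hardware shift is emulated with a multiply by 2**exponent.

```python
import numpy as np

def quantize_hybrid(weights, pow2_threshold=0.1):
    """Encode each weight either as a signed power-of-two exponent or as an
    integer. The selection rule (relative distance to a power of two) is an
    assumption made for illustration."""
    encoded = []
    for w in weights:
        exp = int(np.round(np.log2(abs(w)))) if w != 0 else 0
        if w != 0 and abs(abs(w) - 2.0 ** exp) / abs(w) < pow2_threshold:
            # First group: store only the sign and the exponent.
            encoded.append(("pow2", int(np.sign(w)), exp))
        else:
            # Second group: store the weight rounded to an integer.
            encoded.append(("int", int(round(w)), 0))
    return encoded

def hybrid_dot(activations, encoded_weights):
    """Accumulate a dot product where power-of-two weights use the 'shifter'
    path and integer weights use the 'multiplier' path."""
    acc = 0.0
    for a, (kind, value, exp) in zip(activations, encoded_weights):
        if kind == "pow2":
            acc += value * (a * 2.0 ** exp)   # shifter path: activation shifted by exp
        else:
            acc += value * a                  # multiplier path: integer multiply
    return acc

# Example: weights near a power of two take the shift path, the rest multiply.
enc = quantize_hybrid([0.5, -3.7, 3.0, 0.0])
print(hybrid_dot([1.0, 2.0, -0.5, 4.0], enc))
```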
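
Publication 20230021396 computes an output activation from an input activation and a weight, then modifies that output with a noise value so that more activations become zero. A minimal sketch, assuming the noise is subtracted before a ReLU-style cutoff (one plausible reading of the abstract, not necessarily the claimed mechanism):

```python
import numpy as np

def noisy_sparse_activation(x, w, noise_scale=0.1, rng=None):
    """Compute output activations (here a plain matrix product) and inject a
    noise value before the nonlinearity so more outputs land exactly at zero.
    noise_scale and the subtract-then-ReLU form are assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    y = x @ w                                   # compute op: input activations x weights
    noise = rng.uniform(0.0, noise_scale, size=y.shape)
    return np.maximum(y - noise, 0.0)           # small positive outputs are zeroed too

# Example: the noisy version typically produces more zeros than a plain ReLU.
x = np.random.randn(4, 8)
w = np.random.randn(8, 16)
print((noisy_sparse_activation(x, w) == 0).mean(), (np.maximum(x @ w, 0) == 0).mean())
```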
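
Publication 20220292366 reads per-element precision codes for activations and weights (the multibit bitmaps) and stores an activation/weight pair in the higher-precision buffer whenever at least one of the two operands uses the higher precision. The following sketch only models that routing decision; the 4-bit/8-bit precision values and the list-based buffers are assumptions standing in for the hardware buffers and control circuitry.

```python
def route_to_buffers(activations, weights, act_precisions, wgt_precisions,
                     low_bits=4, high_bits=8):
    """Route each activation/weight pair to a low- or high-precision buffer
    based on per-element precision codes. Purely illustrative."""
    low_buffer, high_buffer = [], []
    for a, w, ap, wp in zip(activations, weights, act_precisions, wgt_precisions):
        # The pair needs the wider datapath if either operand is high precision.
        if ap == high_bits or wp == high_bits:
            high_buffer.append((a, w))
        else:
            low_buffer.append((a, w))
    return low_buffer, high_buffer

# Example: precision codes as if decoded from the multibit bitmaps (values assumed).
low, high = route_to_buffers([3, 7, 120, 5], [2, 90, 1, 4],
                             [4, 4, 8, 4], [4, 8, 4, 4])
print(low)   # [(3, 2), (5, 4)]
print(high)  # [(7, 90), (120, 1)]
```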