Patents by Inventor Abdulkadir Utku Diril

Abdulkadir Utku Diril has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11481471
    Abstract: A system comprises a matrix processor unit that includes a first type of register, a group of a second type of registers, and a plurality of calculation units. The first type of register is configured to concurrently store values from different rows of a first matrix. At least a portion of the first type of register is logically divided into groups of elements, and each of the groups corresponds to a different row of the first matrix. Each of the second type of registers is configured to concurrently store values from a plurality of different rows of a second matrix. Each of the calculation units corresponds to one of the second type of registers and is configured to at least in part determine a corresponding element in a result matrix of convoluting the second matrix with the first matrix.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: October 25, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Krishnakumar Nair, Abdulkadir Utku Diril, Dheevatsa Mudigere, Olivia Wu, Ehsan Khish Ardestani Zadeh, Yuchen Hao
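A minimal NumPy sketch of the register layout this abstract describes, under assumed shapes: one shared register holds the rows of the first matrix (used here as a kernel), each per-unit register holds rows of a patch of the second matrix, and each calculation unit reduces one patch to one element of the convolution result. The function name and sizes are illustrative, not taken from the patent.

```python
import numpy as np

def convolve_valid(data, kernel):
    kh, kw = kernel.shape
    oh, ow = data.shape[0] - kh + 1, data.shape[1] - kw + 1
    # "First type of register": kernel rows stored contiguously, grouped by row.
    kernel_register = kernel.reshape(-1)
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # "Second type of register": rows of one data patch stored together.
            patch_register = data[i:i + kh, j:j + kw].reshape(-1)
            # One calculation unit: elementwise multiply and sum.
            out[i, j] = np.dot(kernel_register, patch_register)
    return out

print(convolve_valid(np.arange(16.).reshape(4, 4), np.ones((2, 2))))
```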
  • Patent number: 11468313
    Abstract: The disclosed computer-implemented method may include (1) identifying an artificial neural network comprising a set of nodes interconnected via a set of connections, and (2) training the artificial neural network by, for each connection in the set of connections, determining a quantized weight value associated with the connection. Determining the quantized weight value associated with the connection may include (1) associating a loss function with the connection, the loss function including a periodic regularization function that describes a relationship between an input value and a weight value of the connection, (2) determining a minimum of the associated loss function with respect to the weight value in accordance with the periodic regularization function, and (3) generating the quantized weight value associated with the connection based on the determined minimum of the loss function. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: October 11, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Maxim Naumov, Abdulkadir Utku Diril, Jong Soo Park, Benjamin Ray, Jedrzej Jablonski, Andrew John Tulloch
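A hedged sketch of the quantization idea in this abstract: a periodic regularization term whose minima fall on a quantization grid is added to the loss, so minimizing the loss with respect to a weight pulls it toward a representable value. The sin² regularizer, step size, and dense-scan stand-in for training below are assumptions for illustration.

```python
import numpy as np

STEP = 0.25   # assumed quantization step
LAMBDA = 0.1  # assumed regularization strength

def periodic_regularizer(w):
    # Zero exactly at integer multiples of STEP, positive elsewhere.
    return np.sin(np.pi * w / STEP) ** 2

def loss(w, target=0.33):
    # Toy "task" loss plus the periodic regularization term.
    return (w - target) ** 2 + LAMBDA * periodic_regularizer(w)

# Find the minimum of the combined loss by a dense scan (stand-in for training),
# then emit the quantized weight at that minimum.
grid = np.linspace(-1.0, 1.0, 100001)
w_min = grid[np.argmin(loss(grid))]
quantized = STEP * np.round(w_min / STEP)
print(w_min, quantized)
```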
  • Patent number: 11443013
    Abstract: A processor system comprises a hardware channel convolution processor unit and dot product processor unit. The channel convolution processor unit is configured to perform depthwise convolution, including by multiplying each data element of a first group of data elements of a convolution data matrix with a corresponding data element of a second group of data elements of a plurality of depthwise convolution weight matrices and summing together, for each specific channel, multiplication results corresponding to the specific channel to determine one corresponding result data element in a corresponding channel convolution result matrix to calculate a portion of depthwise convolution results.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: September 13, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Rakesh Komuravelli, Krishnakumar Narayanan Nair, Abdulkadir Utku Diril, Ehsan Khish Ardestani Zadeh, Yuchen Hao, Martin Schatz, Thomas Mark Ulrich, Olivia Wu, Anup Ramesh Kadkol, Amin Firoozshahian
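An illustrative NumPy sketch of the per-channel sum of products described above: each channel of a data patch is multiplied elementwise with its own depthwise weight matrix and reduced to one result element for that channel. Shapes and names are assumptions for the sketch only.

```python
import numpy as np

def depthwise_result_elements(patch, weights):
    # patch:   (channels, kh, kw) group of data elements from the convolution data matrix
    # weights: (channels, kh, kw) one depthwise convolution weight matrix per channel
    # returns: (channels,) one result data element per channel
    return np.sum(patch * weights, axis=(1, 2))

patch = np.arange(2 * 3 * 3, dtype=float).reshape(2, 3, 3)
weights = np.ones((2, 3, 3))
print(depthwise_result_elements(patch, weights))  # one value per channel
```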
  • Patent number: 11409838
    Abstract: A system comprises a data input vector unit, a weight input vector unit, and a plurality of calculation units of a matrix processor unit. The data input vector unit is configured to concurrently receive elements of different rows of a first and second data matrix. The weight input vector unit is configured to receive a combined weight vector and at least in part concurrently provide obtained weight elements of a first and second weight matrix to a corresponding first and second group of calculation units. Each calculation unit of the first and second group of calculation units is configured to multiply elements from the data input vector unit with elements of the corresponding weight matrix from the weight input vector unit and sum together multiplication results of the corresponding calculation unit to at least in part determine a corresponding element in a first or second convolution result matrix.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: August 9, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Krishnakumar Narayanan Nair, Olivia Wu, Ehsan Khish Ardestani Zadeh, Abdulkadir Utku Diril, Thomas Mark Ulrich, Yuchen Hao, Rakesh Komuravelli, Aravind Kalaiah
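A small sketch of the combined-weight-vector idea, with assumed packing order and sizes: two weight matrices travel in one vector, are unpacked to two groups of calculation units, and each group multiplies and sums against its own data patch to produce an element of its own convolution result.

```python
import numpy as np

def dual_convolution_elements(data_patch_a, data_patch_b, combined_weights, k=3):
    # Unpack the combined weight vector into the two k x k weight matrices.
    w1 = combined_weights[:k * k].reshape(k, k)
    w2 = combined_weights[k * k:].reshape(k, k)
    # First and second group of calculation units each multiply and sum.
    out1 = np.sum(data_patch_a * w1)
    out2 = np.sum(data_patch_b * w2)
    return out1, out2

combined = np.concatenate([np.ones(9), 2 * np.ones(9)])  # two packed 3x3 matrices
patch_a = np.arange(9.).reshape(3, 3)
patch_b = np.arange(9.).reshape(3, 3)
print(dual_convolution_elements(patch_a, patch_b, combined))
```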
  • Patent number: 11275560
    Abstract: A floating-point number in a first format representation is received. Based on an identification of a floating-point format type of the floating-point number, different components of the first format representation are identified. The different components of the first format representation are placed in corresponding components of a second format representation of the floating-point number, wherein a total number of bits of the second format representation is larger than a total number of bits of the first format representation. At least one of the components of the second format representation is padded with one or more zero bits. The floating-point number in the second format representation is stored in a register. A multiplication using the second format representation of the floating-point number is performed.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: March 15, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Thomas Mark Ulrich, Abdulkadir Utku Diril, Krishnakumar Narayanan Nair, Zhao Wang, Rakesh Komuravelli
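A hedged Python sketch of this kind of format up-conversion, using bfloat16 to float32 as the example pair: the sign and exponent fields are copied into the wider representation and the mantissa is padded with zero bits. The patent covers format identification more generally; this pairing is an assumption for the sketch.

```python
import struct

def bfloat16_bits_to_float32(bf16_bits: int) -> float:
    # bfloat16 shares float32's sign bit and 8-bit exponent; its 7-bit mantissa
    # is padded with 16 zero bits to fill the 23-bit float32 mantissa.
    f32_bits = bf16_bits << 16
    return struct.unpack('>f', f32_bits.to_bytes(4, 'big'))[0]

print(bfloat16_bits_to_float32(0x3F80))  # 1.0
print(bfloat16_bits_to_float32(0x4049))  # ~3.1406, the bfloat16 approximation of pi
```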
  • Patent number: 11264011
    Abstract: The disclosed method may include (1) determining whether a next operation of a plurality of operations of an artificial neural network (ANN) is dependent upon a Boolean predication value based on a representative value for a weight or an input of a node of the ANN, (2) based on the next operation not being dependent on the Boolean predication value, allowing the next operation to update a state of the ANN, and (3) based on the next operation being dependent on the Boolean predication value, performing at least one of (a) allowing, based on the Boolean predication value being a first value, the next operation to update the state of the ANN, and (b) preventing, based on the Boolean predication value being a second value different from the first value, the next operation from updating the state of the ANN. Various other methods and systems are also disclosed.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: March 1, 2022
    Assignee: Facebook, Inc.
    Inventors: Nadav Rotem, Abdulkadir Utku Diril, Mikhail Smelyanskiy, Jong Soo Park, James Kenneth Reed
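An illustrative sketch of the predicated execution described above: operations that do not depend on the Boolean predication value always update state, while predicated operations update state only when the predicate allows it. Deriving the predicate from a representative value is reduced here to a simple threshold check, which is an assumption.

```python
def run_operations(state, operations, representative_value, threshold=0.0):
    predicate = representative_value > threshold  # Boolean predication value
    for op, is_predicated in operations:
        if not is_predicated:
            state = op(state)          # unconditional state update
        elif predicate:
            state = op(state)          # update allowed by the predicate
        # else: predicated and predicate is False -> the update is skipped
    return state

ops = [(lambda s: s + 1, False), (lambda s: s * 10, True)]
print(run_operations(0, ops, representative_value=-1.0))  # 1  (second op skipped)
print(run_operations(0, ops, representative_value=2.0))   # 10 (both ops applied)
```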
  • Patent number: 11256977
    Abstract: A disclosed computing system may include a special-purpose hardware device having an input subsystem, a linearization subsystem, and a matrix multiplication unit. The input subsystem may facilitate on-the-fly convolution lowering within a neural network convolution layer by directing input volume patches to logical unit(s) of the device. The linearization subsystem may be configured to receive a patch from the input subsystem and to linearize the patch by arranging elements of the patch as a portion of a data matrix row. The matrix multiplication unit of the device may be configured to receive the data matrix from the linearization subsystem and to apply a filter matrix to the data matrix via a matrix multiplication operation. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: December 29, 2017
    Date of Patent: February 22, 2022
    Assignee: Facebook, Inc.
    Inventors: Mikhail Smelyanskiy, Abdulkadir Utku Diril, Jong Soo Park, Nadav Rotem
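A short sketch of convolution lowering in the spirit of this abstract: input patches are linearized into rows of a data matrix, and the whole convolution is then one matrix multiplication against a linearized filter matrix. Stride 1 and a single filter are simplifying assumptions.

```python
import numpy as np

def lower_and_multiply(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    # Linearization subsystem: each patch becomes one row of the data matrix.
    rows = [image[i:i + kh, j:j + kw].reshape(-1)
            for i in range(oh) for j in range(ow)]
    data_matrix = np.stack(rows)                 # (oh*ow, kh*kw)
    filter_matrix = kernel.reshape(-1, 1)        # (kh*kw, 1)
    # Matrix multiplication unit: one matmul yields every output element.
    return (data_matrix @ filter_matrix).reshape(oh, ow)

print(lower_and_multiply(np.arange(16.).reshape(4, 4), np.ones((3, 3))))
```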
  • Publication number: 20210349694
    Abstract: A device (e.g., integrated circuit chip) includes a first operand register, a second operand register, a multiplication unit, and a hardware logic component. The first operand register is configured to store a first operand value. The second operand register is configured to store a second operand value. The multiplication unit is configured to at least multiply the first operand value with the second operand value. The hardware logic component is configured to detect whether a zero value is provided and in response to a detection that the zero value is being provided: cause an update of at least the first operand register to be disabled, and cause a result of a multiplication of the first operand value with the second operand value to be a zero-value result.
    Type: Application
    Filed: May 7, 2020
    Publication date: November 11, 2021
    Inventors: Thomas Mark Ulrich, Abdulkadir Utku Diril, Zhao Wang
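A behavioral sketch (not hardware) of the zero-skipping behavior described above: when a zero operand is detected, the operand registers are left unchanged and the multiplication result is forced to zero. The class and method names are illustrative.

```python
class GatedMultiplier:
    def __init__(self):
        self.operand_a = 0.0   # first operand register
        self.operand_b = 0.0   # second operand register

    def multiply(self, a, b):
        if a == 0.0 or b == 0.0:
            # Zero detected: skip the register update and force a zero-value result.
            return 0.0
        self.operand_a, self.operand_b = a, b   # registers are updated only here
        return self.operand_a * self.operand_b

m = GatedMultiplier()
print(m.multiply(3.0, 4.0))  # 12.0
print(m.multiply(0.0, 5.0))  # 0.0, operand registers left unchanged
```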
  • Publication number: 20210334072
    Abstract: A processor system comprises a plurality of dot product processor units and element-wise multiplication units. The dot product processor units perform a depthwise convolution of a data matrix with a separate depthwise convolution weight matrix for each data matrix channel. Each dot product processor unit performs at least a portion of the depthwise convolution for one or more data matrix channels. The element-wise multiplication units perform multiplication operations of a pointwise convolution. Each element-wise multiplication unit applies to each depthwise convolution partial result element received from one or more of the dot product processor units a corresponding data element from each of a plurality of pointwise convolution weight filters to determine element-wise multiplication unit results. The processor system sums together different groups of data elements from the element-wise multiplication unit results to at least in part calculate different data elements of a result of the pointwise convolution.
    Type: Application
    Filed: April 22, 2020
    Publication date: October 28, 2021
    Inventors: Rakesh Komuravelli, Krishnakumar Narayanan Nair, Abdulkadir Utku Diril, Ehsan Khish Ardestani Zadeh, Yuchen Hao, Martin Schatz, Thomas Mark Ulrich, Olivia Wu, Anup Ramesh Kadkol, Amin Firoozshahian
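An illustrative sketch of the depthwise-then-pointwise flow in this abstract: per-channel depthwise partial results feed element-wise multiplications against each pointwise (1x1) weight filter, and summing across channels yields the pointwise results. Shapes are assumptions for the sketch.

```python
import numpy as np

def separable_convolution_point(patch, depthwise_w, pointwise_w):
    # patch:       (C, kh, kw)  data elements for one output position
    # depthwise_w: (C, kh, kw)  one depthwise convolution weight matrix per channel
    # pointwise_w: (F, C)       F pointwise convolution weight filters
    depthwise = np.sum(patch * depthwise_w, axis=(1, 2))   # (C,) per-channel partial results
    elementwise = pointwise_w * depthwise                  # (F, C) element-wise products
    return elementwise.sum(axis=1)                         # (F,) pointwise convolution results

C, F = 3, 2
out = separable_convolution_point(np.ones((C, 3, 3)), np.ones((C, 3, 3)),
                                  np.arange(F * C, dtype=float).reshape(F, C))
print(out)  # one value per pointwise filter
```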
  • Publication number: 20210326051
    Abstract: A system comprises a processor and a plurality of memory units. The processor is coupled to each of the plurality of memory units by a plurality of network connections. The processor includes a plurality of processing elements arranged in a two-dimensional array and a corresponding two-dimensional communication network communicatively connecting each of the plurality of processing elements to other processing elements on same axes of the two-dimensional array. Each processing element that is located along a diagonal of the two-dimensional array is configured as a request broadcasting master for a respective group of processing elements located along a same axis of the two-dimensional array.
    Type: Application
    Filed: May 4, 2021
    Publication date: October 21, 2021
    Inventors: Abdulkadir Utku Diril, Olivia Wu, Krishnakumar Narayanan Nair, Aravind Kalaiah, Anup Ramesh Kadkol, Pankaj Kansal
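A toy sketch of the topology described above: processing elements sit in a two-dimensional array, and the diagonal element of each row acts as the request broadcasting master for the elements on that axis. Grouping by row (rather than column) is an assumption made for illustration.

```python
def broadcast_masters(n):
    masters = {}
    for row in range(n):
        master = (row, row)                      # diagonal processing element of this row
        group = [(row, col) for col in range(n)] # elements sharing the same axis
        masters[master] = group                  # the master broadcasts requests for its group
    return masters

for master, group in broadcast_masters(3).items():
    print(master, "->", group)
```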
  • Publication number: 20210319076
    Abstract: A processor system comprises a plurality of processing elements. Each processing element includes a corresponding convolution processor unit configured to perform a portion of a groupwise convolution. The corresponding convolution processor unit determines multiplication results by multiplying each data element of a portion of data elements in a convolution data matrix with a corresponding data element in a corresponding groupwise convolution weight matrix. The portion of data elements in the convolution data matrix that are multiplied belong to different channels and different groups. For each specific channel of the different channels, the corresponding convolution processor unit sums together at least some of the multiplication results belonging to the same specific channel to determine a corresponding channel convolution result data element.
    Type: Application
    Filed: April 8, 2020
    Publication date: October 14, 2021
    Inventors: Rakesh Komuravelli, Krishnakumar Narayanan Nair, Abdulkadir Utku Diril, Ehsan Khish Ardestani Zadeh, Yuchen Hao, Martin Schatz, Thomas Mark Ulrich, Olivia Wu, Anup Ramesh Kadkol, Amin Firoozshahian
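A brief NumPy sketch of groupwise convolution partial results under assumed shapes: channels are split into groups, each channel is multiplied elementwise with its own weight matrix, and the per-channel sums are organized by group.

```python
import numpy as np

def groupwise_channel_results(patch, weights, groups):
    # patch, weights: (C, kh, kw); the C channels are split evenly into `groups` groups
    per_channel = np.sum(patch * weights, axis=(1, 2))      # (C,) channel convolution results
    return per_channel.reshape(groups, -1)                  # one row of results per group

C, G = 4, 2
print(groupwise_channel_results(np.ones((C, 3, 3)),
                                np.arange(C * 9, dtype=float).reshape(C, 3, 3), G))
```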
  • Patent number: 11138292
    Abstract: An electronic circuit performs depthwise convolution of an input matrix with a kernel matrix to generate an output matrix. In each of a plurality of rounds of operations, a row of kernel matrix elements is selected for the round of operations, and applied to the input matrix to obtain an intermediate data array corresponding to the selected row of kernel elements. The electronic circuit includes a plurality of subcircuits operable in parallel to generate, in each operation, a set of intermediate data elements in the intermediate data array. Each subcircuit generates a respective intermediate data element that is the sum of a respective row of the input matrix elements weighted by a set of weight elements including the selected row of kernel elements and at least one zero element. The selected row of kernel elements is successively shifted among the set of weight elements in the round of operations.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: October 5, 2021
    Assignee: Facebook, Inc.
    Inventors: Krishnakumar Nair, Abdulkadir Utku Diril, Dheevatsa Mudigere, Ehsan Khish Ardestani Zadeh, Olivia Wu, Yuchen Hao
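A sketch of the row-by-row scheme this abstract outlines, with assumed sizes: in each round one kernel row is embedded in a wider weight vector that is otherwise zero, and shifting that vector across an input row produces one intermediate data element per shift; accumulating rounds over all kernel rows yields the depthwise convolution output.

```python
import numpy as np

def intermediate_row(input_row, kernel_row, out_width):
    kw = len(kernel_row)
    results = []
    for shift in range(out_width):
        # Weight vector: zeros everywhere except the shifted kernel row.
        weights = np.zeros(len(input_row))
        weights[shift:shift + kw] = kernel_row
        results.append(np.dot(input_row, weights))   # one subcircuit's intermediate element
    return np.array(results)

row = np.arange(6, dtype=float)
print(intermediate_row(row, np.array([1.0, 2.0, 1.0]), out_width=4))
```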
  • Publication number: 20210294875
    Abstract: A processor system comprises a hardware channel convolution processor unit and dot product processor unit. The channel convolution processor unit is configured to perform depthwise convolution, including by multiplying each data element of a first group of data elements of a convolution data matrix with a corresponding data element of a second group of data elements of a plurality of depthwise convolution weight matrices and summing together, for each specific channel, multiplication results corresponding to the specific channel to determine one corresponding result data element in a corresponding channel convolution result matrix to calculate a portion of depthwise convolution results.
    Type: Application
    Filed: March 23, 2020
    Publication date: September 23, 2021
    Inventors: Rakesh Komuravelli, Krishnakumar Narayanan Nair, Abdulkadir Utku Diril, Ehsan Khish Ardestani Zadeh, Yuchen Hao, Martin Schatz, Thomas Mark Ulrich, Olivia Wu, Anup Ramesh Kadkol, Amin Firoozshahian
  • Publication number: 20210271451
    Abstract: A processor system comprises two groups of registers and a hardware channel convolution processor unit. The first group of registers is configured to store data elements of channels of a portion of a convolution data matrix. Each register stores at least one data element from each channel. The second group of registers is configured to store data elements of convolution weight matrices including a separate matrix for each channel. Each register stores at least one data element from each matrix. The hardware channel convolution processor unit is configured to multiply each data element in a first and second portion of the first group of registers with a corresponding data element in the second group of registers to determine corresponding multiplication results and sum together the multiplication results for each specific channel to determine two corresponding channel convolution result data elements in a corresponding channel convolution result matrix.
    Type: Application
    Filed: February 28, 2020
    Publication date: September 2, 2021
    Inventors: Krishnakumar Narayanan Nair, Rakesh Komuravelli, Abdulkadir Utku Diril, Ehsan Khish Ardestani Zadeh, Yuchen Hao, Martin Schatz, Thomas Mark Ulrich, Olivia Wu, Anup Ramesh Kadkol, Amin Firoozshahian
  • Publication number: 20210256363
    Abstract: A processor system comprises a first and second group of registers and a hardware channel convolution processor unit. The first group of registers is configured to store data elements of channels of a portion of a convolution data matrix. Each register stores at least one data element from each channel. The second group of registers is configured to store data elements of convolution weight matrices including a separate convolution weight matrix for each channel. Each register stores at least one data element from each convolution weight matrix. The hardware channel convolution processor unit is configured to multiply each data element in the first group of registers with a corresponding data element in the second group of registers and sum together the multiplication results for each specific channel to determine corresponding channel convolution result data elements in a corresponding channel convolution result matrix.
    Type: Application
    Filed: February 18, 2020
    Publication date: August 19, 2021
    Inventors: Krishnakumar Narayanan Nair, Rakesh Komuravelli, Abdulkadir Utku Diril, Ehsan Khish Ardestani Zadeh, Yuchen Hao, Martin Schatz, Thomas Mark Ulrich, Olivia Wu, Anup Ramesh Kadkol, Amin Firoozshahian
  • Publication number: 20210255830
    Abstract: A floating-point number in a first format representation is received. Based on an identification of a floating-point format type of the floating-point number, different components of the first format representation are identified. The different components of the first format representation are placed in corresponding components of a second format representation of the floating-point number, wherein a total number of bits of the second format representation is larger than a total number of bits of the first format representation. At least one of the components of the second format representation is padded with one or more zero bits. The floating-point number in the second format representation is stored in a register. A multiplication using the second format representation of the floating-point number is performed.
    Type: Application
    Filed: February 19, 2020
    Publication date: August 19, 2021
    Inventors: Thomas Mark Ulrich, Abdulkadir Utku Diril, Krishnakumar Narayanan Nair, Zhao Wang, Rakesh Komuravelli
  • Patent number: 11054998
    Abstract: A system comprises a processor and a plurality of memory units. The processor is coupled to each of the plurality of memory units by a plurality of network connections. The processor includes a plurality of processing elements arranged in a two-dimensional array and a corresponding two-dimensional communication network communicatively connecting each of the plurality of processing elements to other processing elements on same axes of the two-dimensional array. Each processing element that is located along a diagonal of the two-dimensional array is configured as a request broadcasting master for a respective group of processing elements located along a same axis of the two-dimensional array.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: July 6, 2021
    Assignee: Facebook, Inc.
    Inventors: Abdulkadir Utku Diril, Olivia Wu, Krishnakumar Narayanan Nair, Aravind Kalaiah, Anup Ramesh Kadkol, Pankaj Kansal
  • Publication number: 20210192359
    Abstract: The disclosed computer-implemented method may include (1) receiving, at a hardware accelerator that supports an ANN, an activation data set that is to undergo a convolution operation via a filter kernel of the ANN, (2) receiving, at the hardware accelerator, an argument indicating that the filter kernel exceeds at least one boundary of the activation data set when slid across a certain position during the convolution operation, (3) determining, based at least in part on the argument, that the hardware accelerator is to generate padding data at the boundary of the activation data set in connection with the certain position of the filter kernel, and then (4) performing, at the hardware accelerator, the convolution operation by processing a portion of the activation data set and the padding data when the filter kernel slides across the certain position. Various other systems and methods are also disclosed.
    Type: Application
    Filed: December 20, 2019
    Publication date: June 24, 2021
    Inventors: Ehsan Khish Ardestani Zadeh, Martin Schatz, Krishnakumar Narayanan Nair, Yuchen Hao, Abdulkadir Utku Diril, Rakesh Komuravelli
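An illustrative sketch of on-the-fly boundary padding during convolution: where the filter kernel would slide past the edge of the activation data, padding data (zeros here, an assumption) is generated so the operation can still be performed at that position.

```python
import numpy as np

def convolve_same(activation, kernel):
    kh, kw = kernel.shape
    pad_h, pad_w = kh // 2, kw // 2
    # Padding data generated at the boundaries of the activation data set.
    padded = np.pad(activation, ((pad_h, pad_h), (pad_w, pad_w)), mode='constant')
    out = np.zeros_like(activation, dtype=float)
    for i in range(activation.shape[0]):
        for j in range(activation.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

print(convolve_same(np.ones((3, 3)), np.ones((3, 3))))  # smaller sums at the padded edges
```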
  • Publication number: 20210181957
    Abstract: A system comprises a processor and a plurality of memory units. The processor is coupled to each of the plurality of memory units by a plurality of network connections. The processor includes a plurality of processing elements arranged in a two-dimensional array and a corresponding two-dimensional communication network communicatively connecting each of the plurality of processing elements to other processing elements on same axes of the two-dimensional array. Each processing element that is located along a diagonal of the two-dimensional array is configured as a request broadcasting master for a respective group of processing elements located along a same axis of the two-dimensional array.
    Type: Application
    Filed: December 12, 2019
    Publication date: June 17, 2021
    Inventors: Abdulkadir Utku Diril, Olivia Wu, Krishnakumar Narayanan Nair, Aravind Kalaiah, Anup Ramesh Kadkol, Pankaj Kansal
  • Publication number: 20210182196
    Abstract: A system comprises a processor coupled to a plurality of memory units. Each of the plurality of memory units includes a request processing unit and a plurality of memory banks. Each request processing unit includes a plurality of decomposition units and a crossbar switch, the crossbar switch communicatively connecting each of the plurality of decomposition units to each of the plurality of memory banks. The processor includes a plurality of processing elements and a communication network communicatively connecting the plurality of processing elements to the plurality of memory units. At least a first processing element of the plurality of processing elements includes a control logic unit and a matrix compute engine. The control logic unit is configured to access the plurality of memory units using a dynamically programmable distribution scheme.
    Type: Application
    Filed: December 17, 2019
    Publication date: June 17, 2021
    Inventors: Olivia Wu, Abdulkadir Utku Diril, Krishnakumar Narayanan Nair, Aravind Kalaiah, Anup Ramesh Kadkol, Pankaj Kansal
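A hedged sketch of a dynamically programmable distribution scheme in the spirit of this abstract: each memory request is mapped to a memory unit and a bank by a scheme that can be swapped at run time. Both schemes shown (fine-grained interleave and blocked) are assumptions for illustration only.

```python
NUM_UNITS, NUM_BANKS = 4, 8

def interleaved(addr):
    # Fine-grained interleave: low address bits pick the unit, the next bits pick the bank.
    return addr % NUM_UNITS, (addr // NUM_UNITS) % NUM_BANKS

def blocked(addr, block=1024):
    # Coarse blocks: each contiguous block of addresses lands on one unit.
    return (addr // block) % NUM_UNITS, (addr // (block * NUM_UNITS)) % NUM_BANKS

distribution_scheme = interleaved     # "dynamically programmable": swap the mapping at run time
print(distribution_scheme(4097))      # (unit, bank) under the current scheme
distribution_scheme = blocked
print(distribution_scheme(4097))
```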