Patents by Inventor Reginald Clifford Young

Reginald Clifford Young has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210125029
    Abstract: Methods, systems, and apparatus for efficiently performing a computation of a convolutional neural network layer. One of the methods includes transforming a X by Y by Z input tensor into a X′ by Y′ by Z′ input tensor, wherein X′ is smaller than or equal to X, Y′ is smaller than or equal to Y, and Z′ is larger than or equal to Z; obtaining one or more modified weight matrices, wherein the modified weight matrices operate on the X′ by Y′ by Z′ input tensor to generate a U′ by V′ by W′ output tensor, and the U′ by V′ by W′ output tensor is a transformed U by V by W output tensor; and processing the X′ by Y′ by Z′ input tensor using the modified weight matrices to generate the U′ by V′ by W′ output tensor.
    Type: Application
    Filed: October 1, 2020
    Publication date: April 29, 2021
    Inventors: Reginald Clifford Young, Jonathan Ross
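    A minimal sketch of the reshaping step this abstract describes, assuming a space-to-depth style transformation; the function name, block size of 2, and NumPy layout below are illustrative assumptions, not details taken from the filing:

        import numpy as np

        def space_to_depth(x, block=2):
            """Reshape an X-by-Y-by-Z tensor into an (X/block)-by-(Y/block)-by-(Z*block*block)
            tensor: spatial extent shrinks while depth grows, as in the transformation the
            abstract describes. The block size and memory layout are illustrative choices."""
            X, Y, Z = x.shape
            assert X % block == 0 and Y % block == 0
            x = x.reshape(X // block, block, Y // block, block, Z)
            # Move the two intra-block axes next to the depth axis, then fold them into it.
            x = x.transpose(0, 2, 1, 3, 4)
            return x.reshape(X // block, Y // block, Z * block * block)

        # Example: an 8x8x3 input becomes a 4x4x12 input.
        out = space_to_depth(np.random.rand(8, 8, 3), block=2)
        print(out.shape)  # (4, 4, 12)

    The modified weight matrices would then be built so that operating on this deeper, spatially smaller tensor produces the correspondingly transformed U′ by V′ by W′ output.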
  • Patent number: 10909447
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium. In one aspect, a method includes the actions of receiving a request to perform computations for a neural network on a hardware circuit having a matrix computation unit, the request specifying a transpose operation to be performed on a first neural network matrix; and generating instructions that when executed by the hardware circuit cause the hardware circuit to transpose the first neural network matrix by performing first operations, wherein the first operations include repeatedly performing the following second operations: for a current subdivision of the first neural network matrix that divides the first neural network matrix into one or more current submatrices, updating the first neural network matrix by swapping an upper right quadrant and a lower left quadrant of each current submatrix, and subdividing each current submatrix into respective new submatrices to update the current subdivision.
    Type: Grant
    Filed: March 9, 2017
    Date of Patent: February 2, 2021
    Assignee: Google LLC
    Inventors: Reginald Clifford Young, Geoffrey Irving
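    A software-only sketch of the quadrant-swap iteration described above; the patent generates instructions for a hardware matrix computation unit, so the NumPy function below only illustrates the underlying permutation, under the assumption of a 2^k by 2^k matrix:

        import numpy as np

        def quadrant_swap_transpose(m):
            """Transpose a power-of-two square matrix by repeatedly swapping the upper-right
            and lower-left quadrants of progressively smaller submatrices, mirroring the
            iteration the abstract describes."""
            m = m.copy()
            n = m.shape[0]
            assert m.shape == (n, n) and (n & (n - 1)) == 0, "expects a 2^k x 2^k matrix"
            size = n  # side length of each current submatrix
            while size >= 2:
                half = size // 2
                for i in range(0, n, size):       # row offset of each current submatrix
                    for j in range(0, n, size):   # column offset of each current submatrix
                        upper_right = m[i:i + half, j + half:j + size].copy()
                        m[i:i + half, j + half:j + size] = m[i + half:i + size, j:j + half]
                        m[i + half:i + size, j:j + half] = upper_right
                size = half  # subdivide: every submatrix becomes four smaller submatrices
            return m

        a = np.arange(16).reshape(4, 4)
        assert np.array_equal(quadrant_swap_transpose(a), a.T)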
  • Publication number: 20210019618
    Abstract: A circuit for performing neural network computations for a neural network comprising a plurality of neural network layers, the circuit comprising: a matrix computation unit configured to, for each of the plurality of neural network layers: receive a plurality of weight inputs and a plurality of activation inputs for the neural network layer, and generate a plurality of accumulated values based on the plurality of weight inputs and the plurality of activation inputs; and a vector computation unit communicatively coupled to the matrix computation unit and configured to, for each of the plurality of neural network layers: apply an activation function to each accumulated value generated by the matrix computation unit to generate a plurality of activated values for the neural network layer.
    Type: Application
    Filed: June 29, 2020
    Publication date: January 21, 2021
    Inventors: Jonathan Ross, Norman Paul Jouppi, Andrew Everett Phelps, Reginald Clifford Young, Thomas Norrie, Gregory Michael Thorson, Dan Luu
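    A toy software model of the dataflow in this claim: a matrix unit produces accumulated values and a vector unit applies an activation function to each of them. The class names, shapes, and choice of ReLU are illustrative assumptions; the claim itself covers a hardware circuit, not software:

        import numpy as np

        def relu(x):
            return np.maximum(x, 0.0)  # one example activation function

        class MatrixComputationUnit:
            """Stand-in for the matrix unit: combines weight inputs and activation inputs
            into accumulated values (the hardware would use multiply-accumulate cells)."""
            def compute(self, weights, activations):
                return weights @ activations

        class VectorComputationUnit:
            """Stand-in for the vector unit: applies an activation function to each
            accumulated value produced by the matrix unit."""
            def __init__(self, activation_fn=relu):
                self.activation_fn = activation_fn
            def compute(self, accumulated_values):
                return self.activation_fn(accumulated_values)

        # Push an input through two layers of the modelled pipeline.
        matrix_unit, vector_unit = MatrixComputationUnit(), VectorComputationUnit()
        layer_weights = [np.random.randn(16, 8), np.random.randn(4, 16)]
        x = np.random.randn(8)
        for w in layer_weights:
            x = vector_unit.compute(matrix_unit.compute(w, x))
        print(x.shape)  # (4,)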
  • Patent number: 10896367
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for depth concatenation using a matrix computation unit. One of the methods includes: receiving a request to process network inputs to a neural network using an integrated circuit, the neural network comprising a depth concatenation neural network layer; and generating instructions that, when executed by the integrated circuit, cause the integrated circuit to perform operations comprising: for each spatial location in a first input tensor to the depth concatenation layer and a second input tensor to the depth concatenation layer: multiplying, using the matrix computation unit, a second depth vector for the spatial location by a shift weight matrix for the depth concatenation layer to generate a shifted second depth vector; and adding the shifted second depth vector and a first input depth vector for the spatial location to generate a concatenated depth vector.
    Type: Grant
    Filed: June 15, 2017
    Date of Patent: January 19, 2021
    Assignee: Google LLC
    Inventors: William John Gulland, Reginald Clifford Young
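    A small NumPy sketch of concatenation by a matrix multiply plus an add, as the abstract describes; the names and shapes are illustrative, and zero-padding the first depth vector in software stands in for the fixed-width registers of the hardware:

        import numpy as np

        def shift_weight_matrix(d1, d2):
            """(d1 + d2) x d2 matrix that moves a depth-d2 vector into the last d2 slots
            of a depth-(d1 + d2) vector, so the matrix unit can 'shift' the second
            tensor's depth vector past the first tensor's depth."""
            s = np.zeros((d1 + d2, d2))
            s[d1:, :] = np.eye(d2)
            return s

        def depth_concat(v1, v2):
            """Concatenate two depth vectors using only a matrix multiply and an add.
            Zero-padding v1 here stands in for the wider fixed-width register it would
            already occupy in hardware."""
            d1, d2 = len(v1), len(v2)
            shifted_v2 = shift_weight_matrix(d1, d2) @ v2
            return np.concatenate([v1, np.zeros(d2)]) + shifted_v2

        print(depth_concat(np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0])))  # [1. 2. 3. 4. 5.]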
  • Publication number: 20200334536
    Abstract: Methods for receiving a request to process, on a hardware circuit, a neural network comprising a first convolutional neural network layer having a stride greater than one, and in response, generating instructions that cause the hardware circuit to, during processing of an input tensor, generate a layer output tensor equivalent to an output of the first convolutional neural network layer by processing the input tensor using a second convolutional neural network layer having a stride equal to one but that is otherwise equivalent to the first convolutional neural network layer to generate a first tensor, zeroing out elements of the first tensor that would not have been generated if the second convolutional neural network layer had the stride of the first convolutional neural network layer to generate a second tensor, and performing max pooling on the second tensor to generate the layer output tensor.
    Type: Application
    Filed: July 6, 2020
    Publication date: October 22, 2020
    Inventors: Reginald Clifford Young, William John Gulland
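    A NumPy sketch of the stride-reduction trick described above, under the simplifying assumptions of a single channel, 'valid' padding, and non-negative values so that max pooling recovers exactly the kept elements:

        import numpy as np

        def conv2d_stride1(x, k):
            """Plain stride-1 'valid' cross-correlation, standing in for the hardware's
            stride-one convolution."""
            kh, kw = k.shape
            out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
            return out

        def strided_conv_via_maxpool(x, k, stride):
            """Run the stride-1 convolution, zero every element the strided layer would
            not have produced, then max-pool over stride x stride windows so only the
            kept elements survive (assumes non-negative values, as noted above)."""
            first = conv2d_stride1(x, k)
            mask = np.zeros_like(first)
            mask[::stride, ::stride] = 1.0
            second = first * mask  # zero out the off-grid elements
            rows = (second.shape[0] + stride - 1) // stride
            cols = (second.shape[1] + stride - 1) // stride
            out = np.zeros((rows, cols))
            for i in range(rows):
                for j in range(cols):
                    out[i, j] = second[i * stride:(i + 1) * stride,
                                       j * stride:(j + 1) * stride].max()
            return out

        x, k = np.abs(np.random.randn(6, 6)), np.abs(np.random.randn(3, 3))
        reference = conv2d_stride1(x, k)[::2, ::2]  # direct stride-2 convolution
        assert np.allclose(strided_conv_via_maxpool(x, k, 2), reference)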
  • Patent number: 10810483
    Abstract: Methods, systems, and apparatus for efficiently performing a computation of a convolutional neural network layer. One of the methods includes transforming a X by Y by Z input tensor into a X′ by Y′ by Z′ input tensor; obtaining one or more modified weight matrices, wherein the modified weight matrices operate on the X′ by Y′ by Z′ input tensor to generate a U′ by V′ by W′ output tensor, and the U′ by V′ by W′ output tensor comprises a transformed U by V by W output tensor; and processing the X′ by Y′ by Z′ input tensor using the modified weight matrices to generate the U′ by V′ by W′ output tensor, wherein the U′ by V′ by W′ output tensor comprises the U by V by W output tensor.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: October 20, 2020
    Assignee: Google LLC
    Inventors: Reginald Clifford Young, Jonathan Ross
  • Patent number: 10733505
    Abstract: Methods for receiving a request to process, on a hardware circuit, a neural network comprising a first convolutional neural network layer having a stride greater than one, and in response, generating instructions that cause the hardware circuit to, during processing of an input tensor, generate a layer output tensor equivalent to an output of the first convolutional neural network layer by processing the input tensor using a second convolutional neural network layer having a stride equal to one but that is otherwise equivalent to the first convolutional neural network layer to generate a first tensor, zeroing out elements of the first tensor that would not have been generated if the second convolutional neural network layer had the stride of the first convolutional neural network layer to generate a second tensor, and performing max pooling on the second tensor to generate the layer output tensor.
    Type: Grant
    Filed: November 10, 2016
    Date of Patent: August 4, 2020
    Assignee: Google LLC
    Inventors: Reginald Clifford Young, William John Gulland
  • Publication number: 20200218981
    Abstract: A circuit for performing neural network computations for a neural network comprising a plurality of neural network layers, the circuit comprising: a matrix computation unit configured to, for each of the plurality of neural network layers: receive a plurality of weight inputs and a plurality of activation inputs for the neural network layer, and generate a plurality of accumulated values based on the plurality of weight inputs and the plurality of activation inputs; and a vector computation unit communicatively coupled to the matrix computation unit and configured to, for each of the plurality of neural network layers: apply an activation function to each accumulated value generated by the matrix computation unit to generate a plurality of activated values for the neural network layer.
    Type: Application
    Filed: March 19, 2020
    Publication date: July 9, 2020
    Inventors: Jonathan Ross, Norman Paul Jouppi, Andrew Everett Phelps, Reginald Clifford Young, Thomas Norrie, Gregory Michael Thorson, Dan Luu
  • Patent number: 10706348
    Abstract: Methods, systems, and apparatus for efficiently performing a computation of a convolutional neural network layer. One of the methods includes transforming a X by Y by Z input tensor into a X′ by Y′ by Z′ input tensor, wherein X′ is smaller than or equal to X, Y′ is smaller than or equal to Y, and Z′ is larger than or equal to Z; obtaining one or more modified weight matrices, wherein the modified weight matrices operate on the X′ by Y′ by Z′ input tensor to generate a U′ by V′ by W′ output tensor, and the U′ by V′ by W′ output tensor comprises a transformed U by V by W output tensor; and processing the X′ by Y′ by Z′ input tensor using the modified weight matrices to generate the U′ by V′ by W′ output tensor.
    Type: Grant
    Filed: July 13, 2016
    Date of Patent: July 7, 2020
    Assignee: Google LLC
    Inventors: Reginald Clifford Young, Jonathan Ross
  • Patent number: 10699188
    Abstract: A circuit for performing neural network computations for a neural network comprising a plurality of neural network layers, the circuit comprising: a matrix computation unit configured to, for each of the plurality of neural network layers: receive a plurality of weight inputs and a plurality of activation inputs for the neural network layer, and generate a plurality of accumulated values based on the plurality of weight inputs and the plurality of activation inputs; and a vector computation unit communicatively coupled to the matrix computation unit and configured to, for each of the plurality of neural network layers: apply an activation function to each accumulated value generated by the matrix computation unit to generate a plurality of activated values for the neural network layer.
    Type: Grant
    Filed: August 25, 2017
    Date of Patent: June 30, 2020
    Assignee: Google LLC
    Inventors: Jonathan Ross, Norman Paul Jouppi, Andrew Everett Phelps, Reginald Clifford Young, Thomas Norrie, Gregory Michael Thorson, Dan Luu
  • Patent number: 10699182
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for depth concatenation using a matrix computation unit. One of the methods includes: receiving a request to process network inputs to a neural network using an integrated circuit, the neural network comprising a depth concatenation neural network layer; and generating instructions that, when executed by the integrated circuit, cause the integrated circuit to perform operations comprising: for each spatial location in a first input tensor to the depth concatenation layer and a second input tensor to the depth concatenation layer: multiplying, using the matrix computation unit, a second depth vector for the spatial location by a shift weight matrix for the depth concatenation layer to generate a shifted second depth vector; and adding the shifted second depth vector and a first input depth vector for the spatial location to generate a concatenated depth vector.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: June 30, 2020
    Assignee: Google LLC
    Inventors: William John Gulland, Reginald Clifford Young
  • Patent number: 10679127
    Abstract: Methods and systems for receiving a request to implement a neural network comprising an average pooling layer on a hardware circuit, and in response, generating instructions that when executed by the hardware circuit, cause the hardware circuit to, during processing of a network input by the neural network, generate a layer output tensor that is equivalent to an output of the average pooling neural network layer by performing a convolution of an input tensor to the average pooling neural network layer and a kernel with a size equal to a window of the average pooling neural network layer and composed of elements that are each an identity matrix to generate a first tensor, and performing operations to cause each element of the first tensor to be divided by a number of elements in the window of the average pooling neural network layer to generate an initial output tensor.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: June 9, 2020
    Assignee: Google LLC
    Inventors: Reginald Clifford Young, William John Gulland
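    A NumPy sketch of average pooling expressed as a convolution with an identity-matrix kernel followed by a division, as described above; stride 1 and 'valid' edges are simplifying assumptions of this sketch:

        import numpy as np

        def average_pool_via_conv(x, window):
            """Convolve with a window-sized kernel whose elements are identity matrices
            over the depth dimension (so the convolution sums each window per channel),
            then divide by the number of elements in the window."""
            H, W, C = x.shape
            kernel = np.zeros((window, window, C, C))
            kernel[:, :, :, :] = np.eye(C)  # every spatial tap is a C x C identity
            out = np.zeros((H - window + 1, W - window + 1, C))
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    patch = x[i:i + window, j:j + window, :]
                    out[i, j] = np.einsum('abc,abcd->d', patch, kernel)
            return out / (window * window)

        x = np.random.rand(5, 5, 3)
        pooled = average_pool_via_conv(x, 2)
        # Cross-check one location against a direct window average.
        assert np.allclose(pooled[0, 0], x[0:2, 0:2, :].mean(axis=(0, 1)))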
  • Publication number: 20200125922
    Abstract: Methods, systems, and apparatus for efficiently performing a computation of a convolutional neural network layer. One of the methods includes transforming a X by Y by Z input tensor into a X′ by Y′ by Z′ input tensor, wherein X′ is smaller than or equal to X, Y′ is smaller than or equal to Y, and Z′ is larger than or equal to Z; obtaining one or more modified weight matrices, wherein the modified weight matrices operate on the X′ by Y′ by Z′ input tensor to generate a U′ by V′ by W′ output tensor, and the U′ by V′ by W′ output tensor comprises a transformed U by V by W output tensor, wherein U′ is smaller than or equal to U, V′ is smaller than or equal to V, and W′ is larger than or equal to W; and processing the X′ by Y′ by Z′ input tensor using the modified weight matrices to generate the U′ by V′ by W′ output tensor, wherein the U′ by V′ by W′ output tensor comprises the U by V by W output tensor.
    Type: Application
    Filed: December 17, 2019
    Publication date: April 23, 2020
    Inventors: Reginald Clifford Young, Jonathan Ross
  • Publication number: 20200057942
    Abstract: A circuit for performing neural network computations for a neural network comprising a plurality of neural network layers, the circuit comprising: a matrix computation unit configured to, for each of the plurality of neural network layers: receive a plurality of weight inputs and a plurality of activation inputs for the neural network layer, and generate a plurality of accumulated values based on the plurality of weight inputs and the plurality of activation inputs; and a vector computation unit communicatively coupled to the matrix computation unit and configured to, for each of the plurality of neural network layers: apply an activation function to each accumulated value generated by the matrix computation unit to generate a plurality of activated values for the neural network layer.
    Type: Application
    Filed: October 25, 2019
    Publication date: February 20, 2020
    Inventors: Jonathan Ross, Norman Paul Jouppi, Andrew Everett Phelps, Reginald Clifford Young, Thomas Norrie, Gregory Michael Thorson, Dan Luu
  • Publication number: 20190354834
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for depth concatenation using a matrix computation unit. One of the methods includes: receiving a request to process network inputs to a neural network using an integrated circuit, the neural network comprising a depth concatenation neural network layer; and generating instructions that, when executed by the integrated circuit, cause the integrated circuit to perform operations comprising: for each spatial location in a first input tensor to the depth concatenation layer and a second input tensor to the depth concatenation layer: multiplying, using the matrix computation unit, a second depth vector for the spatial location by a shift weight matrix for the depth concatenation layer to generate a shifted second depth vector; and adding the shifted second depth vector and a first input depth vector for the spatial location to generate a concatenated depth vector.
    Type: Application
    Filed: August 5, 2019
    Publication date: November 21, 2019
    Inventors: William John Gulland, Reginald Clifford Young
  • Publication number: 20190354863
    Abstract: Methods and systems for receiving a request to implement a neural network comprising an average pooling layer on a hardware circuit, and in response, generating instructions that when executed by the hardware circuit, cause the hardware circuit to, during processing of a network input by the neural network, generate a layer output tensor that is equivalent to an output of the average pooling neural network layer by performing a convolution of an input tensor to the average pooling neural network layer and a kernel with a size equal to a window of the average pooling neural network layer and composed of elements that are each an identity matrix to generate a first tensor, and performing operations to cause each element of the first tensor to be divided by a number of elements in the window of the average pooling neural network layer to generate an initial output tensor.
    Type: Application
    Filed: August 5, 2019
    Publication date: November 21, 2019
    Inventors: Reginald Clifford Young, William John Gulland
  • Publication number: 20190354862
    Abstract: A circuit for performing neural network computations for a neural network comprising a plurality of neural network layers, the circuit comprising: a matrix computation unit configured to, for each of the plurality of neural network layers: receive a plurality of weight inputs and a plurality of activation inputs for the neural network layer, and generate a plurality of accumulated values based on the plurality of weight inputs and the plurality of activation inputs; and a vector computation unit communicatively coupled to the matrix computation unit and configured to, for each of the plurality of neural network layers: apply an activation function to each accumulated value generated by the matrix computation unit to generate a plurality of activated values for the neural network layer.
    Type: Application
    Filed: August 1, 2019
    Publication date: November 21, 2019
    Inventors: Jonathan Ross, Norman Paul Jouppi, Andrew Everett Phelps, Reginald Clifford Young, Thomas Norrie, Gregory Michael Thorson, Dan Luu
  • Patent number: 10373049
    Abstract: Systems, methods, and apparatus, including computer programs encoded on a computer storage medium for processing a network input through a neural network having one or more initial neural network layers followed by a softmax output layer. In one aspect, the methods include obtaining a layer output generated by the one or more initial neural network layers and processing the layer output through the softmax output layer to generate a neural network output. Processing the layer output through the softmax output layer includes determining, for each possible output value, a number of occurrences in the layer output values; for each possible output value occurring in the layer output values, determining a respective exponentiation measure; determining a normalization factor for the layer output by combining the exponentiation measures in accordance with the number of occurrences of the possible output values; and determining, for each of layer output values, a softmax probability value.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: August 6, 2019
    Assignee: Google LLC
    Inventor: Reginald Clifford Young
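    A Python sketch of the occurrence-counting softmax described above; it assumes the layer output is quantized to a small set of possible values, so each distinct value is exponentiated only once:

        import numpy as np
        from collections import Counter

        def softmax_by_value_counts(layer_output):
            """Exponentiate each *distinct* possible output value once, weight it by its
            number of occurrences to get the normalization factor, then assign each
            output value its softmax probability."""
            counts = Counter(layer_output)                   # occurrences per distinct value
            exps = {v: np.exp(v) for v in counts}            # one exponentiation per distinct value
            normalization = sum(exps[v] * n for v, n in counts.items())
            return np.array([exps[v] / normalization for v in layer_output])

        quantized = [3, 1, 3, 3, 0, 1, 3, 1]                      # few distinct values, many repeats
        reference = np.exp(quantized) / np.exp(quantized).sum()   # ordinary softmax
        assert np.allclose(softmax_by_value_counts(quantized), reference)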
  • Publication number: 20190122107
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating a respective neural network output for each of a plurality of inputs, the method comprising, for each of the neural network layers: receiving a plurality of inputs to be processed at the neural network layer; forming one or more batches of inputs from the plurality of inputs, each batch having a number of inputs up to the respective batch size for the neural network layer; selecting a number of the one or more batches of inputs to process, where a count of the inputs in the number of the one or more batches is greater than or equal to the respective associated batch size of a subsequent layer in the sequence; and processing the number of the one or more batches of inputs to generate the respective neural network layer output.
    Type: Application
    Filed: September 24, 2018
    Publication date: April 25, 2019
    Inventor: Reginald Clifford Young
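    A Python sketch of the per-layer batching rule in this abstract; the batch sizes, function names, and single-pass scheduling below are illustrative assumptions:

        def process_layer(layer_index, batch):
            """Stand-in for running one layer over a batch of inputs; on the hardware,
            the layer's weights are reused across the whole batch."""
            return [f"layer{layer_index}({x})" for x in batch]

        def run_layer(layer_index, pending, batch_size, next_batch_size):
            """Form batches of at most `batch_size` from the pending inputs, then process
            just enough batches that the count of processed inputs is at least
            `next_batch_size` (the subsequent layer's batch size). Returns the outputs
            ready for the next layer and the inputs still waiting."""
            batches = [pending[k:k + batch_size] for k in range(0, len(pending), batch_size)]
            outputs, taken = [], 0
            for batch in batches:
                if len(outputs) >= next_batch_size:
                    break
                outputs.extend(process_layer(layer_index, batch))
                taken += len(batch)
            return outputs, pending[taken:]

        layer_batch_sizes = [2, 4, 1]  # illustrative per-layer batch sizes
        ready, waiting = run_layer(0, [f"in{j}" for j in range(6)],
                                   layer_batch_sizes[0], layer_batch_sizes[1])
        print(len(ready), len(waiting))  # 4 inputs processed now (two batches of 2), 2 wait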
  • Publication number: 20180307970
    Abstract: Systems, methods, and apparatus, including computer programs encoded on a computer storage medium for processing a network input through a neural network having one or more initial neural network layers followed by a softmax output layer. In one aspect, the methods include obtaining a layer output generated by the one or more initial neural network layers and processing the layer output through the softmax output layer to generate a neural network output. Processing the layer output through the softmax output layer includes determining, for each possible output value, a number of occurrences in the layer output values; for each possible output value occurring in the layer output values, determining a respective exponentiation measure; determining a normalization factor for the layer output by combining the exponentiation measures in accordance with the number of occurrences of the possible output values; and determining, for each of layer output values, a softmax probability value.
    Type: Application
    Filed: June 25, 2018
    Publication date: October 25, 2018
    Inventor: Reginald Clifford Young