Patents by Inventor Ravi Narayanaswami

Ravi Narayanaswami has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180341479
    Abstract: Methods, systems, and apparatus, including an apparatus for accessing an N-dimensional tensor, the apparatus including, for each dimension of the N-dimensional tensor, a partial address offset value element that stores a partial address offset value for the dimension based at least on an initial value for the dimension, a step value for the dimension, and a number of iterations of a loop for the dimension. The apparatus includes a hardware adder and a processor. The processor obtains an instruction to access a particular element of the N-dimensional tensor. The N-dimensional tensor has multiple elements arranged across each of the N dimensions, where N is an integer that is equal to or greater than one. The processor determines, using the partial address offset value elements and the hardware adder, an address of the particular element and outputs data indicating the determined address for accessing the particular element of the N-dimensional tensor.
    Type: Application
    Filed: February 23, 2018
    Publication date: November 29, 2018
    Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
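    Illustrative note: the address arithmetic described in the abstract above amounts to strided addressing, with one partial offset per dimension summed by an adder. The sketch below is a software analogue under that assumption; the function and variable names are illustrative, not taken from the patent.

      # Hypothetical sketch of per-dimension partial address offsets combined by an adder.
      def element_address(base, indices, steps):
          """Return base + the sum of one partial offset (index * step) per dimension."""
          partial_offsets = [i * s for i, s in zip(indices, steps)]  # one value per dimension
          return base + sum(partial_offsets)                         # the hardware adder's role

      # Example: a 2 x 3 x 4 tensor stored contiguously; address of element [1][2][3].
      steps = [3 * 4, 4, 1]                          # row-major step values (strides)
      print(element_address(0, [1, 2, 3], steps))    # -> 23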
  • Patent number: 10108538
    Abstract: Methods, systems, and apparatus, including an apparatus for accessing data. In some implementations, an apparatus includes address offset value elements that are each configured to store an address offset value. For each address offset value element, the apparatus can include address computation elements that each store a value used to determine the address offset value. One or more processors are configured to receive a program for performing computations using tensor elements of a tensor. The processor(s) can identify, in the program, a prologue or epilogue loop having a corresponding data array for storing values of the prologue or epilogue loop and populate, for a first address offset value element that corresponds to the prologue or epilogue loop, the address computation elements for the first address offset value element with respective values based at least on a number of iterations of the prologue or epilogue loop.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: October 23, 2018
    Assignee: Google LLC
    Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
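    Illustrative note: the prologue/epilogue handling above can be pictured as a loop split, where the leftover iterations get their own loop and their own starting address offset derived from the main loop's iteration count. The sketch below is an assumed software analogue; the unrolling scheme and names are not from the patent.

      # Hypothetical loop split: the epilogue loop's first address offset is derived
      # from the number of iterations the main (unrolled) loop performs.
      def split_loop(total_iters, unroll):
          main_iters = (total_iters // unroll) * unroll
          epilogue_iters = total_iters - main_iters
          epilogue_start_offset = main_iters    # epilogue picks up where the main loop stopped
          return main_iters, epilogue_iters, epilogue_start_offset

      print(split_loop(total_iters=10, unroll=4))   # -> (8, 2, 8)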
  • Publication number: 20180197068
    Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
    Type: Application
    Filed: November 22, 2017
    Publication date: July 12, 2018
    Inventors: Ravi Narayanaswami, Dong Hyuk Woo, Olivier Temam, Harshit Khaitan
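    Illustrative note: the idea of a loop nest whose structure is set by the instruction's data values can be sketched in software as dispatch on a layer type. The example below is an assumption for illustration only; the instruction fields and layer types shown are not taken from the patent.

      # Hypothetical instruction whose data values (including the layer type) shape the loop nest.
      def run_instruction(instr, x, w, b):
          if instr["layer_type"] == "fully_connected":
              out = [0.0] * instr["out_size"]
              for o in range(instr["out_size"]):        # outer loop over outputs
                  acc = b[o]
                  for i in range(instr["in_size"]):     # inner loop over inputs
                      acc += x[i] * w[o][i]
                  out[o] = acc
              return out
          if instr["layer_type"] == "elementwise_add":
              return [x[i] + b[i] for i in range(instr["in_size"])]  # single-loop nest
          raise ValueError("unknown layer type")

      instr = {"layer_type": "fully_connected", "in_size": 2, "out_size": 2}
      print(run_instruction(instr, [1.0, 2.0], [[1.0, 0.0], [0.5, 0.5]], [0.0, 0.0]))  # -> [1.0, 1.5]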
  • Publication number: 20180121786
    Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
    Type: Application
    Filed: October 27, 2016
    Publication date: May 3, 2018
    Inventors: Ravi Narayanaswami, Dong Hyuk Woo, Olivier Temam, Harshit Khaitan
  • Publication number: 20180121196
    Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
    Type: Application
    Filed: October 27, 2016
    Publication date: May 3, 2018
    Inventors: Olivier Temam, Ravi Narayanaswami, Harshit Khaitan, Dong Hyuk Woo
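    Illustrative note: the data flow in this computing unit can be modelled as a traversal unit issuing addresses into the activation bank, the addressed activation being driven onto a shared data bus, and a MAC cell multiplying it by a parameter from the second bank and accumulating the product. The model below is an assumed software analogue, not the patent's hardware.

      activation_bank = [1.0, 2.0, 3.0]    # first memory bank: input activations
      parameter_bank  = [0.5, 0.25, 0.1]   # second memory bank: parameters

      accumulator = 0.0
      for addr in range(len(activation_bank)):             # traversal unit issuing addresses
          data_bus = activation_bank[addr]                  # activation driven onto the data bus
          accumulator += data_bus * parameter_bank[addr]    # multiply-accumulate (MAC) operation
      print(accumulator)                                    # -> 1.3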
  • Publication number: 20180121377
    Abstract: A computer-implemented method includes receiving, by a computing device, input activations and determining, by a controller of the computing device, whether each of the input activations has either a zero value or a non-zero value. The method further includes storing, in a memory bank of the computing device, at least one of the input activations. Storing the at least one input activation includes generating an index comprising one or more memory address locations that have input activation values that are non-zero values. The method still further includes providing, by the controller and from the memory bank, at least one input activation onto a data bus that is accessible by one or more units of a computational array. The activations are provided, at least in part, from a memory address location associated with the index.
    Type: Application
    Filed: October 27, 2016
    Publication date: May 3, 2018
    Inventors: Dong Hyuk Woo, Ravi Narayanaswami
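    Illustrative note: the zero-skipping scheme above stores the activations, builds an index of the memory addresses holding non-zero values, and supplies only those activations to the computational array. A minimal software sketch of that behaviour, with assumed names, is shown below.

      activations = [0.0, 3.0, 0.0, 0.0, 5.0]

      memory_bank = list(activations)                                           # store input activations
      nonzero_index = [addr for addr, v in enumerate(memory_bank) if v != 0.0]  # addresses of non-zero values

      for addr in nonzero_index:               # controller provides activations from indexed addresses
          value_on_bus = memory_bank[addr]     # placed on the data bus for the computational array
          print(addr, value_on_bus)            # -> 1 3.0, then 4 5.0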
  • Patent number: 9959498
    Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: May 1, 2018
    Inventors: Ravi Narayanaswami, Dong Hyuk Woo, Olivier Temam, Harshit Khaitan
  • Patent number: 9946539
    Abstract: Methods, systems, and apparatus, including an apparatus for accessing an N-dimensional tensor, the apparatus including, for each dimension of the N-dimensional tensor, a partial address offset value element that stores a partial address offset value for the dimension based at least on an initial value for the dimension, a step value for the dimension, and a number of iterations of a loop for the dimension. The apparatus includes a hardware adder and a processor. The processor obtains an instruction to access a particular element of the N-dimensional tensor. The N-dimensional tensor has multiple elements arranged across each of the N dimensions, where N is an integer that is equal to or greater than one. The processor determines, using the partial address offset value elements and the hardware adder, an address of the particular element and outputs data indicating the determined address for accessing the particular element of the N-dimensional tensor.
    Type: Grant
    Filed: May 23, 2017
    Date of Patent: April 17, 2018
    Assignee: Google LLC
    Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
  • Patent number: 9928460
    Abstract: A three-dimensional neural network accelerator that includes a first neural network accelerator tile that includes a first transmission coil, and a second neural network accelerator tile that includes a second transmission coil, wherein the first neural network accelerator tile is adjacent to and aligned vertically with the second neural network accelerator tile, and wherein the first transmission coil is configured to wirelessly communicate with the second transmission coil via inductive coupling.
    Type: Grant
    Filed: June 16, 2017
    Date of Patent: March 27, 2018
    Assignee: Google LLC
    Inventors: Andreas Georg Nowatzyk, Olivier Temam, Ravi Narayanaswami, Uday Kumar Dasari
  • Publication number: 20180060276
    Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements to a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix fetched by a second group of sparse element access units; and transform the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix to generate the output dense matrix that includes the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix.
    Type: Application
    Filed: September 5, 2017
    Publication date: March 1, 2018
    Inventors: Ravi Narayanaswami, Rahul Nagarajan, Dong Hyuk Woo, Christopher Daniel Leary
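    Illustrative note: the flow described above can be read as two gather steps, one per group of sparse element access units, followed by a step that combines the fetched elements into one output dense matrix. The sketch below is a software analogue under that reading; the data layout and names are assumptions.

      def gather(rows, indices):
          """One group of access units: fetch the requested sparse elements (rows)."""
          return [rows[i] for i in indices]

      first_matrix_rows  = {0: [1, 1], 2: [2, 2]}    # sparse elements of the first dense matrix
      second_matrix_rows = {1: [3, 3], 4: [4, 4]}    # sparse elements of the second dense matrix

      fetched_first  = gather(first_matrix_rows,  [0, 2])   # first group of access units
      fetched_second = gather(second_matrix_rows, [1, 4])   # second group of access units

      output_dense_matrix = fetched_first + fetched_second  # combine into the output dense matrix
      print(output_dense_matrix)                            # -> [[1, 1], [2, 2], [3, 3], [4, 4]]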
  • Patent number: 9898441
    Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements into a dense matrix. The system includes a data fetch unit that includes a plurality of processors, the data fetch unit configured to determine, based on identifications of the subset of the particular sparse elements, a processor designation for fetching the subset of the particular sparse elements. The system includes a concatenation unit configured to generate an output dense matrix based on a transformation that is applied to the sparse elements fetched by the data fetch unit.
    Type: Grant
    Filed: February 5, 2016
    Date of Patent: February 20, 2018
    Assignee: Google LLC
    Inventors: Ravi Narayanaswami, Rahul Nagarajan, Dong Hyuk Woo, Christopher Daniel Leary
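    Illustrative note: the data fetch unit above assigns each requested sparse element to a processor for fetching, and the concatenation unit assembles the fetched elements into the output dense matrix. The sketch below models that with a simple modulo assignment, which is an assumption for illustration rather than the patent's designation rule.

      NUM_PROCESSORS = 2
      storage = {7: [7, 7], 8: [8, 8], 9: [9, 9]}    # sparse elements keyed by identifier
      requested_ids = [7, 8, 9]

      # Data fetch unit: determine a processor designation for each identifier.
      per_processor = {p: [] for p in range(NUM_PROCESSORS)}
      for sid in requested_ids:
          per_processor[sid % NUM_PROCESSORS].append(sid)

      # Each processor fetches its share; the concatenation unit merges results in request order.
      fetched = {sid: storage[sid] for ids in per_processor.values() for sid in ids}
      output_dense_matrix = [fetched[sid] for sid in requested_ids]
      print(output_dense_matrix)                     # -> [[7, 7], [8, 8], [9, 9]]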
  • Patent number: 9880976
    Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements into a dense matrix. The system includes a data fetch unit that includes a plurality of processors, the data fetch unit configured to determine, based on identifications of the subset of the particular sparse elements, a processor designation for fetching the subset of the particular sparse elements. The system includes a concatenation unit configured to generate an output dense matrix based on a transformation that is applied to the sparse elements fetched by the data fetch unit.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: January 30, 2018
    Assignee: Google LLC
    Inventors: Ravi Narayanaswami, Rahul Nagarajan, Dong Hyuk Woo, Christopher Daniel Leary
  • Patent number: 9846837
    Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: December 19, 2017
    Assignee: Google Inc.
    Inventors: Ravi Narayanaswami, Dong Hyuk Woo, Olivier Temam, Harshit Khaitan
  • Patent number: 9836691
    Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
    Type: Grant
    Filed: March 10, 2017
    Date of Patent: December 5, 2017
    Assignee: Google Inc.
    Inventors: Ravi Narayanaswami, Dong Hyuk Woo, Olivier Temam, Harshit Khaitan
  • Patent number: 9818059
    Abstract: A computer-implemented method includes receiving, by a computing device, input activations and determining, by a controller of the computing device, whether each of the input activations has either a zero value or a non-zero value. The method further includes storing, in a memory bank of the computing device, at least one of the input activations. Storing the at least one input activation includes generating an index comprising one or more memory address locations that have input activation values that are non-zero values. The method still further includes providing, by the controller and from the memory bank, at least one input activation onto a data bus that is accessible by one or more units of a computational array. The activations are provided, at least in part, from a memory address location associated with the index.
    Type: Grant
    Filed: March 22, 2017
    Date of Patent: November 14, 2017
    Assignee: Google Inc.
    Inventors: Dong Hyuk Woo, Ravi Narayanaswami
  • Patent number: 9805001
    Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements to a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix fetched by a second group of sparse element access units; and transform the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix to generate the output dense matrix that includes the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix.
    Type: Grant
    Filed: February 5, 2016
    Date of Patent: October 31, 2017
    Assignee: Google Inc.
    Inventors: Ravi Narayanaswami, Rahul Nagarajan, Dong Hyuk Woo, Christopher Daniel Leary
  • Patent number: 9798701
    Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements to a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix fetched by a second group of sparse element access units; and transform the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix to generate the output dense matrix that includes the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: October 24, 2017
    Assignee: Google Inc.
    Inventors: Ravi Narayanaswami, Rahul Nagarajan, Dong Hyuk Woo, Christopher Daniel Leary
  • Patent number: 9772973
    Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements to a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix fetched by a second group of sparse element access units; and transform the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix to generate the output dense matrix that includes the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix.
    Type: Grant
    Filed: February 5, 2016
    Date of Patent: September 26, 2017
    Assignee: Google Inc.
    Inventors: Ravi Narayanaswami, Rahul Nagarajan, Dong Hyuk Woo, Christopher Daniel Leary
  • Patent number: 9772974
    Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements to a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix fetched by a second group of sparse element access units; and transform the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix to generate the output dense matrix that includes the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: September 26, 2017
    Assignee: Google Inc.
    Inventors: Ravi Narayanaswami, Rahul Nagarajan, Dong Hyuk Woo, Christopher Daniel Leary
  • Publication number: 20170228344
    Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements into a dense matrix. The system includes a data fetch unit that includes a plurality of processors, the data fetch unit configured to determine, based on identifications of the subset of the particular sparse elements, a processor designation for fetching the subset of the particular sparse elements. The system includes a concatenation unit configured to generate an output dense matrix based on a transformation that is applied to the sparse elements fetched by the data fetch unit.
    Type: Application
    Filed: December 22, 2016
    Publication date: August 10, 2017
    Inventors: Ravi Narayanaswami, Rahul Nagarajan, Dong Hyuk Woo, Christopher Daniel Leary