Patents by Inventor Dong Hyuk Woo

Dong Hyuk Woo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10534607
    Abstract: Methods, systems, and apparatus, including an apparatus for accessing an N-dimensional tensor, the apparatus including, for each dimension of the N-dimensional tensor, a partial address offset value element that stores a partial address offset value for the dimension based at least on an initial value for the dimension, a step value for the dimension, and a number of iterations of a loop for the dimension. The apparatus includes a hardware adder and a processor. The processor obtains an instruction to access a particular element of the N-dimensional tensor. The N-dimensional tensor has multiple elements arranged across each of the N dimensions, where N is an integer that is equal to or greater than one. The processor determines, using the partial address offset value elements and the hardware adder, an address of the particular element and outputs data indicating the determined address for accessing the particular element of the N-dimensional tensor.
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: January 14, 2020
    Assignee: Google LLC
    Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
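The address arithmetic described in patent 10534607 above can be sketched in plain Python. The names below (PartialOffsetElement, advance, element_address) and the wrap-around handling are illustrative assumptions, not terms or circuitry from the patent; summing the per-dimension partial offsets stands in for the hardware adder.

```python
# Illustrative sketch of per-dimension partial address offsets (assumed names,
# not the patented circuit). Each dimension keeps a partial offset derived from
# an initial value, a step value, and a loop iteration count; summing the
# partial offsets plays the role of the hardware adder.

class PartialOffsetElement:
    """Tracks one dimension's contribution to an element's address."""
    def __init__(self, initial_value, step_value, num_iterations):
        self.initial_value = initial_value    # starting offset for this dimension
        self.step_value = step_value          # address increment per loop iteration
        self.num_iterations = num_iterations  # loop bound for this dimension
        self.partial_offset = initial_value   # current partial address offset

    def advance(self):
        """Step this dimension's loop once, wrapping after num_iterations steps."""
        self.partial_offset += self.step_value
        if self.partial_offset >= self.initial_value + self.step_value * self.num_iterations:
            self.partial_offset = self.initial_value

def element_address(base_address, offset_elements):
    """Sum the partial offsets (the hardware adder's job) to form the address."""
    return base_address + sum(e.partial_offset for e in offset_elements)

# A 2 x 3 tensor stored row-major: per-element strides of 3 and 1.
dims = [PartialOffsetElement(0, 3, 2), PartialOffsetElement(0, 1, 3)]
dims[1].advance()                          # move to element [0, 1]
print(hex(element_address(0x1000, dims)))  # 0x1001
```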
  • Publication number: 20200012608
    Abstract: A computer-implemented method includes receiving, by a computing device, input activations and determining, by a controller of the computing device, whether each of the input activations has either a zero value or a non-zero value. The method further includes storing, in a memory bank of the computing device, at least one of the input activations. Storing the at least one input activation includes generating an index comprising one or more memory address locations that have input activation values that are non-zero values. The method still further includes providing, by the controller and from the memory bank, at least one input activation onto a data bus that is accessible by one or more units of a computational array. The activations are provided, at least in part, from a memory address location associated with the index.
    Type: Application
    Filed: July 17, 2019
    Publication date: January 9, 2020
    Inventors: Dong Hyuk Woo, Ravi Narayanaswami
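The zero-skipping flow in publication 20200012608 above (which shares its abstract with patent 10360163 later in this list) can be sketched as follows; the function names and the list-based stand-ins for the memory bank and data bus are assumptions made for illustration.

```python
# Minimal sketch, under assumed names: store incoming activations, keep an index
# of the addresses holding non-zero values, and serve only those addresses to
# the compute units.

def store_activations(memory_bank, activations):
    """Write activations to the memory bank and index the non-zero addresses."""
    nonzero_index = []
    for address, value in enumerate(activations):
        memory_bank[address] = value
        if value != 0:
            nonzero_index.append(address)   # remember where useful work lives
    return nonzero_index

def provide_to_data_bus(memory_bank, nonzero_index):
    """Yield (address, activation) pairs onto the 'data bus', skipping zeros."""
    for address in nonzero_index:
        yield address, memory_bank[address]

memory_bank = [0] * 8
index = store_activations(memory_bank, [0, 5, 0, 0, 3, 0, 7, 0])
for addr, act in provide_to_data_bus(memory_bank, index):
    print(f"bus <- activation {act} from address {addr}")
```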
  • Publication number: 20200012705
    Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements to a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix fetched by a second group of sparse element access units; and transform the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix to generate the output dense matrix that includes the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix.
    Type: Application
    Filed: September 16, 2019
    Publication date: January 9, 2020
    Inventors: Ravi Narayanaswami, Rahul Nagarajan, Dong Hyuk Woo, Christopher Daniel Leary
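The flow in publication 20200012705 above (which shares its abstract with patent 10417303 below) is essentially a gather of sparse elements followed by a scatter into a dense output. In the sketch below the two groups of sparse element access units are modeled as plain Python callables returning (row, column, value) triples, which is an assumption for illustration rather than the patented hardware.

```python
# Hedged sketch: gather sparse (row, col, value) triples from two groups of
# "access units" (modeled as callables), then densify them into one output
# matrix. All names and shapes are illustrative.

def fetch_sparse(access_units):
    """Flatten the triples fetched by every access unit in a group."""
    return [triple for unit in access_units for triple in unit()]

def to_dense(sparse_elements, rows, cols):
    """Scatter sparse elements into a dense row-major output matrix."""
    dense = [[0.0] * cols for _ in range(rows)]
    for r, c, v in sparse_elements:
        dense[r][c] = v
    return dense

# One group of access units per source dense matrix.
group_a = [lambda: [(0, 1, 2.0)], lambda: [(1, 0, 4.0)]]
group_b = [lambda: [(2, 3, 6.0)]]

output = to_dense(fetch_sparse(group_a) + fetch_sparse(group_b), rows=4, cols=4)
for row in output:
    print(row)
```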
  • Publication number: 20200005128
    Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs computations associated with at least one element of a data array, the one or more computations performed by the MAC operator.
    Type: Application
    Filed: September 12, 2019
    Publication date: January 2, 2020
    Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
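The dataflow in publication 20200005128 above (which shares its abstract with patent 10504022, the next entry) boils down to a traversal unit streaming activations from the first memory bank onto a data bus while a multiply accumulate cell combines them with parameters from the second bank. The sketch below uses assumed names (MacCell, run_layer) and Python lists in place of the memory banks.

```python
# Minimal software sketch of the activation/parameter MAC dataflow described
# above; memory banks are plain lists and the traversal unit is a loop.

class MacCell:
    """One multiply accumulate ("MAC") operator with a running accumulator."""
    def __init__(self):
        self.accumulator = 0.0

    def multiply_accumulate(self, activation, parameter):
        self.accumulator += activation * parameter
        return self.accumulator

def run_layer(activation_bank, parameter_bank):
    """Walk the activation bank (the traversal unit's role) and drive one MAC cell."""
    cell = MacCell()
    for address, activation in enumerate(activation_bank):
        data_bus = activation                     # activation placed on the data bus
        cell.multiply_accumulate(data_bus, parameter_bank[address])
    return cell.accumulator

# Dot product of an activation vector with a parameter vector.
print(run_layer([1.0, 2.0, 3.0], [0.5, 0.25, 0.125]))   # 1.375
```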
  • Patent number: 10504022
    Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs computations associated with at least one element of a data array, the one or more computations performed by the MAC operator.
    Type: Grant
    Filed: August 9, 2018
    Date of Patent: December 10, 2019
    Assignee: Google LLC
    Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
  • Patent number: 10496326
    Abstract: Methods, systems, and apparatus, including an apparatus for transferring data using multiple buffers, including multiple memories and one or more processing units configured to determine buffer memory addresses for a sequence of data elements stored in a first data storage location that are being transferred to a second data storage location. For each group of one or more of the data elements in the sequence, a value of a buffer assignment element that can be switched between multiple values each corresponding to a different one of the memories is identified. A buffer memory address for the group of one or more data elements is determined based on the value of the buffer assignment element. The value of the buffer assignment element is switched prior to determining the buffer memory address for a subsequent group of one or more data elements of the sequence of data elements.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: December 3, 2019
    Assignee: Google LLC
    Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
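Patent 10496326 above (which shares its abstract with publication 20190138243 later in this list) switches a buffer assignment value between memories before each group of elements is addressed. The sketch below illustrates that toggling with two buffers, i.e. plain double buffering; the function and variable names are assumptions.

```python
# Illustrative sketch: a buffer assignment value picks which buffer memory
# receives the next group of elements, and it is switched before the following
# group is addressed, so consecutive groups land in alternating buffers.

def assign_buffers(groups, num_buffers=2):
    """Return (buffer_id, address_within_buffer, group) for each group."""
    assignment = 0                      # the buffer assignment element
    next_free = [0] * num_buffers       # next free address in each buffer memory
    placements = []
    for group in groups:
        address = next_free[assignment]
        placements.append((assignment, address, group))
        next_free[assignment] += len(group)
        assignment = (assignment + 1) % num_buffers   # switch before the next group
    return placements

for buf, addr, group in assign_buffers([[1, 2], [3, 4], [5, 6]]):
    print(f"buffer {buf}, address {addr}: {group}")
```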
  • Publication number: 20190354570
    Abstract: A circuit comprises an input register configured to receive an input vector of elements, a control register configured to receive a control vector of elements, wherein each element of the control vector corresponds to a respective element of the input vector, and wherein each element specifies a permutation of a corresponding element of the input vector, and a permute execution circuit configured to generate an output vector of elements corresponding to a permutation of the input vector. Generating each element of the output vector comprises accessing, at the input register, a particular element of the input vector, accessing, at the control register, a particular element of the control vector corresponding to the particular element of the input vector, and outputting the particular element of the input vector as an element at a particular position of the output vector that is selected based on the particular element of the control vector.
    Type: Application
    Filed: August 1, 2019
    Publication date: November 21, 2019
    Inventors: Dong Hyuk Woo, Gregory Michael Thorson, Andrew Everett Phelps, Olivier Temam, Jonathan Ross, Christopher Aaron Clark
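Publication 20190354570 above (which shares its abstract with publication 20190258694 and patent 10216705 below) describes a permute circuit in which each control element selects the output position of the corresponding input element. The sketch below assumes that scatter interpretation and uses Python lists in place of the input, control, and output registers.

```python
# Minimal sketch of the permute operation: input_vector[i] is written to
# position control_vector[i] of the output. Register widths are illustrative.

def permute(input_vector, control_vector):
    """Scatter each input element to the output position its control element names."""
    output_vector = [None] * len(input_vector)
    for i, value in enumerate(input_vector):
        output_vector[control_vector[i]] = value
    return output_vector

# Rotate a 4-lane vector left by one position.
print(permute(["a", "b", "c", "d"], [3, 0, 1, 2]))   # ['b', 'c', 'd', 'a']
```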
  • Patent number: 10417303
    Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements to a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix fetched by a second group of sparse element access units; and transform the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix to generate the output dense matrix that includes the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix.
    Type: Grant
    Filed: September 5, 2017
    Date of Patent: September 17, 2019
    Assignee: Google LLC
    Inventors: Ravi Narayanaswami, Rahul Nagarajan, Dong Hyuk Woo, Christopher Daniel Leary
  • Publication number: 20190258694
    Abstract: A circuit comprises an input register configured to receive an input vector of elements, a control register configured to receive a control vector of elements, wherein each element of the control vector corresponds to a respective element of the input vector, and wherein each element specifies a permutation of a corresponding element of the input vector, and a permute execution circuit configured to generate an output vector of elements corresponding to a permutation of the input vector. Generating each element of the output vector comprises accessing, at the input register, a particular element of the input vector, accessing, at the control register, a particular element of the control vector corresponding to the particular element of the input vector, and outputting the particular element of the input vector as an element at a particular position of the output vector that is selected based on the particular element of the control vector.
    Type: Application
    Filed: February 25, 2019
    Publication date: August 22, 2019
    Inventors: Dong Hyuk Woo, Gregory Michael Thorson, Andrew Everett Phelps, Olivier Temam, Jonathan Ross, Christopher Aaron Clark
  • Patent number: 10373291
    Abstract: Methods, systems, and apparatus, including an apparatus for determining pixel coordinates for image transformation and memory addresses for storing the transformed image data. In some implementations, a system includes a processing unit configured to perform machine learning computations for images using a machine learning model and pixel values for the images, a storage medium configured to store the pixel values for the images, and a memory address computation unit that includes one or more hardware processors. The processor(s) are configured to receive image data for an image and determine that the dimensions of the image do not match the dimensions of the machine learning model. In response, the processor(s) determine pixel coordinates for a transformed version of the input image and, for each of the pixel coordinates, memory address(es), in the storage medium, for storing pixel value(s) that will be used to generate an input to the machine learning model.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: August 6, 2019
    Assignee: Google LLC
    Inventors: Carrell Daniel Killebrew, Ravi Narayanaswami, Dong Hyuk Woo
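Patent 10373291 above (which shares its abstract with publication 20190236755, the next entry) computes, for each pixel coordinate of the transformed image, the memory addresses of the stored pixel values that feed the model. The sketch below assumes nearest-neighbor resampling and row-major addressing purely for illustration; the patent covers the general mechanism, not this specific transform.

```python
# Illustrative sketch: map each destination pixel of a resized image back to a
# source pixel (nearest neighbor, an assumption) and compute the row-major
# memory address where that source pixel's value is stored.

def transform_addresses(src_width, src_height, dst_width, dst_height, channels=1):
    """Yield (destination coord, source coord, source address) for every pixel."""
    for dst_y in range(dst_height):
        for dst_x in range(dst_width):
            src_x = min(src_width - 1, dst_x * src_width // dst_width)
            src_y = min(src_height - 1, dst_y * src_height // dst_height)
            address = (src_y * src_width + src_x) * channels
            yield (dst_x, dst_y), (src_x, src_y), address

# Map a 4 x 4 source image onto the 2 x 2 input a hypothetical model expects.
for dst, src, addr in transform_addresses(4, 4, 2, 2):
    print(f"model input {dst} <- source pixel {src} at address {addr}")
```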
  • Publication number: 20190236755
    Abstract: Methods, systems, and apparatus, including an apparatus for determining pixel coordinates for image transformation and memory addresses for storing the transformed image data. In some implementations, a system includes a processing unit configured to perform machine learning computations for images using a machine learning model and pixel values for the images, a storage medium configured to store the pixel values for the images, and a memory address computation unit that includes one or more hardware processors. The processor(s) are configured to receive image data for an image and determine that the dimensions of the image do not match the dimensions of the machine learning model. In response, the processor(s) determine pixel coordinates for a transformed version of the input image and, for each of the pixel coordinates, memory address(es), in the storage medium, for storing pixel value(s) that will be used to generate an input to the machine learning model.
    Type: Application
    Filed: January 31, 2018
    Publication date: August 1, 2019
    Inventors: Carrell Daniel Killebrew, Ravi Narayanaswami, Dong Hyuk Woo
  • Patent number: 10360163
    Abstract: A computer-implemented method includes receiving, by a computing device, input activations and determining, by a controller of the computing device, whether each of the input activations has either a zero value or a non-zero value. The method further includes storing, in a memory bank of the computing device, at least one of the input activations. Storing the at least one input activation includes generating an index comprising one or more memory address locations that have input activation values that are non-zero values. The method still further includes providing, by the controller and from the memory bank, at least one input activation onto a data bus that is accessible by one or more units of a computational array. The activations are provided, at least in part, from a memory address location associated with the index.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: July 23, 2019
    Assignee: Google LLC
    Inventors: Dong Hyuk Woo, Ravi Narayanaswami
  • Publication number: 20190213005
    Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
    Type: Application
    Filed: January 4, 2019
    Publication date: July 11, 2019
    Inventors: Olivier Temam, Ravi Narayanaswami, Harshit Khaitan, Dong Hyuk Woo
  • Publication number: 20190205756
    Abstract: Methods, systems, and apparatus for accessing an N-dimensional tensor are described. In some implementations, a method includes, for each of one or more first iterations of a first nested loop, performing iterations of a second nested loop that is nested within the first nested loop until a first loop bound for the second nested loop is reached. A number of iterations of the second nested loop for the one or more first iterations of the first nested loop is limited by the first loop bound in response to the second nested loop having a total number of iterations that exceeds a value of a hardware property of the computing system. After a penultimate iteration of the first nested loop has completed, one or more iterations of the second nested loop are performed for a final iteration of the first nested loop until an alternative loop bound is reached.
    Type: Application
    Filed: March 8, 2019
    Publication date: July 4, 2019
    Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
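Publication 20190205756 above (which shares its abstract with patent 10248908 below) splits an oversized loop so that the inner loop runs up to a hardware-limited bound, with an alternative bound on the final outer iteration. The sketch below treats that hardware property as an assumed constant, MAX_HW_ITERATIONS.

```python
# Hedged sketch of the loop-splitting idea: when a loop's trip count exceeds
# what the hardware can express, run an outer loop whose inner loop uses the
# hardware bound, and a remainder (alternative) bound on the last outer trip.

MAX_HW_ITERATIONS = 8   # assumed hardware property, e.g. loop-counter width

def run_split_loop(total_iterations, body):
    outer_trips = -(-total_iterations // MAX_HW_ITERATIONS)   # ceiling division
    for outer in range(outer_trips):
        if outer < outer_trips - 1:
            inner_bound = MAX_HW_ITERATIONS                               # first loop bound
        else:
            inner_bound = total_iterations - outer * MAX_HW_ITERATIONS    # alternative bound
        for inner in range(inner_bound):
            body(outer * MAX_HW_ITERATIONS + inner)

run_split_loop(21, lambda i: print("iteration", i))   # runs 8 + 8 + 5 iterations
```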
  • Publication number: 20190205141
    Abstract: Methods, systems, and apparatus, including an apparatus for processing an instruction for accessing an N-dimensional tensor, the apparatus including multiple tensor index elements and multiple dimension multiplier elements, where each of the dimension multiplier elements has a corresponding tensor index element. The apparatus includes one or more processors configured to obtain an instruction to access a particular element of an N-dimensional tensor, where the N-dimensional tensor has multiple elements arranged across each of the N dimensions, and where N is an integer that is equal to or greater than one; determine, using one or more tensor index elements of the multiple tensor index elements and one or more dimension multiplier elements of the multiple dimension multiplier elements, an address of the particular element; and output data indicating the determined address for accessing the particular element of the N-dimensional tensor.
    Type: Application
    Filed: March 11, 2019
    Publication date: July 4, 2019
    Inventors: Dong Hyuk Woo, Andrew Everett Phelps
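Publication 20190205141 above (which shares its abstract with patent 10228947 below) computes an element address from tensor index elements and dimension multiplier elements. The sketch below shows the sum-of-products form that description suggests; the function and argument names are assumptions.

```python
# Minimal sketch: each dimension contributes tensor_index * dimension_multiplier,
# and the contributions are summed with the base address to locate the element.

def element_address(base_address, tensor_indices, dimension_multipliers):
    """Flat address of the element at tensor_indices in an N-dimensional tensor."""
    offset = sum(i * m for i, m in zip(tensor_indices, dimension_multipliers))
    return base_address + offset

# A 2 x 3 x 4 tensor stored row-major: dimension multipliers are 12, 4, 1.
print(hex(element_address(0x2000, (1, 2, 3), (12, 4, 1))))   # 0x2017
```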
  • Publication number: 20190156187
    Abstract: Apparatus and methods for processing neural network models are provided. The apparatus can comprise a plurality of identical artificial intelligence processing dies. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies can include at least one inter-die input block and at least one inter-die output block. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies is communicatively coupled to another artificial intelligence processing die among the plurality of identical artificial intelligence processing dies by way of one or more communication paths from the at least one inter-die output block of the artificial intelligence processing die to the at least one inter-die input block of the artificial intelligence processing die.
    Type: Application
    Filed: November 21, 2017
    Publication date: May 23, 2019
    Inventors: Uday Kumar Dasari, Olivier Temam, Ravi Narayanaswami, Dong Hyuk Woo
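Publication 20190156187 above couples identical artificial intelligence processing dies by routing each die's inter-die output block to another die's inter-die input block. The sketch below models that coupling as a ring of Python objects; the ring topology and all names are assumptions chosen only to make the wiring concrete.

```python
# Illustrative sketch: identical dies linked output block -> input block in a
# ring (one possible topology, assumed here for concreteness).

class ProcessingDie:
    def __init__(self, die_id):
        self.die_id = die_id
        self.output_block = None   # die fed by this die's inter-die output block
        self.input_block = None    # die feeding this die's inter-die input block

def connect_ring(num_dies):
    """Wire identical dies output -> input in a ring and return them."""
    dies = [ProcessingDie(i) for i in range(num_dies)]
    for die, neighbor in zip(dies, dies[1:] + dies[:1]):
        die.output_block = neighbor
        neighbor.input_block = die
    return dies

for die in connect_ring(4):
    print(f"die {die.die_id} -> die {die.output_block.die_id}")
```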
  • Publication number: 20190138243
    Abstract: Methods, systems, and apparatus, including an apparatus for transferring data using multiple buffers, including multiple memories and one or more processing units configured to determine buffer memory addresses for a sequence of data elements stored in a first data storage location that are being transferred to a second data storage location. For each group of one or more of the data elements in the sequence, a value of a buffer assignment element that can be switched between multiple values each corresponding to a different one of the memories is identified. A buffer memory address for the group of one or more data elements is determined based on the value of the buffer assignment element. The value of the buffer assignment element is switched prior to determining the buffer memory address for a subsequent group of one or more data elements of the sequence of data elements.
    Type: Application
    Filed: January 4, 2019
    Publication date: May 9, 2019
    Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
  • Patent number: 10248908
    Abstract: Methods, systems, and apparatus for accessing an N-dimensional tensor are described. In some implementations, a method includes, for each of one or more first iterations of a first nested loop, performing iterations of a second nested loop that is nested within the first nested loop until a first loop bound for the second nested loop is reached. A number of iterations of the second nested loop for the one or more first iterations of the first nested loop is limited by the first loop bound in response to the second nested loop having a total number of iterations that exceeds a value of a hardware property of the computing system. After a penultimate iteration of the first nested loop has completed, one or more iterations of the second nested loop are performed for a final iteration of the first nested loop until an alternative loop bound is reached.
    Type: Grant
    Filed: June 19, 2017
    Date of Patent: April 2, 2019
    Assignee: Google LLC
    Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
  • Patent number: 10228947
    Abstract: Methods, systems, and apparatus, including an apparatus for processing an instruction for accessing an N-dimensional tensor, the apparatus including multiple tensor index elements and multiple dimension multiplier elements, where each of the dimension multiplier elements has a corresponding tensor index element. The apparatus includes one or more processors configured to obtain an instruction to access a particular element of an N-dimensional tensor, where the N-dimensional tensor has multiple elements arranged across each of the N dimensions, and where N is an integer that is equal to or greater than one; determine, using one or more tensor index elements of the multiple tensor index elements and one or more dimension multiplier elements of the multiple dimension multiplier elements, an address of the particular element; and output data indicating the determined address for accessing the particular element of the N-dimensional tensor.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: March 12, 2019
    Assignee: Google LLC
    Inventors: Dong Hyuk Woo, Andrew Everett Phelps
  • Patent number: 10216705
    Abstract: A circuit comprises an input register configured to receive an input vector of elements, a control register configured to receive a control vector of elements, wherein each element of the control vector corresponds to a respective element of the input vector, and wherein each element specifies a permutation of a corresponding element of the input vector, and a permute execution circuit configured to generate an output vector of elements corresponding to a permutation of the input vector. Generating each element of the output vector comprises accessing, at the input register, a particular element of the input vector, accessing, at the control register, a particular element of the control vector corresponding to the particular element of the input vector, and outputting the particular element of the input vector as an element at a particular position of the output vector that is selected based on the particular element of the control vector.
    Type: Grant
    Filed: April 30, 2018
    Date of Patent: February 26, 2019
    Assignee: Google LLC
    Inventors: Dong Hyuk Woo, Gregory Michael Thorson, Andrew Everett Phelps, Olivier Temam, Jonathan Ross, Christopher Aaron Clark