Patents by Inventor Dong Hyuk Woo
Dong Hyuk Woo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240078417
Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator.
Type: Application
Filed: June 30, 2023
Publication date: March 7, 2024
Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
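A minimal Python sketch of the dataflow this abstract describes: a traversal unit selects activations from one memory bank while a MAC cell multiplies each activation by a parameter from a second bank and accumulates. Every name, shape, and value here is an illustrative assumption, not the patented design.

```python
# First memory bank: input activations; second memory bank: parameters.
activation_bank = [1.0, 2.0, 3.0, 4.0]
parameter_bank = [0.5, 0.25, 0.125, 0.0625]

def traversal_unit(bank):
    """Stand-in for the control signal: yields each activation onto the 'bus'."""
    for address in range(len(bank)):
        yield bank[address]

accumulator = 0.0
for step, activation in enumerate(traversal_unit(activation_bank)):
    parameter = parameter_bank[step]       # parameter fetched from second bank
    accumulator += activation * parameter  # multiply-accumulate ("MAC")

print(accumulator)  # one output element of the data array
```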
-
Patent number: 11816045
Abstract: A computer-implemented method includes receiving, by a computing device, input activations and determining, by a controller of the computing device, whether each of the input activations has either a zero value or a non-zero value. The method further includes storing, in a memory bank of the computing device, at least one of the input activations. Storing the at least one input activation includes generating an index comprising one or more memory address locations that have input activation values that are non-zero values. The method still further includes providing, by the controller and from the memory bank, at least one input activation onto a data bus that is accessible by one or more units of a computational array. The activations are provided, at least in part, from a memory address location associated with the index.
Type: Grant
Filed: August 24, 2021
Date of Patent: November 14, 2023
Assignee: Google LLC
Inventors: Dong Hyuk Woo, Ravi Narayanaswami
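The zero-skipping idea above lends itself to a short sketch: build an index of memory addresses holding non-zero activations, then feed only those onto the data bus. This is a hypothetical Python illustration of the concept, not the claimed hardware.

```python
activations = [0.0, 3.0, 0.0, 0.0, 7.0, 1.0]  # memory bank contents

# Controller pass: index of memory addresses whose activation is non-zero.
nonzero_index = [addr for addr, value in enumerate(activations) if value != 0.0]

# Provide activations to the computational array from indexed addresses only,
# skipping the multiplications that zero-valued activations would waste.
for addr in nonzero_index:
    activation = activations[addr]
    print(f"address {addr}: activation {activation} placed on data bus")
```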
-
Patent number: 11816480
Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
Type: Grant
Filed: August 22, 2022
Date of Patent: November 14, 2023
Assignee: Google LLC
Inventors: Olivier Temam, Ravi Narayanaswami, Harshit Khaitan, Dong Hyuk Woo
-
Publication number: 20230315478
Abstract: A hardware accelerator can receive, from a host processor, a slice of input data at a time-step. The hardware accelerator can process the input data using a machine learning model deployed on the hardware accelerator to compute a respective probability among multiple probabilities for each of multiple classes. The respective probability for each class is the likelihood that content in the slice belongs to the class. The hardware accelerator can determine, from the multiple probabilities, a preset number of highest probabilities for the slice of input data. The hardware accelerator can transmit the preset number of highest probabilities for the slice to the host processor. Related apparatus, systems, techniques and articles are also described.
Type: Application
Filed: August 13, 2020
Publication date: October 5, 2023
Inventors: Jack Liu, Dong Hyuk Woo
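The accelerator-side reduction described here, returning only the k highest class probabilities per slice to the host, can be sketched in a few lines of Python; `heapq.nlargest` stands in for the on-chip top-k logic, and all names and values are illustrative assumptions.

```python
import heapq

def top_k_for_slice(class_probabilities, k):
    """Return the k highest (probability, class_id) pairs for one slice."""
    return heapq.nlargest(k, ((p, c) for c, p in enumerate(class_probabilities)))

probs = [0.01, 0.20, 0.02, 0.55, 0.22]  # per-class probabilities for one slice
print(top_k_for_slice(probs, k=2))       # only these cross back to the host
```

Returning a preset number of top probabilities rather than the full distribution keeps host-accelerator traffic small and constant per time step.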
-
Publication number: 20230297504
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training giant neural networks. One of the methods includes obtaining data indicating a neural network comprising a plurality of layers; for each layer in a subset of the plurality of layers: assigning a subset of the plurality of computing units to at least partially perform inference computations associated with the layer; determining a memory size and a common memory address for the respective addressable memory unit of each computing unit assigned for the layer; and generating a shared instruction comprising a memory allocation instruction that, when executed by each of the subset of the plurality of computing units, causes the computing unit to store a result of performing inference computations associated with the layer in the determined common memory address with the determined memory size in the addressable memory of the computing unit.
Type: Application
Filed: April 26, 2021
Publication date: September 21, 2023
Inventors: Jack Liu, Dong Hyuk Woo
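A sketch of the allocation step may help: each layer in the chosen subset gets a memory size and a single common address, so one shared instruction can direct every assigned computing unit to store its result at the same local offset. The dataclass, sizes, and alignment rule below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SharedAllocInstruction:
    layer: str
    common_address: int  # same offset in every assigned unit's memory
    memory_size: int     # bytes reserved for the layer's result

def plan_allocations(layer_result_sizes, alignment=64):
    cursor, plan = 0, []
    for layer, size in layer_result_sizes.items():
        size = (size + alignment - 1) // alignment * alignment  # round up
        plan.append(SharedAllocInstruction(layer, cursor, size))
        cursor += size
    return plan

for instr in plan_allocations({"layer0": 100, "layer1": 250}):
    print(instr)
```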
-
Patent number: 11748443
Abstract: A circuit comprises an input register configured to receive an input vector of elements, a control register configured to receive a control vector of elements, wherein each element of the control vector corresponds to a respective element of the input vector, and wherein each element specifies a permutation of a corresponding element of the input vector, and a permute execution circuit configured to generate an output vector of elements corresponding to a permutation of the input vector. Generating each element of the output vector comprises accessing, at the input register, a particular element of the input vector, accessing, at the control register, a particular element of the control vector corresponding to the particular element of the input vector, and outputting the particular element of the input vector as an element at a particular position of the output vector that is selected based on the particular element of the control vector.
Type: Grant
Filed: March 22, 2021
Date of Patent: September 5, 2023
Assignee: Google LLC
Inventors: Dong Hyuk Woo, Gregory Michael Thorson, Andrew Everett Phelps, Olivier Temam, Jonathan Ross, Christopher Aaron Clark
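In software terms, the semantics read as a scatter: for each input element, the matching control element selects the output position that receives it. A pure-Python stand-in for the register/execution circuit, under that reading:

```python
def permute(input_vector, control_vector):
    """Scatter: control element i picks the output slot for input element i."""
    assert len(input_vector) == len(control_vector)
    output = [None] * len(input_vector)
    for i, value in enumerate(input_vector):
        output[control_vector[i]] = value
    return output

print(permute(["a", "b", "c", "d"], [2, 0, 3, 1]))  # ['b', 'd', 'a', 'c']
```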
-
Patent number: 11727259
Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator.
Type: Grant
Filed: November 10, 2022
Date of Patent: August 15, 2023
Assignee: Google LLC
Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
-
Publication number: 20230119126
Abstract: A hardware accelerator can store, in multiple memory storage areas in one or more memories on the accelerator, input data for each processing time step of multiple processing time steps for processing sequential inputs to a machine learning model. For each processing time step, the following is performed. The accelerator can access a current value of a counter stored in a register within the accelerator to identify the processing time step. The accelerator can determine, based on the current value of the counter, one or more memory storage areas that store the input data for the processing time step. The accelerator can facilitate access of the input data for the processing time step from the one or more memory storage areas to at least one processor coupled to the one or more memory storage areas. The accelerator can increment the current value of the counter stored in the register.
Type: Application
Filed: December 19, 2019
Publication date: April 20, 2023
Inventors: Jack Liu, Dong Hyuk Woo
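A compact sketch of the counter-driven lookup: a register holds the time-step counter, and its value selects which storage area feeds the processor that step. The modulo mapping from counter to area is an illustrative assumption.

```python
storage_areas = [["x0"], ["x1"], ["x2"]]  # input data staged per time step
counter = 0                                # counter held in a register

for _ in range(6):                         # six processing time steps
    # Counter value identifies the time step and selects the storage area.
    area = storage_areas[counter % len(storage_areas)]
    print(f"time step {counter}: processing input {area}")
    counter += 1                           # increment the register's counter
```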
-
Publication number: 20230052942
Abstract: A method of performing a reshape operation specified in a reshape layer of a neural network model is described. The reshape operation reshapes an input tensor with an input tensor shape to an output tensor with an output tensor shape. The tensor data that has to be reshaped is directly routed between tile memories of the hardware accelerator in an efficient manner. This advantageously optimizes usage of memory space and allows any number and type of neural network models to be run on the hardware accelerator.
Type: Application
Filed: March 30, 2020
Publication date: February 16, 2023
Inventors: Arun Chauhan, Fatih Mehmet Bakir, Phitchaya Mangpo Phothilimthana, Dong Hyuk Woo
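One way to picture reshape as pure routing: since reshape preserves the linear element order, each element's flat index fixes both its source tile/offset and its destination tile/offset, so data can move tile-to-tile with no staging copy. The tile sizes and row-major layout below are illustrative assumptions, not the described method.

```python
def route_reshape(num_elements, src_tile_elems, dst_tile_elems):
    """Yield (src_tile, src_off) -> (dst_tile, dst_off) routes per element."""
    for flat in range(num_elements):
        src = (flat // src_tile_elems, flat % src_tile_elems)
        dst = (flat // dst_tile_elems, flat % dst_tile_elems)
        yield src, dst

for src, dst in route_reshape(num_elements=8, src_tile_elems=4, dst_tile_elems=2):
    print(f"tile{src} -> tile{dst}")
```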
-
Publication number: 20230004386
Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
Type: Application
Filed: August 22, 2022
Publication date: January 5, 2023
Inventors: Olivier Temam, Ravi Narayanaswami, Harshit Khaitan, Dong Hyuk Woo
-
Publication number: 20220414437
Abstract: Methods and systems, including computer programs encoded on a computer storage medium. In one aspect, a method includes obtaining data specifying one or more neural networks to be deployed on a neural network hardware accelerator, each of the one or more neural networks having a respective set of parameters, and the neural network hardware accelerator having one or more memories having a memory capacity; determining a maximum amount of the memory capacity that will be in use at any one time during a processing of any of the one or more neural networks by the neural network hardware accelerator; identifying a subset of the parameters of the one or more neural networks that consumes an amount of memory that is less than a difference between the memory capacity and the determined maximum amount of the memory capacity; and storing the identified subset of the parameters.
Type: Application
Filed: December 18, 2019
Publication date: December 29, 2022
Inventors: Jack Liu, Dong Hyuk Woo, Jason Jong Kyu Park, Raksit Ashok
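A sketch of the capacity check: given the peak working-set size across all deployed networks, parameters are pinned on-accelerator only up to the leftover capacity. The greedy largest-first selection and all names below are illustrative assumptions.

```python
def pick_resident_parameters(param_sizes, memory_capacity, peak_usage):
    """Choose parameters whose total size fits in the spare capacity."""
    budget = memory_capacity - peak_usage  # memory never in use at one time
    resident, used = [], 0
    for name, size in sorted(param_sizes.items(), key=lambda kv: -kv[1]):
        if used + size <= budget:
            resident.append(name)
            used += size
    return resident

params = {"net1/w0": 40, "net1/w1": 25, "net2/w0": 30}
print(pick_resident_parameters(params, memory_capacity=128, peak_usage=60))
```

Keeping this subset resident avoids re-fetching those parameters from host memory on every inference while guaranteeing the working set still fits.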
-
Publication number: 20220391472
Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements to a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix fetched by a second group of sparse element access units; and transform the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix to generate the output dense matrix that includes the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix.
Type: Application
Filed: June 16, 2022
Publication date: December 8, 2022
Inventors: Ravi Narayanaswami, Rahul Nagarajan, Dong Hyuk Woo, Christopher Daniel Leary
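A toy version of the two-group fetch-and-merge: each "access unit" group gathers the sparse (index, value) elements for one dense matrix, and the transform writes both into a single dense output. The dict format, shapes, and side-by-side merge are assumptions for illustration.

```python
def gather_group(sparse_elements, num_rows, num_cols):
    """One group of sparse element access units: materialize a dense block."""
    dense = [[0.0] * num_cols for _ in range(num_rows)]
    for (r, c), v in sparse_elements.items():
        dense[r][c] = v
    return dense

first = gather_group({(0, 1): 5.0}, num_rows=2, num_cols=2)
second = gather_group({(1, 0): 7.0}, num_rows=2, num_cols=2)
# Output dense matrix containing the elements of both inputs, side by side.
output = [row_a + row_b for row_a, row_b in zip(first, second)]
for row in output:
    print(row)
```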
-
Patent number: 11501144
Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator.
Type: Grant
Filed: September 12, 2019
Date of Patent: November 15, 2022
Assignee: Google LLC
Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
-
Publication number: 20220318594
Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include, performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
Type: Application
Filed: June 21, 2022
Publication date: October 6, 2022
Inventors: Ravi Narayanaswami, Dong Hyuk Woo, Olivier Temam, Harshit Khaitan
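A minimal sketch of instruction-driven loop nests: the instruction's data values (here, a layer type and dimension sizes) select the loop-nest structure that carries out the tensor computation. The field names and the single supported layer type are illustrative assumptions.

```python
def execute(instruction, activations, params):
    """Build and run a loop nest whose shape the instruction fields define."""
    if instruction["layer_type"] == "fully_connected":
        out = [0.0] * instruction["out_dim"]       # loop nest: two loops
        for o in range(instruction["out_dim"]):
            for i in range(instruction["in_dim"]):
                out[o] += activations[i] * params[o][i]
        return out
    raise NotImplementedError(instruction["layer_type"])

instr = {"layer_type": "fully_connected", "in_dim": 2, "out_dim": 2}
print(execute(instr, [1.0, 2.0], [[0.5, 0.5], [1.0, -1.0]]))  # [1.5, -1.0]
```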
-
Publication number: 20220300826
Abstract: A compiler of a computing device is described that identifies a sequence of neural network models frequently invoked by an application of the computing device, compiles the models in that sequence, and loads a static random access memory (SRAM) of a hardware accelerator with the compiled models only when the same compiled models, produced from a previous invocation of the same sequence, are not already present in the SRAM. This prevents unnecessary reloading of compiled models into the SRAM, thereby increasing runtime speed and conserving computational energy.
Type: Application
Filed: March 9, 2020
Publication date: September 22, 2022
Inventors: Arun Chauhan, Raksit Ashok, Dong Hyuk Woo
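The cache check reduces to: key the SRAM contents by the invoked model sequence and reload only on a miss. In this sketch, an in-process variable stands in for the accelerator's SRAM; all names are illustrative assumptions.

```python
loaded_sequence = None  # stand-in for what the accelerator's SRAM holds

def run_sequence(model_sequence):
    """Reload compiled models only when the resident sequence differs."""
    global loaded_sequence
    if model_sequence != loaded_sequence:       # miss: compile and load
        print(f"compiling and loading {model_sequence} into SRAM")
        loaded_sequence = model_sequence
    else:                                       # hit: skip the reload
        print(f"{model_sequence} already resident, skipping reload")

run_sequence(("face_detect", "face_embed"))
run_sequence(("face_detect", "face_embed"))     # second call avoids reload
```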
-
Patent number: 11422801
Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
Type: Grant
Filed: January 4, 2019
Date of Patent: August 23, 2022
Assignee: Google LLC
Inventors: Olivier Temam, Ravi Narayanaswami, Harshit Khaitan, Dong Hyuk Woo
-
Publication number: 20220245453
Abstract: Methods, systems, and apparatus, including an apparatus for redistributing tensor elements among computing units, are described. In one aspect, a method includes distributing tensor elements of an N-dimensional tensor among multiple computing units of a computation system. Each computing unit redistributes the subset of tensor elements previously distributed to it among the computing units. Each computing unit accesses redistribution partitioning data that specifies, for each computing unit, the tensor elements that are to be stored by the computing unit after redistributing the tensor elements. For each tensor element previously distributed to a particular computing unit, that computing unit determines a global linearized index value for the tensor element based on a multi-dimensional index for the tensor element.
Type: Application
Filed: October 7, 2020
Publication date: August 4, 2022
Inventors: David Alexander Majnemer, Ravi Narayanaswami, Dong Hyuk Woo, Carrell Daniel Killebrew
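The index computation each unit performs can be shown concretely: a tensor element's multi-dimensional index is collapsed into a single global linearized value, which the partitioning data maps to an owning unit. The row-major formula and the modulo ownership rule below are illustrative assumptions.

```python
def linearize(multi_index, dims):
    """Row-major flattening: flat = ((i0 * d1 + i1) * d2 + i2) ..."""
    flat = 0
    for idx, dim in zip(multi_index, dims):
        flat = flat * dim + idx
    return flat

dims, num_units = (2, 3, 4), 4
for multi_index in [(0, 0, 0), (1, 2, 3), (0, 2, 1)]:
    flat = linearize(multi_index, dims)
    print(f"{multi_index} -> global index {flat} -> unit {flat % num_units}")
```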
-
Patent number: 11379707
Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include, performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
Type: Grant
Filed: November 22, 2017
Date of Patent: July 5, 2022
Assignee: Google LLC
Inventors: Ravi Narayanaswami, Dong Hyuk Woo, Olivier Temam, Harshit Khaitan
-
Patent number: 11366877
Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements to a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix fetched by a second group of sparse element access units; and transform the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix to generate the output dense matrix that includes the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix.
Type: Grant
Filed: July 14, 2020
Date of Patent: June 21, 2022
Assignee: Google LLC
Inventors: Ravi Narayanaswami, Rahul Nagarajan, Dong Hyuk Woo, Christopher Daniel Leary
-
Publication number: 20220156557
Abstract: A computer-implemented method includes receiving a batch of neural network inputs to be processed using a neural network on a hardware circuit. The neural network has multiple layers arranged in a directed graph and each layer has a respective set of parameters. The method includes determining a partitioning of the neural network layers into a sequence of superlayers. Each superlayer is a partition of the directed graph that includes one or more layers. The method includes processing the batch of inputs using the hardware circuit, which includes, for each superlayer in the sequence: i) loading the respective set of parameters for the layers in the superlayer into memory of the hardware circuit, and ii) for each input in the batch, processing the input through each of the layers in the superlayer using the parameters in the memory of the hardware circuit to generate a superlayer output for the input.
Type: Application
Filed: October 25, 2021
Publication date: May 19, 2022
Inventor: Dong Hyuk Woo
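The scheduling pattern is easy to sketch: parameters are loaded once per superlayer (a partition of the layer graph), and the whole batch flows through that superlayer before the next partition's parameters are brought in. The layer functions, string stand-ins, and memory model below are illustrative assumptions.

```python
superlayers = [
    ["layer0", "layer1"],   # first partition of the directed graph
    ["layer2"],             # second partition
]
params = {l: f"params_{l}" for l in ("layer0", "layer1", "layer2")}
batch = ["input_a", "input_b"]

for superlayer in superlayers:
    memory = {l: params[l] for l in superlayer}  # i) load parameters once
    outputs = []
    for x in batch:                              # ii) run every batch input
        for layer in superlayer:
            x = f"{layer}({x})"                  # stand-in for the layer op
        outputs.append(x)                        # superlayer output per input
    batch = outputs                              # feed the next superlayer
    print(f"after {superlayer}: {batch}")
```

Amortizing each parameter load over the entire batch is what lets a model whose full parameter set exceeds on-chip memory still run without a per-input reload.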