Patents by Inventor Dong Hyuk Woo
Dong Hyuk Woo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250238668
Abstract: Apparatus and methods for processing neural network models are provided. The apparatus can comprise a plurality of identical artificial intelligence processing dies. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies can include at least one inter-die input block and at least one inter-die output block. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies is communicatively coupled to another artificial intelligence processing die among the plurality of identical artificial intelligence processing dies by way of one or more communication paths from the at least one inter-die output block of the artificial intelligence processing die to the at least one inter-die input block of the artificial intelligence processing die.
Type: Application
Filed: January 17, 2025
Publication date: July 24, 2025
Inventors: Uday Kumar Dasari, Olivier Temam, Ravi Narayanaswami, Dong Hyuk Woo
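For illustration only, a minimal Python sketch of the interconnect idea described in the abstract above: identical dies, each with an inter-die output block wired to a peer's inter-die input block. The class and method names (Die, produce, flush_to) and the ring topology are assumptions made for this sketch, not details taken from the filing.

```python
# Sketch, not the patented design: identical "dies" wired in a ring, each die's
# output block feeding the next die's input block.
class Die:
    def __init__(self, die_id):
        self.die_id = die_id
        self.input_block = []    # stands in for the inter-die input block
        self.output_block = []   # stands in for the inter-die output block

    def produce(self, value):
        # Place a partial result on the outgoing inter-die link.
        self.output_block.append(value)

    def flush_to(self, other):
        # Move everything on this die's output block onto the peer's input block,
        # modelling one communication path between two identical dies.
        other.input_block.extend(self.output_block)
        self.output_block.clear()


# Four identical dies connected in a ring: die i sends to die (i + 1) % 4.
dies = [Die(i) for i in range(4)]
for i, die in enumerate(dies):
    die.produce(f"partial-result-from-die-{i}")
    die.flush_to(dies[(i + 1) % len(dies)])

for die in dies:
    print(die.die_id, die.input_block)
```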
-
Publication number: 20250232164
Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs computations associated with at least one element of a data array, the one or more computations performed by the MAC operator.
Type: Application
Filed: January 15, 2025
Publication date: July 17, 2025
Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
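As a rough illustration of the computing-unit behaviour described above, the following sketch reduces the two memory banks, the traversal unit, and the MAC operator to a plain Python loop. All names are invented for the example, and the traversal unit is modelled as nothing more than an index walk that places one activation at a time on a "data bus" variable.

```python
# Minimal sketch, assuming one activation bank and one parameter bank of equal length.
def mac_compute(activation_bank, parameter_bank):
    """Accumulate activation[i] * parameter[i] for one data-array element."""
    accumulator = 0.0
    for i in range(len(activation_bank)):
        data_bus = activation_bank[i]          # traversal unit drives the bus
        parameter = parameter_bank[i]          # parameter read from the second bank
        accumulator += data_bus * parameter    # MAC operation
    return accumulator


activations = [1.0, 2.0, 3.0]   # first memory bank
parameters = [0.5, 0.25, 0.1]   # second memory bank (kept on-unit for low latency)
print(mac_compute(activations, parameters))    # 1*0.5 + 2*0.25 + 3*0.1 = 1.3
```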
-
Publication number: 20250232165
Abstract: A hardware accelerator can store, in multiple memory storage areas in one or more memories on the accelerator, input data for each processing time step of multiple processing time steps for processing sequential inputs to a machine learning model. For each processing time step, the following is performed. The accelerator can access a current value of a counter stored in a register within the accelerator to identify the processing time step. The accelerator can determine, based on the current value of the counter, one or more memory storage areas that store the input data for the processing time step. The accelerator can facilitate access of the input data for the processing time step from the one or more memory storage areas to at least one processor coupled to the one or more memory storage areas. The accelerator can increment the current value of the counter stored in the register.
Type: Application
Filed: January 16, 2025
Publication date: July 17, 2025
Inventors: Jack Liu, Dong Hyuk Woo
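The counter-driven selection of a memory storage area per time step can be pictured with a short sketch. The modulo mapping from counter value to storage area below is an assumption made only for this example; the publication does not specify the mapping used.

```python
# Illustrative sketch only: a counter register selects which memory storage
# area holds the input for the current processing time step.
class SequenceBuffer:
    def __init__(self, storage_areas):
        self.storage_areas = storage_areas  # one list of inputs per storage area
        self.counter = 0                    # stands in for the on-chip register

    def next_input(self):
        # Use the current counter value to pick the storage area for this step.
        area = self.storage_areas[self.counter % len(self.storage_areas)]
        value = area[self.counter // len(self.storage_areas)]
        self.counter += 1                   # increment the counter register
        return value


# Six sequential inputs spread across two storage areas.
buf = SequenceBuffer([[10, 30, 50], [20, 40, 60]])
print([buf.next_input() for _ in range(6)])   # [10, 20, 30, 40, 50, 60]
```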
-
Publication number: 20250231765
Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
Type: Application
Filed: January 16, 2025
Publication date: July 17, 2025
Inventors: Olivier Temam, Ravi Narayanaswami, Harshit Khaitan, Dong Hyuk Woo
-
Publication number: 20250225781
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing inference computations of a fully convolutional neural network receiving inputs with different sizes. One of the methods include receiving a new input to be processed by a fully convolutional neural network, the new input having a first size different from a fixed size that the fully convolutional neural network is configured to process; determining, one or more fixed-size inputs from the new input, each fixed-size input having the fixed size; obtaining a respective fixed-size output generated by the fully convolutional neural network performing inference computations for each of the one or more fixed-size inputs; and generating, from the respective fixed-size outputs comprising one or more invalid pixel values, a final output that is equivalent to an output that would be generated by processing the new input using the fully convolutional neural network.
Type: Application
Filed: October 25, 2021
Publication date: July 10, 2025
Inventors: Tushar Kumar, Soorgoli Ashok Halambi, Jason Jong Kyu Park, Arun Chauhan, Dong Hyuk Woo
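A rough one-dimensional analogue of the tiling idea above, with invented sizes: split a variable-size input into fixed-size windows, run the fixed-size "inference" on each, and stitch the outputs while dropping values that came from padding. The run_fixed_size function below is a trivial stand-in, not the patented model, and the padding/discard rule is an assumption for the example.

```python
FIXED_SIZE = 4

def run_fixed_size(window):
    # Placeholder for a fully convolutional network over a fixed-size input.
    return [x * 2 for x in window]

def run_any_size(inputs):
    outputs = []
    for start in range(0, len(inputs), FIXED_SIZE):
        window = inputs[start:start + FIXED_SIZE]
        pad = FIXED_SIZE - len(window)          # pad the last window up to size
        fixed_out = run_fixed_size(window + [0] * pad)
        valid = fixed_out[:FIXED_SIZE - pad]    # discard outputs from padding
        outputs.extend(valid)
    return outputs

print(run_any_size([1, 2, 3, 4, 5, 6]))   # [2, 4, 6, 8, 10, 12]
```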
-
Publication number: 20250209302
Abstract: A computer-implemented method includes receiving a batch of neural network inputs to be processed using a neural network on a hardware circuit. The neural network has multiple layers arranged in a directed graph and each layer has a respective set of parameters. The method includes determining a partitioning of the neural network layers into a sequence of superlayers. Each superlayer is a partition of the directed graph that includes one or more layers. The method includes processing the batch of inputs using the hardware circuit, which includes, for each superlayer in the sequence: i) loading the respective set of parameters for the layers in the superlayer into memory of the hardware circuit, and ii) for each input in the batch, processing the input through each of the layers in the superlayer using the parameters in the memory of the hardware circuit to generate a superlayer output for the input.
Type: Application
Filed: March 13, 2025
Publication date: June 26, 2025
Inventor: Dong Hyuk Woo
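A minimal sketch of the superlayer scheme described above, under invented names and with a linear chain standing in for the directed graph: layers are grouped into superlayers, each superlayer's parameters are loaded once, and the whole batch is pushed through that superlayer before moving on, so parameter memory only ever holds one superlayer at a time.

```python
# Toy "layers": (name, weight) pairs, where multiplying by the weight stands in
# for running the layer. The partition into two superlayers is arbitrary.
layers = [
    ("layer0", 2.0),
    ("layer1", 0.5),
    ("layer2", 3.0),
    ("layer3", 1.0),
]
superlayers = [layers[0:2], layers[2:4]]          # partition of the layer sequence

batch = [1.0, 2.0, 3.0]
for superlayer in superlayers:
    params = {name: w for name, w in superlayer}  # load parameters once per superlayer
    new_batch = []
    for x in batch:                               # every batch input...
        for name in params:                       # ...through every layer in the superlayer
            x = x * params[name]
        new_batch.append(x)
    batch = new_batch                             # superlayer outputs feed the next superlayer

print(batch)   # [3.0, 6.0, 9.0]
```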
-
Patent number: 12339923
Abstract: A circuit comprises an input register configured to receive an input vector of elements, a control register configured to receive a control vector of elements, wherein each element of the control vector corresponds to a respective element of the input vector, and wherein each element specifies a permutation of a corresponding element of the input vector, and a permute execution circuit configured to generate an output vector of elements corresponding to a permutation of the input vector. Generating each element of the output vector comprises accessing, at the input register, a particular element of the input vector, accessing, at the control register, a particular element of the control vector corresponding to the particular element of the input vector, and outputting the particular element of the input vector as an element at a particular position of the output vector that is selected based on the particular element of the control vector.
Type: Grant
Filed: September 1, 2023
Date of Patent: June 24, 2025
Assignee: Google LLC
Inventors: Dong Hyuk Woo, Gregory Michael Thorson, Andrew Everett Phelps, Olivier Temam, Jonathan Ross, Christopher Aaron Clark
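A few lines of Python showing one plausible reading of the permute behaviour described above: each control-vector element names the output position that receives the corresponding input element (a "scatter" permutation). Whether the real circuit scatters or gathers is not settled by the abstract; this is only an illustration.

```python
def permute(input_vector, control_vector):
    output = [None] * len(input_vector)
    for i, value in enumerate(input_vector):
        output[control_vector[i]] = value   # control element selects the output slot
    return output

print(permute(["a", "b", "c", "d"], [2, 0, 3, 1]))   # ['b', 'd', 'a', 'c']
```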
-
Publication number: 20250139408
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing inference operations of machine learning models, are described in this document. In one aspect, the method includes receiving data representing a first machine learning model that includes inference operations. An estimated duration for the system to perform the inference operations is obtained. A priority time period reserved for performing priority inference operations of a priority machine learning model during each occurrence of a recurring time window is obtained. A remaining time period of each occurrence of the recurring time window that remains after reserving the priority time period is determined. A determination is made that the estimated duration is greater than the remaining time period. In response, the first machine learning model is partitioned into a group of sub-models. The hardware processing unit(s) perform inference operations of a sub-model during the remaining time period.
Type: Application
Filed: September 17, 2021
Publication date: May 1, 2025
Inventor: Dong Hyuk Woo
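The scheduling arithmetic above can be sketched with made-up numbers: a recurring window reserves a slice for the priority model, and if another model's estimated duration does not fit in the remainder it is split into sub-models that do. The even split used here is an assumption for illustration, not the partitioning rule in the filing.

```python
import math

WINDOW_MS = 10.0                            # recurring time window
PRIORITY_MS = 6.0                           # slice reserved for the priority model
remaining_ms = WINDOW_MS - PRIORITY_MS      # time left for other models per window

estimated_ms = 9.0                          # estimated duration of the first model
if estimated_ms > remaining_ms:
    num_submodels = math.ceil(estimated_ms / remaining_ms)
    per_submodel_ms = estimated_ms / num_submodels
    print(f"split into {num_submodels} sub-models of ~{per_submodel_ms:.1f} ms, "
          f"one per {WINDOW_MS:.0f} ms window")
else:
    print("model fits in the remaining time of a single window")
```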
-
Patent number: 12254394
Abstract: A computer-implemented method includes receiving a batch of neural network inputs to be processed using a neural network on a hardware circuit. The neural network has multiple layers arranged in a directed graph and each layer has a respective set of parameters. The method includes determining a partitioning of the neural network layers into a sequence of superlayers. Each superlayer is a partition of the directed graph that includes one or more layers. The method includes processing the batch of inputs using the hardware circuit, which includes, for each superlayer in the sequence: i) loading the respective set of parameters for the layers in the superlayer into memory of the hardware circuit, and ii) for each input in the batch, processing the input through each of the layers in the superlayer using the parameters in the memory of the hardware circuit to generate a superlayer output for the input.
Type: Grant
Filed: October 25, 2021
Date of Patent: March 18, 2025
Assignee: Google LLC
Inventor: Dong Hyuk Woo
-
Publication number: 20250068897
Abstract: Apparatus and methods for processing neural network models are provided. The apparatus can comprise a plurality of identical artificial intelligence processing dies. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies can include at least one inter-die input block and at least one inter-die output block. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies is communicatively coupled to another artificial intelligence processing die among the plurality of identical artificial intelligence processing dies by way of one or more communication paths from the at least one inter-die output block of the artificial intelligence processing die to the at least one inter-die input block of the artificial intelligence processing die.
Type: Application
Filed: August 30, 2024
Publication date: February 27, 2025
Inventors: Uday Kumar Dasari, Olivier Temam, Ravi Narayanaswami, Dong Hyuk Woo
-
Publication number: 20250045559
Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include, performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
Type: Application
Filed: July 9, 2024
Publication date: February 6, 2025
Inventors: Ravi Narayanaswami, Dong Hyuk Woo, Olivier Temam, Harshit Khaitan
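As a sketch of the instruction-defined loop nest described above, the example below reduces an "instruction" to a dict of data values that fix the loop bounds for one layer type. Mapping a fully-connected layer to a two-deep nest is an assumed choice made purely for illustration; the filing covers the general mechanism, not this specific mapping.

```python
def run_instruction(instr, activations, weights):
    if instr["layer_type"] == "fully_connected":
        out = [0.0] * instr["out_dim"]
        # Loop-nest structure derived from the instruction's data values.
        for o in range(instr["out_dim"]):
            for i in range(instr["in_dim"]):
                out[o] += activations[i] * weights[o][i]
        return out
    raise ValueError("layer type not handled in this sketch")

instr = {"layer_type": "fully_connected", "in_dim": 3, "out_dim": 2}
acts = [1.0, 2.0, 3.0]
wts = [[1.0, 0.0, 1.0], [0.5, 0.5, 0.5]]
print(run_instruction(instr, acts, wts))   # [4.0, 3.0]
```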
-
Publication number: 20240403621
Abstract: A hardware accelerator can store, in multiple memory storage areas in one or more memories on the accelerator, input data for each processing time step of multiple processing time steps for processing sequential inputs to a machine learning model. For each processing time step, the following is performed. The accelerator can access a current value of a counter stored in a register within the accelerator to identify the processing time step. The accelerator can determine, based on the current value of the counter, one or more memory storage areas that store the input data for the processing time step. The accelerator can facilitate access of the input data for the processing time step from the one or more memory storage areas to at least one processor coupled to the one or more memory storage areas. The accelerator can increment the current value of the counter stored in the register.
Type: Application
Filed: August 13, 2024
Publication date: December 5, 2024
Inventors: Jack Liu, Dong Hyuk Woo
-
Patent number: 12086706
Abstract: A hardware accelerator can store, in multiple memory storage areas in one or more memories on the accelerator, input data for each processing time step of multiple processing time steps for processing sequential inputs to a machine learning model. For each processing time step, the following is performed. The accelerator can access a current value of a counter stored in a register within the accelerator to identify the processing time step. The accelerator can determine, based on the current value of the counter, one or more memory storage areas that store the input data for the processing time step. The accelerator can facilitate access of the input data for the processing time step from the one or more memory storage areas to at least one processor coupled to the one or more memory storage areas. The accelerator can increment the current value of the counter stored in the register.
Type: Grant
Filed: December 19, 2019
Date of Patent: September 10, 2024
Assignee: Google LLC
Inventors: Jack Liu, Dong Hyuk Woo
-
Patent number: 12079711
Abstract: Apparatus and methods for processing neural network models are provided. The apparatus can comprise a plurality of identical artificial intelligence processing dies. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies can include at least one inter-die input block and at least one inter-die output block. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies is communicatively coupled to another artificial intelligence processing die among the plurality of identical artificial intelligence processing dies by way of one or more communication paths from the at least one inter-die output block of the artificial intelligence processing die to the at least one inter-die input block of the artificial intelligence processing die.
Type: Grant
Filed: February 26, 2021
Date of Patent: September 3, 2024
Assignee: Google LLC
Inventors: Uday Kumar Dasari, Olivier Temam, Ravi Narayanaswami, Dong Hyuk Woo
-
Publication number: 20240289285
Abstract: A computer-implemented method includes receiving, by a computing device, input activations and determining, by a controller of the computing device, whether each of the input activations has either a zero value or a non-zero value. The method further includes storing, in a memory bank of the computing device, at least one of the input activations. Storing the at least one input activation includes generating an index comprising one or more memory address locations that have input activation values that are non-zero values. The method still further includes providing, by the controller and from the memory bank, at least one input activation onto a data bus that is accessible by one or more units of a computational array. The activations are provided, at least in part, from a memory address location associated with the index.
Type: Application
Filed: November 9, 2023
Publication date: August 29, 2024
Inventors: Dong Hyuk Woo, Ravi Narayanaswami
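The zero-skipping idea above lends itself to a very small illustration, not the controller logic from the filing: build an index of memory addresses whose activation values are non-zero, then provide only those addresses' values to the compute units, skipping the zeros entirely.

```python
activations = [0.0, 1.5, 0.0, 0.0, 2.5, 0.5]   # contents of the memory bank

# Index of address locations holding non-zero activation values.
nonzero_index = [addr for addr, value in enumerate(activations) if value != 0.0]

# Provide only the indexed activations onto the "data bus" for the compute array.
for addr in nonzero_index:
    data_bus = activations[addr]
    print(f"address {addr}: activation {data_bus}")
```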
-
Patent number: 12061968
Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include, performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
Type: Grant
Filed: June 21, 2022
Date of Patent: August 13, 2024
Assignee: Google LLC
Inventors: Ravi Narayanaswami, Dong Hyuk Woo, Olivier Temam, Harshit Khaitan
-
Publication number: 20240256239
Abstract: This disclosure describes a system and method for compiling and executing machine learning inferences in an array of multi-core computing devices. Each multi-core computing device can be an application specific integrated circuit (ASIC) or group of ASICS. In many applications, the array of computing devices changes from inference to inference, and can be adjusted based on the requirements of the inference. Additionally, each ASIC can have multiple processing cores, and multiple types of processing cores. Therefore, performing optimizations and scheduling at compile time, can dramatically increase the efficiency of the array in executing the inference. In some implementations, it is possible to select an amount of time or effort to be spent optimizing during compiling, giving the user flexibility in determining whether to spend time during compilation or during execution.
Type: Application
Filed: June 8, 2021
Publication date: August 1, 2024
Inventors: John Navil Joseph, Jack Liu, Dong Hyuk Woo
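A very rough sketch of the compile-time scheduling idea above, with invented structures: a list of inference operations is assigned round-robin to whatever cores the current device array exposes, and an optimization-effort knob bounds how long assignment may run. Neither the schedule shape nor the effort knob mirrors the actual system; both are assumptions for illustration.

```python
import time

def compile_schedule(ops, cores, max_effort_seconds=0.01):
    """Assign ops to cores, stopping the optimization pass at a time budget."""
    deadline = time.monotonic() + max_effort_seconds
    schedule = {core: [] for core in cores}
    for n, op in enumerate(ops):
        if time.monotonic() > deadline:       # effort budget spent: accept what we have
            schedule[cores[0]].extend(ops[n:])
            break
        schedule[cores[n % len(cores)]].append(op)   # simple round-robin placement
    return schedule

ops = [f"op{i}" for i in range(6)]
cores = ["asic0_core0", "asic0_core1", "asic1_core0"]   # array chosen per inference
print(compile_schedule(ops, cores))
```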
-
Publication number: 20240231819
Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
Type: Application
Filed: November 9, 2023
Publication date: July 11, 2024
Inventors: Olivier Temam, Ravi Narayanaswami, Harshit Khaitan, Dong Hyuk Woo
-
Publication number: 20240211534
Abstract: A circuit comprises an input register configured to receive an input vector of elements, a control register configured to receive a control vector of elements, wherein each element of the control vector corresponds to a respective element of the input vector, and wherein each element specifies a permutation of a corresponding element of the input vector, and a permute execution circuit configured to generate an output vector of elements corresponding to a permutation of the input vector. Generating each element of the output vector comprises accessing, at the input register, a particular element of the input vector, accessing, at the control register, a particular element of the control vector corresponding to the particular element of the input vector, and outputting the particular element of the input vector as an element at a particular position of the output vector that is selected based on the particular element of the control vector.
Type: Application
Filed: September 1, 2023
Publication date: June 27, 2024
Inventors: Dong Hyuk Woo, Gregory Michael Thorson, Andrew Everett Phelps, Olivier Temam, Jonathan Ross, Christopher Aaron Clark
-
Publication number: 20240078417
Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs computations associated with at least one element of a data array, the one or more computations performed by the MAC operator.
Type: Application
Filed: June 30, 2023
Publication date: March 7, 2024
Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo