Patents by Inventor Dong Hyuk Woo

Dong Hyuk Woo has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250238668
    Abstract: Apparatus and methods for processing neural network models are provided. The apparatus can comprise a plurality of identical artificial intelligence processing dies. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies can include at least one inter-die input block and at least one inter-die output block. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies is communicatively coupled to another artificial intelligence processing die among the plurality of identical artificial intelligence processing dies by way of one or more communication paths from the at least one inter-die output block of the artificial intelligence processing die to the at least one inter-die input block of the other artificial intelligence processing die.
    Type: Application
    Filed: January 17, 2025
    Publication date: July 24, 2025
    Inventors: Uday Kumar Dasari, Olivier Temam, Ravi Narayanaswami, Dong Hyuk Woo
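    Illustrative sketch: a minimal Python model of the inter-die wiring this abstract describes, assuming illustrative names (Die, connect_ring) and a ring topology chosen purely for demonstration; it is not the patented design.
      # Identical dies, each with inter-die input/output blocks, wired output-to-input.
      class Die:
          def __init__(self, die_id):
              self.die_id = die_id
              self.inter_die_inputs = []   # messages arriving from another die's output block
              self.downstream = None       # die whose input block this die's output block feeds

          def send(self, payload):
              # Inter-die output block: push data along a communication path to the peer die.
              self.downstream.inter_die_inputs.append((self.die_id, payload))

      def connect_ring(dies):
          # Each die's output block feeds the next die's input block.
          for i, die in enumerate(dies):
              die.downstream = dies[(i + 1) % len(dies)]

      dies = [Die(i) for i in range(4)]
      connect_ring(dies)
      dies[0].send("partial activations")
      print(dies[1].inter_die_inputs)      # [(0, 'partial activations')]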
  • Publication number: 20250232164
    Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator.
    Type: Application
    Filed: January 15, 2025
    Publication date: July 17, 2025
    Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
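    Illustrative sketch: a toy Python rendering of the data flow in this abstract, assuming hypothetical names (activation_bank, parameter_bank, traversal_unit); the real computing unit is hardware, not Python lists.
      activation_bank = [1.0, 2.0, 3.0]      # first memory bank: input activations
      parameter_bank = [0.5, -1.0, 2.0]      # second memory bank: parameters kept on the computing unit

      def traversal_unit(step):
          # Control signal: select which activation is placed on the data bus.
          return activation_bank[step]

      accumulator = 0.0
      for step in range(len(activation_bank)):
          activation = traversal_unit(step)        # activation delivered via the data bus
          parameter = parameter_bank[step]         # parameter read from the second memory bank
          accumulator += activation * parameter    # multiply-accumulate (MAC) operation
      print(accumulator)                           # 4.5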
  • Publication number: 20250232165
    Abstract: A hardware accelerator can store, in multiple memory storage areas in one or more memories on the accelerator, input data for each processing time step of multiple processing time steps for processing sequential inputs to a machine learning model. For each processing time step, the following is performed. The accelerator can access a current value of a counter stored in a register within the accelerator to identify the processing time step. The accelerator can determine, based on the current value of the counter, one or more memory storage areas that store the input data for the processing time step. The accelerator can facilitate access of the input data for the processing time step from the one or more memory storage areas to at least one processor coupled to the one or more memory storage areas. The accelerator can increment the current value of the counter stored in the register.
    Type: Application
    Filed: January 16, 2025
    Publication date: July 17, 2025
    Inventors: Jack Liu, Dong Hyuk Woo
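    Illustrative sketch: a minimal Python analogue of the counter-driven access pattern in this abstract, with assumed names (memory_areas, counter_register); a register-held counter selects the storage area for the current time step and is then incremented.
      memory_areas = {0: "step-0 inputs", 1: "step-1 inputs", 2: "step-2 inputs"}
      counter_register = 0                   # counter stored in a register on the accelerator

      def process_time_step():
          global counter_register
          step = counter_register                          # read the current counter value
          data = memory_areas[step % len(memory_areas)]    # pick the area holding this step's inputs
          print(f"processor consumes: {data}")             # data handed to the coupled processor
          counter_register += 1                            # increment the counter for the next step

      for _ in range(3):
          process_time_step()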
  • Publication number: 20250231765
    Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation on the input activation received from the data bus and a parameter received from the second memory bank.
    Type: Application
    Filed: January 16, 2025
    Publication date: July 17, 2025
    Inventors: Olivier Temam, Ravi Narayanaswami, Harshit Khaitan, Dong Hyuk Woo
  • Publication number: 20250225781
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing inference computations of a fully convolutional neural network receiving inputs with different sizes. One of the methods includes receiving a new input to be processed by a fully convolutional neural network, the new input having a first size different from a fixed size that the fully convolutional neural network is configured to process; determining one or more fixed-size inputs from the new input, each fixed-size input having the fixed size; obtaining a respective fixed-size output generated by the fully convolutional neural network performing inference computations for each of the one or more fixed-size inputs; and generating, from the respective fixed-size outputs comprising one or more invalid pixel values, a final output that is equivalent to an output that would be generated by processing the new input using the fully convolutional neural network.
    Type: Application
    Filed: October 25, 2021
    Publication date: July 10, 2025
    Inventors: Tushar Kumar, Soorgoli Ashok Halambi, Jason Jong Kyu Park, Arun Chauhan, Dong Hyuk Woo
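    Illustrative sketch: a 1-D Python toy of the tiling idea in this abstract. A stand-in identity function plays the role of the fixed-size network, and the crop, padding, and stitching rules are assumptions for demonstration only.
      FIXED_SIZE = 4

      def fixed_size_model(x):
          return x                           # stand-in for the fully convolutional network

      def run_on_larger_input(new_input):
          outputs = [None] * len(new_input)
          for start in range(0, len(new_input), FIXED_SIZE):
              crop = new_input[start:start + FIXED_SIZE]
              crop = crop + [0] * (FIXED_SIZE - len(crop))    # pad the last crop to the fixed size
              fixed_output = fixed_size_model(crop)
              for i, value in enumerate(fixed_output):
                  if start + i < len(new_input):              # discard padded (invalid) positions
                      outputs[start + i] = value
          return outputs

      print(run_on_larger_input(list(range(10))))   # matches processing the full input directly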
  • Publication number: 20250209302
    Abstract: A computer-implemented method includes receiving a batch of neural network inputs to be processed using a neural network on a hardware circuit. The neural network has multiple layers arranged in a directed graph and each layer has a respective set of parameters. The method includes determining a partitioning of the neural network layers into a sequence of superlayers. Each superlayer is a partition of the directed graph that includes one or more layers. The method includes processing the batch of inputs using the hardware circuit, which includes, for each superlayer in the sequence: i) loading the respective set of parameters for the layers in the superlayer into memory of the hardware circuit, and ii) for each input in the batch, processing the input through each of the layers in the superlayer using the parameters in the memory of the hardware circuit to generate a superlayer output for the input.
    Type: Application
    Filed: March 13, 2025
    Publication date: June 26, 2025
    Inventor: Dong Hyuk Woo
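    Illustrative sketch: a small Python analogue of the superlayer schedule in this abstract, with assumed names and string placeholders for layers and parameters; each superlayer's parameters are loaded once, and the whole batch passes through that superlayer before the next one is loaded.
      layers = ["conv1", "conv2", "conv3", "fc"]
      superlayers = [["conv1", "conv2"], ["conv3", "fc"]]   # partition of the directed graph
      params = {name: f"{name}-weights" for name in layers}

      def apply_layer(name, x):
          return f"{name}({x})"              # stand-in for the real layer computation

      batch = ["input_a", "input_b"]
      for superlayer in superlayers:
          on_chip = {name: params[name] for name in superlayer}   # load this superlayer's parameters once
          new_batch = []
          for x in batch:                    # each input passes through every layer in the superlayer
              for name in superlayer:        # while its parameters remain in on-chip memory
                  x = apply_layer(name, x)
              new_batch.append(x)
          batch = new_batch
      print(batch)   # ['fc(conv3(conv2(conv1(input_a))))', 'fc(conv3(conv2(conv1(input_b))))']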
  • Patent number: 12339923
    Abstract: A circuit comprises an input register configured to receive an input vector of elements, a control register configured to receive a control vector of elements, wherein each element of the control vector corresponds to a respective element of the input vector, and wherein each element specifies a permutation of a corresponding element of the input vector, and a permute execution circuit configured to generate an output vector of elements corresponding to a permutation of the input vector. Generating each element of the output vector comprises accessing, at the input register, a particular element of the input vector, accessing, at the control register, a particular element of the control vector corresponding to the particular element of the input vector, and outputting the particular element of the input vector as an element at a particular position of the output vector that is selected based on the particular element of the control vector.
    Type: Grant
    Filed: September 1, 2023
    Date of Patent: June 24, 2025
    Assignee: Google LLC
    Inventors: Dong Hyuk Woo, Gregory Michael Thorson, Andrew Everett Phelps, Olivier Temam, Jonathan Ross, Christopher Aaron Clark
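    Illustrative sketch: a Python rendering of the permute behavior in this abstract, with assumed names; each input element is written to the output position selected by its corresponding control element. The real circuit operates on registers, not Python lists.
      def permute(input_vector, control_vector):
          output_vector = [None] * len(input_vector)
          for i, value in enumerate(input_vector):
              destination = control_vector[i]       # control element selects the output position
              output_vector[destination] = value    # input element lands at that position
          return output_vector

      print(permute(["a", "b", "c", "d"], [2, 0, 3, 1]))   # ['b', 'd', 'a', 'c']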
  • Publication number: 20250139408
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing inference operations of machine learning models, are described in this document. In one aspect, the method includes receiving data representing a first machine learning model that includes inference operations. An estimated duration for the system to perform the inference operations is obtained. A priority time period reserved for performing priority inference operations of a priority machine learning model during each occurrence of a recurring time window is obtained. A remaining time period of each occurrence of the recurring time window that remains after reserving the priority time period is determined. A determination is made that the estimated duration is greater than the remaining time period. In response, the first machine learning model is partitioned into a group of sub-models. The hardware processing unit(s) perform inference operations of a sub-model during the remaining time period.
    Type: Application
    Filed: September 17, 2021
    Publication date: May 1, 2025
    Inventor: Dong Hyuk Woo
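    Illustrative sketch: a simplified Python version of the scheduling decision in this abstract. The window length, reservation, per-operation cost, and greedy grouping rule are all assumptions for demonstration; a model whose estimated duration exceeds the remaining time per window is split into sub-models that each fit.
      window_ms = 10.0
      priority_reserved_ms = 6.0
      remaining_ms = window_ms - priority_reserved_ms     # time left after the priority reservation

      def partition(operations, estimated_ms_per_op, budget_ms):
          # Greedily group operations into sub-models whose estimated duration fits the budget.
          sub_models, current, current_ms = [], [], 0.0
          for op in operations:
              if current and current_ms + estimated_ms_per_op > budget_ms:
                  sub_models.append(current)
                  current, current_ms = [], 0.0
              current.append(op)
              current_ms += estimated_ms_per_op
          if current:
              sub_models.append(current)
          return sub_models

      ops = [f"op{i}" for i in range(8)]
      estimated_ms = len(ops) * 1.5                       # 12 ms > 4 ms remaining, so partition
      if estimated_ms > remaining_ms:
          print(partition(ops, 1.5, remaining_ms))        # each sub-model runs within one window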
  • Patent number: 12254394
    Abstract: A computer-implemented method includes receiving a batch of neural network inputs to be processed using a neural network on a hardware circuit. The neural network has multiple layers arranged in a directed graph and each layer has a respective set of parameters. The method includes determining a partitioning of the neural network layers into a sequence of superlayers. Each superlayer is a partition of the directed graph that includes one or more layers. The method includes processing the batch of inputs using the hardware circuit, which includes, for each superlayer in the sequence: i) loading the respective set of parameters for the layers in the superlayer into memory of the hardware circuit, and ii) for each input in the batch, processing the input through each of the layers in the superlayer using the parameters in the memory of the hardware circuit to generate a superlayer output for the input.
    Type: Grant
    Filed: October 25, 2021
    Date of Patent: March 18, 2025
    Assignee: Google LLC
    Inventor: Dong Hyuk Woo
  • Publication number: 20250068897
    Abstract: Apparatus and methods for processing neural network models are provided. The apparatus can comprise a plurality of identical artificial intelligence processing dies. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies can include at least one inter-die input block and at least one inter-die output block. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies is communicatively coupled to another artificial intelligence processing die among the plurality of identical artificial intelligence processing dies by way of one or more communication paths from the at least one inter-die output block of the artificial intelligence processing die to the at least one inter-die input block of the other artificial intelligence processing die.
    Type: Application
    Filed: August 30, 2024
    Publication date: February 27, 2025
    Inventors: Uday Kumar Dasari, Olivier Temam, Ravi Narayanaswami, Dong Hyuk Woo
  • Publication number: 20250045559
    Abstract: A computer-implemented method includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
    Type: Application
    Filed: July 9, 2024
    Publication date: February 6, 2025
    Inventors: Ravi Narayanaswami, Dong Hyuk Woo, Olivier Temam, Harshit Khaitan
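    Illustrative sketch: a toy Python interpreter for the idea in this abstract, with assumed instruction fields (layer_type, dimensions); the data values carried by one instruction determine the depth and bounds of the loop nest that performs the tensor computation.
      def execute(instruction):
          layer_type = instruction["layer_type"]
          if layer_type == "fully_connected":
              # Two-deep loop nest: output neurons over input neurons.
              for out in range(instruction["outputs"]):
                  for inp in range(instruction["inputs"]):
                      pass                   # MAC of weight[out][inp] * activation[inp]
              return ("loop depth", 2)
          if layer_type == "conv2d":
              # Four-deep loop nest: output rows/cols over kernel rows/cols.
              for row in range(instruction["rows"]):
                  for col in range(instruction["cols"]):
                      for kr in range(instruction["kernel"]):
                          for kc in range(instruction["kernel"]):
                              pass           # MAC of kernel[kr][kc] * input patch element
              return ("loop depth", 4)

      print(execute({"layer_type": "fully_connected", "outputs": 4, "inputs": 8}))
      print(execute({"layer_type": "conv2d", "rows": 2, "cols": 2, "kernel": 3}))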
  • Publication number: 20240403621
    Abstract: A hardware accelerator can store, in multiple memory storage areas in one or more memories on the accelerator, input data for each processing time step of multiple processing time steps for processing sequential inputs to a machine learning model. For each processing time step, the following is performed. The accelerator can access a current value of a counter stored in a register within the accelerator to identify the processing time step. The accelerator can determine, based on the current value of the counter, one or more memory storage areas that store the input data for the processing time step. The accelerator can facilitate access of the input data for the processing time step from the one or more memory storage areas to at least one processor coupled to the one or more memory storage areas. The accelerator can increment the current value of the counter stored in the register.
    Type: Application
    Filed: August 13, 2024
    Publication date: December 5, 2024
    Inventors: Jack Liu, Dong Hyuk Woo
  • Patent number: 12086706
    Abstract: A hardware accelerator can store, in multiple memory storage areas in one or more memories on the accelerator, input data for each processing time step of multiple processing time steps for processing sequential inputs to a machine learning model. For each processing time step, the following is performed. The accelerator can access a current value of a counter stored in a register within the accelerator to identify the processing time step. The accelerator can determine, based on the current value of the counter, one or more memory storage areas that store the input data for the processing time step. The accelerator can facilitate access of the input data for the processing time step from the one or more memory storage areas to at least one processor coupled to the one or more memory storage areas. The accelerator can increment the current value of the counter stored in the register.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: September 10, 2024
    Assignee: Google LLC
    Inventors: Jack Liu, Dong Hyuk Woo
  • Patent number: 12079711
    Abstract: Apparatus and methods for processing neural network models are provided. The apparatus can comprise a plurality of identical artificial intelligence processing dies. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies can include at least one inter-die input block and at least one inter-die output block. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies is communicatively coupled to another artificial intelligence processing die among the plurality of identical artificial intelligence processing dies by way of one or more communication paths from the at least one inter-die output block of the artificial intelligence processing die to the at least one inter-die input block of the other artificial intelligence processing die.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: September 3, 2024
    Assignee: Google LLC
    Inventors: Uday Kumar Dasari, Olivier Temam, Ravi Narayanaswami, Dong Hyuk Woo
  • Publication number: 20240289285
    Abstract: A computer-implemented method includes receiving, by a computing device, input activations and determining, by a controller of the computing device, whether each of the input activations has either a zero value or a non-zero value. The method further includes storing, in a memory bank of the computing device, at least one of the input activations. Storing the at least one input activation includes generating an index comprising one or more memory address locations that have input activation values that are non-zero values. The method still further includes providing, by the controller and from the memory bank, at least one input activation onto a data bus that is accessible by one or more units of a computational array. The activations are provided, at least in part, from a memory address location associated with the index.
    Type: Application
    Filed: November 9, 2023
    Publication date: August 29, 2024
    Inventors: Dong Hyuk Woo, Ravi Narayanaswami
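    Illustrative sketch: a small Python analogue of the zero-skipping scheme in this abstract, with assumed names; an index records the memory addresses holding non-zero activation values, and only those addresses are driven onto the data bus for the computational array.
      input_activations = [0.0, 3.0, 0.0, 0.0, 1.5, 2.0]

      # Build the index of memory address locations with non-zero activation values.
      nonzero_index = [addr for addr, value in enumerate(input_activations) if value != 0.0]

      def provide_to_data_bus(memory_bank, index):
          for addr in index:                 # controller reads only the indexed addresses
              yield addr, memory_bank[addr]  # placed on the bus for the computational array

      for addr, activation in provide_to_data_bus(input_activations, nonzero_index):
          print(f"address {addr}: activation {activation}")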
  • Patent number: 12061968
    Abstract: A computer-implemented method includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
    Type: Grant
    Filed: June 21, 2022
    Date of Patent: August 13, 2024
    Assignee: Google LLC
    Inventors: Ravi Narayanaswami, Dong Hyuk Woo, Olivier Temam, Harshit Khaitan
  • Publication number: 20240256239
    Abstract: This disclosure describes a system and method for compiling and executing machine learning inferences in an array of multi-core computing devices. Each multi-core computing device can be an application specific integrated circuit (ASIC) or group of ASICs. In many applications, the array of computing devices changes from inference to inference, and can be adjusted based on the requirements of the inference. Additionally, each ASIC can have multiple processing cores, and multiple types of processing cores. Therefore, performing optimizations and scheduling at compile time can dramatically increase the efficiency of the array in executing the inference. In some implementations, it is possible to select an amount of time or effort to be spent optimizing during compiling, giving the user flexibility in determining whether to spend time during compilation or during execution.
    Type: Application
    Filed: June 8, 2021
    Publication date: August 1, 2024
    Inventors: John Navil Joseph, Jack Liu, Dong Hyuk Woo, Jing Pu
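    Illustrative sketch: a toy Python scheduler suggesting the compile-time trade-off in this abstract. The cost model, exhaustive search, and effort budget are assumptions for demonstration; a user-chosen budget bounds how long the compiler searches for a better assignment of operations to cores.
      import itertools
      import time

      def schedule(operations, num_cores, effort_seconds):
          best, best_cost = None, float("inf")
          deadline = time.monotonic() + effort_seconds    # spend more effort for a better schedule
          for assignment in itertools.product(range(num_cores), repeat=len(operations)):
              if time.monotonic() > deadline:
                  break                                   # optimization budget exhausted
              loads = [0.0] * num_cores
              for op_cost, core in zip(operations, assignment):
                  loads[core] += op_cost
              cost = max(loads)                           # makespan across the core array
              if cost < best_cost:
                  best, best_cost = assignment, cost
          return best, best_cost

      print(schedule([3.0, 1.0, 2.0, 2.0], num_cores=2, effort_seconds=0.05))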
  • Publication number: 20240231819
    Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
    Type: Application
    Filed: November 9, 2023
    Publication date: July 11, 2024
    Inventors: Olivier Temam, Ravi Narayanaswami, Harshit Khaitan, Dong Hyuk Woo
  • Publication number: 20240211534
    Abstract: A circuit comprises an input register configured to receive an input vector of elements, a control register configured to receive a control vector of elements, wherein each element of the control vector corresponds to a respective element of the input vector, and wherein each element specifies a permutation of a corresponding element of the input vector, and a permute execution circuit configured to generate an output vector of elements corresponding to a permutation of the input vector. Generating each element of the output vector comprises accessing, at the input register, a particular element of the input vector, accessing, at the control register, a particular element of the control vector corresponding to the particular element of the input vector, and outputting the particular element of the input vector as an element at a particular position of the output vector that is selected based on the particular element of the control vector.
    Type: Application
    Filed: September 1, 2023
    Publication date: June 27, 2024
    Inventors: Dong Hyuk Woo, Gregory Michael Thorson, Andrew Everett Phelps, Olivier Temam, Jonathan Ross, Christopher Aaron Clark
  • Publication number: 20240078417
    Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator.
    Type: Application
    Filed: June 30, 2023
    Publication date: March 7, 2024
    Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo