Patents by Inventor Hyuk Woo

Hyuk Woo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250261429
    Abstract: A power semiconductor device including a first gate electrode layer recessed into a semiconductor substrate, the first gate electrode layer being configured to extend in a first direction for a first length and a second gate electrode layer configured to extend in the first direction on a surface of the semiconductor substrate, the second gate electrode layer having a second length shorter than the first length, and configured to contact a first side surface of the first gate electrode layer.
    Type: Application
    Filed: January 31, 2025
    Publication date: August 14, 2025
    Applicant: HYUNDAI MOBIS CO., LTD.
    Inventors: Ju Hwan LEE, Tae Yang KIM, Sin A KIM, Jeong Mok HA, Min Gi KANG, Hyuk WOO, Jun Ha HWANG
  • Publication number: 20250238668
    Abstract: Apparatus and methods for processing neural network models are provided. The apparatus can comprise a plurality of identical artificial intelligence processing dies. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies can include at least one inter-die input block and at least one inter-die output block. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies is communicatively coupled to another artificial intelligence processing die among the plurality of identical artificial intelligence processing dies by way of one or more communication paths from the at least one inter-die output block of the artificial intelligence processing die to the at least one inter-die input block of the artificial intelligence processing die.
    Type: Application
    Filed: January 17, 2025
    Publication date: July 24, 2025
    Inventors: Uday Kumar Dasari, Olivier Temam, Ravi Narayanaswami, Dong Hyuk Woo
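The abstract above describes identical dies whose inter-die output blocks feed other dies' inter-die input blocks. The toy model below assumes a simple ring topology for illustration; the abstract only requires some communication path between dies, and all names here are hypothetical.

```python
def ring_route(num_dies: int, source_die: int) -> int:
    """Return the die whose inter-die input block receives source_die's output,
    assuming the dies are wired in a ring."""
    return (source_die + 1) % num_dies

def forward_through_ring(value, num_dies, die_fns):
    """Pass a value die-to-die around the ring, applying each die's work.
    die_fns[i] stands in for the computation performed on die i."""
    for die in range(num_dies):
        value = die_fns[die](value)  # compute on this die, then the result
        # leaves via its inter-die output block toward ring_route(num_dies, die)
    return value
```

With three dies each adding 1, `forward_through_ring(1, 3, [lambda x: x + 1] * 3)` carries the value around the whole ring.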
  • Publication number: 20250232164
    Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs computations associated with at least one element of a data array, the one or more computations performed by the MAC operator.
    Type: Application
    Filed: January 15, 2025
    Publication date: July 17, 2025
    Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
  • Publication number: 20250232165
    Abstract: A hardware accelerator can store, in multiple memory storage areas in one or more memories on the accelerator, input data for each processing time step of multiple processing time steps for processing sequential inputs to a machine learning model. For each processing time step, the following is performed. The accelerator can access a current value of a counter stored in a register within the accelerator to identify the processing time step. The accelerator can determine, based on the current value of the counter, one or more memory storage areas that store the input data for the processing time step. The accelerator can facilitate access of the input data for the processing time step from the one or more memory storage areas to at least one processor coupled to the one or more memory storage areas. The accelerator can increment the current value of the counter stored in the register.
    Type: Application
    Filed: January 16, 2025
    Publication date: July 17, 2025
    Inventors: Jack Liu, Dong Hyuk Woo
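The counter mechanism in the abstract above (publication 20250232165) can be sketched in software: a register-held counter identifies the time step, selects the memory storage area holding that step's input, and is incremented afterward. The names and the modulo mapping below are illustrative assumptions, not taken from the patent.

```python
NUM_AREAS = 4  # hypothetical number of on-accelerator memory storage areas

def select_area(counter: int) -> int:
    """Map the current time-step counter to the storage area holding its input."""
    return counter % NUM_AREAS

def run_time_steps(memory_areas, num_steps, process):
    """Model the per-step loop: read counter, pick area, process, increment."""
    counter = 0  # stands in for the counter stored in an accelerator register
    outputs = []
    for _ in range(num_steps):
        area = select_area(counter)                   # identify this step's area
        outputs.append(process(memory_areas[area]))   # hand the input to the processor
        counter += 1                                  # increment the register counter
    return outputs
```

For example, with four areas, time step 5 wraps around and reads area 1.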
  • Publication number: 20250231765
    Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
    Type: Application
    Filed: January 16, 2025
    Publication date: July 17, 2025
    Inventors: Olivier Temam, Ravi Narayanaswami, Harshit Khaitan, Dong Hyuk Woo
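As a rough software analogue of the computing unit described above (publication 20250231765): activations sit in one bank, parameters in another, and a MAC operator multiplies an activation delivered over the data bus by a parameter and accumulates the result. The function below is a minimal sketch under those assumptions; the traversal unit's control signaling is reduced to plain iteration.

```python
def mac_compute(activation_bank, parameter_bank):
    """Accumulate activation[i] * parameter[i], mimicking one MAC cell.

    Each loop step models the traversal unit placing one input activation
    on the data bus, where the MAC operator pairs it with a parameter
    fetched from the second memory bank.
    """
    acc = 0
    for activation, parameter in zip(activation_bank, parameter_bank):
        acc += activation * parameter  # the multiply-accumulate step
    return acc
```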
  • Publication number: 20250225781
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing inference computations of a fully convolutional neural network receiving inputs with different sizes. One of the methods include receiving a new input to be processed by a fully convolutional neural network, the new input having a first size different from a fixed size that the fully convolutional neural network is configured to process; determining, one or more fixed-size inputs from the new input, each fixed-size input having the fixed size; obtaining a respective fixed-size output generated by the fully convolutional neural network performing inference computations for each of the one or more fixed-size inputs; and generating, from the respective fixed-size outputs comprising one or more invalid pixel values, a final output that is equivalent to an output that would be generated by processing the new input using the fully convolutional neural network.
    Type: Application
    Filed: October 25, 2021
    Publication date: July 10, 2025
    Inventors: Tushar Kumar, Soorgoli Ashok Halambi, Jason Jong Kyu Park, Arun Chauhan, Dong Hyuk Woo
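The tiling idea in the abstract above (publication 20250225781) can be illustrated in one dimension: a variable-size input is cut into fixed-size pieces, each piece is run through a model with a fixed input size, and outputs at padded (invalid) positions are discarded when assembling the final output. `FIXED_SIZE` and the element-wise toy model are illustrative assumptions; a real convolutional model would also need overlap handling at tile borders.

```python
FIXED_SIZE = 4  # hypothetical fixed input size the model is configured for

def toy_model(chunk):
    """Stand-in for a fully convolutional model with a fixed input size."""
    assert len(chunk) == FIXED_SIZE
    return [2 * x for x in chunk]  # element-wise, so tiling is exact here

def run_variable_input(inputs):
    """Process an input of arbitrary length via fixed-size sub-inputs."""
    n = len(inputs)
    padded = inputs + [0] * (-n % FIXED_SIZE)  # pad up to a multiple of FIXED_SIZE
    outputs = []
    for i in range(0, len(padded), FIXED_SIZE):
        outputs.extend(toy_model(padded[i:i + FIXED_SIZE]))
    return outputs[:n]  # drop outputs for padded (invalid) positions
```

A length-5 input becomes two fixed-size tiles, and the three outputs produced from padding are trimmed away.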
  • Patent number: 12349556
    Abstract: A light emitting display device includes a substrate, an organic layer, a conductor, an anode, and a pixel definition layer. The organic layer overlaps the substrate and has a connection opening. The conductor is positioned between the substrate and the organic layer. The anode is positioned on the organic layer and is partially positioned inside the connection opening. The pixel definition layer exposes an exposed portion of the anode. The organic layer has a halftone exposure portion and a neighboring portion. The halftone exposure portion overlaps the exposed portion of the anode and overlaps the conductor. The neighboring portion neighbors the halftone exposure portion. A face of the halftone exposure portion and a face of the neighboring portion are spaced from the substrate by a first distance and a second distance, respectively. A difference between the first distance and the second distance is 30 nm or less.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: July 1, 2025
    Assignee: Samsung Display Co., Ltd.
    Inventors: Jun Hee Lee, Beohm Rock Choi, Jun Hyuk Woo, Choong Youl Im, Seong Min Cho
  • Publication number: 20250209302
    Abstract: A computer-implemented method includes receiving a batch of neural network inputs to be processed using a neural network on a hardware circuit. The neural network has multiple layers arranged in a directed graph and each layer has a respective set of parameters. The method includes determining a partitioning of the neural network layers into a sequence of superlayers. Each superlayer is a partition of the directed graph that includes one or more layers. The method includes processing the batch of inputs using the hardware circuit, which includes, for each superlayer in the sequence: i) loading the respective set of parameters for the layers in the superlayer into memory of the hardware circuit, and ii) for each input in the batch, processing the input through each of the layers in the superlayer using the parameters in the memory of the hardware circuit to generate a superlayer output for the input.
    Type: Application
    Filed: March 13, 2025
    Publication date: June 26, 2025
    Inventor: Dong Hyuk Woo
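The superlayer schedule described above (publication 20250209302) amounts to: load parameters once per superlayer, then push the entire batch through that superlayer before moving on. The sketch below models layers as plain Python functions and takes the partitioning as given rather than computing it; all names are illustrative.

```python
def run_superlayers(batch, superlayers):
    """Process a batch through a sequence of superlayers.

    superlayers: list of superlayers, each a list of (params, layer_fn) pairs.
    """
    current = list(batch)
    for superlayer in superlayers:
        # i) "load" the parameters for every layer in this superlayer
        loaded = [(params, fn) for params, fn in superlayer]
        # ii) process each input in the batch through all the superlayer's layers
        current = [_apply_layers(x, loaded) for x in current]
    return current

def _apply_layers(x, loaded):
    for params, fn in loaded:
        x = fn(x, params)  # one layer's forward pass with its loaded params
    return x
```

The point of the grouping is that each superlayer's parameters are resident in on-circuit memory for the whole batch, instead of being reloaded per input.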
  • Patent number: 12339923
    Abstract: A circuit comprises an input register configured to receive an input vector of elements, a control register configured to receive a control vector of elements, wherein each element of the control vector corresponds to a respective element of the input vector, and wherein each element specifies a permutation of a corresponding element of the input vector, and a permute execution circuit configured to generate an output vector of elements corresponding to a permutation of the input vector. Generating each element of the output vector comprises accessing, at the input register, a particular element of the input vector, accessing, at the control register, a particular element of the control vector corresponding to the particular element of the input vector, and outputting the particular element of the input vector as an element at a particular position of the output vector that is selected based on the particular element of the control vector.
    Type: Grant
    Filed: September 1, 2023
    Date of Patent: June 24, 2025
    Assignee: Google LLC
    Inventors: Dong Hyuk Woo, Gregory Michael Thorson, Andrew Everett Phelps, Olivier Temam, Jonathan Ross, Christopher Aaron Clark
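A minimal software reading of the permute circuit above (patent 12339923): each control element selects the output position for the corresponding input element. Whether a control value indexes the source or the destination is an implementation detail the abstract leaves open; the destination reading is assumed here.

```python
def permute(input_vector, control_vector):
    """Scatter each input element to the output slot its control element names."""
    output = [None] * len(input_vector)
    for i, value in enumerate(input_vector):
        output[control_vector[i]] = value  # control picks the output position
    return output
```

For instance, control vector `[2, 0, 1]` sends the first input element to position 2, the second to position 0, and the third to position 1.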
  • Publication number: 20250143171
    Abstract: The present invention relates to an organic compound represented by the following Formula 1, and a high-efficiency and long-lifetime organic light emitting device enabling significantly improved low voltage driving, and having long lifetime, excellent luminous efficiency and the like by employing the same as a light emitting layer host material in the device.
    Type: Application
    Filed: October 28, 2024
    Publication date: May 1, 2025
    Inventors: Yeon-jae CHOI, Si-in KIM, Tae-young KIM, Kyeong-hyeon KIM, Joon-ho KIM, Hyuk-woo JANG, Do-yeong CHOI, Seo-youn PARK, Kyeong-wan KIM, Se-jin LEE
  • Publication number: 20250139408
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing inference operations of machine learning models, are described in this document. In one aspect, the method includes receiving data representing a first machine learning model that includes inference operations. An estimated duration for the system to perform the inference operations is obtained. A priority time period reserved for performing priority inference operations of a priority machine learning model during each occurrence of a recurring time window is obtained. A remaining time period of each occurrence of the recurring time window that remains after reserving the priority time period is determined. A determination is made that the estimated duration is greater than the remaining time period. In response, the first machine learning model is partitioned into a group of sub-models. The hardware processing unit(s) perform inference operations of a sub-model during the remaining time period.
    Type: Application
    Filed: September 17, 2021
    Publication date: May 1, 2025
    Inventor: Dong Hyuk Woo
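The scheduling decision in the abstract above (publication 20250139408) can be captured as arithmetic: subtract the reserved priority period from the recurring window, and if the model's estimated duration exceeds what remains, split it into sub-models that each fit. The even split below is an illustrative stand-in for however the real system partitions a model; durations are abstract time units.

```python
import math

def plan(model_duration, window, priority_duration):
    """Return how many sub-models the model must be partitioned into."""
    remaining = window - priority_duration  # time left after priority inference work
    if model_duration <= remaining:
        return 1  # the whole model fits in one window's remaining time
    # Otherwise partition into enough sub-models that each fits in `remaining`.
    return math.ceil(model_duration / remaining)
```

For a 10-unit model in an 8-unit window with 3 units reserved, 5 units remain per window, so the model is split into 2 sub-models.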
  • Publication number: 20250143181
    Abstract: The present invention relates to an organic compound represented by the following Formula 1, and a high-efficiency and long-lifetime organic light emitting device enabling significantly improved low voltage driving, and having long lifetime, excellent luminous efficiency and the like by employing the same as a light emitting layer host material in the device.
    Type: Application
    Filed: October 28, 2024
    Publication date: May 1, 2025
    Inventors: Hyuk-woo JANG, Si-in KIM, Tae-young KIM, Kyeong-hyeon KIM, Joon-ho KIM, Yeon-jae CHOI, Do-yeong CHOI, Seo-youn PARK, Kyeong-wan KIM, Se-jin LEE
  • Patent number: 12254394
    Abstract: A computer-implemented method includes receiving a batch of neural network inputs to be processed using a neural network on a hardware circuit. The neural network has multiple layers arranged in a directed graph and each layer has a respective set of parameters. The method includes determining a partitioning of the neural network layers into a sequence of superlayers. Each superlayer is a partition of the directed graph that includes one or more layers. The method includes processing the batch of inputs using the hardware circuit, which includes, for each superlayer in the sequence: i) loading the respective set of parameters for the layers in the superlayer into memory of the hardware circuit, and ii) for each input in the batch, processing the input through each of the layers in the superlayer using the parameters in the memory of the hardware circuit to generate a superlayer output for the input.
    Type: Grant
    Filed: October 25, 2021
    Date of Patent: March 18, 2025
    Assignee: Google LLC
    Inventor: Dong Hyuk Woo
  • Publication number: 20250074928
    Abstract: The present invention relates to an organic compound represented by the following [Formula 1], and a high-efficiency and long-lifetime organic light emitting device enabling significantly improved low voltage driving and having long lifetime, excellent luminous efficiency and the like by employing the same as a light emitting layer host material in the device.
    Type: Application
    Filed: August 21, 2024
    Publication date: March 6, 2025
    Applicant: SFC CO., LTD.
    Inventors: Kyeong-hyeon KIM, Si-in KIM, Hyuk-woo JANG, Do-yeong CHOI, Seo-youn PARK, Yeon-jae CHOI, Kyeong-wan KIM, Se-jin LEE
  • Publication number: 20250068897
    Abstract: Apparatus and methods for processing neural network models are provided. The apparatus can comprise a plurality of identical artificial intelligence processing dies. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies can include at least one inter-die input block and at least one inter-die output block. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies is communicatively coupled to another artificial intelligence processing die among the plurality of identical artificial intelligence processing dies by way of one or more communication paths from the at least one inter-die output block of the artificial intelligence processing die to the at least one inter-die input block of the artificial intelligence processing die.
    Type: Application
    Filed: August 30, 2024
    Publication date: February 27, 2025
    Inventors: Uday Kumar Dasari, Olivier Temam, Ravi Narayanaswami, Dong Hyuk Woo
  • Publication number: 20250072216
    Abstract: Embodiments of the present disclosure provide a display panel including: an overcoat layer disposed on a light emitting element layer and having a first refractive index; and a cover window disposed on the overcoat layer and having a second refractive index greater than the first refractive index, wherein at least one of the overcoat layer and the cover window includes a plurality of inclined surfaces, and inclination angles of the plurality of inclined surfaces in a central area are different from those in an edge area. The display panel, the manufacturing method thereof, and the head-mounted display device including the same according to the embodiments of the present disclosure may control the angle of emitted light.
    Type: Application
    Filed: April 29, 2024
    Publication date: February 27, 2025
    Inventors: Jun Hyuk WOO, Rae Young KIM, Ree Hyang KIM, Jong Min OK, Je Won YOO
  • Publication number: 20250045559
    Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include, performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
    Type: Application
    Filed: July 9, 2024
    Publication date: February 6, 2025
    Inventors: Ravi Narayanaswami, Dong Hyuk Woo, Olivier Temam, Harshit Khaitan
  • Patent number: 12191386
    Abstract: A power semiconductor device includes a semiconductor layer of silicon carbide (SiC), at least one trench that extends in one direction, a gate insulating layer disposed on at least an inner wall of the at least one trench, at least one gate electrode layer disposed on the gate insulating layer, a drift region disposed in the semiconductor layer at least on one side of the at least one gate electrode layer, a well region disposed in the semiconductor layer to be deeper than the at least one gate electrode layer, a source region disposed in the well region, and at least one channel region disposed in the semiconductor layer of one side of the at least one gate electrode layer between the drift region and the source region.
    Type: Grant
    Filed: March 13, 2024
    Date of Patent: January 7, 2025
    Assignee: HYUNDAI MOBIS CO., LTD.
    Inventors: Jeong Mok Ha, Hyuk Woo, Sin A Kim, Tae Youp Kim, Ju Hwan Lee, Min Gi Kang, Tae Yang Kim
  • Publication number: 20240407254
    Abstract: Disclosed is an organic compound represented by Formula 1. Also disclosed is an organic light emitting device including a light emitting layer employing the organic compound as a host. The use of the organic compound enables low voltage driving of the organic light emitting device and ensures significantly long lifetime and improved luminous efficiency of the device.
    Type: Application
    Filed: May 16, 2024
    Publication date: December 5, 2024
    Applicant: SFC CO., LTD.
    Inventors: Ji-yung KIM, Si-in KIM, Kyeong-hyeon KIM, Hyuk-woo JANG, Do-yeong CHOI, Seo-youn PARK, Yeon-jae CHOI, Kyeong-wan KIM, Se-jin LEE
  • Publication number: 20240403621
    Abstract: A hardware accelerator can store, in multiple memory storage areas in one or more memories on the accelerator, input data for each processing time step of multiple processing time steps for processing sequential inputs to a machine learning model. For each processing time step, the following is performed. The accelerator can access a current value of a counter stored in a register within the accelerator to identify the processing time step. The accelerator can determine, based on the current value of the counter, one or more memory storage areas that store the input data for the processing time step. The accelerator can facilitate access of the input data for the processing time step from the one or more memory storage areas to at least one processor coupled to the one or more memory storage areas. The accelerator can increment the current value of the counter stored in the register.
    Type: Application
    Filed: August 13, 2024
    Publication date: December 5, 2024
    Inventors: Jack Liu, Dong Hyuk Woo