Patents by Inventor Yoonho BOO

Yoonho BOO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250138872
    Abstract: A command processor determines whether a command descriptor describing a current command is in a first format or in a second format, wherein the first format includes a source memory address pointing to a memory area in a shared memory having a binary code to be accessed according to a direct memory access (DMA) scheme, and the second format includes one or more object indices, a respective one of the one or more object indices indicating an object in an object database. If the command descriptor describing the current command is in the second format, the command processor converts a format of the command descriptor to the first format, generates one or more task descriptors describing neural network model tasks based on the command descriptor in the first format, and distributes the one or more task descriptors to the one or more neural processors.
    Type: Application
    Filed: January 3, 2025
    Publication date: May 1, 2025
    Inventors: Hongyun Kim, Chang-Hyo Yu, Yoonho Boo
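The descriptor-handling flow in the abstract above (detect the format, resolve object indices into a DMA source address, then fan task descriptors out to the neural processors) can be illustrated with a small sketch. Everything here is hypothetical: the `CommandDescriptor` fields, the `OBJECT_DB` lookup, and the one-task-per-processor distribution are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical object database: object index -> memory address of its binary code.
OBJECT_DB = {0: 0x1000, 1: 0x2000, 2: 0x3000}

@dataclass
class CommandDescriptor:
    # First format: direct source address for DMA. Second format: object indices.
    src_addr: Optional[int] = None
    object_indices: list = field(default_factory=list)

    def is_second_format(self) -> bool:
        return self.src_addr is None and bool(self.object_indices)

def to_first_format(cmd: CommandDescriptor) -> CommandDescriptor:
    """Resolve object indices against the database to obtain a DMA source address."""
    if cmd.is_second_format():
        # Use the first object's address as the DMA source (illustrative choice).
        cmd = CommandDescriptor(src_addr=OBJECT_DB[cmd.object_indices[0]])
    return cmd

def dispatch(cmd: CommandDescriptor, num_processors: int):
    cmd = to_first_format(cmd)
    # One task descriptor per neural processor (assumed distribution policy).
    return [{"processor": i, "src_addr": cmd.src_addr} for i in range(num_processors)]

tasks = dispatch(CommandDescriptor(object_indices=[1]), num_processors=2)
```

The key step is that a second-format descriptor never reaches the task-generation stage directly; it is first normalized to the address-carrying first format.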
  • Patent number: 12229587
    Abstract: A command processor determines whether a command descriptor describing a current command is in a first format or in a second format, wherein the first format includes a source memory address pointing to a memory area in a shared memory having a binary code to be accessed according to a direct memory access (DMA) scheme, and the second format includes one or more object indices, a respective one of the one or more object indices indicating an object in an object database. If the command descriptor describing the current command is in the second format, the command processor converts a format of the command descriptor to the first format, generates one or more task descriptors describing neural network model tasks based on the command descriptor in the first format, and distributes the one or more task descriptors to the one or more neural processors.
    Type: Grant
    Filed: March 29, 2024
    Date of Patent: February 18, 2025
    Assignee: REBELLIONS INC.
    Inventors: Hongyun Kim, Chang-Hyo Yu, Yoonho Boo
  • Publication number: 20240330041
    Abstract: A command processor determines whether a command descriptor describing a current command is in a first format or in a second format, wherein the first format includes a source memory address pointing to a memory area in a shared memory having a binary code to be accessed according to a direct memory access (DMA) scheme, and the second format includes one or more object indices, a respective one of the one or more object indices indicating an object in an object database. If the command descriptor describing the current command is in the second format, the command processor converts a format of the command descriptor to the first format, generates one or more task descriptors describing neural network model tasks based on the command descriptor in the first format, and distributes the one or more task descriptors to the one or more neural processors.
    Type: Application
    Filed: March 29, 2024
    Publication date: October 3, 2024
    Inventors: Hongyun Kim, Chang-Hyo Yu, Yoonho Boo
  • Patent number: 12099915
    Abstract: A method for quantizing a deep neural network is provided, which includes extracting first statistical information on output values of a first normalization layer included in the deep neural network, determining a discretization interval associated with input values of a subsequent layer of the first normalization layer by using the extracted first statistical information, and quantizing the input values of the subsequent layer into discretized values having the determined discretization interval.
    Type: Grant
    Filed: April 13, 2022
    Date of Patent: September 24, 2024
    Assignee: REBELLIONS INC.
    Inventor: Yoonho Boo
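The quantization method above has two stages: derive statistics from a normalization layer's outputs, then set a discretization interval for the next layer's inputs from those statistics. A minimal sketch, assuming a uniform step width chosen so that a k-sigma range spans the available levels (the abstract does not specify this mapping, so it is an illustrative choice):

```python
import statistics

def discretization_interval(outputs, num_levels=256, k=3.0):
    """Step width chosen so that +/- k standard deviations of the
    normalization-layer outputs span num_levels quantization levels
    (assumed mapping from statistics to interval)."""
    sigma = statistics.pstdev(outputs)
    return 2 * k * sigma / num_levels

def quantize(values, step):
    # Snap each input value of the subsequent layer to the nearest multiple of step.
    return [round(v / step) * step for v in values]

step = discretization_interval([-1.0, 0.0, 1.0])
```

A usage example: `quantize([1.3], 0.5)` snaps 1.3 to the nearest multiple of 0.5, i.e. 1.5.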
  • Publication number: 20240211742
    Abstract: A neural core, a neural processing device including same, and a method for loading data of a neural processing device are provided. The neural core comprises a processing unit configured to perform operations, an L0 memory configured to store input data, and an LSU configured to perform a load task and a store task of data between the processing unit and the L0 memory, wherein the LSU comprises a local memory load unit configured to transmit the input data in the L0 memory to the processing unit, and the local memory load unit comprises a target decision module configured to identify and retrieve the input data in the L0 memory, a transformation logic configured to transform the input data and thereby generate transformed data, and an output FIFO configured to receive the transformed data and transmit the transformed data to the processing unit in the received order.
    Type: Application
    Filed: March 6, 2024
    Publication date: June 27, 2024
    Inventors: Jinseok Kim, Kyeongryeol Bong, Jinwook Oh, Yoonho Boo
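The local memory load unit described above is a three-stage pipeline: a target decision step that locates data in L0 memory, a transformation step, and an output FIFO that preserves arrival order toward the processing unit. A behavioral sketch follows; the dictionary-backed L0 memory and the int-to-float cast standing in for the transformation logic are assumptions for illustration only.

```python
from collections import deque

class LocalMemoryLoadUnit:
    """Illustrative model of the load path: locate, transform, queue in order."""

    def __init__(self, l0_memory):
        self.l0 = l0_memory       # address -> raw value (stand-in for L0 memory)
        self.out_fifo = deque()   # preserves arrival order toward the processing unit

    def load(self, addr):
        raw = self.l0[addr]       # target decision: identify and retrieve the input data
        transformed = float(raw)  # transformation logic (assumed: int -> float cast)
        self.out_fifo.append(transformed)

    def to_processing_unit(self):
        # FIFO semantics: data reaches the processing unit in the received order.
        return self.out_fifo.popleft()

lsu = LocalMemoryLoadUnit({0: 1, 4: 2})
lsu.load(0)
lsu.load(4)
```

The FIFO is the ordering guarantee: however the transformation stage is implemented, the processing unit sees transformed data in exactly the order it was loaded.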
  • Patent number: 11954584
    Abstract: A neural core, a neural processing device including same, and a method for loading data of a neural processing device are provided. The neural core comprises a processing unit configured to perform operations, an L0 memory configured to store input data, and an LSU configured to perform a load task and a store task of data between the processing unit and the L0 memory, wherein the LSU comprises a local memory load unit configured to transmit the input data in the L0 memory to the processing unit, and the local memory load unit comprises a target decision module configured to identify and retrieve the input data in the L0 memory, a transformation logic configured to transform the input data and thereby generate transformed data, and an output FIFO configured to receive the transformed data and transmit the transformed data to the processing unit in the received order.
    Type: Grant
    Filed: May 23, 2023
    Date of Patent: April 9, 2024
    Assignee: Rebellions Inc.
    Inventors: Jinseok Kim, Kyeongryeol Bong, Jinwook Oh, Yoonho Boo
  • Publication number: 20240013038
    Abstract: A neural core, a neural processing device including same, and a method for loading data of a neural processing device are provided. The neural core comprises a processing unit configured to perform operations, an L0 memory configured to store input data, and an LSU configured to perform a load task and a store task of data between the processing unit and the L0 memory, wherein the LSU comprises a local memory load unit configured to transmit the input data in the L0 memory to the processing unit, and the local memory load unit comprises a target decision module configured to identify and retrieve the input data in the L0 memory, a transformation logic configured to transform the input data and thereby generate transformed data, and an output FIFO configured to receive the transformed data and transmit the transformed data to the processing unit in the received order.
    Type: Application
    Filed: May 23, 2023
    Publication date: January 11, 2024
    Inventors: Jinseok Kim, Kyeongryeol Bong, Jinwook Oh, Yoonho Boo
  • Publication number: 20220398430
    Abstract: A method for quantizing a deep neural network is provided, which includes extracting first statistical information on output values of a first normalization layer included in the deep neural network, determining a discretization interval associated with input values of a subsequent layer of the first normalization layer by using the extracted first statistical information, and quantizing the input values of the subsequent layer into discretized values having the determined discretization interval.
    Type: Application
    Filed: April 13, 2022
    Publication date: December 15, 2022
    Inventor: Yoonho BOO
  • Publication number: 20220237436
    Abstract: Disclosed is a neural network training method and apparatus. The method includes receiving a neural network model that is first trained based on a first weight, second training the first trained neural network model based on learning rates to obtain second weights from a second trained neural network, and third training the second trained neural network model based on the second weights.
    Type: Application
    Filed: November 15, 2021
    Publication date: July 28, 2022
    Applicants: Samsung Electronics Co., Ltd., SNU R&DB FOUNDATION
    Inventors: Sungho SHIN, Wonyong SUNG, Yoonho BOO
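The three-phase training scheme in the last abstract (first training yields an initial model; second training re-trains it under several learning rates to obtain second weights; third training continues from those) can be sketched with a toy stand-in for model training. The quadratic objective, the particular learning rates, and averaging the second weights before the third phase are all illustrative assumptions, not the method as claimed.

```python
def train(w, lr, steps=50, target=3.0):
    """Toy gradient descent on (w - target)^2, standing in for model training."""
    for _ in range(steps):
        w -= lr * 2 * (w - target)
    return w

# First training: obtain a first-trained model from an initial weight.
w1 = train(0.0, lr=0.1)

# Second training: re-train the first-trained model under several learning
# rates to obtain second weights (one per learning rate).
second_weights = [train(w1, lr) for lr in (0.05, 0.1, 0.2)]

# Third training: continue from an aggregate of the second weights
# (averaging is an assumed choice; the abstract does not specify one).
w3 = train(sum(second_weights) / len(second_weights), lr=0.01)
```

Each phase starts from the weights the previous phase produced, which is the structural point of the method: training is staged rather than run once end to end.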