Patents Assigned to DEEPX CO., LTD.
  • Patent number: 11511772
    Abstract: A neural processing unit (NPU) includes a controller including a scheduler, the controller configured to receive from a compiler a machine code of an artificial neural network (ANN) including a fusion ANN, the machine code including data locality information of the fusion ANN, and receive heterogeneous sensor data from a plurality of sensors corresponding to the fusion ANN; at least one processing element configured to perform fusion operations of the fusion ANN including a convolution operation and at least one special function operation; a special function unit (SFU) configured to perform a special function operation of the fusion ANN; and an on-chip memory configured to store operation data of the fusion ANN, wherein the scheduler is configured to control the at least one processing element and the on-chip memory such that all operations of the fusion ANN are processed in a predetermined sequence according to the data locality information.
    Type: Grant
    Filed: April 12, 2022
    Date of Patent: November 29, 2022
    Assignee: DEEPX CO., LTD.
    Inventor: Lok Won Kim
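The core idea of the abstract above — a controller that executes every fusion-ANN operation in a fixed, compiler-determined order rather than deciding at runtime — can be sketched minimally. All names below (the operation table, the operation names) are illustrative, not from the patent.

```python
# Minimal sketch (illustrative names): the controller dispatches fusion-ANN
# operations in the fixed sequence encoded by compiler-supplied data
# locality information, instead of choosing an order at runtime.

def run_in_locality_order(locality_order, op_table):
    """Execute each operation in the predetermined sequence."""
    outputs = []
    for op_name in locality_order:
        outputs.append(op_table[op_name]())  # dispatch to PE array or SFU
    return outputs

op_table = {
    "conv": lambda: "conv-out",        # processing-element operation
    "batch_norm": lambda: "bn-out",    # special function unit operation
    "concat": lambda: "concat-out",    # special function unit operation
}
print(run_in_locality_order(["conv", "batch_norm", "concat"], op_table))
# prints ['conv-out', 'bn-out', 'concat-out']
```

The point of the fixed order is that the scheduler can pre-stage on-chip memory for the next operation, since the sequence never changes between inferences.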
  • Publication number: 20220357792
    Abstract: A learning model creation method for performing a specific function for an electronic device, according to an embodiment of the present invention, can include the steps of: preparing big data for training an artificial neural network including, in pairs, sensing data received from a random sensing data generation unit for sensing human behaviors and specific function performance determination data for determining whether to perform a specific function of an electronic device with respect to the sensing data; preparing an artificial neural network model, which includes nodes of an input layer through which the sensing data is inputted, nodes of an output layer through which the specific function performance determination data of the electronic device is outputted, and association parameters between the nodes of the input layer and the nodes of the output layer, and calculates inputs of the sensing data for the nodes of the input layer in order to output the specific function performance determination data from the nodes of the output layer.
    Type: Application
    Filed: July 21, 2022
    Publication date: November 10, 2022
    Applicant: DEEPX CO., LTD.
    Inventor: Lok Won KIM
  • Publication number: 20220348229
    Abstract: A neural processing unit (NPU) includes a controller including a scheduler, the controller configured to receive from a compiler a machine code of an artificial neural network (ANN) including a fusion ANN, the machine code including data locality information of the fusion ANN, and receive heterogeneous sensor data from a plurality of sensors corresponding to the fusion ANN; at least one processing element configured to perform fusion operations of the fusion ANN including a convolution operation and at least one special function operation; a special function unit (SFU) configured to perform a special function operation of the fusion ANN; and an on-chip memory configured to store operation data of the fusion ANN, wherein the scheduler is configured to control the at least one processing element and the on-chip memory such that all operations of the fusion ANN are processed in a predetermined sequence according to the data locality information.
    Type: Application
    Filed: April 12, 2022
    Publication date: November 3, 2022
    Applicant: DEEPX CO., LTD.
    Inventor: Lok Won KIM
  • Publication number: 20220335282
    Abstract: A neural processing unit includes a mode selector configured to select a first mode or a second mode; and a processing element (PE) array operating in one of the first mode and the second mode and including a plurality of processing elements arranged in PE rows and PE columns, the PE array configured to receive an input of first input data and an input of second input data. In the second mode, the first input data is inputted in a PE column direction of the PE array and is transmitted along the PE column direction while being delayed by a specific number of clock cycles, and the second input data is broadcast to the plurality of processing elements of the PE array to which the first input data is delayed by the specific number of clock cycles.
    Type: Application
    Filed: April 14, 2022
    Publication date: October 20, 2022
    Applicant: DEEPX CO., LTD.
    Inventors: JungBoo PARK, Hansuk YOO
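The second-mode timing described above — column-fed data reaching successive PE rows a fixed number of clock cycles apart while a second operand is broadcast to the whole array — can be illustrated with a toy cycle schedule. The array size and delay value below are made-up parameters, and this models only the arrival timing, not the arithmetic.

```python
# Illustrative timing sketch of the second mode: the column-direction input
# reaches each successive PE row `delay_cycles` clocks later, while the
# broadcast input is presented to every row at once. Parameters are invented.

def column_arrival_cycles(num_pe_rows, delay_cycles):
    """Cycle at which each PE row sees the column-direction input."""
    return [row * delay_cycles for row in range(num_pe_rows)]

def broadcast_arrival_cycles(num_pe_rows):
    """The broadcast input arrives at every PE row in the same cycle."""
    return [0] * num_pe_rows

print(column_arrival_cycles(4, delay_cycles=2))  # prints [0, 2, 4, 6]
print(broadcast_arrival_cycles(4))               # prints [0, 0, 0, 0]
```

The skewed schedule is what lets a PE combine the delayed operand with a freshly broadcast one at the right clock edge.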
  • Publication number: 20220327368
    Abstract: A neural processing unit (NPU), a method for driving an artificial neural network (ANN) model, and an ANN driving apparatus are provided. The NPU includes a semiconductor circuit that includes at least one processing element (PE) configured to process an operation of an artificial neural network (ANN) model; and at least one memory configurable to store a first kernel and a first kernel filter. The NPU is configured to generate a first modulation kernel based on the first kernel and the first kernel filter and to generate a second modulation kernel based on the first kernel and a second kernel filter generated by applying a mathematical function to the first kernel filter. Power consumption and memory read time are both reduced by decreasing the data size of a kernel read from a separate memory to an artificial neural network processor and/or by decreasing the number of memory read requests.
    Type: Application
    Filed: June 23, 2022
    Publication date: October 13, 2022
    Applicant: DEEPX CO., LTD.
    Inventor: Lok Won KIM
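The kernel-modulation scheme above keeps one base kernel resident and derives further kernels from small kernel filters, so only the filters need to be fetched from external memory. The patent does not disclose the exact modulation operation or the "mathematical function"; elementwise addition and sign negation below are stand-ins chosen purely for illustration.

```python
import numpy as np

# Hedged sketch: a base kernel plus a small kernel filter yields a modulation
# kernel; a second filter is derived from the first by a mathematical
# function. Addition and negation are illustrative, not the patent's method.

def modulate(base_kernel, kernel_filter):
    return base_kernel + kernel_filter  # illustrative modulation operation

base = np.array([[1.0, 2.0], [3.0, 4.0]])         # resident base kernel
filter_1 = np.array([[0.5, -0.5], [0.25, 0.0]])    # small filter fetched from memory

kernel_1 = modulate(base, filter_1)
filter_2 = -filter_1            # "mathematical function" applied to filter 1
kernel_2 = modulate(base, filter_2)
```

Because `filter_1` is the only new data read per derived kernel, both the bytes transferred and the number of read requests shrink relative to fetching each full kernel.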
  • Publication number: 20220318606
    Abstract: A neural processing unit (NPU) includes an internal memory storing information on combinations of a plurality of artificial neural network (ANN) models, the plurality of ANN models including first and second ANN models; a plurality of processing elements (PEs) to process first operations and second operations of the plurality of ANN models in sequence or in parallel, the plurality of PEs including first and second groups of PEs; and a scheduler to allocate to the first group of PEs a part of the first operations for the first ANN model and to allocate to the second group of PEs a part of the second operations for the second ANN model, based on an instruction related to information on an operation sequence of the plurality of ANN models or further based on ANN data locality information. The first and second operations may be performed in parallel or on a time-division basis.
    Type: Application
    Filed: October 14, 2021
    Publication date: October 6, 2022
    Applicant: DEEPX CO., LTD.
    Inventor: Lok Won KIM
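The allocation described above — splitting the PE array into groups and assigning each model's operations to its own group so two models run in parallel — can be sketched as a simple partitioning. The PE names, group sizes, and round-robin allocation rule are assumptions for illustration only.

```python
# Illustrative sketch: a scheduler splits the PE array into two groups and
# assigns operations of two ANN models to them. The split point and the
# round-robin assignment are invented details.

def allocate(pe_ids, first_model_ops, second_model_ops, split):
    group_1, group_2 = pe_ids[:split], pe_ids[split:]
    plan = {pe: [] for pe in pe_ids}
    for i, op in enumerate(first_model_ops):
        plan[group_1[i % len(group_1)]].append(op)   # model A -> group 1
    for i, op in enumerate(second_model_ops):
        plan[group_2[i % len(group_2)]].append(op)   # model B -> group 2
    return plan

plan = allocate(["pe0", "pe1", "pe2", "pe3"],
                ["a_conv1", "a_conv2"], ["b_conv1"], split=2)
```

Because each group holds operations from only one model, the two models' operations can proceed concurrently, or the same groups can be time-multiplexed when PEs are scarce.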
  • Patent number: 11429180
    Abstract: A trained model creation method for performing a specific function for an electronic device includes preparing big data for training an artificial neural network, the big data including, in pairs, sensing data and specific function performance determination data for determining whether to perform a specific function of an electronic device with respect to the sensing data; and preparing an artificial neural network model, which calculates inputs of the sensing data for the nodes of the input layer in order to output the specific function performance determination data from the nodes of the output layer. The artificial neural network model is trained by repeatedly performing a process of inputting the sensing data included in the prepared big data into the nodes of the input layer and outputting the specific function performance determination data that pairs with the sensing data included in the big data from the nodes of the output layer so as to update the association parameters.
    Type: Grant
    Filed: July 2, 2021
    Date of Patent: August 30, 2022
    Assignee: DEEPX CO., LTD.
    Inventor: Lok Won Kim
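The training loop the abstract describes — repeatedly feeding paired (sensing data, perform/not-perform) examples and updating the association parameters from the output — can be sketched with a single-layer model. The model size, learning rate, update rule, and toy dataset below are all assumptions; the patent's network is not specified at this level of detail.

```python
# Minimal single-layer sketch of the described loop: paired examples of
# sensing data and a perform (1) / do-not-perform (0) label are presented
# repeatedly, and the association parameters (weights) are updated from
# the prediction error. Hyperparameters and data are invented.

def train(pairs, lr=0.1, epochs=25):
    num_inputs = len(pairs[0][0])
    weights = [0.0] * num_inputs
    bias = 0.0
    for _ in range(epochs):
        for sensing, label in pairs:
            pred = 1 if sum(w * x for w, x in zip(weights, sensing)) + bias > 0 else 0
            err = label - pred
            weights = [w + lr * err * x for w, x in zip(weights, sensing)]
            bias += lr * err
    return weights, bias

def predict(weights, bias, sensing):
    return 1 if sum(w * x for w, x in zip(weights, sensing)) + bias > 0 else 0

# toy "big data": perform the function only when both sensor readings are high
pairs = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(pairs)
```

After training, `predict(w, b, sensing)` plays the role of the output-layer decision on whether the electronic device should perform the function.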
  • Patent number: 11416737
    Abstract: A neural processing unit (NPU), a method for driving an artificial neural network (ANN) model, and an ANN driving apparatus are provided. The NPU includes a semiconductor circuit that includes at least one processing element (PE) configured to process an operation of an artificial neural network (ANN) model; and at least one memory configurable to store a first kernel and a first kernel filter. The NPU is configured to generate a first modulation kernel based on the first kernel and the first kernel filter and to generate a second modulation kernel based on the first kernel and a second kernel filter generated by applying a mathematical function to the first kernel filter. Power consumption and memory read time are both reduced by decreasing the data size of a kernel read from a separate memory to an artificial neural network processor and/or by decreasing the number of memory read requests.
    Type: Grant
    Filed: October 12, 2021
    Date of Patent: August 16, 2022
    Assignee: DEEPX CO., LTD.
    Inventor: Lok Won Kim
  • Publication number: 20220206068
    Abstract: This disclosure proposes an inventive system capable of testing a component in the system during runtime. The system may comprise: a substrate; a plurality of functional components, each of the plurality of functional components being mounted onto the substrate and including circuitry; a system bus formed with an electrically conductive pattern on the substrate, thereby allowing the plurality of functional components to communicate with each other; one or more wrappers, each of the one or more wrappers connected to one of the plurality of functional components; and an in-system component tester (ICT) configured to: select, as a component under test (CUT), at least one functional component, in an idle state, of the plurality of the functional components; and test, via the one or more wrappers, the at least one functional component selected as the CUT.
    Type: Application
    Filed: December 27, 2021
    Publication date: June 30, 2022
    Applicant: DEEPX CO., LTD.
    Inventor: Lok Won Kim
  • Publication number: 20220207336
    Abstract: A neural processing unit (NPU), a method for driving an artificial neural network (ANN) model, and an ANN driving apparatus are provided. The NPU includes a semiconductor circuit that includes at least one processing element (PE) configured to process an operation of an artificial neural network (ANN) model; and at least one memory configurable to store a first kernel and a first kernel filter. The NPU is configured to generate a first modulation kernel based on the first kernel and the first kernel filter and to generate a second modulation kernel based on the first kernel and a second kernel filter generated by applying a mathematical function to the first kernel filter. Power consumption and memory read time are both reduced by decreasing the data size of a kernel read from a separate memory to an artificial neural network processor and/or by decreasing the number of memory read requests.
    Type: Application
    Filed: October 12, 2021
    Publication date: June 30, 2022
    Applicant: DEEPX CO., LTD.
    Inventor: Lok Won KIM
  • Publication number: 20220206978
    Abstract: A system on chip (SoC) for testing a component in a system during runtime includes a plurality of functional components; a system bus for allowing the plurality of functional components to communicate with each other; one or more wrappers, each connected to one of the plurality of functional components; and an in-system component tester (ICT). The ICT monitors, via the wrappers, states of the functional components; selects, as a component under test (CUT), at least one functional component in an idle state; tests, via the wrappers, the selected at least one functional component; interrupts the testing step with respect to the selected at least one functional component, based on a detection of a collision with an access from the system bus to the selected at least one functional component; and allows a connection of the at least one functional component to the system bus, based on the interrupting step.
    Type: Application
    Filed: October 13, 2021
    Publication date: June 30, 2022
    Applicant: DEEPX CO., LTD.
    Inventor: Lok Won KIM
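The runtime self-test flow in the abstract above — pick an idle component as the CUT, test it through its wrapper, and abort the test on a collision with a bus access — can be sketched as a small control loop. The component names, the test vectors, and the collision model are invented for illustration.

```python
# Toy sketch of the ICT flow: select an idle functional component as the
# component under test (CUT), step through its test, and interrupt the test
# (returning the component to the bus) if the bus tries to access it.
# All names and the collision model are illustrative.

def run_self_test(states, bus_requests, test_vectors):
    """states: component -> 'idle'/'busy'; bus_requests: components the bus touches mid-test."""
    cut = next((c for c, s in states.items() if s == "idle"), None)
    if cut is None:
        return None, "no idle component"
    for step, vector in enumerate(test_vectors):
        if cut in bus_requests:            # collision with a bus access
            return cut, f"interrupted at step {step}, reconnected to bus"
    return cut, "test passed"

states = {"dma": "busy", "codec": "idle"}
print(run_self_test(states, bus_requests=set(), test_vectors=[1, 2, 3]))
# prints ('codec', 'test passed')
```

The key property is that normal bus traffic always wins: the tester yields the component back the moment a real access arrives.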
  • Publication number: 20220207337
    Abstract: A method performs a plurality of operations on an artificial neural network (ANN). The plurality of operations includes storing in at least one memory a set of weights, at least a portion of a first batch channel of a plurality of batch channels, and at least a portion of a second batch channel of the plurality of batch channels; and calculating the at least a portion of the first batch channel and the at least a portion of the second batch channel using the set of weights. A batch mode, configured to process a plurality of input channels, can determine the operation sequence in which the on-chip memory and/or internal memory stores and computes the parameters of the ANN. Even if the number of input channels increases, processing may be performed with one neural processing unit including a memory configured in consideration of a plurality of input channels.
    Type: Application
    Filed: December 23, 2021
    Publication date: June 30, 2022
    Applicant: DEEPX CO., LTD.
    Inventor: Lok Won KIM
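The batch mode above amounts to keeping one weight set resident in memory and reusing it across several input (batch) channels instead of reloading weights per channel. The shapes and the identity weight matrix below are arbitrary examples, not from the patent.

```python
import numpy as np

# Illustrative sketch of the batch mode: the weight set is loaded once and
# every batch channel is calculated against it, so weight traffic does not
# grow with the number of input channels. Shapes are invented.

def batch_mode(weights, batch_channels):
    # weights stay resident in on-chip memory ...
    return [channel @ weights for channel in batch_channels]  # ... reused per channel

weights = np.eye(3)                       # stand-in weight set
channels = [np.arange(3.0), np.ones(3)]   # e.g. frames from two cameras
outs = batch_mode(weights, channels)
```

Adding a third camera only appends one more entry to `channels`; the weight load happens once either way, which is the memory-sizing point the abstract makes.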
  • Publication number: 20220137866
    Abstract: A memory device for an artificial neural network (ANN) includes at least one memory cell array of N columns and M rows; and a memory controller configured to sequentially perform a read or write operation of data of the at least one memory cell array in a burst mode based on predetermined sequential access information. Each of the at least one memory cell array may include a plurality of dynamic memory cells having a leakage current characteristic. The memory device may further include a processor configured to provide the memory controller with the ANN data locality information or information for identifying an input feature map, a kernel, and an output feature map. The memory controller can prepare data of an ANN model processed at a processor-memory level before being requested by the processor, thus enabling a substantial reduction in the delay of memory data being supplied to the processor.
    Type: Application
    Filed: October 27, 2021
    Publication date: May 5, 2022
    Applicant: DEEPX CO., LTD.
    Inventor: Lok Won KIM
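The burst-mode idea above — using predetermined sequential access information to read consecutive addresses as bursts rather than word by word — can be shown with a small address-coalescing routine. The addresses are made up; real burst lengths would also be bounded by the DRAM's burst size.

```python
# Sketch of burst coalescing: given the predetermined order in which the
# ANN will touch memory, group runs of consecutive addresses into
# (start, length) bursts instead of issuing one request per word.

def to_bursts(access_order):
    """Group consecutive addresses into (start, length) bursts."""
    bursts = []
    for addr in access_order:
        if bursts and addr == bursts[-1][0] + bursts[-1][1]:
            bursts[-1] = (bursts[-1][0], bursts[-1][1] + 1)  # extend current burst
        else:
            bursts.append((addr, 1))                         # start a new burst
    return bursts

print(to_bursts([100, 101, 102, 200, 201]))  # prints [(100, 3), (200, 2)]
```

Five word-sized requests collapse into two bursts, which is where the latency and command-overhead savings come from.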
  • Publication number: 20220138586
    Abstract: A memory system of an artificial neural network (ANN) includes a processor configured to process an ANN model; and an ANN memory controller configured to control a rearrangement of data of the ANN model stored in a memory and to operate the data of the ANN model stored in the memory in a read-burst mode based on ANN data locality information of the ANN model. The ANN memory controller may receive pre-generated ANN data locality information, or the processor may generate a plurality of data access requests sequentially so that the ANN memory controller may generate the ANN data locality information by monitoring the plurality of data access requests. The ANN memory controller prepares, based on an artificial neural network data locality, data before receiving a request from the processor in order to reduce a delay in the data supply of the memory to the processor.
    Type: Application
    Filed: October 12, 2021
    Publication date: May 5, 2022
    Applicant: DEEPX CO., LTD.
    Inventor: Lok Won KIM
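The monitoring variant above — the controller records the processor's access order on a first pass and then prefetches ahead of requests on later passes, exploiting the fact that an ANN's data locality repeats every inference — can be sketched with a tiny recorder/prefetcher. Class and method names are illustrative.

```python
# Sketch of locality learned by monitoring: record the access order once,
# then replay it to prefetch each item before the processor asks. The
# addresses and class interface are invented for illustration.

class LocalityPrefetcher:
    def __init__(self):
        self.recorded = []   # access order observed on the first inference
        self.pos = 0

    def record(self, addr):
        self.recorded.append(addr)

    def prefetch_next(self):
        """Address to fetch ahead of the processor's next request."""
        addr = self.recorded[self.pos % len(self.recorded)]  # order repeats per inference
        self.pos += 1
        return addr

p = LocalityPrefetcher()
for addr in [10, 20, 30]:     # first inference: monitor the requests
    p.record(addr)
print([p.prefetch_next() for _ in range(4)])  # prints [10, 20, 30, 10]
```

The wrap-around models the repeated per-inference access pattern: once one inference's order is known, every later inference can be served ahead of demand.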
  • Publication number: 20220137868
    Abstract: A system for an artificial neural network (ANN) includes a processor configured to output a memory control signal including an ANN data locality; a main memory in which data of an ANN model corresponding to the ANN data locality is stored; and a memory controller configured to receive the memory control signal from the processor and to control the main memory based on the memory control signal. The memory controller may be further configured to control, based on the memory control signal, a read or write operation of data of the main memory required for operation of the artificial neural network. Thus, the system optimizes an ANN operation of the processor by utilizing the ANN data locality of the ANN model, which operates at a processor-memory level.
    Type: Application
    Filed: October 29, 2021
    Publication date: May 5, 2022
    Applicant: DEEPX CO., LTD.
    Inventor: Lok Won KIM
  • Publication number: 20220137869
    Abstract: A system for an artificial neural network (ANN) includes a main memory including a dynamic memory cell electrically coupled to a bit line and a word line; and a memory controller configured to selectively omit a restore operation during a read operation of the dynamic memory cell. The main memory may be configured to selectively omit the restoration operation during the read operation of the dynamic memory cell by controlling a voltage applied to the word line. The memory controller may be further configured to determine whether to perform the restoration operation by determining whether data stored in the dynamic memory cell is reused. Thus, the system optimizes an ANN operation of the processor by utilizing the ANN data locality of the ANN model, which operates at a processor-memory level.
    Type: Application
    Filed: October 29, 2021
    Publication date: May 5, 2022
    Applicant: DEEPX CO., LTD.
    Inventor: Lok Won KIM
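The restore-skip decision above relies on DRAM reads being destructive: the restore step after a read is only needed if the cell will be read again. With ANN data locality the controller knows the full access sequence in advance, so reuse can be decided by lookahead. The lookahead below over a plain Python list is an illustration of that decision, not the circuit-level mechanism.

```python
# Sketch of the restore-skip policy: for each read in a known access
# sequence, restore the cell only if the same data is read again later;
# otherwise skip the restore to save time and power. Names are invented.

def reads_with_restore_policy(access_order):
    """For each read, decide whether to restore: restore only if reused later."""
    plan = []
    for i, addr in enumerate(access_order):
        reused = addr in access_order[i + 1:]   # lookahead enabled by data locality
        plan.append((addr, "restore" if reused else "skip-restore"))
    return plan

print(reads_with_restore_policy(["ifmap0", "weight0", "ifmap0"]))
# prints [('ifmap0', 'restore'), ('weight0', 'skip-restore'), ('ifmap0', 'skip-restore')]
```

Feature-map data that is consumed exactly once is the natural beneficiary: its final read never pays for a restore.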
  • Patent number: 11263513
    Abstract: The present disclosure provides a bit quantization method of an artificial neural network. This method may include: (a) selecting one parameter or one parameter group to be quantized in the artificial neural network; (b) bit-quantizing to reduce the data representation size of the selected parameter or parameter group in units of bits; (c) determining whether the accuracy of the artificial neural network is equal to or greater than a predetermined target value; and (d) repeating steps (a) to (c) when the accuracy of the artificial neural network is equal to or greater than the target value.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: March 1, 2022
    Assignee: DEEPX CO., LTD.
    Inventor: Lok Won Kim
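Steps (a) through (d) of the abstract above form a simple loop: shrink a parameter group's bit width, check accuracy against a target, and repeat while accuracy holds. The sketch below uses a stand-in "model" (a dot product against a reference output) and a uniform symmetric quantizer; the real method would evaluate the actual network's accuracy, and the quantizer details are assumptions.

```python
import numpy as np

# Sketch of steps (a)-(d): select a parameter group (here, one weight
# vector), reduce its bit width, check accuracy against a target, and
# repeat while accuracy stays at or above the target. The stand-in model
# and the uniform quantizer are illustrative assumptions.

def quantize(values, bits):
    """Uniform symmetric quantization to the given bit width."""
    scale = (2 ** (bits - 1) - 1) / np.max(np.abs(values))
    return np.round(values * scale) / scale

def accuracy(weights, inputs, reference):
    """Stand-in accuracy: 1 minus mean relative output error."""
    pred = inputs @ weights
    return 1.0 - np.abs(pred - reference).mean() / np.abs(reference).mean()

rng = np.random.default_rng(0)
w = rng.normal(size=16)            # (a) selected parameter group
x = rng.normal(size=(64, 16))
ref = x @ w                        # full-precision reference output

bits, target = 16, 0.95
while bits > 2:
    trial = quantize(w, bits - 1)             # (b) reduce bit width by one
    if accuracy(trial, x, ref) < target:      # (c) accuracy check
        break                                 # stop before dropping below target
    bits -= 1                                 # (d) accept and repeat
print("final bit width:", bits)
```

The loop terminates at the smallest bit width whose quantized weights still meet the accuracy target, which is the size/accuracy trade-off the method automates.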