Abstract: A device including processors configured to execute instructions and memories storing the instructions, which, when executed by the processors, configure the processors to perform an operation for training a transformer model having a plurality of encoders and a plurality of decoders by dividing batches of training data into a plurality of micro-batches, selecting layer pairs for the plurality of micro-batches, assembling a processing order of the layer pairs, determining resource information to be allocated to the layer pairs, and allocating resources to the layer pairs based on the determined resource information, dependent on the processing order of the layer pairs.
Type:
Application
Filed:
August 16, 2023
Publication date:
July 11, 2024
Applicants:
SAMSUNG ELECTRONICS CO., LTD., Seoul National University R&DB Foundation
Inventors:
Jung Ho AHN, Sun Jung LEE, Jae Wan CHOI
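The scheduling idea in the abstract above can be pictured with a small Python sketch. This is a hedged illustration only: the names (LayerPair, micro_batch_size, memory_mb, budget_mb) and the memory-share heuristic are assumptions, not the patented method.

```python
# Hypothetical sketch: split a batch into micro-batches, pair encoder/decoder
# layers, walk them in a fixed processing order, and attach a resource share.
from dataclasses import dataclass

@dataclass
class LayerPair:
    encoder_layer: int
    decoder_layer: int
    memory_mb: int      # assumed per-pair resource estimate

def split_into_micro_batches(batch, micro_batch_size):
    """Split one training batch into equally sized micro-batches."""
    return [batch[i:i + micro_batch_size] for i in range(0, len(batch), micro_batch_size)]

def schedule(layer_pairs, micro_batches, budget_mb):
    """Assemble a processing order and allocate a memory budget per layer pair."""
    order = sorted(layer_pairs, key=lambda p: (p.encoder_layer, p.decoder_layer))
    plan = []
    for mb_idx, _ in enumerate(micro_batches):
        for pair in order:
            share = min(pair.memory_mb, budget_mb // max(len(order), 1))
            plan.append((mb_idx, pair, share))
    return plan

batch = list(range(32))
pairs = [LayerPair(e, d, memory_mb=256) for e, d in [(0, 0), (1, 1), (2, 2)]]
for step in schedule(pairs, split_into_micro_batches(batch, 8), budget_mb=1024)[:3]:
    print(step)
```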
Abstract: Neural network operation apparatus and method are provided. The neural network operation apparatus includes: one or more processors; and memory storing instructions configured to cause the one or more processors to: generate an upsampled tensor by copying pixels, of a unit of data, based on a scale factor for upsampling; and generate, based on the scale factor, a neural network operation result by performing a pooling operation on the upsampled tensor.
Type:
Application
Filed:
November 30, 2023
Publication date:
July 4, 2024
Applicants:
SAMSUNG ELECTRONICS CO., LTD., Seoul National University R&DB Foundation
Inventors:
Hanwoong JUNG, Soonhoi HA, Donghyun KANG
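For intuition, here is a minimal numpy sketch of the operation described in the abstract above: nearest-neighbour upsampling by pixel copying followed by a pooling operation whose window size tracks the same scale factor. Function names and the choice of average pooling are illustrative assumptions.

```python
import numpy as np

def upsample_by_copy(x, scale):
    """Copy each pixel scale x scale times (nearest-neighbour upsampling)."""
    return np.repeat(np.repeat(x, scale, axis=0), scale, axis=1)

def avg_pool(x, k):
    """Average pooling with a k x k window and stride k."""
    h, w = x.shape
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
scale = 2
y = avg_pool(upsample_by_copy(x, scale), k=scale)  # pooling window follows the scale factor
print(np.allclose(x, y))  # True: copy-then-pool with matching windows is an identity here
```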
Abstract: An equalizer includes an amplifying adder configured to generate an output signal by operating on differential input signals and a signal generated by applying an equalization coefficient to a post data signal; and a comparator configured to generate a current data signal by sampling the output signal according to a clock signal, wherein the amplifying adder has a maximum gain when a difference of the differential input signals is within a predetermined range.
Type:
Application
Filed:
July 12, 2023
Publication date:
July 4, 2024
Applicant:
Seoul National University R&DB Foundation
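As a rough behavioural illustration only (not the disclosed circuit), the abstract above can be read as a one-tap decision-feedback equalizer: an adder combines the differential input with an equalization coefficient applied to the previous data decision, and a clocked comparator slices the result. The coefficient and channel model below are assumptions.

```python
import numpy as np

def dfe(diff_input, coeff=0.3):
    decisions = []
    prev = 0.0                      # previous data decision (post data signal)
    for sample in diff_input:       # one sample per clock edge
        summed = sample - coeff * prev          # amplifying adder output
        current = 1.0 if summed > 0 else -1.0   # comparator slices the output
        decisions.append(current)
        prev = current
    return decisions

bits = np.array([1, -1, -1, 1, 1, -1], dtype=float)
channel = bits + 0.3 * np.concatenate(([0.0], bits[:-1]))  # simple post-cursor ISI
print(dfe(channel))  # recovers the transmitted sequence under this toy channel
```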
Abstract: An apparatus and method with encrypted data neural network operation are provided. The apparatus includes one or more processors configured to execute instructions and one or more memories storing the instructions, wherein the execution of the instructions by the one or more processors configures the one or more processors to generate a target approximate polynomial that approximates a neural network operation of a portion of a neural network model, using a target approximation region determined for the target approximate polynomial based on a first approximate polynomial generated based on parameters corresponding to a generation of the first approximate polynomial, a maximum value of input data to the portion of the neural network model, and a minimum value of the input data, and to generate a neural network operation result using the target approximate polynomial and the input data.
Type:
Application
Filed:
October 17, 2023
Publication date:
June 27, 2024
Applicants:
SAMSUNG ELECTRONICS CO., LTD., Seoul National University R&DB Foundation, Daegu Gyeongbuk Institute of Science and Technology, Industry Academic Cooperation Foundation, Chosun University
Inventors:
Jong-Seon NO, Junghyun LEE, Yongjune KIM, Joon-Woo LEE, Young Sik KIM, Eunsang LEE
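The general idea of approximating a non-polynomial operation over a region bounded by the input's minimum and maximum can be sketched as follows. This is a hedged example, not the patented construction: the activation (ReLU), polynomial degree, and margin are illustrative choices.

```python
import numpy as np

def target_region(inputs, margin=0.1):
    """Widen the observed [min, max] of the input slightly to form the region."""
    lo, hi = inputs.min(), inputs.max()
    pad = margin * (hi - lo)
    return lo - pad, hi + pad

def approximate_relu(lo, hi, degree=7):
    """Least-squares polynomial approximation of ReLU on [lo, hi]."""
    xs = np.linspace(lo, hi, 1024)
    return np.polynomial.Polynomial.fit(xs, np.maximum(xs, 0.0), degree)

inputs = np.random.randn(10_000)
poly = approximate_relu(*target_region(inputs))
x = np.linspace(-2, 2, 5)
print(np.round(poly(x), 3), np.maximum(x, 0.0))  # approximation vs. exact ReLU
```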
Abstract: An apparatus includes one or more processors configured to execute instructions; and one or more memories storing the instructions; wherein the execution of the instructions by the one or more processors configures the one or more processors to generate an approximate polynomial, approximating a neural network operation, of a portion of a deep neural network model that is configured to receive input data, by using weighted least squares based on parameters corresponding to the generation of the approximate polynomial, a mean of the input data, and a standard deviation of the input data; and generate a homomorphic encrypted data operation result based on the input data and the approximate polynomial that approximates the neural network operation.
Type:
Application
Filed:
September 20, 2023
Publication date:
June 27, 2024
Applicants:
SAMSUNG ELECTRONICS CO., LTD., Seoul National University R&DB Foundation, Daegu Gyeongbuk Institute of Science and Technology, Industry Academic Cooperation Foundation, Chosun University
Inventors:
Jong Seon NO, Yongjune KIM, Eun Sang LEE, Jung Hyun LEE, Young Sik KIM, Joon Woo LEE
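A minimal sketch of the weighted-least-squares idea, assuming a Gaussian weighting built from the input's mean and standard deviation so that the polynomial fit is tightest where inputs actually concentrate; the activation, degree, and weighting form are illustrative, not the patent's specifics.

```python
import numpy as np

def weighted_poly_fit(mean, std, degree=7, func=lambda x: np.maximum(x, 0.0)):
    xs = np.linspace(mean - 4 * std, mean + 4 * std, 2048)
    w = np.exp(-0.5 * ((xs - mean) / std) ** 2)            # Gaussian weights from mean/std
    return np.polynomial.Polynomial.fit(xs, func(xs), degree, w=np.sqrt(w))

data = np.random.randn(50_000) * 1.5 + 0.2
poly = weighted_poly_fit(data.mean(), data.std())
x = np.linspace(-1, 1, 5)
print(np.round(poly(x), 3), np.maximum(x, 0.0))  # approximation vs. exact ReLU
```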
Abstract: A neuron circuit includes: a membrane circuit configured to receive a weighted synaptic current from a synaptic array and receive an adaptive current from an adaptive circuit; a comparator circuit configured to control a pulse generation circuit in response to a voltage of the membrane circuit exceeding a predetermined threshold voltage; the pulse generation circuit configured to control the membrane circuit and the adaptive circuit based on an output signal from the comparator circuit and generate a pulse comprising a firing pattern; and the adaptive circuit, connected to the membrane circuit and the pulse generation circuit, and configured to determine the firing pattern of the pulse generation circuit.
Type:
Application
Filed:
June 1, 2023
Publication date:
June 27, 2024
Applicants:
SAMSUNG ELECTRONICS CO., LTD., Seoul National University R&DB Foundation
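The circuit blocks above map naturally onto an adaptive leaky integrate-and-fire model; the Python sketch below is a behavioural stand-in with arbitrary time constants and thresholds, not the disclosed circuit. The membrane integrates the synaptic current minus an adaptation current, a comparator detects threshold crossings, and each generated pulse resets the membrane and strengthens the adaptation, which shapes the firing pattern.

```python
import numpy as np

def adaptive_lif(i_syn, dt=1e-3, tau_m=20e-3, tau_a=100e-3, v_th=1.0, beta=0.5):
    v, a, spikes = 0.0, 0.0, []
    for i in i_syn:                                  # weighted synaptic current per step
        v += dt / tau_m * (-v + i - a)               # membrane integrates input minus adaptation
        if v > v_th:                                 # comparator: threshold crossing
            spikes.append(1)                         # pulse generation
            v = 0.0                                  # pulse resets the membrane
            a += beta                                # and bumps the adaptive current
        else:
            spikes.append(0)
        a += dt / tau_a * (-a)                       # adaptation decays between spikes
    return spikes

spikes = adaptive_lif(np.full(500, 2.5))
print(sum(spikes), "spikes; inter-spike intervals lengthen as adaptation builds up")
```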
Abstract: An apparatus includes: memories storing data to perform a neural network operation; processors to generate a neural network operation result by reading the data and performing a neural network operation; and crossbars processing data transmission between the processors and the memories, wherein the crossbars include: a first crossbar of a first group processing data transmission between a first group of the processors and a first group of the memories; a second crossbar of a second group processing data transmission between a second group of the processors and a second group of the memories, wherein the first group of processors does not include any processors that are in the second group of processors and the first group of memories does not include any memories that are in the second group of memories; and a third crossbar connecting the first crossbar to the second crossbar.
Type:
Application
Filed:
July 27, 2023
Publication date:
June 27, 2024
Applicants:
SAMSUNG ELECTRONICS CO., LTD., Seoul National University R&DB Foundation
Inventors:
Hanwoong JUNG, Soonhoi HA, Keonjoo LEE, Changjae YI
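A toy routing model, assumed purely for illustration, shows how the hierarchy described above keeps local traffic on a local crossbar and sends cross-group traffic through the third (bridging) crossbar.

```python
def route(processor, memory, groups):
    """Return the crossbars a request traverses from a processor to a memory."""
    p_group = next(g for g, (procs, _) in groups.items() if processor in procs)
    m_group = next(g for g, (_, mems) in groups.items() if memory in mems)
    if p_group == m_group:
        return [f"crossbar_{p_group}"]                       # local traffic stays local
    return [f"crossbar_{p_group}", "crossbar_bridge", f"crossbar_{m_group}"]

groups = {
    1: ({"P0", "P1"}, {"M0", "M1"}),   # first group of processors/memories
    2: ({"P2", "P3"}, {"M2", "M3"}),   # second, disjoint group
}
print(route("P0", "M1", groups))  # ['crossbar_1']
print(route("P0", "M3", groups))  # ['crossbar_1', 'crossbar_bridge', 'crossbar_2']
```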
Abstract: A treatment of rejection of a transplant by a recipient of the transplant is disclosed. Methods are provided for prolonging transplant survival in a recipient of the transplant, prolonging survival of the recipient, delaying and/or suppressing delayed graft function in the recipient, and/or reducing the amount of an immunosuppressant administered for transplantation. The methods include providing the transplant with CHP or a pharmaceutically acceptable salt thereof. Also provided is a method for the production of a pharmaceutical composition for the treatment of a transplant, allowing modulation of transplant survival in a recipient of the transplant.
Type:
Application
Filed:
December 8, 2023
Publication date:
June 20, 2024
Applicants:
NovMetaPharma Co., Ltd., Seoul National University R&DB Foundation, SEOUL NATIONAL UNIVERSITY HOSPITAL
Inventors:
Seung Hee Yang, Hoeyune Jung, Jong Min Kim, Yon Su Kim, Heonjong Lee
Abstract: A method of processing data is performed by a computing device including processing hardware and storage hardware, the method including: converting, by the processing hardware, a neural network, stored in the storage hardware, from a first neural network format into a second neural network format; obtaining, by the processing hardware, information about hardware configured to perform a neural network operation for the neural network and obtaining partition information; dividing the neural network in the second neural network format into partitions, wherein the dividing is based on the information about the hardware and the partition information, wherein each partition includes a respective layer with an input thereto and an output thereof; optimizing each of the partitions based on a relationship between the input and the output of the corresponding layer; and converting the optimized partitions into the first neural network format.
Type:
Application
Filed:
July 17, 2023
Publication date:
June 20, 2024
Applicants:
SAMSUNG ELECTRONICS CO., LTD., Seoul National University R&DB Foundation
Inventors:
Seok-Young YOON, Bernhard EGGER, Hyemi MIN, Jaume Mateu CUADRAT
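The processing flow above can be summarized in a short sketch; every function here is an assumed placeholder rather than a real converter or optimizer API.

```python
def process_model(model, hw_info, partition_info,
                  to_second_format, to_first_format, optimize_partition):
    converted = to_second_format(model)                       # first -> second format
    partitions = [{"layer": layer, "hw": hw_info, "hint": partition_info}
                  for layer in converted["layers"]]           # one partition per layer
    optimized = [optimize_partition(p) for p in partitions]   # per-partition optimization
    return to_first_format({"layers": [p["layer"] for p in optimized]})

result = process_model(
    {"layers": ["conv1", "relu1", "conv2"]},
    hw_info={"npu_cores": 4}, partition_info={"granularity": "layer"},
    to_second_format=dict, to_first_format=dict,   # stand-in format converters
    optimize_partition=lambda p: p,                # stand-in optimizer
)
print(result)
```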
Abstract: The present disclosure relates to a binder for an all-solid-state lithium secondary battery in which the carboxyl group of a mercaptocarboxylic acid is grafted onto a butadiene polymer, wherein a molar ratio of the carboxyl group to the butadiene polymer is 0.1-30:100; a binder for an all-solid-state lithium secondary battery containing the same; an all-solid-state lithium secondary battery electrode composite containing the same; a separator in an all-solid-state lithium secondary battery containing the same; and an all-solid-state lithium secondary battery containing the same. Since the binder for an all-solid-state lithium secondary battery of the present disclosure can satisfy stability, solubility, polarity and adhesive properties at the same time, it can significantly accelerate the commercialization of an all-solid-state battery based on a wet process.
Type:
Application
Filed:
April 4, 2022
Publication date:
June 13, 2024
Applicant:
Seoul National University R&DB Foundation
Abstract: A method and apparatus with scheduling a neural network (NN), which relate to extracting and scheduling priorities of operation sets, are provided. A scheduler may be configured to receive a loop structure corresponding to a NN model, generate a plurality of operation sets based on the loop structure, generate a priority table for the operation sets based on memory benefits of the operation sets, and schedule the operation sets based on the priority table.
Type:
Application
Filed:
November 3, 2023
Publication date:
June 13, 2024
Applicants:
SAMSUNG ELECTRONICS CO., LTD., Seoul National University R&DB Foundation
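A compact sketch of the priority idea, with assumed data structures: operation sets extracted from the loop structure are ranked by an estimated memory benefit (e.g., bytes of intermediate data kept on-chip) and scheduled greedily in that order.

```python
from collections import namedtuple

OpSet = namedtuple("OpSet", "name memory_benefit")

def build_priority_table(op_sets):
    """Priority table: higher memory benefit comes first."""
    return sorted(op_sets, key=lambda s: s.memory_benefit, reverse=True)

def schedule(op_sets):
    return [s.name for s in build_priority_table(op_sets)]

op_sets = [OpSet("matmul+add", 4096), OpSet("conv+relu", 16384), OpSet("softmax", 1024)]
print(schedule(op_sets))  # ['conv+relu', 'matmul+add', 'softmax']
```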
Abstract: Apparatuses and methods for drawing a quantization configuration are disclosed. A method may include generating genes by cataloging possible combinations of a quantization precision and a calibration method for each of the layers of a pre-trained neural network, determining layer sensitivity for each of the layers based on the combinations corresponding to the genes, determining priorities of the genes and selecting some of the genes based on their respective priorities, generating progeny genes by performing crossover on the selected genes, calculating layer sensitivity for each of the layers corresponding to a combination resulting from the crossover, and updating one or more of the genes using the progeny genes based on a comparison of the layer sensitivity of the genes and the layer sensitivity of the progeny genes.
Type:
Application
Filed:
May 19, 2023
Publication date:
June 6, 2024
Applicants:
SAMSUNG ELECTRONICS CO., LTD., Seoul National University R&DB Foundation
Inventors:
Seok-Young YOON, Bernhard EGGER, Daon PARK, Jungyoon KWON, Hyemi MIN
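The genetic-algorithm flow described above can be illustrated with the short sketch below; the candidate precisions, calibration methods, and the sensitivity proxy are all stand-ins, not the disclosed measurements.

```python
import random

PRECISIONS = [4, 8, 16]
CALIBRATIONS = ["minmax", "percentile"]

def random_gene(num_layers):
    """One gene: a (precision, calibration) choice per layer."""
    return [(random.choice(PRECISIONS), random.choice(CALIBRATIONS)) for _ in range(num_layers)]

def sensitivity(gene):
    # stand-in for measured layer sensitivity: lower precision -> higher (worse) score
    return sum(16 / bits for bits, _ in gene)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(num_layers=6, population=8, generations=20):
    genes = [random_gene(num_layers) for _ in range(population)]
    for _ in range(generations):
        genes.sort(key=sensitivity)                    # priority: lowest sensitivity first
        parents = genes[: population // 2]             # select the better half
        children = [crossover(random.choice(parents), random.choice(parents))
                    for _ in range(population - len(parents))]
        # keep a progeny gene only if it beats the gene it would replace
        genes = parents + [min(c, genes[-1 - i], key=sensitivity)
                           for i, c in enumerate(children)]
    return min(genes, key=sensitivity)

print(evolve())
```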
Abstract: A device and method with batch normalization are provided. An accelerator includes: core modules, each core module including a respective plurality of cores configured to perform a first convolution operation using feature map data and a weight; local reduction operation modules adjacent to the respective core modules, each including a respective plurality of local reduction operators configured to perform a first local operation that obtains first local statistical values of the corresponding core module; a global reduction operation module configured to perform a first global operation that generates first global statistical values of the core module based on the first local statistical values of the core modules; and a normalization operation module configured to perform a first normalization operation on the feature map data based on the first global statistical values.
Type:
Application
Filed:
December 1, 2023
Publication date:
June 6, 2024
Applicants:
Samsung Electronics Co., Ltd., Seoul National University R&DB Foundation
Inventors:
Jung Ho AHN, Sun Jung LEE, Jae Wan CHOI, Seung Hwan HWANG
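The two-level reduction described above is easy to picture in numpy: each core module computes local partial sums (local reduction), a global reduction combines them into a global mean and variance, and the normalization step applies those statistics back to the feature map. The shapes and slicing below are illustrative.

```python
import numpy as np

def local_stats(x):
    """Per-core-module partial sums over its slice of the feature map."""
    return x.sum(), (x ** 2).sum(), x.size

def global_stats(local):
    s, sq, n = map(sum, zip(*local))
    mean = s / n
    var = sq / n - mean ** 2
    return mean, var

def normalize(x, mean, var, eps=1e-5):
    return (x - mean) / np.sqrt(var + eps)

feature_map = np.random.randn(4, 64, 8, 8)              # one channel split across 4 core modules
locals_ = [local_stats(part) for part in feature_map]   # local reduction per core module
mean, var = global_stats(locals_)                       # global reduction
out = normalize(feature_map, mean, var)                 # normalization with global statistics
print(np.round(out.mean(), 6), np.round(out.std(), 4))  # ~0 and ~1 after normalization
```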
Abstract: A device for generating a high-resolution frame includes a plurality of alignment circuits configured to generate a plurality of aligned frames by blending a reference frame and a plurality of neighboring frames neighboring the reference frame; and a reconstruction circuit configured to generate the high-resolution frame corresponding to the reference frame according to the reference frame and the plurality of aligned frames. The plurality of alignment circuits and the reconstruction circuit each include neural networks.
Type:
Grant
Filed:
August 13, 2021
Date of Patent:
May 28, 2024
Assignees:
SK hynix Inc., Seoul National University R&DB Foundation
Inventors:
Haesoo Chung, Sang-hoon Lee, Nam Ik Cho
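As a very rough stand-in for the data flow above (with no learned networks), each "alignment" here blends a neighbouring frame with the reference frame and the "reconstruction" fuses the aligned frames into an upscaled output; the blending weight and the pixel-copy upscaler are assumptions.

```python
import numpy as np

def align(reference, neighbor, blend=0.5):
    """Blend a neighbouring frame toward the reference (stand-in for a learned aligner)."""
    return blend * reference + (1.0 - blend) * neighbor

def reconstruct(reference, aligned, scale=2):
    """Average the aligned frames with the reference and upscale by pixel copying."""
    fused = np.mean([reference] + aligned, axis=0)
    return np.repeat(np.repeat(fused, scale, axis=0), scale, axis=1)

frames = [np.random.rand(16, 16) for _ in range(5)]
reference, neighbors = frames[2], frames[:2] + frames[3:]
aligned = [align(reference, n) for n in neighbors]      # one alignment per neighbouring frame
hi_res = reconstruct(reference, aligned)
print(hi_res.shape)  # (32, 32)
```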
Abstract: A processor-implemented method includes generating respective final feature vectors of a plurality of frames of time-series data, while sequentially processing the plurality of frames by using a neural network comprising a plurality of layers, determining a class of the time-series data based on at least one final feature vector of the respective final feature vectors, generating a reference feature vector based on the at least one final feature vector, calculating a similarity score between the reference feature vector and a feature vector of at least one second frame, wherein the second frame includes a non-final feature frame where the final feature vector is not generated, and determining the at least one second frame to be the frame corresponding to the class, based on a result of comparing the similarity score and a threshold value.
Type:
Application
Filed:
July 20, 2023
Publication date:
May 23, 2024
Applicants:
SAMSUNG ELECTRONICS CO., LTD., Seoul National University R&DB Foundation
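The comparison step in the abstract above can be sketched with cosine similarity: a reference feature vector built from the final feature vector is compared against earlier (non-final) frame features, and frames whose similarity exceeds a threshold are attributed to the same class. The threshold and feature dimensions are illustrative.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def frames_matching_class(frame_features, final_feature, threshold=0.8):
    reference = final_feature / np.linalg.norm(final_feature)   # reference feature vector
    return [i for i, f in enumerate(frame_features) if cosine(reference, f) >= threshold]

rng = np.random.default_rng(0)
final = rng.normal(size=64)
frames = [final + rng.normal(scale=0.2, size=64) for _ in range(8)] + [rng.normal(size=64)]
print(frames_matching_class(frames, final))  # the noisy copies of the final feature match
```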
Abstract: The array antenna system according to an embodiment includes an active radiation layer including a plurality of unit cells and a control circuit to control properties of each unit cell, a plurality of patch antennas placed on each unit cell, and a feed line to feed waves for excitation of the plurality of patch antennas through the active radiation layer, wherein each unit cell is controlled to have different radiation properties by the control circuit, and beam steering and impedance control of the array antenna system are enabled by control of the active radiation layer. According to the embodiment, power consumption is much lower than that of an existing beamforming circuit, and the use of a single feed line reduces the complexity of the system design.
Type:
Grant
Filed:
March 18, 2021
Date of Patent:
May 21, 2024
Assignee:
Seoul National University R&DB Foundation
Abstract: The present invention relates to a wearable device using a flexible non-powered variable impedance mechanism, wherein the device can induce a user to adopt a correct posture during a squat exercise or lifting work and can assist the user's muscular strength. According to the present invention, an angle between a (1-1)th lower string and a (1-2)th lower string and an angle between a (2-1)th lower string and a (2-2)th lower string change according to a knee angle depending on the user's posture, whereby the impedance that the user feels through the body changes.
Type:
Grant
Filed:
October 30, 2019
Date of Patent:
May 21, 2024
Assignee:
Seoul National University R&DB Foundation