Patents by Inventor Jungwook CHOI
Jungwook CHOI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12217158
Abstract: An apparatus includes circuitry for a neural network that is configured to perform forward propagation neural network operations on floating point numbers having a first n-bit floating point format. The first n-bit floating point format has a configuration consisting of a sign bit, m exponent bits and p mantissa bits where m is greater than p. The circuitry is further configured to perform backward propagation neural network operations on floating point numbers having a second n-bit floating point format that is different than the first n-bit floating point format. The second n-bit floating point format has a configuration consisting of a sign bit, q exponent bits and r mantissa bits where q is greater than m and r is less than p.
Type: Grant
Filed: September 3, 2019
Date of Patent: February 4, 2025
Assignee: International Business Machines Corporation
Inventors: Xiao Sun, Jungwook Choi, Naigang Wang, Chia-Yu Chen, Kailash Gopalakrishnan
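As a concrete illustration of the constraints above: an 8-bit forward format of 1 sign, 4 exponent, 3 mantissa bits (m = 4 > p = 3) and a backward format of 1 sign, 5 exponent, 2 mantissa bits (q = 5 > m, r = 2 < p) both satisfy them. The sketch below, with function names and bit widths of my own choosing (not necessarily the patented formats), rounds a value to the nearest normal number of a given (exponent bits, mantissa bits) format:

```python
import math

def quantize(x, ebits, mbits, bias=None):
    """Round x to the nearest value representable with a sign bit,
    `ebits` exponent bits and `mbits` mantissa bits (normal numbers only)."""
    if bias is None:
        bias = 2 ** (ebits - 1) - 1          # IEEE-style default bias
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    e = math.floor(math.log2(abs(x)))
    # clamp the exponent to the representable range
    e = max(min(e, (2 ** ebits - 2) - bias), 1 - bias)
    step = 2.0 ** (e - mbits)                # spacing of representable values
    q = round(abs(x) / step) * step
    # saturate at the largest representable magnitude
    max_val = (2.0 - 2.0 ** -mbits) * 2.0 ** ((2 ** ebits - 2) - bias)
    return sign * min(q, max_val)
```

Quantizing the same value to both formats shows the trade: the forward (1-4-3) format resolves it more finely, while the backward (1-5-2) format trades mantissa precision for a wider exponent range, as gradients require.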
-
Patent number: 12175359
Abstract: An apparatus for training and inferencing a neural network includes circuitry that is configured to generate a first weight having a first format including a first number of bits based at least in part on a second weight having a second format including a second number of bits and a residual having a third format including a third number of bits. The second number of bits and the third number of bits are each less than the first number of bits. The circuitry is further configured to update the second weight based at least in part on the first weight and to update the residual based at least in part on the updated second weight and the first weight. The circuitry is further configured to update the first weight based at least in part on the updated second weight and the updated residual.
Type: Grant
Filed: September 3, 2019
Date of Patent: December 24, 2024
Assignee: International Business Machines Corporation
Inventors: Xiao Sun, Jungwook Choi, Naigang Wang, Chia-Yu Chen, Kailash Gopalakrishnan
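A hedged sketch of the splitting idea: a higher-precision weight is carried as a lower-precision weight plus a lower-precision residual, and the pair reconstructs the original to within the residual grid. The fixed-point grids and bit widths below are my own illustrative choices, not the patent's formats:

```python
def round_to_bits(x, frac_bits):
    """Round x onto a fixed-point grid with `frac_bits` fractional bits."""
    step = 2.0 ** -frac_bits
    return round(x / step) * step

def split_weight(w, weight_frac_bits=8, residual_frac_bits=16):
    """Represent a high-precision weight as a low-precision weight plus a
    residual; w is recoverable as (w_low + residual) up to the residual grid."""
    w_low = round_to_bits(w, weight_frac_bits)
    residual = round_to_bits(w - w_low, residual_frac_bits)
    return w_low, residual
```

The residual is bounded by the low-precision rounding error, so it fits in a short format of its own; summing the two recovers the high-precision value.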
-
Patent number: 12141513
Abstract: A method for improving performance of a predefined Deep Neural Network (DNN) convolution processing on a computing device includes inputting parameters, as input data into a processor on a computer that formalizes a design space exploration of a convolution mapping, on a predefined computer architecture that will execute the predefined convolution processing. The parameters are predefined as guided by a specification for the predefined convolution processing to be implemented by the convolution mapping and by a microarchitectural specification for the processor that will execute the predefined convolution processing. The processor calculates performance metrics for executing the predefined convolution processing on the computing device, as functions of the predefined parameters, as proxy estimates of performance of different possible design choices to implement the predefined convolution processing.
Type: Grant
Filed: October 31, 2018
Date of Patent: November 12, 2024
Assignee: International Business Machines Corporation
Inventors: Chia-Yu Chen, Jungwook Choi, Kailash Gopalakrishnan, Vijayalakshmi Srinivasan, Swagath Venkataramani, Jintao Zhang
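To make the idea concrete, here is a toy design-space exploration in the same spirit: enumerate candidate tilings of a convolution layer onto a fixed on-chip buffer and rank each mapping by a proxy metric (buffered elements per multiply-accumulate). The tiling knobs and the cost model are my own illustrative assumptions, not the patent's parameters:

```python
from itertools import product

def explore_tilings(out_ch, in_ch, out_hw, k, buf_elems):
    """Enumerate output-channel / spatial tile sizes for a k x k convolution
    and return (score, t_oc, t_hw) for the mapping with the lowest proxy
    cost (elements held on chip per MAC), or None if nothing fits."""
    best = None
    for t_oc, t_hw in product([1, 2, 4, 8, 16], repeat=2):
        if t_oc > out_ch or t_hw > out_hw:
            continue
        # on-chip footprint of one tile: weights + input patch + outputs
        weights = t_oc * in_ch * k * k
        inputs = in_ch * (t_hw + k - 1) ** 2
        outputs = t_oc * t_hw * t_hw
        if weights + inputs + outputs > buf_elems:
            continue
        macs = t_oc * t_hw * t_hw * in_ch * k * k
        score = (weights + inputs + outputs) / macs   # lower is better
        if best is None or score < best[0]:
            best = (score, t_oc, t_hw)
    return best
```

Shrinking the buffer excludes the larger tiles and forces a mapping with a worse proxy score, which is exactly the kind of design trade-off such an exploration surfaces before any hardware is built.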
-
Publication number: 20240338419
Abstract: A method of convolution operation based on sparse data using an artificial neural network comprises: a step of extracting index information, location information about valid data where actual data exists in the input data; a step of generating first location information including computable row information where actual operations are performed in a kernel, based on the path along which the kernel moves to perform a convolution operation on the input data and on the index information; a step of generating second location information including computable column information where an actual operation is performed in the kernel, based on the first location information, the index information, and the kernel size; a step of generating an operation rule for each point of the valid data and the convolution output data based on the index information and the first and second location information; and a step of performing the convolution operation based on the operation rule.
Type: Application
Filed: June 17, 2024
Publication date: October 10, 2024
Inventors: Minjae Lee, Janghwan Lee, Jun Won Choi, Jungwook Choi
-
Patent number: 12056594
Abstract: A compensated deep neural network (compensated-DNN) is provided. A first vector having a set of components and a second vector having a set of corresponding components are received. A component of the first vector includes a first quantized value and a first compensation instruction, and a corresponding component of the second vector includes a second quantized value and a second compensation instruction. The first quantized value is multiplied with the second quantized value to compute a raw product value. The raw product value is compensated for a quantization error according to the first and second compensation instructions to produce a compensated product value. The compensated product value is added into an accumulated value for the dot product. The accumulated value is converted into an output vector of the dot product. The output vector includes an output quantized value and an output compensation instruction.
Type: Grant
Filed: June 27, 2018
Date of Patent: August 6, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Swagath Venkataramani, Shubham Jain, Vijayalakshmi Srinivasan, Jungwook Choi, Leland Chang
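An illustrative sketch of the compensated multiply-accumulate: each component carries a quantized value plus a compensation term (here, my own simplification of the "compensation instruction": the sign of the quantization error times its expected magnitude, step/4), and each raw product is corrected by the first-order error terms before accumulation:

```python
def encode(x, step=1 / 16):
    """Quantize x to the nearest multiple of `step`, keeping a compensation
    term: the sign of the quantization error times its expected magnitude."""
    q = round(x / step) * step
    err = x - q
    c = (step / 4) * (1 if err > 0 else -1 if err < 0 else 0)
    return q, c

def compensated_dot(xs, ys):
    """Dot product over encoded components: each raw product q1*q2 is
    compensated by the first-order error terms q1*c2 + q2*c1."""
    acc = 0.0
    for (q1, c1), (q2, c2) in zip(xs, ys):
        acc += q1 * q2 + q1 * c2 + q2 * c1
    return acc
```

Because the true product is (q1 + e1)(q2 + e2) = q1·q2 + q1·e2 + q2·e1 + e1·e2, approximating each error by its sign and expected magnitude cancels most of the first-order quantization error at the cost of a few extra bits per component.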
-
Publication number: 20240152753
Abstract: Disclosed is a processor implemented method that includes calculating a quantization error for each channel of a neural network using activation data output from a first layer of the neural network and a quantization scale of a second layer connected to the first layer, calculating a final loss using a regularization loss term determined based on the quantization error for each channel, and updating a batch norm parameter of the first layer in a direction to decrease the final loss.
Type: Application
Filed: October 27, 2023
Publication date: May 9, 2024
Applicants: SAMSUNG ELECTRONICS CO., LTD., IUCF-HYU (Industry-University Cooperation Foundation Hanyang University)
Inventors: Jungwook CHOI, Seongmin PARK
-
Patent number: 11977974
Abstract: A system, having a memory that stores computer executable components, and a processor that executes the computer executable components, reduces data size in connection with training a neural network by exploiting spatial locality in weight matrices and applying frequency transformation and compression. A receiving component receives neural network data in the form of a compressed frequency-domain weight matrix. A segmentation component segments the initial weight matrix into original sub-components, wherein respective original sub-components have spatial weights. A sampling component applies a generalized weight distribution to the respective original sub-components to generate respective normalized sub-components. A transform component applies a transform to the respective normalized sub-components.
Type: Grant
Filed: November 30, 2017
Date of Patent: May 7, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Chia-Yu Chen, Jungwook Choi, Kailash Gopalakrishnan, Suyog Gupta, Pritish Narayanan
-
Publication number: 20240028888
Abstract: A method for quantization learning by a model quantizer that is operating in a computer system and compressing a transformer model. The method may include generating a student model through quantization of the transformer model, performing a first quantization learning by inserting a self-attention map of a teacher model into a self-attention map of the student model, and performing a second quantization learning using a knowledge distillation method so that the self-attention map of the student model follows the self-attention map of the teacher model.
Type: Application
Filed: January 26, 2023
Publication date: January 25, 2024
Applicants: SAMSUNG ELECTRONICS CO., LTD., IUCF-HYU (Industry-University Cooperation Foundation Hanyang University)
Inventors: Yongsuk Kwon, Jungwook Choi, Minsoo Kim, Seongmin Park
-
Publication number: 20230306242
Abstract: An apparatus and method with neural network operation are provided. A computing apparatus includes one or more processors, storage hardware storing instructions configured to, when executed by the one or more processors, cause the one or more processors to: extract calibration data from training data that is for training a main neural network, based on the calibration data, generate a look up table (LUT) for performing a non-linear function of the main neural network through an auxiliary network corresponding to a layer of the main neural network, and update a parameter of the LUT based on an output of the non-linear function and based on an output of the auxiliary network.
Type: Application
Filed: February 21, 2023
Publication date: September 28, 2023
Applicants: SAMSUNG ELECTRONICS CO., LTD., IUCF-HYU (Industry-University Cooperation Foundation Hanyang University)
Inventors: Jungwook CHOI, Seongmin PARK
-
Publication number: 20230118505
Abstract: A neural network operation apparatus may include a receiver configured to receive input data to perform the neural network operation and a quantized Look Up Table (LUT) corresponding to a non-linear function comprised in the neural network operation, and a processor configured to perform scale-up on the input data based on a scale factor, to extract a quantized LUT parameter from the quantized LUT based on scaled-up input data, and to generate an operation result by performing a neural network operation based on the quantized LUT parameter.
Type: Application
Filed: August 12, 2022
Publication date: April 20, 2023
Applicants: Samsung Electronics Co., Ltd., IUCF-HYU (Industry-University Cooperation Foundation Hanyang University)
Inventors: Donghyun Lee, Joonsang Yu, Junki Park, Jungwook Choi
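A minimal sketch of the quantized-LUT approach, using sigmoid as a stand-in non-linear function: the table entries are quantized to a fixed-point grid, and evaluation scales the input onto the table's index range before the lookup. Table size, grid, and the scale-up step are my own assumptions:

```python
import math

def build_lut(fn, lo, hi, entries=257, frac_bits=8):
    """Tabulate fn on [lo, hi] and quantize each entry to a fixed-point
    grid with `frac_bits` fractional bits (sizes are illustrative)."""
    step = (hi - lo) / (entries - 1)
    grid = 2.0 ** -frac_bits
    table = [round(fn(lo + i * step) / grid) * grid for i in range(entries)]
    return table, lo, step

def lut_eval(lut, x):
    """Evaluate the non-linear function via the quantized LUT:
    scale the input onto the index range, clamp, and pick the entry."""
    table, lo, step = lut
    i = round((x - lo) / step)            # scale-up, then index
    i = max(0, min(len(table) - 1, i))    # clamp out-of-range inputs
    return table[i]
```

Out-of-range inputs clamp to the end entries, which is safe for saturating functions like sigmoid; a finer grid or more entries buys accuracy at the cost of storage.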
-
Patent number: 11620132
Abstract: Various embodiments are provided for reusing an operand in an instruction set architecture (ISA) by one or more processors in a computing system. An instruction may specify that an operand register for a selected operand retain operand data used by a previous instruction. The operand data in the operand register may be reused by the instruction.
Type: Grant
Filed: May 8, 2019
Date of Patent: April 4, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Bruce Fleischer, Sunil Shukla, Vijayalakshmi Srinivasan, Jungwook Choi
-
Patent number: 11620105
Abstract: In an embodiment, a method includes configuring a specialized circuit for floating point computations using numbers represented by a hybrid format, wherein the hybrid format includes a first format and a second format. In the embodiment, the method includes operating the further configured specialized circuit to store an approximation of a numeric value in the first format during a forward pass for training a deep learning network. In the embodiment, the method includes operating the further configured specialized circuit to store an approximation of a second numeric value in the second format during a backward pass for training the deep learning network.
Type: Grant
Filed: December 21, 2020
Date of Patent: April 4, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Naigang Wang, Jungwook Choi, Kailash Gopalakrishnan, Ankur Agrawal, Silvia Melitta Mueller
-
Patent number: 11610101
Abstract: A neuromorphic device includes a plurality of first control lines, a plurality of second control lines and a matrix of resistive processing unit cells. Each resistive processing unit cell is electrically connected with one of the first control lines and one of the second control lines. A given resistive processing unit cell includes a first resistive device and a second resistive device. The first resistive device is a positively weighted resistive device and the second resistive device is a negatively weighted resistive device.
Type: Grant
Filed: August 30, 2019
Date of Patent: March 21, 2023
Assignee: International Business Machines Corporation
Inventors: Youngseok Kim, Jungwook Choi, Seyoung Kim, Chun-Chen Yeh
-
Patent number: 11604647
Abstract: An apparatus includes a memory and a processor coupled to the memory. The processor includes first and second sets of arithmetic units having first and second precision for floating-point computations, the second precision being lower than the first precision. The processor is configured to obtain a machine learning model trained in the first precision, to utilize the second set of arithmetic units to perform inference on input data, to utilize the first set of arithmetic units to generate feedback for updating parameters of the second set of arithmetic units based on the inference performed on the input data by the second set of arithmetic units, to tune parameters of the second set of arithmetic units based at least in part on the feedback generated by the first set of arithmetic units, and to utilize the second set of arithmetic units with the tuned parameters to generate inference results.
Type: Grant
Filed: September 3, 2019
Date of Patent: March 14, 2023
Assignee: International Business Machines Corporation
Inventors: Xiao Sun, Chia-Yu Chen, Naigang Wang, Jungwook Choi, Kailash Gopalakrishnan
-
Patent number: 11551077
Abstract: Techniques for statistics-aware weight quantization are presented. To facilitate reducing the bit precision of weights, for a set of weights, a quantizer management component can estimate a quantization scale value to apply to a weight as a linear or non-linear function of the mean of a square of a weight value of the weight and the mean of an absolute value of the weight value, wherein the quantization scale value is determined to have a smaller quantization error than all, or at least almost all, other quantization errors associated with other quantization scale values. A quantizer component applies the quantization scale value to symmetrically and/or uniformly quantize weights of a layer of the set of weights to generate quantized weights, the weights being quantized using rounding. The respective quantized weights can be used to facilitate training and inference of a deep learning system.
Type: Grant
Filed: June 13, 2018
Date of Patent: January 10, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Zhuo Wang, Jungwook Choi, Kailash Gopalakrishnan, Pierce I-Jen Chuang
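The key quantity in the abstract is the quantization scale with the smallest quantization error, which the patent estimates as a function of the mean of w² and the mean of |w|. The sketch below instead brute-forces that MSE-optimal symmetric scale directly, i.e., the value such a statistics-based function would approximate; the bit width and search grid are my own choices:

```python
import random

def quantize_sym(ws, scale, bits=4):
    """Symmetric uniform quantization of ws to `bits` bits with rounding,
    clipping at +/- scale."""
    levels = 2 ** (bits - 1) - 1
    step = scale / levels
    return [max(-levels, min(levels, round(w / step))) * step for w in ws]

def mse(ws, qs):
    """Mean squared quantization error."""
    return sum((w - q) ** 2 for w, q in zip(ws, qs)) / len(ws)

def best_scale(ws, bits=4, candidates=200):
    """Brute-force the clipping scale that minimizes quantization MSE."""
    hi = max(abs(w) for w in ws)
    scales = [hi * (i + 1) / candidates for i in range(candidates)]
    return min(scales, key=lambda s: mse(ws, quantize_sym(ws, s, bits)))
```

For bell-shaped weight distributions, the MSE-optimal clip is typically well below the maximum magnitude; a closed-form estimate from the weight statistics avoids rerunning this search for every layer.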
-
Patent number: 11551054
Abstract: A convolutional neural network includes a front layer, a back layer, and a plurality of other layers that are connected between the front layer and the back layer. One of the other layers is a transition layer. A first precision is assigned to activations of neurons from the front layer back to the transition layer and a second precision is assigned to activations of the neurons from the transition layer back to the back layer. A third precision is assigned to weights of inputs to neurons from the front layer back to the transition layer and a fourth precision is assigned to weights of inputs to the neurons from the transition layer back to the back layer. In some embodiments the layers forward of the transition layer have a different convolutional kernel than the layers rearward of the transition layer.
Type: Grant
Filed: August 27, 2019
Date of Patent: January 10, 2023
Assignee: International Business Machines Corporation
Inventors: Jungwook Choi, Swagath Venkataramani, Vijayalakshmi Srinivasan, Kailash Gopalakrishnan
-
Patent number: 11354573
Abstract: A minibatch in a neural network execution may be dynamically resized based on on-chip memory. For example, a size of the minibatch is configured such that the minibatch fits within on-chip memory. The size of the minibatch may be resized for a sequence of layers in the neural network execution. A next layer's execution can commence responsive to the resized minibatch being completed in a previous layer without having to wait for all of the minibatch to be completed in the previous layer.
Type: Grant
Filed: March 25, 2019
Date of Patent: June 7, 2022
Assignee: International Business Machines Corporation
Inventors: Swagath Venkataramani, Vijayalakshmi Srinivasan, Jungwook Choi
-
Patent number: 11347517
Abstract: A reduced precision based programmable and single instruction multiple data (SIMD) dataflow architecture includes reduced precision execution units, with a majority of the execution units operating at reduced precision and a minority capable of operating at higher precision. The execution units operate in parallel within a programmable execution element to share instruction fetch, decode, and issue pipelines and operate on the same instruction in lock-step to minimize instruction-related overhead.
Type: Grant
Filed: June 20, 2019
Date of Patent: May 31, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Kailash Gopalakrishnan, Sunil Shukla, Jungwook Choi, Silvia Mueller, Bruce Fleischer, Vijayalakshmi Srinivasan, Ankur Agrawal, Jinwook Oh
-
Patent number: 11295208
Abstract: Embodiments of the present invention provide a computer-implemented method for adaptive residual gradient compression for training of a deep learning neural network (DNN). The method includes obtaining, by a first learner, a current gradient vector for a neural network layer of the DNN, in which the current gradient vector includes gradient weights of parameters of the neural network layer that are calculated from a mini-batch of training data. A current residue vector is generated that includes residual gradient weights for the mini-batch. A compressed current residue vector is generated based on dividing the residual gradient weights of the current residue vector into a plurality of bins of a uniform size and quantizing a subset of the residual gradient weights of one or more bins of the plurality of bins. The compressed current residue vector is then transmitted to a second learner of the plurality of learners or to a parameter server.
Type: Grant
Filed: December 4, 2017
Date of Patent: April 5, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Ankur Agrawal, Daniel Brand, Chia-Yu Chen, Jungwook Choi, Kailash Gopalakrishnan
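A hedged sketch of one round of such compression: gradients are accumulated into the residue, the residue is split into uniform bins, and within each bin only entries near the bin maximum are quantized (to sign times the bin maximum) and transmitted; everything not transmitted, including quantization error, carries forward as the new residue. The bin size and `keep` threshold are illustrative parameters of mine:

```python
def compress_residue(residue, grads, bin_size=4, keep=0.5):
    """Accumulate grads into the residue, split into uniform bins, and in
    each bin quantize entries whose magnitude is at least `keep` times the
    bin maximum to sign * bin_maximum; return (sent, new_residue)."""
    r = [a + g for a, g in zip(residue, grads)]
    sent = [0.0] * len(r)
    for start in range(0, len(r), bin_size):
        bin_vals = r[start:start + bin_size]
        scale = max(abs(v) for v in bin_vals)
        if scale == 0:
            continue
        for i, v in enumerate(bin_vals):
            if abs(v) >= keep * scale:
                sent[start + i] = scale if v > 0 else -scale
                r[start + i] = v - sent[start + i]    # keep the error
    return sent, r
```

Because untransmitted and mis-quantized mass is retained in the residue, no gradient information is dropped outright; it is merely deferred to a later mini-batch, which is what makes aggressive compression ratios tolerable.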
-
Patent number: 11195096
Abstract: Techniques that facilitate improving an efficiency of a neural network are described. In one embodiment, a system is provided that comprises a memory that stores computer-executable components and a processor that executes computer-executable components stored in the memory. In one implementation, the computer-executable components comprise an initialization component that selects an initial value of an output limit, wherein the output limit indicates a range for an output of an activation function of a neural network. The computer-executable components further comprise a training component that modifies the initial value of the output limit during training to a second value of the output limit, the second value of the output limit being provided as a parameter to the activation function. The computer-executable components further comprise an activation function component that determines the output of the activation function based on the second value of the output limit as the parameter.
Type: Grant
Filed: October 24, 2017
Date of Patent: December 7, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Jungwook Choi, Kailash Gopalakrishnan, Charbel Sakr, Swagath Venkataramani, Zhuo Wang
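Consistent with the abstract, this describes an activation with a trainable output limit α, e.g. y = min(max(x, 0), α). The toy sketch below (the squared-error loss, the gradient treatment of the clip, and all hyperparameters are my own choices, not the patented method) recovers a known limit by gradient descent:

```python
def clipped_relu(x, alpha):
    """Activation with an output limit alpha: y = min(max(x, 0), alpha)."""
    return min(max(x, 0.0), alpha)

def grad_alpha(x, alpha):
    """Gradient of the activation w.r.t. alpha: 1 where the input is
    clipped at alpha, 0 elsewhere."""
    return 1.0 if x >= alpha else 0.0

def train_alpha(xs, ys, alpha, lr=0.1, steps=200):
    """Fit alpha by gradient descent on the squared error between the
    clipped activation and targets ys (a toy training loop)."""
    for _ in range(steps):
        g = 0.0
        for x, y in zip(xs, ys):
            err = clipped_relu(x, alpha) - y
            g += 2.0 * err * grad_alpha(x, alpha)
        alpha -= lr * g / len(xs)
    return alpha
```

Starting from an initial limit above the true one, the update only receives gradient from inputs that are actually clipped, so the limit descends until the clipped outputs match the targets.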