Patents Assigned to NOTA, INC.
-
Patent number: 12361264
Abstract: Disclosed is a method and system for local compression of an artificial intelligence (AI) model. A local compression method for a model may include receiving a pretrained model as input; selecting a layer group as a portion of the input model; and partially compressing the selected layer group and retraining the compressed layer group, and the retraining of the compressed layer group may include retraining the compressed layer group based on input data and output data prestored for the selected layer group.
Type: Grant
Filed: October 22, 2024
Date of Patent: July 15, 2025
Assignee: Nota Inc.
Inventor: Tae-Ho Kim
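A minimal Python sketch of the idea in the abstract above, not the patented implementation: the selected layer group's inputs and outputs are prestored once, a smaller replacement group is built, and only that group is retrained to reproduce the stored outputs. The model, group sizes, and training schedule are illustrative assumptions.

```python
# Local compression sketch: cache a layer group's I/O, then retrain only the
# compressed replacement group against the cached tensors.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical pretrained model split into three "layer groups".
model = nn.Sequential(
    nn.Sequential(nn.Linear(16, 64), nn.ReLU()),   # group 0
    nn.Sequential(nn.Linear(64, 64), nn.ReLU()),   # group 1 (to be compressed)
    nn.Sequential(nn.Linear(64, 10)),              # group 2
)
calib = torch.randn(256, 16)                       # calibration inputs

# 1) Prestore input/output data for the selected layer group.
with torch.no_grad():
    group_in = model[0](calib)
    group_out = model[1](group_in)

# 2) Build a smaller replacement for the group (fewer hidden units).
compressed = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 64), nn.ReLU())

# 3) Retrain only the compressed group against the prestored tensors.
opt = torch.optim.Adam(compressed.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(compressed(group_in), group_out)
    loss.backward()
    opt.step()

model[1] = compressed                              # splice the compressed group back in
print("local reconstruction loss:", loss.item())
```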
-
Patent number: 12299099
Abstract: Disclosed are a method and apparatus for continuous authentication. An authentication method includes receiving image frames taken by a camera in succession, detecting a face area in the image frames, tracking a change in a location of the detected face area in the image frames, and performing continuous user authentication for the face area according to the change in the location by using the face area whose change in the location has been tracked and a deep learning model.
Type: Grant
Filed: June 17, 2022
Date of Patent: May 13, 2025
Assignee: NOTA, INC.
Inventor: Myungsu Chae
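A rough sketch of the flow described above: track the face box across incoming frames and re-verify identity when its location changes. The detector and embedder below are stand-in stubs on random frames, not the actual deep learning models, and the movement threshold is an invented parameter.

```python
# Continuous authentication sketch: re-authenticate only when the tracked
# face location jumps; otherwise keep the authenticated session alive.
import numpy as np

rng = np.random.default_rng(0)

def detect_face(frame):
    # Stub detector: pretend the face sits near the frame centre with jitter.
    h, w = frame.shape[:2]
    x, y = w // 4 + rng.integers(-3, 4), h // 4 + rng.integers(-3, 4)
    return (int(x), int(y), w // 2, h // 2)        # (x, y, box_w, box_h)

def embed_face(frame, box):
    # Stub embedding; a real system would run a face-recognition network here.
    x, y, bw, bh = box
    crop = frame[y:y + bh, x:x + bw]
    return crop.mean(axis=(0, 1)) / 255.0

def moved_too_far(prev, cur, tol=20):
    return abs(prev[0] - cur[0]) > tol or abs(prev[1] - cur[1]) > tol

enrolled = np.array([0.5, 0.5, 0.5])               # enrolled user's embedding (stub)
prev_box = None

for t in range(10):                                # frames received in succession
    frame = rng.integers(0, 256, size=(240, 320, 3)).astype(np.uint8)
    box = detect_face(frame)
    if prev_box is None or moved_too_far(prev_box, box):
        score = float(np.dot(embed_face(frame, box), enrolled))
        print(f"frame {t}: re-verified, similarity={score:.3f}")
    prev_box = box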
-
Publication number: 20250131253
Abstract: Disclosed is a method and system for local compression of an artificial intelligence (AI) model. A local compression method for a model may include receiving a pretrained model as input; selecting a layer group as a portion of the input model; and partially compressing the selected layer group and retraining the compressed layer group, and the retraining of the compressed layer group may include retraining the compressed layer group based on input data and output data prestored for the selected layer group.
Type: Application
Filed: October 22, 2024
Publication date: April 24, 2025
Applicant: NOTA, INC.
Inventor: Tae-Ho Kim
-
Patent number: 12283181
Abstract: Provided are methods and apparatuses for controlling traffic signals of traffic lights in a sub-area by using a neural network model. The method according to an embodiment of the present disclosure may configure state information of a sub-area by using downstream information obtained in a current cycle time for each of a plurality of intersections included in the sub-area. In addition, the method may input the state information to a trained reinforcement learning model, and obtain action information of the sub-area including green times and offsets, by using an output from the trained reinforcement learning model. Furthermore, the method may generate coordinated signal values for applying the action to traffic lights in the sub-area in a subsequent cycle time.
Type: Grant
Filed: July 21, 2022
Date of Patent: April 22, 2025
Assignee: NOTA, INC.
Inventors: Jin Won Yoon, Seung Eon Baek, Seong Jin Lee
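A schematic sketch of the inference step only: per-intersection downstream measurements form the sub-area state, and a policy network (left untrained here) maps it to green times and offsets for the next cycle. The network shape and the value ranges are illustrative assumptions, not the patented design.

```python
# Sub-area traffic control sketch: state -> policy network -> green times/offsets.
import torch
import torch.nn as nn

N_INTERSECTIONS, FEATURES_PER_INT, PHASES = 4, 3, 2
state = torch.rand(N_INTERSECTIONS, FEATURES_PER_INT)   # downstream info, current cycle

policy = nn.Sequential(
    nn.Flatten(start_dim=0),
    nn.Linear(N_INTERSECTIONS * FEATURES_PER_INT, 64), nn.ReLU(),
    nn.Linear(64, N_INTERSECTIONS * (PHASES + 1)), nn.Sigmoid(),
)

with torch.no_grad():
    action = policy(state).view(N_INTERSECTIONS, PHASES + 1)

green_times = 10 + 50 * action[:, :PHASES]               # seconds per phase, next cycle
offsets = 120 * action[:, PHASES]                        # offset within the cycle (s)
print("green times (s):\n", green_times)
print("offsets (s):", offsets)
```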
-
Patent number: 12271457
Abstract: Disclosed are a method and apparatus for real-time on-device authentication based on deep learning. A deep learning-based authentication method includes detecting a location of a region of interest (ROI) occupied by a face portion in an input image by using a detection model, extracting a feature map from the input image by using a feature extractor of the detection model, extracting a fixed-length feature for the face portion using the feature map and ROI pooling for the detected location of the ROI, and classifying a face included in the input image based on the fixed-length feature.
Type: Grant
Filed: June 17, 2022
Date of Patent: April 8, 2025
Assignee: NOTA, INC.
Inventor: Myungsu Chae
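A small sketch of the fixed-length feature step: the detector's feature map is pooled over the detected ROI to a fixed grid so the classifier sees the same feature length regardless of the face size. Shapes and the max-pooling scheme are illustrative assumptions.

```python
# ROI pooling sketch: variable-size ROI on a feature map -> fixed-length vector.
import numpy as np

rng = np.random.default_rng(0)
feature_map = rng.random((8, 32, 32))             # (channels, H, W) from the backbone

def roi_pool(fmap, roi, out_size=4):
    """Max-pool the ROI (x0, y0, x1, y1 in feature-map coords) to out_size x out_size."""
    c = fmap.shape[0]
    x0, y0, x1, y1 = roi
    xs = np.linspace(x0, x1, out_size + 1).astype(int)
    ys = np.linspace(y0, y1, out_size + 1).astype(int)
    out = np.zeros((c, out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            cell = fmap[:, ys[i]:max(ys[i + 1], ys[i] + 1),
                           xs[j]:max(xs[j + 1], xs[j] + 1)]
            out[:, i, j] = cell.max(axis=(1, 2))
    return out

roi = (10, 8, 26, 28)                             # detected face location on the feature map
fixed_feature = roi_pool(feature_map, roi).ravel()
print(fixed_feature.shape)                        # (8 * 4 * 4,) fixed length for the classifier
```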
-
Patent number: 12229656
Abstract: Provided are a method and apparatus for performing a convolution operation for optimizing the arithmetic intensity of a device.
Type: Grant
Filed: July 19, 2023
Date of Patent: February 18, 2025
Assignee: NOTA, INC.
Inventors: Shin Kook Choi, Jun Kyeong Choi
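For context, a back-of-the-envelope calculation of a convolution layer's arithmetic intensity (FLOPs per byte moved), the quantity the abstract refers to. The layer dimensions and data-movement model (single pass over inputs, weights, and outputs) are simplifying assumptions.

```python
# Arithmetic intensity of a stride-1, 'same'-padded 2D convolution.
def conv2d_arithmetic_intensity(cin, cout, k, h, w, bytes_per_elem=4):
    out_h, out_w = h, w
    flops = 2 * cin * cout * k * k * out_h * out_w          # multiply-accumulate = 2 ops
    bytes_moved = bytes_per_elem * (
        cin * h * w                               # read input feature map
        + cout * cin * k * k                      # read weights
        + cout * out_h * out_w                    # write output feature map
    )
    return flops / bytes_moved

print(f"{conv2d_arithmetic_intensity(64, 128, 3, 56, 56):.1f} FLOPs/byte")
```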
-
Publication number: 20250029002
Abstract: Disclosed are a model compression method and system for compressing a model into an equipment-friendly model. A model compression method may include acquiring criteria and sparsity for each filter of a model to which unstructured pruning is already applied, determining a filter to which structured pruning is to be applied, among the filters of the model, based on the criteria and the sparsity, and applying the structured pruning to the model based on the determined filter.
Type: Application
Filed: September 13, 2023
Publication date: January 23, 2025
Applicant: NOTA, INC.
Inventors: Jaewoong Yun, Kyunghwan Shim
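A simplified sketch of the selection rule described above: after unstructured pruning, each filter has a sparsity (fraction of zero weights) and an importance criterion (L1 norm here), and filters that are both sparse and unimportant are removed as whole structures. The thresholds and the choice of L1 norm are illustrative assumptions.

```python
# Structured-on-top-of-unstructured pruning sketch for one Conv2d layer.
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(16, 32, 3)
with torch.no_grad():                             # fake the unstructured-pruning result
    conv.weight[conv.weight.abs() < 0.05] = 0.0

w = conv.weight.detach()                          # (out_channels, in_channels, k, k)
sparsity = (w == 0).float().mean(dim=(1, 2, 3))   # per-filter fraction of zeros
criteria = w.abs().sum(dim=(1, 2, 3))             # per-filter L1 norm

drop = (sparsity > 0.6) & (criteria < criteria.median())
keep = (~drop).nonzero(as_tuple=True)[0]

pruned = nn.Conv2d(16, len(keep), 3)              # structurally pruned layer
with torch.no_grad():
    pruned.weight.copy_(w[keep])
    pruned.bias.copy_(conv.bias.detach()[keep])
print(f"kept {len(keep)}/32 filters")
```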
-
Patent number: 12198040
Abstract: A method for compressing a neural network model is disclosed. The method for compressing a neural network model includes receiving, at a processor of an electronic apparatus, an original model including a plurality of layers each including a plurality of filters, a compression ratio to be applied to the original model, and a metric for determining an importance of the plurality of filters, determining the importance of the plurality of filters using the metric, normalizing the importance of the plurality of filters layer by layer, and compressing the original model by removing at least one filter among the plurality of filters based on the normalized importance and the compression ratio.
Type: Grant
Filed: May 5, 2023
Date of Patent: January 14, 2025
Assignee: NOTA, INC.
Inventor: Kyunghwan Shim
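A condensed sketch of that flow: score filters with a metric (L2 norm here), normalize the scores within each layer so layers become comparable, then mark the globally least important filters until the compression ratio is met. The metric choice is an assumption, and real removal would also rewire the following layers.

```python
# Layer-wise normalized filter importance + compression-ratio-driven removal.
import torch
import torch.nn as nn

torch.manual_seed(0)
layers = {"conv1": nn.Conv2d(3, 16, 3), "conv2": nn.Conv2d(16, 32, 3)}
compression_ratio = 0.25                          # remove 25% of all filters

scores = {}
for name, layer in layers.items():
    imp = layer.weight.detach().flatten(1).norm(p=2, dim=1)   # per-filter L2 norm
    scores[name] = imp / imp.sum()                            # layer-wise normalization

flat = [(name, i, float(s)) for name, imp in scores.items() for i, s in enumerate(imp)]
flat.sort(key=lambda t: t[2])                                 # least important first
n_remove = int(len(flat) * compression_ratio)

for name, idx, s in flat[:n_remove]:
    print(f"remove filter {idx} of {name} (normalized importance {s:.4f})")
```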
-
Patent number: 12141317
Abstract: Disclosed is a technology for de-identifying and restoring personal information in an image based on an encryption key. An image processing method for de-identifying and restoring image information, which is performed by an image processing system, may include detecting an object information area in image information, de-identifying the detected object information area by using an encryption key generated in relation to the detected object information area, and restoring the de-identified object information area by using the encryption key.
Type: Grant
Filed: January 21, 2022
Date of Patent: November 12, 2024
Assignee: NOTA, INC.
Inventors: Sang Tae Kim, Dong Wook Kim, Hye Rin Yoo, Chih Yuan Hsieh, Seong Un Hong, Sung Hyun Kim, Myungsu Chae
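An illustrative sketch only: a detected object region is de-identified by XOR-ing it with a keystream derived from a key, and restored with the same key. A production system would use a proper cipher (e.g. AES) rather than a PRNG keystream, and the box coordinates stand in for a detector's output.

```python
# Key-based de-identification and restoration of an image region (toy version).
import numpy as np

def keystream(key: int, shape):
    return np.random.default_rng(key).integers(0, 256, size=shape, dtype=np.uint8)

def deidentify(image, box, key):
    x0, y0, x1, y1 = box
    out = image.copy()
    out[y0:y1, x0:x1] ^= keystream(key, out[y0:y1, x0:x1].shape)
    return out

restore = deidentify                              # XOR with the same keystream inverts itself

image = np.random.default_rng(1).integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
box, key = (40, 30, 100, 90), 123456              # detected object area + generated key

masked = deidentify(image, box, key)              # de-identified image
recovered = restore(masked, box, key)             # restored with the same key
print("restored exactly:", bool(np.array_equal(recovered, image)))
```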
-
Publication number: 20240296338
Abstract: Provided are a method and system for generating a transfer learning model based on the convergence of model compression and transfer learning. The method of generating a transfer learning model may include reconstructing a first model that is pre-trained on a first dataset, removing at least some weights from the reconstructed first model based on a second dataset that is different from the first dataset, and generating a second model that is trained with transfer learning on the second dataset from the first model from which the at least some weights have been removed.
Type: Application
Filed: August 23, 2023
Publication date: September 5, 2024
Applicant: NOTA, INC.
Inventor: Seul Ki YEOM
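A rough sketch of one way such a combined flow could look: score the pretrained weights on the target (second) dataset via gradient magnitude, zero out the least useful ones, then fine-tune on that dataset. The model, data, scoring rule, and 50% removal ratio are illustrative assumptions, not the disclosed method.

```python
# Compression + transfer learning sketch: dataset-aware weight removal, then fine-tuning.
import torch
import torch.nn as nn

torch.manual_seed(0)
first_model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))  # "pre-trained"
x2, y2 = torch.randn(128, 20), torch.randint(0, 5, (128,))                   # second dataset

# 1) Score every weight on the second dataset via |w * grad| and build keep-masks.
nn.functional.cross_entropy(first_model(x2), y2).backward()
masks = []
for p in first_model.parameters():
    score = (p * p.grad).abs().flatten()
    threshold = score.kthvalue(max(1, score.numel() // 2)).values   # drop the bottom half
    masks.append((score >= threshold).float().view_as(p))

# 2) Remove the selected weights.
with torch.no_grad():
    for p, m in zip(first_model.parameters(), masks):
        p.mul_(m)

# 3) Fine-tune (transfer learning) on the second dataset, keeping removed weights at zero.
opt = torch.optim.SGD(first_model.parameters(), lr=0.05)
for _ in range(20):
    opt.zero_grad()
    nn.functional.cross_entropy(first_model(x2), y2).backward()
    opt.step()
    with torch.no_grad():
        for p, m in zip(first_model.parameters(), masks):
            p.mul_(m)

print("remaining weights:", int(sum(m.sum() for m in masks)))
```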
-
Patent number: 12033052
Abstract: Provided are a latency prediction method and a computing device for the same. The latency prediction method includes receiving a deep learning model and predicting on-device latency of the received deep learning model using a latency predictor which is trained on the basis of a latency lookup table. The latency lookup table includes information on single neural network layers and latency information of the single neural network layers on an edge device.
Type: Grant
Filed: August 11, 2022
Date of Patent: July 9, 2024
Assignee: NOTA, INC.
Inventors: Jeong Ho Kim, Min Su Kim, Tae Ho Kim
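A toy version of the idea: a lookup table maps single-layer configurations to measured on-device latencies, a small regressor is fit on that table, and a model's latency is predicted as the sum of its layers' predictions. The table values and layer features are invented for illustration.

```python
# Latency predictor sketch: fit on a single-layer lookup table, predict a whole model.
import numpy as np

# Per-layer features [GFLOPs, params (M), memory (MB)] -> measured latency (ms) on an edge device.
table_X = np.array([[0.5, 0.1, 2.0], [1.2, 0.3, 4.5], [2.0, 0.8, 6.0], [4.1, 1.5, 9.0]])
table_y = np.array([1.1, 2.4, 3.9, 7.8])

# Fit a linear latency predictor (least squares with a bias term).
A = np.hstack([table_X, np.ones((len(table_X), 1))])
coef, *_ = np.linalg.lstsq(A, table_y, rcond=None)

def predict_layer_latency(feat):
    return float(np.append(feat, 1.0) @ coef)

# Predict a deep learning model's on-device latency as the sum over its layers.
model_layers = [[0.8, 0.2, 3.0], [1.5, 0.4, 5.0], [3.0, 1.0, 7.5]]
print(f"predicted on-device latency: {sum(map(predict_layer_latency, model_layers)):.2f} ms")
```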
-
Patent number: 12026078
Abstract: Disclosed is a method for providing a benchmark result, which is performed by a computing device. The method may include obtaining first input data including an inference task and a dataset. The method may include determining a target model which is a subject of a benchmark for the inference task and at least one target node at which the inference task of the target model is to be executed. The determined target model corresponds to an artificial intelligence-based model, and the benchmark for the inference task of the determined target model is performed at the at least one target node based on the dataset. The method may include providing the benchmark result obtained by executing the target model at the at least one target node.
Type: Grant
Filed: June 30, 2023
Date of Patent: July 2, 2024
Assignee: NOTA, INC.
Inventors: Sanggeon Park, Jimin Lee
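A schematic sketch of the described flow, with stub functions standing in for the real model registry, device farm, and inference runtime; names such as `resolve_target_model` and `run_on_node` are hypothetical.

```python
# Benchmark orchestration sketch: input -> target model + nodes -> per-node results.
import time

def resolve_target_model(inference_task):
    return {"task": inference_task, "name": "example-detector"}   # stub registry lookup

def run_on_node(node, model, dataset):
    start = time.perf_counter()
    _ = [len(sample) for sample in dataset]                       # stand-in for inference
    return {"node": node, "model": model["name"],
            "latency_ms": (time.perf_counter() - start) * 1000,
            "num_samples": len(dataset)}

def provide_benchmark_result(first_input, target_nodes):
    task, dataset = first_input["inference_task"], first_input["dataset"]
    model = resolve_target_model(task)                            # determine the target model
    return [run_on_node(node, model, dataset) for node in target_nodes]

results = provide_benchmark_result(
    {"inference_task": "object-detection", "dataset": ["img_a", "img_b", "img_c"]},
    target_nodes=["jetson-nano", "raspberry-pi-4"],
)
print(results)
```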
-
Patent number: 12007867
Abstract: Disclosed is a method performed by a first computing device performing a benchmark. The method may include receiving, from a second computing device comprising a plurality of modules which perform different operations related to an artificial intelligence-based model, module identification information indicating which module among the plurality of modules of the second computing device triggers a benchmark operation of the first computing device. The method may include providing, to the second computing device, a benchmark result based on the module identification information, and the benchmark result provided to the second computing device may be different according to the module identification information.
Type: Grant
Filed: June 30, 2023
Date of Patent: June 11, 2024
Assignee: NOTA, INC.
Inventors: Sanggeon Park, Wonjin Shin, Jimin Lee
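A minimal sketch of the dispatch idea: the benchmarking device returns a different slice of the result depending on which module of the requesting device triggered it. The module names and result fields are invented examples.

```python
# Benchmark result shaped by module identification information.
FULL_RESULT = {"latency_ms": 12.4, "memory_mb": 38.0, "power_mw": 910.0, "accuracy": 0.87}

RESULT_FIELDS_BY_MODULE = {
    "compression_module": ("latency_ms", "memory_mb"),
    "profiling_module": ("latency_ms", "power_mw"),
    "evaluation_module": ("accuracy",),
}

def benchmark_result_for(module_identification_info):
    fields = RESULT_FIELDS_BY_MODULE.get(module_identification_info, tuple(FULL_RESULT))
    return {k: FULL_RESULT[k] for k in fields}

print(benchmark_result_for("compression_module"))
print(benchmark_result_for("evaluation_module"))
```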
-
Publication number: 20240144012
Abstract: Provided are a method and apparatus for compressing a neural network model by using hardware characteristics. The method includes: obtaining a target number of output channels of a target layer to be compressed, from among layers included in the neural network model that is executed by hardware; adjusting the target number of output channels to meet a certain purpose, based on at least one of a first latency characteristic for output channels of the target layer and a second latency characteristic for input channels of a next layer, according to the hardware characteristics of the hardware; and compressing the neural network model such that the number of output channels of the target layer is equal to the adjusted target number of output channels.
Type: Application
Filed: November 18, 2022
Publication date: May 2, 2024
Applicant: Nota, Inc.
Inventors: Shin Kook CHOI, Jun Kyeong CHOI
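A toy illustration of the channel-adjustment step: given a latency curve over channel counts, snap the target number of output channels to a nearby count with the best channels-per-latency ratio. The staircase latency model imitates hardware that processes channels in tiles of 16; it is an assumption, not a measured characteristic.

```python
# Hardware-aware output-channel adjustment sketch.
import math

def latency_ms(channels, tile=16):
    return 0.4 * math.ceil(channels / tile)       # hypothetical per-tile cost

def adjust_channels(target, search_radius=8):
    candidates = range(max(1, target - search_radius), target + search_radius + 1)
    # Prefer the candidate that keeps the most channels per unit of latency.
    return max(candidates, key=lambda c: c / latency_ms(c))

target = 37
adjusted = adjust_channels(target)
print(f"target {target} -> adjusted {adjusted}, "
      f"latency {latency_ms(target):.1f} ms -> {latency_ms(adjusted):.1f} ms")
```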
-
Publication number: 20240127120
Abstract: Disclosed are a model compression method and system for natural language understanding through layer pruning. A model compression method may include adding an internal classification layer to each encoder layer of an input model; measuring performance for an output of the internal classification layer; determining an encoder layer in which the measured performance is lower than the performance of the input model by a preset performance drop tolerance range or more; and pruning the encoder layers above a final layer, which is an upper layer of the determined encoder layer.
Type: Application
Filed: September 7, 2023
Publication date: April 18, 2024
Applicant: NOTA, INC.
Inventor: Hancheol Park
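A compact sketch of one reading of that decision rule: measure an internal classifier's accuracy after each encoder layer, find the deepest layer whose drop still exceeds the tolerance, keep the layer just above it as the final layer, and prune everything above that. The accuracy numbers are stand-ins for real measurements.

```python
# Layer-pruning decision sketch driven by internal classification layers.
full_model_accuracy = 0.92
tolerance = 0.02                                  # preset performance drop tolerance

# Accuracy of the internal classification layer attached after each encoder layer.
internal_accuracy = [0.61, 0.75, 0.84, 0.89, 0.91, 0.92, 0.92, 0.92]

# Deepest encoder layer whose performance still drops by the tolerance or more.
determined = max(
    i for i, acc in enumerate(internal_accuracy)
    if full_model_accuracy - acc >= tolerance
)
final_layer = determined + 1                      # the layer just above it becomes the final layer
pruned_layers = list(range(final_layer + 1, len(internal_accuracy)))
print(f"keep encoder layers 0..{final_layer}, prune layers {pruned_layers}")
```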
-
Publication number: 20240071219
Abstract: Provided are a method and apparatus for generating a safety control signal of a road. The method includes inputting road state information for a first time point, including a safety control signal for the first time point and dynamic information for the first time point obtained from a video of a road, to a prediction model, inferring dangerous situation prediction information for a second time point after the first time point, by using the prediction model, and generating a safety control signal notifying a risk of accident on the road for the second time point, based on the inferred dangerous situation prediction information, wherein the prediction model is trained by using a loss function configured by dangerous situation prediction information inferred for a specific time point from road state information before the specific time point, and dangerous situation measurement information calculated from road state information for the specific time point.
Type: Application
Filed: August 21, 2023
Publication date: February 29, 2024
Applicant: NOTA, INC.
Inventors: Hwan Hyo PARK, Dong Ho KA, Tae Seong MOON
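A skeletal version of the training signal described above: the model sees road state before a time point and predicts the danger level at that time point, while the target is a danger measurement computed from the actual state at that time point. The model, the measurement rule, and the data are illustrative stand-ins.

```python
# Training-step sketch: prediction from earlier state vs. measurement from the state itself.
import torch
import torch.nn as nn

torch.manual_seed(0)
STATE_DIM = 6                                     # e.g. safety control signal + dynamic video info
model = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, 1))

def measure_danger(state_t):
    # Hypothetical measurement: danger grows with a speed-density product.
    return (state_t[:, 0] * state_t[:, 1]).clamp(0, 1).unsqueeze(1)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    state_before = torch.rand(32, STATE_DIM)      # road state before the specific time point
    state_at_t = torch.rand(32, STATE_DIM)        # road state at the specific time point
    pred = model(state_before)                    # dangerous-situation prediction information
    target = measure_danger(state_at_t)           # dangerous-situation measurement information
    loss = nn.functional.mse_loss(pred, target)   # loss configured from both quantities
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final loss:", float(loss))
```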
-
Publication number: 20240028875
Abstract: Provided are a method and apparatus for performing a convolution operation for optimizing the arithmetic intensity of a device.
Type: Application
Filed: July 19, 2023
Publication date: January 25, 2024
Applicant: NOTA, INC.
Inventors: Shin Kook CHOI, Jun Kyeong CHOI
-
Patent number: 11875263
Abstract: Disclosed are a method and apparatus for energy-aware deep neural network compression. A network pruning method for deep neural network compression includes measuring importance scores of a network unit by using an energy-based criterion with respect to a deep learning model, and performing network pruning of the deep learning model based on the importance scores.
Type: Grant
Filed: May 11, 2022
Date of Patent: January 16, 2024
Assignee: NOTA, INC.
Inventors: Seul Ki Yeom, KyungHwan Shim, Myungsu Chae, Tae-Ho Kim
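A short sketch of one plausible energy-style criterion, not the patent's exact formulation: score each filter by the energy (mean squared activation) of the feature map it produces on sample data, and prune the lowest-energy filters.

```python
# Energy-based importance scoring and pruning-candidate selection for one layer.
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(3, 16, 3, padding=1)
samples = torch.randn(8, 3, 32, 32)

with torch.no_grad():
    fmap = conv(samples)                              # (batch, 16, 32, 32)
    energy = fmap.pow(2).mean(dim=(0, 2, 3))          # per-filter importance score

prune_ratio = 0.25
n_prune = int(len(energy) * prune_ratio)
to_prune = energy.argsort()[:n_prune]                 # lowest-energy filters
print("prune filters:", sorted(to_prune.tolist()))
```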
-
Publication number: 20240013654
Abstract: Provided are methods and apparatuses for controlling traffic signals of traffic lights in a sub-area by using a neural network model. The method according to an embodiment of the present disclosure may configure state information of a sub-area by using downstream information obtained in a current cycle time for each of a plurality of intersections included in the sub-area. In addition, the method may input the state information to a trained reinforcement learning model, and obtain action information of the sub-area including green times and offsets, by using an output from the trained reinforcement learning model. Furthermore, the method may generate coordinated signal values for applying the action to traffic lights in the sub-area in a subsequent cycle time.
Type: Application
Filed: July 21, 2022
Publication date: January 11, 2024
Applicant: NOTA, INC.
Inventors: Jin Won YOON, Seung Eon BAEK, Seong Jin LEE
-
Patent number: 11775806
Abstract: Disclosed is a method of compressing a neural network model that is performed by a computing device. The method includes receiving a trained model and compression method instructions for compressing the trained model, identifying a compressible block and a non-compressible block among a plurality of blocks included in the trained model based on the compression method instructions, transmitting a command to a user device that causes the user device to: display a structure of the trained model representing a connection relationship between the plurality of blocks on a first screen such that the compressible block and the non-compressible block are visually distinguished, and display, on a second screen, an input field operable to receive a parameter value entered by a user for compression of the compressible block, and compressing the trained model based on the parameter value entered by the user in the input field.
Type: Grant
Filed: February 2, 2023
Date of Patent: October 3, 2023
Assignee: NOTA, INC.
Inventors: Yoo Chan Kim, Jong Won Baek, Geun Jae Lee
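A bare-bones sketch of the block-classification step behind the described workflow: blocks whose operations the chosen compression method supports are marked compressible, the rest non-compressible, and only compressible blocks accept a user-entered parameter. Block and operation names are invented examples.

```python
# Compressible vs. non-compressible block partitioning sketch.
PRUNABLE_OPS = {"Conv2d", "Linear"}

trained_model_blocks = [
    {"name": "stem", "op": "Conv2d"},
    {"name": "attention", "op": "MultiheadAttention"},
    {"name": "head", "op": "Linear"},
]

def split_blocks(blocks, compression_method="filter_pruning"):
    supported = PRUNABLE_OPS if compression_method == "filter_pruning" else set()
    compressible = [b for b in blocks if b["op"] in supported]
    non_compressible = [b for b in blocks if b["op"] not in supported]
    return compressible, non_compressible

compressible, non_compressible = split_blocks(trained_model_blocks)
# First screen: show the two groups visually distinguished; second screen: one
# input field (e.g. a pruning ratio) per compressible block.
user_params = {b["name"]: 0.5 for b in compressible}
print("compressible:", [b["name"] for b in compressible])
print("non-compressible:", [b["name"] for b in non_compressible])
print("parameters entered:", user_params)
```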