Patents Assigned to Kneron (Taiwan) Co., Ltd.
  • Patent number: 11798269
    Abstract: A fast Non-Maximum Suppression (NMS) post-processing algorithm for object detection includes getting original data output from a deep learning model's inference output, the original data including a plurality of bounding boxes; pre-emptively filtering out at least one bounding box of the plurality of bounding boxes from further consideration, according to a predetermined criterion, when applying the algorithm; processing data, using sigmoid functions or exponential functions, from the bounding boxes not filtered out to generate processed bounding boxes; calculating final scores of the processed bounding boxes; and choosing a processed bounding box utilizing the final scores.
    Type: Grant
    Filed: March 16, 2021
    Date of Patent: October 24, 2023
    Assignee: Kneron (Taiwan) Co., Ltd.
    Inventors: Bike Xie, Hung-Hsin Wu, Chuqiao Song, Chiu-Ling Chen
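The key idea in the abstract above — discard low-confidence boxes on the raw model output *before* spending sigmoid evaluations and IoU comparisons on them — can be sketched as follows. This is a minimal illustration, not the patented implementation; the function names, the objectness-logit threshold, and the greedy IoU suppression step are illustrative assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def iou(a, b):
    # Boxes as (x1, y1, x2, y2); intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def fast_nms(raw_boxes, logit_thresh=0.0, iou_thresh=0.5):
    """raw_boxes: list of (box, objectness_logit) straight from the model.

    Pre-filter on the raw logit (a monotone proxy for the sigmoid score),
    so the sigmoid and all pairwise IoU work run only on survivors.
    """
    kept = [(box, logit) for box, logit in raw_boxes if logit >= logit_thresh]
    scored = [(box, sigmoid(logit)) for box, logit in kept]
    scored.sort(key=lambda t: t[1], reverse=True)
    selected = []
    for box, score in scored:
        if all(iou(box, s[0]) <= iou_thresh for s in selected):
            selected.append((box, score))
    return selected
```

Because sigmoid is monotone, thresholding the logit at 0.0 is equivalent to thresholding the score at 0.5, so the pre-filter changes no decisions while skipping the transcendental work for rejected boxes.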
  • Patent number: 11663464
    Abstract: A system for operating a floating-to-fixed arithmetic framework includes a floating-to-fixed arithmetic framework running on arithmetic operating hardware, such as a central processing unit (CPU), for converting a floating-point pre-trained convolutional neural network (CNN) model to a dynamic fixed-point CNN model. The dynamic fixed-point CNN model is capable of implementing a high-performance CNN on a resource-limited embedded system such as a mobile phone or a video camera.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: May 30, 2023
    Assignee: Kneron (Taiwan) Co., Ltd.
    Inventors: Jie Wu, Bike Xie, Hsiang-Tsun Li, Junjie Su, Chun-Chen Liu
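Dynamic fixed-point, as referenced in the abstract above, generally means choosing the radix point per tensor so the available bits cover that tensor's actual range. A minimal sketch of the idea, under the assumption of signed 8-bit words and a per-tensor fractional-bit count (the function names are illustrative, not from the patent):

```python
import math

def choose_frac_bits(values, word_bits=8):
    """Pick fractional bits so the largest magnitude fits in a signed word."""
    max_abs = max(abs(v) for v in values)
    if max_abs == 0:
        return word_bits - 1
    int_bits = max(0, math.ceil(math.log2(max_abs)))  # bits for the integer part
    return word_bits - 1 - int_bits                    # 1 bit reserved for sign

def quantize(values, word_bits=8):
    """Round floats to integers at the chosen scale, saturating at the limits."""
    fb = choose_frac_bits(values, word_bits)
    scale = 2 ** fb
    lo, hi = -(2 ** (word_bits - 1)), 2 ** (word_bits - 1) - 1
    return [min(hi, max(lo, round(v * scale))) for v in values], fb

def dequantize(q, fb):
    return [v / (2 ** fb) for v in q]
```

Tensors with small ranges get more fractional bits (finer resolution), which is the "dynamic" part: the radix point differs per layer rather than being fixed network-wide.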
  • Patent number: 11488019
    Abstract: A method of pruning a batch normalization layer from a pre-trained deep neural network model is proposed. The pre-trained deep neural network model is inputted as a candidate model. The candidate model is pruned by removing at least one batch normalization layer from the candidate model to form a pruned candidate model, only when the at least one batch normalization layer is connected and adjacent to a corresponding linear operation layer. The corresponding linear operation layer may be at least one of a convolution layer, a dense layer, a depthwise convolution layer, and a group convolution layer. Weights of the corresponding linear operation layer are adjusted to compensate for the removal of the at least one batch normalization layer. The pruned candidate model is then output and utilized for inference.
    Type: Grant
    Filed: January 24, 2019
    Date of Patent: November 1, 2022
    Assignee: Kneron (Taiwan) Co., Ltd.
    Inventors: Bike Xie, Junjie Su, Bodong Zhang, Chun-Chen Liu
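The weight adjustment described in the abstract above is commonly known as batch-norm folding: because both the linear layer and batch normalization are affine, the normalization can be absorbed into the adjacent layer's weights and bias. A minimal per-channel sketch (plain lists rather than tensors; the helper name is illustrative):

```python
import math

def fold_batchnorm(weights, bias, gamma, beta, mean, var, eps=1e-5):
    """Fold per-channel batch-norm parameters into the adjacent linear layer.

    weights[c] holds the weights producing output channel c. For each channel:
        scale = gamma / sqrt(var + eps)
        W' = W * scale
        b' = (b - mean) * scale + beta
    """
    new_w, new_b = [], []
    for c in range(len(weights)):
        scale = gamma[c] / math.sqrt(var[c] + eps)
        new_w.append([w * scale for w in weights[c]])
        new_b.append((bias[c] - mean[c]) * scale + beta[c])
    return new_w, new_b
```

After folding, the adjusted layer alone produces exactly the output of the original layer-plus-normalization pair, so the batch normalization layer can be removed with no accuracy change.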
  • Patent number: 11403528
    Abstract: A method of compressing a pre-trained deep neural network model includes inputting the pre-trained deep neural network model as a candidate model. The candidate model is compressed by increasing sparsity of the candidate model, removing at least one batch normalization layer present in the candidate model, and quantizing all remaining weights into fixed-point representation to form a compressed model. Accuracy of the compressed model is then determined utilizing an end-user training and validation data set. Compression of the candidate model is repeated when the accuracy improves. Hyperparameters for compressing the candidate model are adjusted, and compression is repeated, when the accuracy declines. The compressed model is output for inference utilization when the accuracy meets or exceeds the end-user performance metric and target.
    Type: Grant
    Filed: April 18, 2019
    Date of Patent: August 2, 2022
    Assignee: Kneron (Taiwan) Co., Ltd.
    Inventors: Bike Xie, Junjie Su, Jie Wu, Bodong Zhang, Chun-Chen Liu
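The control flow in the abstract above — keep compressing while accuracy holds, retune hyperparameters when it drops, stop once the end-user target is met — can be sketched as a generic loop. Everything here is an illustrative skeleton: the callables stand in for the sparsification/pruning/quantization step, the validation run, and the hyperparameter adjustment, and the bounded round count is an added safeguard not mentioned in the patent.

```python
def compress_until_target(model, compress_step, evaluate, adjust_hyperparams,
                          target, max_rounds=100):
    """Iterative compression loop: compress, validate, keep or retune."""
    candidate, acc = model, evaluate(model)
    for _ in range(max_rounds):
        compressed = compress_step(candidate)
        new_acc = evaluate(compressed)
        if new_acc >= target:
            return compressed, new_acc            # target met: output the model
        if new_acc >= acc:
            candidate, acc = compressed, new_acc  # accuracy held: keep compressing
        else:
            adjust_hyperparams()                  # accuracy declined: retune, retry
    return candidate, acc                          # best candidate within the budget
```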
  • Patent number: 10839893
    Abstract: A memory cell includes a first charge trap transistor and a second charge trap transistor. The first charge trap transistor has a substrate, a first terminal coupled to a first bitline, a second terminal coupled to a signal line, a control terminal coupled to a wordline, and a dielectric layer formed between the substrate of the first charge trap transistor and the control terminal of the first charge trap transistor. The second charge trap transistor has a substrate, a first terminal coupled to the signal line, a second terminal coupled to a second bitline, a control terminal coupled to the wordline, and a dielectric layer between the substrate of the second charge trap transistor and the control terminal of the second charge trap transistor. Charges are either trapped to or detrapped from the dielectric layer of the first charge trap transistor when writing data to the memory cell.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: November 17, 2020
    Assignee: Kneron (Taiwan) Co., Ltd.
    Inventors: Yuan Du, Mingzhe Jiang, Junjie Su, Chun-Chen Liu
  • Patent number: 10764507
    Abstract: An image processing system includes an image capturing device, a pixel binning device, a temporal filter, a first memory, a re-mosaic device, a second memory, and a blending device. The image capturing device is used for capturing a raw image. The pixel binning device is coupled to the image capturing device for outputting an enhanced image according to the raw image. The temporal filter is coupled to the pixel binning device for outputting a preview image according to the enhanced image. The first memory is used for buffering the raw image. The re-mosaic device is coupled to the first memory for outputting a processed image. The second memory is used for buffering the enhanced image. The blending device is coupled to the re-mosaic device and the second memory for outputting a snapshot image according to the processed image and the enhanced image.
    Type: Grant
    Filed: April 18, 2019
    Date of Patent: September 1, 2020
    Assignee: Kneron (Taiwan) Co., Ltd.
    Inventors: Hsiang-Tsun Li, Bike Xie, Junjie Su, Yi-Chou Chen
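Two of the building blocks in the pipeline above — pixel binning of the raw image and blending of the processed and enhanced images — can be illustrated numerically. This is a loose software sketch only: the patent describes hardware devices, and the 2x2 averaging scheme, the equal-size inputs to the blend, and the single blend weight are all simplifying assumptions.

```python
import numpy as np

def pixel_bin_2x2(raw):
    """Average each 2x2 block of the raw image (one common binning scheme)."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def blend(processed, enhanced, alpha=0.5):
    """Weighted combination of two same-size images into one output image."""
    return alpha * processed + (1.0 - alpha) * enhanced
```

Binning trades resolution for per-pixel signal (useful for the low-latency preview path), while the snapshot path blends the full re-mosaic result with the enhanced image.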