Patents by Inventor Chun Chen Liu

Chun Chen Liu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200193270
    Abstract: After inputting input data to a floating pre-trained convolution neural network (CNN) model to generate floating feature maps for each layer of the floating pre-trained CNN model, a statistical analysis on the floating feature maps is performed to generate a dynamic quantization range for each layer of the floating pre-trained CNN model. Based on the obtained quantization range for each layer, the proposed quantization methodologies quantize the floating pre-trained CNN model to generate the scalar factor and the fractional bit-width of each layer of a quantized CNN model. This enables the inference engine to perform low-precision fixed-point arithmetic operations to generate a fixed-point inferred CNN model.
    Type: Application
    Filed: August 27, 2019
    Publication date: June 18, 2020
    Inventors: Jie Wu, Yunhan Ma, Bike Xie, Hsiang-Tsun Li, Junjie Su, Chun-Chen Liu
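The per-layer dynamic-range analysis described above can be sketched in a few lines. This is an illustrative reconstruction, not the patented method: it assumes a signed fixed-point word and derives one layer's fractional bit-width and scale factor from the maximum absolute value observed in its floating feature map.

```python
import numpy as np

def dynamic_fixed_point_params(feature_map, word_bits=8):
    # Dynamic quantization range of this layer: the largest magnitude
    # observed in its floating feature map.
    max_abs = np.max(np.abs(feature_map))
    # Bits needed for the integer part, leaving one bit for the sign.
    int_bits = max(0, int(np.ceil(np.log2(max_abs + 1e-12))))
    frac_bits = word_bits - 1 - int_bits
    scale = 2.0 ** frac_bits          # scale factor mapping float -> fixed
    return frac_bits, scale

fmap = np.array([0.75, -1.5, 3.2, -0.1])   # toy feature map for one layer
frac_bits, scale = dynamic_fixed_point_params(fmap, word_bits=8)
fixed = np.round(fmap * scale).astype(np.int8)   # low-precision fixed-point values
```

With a maximum magnitude of 3.2, two integer bits are needed, leaving five fractional bits in an 8-bit word.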
  • Patent number: 10614292
    Abstract: A low-power face identification method includes detecting an object image, extracting two-dimensional image information of the object image, and, when no face feature is detected in the two-dimensional image information, disabling all related components to inhibit a three-dimensional face recognition function.
    Type: Grant
    Filed: February 6, 2018
    Date of Patent: April 7, 2020
    Assignee: Kneron Inc.
    Inventor: Chun-Chen Liu
  • Publication number: 20200105338
    Abstract: A memory cell includes a first charge trap transistor and a second charge trap transistor. The first charge trap transistor has a substrate, a first terminal coupled to a first bitline, a second terminal coupled to a signal line, a control terminal coupled to a wordline, and a dielectric layer formed between the substrate of the first charge trap transistor and the control terminal of the first charge trap transistor. The second charge trap transistor has a substrate, a first terminal coupled to the signal line, a second terminal coupled to a second bitline, a control terminal coupled to the wordline, and a dielectric layer between the substrate of the second charge trap transistor and the control terminal of the second charge trap transistor. Charges are either trapped to or detrapped from the dielectric layer of the first charge trap transistor when writing data to the memory cell.
    Type: Application
    Filed: August 27, 2019
    Publication date: April 2, 2020
    Inventors: Yuan Du, Mingzhe Jiang, Junjie Su, Chun-Chen Liu
  • Publication number: 20200097816
    Abstract: A system for operating a floating-to-fixed arithmetic framework includes a floating-to-fixed arithmetic framework on arithmetic operating hardware, such as a central processing unit (CPU), for converting a floating pre-trained convolution neural network (CNN) model to a dynamic fixed-point CNN model. The dynamic fixed-point CNN model is capable of implementing a high-performance CNN on a resource-limited embedded system such as a mobile phone or a video camera.
    Type: Application
    Filed: August 27, 2019
    Publication date: March 26, 2020
    Inventors: Jie Wu, Bike Xie, Hsiang-Tsun Li, Junjie Su, Chun-Chen Liu
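To illustrate why a dynamic fixed-point model permits cheap integer arithmetic, here is a minimal fixed-point dot-product sketch. The fractional bit-widths are assumed values for illustration, not taken from the patent.

```python
import numpy as np

FRAC_W, FRAC_A = 6, 5   # assumed fractional bit-widths for weights/activations

def to_fixed(x, frac_bits):
    # Quantize floats to integers carrying `frac_bits` fractional bits.
    return np.round(x * (1 << frac_bits)).astype(np.int32)

def fixed_dot(w_fx, a_fx):
    # Integer multiply-accumulate; the product carries FRAC_W + FRAC_A
    # fractional bits, so a single rescale recovers the real-valued result.
    acc = int(np.dot(w_fx, a_fx))
    return acc / (1 << (FRAC_W + FRAC_A))

w = np.array([0.5, -0.25, 0.125])
a = np.array([1.0, 2.0, -4.0])
approx = fixed_dot(to_fixed(w, FRAC_W), to_fixed(a, FRAC_A))
exact = float(np.dot(w, a))   # these toy values are exactly representable
```

Only the final rescale touches floating point; the accumulation itself is pure integer arithmetic, which is what makes the approach attractive on resource-limited hardware.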
  • Publication number: 20200082247
    Abstract: A searching framework system includes an arithmetic operating hardware. When operating the searching framework system, input data and reconfiguration parameters are inputted to an automatic architecture searching framework of the arithmetic operating hardware. The automatic architecture searching framework then executes arithmetic operations to search for an optimized convolution neural network (CNN) model and outputs the optimized CNN model.
    Type: Application
    Filed: August 29, 2019
    Publication date: March 12, 2020
    Inventors: Jie Wu, Junjie Su, Chun-Chen Liu
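The abstract does not disclose the search strategy, so the following is only a generic sketch of an automatic architecture search: a random search over a hypothetical space of reconfiguration parameters, with a stand-in scoring function where a real framework would train and validate each candidate CNN.

```python
import random

SEARCH_SPACE = {               # hypothetical reconfiguration parameters
    "num_layers": [2, 4, 6],
    "channels": [16, 32, 64],
    "kernel_size": [3, 5],
}

def evaluate(cfg):
    # Stand-in score; a real framework would measure validation accuracy.
    return cfg["channels"] / (cfg["num_layers"] * cfg["kernel_size"])

def search(trials=20, seed=0):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg            # the "optimized" CNN configuration

best = search()
```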
  • Patent number: 10552732
    Abstract: A multi-layer artificial neural network having at least one high-speed communication interface and N computational layers is provided. N is an integer larger than 1. The N computational layers are serially connected via the at least one high-speed communication interface. Each of the N computational layers respectively includes a computation circuit and a local memory. The local memory is configured to store input data and learnable parameters for the computation circuit. The computation circuit in the ith computational layer provides its computation results, via the at least one high-speed communication interface, to the local memory in the (i+1)th computational layer as the input data for the computation circuit in the (i+1)th computational layer, wherein i is an integer index ranging from 1 to (N−1).
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: February 4, 2020
    Assignee: Kneron Inc.
    Inventors: Yilei Li, Yuan Du, Chun-Chen Liu, Li Du
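The layer-to-layer data flow above can be sketched in software (the hardware arrangement itself is the invention; the class and function names here are illustrative only).

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    # One computational layer: a toy computation circuit plus a local
    # memory holding its input data.
    weight: float
    local_memory: list = field(default_factory=list)

    def compute(self):
        return [x * self.weight for x in self.local_memory]

def run_pipeline(layers, input_data):
    layers[0].local_memory = list(input_data)
    for i in range(len(layers) - 1):
        # Layer i forwards its results into layer (i+1)'s local memory,
        # standing in for the high-speed communication interface.
        layers[i + 1].local_memory = layers[i].compute()
    return layers[-1].compute()

out = run_pipeline([Layer(2.0), Layer(3.0), Layer(0.5)], [1.0, -1.0])
```

Because each layer owns its input buffer, successive layers could in principle operate on different samples concurrently, which is the point of the serial high-speed interconnect.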
  • Publication number: 20200020655
    Abstract: A semiconductor device manufacturing method including: simultaneously forming a plurality of conductive bumps respectively on a plurality of formation sites by adjusting a forming factor in accordance with an environmental density associated with each formation site; wherein the plurality of conductive bumps have an inter-bump height uniformity smaller than a value, and the environmental density is determined by a number of neighboring formation sites around each formation site in a predetermined range.
    Type: Application
    Filed: March 14, 2019
    Publication date: January 16, 2020
    Inventors: Ming-Ho Tsai, Jyun-Hong Chen, Chun-Chen Liu, Yu-Nu Hsu, Peng-Ren Chen, Wen-Hao Cheng, Chi-Ming Tsai
  • Publication number: 20190378013
    Abstract: A self-tuning model compression methodology for reconfiguring a Deep Neural Network (DNN) includes: receiving a DNN model and a data set, wherein the DNN model includes an input layer, at least one hidden layer and an output layer, and said at least one hidden layer and the output layer of the DNN model include a plurality of neurons; compressing the DNN model into a reconfigured model according to the data set, wherein the reconfigured model includes an input layer, at least one hidden layer and an output layer, and said at least one hidden layer and the output layer of the reconfigured model include a plurality of neurons, and a size of the reconfigured model is smaller than a size of the DNN model; and executing the reconfigured model on a user terminal for an end-user application.
    Type: Application
    Filed: June 6, 2018
    Publication date: December 12, 2019
    Inventors: Jie Wu, Junjie Su, Bike Xie, Chun-Chen Liu
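The self-tuning loop can be sketched with toy stand-ins. This is not the patented algorithm: the shrink factor, its adjustment rule, and the accuracy model are all assumptions for illustration.

```python
def self_tuning_compress(model_size, accuracy_of, target_acc,
                         shrink=0.5, max_rounds=20):
    # Compress, evaluate, and self-tune: keep shrinking while accuracy
    # holds; when it declines, soften the compression hyperparameter.
    size, rounds = model_size, 0
    while rounds < max_rounds:
        candidate = size * shrink
        if accuracy_of(candidate) >= target_acc:
            size = candidate                  # accuracy holds: keep the smaller model
        else:
            shrink = (shrink + 1.0) / 2       # adjust the hyperparameter
            if shrink > 0.99:
                break                         # no useful compression left
        rounds += 1
    return size

# Toy stand-in: larger models score higher (assumption for illustration).
acc = lambda s: 0.99 - 0.4 / s
final = self_tuning_compress(100.0, acc, target_acc=0.95)
```

The loop converges to roughly the smallest model size that still meets the accuracy target, which is the behavior the abstract describes.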
  • Publication number: 20190370658
    Abstract: A method of compressing a pre-trained deep neural network model includes inputting the pre-trained deep neural network model as a candidate model. The candidate model is compressed by increasing sparsity of the candidate model, removing at least one batch normalization layer present in the candidate model, and quantizing all remaining weights into fixed-point representation to form a compressed model. Accuracy of the compressed model is then determined utilizing an end-user training and validation data set. Compression of the candidate model is repeated when the accuracy improves. Hyperparameters for compressing the candidate model are adjusted, and compression is repeated, when the accuracy declines. The compressed model is output for inference utilization when the accuracy meets or exceeds the end-user performance metric and target.
    Type: Application
    Filed: April 18, 2019
    Publication date: December 5, 2019
    Inventors: Bike Xie, Junjie Su, Jie Wu, Bodong Zhang, Chun-Chen Liu
  • Publication number: 20190370656
    Abstract: A method of pruning a batch normalization layer from a pre-trained deep neural network model is proposed. The pre-trained deep neural network model is inputted as a candidate model. The candidate model is pruned by removing the at least one batch normalization layer from the candidate model to form a pruned candidate model only when the at least one batch normalization layer is connected to and adjacent to a corresponding linear operation layer. The corresponding linear operation layer may be at least one of a convolution layer, a dense layer, a depthwise convolution layer, and a group convolution layer. Weights of the corresponding linear operation layer are adjusted to compensate for the removal of the at least one batch normalization layer. The pruned candidate model is then output and utilized for inference.
    Type: Application
    Filed: January 24, 2019
    Publication date: December 5, 2019
    Inventors: Bike Xie, Junjie Su, Bodong Zhang, Chun-Chen Liu
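The weight adjustment that compensates for a removed batch normalization layer is commonly known as BN folding. The sketch below folds BN statistics into a preceding linear layer y = Wx + b and checks that the pruned network produces the same outputs; the toy values are illustrative.

```python
import numpy as np

def fold_batchnorm(W, b, gamma, beta, mean, var, eps=1e-5):
    # BN(y) = gamma * (y - mean) / sqrt(var + eps) + beta, applied per
    # output channel of the linear layer y = W @ x + b.
    scale = gamma / np.sqrt(var + eps)
    W_folded = W * scale[:, None]         # scale each output row of W
    b_folded = (b - mean) * scale + beta  # absorb the shift into the bias
    return W_folded, b_folded

W = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([0.0, 1.0])
gamma, beta = np.array([2.0, 0.5]), np.array([0.1, -0.2])
mean, var = np.array([0.5, 1.0]), np.array([1.0, 4.0])

Wf, bf = fold_batchnorm(W, b, gamma, beta, mean, var, eps=0.0)
x = np.array([1.0, -1.0])
y_bn = gamma * (W @ x + b - mean) / np.sqrt(var) + beta  # linear layer + BN
y_folded = Wf @ x + bf                                   # pruned, folded layer
```

Folding works precisely because both the linear layer and inference-time BN are affine maps, so their composition is again affine.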
  • Publication number: 20190286885
    Abstract: A face identification system for a mobile device includes a housing and a central processing unit within the housing, the central processing unit configured to unlock or not unlock the mobile device according to a comparison result. The face identification system is disposed within the housing. The face identification system includes a 3D structured light emitting device configured to emit a three-dimensional structured light signal to an object external to the housing. A first neural network processing unit outputs a comparison result to the central processing unit according to processing of an inputted sampled signal. A sensor is configured to perform three-dimensional sampling of the three-dimensional structured light signal as reflected by the object and input the sampled signal directly to the first neural network processing unit.
    Type: Application
    Filed: March 13, 2018
    Publication date: September 19, 2019
    Inventor: Chun-Chen Liu
  • Publication number: 20190244011
    Abstract: A low-power face identification method includes emitting at least one first light signal to an object, receiving at least one second light signal reflected by the object, decoding the at least one second light signal to generate a decoded light signal, extracting two-dimensional image information from the decoded light signal, performing a two-dimensional face detection function by an artificial intelligence chip according to the two-dimensional image information and two-dimensional face training data, inhibiting a two-dimensional face recognition function when a two-dimensional face is undetected, and disabling an image converter by the artificial intelligence chip in order to inhibit a three-dimensional face recognition function when the two-dimensional face recognition function is inhibited.
    Type: Application
    Filed: February 6, 2018
    Publication date: August 8, 2019
    Inventor: Chun-Chen Liu
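The power-saving order described above (cheap 2D detection first, 3D path disabled when no face appears) is plain control flow. All hook functions in this sketch are hypothetical stand-ins for the hardware blocks named in the abstract.

```python
def face_id_pipeline(frame, detect_2d_face, recognize_2d_face,
                     recognize_3d_face, disable_image_converter):
    if not detect_2d_face(frame):
        disable_image_converter()   # power down the 3D path
        return None                 # both recognition stages inhibited
    if recognize_2d_face(frame):
        return recognize_3d_face(frame)
    return None

calls = []
result = face_id_pipeline(
    frame="frame-without-face",
    detect_2d_face=lambda f: False,
    recognize_2d_face=lambda f: calls.append("2d") or True,
    recognize_3d_face=lambda f: calls.append("3d") or "match",
    disable_image_converter=lambda: calls.append("converter-off"),
)
```

When 2D detection fails, neither recognition hook runs and only the converter shutdown fires, which is the power saving the method targets.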
  • Publication number: 20190228210
    Abstract: A face identification system includes a transmitter, a receiver, a database, an artificial intelligence chip, and a main processor. The transmitter is used for emitting at least one first light signal to an object. The receiver is used for receiving at least one second light signal reflected by the object. The database is used for saving training data. The artificial intelligence chip is coupled to the transmitter, the receiver, and the database for identifying a face image from the object according to the at least one second light signal and the training data. The main processor is coupled to the artificial intelligence chip for receiving a face identification signal generated from the artificial intelligence chip.
    Type: Application
    Filed: January 22, 2018
    Publication date: July 25, 2019
    Inventor: Chun-Chen Liu
  • Publication number: 20190189577
    Abstract: A package structure is provided. The package structure includes a first bump structure formed over a substrate, a solder joint formed over the first bump structure and a second bump structure formed over the solder joint. The first bump structure includes a first pillar layer formed over the substrate and a first barrier layer formed over the first pillar layer. The first barrier layer has a first protruding portion which extends away from a sidewall surface of the first pillar layer, and a distance between the sidewall surface of the first pillar layer and a sidewall surface of the first barrier layer is in a range from about 0.5 μm to about 3 μm. The second bump structure includes a second barrier layer formed over the solder joint and a second pillar layer formed over the second barrier layer, wherein the second barrier layer has a second protruding portion which extends away from a sidewall surface of the second pillar layer.
    Type: Application
    Filed: November 19, 2018
    Publication date: June 20, 2019
    Applicant: Taiwan Semiconductor Manufacturing Co., Ltd.
    Inventors: Cheng-Hung Chen, Yu-Nu Hsu, Chun-Chen Liu, Heng-Chi Huang, Chien-Chen Li, Shih-Yen Chen, Cheng-Nan Hsieh, Kuo-Chio Liu, Chen-Shien Chen, Chin-Yu Ku, Te-Hsun Pang, Yuan-Feng Wu, Sen-Chi Chiang
  • Patent number: 10169295
    Abstract: A convolution operation method includes the following steps of: performing convolution operations for data inputted in channels, respectively, so as to output a plurality of convolution results; and alternately summing the convolution results of the channels in order so as to output a sum result. A convolution operation device executing the convolution operation method is also disclosed.
    Type: Grant
    Filed: March 15, 2017
    Date of Patent: January 1, 2019
    Assignee: KNERON, INC.
    Inventors: Li Du, Yuan Du, Yi-Lei Li, Yen-Cheng Kuan, Chun-Chen Liu
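The two claimed steps can be sketched with 1-D convolutions: convolve each channel with its own kernel, then accumulate the per-channel results. A minimal sketch, not the hardware device itself:

```python
import numpy as np

def multichannel_conv_sum(channels, kernels):
    # Step 1: convolve each input channel with its own kernel.
    results = [np.convolve(c, k, mode="valid")
               for c, k in zip(channels, kernels)]
    # Step 2: accumulate the per-channel convolution results in order.
    total = np.zeros_like(results[0])
    for r in results:
        total = total + r
    return total

channels = [np.array([1.0, 2.0, 3.0, 4.0]),
            np.array([0.0, 1.0, 0.0, 1.0])]
kernels = [np.array([1.0, -1.0]),
           np.array([2.0, 2.0])]
out = multichannel_conv_sum(channels, kernels)
```

In hardware, summing the channel results alternately and in order lets a single adder serve all channels instead of needing a full adder tree.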
  • Patent number: 10162799
    Abstract: A buffer device includes input lines, an input buffer unit and a remapping unit. The input lines are coupled to a memory and configured to be inputted with data from the memory in a current clock. The input buffer unit is coupled to the input lines and configured to buffer one part of the inputted data and output the part of the inputted data in a later clock. The remapping unit is coupled to the input lines and the input buffer unit, and configured to generate remap data for a convolution operation according to the data on the input lines and the output of the input buffer unit in the current clock. A convolution operation method for a data stream is also disclosed.
    Type: Grant
    Filed: March 15, 2017
    Date of Patent: December 25, 2018
    Assignee: KNERON, INC.
    Inventors: Yuan Du, Li Du, Yi-Lei Li, Yen-Cheng Kuan, Chun-Chen Liu
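A software analogue of this buffering scheme: keep the tail of each clock's input in a buffer and remap buffer plus new data into full convolution windows, so no window straddling two clocks is lost. The function name and chunking are illustrative, assuming a kernel at least two samples wide.

```python
def stream_conv_windows(stream_chunks, kernel_width):
    buffer = []                          # plays the role of the input buffer unit
    for chunk in stream_chunks:          # one clock's worth of input lines
        line = buffer + list(chunk)      # remap: buffered part + current data
        for start in range(len(line) - kernel_width + 1):
            yield line[start:start + kernel_width]
        buffer = line[-(kernel_width - 1):]  # part re-output in a later clock

windows = list(stream_conv_windows([[1, 2, 3, 4], [5, 6]], kernel_width=3))
```

The windows cover the stream [1..6] exactly once each, even though it arrived in two separate clocks.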
  • Publication number: 20180232629
    Abstract: A pooling operation method for a convolutional neural network includes the following steps of: reading multiple new data in at least one column of a pooling window; performing a first pooling operation with the new data to generate at least a pooling result column; storing the pooling result column in a buffer; and performing a second pooling operation with the pooling result column and at least a preceding pooling result column stored in the buffer to generate a pooling result of the pooling window. The first pooling operation and the second pooling operation are max pooling operations.
    Type: Application
    Filed: November 2, 2017
    Publication date: August 16, 2018
    Inventors: Yuan Du, Li Du, Chun-Chen Liu
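The two-stage decomposition works because max pooling is associative: pooling each column first, then pooling across the buffered column results, gives the same answer as pooling the whole window. A minimal sketch over a k×k window:

```python
import numpy as np

def two_stage_max_pool(matrix, k=2):
    h, w = matrix.shape
    out = np.empty((h // k, w // k))
    for i in range(0, h, k):
        # First pooling: reduce each column of the k-row band to one value
        # (the pooling result column that would be stored in the buffer).
        col_buffer = np.max(matrix[i:i + k, :], axis=0)
        for j in range(0, w, k):
            # Second pooling: max across the buffered column results.
            out[i // k, j // k] = np.max(col_buffer[j:j + k])
    return out

m = np.array([[1, 5, 2, 0],
              [3, 2, 8, 1],
              [0, 9, 4, 4],
              [7, 6, 3, 2]])
pooled = two_stage_max_pool(m, k=2)
```

Buffering the column results means each new column of input needs only one small pooling operation rather than re-reading the whole window.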
  • Publication number: 20180232621
    Abstract: An operation method for a convolutional neural network includes the following steps of: performing an add operation with a plurality of input data to output an accumulated result; performing a bit-shift operation with the accumulated result to output a shifted result; and performing a weight-scaling operation with the shifted result to output a weighted result. Herein, a weighting factor of the weight-scaling operation is determined according to the amount of input data, the amount of right-shifting bits in the bit-shift operation, and a scaled weight value of a consecutive layer in the convolutional neural network.
    Type: Application
    Filed: November 2, 2017
    Publication date: August 16, 2018
    Inventors: Yuan Du, Li Du, Chun-Chen Liu
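The three steps above can be sketched with integer arithmetic. The exact weighting-factor formula is not disclosed in the abstract; the form below (compensating for the input count and the shift, and folding in the next layer's scaled weight) is an assumption for illustration.

```python
def add_shift_scale(inputs, shift_bits, w_next):
    acc = sum(inputs)                 # add operation over the input data
    shifted = acc >> shift_bits       # bit-shift: cheap divide by 2**shift_bits
    n = len(inputs)
    # Weighting factor (assumed form, not the patented formula): undo the
    # shift and the input count, then fold in the next layer's scaled weight.
    weight = w_next * (1 << shift_bits) / n
    return shifted * weight

# Approximates mean(inputs) * w_next using only an add, a shift, and one scale.
result = add_shift_scale([10, 20, 30, 40], shift_bits=2, w_next=0.5)
```

Replacing a division by a shift plus one deferred scale keeps the datapath cheap, at the cost of the truncation introduced by the shift.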
  • Patent number: 10008425
    Abstract: Integrated circuits and methods of manufacturing such circuits are disclosed herein that feature metal line-via matrix insertion after place and route processes are performed and/or completed for the integrated circuit's layout. The metal line-via matrix consists of one or more additional metal lines and one or more additional vias that are inserted into the integrated circuit's layout at a specific point to lower the current and current density through a first conductive path that has been determined to suffer from electromigration, IR-voltage drop, and/or jitter. Specifically, the metal line-via matrix provides one or more auxiliary conductive paths to divert and carry a portion of the current that would otherwise flow through the first conductive path. This mitigates electromigration issues and IR-voltage drop along the first conductive path. It may also help alleviate problems due to jitter along the path.
    Type: Grant
    Filed: November 1, 2016
    Date of Patent: June 26, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Chun-Chen Liu, Ju-Yi Lu, Shengqiong Xie
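The electromigration relief comes from basic current division: auxiliary line-via paths in parallel with the stressed path each carry a share of the current proportional to their conductance. A small numeric sketch with assumed equal conductances:

```python
def path_currents(i_total, conductances):
    # Current divider: each parallel path carries a share of the total
    # current proportional to its conductance.
    g_sum = sum(conductances)
    return [i_total * g / g_sum for g in conductances]

# Original path alone carries all 12 mA; adding two auxiliary line-via
# paths of (assumed) equal conductance cuts its share, and hence its
# current density, to a third.
before = path_currents(12.0, [1.0])
after = path_currents(12.0, [1.0, 1.0, 1.0])
```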
  • Publication number: 20180137084
    Abstract: A convolution operation method includes the following steps of: performing convolution operations for data inputted in channels, respectively, so as to output a plurality of convolution results; and alternately summing the convolution results of the channels in order so as to output a sum result. A convolution operation device executing the convolution operation method is also disclosed.
    Type: Application
    Filed: March 15, 2017
    Publication date: May 17, 2018
    Inventors: Li Du, Yuan Du, Yi-Lei Li, Yen-Cheng Kuan, Chun-Chen Liu