Patents by Inventor Meng-Fan Chang

Meng-Fan Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220359031
    Abstract: A control circuit, a memory system and a control method are provided. The control circuit is configured to control a plurality of memory cells of a memory array. The control circuit comprises a program controller. The program controller is configured to program a first electrical characteristic distribution and a second electrical characteristic distribution of the memory cells according to an error tolerance of a first bit of a data type. A first overlapping area between the first electrical characteristic distribution and the second electrical characteristic distribution is smaller than a first predetermined value.
    Type: Application
    Filed: August 23, 2021
    Publication date: November 10, 2022
    Applicant: Taiwan Semiconductor Manufacturing Company, Ltd.
    Inventors: Win-San Khwa, Jen-Chieh Liu, Meng-Fan Chang, Tung-Ying Lee, Jin Cai
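    A rough behavioral sketch of the idea behind this application: the overlap between two programmed cell distributions is estimated and compared against a per-bit budget that reflects how error-tolerant that bit of the data type is. The Gaussian model, the numeric values, and names such as overlap_area and overlap_budget are illustrative assumptions, not taken from the patent.
    ```python
    # Behavioral sketch (not the patented circuit): estimate the overlap between two
    # programmed cell distributions and compare it against a per-bit error-tolerance budget.
    import math

    def gaussian_pdf(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

    def overlap_area(mu1, sigma1, mu2, sigma2, lo=-5.0, hi=5.0, steps=20000):
        """Numerically integrate min(pdf1, pdf2): the area shared by both distributions."""
        dx = (hi - lo) / steps
        return sum(min(gaussian_pdf(lo + i * dx, mu1, sigma1),
                       gaussian_pdf(lo + i * dx, mu2, sigma2)) * dx
                   for i in range(steps))

    # A bit with low error tolerance gets a tighter overlap budget than a bit whose
    # errors the data type can absorb (assumed example values).
    overlap_budget = {"low_tolerance_bit": 0.001, "high_tolerance_bit": 0.05}

    area = overlap_area(mu1=0.0, sigma1=0.25, mu2=1.0, sigma2=0.25)
    for bit, budget in overlap_budget.items():
        print(f"{bit}: overlap {area:.4f} {'<=' if area <= budget else '>'} budget {budget}")
    ```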
  • Patent number: 11495287
    Abstract: A memory unit is controlled by a first word line and a second word line. The memory unit includes a memory cell and a transpose cell. The memory cell stores a weight. The memory cell is controlled by the first word line and includes a local bit line transmitting the weight. The transpose cell is connected to the memory cell and receives the weight via the local bit line. The transpose cell includes an input bit line, an input bit line bar, an output bit line and an output bit line bar. Each of the input bit line and the input bit line bar transmits a multi-bit input value, and the transpose cell is controlled by the second word line to generate a multi-bit output value on each of the output bit line and the output bit line bar according to the multi-bit input value and the weight.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: November 8, 2022
    Assignee: NATIONAL TSING HUA UNIVERSITY
    Inventors: Meng-Fan Chang, Yung-Ning Tu, Xin Si, Wei-Hsing Huang
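    The following is a minimal behavioral sketch of what the transpose cell computes, assuming a column of cells whose input-times-weight products are accumulated and driven as a complementary pair on the output bit line and output bit line bar; the function name and the signed-weight convention are assumptions, not the circuit itself.
    ```python
    # Behavioral sketch of the transpose-cell computation: each cell multiplies a
    # multi-bit input by its stored weight, and the column accumulates the products.
    from typing import List, Tuple

    def transpose_column_mac(inputs: List[int], weights: List[int]) -> Tuple[int, int]:
        """Return (output_bit_line, output_bit_line_bar) as a complementary pair."""
        assert len(inputs) == len(weights)
        acc = sum(x * w for x, w in zip(inputs, weights))
        return acc, -acc  # the "bar" output carries the complementary value

    inputs = [3, 1, 2, 0]    # multi-bit input values on the input bit line / bit line bar
    weights = [1, -1, 1, 1]  # weights stored in the memory cells (e.g. +/-1)
    print(transpose_column_mac(inputs, weights))  # (4, -4)
    ```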
  • Publication number: 20220291963
    Abstract: An input-shaping method for a group-modulated input scheme in a plurality of computing-in-memory applications is configured to shape a plurality of multi-bit input signals. The input-shaping method for the group-modulated input scheme in the plurality of computing-in-memory applications includes performing an input splitting step, a threshold setting step and an input shaping step. The input splitting step includes splitting the multi-bit input signals into a plurality of input sub-groups via an input-shaping unit. The threshold setting step includes setting at least one shaping threshold via the input-shaping unit. The input shaping step includes shaping at least one of the input sub-groups according to the at least one shaping threshold via the input-shaping unit to form a plurality of shaped multi-bit input signals so as to increase a probability of a bit equal to 0 occurring in the at least one of the input sub-groups.
    Type: Application
    Filed: March 15, 2021
    Publication date: September 15, 2022
    Inventors: Fu-Chun CHANG, Ta-Wei LIU, Cheng-Xin XUE, Sheng-Po HUANG, Yen-Hsiang HUANG, Meng-Fan CHANG
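    A minimal sketch of the input-shaping flow described above, assuming 8-bit inputs split into two 4-bit sub-groups and a simple zero-forcing rule: sub-group values at or below the shaping threshold are replaced with 0, which raises the probability of 0 bits. Group widths, the threshold value, and the helper names are illustrative assumptions.
    ```python
    # Input splitting, threshold setting and input shaping for group-modulated inputs.
    from typing import List, Tuple

    def split_input(value: int, group_bits: Tuple[int, ...] = (4, 4)) -> List[int]:
        """Split an 8-bit input into (MSB group, LSB group) sub-group values."""
        groups = []
        for width in group_bits:
            groups.append(value & ((1 << width) - 1))
            value >>= width
        return groups[::-1]  # most-significant group first

    def shape_groups(groups: List[int], threshold: int = 3) -> List[int]:
        """Force small sub-group values to zero to raise the probability of 0 bits."""
        return [0 if g <= threshold else g for g in groups]

    for x in (0x1F, 0x42, 0x03):
        groups = split_input(x)
        print(f"input {x:#04x}: groups {groups} -> shaped {shape_groups(groups)}")
    ```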
  • Publication number: 20220286118
    Abstract: A random number generator that includes a control circuit, an oscillation circuit, an oscillation detection circuit and a latch circuit is introduced. The control circuit sweeps a configuration of a bias control signal among a plurality of configurations. The oscillation circuit generates an oscillation signal based on the configuration of the bias control signal. The oscillation detection circuit detects an onset of the oscillation signal, and outputs a lock signal. The latch circuit latches the oscillation signal according to a trigger signal to output a random number, wherein the trigger signal is asserted after the lock signal is outputted, and the configuration of the bias control signal is locked after the lock signal is outputted. A method for generating a random number and an operation method of a random number generator are also introduced.
    Type: Application
    Filed: May 3, 2022
    Publication date: September 8, 2022
    Applicant: Taiwan Semiconductor Manufacturing Company, Ltd.
    Inventors: Win-San Khwa, Jui-Jen Wu, Jen-Chieh Liu, Elia Ambrosi, Xinyu BAO, Meng-Fan Chang
  • Patent number: 11423315
    Abstract: A quantization method for a plurality of partial sums of a convolution neural network based on a computing-in-memory hardware includes a probability-based quantizing step and a margin-based quantizing step. The probability-based quantizing step includes a network training step, a quantization-level generating step, a partial-sum quantizing step, a first network retraining step and a first accuracy generating step. The margin-based quantizing step includes a quantization edge changing step, a second network retraining step and a second accuracy generating step. The quantization edge changing step includes changing a quantization edge of at least one of a plurality of quantization levels. The probability-based quantizing step is performed to generate a first accuracy value, and the margin-based quantizing step is performed to generate a second accuracy value. The second accuracy value is greater than the first accuracy value.
    Type: Grant
    Filed: February 6, 2020
    Date of Patent: August 23, 2022
    Assignee: NATIONAL TSING HUA UNIVERSITY
    Inventors: Meng-Fan Chang, Jing-Hong Wang, Ta-Wei Liu
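    A rough sketch of the partial-sum quantization being tuned here, assuming a small set of quantization levels separated by adjustable edges; the margin-based step is shown only as shifting one edge, and the retraining and accuracy comparison of the method are not reproduced. Edge positions, levels, and helper names are assumptions.
    ```python
    # Quantize computing-in-memory partial sums to discrete levels; then shift one edge.
    from bisect import bisect_right
    from typing import List

    def quantize_partial_sum(value: float, edges: List[float], levels: List[float]) -> float:
        """Map a partial sum to the representative level of the bin it falls in."""
        assert len(levels) == len(edges) + 1
        return levels[bisect_right(edges, value)]

    partial_sums = [-3.2, -0.4, 0.1, 1.7, 4.9]
    edges = [-2.0, 0.0, 2.0]          # probability-based edges (e.g. from the value histogram)
    levels = [-3.0, -1.0, 1.0, 3.0]   # representative level per bin

    print([quantize_partial_sum(v, edges, levels) for v in partial_sums])

    # Margin-based step: change one quantization edge and re-evaluate (retraining would follow).
    edges_margin = [-2.0, 0.5, 2.0]
    print([quantize_partial_sum(v, edges_margin, levels) for v in partial_sums])
    ```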
  • Publication number: 20220262432
    Abstract: A system includes a global generator and local generators. The global generator is coupled to a memory array, and is configured to generate global signals, according to a number of a computational output of the memory array. The local generators are coupled to the global generator and the memory array, and are configured to generate local signals, according to the global signals. Each one of the local generators includes a first reference circuit and a local current mirror. The first reference circuit is coupled to the global generator, and is configured to generate a first reference signal at a node, in response to a first global signal of the global signals. The local current mirror is coupled to the first reference circuit at the node, and is configured to generate the local signals, by mirroring a summation of at least one signal at the node.
    Type: Application
    Filed: July 1, 2021
    Publication date: August 18, 2022
    Applicant: TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY, LTD.
    Inventors: Yu-Der CHIH, Meng-Fan CHANG, May-Be CHEN, Cheng-Xin XUE, Je-Syu LIU
  • Patent number: 11416146
    Abstract: A memory structure with input-aware maximum multiply-and-accumulate value zone prediction for computing-in-memory applications includes a memory array, an input-aware zone prediction circuit and an analog-to-digital converter. An input-aware maximum partial multiply-and-accumulate value voltage generator is configured to generate a maximum partial multiply-and-accumulate value according to at least one input value. A prediction-aware global reference voltage generator is configured to generate a plurality of global reference voltages, a maximum reference voltage and a selected minimum reference voltage. A maximum partial multiply-and-accumulate value zone detector is configured to generate a zone switch signal by comparing the maximum partial multiply-and-accumulate value and the global reference voltages.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: August 16, 2022
    Assignee: NATIONAL TSING HUA UNIVERSITY
    Inventors: Meng-Fan Chang, Jian-Wei Su, Je-Min Hung, Chuan-Jia Jhang, Ping-Chun Wu, Jin-Sheng Ren
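    A behavioral sketch of the input-aware zone prediction, assuming the maximum partial multiply-and-accumulate value is bounded from the applied inputs and then compared against a few global reference levels to pick the ADC reference zone. The bound, the zone edges, and the function names are illustrative assumptions.
    ```python
    # Predict which reference-voltage zone the maximum partial MAC value falls into.
    from typing import List

    def max_partial_mac(inputs: List[int], max_weight: int = 1) -> int:
        """Upper bound on the partial MAC value for the applied inputs."""
        return sum(abs(x) * max_weight for x in inputs)

    def select_zone(max_mac: int, zone_edges: List[int]) -> int:
        """Return the index of the first zone whose upper edge covers max_mac."""
        for zone, edge in enumerate(zone_edges):
            if max_mac <= edge:
                return zone
        return len(zone_edges)  # falls in the top zone

    inputs = [1, 0, 1, 1, 0, 1, 1, 0]   # e.g. binary inputs on the active rows
    zone_edges = [2, 4, 8]              # global reference levels expressed in MAC units
    zone = select_zone(max_partial_mac(inputs), zone_edges)
    print(f"max partial MAC bound = {max_partial_mac(inputs)}, selected zone = {zone}")
    ```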
  • Patent number: 11392820
    Abstract: A transpose memory unit for a plurality of multi-bit convolutional neural network based computing-in-memory applications includes a memory cell and a transpose cell. The memory cell stores a weight. The transpose cell is connected to the memory cell and receives the weight from the memory cell. The transpose cell includes an input bit line, at least one first input word line, a first output bit line, at least one second input word line and a second output bit line. One of the at least one first input word line and the at least one second input word line transmits at least one multi-bit input value, and the transpose cell is controlled by the second word line to generate a multiply-accumulate output value on one of the first output bit line and the second output bit line according to the at least one multi-bit input value multiplied by the weight.
    Type: Grant
    Filed: January 14, 2020
    Date of Patent: July 19, 2022
    Assignee: NATIONAL TSING HUA UNIVERSITY
    Inventors: Meng-Fan Chang, Jian-Wei Su, Yen-Chi Chou, Ru-Hui Liu
  • Patent number: 11393523
    Abstract: A memory unit with an asymmetric group-modulated input scheme and a current-to-voltage signal stacking scheme for a plurality of non-volatile computing-in-memory applications is configured to compute a plurality of multi-bit input signals and a plurality of weights. A controller splits the multi-bit input signals into a plurality of input sub-groups and generates a plurality of switching signals according to the input sub-groups, and the input sub-groups are sequentially inputted to the word lines. The current-to-voltage signal stacking converter converts a bit-line current from a plurality of non-volatile memory cells into a plurality of converted voltages according to the input sub-groups and the switching signals, and the current-to-voltage signal stacking converter stacks the converted voltages to form an output voltage. The output voltage corresponds to a sum of a plurality of multiplication values which are equal to the multi-bit input signals multiplied by the weights.
    Type: Grant
    Filed: January 13, 2021
    Date of Patent: July 19, 2022
    Assignee: NATIONAL TSING HUA UNIVERSITY
    Inventors: Cheng-Xin Xue, Hui-Yao Kao, Sheng-Po Huang, Yen-Hsiang Huang, Meng-Fan Chang
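    A toy numerical sketch of the group-modulated input with current-to-voltage signal stacking: the multi-bit input is split into sub-groups applied in sequence, each sub-group's bit-line current is converted to a voltage, and the per-group voltages are stacked with binary weighting into one output. The gains, group width, and names are assumptions; the analog converter itself is not modeled.
    ```python
    # Stack per-sub-group converted voltages into one output proportional to input x weights.
    from typing import List

    def bitline_current(group_value: int, weights_sum: float, gain: float = 1e-6) -> float:
        """Toy model: current proportional to (sub-group input value) x (summed weights)."""
        return gain * group_value * weights_sum

    def stack_output_voltage(group_values: List[int], weights_sum: float,
                             group_width: int = 2, r_convert: float = 1e4) -> float:
        """Convert each sub-group current to a voltage and stack with binary weighting."""
        v_out = 0.0
        for significance, g in enumerate(reversed(group_values)):  # LSB group first
            v_out += (2 ** (significance * group_width)) * r_convert * bitline_current(g, weights_sum)
        return v_out

    # 4-bit input 0b1101 split into MSB group 0b11 and LSB group 0b01
    print(stack_output_voltage([0b11, 0b01], weights_sum=3.0))
    ```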
  • Publication number: 20220223197
    Abstract: A memory unit with an asymmetric group-modulated input scheme and a current-to-voltage signal stacking scheme for a plurality of non-volatile computing-in-memory applications is configured to compute a plurality of multi-bit input signals and a plurality of weights. A controller splits the multi-bit input signals into a plurality of input sub-groups and generates a plurality of switching signals according to the input sub-groups, and the input sub-groups are sequentially inputted to the word lines. The current-to-voltage signal stacking converter converts a bit-line current from a plurality of non-volatile memory cells into a plurality of converted voltages according to the input sub-groups and the switching signals, and the current-to-voltage signal stacking converter stacks the converted voltages to form an output voltage. The output voltage corresponds to a sum of a plurality of multiplication values which are equal to the multi-bit input signals multiplied by the weights.
    Type: Application
    Filed: January 13, 2021
    Publication date: July 14, 2022
    Inventors: Cheng-Xin XUE, Hui-Yao KAO, Sheng-Po HUANG, Yen-Hsiang HUANG, Meng-Fan CHANG
  • Patent number: 11349462
    Abstract: A random number generator that includes a control circuit, an oscillation circuit, a dynamic header circuit, an oscillation detection circuit and a latch circuit is introduced. The control circuit sweeps a configuration of a bias control signal among a plurality of configurations. The dynamic header circuit generates a bias voltage based on the configuration of the bias control signal. The oscillation circuit generates an oscillation signal based on the bias voltage. The oscillation detection circuit detects an onset of the oscillation signal, and outputs a lock signal. The latch circuit latches the oscillation signal according to a trigger signal to output a random number, wherein the trigger signal is asserted after the lock signal is outputted, and the configuration of the bias control signal is locked after the lock signal is outputted. A method for generating a random number and an operation method of a random number generator are also introduced.
    Type: Grant
    Filed: July 8, 2021
    Date of Patent: May 31, 2022
    Assignee: Taiwan Semiconductor Manufacturing Company, Ltd.
    Inventors: Win-San Khwa, Jui-Jen Wu, Jen-Chieh Liu, Elia Ambrosi, Xinyu Bao, Meng-Fan Chang
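    A behavioral sketch of the bias-sweep sequence described above: the bias configuration is stepped until the oscillation detector fires, the configuration is locked, and the latch samples the oscillator near its onset, where its state is most sensitive to noise. The toy oscillation model and threshold are assumptions, not the analog circuit.
    ```python
    # Sweep bias configurations, lock at oscillation onset, then latch a random bit.
    import random

    def oscillates(bias_config: int, onset_config: int = 5) -> bool:
        """Toy detector: the oscillator only starts once the bias configuration is strong enough."""
        return bias_config >= onset_config

    def generate_random_bit(num_configs: int = 8) -> int:
        for config in range(num_configs):      # the control circuit sweeps the bias configuration
            if oscillates(config):             # the oscillation detector asserts the lock signal
                # the configuration is locked here; the latch samples the oscillator near onset
                return random.getrandbits(1)
        raise RuntimeError("no bias configuration produced an oscillation")

    print([generate_random_bit() for _ in range(16)])
    ```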
  • Patent number: 11335401
    Abstract: A memory unit with multiple word lines for a plurality of non-volatile computing-in-memory applications is configured to compute a plurality of input signals and a plurality of weights. The memory unit includes a non-volatile memory cell array, a replica non-volatile memory cell array and a multi-row current calibration circuit. The non-volatile memory cell array is configured to generate a bit-line current. The replica non-volatile memory cell array includes a plurality of replica non-volatile memory cells and is configured to generate a calibration current. Each of the replica non-volatile memory cells is in the high resistance state. The multi-row current calibration circuit is electrically connected to the non-volatile memory cell array and the replica non-volatile memory cell array. The multi-row current calibration circuit is configured to subtract the calibration current from a dataline current to generate a calibrated dataline current. The dataline current is equal to the bit-line current.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: May 17, 2022
    Assignee: NATIONAL TSING HUA UNIVERSITY
    Inventors: Yen-Hsiang Huang, Sheng-Po Huang, Cheng-Xin Xue, Meng-Fan Chang
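    A minimal numerical sketch of the replica-array calibration: the replica column of high-resistance-state cells reproduces the background component of the dataline current, so subtracting it leaves the signal contribution. The current values and names below are illustrative assumptions.
    ```python
    # Calibrated dataline current = dataline (bit-line) current - replica calibration current.
    def calibrated_dataline_current(bitline_current: float, calibration_current: float) -> float:
        return bitline_current - calibration_current

    i_bitline = 42.0e-6   # dataline current, equal to the bit-line current (A)
    i_replica = 7.5e-6    # calibration current from the replica high-resistance-state cells (A)
    print(f"calibrated current = {calibrated_dataline_current(i_bitline, i_replica) * 1e6:.1f} uA")
    ```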
  • Publication number: 20220129153
    Abstract: A memory unit is controlled by a first word line and a second word line. The memory unit includes a memory cell and a multi-bit input local computing cell. The memory cell stores a weight. The memory cell is controlled by the first word line and includes a local bit line transmitting the weight. The multi-bit input local computing cell is connected to the memory cell and receives the weight via the local bit line. The multi-bit input local computing cell includes a plurality of input lines and a plurality of output lines. Each of the input lines transmits a multi-bit input value, and the multi-bit input local computing cell is controlled by the second word line to generate a multi-bit output value on each of the output lines according to the multi-bit input value multiplied by the weight.
    Type: Application
    Filed: October 27, 2020
    Publication date: April 28, 2022
    Inventors: Meng-Fan CHANG, Pei-Jung LU
  • Publication number: 20220044714
    Abstract: A memory unit includes at least one memory cell and a computational cell. The at least one memory cell stores a weight. The at least one memory cell is controlled by a first word line and includes a local bit line transmitting the weight. The computational cell is connected to the at least one memory cell and receives the weight via the local bit line. Each of an input bit line and an input bit line bar transmits a multi-bit input value. The computational cell is controlled by a second word line and an enable signal to generate a multi-bit output value on each of an output bit line and an output bit line bar according to the multi-bit input value multiplied by the weight. The computational cell is controlled by a first switching signal and a second switching signal for charge sharing.
    Type: Application
    Filed: August 4, 2020
    Publication date: February 10, 2022
    Inventors: Meng-Fan CHANG, Yen-Chi CHOU, Jian-Wei SU
  • Publication number: 20210390415
    Abstract: A dynamic gradient calibration method for a computing-in-memory neural network is performed to update a plurality of weights in a computing-in-memory circuit according to a plurality of inputs corresponding to a correct answer. A forward operating step includes performing a bit-wise multiply-accumulate operation on a plurality of divided inputs and a plurality of divided weights to generate a plurality of multiply-accumulate values, and performing a clamping function on the multiply-accumulate values to generate a plurality of clamped multiply-accumulate values according to a predetermined upper bound value, and comparing the clamped multiply-accumulate values with the correct answer to generate a plurality of loss values. A backward operating step includes performing a partial differential operation on the loss values relative to the weights to generate a weight-based gradient. The weights are updated according to the weight-based gradient.
    Type: Application
    Filed: June 16, 2020
    Publication date: December 16, 2021
    Inventors: Meng-Fan CHANG, Shao-Hung HUANG, Ta-Wei LIU
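    A short NumPy sketch of the forward/backward flow in this abstract, assuming grouped (divided) inputs and weights, a clamp at a predetermined upper bound, and a plain squared-error loss standing in for the comparison with the correct answer; the shapes, the bound, and the learning rate are assumptions.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    inputs = rng.integers(0, 2, size=(4, 16)).astype(float)  # divided inputs: 4 groups x 16 rows
    weights = rng.normal(0.0, 0.5, size=16)                  # divided weights (shared here)
    target = 3.0                                             # the "correct answer"
    upper_bound, lr = 4.0, 0.05

    # Forward operating step: per-group MACs, clamp to the upper bound, compare with the answer.
    macs = inputs @ weights
    clamped = np.minimum(macs, upper_bound)
    output = clamped.sum()
    loss = 0.5 * (output - target) ** 2

    # Backward operating step: gradient of the loss w.r.t. the weights; a clamped group
    # contributes no gradient because the clamp is flat above the bound.
    grad_output = output - target
    pass_through = (macs < upper_bound).astype(float)
    grad_weights = (inputs * (grad_output * pass_through)[:, None]).sum(axis=0)
    weights -= lr * grad_weights

    print(f"loss = {loss:.3f}, |grad| = {np.linalg.norm(grad_weights):.3f}")
    ```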
  • Patent number: 11195090
    Abstract: A memory unit is controlled by a word line, a reference voltage and a bit-line clamping voltage. A non-volatile memory cell is controlled by the word line and stores a weight. A clamping module is electrically connected to the non-volatile memory cell via a bit line and controlled by the reference voltage and the bit-line clamping voltage. A clamping transistor of the clamping module is controlled by the bit-line clamping voltage to adjust a bit-line current. A cell detector of the clamping module is configured to detect the bit-line current to generate a comparison output according to the reference voltage. A clamping control circuit of the clamping module switches the clamping transistor according to the comparison output and the bit-line clamping voltage. When the clamping transistor is turned on by the clamping control circuit, the bit-line current corresponds to the bit-line clamping voltage multiplied by the weight.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: December 7, 2021
    Assignee: NATIONAL TSING HUA UNIVERSITY
    Inventors: Meng-Fan Chang, Cheng-Xin Xue, Je-Syu Liu, Ting-Wei Chang, Tsung-Yuan Huang, Hui-Yao Kao
  • Publication number: 20210247962
    Abstract: A memory unit with a multiply-accumulate assist scheme for a plurality of multi-bit convolutional neural network based computing-in-memory applications is controlled by a reference voltage, a word line and a multi-bit input voltage. The memory unit includes a non-volatile memory cell, a voltage divider and a voltage keeper. The non-volatile memory cell is controlled by the word line and stores a weight. The voltage divider includes a data line and generates a charge current on the data line according to the reference voltage, and a voltage level of the data line is generated by the non-volatile memory cell and the charge current. The voltage keeper generates an output current on an output node according to the multi-bit input voltage and the voltage level of the data line, and the output current corresponds to the multi-bit input voltage multiplied by the weight.
    Type: Application
    Filed: February 6, 2020
    Publication date: August 12, 2021
    Inventors: Meng-Fan CHANG, Han-Wen HU, Kuang-Tang CHANG
  • Publication number: 20210248478
    Abstract: A quantization method for a plurality of partial sums of a convolution neural network based on a computing-in-memory hardware includes a probability-based quantizing step and a margin-based quantizing step. The probability-based quantizing step includes a network training step, a quantization-level generating step, a partial-sum quantizing step, a first network retraining step and a first accuracy generating step. The margin-based quantizing step includes a quantization edge changing step, a second network retraining step and a second accuracy generating step. The quantization edge changing step includes changing a quantization edge of at least one of a plurality of quantization levels. The probability-based quantizing step is performed to generate a first accuracy value, and the margin-based quantizing step is performed to generate a second accuracy value. The second accuracy value is greater than the first accuracy value.
    Type: Application
    Filed: February 6, 2020
    Publication date: August 12, 2021
    Inventors: Meng-Fan CHANG, Jing-Hong WANG, Ta-Wei LIU
  • Publication number: 20210216846
    Abstract: A transpose memory unit for a plurality of multi-bit convolutional neural network based computing-in-memory applications includes a memory cell and a transpose cell. The memory cell stores a weight. The transpose cell is connected to the memory cell and receives the weight from the memory cell. The transpose cell includes an input bit line, at least one first input word line, a first output bit line, at least one second input word line and a second output bit line. One of the at least one first input word line and the at least one second input word line transmits at least one multi-bit input value, and the transpose cell is controlled by the second word line to generate a multiply-accumulate output value on one of the first output bit line and the second output bit line according to the at least one multi-bit input value multiplied by the weight.
    Type: Application
    Filed: January 14, 2020
    Publication date: July 15, 2021
    Inventors: Meng-Fan CHANG, Jian-Wei SU, Yen-Chi CHOU, Ru-Hui LIU
  • Patent number: 11057224
    Abstract: A method for performing a physical unclonable function generated by a non-volatile memory write delay difference includes a resetting step, a writing step, a detecting step, a terminating step and a write-back operating step. The resetting step includes resetting two non-volatile memory cells controlled by a bit line and a bit line bar, respectively. The writing step includes performing a write operation on each of the two non-volatile memory cells. The detecting step includes detecting a voltage drop of each of the bit line and the bit line bar, and comparing the voltage drop and a predetermined voltage difference value to generate a comparison flag. The terminating step includes terminating the write operation on one of the two non-volatile memory cells according to the comparison flag. The write-back operating step includes performing a write-back operation on another of the two non-volatile memory cells.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: July 6, 2021
    Assignee: NATIONAL TSING HUA UNIVERSITY
    Inventor: Meng-Fan Chang
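    A behavioral sketch of the write-delay PUF procedure: both cells are reset and written together, the first bit line to reach the target voltage drop fixes the response bit, the slower write is terminated, and the losing cell is written back. The Gaussian delay model and parameter values are illustrative assumptions, not measured device behavior.
    ```python
    # Derive a PUF bit from the race between two non-volatile cell write delays.
    import random

    def puf_bit(mean_delay_ns: float = 50.0, sigma_ns: float = 5.0) -> int:
        # Resetting step: both cells start from the reset state (not modeled explicitly).
        delay_bl = random.gauss(mean_delay_ns, sigma_ns)   # write delay seen on the bit line
        delay_blb = random.gauss(mean_delay_ns, sigma_ns)  # write delay seen on the bit line bar
        # Detecting/terminating steps: the first bit line to reach the target voltage drop
        # decides the bit; the slower write is terminated and that cell is written back.
        return 1 if delay_bl < delay_blb else 0

    print("".join(str(puf_bit()) for _ in range(32)))  # 32 response bits
    ```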