Patents by Inventor Meng-Fan Chang

Meng-Fan Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11392820
    Abstract: A transpose memory unit for a plurality of multi-bit convolutional neural network based computing-in-memory applications includes a memory cell and a transpose cell. The memory cell stores a weight. The transpose cell is connected to the memory cell and receives the weight from the memory cell. The transpose cell includes an input bit line, at least one first input word line, a first output bit line, at least one second input word line and a second output bit line. One of the at least one first input word line and the at least one second input word line transmits at least one multi-bit input value, and the transpose cell is controlled by the second word line to generate a multiply-accumulate output value on one of the first output bit line and the second output bit line according to the at least one multi-bit input value multiplied by the weight.
    Type: Grant
    Filed: January 14, 2020
    Date of Patent: July 19, 2022
    Assignee: NATIONAL TSING HUA UNIVERSITY
    Inventors: Meng-Fan Chang, Jian-Wei Su, Yen-Chi Chou, Ru-Hui Liu
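The multiply-accumulate (MAC) operation that the transpose cell computes in hardware can be sketched in software. This is an illustrative model only, not the patented circuit; the function name `mac` and the sample values are invented for the example:

```python
def mac(inputs, weights):
    """Multiply-accumulate: sum of element-wise input*weight products,
    as performed on the output bit line for multi-bit inputs and a
    stored weight per cell."""
    assert len(inputs) == len(weights)
    return sum(i * w for i, w in zip(inputs, weights))

# Example: three multi-bit input values against signed weights
print(mac([3, 7, 12], [1, -2, 4]))  # 3 - 14 + 48 = 37
```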
  • Patent number: 11393523
    Abstract: A memory unit with an asymmetric group-modulated input scheme and a current-to-voltage signal stacking scheme for a plurality of non-volatile computing-in-memory applications is configured to compute a plurality of multi-bit input signals and a plurality of weights. A controller splits the multi-bit input signals into a plurality of input sub-groups and generates a plurality of switching signals according to the input sub-groups, and the input sub-groups are sequentially inputted to the word lines. The current-to-voltage signal stacking converter converts the bit-line current from a plurality of non-volatile memory cells into a plurality of converted voltages according to the input sub-groups and the switching signals, and the current-to-voltage signal stacking converter stacks the converted voltages to form an output voltage. The output voltage corresponds to a sum of a plurality of multiplication values which are equal to the multi-bit input signals multiplied by the weights.
    Type: Grant
    Filed: January 13, 2021
    Date of Patent: July 19, 2022
    Assignee: NATIONAL TSING HUA UNIVERSITY
    Inventors: Cheng-Xin Xue, Hui-Yao Kao, Sheng-Po Huang, Yen-Hsiang Huang, Meng-Fan Chang
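The group-modulated scheme can be modeled in software: a multi-bit input is split into bit sub-groups that are applied sequentially, and the partial results are stacked with their binary significance. This is a hedged arithmetic sketch, not the analog current-to-voltage circuit; names (`split_input`, `stacked_mac`) and bit widths are assumptions for illustration:

```python
def split_input(x, bits=8, group_bits=4):
    """Split a multi-bit input into bit sub-groups, MSB group first."""
    groups = []
    for shift in range(bits - group_bits, -1, -group_bits):
        groups.append((x >> shift) & ((1 << group_bits) - 1))
    return groups

def stacked_mac(x, w, bits=8, group_bits=4):
    """Process sub-groups sequentially and stack the partial products
    with their binary significance, mimicking the voltage-stacking step."""
    out = 0
    for g in split_input(x, bits, group_bits):
        out = (out << group_bits) + g * w
    return out

# Stacking the sub-group products recovers the full multiplication:
assert stacked_mac(0xA5, 3) == 0xA5 * 3
```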
  • Publication number: 20220223197
    Abstract: A memory unit with an asymmetric group-modulated input scheme and a current-to-voltage signal stacking scheme for a plurality of non-volatile computing-in-memory applications is configured to compute a plurality of multi-bit input signals and a plurality of weights. A controller splits the multi-bit input signals into a plurality of input sub-groups and generates a plurality of switching signals according to the input sub-groups, and the input sub-groups are sequentially inputted to the word lines. The current-to-voltage signal stacking converter converts the bit-line current from a plurality of non-volatile memory cells into a plurality of converted voltages according to the input sub-groups and the switching signals, and the current-to-voltage signal stacking converter stacks the converted voltages to form an output voltage. The output voltage corresponds to a sum of a plurality of multiplication values which are equal to the multi-bit input signals multiplied by the weights.
    Type: Application
    Filed: January 13, 2021
    Publication date: July 14, 2022
    Inventors: Cheng-Xin XUE, Hui-Yao KAO, Sheng-Po HUANG, Yen-Hsiang HUANG, Meng-Fan CHANG
  • Patent number: 11349462
    Abstract: A random number generator that includes a control circuit, an oscillation circuit, a dynamic header circuit, an oscillation detection circuit and a latch circuit is introduced. The control circuit sweeps a configuration of a bias control signal among a plurality of configurations. The dynamic header circuit generates a bias voltage based on the configuration of the bias control signal. The oscillation circuit generates an oscillation signal based on the bias voltage. The oscillation detection circuit detects an onset of the oscillation signal, and outputs a lock signal. The latch circuit latches the oscillation signal according to a trigger signal to output a random number, wherein the trigger signal is asserted after the lock signal is outputted, and the configuration of the bias control signal is locked after the lock signal is outputted. A method for generating a random number and an operation method of a random number generator are also introduced.
    Type: Grant
    Filed: July 8, 2021
    Date of Patent: May 31, 2022
    Assignee: Taiwan Semiconductor Manufacturing Company, Ltd.
    Inventors: Win-San Khwa, Jui-Jen Wu, Jen-Chieh Liu, Elia Ambrosi, Xinyu Bao, Meng-Fan Chang
  • Patent number: 11335401
    Abstract: A memory unit with multiple word lines for a plurality of non-volatile computing-in-memory applications is configured to compute a plurality of input signals and a plurality of weights. The memory unit includes a non-volatile memory cell array, a replica non-volatile memory cell array and a multi-row current calibration circuit. The non-volatile memory cell array is configured to generate a bit-line current. The replica non-volatile memory cell array includes a plurality of replica non-volatile memory cells and is configured to generate a calibration current. Each of the replica non-volatile memory cells is in the high resistance state. The multi-row current calibration circuit is electrically connected to the non-volatile memory cell array and the replica non-volatile memory cell array. The multi-row current calibration circuit is configured to subtract the calibration current from a dataline current to generate a calibrated dataline current. The dataline current is equal to the bit-line current.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: May 17, 2022
    Assignee: NATIONAL TSING HUA UNIVERSITY
    Inventors: Yen-Hsiang Huang, Sheng-Po Huang, Cheng-Xin Xue, Meng-Fan Chang
  • Publication number: 20220129153
    Abstract: A memory unit is controlled by a first word line and a second word line. The memory unit includes a memory cell and a multi-bit input local computing cell. The memory cell stores a weight. The memory cell is controlled by the first word line and includes a local bit line transmitting the weight. The multi-bit input local computing cell is connected to the memory cell and receives the weight via the local bit line. The multi-bit input local computing cell includes a plurality of input lines and a plurality of output lines. Each of the input lines transmits a multi-bit input value, and the multi-bit input local computing cell is controlled by the second word line to generate a multi-bit output value on each of the output lines according to the multi-bit input value multiplied by the weight.
    Type: Application
    Filed: October 27, 2020
    Publication date: April 28, 2022
    Inventors: Meng-Fan CHANG, Pei-Jung LU
  • Publication number: 20220044714
    Abstract: A memory unit includes at least one memory cell and a computational cell. The at least one memory cell stores a weight. The at least one memory cell is controlled by a first word line and includes a local bit line transmitting the weight. The computational cell is connected to the at least one memory cell and receives the weight via the local bit line. Each of an input bit line and an input bit line bar transmits a multi-bit input value. The computational cell is controlled by a second word line and an enable signal to generate a multi-bit output value on each of an output bit line and an output bit line bar according to the multi-bit input value multiplied by the weight. The computational cell is controlled by a first switching signal and a second switching signal for charge sharing.
    Type: Application
    Filed: August 4, 2020
    Publication date: February 10, 2022
    Inventors: Meng-Fan CHANG, Yen-Chi CHOU, Jian-Wei SU
  • Publication number: 20210390415
    Abstract: A dynamic gradient calibration method for a computing-in-memory neural network is performed to update a plurality of weights in a computing-in-memory circuit according to a plurality of inputs corresponding to a correct answer. A forward operating step includes performing a bit wise multiply-accumulate operation on a plurality of divided inputs and a plurality of divided weights to generate a plurality of multiply-accumulate values, and performing a clamping function on the multiply-accumulate values to generate a plurality of clamped multiply-accumulate values according to a predetermined upper bound value, and comparing the clamped multiply-accumulate values with the correct answer to generate a plurality of loss values. A backward operating step includes performing a partial differential operation on the loss values relative to the weights to generate a weight-based gradient. The weights are updated according to the weight-based gradient.
    Type: Application
    Filed: June 16, 2020
    Publication date: December 16, 2021
    Inventors: Meng-Fan CHANG, Shao-Hung HUANG, Ta-Wei LIU
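The forward/backward flow described above (MAC, clamping to an upper bound, loss, weight-based gradient) can be sketched as plain Python. This is a simplified single-output model under assumed choices (squared-error loss, zero gradient through the clamp when saturated); the names `forward` and `loss_and_grad` are illustrative, not from the patent:

```python
def forward(inputs, weights, upper_bound):
    """Bit-wise MAC followed by clamping to a predetermined upper bound."""
    mac = sum(i * w for i, w in zip(inputs, weights))
    return min(mac, upper_bound)

def loss_and_grad(inputs, weights, target, upper_bound):
    """Squared-error loss against the correct answer, and its gradient
    w.r.t. each weight. A clamped output passes zero gradient because
    the clamping function is flat past the bound."""
    mac = sum(i * w for i, w in zip(inputs, weights))
    y = min(mac, upper_bound)
    loss = (y - target) ** 2
    grad = [0.0 if mac > upper_bound else 2 * (y - target) * x for x in inputs]
    return loss, grad

# One update step: w <- w - lr * grad
loss, grad = loss_and_grad([1.0, 2.0], [0.5, 0.5], target=2.0, upper_bound=10.0)
```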
  • Patent number: 11195090
    Abstract: A memory unit is controlled by a word line, a reference voltage and a bit-line clamping voltage. A non-volatile memory cell is controlled by the word line and stores a weight. A clamping module is electrically connected to the non-volatile memory cell via a bit line and controlled by the reference voltage and the bit-line clamping voltage. A clamping transistor of the clamping module is controlled by the bit-line clamping voltage to adjust a bit-line current. A cell detector of the clamping module is configured to detect the bit-line current to generate a comparison output according to the reference voltage. A clamping control circuit of the clamping module switches the clamping transistor according to the comparison output and the bit-line clamping voltage. When the clamping transistor is turned on by the clamping control circuit, the bit-line current corresponds to the bit-line clamping voltage multiplied by the weight.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: December 7, 2021
    Assignee: NATIONAL TSING HUA UNIVERSITY
    Inventors: Meng-Fan Chang, Cheng-Xin Xue, Je-Syu Liu, Ting-Wei Chang, Tsung-Yuan Huang, Hui-Yao Kao
  • Publication number: 20210247962
    Abstract: A memory unit with a multiply-accumulate assist scheme for a plurality of multi-bit convolutional neural network based computing-in-memory applications is controlled by a reference voltage, a word line and a multi-bit input voltage. The memory unit includes a non-volatile memory cell, a voltage divider and a voltage keeper. The non-volatile memory cell is controlled by the word line and stores a weight. The voltage divider includes a data line and generates a charge current on the data line according to the reference voltage, and a voltage level of the data line is generated by the non-volatile memory cell and the charge current. The voltage keeper generates an output current on an output node according to the multi-bit input voltage and the voltage level of the data line, and the output current corresponds to the multi-bit input voltage multiplied by the weight.
    Type: Application
    Filed: February 6, 2020
    Publication date: August 12, 2021
    Inventors: Meng-Fan CHANG, Han-Wen HU, Kuang-Tang CHANG
  • Publication number: 20210248478
    Abstract: A quantization method for a plurality of partial sums of a convolutional neural network based on a computing-in-memory hardware includes a probability-based quantizing step and a margin-based quantizing step. The probability-based quantizing step includes a network training step, a quantization-level generating step, a partial-sum quantizing step, a first network retraining step and a first accuracy generating step. The margin-based quantizing step includes a quantization edge changing step, a second network retraining step and a second accuracy generating step. The quantization edge changing step includes changing a quantization edge of at least one of a plurality of quantization levels. The probability-based quantizing step is performed to generate a first accuracy value, and the margin-based quantizing step is performed to generate a second accuracy value. The second accuracy value is greater than the first accuracy value.
    Type: Application
    Filed: February 6, 2020
    Publication date: August 12, 2021
    Inventors: Meng-Fan CHANG, Jing-Hong WANG, Ta-Wei LIU
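The core of the quantizing steps above is mapping each partial sum to a level via a set of quantization edges, where the margin-based step then shifts an edge and re-measures accuracy. A minimal sketch of that mapping, assuming sorted edges (the function name and edge values are illustrative, not from the patent):

```python
def quantize(value, edges):
    """Map a partial sum to a quantization-level index, given edges
    sorted in ascending order: level k means edges[k-1] <= value < edges[k]."""
    level = 0
    for e in edges:
        if value >= e:
            level += 1
    return level

# Probability-based edges might be chosen from the distribution of observed
# partial sums; the margin-based step nudges one edge and retrains.
edges = [2, 5, 9]
levels = [quantize(v, edges) for v in [0, 3, 6, 20]]  # [0, 1, 2, 3]
```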
  • Publication number: 20210216846
    Abstract: A transpose memory unit for a plurality of multi-bit convolutional neural network based computing-in-memory applications includes a memory cell and a transpose cell. The memory cell stores a weight. The transpose cell is connected to the memory cell and receives the weight from the memory cell. The transpose cell includes an input bit line, at least one first input word line, a first output bit line, at least one second input word line and a second output bit line. One of the at least one first input word line and the at least one second input word line transmits at least one multi-bit input value, and the transpose cell is controlled by the second word line to generate a multiply-accumulate output value on one of the first output bit line and the second output bit line according to the at least one multi-bit input value multiplied by the weight.
    Type: Application
    Filed: January 14, 2020
    Publication date: July 15, 2021
    Inventors: Meng-Fan CHANG, Jian-Wei SU, Yen-Chi CHOU, Ru-Hui LIU
  • Patent number: 11057224
    Abstract: A method for performing a physical unclonable function generated by a non-volatile memory write delay difference includes a resetting step, a writing step, a detecting step, a terminating step and a write-back operating step. The resetting step includes resetting two non-volatile memory cells controlled by a bit line and a bit line bar, respectively. The writing step includes performing a write operation on each of the two non-volatile memory cells. The detecting step includes detecting a voltage drop of each of the bit line and the bit line bar, and comparing the voltage drop and a predetermined voltage difference value to generate a comparison flag. The terminating step includes terminating the write operation on one of the two non-volatile memory cells according to the comparison flag. The write-back operating step includes performing a write-back operation on another of the two non-volatile memory cells.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: July 6, 2021
    Assignee: NATIONAL TSING HUA UNIVERSITY
    Inventor: Meng-Fan Chang
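The PUF response in this method comes from which of the two racing write operations completes first, i.e., whose bit-line voltage drop crosses the predetermined difference sooner. A toy behavioral model, assuming Gaussian process variation as a stand-in for the fixed physical mismatch (all names and parameters are invented for illustration):

```python
import random

def puf_bit(delay_a, delay_b):
    """Return the PUF bit decided by which cell's write completes first
    (the faster cell 'wins' the race; the slower write is terminated)."""
    return 0 if delay_a < delay_b else 1

# Per-device write delays are random across devices but stable on one
# device, so each cell pair yields a device-unique, repeatable bit.
random.seed(42)  # stand-in for fixed manufacturing variation
delays = (random.gauss(10.0, 1.0), random.gauss(10.0, 1.0))
bit = puf_bit(*delays)
```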
  • Publication number: 20210203513
    Abstract: A method for performing a physical unclonable function generated by a non-volatile memory write delay difference includes a resetting step, a writing step, a detecting step, a terminating step and a write-back operating step. The resetting step includes resetting two non-volatile memory cells controlled by a bit line and a bit line bar, respectively. The writing step includes performing a write operation on each of the two non-volatile memory cells. The detecting step includes detecting a voltage drop of each of the bit line and the bit line bar, and comparing the voltage drop and a predetermined voltage difference value to generate a comparison flag. The terminating step includes terminating the write operation on one of the two non-volatile memory cells according to the comparison flag. The write-back operating step includes performing a write-back operation on another of the two non-volatile memory cells.
    Type: Application
    Filed: December 30, 2019
    Publication date: July 1, 2021
    Inventor: Meng-Fan CHANG
  • Patent number: 11048650
    Abstract: A method for integrating a processing-in-sensor unit and an in-memory computing unit includes the following steps. A providing step is performed to transmit the first command signal and the initial data to the in-memory computing unit. A converting step is performed to drive the first command signal and the initial data to convert to a second command signal and a plurality of input data through a synchronizing module. A fetching step is performed to drive a frame difference module to receive the input data to fetch a plurality of difference data. A slicing step is performed to drive a bit-slicing module to receive the difference data and slice each of the difference data into a plurality of bit slices. A controlling step is performed to encode the difference address into a control signal, and the in-memory computing unit accesses each of the bit slices according to the control signal.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: June 29, 2021
    Assignee: NATIONAL TSING HUA UNIVERSITY
    Inventors: Kea-Tiong Tang, Meng-Fan Chang, Chih-Cheng Hsieh, Syuan-Hao Sie
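The fetching and slicing steps above can be modeled in a few lines: compute per-pixel differences between consecutive frames, then slice each difference value into single-bit planes for the in-memory computing unit to access. A hedged sketch with invented names (`frame_difference`, `bit_slices`) and an assumed unsigned representation:

```python
def frame_difference(prev, curr):
    """Per-pixel difference between two consecutive frames."""
    return [c - p for p, c in zip(prev, curr)]

def bit_slices(value, bits=8):
    """Slice an unsigned value into single-bit planes, LSB first."""
    return [(value >> b) & 1 for b in range(bits)]

diff = frame_difference([10, 20], [13, 20])   # [3, 0]
planes = [bit_slices(d, 4) for d in diff]
```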
  • Patent number: 11049550
    Abstract: A multi-bit current sense amplifier with pipeline current sampling of a resistive memory is configured to sense a plurality of bit line currents of a plurality of bit lines in a pipeline operation. A core sense circuit is connected to one part of the bit lines and generates a reference parallel resistance current and a reference anti-parallel resistance current. A plurality of bit line precharge branch circuits are connected to the core sense circuit and another part of the bit lines. The bit line currents of the bit lines, the reference parallel resistance current and the reference anti-parallel resistance current are sensed by the core sense circuit and the bit line precharge branch circuits in the pipeline operation so as to sequentially generate a plurality of voltage levels on the core sense circuit in a clock cycle.
    Type: Grant
    Filed: June 18, 2020
    Date of Patent: June 29, 2021
    Assignee: NATIONAL TSING HUA UNIVERSITY
    Inventors: Tung-Cheng Chang, Chun-Ying Lee, Meng-Fan Chang
  • Publication number: 20210125663
    Abstract: A memory unit is controlled by a first word line and a second word line. The memory unit includes a memory cell and a transpose cell. The memory cell stores a weight. The memory cell is controlled by the first word line and includes a local bit line transmitting the weight. The transpose cell is connected to the memory cell and receives the weight via the local bit line. The transpose cell includes an input bit line, an input bit line bar, an output bit line and an output bit line bar. Each of the input bit line and the input bit line bar transmits a multi-bit input value, and the transpose cell is controlled by the second word line to generate a multi-bit output value on each of the output bit line and the output bit line bar according to the multi-bit input value and the weight.
    Type: Application
    Filed: October 28, 2019
    Publication date: April 29, 2021
    Inventors: Meng-Fan CHANG, Yung-Ning TU, Xin SI, Wei-Hsing HUANG
  • Patent number: 10770142
    Abstract: The present disclosure provides a control circuit of a memory array. The control circuit includes a first switch and a set termination circuit. The first switch is connected between a first voltage source and a data line of a resistive memory cell of the memory array. The set termination circuit has a first terminal connected to a control terminal of the first switch and a second terminal connected to the data line of the resistive memory cell of the memory array. When a data line voltage of the data line decreases to be lower than a first voltage in a first duration of the resistive memory cell performing a set operation, the set termination circuit turns off the first switch to terminate the set operation by stopping the supply of the first voltage of the first voltage source to the data line.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: September 8, 2020
    Assignee: NATIONAL TSING HUA UNIVERSITY
    Inventors: Meng-Fan Chang, Wen-Zhang Lin, Li-Ya Lai
  • Patent number: 10748612
    Abstract: A sensing circuit with adaptive local reference generation of a resistive memory is configured to adaptively sense a first bit line current of a first bit line and a second bit line current of a second bit line via one sense amplifier. The sense amplifier has a first output node and a second output node. The adaptive local reference generator is electrically connected to the sense amplifier and generates a reference current equal to a sum of the second bit line current and a local reference current. A first bit line current flows through the first output node during a first bit line time interval. A second bit line current flows through the first output node during a second bit line time interval. The first bit line time interval is different from the second bit line time interval.
    Type: Grant
    Filed: July 8, 2019
    Date of Patent: August 18, 2020
    Assignee: NATIONAL TSING HUA UNIVERSITY
    Inventors: Wei-Yu Lin, Meng-Fan Chang
  • Patent number: 10734039
    Abstract: A voltage-enhanced-feedback sense amplifier of a resistive memory is configured to sense a first bit line and a second bit line. The voltage-enhanced-feedback sense amplifier includes a voltage sense amplifier and a voltage-enhanced-feedback pre-amplifier. The voltage-enhanced-feedback pre-amplifier is electrically connected to the voltage sense amplifier. A first bit-line amplifying module receives a voltage level of the second input node to suppress a voltage drop of the first bit line and amplifies a voltage level of the first input node according to a voltage level of the first bit line. A second bit-line amplifying module receives the voltage level of the first input node to suppress a voltage drop of the second bit line and amplifies the voltage level of the second input node according to a voltage level of the second bit line. A margin enhanced voltage difference is greater than a read voltage difference.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: August 4, 2020
    Assignee: NATIONAL TSING HUA UNIVERSITY
    Inventors: Meng-Fan Chang, Huan-Ting Lin