Patents by Inventor Qingtian Zhang

Qingtian Zhang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12217164
    Abstract: A neural network, an information processing method for the neural network, and an information processing system are provided. The neural network includes N neuron layers connected one by one; except for the first neuron layer, each neuron of the remaining layers includes m dendritic units and one hippocampal unit. Each dendritic unit includes a resistance gradual-change device, each hippocampal unit includes a resistance abrupt-change device, and the m dendritic units can be provided with different threshold voltages or currents, respectively. The neurons of the nth neuron layer are connected to the m dendritic units of the neurons of the (n+1)th neuron layer, wherein N is an integer larger than 3, m is an integer larger than 1, and n is an integer larger than 1 and less than N. A minimal software sketch of such a dendritic neuron follows this entry.
    Type: Grant
    Filed: February 24, 2018
    Date of Patent: February 4, 2025
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Xinyi Li, Huaqiang Wu, He Qian, Bin Gao, Sen Song, Qingtian Zhang
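
The sketch below is a minimal software analogue, not the patented hardware, of the neuron described in the abstract above: m dendritic units, each with its own threshold, feed one hippocampal unit that switches abruptly once their combined output crosses a threshold. The class name `DendriticNeuron`, the weight initialization, and all numeric values are illustrative assumptions.

```python
import numpy as np

class DendriticNeuron:
    """Software analogue of one neuron with m dendritic units and one
    hippocampal unit, as described in the abstract (hypothetical model)."""

    def __init__(self, m, fan_in, soma_threshold=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # Each dendritic unit receives all fan_in inputs with its own weights
        # (standing in for the gradually changing resistance values).
        self.weights = rng.normal(0.0, 0.5, size=(m, fan_in))
        # Per-dendrite thresholds: the abstract allows each of the m dendritic
        # units to be given a different threshold voltage or current.
        self.dendrite_thresholds = np.linspace(0.2, 1.0, m)
        # The hippocampal unit behaves like an abrupt-change device: it
        # switches ("fires") only when the summed dendritic output crosses
        # a single threshold.
        self.soma_threshold = soma_threshold

    def forward(self, x):
        drive = self.weights @ x                               # analog accumulation
        dendrite_out = np.maximum(drive - self.dendrite_thresholds, 0.0)
        return 1.0 if dendrite_out.sum() > self.soma_threshold else 0.0


# Tiny usage example: one neuron with m = 4 dendritic units and 8 inputs.
neuron = DendriticNeuron(m=4, fan_in=8)
print(neuron.forward(np.ones(8)))
```
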
  • Publication number: 20240386163
    Abstract: A method for simulating and predicting deep reservoir structural fractures that accounts for thickness change is disclosed. The method first calculates a structural fracture apparent density using a stress-based reservoir structural fracture prediction formula group; it then obtains a structural fracture linear density from a simulation experiment, thickness-unit division, and a reservoir structural fracture prediction optimization formula; finally, it carries out a reliability judgment on the linear density and the apparent density against measured inspection values using parameter inspection and error analysis. The method can effectively reduce the difficulty and cost of energy development. A hedged workflow sketch follows this entry.
    Type: Application
    Filed: July 29, 2024
    Publication date: November 21, 2024
    Applicants: China University of Mining and Technology, Shanxi Huayang Group New Energy Co., Ltd., No. 1 Mine
    Inventors: Zhenghui Qu, Bangxu Rong, Liang Guo, Kebin Wei, Qingtian Zhang, Changxing Li, Jie Luo, Weike Wan, Xingyun Liu, Weijun Hou
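
The Python skeleton below only illustrates the three-stage workflow named in the abstract above. The stress-based formula group and the optimization formula are not given there, so every function body, parameter name, and numeric value is a hypothetical placeholder, not the patented method.

```python
import numpy as np

def apparent_density_from_stress(sigma_1, sigma_3, k=0.01):
    """Stage 1 placeholder: apparent fracture density as a simple increasing
    function of differential stress (illustrative stand-in only)."""
    return k * (sigma_1 - sigma_3)

def linear_density_per_thickness_unit(apparent_density, layer_thickness_m,
                                      thickness_unit_m=1.0, alpha=1.2):
    """Stage 2 placeholder: convert apparent density to a linear density per
    thickness unit, standing in for the optimization formula tied to
    thickness-unit division."""
    n_units = max(int(np.ceil(layer_thickness_m / thickness_unit_m)), 1)
    return alpha * apparent_density / n_units

def reliability_check(predicted, measured, max_rel_error=0.2):
    """Stage 3 placeholder: error analysis of predicted vs. measured densities."""
    rel_error = np.abs(predicted - measured) / np.maximum(np.abs(measured), 1e-9)
    return rel_error, bool(np.all(rel_error <= max_rel_error))

# Example run with made-up numbers (stresses in MPa, thickness in metres).
rho_a = apparent_density_from_stress(sigma_1=90.0, sigma_3=40.0)
rho_l = linear_density_per_thickness_unit(rho_a, layer_thickness_m=5.0)
errors, ok = reliability_check(np.array([rho_l]), np.array([0.11]))
print(rho_a, rho_l, errors, ok)
```
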
  • Publication number: 20240046086
    Abstract: Disclosed are a quantization method and a quantization apparatus for the weights of a neural network, and a storage medium. The neural network is implemented on a crossbar-enabled analog computing-in-memory (CACIM) system, and the quantization method includes: acquiring a distribution characteristic of a weight; and determining, according to that distribution characteristic, an initial quantization parameter for quantizing the weight so as to reduce the quantization error. Rather than using a predefined quantization scheme, the method derives the quantization parameter from the weight distribution to reduce the quantization error, so that the neural network model performs better under the same mapping overhead and incurs a smaller mapping overhead for the same performance. An illustrative sketch of distribution-aware quantization follows this entry.
    Type: Application
    Filed: December 13, 2021
    Publication date: February 8, 2024
    Applicant: TSINGHUA UNIVERSITY
    Inventors: Huaqiang WU, Qingtian ZHANG, Lingjun DAI
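
As a hedged illustration of the idea in the abstract above, the sketch below realises one simple form of distribution-aware quantization: instead of a fixed max-abs scale, the quantization step is chosen to minimise the mean-squared quantization error on the observed weight distribution. The function names, bit width, and search grid are assumptions, not the patented algorithm.

```python
import numpy as np

def uniform_quantize(w, scale, n_bits=4):
    """Symmetric uniform quantization of weights to n_bits levels."""
    q_max = 2 ** (n_bits - 1) - 1
    q = np.clip(np.round(w / scale), -q_max, q_max)
    return q * scale

def distribution_aware_scale(w, n_bits=4, n_candidates=100):
    """Pick the scale that minimises mean-squared quantization error for the
    observed weight distribution (one simple realisation of a
    distribution-derived initial quantization parameter)."""
    q_max = 2 ** (n_bits - 1) - 1
    candidates = np.linspace(np.abs(w).max() / n_candidates,
                             np.abs(w).max(), n_candidates) / q_max
    errors = [np.mean((w - uniform_quantize(w, s, n_bits)) ** 2) for s in candidates]
    return candidates[int(np.argmin(errors))]

# Usage: weights drawn from a bell-shaped distribution typical of trained layers.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=4096)
naive_scale = np.abs(w).max() / (2 ** 3 - 1)        # fixed max-abs scale
tuned_scale = distribution_aware_scale(w, n_bits=4)
for name, s in [("max-abs", naive_scale), ("distribution-aware", tuned_scale)]:
    err = np.mean((w - uniform_quantize(w, s, 4)) ** 2)
    print(f"{name:>19} scale={s:.4f}  mse={err:.3e}")
```
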
  • Publication number: 20220374688
    Abstract: A training method and a training device for a memristor-based neural network are provided. The neural network includes a plurality of neuron layers connected one by one and weight parameters between the plurality of neuron layers. The training method includes: training the weight parameters of the neural network and programming a memristor array based on the trained weight parameters so as to write them into the memristor array; and updating one critical layer or several critical layers of the weight parameters by adjusting the conductance values of at least some memristors of the memristor array. A minimal sketch of this hybrid training flow follows this entry.
    Type: Application
    Filed: March 6, 2020
    Publication date: November 24, 2022
    Applicant: TSINGHUA UNIVERSITY
    Inventors: Huaqiang WU, Peng YAO, Bin GAO, Qingtian ZHANG, He QIAN
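
To make the two-stage idea above concrete, here is a minimal numpy sketch under stated assumptions: a toy regression task, multiplicative write noise as a stand-in for programming errors, and the output layer treated as the single critical layer. The network is trained offline, the weights are "written" with noise, and only the critical layer is then updated in place, mimicking conductance adjustment on the array.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task standing in for the network's workload.
X = rng.normal(size=(256, 16))
Y = np.tanh(X @ rng.normal(size=(16, 4)))

# Two-layer network: W1 (fixed after offline training) and W2 (critical layer).
W1 = rng.normal(0, 0.3, size=(16, 32))
W2 = rng.normal(0, 0.3, size=(32, 4))

def mse(w1, w2):
    return float(np.mean((np.tanh(X @ w1) @ w2 - Y) ** 2))

# 1) Offline ("software") training of the critical output layer by least squares.
H = np.tanh(X @ W1)
W2 = np.linalg.lstsq(H, Y, rcond=None)[0]
print("after offline training:", mse(W1, W2))

# 2) Programming the memristor array: writing weights as conductances is
#    imperfect, modelled here as multiplicative write noise.
W1_chip = W1 * (1 + rng.normal(0, 0.05, W1.shape))
W2_chip = W2 * (1 + rng.normal(0, 0.05, W2.shape))
print("after noisy write:", mse(W1_chip, W2_chip))

# 3) On-chip update of only the critical layer: adjust the conductances that
#    realise W2 with gradient steps, leaving W1 untouched on the array.
H = np.tanh(X @ W1_chip)
for _ in range(200):
    grad = 2 * H.T @ (H @ W2_chip - Y) / len(X)
    W2_chip -= 0.05 * grad
print("after updating the critical layer:", mse(W1_chip, W2_chip))
```
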
  • Patent number: 11468300
    Abstract: A circuit structure, a driving method thereof, and a neural network are disclosed. The circuit structure includes at least one circuit unit; each circuit unit includes a first group of resistive switching devices and a second group of resistive switching devices. The first group includes a resistance gradual-change device, the second group includes a resistance abrupt-change device, and the two groups are connected in series; in a case where no voltage is applied, the resistance value of the first group is larger than the resistance value of the second group. A minimal simulation sketch of this behavior follows this entry.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: October 11, 2022
    Assignee: Tsinghua University
    Inventors: Xinyi Li, Huaqiang Wu, Sen Song, Qingtian Zhang, Bin Gao, He Qian
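
The brief simulation below is an illustrative software model, under assumed device values, of the series behaviour described in the abstract above: the gradual-change device starts with the larger resistance and is lowered step by step by input pulses, until the voltage dropped across the abrupt-change device exceeds its switching threshold and it switches abruptly, i.e. the unit "fires" like a neuron. All constants are assumptions, not values from the patent.

```python
import numpy as np

# Device parameters (illustrative values only).
R_GRAD_MAX, R_GRAD_MIN = 1e6, 1e4      # gradual-change device range (ohms)
R_ABRUPT_OFF, R_ABRUPT_ON = 1e5, 1e3   # abrupt-change device states (ohms)
V_PULSE = 1.0                          # amplitude of each input pulse (V)
V_SWITCH = 0.5                         # switching threshold of abrupt device (V)

r_grad = R_GRAD_MAX        # initially larger than the abrupt device, as stated
r_abrupt = R_ABRUPT_OFF

for pulse in range(1, 40):
    # Series connection: the abrupt device sees a fraction of the pulse voltage.
    v_abrupt = V_PULSE * r_abrupt / (r_grad + r_abrupt)
    if v_abrupt > V_SWITCH:
        r_abrupt = R_ABRUPT_ON         # abrupt switching: the "neuron" fires
        print(f"pulse {pulse}: fire (v_abrupt = {v_abrupt:.2f} V)")
        r_grad, r_abrupt = R_GRAD_MAX, R_ABRUPT_OFF   # reset for the next cycle
    else:
        # Each pulse gradually lowers the gradual-change device's resistance,
        # integrating the input like a membrane potential.
        r_grad = max(r_grad * 0.8, R_GRAD_MIN)
```
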
  • Publication number: 20210174173
    Abstract: A circuit structure, a driving method thereof, and a neural network are disclosed. The circuit structure includes at least one circuit unit; each circuit unit includes a first group of resistive switching devices and a second group of resistive switching devices. The first group includes a resistance gradual-change device, the second group includes a resistance abrupt-change device, and the two groups are connected in series; in a case where no voltage is applied, the resistance value of the first group is larger than the resistance value of the second group. The circuit structure uses a resistance gradual-change device and a resistance abrupt-change device connected in series to form a neuron-like structure, so as to simulate the functions of a human brain neuron.
    Type: Application
    Filed: November 14, 2017
    Publication date: June 10, 2021
    Inventors: Xinyi Li, Huaqiang Wu, Sen Song, Qingtian Zhang, Bin Gao, He Qian
  • Publication number: 20210049448
    Abstract: A neural network, an information processing method for the neural network, and an information processing system are provided. The neural network includes N neuron layers connected one by one; except for the first neuron layer, each neuron of the remaining layers includes m dendritic units and one hippocampal unit. Each dendritic unit includes a resistance gradual-change device, each hippocampal unit includes a resistance abrupt-change device, and the m dendritic units can be provided with different threshold voltages or currents, respectively. The neurons of the nth neuron layer are connected to the m dendritic units of the neurons of the (n+1)th neuron layer, wherein N is an integer larger than 3, m is an integer larger than 1, and n is an integer larger than 1 and less than N.
    Type: Application
    Filed: February 24, 2018
    Publication date: February 18, 2021
    Inventors: Xinyi Li, Huaqiang Wu, He Qian, Bin Gao, Sen Song, Qingtian Zhang