Patents by Inventor Xunzhao YIN

Xunzhao YIN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240069780
    Abstract: Disclosed are an in-memory computing architecture for nearest neighbor search under the cosine distance and an operating method thereof. The in-memory computing architecture comprises two FeFET-based storage arrays (a first storage array and a second storage array), translinear circuits and a winner-take-all (WTA) circuit; each storage cell comprises a FeFET and a resistor which are electrically connected. An input vector is applied to the first storage array, which outputs the inner product X of the input vector with each stored vector; the second storage array outputs the sum of squares Y of the elements of each stored vector. The outputs of the two storage arrays are fed into the translinear circuits through current mirrors, and the translinear circuits output X²/Y to the WTA circuit.
    Type: Application
    Filed: December 13, 2022
    Publication date: February 29, 2024
    Applicant: ZHEJIANG UNIVERSITY
    Inventors: Xunzhao YIN, Che-Kai Liu, Haobang Chen, Cheng ZHUO
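The abstract above ranks stored vectors by the figure of merit X²/Y. Since the query norm is shared by all candidates, maximizing X²/Y = ⟨q,s⟩²/‖s‖² is equivalent to maximizing the squared cosine similarity. A minimal behavioral sketch in plain Python (function name hypothetical; assumes non-negative inner products, as squaring discards sign):

```python
def nearest_by_cosine_ratio(query, stored):
    """Return the index of the stored vector maximizing X^2/Y,
    where X = <query, s> (first array) and Y = sum of s_i^2 (second array)."""
    best_i, best_score = -1, float("-inf")
    for i, s in enumerate(stored):
        x = sum(q * v for q, v in zip(query, s))   # first array: inner product X
        y = sum(v * v for v in s)                  # second array: sum of squares Y
        score = x * x / y                          # translinear output X^2/Y
        if score > best_score:
            best_i, best_score = i, score
    return best_i
```

In the architecture itself this maximum is selected in the analog domain by the WTA circuit; the sketch only models the selection criterion.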
  • Publication number: 20240005134
    Abstract: The invention discloses a neural network retraining and gradient sparsification method based on aging sensing of memristors. Because the accuracy of hardware online inference decreases as the crossbar array ages, the extreme values of the programmable weights under the current aging condition are calculated from the known aging information of the memristors, and the neural network model is retrained accordingly to restore the accuracy of hardware online inference. During retraining, network weights exceeding the extreme values of the programmable weights are automatically truncated. To extend the working life of the memristors, the sparsity of the derivatives of the neural network is exploited: derivatives with small absolute values are discarded during hardware adjustment, so that no voltage is applied to the memristors corresponding to small derivatives, which slows their aging and prolongs their service life.
    Type: Application
    Filed: April 29, 2022
    Publication date: January 4, 2024
    Applicant: ZHEJIANG UNIVERSITY
    Inventors: Xunzhao YIN, Wenwen YE, Cheng ZHUO
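The two mechanisms in the abstract, truncating weights to the aging-limited programmable range and skipping updates for small-magnitude derivatives, can be sketched as a single hypothetical retraining step (all names and the threshold/learning-rate values are illustrative, not from the patent):

```python
def aging_aware_update(weights, grads, w_min, w_max, lr=0.1, grad_thresh=0.01):
    """One sketched retraining step: derivatives below grad_thresh are
    discarded (the corresponding memristor receives no programming voltage);
    updated weights are truncated to the programmable range [w_min, w_max]."""
    new_w = []
    for w, g in zip(weights, grads):
        if abs(g) < grad_thresh:
            new_w.append(w)  # small derivative: leave this cell untouched
        else:
            # gradient step, then truncate to the aging-limited extremes
            new_w.append(min(max(w - lr * g, w_min), w_max))
    return new_w
```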
  • Publication number: 20230377650
    Abstract: Disclosed in the present invention are an ultra-compact CAM array based on a single MTJ and an operating method thereof. The CAM array comprises an M×N CAM core for storing contents; additional reference rows and reference columns storing "0" and "1"; a row decoder; a column decoder; transmission gates (ENs); write drivers (WDs); search current sources (Isearch); and two-stage sense amplifiers. The present invention constructs the CAM array from 1T-1MTJ cells, combining the advantages of the MTJ and CMOS. While maintaining search energy efficiency, the unique structure of the MTJ is exploited to achieve lower area overhead and lower search delay than a traditional CMOS-based CAM, together with non-volatility.
    Type: Application
    Filed: November 30, 2022
    Publication date: November 23, 2023
    Applicant: ZHEJIANG UNIVERSITY
    Inventors: Xunzhao YIN, Zeyu Yang, Cheng ZHUO
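Functionally, a CAM search compares a key against every stored word in parallel and flags the rows with no mismatched bits, which is what the sense amplifiers detect against the reference rows and columns. A purely behavioral sketch (plain Python; this models only the search semantics, not the 1T-1MTJ circuit):

```python
def cam_search(table, key):
    """Behavioral model of a binary CAM search: return the indices of all
    stored words (bit strings) that match the search key in every position."""
    def mismatches(word):
        return sum(1 for a, b in zip(word, key) if a != b)
    return [i for i, word in enumerate(table) if mismatches(word) == 0]
```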
  • Publication number: 20230334379
    Abstract: The present invention discloses an energy-efficient capacitance extraction method based on machine learning, which improves parameter extraction efficiency by using a machine learning model to extract parasitic capacitance. The interconnect structure is represented with a grid-based data representation, and an adaptive extraction window reduces the workload of parameter extraction and improves robustness across semiconductor technologies. A machine learning model of capacitance extraction is established for a two-dimensional interconnect structure; the grid parameters of a target interconnect structure are extracted and input into the model, thereby obtaining the parasitic capacitance parameters. Compared with existing capacitance extraction technologies, the capacitance extractor achieves excellent accuracy, speed, and time and space consumption.
    Type: Application
    Filed: March 23, 2023
    Publication date: October 19, 2023
    Applicant: ZHEJIANG UNIVERSITY
    Inventors: Cheng ZHUO, Yuan Xu, Yu Qian, Chenyi Wen, Xunzhao YIN
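One plausible reading of the grid-based representation is to rasterize the conductors inside the extraction window into per-cell coverage fractions, which then serve as model inputs. A sketch under that assumption (function name, window/rectangle encoding, and grid size are all illustrative; the patent does not specify this exact feature layout):

```python
def grid_features(conductors, window, n=8):
    """Encode a 2-D interconnect window as an n x n grid of coverage
    fractions: each value is the fraction of that grid cell's area
    overlapped by conductor rectangles (x0, y0, x1, y1)."""
    wx0, wy0, wx1, wy1 = window
    dx, dy = (wx1 - wx0) / n, (wy1 - wy0) / n
    feats = []
    for j in range(n):          # rows of the grid
        for i in range(n):      # columns of the grid
            cx0, cy0 = wx0 + i * dx, wy0 + j * dy
            cx1, cy1 = cx0 + dx, cy0 + dy
            covered = 0.0
            for rx0, ry0, rx1, ry1 in conductors:
                ox = max(0.0, min(cx1, rx1) - max(cx0, rx0))  # x-overlap
                oy = max(0.0, min(cy1, ry1) - max(cy0, ry0))  # y-overlap
                covered += ox * oy
            feats.append(min(covered / (dx * dy), 1.0))
    return feats
```

The resulting flat feature vector would be fed to the trained regression model to predict the parasitic capacitance values.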
  • Publication number: 20230274781
    Abstract: Disclosed in the present invention are a highly energy-efficient CAM based on a single FeFET and an operating method thereof, which relate to the design of an FeFET-based memory suitable for low power consumption and high performance. A new CAM cell design based on a single FeFET is achieved by fully exploiting the storage characteristics of the FeFET, saving transistors, reducing search energy consumption, and providing non-volatile data storage. The present invention utilizes a 2T-1FeFET structure, combining the advantages of the FeFET and CMOS. Without sacrificing performance, only one FeFET is used, achieving lower area overhead and lower energy consumption than a traditional CMOS-based CAM, together with non-volatility.
    Type: Application
    Filed: December 15, 2022
    Publication date: August 31, 2023
    Applicant: ZHEJIANG UNIVERSITY
    Inventors: Xunzhao YIN, Jiahao Cai, Cheng ZHUO
  • Publication number: 20220327352
    Abstract: Systems, apparatuses, and methods are provided herein for providing a natural language explanation of a black-box-algorithm-generated outcome. The system is configured to determine a regression coefficient for each of the plurality of attributes based on regression analysis; determine a decision tree based on the input data and the output data, and a decision path of a select data item in the decision tree; generate a natural language explanation of a categorization of the select data item based on the relevant attributes and the regression coefficients associated with each of the relevant attributes, wherein the natural language explanation identifies at least one relevant attribute and an effect of that attribute on the categorization; and transmit to a user interface device, for display, the categorization of the select data item along with the natural language explanation of that categorization.
    Type: Application
    Filed: April 9, 2021
    Publication date: October 13, 2022
    Inventors: Anindya S. Dey, Shangwen Huang, Yifan Wang, Xunzhao Yin
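The core explanation step, combining per-attribute regression coefficients with an item's attribute values to phrase the most influential attributes in natural language, can be sketched as follows (function name, ranking by |coefficient × value|, and the sentence template are assumptions for illustration; the patented system also incorporates the decision-tree path, omitted here):

```python
def explain(attributes, values, coefficients, outcome, top_k=2):
    """Rank attributes by the magnitude of their contribution
    (coefficient * value) and phrase the top contributors."""
    contrib = sorted(
        ((name, c * v) for name, v, c in zip(attributes, values, coefficients)),
        key=lambda t: abs(t[1]), reverse=True)
    parts = []
    for name, effect in contrib[:top_k]:
        direction = "raised" if effect > 0 else "lowered"
        parts.append(f"{name} {direction} the score for '{outcome}'")
    return (f"The item was categorized as '{outcome}' mainly because "
            + " and ".join(parts) + ".")
```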
  • Patent number: 11449754
    Abstract: The present invention discloses a neural network training method for memristor memory that accounts for memristor errors, mainly addressing the decrease in inference accuracy of a neural network deployed on memristor memory caused by process errors and dynamic errors. The method comprises the following steps: modeling the conductance value of a memristor under the influence of process and dynamic errors, and converting the model to obtain the distribution of the corresponding neural network weights; constructing a prior distribution of the weights from the modeled weight distribution, and performing Bayesian neural network training based on variational inference to obtain a variational posterior distribution of the weights; and converting the mean of the variational posterior of the weights into target conductance values for the memristor memory.
    Type: Grant
    Filed: February 16, 2022
    Date of Patent: September 20, 2022
    Assignee: ZHEJIANG UNIVERSITY
    Inventors: Cheng Zhuo, Xunzhao Yin, Qingrong Huang, Di Gao
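The final step, mapping posterior weight means to target conductances, is commonly done with a differential pair of devices; the patent does not specify the mapping, so the sketch below is an assumption throughout (hypothetical function, differential g_pos/g_neg encoding, and range clipping):

```python
def weights_to_conductance(weight_means, g_min, g_max):
    """Hypothetical mapping of variational-posterior weight means onto
    differential memristor conductance pairs (g_pos, g_neg), with each
    device clipped to its programmable range [g_min, g_max]. Assumes the
    weights were scaled so that |w| <= g_max - g_min."""
    pairs = []
    for w in weight_means:
        if w >= 0:
            g_pos, g_neg = g_min + w, g_min   # positive weight on g_pos
        else:
            g_pos, g_neg = g_min, g_min - w   # negative weight on g_neg
        pairs.append((min(g_pos, g_max), min(g_neg, g_max)))
    return pairs
```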
  • Publication number: 20220207374
    Abstract: Disclosed in the present invention is a mixed-granularity joint sparsity method for neural networks. The joint sparsity method combines independent vector-wise fine-grained sparsity and block-wise coarse-grained sparsity; the final pruning mask is obtained by a bitwise logical AND of the pruning masks independently generated by the two sparsity methods, yielding the sparsified weight matrix of the neural network. The joint sparsity of the present invention always attains an inference speed between that of the block sparsity mode and the balanced sparsity mode, regardless of the vector row size of the vector-wise fine-grained sparsity and the vector block size of the block-wise coarse-grained sparsity. Pruning the convolutional and fully-connected layers of a neural network in this way offers variable sparsity granularity, acceleration of inference on general-purpose hardware, and high model inference accuracy.
    Type: Application
    Filed: November 2, 2021
    Publication date: June 30, 2022
    Applicant: ZHEJIANG UNIVERSITY
    Inventors: Cheng ZHUO, Chuliang GUO, Xunzhao YIN
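The mask construction described in the abstract can be sketched directly: generate a fine-grained mask and a coarse-grained mask independently, then AND them elementwise. The selection criteria below (top-k by magnitude per row, top blocks by L1 norm) are common choices assumed for illustration; the patent does not fix them:

```python
def joint_sparse_mask(matrix, keep_per_row=2, block_size=2, keep_blocks=1):
    """Sketch of mixed-granularity joint sparsity: the final mask is the
    bitwise AND of a vector-wise fine-grained mask and a block-wise
    coarse-grained mask, each generated independently per row."""
    final = []
    for row in matrix:
        # fine-grained vector-wise mask: keep the keep_per_row largest |w|
        top = sorted(range(len(row)), key=lambda i: abs(row[i]),
                     reverse=True)[:keep_per_row]
        fine = [1 if i in top else 0 for i in range(len(row))]
        # coarse-grained block-wise mask: keep the keep_blocks blocks
        # with the largest L1 norm
        blocks = [row[b:b + block_size] for b in range(0, len(row), block_size)]
        scores = [sum(abs(v) for v in blk) for blk in blocks]
        top_b = sorted(range(len(blocks)), key=lambda b: scores[b],
                       reverse=True)[:keep_blocks]
        coarse = [1 if (i // block_size) in top_b else 0 for i in range(len(row))]
        # final pruning mask: bitwise logical AND of the two masks
        final.append([f & c for f, c in zip(fine, coarse)])
    return final
```

Multiplying the weight matrix elementwise by the returned mask yields the sparsified weights described in the abstract.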