Patents by Inventor Shaodi WANG
Shaodi WANG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11977969
Abstract: A data loading circuit and method are provided. The circuit is configured to load data for a feature map calculated by a neural network into a calculation circuit, wherein the size of the convolution kernel of the neural network is K*K data, and a window corresponding to the convolution kernel slides with a step size of S in the feature map, where K and S are positive integers and S<K, the circuit comprising: two data loaders comprising a first data loader and a second data loader; and a controller configured to: control the first data loader to be in a data outputting mode and control the second data loader to be in a data reading mode, when the window slides within K consecutive rows of the feature map.
Type: Grant
Filed: September 23, 2020
Date of Patent: May 7, 2024
Assignee: HANGZHOU ZHICUN INTELLIGENT TECHNOLOGY CO., LTD.
Inventors: Xuguang Sun, Xiaodi Xing, Shaodi Wang
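The two-loader scheme in this abstract resembles classic double (ping-pong) buffering: while one loader outputs the current K rows to the calculation circuit, the other reads in the rows for the next window position. A minimal Python sketch of that alternation, assuming hypothetical loader objects and a row-granular feature map (none of these names come from the patent):

```python
# Ping-pong (double-buffered) loading sketch: while one loader outputs the
# current K rows of the feature map, the other reads the next window's rows.
# The FeatureMapLoader class and its methods are illustrative assumptions.

class FeatureMapLoader:
    def __init__(self, name):
        self.name = name
        self.rows = []

    def read(self, rows):          # "data reading mode"
        self.rows = list(rows)

    def output(self):              # "data outputting mode"
        return self.rows

def stream_feature_map(feature_map, K, S):
    """Yield successive K-row windows, alternating two loaders."""
    assert 0 < S < K
    loaders = [FeatureMapLoader("A"), FeatureMapLoader("B")]
    loaders[0].read(feature_map[0:K])          # pre-fill the first loader
    starts = range(0, len(feature_map) - K + 1, S)
    for i, start in enumerate(starts):
        out, pre = loaders[i % 2], loaders[(i + 1) % 2]
        nxt = start + S
        if nxt + K <= len(feature_map):
            pre.read(feature_map[nxt:nxt + K])  # prefetch while outputting
        yield out.output()

fmap = [[r] * 4 for r in range(6)]             # 6-row toy feature map
windows = list(stream_feature_map(fmap, K=3, S=1))
print(len(windows))                            # 4 windows: rows 0-2 ... 3-5
```

In hardware the "read" and "output" phases of the two loaders overlap in time; the sequential sketch only shows the role-swapping control the controller performs.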
-
Patent number: 11379673
Abstract: An analog vector-matrix multiplication circuit is achieved by using a programmable storage device array. In a programmable semiconductor device array, gates of all of programmable semiconductor devices of each row are all connected to the same analog voltage input end. M rows of programmable semiconductor devices are correspondingly connected to M analog voltage input ends. Drains (or sources) of all of programmable semiconductor devices of each column are all connected to the same bias voltage input end. N columns of programmable semiconductor devices are correspondingly connected to N bias voltage input ends. Sources (or drains) of all of programmable semiconductor devices of each column are all connected to the same analog current output end. The N columns of programmable semiconductor devices are correspondingly connected to N analog current output ends.
Type: Grant
Filed: February 1, 2021
Date of Patent: July 5, 2022
Assignee: BEIJING ZHICUN WITIN TECHNOLOGY CORPORATION LIMITED
Inventor: Shaodi Wang
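Electrically, a crossbar wired this way computes a vector-matrix product: each column's output current is the sum over rows of (device conductance × row voltage). A small idealized numeric model (plain Python; device values are illustrative, not from the patent):

```python
# Idealized crossbar model: M row voltages V drive the array, device (m, n)
# has conductance G[m][n], and column n sums its device currents:
#   I[n] = sum_m G[m][n] * V[m]
# Conductance/voltage values are illustrative assumptions.

def crossbar_output(G, V):
    M, N = len(G), len(G[0])
    assert len(V) == M
    return [sum(G[m][n] * V[m] for m in range(M)) for n in range(N)]

G = [[1.0, 2.0],
     [2.0, 4.0],
     [0.5, 0.0]]       # M = 3 rows, N = 2 columns (conductances)
V = [2.0, 1.0, 4.0]    # analog row voltages
print(crossbar_output(G, V))   # [6.0, 8.0]
```

All N column currents are produced in parallel by Kirchhoff summation, which is the source of the circuit's efficiency relative to a digital multiply-accumulate loop.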
-
Patent number: 11335400
Abstract: In a computing-in-memory chip and a memory cell array structure, a memory cell array therein includes a plurality of memory cell sub-arrays arranged in an array. Each memory cell sub-array comprises a plurality of switch units and a plurality of memory cells arranged in an array; and first terminals of all memory cells in each column are connected to a source line, second terminals of all the memory cells are connected to a bit line, third terminals of all memory cells in each row are connected to a word line through a switch unit, a plurality of rows of memory cells are correspondingly connected to a plurality of switch units, control terminals of the plurality of switch units are connected to a local word line of the memory cell sub-array, and whether to activate the memory cell sub-array is controlled by controlling the local word line.
Type: Grant
Filed: January 27, 2021
Date of Patent: May 17, 2022
Assignee: Beijing Zhicun (Witin) Technology Corporation Ltd.
Inventor: Shaodi Wang
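Gating each sub-array's word lines behind a local word line gives a two-level enable: a row is driven only when both its global word line and its sub-array's local word line are asserted, so unselected sub-arrays stay inactive. A toy model of that gating (array sizes and names are assumptions for illustration):

```python
# Two-level word-line gating sketch: a row inside a sub-array is driven only
# when the global word line AND the sub-array's local word line are both
# asserted, so unselected sub-arrays remain inactive. Sizes are illustrative.

ROWS_PER_SUB = 4

def driven_rows(global_wl, local_wl):
    """Indices of rows actually driven, given per-row global word lines
    and per-sub-array local word lines."""
    return [r for r, g in enumerate(global_wl)
            if g and local_wl[r // ROWS_PER_SUB]]

global_wl = [True] * 8       # controller asserts all 8 global word lines
local_wl = [True, False]     # but only sub-array 0 is activated
print(driven_rows(global_wl, local_wl))   # [0, 1, 2, 3]
```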
-
Publication number: 20220140834
Abstract: A multiplexing device for a digital-to-analog conversion circuit and an analog-to-digital conversion circuit in a storage and calculation integrated chip, comprising a digital-to-analog conversion circuit (DAC) module, an analog vector-matrix multiplication operation circuit (AMAC) module, an analog-to-digital conversion circuit (ADC) module, a first many-to-one multiplexer (M1-MUX) module, a second M1-MUX module, a first one-to-many multiplexer (1M-MUX) module, a second 1M-MUX module, and a switching transistor module. At the AMAC input end, each DAC corresponds to a plurality of input ends and is shared with the first 1M-MUX module in a time multiplexing mode by means of the first M1-MUX module; at the AMAC output end, each ADC corresponds to a plurality of output ends and is shared with the second 1M-MUX module in a time multiplexing mode by means of the second M1-MUX module. The number of DACs and ADCs is thereby reduced, reducing the chip area.
Type: Application
Filed: April 3, 2019
Publication date: May 5, 2022
Applicant: BEIJING ZHICUN (WITIN) TECHNOLOGY CORPORATION LIMITED
Inventor: Shaodi WANG
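The underlying idea is classic time-division multiplexing: one shared converter services several lines in successive time slots (the M1-MUX selecting the source, the 1M-MUX fanning the result back out), trading conversion throughput for area. A toy round-robin schedule (the 4-bit-to-1.8 V transfer function is an illustrative assumption):

```python
# Time-multiplexing sketch: one shared converter services P lines in
# round-robin time slots instead of P converters working in parallel.
# The DAC transfer function below is an illustrative assumption.

def timeshare(values, convert):
    """Run one shared converter over all lines, slot by slot."""
    outputs = []
    for v in values:               # M1-MUX: select one input per time slot
        outputs.append(convert(v))  # the single shared DAC/ADC does the work
    return outputs                  # 1M-MUX: route results back to P lines

digital_in = [0, 7, 15]                                     # three input lines
analog_out = timeshare(digital_in,
                       lambda d: round(d / 15 * 1.8, 3))    # 4-bit DAC, 1.8 V
print(analog_out)   # [0.0, 0.84, 1.8]
```

With P lines per converter, the converter count (and its area) drops by roughly a factor of P, at the cost of P conversion slots per cycle.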
-
Publication number: 20220137924
Abstract: A dynamic bias analog vector-matrix multiplication operation circuit and an operation control method therefor. The dynamic bias analog vector-matrix multiplication operation circuit comprises: positive value weight columns (101-10N), constant columns (201-20M) and subtractors (301-30N), wherein the number of the subtractors is equal to the number of the positive value weight columns, the subtractors are correspondingly connected to the positive value weight columns on a one-to-one basis, and the number of the constant columns is less than the number of the positive value weight columns; minuend input ends of the subtractors are correspondingly connected to output ends of the positive value weight columns, subtrahend input ends thereof are connected to output ends of the constant columns, and output ends thereof output operation results; and the subtrahend input ends of a plurality of subtractors are connected to the same constant column.
Type: Application
Filed: April 3, 2019
Publication date: May 5, 2022
Applicant: BEIJING ZHICUN (WITIN) TECHNOLOGY CORPORATION LIMITED
Inventor: Shaodi WANG
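The arithmetic behind this bias scheme: store each signed weight w as the non-negative value w + c, put the constant c in a shared constant column, and let the subtractor recover sum((w + c)·x) − sum(c·x) = sum(w·x). Because the subtrahend is identical for many columns, one constant column can serve many weight columns. A numeric sketch (all values illustrative):

```python
# Signed outputs from positive-only hardware: each weight column stores
# w + c (non-negative), a shared constant column stores c in every cell,
# and the subtractor recovers sum((w + c) * x) - sum(c * x) = sum(w * x).
# Bias c, weights, and inputs are illustrative assumptions.

def column_dot(col, x):
    return sum(w * xi for w, xi in zip(col, x))

c = 2                                               # bias keeping cells >= 0
x = [1.0, 0.5]                                      # input vector
signed_cols = [[-1, 2], [1, -2], [0, 2]]            # desired signed columns
pos_cols = [[w + c for w in col] for col in signed_cols]  # stored values
const_col = [c, c]                                  # one shared constant column

outs = [column_dot(p, x) - column_dot(const_col, x) for p in pos_cols]
print(outs)   # [0.0, 0.0, 1.0] -- matches the signed dot products
```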
-
Patent number: 11216375
Abstract: A data caching circuit and method are provided. The circuit is configured to cache data for a feature map calculated by a neural network, wherein a size of a convolution kernel of the neural network is K*K data, and a window corresponding to the convolution kernel slides at a step of S in the feature map, where K is a positive integer and S is a positive integer, the circuit comprising: a cache comprising K caching units, each caching unit being configured to respectively store a plurality of rows of the feature map, the plurality of rows comprising a corresponding row in every K consecutive rows of the feature map.
Type: Grant
Filed: April 15, 2020
Date of Patent: January 4, 2022
Assignee: Hangzhou Zhicun Intelligent Technology Co., Ltd.
Inventors: Qilin Zheng, Shaodi Wang
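"A corresponding row in every K consecutive rows" is the round-robin mapping: row r goes to caching unit r mod K, so any K consecutive rows land in K distinct units and can be read out in parallel for the K×K window. A sketch of the mapping (sizes are illustrative):

```python
# Row-to-caching-unit mapping sketch: with K caching units, feature-map row r
# is stored in unit r % K, so any K consecutive rows occupy K distinct units
# and can be fetched concurrently. Sizes are illustrative assumptions.

def assign_rows(num_rows, K):
    units = [[] for _ in range(K)]
    for r in range(num_rows):
        units[r % K].append(r)
    return units

K = 3
units = assign_rows(9, K)
print(units)   # [[0, 3, 6], [1, 4, 7], [2, 5, 8]]

# Any window of K consecutive rows touches each caching unit exactly once:
window = [4, 5, 6]
print(sorted(r % K for r in window))   # [0, 1, 2]
```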
-
Publication number: 20210390379
Abstract: A data loading circuit and method are provided. The circuit is configured to load data for a feature map calculated by a neural network into a calculation circuit, wherein the size of the convolution kernel of the neural network is K*K data, and a window corresponding to the convolution kernel slides with a step size of S in the feature map, where K and S are positive integers and S<K, the circuit comprising: two data loaders comprising a first data loader and a second data loader; and a controller configured to: control the first data loader to be in a data outputting mode and control the second data loader to be in a data reading mode, when the window slides within K consecutive rows of the feature map.
Type: Application
Filed: September 23, 2020
Publication date: December 16, 2021
Inventors: Xuguang SUN, Xiaodi XING, Shaodi WANG
-
Publication number: 20210365646
Abstract: An analog vector-matrix multiplication circuit is achieved by using a programmable storage device array. In a programmable semiconductor device array, gates of all of programmable semiconductor devices of each row are all connected to the same analog voltage input end. M rows of programmable semiconductor devices are correspondingly connected to M analog voltage input ends. Drains (or sources) of all of programmable semiconductor devices of each column are all connected to the same bias voltage input end. N columns of programmable semiconductor devices are correspondingly connected to N bias voltage input ends. Sources (or drains) of all of programmable semiconductor devices of each column are all connected to the same analog current output end. The N columns of programmable semiconductor devices are correspondingly connected to N analog current output ends.
Type: Application
Filed: February 1, 2021
Publication date: November 25, 2021
Inventor: Shaodi Wang
-
Publication number: 20210303198
Abstract: Disclosed are a flash memory chip and a calibration method and apparatus therefor. A working array in the flash memory chip can be calibrated by using the adjustable weight levels of flash memory units. Specifically, at least one reference array used for calibrating the working array is provided, and the number of flash memory units in the reference array is greater than or equal to the number N of adjustable weight levels of the flash memory units; initial weight values of N flash memory units of the reference array correspond to the N adjustable weight levels on a one-to-one basis, and spare flash memory units serve as redundant standby units. This realizes off-line update calibration of the weights of the flash memory units in the working array, compensating for the influence of electric leakage on those weights.
Type: Application
Filed: May 28, 2021
Publication date: September 30, 2021
Inventor: Shaodi WANG
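The calibration flow implied here: reference cells were programmed to known levels, so the drift observed on their readback can be used to correct the working array offline. A hedged sketch, assuming a uniform leakage offset and a simple averaged correction (the patent does not specify the drift model):

```python
# Off-line calibration sketch: reference cells hold known target levels;
# the average drift between their targets and current readback is applied
# as a correction to working-array readings. The uniform-offset drift model
# is an assumption for illustration, not the patent's method.

def calibrate(ref_targets, ref_readback, working_readback):
    """Estimate a uniform leakage offset from the reference array and
    remove it from working-array readings."""
    offsets = [r - t for t, r in zip(ref_targets, ref_readback)]
    offset = sum(offsets) / len(offsets)
    return [w - offset for w in working_readback]

ref_targets  = [0, 10, 20, 30]     # N = 4 programmed reference levels
ref_readback = [-1, 9, 19, 29]     # every reference cell leaked down by 1
print(calibrate(ref_targets, ref_readback, [14, 24]))   # [15.0, 25.0]
```

The spare (redundant) reference cells mentioned in the abstract would replace reference cells that fail, keeping all N levels represented.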
-
Publication number: 20210263849
Abstract: A data caching circuit and method are provided. The circuit is configured to cache data for a feature map calculated by a neural network, wherein a size of a convolution kernel of the neural network is K*K data, and a window corresponding to the convolution kernel slides at a step of S in the feature map, where K is a positive integer and S is a positive integer, the circuit comprising: a cache comprising K caching units, each caching unit being configured to respectively store a plurality of rows of the feature map, the plurality of rows comprising a corresponding row in every K consecutive rows of the feature map.
Type: Application
Filed: April 15, 2020
Publication date: August 26, 2021
Inventors: Qilin ZHENG, Shaodi WANG
-
Publication number: 20210256364
Abstract: The present disclosure provides a neural network weight matrix adjusting method, a writing control method and a related apparatus. The method comprises: judging whether a weight distribution of a neural network weight matrix is lower than a first preset threshold; if yes, multiplying all weight values in the neural network weight matrix by a first constant; if no, judging whether the weight distribution is higher than a second preset threshold, wherein the second preset threshold is greater than the first preset threshold; and dividing all weight values in the neural network weight matrix by a second constant if the weight distribution is higher than the second preset threshold; wherein the first constant and the second constant are both greater than 1, thereby improving the operation precision.
Type: Application
Filed: July 6, 2020
Publication date: August 19, 2021
Inventor: Shaodi WANG
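The adjustment keeps the weight distribution inside the range the storage cells represent accurately: scale up (×c1) when the distribution is too low, scale down (÷c2) when too high, and account for the scale factor when reading results back. A hedged sketch; the thresholds and the choice of max-absolute-value as the "distribution" statistic are assumptions, since the abstract does not define them:

```python
# Weight-range adjustment sketch. The maximum absolute weight is used as the
# "weight distribution" statistic, and the thresholds/constants are chosen
# for illustration -- none of these specifics come from the patent text.

def adjust_weights(weights, low=0.5, high=4.0, c1=2.0, c2=2.0):
    """Return (scaled_weights, scale); divide later outputs by `scale`."""
    stat = max(abs(w) for w in weights)
    if stat < low:                          # distribution too low: amplify
        return [w * c1 for w in weights], c1
    if stat > high:                         # distribution too high: attenuate
        return [w / c2 for w in weights], 1.0 / c2
    return list(weights), 1.0

w, s = adjust_weights([0.1, -0.2, 0.05])    # too low: scaled up by c1 = 2
print(w, s)   # [0.2, -0.4, 0.1] 2.0
```

Since a vector-matrix product is linear in the weights, dividing the outputs by the recorded scale recovers the original result while the stored values sit in the cells' accurate range.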
-
Publication number: 20210151106
Abstract: In a computing-in-memory chip and a memory cell array structure, a memory cell array therein includes a plurality of memory cell sub-arrays arranged in an array. Each memory cell sub-array comprises a plurality of switch units and a plurality of memory cells arranged in an array; and first terminals of all memory cells in each column are connected to a source line, second terminals of all the memory cells are connected to a bit line, third terminals of all memory cells in each row are connected to a word line through a switch unit, a plurality of rows of memory cells are correspondingly connected to a plurality of switch units, control terminals of the plurality of switch units are connected to a local word line of the memory cell sub-array, and whether to activate the memory cell sub-array is controlled by controlling the local word line.
Type: Application
Filed: January 27, 2021
Publication date: May 20, 2021
Inventor: Shaodi Wang
-
Patent number: 10832752
Abstract: A random access memory (RAM) includes a bit-line, a source-line, a memory cell connected to the bit-line and the source-line, and a read/write circuit connected to the bit-line and the source-line and including a negative differential resistance (NDR) device.
Type: Grant
Filed: August 1, 2017
Date of Patent: November 10, 2020
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Puneet Gupta, Andrew S. Pan, Shaodi Wang
-
Publication number: 20190198079
Abstract: A random access memory (RAM) includes a bit-line, a source-line, a memory cell connected to the bit-line and the source-line, and a read/write circuit connected to the bit-line and the source-line and including a negative differential resistance (NDR) device.
Type: Application
Filed: August 1, 2017
Publication date: June 27, 2019
Inventors: Puneet GUPTA, Andrew S. PAN, Shaodi WANG