Patents by Inventor Yinhe Han
Yinhe Han has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11551068
Abstract: The present invention provides a processing system for a binary weight convolutional neural network. The system comprises: at least one storage unit for storing data and instructions; at least one control unit for acquiring the instructions stored in the storage unit and sending out a control signal; and at least one calculation unit for acquiring, from the storage unit, the node values of a layer in a convolutional neural network and the corresponding binary weight data, and obtaining the node values of the next layer by performing addition and subtraction operations. With the system of the present invention, the data bit width during convolutional neural network calculation is reduced, the convolution speed is improved, and the storage capacity and operational energy consumption are reduced.
Type: Grant
Filed: February 11, 2018
Date of Patent: January 10, 2023
Assignee: Institute of Computing Technology, Chinese Academy of Sciences
Inventors: Yinhe Han, Haobo Xu, Ying Wang
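The key saving described in the '068 abstract is that binarizing weights to +1/-1 turns every multiply-accumulate in a convolution into a plain addition or subtraction. A minimal software sketch of that idea (the function name and array shapes are illustrative, not taken from the patent):

```python
import numpy as np

def binary_weight_conv(feature, weights):
    """2-D convolution with weights binarized to +1/-1.

    Because every weight is +1 or -1, each multiply-accumulate
    reduces to adding or subtracting an input value.
    """
    signs = np.where(weights >= 0, 1, -1)   # binarized weight kernel
    kh, kw = signs.shape
    oh = feature.shape[0] - kh + 1
    ow = feature.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = feature[i:i + kh, j:j + kw]
            # add where the weight is +1, subtract where it is -1
            out[i, j] = patch[signs == 1].sum() - patch[signs == -1].sum()
    return out
```

A hardware calculation unit exploits the same property: with no multipliers needed, the datapath shrinks to adders/subtractors, which is where the bit-width and energy savings come from.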
-
Patent number: 11531889
Abstract: Disclosed are a weight data storage method and a convolution computation method that may be implemented in a neural network. The weight data storage method comprises searching for the effective weights in a weight convolution kernel matrix and acquiring an index of the effective weights. The effective weights are the non-zero weights, and the index marks the positions of the effective weights in the weight convolution kernel matrix. The method further comprises storing the effective weights and their index. With the weight data storage method and the convolution computation method of the present disclosure, storage space is saved and computation efficiency is improved.
Type: Grant
Filed: February 28, 2018
Date of Patent: December 20, 2022
Assignee: Institute of Computing Technology, Chinese Academy of Sciences
Inventors: Yinhe Han, Feng Min, Haobo Xu, Ying Wang
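The scheme in the '889 abstract stores only the non-zero ("effective") weights together with an index of their kernel positions, and the convolution then iterates over that index alone, skipping zeros entirely. A rough software sketch, with illustrative function names not taken from the patent:

```python
import numpy as np

def store_effective_weights(kernel):
    """Keep only the non-zero ("effective") weights plus an index
    marking their positions in the kernel matrix."""
    idx = np.nonzero(kernel)                 # positions of effective weights
    vals = kernel[idx]                       # the effective weights themselves
    return vals, np.stack(idx, axis=1)       # (values, list of (row, col))

def sparse_conv2d(feature, vals, index, ksize):
    """Convolution that touches only the effective weights."""
    kh, kw = ksize
    oh = feature.shape[0] - kh + 1
    ow = feature.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for w, (r, c) in zip(vals, index):
        # each effective weight contributes one shifted, scaled copy
        out += w * feature[r:r + oh, c:c + ow]
    return out
```

For a kernel that is mostly zeros, both the stored bytes and the number of multiply-accumulates scale with the count of effective weights rather than the full kernel size.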
-
Patent number: 11521048
Abstract: The present invention relates to a weight management method and system for neural network processing. The method includes two stages, an off-chip encryption stage and an on-chip decryption stage: the trained neural network weight data are encrypted in advance, the encrypted weights are input into a neural network processor chip, and a decryption unit inside the chip decrypts the weights in real time to perform the related neural network calculation. The method and system realize the protection of weight data without affecting the normal operation of a neural network processor.
Type: Grant
Filed: March 22, 2018
Date of Patent: December 6, 2022
Assignee: Institute of Computing Technology, Chinese Academy of Sciences
Inventors: Yinhe Han, Haobo Xu, Ying Wang
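The abstract does not name a cipher, so the sketch below substitutes a simple XOR stream cipher keyed via SHA-256 purely to illustrate the two-stage flow: encrypt the serialized weights off-chip, then apply the identical keystream "on-chip" to recover them. The cipher choice and all names are this sketch's assumptions, not the patent's method:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Expand a key into n pseudo-random bytes (illustrative only)."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:n])

def encrypt_weights(weights: bytes, key: bytes) -> bytes:
    """Off-chip stage: XOR the serialized weight data with the keystream."""
    ks = keystream(key, len(weights))
    return bytes(a ^ b for a, b in zip(weights, ks))

# XOR is its own inverse, so the on-chip decryption stage applies
# the same keystream to the ciphertext as the data streams in.
decrypt_weights = encrypt_weights
```

A stream cipher suits the real-time constraint in the abstract because decryption can proceed byte-by-byte as weights arrive, without buffering whole blocks.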
-
Patent number: 11331794
Abstract: An inverse kinematics solution system for use with a robot, which obtains the joint angle values corresponding to an inputted target pose value on the basis of that target pose value and the robot's degrees of freedom. The system comprises a parameter initialization module, an inverse kinematics scheduler, a Jacobian calculating unit, a pose updating unit and a parameter selector. It is implemented in hardware and can quickly obtain the motion parameters used for controlling a robot while reducing power consumption.
Type: Grant
Filed: February 11, 2018
Date of Patent: May 17, 2022
Assignee: Institute of Computing Technology, Chinese Academy of Sciences
Inventors: Hang Xiao, Yinhe Han, Ying Wang, Shiqi Lian
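The compute-Jacobian-then-update-pose loop that the '794 system casts into hardware units can be sketched in software for a hypothetical planar two-link arm. The arm model, step size, and tolerances here are illustrative assumptions, not details from the patent:

```python
import numpy as np

def fk(theta, lengths=(1.0, 1.0)):
    """Forward kinematics of a planar 2-link arm (hypothetical model)."""
    l1, l2 = lengths
    x = l1 * np.cos(theta[0]) + l2 * np.cos(theta[0] + theta[1])
    y = l1 * np.sin(theta[0]) + l2 * np.sin(theta[0] + theta[1])
    return np.array([x, y])

def ik_solve(target, theta0, iters=100, step=0.5):
    """Iterative IK: a software stand-in for the Jacobian calculating
    unit and pose updating unit described in the abstract."""
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        err = np.asarray(target) - fk(theta)     # pose error
        if np.linalg.norm(err) < 1e-6:
            break
        # numeric Jacobian (the patent uses a dedicated hardware unit)
        eps = 1e-6
        J = np.zeros((2, 2))
        for j in range(2):
            d = np.zeros(2)
            d[j] = eps
            J[:, j] = (fk(theta + d) - fk(theta)) / eps
        theta += step * np.linalg.pinv(J) @ err  # joint-angle update
    return theta
```

Each iteration is one pass through the scheduler's pipeline: evaluate the pose error, form the Jacobian, and step the joint angles toward the target.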
-
Publication number: 20210182666
Abstract: Disclosed are a weight data storage method and a convolution computation method that may be implemented in a neural network. The weight data storage method comprises searching for the effective weights in a weight convolution kernel matrix and acquiring an index of the effective weights. The effective weights are the non-zero weights, and the index marks the positions of the effective weights in the weight convolution kernel matrix. The method further comprises storing the effective weights and their index. With the weight data storage method and the convolution computation method of the present disclosure, storage space is saved and computation efficiency is improved.
Type: Application
Filed: February 28, 2018
Publication date: June 17, 2021
Applicant: Institute of Computing Technology, Chinese Academy of Sciences
Inventors: Yinhe Han, Feng Min, Haobo Xu, Ying Wang
-
Publication number: 20210089871
Abstract: The present invention provides a processing system for a binary weight convolutional neural network. The system comprises: at least one storage unit for storing data and instructions; at least one control unit for acquiring the instructions stored in the storage unit and sending out a control signal; and at least one calculation unit for acquiring, from the storage unit, the node values of a layer in a convolutional neural network and the corresponding binary weight data, and obtaining the node values of the next layer by performing addition and subtraction operations. With the system of the present invention, the data bit width during convolutional neural network calculation is reduced, the convolution speed is improved, and the storage capacity and operational energy consumption are reduced.
Type: Application
Filed: February 11, 2018
Publication date: March 25, 2021
Inventors: Yinhe Han, Haobo Xu, Ying Wang
-
Patent number: 10671447
Abstract: A task allocation method and a chip are disclosed. The method includes: determining the number of threads included in a to-be-processed task; determining, in a network-on-chip formed by a multi-core processor, a continuous area formed by the on-chip routers corresponding to continuous idle processor cores whose number equals the number of threads; when the area is non-rectangular, determining an extended area extended from it; and, when the predicted traffic of each on-chip router connected to a processor core in the extended area does not exceed a preset threshold, allocating the threads of the to-be-processed task to the idle processor cores in the non-rectangular area. The task allocation method provided in the embodiments of the present invention avoids the problems of large hardware overheads, low network throughput, and low system utilization.
Type: Grant
Filed: April 2, 2018
Date of Patent: June 2, 2020
Assignee: Huawei Technologies Co., Ltd.
Inventors: Hang Lu, Yinhe Han, Binzhang Fu, Xiaowei Li
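The admission test in the '447 abstract (extend a non-rectangular idle region to a rectangle, then allocate only if the routers pulled in by the extension stay under a traffic threshold) can be sketched roughly as follows. The grid coordinates, data structures, and the exact form of the traffic check are this sketch's assumptions, not the patent's claims:

```python
def bounding_rectangle(cells):
    """Smallest rectangle enclosing a set of (row, col) core positions."""
    rows = [r for r, _ in cells]
    cols = [c for _, c in cells]
    return min(rows), min(cols), max(rows), max(cols)

def allocate(idle_cells, predicted_traffic, threshold, n_threads):
    """Allocate n_threads to idle cores. If the idle region is
    non-rectangular, admit it only when every router added by the
    rectangular extension stays under the traffic threshold."""
    if len(idle_cells) < n_threads:
        return None                          # not enough idle cores
    r0, c0, r1, c1 = bounding_rectangle(idle_cells)
    rect = {(r, c) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)}
    extra = rect - set(idle_cells)           # cells added by the extension
    if any(predicted_traffic.get(cell, 0) > threshold for cell in extra):
        return None                          # extension would congest routers
    return sorted(idle_cells)[:n_threads]
```

The point of the check is that an L-shaped region can still be used as-is: the bounding rectangle only has to be safe to route through, not free for allocation.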
-
Publication number: 20200139541
Abstract: An inverse kinematics solution system for use with a robot, which obtains the joint angle values corresponding to an inputted target pose value on the basis of that target pose value and the robot's degrees of freedom. The system comprises a parameter initialization module, an inverse kinematics scheduler, a Jacobian calculating unit, a pose updating unit and a parameter selector. It is implemented in hardware and can quickly obtain the motion parameters used for controlling a robot while reducing power consumption.
Type: Application
Filed: February 11, 2018
Publication date: May 7, 2020
Inventors: Hang Xiao, Yinhe Han, Ying Wang, Shiqi Lian
-
Publication number: 20200019843
Abstract: The present invention relates to a weight management method and system for neural network processing. The method includes two stages, an off-chip encryption stage and an on-chip decryption stage: the trained neural network weight data are encrypted in advance, the encrypted weights are input into a neural network processor chip, and a decryption unit inside the chip decrypts the weights in real time to perform the related neural network calculation. The method and system realize the protection of weight data without affecting the normal operation of a neural network processor.
Type: Application
Filed: March 22, 2018
Publication date: January 16, 2020
Inventors: Yinhe Han, Haobo Xu, Ying Wang
-
Publication number: 20180225156
Abstract: A task allocation method and a chip are disclosed. The method includes: determining the number of threads included in a to-be-processed task; determining, in a network-on-chip formed by a multi-core processor, a continuous area formed by the on-chip routers corresponding to continuous idle processor cores whose number equals the number of threads; when the area is non-rectangular, determining an extended area extended from it; and, when the predicted traffic of each on-chip router connected to a processor core in the extended area does not exceed a preset threshold, allocating the threads of the to-be-processed task to the idle processor cores in the non-rectangular area. The task allocation method provided in the embodiments of the present invention avoids the problems of large hardware overheads, low network throughput, and low system utilization.
Type: Application
Filed: April 2, 2018
Publication date: August 9, 2018
Inventors: Hang Lu, Yinhe Han, Binzhang Fu, Xiaowei Li
-
Patent number: 9965335
Abstract: A task allocation method and a chip are disclosed. The method includes: determining the number of threads included in a to-be-processed task; determining, in a network-on-chip formed by a multi-core processor, a continuous area formed by the on-chip routers corresponding to continuous idle processor cores whose number equals the number of threads; if the area is non-rectangular, determining a rectangular area extended from it; and, if the predicted traffic of each on-chip router connected to a non-idle processor core in the extended rectangular area does not exceed a preset threshold, allocating the threads of the to-be-processed task to the idle processor cores in the area. The task allocation method provided in the embodiments of the present invention avoids the problems of large hardware overheads, low network throughput, and low system utilization.
Type: Grant
Filed: November 13, 2015
Date of Patent: May 8, 2018
Assignee: Huawei Technologies Co., Ltd.
Inventors: Hang Lu, Yinhe Han, Binzhang Fu, Xiaowei Li
-
Publication number: 20160070603
Abstract: A task allocation method and a chip are disclosed. The method includes: determining the number of threads included in a to-be-processed task; determining, in a network-on-chip formed by a multi-core processor, a continuous area formed by the on-chip routers corresponding to continuous idle processor cores whose number equals the number of threads; if the area is non-rectangular, determining a rectangular area extended from it; and, if the predicted traffic of each on-chip router connected to a non-idle processor core in the extended rectangular area does not exceed a preset threshold, allocating the threads of the to-be-processed task to the idle processor cores in the area. The task allocation method provided in the embodiments of the present invention avoids the problems of large hardware overheads, low network throughput, and low system utilization.
Type: Application
Filed: November 13, 2015
Publication date: March 10, 2016
Inventors: Hang Lu, Yinhe Han, Binzhang Fu, Xiaowei Li