Patents by Inventor Yinhe Han

Yinhe Han has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11551068
    Abstract: The present invention provides a processing system for a binary weight convolutional neural network. The system comprises: at least one storage unit for storing data and instructions; at least one control unit for acquiring the instructions stored in the storage unit and sending out a control signal; and, at least one calculation unit for acquiring, from the storage unit, node values of a layer in a convolutional neural network and corresponding binary weight value data and obtaining node values of a next layer by performing addition and subtraction operations. With the system of the present invention, the data bit width during the calculation process of a convolutional neural network is reduced, the convolutional operation speed is improved, and the storage capacity and operational energy consumption are reduced.
    Type: Grant
    Filed: February 11, 2018
    Date of Patent: January 10, 2023
    Assignee: Institute of Computing Technology, Chinese Academy of Sciences
    Inventors: Yinhe Han, Haobo Xu, Ying Wang
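The core idea of the abstract above — that binary {-1, +1} weights let a convolution replace every multiply-accumulate with an addition or subtraction — can be sketched in a few lines. This is an illustrative reconstruction, not the patented hardware design; the function name and loop structure are my own.

```python
import numpy as np

def binary_weight_conv(feature, weights):
    """Convolve a 2-D feature map with a binary {-1, +1} kernel.

    Because every weight is +1 or -1, each product w * x reduces to
    +x or -x, so the accumulation needs only additions and subtractions.
    """
    kh, kw = weights.shape
    oh = feature.shape[0] - kh + 1
    ow = feature.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = feature[i:i + kh, j:j + kw]
            # add where the weight is +1, subtract where it is -1
            out[i, j] = patch[weights == 1].sum() - patch[weights == -1].sum()
    return out

feature = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1, -1], [-1, 1]])
result = binary_weight_conv(feature, kernel)
```

Since weights need only one bit each, the data bit width and on-chip storage shrink accordingly, which is the benefit the abstract claims.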
  • Patent number: 11531889
    Abstract: Disclosed are a weight data storage method and a convolution computation method that may be implemented in a neural network. The weight data storage method comprises searching for effective weights in a weight convolution kernel matrix and acquiring an index of effective weights. The effective weights are non-zero weights, and the index of effective weights is used to mark the position of the effective weights in the weight convolution kernel matrix. The weight data storage method further comprises storing the effective weights and the index of effective weights. According to the weight data storage method and the convolution computation method of the present disclosure, storage space can be saved, and computation efficiency can be improved.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: December 20, 2022
    Assignee: INSTITUTE OF COMPUTING TECHNOLOGY, CHINESE ACADEMY OF SCIENCES
    Inventors: Yinhe Han, Feng Min, Haobo Xu, Ying Wang
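A minimal sketch of the storage scheme described above: keep only the non-zero ("effective") weights plus an index of their positions, then convolve by touching only those entries. The function names and the coordinate-list index format are illustrative assumptions, not the patent's actual encoding.

```python
import numpy as np

def store_effective_weights(kernel):
    """Keep only non-zero ('effective') weights plus an index that
    marks each one's position in the weight convolution kernel matrix."""
    rows, cols = np.nonzero(kernel)
    values = kernel[rows, cols]
    index = list(zip(rows.tolist(), cols.tolist()))
    return values, index

def sparse_conv(feature, values, index, kh, kw):
    """Convolution that visits only the stored effective weights,
    skipping every zero multiplication."""
    oh = feature.shape[0] - kh + 1
    ow = feature.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = sum(v * feature[i + r, j + c]
                            for v, (r, c) in zip(values, index))
    return out

kernel = np.array([[0., 2., 0.], [0., 0., 0.], [1., 0., 0.]])
values, index = store_effective_weights(kernel)
# only 2 of the 9 weights are stored, saving storage space
```

With sparse kernels, both the stored data and the per-output work scale with the number of effective weights rather than the kernel size, matching the storage and efficiency claims in the abstract.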
  • Patent number: 11521048
    Abstract: The present invention relates to a weight management method and system for neural network processing. The method includes two stages, an off-chip encryption stage and an on-chip decryption stage: trained neural network weight data are encrypted in advance, the encrypted weights are input into a neural network processor chip, and a decryption unit inside the chip decrypts the weights in real time to perform the related neural network calculation. The method and system realize the protection of weight data without affecting the normal operation of a neural network processor.
    Type: Grant
    Filed: March 22, 2018
    Date of Patent: December 6, 2022
    Assignee: Institute of Computing Technology, Chinese Academy of Sciences
    Inventors: Yinhe Han, Haobo Xu, Ying Wang
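The two-stage flow above can be sketched as an off-chip encryption step and an on-chip real-time decryption step. The abstract does not name a cipher, so the keystream below (SHA-256 in counter mode, XORed with the weight bytes) is purely a hypothetical stand-in to show the data flow.

```python
import hashlib
import numpy as np

def keystream(key: bytes, n: int) -> np.ndarray:
    """Hypothetical keystream: SHA-256 in counter mode. Illustrative
    only; the patent does not specify the cipher used."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return np.frombuffer(bytes(out[:n]), dtype=np.uint8)

def encrypt_weights(weights: np.ndarray, key: bytes) -> np.ndarray:
    """Off-chip stage: XOR the trained weight bytes with the keystream
    before shipping them to the neural network processor chip."""
    raw = weights.tobytes()
    return np.frombuffer(raw, dtype=np.uint8) ^ keystream(key, len(raw))

def decrypt_weights(cipher: np.ndarray, key: bytes, dtype, shape) -> np.ndarray:
    """On-chip stage: the decryption unit reverses the XOR in real time,
    recovering the weights just before the neural network calculation."""
    plain = (cipher ^ keystream(key, cipher.size)).tobytes()
    return np.frombuffer(plain, dtype=dtype).reshape(shape)

w = np.random.randn(4, 4).astype(np.float32)
c = encrypt_weights(w, b"device-key")
assert np.array_equal(decrypt_weights(c, b"device-key", np.float32, (4, 4)), w)
```

Because decryption happens inside the chip, the plaintext weights never appear on the external memory bus, which is how the scheme protects the weight data without changing the processor's normal computation.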
  • Patent number: 11331794
    Abstract: An inverse kinematics solution system for use with a robot, which obtains the joint angle values corresponding to a target pose value on the basis of the inputted target pose value and the robot's degrees of freedom, and which comprises: a parameters initialization module, an inverse kinematics scheduler, a Jacobian calculating unit, a pose updating unit and a parameters selector. The system is implemented by means of hardware and may quickly obtain motion parameters, which are used for controlling a robot, while reducing power consumption.
    Type: Grant
    Filed: February 11, 2018
    Date of Patent: May 17, 2022
    Assignee: INSTITUTE OF COMPUTING TECHNOLOGY, CHINESE ACADEMY OF SCIENCES
    Inventors: Hang Xiao, Yinhe Han, Ying Wang, Shiqi Lian
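The Jacobian-calculating and pose-updating units above suggest an iterative Jacobian-based solver. As a software illustration (the patent describes a hardware pipeline, and the arm model below is my own assumption), here is the classic damped pseudo-inverse iteration for a planar two-link arm:

```python
import numpy as np

def fk(theta, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm: joint angles -> pose.
    Stand-in for the pose updating unit's model."""
    t1, t2 = theta
    return np.array([l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
                     l1 * np.sin(t1) + l2 * np.sin(t1 + t2)])

def jacobian(theta, l1=1.0, l2=1.0):
    """Analytic Jacobian, as produced by the Jacobian calculating unit."""
    t1, t2 = theta
    return np.array([
        [-l1 * np.sin(t1) - l2 * np.sin(t1 + t2), -l2 * np.sin(t1 + t2)],
        [ l1 * np.cos(t1) + l2 * np.cos(t1 + t2),  l2 * np.cos(t1 + t2)],
    ])

def solve_ik(target, theta=None, iters=200, step=0.5):
    """Iteratively pull the end effector toward the target pose using a
    damped pseudo-inverse Jacobian update."""
    theta = np.array([0.3, 0.3]) if theta is None else theta
    for _ in range(iters):
        err = target - fk(theta)
        if np.linalg.norm(err) < 1e-6:
            break
        theta = theta + step * np.linalg.pinv(jacobian(theta)) @ err
    return theta

target = np.array([1.2, 0.8])
theta = solve_ik(target)
```

Mapping each of these steps onto a fixed hardware datapath, as the patent proposes, removes the general-purpose-CPU overhead per iteration, which is where the speed and power gains come from.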
  • Publication number: 20210182666
    Abstract: Disclosed are a weight data storage method and a convolution computation method that may be implemented in a neural network. The weight data storage method comprises searching for effective weights in a weight convolution kernel matrix and acquiring an index of effective weights. The effective weights are non-zero weights, and the index of effective weights is used to mark the position of the effective weights in the weight convolution kernel matrix. The weight data storage method further comprises storing the effective weights and the index of effective weights. According to the weight data storage method and the convolution computation method of the present disclosure, storage space can be saved, and computation efficiency can be improved.
    Type: Application
    Filed: February 28, 2018
    Publication date: June 17, 2021
    Applicant: INSTITUTE OF COMPUTING TECHNOLOGY, CHINESE ACADEMY OF SCIENCES
    Inventors: Yinhe HAN, Feng MIN, Haobo XU, Ying WANG
  • Publication number: 20210089871
    Abstract: The present invention provides a processing system for a binary weight convolutional neural network. The system comprises: at least one storage unit for storing data and instructions; at least one control unit for acquiring the instructions stored in the storage unit and sending out a control signal; and, at least one calculation unit for acquiring, from the storage unit, node values of a layer in a convolutional neural network and corresponding binary weight value data and obtaining node values of a next layer by performing addition and subtraction operations. With the system of the present invention, the data bit width during the calculation process of a convolutional neural network is reduced, the convolutional operation speed is improved, and the storage capacity and operational energy consumption are reduced.
    Type: Application
    Filed: February 11, 2018
    Publication date: March 25, 2021
    Inventors: YINHE HAN, HAOBO XU, YING WANG
  • Patent number: 10671447
    Abstract: A task allocation method and a chip are disclosed. The method includes: determining the number of threads included in a to-be-processed task; determining, in a network-on-chip formed by a multi-core processor, a continuous area formed by routers-on-chip corresponding to multiple continuous idle processor cores whose number is equal to the number of the threads; when the area is a non-rectangular area, determining an extended area extended from the non-rectangular area; and when predicted traffic of each router-on-chip that is connected to a processor core in the extended area does not exceed a preset threshold, allocating the multiple threads of the to-be-processed task to the idle processor cores in the non-rectangular area. According to the task allocation method provided in the embodiments of the present invention, problems of large hardware overheads, low network throughput, and low system utilization are avoided.
    Type: Grant
    Filed: April 2, 2018
    Date of Patent: June 2, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Hang Lu, Yinhe Han, Binzhang Fu, Xiaowei Li
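The key decision in the abstract — extend a non-rectangular idle region to its bounding rectangle, then allocate only if every router pulled into the extension stays under a traffic threshold — can be sketched briefly. The grid coordinates, data structures, and threshold check below are illustrative assumptions, not the patent's exact method.

```python
def bounding_rectangle(cells):
    """Smallest rectangle covering a set of (row, col) core positions:
    the 'extended area' when the idle region is non-rectangular."""
    rows = [r for r, _ in cells]
    cols = [c for _, c in cells]
    return range(min(rows), max(rows) + 1), range(min(cols), max(cols) + 1)

def can_allocate(idle, traffic, threshold):
    """Allocate the threads only if every extra router-on-chip pulled
    into the extended rectangle has predicted traffic <= threshold."""
    rrange, crange = bounding_rectangle(idle)
    extra = [(r, c) for r in rrange for c in crange if (r, c) not in idle]
    return all(traffic.get(cell, 0) <= threshold for cell in extra)

# L-shaped idle region: extending it to a 2x2 rectangle pulls in core (1, 1)
idle = {(0, 0), (0, 1), (1, 0)}
traffic = {(1, 1): 3}
print(can_allocate(idle, traffic, threshold=5))  # True: traffic 3 <= 5
print(can_allocate(idle, traffic, threshold=2))  # False: traffic 3 > 2
```

Checking predicted traffic on the borrowed routers, rather than insisting on a perfectly rectangular idle region, is what lets the method use fragmented cores without the throughput loss the abstract mentions.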
  • Publication number: 20200139541
    Abstract: An inverse kinematics solution system for use with a robot, which obtains the joint angle values corresponding to a target pose value on the basis of the inputted target pose value and the robot's degrees of freedom, and which comprises: a parameters initialization module, an inverse kinematics scheduler, a Jacobian calculating unit, a pose updating unit and a parameters selector. The system is implemented by means of hardware and may quickly obtain motion parameters, which are used for controlling a robot, while reducing power consumption.
    Type: Application
    Filed: February 11, 2018
    Publication date: May 7, 2020
    Inventors: HANG XIAO, YINHE HAN, YING WANG, SHIQI LIAN
  • Publication number: 20200019843
    Abstract: The present invention relates to a weight management method and system for neural network processing. The method includes two stages, an off-chip encryption stage and an on-chip decryption stage: trained neural network weight data are encrypted in advance, the encrypted weights are input into a neural network processor chip, and a decryption unit inside the chip decrypts the weights in real time to perform the related neural network calculation. The method and system realize the protection of weight data without affecting the normal operation of a neural network processor.
    Type: Application
    Filed: March 22, 2018
    Publication date: January 16, 2020
    Inventors: Yinhe HAN, Haobo XU, Ying WANG
  • Publication number: 20180225156
    Abstract: A task allocation method and a chip are disclosed. The method includes: determining the number of threads included in a to-be-processed task; determining, in a network-on-chip formed by a multi-core processor, a continuous area formed by routers-on-chip corresponding to multiple continuous idle processor cores whose number is equal to the number of the threads; when the area is a non-rectangular area, determining an extended area extended from the non-rectangular area; and when predicted traffic of each router-on-chip that is connected to a processor core in the extended area does not exceed a preset threshold, allocating the multiple threads of the to-be-processed task to the idle processor cores in the non-rectangular area. According to the task allocation method provided in the embodiments of the present invention, problems of large hardware overheads, low network throughput, and low system utilization are avoided.
    Type: Application
    Filed: April 2, 2018
    Publication date: August 9, 2018
    Inventors: Hang Lu, Yinhe Han, Binzhang Fu, Xiaowei Li
  • Patent number: 9965335
    Abstract: A task allocation method and a chip are disclosed. The method includes: determining a number of threads included in a to-be-processed task; determining, in a network-on-chip formed by a multi-core processor, a continuous area formed by routers-on-chip corresponding to multiple continuous idle processor cores whose number is equal to the number of the threads; if the area is a non-rectangular area, determining a rectangular area extended from the area; and if predicted traffic of each router-on-chip that is connected to a non-idle processor core and in the extended rectangular area does not exceed a preset threshold, allocating the multiple threads of the to-be-processed task to the idle processor cores in the area. According to the task allocation method provided in the embodiments of the present invention, problems of large hardware overheads, low network throughput, and low system utilization are avoided.
    Type: Grant
    Filed: November 13, 2015
    Date of Patent: May 8, 2018
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Hang Lu, Yinhe Han, Binzhang Fu, Xiaowei Li
  • Publication number: 20160070603
    Abstract: A task allocation method and a chip are disclosed. The method includes: determining a number of threads included in a to-be-processed task; determining, in a network-on-chip formed by a multi-core processor, a continuous area formed by routers-on-chip corresponding to multiple continuous idle processor cores whose number is equal to the number of the threads; if the area is a non-rectangular area, determining a rectangular area extended from the area; and if predicted traffic of each router-on-chip that is connected to a non-idle processor core and in the extended rectangular area does not exceed a preset threshold, allocating the multiple threads of the to-be-processed task to the idle processor cores in the area. According to the task allocation method provided in the embodiments of the present invention, problems of large hardware overheads, low network throughput, and low system utilization are avoided.
    Type: Application
    Filed: November 13, 2015
    Publication date: March 10, 2016
    Inventors: Hang Lu, Yinhe Han, Binzhang Fu, Xiaowei Li