Patents by Inventor Tian Zhi
Tian Zhi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11734002
Abstract: The present disclosure provides a counting device and counting method. The device includes a storage unit, a counting unit, and a register unit, where the storage unit may be connected to the counting unit for storing input data to be counted and storing a number of elements satisfying a given condition in the input data after counting; the register unit may be configured to store an address where input data to be counted is stored in the storage unit; and the counting unit may be connected to the register unit, and may be configured to acquire a counting instruction, read a storage address of the input data to be counted in the register unit according to the counting instruction, acquire corresponding input data to be counted in the storage unit, perform statistical counting on a number of elements in the input data to be counted that satisfy the given condition, and obtain a counting result.
Type: Grant
Filed: November 27, 2019
Date of Patent: August 22, 2023
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
Inventors: Tianshi Chen, Jie Wei, Tian Zhi, Zai Wang
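The data flow the abstract describes can be mimicked in software. The sketch below is illustrative only (the patent specifies hardware units, and all names here are hypothetical): a register entry holds the address of the input data, and the counting step resolves that address, fetches the data from storage, and counts the elements that satisfy a given condition.

```python
# Software analogue of the counting device described above (illustrative names).
storage = {0x100: [3, -1, 0, 7, -5, 2]}   # storage unit: address -> input data
registers = {"r0": 0x100}                  # register unit: holds the data address

def count_instruction(reg_name, condition):
    """Counting unit: read the address from the register, fetch the data,
    count the elements satisfying `condition`, and store the result back."""
    addr = registers[reg_name]             # read storage address from register unit
    data = storage[addr]                   # acquire input data from storage unit
    result = sum(1 for x in data if condition(x))
    storage[addr + 0x10] = [result]        # store the count (offset is illustrative)
    return result

print(count_instruction("r0", lambda x: x > 0))  # counts positive elements -> 3
```

The condition is passed as a predicate so the same "instruction" can count, for example, nonzero or negative elements without changing the counting logic.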
-
Patent number: 11698786
Abstract: The present disclosure provides a computation device and method. The device may include an input module configured to acquire input data; a model generation module configured to construct an offline model according to an input network structure and weight data; a neural network operation module configured to generate a computation instruction based on the offline model, cache the computation instruction, and compute the data to be processed based on the computation instruction to obtain a computation result; and an output module configured to output the computation result. The device and method may avoid the overhead of running an entire software architecture, a problem with traditional methods.
Type: Grant
Filed: November 27, 2019
Date of Patent: July 11, 2023
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
Inventors: Shaoli Liu, Wei Li, Tian Zhi, Tianshi Chen
-
Publication number: 20230214322
Abstract: The present disclosure relates to a method, a device, and a computation apparatus for allocating a space address to data in a memory, where the computation apparatus is included in a combined processing apparatus, which includes a general interconnection interface and other processing apparatuses. The computation apparatus interacts with the other processing apparatuses to jointly complete computations specified by the user. The combined processing apparatus also includes a storage apparatus, which is connected to the computation apparatus and the other processing apparatuses and is used for storing their data. The technical solutions of the present disclosure improve utilization of the storage space of the memory.
Type: Application
Filed: May 12, 2021
Publication date: July 6, 2023
Inventors: Xiaofu MENG, Tian ZHI, Zhenxing ZHANG, Xunyu CHEN
-
Patent number: 11551067
Abstract: The present disclosure provides a neural network processor and neural network computation method that deploy a memory and a cache to perform a neural network computation, where the memory may be configured to store the data and instructions of the neural network computation, and the cache may be connected to the memory via a memory bus. In this way, the actual computing capability of the hardware may be fully utilized, the cost and power consumption overhead may be reduced, the parallelism of the network may be fully exploited, and the efficiency of the neural network computation may be improved.
Type: Grant
Filed: July 23, 2019
Date of Patent: January 10, 2023
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
Inventors: Tianshi Chen, Xiaobin Chen, Tian Zhi, Zidong Du
-
Patent number: 11531540
Abstract: A processing device with dynamically configurable operation bit width, comprising: a memory for storing data, the data comprising data to be operated on, intermediate operation results, final operation results, and data to be buffered in a neural network; a data width adjustment circuit for adjusting the width of the data to be operated on, the intermediate operation results, the final operation results, and/or the data to be buffered; an operation circuit for operating on the data, including performing operations on data of different bit widths by using an adder circuit and a multiplier; and a control circuit for controlling the memory, the data width adjustment circuit, and the operation circuit. The device of the present disclosure can offer strong flexibility, high configurability, fast operation speed, and low power consumption.
Type: Grant
Filed: April 17, 2018
Date of Patent: December 20, 2022
Assignee: CAMBRICON (XI'AN) SEMICONDUCTOR CO., LTD.
Inventors: Tianshi Chen, Jie Wei, Tian Zhi, Zai Wang, Shaoli Liu, Yuzhe Luo, Qi Guo, Wei Li, Shengyuan Zhou, Zidong Du
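One simple way to picture the width-adjustment step is saturation to a target signed bit width before and after an operation. The following is a minimal sketch under that assumption (the patent describes hardware circuits; the function names and the saturating behavior are illustrative, not taken from the claims):

```python
# Illustrative sketch of configurable operation bit width via saturation.
def adjust_width(x, bits):
    """Data width adjustment: clamp x into the signed range of `bits` bits."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, x))

def operate(a, b, bits, op):
    """Operation circuit: adjust both operands, apply op, adjust the result."""
    a, b = adjust_width(a, bits), adjust_width(b, bits)
    return adjust_width(op(a, b), bits)

print(operate(100, 100, 8, lambda x, y: x + y))   # saturates at the 8-bit max, 127
print(operate(100, 100, 16, lambda x, y: x + y))  # 200 fits in 16 bits
```

Running the same operands at different configured widths shows the trade-off the abstract alludes to: narrower widths cost precision but would need less hardware and power.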
-
Patent number: 11507640
Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
Type: Grant
Filed: October 26, 2018
Date of Patent: November 22, 2022
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
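A software analogue of the adders-plus-combiner arrangement is a simple elementwise sum (a sketch with illustrative names; in hardware each pair would go to its own adder in parallel):

```python
# Elementwise vector addition, mirroring the adders-and-combiner structure above.
def vector_add(first, second):
    assert len(first) == len(second), "vectors must have equal length"
    # one "adder" per element pair, producing the addition results
    addition_results = (a + b for a, b in zip(first, second))
    # the "combiner" assembles the results into the output vector
    return list(addition_results)

print(vector_add([1, 2, 3], [10, 20, 30]))  # -> [11, 22, 33]
```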
-
Patent number: 11436301
Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
Type: Grant
Filed: October 26, 2018
Date of Patent: September 6, 2022
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
-
Patent number: 11409524
Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a vector, wherein the vector includes one or more elements. The aspects may further include a computation module that includes one or more comparers configured to compare the one or more elements to generate an output result that satisfies a predetermined condition included in an instruction.
Type: Grant
Filed: October 25, 2018
Date of Patent: August 9, 2022
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Tian Zhi, Shaoli Liu, Qi Guo, Tianshi Chen, Yunji Chen
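A comparer-based reduction of this kind can be sketched as a pairwise survival tournament, where the instruction's condition decides which element survives each comparison. The names below are illustrative; "greatest element" and "smallest element" stand in for whatever predetermined condition the instruction carries:

```python
# Pairwise comparer reduction: keep the element preferred by the condition.
def compare_reduce(vector, prefer):
    """`prefer(a, b)` returns the element satisfying the condition."""
    result = vector[0]
    for x in vector[1:]:
        result = prefer(result, x)   # one comparer per comparison
    return result

print(compare_reduce([4, 9, 1, 7], max))  # "greatest element" condition -> 9
print(compare_reduce([4, 9, 1, 7], min))  # "smallest element" condition -> 1
```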
-
Patent number: 11341211
Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
Type: Grant
Filed: October 26, 2018
Date of Patent: May 24, 2022
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
-
Patent number: 11126429
Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include a computation module that includes one or more bitwise processors and a combiner. The bitwise processors may be configured to perform bitwise operations between each of the first elements and a corresponding one of the second elements to generate one or more operation results. The combiner may be configured to combine the one or more operation results into an output vector.
Type: Grant
Filed: January 17, 2019
Date of Patent: September 21, 2021
Assignee: Cambricon Technologies Corporation Limited
Inventors: Tao Luo, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
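The bitwise-processor arrangement maps naturally onto an elementwise bitwise operation over two vectors. This is a sketch with illustrative names, showing AND and XOR as example operations (the abstract does not name specific ones):

```python
# One "bitwise processor" per element pair; the list is the combined output.
def vector_bitwise(first, second, op):
    return [op(a, b) for a, b in zip(first, second)]

a, b = [0b1100, 0b1010], [0b1010, 0b0110]
print(vector_bitwise(a, b, lambda x, y: x & y))  # elementwise AND -> [8, 2]
print(vector_bitwise(a, b, lambda x, y: x ^ y))  # elementwise XOR -> [6, 12]
```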
-
Patent number: 11100192
Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
Type: Grant
Filed: October 26, 2018
Date of Patent: August 24, 2021
Assignee: Cambricon Technologies Corporation Limited
Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
-
Patent number: 10997276
Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector, wherein the first vector includes one or more first elements and the second vector includes one or more second elements. The aspects may further include one or more adders and a combiner. The one or more adders may be configured to respectively add each of the first elements to a corresponding one of the second elements to generate one or more addition results. The combiner may be configured to combine the one or more addition results into an output vector.
Type: Grant
Filed: October 26, 2018
Date of Patent: May 4, 2021
Assignee: Cambricon Technologies Corporation Limited
Inventors: Jinhua Tao, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
-
Patent number: 10860316
Abstract: Aspects for generating the dot product of two vectors in a neural network are described herein. The aspects may include a controller unit configured to receive a vector load instruction that includes a first address of a first vector and a length of the first vector. The aspects may further include a direct memory access unit configured to retrieve the first vector from a storage device based on the first address of the first vector. Further still, the aspects may include a caching unit configured to store the first vector.
Type: Grant
Filed: October 26, 2018
Date of Patent: December 8, 2020
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Tian Zhi, Qi Guo, Shaoli Liu, Tianshi Chen, Yunji Chen
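The load-then-reduce flow above can be sketched in software: a "load instruction" carries an address and a length, a DMA-style step retrieves that many elements from storage, and the two cached vectors are reduced to a dot product. All names and the flat-array storage model are illustrative assumptions:

```python
# Software analogue of the dot-product flow: load by (address, length), then reduce.
storage = list(range(100))  # illustrative flat storage device

def vector_load(address, length):
    """DMA-style retrieval: fetch `length` elements starting at `address`."""
    return storage[address:address + length]

def dot_product(addr_a, addr_b, length):
    a = vector_load(addr_a, length)   # cached first vector
    b = vector_load(addr_b, length)   # cached second vector
    return sum(x * y for x, y in zip(a, b))

print(dot_product(0, 10, 3))  # [0,1,2] . [10,11,12] -> 35
```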
-
Patent number: 10831861
Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector. The first vector may include one or more first elements and the second vector may include one or more second elements. The aspects may further include a computation module configured to calculate a cross product between the first vector and the second vector in response to an instruction.
Type: Grant
Filed: October 26, 2018
Date of Patent: November 10, 2020
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Tao Luo, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
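For the familiar three-element case, the cross product the computation module would produce is the standard determinant formula (the instruction-driven hardware path is abstracted away in this sketch):

```python
# Standard 3-vector cross product: u x v.
def cross_product(u, v):
    assert len(u) == len(v) == 3, "cross product shown here for 3-vectors"
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

print(cross_product([1, 0, 0], [0, 1, 0]))  # x cross y -> [0, 0, 1]
```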
-
Patent number: 10671913
Abstract: The present disclosure provides a computation device and method capable of using a single instruction to complete a transpose computation of a matrix of any size within constant time. Compared with conventional methods for performing a matrix transpose computation, the device and method may reduce the time complexity of the computation and make its usage simpler and more efficient.
Type: Grant
Filed: July 24, 2019
Date of Patent: June 2, 2020
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
Inventors: Shaoli Liu, Wei Li, Tian Zhi, Tianshi Chen
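For reference, the operation the single instruction performs is the ordinary matrix transpose. The sketch below is a plain software version; the constant-time claim applies to the hardware implementation, not to this loop-based analogue:

```python
# Plain matrix transpose: rows become columns.
def transpose(matrix):
    rows, cols = len(matrix), len(matrix[0])
    return [[matrix[r][c] for r in range(rows)] for c in range(cols)]

m = [[1, 2, 3],
     [4, 5, 6]]
print(transpose(m))  # -> [[1, 4], [2, 5], [3, 6]]
```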
-
Patent number: 10643129
Abstract: Aspects for backpropagation of a convolutional neural network are described herein. The aspects may include a direct memory access unit configured to receive input data from a storage device and a master computation module configured to select one or more portions of the input data based on a predetermined convolution window. Further, the aspects may include one or more slave computation modules respectively configured to convolve one of the one or more portions of the input data with one of one or more previously calculated first data gradients to generate a kernel gradient, wherein the master computation module is further configured to update a prestored convolution kernel based on the kernel gradient.
Type: Grant
Filed: October 29, 2018
Date of Patent: May 5, 2020
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Yunji Chen, Tian Zhi, Shaoli Liu, Qi Guo, Tianshi Chen
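The kernel-gradient step the abstract describes can be sketched for the 1-D case: slide the convolution window over the input, multiply each window by the corresponding output gradient, accumulate into the kernel gradient, then update the prestored kernel. This is a simplified software analogue with illustrative names, not the patented hardware scheme:

```python
# 1-D kernel gradient for convolution backpropagation (simplified sketch).
def kernel_gradient_1d(inputs, output_grads, kernel_len):
    grad = [0.0] * kernel_len
    for i, g in enumerate(output_grads):       # one window per output position
        window = inputs[i:i + kernel_len]      # portion selected by the window
        for k, x in enumerate(window):
            grad[k] += x * g                   # accumulate into the kernel gradient
    return grad

def update_kernel(kernel, grad, lr=0.1):
    """Gradient-descent update of the prestored kernel."""
    return [w - lr * dg for w, dg in zip(kernel, grad)]

inputs = [1.0, 2.0, 3.0, 4.0]
output_grads = [0.5, -0.5]   # gradients for the two valid output positions
grad = kernel_gradient_1d(inputs, output_grads, kernel_len=3)
print(grad)                  # -> [-0.5, -0.5, -0.5]
print(update_kernel([0.2, 0.1, 0.0], grad))
```

Each output position contributes one window-times-gradient product, which is why the windows can be distributed across independent slave modules before the master module applies the accumulated update.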
-
Publication number: 20200111007
Abstract: Aspects for backpropagation of a convolutional neural network are described herein. The aspects may include a direct memory access unit configured to receive input data from a storage device and a master computation module configured to select one or more portions of the input data based on a predetermined convolution window. Further, the aspects may include one or more slave computation modules respectively configured to convolve one of the one or more portions of the input data with one of one or more previously calculated first data gradients to generate a kernel gradient, wherein the master computation module is further configured to update a prestored convolution kernel based on the kernel gradient.
Type: Application
Filed: December 11, 2019
Publication date: April 9, 2020
Inventors: Yunji CHEN, Tian ZHI, Shaoli LIU, Qi GUO, Tianshi CHEN
-
Publication number: 20200097794
Abstract: The present disclosure provides a counting device and counting method. The device includes a storage unit, a counting unit, and a register unit, where the storage unit may be connected to the counting unit for storing input data to be counted and storing a number of elements satisfying a given condition in the input data after counting; the register unit may be configured to store an address where input data to be counted is stored in the storage unit; and the counting unit may be connected to the register unit, and may be configured to acquire a counting instruction, read a storage address of the input data to be counted in the register unit according to the counting instruction, acquire corresponding input data to be counted in the storage unit, perform statistical counting on a number of elements in the input data to be counted that satisfy the given condition, and obtain a counting result.
Type: Application
Filed: November 27, 2019
Publication date: March 26, 2020
Inventors: Tianshi Chen, Jie Wei, Tian Zhi, Zai Wang
-
Publication number: 20200097520
Abstract: Aspects for vector operations in a neural network are described herein. The aspects may include a vector caching unit configured to store a first vector and a second vector. The first vector may include one or more first elements and the second vector may include one or more second elements. The aspects may further include a computation module configured to calculate a cross product between the first vector and the second vector in response to an instruction.
Type: Application
Filed: October 26, 2018
Publication date: March 26, 2020
Inventors: Tao Luo, Tian Zhi, Shaoli Liu, Tianshi Chen, Yunji Chen
-
Publication number: 20200097795
Abstract: The present disclosure provides a computation device and method. The device may include an input module configured to acquire input data; a model generation module configured to construct an offline model according to an input network structure and weight data; a neural network operation module configured to generate a computation instruction based on the offline model, cache the computation instruction, and compute the data to be processed based on the computation instruction to obtain a computation result; and an output module configured to output the computation result. The device and method may avoid the overhead of running an entire software architecture, a problem with traditional methods.
Type: Application
Filed: November 27, 2019
Publication date: March 26, 2020
Inventors: Shaoli Liu, Wei Li, Tian Zhi, Tianshi Chen