Patents by Inventor Huiying LAN
Huiying LAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240111536
Abstract: The present disclosure provides a data processing apparatus and related products. The products include a control module including an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue includes a plurality of operation instructions or computation instructions to be executed in the sequence of the queue. By adopting the above-mentioned method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
Type: Application
Filed: December 7, 2023
Publication date: April 4, 2024
Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Bingrui Wang, Xiaoyong ZHOU, Yimin ZHUANG, Huiying LAN, Jun LIANG, Hongbo ZENG
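The control-module structure this family of abstracts describes — an instruction cache feeding a parser, which fills an in-order queue — can be pictured with a short Python sketch. This is a minimal illustration only; the class, method, and instruction names are hypothetical, and the patent does not specify any such code.

```python
from collections import deque

class ControlModule:
    """Toy model of the claimed control module (hypothetical names):
    an instruction cache, a parser, and an in-order storage queue."""

    def __init__(self):
        self.instruction_cache = []   # instruction caching unit
        self.queue = deque()          # storage queue unit (FIFO)

    def cache(self, computation_instruction):
        # Instruction caching unit: store instructions associated
        # with a neural network operation.
        self.instruction_cache.append(computation_instruction)

    def parse(self, computation_instruction):
        # Instruction processing unit: split one computation instruction
        # into several operation instructions (naive whitespace split here).
        return computation_instruction.split()

    def run(self):
        # Enqueue parsed operation instructions, then execute them
        # strictly in the sequence of the queue.
        for ci in self.instruction_cache:
            self.queue.extend(self.parse(ci))
        while self.queue:
            print("execute", self.queue.popleft())

cm = ControlModule()
cm.cache("matmul A B C")
cm.cache("relu C D")
cm.run()
```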
-
Patent number: 11886880
Abstract: The present disclosure provides a data processing apparatus and related products. The products include a control module including an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue includes a plurality of operation instructions or computation instructions to be executed in the sequence of the queue. By adopting the above-mentioned method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
Type: Grant
Filed: June 24, 2022
Date of Patent: January 30, 2024
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Bingrui Wang, Xiaoyong Zhou, Yimin Zhuang, Huiying Lan, Jun Liang, Hongbo Zeng
-
Publication number: 20230274158
Abstract: The present disclosure relates to an apparatus and a method for performing neural network computing, a board card, and a readable storage medium. The computing apparatus of the present disclosure is included in an integrated circuit apparatus. The integrated circuit apparatus includes a general interconnection interface and other processing apparatus. The computing apparatus interacts with other processing apparatus to jointly complete a computing operation specified by a user. The integrated circuit apparatus further includes a storage apparatus. The storage apparatus is connected to the computing apparatus and other processing apparatus, respectively. The storage apparatus is used for data storage of the computing apparatus and other processing apparatus.
Type: Application
Filed: September 23, 2021
Publication date: August 31, 2023
Inventors: Huiying LAN, Ruitao WANG, Haizhao LUO, Bo CAO, Xunyu CHEN
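A rough sketch of the topology this abstract describes: a computing apparatus and another processing apparatus sharing one storage apparatus, with the general interconnection interface reduced to a plain method call. All class names and the doubling operation are illustrative assumptions, not drawn from the patent.

```python
class StorageApparatus:
    """Shared storage serving both processing apparatuses (illustrative)."""
    def __init__(self):
        self.data = {}

class ComputingApparatus:
    def __init__(self, storage):
        self.storage = storage
    def compute(self, key_in, key_out):
        # Read input from shared storage, do the work, write the result back.
        self.storage.data[key_out] = [x * 2 for x in self.storage.data[key_in]]

class HostProcessor:
    """Stands in for the 'other processing apparatus' (e.g., a host CPU)."""
    def __init__(self, storage, accelerator):
        self.storage, self.accelerator = storage, accelerator
    def offload(self, values):
        # Hand data to the computing apparatus over the 'general
        # interconnection interface' (modeled as a direct call).
        self.storage.data["in"] = values
        self.accelerator.compute("in", "out")
        return self.storage.data["out"]

storage = StorageApparatus()
device = ComputingApparatus(storage)
host = HostProcessor(storage, device)
print(host.offload([1, 2, 3]))   # -> [2, 4, 6]
```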
-
Publication number: 20230259746
Abstract: The present disclosure relates to an apparatus and a method for forward fusing a neural network, a board card, and a readable storage medium. The computing apparatus of the present disclosure is included in an integrated circuit apparatus. The integrated circuit apparatus includes a general interconnection interface and other processing apparatus. The computing apparatus interacts with other processing apparatus to jointly complete a computing operation specified by a user. The integrated circuit apparatus further includes a storage apparatus. The storage apparatus is connected to the computing apparatus and other processing apparatus, respectively. The storage apparatus is used for data storage of the computing apparatus and other processing apparatus.
Type: Application
Filed: September 24, 2021
Publication date: August 17, 2023
Inventors: Huiying LAN, Ruitao WANG, Haizhao LUO, Bo CAO, Xunyu CHEN
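The abstract does not spell out the fusion rule, but forward fusion of a network generally means running a chain of consecutive operators as a single pass, so intermediate results need not round-trip through external storage between operators. A toy NumPy sketch under that assumption; all function names are hypothetical.

```python
import numpy as np

def linear(w, b):
    # One forward operator: an affine layer closed over its parameters.
    return lambda x: x @ w + b

def relu():
    return lambda x: np.maximum(x, 0.0)

def forward_fuse(ops):
    """Fuse a chain of forward ops into one callable, so intermediates
    stay local instead of being written out between operators."""
    def fused(x):
        for op in ops:
            x = op(x)
        return x
    return fused

w = np.ones((3, 2))
b = np.zeros(2)
net = forward_fuse([linear(w, b), relu()])
print(net(np.array([[1.0, -2.0, 3.0]])))   # one pass, no stored intermediate
```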
-
Publication number: 20220334840
Abstract: The present disclosure provides a data processing apparatus and related products. The products include a control module including an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue includes a plurality of operation instructions or computation instructions to be executed in the sequence of the queue. By adopting the above-mentioned method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
Type: Application
Filed: June 24, 2022
Publication date: October 20, 2022
Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Bingrui WANG, Xiaoyong ZHOU, Yimin ZHUANG, Huiying LAN, Jun LIANG, Hongbo ZENG
-
Patent number: 11385895
Abstract: The present disclosure provides a data processing apparatus and related products. The products include a control module including an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue includes a plurality of operation instructions or computation instructions to be executed in the sequence of the queue. By adopting the above-mentioned method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
Type: Grant
Filed: September 29, 2021
Date of Patent: July 12, 2022
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Bingrui Wang, Xiaoyong Zhou, Yimin Zhuang, Huiying Lan, Jun Liang, Hongbo Zeng
-
Patent number: 11373084
Abstract: Aspects for forward propagation in fully connected layers of a convolutional artificial neural network are described herein. The aspects may include multiple slave computation modules configured to calculate, in parallel, multiple groups of slave output values based on an input vector received via the interconnection unit. Further, the aspects may include a master computation module connected to the multiple slave computation modules via an interconnection unit, wherein the master computation module is configured to generate an output vector based on the intermediate result vector.
Type: Grant
Filed: October 29, 2018
Date of Patent: June 28, 2022
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Huiying Lan, Qi Guo, Yunji Chen, Tianshi Chen
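One way to picture the master/slave split for a fully connected layer: each slave module computes the dot products for its own group of output neurons, and the master module turns the merged intermediate result vector into the layer output. A NumPy sketch; the row-wise partitioning, the bias-plus-ReLU master step, and all names are assumptions for illustration, not the patent's specification.

```python
import numpy as np

def slave_compute(weight_slice, input_vector):
    # Each slave computation module produces one group of slave output
    # values: the dot products for its slice of output neurons.
    return weight_slice @ input_vector

def master_compute(intermediate, bias):
    # Master computation module: turn the merged intermediate result
    # vector into the output vector (bias add + activation here).
    return np.maximum(intermediate + bias, 0.0)

def fc_forward(weights, bias, x, n_slaves=4):
    # The interconnection unit is modeled by splitting the weight rows
    # across slaves and concatenating their partial results.
    slices = np.array_split(weights, n_slaves, axis=0)
    intermediate = np.concatenate([slave_compute(s, x) for s in slices])
    return master_compute(intermediate, bias)

rng = np.random.default_rng(0)
W, b, x = rng.normal(size=(8, 5)), np.zeros(8), rng.normal(size=5)
print(fc_forward(W, b, x))
```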
-
Publication number: 20220019439
Abstract: The present disclosure provides a data processing apparatus and related products. The products include a control module including an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue includes a plurality of operation instructions or computation instructions to be executed in the sequence of the queue. By adopting the above-mentioned method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
Type: Application
Filed: September 29, 2021
Publication date: January 20, 2022
Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Bingrui WANG, Xiaoyong ZHOU, Yimin ZHUANG, Huiying LAN, Jun LIANG, Hongbo ZENG
-
Publication number: 20210150325
Abstract: The present disclosure provides a data processing method, an apparatus, and related products. The products include a control module including an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue includes a plurality of operation instructions or computation instructions to be executed in the sequence of the queue. By adopting the above-mentioned method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
Type: Application
Filed: December 29, 2020
Publication date: May 20, 2021
Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Bingrui WANG, Xiaoyong ZHOU, Yimin ZHUANG, Huiying LAN, Jun LIANG
-
Patent number: 10860050
Abstract: A nonlinear function operation device and method are provided. The device may include a table looking-up module and a linear fitting module. The table looking-up module may be configured to acquire a first address of a slope value k and a second address of an intercept value b based on a floating-point number. The linear fitting module may be configured to obtain a linear function expressed as y=k×x+b based on the slope value k and the intercept value b, and substitute the floating-point number into the linear function to calculate a function value of the linear function, wherein the calculated function value is determined as the function value of a nonlinear function corresponding to the floating-point number.
Type: Grant
Filed: October 18, 2018
Date of Patent: December 8, 2020
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Huiying Lan, Qi Guo, Yunji Chen, Tianshi Chen, Shangying Li, Zhen Li
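The abstract gives the core formula: look up a segment-dependent slope k and intercept b, then evaluate y = k×x + b as a piecewise-linear stand-in for the nonlinear function. A sketch of that scheme approximating a sigmoid; the segment count, input range, and helper names are illustrative choices, not taken from the patent.

```python
import numpy as np

def build_table(f, lo, hi, n):
    # Precompute slope k and intercept b for each of n equal segments,
    # mimicking the device's slope/intercept lookup tables.
    xs = np.linspace(lo, hi, n + 1)
    k = (f(xs[1:]) - f(xs[:-1])) / (xs[1:] - xs[:-1])
    b = f(xs[:-1]) - k * xs[:-1]
    return xs, k, b

def lookup_fit(x, xs, k, b):
    # Table-looking-up step: map x to a segment index (the 'address'),
    # then the linear-fitting step evaluates y = k*x + b.
    i = np.clip(np.searchsorted(xs, x) - 1, 0, len(k) - 1)
    return k[i] * x + b[i]

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
xs, k, b = build_table(sigmoid, -8.0, 8.0, 64)
print(lookup_fit(0.7, xs, k, b), sigmoid(0.7))  # close agreement
```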
-
Patent number: 10805233
Abstract: A communication structure comprises: a central node that is a communication data center of a network-on-chip and used for broadcasting or multicasting communication data to a plurality of leaf nodes; a plurality of leaf nodes that are communication data nodes of the network-on-chip and used for transmitting the communication data to the central node; and forwarder modules for connecting the central node with the plurality of leaf nodes and forwarding the communication data, wherein the plurality of leaf nodes are divided into N groups, each group having the same number of leaf nodes, the central node is individually in communication connection with each group of leaf nodes by means of the forwarder modules, the communication structure is a fractal-tree structure, the communication structure constituted by each group of leaf nodes has self-similarity, and the forwarder modules comprise a central forwarder module, leaf forwarder modules, and intermediate forwarder modules.
Type: Grant
Filed: June 17, 2016
Date of Patent: October 13, 2020
Assignee: INSTITUTE OF COMPUTING TECHNOLOGY, CHINESE ACADEMY OF SCIENCES
Inventors: Huiying Lan, Tao Luo, Shaoli Liu, Shijin Zhang, Yunji Chen
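The fractal-tree structure can be sketched as nested forwarders: a central forwarder fans out to one forwarder per group of leaf nodes, and each group has the same shape (self-similarity). A minimal Python model of a broadcast from the central node; the classes and the two-level depth are simplifying assumptions, not the patent's full structure.

```python
class Forwarder:
    """Central or group forwarder: relays data down to its children."""
    def __init__(self, children):
        self.children = children
    def forward(self, data):
        for child in self.children:
            if isinstance(child, Forwarder):
                child.forward(data)
            else:
                child.receive(data)

class Leaf:
    """Communication data node of the network-on-chip."""
    def __init__(self, ident):
        self.ident, self.inbox = ident, None
    def receive(self, data):
        self.inbox = data

def build_fractal_tree(leaves, group_size):
    # Split the leaves into equal-size groups, give each group its own
    # forwarder, and hang all group forwarders off one central forwarder.
    groups = [leaves[i:i + group_size] for i in range(0, len(leaves), group_size)]
    return Forwarder([Forwarder(g) for g in groups])

leaves = [Leaf(i) for i in range(8)]
central = build_fractal_tree(leaves, group_size=4)   # N = 2 groups of 4
central.forward("weights")                           # broadcast
print(all(leaf.inbox == "weights" for leaf in leaves))  # True
```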
-
Publication number: 20190065934
Abstract: Aspects for forward propagation in fully connected layers of a convolutional artificial neural network are described herein. The aspects may include multiple slave computation modules configured to calculate, in parallel, multiple groups of slave output values based on an input vector received via the interconnection unit. Further, the aspects may include a master computation module connected to the multiple slave computation modules via an interconnection unit, wherein the master computation module is configured to generate an output vector based on the intermediate result vector.
Type: Application
Filed: October 29, 2018
Publication date: February 28, 2019
Inventors: Shaoli Liu, Huiying Lan, Qi Guo, Yunji Chen, Tianshi Chen
-
Publication number: 20190050369
Abstract: A nonlinear function operation device and method are provided. The device may include a table looking-up module and a linear fitting module. The table looking-up module may be configured to acquire a first address of a slope value k and a second address of an intercept value b based on a floating-point number. The linear fitting module may be configured to obtain a linear function expressed as y=k×x+b based on the slope value k and the intercept value b, and substitute the floating-point number into the linear function to calculate a function value of the linear function, wherein the calculated function value is determined as the function value of a nonlinear function corresponding to the floating-point number.
Type: Application
Filed: October 18, 2018
Publication date: February 14, 2019
Inventors: Huiying Lan, Qi Guo, Yunji Chen, Tianshi Chen, Shangying Li, Zhen Li
-
Publication number: 20180375789
Abstract: A communication structure comprises: a central node that is a communication data center of a network-on-chip and used for broadcasting or multicasting communication data to a plurality of leaf nodes; a plurality of leaf nodes that are communication data nodes of the network-on-chip and used for transmitting the communication data to the central node; and forwarder modules for connecting the central node with the plurality of leaf nodes and forwarding the communication data, wherein the plurality of leaf nodes are divided into N groups, each group having the same number of leaf nodes, the central node is individually in communication connection with each group of leaf nodes by means of the forwarder modules, the communication structure is a fractal-tree structure, the communication structure constituted by each group of leaf nodes has self-similarity, and the forwarder modules comprise a central forwarder module, leaf forwarder modules, and intermediate forwarder modules.
Type: Application
Filed: June 17, 2016
Publication date: December 27, 2018
Inventors: Huiying LAN, Tao LUO, Shaoli LIU, Shijin ZHANG, Yunji CHEN