Patents by Inventor Huiying LAN

Huiying LAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240111536
    Abstract: The present disclosure provides a data processing apparatus and related products. The products include a control module comprising an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions into a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue holds the operation instructions or computation instructions to be executed in queue order. This arrangement can improve the operation efficiency of related products when performing neural network model operations.
    Type: Application
    Filed: December 7, 2023
    Publication date: April 4, 2024
    Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli LIU, Bingrui WANG, Xiaoyong ZHOU, Yimin ZHUANG, Huiying LAN, Jun LIANG, Hongbo ZENG
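As a rough illustration of the pipeline this entry's abstract describes (cache computation instructions, parse them into operation instructions, issue from a queue in order), here is a minimal behavioral sketch in Python. The class and method names, the instruction format, and the three-op decoding scheme are illustrative assumptions, not taken from the patent.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class OpInstruction:
    """One decoded operation instruction (hypothetical format)."""
    opcode: str        # e.g. "LOAD", "CONV2D", "STORE"
    operands: tuple

class ControlModule:
    """Behavioral model of the control module in the abstract:
    instruction caching unit -> instruction processing unit -> storage queue unit."""

    def __init__(self):
        self.instruction_cache = []   # instruction caching unit
        self.issue_queue = deque()    # storage queue unit (in-order FIFO)

    def cache_instruction(self, computation_instruction: str):
        # Instruction caching unit: buffer an incoming computation
        # instruction associated with a neural network operation.
        self.instruction_cache.append(computation_instruction)

    def parse(self, computation_instruction: str):
        # Instruction processing unit: split one computation instruction
        # into a plurality of operation instructions (a toy decoding).
        name, *args = computation_instruction.split()
        return [OpInstruction("LOAD", tuple(args)),
                OpInstruction(name.upper(), tuple(args)),
                OpInstruction("STORE", tuple(args))]

    def issue_all(self):
        # Storage queue unit: enqueue decoded ops, then execute them
        # strictly in queue order.
        for instr in self.instruction_cache:
            self.issue_queue.extend(self.parse(instr))
        while self.issue_queue:
            op = self.issue_queue.popleft()
            print("executing", op.opcode, op.operands)

cm = ControlModule()
cm.cache_instruction("conv2d in0 w0 out0")
cm.issue_all()
```

The deque stands in for the storage queue unit: decoded operation instructions leave it strictly in the order they were enqueued, matching the in-sequence execution the abstract specifies.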
  • Patent number: 11886880
    Abstract: The present disclosure provides a data processing apparatus and related products. The products include a control module comprising an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions into a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue holds the operation instructions or computation instructions to be executed in queue order. This arrangement can improve the operation efficiency of related products when performing neural network model operations.
    Type: Grant
    Filed: June 24, 2022
    Date of Patent: January 30, 2024
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Bingrui Wang, Xiaoyong Zhou, Yimin Zhuang, Huiying Lan, Jun Liang, Hongbo Zeng
  • Publication number: 20230274158
    Abstract: The present disclosure relates to an apparatus and a method for performing neural network computing, a board card, and a readable storage medium. The computing apparatus of the present disclosure is included in an integrated circuit apparatus that also includes a general interconnection interface and another processing apparatus. The computing apparatus interacts with the other processing apparatus to jointly complete a computing operation specified by a user. The integrated circuit apparatus further includes a storage apparatus, which is connected to both the computing apparatus and the other processing apparatus and provides data storage for each of them.
    Type: Application
    Filed: September 23, 2021
    Publication date: August 31, 2023
    Inventors: Huiying LAN, Ruitao WANG, Haizhao LUO, Bo CAO, Xunyu CHEN
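To make the division of labor in this entry's abstract concrete, here is a minimal Python sketch of the composition it describes: a computing apparatus and an "other processing apparatus" sharing one storage apparatus, with the general interconnection interface reduced to plain method calls. All class names and the stand-in computation are illustrative assumptions.

```python
class StorageApparatus:
    """Shared storage serving both processing sides."""
    def __init__(self):
        self._mem = {}
    def write(self, key, value):
        self._mem[key] = value
    def read(self, key):
        return self._mem[key]

class ComputingApparatus:
    """Accelerator side: runs the neural-network part of a job."""
    def __init__(self, storage):
        self.storage = storage
    def run(self, key):
        x = self.storage.read(key)
        self.storage.write(key + "_out", [v * 2 for v in x])  # stand-in computation

class OtherProcessingApparatus:
    """Host side: prepares inputs and collects results."""
    def __init__(self, storage):
        self.storage = storage
    def submit(self, key, data):
        self.storage.write(key, data)
    def collect(self, key):
        return self.storage.read(key + "_out")

class IntegratedCircuitApparatus:
    """The two apparatuses jointly complete a user-specified operation;
    the general interconnection interface is modeled as direct calls."""
    def __init__(self):
        self.storage = StorageApparatus()
        self.accelerator = ComputingApparatus(self.storage)
        self.host = OtherProcessingApparatus(self.storage)

ic = IntegratedCircuitApparatus()
ic.host.submit("task0", [1, 2, 3])
ic.accelerator.run("task0")
print(ic.host.collect("task0"))   # [2, 4, 6]
```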
  • Publication number: 20230259746
    Abstract: The present disclosure relates to an apparatus and a method for forward fusing a neural network, a board card, and a readable storage medium. The computing apparatus of the present disclosure is included in an integrated circuit apparatus that also includes a general interconnection interface and another processing apparatus. The computing apparatus interacts with the other processing apparatus to jointly complete a computing operation specified by a user. The integrated circuit apparatus further includes a storage apparatus, which is connected to both the computing apparatus and the other processing apparatus and provides data storage for each of them.
    Type: Application
    Filed: September 24, 2021
    Publication date: August 17, 2023
    Inventors: Huiying LAN, Ruitao WANG, Haizhao LUO, Bo CAO, Xunyu CHEN
  • Publication number: 20220334840
    Abstract: The present disclosure provides a data processing apparatus and related products. The products include a control module comprising an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions into a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue holds the operation instructions or computation instructions to be executed in queue order. This arrangement can improve the operation efficiency of related products when performing neural network model operations.
    Type: Application
    Filed: June 24, 2022
    Publication date: October 20, 2022
    Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli LIU, Bingrui WANG, Xiaoyong ZHOU, Yimin ZHUANG, Huiying LAN, Jun LIANG, Hongbo ZENG
  • Patent number: 11385895
    Abstract: The present disclosure provides a data processing apparatus and related products. The products include a control module comprising an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions into a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue holds the operation instructions or computation instructions to be executed in queue order. This arrangement can improve the operation efficiency of related products when performing neural network model operations.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: July 12, 2022
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Bingrui Wang, Xiaoyong Zhou, Yimin Zhuang, Huiying Lan, Jun Liang, Hongbo Zeng
  • Patent number: 11373084
    Abstract: Aspects of forward propagation in fully connected layers of a convolutional artificial neural network are described herein. The aspects may include multiple slave computation modules configured to calculate, in parallel, multiple groups of slave output values based on an input vector received via an interconnection unit. Further, the aspects may include a master computation module connected to the multiple slave computation modules via the interconnection unit, where the master computation module is configured to generate an output vector based on an intermediate result vector assembled from the slave output values.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: June 28, 2022
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Huiying Lan, Qi Guo, Yunji Chen, Tianshi Chen
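A small NumPy sketch of the master/slave organization in this entry's abstract. It assumes each slave computation module owns one block of output neurons (rows of the weight matrix) and that the master's final step is an activation function (ReLU here); both assumptions are illustrative, not claimed by the patent.

```python
import numpy as np

def fc_forward(x, W, b, num_slaves=4):
    """Fully connected forward pass split across `num_slaves` slave modules."""
    row_blocks = np.array_split(np.arange(W.shape[0]), num_slaves)
    # Slave computation modules: each computes its group of slave output
    # values for the broadcast input vector x (parallel in hardware).
    slave_outputs = [W[rows] @ x + b[rows] for rows in row_blocks]
    # Interconnection unit: gather the groups into the intermediate result vector.
    intermediate = np.concatenate(slave_outputs)
    # Master computation module: produce the output vector (ReLU stand-in).
    return np.maximum(intermediate, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)        # input vector
W = rng.standard_normal((16, 8))  # weights, one row per output neuron
b = rng.standard_normal(16)
print(fc_forward(x, W, b).shape)  # (16,)
```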
  • Publication number: 20220019439
    Abstract: The present disclosure provides a data processing apparatus and related products. The products include a control module comprising an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions into a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue holds the operation instructions or computation instructions to be executed in queue order. This arrangement can improve the operation efficiency of related products when performing neural network model operations.
    Type: Application
    Filed: September 29, 2021
    Publication date: January 20, 2022
    Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli LIU, Bingrui WANG, Xiaoyong ZHOU, Yimin ZHUANG, Huiying LAN, Jun LIANG, Hongbo ZENG
  • Publication number: 20210150325
    Abstract: The present disclosure provides a data processing method, an apparatus, and related products. The products include a control module comprising an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions into a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue holds the operation instructions or computation instructions to be executed in queue order. This arrangement can improve the operation efficiency of related products when performing neural network model operations.
    Type: Application
    Filed: December 29, 2020
    Publication date: May 20, 2021
    Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli LIU, Bingrui WANG, Xiaoyong ZHOU, Yimin ZHUANG, Huiying LAN, Jun LIANG
  • Patent number: 10860050
    Abstract: A nonlinear function operation device and method are provided. The device may include a table looking-up module and a linear fitting module. The table looking-up module may be configured to acquire a first address of a slope value k and a second address of an intercept value b based on a floating-point number. The linear fitting module may be configured to form the linear function y = k×x + b from the slope value k and the intercept value b, and substitute the floating-point number into it to calculate a function value, where the calculated value is taken as the value of the corresponding nonlinear function at that floating-point number.
    Type: Grant
    Filed: October 18, 2018
    Date of Patent: December 8, 2020
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Huiying Lan, Qi Guo, Yunji Chen, Tianshi Chen, Shangying Li, Zhen Li
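The table-lookup-plus-linear-fitting scheme in this entry's abstract amounts to piecewise-linear approximation: each table entry stores the slope k and intercept b of one segment, and y = k×x + b is evaluated on the segment the input falls in. A minimal Python sketch follows; the choice of sigmoid, the uniform 64-segment table, and deriving the table address from the input's numeric value (one plausible reading of "based on a floating-point number") are illustrative assumptions.

```python
import math

LO, HI, SEGMENTS = -8.0, 8.0, 64       # approximation range and table size
STEP = (HI - LO) / SEGMENTS

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))  # the nonlinear function to approximate

# Precompute slope/intercept of the chord over each segment.
TABLE = []
for i in range(SEGMENTS):
    x0, x1 = LO + i * STEP, LO + (i + 1) * STEP
    k = (sigmoid(x1) - sigmoid(x0)) / (x1 - x0)
    b = sigmoid(x0) - k * x0
    TABLE.append((k, b))

def nonlinear(x: float) -> float:
    # Table looking-up module: map the input to the addresses of k and b
    # (here a single index into a (k, b) table, clamped to the range).
    i = min(max(int((x - LO) / STEP), 0), SEGMENTS - 1)
    k, b = TABLE[i]
    # Linear fitting module: evaluate y = k*x + b on the chosen segment.
    return k * x + b

print(nonlinear(0.5), sigmoid(0.5))    # the two values agree closely
```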
  • Patent number: 10805233
    Abstract: A communication structure comprises: a central node, which is the communication data center of a network-on-chip and broadcasts or multicasts communication data to a plurality of leaf nodes; the plurality of leaf nodes, which are the communication data nodes of the network-on-chip and transmit the communication data to the central node; and forwarder modules, which connect the central node with the plurality of leaf nodes and forward the communication data. The leaf nodes are divided into N groups, each group having the same number of leaf nodes; the central node is individually in communication connection with each group of leaf nodes by means of the forwarder modules; the communication structure is a fractal-tree structure, and the structure constituted by each group of leaf nodes has self-similarity; and the forwarder modules comprise a central forwarder module, leaf forwarder modules, and intermediate forwarder modules.
    Type: Grant
    Filed: June 17, 2016
    Date of Patent: October 13, 2020
    Assignee: INSTITUTE OF COMPUTING TECHNOLOGY, CHINESE ACADEMY OF SCIENCES
    Inventors: Huiying Lan, Tao Luo, Shaoli Liu, Shijin Zhang, Yunji Chen
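A minimal Python sketch of the fractal-tree broadcast this entry's abstract describes: the central node's forwarder splits the leaf nodes into equal groups, and each group is served by a self-similar subtree of intermediate and leaf forwarders. The group count, tree depth, and class names are illustrative assumptions.

```python
class Forwarder:
    """Forwarder module: relays broadcast data down one subtree."""
    def __init__(self, children):
        self.children = children      # nested Forwarders or leaf-node ids

    def forward(self, data, received):
        for child in self.children:
            if isinstance(child, Forwarder):
                child.forward(data, received)   # intermediate forwarder
            else:
                received[child] = data          # leaf node receives the data

def build_fractal_tree(leaves, groups):
    """Self-similar construction: split `leaves` into `groups` equal groups,
    each served by a recursively built forwarder subtree."""
    if len(leaves) <= groups:
        return Forwarder(leaves)
    size = len(leaves) // groups
    return Forwarder([build_fractal_tree(leaves[i * size:(i + 1) * size], groups)
                      for i in range(groups)])

central = build_fractal_tree(list(range(16)), groups=4)  # central forwarder module
received = {}
central.forward("broadcast_payload", received)  # broadcast from the central node
assert sorted(received) == list(range(16))      # every leaf node got the data
```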
  • Publication number: 20190065934
    Abstract: Aspects of forward propagation in fully connected layers of a convolutional artificial neural network are described herein. The aspects may include multiple slave computation modules configured to calculate, in parallel, multiple groups of slave output values based on an input vector received via an interconnection unit. Further, the aspects may include a master computation module connected to the multiple slave computation modules via the interconnection unit, where the master computation module is configured to generate an output vector based on an intermediate result vector assembled from the slave output values.
    Type: Application
    Filed: October 29, 2018
    Publication date: February 28, 2019
    Inventors: Shaoli Liu, Huiying Lan, Qi Guo, Yunji Chen, Tianshi Chen
  • Publication number: 20190050369
    Abstract: A nonlinear function operation device and method are provided. The device may include a table looking-up module and a linear fitting module. The table looking-up module may be configured to acquire a first address of a slope value k and a second address of an intercept value b based on a floating-point number. The linear fitting module may be configured to form the linear function y = k×x + b from the slope value k and the intercept value b, and substitute the floating-point number into it to calculate a function value, where the calculated value is taken as the value of the corresponding nonlinear function at that floating-point number.
    Type: Application
    Filed: October 18, 2018
    Publication date: February 14, 2019
    Inventors: Huiying Lan, Qi Guo, Yunji Chen, Tianshi Chen, Shangying Li, Zhen Li
  • Publication number: 20180375789
    Abstract: A communication structure comprises: a central node, which is the communication data center of a network-on-chip and broadcasts or multicasts communication data to a plurality of leaf nodes; the plurality of leaf nodes, which are the communication data nodes of the network-on-chip and transmit the communication data to the central node; and forwarder modules, which connect the central node with the plurality of leaf nodes and forward the communication data. The leaf nodes are divided into N groups, each group having the same number of leaf nodes; the central node is individually in communication connection with each group of leaf nodes by means of the forwarder modules; the communication structure is a fractal-tree structure, and the structure constituted by each group of leaf nodes has self-similarity; and the forwarder modules comprise a central forwarder module, leaf forwarder modules, and intermediate forwarder modules.
    Type: Application
    Filed: June 17, 2016
    Publication date: December 27, 2018
    Inventors: Huiying LAN, Tao LUO, Shaoli LIU, Shijin ZHANG, Yunji CHEN