Patents by Inventor Yaolong Zhu

Yaolong Zhu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230196103
    Abstract: An embodiment of the present disclosure provides a weight precision configuration method, including: determining a pre-trained preset neural network including a plurality of layers each having a preset weight precision; reducing, based on a current threshold, the weight precision of at least one layer in the preset neural network to obtain a corrected neural network having a recognition rate greater than the current threshold, wherein reducing the weight precision of a layer includes: adjusting the weight precision of the layer; setting, if a termination condition is met, the weight precision of the layer to a corrected weight precision that is less than or equal to the preset weight precision of the layer; and returning, if the termination condition is not met, to the operation of adjusting the weight precision of the layer; and determining a final weight precision of each layer to obtain a final neural network.
    Type: Application
    Filed: July 8, 2021
    Publication date: June 22, 2023
    Inventors: Wei HE, Yaolong ZHU, Han LI
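The threshold-driven, per-layer reduction loop described in this abstract could be sketched roughly as follows. This is an illustrative sketch only: the `evaluate` function, the bit-width ladder in `PRECISIONS`, and the layer ordering are all assumptions, not the patented implementation.

```python
# Hedged sketch of per-layer weight precision reduction under an
# accuracy threshold. `evaluate` is a toy stand-in for measuring the
# recognition rate of the quantized network.

PRECISIONS = [32, 16, 8, 4]  # assumed candidate bit widths, high to low

def evaluate(precisions):
    """Toy accuracy model: recognition rate degrades as bits shrink."""
    return 0.5 + 0.5 * sum(precisions) / (32 * len(precisions))

def reduce_precision(preset, threshold):
    """Lower each layer's bit width while accuracy stays above threshold."""
    current = list(preset)
    for i in range(len(current)):
        for bits in PRECISIONS:
            if bits > current[i]:
                continue
            trial = list(current)
            trial[i] = bits
            if evaluate(trial) > threshold:
                current[i] = bits  # corrected weight precision accepted
            else:
                break              # termination condition: keep previous bits
    return current

final = reduce_precision([32, 32, 32], threshold=0.8)
```

With the toy accuracy model above, only the first layer can be reduced (to 4 bits) before the recognition rate would fall below the 0.8 threshold.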
  • Publication number: 20230112826
    Abstract: The present disclosure provides a signal transmission method and a signal transmission device, which are applied to a digital circuit including a plurality of circuit modules connected in series, and each circuit module is configured to perform corresponding operation processing based on a first clock signal provided by a first clock. The method includes: under driving of a second clock signal provided by a second clock, transmitting a first signal output by a current circuit module to a target circuit module in response to reception of the first signal, wherein the first signal is a signal output by the current circuit module when operating based on the first clock signal, transmission of the first signal is completed within a current clock cycle of the first clock, and a clock rate of the second clock is greater than that of the first clock.
    Type: Application
    Filed: September 7, 2021
    Publication date: April 13, 2023
    Inventors: Yangshu SHEN, Xiaohuan JIN, Tong SHANG, Yaolong ZHU
  • Publication number: 20230104942
    Abstract: The present disclosure provides a method for converting numerical values into spikes. The method includes: generating an initial spike sequence according to input numerical values, where the initial spike sequence includes at least one data string, each of which is independently selected from one of a consecutive spike train or a consecutive non-spike train, the number of spikes in all of the at least one data string is equal to an expected value of the number of spikes in a target spike sequence to be generated, and the target spike sequence to be generated is a spike sequence of a spiking neural network within a time period; and randomly selecting and modifying data from the initial spike sequence, so as to form the target spike sequence. The present disclosure further provides an apparatus for converting numerical values into spikes, an electronic device, and a computer-readable storage medium.
    Type: Application
    Filed: July 23, 2021
    Publication date: April 6, 2023
    Inventors: Zhenzhi WU, Wei HE, Luojun JIN, Yaolong ZHU
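The numerical-to-spike conversion in this abstract could be sketched as follows. The sketch is illustrative: mapping the input to a rate in [0, 1], the single spike-train/non-spike-train split, and using a shuffle as the "randomly select and modify" step are all assumptions, not the patented method.

```python
import random

def value_to_spikes(value, length, rng=None):
    """Hedged sketch: convert a value in [0, 1] into a binary spike
    sequence of `length` time steps whose spike count equals the
    expected count round(value * length)."""
    rng = rng or random.Random()
    n_spikes = round(value * length)  # expected number of spikes
    # Initial sequence: one consecutive spike train followed by one
    # consecutive non-spike train, matching the expected spike count.
    seq = [1] * n_spikes + [0] * (length - n_spikes)
    # Stand-in for "randomly selecting and modifying data": redistribute
    # the spikes across the window while preserving their count.
    rng.shuffle(seq)
    return seq

spikes = value_to_spikes(0.3, 10, random.Random(0))
```

Because the shuffle only permutes the sequence, the target spike sequence always carries exactly the expected number of spikes for the encoded value.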
  • Publication number: 20230099117
    Abstract: A computing core circuit, including: an encoding module, a route sending module, and a control module, wherein the control module is configured to control the encoding module to perform encoding processing on a pulse sequence determined by pulses to be transmitted of at least one neuron in a current computing core, so as to obtain an encoded pulse sequence, and control the route sending module to determine a corresponding route packet according to the encoded pulse sequence, so as to send the route packet. The present disclosure further provides a data processing method, a chip, a board, an electronic device, and a computer-readable storage medium.
    Type: Application
    Filed: April 22, 2021
    Publication date: March 30, 2023
    Inventors: Zhenzhi WU, Yaolong ZHU, Luojun JIN, Wei HE, Qikun ZHANG
  • Publication number: 20230089320
    Abstract: The disclosed method is applicable to a many-core system. The method includes: acquiring multiple pieces of routing information, each of which includes two logical nodes and a data transmission amount between the two logical nodes; determining a piece of unprocessed routing information with a maximum data transmission amount as current routing information; mapping each unlocked logical node of the current routing information to one unlocked processing node, and locking the mapped logical node and processing node, wherein if there is an unlocked edge processing node, the unlocked logical node is mapped to the unlocked edge processing node; and returning, if there is at least one unlocked logical node, to the step of determining the piece of unprocessed routing information with the maximum data transmission amount as the current routing information.
    Type: Application
    Filed: August 17, 2021
    Publication date: March 23, 2023
    Inventors: Wei HE, Yangshu SHEN, Yaolong ZHU
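The greedy mapping procedure in this abstract could be sketched as follows. The node pools, route-tuple layout, and first-free tie-breaking are illustrative assumptions; only the ordering by data transmission amount, the locking, and the edge-node preference come from the abstract.

```python
def map_logical_to_processing(routes, edge_nodes, inner_nodes):
    """Hedged sketch of the greedy mapping: process routing entries in
    descending order of data transmission amount, placing each
    not-yet-mapped logical node onto a free processing node and locking
    both, preferring edge processing nodes while any remain."""
    free_edge = list(edge_nodes)    # unlocked edge processing nodes
    free_inner = list(inner_nodes)  # unlocked non-edge processing nodes
    mapping = {}                    # logical node -> processing node (locked)
    for a, b, _amount in sorted(routes, key=lambda r: -r[2]):
        for logical in (a, b):
            if logical in mapping:
                continue            # already mapped and locked
            pool = free_edge if free_edge else free_inner
            mapping[logical] = pool.pop(0)   # map and lock
    return mapping

routes = [("n1", "n2", 100), ("n2", "n3", 300), ("n3", "n4", 50)]
m = map_logical_to_processing(routes, ["E0", "E1"], ["P0", "P1"])
```

In this example the heaviest route (n2, n3) is placed first, so its endpoints claim the edge nodes; the remaining logical nodes fall back to inner nodes.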
  • Publication number: 20230042187
    Abstract: A behavior recognition method and system, including: dividing video data into a plurality of video clips, performing frame extraction processing on each video clip to obtain frame images, and performing optical flow extraction on the frame images to obtain optical flow images; performing feature extraction on the frame images and the optical flow images to obtain feature maps of the frame images and the optical flow images; performing spatio-temporal convolution processing on the feature maps of the frame images and the optical flow images, and determining a spatial prediction result and a temporal prediction result; fusing the spatial prediction results of all the video clips to obtain a spatial fusion result, and fusing the temporal prediction results of all the video clips to obtain a temporal fusion result; and performing two-stream fusion on the spatial fusion result and the temporal fusion result to obtain a behavior recognition result.
    Type: Application
    Filed: March 8, 2021
    Publication date: February 9, 2023
    Inventors: Zhenzhi WU, Yaolong ZHU
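The clip-level fusion and final two-stream fusion in this abstract could be sketched as follows. The averaging operator and the weight `w` are illustrative assumptions; the abstract does not specify which fusion operators are used.

```python
def fuse(predictions):
    """Average a list of per-clip prediction vectors (illustrative fusion)."""
    n = len(predictions)
    return [sum(p[i] for p in predictions) / n
            for i in range(len(predictions[0]))]

def two_stream_result(spatial_preds, temporal_preds, w=0.5):
    """Hedged sketch: fuse spatial predictions across clips, fuse
    temporal predictions across clips, then combine the two streams
    with an assumed weighted sum."""
    spatial = fuse(spatial_preds)    # spatial fusion result
    temporal = fuse(temporal_preds)  # temporal fusion result
    return [w * s + (1 - w) * t for s, t in zip(spatial, temporal)]

# Two clips, two behavior classes per stream.
out = two_stream_result([[0.8, 0.2], [0.6, 0.4]],
                        [[0.5, 0.5], [0.7, 0.3]])
```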
  • Publication number: 20230040375
    Abstract: Disclosed are a network accuracy quantification method, system, and device, an electronic device and a readable medium, which are applicable to a many-core chip. The method includes: determining a reference accuracy according to a total core resource number of the many-core chip and the number of core resources required by each network to be quantified, with the number of the core resources required by each network to be quantified being the number of the core resources which is determined after each network to be quantified is quantified; and determining a target accuracy corresponding to each network to be quantified according to the reference accuracy and the total core resource number of the many-core chip.
    Type: Application
    Filed: June 9, 2021
    Publication date: February 9, 2023
    Inventors: Fanhui MENG, Chuan HU, Han LI, Xinyang WU, Yaolong ZHU
  • Publication number: 20220417169
    Abstract: Provided in the present disclosure are a data processing method and apparatus, and an electronic device. The method includes: determining a plurality of candidate data pieces, where the candidate data pieces are provided by corresponding data sources; and determining a target data piece based on priorities of the data sources corresponding to the plurality of candidate data pieces in a current cycle, wherein a same data source has different priorities in different processing cycles, and priority sequence numbers of a same data source in different processing cycles satisfy a nonlinear relationship.
    Type: Application
    Filed: December 9, 2020
    Publication date: December 29, 2022
    Inventors: Yangshu SHEN, Yaolong ZHU
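The rotating, nonlinear priority arbitration in this abstract could be sketched as follows. The quadratic stride is an illustrative choice of nonlinear relationship; the abstract only states that a source's priority sequence numbers vary nonlinearly across cycles.

```python
def priority_of(source_index, cycle, n_sources):
    """Hedged sketch: an assumed nonlinear per-cycle priority rotation
    (quadratic in the cycle number, wrapped over the sources)."""
    return (source_index + cycle * cycle) % n_sources

def select_target(candidates, cycle):
    """Pick the candidate whose source has the lowest priority number
    (i.e. highest priority) in the current cycle."""
    n = len(candidates)
    best = min(range(n), key=lambda i: priority_of(i, cycle, n))
    return candidates[best]

# Over successive cycles, different sources win arbitration.
winners = [select_target(["a", "b", "c"], cycle) for cycle in range(3)]
```

Because the rotation is nonlinear, the winning source does not simply advance round-robin from cycle to cycle.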
  • Publication number: 20220327391
    Abstract: Disclosed are a global pooling method for a neural network and a many-core system. The global pooling method for a neural network includes: receiving point data of to-be-processed data sequentially input by a previous network layer; and performing a preset pooling operation on the received point data after each piece of point data is received until the pooling operations of all the point data of the to-be-processed data are completed.
    Type: Application
    Filed: July 30, 2020
    Publication date: October 13, 2022
    Inventors: Haitao QI, Han LI, Yaolong ZHU
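The streaming pooling scheme in this abstract could be sketched as follows. Folding each incoming point into a running result as it arrives means the whole feature map never has to be buffered. Supporting both max and average as the "preset pooling operation" is an illustrative assumption.

```python
class StreamingGlobalPool:
    """Hedged sketch of streaming global pooling: each point of the
    to-be-processed data updates a running result immediately on
    receipt, until all points have been consumed."""

    def __init__(self, mode="max"):
        self.mode = mode
        self.acc = None
        self.count = 0

    def feed(self, x):
        """Fold one incoming point into the running result."""
        self.count += 1
        if self.acc is None:
            self.acc = x
        elif self.mode == "max":
            self.acc = max(self.acc, x)
        else:  # "avg": accumulate the sum, divide on demand
            self.acc += x

    def result(self):
        if self.mode == "avg":
            return self.acc / self.count
        return self.acc

pool = StreamingGlobalPool("max")
for v in [0.2, 0.9, 0.4]:
    pool.feed(v)
# pool.result() == 0.9
```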
  • Publication number: 20220318608
    Abstract: A neural network mapping method and a neural network mapping apparatus are provided. The method includes: mapping a calculation task for a preset feature map of each network layer in a plurality of network layers in a convolutional neural network to at least one processing element of a chip; acquiring the number of Phases needed by a plurality of processing elements in the chip for completing the calculation tasks, and performing a first stage of balancing on the number of Phases of the plurality of processing elements; and based on the number of the Phases of the plurality of processing elements obtained after the first stage of balancing, mapping the calculation task for the preset feature map of each network layer in the plurality of network layers in the convolutional neural network to at least one processing element of the chip subjected to the first stage of balancing.
    Type: Application
    Filed: October 27, 2020
    Publication date: October 6, 2022
    Inventors: Weihao ZHANG, Han LI, Chuan HU, Yaolong ZHU
  • Patent number: 11461626
    Abstract: The present disclosure provides a brain-like computing chip and a computing device. The brain-like computing chip is a many-core system composed of one or more functional cores, and data transmission is performed between the functional cores by means of a network-on-chip. The functional core includes at least one neuron processor configured to compute various neuron models, and at least one coprocessor coupled to the neuron processor and configured to perform an integral operation and/or a multiply-add-type operation; and the neuron processor is capable of calling the coprocessor to perform the multiply-add-type operation.
    Type: Grant
    Filed: January 15, 2020
    Date of Patent: October 4, 2022
    Assignee: LYNXI TECHNOLOGIES CO., LTD.
    Inventors: Zhenzhi Wu, Guanrui Wang, Luping Shi, Yaolong Zhu
  • Patent number: 11455108
    Abstract: The present application provides a method and a device for controlling a storage format of an on-chip storage resource. The method includes: while mapping a neural network model to a many-core system, generating an on-chip storage resource of each processing core in the many-core system, and storing the on-chip storage resource into a specified file; and parsing out a storage format of the on-chip storage resource based on the specified file, obtaining occupied storage space of each processing core, and adjusting the storage format of the on-chip storage resource of each processing core based on the occupied storage space of each processing core.
    Type: Grant
    Filed: November 20, 2020
    Date of Patent: September 27, 2022
    Assignee: LYNXI TECHNOLOGIES CO., LTD.
    Inventors: Ruiqiang Ding, Han Li, Chuan Hu, Feng Wang, Fanhui Meng, Yaolong Zhu
  • Publication number: 20220269430
    Abstract: The present application provides a method and a device for controlling a storage format of an on-chip storage resource. The method includes: while mapping a neural network model to a many-core system, generating an on-chip storage resource of each processing core in the many-core system, and storing the on-chip storage resource into a specified file; and parsing out a storage format of the on-chip storage resource based on the specified file, obtaining occupied storage space of each processing core, and adjusting the storage format of the on-chip storage resource of each processing core based on the occupied storage space of each processing core.
    Type: Application
    Filed: November 20, 2020
    Publication date: August 25, 2022
    Inventors: Ruiqiang DING, Han LI, Chuan HU, Feng WANG, Fanhui MENG, Yaolong ZHU
  • Patent number: 11398981
    Abstract: Provided are a path creation method and device for a network on chip and electronic apparatus. The method includes: receiving, by a second network node, a first data packet sent by a first network node, the first data packet carrying first idle address identification information, a destination network node address and path creation identification information, the first idle address identification information indicating a first idle position in a first path routing table of the first network node, the path creation identification information indicating a creation of a path; storing, by the second network node, the first idle address identification information in a second idle position in its second path routing table, and determining second idle address identification information; determining, by the second network node, a second data packet according to the second idle address identification information; and sending, by the second network node, the second data packet.
    Type: Grant
    Filed: November 28, 2019
    Date of Patent: July 26, 2022
    Assignee: LYNXI TECHNOLOGIES CO., LTD.
    Inventors: Yangshu Shen, Luping Shi, Yaolong Zhu
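The path-creation handshake in this abstract could be sketched as follows. Each node records the upstream node's idle-slot index in a free slot of its own path routing table and forwards its own slot index downstream. The table size, the packet layout, and first-free slot selection are illustrative assumptions.

```python
class NetworkNode:
    """Hedged sketch of a network-on-chip node taking part in the
    path-creation handshake described in the abstract."""

    def __init__(self, name, table_size=4):
        self.name = name
        self.table = [None] * table_size   # path routing table

    def handle_create(self, packet):
        """Consume a path-creation packet: store the upstream idle
        address identification in this node's next idle position, and
        return the packet to forward toward the destination."""
        slot = self.table.index(None)           # this node's idle position
        self.table[slot] = packet["idle_slot"]  # remember upstream slot
        return {"idle_slot": slot,              # this node's idle address id
                "dest": packet["dest"],
                "create": True}

a = NetworkNode("A")
b = NetworkNode("B")
pkt = {"idle_slot": 0, "dest": "C", "create": True}  # sent by node A
fwd = b.handle_create(pkt)
```

After the handshake, node B's routing table links its slot back to A's slot, so later data packets can carry only the short slot index instead of a full destination address.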
  • Patent number: 11392426
    Abstract: Embodiments of the present disclosure provide a multitask parallel processing method and apparatus, a computer device and a storage medium. The method is applied to a neural network consisting of a plurality of nodes, the neural network including at least one closed-loop path, and the method includes: inputting a data sequence to be computed into the neural network in a form of data packets, each of the data packets including multiple pieces of data; and computing, by the nodes in the closed-loop path, all the data in a currently received data packet each time a computation flow is started.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: July 19, 2022
    Assignee: LYNXI TECHNOLOGIES CO., LTD.
    Inventors: Yangshu Shen, Yaolong Zhu, Wei He, Luping Shi
  • Publication number: 20220138542
    Abstract: The present disclosure provides a brain-like computing chip and a computing device. The brain-like computing chip is a many-core system composed of one or more functional cores, and data transmission is performed between the functional cores by means of a network-on-chip. The functional core includes at least one neuron processor configured to compute various neuron models, and at least one coprocessor coupled to the neuron processor and configured to perform an integral operation and/or a multiply-add-type operation; and the neuron processor is capable of calling the coprocessor to perform the multiply-add-type operation.
    Type: Application
    Filed: January 15, 2020
    Publication date: May 5, 2022
    Inventors: Zhenzhi WU, Guanrui WANG, Luping SHI, Yaolong ZHU
  • Publication number: 20220091906
    Abstract: Embodiments of the present disclosure provide a multitask parallel processing method and apparatus, a computer device and a storage medium. The method is applied to a neural network consisting of a plurality of nodes, the neural network including at least one closed-loop path, and the method includes: inputting a data sequence to be computed into the neural network in a form of data packets, each of the data packets including multiple pieces of data; and computing, by the nodes in the closed-loop path, all the data in a currently received data packet each time a computation flow is started.
    Type: Application
    Filed: December 21, 2020
    Publication date: March 24, 2022
    Inventors: Yangshu SHEN, Yaolong ZHU, Wei HE, Luping SHI
  • Publication number: 20220083498
    Abstract: The present disclosure provides a data transmission method and device for a network on chip and an electronic apparatus. The method includes: receiving, by a second network node, a first data packet sent by a first network node, the first data packet including first identification information and a data packet payload; determining, by the second network node, valid transmission information and second identification information corresponding to the valid transmission information according to the first identification information; determining, by the second network node, a second data packet according to the second identification information and the data packet payload; and sending, by the second network node, the second data packet according to the valid transmission information.
    Type: Application
    Filed: November 28, 2019
    Publication date: March 17, 2022
    Inventors: Yangshu SHEN, Luping SHI, Yaolong ZHU
  • Publication number: 20220045948
    Abstract: Provided are a path creation method and device for a network on chip and electronic apparatus. The method includes: receiving, by a second network node, a first data packet sent by a first network node, the first data packet carrying first idle address identification information, a destination network node address and path creation identification information, the first idle address identification information indicating a first idle position in a first path routing table of the first network node, the path creation identification information indicating a creation of a path; storing, by the second network node, the first idle address identification information in a second idle position in its second path routing table, and determining second idle address identification information; determining, by the second network node, a second data packet according to the second idle address identification information; and sending, by the second network node, the second data packet.
    Type: Application
    Filed: November 28, 2019
    Publication date: February 10, 2022
    Inventors: Yangshu SHEN, Luping SHI, Yaolong ZHU
  • Patent number: 8145873
    Abstract: A data management method for a network storage system, wherein the network storage system includes a storage network, a cluster of storage servers that provide data storage services for application servers connected to the storage network, and storage space corresponding to each storage server. The method includes: setting a core manager in a storage server, the core manager centralizing the metadata of all storage servers in a common storage space; separating the metadata from the storage servers so that one storage server becomes a storage manager and the storage spaces corresponding to each storage server form the common storage space; allocating the storage space for metadata in the common storage space; and managing the correspondence between the metadata and the storage manager.
    Type: Grant
    Filed: February 23, 2006
    Date of Patent: March 27, 2012
    Inventors: Yaolong Zhu, Hui Xiong, Jie Yan