Patents by Inventor Shaoli Liu
Shaoli Liu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11971836
Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
Type: Grant
Filed: December 29, 2021
Date of Patent: April 30, 2024
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
Inventors: Yao Zhang, Shaoli Liu, Jun Liang, Yu Chen
-
Patent number: 11966583
Abstract: The present disclosure provides a data pre-processing method and device and related computer device and storage medium. By storing the target output data corresponding to the target operation into the first memory close to the processor and reducing the time of reading the target output data, the occupation time of I/O read operations during the operation process can be reduced, and the speed and efficiency of the processor can be improved.
Type: Grant
Filed: June 27, 2019
Date of Patent: April 23, 2024
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Xiaofu Meng
-
Publication number: 20240126548
Abstract: The present disclosure relates to a data processing method, a data processing apparatus, and related products. The data processing apparatus includes an address determining unit and a data storage unit. The address determining unit is configured to determine a source data address and a plurality of discrete destination data addresses of data corresponding to a processing instruction when the decoded processing instruction is a discrete store instruction, where the source data address may include continuous data addresses.
Type: Application
Filed: April 28, 2021
Publication date: April 18, 2024
Inventors: Xuyan MA, Jianhua WU, Shaoli LIU, Xiangxuan GE, Hanbo LIU, Lei ZHANG
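The discrete-store behavior described in this abstract can be sketched as a scatter operation: values are read from a contiguous source region and written to a list of discrete destination offsets. This is purely an illustrative sketch; the function and parameter names are assumptions, and the patent does not publish an API.

```python
def discrete_store(memory, src_base, count, dest_offsets):
    """Copy `count` contiguous values starting at src_base to the
    discrete offsets in dest_offsets (a scatter-style store).

    Hypothetical sketch only; names and semantics are illustrative,
    not taken from the patent.
    """
    assert len(dest_offsets) == count
    values = memory[src_base:src_base + count]   # one contiguous read
    for value, offset in zip(values, dest_offsets):
        memory[offset] = value                   # discrete writes
    return memory
```

For example, with a 20-cell memory, `discrete_store(mem, 0, 3, [10, 15, 18])` copies the first three cells to positions 10, 15, and 18.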
-
Publication number: 20240126553
Abstract: A data processing method and apparatus, and a related product. The data processing method comprises: when a decoded processing instruction is a vector extension instruction, determining a source data address, a destination data address and an extension parameter of data corresponding to the processing instruction; according to the extension parameter, extending first vector data of the source data address, so as to obtain extended second vector data; storing the second vector data to the destination data address, wherein the source data address and the destination data address comprise consecutive data addresses. Vector extension and storage are implemented by means of an extension parameter in a vector extension instruction, so as to obtain extended vector data, thereby simplifying processing, and reducing data overhead.
Type: Application
Filed: April 28, 2021
Publication date: April 18, 2024
Inventors: Xuyan MA, Jianhua WU, Shaoli LIU, Xiangxuan GE, Hanbo LIU, Lei ZHANG
-
Patent number: 11960431
Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system, and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
Type: Grant
Filed: December 29, 2021
Date of Patent: April 16, 2024
Assignee: GUANGZHOU UNIVERSITY
Inventors: Shaoli Liu, Zhen Li, Yao Zhang
-
Patent number: 11959160
Abstract: Disclosed is a copper-niobium alloy for a medical biopsy puncture needle. A needle core and/or needle tube of the puncture needle are/is made of the copper-niobium alloy. The copper-niobium alloy includes the following components by mass: 5 ≤ Nb ≤ 15 and the balance of Cu. A copper alloy with designed components is obtained by combining the diamagnetic material Cu with paramagnetic Nb. Compared with existing medical stainless steel and titanium alloys, the copper alloy has greatly reduced magnetic susceptibility, and the artifact area and volume are also significantly reduced. In addition, this fills a gap in the use of copper alloys in medical biopsy puncture.
Type: Grant
Filed: July 8, 2022
Date of Patent: April 16, 2024
Assignee: University of Shanghai for Science and Technology
Inventors: Xiaohong Chen, Xiaofei Liang, Honglei Zhou, Jian Zhao, Ping Liu, Shaoli Fu
-
Publication number: 20240111536
Abstract: The present disclosure provides a data processing apparatus and related products. The products include a control module including an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue includes a plurality of operation instructions or computation instructions to be executed in the sequence of the queue. By adopting the above-mentioned method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
Type: Application
Filed: December 7, 2023
Publication date: April 4, 2024
Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Bingrui Wang, Xiaoyong ZHOU, Yimin ZHUANG, Huiying LAN, Jun LIANG, Hongbo ZENG
-
Patent number: 11934337
Abstract: An electronic device includes a CPU, an acceleration module, and a memory. The acceleration module is communicatively connected with the CPU, and includes chips. The chip according to an embodiment includes a data bus, and a memory, a data receiver, a computing and processing unit, and a data transmitter connected to the data bus. The data receiver receives first data and header information from outside, writes the first data to a corresponding area of the memory through the data bus, and configures a corresponding computing and processing unit and/or data transmitter according to the header information. The computing and processing unit receives first task information, performs an operation processing according to the first task information and a configuration operation on the data transmitter. The data transmitter obtains second task information and second data, and outputs third data to outside based on at least part of the second data.
Type: Grant
Filed: August 31, 2020
Date of Patent: March 19, 2024
Assignee: ANHUI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
Inventors: Yao Zhang, Shaoli Liu, Dong Han
-
Patent number: 11934940
Abstract: The present disclosure discloses a data processing method and related products, in which the data processing method includes: generating, by a general-purpose processor, a binary instruction according to device information of an AI processor, and generating an AI learning task according to the binary instruction; transmitting, by the general-purpose processor, the AI learning task to the cloud AI processor for running; receiving, by the general-purpose processor, a running result corresponding to the AI learning task; and determining, by the general-purpose processor, an offline running file according to the running result, where the offline running file is generated according to the device information of the AI processor and the binary instruction when the running result satisfies a preset requirement. By implementing the present disclosure, the debugging between the AI algorithm model and the AI processor can be achieved in advance.
Type: Grant
Filed: December 19, 2019
Date of Patent: March 19, 2024
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Yao Zhang, Xiaofu Meng, Shaoli Liu
-
Patent number: 11922132
Abstract: Disclosed are an information processing method and a terminal device. The method comprises: acquiring first information, wherein the first information is information to be processed by a terminal device; calling an operation instruction in a calculation apparatus to calculate the first information so as to obtain second information; and outputting the second information. By means of the examples in the present disclosure, a calculation apparatus of a terminal device can be used to call an operation instruction to process first information, so as to output second information of a target desired by a user, thereby improving the information processing efficiency. The present technical solution has advantages of a fast computation speed and high efficiency.
Type: Grant
Filed: December 11, 2020
Date of Patent: March 5, 2024
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
Inventors: Tianshi Chen, Shaoli Liu, Zai Wang, Shuai Hu
-
Publication number: 20240066182
Abstract: The present disclosure provides a fiber membrane and a preparation method and use thereof, and belongs to the field of biological materials. The fiber membrane includes a fiber with a core-shell structure, where a core of the fiber includes simvastatin and a first spinnable polymer, and a shell of the fiber includes hydroxyapatite and a second spinnable polymer. In the fiber, a release of the simvastatin mainly depends on a rate of water invasion. After water invades the fiber, the simvastatin in the fiber leaves the fiber with the diffusion of water. In the present disclosure, a barrier function of the shell prevents moisture from entering the core. Therefore, the simvastatin in the core cannot leave the fiber with the diffusion of water molecules in an early stage, and a release rate of drugs is slowed down in the early stage, thereby controlling sustained release of the drugs.
Type: Application
Filed: November 10, 2022
Publication date: February 29, 2024
Applicant: SHANGHAI RUIZHIKANG MEDICAL TECHNOLOGY CO., LTD.
Inventors: Xiaohong CHEN, Yubo LIU, Honglei ZHOU, Wei LI, Fengcang MA, Shaoli FU, Guosen SHAO, Haochen WU
-
Publication number: 20240066177
Abstract: The present disclosure provides a hydrophilic fiber membrane with a sustained-release drug, and belongs to the field of biological materials. The present disclosure provides a hydrophilic fiber membrane with a sustained-release drug, including a fiber with a core-shell structure, where a core of the fiber includes curcumin and a first spinnable polymer, and a shell of the fiber includes polyethylene glycol and a second spinnable polymer; and in the shell, the polyethylene glycol and the second spinnable polymer have a mass ratio of not greater than 1:12. In the present disclosure, the fiber membrane has a core-shell structure, the curcumin is provided in the core, and the shell prevents a rapid release of the curcumin in an early stage, thereby delaying a release rate of the curcumin to prevent the drug from forming a burst release in the early stage and causing toxic reactions.
Type: Application
Filed: November 11, 2022
Publication date: February 29, 2024
Applicant: SHANGHAI RUIZHIKANG MEDICAL TECHNOLOGY CO., LTD.
Inventors: Xiaohong CHEN, Yubo LIU, Honglei ZHOU, Wei LI, Fengcang MA, Shaoli FU, Guosen SHAO, Haochen WU
-
Patent number: 11907844
Abstract: The present disclosure provides a processing device including: a coarse-grained pruning unit configured to perform coarse-grained pruning on a weight of a neural network to obtain a pruned weight; and an operation unit configured to train the neural network according to the pruned weight. The coarse-grained pruning unit is specifically configured to select M weights from the weights of the neural network through a sliding window, and when the M weights meet a preset condition, all or part of the M weights may be set to 0. The processing device can reduce memory accesses while reducing the amount of computation, thereby obtaining an acceleration ratio and reducing energy consumption.
Type: Grant
Filed: November 28, 2019
Date of Patent: February 20, 2024
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
Inventors: Zidong Du, Xuda Zhou, Shaoli Liu, Tianshi Chen
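The sliding-window pruning described in this abstract can be sketched in a few lines: select M weights at a time and zero the group when it meets a preset condition. The abstract does not specify the condition; the L2-norm threshold below (and the non-overlapping window stride) is an assumption chosen only to make the sketch concrete.

```python
import numpy as np

def coarse_grained_prune(weights, m, threshold):
    """Zero each window of m weights whose L2 norm is below threshold.

    Illustrative sketch of the abstract's coarse-grained pruning; the
    L2-norm condition and window stride are assumptions, not taken
    from the patent.
    """
    pruned = weights.copy()
    for start in range(0, len(pruned) - m + 1, m):  # slide window by m
        window = pruned[start:start + m]            # select M weights
        if np.linalg.norm(window) < threshold:      # preset condition
            pruned[start:start + m] = 0.0           # set the M weights to 0
    return pruned
```

Zeroing whole windows rather than individual weights is what makes the pruning "coarse-grained": the resulting block sparsity maps onto hardware more cheaply than element-wise sparsity.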
-
Publication number: 20240053988
Abstract: Provided are a data processing method and device, and a related product. The method comprises: when a decoded processing instruction is a data transfer instruction, determining a source data address and a destination data address of data corresponding to the processing instruction; and storing data read from the source data address to the destination data address to obtain vector data, wherein the source data address comprises multiple discrete data addresses, and the destination data address comprises continuous data addresses. By means of the method, the processing process can be simplified and the data overhead can be reduced.
Type: Application
Filed: April 28, 2021
Publication date: February 15, 2024
Inventors: Xuyan MA, Jianhua WU, Shaoli LIU, Xiangxuan GE, Hanbo LIU, Lei ZHANG
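This transfer is the mirror image of a discrete store: a gather, where values at discrete source offsets are collected into one contiguous destination region to form vector data. The sketch below is illustrative only; all names are assumptions, not an API from the patent.

```python
def gather_transfer(memory, src_offsets, dest_base):
    """Read from discrete src_offsets and store the values contiguously
    starting at dest_base, returning the resulting vector data.

    Hypothetical sketch of the abstract's data transfer instruction.
    """
    vector = [memory[off] for off in src_offsets]       # discrete reads
    memory[dest_base:dest_base + len(vector)] = vector  # contiguous write
    return vector
```

After the gather, downstream vector operations can consume the data with a single contiguous read instead of many scattered ones, which is the overhead reduction the abstract claims.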
-
Publication number: 20240054012
Abstract: The present disclosure provides a circuit, method and system for inter-chip communication. The method is implemented in a computation apparatus, where the computation apparatus is included in a combined processing apparatus, and the combined processing apparatus includes a general interconnection interface and other processing apparatus. The computation apparatus interacts with other processing apparatus to jointly complete a computation operation specified by a user. The combined processing apparatus also includes a storage apparatus. The storage apparatus is respectively connected to the computation apparatus and other processing apparatus and is used for storing data of the computation apparatus and other processing apparatus.
Type: Application
Filed: December 30, 2021
Publication date: February 15, 2024
Inventors: Yingnan ZHANG, Qinglong CHAI, Lu CHAO, Yao ZHANG, Shaoli LIU, Jun LIANG
-
Patent number: 11900241
Abstract: Provided are an integrated circuit chip apparatus and a related product, the integrated circuit chip apparatus being used for executing a multiplication operation, a convolution operation or a training operation of a neural network. The present technical solution has the advantages of a small amount of calculation and low power consumption.
Type: Grant
Filed: March 7, 2022
Date of Patent: February 13, 2024
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Xinkai Song, Bingrui Wang, Yao Zhang, Shuai Hu
-
Patent number: 11900242
Abstract: Provided are an integrated circuit chip apparatus and a related product, the integrated circuit chip apparatus being used for executing a multiplication operation, a convolution operation or a training operation of a neural network. The present technical solution has the advantages of a small amount of calculation and low power consumption.
Type: Grant
Filed: March 7, 2022
Date of Patent: February 13, 2024
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Xinkai Song, Bingrui Wang, Yao Zhang, Shuai Hu
-
Patent number: 11886880
Abstract: The present disclosure provides a data processing apparatus and related products. The products include a control module including an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue includes a plurality of operation instructions or computation instructions to be executed in the sequence of the queue. By adopting the above-mentioned method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
Type: Grant
Filed: June 24, 2022
Date of Patent: January 30, 2024
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Bingrui Wang, Xiaoyong Zhou, Yimin Zhuang, Huiying Lan, Jun Liang, Hongbo Zeng
-
Publication number: 20240028334
Abstract: A data processing method includes obtaining content of a descriptor when an operand of a first processing instruction includes the descriptor, where the descriptor is configured to indicate a shape of tensor data and to indicate the data address of the tensor data, and executing the first processing instruction according to the content of the descriptor by determining the data address of the tensor data corresponding to the operand of the first processing instruction in a data storage space according to the content of the descriptor, and, according to the data address, executing data processing corresponding to the first processing instruction.
Type: Application
Filed: September 28, 2023
Publication date: January 25, 2024
Inventors: Shaoli LIU, Bingrui WANG, Jun LIANG
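The descriptor idea can be made concrete with a small sketch: a record holding the tensor's shape and base data address, from which the address of any element is derived. The field names and the dense row-major layout below are assumptions for illustration; the patent does not specify the descriptor's encoding.

```python
from dataclasses import dataclass

@dataclass
class Descriptor:
    """Illustrative descriptor: shape plus base data address.

    Field names and layout are assumptions, not taken from the patent.
    """
    base_address: int   # start of the tensor in the data storage space
    shape: tuple        # e.g. (rows, cols)

def element_address(desc, indices):
    """Compute a flat data address from multi-dimensional indices,
    assuming a dense row-major layout."""
    addr, stride = desc.base_address, 1
    for dim, idx in zip(reversed(desc.shape), reversed(indices)):
        assert 0 <= idx < dim       # bounds check against the shape
        addr += idx * stride
        stride *= dim
    return addr
```

For a 4x8 tensor based at address 1000, element (2, 3) resolves to 1000 + 2*8 + 3 = 1019, so an instruction carrying only the descriptor and indices can locate the data without an explicit per-element address in the operand.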
-
Patent number: 11880328
Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system, and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
Type: Grant
Filed: December 29, 2021
Date of Patent: January 23, 2024
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
Inventors: Yao Zhang, Shaoli Liu, Jun Liang, Yu Chen