Patents by Inventor Xiaoqian Zhang
Xiaoqian Zhang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12117961
Abstract: This application describes a network-on-chip (NoC) system on a hardware accelerator for accelerating neural network computations. An example NoC system in the NN accelerator may include interconnected routers with routing control circuits and cores respectively coupled to the routers. The cores are arranged into a matrix. Each row of cores is connected with a first uni-directional ring-shaped data link, and every two adjacent data links are in opposite directions. Each column of cores is connected with a second uni-directional ring-shaped data link, and every two adjacent data links are in opposite directions. In a given router of the plurality of routers, the routing control circuit is configured to: receive a data package; convert physical addresses of the given router and the target router into logical addresses; determine a routing port of the given router based on the logical addresses; and output the data package through the routing port.
Type: Grant
Filed: May 15, 2023
Date of Patent: October 15, 2024
Assignee: Moffett International Co., Limited
Inventors: Xiaoqian Zhang, Zhibin Xiao
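To make the routing rule in this abstract concrete, here is a minimal Python sketch of how a router on alternating uni-directional row/column rings might choose an output port. The 4×4 grid size, the port names, and the row-first ordering are assumptions for illustration; the patent works on logical addresses derived from the physical ones, which this sketch approximates with simple row/column parity.

```python
# A minimal sketch (not the patented method) of port selection on a mesh of
# alternating uni-directional row and column rings.

ROWS, COLS = 4, 4  # assumed 4x4 core matrix

def route_port(cur, target):
    """Return the next output port for a packet at `cur` headed to `target`.

    Each row is a uni-directional ring; even rows are assumed to flow "east",
    odd rows "west" (adjacent rings run in opposite directions). Columns work
    the same way with "south"/"north".
    """
    r, c = cur
    tr, tc = target
    if (r, c) == (tr, tc):
        return "LOCAL"          # packet has arrived; deliver to the core
    if c != tc:
        # Travel along the row ring first; the ring direction fixes the port.
        return "EAST" if r % 2 == 0 else "WEST"
    # Column index already matches: travel along the column ring.
    return "SOUTH" if c % 2 == 0 else "NORTH"

if __name__ == "__main__":
    print(route_port((0, 0), (2, 3)))  # -> "EAST"  (move along row 0's ring)
    print(route_port((0, 3), (2, 3)))  # -> "NORTH" (column 3's ring flows north)
```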
-
Publication number: 20240338339
Abstract: This application describes a hardware accelerator and a device for accelerating neural network computations. An example accelerator may include multiple cores and a central processing unit (CPU) respectively associated with DDRs, a data exchange interface connecting a host device to the accelerator, and a three-layer network-on-chip (NoC) architecture. The three-layer NoC architecture includes an outer-layer NoC configured to transfer data between the host device and the DDRs, a middle-layer NoC configured to transfer data among the cores, and an inner-layer NoC within each core that includes a cross-bar network for broadcasting weights and activations of neural networks from a global buffer of the core to a plurality of processing entity (PE) clusters within the core.
Type: Application
Filed: October 23, 2023
Publication date: October 10, 2024
Inventors: Xiaoqian ZHANG, Zhibin XIAO
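As a rough illustration of the division of labor among the three NoC layers described above, here is a small Python sketch; the layer names, endpoint labels, and dispatch function are invented for illustration and are not from the patent.

```python
# A minimal sketch (illustrative names only) of mapping transfers onto the
# three NoC layers described in the abstract.

from enum import Enum, auto

class NocLayer(Enum):
    OUTER = auto()    # host device <-> DDRs
    MIDDLE = auto()   # core <-> core data exchange
    INNER = auto()    # global buffer -> PE clusters inside one core (cross-bar broadcast)

def pick_layer(src: str, dst: str) -> NocLayer:
    """Pick the NoC layer that carries a transfer between two endpoint kinds."""
    if "host" in (src, dst):
        return NocLayer.OUTER
    if src == "core" and dst == "core":
        return NocLayer.MIDDLE
    if src == "global_buffer" and dst == "pe_cluster":
        return NocLayer.INNER
    raise ValueError(f"no layer defined for {src} -> {dst}")

print(pick_layer("host", "ddr"))                   # NocLayer.OUTER
print(pick_layer("global_buffer", "pe_cluster"))   # NocLayer.INNER
```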
-
Publication number: 20240338338
Abstract: This application describes a network-on-chip (NoC) system on a hardware accelerator for accelerating neural network computations. An example NoC system in the NN accelerator may include interconnected routers with routing control circuits and cores respectively coupled to the routers. The cores are arranged into a matrix. Each row of cores is connected with a first uni-directional ring-shaped data link, and every two adjacent data links are in opposite directions. Each column of cores is connected with a second uni-directional ring-shaped data link, and every two adjacent data links are in opposite directions. In a given router of the plurality of routers, the routing control circuit is configured to: receive a data package; convert physical addresses of the given router and the target router into logical addresses; determine a routing port of the given router based on the logical addresses; and output the data package through the routing port.
Type: Application
Filed: May 15, 2023
Publication date: October 10, 2024
Inventors: Xiaoqian ZHANG, Zhibin XIAO
-
Patent number: 12113667
Abstract: This application provides a network slice configuration method, apparatus, and system, and pertains to the field of wireless communications technologies. The method includes: after receiving a management request for a network slice, obtaining or determining, by a network slice manager, network resource information corresponding to a subnet included in the network slice, and then sending, in the form of a subnet management request to a subnet manager, the network resource information corresponding to the subnet, so that the subnet manager configures the corresponding subnet based on the network resource information corresponding to the subnet. In this application, network slice configuration efficiency can be improved.
Type: Grant
Filed: July 31, 2023
Date of Patent: October 8, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Ruiyue Xu, Kai Zhang, Xiaoqian Jia
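The flow described in this abstract (a slice management request in, per-subnet management requests out) can be sketched as follows; the class and field names, and the use of plain dictionaries for resource information, are assumptions for illustration only.

```python
# A minimal sketch (illustrative names, not from the patent) of a slice
# manager fanning a slice management request out to subnet managers.

from dataclasses import dataclass

@dataclass
class SubnetManagementRequest:
    subnet_id: str
    network_resources: dict   # e.g. bandwidth or latency budget for this subnet

class SubnetManager:
    def configure(self, request: SubnetManagementRequest) -> None:
        # The subnet manager configures its subnet from the resource info.
        print(f"configuring {request.subnet_id} with {request.network_resources}")

class NetworkSliceManager:
    def __init__(self, subnet_managers: dict):
        self.subnet_managers = subnet_managers

    def handle_slice_request(self, slice_request: dict) -> None:
        # Determine the resource info for each subnet of the slice, then send
        # it as a subnet management request to that subnet's manager.
        for subnet_id, resources in slice_request["subnets"].items():
            req = SubnetManagementRequest(subnet_id, resources)
            self.subnet_managers[subnet_id].configure(req)

mgr = NetworkSliceManager({"ran": SubnetManager(), "core": SubnetManager()})
mgr.handle_slice_request({"subnets": {"ran": {"bandwidth_mbps": 200},
                                      "core": {"bandwidth_mbps": 500}}})
```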
-
Publication number: 20240310352
Abstract: The present disclosure relates to a method for analyzing a geological body with the help of its physical and chemical properties, and in particular to a method for identifying an exudative sandstone uranium deposit. With the method according to the embodiments of the present disclosure, an exudative sandstone uranium deposit formed by exudation metallogenesis may be systematically identified, so as to guide prediction and prospecting evaluation of uranium deposits in red variegated sandstone formations of a sedimentary basin, avoid ore dislocation and ore leakage, open up new prospecting positions and spaces, and discover new uranium resources.
Type: Application
Filed: July 21, 2023
Publication date: September 19, 2024
Inventors: Ziying Li, Wusheng Liu, Mingkuan Qin, Yuqi Cai, Qingyin Guo, Feng He, Jun Zhong, Xide Li, Ye Sun, Yunlong Zhang, Weitao Li, Guo Wang, Shengfu Li, Jianfang Cai, Gui Wang, Shan Jiang, Jielin Zhang, Sheng He, Qubo Wu, Zilong Zhang, Chiheng Liu, Linfei Qiu, Hu Liu, Hongwei Ji, Qiang Guo, Pengfei Zhu, Xinyang Liu, Yuyan Zhang, Zhixin Huang, Jian Guo, Meizhi Han, Zhongbo He, Jinrong Lin, Licheng Jia, Junxian Wang, Longsheng Yi, Mingming Tian, Xiaoneng Luo, Bo Peng, Xiaoqian Xiu, Ruixiang Hao, Wenquan Wang, Changfa Yu
-
Publication number: 20240300869
Abstract: The present disclosure belongs to the field of microbial technology, and in particular relates to a microbial agent for increasing root nodule number and root nodule nitrogenase activity in leguminous crops, and use thereof. The microbial agent is compounded from four microbes, including Bacillus amyloliquefaciens, Brevibacillus laterosporus, Bacillus mucilaginosus Krassilnikov, and Enterobacter ludwigii, by separate fermentative cultivation, concentration, and mixing. The microbial agent can effectively increase root nodule number and root nodule nitrogenase activity, promote crop growth, and improve crop yield and product quality.
Type: Application
Filed: May 15, 2024
Publication date: September 12, 2024
Applicant: OIL CROPS RESEARCH INSTITUTE, CHINESE ACADEMY OF AGRICULTURAL SCIENCES
Inventors: Peiwu LI, Xiaofeng YUE, Qi ZHANG, Xiaoqian TANG, Yang ZHOU, Yizhen BAI, Jun JIANG
-
Publication number: 20240298648
Abstract: The present disclosure relates to a microbial agent with functions of preventing and controlling aflatoxin and aflatoxigenic strains, and of promoting yield increase of crops. The microbial agent is compounded from five microbes comprising Bacillus amyloliquefaciens, Brevibacillus laterosporus, Bacillus mucilaginosus Krassilnikov, Enterobacter ludwigii, and Myroides odoratimimus by separate fermentative cultivation, concentration, and mixing. The microbial agent is applied at the field planting stage of crops such as peanuts, which can effectively reduce the abundance and infection probability of toxin-producing strains such as Aspergillus flavus in soil at the source, reduce the risk of aflatoxin contamination of peanuts after production, and improve the quality and safety level of peanuts, while at the same time promoting crop growth, enhancing resistance, and improving the full pod rate and yield, with significant economic, social, and ecological benefits.
Type: Application
Filed: May 15, 2024
Publication date: September 12, 2024
Applicant: OIL CROPS RESEARCH INSTITUTE, CHINESE ACADEMY OF AGRICULTURAL SCIENCES
Inventors: Qi ZHANG, Peiwu LI, Xiaofeng YUE, Xiaoqian TANG, Yang ZHOU, Yizhen BAI
-
Patent number: 12072834
Abstract: This application describes a hardware accelerator and a device for accelerating neural network computations. An example accelerator may include multiple cores and a central processing unit (CPU) respectively associated with DDRs, a data exchange interface connecting a host device to the accelerator, and a three-layer network-on-chip (NoC) architecture. The three-layer NoC architecture includes an outer-layer NoC configured to transfer data between the host device and the DDRs, a middle-layer NoC configured to transfer data among the cores, and an inner-layer NoC within each core that includes a cross-bar network for broadcasting weights and activations of neural networks from a global buffer of the core to a plurality of processing entity (PE) clusters within the core.
Type: Grant
Filed: May 15, 2023
Date of Patent: August 27, 2024
Assignee: Moffett International Co., Limited
Inventors: Xiaoqian Zhang, Zhibin Xiao
-
Publication number: 20240279298
Abstract: A glucagon analog and the medical use thereof. Specifically, the glucagon analog has a significantly improved in vitro activity, excellent physical/chemical stability, and high solubility, and can be used to treat metabolic diseases such as hypoglycemia, obesity, and diabetes.
Type: Application
Filed: June 17, 2022
Publication date: August 22, 2024
Inventors: Weibing LIU, Xuchao HUANG, Xiaoqian ZHANG, Fangzhou WU, Lei WANG, Liang QU
-
Publication number: 20240272490
Abstract: A dimming module, a method for manufacturing the same, and a dimming glass, relating to the field of smart glass technology. The dimming module includes a first dimming structure (10) and a second dimming structure (20). Each of the first dimming structure (10) and the second dimming structure (20) includes a first substrate (1), a second substrate (2), a liquid crystal layer (3), a first flexible circuit board (4), and a second flexible circuit board (5). The first substrate (1) is provided with a first binding area (11) and a first electrode (6) on one side facing the liquid crystal layer (3). The second substrate (2) is provided with a second binding area (21) and a plurality of second electrodes (7) on one side facing the liquid crystal layer (3).
Type: Application
Filed: February 28, 2022
Publication date: August 15, 2024
Applicants: Beijing BOE Sensor Technology Co., Ltd., BOE Technology Group Co., Ltd.
Inventors: Deshen ZHAI, Chunlei WANG, Sikai ZHANG, Juan CHEN, Ying WANG, Changyin WANG, Peng LIANG, Xiaoqian JU, Xiaolong WU, Yongzhong ZHANG, Jing PANG
-
Publication number: 20240264802
Abstract: This application describes hybrid hardware accelerators, systems, and apparatus for performing various computations in neural network applications using the same set of hardware resources. An example accelerator may include weight selectors, activation input interfaces, and a plurality of Multiplier-Accumulation (MAC) circuits organized as a plurality of MAC lanes. Each of the plurality of MAC lanes may be configured to: receive a control signal indicating whether to perform convolution or vector operations; receive one or more weights according to the control signal; receive one or more activations according to the control signal; and generate output data based on the one or more weights and the one or more activations according to the control signal and feed the output data into an output buffer. Each of the plurality of MAC lanes includes a plurality of multiplier circuits and a plurality of adder-subtractor circuits.
Type: Application
Filed: April 17, 2024
Publication date: August 8, 2024
Inventors: Xiaoqian ZHANG, Zhibin XIAO, Changxu ZHANG, Renjie CHEN
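As an illustration of a single MAC lane that reuses one datapath for either convolution or vector operations under a control signal, here is a minimal NumPy sketch; the mode names, the choice of element-wise addition as the vector operation, and the function signature are assumptions, not taken from the patent.

```python
# A minimal sketch (not the patented circuitry) of a MAC lane steered by a
# control signal between a dot-product mode and an element-wise vector mode.

import numpy as np

def mac_lane(mode: str, weights: np.ndarray, activations: np.ndarray) -> np.ndarray:
    """Return the lane's output for the selected mode.

    mode == "conv":   multiply-accumulate (dot product of weights and activations)
    mode == "vector": element-wise combination using the adder-subtractor stage
    """
    if mode == "conv":
        # Multipliers feed the accumulator: sum(w_i * a_i)
        return np.array([np.dot(weights, activations)])
    if mode == "vector":
        # Bypass the multipliers; adder-subtractors combine the operands.
        return activations + weights
    raise ValueError(f"unknown mode: {mode}")

w = np.array([1.0, 2.0, 3.0])
a = np.array([4.0, 5.0, 6.0])
output_buffer = [mac_lane("conv", w, a), mac_lane("vector", w, a)]
print(output_buffer)  # [array([32.]), array([5., 7., 9.])]
```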
-
Patent number: 12020001
Abstract: This application describes hybrid hardware accelerators, systems, and apparatus for performing various computations in neural network applications using the same set of hardware resources. An example accelerator may include weight selectors, activation input interfaces, and a plurality of Multiplier-Accumulation (MAC) circuits organized as a plurality of MAC lanes. Each of the plurality of MAC lanes may be configured to: receive a control signal indicating whether to perform convolution or vector operations; receive one or more weights according to the control signal; receive one or more activations according to the control signal; and generate output data based on the one or more weights and the one or more activations according to the control signal and feed the output data into an output buffer. Each of the plurality of MAC lanes includes a plurality of multiplier circuits and a plurality of adder-subtractor circuits.
Type: Grant
Filed: April 3, 2023
Date of Patent: June 25, 2024
Assignee: Moffett International Co., Limited
Inventors: Xiaoqian Zhang, Zhibin Xiao, Changxu Zhang, Renjie Chen
-
Publication number: 20240086151
Abstract: This application describes hybrid hardware accelerators, systems, and apparatus for performing various computations in neural network applications using the same set of hardware resources. An example accelerator may include weight selectors, activation input interfaces, and a plurality of Multiplier-Accumulation (MAC) circuits organized as a plurality of MAC lanes. Each of the plurality of MAC lanes may be configured to: receive a control signal indicating whether to perform convolution or vector operations; receive one or more weights according to the control signal; receive one or more activations according to the control signal; and generate output data based on the one or more weights and the one or more activations according to the control signal and feed the output data into an output buffer. Each of the plurality of MAC lanes includes a plurality of multiplier circuits and a plurality of adder-subtractor circuits.
Type: Application
Filed: April 3, 2023
Publication date: March 14, 2024
Inventors: Xiaoqian ZHANG, Zhibin XIAO, Changxu ZHANG, Renjie CHEN
-
Patent number: 11868307
Abstract: This application describes a hardware accelerator and a device for accelerating neural network computations. An example accelerator may include multiple cores and a central processing unit (CPU) respectively associated with DDRs, a data exchange interface connecting a host device to the accelerator, and a three-layer network-on-chip (NoC) architecture. The three-layer NoC architecture includes an outer-layer NoC configured to transfer data between the host device and the DDRs, a middle-layer NoC configured to transfer data among the cores, and an inner-layer NoC within each core that includes a cross-bar network for broadcasting weights and activations of neural networks from a global buffer of the core to a plurality of processing entity (PE) clusters within the core.
Type: Grant
Filed: May 15, 2023
Date of Patent: January 9, 2024
Assignee: Moffett International Co., Limited
Inventors: Xiaoqian Zhang, Zhibin Xiao
-
Publication number: 20230407928
Abstract: A brake dust filtering apparatus (100), comprising: a base (10) provided with multiple first through holes (101); a scribing sheet (20) provided with multiple second through holes (201) and slidably connected to the base (10); a filter screen (30) disposed on the side of the base (10) close to a brake caliper (200) and covering the first through holes (101); and a drive member (40) for driving the scribing sheet (20) to slide with respect to the base (10), so as to form an off state in which each of the first through holes (101) and each of the second through holes (201) are staggered, and an on state in which the multiple first through holes (101) at least partially overlap the multiple second through holes (201). Further disclosed is a vehicle.
Type: Application
Filed: November 23, 2020
Publication date: December 21, 2023
Applicant: Wuhan Lotus Cars Co., Ltd.
Inventors: Bowen ZHENG, Xiaoqian ZHANG
-
Publication number: 20230259758
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for improving the efficiency of neural network computations using adaptive tensor compute kernels. First, the adaptive tensor compute kernels may adjust their shapes according to the different shapes of input/weight tensors when distributing the weights and input values to a processing element (PE) array for parallel processing. Depending on the shape of the tensor compute kernels, additional inter-cluster or intra-cluster adders may be needed to perform convolution computations. Second, the adaptive tensor compute kernels may support two different tensor operation modes, i.e., a 1×1 tensor operation mode and a 3×3 tensor operation mode, to cover all types of convolution computations. Third, the underlying PE array may configure each PE-internal buffer (e.g., a register file) differently to support different compression ratios and sparsity granularities of sparse neural networks.
Type: Application
Filed: February 16, 2022
Publication date: August 17, 2023
Inventors: Xiaoqian ZHANG, Enxu YAN, Zhibin XIAO
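To illustrate the two tensor operation modes mentioned in this abstract, here is a small NumPy sketch of a 1×1 (pointwise) convolution and a valid 3×3 convolution; the tensor shapes, layouts, and function names are assumptions for illustration and do not reflect how the PE array actually maps the work.

```python
# A minimal sketch (illustrative only) of the 1x1 and 3x3 tensor operation
# modes: a per-pixel channel matrix multiply versus a 3x3 neighborhood
# gather followed by the same multiply-accumulate.

import numpy as np

def conv_1x1(x, w):
    """x: (H, W, Cin), w: (Cin, Cout) -> (H, W, Cout); pointwise convolution."""
    return x @ w

def conv_3x3(x, w):
    """x: (H, W, Cin), w: (3, 3, Cin, Cout) -> (H-2, W-2, Cout); valid 3x3 convolution."""
    H, W, _ = x.shape
    Cout = w.shape[-1]
    out = np.zeros((H - 2, W - 2, Cout))
    for i in range(H - 2):
        for j in range(W - 2):
            patch = x[i:i + 3, j:j + 3, :]              # 3x3 neighborhood
            out[i, j] = np.tensordot(patch, w, axes=3)  # multiply-accumulate
    return out

x = np.random.rand(8, 8, 4)
print(conv_1x1(x, np.random.rand(4, 16)).shape)         # (8, 8, 16)
print(conv_3x3(x, np.random.rand(3, 3, 4, 16)).shape)   # (6, 6, 16)
```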
-
Patent number: 11726746
Abstract: This application describes hybrid hardware accelerators, systems, and apparatus for performing various computations in neural network applications using the same set of hardware resources. An example accelerator may include weight selectors, activation input interfaces, and a plurality of Multiplier-Accumulation (MAC) circuits organized as a plurality of MAC lanes. Each of the plurality of MAC lanes may be configured to: receive a control signal indicating whether to perform convolution or vector operations; receive one or more weights according to the control signal; receive one or more activations according to the control signal; and generate output data based on the one or more weights and the one or more activations according to the control signal and feed the output data into an output buffer. Each of the plurality of MAC lanes includes a plurality of multiplier circuits and a plurality of adder-subtractor circuits.
Type: Grant
Filed: September 14, 2022
Date of Patent: August 15, 2023
Assignee: Moffett International Co., Limited
Inventors: Xiaoqian Zhang, Zhibin Xiao, Changxu Zhang, Renjie Chen
-
Patent number: 11531869
Abstract: Embodiments herein describe circuitry with improved efficiency when executing layers in a nested neural network. A nested neural network has at least one split operation where a tensor generated by a first layer is transmitted to, and processed by, several branches in the neural network. Each of these branches can have several layers with data dependencies that result in a multiply-add array sitting idle. In one embodiment, the circuitry can include a dedicated pre-pooler for performing a pre-pooling operation. Thus, the pre-pooling operation can be performed in parallel with other operations (e.g., the convolution performed by another layer). Once the multiply-add array is idle, the pre-pooling operation has already completed (or at least has already started), which means the time the multiply-add array must wait before it can perform the next operation is reduced or eliminated.
Type: Grant
Filed: March 28, 2019
Date of Patent: December 20, 2022
Assignee: XILINX, INC.
Inventors: Ephrem C. Wu, David Berman, Xiaoqian Zhang
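The scheduling idea, running the pooling branch on a dedicated unit while the multiply-add array executes another branch, can be sketched in Python as follows; the 2×2 average-pooling window, the placeholder convolution, and the thread-based overlap are illustrative assumptions rather than the patented circuitry.

```python
# A minimal sketch (not the patented hardware) of overlapping a pre-pooling
# operation with other work so the multiply-add array does not sit idle.

from concurrent.futures import ThreadPoolExecutor
import numpy as np

def pre_pool(x):
    """Dedicated pre-pooler: 2x2 average pooling of the shared input tensor."""
    H, W = x.shape
    return x[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

def convolution_branch(x, w):
    """Work that keeps the multiply-add array busy (a stand-in for another branch's layer)."""
    return x * w

x = np.random.rand(8, 8)
with ThreadPoolExecutor() as pool:
    pooled_future = pool.submit(pre_pool, x)   # pre-pooling starts early...
    conv_out = convolution_branch(x, 0.5)      # ...while the array computes
    pooled = pooled_future.result()            # result is ready (or nearly) when needed

print(conv_out.shape, pooled.shape)  # (8, 8) (4, 4)
```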
-
Patent number: D1021901
Type: Grant
Filed: February 21, 2022
Date of Patent: April 9, 2024
Inventor: Xiaoqian Zhang
-
Patent number: D1039527
Type: Grant
Filed: July 22, 2022
Date of Patent: August 20, 2024
Inventor: Xiaoqian Zhang