Patents by Inventor Fei Xue
Fei Xue has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11658168
Abstract: A flash memory device includes a plurality of flash memory cell arrays, wherein: a flash memory cell array in the plurality of flash memory cell arrays comprises a plurality of layers of flash memory cell planes; and a flash memory cell plane includes a plurality of flash memory cells. The flash memory device further includes logic circuitry coupled to the plurality of flash memory cell arrays, configured to perform operations using the plurality of flash memory cell arrays; and sensing circuitry configured to access a corresponding flash memory cell plane among the plurality of flash memory cell planes.
Type: Grant
Filed: August 5, 2020
Date of Patent: May 23, 2023
Inventors: Fei Xue, Shuangchen Li, Dimin Niu, Hongzhong Zheng
-
Patent number: 11604744
Abstract: A dual-model memory interface of a computing system is provided, configurable to present memory interfaces having differently-graded bandwidth capacity to different processors of the computing system. A mode switch controller of the memory interface controller, based on at least an arbitration rule written to a configuration register, switches the memory interface controller between a narrow-band mode and a wide-band mode. In each mode, the memory interface controller disables either a plurality of narrow-band memory interfaces of the memory interface controller according to a first bus standard, or a wide-band memory interface of the memory interface controller according to a second bus standard. The memory interface controller virtualizes a plurality of system memory units of the computing system as a virtual wide-band memory unit according to the second bus standard, or virtualizes a system memory unit of the computing system as a virtual narrow-band memory unit according to the first bus standard.
Type: Grant
Filed: October 16, 2020
Date of Patent: March 14, 2023
Assignee: Alibaba Group Holding Limited
Inventors: Yuhao Wang, Wei Han, Dimin Niu, Lide Duan, Shuangchen Li, Fei Xue, Hongzhong Zheng
-
Publication number: 20230026824
Abstract: A memory system for accelerating graph neural network processing can include an on-host chip memory to cache data needed for processing a current root node. The system can also include a volatile memory interface between the host and non-volatile memory. The volatile memory can be configured to save one or more sets of next root nodes, neighbor nodes and corresponding attributes. The non-volatile memory can have sufficient capacity to store the entire graph data. The non-volatile memory can also be configured to pre-arrange the sets of next root nodes, neighbor nodes and corresponding attributes for storage in the volatile memory.
Type: Application
Filed: July 15, 2022
Publication date: January 26, 2023
Inventors: Fei XUE, Yangjie ZHOU, Lide DUAN, Hongzhong ZHENG
-
Publication number: 20230009425
Abstract: The present invention provides a black plaster producing device that comprises a workbench, longitudinal supports, a transverse bottom plate, an electric-heated thermostatic drying oven, a storage rack structure for classified storage, a storage bucket, a stove, a boiling furnace, a furnace cover, a stirring box, a box cover, a stirring motor, a stirring pipe, a crushing blade, an adjustable partition frame structure, a rolling frame structure, a gas inlet pipe, a pipe valve, a stirring switch, a rolling switch and a receiving bucket, wherein the longitudinal supports are welded at the four corners of the lower part of the workbench, and the transverse bottom plate is screwed to the lower part of the longitudinal supports on the inner side. The transverse partition plate of the present invention is screwed at the central position inside the storage box, which facilitates convenient storage of materials in the storage box during use and allows the stored materials to be classified at the same time.
Type: Application
Filed: July 24, 2020
Publication date: January 12, 2023
Applicant: Shandong Zhushi Pharmaceutical Group Co., Ltd.
Inventors: Kunfu ZHU, Lei ZHU, Junhui ZHU, Meng ZHU, Fei XUE
-
Publication number: 20220395808
Abstract: The invention provides a biomass-based hyperbranched adsorption material with multiple adsorption sites for multiple heavy metal ions, and a preparation method thereof. The material is prepared by a one-step instant-crosslinking method using a biomass raw material as the matrix and a hyperbranched polymer containing chelating N, O, and S atoms as the functional reagent, wherein the hyperbranched polymer has two or more different adsorption sites (containing elements such as N, S, and O) for heavy metal ions.
Type: Application
Filed: August 12, 2021
Publication date: December 15, 2022
Applicant: GUANGXI UNIVERSITY
Inventors: Hongxiang ZHU, Hui HE, Lei WANG, Fei XUE, Xianlin LEI
-
Publication number: 20220343145
Abstract: This application describes a hardware accelerator, a computer system, and a method for accelerating Graph Neural Network (GNN) computations. The hardware accelerator comprises a matrix partitioning circuit configured to partition an adjacency matrix of an input graph for GNN computations into a plurality of sub-matrices; a sub-matrix reordering circuit configured to reorder rows and columns of the plurality of sub-matrices; a tile partitioning circuit configured to divide the plurality of sub-matrices with reordered rows and columns into a plurality of tiles based on processing granularities of one or more processors; and a tile distributing circuit configured to distribute the plurality of tiles to the one or more processors for performing the GNN computations.
Type: Application
Filed: April 21, 2021
Publication date: October 27, 2022
Inventors: Fei XUE, Yangjie ZHOU, Hongzhong ZHENG
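The partition/reorder/tile/distribute pipeline in this abstract can be illustrated in software. This is only a minimal sketch, not the patented circuit: the function names, the density-based reordering heuristic, and the tile sizes are all assumptions made for illustration.

```python
import numpy as np

def partition_and_tile(adj, sub_size, tile_size):
    """Partition an adjacency matrix into sub-matrices, reorder each
    sub-matrix's rows/columns, then cut it into processor-sized tiles."""
    n = adj.shape[0]
    tiles = []
    for i in range(0, n, sub_size):
        for j in range(0, n, sub_size):
            sub = adj[i:i + sub_size, j:j + sub_size]
            # Reorder rows and columns by non-zero count (one possible
            # heuristic for clustering non-zeros together).
            row_order = np.argsort(-np.count_nonzero(sub, axis=1))
            col_order = np.argsort(-np.count_nonzero(sub, axis=0))
            sub = sub[row_order][:, col_order]
            # Divide the reordered sub-matrix into tiles sized for a
            # processor's granularity; these would then be distributed.
            for r in range(0, sub.shape[0], tile_size):
                for c in range(0, sub.shape[1], tile_size):
                    tiles.append(sub[r:r + tile_size, c:c + tile_size])
    return tiles

adj = (np.random.default_rng(0).random((8, 8)) > 0.7).astype(int)
tiles = partition_and_tile(adj, sub_size=4, tile_size=2)
```

For an 8x8 matrix with 4x4 sub-matrices and 2x2 tiles, this yields 4 sub-matrices of 4 tiles each; a tile distributor would then hand the 16 tiles to the available processors.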
-
Publication number: 20220343146
Abstract: This application describes a hardware accelerator, a computer system and a method for accelerating temporal graph neural network (GNN) computations. An exemplary hardware accelerator comprises: a key-graph memory configured to store a key graph; a node classification circuit configured to: fetch the key graph from the key-graph memory; receive a current graph for performing temporal GNN computation with the key graph; and identify one or more nodes of the current graph based on a comparison between the key graph and the current graph; and a node reconstruction circuit configured to: perform spatial computations on the one or more nodes identified by the node classification circuit to obtain updated nodes; generate an updated key graph based on the key graph and the updated nodes; and store the updated key graph in the key-graph memory for processing a next graph.
Type: Application
Filed: April 23, 2021
Publication date: October 27, 2022
Inventors: Fei XUE, Yangjie ZHOU, Hongzhong ZHENG
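The key-graph flow described above (classify changed nodes, recompute only those, store the updated key graph for the next time step) can be sketched in a few lines. This is an illustrative sketch under assumptions, not the patented hardware: the dict-of-features graph representation and the equality-based change test are inventions of the example.

```python
def process_graph(key_graph, current_graph, spatial_compute):
    """Compare the current graph against the stored key graph, run the
    spatial computation only on nodes that changed, and return the
    updated key graph to store for the next time step."""
    # Node classification: nodes whose features differ from the key graph.
    changed = [n for n, feat in current_graph.items()
               if key_graph.get(n) != feat]
    # Node reconstruction: recompute only the changed nodes.
    updated = {n: spatial_compute(current_graph[n]) for n in changed}
    new_key = {**key_graph, **updated}
    return new_key, changed

key = {"a": 1, "b": 2, "c": 3}
cur = {"a": 1, "b": 5, "c": 3}   # only node "b" changed
new_key, changed = process_graph(key, cur, lambda x: x * 10)
```

Only node "b" is reclassified and recomputed; nodes "a" and "c" keep their key-graph values, which is the saving a temporal GNN accelerator exploits across consecutive graph snapshots.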
-
Patent number: 11409839
Abstract: The present disclosure relates to a method for controlling execution of a GEMM operation on an accelerator comprising multiple computation units, a first memory device, and a second memory device. The method comprises determining an execution manner of the GEMM operation, the execution manner comprising partition information of the GEMM operation and computation unit allocation information of the partitioned GEMM operation; generating one or more instructions to compute the partitioned GEMM operation on one or more allocated computation units; and issuing the one or more instructions to at least one of a first queue and a second queue, which enables at least one of a first local controller and a second local controller to execute the one or more instructions, wherein the first local controller and the second local controller are configured to control data movement between the computation units, the first memory device, and the second memory device.
Type: Grant
Filed: August 21, 2020
Date of Patent: August 9, 2022
Assignee: Alibaba Group Holding Limited
Inventors: Yuhao Wang, Fei Sun, Fei Xue, Yen-Kuang Chen, Hongzhong Zheng
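The control flow in this abstract (partition the GEMM, allocate computation units, issue instructions to one of two queues) can be sketched as follows. This is a hedged illustration, not the patented method: the cubic tiling, the round-robin unit allocation, and the alternating queue policy are all assumptions of the example.

```python
from collections import deque

def schedule_gemm(M, N, K, tile, units):
    """Partition an M x N x K GEMM into tile-sized blocks, allocate each
    block to a computation unit round-robin, and issue the resulting
    instructions alternately to two queues (one per local controller)."""
    queues = (deque(), deque())
    instrs = []
    idx = 0
    for m in range(0, M, tile):
        for n in range(0, N, tile):
            for k in range(0, K, tile):
                instr = {"block": (m, n, k), "unit": idx % units}
                instrs.append(instr)
                # Alternate queues so both local controllers stay busy.
                queues[idx % 2].append(instr)
                idx += 1
    return instrs, queues

instrs, (q0, q1) = schedule_gemm(M=4, N=4, K=4, tile=2, units=4)
```

A 4x4x4 GEMM with 2x2x2 tiles yields 8 block instructions, split 4 and 4 across the two queues; each local controller can then drive data movement for its queue independently.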
-
Patent number: 11392384
Abstract: A method of scheduling instructions in a processing system comprising a processing unit and one or more co-processors comprises dispatching a plurality of instructions from a master processor to a co-processor of the one or more co-processors, wherein each instruction of the plurality of instructions comprises one or more additional fields, wherein at least one field comprises grouping information operable to consolidate the plurality of instructions for decomposition, and wherein at least one field comprises control information. The method also comprises decomposing the plurality of instructions into a plurality of fine-grained instructions, wherein the control information comprises rules associated with decomposing the plurality of instructions into the plurality of fine-grained instructions. Further, the method comprises scheduling the plurality of fine-grained instructions to execute on the co-processor, wherein the scheduling is performed in a non-sequential order.
Type: Grant
Filed: September 4, 2020
Date of Patent: July 19, 2022
Assignee: Alibaba Group Holding Limited
Inventors: Fei Xue, Yuhao Wang, Fei Sun, Hongzhong Zheng
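The decomposition step this abstract describes can be illustrated with a toy coarse-to-fine expansion. The field names (`group`, `control`, `split`) and the interleave-by-step scheduling order are assumptions made for illustration, not the patented instruction encoding.

```python
def decompose(coarse_instrs):
    """Expand coarse instructions into fine-grained ones using the
    'control' field's rule (here, simply a step count), then schedule
    the fine-grained instructions in a non-sequential order."""
    fine = []
    for instr in coarse_instrs:
        steps = instr["control"]["split"]
        for s in range(steps):
            fine.append({"op": instr["op"], "step": s, "group": instr["group"]})
    # Non-sequential scheduling: interleave instructions by step index
    # rather than executing each coarse instruction's steps in a row.
    fine.sort(key=lambda f: f["step"])
    return fine

coarse = [
    {"op": "load", "group": 0, "control": {"split": 2}},
    {"op": "mac",  "group": 0, "control": {"split": 3}},
]
fine = decompose(coarse)
```

Here the two coarse instructions expand to five fine-grained steps, and the stable sort interleaves them (load/mac step 0, then step 1, then the final mac step) instead of running them back to back.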
-
Patent number: 11379005
Abstract: A rotating shaft body and an electronic device are provided. The rotating shaft body includes a first rotating surface and a second rotating surface. The first rotating surface and the second rotating surface have a common side edge. The axis of rotation of the first rotating surface and the axis of rotation of the second rotating surface are on a same side and do not overlap. The first rotating surface is provided with a first stopper part. The second rotating surface is provided with a second stopper part. The rotating shaft body further includes a supporting surface. The supporting surface is provided on a same side as the axes of rotation of the two rotating surfaces. The supporting surface is concave toward the two rotating surfaces. Two sides of the supporting surface are provided with a third stopper part and a fourth stopper part respectively.
Type: Grant
Filed: November 13, 2020
Date of Patent: July 5, 2022
Assignee: VIVO MOBILE COMMUNICATION CO., LTD.
Inventor: Fei Xue
-
Patent number: 11360766
Abstract: An apparatus comprises a bulk array of non-volatile memory cells on an integrated circuit die and an arithmetic logic unit on the die coupled to the bulk array. The arithmetic logic unit is operable to perform arithmetic logic operations on contents of the bulk array responsive to instructions received from outside of the die. The non-volatile memory cells may include NAND-type flash memory cells.
Type: Grant
Filed: November 2, 2020
Date of Patent: June 14, 2022
Assignee: Alibaba Group Holding Limited
Inventors: Fei Xue, Shuangchen Li, Feng Zhu, Hongzhong Zheng
-
Patent number: 11346781
Abstract: An optical fiber laser induced breakdown spectroscopy (LIBS) detection device and a detection method are provided. The device comprises an optical fiber LIBS detector and a master control detection system. The master control detection system is installed in the master control room of a nuclear power plant, and the optical fiber LIBS detector is configured to perform detection in a pipeline. The master control detection system and the optical fiber LIBS detector are connected to each other via a transmission optical fiber and a control signal line. Designated areas of the inner wall of the nuclear power plant's main pipeline can thus be positioned and detected on line, at fixed points, remotely from the master control room.
Type: Grant
Filed: March 20, 2019
Date of Patent: May 31, 2022
Assignees: XI'AN JIAOTONG UNIVERSITY, SUZHOU NUCLEAR POWER RESEARCH INSTITUTE CO., LTD.
Inventors: Jian Wu, Zhi Zhang, Yan Qiu, Xingwen Li, Yuhua Hang, Tao Liu, Fei Xue
-
Publication number: 20220137960
Abstract: An apparatus comprises a bulk array of non-volatile memory cells on an integrated circuit die and an arithmetic logic unit on the die coupled to the bulk array. The arithmetic logic unit is operable to perform arithmetic logic operations on contents of the bulk array responsive to instructions received from outside of the die. The non-volatile memory cells may include NAND-type flash memory cells.
Type: Application
Filed: November 2, 2020
Publication date: May 5, 2022
Inventors: Fei XUE, Shuangchen LI, Feng ZHU, Hongzhong ZHENG
-
Publication number: 20220121586
Abstract: A dual-model memory interface of a computing system is provided, configurable to present memory interfaces having differently-graded bandwidth capacity to different processors of the computing system. A mode switch controller of the memory interface controller, based on at least an arbitration rule written to a configuration register, switches the memory interface controller between a narrow-band mode and a wide-band mode. In each mode, the memory interface controller disables either a plurality of narrow-band memory interfaces of the memory interface controller according to a first bus standard, or a wide-band memory interface of the memory interface controller according to a second bus standard. The memory interface controller virtualizes a plurality of system memory units of the computing system as a virtual wide-band memory unit according to the second bus standard, or virtualizes a system memory unit of the computing system as a virtual narrow-band memory unit according to the first bus standard.
Type: Application
Filed: October 16, 2020
Publication date: April 21, 2022
Applicant: Alibaba Group Holding Limited
Inventors: Yuhao Wang, Wei Han, Dimin Niu, Lide Duan, Shuangchen Li, Fei Xue, Hongzhong Zheng
-
Publication number: 20220075622
Abstract: A method of scheduling instructions in a processing system comprising a processing unit and one or more co-processors comprises dispatching a plurality of instructions from a master processor to a co-processor of the one or more co-processors, wherein each instruction of the plurality of instructions comprises one or more additional fields, wherein at least one field comprises grouping information operable to consolidate the plurality of instructions for decomposition, and wherein at least one field comprises control information. The method also comprises decomposing the plurality of instructions into a plurality of fine-grained instructions, wherein the control information comprises rules associated with decomposing the plurality of instructions into the plurality of fine-grained instructions. Further, the method comprises scheduling the plurality of fine-grained instructions to execute on the co-processor, wherein the scheduling is performed in a non-sequential order.
Type: Application
Filed: September 4, 2020
Publication date: March 10, 2022
Inventors: Fei XUE, Yuhao WANG, Fei SUN, Hongzhong ZHENG
-
Publication number: 20220058150
Abstract: A system-in-package architecture in accordance with aspects includes a logic die and one or more memory dice coupled together in a three-dimensional stack. The logic die can include one or more global building blocks and a plurality of local building blocks. The number of local building blocks can be scalable. The local building blocks can include a plurality of engines and memory controllers. The memory controllers can be configured to directly couple one or more of the engines to the one or more memory dice. The number and type of local building blocks, and the number and types of engines and memory controllers can be scalable.
Type: Application
Filed: August 20, 2020
Publication date: February 24, 2022
Inventors: Lide DUAN, Wei HAN, Yuhao WANG, Fei XUE, Yuanwei FANG, Hongzhong ZHENG
-
Publication number: 20220058237
Abstract: The present disclosure relates to a method for controlling execution of a GEMM operation on an accelerator comprising multiple computation units, a first memory device, and a second memory device. The method comprises determining an execution manner of the GEMM operation, the execution manner comprising partition information of the GEMM operation and computation unit allocation information of the partitioned GEMM operation; generating one or more instructions to compute the partitioned GEMM operation on one or more allocated computation units; and issuing the one or more instructions to at least one of a first queue and a second queue, which enables at least one of a first local controller and a second local controller to execute the one or more instructions, wherein the first local controller and the second local controller are configured to control data movement between the computation units, the first memory device, and the second memory device.
Type: Application
Filed: August 21, 2020
Publication date: February 24, 2022
Inventors: Yuhao Wang, Fei Sun, Fei Xue, Yen-Kuang Chen, Hongzhong Zheng
-
Publication number: 20220058024
Abstract: A method of performing out-of-order execution in a processing system comprising a processing unit and one or more accelerators comprises dispatching a plurality of coarse-grained instructions, each instruction extended to comprise one or more tags, wherein each tag comprises dependency information for the respective instruction expressed at a coarse-grained level. The method also comprises translating the plurality of coarse-grained instructions into a plurality of fine-grained instructions, wherein the dependency information is translated into dependencies expressed at a fine-grained level. Further, the method comprises resolving the dependencies at the fine-grained level and scheduling the plurality of fine-grained instructions for execution across the one or more accelerators in the processing system.
Type: Application
Filed: August 18, 2020
Publication date: February 24, 2022
Inventors: Yuanwei FANG, Fei SUN, Fei XUE, Yuejian XIE, Yuhao WANG, Yen-Kuang CHEN
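The dependency-tag translation this abstract describes can be sketched as follows. This is an illustrative sketch under stated assumptions, not the patented mechanism: the tag format, the chunk-wise narrowing of dependencies, and the list scheduler are all inventions of the example.

```python
def translate_and_schedule(coarse):
    """Translate coarse-grained instructions (with coarse dependency
    tags) into fine-grained instructions whose dependencies point at
    specific fine-grained producers, then issue any instruction whose
    dependencies are already resolved."""
    fine = []
    for ci in coarse:
        for chunk in range(ci["chunks"]):
            fine.append({
                "id": (ci["id"], chunk),
                # Fine-grained dependency: only the matching chunk of
                # each coarse dependency, not the whole instruction.
                "deps": [(d, chunk) for d in ci["deps"]],
            })
    done, order = set(), []
    # Simple list scheduler: repeatedly issue ready instructions.
    while fine:
        for fi in list(fine):
            if all(d in done for d in fi["deps"]):
                order.append(fi["id"])
                done.add(fi["id"])
                fine.remove(fi)
    return order

coarse = [
    {"id": "A", "deps": [], "chunks": 2},
    {"id": "B", "deps": ["A"], "chunks": 2},
]
order = translate_and_schedule(coarse)
```

Because B's chunk 0 depends only on A's chunk 0 after translation, it becomes ready as soon as that one producer finishes; this is the freedom an out-of-order scheduler exploits across accelerators.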
-
Publication number: 20220051086
Abstract: The present disclosure provides an accelerator for processing a vector or matrix operation. The accelerator comprises a vector processing unit comprising a plurality of computation units having circuitry configured to process a vector operation in parallel; a matrix multiplication unit comprising a first matrix multiplication operator, a second matrix multiplication operator, and an accumulator, the first matrix multiplication operator and the second matrix multiplication operator having circuitry configured to process a matrix operation and the accumulator having circuitry configured to accumulate output results of the first matrix multiplication operator and the second matrix multiplication operator; and a memory storing input data for the vector operation or the matrix operation and being configured to communicate with the vector processing unit and the matrix multiplication unit.
Type: Application
Filed: July 22, 2021
Publication date: February 17, 2022
Inventors: Fei XUE, Wei HAN, Yuhao WANG, Fei SUN, Lide DUAN, Shuangchen LI, Dimin NIU, Tianchan GUAN, Linyong HUANG, Zhaoyang DU, Hongzhong ZHENG
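The matrix multiplication unit's datapath (two matrix multiplication operators feeding one accumulator) can be shown numerically. This is only a sketch of the dataflow the abstract outlines; the operand shapes and element-wise accumulation are illustrative assumptions.

```python
import numpy as np

def dual_matmul_accumulate(a1, b1, a2, b2):
    """Run two matrix multiplications on separate operand pairs and
    accumulate their outputs, mirroring the two operators plus
    accumulator structure described in the abstract."""
    out1 = a1 @ b1      # first matrix multiplication operator
    out2 = a2 @ b2      # second matrix multiplication operator
    return out1 + out2  # accumulator sums both results

a = np.eye(2)
acc = dual_matmul_accumulate(a, a, a, a)
```

With identity inputs both operators produce the identity matrix, so the accumulator's output is twice the identity; in hardware this structure lets a large GEMM be split across the two operators and summed without a round trip to memory.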
-
Patent number: D946108
Type: Grant
Filed: November 12, 2019
Date of Patent: March 15, 2022
Assignee: KUNSHAN ECOWATER SYSTEMS CO., LTD.
Inventors: Miaoqiang Mei, Fei Xue, Chunxia Xu, Rui Feng, Min Feng