Patents by Inventor Hongsheng Wang
Hongsheng Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240355952
Abstract: The present invention discloses a solar-blind AlGaN ultraviolet (UV) photodetector and a preparation method thereof. The solar-blind AlGaN UV photodetector comprises a UV photodetector epitaxial wafer, including an undoped N-polar plane AlN buffer layer, a carbon-doped N-polar plane AlN layer, a carbon-doped N-polar plane composition-graded AlyGa1-yN layer, and an undoped N-polar plane AlxGa1-xN layer that are grown sequentially on a silicon substrate. It also comprises an insulating layer, an ohmic contact electrode, and a Schottky contact electrode arranged on the UV photodetector epitaxial wafer, as well as a SiNz passivation layer arranged on both sides of the UV photodetector epitaxial wafer, where x = 0.5-0.8, y = 0.75-0.95, and z = 1.33-1.5. The present invention realizes the preparation of a high-performance solar-blind AlGaN UV photodetector and improves the responsivity and detectivity of the AlGaN UV photodetector in the UV solar-blind band.
Type: Application
Filed: January 25, 2022
Publication date: October 24, 2024
Applicant: SOUTH CHINA UNIVERSITY OF TECHNOLOGY
Inventors: Wenliang WANG, Linhao LI, Guoqiang LI, Hongsheng JIANG
-
Publication number: 20240306943
Abstract: Provided is a human movement intelligent measurement and digital training system, which comprises N inertial navigation wearable devices, M cameras, a data comprehensive analysis device, and a terminal.
Type: Application
Filed: December 29, 2022
Publication date: September 19, 2024
Inventors: XiangTao MENG, Zheng XIANG, JiLin WANG, HongSheng GE
-
Publication number: 20240272494
Abstract: The present disclosure provides a display panel and a manufacturing method thereof. The display panel includes a special-shaped display region. The display panel further includes a base substrate and a plurality of pixels on a side of the base substrate, and the plurality of pixels includes an edge pixel whose orthographic projection on the base substrate overlaps with that of an edge of the special-shaped display region on the base substrate. The edge pixel is provided with a light-shielding pattern that divides the edge pixel into a light-shielding region and a light-transmitting region. The edge pixel has a predetermined gray scale and a brightness ratio, where the brightness ratio is the ratio of the brightness of the light-transmitting region to the brightness of the edge pixel when its entire region is light-transmitting, and the predetermined gray scale has an exponential function relationship with the brightness ratio.
Type: Application
Filed: April 22, 2024
Publication date: August 15, 2024
Applicants: Beijing BOE Optoelectronics Technology Co., Ltd., BOE Technology Group Co., Ltd.
Inventors: Xueyong ZHAI, Yao BI, Kangdi ZHOU, Donghua ZHANG, Jin GAO, Ce WANG, Xiaodong WANG, Jian WANG, Xingxing GUAN, Hongsheng BI
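The abstract states only that the predetermined gray scale of the edge pixel has an exponential-type relationship with its brightness ratio; it does not give the formula. As a purely illustrative sketch, the snippet below compensates an edge pixel's gray level using a standard gamma (power-law) display model; the gamma value of 2.2, the 8-bit gray range, and the function name are assumptions, not the patented relationship.

```python
GAMMA = 2.2          # assumed display gamma (not from the patent)
MAX_GRAY = 255       # assumed 8-bit gray scale

def edge_pixel_gray(target_gray, brightness_ratio):
    """Raise the gray level of an edge pixel whose light-transmitting region only
    passes `brightness_ratio` of the full-pixel brightness."""
    target_luminance = (target_gray / MAX_GRAY) ** GAMMA
    compensated = (target_luminance / brightness_ratio) ** (1 / GAMMA)
    return round(min(1.0, compensated) * MAX_GRAY)

print(edge_pixel_gray(128, 0.5))   # an edge pixel with 50% transmitting area needs a higher gray level
```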
-
Patent number: 12060208
Abstract: The present invention discloses a gradient slow-release active composite film and a preparation method thereof. The active composite film, from inside to outside, is composed of an antioxidative hygroscopic internal layer, at least one gradient anti-microbial antioxidative intermediate layer, and at least one waterproof external layer. The antioxidative hygroscopic internal layer is prepared from an alcohol-soluble protein/water-soluble protein substrate and a water-soluble antioxidant; the gradient anti-microbial antioxidative intermediate layer is prepared from an alcohol-soluble protein/water-soluble protein substrate, a lipid-soluble plant essential oil, and a water-soluble antioxidant, wherein the mass ratio of the alcohol-soluble protein to the water-soluble protein is (1-2):(1-2); and the waterproof external layer is composed of a hydrophobic alcohol-soluble protein layer.
Type: Grant
Filed: July 6, 2020
Date of Patent: August 13, 2024
Assignee: SOUTH CHINA AGRICULTURAL UNIVERSITY
Inventors: Jie Xiao, Xia Chen, Jiyang Cai, Hongsheng Liu, Wenbo Wang
-
Publication number: 20240247530
Abstract: A vehicle has a vehicle door system, and the vehicle door system includes a vehicle door, an actuating mechanism, and a controller. The actuating mechanism is connected to the vehicle door and is configured to control a state of the vehicle door. The controller is configured to control, according to a current working mode of the vehicle door system, the actuating mechanism to control the state of the vehicle door. The working modes include an electric mode, a suspended mode, and a manual mode. In the electric mode, the controller is configured to control the actuating mechanism to drive the vehicle door to open or close. In the suspended mode, the controller is configured to control the actuating mechanism to keep the vehicle door suspended. In the manual mode, the controller is configured to control, according to an external force acting on the vehicle door, the actuating mechanism to drive the vehicle door to move.
Type: Application
Filed: April 3, 2024
Publication date: July 25, 2024
Inventors: Jie GAO, Hongsheng Tian, Qianqian Zhou, Fangqin Li, Long Wang
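The three working modes map naturally onto a small state machine. The sketch below is a hedged illustration of that mapping; the mode names come from the abstract, while the controller class and the actuator methods (drive, hold, follow_force) are invented for the example.

```python
from enum import Enum, auto

class Mode(Enum):
    ELECTRIC = auto()
    SUSPENDED = auto()
    MANUAL = auto()

class DoorController:
    """Dispatches to the actuating mechanism according to the current working mode."""
    def __init__(self, actuator):
        self.actuator = actuator
        self.mode = Mode.SUSPENDED

    def update(self, command=None, external_force=0.0):
        if self.mode is Mode.ELECTRIC and command in ("open", "close"):
            self.actuator.drive(command)                 # drive the door to open or close
        elif self.mode is Mode.SUSPENDED:
            self.actuator.hold()                         # keep the door suspended in place
        elif self.mode is Mode.MANUAL:
            self.actuator.follow_force(external_force)   # move the door with the external force

class PrintActuator:
    def drive(self, cmd): print(f"driving door: {cmd}")
    def hold(self): print("holding door position")
    def follow_force(self, f): print(f"assisting manual motion, force={f:.1f} N")

ctrl = DoorController(PrintActuator())
ctrl.mode = Mode.ELECTRIC
ctrl.update(command="open")
```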
-
Patent number: 12039361
Abstract: The present disclosure discloses a method for executing a task. The method includes: a master computing device node in a computing cluster system receives a task code of a to-be-executed task; the master computing device node divides the to-be-executed task into subtasks and, for each subtask, determines the operators required to execute it based on the task code; the master computing device node then distributes the subtasks to computing nodes in the computing cluster system, such that each computing node generates an executable task subgraph from the operators required to execute the subtask distributed to it and the data transmission relationships between those operators, and runs the executable task subgraph to execute the to-be-executed task.
Type: Grant
Filed: October 25, 2023
Date of Patent: July 16, 2024
Assignee: ZHEJIANG LAB
Inventors: Hongsheng Wang, Guang Chen, Fei Wu, Feng Lin
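A minimal sketch of this flow is given below: the "task code" is modeled as a list of operator names, the split rule is a naive contiguous chunking, and graphlib's topological sort stands in for building an executable task subgraph from data-transmission relationships. All of those choices are assumptions made for illustration, not the patented implementation.

```python
from graphlib import TopologicalSorter

def master_divide(task_code, num_nodes):
    """Pretend the task code is an ordered list of operator names; split it into contiguous subtasks."""
    size = -(-len(task_code) // num_nodes)                 # ceiling division
    return [task_code[i:i + size] for i in range(0, len(task_code), size)]

def build_subgraph(operators, edges):
    """edges: (producer, consumer) data-transmission relationships between operators."""
    ts = TopologicalSorter({op: set() for op in operators})
    for src, dst in edges:
        if src in operators and dst in operators:          # keep only edges inside this subtask
            ts.add(dst, src)
    return list(ts.static_order())                          # executable order for this computing node

def run_subgraph(order):
    return [f"ran {op}" for op in order]                    # stand-in for real kernel execution

task_code = ["load", "conv", "relu", "pool", "fc", "softmax"]
edges = [("load", "conv"), ("conv", "relu"), ("relu", "pool"), ("pool", "fc"), ("fc", "softmax")]
for node_id, subtask in enumerate(master_divide(task_code, num_nodes=2)):
    print(node_id, run_subgraph(build_subgraph(subtask, edges)))
```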
-
Publication number: 20240127027
Abstract: Disclosed are an optimization method and apparatus for compiling a computation graph. The optimization method includes the following steps: step S1: converting the computation graph into an intermediate representation; step S2: analyzing dependency relationships; step S3: constructing a work stack; step S4: performing initialization to achieve a nonactivated state; step S5: popping out stack-top node elements and updating the input node set in the current round of iteration; step S6: adding the stack-top node elements that depend on step S5 to the stack-top position in sequence until the work stack is empty; step S7: implementing an intermediate representation in a fixed node state using a bit vector; and step S8: allocating registers for the effective tensor variables contained in the nodes of the intermediate representation in the fixed node state.
Type: Application
Filed: November 22, 2022
Publication date: April 18, 2024
Inventors: Hongsheng WANG, Shuibing HE, Guang CHEN
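Steps S3-S7 describe a stack-driven fixed-point iteration over the intermediate representation. The sketch below shows one plausible reading of that loop; the graph encoding, the use of Python sets in place of bit vectors, and the per-node "defs" input are assumptions made for the example.

```python
def worklist_analysis(succs, defs):
    """succs: node -> list of successor nodes; defs: node -> set of tensor variables it defines.
    Iterates a work stack until every node reaches a fixed (nonactivated) state."""
    preds = {n: [] for n in succs}
    for n, ss in succs.items():
        for s in ss:
            preds[s].append(n)

    in_sets = {n: set() for n in succs}        # "input node set" per node
    out_sets = {n: set() for n in succs}
    stack = list(succs)                        # work stack, initially all nodes

    while stack:
        n = stack.pop()                        # pop the stack-top node element
        new_in = set().union(*(out_sets[p] for p in preds[n])) if preds[n] else set()
        new_out = new_in | defs[n]
        in_sets[n] = new_in
        if new_out != out_sets[n]:             # state changed: dependent nodes must be revisited
            out_sets[n] = new_out
            for s in succs[n]:
                if s not in stack:
                    stack.append(s)
    # registers would then be allocated to the variables recorded in the fixed sets
    return in_sets, out_sets

succs = {"a": ["b"], "b": ["c"], "c": []}
defs = {"a": {"x"}, "b": {"y"}, "c": set()}
print(worklist_analysis(succs, defs)[0])       # node "c" ends up with {"x", "y"} flowing in
```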
-
Publication number: 20240118897
Abstract: Disclosed are an instruction execution method and apparatus for graph computation. The method includes the following steps: S1: sending operators of each node in a computational graph used for neural network computation to an operator interpreter; S2: building, by the operator interpreter, instructions in operation; S3: defining an instruction dependency relationship; S4: building an instruction dependency relationship graph; S5: building a topological order of parallel instructions; S6: scheduling the parallel instructions to hardware resources; S7: building shortest schedules for the parallel instructions: the shortest time required to execute the parallel instructions under the condition of limited hardware resources; and S8: releasing the completed instructions.
Type: Application
Filed: November 30, 2022
Publication date: April 11, 2024
Inventors: Hongsheng WANG, Guang CHEN, Lingfang ZENG, Aimin PAN
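Steps S4-S8 amount to scheduling a dependency graph of instructions onto a limited number of execution units. The sketch below uses greedy topological list scheduling as a stand-in; the instruction durations, the number of units, and the greedy policy are assumptions, and the patented method may build its shortest schedules differently.

```python
import heapq

def list_schedule(durations, deps, num_units):
    """durations: instr -> cycles; deps: instr -> set of prerequisite instrs.
    Greedy topological list scheduling onto `num_units` identical units; returns finish times."""
    indeg = {i: len(deps.get(i, ())) for i in durations}
    dependents = {i: [] for i in durations}
    for i, ds in deps.items():
        for d in ds:
            dependents[d].append(i)

    ready = [i for i, d in indeg.items() if d == 0]
    running = []                               # heap of (finish_time, instr)
    finish = {}
    time = 0
    while ready or running:
        while ready and len(running) < num_units:
            i = ready.pop()                    # issue a ready instruction to an idle unit
            heapq.heappush(running, (time + durations[i], i))
        time, i = heapq.heappop(running)       # next instruction to complete
        finish[i] = time                       # release the completed instruction
        for j in dependents[i]:
            indeg[j] -= 1
            if indeg[j] == 0:
                ready.append(j)
    return finish

durations = {"load": 2, "matmul": 3, "relu": 1, "store": 1}
deps = {"matmul": {"load"}, "relu": {"matmul"}, "store": {"relu"}}
print(list_schedule(durations, deps, num_units=2))
```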
-
Publication number: 20240104016
Abstract: The disclosure discloses an intermediate representation method for compiling computation graphs, including: step 1: compiling a neural network into a computation graph for neural network computation; step 2: constructing a node for each tensor variable in the computation graph; step 3: associating the node representing the tensor variable in the computation graph to a set of pointers to the tensor variable; step 4: analyzing constraint relationships between the tensor variables in the computation graph; step 5: iteratively constructing a topological graph of the intermediate representation based on the constraint relationships between the tensor variables in the computation graph; and step 6: analyzing the tensor variables with different aliases pointing to a same memory location based on the intermediate representation, and allocating a register for the tensor variables with different aliases.
Type: Application
Filed: November 30, 2022
Publication date: March 28, 2024
Inventors: Hongsheng WANG, Aimin PAN, Guang CHEN
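Step 6 hinges on recognizing tensor variables that are aliases of the same memory location and then allocating a register per alias class. A hedged sketch of that grouping step is shown below using union-find; the input format (explicit "same memory" pairs) and the register naming are assumptions for illustration.

```python
def group_aliases(variables, same_memory_pairs):
    """Group tensor variables that point to the same memory location (union-find),
    then hand out one illustrative register per alias class."""
    parent = {v: v for v in variables}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]      # path halving
            v = parent[v]
        return v

    for a, b in same_memory_pairs:
        parent[find(a)] = find(b)              # merge the two alias classes

    groups = {}
    for v in variables:
        groups.setdefault(find(v), []).append(v)
    registers = {root: f"r{i}" for i, root in enumerate(groups)}   # hypothetical register names
    return registers, groups

variables = ["t0", "t1", "t2", "t3"]
regs, groups = group_aliases(variables, [("t0", "t1"), ("t2", "t3")])
print(groups)    # two alias classes -> two registers
```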
-
Publication number: 20240104341
Abstract: A memory optimization method includes: compiling a neural network into a computational graph for neural network computation on a computer; transforming the computational graph into a topological graph; constructing a life cycle relationship graph of tensor variables in the computational graph and analyzing the life cycle relationships among the tensor variables in a node of the computational graph; iteratively merging those tensor variables connected by lines of the second type and caching into a memory any tensor variable that goes beyond the number of idle registers and is not allocated to a register, until all tensor variables that go beyond the number of the idle registers and are not allocated to registers are cached into the memory; and caching any node of the life cycle relationship graph with a degree smaller than the number of registers into a stack.
Type: Application
Filed: November 22, 2022
Publication date: March 28, 2024
Inventors: Hongsheng WANG, Guang CHEN
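The life cycle relationship graph plus the rule that nodes whose degree is smaller than the number of registers go onto a stack is close in spirit to classic graph-coloring register allocation. The sketch below is a simplified Chaitin-style pass used as an illustration; the interference-graph input, the spill heuristic, and the color numbering are assumptions rather than the patented procedure.

```python
def color_lifecycle_graph(interference, num_regs):
    """interference: var -> set of vars whose life cycles overlap.
    Push low-degree nodes (degree < num_regs) onto a stack, spill the rest to memory,
    then pop and assign register colors."""
    graph = {v: set(ns) for v, ns in interference.items()}
    stack, spilled = [], set()
    while graph:
        low = next((v for v in graph if len(graph[v]) < num_regs), None)
        v = low if low is not None else max(graph, key=lambda x: len(graph[x]))
        if low is None:
            spilled.add(v)                     # cache this tensor variable into memory
        else:
            stack.append(v)                    # degree smaller than the register count: onto the stack
        for n in graph.pop(v):
            graph[n].discard(v)

    colors = {}
    for v in reversed(stack):                  # pop the stack and color
        used = {colors[n] for n in interference[v] if n in colors}
        colors[v] = next(c for c in range(num_regs) if c not in used)
    return colors, spilled

interference = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": set()}
print(color_lifecycle_graph(interference, num_regs=2))   # one variable is spilled to memory
```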
-
Publication number: 20240104395
Abstract: Disclosed are a memory optimization method and device oriented to neural network computing. The memory optimization method oriented to neural network computing includes the following steps: step S1: reconstructing a computation graph into a topological structure computation graph; step S2: constructing life cycle intervals for tensor variables; step S3: constructing a scanning line over the life cycle intervals; step S4: allocating the tensor variables to idle registers; step S5: transferring tensor variables exceeding the required number of registers to a memory; step S6: allocating registers freed by expired life cycle intervals to tensor variables exceeding the required number of registers; and step S7: adding tensor variables transferred to the memory back to the life cycle intervals in an activated state, and allocating idle registers for them. According to the present disclosure, the memory of a data flow of a computation graph for neural network computing is optimized.
Type: Application
Filed: December 1, 2022
Publication date: March 28, 2024
Inventors: Hongsheng WANG, Guang CHEN
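Scanning a set of life cycle intervals, giving idle registers to live intervals, reclaiming registers when intervals expire, and moving the overflow to memory is the shape of classic linear-scan register allocation. The sketch below uses that classic algorithm as a stand-in; the interval format and the spill-farthest-end heuristic are assumptions, not necessarily the patented variant.

```python
def linear_scan(intervals, num_regs):
    """intervals: var -> (start, end). Classic linear-scan allocation used as a sketch of
    steps S2-S7; variables that cannot get a register are spilled to memory."""
    free = [f"r{i}" for i in range(num_regs)]
    active = []                                    # (end, var), kept sorted by interval end
    assignment, spilled = {}, []
    for var, (start, end) in sorted(intervals.items(), key=lambda kv: kv[1][0]):
        # expire old intervals: their registers become idle again
        while active and active[0][0] < start:
            _, done = active.pop(0)
            free.append(assignment[done])
        if free:
            assignment[var] = free.pop()
        else:
            # no idle register: spill whichever live interval ends farthest in the future
            last_end, last_var = active[-1]
            if last_end > end:
                assignment[var] = assignment.pop(last_var)
                spilled.append(last_var)           # transferred to memory
                active.pop()
            else:
                spilled.append(var)
                continue
        active.append((end, var))
        active.sort()
    return assignment, spilled

intervals = {"t0": (0, 4), "t1": (1, 3), "t2": (2, 6), "t3": (5, 7)}
print(linear_scan(intervals, num_regs=2))
```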
-
Publication number: 20240102793
Abstract: An on-line measurement-error correction device and method for the inner profile of a special-shaped shell, including a fixing device. A vertical moving device is fixed at the top of the fixing device and connected with a horizontal moving device, which in turn is connected with a distance monitoring device; the distance monitoring device is movable vertically and horizontally under the drive of the vertical and horizontal moving devices. The distance monitoring device includes a displacement monitoring element fixedly arranged on a fixing support hinged with an electric push rod, which is configured to displace so as to drive the displacement monitoring element to deflect and thereby change the monitoring direction. Because the displacement monitoring element is driven to deflect by the electric push rod of the distance monitoring device to change the monitoring direction, the distance between each longitudinal section surface point on the inner surface of the special-shaped shell and the displacement monitoring element can be gradually measured.
Type: Application
Filed: March 15, 2023
Publication date: March 28, 2024
Applicants: SHANDONG UNIVERSITY, SHANDONG RESEARCH AND DESIGN INSTITUTE OF INDUSTRIAL CERAMICS CO., LTD.
Inventors: Qinghua SONG, Xiaojuan WANG, Liping JIANG, Hongsheng WANG, Qiang LUAN, Zhanqiang LIU, Yicong DU
-
Patent number: 11941514
Abstract: The present disclosure discloses a method for execution of a computational graph in a neural network model and an apparatus thereof, including: creating task execution bodies on a native machine according to a physical computational graph compiled and generated by a deep learning framework, and designing a solution for allocating a plurality of idle memory blocks to each task execution body, so that the entire computational graph participates in deep learning training tasks of different batches of data in a pipelining and parallelizing manner.
Type: Grant
Filed: March 29, 2022
Date of Patent: March 26, 2024
Assignee: ZHEJIANG LAB
Inventors: Hongsheng Wang, Hujun Bao, Guang Chen, Lingfang Zeng, Hongcai Cheng, Yong Li, Jian Zhu, Huanbo Zheng
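The key idea, several idle memory blocks per task execution body so that successive batches can flow through the graph concurrently, can be illustrated with bounded queues between pipeline stages. The sketch below is only an analogy under assumed stage names and block counts; it is not the patented runtime.

```python
import queue
import threading

NUM_BLOCKS = 2                                  # assumed number of idle memory blocks per execution body

def execution_body(name, fn, inbox, outbox):
    """One task execution body: pull a batch, process it, pass the result downstream."""
    while True:
        batch = inbox.get()
        if batch is None:                       # shutdown signal: forward it and stop
            if outbox is not None:
                outbox.put(None)
            return
        result = fn(batch)
        print(f"{name} processed batch {batch}")
        if outbox is not None:
            outbox.put(result)

# Bounded queues model the finite pools of idle memory blocks between bodies:
# a stage can run ahead of the next one by at most NUM_BLOCKS batches.
q_fwd = queue.Queue(maxsize=NUM_BLOCKS)
q_bwd = queue.Queue(maxsize=NUM_BLOCKS)

threads = [
    threading.Thread(target=execution_body, args=("forward", lambda b: b, q_fwd, q_bwd)),
    threading.Thread(target=execution_body, args=("backward", lambda b: b, q_bwd, None)),
]
for t in threads:
    t.start()
for batch in range(4):                          # different batches of data enter the pipeline
    q_fwd.put(batch)
q_fwd.put(None)
for t in threads:
    t.join()
```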
-
Patent number: 11941532
Abstract: Disclosed is a method for adapting a deep learning framework to a hardware device based on a unified backend engine, which comprises the following steps: S1: adding the unified backend engine to the deep learning framework; S2: adding the unified backend engine to the hardware device; S3: converting a computational graph, wherein the computational graph compiled and generated by the deep learning framework is converted into an intermediate representation of the unified backend engine; S4: compiling the intermediate representation, wherein the unified backend engine compiles the intermediate representation on the hardware device to generate an executable object; S5: running the executable object, wherein the deep learning framework runs the executable object on the hardware device; and S6: managing memory of the unified backend engine.
Type: Grant
Filed: April 22, 2022
Date of Patent: March 26, 2024
Assignee: ZHEJIANG LAB
Inventors: Hongsheng Wang, Wei Hua, Hujun Bao, Fei Yang
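One way to picture steps S3-S6 is as a backend interface with convert, compile, run, and memory-management hooks. The sketch below is such an interface with a trivial CPU backend; every class and method name here is an assumption for illustration, not the actual unified backend engine API.

```python
from abc import ABC, abstractmethod

class UnifiedBackendEngine(ABC):
    """Sketch of a backend interface mirroring steps S3-S6 of the abstract."""

    @abstractmethod
    def to_ir(self, computational_graph):
        """S3: convert the framework's computational graph into the backend IR."""

    @abstractmethod
    def compile(self, ir, device):
        """S4: compile the IR for the hardware device into an executable object."""

    @abstractmethod
    def run(self, executable, inputs):
        """S5: let the framework run the executable object on the device."""

    @abstractmethod
    def allocate(self, nbytes):
        """S6: backend-managed device memory (allocation half)."""

    @abstractmethod
    def free(self, buffer):
        """S6: backend-managed device memory (release half)."""

class DummyCPUEngine(UnifiedBackendEngine):
    """Trivial reference backend: the 'IR' is just an ordered list of Python callables."""

    def to_ir(self, computational_graph):
        return list(computational_graph)

    def compile(self, ir, device):
        def executable(x):
            for op in ir:                        # run the ops in order, feeding outputs forward
                x = op(x)
            return x
        return executable

    def run(self, executable, inputs):
        return executable(inputs)

    def allocate(self, nbytes):
        return bytearray(nbytes)                 # stand-in for a device buffer

    def free(self, buffer):
        pass                                     # a real engine would return the buffer to its pool

engine = DummyCPUEngine()
exe = engine.compile(engine.to_ir([lambda x: x + 1, lambda x: x * 2]), device="cpu")
print(engine.run(exe, 3))                        # (3 + 1) * 2 = 8
```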
-
Patent number: 11941507
Abstract: Disclosed are a data flow method and apparatus for neural network computation. The data flow method for neural network computation includes initializing the lifecycle of a variable in a computational graph; and defining a propagation rule for a variable in use to flow through a node. A definition of the variable is produced at a precursor node of the node, such that an input set of valid variables flowing through the node contains the variable. The method may be used on neural network computation in a deep learning training system.
Type: Grant
Filed: September 27, 2022
Date of Patent: March 26, 2024
Assignee: ZHEJIANG LAB
Inventors: Hongsheng Wang, Guang Chen
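The propagation rule, a variable used at a node must appear in the set of valid variables flowing into that node, and that requirement reaches back to its precursor nodes, matches classic live-variable analysis. The sketch below is that textbook analysis used as an illustration; the graph encoding and the use/def dictionaries are assumptions.

```python
def live_variables(preds, uses, defs):
    """preds: node -> list of precursor nodes; uses/defs: node -> set of variables.
    Backward fixed-point iteration: a variable used at a node is live on entry to that
    node, and that liveness propagates to the node's precursors."""
    live_in = {n: set() for n in preds}
    live_out = {n: set() for n in preds}
    changed = True
    while changed:
        changed = False
        for n in preds:
            succ_out = set()
            for m in preds:                      # successors of n: nodes listing n as a precursor
                if n in preds[m]:
                    succ_out |= live_in[m]
            new_in = uses[n] | (succ_out - defs[n])
            if new_in != live_in[n] or succ_out != live_out[n]:
                live_in[n], live_out[n] = new_in, succ_out
                changed = True
    return live_in, live_out

preds = {"n1": [], "n2": ["n1"], "n3": ["n2"]}
uses = {"n1": set(), "n2": {"x"}, "n3": {"y"}}
defs = {"n1": {"x"}, "n2": {"y"}, "n3": set()}
live_in, _ = live_variables(preds, uses, defs)
print(live_in)   # "x" is required at the input of n2, so its precursor n1 must define it
```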
-
Patent number: 11934887
Abstract: The present disclosure discloses a distributed model compilation system. A master node of the system determines the logic calculation graph of the model based on model information, divides the logic calculation graph into multiple logic calculation sub-graphs, generates a distributing message for each logic calculation sub-graph, and then transmits the distributing message to a slave node. Each of the slave nodes allocates a local computing resource to compile the logic calculation sub-graph based on the received distributing message, and transmits compilation completion information to the master node. The master node determines the completion of model compilation based on the compilation completion information returned by each slave node, and executes the target work based on the compiled model.
Type: Grant
Filed: September 13, 2023
Date of Patent: March 19, 2024
Assignee: ZHEJIANG LAB
Inventors: Hongsheng Wang, Fei Wu, Guang Chen, Feng Lin
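The master-side control flow, split the logic calculation graph, dispatch a sub-graph per slave, wait for compilation-completion messages, then run the target work, can be sketched with a thread pool standing in for the slave nodes. The splitting rule, the fake "kernel" strings, and the executor-based transport below are assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def split_logic_graph(nodes, num_slaves):
    """Naive even split of the logic calculation graph into sub-graphs."""
    return [nodes[i::num_slaves] for i in range(num_slaves)]

def compile_subgraph(slave_id, subgraph):
    """Stand-in for a slave node compiling its sub-graph with local computing resources."""
    compiled = [f"kernel<{op}>" for op in subgraph]
    return {"slave": slave_id, "compiled": compiled, "status": "done"}   # compilation completion info

def master_compile(nodes, num_slaves=3):
    subgraphs = split_logic_graph(nodes, num_slaves)
    with ThreadPoolExecutor(max_workers=num_slaves) as pool:
        futures = [pool.submit(compile_subgraph, i, sg) for i, sg in enumerate(subgraphs)]
        reports = [f.result() for f in futures]
    # the master declares compilation complete only when every slave has reported back
    assert all(r["status"] == "done" for r in reports)
    return reports

print(master_compile(["matmul", "relu", "matmul", "softmax", "add"]))
```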
-
Patent number: 11915135
Abstract: The disclosure discloses a graph optimization method and apparatus for neural network computation. The graph optimization method includes the following steps: S1: converting a computation graph; S2: allocating a register; S3: defining a route selector for a redefined variable; S4: solving the route selector for the redefined variable; S5: defining a criterion of inserting the route selector for the redefined variable into a node; S6: analyzing a dominating edge set of the node for the redefined variable; S7: inserting the route selector for the redefined variable; and S8: renaming the redefined variable. The disclosure solves the problem of the corresponding route selection on a correct definition of the redefined variable when a node including the redefined variable in a computation graph in the compiling period flows through multiple paths of computation flow, reduces the memory cost and promotes the development of implementation application of a deep neural network model.
Type: Grant
Filed: September 21, 2022
Date of Patent: February 27, 2024
Assignee: ZHEJIANG LAB
Inventors: Hongsheng Wang, Guang Chen
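Route selectors inserted where multiple definitions of a redefined variable meet, guided by a dominating edge set, closely parallel SSA phi-function placement via dominance frontiers. The sketch below illustrates that reading of steps S5-S7; the precomputed dominance frontiers and the diamond-shaped example graph are assumptions, not the patent's own construction.

```python
def insert_route_selectors(def_nodes, dominance_frontier):
    """def_nodes: nodes where the variable is (re)defined.
    Returns the nodes that need a route selector, via iterated dominance frontiers."""
    work = list(def_nodes)
    has_selector = set()
    while work:
        n = work.pop()
        for m in dominance_frontier.get(n, ()):
            if m not in has_selector:
                has_selector.add(m)            # multiple definitions of the variable meet here
                if m not in def_nodes:         # the selector itself acts as a new definition
                    work.append(m)
    return has_selector

# Diamond-shaped computation flow: A -> B, A -> C, B -> D, C -> D.
dominance_frontier = {"A": set(), "B": {"D"}, "C": {"D"}, "D": set()}
print(insert_route_selectors({"B", "C"}, dominance_frontier))   # {'D'}: both paths redefine the variable
```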
-
Publication number: 20240054319
Abstract: Disclosed are a data flow method and apparatus for neural network computation. The method includes: step 1, initializing the lifecycle of a variable in a computational graph, i.e., initializing a time period from the start of a definition of the variable to the end of use as the lifecycle of the variable in the computational graph; and step 2, defining a propagation rule for a variable in use to flow through a node, i.e., defining that in the case that a variable at a certain node in the computational graph is used, a definition of the variable is produced at a precursor node of the node, such that an input set of valid variables flowing through the node contains the variable. The application discloses a data flow modeling method and apparatus for neural network computation in a deep learning training system.
Type: Application
Filed: September 27, 2022
Publication date: February 15, 2024
Inventors: Hongsheng WANG, Guang CHEN
-
Publication number: 20240028886
Abstract: The disclosure discloses a graph optimization method and apparatus for neural network computation. The graph optimization method includes the following steps: S1: converting a computation graph; S2: allocating a register; S3: defining a route selector for a redefined variable; S4: solving the route selector for the redefined variable; S5: defining a criterion of inserting the route selector for the redefined variable into a node; S6: analyzing a dominating edge set of the node for the redefined variable; S7: inserting the route selector for the redefined variable; and S8: renaming the redefined variable. The disclosure solves the problem of the corresponding route selection on a correct definition of the redefined variable when a node including the redefined variable in a computation graph in the compiling period flows through multiple paths of computation flow, reduces the memory cost and promotes the development of implementation application of a deep neural network model.
Type: Application
Filed: September 21, 2022
Publication date: January 25, 2024
Inventors: Hongsheng WANG, Guang CHEN
-
Patent number: 11861505
Abstract: The disclosure discloses a method of executing a dynamic graph for neural network computation and an apparatus thereof. The method of executing the dynamic graph includes the following steps: S1: constructing and distributing an operator and a tensor; S2: deducing an operator executing process by an operator interpreter; S3: constructing an instruction of a virtual machine at runtime by the operator interpreter; S4: sending the instruction to the virtual machine at runtime by the operator interpreter; S5: scheduling the instruction by the virtual machine; and S6: releasing an executed instruction by the virtual machine. According to the method and apparatus provided by the disclosure, runtime is abstracted as a virtual machine; through the interpreter, the virtual machine acquires the sub-graph constructed by the user at each step in real time, and it schedules, issues, and executes each sub-graph.
Type: Grant
Filed: June 6, 2022
Date of Patent: January 2, 2024
Assignee: ZHEJIANG LAB
Inventors: Hongsheng Wang, Hujun Bao, Guang Chen
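Steps S3-S6, interpreter builds instructions, the virtual machine schedules them, executes them, and releases them, can be illustrated with a toy instruction queue. The Instruction fields, the dict-based value store, and the requeue-until-ready policy below are assumptions made for the sketch, not the patented virtual machine.

```python
from collections import deque

class Instruction:
    """An instruction built by the operator interpreter: an op plus its input/output tensor names."""
    def __init__(self, name, fn, inputs, output):
        self.name, self.fn, self.inputs, self.output = name, fn, inputs, output

class VirtualMachine:
    def __init__(self):
        self.pending = deque()
        self.values = {}                         # tensor name -> computed value

    def receive(self, instr):                    # the interpreter sends instructions at runtime
        self.pending.append(instr)

    def run(self):
        while self.pending:
            instr = self.pending.popleft()
            if all(i in self.values for i in instr.inputs):    # dependencies satisfied: execute
                args = [self.values[i] for i in instr.inputs]
                self.values[instr.output] = instr.fn(*args)
                # the executed instruction is released here (nothing retains it)
            else:
                self.pending.append(instr)       # not ready yet: reschedule (a real VM would block/signal)

vm = VirtualMachine()
vm.receive(Instruction("add", lambda a, b: a + b, ["x", "y"], "t0"))
vm.receive(Instruction("mul", lambda a, b: a * b, ["t0", "y"], "t1"))
vm.values.update({"x": 2, "y": 3})
vm.run()
print(vm.values["t1"])                           # (2 + 3) * 3 = 15
```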