Patents by Inventor Guang Chen

Guang Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240127027
    Abstract: Disclosed are an optimization method and apparatus for compiling a computation graph. The optimization method includes the following steps: step S1: converting a computation graph into an intermediate representation; step S2: analyzing a dependency relationship; step S3: constructing a work stack; step S4: performing initialization to achieve a non-activated state; step S5: popping out stack top node elements, and updating an input node set in the current round of iteration; step S6: adding the stack top node elements that depend on step S5 to the stack top position in sequence until the work stack is empty; step S7: implementing an intermediate representation in a fixed node state using a bit vector; and step S8: allocating registers for effective tensor variables contained in nodes of the intermediate representation in the fixed node state.
    Type: Application
    Filed: November 22, 2022
    Publication date: April 18, 2024
    Inventors: Hongsheng WANG, Shuibing HE, Guang CHEN
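The steps above resemble a classic worklist-based liveness fixed point over a graph IR, with the per-node state stored as bit vectors. A minimal sketch under that reading (the function, graph representation, and names are illustrative assumptions, not taken from the patent):

```python
# Hypothetical sketch: worklist-based liveness fixed point over a
# computation-graph IR, using Python ints as bit vectors (one bit per tensor).
def liveness(nodes, succs, use, defs):
    """nodes: node ids; succs[n]: successor ids;
    use[n]/defs[n]: bit vectors (ints) of tensors used/defined at n."""
    live_in = {n: 0 for n in nodes}
    live_out = {n: 0 for n in nodes}
    # Work stack initialized with every node in a non-activated state (S3-S4).
    stack = list(nodes)
    on_stack = set(nodes)
    preds = {n: [] for n in nodes}
    for n in nodes:
        for s in succs[n]:
            preds[s].append(n)
    while stack:                      # S5-S6: pop, update, push dependents
        n = stack.pop()
        on_stack.discard(n)
        live_out[n] = 0
        for s in succs[n]:
            live_out[n] |= live_in[s]
        new_in = use[n] | (live_out[n] & ~defs[n])
        if new_in != live_in[n]:      # changed: predecessors must be revisited
            live_in[n] = new_in
            for p in preds[n]:
                if p not in on_stack:
                    stack.append(p)
                    on_stack.add(p)
    return live_in, live_out          # S7: fixed-point state as bit vectors
```

Once the fixed point is reached, the bit vectors identify the tensor variables that are live at each node, which is the input register allocation (S8) would need.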
  • Patent number: 11959012
    Abstract: The present disclosure provides a temperature-resistant and salt-resistant modified nano-graphite dispersed particle gel system with a strong self-growth effect.
    Type: Grant
    Filed: September 14, 2023
    Date of Patent: April 16, 2024
    Assignees: China University of Petroleum (East China), CNOOC Energy Development Co., Ltd. Engineering Branch, Tianjin Branch of China National Offshore Oil Corporation Ltd.
    Inventors: Guang Zhao, Caili Dai, Dongfang Lyu, Jiaming Li, Kequan Meng, Ju Zheng, Yanxu Zhang, Weiyu Chen, Liguo Zhu, Yanhui Zhang
  • Publication number: 20240118897
    Abstract: Disclosed are an instruction execution method and apparatus for graph computation. The method includes the following steps: S1: sending operators of each node in a computational graph used for neural network computation to an operator interpreter; S2: building, by the operator interpreter, instructions in operation; S3: defining an instruction dependency relationship; S4: building an instruction dependency relationship graph; S5: building a topological order of parallel instructions; S6: scheduling the parallel instructions to hardware resources; S7: building the shortest schedules for the parallel instructions, i.e., the shortest time required to execute the parallel instructions under the condition of limited hardware resources; and S8: releasing the completed instructions.
    Type: Application
    Filed: November 30, 2022
    Publication date: April 11, 2024
    Inventors: Hongsheng WANG, Guang CHEN, Lingfang ZENG, Aimin PAN
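Steps S3-S6 describe building an instruction dependency graph and issuing independent instructions in topological order. A minimal sketch of that idea (the greedy per-step issue policy and all names are assumptions, not the patented method):

```python
from collections import deque

# Hypothetical sketch of S3-S6/S8: build an instruction dependency graph,
# derive a topological order, and greedily issue instructions onto a limited
# number of parallel hardware units, releasing them as they complete.
def topo_schedule(instrs, deps, num_units):
    """instrs: instruction ids; deps[i]: ids that i depends on;
    num_units: parallel hardware resources. Returns a list of time
    steps, each a list of instructions issued in that step."""
    indeg = {i: len(deps[i]) for i in instrs}
    dependents = {i: [] for i in instrs}
    for i in instrs:
        for d in deps[i]:
            dependents[d].append(i)
    ready = deque(i for i in instrs if indeg[i] == 0)
    schedule = []
    while ready:
        step = []
        # Issue at most num_units independent instructions per step (S6).
        for _ in range(min(num_units, len(ready))):
            step.append(ready.popleft())
        schedule.append(step)
        for done in step:             # S8: release completed instructions
            for j in dependents[done]:
                indeg[j] -= 1
                if indeg[j] == 0:
                    ready.append(j)
    return schedule
```

With two hardware units, two independent instructions issue together in the first step, and their dependents follow in later steps.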
  • Publication number: 20240110089
    Abstract: The present disclosure provides a temperature-resistant and salt-resistant modified nano-graphite dispersed particle gel system with a strong self-growth effect.
    Type: Application
    Filed: September 14, 2023
    Publication date: April 4, 2024
    Applicants: China University of Petroleum (East China), CNOOC Energy Development Co., Ltd. Engineering Branch, Tianjin Branch of China National Offshore Oil Corporation Ltd.
    Inventors: Guang ZHAO, Caili DAI, Dongfang LYU, Jiaming LI, Kequan MENG, Ju ZHENG, Yanxu ZHANG, Weiyu CHEN, Liguo ZHU, Yanhui ZHANG
  • Publication number: 20240104016
    Abstract: The disclosure discloses an intermediate representation method for compiling computation graphs, including: step 1: compiling a neural network into a computation graph for neural network computation; step 2: constructing a node for each tensor variable in the computation graph; step 3: associating the node representing the tensor variable in the computation graph to a set of pointers to the tensor variable; step 4: analyzing constraint relationships between the tensor variables in the computation graph; step 5: iteratively constructing a topological graph of the intermediate representation based on the constraint relationships between the tensor variables in the computation graph; and step 6: analyzing the tensor variables with different aliases pointing to a same memory location based on the intermediate representation, and allocating a register for the tensor variables with different aliases.
    Type: Application
    Filed: November 30, 2022
    Publication date: March 28, 2024
    Inventors: Hongsheng WANG, Aimin PAN, Guang CHEN
  • Publication number: 20240104341
    Abstract: A memory optimization method includes: compiling a neural network into a computational graph for neural network computation on a computer; transforming the computational graph into a topological graph; constructing a life cycle relationship graph of tensor variables in the computational graph; and analyzing a life cycle relationship among tensor variables in a node of the computational graph; iteratively merging those tensor variables connected by lines of the second type and caching into a memory any tensor variable that goes beyond a number of idle registers and is not allocated to a register, until all tensor variables that go beyond the number of the idle registers and are not allocated to registers are cached into the memory; caching any node of the life cycle relationship graph with a degree smaller than a number of registers into a stack.
    Type: Application
    Filed: November 22, 2022
    Publication date: March 28, 2024
    Inventors: Hongsheng WANG, Guang CHEN, Lingfang ZENG
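The final step, caching any node whose degree is below the register count into a stack, matches the simplify phase of Chaitin-style graph coloring over an interference graph. A minimal sketch under that interpretation (illustrative, not the patented algorithm):

```python
# Hypothetical sketch: repeatedly remove ("cache into a stack") any node of
# the life-cycle interference graph whose degree is below the register count,
# then pop the stack to assign registers (Chaitin-style simplify/select).
def color_registers(adj, k):
    """adj: {var: set of interfering vars}; k: number of registers.
    Returns ({var: register index}, set of spilled vars)."""
    adj = {v: set(ns) for v, ns in adj.items()}
    stack, spilled = [], set()
    work = set(adj)
    while work:
        low = next((v for v in work if len(adj[v] & work) < k), None)
        if low is None:     # no simplifiable node: cache one into memory
            low = max(work, key=lambda v: len(adj[v] & work))
            spilled.add(low)
        else:               # degree < k: guaranteed colorable, push onto stack
            stack.append(low)
        work.discard(low)
    colors = {}
    while stack:            # pop and pick the lowest register unused by neighbors
        v = stack.pop()
        used = {colors[n] for n in adj[v] if n in colors}
        colors[v] = next(c for c in range(k) if c not in used)
    return colors, spilled
```

Interfering variables (overlapping life cycles) end up in different registers; variables that cannot be colored are the ones cached into memory.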
  • Publication number: 20240104395
    Abstract: Disclosed are a memory optimization method and device oriented to neural network computing. The memory optimization method oriented to neural network computing includes the following steps: step S1: reconstructing a computation graph into a topological structure computation graph; step S2: constructing a life cycle interval about tensor variables; step S3: constructing a scanning line about the life cycle interval; step S4: allocating the tensor variables to idle registers; step S5: allocating memory to tensor variables exceeding the required number of registers; step S6: allocating registers allocated in the expired life cycle interval to tensor variables exceeding the required number of registers; and step S7: adding tensor variables transferred to a memory back to the life cycle interval in an activated state, and allocating idle registers for the tensor variables. According to the present disclosure, the memory of a data flow of a computation graph for neural network computing is optimized.
    Type: Application
    Filed: December 1, 2022
    Publication date: March 28, 2024
    Inventors: Hongsheng WANG, Guang CHEN
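Steps S2-S7 read like a linear-scan register allocator: life cycle intervals are scanned in start order, expired intervals release their registers, and overflow variables are transferred to memory. A minimal sketch under that reading (names are illustrative assumptions, not the patented method):

```python
# Hypothetical sketch of the scan-line allocation in S2-S7: sort life-cycle
# intervals by start, expire intervals that have ended to reclaim registers,
# and spill to memory when none are free (linear-scan register allocation).
def linear_scan(intervals, k):
    """intervals: {var: (start, end)}; k: number of registers.
    Returns ({var: register}, set of vars transferred to memory)."""
    free = list(range(k))
    active = []                       # (end, var) pairs, kept sorted by end
    alloc, spilled = {}, set()
    for var, (start, end) in sorted(intervals.items(), key=lambda x: x[1][0]):
        # S6: expire intervals ending before this start and reclaim registers
        while active and active[0][0] < start:
            _, done = active.pop(0)
            free.append(alloc[done])
        if free:                      # S4: allocate an idle register
            alloc[var] = free.pop(0)
            active.append((end, var))
            active.sort()
        else:                         # S5: transfer to memory
            spilled.add(var)
    return alloc, spilled
```

With a single register, two overlapping intervals force one variable to memory, while a later non-overlapping interval reuses the reclaimed register.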
  • Patent number: 11941514
    Abstract: The present disclosure discloses a method for execution of a computational graph in a neural network model and an apparatus thereof, including: creating task execution bodies on a native machine according to a physical computational graph compiled and generated by a deep learning framework, and designing a solution for allocating a plurality of idle memory blocks to each task execution body, so that the entire computational graph participates in deep learning training tasks of different batches of data in a pipelining and parallelizing manner.
    Type: Grant
    Filed: March 29, 2022
    Date of Patent: March 26, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Hujun Bao, Guang Chen, Lingfang Zeng, Hongcai Cheng, Yong Li, Jian Zhu, Huanbo Zheng
  • Patent number: 11941507
    Abstract: Disclosed are a data flow method and apparatus for neural network computation. The data flow method for neural network computation includes initializing the lifecycle of a variable in a computational graph; and defining a propagation rule for a variable in use to flow through a node. A definition of the variable is produced at a precursor node of the node, such that an input set of valid variables flowing through the node contains the variable. The method may be used on neural network computation in a deep learning training system.
    Type: Grant
    Filed: September 27, 2022
    Date of Patent: March 26, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Guang Chen
  • Patent number: 11931766
    Abstract: A plural material dispensing system (10) includes a pump (38) having a cylinder (52) mounted between a first bracket (32) and a second bracket (34), a piston (54) disposed within the cylinder (52), and a pump rod (48) extending from the piston (54) and out of the first bracket (32). Material is provided to the cylinder (52) through a flow path extending through the pump rod (48). The piston (54) drives material downstream out of the cylinder (52), and the interface between the piston (54) and the inner surface of the cylinder (52) provides a dynamic seal during pumping. The flow of material into and out of the pump (38) is controlled by actively-controlled inlet and outlet valves.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: March 19, 2024
    Assignee: Graco Minnesota Inc.
    Inventors: Guang Chen, Qiang Xiao
  • Patent number: 11934887
    Abstract: The present disclosure discloses a distributed model compilation system. A master node of the system determines the logic calculation graph of the model based on model information, divides the logic calculation graph into multiple logic calculation sub-graphs, generates a distributing message for each logic calculation sub-graph, and then transmits the distributing message to a slave node. Each of the slave nodes allocates a local computing resource to compile the logic calculation sub-graph based on the received distributing message, and transmits compilation completion information to the master node. The master node determines the completion of model compilation based on the compilation completion information returned by each slave node, and executes the target work based on the compiled model.
    Type: Grant
    Filed: September 13, 2023
    Date of Patent: March 19, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Fei Wu, Guang Chen, Feng Lin
  • Patent number: 11921848
    Abstract: The disclosed embodiments relate to a system that characterizes susceptibility of an inferential model to follow signal degradation. During operation, the system receives a set of time-series signals associated with sensors in a monitored system during normal fault-free operation. Next, the system trains the inferential model using the set of time-series signals. The system then characterizes susceptibility of the inferential model to follow signal degradation. During this process, the system adds degradation to a signal in the set of time-series signals to produce a degraded signal. Next, the system uses the inferential model to perform prognostic-surveillance operations on the set of time-series signals with the degraded signal. Finally, the system characterizes susceptibility of the inferential model to follow degradation in the signal based on results of the prognostic-surveillance operations.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: March 5, 2024
    Assignee: Oracle International Corporation
    Inventors: Zexi Chen, Kenny C. Gross, Ashin George, Guang C. Wang
  • Patent number: 11915135
    Abstract: The disclosure discloses a graph optimization method and apparatus for neural network computation. The graph optimization method includes the following steps: S1: converting a computation graph; S2: allocating a register; S3: defining a route selector for a redefined variable; S4: solving the route selector for the redefined variable; S5: defining a criterion of inserting the route selector for the redefined variable into a node; S6: analyzing a dominating edge set of the node for the redefined variable; S7: inserting the route selector for the redefined variable; and S8: renaming the redefined variable. The disclosure solves the problem of the corresponding route selection on a correct definition of the redefined variable when a node including the redefined variable in a computation graph in the compiling period flows through multiple paths of computation flow, reduces the memory cost and promotes the development of implementation application of a deep neural network model.
    Type: Grant
    Filed: September 21, 2022
    Date of Patent: February 27, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Guang Chen
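The route selector described here plays the role of an SSA phi function, and steps S5-S7 correspond to placing phi functions at the (iterated) dominance-frontier nodes of a variable's definition sites. A minimal sketch of that placement step, assuming dominance frontiers are already computed (illustrative, not the patented method):

```python
# Hypothetical sketch of S5-S7: insert a "route selector" (phi function) for
# each redefined variable at every node in the iterated dominance frontier
# of its definition sites (classic SSA phi-placement).
def place_selectors(def_sites, frontier):
    """def_sites: {var: set of nodes defining it};
    frontier: {node: set of dominance-frontier nodes}.
    Returns {var: set of nodes needing a route selector}."""
    selectors = {}
    for var, sites in def_sites.items():
        placed = set()
        work = list(sites)
        while work:
            node = work.pop()
            for f in frontier[node]:
                if f not in placed:
                    placed.add(f)
                    # a selector is itself a new definition of var,
                    # so its frontier must be processed too (then S8 renames)
                    if f not in sites:
                        work.append(f)
        selectors[var] = placed
    return selectors
```

In a diamond-shaped graph where a variable is redefined on both branches, the selector lands at the merge node, which is exactly where the route selection on the correct definition is needed.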
  • Publication number: 20240054319
    Abstract: Disclosed are a data flow method and apparatus for neural network computation. The method includes: step 1, initializing the lifecycle of a variable in a computational graph, i.e., initializing a time period from the start of a definition of the variable to the end of use as the lifecycle of the variable in the computational graph; and step 2, defining a propagation rule for a variable in use to flow through a node, i.e., defining that in the case that a variable at a certain node in the computational graph is used, a definition of the variable is produced at a precursor node of the node, such that an input set of valid variables flowing through the node contains the variable. The application discloses a data flow modeling method and apparatus for neural network computation in a deep learning training system.
    Type: Application
    Filed: September 27, 2022
    Publication date: February 15, 2024
    Inventors: Hongsheng WANG, Guang CHEN
  • Publication number: 20240028886
    Abstract: The disclosure discloses a graph optimization method and apparatus for neural network computation. The graph optimization method includes the following steps: S1: converting a computation graph; S2: allocating a register; S3: defining a route selector for a redefined variable; S4: solving the route selector for the redefined variable; S5: defining a criterion of inserting the route selector for the redefined variable into a node; S6: analyzing a dominating edge set of the node for the redefined variable; S7: inserting the route selector for the redefined variable; and S8: renaming the redefined variable. The disclosure solves the problem of the corresponding route selection on a correct definition of the redefined variable when a node including the redefined variable in a computation graph in the compiling period flows through multiple paths of computation flow, reduces the memory cost and promotes the development of implementation application of a deep neural network model.
    Type: Application
    Filed: September 21, 2022
    Publication date: January 25, 2024
    Inventors: Hongsheng WANG, Guang CHEN
  • Patent number: 11861505
    Abstract: The disclosure discloses a method of executing a dynamic graph for neural network computation and an apparatus thereof. The method of executing a dynamic graph includes the following steps: S1: constructing and distributing an operator and a tensor; S2: deducing an operator executing process by an operator interpreter; S3: constructing an instruction of a virtual machine at runtime by the operator interpreter; S4: sending the instruction to the virtual machine at runtime by the operator interpreter; S5: scheduling the instruction by the virtual machine; and S6: releasing an executed instruction by the virtual machine. According to the method and apparatus provided by the disclosure, runtime is abstracted as a virtual machine; the virtual machine acquires, in real time through the interpreter, the sub-graph constructed by the user at each step, and then schedules, issues, and executes each sub-graph.
    Type: Grant
    Filed: June 6, 2022
    Date of Patent: January 2, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Hujun Bao, Guang Chen
  • Publication number: 20230410560
    Abstract: Disclosed are a method and apparatus for constructing a three-dimensional data set of a pedestrian re-identification based on a neural radiation field. The method includes the following steps: S1: capturing images of pedestrians to be entered by a group of cameras at different viewing angles; S2: generating a three-dimensional spatial position point set by sampling through camera rays in the scenario, and converting observation directions of the cameras corresponding to the three-dimensional spatial position point set into three-dimensional Cartesian unit vectors; and S3: inputting, into a multi-layer perceptron, the three-dimensional spatial position point set and the observation directions converted into the three-dimensional Cartesian unit vectors, to output corresponding densities and colors. The method and apparatus of the present disclosure provide a brand-new method for constructing a pedestrian re-identification data set, and offer a new idea of data set construction.
    Type: Application
    Filed: September 21, 2022
    Publication date: December 21, 2023
    Inventors: Hongsheng WANG, Guang CHEN, Hujun BAO
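The direction conversion in step S2 can be sketched as follows, assuming the camera observation directions are given as spherical angles (the angle convention here is an illustrative assumption, not specified by the patent):

```python
import math

# Hypothetical sketch of step S2's direction conversion: map an observation
# direction given as spherical angles (theta measured from +z, phi in the
# x-y plane) to a 3D Cartesian unit vector.
def direction_to_unit_vector(theta, phi):
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))
```

The resulting unit vectors, together with the sampled 3D positions, form the five-dimensional input that a NeRF-style multi-layer perceptron maps to densities and colors.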
  • Publication number: 20230398659
    Abstract: Polishing pads having varying protrusions and methods of forming the same are disclosed. In an embodiment, a polishing pad includes a polishing pad substrate; a first protrusion on the polishing pad substrate, the first protrusion including a central region and a peripheral region surrounding the central region, and a first hardness of the central region being greater than a second hardness of the peripheral region; and a first groove adjacent a first side of the first protrusion.
    Type: Application
    Filed: August 29, 2022
    Publication date: December 14, 2023
    Inventors: Te-Chien Hou, Chih Hung Chen, Liang-Che Chen, Shich-Chang Suen, Liang-Guang Chen
  • Publication number: 20230392085
    Abstract: A multi-phase combination reaction system has at least one fixed bed hydrogenation reactor. The fixed bed hydrogenation reactor has, arranged from top to bottom, a first hydrogenation reaction area, a gas-liquid separation area, a second hydrogenation reaction area and a third hydrogenation reaction area. The gas-liquid separation area is provided with a raw oil inlet. A hydrogen inlet is provided between the second hydrogenation reaction area and the third hydrogenation reaction area. The system is capable of simultaneously obtaining two fractions in one hydrogenation reactor.
    Type: Application
    Filed: October 22, 2021
    Publication date: December 7, 2023
    Inventors: Meng DAI, Shicai LI, Yang LI, Dahai XU, He DING, Guang CHEN, Han ZHANG, Jiawen ZHOU
  • Patent number: D1018835
    Type: Grant
    Filed: October 24, 2022
    Date of Patent: March 19, 2024
    Assignee: Shenzhen Waspo Technology Co., Ltd.
    Inventors: Guang Yang, Xiang Chen