Patents by Inventor Guang Chen

Guang Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12039361
    Abstract: The present disclosure discloses a method for executing a task. The method includes: a master computing device node in a computing cluster system receives a task code of a to-be-executed task; the master computing device node divides the to-be-executed task into subtasks and, for each of the subtasks, determines the operators required to execute that subtask based on the task code; the master computing device node then distributes the subtasks to computing nodes in the computing cluster system, such that each computing node generates an executable task subgraph based on the operators required to execute the subtask distributed to it and the data transmission relationships between those operators, and runs the executable task subgraph to execute the to-be-executed task.
    Type: Grant
    Filed: October 25, 2023
    Date of Patent: July 16, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Guang Chen, Fei Wu, Feng Lin
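    The abstract above walks through a concrete master/worker flow, so a small illustration may help. The following Python sketch is a minimal, hypothetical rendering of that flow for patent 12039361; the class names, the task_code layout, and the single-node example are assumptions, not the patented implementation.
```python
"""Illustrative sketch (not the patented implementation) of the master/worker
task-execution flow described in patent 12039361."""

class ComputingNode:
    def __init__(self, name):
        self.name = name

    def build_subgraph(self, operators, edges):
        # An executable task subgraph: the operators plus the data-transmission
        # relationships (edges) between them. Operators are assumed to arrive
        # already topologically sorted.
        return {"order": list(operators), "edges": edges}

    def run(self, subgraph, inputs):
        data = dict(inputs)
        for op_name, fn in subgraph["order"]:
            data[op_name] = fn(data)          # each operator reads what it needs from `data`
        return data


class MasterNode:
    def __init__(self, workers):
        self.workers = workers

    def execute(self, task_code, inputs):
        # 1) Divide the to-be-executed task into subtasks and, for each subtask,
        #    determine the operators it needs from the task code.
        subtasks = self.divide(task_code)
        # 2) Distribute one subtask per computing node; each node builds and runs
        #    its own executable task subgraph.
        results = []
        for worker, (ops, edges) in zip(self.workers, subtasks):
            subgraph = worker.build_subgraph(ops, edges)
            results.append(worker.run(subgraph, inputs))
        return results

    def divide(self, task_code):
        # Toy splitter: every "stage" in the task code becomes one subtask.
        return [(stage["operators"], stage["edges"]) for stage in task_code["stages"]]


if __name__ == "__main__":
    task_code = {"stages": [{"operators": [("scale", lambda d: d["x"] * 2),
                                           ("shift", lambda d: d["scale"] + 1)],
                             "edges": [("scale", "shift")]}]}
    master = MasterNode([ComputingNode("node-0")])
    print(master.execute(task_code, inputs={"x": 1.0}))
```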
  • Patent number: 12038830
    Abstract: A double-blind comparison is performed between prognostic-surveillance systems, which are located on a local system and a remote system. During operation, the local system inserts random faults into a dataset to produce a locally seeded dataset, wherein the random faults are inserted into random signals at random times with variable fault signatures. Next, the local system exchanges the locally seeded dataset with a remote system, and in return receives a remotely seeded dataset, which was produced by the remote system by inserting different random faults into the same dataset. Next, the local system uses a local prognostic-surveillance system to analyze the remotely seeded dataset to produce locally detected faults. Finally, the local system determines a performance of the local prognostic-surveillance system by comparing the locally detected faults against actual faults in the remotely seeded fault information. The remote system similarly determines a performance of a remote prognostic-surveillance system.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: July 16, 2024
    Assignee: Oracle International Corporation
    Inventors: Rui Zhong, Guang C. Wang, Kenny C. Gross, Ashin George, Zexi Chen
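    Since the abstract for patent 12038830 describes a specific seed-exchange-and-score protocol, a toy version may make the double-blind idea concrete. This is a rough sketch under simplifying assumptions: the fault model, the threshold detector, and the scoring function are stand-ins, not Oracle's prognostic-surveillance algorithms.
```python
"""Minimal sketch of the double-blind fault-seeding comparison in patent 12038830."""
import random

def seed_random_faults(dataset, n_faults, amplitude=5.0, seed=None):
    """Insert faults into random signals at random times with variable signatures."""
    rng = random.Random(seed)
    seeded = {sig: list(vals) for sig, vals in dataset.items()}
    ground_truth = []
    for _ in range(n_faults):
        sig = rng.choice(list(seeded))
        t = rng.randrange(len(seeded[sig]))
        seeded[sig][t] += rng.uniform(0.5, 1.0) * amplitude   # variable fault signature
        ground_truth.append((sig, t))
    return seeded, ground_truth

def detect_faults(dataset, threshold=3.0):
    """Stand-in prognostic-surveillance system: flag large deviations from the mean."""
    detected = []
    for sig, vals in dataset.items():
        mean = sum(vals) / len(vals)
        for t, v in enumerate(vals):
            if abs(v - mean) > threshold:
                detected.append((sig, t))
    return detected

def score(detected, ground_truth):
    hits = len(set(detected) & set(ground_truth))
    return {"recall": hits / len(ground_truth), "false_alarms": len(detected) - hits}

if __name__ == "__main__":
    data = {"temp": [20.0] * 200, "vibration": [1.0] * 200}
    # The local side seeds its copy; the remote side would do the same independently,
    # then the two sides swap seeded datasets and each one scores its own detector.
    remotely_seeded, remote_truth = seed_random_faults(data, n_faults=5, seed=7)
    locally_detected = detect_faults(remotely_seeded)
    print(score(locally_detected, remote_truth))
```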
  • Patent number: 12002684
    Abstract: A method for CMP includes the following operations. A metal layer stack is received. The metal layer stack includes at least a first metal layer and a second metal layer, and a top surface of the first metal layer and a top surface of the second metal layer are exposed. A protecting layer is formed over the second metal layer. A portion of the first metal layer is etched. The protecting layer protects the second metal layer during the etching of the portion of the first metal layer. A top surface of the etched first metal layer is lower than a top surface of the protecting layer. The protecting layer is then removed from the second metal layer.
    Type: Grant
    Filed: November 21, 2022
    Date of Patent: June 4, 2024
    Assignee: TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY LTD.
    Inventors: Ji Cui, Fu-Ming Huang, Ting-Kui Chang, Tang-Kuei Chang, Chun-Chieh Lin, Wei-Wei Liang, Liang-Guang Chen, Kei-Wei Chen, Hung Yen, Ting-Hsun Chang, Chi-Hsiang Shen, Li-Chieh Wu, Chi-Jen Liu
  • Patent number: 11996283
    Abstract: The present disclosure provides a method for forming an integrated circuit (IC) structure. The method includes providing a metal gate (MG), an etch stop layer (ESL) formed on the MG, and a dielectric layer formed on the ESL. The method further includes etching the ESL and the dielectric layer to form a trench. A surface of the MG exposed in the trench is oxidized to form a first oxide layer on the MG. The method further includes removing the first oxide layer using a H3PO4 solution.
    Type: Grant
    Filed: July 26, 2022
    Date of Patent: May 28, 2024
    Assignee: TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY, LTD.
    Inventors: Shich-Chang Suen, Li-Chieh Wu, Chi-Jen Liu, He Hui Peng, Liang-Guang Chen, Yung-Chung Chen
  • Publication number: 20240162189
    Abstract: An active interposer device includes a multiplexer circuit configurable to provide a first signal from a first integrated circuit to an external terminal of the active interposer device in a first configuration of the active interposer device. The multiplexer circuit is further configurable to provide a second signal from a second integrated circuit to the external terminal in a second configuration of the active interposer device. The second integrated circuit is larger than the first integrated circuit, and the active interposer device is configurable to couple the first integrated circuit or the second integrated circuit to a package substrate through the external terminal.
    Type: Application
    Filed: December 21, 2023
    Publication date: May 16, 2024
    Applicant: Altera Corporation
    Inventors: Hon Khet Chuah, Archanna Srinivasan, Arch Zaliznyak, Guang Chen, Kok Kee Looi
  • Publication number: 20240127027
    Abstract: Disclosed are an optimization method and apparatus for compiling a computation graph. The optimization method includes the following steps: step S1: converting a computation graph into an intermediate representation; step S2: analyzing a dependency relationship; step S3: constructing a work stack; step S4: performing initialization to achieve a non-activated state; step S5: popping out the stack top node elements and updating the input node set in the current round of iteration; step S6: adding the node elements that depend on the elements popped in step S5 to the top of the stack in sequence, until the work stack is empty; step S7: implementing an intermediate representation in a fixed node state using a bit vector; and step S8: allocating registers for the effective tensor variables contained in the nodes of the intermediate representation in the fixed node state.
    Type: Application
    Filed: November 22, 2022
    Publication date: April 18, 2024
    Inventors: Hongsheng WANG, Shuibing HE, Guang CHEN
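    The steps in publication 20240127027 read like a work-stack, bit-vector data-flow pass iterated to a fixed point. The sketch below shows that general pattern (backward liveness with Python integers as bit vectors); the graph encoding and the register remark are assumptions, not the published method itself.
```python
"""Rough sketch of a work-stack, bit-vector data-flow pass in the spirit of
publication 20240127027."""

def liveness_fixed_point(nodes, succ, use, defs, var_index):
    """Backward liveness with bit vectors (Python ints) and an explicit work stack."""
    bit = lambda v: 1 << var_index[v]
    use_bv = {n: sum(bit(v) for v in use[n]) for n in nodes}
    def_bv = {n: sum(bit(v) for v in defs[n]) for n in nodes}
    live_in = {n: 0 for n in nodes}          # every node starts in a non-activated (empty) state
    live_out = {n: 0 for n in nodes}
    pred = {n: [] for n in nodes}
    for n in nodes:
        for s in succ[n]:
            pred[s].append(n)
    stack = list(nodes)                      # work stack of nodes still to (re)process
    while stack:
        n = stack.pop()
        live_out[n] = 0
        for s in succ[n]:
            live_out[n] |= live_in[s]
        new_in = use_bv[n] | (live_out[n] & ~def_bv[n])
        if new_in != live_in[n]:             # state changed, so dependents must be revisited
            live_in[n] = new_in
            stack.extend(pred[n])
    return live_in, live_out

if __name__ == "__main__":
    nodes = ["matmul", "relu", "out"]
    succ = {"matmul": ["relu"], "relu": ["out"], "out": []}
    use = {"matmul": ["x", "w"], "relu": ["t0"], "out": ["t1"]}
    defs = {"matmul": ["t0"], "relu": ["t1"], "out": []}
    var_index = {v: i for i, v in enumerate(["x", "w", "t0", "t1"])}
    live_in, _ = liveness_fixed_point(nodes, succ, use, defs, var_index)
    for n in nodes:
        live = [v for v, i in var_index.items() if live_in[n] >> i & 1]
        print(n, "live-in:", live)           # each live tensor here would need a register
```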
  • Publication number: 20240118897
    Abstract: Disclosed are an instruction execution method and apparatus for graph computation. The method includes the following steps: S1: sending the operators of each node in a computational graph used for neural network computation to an operator interpreter; S2: building, by the operator interpreter, instructions at runtime; S3: defining an instruction dependency relationship; S4: building an instruction dependency relationship graph; S5: building a topological order of parallel instructions; S6: scheduling the parallel instructions onto hardware resources; S7: building the shortest schedules for the parallel instructions, that is, the shortest time required to execute the parallel instructions under the condition of limited hardware resources; and S8: releasing the completed instructions.
    Type: Application
    Filed: November 30, 2022
    Publication date: April 11, 2024
    Inventors: Hongsheng WANG, Guang CHEN, Lingfang ZENG, Aimin PAN
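    Publication 20240118897 describes building an instruction dependency graph, ordering it topologically, and scheduling onto limited hardware resources before releasing completed instructions. The sketch below is a greedy list-scheduling approximation of that idea; the instruction set, the single-cycle latency, and the two-unit resource limit are assumptions, not the published apparatus.
```python
"""Sketch of a dependency-graph / list-scheduling flow in the spirit of
publication 20240118897."""
from collections import deque

def schedule(instructions, deps, num_units=2):
    """Greedy list scheduling: issue ready instructions onto limited hardware
    resources in topological order, then release completed ones."""
    indegree = {i: 0 for i in instructions}
    dependents = {i: [] for i in instructions}
    for later, earlier_list in deps.items():
        for earlier in earlier_list:
            indegree[later] += 1
            dependents[earlier].append(later)
    ready = deque(i for i in instructions if indegree[i] == 0)
    cycle, timeline = 0, []
    while ready:
        # Issue at most `num_units` ready instructions this cycle (all take 1 cycle here).
        issued = [ready.popleft() for _ in range(min(num_units, len(ready)))]
        timeline.append((cycle, issued))
        for ins in issued:                    # completed instructions are released ...
            for d in dependents[ins]:         # ... which may make their dependents ready
                indegree[d] -= 1
                if indegree[d] == 0:
                    ready.append(d)
        cycle += 1
    return timeline                           # number of cycles = schedule makespan

if __name__ == "__main__":
    instructions = ["load_a", "load_b", "matmul", "bias", "relu"]
    deps = {"matmul": ["load_a", "load_b"], "bias": ["matmul"], "relu": ["bias"]}
    for cycle, issued in schedule(instructions, deps):
        print(f"cycle {cycle}: {issued}")
```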
  • Publication number: 20240104341
    Abstract: A memory optimization method includes: compiling a neural network into a computational graph for neural network computation on a computer; transforming the computational graph into a topological graph; constructing a life cycle relationship graph of the tensor variables in the computational graph and analyzing the life cycle relationships among the tensor variables in each node of the computational graph; iteratively merging the tensor variables connected by lines of the second type, and caching into memory any tensor variable that goes beyond the number of idle registers and is not allocated to a register, until all tensor variables that go beyond the number of idle registers and are not allocated to registers are cached into memory; and caching any node of the life cycle relationship graph with a degree smaller than the number of registers into a stack.
    Type: Application
    Filed: November 22, 2022
    Publication date: March 28, 2024
    Inventors: Hongsheng WANG, Guang CHEN, Lingfang ZENG
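    The life-cycle relationship graph, the degree-based stacking, and the spilling to memory in publication 20240104341 resemble classic graph-coloring register allocation, so a compact sketch of the simplify/spill discipline is shown below. The interference data and the spill heuristic are illustrative assumptions, not the published method.
```python
"""Sketch in the spirit of publication 20240104341: a life-cycle (interference)
graph of tensor variables processed with the classic simplify/spill discipline."""

def simplify_and_spill(interference, num_registers):
    """Push nodes with degree < num_registers onto a stack; spill the rest to memory."""
    graph = {v: set(n) for v, n in interference.items()}
    stack, spilled = [], []
    while graph:
        low = next((v for v in graph if len(graph[v]) < num_registers), None)
        victim = low if low is not None else max(graph, key=lambda v: len(graph[v]))
        (stack if low is not None else spilled).append(victim)
        for n in graph.pop(victim):
            graph[n].discard(victim)
    return stack, spilled            # stack order is later popped to assign registers

if __name__ == "__main__":
    # An edge means the two tensors' life cycles overlap, so they cannot share a register.
    interference = {
        "t0": {"t1", "t2"},
        "t1": {"t0", "t2", "t3"},
        "t2": {"t0", "t1", "t3"},
        "t3": {"t1", "t2"},
    }
    stack, spilled = simplify_and_spill(interference, num_registers=2)
    print("stacked for register assignment:", stack)
    print("cached to memory:", spilled)
```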
  • Publication number: 20240104395
    Abstract: Disclosed are a memory optimization method and device oriented to neural network computing. The memory optimization method oriented to neural network computing includes the following steps: step S1: reconstructing a computation graph into a topological-structure computation graph; step S2: constructing life cycle intervals for the tensor variables; step S3: constructing a scanning line over the life cycle intervals; step S4: allocating the tensor variables to idle registers; step S5: allocating memory to tensor variables that exceed the number of available registers; step S6: allocating registers freed from expired life cycle intervals to tensor variables that exceed the number of available registers; and step S7: adding tensor variables transferred to memory back to the life cycle intervals in an activated state, and allocating idle registers for these tensor variables. According to the present disclosure, the memory used by the data flow of a computation graph for neural network computing is optimized.
    Type: Application
    Filed: December 1, 2022
    Publication date: March 28, 2024
    Inventors: Hongsheng WANG, Guang CHEN
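    The interval construction, scanning line, expiry, and spilling steps in publication 20240104395 follow the shape of linear-scan register allocation; the sketch below shows that standard pattern. The interval data and the spill-the-longest heuristic are assumptions, not the published device.
```python
"""Linear-scan style sketch matching the steps outlined in publication 20240104395:
life-cycle intervals, a scanning line, expiry of old intervals, and spilling."""

def linear_scan(intervals, num_registers):
    """intervals: {var: (start, end)}. Returns register assignments and spills."""
    free = [f"r{i}" for i in range(num_registers)]
    active = []                                      # (end, var) pairs currently holding a register
    assignment, spilled = {}, []
    for var, (start, end) in sorted(intervals.items(), key=lambda kv: kv[1][0]):
        # The scanning line sits at `start`: expire intervals that ended before it.
        for e, v in list(active):
            if e < start:
                active.remove((e, v))
                free.append(assignment[v])
        if free:
            assignment[var] = free.pop()
            active.append((end, var))
        else:
            # No idle register: spill the active interval that ends last, if it
            # outlives the new one; otherwise spill the new variable itself.
            active.sort()
            last_end, last_var = active[-1]
            if last_end > end:
                assignment[var] = assignment.pop(last_var)
                spilled.append(last_var)
                active[-1] = (end, var)
            else:
                spilled.append(var)
    return assignment, spilled

if __name__ == "__main__":
    intervals = {"t0": (0, 4), "t1": (1, 3), "t2": (2, 8), "t3": (5, 7)}
    regs, spills = linear_scan(intervals, num_registers=2)
    print("registers:", regs)            # spilled tensors would be transferred to memory
    print("spilled to memory:", spills)
```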
  • Publication number: 20240104016
    Abstract: The disclosure discloses an intermediate representation method for compiling computation graphs, including: step 1: compiling a neural network into a computation graph for neural network computation; step 2: constructing a node for each tensor variable in the computation graph; step 3: associating the node representing the tensor variable in the computation graph with a set of pointers to the tensor variable; step 4: analyzing the constraint relationships between the tensor variables in the computation graph; step 5: iteratively constructing a topological graph of the intermediate representation based on the constraint relationships between the tensor variables in the computation graph; and step 6: analyzing the tensor variables with different aliases that point to the same memory location based on the intermediate representation, and allocating a register for the tensor variables with different aliases.
    Type: Application
    Filed: November 30, 2022
    Publication date: March 28, 2024
    Inventors: Hongsheng WANG, Aimin PAN, Guang CHEN
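    Publication 20240104016 associates each tensor variable with a set of pointers and iterates constraints until aliases that share a memory location can be given one register. The sketch below shows a simplified constraint-propagation pass in that spirit; the constraint format and the register policy are assumptions, not the published method.
```python
"""Sketch inspired by publication 20240104016: per-variable points-to sets,
iterated constraint propagation, and shared registers for aliases."""
from itertools import count

def solve_points_to(address_of, copies):
    """address_of: {var: memory_location}; copies: [(dst, src)] meaning dst may alias src."""
    pts = {v: {loc} for v, loc in address_of.items()}
    changed = True
    while changed:                                  # iterate until a fixed point is reached
        changed = False
        for dst, src in copies:
            before = len(pts.setdefault(dst, set()))
            pts[dst] |= pts.get(src, set())
            changed |= len(pts[dst]) > before
    return pts

def assign_registers(pts):
    """Variables whose points-to sets overlap refer to the same tensor storage,
    so they are given the same register."""
    reg_of_location, assignment, fresh = {}, {}, count()
    for var, locations in pts.items():
        shared = next((reg_of_location[l] for l in locations if l in reg_of_location), None)
        reg = shared if shared is not None else f"r{next(fresh)}"
        for l in locations:
            reg_of_location[l] = reg
        assignment[var] = reg
    return assignment

if __name__ == "__main__":
    address_of = {"t0": "buf_A", "t1": "buf_B"}
    copies = [("view0", "t0"), ("view1", "view0"), ("t2", "t1")]
    pts = solve_points_to(address_of, copies)
    print(assign_registers(pts))   # t0, view0, view1 share one register; t1, t2 share another
```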
  • Patent number: 11941507
    Abstract: Disclosed are a data flow method and apparatus for neural network computation. The data flow method for neural network computation includes initializing the lifecycle of a variable in a computational graph, and defining a propagation rule for a variable in use to flow through a node: a definition of the variable is produced at a precursor node of the node, such that the input set of valid variables flowing through the node contains the variable. The method may be used for neural network computation in a deep learning training system.
    Type: Grant
    Filed: September 27, 2022
    Date of Patent: March 26, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Guang Chen
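    The propagation rule in patent 11941507 — a variable used at a node must be defined at a precursor so that the node's input set of valid variables contains it — can be checked with a small forward data-flow pass. The sketch below is a hypothetical illustration; the graph shape and helper names are assumptions.
```python
"""Sketch of the valid-variable propagation rule described in patent 11941507."""

def valid_variable_sets(nodes, preds, defs):
    """Forward data flow: a node's input set is the union of everything its
    predecessors have defined, plus what was already valid on entry to them."""
    out_sets = {n: set() for n in nodes}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            in_set = set().union(*(out_sets[p] for p in preds[n]))
            new_out = in_set | set(defs[n])
            if new_out != out_sets[n]:
                out_sets[n], changed = new_out, True
    return out_sets

def check_uses(nodes, preds, defs, uses):
    out_sets = valid_variable_sets(nodes, preds, defs)
    for n in nodes:
        in_set = set().union(*(out_sets[p] for p in preds[n]))
        for v in uses[n]:
            if v not in in_set:                     # the propagation rule is violated
                print(f"{n}: use of '{v}' has no reaching definition")
    return out_sets

if __name__ == "__main__":
    nodes = ["input", "conv", "relu"]
    preds = {"input": [], "conv": ["input"], "relu": ["conv"]}
    defs = {"input": ["x"], "conv": ["y"], "relu": ["z"]}
    uses = {"input": [], "conv": ["x"], "relu": ["y", "w"]}   # 'w' is never defined
    print(check_uses(nodes, preds, defs, uses))
```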
  • Patent number: 11941514
    Abstract: The present disclosure discloses a method for execution of a computational graph in a neural network model and an apparatus thereof, including: creating task execution bodies on a native machine according to a physical computational graph compiled and generated by a deep learning framework, and designing a scheme for allocating a plurality of idle memory blocks to each task execution body, so that the entire computational graph participates in deep learning training tasks on different batches of data in a pipelined and parallelized manner.
    Type: Grant
    Filed: March 29, 2022
    Date of Patent: March 26, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Hujun Bao, Guang Chen, Lingfang Zeng, Hongcai Cheng, Yong Li, Jian Zhu, Huanbo Zheng
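    To make the pipelined execution-body idea in patent 11941514 concrete, the sketch below gives each graph node an execution body with its own pool of idle memory blocks and steps three batches through a three-stage graph so the stages overlap. The pool size and the step-by-step simulation are simplifications, not the patented scheduler.
```python
"""Toy simulation of pipelined task execution bodies with idle memory-block pools,
loosely following the description in patent 11941514."""
from collections import deque

class ExecutionBody:
    def __init__(self, name, num_blocks=2):
        self.name = name
        self.free_blocks = deque(f"{name}-blk{i}" for i in range(num_blocks))
        self.inbox = deque()

    def step(self):
        """Run one pending batch if an idle memory block is available."""
        if self.inbox and self.free_blocks:
            batch = self.inbox.popleft()
            block = self.free_blocks.popleft()
            self.free_blocks.append(block)           # the block returns to the idle pool
            return batch, block
        return None

if __name__ == "__main__":
    # One execution body per node of a three-stage physical graph.
    pipeline = [ExecutionBody("forward"), ExecutionBody("loss"), ExecutionBody("backward")]
    pipeline[0].inbox.extend([0, 1, 2])              # three batches of training data
    for t in range(5):
        produced = []
        for stage, body in enumerate(pipeline):
            result = body.step()
            if result:
                batch, block = result
                print(f"t={t}: {body.name} runs batch {batch} in {block}")
                if stage + 1 < len(pipeline):
                    produced.append((stage + 1, batch))
        for stage, batch in produced:                # hand outputs downstream after the sweep,
            pipeline[stage].inbox.append(batch)      # so stages overlap on different batches
```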
  • Patent number: 11934887
    Abstract: The present disclosure discloses a distributed model compilation system. A master node of the system determines the logic calculation graph of the model based on model information, divides the logic calculation graph into multiple logic calculation sub-graphs, generates a distributing message for each logic calculation sub-graph, and then transmits each distributing message to a slave node. Each of the slave nodes allocates a local computing resource to compile the logic calculation sub-graph based on the received distributing message, and transmits compilation completion information to the master node. The master node determines the completion of model compilation based on the compilation completion information returned by each slave node, and executes the target task based on the compiled model.
    Type: Grant
    Filed: September 13, 2023
    Date of Patent: March 19, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Fei Wu, Guang Chen, Feng Lin
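    The master/slave compilation flow in patent 11934887 can be outlined in a few lines: the master partitions the logic calculation graph, sends one distributing message per sub-graph, and waits for compilation-completion reports. The sketch below is an in-process stand-in; the partitioning strategy, message format, and "kernel" output are assumptions, not the patented system.
```python
"""Sketch of a master/slave distributed compilation flow in the spirit of patent 11934887."""

class SlaveNode:
    def __init__(self, name, local_cores):
        self.name, self.local_cores = name, local_cores

    def compile_subgraph(self, distributing_message):
        # Allocate a local computing resource and compile the assigned sub-graph.
        subgraph = distributing_message["subgraph"]
        compiled = [f"kernel<{op}>" for op in subgraph]       # stand-in for real codegen
        return {"node": self.name, "status": "done", "kernels": compiled}

class MasterNode:
    def __init__(self, slaves):
        self.slaves = slaves

    def compile_model(self, model_info):
        logic_graph = model_info["ops"]                        # the logic calculation graph
        # Split the graph into one sub-graph per slave (round-robin for simplicity).
        subgraphs = [logic_graph[i::len(self.slaves)] for i in range(len(self.slaves))]
        reports = []
        for slave, subgraph in zip(self.slaves, subgraphs):
            message = {"subgraph": subgraph}                   # the distributing message
            reports.append(slave.compile_subgraph(message))
        if all(r["status"] == "done" for r in reports):        # compilation is complete
            return {r["node"]: r["kernels"] for r in reports}
        raise RuntimeError("some slave nodes failed to compile")

if __name__ == "__main__":
    master = MasterNode([SlaveNode("slave-0", 8), SlaveNode("slave-1", 8)])
    print(master.compile_model({"ops": ["embed", "attention", "mlp", "loss"]}))
```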
  • Patent number: 11931766
    Abstract: A plural material dispensing system (10) includes a pump (38) having a cylinder (52) mounted between a first bracket (32) and a second bracket (34), a piston (54) disposed within the cylinder (52), and a pump rod (48) extending from the piston (54) and out of the first bracket (32). Material is provided to the cylinder (52) through a flow path extending through the pump rod (48). The piston (54) drives material downstream out of the cylinder (52), and the interface between the piston (54) and the inner surface of the cylinder (52) provides a dynamic seal during pumping. The flow of material into and out of the pump (38) is controlled by actively-controlled inlet and outlet valves.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: March 19, 2024
    Assignee: Graco Minnesota Inc.
    Inventors: Guang Chen, Qiang Xiao
  • Patent number: 11915135
    Abstract: The disclosure discloses a graph optimization method and apparatus for neural network computation. The graph optimization method includes the following steps: S1: converting a computation graph; S2: allocating a register; S3: defining a route selector for a redefined variable; S4: solving the route selector for the redefined variable; S5: defining a criterion for inserting the route selector for the redefined variable into a node; S6: analyzing the dominating edge set of the node for the redefined variable; S7: inserting the route selector for the redefined variable; and S8: renaming the redefined variable. The disclosure solves the problem of selecting the correct definition of a redefined variable when a node containing the redefined variable in a compile-time computation graph is reached through multiple paths of the computation flow, reduces the memory cost, and promotes the practical application of deep neural network models.
    Type: Grant
    Filed: September 21, 2022
    Date of Patent: February 27, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Guang Chen
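    The route selector that patent 11915135 inserts for a redefined variable is reminiscent of SSA phi-node insertion. The sketch below is a simplified version that checks which versions of the variable reach each join node (the patent instead analyzes dominating edge sets); node names and the encoding are assumptions.
```python
"""Simplified route-selector insertion sketch, loosely inspired by patent 11915135."""

def insert_route_selectors(nodes, preds, version_defined):
    """version_defined: {node: version_name or None} for one redefined variable.
    Returns the version leaving each node and the selectors inserted at joins."""
    leaving = dict(version_defined)                 # version produced by the node itself
    selectors, fresh = {}, 0
    for n in nodes:                                 # nodes are listed in topological order
        incoming = {leaving[p] for p in preds[n] if leaving.get(p)}
        if len(incoming) > 1:                       # different definitions converge here:
            fresh += 1                              # insert a selector and rename
            selectors[n] = (f"v_sel{fresh}", sorted(incoming))
            incoming = {f"v_sel{fresh}"}
        if not version_defined.get(n) and incoming:
            leaving[n] = incoming.pop()             # pass the surviving version through
    return leaving, selectors

if __name__ == "__main__":
    # Diamond-shaped flow: the variable is redefined on one branch only.
    nodes = ["entry", "then", "else", "join"]
    preds = {"entry": [], "then": ["entry"], "else": ["entry"], "join": ["then", "else"]}
    version_defined = {"entry": "v1", "then": "v2", "else": None, "join": None}
    leaving, selectors = insert_route_selectors(nodes, preds, version_defined)
    print("selectors:", selectors)                  # a selector at 'join' chooses between v1 and v2
    print("version leaving each node:", leaving)
```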
  • Publication number: 20240054319
    Abstract: Disclosed are a data flow method and apparatus for neural network computation. The method includes: step 1, initializing the lifecycle of a variable in a computational graph, i.e., initializing a time period from the start of a definition of the variable to the end of use as the lifecycle of the variable in the computational graph; and step 2, defining a propagation rule for a variable in use to flow through a node, i.e., defining that in the case that a variable at a certain node in the computational graph is used, a definition of the variable is produced at a precursor node of the node, such that an input set of valid variables flowing through the node contains the variable. The application discloses a data flow modeling method and apparatus for neural network computation in a deep learning training system.
    Type: Application
    Filed: September 27, 2022
    Publication date: February 15, 2024
    Inventors: Hongsheng WANG, Guang CHEN
  • Publication number: 20240028886
    Abstract: The disclosure discloses a graph optimization method and apparatus for neural network computation. The graph optimization method includes the following steps: S1: converting a computation graph; S2: allocating a register; S3: defining a route selector for a redefined variable; S4: solving the route selector for the redefined variable; S5: defining a criterion for inserting the route selector for the redefined variable into a node; S6: analyzing the dominating edge set of the node for the redefined variable; S7: inserting the route selector for the redefined variable; and S8: renaming the redefined variable. The disclosure solves the problem of selecting the correct definition of a redefined variable when a node containing the redefined variable in a compile-time computation graph is reached through multiple paths of the computation flow, reduces the memory cost, and promotes the practical application of deep neural network models.
    Type: Application
    Filed: September 21, 2022
    Publication date: January 25, 2024
    Inventors: Hongsheng WANG, Guang CHEN
  • Patent number: 11861505
    Abstract: The disclosure discloses a method of executing a dynamic graph for neural network computation and an apparatus thereof. The method of executing a dynamic graph includes the following steps: S1: constructing and distributing an operator and a tensor; S2: deducing an operator executing process by an operator interpreter; S3: constructing an instruction of a virtual machine at runtime by the operator interpreter; S4: sending the instruction to the virtual machine at runtime by the operator interpreter; S5: scheduling the instruction by the virtual machine; and S6: releasing an executed instruction by the virtual machine. According to the method of executing a dynamic graph for neural network computation and the apparatus thereof provided by the disclosure, the runtime is abstracted as a virtual machine; the virtual machine acquires, in real time through the interpreter, the sub-graph of each step constructed by the user, and schedules, issues, and executes each sub-graph.
    Type: Grant
    Filed: June 6, 2022
    Date of Patent: January 2, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Hujun Bao, Guang Chen
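    Patent 11861505 splits dynamic-graph execution between an operator interpreter that builds runtime instructions and a virtual machine that schedules and releases them. The sketch below mimics that split in a few dozen lines; the Instruction fields and the ready-check loop are assumptions, not the patented runtime.
```python
"""Sketch of an interpreter/virtual-machine split loosely following patent 11861505."""

class Instruction:
    def __init__(self, name, fn, inputs, output):
        self.name, self.fn, self.inputs, self.output = name, fn, inputs, output

class VirtualMachine:
    def __init__(self):
        self.pending, self.values = [], {}

    def receive(self, instruction):
        self.pending.append(instruction)            # sent by the operator interpreter

    def run(self):
        while self.pending:
            ready = [i for i in self.pending if all(x in self.values for x in i.inputs)]
            if not ready:
                raise RuntimeError("instructions left with unsatisfied inputs")
            for instr in ready:                     # schedule every instruction whose inputs exist
                self.values[instr.output] = instr.fn(*(self.values[x] for x in instr.inputs))
                self.pending.remove(instr)          # release the executed instruction
        return self.values

def interpret(dynamic_ops, vm):
    """Operator interpreter: builds a runtime instruction for each op as the user defines it."""
    for name, fn, inputs, output in dynamic_ops:
        vm.receive(Instruction(name, fn, inputs, output))

if __name__ == "__main__":
    vm = VirtualMachine()
    vm.values["x"] = 3.0                            # tensor produced eagerly by the user
    interpret([("square", lambda a: a * a, ["x"], "y"),
               ("add_one", lambda a: a + 1, ["y"], "z")], vm)
    print(vm.run())                                 # {'x': 3.0, 'y': 9.0, 'z': 10.0}
```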
  • Publication number: 20230410560
    Abstract: Disclosed are a method and apparatus for constructing a three-dimensional data set for pedestrian re-identification based on a neural radiation field. The method includes the following steps: S1: capturing images of the pedestrians to be entered, using a group of cameras at different viewing angles; S2: generating a three-dimensional spatial position point set by sampling along camera rays in the scene, and converting the observation directions of the cameras corresponding to the three-dimensional spatial position point set into three-dimensional Cartesian unit vectors; and S3: inputting the three-dimensional spatial position point set and the observation directions converted into three-dimensional Cartesian unit vectors into a multi-layer perceptron, to output the corresponding densities and colors. The method and apparatus of the present disclosure give a brand-new method for constructing a pedestrian re-identification data set and provide a new approach to data set construction.
    Type: Application
    Filed: September 21, 2022
    Publication date: December 21, 2023
    Inventors: Hongsheng WANG, Guang CHEN, Hujun BAO
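    Steps S2 and S3 of publication 20230410560 are the familiar radiance-field recipe: sample points along camera rays, convert viewing directions to unit vectors, and query a multi-layer perceptron for densities and colors. The sketch below shows that recipe with a stand-in network; the layer sizes, ray parameters, and random weights are assumptions, not the published apparatus.
```python
"""Sketch of the ray-sampling and MLP-query step described in publication 20230410560."""
import numpy as np

def sample_points_along_ray(origin, direction, near=0.5, far=4.0, n_samples=8):
    direction = direction / np.linalg.norm(direction)          # 3-D Cartesian unit vector
    depths = np.linspace(near, far, n_samples)
    points = origin + depths[:, None] * direction              # (n_samples, 3) positions
    dirs = np.tile(direction, (n_samples, 1))                  # same viewing direction per point
    return points, dirs

def tiny_mlp(points, dirs, rng):
    """Stand-in multi-layer perceptron: (x, y, z, dx, dy, dz) -> (density, r, g, b)."""
    x = np.concatenate([points, dirs], axis=1)                 # (n_samples, 6)
    w1, w2 = rng.standard_normal((6, 32)), rng.standard_normal((32, 4))
    h = np.maximum(0.0, x @ w1)                                # ReLU hidden layer
    out = h @ w2
    density = np.maximum(0.0, out[:, 0])                       # non-negative volume density
    color = 1.0 / (1.0 + np.exp(-out[:, 1:]))                  # RGB in [0, 1]
    return density, color

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    camera_origin = np.array([0.0, 0.0, -2.0])
    view_direction = np.array([0.2, 0.1, 1.0])                 # toward the captured pedestrian
    pts, dirs = sample_points_along_ray(camera_origin, view_direction)
    density, color = tiny_mlp(pts, dirs, rng)
    print("densities:", np.round(density, 2))
    print("first sample color:", np.round(color[0], 2))
```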
  • Patent number: D1027627
    Type: Grant
    Filed: March 8, 2022
    Date of Patent: May 21, 2024
    Assignee: Graco Minnesota Inc.
    Inventors: Guang Chen, Michael A. Cryer, Yu Shen