Patents by Inventor Lingfang Zeng

Lingfang Zeng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240118897
    Abstract: Disclosed are an instruction execution method and apparatus for graph computation. The method includes the following steps: S1: sending operators of each node in a computational graph used for neural network computation to an operator interpreter; S2: building, by the operator interpreter, instructions at runtime; S3: defining an instruction dependency relationship; S4: building an instruction dependency relationship graph; S5: building a topological order of parallel instructions; S6: scheduling the parallel instructions to hardware resources; S7: building shortest schedules for the parallel instructions, i.e., minimizing the time required to execute the parallel instructions under limited hardware resources; and S8: releasing the completed instructions.
    Type: Application
    Filed: November 30, 2022
    Publication date: April 11, 2024
    Inventors: Hongsheng WANG, Guang CHEN, Lingfang ZENG, Aimin PAN
  • Publication number: 20240104341
    Abstract: A memory optimization method includes: compiling a neural network into a computational graph for neural network computation on a computer; transforming the computational graph into a topological graph; constructing a life cycle relationship graph of tensor variables in the computational graph; analyzing the life cycle relationships among tensor variables in a node of the computational graph; iteratively merging those tensor variables connected by lines of the second type, and caching into memory any tensor variable that exceeds the number of idle registers and is not allocated to a register, until all such tensor variables are cached into memory; and pushing onto a stack any node of the life cycle relationship graph whose degree is smaller than the number of registers.
    Type: Application
    Filed: November 22, 2022
    Publication date: March 28, 2024
    Inventors: Hongsheng WANG, Guang CHEN, Lingfang ZENG
  • Patent number: 11941514
    Abstract: The present disclosure discloses a method for execution of a computational graph in a neural network model and an apparatus thereof, including: creating task execution bodies on a native machine according to a physical computational graph compiled and generated by a deep learning framework, and designing a solution for allocating a plurality of idle memory blocks to each task execution body, so that the entire computational graph participates in deep learning training tasks of different batches of data in a pipelining and parallelizing manner.
    Type: Grant
    Filed: March 29, 2022
    Date of Patent: March 26, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Hujun Bao, Guang Chen, Lingfang Zeng, Hongcai Cheng, Yong Li, Jian Zhu, Huanbo Zheng
  • Patent number: 11782723
    Abstract: Disclosed are an intermediate representation method and apparatus for parallel execution of graph computation. The method includes the following steps: S1: compiling a neural network into a computational graph on a computer; S2: defining branch states of tensor variables in the computational graph; S3: defining a data dependency relationship of the tensor variables in the computational graph; S4: defining a control dependency relationship of the tensor variables in the computational graph; S5: building a data dependency relationship graph of the tensor variables in the computational graph; S6: building a control dependency relationship graph of the tensor variables in the computational graph; and S7: transforming control dependencies into data dependencies. The present application derives, based on the dependency relationships, a parallel computing method that executes branch threads in parallel within the global computational graph, improving the compilation efficiency of the computational graph.
    Type: Grant
    Filed: November 22, 2022
    Date of Patent: October 10, 2023
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Guang Chen, Lingfang Zeng, Aimin Pan
  • Publication number: 20230274129
    Abstract: The present disclosure discloses a method for execution of a computational graph in a neural network model and an apparatus thereof, including: creating task execution bodies on a native machine according to a physical computational graph compiled and generated by a deep learning framework, and designing a solution for allocating a plurality of idle memory blocks to each task execution body, so that the entire computational graph participates in deep learning training tasks of different batches of data in a pipelining and parallelizing manner.
    Type: Application
    Filed: March 29, 2022
    Publication date: August 31, 2023
    Inventors: Hongsheng WANG, Hujun BAO, Guang CHEN, Lingfang ZENG, Hongcai CHENG, Yong LI, Jian ZHU, Huanbo ZHENG
  • Publication number: 20170344478
    Abstract: Technologies are generally described herein for storing log records in non-volatile memory. Transaction data may be accessed that is associated with one or more transactions that modify a data storage device. The transaction data may be stored in a cache that is coupled to the data storage device. The log records corresponding to the transaction data may also be stored in a non-volatile memory (NVM) that is coupled to the data storage device. The log records may be synchronized with the data storage device.
    Type: Application
    Filed: December 18, 2014
    Publication date: November 30, 2017
    Applicant: HUA ZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Dan Feng, Binbing Hou, Jianxi Chen, Lingfang Zeng, Wei Tong
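The dependency-graph scheduling of publication 20240118897 (steps S3 through S7 above) can be illustrated with a greedy list-scheduling sketch. This is an illustrative example only, not the patented implementation: the function name `parallel_schedule`, the input encoding, and the wave-by-wave issue policy are all assumptions.

```python
from collections import defaultdict, deque

def parallel_schedule(deps, num_units):
    """Greedy list scheduling over an instruction dependency graph.

    deps: dict mapping each instruction to the set of instructions it
    depends on (the dependency relationship graph of S4).
    num_units: number of hardware resources available per time step.
    Returns a list of time steps, each a list of instructions issued
    together; dependent instructions always land in later steps.
    """
    indegree = {i: len(d) for i, d in deps.items()}
    dependents = defaultdict(list)
    for i, d in deps.items():
        for p in d:
            dependents[p].append(i)
    # Instructions with no unmet dependencies form the first wave (S5).
    ready = deque(i for i, n in indegree.items() if n == 0)
    schedule = []
    while ready:
        # Issue at most num_units ready instructions per step (S6/S7).
        step = [ready.popleft() for _ in range(min(num_units, len(ready)))]
        schedule.append(step)
        for done in step:
            # "Releasing" a completed instruction unblocks its dependents (S8).
            for nxt in dependents[done]:
                indegree[nxt] -= 1
                if indegree[nxt] == 0:
                    ready.append(nxt)
    return schedule
```

With two hardware units, independent instructions share a step while each dependent instruction waits for the step after its last predecessor.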
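The memory optimization of publication 20240104341 (degree-based stacking of the life cycle graph, spilling variables that exceed the idle registers) resembles classic graph-coloring register allocation. The sketch below is a standard Chaitin-style simplify/spill loop under assumed names (`allocate_registers`), not the patent's specific merging of "lines of the second type".

```python
def allocate_registers(interference, k):
    """Chaitin-style allocation: repeatedly push nodes of degree < k onto
    a stack; nodes that never drop below degree k are spilled to memory.

    interference: dict mapping each variable to the set of variables whose
    life cycles overlap with it (the life cycle relationship graph).
    k: number of available registers.
    Returns (assignment dict variable -> register index, spilled set).
    """
    graph = {v: set(n) for v, n in interference.items()}
    stack, spilled = [], set()
    while graph:
        low = next((v for v, n in graph.items() if len(n) < k), None)
        if low is None:
            # No node of degree < k remains: spill the highest-degree one.
            low = max(graph, key=lambda v: len(graph[v]))
            spilled.add(low)
        else:
            stack.append(low)
        for n in graph.pop(low):
            graph[n].discard(low)
    assignment = {}
    while stack:
        # Pop in reverse order; a color (register) is always free because
        # each stacked node had fewer than k neighbors when pushed.
        v = stack.pop()
        used = {assignment[n] for n in interference[v] if n in assignment}
        assignment[v] = next(c for c in range(k) if c not in used)
    return assignment, spilled
```

Variables that interfere (overlapping life cycles) receive distinct registers; anything that cannot be colored is cached into memory, mirroring the spill step in the abstract.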
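Step S7 of patent 11782723, transforming control dependencies into data dependencies, is commonly illustrated by replacing a branch with a select operator: both branch values are computed as ordinary data, and the condition merely chooses between them. The element-wise `select` below is a generic sketch of that idea, not the patent's intermediate representation.

```python
def select(cond, a, b):
    """Branchless select over element-wise data.

    Instead of a control-flow branch, both candidate sequences a and b
    are available as data, and cond picks between them per element. The
    control dependence on the branch becomes a plain data dependence on
    cond, so the computations producing a and b can run in parallel.
    """
    return [x if c else y for c, x, y in zip(cond, a, b)]
```

Because neither `a` nor `b` is guarded by control flow, a scheduler is free to execute the threads producing them concurrently, which is the parallelism the abstract describes.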
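The logging scheme of publication 20170344478 follows the familiar write-ahead pattern: persist the log record to NVM before buffering the modification, then synchronize the cache with the data storage device. A minimal sketch, with an ordinary fsync'd file standing in for NVM and a dict standing in for the storage device; the class name `NvmLog` and its methods are assumptions for illustration.

```python
import json
import os
import tempfile

class NvmLog:
    """Write-ahead log sketch: log records are made durable first,
    modified data is cached, and a checkpoint synchronizes the cache
    with the backing store so the log can be truncated."""

    def __init__(self, log_path, store):
        self.log_path = log_path
        self.store = store   # stand-in for the data storage device
        self.cache = {}      # volatile cache of modified entries

    def write(self, key, value):
        # 1. Persist the log record before the data (write-ahead rule).
        with open(self.log_path, "a") as f:
            f.write(json.dumps({"key": key, "value": value}) + "\n")
            f.flush()
            os.fsync(f.fileno())
        # 2. Buffer the modification in the cache.
        self.cache[key] = value

    def checkpoint(self):
        # Synchronize cached modifications with the data store; once the
        # store is up to date, the log records can be discarded.
        self.store.update(self.cache)
        self.cache.clear()
        open(self.log_path, "w").close()

# Demo: log two writes, then checkpoint into an in-memory "device".
store = {}
log = NvmLog(os.path.join(tempfile.mkdtemp(), "wal.log"), store)
log.write("page7", "v1")
log.write("page9", "v2")
log.checkpoint()
```

If a crash occurs between `write` and `checkpoint`, the fsync'd log still holds every record, so the store can be rebuilt by replaying the log.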