Patents by Inventor Zhengfan YUAN

Zhengfan YUAN has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). An illustrative code sketch of the shared-RDD caching idea described in the abstracts appears after the listing.

  • Patent number: 11445004
    Abstract: A method for processing shared data, an apparatus, and a server are provided, and relate to the field of communications technologies, so that shared data can be cached across application programs, which facilitates data sharing across application programs, can reduce a quantity of repeated computations, and helps accelerate a computing speed of a Spark architecture. The method includes: receiving a first instruction; starting a first Spark context for a first application program, to create a DAG of the first application program, and caching the DAG of the first application program in a first area of a first server; receiving a second instruction; starting a second Spark context for a second application program; reading m DAGs from the first area; and caching the to-be-cached shareable RDDs in a main memory of a second server, where the shareable RDD is an RDD included in at least two DAGs of the m DAGs.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: September 13, 2022
    Assignee: Petal Cloud Technology Co., Ltd.
    Inventors: Jun Tan, Zhengfan Yuan
  • Publication number: 20220060568
    Abstract: A method for processing shared data, an apparatus, and a server are provided, and relate to the field of communications technologies, so that shared data can be cached across application programs, which facilitates data sharing across application programs, can reduce a quantity of repeated computations, and helps accelerate a computing speed of a Spark architecture. The method includes: receiving a first instruction; starting a first Spark context for a first application program, to create a DAG of the first application program, and caching the DAG of the first application program in a first area of a first server; receiving a second instruction; starting a second Spark context for a second application program; reading m DAGs from the first area; and caching the to-be-cached shareable RDDs in a main memory of a second server, where the shareable RDD is an RDD included in at least two DAGs of the m DAGs.
    Type: Application
    Filed: December 3, 2019
    Publication date: February 24, 2022
    Inventors: Jun TAN, Zhengfan YUAN
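
For context, the abstracts above describe caching resilient distributed datasets (RDDs) that appear in the DAGs of more than one application program, so the shared computation is materialized once instead of being recomputed. The sketch below is a minimal, hypothetical illustration of that caching idea within a single SparkSession, using the standard Spark Scala API; it does not implement the claimed cross-context, cross-server sharing (standard Spark cannot share RDDs between separate Spark contexts, which is the gap the invention targets). The input path data/events.log and all identifiers are assumptions made for illustration.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.storage.StorageLevel

    object SharedRddSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("shared-rdd-sketch")
          .master("local[*]")               // local run; hypothetical setup
          .getOrCreate()
        val sc = spark.sparkContext

        // An RDD that is "included in at least two DAGs": both jobs below
        // derive from it, so it is a shareable RDD in the abstract's terms.
        val sharedRdd = sc.textFile("data/events.log")   // hypothetical input
          .map(_.toLowerCase)

        // Cache the shareable RDD in memory so each downstream DAG reuses the
        // materialized partitions instead of recomputing the lineage.
        sharedRdd.persist(StorageLevel.MEMORY_ONLY)

        // First DAG (stands in for the first application program).
        val errorCount = sharedRdd.filter(_.contains("error")).count()

        // Second DAG (stands in for the second application program); the
        // filter runs over cached partitions, avoiding repeated computation.
        val warnCount = sharedRdd.filter(_.contains("warn")).count()

        println(s"errors=$errorCount, warnings=$warnCount")
        spark.stop()
      }
    }

In this single-context sketch, persist is what avoids repeated computation; the patented method extends the same reuse across application programs by keeping DAGs and shareable RDDs in server-side areas that a second Spark context can read.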