Patents by Inventor Rentong GUO

Rentong GUO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240330286
    Abstract: An apparatus, a method, and a storage medium for database query. The apparatus, method, and storage medium are configured to: determine that a syntax tree corresponding to an SQL query statement includes a preset subtree, wherein the preset subtree indicates a query mode for querying vector data; determine the query mode according to the preset subtree and the SQL query statement; and query data in a database based on the query mode to determine query results. Using an SQL query statement that includes a preset subtree to query a vector database realizes an efficient approximate query and avoids the high access costs of querying the database through an API. It also reduces the user's learning cost for unstructured queries.
    Type: Application
    Filed: March 29, 2023
    Publication date: October 3, 2024
    Applicant: ZILLIZ INC.
    Inventors: Chao XIE, Yu XIE, Xiaofan LUAN, Rentong GUO, Xun HUANG
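    The abstract above describes detecting a vector-query subtree in the parsed SQL and choosing a query mode from it. A minimal sketch of that dispatch logic, with a hypothetical ANNS(field, [vector], top_k) operator standing in for the preset subtree (the operator name and regex-based parse are illustrative assumptions, not the patented grammar):

```python
import re

def detect_vector_subtree(sql: str):
    """Check whether the SQL statement contains a vector-search clause
    (the 'preset subtree' after parsing); approximated here with a regex
    over a hypothetical ANNS(field, [vector], top_k) operator."""
    m = re.search(r"ANNS\((\w+),\s*\[([^\]]*)\],\s*(\d+)\)", sql)
    if m is None:
        return None
    field, vec, top_k = m.group(1), m.group(2), int(m.group(3))
    query_vector = [float(x) for x in vec.split(",")]
    return {"field": field, "vector": query_vector, "top_k": top_k}

def plan_query(sql: str) -> str:
    """Choose a query mode: approximate vector search when the preset
    subtree is present, ordinary scalar SQL otherwise."""
    return "vector_ann_search" if detect_vector_subtree(sql) else "scalar_sql"
```

    For example, plan_query("SELECT id FROM docs WHERE ANNS(embedding, [0.1,0.2], 5)") selects the vector search path, while a plain SELECT falls through to scalar SQL.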
  • Publication number: 20240330341
    Abstract: An apparatus, a method, and a storage medium for data query are disclosed. One or more domain-related question-answering pairs, files, and summary information for each file may be pre-stored in a vector database. When a query request is received from the user, the question-answering system first obtains associated data for the request from the vector database; the associated data includes the associated question-answering pair and the associated summary information corresponding to the query request. The question-answering system then inputs the associated data and the query request into the large language model to obtain an intermediate query result. Finally, if any of the associated summary information matches the intermediate query result, the file corresponding to the matched summary information is added to the intermediate query result to produce the query result for the query request.
    Type: Application
    Filed: March 27, 2023
    Publication date: October 3, 2024
    Applicant: ZILLIZ INC.
    Inventors: Chao XIE, Anyang WANG, Yuchen GAO, Rentong GUO
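    The retrieve-then-generate-then-attach pipeline in this abstract can be sketched as follows. Word-overlap scoring stands in for vector similarity, and the `llm` callable, record shapes, and matching rule are all illustrative assumptions rather than the patented implementation:

```python
def answer(query, qa_pairs, summaries, llm):
    """Pipeline from the abstract: fetch associated data from the store,
    ask the LLM for an intermediate result, then attach any file whose
    stored summary matches that result."""
    terms = set(query.lower().split())
    overlap = lambda text: len(terms & set(text.lower().split()))
    # 1. Retrieval (toy word-overlap scoring in place of vector search).
    assoc_qa = max(qa_pairs, key=lambda pair: overlap(pair[0]))
    assoc_sums = [s for s in summaries if overlap(s["summary"]) > 0]
    # 2. Feed the associated data plus the query to the language model.
    intermediate = llm(assoc_qa, assoc_sums, query)
    # 3. Attach files whose summary text appears in the intermediate result.
    attached = [s["file"] for s in assoc_sums if s["summary"] in intermediate]
    return {"answer": intermediate, "files": attached}
```

    A real deployment would replace the overlap scoring with embedding similarity queries against the vector database and `llm` with a call to an actual large language model.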
  • Publication number: 20240160628
    Abstract: A data query apparatus, method, and storage medium include a database system that determines a data collection and a calculating model corresponding to a received query task request. According to the number of bytes occupied when the data collection or the calculating model is stored, or the bit width occupied when it is transmitted, the database system determines a transmission rate for the data collection and the calculating model. The query task request is transmitted to the model node or data node with the lower transmission rate, and the data node or model node with the higher transmission rate is dispatched to perform the data calculation.
    Type: Application
    Filed: November 10, 2022
    Publication date: May 16, 2024
    Applicant: ZILLIZ INC.
    Inventors: Chao XIE, Jie HOU, Rentong GUO
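    One plausible reading of the dispatch rule above is: estimate how long the data collection and the calculating model would each take to transmit, then run the calculation where the slower-to-move artifact already resides, so only the lighter one crosses the network. A minimal sketch under that assumed reading (the function name, single shared bandwidth, and cost formula are all illustrative):

```python
def choose_compute_node(data_bytes: int, model_bytes: int, bandwidth_bps: float) -> str:
    """Pick the node that performs the calculation. The artifact that is
    cheaper to transmit is shipped; the node holding the more expensive
    artifact does the work (assumed reading of the abstract)."""
    data_cost = data_bytes / bandwidth_bps    # seconds to move the data collection
    model_cost = model_bytes / bandwidth_bps  # seconds to move the calculating model
    # Dispatch the computation to whichever node avoids the larger transfer.
    return "data_node" if data_cost >= model_cost else "model_node"
```

    For instance, a gigabyte-scale data collection paired with a megabyte-scale model would be processed on the data node, with only the model transmitted.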
  • Patent number: 10248576
    Abstract: The present invention provides a DRAM/NVM hierarchical heterogeneous memory system with software-hardware cooperative management schemes. In the system, NVM is used as large-capacity main memory, and DRAM is used as a cache to the NVM. Some reserved bits in the data structure of TLB and last-level page table are employed effectively to eliminate hardware costs in the conventional hardware-managed hierarchical memory architecture. The cache management in such a heterogeneous memory system is pushed to the software level. Moreover, the invention is able to reduce memory access latency in case of last-level cache misses. Considering that many applications have relatively poor data locality in big data application environments, the conventional demand-based data fetching policy for DRAM cache can aggravate cache pollution.
    Type: Grant
    Filed: October 6, 2016
    Date of Patent: April 2, 2019
    Assignee: HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Hai Jin, Xiaofei Liao, Haikun Liu, Yujie Chen, Rentong Guo
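    The abstract describes tracking DRAM-cache residency in reserved page-table bits and avoiding demand-based fetching to limit cache pollution. A toy model of that idea, where a per-page "cached" flag stands in for the reserved PTE bit and pages are promoted to DRAM only after repeated accesses (the threshold policy and data layout are illustrative assumptions, not the patented mechanism):

```python
class HybridMemory:
    """Toy software-managed DRAM cache over NVM main memory. Each
    page-table entry carries a residency flag (modeling the reserved
    bit) plus an access counter used by the promotion policy."""

    def __init__(self, dram_slots: int, promote_threshold: int = 2):
        self.dram_slots = dram_slots          # DRAM cache capacity in pages
        self.threshold = promote_threshold    # accesses needed before promotion
        self.pte = {}                         # page -> {"cached": bool, "hits": int}

    def access(self, page) -> str:
        """Serve one access; return which tier satisfied it."""
        entry = self.pte.setdefault(page, {"cached": False, "hits": 0})
        if entry["cached"]:
            return "DRAM"                     # hit in the DRAM cache
        entry["hits"] += 1
        cached_pages = sum(1 for e in self.pte.values() if e["cached"])
        # Promote only proven-hot pages, instead of fetching on first
        # touch, to avoid the cache pollution the abstract describes.
        if entry["hits"] >= self.threshold and cached_pages < self.dram_slots:
            entry["cached"] = True
        return "NVM"                          # this access was served from NVM
```

    A cold page served from NVM is promoted only after it proves hot, so a one-touch streaming workload never displaces resident DRAM pages.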
  • Publication number: 20170277640
    Abstract: The present invention provides a DRAM/NVM hierarchical heterogeneous memory system with software-hardware cooperative management schemes. In the system, NVM is used as large-capacity main memory, and DRAM is used as a cache to the NVM. Some reserved bits in the data structure of TLB and last-level page table are employed effectively to eliminate hardware costs in the conventional hardware-managed hierarchical memory architecture. The cache management in such a heterogeneous memory system is pushed to the software level. Moreover, the invention is able to reduce memory access latency in case of last-level cache misses. Considering that many applications have relatively poor data locality in big data application environments, the conventional demand-based data fetching policy for DRAM cache can aggravate cache pollution.
    Type: Application
    Filed: October 6, 2016
    Publication date: September 28, 2017
    Inventors: Hai JIN, Xiaofei LIAO, Haikun LIU, Yujie CHEN, Rentong GUO