Patents by Inventor Jiaxin Ou

Jiaxin Ou has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240086362
    Abstract: A key-value store and a file system are integrated together to provide improved operations. The key-value store can include a log engine, a hash engine, a sorting engine, and a garbage collection manager. The features of the key-value store can be configured to reduce the number of I/O operations involving the file system, thereby improving read efficiency, reducing write latency, and reducing write amplification issues inherent in the combined key-value store and file system.
    Type: Application
    Filed: September 27, 2023
    Publication date: March 14, 2024
    Inventors: Hao Wang, Jiaxin Ou, Sheng Qiu, Yi Wang, Zhengyu Yang, Yizheng Jiao, Jingwei Zhang, Jianyang Hu, Yang Liu, Ming Zhao, Hui Zhang, Kuankuan Guo, Huan Sun, Yinlin Zhang
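The integration described in this abstract can be illustrated with a minimal sketch. The component names (log engine, hash engine) come from the abstract, but their internals here are illustrative assumptions, not the patented design; the sorting engine and garbage collection manager are omitted for brevity.

```python
class LogEngine:
    """Appends records sequentially and returns their offsets."""
    def __init__(self):
        self.records = []

    def append(self, record):
        self.records.append(record)
        return len(self.records) - 1  # offset of the new record


class HashEngine:
    """In-memory hash index mapping keys to log offsets for point lookups."""
    def __init__(self):
        self.index = {}


class KVStore:
    """Toy key-value store: one sequential append plus an in-memory
    index update per write, so the write path needs no extra file-system I/O."""
    def __init__(self):
        self.log = LogEngine()
        self.hash = HashEngine()

    def put(self, key, value):
        offset = self.log.append((key, value))
        self.hash.index[key] = offset  # newest offset shadows older versions

    def get(self, key):
        offset = self.hash.index.get(key)
        if offset is None:
            return None
        return self.log.records[offset][1]


store = KVStore()
store.put("a", 1)
store.put("a", 2)      # newer version shadows the old one via the index
print(store.get("a"))  # → 2
```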
  • Publication number: 20240070135
    Abstract: Systems and methods are provided for improved point querying of a database. Index values are separated from the data and retained in cache memory so they can be accessed without a disk input/output (I/O) operation, reducing the latency such disk I/O would otherwise incur. The index values can be compressed using an algorithm such as Crit-Bit-Trie so that they fit in limited cache memory space. When cache memory is insufficient to store all index values, the values to retain can be selected on a least-recently-used basis, maintaining a high hit rate for the cached portion and reducing disk I/O operations.
    Type: Application
    Filed: September 27, 2023
    Publication date: February 29, 2024
    Inventors: Jiaxin Ou, Jingwei Zhang, Hao Wang, Hui Zhang, Ming Zhao, Yi Wang, Zhengyu Yang
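The least-recently-used retention of index values can be sketched as follows. The `IndexCache` class and its entry-count capacity are hypothetical illustrations; the Crit-Bit-Trie compression step from the abstract is omitted.

```python
from collections import OrderedDict


class IndexCache:
    """LRU cache of key→offset index entries (capacity counted in entries)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)  # mark as most recently used
            return self.entries[key]       # cache hit: no disk I/O needed
        return None                        # miss: caller reads the index from disk

    def put(self, key, offset):
        self.entries[key] = offset
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used entry


cache = IndexCache(capacity=2)
cache.put("k1", 100)
cache.put("k2", 200)
cache.get("k1")          # touch k1 so k2 becomes the LRU entry
cache.put("k3", 300)     # capacity exceeded: evicts k2
print(cache.get("k2"))   # → None (would fall back to a disk read)
```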
  • Publication number: 20240036767
    Abstract: Decoupled computing systems include layers of same-type computing resources, and include a dispatch layer to assign tasks from one layer to another, such as input and output (I/O) flows. The I/O flows can be assigned to particular computing resources of a layer based on a weighted moving average of performance data for the layer. When traffic is high, the assignment can include random assignment to some or all of the computing resources in the layer. The I/O flows can be split between read-intensive and write-intensive flows, with more read-intensive flows being assigned based on a pick ratio.
    Type: Application
    Filed: September 27, 2023
    Publication date: February 1, 2024
    Inventors: Zhengyu Yang, Hao Wang, Sheng Qiu, Yang Liu, Yizheng Jiao, Qizhong Mao, Jiaxin Ou, Ming Zhao, Yi Wang, Jingwei Zhang, Jianyang Hu
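One possible reading of the dispatch scheme above is sketched below, using an exponentially weighted moving average of per-resource latency and falling back to random assignment under heavy traffic. The `alpha` and `high_traffic` parameters, and the latency-only metric, are assumptions for illustration.

```python
import random


class Dispatcher:
    """Assigns I/O flows to the resource with the best smoothed latency."""
    def __init__(self, resources, alpha=0.3, high_traffic=100):
        self.alpha = alpha                 # EWMA smoothing factor
        self.high_traffic = high_traffic   # pending-flow count beyond which we randomize
        self.ewma = {r: 0.0 for r in resources}

    def record_latency(self, resource, latency_ms):
        # new_avg = alpha * sample + (1 - alpha) * old_avg
        prev = self.ewma[resource]
        self.ewma[resource] = self.alpha * latency_ms + (1 - self.alpha) * prev

    def pick(self, pending_flows):
        if pending_flows > self.high_traffic:
            # Under heavy traffic, random assignment spreads the burst.
            return random.choice(list(self.ewma))
        # Otherwise route to the resource with the lowest smoothed latency.
        return min(self.ewma, key=self.ewma.get)


d = Dispatcher(["ssd0", "ssd1"])
d.record_latency("ssd0", 5.0)
d.record_latency("ssd1", 1.0)
print(d.pick(pending_flows=10))  # → "ssd1" (lower smoothed latency)
```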
  • Publication number: 20240028566
    Abstract: A file system particularly suited for use with key-value stores is provided. The file system can operate in user space instead of kernel space. The file system can be an append-only file system. The file system can support solid state drives (SSDs) for storage, including zoned SSDs. The file system can include a file manager, a metadata manager, a task scheduler, a space allocator, and a collaborator for collaborating with a key-value store.
    Type: Application
    Filed: September 27, 2023
    Publication date: January 25, 2024
    Inventors: Sheng Qiu, Hao Wang, Zhengyu Yang, Yizheng Jiao, Jianyang Hu, Yang Liu, Jiaxin Ou, Huan Sun, Yinlin Zhang
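The append-only constraint of a zoned SSD can be illustrated with a toy space allocator: each zone only accepts sequential appends at its write pointer. The `Zone` and `SpaceAllocator` classes are hypothetical stand-ins for components named in the abstract, not the patented implementation.

```python
class Zone:
    """A fixed-capacity zone that only supports sequential appends."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.write_pointer = 0

    def append(self, nbytes):
        if self.write_pointer + nbytes > self.capacity:
            return None  # zone full; the allocator must pick another zone
        start = self.write_pointer
        self.write_pointer += nbytes
        return start


class SpaceAllocator:
    """Hands out (zone, offset) pairs, honoring the append-only constraint."""
    def __init__(self, zone_count, zone_capacity):
        self.zones = [Zone(zone_capacity) for _ in range(zone_count)]

    def allocate(self, nbytes):
        for i, zone in enumerate(self.zones):
            offset = zone.append(nbytes)
            if offset is not None:
                return (i, offset)
        raise RuntimeError("no free space")


alloc = SpaceAllocator(zone_count=2, zone_capacity=8)
print(alloc.allocate(6))  # → (0, 0)
print(alloc.allocate(6))  # → (1, 0)  zone 0 cannot fit 6 more bytes
```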
  • Publication number: 20240020231
    Abstract: Methods and systems for garbage collection and compaction for key-value engines in a data storage and communication system. The method includes determining disk capacity usage of the key-value engine and adjusting a garbage collection percentage threshold and a number of garbage collection threads based on whether the disk capacity usage of the key-value engine meets and/or exceeds predetermined disk capacity usage thresholds. The method may further include performing a periodic compaction process to consolidate one or more expired pages of one or more applications on a log-structured merge (LSM) tree by merging one or more layers into a last layer of the one or more expired pages to reduce data handling during an occurrence of the garbage collection.
    Type: Application
    Filed: September 27, 2023
    Publication date: January 18, 2024
    Inventors: Jiaxin Ou, Yi Wang, Jingwei Zhang, Zhengyu Yang
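The capacity-driven tuning described above can be sketched as a simple threshold table. The usage breakpoints and returned values are illustrative assumptions; the abstract says only that the garbage collection percentage threshold and thread count are adjusted against predetermined disk-capacity thresholds.

```python
def tune_gc(disk_usage):
    """Return (gc_percentage_threshold, gc_threads) for a disk-usage ratio.

    As the disk fills, the garbage threshold drops (reclaim pages sooner)
    and more GC threads run; the specific numbers here are assumptions.
    """
    if disk_usage >= 0.9:
        return (0.10, 8)   # nearly full: reclaim aggressively
    if disk_usage >= 0.7:
        return (0.30, 4)
    return (0.50, 1)       # plenty of space: reclaim lazily


print(tune_gc(0.95))  # → (0.1, 8)
print(tune_gc(0.50))  # → (0.5, 1)
```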
  • Publication number: 20240020167
    Abstract: By splitting the data in a large LSM tree structure into smaller tree structures, the number of layers in the structure is reduced and the write amplification factor (WAF) is efficiently lowered. By further classifying and labeling each I/O by type, a lower-level filesystem can prioritize scheduling among the different I/O types, facilitating stable latency within the filesystem layer and for individual I/O operations.
    Type: Application
    Filed: September 27, 2023
    Publication date: January 18, 2024
    Inventors: Jiaxin Ou, Hao Wang, Ming Zhao, Yi Wang, Zhengyu Yang
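Splitting one large tree into several smaller ones can be illustrated with a toy router. The hash-based sharding and the dictionaries standing in for the small trees are assumptions for illustration, and the I/O classification half of the abstract is not shown.

```python
class ShardedLSM:
    """Routes each key to one of several small trees instead of one large tree.

    With fewer keys per tree, each tree needs fewer levels, which is the
    mechanism the abstract credits for reducing write amplification.
    """
    def __init__(self, shard_count):
        # Plain dicts stand in for the smaller LSM trees.
        self.shards = [dict() for _ in range(shard_count)]

    def shard_for(self, key):
        return hash(key) % len(self.shards)

    def put(self, key, value):
        self.shards[self.shard_for(key)][key] = value

    def get(self, key):
        return self.shards[self.shard_for(key)].get(key)


lsm = ShardedLSM(shard_count=4)
lsm.put("user:1", "alice")
print(lsm.get("user:1"))  # → "alice"
```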
  • Patent number: 9959053
    Abstract: The present invention provides a method for constructing an NVRAM-based efficient file system, including the following steps: S1. determining a file operation type of the file system, where the file operation type includes a file read operation, a non-persistent file write operation, and a persistent file write operation; and S2. if the file operation type is a non-persistent file write operation, writing, by the file system, content of the non-persistent file write operation to a dynamic random access memory DRAM, updating a corresponding DRAM cache block index, and flushing, at a preset time point, the content of the non-persistent file write operation back to a non-volatile random access memory NVRAM asynchronously, or otherwise, copying, by the file system, related data directly between the NVRAM/DRAM and the user buffer.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: May 1, 2018
    Inventors: Jiwu Shu, Jiaxin Ou, Youyou Lu
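The two write paths in step S2 can be sketched in memory: non-persistent writes are staged in DRAM with an index update and flushed back to NVRAM later, while persistent writes go to NVRAM directly. The dictionaries standing in for DRAM and NVRAM, and the explicit `flush()` call in place of the preset time point, are illustrative assumptions.

```python
class NVRAMFS:
    """Toy model of the two write paths in the abstract."""
    def __init__(self):
        self.dram_cache = {}      # block -> data staged in DRAM
        self.cache_index = set()  # DRAM cache block index (dirty blocks)
        self.nvram = {}           # persistent NVRAM store

    def write(self, block, data, persistent):
        if persistent:
            # Persistent write: copy the data directly to NVRAM.
            self.nvram[block] = data
        else:
            # Non-persistent write: stage in DRAM and update the cache index.
            self.dram_cache[block] = data
            self.cache_index.add(block)

    def flush(self):
        # At a preset time point, flush staged blocks back to NVRAM
        # (done synchronously here; the abstract describes this as asynchronous).
        for block in list(self.cache_index):
            self.nvram[block] = self.dram_cache.pop(block)
        self.cache_index.clear()


fs = NVRAMFS()
fs.write("b0", "hello", persistent=False)
print("b0" in fs.nvram)  # → False (still only staged in DRAM)
fs.flush()
print(fs.nvram["b0"])    # → "hello"
```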
  • Publication number: 20170147208
    Abstract: The present invention provides a method for constructing an NVRAM-based efficient file system, including the following steps: S1. determining a file operation type of the file system, where the file operation type includes a file read operation, a non-persistent file write operation, and a persistent file write operation; and S2. if the file operation type is a non-persistent file write operation, writing, by the file system, content of the non-persistent file write operation to a dynamic random access memory DRAM, updating a corresponding DRAM cache block index, and flushing, at a preset time point, the content of the non-persistent file write operation back to a non-volatile random access memory NVRAM asynchronously, or otherwise, copying, by the file system, related data directly between the NVRAM/DRAM and the user buffer.
    Type: Application
    Filed: December 28, 2015
    Publication date: May 25, 2017
    Inventors: Jiwu Shu, Jiaxin Ou, Youyou Lu