Patents by Inventor Zili SHAO

Zili SHAO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10310971
    Abstract: A method for processing memory pages is provided, where the pages in memory include idle single-level cell (SLC) memory pages, active SLC memory pages, inactive SLC memory pages, and multi-level cell (MLC) memory pages. When the quantity of idle SLC memory pages of any virtual machine (VM) falls below a specified threshold, the method converts one idle SLC memory page into two MLC memory pages, copies the data of two inactive SLC memory pages into the two converted MLC memory pages, and releases the storage space of the two inactive SLC memory pages to obtain two idle SLC memory pages (see the code sketch after this entry).
    Type: Grant
    Filed: July 17, 2017
    Date of Patent: June 4, 2019
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Duo Liu, Zili Shao, Linbo Long
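    The abstract above describes the page-conversion scheme only at a high level; the following is a minimal, hypothetical C sketch of that counting logic, written for illustration. The structure vm_pages_t, the threshold value, and the omission of the actual data copy are assumptions made here, not details taken from the patent.

      /* Hypothetical sketch of the SLC-to-MLC page conversion described
       * above; structure names and the threshold are illustrative only. */
      #include <stdio.h>

      #define IDLE_THRESHOLD 4          /* "specified threshold" of idle SLC pages */

      typedef struct {
          int idle_slc;                 /* idle SLC memory pages of one VM */
          int inactive_slc;             /* inactive SLC memory pages       */
          int mlc;                      /* MLC memory pages                */
      } vm_pages_t;

      /* When a VM runs low on idle SLC pages: convert one idle SLC page into
       * two MLC pages, move two inactive SLC pages into them, and release the
       * two inactive SLC pages so they become idle again. */
      static void reclaim_idle_slc(vm_pages_t *vm)
      {
          if (vm->idle_slc >= IDLE_THRESHOLD || vm->idle_slc < 1 ||
              vm->inactive_slc < 2)
              return;

          vm->idle_slc -= 1;            /* one idle SLC page is converted ... */
          vm->mlc      += 2;            /* ... into two MLC pages             */

          /* data of two inactive SLC pages is copied into the two MLC pages
           * (the copy itself is omitted in this counting-only sketch) */
          vm->inactive_slc -= 2;
          vm->idle_slc     += 2;        /* their storage is released as idle  */
      }

      int main(void)
      {
          vm_pages_t vm = { .idle_slc = 3, .inactive_slc = 8, .mlc = 0 };
          reclaim_idle_slc(&vm);
          printf("idle=%d inactive=%d mlc=%d\n",
                 vm.idle_slc, vm.inactive_slc, vm.mlc);
          return 0;
      }

    Running the sketch on the example counts shows the net effect of one conversion step: the VM ends with one more idle SLC page than it started with, at the cost of holding two pages in MLC form.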
  • Patent number: 9971681
    Abstract: A method for garbage collection in a NAND flash memory system is disclosed. The method includes receiving a data request task in the NAND flash memory system; executing the data request task; when the number of free data pages falls below a first predetermined threshold, determining whether a data block partial garbage collection list is empty; when that list is empty, selecting a victim block; and creating a plurality of data block partial garbage collection tasks (see the code sketch after this entry).
    Type: Grant
    Filed: June 1, 2016
    Date of Patent: May 15, 2018
    Assignee: Nanjing University
    Inventors: Qi Zhang, Xuandong Li, Linzhang Wang, Tian Zhang, Yi Wang, Zili Shao
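    The garbage-collection steps above lend themselves to a short illustration. Below is a hypothetical C sketch of the scheduling part only: the free-page threshold, the number of partial tasks per victim block, and the victim-selection policy are placeholders chosen here, not values from the patent.

      /* Hypothetical sketch of partial garbage-collection scheduling;
       * thresholds and the victim-selection policy are illustrative only. */
      #include <stdio.h>
      #include <stdlib.h>

      #define FREE_PAGE_THRESHOLD 16    /* "first predetermined threshold"     */
      #define TASKS_PER_VICTIM     4    /* one victim block split into 4 tasks */

      typedef struct gc_task {
          int victim_block;
          int step;                     /* which partial step of the victim block */
          struct gc_task *next;
      } gc_task_t;

      static gc_task_t *gc_list = NULL; /* data block partial garbage collection list */
      static int free_pages = 12;

      static int select_victim_block(void)
      {
          return 42;                    /* placeholder for a real selection policy */
      }

      static void enqueue_partial_gc_tasks(int victim)
      {
          for (int s = TASKS_PER_VICTIM - 1; s >= 0; s--) {
              gc_task_t *t = malloc(sizeof(*t));
              t->victim_block = victim;
              t->step = s;
              t->next = gc_list;        /* push so tasks end up in step order */
              gc_list = t;
          }
      }

      static void handle_data_request(void)
      {
          /* 1. execute the data request task (omitted) */

          /* 2. if free data pages run low and no partial GC work is queued,
           *    pick a victim block and create partial GC tasks for it */
          if (free_pages < FREE_PAGE_THRESHOLD && gc_list == NULL)
              enqueue_partial_gc_tasks(select_victim_block());
      }

      int main(void)
      {
          handle_data_request();
          while (gc_list) {
              gc_task_t *t = gc_list;
              printf("partial GC task: block %d, step %d\n",
                     t->victim_block, t->step);
              gc_list = t->next;
              free(t);
          }
          return 0;
      }

    Splitting one victim block's reclamation into several small tasks is typically what allows garbage collection to be interleaved with incoming data requests rather than stalling on a whole-block copy and erase.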
  • Publication number: 20170351603
    Abstract: A method for garbage collection in a NAND flash memory system is disclosed. The method includes receiving a data request task in the NAND flash memory system; executing the data request task; when the number of free data pages falls below a first predetermined threshold, determining whether a data block partial garbage collection list is empty; when that list is empty, selecting a victim block; and creating a plurality of data block partial garbage collection tasks.
    Type: Application
    Filed: June 1, 2016
    Publication date: December 7, 2017
    Applicant: Nanjing University
    Inventors: Qi Zhang, Xuandong Li, Linzhang Wang, Tian Zhang, Yi Wang, Zili Shao
  • Publication number: 20170315931
    Abstract: A method for processing memory pages is provided, where the pages in memory include idle single-level cell (SLC) memory pages, active SLC memory pages, inactive SLC memory pages, and multi-level cell (MLC) memory pages. When the quantity of idle SLC memory pages of any virtual machine (VM) falls below a specified threshold, the method converts one idle SLC memory page into two MLC memory pages, copies the data of two inactive SLC memory pages into the two converted MLC memory pages, and releases the storage space of the two inactive SLC memory pages to obtain two idle SLC memory pages.
    Type: Application
    Filed: July 17, 2017
    Publication date: November 2, 2017
    Inventors: Duo Liu, Zili Shao, Linbo Long
  • Publication number: 20140304453
    Abstract: This invention discloses methods for implementing a flash translation layer in a computer subsystem comprising a flash memory and a random-access memory (RAM). According to one disclosed method, the flash memory comprises data blocks for storing real data and translation blocks for storing address-mapping information. The RAM includes a cache space allocation table and a translation page mapping table, and the cache space allocation table may be partitioned into a first cache space and a second cache space. Upon receiving an address-translation request, the cache space allocation table is searched to determine whether an address-mapping data structure matching the request is present. If not, the translation blocks are searched for the matching address-mapping data structure, with the physical page addresses for accessing the translation blocks provided by the translation page mapping table. The matching address-mapping data structure is also used to update the cache space allocation table (see the code sketch after this entry).
    Type: Application
    Filed: April 8, 2013
    Publication date: October 9, 2014
    Applicant: The Hong Kong Polytechnic University
    Inventors: Zili SHAO, Zhiwei QIN, Yi WANG, Renhai CHEN, Duo LIU
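    The flash-translation-layer entry above describes a demand-based lookup with a two-part cache; a hypothetical C sketch of that lookup path follows. The table sizes, the entry layout, the naive replacement into the second cache space, and the simulated flash read are assumptions made for illustration, not details from the patent.

      /* Hypothetical sketch of an FTL address translation with a cached
       * mapping table; sizes, layout and replacement policy are illustrative. */
      #include <stdio.h>

      #define CACHE_SLOTS       8       /* slots in the cache space allocation table   */
      #define FIRST_PARTITION   4       /* slots 0..3: first cache space; rest: second */
      #define TRANSLATION_PAGES 4       /* translation pages kept in translation blocks */
      #define ENTRIES_PER_TPAGE 64      /* address-mapping entries per translation page */

      typedef struct { int lpn; int ppn; int valid; } map_entry_t;

      static map_entry_t cache[CACHE_SLOTS];      /* cache space allocation table */
      static int tp_mapping[TRANSLATION_PAGES];   /* translation page mapping table:
                                                     translation page -> physical page */

      /* Simulated fetch of one address-mapping entry from a translation block. */
      static int read_translation_entry(int lpn)
      {
          int tpage = lpn / ENTRIES_PER_TPAGE;
          int phys_page = tp_mapping[tpage];      /* where that translation page lives     */
          (void)phys_page;                        /* a real FTL would read this flash page */
          return 1000 + lpn;                      /* fabricated physical page number       */
      }

      static int translate(int lpn)
      {
          /* 1. search the cache space allocation table for a matching entry */
          for (int i = 0; i < CACHE_SLOTS; i++)
              if (cache[i].valid && cache[i].lpn == lpn)
                  return cache[i].ppn;

          /* 2. on a miss, locate the mapping via the translation page mapping
           *    table and read it from the translation blocks */
          int ppn = read_translation_entry(lpn);

          /* 3. use the fetched mapping to update the cache (naive round-robin
           *    replacement restricted to the second cache space, as an example) */
          static int victim = FIRST_PARTITION;
          cache[victim] = (map_entry_t){ .lpn = lpn, .ppn = ppn, .valid = 1 };
          victim = FIRST_PARTITION +
                   (victim - FIRST_PARTITION + 1) % (CACHE_SLOTS - FIRST_PARTITION);
          return ppn;
      }

      int main(void)
      {
          for (int t = 0; t < TRANSLATION_PAGES; t++)
              tp_mapping[t] = 100 + t;            /* fabricated physical locations */
          printf("lpn 5 -> ppn %d (miss, fetched via translation page mapping table)\n",
                 translate(5));
          printf("lpn 5 -> ppn %d (hit in cache space allocation table)\n",
                 translate(5));
          return 0;
      }

    Keeping only a small cache of address-mapping entries in RAM and fetching the rest from translation blocks on demand is the usual trade-off such demand-based FTL designs make between RAM footprint and extra flash reads.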