Patents by Inventor Kan Frankie Fan

Kan Frankie Fan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12282655
    Abstract: A method for adaptive mapping for data compression includes determining an input/output (I/O) request pattern, dynamically switching between a segment mapping mode and a flat hash table mapping mode based on the determined I/O request pattern, updating a shared mapping table for the segment mapping mode and the flat hash table mapping mode, and adjusting an entry of the mapping table based on the determined I/O request pattern and a status of the entry. (An illustrative sketch of this idea appears after this listing.)
    Type: Grant
    Filed: February 2, 2023
    Date of Patent: April 22, 2025
    Assignee: Lemon Inc.
    Inventors: Ping Zhou, Longxiao Li, Peng Xu, Kan Frankie Fan, Chaohong Hu, Fei Liu, Hui Zhang, Di Xu
  • Patent number: 12204777
    Abstract: Systems and methods for space allocation for block device compression are provided. In particular, a computing device may receive an allocation request to write compressed data, select a range list adequate for serving the allocation request from a plurality of range lists, dequeue a range entry from the selected range list to allocate free space for the compressed data, and allocate the free space corresponding to the range entry to the compressed data to serve the allocation request. (An illustrative sketch of this idea appears after this listing.)
    Type: Grant
    Filed: March 21, 2023
    Date of Patent: January 21, 2025
    Assignee: Lemon Inc.
    Inventors: Ping Zhou, Kan Frankie Fan, Chaohong Hu, Longxiao Li, Hui Zhang, Fei Liu
  • Patent number: 12204750
    Abstract: The present disclosure describes techniques of metadata management for transparent block level compression. A first area may be created in a backend solid state drive. The first area may comprise a plurality of entries. The plurality of entries may be indexed by addresses of a plurality of blocks of uncompressed data. Each of the plurality of entries comprises a first part configured to store metadata and a second part configured to store compressed data. Each of the plurality of blocks of uncompressed data may be compressed individually to generate a plurality of compressed blocks. Metadata and at least a portion of compressed data associated with each of the plurality of compressed blocks may be stored in one of the plurality of entries based on an address of a corresponding block of uncompressed data. A second area may be created in the backend solid state drive for storing the rest of the compressed data. (An illustrative sketch of this idea appears after this listing.)
    Type: Grant
    Filed: September 26, 2022
    Date of Patent: January 21, 2025
    Assignee: Lemon Inc.
    Inventors: Ping Zhou, Chaohong Hu, Kan Frankie Fan, Fei Liu, Longxiao Li, Hui Zhang
  • Patent number: 12197727
    Abstract: Methods and systems for adaptive mapping for data compression on a storage device are provided. The method includes determining a data request pattern of a workload, determining whether to use at least one of a segment mapping mode or a hash mapping mode for mapping the workload, dividing a space on the storage device into a plurality of defrag units for storing data, and assigning the plurality of defrag units as being at least one of a segment defrag unit or a hash defrag unit. The method also includes, when the data request pattern is for the segment mapping mode, storing the data on at least one of the plurality of defrag units assigned as the segment defrag unit, and, when the data request pattern is for the hash mapping mode, storing the data on at least one of the plurality of defrag units assigned as the hash defrag unit.
    Type: Grant
    Filed: May 4, 2023
    Date of Patent: January 14, 2025
    Assignee: Lemon Inc.
    Inventors: Ping Zhou, Kan Frankie Fan
  • Patent number: 12189969
    Abstract: A system and method are described to efficiently allocate memory space with low latency overhead by allocating blocks of non-volatile memory on a storage device according to a tree data structure comprising a plurality of counter sets, each counter set including one or a plurality of counters indicating numbers of unallocated blocks of memory space within the non-volatile memory. (An illustrative sketch of this idea appears after this listing.)
    Type: Grant
    Filed: December 15, 2022
    Date of Patent: January 7, 2025
    Assignee: Lemon Inc.
    Inventors: Ping Zhou, Kan Frankie Fan, Chaohong Hu, Longxiao Li, Peng Xu, Fei Liu, Hui Zhang
  • Patent number: 12093175
    Abstract: Described are examples for storing data on a storage device, including storing, in a live write stream cache, one or more logical blocks (LBs) corresponding to a data segment, writing, for each LB in the data segment, a cache element of a cache entry that points to the LB in the live write stream cache, where the cache entry includes multiple cache elements corresponding to the multiple LBs of the data segment, writing, for the cache entry, a table entry in a mapping table that points to the cache entry, and when a storage policy is triggered for the cache entry, writing the multiple LBs, pointed to by each cache element of the cache entry, to a stream for storing as contiguous LBs on the storage device, and updating the table entry to point to a physical address of a first LB of the contiguous LBs on the storage device. (An illustrative sketch of this idea appears after this listing.)
    Type: Grant
    Filed: November 9, 2022
    Date of Patent: September 17, 2024
    Assignee: Lemon Inc.
    Inventors: Peng Xu, Ping Zhou, Chaohong Hu, Fei Liu, Changyou Xu, Kan Frankie Fan
  • Publication number: 20240264970
    Abstract: Described are examples of a remote memory bridge, or a method for providing or operating a remote memory bridge, that may include a host interface configured to access one or more memories of a host device, and a remote interface configured to provide, to one or more remote devices, a remote memory device function to access the one or more memories of the host device.
    Type: Application
    Filed: April 19, 2024
    Publication date: August 8, 2024
    Inventors: Ping Zhou, Kan Frankie Fan
  • Publication number: 20240248625
    Abstract: Systems and methods for accessing block storage devices are provided. In particular, a computing device may receive a write request including an uncompressed data and an uncompressed block address associated with the uncompressed data, generate compressed data by compressing the uncompressed data, determine a plurality of mapping candidates of compressed data blocks in the block storage devices based on the uncompressed block address, select a compressed data block from the plurality of mapping candidates that has sufficient capacity to store the compressed data, write the compressed data to the selected compressed data block, update metadata of the selected compressed data block to link the uncompressed block address to a compressed block address of the selected compressed data block, and write the selected compressed data block back to a respective block storage device of the block storage devices.
    Type: Application
    Filed: January 20, 2023
    Publication date: July 25, 2024
    Inventors: Ping Zhou, Chaohong Hu, Kan Frankie Fan, Fei Liu, Longxiao Li, Hui Zhang
  • Publication number: 20240168630
    Abstract: A flat hash table includes a plurality of entries, and each entry includes a hash function index and a usage bitmap. A method for block device level compression mapping using the flat hash table includes compressing uncompressed data to compressed data, retrieving an entry of the flat hash table using an uncompressed block address of the uncompressed data, determining a compressed block address of the compressed data by executing at least one hash function and by determining a hash function in the at least one hash function for mapping the uncompressed block address to the compressed block address that corresponds to a space in a block storage device, storing the compressed data to the space that corresponds to the compressed block address, and updating the hash function index of the entry of the flat hash table with an index indicative of the hash function. (An illustrative sketch of this idea appears after this listing.)
    Type: Application
    Filed: November 18, 2022
    Publication date: May 23, 2024
    Inventors: Ping Zhou, Longxiao Li, Chaohong Hu, Fei Liu, Kan Frankie Fan, Hui Zhang
  • Publication number: 20240152455
    Abstract: Described are examples for storing data on a storage device, including storing, in a live write stream cache, one or more logical blocks (LBs) corresponding to a data segment, writing, for each LB in the data segment, a cache element of a cache entry that points to the LB in the live write stream cache, where the cache entry includes multiple cache elements corresponding to the multiple LBs of the data segment, writing, for the cache entry, a table entry in a mapping table that points to the cache entry, and when a storage policy is triggered for the cache entry, writing the multiple LBs, pointed to by each cache element of the cache entry, to a stream for storing as contiguous LBs on the storage device, and updating the table entry to point to a physical address of a first LB of the contiguous LBs on the storage device.
    Type: Application
    Filed: November 9, 2022
    Publication date: May 9, 2024
    Inventors: Peng Xu, Ping Zhou, Chaohong Hu, Fei Liu, Changyou Xu, Kan Frankie Fan
  • Publication number: 20240126686
    Abstract: A system includes a host device, a hardware offload engine, and a non-volatile storage to store on-disk data. The hardware offload engine is represented to the host device as being a storage having a virtual storage capacity, and the host device transmits an offload command to the hardware offload engine as a data write command without requiring kernel changes or special drivers.
    Type: Application
    Filed: December 27, 2023
    Publication date: April 18, 2024
    Inventors: Ping Zhou, Kan Frankie Fan, Hui Zhang
  • Publication number: 20240103722
    Abstract: The present disclosure describes techniques of metadata management for transparent block level compression. A first area may be created in a backend solid state drive. The first area may comprise a plurality of entries. The plurality of entries may be indexed by addresses of a plurality of blocks of uncompressed data. Each of the plurality of entries comprises a first part configured to store metadata and a second part configured to store compressed data. Each of the plurality of blocks of uncompressed data may be compressed individually to generate a plurality of compressed blocks. Metadata and at least a portion of compressed data associated with each of the plurality of compressed blocks may be stored in one of the plurality of entries based on an address of a corresponding block of uncompressed data. A second area may be created in the backend solid state drive for storing the rest of the compressed data.
    Type: Application
    Filed: September 26, 2022
    Publication date: March 28, 2024
    Inventors: Ping Zhou, Chaohong Hu, Kan Frankie Fan, Fei Liu, Longxiao Li, Hui Zhang
  • Publication number: 20230273727
    Abstract: Methods and systems for adaptive mapping for data compression on a storage device are provided. The method includes determining a data request pattern of a workload, determining whether to use at least one of a segment mapping mode or a hash mapping mode for mapping the workload, dividing a space on the storage device into a plurality of defrag units for storing data, and assigning the plurality of defrag units as being at least one of a segment defrag unit or a hash defrag unit. The method also includes, when the data request pattern is for the segment mapping mode, storing the data on at least one of the plurality of defrag units assigned as the segment defrag unit, and, when the data request pattern is for the hash mapping mode, storing the data on at least one of the plurality of defrag units assigned as the hash defrag unit.
    Type: Application
    Filed: May 4, 2023
    Publication date: August 31, 2023
    Inventors: Ping Zhou, Kan Frankie Fan
  • Publication number: 20230229324
    Abstract: Systems and methods for space allocation for block device compression are provided. In particular, a computing device may receive an allocation request to write compressed data, select a range list adequate for serving the allocation request from a plurality of range lists, dequeue a range entry from the selected range list to allocate free space for the compressed data, and allocate the free space corresponding to the range entry to the compressed data to serve the allocation request.
    Type: Application
    Filed: March 21, 2023
    Publication date: July 20, 2023
    Inventors: Ping Zhou, Kan Frankie Fan, Chaohong Hu, Longxiao Li, Hui Zhang, Fei Liu
  • Publication number: 20230176734
    Abstract: A method for adaptive mapping for data compression includes determining an input/output (I/O) request pattern, dynamically switching between a segment mapping mode and a flat hash table mapping mode based on the determined I/O request pattern, updating a shared mapping table for the segment mapping mode and the flat hash table mapping mode, and adjusting an entry of the mapping table based on the determined I/O request pattern and a status of the entry.
    Type: Application
    Filed: February 2, 2023
    Publication date: June 8, 2023
    Inventors: Ping Zhou, Longxiao Li, Peng Xu, Kan Frankie Fan, Chaohong Hu, Fei Liu, Hui Zhang, Di Xu
  • Publication number: 20230122533
    Abstract: A system and method are described to efficiently allocate memory space with low latency overhead by allocating blocks of non-volatile memory on a storage device according to a tree data structure comprising a plurality of counter sets, each counter set including one or a plurality of counters indicating numbers of unallocated blocks of memory space within the non-volatile memory.
    Type: Application
    Filed: December 15, 2022
    Publication date: April 20, 2023
    Inventors: Ping Zhou, Kan Frankie Fan, Chaohong Hu, Longxiao Li, Peng Xu, Fei Liu, Hui Zhang
  • Patent number: 9219683
    Abstract: Systems and methods for providing a unified infrastructure over layer-2 networks are provided. A first frame is generated by an end point. The first frame comprises a proxy payload, a proxy association header and a frame header relating to a control proxy element. The first frame is sent over a first network to the control proxy element. A second frame is generated by the control proxy element. The second frame comprises the proxy payload and a proxy header. The first and second frames correspond to different layer-2 protocols. The control proxy element sends the second frame over a second network employing the layer-2 protocol of the second frame.
    Type: Grant
    Filed: April 8, 2013
    Date of Patent: December 22, 2015
    Assignee: Broadcom Corporation
    Inventors: Uri El Zur, Kan Frankie Fan, Scott S. McDaniel, Murali Rajagopal
  • Publication number: 20130286840
    Abstract: A network device may provide Layer-2 (L2) based tunneling to offload tunneling performed by tunneling gateways. The network device may selectively offload tunneling handled by a tunneling gateway by determining whether addressing or locality information of a destination network device for a traffic packet is available. When the locality information of the destination network device for the traffic packet is available, the network device may establish a separate tunnel for communicating the traffic packet to the destination network device instead of forwarding the traffic packet to the tunneling gateway. The separate tunnel may be configured to bypass the tunneling gateway. When the addressing or locality information of the destination network device for the traffic packet is not available, the network device may forward the traffic packet to the tunneling gateway for tunneling of the traffic packet by the tunneling gateway. (An illustrative sketch of this idea appears after this listing.)
    Type: Application
    Filed: June 28, 2013
    Publication date: October 31, 2013
    Inventor: Kan Frankie Fan
  • Patent number: 8542585
    Abstract: A method for processing network data includes collecting by a network interface controller (NIC), a plurality of transmit (TX) buffer indicators into a plurality of priority lists of connections. Each of the plurality of TX buffer indicators identifies transmit-ready data located externally to the NIC and not previously received by the NIC. One or more of the plurality of TX buffer indicators may be selected. The identified transmit-ready data may be retrieved into the NIC based on the selected one or more of the plurality of TX buffer indicators. At least a portion of the identified transmit-ready data may be transmitted. Each of the plurality of priority lists may be generated based on a particular connection priority characteristic and a particular connection type. The identified transmit-ready data may be associated with the same connection priority characteristic and the same connection type. (An illustrative sketch of this idea appears after this listing.)
    Type: Grant
    Filed: August 9, 2011
    Date of Patent: September 24, 2013
    Assignee: Broadcom Corporation
    Inventors: Scott McDaniel, Kan Frankie Fan, Uri El Zur
  • Patent number: 8493851
    Abstract: A network device may provide Layer-2 (L2) based tunneling to offload at least a portion of tunneling performed by tunneling gateways. The L2 based tunneling provided by the network device may comprise determining one or more other network devices that may receive traffic packets which may be handled by the tunneling gateways; and communicating at least a portion of the traffic packets to the one or more other network devices directly from the network device, using L2 tunnels established via the network device such that communication of the at least a portion of the one or more traffic packets offloads tunneling by bypassing the one or more tunneling gateways. At least a portion of the L2 based tunnel offloading by the network device may be handled via a network controller. Providing the offloaded tunneling in the network device may be based on a determined traffic type of the traffic packets.
    Type: Grant
    Filed: January 20, 2011
    Date of Patent: July 23, 2013
    Inventor: Kan Frankie Fan
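
Illustrative sketches

Patent 12282655 (and the related patent 12197727) describes switching between a segment mapping mode and a flat hash table mapping mode based on the observed I/O request pattern. The Python sketch below is a minimal illustration of that idea under stated assumptions, not the patented implementation; the sliding window, the sequential-ratio threshold, and the class name AdaptiveMapper are inventions of this example.

    from collections import deque

    class AdaptiveMapper:
        """Toy model: choose a mapping mode from the recent I/O request pattern."""

        def __init__(self, window=64, seq_threshold=0.7):
            self.recent = deque(maxlen=window)   # recently observed logical block addresses
            self.seq_threshold = seq_threshold   # assumed tuning knob
            self.mode = "segment"                # start in segment mapping mode

        def observe(self, lba):
            self.recent.append(lba)
            self._maybe_switch()

        def _sequential_ratio(self):
            # Fraction of consecutive requests that address adjacent blocks.
            if len(self.recent) < 2:
                return 1.0
            addresses = list(self.recent)
            sequential = sum(1 for a, b in zip(addresses, addresses[1:]) if b == a + 1)
            return sequential / (len(addresses) - 1)

        def _maybe_switch(self):
            # Mostly sequential I/O favors coarse segment mapping; mostly random
            # I/O favors the flat hash table mapping.
            ratio = self._sequential_ratio()
            self.mode = "segment" if ratio >= self.seq_threshold else "flat_hash"

    mapper = AdaptiveMapper()
    for lba in [10, 11, 12, 13, 500, 7, 901, 42, 3]:
        mapper.observe(lba)
    print(mapper.mode)   # "flat_hash" for this mostly random pattern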
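
Patent 12204777 covers selecting, from a plurality of range lists, a range list adequate for an allocation request and dequeuing a range entry from it to allocate free space for compressed data. The sketch below shows one plausible shape for that flow; the bucket boundaries, the FIFO dequeue policy, and the leftover-splitting step are assumptions made for the example.

    from collections import deque

    # Free space is kept in several range lists, bucketed by range size.
    # The bucket boundaries below are assumptions for the example.
    BUCKETS = [4, 16, 64, 256]                       # range sizes in blocks
    range_lists = {size: deque() for size in BUCKETS}

    def add_free_range(start_block, size):
        """File a freed range under the largest bucket it fully covers."""
        for bucket in reversed(BUCKETS):
            if size >= bucket:
                range_lists[bucket].append((start_block, size))
                return
        # Ranges smaller than the smallest bucket would be coalesced elsewhere.

    def allocate(blocks_needed):
        """Select the smallest adequate range list and dequeue a range entry."""
        for bucket in BUCKETS:
            if bucket >= blocks_needed and range_lists[bucket]:
                start, size = range_lists[bucket].popleft()
                leftover = size - blocks_needed
                if leftover > 0:
                    add_free_range(start + blocks_needed, leftover)
                return start                         # first block of the allocation
        raise MemoryError("no range list adequate for this request")

    add_free_range(0, 64)
    add_free_range(64, 256)
    print(allocate(10))    # -> 0, served from the 64-block range list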
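
Patent 12204750 describes a first on-disk area of entries indexed by uncompressed block address, each entry holding metadata plus the leading portion of the compressed block, with a second area for whatever does not fit. The sketch below models that split in memory only; the 512-byte inline capacity, the metadata fields, and the overflow addressing are assumptions for illustration.

    import zlib

    ENTRY_DATA_BYTES = 512    # assumed inline-data capacity of a first-area entry

    first_area = {}           # uncompressed block address -> (metadata, inline bytes)
    second_area = []          # overflow segments for compressed data that does not fit

    def write_block(uba, raw_block):
        """Compress one block and split it across the first and second areas."""
        compressed = zlib.compress(raw_block)
        inline, overflow = compressed[:ENTRY_DATA_BYTES], compressed[ENTRY_DATA_BYTES:]
        overflow_index = None
        if overflow:
            overflow_index = len(second_area)
            second_area.append(overflow)
        metadata = {"compressed_len": len(compressed), "overflow_index": overflow_index}
        first_area[uba] = (metadata, inline)

    def read_block(uba):
        """Reassemble the compressed stream from both areas and decompress."""
        metadata, inline = first_area[uba]
        data = inline
        if metadata["overflow_index"] is not None:
            data += second_area[metadata["overflow_index"]]
        return zlib.decompress(data)

    block = b"log line 42: everything is fine\n" * 128   # a compressible 4 KiB block
    write_block(uba=7, raw_block=block)
    assert read_block(7) == block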
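
Patent 12189969 describes allocating blocks of non-volatile memory using a tree of counter sets, where each counter records how many unallocated blocks sit beneath it, so a free block can be located without scanning an entire bitmap. The sketch below is a two-level toy version; the group size, the leaf bitmap, and the linear scan within a group are assumptions for the example.

    GROUP_SIZE = 8                                   # assumed blocks per leaf group
    NUM_GROUPS = 4

    # Leaf level: a free/used flag per block. Counter level: free blocks per group.
    free_bitmap = [[True] * GROUP_SIZE for _ in range(NUM_GROUPS)]
    group_free_counters = [GROUP_SIZE] * NUM_GROUPS  # one counter per group
    total_free = GROUP_SIZE * NUM_GROUPS             # root counter

    def allocate_block():
        """Walk the counters to a group with free space, then claim one block."""
        global total_free
        if total_free == 0:
            raise MemoryError("no unallocated blocks")
        group = next(g for g, c in enumerate(group_free_counters) if c > 0)
        offset = free_bitmap[group].index(True)
        free_bitmap[group][offset] = False
        group_free_counters[group] -= 1
        total_free -= 1
        return group * GROUP_SIZE + offset           # global block number

    def free_block(block_no):
        """Return a block and update the counters back up the tree."""
        global total_free
        group, offset = divmod(block_no, GROUP_SIZE)
        free_bitmap[group][offset] = True
        group_free_counters[group] += 1
        total_free += 1

    first = allocate_block()      # -> 0
    second = allocate_block()     # -> 1
    free_block(first)
    print(allocate_block())       # -> 0 again, found via the group counters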
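
Patent 12093175 (and publication 20240152455) describes staging the logical blocks of a data segment in a live write stream cache behind a cache entry of per-LB elements, then, once a storage policy triggers, writing those LBs to a stream as contiguous blocks and repointing the mapping table at the first physical address. The sketch below compresses that flow into a few Python containers; the four-LB segment size, the flush-on-full policy, and the dictionary layout are assumptions, not the patented design.

    LBS_PER_SEGMENT = 4                  # assumed segment size in logical blocks

    live_write_cache = []                # cached LB payloads, indexed by cache slot
    device = []                          # stand-in for the device; index = physical address
    mapping_table = {}                   # segment id -> cache entry or physical address

    def write_lb(segment_id, payload):
        """Stage one LB in the cache and record it in the segment's cache entry."""
        slot = len(live_write_cache)
        live_write_cache.append(payload)
        entry = mapping_table.setdefault(segment_id, {"cache_elements": []})
        entry["cache_elements"].append(slot)
        if len(entry["cache_elements"]) == LBS_PER_SEGMENT:   # assumed storage policy
            flush_segment(segment_id)

    def flush_segment(segment_id):
        """Write the cached LBs contiguously and repoint the table entry at them."""
        entry = mapping_table[segment_id]
        first_physical = len(device)
        for slot in entry["cache_elements"]:
            device.append(live_write_cache[slot])
        mapping_table[segment_id] = {"physical_address": first_physical}

    for i in range(LBS_PER_SEGMENT):
        write_lb(segment_id=0, payload=f"lb-{i}")
    print(mapping_table[0])              # {'physical_address': 0}
    print(device)                        # ['lb-0', 'lb-1', 'lb-2', 'lb-3'], stored contiguously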
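
Publication 20240168630 describes a flat hash table whose entries carry a hash function index and a usage bitmap, so that an uncompressed block address plus the stored index is enough to recompute the compressed block address. The sketch below illustrates only that lookup idea; the two hash functions, the table size, the 512-byte sub-slot, and the behavior on collision are assumptions for the example.

    from dataclasses import dataclass

    TABLE_SIZE = 1024          # assumed number of compressed-space slots

    # Assumed family of candidate hash functions mapping an uncompressed block
    # address (UBA) to a candidate compressed block address (CBA).
    HASH_FUNCS = [
        lambda uba: (uba * 2654435761) % TABLE_SIZE,
        lambda uba: ((uba ^ (uba >> 7)) * 40503) % TABLE_SIZE,
    ]

    @dataclass
    class FlatHashEntry:
        hash_func_index: int   # which hash function produced the CBA
        usage_bitmap: int      # which 512-byte sub-slots of the CBA hold data

    flat_hash_table = {}       # UBA -> FlatHashEntry
    occupied_cbas = set()      # CBAs already handed out

    def store(uba, compressed_len, slot_bytes=512):
        """Map a UBA to a CBA by trying each hash function in turn."""
        for index, hash_func in enumerate(HASH_FUNCS):
            cba = hash_func(uba)
            if cba not in occupied_cbas:
                occupied_cbas.add(cba)
                slots_needed = -(-compressed_len // slot_bytes)   # ceiling division
                flat_hash_table[uba] = FlatHashEntry(
                    hash_func_index=index,
                    usage_bitmap=(1 << slots_needed) - 1)
                return cba
        raise RuntimeError("no free candidate; a real system would relocate data")

    def lookup(uba):
        """Recompute the CBA from the stored hash function index."""
        entry = flat_hash_table[uba]
        return HASH_FUNCS[entry.hash_func_index](uba)

    cba = store(uba=4096, compressed_len=1300)
    assert lookup(4096) == cba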
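
Publication 20130286840 and patent 8493851 describe offloading tunneling from a gateway: when the addressing or locality of the destination device is known, the sender establishes a direct Layer-2 tunnel that bypasses the tunneling gateway; otherwise it forwards the packet to the gateway as usual. The sketch below reduces that decision to a table lookup; the locality table, the tunnel strings, and both send paths are placeholders, not a real networking API.

    # Assumed locality table: overlay destination -> physical (underlay) host.
    locality_table = {
        "vm-a": "host-10.0.0.5",
        "vm-b": "host-10.0.0.9",
    }

    direct_tunnels = {}                      # destination -> established L2 tunnel

    def send_packet(destination, payload):
        """Bypass the tunneling gateway whenever the destination's locality is known."""
        underlay = locality_table.get(destination)
        if underlay is None:
            return forward_via_gateway(destination, payload)
        tunnel = direct_tunnels.setdefault(destination, f"l2-tunnel-to-{underlay}")
        return f"sent via {tunnel}, gateway bypassed"

    def forward_via_gateway(destination, payload):
        """Fallback path: let the tunneling gateway handle the encapsulation."""
        return f"forwarded to tunneling gateway for {destination}"

    print(send_packet("vm-a", b"hello"))     # direct L2 tunnel, gateway bypassed
    print(send_packet("vm-z", b"hello"))     # unknown locality, falls back to the gateway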
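
Patent 8542585 describes a NIC that collects transmit-buffer indicators into priority lists keyed by connection priority and connection type, and fetches the transmit-ready data from host memory only when it selects an indicator for transmission. The sketch below models just that bookkeeping; the numeric priority classes, the connection-type labels, and the host_memory dictionary standing in for DMA are assumptions for the example.

    from collections import deque

    host_memory = {}        # buffer id -> transmit-ready bytes (stays on the host side)
    priority_lists = {}     # (connection priority, connection type) -> deque of buffer ids

    def post_tx_buffer(buffer_id, data, priority, conn_type):
        """Host posts a TX buffer indicator; the data itself stays in host memory."""
        host_memory[buffer_id] = data
        priority_lists.setdefault((priority, conn_type), deque()).append(buffer_id)

    def transmit_next():
        """NIC side: pick the highest-priority non-empty list, fetch the data, send it."""
        for key in sorted(priority_lists):                 # lower number = higher priority
            if priority_lists[key]:
                buffer_id = priority_lists[key].popleft()  # selected TX buffer indicator
                data = host_memory.pop(buffer_id)          # stand-in for the DMA fetch
                return key, data
        return None

    post_tx_buffer("buf-1", b"bulk payload", priority=2, conn_type="iscsi")
    post_tx_buffer("buf-2", b"latency-sensitive payload", priority=0, conn_type="tcp")
    print(transmit_next())   # ((0, 'tcp'), b'latency-sensitive payload')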