Patents by Inventor Kan Frankie Fan

Kan Frankie Fan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240152455
    Abstract: Described are examples for storing data on a storage device, including storing, in a live write stream cache, one or more logical blocks (LBs) corresponding to a data segment, writing, for each LB in the data segment, a cache element of a cache entry that points to the LB in the live write stream cache, where the cache entry includes multiple cache elements corresponding to the multiple LBs of the data segment, writing, for the cache entry, a table entry in a mapping table that points to the cache entry, and when a storage policy is triggered for the cache entry, writing the multiple LBs, pointed to by each cache element of the cache entry, to a stream for storing as contiguous LBs on the storage device, and updating the table entry to point to a physical address of a first LB of the contiguous LBs on the storage device.
    Type: Application
    Filed: November 9, 2022
    Publication date: May 9, 2024
    Inventors: Peng XU, Ping Zhou, Chaohong Hu, Fei Liu, Changyou Xu, Kan Frankie Fan
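A minimal Python sketch of the cache-entry and mapping-table flow described in the abstract above: logical blocks are staged in a live write stream cache, a cache entry holds one element per cached LB, and a flush writes the blocks contiguously and repoints the table entry at the first physical LB. All names (SegmentStore, flush, etc.) are illustrative, not from the patent.

```python
# Toy model of the live-write-stream-cache flow; not the patented implementation.
from dataclasses import dataclass, field
from typing import Dict, List, Union

LB_SIZE = 4096  # assumed logical-block size

@dataclass
class CacheEntry:
    # Each element points to one logical block held in the live write stream cache.
    elements: List[int] = field(default_factory=list)

class SegmentStore:
    def __init__(self):
        self.live_write_cache: List[bytes] = []        # cached LBs awaiting flush
        self.mapping_table: Dict[int, Union[CacheEntry, int]] = {}  # LBA -> cache entry or physical addr
        self.device: List[bytes] = []                  # stand-in for the storage stream

    def write_segment(self, lba: int, segment: bytes) -> None:
        """Stage a data segment: one cache element per LB, one table entry per segment."""
        entry = CacheEntry()
        for off in range(0, len(segment), LB_SIZE):
            self.live_write_cache.append(segment[off:off + LB_SIZE])
            entry.elements.append(len(self.live_write_cache) - 1)
        self.mapping_table[lba] = entry  # table entry points at the cache entry

    def flush(self, lba: int) -> None:
        """Storage policy triggered: write the LBs contiguously and repoint the table entry."""
        entry = self.mapping_table[lba]
        first_physical = len(self.device)
        for idx in entry.elements:
            self.device.append(self.live_write_cache[idx])
        self.mapping_table[lba] = first_physical  # now points at the first contiguous LB

store = SegmentStore()
store.write_segment(lba=0, segment=b"x" * (3 * LB_SIZE))
store.flush(lba=0)
print(store.mapping_table[0])  # physical address of the first contiguous LB -> 0
```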
  • Publication number: 20240126686
    Abstract: A system includes a host device, a hardware offload engine, and a non-volatile storage to store on-disk data. The hardware offload engine is represented to the host device as a storage device having a virtual storage capacity, and the host device transmits an offload command to the hardware offload engine as a data write command without requiring kernel changes or special drivers.
    Type: Application
    Filed: December 27, 2023
    Publication date: April 18, 2024
    Inventors: Ping Zhou, Kan Frankie Fan, Hui Zhang
  • Publication number: 20240103722
    Abstract: The present disclosure describes techniques of metadata management for transparent block level compression. A first area may be created in a backend solid state drive. The first area may comprise a plurality of entries. The plurality of entries may be indexed by addresses of a plurality of blocks of uncompressed data. Each of the plurality of entries comprises a first part configured to store metadata and a second part configured to store compressed data. Each of the plurality of blocks of uncompressed data may be compressed individually to generate a plurality of compressed blocks. Metadata and at least a portion of compressed data associated with each of the plurality of compressed blocks may be stored in one of the plurality of entries based on an address of a corresponding block of uncompressed data. A second area may be created in the backend solid state drive for storing the rest of the compressed data.
    Type: Application
    Filed: September 26, 2022
    Publication date: March 28, 2024
    Inventors: Ping Zhou, Chaohong Hu, Kan Frankie Fan, Fei Liu, Longxiao Li, Hui Zhang
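An illustrative model of the two-area layout in the abstract above: a per-block entry holds metadata plus the first slice of compressed data, and an overflow area stores whatever does not fit. Entry size, field names, and the use of zlib are assumptions for the sketch only.

```python
# Hypothetical model of transparent block-level compression metadata; sizes are invented.
import zlib

BLOCK_SIZE = 4096
ENTRY_DATA_BYTES = 512   # assumed room for compressed data inside each entry

first_area = {}     # uncompressed block address -> (metadata, inline compressed bytes)
second_area = []    # overflow compressed bytes, addressed by index

def write_block(block_addr: int, data: bytes) -> None:
    compressed = zlib.compress(data)          # each block compressed individually
    inline = compressed[:ENTRY_DATA_BYTES]
    overflow = compressed[ENTRY_DATA_BYTES:]
    overflow_addr = None
    if overflow:
        overflow_addr = len(second_area)
        second_area.append(overflow)
    metadata = {"compressed_len": len(compressed), "overflow_addr": overflow_addr}
    first_area[block_addr] = (metadata, inline)

def read_block(block_addr: int) -> bytes:
    metadata, inline = first_area[block_addr]
    compressed = inline
    if metadata["overflow_addr"] is not None:
        compressed += second_area[metadata["overflow_addr"]]
    return zlib.decompress(compressed)

write_block(0, bytes(range(256)) * (BLOCK_SIZE // 256))
assert read_block(0) == bytes(range(256)) * (BLOCK_SIZE // 256)
```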
  • Publication number: 20230273727
    Abstract: Methods and systems for adaptive mapping for data compression on a storage device are provided. The method includes determining a data request pattern of a workload, determining whether to use at least one of a segment mapping mode or a hash mapping mode for mapping the workload, dividing a space on the storage device into a plurality of defrag units for storing data, and assigning the plurality of defrag units as being at least one of a segment defrag unit or a hash defrag unit. The method also includes when the data request pattern is for the segment mapping mode, storing the data on at least one of the plurality of defrag units assigned as the segment defrag unit, and when the data request pattern is for the hash mapping mode, storing the data on at least one of the plurality of defrag units assigned as the hash defrag unit.
    Type: Application
    Filed: May 4, 2023
    Publication date: August 31, 2023
    Inventors: Ping Zhou, Kan Frankie Fan
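A rough sketch of the mode selection described in the entry above: defrag units are tagged as segment or hash units, and writes land on a unit whose tag matches the workload's request pattern. The pattern heuristic and all names are hypothetical.

```python
# Illustrative defrag-unit assignment; the sequential/random heuristic is an assumption.
from enum import Enum

class Mode(Enum):
    SEGMENT = "segment"
    HASH = "hash"

class DefragUnit:
    def __init__(self, mode: Mode):
        self.mode = mode
        self.data = []

def detect_pattern(request_lbas: list[int]) -> Mode:
    # Assumption: mostly-sequential LBAs favor segment mapping, random LBAs favor hash.
    sequential = sum(1 for a, b in zip(request_lbas, request_lbas[1:]) if b == a + 1)
    return Mode.SEGMENT if sequential >= len(request_lbas) // 2 else Mode.HASH

# Divide device space into defrag units and pre-assign their roles.
units = [DefragUnit(Mode.SEGMENT), DefragUnit(Mode.SEGMENT), DefragUnit(Mode.HASH)]

def store(request_lbas: list[int], payload: bytes) -> DefragUnit:
    mode = detect_pattern(request_lbas)
    unit = next(u for u in units if u.mode == mode)
    unit.data.append((request_lbas[0], payload))
    return unit

print(store([10, 11, 12, 13], b"seq").mode)   # Mode.SEGMENT
print(store([7, 91, 3, 42], b"rand").mode)    # Mode.HASH
```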
  • Publication number: 20230229324
    Abstract: Systems and methods for space allocation for block device compression are provided. In particular, a computing device may receive an allocation request to write the compressed data, select a range list adequate for serving the allocation request from a plurality of range lists, dequeue a range entry from the selected range list to allocate free space for the compressed data, and allocate the free space corresponding to the range entry to the compressed data to serve the allocation request.
    Type: Application
    Filed: March 21, 2023
    Publication date: July 20, 2023
    Inventors: Ping ZHOU, Kan Frankie FAN, Chaohong HU, Longxiao LI, Hui ZHANG, Fei LIU
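A toy version of the range-list allocation flow in the abstract above: free space is grouped into per-size range lists, and an allocation dequeues a range entry from the smallest list that can serve the request. Bucket sizes and names are assumptions.

```python
# Hypothetical range-list allocator for compressed data; not the patented method.
from collections import deque

# Range lists keyed by range length (in blocks); each entry is (start_block, length).
range_lists = {
    1: deque([(100, 1), (101, 1)]),
    4: deque([(200, 4)]),
    16: deque([(400, 16)]),
}

def allocate(blocks_needed: int):
    """Select an adequate range list, dequeue one range entry, and return it."""
    for size in sorted(range_lists):
        if size >= blocks_needed and range_lists[size]:
            start, length = range_lists[size].popleft()
            # An unused tail could be re-queued into a smaller list (not shown here).
            return start, length
    raise MemoryError("no free range large enough")

print(allocate(3))   # -> (200, 4): dequeued from the 4-block range list
print(allocate(1))   # -> (100, 1)
```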
  • Publication number: 20230176734
    Abstract: A method for adaptive mapping for data compression includes determining an input/output (I/O) request pattern, dynamically switching between a segment mapping mode and a flat hash table mapping mode based on the determined I/O request pattern, updating a shared mapping table for the segment mapping mode and the flat hash table mapping mode, and adjusting an entry of the mapping table based on the determined I/O request pattern and a status of the entry.
    Type: Application
    Filed: February 2, 2023
    Publication date: June 8, 2023
    Inventors: Ping ZHOU, Longxiao LI, Peng XU, Kan Frankie FAN, Chaohong HU, Fei LIU, Hui ZHANG, Di XU
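A compact illustration of the shared-mapping-table idea in the entry above: the same table can hold segment-style entries (one per contiguous range) and flat-hash-style entries (one per block), and an entry is adjusted when the observed I/O pattern changes. This is a hypothetical model, not the patented structure.

```python
# Shared mapping table with per-entry mode adjustment; all layout choices are assumptions.
shared_table = {}   # key -> ("segment", start_lba, length, phys) or ("hash", lba, phys)

def record_sequential(start_lba: int, length: int, phys: int) -> None:
    # Segment mode: one entry covers a contiguous range.
    shared_table[("seg", start_lba)] = ("segment", start_lba, length, phys)

def record_random(lba: int, phys: int) -> None:
    # Flat-hash mode: one entry per block.
    shared_table[("blk", lba)] = ("hash", lba, phys)

def adjust_entry(start_lba: int) -> None:
    """If random I/O starts hitting a segment-mapped range, split it into per-block entries."""
    _kind, start, length, phys = shared_table.pop(("seg", start_lba))
    for i in range(length):
        record_random(start + i, phys + i)

record_sequential(0, 4, phys=1000)
adjust_entry(0)                  # I/O pattern turned random: switch this range to hash entries
print(sorted(shared_table))      # [('blk', 0), ('blk', 1), ('blk', 2), ('blk', 3)]
```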
  • Publication number: 20230122533
    Abstract: A system and method are described to efficiently allocate memory space with low latency overhead by allocating blocks of non-volatile memory on a storage device according to a tree data structure comprising a plurality of counter sets, each counter set including one or a plurality of counters indicating numbers of unallocated blocks of memory space within the non-volatile memory.
    Type: Application
    Filed: December 15, 2022
    Publication date: April 20, 2023
    Inventors: Ping ZHOU, Kan Frankie FAN, Chaohong HU, Longxiao LI, Peng XU, Fei LIU, Hui ZHANG
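A small two-level version of the counter-tree idea in the abstract above: each node's counter holds the number of unallocated blocks beneath it, so an allocator can descend to a group with free space without scanning the whole bitmap. Shapes and names are illustrative only.

```python
# Hypothetical two-level counter tree for block allocation; dimensions are assumptions.
BLOCKS_PER_LEAF = 8
LEAVES = 4

leaf_free = [BLOCKS_PER_LEAF] * LEAVES          # leaf counter set: free blocks per group
root_free = sum(leaf_free)                       # root counter: free blocks in total
bitmap = [[False] * BLOCKS_PER_LEAF for _ in range(LEAVES)]  # per-block allocation state

def allocate_block() -> int:
    """Descend the counters to find a group with space, then mark one block used."""
    global root_free
    if root_free == 0:
        raise MemoryError("no unallocated blocks")
    leaf = next(i for i, free in enumerate(leaf_free) if free > 0)
    offset = bitmap[leaf].index(False)
    bitmap[leaf][offset] = True
    leaf_free[leaf] -= 1
    root_free -= 1
    return leaf * BLOCKS_PER_LEAF + offset

print([allocate_block() for _ in range(3)])   # -> [0, 1, 2]
print(root_free)                              # -> 29
```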
  • Patent number: 9219683
    Abstract: Systems and methods that provide a unified infrastructure over layer-2 networks are provided. A first frame is generated by an end point. The first frame comprises a proxy payload, a proxy association header and a frame header relating to a control proxy element. The first frame is sent over a first network to the control proxy element. A second frame is generated by the control proxy element. The second frame comprises the proxy payload and a proxy header. The first and second frames correspond to different layer-2 protocols. The control proxy element sends the second frame over a second network employing the layer-2 protocol of the second frame.
    Type: Grant
    Filed: April 8, 2013
    Date of Patent: December 22, 2015
    Assignee: Broadcom Corporation
    Inventors: Uri El Zur, Kan Frankie Fan, Scott S. McDaniel, Murali Rajagopal
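A loose byte-level sketch of the re-framing step in the abstract above: the control proxy element strips the first network's frame header and proxy association header, keeps the proxy payload, and wraps it in the second network's framing. The field layouts are invented for illustration and do not correspond to any real header format.

```python
# Hypothetical proxy re-framing between two layer-2 protocols; offsets are illustrative.
import struct

def build_first_frame(proxy_assoc_id: int, payload: bytes) -> bytes:
    frame_header = struct.pack("!6s6sH", b"\x02" * 6, b"\x01" * 6, 0x88B5)  # dst=proxy, src, type
    proxy_assoc_header = struct.pack("!I", proxy_assoc_id)
    return frame_header + proxy_assoc_header + payload

def reframe_at_proxy(first_frame: bytes) -> bytes:
    """Control proxy element: carry the same proxy payload on the second network's protocol."""
    proxy_assoc_id, = struct.unpack_from("!I", first_frame, 14)   # after the 14-byte frame header
    payload = first_frame[18:]
    proxy_header = struct.pack("!IB", proxy_assoc_id, 0x01)       # id + hypothetical frame type
    return proxy_header + payload

second_frame = reframe_at_proxy(build_first_frame(7, b"storage command"))
print(second_frame[5:])   # -> b'storage command'
```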
  • Publication number: 20130286840
    Abstract: A network device may provide Layer-2 (L2) based tunneling to offload tunneling performed by tunneling gateways. The network device may selectively offload tunneling handled by a tunneling gateway by determining whether addressing or locality information of a destination network device for a traffic packet is available. When the locality information of the destination network device for the traffic packet is available, the network device may establish a separate tunnel for communicating the traffic packet to the destination network device instead of forwarding the traffic packet to the tunneling gateway. The separate tunnel may be configured to bypass the tunneling gateway. When the addressing or locality information of the destination network device for the traffic packet is not available, the network device may forward the traffic packet to the tunneling gateway for tunneling of the traffic packet by the tunneling gateway.
    Type: Application
    Filed: June 28, 2013
    Publication date: October 31, 2013
    Inventor: Kan Frankie Fan
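A tiny decision sketch for the offload rule described in the entry above: bypass the tunneling gateway only when the destination's locality information is already known. The locality table and names are hypothetical.

```python
# Hypothetical locality-based tunnel offload decision; not the patented implementation.
locality_table = {"10.0.2.5": "host-B"}   # destination IP -> physical host (learned out of band)

def forward_packet(dst_ip: str, packet: bytes) -> str:
    if dst_ip in locality_table:
        # Locality known: set up a direct L2 tunnel to the destination's host.
        return f"direct tunnel to {locality_table[dst_ip]}"
    # Locality unknown: fall back to the tunneling gateway.
    return "forward to tunneling gateway"

print(forward_packet("10.0.2.5", b"data"))   # direct tunnel to host-B
print(forward_packet("10.0.9.9", b"data"))   # forward to tunneling gateway
```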
  • Patent number: 8542585
    Abstract: A method for processing network data includes collecting by a network interface controller (NIC), a plurality of transmit (TX) buffer indicators into a plurality of priority lists of connections. Each of the plurality of TX buffer indicators identifies transmit-ready data located externally to the NIC and not previously received by the NIC. One or more of the plurality of TX buffer indicators may be selected. The identified transmit-ready data may be retrieved into the NIC based on the selected one or more of the plurality of TX buffer indicators. At least a portion of the identified transmit-ready data may be transmitted. Each of the plurality of priority lists may be generated based on a particular connection priority characteristic and a particular connection type. The identified transmit-ready data may be associated with the same connection priority characteristic and the same connection type.
    Type: Grant
    Filed: August 9, 2011
    Date of Patent: September 24, 2013
    Assignee: Broadcom Corporation
    Inventors: Scott McDaniel, Kan Frankie Fan, Uri El Zur
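A host-side model of the queueing scheme in the abstract above: the NIC keeps only small buffer indicators, grouped into per-(priority, connection-type) lists, and pulls the actual transmit data from host memory just before sending. All names and the priority ordering are illustrative.

```python
# Hypothetical TX-buffer-indicator priority lists; scheduling rule is an assumption.
from collections import deque

host_memory = {0x1000: b"http response", 0x2000: b"iscsi pdu"}   # transmit-ready data

# Priority lists keyed by (priority, connection type); entries are buffer indicators.
priority_lists = {
    (0, "storage"): deque([{"addr": 0x2000, "len": 9}]),
    (1, "web"):     deque([{"addr": 0x1000, "len": 13}]),
}

def nic_transmit_one() -> bytes:
    """Pick the highest-priority non-empty list, fetch the data it points to, transmit it."""
    for key in sorted(priority_lists):           # lower priority number first (assumption)
        if priority_lists[key]:
            indicator = priority_lists[key].popleft()
            return host_memory[indicator["addr"]][:indicator["len"]]  # stand-in for the wire
    return b""

print(nic_transmit_one())   # b'iscsi pdu' -- storage list drained before web
print(nic_transmit_one())   # b'http response'
```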
  • Patent number: 8493851
    Abstract: A network device may provide Layer-2 (L2) based tunneling to offload at least a portion of tunneling performed by tunneling gateways. The L2 based tunneling provided by the network device may comprise determining one or more other network devices that may receive traffic packets which may be handled by the tunneling gateways; and communicating at least a portion of the traffic packets to the one or more other network devices directly from the network device, using L2 tunnels established via the network device such that communication of the at least a portion of the one or more traffic packets offloads tunneling by bypassing the one or more tunneling gateways. At least a portion of the L2 based tunnel offloading by the network device may be handled via a network controller. Providing the offloaded tunneling in the network device may be based on a determined traffic type of the traffic packets.
    Type: Grant
    Filed: January 20, 2011
    Date of Patent: July 23, 2013
    Inventor: Kan Frankie Fan
  • Patent number: 8478907
    Abstract: A network interface device for use with a host computer that includes a host processor and a memory, and which is configured to concurrently run a master operating system and at least one virtual operating system. The device includes a bus interface that communicates over a bus with the host processor and the memory, and a network interface, which is coupled to send and receive data packets carrying data over a packet network. A protocol processor is coupled between the bus interface and the network interface so as to convey the data between the network interface and the memory while performing protocol processing on the data packets under instructions from the at least one virtual operating system, while bypassing the master operating system.
    Type: Grant
    Filed: May 3, 2006
    Date of Patent: July 2, 2013
    Assignee: Broadcom Corporation
    Inventors: Eliezer Aloni, Kobby Carmona, Shay Mizrachi, Rafi Shalom, Merav Sicron, Dov Hirshfeld, Amit Oren, Caitlin Bestler, Uri Tal, Uri Elzur, Kan (Frankie) Fan, Scott McDaniel
  • Patent number: 8438321
    Abstract: Certain aspects of a method and system for supporting hardware acceleration for iSCSI read and write operations via a TCP offload engine may comprise pre-registering at least one buffer with hardware. An iSCSI command may be received from an initiator. An initiator task tag value, a data sequence value and/or a buffer offset value of an iSCSI buffer may be compared with the pre-registered buffer. Data may be fetched from the pre-registered buffer based on comparing the initiator task tag value, the data sequence value and/or the buffer offset value of the iSCSI buffer with the pre-registered buffer. The fetched data may be zero copied from the pre-registered buffer to the initiator.
    Type: Grant
    Filed: April 5, 2011
    Date of Patent: May 7, 2013
    Assignee: Broadcom Corporation
    Inventors: Uri Elzur, Kan Frankie Fan, Scott Sterling McDaniel
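A hand-wavy model of the buffer matching in the entry above: the hardware keeps a table of pre-registered buffers keyed by initiator task tag, and serves a read directly from the matching buffer (the zero-copy step) when the tag, data sequence number, and offset all line up. Field names are assumptions for illustration.

```python
# Hypothetical pre-registered-buffer lookup for iSCSI zero copy; not the patented design.
pre_registered = {
    # initiator task tag -> buffer state
    0x10: {"data": b"0123456789abcdef", "expected_seq": 0, "base_offset": 0},
}

def serve_read(task_tag: int, data_seq: int, buffer_offset: int, length: int):
    buf = pre_registered.get(task_tag)
    if buf is None or data_seq != buf["expected_seq"] or buffer_offset < buf["base_offset"]:
        return None   # no match: fall back to the slow (copying) path
    buf["expected_seq"] += 1
    start = buffer_offset - buf["base_offset"]
    return buf["data"][start:start + length]   # returned directly from the registered buffer

print(serve_read(task_tag=0x10, data_seq=0, buffer_offset=4, length=4))   # b'4567'
print(serve_read(task_tag=0x99, data_seq=0, buffer_offset=0, length=4))   # None
```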
  • Patent number: 8417834
    Abstract: Systems and methods that provide a unified infrastructure over Ethernet are provided. In one embodiment, a method of communicating between an Ethernet-based system and a non-Ethernet-based network may include, for example, one or more of the following: generating an Ethernet frame that comprises a proxy payload, a proxy association header and an Ethernet header, the Ethernet header relating to a control proxy element; sending the Ethernet frame over an Ethernet-based network to the control proxy element; generating a non-Ethernet frame that comprises the proxy payload and a proxy header; and sending the non-Ethernet frame over a non-Ethernet-based network.
    Type: Grant
    Filed: December 8, 2004
    Date of Patent: April 9, 2013
    Assignee: Broadcom Corporation
    Inventors: Uri El Zur, Kan Frankie Fan, Scott S. McDaniel, Murali Rajagopal
  • Patent number: 8358664
    Abstract: Provided is a system and method for performing smart offloads between a computer applications module and a network interfacing device within a data communications system. The method includes receiving data requests from the computer applications module and determining whether the received data requests require offloading. The received requests are forwarded along a first data path to the network interfacing device if offloading is required. If offloading is not required, the received data requests are forwarded along a secondary data path to a host protocol stack for processing. Next, the protocol processing is performed and the processed data requests are forwarded to the network interfacing device.
    Type: Grant
    Filed: November 10, 2009
    Date of Patent: January 22, 2013
    Assignee: Broadcom Corporation
    Inventors: Kan Frankie Fan, Scott McDaniel
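A schematic of the two data paths described in the entry above: requests that qualify for offload go straight to the network interfacing device, everything else runs through the host protocol stack first. The offload criterion shown is purely a placeholder.

```python
# Hypothetical first/secondary data-path selection; the offload rule is an assumption.
def host_stack_process(request: dict) -> bytes:
    # Stand-in for full protocol processing in the host stack.
    return b"HDR" + request["payload"]

def nic_send(data: bytes, offloaded: bool) -> str:
    return f"NIC tx {len(data)} bytes (offloaded={offloaded})"

def submit(request: dict) -> str:
    needs_offload = request["proto"] == "tcp" and len(request["payload"]) > 8  # assumed rule
    if needs_offload:
        return nic_send(request["payload"], offloaded=True)        # first data path
    return nic_send(host_stack_process(request), offloaded=False)  # secondary data path

print(submit({"proto": "tcp", "payload": b"x" * 32}))    # offloaded
print(submit({"proto": "udp", "payload": b"ping"}))       # via host protocol stack
```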
  • Patent number: 8321658
    Abstract: Certain aspects of a method for iSCSI boot may include loading boot BIOS code from a host bus adapter or a network interface controller (NIC) by an iSCSI client device. A connection may be established to an iSCSI target by the iSCSI client device after loading the boot BIOS code. The boot BIOS code may be chained to at least one interrupt handler over iSCSI protocol. An operating system may be remotely booted from the iSCSI target by the iSCSI client device based on chaining the interrupt handler. An Internet protocol (IP) address and/or location of the iSCSI target may be received. At least one iSCSI connection may be initiated to the iSCSI target based on chaining at least one interrupt handler. The iSCSI target may be booted in real mode if at least one master boot record is located in the memory.
    Type: Grant
    Filed: June 29, 2010
    Date of Patent: November 27, 2012
    Assignee: Broadcom Corporation
    Inventors: Uri El Zur, Kan Frankie Fan, Murali Rajagopal, Kevin Tran
  • Patent number: 8230090
    Abstract: Systems and methods that provide transmission control protocol (TCP) offloading and uploading are provided. In one example, a multiple stack system may include a software stack and a hardware stack. The software stack may be adapted to process a first set of TCP packet streams. The hardware stack may be adapted to process a second set of TCP packet streams and may be coupled to the software stack. The software stack may be adapted to offload one or more TCP connections to the hardware stack. The hardware stack may be adapted to upload one or more TCP connections to the software stack. The software stack and the hardware stack may process one or more TCP connections concurrently.
    Type: Grant
    Filed: November 18, 2002
    Date of Patent: July 24, 2012
    Assignee: Broadcom Corporation
    Inventors: Kan Frankie Fan, Scott S. McDaniel
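A minimal bookkeeping model of the dual-stack handoff in the entry above: each connection lives in exactly one stack at a time, and offload/upload simply moves its state between the software and hardware stacks. All structure here is illustrative.

```python
# Hypothetical TCP connection offload/upload between software and hardware stacks.
software_stack = {("10.0.0.1", 80): {"seq": 1000}}   # connection key abbreviated for the sketch
hardware_stack = {}

def offload(conn):
    """Software stack hands a connection's TCP state to the hardware stack."""
    hardware_stack[conn] = software_stack.pop(conn)

def upload(conn):
    """Hardware stack returns a connection (e.g. a rare or complex case) to software."""
    software_stack[conn] = hardware_stack.pop(conn)

offload(("10.0.0.1", 80))
print(hardware_stack)   # connection state now processed by the hardware stack
upload(("10.0.0.1", 80))
print(software_stack)   # and back again
```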
  • Patent number: 8180928
    Abstract: Certain embodiments of the invention may be found in a method and system for performing SCSI read operations with a CRC via a TCP offload engine. Aspects of the method may comprise receiving an iSCSI read command from an initiator. Data may be fetched from a buffer based on the received iSCSI read command. The fetched data may be zero copied from the buffer to the initiator and a TCP sequence may be retransmitted to the initiator. A digest value may be calculated, which may be communicated to the initiator. An accumulated digest value stored in a temporary buffer may be utilized to calculate a final digest value, if the buffer is posted. The retransmitted TCP sequence may be processed and the fetched data may be zero copied into an iSCSI buffer, if the buffer is posted. The calculated final digest value may be communicated to the initiator.
    Type: Grant
    Filed: June 17, 2005
    Date of Patent: May 15, 2012
    Assignee: Broadcom Corporation
    Inventors: Uri Elzur, Kan Frankie Fan, Scott McDaniel
  • Publication number: 20110314171
    Abstract: A method for processing of packetized data is disclosed and includes allocating a plurality of partitions of a single context memory for handling data for a corresponding plurality of network protocol connections. Data for at least one of the plurality of network protocol connections may be processed utilizing a corresponding at least one of the plurality of partitions of the single context memory. The at least one of the plurality of partitions of the single context memory may be de-allocated, when the corresponding at least one of the plurality of network protocol connections is terminated. The data for the at least one of the plurality of network protocol connections may be received. The data may be associated with a single network protocol or with a plurality of network protocols. The data for the at least one of the plurality of network protocol connections includes context data.
    Type: Application
    Filed: December 14, 2010
    Publication date: December 22, 2011
    Inventors: Uri El Zur, Steven B. Lindsay, Kan Frankie Fan, Scott S. McDaniel
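A compact model of the single shared context memory described in the entry above: fixed-size partitions are handed out per connection, used while the connection is live, and returned when it terminates. Partition count and names are assumptions.

```python
# Hypothetical per-connection partitioning of a single context memory.
PARTITIONS = 4
free_partitions = list(range(PARTITIONS))
context_memory = [None] * PARTITIONS          # one slot of context data per partition
connection_to_partition = {}

def open_connection(conn_id: str, protocol: str) -> int:
    part = free_partitions.pop()
    context_memory[part] = {"protocol": protocol, "state": "established"}
    connection_to_partition[conn_id] = part
    return part

def close_connection(conn_id: str) -> None:
    part = connection_to_partition.pop(conn_id)
    context_memory[part] = None               # de-allocate the partition for reuse
    free_partitions.append(part)

open_connection("tcp-1", "TCP")
open_connection("iscsi-1", "iSCSI")           # different protocols share the same memory
close_connection("tcp-1")
print(free_partitions)                        # partition from tcp-1 is available again
```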
  • Publication number: 20110307577
    Abstract: A method for processing network data includes collecting by a network interface controller (NIC), a plurality of transmit (TX) buffer indicators into a plurality of priority lists of connections. Each of the plurality of TX buffer indicators identifies transmit-ready data located externally to the NIC and not previously received by the NIC. One or more of the plurality of TX buffer indicators may be selected. The identified transmit-ready data may be retrieved into the NIC based on the selected one or more of the plurality of TX buffer indicators. At least a portion of the identified transmit-ready data may be transmitted. Each of the plurality of priority lists may be generated based on a particular connection priority characteristic and a particular connection type. The identified transmit-ready data may be associated with the same connection priority characteristic and the same connection type.
    Type: Application
    Filed: August 9, 2011
    Publication date: December 15, 2011
    Inventors: Scott McDaniel, Kan Frankie Fan, Uri El Zur