Patents by Inventor Fong Pong

Fong Pong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20080112413
    Abstract: Aspects of a method and system for hash table based routing via table and prefix aggregation are provided. Aspects of the invention may enable aggregating prefixes of varying lengths into a single hash table, wherein each entry in the hash table comprises one or more encoded bits to uniquely identify said prefixes. Additionally, an entry in a hash table may be formatted based on a length of one or more representations of said prefixes in the entry. Aggregating prefixes into a hash table may comprise truncating the prefixes to a common length. In this regard, the encoded bits may indicate the length of the prefixes prior to and/or subsequent to truncation. Additionally, the encoded bits may represent bits removed from the prefix during truncation. In this regard, an encoded bit may represent a possible combination of removed bits and may be asserted when the removed bits are equal to that combination.
    Type: Application
    Filed: July 12, 2007
    Publication date: May 15, 2008
    Inventor: Fong Pong
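The truncation-and-encoded-bits scheme this abstract describes can be illustrated with a small sketch. This is not the patented implementation: the 16-bit common length, the dict-based table, and all function names are assumptions for illustration only.

```python
# Hypothetical model of prefix aggregation by truncation: prefixes longer than
# a common length are truncated, and one encoded bit per possible combination
# of removed bits is asserted when that combination was actually present.

COMMON_LEN = 16  # assumed common prefix length for the single hash table

def aggregate(prefixes, common_len=COMMON_LEN):
    """prefixes: list of (value, length) with length >= common_len.
    Returns {truncated_value: encoded_bits}, where bit i of encoded_bits is
    set when some prefix's removed bits equalled the combination i."""
    table = {}
    for value, length in prefixes:
        removed_n = length - common_len
        truncated = value >> removed_n            # drop the low removed bits
        removed = value & ((1 << removed_n) - 1)  # the bits that were removed
        table[truncated] = table.get(truncated, 0) | (1 << removed)
    return table

def lookup(table, value, length, common_len=COMMON_LEN):
    """True if the prefix (value, length) was aggregated into the table."""
    removed_n = length - common_len
    bits = table.get(value >> removed_n)
    if bits is None:
        return False
    return bool(bits & (1 << (value & ((1 << removed_n) - 1))))
```

In this model the encoded bits preserve the information lost by truncation, so prefixes of several lengths share one table entry without false positives.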
  • Publication number: 20080112412
    Abstract: Aspects of a method and system for hash table based routing via prefix transformation are provided. Aspects of the invention may enable translating one or more network addresses as a coefficient set of a polynomial, and routing data in a network based on a quotient and a remainder derived from the coefficient set. In this regard, the quotient and the remainder may be calculated via modulo 2 division of the polynomial by a primitive generator polynomial. In one example, the remainder may be calculated with the aid of a remainder table. The primitive generator polynomial may be x^16+x^8+x^6+x^5+x^4+x^2+1. Additionally, entries in one or more hash tables may comprise a calculated quotient and may be indexed by a calculated remainder. In this manner, the hash tables may be accessed to determine a longest prefix match for the one or more network addresses. The hash tables may comprise 2^deg(g(x)) sets, where deg(g(x)) is the degree of the primitive generator polynomial.
    Type: Application
    Filed: July 12, 2007
    Publication date: May 15, 2008
    Inventor: Fong Pong
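The modulo-2 (GF(2)) polynomial division the abstract describes can be sketched directly. The generator g(x) = x^16+x^8+x^6+x^5+x^4+x^2+1 is taken from the abstract; the integer bit encoding and function names are illustrative assumptions, not the patented hardware.

```python
G = 0x10175               # bit i set <=> coefficient of x^i in g(x)
DEG = G.bit_length() - 1  # degree of g(x) = 16

def polydiv(addr):
    """Return (quotient, remainder) of addr(x) / g(x) over GF(2), so that
    addr = clmul(quotient, G) XOR remainder, with deg(remainder) < 16."""
    quotient, rem = 0, addr
    while rem.bit_length() - 1 >= DEG:
        shift = rem.bit_length() - 1 - DEG
        quotient |= 1 << shift
        rem ^= G << shift  # modulo-2 subtraction is XOR
    return quotient, rem
```

A routing table built this way would store the quotient as the tag in a hash set selected by the remainder, giving up to 2^16 sets for this degree-16 generator, consistent with the 2^deg(g(x)) sets in the abstract.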
  • Publication number: 20080082759
    Abstract: Methods, systems and computer program products for global address space management are described herein. A System on Chip (SOC) unit configured for a global address space is provided. The SOC includes an on-chip memory, a first controller and a second controller. The first controller is enabled to decode addresses that map to memory locations in the on-chip memory and the second controller is enabled to decode addresses that map to memory locations in an off-chip memory.
    Type: Application
    Filed: September 29, 2006
    Publication date: April 3, 2008
    Applicant: Broadcom Corporation
    Inventor: Fong Pong
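The two-controller address decode in this abstract amounts to routing each access by address range. A minimal sketch, assuming a 1 MiB on-chip window starting at address 0 (the window boundaries and names are illustrative, not from the patent):

```python
ONCHIP_BASE, ONCHIP_SIZE = 0x0000_0000, 0x0010_0000  # assumed 1 MiB on-chip

def decode(addr):
    """Return which controller should service the access: the first
    controller decodes on-chip addresses, the second decodes off-chip."""
    if ONCHIP_BASE <= addr < ONCHIP_BASE + ONCHIP_SIZE:
        return "first_controller"   # maps to on-chip memory
    return "second_controller"      # maps to off-chip memory
```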
  • Publication number: 20080082771
    Abstract: Methods, systems and computer program products to implement hardware memory locks are described herein. A system to implement hardware memory locks is provided. The system comprises an off-chip memory coupled to a SOC unit that includes a controller and an on-chip memory. Upon receiving a request from a requester to access a first memory location in the off-chip memory, the controller is enabled to grant access to modify the first memory location based on an entry stored in a second memory location of the on-chip memory. In an embodiment, the on-chip memory is Static Random Access Memory (SRAM) and the off-chip memory is Random Access Memory (RAM).
    Type: Application
    Filed: September 29, 2006
    Publication date: April 3, 2008
    Applicant: Broadcom Corporation
    Inventor: Fong Pong
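The grant-or-deny behavior in this abstract can be modeled in software: an on-chip entry per lock records the current holder, and a request to modify the off-chip location is granted only if that entry is free. The address-to-entry mapping and class names below are assumptions for the sketch.

```python
class LockController:
    """Hypothetical model of the on-chip lock table (standing in for SRAM):
    each off-chip address maps to an entry recording the lock holder."""
    def __init__(self, n_entries=256):
        self.entries = [None] * n_entries  # models on-chip SRAM entries

    def _slot(self, addr):
        return addr % len(self.entries)    # assumed address-to-entry mapping

    def acquire(self, addr, requester):
        slot = self._slot(addr)
        if self.entries[slot] in (None, requester):
            self.entries[slot] = requester  # grant: record holder on-chip
            return True
        return False                        # deny: held by another requester

    def release(self, addr, requester):
        slot = self._slot(addr)
        if self.entries[slot] == requester:
            self.entries[slot] = None
```

Keeping the lock state on-chip means the grant decision never touches the slower off-chip RAM, which is the apparent point of the design.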
  • Publication number: 20080082758
    Abstract: Methods, systems and computer program products to maintain cache coherency in a System On Chip (SOC) which is part of a distributed shared memory system are described. A local SOC unit that includes a local controller and an on-chip memory is provided. In response to receiving a request from a remote controller of a remote SOC to access a memory location, the local controller determines whether the local SOC has exclusive ownership of the requested memory location, sends data from the memory location if the local SOC has exclusive ownership of the memory location and stores an entry in the on-chip memory that identifies the remote SOC as having requested data from the memory location. The entry specifies whether the request from the remote SOC is for exclusive ownership of the memory location. The entry also includes a field that identifies the remote SOC as the requester. The requested memory location may be external or internal to the local SOC unit.
    Type: Application
    Filed: September 29, 2006
    Publication date: April 3, 2008
    Applicant: Broadcom Corporation
    Inventor: Fong Pong
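The request-handling flow this abstract walks through (check exclusive ownership, supply the data, record the requester and whether exclusivity was asked for) can be sketched as a small directory model. The data structures and names are assumptions, not the patented design.

```python
class DirectoryController:
    """Hedged model of the local controller: on a remote request it checks
    exclusive ownership, returns the data, and logs an on-chip entry naming
    the remote requester and whether exclusivity was requested."""
    def __init__(self):
        self.memory = {}        # models lines the local SOC can supply
        self.exclusive = set()  # lines this SOC owns exclusively
        self.directory = []     # on-chip entries: (addr, requester, excl)

    def handle_request(self, addr, remote_id, want_exclusive):
        if addr not in self.exclusive:
            return None         # not exclusively owned here; cannot supply
        data = self.memory.get(addr)
        self.directory.append((addr, remote_id, want_exclusive))
        if want_exclusive:
            self.exclusive.discard(addr)  # ownership migrates to requester
        return data
```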
  • Publication number: 20080082622
    Abstract: Methods, systems and computer program products to communicate between System On Chip (SOC) units in a cluster configuration are provided herein. A local SOC unit that includes a local controller and a local on-chip memory is provided. In response to receiving a signal from a remote SOC, the local controller is configured to retrieve a message from a remote on-chip memory of the remote SOC and store the message in the local on-chip memory. The local controller is a node controller and the local on-chip memory is a Static Random Access Memory (SRAM). The local SOC and the remote SOC are part of a cluster.
    Type: Application
    Filed: September 29, 2006
    Publication date: April 3, 2008
    Applicant: Broadcom Corporation
    Inventor: Fong Pong
  • Publication number: 20080069116
    Abstract: In one aspect, there is provided a method for use by an edge device for establishing a connection with a server to support a full TCP connection between a client and the edge device. The method comprises establishing a full TCP connection with the server using a full TCP socket, allocating a first light TCP socket for supporting a first light TCP connection with the server, associating a first light session ID with the first light TCP connection, sending a first open session message to the server via the full TCP connection with the server, establishing the first light TCP connection with the server via the full TCP connection, associating first data with the first light session ID, and delivering the first data associated with the first light session ID to the server using the first light TCP connection via the full TCP connection.
    Type: Application
    Filed: September 20, 2006
    Publication date: March 20, 2008
    Inventor: Fong Pong
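The "light" connections multiplexed over one full TCP connection imply some framing that tags each message with a session ID. The wire format below is purely an assumption to make the idea concrete; the patent does not specify it.

```python
import struct

# Hypothetical framing for light sessions over one full TCP connection:
# each frame is kind (1 byte), session ID (2 bytes), length (2 bytes), payload.
OPEN_SESSION, DATA = 1, 2

def encode_frame(kind, session_id, payload=b""):
    return struct.pack("!BHH", kind, session_id, len(payload)) + payload

def decode_frames(buf):
    """Split a byte stream back into (kind, session_id, payload) frames."""
    frames = []
    while buf:
        kind, sid, n = struct.unpack("!BHH", buf[:5])
        frames.append((kind, sid, buf[5:5 + n]))
        buf = buf[5 + n:]
    return frames
```

An edge device would send an OPEN_SESSION frame to establish a light connection, then DATA frames carrying that session's bytes, all over the single full TCP socket.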
  • Publication number: 20080043742
    Abstract: A method to transmit data using a device having a plurality of physical input/output (I/O) interfaces is provided. The method comprises receiving data and determining a topology according to which data is to be transmitted. Data is transmitted in sequential order via a single physical interface for a first topology and in random order via a plurality of physical interfaces for a second topology. A System On Chip (SOC) unit enabled to transmit data via one or more physical interfaces is provided. The SOC comprises a processor and a network interface including multiple physical input/output (I/O) interfaces coupled to the processor. In response to receiving data for transmission, the processor is enabled to select a single I/O interface for sequential data transmission according to a first topology or select multiple physical I/O interfaces for random order data transmission according to a second topology.
    Type: Application
    Filed: May 21, 2007
    Publication date: February 21, 2008
    Applicant: Broadcom Corporation
    Inventors: Fong Pong, Chun Ning
  • Publication number: 20070297334
    Abstract: Aspects of a method and system for network protocol offloading are provided. A path may be established between a host socket and an offloaded socket in a TOE for offloading a TCP connection to the TOE. Offload functions associated with extensions to the host socket may enable TCP offload and IP layer bypass extensions in a network device driver for generating the offload path. In this regard, a flag in the host socket extensions may indicate when connection offloading is to occur. The offload path may be established after the connection is established via a native stack in the host or after a listening socket is offloaded to the TOE for establishing the connection. Data for retransmission for the offloaded connection may be stored in the host or in the TOE. The offloaded connection may be terminated in the TOE or may be migrated to the host for termination.
    Type: Application
    Filed: June 21, 2006
    Publication date: December 27, 2007
    Inventor: Fong Pong
  • Publication number: 20070260719
    Abstract: The invention relates to insertion and removal of MPA markers and RDMA CRCs in RDMA data streams, after determining the locations for these fields. An embodiment of the invention comprises a host interface, a transmit interface connected to the host interface, and a processor interface connected to both transmit and host interfaces. The host interface operates under the direction of commands received from the processor interface when processing inbound RDMA data. The host interface calculates the marker locations and removes the markers. The transmit interface operates under the direction of commands received from the processor interface when processing outbound RDMA data. The transmit interface calculates the positions in the outbound data where markers are to be inserted. The transmit interface then places the markers accordingly.
    Type: Application
    Filed: May 2, 2006
    Publication date: November 8, 2007
    Applicant: Broadcom Corporation
    Inventor: Fong Pong
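The marker-location arithmetic can be sketched. Per the MPA framing specification (RFC 5044), markers recur every 512 octets of the TCP stream and are 4 octets wide; the sketch below simplifies by measuring offsets in the received payload rather than the full sequence-number space, so it is an approximation of what the host interface computes, not the patented logic.

```python
MARKER_INTERVAL = 512  # MPA places a marker every 512 octets (RFC 5044)
MARKER_LEN = 4         # each marker field is 4 octets

def marker_positions(stream_offset, length):
    """Offsets within this segment where markers fall, given the segment's
    byte offset in the stream (simplified payload-relative model)."""
    positions = []
    next_mark = (-stream_offset) % MARKER_INTERVAL
    while next_mark < length:
        positions.append(next_mark)
        next_mark += MARKER_INTERVAL
    return positions

def strip_markers(segment, stream_offset):
    """Remove marker fields from inbound data, as the host interface does."""
    out, prev = b"", 0
    for pos in marker_positions(stream_offset, len(segment)):
        out += segment[prev:pos]
        prev = pos + MARKER_LEN
    return out + segment[prev:]
```

The transmit interface would run the same position calculation in reverse, splicing markers into the outbound stream at each computed offset.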
  • Publication number: 20070239938
    Abstract: A memory system is provided comprising a memory controller, a level 1 (L1) cache including L1 tag memory and L1 data memory, a level 2 (L2) cache coupled to the L1 cache, the L2 cache including L2 tag memory having a plurality of L2 tag entries and a L2 data memory having a plurality of L2 data entries. There are more L2 tag entries than L2 data entries. In response to receiving a tag and an associated data, if L2 tag entries having corresponding L2 data entries are unavailable and if a first tag in a first L2 tag entry with an associated first data in a first L2 data entry has a more recent or duplicate value of the first data in the L1 data memory, the memory controller moves the first tag to a second L2 tag entry that does not have a corresponding L2 data entry, vacates the first L2 tag entry and the first L2 data entry and stores the received tag in the first L2 tag entry and the received data in the first L2 data entry.
    Type: Application
    Filed: April 7, 2006
    Publication date: October 11, 2007
    Applicant: Broadcom Corporation
    Inventor: Fong Pong
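The eviction rule in the long final sentence of this abstract is easier to see in code: when all data entries are busy and a victim line's data is duplicated in L1, the victim keeps only a tag-only L2 entry and donates its data slot to the incoming line. This is a simplified model with assumed names and sizes, not the patented controller.

```python
class TagExtendedL2:
    """Simplified model of an L2 with more tag entries than data entries."""
    def __init__(self, data_slots=2, extra_tags=2):
        self.data = {}         # tag -> data (at most data_slots entries)
        self.tag_only = set()  # tags tracked without an L2 data entry
        self.data_slots = data_slots
        self.extra_tags = extra_tags

    def insert(self, tag, data, l1_tags):
        """l1_tags: tags whose data is duplicated in the L1 data memory."""
        if len(self.data) < self.data_slots:
            self.data[tag] = data
            return True
        for victim in list(self.data):
            if victim in l1_tags and len(self.tag_only) < self.extra_tags:
                del self.data[victim]      # vacate the victim's data entry
                self.tag_only.add(victim)  # victim keeps a tag-only entry
                self.data[tag] = data      # incoming line takes the slot
                return True
        return False  # no safely evictable victim available
```

The payoff of the extra tag entries is that the cache keeps tracking lines whose data is safely duplicated in L1, without spending L2 data storage on them.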
  • Patent number: 7274706
    Abstract: Methods and systems for processing data communicated over a network. In one aspect, an exemplary embodiment includes processing a first group of network packets in a first processor which executes a first network protocol stack, where the first group of network packets are communicated through a first network interface port, and processing a second group of network packets in a second processor which executes a second network protocol stack, where the second group of network packets is communicated through the first network interface port. Other methods and systems are also described.
    Type: Grant
    Filed: April 24, 2001
    Date of Patent: September 25, 2007
    Assignee: Syrus Ziai
    Inventors: Tung Nguyen, Fong Pong, Paul Jordan, Syrus Ziai, Al Chang, Greg Grohoski
  • Publication number: 20070180176
    Abstract: A system includes a first bus segment and a second bus segment. The first bus segment is operatively coupled to one or more first bus agents, where the first bus agents are configured for writing messages to the first bus segment and reading messages from the first bus segment, and the second bus segment, which is separate from the first bus segment, is operatively coupled to one or more second bus agents. The system also includes first electrical circuitry operably coupled to the first bus segment and the second bus segment and configured to read messages written on the first bus segment and to write the messages onto the second bus segment, and second electrical circuitry operably coupled to the first bus segment and the second bus segment and configured to read messages written on the second bus segment and to write the messages onto the first bus segment.
    Type: Application
    Filed: January 31, 2006
    Publication date: August 2, 2007
    Inventors: Fong Pong, Lief O'Donnell
  • Publication number: 20070121659
    Abstract: Managing data traffic among three or more bus agents configured in a topological ring includes numbering each bus agent sequentially and injecting messages that include a binary polarity value from the bus agents into the ring in a sequential order according to the numbering of the bus agents during cycles of bus agent activity. Messages from the ring are received into two or more receive buffers of a receiving bus agent, and the value of the binary polarity value is alternated after succeeding cycles of bus ring activity. The received messages are ordered for processing by the receiving bus agent based on the polarity value of the messages and a time at which each message was received.
    Type: Application
    Filed: November 30, 2005
    Publication date: May 31, 2007
    Inventor: Fong Pong
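The reordering rule (polarity first, then arrival time) can be sketched as a function over received messages. This assumes at most two cycles are in flight at once (matching the two receive buffers) and that the first-arriving message carries the older cycle's polarity; both are simplifications of the patented scheme.

```python
def order_messages(received):
    """received: list of (polarity, arrival_time, payload) in arrival order.
    Replays messages of the older cycle's polarity first, then the newer
    cycle's, each group in arrival order."""
    if not received:
        return []
    older = received[0][0]  # assumed: first arrival is from the older cycle
    by_time = lambda m: m[1]
    first = sorted((m for m in received if m[0] == older), key=by_time)
    rest = sorted((m for m in received if m[0] != older), key=by_time)
    return [payload for _, _, payload in first + rest]
```

Because the polarity bit flips each cycle, a single bit suffices to tell a late-arriving old-cycle message from an early new-cycle one, which is what makes the two-buffer scheme work.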
  • Publication number: 20060277365
    Abstract: Aspects of a method and system for an on-chip configurable data RAM for fast memory and pseudo associative caches are provided. Memory banks of configurable data RAM integrated within a chip may be configured to operate as fast on-chip memory or on-chip level 2 cache memory. The set associativity of the on-chip level 2 cache memory may be the same after configuring the memory banks as prior to the configuring. The configuring may occur during initialization of the memory banks, and may adjust the amount of on-chip level 2 cache. The memory banks configured to operate as on-chip level 2 cache memory or as fast on-chip memory may be dynamically enabled by a memory address.
    Type: Application
    Filed: September 16, 2005
    Publication date: December 7, 2006
    Inventor: Fong Pong
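The bank-configuration idea can be modeled as a role assigned per bank at initialization, with accesses routed by address. The base address, bank size, and names below are assumptions for the sketch, not the patented layout.

```python
class ConfigurableBanks:
    """Sketch: each bank serves either as L2 cache or as directly addressed
    fast on-chip memory, with the role fixed at initialization."""
    def __init__(self, n_banks=4, bank_size=0x1000, scratch_base=0x8000_0000):
        self.roles = ["cache"] * n_banks  # per-bank role
        self.bank_size = bank_size
        self.scratch_base = scratch_base  # assumed fast-memory window base

    def configure(self, bank, role):
        assert role in ("cache", "fast_memory")
        self.roles[bank] = role

    def route(self, addr):
        """('fast_memory', bank) if addr falls in an enabled fast-memory
        bank's window, else ('cache', None) for normal cached access."""
        off = addr - self.scratch_base
        bank = off // self.bank_size
        if 0 <= bank < len(self.roles) and self.roles[bank] == "fast_memory":
            return ("fast_memory", bank)
        return ("cache", None)
```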
  • Publication number: 20060274788
    Abstract: Certain aspects of a method and system for a system-on-a-chip (SoC) device with integrated support for Ethernet, TCP, iSCSI, RDMA, and network application acceleration are provided. Aspects of the method may include storing on a multifunction host bus adapter (MHBA) chip that handles a plurality of protocols, at least a portion of received data for at least one of a plurality of network connections. The MHBA chip may be configured for handling the received data based on one of the plurality of protocols that is associated with the received data. The received data for the at least one of the plurality of network connections may be processed within the MHBA chip. The one of the plurality of protocols may include an Ethernet protocol, a transmission control protocol (TCP), an Internet protocol (IP), an Internet small computer system interface (iSCSI) protocol, and/or a remote direct memory access (RDMA) protocol.
    Type: Application
    Filed: September 16, 2005
    Publication date: December 7, 2006
    Inventor: Fong Pong
  • Publication number: 20060274789
    Abstract: Certain embodiments of the invention may be found in a method for a high performance hardware network protocol processing engine. The method may comprise processing TCP packets via a plurality of pipelined hardware stages on a single network chip. Headers of received TCP packets may be parsed, and Ethernet frame CRC digests, IP checksums and TCP checksums may be validated, at a first stage of the parallel, pipelined hardware stages. IP addresses of the TCP packets that are received may also be validated at the first stage. TCB index of the TCP packets that are received may be looked up at a second stage. TCB data for TCP packets may be looked up at a third stage and receive processing of the TCP packets may be performed at a fourth stage. A fifth stage may initiate transfer of the processed TCP packets that are received to an application layer.
    Type: Application
    Filed: September 16, 2005
    Publication date: December 7, 2006
    Inventor: Fong Pong
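The five receive stages enumerated in the abstract map naturally onto a staged function pipeline. The sketch below stubs out the per-stage work (real hardware validates CRCs and checksums, updates TCP state, etc.); the stage names and packet fields are assumptions.

```python
def stage1_parse_validate(pkt):
    pkt["parsed"] = True  # parse headers; validate CRC, IP/TCP checksums, IPs
    return pkt

def stage2_tcb_index(pkt):
    pkt["tcb_index"] = hash((pkt["src"], pkt["dst"])) % 1024  # look up index
    return pkt

def stage3_tcb_fetch(pkt, tcb_table):
    pkt["tcb"] = tcb_table.get(pkt["tcb_index"], {})  # look up TCB data
    return pkt

def stage4_receive_processing(pkt):
    pkt["processed"] = True  # TCP receive-side processing
    return pkt

def stage5_deliver(pkt, app_queue):
    app_queue.append(pkt)    # initiate transfer to the application layer
    return pkt

def run_pipeline(pkt, tcb_table, app_queue):
    pkt = stage1_parse_validate(pkt)
    pkt = stage2_tcb_index(pkt)
    pkt = stage3_tcb_fetch(pkt, tcb_table)
    pkt = stage4_receive_processing(pkt)
    return stage5_deliver(pkt, app_queue)
```

In hardware the stages run concurrently on different packets, so throughput is set by the slowest stage rather than the sum of all five.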
  • Publication number: 20060277352
    Abstract: Aspects of a method and system for supporting large caches with split and canonicalization tags are presented. One aspect of the system may comprise a processor that generates a canonicalization tag based on at least a current portion of a tag field of a physical address. A tag cache line may be retrieved based on a set field of the physical address. The processor may compare the canonicalization tag and at least a portion of the retrieved tag cache line. Based on the comparison between the canonicalization tag and at least a portion of the retrieved tag cache line, the processor may retrieve a data cache line from cache memory.
    Type: Application
    Filed: September 16, 2005
    Publication date: December 7, 2006
    Inventor: Fong Pong
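The split between a set field and a compressed canonicalization tag can be sketched as follows. The bit widths and the XOR-folding hash are illustrative choices only; the patent does not disclose the actual canonicalization function.

```python
SET_BITS, CANON_BITS = 8, 12  # assumed widths for the sketch

def split_address(paddr, line_bits=6):
    """Split a physical address into (set_field, tag_field)."""
    index = paddr >> line_bits
    set_field = index & ((1 << SET_BITS) - 1)
    tag_field = index >> SET_BITS
    return set_field, tag_field

def canonical_tag(tag_field):
    """Fold the wide tag down to CANON_BITS by XOR (illustrative choice)."""
    t, out = tag_field, 0
    while t:
        out ^= t & ((1 << CANON_BITS) - 1)
        t >>= CANON_BITS
    return out

def cache_lookup(tag_cache, paddr):
    """Hit iff the short tag stored for this set matches the address's
    canonicalization tag; only then is the data line fetched."""
    s, tag = split_address(paddr)
    return tag_cache.get(s) == canonical_tag(tag)
```

Storing the short canonical tag instead of the full tag is what lets the tag memory cover a much larger cache, at the cost of rare aliasing that a full-tag check would then resolve.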
  • Publication number: 20060274787
    Abstract: Certain aspects of a method and system for an adaptive cache for memory protection table (MPT), memory translation table (MTT) and TCP context are provided. At least one of a plurality of on-chip cache banks integrated within a multifunction host bus adapter (MHBA) chip may be allocated for storing active connection context for any of a plurality of communication protocols. The MHBA chip may handle a plurality of protocols, such as an Ethernet protocol, a transmission control protocol (TCP), an Internet protocol (IP), Internet small computer system interface (iSCSI) protocol, and a remote direct memory access (RDMA) protocol. The active connection context may be stored within the allocated at least one of the plurality of on-chip cache banks integrated within the multifunction host bus adapter chip, based on a corresponding one of the plurality of communication protocols associated with the active connection context.
    Type: Application
    Filed: September 16, 2005
    Publication date: December 7, 2006
    Inventor: Fong Pong
  • Publication number: 20060274742
    Abstract: Certain embodiments of the invention may be found in a method and system for an adaptive cache for caching context and for adapting to collisions in a session lookup table. A network processor chip may comprise an on-chip cache that stores transport control blocks (TCB) from a TCB array in external memory to reduce latency in active transmission control protocol/Internet protocol (TCP/IP) sessions. The on-chip cache may comprise a tag portion implemented using a content addressable memory (CAM) and a data portion implemented using a random access memory (RAM). When a session collision occurs, the context of a subsequent network connection may be stored in a data overflow portion of an overflow table in the on-chip cache. A search key associated with the subsequent network connection that comprises network connection parameters may be stored in a tag overflow portion of the overflow table.
    Type: Application
    Filed: September 16, 2005
    Publication date: December 7, 2006
    Inventor: Fong Pong
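The collision-handling behavior can be modeled with a small cache plus overflow table. A Python dict stands in for the CAM/RAM pair and the overflow structure; the set count and key format are assumptions for the sketch.

```python
class SessionCache:
    """Model of the session cache: a direct-mapped tag/data pair (standing in
    for the CAM tag portion and RAM data portion) with an overflow table that
    absorbs colliding connections."""
    def __init__(self, n_sets=16):
        self.tags = [None] * n_sets  # CAM-like tag portion (search keys)
        self.data = [None] * n_sets  # RAM data portion (TCB context)
        self.overflow = {}           # tag-overflow -> data-overflow entries

    def _index(self, key):
        return hash(key) % len(self.tags)

    def insert(self, key, tcb):
        i = self._index(key)
        if self.tags[i] in (None, key):
            self.tags[i], self.data[i] = key, tcb
        else:
            self.overflow[key] = tcb  # collision: park in the overflow table

    def lookup(self, key):
        i = self._index(key)
        if self.tags[i] == key:
            return self.data[i]
        return self.overflow.get(key)  # fall back to the overflow table
```

The overflow table means a hash collision costs an extra lookup instead of evicting the context of another active session.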