DATA BUFFER PROCESSING METHOD AND DATA BUFFER PROCESSING SYSTEM FOR 4R4W FULLY-SHARED PACKET

The present invention discloses a data buffer processing method and system for a 4R4W fully-shared packet. The method comprises: assembling two 2R1W memories into one Bank memory unit; forming the hardware architecture of a 4R4W memory based on four Bank memory units; under one clock cycle, when data is written into the 4R4W memory, if the size of the data is less than or equal to the bit width of the 2R1W memory, writing the data into different Banks respectively, and copying the written data and writing the copied data into the two 2R1W memories of each Bank respectively; and if the size of the data is greater than the bit width of the 2R1W memory, waiting for a second clock cycle, and writing the data into different Banks respectively, and writing the high and low bits of each piece of written data into the two 2R1W memories of each Bank memory unit respectively.

Description

The present application claims the priority of Chinese Patent Application No. 201610605130.7, filed to the State Intellectual Property Office on Jul. 28, 2016, and entitled “Data Buffer Processing Method and Data Buffer Processing System for 4R4W Fully-Shared Packet”, the content of which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

The present invention relates to the field of network communication technologies, and more particularly, to a data buffer processing method and a data buffer processing system for a 4R4W fully-shared packet.

BACKGROUND

When an Ethernet switch chip is designed, it is usually necessary to use a large-capacity multi-port memory, such as a 2-read and 1-write (supporting 2 read ports and 1 write port simultaneously) memory, a 1-read and 2-write memory, a 2-read and 2-write memory or a memory with more ports.

Usually, suppliers only provide 1-read-or-1-write memories, 1-read-and-1-write memories, and 2-read-or-2-write memories. Thus, the designer can only construct a multi-port memory based on the basic memory units described above.

The packet buffer is a special type of multi-port memory whose writing is controllable, that is, sequential, but whose reading is random. In one typical user requirement, for an Ethernet switch chip with a uni-directional switching capacity of 2.4 Tbps, in order to achieve line-rate writing and reading, each minimal packet (64 bytes) is allotted only 280 ps, which requires a core frequency as high as 3.571 GHz. Such a requirement is currently not achievable with existing semiconductor processes. In order to achieve the above objective, the usual method is to divide the entire chip into multiple independent packet forwarding and processing units, called Slices, for parallel processing. For example, if four Slices are obtained after division for parallel processing, the data bandwidth that each Slice needs to process is reduced, and the requirement on the core frequency is also reduced to ¼ of the original core frequency. Correspondingly, in the implementation of this solution, the packet buffer needs to provide eight ports for the four Slices to access at the same time, four of which are read ports and four of which are write ports.
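As a rough check of the figures above, the following sketch (illustrative Python, not part of the original design; it assumes the usual 20 bytes of preamble and inter-frame gap on top of a 64-byte minimal packet) reproduces the 280 ps per-packet budget, the 3.571 GHz core frequency, and the roughly fourfold relaxation obtained with four Slices:

# Line-rate arithmetic for a 2.4 Tbps uni-directional switching capacity.
CAPACITY_BPS = 2.4e12          # uni-directional switching capacity
WIRE_BYTES = 64 + 8 + 12       # minimal packet + preamble + inter-frame gap

packet_time_s = WIRE_BYTES * 8 / CAPACITY_BPS   # about 280e-12 s per packet
core_freq_hz = 1 / packet_time_s                # about 3.571e9 Hz
per_slice_hz = core_freq_hz / 4                 # about 0.893e9 Hz with 4 Slices

print(f"{packet_time_s * 1e12:.0f} ps per packet, "
      f"{core_freq_hz / 1e9:.3f} GHz overall, "
      f"{per_slice_hz / 1e6:.1f} MHz per Slice")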

In general, starting from SRAMs whose port type is 1-read-or-1-write, 2-read-or-2-write, or 1-write-or-2-read, the number of ports of the SRAM is increased either by customized design, for example by modifying the memory cell, or by algorithm design.

The customized design cycle is generally long, as SPICE simulation is required, and a memory compiler is also needed to generate SRAMs of different sizes and types. For suppliers, it usually takes six to nine months to provide a new type of SRAM, and such a customized design is strongly tied to the specific process (such as 14 nm and 28 nm of GlobalFoundries or 28 nm and 16 nm of TSMC). Once the process changes, the customized SRAM library needs to be redesigned.

The algorithm design is based on the off-the-shelf SRAM types provided by the suppliers, and the multi-port memory is realized by algorithms. Its greatest advantage is that it avoids customized design and shortens the design time. At the same time, the design is not tied to a particular technology library and can easily be ported between different technology libraries.

FIG. 1 shows a 4R4W memory architecture supporting access by four Slices, obtained by algorithm design. In this design, a large-capacity 2R2W SRAM is built from 1R1W SRAM2Ds, which logically requires four 65536-depth 2304-width SRAM2Ds. Since the capacity of a single physical SRAM2D cannot meet the above requirement, one 65536-depth 2304-width logical SRAM needs to be divided into multiple physical SRAMs. For example, thirty-two 16384-depth 288-width physical blocks can be obtained after division. In this way, a total of 32×4=128 physical blocks is required. With the above 2R2W SRAM as a basic unit, a 4R4W SRAM with a size of 18M bytes is constructed.

As shown in FIG. 2, a total of four 65536-depth 2304-width 2R2W SRAMs is logically required; that is, the number of required SRAM2D (16384-depth, 288-width) physical blocks is 512. It is known from existing data that under the 14 nm technological condition, the size of one 16384-depth 288-width SRAM2D physical block is 0.4165 square centimeters, and the power consumption is 0.108 Watts (under the fastest technological conditions, with a core voltage of 0.9 V and a junction temperature of 125 DEG C.). Although the above method of constructing an SRAM with more ports by copying the basic SRAM unit provided by the technology library into multiple copies is straightforward in design principle, the area overhead is very large. Taking the above solution as an example, the 18M-byte 4R4W SRAM alone occupies an area of 213.248 square centimeters, and the total power consumption is 55.296 Watts, and the overhead of inserting Decap and DFT as well as placing and routing has not been considered here yet. The 4R4W SRAM obtained by such algorithm design occupies a huge area and has huge total power consumption.
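The figures quoted above follow from simple multiplication; the short sketch below (illustrative Python, using the per-block area and power quoted in the text) reproduces the 512-block count, the 213.248 area figure and the 55.296 Watts figure:

# Resource count for the first algorithm design (FIG. 1 and FIG. 2).
blocks_per_logical = (65536 * 2304) // (16384 * 288)   # 32 physical SRAM2Ds
blocks_per_2r2w = 4 * blocks_per_logical                # 128 blocks per 2R2W SRAM
blocks_total = 4 * blocks_per_2r2w                       # 512 blocks for the 4R4W SRAM

total_area = blocks_total * 0.4165    # 213.248, in the area units quoted above
total_power = blocks_total * 0.108    # 55.296 Watts
print(blocks_total, total_area, total_power)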

As shown in FIG. 3, in the prior art, another algorithm design method uses the 2R2W SRAM as a basic unit to implement the packet buffer of the 4R4W SRAM by spatial division. Each X?Y? is a 2R2W SRAM logic block with a size of 4.5M bytes. There are four such SRAM logic blocks in total, which together form the 4R4W SRAM with a size of 18M bytes (4.5M×4=18M).

S0, S1, S2, and S3 represent four Slices. Each Slice comprises, for example, six 100GE ports. A packet input from slice0 or slice1 to slice0 or slice1 is stored into X0Y0. A packet input from slice0 or slice1 to slice2 or slice3 is stored into X1Y0. A packet input from slice2 or slice3 to slice0 or slice1 is stored into X0Y1. A packet input from slice2 or slice3 to slice2 or slice3 is stored into X1Y1. For a multicast packet from slice0 or slice1, the packet is simultaneously stored into X0Y0 and X1Y0. Further, when the packet is read, slice0 or slice1 reads the packet from X0Y0 or X0Y1, and slice2 or slice3 reads the packet from X1Y0 or X1Y1.
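The spatial-division rule above can be summarized by a small selection function, as in the sketch below (illustrative Python; the function name is hypothetical). It assumes that X selects the destination half (slice0/slice1 versus slice2/slice3) and Y selects the source half, consistent with the mapping just described:

def block_for(src_slice: int, dst_slice: int) -> str:
    """Return the X?Y? block that stores a packet from src_slice to dst_slice."""
    x = 0 if dst_slice in (0, 1) else 1   # destination half selects X
    y = 0 if src_slice in (0, 1) else 1   # source half selects Y
    return f"X{x}Y{y}"

assert block_for(0, 1) == "X0Y0"   # slice0 -> slice1
assert block_for(1, 3) == "X1Y0"   # slice1 -> slice3
assert block_for(3, 0) == "X0Y1"   # slice3 -> slice0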

FIG. 4 shows an architecture diagram of each X?Y? in the above prior-art algorithm design. One X?Y? logically requires four 16384-depth 2304-width SRAMs, and each logical 16384-depth 2304-width SRAM can be cut into eight 16384-depth 288-width physical SRAM2Ds. Under a 14 nm integrated circuit technology, such a packet buffer of 18M bytes requires a total of 4×4×8=128 16384-depth 288-width physical SRAM2Ds. The total area is 51.312 square centimeters, and the total power consumption is 13.824 Watts (under the fastest technological conditions, with a core voltage of 0.9 V and a junction temperature of 125 DEG C.).

The area and power consumption overhead of the above second algorithm design is only ¼ of that of the first algorithm design described above. However, this algorithm design cannot make the four 2R2W SRAM logic blocks shared among all four Slices. The maximal packet buffer that each Slice input port can occupy is only 9M bytes, so such a packet buffer is not a shared buffer in the true sense.

SUMMARY

In order to solve the above technical problem, an objective of the present invention is to provide a data buffer processing method and a data buffer processing system for a 4R4W fully-shared packet.

In order to realize one of the objectives of the above invention, an embodiment of the present invention provides a data buffer processing method for a 4R4W fully-shared packet, wherein the method comprises: assembling two 2R1W memories in parallel into one Bank memory unit; forming the hardware architecture of a 4R4W memory based on four Bank memory units directly; under one clock cycle, when data is written into the 4R4W memory by four write ports, if the size of the data is less than or equal to the bit width of the 2R1W memory, writing the data into different Banks respectively, and meanwhile, copying the written data and writing the copied data into the two 2R1W memories of each Bank respectively; and if the size of the data is greater than the bit width of the 2R1W memory, waiting for a second clock cycle, and when the second clock cycle comes, writing the data into different Banks respectively, and meanwhile, writing the high and low bits of each piece of written data into the two 2R1W memories of each Bank memory unit respectively.

As an improvement on the embodiment of the present invention, the method further comprises: under one clock cycle, when the data is read from the 4R4W memory, if the size of the data is less than or equal to the bit width of the 2R1W memory, selecting a matched read port in the 4R4W memory to directly read the data; and if the size of the data is greater than the bit width of the 2R1W memory, waiting for the second clock cycle, and when the second clock cycle comes, selecting a matched read port in the 4R4W memory to directly read the data.

As a further improvement on the embodiment of the present invention, the method further comprises: selecting a writing position of the data according to the remaining free resource of each Bank when the data is written into the 4R4W memory.

As a further improvement on the embodiment of the present invention, the method specifically comprises: correspondingly creating a free buffer resource pool for each Bank, the free buffer resource pool being used to store remaining free pointers of the current corresponding Bank, and when the data sends a request of being written into the 4R4W memory, comparing the depths of respective free buffer resource pools; if there exists one free buffer resource pool with the maximum depth, directly writing the data into the Bank corresponding to the free buffer resource pool with the maximum depth; and if there exist two or more free buffer resource pools with the same maximum depth, randomly writing the data into the Bank corresponding to one of the free buffer resource pools with the maximum depth.

As a further improvement on the embodiment of the present invention, the method further comprises: according to the depth and width of the 2R1W memory, selecting 2m+1 SRAM2P memories having the same depth and width to construct a hardware architecture of the 2R1W memory, m being a positive integer, wherein each SRAM2P memory has M pointer addresses, one of the plurality of SRAM2P memories is an auxiliary memory, and the rest SRAM2P memories are main memories; and when the data is written into and/or read from the 2R1W memory, associating the data in the main memories and the data in the auxiliary memory according to a current pointer position of the data, and performing XOR operation on the associated data to complete the writing and reading of the data.

In order to realize one of the above objectives of the present invention, an embodiment of the present invention provides a data buffer processing system for a 4R4W fully-shared packet, wherein the system comprises: a data constructing module and a data processing module.

The data constructing module is configured to: assemble two 2R1W memories in parallel into one Bank memory unit; and form the hardware architecture of a 4R4W memory based on four Bank memory units directly.

The data processing module is configured to: when determining that under one clock cycle, data is written into the 4R4W memory by four write ports, if the size of the data is less than or equal to the bit width of the 2R1W memory, write the data into different Banks respectively, and meanwhile, copy the written data and write the copied data into the two 2R1W memories of each Bank respectively; and if the size of the data is greater than the bit width of the 2R1W memory, wait for a second clock cycle, and when the second clock cycle comes, write the data into different Banks respectively, and meanwhile, write the high and low bits of each piece of written data into the two 2R1W memories of each Bank memory unit respectively.

As an improvement on the embodiment of the present invention, the data processing module is further configured to: when determining that under one clock cycle, the data is read from the 4R4W memory, if the size of the data is less than or equal to the bit width of the 2R1W memory, select a matched read port in the 4R4W memory to directly read the data; and if the size of the data is greater than the bit width of the 2R1W memory, wait for the second clock cycle, and when the second clock cycle comes, select a matched read port in the 4R4W memory to directly read the data.

As a further improvement on the embodiment of the present invention, the data processing module is further configured to select a writing position of the data according to the remaining free resource of each Bank when determining that the data is written into the 4R4W memory.

As a further improvement on the embodiment of the present invention, the data processing module is further configured to: correspondingly create a free buffer resource pool for each Bank, the free buffer resource pool being used to store remaining free pointers of the current corresponding Bank, and when the data sends a request of being written into the 4R4W memory, compare the depths of respective free buffer resource pools; if there exists one free buffer resource pool with the maximum depth, directly write the data into the Bank corresponding to the free buffer resource pool with the maximum depth; and if there exist two or more free buffer resource pools with the same maximum depth, randomly write the data into the Bank corresponding to one of the free buffer resource pools with the maximum depth.

As a further improvement on the embodiment of the present invention, the data constructing module is further configured to: according to the depth and width of the 2R1W memory, select 2m+1 SRAM2P memories having the same depth and width to construct a hardware architecture of the 2R1W memory, m being a positive integer.

Each SRAM2P memory has M pointer addresses, one of the plurality of SRAM2P memories is an auxiliary memory, and the rest SRAM2P memories are main memories.

When the data is written into and/or read from the 2R1W memory, the data processing module is further configured to associate the data in the main memories and the data in the auxiliary memory according to a current pointer position of the data, and perform XOR operation on the associated data to complete the writing and reading of the data.

Compared with the prior art, according to the data buffer processing method and data buffer processing system for a 4R4W fully-shared packet of the present invention, an SRAM with more ports is constructed by algorithms based on existing types of SRAMs, and the multi-port SRAM is supported to the greatest extent at only a minimal cost. In the implementation process, complex control logic and additional multi-port SRAMs or register array resources are avoided. By using the uniqueness of the packet buffer and by spatial division and time division, the 4R4W packet buffer can be realized by only a simple XOR operation. Meanwhile, all memory resources of the 4R4W memory according to the present invention are visible to the four Slices or any one input/output port, and all memory resources are completely shared between any ports. The present invention has lower power consumption and a faster processing speed, saves more resources or area, and is simple to implement. Manpower and material costs are saved.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of a packet buffer logic unit of a 2R2W memory implemented by algorithm design based on a 1R1W memory in the prior art.

FIG. 2 is a schematic diagram of a packet buffer logic unit of a 4R4W memory implemented by algorithm customized design based on a 2R2W memory in the prior art.

FIG. 3 is a schematic diagram of a packet buffer architecture of a 4R4W memory implemented by another algorithm design based on a 2R2W memory in the prior art.

FIG. 4 is a schematic diagram of a packet buffer logic unit of one of X?Y? in FIG. 3.

FIG. 5 is a schematic flowchart of the data buffer processing method for a 4R4W fully-shared packet according to one embodiment of the present invention.

FIG. 6 is a schematic diagram of a digital circuit structure of a 2R1W memory formed by customized design according to a first embodiment of the present invention.

FIG. 7 is a schematic diagram of read-write time-sharing operation of a 2R1W memory formed by customized design according to a second embodiment of the present invention.

FIG. 8 is a schematic diagram of a packet buffer logic unit of a 2R1W memory formed by algorithm design according to a third embodiment of the present invention.

FIG. 9a is a schematic diagram of a packet buffer logic unit of a 2R1W memory formed by algorithm design according to a fourth embodiment of the present invention.

FIG. 9b is a structural schematic diagram of a memory block number mapping table corresponding to FIG. 9a.

FIG. 10 is a schematic flowchart of a data processing method for a 2R1W memory provided by a fifth embodiment of the present invention.

FIG. 11 is a schematic diagram of a packet buffer logic unit of a 2R1W memory provided in the fifth embodiment of the present invention.

FIG. 12 is a schematic diagram of a packet buffer architecture of four Banks according to a specific embodiment of the present invention.

FIG. 13 is a schematic diagram of a packet buffer architecture of a 4R4W memory according to a specific embodiment of the present invention.

FIG. 14 is a module schematic diagram of a data buffer processing system for a 4R4W fully shared packet provided by an embodiment of the present invention.

DETAILED DESCRIPTION

The present invention will be described in detail below in conjunction with the respective embodiments shown in the accompanying drawings. However, these embodiments are not intended to limit the invention, and structural, methodological, or functional changes made by those of ordinary skill in the art in accordance with these embodiments are included in the protective scope of the present invention.

FIG. 5 shows a data buffer processing method for a 4R4W fully-shared packet according to one embodiment of the present invention. The method comprises: assembling two 2R1W memories in parallel into one Bank memory unit; forming the hardware architecture of a 4R4W memory based on four Bank memory units directly; under one clock cycle, when data is written into the 4R4W memory by four write ports, if the size of the data is less than or equal to the bit width of the 2R1W memory, writing the data into different Banks respectively, and meanwhile, copying the written data and writing the copied data into the two 2R1W memories of each Bank respectively; and if the size of the data is greater than the bit width of the 2R1W memory, waiting for a second clock cycle, and when the second clock cycle comes, writing the data into different Banks respectively, and meanwhile, writing the high and low bits of each piece of written data into the two 2R1W memories of each Bank memory unit respectively.

Under one clock cycle, when the data is read from the 4R4W memory, if the size of the data is less than or equal to the bit width of the 2R1W memory, a matched read port in the 4R4W memory is selected to directly read the data. If the size of the data is greater than the bit width of the 2R1W memory, wait for the second clock cycle, and when the second clock cycle comes, a matched read port in the 4R4W memory is selected to directly read the data.

The 4R4W memory can support 4-read and 4-write simultaneously.

In the preferred embodiment of the invention, there are five methods to establish the 2R1W memory.

As shown in FIG. 6, in the first embodiment, on the basis of the 6T SRAM, one word line is split into a left one and a right one, so that either two read ports can operate simultaneously or one write port can be used. In this way, the reading of data via the left MOS transistor and the reading of data via the right MOS transistor can be performed simultaneously. It should be noted that the data read via the right MOS transistor cannot be used until it is inverted. In order not to affect the speed of data reading, a pseudo-differential amplifier is required as the read sense amplifier. Thus, the area of the 6T SRAM is unchanged, and the only cost is doubling the word line, thereby ensuring that the overall memory density is basically unchanged.

FIG. 7 shows a schematic diagram of a read-write operation flow of a 2R1W memory formed by customized design according to the second embodiment of the present invention.

By customized design, the ports of the SRAM can be increased: one word line is cut into two word lines to provide two read ports. A time-sharing technique may also be used, that is, the read operation is performed on the rising edge of a clock, and the write operation is performed on the falling edge of the clock. In this way, a basic 1-read-or-1-write SRAM can be expanded to a 1-read-and-1-write SRAM, that is, one read operation and one write operation can be performed simultaneously, and the memory density is basically unchanged.

FIG. 8 shows a schematic diagram of a read-write operation flow of a 2R1W memory formed by algorithm design according to the third embodiment of the present invention.

In the present embodiment, the 2R1W SRAM constructed based on the SRAM2P is taken as an example, and the SRAM2P is an SRAM capable of supporting 1-read and 1-read/write, that is, two read operations can be simultaneously performed or one read and one write operation can be performed on the SRAM2P.

In the present embodiment, the 2R1W SRAM is constructed on the basis of the SRAM2P by duplicating one SRAM. In this example, the SRAM2P_1 on the right is a copy of the SRAM2P_0 on the left. In operation, the two SRAM2Ps are each used as a 1-read-and-1-write memory. When data is written, the data is written to the left and right SRAM2Ps at the same time. When data is read, data A is fixedly read from the SRAM2P_0, and data B is fixedly read from the SRAM2P_1, so that one write operation and two read operations can be performed concurrently.
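A minimal behavioral sketch of this duplication scheme is given below (illustrative Python; the class name and port assignment are assumptions for the example). Every write updates both copies, and each read port is served by a fixed copy:

class Dup2R1W:
    """2R1W memory built from two identical SRAM2P copies (third embodiment)."""

    def __init__(self, depth: int):
        self.copy0 = [0] * depth   # SRAM2P_0
        self.copy1 = [0] * depth   # SRAM2P_1, a mirror of SRAM2P_0

    def write(self, addr: int, value: int) -> None:
        self.copy0[addr] = value   # the write goes to both copies
        self.copy1[addr] = value

    def read2(self, addr_a: int, addr_b: int):
        # Read port A is fixedly served by copy 0, read port B by copy 1.
        return self.copy0[addr_a], self.copy1[addr_b]

mem = Dup2R1W(16)
mem.write(3, 0xAB)
assert mem.read2(3, 3) == (0xAB, 0xAB)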

FIG. 9a and FIG. 9b show schematic diagrams of a read-write operation flow of the 2R1W memory formed by algorithm design according to the fourth embodiment.

In the present embodiment, a logically integral 16384-depth SRAM is divided into four logical 4096-depth SRAM2Ps, which are numbered sequentially as 0, 1, 2, and 3, and an additional 4096-depth SRAM2P, numbered 4, is added as a solution to read-write conflicts. For reading the data A and the data B, it is always ensured that the two read operations can be performed concurrently. When the addresses of the two read operations are in different SRAM2Ps, since any one SRAM2P can be configured as the 1R1W type, there are no read-write conflicts. When the addresses of the two read operations are in the same SRAM2P block, for example, both in the SRAM2P_0, since the same SRAM2P can provide at most two ports for simultaneous operation, the ports are occupied by the two read operations. If one write operation happens to target the SRAM2P_0, then such data is written into the additional block SRAM2P_4 instead.

In the present embodiment, a memory block number mapping table is required to record which memory block stores the valid data. As shown in FIG. 9b, the depth of the memory block number mapping table is the same as the depth of one memory block, that is, 4096. In each entry, the numbers 0 to 4 of all memory blocks are sequentially stored after initialization. In the example of FIG. 9a, since the SRAM2P_0 has a read-write conflict when the data is written, the data is actually written into the SRAM2P_4. At this point, the corresponding entry of the memory block number mapping table is also read and modified: the original content {0, 1, 2, 3, 4} becomes {4, 1, 2, 3, 0}. The block numbers 0 and 4 are exchanged, indicating that the data is actually stored in the SRAM2P_4, and the SRAM2P_0 becomes a backup entry at the same time.

When the data is read, it is necessary to first read the memory block number mapping table at the corresponding address, to check which memory block the valid data is stored in. For example, if the data at the address 5123 is to be read, the content stored at the address 1027 (5123−4096=1027) of the memory block number mapping table is read first. The content at the address 1027 of the corresponding memory block is then read according to the block number stored in the second column.

For the data writing operation, the memory block number mapping table is required to provide one read port and one write port. For two data reading operations, the memory block number mapping table is required to provide two read ports, so that the memory block number mapping table is required to provide three read ports and one write port in total, and these four access operations must be performed simultaneously.
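The behavior described above can be modeled roughly as follows (illustrative Python; the class and method names are hypothetical). The sketch handles the common case described in the text and assumes that the spare block listed at the write offset is not itself occupied by the two reads:

DEPTH = 4096

class Mapped2R1W:
    """2R1W memory from four main blocks, one spare block and a mapping table."""

    def __init__(self):
        self.blocks = [[0] * DEPTH for _ in range(5)]        # SRAM2P_0 .. SRAM2P_4
        self.map = [list(range(5)) for _ in range(DEPTH)]     # per-offset block numbers

    def _locate(self, addr: int):
        return addr // DEPTH, addr % DEPTH                    # (logical block, offset)

    def read2_write1(self, raddr_a: int, raddr_b: int, waddr: int, wdata: int):
        """One cycle: two reads and one write, redirecting the write when needed."""
        la, oa = self._locate(raddr_a)
        lb, ob = self._locate(raddr_b)
        lw, ow = self._locate(waddr)
        pa, pb = self.map[oa][la], self.map[ob][lb]           # physical blocks being read
        data_a, data_b = self.blocks[pa][oa], self.blocks[pb][ob]

        pw = self.map[ow][lw]                                 # intended write block
        if pa == pb == pw:
            # Both read ports occupy the write target: steer the write into the
            # spare block and swap the two block numbers in the mapping table.
            spare = self.map[ow][4]
            self.blocks[spare][ow] = wdata
            self.map[ow][lw], self.map[ow][4] = spare, pw
        else:
            self.blocks[pw][ow] = wdata
        return data_a, data_b

mem = Mapped2R1W()
mem.read2_write1(raddr_a=5, raddr_b=100, waddr=200, wdata=7)  # all in block 0: redirected
assert mem.map[200] == [4, 1, 2, 3, 0]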

FIG. 10 shows a fifth embodiment. In the preferred embodiment of the present invention, a method for constructing the 2R1W memory comprises: according to the depth and width of the 2R1W memory, selecting 2m+1 SRAM2P memories having the same depth and width to construct a hardware architecture of the 2R1W memory, m being a positive integer.

Multiple SRAM2P memories are sequentially SRAM2P(0), SRAM2P(1) . . . SRAM2P(2m) according to an arrangement sequence. Each SRAM2P memory has M pointer addresses, one of the multiple SRAM2P memories is an auxiliary memory, and the rest SRAM2P memories are main memories.

In the preferred embodiment of the invention, the product of the depth and width of each SRAM2P memory is equal to (the product of the depth and width of the 2R1W memory)/2m.

For the convenience of description, a 16384-depth 128-width 2R1W memory with an m value of 2 is described in detail below.

In this specific example, the multiple SRAM2P memories are sequentially SRAM2P(0), SRAM2P(1), SRAM2P(2), SRAM2P(3) and SRAM2P(4) according to the arrangement sequence, wherein the SRAM2P(0), SRAM2P(1), SRAM2P(2) and SRAM2P(3) are the main memories, and the SRAM2P(4) is the auxiliary memory. The depth and width of each SRAM2P memory are 4096 and 128 respectively. Correspondingly, each SRAM2P memory has 4096 pointer addresses. If the pointer address of each SRAM2P memory is independently identified, the pointer address of each SRAM2P memory is 0-4095. If the addresses of all the main memories are arranged in order, the range of all the pointer addresses is 0-16383. In this example, the SRAM2P(4) is used to resolve port conflicts. In the present embodiment, the requirement can be met without adding the memory block number mapping table.
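Under this layout, a global pointer in the range 0 to 16383 decomposes into a main memory index and a pointer address inside that memory. A minimal sketch of the assumed mapping is given below (illustrative Python):

MEM_DEPTH = 4096   # depth of each SRAM2P memory
NUM_MAIN = 4       # 2m main memories with m = 2

def locate(pointer: int):
    """Return (main memory index x, pointer address y) for a global pointer."""
    return pointer // MEM_DEPTH, pointer % MEM_DEPTH

assert locate(0) == (0, 0)
assert locate(5123) == (1, 1027)
assert locate(16383) == (3, 4095)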

Further, based on the above hardware architecture, the method further comprises: when the data is written into and/or read from the 2R1W memory, associating the data in the main memories and the data in the auxiliary memory according to a current pointer position of the data, and performing XOR operation on the associated data to complete the writing and reading of the data.

In the preferred embodiment of the invention, the data writing process is as follows.

The writing address of the current data is obtained as W(x, y). x represents the arrangement position of the SRAM2P memory where the written data is located, and 0≤x<2m. y represents the specific pointer address in the SRAM2P memory where the written data is located, and 0≤y<M.

The data in the rest main memories which have the same pointer address as the writing address are obtained and are subjected to the XOR operation with the current written data at the same time. The result of the XOR operation is written into the same pointer address of the auxiliary memory.

As shown in FIG. 11, in a specific example of the present invention, the data 128-bit all “1” is written to the pointer address “5” in the SRAM2P(0), that is, the writing address of the current data is W(0,5). In the process of data writing, in addition to directly writing the data 128-bit all “1” to the pointer address “5” in the SRAM2P(0) at the specified position, the data of the rest of the main memories at the same pointer address also needs to be read. It is assumed that the data read from the pointer address “5” in the SRAM2P(1) is 128-bit all “1”, the data read from the pointer address “5” in the SRAM2P(2) is 128-bit all “0”, and the data read from the pointer address “5” in the SRAM2P(3) is 128-bit all “1”. Then the data 128-bit all “1”, 128-bit all “0”, 128-bit all “1” and 128-bit all “1” are subjected to the XOR operation, and the result of the XOR operation, 128-bit all “1”, is simultaneously written to the pointer address “5” in the SRAM2P(4). In this way, it is ensured that the two read ports and one write port of the 2R1W memory can operate simultaneously.
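A minimal sketch of this write path is given below (illustrative Python; the class name Xor2R1W, the integer representation of the 128-bit words and the method names are assumptions for the example). Each write refreshes the auxiliary memory with the XOR of all main memories at the same pointer address:

from functools import reduce
from operator import xor

DEPTH, NUM_MAIN = 4096, 4   # m = 2: four main SRAM2Ps plus one auxiliary SRAM2P

class Xor2R1W:
    def __init__(self):
        self.main = [[0] * DEPTH for _ in range(NUM_MAIN)]   # SRAM2P(0)..SRAM2P(3)
        self.aux = [0] * DEPTH                               # SRAM2P(4)

    def write(self, x: int, y: int, data: int) -> None:
        """Write data into main memory x at pointer address y and refresh the
        XOR parity held by the auxiliary memory at the same pointer address."""
        self.main[x][y] = data
        others = (self.main[i][y] for i in range(NUM_MAIN) if i != x)
        self.aux[y] = reduce(xor, others, data)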

Further, in the preferred embodiment of the present invention, the data reading process is as follows.

If the reading addresses of the current two pieces of read data are in the same SRAM2P memory, then the reading addresses of the two pieces of read data are respectively obtained as R1(x1, y1) and R2(x2, y2). x1 and x2 both represent the arrangement positions of the SRAM2P memories in which the read data are located, 0≤x1<2m, and 0≤x2<2m. y1 and y2 both represent the specific pointer addresses in the SRAM2P memories in which the read data are located, 0≤y1<M, and 0≤y2<M.

One of the two reading addresses, for example R1(x1, y1), is randomly selected, and the stored data is directly read from that designated reading address.

The data in the rest of the main memories and the data stored in the auxiliary memory, which have the same pointer address as the other reading address, are obtained and subjected to the XOR operation. The result of the XOR operation is output as the stored data of the other reading address.

As shown in FIG. 11, in a specific example of the present invention, there are two pieces of read data, and the pointer addresses are the pointer address “2” in the SRAM2P(0) and the pointer address “5” in the SRAM2P(0) respectively. That is, the reading addresses of the current data are R(0, 2) and R(0, 5).

In the process of reading the data from the 2R1W memory, since each SRAM2P can only guarantee that one read port and one write port operate simultaneously, the read port directly reads the data from the pointer address “2” in the SRAM2P(0), but the request of the other read port cannot be met. Correspondingly, the present invention solves the problem of simultaneously reading the data by the two read ports by using the XOR operation.

For the data at R(0,5), the data at the pointer addresses “5” of the other three main memories and the auxiliary memory are read respectively and subjected to the XOR operation. Following the above example, the data read from the pointer address “5” in the SRAM2P(1) is 128-bit all “1”, the data read from the pointer address “5” in the SRAM2P(2) is 128-bit all “0”, the data read from the pointer address “5” in the SRAM2P(3) is 128-bit all “1”, and the data read from the pointer address “5” in the SRAM2P(4) is 128-bit all “1”. The data 128-bit all “1”, 128-bit all “0”, 128-bit all “1” and 128-bit all “1” are subjected to the XOR operation to obtain 128-bit all “1”, and the result of the XOR operation, 128-bit all “1”, is used as the stored data of the pointer address “5” in the SRAM2P(0) for output. The result obtained by the above process is completely consistent with the data stored at the pointer address “5” in the SRAM2P(0). Thus, according to the current pointer position of the data, the data in the main memories and the data in the auxiliary memory are associated and subjected to the XOR operation to complete the writing and reading of the data.

In one embodiment of the present invention, if the reading addresses of the current two pieces of read data are in different SRAM2P memories, the data corresponding to the pointer addresses in the different SRAM2P memories are directly obtained for independent output.

As shown in FIG. 11, in a specific example of the present invention, there are two pieces of read data, and the pointer addresses are the pointer address “5” in the SRAM2P(0) and the pointer address “10” in the SRAM2P(1) respectively. That is, the current data reading addresses are R (0, 5) and R (1, 10).

In the process of reading the data from the 2R1W memory, each SRAM2P can ensure that one read port and one write port operate simultaneously. Therefore, in the data reading process, the data is directly read from the pointer address “5” in the SRAM2P(0), and the data is directly read from the pointer address “10” in the SRAM2P(1). Thus, it is ensured that the two read ports and one write port of the 2R1W memory operate simultaneously, which is not repeated in detail herein.
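Continuing the Xor2R1W sketch given after the write description (illustrative Python; it relies on the class, constants and imports defined there), the read path and the two FIG. 11 cases can be modeled as follows:

def read2(mem: Xor2R1W, addr_a, addr_b):
    """Serve two simultaneous reads; each address is (memory index x, pointer y).
    When both reads fall into the same main memory, the second value is rebuilt
    by XOR-ing the other main memories with the auxiliary memory."""
    (xa, ya), (xb, yb) = addr_a, addr_b
    data_a = mem.main[xa][ya]
    if xa != xb:
        data_b = mem.main[xb][yb]                   # no port conflict: read directly
    else:
        others = (mem.main[i][yb] for i in range(NUM_MAIN) if i != xb)
        data_b = reduce(xor, others, mem.aux[yb])   # reconstruct via the XOR parity
    return data_a, data_b

ONES = (1 << 128) - 1                               # 128-bit all "1"
mem = Xor2R1W()
mem.main[1][5], mem.main[2][5], mem.main[3][5] = ONES, 0, ONES
mem.write(0, 5, ONES)                               # auxiliary entry 5 becomes all "1"
assert read2(mem, (0, 2), (0, 5)) == (0, ONES)      # both reads hit SRAM2P(0)
assert read2(mem, (0, 5), (1, 10)) == (ONES, 0)     # different memories: direct reads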

It should be noted that if each SRAM2P is further divided logically, for example into 4m SRAM2Ps having the same depth, then the above 2R1W SRAM can be constructed by adding only 1/(4m) of the memory area. Correspondingly, however, the number of SRAM blocks is nearly doubled physically, and a large area overhead will be incurred in actual placing and routing. Of course, the present invention is not limited to the above specific embodiments, and other solutions using the XOR operation to expand the memory ports are also included in the protective scope of the present invention, which is not repeated in detail herein.

As shown in FIG. 12, the 4R4W memory according to the present invention is specifically introduced by an example in which two 16384-depth 1152-width 2R1W-type SRAMs are assembled in parallel into one Bank. The capacity of one Bank is 4.5M bytes, and a total of four Banks forms a 4R4W multi-port memory unit of 18M bytes.

In this example, in the process of writing the data into the 4R4W memory, simultaneous writing by four Slices is required to be supported. It is assumed that the data bus bit width of each Slice is 1152 bits, and meanwhile each Slice supports the line-rate forwarding of six 100GE ports. In the worst case on a data channel, for packet data with a length less than or equal to 144 bytes, the core clock frequency needs to run at 892.9 MHz. For packets with a length greater than 144 bytes, the core clock frequency is required to run at 909.1 MHz.

In one clock cycle, if the size of the written data is less than or equal to 144 bytes, the bandwidth requirement can be satisfied only if simultaneous writing by the four Slices is supported. Thus, by adopting spatial division, the written data of the four Slices are written into the four Banks respectively. Meanwhile, the data written into one Bank is copied and written into the left and right 2R1W memories of that Bank respectively, so that the data reading request can be met, as described in detail below.

In one clock cycle, if the size of the written data is greater than 144 bytes, the bandwidth requirement likewise can be satisfied only if simultaneous writing by the four Slices is supported; that is, the data of each Slice needs to occupy an entire Bank. Thus, for each Slice, the requirement can be met by adopting a ping-pong operation over two clock cycles. For example, in one clock cycle, two of the pieces of data are written into two Banks respectively. When the second cycle comes, the other two pieces of data are respectively written into two Banks. The two 2R1W memories in each Bank respectively store the high and low bits of any data larger than 144 bytes, which is not repeated in detail here. Thus, there are no conflicts between the written data.

The reading process is similar to the writing process. In one clock cycle, if the size of the read data is less than or equal to 144 bytes, in the worst case all the read data is stored in the same Bank. Each Bank of the present invention is formed by splicing two 2R1W memories, and each 2R1W memory can support two reading requests simultaneously. Since, during data writing, the data is copied and stored into the left and right 2R1W memories of the same Bank respectively, the data reading request can also be met in this case.

In one clock cycle, if the size of the read data is greater than 144 bytes, in the worst case all the read data is stored in the same Bank. Similar to the writing process, only a ping-pong operation over two clock cycles is required. That is, in one clock cycle, two pieces of data are read from the two 2R1W memories of one Bank. In the second clock cycle, the remaining two pieces of data are read from the two 2R1W memories of the same Bank, so as to meet the reading request, which is not repeated in detail herein.
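A rough behavioral sketch of one Bank built from two 2R1W halves is given below (illustrative Python; the class name, the byte-level representation and the choice of which half holds the high bits are assumptions). It covers the copy-on-write case for packets of at most 144 bytes and the split-write case for larger packets; the ping-pong scheduling over two clock cycles is left to the surrounding control logic:

WORD_BYTES = 144   # 1152-bit data bus of each 2R1W memory

class Bank:
    """One Bank assembled from two 16384-deep 2R1W memories."""

    def __init__(self, depth: int = 16384):
        self.left = [None] * depth    # 2R1W memory 0
        self.right = [None] * depth   # 2R1W memory 1

    def write_small(self, addr: int, data: bytes) -> None:
        """Packets of at most 144 bytes are copied into both halves, so either
        half (each supporting two reads) can serve later read requests."""
        assert len(data) <= WORD_BYTES
        self.left[addr] = data
        self.right[addr] = data

    def write_large(self, addr: int, data: bytes) -> None:
        """Larger packets occupy the whole Bank: one half per 144-byte word."""
        assert WORD_BYTES < len(data) <= 2 * WORD_BYTES
        self.left[addr] = data[:WORD_BYTES]    # "high" word (order is illustrative)
        self.right[addr] = data[WORD_BYTES:]   # "low" word

    def read_small2(self, addr_a: int, addr_b: int):
        # Two small packets in the same Bank can be read at once, one per copy.
        return self.left[addr_a], self.right[addr_b]

    def read_large(self, addr: int) -> bytes:
        return self.left[addr] + self.right[addr]

bank = Bank()
bank.write_small(0, b"x" * 64)
bank.write_large(1, b"y" * 200)
assert bank.read_small2(0, 0) == (b"x" * 64, b"x" * 64)
assert bank.read_large(1) == b"y" * 200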

In the preferred embodiment of the present invention, the method further comprises: selecting a writing position of the data according to the remaining free resource of each Bank when the data is written into the 4R4W memory. Specifically, the method comprises: correspondingly creating a free buffer resource pool for each Bank, the free buffer resource pool being used to store the remaining free pointers of the corresponding Bank; when the data sends a request of being written into the 4R4W memory, comparing the depths of the respective free buffer resource pools; if there exists one free buffer resource pool with the maximum depth, directly writing the data into the Bank corresponding to the free buffer resource pool with the maximum depth; and if there exist two or more free buffer resource pools with the same maximum depth, randomly writing the data into the Bank corresponding to one of the free buffer resource pools with the maximum depth.

Of course, in other embodiments of the present invention, a certain rule may also be set. When there exist two or more free buffer resource pools with the same maximum depth, the data may be written into the corresponding Bank according to the arrangement sequence of the respective Banks, which is not repeated in detail herein.
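The placement rule can be sketched as a small selection function (illustrative Python; the function name is hypothetical). It implements the random tie-break of the preferred embodiment; the Bank-order rule mentioned above would simply replace the final random choice:

import random

def choose_bank(free_pools) -> int:
    """free_pools[i] holds the remaining free pointers of Bank i; pick the Bank
    whose free buffer resource pool is deepest, breaking ties at random."""
    depths = [len(pool) for pool in free_pools]
    deepest = max(depths)
    candidates = [i for i, d in enumerate(depths) if d == deepest]
    return candidates[0] if len(candidates) == 1 else random.choice(candidates)

pools = [[10, 11], [20], [30, 31, 32], [40, 41, 42]]
assert choose_bank(pools) in (2, 3)   # two pools share the maximum depth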

As shown in FIG. 13, in a specific example of the present invention, the specific structure of X0Y0 is the same as that shown in FIG. 12.

S0, S1, S2 and S3 represent four Slices. Each Slice contains, for example, six 100GE ports. The packets input from slice0, slice1, slice2 and slice3 to slice0, slice1, slice2 and slice3 are all stored into X0Y0. Further, when the packets are read, slice0, slice1, slice2 and slice3 all directly read the corresponding data from X0Y0. In this way, buffer sharing between the different destination ports of the Slices can be realized. The specific process of packet writing and reading may refer to the specific explanation of FIG. 12.

Under the 14 nm integrated circuit technology, the 4R4W memory according to the present invention logically requires a total of forty 4096-depth 1152-width SRAM2Ps. The total occupied area is 22.115 square centimeters, and the total power consumption is 13.503 Watts (under the fastest technological conditions, with a core voltage of 0.9 V and a junction temperature of 125 DEG C.). Meanwhile, complex control logic is not required: the operation of multiple read ports can be realized by a simple XOR operation alone. In addition, no additional memory block number mapping table or control logic is required. Further, all memory resources are visible to the four Slices or any one input/output port, and all memory resources are completely shared between any ports.
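As a quick consistency check, the sketch below (illustrative Python) confirms the forty-block count and the 18M-byte payload capacity implied by the parameters above (four Banks, two 2R1W memories per Bank, and four main plus one auxiliary SRAM2P per 2R1W memory):

BANKS, PER_BANK, MAIN_PER_2R1W = 4, 2, 4        # auxiliary SRAM2Ps carry no payload
DEPTH, WIDTH_BITS = 4096, 1152

sram2p_total = BANKS * PER_BANK * (MAIN_PER_2R1W + 1)                  # 40 blocks
payload_bytes = BANKS * PER_BANK * MAIN_PER_2R1W * DEPTH * WIDTH_BITS // 8
assert sram2p_total == 40
assert payload_bytes == 18 * 1024 * 1024                               # 18M bytes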

FIG. 14 shows a data buffer processing system for a 4R4W fully-shared packet according to the embodiment of the present invention.

The system comprises: a data constructing module 100 and a data processing module 200.

The data constructing module 100 is configured to: assemble two 2R1W memories in parallel into one Bank memory unit; and form the hardware architecture of a 4R4W memory based on four Bank memory units directly.

The data processing module 200 is configured to: when determining that under one clock cycle, data is written into the 4R4W memory by four write ports, if the size of the data is less than or equal to the bit width of the 2R1W memory, write the data into different Banks respectively, and meanwhile, copy the written data and write the copied data into the two 2R1W memories of each Bank respectively; and if the size of the data is greater than the bit width of the 2R1W memory, wait for a second clock cycle, and when the second clock cycle comes, write the data into different Banks respectively, and meanwhile, write the high and low bits of each piece of written data into the two 2R1W memories of each Bank memory unit respectively.

The data processing module 200 is further configured to: when determining that under one clock cycle, the data is read from the 4R4W memory, if the size of the data is less than or equal to the bit width of the 2R1W memory, select a matched read port in the 4R4W memory to directly read the data; and if the size of the data is greater than the bit width of the 2R1W memory, wait for the second clock cycle, and when the second clock cycle comes, select a matched read port in the 4R4W memory to directly read the data.

In the preferred embodiment of the present invention, the data constructing module 100 adopts five methods to establish the 2R1W memory.

As shown in FIG. 6, in the first embodiment, on the basis of the 6T SRAM, the data constructing module 100 divides one word line into a left one and a right one, so that either two read ports can operate simultaneously or one write port can be used. In this way, the reading of data via the left MOS transistor and the reading of data via the right MOS transistor can be performed simultaneously. It should be noted that the data read via the right MOS transistor cannot be used until it is inverted. In order not to affect the speed of data reading, a pseudo-differential amplifier is required as the read sense amplifier. Thus, the area of the 6T SRAM is unchanged, and the only cost is doubling the word line, thereby ensuring that the overall memory density is basically unchanged.

As shown in FIG. 7, in the second embodiment, by customized design, the data constructing module 100 increases the ports of the SRAM: one word line is cut into two word lines to provide two read ports. A time-sharing technique may also be adopted, that is, the read operation is performed on the rising edge of a clock, and the write operation is performed on the falling edge of the clock. In this way, a basic 1-read-or-1-write SRAM can be expanded to a 1-read-and-1-write SRAM, that is, one read operation and one write operation can be performed simultaneously, and the memory density is basically unchanged.

As shown in FIG. 8, in the third embodiment, the 2R1W SRAM constructed based on the SRAM2P is taken as an example. The SRAM2P is an SRAM capable of supporting 1-read and 1-read/write, that is, two read operations can be simultaneously performed or one read and one write operation can be performed on the SRAM2P.

In the present embodiment, the data constructing module 100 constructs the 2R1W SRAM on the basis of the SRAM2P by duplicating one SRAM. In this example, the SRAM2P_1 on the right is a copy of the SRAM2P_0 on the left. In operation, the two SRAM2Ps are each used as a 1-read-and-1-write memory. When data is written, the data is written to the left and right SRAM2Ps at the same time. When data is read, data A is fixedly read from the SRAM2P_0, and data B is fixedly read from the SRAM2P_1, so that one write operation and two read operations can be performed concurrently.

As shown in FIG. 9a and FIG. 9b, in the fourth embodiment, the data constructing module 100 divides a logically integral 16384-depth SRAM into four logical 4096-depth SRAM2Ps, which are numbered sequentially as 0, 1, 2, and 3, and an additional 4096-depth SRAM2P, numbered 4, is added as a solution to read-write conflicts. For reading the data A and the data B, it is always ensured that the two read operations can be performed concurrently. When the addresses of the two read operations are in different SRAM2Ps, since any one SRAM2P can be configured as the 1R1W type, there are no read-write conflicts. When the addresses of the two read operations are in the same SRAM2P block, for example, both in the SRAM2P_0, since the same SRAM2P can provide at most two ports for simultaneous operation, the ports are occupied by the two read operations. If one write operation happens to target the SRAM2P_0, then such data is written into the additional block SRAM2P_4 instead.

In the present embodiment, a memory block number mapping table is required to record which memory block stores the valid data. As shown in FIG. 9b, the depth of the memory block number mapping table is the same as the depth of one memory block, that is, 4096. In each entry, the numbers 0 to 4 of all memory blocks are sequentially stored after initialization. In the example of FIG. 9a, since the SRAM2P_0 has a read-write conflict when the data is written, the data is actually written into the SRAM2P_4. At this point, the corresponding entry of the memory block number mapping table is also read and modified: the original content {0, 1, 2, 3, 4} becomes {4, 1, 2, 3, 0}. The block numbers 0 and 4 are exchanged, indicating that the data is actually stored in the SRAM2P_4, and the SRAM2P_0 becomes a backup entry.

When the data is read, it is necessary to first read the memory block number mapping table at the corresponding address, to check which memory block the valid data is stored in. For example, if the data at the address 5123 is to be read, the content stored at the address 1027 (5123−4096=1027) of the memory block number mapping table is read first. The content at the address 1027 of the corresponding memory block is then read according to the block number stored in the second column.

For the data writing operation, the memory block number mapping table is required to provide one read port and one write port. For two data reading operations, the memory block number mapping table is required to provide two read ports, so that the memory block number mapping table is required to provide three read ports and one write port in total, and these four access operations must be performed simultaneously.

FIG. 10 shows a fifth embodiment. In the preferred embodiment of the present invention, the data constructing module 100, according to the depth and width of the 2R1W memory, selects 2m+1 SRAM2P memories having the same depth and width to construct a hardware architecture of the 2R1W memory, m being a positive integer.

Multiple SRAM2P memories are sequentially SRAM2P(0), SRAM2P(1) . . . SRAM2P(2m) according to an arrangement sequence. Each SRAM2P memory has M pointer addresses, one of the multiple SRAM2P memories is an auxiliary memory, and the rest SRAM2P memories are main memories.

In the preferred embodiment of the invention, the product of the depth and width of each SRAM2P memory is equal to (the product of the depth and width of the 2R1W memory)/2m.

For the convenience of description, a 16384-depth 128-width 2R1W memory with an m value of 2 is described in detail below.

In this specific example, the multiple SRAM2P memories are sequentially SRAM2P(0), SRAM2P(1), SRAM2P(2), SRAM2P(3) and SRAM2P(4) according to the arrangement sequence, wherein the SRAM2P(0), SRAM2P(1), SRAM2P(2) and SRAM2P(3) are the main memories, and the SRAM2P(4) is the auxiliary memory. The depth and width of each SRAM2P memory are 4096 and 128 respectively. Correspondingly, each SRAM2P memory has 4096 pointer addresses. If the pointer address of each SRAM2P memory is independently identified, the pointer address of each SRAM2P memory is 0-4095. If the addresses of all the main memories are arranged in order, the range of all the pointer addresses is 0-16383. In this example, the SRAM2P(4) is used to resolve port conflicts. In the present embodiment, the requirement can be met without adding the memory block number mapping table.

Further, based on the above hardware architecture, when the data is written into and/or read from the 2R1W memory, the data processing module 200 is specifically configured to associate the data in the main memories and the data in the auxiliary memory according to a current pointer position of the data, and perform XOR operation on the associated data to complete the writing and reading of the data.

In the preferred embodiment of the invention, the data writing process is as follows.

The writing address of the current data is obtained as W(x, y). x represents the arrangement position of the SRAM2P memory where the written data is located, and 0≤x<2m. y represents the specific pointer address in the SRAM2P memory where the written data is located, and 0≤y<M.

The data in the rest main memories which have the same pointer address as the writing address are obtained and are subjected to the XOR operation with the current written data at the same time. The result of the XOR operation is written into the same pointer address of the auxiliary memory.

Further, in the preferred embodiment of the present invention, the data reading process of the data processing module 200 is as follows.

If the reading addresses of the current two pieces of read data are in the same SRAM2P memory, then the data processing module 200 is specifically configured to respectively obtain the reading addresses of the two pieces of read data as R1(x1, y1) and R2(x2, y2). x1 and x2 both represent the arrangement positions of the SRAM2P memories in which the read data are located, 0≤x1<2m, and 0≤x2<2m. y1 and y2 both represent the specific pointer addresses in the SRAM2P memories in which the read data are located, 0≤y1<M, and 0≤y2<M.

The data processing module 200 is specifically configured to randomly select one of the two reading addresses, for example R1(x1, y1), and directly read the stored data from that designated reading address.

The data processing module 200 is specifically configured to: obtain the data in the rest of the main memories and the data stored in the auxiliary memory, which have the same pointer address as the other reading address, perform the XOR operation on the obtained data, and output the result of the XOR operation as the stored data of the other reading address.

In one embodiment of the present invention, if the reading addresses of the current two pieces of read data are in different SRAM2P memories, the data processing module 200 directly obtains the data corresponding to the pointer addresses in the different SRAM2P memories for independent output.

It should be noted that if each SRAM2P is further divided logically, for example into 4m SRAM2Ps having the same depth, then the above 2R1W-type SRAM can be constructed by adding only 1/(4m) of the memory area. Correspondingly, however, the number of SRAM blocks is nearly doubled physically, and a large area overhead will be incurred in actual placing and routing. Of course, the present invention is not limited to the above specific embodiments, and other solutions using the XOR operation to expand the memory ports are also included in the protective scope of the present invention, which is not repeated in detail herein.

In the preferred embodiment of the present invention, the data processing module 200 is further configured to: when the data is written into the 4R4W memory, select a data writing position according to the remaining free resource of each Bank. Specifically, the data processing module 200 is further configured to: correspondingly create a free buffer resource pool for each Bank, the free buffer resource pool being used to store the remaining free pointers of the corresponding Bank; when the data sends a request of being written into the 4R4W memory, compare the depths of the respective free buffer resource pools; if there exists one free buffer resource pool with the maximum depth, directly write the data into the Bank corresponding to the free buffer resource pool with the maximum depth; and if there exist two or more free buffer resource pools with the same maximum depth, randomly write the data into the Bank corresponding to one of the free buffer resource pools with the maximum depth.

Of course, in other embodiments of the present invention, a certain rule may also be set. When there exist two or more free buffer resource pools with the same maximum depth, the data may be written into the corresponding Bank according to the arrangement sequence of the respective Banks, which is not repeated in detail herein.

As shown in FIG. 13, in the specific example, the specific structures of X0Y0 and X1Y1 are the same as those shown in FIG. 12. In the data writing and reading process, the storage needs to be performed according to the corresponding forwarding ports. For example, the data of S0 and S1 can only be written into the X0Y0, while the data of S2 and S3 can only be written into the X1Y1, and the specific writing process is not repeated.

Under the 14 nm integrated circuit technology, the 4R4W memory according to the present invention logically requires a total of forty 4096-depth 1152-width SRAM2Ps. The total occupied area is 22.115 square centimeters, and the total power consumption is 13.503 Watts (under the fastest technological conditions, with a core voltage of 0.9 V and a junction temperature of 125 DEG C.). Meanwhile, complex control logic is not required: the operation of multiple read ports can be realized by a simple XOR operation alone. In addition, no additional memory block number mapping table or control logic is required. Further, all memory resources are visible to the four Slices or any one input/output port, and all memory resources are completely shared between any ports.

In conclusion, in the data buffer processing method and data buffer processing system for a 4R4W fully-shared packet according to the present invention, an SRAM with more ports is constructed by algorithms based on existing types of SRAMs, and the multi-port SRAM is supported to the greatest extent at only a minimal cost. In the implementation process, complex control logic and additional multi-port SRAM or register array resources are avoided. By using the uniqueness of the packet buffer and by spatial division and time division, the 4R4W packet buffer can be realized by only a simple XOR operation. Meanwhile, all memory resources of the 4R4W memory according to the present invention are visible to the four Slices or any one input/output port, and all memory resources are completely shared between any ports. The present invention has lower power consumption and a faster processing speed, saves more resources or area, and is simple to implement. Manpower and material costs are saved.

For the convenience of description, the above apparatuses are described with separate modules based on the functions of these modules. Of course, the functions of these modules may be realized in the same or multiple pieces of software and/or hardware when carrying out the present invention.

The apparatus embodiments described above are only illustrative. The modules described as separate members may or may not be physically separated, and the members displayed as modules may or may not be physical modules; they may be located at one location, or may be distributed over multiple network units. A part or all of these modules may be selected according to actual needs to realize the objectives of the solutions of these embodiments, which may be understood and implemented by those skilled in the art without any inventive effort.

It should be understood that although the description is presented in terms of embodiments, not every embodiment includes only one independent technical solution. This manner of description is adopted merely for the sake of clarity; those skilled in the art should take the description as a whole, and the technical solutions in the embodiments may be suitably combined to form other embodiments understandable by those skilled in the art.

The above detailed description only specifies feasible embodiments of the present invention, and is not intended to limit the protection scope thereof. All equivalent embodiments or modifications not departing from the spirit of the present invention should be included in the protection scope of the present invention.

Claims

1. A data buffer processing method for a 4R4W fully-shared packet, wherein the method comprises:

assembling two 2R1W memories in parallel into one Bank memory unit;
forming the hardware architecture of a 4R4W memory based on four Bank memory units directly;
under one clock cycle, when data is written into the 4R4W memory by four write ports,
if the size of the data is less than or equal to the bit width of the 2R1W memory, writing the data into different Banks respectively, and meanwhile, copying the written data and writing the copied data into the two 2R1W memories of each Bank respectively; and
if the size of the data is greater than the bit width of the 2R1W memory, waiting for a second clock cycle, and when the second clock cycle comes, writing the data into different Banks respectively, and meanwhile, writing the high and low bits of each piece of written data into the two 2R1W memories of each Bank memory unit respectively.

2. The data buffer processing method for a 4R4W fully-shared packet according to claim 1, wherein the method further comprises:

under one clock cycle, when the data is read from the 4R4W memory,
if the size of the data is less than or equal to the bit width of the 2R1W memory, selecting a matched read port in the 4R4W memory to directly read the data; and
if the size of the data is greater than the bit width of the 2R1W memory, waiting for the second clock cycle, and when the second clock cycle comes, selecting a matched read port in the 4R4W memory to directly read the data.

3. The data buffer processing method for a 4R4W fully-shared packet according to claim 2, wherein the method further comprises:

selecting a writing position of the data according to the remaining free resource of each Bank when the data is written into the 4R4W memory.

4. The data buffer processing method for a 4R4W fully-shared packet according to claim 3, wherein the method specifically comprises:

correspondingly creating a free buffer resource pool for each Bank, the free buffer resource pool being used to store remaining free pointers of the current corresponding Bank, and when the data sends a request of being written into the 4R4W memory, comparing the depths of respective free buffer resource pools,
if there exists one free buffer resource pool with the maximum depth, directly writing the data into the Bank corresponding to the free buffer resource pool with the maximum depth; and
if there exist two or more free buffer resource pools with the same maximum depth, randomly writing the data into the Bank corresponding to one of the free buffer resource pools with the maximum depth.

5. The data buffer processing method for a 4R4W fully-shared packet according to claim 1, wherein the method further comprises:

according to the depth and width of the 2R1W memory, selecting 2m+1 SRAM2P memories having the same depth and width to construct a hardware architecture of the 2R1W memory, m being a positive integer, wherein
each SRAM2P memory has M pointer addresses, one of the plurality of SRAM2P memories is an auxiliary memory, and the rest SRAM2P memories are main memories; and
when the data is written into and/or read from the 2R1W memory, associating the data in the main memories and the data in the auxiliary memory according to a current pointer position of the data, and performing XOR operation on the associated data to complete the writing and reading of the data.

6. A data buffer processing system for a 4R4W fully-shared packet, wherein the system comprises: a data constructing module and a data processing module;

the data constructing module is configured to assemble two 2R1W memories in parallel into one Bank memory unit; and
form the hardware architecture of a 4R4W memory based on four Bank memory units directly;
the data processing module is configured to, when determining that under one clock cycle, data is written into the 4R4W memory by four write ports,
if the size of the data is less than or equal to the bit width of the 2R1W memory, write the data into different Banks respectively, and meanwhile, copy the written data and write the copied data into the two 2R1W memories of each Bank respectively; and
if the size of the data is greater than the bit width of the 2R1W memory, wait for a second clock cycle, and when the second clock cycle comes, write the data into different Banks respectively, and meanwhile, write the high and low bits of each piece of written data into the two 2R1W memories of each Bank memory unit respectively.

7. The data buffer processing system for a 4R4W fully-shared packet according to claim 6, wherein

the data processing module is further configured to: when determining that under one clock cycle, the data is read from the 4R4W memory, if the size of the data is less than or equal to the bit width of the 2R1W memory, select a matched read port in the 4R4W memory to directly read the data; and if the size of the data is greater than the bit width of the 2R1W memory, wait for the second clock cycle, and when the second clock cycle comes, select a matched read port in the 4R4W memory to directly read the data.

8. The data buffer processing system for a 4R4W fully-shared packet according to claim 7, wherein

the data processing module is further configured to select a writing position of the data according to the remaining free resource of each Bank when determining that the data is written into the 4R4W memory.

9. The data buffer processing system for a 4R4W fully-shared packet according to claim 8, wherein

the data processing module is further configured to: correspondingly create a free buffer resource pool for each Bank, the free buffer resource pool being used to store remaining free pointers of the current corresponding Bank, and when the data sends a request of being written into the 4R4W memory, compare the depths of respective free buffer resource pools, if there exists one free buffer resource pool with the maximum depth, directly write the data into the Bank corresponding to the free buffer resource pool with the maximum depth; and if there exist two or more free buffer resource pools with the same maximum depth, randomly write the data into the Bank corresponding to one of the free buffer resource pools with the maximum depth.

10. The data buffer processing system for a 4R4W fully-shared packet according to claim 6, wherein

the data constructing module is further configured to: according to the depth and width of the 2R1W memory, select 2m+1 SRAM2P memories having the same depth and width to construct a hardware architecture of the 2R1W memory, m being a positive integer, wherein
each SRAM2P memory has M pointer addresses, one of the plurality of SRAM2P memories is an auxiliary memory, and the rest SRAM2P memories are main memories; and
when the data is written into and/or read from the 2R1W memory, the data processing module is further configured to associate the data in the main memories and the data in the auxiliary memory according to a current pointer position of the data, and perform XOR operation on the associated data to complete the writing and reading of the data.

11. The data buffer processing method for a 4R4W fully-shared packet according to claim 2, wherein the method further comprises:

according to the depth and width of the 2R1W memory, selecting 2m+1 SRAM2P memories having the same depth and width to construct a hardware architecture of the 2R1W memory, m being a positive integer, wherein
each SRAM2P memory has M pointer addresses, one of the plurality of SRAM2P memories is an auxiliary memory, and the rest SRAM2P memories are main memories; and
when the data is written into and/or read from the 2R1W memory, associating the data in the main memories and the data in the auxiliary memory according to a current pointer position of the data, and performing XOR operation on the associated data to complete the writing and reading of the data.

12. The data buffer processing method for a 4R4W fully-shared packet according to claim 3, wherein the method further comprises:

according to the depth and width of the 2R1W memory, selecting 2m+1 SRAM2P memories having the same depth and width to construct a hardware architecture of the 2R1W memory, m being a positive integer, wherein
each SRAM2P memory has M pointer addresses, one of the plurality of SRAM2P memories is an auxiliary memory, and the rest SRAM2P memories are main memories; and
when the data is written into and/or read from the 2R1W memory, associating the data in the main memories and the data in the auxiliary memory according to a current pointer position of the data, and performing XOR operation on the associated data to complete the writing and reading of the data.

13. The data buffer processing method for a 4R4W fully-shared packet according to claim 4, wherein the method further comprises:

according to the depth and width of the 2R1W memory, selecting 2m+1 SRAM2P memories having the same depth and width to construct a hardware architecture of the 2R1W memory, m being a positive integer, wherein
each SRAM2P memory has M pointer addresses, one of the plurality of SRAM2P memories is an auxiliary memory, and the rest SRAM2P memories are main memories; and
when the data is written into and/or read from the 2R1W memory, associating the data in the main memories and the data in the auxiliary memory according to a current pointer position of the data, and performing XOR operation on the associated data to complete the writing and reading of the data.

14. The data buffer processing system for a 4R4W fully-shared packet according to claim 7, wherein

the data constructing module is further configured to: according to the depth and width of the 2R1W memory, select 2m+1 SRAM2P memories having the same depth and width to construct a hardware architecture of the 2R1W memory, m being a positive integer, wherein
each SRAM2P memory has M pointer addresses, one of the plurality of SRAM2P memories is an auxiliary memory, and the rest SRAM2P memories are main memories; and
when the data is written into and/or read from the 2R1W memory, the data processing module is further configured to associate the data in the main memories and the data in the auxiliary memory according to a current pointer position of the data, and perform XOR operation on the associated data to complete the writing and reading of the data.

15. The data buffer processing system for a 4R4W fully-shared packet according to claim 8, wherein

the data constructing module is further configured to: according to the depth and width of the 2R1W memory, select 2m+1 SRAM2P memories having the same depth and width to construct a hardware architecture of the 2R1W memory, m being a positive integer, wherein
each SRAM2P memory has M pointer addresses, one of the plurality of SRAM2P memories is an auxiliary memory, and the rest SRAM2P memories are main memories; and
when the data is written into and/or read from the 2R1W memory, the data processing module is further configured to associate the data in the main memories and the data in the auxiliary memory according to a current pointer position of the data, and perform XOR operation on the associated data to complete the writing and reading of the data.

16. The data buffer processing system for a 4R4W fully-shared packet according to claim 9, wherein

the data constructing module is further configured to: according to the depth and width of the 2R1W memory, select 2m+1 SRAM2P memories having the same depth and width to construct a hardware architecture of the 2R1W memory, m being a positive integer, wherein
each SRAM2P memory has M pointer addresses, one of the plurality of SRAM2P memories is an auxiliary memory, and the rest SRAM2P memories are main memories; and
when the data is written into and/or read from the 2R1W memory, the data processing module is further configured to associate the data in the main memories and the data in the auxiliary memory according to a current pointer position of the data, and perform XOR operation on the associated data to complete the writing and reading of the data.
Patent History
Publication number: 20190332313
Type: Application
Filed: Feb 15, 2017
Publication Date: Oct 31, 2019
Applicant: CENTEC NETWORKS (SU ZHOU) CO., LTD. (Suzhou, Jiangsu)
Inventors: Jun XU (Suzhou, Jiangsu), Jie XIA (Suzhou, Jiangsu), Xiaoyang ZHENG (Suzhou, Jiangsu)
Application Number: 16/319,447
Classifications
International Classification: G06F 3/06 (20060101); G06F 1/06 (20060101);