Disclosure of Invention
Embodiments of the invention aim to provide a load balancing system for multi-path E1 networking, which solves the problem of IPOE data packet congestion at the core network junction in a service concurrency scenario by applying a load balancing design to the bridge module of the core network's IPOE interface and adopting a non-blocking state machine based on fast jumping.
In order to achieve the above object, an embodiment of the present invention provides a load balancing system for a multi-path E1 networking, including a lower ethernet networking, an FPGA chip, an ethernet chip, and an upper ethernet networking, wherein a downlink port of the FPGA chip is connected to the lower ethernet networking, and an uplink port of the FPGA chip is connected to the upper ethernet networking through the ethernet chip; the FPGA chip is configured with a bus bridge module; wherein,
the bus bridge module is used for processing the data requests of multiple E1 links by adopting a preset processing strategy, the processing strategy being to jump to processing the data receiving request of the next E1 link after the data receiving request of one E1 link is processed, until the data receiving request of each E1 link has been traversed.
Preferably, the FPGA chip is further configured with an encoding and decoding module, an analysis and conversion module, and a buffer module; wherein,
the coding and decoding module is used for receiving the differential signal sent by the lower layer Ethernet network, decoding the differential signal into a binary code stream and sending the binary code stream to the analysis and conversion module;
the analysis conversion module is used for carrying out serial-parallel conversion on the received binary code stream, analyzing the binary code stream into data of an Avalon-ST bus protocol and sending the data to the buffer module;
and the buffer module is used for reading the data of the Avalon-ST bus protocol and carrying out aggregation and bridging in data packet format.
Preferably, the system is further configured as follows:
the buffer module is further configured to receive data of the Avalon-ST bus protocol sent by the upper ethernet group network, and send the data to the analysis conversion module;
the analysis conversion module is also used for recombining the received data of the Avalon-ST bus protocol according to an HDLC protocol, converting the data into a serial binary code stream through serial conversion and sending the serial binary code stream to the coding and decoding module;
and the coding and decoding module is also used for coding the received serial binary code stream and sending the coded serial binary code stream to the lower-layer Ethernet network.
Preferably, the system further comprises Buffer chips, and each E1 link in the lower-layer Ethernet network is connected to the FPGA chip through one Buffer chip.
Preferably, the codec rule adopted by the codec module is an HDB3 codec rule.
Preferably, the parsing protocol adopted by the parsing conversion module is an HDLC protocol.
Preferably, the buffering mode of the buffering module is whole packet buffering, and when it is confirmed that data sent by the downlink E1 link is buffered as a complete data packet, the data packet is read at a preset reading rate.
Preferably, the buffer rate of the buffer module is 2Mbps.
Preferably, the preset reading rate is 50Mbps.
Preferably, the latency of the bus bridge module for processing the data request of each path of the E1 link is 1 clock cycle.
Compared with the prior art, the load balancing system for multi-path E1 networking provided by the embodiment of the invention applies the fast-jump strategy to the design of the Avalon-ST bus bridge in the E1 networking and applies the fast-jump design to the request processing state machine, so as to achieve load balancing when multiple services are concurrent and solve the problem of concurrent blocking of multiple services; the redesigned state machine requires only a very small additional amount of processing time and FPGA logic resources.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a schematic structural diagram of a load balancing system for a multi-path E1 networking according to embodiment 1 of the present invention is shown, where the system includes a lower ethernet networking, an FPGA chip, an ethernet chip, and an upper ethernet networking, where a downlink port of the FPGA chip is connected to the lower ethernet networking, and an uplink port of the FPGA chip is connected to the upper ethernet networking through the ethernet chip; the FPGA chip is configured with a bus bridge module; wherein,
the bus bridge module is used for processing the data requests of multiple E1 links by adopting a preset processing strategy, the processing strategy being to jump to processing the data receiving request of the next E1 link after the data receiving request of one E1 link is processed, until the data receiving request of each E1 link has been traversed.
Specifically, the load balancing system of the multi-path E1 networking includes a lower ethernet networking, an FPGA (Field Programmable Gate Array) chip, an ethernet chip, and an upper ethernet networking, wherein a downlink port of the FPGA chip is connected to the lower ethernet networking, and an uplink port of the FPGA chip is connected to the upper ethernet networking through the ethernet chip. Generally, both the lower layer ethernet network and the upper layer ethernet network are composed of multiple E1 links. The scheme of the invention can be realized after the system is powered on.
Wherein the FPGA chip is configured with the bus bridge module. The bus bridge module is mainly used for realizing a load balancing function. The realization process is as follows:
Specifically, the bus bridge module processes the data requests of multiple E1 links by adopting a preset processing strategy: after the data receiving request of one E1 link is processed, the module jumps to the next E1 link, until the data receiving request of each E1 link has been traversed. When the bus bridge module operates at a clock rate of 50 MHz, up to 25 channels of 2M link data can be processed while ensuring that no blocking occurs. Therefore, after the module processes the data receiving request of one E1 link, it directly uses the 50 MHz clock to jump to the next channel and judge the state of that channel's E1 receive request; even if the next E1 link is not requesting to receive data, this strategy traverses and judges the receive request of every channel within one receiving pass, thereby preventing the other channels from being blocked because one E1 channel carries a high load. This strategy may be called a non-blocking state machine based on fast jumping. Referring to fig. 2, a schematic data processing diagram of the non-blocking state machine based on fast jumping according to the embodiment of the present invention is shown. To further highlight the advantages of the present invention, this embodiment also describes the prior-art handling of multiple concurrent data streams, which generally adopts a priority-based bridging policy: when the bridge is occupied with a high-load request, other requests are easily set aside and cannot be processed. Fig. 3 is a schematic data processing diagram of the priority-based bridging policy.
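The fast-jump traversal described above can be sketched as a behavioral software model (not RTL); the channel count, queue model, and all names below are illustrative assumptions rather than details of the embodiment:

```python
from collections import deque

class FastJumpBridge:
    """Round-robin bridge: after servicing (or skipping) one E1 channel's
    receive request, it always jumps to the next channel on the next clock
    cycle, so one heavily loaded channel cannot starve the others."""

    def __init__(self, num_links=16):
        self.num_links = num_links
        self.queues = [deque() for _ in range(num_links)]  # pending packets per E1 link
        self.current = 0      # channel the state machine points at
        self.serviced = []    # (cycle, link, packet) log

    def push(self, link, packet):
        self.queues[link].append(packet)

    def tick(self, cycle):
        """One 50 MHz clock cycle: service the current link's request if
        present, then jump to the next link either way."""
        q = self.queues[self.current]
        if q:
            self.serviced.append((cycle, self.current, q.popleft()))
        self.current = (self.current + 1) % self.num_links

bridge = FastJumpBridge(num_links=4)
for i in range(8):                 # link 0 is heavily loaded
    bridge.push(0, f"pkt0-{i}")
bridge.push(2, "pkt2-0")           # link 2 has a single packet
for cyc in range(8):
    bridge.tick(cyc)
# Link 2's packet is serviced on cycle 2 instead of waiting behind all of
# link 0's backlog, as a priority-based bridge would force it to.
```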
Although the redesigned state machine of the invention requires extra processing time, the time cost is low and can be ignored. Referring to fig. 4, a schematic diagram of the time spent by the fast-jump state machine when processing the data requests of each E1 link according to the embodiment of the present invention is shown. As can be seen from fig. 4, even if only one E1 link is receiving at full capacity, the additional state machine hop time is n-1 channels' worth of 50 MHz clock cycles. Taking a 2M system as an example, one FPGA chip processes receive requests from 16 E1 links, and the intermediate jump time of the state machine is 20 ns × 15 = 300 ns, equivalent to adding only about 120 bps of bandwidth consumption on one fully loaded E1 link, which is almost negligible relative to the 2 Mbps bandwidth of an E1 link. Therefore, the bus bridge module can judge the receive request of every E1 link at little extra processing time cost.
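The arithmetic above can be checked with a short script. The 1250-byte packet duration is an assumption introduced here so that the numbers reproduce the quoted 120 bps figure; it is not stated in the source:

```python
# Reproduce the timing arithmetic: one 50 MHz cycle, 15 skip cycles per pass,
# and the resulting bandwidth overhead on one fully loaded 2 Mbps E1 link.
CLOCK_HZ = 50e6
period_ns = 1e9 / CLOCK_HZ               # one 50 MHz cycle -> 20 ns
n_links = 16
hop_time_ns = (n_links - 1) * period_ns  # jumps past the other 15 links -> 300 ns

# Assumption: a 1250-byte packet at 2 Mbps lasts 5 ms; 300 ns of extra hop
# time per packet then corresponds to the 120 bps overhead quoted above.
packet_duration_s = 1250 * 8 / 2e6       # -> 0.005 s
overhead_bps = 2e6 * (hop_time_ns * 1e-9) / packet_duration_s  # -> 120.0
```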
Embodiment 1 of the present invention provides a load balancing system for multi-path E1 networking, which avoids the problem of IPOE data packet congestion at the core network junction in a service concurrency scenario by applying a load balancing design to the bridge module of the core network's IPOE interface and using a non-blocking state machine based on fast jumping.
As an improvement of the above scheme, the FPGA chip is further configured with an encoding and decoding module, an analysis and conversion module, and a buffer module; wherein,
the coding and decoding module is used for receiving the differential signal sent by the lower layer Ethernet network, decoding the differential signal into a binary code stream and sending the binary code stream to the analysis and conversion module;
the analysis conversion module is used for carrying out serial-parallel conversion on the received binary code stream, analyzing the binary code stream into data of an Avalon-ST bus protocol and sending the data to the buffer module;
and the buffer module is used for reading the data of the Avalon-ST bus protocol and carrying out aggregation and bridging in a data packet format.
Specifically, referring to fig. 5, a schematic structural diagram of an FPGA chip according to the embodiment of the present invention is shown. As can be seen from fig. 5, the FPGA chip is further configured with the encoding/decoding module, the parsing/converting module, and the buffering module; wherein,
and the coding and decoding module is used for receiving the differential signal sent by the lower Ethernet network, decoding the differential signal into a binary code stream and sending the binary code stream to the analysis and conversion module. The differential signal refers to an HDB3 differential signal, and an HDB3 original differential signal of the E1 link is connected to an FPGA pin through the Buffer chip after being subjected to positive and negative decision shaping. And the coding and decoding module is used for decoding the differential signal and then obtaining the clock of the opposite terminal equipment.
And the analysis conversion module is used for carrying out serial-parallel conversion on the received binary code stream, analyzing the binary code stream into data of an Avalon-ST bus protocol, and sending the data to the buffer module. That is to say, when the parsing and converting module receives the binary code stream sent by the encoding and decoding module, the binary code stream is first converted in a serial-parallel manner and then parsed into data of the Avalon-ST bus protocol.
And the buffer module is used for reading data of the Avalon-ST bus protocol and carrying out aggregation and bridging in data packet format. Generally, an E1 link works at a rate of 2M; before the data enters the FPGA chip for processing, data packets arriving at the 2Mbps rate need to be converted to the 50Mbps processing rate. The buffer module performs whole-packet buffering on the data packets received from the E1 link, confirms that one complete packet has been buffered at the 2Mbps rate, and then reads out the data packet at the 50Mbps rate for aggregation and bridging.
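The whole-packet (store-and-forward) buffering described above can be sketched as follows; the byte-level interface and all names are illustrative assumptions, and the 2 Mbps write / 50 Mbps read rates appear only in comments since a software model has no real-time clocks:

```python
class WholePacketBuffer:
    """Store-and-forward buffer: a packet becomes readable only after its
    final byte (marked end-of-packet) has been buffered, after which it is
    drained at the faster system rate."""

    def __init__(self):
        self._partial = bytearray()  # bytes of the packet still being received
        self._complete = []          # fully buffered packets, ready to read

    def write_byte(self, b, end_of_packet=False):
        """Called at the 2 Mbps E1 link rate."""
        self._partial.append(b)
        if end_of_packet:            # packet is complete: publish it
            self._complete.append(bytes(self._partial))
            self._partial = bytearray()

    def read_packet(self):
        """Called at the 50 Mbps bridge rate; only whole packets are visible."""
        return self._complete.pop(0) if self._complete else None

buf = WholePacketBuffer()
payload = b"\x01\x02\x03\x04"
buf.write_byte(payload[0])
assert buf.read_packet() is None     # packet not complete yet: nothing readable
for i in range(1, len(payload)):
    buf.write_byte(payload[i], end_of_packet=(i == len(payload) - 1))
pkt = buf.read_packet()              # now the whole packet is readable at once
```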
The above process describes how the FPGA chip processes data requests received from the lower-layer Ethernet network. Fig. 6 is a schematic diagram of the corresponding processing flows when the FPGA chip receives data requests from the upper-layer and lower-layer Ethernet networks according to the embodiment of the present invention: the upper half is the processing flow for a data request from the lower-layer Ethernet network, and the lower half is the processing flow for a data request from the upper-layer Ethernet network.
In the embodiment of the invention, the original HDB3 differential signals of the E1 link are decoded by using the coding and decoding module to obtain the binary code stream, the binary code stream is subjected to serial-parallel conversion by using the analysis and conversion module and then is analyzed into the data of the Avalon-ST bus protocol, then the data of the Avalon-ST bus protocol is read by using the buffer module and is collected and bridged in the format of a data packet, and the data request of the lower-layer Ethernet networking is processed.
As an improvement of the above scheme, the system is further configured as follows:
the buffer module is further configured to receive data of the Avalon-ST bus protocol sent by the upper ethernet group network, and send the data to the analysis conversion module;
the analysis conversion module is also used for recombining the received data of the Avalon-ST bus protocol according to an HDLC protocol, converting the data into a serial binary code stream through serial conversion and sending the serial binary code stream to the coding and decoding module;
and the coding and decoding module is also used for coding the received serial binary code stream and sending the coded serial binary code stream to the lower-layer Ethernet network.
Specifically, referring to the lower half flow of fig. 6, when the FPGA chip receives a data request of the upper ethernet network, reverse transmission needs to be performed, and the corresponding processing flow is as follows:
the buffer module is further configured to receive data of the Avalon-ST bus protocol sent by an upper ethernet network, and send the data to the analysis conversion module. Similarly, the receiving process also needs buffering of the whole packet, and after the buffering of a complete data packet is completed, the buffering module sends the data packet to the parsing and converting module through the Avalon-ST bus.
The parsing and conversion module is also used for reassembling the received data of the Avalon-ST bus protocol according to the HDLC protocol; this reassembly is also called packaging. After packaging is finished, the data is converted into a serial binary code stream through parallel-to-serial conversion and sent to the encoding and decoding module.
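For illustration, standard HDLC zero-bit insertion — the mechanism that lets the 0x7E flag delimit frames without that pattern ever appearing in the payload — can be sketched as below. The embodiment's parallel HDLC variant with translated idle codes is not specified in detail, so this shows only the standard serial behavior:

```python
FLAG = [0, 1, 1, 1, 1, 1, 1, 0]   # 0x7E frame delimiter

def stuff(bits):
    """Insert a 0 after every run of five consecutive 1s (transmit side)."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)          # stuffed bit: breaks up any six-1s run
            run = 0
    return out

def unstuff(bits):
    """Remove the 0 inserted after each run of five 1s (receive side)."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == 1 else 0
        i += 1
        if run == 5:
            i += 1                 # skip the stuffed 0
            run = 0
    return out

payload = [1, 1, 1, 1, 1, 1, 0, 1]   # six consecutive 1s would mimic a flag
frame = FLAG + stuff(payload) + FLAG
# stuff(payload) == [1, 1, 1, 1, 1, 0, 1, 0, 1]: the flag pattern cannot
# occur inside the stuffed payload, so the receiver can find frame edges.
```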
And the coding and decoding module is also used for coding the received serial binary code stream and sending the coded serial binary code stream to a lower-layer Ethernet network. Before reaching each E1 link, the coded signals are sent to the Buffer chip and converted into positive and negative levels.
The embodiment of the invention firstly utilizes the buffer module to receive the data of the Avalon-ST bus protocol sent by the upper layer Ethernet networking, then utilizes the analysis conversion module to recombine the received data of the Avalon-ST bus protocol according to the HDLC protocol, converts the data into the serial binary code stream through serial conversion, then codes the serial binary code stream through the coding and decoding module, and sends the coded serial binary code stream to the lower layer Ethernet networking so as to realize the processing of the data request of the upper layer Ethernet networking.
As an improvement of the above scheme, the system further comprises Buffer chips, and each E1 link in the lower-layer Ethernet network is connected to the FPGA chip through one Buffer chip.
Specifically, the load balancing system for the multi-path E1 networking further includes the Buffer chip, and each path of E1 link in the lower ethernet networking is connected to the FPGA chip through one Buffer chip. That is, each E1 link in the lower ethernet network is connected to a pin of the FPGA chip through the Buffer chip.
In the embodiment of the invention, the Buffer chip is additionally arranged between each path of E1 link and the FPGA chip so as to reduce the positive and negative level distortion of each path of E1 link.
As an improvement of the above scheme, the codec rule adopted by the codec module is an HDB3 codec rule.
Specifically, the encoding and decoding rule adopted by the encoding and decoding module is an HDB3 encoding and decoding rule. Namely, the coding and decoding module decodes the received differential signal into a binary code stream according to the HDB3 decoding rule, and codes the received binary code stream into a differential signal according to the HDB3 coding rule.
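A simplified behavioral sketch of the HDB3 rule (AMI coding plus 000V/B00V substitution for runs of four zeros) is given below for illustration; the actual FPGA module operates on the differential line signal and recovered clock, which this model omits, and the start-of-stream polarity convention is an assumption:

```python
def hdb3_encode(bits):
    """AMI with HDB3 substitution: each run of four zeros is replaced by
    000V (odd pulse count since last violation) or B00V (even count),
    so the line never carries more than three consecutive zeros."""
    out = []
    last = -1             # polarity of the most recent pulse (assumed start)
    pulses_since_v = 0    # nonzero pulses since the last violation
    zero_run = []
    for b in bits:
        if b == 0:
            zero_run.append(0)
            if len(zero_run) == 4:
                if pulses_since_v % 2 == 1:       # odd: 000V, V repeats last polarity
                    out += [0, 0, 0, last]
                else:                              # even: B00V, B alternates, V = B
                    last = -last
                    out += [last, 0, 0, last]
                pulses_since_v = 0
                zero_run = []
        else:
            out += zero_run                        # flush zeros shorter than 4
            zero_run = []
            last = -last                           # normal AMI alternation
            out.append(last)
            pulses_since_v += 1
    return out + zero_run

def hdb3_decode(symbols):
    """A pulse with the same polarity as the previous pulse is a violation;
    it and the three symbols before it decode back to four zeros."""
    bits = [1 if s != 0 else 0 for s in symbols]
    prev = 0
    for i, s in enumerate(symbols):
        if s != 0:
            if prev != 0 and s == prev:            # violation found
                for j in range(max(0, i - 3), i + 1):
                    bits[j] = 0
            prev = s
    return bits

data = [1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1]
line = hdb3_encode(data)   # -> [1, 0, 0, 0, 1, -1, 1, -1, 0, 0, -1, 0, 1]
```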
As an improvement of the above scheme, an analysis protocol adopted by the analysis conversion module is an HDLC protocol.
Specifically, the parsing protocol adopted by the parsing and conversion module is the HDLC protocol. Preferably, a parallel HDLC protocol is used, which translates the idle codes relative to the standard HDLC protocol, thereby ensuring that a data packet is not mistakenly decoded when data identical to the idle code appears in the original payload.
As an improvement of the above scheme, the buffering mode of the buffering module is whole packet buffering, and when it is determined that data sent by a downlink E1 link is buffered as a complete data packet, the data packet is read at a preset reading rate.
Specifically, the buffering mode of the buffering module is whole packet buffering, and when it is confirmed that data sent by a downlink E1 link is buffered as a complete data packet, the data packet is read at a preset reading rate. For example, after confirming that a complete packet is buffered at the rate of 2Mbps, the data packet is read out at the rate of 50Mbps for aggregation bridging.
As an improvement of the above scheme, the buffer rate of the buffer module is 2Mbps.
Specifically, the buffer rate of the buffer module is 2Mbps. The buffer module performs whole packet buffer on the data packet received by the E1 link, and reads the data packet after confirming buffer of a whole packet at the rate of 2Mbps, so that incomplete data or missing data of the data packet is avoided.
As an improvement of the above scheme, the preset reading rate is 50Mbps.
Specifically, the preset reading rate is 50Mbps. And after the data packet is completely cached, reading is carried out, and the reading speed is greater than the caching speed, so that the content of the data packet can be rapidly acquired, the data request is processed, and the link blockage is reduced.
As an improvement of the above scheme, the latency of the bus bridge module to process the data request of each path of E1 link is 1 clock cycle.
Specifically, the latency of the bus bridge module for processing the data request of each E1 link is 1 clock cycle. That is, the bus bridge module can judge the receive request of each E1 link at little extra processing time cost. Even if only one E1 link is receiving at full load, the additional state machine jump time is n-1 channels' worth of 50 MHz clock cycles. Taking a 2M system as an example, one FPGA chip processes receive requests from 16 E1 links, and the intermediate jump time of the state machine is 20 ns × 15 = 300 ns, equivalent to adding only about 120 bps of bandwidth consumption on one fully loaded E1 link, which is almost negligible relative to the 2 Mbps bandwidth of an E1 link.
To sum up, the load balancing system for multi-path E1 networking provided in the embodiments of the present invention applies a fast-jump strategy to the design of the Avalon-ST bus bridge in the E1 networking and applies the fast-jump design to the request processing state machine, so as to achieve load balancing when multiple services are concurrent and solve the problem of concurrent blocking of multiple services. The redesigned state machine requires only a very small additional amount of processing time and FPGA logic resources: the waiting time for processing a receive request is only 1 clock cycle, so the bandwidth consumption is almost negligible and the resource consumption of the bus bridge is not increased. The invention is compatible with various multi-path E1 networking modes and ensures that, in such networking environments, blocking caused by any one office direction does not occur.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.