The telecom industry is eagerly awaiting the benefits that 400G capacity will bring to existing and future fibre network deployments. Nearly every business is leveraging the latest digital offerings to remain competitive in its market, which exponentially increases the amount of data transported across the network. 400G answers these growing data demands, at least for now, but network backbones will face an initial struggle to support these initiatives and fulfil the promise of higher-capacity transport.
What is 400G?
400G is the latest standard for high-speed Ethernet client interfaces. Defined in IEEE 802.3bs, 400G was officially approved in December 2017 and is part of a broader family of related standards that includes 200G, next-generation 100G, and 50G Ethernet.
400G has driven the rapid development and adoption of new pluggable optical modules and switches. Sometimes referred to as 400GE or 400G Ethernet, the new standard includes Forward Error Correction (FEC) to improve error performance.
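To illustrate what FEC-based qualification implies in practice, the sketch below checks a measured pre-FEC bit error rate against the widely quoted correction threshold of the RS(544,514) "KP4" code used by 400GbE. The 2.4×10⁻⁴ threshold figure is an industry rule of thumb assumed for this example, not a value taken from the text above.

```python
# Minimal sketch: checking a measured pre-FEC bit error rate against
# the widely quoted threshold for the RS(544,514) "KP4" FEC used by
# 400GbE. The 2.4e-4 threshold is an assumed rule of thumb, not a
# value taken from this article.

import math

KP4_PRE_FEC_BER_THRESHOLD = 2.4e-4  # assumed rule-of-thumb limit

def ber_margin_decades(measured_ber: float,
                       threshold: float = KP4_PRE_FEC_BER_THRESHOLD) -> float:
    """Pre-FEC BER margin in decades (orders of magnitude).

    Positive means the measured BER sits below the FEC threshold,
    so the FEC is expected to deliver an effectively error-free
    post-FEC link.
    """
    return math.log10(threshold / measured_ber)

for ber in (1e-5, 1e-4, 5e-4):
    margin = ber_margin_decades(ber)
    print(f"pre-FEC BER {ber:.0e}: margin {margin:+.2f} decades "
          f"-> {'PASS' if margin > 0 else 'FAIL'}")
```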
For Data Centre operators and Cloud providers, a significant amount of traffic remains purely within the facility as real and virtual servers, n-tier applications, clusters, application management, security and other applications operate. Even a data centre with little traffic flowing in or out may have tremendous needs for raw bandwidth capacity and bandwidth aggregation. Thus, in our experience, nearly every network owner or operator building a new facility, or modernizing an existing one, is looking at 400GbE. It simply makes little sense to choose 100GbE unless there truly are issues with cost, or with equipment testing, certification and availability on the timeline needed for the project. Bear in mind that devices supporting 400GbE also interoperate with lower-capacity devices (200GbE, 100GbE, 50GbE and even 25GbE), which makes 400GbE ideal for accommodating incremental modernization projects.
400G means more than just new Ethernet ports and modulation advancements. The paradigm shift necessitates changes and adjustments throughout the networking ecosystem, providing flexibility and scalability of bandwidth deployment in new and unique ways.
Challenges of 400G Transceiver Test
Higher speeds and the use of PAM-4 modulation bring impressive improvements in throughput, but they also create some of the inherent challenges of 400G testing. PAM-4 introduces added complexity at the physical layer: because four amplitude levels share the same signal swing, the signal-to-noise margin shrinks and links now operate with a constant background of errors that the FEC must correct. Simply quantifying errors, or testing against a "zero errors" criterion, no longer suffices; as shown in the sketch below, the modulation itself explains why.
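Here is a minimal sketch of the PAM-4 symbol mapping: two bits per symbol across four amplitude levels. The Gray-coded level assignment is one common convention, chosen here purely for illustration.

```python
# Minimal sketch of PAM-4 symbol mapping: two bits per symbol, four
# amplitude levels. The Gray-coded level assignment below is one
# common convention, used here for illustration.

GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def bits_to_pam4(bits):
    """Map an even-length bit sequence to PAM-4 levels."""
    assert len(bits) % 2 == 0, "PAM-4 consumes bits two at a time"
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

# Example: 8 bits become 4 symbols, halving the symbol rate needed
# for the same bit rate compared with NRZ (PAM-2).
print(bits_to_pam4([0, 0, 0, 1, 1, 1, 1, 0]))  # [-3, -1, 1, 3]
```

With Gray coding, adjacent levels differ by a single bit, so a misread level usually produces only one bit error. Even so, the three stacked eyes each get roughly a third of the amplitude an NRZ eye would have, which is why a constant background error rate must be accepted and managed by FEC.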
Furthermore, at the physical layer, a 400G optical module presents many high-speed interfaces: electrical inputs, electrical outputs, optical inputs and optical outputs, alongside power and low-speed management interfaces. The performance of each of these interfaces must comply with the 400G standards. Yet 400G transceivers are similar in size to existing 100G transceivers, so integrating all of these interfaces demands more sophisticated manufacturing technology, together with corresponding performance tests to ensure the quality of the modules.
400G Transceiver Test Components
ER measurement
Extinction ratio (ER), when used to describe the performance of an optical transmitter in digital communications, is the ratio of the optical power used to transmit a logic one to the power used to transmit a logic zero. Expressed logarithmically, it compares the high-level and low-level output powers of the laser after the electrical signal has been modulated onto the optical carrier. The ER test shows whether a laser is operating at its best bias point and within its optimal modulation-efficiency range. Both the ER and the average power can be measured by mainstream optical oscilloscopes.
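For concreteness, the sketch below computes the two quantities just described, using the standard definitions ER(dB) = 10·log₁₀(P_high/P_low) and average power as the mean of the two levels. The power values in the example are illustrative, not measured data.

```python
# Minimal sketch of the extinction-ratio and average-power
# calculations. Example power values are illustrative only.

import math

def extinction_ratio_db(p_high_mw: float, p_low_mw: float) -> float:
    """Extinction ratio in dB: 10 * log10(P_high / P_low)."""
    return 10.0 * math.log10(p_high_mw / p_low_mw)

def average_power_dbm(p_high_mw: float, p_low_mw: float) -> float:
    """Average optical power in dBm, assuming equally likely levels."""
    return 10.0 * math.log10((p_high_mw + p_low_mw) / 2.0)

# Example: P_high = 1.0 mW, P_low = 0.2 mW
print(f"ER   = {extinction_ratio_db(1.0, 0.2):.2f} dB")   # ~6.99 dB
print(f"Pavg = {average_power_dbm(1.0, 0.2):.2f} dBm")    # ~-2.22 dBm
```

For PAM-4 signals, the outer extinction ratio (top level to bottom level) is typically what gets quoted, but the same calculation applies.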
Eye Diagram Test
An eye diagram is an oscilloscope display in which a digital signal from a receiver is repetitively sampled and applied to the vertical input, while the data rate is used to trigger the horizontal sweep.
By using an oscilloscope to create an eye diagram, engineers can quickly evaluate system performance and gain insight into the nature of channel imperfections that can lead to errors when a receiver tries to interpret the value of a bit.
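As a sketch of how the display is formed, the snippet below builds a software eye diagram by slicing a simulated noisy NRZ waveform into two-unit-interval windows and overlaying them, just as a sampling scope triggered off the data rate would. All signal parameters (filter coefficient, noise level, samples per UI) are illustrative assumptions.

```python
# Minimal sketch: build an eye diagram by overlaying 2-UI slices of a
# simulated noisy NRZ waveform. All parameters are illustrative.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
samples_per_ui = 64
n_bits = 400

bits = rng.integers(0, 2, n_bits)
# Ideal NRZ waveform, shaped with a simple one-pole low-pass filter
ideal = np.repeat(2 * bits - 1, samples_per_ui).astype(float)
alpha = 0.15  # assumed filter coefficient; controls edge speed
wave = np.empty_like(ideal)
acc = 0.0
for i, v in enumerate(ideal):
    acc += alpha * (v - acc)
    wave[i] = acc
wave += rng.normal(0.0, 0.05, wave.shape)  # additive channel noise

# Overlay 2-UI windows to form the eye
window = 2 * samples_per_ui
t = np.arange(window) / samples_per_ui  # time axis in UI
for start in range(0, len(wave) - window, samples_per_ui):
    plt.plot(t, wave[start:start + window], color="tab:blue", alpha=0.05)
plt.xlabel("Time (UI)")
plt.ylabel("Amplitude")
plt.title("Simulated NRZ eye diagram")
plt.show()
```

A closing eye (reduced vertical opening or horizontal width) immediately reveals noise, inter-symbol interference, or jitter before any bit errors are counted.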
Bit Error Rate Test
Bit Error Rate Test (BERT) is a testing method for digital communication circuits that uses predetermined stress patterns consisting of a sequence of logical ones and zeros generated by a test pattern generator. A BERT typically consists of a test pattern generator and a receiver that can be set to the same pattern. BERTs can be used in pairs, with one at each end of a transmission link, or singly at one end with a loopback at the remote end.
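A minimal sketch of the BERT principle follows, assuming a software PRBS7 generator (polynomial x⁷ + x⁶ + 1, a standard short test pattern) and a toy channel that flips bits at random. Real BERT hardware runs at line rate, but the compare-and-count logic is the same idea.

```python
# Minimal BERT sketch: generate a known PRBS7 pattern, "transmit" it
# through a toy channel that flips a few bits, then compare against
# the same locally generated pattern to count errors.

import random

def prbs7(n_bits: int, seed: int = 0x7F):
    """Yield n_bits of a PRBS7 sequence from a 7-bit LFSR
    (taps at bits 7 and 6, i.e. polynomial x^7 + x^6 + 1)."""
    state = seed & 0x7F
    for _ in range(n_bits):
        new_bit = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | new_bit) & 0x7F
        yield new_bit

n = 100_000
tx = list(prbs7(n))
# Toy channel: flip each bit with probability 1e-3 (illustrative only)
rx = [b ^ (random.random() < 1e-3) for b in tx]

errors = sum(t != r for t, r in zip(tx, rx))
print(f"bit errors: {errors}, BER = {errors / n:.2e}")
```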
Jitter Test
Proper testing of transceivers requires the ability not only to measure generated jitter but also to inject in-band and out-of-band jitter for an appropriate receiver tolerance test. Jitter tests mainly target the output jitter of transmitters and the jitter tolerance of receivers. Jitter comprises random jitter and deterministic jitter; because deterministic jitter is predictable, unlike random jitter, transmitters and receivers can be designed to eliminate it.
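One common way to combine the two jitter components is the dual-Dirac model, sketched below: total jitter at a target BER is the deterministic jitter plus a BER-dependent multiple of the random-jitter RMS. The formula TJ(BER) = DJ + 2·Q(BER)·RJ_rms and the Q ≈ 7.03 value at BER = 10⁻¹² are standard reference figures assumed here, not taken from this article.

```python
# Minimal sketch of the dual-Dirac jitter model:
#   TJ(BER) = DJ + 2 * Q(BER) * RJ_rms
# with Q ~ 7.03 at BER = 1e-12 (a standard reference value, assumed
# here rather than taken from this article).

from statistics import NormalDist

def q_factor(ber: float) -> float:
    """Q scale factor for a Gaussian tail at the target BER."""
    return -NormalDist().inv_cdf(ber)

def total_jitter_ps(dj_ps: float, rj_rms_ps: float,
                    ber: float = 1e-12) -> float:
    """Dual-Dirac total jitter estimate in picoseconds."""
    return dj_ps + 2.0 * q_factor(ber) * rj_rms_ps

# Example: 2.0 ps deterministic jitter, 0.3 ps RMS random jitter
print(f"TJ @ 1e-12 = {total_jitter_ps(2.0, 0.3):.2f} ps")  # ~6.22 ps
```

Because random jitter is unbounded, it dominates the total-jitter budget at low target BERs, which is why transmitter jitter generation and receiver jitter tolerance are both specified and tested.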