40G/100G Network Infrastructure Migration

Posted by Vijay Gupta, 13/05/2020

Big Data, mobility and the Internet of Things (IoT) are generating an enormous amount of data, and data center operators must find ways to support ever-higher speeds. Many data centers were designed to support 1-gigabit or 10-gigabit pathways between servers, routers and switches, but today’s Ethernet roadmap extends from 25- and 40-gigabit through 100-gigabit, with 400-gigabit and even 1-terabit Ethernet looming within a few years. As a result, data center operators have an immediate need to migrate their Layer 1 infrastructure to support higher speeds, and that new infrastructure must also deliver lower latency, greater agility and higher density.

 

Recent data center trends suggest bandwidth requirements will continue growing 25 to 35 percent per year. A key impact of this sustained growth is the shift to higher switching speeds. According to a recent study, Ethernet switch revenue will continue to grow through the end of the decade, with the biggest sales forecast for 25G and 100G ports. The shift to 25G lanes is well underway as switches deploying them become more commonplace. Lane capacities are expected to keep doubling, reaching 100G by 2020 and enabling the next generation of high-speed links for fabric switches. Several factors are driving the surge in data center throughput speeds; a quick projection of this compound growth follows the list below.

  • Server densities are increasing by approximately 20 percent a year.
  • Processor capabilities are growing, with Intel recently announcing multi-core processors and graphic processing units (GPUs).
  • Virtualization density is increasing by 30 percent, which is driving up the uplink speeds to switches.
  • East-west traffic in the data center has far surpassed the volume of north-south traffic.
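
To put those growth rates in perspective, here is a minimal Python sketch (our illustration, not part of the original figures) projecting how quickly demand compounds at 25 and 35 percent per year:

```python
import math

# Rough projection of compounding bandwidth demand at the annual
# growth rates quoted above (25-35 percent). The 2x and 10x milestones
# are illustrative choices.

def years_to_multiple(rate: float, multiple: float) -> float:
    """Years until demand reaches `multiple` times today's level."""
    return math.log(multiple) / math.log(1.0 + rate)

for rate in (0.25, 0.35):
    print(f"At {rate:.0%}/year: demand doubles in "
          f"{years_to_multiple(rate, 2):.1f} years and grows 10x in "
          f"{years_to_multiple(rate, 10):.1f} years")
```

At these rates, demand doubles roughly every two to three years, which is why a cabling plant designed around one speed generation falls behind so quickly.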

Migration Challenges

There are several aspects of data center design and the evolution of cabling that present challenges to those wishing to migrate to higher speeds.

 

The pace of change is accelerating. The move from 1G to 10G Ethernet took nearly a decade, for example, while the migration from 10G to 25G and 100G will take half as long. Many legacy networks were designed with infrastructure that isn’t as scalable as it needs to be; planners could anticipate an eventual move from 1G to 10G, but in most cases cabling installed even a couple of years ago is now outdated. Data center managers must update or add fiber, and that fiber must support rapid advancement to 100G and beyond.

Standards are evolving. Many data centers use multi-mode fiber to connect servers and switches, but the state of the art in multi-mode fiber was OM3 or OM4 until a few years ago. Last year, standards bodies approved the OM5 standard, which offers four times the throughput of OM3.
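
To make the fiber grades concrete, the sketch below encodes indicative maximum reach for the common SR4 parallel-optic links. The values are typical published figures for the IEEE media types (and OM5 is assumed to match OM4 for SR4); always confirm reach against transceiver datasheets.

```python
# Indicative maximum reach (meters) for SR4 parallel-optic links by
# multimode fiber grade. Typical published figures; confirm against
# your transceiver datasheets.
REACH_M = {
    "40GBASE-SR4":  {"OM3": 100, "OM4": 150, "OM5": 150},
    "100GBASE-SR4": {"OM3": 70,  "OM4": 100, "OM5": 100},
}

def lowest_grade(link: str, distance_m: float):
    """Return the lowest fiber grade that covers the distance, if any."""
    for grade in ("OM3", "OM4", "OM5"):
        if REACH_M[link][grade] >= distance_m:
            return grade
    return None

print(lowest_grade("100GBASE-SR4", 90))   # OM4
print(lowest_grade("40GBASE-SR4", 120))   # OM4
```

OM5’s headline benefit is less about extra SR4 reach and more about supporting short-wavelength division multiplexing (SWDM), which carries multiple wavelengths per fiber pair and underlies the throughput multiple cited above.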

Data centers are densifying. In multi-tenant data centers, customers are reducing the size of their deployments by consolidating network gear into smaller footprints. As a result, they need to be able to expand their network capacity inside a smaller environment, and some older cable management systems and patch panels can’t support higher density. MTP®/MPO cables play an important role in ultra-high-density data center cabling: an MTP®/MPO multi-fiber connector is about the same size as an SC connector but can accommodate 8, 12 or 24 fibers, saving considerable circuit card and rack space. Improved MTP®/MPO cable assemblies have emerged as an optimal solution for migration to 40G and 100G.
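
A quick back-of-the-envelope calculation shows what those fiber counts mean for density. The 24-adapter-per-rack-unit panel below is a hypothetical figure for illustration; real panel capacities vary by vendor.

```python
# Fibers terminated per rack unit for a hypothetical 1U panel with 24
# adapter positions, comparing a duplex connector against MPO.
POSITIONS_PER_U = 24  # hypothetical panel capacity

for connector, fibers in [("SC/LC duplex", 2), ("MPO-12", 12), ("MPO-24", 24)]:
    print(f"{connector:12s}: {POSITIONS_PER_U * fibers:3d} fibers per rack unit")
# SC/LC duplex: 48, MPO-12: 288, MPO-24: 576
```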

Migration is costly and disruptive. Ripping and replacing cabling is disruptive enough, but when the data center also needs higher-density cable management systems and patch panels, it can be a real nightmare. In large enterprise data centers where there is often more space, migration can take place in sections to reduce disruption, but this is not an option in multi-tenant data centers.

 

Planning for Migration

The most important strategy for high-speed migration is to plan for the long term. Many data centers last upgraded their Layer 1 infrastructure only to support the next generation of switches, routers and servers, but because the pace of change is accelerating, it’s best to plan further ahead. Choose a point in the future (say, 400G), assume the data center will require more fiber strands than it uses today, and buy the highest grade of multi-mode or single-mode fiber available to support future migration without ripping and replacing.
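
As a sketch of what “plan for 400G” means in strand counts: parallel-optic media types consume more fibers per link as speeds rise (8 for the SR4 types, 20 for 100GBASE-SR10, 32 for 400GBASE-SR16). The 48-link requirement below is an assumed example.

```python
# Hypothetical strand-count planning. Fibers per link follow the
# published parallel-optic media types; the 48-link requirement is an
# assumed example.
FIBERS_PER_LINK = {
    "40GBASE-SR4":   8,
    "100GBASE-SR10": 20,
    "400GBASE-SR16": 32,
}
LINKS = 48

for media, fibers in FIBERS_PER_LINK.items():
    print(f"{media:13s}: {LINKS * fibers:5d} strands for {LINKS} links")
```

A trunk sized comfortably for today’s 8-fiber channels can thus be exhausted several times over by a later generation, which is the argument for over-provisioning strands up front.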

In addition, data center architects should adopt low-latency designs. Low latency is important in financial trading applications today, but it will increasingly become a requirement for IoT applications such as connected cars. Data center cabling and connectors using ultra-low-loss components will offer the most flexibility in achieving low latency. Architects should also consider single-mode fiber alongside multi-mode fiber: single-mode delivers the highest throughput and reach, which matters in larger data centers, while multi-mode is less expensive and easier to deploy.
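
Loss budgets are where ultra-low-loss components earn their keep. The sketch below checks a 100 m multimode channel against a typical 100GBASE-SR4 insertion-loss budget; the figures used (about 1.9 dB total budget, roughly 3.0 dB/km attenuation at 850 nm, around 0.35 dB per ultra-low-loss MPO mated pair) are common planning values, so substitute your components’ datasheet numbers.

```python
# Channel insertion-loss check using typical planning values (see the
# lead-in); replace all figures with datasheet numbers.
BUDGET_DB = 1.9          # ~100GBASE-SR4 channel budget
FIBER_DB_PER_KM = 3.0    # multimode attenuation at 850 nm
PAIR_LOSS_DB = 0.35      # per ultra-low-loss MPO mated pair

def channel_loss(length_m: float, mated_pairs: int) -> float:
    return length_m / 1000 * FIBER_DB_PER_KM + mated_pairs * PAIR_LOSS_DB

for pairs in range(1, 6):
    loss = channel_loss(100, pairs)
    status = "OK" if loss <= BUDGET_DB else "over budget"
    print(f"100 m, {pairs} mated pairs: {loss:.2f} dB ({status})")
```

With standard-loss connectors at roughly 0.75 dB per mated pair, the same budget supports only two connection points, which sharply limits how many patch locations a structured design can include.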

Finally, choose the right Layer 1 infrastructure solution provider. The largest providers have global operations, so they can deliver effective solutions throughout the world. These providers have teams of field application engineers that will come out to the data center and make appropriate recommendations about which products to install for long-term viability, and some offer guarantees that their infrastructure solutions will support any application.

JTOPTICS® Fiber Cabling Solution simplifies the delivery of network services by providing reliable infrastructure components assembled and tested in a factory-controlled environment. The JTOPTICS® end-to-end cabling system is an ideal solution for data centers, especially when time for traditional cable installation and termination is limited, offering quick plug-in deployment for trouble-free network performance.

 

Preparing for 40G/100G Migration

Most large corporations still run 10G networks, but demand for higher speeds keeps growing, driven by virtualization, I/O convergence, network storage and data centre network aggregation. With 40G and 100G equipment now on the market, migration from 10G to 40G/100G is unavoidable.

 

IEEE and TIA Standards

If you are planning to migrate to a 40G/100G network, you should know more about high-speed Ethernet. Let’s start with standards, since standards are central to structured cabling systems. The 40G and 100G standards are distinctly different from earlier generations in everything from how information is transmitted to the equipment required.

TIA defines the parameters for structured cabling systems in data centres, covering everything from design criteria to layout, spacing, tiered reliability, cabling infrastructure and environmental considerations. The standard’s recommendation is to deploy the highest-capacity media currently available, sized to the infrastructure’s lifespan.

 

40G/100G Using MPO/MTP Interface

1G and 10G networks use GBIC (gigabit interface converter) and SFP/SFP+ (small form-factor pluggable) transceivers; if a device takes SFP+ transceivers, it is meant for a 10G network. At higher speeds, the fibre connectivity in active equipment is simplified and condensed. The transceiver form factors for 40G and 100G are CFP, QSFP (quad small form-factor pluggable) and CXP. MPO/MTP is the designated interface for multimode 40G/100G and is backward compatible with 1G/10G applications. Its smaller, high-density form factor suits higher-speed Ethernet equipment.
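
For orientation, here is a small table (as Python data) of the usual pairings of form factor, speed, lane structure and cabling interface; these are typical configurations rather than an exhaustive list.

```python
# Typical transceiver form factors, lane structures and cabling
# interfaces for high-speed Ethernet (usual pairings; variants exist).
TRANSCEIVERS = {
    "SFP+":  ("10G",  "1 x 10G",  "duplex LC"),
    "QSFP+": ("40G",  "4 x 10G",  "12-fiber MPO"),
    "CFP":   ("100G", "10 x 10G", "24-fiber MPO"),
    "CXP":   ("100G", "12 x 10G", "24-fiber MPO"),
}

for name, (speed, lanes, interface) in TRANSCEIVERS.items():
    print(f"{name:5s} {speed:4s} {lanes:8s} {interface}")
```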

40G and 100G Ethernet use parallel optics: data is transmitted and received simultaneously on an MTP interface, with each strand of the array cable carrying a 10G simplex stream. For example, 40GBASE-SR4 runs four 10G lanes in each direction over eight fibers of a 12-fiber MPO cable.
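
A minimal sketch of the 40GBASE-SR4 fiber map on a 12-fiber MPO makes the parallel-optic layout visible; note that the exact assignment of logical receive lanes to positions depends on the polarity method, so the Rx ordering here is indicative.

```python
# 40GBASE-SR4 on a 12-fiber MPO: positions 1-4 transmit, 9-12 receive,
# and the middle four fibers are unused. Rx lane order shown is
# indicative; it depends on the polarity scheme.
MPO12_SR4 = {
    1: "Tx0", 2: "Tx1", 3: "Tx2", 4: "Tx3",
    5: None,  6: None,  7: None,  8: None,
    9: "Rx3", 10: "Rx2", 11: "Rx1", 12: "Rx0",
}

used = sum(1 for lane in MPO12_SR4.values() if lane is not None)
print(f"{used} of 12 fibers lit -> {used / 12:.0%} utilization")  # 67%
```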

With this background on high-speed Ethernet in place, we can discuss the structured cabling options for migrating to 40G and 100G networks in layman’s terms.

 

12- or 24-Fiber Cabling Infrastructure

This system comprises configurations for migrating 10G networks to 40G/100G via 12- or 24-fiber MTP cabling. So, what is the difference between the two methods, and which is better? We can compare them in terms of migration, density in connectivity and congestion.

 

Migration

Migration requires harnesses, modules, adapter plates and trunks. With a legacy 12-fiber configuration, a 40G channel needs array harnesses and a second trunk to achieve full fibre utilization, and 100G needs still more components on top of the 12-fiber base. With 24-fiber trunks, one cable can support channels from 1G up to 100G, which makes network upgrades easier. New trunks are typically needed when equipment is upgraded, so reducing the number of changes required helps preserve the integrity and security of the cabling plant.
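
The fibre-utilization argument reduces to simple arithmetic: an SR4-style channel lights 8 fibers, so a 12-fiber trunk strands a third of its capacity while a 24-fiber trunk divides evenly into three full channels, as the sketch below shows.

```python
# Fiber utilization of 8-fiber (SR4-style) channels on 12- vs 24-fiber
# MPO trunks.
FIBERS_PER_CHANNEL = 8  # e.g. 40GBASE-SR4 / 100GBASE-SR4

for trunk in (12, 24):
    channels = trunk // FIBERS_PER_CHANNEL
    used = channels * FIBERS_PER_CHANNEL
    print(f"{trunk}-fiber trunk: {channels} channel(s), "
          f"{used}/{trunk} fibers used ({used / trunk:.0%})")
# 12-fiber: 1 channel, 8/12 (67%); 24-fiber: 3 channels, 24/24 (100%)
```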

 

Density in Connectivity

High-density connectivity frees up rack space for active equipment and reduces the floor space required. For this reason, 24-fiber cabling is better: if the equipment is configured for 24-fiber channel/lane assignments, the same number of ports as a 12-fiber design can carry more connections.

 

Congestion in Network

Packing more connectivity into the same footprint can crowd a cabinet or rack, so fewer trunks mean less congestion in the data centre. Deploying 24-fiber MTP trunks for cable runs halves the number of cables required compared with 12-fiber, which translates into a lighter load, easier management with fewer fibres to handle, and lower cooling costs because the reduced cable bulk obstructs airflow less. For these reasons, 24-fiber MTP trunks are ideal.
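
To put numbers on the congestion point, the sketch below counts trunk cables for a hypothetical 288-fiber requirement:

```python
import math

# Trunk cables needed for a hypothetical 288-fiber requirement,
# comparing 12- and 24-fiber MPO trunks.
FIBERS_NEEDED = 288  # assumed example

for trunk in (12, 24):
    print(f"{trunk}-fiber trunks: {math.ceil(FIBERS_NEEDED / trunk)} cables")
# 12-fiber: 24 cables; 24-fiber: 12 cables
```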

 
