
Whether you are operating an enterprise, hyperscale cloud, or multi-tenant data center, your infrastructure must address reliability, manageability, scalability, and flexibility. Without a flexible fiber optic cabling plan capable of easily accommodating common moves, adds, and changes, your network growth will be limited.

Data Hall Sacramento

Enterprise and Private Data Centers

The enterprise or private data center is the lifeblood of many businesses. With e-commerce platforms, VPN services, and other programs supporting manufacturing, marketing, or HR, your network infrastructure needs to be reliable, flexible, and manageable.

New technologies and applications also drive the need for higher speeds, greater capacity, and lower latencies. Meeting data center standards, ensuring network scalability, and addressing the security challenges of today’s technologies are key to your enterprise data center network’s success.

Whether adopting cloud or hosted services or migrating an on-premises data center to 40G, 100G, or 400G, passive optical cabling solutions give you the flexibility to accommodate current and future business needs.

A key factor when choosing the type of optical connectivity is scalability. Scalability refers both to the physical expansion of the data center for additional servers, switches, or storage devices and to the infrastructure’s ability to support a migration path to higher data rates. As technology evolves and standards define ever-higher rates, such as 40/100G Ethernet, Fibre Channel (32G and beyond), and InfiniBand (40G and beyond), the cabling infrastructure installed today must scale to meet the bandwidth demands of future applications.

Large Scale, Highly Portable, Flexible, and Scalable

Hyperscale and cloud data centers form the foundation of today’s digital world. With hundreds of thousands of individual servers connected via high-speed networks, hyperscale operators optimize their data centers to provide a low total cost of ownership, with modularity and scalability to offer on-demand IT capacity anytime, anywhere.

Multitenant Data Centers

Multitenant data centers (MTDCs), also known as colocation facilities, have varied requirements for servicing end users. With increased demand on data centers driven by the expansion of the Internet of Things (IoT) and its associated technology requirements, many infrastructures need upgrading to keep up.

The cost and resources involved in building a data center – as well as storing and managing the data – are immense. Additionally, keeping a data center fully optimized while eliminating latency, reducing downtime, and maintaining compliance with ever-evolving standards is quite challenging.

by Alycea Ohl, Corning Optical Communications

Artificial Intelligence (AI) Impacts Network Polarity

AI techniques can improve the performance of optical communication systems and networks. The use of AI-based techniques has first been studied in applications related to optical transmission, ranging from the characterization and operation of network components to performance monitoring, mitigation of nonlinearities, and quality of transmission estimation.

I’ve included below three noteworthy topics when it comes to incorporating intelligence into optical systems or networks: how to handle uncertainty, how to tackle decision-making, and how to learn.

Artificial Intelligence in Optical Communications: From Machine Learning to Deep Learning, by Javier Mata

  • In an optical network, there are non-deterministic events taking place, and the lack of full information about the environment is not a rare issue. Therefore, intelligent agents must be able to operate under uncertainty in a robust way. 
  • A second key element is the use of decision-making algorithms. 
  • The third issue of paramount importance is learning. Learning enables an agent to improve its performance on future tasks through acquired experience, as the sketch below illustrates. 
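
To make that third point concrete, here is a minimal sketch of an experience-driven quality of transmission (QoT) estimator, the kind of learning task the survey above describes. Everything in it (the features, the noise model, the 6 dB margin threshold) is a synthetic assumption for illustration, not a calibrated model:

```python
# Toy QoT estimator: learn from (synthetic) past lightpaths whether a
# candidate lightpath will meet its SNR margin. Illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
length_km = rng.uniform(10, 2000, n)          # lightpath length
num_spans = (length_km / 80).astype(int) + 1  # assumed ~80 km amplifier spans
launch_dbm = rng.uniform(-2, 4, n)            # launch power

# Assumed ground truth: long paths and off-optimum launch power hurt QoT;
# the added noise stands in for the "uncertainty" bullet above.
snr_margin = (20 - 0.008 * length_km - 0.5 * np.abs(launch_dbm - 1)
              + rng.normal(0, 1, n))
qot_ok = (snr_margin > 6).astype(int)         # hypothetical 6 dB threshold

X = np.column_stack([length_km, num_spans, launch_dbm])
X_tr, X_te, y_tr, y_te = train_test_split(X, qot_ok, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

The point is the workflow, not the numbers: the agent gets better at predicting lightpath feasibility as it accumulates operational experience.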

Faster Fiber Links for Data Centers

A new fiber-optic system can carry 800 gigabits of data per second, a big step up from top speeds of 100 or 200 gigabits in today’s data centers.

by Jeff Hecht

Fiber-optic transmission capacity has grown at Moore’s Law rates since the 1980s, helping to drive the rise of information technology. High-capacity long-haul systems have fueled network growth, most recently in the Pacific Light Cable. That system will carry a record 144,000 gigabits per second between Hong Kong and Los Angeles by sending 240 wavelengths through each of six fiber pairs when it comes online this year. 
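
Those headline numbers are easy to sanity-check. Assuming the standard 100 Gb/s per wavelength channel (my assumption; the article gives only the totals), the cable’s capacity is simply wavelengths × fiber pairs × channel rate:

```python
wavelengths_per_pair = 240
fiber_pairs = 6
gbps_per_channel = 100  # assumed standard 100G wavelength channels

print(wavelengths_per_pair * fiber_pairs * gbps_per_channel)  # 144000 Gb/s
```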

However, the bulk of data carried on those big cables is now moving between corporate data centers. Facebook and Google bought shares of the Pacific Light Cable to link their data centers in Asia and North America. Data center capacities are growing faster than 10 percent a year, stretching out among multiple buildings as the network transforms, says Helen Xenos, senior director of portfolio marketing for Ciena. Operators want cable networks connecting sites that may be tens of kilometers apart to look seamless to users.

Much data center traffic is packaged as 100-gigabit Ethernet transmitted on one of about a hundred wavelengths in an optical fiber. But operators want to squeeze more data through the cables. The number of ports installed in data centers to carry 200 gigabits or more is expected to increase more than 50 percent a year over the next three to five years, says Jimmy Yu, a vice president at the market research firm Dell’Oro Group. 

Since 1980, the number of bits per second that can be sent down an optical fiber has increased by some 10 millionfold. That’s remarkable even by the standards of late-20th-century electronics. It’s more than the jump in the number of transistors on chips during that same period, as described by Moore’s Law. There ought to be a law here, too. Call it Keck’s Law, in honor of Donald Keck. He’s the coinventor of low-loss optical fiber and has tracked the impressive growth in its capacity. Maybe giving the trend a name of its own will focus attention on one of the world’s most unsung industrial achievements.

Moore’s Law may get all the attention. But it’s the combination of fast electronics and fiber-optic communications that has created “the magic of the network we have today,” according to Pradeep Sindhu, chief technical officer at Juniper Networks. The strongly interacting electron is ideal for speedy switches used in logic and memory. The weakly interacting photon is perfect for carrying signals over long distances. Together they have fomented the technological revolution that continues to shape and define our times.

by Jeff Hecht | IEEE Spectrum

Still trending and a HOT choice with data centers are the QSFP-DD and QSFP112. Both support up to 400 Gb/s in aggregate, over 8 lanes of 50 Gb/s and 4 lanes of 100 Gb/s electrical interfaces, respectively. The QSFP-DD800 supports up to 800 Gb/s in aggregate over 8 lanes of 100 Gb/s electrical interface. The QSFP-DD/QSFP-DD800 cage and connector designs with 8 lanes are compatible with the 4-lane QSFP28/QSFP112. The QSFP-DD800 cage and connector is an incremental design with enhanced signal integrity and thermals, backward compatible with the 8-lane QSFP-DD and 4-lane QSFP28. The QSFP112 cage and connector is an incremental design with enhanced signal integrity and thermals, backward compatible with the 4-lane QSFP28/QSFP+. The QSFP-DD800/QSFP112 supports up to 112 Gb/s (56 GBd) per-lane electrical operation based on PAM4 signaling and is expected to be compliant with IEEE 802.3ck [17] and OIF 112G-VSR [22] when those are published.
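
Each aggregate figure above is simply lane count times per-lane electrical rate. A quick sketch of the family (the 25 Gb/s per-lane rate for QSFP28 is a standard value I’ve added; the rest are quoted above):

```python
# Aggregate rate = electrical lanes x per-lane rate (Gb/s)
form_factors = {
    "QSFP28":     (4, 25),    # 4 x 25  = 100 Gb/s
    "QSFP-DD":    (8, 50),    # 8 x 50  = 400 Gb/s
    "QSFP112":    (4, 100),   # 4 x 100 = 400 Gb/s
    "QSFP-DD800": (8, 100),   # 8 x 100 = 800 Gb/s
}

for name, (lanes, rate) in form_factors.items():
    print(f"{name}: {lanes} x {rate} = {lanes * rate} Gb/s aggregate")
```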

This overlap of key markets all wanting to deploy 400GbE simultaneously means the industry cannot spend the next five years optimizing form factors as it did for 100GbE. With every system vendor now building 400GbE products based on QSFP-DD, 400GbE will be the first speed transition where the initial form factor will also be the high-volume, dense form factor that can support all reaches and media.

There were some critical lessons learned from the 100GbE journey that should be applied to 400GbE. Even though a dense form factor called CFP4 was defined and built, it lacked backward compatibility with the dense 40GbE QSFP+ module and, as a result, was ignored. There is no reason to suggest something similar won’t happen in the 400GbE module market if we don’t learn from the past.

It’s important to consider how it was possible to extend the QSFP form factor from 40GbE to 400GbE, in order to better understand what’s possible beyond 400GbE.

Mark Nowell … Cisco

Also HOT choices are the new connector assemblies: the CS and SN from Senko and the MDC from US Conec, which are 3.3 and 3.8 times smaller than the smallest connector (LC) in use today. Form factor size is not their only advantage. They also offer 4x less power loss and 4x the bandwidth (400 Gb/s) of current standards, allowing much less power per transmitter (from 3.6 W each down to ~1.3 W).
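
To put that per-transmitter drop in perspective, here is a rough back-of-the-envelope calculation; the 32-port switch is a hypothetical example, while the 3.6 W and ~1.3 W figures come from the text above:

```python
ports = 32                   # hypothetical switch port count
legacy_w, new_w = 3.6, 1.3   # per-transmitter power, from the text

print(f"legacy: {ports * legacy_w:.1f} W, "
      f"new: {ports * new_w:.1f} W, "
      f"saved: {ports * (legacy_w - new_w):.1f} W per switch")
# legacy: 115.2 W, new: 41.6 W, saved: 73.6 W per switch
```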

Also, a HOT choice is the Corning EDGE™ Rapid Connect™ Consolidator Frame

Install Trunk Cables Between Data Centers up to 70% Faster

Enable fast, easy data center interconnect (DCI) deployments with the Corning EDGE™ Rapid Connect™ Solution – one step in Corning’s journey toward a Data Center in a Day.

Designed to facilitate DCI deployments and connections between data halls, the EDGE™ Rapid Connect™ Solution utilizes trunk cables with the all-new Fast Track MTP® Connector. Developed by Corning and US Conec, the Fast Track MTP® Connector’s small profile enables pre-terminated trunks with a reduced-diameter pulling grip that can be pulled through crowded conduits.

Want to pull over 10,000 connectorized fibers through a four-inch conduit, and complete the project faster than ever? Now it’s possible with EDGE Rapid Connect Solutions – available with outdoor-only, indoor-only, and indoor-/outdoor-compatible trunks.

In outside plant environments, the smaller two-inch diameter pulling grip allows EDGE™ Rapid Connect™ Solutions to be easily pulled through existing conduits – streamlining installation and giving data center operators a new path to extreme density. The grip is waterproof, capable of handling up to 600 pounds of pulling tension, and compatible with MaxCell Innerduct. Best of all, installation is straightforward – no special processes or equipment are required.

Understanding fiber polarity and how to connect and maintain a system with absolute certainty is key to a successful installation. The complex part, however, is there is no ‘right’ way to approach fiber polarity as each manufacturer usually provides its own fiber polarity solution.
Consult your supplier as needed to maintain correct polarity.
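
To see why polarity trips people up, here is a minimal sketch of the three generic TIA-568 MPO trunk polarity types for a 12-fiber trunk. It models the generic standard mappings, not any particular vendor’s scheme (which is exactly why you should consult your supplier):

```python
# Far-end fiber position for each near-end position 1-12, per trunk type.
def type_a(p):  # straight-through: 1->1 ... 12->12
    return p

def type_b(p):  # reversed: 1->12, 2->11, ...
    return 13 - p

def type_c(p):  # pair-wise flip: 1->2, 2->1, 3->4, 4->3, ...
    return p + 1 if p % 2 else p - 1

for name, trunk in (("Type A", type_a), ("Type B", type_b), ("Type C", type_c)):
    print(name, [trunk(p) for p in range(1, 13)])
```

Because each type lands transmit fibers on different far-end positions, the duplex patch cords needed to put every transmitter onto a receiver differ per type; mixing one vendor’s trunks with another vendor’s patching scheme is how links end up dark.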

The EDGE™ Lockable Uniboot Jumper (sometimes referred to as a Uniboot Patch Cord) is the newest addition to the Corning EDGE™ Uniboot Jumper family. It maintains reversible-polarity capability without exposing internal components, and polarity can be reversed in the field with no tooling required. This state-of-the-art assembly showcases the value of the LC Uniboot connector and, with no special skills or training required, can easily be locked in the field, eliminating partial-connection risk and accidental disconnects.

“Safe Simple Polarity Management” (video)

Inspect, Clean, and Re-Inspect
… 100% spec compliance

Visit Wet/Dry Cleaning Solutions for more info on adapter cleaning and end-face cleaning products.

Or see more of “My World” for more application notes.

Email or call if you have questions.