Can Fibre Channel (FC) Go the Distance?

I have been a long-running Fibre Channel (FC) enthusiast since I deployed my first two SAN switches back in 1999, but it seems that since then basic facts about FC have gotten lost along the way. We can all get caught up in the hoopla of new and slick storage technology features and lose sight of some of the most important and basic details that keep our storage fabrics up and humming.

Among these are the Fibre Channel cabling infrastructure and the distance limitations incurred by continued increases in FC speeds. These are the details that can be easily, but inappropriately, overlooked inside the data center.

In Base-2 deployments of Fibre Channel, the following distances apply to the cable plant being deployed and the speed at which the protocol is rated. As the numbers below demonstrate, if you are running 62.5-micron fibre in your cable plant, there are some serious considerations to be made when moving to 8 Gb/s.

In all 62.5-micron deployments, if you plan to keep moving forward with the FC environment, serious thought needs to be given to updating the cable plant to 50-micron fibre. In some cases, however, depending on the distance the fibre runs, 50-micron may not be enough either. This particularly holds true if you are deployed in a structured cable plant with connection boxes and long structured runs of glass.

Multi-mode 62.5 micron

  • 8 Gb/s Distance = 2m – 21m
  • 4 Gb/s Distance = 2m – 50m
  • 2 Gb/s Distance = 2m – 90m
  • 1 Gb/s Distance = 2m – 300m

Multi-mode 50 micron

  • 8 Gb/s Distance = 2m – 150m
  • 4 Gb/s Distance = 2m – 175m
  • 2 Gb/s Distance = 2m – 300m
  • 1 Gb/s Distance = 2m – 500m

Single-mode 9 micron

  • 8 Gb/s Distance = 2m – 1.4 km
  • 4 Gb/s Distance = 2m – 2 km
  • 2 Gb/s Distance = 2m – 2 km
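
To make these numbers a bit more concrete, here is a minimal sketch in Python that flags planned multi-mode cable runs falling outside the distances listed above. The table values simply restate the multi-mode figures from the lists, and the check_link() helper is something I made up for illustration; it is not part of any FC standard or vendor tool.

    # Illustrative only: maximum supported run lengths in metres, keyed by
    # (cable type, speed in Gb/s), restating the multi-mode figures listed above.
    # The minimum length is 2 m in all cases.
    MAX_DISTANCE_M = {
        ("62.5-micron", 1): 300,
        ("62.5-micron", 2): 90,
        ("62.5-micron", 4): 50,
        ("62.5-micron", 8): 21,
        ("50-micron", 1): 500,
        ("50-micron", 2): 300,
        ("50-micron", 4): 175,
        ("50-micron", 8): 150,
    }

    MIN_DISTANCE_M = 2

    def check_link(cable, speed_gbps, run_length_m):
        """Return True if a run of run_length_m metres is within the listed
        range for the given cable type and FC speed."""
        limit = MAX_DISTANCE_M.get((cable, speed_gbps))
        if limit is None:
            raise ValueError("No distance figure listed for %s at %d Gb/s"
                             % (cable, speed_gbps))
        return MIN_DISTANCE_M <= run_length_m <= limit

    # Example: a 75 m run of 62.5-micron fibre that works today at 2 Gb/s
    # falls well outside the 21 m limit once the fabric moves to 8 Gb/s.
    print(check_link("62.5-micron", 2, 75))   # True
    print(check_link("62.5-micron", 8, 75))   # False

Running something like this against a list of your patch-panel runs is a cheap way to find the links that will quietly stop working before the 8 Gb/s hardware ever arrives.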

Most data-center managers or infrastructure people really dislike dealing with the cable plants inside their facilities, especially if the storage team asks them to start ripping out their cabling infrastructure. Their disdain stems from the fact that a cable plant is extremely expensive, particularly the raw cable, installation labor, splicing, and test equipment.

However, taking no action over time is not an option either, as intermittent, uncorrectable symptoms will start to occur if cabling upgrades are not performed and you move to 8 Gb/s or beyond.

Examples of problems you might start to experience include No-Sync lights, switch ports negotiating at a lower speed, or no light transmission at all. Obviously, the time to explore these issues is before you have deployed new 8 Gb/s switches, HBAs, and storage into your environment.

This problem will continue to get worse as deployment of 16 Gb/s begins in 2011 and speeds scale up to 128 Gb/s by 2020, depending on market demand.

While the FCIA (Fibre Channel Industry Association) says that all FC deployments will be backward compatible for at least two versions, keep in mind that every time a new speed is deployed into the market, fresh transceivers need to be added to your switches, storage, and HBAs in order to drive those new speeds (translation: more expense).

What’s the answer, you ask? Not 9-micron fibre, unless there are significant reductions in the cost of deploying and maintaining that infrastructure, especially since most HBA, storage, and tape manufacturers don’t support a single-mode interface into their devices; the default standard is multi-mode.

The answer, I believe, is a fundamental shift in the way we look at storage and server interconnects. Here we can take a page out of our InfiniBand friends’ book and begin to build storage networks that can service not only I/O but compute as well. I’ll get more into why I feel this shift is coming and the direction it may take in my next blog entry.

Editor’s Note: This blog entry was originally published on May 30, 2008.

About Tim Anderson

Tim serves as a Senior Analyst, HA/Infrastructure, for DCIG.
