Discover. Understand. Anticipate.


In North America: an XXL Network for BHS. Report by Hugo Bonnaffé.

End of November 2012. In order to enhance its North American backbone capacity, the hosting provider sends two members of its network team across America, from East to West. A three-week journey during which OVH's network infrastructure was improved through the upgrade of 7 of its strategic points of presence. Antoine Guenet and Nicolas Piatto speak of their “ePoPee”.

Anticipating American Activity Development

While BHS was being established, the hosting provider was weaving its own network across the United States, thus ensuring complete control over American user data transmission all the way to its clients' servers. This also means OVH is no longer forced to rely on the many local telecom companies and ISPs who were tasked with absorbing traffic until then. “Since our goal is to supply quality bandwidth in order to irrigate the 400 000 servers BHS will be holding, this is a strategic decision. The network is also meant to connect American and Asian users to OVH's 140 000 servers in France via our transatlantic optical fiber links (a total of 200Gbps via two dozen links installed in different providers' cable ducts (1), editor's note),” says North American network administrator Antoine Guenet.

13 PoPs to Cover North American Territory

Owning your own network means managing your own routing equipment across a territory's many different PoPs (points of presence). Simply put, they are network crossroads where traffic flows. In such places, telecom and ISP equipment sits beside that of Internet content providers (such as OVH, Amazon, Google or Facebook). They are located within vast data centers, and sometimes in more... unusual spots. Such is the case with an old school in Palo Alto, whose exterior hardly reveals the nature of its current purpose, “despite its high security level,” says hardware manager Nicolas Piatto. Furthermore, OVH manages the links that connect PoPs to one another. “Most of the time, they are wavelengths we have negotiated with the telecom companies (2) who own the fiber. Such is the case between Montreal's PoP and the Beauharnois data center, towards which all of our traffic will be directed,” says Antoine Guenet.

“The network maps are designed and drawn with the idea of covering the whole territory and optimizing user (our customers' customers) latency, wherever they may be.”

Designing the Network: Making the Right Strategic Choices

Octave Klaba himself designs and draws the network maps, with the idea of covering the whole territory and optimizing user (our customers' customers) latency, wherever they may be. He also ensures wavelength diversification, multiplying routes in order to protect the network against any kind of service provider failure, or against a severed cable. The latter case is rather rare, but it recently happened to a Roubaix-Paris link, when highway maintenance cut through a cable. “In the case of an incident, thanks to the BGP and OSPF protocols (and the parameters OVH's network team set), the routers dynamically recalculate an alternative route (much like a GPS does) and redirect user traffic accordingly,” Antoine Guenet explains. This, of course, requires every connection to have enough spare capacity to absorb the redirected traffic. PoP locations were also chosen with another criterion in mind: the potential for traffic exchange with other networks, through peering or IP transit. By opening the doors to its network, OVH gains access to other networks that complement its own. By connecting to local ISPs, OVH is able to provide excellent latency to American users who live far from major cities. “In order to successfully negotiate peering or IP transit, our network has to be interesting to other providers, and has to lead them to this exchange of services,” Antoine Guenet points out. “This is why we need to be strategic, and sometimes put up a fight for a place in the most popular and coveted PoP locations. The Palo Alto location, for example, is as small as it is important, being a main exchange venue with Asia.”
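The reroute-on-failure behaviour Antoine describes is, at heart, a shortest-path recalculation. Here is a minimal sketch in Python using Dijkstra's algorithm (the building block of OSPF route computation) over a toy topology: the PoP names come from the article, but the link costs and the direct Newark-Beauharnois backup wavelength are invented purely for illustration.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm: return (cost, path) from src to dst,
    or (None, None) if dst is unreachable."""
    dist = {src: 0}
    prev = {}
    seen = set()
    heap = [(0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node in seen:
            continue
        seen.add(node)
        if node == dst:
            # Walk the predecessor chain back to src.
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return cost, path[::-1]
        for neigh, weight in graph.get(node, {}).items():
            new_cost = cost + weight
            if new_cost < dist.get(neigh, float("inf")):
                dist[neigh] = new_cost
                prev[neigh] = node
                heapq.heappush(heap, (new_cost, neigh))
    return None, None

# Toy topology with hypothetical OSPF-style costs (lower = preferred).
links = {
    ("Newark", "Chicago"): 2,
    ("Chicago", "Montreal"): 2,
    ("Montreal", "Beauharnois"): 1,
    ("Newark", "Beauharnois"): 6,   # longer, direct backup wavelength
}

def build_graph(links):
    """Expand an undirected link list into an adjacency dict."""
    graph = {}
    for (a, b), cost in links.items():
        graph.setdefault(a, {})[b] = cost
        graph.setdefault(b, {})[a] = cost
    return graph

graph = build_graph(links)
print(shortest_path(graph, "Newark", "Beauharnois"))
# Preferred route goes via Chicago and Montreal (total cost 5).

# Simulate a cable cut on the Chicago-Montreal span, as in the
# Roubaix-Paris incident: drop the link and recompute.
del links[("Chicago", "Montreal")]
graph = build_graph(links)
print(shortest_path(graph, "Newark", "Beauharnois"))
# Traffic falls back to the direct Newark-Beauharnois wavelength (cost 6).
```

The "GPS" analogy in the quote is apt: the routers keep a map of link costs and recompute the cheapest remaining path the moment a link disappears, with no human intervention.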

The Goal: Triple OVH's Network Bandwidth in North America

Antoine Guenet and Nicolas Piatto's expedition in the US aimed to upgrade OVH's backbone capacity in North America. The first step was upgrading the connections of 7 of OVH's 13 American PoPs: 100Gbps was added between Newark and Miami, 100Gbps between Newark and Los Angeles, 100Gbps between Chicago and Montreal, 100Gbps between Chicago and Palo Alto, and 100Gbps between Newark and Beauharnois, for a total of 500Gbps. The second step was upgrading the routers so they could properly exploit these new pipes. Network cards were changed and new chassis were added in order to plug in the new optical fiber cables. Lastly, it was necessary to double the number of routers at certain PoPs, ensuring they would have 2 inputs and 2 outputs, a security measure deemed necessary in case of hardware failure. “Normally, both routers share the load, but if there is any kind of physical failure, one of the two routers takes over until a team makes the necessary repairs,” Antoine Guenet clarifies.
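The doubled-router arrangement Antoine describes, active/active load sharing with one router absorbing the full load if its twin fails, can be sketched as a simple dispatch function. The function, its names and the figures are ours, for illustration only:

```python
def route_traffic(routers, total_gbps):
    """Split a PoP's traffic evenly across its healthy routers.

    `routers` maps router name -> True if healthy. In normal operation
    both routers of a pair share the load; if one fails, the survivor
    carries everything until repairs are made.
    """
    healthy = [name for name, up in routers.items() if up]
    if not healthy:
        raise RuntimeError("PoP offline: no healthy router left")
    share = total_gbps / len(healthy)
    return {name: share for name in healthy}

# Normal operation: the pair shares 100 Gbps of traffic.
print(route_traffic({"rtr-a": True, "rtr-b": True}, 100))   # 50 Gbps each

# Hardware failure on rtr-b: rtr-a takes over the full load.
print(route_traffic({"rtr-a": True, "rtr-b": False}, 100))  # 100 Gbps on rtr-a
```

Note the design constraint this implies, echoed earlier in the article: each router of a pair must be provisioned to carry the PoP's entire load alone, not just its half.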

NW, NY, CHI, PAL, SNJ, LAX and MIA: 7 PoPs Upgraded in 3 Weeks

This adventure's first stop was Newark, followed by New York. These 2 PoPs are of great importance, being the front door to OVH's European network. The 2 routers in Newark are connected to their twins in London, and together they represent 80% of OVH's transatlantic network capacity. In New York, 2 distinct peering points can be found: NYIIX and Equinix NY.
“In Newark, the operation consisted of replacing the old cards with new Cisco Sup2T cards, twice as powerful. You could say that it didn't start so well, seeing that some of the equipment came in late. And this first operation set the pace for the whole trip! Because of the delay, the first steps were rushed through,” Antoine Guenet remembers. The two teammates had reduced time to do the same job, which consisted of powering off the chassis, replacing the cards, making sure everything “pinged”, and, along with the network team in France, ensuring that the traffic that had been temporarily redirected was flowing back normally. In order to minimize risk and affect the smallest number of users, everything had to be done within the very limited time frame during which traffic is lowest. “Downtime was not to exceed 1 or 2 hours, lest our customers start noticing the maintenance. Shutting down a chassis in Newark, knowing that you have just powered off half of OVH's transatlantic network capacity, makes a guy nervous,” Antoine Guenet confides.
“Still, everything ended well, and we even got closer to many of the local data center teams. They were amused by how ‘clinical' our PoP was: perfectly clean and well wrapped up, without any cables sticking out. The local technicians committed to keeping everything that way if they ever had to manipulate the cables, for cross-connecting, for example.” Nicolas Piatto justified this way of working: “Beyond the aesthetic reasons, about which we are very concerned (laughs), OVH's best practices require us to facilitate future interventions, be they upgrades or maintenance. We have seen pretty messy PoPs in other locations, in which cables had been stretched to their utmost. It ‘pings' just as well, but the day you need to do some maintenance, downtime may be longer and clients may feel more of an impact on their performance.”
The same kind of maintenance was quickly applied in New York, Chicago, Palo Alto and San Jose. A few difficulties in Los Angeles and Miami broke the monotony that would otherwise have crept in. “Being our gateways to South America, the Los Angeles and Miami PoPs are of considerable importance. In both cases, for lack of space, we had to move our racks from one floor to another before any kind of maintenance could begin. On top of that, we realized the cross-connect cables that had been delivered to Los Angeles had LC connectors, while we were expecting SC connectors. Once again, it was the friendly relationship we had developed with the local techs that saved us: they agreed to swap their SC jumpers for our LC ones. A reminder that even in a world of machines, human contact is essential,” Nicolas Piatto says.

“We connected directly to OVH's network from our hotel room's Wi-Fi!”

Ask Antoine and Nicolas for the most moving memory of their journey, and the answer would be... a traceroute. “This was in Chicago, on the night following our PoP maintenance. We connected our laptops to the hotel Wi-Fi and performed a traceroute to see what path our packets would follow in order to reach the network management machines in Roubaix. We were then able to see the clear results of our work: after two hops on local nodes, we fell directly onto OVH's network, which led us straight to Roubaix Valley. A moment of pride.”
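The moment is easy to reproduce: a traceroute lists each hop's hostname, and the hop where hostnames switch to the provider's domain is where the packets enter its backbone. A small Python sketch over a made-up hop list (the hostnames and the `example-backbone.net` domain below are invented for illustration, not real router names):

```python
def first_hop_in_network(hops, domain):
    """Return the 1-based hop number at which the traceroute path
    first enters `domain`, or None if it never does."""
    for number, host in enumerate(hops, start=1):
        if host == domain or host.endswith("." + domain):
            return number
    return None

# Invented hop hostnames, resembling what a hotel-room traceroute
# toward Roubaix might show: two local hops, then the backbone.
hops = [
    "gw.hotel-chicago.example.net",       # hotel gateway
    "core1.isp-chicago.example.com",      # local ISP
    "chi-router.example-backbone.net",    # entry into the backbone
    "bhs-router.example-backbone.net",
    "rbx-router.example-backbone.net",    # Roubaix, the destination side
]
print(first_hop_in_network(hops, "example-backbone.net"))  # → 3
```

The shorter that number, the fewer third-party networks sit between the user and the provider's own infrastructure, which is precisely what the team had just improved.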

Thanks to this global upgrade, the North American network now boasts a 520Gbps capacity (out of a total of 2.21Tbps worldwide). These figures illustrate OVH's exchange capacity with other providers, and mean the company can rest easy as far as connecting its first servers in BHS is concerned. The network team is already working towards the implementation of Infinera technology in France. This, in turn, will eventually be applied to the American network. That will certainly be another occasion to tell a tale about a trip in America.

(1) Transatlantic carriers: Hibernia, Global Crossing, Level 3, Tata, Reliance Global Com.
(2) North American wavelength providers: Above, Allstream, Fibre Noire, Global Crossing, Hibernia, Level3, Reliance Globalcom, Tata, TeliaSonera.