
CommTech upgrades data center network


As of October 2019, ITS Communication Technologies was wrapping up most of the work on the first major design change to the data center network in more than a decade.

The new design significantly increases the network speed and adds redundancy for ITS and the UNC-Chapel Hill schools and departments that use the data centers at ITS Manning and ITS Franklin. In fact, the bandwidth is more than 10 times that of the old network.

Substantial project

Switching to a new architecture has been one of the biggest projects that Communication Technologies has ever undertaken in terms of people, money and time, said Ryan Turner, CommTech’s project manager for the effort.

Ryan Turner with the pods portion of the new architecture

This project has been about two years in the making. CommTech’s Danny Shue designed the architecture, and Jerry Woodside and Robert Henderson handled configuration, testing and validation of the new design. From ITS Infrastructure & Operations, the Data Center Operations group provided much assistance. In addition, many units and schools across the University have helped with CommTech’s testing and validation of the platform.

This new design — called spine and leaf data center network architecture — is considered the gold standard for traditional data centers. In addition to the tremendous increase in bandwidth, the new architecture also provides considerable redundancy, meaning there would have to be multiple faults in a given area for a disruption of service to occur.

“If everything’s working as it should, it adds a substantial amount of redundancy that we haven’t had before,” Turner said.

Because the previous design lacked sufficient redundancy, certain switches ran nonstop for more than four years; updates and patches couldn’t be applied without disrupting other services. Consider the equivalent: not running an update on your computer for four years.

In yet another benefit to campus customers of the data center network, ITS no longer charges for 10 gigabits per second of connectivity to the systems attached to it, and the network provides connectivity between the data centers at 400 gigabits per second. As long as customers conform to ITS standards for how their servers are connected to the data center network, the 10 gigs of connectivity is free.
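
For a rough sense of what those rates mean, here is a back-of-the-envelope calculation in Python (an illustration only; it assumes ideal, sustained throughput with no protocol overhead):

    # Back-of-the-envelope transfer times at the stated line rates.
    # Assumes ideal, sustained throughput and no protocol overhead;
    # real-world numbers will be lower.

    def transfer_seconds(terabytes: float, gigabits_per_second: float) -> float:
        """Seconds to move a data volume over a link of the given rate."""
        bits = terabytes * 8 * 10**12               # decimal terabytes -> bits
        return bits / (gigabits_per_second * 10**9)

    print(f"1 TB at 10 Gbps:  {transfer_seconds(1, 10):,.0f} s")    # ~800 s, about 13 minutes
    print(f"1 TB at 400 Gbps: {transfer_seconds(1, 400):,.0f} s")   # ~20 s

At the 400 gigabit rate, a terabyte crosses between the data centers in roughly 20 seconds.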

Users appreciate extra capacity

By mid-October, 12 schools and departments, in addition to ITS, had migrated or were in the process of migrating the hosting of their servers to the new architecture. Just to name a few, the users include the School of Public Health, the Department of Computer Science, the Libraries, the Development Office and the Hussman School of Journalism and Media.

The spine portion of the new architecture

Customers have embraced the new platform. They’re happy to have the additional redundancy and capacity, Turner said.

“Moving to the ITS spine and leaf data center network has allowed us to maintain near 100% network uptime for our virtualization and storage infrastructure,” said Travis Matthews, University Libraries Storage Administrator. “We’ve been through several switch updates now, and they have all been completely hands off on our end with no network disruption. Working with the networking team has been great throughout the transition to this new architecture.”

For campus entities that aren’t hosting their servers on ITS’s data centers, “this makes an extremely compelling case for schools and departments to co-host in our data centers,” Turner said.

Old pipes experienced clogs

The legacy design began operating at ITS Franklin 15 years ago and at ITS Manning about a dozen years ago. As was common at the time, the design was a three-layered architecture that looks like a tree when diagrammed. With that architecture, one link going down or one piece of equipment failing can cause an outage.

“The old platform,” Turner said, “was subject to port channel saturation that the new design essentially eliminates.” That saturation caused numerous service outages over the years and motivated ITS to move to the new platform, he added.

In a diagram of the new spine and leaf data center network architecture, the switches in the top layer (the spine) are connected to every access switch (the leaves). The design has paths of travel in every direction.

As a result of that topology, there should be fewer outages resulting from network constraints, Turner said.

“On the legacy design, it was possible for multiple people to contend for limited bandwidth, which could saturate the pipes and cause service disruptions,” he said. “Now it’s unlikely.”
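
For readers who want the intuition in code, here is a minimal sketch of the difference in Python (the switch counts are hypothetical, not the actual build):

    spines = [f"spine-{i}" for i in range(1, 5)]    # hypothetical: 4 spines
    leaves = [f"leaf-{i}" for i in range(1, 7)]     # hypothetical: 6 leaves

    # In spine and leaf, every leaf uplinks to every spine, so any two
    # leaves have one independent two-hop path per spine.
    links = {(leaf, spine) for leaf in leaves for spine in spines}

    def usable_paths(a, b, failed=frozenset()):
        """Spines that can still carry traffic between leaves a and b."""
        return [s for s in spines
                if s not in failed and (a, s) in links and (b, s) in links]

    a, b = leaves[0], leaves[1]
    print(len(usable_paths(a, b)), "paths between", a, "and", b)      # 4
    print(len(usable_paths(a, b, {"spine-1"})), "with spine-1 down")  # 3

    # In the old three-layer tree, the same pair of access switches often
    # shared a single aggregation path, so one failure could mean an outage.

Because every leaf uplinks to every spine, one spine can be taken out of service for patching while the remaining paths keep carrying traffic, which is the kind of in-service maintenance the new design makes possible.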

Equipment details

CommTech bought the routing components of the new design in 2017. They consist of Nexus 7706s. In early summer 2018, the division purchased the spine layer components, Extreme SLX 9850s. As ITS has built out the data center, it has purchased the edge system pods on demand; the pods consist of pairs of Extreme 690 switches.

The drawbacks of the design are the cost and complexity. “It’s not cheap to have this kind of redundancy,” Turner said. In addition, the staff has to learn the new complex design.

Design meant to last

CommTech anticipated finishing up this project in September 2018. Instead, the division now expects to mostly wrap up the project by the end of 2019 and then tie up any loose ends in 2020.

“We’ve had a number of setbacks beyond our control that have slid the completion date closer to the end of 2019,” Turner said. “Chief among those setbacks were firmware bugs in the spine layer hardware as well as feature requests that were incomplete or delayed by the vendor.”

The end goal of the project, Turner said, “is to provide system administrators the ability to push very large amounts of data across the data centers without the disruption of other services, provide them an additional level of redundancy for network faults, and provide the networking staff flexibility in doing in-service upgrades to equipment without requiring outage windows.”

Turner expects the data center architecture to last well into the future.

