Growth of cloud-based services shows no real signs of slowing down. This
adoption rate is propelling cloud service providers to build new data
center capacity, make their existing data centers run more efficiently, and
improve how those data centers are networked internally.
In the early phases of cloud adoption,
there have been two dominant uses of data center interconnection (DCI). The first is
connecting enterprise data centers to service providers' (SPs') data centers for
hybrid and public cloud computing services. The second has been connecting providers' ecosystem
partner data centers to SPs' data centers to mash up applications and federate cloud
services.
As usage has grown, though, a new
set of DCI requirements has emerged: connecting providers' own data centers at very high capacities.
Two scenarios dominate this trend. The first is metro or nearby data center
connections; the second is hyper-scale data centers deployed at great
distances from other sites and running at enormous scale.
In the first use case, operators
run out of power or space in existing sites and need to create additional
capacity nearby, whether in a metro-area footprint or in an extended
campus. DCI is critical in these deployments because many cloud applications
work in a highly distributed model: they often need access to resources in
neighboring data centers many times over before responding to a single user's
request. Thus, interconnections need to be simple and fast.
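To illustrate why fast interconnects matter, a simple latency-budget calculation shows how inter-data-center round trips accumulate inside a single user request. The figures below (processing time, round-trip counts, and RTTs) are illustrative assumptions, not numbers from the text:

```python
# Illustrative latency budget for a distributed cloud application.
# All figures are assumptions chosen for the sake of the example.

def request_latency_ms(local_processing_ms, inter_dc_round_trips, inter_dc_rtt_ms):
    """Total time to answer one user request when the serving data center
    must consult resources in a neighboring data center several times."""
    return local_processing_ms + inter_dc_round_trips * inter_dc_rtt_ms

# A metro-distance DCI link adds roughly a millisecond per round trip
# (propagation plus switching, assumed figure):
metro = request_latency_ms(local_processing_ms=20,
                           inter_dc_round_trips=10,
                           inter_dc_rtt_ms=1.0)

# The same access pattern over an assumed long-haul link (~20 ms RTT):
long_haul = request_latency_ms(local_processing_ms=20,
                               inter_dc_round_trips=10,
                               inter_dc_rtt_ms=20.0)

print(f"metro: {metro:.0f} ms, long haul: {long_haul:.0f} ms")
```

Ten consultations of a neighboring metro site add only about 10 ms, while the same pattern over long-haul distances would multiply the response time, which is why nearby capacity expansion depends on simple, fast interconnects.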
In the second deployment scenario,
hyper-scale operators such as Google, Facebook and Microsoft search for remote
locations where land and power are less expensive and build some of the world’s
largest data centers there to run their services. Server counts in these sites
range from 200,000 to 500,000 or more. The need for integration with systems in
the providers’ other data centers is strong in mega-site deployments as well. This
leads to extremely large amounts of DCI bandwidth being deployed both
locally among clustered DC locations and over long-haul transport to sites
half a continent or more away.
DCI capacities required in these intra-provider
configurations range from tens of Tb/s in medium-to-large sites to several
hundred Tb/s in the largest mega-center locations. Because of the ongoing growth
in the use of providers’ services, the unique needs of these DCI deployments
have led to the emergence of a new type of high-capacity DCI solution.
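To put these capacities in perspective, a back-of-the-envelope estimate shows how many optical channels such a deployment implies. The per-wavelength rate below is an assumed figure for illustration:

```python
# Rough channel-count estimate for a DCI deployment.
# The per-wavelength rate is an assumption for illustration.

def wavelengths_needed(total_capacity_tbps, wavelength_rate_gbps):
    """Number of optical wavelengths required to carry the target capacity."""
    total_gbps = total_capacity_tbps * 1000
    # Round up: a partially filled wavelength still occupies a channel.
    return -(-total_gbps // wavelength_rate_gbps)

# A medium-to-large site at 20 Tb/s over assumed 100 Gb/s wavelengths:
print(wavelengths_needed(20, 100))   # 200 wavelengths

# A mega-center at 300 Tb/s over the same assumed channel rate:
print(wavelengths_needed(300, 100))  # 3,000 wavelengths
```

Thousands of channels per site is what drives the requirements that follow: the gear terminating them has to be dense, low-power, and simple to operate.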
Five requirements define the new
breed:
- Efficient and flexible scaling to hundreds of Tb/s of transport
- Compact, rackable
form factors
- Low power
consumption
- Simple operation
- Programmability for
integration with service automation
Underpinnings of these requirements
A dominant aspect of cloud data
centers is the use of infrastructure, such as servers and storage systems, that is
modest in unit size but can be pooled across wide ranges of capacity to serve
the needs of an application or service. This leads to a bias for systems
installable in compact, rackable form factors that are easy to install and expand,
often leveraging auto-configuration for integration into very large-scale
infrastructures.
Form factor compactness demands low
power consumption. If an individual server consumes, say, 150 watts in sustained
use, a rack of 40 such servers consumes 6 kilowatts, and a data
center with 100,000 such servers draws roughly 15 megawatts. It’s easy to
understand why cloud providers focus on wringing every
possible watt out of the solutions they deploy. DCI platforms designed in a more
server-like package (versus a telco central office orientation) are likely to consume less
power, perhaps drawing a third less per rack than alternatives. If 150-200
kilowatts can be saved across 10 racks’ worth of devices, a solution
is heading in the right direction.
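The arithmetic above can be made explicit. The server figures come from the discussion; the per-rack draw of a telco-style alternative is an assumed figure used only to show how the savings scale:

```python
# Power arithmetic from the discussion above, made explicit.

SERVER_WATTS = 150          # sustained draw per server (figure from the text)
SERVERS_PER_RACK = 40
TOTAL_SERVERS = 100_000

rack_kw = SERVER_WATTS * SERVERS_PER_RACK / 1000       # 6 kW per rack
site_mw = SERVER_WATTS * TOTAL_SERVERS / 1_000_000     # 15 MW per site

# Assumption: a server-style DCI platform draws a third less per rack than
# a telco-style alternative rated at ~50 kW (illustrative figure).
TELCO_RACK_KW = 50
savings_10_racks_kw = 10 * TELCO_RACK_KW * (1 / 3)     # ~167 kW across 10 racks

print(rack_kw, site_mw, round(savings_10_racks_kw))
```

Under those assumptions, 10 racks of server-style DCI gear land squarely in the 150-200 kW savings range the text describes.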
A final objective, fitting with
the goal of pooling resources, is support for open, programmable software so that
DCI capacity can be dynamically provisioned according to application needs. A
variety of approaches can achieve this, including plug-ins for the
service control software rapidly evolving in cloud and virtual
networking infrastructures, as well as API toolkits that let large cloud providers
integrate with their own service management platforms. In the end,
programmability that supports adaptation to providers’ goals for resiliency, path
allocation, and application-driven solutions is the key requirement.
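As a sketch of what application-driven provisioning might look like from a service management platform's point of view, consider the toy controller below. Every class, method, and parameter name here is hypothetical, invented for illustration; it is not an actual Infinera or industry-standard API:

```python
# Hypothetical sketch of programmable DCI capacity provisioning.
# All names below are invented for illustration; real platforms expose
# their own APIs or SDN plug-ins.

from dataclasses import dataclass

@dataclass
class DciLink:
    src_dc: str
    dst_dc: str
    capacity_gbps: int
    path_policy: str  # e.g. "lowest-latency" or "diverse-path" for resiliency

class DciController:
    """Stand-in for a service-management integration point."""
    def __init__(self):
        self.links = []

    def provision(self, src, dst, gbps, policy="lowest-latency"):
        """Add DCI capacity between two data centers under a path policy."""
        link = DciLink(src, dst, gbps, policy)
        self.links.append(link)
        return link

    def total_capacity(self, src, dst):
        """Aggregate provisioned capacity between a pair of sites."""
        return sum(l.capacity_gbps for l in self.links
                   if {l.src_dc, l.dst_dc} == {src, dst})

# An application asks for more bandwidth between two metro sites,
# splitting it across diverse paths for resiliency:
ctl = DciController()
ctl.provision("dc-east", "dc-west", 500)
ctl.provision("dc-east", "dc-west", 500, policy="diverse-path")
print(ctl.total_capacity("dc-east", "dc-west"))  # 1000
```

The point of the sketch is the shape of the interface: capacity requests expressed in application terms (endpoints, bandwidth, policy) rather than in box-by-box configuration, which is what lets service automation drive the DCI layer.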
This new breed of DCI solution
will complement other transport solutions that implement shared network transport
of various types in metro and long-haul configurations. Providers will use the two
styles for different types of connections, and both will support higher-level
service requirements for customer, partner, and internal operator
data center connections.
The Cloud Xpress family
introduced by
Infinera is an innovative example of the kind of high-capacity,
small-form-factor, programmable DCI platform cloud operators are leaning toward
for their internal DCI deployments. Cloud Xpress is initially targeted at
metro deployments. By leveraging optical innovations Infinera has previously
introduced and engineering them into a platform capable of 20+ Tb/s in a single
rack, Cloud Xpress is an impressive contribution to the state of the DCI art. If
trials prove successful, Cloud Xpress has every prospect of helping cloud
operators scale out their data center deployments and interconnect them with
the capacity and elasticity they desire.
Paul Parker-Johnson