ACG Research
We focus on the Why before the What

Friday, April 27, 2012

Advancing the SDN Security Conversation



Recently, the Open Networking Summit brought together people from academia, the private sector (both vendors and potential customers), and the standards world to discuss the current state of software defined networking (SDN). Sarah Sorensen, security analyst, sat down with Matt Palmer from Wiretap Ventures, who spoke at the event, to get his thoughts on what’s happening in the space from a security perspective.


First of all, what would you say was the main take-away from the event? 
I think we saw that software defined networking has hit a tipping point. It’s real, and people are starting to see a path to actually deploying it. It was fascinating to hear Urs Hölzle, SVP of technical infrastructure at Google, talk about how they are running all the traffic for their internal data center WAN on an SDN built using OpenFlow. That was a real eye-opener for a lot of people: you really can use this stuff to run a network. Google talked about how they are able to optimize flows and achieve better QoS visibility and predictability, which starts to deliver on some of the promise of SDN, and they are hopeful they will be able to use it for some customer-facing applications soon.
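
To make the flow-programming idea concrete: an OpenFlow controller computes forwarding decisions centrally and pushes match/action rules down to the switches. Below is a minimal sketch using the open-source Ryu controller framework and OpenFlow 1.3; the ports and the rule are invented for illustration, and this is in no way a description of Google’s actual implementation.

```python
# Minimal Ryu app that pre-installs one flow rule on every switch that
# connects: traffic arriving on port 1 is forwarded out port 2. This is the
# flavor of explicit, centrally computed flow placement a traffic-engineering
# controller performs. Run with: ryu-manager path_pinner.py
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class PathPinner(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch(in_port=1)      # traffic entering port 1
        actions = [parser.OFPActionOutput(2)]   # send it out port 2
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```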

So, where is security in all of this? It seems to me it’s been missing from a lot of the SDN discussions. 
It’s true, security hasn’t been a large part of the discussion. However, we are starting to see a lot of people raise the questions and concerns you have been talking about (http://www.acgresearch.net/knowledge-insights/articles/preparing-for-the-ipv6-evolution-and-the-security-implications.aspx) around “how to secure this environment,” “how to secure the controller,” “how to secure the connections between the controllers and the switches/routers,” “how to ensure I don’t have malicious things injected into the mix,” and so on.
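
To give a flavor of the control-channel question: the OpenFlow control channel is an ordinary TCP session (conventionally on port 6633), so a baseline mitigation is mutual TLS, with the controller refusing switches that cannot present a certificate signed by a trusted CA. The sketch below illustrates the idea in plain Python; the certificate file names are placeholders, and a real deployment would use the TLS support built into the controller and switch software.

```python
import socket
import ssl

# Illustrative only: wrap the OpenFlow control channel (TCP 6633) in mutual
# TLS so that only switches holding certificates signed by our CA can connect.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="controller-cert.pem", keyfile="controller-key.pem")
ctx.load_verify_locations(cafile="switch-ca.pem")  # CA that signs switch certs
ctx.verify_mode = ssl.CERT_REQUIRED                # reject unauthenticated switches

listener = socket.create_server(("0.0.0.0", 6633))
with ctx.wrap_socket(listener, server_side=True) as tls_listener:
    conn, addr = tls_listener.accept()  # each accepted switch is authenticated
    print("authenticated switch connected from", addr)
```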

I think the reason we haven’t seen more of these discussions up to now is that we haven’t been close enough to deployments, so the security guys haven’t been brought in to validate the architecture or poke holes in the technologies.  

Isn't that one of the major stumbling blocks to wide-scale adoption? Security can’t really be an afterthought; it needs to be considered from the start. 
True, but right now the trials are in the lab, and the people working on them are focused on solving next-generation networking problems. A lot of it is theoretical, which makes it hard to understand the real implications and security issues. There are a limited number of shipping products out there to test, but as products start to ship, people will want to know what the risks are.

If they haven’t thought about those risks, it will probably be a nonstarter. CIOs I have spoken with are looking for companies that are thinking about security and know how to solve for it before they will consider putting anything in a production environment. They can’t afford to disrupt their networks or increase their risks. There’s just too much at stake.

Agreed. I think we are just now learning what security issues they may encounter and what security questions we need to be asking, and I think you will see security and SDN ramp up together over the next phase of this evolution. We also see many of the early use cases being single-tenant, back-office deployments (à la Google), which means the SDN is deployed behind an already secure network. The second common early use case is deployments in test and development environments, where security is less of a concern because the network is not outward facing and sits in experimental areas.

What do you think SDNs can mean for security? 
SDNs have the potential, in certain situations, to streamline security processes or even change how security is deployed, say from physical appliances to virtual appliances, or even to virtual security services embedded within the infrastructure. While these are exciting and important developments, we do need to temper the excitement until customers and vendors have experimented and learned how to use SDN to enhance network security.

One thing is for certain, it’s an interesting time and an interesting space to be in!


Sarah Sorensen
ssorensen@acgresearch.net
www.acgresearch.net

Tuesday, April 24, 2012

Connection-Oriented Strategies for Next-Gen Networks

FierceTelecom recently released its e-book Connection-Oriented Ethernet. In the chapter "Ethernet Strategies for Next-Gen Networks," Michael Kennedy, business case analyst, provides a deep dive into Connection-Oriented Ethernet and its benefits, explains how it improves on SONET, and discusses Ethernet-centric and MPLS-centric approaches and the challenges each poses for service providers.

To read Michael Kennedy's chapter or download the FierceTelecom e-book, click here.



Michael Kennedy
mkennedy@acgresearch.net
www.acgresearch.net

Friday, April 20, 2012

Economic Benefits of Cisco CloudVerse

Cisco CloudVerse is a comprehensive architectural approach that treats multiple data centers and networks as a vast pool of dynamic resources constituting the cloud. It uses visibility into applications, users and devices, along with the availability of infrastructure, to optimize service delivery from the cloud.

CloudVerse facilitates a world of many clouds: private, public and hybrid. By uniquely integrating data centers, networks and applications, it enables a guaranteed application experience, security and rapid scaling across the infrastructure.

ACG Research presents an economic modeling exercise of the journey from individual virtualized data centers to a full CloudVerse implementation and the resulting benefits to enterprises and service providers.




Michael Kennedy
mkennedy@acgresearch.net
www.acgresearch.net

Are You Ready for the Cloud?

A recent study of providers across the globe concluded that traditional carriers and telcos are not ready or not able to offer cloud alternatives. Even those providers that have acquired cloud companies are still challenged to find the right sales force to migrate enterprise subsystems to the cloud.


Approximately 70 percent of the 600 CIOs surveyed indicated that they spend 13 percent of their budgets on service provider public cloud infrastructure delivered as a managed offer, and they estimate that share will grow to more than 40 percent within three years. This growth is forcing enterprises to evaluate their business processes across all departments and identify how the cloud can support them.

The decisions are not clear-cut. For example, a company may need high-security systems kept on premises and managed by its own local or onsite staff, while for communication and collaboration it needs public cloud offers. For backup and recovery, a relationship with a provider or MSP may yield a solution that makes the cloud attractive for reducing resources while preserving access to data. Before selecting a cloud solution, each process should have identified requirements and a risk rating, and the solution should meet those demands process by process or client by client.

Providers, therefore, must address the obstacles they face when dealing with enterprise customers. Cloud offers from companies such as Amazon, Microsoft and Google can almost fully support small- and medium-business self-service customers, but their models are completely inadequate for enterprises, which have often relied on traditional providers for connectivity SLAs. It is not certain that these traditional providers can manage cloud SLAs. To effectively meet enterprises’ requirements, providers will have to initiate major restructuring of their go-to-market, sales and delivery systems.

Some providers have already filled their gaps in cloud offers through acquisition: OpSource, Terremark, Savvis and other cloud companies have been purchased to supply those offers. But it remains to be seen whether the acquirers can sell them. Only NTT, which purchased Dimension Data and OpSource, has the system integration skills to sell the offer in a consultative fashion. Even though each of these companies has a cloud offer, getting the offer to market and selling it will still take several years. To fill their cloud gaps, telcos must acquire, partner or build to meet the demand.

Which companies will be the winners in the off-premise cloud? How will the market evolve among service providers (Verizon), asset-heavy system integrators (CSC), over-the-top players (Amazon, Google) and cloud pure plays (Rackspace)? It is unlikely that Rackspace’s, Amazon’s or Google’s offers would be considered enterprise infrastructure offers, as they have limited ability to address enterprises’ SLAs. Service providers do address SLAs for connectivity; however, they will need to develop the consulting skills to enable migration of subsystems to public or private clouds.

The obstacles preventing companies from delivering vary. Verizon has more than 300 SI and professional services staff who must be trained, and the company must deal with changes in leadership, alignment with Terremark, and a lack of processes. Savvis/CenturyLink has the same challenges. NTT is quickly retooling Dimension Data to sell cloud offers and is the only company with the SI skills, cloud assets and connectivity SLAs that enterprises require. Some outsourcers, such as CSC, have good white-label Vblock (VCE) stacks that will become a standard cloud offer. And just as Ericsson, IBM and HP do with other infrastructure technologies, some of these SIs will also manage infrastructure as outsourcers.

As it currently stands, partnering with vendors is the primary strategy that could really change the cloud market, because it is the quickest way for providers to move from selling connectivity to offering full cloud enablement. Vendors have deep relationships with system integrators and have created loyalty and preference with top-level integrators, for example, Accenture, IBM and HP. Service providers, Tier 2 operators and cable operators that create partnerships with low-end vendors, which have SI-lite VAR partners, will be able to target the mid- to low-end market with loyalty, incentives, training and partnering offers. Partnering enables risk-averse telcos to change, invest and move to new technologies. Partnering 1) allows for white-label cloud offers and the acquisition of go-to-market sales skills; 2) reduces up-front investment in cloud-based offers (if the white label is leveraged); 3) reduces sales staff training; and 4) derisks the cloud for the provider’s sales team, which is generally focused on connectivity.

Providers need to refine the lifecycle of their offers to increase their chances of delivering targeted cloud services. They need to truly understand their subscribers: for example, are they wireless or wireline (providers know which, but not what the business is)? Service providers must identify connectivity requirements, for example, size, and, most importantly, understand their customers’ business IT processes, needs and systems. Once these factors are fully understood, providers can develop consulting and migration strategies and successfully deliver the cloud to enterprises.


Thursday, April 12, 2012

CloudSigma: Tackling the Big Data Challenge

Lauren Robinette recently talked with Robert Jenkins, co-founder and CTO of CloudSigma, about the company’s on-demand utility platform, a unique utility cloud that is generating buzz within the media/entertainment and other markets.

CloudSigma offers utility cloud hosting just as Rackspace and Amazon do, but CloudSigma does it with any operating system and has the flexibility to scale as needed. Best of all, some would say, the platform has a transparent cost structure, which can be viewed on the company’s website, http://www.cloudsigma.com/.

The CloudSigma platform is a true Infrastructure as a Service product that enables outstanding performance for applications such as storage and delivers the flexibility, via easy configuration, to address different workloads. For example, the system can offer a different experience for various types of workloads: SSD storage at the core, object storage for high-performance jobs, and support for I/O-intensive jobs.

Consider the workflow of processing and transferring large files in the movie production industry. Today, shipping the large files by mail for each production step takes time, and if there is an error, the step must be redone. With the Big Media System, an ecosystem built on the CloudSigma platform, users upload huge amounts of data into the cloud, leverage processing functions there, and provide a link for other personnel to access the data and tackle the next tasks of the workflow. The benefits of this system are immediate: faster access at lower cost.
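
As a purely hypothetical sketch of that upload, process, and share pattern (the endpoint, token, and job fields below are invented for illustration; they are not CloudSigma’s or the Big Media System’s actual API):

```python
import requests

BASE = "https://api.example-media-cloud.com"     # hypothetical endpoint
TOKEN = {"Authorization": "Bearer <api-token>"}  # placeholder credential

# 1. Upload the raw footage once, instead of shipping disks for each step.
with open("raw_footage.mov", "rb") as f:
    obj = requests.post(f"{BASE}/objects", headers=TOKEN, data=f).json()

# 2. Run a processing job (e.g., a transcode) next to the data in the cloud.
job = requests.post(f"{BASE}/jobs", headers=TOKEN,
                    json={"input": obj["id"], "task": "transcode"}).json()

# 3. Hand downstream personnel a link instead of another shipment.
link = requests.post(f"{BASE}/objects/{job['output']}/share",
                     headers=TOKEN).json()["url"]
print("share this with the next team:", link)
```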

This system is also not limited to the media industry; it can be utilized by verticals such as financial, medical and scientific institutions. For example, an insurance company can publish data that allows changes in the pricing of policies based on traffic and where accidents happen. This type of public data can be used to create an asset that is then monetized as a new product. Another benefit, as users’ behavior adapts, will be faster turnaround on iteration cycles and byproducts that require more processing. This is similar to what Google did when it improved the quality of search, which, in turn, changed consumers’ behavior.

CloudSigma offers providers a white label, or providers can leverage the CloudSigma brand. A partnership with CloudSigma allows providers to develop new cloud resources (public and private clouds that target the big data industry) while reducing time to market. As Robert Jenkins succinctly stated, “It’s like selling gold pans to miners.”

For more information about ACG Research cloud service, click here.

For more information about cloud best practices, click here.


Thursday, April 5, 2012

OFC/NFOEC 2012 Wrap Up

Even before OFC/NFOEC 2012 officially got underway, key messages about data center innovation, the critical need for photonic integration for low-cost, low-power interconnects, and the need for increased interest and investment in this integrated-component area were the buzz at the conference.

Workshops featured deep dives into data center architecture, and discussions ranged from the distributed all-to-all Hadoop approach to the gather-scatter centralized approach.

In other workshops, participants pondered issues such as throttling back data rates between servers and potentially putting processors into sleep mode to save power, and how to determine the right network design that supports fast data flow while utilizing optical pipe switching at lower power levels.

Highlights

OFC/NFOEC 2012 was all about the data center. Inter-data-center networking continues to focus on scalable, simple solutions in the long-haul and metro networks connecting DCs worldwide. Metro deployments are expected to be in full force in 2013 and 2014, especially if low-cost coherent technology is available.

Data interconnects are clearly the major bottleneck within data centers today. The industry is looking for low-cost optical interconnects to replace copper, which is limited in distance and bandwidth. Using silicon photonics, optics can advance the state of the art in computing toward faster and more scalable data centers. Silicon photonics will likely be the building block for a growing number of computing and networking products. Reduced size and power and high-volume deployment will enable more low-cost options.

Other discussions centered on the bandwidth gap. With average traffic growth of 40 percent, even optimistic improvements in physical hardware can help by only about 10 percent. The industry needs long-term research to deliver innovative changes as we push toward Shannon's limit on capacity.
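
(For reference, the limit in question is Shannon's channel capacity, C = B log2(1 + S/N): for a fixed bandwidth and signal-to-noise ratio there is a hard ceiling on error-free bits per second, which is why long-term gains must come from more spectrum, more fibers or more spatial paths rather than from cleverer modulation alone.)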

A Final Note

The entire ecosystem is strapped for dollars and seems to be operating on much lower margins. But the feeling at the conference was very upbeat. While margins are not great for everyone, it does feel like winter is over: investment in start-ups is back, although slight, and positive outlooks are everywhere.

Paul Saville, Level 3
Eve Griliches, ACG Research
Donn Lee, Facebook
Bikash Koley, Google

Monday, April 2, 2012

ACG Announces Cloud Outsourcing Report

All major vendors offer some outsourcing capabilities, whether advanced services or outsourced management of the network operations center. The goal of outsourcing is to let providers focus on customer acquisition, increase value to customers and deliver service-level agreement management.

Service providers, whether true telcos or carriers, tend to be very slow to move to a new technology or offering. Their internal silos and sales teams are set up to sell connectivity (pipes and access) and are less able to sell the advanced offers the market demands, such as unified communications, cloud and video services.

ACG Research has interviewed the major partners and vendors that offer outsourcing services to support service providers’ migration to the cloud. The report provides comprehensive information about the market offerings and covers Cisco, HP, IBM, Globecomm, Avaya, CSC, Ericsson, Alcatel-Lucent and NSN.

The report covers:
  • Executive Summary
  • Introduction
  • Objective of Report
  • Strategy
  • Strategic Opportunities for Outsourcers
  • What Is Working
  • Avaya Outsourcing and Out-tasking Services
  • Alcatel-Lucent Managed Services Outsourcing
  • CSC Cloud Strategies to Speed Time to Market for Service Providers and Enterprises
  • Ericsson Global Services
  • Globecomm Business Outsourcing
  • HP Service Outsourcing Consulting, Implementation and Management
  • Cloud Playbook from ACG Research