ACG Research
We focus on the Why before the What

Friday, September 30, 2011

The Demand Drivers and Economics of 100G Ports

A broad spectrum of routing, switching and transport vendors is now adding 100 Gbps ports to their product lines. These high-speed ports are arriving just in time to meet the capacity requirements of large service providers’ core networks. In addition, 100G technology will help drive down both CapEx and OpEx, which will help service providers control their costs in an environment where revenue growth is not keeping up with traffic growth.

IP network traffic is growing in a range of 35% to 85% per year across the world, and it is growing in all market segments: residential, mobile and enterprise. Residential service usage will be the biggest contributor to overall IP traffic growth, driven by the widespread acceptance of broadband service and the rapid adoption of Over-The-Top (OTT) video content. OTT video has a major impact on network traffic for three reasons. First, video content requires much more bandwidth than other media; for example, traditional voice telephony requires 64 Kbps while Verizon FiOS HDTV service requires 18 Mbps. Second, OTT video, as well as Video on Demand, is a unicast service: each viewer receives a unique flow of video content. In contrast, broadcast TV is multicast; all viewers are connected to the same flow of video content. This has an overwhelming impact on network traffic. Video is expected to comprise over 90% of total traffic within the next three years.
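To put the unicast-versus-multicast point in concrete terms, here is a minimal back-of-the-envelope sketch. The 18 Mbps per-stream rate is the FiOS HDTV figure cited above; the viewer and channel counts are illustrative assumptions, not measured data.

```python
# Rough comparison of unicast (OTT/VoD) vs. multicast (broadcast) video load.
# Assumptions: 18 Mbps per HDTV stream (FiOS figure cited above),
# 100,000 concurrent viewers and 200 broadcast channels (both illustrative).

STREAM_MBPS = 18
VIEWERS = 100_000
CHANNELS = 200

# Unicast: the network carries one copy of the stream per viewer.
unicast_mbps = VIEWERS * STREAM_MBPS

# Multicast: the network carries one copy per channel, regardless of viewers.
multicast_mbps = CHANNELS * STREAM_MBPS

print(f"Unicast load:   {unicast_mbps / 1e6:.1f} Tbps")   # 1.8 Tbps
print(f"Multicast load: {multicast_mbps / 1e3:.1f} Gbps") # 3.6 Gbps
print(f"Ratio: {unicast_mbps / multicast_mbps:.0f}:1")    # 500:1
```

Under these assumptions the unicast model carries roughly 500 times more traffic than broadcast delivery of the same content, which is why OTT and VoD growth dominates core capacity planning.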

Read the entire article in FierceTelecom's e-book.


Michael Kennedy
mkennedy@acgresearch.net
www.acgresearch.net

OpenFlow: Software-Defined Network Eases Cloud Management for Providers

Traditional routing and switching platforms haven't met cloud providers' requirements for building and managing cloud services -- a reality that equipment vendors were once unwilling to accept. What cloud providers want is an application programming interface (API) for the network layer so they can control the flow of their specific applications across their infrastructures. We now see this capability emerging with the software-defined network and OpenFlow specification.
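As a rough illustration of what such a network-layer API looks like, the sketch below expresses an OpenFlow-1.0-style flow rule as a plain data structure. The match and action field names follow the OpenFlow 1.0 model, but the dictionary layout and the push_to_controller helper are hypothetical stand-ins rather than any specific controller's API.

```python
# Illustrative sketch: programming per-application forwarding with an
# OpenFlow-style rule. Match/action field names follow OpenFlow 1.0;
# the dict layout and push_to_controller() are hypothetical, not a real API.

flow_rule = {
    "match": {
        "in_port": 1,
        "dl_type": 0x0800,        # IPv4
        "nw_proto": 6,            # TCP
        "tp_dst": 80,             # the application's service port
        "nw_dst": "10.0.1.0/24",  # the application's server subnet
    },
    "actions": [{"type": "OUTPUT", "port": 2}],
    "priority": 100,
    "idle_timeout": 30,           # remove the rule after 30 idle seconds
}

def push_to_controller(rule):
    """Hypothetical helper: a real OpenFlow controller would encode this
    as an OFPT_FLOW_MOD message and install it on the switch."""
    print("installing flow:", rule)

push_to_controller(flow_rule)
```

The appeal for cloud providers is that rules like this can be generated per tenant or per application by software, instead of being configured box by box.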

Read more...



Marshall Bartoszek
mbartoszek@acgresearch.net
www.acgresearch.net

Amazon Lights a Fire and Their Competitors Are Hopping

I like what Amazon is doing with their new Kindle Fire. Everyone likes new hardware and last year’s Kindle is so “yesterday.” There was a very interesting quote from Jeff Bezos that bears further exploration. He was talking about one of the reasons other tablet manufacturers have not succeeded in the marketplace, and he thinks it is “because they built tablets instead of services.” Think about that for a moment. Why have Google and Facebook exploded onto the high-tech scene? It’s about content, or in Amazon’s case, serving the content appetite of the masses. Remember the net neutrality debates? The service providers thought the content providers were getting a free ride on their networks. The last time I checked, I pay for my broadband at home and I know the content providers pay for their “pipes.” Who is getting what for free?

Traditional service providers are sitting on incredible assets. They have data centers; they have “pipes”; they sell managed services; they have call centers, billing systems and skilled employees. They invented data centers, formerly called central offices or COs. So what gives? Why can’t the service providers compete with Amazon or Google? They have the potential to be content kings. I say it’s because of the inertia within their operations. The leadership recognizes it and understands that it is better to slowly turn the crank and not make any waves. The leadership has the will, but it lacks support from the rank and file because of the companies’ sheer size. I remember when SBC heralded the news that it was going to introduce a video on demand service. After years of effort and supposedly billions of dollars of investment, can I download a video from AT&T? Sure, five years after I could from YouTube. There is very little internal creativity within the service provider market. They move too slowly, and they rarely innovate anything. At one time service providers were the true communications innovators. What happened? Today, if they see an emerging trend, they buy into a market. So what you get is a large, disparate organization with very little synergy between its parts.

If just one of the service providers could think outside its box for a moment, like Amazon, it could step out of the commodity business of providing wired and wireless “pipes” and move into content services. Think about some of the innovations Amazon has made. They wanted to make buying over the Internet efficient, so they built a better mousetrap and patented a shopping cart process. Then, due to their success in selling stuff online, they had to build massive distribution centers and data centers to support their growth. Lo and behold, Amazon became one of the most influential innovators in cloud computing. No one thought people would want to read a book or the newspaper on a Kindle. I see people using them every time I go through an airport. Millions have been sold. Now Amazon has completely rethought the browser and introduced Silk, an innovation that has the potential to solve some very practical issues with mobile computing. I love it.

In the movie “The Graduate,” a character gives Dustin Hoffman one word for the future: “plastics.” Today, the word for the future is “content.”



Marshall Bartoszek
mbartoszek@acgresearch.net
www.acgresearch.net

Friday, September 23, 2011

Meshed Optical Topologies

As network architectures become more agile and dynamic, they enable faster provisioning and service creation. However, migration issues are increasingly apparent as operators move from static and point-to-point networks to transparent mesh architectures that deploy ROADMs, optical cross-connects and dynamic control planes.

Operators are discussing mesh topologies and dynamic networks but are overlooking important technical and physical issues. Power transients and chromatic dispersion effects are far more prominent at higher bit rates and in mesh topologies. Specifically, standard mitigation is simply not sufficient or sophisticated enough to handle the increased rollout of high-capacity links at 40G and 100G in transparent mesh networks.
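To give a feel for why higher bit rates are so much less forgiving, here is a rule-of-thumb sketch: for a given direct-detection modulation format, uncompensated chromatic dispersion tolerance shrinks roughly with the square of the bit rate. The 10G reference tolerance and the fiber dispersion value are assumed ballpark figures, not measurements of any particular system.

```python
# Rule-of-thumb sketch: uncompensated chromatic dispersion tolerance falls
# off roughly as 1/(bit rate)^2 for a given modulation format.
# Reference tolerance and fiber dispersion below are assumed ballpark values.

REF_RATE_GBPS = 10.0
REF_TOLERANCE_PS_PER_NM = 1000.0   # assumed ~1000 ps/nm for 10G NRZ
FIBER_DISPERSION = 17.0            # ps/(nm*km), typical standard SMF

for rate_gbps in (10.0, 40.0, 100.0):
    tolerance = REF_TOLERANCE_PS_PER_NM * (REF_RATE_GBPS / rate_gbps) ** 2
    reach_km = tolerance / FIBER_DISPERSION
    print(f"{rate_gbps:5.0f}G: ~{tolerance:6.1f} ps/nm tolerance, "
          f"~{reach_km:5.1f} km uncompensated reach")
```

Coherent 40G and 100G line systems compensate much of this dispersion electronically, but the scaling shows why mitigation strategies designed around 10G point-to-point links do not simply carry over to high-rate channels crossing a transparent mesh.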

Want to read more? Click here to download the PDF.



egriliches@acgresearch.net
www.acgresearch.net

Tuesday, September 20, 2011

Practical Steps to Jump Starting Cloud-Based Services

I recently attended MSPWorld in Austin, Texas, and participated on a panel that discussed how to jump start cloud-based services. The members of my panel, Rob Bissett, VP Product Management, 6fusion; Brian Hierholzer, President and CEO, Artisan Infrastructure; and Bud Walder, Enterprise Marketing Director, Dialogic Corporation, answered four good questions related to cloud services.

Many MSPs were born in a time of break/fix. In the last 10 years, many MSPs moved into managed services to increase customer “stickiness” and enhance margins. Today, the cloud offers a new opportunity/challenge for the MSPs and is going to be very disruptive to the traditional IT market. How do you see the MSPs evolving in the next few years?

The consensus from members of the panel was that the cloud is just the next generation of managed services. If the VAR/SI partners in the audience have been providing managed services, the cloud represents an evolution of those services. The MSPs can “roll their own” cloud offering, but there are complexities in the software infrastructure. Alternatively, the MSPs can take advantage of cloud providers that allow private labeling of their infrastructure and stick to the applications they know based on their current expertise. Although the margins may be a bit lower, the MSPs do not have to manage the infrastructure, which frees them up to focus on the requirements of their customers.

With the emergence of Microsoft’s “cloud” going direct to the SMB market, and with Google and the major telcos also serving the SMB market, how can MSPs build a brand identity in the cloud market?

The advantage the MSPs have is localized knowledge and long-term customer relationships. In addition, the offerings from the large cloud providers will be generic in nature; this provides an opportunity for the local MSP to customize its offering based on customers' requirements. There was also consensus that Microsoft will have to eventually provide a cloud channel program.

What does the panel recommend an MSP consider when deciding whether to build a cloud offering for the SMB market?

The key points an MSP should consider are: 1) its knowledge of the potential service offering; 2) its knowledge of what the market is willing to pay for a particular offering; 3) whether the offering is a force multiplier for the “on the bench” knowledge already within the MSP; 4) a clear understanding of the SLAs it will be offering and the potential impact to its business if an SLA is missed; and 5) an application road map covering what the first offering will be and what subsequent offerings to its customer base will follow.

How can cloud providers overcome concerns about security and privacy when talking to customers about moving to the cloud?

MSPs are in an excellent position to “demystify” the cloud for their customers. With localized market knowledge and trusted adviser status with their clients, the MSPs have access to specific knowledge of various vertical markets. Through a collaborative sales approach, the MSPs can work through specific security and compliance issues with and for their clients. This is a big advantage for SMBs considering cloud-based applications: they have a true partner to qualify their opportunity/risk profile as they move to the cloud.



Marshall Bartoszek
mbartoszek@acgresearch.net
www.acgresearch.net


Monday, September 19, 2011

Targets, Towers and Terrorists

I was recently in New York during the week of the 10th anniversary of 9/11. I drove by Ground Zero and realized that what they are doing there is illustrative of what has to happen within IT. Where the two towers once stood, they are building a single, taller, larger, stronger tower called the Freedom Tower.

Not so long ago enterprises’ data center and security teams didn’t even know each other and were independent silos of responsibility, analytical information and processes. The modus operandi was “never the twain shall meet”! It wasn’t an overnight event, but as businesses became more information centric and less system centric, best practices required the dissolution of those two silos.

Information has to have two attributes: availability and security. Information that is secure but not available is worthless. Information that is available but not secure is suspect. IT had to deliver both. The need for the data center/security merger was probably best demonstrated in 2005 when Symantec (then a pure-play security vendor) merged with Veritas, which was all about the data center. Initially the industries on both sides of that aisle scoffed. Since then, EMC has bought RSA, Intel has bought McAfee and IBM has been buying security technologies by the handful.

Six years later, it is simply a given that risk definition and risk tolerance have to include consideration for both. The differences between today and 2005, however, are significant. Today the craftsmanship of the malware writers is much better; therefore, security in one form or another must be a part of every IT decision. We are also dealing with an economy that has pressed IT to squeeze as much efficiency as possible out of the existing infrastructure. With limited IT staff and funding, efficiency in the forms of automation, outsourcing and the redefinition of risk tolerance is now part of the equation.

Note that at no time have I used the words cloud computing, virtualization or mobility. Those three business functions are the best contemporary indicators of why the lines on an IT organization’s chart must be dotted, erased or blurred to a faint vestige of what they once were. In their place there needs to be a single, stronger, efficient, effective and secure organization that can resist and repel the attacks your company may be facing.


Neils Johnson
Security@acgresearch.net
www.acgresearch.net

Friday, September 16, 2011

An Open Letter to Research in Motion

I was typing away about mobile security when I kept coming back to my Blackberry, a weathered Bold 9000. It's no longer connected to a Blackberry Enterprise Server (BES), most of the plastic chrome finish is worn off, and I never did find a decent RSS reader. It is actually rusted through in parts, the camera made every picture look like it was of the Loch Ness monster, the volume buttons don't work, and the web browser makes me miss using the old text-only Lynx browser. By Velveteen Rabbit standards, this phone is about as real as anything could ever hope to be!

This phone managed my entire life, two jobs, a small business, and a long-distance relationship. Not to mention bills, bank accounts, passwords, bookmarks, schedule, social networking, recipes, music, family, and the GPS helped me finally find my way out of that damned paper bag. Still, I need a new phone.

My wife has an Apple iPhone 4, my friends all have iPhones or obscenely powerful Android devices, and I must confess I've coveted other phones. I love the Nokia N8's camera, but Symbian? Grappling with the menus is a full-time hobby. As for the iPhone, I know that Apple has made great strides, but I'm a security professional. I can't, I just can't.

If Cisco offered an Ice Cream Sandwich-powered mobile phone version of its Cius tablet, with all the included connectivity, collaboration and security goodies, and with Three Laws of Mobility and all their enterprise-focused Android security extensions having been acquired by Google via Motorola, I would be very tempted to jump ship.

I know what you're thinking: Blackberry Messenger (BBM) and the keyboard; thousands of revolutionaries and rioters can't be wrong. (Have you ever tried to type on a touch screen while driving on a desert road, being shot at?) I admit those two things are great, but no.

1. Over a decade ago, I was handed a 1.44 MB floppy disk and told, “You have to check this out.” What I saw amazed me: a full operating system (kernel, device drivers, middleware and GUI), a web browser, a visual text editor, even a web server, all in half the size of a typical MP3. I am speaking of QNX, the new Blackberry OS, an OS that held my fascination until I realized I couldn't run any of the applications on it that I needed.

It's ironic that RIM now finds itself in the exact same predicament: rumors of delays and one incremental update (with no legacy support or upgrade path) after another, as they fall further behind because they can't smoothly port their native e-mail application to QNX.

2. Aside from my excitement over the potential of QNX, I love the idea of Blackberry Balance. The ability to separate personal and professional objects on the device is fantastic. It's about time somebody simply accepted the fact that mobile devices are, by their very nature, dual-use and that this is a good thing!

Unfortunately, Balance isn't very good at what it aims to do. The more time I spend wondering about what is in that i-luv-u.jar file I found, the less productive I am. I know it might be malware, but come on, phones come and go; love is eternal. Until I can use my phone personally any way I want without regard to impacting the business uses/resources, it's not truly living up to its obligations as a dual-use device.

How would I fix all these problems? By looking to the cloud, X-as-a-Service on the Internet and up into the literal sky where the planes fly, and by virtualization. I know, normally my response to virtualization as a solution is “now you have two problems.” But I really think this is a rare instance where there is value beyond the buzz, specifically, a separation kernel that allows deaggregation within a single device, a single chip even. Make no mistake, I am in no way implying that virtualization is a silver bullet. There are countless problems it cannot address. However, I believe this could address RIM's immediate and specific problems beautifully. As a general example, create some compartments (a rough sketch of the policy follows the list):

Personal: For all my personal resources, friends' and family's contact info, personal social networking, private media files, and all the apps I want, even if they might contain malware and have been blacklisted by my employer.

Professional: This secure container would be managed by the BES as normal and would contain all my professional data and apps. It would not be possible for me to share this with my personal container, nor would it be possible for my personal container to share my personal viral friends with my coworkers.

Legacy: An execution environment for legacy apps such as the native e-mail application and any custom Blackberry OS apps my employer might have created. This container could read and write to either the personal or professional environment that invoked it.

Versatility: A native execution environment for Android, Symbian, Windows Phone, Debian, etc., to open up a whole world of apps, which again could be tied to the personal or professional containers respectively.
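A minimal sketch of the sharing rules described above, expressed as a simple policy check. This is an illustration of the idea only, not RIM's or anyone else's implementation; the compartment names and the check_transfer helper are invented here.

```python
# Illustrative policy sketch for the four compartments described above.
# Not RIM's design: the names and check_transfer() are invented to make
# the sharing rules concrete.

# Compartments that personal and professional data may flow into.
ALLOWED = {
    "personal":     {"personal", "legacy", "versatility"},
    "professional": {"professional", "legacy", "versatility"},
}

def check_transfer(src, dst, invoked_by=None):
    """Return True if data may flow from src to dst.

    Personal and professional never exchange data with each other.
    Legacy and versatility act on behalf of whichever compartment
    invoked them, so they may only hand data back to that invoker.
    """
    if src in ("legacy", "versatility"):
        return dst in (src, invoked_by)
    return dst in ALLOWED[src]

assert not check_transfer("personal", "professional")    # never allowed
assert check_transfer("professional", "legacy")           # BES-managed apps
assert check_transfer("legacy", "professional", invoked_by="professional")
assert not check_transfer("legacy", "personal", invoked_by="professional")
```

A separation kernel would enforce boundaries like these in hardware-assisted isolation rather than in application code, but the policy itself is this simple.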

You may think that I'm a dreamer, but this technique is already being used on “ObamaBerry” style Android devices, and this would be an outstanding opportunity for RIM to demonstrate that security can and, in fact, must equate to usability.

Just think, a sexy black device, stainless steel, and lightning fast QNX environment with the ability to run applications from just about anywhere without worrying about repackaging or legacy support. A device that truly understands and isolates based on the real world requirements of dual-use mobility, backed with the solid security reputation and enterprise reception of Blackberry.

Put a halfway decent camera on it and I'd buy it. In fact, slap on a power-on pass code and RIM would have essentially resolved every enterprise concern I've ever heard about mobile devices.

Comments, contact security@acgresearch.net.

Thursday, September 15, 2011

Infinera Introduces the DTN-X: Innovation from the Ground Up

At 5T, Infinera's DTN-X is the largest integrated WDM/OTN product today; it is upgradable to 10T per bay and will include a 100 Tb/s system in the future. In various configurations the DTN-X can support up to 24T on a single fiber.

With 150 patents backing it up, Infinera is introducing the DTN-X with a "clean slate design." The DTN-X combines 500G coherent Photonic Integrated Circuit (PIC) technology with large-scale OTN switching and grooming. The backplane supports 1 Tb/s per slot and is designed to support a multibay approach to larger configurations. This is the largest OTN/WDM switched transport product announced to date.

Eventually, the DTN-X will support MPLS with LSR functionality to enable statistical multiplexing. Perhaps the most distinctive quality of this new product is its uncompromising scale: it can support up to 5T of WDM line-side 500G super-channels or 5T of client-side services, all with nonblocking OTN switching and the ability to “dial up or down” without tradeoffs.

Click here to download the PDF of Eve Griliches' Market Impact.



egriliches@acgresearch.net
www.acgresearch.net

Thursday, September 8, 2011

A Business Case for Scaling the Next-Generation Network with the Cisco ASR 9000 System: Now with Converged Services

Cisco's nV technology can be deployed across the edge, aggregation and access networks to support residential, enterprise and mobile services.

Extending nV from the edge to access reduces total cost of ownership compared to a leading competitor's converged services solution. Integrated traffic analytics offer traffic generation and reporting capabilities without using an external platform, which saves service providers thousands of dollars per service turn-up. To support the massive growth in mobile devices, nV technology is now topology agnostic; its latest addition provides support for both ring and hub-and-spoke topologies. The nV architecture allows for effective scaling of converged services while maintaining operational simplicity.

ACG Research has developed a business case analysis for Cisco's nV technology when it is deployed across the edge, aggregation and access networks to support residential, enterprise and mobile services. The analysis captures the market's evolutionary change in the access network from E1/T1 to Ethernet-based technology. The model addresses this change for backhaul of 2G, 3G and 4G radio technologies and for enterprise Internet access and L3 VPN services.

To download the PDF of the business case (September), click here.

To download the PDF of the business case for scaling the next-generation network with the Cisco ASR 9000 System (June), click here.

Wednesday, September 7, 2011

Could the Linux kernel incident happen to your company?

In late August kernel.org, the primary site hosting the Linux kernel source, announced that it had been compromised. At the time of writing, the situation is as follows:

Known:
  • The attacker gained access to a kernel.org user account via a developer’s compromised system.
  • An escalation in rights from this user account to a root account has occurred.
  • The vector may have been incidentally closed in the recent release candidate kernel (3.1-RC2).
Unknown:
  • The exact attack vector: how the attacker escalated their rights from the developer's user account to a system administration account.
  • How (and if) changes in the release candidate kernel (3.1-RC2) have resolved the vulnerability.
With so much still unknown, a deeper analysis would be futile; however, this event has raised three points in my mind regarding security:

1. Unknown vulnerabilities
A recent article by Neils Johnson, security analyst, ACG, stated that there were 6253 potentially exploitable vulnerabilities (PEVs) discovered in 2010; that's 17 a day.

With such an overwhelming number of PEVs presenting themselves, I long ago stopped paying attention. It's even worse than you think; these vulnerabilities are never discovered the instant they are created. They lie dormant in the software for extended periods of time. It's not unheard of for newly discovered flaws to have been introduced nearly a decade earlier. You want some fear, uncertainty and doubt? I bet that the system you are using right now has dozens if not hundreds of vulnerabilities that have not been published and may be altogether unknown at this time. What's more, there are likely still other flaws that are not currently exploitable, but some future change to your system will make them so.

You can't anticipate, much less fix, them all. Don't even bother trying. Try not to think about it at all; it'll only depress, frustrate and anger you.

What you can do, however, is think about software differently. Don't count its known security flaws; just accept the inevitability of failure. Set security concerns to the side for a moment and ask: Does this tool best support the core business? Do you like the software's features? Does it make your staff's lives easier? Use productivity as the decider in software selection, not a fear of PEVs.

Obviously we can't keep security off the table, as much as we might like to. So, now, starting with the assumption that the application will fail, how much can be leveraged from that? Can you live with it? Can you minimize and isolate it? Can you detect it? At the end of the day, this is the best you can really hope for. So, why bother with all the song and dance?

2. Ideal considerations
Is your system configured or maintained as well as kernel.org is? They literally authored Linux from the ground up. You’d be hard-pressed to find a more expert, not to mention passionate, staff. Such a team would typically only be found in the wildest of CISO dreams, yet here we are discussing their failure.

Sometimes things just break. Does it matter? I am always amazed by the number of companies that have not undertaken even the most rudimentary risk assessment, much less, what I’d classify as a good one. You need to quantify your situation. In the face of the unknown, especially when feeling violated or hopeless, it's natural to assume the worst, get overwhelmed, and do nothing.

Focus on what you can fix and start with small changes that have the greatest impact. Do you have a sane password policy? Who controls what can run on the desktops? Has anyone from your security staff actually spoken to people involved with core business about what they do and how they do it?

3. Unforeseen consequences of modifications
The newest release candidate kernel (3.1-RC2) appears to be immune to this attack without an explicit security patch; this is particularly interesting to me. They state that they “don't know if this is intentional or a side effect of another bug fix or change.”

The linear implication is that the software is naturally improving in overall quality, and a byproduct of this is the elimination of security-relevant flaws. However, unless this flaw was present in the original Linux release 20 years ago, it was added at a later point as a side effect of a prior fix or upgrade.

Smart organizations know that every attempt to fix a known problem may surreptitiously create entirely new problems (as well as fix other unknown ones). Rapid or automatic patching is, unfortunately, somewhat of a crapshoot. Instead, seek to structure your environment in such a way that flaws are survivable, contained and isolated as they arise, and use that time to evaluate the patch. Your own modifications may also break things, but at least you know exactly what was changed.

Essentially, everything you have is vulnerable. A perfect effort isn't going to prevent a compromise, and any response may make you worse off. This may seem like doom and gloom if not outright hopelessness. Advice: don't panic! Ignore the kernel.org incident. Also, ignore the next time you hear about attackers gaining access to Microsoft, Oracle, Google or your bank.

It doesn't impact you. To drop your OS/bank/cloud provider would be misguided and unproductive. Your best bet is to:
  • Understand your infrastructure; conduct a risk assessment.
  • Know your potential problems; accept the inevitability of failure.
  • Focus on survivability; employ defense-in-depth and isolation/minimization efforts.
  • Maintain an environment of continual, incremental improvement.
  • Learn from your own failures and from industry best practices.
  • Employ a poised touch with regard to change.
  • Be proactive!
If any of the previous points is unclear or unfamiliar, a little bit of panic might be appropriate; bring in some outside help.

Comments, contact security@acgresearch.net.